
[2.14] SEP-1686 tasks #2378

Merged
jlowin merged 7 commits into main from sep-1686 on Dec 5, 2025

Conversation

@chrisguidry
Collaborator

@chrisguidry chrisguidry commented Nov 5, 2025

This PR represents the feature branch for bringing SEP-1686 to fastmcp. The work will proceed in phases, and will land against this branch. When the low-level SDK and fastmcp are both ready to go, this will be the final PR we merge to main.

Phases:

  • Phase 1: FastAPI/Docket-style dependency injection for all components
  • Phase 2: Making a Docket available to all components for background processing
  • Phase 3: Implement the SEP-1686 protocol and backgrounding of components
  • Phase 4: Add command-line support for running additional/dedicated fastmcp workers
  • Phase 5: Promote the feature out of experimental status ahead of the release

@marvin-context-protocol marvin-context-protocol bot added labels on Nov 5, 2025: DON'T MERGE (PR is not ready for merging; used by authors to prevent premature merging), feature (major new functionality; reserved for 2-4 significant PRs per release, not for issues), and server (related to FastMCP server implementation or server-side functionality).
@marvin-context-protocol
Contributor

Test Failure Analysis

Summary: Integration test test_call_tool_list_commits hit a rate limit (HTTP 429) from the GitHub Copilot API during teardown, causing an unhandled exception during asyncio shutdown and a pytest teardown error.

Root Cause: The failure is caused by two related issues:

  1. Rate Limiting During Cleanup: When the test timed out on its first attempt (>15s), the client's disconnect logic attempted to properly clean up the HTTP session. During this cleanup, the StreamableHttpTransport made a final HTTP request to https://api.githubcopilot.com/mcp/ which was rate-limited (HTTP 429). This exception occurred during the asyncio shutdown phase, causing an unhandled exception.

  2. Pytest Stash KeyError: The timeout interrupted pytest's normal teardown flow, causing a KeyError when accessing tmppath_result_key from pytest's stash during fixture cleanup. This is a secondary symptom of the abrupt timeout.

The test is marked with @pytest.mark.flaky(retries=2, delay=1) and did pass on the second attempt, indicating this is an intermittent external API issue rather than a code bug.

Suggested Solution:

The failure is primarily caused by external rate limiting from GitHub's API, not a bug in the PR changes. However, there are a few options to make the tests more robust:

  1. Suppress rate limit errors during disconnect - Modify Client._disconnect() to catch and log HTTP 429 errors during cleanup instead of propagating them:
# In src/fastmcp/client/client.py, around line 564
# (assumes httpx is imported and a module-level `logger` is in scope)
try:
    await self._session_state.session_task
except httpx.HTTPStatusError as e:
    if e.response.status_code == 429:
        logger.warning(f"Rate limited during disconnect: {e}")
    else:
        raise
  2. Extend the rate limit handler - The existing conftest.py handles BrokenResourceError during the call phase, but doesn't handle HTTPStatusError during teardown. Consider adding a teardown hook to catch and suppress 429 errors.

  3. Increase test timeout - The 15s timeout may be too aggressive for external API calls. Consider increasing it for this specific test class.

  4. No action needed - Since the test passed on retry and this is an external API rate limit issue (not caused by the PR changes), this may be acceptable as-is given the @pytest.mark.flaky decorator.

Detailed Analysis

Error Flow

  1. Test test_call_tool_list_commits starts execution
  2. External GitHub Copilot API is slow or rate-limiting requests
  3. Test hits 15-second timeout and is killed by pytest-timeout
  4. Test's async with streamable_http_client context exits, triggering Client.__aexit__()
  5. Client._disconnect() waits for session_task to complete
  6. _session_runner() exits the AsyncExitStack, unwinding _context_manager()
  7. transport.connect_session() context exits, making a final HTTP request
  8. This request gets HTTP 429 response from https://api.githubcopilot.com/mcp/
  9. Exception bubbles up during asyncio shutdown as unhandled
  10. Pytest teardown encounters KeyError because the timeout interrupted normal cleanup

Relevant Log Excerpt

httpx.HTTPStatusError: Client error '429 Too Many Requests' for url 'https://api.githubcopilot.com/mcp/'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429
ERROR   asyncio:base_events.py:1758 unhandled exception during asyncio.run() shutdown
task: <Task finished name='Task-7' coro=<TestGithubMCPRemote.test_call_tool_list_commits() done...

PR Changes Context

The PR implements SEP-1686 (tasks protocol) with significant changes to the client code (client.py +888/-127 lines). The new disconnect logic uses reference counting and proper session cleanup, which is working correctly. The rate limit error is not caused by these changes—it's an external API constraint.

Related Files
  • tests/integration_tests/test_github_mcp_remote.py:100 - The failing test (not modified in this PR)
  • src/fastmcp/client/client.py:529 - _disconnect() method with new cleanup logic
  • src/fastmcp/client/client.py:567 - _session_runner() that manages session lifecycle
  • src/fastmcp/client/client.py:376 - _context_manager() that wraps transport connection
  • tests/integration_tests/conftest.py:8 - Existing rate limit handling for call phase (handles BrokenResourceError, not HTTPStatusError)

@marvin-context-protocol
Contributor

Test Failure Analysis

Summary: Windows-specific file locking error during teardown of TestResponseCachingMiddlewareIntegration tests when using SQLite cache.db files.

Root Cause: The caching_server fixture creates a DiskStore with a SQLite database in a temporary directory. On Windows, the SQLite database file (cache.db) is not being properly closed before pytest attempts to clean up the temporary directory. Windows file locking prevents deletion of open files, causing a PermissionError during teardown.

The issue occurs in the fixture at tests/server/middleware/test_caching.py:283-306:

@pytest.fixture(params=["memory", "disk"])
async def caching_server(
    self,
    tracking_calculator: TrackingCalculator,
    request: pytest.FixtureRequest,
):
    mcp = FastMCP("CachingTestServer")

    with tempfile.TemporaryDirectory() as temp_dir:
        disk_store = DiskStore(directory=temp_dir)
        response_caching_middleware = ResponseCachingMiddleware(
            cache_storage=disk_store if request.param == "disk" else MemoryStore(),
        )
        # ... setup code ...
        yield mcp
        await disk_store.close()  # Line 306

The problem: the tempfile.TemporaryDirectory() context manager exits BEFORE disk_store.close() completes, so the temp directory cleanup tries to delete the still-open database file.

Suggested Solution:

Reorder the context management to ensure disk_store.close() is called before the temporary directory cleanup:

@pytest.fixture(params=["memory", "disk"])
async def caching_server(
    self,
    tracking_calculator: TrackingCalculator,
    request: pytest.FixtureRequest,
):
    mcp = FastMCP("CachingTestServer")

    # Create temp dir but don't use context manager yet
    temp_dir = tempfile.mkdtemp() if request.param == "disk" else None
    
    try:
        disk_store = DiskStore(directory=temp_dir) if temp_dir else None
        response_caching_middleware = ResponseCachingMiddleware(
            cache_storage=disk_store if request.param == "disk" else MemoryStore(),
        )

        mcp.add_middleware(middleware=response_caching_middleware)
        tracking_calculator.add_tools(fastmcp=mcp)
        tracking_calculator.add_resources(fastmcp=mcp)
        tracking_calculator.add_prompts(fastmcp=mcp)

        yield mcp

    finally:
        # Close disk store BEFORE cleaning up temp directory
        if disk_store is not None:
            await disk_store.close()
        
        # Now safe to remove temp directory
        if temp_dir is not None:
            import shutil
            shutil.rmtree(temp_dir)
Detailed Analysis

Error Pattern

All 9 failing tests follow the same pattern:

  • Test: any of the 9 TestResponseCachingMiddlewareIntegration tests
  • Error: PermissionError: [WinError 32] when deleting the temporary SQLite cache.db
  • Location: During pytest teardown when temp directory cleanup runs

Affected Tests

All 9 tests in TestResponseCachingMiddlewareIntegration fail with this pattern during teardown.

Why This Happens on Windows

Windows enforces stricter file locking than Unix systems. When SQLite has an open file handle, Windows prevents any other process (including Python's shutil.rmtree()) from deleting the file. Unix systems allow deletion of open files (the file remains until all handles are closed).


Related Files
  • tests/server/middleware/test_caching.py:283 - The problematic fixture
  • External: key_value.aio.stores.disk.DiskStore - SQLite-based cache store from py-key-value-aio library

@marvin-context-protocol
Contributor

Test Failure Analysis

Summary: Windows-specific file locking error during teardown of TestResponseCachingMiddlewareIntegration tests when using SQLite cache.db files.

Root Cause: The caching_server fixture creates a DiskStore with a SQLite database in a temporary directory. On Windows, the SQLite database file is not being properly closed before pytest attempts to clean up the temporary directory. Windows file locking prevents deletion of open files, causing a PermissionError during teardown.

The issue occurs in the fixture at tests/server/middleware/test_caching.py:283-306. The problem is that tempfile.TemporaryDirectory() context manager exits BEFORE disk_store.close() completes, so the temp directory cleanup tries to delete the still-open database file.

Suggested Solution:

Reorder the context management to ensure disk_store.close() is called before the temporary directory cleanup. Replace the with tempfile.TemporaryDirectory() pattern with try/finally to control cleanup order:

  • Create temp dir manually with tempfile.mkdtemp()
  • In finally block, close disk_store first
  • Then remove temp directory with shutil.rmtree()

This ensures SQLite closes its file handles before Windows attempts to delete the directory.
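The cleanup-ordering point above can be demonstrated with a stdlib-only analogue (sqlite3 standing in for the DiskStore; names here are illustrative, not the project's actual fixture):

```python
import shutil
import sqlite3
import tempfile
from pathlib import Path

# Create the temp dir manually so we control the cleanup order ourselves.
temp_dir = tempfile.mkdtemp()
conn = None
try:
    conn = sqlite3.connect(Path(temp_dir) / "cache.db")
    conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
finally:
    # Close the database handle BEFORE removing the directory; on Windows
    # an open handle would make rmtree fail with PermissionError (WinError 32).
    if conn is not None:
        conn.close()
    shutil.rmtree(temp_dir)

assert not Path(temp_dir).exists()
```

Because the close happens inside the same `finally` block, ahead of the directory removal, there are no open handles left when the deletion runs.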

Affected Tests

All 9 tests in TestResponseCachingMiddlewareIntegration fail with the same pattern:

  • test_list_tools[memory]
  • test_call_tool[memory]
  • test_call_tool_very_large_value[memory]
  • test_call_tool_crazy_value[memory]
  • test_list_resources[memory]
  • test_read_resource[memory]
  • test_list_prompts[memory]
  • test_get_prompts[memory]
  • test_statistics[memory]

Why This Happens on Windows

Windows enforces stricter file locking than Unix systems. When SQLite has an open file handle, Windows prevents any other process from deleting the file. Unix systems allow deletion of open files (the file remains until all handles are closed).

Related Files

  • tests/server/middleware/test_caching.py:283 - The problematic fixture
  • External: key_value.aio.stores.disk.DiskStore - SQLite-based cache store from py-key-value-aio library
  • Note: py-key-value-aio dependency was upgraded from >=0.2.8,<0.3.0 to >=0.3.0 in this PR

@marvin-context-protocol
Contributor

Test Failure Analysis

Summary: Windows-specific file locking error persists in TestResponseCachingMiddlewareIntegration after switching from pytest-retry to pytest-rerunfailures.

Root Cause: The caching_server fixture at tests/server/middleware/test_caching.py:283-306 uses a tempfile.TemporaryDirectory() context manager that exits BEFORE await disk_store.close() executes. On Windows, this causes PermissionError: [WinError 32] because SQLite still has an open file handle when pytest tries to delete the temporary directory.

The recent commit (c3aac59) replaced pytest-retry with pytest-rerunfailures, but this doesn't address the underlying fixture cleanup order issue. The problem exists in the fixture design itself.

Suggested Solution:

Replace the with tempfile.TemporaryDirectory() pattern with manual cleanup using try/finally to ensure disk_store.close() runs before directory deletion:

@pytest.fixture(params=["memory", "disk"])
async def caching_server(
    self,
    tracking_calculator: TrackingCalculator,
    request: pytest.FixtureRequest,
):
    """Create a FastMCP server for caching tests."""
    mcp = FastMCP("CachingTestServer")
    
    # Create temp dir manually (only for disk param)
    temp_dir = tempfile.mkdtemp() if request.param == "disk" else None
    disk_store = None
    
    try:
        disk_store = DiskStore(directory=temp_dir) if temp_dir else None
        response_caching_middleware = ResponseCachingMiddleware(
            cache_storage=disk_store if request.param == "disk" else MemoryStore(),
        )

        mcp.add_middleware(middleware=response_caching_middleware)
        tracking_calculator.add_tools(fastmcp=mcp)
        tracking_calculator.add_resources(fastmcp=mcp)
        tracking_calculator.add_prompts(fastmcp=mcp)

        yield mcp

    finally:
        # CRITICAL: Close disk store BEFORE removing temp directory
        if disk_store is not None:
            await disk_store.close()
        
        # Now safe to remove temp directory (no open file handles)
        if temp_dir is not None:
            import shutil
            shutil.rmtree(temp_dir)

Why This Fixes It:

  1. The try/finally block guarantees cleanup order: disk_store.close() always runs before shutil.rmtree()
  2. This closes SQLite's file handle before attempting directory deletion
  3. Windows file locking is satisfied because the database file is properly closed

Alternative Solution (if you prefer to keep the context manager pattern):

Use contextlib.AsyncExitStack to control cleanup order:

from contextlib import AsyncExitStack

@pytest.fixture(params=["memory", "disk"])
async def caching_server(
    self,
    tracking_calculator: TrackingCalculator,
    request: pytest.FixtureRequest,
):
    """Create a FastMCP server for caching tests."""
    mcp = FastMCP("CachingTestServer")
    
    async with AsyncExitStack() as stack:
        if request.param == "disk":
            temp_dir_ctx = tempfile.TemporaryDirectory()
            temp_dir = stack.enter_context(temp_dir_ctx)
            disk_store = DiskStore(directory=temp_dir)
            # Register close callback BEFORE temp dir cleanup
            stack.push_async_callback(disk_store.close)
        else:
            disk_store = None
            
        response_caching_middleware = ResponseCachingMiddleware(
            cache_storage=disk_store if request.param == "disk" else MemoryStore(),
        )

        mcp.add_middleware(middleware=response_caching_middleware)
        tracking_calculator.add_tools(fastmcp=mcp)
        tracking_calculator.add_resources(fastmcp=mcp)
        tracking_calculator.add_prompts(fastmcp=mcp)

        yield mcp
Affected Tests (9 failures, all same root cause)

All tests in TestResponseCachingMiddlewareIntegration fail during teardown with PermissionError: [WinError 32]:

  • test_list_tools[memory]
  • test_call_tool[memory]
  • test_call_tool_very_large_value[memory]
  • test_call_tool_crazy_value[memory]
  • test_list_resources[memory]
  • test_read_resource[memory]
  • test_list_prompts[memory]
  • test_get_prompts[memory]
  • test_statistics[memory]

Note: The [memory] parameter failures happen because the fixture still creates a DiskStore regardless of param value (line 293), then conditionally uses it. This means the temp directory and SQLite file are created for ALL parameterizations.

Related Files
  • tests/server/middleware/test_caching.py:283 - The problematic fixture
  • pyproject.toml:73 - Recent change from pytest-retry to pytest-rerunfailures (didn't fix the issue)
  • External dependency: key_value.aio.stores.disk.DiskStore from py-key-value-aio (SQLite-based cache)

Note: This PR also bumped py-key-value-aio from <0.3.0 to >=0.3.0 (pyproject.toml:18). If the newer version has different cleanup behavior, that could also be investigated, but the fixture cleanup order is the primary issue.

@marvin-context-protocol
Contributor

Test Failure Analysis

Summary: Windows-specific file locking error persists in TestResponseCachingMiddlewareIntegration after switching from pytest-retry to pytest-rerunfailures.

Root Cause: The caching_server fixture at tests/server/middleware/test_caching.py:283-306 uses a tempfile.TemporaryDirectory() context manager that exits BEFORE await disk_store.close() executes. On Windows, this causes PermissionError: [WinError 32] because SQLite still has an open file handle when pytest tries to delete the temporary directory.

The recent commit (c3aac59) replaced pytest-retry with pytest-rerunfailures, but this doesn't address the underlying fixture cleanup order issue. The problem exists in the fixture design itself.

Suggested Solution:

Replace the with tempfile.TemporaryDirectory() pattern with manual cleanup using try/finally to ensure disk_store.close() runs before directory deletion:

@pytest.fixture(params=["memory", "disk"])
async def caching_server(
    self,
    tracking_calculator: TrackingCalculator,
    request: pytest.FixtureRequest,
):
    """Create a FastMCP server for caching tests."""
    mcp = FastMCP("CachingTestServer")

    # Create temp dir manually (only for disk param)
    temp_dir = tempfile.mkdtemp() if request.param == "disk" else None
    disk_store = None

    try:
        disk_store = DiskStore(directory=temp_dir) if temp_dir else None
        response_caching_middleware = ResponseCachingMiddleware(
            cache_storage=disk_store if request.param == "disk" else MemoryStore(),
        )

        mcp.add_middleware(middleware=response_caching_middleware)
        tracking_calculator.add_tools(fastmcp=mcp)
        tracking_calculator.add_resources(fastmcp=mcp)
        tracking_calculator.add_prompts(fastmcp=mcp)

        yield mcp

    finally:
        # CRITICAL: Close disk store BEFORE removing temp directory
        if disk_store is not None:
            await disk_store.close()

        # Now safe to remove temp directory (no open file handles)
        if temp_dir is not None:
            import shutil
            shutil.rmtree(temp_dir)

Why This Fixes It:

  1. The try/finally block guarantees cleanup order: disk_store.close() always runs before shutil.rmtree()
  2. This closes SQLite's file handle before attempting directory deletion
  3. Windows file locking is satisfied because the database file is properly closed
Affected Tests (9 failures, all same root cause)

All tests in TestResponseCachingMiddlewareIntegration fail during teardown with PermissionError: [WinError 32]:

  • test_list_tools[memory]
  • test_call_tool[memory]
  • test_call_tool_very_large_value[memory]
  • test_call_tool_crazy_value[memory]
  • test_list_resources[memory]
  • test_read_resource[memory]
  • test_list_prompts[memory]
  • test_get_prompts[memory]
  • test_statistics[memory]

Note: The [memory] parameter failures happen because the fixture still creates a DiskStore regardless of param value (line 293), then conditionally uses it. This means the temp directory and SQLite file are created for ALL parameterizations.

Related Files
  • tests/server/middleware/test_caching.py:283 - The problematic fixture
  • pyproject.toml:73 - Recent change from pytest-retry to pytest-rerunfailures (didn't fix the issue)
  • External dependency: key_value.aio.stores.disk.DiskStore from py-key-value-aio (SQLite-based cache)

Note: This PR also bumped py-key-value-aio from <0.3.0 to >=0.3.0 (pyproject.toml:18). If the newer version has different cleanup behavior, that could also be investigated, but the fixture cleanup order is the primary issue.

@marvin-context-protocol
Contributor

Test Failure Analysis

Summary: Test test_multi_client_lifespan times out after 5 seconds because child MCP server processes are not being terminated when exiting the Client context manager.

Root Cause: The Client.__aexit__() method at src/fastmcp/client/client.py:376 only calls _disconnect() but does NOT call transport.close(). This means that when using the async with client: pattern, the underlying transport's cleanup is never executed, leaving child processes running.

Code Flow:

  1. Client.__aexit__() → calls _disconnect()
  2. _disconnect() → manages reference counting and stops the session task
  3. BUT transport.close() is never called from __aexit__
  4. Child processes spawned by MCPConfigTransport remain running
  5. Test loops indefinitely waiting for processes to terminate

Suggested Solution: Modify Client.__aexit__() to call transport.close() after _disconnect():

# src/fastmcp/client/client.py:376
async def __aexit__(self, exc_type, exc_val, exc_tb):
    await self._disconnect()
    await self.transport.close()  # Add this line

This ensures that context manager cleanup properly terminates child processes, matching the behavior of explicitly calling client.close().
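As a minimal, self-contained illustration of the ordering issue (FakeClient and FakeTransport are invented stand-ins, not the real fastmcp classes):

```python
import asyncio


class FakeTransport:
    """Stand-in for a transport that owns external resources (child processes)."""

    def __init__(self):
        self.closed = False

    async def close(self):
        self.closed = True


class FakeClient:
    def __init__(self, transport):
        self.transport = transport

    async def _disconnect(self, force=False):
        pass  # session bookkeeping only; never touches the transport

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        await self._disconnect()
        await self.transport.close()  # the line the suggested fix adds


transport = FakeTransport()


async def demo():
    async with FakeClient(transport):
        pass


asyncio.run(demo())
assert transport.closed  # without the close() call in __aexit__, this would fail
```

With the `transport.close()` call omitted from `__aexit__`, the transport (and hence any child processes it owns) would outlive the `async with` block.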

Detailed Analysis

Test Details (tests/test_mcp_config.py:295):

  • Test creates MCPConfigTransport with 2 child Python MCP servers
  • Gets PIDs of both server processes
  • Exits context manager
  • Expects processes to be terminated via psutil.NoSuchProcess
  • Times out at 5 seconds waiting for process termination

Current Behavior:

  • __aexit__ only calls _disconnect() which manages session lifecycle
  • transport.close() is only called when explicitly calling client.close()
  • Child processes remain running after context manager exits

Expected Behavior:

  • Context manager exit should clean up all resources
  • Child processes should be terminated
  • Test should pass within timeout

Evidence from PR Diff:
The close() method exists and properly calls both _disconnect(force=True) and transport.close():

async def close(self):
    await self._disconnect(force=True)
    await self.transport.close()

But __aexit__ only calls _disconnect(), missing the transport.close() call.

Related Files
  • src/fastmcp/client/client.py:376 - Client.__aexit__() missing transport.close()
  • src/fastmcp/client/client.py:490 - Client.close() properly calls both disconnect and transport.close()
  • src/fastmcp/client/transports.py:980 - MCPConfigTransport.close() cleans up underlying transports
  • tests/test_mcp_config.py:295 - Failing test test_multi_client_lifespan

@marvin-context-protocol
Contributor

Test Failure Analysis

Summary: Three OAuth authentication tests are failing due to changes in the MCP Python SDK's OAuth 2.1 implementation behavior between versions.

Root Cause: The failures are caused by behavioral changes in the mcp Python SDK (currently version 1.21.0 installed). The SDK now:

  1. Returns both authentication methods in metadata: The OAuth server metadata now includes both client_secret_post AND client_secret_basic in token_endpoint_auth_methods_supported, which is more correct per OAuth 2.1 specification
  2. Changed error response codes: The SDK now returns unauthorized_client instead of invalid_request or invalid_client in certain error scenarios

Suggested Solution:

Update the test assertions to match the new MCP SDK behavior:

1. Fix test_metadata_endpoint in tests/server/test_auth_integration.py:369-371

Change:

assert metadata["token_endpoint_auth_methods_supported"] == [
    "client_secret_post"
]

To:

assert metadata["token_endpoint_auth_methods_supported"] == [
    "client_secret_post",
    "client_secret_basic",
]

2. Fix test_token_validation_error in tests/server/test_auth_integration.py:389

Change:

assert error_response["error"] == "invalid_request"

To:

assert error_response["error"] == "unauthorized_client"

3. Fix test_token_endpoint_invalid_client_error in tests/server/auth/test_oauth_proxy.py:1237

Change:

assert error_data["error"] == "invalid_client", (
    f"Expected 'invalid_client' but got '{error_data.get('error')}'"
)

To:

assert error_data["error"] == "unauthorized_client", (
    f"Expected 'unauthorized_client' but got '{error_data.get('error')}'"
)

Why This Fixes It:

The changes align test expectations with the current MCP SDK behavior:

  1. OAuth 2.1 servers commonly support multiple token endpoint authentication methods. The SDK now correctly advertises both client_secret_post (credentials in request body) and client_secret_basic (credentials in Authorization header).

  2. The error code changes reflect the SDK's internal error handling when client authentication fails. While invalid_client is more specific, unauthorized_client is also a valid OAuth 2.1 error response for authentication failures.

Detailed Analysis

Background

The MCP Python SDK has evolved its OAuth 2.1 implementation:

  • v1.20.0: Implemented RFC 7523 JWT flows
  • v1.21.0: Added OAuth Protected Resource Metadata discovery fallback
  • v1.21.2: Fixed OAuth scope handling for 401 responses
  • Currently installed (v1.21.0): more complete OAuth 2.1 compliance than earlier releases

Test Failure Details

Failure 1 - test_metadata_endpoint:

AssertionError: assert ['client_secr...secret_basic'] == ['client_secret_post']
  Left contains one more item: 'client_secret_basic'
  Full diff:
   [
       'client_secret_post',
  +     'client_secret_basic',
   ]

Failure 2 - test_token_validation_error:

AssertionError: assert 'unauthorized_client' == 'invalid_request'
  - invalid_request
  + unauthorized_client

Failure 3 - test_token_endpoint_invalid_client_error:

AssertionError: Expected 'invalid_client' but got 'unauthorized_client'
assert 'unauthorized_client' == 'invalid_client'
  - invalid_client
  + unauthorized_client

OAuth 2.1 Compliance

Per the OAuth 2.1 specification and RFC 9728:

  • Token endpoint authentication methods should be advertised via token_endpoint_auth_methods_supported
  • Supporting both client_secret_post and client_secret_basic provides flexibility for different client implementations
  • unauthorized_client is a valid error code when "The client is not authorized to use this grant type" or authentication fails
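For reference, the two advertised methods place the same client credentials in different parts of the token request; a small sketch with placeholder values:

```python
import base64

client_id, client_secret = "my-client", "s3cret"  # placeholder credentials

# client_secret_post: credentials travel in the form-encoded request body
post_body = {
    "grant_type": "authorization_code",
    "code": "abc123",
    "client_id": client_id,
    "client_secret": client_secret,
}

# client_secret_basic: credentials travel in the Authorization header,
# base64-encoded as "client_id:client_secret"
basic_header = "Basic " + base64.b64encode(
    f"{client_id}:{client_secret}".encode()
).decode()

assert base64.b64decode(basic_header.split()[1]).decode() == "my-client:s3cret"
```

A server advertising both methods in token_endpoint_auth_methods_supported accepts either shape, which is why the looser metadata assertion is the right fix.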
Related Files
  • tests/server/test_auth_integration.py:350 - test_metadata_endpoint (line 369-371)
  • tests/server/test_auth_integration.py:378 - test_token_validation_error (line 389)
  • tests/server/auth/test_oauth_proxy.py:1190 - test_token_endpoint_invalid_client_error (line 1237)
  • pyproject.toml:10 - MCP SDK dependency: mcp>=1.19.0,<2.0.0,!=1.21.1


@chrisguidry chrisguidry changed the title from [HOLD] SEP-1686 tasks to [2.15] SEP-1686 tasks on Dec 1, 2025
@chrisguidry chrisguidry changed the base branch from main to sdk-auth-updates on December 1, 2025 16:53
@marvin-context-protocol
Contributor

Test Failure Analysis

Summary: Test test_github_oauth_with_mock fails with OAuthTokenError: Token exchange failed (401): {"error":"invalid_client","error_description":"Client secret is required"} due to missing token_endpoint_auth_method configuration in the OAuth client.

Root Cause: The PR uses the MCP Python SDK from main branch (mcp @ git+https://github.com/modelcontextprotocol/python-sdk.git@main) instead of the stable release. The SDK's OAuth implementation on main has changed its default behavior, and the commented-out token_endpoint_auth_method="client_secret_post" at src/fastmcp/client/auth/oauth.py:186 is causing client authentication to fail during token exchange.

When token_endpoint_auth_method is not explicitly set in OAuthClientMetadata, the SDK may not send client credentials properly during the token exchange, resulting in the server rejecting the request with "Client secret is required".

Suggested Solution:

Uncomment line 186 in src/fastmcp/client/auth/oauth.py:

client_metadata = OAuthClientMetadata(
    client_name=client_name,
    redirect_uris=[AnyHttpUrl(redirect_uri)],
    grant_types=["authorization_code", "refresh_token"],
    response_types=["code"],
    token_endpoint_auth_method="client_secret_post",  # Uncomment this line
    scope=scopes_str,
    **(additional_client_metadata or {}),
)

This explicitly configures the OAuth client to send credentials in the POST body during token exchange, which is required by the MCP SDK's current implementation on the main branch.

Alternative Solution: If token_endpoint_auth_method="client_secret_post" should remain commented, verify that the SDK version being used supports the intended authentication method by default, or configure it through additional_client_metadata.

Detailed Analysis

Error Flow

  1. Test test_github_oauth_with_mock creates a Client with HeadlessOAuth
  2. HeadlessOAuth inherits from OAuth, which creates OAuthClientMetadata without token_endpoint_auth_method
  3. During OAuth flow, the authorization succeeds and returns a code
  4. Client attempts to exchange the authorization code for tokens at /token endpoint
  5. Token exchange request doesn't include client credentials properly
  6. Server responds with 401: {"error":"invalid_client","error_description":"Client secret is required"}
  7. Exception propagates through _handle_token_response → async_auth_flow → Client._connect

MCP SDK Changes

The PR switched from stable MCP SDK (mcp>=1.19.0,<2.0.0,!=1.21.1) to main branch (mcp @ git+https://github.com/modelcontextprotocol/python-sdk.git@main). The SDK's OAuth implementation has evolved:

  • Stable SDK: May have defaulted to client_secret_post or handled missing token_endpoint_auth_method gracefully
  • Main branch: Requires explicit configuration of token_endpoint_auth_method for proper client authentication

Relevant Log Excerpt

OAuthTokenError: Token exchange failed (401): {"error":"invalid_client","error_description":"Client secret is required"}

.venv/lib/python3.10/site-packages/mcp/client/auth/oauth2.py:443: OAuthTokenError

RuntimeError: Client failed to connect: Token exchange failed (401): {"error":"invalid_client","error_description":"Client secret is required"}

src/fastmcp/client/client.py:520: RuntimeError

Why Uncommenting Fixes It

The token_endpoint_auth_method="client_secret_post" parameter explicitly tells the OAuth client to include client credentials in the POST request body when exchanging the authorization code for tokens. This matches the server's expectation and the MCP SDK's current behavior.

Related Files
  • src/fastmcp/client/auth/oauth.py:186 - Commented out token_endpoint_auth_method="client_secret_post"
  • tests/integration_tests/auth/test_github_provider_integration.py:361 - Failing test
  • pyproject.toml:10 - MCP SDK dependency changed to main branch
  • MCP SDK: https://github.com/modelcontextprotocol/python-sdk


@chrisguidry chrisguidry changed the title from [2.15] SEP-1686 tasks to [2.14] SEP-1686 tasks on Dec 3, 2025
@chrisguidry chrisguidry marked this pull request as ready for review December 3, 2025 18:52
Base automatically changed from sdk-auth-updates to main December 3, 2025 22:14
@marvin-context-protocol
Contributor

Test Failure Analysis

Summary: The integration test test_call_tool_list_commits encounters a KeyError during teardown when retried by pytest-retry. The test timed out on first attempt due to a GitHub API rate limit (HTTP 429), was successfully retried, but then failed during teardown with KeyError: <_pytest.stash.StashKey object> related to the tmp_path fixture's internal tmppath_result_key.

Root Cause: This is a known compatibility issue between pytest-retry and pytest's tmp_path fixture. The tmp_path fixture stores test execution results in pytest's stash using tmppath_result_key to track whether cleanup should occur. When pytest-retry retries a test, it runs teardown to clean up state between attempts. However, during this teardown process, pytest-retry and the tmp_path fixture's cleanup logic interact incorrectly—the stash key gets deleted during the first teardown, but the second attempt expects it to still exist, causing a KeyError.

The actual test passed on retry (the HTTP 429 error was transient), but the teardown crash causes pytest to report an ERROR status for the test.

Suggested Solution: Replace pytest-retry with pytest-rerunfailures, which is the officially maintained pytest plugin for handling flaky tests and has better fixture lifecycle management.

Changes needed:

  1. Update pyproject.toml at line 70:

    • Remove: pytest-retry>=1.7.0
    • Add: pytest-rerunfailures>=15.0
  2. Update test files using @pytest.mark.flaky:

    • Replace @pytest.mark.flaky(retries=N, delay=D) with @pytest.mark.flaky(reruns=N, reruns_delay=D)
    • The decorator signature is slightly different but provides the same functionality

    Files to update:

    • tests/integration_tests/test_github_mcp_remote.py line 37
    • tests/client/test_openapi.py line 184
    • tests/test_mcp_config.py line 639
  3. Run the updated tests:

    uv sync
    uv run pytest tests/integration_tests/test_github_mcp_remote.py -v

This change will resolve the KeyError during teardown while maintaining the same retry behavior for flaky tests.

Detailed Analysis

Failure Logs

The test showed this sequence:

  1. First attempt: Timeout after 15s (test successfully called the tool but took too long)
  2. pytest-retry triggered retry: test_call_tool_list_commits failed on attempt 1! Retrying!
  3. Second attempt: Test passed successfully
  4. Teardown phase: KeyError: <_pytest.stash.StashKey object at 0x7fddaca1da00> at .venv/lib/python3.10/site-packages/_pytest/stash.py:84

The transient error surfaced during teardown was an HTTP 429 rate limit response:

httpx.HTTPStatusError: Client error '429 Too Many Requests' for url 'https://api.githubcopilot.com/mcp/'

This error occurred during the async context manager exit (__aexit__) while disconnecting from the GitHub Copilot MCP server, but was caught by pytest-retry's teardown process, leading to the stash KeyError.

Technical Details

From the pytest source code, the tmp_path fixture teardown does:

result_dict = request.node.stash[tmppath_result_key]

When pytest-retry runs teardown between attempts, it clears or removes this stash key. On the retry attempt, when the fixture tries to access this key again during its own teardown, it's missing, causing the KeyError.

The pytest-rerunfailures plugin handles this correctly by properly re-executing fixture setup/teardown cycles without corrupting the stash state.
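The interaction can be reproduced in miniature with a simplified stand-in for pytest's stash (the class names mirror `_pytest.stash`, but this is an illustrative sketch, not pytest's actual implementation):

```python
# Minimal sketch of the failure mode, using simplified stand-ins for
# pytest's Stash and StashKey (the real ones live in _pytest.stash).


class StashKey:
    """Sentinel used as a dict key; compared by object identity."""


class Stash:
    """A dict-like store keyed by StashKey objects."""

    def __init__(self) -> None:
        self._storage: dict = {}

    def __setitem__(self, key: StashKey, value) -> None:
        self._storage[key] = value

    def __getitem__(self, key: StashKey):
        return self._storage[key]  # raises KeyError if the key is missing

    def __delitem__(self, key: StashKey) -> None:
        del self._storage[key]


tmppath_result_key = StashKey()
stash = Stash()
stash[tmppath_result_key] = {"call": True}  # set during fixture setup

# First teardown (between retry attempts) consumes the key...
del stash[tmppath_result_key]

# ...so the second teardown's lookup fails, just like the reported error.
try:
    stash[tmppath_result_key]
except KeyError:
    pass
```

The second lookup fails because the key was already consumed, which is exactly the shape of the `KeyError: <_pytest.stash.StashKey object>` seen in the logs.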

Related Files
  • tests/integration_tests/test_github_mcp_remote.py: Test file with @pytest.mark.flaky decorator that needs updating
  • pyproject.toml: Dependency declaration for pytest-retry that should be changed to pytest-rerunfailures
  • tests/client/test_openapi.py: Another test file using @pytest.mark.flaky
  • tests/test_mcp_config.py: Another test file using @pytest.mark.flaky

@coderabbitai
Contributor

coderabbitai Bot commented Dec 4, 2025

Walkthrough

Adds SEP-1686 background task support across FastMCP. Server-side: Docket integration, task lifecycle handlers (tasks/get, tasks/result, tasks/list, tasks/cancel), task key utilities, converters, subscriptions for status/progress notifications, dependency-injection helpers (CurrentContext/CurrentDocket/CurrentWorker, without_injected_parameters), server lifecycle wiring, CLI worker command, examples, and docs. Client-side: Task-aware transports/initialization, Task classes (ToolTask/PromptTask/ResourceTask) with status/result/wait/cancel APIs, notification routing, and task-capable read/get/call flows. Settings and example envs for Docket/tasks added.

Possibly related issues

  • Issue 508 (jlowin/fastmcp) — Adds session subscription task-group management and subscription-based update logic (session._subscription_task_group and subscription handlers), which addresses the session/task-group lifecycle and client-disconnect issues described in the issue.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 71.01%, below the required 80.00% threshold. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title '[2.14] SEP-1686 tasks' clearly identifies the main feature being added (SEP-1686 tasks implementation) and includes the version tag, making it specific and descriptive.
  • Description check ✅ Passed: The description addresses most key areas from the template but lacks completion of the Contributors and Review Checklists, though the PR context and phase information are well-documented.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/fastmcp/client/transports.py (1)

852-878: Align experimental_capabilities with HTTP path and avoid empty dicts

The new experimental capabilities wiring in FastMCPTransport has a couple of problems:

  1. Shape mismatch vs HTTP SSE server path

    Here, tasks are advertised as:

    experimental_capabilities["tasks"] = {
        "tools": True,
        "prompts": True,
        "resources": True,
    }

    but create_sse_app in src/fastmcp/server/http.py uses the more detailed SEP‑1686 shape:

    "tasks": {
        "list": {},
        "cancel": {},
        "requests": {
            "tools": {"call": {}},
            "prompts": {"get": {}},
            "resources": {"read": {}},
        },
    }

    That means same‑process clients using FastMCPTransport will see different (and likely non‑spec‑compliant) experimentalCapabilities.tasks compared to HTTP clients. For spec‑aware tooling, the simple { "tools": True, ... } form may be ignored or treated as invalid, so task support could appear to “disappear” for in‑memory transports. Based on learnings, this transport is heavily used in tests, so this inconsistency will show up there first.

  2. Empty dict vs None when tasks are disabled

    When enable_tasks is false, experimental_capabilities stays {} and is passed through. That causes an empty experimentalCapabilities map on the wire instead of omitting the field entirely (previous behavior). It’s probably harmless but easy to avoid.

  3. Redundant local import

    The inner import fastmcp is redundant because the module is already imported at the top of the file.

A minimal fix that keeps FastMCPTransport aligned with the HTTP SSE path and restores the “omit when empty” behavior would look like:

@@ async def connect_session(
-            async with (
-                anyio.create_task_group() as tg,
-                _enter_server_lifespan(server=self.server),
-            ):
-                # Build experimental capabilities
-                import fastmcp
-
-                experimental_capabilities = {}
-                if fastmcp.settings.experimental.enable_tasks:
-                    # Declare SEP-1686 task support (enable_tasks requires enable_docket via validator)
-                    experimental_capabilities["tasks"] = {
-                        "tools": True,
-                        "prompts": True,
-                        "resources": True,
-                    }
+            async with (
+                anyio.create_task_group() as tg,
+                _enter_server_lifespan(server=self.server),
+            ):
+                # Build experimental capabilities for the server side
+                experimental_capabilities: dict[str, dict[str, object]] | None = None
+                if fastmcp.settings.experimental.enable_tasks:
+                    # Declare SEP-1686 task support (enable_tasks requires enable_docket via validator)
+                    experimental_capabilities = {
+                        "tasks": {
+                            "list": {},
+                            "cancel": {},
+                            "requests": {
+                                "tools": {"call": {}},
+                                "prompts": {"get": {}},
+                                "resources": {"read": {}},
+                            },
+                        }
+                    }
@@
-                tg.start_soon(
-                    lambda: self.server._mcp_server.run(
-                        server_read,
-                        server_write,
-                        self.server._mcp_server.create_initialization_options(
-                            experimental_capabilities=experimental_capabilities
-                        ),
-                        raise_exceptions=self.raise_exceptions,
-                    )
-                )
+                tg.start_soon(
+                    lambda: self.server._mcp_server.run(
+                        server_read,
+                        server_write,
+                        self.server._mcp_server.create_initialization_options(
+                            experimental_capabilities=experimental_capabilities
+                        ),
+                        raise_exceptions=self.raise_exceptions,
+                    )
+                )

This keeps capability advertisement consistent across transports and only sends experimentalCapabilities when there’s something to declare.

🧹 Nitpick comments (22)
examples/tasks/docker-compose.yml (1)

1-10: Docker Compose configuration looks solid.

The redis service is properly configured with appropriate versioning (alpine for lightweight examples), correct healthcheck implementation, and intentional port mapping (24242) to avoid conflicts. Configuration aligns well with the SEP-1686 task queuing infrastructure introduced in this PR.

As a nice-to-have improvement, consider adding a restart policy to ensure the Redis container recovers from unexpected exits:

 services:
   redis:
     image: redis:7-alpine
+    restart: unless-stopped
     ports:
       - "24242:6379"

This is optional for an example but improves reliability in local development.

examples/tasks/README.md (1)

22-22: Clarify single-process limitation more explicitly.

Line 22 mentions that "CLI workers won't work" with memory:// backend, but this may be unclear to users unfamiliar with the architecture. Consider expanding this to explain that the memory backend is in-process only and therefore cannot support separate worker processes.

Suggested change:

-For single-process mode without Redis, set `FASTMCP_EXPERIMENTAL_DOCKET_URL=memory://` (note: CLI workers won't work).
+For single-process mode without Redis, set `FASTMCP_EXPERIMENTAL_DOCKET_URL=memory://`. Note: The memory backend is in-process only and does not support additional CLI workers; use Redis for distributed task processing.
examples/tasks/server.py (1)

32-51: Example logic is solid; Ruff B008 is a DI false positive

The slow_computation tool logic (validation, progress, logging) looks good and matches the tasks docs. The progress: Progress = Progress() default is intentional for Docket‑style dependency injection, so Ruff’s B008 warning is a false positive here—changing it would alter DI behavior.

If Ruff is enforced in CI, consider silencing it explicitly instead of changing the signature, e.g.:

-async def slow_computation(
-    duration: Annotated[int, Logged],
-    progress: Progress = Progress(),
-) -> str:
+async def slow_computation(
+    duration: Annotated[int, Logged],
+    progress: Progress = Progress(),  # noqa: B008 - DI default, not a real call-at-import bug
+) -> str:

The long ValueError message is fine for an example; TRY003 can safely be ignored here.

docs/clients/tasks.mdx (1)

21-34: Content is accurate; consider making examples fully runnable

The Task API description and usage patterns line up with the new background‑task features, and the page follows the docs structure (frontmatter, H2 headings, clear sections).

If you want the snippets to be truly copy‑paste runnable per the docs guidelines, you could:

  • Show a concrete server value (for example, "server.py" or a small fastmcp.json path), and
  • Wrap the top‑level async usage in a small entrypoint:
import asyncio
from fastmcp import Client

async def main():
    async with Client("server.py") as client:
        task = await client.call_tool("slow_computation", {"duration": 10}, task=True)
        print(f"Task started: {task.task_id}")
        result = await task.result()
        print(result)

asyncio.run(main())

The current text is otherwise clear and consistent.

Also applies to: 125-155

src/fastmcp/server/http.py (1)

160-183: Task capabilities structure looks correct; you can avoid sending an empty map

The SSE handler’s experimental_capabilities structure for "tasks" (list/cancel/requests.{tools.call,prompts.get,resources.read}) matches the intended SEP‑1686 layout and will advertise the right features to MCP clients when experimental.enable_tasks is on.

For slightly cleaner behavior (and to align with the in‑memory transport once updated), you could avoid sending an empty experimentalCapabilities when tasks are disabled:

-        async with sse.connect_sse(scope, receive, send) as streams:
-            # Build experimental capabilities
-            experimental_capabilities = {}
-            if fastmcp.settings.experimental.enable_tasks:
-                # Declare SEP-1686 task support per final spec (lines 49-63)
-                # Nested structure: {list: {}, cancel: {}, requests: {tools: {call: {}}}}
-                experimental_capabilities["tasks"] = {
-                    "list": {},
-                    "cancel": {},
-                    "requests": {
-                        "tools": {"call": {}},
-                        "prompts": {"get": {}},
-                        "resources": {"read": {}},
-                    },
-                }
-
-            await server._mcp_server.run(
+        async with sse.connect_sse(scope, receive, send) as streams:
+            experimental_capabilities = None
+            if fastmcp.settings.experimental.enable_tasks:
+                experimental_capabilities = {
+                    "tasks": {
+                        "list": {},
+                        "cancel": {},
+                        "requests": {
+                            "tools": {"call": {}},
+                            "prompts": {"get": {}},
+                            "resources": {"read": {}},
+                        },
+                    }
+                }
+
+            await server._mcp_server.run(
                 streams[0],
                 streams[1],
                 server._mcp_server.create_initialization_options(
                     experimental_capabilities=experimental_capabilities
                 ),
             )

Functionally it’s fine as‑is; this is mainly about keeping the wire payload minimal and consistent.

docs/servers/tasks.mdx (1)

13-86: Server tasks docs align well with implementation

The page clearly describes the experimental flags, Docket backends, task=True usage, the Progress API, and worker CLI, and it matches the actual settings and dependencies in the codebase.

As a small optional enhancement, you might add a short “See also: Client Background Tasks” link near the top or under “Running Additional Workers” to tie this page to clients/tasks and make the end‑to‑end story more discoverable, but the current content is already coherent and actionable.

Also applies to: 95-126

src/fastmcp/cli/tasks.py (1)

92-95: Prefix unused variable with underscore.

The resolved_spec variable from load_and_merge_config is not used. Per Ruff RUF059, prefix it with an underscore to indicate it's intentionally unused.

     try:
-        config, resolved_spec = load_and_merge_config(server_spec)
+        config, _resolved_spec = load_and_merge_config(server_spec)
     except FileNotFoundError:
         sys.exit(1)
examples/tasks/client.py (1)

128-129: Consider replacing assert with explicit type handling.

Using assert for type checking can be problematic since assertions are disabled when Python runs with optimization flags (-O). For example code that users may copy, consider using explicit type checks or guards.

-        assert isinstance(result.content[0], TextContent)
-        console.print(f"  {result.content[0].text}")
+        first_content = result.content[0]
+        if isinstance(first_content, TextContent):
+            console.print(f"  {first_content.text}")
+        else:
+            console.print(f"  {first_content}")

The same applies to Line 156.

docs/servers/context.mdx (1)

120-121: Minor style improvement: remove redundant "of".

Per static analysis, "outside of" is redundant. Consider simplifying to "outside".

-- The `get_context()` function should only be used within the context of a server request. Calling it outside of a request will raise a `RuntimeError`.
+- The `get_context()` function should only be used within the context of a server request. Calling it outside a request will raise a `RuntimeError`.
src/fastmcp/server/tasks/subscriptions.py (2)

98-113: Consider extracting Redis timestamp lookup to reduce duplication.

Both _send_status_notification and _send_progress_notification contain identical logic for parsing the task key, building the Redis key, and fetching created_at. Consider extracting to a helper function.

async def _get_created_at(task_key: str, task_id: str, docket: Docket) -> str:
    """Retrieve createdAt timestamp from Redis, with fallback to now."""
    from fastmcp.server.tasks.keys import parse_task_key
    
    key_parts = parse_task_key(task_key)
    session_id = key_parts["session_id"]
    created_at_key = f"fastmcp:task:{session_id}:{task_id}:created_at"
    
    async with docket.redis() as redis:
        created_at_bytes = await redis.get(created_at_key)
    
    return (
        created_at_bytes.decode("utf-8")
        if created_at_bytes
        else datetime.now(timezone.utc).isoformat()
    )

Also applies to: 172-187


141-143: Using suppress(Exception) for notifications is reasonable.

Notification delivery is non-critical - the task will complete regardless. Silently suppressing failures here prevents notification issues from breaking the subscription loop. However, consider logging at debug level for observability.

-    with suppress(Exception):
-        await session.send_notification(notification)  # type: ignore[arg-type]
+    try:
+        await session.send_notification(notification)  # type: ignore[arg-type]
+    except Exception:
+        logger.debug(f"Failed to send status notification for task {task_id}")

Also applies to: 203-205

src/fastmcp/server/tasks/handlers.py (1)

27-133: Consider extracting common task queuing logic.

All three handlers (handle_tool_as_task, handle_prompt_as_task, handle_resource_as_task) share significant duplicated code:

  • Task ID generation and timestamp creation
  • Session/Docket retrieval and validation
  • Redis key storage with TTL
  • Task created notification sending
  • Subscription task spawning

Consider extracting common logic to reduce duplication. However, given this is new code and the handlers may diverge in behavior, this refactor could be deferred.

Example helper:

async def _prepare_task(
    task_type: str,
    component_identifier: str,
    docket_required_message: str,
) -> tuple[str, str, str, Context, Docket]:
    """Common task preparation: ID, timestamp, context, docket, key."""
    server_task_id = str(uuid.uuid4())
    created_at = datetime.now(timezone.utc).isoformat()
    ctx = get_context()
    session_id = ctx.session_id
    
    docket = _current_docket.get()
    if docket is None:
        raise McpError(ErrorData(code=INTERNAL_ERROR, message=docket_required_message))
    
    task_key = build_task_key(session_id, server_task_id, task_type, component_identifier)
    return server_task_id, created_at, task_key, ctx, docket

Also applies to: 136-239, 242-343

src/fastmcp/server/tasks/converters.py (1)

134-135: Use TypeError for invalid message types.

Per Python convention, TypeError is more appropriate when a value has an invalid type. ValueError is for valid types with invalid values.

         else:
-            raise ValueError(f"Invalid message type: {type(msg)}")
+            raise TypeError(f"Invalid message type: {type(msg)}")
src/fastmcp/prompts/prompt.py (1)

269-274: Redundant import of without_injected_parameters.

This function is already imported at module level (line 17). The local import is unnecessary.

     def _convert_string_arguments(self, kwargs: dict[str, Any]) -> dict[str, Any]:
         """Convert string arguments to expected types based on function signature."""
-        from fastmcp.server.dependencies import without_injected_parameters
-
         wrapper_fn = without_injected_parameters(self.fn)
         sig = inspect.signature(wrapper_fn)
src/fastmcp/server/dependencies.py (4)

84-85: LRU cache on wrapper function creation may cause memory issues with closures.

The @lru_cache(maxsize=5000) on without_injected_parameters caches based on the function identity. If the same function is passed, the same wrapper is returned. However, if lambda functions or closures are used (even if they look identical), each creates a new cache entry. Consider if this could lead to cache pollution in dynamic scenarios.

The current approach is reasonable for most use cases, but document the caching behavior to help users understand memory implications with dynamic function generation.
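The identity-based caching can be demonstrated with a minimal stand-in (`make_wrapper` here is a hypothetical analogue of `without_injected_parameters`, not the real function):

```python
from functools import lru_cache


@lru_cache(maxsize=5000)
def make_wrapper(fn):
    """Return a wrapper for fn; cached by the identity of fn."""

    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)

    return wrapper


def add_one(x: int) -> int:
    return x + 1


# The same function object always hits the cache:
assert make_wrapper(add_one) is make_wrapper(add_one)

# Textually identical lambdas are distinct objects, so each call
# creates a fresh cache entry -- the pollution risk noted above:
w1 = make_wrapper(lambda x: x + 1)
w2 = make_wrapper(lambda x: x + 1)
assert w1 is not w2
assert make_wrapper.cache_info().currsize == 3
```

With module-level functions the cache stays bounded at one entry per function, but code that generates lambdas or closures in a loop accumulates one entry each until the LRU limit evicts them.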


477-484: Missing return type annotation on __aenter__.

The _CurrentFastMCP.__aenter__ method lacks a return type annotation. For consistency with other dependency classes, add the return type.

-    async def __aenter__(self):
+    async def __aenter__(self) -> FastMCP:
         server_ref = _current_server.get()

487-509: Missing return type annotation on CurrentFastMCP function.

The function lacks a return type annotation for consistency with other Current* functions.

-def CurrentFastMCP():
+def CurrentFastMCP() -> FastMCP:
     """Get the current FastMCP server instance.

512-528: Missing return type annotation on get_server function.

For consistency and type safety, add return type annotation.

-def get_server():
+def get_server() -> FastMCP:
     """Get the current FastMCP server instance directly.
src/fastmcp/client/tasks.py (3)

199-206: Fire-and-forget async callbacks may silently swallow exceptions.

When asyncio.create_task(result) is called without storing the task reference, any exceptions raised by the callback will only be logged to stderr when the task is garbage collected, not propagated or handled. Consider storing references or adding exception handling.

             try:
                 result = callback(status)
                 if inspect.isawaitable(result):
                     # Fire and forget async callbacks
-                    asyncio.create_task(result)  # noqa: RUF006
+                    task = asyncio.create_task(result)
+                    task.add_done_callback(
+                        lambda t: logger.warning(f"Async callback failed: {t.exception()}")
+                        if t.exception() else None
+                    )
             except Exception as e:
                 logger.warning(f"Task callback error: {e}", exc_info=True)
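The done-callback pattern in the suggestion can be exercised in isolation. This standalone sketch (with a hypothetical failing callback, unrelated to the FastMCP code) shows how retrieving the exception in a done-callback surfaces errors that a bare fire-and-forget task would only report at garbage collection:

```python
import asyncio

failures: list[str] = []


async def failing_callback() -> None:
    raise ValueError("boom")


def log_failure(task: asyncio.Task) -> None:
    # Done-callbacks run even when the task raised; calling task.exception()
    # here both records the error and marks it as retrieved.
    exc = task.exception()
    if exc is not None:
        failures.append(str(exc))


async def main() -> None:
    task = asyncio.create_task(failing_callback())
    task.add_done_callback(log_failure)
    try:
        await task
    except ValueError:
        pass
    await asyncio.sleep(0)  # give the loop one tick to run the done-callback


asyncio.run(main())
assert failures == ["boom"]
```

Holding a reference to the task (or collecting it into a set, as the asyncio docs recommend) also prevents it from being garbage collected mid-flight.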

426-452: Type handling for raw_result covers multiple scenarios, but could be fragile.

The branching logic handles dict, mcp.types.CallToolResult, and legacy ToolResult formats. The hasattr checks for legacy format are somewhat fragile. Consider using isinstance with the actual type if available.

The current implementation is defensive and handles edge cases, which is appropriate for protocol compatibility. No change required, but consider adding a comment explaining the legacy format scenario.


597-604: Content type detection uses presence of "blob" key.

The logic assumes if "blob" is in the dict, it's a BlobResourceContents, otherwise TextResourceContents. This should be correct per MCP spec, but consider checking the "type" field if available for more explicit detection.
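A sketch of the heuristic (`classify_contents` is hypothetical; the dict shapes follow the MCP resource-contents convention of a base64 `blob` field for binary payloads and a `text` field for text):

```python
def classify_contents(item: dict) -> str:
    """Classify a resource-contents dict as 'blob' (binary) or 'text'."""
    # Presence of "blob" marks BlobResourceContents; everything else is
    # treated as TextResourceContents, mirroring the detection above.
    if "blob" in item:
        return "blob"
    return "text"


assert classify_contents({"uri": "file:///logo.png", "blob": "aGVsbG8="}) == "blob"
assert classify_contents({"uri": "file:///notes.txt", "text": "hello"}) == "text"
```

Checking an explicit discriminator field, when the payload carries one, would make the classification robust to future content types that happen to include a `blob` key.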

src/fastmcp/client/client.py (1)

459-472: Consider more specific exception type for timeout.

The TimeoutError catch on line 471 is correct, but the re-raised RuntimeError message "Failed to initialize server session" loses the timeout context. Consider including timeout information in the message.

         except TimeoutError as e:
-            raise RuntimeError("Failed to initialize server session") from e
+            raise RuntimeError(
+                f"Failed to initialize server session: connection timed out after {timeout}s"
+            ) from e
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fe2ec99 and 755c365.

⛔ Files ignored due to path filters (37)
  • .pre-commit-config.yaml is excluded by none and included by none
  • pyproject.toml is excluded by none and included by none
  • tests/cli/test_tasks.py is excluded by none and included by none
  • tests/client/tasks/conftest.py is excluded by none and included by none
  • tests/client/tasks/test_client_prompt_tasks.py is excluded by none and included by none
  • tests/client/tasks/test_client_resource_tasks.py is excluded by none and included by none
  • tests/client/tasks/test_client_task_notifications.py is excluded by none and included by none
  • tests/client/tasks/test_client_task_protocol.py is excluded by none and included by none
  • tests/client/tasks/test_client_tool_tasks.py is excluded by none and included by none
  • tests/client/tasks/test_task_context_validation.py is excluded by none and included by none
  • tests/client/tasks/test_task_result_caching.py is excluded by none and included by none
  • tests/client/test_client.py is excluded by none and included by none
  • tests/server/middleware/test_caching.py is excluded by none and included by none
  • tests/server/middleware/test_logging.py is excluded by none and included by none
  • tests/server/tasks/__init__.py is excluded by none and included by none
  • tests/server/tasks/conftest.py is excluded by none and included by none
  • tests/server/tasks/test_progress_dependency.py is excluded by none and included by none
  • tests/server/tasks/test_server_tasks_parameter.py is excluded by none and included by none
  • tests/server/tasks/test_sync_function_task_disabled.py is excluded by none and included by none
  • tests/server/tasks/test_task_capabilities.py is excluded by none and included by none
  • tests/server/tasks/test_task_dependencies.py is excluded by none and included by none
  • tests/server/tasks/test_task_metadata.py is excluded by none and included by none
  • tests/server/tasks/test_task_methods.py is excluded by none and included by none
  • tests/server/tasks/test_task_prompts.py is excluded by none and included by none
  • tests/server/tasks/test_task_protocol.py is excluded by none and included by none
  • tests/server/tasks/test_task_resources.py is excluded by none and included by none
  • tests/server/tasks/test_task_return_types.py is excluded by none and included by none
  • tests/server/tasks/test_task_security.py is excluded by none and included by none
  • tests/server/tasks/test_task_status_notifications.py is excluded by none and included by none
  • tests/server/tasks/test_task_tools.py is excluded by none and included by none
  • tests/server/tasks/test_task_ttl.py is excluded by none and included by none
  • tests/server/test_dependencies.py is excluded by none and included by none
  • tests/server/test_server_docket.py is excluded by none and included by none
  • tests/server/test_tool_annotations.py is excluded by none and included by none
  • tests/tools/test_tool.py is excluded by none and included by none
  • tests/utilities/test_types.py is excluded by none and included by none
  • uv.lock is excluded by !**/*.lock and included by none
📒 Files selected for processing (31)
  • docs/clients/tasks.mdx (1 hunks)
  • docs/docs.json (2 hunks)
  • docs/servers/context.mdx (5 hunks)
  • docs/servers/tasks.mdx (1 hunks)
  • examples/tasks/README.md (1 hunks)
  • examples/tasks/client.py (1 hunks)
  • examples/tasks/docker-compose.yml (1 hunks)
  • examples/tasks/server.py (1 hunks)
  • src/fastmcp/cli/cli.py (2 hunks)
  • src/fastmcp/cli/tasks.py (1 hunks)
  • src/fastmcp/client/client.py (17 hunks)
  • src/fastmcp/client/tasks.py (1 hunks)
  • src/fastmcp/client/transports.py (1 hunks)
  • src/fastmcp/dependencies.py (1 hunks)
  • src/fastmcp/prompts/prompt.py (10 hunks)
  • src/fastmcp/resources/resource.py (5 hunks)
  • src/fastmcp/resources/template.py (11 hunks)
  • src/fastmcp/server/context.py (2 hunks)
  • src/fastmcp/server/dependencies.py (3 hunks)
  • src/fastmcp/server/http.py (2 hunks)
  • src/fastmcp/server/low_level.py (4 hunks)
  • src/fastmcp/server/server.py (30 hunks)
  • src/fastmcp/server/tasks/__init__.py (1 hunks)
  • src/fastmcp/server/tasks/converters.py (1 hunks)
  • src/fastmcp/server/tasks/handlers.py (1 hunks)
  • src/fastmcp/server/tasks/keys.py (1 hunks)
  • src/fastmcp/server/tasks/protocol.py (1 hunks)
  • src/fastmcp/server/tasks/subscriptions.py (1 hunks)
  • src/fastmcp/settings.py (3 hunks)
  • src/fastmcp/tools/tool.py (8 hunks)
  • src/fastmcp/utilities/tests.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
src/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.py: Python source code must use Python ≥3.10 with full type annotations
Never use bare except - be specific with exception types
Prioritize readable, understandable code - clarity over cleverness; avoid obfuscated or confusing patterns even if shorter
Follow existing patterns and maintain consistency in code organization and style

Files:

  • src/fastmcp/dependencies.py
  • src/fastmcp/server/tasks/keys.py
  • src/fastmcp/client/transports.py
  • src/fastmcp/cli/tasks.py
  • src/fastmcp/server/tasks/protocol.py
  • src/fastmcp/server/low_level.py
  • src/fastmcp/server/tasks/subscriptions.py
  • src/fastmcp/server/http.py
  • src/fastmcp/server/tasks/converters.py
  • src/fastmcp/client/tasks.py
  • src/fastmcp/resources/resource.py
  • src/fastmcp/resources/template.py
  • src/fastmcp/tools/tool.py
  • src/fastmcp/server/tasks/handlers.py
  • src/fastmcp/server/tasks/__init__.py
  • src/fastmcp/server/server.py
  • src/fastmcp/server/context.py
  • src/fastmcp/settings.py
  • src/fastmcp/prompts/prompt.py
  • src/fastmcp/cli/cli.py
  • src/fastmcp/utilities/tests.py
  • src/fastmcp/client/client.py
  • src/fastmcp/server/dependencies.py
docs/**/*.mdx

📄 CodeRabbit inference engine (docs/.cursor/rules/mintlify.mdc)

docs/**/*.mdx: Use clear, direct language appropriate for technical audiences
Write in second person ('you') for instructions and procedures in MDX documentation
Use active voice over passive voice in MDX technical documentation
Employ present tense for current states and future tense for outcomes in MDX documentation
Maintain consistent terminology throughout all MDX documentation
Keep sentences concise while providing necessary context in MDX documentation
Use parallel structure in lists, headings, and procedures in MDX documentation
Lead with the most important information using inverted pyramid structure in MDX documentation
Use progressive disclosure in MDX documentation: present basic concepts before advanced ones
Break complex procedures into numbered steps in MDX documentation
Include prerequisites and context before instructions in MDX documentation
Provide expected outcomes for each major step in MDX documentation
End sections with next steps or related information in MDX documentation
Use descriptive, keyword-rich headings for navigation and SEO in MDX documentation
Focus on user goals and outcomes rather than system features in MDX documentation
Anticipate common questions and address them proactively in MDX documentation
Include troubleshooting for likely failure points in MDX documentation
Provide multiple pathways (beginner vs advanced) but offer an opinionated path to avoid overwhelming users in MDX documentation
Always include complete, runnable code examples that users can copy and execute in MDX documentation
Show proper error handling and edge case management in MDX code examples
Use realistic data instead of placeholder values in MDX code examples
Include expected outputs and results for verification in MDX code examples
Test all code examples thoroughly before publishing in MDX documentation
Specify language and include filename when relevant in MDX code examples
Add explanatory comments for complex logic in MDX code examples
Document all API...

Files:

  • docs/clients/tasks.mdx
  • docs/servers/context.mdx
  • docs/servers/tasks.mdx
🧠 Learnings (4)
📚 Learning: 2025-12-04T00:17:41.238Z
Learnt from: CR
Repo: jlowin/fastmcp PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T00:17:41.238Z
Learning: Applies to tests/**/*.py : Pass FastMCP servers directly to clients for testing using in-memory transport; only use HTTP transport when explicitly testing network features

Applied to files:

  • src/fastmcp/client/transports.py
  • src/fastmcp/server/server.py
📚 Learning: 2025-11-26T21:51:44.174Z
Learnt from: CR
Repo: jlowin/fastmcp PR: 0
File: .cursor/rules/core-mcp-objects.mdc:0-0
Timestamp: 2025-11-26T21:51:44.174Z
Learning: Review and update related Manager classes (ToolManager, ResourceManager, PromptManager) when modifying MCP object definitions

Applied to files:

  • src/fastmcp/tools/tool.py
📚 Learning: 2025-12-04T00:17:41.238Z
Learnt from: CR
Repo: jlowin/fastmcp PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T00:17:41.238Z
Learning: Applies to tests/**/*.py : Use `# type: ignore[attr-defined]` in tests for MCP results instead of type assertions

Applied to files:

  • src/fastmcp/utilities/tests.py
📚 Learning: 2025-12-01T15:53:29.709Z
Learnt from: jlowin
Repo: jlowin/fastmcp PR: 2505
File: src/fastmcp/server/dependencies.py:9-9
Timestamp: 2025-12-01T15:53:29.709Z
Learning: In the MCP SDK (mcp.server.auth.middleware.bearer_auth), the AuthenticatedUser class has an `access_token` attribute (not `token`) that returns an AccessToken object. To get the raw token string, you would use `user.access_token.token`.

Applied to files:

  • src/fastmcp/server/dependencies.py
🧬 Code graph analysis (19)
src/fastmcp/dependencies.py (1)
src/fastmcp/server/dependencies.py (5)
  • CurrentContext (271-293)
  • CurrentDocket (320-345)
  • CurrentFastMCP (487-509)
  • CurrentWorker (370-394)
  • Progress (446-471)
src/fastmcp/server/tasks/keys.py (1)
src/fastmcp/server/context.py (1)
  • session_id (343-392)
src/fastmcp/client/transports.py (2)
src/fastmcp/server/context.py (1)
  • fastmcp (152-157)
src/fastmcp/server/low_level.py (3)
  • fastmcp (42-47)
  • fastmcp (129-134)
  • create_initialization_options (136-149)
src/fastmcp/cli/tasks.py (2)
src/fastmcp/utilities/cli.py (1)
  • load_and_merge_config (32-137)
examples/tasks/client.py (1)
  • load_server (34-42)
src/fastmcp/server/tasks/protocol.py (4)
src/fastmcp/server/server.py (2)
  • docket (388-393)
  • FastMCP (149-2882)
src/fastmcp/server/context.py (3)
  • fastmcp (152-157)
  • Context (100-731)
  • session_id (343-392)
src/fastmcp/server/tasks/converters.py (3)
  • convert_prompt_result (94-145)
  • convert_resource_result (148-203)
  • convert_tool_result (21-91)
src/fastmcp/server/tasks/keys.py (1)
  • parse_task_key (47-77)
src/fastmcp/server/low_level.py (1)
src/fastmcp/server/context.py (1)
  • session (395-405)
src/fastmcp/server/tasks/subscriptions.py (1)
src/fastmcp/server/tasks/keys.py (1)
  • parse_task_key (47-77)
src/fastmcp/server/http.py (1)
src/fastmcp/server/context.py (1)
  • fastmcp (152-157)
examples/tasks/client.py (1)
src/fastmcp/client/tasks.py (6)
  • status (234-262)
  • on_status_change (208-232)
  • result (265-270)
  • result (399-456)
  • result (490-521)
  • result (560-614)
examples/tasks/server.py (2)
src/fastmcp/server/dependencies.py (4)
  • Progress (446-471)
  • set_total (426-430)
  • increment (432-439)
  • set_message (441-443)
src/fastmcp/client/client.py (1)
  • progress (641-651)
src/fastmcp/server/tasks/converters.py (2)
src/fastmcp/server/server.py (4)
  • tool (1683-1698)
  • tool (1701-1716)
  • tool (1718-1871)
  • get_tool (802-806)
src/fastmcp/tools/tool.py (3)
  • ToolResult (73-119)
  • _convert_to_content (589-615)
  • to_mcp_result (106-119)
src/fastmcp/client/tasks.py (2)
src/fastmcp/client/client.py (7)
  • session (329-336)
  • CallToolResult (120-127)
  • Client (130-1543)
  • _handle_task_status_notification (591-611)
  • cancel (624-639)
  • get_task_result (1443-1467)
  • _parse_call_tool_result (1228-1273)
src/fastmcp/client/messages.py (1)
  • MessageHandler (16-126)
src/fastmcp/resources/resource.py (1)
src/fastmcp/server/dependencies.py (1)
  • resolve_dependencies (208-252)
src/fastmcp/resources/template.py (1)
src/fastmcp/server/dependencies.py (3)
  • get_context (255-261)
  • without_injected_parameters (85-139)
  • resolve_dependencies (208-252)
src/fastmcp/tools/tool.py (2)
src/fastmcp/server/dependencies.py (1)
  • get_context (255-261)
src/fastmcp/utilities/types.py (2)
  • get_cached_typeadapter (45-117)
  • create_function_without_params (178-224)
src/fastmcp/server/tasks/handlers.py (3)
src/fastmcp/server/dependencies.py (2)
  • get_context (255-261)
  • message (423-424)
src/fastmcp/server/tasks/keys.py (1)
  • build_task_key (15-44)
src/fastmcp/server/tasks/subscriptions.py (1)
  • subscribe_to_task_updates (27-73)
src/fastmcp/server/tasks/__init__.py (4)
src/fastmcp/server/tasks/converters.py (3)
  • convert_prompt_result (94-145)
  • convert_resource_result (148-203)
  • convert_tool_result (21-91)
src/fastmcp/server/tasks/handlers.py (3)
  • handle_prompt_as_task (136-239)
  • handle_resource_as_task (242-343)
  • handle_tool_as_task (27-133)
src/fastmcp/server/tasks/keys.py (3)
  • build_task_key (15-44)
  • get_client_task_id_from_key (80-93)
  • parse_task_key (47-77)
src/fastmcp/server/tasks/protocol.py (4)
  • tasks_cancel_handler (265-346)
  • tasks_get_handler (45-130)
  • tasks_list_handler (247-262)
  • tasks_result_handler (133-244)
src/fastmcp/server/server.py (5)
src/fastmcp/server/context.py (3)
  • fastmcp (152-157)
  • Context (100-731)
  • debug (408-422)
src/fastmcp/server/tasks/handlers.py (2)
  • handle_prompt_as_task (136-239)
  • handle_resource_as_task (242-343)
src/fastmcp/resources/resource_manager.py (2)
  • get_resources (56-58)
  • get_resource (244-287)
src/fastmcp/server/dependencies.py (1)
  • without_injected_parameters (85-139)
src/fastmcp/server/openapi/components.py (1)
  • run (62-154)
src/fastmcp/server/context.py (2)
src/fastmcp/server/low_level.py (2)
  • fastmcp (42-47)
  • fastmcp (129-134)
src/fastmcp/server/server.py (1)
  • _read_resource_mcp (1457-1476)
🪛 LanguageTool
docs/servers/context.mdx

[style] ~120-~120: This phrase is redundant. Consider using “outside”.
Context: ...context of a server request. Calling it outside of a request will raise a RuntimeError. ...

(OUTSIDE_OF)

🪛 Ruff (0.14.7)
src/fastmcp/server/tasks/keys.py

67-70: Avoid specifying long messages outside the exception class

(TRY003)

src/fastmcp/cli/tasks.py

93-93: Unpacked variable resolved_spec is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)

src/fastmcp/server/tasks/protocol.py

93-93: Avoid specifying long messages outside the exception class

(TRY003)


98-98: Avoid specifying long messages outside the exception class

(TRY003)


115-115: Do not catch blind exception: Exception

(BLE001)


209-209: Do not catch blind exception: Exception

(BLE001)


248-248: Unused function argument: server

(ARG001)


248-248: Unused function argument: params

(ARG001)

src/fastmcp/server/tasks/subscriptions.py

72-72: Do not catch blind exception: Exception

(BLE001)

examples/tasks/server.py

35-35: Do not perform function call Progress in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


50-50: Avoid specifying long messages outside the exception class

(TRY003)

src/fastmcp/server/tasks/converters.py

135-135: Prefer TypeError exception for invalid type

(TRY004)


135-135: Avoid specifying long messages outside the exception class

(TRY003)


149-149: Unused function argument: server

(ARG001)

src/fastmcp/client/tasks.py

76-78: Avoid specifying long messages outside the exception class

(TRY003)


162-165: Avoid specifying long messages outside the exception class

(TRY003)


205-205: Do not catch blind exception: Exception

(BLE001)


319-321: Avoid specifying long messages outside the exception class

(TRY003)

src/fastmcp/server/tasks/handlers.py

31-31: Unused function argument: task_meta

(ARG001)


140-140: Unused function argument: task_meta

(ARG001)


243-243: Unused function argument: server

(ARG001)


246-246: Unused function argument: task_meta

(ARG001)

src/fastmcp/server/server.py

415-418: Avoid specifying long messages outside the exception class

(TRY003)

src/fastmcp/client/client.py

472-472: Avoid specifying long messages outside the exception class

(TRY003)


1264-1264: Do not catch blind exception: Exception

(BLE001)


1265-1265: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


1510-1512: try-except-continue detected, consider logging the exception

(S112)


1510-1510: Do not catch blind exception: Exception

(BLE001)

src/fastmcp/server/dependencies.py

73-73: Do not catch blind exception: Exception

(BLE001)


193-195: Avoid specifying long messages outside the exception class

(TRY003)


304-307: Avoid specifying long messages outside the exception class

(TRY003)


312-315: Avoid specifying long messages outside the exception class

(TRY003)


355-358: Avoid specifying long messages outside the exception class

(TRY003)


362-365: Avoid specifying long messages outside the exception class

(TRY003)


429-429: Avoid specifying long messages outside the exception class

(TRY003)


435-435: Avoid specifying long messages outside the exception class

(TRY003)


465-468: Avoid specifying long messages outside the exception class

(TRY003)


480-480: Avoid specifying long messages outside the exception class

(TRY003)


483-483: Avoid specifying long messages outside the exception class

(TRY003)


524-524: Avoid specifying long messages outside the exception class

(TRY003)


527-527: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Run tests: Python 3.10 on ubuntu-latest
  • GitHub Check: Run tests with lowest-direct dependencies
  • GitHub Check: Run tests: Python 3.10 on windows-latest

Comment on lines +1242 to +1273
if result.isError and raise_on_error:
    msg = cast(mcp.types.TextContent, result.content[0]).text
    raise ToolError(msg)
elif result.structuredContent:
    try:
        if name not in self.session._tool_output_schemas:
            await self.session.list_tools()
        if name in self.session._tool_output_schemas:
            output_schema = self.session._tool_output_schemas.get(name)
            if output_schema:
                if output_schema.get("x-fastmcp-wrap-result"):
                    output_schema = output_schema.get("properties", {}).get(
                        "result"
                    )
                    structured_content = result.structuredContent.get("result")
                else:
                    structured_content = result.structuredContent
                output_type = json_schema_to_type(output_schema)
                type_adapter = get_cached_typeadapter(output_type)
                data = type_adapter.validate_python(structured_content)
            else:
                data = result.structuredContent
    except Exception as e:
        logger.error(f"[{self.name}] Error parsing structured content: {e}")

return CallToolResult(
    content=result.content,
    structured_content=result.structuredContent,
    meta=result.meta,
    data=data,
    is_error=result.isError,
)

⚠️ Potential issue | 🟡 Minor

Exception handling is too broad and logging could be improved.

The static analysis correctly identifies issues here:

  1. Line 1264: except Exception is too broad - could mask unexpected errors
  2. Line 1265: logger.error should use logger.exception to include traceback
-            except Exception as e:
-                logger.error(f"[{self.name}] Error parsing structured content: {e}")
+            except (KeyError, ValueError, TypeError, pydantic_core.ValidationError) as e:
+                logger.exception(f"[{self.name}] Error parsing structured content")
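For reference, a minimal standalone version of the narrowed handler (names here are illustrative, not FastMCP's actual internals; note that pydantic v2's ValidationError already subclasses ValueError, so listing both is belt-and-braces):

```python
import logging

import pydantic_core
from pydantic import TypeAdapter

logger = logging.getLogger("fastmcp.client")


def parse_structured_content(schema_type, payload):
    """Validate structured content, catching only expected parse failures."""
    try:
        return TypeAdapter(schema_type).validate_python(payload)
    except (KeyError, ValueError, TypeError, pydantic_core.ValidationError):
        # logger.exception records the traceback, unlike logger.error
        logger.exception("Error parsing structured content")
        return None
```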
🧰 Tools
🪛 Ruff (0.14.7)

1264-1264: Do not catch blind exception: Exception

(BLE001)


1265-1265: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

Comment thread src/fastmcp/client/client.py
Comment thread src/fastmcp/resources/template.py
Comment on lines +80 to +93
def get_client_task_id_from_key(task_key: str) -> str:
    """Extract just the client task ID from a task key.

    Args:
        task_key: Full encoded task key

    Returns:
        Client-provided task ID (second segment)

    Example:
        >>> get_client_task_id_from_key("session123:task456:tool:my_tool")
        'task456'
    """
    return task_key.split(":", 3)[1]

⚠️ Potential issue | 🟡 Minor

Missing validation may cause IndexError on malformed keys.

If task_key contains fewer than 2 colons, split(":", 3)[1] will raise an IndexError. Consider adding validation or reusing parse_task_key for consistency.

 def get_client_task_id_from_key(task_key: str) -> str:
     """Extract just the client task ID from a task key.

     Args:
         task_key: Full encoded task key

     Returns:
         Client-provided task ID (second segment)

     Example:
         >>> get_client_task_id_from_key("session123:task456:tool:my_tool")
         'task456'
     """
-    return task_key.split(":", 3)[1]
+    parts = task_key.split(":", 3)
+    if len(parts) < 2:
+        raise ValueError(f"Invalid task key format: {task_key}")
+    return parts[1]
🤖 Prompt for AI Agents
In src/fastmcp/server/tasks/keys.py around lines 80 to 93, the function
get_client_task_id_from_key uses task_key.split(":", 3)[1] which can raise
IndexError for malformed keys; change it to validate the key structure before
indexing (e.g., split by ":" and assert there are at least 2 segments) or
call/reuse the existing parse_task_key helper to extract the second segment, and
raise a clear ValueError with a descriptive message when the key is invalid.

Comment thread src/fastmcp/server/tasks/protocol.py Outdated
Comment thread src/fastmcp/settings.py Outdated
Comment on lines +63 to +64
class ExtendedSettingsConfigDict(SettingsConfigDict, total=False):
    env_prefixes: list[str] | None

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

rg -n "ExtendedSettingsConfigDict" --type=py

Repository: jlowin/fastmcp

Length of output: 151


Remove unused ExtendedSettingsConfigDict TypedDict.

This class is defined but never used anywhere in the codebase. Either apply it to a settings class or remove it.

🤖 Prompt for AI Agents
In src/fastmcp/settings.py around lines 63–64, the TypedDict class
ExtendedSettingsConfigDict is declared but never used; either remove this unused
definition to clean up dead code, or apply it by typing the relevant settings
Config/usage (for example, annotate the settings.config attribute or replace any
untyped dicts with ExtendedSettingsConfigDict) and update imports/usages
accordingly so the type is actually referenced; choose and implement one of
these fixes and run type checks to confirm no unresolved references remain.

@chrisguidry chrisguidry force-pushed the sep-1686 branch 2 times, most recently from 204884d to 3427154 on December 4, 2025 at 16:10
@marvin-context-protocol

Test Failure Analysis

Summary: Integration test test_call_tool_list_commits timed out after 15 seconds and encountered an HTTP 429 rate limit error during teardown when disconnecting from the GitHub Copilot MCP API.

Root Cause: The test is hitting external rate limits from GitHub's Copilot MCP API (https://api.githubcopilot.com/mcp/). This is an intermittent issue caused by external API constraints, not a bug in the PR code. The sequence of events:

  1. Test calls list_commits tool via GitHub Copilot MCP
  2. External API is slow or rate-limiting requests
  3. Test hits 15-second timeout and pytest-timeout kills it
  4. During cleanup, Client.__aexit__() calls _disconnect()
  5. _disconnect() awaits the session_task to complete
  6. _session_runner() exits the AsyncExitStack, which triggers _context_manager() teardown
  7. transport.connect_session() context manager exits, making a final HTTP request
  8. This request receives HTTP 429 "Too Many Requests" response
  9. Exception propagates as unhandled during asyncio shutdown

Suggested Solution:

This is a transient external API issue, not caused by the PR changes. The PR implements SEP-1686 (tasks protocol) with significant client refactoring, and the new disconnect logic is working correctly. However, there are a few options to make tests more resilient:

Option 1: Increase test timeout (Recommended)

The 15-second timeout may be too aggressive for external API calls that can be rate-limited:

# tests/integration_tests/test_github_mcp_remote.py:99
@pytest.mark.timeout(30)  # Increase from default 15s
async def test_call_tool_list_commits(
    self,
    streamable_http_client: Client[StreamableHttpTransport],
):

Option 2: Suppress rate limit errors during teardown

Modify the teardown to catch and log HTTP 429 errors instead of propagating them. However, this would require changes to either:

  • The MCP SDK's StreamableHttpTransport (external dependency)
  • Or wrapping the context manager exit in the FastMCP client

Option 3: Mark as flaky with retries

The test could be marked with @pytest.mark.flaky(retries=2) to automatically retry on failures.

Option 4: No action needed

Since this is an external API rate limit (not a code bug), and the test would pass under normal API conditions, this may be acceptable as-is.

Detailed Analysis

Timeout Context

From the logs:

DEBUG    [Client-46ca] called call_tool: list_commits      client.py:1193
Failed: Timeout (>15.0s) from pytest-timeout.

The test successfully called the tool but timed out waiting for the response.

Teardown Error

ERROR   asyncio:base_events.py:1758 unhandled exception during asyncio.run() shutdown
task: <Task finished name='Task-6' coro=<TestGithubMCPRemote.test_call_tool_list_commits() done...
httpx.HTTPStatusError: Client error '429 Too Many Requests' for url 'https://api.githubcopilot.com/mcp/'

Stack Trace

The error originates from:

client.py:478 → __aexit__() calls _disconnect()
client.py:563 → _disconnect() awaits session_task  
client.py:581 → _session_runner() exits AsyncExitStack
client.py:395 → _context_manager() exits transport.connect_session()
StreamableHttpTransport makes final HTTP request → 429 response

PR Context

The PR makes extensive changes to client session management (client.py +888/-127 lines) implementing reference counting and proper cleanup. The disconnect flow is working as designed—the rate limit is an external constraint.

Related Files
  • tests/integration_tests/test_github_mcp_remote.py:99 - The failing test
  • src/fastmcp/client/client.py:376 - __aexit__() that calls _disconnect()
  • src/fastmcp/client/client.py:462 - _disconnect() awaits session_task
  • src/fastmcp/client/client.py:480 - _session_runner() manages session lifecycle
  • src/fastmcp/client/client.py:358 - _context_manager() wraps transport connection (where 429 occurs)

chrisguidry and others added 2 commits December 4, 2025 13:49
The example was using a non-existent TaskStatusResponse type.
Updated to use mcp.types.GetTaskResult which is what the
on_status_change callback actually receives.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

♻️ Duplicate comments (6)
src/fastmcp/server/tasks/keys.py (1)

80-93: Missing validation may cause IndexError on malformed keys.

If task_key contains fewer than 2 colons, split(":", 3)[1] will raise an IndexError. Consider reusing parse_task_key for consistency and proper validation.

Apply this diff to reuse the existing validation logic:

 def get_client_task_id_from_key(task_key: str) -> str:
     """Extract just the client task ID from a task key.

     Args:
         task_key: Full encoded task key

     Returns:
         Client-provided task ID (second segment)

     Example:
         >>> get_client_task_id_from_key("session123:task456:tool:my_tool")
         'task456'
     """
-    return task_key.split(":", 3)[1]
+    return parse_task_key(task_key)["client_task_id"]
src/fastmcp/server/tasks/handlers.py (2)

61-66: Environment variable name still incorrect in error message.

The error message references FASTMCP_EXPERIMENTAL_ENABLE_DOCKET, but based on the settings file, the actual variable is FASTMCP_ENABLE_DOCKET. This inconsistency exists at lines 64 and 172.


31-31: Unused task_meta parameter.

The task_meta parameter is declared but never used in any of the three handler functions (lines 31, 140, 246). Either use it for TTL configuration or remove it from the signatures.

src/fastmcp/server/tasks/protocol.py (1)

311-355: Revisit createdAt fallback behavior in cancel response.

CancelTaskResult.createdAt falls back to datetime.now(timezone.utc).isoformat() when the stored value is missing. That can mislead clients about when the task actually started and hides an underlying data consistency problem.

Given this path “should never happen”, consider either:

  • returning an explicit internal error when created_at is None, or
  • returning createdAt=created_at and letting validation fail loudly if it’s unexpectedly missing, rather than synthesizing a timestamp.

At minimum, a log warning when falling back would make debugging easier.

src/fastmcp/client/client.py (2)

1510-1516: Silent exception swallowing in list_tasks could hide issues.

The except Exception: continue pattern suppresses all errors when querying task status. While task expiration is expected, other errors (network issues, serialization errors) should be logged.

Per coding guidelines and static analysis (S112, BLE001), consider logging at debug level:

             try:
                 status = await self.get_task_status(task_id)
                 tasks.append(status.model_dump(by_alias=True))
-            except Exception:
-                # Task may have expired or been deleted, skip it
-                continue
+            except Exception as e:
+                # Task may have expired, been deleted, or query failed - skip it
+                logger.debug(f"[{self.name}] Failed to get status for task {task_id}: {e}")
+                continue

1268-1269: Exception handling is too broad and logging could be improved.

The static analysis correctly identifies issues here:

  1. Line 1268: except Exception is too broad - could mask unexpected errors
  2. Line 1269: logger.error should use logger.exception to include traceback

Based on coding guidelines, bare except should be avoided. Be specific with exception types.

-            except Exception as e:
-                logger.error(f"[{self.name}] Error parsing structured content: {e}")
+            except (KeyError, ValueError, TypeError, pydantic_core.ValidationError) as e:
+                logger.exception(f"[{self.name}] Error parsing structured content")
🧹 Nitpick comments (12)
examples/tasks/docker-compose.yml (1)

1-10: Configuration looks good for the tasks example.

The Redis service definition is clean and includes a proper healthcheck. The alpine image choice keeps the example lightweight, and the healthcheck parameters are reasonable.

Optional: Consider adding a restart policy and container name for better example usability.

For development/example workflows, you might find these additions helpful:

 services:
   redis:
     image: redis:7-alpine
+    container_name: fastmcp-tasks-redis
     ports:
       - "24242:6379"
+    restart: unless-stopped
     healthcheck:
       test: ["CMD", "redis-cli", "ping"]
       interval: 5s
       timeout: 3s
       retries: 5

The container_name makes it easier to reference the service in shell commands, and restart: unless-stopped ensures the container survives accidental restarts during development.

examples/tasks/server.py (1)

32-36: Avoid calling Progress() in the default argument (Ruff B008 / DI clarity).

Right now progress: Progress = Progress() creates a concrete instance at import time, which triggers Ruff B008 and can be surprising for a stateful dependency. In this example you don’t seem to need a real default value—FastMCP’s DI can inject Progress based on the type alone—so you can simplify the signature and silence the warning:

-@mcp.tool(task=True)
-async def slow_computation(
-    duration: Annotated[int, Logged],
-    progress: Progress = Progress(),
-) -> str:
+@mcp.tool(task=True)
+async def slow_computation(
+    duration: Annotated[int, Logged],
+    progress: Progress,
+) -> str:

If the DI layer does require an explicit sentinel instance, consider keeping the existing pattern but adding a brief comment and # noqa: B008 to document that this is intentional.

src/fastmcp/server/tasks/converters.py (2)

132-135: Consider using TypeError for invalid type.

When validating types, TypeError is more semantically appropriate than ValueError for indicating an unexpected type was received.

Apply this diff:

         else:
-            raise ValueError(f"Invalid message type: {type(msg)}")
+            raise TypeError(f"Invalid message type: {type(msg)}")

148-203: Unused server parameter.

The server parameter is not used in this function. Consider removing it if not needed for API consistency, or prefix it with an underscore (_server) to indicate it's intentionally unused.

If the parameter is kept for API consistency with other converters, apply this diff:

 async def convert_resource_result(
-    server: FastMCP, raw_value: Any, uri: str, client_task_id: str
+    _server: FastMCP, raw_value: Any, uri: str, client_task_id: str
 ) -> dict[str, Any]:
     """Convert raw resource return value to MCP resource contents dict.

     Args:
-        server: FastMCP server instance
+        _server: FastMCP server instance (reserved for future use)
         raw_value: The raw return value from user's resource function (str or bytes)
         uri: Resource URI (for the contents response)
         client_task_id: Client task ID for related-task metadata
src/fastmcp/cli/tasks.py (1)

93-93: Unused variable resolved_spec.

The resolved_spec variable is unpacked but never used. Prefix it with an underscore to indicate it's intentionally unused.

Apply this diff:

-    config, resolved_spec = load_and_merge_config(server_spec)
+    config, _resolved_spec = load_and_merge_config(server_spec)
src/fastmcp/server/tasks/handlers.py (1)

242-247: Unused server parameter in handle_resource_as_task.

The server parameter is declared but never used in this function, unlike the tool and prompt handlers which use it to retrieve the component. Consider removing it if not needed.

src/fastmcp/server/tasks/subscriptions.py (1)

124-131: Consider making TTL and poll interval configurable.

The hardcoded values ttl=60000 and pollInterval=1000 (also at lines 194-195) would benefit from being derived from settings or Docket configuration rather than magic numbers. This could be addressed in a follow-up.

+# Could derive from settings or Docket configuration:
+# ttl = int(docket.execution_ttl.total_seconds() * 1000)
+# poll_interval = settings.task_poll_interval_ms
 params_dict = {
     "taskId": task_id,
     "status": mcp_status,
     "createdAt": created_at,
     "lastUpdatedAt": datetime.now(timezone.utc).isoformat(),
-    "ttl": 60000,
-    "pollInterval": 1000,
+    "ttl": 60000,  # TODO: Make configurable
+    "pollInterval": 1000,  # TODO: Make configurable
 }
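One shape the configurable version could take (a generic sketch only; `TaskNotificationSettings` is hypothetical, not an existing FastMCP class, and the real values would presumably come from FastMCP settings or Docket's execution TTL):

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class TaskNotificationSettings:
    """Hypothetical knobs for task status notifications (milliseconds)."""

    ttl_ms: int = 60_000
    poll_interval_ms: int = 1_000


def build_status_params(
    task_id: str,
    status: str,
    created_at: str,
    settings: TaskNotificationSettings,
) -> dict:
    # Same payload as the hardcoded version, with the magic numbers
    # lifted into a settings object.
    return {
        "taskId": task_id,
        "status": status,
        "createdAt": created_at,
        "lastUpdatedAt": datetime.now(timezone.utc).isoformat(),
        "ttl": settings.ttl_ms,
        "pollInterval": settings.poll_interval_ms,
    }
```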
src/fastmcp/server/tasks/protocol.py (2)

79-139: Clarify behavior when created_at is missing and clean up unused local.

created_at can be None if the Redis timestamp key is missing, but createdAt is required by SEP‑1686 and you currently pass it through with a type ignore. That makes the wire format ambiguous and may break strict clients.

Consider either:

  • treating a missing created_at as an internal error (raise McpError(INTERNAL_ERROR, ...)), or
  • falling back to datetime.now(timezone.utc) consistently (as in the cancel handler), and documenting that this is a last‑resort approximation.

Also, error_message is set in the failure branch but never used outside constructing status_message, so the local variable is redundant.

Please double‑check the GetTaskResult.createdAt type in the current mcp.types version to confirm whether it must be a datetime on the Python side or will happily accept ISO strings. If strict, you may want to parse created_at back into a datetime before returning.


202-253: Confirm cross‑type error handling strategy in tasks_result_handler.

When execution.get_result() raises, you always return a CallToolResult with isError=True, even for prompt (GetPromptResult) and resource (ReadResourceResult/contents) tasks. That gives a uniform error envelope but diverges from the per‑type result shapes implied by the docstring and converters.

If that’s intentional, it would be good to:

  • document that tasks/result always uses a tool‑style error envelope on failure, regardless of task type, and
  • ensure client Task classes (especially PromptTask and ResourceTask) are prepared to encounter a CallToolResult error payload instead of their nominal type.

If not intentional, you may want to branch on task_type even in the exception path and return a type‑appropriate error representation instead of a tool‑specific one.

Please verify against the current MCP task spec and your client tests that prompt/resource task failures are correctly handled end‑to‑end and don’t confuse consumers expecting GetPromptResult or resource contents.

src/fastmcp/server/server.py (1)

581-755: Task routing for tools/resources/prompts is well‑designed; consider mounted‑tool support later.

The new handlers:

  • _setup_read_resource_handler and _setup_get_prompt_handler correctly inspect request_context.experimental.is_task, route to handle_resource_as_task/handle_prompt_as_task when the component’s task flag is true and tasks are enabled, and otherwise fall back to the existing MCP paths.
  • _setup_task_protocol_handlers wires the SEP‑1686 tasks/get, tasks/result, tasks/list, and tasks/cancel endpoints to the protocol module in a way that mirrors how other low‑level handlers are registered.
  • _call_tool_mcp detects task metadata and delegates to handle_tool_as_task when the local Tool has task=True, with a clear debug log and synchronous fallback when backgrounding isn’t supported.

Decorator changes for tool, resource, and prompt to accept task: bool | None and automatically disable task support for synchronous functions (with a warning when task=True is explicitly requested) are also in line with Docket’s async requirement.

One limitation to be aware of: _call_tool_mcp checks self._tool_manager._tools.get(key) for task support, so tools coming from mounted servers won’t currently be treated as task‑capable here, even if the mounted server supports tasks itself. That’s an acceptable initial trade‑off; if you later want cross‑server tasking, you may need a more general lookup (e.g., via get_tools() or delegating the task decision to the mounted server).

If you plan to support background tasks for mounted/imported servers, it’s worth adding a small integration test to confirm how callTool with task metadata behaves when the target tool lives on a mounted FastMCP instance.
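The more general lookup could check the local registry first and fall back to the merged tool view. This toy sketch uses a stand‑in Tool class since the real change would live inside _call_tool_mcp; supports_background and merged_tools are hypothetical names:

```python
class Tool:
    """Stand-in for fastmcp's Tool, carrying only the task flag."""

    def __init__(self, task: bool = False) -> None:
        self.task = task


def supports_background(
    local_tools: dict[str, Tool], merged_tools: dict[str, Tool], key: str
) -> bool:
    """Check the local registry first, then fall back to the merged view
    (standing in for the result of get_tools(), which would include
    tools from mounted servers)."""
    tool = local_tools.get(key) or merged_tools.get(key)
    return tool is not None and tool.task
```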

src/fastmcp/client/tasks.py (1)

110-351: Task lifecycle (status/wait/cancel) is robust; confirm payload shape for result parsing.

The Task base class:

  • enforces client‑context usage via _check_client_connected while still allowing immediate tasks to be awaited after the client closes,
  • supports notification‑driven status updates with a polling fallback in wait(), and
  • cleanly no‑ops cancel() when the task was executed synchronously.

ToolTask, PromptTask, and ResourceTask then layer type‑specific result() logic and caching on top.

One assumption worth validating is the exact shape of Client.get_task_result():

  • PromptTask.result() calls GetPromptResult.model_validate(mcp_result), assuming mcp_result is already the payload dict rather than wrapped in an extra envelope (e.g. {"result": ...}).
  • ResourceTask.result() similarly assumes either a ReadResourceResult instance, a dict with a top‑level "contents" key, or a bare list.

If GetTaskPayloadResult in the current mcp.types encodes the payload under a field (instead of as the root), you may need to unwrap that before passing data into the Task classes to avoid subtle parsing bugs.

Please double‑check the current GetTaskPayloadResult schema in your version of mcp.types and confirm that Client.get_task_result() returns the raw payload in the shapes ToolTask, PromptTask, and ResourceTask expect. If not, a small unwrapping step in get_task_result() (or in the Task classes) will keep things aligned.
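A minimal unwrapping step might look like the following. It assumes the wrapper, when present, nests the payload under a single "result" key, which is one possible GetTaskPayloadResult encoding and should be verified against your mcp.types version:

```python
def unwrap_task_payload(mcp_result: object) -> object:
    """Normalize the task result payload before handing it to the Task
    classes: unwrap a single-key {"result": ...} envelope if present,
    otherwise pass the value through unchanged. Hypothetical helper."""
    if isinstance(mcp_result, dict) and set(mcp_result) == {"result"}:
        return mcp_result["result"]
    return mcp_result
```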

src/fastmcp/client/client.py (1)

319-326: Consider adding cleanup for _submitted_task_ids to prevent unbounded growth.

The _submitted_task_ids set grows as tasks are submitted but has no cleanup mechanism. For long-running clients, this could lead to unbounded memory growth. Consider:

  1. Removing task IDs when tasks complete/fail
  2. Adding a TTL-based eviction mechanism
  3. Limiting the set size with LRU eviction

Example approach - clean up on task completion:

# In _handle_task_status_notification or Task completion handlers:
if status.status in ("completed", "failed", "cancelled"):
    self._submitted_task_ids.discard(task_id)
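Options 2 and 3 can also be combined in a small insertion‑ordered set with a capped size. This is a generic sketch, not tied to Client internals; the class name and default size are illustrative:

```python
from collections import OrderedDict


class BoundedIdSet:
    """Insertion-ordered set that evicts the oldest entries past max_size."""

    def __init__(self, max_size: int = 10_000) -> None:
        self._items: OrderedDict[str, None] = OrderedDict()
        self._max_size = max_size

    def add(self, task_id: str) -> None:
        self._items[task_id] = None
        self._items.move_to_end(task_id)  # refresh recency on re-add
        while len(self._items) > self._max_size:
            self._items.popitem(last=False)  # evict oldest entry

    def discard(self, task_id: str) -> None:
        self._items.pop(task_id, None)

    def __contains__(self, task_id: str) -> bool:
        return task_id in self._items
```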
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3427154 and 5ccfba0.

⛔ Files ignored due to path filters (37)
  • .pre-commit-config.yaml is excluded by none and included by none
  • pyproject.toml is excluded by none and included by none
  • tests/cli/test_tasks.py is excluded by none and included by none
  • tests/client/tasks/conftest.py is excluded by none and included by none
  • tests/client/tasks/test_client_prompt_tasks.py is excluded by none and included by none
  • tests/client/tasks/test_client_resource_tasks.py is excluded by none and included by none
  • tests/client/tasks/test_client_task_notifications.py is excluded by none and included by none
  • tests/client/tasks/test_client_task_protocol.py is excluded by none and included by none
  • tests/client/tasks/test_client_tool_tasks.py is excluded by none and included by none
  • tests/client/tasks/test_task_context_validation.py is excluded by none and included by none
  • tests/client/tasks/test_task_result_caching.py is excluded by none and included by none
  • tests/client/test_client.py is excluded by none and included by none
  • tests/server/middleware/test_caching.py is excluded by none and included by none
  • tests/server/middleware/test_logging.py is excluded by none and included by none
  • tests/server/tasks/__init__.py is excluded by none and included by none
  • tests/server/tasks/conftest.py is excluded by none and included by none
  • tests/server/tasks/test_progress_dependency.py is excluded by none and included by none
  • tests/server/tasks/test_server_tasks_parameter.py is excluded by none and included by none
  • tests/server/tasks/test_sync_function_task_disabled.py is excluded by none and included by none
  • tests/server/tasks/test_task_capabilities.py is excluded by none and included by none
  • tests/server/tasks/test_task_dependencies.py is excluded by none and included by none
  • tests/server/tasks/test_task_metadata.py is excluded by none and included by none
  • tests/server/tasks/test_task_methods.py is excluded by none and included by none
  • tests/server/tasks/test_task_prompts.py is excluded by none and included by none
  • tests/server/tasks/test_task_protocol.py is excluded by none and included by none
  • tests/server/tasks/test_task_resources.py is excluded by none and included by none
  • tests/server/tasks/test_task_return_types.py is excluded by none and included by none
  • tests/server/tasks/test_task_security.py is excluded by none and included by none
  • tests/server/tasks/test_task_status_notifications.py is excluded by none and included by none
  • tests/server/tasks/test_task_tools.py is excluded by none and included by none
  • tests/server/tasks/test_task_ttl.py is excluded by none and included by none
  • tests/server/test_dependencies.py is excluded by none and included by none
  • tests/server/test_server_docket.py is excluded by none and included by none
  • tests/server/test_tool_annotations.py is excluded by none and included by none
  • tests/tools/test_tool.py is excluded by none and included by none
  • tests/utilities/test_types.py is excluded by none and included by none
  • uv.lock is excluded by !**/*.lock and included by none
📒 Files selected for processing (32)
  • docs/clients/tasks.mdx (1 hunks)
  • docs/docs.json (2 hunks)
  • docs/servers/context.mdx (5 hunks)
  • docs/servers/tasks.mdx (1 hunks)
  • examples/tasks/.envrc (1 hunks)
  • examples/tasks/README.md (1 hunks)
  • examples/tasks/client.py (1 hunks)
  • examples/tasks/docker-compose.yml (1 hunks)
  • examples/tasks/server.py (1 hunks)
  • src/fastmcp/cli/cli.py (2 hunks)
  • src/fastmcp/cli/tasks.py (1 hunks)
  • src/fastmcp/client/client.py (17 hunks)
  • src/fastmcp/client/tasks.py (1 hunks)
  • src/fastmcp/client/transports.py (1 hunks)
  • src/fastmcp/dependencies.py (1 hunks)
  • src/fastmcp/prompts/prompt.py (9 hunks)
  • src/fastmcp/resources/resource.py (6 hunks)
  • src/fastmcp/resources/template.py (11 hunks)
  • src/fastmcp/server/context.py (2 hunks)
  • src/fastmcp/server/dependencies.py (3 hunks)
  • src/fastmcp/server/http.py (2 hunks)
  • src/fastmcp/server/low_level.py (4 hunks)
  • src/fastmcp/server/server.py (30 hunks)
  • src/fastmcp/server/tasks/__init__.py (1 hunks)
  • src/fastmcp/server/tasks/converters.py (1 hunks)
  • src/fastmcp/server/tasks/handlers.py (1 hunks)
  • src/fastmcp/server/tasks/keys.py (1 hunks)
  • src/fastmcp/server/tasks/protocol.py (1 hunks)
  • src/fastmcp/server/tasks/subscriptions.py (1 hunks)
  • src/fastmcp/settings.py (3 hunks)
  • src/fastmcp/tools/tool.py (8 hunks)
  • src/fastmcp/utilities/tests.py (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • docs/servers/tasks.mdx
🚧 Files skipped from review as they are similar to previous changes (6)
  • src/fastmcp/server/context.py
  • src/fastmcp/server/http.py
  • examples/tasks/README.md
  • src/fastmcp/server/low_level.py
  • src/fastmcp/server/tasks/__init__.py
  • src/fastmcp/cli/cli.py
🧰 Additional context used
📓 Path-based instructions (2)
src/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.py: Python source code must use Python ≥3.10 with full type annotations
Never use bare except - be specific with exception types
Prioritize readable, understandable code - clarity over cleverness; avoid obfuscated or confusing patterns even if shorter
Follow existing patterns and maintain consistency in code organization and style

Files:

  • src/fastmcp/server/tasks/keys.py
  • src/fastmcp/server/tasks/handlers.py
  • src/fastmcp/dependencies.py
  • src/fastmcp/server/tasks/converters.py
  • src/fastmcp/prompts/prompt.py
  • src/fastmcp/resources/template.py
  • src/fastmcp/server/tasks/subscriptions.py
  • src/fastmcp/server/tasks/protocol.py
  • src/fastmcp/client/tasks.py
  • src/fastmcp/utilities/tests.py
  • src/fastmcp/server/server.py
  • src/fastmcp/tools/tool.py
  • src/fastmcp/client/transports.py
  • src/fastmcp/server/dependencies.py
  • src/fastmcp/resources/resource.py
  • src/fastmcp/settings.py
  • src/fastmcp/client/client.py
  • src/fastmcp/cli/tasks.py
docs/**/*.mdx

📄 CodeRabbit inference engine (docs/.cursor/rules/mintlify.mdc)

docs/**/*.mdx: Use clear, direct language appropriate for technical audiences
Write in second person ('you') for instructions and procedures in MDX documentation
Use active voice over passive voice in MDX technical documentation
Employ present tense for current states and future tense for outcomes in MDX documentation
Maintain consistent terminology throughout all MDX documentation
Keep sentences concise while providing necessary context in MDX documentation
Use parallel structure in lists, headings, and procedures in MDX documentation
Lead with the most important information using inverted pyramid structure in MDX documentation
Use progressive disclosure in MDX documentation: present basic concepts before advanced ones
Break complex procedures into numbered steps in MDX documentation
Include prerequisites and context before instructions in MDX documentation
Provide expected outcomes for each major step in MDX documentation
End sections with next steps or related information in MDX documentation
Use descriptive, keyword-rich headings for navigation and SEO in MDX documentation
Focus on user goals and outcomes rather than system features in MDX documentation
Anticipate common questions and address them proactively in MDX documentation
Include troubleshooting for likely failure points in MDX documentation
Provide multiple pathways (beginner vs advanced) but offer an opinionated path to avoid overwhelming users in MDX documentation
Always include complete, runnable code examples that users can copy and execute in MDX documentation
Show proper error handling and edge case management in MDX code examples
Use realistic data instead of placeholder values in MDX code examples
Include expected outputs and results for verification in MDX code examples
Test all code examples thoroughly before publishing in MDX documentation
Specify language and include filename when relevant in MDX code examples
Add explanatory comments for complex logic in MDX code examples
Document all API...

Files:

  • docs/clients/tasks.mdx
  • docs/servers/context.mdx
🧠 Learnings (4)
📚 Learning: 2025-12-04T00:17:41.238Z
Learnt from: CR
Repo: jlowin/fastmcp PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T00:17:41.238Z
Learning: Applies to tests/**/*.py : Pass FastMCP servers directly to clients for testing using in-memory transport; only use HTTP transport when explicitly testing network features

Applied to files:

  • src/fastmcp/server/server.py
  • src/fastmcp/client/transports.py
📚 Learning: 2025-12-01T15:53:29.709Z
Learnt from: jlowin
Repo: jlowin/fastmcp PR: 2505
File: src/fastmcp/server/dependencies.py:9-9
Timestamp: 2025-12-01T15:53:29.709Z
Learning: In the MCP SDK (mcp.server.auth.middleware.bearer_auth), the AuthenticatedUser class has an `access_token` attribute (not `token`) that returns an AccessToken object. To get the raw token string, you would use `user.access_token.token`.

Applied to files:

  • src/fastmcp/server/dependencies.py
📚 Learning: 2025-11-26T21:51:44.174Z
Learnt from: CR
Repo: jlowin/fastmcp PR: 0
File: .cursor/rules/core-mcp-objects.mdc:0-0
Timestamp: 2025-11-26T21:51:44.174Z
Learning: Review and update related Manager classes (ToolManager, ResourceManager, PromptManager) when modifying MCP object definitions

Applied to files:

  • src/fastmcp/client/client.py
📚 Learning: 2025-12-04T00:17:41.238Z
Learnt from: CR
Repo: jlowin/fastmcp PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T00:17:41.238Z
Learning: Applies to src/**/*.py : Never use bare `except` - be specific with exception types

Applied to files:

  • src/fastmcp/client/client.py
🧬 Code graph analysis (16)
examples/tasks/client.py (1)
src/fastmcp/client/tasks.py (6)
  • status (234-262)
  • on_status_change (208-232)
  • result (265-270)
  • result (399-456)
  • result (490-521)
  • result (560-614)
src/fastmcp/server/tasks/handlers.py (2)
src/fastmcp/server/dependencies.py (1)
  • get_context (255-261)
src/fastmcp/server/tasks/keys.py (1)
  • build_task_key (15-44)
src/fastmcp/dependencies.py (1)
src/fastmcp/server/dependencies.py (5)
  • CurrentContext (271-293)
  • CurrentDocket (320-345)
  • CurrentFastMCP (487-509)
  • CurrentWorker (370-394)
  • Progress (446-471)
src/fastmcp/server/tasks/converters.py (3)
src/fastmcp/server/low_level.py (2)
  • fastmcp (42-47)
  • fastmcp (129-134)
src/fastmcp/tools/tool.py (3)
  • ToolResult (73-119)
  • _convert_to_content (589-615)
  • to_mcp_result (106-119)
src/fastmcp/client/client.py (1)
  • CallToolResult (120-127)
src/fastmcp/prompts/prompt.py (3)
src/fastmcp/server/dependencies.py (2)
  • get_context (255-261)
  • without_injected_parameters (85-139)
src/fastmcp/utilities/types.py (1)
  • get_cached_typeadapter (45-117)
src/fastmcp/utilities/json_schema.py (1)
  • compress_schema (200-230)
src/fastmcp/resources/template.py (5)
src/fastmcp/server/context.py (1)
  • fastmcp (152-157)
src/fastmcp/server/low_level.py (2)
  • fastmcp (42-47)
  • fastmcp (129-134)
src/fastmcp/server/dependencies.py (2)
  • get_context (255-261)
  • without_injected_parameters (85-139)
src/fastmcp/utilities/json_schema.py (1)
  • compress_schema (200-230)
src/fastmcp/utilities/types.py (1)
  • get_cached_typeadapter (45-117)
src/fastmcp/server/tasks/subscriptions.py (1)
src/fastmcp/server/tasks/keys.py (1)
  • parse_task_key (47-77)
src/fastmcp/server/tasks/protocol.py (3)
src/fastmcp/server/server.py (1)
  • FastMCP (149-2879)
src/fastmcp/server/tasks/converters.py (3)
  • convert_prompt_result (94-145)
  • convert_resource_result (148-203)
  • convert_tool_result (21-91)
src/fastmcp/server/tasks/keys.py (1)
  • parse_task_key (47-77)
src/fastmcp/client/tasks.py (1)
src/fastmcp/client/client.py (6)
  • session (329-336)
  • CallToolResult (120-127)
  • Client (130-1547)
  • _handle_task_status_notification (595-615)
  • get_task_status (1427-1445)
  • get_task_result (1447-1471)
src/fastmcp/server/server.py (6)
src/fastmcp/settings.py (1)
  • settings (214-223)
src/fastmcp/tools/tool_manager.py (1)
  • get_tools (72-76)
src/fastmcp/prompts/prompt_manager.py (1)
  • get_prompts (53-57)
src/fastmcp/resources/resource_manager.py (2)
  • get_resources (56-58)
  • get_resource (244-287)
src/fastmcp/exceptions.py (1)
  • NotFoundError (34-35)
src/fastmcp/server/dependencies.py (1)
  • without_injected_parameters (85-139)
src/fastmcp/tools/tool.py (2)
src/fastmcp/server/context.py (1)
  • fastmcp (152-157)
src/fastmcp/server/dependencies.py (2)
  • get_context (255-261)
  • without_injected_parameters (85-139)
src/fastmcp/client/transports.py (2)
src/fastmcp/server/context.py (1)
  • fastmcp (152-157)
src/fastmcp/server/low_level.py (4)
  • fastmcp (42-47)
  • fastmcp (129-134)
  • run (151-185)
  • create_initialization_options (136-149)
src/fastmcp/server/dependencies.py (3)
src/fastmcp/server/server.py (3)
  • docket (388-393)
  • FastMCP (149-2879)
  • name (361-362)
src/fastmcp/server/context.py (2)
  • fastmcp (152-157)
  • Context (100-731)
src/fastmcp/utilities/types.py (1)
  • is_class_member_of_type (130-149)
src/fastmcp/resources/resource.py (1)
src/fastmcp/server/dependencies.py (2)
  • get_context (255-261)
  • without_injected_parameters (85-139)
src/fastmcp/settings.py (1)
src/fastmcp/server/server.py (2)
  • name (361-362)
  • docket (388-393)
examples/tasks/server.py (1)
src/fastmcp/server/dependencies.py (4)
  • Progress (446-471)
  • set_total (426-430)
  • increment (432-439)
  • set_message (441-443)
🪛 LanguageTool
docs/servers/context.mdx

[style] ~120-~120: This phrase is redundant. Consider using “outside”.
Context: ...context of a server request. Calling it outside of a request will raise a RuntimeError. ...

(OUTSIDE_OF)

🪛 Ruff (0.14.7)
src/fastmcp/server/tasks/keys.py

67-70: Avoid specifying long messages outside the exception class

(TRY003)

src/fastmcp/server/tasks/handlers.py

31-31: Unused function argument: task_meta

(ARG001)


140-140: Unused function argument: task_meta

(ARG001)


243-243: Unused function argument: server

(ARG001)


246-246: Unused function argument: task_meta

(ARG001)

src/fastmcp/server/tasks/converters.py

135-135: Prefer TypeError exception for invalid type

(TRY004)


135-135: Avoid specifying long messages outside the exception class

(TRY003)


149-149: Unused function argument: server

(ARG001)

src/fastmcp/server/tasks/subscriptions.py

72-72: Do not catch blind exception: Exception

(BLE001)

src/fastmcp/server/tasks/protocol.py

124-124: Do not catch blind exception: Exception

(BLE001)


218-218: Do not catch blind exception: Exception

(BLE001)


257-257: Unused function argument: server

(ARG001)


257-257: Unused function argument: params

(ARG001)

src/fastmcp/client/tasks.py

76-78: Avoid specifying long messages outside the exception class

(TRY003)


162-165: Avoid specifying long messages outside the exception class

(TRY003)


205-205: Do not catch blind exception: Exception

(BLE001)


319-321: Avoid specifying long messages outside the exception class

(TRY003)

src/fastmcp/server/server.py

412-415: Avoid specifying long messages outside the exception class

(TRY003)

src/fastmcp/server/dependencies.py

73-73: Do not catch blind exception: Exception

(BLE001)


193-195: Avoid specifying long messages outside the exception class

(TRY003)


304-307: Avoid specifying long messages outside the exception class

(TRY003)


312-315: Avoid specifying long messages outside the exception class

(TRY003)


355-358: Avoid specifying long messages outside the exception class

(TRY003)


362-365: Avoid specifying long messages outside the exception class

(TRY003)


429-429: Avoid specifying long messages outside the exception class

(TRY003)


435-435: Avoid specifying long messages outside the exception class

(TRY003)


465-468: Avoid specifying long messages outside the exception class

(TRY003)


480-480: Avoid specifying long messages outside the exception class

(TRY003)


483-483: Avoid specifying long messages outside the exception class

(TRY003)


524-524: Avoid specifying long messages outside the exception class

(TRY003)


527-527: Avoid specifying long messages outside the exception class

(TRY003)

src/fastmcp/client/client.py

472-472: Avoid specifying long messages outside the exception class

(TRY003)


1268-1268: Do not catch blind exception: Exception

(BLE001)


1269-1269: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


1514-1516: try-except-continue detected, consider logging the exception

(S112)


1514-1514: Do not catch blind exception: Exception

(BLE001)

examples/tasks/server.py

35-35: Do not perform function call Progress in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)


50-50: Avoid specifying long messages outside the exception class

(TRY003)

src/fastmcp/cli/tasks.py

93-93: Unpacked variable resolved_spec is never used

Prefix it with an underscore or any other dummy variable pattern

(RUF059)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Run tests with lowest-direct dependencies
  • GitHub Check: Run tests: Python 3.10 on ubuntu-latest
  • GitHub Check: Run tests: Python 3.10 on windows-latest
🔇 Additional comments (38)
src/fastmcp/utilities/tests.py (2)

15-15: Importing LogCaptureFixture directly from pytest is correct here

The explicit import cleanly supports the new type annotation on caplog_for_fastmcp and matches how pytest exposes this fixture type. No issues.


224-227: Typed context-manager signature for caplog_for_fastmcp looks good

Parameter and return types are now fully annotated and correctly use LogCaptureFixture and Generator[LogCaptureFixture, None, None], matching the contextmanager behavior without changing runtime semantics.

examples/tasks/.envrc (1)

1-16: LGTM! Clear and helpful environment configuration.

The file provides a well-documented example configuration with both distributed (Redis) and single-process (memory) backend options, along with clear direnv usage instructions.

src/fastmcp/server/tasks/converters.py (1)

21-91: LGTM! Correct serialization logic.

The function properly replicates tool.run() serialization logic, handles both ToolResult and raw return values, and correctly attaches related-task metadata for SEP-1686 compliance.

docs/clients/tasks.mdx (1)

1-155: Excellent documentation! Clear, comprehensive, and well-structured.

The documentation follows all MDX guidelines:

  • Complete YAML frontmatter
  • Clear, descriptive headings with progressive disclosure
  • Runnable code examples with realistic data
  • Covers success cases, status checking, and graceful degradation
  • Appropriate use of active voice and present tense
docs/docs.json (1)

119-120: LGTM! Clean navigation additions.

The new task documentation pages are appropriately placed in the Advanced Features sections for both Servers and Clients.

Also applies to: 175-175

src/fastmcp/dependencies.py (1)

1-25: LGTM! Clean public API surface.

This module provides a centralized, user-friendly import location for all dependency injection symbols. The approach follows best practices for public API design.

src/fastmcp/client/transports.py (1)

857-875: LGTM! Proper experimental capabilities negotiation.

The code correctly builds and passes SEP-1686 task capabilities during server initialization when tasks are enabled. The runtime import avoids circular dependency issues.

src/fastmcp/cli/tasks.py (2)

22-65: LGTM! Excellent user-facing error messages.

The validation function provides clear, actionable guidance for users when Docket is misconfigured, including installation instructions and configuration examples.


100-118: LGTM! Clean worker implementation.

The worker properly manages server lifespan, provides useful status output, and handles shutdown gracefully via KeyboardInterrupt.

src/fastmcp/server/tasks/keys.py (2)

15-44: LGTM! Correct encoding logic.

The function properly constructs task keys with URL encoding to handle special characters in component identifiers.


47-77: LGTM! Robust parsing with proper validation.

The function correctly validates the task key format and provides a clear error message with the expected format. The error message length is appropriate for this context.

examples/tasks/client.py (1)

1-160: Well-structured example demonstrating the new task API.

The example clearly illustrates both background task execution with progress callbacks and immediate blocking execution. The code is readable and serves its purpose as documentation.

A few minor observations for consideration:

  • Lines 127 and 155 use assert for type checking, which is stripped under python -O. For an example this is fine, but in production code you'd want explicit checks.
  • Registering the callback after call_tool (line 109) could theoretically miss very early notifications, though practically this is unlikely.
src/fastmcp/resources/resource.py (2)

50-55: Task field addition follows established pattern.

The new task field is consistent with the same field added to Tool and Prompt classes for SEP-1686 support.


199-222: Dependency injection refactor is well-implemented.

The change to wrap fn with without_injected_parameters and simplify read() aligns with the broader DI refactor. The wrapped function handles dependency resolution internally, making the read() method cleaner.

src/fastmcp/server/tasks/subscriptions.py (1)

72-73: Broad exception handling is appropriate here.

The except Exception is intentional for subscription robustness - failures are logged with exc_info=True and don't break the task execution. This defensive pattern is correct for optional notifications per SEP-1686.

src/fastmcp/prompts/prompt.py (2)

69-74: Task field addition is consistent with the codebase pattern.


211-215: Dependency injection refactor is well-implemented.

The wrapped function approach centralizes dependency resolution and simplifies the render flow.

src/fastmcp/settings.py (2)

63-148: Well-documented DocketSettings configuration.

The DocketSettings class provides clear, detailed descriptions for each field. The defaults (memory:// for URL, 10 for concurrency, 5-minute redelivery timeout) are sensible for development.


241-273: Task and Docket settings integration looks correct.

The new enable_docket, enable_tasks, and nested docket settings provide the configuration surface needed for SEP-1686. The field descriptions clearly explain the relationship between Docket and tasks.

src/fastmcp/resources/template.py (1)

94-197: task propagation and DI wrapper usage look solid.

The new task flag is threaded correctly from ResourceTemplate.from_function through FunctionResourceTemplate down to Resource.from_function, so both concrete and template‑derived resources consistently advertise task support.

Using without_injected_parameters + get_cached_typeadapter for:

  • user‑facing parameter discovery (so Context/dependency params don’t affect URI/template validation), and
  • schema generation (with compress_schema(prune_titles=True)),

and then wrapping with validate_call(wrapper_fn) ensures:

  • injected dependencies are resolved exactly once at call time,
  • the public schema excludes injected params, and
  • runtime coercion matches the schema.

This resolves the earlier double‑resolution concern without introducing obvious new edge cases.

docs/servers/context.mdx (2)

31-122: Context access patterns are clearly documented and aligned with the new DI API.

The updated sections on CurrentContext(), legacy type‑hint injection, and get_context() accurately reflect the server‑side behavior in fastmcp.server.dependencies and fastmcp.server.server. Code samples are runnable in isolation (given surrounding server setup), use consistent terminology, and make it clear when each pattern is appropriate.

No changes needed from a docs/code‑alignment perspective.


528-609: Custom dependency examples match the Docket‑backed DI model.

The Depends() examples (simple functions, async context managers, nested dependencies) are consistent with the Docket integration implemented in fastmcp.server.dependencies.resolve_dependencies and without_injected_parameters. Emphasizing that dependency parameters are excluded from the MCP schema mirrors the actual behavior.

If you later add more advanced Dependency subclasses, consider linking to those patterns here; otherwise this section is in good shape.

src/fastmcp/tools/tool.py (2)

139-203: SEP‑1686 task flag and MCP execution mapping look correct.

Adding task: bool to Tool and mapping it to ToolExecution(task="optional") in to_mcp_tool() cleanly advertises background‑task support without forcing task‑only execution. The override logic (execution=overrides.get("execution", execution)) still allows callers to take full control of the execution block when needed.

One minor follow‑up you might consider later is exposing a way to request "always" vs "optional" at the API level if you end up needing task‑only tools, but the current default is sensible.


317-393: Unified wrapper‑based validation in FunctionTool.run and ParsedFunction is well‑structured.

Using without_injected_parameters(self.fn) in run() and the same wrapper in ParsedFunction.from_function for input schema generation ensures:

  • injected parameters (Context, DI dependencies) never appear in the public schema,
  • dependencies and Context are resolved exactly once per invocation via resolve_dependencies, and
  • runtime validation uses the same public signature the schema was built from.

Because FunctionTool.run only calls validate_python() on the TypeAdapter (no .json_schema()), exclude‑args with non‑serializable annotations won’t re‑trigger the schema‑generation issues you guarded against in ParsedFunction.

This is a solid refactor; I don’t see correctness issues in the new flow.

src/fastmcp/server/server.py (3)

191-205: Docket lifecycle integration and server exposure are coherent.

The _support_tasks_by_default flag, _docket attribute, and docket property, combined with _docket_lifespan chained into _lifespan_manager, give a clear lifecycle:

  • tasks are only enabled when fastmcp.settings.enable_docket is true (misconfig with enable_tasks=True fails fast with a clear error),
  • Docket and Worker are created and torn down with the server lifespan,
  • ContextVars (_current_server, _current_docket, _current_worker) are set/reset correctly to support CurrentFastMCP, CurrentDocket, and CurrentWorker.

This matches the dependency helpers in fastmcp.server.dependencies and keeps Docket concerns nicely encapsulated.

Also applies to: 387-505


1680-1868: Decorator‑level task handling is consistent and defensive.

For tools, resources, and prompts:

  • the decorators accept task: bool | None,
  • resolve it against _support_tasks_by_default, and
  • disable task support (with a warning when task=True was explicit) for synchronous functions using asyncio.iscoroutinefunction.

This ensures you don’t accidentally schedule sync callables into Docket, while still letting servers opt‑in to backgrounding globally or per‑component. The task flag is then correctly threaded into Tool.from_function, ResourceTemplate.from_function / Resource.from_function, and Prompt.from_function.

The behavior is clear and matches the documentation you added.


2283-2331: Stdio experimental capabilities correctly advertise SEP‑1686 tasks.

In run_stdio_async, building an experimental_capabilities dict with:

"tasks": {
    "list": {},
    "cancel": {},
    "requests": {
        "tools": {"call": {}},
        "prompts": {"get": {}},
        "resources": {"read": {}},
    },
}

and passing it through create_initialization_options(...) ensures clients see task support for the precise request types you’re handling. This aligns with the task protocol handlers registered earlier and the client‑side Task classes.

You might later want to mirror this in the HTTP transport initialization once the SDK supports experimental capabilities there, but for stdio this looks correct.

src/fastmcp/server/dependencies.py (1)

84-140: Wrapper + dependency resolution pipeline is well thought‑out.

The combination of:

  • _find_kwarg_by_type (using is_class_member_of_type) to detect Context‑typed params including unions/Annotated,
  • without_injected_parameters to remove Context and Docket dependency parameters from the public signature while delegating to resolve_dependencies, and
  • resolve_dependencies to (1) filter out dependency names from user args, (2) resolve Docket Depends() parameters via _resolve_fastmcp_dependencies, and (3) inject Context by type,

gives you:

  • clean, schema‑friendly signatures for TypeAdapter/JSON schema,
  • a single place where DI and Context injection actually happen, and
  • protection against user input overriding injected parameters.

This matches how the server and components now use wrappers for both schema generation and runtime execution, and I don’t see correctness issues in the control flow.

Also applies to: 142-252

src/fastmcp/client/tasks.py (2)

33-87: Task initialization and notification routing integrate cleanly with the MCP client.

_task_capable_initialize() is a pragmatic workaround for the current ClientSession API: it mirrors the SDK’s initialize flow while adding experimental={"tasks": {}} and preserving sampling/elicitation/roots capabilities, then updates _server_capabilities and sends InitializedNotification. That’s exactly what your Task framework needs.

TaskNotificationHandler’s use of a weakref’d Client to route TaskStatusNotification into Client._handle_task_status_notification keeps the notification logic centralized and avoids reference cycles.

This all lines up well with the server‑side task protocol handlers.

Also applies to: 89-107
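The weakref-based routing keeps the registry from pinning Task objects alive. A minimal sketch of the idea, with hypothetical `Task` and `NotificationRouter` names standing in for the Client-side machinery:

```python
import weakref

class Task:
    def __init__(self, task_id: str):
        self.task_id = task_id
        self.statuses: list[str] = []

    def on_status(self, status: str) -> None:
        self.statuses.append(status)

class NotificationRouter:
    """Route status notifications to weakly referenced tasks, so an
    abandoned Task can be garbage-collected despite being registered."""

    def __init__(self) -> None:
        self._registry: dict[str, weakref.ref] = {}

    def register(self, task: Task) -> None:
        self._registry[task.task_id] = weakref.ref(task)

    def handle(self, task_id: str, status: str) -> bool:
        ref = self._registry.get(task_id)
        task = ref() if ref is not None else None
        if task is None:
            return False  # never registered, or already collected
        task.on_status(status)
        return True

router = NotificationRouter()
t = Task("abc")
router.register(t)
print(router.handle("abc", "working"))   # True
print(router.handle("missing", "done"))  # False
```

Dereferencing the weakref and null-checking it on every notification is what avoids both reference cycles and use-after-collection bugs.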


357-614: ToolTask / PromptTask / ResourceTask result parsing covers the expected shapes.

The concrete Task subclasses correctly:

  • short‑circuit to the immediate result when _is_immediate is true,
  • gate background usage on _check_client_connected(), and
  • handle multiple result encodings:
    • ToolTask: dict → mcp.types.CallToolResult → parsed CallToolResult, direct CallToolResult, or legacy ToolResult‑like objects.
    • PromptTask: raw payload validated as GetPromptResult.
    • ResourceTask: ReadResourceResult, dict with "contents", or a bare list, normalizing to a list of TextResourceContents / BlobResourceContents.

The caching via _cached_result avoids duplicate parsing on repeated result() calls. This looks solid from a client‑API perspective.
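The parse-once caching noted above is a small but important property: repeated `result()` calls must not re-run the (potentially shape-dependent) parsing. A toy stand-in showing the invariant — `ResultCache` and its field names are hypothetical, not the actual Task API:

```python
from typing import Any

class ResultCache:
    """Sketch of the _cached_result pattern: parse the raw payload once,
    then return the cached value on every subsequent call."""

    def __init__(self) -> None:
        self._cached: Any = None
        self._parse_count = 0

    def result(self, raw: dict) -> Any:
        if self._cached is None:
            self._parse_count += 1  # the expensive parsing happens here, once
            self._cached = raw.get("contents", raw)
        return self._cached

cache = ResultCache()
payload = {"contents": [{"text": "hello"}]}
first = cache.result(payload)
second = cache.result(payload)
print(first is second, cache._parse_count)  # True 1
```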

src/fastmcp/client/client.py (7)

119-128: LGTM - Clean dataclass for parsed tool results.

The CallToolResult dataclass provides a well-structured representation for parsed tool call results, separating concerns between raw content and parsed data.


411-472: LGTM - Well-structured idempotent initialization with SEP-1686 support.

The initialize method correctly:

  • Returns cached result for idempotent behavior
  • Branches on enable_tasks setting for task-capable initialization
  • Handles timeout conversion properly

595-615: LGTM - Task notification routing with proper weakref handling.

The implementation correctly:

  • Extracts task ID and looks up the weakref
  • Safely dereferences and null-checks the weakref
  • Converts notification params to status for the Task callback

745-763: LGTM - Clean SEP-1686 task metadata integration.

The conditional logic correctly branches between the standard SDK path and the manual request construction needed for task metadata support.


974-1079: LGTM - Consistent task-based prompt retrieval pattern.

The implementation mirrors the resource pattern with proper overloads, task metadata handling, and graceful degradation. Note: the task_id docstring inconsistency was flagged in the previous comment.


1365-1425: LGTM - Well-structured task-based tool execution with graceful degradation.

The implementation correctly handles:

  • Server accepting background execution (extracting server-generated taskId)
  • Server declining (graceful degradation with parsed immediate result)
  • Task registry for notification routing

1427-1471: LGTM - Clean task management API wrappers.

The get_task_status and get_task_result methods are straightforward protocol wrappers that correctly construct and send the appropriate MCP requests.

Comment on lines +822 to +873
async def _read_resource_as_task(
self,
uri: AnyUrl | str,
task_id: str | None = None,
ttl: int = 60000,
) -> ResourceTask:
"""Read a resource for background execution (SEP-1686).

Returns a ResourceTask object that handles both background and immediate execution.

Args:
uri: Resource URI to read
task_id: Optional client-provided task ID (ignored, for backward compatibility)
ttl: Time to keep results available in milliseconds (default 60s)

Returns:
ResourceTask: Future-like object for accessing task status and results
"""
# Per SEP-1686 final spec: client sends only ttl, server generates taskId
# Read resource with task metadata (no taskId sent)
result = await self.read_resource_mcp(
uri=uri,
meta={
"modelcontextprotocol.io/task": {
"ttl": ttl,
}
},
)

# Check if server accepted background execution
if result.meta and "modelcontextprotocol.io/task" in result.meta:
# Background execution accepted - extract server-generated taskId
server_task_id = result.meta["modelcontextprotocol.io/task"]["taskId"]
# Track this task ID for list_tasks()
self._submitted_task_ids.add(server_task_id)

# Create task object
task_obj = ResourceTask(
self, server_task_id, uri=str(uri), immediate_result=None
)

# Register for notification routing
self._task_registry[server_task_id] = weakref.ref(task_obj) # type: ignore[assignment]

return task_obj
else:
# Server declined background execution (graceful degradation)
# Use a synthetic task ID for the immediate result
synthetic_task_id = task_id or str(uuid.uuid4())
return ResourceTask(
self, synthetic_task_id, uri=str(uri), immediate_result=result.contents
)

⚠️ Potential issue | 🟡 Minor

Docstring incorrectly states task_id is ignored.

Line 834 documents task_id as "Optional client-provided task ID (ignored, for backward compatibility)", but line 870 actually uses it:

synthetic_task_id = task_id or str(uuid.uuid4())

The same inconsistency exists in _get_prompt_as_task (line 1039) and _call_tool_as_task (line 1381).

Update the docstrings to accurately describe the behavior:

-            task_id: Optional client-provided task ID (ignored, for backward compatibility)
+            task_id: Optional client-provided task ID (used as fallback when server declines background execution)
🤖 Prompt for AI Agents
In src/fastmcp/client/client.py around lines 822 to 873, the
_read_resource_as_task docstring incorrectly states that the client-provided
task_id is "ignored" while the code uses it to form a synthetic task ID when the
server declines background execution; change the docstring to state that task_id
is optional and will be used as a synthetic task ID for immediate results if the
server does not accept background execution, and make the same docstring
correction in the corresponding functions _get_prompt_as_task (around line 1039)
and _call_tool_as_task (around line 1381) so the comments accurately reflect
that task_id is honored for synthetic/immediate-result tasks.

Comment on lines +1246 to +1248
if result.isError and raise_on_error:
msg = cast(mcp.types.TextContent, result.content[0]).text
raise ToolError(msg)

⚠️ Potential issue | 🟠 Major

Unsafe assumption about error content structure.

The code assumes result.content[0] exists and is a TextContent type. If the content list is empty or contains a different content type, this will raise an IndexError or AttributeError.

Add bounds checking:

         if result.isError and raise_on_error:
-            msg = cast(mcp.types.TextContent, result.content[0]).text
-            raise ToolError(msg)
+            if result.content and hasattr(result.content[0], 'text'):
+                msg = cast(mcp.types.TextContent, result.content[0]).text
+            else:
+                msg = "Tool returned an error"
+            raise ToolError(msg)

Comment on lines 267 to +275
def _convert_string_arguments(self, kwargs: dict[str, Any]) -> dict[str, Any]:
"""Convert string arguments to expected types based on function signature."""
from fastmcp.server.context import Context
from fastmcp.server.dependencies import without_injected_parameters

sig = inspect.signature(self.fn)
wrapper_fn = without_injected_parameters(self.fn)
sig = inspect.signature(wrapper_fn)

⚠️ Potential issue | 🟡 Minor

Redundant wrapping of already-wrapped function.

self.fn is already wrapped by without_injected_parameters at line 265. Wrapping it again here is unnecessary and adds overhead.

 def _convert_string_arguments(self, kwargs: dict[str, Any]) -> dict[str, Any]:
     """Convert string arguments to expected types based on function signature."""
-    from fastmcp.server.dependencies import without_injected_parameters
-
-    wrapper_fn = without_injected_parameters(self.fn)
-    sig = inspect.signature(wrapper_fn)
+    # self.fn is already wrapped by without_injected_parameters
+    sig = inspect.signature(self.fn)
     converted_kwargs = {}
🤖 Prompt for AI Agents
In src/fastmcp/prompts/prompt.py around lines 270 to 275, the code re-applies
without_injected_parameters to self.fn even though self.fn was already wrapped
at line 265; remove the redundant call and use the already-wrapped function
(i.e., use inspect.signature(self.fn) or the stored wrapper attribute created
earlier) so you don't double-wrap and add overhead; adjust any variable names
accordingly and remove the unnecessary import/call.

Comment on lines +296 to +345
class _CurrentDocket(Dependency):
"""Internal dependency class for CurrentDocket."""

async def __aenter__(self) -> Docket:
import fastmcp

# Check if flag is enabled
if not fastmcp.settings.enable_docket:
raise RuntimeError(
"Docket support is not enabled. "
"Set FASTMCP_ENABLE_DOCKET=true to enable Docket support."
)

# Get Docket from ContextVar (set by _docket_lifespan)
docket = _current_docket.get()
if docket is None:
raise RuntimeError(
"No Docket instance found. This should not happen when "
"FASTMCP_ENABLE_DOCKET is enabled."
)

return docket


def CurrentDocket() -> Docket:
"""Get the current Docket instance managed by FastMCP.

This dependency provides access to the Docket instance that FastMCP
automatically creates when Docket support is enabled.

Requires:
- FASTMCP_ENABLE_DOCKET=true

Returns:
A dependency that resolves to the active Docket instance

Raises:
RuntimeError: If flag not enabled (during resolution)

Example:
```python
from fastmcp.dependencies import CurrentDocket

@mcp.tool()
async def schedule_task(docket: Docket = CurrentDocket()) -> str:
await docket.add(some_function)(arg1, arg2)
return "Scheduled"
```
"""
return cast("Docket", _CurrentDocket())

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

wc -l src/fastmcp/server/dependencies.py

Repository: jlowin/fastmcp

Length of output: 96


🏁 Script executed:

cat -n src/fastmcp/server/dependencies.py | sed -n '296,395p'

Repository: jlowin/fastmcp

Length of output: 3624


🏁 Script executed:

cat -n src/fastmcp/server/dependencies.py | sed -n '474,529p'

Repository: jlowin/fastmcp

Length of output: 1986


🏁 Script executed:

head -50 src/fastmcp/server/dependencies.py

Repository: jlowin/fastmcp

Length of output: 1777


🏁 Script executed:

rg "def CurrentWorker|def CurrentDocket" src/fastmcp/server/dependencies.py -A 2

Repository: jlowin/fastmcp

Length of output: 249


Add return type annotations to CurrentFastMCP() and get_server() to match project guidelines.

Both functions lack return type annotations:

  • CurrentFastMCP() at line 487
  • get_server() at line 512
  • _CurrentFastMCP.__aenter__() at line 477 (should return -> FastMCP)

These functions are part of the public DI surface and must follow the project guideline requiring full type annotations for src/**/*.py. The pattern is already established by CurrentDocket() -> Docket and CurrentWorker() -> Worker in the same file.

Add -> FastMCP return type annotations. Circular imports are not a concern; FastMCP is already imported in the TYPE_CHECKING block (line 35), and the from __future__ import annotations (line 1) ensures annotations are stringified at runtime.

🧰 Tools
🪛 Ruff (0.14.7)

304-307: Avoid specifying long messages outside the exception class

(TRY003)


312-315: Avoid specifying long messages outside the exception class

(TRY003)

🤖 Prompt for AI Agents
In src/fastmcp/server/dependencies.py around lines 477, 487 and 512, add missing
return type annotations for the FastMCP DI surface: change
_CurrentFastMCP.__aenter__ to return -> FastMCP, change CurrentFastMCP() to
return -> FastMCP, and change get_server() to return -> FastMCP; keep
annotations as forward/string types (the module already has from __future__
import annotations and FastMCP is available under TYPE_CHECKING), no other logic
changes required.

Comment on lines +97 to +100
ctx = get_context()
with suppress(Exception):
# Don't let notification failures break task creation
await ctx.session.send_notification(notification) # type: ignore[arg-type]

⚠️ Potential issue | 🟡 Minor

Redundant get_context() call.

ctx was already retrieved at line 56. This second call at line 97 is unnecessary.

-    ctx = get_context()
-    with suppress(Exception):
+    with suppress(Exception):
         # Don't let notification failures break task creation
         await ctx.session.send_notification(notification)  # type: ignore[arg-type]
🤖 Prompt for AI Agents
In src/fastmcp/server/tasks/handlers.py around lines 97 to 100, remove the
redundant second call to get_context() and reuse the ctx variable already
obtained at line 56; replace the line "ctx = get_context()" with nothing (i.e.,
delete it) so the subsequent with suppress(Exception): block calls await
ctx.session.send_notification(notification) using the existing ctx, and ensure
no local variable shadowing or redefinition occurs in this scope.

Comment thread src/fastmcp/settings.py Outdated
Comment on lines +36 to +60
class ExtendedEnvSettingsSource(EnvSettingsSource):
"""
A special EnvSettingsSource that allows for multiple env var prefixes to be used.

Raises a deprecation warning if the old `FASTMCP_SERVER_` prefix is used.
"""

def get_field_value(
self, field: FieldInfo, field_name: str
) -> tuple[Any, str, bool]:
if prefixes := self.config.get("env_prefixes"):
for prefix in prefixes:
self.env_prefix = prefix
env_val, field_key, value_is_complex = super().get_field_value(
field, field_name
)
if env_val is not None:
if prefix == "FASTMCP_SERVER_":
# Deprecated in 2.8.0
logger.warning(
"Using `FASTMCP_SERVER_` environment variables is deprecated. Use `FASTMCP_` instead.",
)
return env_val, field_key, value_is_complex

return super().get_field_value(field, field_name)
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if ExtendedEnvSettingsSource is used anywhere
echo "=== ExtendedEnvSettingsSource usage ===" 
rg -n "ExtendedEnvSettingsSource" --type=py

# Check if settings_customise_sources is defined in Settings
echo -e "\n=== settings_customise_sources usage ===" 
rg -n "settings_customise_sources" --type=py

# Also check for model_config to see how Settings is configured
echo -e "\n=== model_config in settings.py ===" 
rg -n "model_config" src/fastmcp/settings.py -A 5

Repository: jlowin/fastmcp

Length of output: 810


Register ExtendedEnvSettingsSource with the Settings model via settings_customise_sources.

ExtendedEnvSettingsSource is defined but never used anywhere in the codebase and is not registered with the Settings model. Without registration via settings_customise_sources in the model configuration, the multi-prefix environment variable logic and deprecation warning for FASTMCP_SERVER_ will never execute. This class is currently dead code.

🤖 Prompt for AI Agents
In src/fastmcp/settings.py around lines 36 to 60, ExtendedEnvSettingsSource is
implemented but never registered so its multi-prefix env logic and deprecation
warning never run; fix this by registering it in the Settings model via
settings_customise_sources: add a settings_customise_sources classmethod on the
Settings subclass (or update existing one) to insert ExtendedEnvSettingsSource
into the returned tuple of settings sources (preserving default order),
instantiate it with the model config so it sees config["env_prefixes"], and
ensure it runs before the standard EnvSettingsSource so the multi-prefix
behavior and FASTMCP_SERVER_ deprecation log are applied.

The error messages referenced FASTMCP_EXPERIMENTAL_ENABLE_DOCKET but the
actual setting is FASTMCP_ENABLE_DOCKET.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
src/fastmcp/server/tasks/handlers.py (2)

55-58: Remove redundant second get_context() call in handle_tool_as_task.

ctx is already fetched at Line 56 and used for session_id. The second ctx = get_context() before sending the notifications/tasks/created notification is redundant and was previously called out in review.

You can reuse the existing ctx:

-    ctx = get_context()
-    with suppress(Exception):
+    with suppress(Exception):
         # Don't let notification failures break task creation
         await ctx.session.send_notification(notification)  # type: ignore[arg-type]

This avoids an unnecessary context lookup and matches the pattern used in the other handlers.

Also applies to: 97-101


27-46: Address unused task_meta / server parameters (and docstring mismatch).

Ruff is correctly flagging task_meta (in all three handlers) and server (in handle_resource_as_task) as unused. Docstrings currently advertise task_meta (“contains ttl”), but the implementation derives TTL solely from docket.execution_ttl, so the metadata isn’t consumed yet.

Until you actually honor per-task TTL or other metadata, it would be good to either wire task_meta through or explicitly mark these parameters as intentionally unused so ARG001 stops firing and the API surface is less confusing.

One minimally invasive way (keeps parameter names and call sites intact) is to reference them near the top of each function:

 async def handle_tool_as_task(
     server: FastMCP,
     tool_name: str,
     arguments: dict[str, Any],
     task_meta: dict[str, Any],
 ) -> mcp.types.CallToolResult:
@@
-    Returns:
-        CallToolResult: Task stub with task metadata in _meta
-    """
+    Returns:
+        CallToolResult: Task stub with task metadata in _meta
+    """
+    # TODO: Use task_meta (e.g., TTL overrides) once SEP-1686 metadata is wired through.
+    _ = task_meta
@@
 async def handle_prompt_as_task(
     server: FastMCP,
     prompt_name: str,
     arguments: dict[str, Any] | None,
     task_meta: dict[str, Any],
 ) -> mcp.types.GetPromptResult:
@@
-    Returns:
-        GetPromptResult: Task stub with task metadata in _meta
-    """
+    Returns:
+        GetPromptResult: Task stub with task metadata in _meta
+    """
+    # TODO: Use task_meta (e.g., TTL overrides) once SEP-1686 metadata is wired through.
+    _ = task_meta
@@
 async def handle_resource_as_task(
     server: FastMCP,
     uri: str,
-    resource,  # Resource or ResourceTemplate
+    resource,  # Resource or ResourceTemplate
     task_meta: dict[str, Any],
 ) -> mcp.types.ServerResult:
@@
-    Returns:
-        ServerResult with ReadResourceResult stub
-    """
+    Returns:
+        ServerResult with ReadResourceResult stub
+    """
+    # Currently unused; kept for API symmetry / future use.
+    _ = server
+    _ = task_meta

If you expect to support request-specified TTL soon, alternatively you can feed it into the ttl_seconds calculation instead of shelving it.

Based on static analysis hints and past review feedback.

Also applies to: 136-154, 242-260

🧹 Nitpick comments (4)
src/fastmcp/server/tasks/handlers.py (4)

23-24: Annotate TASK_MAPPING_TTL_BUFFER_SECONDS for stricter typing.

To align with the “full type annotations” guideline and help type-checkers, consider explicitly typing the module constant:

-# Redis mapping TTL buffer: Add 15 minutes to Docket's execution_ttl
-TASK_MAPPING_TTL_BUFFER_SECONDS = 15 * 60
+# Redis mapping TTL buffer: Add 15 minutes to Docket's execution_ttl
+TASK_MAPPING_TTL_BUFFER_SECONDS: int = 15 * 60

As per coding guidelines, full type annotations are preferred.


242-247: Add a type annotation for resource to satisfy “full type annotations”.

resource is currently untyped, with only a comment # Resource or ResourceTemplate. Given the guideline that src/**/*.py should have full type annotations, this should be an explicit type rather than implicit.

If you don’t want to pull in the precise types yet, you can at least make it explicit and refine later:

 async def handle_resource_as_task(
     server: FastMCP,
     uri: str,
-    resource,  # Resource or ResourceTemplate
+    resource: Any,  # TODO: narrow to Resource or ResourceTemplate
     task_meta: dict[str, Any],
 ) -> mcp.types.ServerResult:

Once convenient, consider tightening this to the concrete Resource/ResourceTemplate types you use in this module.

As per coding guidelines, full type annotations are expected for function parameters.

Also applies to: 253-256


273-280: Align Docket error message in handle_resource_as_task with other handlers.

For tools/prompts, the McpError message includes a concrete remediation hint (Set FASTMCP_ENABLE_DOCKET=true). The resource handler only says “Background tasks require Docket”, which is less helpful and inconsistent.

Consider making this message match the others:

     docket = _current_docket.get()
     if docket is None:
         raise McpError(
             ErrorData(
                 code=INTERNAL_ERROR,
-                message="Background tasks require Docket",
+                message="Background tasks require Docket. Set FASTMCP_ENABLE_DOCKET=true",
             )
         )

This gives users a clear, consistent configuration hint when they hit this path.


74-83: Consider factoring out common Redis/notification setup to reduce duplication.

The three handlers share nearly identical sequences:

  • Build redis_key / created_at_key and store them with ttl_seconds in Redis.
  • Construct and send a notifications/tasks/created notification guarded by suppress(Exception).

The only real variations are task_type and the use of ctx (which is already available in each function). To keep the SEP-1686 machinery easier to evolve and less error-prone, you could centralize this into small helpers, e.g.:

  • async def _store_task_mapping(session_id, server_task_id, task_key, docket, created_at) -> None
  • async def _send_task_created_notification(session, task_id) -> None

and call them from each handler.

The current code is clear enough, so this is mainly a maintainability/readability win rather than a correctness issue.

Also applies to: 182-191, 285-294, 84-101, 192-205, 295-308

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5ccfba0 and c981f49.

📒 Files selected for processing (1)
  • src/fastmcp/server/tasks/handlers.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
src/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.py: Python source code must use Python ≥3.10 with full type annotations
Never use bare except - be specific with exception types
Prioritize readable, understandable code - clarity over cleverness; avoid obfuscated or confusing patterns even if shorter
Follow existing patterns and maintain consistency in code organization and style

Files:

  • src/fastmcp/server/tasks/handlers.py
🧬 Code graph analysis (1)
src/fastmcp/server/tasks/handlers.py (3)
src/fastmcp/server/dependencies.py (2)
  • get_context (255-261)
  • message (423-424)
src/fastmcp/server/tasks/keys.py (1)
  • build_task_key (15-44)
src/fastmcp/server/tasks/subscriptions.py (1)
  • subscribe_to_task_updates (27-73)
🪛 Ruff (0.14.7)
src/fastmcp/server/tasks/handlers.py

31-31: Unused function argument: task_meta

(ARG001)


140-140: Unused function argument: task_meta

(ARG001)


243-243: Unused function argument: server

(ARG001)


246-246: Unused function argument: task_meta

(ARG001)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Run tests: Python 3.10 on windows-latest
  • GitHub Check: Run tests with lowest-direct dependencies
  • GitHub Check: Run tests: Python 3.10 on ubuntu-latest

- Remove ExtendedEnvSettingsSource (FASTMCP_SERVER_ prefix support)
- Remove dependencies parameter from FastMCP.__init__
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (5)
src/fastmcp/settings.py (1)

6-7: DocketSettings structure is solid; consider validating concurrency/timeouts.

The shape and typing of DocketSettings look good, but concurrency, redelivery_timeout, and reconnection_delay currently accept zero/negative values. If Worker expects strictly positive values, consider adding basic validation (e.g., a field_validator or Field(ge=1) / Field(gt=timedelta(0))) so misconfiguration fails fast instead of producing subtle runtime behaviour.

Also applies to: 34-120
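The fail-fast validation suggested above can be sketched without pydantic using a dataclass with `__post_init__`; in the real settings model the same effect comes from `Field(ge=1)` / `Field(gt=timedelta(0))` or a field_validator. Class and field names below are stand-ins:

```python
from dataclasses import dataclass, field
from datetime import timedelta

@dataclass
class DocketSettings:
    """Stdlib stand-in showing the validation the review proposes."""
    concurrency: int = 4
    redelivery_timeout: timedelta = field(default_factory=lambda: timedelta(seconds=30))

    def __post_init__(self) -> None:
        if self.concurrency < 1:
            raise ValueError("concurrency must be >= 1")
        if self.redelivery_timeout <= timedelta(0):
            raise ValueError("redelivery_timeout must be positive")

DocketSettings()  # defaults pass
try:
    DocketSettings(concurrency=0)
except ValueError as exc:
    print(exc)  # concurrency must be >= 1
```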

src/fastmcp/server/server.py (4)

5-11: Clarify whether docket is an optional dependency and tighten _docket typing.

Importing Docket/Worker at module import time makes docket a hard requirement for anyone importing fastmcp.server.server, even when enable_docket/enable_tasks are left at False. If Docket is meant to be an optional extra, consider moving the import inside _docket_lifespan (or similar) so non‑task users don’t see an ImportError. Also, declaring self._docket: Docket | None = None in __init__ would better reflect the public docket property for type checkers and IDEs.

Also applies to: 33-34, 73-77, 171-172, 189-196, 369-376


377-487: _docket_lifespan lifecycle wiring looks correct; Ruff TRY003 suggests a custom error type.

The configuration guard (enable_tasks requires enable_docket), ContextVar wiring for server/docket/worker, and task‑group lifetime management all look coherent and should shut the worker down cleanly when the server lifespan ends. Ruff TRY003 will complain about the long RuntimeError message in the config check; if you want to satisfy it, consider introducing a small configuration exception (e.g. TaskConfigurationError(RuntimeError)) and moving the long explanation into that class, or shortening the inline message.

Also applies to: 494-500


1322-1367: Align tool task routing with Docket registration and mounted servers.

The SEP‑1686 branch in _call_tool_mcp currently inspects only self._tool_manager._tools.get(key) and checks tool.task. That has two side effects:

  • Tools from mounted/imported servers are never considered for background execution here.
  • Tools whose task attribute is None but which are treated as task‑enabled via _support_tasks_by_default (as in _docket_lifespan registration) can be registered with Docket yet still execute synchronously when task metadata is present.

To keep behaviour consistent with _docket_lifespan and support mounted servers, consider resolving the tool via the public API and reusing the same supports_task logic:

-                if task_meta and fastmcp.settings.enable_tasks:
-                    # Task metadata present - check if tool supports background execution
-                    tool = self._tool_manager._tools.get(key)
-                    if tool and tool.task:
-                        # Route to background execution
-                        # Convert TaskMetadata to dict for handler
-                        task_meta_dict = task_meta.model_dump(exclude_none=True)
-                        return await handle_tool_as_task(
-                            self, key, arguments, task_meta_dict
-                        )
+                if task_meta and fastmcp.settings.enable_tasks:
+                    tool = None
+                    try:
+                        tool = await self.get_tool(key)
+                    except NotFoundError:
+                        pass
+
+                    if tool:
+                        supports_task = (
+                            tool.task
+                            if tool.task is not None
+                            else self._support_tasks_by_default
+                        )
+                        if supports_task:
+                            task_meta_dict = task_meta.model_dump(exclude_none=True)
+                            return await handle_tool_as_task(
+                                self, key, arguments, task_meta_dict
+                            )

The existing synchronous fall‑through then continues to behave as before when tools don’t support tasks.


2288-2301: Experimental task capabilities payload: confirm SEP‑1686 shape and consider sharing a helper.

The experimental_capabilities["tasks"] structure (with list, cancel, and requests.{tools.call,prompts.get,resources.read}) is properly gated on fastmcp.settings.enable_tasks and passed into create_initialization_options alongside NotificationOptions. To avoid divergence between stdio and HTTP transports over time, consider factoring this into a small helper (e.g. _build_experimental_capabilities()) and reusing it wherever initialization options are built; also double‑check the dict shape against the latest SEP‑1686 spec before release.

Also applies to: 2307-2311
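A minimal sketch of such a helper, assuming the key names follow the review's description (the actual SEP‑1686 payload shape should be verified against the spec, and the real code would read fastmcp.settings.enable_tasks rather than take a parameter):

```python
def build_experimental_capabilities(enable_tasks: bool) -> dict:
    """Build the experimental capabilities payload once, so the stdio and
    HTTP transports cannot drift apart over time."""
    if not enable_tasks:
        return {}
    return {
        "tasks": {
            "list": True,
            "cancel": True,
            "requests": {
                # Key names are an assumption based on the review text.
                "tools/call": True,
                "prompts/get": True,
                "resources/read": True,
            },
        }
    }
```

Both call sites would then pass `experimental_capabilities=build_experimental_capabilities(...)` into create_initialization_options.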

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c981f49 and 7c6f7a1.

📒 Files selected for processing (2)
  • src/fastmcp/server/server.py (28 hunks)
  • src/fastmcp/settings.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
src/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.py: Python source code must use Python ≥3.10 with full type annotations
Never use bare except - be specific with exception types
Prioritize readable, understandable code - clarity over cleverness; avoid obfuscated or confusing patterns even if shorter
Follow existing patterns and maintain consistency in code organization and style

Files:

  • src/fastmcp/settings.py
  • src/fastmcp/server/server.py
🧠 Learnings (1)
📚 Learning: 2025-12-04T00:17:41.238Z
Learnt from: CR
Repo: jlowin/fastmcp PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-04T00:17:41.238Z
Learning: Applies to tests/**/*.py : Pass FastMCP servers directly to clients for testing using in-memory transport; only use HTTP transport when explicitly testing network features

Applied to files:

  • src/fastmcp/server/server.py
🧬 Code graph analysis (2)
src/fastmcp/settings.py (1)
src/fastmcp/server/server.py (1)
  • docket (370-375)
src/fastmcp/server/server.py (2)
src/fastmcp/server/tasks/handlers.py (3)
  • handle_prompt_as_task (136-239)
  • handle_resource_as_task (242-343)
  • handle_tool_as_task (27-133)
src/fastmcp/server/dependencies.py (1)
  • without_injected_parameters (85-139)
🪛 Ruff (0.14.7)
src/fastmcp/server/server.py

394-397: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Run tests: Python 3.10 on windows-latest
  • GitHub Check: Run tests: Python 3.10 on ubuntu-latest
  • GitHub Check: Run tests with lowest-direct dependencies
🔇 Additional comments (5)
src/fastmcp/settings.py (1)

212-245: New Docket/task fields on Settings align with server usage.

enable_docket, enable_tasks, and the nested docket: DocketSettings field are consistent with how the server code gates Docket and SEP‑1686 features; no issues spotted here.
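For orientation, the shape of those fields can be sketched as below. This uses plain dataclasses purely for illustration; the real Settings is a pydantic‑settings model, and the DocketSettings field names and defaults here are assumptions:

```python
from dataclasses import dataclass, field


@dataclass
class DocketSettings:
    # Placeholder for Docket connection options; actual fields are assumptions.
    url: str = "memory://"


@dataclass
class Settings:
    enable_docket: bool = False   # gates Docket availability
    enable_tasks: bool = False    # gates SEP-1686 features; requires enable_docket
    docket: DocketSettings = field(default_factory=DocketSettings)
```

The server code then checks enable_docket/enable_tasks before wiring up the worker, as described in the comments above.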

src/fastmcp/server/server.py (4)

1661-1850: Tool decorator task flag and sync‑function guard look consistent.

The new task: bool | None kwarg, resolution through _support_tasks_by_default, and explicit warning/disable when a sync function is marked task=True give a clear, predictable story for backgroundable tools. The partial returned in the deferred‑decoration path correctly threads the task argument back into self.tool, so the same logic applies there. No changes needed.


1909-2040: Resource decorator integrates task flag and DI‑aware parameter detection cleanly.

Using without_injected_parameters to decide whether a function has “real” parameters avoids misclassifying DI‑only resources as templates, and the supports_task logic (plus sync‑function guard and warning) mirrors the tool behaviour before passing task into ResourceTemplate.from_function / Resource.from_function. This looks correct and maintainable.


2085-2263: Prompt decorator’s task semantics correctly mirror tools/resources.

The prompt decorator applies the same task kwarg, default resolution against _support_tasks_by_default, and sync‑function guard/warning as tools and resources, and the partial in the deferred path preserves the task argument. This keeps behaviour uniform across all component types. No issues here.


572-577: The review comment is based on an incorrect analysis of the code.

The ResourceManager.get_resource() method explicitly accepts AnyUrl | str (line 244 of resource_manager.py), not just strings. The method internally converts the URI to a string on line 253 (uri_str = str(uri)), so passing an AnyUrl instance directly causes no issues. This design already handles both types correctly, and the suggested fix is unnecessary.

The code at line 603 in server.py passes uri (an AnyUrl instance) to get_resource(), which works as intended because the method signature supports both types. No type mismatch, false NotFoundError, or silent fallback occurs.

Likely an incorrect or invalid review comment.

@jlowin jlowin left a comment

@jlowin jlowin removed the DON'T MERGE PR is not ready for merging. Used by authors to prevent premature merging. label Dec 5, 2025
@jlowin jlowin merged commit 66aaf42 into main Dec 5, 2025
13 checks passed
@jlowin jlowin deleted the sep-1686 branch December 5, 2025 01:10
@He-Pin

He-Pin commented Dec 5, 2025

nice, just implemented this at work too

@marvin-context-protocol
Contributor

Test Failure Analysis

Summary: Windows Python 3.10 tests are failing with pytest-xdist worker crashes in the newly added test_oidc_proxy.py.

Root Cause: Two tests cause worker crashes specifically on Windows:

  • TestOIDCProxyInitialization::test_extra_params_merge_with_audience (tests/server/auth/test_oidc_proxy.py:843)
  • TestOIDCProxyInitialization::test_extra_authorize_params_initialization (tests/server/auth/test_oidc_proxy.py:787)

Worker crashes in pytest-xdist typically indicate an import-time error, resource conflict, or Windows-specific compatibility issue in the test setup/teardown.

Suggested Solution:

  1. Isolate the issue: Run locally on Windows without xdist to see actual error:

    pytest tests/server/auth/test_oidc_proxy.py::TestOIDCProxyInitialization::test_extra_params_merge_with_audience -n0 -v
  2. Likely fixes:

    • Check if OIDCProxy initialization has Windows-specific path handling issues
    • Verify mock patches don't interfere with worker process forking on Windows
    • Consider if fixtures need scope="function" to isolate state between workers
  3. Temporary workaround (if needed to unblock; requires import sys in the test module):

    import sys

    @pytest.mark.skipif(sys.platform == "win32", reason="Worker crash on Windows - investigating")
Detailed Analysis

Failed Workflow: https://github.com/jlowin/fastmcp/actions/runs/19949261935

Error Output:

FAILED tests/server/auth/test_oidc_proxy.py::TestOIDCProxyInitialization::test_extra_params_merge_with_audience - worker 'gw0' crashed
FAILED tests/server/auth/test_oidc_proxy.py::TestOIDCProxyInitialization::test_extra_authorize_params_initialization - worker 'gw1' crashed

Test Status: 2 failed, 2969 passed, 33 skipped (Ubuntu tests passed cleanly)

Context: These tests were newly added in this PR. They test OIDC proxy parameter handling with mocked HTTP configuration fetching. The crashes only occur on Windows with pytest-xdist parallel execution.

Related Files
  • Test file: tests/server/auth/test_oidc_proxy.py (newly added, ~880 lines)
  • Source file: src/fastmcp/server/auth/oidc_proxy.py
  • Failing test lines: L787-816, L843-878

Both failing tests follow similar patterns:

  1. Mock OIDCConfiguration.get_oidc_configuration
  2. Create OIDCProxy with various parameters including extra_authorize_params
  3. Assert parameter merging behavior

The common factor is that both tests use extra_authorize_params; the crash might be related to the dict merging logic or to JWT signing key handling.


Labels

feature Major new functionality. Reserved for 2-4 significant PRs per release. Not for issues. server Related to FastMCP server implementation or server-side functionality.


3 participants