❌ AI Integration Tests Failed #5343

@github-actions

Description


AI Integration Test Failure

Date: 2026-03-06T04:09:51.426Z
Platform: python
Framework: all
Workflow Run: https://github.com/getsentry/sentry-python/actions/runs/22748525791

Summary

| Metric | Value |
| --- | --- |
| Total Tests | 138 |
| Passed | 61 |
| Failed | 77 |
| Skipped | 0 |
| Duration | 313.65s |

Results by Framework

FAILED - python/anthropic

| Test | Status | Duration |
| --- | --- | --- |
| Basic LLM Test (sync, streaming) | ✗ failed | 2.93s |
| Basic LLM Test (sync, blocking) | ✓ passed | 2.41s |
| Basic LLM Test (async, streaming) | ✗ failed | 1.65s |
| Basic LLM Test (async, blocking) | ✓ passed | 3.66s |
| Multi-Turn LLM Test (sync, streaming) | ✗ failed | 23.33s |
| Multi-Turn LLM Test (sync, blocking) | ✓ passed | 59.67s |
| Multi-Turn LLM Test (async, streaming) | ✗ failed | 22.90s |
| Multi-Turn LLM Test (async, blocking) | ✗ failed | 32.53s |
| Basic Error LLM Test (sync, streaming) | ✗ failed | 2.57s |
| Basic Error LLM Test (sync, blocking) | ✓ passed | 2.44s |
| Basic Error LLM Test (async, streaming) | ✗ failed | 2.52s |
| Basic Error LLM Test (async, blocking) | ✓ passed | 2.52s |
| Vision LLM Test (sync, streaming) | ✗ failed | 7.68s |
| Vision LLM Test (sync, blocking) | ✗ failed | 17.91s |
| Vision LLM Test (async, streaming) | ✗ failed | 23.92s |
| Vision LLM Test (async, blocking) | ✗ failed | 23.16s |
| Long Input LLM Test (sync, streaming) | ✗ failed | 24.98s |
| Long Input LLM Test (sync, blocking) | ✓ passed | 12.84s |
| Long Input LLM Test (async, streaming) | ✗ failed | 24.00s |
| Long Input LLM Test (async, blocking) | ✓ passed | 24.28s |

FAILED - python/google-genai

| Test | Status | Duration |
| --- | --- | --- |
| Basic Agent Test (sync) | ✗ failed | 4.21s |
| Basic Agent Test (async) | ✗ failed | 4.21s |
| Tool Call Agent Test (sync) | ✗ failed | 4.21s |
| Tool Call Agent Test (async) | ✗ failed | 4.21s |
| Tool Error Agent Test (sync) | ✗ failed | 1.86s |
| Tool Error Agent Test (async) | ✗ failed | 1.79s |
| Vision Agent Test (sync) | ✗ failed | 1.81s |
| Vision Agent Test (async) | ✗ failed | 1.82s |
| Long Input Agent Test (sync) | ✗ failed | 1.71s |
| Long Input Agent Test (async) | ✗ failed | 1.70s |
| Basic Embeddings Test (sync, blocking) | ✗ failed | 1.04s |
| Basic Embeddings Test (async, blocking) | ✗ failed | 1.03s |

FAILED - python/langchain

| Test | Status | Duration |
| --- | --- | --- |
| Basic Embeddings Test (sync, blocking) | ✗ failed | 4.20s |
| Basic Embeddings Test (async, blocking) | ✗ failed | 2.88s |
| Basic LLM Test (sync, streaming) | ✓ passed | 3.87s |
| Basic LLM Test (sync, blocking) | ✓ passed | 4.05s |
| Basic LLM Test (async, streaming) | ✓ passed | 4.41s |
| Basic LLM Test (async, blocking) | ✓ passed | 3.90s |
| Multi-Turn LLM Test (sync, streaming) | ✓ passed | 26.75s |
| Multi-Turn LLM Test (sync, blocking) | ✓ passed | 27.15s |
| Multi-Turn LLM Test (async, streaming) | ✓ passed | 22.64s |
| Multi-Turn LLM Test (async, blocking) | ✓ passed | 24.01s |
| Basic Error LLM Test (sync, streaming) | ✗ failed | 1.14s |
| Basic Error LLM Test (sync, blocking) | ✗ failed | 1.24s |
| Basic Error LLM Test (async, streaming) | ✗ failed | 1.72s |
| Basic Error LLM Test (async, blocking) | ✗ failed | 1.79s |
| Vision LLM Test (sync, streaming) | ✓ passed | 3.52s |
| Vision LLM Test (sync, blocking) | ✓ passed | 4.02s |
| Vision LLM Test (async, streaming) | ✓ passed | 3.24s |
| Vision LLM Test (async, blocking) | ✓ passed | 3.63s |
| Long Input LLM Test (sync, streaming) | ✓ passed | 3.69s |
| Long Input LLM Test (sync, blocking) | ✓ passed | 3.88s |
| Long Input LLM Test (async, streaming) | ✓ passed | 3.53s |
| Long Input LLM Test (async, blocking) | ✓ passed | 3.30s |

FAILED - python/langgraph

| Test | Status | Duration |
| --- | --- | --- |
| Basic Agent Test (sync) | ✗ failed | 6.96s |
| Basic Agent Test (async) | ✗ failed | 6.99s |
| Tool Call Agent Test (sync) | ✗ failed | 8.68s |
| Tool Call Agent Test (async) | ✗ failed | 7.60s |
| Tool Error Agent Test (sync) | ✗ failed | 4.23s |
| Tool Error Agent Test (async) | ✗ failed | 3.97s |
| Vision Agent Test (sync) | ✗ failed | 3.21s |
| Vision Agent Test (async) | ✗ failed | 3.37s |
| Long Input Agent Test (sync) | ✗ failed | 49.40s |
| Long Input Agent Test (async) | ✗ failed | 59.89s |

FAILED - python/litellm

| Test | Status | Duration |
| --- | --- | --- |
| Basic Embeddings Test (sync, blocking) | ✓ passed | 6.56s |
| Basic Embeddings Test (async, blocking) | ✗ failed | 5.98s |
| Basic LLM Test (sync, streaming) | ✓ passed | 6.16s |
| Basic LLM Test (sync, blocking) | ✓ passed | 6.25s |
| Basic LLM Test (async, streaming) | ✗ failed | 6.21s |
| Basic LLM Test (async, blocking) | ✗ failed | 6.84s |
| Multi-Turn LLM Test (sync, streaming) | ✓ passed | 29.79s |
| Multi-Turn LLM Test (sync, blocking) | ✓ passed | 32.92s |
| Multi-Turn LLM Test (async, streaming) | ✗ failed | 35.95s |
| Multi-Turn LLM Test (async, blocking) | ✗ failed | 28.39s |
| Basic Error LLM Test (sync, streaming) | ✗ failed | 2.77s |
| Basic Error LLM Test (sync, blocking) | ✗ failed | 2.81s |
| Basic Error LLM Test (async, streaming) | ✗ failed | 3.54s |
| Basic Error LLM Test (async, blocking) | ✗ failed | 3.34s |
| Vision LLM Test (sync, streaming) | ✓ passed | 6.08s |
| Vision LLM Test (sync, blocking) | ✓ passed | 4.83s |
| Vision LLM Test (async, streaming) | ✗ failed | 5.78s |
| Vision LLM Test (async, blocking) | ✗ failed | 5.71s |
| Long Input LLM Test (sync, streaming) | ✓ passed | 5.36s |
| Long Input LLM Test (sync, blocking) | ✓ passed | 5.34s |
| Long Input LLM Test (async, streaming) | ✗ failed | 5.11s |
| Long Input LLM Test (async, blocking) | ✗ failed | 4.99s |

PASSED - python/manual

| Test | Status | Duration |
| --- | --- | --- |
| Basic Agent Test (sync) | ✓ passed | 857ms |
| Basic Agent Test (async) | ✓ passed | 676ms |
| Tool Call Agent Test (sync) | ✓ passed | 677ms |
| Tool Call Agent Test (async) | ✓ passed | 696ms |
| Tool Error Agent Test (sync) | ✓ passed | 676ms |
| Tool Error Agent Test (async) | ✓ passed | 674ms |
| Vision Agent Test (sync) | ✓ passed | 678ms |
| Vision Agent Test (async) | ✓ passed | 674ms |
| Long Input Agent Test (sync) | ✓ passed | 674ms |
| Long Input Agent Test (async) | ✓ passed | 675ms |
| Basic Embeddings Test (sync, blocking) | ✓ passed | 774ms |
| Basic Embeddings Test (async, blocking) | ✓ passed | 770ms |
| Basic LLM Test (sync, blocking) | ✓ passed | 786ms |
| Basic LLM Test (async, blocking) | ✓ passed | 728ms |
| Multi-Turn LLM Test (sync, blocking) | ✓ passed | 695ms |
| Multi-Turn LLM Test (async, blocking) | ✓ passed | 697ms |
| Vision LLM Test (sync, blocking) | ✓ passed | 683ms |
| Vision LLM Test (async, blocking) | ✓ passed | 692ms |
| Long Input LLM Test (sync, blocking) | ✓ passed | 686ms |
| Long Input LLM Test (async, blocking) | ✓ passed | 684ms |

FAILED - python/openai

| Test | Status | Duration |
| --- | --- | --- |
| Basic Embeddings Test (sync, blocking) | ✓ passed | 3.67s |
| Basic Embeddings Test (async, blocking) | ✓ passed | 1.73s |
| Basic LLM Test (sync, streaming) | ✗ failed | 3.71s |
| Basic LLM Test (sync, blocking) | ✓ passed | 5.84s |
| Basic LLM Test (async, streaming) | ✗ failed | 4.38s |
| Basic LLM Test (async, blocking) | ✓ passed | 4.23s |
| Multi-Turn LLM Test (sync, streaming) | ✗ failed | 29.66s |
| Multi-Turn LLM Test (sync, blocking) | ✓ passed | 28.50s |
| Multi-Turn LLM Test (async, streaming) | ✗ failed | 30.83s |
| Multi-Turn LLM Test (async, blocking) | ✓ passed | 32.51s |
| Basic Error LLM Test (sync, streaming) | ✗ failed | 630ms |
| Basic Error LLM Test (sync, blocking) | ✗ failed | 629ms |
| Basic Error LLM Test (async, streaming) | ✗ failed | 648ms |
| Basic Error LLM Test (async, blocking) | ✗ failed | 645ms |
| Vision LLM Test (sync, streaming) | ✗ failed | 1.96s |
| Vision LLM Test (sync, blocking) | ✗ failed | 1.94s |
| Vision LLM Test (async, streaming) | ✗ failed | 1.97s |
| Vision LLM Test (async, blocking) | ✗ failed | 1.95s |
| Long Input LLM Test (sync, streaming) | ✗ failed | 2.48s |
| Long Input LLM Test (sync, blocking) | ✓ passed | 2.15s |
| Long Input LLM Test (async, streaming) | ✗ failed | 2.26s |
| Long Input LLM Test (async, blocking) | ✓ passed | 2.12s |

FAILED - python/openai-agents

| Test | Status | Duration |
| --- | --- | --- |
| Basic Agent Test (async) | ✓ passed | 6.85s |
| Tool Call Agent Test (async) | ✗ failed | 8.08s |
| Tool Error Agent Test (async) | ✗ failed | 5.63s |
| Vision Agent Test (async) | ✗ failed | 4.06s |
| Long Input Agent Test (async) | ✗ failed | 41.37s |

FAILED - python/pydantic-ai

| Test | Status | Duration |
| --- | --- | --- |
| Basic Agent Test (async) | ✗ failed | 9.56s |
| Tool Call Agent Test (async) | ✗ failed | 6.50s |
| Tool Error Agent Test (async) | ✗ failed | 5.19s |
| Vision Agent Test (async) | ✗ failed | 5.13s |
| Long Input Agent Test (async) | ✗ failed | 25.45s |

Failed Tests Details

python/google-genai - Basic Agent Test (sync)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-basic-agent-test-sync.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-basic-agent-test-sync.py", line 22, in <module>
    client = genai.Client(
             ^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
    self._api_client = self._get_api_client(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
    return BaseApiClient(
           ^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
    raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
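Every python/google-genai failure in this run is this same `ValueError` raised at `genai.Client(...)` construction, which points at a missing credential in the workflow environment rather than an SDK or integration regression. As a rough sketch of the check the client performs (the environment-variable names `GOOGLE_API_KEY`, `GEMINI_API_KEY`, `GOOGLE_CLOUD_PROJECT`, and `GOOGLE_CLOUD_LOCATION` are taken from the google-genai docs; the function itself is illustrative, not the library's actual code):

```python
# Illustrative sketch of google-genai's credential resolution: the client
# needs either an API key (Google AI API) or vertexai+project+location
# (Google Cloud API), and raises ValueError when neither is present.
import os


def resolve_genai_credentials(env=None):
    """Return the kwargs genai.Client() would need, or raise like the client."""
    env = os.environ if env is None else env
    # Google AI API path: a single API key is sufficient.
    api_key = env.get("GOOGLE_API_KEY") or env.get("GEMINI_API_KEY")
    if api_key:
        return {"api_key": api_key}
    # Google Cloud (Vertex AI) path: project + location instead of a key.
    project = env.get("GOOGLE_CLOUD_PROJECT")
    location = env.get("GOOGLE_CLOUD_LOCATION")
    if project and location:
        return {"vertexai": True, "project": project, "location": location}
    raise ValueError(
        "Missing key inputs argument! Provide `api_key`, or "
        "`vertexai`, `project` & `location`."
    )
```

Under this reading, the fix is to inject the relevant secret into the test job's environment (or fail fast with a clearer message before launching the per-test scripts), since every google-genai test dies at line 22/23 before exercising any Sentry instrumentation.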


python/google-genai - Basic Agent Test (async)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-basic-agent-test-async.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-basic-agent-test-async.py", line 23, in <module>
    client = genai.Client(
             ^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
    self._api_client = self._get_api_client(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
    return BaseApiClient(
           ^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
    raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.


python/google-genai - Tool Call Agent Test (sync)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-tool-call-agent-test-sync.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-tool-call-agent-test-sync.py", line 22, in <module>
    client = genai.Client(
             ^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
    self._api_client = self._get_api_client(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
    return BaseApiClient(
           ^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
    raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.


python/google-genai - Tool Call Agent Test (async)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-tool-call-agent-test-async.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-tool-call-agent-test-async.py", line 23, in <module>
    client = genai.Client(
             ^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
    self._api_client = self._get_api_client(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
    return BaseApiClient(
           ^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
    raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.


python/google-genai - Tool Error Agent Test (sync)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-tool-error-agent-test-sync.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-tool-error-agent-test-sync.py", line 22, in <module>
    client = genai.Client(
             ^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
    self._api_client = self._get_api_client(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
    return BaseApiClient(
           ^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
    raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.


python/google-genai - Tool Error Agent Test (async)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-tool-error-agent-test-async.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-tool-error-agent-test-async.py", line 23, in <module>
    client = genai.Client(
             ^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
    self._api_client = self._get_api_client(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
    return BaseApiClient(
           ^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
    raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.


python/google-genai - Vision Agent Test (sync)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-vision-agent-test-sync.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-vision-agent-test-sync.py", line 22, in <module>
    client = genai.Client(
             ^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
    self._api_client = self._get_api_client(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
    return BaseApiClient(
           ^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
    raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.


python/google-genai - Vision Agent Test (async)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-vision-agent-test-async.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-vision-agent-test-async.py", line 23, in <module>
    client = genai.Client(
             ^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
    self._api_client = self._get_api_client(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
    return BaseApiClient(
           ^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
    raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.

python/google-genai - Long Input Agent Test (sync)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-long-input-agent-test-sync.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-long-input-agent-test-sync.py", line 22, in <module>
    client = genai.Client(
             ^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
    self._api_client = self._get_api_client(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
    return BaseApiClient(
           ^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
    raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.

python/google-genai - Long Input Agent Test (async)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-long-input-agent-test-async.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-long-input-agent-test-async.py", line 23, in <module>
    client = genai.Client(
             ^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
    self._api_client = self._get_api_client(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
    return BaseApiClient(
           ^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
    raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.

python/langgraph - Basic Agent Test (sync)

Error: 1 check(s) failed:

Child span (gen_ai.chat, id: 97e82ced) should have gen_ai.agent.name attribute
python/langgraph - Basic Agent Test (async)

Error: 1 check(s) failed:

Child span (gen_ai.chat, id: 9a8f1a55) should have gen_ai.agent.name attribute
python/langgraph - Tool Call Agent Test (sync)

Error: 4 check(s) failed:

Attribute validation failed:
  Span 87743f28: Attribute 'gen_ai.tool.type' must exist but is missing
  Span 8c7a6505: Attribute 'gen_ai.tool.type' must exist but is missing
Child span (gen_ai.chat, id: 8dec1a90) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: 87743f28) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 817633c3) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: 8c7a6505) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 8274e1fc) should have gen_ai.agent.name attribute
Tool call "add" should have argument "a"
Tool call "add" should have argument "b"
Tool call "multiply" should have argument "a"
Tool call "multiply" should have argument "b"
Tool "add" should have type "function" but has "undefined"
Tool "add" output should equal 8 but is {"content":"8","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"add","id":"None","tool_call_id":"call_ysQMbCIf6wXUi0uWodv2KQDu","artifact":"None","status":"success"}
Tool "multiply" should have type "function" but has "undefined"
Tool "multiply" output should equal 32 but is {"content":"32","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"multiply","id":"None","tool_call_id":"call_hbQUr9OOO9MaWDJKBYu0F26L","artifact":"None","status":"success"}
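The langgraph attribute failures above reduce to required span attributes the instrumentation never set. A minimal validator sketch of what the harness appears to assert — the attribute names come from the check messages, but the dict-based span shape is an assumption for illustration:

```python
# Required attributes per span op, taken from the check messages above.
REQUIRED_BY_OP = {
    "gen_ai.chat": ["gen_ai.agent.name"],
    "gen_ai.execute_tool": ["gen_ai.agent.name", "gen_ai.tool.type"],
}


def missing_attributes(span):
    """Return the required attribute names absent from a span dict.

    Sketch only: real spans are SDK objects, not plain dicts.
    """
    required = REQUIRED_BY_OP.get(span.get("op"), [])
    attrs = span.get("data", {})
    return [name for name in required if name not in attrs]
```

Running such a check locally against captured spans would reproduce the failures above without a full CI round trip.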
python/langgraph - Tool Call Agent Test (async)

Error: 4 check(s) failed:

Attribute validation failed:
  Span b203bd11: Attribute 'gen_ai.tool.type' must exist but is missing
  Span bfd95579: Attribute 'gen_ai.tool.type' must exist but is missing
Child span (gen_ai.chat, id: 93e14c05) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: b203bd11) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 9e064bf5) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: bfd95579) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 8b2841aa) should have gen_ai.agent.name attribute
Tool call "add" should have argument "a"
Tool call "add" should have argument "b"
Tool call "multiply" should have argument "a"
Tool call "multiply" should have argument "b"
Tool "add" should have type "function" but has "undefined"
Tool "add" output should equal 8 but is {"content":"8","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"add","id":"None","tool_call_id":"call_IfCUdST7XT7yyAJ5AXKKVMTL","artifact":"None","status":"success"}
Tool "multiply" should have type "function" but has "undefined"
Tool "multiply" output should equal 32 but is {"content":"32","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"multiply","id":"None","tool_call_id":"call_92SQD1dhYETdU5TdQb0ordQW","artifact":"None","status":"success"}
python/langgraph - Tool Error Agent Test (sync)

Error: 4 check(s) failed:

Attribute validation failed:
  Span a41a8acf: Attribute 'gen_ai.tool.type' must exist but is missing
Child span (gen_ai.chat, id: 81e0fd95) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: a41a8acf) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: b30f9a5d) should have gen_ai.agent.name attribute
Tool call "read_file" should have argument "path"
Tool span should have an error indicator (status=error, data.error, data.exception, gen_ai.tool.error, or tags.error)
python/langgraph - Tool Error Agent Test (async)

Error: 4 check(s) failed:

Attribute validation failed:
  Span 821a4dbe: Attribute 'gen_ai.tool.type' must exist but is missing
Child span (gen_ai.chat, id: 92406baa) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: 821a4dbe) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: b5fbac9b) should have gen_ai.agent.name attribute
Tool call "read_file" should have argument "path"
Tool span should have an error indicator (status=error, data.error, data.exception, gen_ai.tool.error, or tags.error)
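The last check in each tool-error section lists the indicators it will accept. A sketch of marking a failed tool span with two of them — the mutable-dict span shape is an assumption; the real integration records errors on SDK span objects:

```python
def mark_tool_span_failed(span, exc):
    """Set error indicators named in the check above (status=error and
    gen_ai.tool.error) on a span dict. Illustrative sketch only."""
    span["status"] = "error"
    data = span.setdefault("data", {})
    data["gen_ai.tool.error"] = f"{type(exc).__name__}: {exc}"
    return span
```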
python/langgraph - Vision Agent Test (sync)

Error: 2 check(s) failed:

Child span (gen_ai.chat, id: 8271022d) should have gen_ai.agent.name attribute
Messages should not contain raw base64 data (should be redacted)
python/langgraph - Vision Agent Test (async)

Error: 2 check(s) failed:

Child span (gen_ai.chat, id: b9931908) should have gen_ai.agent.name attribute
Messages should not contain raw base64 data (should be redacted)
python/langgraph - Long Input Agent Test (sync)

Error: 1 check(s) failed:

Child span (gen_ai.chat, id: afb0978b) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: 9e0bcc8f) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 8c64df0a) should have gen_ai.agent.name attribute
python/langgraph - Long Input Agent Test (async)

Error: 1 check(s) failed:

Child span (gen_ai.chat, id: a58d2a6c) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: 94bc77d9) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 8e34b05e) should have gen_ai.agent.name attribute
python/openai-agents - Tool Call Agent Test (async)

Error: 1 check(s) failed:

Should have gen_ai.output.messages or gen_ai.response.text
Should have gen_ai.output.messages or gen_ai.response.text
python/openai-agents - Tool Error Agent Test (async)

Error: 1 check(s) failed:

Should have gen_ai.output.messages or gen_ai.response.text
python/openai-agents - Vision Agent Test (async)

Error: 1 check(s) failed:

Messages should not contain raw base64 data (should be redacted)
Messages should not contain raw base64 data (should be redacted)
Messages should contain '[Blob substitute]' marker indicating binary content was redacted
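The vision checks expect long base64 payloads in captured messages to be replaced with a "[Blob substitute]" marker. A heuristic redaction sketch — the regex and the 200-character threshold are assumptions, chosen so ordinary words are never matched while image payloads (tens of thousands of characters) always are:

```python
import re

# Long unbroken base64-ish runs; the 200-char floor is an assumption
# to avoid false positives on ordinary text.
_BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{200,}={0,2}")


def redact_blobs(text):
    """Replace base64-looking runs with the marker the check expects."""
    return _BASE64_RUN.sub("[Blob substitute]", text)
```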
python/openai-agents - Long Input Agent Test (async)

Error: 2 check(s) failed:

Should have gen_ai.output.messages or gen_ai.response.text
Message should be trimmed (length 25667 > 20000)
Message should be trimmed (length 25667 > 20000)
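The long-input check enforces a 20000-character ceiling on captured messages. A trimming sketch — the limit comes from the check message, while the ellipsis marker is an assumption:

```python
MAX_MESSAGE_LEN = 20000  # ceiling implied by the check above


def trim_message(text, max_len=MAX_MESSAGE_LEN, marker="..."):
    """Truncate text to max_len, ending with a marker so the cut is visible."""
    if len(text) <= max_len:
        return text
    return text[: max_len - len(marker)] + marker
```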
python/pydantic-ai - Basic Agent Test (async)

Error: 2 check(s) failed:

Attribute validation failed:
  Span b758f826: Attribute 'description' must equal 'invoke_agent undefined' but is 'invoke_agent agent'
  Span b758f826: Attribute 'gen_ai.agent.name' must exist but is missing
Agent span (gen_ai.invoke_agent) should have gen_ai.agent.name attribute
python/pydantic-ai - Tool Call Agent Test (async)

Error: 5 check(s) failed:

Attribute validation failed:
  Span b07e212b: Attribute 'description' must equal 'invoke_agent undefined' but is 'invoke_agent agent'
  Span b07e212b: Attribute 'gen_ai.agent.name' must exist but is missing
Should have gen_ai.output.messages or gen_ai.response.text
Should have gen_ai.output.messages or gen_ai.response.text
Attribute validation failed:
  Span bf487fc2: Attribute 'gen_ai.tool.description' must exist but is missing
  Span af0f59d7: Attribute 'gen_ai.tool.description' must exist but is missing
Agent span (gen_ai.invoke_agent) should have gen_ai.agent.name attribute
Tool "add" should have description "Add two numbers together" but has "undefined"
Tool "multiply" should have description "Multiply two numbers together" but has "undefined"
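The expected descriptions ("Add two numbers together", "Multiply two numbers together") read like first docstring lines that were never propagated to `gen_ai.tool.description`. A sketch of extracting them — illustrative only, not the integration's actual code:

```python
import inspect


def tool_description(func):
    """Return the first docstring line, a common source for a tool's
    description. Hypothetical helper for illustration."""
    doc = inspect.getdoc(func)
    return doc.splitlines()[0] if doc else None


def add(a: int, b: int) -> int:
    """Add two numbers together"""
    return a + b
```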
python/pydantic-ai - Tool Error Agent Test (async)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/test-tool-error-agent-test-async.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/test-tool-error-agent-test-async.py", line 47, in <module>
    asyncio.run(main())
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/test-tool-error-agent-test-async.py", line 40, in main
    result = await agent.run("Please read the file at /nonexistent/file.txt and tell me what it contains. Use the read_file tool.")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/pydantic_ai/patches/agent_run.py", line 129, in wrapper
    reraise(*exc_info)
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/utils.py", line 1785, in reraise
    raise value
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/pydantic_ai/patches/agent_run.py", line 119, in wrapper
    result = await original_func(self, *args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/agent/abstract.py", line 259, in run
    async with self.iter(
               ^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/contextlib.py", line 231, in __aexit__
    await self.gen.athrow(value)
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/agent/__init__.py", line 707, in iter
    async with graph.iter(
               ^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/contextlib.py", line 231, in __aexit__
    await self.gen.athrow(value)
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_graph/beta/graph.py", line 270, in iter
    async with GraphRun[StateT, DepsT, OutputT](
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_graph/beta/graph.py", line 423, in __aexit__
    await self._async_exit_stack.__aexit__(exc_type, exc_val, exc_tb)
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/contextlib.py", line 754, in __aexit__
    raise exc_details[1]
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/contextlib.py", line 735, in __aexit__
    cb_suppress = cb(*exc_details)
                  ^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_graph/beta/graph.py", line 981, in _unwrap_exception_groups
    raise exception
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_graph/beta/graph.py", line 750, in _run_tracked_task
    result = await self._run_task(t_)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_graph/beta/graph.py", line 782, in _run_task
    output = await node.call(step_context)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_graph/beta/step.py", line 253, in _call_node
    return await node.run(GraphRunContext(state=ctx.state, deps=ctx.deps))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 593, in run
    async with self.stream(ctx):
               ^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/contextlib.py", line 217, in __aexit__
    await anext(self.gen)
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 607, in stream
    async for _event in stream:
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 744, in _run_stream
    async for event in self._events_iterator:
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 705, in _run_stream
    async for event in self._handle_tool_calls(ctx, tool_calls):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 760, in _handle_tool_calls
    async for event in process_tool_calls(
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 1011, in process_tool_calls
    async for event in _call_tools(
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 1161, in _call_tools
    if event := await handle_call_or_result(coro_or_task=task, index=index):  # pyright: ignore[reportArgumentType]
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 1112, in handle_call_or_result
    (await coro_or_task) if inspect.isawaitable(coro_or_task) else coro_or_task.result()
     ^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 1204, in _call_tool
    tool_result = await tool_manager.handle_call(tool_call)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_tool_manager.py", line 153, in handle_call
    return await self._call_function_tool(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_tool_manager.py", line 290, in _call_function_tool
    tool_result = await self._call_tool(
                  ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/pydantic_ai/patches/tools.py", line 155, in wrapped_call_tool
    result = await original_call_tool(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_tool_manager.py", line 212, in _call_tool
    return await self.toolset.call_tool(name, args_dict, ctx, tool)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/toolsets/combined.py", line 90, in call_tool
    return await tool.source_toolset.call_tool(name, tool_args, ctx, tool.source_tool)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/toolsets/function.py", line 383, in call_tool
    return await tool.call_func(tool_args, ctx)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_function_schema.py", line 56, in call
    return await run_in_executor(function, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_utils.py", line 83, in run_in_executor
    return await run_sync(wrapped_func)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/anyio/to_thread.py", line 63, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2502, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 986, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/pydantic-ai-1.52.0-sentry-local/test-tool-error-agent-test-async.py", line 33, in read_file
    raise Exception("FileNotFoundError: The file '/nonexistent/file.txt' does not exist")
Exception: FileNotFoundError: The file '/nonexistent/file.txt' does not exist


python/pydantic-ai - Vision Agent Test (async)

Error: 3 check(s) failed:

Attribute validation failed:
  Span afb4e5fb: Attribute 'description' must equal 'invoke_agent undefined' but is 'invoke_agent agent'
  Span afb4e5fb: Attribute 'gen_ai.agent.name' must exist but is missing
Agent span (gen_ai.invoke_agent) should have gen_ai.agent.name attribute
Messages should not contain raw base64 data (should be redacted)
Messages should contain '[Blob substitute]' marker indicating binary content was redacted
python/pydantic-ai - Long Input Agent Test (async)

Error: 4 check(s) failed:

Attribute validation failed:
  Span a6c8fa2a: Attribute 'description' must equal 'invoke_agent undefined' but is 'invoke_agent agent'
  Span a6c8fa2a: Attribute 'gen_ai.agent.name' must exist but is missing
Should have gen_ai.output.messages or gen_ai.response.text
Agent span (gen_ai.invoke_agent) should have gen_ai.agent.name attribute
Message should be trimmed (length 25667 > 20000)
Message should be trimmed (length 25667 > 20000)
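
The two trimming checks expect serialized messages longer than 20,000 characters to be cut down before capture. A minimal sketch of that behavior (the limit comes from the check output; the helper name and truncation marker are assumptions):

```python
MAX_MESSAGE_LENGTH = 20000  # limit implied by the failing check above

def trim_message(text: str, limit: int = MAX_MESSAGE_LENGTH) -> str:
    # Keep the head of the message and append a marker so it is clear
    # that content was dropped; the result never exceeds the limit.
    marker = "... [truncated]"
    if len(text) <= limit:
        return text
    return text[: limit - len(marker)] + marker
```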
python/google-genai - Basic Embeddings Test (sync, blocking)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-basic-embeddings-test-sync-blocking.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-basic-embeddings-test-sync-blocking.py", line 20, in <module>
    client = genai.Client(
             ^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
    self._api_client = self._get_api_client(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
    return BaseApiClient(
           ^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
    raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
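
The ValueError itself points at the fix: the embeddings test scripts construct `genai.Client()` without credentials. A guard along these lines would fail fast with an actionable message (the environment variable names follow the error text's two configuration paths, but treat the helper as an illustrative sketch, not harness code):

```python
import os

def resolve_genai_config() -> dict:
    # Google AI API path: a single API key.
    api_key = os.environ.get("GOOGLE_API_KEY") or os.environ.get("GEMINI_API_KEY")
    if api_key:
        return {"api_key": api_key}
    # Vertex AI path: project and location instead of an API key.
    project = os.environ.get("GOOGLE_CLOUD_PROJECT")
    location = os.environ.get("GOOGLE_CLOUD_LOCATION")
    if project and location:
        return {"vertexai": True, "project": project, "location": location}
    raise RuntimeError(
        "Set GOOGLE_API_KEY (Google AI API) or GOOGLE_CLOUD_PROJECT and "
        "GOOGLE_CLOUD_LOCATION (Vertex AI) before running the embeddings tests"
    )

# Intended use in the test scripts:
# client = genai.Client(**resolve_genai_config())
```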


python/google-genai - Basic Embeddings Test (async, blocking)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-basic-embeddings-test-async-blocking.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/test-basic-embeddings-test-async-blocking.py", line 21, in <module>
    client = genai.Client(
             ^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
    self._api_client = self._get_api_client(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
    return BaseApiClient(
           ^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
    raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.


python/langchain - Basic Embeddings Test (sync, blocking)

Error: 2 check(s) failed:

Should have exactly 1 AI span(s) but found 2
Token usage validation failed:
  input_tokens must exist
  total_tokens must exist
gen_ai.response.model is missing (optional but recommended)
gen_ai.response.model is missing (optional but recommended)
python/langchain - Basic Embeddings Test (async, blocking)

Error: 2 check(s) failed:

Should have exactly 1 AI span(s) but found 2
Token usage validation failed:
  input_tokens must exist
  total_tokens must exist
gen_ai.response.model is missing (optional but recommended)
gen_ai.response.model is missing (optional but recommended)
python/litellm - Basic Embeddings Test (async, blocking)

Error: 2 check(s) failed:

Should have exactly 1 AI span(s) but found 0
Should have at least one embedding span
python/anthropic - Basic LLM Test (sync, streaming)

Error: 3 check(s) failed:

Should have exactly 1 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one chat or agent span
python/anthropic - Basic LLM Test (async, streaming)

Error: 3 check(s) failed:

Should have exactly 1 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one chat or agent span
python/anthropic - Multi-Turn LLM Test (sync, streaming)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-sync-streaming.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-sync-streaming.py", line 100, in <module>
    main()
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-sync-streaming.py", line 62, in main
    with client.messages.stream(**kwargs) as stream:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/lib/streaming/_messages.py", line 167, in __enter__
    raw_stream = self.__api_request()
                 ^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1364, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1137, in request
    raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYmAdzEggjmkVW57NSqdw'}
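
Both streaming multi-turn failures are plain 429s against a 5-requests-per-minute org limit, not SDK bugs. A retry-with-backoff wrapper in the harness would absorb them; the sketch below is generic (`retryable` would be `anthropic.RateLimitError` in the real scripts, and the delay values are illustrative):

```python
import time

def call_with_backoff(fn, retryable, max_attempts=4, base_delay=15.0):
    # Retry fn() on rate-limit errors, doubling the delay each attempt;
    # re-raise once the attempt budget is exhausted.
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```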


python/anthropic - Multi-Turn LLM Test (async, streaming)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-async-streaming.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-async-streaming.py", line 101, in <module>
    asyncio.run(main())
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-async-streaming.py", line 38, in main
    async with client.messages.stream(**kwargs) as stream:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/lib/streaming/_messages.py", line 317, in __aenter__
    raw_stream = await self.__api_request
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1992, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1777, in request
    raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYmAe597U3WAJg7M9Upj8'}

python/anthropic - Multi-Turn LLM Test (async, blocking)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-async-blocking.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-async-blocking.py", line 83, in <module>
    asyncio.run(main())
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-async-blocking.py", line 57, in main
    response = await client.messages.create(**kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 606, in _sentry_patched_create_async
    return await _execute_async(f, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 594, in _execute_async
    reraise(*exc_info)
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/utils.py", line 1785, in reraise
    raise value
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 589, in _execute_async
    result = await f(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/resources/messages/messages.py", line 2331, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1992, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1777, in request
    raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYmAeu88yMNoiagEDdyBe'}

python/anthropic - Basic Error LLM Test (sync, streaming)

Error: 2 check(s) failed:

Should have at least 1 AI span(s) but found 0
Should have at least one AI span but found none
python/anthropic - Basic Error LLM Test (async, streaming)

Error: 2 check(s) failed:

Should have at least 1 AI span(s) but found 0
Should have at least one AI span but found none
python/anthropic - Vision LLM Test (sync, streaming)

Error: 3 check(s) failed:

Should have at least one chat/completion span
Should have at least one chat or agent span
Should have at least one chat or agent span
python/anthropic - Vision LLM Test (sync, blocking)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-sync-blocking.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-sync-blocking.py", line 45, in <module>
    main()
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-sync-blocking.py", line 40, in main
    response = client.messages.create(**kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 568, in _sentry_patched_create_sync
    return _execute_sync(f, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 556, in _execute_sync
    reraise(*exc_info)
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/utils.py", line 1785, in reraise
    raise value
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 551, in _execute_sync
    result = f(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_utils/_utils.py", line 282, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/resources/messages/messages.py", line 950, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1364, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1137, in request
    raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYmAfkwnwE8RSgyjfsA7Q'}

python/anthropic - Vision LLM Test (async, streaming)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-async-streaming.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-async-streaming.py", line 52, in <module>
    asyncio.run(main())
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-async-streaming.py", line 41, in main
    async with client.messages.stream(**kwargs) as stream:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/lib/streaming/_messages.py", line 317, in __aenter__
    raw_stream = await self.__api_request
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1992, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1777, in request
    raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYmAgfVtvddayzZkMaNAa'}

python/anthropic - Vision LLM Test (async, blocking)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-async-blocking.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-async-blocking.py", line 46, in <module>
    asyncio.run(main())
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-async-blocking.py", line 41, in main
    response = await client.messages.create(**kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 606, in _sentry_patched_create_async
    return await _execute_async(f, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 594, in _execute_async
    reraise(*exc_info)
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/utils.py", line 1785, in reraise
    raise value
  File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 589, in _execute_async
    result = await f(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/resources/messages/messages.py", line 2331, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1992, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1777, in request
    raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYmAgdkyCFefcLhQdpBhA'}

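Retrying only reacts after a 429 has already happened. A complementary sketch: a client-side throttle that spaces requests to stay under the reported 5-requests-per-minute limit. This is a sliding-window counter over hypothetical names (`MinuteThrottle`, `acquire`); a production fix might instead read the rate-limit response headers the API mentions. The demo passes explicit `now` values and skips the real sleep so the behavior is deterministic.

```python
import time
from collections import deque


class MinuteThrottle:
    """Sliding-window limiter: at most max_per_minute requests per 60s."""

    def __init__(self, max_per_minute=5):
        self.max = max_per_minute
        self.stamps = deque()  # monotonic timestamps of recent requests

    def acquire(self, now=None):
        """Reserve a request slot; returns the wait (seconds) that applies."""
        now = time.monotonic() if now is None else now
        # Discard timestamps that have aged out of the 60s window.
        while self.stamps and now - self.stamps[0] >= 60:
            self.stamps.popleft()
        wait = 0.0
        if len(self.stamps) >= self.max:
            # Window is full: wait until the oldest request leaves it.
            wait = 60 - (now - self.stamps[0])
            # time.sleep(wait)  # in real use; skipped here for determinism
            now += wait
            self.stamps.popleft()
        self.stamps.append(now)
        return wait


# Demo: five requests in quick succession are free; the sixth must wait.
th = MinuteThrottle(max_per_minute=5)
waits = [th.acquire(now=float(t)) for t in range(5)]
sixth = th.acquire(now=10.0)
print(waits, sixth)  # → [0.0, 0.0, 0.0, 0.0, 0.0] 50.0
```

Calling `th.acquire()` (no argument) before each `client.messages.create` would then pace the test suite under the assumed limit.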
python/anthropic - Long Input LLM Test (sync, streaming)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-long-input-llm-test-sync-streaming.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-long-input-llm-test-sync-streaming.py", line 48, in <module>
    main()
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-long-input-llm-test-sync-streaming.py", line 37, in main
    with client.messages.stream(**kwargs) as stream:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/lib/streaming/_messages.py", line 167, in __enter__
    raw_stream = self.__api_request()
                 ^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1364, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1137, in request
    raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYmAhbgEkwhKYn9KkyCxk'}


python/anthropic - Long Input LLM Test (async, streaming)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-long-input-llm-test-async-streaming.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-long-input-llm-test-async-streaming.py", line 49, in <module>
    asyncio.run(main())
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/test-long-input-llm-test-async-streaming.py", line 38, in main
    async with client.messages.stream(**kwargs) as stream:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/lib/streaming/_messages.py", line 317, in __aenter__
    raw_stream = await self.__api_request
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1992, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1777, in request
    raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYmAiS7o4M435YSTquQP8'}
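All of the anthropic failures above are the same 429: the org is capped at 5 requests/minute for `claude-haiku-4-5-20251001`, and the error body itself suggests retrying later. A minimal sketch of the kind of client-side backoff the test harness could wrap around these calls — the helper name, retry counts, and delays are illustrative, not part of the harness, and a stub exception stands in for `anthropic.RateLimitError` to keep the sketch self-contained:

```python
import time

class RateLimitError(Exception):
    """Stand-in for anthropic.RateLimitError so the sketch stays stdlib-only."""

def with_backoff(call, retries=4, base_delay=2.0, exc=RateLimitError):
    # Retry `call` on rate-limit errors, doubling the wait after each attempt.
    for attempt in range(retries):
        try:
            return call()
        except exc:
            if attempt == retries - 1:
                raise  # out of retries: surface the original error
            time.sleep(base_delay * (2 ** attempt))

# Demo: a call that rate-limits twice, then succeeds on the third attempt.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RateLimitError("429")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # → ok
```

Alternatively, spacing the Long Input / Multi-Turn tests out (they are the slowest and heaviest on tokens) would keep the run under the 5 rpm limit without retries.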


python/langchain - Basic Error LLM Test (sync, streaming)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/langchain-1.2.8-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-sync-streaming.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-sync-streaming.py", line 13, in <module>
    import respx
ModuleNotFoundError: No module named 'respx'


python/langchain - Basic Error LLM Test (sync, blocking)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/langchain-1.2.8-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-sync-blocking.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-sync-blocking.py", line 13, in <module>
    import respx
ModuleNotFoundError: No module named 'respx'


python/langchain - Basic Error LLM Test (async, streaming)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/langchain-1.2.8-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-async-streaming.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-async-streaming.py", line 14, in <module>
    import respx
ModuleNotFoundError: No module named 'respx'


python/langchain - Basic Error LLM Test (async, blocking)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/langchain-1.2.8-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-async-blocking.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-async-blocking.py", line 14, in <module>
    import respx
ModuleNotFoundError: No module named 'respx'
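Every Basic Error LLM Test failure in this run — langchain, litellm, and openai alike — is this same missing dependency: `respx` is not installed in the generated venvs, so the scripts die at import. Adding `respx` to the harness's requirements is the real fix; as a stopgap, the generated scripts could probe for it up front and report something more actionable than a bare `ModuleNotFoundError` (the message text below is illustrative):

```python
import importlib.util

# Probe for the mock-HTTP dependency before importing it; the error-path
# tests use respx to stub provider responses, so its absence should be
# surfaced as a setup problem, not a test failure.
have_respx = importlib.util.find_spec("respx") is not None
if not have_respx:
    print("missing test dependency 'respx'; add it to the harness requirements")
```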


python/litellm - Basic LLM Test (async, streaming)

Error: 3 check(s) failed:

Should have exactly 1 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one chat or agent span
python/litellm - Basic LLM Test (async, blocking)

Error: 3 check(s) failed:

Should have exactly 1 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one chat or agent span
python/litellm - Multi-Turn LLM Test (async, streaming)

Error: 3 check(s) failed:

Should have exactly 3 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one chat or agent span
python/litellm - Multi-Turn LLM Test (async, blocking)

Error: 3 check(s) failed:

Should have exactly 3 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one chat or agent span
python/litellm - Basic Error LLM Test (sync, streaming)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/litellm-1.81.6-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-sync-streaming.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-sync-streaming.py", line 16, in <module>
    import respx
ModuleNotFoundError: No module named 'respx'


python/litellm - Basic Error LLM Test (sync, blocking)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/litellm-1.81.6-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-sync-blocking.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-sync-blocking.py", line 16, in <module>
    import respx
ModuleNotFoundError: No module named 'respx'


python/litellm - Basic Error LLM Test (async, streaming)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/litellm-1.81.6-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-async-streaming.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-async-streaming.py", line 17, in <module>
    import respx
ModuleNotFoundError: No module named 'respx'


python/litellm - Basic Error LLM Test (async, blocking)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/litellm-1.81.6-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-async-blocking.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-async-blocking.py", line 17, in <module>
    import respx
ModuleNotFoundError: No module named 'respx'


python/litellm - Vision LLM Test (async, streaming)

Error: 3 check(s) failed:

Should have at least one chat/completion span
Should have at least one chat or agent span
Should have at least one chat or agent span
python/litellm - Vision LLM Test (async, blocking)

Error: 3 check(s) failed:

Should have at least one chat/completion span
Should have at least one chat or agent span
Should have at least one chat or agent span
python/litellm - Long Input LLM Test (async, streaming)

Error: 2 check(s) failed:

Should have at least one chat/completion span
Should have at least one chat or agent span
python/litellm - Long Input LLM Test (async, blocking)

Error: 2 check(s) failed:

Should have at least one chat/completion span
Should have at least one chat or agent span
python/openai - Basic LLM Test (sync, streaming)

Error: 2 check(s) failed:

Attribute validation failed:
  Span 97e1740c: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
  Span 97e1740c: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
  input_tokens must exist
  output_tokens must exist
  total_tokens must exist
python/openai - Basic LLM Test (async, streaming)

Error: 2 check(s) failed:

Attribute validation failed:
  Span b8714d32: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
  Span b8714d32: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
  input_tokens must exist
  output_tokens must exist
  total_tokens must exist
python/openai - Multi-Turn LLM Test (sync, streaming)

Error: 3 check(s) failed:

Attribute validation failed:
  Span b6edf1c7: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
  Span b6edf1c7: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
  Span 837ef3dd: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
  Span 837ef3dd: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
  Span 807e1419: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
  Span 807e1419: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
  input_tokens must exist
  output_tokens must exist
  total_tokens must exist
Input token progression failed: tokens should increase with each turn
python/openai - Multi-Turn LLM Test (async, streaming)

Error: 3 check(s) failed:

Attribute validation failed:
  Span 944bfa64: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
  Span 944bfa64: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
  Span bca60713: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
  Span bca60713: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
  Span 9c6d040a: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
  Span 9c6d040a: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
  input_tokens must exist
  output_tokens must exist
  total_tokens must exist
Input token progression failed: tokens should increase with each turn
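The openai streaming failures above all reduce to missing `gen_ai.usage.*` attributes. With the Chat Completions API, a streamed response only reports token usage when the request opts in via `stream_options`; without that opt-in there is nothing for the instrumentation to record. A sketch of the opt-in (whether the harness or the integration is responsible for setting it is not established here; the model name and prompt are placeholders):

```python
# Request kwargs for a streamed Chat Completions call. Without
# stream_options={"include_usage": True} no chunk carries token counts;
# with it, a final extra chunk's `usage` field reports the totals.
kwargs = {
    "model": "gpt-4.1-mini",
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
    "stream_options": {"include_usage": True},
}
print(kwargs["stream_options"])  # → {'include_usage': True}
```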
python/openai - Basic Error LLM Test (sync, streaming)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/openai-2.16.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-sync-streaming.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-sync-streaming.py", line 12, in <module>
    import respx
ModuleNotFoundError: No module named 'respx'


python/openai - Basic Error LLM Test (sync, blocking)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/openai-2.16.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-sync-blocking.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-sync-blocking.py", line 12, in <module>
    import respx
ModuleNotFoundError: No module named 'respx'


python/openai - Basic Error LLM Test (async, streaming)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/openai-2.16.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-async-streaming.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-async-streaming.py", line 13, in <module>
    import respx
ModuleNotFoundError: No module named 'respx'

python/openai - Basic Error LLM Test (async, blocking)

Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/openai-2.16.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-async-blocking.py

Traceback (most recent call last):
  File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/121da677853244cedfe11e95184b2b431af102eb/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-async-blocking.py", line 13, in <module>
    import respx
ModuleNotFoundError: No module named 'respx'
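
Note: every Basic Error LLM Test failure above is the same root cause. respx is an HTTP mocking library for httpx (the transport the OpenAI SDK uses), and the error-path test scripts import it to simulate API failures; the tracebacks mean the test venv was built without it, so the likely fix is adding respx to the run's test dependencies. A minimal guard (hypothetical, not taken from the test scripts) that reports the missing dependency instead of crashing at import time:

```python
import importlib.util

# find_spec returns None when the module is not installed in this venv.
HAS_RESPX = importlib.util.find_spec("respx") is not None
if not HAS_RESPX:
    print("SKIP: respx not installed; add it to the test environment "
          "(e.g. `pip install respx`)")
```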

python/openai - Vision LLM Test (sync, streaming)

Error: 3 check(s) failed:

Attribute validation failed:
  Span a8ff4697: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
  Span a8ff4697: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
  input_tokens must exist
  output_tokens must exist
  total_tokens must exist
Messages should not contain raw base64 data (should be redacted)
Messages should contain '[Blob substitute]' marker indicating binary content was redacted
python/openai - Vision LLM Test (sync, blocking)

Error: 1 check(s) failed:

Messages should not contain raw base64 data (should be redacted)
Messages should contain '[Blob substitute]' marker indicating binary content was redacted
python/openai - Vision LLM Test (async, streaming)

Error: 3 check(s) failed:

Attribute validation failed:
  Span 8c17454d: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
  Span 8c17454d: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
  input_tokens must exist
  output_tokens must exist
  total_tokens must exist
Messages should not contain raw base64 data (should be redacted)
Messages should contain '[Blob substitute]' marker indicating binary content was redacted
python/openai - Vision LLM Test (async, blocking)

Error: 1 check(s) failed:

Messages should not contain raw base64 data (should be redacted)
Messages should contain '[Blob substitute]' marker indicating binary content was redacted
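
Note: all four Vision variants fail the same redaction check: captured messages still contain the raw base64 image payload instead of the "[Blob substitute]" marker the validator expects. The sketch below is a hypothetical illustration of what the check is asking for (replacing inline base64 data URIs with the marker before message content is recorded on the span), not the SDK's actual implementation:

```python
import base64
import re

# Marker the failing checks expect in place of raw binary payloads.
BLOB_MARKER = "[Blob substitute]"
# Matches inline data URIs like "data:image/png;base64,iVBOR..."
DATA_URI = re.compile(r"data:[\w.+-]+/[\w.+-]+;base64,[A-Za-z0-9+/=]+")

def redact_blobs(text: str) -> str:
    """Replace inline base64 data URIs with the redaction marker."""
    return DATA_URI.sub(BLOB_MARKER, text)

# Illustrative message containing a fake base64 image payload.
payload = "data:image/png;base64," + base64.b64encode(b"\x89PNG-fake-bytes").decode()
message = "What is in this image? " + payload
redacted = redact_blobs(message)
```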
python/openai - Long Input LLM Test (sync, streaming)

Error: 1 check(s) failed:

Attribute validation failed:
  Span 90dc998b: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
  Span 90dc998b: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
python/openai - Long Input LLM Test (async, streaming)

Error: 1 check(s) failed:

Attribute validation failed:
  Span ba3c86fe: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
  Span ba3c86fe: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
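
Note: every missing gen_ai.usage.input_tokens / gen_ai.usage.output_tokens failure is a streaming variant, while the blocking variants pass. One plausible cause: the OpenAI Chat Completions API only reports token usage on streamed responses when the request opts in via stream_options={"include_usage": True}; the final chunk then carries a usage object the instrumentation can record. Sketch of the request shape only (no network call; the model and messages are placeholders):

```python
def build_streaming_request(model, messages):
    """Build a Chat Completions request body that asks for usage on streams."""
    return {
        "model": model,
        "messages": messages,
        "stream": True,
        # Without this flag, no chunk in the stream includes a `usage`
        # object, which matches the missing-attribute failures above.
        "stream_options": {"include_usage": True},
    }

req = build_streaming_request("gpt-4o-mini", [{"role": "user", "content": "hi"}])
```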

This issue was automatically created by the AI Integration Testing framework.
