The official Python client for TaskForceAI's multi-agent orchestration platform.
- ✅ Sync + async clients powered by httpx
- ✅ Automatic authentication with your TaskForceAI API key
- ✅ Convenience helpers for polling task completion
- ✅ Rich error handling with status codes and retry-ready exceptions
- ✅ Mock mode for development without an API key
```bash
python -m pip install taskforceai
```

```python
from taskforceai import TaskForceAIClient

client = TaskForceAIClient(api_key="your-api-key")
task_id = client.submit_task("Analyze the security posture of this repository.")
status = client.wait_for_completion(task_id)
print(status["result"])
```

```python
# Select a specific model (Pro/Super plans)
task_id = client.submit_task(
    "Draft a quarterly strategy update.",
    model_id="xai/grok-4.1",
)

# Bring your own Vercel AI Gateway key
task_id = client.submit_task(
    "Custom model run",
    vercel_ai_key="sk-vercel-your-gateway-key",
)

# Forward arbitrary TaskForceAI orchestration options
task_id = client.submit_task(
    "Do a full repository risk review",
    options={"agents": 4, "budget": 12},
)
```

Build and test your integration without an API key using mock mode:
```python
from taskforceai import TaskForceAIClient

# No API key required in mock mode
client = TaskForceAIClient(mock_mode=True)

result = client.run_task("Test your integration")
print(result["result"])  # "This is a mock response. Configure your API key to get real results."
```

Mock mode simulates the full task lifecycle locally; no network requests are made. Tasks go through "processing" then "completed" states, making it easy to build UIs and test error handling before launch.
```python
import asyncio

from taskforceai import AsyncTaskForceAIClient

async def main() -> None:
    async with AsyncTaskForceAIClient(api_key="your-api-key") as client:
        result = await client.run_task("Summarize the latest launch notes.")
        print(result["result"])

asyncio.run(main())
```

```python
from taskforceai import TaskForceAIClient

client = TaskForceAIClient(api_key="your-api-key")
stream = client.run_task_stream("Map open security issues", poll_interval=0.5)
for status in stream:
    print(f"{status['status']}: {status.get('result')}")

# Cancel locally if needed
# stream.cancel()
```

Async projects can use `AsyncTaskForceAIClient.stream_task_status()` and iterate with `async for status in stream` for non-blocking workflows.
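As a sketch of that non-blocking pattern, here is a generic consumer that works with any async iterator of status payloads, such as the one returned by `stream_task_status()`. The `"failed"` terminal state is an assumption; the docs here only show `"processing"` and `"completed"`:

```python
import asyncio

async def drain(stream):
    """Consume an async status stream, returning the final payload seen."""
    final = None
    async for status in stream:
        final = status
        if status["status"] in ("completed", "failed"):  # "failed" is assumed
            break
    return final

# Standalone demo with a stand-in stream shaped like the SDK's payloads:
async def fake_stream():
    yield {"status": "processing"}
    yield {"status": "completed", "result": "done"}

print(asyncio.run(drain(fake_stream()))["result"])  # done
```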
Both clients expose the same methods:
- `submit_task(prompt, *, options=None, silent=None, mock=None, model_id=None, vercel_ai_key=None) -> str`
- `get_task_status(task_id) -> dict`
- `get_task_result(task_id) -> dict`
- `wait_for_completion(task_id, poll_interval=2.0, max_attempts=150, on_status=None) -> dict`
- `run_task(prompt, ..., on_status=None) -> dict`
- `stream_task_status(task_id, ..., on_status=None) -> Iterator`
- `run_task_stream(prompt, ..., on_status=None) -> Iterator`
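The `on_status=` hooks take a callable that receives each raw status payload as it is polled. A minimal sketch; the payload shape follows the `status`/`result` keys used in the examples above, and the collector itself is plain Python:

```python
def make_status_recorder():
    """Return an on_status callback plus the history list it appends to.

    Pass `record` as on_status= to wait_for_completion / run_task to keep a
    history of every state the poller observes.
    """
    history: list = []

    def record(status: dict) -> None:
        history.append(status["status"])

    return record, history

# Standalone demonstration with sample payloads shaped like the SDK's:
record, history = make_status_recorder()
for payload in ({"status": "processing"}, {"status": "completed", "result": "done"}):
    record(payload)
print(history)  # ['processing', 'completed']
```

In real code this would be wired up as `client.wait_for_completion(task_id, on_status=record)`.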
Both clients accept `response_hook=` in their constructors. The hook is invoked with the raw `httpx.Response` (headers included) for every request, making it easy to track rate-limit headers, request IDs, or emit custom metrics without wrapping the SDK.
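For example, a hook that stashes the most recent rate-limit headers might look like the sketch below. The `x-ratelimit-*` header names are an assumption (check the headers your account actually returns), and a `SimpleNamespace` stands in for `httpx.Response` in the demo so it runs without a live request:

```python
from types import SimpleNamespace

rate_limits: dict = {}

def track_rate_limits(response) -> None:
    """response_hook: record rate-limit headers from each response."""
    for name in ("x-ratelimit-limit", "x-ratelimit-remaining"):  # assumed names
        value = response.headers.get(name)
        if value is not None:
            rate_limits[name] = value

# In real code: TaskForceAIClient(api_key="...", response_hook=track_rate_limits)
# Standalone demo with a stand-in for httpx.Response:
track_rate_limits(SimpleNamespace(headers={"x-ratelimit-remaining": "42"}))
print(rate_limits)  # {'x-ratelimit-remaining': '42'}
```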
All responses mirror the REST API payloads. Errors raise `TaskForceAIError`, which includes `status_code` for quick branching.
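A sketch of retry-ready branching on `status_code`; the retryable code set and backoff values are illustrative, and in real code you would catch `TaskForceAIError` specifically rather than bare `Exception`:

```python
import time

RETRYABLE = {429, 500, 502, 503, 504}  # illustrative choice of retryable codes

def submit_with_retry(submit, prompt, attempts=3, backoff=1.0):
    """Call submit(prompt), retrying with exponential backoff on retryable errors.

    `submit` would be e.g. client.submit_task; in real code, catch
    TaskForceAIError instead of Exception and branch on err.status_code.
    """
    for attempt in range(attempts):
        try:
            return submit(prompt)
        except Exception as err:
            status = getattr(err, "status_code", None)
            if status not in RETRYABLE or attempt == attempts - 1:
                raise
            time.sleep(backoff * 2 ** attempt)
```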
```bash
python -m pip install -e "packages/python-sdk[dev]"
pytest packages/python-sdk/tests
ruff format packages/python-sdk/src packages/python-sdk/tests -q
ruff check packages/python-sdk/src packages/python-sdk/tests
mypy --config-file packages/python-sdk/pyproject.toml packages/python-sdk/src
```

MIT