
API Reference

Complete reference documentation for the Consoul SDK.

Consoul Class

The main SDK interface for integrating AI chat into your applications.

consoul.sdk.Consoul

Consoul(
    model: str | None = None,
    tools: bool | str | list[str | BaseTool] | None = True,
    temperature: float | None = None,
    system_prompt: str | None = None,
    persist: bool = True,
    api_key: str | None = None,
    discover_tools: bool = False,
    approval_provider: ApprovalProvider | None = None,
    context_providers: list[Any] | None = None,
    db_path: Path | str | None = None,
    summarize: bool = False,
    summarize_threshold: int = 20,
    keep_recent: int = 10,
    summary_model: str | None = None,
    **model_kwargs: Any,
)

High-level Consoul SDK interface.

The easiest way to add AI chat to your Python application.

Examples:

Basic chat:

>>> console = Consoul()
>>> console.chat("Hello!")
'Hi! How can I help you?'

With tools:

>>> console = Consoul(tools=True)
>>> console.chat("List files")

Custom model:

>>> console = Consoul(model="gpt-4o")
>>> response = console.ask("Explain", show_tokens=True)
>>> print(f"Tokens: {response.tokens}")

Introspection:

>>> console.settings
{'model': 'claude-3-5-sonnet-20241022', 'temperature': 0.7, ...}
>>> console.last_cost

Initialize Consoul SDK.

Parameters:

Name Type Description Default
model str | None

Model name (e.g., "gpt-4o", "claude-3-5-sonnet"). Auto-detects provider. If not specified, falls back to config's current model.

None
tools bool | str | list[str | BaseTool] | None

Tool specification. Supports multiple formats:

- True: Enable all built-in tools (default)
- False/None: Disable all tools (chat-only mode)
- "safe"/"caution"/"dangerous": Risk-level filtering
- "search"/"file-edit"/"web"/"execute": Category filtering
- ["bash", "grep"]: Specific tools by name
- ["search", "web"]: Multiple categories
- ["search", "bash"]: Mix categories and tools
- [custom_tool, "bash"]: Mix custom and built-in tools

Security guidelines:

- SAFE: Read-only operations (grep, code_search, web_search)
- CAUTION: File operations and command execution (requires oversight)
- Start with tools="safe" for untrusted AI interactions
- Apply the principle of least privilege (grant only the tools you need)
- Always use version control (git) when enabling file operations

True
temperature float | None

Override temperature (0.0-2.0)

None
system_prompt str | None

Override system prompt

None
persist bool

Save conversation history (default: True)

True
api_key str | None

Override API key (falls back to environment)

None
discover_tools bool

Auto-discover tools from .consoul/tools/ (default: False). Discovered tools default to the CAUTION risk level.

False
approval_provider ApprovalProvider | None

Custom approval provider for tool execution. If None, defaults to CliApprovalProvider (terminal prompts). Use this for web backends, WebSocket/SSE, or custom UX. See examples/sdk/web_approval_provider.py for reference.

None
context_providers list[Any] | None

List of context providers implementing ContextProvider protocol. Each provider's get_context() is called before building system prompts, injecting dynamic context from databases, APIs, or runtime sources. See examples/sdk/context_providers/ for domain-specific examples.

None
db_path Path | str | None

Path to conversation history database (default: ~/.consoul/history.db).

None
summarize bool

Enable conversation summarization (default: False).

False
summarize_threshold int

Number of messages before summarization (default: 20).

20
keep_recent int

Number of recent messages to keep when summarizing (default: 10).

10
summary_model str | None

Model name for summarization (optional, use cheaper model).

None
**model_kwargs Any

Provider-specific parameters passed to the LLM. These are validated and passed through to the model.

       OpenAI-specific:
       - service_tier: "auto"|"default"|"flex" (flex ~50% cheaper)
       - seed: int (deterministic sampling)
       - logit_bias: dict[str, float] (token likelihood)
       - response_format: dict (json_object, json_schema)
       - frequency_penalty: float (-2.0 to 2.0)
       - presence_penalty: float (-2.0 to 2.0)
       - top_p: float (0.0-1.0)

       Anthropic-specific:
       - thinking: dict (extended thinking config)
       - betas: list[str] (experimental features)
       - metadata: dict (run tracing)
       - top_p: float (0.0-1.0)
       - top_k: int (sampling parameter)

       Google-specific:
       - safety_settings: dict (content filtering)
       - generation_config: dict (response modalities)
       - candidate_count: int (completions to generate)
       - top_p: float (0.0-1.0)
       - top_k: int (sampling parameter)
{}

Raises:

Type Description
ValueError

If invalid parameters provided

MissingAPIKeyError

If no API key found for provider

TypeError

If profile parameter is used (removed in v0.5.0)

Examples:

Basic usage:

>>> console = Consoul(model="gpt-4o")  # Minimal setup
>>> console = Consoul(model="gpt-4o", temperature=0.7, tools=False)
>>> console = Consoul()  # Uses config defaults

Tool specification:

>>> # Disable tools
>>> console = Consoul(tools=False)

>>> # Only safe read-only tools
>>> console = Consoul(tools="safe")

>>> # Specific tools by name
>>> console = Consoul(tools=["bash", "grep", "code_search"])

>>> # Custom tool + built-in
>>> from langchain_core.tools import tool
>>> @tool
... def my_tool(query: str) -> str:
...     return "result"
>>> console = Consoul(tools=[my_tool, "bash"])

Category-based specification:

>>> # All search tools
>>> console = Consoul(tools="search")

>>> # All file editing tools
>>> console = Consoul(tools="file-edit")

>>> # Multiple categories
>>> console = Consoul(tools=["search", "web"])

>>> # Mix categories and specific tools
>>> console = Consoul(tools=["search", "bash"])

Tool discovery:

>>> # Auto-discover tools from .consoul/tools/
>>> console = Consoul(discover_tools=True)

>>> # Combine with specific tools
>>> console = Consoul(tools=["bash", "grep"], discover_tools=True)

>>> # Only discovered tools (no built-in)
>>> console = Consoul(tools=False, discover_tools=True)

Custom approval provider (for web backends):

>>> from examples.sdk.web_approval_provider import WebApprovalProvider
>>> provider = WebApprovalProvider(
...     approval_url="https://api.example.com/approve",
...     auth_token="secret"
... )
>>> console = Consoul(tools=True, approval_provider=provider)
>>> # Tool approvals now go through the web API instead of the terminal

Context providers (domain-specific AI):

>>> # Legal AI with case law context
>>> from examples.sdk.context_providers.legal_context_provider import LegalContextProvider
>>> provider = LegalContextProvider("California", case_database)
>>> console = Consoul(
...     model="gpt-4o",
...     system_prompt="You are a legal assistant...",
...     context_providers=[provider],
...     tools=False
... )

>>> # Multiple context providers composition
>>> console = Consoul(
...     model="gpt-4o",
...     context_providers=[
...         KnowledgeBaseProvider(kb_id="medical"),
...         PatientContextProvider(patient_id="12345"),
...         ComplianceProvider(regulations=["HIPAA"])
...     ]
... )
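The providers above live in the examples directory; writing your own requires only a get_context() method. Here is a minimal hedged sketch — the class name and the warehouse lookup are hypothetical, and the only assumption taken from this reference is that get_context() returns text injected into the system prompt:

class InventoryContextProvider:
    """Hypothetical provider; any object with get_context() -> str works."""

    def __init__(self, warehouse_id: str):
        self.warehouse_id = warehouse_id

    def get_context(self) -> str:
        # A real provider would query a database or API here.
        return f"Warehouse {self.warehouse_id}: stock levels refreshed hourly."

console = Consoul(model="gpt-4o", context_providers=[InventoryContextProvider("wh-7")])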

Provider-specific parameters:

>>> # OpenAI with flex tier (50% cheaper, slower)
>>> console = Consoul(model="gpt-4o", service_tier="flex")

>>> # Anthropic with extended thinking
>>> console = Consoul(
...     model="claude-sonnet-4",
...     thinking={"type": "enabled", "budget_tokens": 10000}
... )

>>> # OpenAI with JSON schema mode
>>> schema = {"type": "object", "properties": {"answer": {"type": "string"}}}
>>> console = Consoul(
...     model="gpt-4o",
...     response_format={"type": "json_schema", "json_schema": schema}
... )

>>> # Google with safety settings
>>> console = Consoul(
...     model="gemini-pro",
...     safety_settings={"HARM_CATEGORY_HARASSMENT": "BLOCK_NONE"}
... )

Profile-free SDK usage (domain-specific apps):

>>> # Legal AI (Richard project) - clean prompt, no env noise
>>> console = Consoul(
...     model="gpt-4o",
...     temperature=0.7,
...     system_prompt="You are a workers' compensation legal assistant...",
...     persist=True,
...     db_path="~/richard/history.db",
...     tools=False,  # Chat-only mode
...     service_tier="flex"  # Cost optimization
... )
>>> # Note: profile-free mode has NO environment/git injection by default.
>>> # For granular context control, use build_enhanced_system_prompt() directly.

>>> # Medical chatbot with summarization
>>> console = Consoul(
...     model="claude-sonnet-4",
...     system_prompt="You are a medical assistant...",
...     summarize=True,
...     summarize_threshold=15,
...     keep_recent=8,
...     summary_model="gpt-4o-mini"  # Cheaper model for summaries
... )

>>> # Customer support bot
>>> console = Consoul(
...     model="gpt-4o",
...     system_prompt="You are a customer support agent...",
...     tools=["web_search", "read_url"],  # Only specific tools
... )
Available tool categories

search, file-edit, web, execute

Available tool names

bash, grep, code_search, find_references, create_file, edit_lines, edit_replace, append_file, delete_file, read_url, web_search

Tool discovery

When discover_tools=True, Consoul will scan .consoul/tools/ for custom tools. Create a .consoul/tools/ directory in your project and add Python files with @tool decorated functions:

.consoul/tools/my_tool.py:
    from langchain_core.tools import tool

    @tool
    def my_custom_tool(query: str) -> str:
        '''My custom tool description.'''
        return process(query)

All discovered tools default to RiskLevel.CAUTION for safety.

last_cost property

last_cost: dict[str, Any]

Get token usage and the cost of the last request.

Returns:

Type Description
dict[str, Any]

Dictionary with input_tokens, output_tokens, total_tokens, and estimated cost

Examples:

>>> console.chat("Hello")
>>> console.last_cost
{'input_tokens': 87, 'output_tokens': 12, 'total_tokens': 99, ...}
Note

Token counts are accurate when available from the model provider's usage_metadata. Falls back to conversation history token counting if unavailable. Cost calculations use model-specific pricing data from major providers (OpenAI, Anthropic, Google). Includes support for prompt caching costs.
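For example, a rough session spend tracker (a sketch: the token keys are shown verbatim above, but the exact key holding the estimated cost is an assumption):

total_cost = 0.0
for question in ["Hello", "Summarize our conversation"]:
    console.chat(question)
    usage = console.last_cost
    print(f"{usage['total_tokens']} tokens for: {question}")
    total_cost += usage.get("cost", 0.0)  # 'cost' key name is an assumption
print(f"Estimated session total: ${total_cost:.4f}")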

last_request property

last_request: dict[str, Any] | None

Get details about the last API request.

Returns:

Type Description
dict[str, Any] | None

Dictionary with message, model, token count, etc.

dict[str, Any] | None

None if no requests made yet.

Examples:

>>> console.chat("Hello")
>>> console.last_request
{'message': 'Hello', 'model': 'claude-3-5-sonnet-20241022', ...}

settings property

settings: dict[str, Any]

Get current configuration settings.

Returns:

Type Description
dict[str, Any]

Dictionary with model, profile, tools, and other settings

Examples:

>>> console.settings
{'model': 'claude-3-5-sonnet-20241022', 'temperature': 0.7, ...}

ask

ask(
    message: str, show_tokens: bool = False
) -> ConsoulResponse

Send a message and get a rich response with metadata.

Parameters:

Name Type Description Default
message str

Your message

required
show_tokens bool

Include token count in response

False

Returns:

Type Description
ConsoulResponse

ConsoulResponse with content, tokens, and model info

Examples:

>>> response = console.ask("Hello", show_tokens=True)
>>> print(response.content)
>>> print(f"Tokens: {response.tokens}")

chat

chat(message: str) -> str

Send a message and get a response.

This is a stateful method - conversation history is maintained across multiple calls.

Parameters:

Name Type Description Default
message str

Your message to the AI

required

Returns:

Type Description
str

AI's response as a string

Examples:

>>> console.chat("What is 2+2?")
'4'
>>> console.chat("What about 3+3?")  # Remembers context
'6'
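Because chat() keeps history across calls, a terminal REPL needs no extra state (a minimal sketch):

console = Consoul()
while True:
    user_input = input("> ")
    if user_input in {"exit", "quit"}:
        break
    print(console.chat(user_input))  # history carries over between turns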

clear

clear() -> None

Clear conversation history and start fresh.

The system prompt is preserved.

Examples:

>>> console.chat("Hello")
>>> console.chat("Remember me?")  # AI remembers
>>> console.clear()
>>> console.chat("Remember me?")  # AI doesn't remember

ConsoulResponse Class

Structured response object returned by Consoul.ask().

consoul.sdk.ConsoulResponse

ConsoulResponse(
    content: str, tokens: int = 0, model: str = ""
)

Response from Consoul chat/ask methods.

Attributes:

Name Type Description
content

The AI's response text

tokens

Number of tokens used (if requested)

model

Model name that generated the response

Examples:

>>> response = console.ask("Hello", show_tokens=True)
>>> print(response.content)
>>> print(f"Tokens: {response.tokens}")

Initialize response.

Parameters:

Name Type Description Default
content str

Response text

required
tokens int

Token count

0
model str

Model name

''

__repr__

__repr__() -> str

Return detailed representation.

__str__

__str__() -> str

Return content as string for easy printing.
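In practice this means a response object can be passed anywhere a display string is expected (a small sketch; the exact __repr__ format is not specified in this reference):

response = console.ask("Hello", show_tokens=True)
print(response)        # __str__: just the content text
print(repr(response))  # __repr__: detailed representation (format unspecified)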

Tool Registry

Manage and configure tools for AI agents.

consoul.ai.tools.registry.ToolRegistry

ToolRegistry(
    config: ToolConfig,
    approval_provider: ApprovalProvider | None = None,
    audit_logger: AuditLogger | None = None,
)

Central registry for managing LangChain tools.

Handles tool registration, configuration, security policy enforcement, and binding tools to chat models for tool calling.

This registry is SDK-ready and works without TUI dependencies.

IMPORTANT - Approval Workflow Coordination

The registry provides approval caching (once_per_session mode) but does NOT handle user approval itself. You must implement an ApprovalProvider (see SOUL-66) that:

  1. Checks registry.needs_approval(tool_name)
  2. If True: Shows approval UI and gets user decision
  3. If approved: Calls registry.mark_approved(tool_name)
  4. Executes the tool

The registry-level caching is an optimization to avoid showing the approval modal multiple times for the same tool. It does NOT replace the approval provider.
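A hedged sketch of that contract follows. show_approval_ui is a hypothetical helper you supply (CLI prompt, web modal, etc.); everything else uses only the registry methods documented below:

def run_tool(registry, tool_name: str, arguments: dict):
    if registry.needs_approval(tool_name, arguments):
        if not show_approval_ui(tool_name, arguments):  # hypothetical UI hook
            raise PermissionError(f"User declined {tool_name}")
        registry.mark_approved(tool_name)  # cache approval for this session
    tool = registry.get_tool(tool_name).tool  # underlying LangChain BaseTool
    return tool.invoke(arguments)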

Example

from consoul.config.models import ToolConfig
from consoul.ai.tools import ToolRegistry, RiskLevel
from langchain_core.tools import tool

@tool
def my_tool(x: int) -> int:
    '''Example tool'''
    return x * 2

config = ToolConfig(enabled=True, timeout=30)
registry = ToolRegistry(config)
registry.register(my_tool, risk_level=RiskLevel.SAFE)
tools_list = registry.list_tools()
assert len(tools_list) == 1

Initialize tool registry with configuration.

Parameters:

Name Type Description Default
config ToolConfig

ToolConfig instance controlling tool behavior

required
approval_provider ApprovalProvider | None

Optional approval provider for tool execution. If None, attempts to use TuiApprovalProvider (if the TUI is available) or raises an error if no provider is available.

None
audit_logger AuditLogger | None

Optional audit logger for tool execution tracking. If None, creates FileAuditLogger or NullAuditLogger based on config.

None

register

register(
    tool: BaseTool,
    risk_level: RiskLevel = RiskLevel.SAFE,
    tags: list[str] | None = None,
    enabled: bool = True,
) -> None

Register a LangChain tool in the registry.

Parameters:

Name Type Description Default
tool BaseTool

LangChain BaseTool instance (decorated with @tool)

required
risk_level RiskLevel

Security risk classification for this tool

SAFE
tags list[str] | None

Optional tags for categorization

None
enabled bool

Whether tool is enabled (overrides global config.enabled)

True

Raises:

Type Description
ToolValidationError

If tool is invalid or already registered

Example

from langchain_core.tools import tool

@tool
def bash_execute(command: str) -> str:
    '''Execute bash command'''
    return "output"

registry.register(bash_execute, risk_level=RiskLevel.DANGEROUS)

unregister

unregister(tool_name: str) -> None

Remove a tool from the registry.

Parameters:

Name Type Description Default
tool_name str

Name of tool to unregister

required

Raises:

Type Description
ToolNotFoundError

If tool is not registered

get_tool

get_tool(tool_name: str) -> ToolMetadata

Retrieve tool metadata by name.

Parameters:

Name Type Description Default
tool_name str

Name of the tool to retrieve

required

Returns:

Type Description
ToolMetadata

ToolMetadata instance for the requested tool

Raises:

Type Description
ToolNotFoundError

If tool is not registered

Example

metadata = registry.get_tool("bash_execute")
assert metadata.risk_level == RiskLevel.DANGEROUS

list_tools

list_tools(
    enabled_only: bool = False,
    risk_level: RiskLevel | None = None,
    tags: list[str] | None = None,
) -> list[ToolMetadata]

List registered tools with optional filtering.

Parameters:

Name Type Description Default
enabled_only bool

Only return enabled tools

False
risk_level RiskLevel | None

Filter by risk level

None
tags list[str] | None

Filter by tags (tool must have ALL specified tags)

None

Returns:

Type Description
list[ToolMetadata]

List of ToolMetadata instances matching filters

Example

safe_tools = registry.list_tools(risk_level=RiskLevel.SAFE)
enabled_tools = registry.list_tools(enabled_only=True)

bind_to_model

bind_to_model(
    model: BaseChatModel,
    tool_names: list[str] | None = None,
) -> BaseChatModel

Bind registered tools to a LangChain chat model.

This enables the model to call tools via the tool calling API.

Parameters:

Name Type Description Default
model BaseChatModel

LangChain BaseChatModel instance

required
tool_names list[str] | None

Optional list of specific tools to bind (default: all enabled tools)

None

Returns:

Type Description
BaseChatModel

Model with tools bound (via bind_tools())

Raises:

Type Description
ToolNotFoundError

If a requested tool is not registered

Example

from consoul.ai import get_chat_model

chat_model = get_chat_model("claude-3-5-sonnet-20241022")
model_with_tools = registry.bind_to_model(chat_model)
# Model can now request tool executions

needs_approval

needs_approval(
    tool_name: str, arguments: dict[str, Any] | None = None
) -> bool

Determine if tool execution requires user approval.

IMPORTANT: This method checks registry-level approval caching ONLY. It does NOT invoke the approval provider (see SOUL-66). The approval provider must check needs_approval() first, then show approval UI if needed.

Workflow:

1. Approval provider calls registry.needs_approval(tool_name, arguments)
2. If True: Show approval modal/prompt, get user decision
3. If user approves: Call registry.mark_approved(tool_name)
4. Execute tool

Based on approval_mode/permission_policy configuration:

- 'always': Always require approval (PARANOID policy)
- 'risk_based': Based on risk level vs threshold (BALANCED/TRUSTING policies)
- 'once_per_session': Require approval on first use, then cache the approval
- 'whitelist': Only require approval for tools not in allowed_tools
- 'never': Never require approval (UNRESTRICTED policy - DANGEROUS)

Special handling for bash_execute:

- Checks the command-level whitelist from BashToolConfig.whitelist_patterns
- Whitelisted commands bypass approval even in 'always' mode

Parameters:

Name Type Description Default
tool_name str

Name of tool to check

required
arguments dict[str, Any] | None

Optional tool arguments (used for command-level whitelist and risk assessment)

None

Returns:

Type Description
bool

True if approval UI should be shown, False if cached/whitelisted

Example

config = ToolConfig(approval_mode="once_per_session")
registry = ToolRegistry(config)
registry.needs_approval("bash_execute")  # True (first time)

... approval provider shows modal, user approves ...

registry.mark_approved("bash_execute")
registry.needs_approval("bash_execute")  # False (cached, skip modal)

Command-level whitelist

config = ToolConfig(bash=BashToolConfig(whitelist_patterns=["git status"]))
registry.needs_approval("bash_execute", {"command": "git status"})  # False (whitelisted)

Risk-based approval (BALANCED policy)

from consoul.ai.tools.permissions import PermissionPolicy

config = ToolConfig(permission_policy=PermissionPolicy.BALANCED)
registry = ToolRegistry(config)
# SAFE commands auto-approved, CAUTION+ require approval
Warning

Never execute tools when needs_approval() returns True without going through the approval provider first. The registry-level caching is an optimization, not a replacement for user approval.

mark_approved

mark_approved(tool_name: str) -> None

Mark a tool as approved for this session.

IMPORTANT: This method should ONLY be called by the approval provider AFTER the user has explicitly approved the tool execution. Never call this method directly without user approval.

Used with 'once_per_session' approval mode to cache approval decisions so the user doesn't need to approve the same tool multiple times in one session.

Parameters:

Name Type Description Default
tool_name str

Name of tool to mark as approved

required
Warning

Calling this method bypasses the approval UI for future executions of this tool in the current session. Only call after explicit user approval through the approval provider (SOUL-66).

Example
In an approval provider implementation:

if registry.needs_approval("bash"):
    user_approved = show_approval_modal("bash", args)
    if user_approved:
        registry.mark_approved("bash")  # Cache approval
        # Now execute the tool

Tool Metadata

Tool configuration and metadata structures.

consoul.ai.tools.base.ToolMetadata dataclass

ToolMetadata(
    name: str,
    description: str,
    risk_level: RiskLevel,
    tool: BaseTool,
    schema: dict[str, Any],
    enabled: bool = True,
    tags: list[str] | None = None,
    categories: list[ToolCategory] | None = None,
)

Metadata for a registered tool.

Stores information about a tool's configuration, schema, risk level, and the underlying LangChain tool instance.

Attributes:

Name Type Description
name str

Tool name (used for lookups and binding to models)

description str

Human-readable description of what the tool does

risk_level RiskLevel

Security risk classification

tool BaseTool

The LangChain BaseTool instance

schema dict[str, Any]

JSON schema for tool arguments (auto-generated from tool)

enabled bool

Whether this tool is currently enabled

tags list[str] | None

Optional tags for categorization (e.g., ["filesystem", "readonly"])

categories list[ToolCategory] | None

Optional functional categories for grouping tools

__post_init__

__post_init__() -> None

Validate metadata after initialization.

consoul.ai.tools.base.RiskLevel

Bases: str, Enum

Risk assessment level for tool execution.

Used to classify tools based on their potential impact and inform user approval workflows with appropriate warnings.

Attributes:

Name Type Description
SAFE

Low-risk operations (ls, pwd, echo, cat read-only files)

CAUTION

Medium-risk operations (mkdir, cp, mv, git commit)

DANGEROUS

High-risk operations (rm -rf, dd, kill -9, chmod 777)

BLOCKED

Explicitly prohibited operations (sudo, rm -rf /, fork bombs)

__str__

__str__() -> str

Return string representation of risk level.
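Because RiskLevel subclasses str, levels can be logged and compared like plain strings (a small sketch; only the str base and the member names are taken from this reference):

from consoul.ai.tools.base import RiskLevel

level = RiskLevel.CAUTION
assert isinstance(level, str)  # str-backed enum, per "Bases: str, Enum"
print(f"Executing with risk level: {level}")  # uses __str__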

consoul.ai.tools.base.ToolCategory

Bases: str, Enum

Functional categories for tool classification.

Used to group tools by their primary purpose, enabling category-based tool filtering in the SDK.

Attributes:

Name Type Description
SEARCH

Search and lookup tools (grep, code_search, find_references)

FILE_EDIT

File manipulation tools (create, edit, delete, append)

WEB

Web-based tools (read_url, web_search)

EXECUTE

Command execution tools (bash_execute)

__str__

__str__() -> str

Return string representation of category.

Configuration

Configuration models for profiles and tools.

consoul.config.models.ConsoulConfig module-attribute

ConsoulConfig = ConsoulCoreConfig

consoul.config.models.ProfileConfig

Bases: BaseModel

Configuration profile with conversation and context settings.

Profiles define HOW to use AI (system prompts, context, conversation settings), including WHICH AI model to use.

validate_profile_name classmethod

validate_profile_name(v: str) -> str

Validate profile name.

consoul.config.models.ToolConfig

Bases: BaseModel

Configuration for tool calling system.

Controls tool execution behavior, security policies, and approval workflows. This configuration is SDK-level (not TUI-specific) to support headless usage.

Supports both predefined permission policies and manual configuration:

- Use permission_policy for preset security postures (PARANOID/BALANCED/TRUSTING/UNRESTRICTED)
- Use manual settings (approval_mode, auto_approve) for custom configurations
- Policy takes precedence over manual settings when both are specified

Example (with policy):

>>> from consoul.ai.tools.permissions import PermissionPolicy
>>> config = ToolConfig(
...     enabled=True,
...     permission_policy=PermissionPolicy.BALANCED
... )

Example (manual):

>>> config = ToolConfig(
...     enabled=True,
...     approval_mode="always",
...     allowed_tools=["bash"]
... )

validate_auto_approve classmethod

validate_auto_approve(v: bool) -> bool

Validate auto_approve is not enabled (security check).

Raises a warning in logs if auto_approve is True, but allows it for testing purposes. Production code should never set this to True.

validate_permission_policy

validate_permission_policy() -> ToolConfig

Validate permission policy and warn about dangerous configurations.

Checks for UNRESTRICTED policy and warns about security implications. Also validates that policy settings are consistent. Sets default policy to BALANCED if not specified.

Conversation Management

consoul.ai.history.ConversationHistory

ConversationHistory(
    model_name: str,
    max_tokens: int | None = None,
    model: BaseChatModel | None = None,
    persist: bool = True,
    session_id: str | None = None,
    db_path: Path | str | None = None,
    summarize: bool = False,
    summarize_threshold: int = 20,
    keep_recent: int = 10,
    summary_model: BaseChatModel | None = None,
)

Manages conversation history with intelligent token-based trimming.

Stores messages as LangChain BaseMessage objects and provides utilities for token counting, message trimming, and format conversion. Ensures conversations stay within model context windows while preserving important context.

Attributes:

Name Type Description
model_name

Model identifier for token counting

max_tokens

Maximum tokens allowed in conversation

messages list[BaseMessage]

List of LangChain BaseMessage objects

Example

history = ConversationHistory("gpt-4o", max_tokens=4000)
history.add_system_message("You are helpful.")
history.add_user_message("Hi!")
history.add_assistant_message("Hello!")
len(history)  # 3
history.count_tokens()  # 24

Initialize conversation history.

Parameters:

Name Type Description Default
model_name str

Model identifier (e.g., "gpt-4o", "claude-3-5-sonnet")

required
max_tokens int | None

Context limit override. Special values:

- None: Auto-size to 75% of model's context window
- 0: Auto-size to 75% of model's context window
- > 0: Use explicit limit (capped at model maximum)

None
model BaseChatModel | None

Optional LangChain model instance for provider-specific token counting

None
persist bool

Enable SQLite persistence (default: True)

True
session_id str | None

Optional session ID to resume existing conversation

None
db_path Path | str | None

Optional custom database path (default: ~/.consoul/history.db)

None
summarize bool

Enable conversation summarization for long conversations (default: False)

False
summarize_threshold int

Trigger summarization after N messages (default: 20)

20
keep_recent int

Keep last N messages verbatim when summarizing (default: 10)

10
summary_model BaseChatModel | None

Optional cheaper model for generating summaries (default: use main model)

None
Example

In-memory with auto-sizing (75% of model capacity)

history = ConversationHistory("gpt-4o")

With explicit context limit

history = ConversationHistory("gpt-4o", max_tokens=50000)

With persistence - new session

history = ConversationHistory("gpt-4o", persist=True)
session = history.session_id

With persistence - resume session

history = ConversationHistory("gpt-4o", persist=True, session_id=session)

With summarization for cost savings

history = ConversationHistory(
    "gpt-4o",
    model=chat_model,
    summarize=True,
    summarize_threshold=20
)

add_user_message

add_user_message(content: str) -> None

Add user message to conversation history.

Parameters:

Name Type Description Default
content str

User message content

required
Example

history = ConversationHistory("gpt-4o")
history.add_user_message("Hello!")
len(history)  # 1

add_assistant_message

add_assistant_message(content: str) -> None

Add assistant message to conversation history.

Parameters:

Name Type Description Default
content str

Assistant message content

required
Example

history = ConversationHistory("gpt-4o")
history.add_assistant_message("Hi there!")
len(history)  # 1

add_system_message

add_system_message(content: str) -> None

Add or replace system message.

System messages are always stored at position 0. If a system message already exists, it is replaced. Only one system message is supported.

Parameters:

Name Type Description Default
content str

System message content

required
Example

history = ConversationHistory("gpt-4o")
history.add_system_message("You are helpful.")
history.add_system_message("You are very helpful.")  # Replaces first
len(history)  # 1

clear

clear(preserve_system: bool = True) -> None

Clear conversation history.

Parameters:

Name Type Description Default
preserve_system bool

If True, keep the system message (default)

True
Example

history = ConversationHistory("gpt-4o")
history.add_system_message("You are helpful.")
history.add_user_message("Hi!")
history.clear(preserve_system=True)
len(history)  # 1 (system message preserved)

get_trimmed_messages

get_trimmed_messages(
    reserve_tokens: int = 1000, strategy: str = "last"
) -> list[BaseMessage]

Get messages trimmed to fit model's context window.

Uses LangChain's trim_messages to intelligently trim the conversation while preserving the system message and ensuring valid message sequences.

If summarization is enabled and the conversation exceeds the threshold, older messages are summarized and recent messages are kept verbatim, providing significant token savings while preserving context.

Parameters:

Name Type Description Default
reserve_tokens int

Tokens to reserve for response (default 1000)

1000
strategy str

Trimming strategy - "last" keeps recent messages (default)

'last'

Returns:

Name Type Description
list[BaseMessage]

Trimmed list of messages that fit within token limit.

list[BaseMessage]

With summarization: [system_msg, summary_msg, recent_messages]

list[BaseMessage]

Without summarization: standard LangChain trim_messages result

Raises:

Type Description
TokenLimitExceededError

If reserve_tokens >= max_tokens, preventing any messages from being sent.

Example

history = ConversationHistory("gpt-4o")
history.add_system_message("You are helpful.")

... add many messages ...

trimmed = history.get_trimmed_messages(reserve_tokens=1000)

System message is always preserved.

With summarization enabled

history = ConversationHistory(
    "gpt-4o",
    model=chat_model,
    summarize=True
)

... add 30 messages ...

trimmed = history.get_trimmed_messages()

Returns: [system, summary, last_10_messages] instead of 30

count_tokens

count_tokens() -> int

Count total tokens in current conversation history.

Returns:

Type Description
int

Total number of tokens in all messages

Example

history = ConversationHistory("gpt-4o")
history.add_user_message("Hello!")
tokens = history.count_tokens()
tokens > 0  # True
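A common pattern is to check the running total before invoking the model (a sketch using only the methods documented here; long_user_input is a placeholder):

history = ConversationHistory("gpt-4o", max_tokens=8000)
history.add_user_message(long_user_input)  # placeholder variable
if history.count_tokens() > 6000:
    # Leave headroom for the reply by trimming first
    messages = history.get_trimmed_messages(reserve_tokens=2000)
else:
    messages = history.messages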

Service Layer

Headless services for SDK integration without TUI dependencies.

ConversationService

Service layer for AI conversation management with streaming responses and tool execution.

consoul.sdk.services.conversation.ConversationService

ConversationService(
    model: BaseChatModel,
    conversation: ConversationHistory,
    tool_registry: ToolRegistry | None = None,
    config: ConsoulConfig | None = None,
    context_providers: list[Any] | None = None,
    base_prompt: str | None = None,
    include_tool_docs: bool = True,
    include_env_context: bool = True,
    include_git_context: bool = True,
    auto_append_tools: bool = True,
)

Service layer for AI conversation management.

Provides headless conversation interface with streaming responses, tool execution approval, multimodal message support, and cost tracking. Completely decoupled from TUI/CLI to enable SDK-first architecture.

Attributes:

Name Type Description
model

LangChain chat model for AI interactions

conversation

Conversation history manager

tool_registry

Optional tool registry for function calling

config

Consoul configuration

executor

Thread pool for non-blocking operations

Example - Basic usage

service = ConversationService.from_config()
async for token in service.send_message("Hello!"):
    print(token.content, end="", flush=True)

Example - With tool approval

async def approve_tool(request: ToolRequest) -> bool:
    return request.risk_level == "safe"

async for token in service.send_message(
    "List files",
    on_tool_request=approve_tool
):
    print(token, end="")

Example - With attachments

attachments = [
    Attachment(path="screenshot.png", type="image"),
    Attachment(path="code.py", type="code")
]
async for token in service.send_message(
    "Analyze this",
    attachments=attachments
):
    print(token, end="")

Initialize conversation service.

Parameters:

Name Type Description Default
model BaseChatModel

LangChain chat model for AI interactions

required
conversation ConversationHistory

Conversation history manager

required
tool_registry ToolRegistry | None

Optional tool registry for function calling

None
config ConsoulConfig | None

Optional Consoul configuration (uses default if None)

None
context_providers list[Any] | None

Optional list of context providers for dynamic context

None
base_prompt str | None

Base system prompt (before context injection)

None
include_tool_docs bool

Include tool documentation in system prompt

True
include_env_context bool

Include OS/shell/directory info

True
include_git_context bool

Include git repository info

True
auto_append_tools bool

Auto-append tool docs if no marker present

True
from_config classmethod
from_config(
    config: ConsoulConfig | None = None,
    custom_system_prompt: str | None = None,
    include_tool_docs: bool = True,
    include_env_context: bool = True,
    include_git_context: bool = True,
    auto_append_tools: bool = True,
    approval_provider: Any | None = None,
    context_providers: list[Any] | None = None,
) -> ConversationService

Create ConversationService from configuration.

Factory method that initializes model, conversation history, and tool registry from config. Provides convenient instantiation without manually creating dependencies.

Parameters:

Name Type Description Default
config ConsoulConfig | None

Optional Consoul configuration (loads default if None)

None
custom_system_prompt str | None

Custom system prompt (overrides profile prompt)

None
include_tool_docs bool

Include tool documentation in system prompt (default: True)

True
include_env_context bool

Include OS/shell/directory info (default: True)

True
include_git_context bool

Include git repository info (default: True)

True
auto_append_tools bool

Auto-append tool docs if no marker present (default: True)

True
approval_provider Any | None

Optional approval provider for tool execution

None
context_providers list[Any] | None

List of ContextProvider implementations for dynamic context injection

None

Returns:

Type Description
ConversationService

Initialized ConversationService ready for use

Example - Default behavior (CLI/TUI):

>>> service = ConversationService.from_config()

Example - SDK with custom prompt, tools but no docs:

>>> service = ConversationService.from_config(
...     custom_system_prompt="My AI assistant",
...     include_tool_docs=False,  # Tools enabled, not documented
... )

Example - Full SDK control

service = ConversationService.from_config(
    custom_system_prompt="Clean prompt",
    include_tool_docs=False,
    include_env_context=False,
    include_git_context=False,
)

send_message async
send_message(
    content: str,
    *,
    attachments: list[Attachment] | None = None,
    on_tool_request: ToolApprovalCallback | None = None,
) -> AsyncIterator[Token]

Send message and stream AI response.

Main entry point for SDK consumers. Handles message preparation, streaming response, tool execution approval, and cost tracking.

Parameters:

Name Type Description Default
content str

User message text

required
attachments list[Attachment] | None

Optional file attachments (images, code files, etc.)

None
on_tool_request ToolApprovalCallback | None

Optional callback for tool execution approval. Can be either:

- An async callable: async def(request: ToolRequest) -> bool
- A ToolExecutionCallback protocol implementation

None

Yields:

Name Type Description
Token AsyncIterator[Token]

Streaming tokens with content, cost, and metadata

Example - Simple streaming

async for token in service.send_message("Hello!"):
    print(token.content, end="", flush=True)

Example - With async function approval

async def approve(request: ToolRequest) -> bool:
    return request.risk_level != "dangerous"

async for token in service.send_message(
    "Run command",
    on_tool_request=approve
):
    print(token, end="")

Example - With protocol implementation

class MyApprover:
    async def on_tool_request(self, request: ToolRequest) -> bool:
        return request.risk_level == "safe"

approver = MyApprover()
async for token in service.send_message(
    "Run command",
    on_tool_request=approver
):
    print(token, end="")

Example - With image attachment

from consoul.sdk import Attachment

attachments = [Attachment(path="image.png", type="image")]
async for token in service.send_message(
    "What's in this image?",
    attachments=attachments
):
    print(token, end="")
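To collect a full reply instead of printing token-by-token (a sketch assuming each Token exposes .content, as in the examples above):

async def ask_once(service: ConversationService, text: str) -> str:
    parts: list[str] = []
    async for token in service.send_message(text):
        parts.append(token.content)
    return "".join(parts)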

get_stats
get_stats() -> ConversationStats

Get conversation statistics.

Returns:

Type Description
ConversationStats

ConversationStats with message count, tokens, cost, and session ID

Example

stats = service.get_stats()
print(f"Messages: {stats.message_count}")
print(f"Cost: ${stats.total_cost:.4f}")

get_history
get_history() -> list[Any]

Get conversation message history.

Returns:

Type Description
list[Any]

List of LangChain messages (HumanMessage, AIMessage, ToolMessage)

Example

history = service.get_history()
for msg in history:
    print(f"{msg.type}: {msg.content}")

clear
clear() -> None

Clear conversation history.

Resets message history and cost tracking. Useful for starting fresh conversations without creating a new service instance.

Example

service.clear()
stats = service.get_stats()
assert stats.message_count == 0

ToolService

Service layer for tool management and execution.

consoul.sdk.services.tool.ToolService

ToolService(
    tool_registry: ToolRegistry,
    config: ToolConfig | None = None,
)

Service layer for tool management and execution.

Encapsulates ToolRegistry and provides a clean interface for:

- Tool configuration and registration
- Tool listing and metadata
- Approval policy checks

Attributes:

Name Type Description
tool_registry

ToolRegistry instance managing tool catalog

config

Tool configuration settings

Example - Basic usage

service = ToolService.from_config(config)
tools = service.list_tools()
for tool in tools:
    print(f"{tool.name}: {tool.description}")

Example - Check approval

needs_approval = service.needs_approval("bash_execute", {"command": "ls"})
if needs_approval:
    # Show approval modal
    ...

Initialize tool service.

Parameters:

Name Type Description Default
tool_registry ToolRegistry

Configured ToolRegistry instance

required
config ToolConfig | None

Tool configuration settings

None
from_config classmethod
from_config(config: ConsoulConfig) -> ToolService

Create ToolService from configuration.

Factory method that initializes the ToolRegistry from tool configuration, enables or disables tools based on config settings, and registers all tools.

Extracted from ConsoulApp._initialize_tool_registry() (lines 433-612).

Parameters:

Name Type Description Default
config ConsoulConfig

Consoul configuration with tool settings

required

Returns:

Type Description
ToolService

Initialized ToolService ready for use

Example

from consoul.config import load_config

config = load_config()
service = ToolService.from_config(config)

list_tools
list_tools(
    enabled_only: bool = True, category: str | None = None
) -> list[Any]

List available tools.

Parameters:

Name Type Description Default
enabled_only bool

If True, only return enabled tools

True
category str | None

Optional category filter

None

Returns:

Type Description
list[Any]

List of ToolMetadata objects

Example

tools = service.list_tools(enabled_only=True)
for tool in tools:
    print(f"{tool.name}: {tool.description}")

needs_approval
needs_approval(
    tool_name: str, arguments: dict[str, Any]
) -> bool

Check if tool needs approval based on policy.

Parameters:

Name Type Description Default
tool_name str

Name of the tool

required
arguments dict[str, Any]

Tool arguments dict

required

Returns:

Type Description
bool

True if approval is needed, False if auto-approved

Example

if service.needs_approval("bash_execute", {"command": "ls"}):
    # Show approval modal
    approved = await show_approval_modal()

get_tools_count
get_tools_count() -> int

Get total number of registered tools.

Returns:

Type Description
int

Total number of tools in registry

Example

total = service.get_tools_count()
print(f"Total tools: {total}")

ModelService

Service layer for AI model management and initialization.

consoul.sdk.services.model.ModelService

ModelService(
    model: BaseChatModel,
    config: ConsoulConfig,
    tool_service: ToolService | None = None,
)

Service layer for AI model management.

Encapsulates model initialization, switching, and tool binding. Provides clean interface for model operations without LangChain/provider details.

Attributes:

Name Type Description
config

Consoul configuration

tool_service

Optional ToolService for binding tools

current_model_id

Current model identifier

Example - Basic usage

service = ModelService.from_config(config)
model = service.get_model()
info = service.get_current_model_info()

Example - With tool binding

service = ModelService.from_config(config, tool_service)
model = service.get_model()  # Returns model with tools bound

Example - Model switching

service.switch_model("gpt-4o") new_model = service.get_model()

Initialize model service.

Parameters:

Name Type Description Default
model BaseChatModel

Initialized chat model

required
config ConsoulConfig

Consoul configuration

required
tool_service ToolService | None

Optional tool service for binding tools

None
from_config classmethod
from_config(
    config: ConsoulConfig,
    tool_service: ToolService | None = None,
) -> ModelService

Create ModelService from configuration.

Factory method that initializes model from config and binds tools if tool_service is provided.

Extracted from ConsoulApp._initialize_ai_model() (app.py:365-380).

Parameters:

Name Type Description Default
config ConsoulConfig

Consoul configuration with model settings

required
tool_service ToolService | None

Optional tool service for binding tools

None

Returns:

Type Description
ModelService

Initialized ModelService ready for use

Example

from consoul.config import load_config

config = load_config()
service = ModelService.from_config(config)

get_model
get_model() -> BaseChatModel

Get current chat model.

Returns:

Type Description
BaseChatModel

Current BaseChatModel instance (possibly with tools bound)

Example

model = service.get_model()
response = model.invoke("Hello!")

switch_model
switch_model(
    model_id: str, provider: str | None = None
) -> BaseChatModel

Switch to a different model.

Reinitializes model with new ID and re-binds tools if applicable.

Extracted from ConsoulApp._switch_provider_and_model() (app.py:4050-4149).

Parameters:

Name Type Description Default
model_id str

New model identifier (e.g., "gpt-4o")

required
provider str | None

Optional provider override (auto-detected if None)

None

Returns:

Type Description
BaseChatModel

New BaseChatModel instance

Raises:

Type Description
Exception

If model initialization fails

Example

service.switch_model("claude-3-5-sonnet-20241022") model = service.get_model()

list_ollama_models
list_ollama_models(
    include_context: bool = False,
    base_url: str = "http://localhost:11434",
    enrich_descriptions: bool = True,
    use_context_cache: bool = True,
) -> list[ModelInfo]

List locally installed Ollama models.

An efficient way to discover what is actually installed on the device: it queries the Ollama API directly, with no catalog overhead.

Parameters:

Name Type Description Default
include_context bool

Fetch detailed context (slower, /api/show per model)

False
base_url str

Ollama service URL (default: http://localhost:11434)

'http://localhost:11434'
enrich_descriptions bool

Fetch descriptions from ollama.com (default: True)

True
use_context_cache bool

Use cached context sizes (default: True)

True

Returns:

Type Description
list[ModelInfo]

List of ModelInfo for installed Ollama models

Example

service = ModelService.from_config()
local_models = service.list_ollama_models()
for model in local_models:
    print(f"{model.name} - {model.context_window}")
llama3.2:latest - 128K
qwen2.5-coder:7b - 32K

Get detailed context info (slower)

detailed = service.list_ollama_models(include_context=True)

list_mlx_models
list_mlx_models() -> list[ModelInfo]

List locally installed MLX models.

Uses HuggingFace's scan_cache_dir() for efficient discovery. Scans the HF cache (~/.cache/huggingface/hub), ~/.cache/mlx, and ~/.lmstudio/models.

Returns:

Type Description
list[ModelInfo]

List of ModelInfo for installed MLX models

Example

service = ModelService.from_config()
mlx_models = service.list_mlx_models()
for model in mlx_models:
    print(f"{model.name} - {model.description}")
mlx-community/Llama-3.2-3B-Instruct-4bit - Local MLX model (1.8GB)
mlx-community/Qwen2.5-Coder-7B-Instruct-4bit - Local MLX model (4.2GB)

list_gguf_models
list_gguf_models() -> list[ModelInfo]

List locally installed GGUF models.

Uses HuggingFace's scan_cache_dir() for efficient discovery. Scans the HF cache (~/.cache/huggingface/hub) and ~/.lmstudio/models.

GGUF models can be used with llama.cpp for local inference.

Returns:

Type Description
list[ModelInfo]

List of ModelInfo for installed GGUF models

Example

service = ModelService.from_config()
gguf_models = service.list_gguf_models()
for model in gguf_models:
    print(f"{model.name} - {model.description}")
llama-2-7b-chat.Q4_K_M.gguf - Local GGUF model (3.8GB, Q4 quant)
mistral-7b-instruct-v0.2.Q8_0.gguf - Local GGUF model (7.7GB, Q8 quant)

list_huggingface_models
list_huggingface_models() -> list[ModelInfo]

List locally cached HuggingFace models.

Scans HuggingFace Hub cache (~/.cache/huggingface/hub) for downloaded models. Excludes MLX and GGUF-only models (discovered by other methods).

Returns models with safetensors, PyTorch bin files, Flax msgpack, etc.

Returns:

Type Description
list[ModelInfo]

List of ModelInfo for cached HuggingFace models

Example

service = ModelService.from_config()
hf_models = service.list_huggingface_models()
for model in hf_models:
    print(f"{model.name} - {model.description}")
meta-llama/Llama-3.1-8B-Instruct - HuggingFace model (8.5GB, safetensors)
google/flan-t5-base - HuggingFace model (1.2GB, pytorch)

list_models
list_models(provider: str | None = None) -> list[ModelInfo]

List available models including dynamically discovered local models.

Parameters:

Name Type Description Default
provider str | None

Filter by provider (None returns all)

None

Returns:

Type Description
list[ModelInfo]

List of ModelInfo objects (static catalog + dynamic local models)

Example

all_models = service.list_models()
ollama_models = service.list_models(provider="ollama")  # Dynamic discovery
for model in ollama_models:
    print(f"{model.name}: {model.description}")

get_current_model_info
get_current_model_info() -> ModelInfo | None

Get info for current model (tries catalog, then dynamic discovery).

Returns:

Type Description
ModelInfo | None

ModelInfo for current model, or None if not found

Example

info = service.get_current_model_info()
if info:
    print(f"Context window: {info.context_window}")

supports_vision
supports_vision() -> bool

Check if current model supports vision/images.

Returns:

Type Description
bool

True if model supports vision capabilities

Example

if service.supports_vision():
    # Send image attachment
    ...

supports_tools
supports_tools() -> bool

Check if current model supports tool calling.

Returns:

Type Description
bool

True if model supports function calling

Example

if service.supports_tools():
    # Enable tool execution
    ...

list_available_models
list_available_models(
    provider: str | None = None, active_only: bool = True
) -> list[ModelInfo]

List all available models from the registry.

Fetches comprehensive model metadata from the centralized registry, which includes 1,114+ models from Helicone API plus 21 flagship models.

Parameters:

Name Type Description Default
provider str | None

Filter by provider ("openai", "anthropic", "google", etc.)

None
active_only bool

Only return non-deprecated models (default: True)

True

Returns:

Type Description
list[ModelInfo]

List of ModelInfo with enhanced metadata (pricing, capabilities)

Example

models = service.list_available_models(provider="anthropic")
for model in models:
    print(f"{model.name}: {model.context_window}")
Claude Opus 4.5: 200K
Claude Sonnet 4.5: 200K

get_model_pricing
get_model_pricing(
    model_id: str, tier: str = "standard"
) -> PricingInfo | None

Get pricing information for a specific model.

Parameters:

Name Type Description Default
model_id str

Model identifier (e.g., "gpt-4o", "claude-sonnet-4-5-20250929")

required
tier str

Pricing tier ("standard", "flex", "batch", "priority")

'standard'

Returns:

Type Description
PricingInfo | None

PricingInfo if model found, None otherwise

Example

pricing = service.get_model_pricing("gpt-4o", tier="flex")
if pricing:
    print(f"Input: ${pricing.input_price}/MTok")
    print(f"Output: ${pricing.output_price}/MTok")
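These pricing lookups compose into simple comparisons (a sketch; the model IDs you pass in are illustrative, and input_price is the per-MTok field shown above):

def cheaper_of(service, model_a: str, model_b: str, tier: str = "standard") -> str:
    a = service.get_model_pricing(model_a, tier=tier)
    b = service.get_model_pricing(model_b, tier=tier)
    if a is None or b is None:
        raise ValueError("Pricing unavailable for one of the models")
    # Compare on input price; output price could be weighted in as well
    return model_a if a.input_price <= b.input_price else model_b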

get_model_capabilities
get_model_capabilities(
    model_id: str,
) -> ModelCapabilities | None

Get capability information for a specific model.

Parameters:

Name Type Description Default
model_id str

Model identifier

required

Returns:

Type Description
ModelCapabilities | None

ModelCapabilities if model found, None otherwise

Example

caps = service.get_model_capabilities("claude-sonnet-4-5-20250929")
if caps and caps.supports_vision and caps.supports_tools:
    print("Model supports both vision and tools")

get_model_metadata
get_model_metadata(model_id: str) -> ModelInfo | None

Get complete metadata for a specific model.

Combines all available information (metadata, pricing, capabilities) into a single ModelInfo object.

Parameters:

Name Type Description Default
model_id str

Model identifier

required

Returns:

Type Description
ModelInfo | None

ModelInfo if model found, None otherwise

Example

model = service.get_model_metadata("gpt-4o")
if model:
    print(f"{model.name}")
    print(f"Context: {model.context_window}")
    if model.pricing:
        print(f"Cost: ${model.pricing.input_price}/MTok")

Tool Catalog

Tool discovery and resolution utilities.

consoul.ai.tools.catalog.get_tool_by_name

get_tool_by_name(
    name: str,
) -> tuple[BaseTool, RiskLevel, list[ToolCategory]] | None

Get tool, risk level, and categories by friendly name.

Parameters:

Name Type Description Default
name str

Tool name (e.g., "bash", "grep")

required

Returns:

Type Description
tuple[BaseTool, RiskLevel, list[ToolCategory]] | None

Tuple of (tool, risk_level, categories) if found, None otherwise

Example

result = get_tool_by_name("bash")
if result:
    tool, risk, categories = result
    assert risk == RiskLevel.CAUTION

consoul.ai.tools.catalog.get_tools_by_risk_level

get_tools_by_risk_level(
    risk: str | RiskLevel,
) -> list[tuple[BaseTool, RiskLevel, list[ToolCategory]]]

Get all tools matching or below the specified risk level.

Parameters:

Name Type Description Default
risk str | RiskLevel

Risk level filter ("safe", "caution", "dangerous")

required

Returns:

Type Description
list[tuple[BaseTool, RiskLevel, list[ToolCategory]]]

List of (tool, risk_level, categories) tuples

Example

tools = get_tools_by_risk_level("safe")
assert all(risk == RiskLevel.SAFE for _, risk, _ in tools)
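The catalog helpers compose with ToolRegistry.register from earlier; for example, registering every safe tool at once (a sketch, assuming a registry built as shown in the Tool Registry section):

from consoul.ai.tools.catalog import get_tools_by_risk_level

for tool, risk, _categories in get_tools_by_risk_level("safe"):
    registry.register(tool, risk_level=risk)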

consoul.ai.tools.catalog.get_tools_by_category

get_tools_by_category(
    category: str | ToolCategory,
) -> list[tuple[BaseTool, RiskLevel, list[ToolCategory]]]

Get all tools in a specific category.

Parameters:

Name Type Description Default
category str | ToolCategory

Category filter (e.g., "search", "file-edit", "web")

required

Returns:

Type Description
list[tuple[BaseTool, RiskLevel, list[ToolCategory]]]

List of (tool, risk_level, categories) tuples

Example

tools = get_tools_by_category("search")
assert len(tools) > 0
for tool, _, cats in tools:
    assert ToolCategory.SEARCH in cats

consoul.ai.tools.catalog.get_all_tool_names

get_all_tool_names() -> list[str]

Get list of all available tool names.

Returns:

Type Description
list[str]

Sorted list of tool names

Example

names = get_all_tool_names()
assert "bash" in names
assert "grep" in names

consoul.ai.tools.catalog.get_all_category_names

get_all_category_names() -> list[str]

Get list of all available category names.

Returns:

Type Description
list[str]

Sorted list of category names

Example

names = get_all_category_names()
assert "search" in names
assert "file-edit" in names

Tool Discovery

consoul.ai.tools.discovery.discover_tools_from_directory

discover_tools_from_directory(
    directory: Path | str, recursive: bool = True
) -> list[tuple[BaseTool, RiskLevel]]

Discover tools from a directory.

Scans Python files in the specified directory for:

- Functions decorated with @tool
- Instantiated BaseTool objects

IMPORTANT: This function only discovers tool INSTANCES, not class definitions. If you define a BaseTool subclass, you must instantiate it in the module:

# This will be discovered:
my_tool = MyToolClass()

# This will NOT be discovered:
class MyToolClass(BaseTool):
    ...

Parameters:

Name Type Description Default
directory Path | str

Directory to scan for tools

required
recursive bool

Whether to recursively scan subdirectories (default: True)

True

Returns:

Type Description
list[tuple[BaseTool, RiskLevel]]

List of (tool, risk_level) tuples for discovered tools.

list[tuple[BaseTool, RiskLevel]]

All discovered tools default to RiskLevel.CAUTION for safety.

Example
from pathlib import Path
from consoul.ai.tools.discovery import discover_tools_from_directory

# Discover tools from .consoul/tools/
tools = discover_tools_from_directory(Path(".consoul/tools"))

# Non-recursive scan
tools = discover_tools_from_directory(Path(".consoul/tools"), recursive=False)
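Discovered tools can be fed straight into a registry (a sketch combining this helper with ToolRegistry.register from earlier; it assumes a registry is already configured):

from pathlib import Path
from consoul.ai.tools.discovery import discover_tools_from_directory

for tool, risk in discover_tools_from_directory(Path(".consoul/tools")):
    registry.register(tool, risk_level=risk)  # risk defaults to CAUTION per the notes below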
Notes
- Syntax errors in tool files are logged as warnings and skipped
- Import errors are logged as warnings and skipped
- Non-tool objects and class definitions are silently ignored
- Discovered tools are assigned RiskLevel.CAUTION by default
