Understanding Hooks

Hooks are the observation and control layer of the Amplifier ecosystem. They allow you to monitor, react to, and extend agent behavior without modifying core logic. Think of hooks as event listeners that tap into the agent lifecycle at specific points.

What is a Hook?

A hook is an async function registered to fire on specific lifecycle events during agent execution. Unlike tools that perform actions, hooks observe and react to what's already happening. They can also influence execution flow -- blocking operations, modifying data, injecting context into the agent's conversation, and requesting human approval.

from amplifier_core.models import HookResult

async def logging_hook(event: str, data: dict) -> HookResult:
    """Simple hook that logs all events."""
    print(f"[{event}] {data}")
    return HookResult(action="continue")

Hooks are plain async functions -- there is no base class to extend. You register them with a HookRegistry for the events you care about. This keeps your agent logic clean while enabling both rich observability and cross-cutting control.

Key Characteristics

  • Non-blocking by default: Hooks should complete quickly to avoid slowing the agent
  • Observable and actionable: Hooks can observe events passively or return a HookResult to influence execution
  • Fail-safe: Hook failures don't crash the agent
  • Priority-ordered execution: Multiple hooks run sequentially by priority (lower number = earlier execution)
  • Precedence rules: When multiple hooks fire on the same event, a defined action hierarchy resolves conflicts

Hook vs Tool

Understanding the difference between hooks and tools is fundamental:

| Aspect | Hook | Tool |
| --- | --- | --- |
| Purpose | Observe, react, and gate | Perform actions |
| Invocation | Automatic on events | Explicit by agent |
| Modifies state | Can influence flow via HookResult | Yes (side effects expected) |
| Blocking | Should be fast (except approval gates) | Can be long-running |
| Failure impact | Logged, continues | Reported to agent |

Use a hook when you want to:

  • Log or audit agent activity
  • Collect metrics and telemetry
  • Send notifications on specific events
  • Validate or monitor behavior patterns
  • Block operations that violate policy (deny)
  • Modify data before it reaches the agent (modify)
  • Inject context into the agent's conversation (inject_context)
  • Gate operations behind human approval (ask_user)

Use a tool when you need to:

  • Perform an action the agent requests
  • Return data the agent will use
  • Modify external state
  • Execute long-running operations

HookResult and Actions

Hooks return a HookResult to influence execution. Import it from amplifier_core.models:

from amplifier_core.models import HookResult

HookResult Fields

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| action | Literal["continue", "deny", "modify", "inject_context", "ask_user"] | "continue" | Action to take after hook execution |
| data | dict[str, Any] \| None | None | Modified event data (for action="modify") |
| reason | str \| None | None | Explanation for deny/modification; shown to agent when blocked |
| context_injection | str \| None | None | Text to inject into agent's conversation context (for action="inject_context") |
| context_injection_role | Literal["system", "user", "assistant"] | "system" | Role for the injected message |
| ephemeral | bool | False | If True, injection is temporary (current LLM call only, not stored in history) |
| approval_prompt | str \| None | None | Question to ask the user (for action="ask_user") |
| approval_options | list[str] \| None | None | User choice options (defaults to ["Allow", "Deny"]) |
| approval_timeout | float | 300.0 | Seconds to wait for user response |
| approval_default | Literal["allow", "deny"] | "deny" | Default decision on timeout |
| suppress_output | bool | False | Hide hook's own stdout/stderr from user transcript |
| user_message | str \| None | None | Message to display to the user (separate from context_injection) |
| user_message_level | Literal["info", "warning", "error"] | "info" | Severity level for user_message |
| append_to_last_tool_result | bool | False | Controls injection placement (append to last tool result instead of new message) |

Action Types

  • continue: Proceed normally. The default when no HookResult is returned.
  • deny: Block the operation entirely. The reason field is shown to the agent. Use for policy enforcement, safety guardrails, or rate limiting.
  • modify: Alter the event data before it continues through the pipeline. Supply the modified payload in the data field. Changes chain through subsequent handlers.
  • inject_context: Add information to the agent's conversation context. The context_injection string is injected with the role specified by context_injection_role (default "system"). By default, injections are persisted to conversation history. Set ephemeral=True to make the injection temporary (current LLM call only, not stored in history).
  • ask_user: Pause execution and request human approval before continuing. Use approval_prompt to specify the question, approval_options for choices, and approval_timeout / approval_default to control timeout behavior.

Approval Gates

Hooks can pause execution and request human input using the ask_user action. This enables approval workflows where sensitive operations require explicit confirmation before proceeding. Configure approval_timeout (default 5 minutes) so that unattended sessions do not hang indefinitely, and approval_default (default "deny") to control what happens on timeout.

Action Precedence Hierarchy

When multiple hooks fire on the same event, their actions are resolved using a strict precedence hierarchy:

deny > ask_user > inject_context > modify > continue
  • If any hook returns deny, the operation is blocked regardless of other hooks.
  • If no deny but any hook returns ask_user, execution pauses for human input.
  • If no deny or ask_user but any hook returns inject_context, context is injected. Multiple inject_context results are merged.
  • If no blocking or injection actions but any hook returns modify, the data is modified.
  • If all hooks return continue (or return nothing), execution proceeds normally.

Blocking actions (deny, ask_user) always take precedence over non-blocking actions (inject_context, modify, continue). This ensures security gates cannot be silently bypassed by information-flow actions.
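The resolver itself is internal to Amplifier, but the ranking it applies can be sketched in a few lines of plain Python (illustrative only, not the real implementation):

```python
# Illustrative ranking only -- Amplifier's actual resolver is internal.
# Earlier in the list = higher precedence.
PRECEDENCE = ["deny", "ask_user", "inject_context", "modify", "continue"]

def resolve(actions: list[str]) -> str:
    """Return the winning action from the results of all fired hooks."""
    if not actions:
        return "continue"  # no hook returned a result; proceed normally
    return min(actions, key=PRECEDENCE.index)

resolve(["continue", "modify", "deny"])  # "deny": blocking always wins
resolve(["inject_context", "modify"])    # "inject_context" outranks "modify"
```

Because the hierarchy is a total order, the outcome is independent of the order in which the hooks happened to run.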

Contribution Channels

Hooks support a pull-based aggregation pattern for collecting data across modules without tight coupling.

API

  • register_contributor(): A module registers itself as a data contributor for a named channel.
  • collect_contributions(): Any module can pull aggregated data from all registered contributors on a channel.

Why Contribution Channels?

Traditional hooks push data in response to events. Contribution channels invert this: a consumer pulls data from multiple producers when it needs it. This enables modules to provide data (configuration fragments, status information, capability declarations) without knowing who will consume it or when.

# Module A registers as a contributor
register_contributor("agent_capabilities", my_capability_provider)

# Module B collects from all contributors
capabilities = collect_contributions("agent_capabilities")
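The pattern itself fits in a few lines. The dict-backed stand-in below is purely illustrative -- it mimics the shape of register_contributor() / collect_contributions(), not Amplifier's actual implementation:

```python
from typing import Any, Callable

# Illustrative stand-in for Amplifier's contribution channels:
# a channel name maps to the provider callables registered on it.
_contributors: dict[str, list[Callable[[], Any]]] = {}

def register_contributor(channel: str, provider: Callable[[], Any]) -> None:
    """Register a zero-argument provider on a named channel."""
    _contributors.setdefault(channel, []).append(provider)

def collect_contributions(channel: str) -> list[Any]:
    """Pull data from every provider registered on the channel."""
    return [provider() for provider in _contributors.get(channel, [])]

# Module A contributes a capability declaration
register_contributor("agent_capabilities", lambda: {"name": "search"})
# Module B pulls from all contributors when it needs the data
collect_contributions("agent_capabilities")  # [{"name": "search"}]
```

Note that providers are invoked lazily, only when a consumer pulls -- that laziness is what distinguishes contribution channels from event-driven push.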

Event Types

Hooks subscribe to specific event types that occur during the agent lifecycle. Event names use colon-delimited format.

Common Events

| Event | Fires when |
| --- | --- |
| session:start | Agent session begins |
| session:end | Agent session completes |
| execution:start | The orchestrator loop begins a new turn |
| execution:complete | The orchestrator loop finishes a turn |
| tool:pre | Agent is about to invoke a tool |
| tool:post | Tool has returned a result |
| tool:error | Tool execution fails |
| provider:request | Request sent to LLM provider |
| provider:response | Response received from LLM provider |
| prompt:submit | Prompt is about to be sent |

These are not exhaustive -- modules can define and emit custom events beyond this set. Subscribe only to the events your hook needs.
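Nothing in the registry requires it, but the colon-delimited convention makes namespace-level filtering inside a handler trivial. A small helper (illustrative, not part of the Amplifier API) shows the idea:

```python
def matches(event: str, namespace: str) -> bool:
    """True when a colon-delimited event name belongs to the given namespace.

    e.g. matches("tool:pre", "tool") is True; matches("session:end", "tool") is False.
    """
    return event.split(":", 1)[0] == namespace
```

A hook registered for several tool events can then branch once on matches(event, "tool") instead of enumerating each event name.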

Foundation Hook Modules

Amplifier Foundation provides several hook modules for common needs. These are loaded as modules in your bundle configuration, not imported as Python classes:

  • hooks-progress-monitor: Displays progress information during agent execution.
  • hooks-session-naming: Automatically generates descriptive session names based on conversation content.
  • hooks-todo-display: Shows todo list state as ephemeral context so the agent stays aware of task progress.

These modules register their own hook functions internally. You enable them by including them in your bundle's module list.

Creating Hooks

Building custom hooks is straightforward. Write an async function with the correct signature and register it with a HookRegistry.

Basic Hook Structure

from amplifier_core.models import HookResult
from amplifier_core.hooks import HookRegistry
from typing import Any

async def my_custom_hook(event: str, data: dict[str, Any]) -> HookResult:
    """A custom hook with selective event handling."""
    if event == "tool:pre":
        tool_name = data.get("tool_name")
        print(f"Tool called: {tool_name}")
    elif event == "session:end":
        print("Session ended.")

    return HookResult(action="continue")

Registering Hooks

Register hooks with a HookRegistry instance, specifying the event to listen for, the handler function, and an optional priority:

from amplifier_core.hooks import HookRegistry

registry = HookRegistry()

# Register for specific events. Lower priority = earlier execution.
unregister_pre = registry.register(
    event="tool:pre",
    handler=my_custom_hook,
    priority=10,
    name="my_custom_hook_pre"
)
unregister_post = registry.register(
    event="session:end",
    handler=my_custom_hook,
    priority=10,
    name="my_custom_hook_session_end"
)

# Later, to remove a handler:
unregister_pre()

The register() call returns an unregister function you can call to remove the handler.

Handler Signature

Hook handlers receive two arguments directly -- there is no wrapper object:

async def handler(event: str, data: dict[str, Any]) -> HookResult:
    # event: the event name (e.g., "tool:pre")
    # data: event-specific payload dict (e.g., tool_name, tool_input, etc.)
    ...

Practical Examples

Linter feedback hook -- runs a linter after file writes and injects errors into agent context:

import subprocess
from amplifier_core.models import HookResult
from typing import Any

async def linter_hook(event: str, data: dict[str, Any]) -> HookResult:
    """Run linter after file writes and inject feedback."""
    if data.get("tool_name") not in ["Write", "Edit", "MultiEdit"]:
        return HookResult(action="continue")

    file_path = data.get("tool_input", {}).get("file_path")
    if not file_path:
        return HookResult(action="continue")

    result = subprocess.run(["ruff", "check", file_path], capture_output=True)

    if result.returncode != 0:
        # ruff reports its diagnostics on stdout
        return HookResult(
            action="inject_context",
            context_injection=f"Linter found issues in {file_path}:\n{result.stdout.decode()}",
            user_message=f"Found linting issues in {file_path}",
            user_message_level="warning"
        )

    return HookResult(action="continue")

# Register on tool:post so it fires after file writes complete
registry.register(event="tool:post", handler=linter_hook, priority=10)

Production protection hook -- requires user approval for writes to production files:

async def production_guard(event: str, data: dict[str, Any]) -> HookResult:
    """Require approval for production file writes."""
    file_path = data.get("tool_input", {}).get("file_path", "")

    if "/production/" in file_path or file_path.endswith(".env"):
        return HookResult(
            action="ask_user",
            approval_prompt=f"Allow write to production file: {file_path}?",
            approval_options=["Allow once", "Allow always", "Deny"],
            approval_timeout=300.0,
            approval_default="deny",
            reason="Production file requires explicit user approval"
        )

    return HookResult(action="continue")

registry.register(event="tool:pre", handler=production_guard, priority=5)

Error Handling

Hooks should handle errors gracefully. Hook failures should not crash the agent or block operations unless explicitly intended:

import logging
from amplifier_core.models import HookResult
from typing import Any

logger = logging.getLogger(__name__)

async def safe_hook(event: str, data: dict[str, Any]) -> HookResult:
    """Hook with proper error handling."""
    try:
        result = do_something(data)  # placeholder for your hook's real work -- may raise

        if result.has_issues:
            return HookResult(
                action="inject_context",
                context_injection=f"Issues found: {result.issues}",
                user_message="Validation found issues",
                user_message_level="warning"
            )

        return HookResult(action="continue")

    except Exception as e:
        logger.error(f"Hook failed: {e}", exc_info=True)
        return HookResult(
            action="continue",  # Don't block on hook failure
            user_message=f"Hook error: {str(e)}",
            user_message_level="error"
        )

Best Practices

  1. Keep hooks fast: Offload heavy work to background tasks
  2. Handle errors gracefully: Catch exceptions, return an appropriate HookResult
  3. Be selective: Register only for the events you need
  4. Use async patterns: Use asyncio for external calls (linters, APIs)
  5. Single responsibility: Each hook should do one thing well
  6. Clear messages: Make approval_prompt and user_message self-explanatory
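Practice 1 in action: a hook can schedule slow work as a background task and return immediately, so the agent loop is never delayed. A minimal sketch (the slow_upload stand-in and the uploaded sink are illustrative), relying on the fact that a hook may return nothing:

```python
import asyncio

uploaded: list[dict] = []  # illustrative sink for the offloaded work

async def slow_upload(payload: dict) -> None:
    """Placeholder for slow work, e.g. shipping metrics to a remote store."""
    await asyncio.sleep(0.05)  # stands in for network latency
    uploaded.append(payload)

async def metrics_hook(event: str, data: dict) -> None:
    # Schedule the slow work in the background and return immediately;
    # returning nothing is treated as "continue".
    asyncio.create_task(slow_upload(data))

async def demo() -> None:
    await metrics_hook("tool:post", {"tool_name": "Write"})
    # The hook has already returned; the upload finishes in the background.
    await asyncio.sleep(0.1)

asyncio.run(demo())
```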

Context Injection and Ephemeral Behavior

When hooks use the inject_context action, the injected text is added to the agent's conversation context.

By default, injections are persisted -- they become part of the conversation history and remain visible across turns. This is appropriate for feedback that the agent should remember, such as linter errors or validation results.

To make an injection temporary, set ephemeral=True. Ephemeral injections appear only for the current LLM call and are not stored in conversation history. Use this for transient state that updates frequently, such as todo reminders or live status indicators.

To prevent context overflow from runaway injections, Amplifier enforces an injection budget:

  • Size limit: Each individual injection is limited to 10 KB by default (configurable via session.injection_size_limit).
  • Budget per turn: Total injection tokens per turn are capped (configurable via session.injection_budget_per_turn, default 10,000 tokens).

If a hook attempts to inject context beyond the budget, the injection is dropped and a warning is logged. Design your inject_context hooks to be concise -- prefer short, targeted injections over large context dumps.
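The bookkeeping behind the budget might look roughly like the sketch below; the class, the 4-characters-per-token estimate, and the constants mirroring session.injection_size_limit / session.injection_budget_per_turn are all illustrative assumptions, not Amplifier's implementation:

```python
import logging

logger = logging.getLogger(__name__)

SIZE_LIMIT_BYTES = 10 * 1024      # per-injection cap (default 10 KB)
BUDGET_PER_TURN_TOKENS = 10_000   # total injection cap per turn

class InjectionBudget:
    """Illustrative per-turn budget tracker, not the Amplifier implementation."""

    def __init__(self) -> None:
        self.spent_tokens = 0

    def try_inject(self, text: str) -> bool:
        """Accept an injection if it fits both limits; otherwise drop and warn."""
        if len(text.encode()) > SIZE_LIMIT_BYTES:
            logger.warning("Injection dropped: exceeds per-injection size limit")
            return False
        tokens = len(text) // 4  # crude token estimate for the sketch
        if self.spent_tokens + tokens > BUDGET_PER_TURN_TOKENS:
            logger.warning("Injection dropped: turn budget exhausted")
            return False
        self.spent_tokens += tokens
        return True
```

Resetting spent_tokens at the start of each turn is what makes the cap per-turn rather than per-session.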

Key Takeaways

  1. Hooks are plain async functions: No base class to extend. Write a function matching async def handler(event: str, data: dict[str, Any]) -> HookResult, then register it with HookRegistry.register().

  2. Hooks complement tools: While tools perform actions for the agent, hooks monitor and gate what's happening. Both are essential for production systems.

  3. Five actions cover all cases: continue, deny, modify, inject_context, and ask_user handle the full spectrum from observation to intervention.

  4. Precedence resolves conflicts: When multiple hooks fire, deny > ask_user > inject_context > modify > continue ensures safety-critical hooks always win.

  5. Contribution channels decouple modules: Use register_contributor() / collect_contributions() for pull-based data aggregation across modules without tight coupling.

  6. Injection persistence is configurable: Context injections are persisted by default. Set ephemeral=True for temporary, single-call injections. Both are subject to the injection budget.

  7. Foundation hook modules cover common cases: Enable hooks-progress-monitor, hooks-session-naming, and hooks-todo-display in your bundle before building custom solutions.

  8. Custom hooks are simple: Write an async function, register it for the events you care about, and return a HookResult. Keep hooks fast and error-tolerant.

Hooks transform opaque agent execution into transparent, controllable systems. Start with Foundation hook modules to understand behavior, then create custom hooks as your monitoring and control needs evolve.