Get started in 60 seconds
After these three steps every LLM call your agent makes is automatically captured by Kintic. No other changes needed.
Step 1 — Install
pip install kintic-sdk
Step 2 — Initialize
import kintic
tracer = kintic.init(
api_key='kintic_live_<keyId>.<secret>',
debug=True # Remove in production
)
Step 3 — Patch your LLM provider
# Anthropic
kintic.patch(agent='my-agent', policy='v1.0')

# OpenAI
kintic.patch(agent='my-agent', policy='v1.0', providers=['openai'])

# Both
kintic.patch(agent='my-agent', policy='v1.0', providers=['openai', 'anthropic'])
ℹ️ Your API key is available in your dashboard under Settings → API Keys.
Format: kintic_live_<keyId>.<secret>. Keys are shown once.
Legacy kintic_live_<keyId> keys are temporarily supported during migration.
Your LLM provider keys stay with you — Kintic never sees them.
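Because the two key formats above differ only by the `.{secret}` suffix, they are easy to sanity-check client-side before calling kintic.init(). A minimal sketch, assuming the documented formats — the `validate_key` helper is hypothetical and not part of the SDK:

```python
import re

# Hypothetical helper, not part of the Kintic SDK: checks the two
# documented key shapes before a key is handed to kintic.init().
KEY_PATTERN = re.compile(r'^kintic_live_([A-Za-z0-9]+)(\.([A-Za-z0-9]+))?$')

def validate_key(key: str) -> str:
    """Return 'current' for kintic_live_<keyId>.<secret>,
    'legacy' for kintic_live_<keyId>; raise ValueError otherwise."""
    match = KEY_PATTERN.match(key)
    if not match:
        raise ValueError('not a Kintic API key')
    # Group 3 is the <secret> portion; legacy keys lack it.
    return 'current' if match.group(3) else 'legacy'
```

A check like this lets you warn about legacy keys during the migration window instead of failing at the first traced call.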
What Kintic captures
Your Agent Code
↓
[kintic.patch()] ← Intercepts here
↓
LLM API Call (Anthropic/OpenAI)
↓
• Full system prompt
• Conversation history
• Model and parameters
• Agent reasoning
• Tool calls made
• Final decision
• Latency and cost
↓
Your Dashboard at kintic.dev
Kintic adds zero latency to your agent — all capturing happens asynchronously in a background thread.
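The zero-latency claim rests on a standard producer-consumer pattern: the patched call enqueues a capture record and returns immediately, while a daemon thread drains the queue and ships records over the network. A self-contained sketch of that pattern — an illustration of the idea, not Kintic's actual internals:

```python
import queue
import threading

class AsyncCapture:
    """Toy illustration of background capture: callers enqueue
    records and never block on network I/O."""

    def __init__(self, ship):
        self._q = queue.Queue()
        self._ship = ship  # e.g. an HTTP POST to an ingest endpoint
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def capture(self, record):
        self._q.put(record)  # O(1); no I/O on the caller's thread

    def _drain(self):
        while True:
            record = self._q.get()
            if record is None:  # sentinel posted by flush()
                break
            self._ship(record)

    def flush(self):
        """Drain remaining records and stop the worker."""
        self._q.put(None)
        self._worker.join()
```

The daemon flag means the worker never blocks interpreter shutdown; a real SDK would also flush on exit so queued records are not lost.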
Anthropic (Claude)
import anthropic
import kintic
# Initialize Kintic
tracer = kintic.init(api_key='kintic_live_<keyId>.<secret>')
# Patch Anthropic — one line, captures everything
kintic.patch(agent='refund-agent', policy='refund_policy_v1.7')
# Your existing code unchanged
client = anthropic.Anthropic()
response = client.messages.create(
model='claude-sonnet-4-20250514',
max_tokens=1000,
system='You are a refund processing agent...',
messages=[{
'role': 'user',
'content': 'Should I approve this refund?'
}]
)
# Kintic automatically captured:
# - Your system prompt
# - The user message
# - Claude's reasoning
# - The decision made
# - Token usage and cost

Works with all Claude models including claude-opus-4, claude-sonnet-4, and claude-haiku.
OpenAI
import openai
import kintic
tracer = kintic.init(api_key='kintic_live_<keyId>.<secret>')
kintic.patch(agent='support-agent', policy='v2.1', providers=['openai'])
client = openai.OpenAI()
response = client.chat.completions.create(
model='gpt-4o',
messages=[
{'role': 'system', 'content': 'You are a support agent...'},
{'role': 'user', 'content': 'I need help with my order'}
]
)
# Kintic automatically captured everything

LangChain
from langchain_anthropic import ChatAnthropic
from langchain.agents import initialize_agent, Tool
from kintic.integrations.langchain import KinticCallbackHandler
import kintic
tracer = kintic.init(api_key='kintic_live_<keyId>.<secret>')
# Create the Kintic callback handler
handler = KinticCallbackHandler(
tracer=tracer,
agent='langchain-refund-agent',
policy='refund_policy_v1.7'
)
# Pass handler to your LLM and agent
llm = ChatAnthropic(
model='claude-sonnet-4-20250514',
callbacks=[handler]
)
agent = initialize_agent(
tools=your_tools,
llm=llm,
callbacks=[handler]
)
# Run your agent normally
result = agent.run('Process refund for order ORD-8821')

Manual instrumentation
import kintic
tracer = kintic.init(api_key='kintic_live_<keyId>.<secret>')
@tracer.decision(
agent='custom-agent',
policy='v1.0',
delegation_chain=[
{'from': 'user', 'to': 'agent', 'authorization': 'standard'}
]
)
def make_decision(context, belief_state=None):
# Your agent logic here
result = your_llm_call(context)
return result
result = make_decision(
context={'order': order_data},
belief_state={
'policy_version': 'v1.0',
'confidence': 0.87,
'available_information': context_data
}
)

kintic.init()
kintic.init(api_key, base_url=None, debug=False)
api_key (required), base_url (optional, default https://origin.api.kintic.dev), debug (optional, default False).
Returns: KinticTracer instance.
kintic.patch()
kintic.patch(agent, policy=None, providers=None)
providers selects which client libraries to patch: ['openai'], ['anthropic'], or both. You must call kintic.init() first.
tracer.decision()
@tracer.decision(agent, policy=None, delegation_chain=None)
Wraps any function. Captures args, kwargs, return value, and latency. Ships asynchronously with zero latency impact on your agent.
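Mechanically, a decision decorator is an ordinary wrapping decorator: it times the call, bundles inputs and output into a record, and hands the record off for shipping. A simplified, self-contained sketch of that shape, assuming a pluggable `sink` — not the SDK's real implementation:

```python
import functools
import time

def decision(agent, policy=None, sink=None):
    """Toy stand-in for tracer.decision: captures args, kwargs,
    return value, and wall-clock latency, then passes the record
    to `sink` (the real SDK ships it asynchronously)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            record = {
                'agent': agent,
                'policy': policy,
                'args': args,
                'kwargs': kwargs,
                'result': result,
                'latency_s': time.perf_counter() - start,
            }
            if sink is not None:
                sink(record)
            return result  # caller sees the original return value
        return wrapper
    return decorator
```

Note that the wrapped function's return value passes through untouched; capture is purely a side effect.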
KinticCallbackHandler
KinticCallbackHandler(tracer, agent, policy=None)
Extends LangChain BaseCallbackHandler. Pass to callbacks= on your LLM and agent. Captures rich reasoning signals, tool calls, and delegation context automatically.
Understanding your Kintic dashboard
Decisions feed
Every decision appears in real time with agent, action, policy version, outcome, cost, and drift status.

Understanding drift alerts
Policy drift, repetition drift, and confidence drift are detected automatically, each with its estimated exposure.

Agent Autopsy
When Kintic detects drift, click Run Autopsy on any alert. Kintic analyzes your agent's last 50 decisions using Claude and generates a forensic report in plain English — what happened, when it started, why, and exactly what to investigate.

Delegation chains
See the full path from the user request to the final action, including every intermediate tool call.

