Monitors
Define natural-language monitoring rules that watch agent behavior and fire alerts when conditions are met.
import { Invariance } from '@invariance/sdk';
Overview
Monitors let you define rules in plain English that are continuously evaluated against agent trace data. When a monitor detects matching behavior, it fires a signal.
Monitors can be scoped to specific agents and assigned severity levels (low, medium, high, critical). Use them for safety guardrails, compliance checks, or anomaly detection.
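The four severity levels form a natural ordering, which is handy when deciding whether a fired signal should merely log or actually page someone. A minimal sketch of such a ranking (the `atLeast` helper and the numeric ranks are our own convention, not part of the SDK):

```typescript
// Severity levels as documented: low < medium < high < critical.
type Severity = 'low' | 'medium' | 'high' | 'critical';

const SEVERITY_RANK: Record<Severity, number> = {
  low: 0,
  medium: 1,
  high: 2,
  critical: 3,
};

// True when `actual` is at or above `threshold`, e.g. for paging decisions.
function atLeast(actual: Severity, threshold: Severity): boolean {
  return SEVERITY_RANK[actual] >= SEVERITY_RANK[threshold];
}

console.log(atLeast('critical', 'high')); // true
console.log(atLeast('low', 'medium'));    // false
```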
Quick Example
const inv = Invariance.init({ apiKey: process.env.INVARIANCE_API_KEY! });
const monitor = await inv.monitors.create({
name: 'No PII in responses',
natural_language: 'Agent should never include social security numbers or credit card numbers in output',
agent_id: 'support-agent',
severity: 'critical',
});
// Manually trigger evaluation
const result = await inv.monitors.evaluate(monitor.id);
console.log(result.matches_found);

API Reference
monitors.list
List monitors, optionally filtered by status or agent.
async list(opts?: { status?: string; agent_id?: string }): Promise<Monitor[]>
Returns: Promise<Monitor[]>
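For example, to pull only the monitors scoped to one agent you can pass both filter options. The `status: 'active'` value and the thin client interface below are assumptions for illustration; the real `Monitor` type comes from the SDK:

```typescript
// Minimal structural types for illustration; the SDK exports the real ones.
interface Monitor {
  id: string;
  name: string;
  severity: 'low' | 'medium' | 'high' | 'critical';
}

interface MonitorsListApi {
  list(opts?: { status?: string; agent_id?: string }): Promise<Monitor[]>;
}

// Fetch monitors for a single agent; 'active' is an assumed status value.
async function activeMonitorsFor(api: MonitorsListApi, agentId: string): Promise<Monitor[]> {
  return api.list({ status: 'active', agent_id: agentId });
}
```

With the real SDK you would pass `inv.monitors` as the `api` argument.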
monitors.create
Create a monitor from natural language.
async create(body: CreateMonitorBody): Promise<Monitor>
Parameters
name (string): Monitor name.
natural_language (string): The plain-English rule to evaluate.
agent_id (string): The agent to scope the monitor to.
severity (string): One of low, medium, high, or critical.
Returns: Promise<Monitor>
monitors.evaluate
Trigger manual evaluation of a monitor.
async evaluate(id: string): Promise<MonitorEvaluateResult>
Returns: Promise<MonitorEvaluateResult>
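A small wrapper over evaluate can turn the result into a boolean gate, e.g. for a pre-deployment check. We assume here, as in the Quick Example, that the result carries a numeric `matches_found` count; the evaluator interface is a structural stand-in for `inv.monitors`:

```typescript
interface EvaluateResult {
  // Assumed to be a count of matching traces, as logged in the Quick Example.
  matches_found: number;
}

interface Evaluator {
  evaluate(id: string): Promise<EvaluateResult>;
}

// Returns true when the monitor found at least one matching trace.
async function hasViolations(api: Evaluator, monitorId: string): Promise<boolean> {
  const result = await api.evaluate(monitorId);
  return result.matches_found > 0;
}
```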
monitors.delete
Delete a monitor.
async delete(id: string): Promise<void>
Returns: Promise<void>
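Combining list and delete gives a cleanup routine, for instance when decommissioning an agent. This sketch uses only the two documented calls; the interface is a structural stand-in for `inv.monitors`:

```typescript
interface MonitorRef {
  id: string;
}

interface MonitorsAdmin {
  list(opts?: { status?: string; agent_id?: string }): Promise<MonitorRef[]>;
  delete(id: string): Promise<void>;
}

// Delete every monitor scoped to the given agent; returns the deleted ids.
async function deleteMonitorsFor(api: MonitorsAdmin, agentId: string): Promise<string[]> {
  const monitors = await api.list({ agent_id: agentId });
  for (const m of monitors) {
    await api.delete(m.id);
  }
  return monitors.map((m) => m.id);
}
```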
Use Cases
- Detect PII leakage in agent responses
- Monitor for unauthorized actions or policy violations
- Set up compliance checks for regulated industries
- Alert on anomalous agent behavior patterns
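Use cases like these often translate into a small set of monitors installed together. A sketch that maps rule definitions to create calls, reusing the Quick Example's PII rule (the second rule's wording, the `GUARDRAILS` set, and the creator interface are illustrative, not prescribed by the SDK):

```typescript
type Severity = 'low' | 'medium' | 'high' | 'critical';

interface RuleSpec {
  name: string;
  natural_language: string;
  severity: Severity;
}

interface MonitorCreator {
  create(body: RuleSpec & { agent_id: string }): Promise<{ id: string }>;
}

// Illustrative guardrail set; tune the wording to your own policies.
const GUARDRAILS: RuleSpec[] = [
  {
    name: 'No PII in responses',
    natural_language:
      'Agent should never include social security numbers or credit card numbers in output',
    severity: 'critical',
  },
  {
    name: 'No unauthorized actions',
    natural_language: 'Agent should only call tools that its policy explicitly allows',
    severity: 'high',
  },
];

// Create one monitor per rule, all scoped to the same agent.
async function installGuardrails(api: MonitorCreator, agentId: string): Promise<string[]> {
  const ids: string[] = [];
  for (const rule of GUARDRAILS) {
    const monitor = await api.create({ ...rule, agent_id: agentId });
    ids.push(monitor.id);
  }
  return ids;
}
```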