Analysis Guide

The analysis layer is a reasoning substrate for AI agents. Not a calculator — a system that produces structured explanations agents can parse, chain, and act on.

The Difference

Without analysis:

Agent: "Revenue is £17.7M"

With analysis:

Agent: "Revenue dropped 20% last month, driven by Germany (-72% of change).
        94% of revenue is concentrated in 3 countries — critical risk.
        Churned customers are independent of high spenders (Jaccard=0.049) —
        churn is a frequency problem, not a value problem."

Same data. Different level of understanding.

All 22 Primitives

See the complete reference for every primitive with parameters and examples.

Core (7 — available on Free plan)

| Primitive | Question it answers |
| --- | --- |
| `health` | Is this metric behaving normally? |
| `trends` | Is this accelerating or decelerating? |
| `drivers` | Which dimension explains the most variance? |
| `segment_performance` | How does this vary across segments? |
| `contribution` | What drove the change between periods? |
| `compare` | How do two groups or time periods differ? |
| `anomalies` | Which data points are outliers? |
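To make the `contribution` question concrete, here is a minimal, self-contained sketch of the kind of period-over-period decomposition it describes. The revenue figures and the plain-Python implementation are invented for illustration; they are not the SDK's internals.

```python
# Illustrative sketch: decompose a metric change between two periods
# into per-segment contributions (the question `contribution` answers).
# All figures are made up for the example.

def contribution(prev, curr):
    """Share of the total change attributable to each segment."""
    total_change = sum(curr.values()) - sum(prev.values())
    return {
        seg: (curr[seg] - prev.get(seg, 0)) / total_change
        for seg in curr
    }

prev = {"Germany": 8.0, "UK": 6.0, "France": 4.0}   # last period (£M)
curr = {"Germany": 5.5, "UK": 5.8, "France": 3.7}   # this period (£M)

shares = contribution(prev, curr)
print(max(shares, key=lambda s: abs(shares[s])))  # → Germany
```

The shares sum to 1.0 by construction, so an agent can read each value directly as "fraction of the change explained by this segment".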

Advanced (15 — Team plan and above)

| Primitive | Question it answers |
| --- | --- |
| `correlate` | Do two metrics move together? |
| `root_cause` | Why did this metric change? |
| `sensitivity` | What's the concentration risk? |
| `forecast` | Where is this metric heading? |
| `causal_impact` | Did a specific event change the metric? |
| `scenario` | What if we change a segment by X%? |
| `benchmark` | How does this compare to its historical range? |
| `cohort` | How do user cohorts retain over time? |
| `funnel` | Where do users drop off in a process? |
| `metric_impact` | If metric A changes, how does B respond? |
| `counterfactual` | What would the total be without a segment? |
| `monitor` | Is a threshold condition currently triggered? |
| `data_quality` | Nulls, duplicates, freshness, schema health |
| `pareto` | Which segments account for 80% of the total? |
| `threshold` | What's the optimal natural breakpoint? |
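As one example of what these primitives compute, here is a self-contained sketch of the cumulative-share logic behind the `pareto` question. The revenue data and the implementation are invented for illustration; the real primitive runs server-side.

```python
# Illustrative sketch of the 80/20 question `pareto` answers:
# which segments account for 80% of the total? Data is invented.

def pareto_segments(values, threshold=0.80):
    """Smallest set of top segments whose cumulative share reaches `threshold`."""
    total = sum(values.values())
    picked, running = [], 0.0
    for seg, v in sorted(values.items(), key=lambda kv: kv[1], reverse=True):
        picked.append(seg)
        running += v / total
        if running >= threshold:
            break
    return picked

revenue = {"UK": 9.0, "Germany": 4.5, "France": 2.5, "Spain": 1.0, "Italy": 0.7}
print(pareto_segments(revenue))  # → ['UK', 'Germany', 'France']
```

Three of five segments carry over 80% of the total here, which is exactly the concentration signal `sensitivity` would flag as a risk.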

Response Contract

Every analysis method returns:

```json
{
    "value": {},
    "explanation": "one sentence, plain English",
    "confidence": 0.85,
    "warnings": [],
    "suggested_actions": [],
    "metric_health": {}
}
```

Agents don't need to interpret raw numbers: the `explanation` is plain English, the `suggested_actions` are directly actionable, and the `confidence` score tells them when to trust the result.
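An agent consuming this contract can gate its behaviour on the `confidence` and `warnings` fields. A minimal sketch follows; the 0.7 cutoff is an arbitrary assumption for illustration, not an SDK default, and the sample response is invented.

```python
# Sketch: decide whether to act on an analysis response.
# The dict mirrors the response contract above; the 0.7 cutoff
# is an assumed policy, not an SDK recommendation.

def should_act(response, min_confidence=0.7):
    """Act only on confident responses with no warnings."""
    return response["confidence"] >= min_confidence and not response["warnings"]

response = {
    "value": {"delta_pct": -20},
    "explanation": "Revenue dropped 20% last month.",
    "confidence": 0.85,
    "warnings": [],
    "suggested_actions": ["Investigate Germany segment"],
    "metric_health": {},
}

print(should_act(response))  # → True
```

Low-confidence or warning-laden responses can then be routed to a fallback, such as asking for more data or escalating to a human.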

Agent Reasoning Chain

The analysis methods compose. An agent investigating "why is churn increasing?" would:

```python
# 1. What's happening?
trends = om.analysis.trends("churn_risk")
# → momentum: accelerating

# 2. Which dimensions explain it?
drivers = om.analysis.drivers("churn_by_country", dimensions=["country", "tier"])
# → country CV=5.7 (explains most variance)

# 3. Where is it concentrated?
sensitivity = om.analysis.sensitivity("churn_by_country", "country", scenario="remove_top_3")
# → 94% in top 3 countries, risk=critical

# 4. Is it related to spending?
correlation = om.analysis.correlate("churned_customers", "high_spenders")
# → Jaccard=0.049, independent

# 5. What would happen without the worst country?
counterfactual = om.analysis.counterfactual("churn_by_country", remove={"country": "UK"})
# → churn would drop 35%

# Agent builds explanation:
# "Churn is accelerating, concentrated in 3 countries (94%),
#  and independent of spending level. Removing UK alone would
#  reduce churn by 35%. This is a frequency/engagement problem,
#  not a value problem."
```

Five calls. One coherent explanation. That's the reasoning substrate.

MIT Licensed (SDK) | Proprietary (Server)