Odin: Multi-Agent Orchestration for Data Analysis

How Odin Answers a Question

When a user asks Odin a question, it doesn’t just generate an answer. It builds a structured, verifiable reasoning chain: understanding the context of the query, planning a series of actions, executing data retrieval and analysis, validating outputs, and finally synthesizing a complete response.

Unlike traditional LLM-based assistants that rely purely on language reasoning, Odin combines LLM planning with executable data science code, ensuring that every number, trend, and insight is grounded in actual data from the HockeyStack platform.

The following sections break down how Odin interprets, analyzes, validates, and delivers accurate results end-to-end.

1. Understanding the Question

When a user asks a question, Odin first interprets it using contextual information about the user’s properties, goals, touchpoints, and other relevant data.

It then generates a multi-step plan describing how to answer the question most effectively.
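To make the idea concrete, a plan like this can be modeled as an ordered list of steps, each naming the agent that will handle it. This is a hypothetical sketch (the `PlanStep` and `Plan` types, agent names, and example question are illustrative, not Odin's actual internals):

```python
from dataclasses import dataclass, field


@dataclass
class PlanStep:
    agent: str                      # e.g. "data_query", "report_builder"
    action: str                     # what the agent should do
    inputs: dict = field(default_factory=dict)


@dataclass
class Plan:
    question: str
    steps: list[PlanStep]


# An illustrative plan for a revenue-attribution question.
plan = Plan(
    question="Which channel drove the most pipeline last quarter?",
    steps=[
        PlanStep("data_query", "fetch touchpoints", {"period": "last_quarter"}),
        PlanStep("analysis", "aggregate pipeline by channel"),
        PlanStep("evaluation", "cross-check totals against source data"),
    ],
)
```

Structuring the plan as data, rather than free-form text, is what lets later stages execute, evaluate, and re-run individual steps.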

2. Generating the Plan

Every step in the plan is handled by a specialized agent.

At HockeyStack, we’ve built an ecosystem of agents, each focused on a different part of the platform, such as data querying, report building, and buyer journey aggregation.
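One common way to wire such an ecosystem together is a registry that maps step types to agent functions, so each plan step is dispatched to the agent that owns that part of the platform. The sketch below is an assumption about how this could look; the agent names and stubbed return values are illustrative:

```python
from typing import Callable

# Maps a step's agent name to the function that handles it.
AGENT_REGISTRY: dict[str, Callable[[dict], dict]] = {}


def register(name: str):
    """Decorator that adds an agent function to the registry."""
    def wrap(fn):
        AGENT_REGISTRY[name] = fn
        return fn
    return wrap


@register("data_query")
def data_query_agent(step: dict) -> dict:
    # Would query the platform's data layer; stubbed here.
    return {"rows": []}


@register("report_builder")
def report_builder_agent(step: dict) -> dict:
    # Would assemble a dashboard-style report; stubbed here.
    return {"report": "table"}


def dispatch(step: dict) -> dict:
    """Route a plan step to its specialized agent."""
    return AGENT_REGISTRY[step["agent"]](step)
```

A registry like this keeps agents composable: adding a new capability means registering one more function, without touching the orchestration logic.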

3. Data Retrieval

Each agent then interacts with its respective data layer, querying, filtering, and retrieving relevant information from the platform. The agents return the data in the same structured format as the corresponding dashboard (tables, line charts, pie charts, etc.).

This means Odin doesn’t just query a database. It uses the platform’s reporting structures to collect the same data in a similar way to a human user, but in a fully automated, composable way.
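A minimal sketch of what such a structured, dashboard-shaped result could look like, assuming a simple `TableResult` container (the type name and example values are hypothetical):

```python
from dataclasses import dataclass


@dataclass
class TableResult:
    kind: str            # "table", "line_chart", "pie_chart", ...
    columns: list[str]   # header row, as a dashboard would show it
    rows: list[list]     # data rows in column order


def fetch_channel_table() -> TableResult:
    # In the real system this would hit the platform's reporting
    # layer; stubbed here with example values for illustration.
    return TableResult(
        kind="table",
        columns=["channel", "pipeline"],
        rows=[["Paid Search", 120000], ["Organic", 95000]],
    )


result = fetch_channel_table()
```

Because the result carries its own shape (`kind`, `columns`, `rows`), downstream agents can treat it like a report rather than a bag of raw rows.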

4. Data Analysis

Once the relevant data is retrieved, Odin passes it to an Analysis Agent.

This agent uses data science code (not LLM reasoning) to extract insights, trends, and key datapoints.

This is a fundamental shift from other agents that feed raw data into an LLM and ask it questions.

Odin writes executable analysis code (like generating an Excel formula) and runs that code on the data to produce real, verifiable results.

For example, instead of asking an LLM “what’s the maximum number,” Odin writes and runs a function that calculates the maximum directly from the dataset.
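The maximum example can be sketched directly: the agent emits a small snippet of code and executes it against the dataset, so the answer is computed rather than guessed. The dataset and the emitted snippet below are illustrative:

```python
# Example dataset the analysis is running over.
data = [42, 17, 88, 3, 56]

# Code the analysis agent might emit for "what's the maximum number?".
generated_code = "result = max(data)"

# Run the generated code in a namespace that contains the data.
namespace = {"data": data}
exec(generated_code, namespace)

print(namespace["result"])  # → 88
```

The point is verifiability: `max(data)` always returns the true maximum of the dataset, whereas an LLM reading the same numbers as text can misread or hallucinate one.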

5. Evaluation and Validation

The results of the analysis are passed through an Evaluation Agent.

This agent cross-checks the analysis:

  • Against the original datasets to ensure every insight and number is accurate.

  • Against the original plan to confirm the plan itself is still optimal for answering the question.

If any issue is detected — like a wrong assumption or missing step — the evaluation agent flags it.
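The dataset cross-check can be sketched as recomputing each claimed number from the source data and flagging any mismatch. This is a simplified, hypothetical version of such a pass (the claim names and dataset are illustrative):

```python
def evaluate(claims: dict[str, float], dataset: list[float]) -> list[str]:
    """Recompute each claimed statistic and return a flag per mismatch."""
    recomputed = {
        "max": max(dataset),
        "min": min(dataset),
        "sum": sum(dataset),
    }
    flags = []
    for name, claimed in claims.items():
        actual = recomputed.get(name)
        if actual is not None and actual != claimed:
            flags.append(f"{name}: claimed {claimed}, actual {actual}")
    return flags


# The "max" claim is wrong (90 vs 88), the "sum" claim checks out.
flags = evaluate({"max": 90, "sum": 206}, [42, 17, 88, 3, 56])
print(flags)  # → ['max: claimed 90, actual 88']
```

Every flag then goes back to the orchestrator, which decides whether to re-run the offending step or revise the plan.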

6. Supervision and Orchestration

Overseeing this entire process is the Orchestration Agent.

This is Odin’s orchestrator — it determines:

  • Which agents to run

  • In what sequence

  • When to trigger evaluations

The Orchestration Agent also re-runs or adjusts agents when the Evaluation Agent detects problems.

In short, it supervises the full reasoning chain to ensure both accuracy and efficiency.
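The supervision loop described above can be sketched as: run each planned step, evaluate its output, and retry the step when it is flagged. This is a simplified assumption about the control flow, not Odin's actual implementation (`run_step`, `evaluate`, and `max_retries` are illustrative names):

```python
def orchestrate(plan, run_step, evaluate, max_retries=2):
    """Run each step in order, re-running any step the evaluator flags."""
    results = {}
    for step in plan:
        for attempt in range(max_retries + 1):
            output = run_step(step)
            flags = evaluate(step, output)
            if not flags:                 # evaluation passed
                results[step] = output
                break
        else:                             # exhausted retries
            raise RuntimeError(f"step {step!r} failed evaluation")
    return results


# Usage: a trivial plan where evaluation always passes.
results = orchestrate(["query", "analyze"], lambda s: s.upper(), lambda s, o: [])
print(results)  # → {'query': 'QUERY', 'analyze': 'ANALYZE'}
```

Keeping retries inside the orchestrator, rather than inside each agent, is what lets a single evaluation policy govern the whole chain.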

7. Delivering the Answer

After all checks pass, the Orchestration Agent sends the final, verified analysis back to the user in the Odin Chat interface — a complete, data-backed answer to their question.

Odin ushers in a new era in AI analysis for GTM data: an ecosystem of specialized agents that combine planning, execution, and validation into a single automated pipeline.

Instead of relying solely on language-based inference, Odin executes real data science workflows: querying HockeyStack’s data layers, writing analysis code, validating outputs, and supervising each step for accuracy.

This architecture enables Odin to deliver not just answers, but also auditable, data-verified insights, bridging the gap between conversational AI and analytical truth.
