Context Engineering Beyond Semantic Layers
Sources: 1 • Confidence: Medium • Updated: 2026-03-10 08:30
Key takeaways
- Gravity claims a semantic layer is useful context but neither necessary nor sufficient for high-quality analytics: frontier models can work from well-described schemas (so it is not necessary), while business context is still required to explain metric importance and ownership (so it is not sufficient).
- Orion provides traceability: users can select an output claim and view its source and lineage back to input data and transformations; corrections are then captured as feedback and compacted into memory or a knowledge base for reuse.
- Orion supports partitioned deployments where multiple custom, guardrailed agents can be defined per audience or project with role-specific behavior and restricted access rules.
- Orion aims to answer questions (e.g., what drove revenue last week) by iteratively uncovering drivers (e.g., country- and day-specific drops) and enriching the analysis with external data such as weather when relevant.
- Gravity reports using Orion internally to optimize cloud spend and claims a 60% reduction in its cloud bill by identifying inefficiencies and making governed changes.
Sections
Context Engineering Beyond Semantic Layers
- Gravity claims a semantic layer is useful context but neither necessary nor sufficient for high-quality analytics: frontier models can work from well-described schemas (so it is not necessary), while business context is still required to explain metric importance and ownership (so it is not sufficient).
- Orion builds organizational knowledge by reading dbt model files, BI artifacts (including Looker), and warehouse metadata, and combines this with user-approvable/correctable business context.
- Orion gathers business-operating context such as user goals, accountability, and span of control to improve the usefulness of analytics beyond data definitions alone.
- Orion is reported to produce useful initial analyses in messy enterprise environments, with humans providing a small number of clarifications to fix mistaken assumptions (e.g., about discontinued businesses).
- Gravity claims that despite strong semantic models, companies historically did not run quarterly earnings financial reporting through Looker largely because semantic layers were not kept sufficiently up to date.
- Gravity believes accessible text, augmented with attributes such as recency, is an effective representation for semantic context because modern models can manipulate large text corpora without complex embedding strategies.
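Gravity does not publish its context format; the following is a minimal sketch of the "text plus recency attributes" idea, with all names (`ContextEntry`, `assemble_context`, the `[source | updated]` tag format) being hypothetical illustrations rather than Orion's actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ContextEntry:
    """One piece of semantic context kept as plain text (hypothetical shape)."""
    text: str             # e.g., a metric definition or an ownership note
    source: str           # e.g., "dbt: models/revenue.sql"
    updated_at: datetime  # recency attribute used for ranking

def assemble_context(entries, max_chars=4000):
    """Concatenate entries newest-first, tagging each with source and recency,
    instead of retrieving via an embedding index."""
    ranked = sorted(entries, key=lambda e: e.updated_at, reverse=True)
    parts, used = [], 0
    for e in ranked:
        block = f"[source: {e.source} | updated: {e.updated_at.date()}]\n{e.text}"
        if used + len(block) > max_chars:
            break
        parts.append(block)
        used += len(block)
    return "\n\n".join(parts)
```

The design choice being sketched: because the whole corpus is readable text, stale or conflicting definitions stay visible to the model (and to admins) rather than being hidden inside vector similarity scores.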
Enterprise Trust Via Traceability And Iterative Workflows
- Orion provides traceability: users can select an output claim and view its source and lineage back to input data and transformations; corrections are then captured as feedback and compacted into memory or a knowledge base for reuse.
- Orion is designed as a multi-turn analytical coworker: spending up to an hour iterating can produce an outcome a user can confidently defend, even where a single-shot answer would be wrong.
- Gravity claims showing the chain of assumptions and investigative questions used to reach a finding builds trust faster than returning a single correct-looking answer without process visibility.
- Orion is designed for enterprise settings where maintaining trust in outputs is essential because loss of trust is difficult to recover.
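One way to picture the claim-to-lineage loop described above is a small data structure linking each output claim to an ordered chain of transformations, with user corrections banked for reuse. All names here (`Claim`, `LineageStep`, `KnowledgeBase`) are hypothetical; this is a sketch of the workflow, not Orion's implementation.

```python
from dataclasses import dataclass

@dataclass
class LineageStep:
    artifact: str   # table, dbt model, or transformation identifier
    operation: str  # e.g., "filter", "aggregate", "join"

@dataclass
class Claim:
    statement: str
    lineage: list   # ordered steps tracing back to raw input data

class KnowledgeBase:
    """Collects user corrections on claims so later analyses can reuse them."""
    def __init__(self):
        self.corrections = []

    def correct(self, claim, note):
        # A correction is stored alongside the claim it amends,
        # analogous to feedback being compacted into memory.
        self.corrections.append({"claim": claim.statement, "note": note})
```

In this shape, "show me where this number came from" is just a walk over `claim.lineage`, and the correction log is what gets compacted into memory between sessions.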
Governance Through Partitioned Agents And Scoped Memory
- Orion supports partitioned deployments where multiple custom, guardrailed agents can be defined per audience or project with role-specific behavior and restricted access rules.
- Orion organizes work into projects with separate context stores and memories to preserve stakeholder preferences and corrections for reuse in similar future analyses.
- Gravity claims current constraints on agent-to-agent onboarding and enterprise knowledge access (e.g., Slack history cannot be freely summarized on first connect) limit fully ambient context ingestion.
- Orion is being built with a glass-box design that exposes steps, memories, and knowledge sources to administrators and uses self-reflection to suggest deleting stale memories to avoid overload.
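The partitioning and memory-hygiene ideas in this section can be sketched as an agent bound to one project, an allow-list of data sources, and a self-reflection pass that flags stale memories for admin review. The class and parameter names (`ScopedAgent`, `memory_ttl_days`, `stale_memories`) are illustrative assumptions, not Orion's API.

```python
from datetime import datetime, timedelta, timezone

class ScopedAgent:
    """A guardrailed agent scoped to one project and an allow-list of sources."""
    def __init__(self, name, project, allowed_sources, memory_ttl_days=90):
        self.name = name
        self.project = project
        self.allowed = set(allowed_sources)
        self.memories = []  # (fact, recorded_at) pairs, scoped to this project
        self.ttl = timedelta(days=memory_ttl_days)

    def can_read(self, source):
        # Restricted access rule: only allow-listed sources are readable.
        return source in self.allowed

    def remember(self, fact, when=None):
        self.memories.append((fact, when or datetime.now(timezone.utc)))

    def stale_memories(self, now=None):
        """Self-reflection pass: surface memories older than the TTL so an
        administrator can approve deleting them (glass-box, not silent decay)."""
        now = now or datetime.now(timezone.utc)
        return [fact for fact, recorded in self.memories if now - recorded > self.ttl]
```

Keeping memories per project (rather than in one global store) is what lets stakeholder preferences carry over to similar analyses without leaking across audiences.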
Breadth Of Inputs And Cross-Silo Analysis As A Differentiator
- Orion aims to answer questions (e.g., what drove revenue last week) by iteratively uncovering drivers (e.g., country- and day-specific drops) and enriching the analysis with external data such as weather when relevant.
- Orion supports uploading local spreadsheets and unstructured documents (e.g., PDFs, Word files, slide decks) to incorporate them directly into analyses.
- Orion can connect to multiple governed data sources (including multiple warehouses) and produce analyses that span them.
- Gravity positions Orion’s differentiation versus in-database AI assistants as analyzing across disparate sources without requiring hard joins, using multiple specialized agents that exchange context (e.g., goals decks, external data, warehouse data).
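A plausible reading of "analysis without hard joins" is that each source is queried independently and results are aligned on a shared key in memory, rather than being joined at the warehouse level. The sketch below illustrates that idea with a hypothetical `align_by_key` helper; Gravity has not described its actual mechanism.

```python
def align_by_key(key, *tables):
    """Combine rows from independently queried sources on a shared key,
    via in-memory alignment instead of a database-level join.
    Later sources fill in extra fields for rows sharing the same key."""
    combined = {}
    for table in tables:
        for row in table:
            combined.setdefault(row[key], {}).update(row)
    return list(combined.values())

# Example: warehouse revenue enriched with external weather data by day.
revenue = [{"day": "2026-03-02", "country": "DE", "revenue": 100}]
weather = [{"day": "2026-03-02", "rain_mm": 12}]
enriched = align_by_key("day", revenue, weather)
```

This keeps each governed source behind its own connector while still letting a downstream agent reason over the combined rows.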
Usage Observability And ROI-Shaped Narratives
- Gravity reports using Orion internally to optimize cloud spend and claims a 60% reduction in its cloud bill by identifying inefficiencies and making governed changes.
- Gravity uses Looker usage metadata to show customers how business users consume analytics assets, helping data teams assess engagement patterns.
- Lucas reports a POC where Orion surfaced board-meeting follow-up investigation items, contributing to a publicly traded company converting quickly to a customer.
- A customer reportedly used Orion by uploading a competitor research PDF and combining it with internal market data to generate talking points for a next-day Capitol Hill appearance.
Watchlist
- Gravity warns that accumulated context and memory can become technical debt if the system cannot detect when facts or preferences are no longer relevant, leading to incorrect behavior over time.
- Gravity anticipates AI agents will increasingly recommend uncomfortable cross-cutting warehouse changes (e.g., flagging long-unused tables) because agents are less constrained by human social norms and risk aversion.
Unknowns
- What objective quality metrics (accuracy, calibration, error modes) does Orion achieve on enterprise analytics tasks, and how do these change after memory accumulation over weeks/months?
- What is the measured time-to-first-usable-insight in messy environments, and how many human clarifications are typically required before outputs stabilize for a project?
- How exactly does Orion perform cross-warehouse and cross-source analysis without 'hard joins' while maintaining metric consistency and auditability?
- What are the concrete governance controls (RBAC model, policy enforcement points, audit logs) for partitioned agents and for uploaded spreadsheets/documents?
- What memory lifecycle mechanisms exist in production (decay, deletion approvals, conflict resolution) and how often do stale memories cause observable user-visible errors?