/PRODUCTS - AI AGENT TELEMETRY DASHBOARD

Make every SAP AI agent observable - from first prompt to final action.

Codemine Agent Telemetry Dashboard gives SAP teams a live control layer for AI agents: LLM calls, tool executions, retrieval steps, errors, latency, token usage, and business context - so every agent run can be monitored, explained, improved, and audited.

Codemine Agent Telemetry Dashboard - operations cockpit showing aif-analysis-agent token usage and request count.
/INTERACTIVE DEMO

Trace an agent run by Context ID.


AI agents cannot be operated like classic SAP reports.

SAP AI agents behave dynamically. One run may call an LLM, another may retrieve documents, trigger an MCP tool, call an SAP API, or stop because an authorization or grounding step failed. Standard logs show that something happened. Telemetry shows what happened, where it happened, why it failed, and what it cost.

Codemine Agent Telemetry Dashboard turns AI agent execution into a transparent operational view: every agent run is captured as a trace, enriched with SAP business context, and presented in a way SAP, integration, security, and AI teams can understand together.

Without telemetry, AI agents become a black box. With Codemine, every prompt, model call, retrieval step, tool execution, latency spike, failure, and cost driver becomes visible.

/WHERE THE ROI SHOWS UP

Measurable control over AI agent operations.

Faster debugging

Find the exact failing step in a multi-step agent run instead of reading disconnected logs across AI Core, middleware, SAP APIs, and external tools.

Better agent reliability

Track failed, degraded, slow, and incomplete sessions. Spot recurring failure patterns before users lose trust in the agent.

Token and cost transparency

See which agents, prompts, tenants, tools, and models consume the most tokens - and where optimization has the highest impact.

Enterprise audit readiness

Keep a traceable record of agent runs, system calls, grounding steps, and execution outcomes for internal governance and compliance reviews.

Adoption insight

Understand which agents are actually used, by whom, in which process, and whether they complete the intended business task.

Architecture confidence

Use real production telemetry to decide whether to change model, prompt design, retrieval strategy, tool design, routing logic, or agent architecture.

/THE PRODUCT IN ACTION

Every agent run, explained.

01

Operations dashboard

A single launchpad for SAP AI agent operations. KPI tiles show total sessions, token volume, requests, and errors across all active agents. Navigate to Agent Summary, Trace Hierarchy, or Token Usage from the left sidebar.

Codemine Agent Telemetry dashboard overview showing aif-analysis-agent with 5412 tokens, 17 requests, 0 errors.

The cockpit view for AI operations: what is running, what is consuming tokens, and where errors are accumulating.

02

Agent Summary

Every agent run logged in a table with Context ID, Agent, Time, Duration, Status, Total Tokens, and Model. Filter by time window, status, or agent. Use the Context ID column to find a specific run and trace it across the system.

Agent Summary table showing aif-analysis-agent runs with Context ID, duration, status, token counts, and model columns.

Every run logged - scan the Context ID column to find and trace the exact run you need.

03

Trace Hierarchy

The full execution tree for any agent run: parent span at the top, child spans nested below. Filter by User ID or Context ID to isolate a specific conversation. Each row shows Span, User, Context ID, Duration, and Status.

Trace Hierarchy view showing parent agent.invoke span with nested LangGraph and LLM child spans, filterable by Context ID.

Paste a Context ID into the filter - the complete execution tree for that run appears instantly.
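Conceptually, isolating one run is a filter on the span set by Context ID followed by regrouping children under their parent span. A minimal pure-Python sketch - the field names and sample spans below are illustrative, not the dashboard's internal schema:

```python
# Sketch: filter spans by Context ID and rebuild the parent/child tree.
# Field names and sample data are illustrative, not the real schema.
spans = [
    {"span_id": "a1", "parent_id": None, "name": "agent.invoke", "context_id": "ctx-42"},
    {"span_id": "b2", "parent_id": "a1", "name": "llm.chat",     "context_id": "ctx-42"},
    {"span_id": "c3", "parent_id": "a1", "name": "tool.sap_api", "context_id": "ctx-42"},
    {"span_id": "d4", "parent_id": None, "name": "agent.invoke", "context_id": "ctx-99"},
]

def trace_tree(spans, context_id):
    """Return the root span and its direct children for one Context ID."""
    run = [s for s in spans if s["context_id"] == context_id]
    root = next(s for s in run if s["parent_id"] is None)
    children = [s for s in run if s["parent_id"] == root["span_id"]]
    return root, children

root, children = trace_tree(spans, "ctx-42")
print(root["name"], [c["name"] for c in children])
```

The same filter applied on User ID instead of Context ID isolates all runs of one user rather than one conversation.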

04

Span Detail

Click any span to open the detail view: Span Name, Service, Status, Trace ID, Span ID, Parent Span, Context ID, User, Model, Timestamp, Duration, and Token counts (in/out/total). The Attributes table below shows every OpenTelemetry attribute captured during that span.

Span Detail modal showing ChatLiteLLM.chat span with Trace ID, Context ID, Model sap/gpt-4.1, Duration 1025ms, Tokens 1340/19/1359.

Trace ID, Context ID, model, latency, and token breakdown - one span, all the context you need.

05

LLM Attributes

The Attributes section of each span captures the full OpenTelemetry GenAI semantic conventions: workflow name, entity path, input messages, token counts (input, output, total, cache), output messages with tool calls, model name, provider, and LangGraph step - everything needed to understand and reproduce the LLM call.

Span Attributes showing gen_ai.input.messages, token counts, gen_ai.output.messages with tool call, model sap/gpt-4.1, provider litellm.

The full prompt, tool call output, token breakdown, and provider - captured as standard GenAI telemetry attributes.
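As a concrete illustration of the convention, a span's attribute map might look like the sample below; the values mirror the screenshot above, and the exact attribute keys are an assumption based on the OpenTelemetry GenAI semantic conventions rather than a dump from the product:

```python
# Illustrative span attributes in the style of the OpenTelemetry GenAI
# semantic conventions. Values are sample data matching the screenshot;
# the key names are assumptions, not captured output.
attrs = {
    "gen_ai.request.model": "sap/gpt-4.1",
    "gen_ai.provider.name": "litellm",
    "gen_ai.usage.input_tokens": 1340,
    "gen_ai.usage.output_tokens": 19,
}

# The dashboard's in/out/total breakdown is just the sum of the two counts.
total = attrs["gen_ai.usage.input_tokens"] + attrs["gen_ai.usage.output_tokens"]
print(attrs["gen_ai.request.model"], total)
```

Because the attribute names are standardized, any OTLP-compatible backend can query and aggregate them the same way.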

06

Token Usage

A dedicated Token Usage view breaks down AI consumption by agent and time period. Each row shows Agent, Timestamp, Input Tokens, Output Tokens, and Total Tokens. Use this to identify which runs drive the highest token consumption and where model routing or prompt optimization would have the highest impact.

Token Usage table showing aif-analysis-agent runs with input, output, and total token counts per run.

Move from "AI is expensive" to knowing exactly which agent run, prompt, and model call is driving the cost.
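The per-run rows roll up naturally into per-agent cost drivers. A small sketch, assuming rows shaped like the Token Usage table columns (the agents and numbers are sample data, not real measurements):

```python
from collections import defaultdict

# Sketch: roll up per-run token counts into per-agent totals to find
# the biggest cost drivers. Sample data only.
rows = [
    {"agent": "aif-analysis-agent", "input": 1340, "output": 19},
    {"agent": "aif-analysis-agent", "input": 980,  "output": 55},
    {"agent": "idoc-router-agent",  "input": 210,  "output": 12},
]

totals = defaultdict(int)
for r in rows:
    totals[r["agent"]] += r["input"] + r["output"]

top_agent = max(totals, key=totals.get)
print(top_agent, totals[top_agent])
```

Grouping by model, tenant, or prompt instead of agent is the same aggregation over a different key.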

/CAPABILITIES

What the dashboard does.

Captures every agent interaction

Collects telemetry from AI agents, LLM calls, retrieval operations, tool executions, workflows, SAP APIs, and external extensions.

Correlates technical spans with business context

Connects low-level execution traces with agent name, tenant, conversation, user, process, environment, and operation type.

Detects failures and degraded runs

Flags failed, slow, incomplete, expensive, or repeatedly retried agent sessions before they become user-facing incidents.

Explains the execution path

Shows how the agent moved from prompt to retrieval, tool call, SAP action, and final response.

Tracks token usage and cost drivers

Breaks down AI consumption by model, provider, tenant, user group, agent, and operation.

Supports secure observability

Allows sensitive prompt, response, and business data to be masked, redacted, or excluded according to customer policy.
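A typical policy applies pattern-based masking to prompt and response attributes before export. A minimal sketch - the patterns and the purchase-order format are illustrative assumptions; a real policy would be customer-specific:

```python
import re

# Sketch of policy-driven redaction before export: mask e-mail addresses
# and purchase-order numbers in a prompt attribute. Patterns are
# illustrative; a real redaction policy is customer-specific.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b45\d{8}\b"), "<PO_NUMBER>"),  # assumed SAP PO number style
]

def redact(text: str) -> str:
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Explain why PO 4500012345 failed for jan.kowalski@example.com"
print(redact(prompt))
```

Running redaction inside the telemetry pipeline means sensitive values never leave the agent runtime, regardless of which backend receives the spans.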

Provides audit evidence

Keeps a traceable operational record of what the agent did, which tools it called, what failed, and what result was produced.

/HOW IT WORKS

Four steps to observable SAP AI agents.

01

Instrument

Codemine adds OpenTelemetry-based instrumentation to the agent runtime. Auto-instrumentation captures LLM calls, model names, latency, token counts, and runtime signals. Custom spans add the SAP and business context that generic instrumentation cannot infer.

02

Enrich

Every agent invocation is wrapped in a parent span containing business context: agent name, tenant, conversation ID, user group, process, operation type, environment, and customer-specific attributes. Child spans capture LLM calls, retrieval steps, tool executions, and errors.
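The enrichment pattern can be sketched in plain Python - dicts stand in for OpenTelemetry spans here, and the attribute names are illustrative, not the product's fixed schema:

```python
# Sketch of the enrichment pattern: one parent span per agent invocation
# carrying business context, child spans for each technical step.
# Plain dicts stand in for OpenTelemetry spans; attribute names are
# illustrative assumptions.
def invoke_agent(user_id, tenant, conversation_id):
    parent = {
        "name": "agent.invoke",
        "attributes": {
            "agent.name": "aif-analysis-agent",
            "tenant.id": tenant,
            "conversation.id": conversation_id,
            "user.id": user_id,
            "sap.environment": "prod",
        },
        "children": [],
    }
    # Each technical step becomes a child span under the same trace,
    # so it inherits the business context of the parent.
    parent["children"].append(
        {"name": "llm.chat", "attributes": {"gen_ai.request.model": "sap/gpt-4.1"}})
    parent["children"].append(
        {"name": "tool.sap_api", "attributes": {"operation": "read_aif_errors"}})
    return parent

run = invoke_agent("u-101", "tenant-7", "conv-42")
print(run["attributes"]["tenant.id"], [c["name"] for c in run["children"]])
```

Because every child hangs off the enriched parent, a single Context ID filter surfaces both the business context and the technical steps of a run.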

03

Stream

Telemetry is exported through an OTLP-compatible pipeline to the selected backend or Codemine dashboard layer. The data model follows modern GenAI observability patterns, so agent runs can be queried, filtered, grouped, and compared over time.
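In an OpenTelemetry setup, pointing the pipeline at a backend is typically just standard exporter environment variables. A sketch - the endpoint and token are placeholders for whatever collector the landscape uses:

```shell
# Standard OpenTelemetry exporter settings; endpoint and header values
# below are placeholders, not real Codemine endpoints.
export OTEL_SERVICE_NAME="aif-analysis-agent"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel-collector.example.com:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <token>"
```

Because these variables are part of the OpenTelemetry specification, the same agent can stream to a Codemine dashboard layer or an existing observability backend without code changes.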

04

Operate

SAP, integration, AI, and support teams use the dashboard to debug failures, control cost, improve agent design, prepare audits, and make decisions based on real production behavior - not assumptions from a prototype.

/BEFORE AND AFTER

From black-box AI to operational control.

Agent failure analysis
Before: Logs scattered across AI runtime, SAP APIs, middleware, and tools
After: One trace showing the full execution path

LLM visibility
Before: Model call hidden behind the agent
After: Model, provider, latency, tokens, status, and error type visible

Tool execution
Before: Hard to prove what the agent called
After: Every SAP, MCP, API, or workflow call captured as a span

Cost control
Before: Token usage visible only at provider or account level
After: Cost and token analytics by agent, tenant, model, and process

Governance
Before: Difficult to explain agent behavior after the fact
After: Traceable record of every agent run and action

Optimization
Before: Based on anecdotal feedback
After: Based on production telemetry and recurring patterns

/USE CASES

Built for real SAP AI operations.

Joule and BTP agent monitoring

Observe custom SAP AI agents, Joule-connected skills, BTP multi-agent runtimes, and agent workflows across environments.

SAP interface and integration agents

Track agents that analyze AIF, IDoc, API, CPI, file, and middleware exceptions - including the steps they take to classify, explain, and route failures.

MCP and external tool governance

Monitor MCP servers and enterprise tool calls used by Claude, Copilot, Joule-connected agents, or custom SAP AI assistants.

AI cost governance

Identify expensive agents, prompts, tenants, and model choices before AI usage scales across the enterprise.

Production readiness for AI pilots

Turn a PoC into an operable product by adding telemetry, failure analysis, audit trails, and support visibility before rollout.

/ARCHITECTURE

Telemetry architecture designed for SAP landscapes.

Agent runtime

Joule Studio, SAP BTP, custom Python/Node.js agent, LangGraph, CAP service, Kyma workload, or external agent runtime.

Instrumentation layer

OpenTelemetry spans for agent invocation, LLM calls, retrieval, tool execution, workflow steps, SAP API calls, and errors.

Context layer

Tenant ID, user ID, conversation ID, agent name, SAP system, environment, business process, object ID, operation type, and custom customer attributes.

Export layer

OTLP-compatible export to the selected collector, observability backend, or Codemine telemetry storage.

Dashboard layer

Operational cockpit for AI agent runs, traces, failures, latency, token usage, cost, adoption, and remediation.

Security layer

Prompt and response redaction, data minimization, tenant-aware filtering, role-based access, and configurable retention.

/TECHNOLOGY

Built on modern AI observability patterns.

SAP BTP · SAP AI Core · OpenTelemetry · OTLP · Joule Studio · LangGraph · MCP · SAP AIF · SAP Integration Suite · S/4HANA · CAP · Kyma · Python · Node.js · SAP Cloud SDK

/IMPLEMENTATION OPTIONS

Start with one agent. Scale to the whole landscape.

01

Telemetry readiness assessment

A short assessment of the current agent architecture, logging, privacy constraints, SAP context, runtime, and available observability tools.

Outcome

Telemetry blueprint and implementation roadmap.

02

Pilot dashboard for one agent

Instrumentation and dashboard setup for one productive or near-productive AI agent.

Outcome

Trace view, LLM call monitoring, token analytics, error classification, and basic operational cockpit.

03

Enterprise AI observability layer

A reusable telemetry architecture for multiple SAP AI agents, MCP servers, custom tools, and business processes.

Outcome

Standard span model, context model, dashboard templates, alerts, retention policy, and governance setup.

/FREQUENTLY ASKED

Common questions.

Is this only for Joule agents?

No. The dashboard can be used for Joule-connected agents, BTP agents, custom Python or Node.js agents, LangGraph workflows, MCP tools, and SAP automation agents.

Does it store prompts and responses?

It can, but it does not have to. Prompt and response capture should follow customer policy. Sensitive data can be masked, redacted, sampled, or excluded.

Can it work with existing observability tools?

Yes. The telemetry model is OpenTelemetry-based and can be exported through OTLP-compatible pipelines. Codemine can integrate it with an existing observability backend or provide a dedicated dashboard layer.

Why not just use standard logs?

Logs tell you what one component wrote. Traces show the full execution path across the agent, model, retrieval, tools, SAP APIs, and downstream systems.

What makes this SAP-specific?

The dashboard adds SAP context that generic LLM observability tools do not know: tenant, SAP system, business process, interface, business object, user group, and operation type.

/GET STARTED

Ready to operate AI agents with SAP-grade control?

AI agents are moving from prototypes to real enterprise processes. Codemine helps you make them observable, auditable, and reliable before they become critical infrastructure.