Prerequisites
- Odigos v1.25.0 or newer installed on the cluster
- The workload you want to profile must already be a Source
Step 1: Enable Profiling
Profiling is an opt-in feature controlled by a single Helm value. Set `profiling.enabled=true` when installing or upgrading Odigos:
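For example, with a Helm-based install (a sketch: the release name `odigos`, chart reference `odigos/odigos`, and namespace `odigos-system` are assumptions here; match them to your own install):

```shell
# Upgrade an existing Odigos release with profiling enabled.
# Release name, chart reference, and namespace are assumptions -- adjust to your setup.
helm upgrade odigos odigos/odigos \
  --namespace odigos-system \
  --set profiling.enabled=true \
  --reuse-values
```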
- The eBPF profiler receiver is activated on every Data Collector
- A `profiles` pipeline is enabled in the Odigos components; as soon as profile samples start to arrive, the Odigos UI buffers them and builds the flame graph
- You can also export profiling data to any third-party Destination that supports viewing profiling data
Profiling does not replace or interfere with your existing traces, metrics, or logs pipelines. It runs as a separate pipeline inside the same collector processes.
Step 2: View Profiles in the Odigos UI
Once profiling is enabled on the cluster, you can inspect profiles for any Source directly from the Odigos UI.
Open the Profiling tab
Navigate to Sources in the Odigos UI and click on the workload you want to investigate. This opens the source drawer. Inside the source drawer, click the Profiling tab. Click Enable Profiling to open a slot for this workload. Odigos will start buffering profile data as it arrives from the node.

Inspect the flame graph
Send traffic to your service or wait for background activity. The eBPF profiler samples continuously at the kernel level; higher CPU activity means more profiles are collected for that workload.
The flame graph updates as profile data accumulates in the buffer. Wider bars represent more CPU time. Click any frame to zoom in.
Inspect the symbols table
The symbols table lists every function captured in the profile with its Self time (CPU spent exclusively in that function) and Total time (including all callees). Use it alongside the flame graph to quickly rank the top CPU consumers without navigating the call tree visually.
Closing the Profiling tab does not immediately destroy the slot. The slot and its buffered data persist until the TTL expires or the UI pod restarts. This is intentional: it lets you reopen the tab and see the same data during an ongoing investigation.
Advanced: Buffer Tuning
The in-memory profile store is sized from environment variables read by the UI pod at startup.

| Environment variable | Default | Description |
|---|---|---|
| `PROFILES_MAX_SLOTS` | 24 | Maximum number of concurrently active profiling slots (one per workload). When the limit is reached, the oldest slot is evicted (LRU). |
| `PROFILES_SLOT_TTL_SECONDS` | 120 | Seconds of inactivity after which a slot is swept by the background cleanup loop. |
| `PROFILES_SLOT_MAX_BYTES` | 8388608 (8 MiB) | Maximum bytes of raw OTLP profile chunks retained per slot. Older chunks are dropped when this cap is reached. |
| `PROFILES_CLEANUP_INTERVAL_SECONDS` | 15 | How often the background TTL sweep runs inside the UI process. |
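These defaults can be overridden by setting the variables on the UI workload. A minimal sketch (the Deployment name `odigos-ui` and namespace `odigos-system` are assumptions; verify the actual names in your cluster):

```shell
# Raise the slot TTL to 5 minutes and the per-slot buffer to 16 MiB.
# Deployment name and namespace are assumptions for this sketch.
kubectl set env deployment/odigos-ui \
  -n odigos-system \
  PROFILES_SLOT_TTL_SECONDS=300 \
  PROFILES_SLOT_MAX_BYTES=16777216
```

Note that `kubectl set env` triggers a rollout of the UI pod, which discards any currently buffered profile data.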
OTLP Exporter Tuning
If you need to adjust retry behaviour or compression on the profiling OTLP pipelines (node → gateway, gateway → UI), use the `profiling.exporter.*` values:
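As a hedged sketch only: the exact subkeys under `profiling.exporter.*` are assumptions here, modelled on standard OTLP exporter settings; consult the chart's values reference for the real key names.

```shell
# Hypothetical example: enable gzip compression and retries on the profiling
# OTLP exporters. The subkey names are assumptions, not confirmed by these docs.
helm upgrade odigos odigos/odigos \
  --namespace odigos-system \
  --set profiling.exporter.compression=gzip \
  --set profiling.exporter.retryOnFailure.enabled=true \
  --reuse-values
```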