
Overview

The Analytics dashboard gives you aggregate insights about your MCP server usage. Unlike Traffic Logs (which show individual requests), Analytics shows trends and summaries over time. Access account-level analytics from Analytics in the dashboard sidebar. Per-server analytics are available on the Analytics tab of any server detail page.

Time range presets

All charts and KPI metrics update to reflect the selected time range:
Preset         Data shown
24H            Hourly breakdowns for the past 24 hours
7D (default)   Daily breakdowns for the past 7 days
30D            Daily breakdowns for the past 30 days
90D            Daily breakdowns for the past 90 days
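As a rough sketch, the preset-to-granularity mapping above could be modelled like this (the names and structure here are illustrative, not the dashboard's actual API):

```python
from datetime import timedelta

# Illustrative mapping of the time range presets to the window each one
# covers and the bucket size its charts use (hourly for 24H, daily otherwise).
PRESETS = {
    "24H": {"window": timedelta(hours=24), "bucket": timedelta(hours=1)},
    "7D":  {"window": timedelta(days=7),   "bucket": timedelta(days=1)},  # default
    "30D": {"window": timedelta(days=30),  "bucket": timedelta(days=1)},
    "90D": {"window": timedelta(days=90),  "bucket": timedelta(days=1)},
}

def bucket_count(preset: str) -> int:
    """Number of data points a chart shows for a given preset."""
    p = PRESETS[preset]
    return int(p["window"] / p["bucket"])
```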

Account-level analytics

The account-level view aggregates across all your servers.
[Image: Account-level analytics dashboard]

KPI cards

Four headline metrics are shown at the top:
Metric           What it measures
Total Requests   Cumulative tool calls in the selected period, with % change vs prior period
Error Rate       Percentage of calls that resulted in an error — green below threshold, red above
Avg Latency      Mean execution time across all tools and servers, with % change vs prior period
Active Servers   Count of servers that received at least one request in the period
[Image: KPI summary cards]

Daily (or Hourly) request trend

An area chart showing total requests and errors over time.
  • 24H preset: hourly data points
  • All other presets: daily data points
Hovering over the chart shows the exact values for that day or hour. Use this to spot sudden spikes, drops, or error surges.
[Image: Daily request trend area chart]

Latency trend

A multi-line chart showing:
  • Avg latency (blue) — mean execution time
  • P95 latency (yellow dashed) — 95th percentile; the worst case for 95% of requests
  • P99 latency (red dashed) — 99th percentile; the worst case for 99% of requests
The Y-axis scales automatically — values under 1000ms are shown in ms; values over 1000ms are shown in seconds.
Watch the P95 line, not just the average. A rising P95 with a flat average means a subset of requests is getting slow — often a database query missing an index or a rate-limited upstream API.
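P95 and P99 are order statistics over the latency samples in each bucket. A minimal sketch of both the percentile calculation (nearest-rank method; the dashboard's exact interpolation may differ) and the axis formatting rule described above:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: smallest value covering at least p% of samples."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based rank
    return ordered[max(rank, 1) - 1]

def format_latency(ms: float) -> str:
    """Match the chart's axis rule: under 1000ms shown in ms, otherwise in seconds."""
    return f"{ms:.0f}ms" if ms < 1000 else f"{ms / 1000:.1f}s"

latencies_ms = [120, 95, 110, 3400, 105, 130, 98, 102, 115, 99]
avg = sum(latencies_ms) / len(latencies_ms)   # 437.4, dragged up by one outlier
p95 = percentile(latencies_ms, 95)            # 3400, the slow tail the mean hides
```

This is the pattern the tip warns about: most requests sit near 100ms, yet the average reads 437ms and the tail percentile reads 3.4s.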

Top tools

A horizontal bar chart showing your top 8 most-called tools across all servers, ranked by request count. This is the clearest signal of which tools the AI is actually using. In practice you’ll usually find that 80% of calls go to 2–3 tools — invest optimisation effort there.
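The ranking itself is a plain frequency count. A sketch with `collections.Counter` (the tool names and counts are invented for illustration):

```python
from collections import Counter

# One entry per tool call, as you might pull from traffic logs (invented data).
calls = (["search_docs"] * 620 + ["get_page"] * 250 + ["list_spaces"] * 80
         + ["create_page"] * 30 + ["delete_page"] * 20)

counts = Counter(calls)
top_8 = counts.most_common(8)  # ranked (tool, count) pairs, as in the chart

total = sum(counts.values())
top_3_share = sum(n for _, n in counts.most_common(3)) / total
# Here the top 3 tools account for 95% of all calls: the usual Pareto shape.
```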

Top servers

A horizontal bar chart showing your top 8 most-called servers by request count.

Geographic distribution

A horizontal bar chart showing the top 10 countries by request origin (ISO 2-letter country codes). Useful for understanding where your users are and for diagnosing region-specific latency issues.

Server-level analytics

Click any server from the Top servers chart, or open a server’s Analytics tab, to see analytics scoped to that server.
[Image: Server-level analytics tab]

Server KPI cards

Metric            What it measures
Total Requests    Calls received by this server in the period
Error Rate        Percentage of calls that failed
Avg Latency       Mean execution time for this server’s tools
Max Concurrency   Peak number of simultaneous active sessions
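Max Concurrency is a peak-overlap count over session intervals. A minimal sweep-line sketch (the session tuples are invented; the product's internal calculation may differ):

```python
def max_concurrency(sessions: list[tuple[float, float]]) -> int:
    """Peak number of sessions active at the same instant.

    Each session is a (start, end) pair; emit a +1 event at start and a -1
    event at end, then walk the events in time order (ends before starts
    on ties, so back-to-back sessions don't count as overlapping).
    """
    events = []
    for start, end in sessions:
        events.append((start, 1))
        events.append((end, -1))
    events.sort(key=lambda e: (e[0], e[1]))  # -1 sorts before +1 on ties
    peak = active = 0
    for _, delta in events:
        active += delta
        peak = max(peak, active)
    return peak

# Three sessions; the first two overlap between t=5 and t=10:
assert max_concurrency([(0, 10), (5, 15), (20, 30)]) == 2
```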

Request trend

Same dual-line area chart (requests + errors) as account level, scoped to this server.

Tool breakdown

A horizontal bar chart showing the distribution of calls across this server’s tools — up to the top 10. Use this to:
  • Find tools with zero calls (bad description? unused feature?)
  • Understand the AI’s behaviour: a tool called unexpectedly often may have a too-broad description
[Image: Tool breakdown chart for a single server]

Session trend

An area chart of unique client sessions over time. A session is a single AI client connection.
  • Rising sessions = more clients connecting → growing adoption
  • Flat sessions + rising requests = each session is more active → more intensive use per user
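The second pattern is easy to quantify as requests per session per bucket. A sketch with invented daily numbers, as you might read them off the two trend charts:

```python
# Daily totals from the request and session trend charts (invented numbers).
requests_per_day = [400, 420, 600, 810]
sessions_per_day = [100, 105, 100, 98]

# Flat sessions with rising requests means per-session intensity is climbing:
# roughly 4.0 calls per session on day one, 8.3 by day four.
intensity = [r / s for r, s in zip(requests_per_day, sessions_per_day)]
```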

Traffic by hour

A bar chart showing average request count by hour of day (0–23, in your local timezone). Use this to understand peak usage hours and plan maintenance windows.
[Image: Traffic by hour bar chart]
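Bucketing raw timestamps into hour-of-day counts is a small group-by. A sketch using the standard library (the timezone and timestamps are chosen for illustration):

```python
from collections import defaultdict
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def traffic_by_hour(timestamps_utc: list[datetime], tz: str) -> dict[int, int]:
    """Count requests per local hour of day (0-23)."""
    local = ZoneInfo(tz)
    counts: dict[int, int] = defaultdict(int)
    for ts in timestamps_utc:
        counts[ts.astimezone(local).hour] += 1
    return dict(counts)

stamps = [datetime(2024, 5, 1, 13, 5, tzinfo=timezone.utc),
          datetime(2024, 5, 1, 13, 40, tzinfo=timezone.utc),
          datetime(2024, 5, 1, 22, 10, tzinfo=timezone.utc)]
# In UTC+2 (e.g. Europe/Berlin in summer), 13:05 UTC lands in the 15:00
# bucket, and 22:10 UTC rolls over into hour 0 of the next local day.
by_hour = traffic_by_hour(stamps, "Europe/Berlin")
```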

Using analytics effectively

Identify optimisation targets: Look at P95 latency. If it’s over 3–5 seconds for any time period, your tools are too slow for a good user experience. Check the tool breakdown to find the culprit, then investigate query indexes, pagination, or caching.

Find unused tools: In the tool breakdown, zero-call tools over 30 days are candidates for cleanup or improved descriptions. The AI may not be calling them because the description doesn’t match what users ask for.

Debug sudden error spikes: Find the time window when errors spiked in the trend chart, then open Traffic Logs filtered to that window to see what went wrong.
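For the error-spike workflow, finding the worst window programmatically is one pass over the trend series. A sketch with invented hourly data:

```python
# Hourly (requests, errors) pairs read off the trend chart (invented numbers).
trend = [(500, 5), (480, 4), (510, 160), (495, 6)]

def worst_window(series: list[tuple[int, int]]) -> int:
    """Index of the bucket with the highest error rate."""
    return max(range(len(series)),
               key=lambda i: series[i][1] / series[i][0] if series[i][0] else 0)

spike = worst_window(trend)
# Bucket 2 has a ~31% error rate; that's the window to filter Traffic Logs to.
```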