## What
Add an optional Prometheus `/metrics` HTTP endpoint to `act_runner` so operators can observe runner health, polling behavior, job outcomes, and RPC latency without scraping logs.
New surface:
- `internal/pkg/metrics/metrics.go` — metric definitions, custom `Registry`, static Go/process collectors, label constants, `ResultToStatusLabel` helper.
- `internal/pkg/metrics/server.go` — hardened `http.Server` serving `/metrics` and `/healthz` with Slowloris-safe timeouts (`ReadHeaderTimeout` 5s, `ReadTimeout`/`WriteTimeout` 10s, `IdleTimeout` 60s) and a 5s graceful shutdown.
- `daemon.go` wires it up behind `cfg.Metrics.Enabled` (disabled by default).
- `poller.go` / `reporter.go` / `runner.go` instrument their existing hot paths with counters/histograms/gauges — no behavior change.
Metrics exported (namespace `act_runner_`):
| Subsystem | Metric | Type | Labels |
|---|---|---|---|
| — | `info` | Gauge | `version`, `name` |
| — | `capacity`, `uptime_seconds` | Gauge | — |
| `poll` | `fetch_total` | Counter | `result` |
| `poll` | `fetch_duration_seconds`, `backoff_seconds` | Histogram / Gauge | — |
| `client` | `errors_total` | Counter | `method` |
| `job` | `total` | Counter | `status` |
| `job` | `duration_seconds`, `running`, `capacity_utilization_ratio` | Histogram / GaugeFunc | — |
| `report` | `log_total`, `state_total` | Counter | `result` |
| `report` | `log_duration_seconds`, `state_duration_seconds` | Histogram | — |
| `report` | `log_buffer_rows` | Gauge | — |
| — | `go_*`, `process_*` | standard collectors | — |
All label values are predefined constants — **no high-cardinality labels** (no task IDs, repo URLs, branches, tokens, or secrets) so scraping is safe and bounded.
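For orientation, a scrape yields series shaped like the following (values and label contents are illustrative placeholders, not real output):

```
act_runner_info{name="runner-1",version="x.y.z"} 1
act_runner_capacity 4
act_runner_poll_fetch_total{result="empty"} 128
act_runner_job_total{status="success"} 42
```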
## Why
Teams self-hosting Gitea + `act_runner` at scale need to answer basic SRE questions that are currently invisible:
- How often are RPCs failing? Which RPC? (`act_runner_client_errors_total`)
- Are runners saturated? (`act_runner_job_capacity_utilization_ratio`, `act_runner_job_running`)
- How long do jobs take? (`act_runner_job_duration_seconds`)
- Is polling backing off? (`act_runner_poll_backoff_seconds`, `act_runner_poll_fetch_total{result="error"}`)
- Are log/state reports slow? (`act_runner_report_{log,state}_duration_seconds`)
- Is the log buffer draining? (`act_runner_report_log_buffer_rows`)
Today operators have to grep logs. This PR makes all of the above first-class metrics so they can feed dashboards and alerts (`rate(act_runner_client_errors_total[5m]) > 0.1`, capacity saturation alerts, etc.).
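As a starting point, those queries can be wired into Prometheus alerting rules. This fragment is illustrative only; the rule names and thresholds are placeholders, not part of this PR:

```yaml
groups:
  - name: act_runner
    rules:
      - alert: ActRunnerRPCErrors
        expr: sum(rate(act_runner_client_errors_total[5m])) by (method) > 0.1
        for: 10m
      - alert: ActRunnerSaturated
        expr: act_runner_job_capacity_utilization_ratio >= 1
        for: 15m
```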
The endpoint is **disabled by default** and binds to `127.0.0.1:9101` when enabled, so it's opt-in and safe for existing deployments.
## How
### Config
```yaml
metrics:
  enabled: false          # opt-in
  addr: 127.0.0.1:9101    # change to 0.0.0.0:9101 only behind a reverse proxy
```
`config.example.yaml` documents both fields plus a security note about binding externally without auth.
### Wiring
1. `daemon.go` calls `metrics.Init()` (guarded by `sync.Once`), sets `act_runner_info`, `act_runner_capacity`, registers uptime + running-jobs GaugeFuncs, then starts the server goroutine with the daemon context — it shuts down cleanly on `ctx.Done()`.
2. `poller.fetchTask` observes RPC latency / result / error counters. `DeadlineExceeded` (long-poll idle) is treated as an empty result and **not** observed into the histogram so the 5s timeout doesn't swamp the buckets.
3. `poller.pollOnce` reports `poll_backoff_seconds` using the pre-jitter base interval (the true backoff level), and only when it changes — prevents noisy no-op gauge updates at the `FetchIntervalMax` plateau.
4. `reporter.ReportLog` / `ReportState` record duration histograms and success/error counters; `log_buffer_rows` is updated only when the value changes, guarded by the already-held `clientM`.
5. `runner.Run` observes `job_duration_seconds` and increments `job_total` by outcome via `metrics.ResultToStatusLabel`.
### Safety / security review
- All timeouts set; Slowloris-safe.
- Custom `prometheus.NewRegistry()` — no global registration side-effects.
- No sensitive data in labels (reviewed every instrumentation site).
- Single new dependency: `github.com/prometheus/client_golang v1.23.2`.
- Endpoint is unauthenticated by design and documented as such; default localhost bind mitigates exposure. Operators exposing externally should front it with a reverse proxy.
## Verification
### Unit tests
```bash
go build ./...
go vet ./...
go test ./...
```
### Manual smoke test
1. Enable metrics in `config.yaml`:
```yaml
metrics:
  enabled: true
  addr: 127.0.0.1:9101
```
2. Start the runner against a Gitea instance: `./act_runner daemon`.
3. Scrape the endpoint:
```bash
curl -s http://127.0.0.1:9101/metrics | grep '^act_runner_'
curl -s http://127.0.0.1:9101/healthz   # → ok
```
4. Confirm the static series appear immediately: `act_runner_info`, `act_runner_capacity`, `act_runner_uptime_seconds`, `act_runner_job_running`, `act_runner_job_capacity_utilization_ratio`.
5. Trigger a workflow and confirm counters increment: `act_runner_poll_fetch_total{result="task"}`, `act_runner_job_total{status="success"}`, `act_runner_report_log_total{result="success"}`.
6. Leave the runner idle and confirm `act_runner_poll_backoff_seconds` settles (and does **not** churn on every poll).
7. Ctrl-C and confirm a clean "metrics server shutdown" log line (no port-in-use error on restart within 5s).
### Prometheus integration
Add to `prometheus.yml`:
```yaml
scrape_configs:
  - job_name: act_runner
    static_configs:
      - targets: ['127.0.0.1:9101']
```
Sample alert to try:
```
sum(rate(act_runner_client_errors_total[5m])) by (method) > 0.1
```
## Out of scope (follow-ups)
- TLS and auth on the metrics endpoint (mitigated today by localhost default; add when operators need external scraping).
- Per-task labels (intentionally avoided for cardinality safety).
---
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/820
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi Wu <appleboy.tw@gmail.com>
`internal/pkg/metrics/metrics.go` · 217 lines · 6.7 KiB · Go
```go
// Copyright 2026 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package metrics

import (
	"sync"
	"time"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/collectors"
)

// Namespace is the Prometheus namespace for all act_runner metrics.
const Namespace = "act_runner"

// Label value constants for Prometheus metrics.
// Using constants prevents typos from silently creating new time-series.
//
// LabelResult* values are used on metrics with label key "result" (RPC outcomes).
// LabelStatus* values are used on metrics with label key "status" (job outcomes).
const (
	LabelResultTask    = "task"
	LabelResultEmpty   = "empty"
	LabelResultError   = "error"
	LabelResultSuccess = "success"

	LabelMethodFetchTask  = "FetchTask"
	LabelMethodUpdateLog  = "UpdateLog"
	LabelMethodUpdateTask = "UpdateTask"

	LabelStatusSuccess   = "success"
	LabelStatusFailure   = "failure"
	LabelStatusCancelled = "cancelled"
	LabelStatusSkipped   = "skipped"
	LabelStatusUnknown   = "unknown"
)

// rpcDurationBuckets covers the expected latency range for short-running
// UpdateLog / UpdateTask RPCs. FetchTask uses its own buckets (it has a 10s tail).
var rpcDurationBuckets = []float64{0.01, 0.05, 0.1, 0.25, 0.5, 1, 2, 5}

// ResultToStatusLabel maps a runnerv1.Result to the "status" label value used on job metrics.
func ResultToStatusLabel(r runnerv1.Result) string {
	switch r {
	case runnerv1.Result_RESULT_SUCCESS:
		return LabelStatusSuccess
	case runnerv1.Result_RESULT_FAILURE:
		return LabelStatusFailure
	case runnerv1.Result_RESULT_CANCELLED:
		return LabelStatusCancelled
	case runnerv1.Result_RESULT_SKIPPED:
		return LabelStatusSkipped
	default:
		return LabelStatusUnknown
	}
}

var (
	RunnerInfo = prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Namespace: Namespace,
		Name:      "info",
		Help:      "Runner metadata. Always 1. Labels carry version and name.",
	}, []string{"version", "name"})

	RunnerCapacity = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: Namespace,
		Name:      "capacity",
		Help:      "Configured maximum concurrent jobs.",
	})

	PollFetchTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "poll",
		Name:      "fetch_total",
		Help:      "Total number of FetchTask RPCs by result (task, empty, error).",
	}, []string{"result"})

	PollFetchDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: Namespace,
		Subsystem: "poll",
		Name:      "fetch_duration_seconds",
		Help:      "Latency of FetchTask RPCs, excluding expected long-poll timeouts.",
		Buckets:   []float64{0.01, 0.05, 0.1, 0.25, 0.5, 1, 2, 5, 10},
	})

	PollBackoffSeconds = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: Namespace,
		Subsystem: "poll",
		Name:      "backoff_seconds",
		Help:      "Last observed polling backoff interval. With Capacity > 1, reflects whichever worker wrote last.",
	})

	JobsTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "job",
		Name:      "total",
		Help:      "Total jobs processed by status (success, failure, cancelled, skipped, unknown).",
	}, []string{"status"})

	JobDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: Namespace,
		Subsystem: "job",
		Name:      "duration_seconds",
		Help:      "Duration of job execution from start to finish.",
		Buckets:   prometheus.ExponentialBuckets(1, 2, 14), // 1s to ~4.5h
	})

	ReportLogTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "log_total",
		Help:      "Total UpdateLog RPCs by result (success, error).",
	}, []string{"result"})

	ReportLogDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "log_duration_seconds",
		Help:      "Latency of UpdateLog RPCs.",
		Buckets:   rpcDurationBuckets,
	})

	ReportStateTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "state_total",
		Help:      "Total UpdateTask (state) RPCs by result (success, error).",
	}, []string{"result"})

	ReportStateDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "state_duration_seconds",
		Help:      "Latency of UpdateTask RPCs.",
		Buckets:   rpcDurationBuckets,
	})

	ReportLogBufferRows = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "log_buffer_rows",
		Help:      "Current number of buffered log rows awaiting send.",
	})

	ClientErrors = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "client",
		Name:      "errors_total",
		Help:      "Total client RPC errors by method.",
	}, []string{"method"})
)

// Registry is the custom Prometheus registry used by the runner.
var Registry = prometheus.NewRegistry()

var initOnce sync.Once

// Init registers all static metrics and the standard Go/process collectors.
// Safe to call multiple times; only the first call has effect.
func Init() {
	initOnce.Do(func() {
		Registry.MustRegister(
			collectors.NewGoCollector(),
			collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
			RunnerInfo, RunnerCapacity,
			PollFetchTotal, PollFetchDuration, PollBackoffSeconds,
			JobsTotal, JobDuration,
			ReportLogTotal, ReportLogDuration,
			ReportStateTotal, ReportStateDuration, ReportLogBufferRows,
			ClientErrors,
		)
	})
}

// RegisterUptimeFunc registers a GaugeFunc that reports seconds since startTime.
func RegisterUptimeFunc(startTime time.Time) {
	Registry.MustRegister(prometheus.NewGaugeFunc(
		prometheus.GaugeOpts{
			Namespace: Namespace,
			Name:      "uptime_seconds",
			Help:      "Seconds since the runner daemon started.",
		},
		func() float64 { return time.Since(startTime).Seconds() },
	))
}

// RegisterRunningJobsFunc registers GaugeFuncs for the running job count and
// capacity utilisation ratio, evaluated lazily at Prometheus scrape time.
func RegisterRunningJobsFunc(countFn func() int64, capacity int) {
	capF := float64(capacity)
	Registry.MustRegister(prometheus.NewGaugeFunc(
		prometheus.GaugeOpts{
			Namespace: Namespace,
			Subsystem: "job",
			Name:      "running",
			Help:      "Number of jobs currently executing.",
		},
		func() float64 { return float64(countFn()) },
	))
	Registry.MustRegister(prometheus.NewGaugeFunc(
		prometheus.GaugeOpts{
			Namespace: Namespace,
			Subsystem: "job",
			Name:      "capacity_utilization_ratio",
			Help:      "Ratio of running jobs to configured capacity (0.0-1.0).",
		},
		func() float64 {
			if capF <= 0 {
				return 0
			}
			return float64(countFn()) / capF
		},
	))
}
```