# perf: use single poller with semaphore-based capacity control (#822)

## Background

#819 replaced the shared `rate.Limiter` with per-worker exponential backoff counters to add jitter and adaptive polling. Before #819, the poller used:

```go
limiter := rate.NewLimiter(rate.Every(p.cfg.Runner.FetchInterval), 1)
```

This limiter was **shared across all N polling goroutines with burst=1**, effectively serializing their `FetchTask` calls — so even with `capacity=60`, the runner issued roughly one `FetchTask` per `FetchInterval` total.
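
For intuition, here is a standalone sketch (not the runner's code) of that serialization effect: with burst=1, the shared limiter hands out one token per interval across *all* callers, no matter how many goroutines wait on it.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Shared limiter, burst=1: one token per 200ms for ALL goroutines combined.
	limiter := rate.NewLimiter(rate.Every(200*time.Millisecond), 1)
	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // pretend capacity=4
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			_ = limiter.Wait(context.Background()) // all four queue here
			fmt.Printf("poller %d would FetchTask at %v\n",
				id, time.Since(start).Round(10*time.Millisecond))
		}(i)
	}
	wg.Wait() // prints fire ~200ms apart: one fetch per interval, not four
}
```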

#819 replaced this with independent per-worker `consecutiveEmpty` / `consecutiveErrors` counters. Each goroutine now backs off **independently**, which inadvertently removed the cross-worker serialization. With `capacity=N`, the runner now has N goroutines each polling on their own schedule — a regression from the pre-#819 baseline for any runner with `capacity > 1`.

(Thanks to @ChristopherHX for catching this in review.)

## Problem

With the post-#819 code:

- `capacity=N` maintains **N persistent polling goroutines**, each calling `FetchTask` independently
- At idle, N goroutines each wake up and send a `FetchTask` RPC per `FetchInterval`
- At full load, N goroutines **continue polling** even though no slot is available to run a new task — every one of those RPCs is wasted
- The `Shutdown()` timeout branch has a pre-existing bug: the "non-blocking check" is actually a blocking receive, so `shutdownJobs()` is never reached on timeout

## Real-World Impact: 3 Runners × capacity=60

Current production environment: 3 runners each with `capacity=60`.

| Metric | Post-#819 (current) | This PR | Reduction |
|--------|---------------------|---------|-----------|
| Polling goroutines (total) | 3 × 60 = **180** | 3 × 1 = **3** | **98.3%** (177 fewer) |
| FetchTask RPCs per poll cycle (idle) | **180** | **3** | **98.3%** |
| FetchTask RPCs per poll cycle (full load) | **180** (all wasted) | **0** (blocked on semaphore) | **100%** |
| Concurrent connections to Gitea | **180** | **3** | **98.3%** |
| Backoff state objects | 180 (per-worker) | 3 (one per runner) | Simplified |

### Idle scenario

All 180 goroutines wake up every `FetchInterval`, each sending a `FetchTask` RPC that returns empty. Server handles 180 RPCs per cycle for zero useful work. After this PR: **3 RPCs per cycle** — one per runner.

> Note: pre-#819 idle behavior was already ~3 RPCs/cycle due to the shared `rate.Limiter`. This PR restores that property while also addressing the full-load case below.

### Full-load scenario (all 180 slots occupied)

All 180 goroutines **continue polling** even though no slot is available. Every RPC is wasted. After this PR: all 3 pollers are **blocked on the semaphore** — **zero RPCs** until a task completes.

> This is a scenario neither the pre-#819 shared limiter nor the post-#819 per-worker backoff handles — both still issue `FetchTask` RPCs when no slot is free. The semaphore is the only approach of the three that ties polling to available capacity.

## Why Not Just Revert to `rate.Limiter`?

Reverting would restore the serialized behavior but is not the right long-term fix:

- **`rate.Limiter` has no concept of available capacity.** At full load it still hands out tokens and issues `FetchTask` RPCs that can't be acted on. The semaphore blocks polling entirely in that case — zero wasted RPCs.
- **It composes poorly with adaptive backoff from #819.** A shared limiter and per-worker backoff pull in different directions.
- **N goroutines serializing on a shared limiter means N-1 of them exist only to wait in line.** A single poller expresses the same behavior more directly.

The semaphore approach ties polling to capacity explicitly: `acquire slot → fetch → dispatch → release`. That invariant becomes structural rather than emergent from a rate limiter.

## Solution

Replace N polling goroutines with a **single polling loop** that uses a buffered channel as a semaphore to control concurrent task execution:

```go
// New: poller.go Poll()
sem := make(chan struct{}, p.cfg.Runner.Capacity)
for {
    select {
    case sem <- struct{}{}:       // Acquire slot (blocks at capacity)
    case <-p.pollingCtx.Done():
        return
    }
    task, ok := p.fetchTask(...)  // Single FetchTask RPC
    if !ok {
        <-sem                     // Release slot on empty response
        // backoff...
        continue
    }
    go func(t *runnerv1.Task) {   // Dispatch task
        defer func() { <-sem }()  // Release slot when done
        p.runTaskWithRecover(p.jobsCtx, t)
    }(task)
}
```

The exponential backoff and jitter from #819 are preserved — just driven by a single `workerState` instead of N per-worker states.
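
For reference, a minimal sketch of what a single-state backoff can look like; the field and function names below are assumptions, not the PR's exact code.

```go
// Sketch only: a single shared backoff state with exponential growth and
// jitter, in the spirit of #819. Names are assumptions; see waitBackoff()
// and resetBackoff() in poller.go for the real logic.
package poll

import (
	"math/rand"
	"time"
)

type workerState struct {
	consecutiveEmpty  int
	consecutiveErrors int
}

// nextBackoff doubles the delay per consecutive empty poll, caps it at max,
// then jitters it into [d/2, d] so runners don't synchronize their fetches.
func nextBackoff(base, max time.Duration, attempts int) time.Duration {
	d := base
	for i := 0; i < attempts && d < max; i++ {
		d *= 2
	}
	if d > max {
		d = max
	}
	half := d / 2
	return half + time.Duration(rand.Int63n(int64(half)+1))
}
```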

## Shutdown Bug Fix

Fixed a pre-existing bug in `Shutdown()` where the timeout branch could never force-cancel running jobs:

```go
// Before (BROKEN): blocking receive, shutdownJobs() never reached
_, ok := <-p.done   // blocks until p.done is closed
if !ok { return nil }
p.shutdownJobs()    // dead code when jobs are still running

// After (FIXED): proper non-blocking check
select {
case <-p.done:
    return nil
default:
}
p.shutdownJobs()    // now correctly reached on timeout
```

## Code Changes

| Area | Detail |
|------|--------|
| `Poller.runner` | `*run.Runner` → `TaskRunner` interface (enables mock-based testing; see the interface sketch after this table) |
| `Poll()` | N goroutines → single loop with buffered-channel semaphore |
| `PollOnce()` | Inlined from removed `pollOnce()` |
| `waitBackoff()` | New helper, eliminates duplicated backoff logic |
| `resetBackoff()` | New method on `workerState`, also resets stale `lastBackoff` metric |
| `Shutdown()` | Fixed blocking receive → proper non-blocking select |
| Removed | `poll()`, `pollOnce()` private methods (-2 methods, -42 lines) |
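
A plausible shape for `TaskRunner` (the exact signature is an assumption): the poller only needs the one method it actually calls, which is what makes a mock trivial to substitute.

```go
// Assumed shape, not copied from the PR: a one-method interface satisfied
// by *run.Runner and by test mocks alike.
type TaskRunner interface {
	Run(ctx context.Context, task *runnerv1.Task) error
}
```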

## Test Coverage

Added `TestPoller_ConcurrencyLimitedByCapacity` which verifies:

- With `capacity=3`, at most 3 tasks execute concurrently (`maxConcurrent <= 3`)
- Tasks actually overlap in execution (`maxConcurrent >= 2`)
- `FetchTask` is never called concurrently — confirms single poller (`maxFetchConcur == 1`)
- All 6 tasks complete successfully (`totalCompleted == 6`)
- Mock runner respects context cancellation, enabling shutdown path verification

```
=== RUN   TestPoller_ConcurrencyLimitedByCapacity
--- PASS: TestPoller_ConcurrencyLimitedByCapacity (0.10s)
PASS
ok  	gitea.com/gitea/act_runner/internal/app/poll	0.59s
```
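
The high-water-mark counting such a test relies on can be as simple as the sketch below (names and durations are illustrative, not the PR's test code):

```go
// Illustrative mock (not the PR's exact test code). Each Run bumps an
// in-flight counter, records the high-water mark, then blocks briefly so
// task executions overlap. Honoring ctx.Done() is what lets the test also
// exercise the shutdown path.
package poll

import (
	"context"
	"sync/atomic"
	"time"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
)

type mockRunner struct {
	cur, maxSeen atomic.Int64
}

func (m *mockRunner) Run(ctx context.Context, _ *runnerv1.Task) error {
	n := m.cur.Add(1)
	defer m.cur.Add(-1)
	for { // lock-free high-water mark update
		old := m.maxSeen.Load()
		if n <= old || m.maxSeen.CompareAndSwap(old, n) {
			break
		}
	}
	select {
	case <-time.After(20 * time.Millisecond): // force overlap
	case <-ctx.Done():
		return ctx.Err()
	}
	return nil
}
```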

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Reviewed-on: https://gitea.com/gitea/act_runner/pulls/822
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
Co-authored-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi Wu <appleboy.tw@gmail.com>
2026-04-19 08:10:23 +00:00

## Appendix: `act_runner/internal/pkg/metrics/metrics.go`

```go
// Copyright 2026 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package metrics

import (
	"sync"
	"time"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/collectors"
)

// Namespace is the Prometheus namespace for all act_runner metrics.
const Namespace = "act_runner"

// Label value constants for Prometheus metrics.
// Using constants prevents typos from silently creating new time-series.
//
// LabelResult* values are used on metrics with label key "result" (RPC outcomes).
// LabelStatus* values are used on metrics with label key "status" (job outcomes).
const (
	LabelResultTask    = "task"
	LabelResultEmpty   = "empty"
	LabelResultError   = "error"
	LabelResultSuccess = "success"

	LabelMethodFetchTask  = "FetchTask"
	LabelMethodUpdateLog  = "UpdateLog"
	LabelMethodUpdateTask = "UpdateTask"

	LabelStatusSuccess   = "success"
	LabelStatusFailure   = "failure"
	LabelStatusCancelled = "cancelled"
	LabelStatusSkipped   = "skipped"
	LabelStatusUnknown   = "unknown"
)

// rpcDurationBuckets covers the expected latency range for short-running
// UpdateLog / UpdateTask RPCs. FetchTask uses its own buckets (it has a 10s tail).
var rpcDurationBuckets = []float64{0.01, 0.05, 0.1, 0.25, 0.5, 1, 2, 5}

// ResultToStatusLabel maps a runnerv1.Result to the "status" label value used on job metrics.
func ResultToStatusLabel(r runnerv1.Result) string {
	switch r {
	case runnerv1.Result_RESULT_SUCCESS:
		return LabelStatusSuccess
	case runnerv1.Result_RESULT_FAILURE:
		return LabelStatusFailure
	case runnerv1.Result_RESULT_CANCELLED:
		return LabelStatusCancelled
	case runnerv1.Result_RESULT_SKIPPED:
		return LabelStatusSkipped
	default:
		return LabelStatusUnknown
	}
}

var (
	RunnerInfo = prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Namespace: Namespace,
		Name:      "info",
		Help:      "Runner metadata. Always 1. Labels carry version and name.",
	}, []string{"version", "name"})

	RunnerCapacity = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: Namespace,
		Name:      "capacity",
		Help:      "Configured maximum concurrent jobs.",
	})

	PollFetchTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "poll",
		Name:      "fetch_total",
		Help:      "Total number of FetchTask RPCs by result (task, empty, error).",
	}, []string{"result"})

	PollFetchDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: Namespace,
		Subsystem: "poll",
		Name:      "fetch_duration_seconds",
		Help:      "Latency of FetchTask RPCs, excluding expected long-poll timeouts.",
		Buckets:   []float64{0.01, 0.05, 0.1, 0.25, 0.5, 1, 2, 5, 10},
	})

	PollBackoffSeconds = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: Namespace,
		Subsystem: "poll",
		Name:      "backoff_seconds",
		Help:      "Last observed polling backoff interval in seconds.",
	})

	JobsTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "job",
		Name:      "total",
		Help:      "Total jobs processed by status (success, failure, cancelled, skipped, unknown).",
	}, []string{"status"})

	JobDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: Namespace,
		Subsystem: "job",
		Name:      "duration_seconds",
		Help:      "Duration of job execution from start to finish.",
		Buckets:   prometheus.ExponentialBuckets(1, 2, 14), // 1s to ~4.5h
	})

	ReportLogTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "log_total",
		Help:      "Total UpdateLog RPCs by result (success, error).",
	}, []string{"result"})

	ReportLogDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "log_duration_seconds",
		Help:      "Latency of UpdateLog RPCs.",
		Buckets:   rpcDurationBuckets,
	})

	ReportStateTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "state_total",
		Help:      "Total UpdateTask (state) RPCs by result (success, error).",
	}, []string{"result"})

	ReportStateDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "state_duration_seconds",
		Help:      "Latency of UpdateTask RPCs.",
		Buckets:   rpcDurationBuckets,
	})

	ReportLogBufferRows = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "log_buffer_rows",
		Help:      "Current number of buffered log rows awaiting send.",
	})

	ClientErrors = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "client",
		Name:      "errors_total",
		Help:      "Total client RPC errors by method.",
	}, []string{"method"})
)

// Registry is the custom Prometheus registry used by the runner.
var Registry = prometheus.NewRegistry()

var initOnce sync.Once

// Init registers all static metrics and the standard Go/process collectors.
// Safe to call multiple times; only the first call has effect.
func Init() {
	initOnce.Do(func() {
		Registry.MustRegister(
			collectors.NewGoCollector(),
			collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
			RunnerInfo, RunnerCapacity,
			PollFetchTotal, PollFetchDuration, PollBackoffSeconds,
			JobsTotal, JobDuration,
			ReportLogTotal, ReportLogDuration,
			ReportStateTotal, ReportStateDuration, ReportLogBufferRows,
			ClientErrors,
		)
	})
}

// RegisterUptimeFunc registers a GaugeFunc that reports seconds since startTime.
func RegisterUptimeFunc(startTime time.Time) {
	Registry.MustRegister(prometheus.NewGaugeFunc(
		prometheus.GaugeOpts{
			Namespace: Namespace,
			Name:      "uptime_seconds",
			Help:      "Seconds since the runner daemon started.",
		},
		func() float64 { return time.Since(startTime).Seconds() },
	))
}

// RegisterRunningJobsFunc registers GaugeFuncs for the running job count and
// capacity utilisation ratio, evaluated lazily at Prometheus scrape time.
func RegisterRunningJobsFunc(countFn func() int64, capacity int) {
	capF := float64(capacity)
	Registry.MustRegister(prometheus.NewGaugeFunc(
		prometheus.GaugeOpts{
			Namespace: Namespace,
			Subsystem: "job",
			Name:      "running",
			Help:      "Number of jobs currently executing.",
		},
		func() float64 { return float64(countFn()) },
	))
	Registry.MustRegister(prometheus.NewGaugeFunc(
		prometheus.GaugeOpts{
			Namespace: Namespace,
			Subsystem: "job",
			Name:      "capacity_utilization_ratio",
			Help:      "Ratio of running jobs to configured capacity (0.0-1.0).",
		},
		func() float64 {
			if capF <= 0 {
				return 0
			}
			return float64(countFn()) / capF
		},
	))
}
```
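
To see the package in use, a minimal wiring sketch; the listen address, endpoint path, and capacity value are illustrative:

```go
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus/promhttp"

	"gitea.com/gitea/act_runner/internal/pkg/metrics"
)

func main() {
	metrics.Init() // idempotent: registers collectors exactly once
	metrics.RunnerCapacity.Set(60)
	metrics.RegisterUptimeFunc(time.Now())

	// Serve only the runner's custom registry, not the default one.
	// Address and path are illustrative, not the runner's config.
	http.Handle("/metrics", promhttp.HandlerFor(metrics.Registry, promhttp.HandlerOpts{}))
	_ = http.ListenAndServe(":9100", nil)
}
```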