## What
Add an optional Prometheus `/metrics` HTTP endpoint to `act_runner` so operators can observe runner health, polling behavior, job outcomes, and RPC latency without scraping logs.
New surface:
- `internal/pkg/metrics/metrics.go` — metric definitions, custom `Registry`, static Go/process collectors, label constants, `ResultToStatusLabel` helper.
- `internal/pkg/metrics/server.go` — hardened `http.Server` serving `/metrics` and `/healthz` with Slowloris-safe timeouts (`ReadHeaderTimeout` 5s, `ReadTimeout`/`WriteTimeout` 10s, `IdleTimeout` 60s) and a 5s graceful shutdown (sketched just after this list).
- `daemon.go` wires it up behind `cfg.Metrics.Enabled` (disabled by default).
- `poller.go` / `reporter.go` / `runner.go` instrument their existing hot paths with counters/histograms/gauges — no behavior change.
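For reviewers without the diff open, here is a minimal sketch of that server shape. Only the routes, timeout values, and the 5s shutdown window come from this PR; the function name and wiring are illustrative:
```go
// Sketch only: the function name and wiring are illustrative; the
// routes and timeout values are the ones this PR sets.
package metrics

import (
	"context"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Serve blocks until ctx is cancelled, then shuts the server down
// gracefully within 5 seconds.
func Serve(ctx context.Context, addr string, reg *prometheus.Registry) error {
	mux := http.NewServeMux()
	mux.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
		_, _ = w.Write([]byte("ok"))
	})

	srv := &http.Server{
		Addr:              addr,
		Handler:           mux,
		ReadHeaderTimeout: 5 * time.Second, // Slowloris protection
		ReadTimeout:       10 * time.Second,
		WriteTimeout:      10 * time.Second,
		IdleTimeout:       60 * time.Second,
	}

	errCh := make(chan error, 1)
	go func() { errCh <- srv.ListenAndServe() }()

	select {
	case err := <-errCh:
		return err // listen failure (e.g. port already in use)
	case <-ctx.Done():
		shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		return srv.Shutdown(shutdownCtx)
	}
}
```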
Metrics exported (namespace `act_runner_`):
| Subsystem | Metric | Type | Labels |
|---|---|---|---|
| — | `info` | Gauge | `version`, `name` |
| — | `capacity`, `uptime_seconds` | Gauge | — |
| `poll` | `fetch_total` | Counter | `result` |
| — | `client_errors_total` | Counter | `method` |
| `poll` | `fetch_duration_seconds`, `backoff_seconds` | Histogram / Gauge | — |
| `job` | `total` | Counter | `status` |
| `job` | `duration_seconds`, `running`, `capacity_utilization_ratio` | Histogram / GaugeFunc | — |
| `report` | `log_total`, `state_total` | Counter | `result` |
| `report` | `log_duration_seconds`, `state_duration_seconds` | Histogram | — |
| `report` | `log_buffer_rows` | Gauge | — |
| — | `go_*`, `process_*` | standard collectors | — |
All label values are predefined constants — **no high-cardinality labels** (no task IDs, repo URLs, branches, tokens, or secrets) so scraping is safe and bounded.
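To make the bounded-cardinality claim concrete, here is a hedged sketch of the label constants and the `ResultToStatusLabel` helper. The constant names are assumptions; the `runnerv1.Result` values are the usual Gitea Actions result enum:
```go
// Sketch of the bounded label set; exact constant names are
// assumptions, but the principle is what the PR enforces: every label
// value comes from a closed, predeclared set.
package metrics

import runnerv1 "code.gitea.io/actions-proto-go/runner/v1"

const (
	StatusSuccess   = "success"
	StatusFailure   = "failure"
	StatusCancelled = "cancelled"
	StatusSkipped   = "skipped"
)

// ResultToStatusLabel maps a protobuf job result onto one of the
// predeclared status labels, so series cardinality stays bounded.
func ResultToStatusLabel(r runnerv1.Result) string {
	switch r {
	case runnerv1.Result_RESULT_SUCCESS:
		return StatusSuccess
	case runnerv1.Result_RESULT_FAILURE:
		return StatusFailure
	case runnerv1.Result_RESULT_CANCELLED:
		return StatusCancelled
	case runnerv1.Result_RESULT_SKIPPED:
		return StatusSkipped
	default:
		return "unknown"
	}
}
```
Anything outside the enum collapses into a single `unknown` bucket, so even a protocol change can't explode the series count.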
## Why
Teams self-hosting Gitea + `act_runner` at scale need to answer basic SRE questions that are currently invisible:
- How often are RPCs failing? Which RPC? (`act_runner_client_errors_total`)
- Are runners saturated? (`act_runner_job_capacity_utilization_ratio`, `act_runner_job_running`)
- How long do jobs take? (`act_runner_job_duration_seconds`)
- Is polling backing off? (`act_runner_poll_backoff_seconds`, `act_runner_poll_fetch_total{result="error"}`)
- Are log/state reports slow? (`act_runner_report_{log,state}_duration_seconds`)
- Is the log buffer draining? (`act_runner_report_log_buffer_rows`)
Today operators have to grep logs. This PR makes all of the above first-class metrics so they can feed dashboards and alerts (`rate(act_runner_client_errors_total[5m]) > 0.1`, capacity saturation alerts, etc.).
The endpoint is **disabled by default** and binds to `127.0.0.1:9101` when enabled, so it's opt-in and safe for existing deployments.
## How
### Config
```yaml
metrics:
  enabled: false        # opt-in
  addr: 127.0.0.1:9101  # change to 0.0.0.0:9101 only behind a reverse proxy
```
`config.example.yaml` documents both fields plus a security note about binding externally without auth.
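For orientation, the new section presumably maps onto a small struct in the config package; this shape is inferred from the YAML keys above, not copied from the diff:
```go
// Hypothetical Go shape of the new config section (inferred from the
// YAML keys, not copied from the PR).
type Metrics struct {
	Enabled bool   `yaml:"enabled"` // serve /metrics and /healthz when true
	Addr    string `yaml:"addr"`    // listen address; defaults to 127.0.0.1:9101
}
```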
### Wiring
1. `daemon.go` calls `metrics.Init()` (guarded by `sync.Once`), sets `act_runner_info`, `act_runner_capacity`, registers uptime + running-jobs GaugeFuncs, then starts the server goroutine with the daemon context — it shuts down cleanly on `ctx.Done()`.
2. `poller.fetchTask` observes RPC latency / result / error counters. `DeadlineExceeded` (long-poll idle) is treated as an empty result and **not** observed into the histogram so the 5s timeout doesn't swamp the buckets (see the sketch after this list).
3. `poller.pollOnce` reports `poll_backoff_seconds` using the pre-jitter base interval (the true backoff level), and only when it changes — prevents noisy no-op gauge updates at the `FetchIntervalMax` plateau.
4. `reporter.ReportLog` / `ReportState` record duration histograms and success/error counters; `log_buffer_rows` is updated only when the value changes, guarded by the already-held `clientM`.
5. `runner.Run` observes `job_duration_seconds` and increments `job_total` by outcome via `metrics.ResultToStatusLabel`.
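Here is a sketch of step 2's decision logic. The metric handles and the helper are assumptions (in the PR the handles live in `internal/pkg/metrics`); the `DeadlineExceeded` special case is the behavior described above:
```go
// Sketch of the fetchTask instrumentation (step 2); names are
// illustrative, the DeadlineExceeded handling is the PR's described
// behavior.
package poller

import (
	"context"
	"errors"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Hypothetical handles; in the PR these live in internal/pkg/metrics.
var (
	pollFetchTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "act_runner", Subsystem: "poll", Name: "fetch_total",
	}, []string{"result"})
	clientErrorsTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "act_runner", Name: "client_errors_total",
	}, []string{"method"})
	pollFetchDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: "act_runner", Subsystem: "poll", Name: "fetch_duration_seconds",
	})
)

// observeFetch records the outcome of one FetchTask RPC.
func observeFetch(err error, elapsed time.Duration, gotTask bool) {
	if errors.Is(err, context.DeadlineExceeded) {
		// Idle long-poll: count it as "empty" and skip the histogram so
		// the fixed 5s timeout doesn't swamp the latency buckets.
		pollFetchTotal.WithLabelValues("empty").Inc()
		return
	}
	pollFetchDuration.Observe(elapsed.Seconds())
	switch {
	case err != nil:
		pollFetchTotal.WithLabelValues("error").Inc()
		clientErrorsTotal.WithLabelValues("FetchTask").Inc()
	case gotTask:
		pollFetchTotal.WithLabelValues("task").Inc()
	default:
		pollFetchTotal.WithLabelValues("empty").Inc()
	}
}
```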
### Safety / security review
- All timeouts set; Slowloris-safe.
- Custom `prometheus.NewRegistry()` — no global registration side-effects (sketch below).
- No sensitive data in labels (reviewed every instrumentation site).
- Single new dependency: `github.com/prometheus/client_golang v1.23.2`.
- Endpoint is unauthenticated by design and documented as such; default localhost bind mitigates exposure. Operators exposing externally should front it with a reverse proxy.
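A sketch of the isolated-registry point from the list above: everything is registered on a private `prometheus.Registry`, so importing the package never mutates `prometheus.DefaultRegisterer`. Names are illustrative; the uptime GaugeFunc mirrors `act_runner_uptime_seconds`:
```go
// Sketch only: an isolated registry with the static collectors and an
// uptime GaugeFunc; nothing touches the client_golang globals.
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/collectors"
)

var startTime = time.Now()

func NewRegistry() *prometheus.Registry {
	reg := prometheus.NewRegistry()
	reg.MustRegister(
		collectors.NewGoCollector(),                                       // go_* series
		collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}), // process_* series
		prometheus.NewGaugeFunc(prometheus.GaugeOpts{
			Namespace: "act_runner",
			Name:      "uptime_seconds",
			Help:      "Seconds since the runner daemon started.",
		}, func() float64 { return time.Since(startTime).Seconds() }),
	)
	return reg
}
```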
## Verification
### Unit tests
```bash
go build ./...
go vet ./...
go test ./...
```
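To assert on metric values in a unit test, client_golang's `testutil` helpers work directly against the handles. A hedged example; the `jobTotal` counter here is a stand-in for the PR's real handle:
```go
// Sketch of a metrics unit test; jobTotal is a stand-in handle
// mirroring act_runner_job_total from the table above.
package metrics

import (
	"testing"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/testutil"
)

var jobTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
	Namespace: "act_runner", Subsystem: "job", Name: "total",
}, []string{"status"})

func TestJobTotalIncrements(t *testing.T) {
	jobTotal.WithLabelValues("success").Inc()
	if got := testutil.ToFloat64(jobTotal.WithLabelValues("success")); got != 1 {
		t.Fatalf("act_runner_job_total{status=\"success\"} = %v, want 1", got)
	}
}
```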
### Manual smoke test
1. Enable metrics in `config.yaml`:
   ```yaml
   metrics:
     enabled: true
     addr: 127.0.0.1:9101
   ```
2. Start the runner against a Gitea instance: `./act_runner daemon`.
3. Scrape the endpoint:
   ```bash
   curl -s http://127.0.0.1:9101/metrics | grep '^act_runner_'
   curl -s http://127.0.0.1:9101/healthz   # → ok
   ```
4. Confirm the static series appear immediately: `act_runner_info`, `act_runner_capacity`, `act_runner_uptime_seconds`, `act_runner_job_running`, `act_runner_job_capacity_utilization_ratio`.
5. Trigger a workflow and confirm counters increment: `act_runner_poll_fetch_total{result="task"}`, `act_runner_job_total{status="success"}`, `act_runner_report_log_total{result="success"}`.
6. Leave the runner idle and confirm `act_runner_poll_backoff_seconds` settles (and does **not** churn on every poll).
7. Ctrl-C and confirm a clean "metrics server shutdown" log line (no port-in-use error on restart within 5s).
### Prometheus integration
Add to `prometheus.yml`:
```yaml
scrape_configs:
  - job_name: act_runner
    static_configs:
      - targets: ['127.0.0.1:9101']
```
Sample alert to try:
```
sum(rate(act_runner_client_errors_total[5m])) by (method) > 0.1
```
## Out of scope (follow-ups)
- TLS and auth on the metrics endpoint (mitigated today by localhost default; add when operators need external scraping).
- Per-task labels (intentionally avoided for cardinality safety).
---
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/820
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi Wu <appleboy.tw@gmail.com>
For reference, the full `config.example.yaml` with the new `metrics` section:
```yaml
# Example configuration file, it's safe to copy this as the default config file without any modification.

# You don't have to copy this file to your instance,
# just run `./act_runner generate-config > config.yaml` to generate a config file.

log:
  # The level of logging, can be trace, debug, info, warn, error, fatal
  level: info

runner:
  # Where to store the registration result.
  file: .runner
  # Execute how many tasks concurrently at the same time.
  capacity: 1
  # Extra environment variables to run jobs.
  envs:
    A_TEST_ENV_NAME_1: a_test_env_value_1
    A_TEST_ENV_NAME_2: a_test_env_value_2
  # Extra environment variables to run jobs from a file.
  # It will be ignored if it's empty or the file doesn't exist.
  env_file: .env
  # The timeout for a job to be finished.
  # Please note that the Gitea instance also has a timeout (3h by default) for the job.
  # So the job could be stopped by the Gitea instance if its timeout is shorter than this.
  timeout: 3h
  # The timeout for the runner to wait for running jobs to finish when shutting down.
  # Any running jobs that haven't finished after this timeout will be cancelled.
  shutdown_timeout: 0s
  # Whether to skip verifying the TLS certificate of the Gitea instance.
  insecure: false
  # The timeout for fetching the job from the Gitea instance.
  fetch_timeout: 5s
  # The interval for fetching the job from the Gitea instance.
  fetch_interval: 2s
  # The maximum interval for fetching the job from the Gitea instance.
  # The runner uses exponential backoff when idle, increasing the interval up to this maximum.
  # Set to 0 or the same as fetch_interval to disable backoff.
  fetch_interval_max: 60s
  # The base interval for periodic log flush to the Gitea instance.
  # Logs may be sent earlier if the buffer reaches log_report_batch_size
  # or if log_report_max_latency expires after the first buffered row.
  log_report_interval: 5s
  # The maximum time a log row can wait before being sent.
  # This ensures even a single log line appears on the frontend within this duration.
  # Must be less than log_report_interval to have any effect.
  log_report_max_latency: 3s
  # Flush logs immediately when the buffer reaches this many rows.
  # This ensures bursty output (e.g., npm install) is delivered promptly.
  log_report_batch_size: 100
  # The interval for reporting task state (step status, timing) to the Gitea instance.
  # State is also reported immediately on step transitions (start/stop).
  state_report_interval: 5s
  # The github_mirror of a runner specifies a mirror address for GitHub, used when pulling action repositories.
  # It works when something like `uses: actions/checkout@v4` is used and DEFAULT_ACTIONS_URL is set to github,
  # and github_mirror is not empty. In this case,
  # it replaces https://github.com with the value here, which is useful for some special network environments.
  github_mirror: ''
  # The labels of a runner are used to determine which jobs the runner can run, and how to run them.
  # Like: "macos-arm64:host" or "ubuntu-latest:docker://docker.gitea.com/runner-images:ubuntu-latest"
  # Find more images provided by Gitea at https://gitea.com/gitea/runner-images .
  # If it's empty when registering, you will be asked to input labels.
  # If it's empty when executing `daemon`, the labels in the `.runner` file will be used.
  labels:
    - "ubuntu-latest:docker://docker.gitea.com/runner-images:ubuntu-latest"
    - "ubuntu-24.04:docker://docker.gitea.com/runner-images:ubuntu-24.04"
    - "ubuntu-22.04:docker://docker.gitea.com/runner-images:ubuntu-22.04"

cache:
  # Enable cache server to use actions/cache.
  enabled: true
  # The directory to store the cache data.
  # If it's empty, the cache data will be stored in $HOME/.cache/actcache.
  dir: ""
  # The host of the cache server.
  # It's not the address to listen on, but the address job containers connect to.
  # So 0.0.0.0 is a bad choice, leave it empty to detect automatically.
  host: ""
  # The port of the cache server.
  # 0 means to use a random available port.
  port: 0
  # The external cache server URL. Valid only when enabled is true.
  # If it's specified, act_runner will use this URL as the ACTIONS_CACHE_URL rather than start a server by itself.
  # The URL should generally end with "/".
  external_server: ""

container:
  # Specifies the network to which the container will connect.
  # Could be host, bridge or the name of a custom network.
  # If it's empty, act_runner will create a network automatically.
  network: ""
  # Whether to use privileged mode or not when launching task containers (privileged mode is required for Docker-in-Docker).
  privileged: false
  # Any other options to be used when the container is started (e.g., --add-host=my.gitea.url:host-gateway).
  options:
  # The parent directory of a job's working directory.
  # NOTE: There is no need to add the first '/' of the path as act_runner will add it automatically.
  # If the path starts with '/', the '/' will be trimmed.
  # For example, if the parent directory is /path/to/my/dir, workdir_parent should be path/to/my/dir
  # If it's empty, /workspace will be used.
  workdir_parent:
  # Volumes (including bind mounts) can be mounted to containers. Glob syntax is supported, see https://github.com/gobwas/glob
  # You can specify multiple volumes. If the sequence is empty, no volumes can be mounted.
  # For example, if you only allow containers to mount the `data` volume and all the json files in `/src`, you should change the config to:
  # valid_volumes:
  #   - data
  #   - /src/*.json
  # If you want to allow any volume, please use the following configuration:
  # valid_volumes:
  #   - '**'
  valid_volumes: []
  # Overrides the docker client host with the specified one.
  # If it's empty, act_runner will find an available docker host automatically.
  # If it's "-", act_runner will find an available docker host automatically, but the docker host won't be mounted to the job containers and service containers.
  # If it's not empty or "-", the specified docker host will be used. An error will be returned if it doesn't work.
  docker_host: ""
  # Pull docker image(s) even if already present
  force_pull: true
  # Rebuild docker image(s) even if already present
  force_rebuild: false
  # Always require a reachable docker daemon, even if not required by act_runner
  require_docker: false
  # Timeout to wait for the docker daemon to be reachable, if docker is required by require_docker or act_runner
  docker_timeout: 0s
  # Bind the workspace to the host filesystem instead of using Docker volumes.
  # This is required for Docker-in-Docker (DinD) setups when jobs use docker compose
  # with bind mounts (e.g., ".:/app"), as volume-based workspaces are not accessible
  # from the DinD daemon's filesystem. When enabled, ensure the workspace parent
  # directory is also mounted into the runner container and listed in valid_volumes.
  bind_workdir: false

host:
  # The parent directory of a job's working directory.
  # If it's empty, $HOME/.cache/act/ will be used.
  workdir_parent:

metrics:
  # Enable the Prometheus metrics endpoint.
  # When enabled, metrics are served at http://<addr>/metrics and a liveness check at /healthz.
  enabled: false
  # The address for the metrics HTTP server to listen on.
  # Defaults to localhost only. Set to ":9101" to allow external access,
  # but ensure the port is firewall-protected as there is no authentication.
  addr: "127.0.0.1:9101"
```