Mirror of https://gitea.com/gitea/act_runner.git (synced 2026-04-24 04:40:22 +08:00)
feat: add Prometheus metrics endpoint for runner observability (#820)
## What
Add an optional Prometheus `/metrics` HTTP endpoint to `act_runner` so operators can observe runner health, polling behavior, job outcomes, and RPC latency without scraping logs.
New surface:
- `internal/pkg/metrics/metrics.go` — metric definitions, custom `Registry`, static Go/process collectors, label constants, `ResultToStatusLabel` helper.
- `internal/pkg/metrics/server.go` — hardened `http.Server` serving `/metrics` and `/healthz` with Slowloris-safe timeouts (`ReadHeaderTimeout` 5s, `ReadTimeout`/`WriteTimeout` 10s, `IdleTimeout` 60s) and a 5s graceful shutdown.
- `daemon.go` wires it up behind `cfg.Metrics.Enabled` (disabled by default).
- `poller.go` / `reporter.go` / `runner.go` instrument their existing hot paths with counters/histograms/gauges — no behavior change.
Metrics exported (namespace `act_runner_`):
| Subsystem | Metric | Type | Labels |
|---|---|---|---|
| — | `info` | Gauge | `version`, `name` |
| — | `capacity`, `uptime_seconds` | Gauge | — |
| `poll` | `fetch_total` | Counter | `result` |
| `client` | `errors_total` | Counter | `method` |
| `poll` | `fetch_duration_seconds`, `backoff_seconds` | Histogram / Gauge | — |
| `job` | `total` | Counter | `status` |
| `job` | `duration_seconds`, `running`, `capacity_utilization_ratio` | Histogram / GaugeFunc | — |
| `report` | `log_total`, `state_total` | Counter | `result` |
| `report` | `log_duration_seconds`, `state_duration_seconds` | Histogram | — |
| `report` | `log_buffer_rows` | Gauge | — |
| — | `go_*`, `process_*` | standard collectors | — |
All label values are predefined constants — **no high-cardinality labels** (no task IDs, repo URLs, branches, tokens, or secrets) so scraping is safe and bounded.
## Why
Teams self-hosting Gitea + `act_runner` at scale need to answer basic SRE questions that are currently invisible:
- How often are RPCs failing? Which RPC? (`act_runner_client_errors_total`)
- Are runners saturated? (`act_runner_job_capacity_utilization_ratio`, `act_runner_job_running`)
- How long do jobs take? (`act_runner_job_duration_seconds`)
- Is polling backing off? (`act_runner_poll_backoff_seconds`, `act_runner_poll_fetch_total{result="error"}`)
- Are log/state reports slow? (`act_runner_report_{log,state}_duration_seconds`)
- Is the log buffer draining? (`act_runner_report_log_buffer_rows`)
Today operators have to grep logs. This PR makes all of the above first-class metrics so they can feed dashboards and alerts (`rate(act_runner_client_errors_total[5m]) > 0.1`, capacity saturation alerts, etc.).
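For dashboards, the questions above map onto PromQL along these lines (illustrative sketches, not shipped in this PR; thresholds are arbitrary, metric and label names are as defined by this change):

```promql
# Client RPC error rate per method
sum(rate(act_runner_client_errors_total[5m])) by (method)

# Sustained saturation: runner near capacity over the last 15 minutes
max_over_time(act_runner_job_capacity_utilization_ratio[15m]) > 0.9

# p95 job duration over the last hour
histogram_quantile(0.95, sum(rate(act_runner_job_duration_seconds_bucket[1h])) by (le))
```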
The endpoint is **disabled by default** and binds to `127.0.0.1:9101` when enabled, so it's opt-in and safe for existing deployments.
## How
### Config
```yaml
metrics:
  enabled: false       # opt-in
  addr: 127.0.0.1:9101 # change to 0.0.0.0:9101 only behind a reverse proxy
```
`config.example.yaml` documents both fields plus a security note about binding externally without auth.
### Wiring
1. `daemon.go` calls `metrics.Init()` (guarded by `sync.Once`), sets `act_runner_info`, `act_runner_capacity`, registers uptime + running-jobs GaugeFuncs, then starts the server goroutine with the daemon context — it shuts down cleanly on `ctx.Done()`.
2. `poller.fetchTask` observes RPC latency / result / error counters. `DeadlineExceeded` (long-poll idle) is treated as an empty result and **not** observed into the histogram so the 5s timeout doesn't swamp the buckets.
3. `poller.pollOnce` reports `poll_backoff_seconds` using the pre-jitter base interval (the true backoff level), and only when it changes — prevents noisy no-op gauge updates at the `FetchIntervalMax` plateau.
4. `reporter.ReportLog` / `ReportState` record duration histograms and success/error counters; `log_buffer_rows` is updated only when the value changes, guarded by the already-held `clientM`.
5. `runner.Run` observes `job_duration_seconds` and increments `job_total` by outcome via `metrics.ResultToStatusLabel`.
### Safety / security review
- All timeouts set; Slowloris-safe.
- Custom `prometheus.NewRegistry()` — no global registration side-effects.
- No sensitive data in labels (reviewed every instrumentation site).
- Single new dependency: `github.com/prometheus/client_golang v1.23.2`.
- Endpoint is unauthenticated by design and documented as such; default localhost bind mitigates exposure. Operators exposing externally should front it with a reverse proxy.
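For operators who do need external scraping today, a minimal reverse-proxy sketch (nginx shown as one option; the htpasswd path and realm name are hypothetical, adapt to your setup) keeps the runner bound to localhost while adding basic auth in front:

```nginx
# act_runner keeps listening on 127.0.0.1:9101; nginx terminates the
# external connection and requires credentials before proxying.
location /metrics {
    auth_basic           "act_runner metrics";
    auth_basic_user_file /etc/nginx/htpasswd;  # e.g. created with htpasswd -c
    proxy_pass           http://127.0.0.1:9101/metrics;
}
```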
## Verification
### Build, vet, and unit tests
```bash
go build ./...
go vet ./...
go test ./...
```
### Manual smoke test
1. Enable metrics in `config.yaml`:
```yaml
metrics:
  enabled: true
  addr: 127.0.0.1:9101
```
2. Start the runner against a Gitea instance: `./act_runner daemon`.
3. Scrape the endpoint:
```bash
curl -s http://127.0.0.1:9101/metrics | grep '^act_runner_'
curl -s http://127.0.0.1:9101/healthz  # → ok
```
4. Confirm the static series appear immediately: `act_runner_info`, `act_runner_capacity`, `act_runner_uptime_seconds`, `act_runner_job_running`, `act_runner_job_capacity_utilization_ratio`.
5. Trigger a workflow and confirm counters increment: `act_runner_poll_fetch_total{result="task"}`, `act_runner_job_total{status="success"}`, `act_runner_report_log_total{result="success"}`.
6. Leave the runner idle and confirm `act_runner_poll_backoff_seconds` settles (and does **not** churn on every poll).
7. Ctrl-C and confirm a clean "metrics server shutdown" log line (no port-in-use error on restart within 5s).
### Prometheus integration
Add to `prometheus.yml`:
```yaml
scrape_configs:
  - job_name: act_runner
    static_configs:
      - targets: ['127.0.0.1:9101']
```
Sample alert to try:
```promql
sum(rate(act_runner_client_errors_total[5m])) by (method) > 0.1
```
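To turn that expression into a firing alert, a Prometheus rule file could look like this (a sketch only: the group name, alert name, `for` duration, and severity label are illustrative and not part of this PR):

```yaml
groups:
  - name: act_runner
    rules:
      - alert: ActRunnerClientErrorsHigh
        expr: sum(rate(act_runner_client_errors_total[5m])) by (method) > 0.1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "act_runner RPC errors on method {{ $labels.method }}"
```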
## Out of scope (follow-ups)
- TLS and auth on the metrics endpoint (mitigated today by localhost default; add when operators need external scraping).
- Per-task labels (intentionally avoided for cardinality safety).
---
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/820
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi Wu <appleboy.tw@gmail.com>
`config.example.yaml`:
```diff
@@ -132,3 +132,12 @@ host:
   # The parent directory of a job's working directory.
   # If it's empty, $HOME/.cache/act/ will be used.
   workdir_parent:
+
+metrics:
+  # Enable the Prometheus metrics endpoint.
+  # When enabled, metrics are served at http://<addr>/metrics and a liveness check at /healthz.
+  enabled: false
+  # The address for the metrics HTTP server to listen on.
+  # Defaults to localhost only. Set to ":9101" to allow external access,
+  # but ensure the port is firewall-protected as there is no authentication.
+  addr: "127.0.0.1:9101"
```
`internal/pkg/config/config.go`:
```diff
@@ -70,6 +70,12 @@ type Host struct {
 	WorkdirParent string `yaml:"workdir_parent"` // WorkdirParent specifies the parent directory for the host's working directory.
 }
 
+// Metrics represents the configuration for the Prometheus metrics endpoint.
+type Metrics struct {
+	Enabled bool   `yaml:"enabled"` // Enabled indicates whether the metrics endpoint is exposed.
+	Addr    string `yaml:"addr"`    // Addr specifies the listen address for the metrics HTTP server (e.g., ":9101").
+}
+
 // Config represents the overall configuration.
 type Config struct {
 	Log Log `yaml:"log"` // Log represents the configuration for logging.
@@ -77,6 +83,7 @@ type Config struct {
 	Cache     Cache     `yaml:"cache"`     // Cache represents the configuration for caching.
 	Container Container `yaml:"container"` // Container represents the configuration for the container.
 	Host      Host      `yaml:"host"`      // Host represents the configuration for the host.
+	Metrics   Metrics   `yaml:"metrics"`   // Metrics represents the configuration for the Prometheus metrics endpoint.
 }
 
 // LoadDefault returns the default configuration.
@@ -157,6 +164,9 @@ func LoadDefault(file string) (*Config, error) {
 	if cfg.Runner.StateReportInterval <= 0 {
 		cfg.Runner.StateReportInterval = 5 * time.Second
 	}
+	if cfg.Metrics.Addr == "" {
+		cfg.Metrics.Addr = "127.0.0.1:9101"
+	}
 
 	// Validate and fix invalid config combinations to prevent confusing behavior.
 	if cfg.Runner.FetchIntervalMax < cfg.Runner.FetchInterval {
```
`internal/pkg/metrics/metrics.go` (new file, 216 lines):
```go
// Copyright 2026 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package metrics

import (
	"sync"
	"time"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/collectors"
)

// Namespace is the Prometheus namespace for all act_runner metrics.
const Namespace = "act_runner"

// Label value constants for Prometheus metrics.
// Using constants prevents typos from silently creating new time-series.
//
// LabelResult* values are used on metrics with label key "result" (RPC outcomes).
// LabelStatus* values are used on metrics with label key "status" (job outcomes).
const (
	LabelResultTask    = "task"
	LabelResultEmpty   = "empty"
	LabelResultError   = "error"
	LabelResultSuccess = "success"

	LabelMethodFetchTask  = "FetchTask"
	LabelMethodUpdateLog  = "UpdateLog"
	LabelMethodUpdateTask = "UpdateTask"

	LabelStatusSuccess   = "success"
	LabelStatusFailure   = "failure"
	LabelStatusCancelled = "cancelled"
	LabelStatusSkipped   = "skipped"
	LabelStatusUnknown   = "unknown"
)

// rpcDurationBuckets covers the expected latency range for short-running
// UpdateLog / UpdateTask RPCs. FetchTask uses its own buckets (it has a 10s tail).
var rpcDurationBuckets = []float64{0.01, 0.05, 0.1, 0.25, 0.5, 1, 2, 5}

// ResultToStatusLabel maps a runnerv1.Result to the "status" label value used on job metrics.
func ResultToStatusLabel(r runnerv1.Result) string {
	switch r {
	case runnerv1.Result_RESULT_SUCCESS:
		return LabelStatusSuccess
	case runnerv1.Result_RESULT_FAILURE:
		return LabelStatusFailure
	case runnerv1.Result_RESULT_CANCELLED:
		return LabelStatusCancelled
	case runnerv1.Result_RESULT_SKIPPED:
		return LabelStatusSkipped
	default:
		return LabelStatusUnknown
	}
}

var (
	RunnerInfo = prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Namespace: Namespace,
		Name:      "info",
		Help:      "Runner metadata. Always 1. Labels carry version and name.",
	}, []string{"version", "name"})

	RunnerCapacity = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: Namespace,
		Name:      "capacity",
		Help:      "Configured maximum concurrent jobs.",
	})

	PollFetchTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "poll",
		Name:      "fetch_total",
		Help:      "Total number of FetchTask RPCs by result (task, empty, error).",
	}, []string{"result"})

	PollFetchDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: Namespace,
		Subsystem: "poll",
		Name:      "fetch_duration_seconds",
		Help:      "Latency of FetchTask RPCs, excluding expected long-poll timeouts.",
		Buckets:   []float64{0.01, 0.05, 0.1, 0.25, 0.5, 1, 2, 5, 10},
	})

	PollBackoffSeconds = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: Namespace,
		Subsystem: "poll",
		Name:      "backoff_seconds",
		Help:      "Last observed polling backoff interval. With Capacity > 1, reflects whichever worker wrote last.",
	})

	JobsTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "job",
		Name:      "total",
		Help:      "Total jobs processed by status (success, failure, cancelled, skipped, unknown).",
	}, []string{"status"})

	JobDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: Namespace,
		Subsystem: "job",
		Name:      "duration_seconds",
		Help:      "Duration of job execution from start to finish.",
		Buckets:   prometheus.ExponentialBuckets(1, 2, 14), // 1s to ~4.5h
	})

	ReportLogTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "log_total",
		Help:      "Total UpdateLog RPCs by result (success, error).",
	}, []string{"result"})

	ReportLogDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "log_duration_seconds",
		Help:      "Latency of UpdateLog RPCs.",
		Buckets:   rpcDurationBuckets,
	})

	ReportStateTotal = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "state_total",
		Help:      "Total UpdateTask (state) RPCs by result (success, error).",
	}, []string{"result"})

	ReportStateDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "state_duration_seconds",
		Help:      "Latency of UpdateTask RPCs.",
		Buckets:   rpcDurationBuckets,
	})

	ReportLogBufferRows = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: Namespace,
		Subsystem: "report",
		Name:      "log_buffer_rows",
		Help:      "Current number of buffered log rows awaiting send.",
	})

	ClientErrors = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: Namespace,
		Subsystem: "client",
		Name:      "errors_total",
		Help:      "Total client RPC errors by method.",
	}, []string{"method"})
)

// Registry is the custom Prometheus registry used by the runner.
var Registry = prometheus.NewRegistry()

var initOnce sync.Once

// Init registers all static metrics and the standard Go/process collectors.
// Safe to call multiple times; only the first call has effect.
func Init() {
	initOnce.Do(func() {
		Registry.MustRegister(
			collectors.NewGoCollector(),
			collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
			RunnerInfo, RunnerCapacity,
			PollFetchTotal, PollFetchDuration, PollBackoffSeconds,
			JobsTotal, JobDuration,
			ReportLogTotal, ReportLogDuration,
			ReportStateTotal, ReportStateDuration, ReportLogBufferRows,
			ClientErrors,
		)
	})
}

// RegisterUptimeFunc registers a GaugeFunc that reports seconds since startTime.
func RegisterUptimeFunc(startTime time.Time) {
	Registry.MustRegister(prometheus.NewGaugeFunc(
		prometheus.GaugeOpts{
			Namespace: Namespace,
			Name:      "uptime_seconds",
			Help:      "Seconds since the runner daemon started.",
		},
		func() float64 { return time.Since(startTime).Seconds() },
	))
}

// RegisterRunningJobsFunc registers GaugeFuncs for the running job count and
// capacity utilisation ratio, evaluated lazily at Prometheus scrape time.
func RegisterRunningJobsFunc(countFn func() int64, capacity int) {
	capF := float64(capacity)
	Registry.MustRegister(prometheus.NewGaugeFunc(
		prometheus.GaugeOpts{
			Namespace: Namespace,
			Subsystem: "job",
			Name:      "running",
			Help:      "Number of jobs currently executing.",
		},
		func() float64 { return float64(countFn()) },
	))
	Registry.MustRegister(prometheus.NewGaugeFunc(
		prometheus.GaugeOpts{
			Namespace: Namespace,
			Subsystem: "job",
			Name:      "capacity_utilization_ratio",
			Help:      "Ratio of running jobs to configured capacity (0.0-1.0).",
		},
		func() float64 {
			if capF <= 0 {
				return 0
			}
			return float64(countFn()) / capF
		},
	))
}
```
`internal/pkg/metrics/server.go` (new file, 50 lines):
```go
// Copyright 2026 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package metrics //nolint:revive // "metrics" is the conventional package name for Prometheus instrumentation; runtime/metrics stdlib is not used here.

import (
	"context"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus/promhttp"
	log "github.com/sirupsen/logrus"
)

// StartServer starts an HTTP server that serves Prometheus metrics on /metrics
// and a liveness check on /healthz. The server shuts down when ctx is cancelled.
// Call Init() before StartServer to register metrics with the Registry.
func StartServer(ctx context.Context, addr string) {
	mux := http.NewServeMux()
	mux.Handle("/metrics", promhttp.HandlerFor(Registry, promhttp.HandlerOpts{}))
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte("ok"))
	})

	srv := &http.Server{
		Addr:              addr,
		Handler:           mux,
		ReadHeaderTimeout: 5 * time.Second,
		ReadTimeout:       10 * time.Second,
		WriteTimeout:      10 * time.Second,
		IdleTimeout:       60 * time.Second,
	}

	go func() {
		log.Infof("metrics server listening on %s", addr)
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.WithError(err).Error("metrics server failed")
		}
	}()

	go func() {
		<-ctx.Done()
		shutCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		if err := srv.Shutdown(shutCtx); err != nil {
			log.WithError(err).Warn("metrics server shutdown error")
		}
	}()
}
```
`reporter.go`:
```diff
@@ -21,6 +21,7 @@ import (
 
 	"gitea.com/gitea/act_runner/internal/pkg/client"
 	"gitea.com/gitea/act_runner/internal/pkg/config"
+	"gitea.com/gitea/act_runner/internal/pkg/metrics"
 )
 
 type Reporter struct {
@@ -36,6 +37,11 @@ type Reporter struct {
 	logReplacer *strings.Replacer
 	oldnew      []string
 
+	// lastLogBufferRows is the last value written to the ReportLogBufferRows
+	// gauge; guarded by clientM (the same lock held around each ReportLog call)
+	// so the gauge skips no-op Set calls when the buffer size is unchanged.
+	lastLogBufferRows int
+
 	state        *runnerv1.TaskState
 	stateChanged bool
 	stateMu      sync.RWMutex
@@ -93,6 +99,13 @@ func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.C
 	return rv
 }
 
+// Result returns the final job result. Safe to call after Close() returns.
+func (r *Reporter) Result() runnerv1.Result {
+	r.stateMu.RLock()
+	defer r.stateMu.RUnlock()
+	return r.state.Result
+}
+
 func (r *Reporter) ResetSteps(l int) {
 	r.stateMu.Lock()
 	defer r.stateMu.Unlock()
@@ -421,15 +434,20 @@ func (r *Reporter) ReportLog(noMore bool) error {
 		return nil
 	}
 
+	start := time.Now()
 	resp, err := r.client.UpdateLog(r.ctx, connect.NewRequest(&runnerv1.UpdateLogRequest{
 		TaskId: r.state.Id,
 		Index:  int64(r.logOffset),
 		Rows:   rows,
 		NoMore: noMore,
 	}))
+	metrics.ReportLogDuration.Observe(time.Since(start).Seconds())
 	if err != nil {
+		metrics.ReportLogTotal.WithLabelValues(metrics.LabelResultError).Inc()
+		metrics.ClientErrors.WithLabelValues(metrics.LabelMethodUpdateLog).Inc()
 		return err
 	}
+	metrics.ReportLogTotal.WithLabelValues(metrics.LabelResultSuccess).Inc()
 
 	ack := int(resp.Msg.AckIndex)
 	if ack < r.logOffset {
@@ -440,7 +458,12 @@ func (r *Reporter) ReportLog(noMore bool) error {
 	r.logRows = r.logRows[ack-r.logOffset:]
 	submitted := r.logOffset + len(rows)
 	r.logOffset = ack
+	remaining := len(r.logRows)
 	r.stateMu.Unlock()
+	if remaining != r.lastLogBufferRows {
+		metrics.ReportLogBufferRows.Set(float64(remaining))
+		r.lastLogBufferRows = remaining
+	}
 
 	if noMore && ack < submitted {
 		return errors.New("not all logs are submitted")
@@ -479,16 +502,21 @@ func (r *Reporter) ReportState(reportResult bool) error {
 		state.Result = runnerv1.Result_RESULT_UNSPECIFIED
 	}
 
+	start := time.Now()
 	resp, err := r.client.UpdateTask(r.ctx, connect.NewRequest(&runnerv1.UpdateTaskRequest{
 		State:   state,
 		Outputs: outputs,
 	}))
+	metrics.ReportStateDuration.Observe(time.Since(start).Seconds())
 	if err != nil {
+		metrics.ReportStateTotal.WithLabelValues(metrics.LabelResultError).Inc()
+		metrics.ClientErrors.WithLabelValues(metrics.LabelMethodUpdateTask).Inc()
 		r.stateMu.Lock()
 		r.stateChanged = true
 		r.stateMu.Unlock()
 		return err
 	}
+	metrics.ReportStateTotal.WithLabelValues(metrics.LabelResultSuccess).Inc()
 
 	for _, k := range resp.Msg.SentOutputs {
 		r.outputs.Store(k, struct{}{})
```