Mirror of https://gitea.com/gitea/act_runner.git (synced 2026-04-23 20:30:07 +08:00)
## Background

#819 replaced the shared `rate.Limiter` with per-worker exponential backoff counters to add jitter and adaptive polling. Before #819, the poller used:

```go
limiter := rate.NewLimiter(rate.Every(p.cfg.Runner.FetchInterval), 1)
```

This limiter was **shared across all N polling goroutines with burst=1**, effectively serializing their `FetchTask` calls — so even with `capacity=60`, the runner issued roughly one `FetchTask` per `FetchInterval` total.

#819 replaced this with independent per-worker `consecutiveEmpty` / `consecutiveErrors` counters. Each goroutine now backs off **independently**, which inadvertently removed the cross-worker serialization. With `capacity=N`, the runner now has N goroutines each polling on their own schedule — a regression from the pre-#819 baseline for any runner with `capacity > 1`. (Thanks to @ChristopherHX for catching this in review.)

## Problem

With the post-#819 code:

- `capacity=N` maintains **N persistent polling goroutines**, each calling `FetchTask` independently
- At idle, N goroutines each wake up and send a `FetchTask` RPC per `FetchInterval`
- At full load, N goroutines **continue polling** even though no slot is available to run a new task — every one of those RPCs is wasted
- The `Shutdown()` timeout branch has a pre-existing bug: the "non-blocking check" is actually a blocking receive, so `shutdownJobs()` is never reached on timeout

## Real-World Impact: 3 Runners × capacity=60

Current production environment: 3 runners each with `capacity=60`.
| Metric | Post-#819 (current) | This PR | Reduction |
|--------|---------------------|---------|-----------|
| Polling goroutines (total) | 3 × 60 = **180** | 3 × 1 = **3** | **98.3%** (177 fewer) |
| FetchTask RPCs per poll cycle (idle) | **180** | **3** | **98.3%** |
| FetchTask RPCs per poll cycle (full load) | **180** (all wasted) | **0** (blocked on semaphore) | **100%** |
| Concurrent connections to Gitea | **180** | **3** | **98.3%** |
| Backoff state objects | 180 (per-worker) | 3 (one per runner) | Simplified |

### Idle scenario

All 180 goroutines wake up every `FetchInterval`, each sending a `FetchTask` RPC that returns empty. The server handles 180 RPCs per cycle for zero useful work. After this PR: **3 RPCs per cycle** — one per runner.

> Note: pre-#819 idle behavior was already ~3 RPCs/cycle due to the shared `rate.Limiter`. This PR restores that property while also addressing the full-load case below.

### Full-load scenario (all 180 slots occupied)

All 180 goroutines **continue polling** even though no slot is available. Every RPC is wasted. After this PR: all 3 pollers are **blocked on the semaphore** — **zero RPCs** until a task completes.

> This is a scenario neither the pre-#819 shared limiter nor the post-#819 per-worker backoff handles — both still issue `FetchTask` RPCs when no slot is free. The semaphore is the only approach of the three that ties polling to available capacity.

## Why Not Just Revert to `rate.Limiter`?

Reverting would restore the serialized behavior but is not the right long-term fix:

- **`rate.Limiter` has no concept of available capacity.** At full load it still hands out tokens and issues `FetchTask` RPCs that can't be acted on. The semaphore blocks polling entirely in that case — zero wasted RPCs.
- **It composes poorly with adaptive backoff from #819.** A shared limiter and per-worker backoff pull in different directions.
- **N goroutines serializing on a shared limiter means N-1 of them exist only to wait in line.** A single poller expresses the same behavior more directly.

The semaphore approach ties polling to capacity explicitly: `acquire slot → fetch → dispatch → release`. That invariant becomes structural rather than emergent from a rate limiter.

## Solution

Replace N polling goroutines with a **single polling loop** that uses a buffered channel as a semaphore to control concurrent task execution:

```go
// New: poller.go Poll()
sem := make(chan struct{}, p.cfg.Runner.Capacity)
for {
	select {
	case sem <- struct{}{}: // Acquire slot (blocks at capacity)
	case <-p.pollingCtx.Done():
		return
	}
	task, ok := p.fetchTask(...) // Single FetchTask RPC
	if !ok {
		<-sem // Release slot on empty response
		// backoff...
		continue
	}
	go func(t *runnerv1.Task) { // Dispatch task
		defer func() { <-sem }() // Release slot when done
		p.runTaskWithRecover(p.jobsCtx, t)
	}(task)
}
```

The exponential backoff and jitter from #819 are preserved — just driven by a single `workerState` instead of N per-worker states.
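For readers who want to poke at the pattern in isolation, here is a self-contained sketch of the same buffered-channel semaphore. The `runWithSemaphore` helper, the task count, and the fake 10 ms task duration are illustrative, not part of the act_runner codebase:

```go
// Standalone sketch of the buffered-channel semaphore used by the new
// Poll() loop: at most `capacity` tasks run at once, and the dispatcher
// blocks (instead of polling) whenever every slot is taken.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// runWithSemaphore dispatches `tasks` fake tasks through a semaphore of
// size `capacity` and reports the highest concurrency observed.
func runWithSemaphore(capacity, tasks int) int {
	sem := make(chan struct{}, capacity)
	var wg sync.WaitGroup
	var running, peak atomic.Int64

	for i := 0; i < tasks; i++ {
		sem <- struct{}{} // acquire a slot; blocks when capacity is reached
		wg.Add(1)
		go func() {
			defer wg.Done()
			defer func() { <-sem }() // release the slot when the task finishes

			n := running.Add(1)
			// Record the peak concurrency (CAS loop keeps it monotonic).
			for {
				m := peak.Load()
				if n <= m || peak.CompareAndSwap(m, n) {
					break
				}
			}
			time.Sleep(10 * time.Millisecond) // simulate task work
			running.Add(-1)
		}()
	}
	wg.Wait()
	return int(peak.Load())
}

func main() {
	fmt.Printf("peak concurrency: %d (capacity 3)\n", runWithSemaphore(3, 9))
}
```

The `sem <- struct{}{}` send blocks once `capacity` tokens are outstanding, which is exactly what keeps the real poller from fetching work it has no free slot for.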
## Shutdown Bug Fix

Fixed a pre-existing bug in `Shutdown()` where the timeout branch could never force-cancel running jobs:

```go
// Before (BROKEN): blocking receive, shutdownJobs() never reached
_, ok := <-p.done // blocks until p.done is closed
if !ok {
	return nil
}
p.shutdownJobs() // dead code when jobs are still running

// After (FIXED): proper non-blocking check
select {
case <-p.done:
	return nil
default:
}
p.shutdownJobs() // now correctly reached on timeout
```

## Code Changes

| Area | Detail |
|------|--------|
| `Poller.runner` | `*run.Runner` → `TaskRunner` interface (enables mock-based testing) |
| `Poll()` | N goroutines → single loop with buffered-channel semaphore |
| `PollOnce()` | Inlined from removed `pollOnce()` |
| `waitBackoff()` | New helper, eliminates duplicated backoff logic |
| `resetBackoff()` | New method on `workerState`, also resets stale `lastBackoff` metric |
| `Shutdown()` | Fixed blocking receive → proper non-blocking select |
| Removed | `poll()`, `pollOnce()` private methods (-2 methods, -42 lines) |

## Test Coverage

Added `TestPoller_ConcurrencyLimitedByCapacity` which verifies:

- With `capacity=3`, at most 3 tasks execute concurrently (`maxConcurrent <= 3`)
- Tasks actually overlap in execution (`maxConcurrent >= 2`)
- `FetchTask` is never called concurrently — confirms single poller (`maxFetchConcur == 1`)
- All 6 tasks complete successfully (`totalCompleted == 6`)
- Mock runner respects context cancellation, enabling shutdown path verification

```
=== RUN   TestPoller_ConcurrencyLimitedByCapacity
--- PASS: TestPoller_ConcurrencyLimitedByCapacity (0.10s)
PASS
ok      gitea.com/gitea/act_runner/internal/app/poll    0.59s
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Reviewed-on: https://gitea.com/gitea/act_runner/pulls/822
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
Co-authored-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi Wu <appleboy.tw@gmail.com>
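The shutdown fix above boils down to one Go idiom: a receive from a closed channel completes immediately, while a `select` with a `default` case never blocks. A minimal sketch of that distinction (the `gracefulAlreadyDone` helper is illustrative, not act_runner API):

```go
// Demonstrates the non-blocking done-channel check from the Shutdown() fix:
// the select/default form answers "is done closed?" without ever blocking,
// whereas the buggy `_, ok := <-p.done` form blocks while jobs are running.
package main

import "fmt"

// gracefulAlreadyDone reports whether the done channel has been closed,
// returning immediately in either case.
func gracefulAlreadyDone(done chan struct{}) bool {
	select {
	case <-done:
		return true // done closed: graceful shutdown already completed
	default:
		return false // done still open: caller should force-cancel jobs
	}
}

func main() {
	open := make(chan struct{})
	finished := make(chan struct{})
	close(finished)

	fmt.Println(gracefulAlreadyDone(open))     // false: would proceed to shutdownJobs()
	fmt.Println(gracefulAlreadyDone(finished)) // true: graceful path won the race
}
```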
286 lines
7.1 KiB
Go
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package poll

import (
	"context"
	"errors"
	"fmt"
	"math/rand/v2"
	"sync"
	"sync/atomic"
	"time"

	"gitea.com/gitea/act_runner/internal/pkg/client"
	"gitea.com/gitea/act_runner/internal/pkg/config"
	"gitea.com/gitea/act_runner/internal/pkg/metrics"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
	"connectrpc.com/connect"
	log "github.com/sirupsen/logrus"
)

// TaskRunner abstracts task execution so the poller can be tested
// without a real runner.
type TaskRunner interface {
	Run(ctx context.Context, task *runnerv1.Task) error
}

type Poller struct {
	client       client.Client
	runner       TaskRunner
	cfg          *config.Config
	tasksVersion atomic.Int64 // tasksVersion stores the version of the last task fetched from Gitea.

	pollingCtx      context.Context
	shutdownPolling context.CancelFunc

	jobsCtx      context.Context
	shutdownJobs context.CancelFunc

	done chan struct{}
}

// workerState holds the single poller's backoff state. Consecutive empty or
// error responses drive exponential backoff; a successful task fetch resets
// both counters so the next poll fires immediately.
type workerState struct {
	consecutiveEmpty  int64
	consecutiveErrors int64
	// lastBackoff is the last interval reported to the PollBackoffSeconds gauge;
	// used to suppress redundant no-op Set calls when the backoff plateaus
	// (e.g. at FetchIntervalMax).
	lastBackoff time.Duration
}

func New(cfg *config.Config, client client.Client, runner TaskRunner) *Poller {
	pollingCtx, shutdownPolling := context.WithCancel(context.Background())

	jobsCtx, shutdownJobs := context.WithCancel(context.Background())

	done := make(chan struct{})

	return &Poller{
		client: client,
		runner: runner,
		cfg:    cfg,

		pollingCtx:      pollingCtx,
		shutdownPolling: shutdownPolling,

		jobsCtx:      jobsCtx,
		shutdownJobs: shutdownJobs,

		done: done,
	}
}

func (p *Poller) Poll() {
	sem := make(chan struct{}, p.cfg.Runner.Capacity)
	wg := &sync.WaitGroup{}
	s := &workerState{}

	defer func() {
		wg.Wait()
		close(p.done)
	}()

	for {
		select {
		case sem <- struct{}{}: // acquire a slot; blocks when all slots are busy
		case <-p.pollingCtx.Done():
			return
		}

		task, ok := p.fetchTask(p.pollingCtx, s)
		if !ok {
			<-sem // release the slot: no task was dispatched
			if !p.waitBackoff(s) {
				return
			}
			continue
		}

		s.resetBackoff()

		wg.Add(1)
		go func(t *runnerv1.Task) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot when the task finishes
			p.runTaskWithRecover(p.jobsCtx, t)
		}(task)
	}
}

func (p *Poller) PollOnce() {
	defer close(p.done)
	s := &workerState{}
	for {
		task, ok := p.fetchTask(p.pollingCtx, s)
		if !ok {
			if !p.waitBackoff(s) {
				return
			}
			continue
		}
		s.resetBackoff()
		p.runTaskWithRecover(p.jobsCtx, task)
		return
	}
}

func (p *Poller) Shutdown(ctx context.Context) error {
	p.shutdownPolling()

	select {
	// graceful shutdown completed successfully
	case <-p.done:
		return nil

	// our timeout for shutting down ran out
	case <-ctx.Done():
		// Both the timeout and the graceful shutdown may fire
		// simultaneously. Do a non-blocking check to avoid forcing
		// a shutdown when graceful already completed.
		select {
		case <-p.done:
			return nil
		default:
		}

		// force a shutdown of all running jobs
		p.shutdownJobs()

		// wait for running jobs to report their status to Gitea
		<-p.done

		return ctx.Err()
	}
}

func (s *workerState) resetBackoff() {
	s.consecutiveEmpty = 0
	s.consecutiveErrors = 0
	s.lastBackoff = 0
}

// waitBackoff sleeps for the current backoff interval (with jitter).
// Returns false if the polling context was cancelled during the wait.
func (p *Poller) waitBackoff(s *workerState) bool {
	base := p.calculateInterval(s)
	if base != s.lastBackoff {
		metrics.PollBackoffSeconds.Set(base.Seconds())
		s.lastBackoff = base
	}
	timer := time.NewTimer(addJitter(base))
	select {
	case <-timer.C:
		return true
	case <-p.pollingCtx.Done():
		timer.Stop()
		return false
	}
}

// calculateInterval returns the polling interval with exponential backoff based on
// consecutive empty or error responses. The interval starts at FetchInterval and
// doubles with each consecutive empty/error, capped at FetchIntervalMax.
// For example, with FetchInterval=2s and FetchIntervalMax=30s, the intervals for
// n = 0, 1, 2, 3, 4, 5, 6, ... are 2s, 2s, 4s, 8s, 16s, 30s, 30s, ...
func (p *Poller) calculateInterval(s *workerState) time.Duration {
	base := p.cfg.Runner.FetchInterval
	maxInterval := p.cfg.Runner.FetchIntervalMax

	n := max(s.consecutiveEmpty, s.consecutiveErrors)
	if n <= 1 {
		return base
	}

	// Capped exponential backoff: base * 2^(n-1), max shift=5 so multiplier <= 32
	shift := min(n-1, 5)
	interval := base * time.Duration(int64(1)<<shift)
	return min(interval, maxInterval)
}

// addJitter adds +/- 20% random jitter to the given duration to avoid thundering herd.
func addJitter(d time.Duration) time.Duration {
	if d <= 0 {
		return d
	}
	// jitter range: [-20%, +20%] of d
	jitterRange := int64(d) * 2 / 5 // 40% total range
	if jitterRange <= 0 {
		return d
	}
	jitter := rand.Int64N(jitterRange) - jitterRange/2
	return d + time.Duration(jitter)
}

func (p *Poller) runTaskWithRecover(ctx context.Context, task *runnerv1.Task) {
	defer func() {
		if r := recover(); r != nil {
			err := fmt.Errorf("panic: %v", r)
			log.WithError(err).Error("panic in runTaskWithRecover")
		}
	}()

	if err := p.runner.Run(ctx, task); err != nil {
		log.WithError(err).Error("failed to run task")
	}
}

func (p *Poller) fetchTask(ctx context.Context, s *workerState) (*runnerv1.Task, bool) {
	reqCtx, cancel := context.WithTimeout(ctx, p.cfg.Runner.FetchTimeout)
	defer cancel()

	// Load the version value that was in the cache when the request was sent.
	v := p.tasksVersion.Load()
	start := time.Now()
	resp, err := p.client.FetchTask(reqCtx, connect.NewRequest(&runnerv1.FetchTaskRequest{
		TasksVersion: v,
	}))

	// DeadlineExceeded is the designed idle path for a long-poll: the server
	// found no work within FetchTimeout. Treat it as an empty response and do
	// not record the duration — the timeout value would swamp the histogram.
	if errors.Is(err, context.DeadlineExceeded) {
		s.consecutiveEmpty++
		s.consecutiveErrors = 0 // timeout is a healthy idle response
		metrics.PollFetchTotal.WithLabelValues(metrics.LabelResultEmpty).Inc()
		return nil, false
	}
	metrics.PollFetchDuration.Observe(time.Since(start).Seconds())

	if err != nil {
		log.WithError(err).Error("failed to fetch task")
		s.consecutiveErrors++
		metrics.PollFetchTotal.WithLabelValues(metrics.LabelResultError).Inc()
		metrics.ClientErrors.WithLabelValues(metrics.LabelMethodFetchTask).Inc()
		return nil, false
	}

	// Successful response — reset error counter.
	s.consecutiveErrors = 0

	if resp == nil || resp.Msg == nil {
		s.consecutiveEmpty++
		metrics.PollFetchTotal.WithLabelValues(metrics.LabelResultEmpty).Inc()
		return nil, false
	}

	if resp.Msg.TasksVersion > v {
		p.tasksVersion.CompareAndSwap(v, resp.Msg.TasksVersion)
	}

	if resp.Msg.Task == nil {
		s.consecutiveEmpty++
		metrics.PollFetchTotal.WithLabelValues(metrics.LabelResultEmpty).Inc()
		return nil, false
	}

	// got a task, set `tasksVersion` to zero to force querying the db in the next request.
	p.tasksVersion.CompareAndSwap(resp.Msg.TasksVersion, 0)

	metrics.PollFetchTotal.WithLabelValues(metrics.LabelResultTask).Inc()
	return resp.Msg.Task, true
}