## Background
#819 replaced the shared `rate.Limiter` with per-worker exponential backoff counters to add jitter and adaptive polling. Before #819, the poller used:
```go
limiter := rate.NewLimiter(rate.Every(p.cfg.Runner.FetchInterval), 1)
```
This limiter was **shared across all N polling goroutines with burst=1**, effectively serializing their `FetchTask` calls — so even with `capacity=60`, the runner issued roughly one `FetchTask` per `FetchInterval` total.
#819 replaced this with independent per-worker `consecutiveEmpty` / `consecutiveErrors` counters. Each goroutine now backs off **independently**, which inadvertently removed the cross-worker serialization. With `capacity=N`, the runner now has N goroutines each polling on their own schedule — a regression from the pre-#819 baseline for any runner with `capacity > 1`.
(Thanks to @ChristopherHX for catching this in review.)
## Problem
With the post-#819 code:
- `capacity=N` maintains **N persistent polling goroutines**, each calling `FetchTask` independently
- At idle, N goroutines each wake up and send a `FetchTask` RPC per `FetchInterval`
- At full load, N goroutines **continue polling** even though no slot is available to run a new task — every one of those RPCs is wasted
- The `Shutdown()` timeout branch has a pre-existing bug: the "non-blocking check" is actually a blocking receive, so `shutdownJobs()` is never reached on timeout
## Real-World Impact: 3 Runners × capacity=60
Current production environment: 3 runners each with `capacity=60`.
| Metric | Post-#819 (current) | This PR | Reduction |
|--------|---------------------|---------|-----------|
| Polling goroutines (total) | 3 × 60 = **180** | 3 × 1 = **3** | **98.3%** (177 fewer) |
| FetchTask RPCs per poll cycle (idle) | **180** | **3** | **98.3%** |
| FetchTask RPCs per poll cycle (full load) | **180** (all wasted) | **0** (blocked on semaphore) | **100%** |
| Concurrent connections to Gitea | **180** | **3** | **98.3%** |
| Backoff state objects | 180 (per-worker) | 3 (one per runner) | Simplified |
### Idle scenario
All 180 goroutines wake up every `FetchInterval`, each sending a `FetchTask` RPC that returns empty. Server handles 180 RPCs per cycle for zero useful work. After this PR: **3 RPCs per cycle** — one per runner.
> Note: pre-#819 idle behavior was already ~3 RPCs/cycle due to the shared `rate.Limiter`. This PR restores that property while also addressing the full-load case below.
### Full-load scenario (all 180 slots occupied)
All 180 goroutines **continue polling** even though no slot is available. Every RPC is wasted. After this PR: all 3 pollers are **blocked on the semaphore** — **zero RPCs** until a task completes.
> This is a scenario neither the pre-#819 shared limiter nor the post-#819 per-worker backoff handles — both still issue `FetchTask` RPCs when no slot is free. The semaphore is the only approach of the three that ties polling to available capacity.
## Why Not Just Revert to `rate.Limiter`?
Reverting would restore the serialized behavior but is not the right long-term fix:
- **`rate.Limiter` has no concept of available capacity.** At full load it still hands out tokens and issues `FetchTask` RPCs that can't be acted on. The semaphore blocks polling entirely in that case — zero wasted RPCs.
- **It composes poorly with adaptive backoff from #819.** A shared limiter and per-worker backoff pull in different directions.
- **N goroutines serializing on a shared limiter means N-1 of them exist only to wait in line.** A single poller expresses the same behavior more directly.
The semaphore approach ties polling to capacity explicitly: `acquire slot → fetch → dispatch → release`. That invariant becomes structural rather than emergent from a rate limiter.
## Solution
Replace N polling goroutines with a **single polling loop** that uses a buffered channel as a semaphore to control concurrent task execution:
```go
// New: poller.go Poll()
sem := make(chan struct{}, p.cfg.Runner.Capacity)
for {
    select {
    case sem <- struct{}{}: // Acquire slot (blocks at capacity)
    case <-p.pollingCtx.Done():
        return
    }
    task, ok := p.fetchTask(...) // Single FetchTask RPC
    if !ok {
        <-sem // Release slot on empty response
        // backoff...
        continue
    }
    go func(t *runnerv1.Task) { // Dispatch task
        defer func() { <-sem }() // Release slot when done
        p.runTaskWithRecover(p.jobsCtx, t)
    }(task)
}
```
The exponential backoff and jitter from #819 are preserved — just driven by a single `workerState` instead of N per-worker states.
## Shutdown Bug Fix
Fixed a pre-existing bug in `Shutdown()` where the timeout branch could never force-cancel running jobs:
```go
// Before (BROKEN): blocking receive, shutdownJobs() never reached
_, ok := <-p.done // blocks until p.done is closed
if !ok {
    return nil
}
p.shutdownJobs() // dead code when jobs are still running

// After (FIXED): proper non-blocking check
select {
case <-p.done:
    return nil
default:
}
p.shutdownJobs() // now correctly reached on timeout
```
## Code Changes
| Area | Detail |
|------|--------|
| `Poller.runner` | `*run.Runner` → `TaskRunner` interface (enables mock-based testing) |
| `Poll()` | N goroutines → single loop with buffered-channel semaphore |
| `PollOnce()` | Inlined from removed `pollOnce()` |
| `waitBackoff()` | New helper, eliminates duplicated backoff logic |
| `resetBackoff()` | New method on `workerState`, also resets stale `lastBackoff` metric |
| `Shutdown()` | Fixed blocking receive → proper non-blocking select |
| Removed | `poll()`, `pollOnce()` private methods (-2 methods, -42 lines) |
## Test Coverage
Added `TestPoller_ConcurrencyLimitedByCapacity` which verifies:
- With `capacity=3`, at most 3 tasks execute concurrently (`maxConcurrent <= 3`)
- Tasks actually overlap in execution (`maxConcurrent >= 2`)
- `FetchTask` is never called concurrently — confirms single poller (`maxFetchConcur == 1`)
- All 6 tasks complete successfully (`totalCompleted == 6`)
- Mock runner respects context cancellation, enabling shutdown path verification
```
=== RUN TestPoller_ConcurrencyLimitedByCapacity
--- PASS: TestPoller_ConcurrencyLimitedByCapacity (0.10s)
PASS
ok gitea.com/gitea/act_runner/internal/app/poll 0.59s
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/822
Reviewed-by: silverwind <2021+silverwind@noreply.gitea.com>
Co-authored-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi Wu <appleboy.tw@gmail.com>
## What
Add an optional Prometheus `/metrics` HTTP endpoint to `act_runner` so operators can observe runner health, polling behavior, job outcomes, and RPC latency without scraping logs.
New surface:
- `internal/pkg/metrics/metrics.go` — metric definitions, custom `Registry`, static Go/process collectors, label constants, `ResultToStatusLabel` helper.
- `internal/pkg/metrics/server.go` — hardened `http.Server` serving `/metrics` and `/healthz` with Slowloris-safe timeouts (`ReadHeaderTimeout` 5s, `ReadTimeout`/`WriteTimeout` 10s, `IdleTimeout` 60s) and a 5s graceful shutdown.
- `daemon.go` wires it up behind `cfg.Metrics.Enabled` (disabled by default).
- `poller.go` / `reporter.go` / `runner.go` instrument their existing hot paths with counters/histograms/gauges — no behavior change.
Metrics exported (namespace `act_runner_`):
| Subsystem | Metric | Type | Labels |
|---|---|---|---|
| — | `info` | Gauge | `version`, `name` |
| — | `capacity`, `uptime_seconds` | Gauge | — |
| `poll` | `fetch_total`, `client_errors_total` | Counter | `result` / `method` |
| `poll` | `fetch_duration_seconds`, `backoff_seconds` | Histogram / Gauge | — |
| `job` | `total` | Counter | `status` |
| `job` | `duration_seconds`, `running`, `capacity_utilization_ratio` | Histogram / GaugeFunc | — |
| `report` | `log_total`, `state_total` | Counter | `result` |
| `report` | `log_duration_seconds`, `state_duration_seconds` | Histogram | — |
| `report` | `log_buffer_rows` | Gauge | — |
| — | `go_*`, `process_*` | standard collectors | — |
All label values are predefined constants — **no high-cardinality labels** (no task IDs, repo URLs, branches, tokens, or secrets) so scraping is safe and bounded.
## Why
Teams self-hosting Gitea + `act_runner` at scale need to answer basic SRE questions that are currently invisible:
- How often are RPCs failing? Which RPC? (`act_runner_client_errors_total`)
- Are runners saturated? (`act_runner_job_capacity_utilization_ratio`, `act_runner_job_running`)
- How long do jobs take? (`act_runner_job_duration_seconds`)
- Is polling backing off? (`act_runner_poll_backoff_seconds`, `act_runner_poll_fetch_total{result="error"}`)
- Are log/state reports slow? (`act_runner_report_{log,state}_duration_seconds`)
- Is the log buffer draining? (`act_runner_report_log_buffer_rows`)
Today operators have to grep logs. This PR makes all of the above first-class metrics so they can feed dashboards and alerts (`rate(act_runner_client_errors_total[5m]) > 0.1`, capacity saturation alerts, etc.).
The endpoint is **disabled by default** and binds to `127.0.0.1:9101` when enabled, so it's opt-in and safe for existing deployments.
## How
### Config
```yaml
metrics:
  enabled: false      # opt-in
  addr: 127.0.0.1:9101 # change to 0.0.0.0:9101 only behind a reverse proxy
```
`config.example.yaml` documents both fields plus a security note about binding externally without auth.
### Wiring
1. `daemon.go` calls `metrics.Init()` (guarded by `sync.Once`), sets `act_runner_info`, `act_runner_capacity`, registers uptime + running-jobs GaugeFuncs, then starts the server goroutine with the daemon context — it shuts down cleanly on `ctx.Done()`.
2. `poller.fetchTask` observes RPC latency / result / error counters. `DeadlineExceeded` (long-poll idle) is treated as an empty result and **not** observed into the histogram so the 5s timeout doesn't swamp the buckets.
3. `poller.pollOnce` reports `poll_backoff_seconds` using the pre-jitter base interval (the true backoff level), and only when it changes — prevents noisy no-op gauge updates at the `FetchIntervalMax` plateau.
4. `reporter.ReportLog` / `ReportState` record duration histograms and success/error counters; `log_buffer_rows` is updated only when the value changes, guarded by the already-held `clientM`.
5. `runner.Run` observes `job_duration_seconds` and increments `job_total` by outcome via `metrics.ResultToStatusLabel`.
### Safety / security review
- All timeouts set; Slowloris-safe.
- Custom `prometheus.NewRegistry()` — no global registration side-effects.
- No sensitive data in labels (reviewed every instrumentation site).
- Single new dependency: `github.com/prometheus/client_golang v1.23.2`.
- Endpoint is unauthenticated by design and documented as such; default localhost bind mitigates exposure. Operators exposing externally should front it with a reverse proxy.
## Verification
### Unit tests
```bash
go build ./...
go vet ./...
go test ./...
```
### Manual smoke test
1. Enable metrics in `config.yaml`:
```yaml
metrics:
  enabled: true
  addr: 127.0.0.1:9101
```
2. Start the runner against a Gitea instance: `./act_runner daemon`.
3. Scrape the endpoint:
```bash
curl -s http://127.0.0.1:9101/metrics | grep '^act_runner_'
curl -s http://127.0.0.1:9101/healthz # → ok
```
4. Confirm the static series appear immediately: `act_runner_info`, `act_runner_capacity`, `act_runner_uptime_seconds`, `act_runner_job_running`, `act_runner_job_capacity_utilization_ratio`.
5. Trigger a workflow and confirm counters increment: `act_runner_poll_fetch_total{result="task"}`, `act_runner_job_total{status="success"}`, `act_runner_report_log_total{result="success"}`.
6. Leave the runner idle and confirm `act_runner_poll_backoff_seconds` settles (and does **not** churn on every poll).
7. Ctrl-C and confirm a clean "metrics server shutdown" log line (no port-in-use error on restart within 5s).
### Prometheus integration
Add to `prometheus.yml`:
```yaml
scrape_configs:
  - job_name: act_runner
    static_configs:
      - targets: ['127.0.0.1:9101']
```
Sample alert to try:
```
sum(rate(act_runner_client_errors_total[5m])) by (method) > 0.1
```
## Out of scope (follow-ups)
- TLS and auth on the metrics endpoint (mitigated today by localhost default; add when operators need external scraping).
- Per-task labels (intentionally avoided for cardinality safety).
---
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/820
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi Wu <appleboy.tw@gmail.com>
## Summary
Many teams self-host Gitea + Act Runner at scale. The current runner design causes excessive HTTP requests to the Gitea server, leading to high server load. This PR addresses three root causes: aggressive fixed-interval polling, per-task status reporting every 1 second regardless of activity, and unoptimized HTTP client configuration.
## Problem
The original architecture has these issues:
**1. Fixed 1-second reporting interval (RunDaemon)**
- Every running task calls ReportLog + ReportState every 1 second (2 HTTP requests/sec/task)
- These requests are sent even when there are no new log rows or state changes
- With 200 runners × 3 tasks each = **1,200 req/sec just for status reporting**
**2. Fixed 2-second polling interval (no backoff)**
- Idle runners poll FetchTask every 2 seconds forever, even when no jobs are queued
- No exponential backoff or jitter — all runners can synchronize after network recovery (thundering herd)
- 200 idle runners = **100 req/sec doing nothing useful**
**3. HTTP client not tuned**
- Uses http.DefaultClient with MaxIdleConnsPerHost=2, causing frequent TCP/TLS reconnects
- Creates two separate http.Client instances (one for Ping, one for Runner service) instead of sharing
**Total: ~1,300 req/sec for 200 runners with 3 tasks each**
## Solution
### Adaptive Event-Driven Log Reporting
Replace the recursive `time.AfterFunc(1s)` pattern in RunDaemon with a goroutine-based select event loop using three trigger mechanisms:
| Trigger | Default | Purpose |
|---------|---------|---------|
| `log_report_max_latency` | 3s | Guarantee even a single log line is delivered within this time |
| `log_report_interval` | 5s | Periodic sweep — steady-state cadence |
| `log_report_batch_size` | 100 rows | Immediate flush during bursty output (e.g., npm install) |
**Key design**: `log_report_max_latency` (3s) must be less than `log_report_interval` (5s) so the max-latency timer fires before the periodic ticker for single-line scenarios.
State reporting is decoupled to its own `state_report_interval` (default 5s), with immediate flush on step transitions (start/stop) via a stateNotify channel for responsive frontend UX.
Additionally:
- Skip ReportLog when `len(rows) == 0` (no pending log rows)
- Skip ReportState when `stateChanged == false && len(outputs) == 0` (nothing changed)
- Move expensive `proto.Clone` after the early-return check to avoid deep copies on no-op paths
### Polling Backoff with Jitter
Replace fixed `rate.Limiter` with adaptive exponential backoff:
- Track `consecutiveEmpty` and `consecutiveErrors` counters
- Interval doubles with each empty/error response: `base × 2^(n-1)`, capped at `fetch_interval_max` (default 60s)
- Add ±20% random jitter to prevent thundering herd
- Fetch first, sleep after: preserves burst=1 behavior for an immediate first fetch on startup and after task completion
### HTTP Client Tuning
- Configure custom `http.Transport` with `MaxIdleConnsPerHost=10` (was 2)
- Share a single `http.Client` between PingService and RunnerService
- Add `IdleConnTimeout=90s` for clean connection lifecycle
## Load Reduction
For 200 runners × 3 tasks (70% with active log output):
| Component | Before | After | Reduction |
|-----------|--------|-------|-----------|
| Polling (idle) | 100 req/s | ~3.4 req/s | 97% |
| Log reporting | 420 req/s | ~84 req/s | 80% |
| State reporting | 126 req/s | ~25 req/s | 80% |
| **Total** | **~1,300 req/s** | **~113 req/s** | **~91%** |
## Frontend UX Impact
| Scenario | Before | After | Notes |
|----------|--------|-------|-------|
| Continuous output (npm install) | ~1s | ~5s | Periodic ticker sweep |
| Single line then silence | ~1s | ≤3s | maxLatencyTimer guarantee |
| Bursty output (100+ lines) | ~1s | <1s | Batch size immediate flush |
| Step start/stop | ~1s | <1s | stateNotify immediate flush |
| Job completion | ~1s | ~1s | Close() retry unchanged |
## New Configuration Options
All have safe defaults — existing config files need no changes:
```yaml
runner:
  fetch_interval_max: 60s     # Max backoff interval when idle
  log_report_interval: 5s     # Periodic log flush interval
  log_report_max_latency: 3s  # Max time a log row waits (must be < log_report_interval)
  log_report_batch_size: 100  # Immediate flush threshold
  state_report_interval: 5s   # State flush interval (step transitions are always immediate)
```
Config validation warns on invalid combinations:
- `fetch_interval_max < fetch_interval` → auto-corrected
- `log_report_max_latency >= log_report_interval` → warning (timer would be redundant)
## Test Plan
- [x] `go build ./...` passes
- [x] `go test ./...` passes (all existing + 3 new tests)
- [x] `golangci-lint run` — 0 issues
- [x] TestReporter_MaxLatencyTimer — verifies single log line flushed by maxLatencyTimer before logTicker
- [x] TestReporter_BatchSizeFlush — verifies batch size threshold triggers immediate flush
- [x] TestReporter_StateNotifyFlush — verifies step transition triggers immediate state flush
- [x] TestReporter_EphemeralRunnerDeletion — verifies Close/RunDaemon race safety
- [x] TestReporter_RunDaemonClose_Race — verifies concurrent Close safety
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/819
Reviewed-by: Nicolas <173651+bircni@noreply.gitea.com>
Co-authored-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi Wu <appleboy.tw@gmail.com>
## Summary
Adds a `container.bind_workdir` config option that exposes the nektos/act `BindWorkdir` setting. When enabled, workspaces are bind-mounted from the host filesystem instead of Docker volumes, which is required for DinD setups where jobs use `docker compose` with bind mounts (e.g. `.:/app`).
Each job gets an isolated workspace at `/workspace/<task_id>/<owner>/<repo>` to prevent concurrent jobs from the same repo interfering with each other. The task directory is cleaned up after job execution.
### Configuration
```yaml
container:
  bind_workdir: true
```
When using this with DinD, also mount the workspace parent into the runner container and add it to `valid_volumes`:
```yaml
container:
  valid_volumes:
    - /workspace/**
```
*This PR was authored by Claude (AI assistant)*
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/810
Reviewed-by: ChristopherHX <38043+christopherhx@noreply.gitea.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
## Summary
- Fix data race on `r.closed` between `RunDaemon()` and `Close()` by protecting it with the existing `stateMu` — `closed` is part of the reporter state. `RunDaemon()` reads it under `stateMu.RLock()`, `Close()` sets it inside the existing `stateMu.Lock()` block
- `ReportState` now takes a parameter that suppresses reporting the result from `RunDaemon` even if it is set; from now on, `Close` reports the result
- `Close` waits for `RunDaemon()` to signal exit via a closed `daemon` channel before reporting the final logs and state with the result; it does not time out unless something goes seriously wrong
- Add `TestReporter_EphemeralRunnerDeletion` which reproduces the exact scenario from #793: RunDaemon's `ReportState` racing with `Close`, causing the ephemeral runner to be deleted before final logs are sent
- Add `TestReporter_RunDaemonClose_Race` which exercises `RunDaemon()` and `Close()` concurrently to verify no data race on `r.closed` under `go test -race`
- Enable `-race` flag in `make test` so CI catches data races going forward
Based on #794, with fixes for the remaining unprotected `r.closed` reads that the race detector catches.
Fixes #793
---------
Co-authored-by: Christopher Homberger <christopher.homberger@web.de>
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
Co-authored-by: rmawatson <rmawatson@hotmail.com>
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/796
Reviewed-by: ChristopherHX <christopherhx@noreply.gitea.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: silverwind <me@silverwind.io>
Co-committed-by: silverwind <me@silverwind.io>
Before this commit, when running `act_runner exec` locally to test workflows, we could only set env variables and secrets, but not vars.
This commit adds a new exec option, `--var`, modeled on what is done for env and secret.
Example:
`act_runner exec --env MY_ENV=testenv -s MY_SECRET=testsecret --var MY_VAR=testvariable`
Workflow:
```
name: Gitea Actions test
on: [push]
jobs:
  TestAction:
    runs-on: ubuntu-latest
    steps:
      - run: echo "VAR -> ${{ vars.MY_VAR }}"
      - run: echo "ENV -> ${{ env.MY_ENV }}"
      - run: echo "SECRET -> ${{ secrets.MY_SECRET }}"
```
This will echo the var, env, and secret values passed on the command line.
Fixes gitea/act_runner#692
Co-authored-by: Lautriva <gitlactr@dbn.re>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/704
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: lautriva <lautriva@noreply.gitea.com>
Co-committed-by: lautriva <lautriva@noreply.gitea.com>
I used to be able to do something like `./act_runner register --instance https://gitea.com --token testdcff --name test` with GitHub Actions Runners, but act_runner always asked me to enter the instance, token, etc. again, requiring me to use `--no-interactive` and pass everything via the CLI.
My idea is to use the preset value of each stage's input to skip the prompt when it is a non-empty string. Labels is the only question asked more than once when validation fails; in that case the error path has to unset the value in the input structure so it doesn't end in a non-interactive loop.
_I have written this initially for my own gitea runner, might be useful to everyone using the official runner as well_
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/682
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Christopher Homberger <christopher.homberger@web.de>
Co-committed-by: Christopher Homberger <christopher.homberger@web.de>
We wanted the ability to disable outputting the logs from the individual job to the console. This changes the logging so that job logs are only output to the console whenever debug logging is enabled in `act_runner`, while still allowing the `Reporter` to receive these logs and forward them to Gitea when debug logging is not enabled.
Co-authored-by: Rowan Bohde <rowan.bohde@gmail.com>
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/543
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: rowan-allspice <rowan-allspice@noreply.gitea.com>
Co-committed-by: rowan-allspice <rowan-allspice@noreply.gitea.com>
## Description
Issue described in #460
## Changes
- Edited `supervisord.conf` to exit if it detects any of the supervised processes exiting.
- minor text fix
## Notes
Without this change (or something similar), if act_runner fails, then the container will stay up as a zombie container - it does nothing and does not restart. After this change, if act_runner fails (e.g. due to Gitea instance being down), then supervisord will exit and the container will be restarted.
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/462
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: davidfrickert <david.frickert@protonmail.com>
Co-committed-by: davidfrickert <david.frickert@protonmail.com>
- Removed `deadcode`, `structcheck`, and `varcheck` linters from `.golangci.yml`
- Fixed a typo in a comment in `daemon.go`
- Renamed `defaultActionsUrl` to `defaultActionsURL` in `exec.go`
- Removed unnecessary else clause in `exec.go` and `runner.go`
- Simplified variable initialization in `exec.go`
- Changed function name from `getHttpClient` to `getHTTPClient` in `http.go`
- Removed unnecessary else clause in `labels_test.go`
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/289
Co-authored-by: Bo-Yi Wu <appleboy.tw@gmail.com>
Co-committed-by: Bo-Yi Wu <appleboy.tw@gmail.com>