Retries and concurrency
Retries
Set retries to the number of retry attempts after the initial run.
retries: 2
That produces up to three total attempts.
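The arithmetic can be stated as a one-liner (a sketch of the counting rule above, not husky internals):

```python
def total_attempts(retries: int) -> int:
    # "retries" counts re-runs after the initial attempt,
    # so the ceiling on total attempts is retries + 1.
    return 1 + retries
```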
Retry delay modes
Exponential
retry_delay: exponential
Exponential backoff starts at roughly 30s and doubles on each retry, with jitter.
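A rough model of the schedule: the 30s base and per-retry doubling come from the text above, while the ±10% jitter range is an assumption for illustration only.

```python
import random

def backoff_schedule(retries, base=30.0, jitter=0.10):
    """Illustrative delays: base doubles each retry, plus random jitter.

    The 30s base and doubling follow the description above; the
    +/-10% jitter fraction is an assumed value, not husky's actual one.
    """
    delays = []
    for attempt in range(retries):
        nominal = base * (2 ** attempt)
        delays.append(nominal * (1 + random.uniform(-jitter, jitter)))
    return delays
```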
Fixed delay
retry_delay: fixed:10s
Retries use the same delay each time.
Failure policy
When retries are exhausted, on_failure decides the final behavior.
| Value | Meaning |
|---|---|
| alert | Send failure notifications |
| skip | Mark the pending path as skipped and continue other work |
| stop | Halt downstream pipeline progression |
| ignore | Record failure but take no special action |
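One way to see how the four values differ is their effect on the failed job's downstream steps. The sketch below is an illustrative model of the table, not husky's internal data structures, and the treatment of alert and ignore as leaving downstream work untouched is an interpretation of the descriptions above.

```python
def after_exhausted_retries(policy, downstream):
    """Map an on_failure value to the fate of downstream steps.

    Purely illustrative; husky's real model is not shown in this doc.
    """
    if policy == "stop":
        return {step: "halted" for step in downstream}
    if policy == "skip":
        # the pending path is skipped; unrelated work elsewhere continues
        return {step: "skipped" for step in downstream}
    if policy in ("alert", "ignore"):
        # alert also sends notifications; ignore only records the failure
        return {step: "unaffected" for step in downstream}
    raise ValueError(f"unknown on_failure value: {policy}")
```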
Concurrency
Concurrency controls overlapping runs of the same job.
| Value | Meaning |
|---|---|
| allow | Let multiple runs execute at once |
| forbid | Skip a new overlapping run |
| replace | Cancel the current run and start a fresh one |
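In pseudo-scheduler terms, the three policies resolve an overlap like this (names and return values are illustrative, not husky's API):

```python
def on_new_trigger(policy, running):
    """Decide what happens when a job fires while a previous run
    (`running`, a truthy handle or None) may still be active."""
    if not running:
        return "start"
    if policy == "allow":
        return "start"             # run alongside the existing one
    if policy == "forbid":
        return "skip"              # drop the new overlapping run
    if policy == "replace":
        return "cancel-and-start"  # cancel current, start fresh
    raise ValueError(f"unknown concurrency value: {policy}")
```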
Example
jobs:
  nightly_sync:
    description: "Sync remote data"
    frequency: every:5m
    command: "./scripts/sync.sh"
    timeout: "10m"
    retries: 3
    retry_delay: exponential
    concurrency: replace
    on_failure: alert
Operator commands
husky retry <job>
husky cancel <job>
husky skip <job>
What to expect in history
Run history records:
- attempt count
- trigger
- duration
- status
- reason
- a vs-SLA comparison column when an SLA is configured
Good patterns
- use forbid for jobs that must not overlap
- use replace for polling jobs where the newest run matters most
- use stop for critical DAG branches
- use ignore only for clearly non-critical work