Writing workflows
Husky workflows live in husky.yaml. Each job should answer four questions clearly:
- what does it do?
- when does it run?
- what does it depend on?
- what happens if it fails?
Minimal job
jobs:
  hourly_ping:
    description: "Check endpoint reachability"
    frequency: hourly
    command: "./scripts/ping.sh"
Typical production-ready job
jobs:
  weekday_backup:
    description: "Back up the primary database"
    frequency: on:[monday,tuesday,wednesday,thursday,friday]
    time: "0100"
    command: "./scripts/backup.sh"
    timeout: "30m"
    retries: 2
    retry_delay: exponential
    on_failure: alert
    tags: [critical, maintenance]
    env:
      DB_HOST: "${env:BACKUP_DB_HOST}"
Workflow authoring checklist
When defining a job, consider:
- use a specific, human-readable `description`
- choose a readable `frequency`
- add `time` when wall-clock scheduling is used
- define `timeout`
- define `retries` and `retry_delay`
- choose `on_failure`
- tag jobs for bulk operations
- add `notify` when the job matters operationally
- use `healthcheck` when exit code alone is not enough
- use `output` when downstream jobs need produced values
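As a sketch, the checklist items can come together in a single job definition. The `notify`, `healthcheck`, and `output` value formats below are illustrative assumptions, not confirmed Husky syntax — check your version's reference for the exact shapes:

```yaml
jobs:
  nightly_report:
    description: "Generate the nightly usage report"   # specific and human-readable
    frequency: daily
    time: "0230"                      # wall-clock scheduling, so time is set
    command: "./scripts/report.sh"
    timeout: "15m"
    retries: 1
    retry_delay: exponential
    on_failure: alert
    tags: [reporting]
    notify: "ops-alerts"              # assumed: a channel name for operational notifications
    healthcheck: "./scripts/report_ok.sh"  # assumed: a script run after the job, beyond exit code
    output: report_path               # assumed: a named value for downstream jobs
```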
Defaults
The `defaults:` block keeps configuration concise and consistent.
Common defaults include:
- `timeout`
- `retries`
- `retry_delay`
- `on_failure`
- `default_run_time`
- `timezone`
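A sketch of a `defaults:` block using the keys listed above; the values are illustrative, and any job-level setting would be expected to override its default:

```yaml
defaults:
  timeout: "10m"
  retries: 2
  retry_delay: exponential
  on_failure: alert
  default_run_time: "0300"
  timezone: "UTC"
```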