# Husky notifications guide
Husky can notify external systems when runs fail, succeed, retry, or breach SLA expectations.
Notifications are configured in two layers:
- `integrations`: provider credentials and reusable integration definitions
- per-job `notify`: lifecycle hooks and event behavior
## Supported providers
Husky currently supports:
- Slack
- Discord
- generic webhook
- PagerDuty
- SMTP email
## Integration definitions
Example:
```yaml
integrations:
  slack:
    webhook_url: "${env:SLACK_WEBHOOK_URL}"
  pagerduty:
    routing_key: "${env:PAGERDUTY_ROUTING_KEY}"
  smtp:
    host: smtp.example.com
    port: 587
    username: "${env:SMTP_USERNAME}"
    password: "${env:SMTP_PASSWORD}"
    from: husky@example.com
```
You can also define multiple integrations of the same provider with explicit names:
```yaml
integrations:
  slack_ops:
    provider: slack
    webhook_url: "${env:SLACK_OPS_WEBHOOK}"
  slack_data:
    provider: slack
    webhook_url: "${env:SLACK_DATA_WEBHOOK}"
```
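The channel-resolution rules below refer to "named" integration entries. A sketch of how a job might target one of these, under the assumption that a named integration's key can stand in as the channel's provider prefix:

```yaml
notify:
  # assumes the integration name (slack_ops) is usable as the channel prefix
  on_failure: slack_ops:#ops
```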
## Per-job notify hooks
Supported lifecycle hooks:
- `on_failure`
- `on_success`
- `on_sla_breach`
- `on_retry`
### Shorthand form
```yaml
notify:
  on_failure: slack:#ops
```
### Object form
```yaml
notify:
  on_success:
    channel: webhook:https://example.test/hook
    message: "job={{ job.name }} status={{ run.status }}"
    attach_logs: last_30_lines
    only_after_failure: true
```
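The two forms can be mixed per job; a sketch (assuming hooks in both styles may coexist under one `notify` block, with hook names and fields as documented above):

```yaml
notify:
  on_retry: slack:#ops            # shorthand: just a channel
  on_failure:                     # object form: channel plus options
    channel: slack:#ops
    message: "{{ job.name }} failed on attempt {{ run.attempt }}"
    attach_logs: last_30_lines
```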
## Channel syntax
Channels use `<provider>:<target>` syntax.
Examples:
- `slack:#ops`
- `discord:deployments`
- `pagerduty:p1`
- `webhook:https://hooks.example.test/deploy`
- `smtp:team@example.com`
How they resolve:
- Husky uses the provider prefix to select the delivery backend
- the provider-specific integration config is taken from the named or inferred integration entry
- the target portion is interpreted according to the provider
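For example, tracing `smtp:team@example.com` through those steps (annotated as comments):

```yaml
# channel: smtp:team@example.com
#   provider prefix "smtp"     -> selects the SMTP delivery backend
#   integration entry "smtp"   -> supplies host, port, credentials, and from address
#   target "team@example.com"  -> interpreted as the recipient email address
```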
## Template variables
Notification message templates support job and run values.
Available job fields:
`job.name`, `job.description`, `job.frequency`, `job.tags`
Available run fields:
`run.id`, `run.status`, `run.attempt`, `run.trigger`, `run.reason`, `run.sla_breached`
Example:
```yaml
message: "job={{ job.name }} status={{ run.status }} reason={{ run.reason }}"
```
Husky accepts both `{{ job.name }}` and Go-template-style `{{ .job.name }}` forms.
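As an illustration, for a hypothetical job named `nightly_backup` whose run failed, a template using the fields above would render along these lines:

```yaml
message: "job={{ job.name }} status={{ run.status }}"
# with job.name = nightly_backup and run.status = failed, this renders roughly as:
#   job=nightly_backup status=failed
```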
## Log attachments
`attach_logs` controls how much log output to include.
Supported values:
- `none`
- `all`
- `last_<N>_lines`, such as `last_30_lines`
Examples:
```yaml
attach_logs: none
attach_logs: all
attach_logs: last_50_lines
```
Recommendations:
- use `last_<N>_lines` for concise operational alerts
- use `all` only when logs are known to be small and non-sensitive
## `only_after_failure`
This flag applies only to `on_success` notifications.
When enabled, Husky suppresses the success notification unless the previous completed run of that job failed.
This is useful for "recovered" messages without generating noise for every normal success.
## Event-specific behavior
### `on_failure`
Fires after retries are exhausted and the job enters terminal failure handling.
### `on_success`
Fires on successful completion.
### `on_sla_breach`
Fires when a running job exceeds its `sla` duration.
Fallback behavior:
- if `on_sla_breach` is not defined, Husky falls back to `on_failure`
SLA notifications are informational only; they do not kill or retry the run.
### `on_retry`
Fires at the start of each retry attempt.
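A minimal retry hook is a natural fit for the documented `run.attempt` field; a sketch:

```yaml
notify:
  on_retry:
    channel: slack:#ops
    message: "{{ job.name }} retrying (attempt {{ run.attempt }})"
```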
## Delivery behavior and alert persistence
Husky records alert delivery state in the `alerts` table.
Tracked fields include:
- job name
- run ID
- event
- channel
- delivery status
- attempts
- last attempt timestamp
- payload
- last error
This gives operators a persistent audit trail of notification activity.
## Provider notes
### Slack / Discord / generic webhook
These are webhook-style JSON deliveries.
### PagerDuty
PagerDuty uses the Events API v2 routing key. The target can encode severity-like intent, such as `p1`.
### SMTP
SMTP sends mail using the configured host and `from` address. The target portion of the channel becomes the recipient email address.
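Putting that together, a sketch of an email failure alert (the recipient address is illustrative):

```yaml
notify:
  on_failure:
    channel: smtp:team@example.com
    message: "{{ job.name }} failed: {{ run.reason }}"
```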
## Good patterns
### High-signal failure alert
```yaml
notify:
  on_failure:
    channel: slack:#ops
    message: "{{ job.name }} failed on attempt {{ run.attempt }}"
    attach_logs: last_40_lines
```
### Recovery-only success notice
```yaml
notify:
  on_success:
    channel: slack:#ops
    message: "{{ job.name }} recovered"
    only_after_failure: true
```
### SLA alerting without killing the job
```yaml
sla: "15m"
notify:
  on_sla_breach:
    channel: pagerduty:p2
    message: "{{ job.name }} is still running past SLA"
```
## Security reminders
- keep credentials in environment variables, not committed YAML
- be careful with attached logs if output may contain secrets
- use auth and TLS if operators trigger test deliveries through exposed APIs
## Troubleshooting notifications
If notifications do not arrive:
- verify the integration exists and validates
- confirm `${env:VAR}` values are present
- review the `alerts` table or dashboard alerts view
- run `husky integrations test <name>`
- inspect daemon logs for delivery errors
## Related commands
```shell
husky integrations list
husky integrations test <name>
husky audit --job my_job
```