Advanced Local Automation
Advanced Gran features for local rules, prompts, review queues, and side effects on top of the synced meeting archive.
Most people should stop at local sync, local browsing, and local publishing into folders or Obsidian.
This page is for the optional advanced automation layer inside Gran itself. If you want a broader
toolchain to consume Gran through events, hooks, and machine-readable fetch commands, start with
Integrations instead.
Rule Files
For a fresh project, start with:
gran init --provider openrouter

That generates ~/.config/gran/config.json, starter harnesses, starter prompts, and starter automation
rules that you can edit in place. The examples below describe the same shapes in raw JSON so you
can extend them beyond the starter setup.
The toolkit reads automation rules from a JSON file. By default it uses:
~/.config/gran/automation-rules.json
Override that path with any of:
- the --rules /path/to/rules.json flag
- the GRAN_AUTOMATION_RULES_FILE environment variable
- "automation-rules-file": "/path/to/rules.json" in config.json
Rule Shape
{
"rules": [
{
"id": "team-transcript",
"name": "Team transcript ready",
"when": {
"eventKinds": ["transcript.ready"],
"folderNames": ["Team"],
"tags": ["customer"],
"transcriptLoaded": true
},
"actions": [
{
"id": "notes-markdown",
"kind": "export-notes",
"outputDir": "./automation/notes",
"scopedOutput": true
},
{
"id": "rewrite-with-agent",
"kind": "agent",
"approvalMode": "manual",
"pipeline": {
"kind": "notes"
},
"harnessId": "customer-call",
"fallbackHarnessIds": ["customer-call-codex"],
"promptFile": "./agents/customer-call/AGENT.md"
},
{
"id": "sync-obsidian",
"kind": "pkm-sync",
"trigger": "approval",
"sourceActionId": "rewrite-with-agent",
"targetId": "obsidian-team"
},
{
"id": "publish-approved-markdown",
"kind": "write-file",
"trigger": "approval",
"sourceActionId": "rewrite-with-agent",
"format": "markdown",
"outputDir": "./automation/approved"
},
{
"id": "notify-slack",
"kind": "slack-message",
"trigger": "approval",
"sourceActionId": "rewrite-with-agent",
"webhookUrlEnv": "SLACK_WEBHOOK_URL",
"text": "Approved {{artefact.title}}"
},
{
"id": "publish-hook",
"kind": "webhook",
"trigger": "approval",
"sourceActionId": "rewrite-with-agent",
"urlEnv": "POST_REVIEW_WEBHOOK_URL",
"payload": "json"
},
{
"id": "summarise",
"kind": "command",
"trigger": "approval",
"sourceActionId": "rewrite-with-agent",
"command": "/bin/sh",
"args": ["-lc", "cat >/tmp/granola-approved.json"]
},
{
"id": "review",
"kind": "ask-user",
"prompt": "Review this transcript before sharing it"
}
]
}
]
}

Each rule can contain zero or more actions. Current action kinds are:
- export-notes
- export-transcript
- agent
- command
- ask-user
- write-file
- pkm-sync
- webhook
- slack-message
agent actions run your own prompts against OpenRouter, OpenAI-compatible APIs, or the local codex
CLI. They append structured meeting context automatically, so a prompt file can stay focused on
instructions such as “turn this transcript into follow-up notes”.
That structured meeting context now includes speaker-aware role helpers:
- calendar participants when Granola exposes them
- transcript speaker summaries with segment counts and word counts
- canonical owner candidates so prompts can prefer real names over vague owners like you
Supported agent fields:
- pipeline.kind: notes or enrichment
- approvalMode: manual or auto
- harnessId
- fallbackHarnessIds
- provider: openrouter, openai, or codex
- model
- prompt
- promptFile
- systemPrompt
- systemPromptFile
- cwd
- dryRun
- retries
- timeoutMs
Use promptFile or systemPromptFile for local playbooks such as AGENT.md. Relative paths are
resolved from the current working directory, or from cwd if you set one on the action.
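For instance, a hypothetical agent action that keeps its playbook files inside a project directory (the id and paths here are illustrative, not part of the starter setup):

```json
{
  "id": "weekly-notes-agent",
  "kind": "agent",
  "cwd": "./projects/weekly",
  "promptFile": "./AGENT.md",
  "systemPromptFile": "./SYSTEM.md"
}
```

Because cwd is set, both prompt files resolve relative to ./projects/weekly rather than wherever the sync process happens to run.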
If you set harnessId, the toolkit resolves that harness first and then layers any inline action
fields on top. That lets one automation rule say “run the customer-call harness” without copying
provider and prompt settings into every rule.
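As a sketch, an action can reference a harness and override only one field (the model value here is illustrative):

```json
{
  "id": "rewrite-with-agent",
  "kind": "agent",
  "harnessId": "customer-call",
  "model": "openai/gpt-5-mini"
}
```

The customer-call harness supplies the provider and prompt settings, while the inline model field is layered on top.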
If you add pipeline.kind, the toolkit turns the agent action into a durable post-processing
pipeline. The model is asked for structured JSON, the parsed result is stored as an automation
artefact, and the artefact can be re-run later without waiting for a fresh sync event.
Pipeline JSON can include:
- actionItems[].ownerEmail
- actionItems[].ownerRole
- participantSummaries[]
The toolkit also normalises actionItems[].owner against the current owner candidates when it can,
so agent outputs like you or Alice can be resolved into canonical meeting owners for downstream
review and delivery actions.
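A pipeline result using those fields might look like the following sketch; the concrete values, including the ownerRole value, are invented for illustration:

```json
{
  "actionItems": [
    {
      "owner": "you",
      "ownerEmail": "alice@example.com",
      "ownerRole": "host"
    }
  ],
  "participantSummaries": []
}
```

Given owner candidates from the meeting, a vague owner like "you" would be normalised to the matching canonical participant before review and delivery.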
fallbackHarnessIds let a pipeline recover when the primary provider or harness fails. The toolkit
tries the primary harness first, then each fallback harness in order, and records those attempts on
the generated artefact for later review.
approvalMode controls what happens after a pipeline artefact is generated:
- manual keeps the artefact in the review queue until a person approves or rejects it
- auto approves it immediately and triggers any approval-phase actions without waiting for manual review
command actions receive a JSON payload on stdin by default, including the matched event, rule,
and meeting bundle when the meeting still exists.
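The exact payload keys below are an assumption for illustration; the guarantee is only that the matched event, the rule, and the meeting bundle (when the meeting still exists) are included:

```json
{
  "event": { "kind": "transcript.ready" },
  "rule": { "id": "team-transcript" },
  "meeting": { "title": "Weekly team sync" }
}
```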
write-file, webhook, slack-message, and command can also run with
"trigger": "approval" plus sourceActionId. In that mode they execute once per approved
artefact from the source pipeline action and receive the approved artefact in their payload.
Approval-triggered actions are how you send reviewed notes into the rest of your stack:
- write-file writes approved markdown, text, or JSON payloads to a local directory
- pkm-sync writes approved markdown into Gran-managed local knowledge bases such as Obsidian vaults or folder archives
- command receives the same review payload on stdin for arbitrary local tooling
- webhook posts either JSON or rendered text/markdown bodies to any HTTP endpoint
- slack-message posts a simple text payload to an incoming Slack webhook
If you want API-backed destinations like Notion, Capacities, or Tana, treat Gran as the source app and let another tool own those remote knowledge-base plugins instead of teaching Gran every publish API.
Delivery payloads include the artefact’s structured arrays too, so downstream actions can use:
- artefact.actionItems
- artefact.participantSummaries
- artefact.decisions
- artefact.followUps
- artefact.highlights
- artefact.sections
Template fields such as {{artefact.title}}, {{artefact.summary}}, {{meeting.title}}, and
{{rule.name}} are available in text, contentTemplate, bodyTemplate, and
filenameTemplate.
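A hypothetical approval-phase write-file action combining those template fields; whether both contentTemplate and filenameTemplate apply to write-file exactly as shown is an assumption based on the fields listed above:

```json
{
  "id": "publish-approved-markdown",
  "kind": "write-file",
  "trigger": "approval",
  "sourceActionId": "rewrite-with-agent",
  "format": "markdown",
  "outputDir": "./automation/approved",
  "filenameTemplate": "{{meeting.title}}.md",
  "contentTemplate": "# {{artefact.title}}\n\n{{artefact.summary}}"
}
```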
ask-user actions do not perform side effects immediately. They create a pending automation run
that can be approved or rejected later from the CLI, web workspace, or TUI.
Editing Harnesses In The Web Workspace
The web workspace now includes a dedicated harness editor. Use it when you want to:
- browse every saved harness in one place
- edit prompts, prompt files, providers, models, fallback chains, and match rules
- see why the currently selected meeting did or did not match a harness
- run a harness against the selected meeting before changing live automation
The editor saves back to the shared agent-harnesses.json store, so the CLI, server, web
workspace, and automation loop all stay on the same source of truth.
Evaluating Harnesses
Use fixture-backed evaluations to compare harnesses, prompts, and providers before changing live automation.
The simplest fixture is a JSON meeting bundle exported from the toolkit itself:
gran meeting export <meeting-id> --format json > eval-fixtures/customer-sync.json
gran automation evaluate --fixture eval-fixtures/customer-sync.json --harness customer-call
gran automation evaluate --fixture eval-fixtures/customer-sync.json --harness customer-call --provider openrouter --model openai/gpt-5-mini

You can also point --fixture at a directory of .json files. Each file can be either:
- a raw meeting export JSON bundle
- a wrapper with { "cases": [{ "id": "...", "title": "...", "bundle": { ... } }] }
gran automation evaluate runs the selected harnesses against each case, parses the pipeline
output with the same structured-output rules as live automation, and prints a comparison-friendly
report. Use --format json or --format yaml when you want to diff reports or feed them into
other tooling. Use --provider and --model to compare the same harness against a different
runtime without changing the harness file itself.
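A minimal multi-case fixture file following the wrapper shape above (the ids and titles are invented, and each bundle holds a raw meeting export):

```json
{
  "cases": [
    { "id": "customer-sync", "title": "Customer sync call", "bundle": {} },
    { "id": "renewal-call", "title": "Renewal call", "bundle": {} }
  ]
}
```

Replace each empty bundle with the JSON produced by gran meeting export --format json.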
The toolkit stores:
- raw sync events in sync-events.jsonl
- rule matches in automation-matches.jsonl
- action runs in automation-runs.jsonl
- generated pipeline artefacts in automation-artefacts.json

All four files live in the shared toolkit data directory, so watch loops, serve, web, and tui see
the same history.
Available match fields:
- eventKinds
- folderIds
- folderNames
- meetingIds
- tags
- titleIncludes
- titleMatches
- transcriptLoaded
Fields are combined with AND, while multiple values inside one field are treated as OR.
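For example, this hypothetical when block matches meetings in either the Team or Customers folder (OR inside folderNames) that also carry the customer tag and a loaded transcript (AND across fields):

```json
{
  "when": {
    "folderNames": ["Team", "Customers"],
    "tags": ["customer"],
    "transcriptLoaded": true
  }
}
```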
Provider Credentials
For HTTP providers, set credentials in the environment used by gran sync, gran serve, or
whatever process runs your automation:
- OPENROUTER_API_KEY or GRAN_OPENROUTER_API_KEY
- OPENAI_API_KEY or GRAN_OPENAI_API_KEY
For local Codex runs, the toolkit shells out to codex exec, so it uses whatever local Codex login
state you already have. You can override the command path with GRAN_CODEX_COMMAND or
"codex-command": "..." in config.json.
For approval-phase delivery actions, provide any external endpoints in the same runtime environment, for example:
- SLACK_WEBHOOK_URL
- POST_REVIEW_WEBHOOK_URL
Non-secret defaults can live in config.json:
{
"agent-provider": "openrouter",
"agent-model": "openai/gpt-5-mini",
"agent-harnesses-file": "./agent-harnesses.json",
"agent-timeout": "5m",
"agent-max-retries": 2,
"agent-dry-run": false,
"codex-command": "codex"
}

Harness definitions live in agent-harnesses.json by default under the shared toolkit data
directory. You can also point them at a project-local file:
{
"harnesses": [
{
"id": "customer-call",
"name": "Customer call",
"priority": 50,
"match": {
"folderNames": ["Customers"],
"recurringEventIds": ["abc123"],
"transcriptLoaded": true
},
"provider": "openrouter",
"model": "openai/gpt-5-mini",
"promptFile": "./agents/customer-call/AGENT.md",
"fallbackHarnessIds": ["customer-call-codex"]
},
{
"id": "customer-call-codex",
"name": "Customer call fallback",
"provider": "codex",
"promptFile": "./agents/customer-call/AGENT.md"
}
]
}

Review publishing profiles live in pkm-targets.json under the shared toolkit data directory by default. Override
that with GRAN_PKM_TARGETS_FILE or "pkm-targets-file": "..." in config.json.
Example review publishing profiles:
{
"targets": [
{
"id": "obsidian-team",
"kind": "obsidian",
"outputDir": "~/Vaults/Work",
"folderSubdirectories": true,
"dailyNotesDir": "Daily",
"vaultName": "Work"
},
{
"id": "docs-export",
"kind": "docs-folder",
"outputDir": "./approved-notes",
"frontmatter": false
}
]
}

pkm-sync resolves one of those saved profiles and writes the approved markdown into it with a
stable file name. Obsidian publishing defaults to:
- notes in Meetings
- transcripts in Meeting Transcripts
- Obsidian wikilinks between notes, transcripts, and optional daily notes
- obsidian://open URLs when vaultName is available
docs-folder profiles keep the same idempotent publishing model without Obsidian-specific links,
and can disable frontmatter if you want cleaner plain Markdown files.
Commands
Inspect the configured rules:
gran automation rules
gran automation rules --format json

Inspect recent rule matches produced by sync runs:
gran automation matches
gran automation matches --limit 50

Inspect recent action runs and resolve pending approval items:
gran automation runs
gran automation runs --status pending
gran automation approve <run-id>
gran automation reject <run-id> --note "not relevant"

Inspect generated artefacts, review them, and replay a pipeline:
gran automation artefacts
gran automation artefacts --kind notes --meeting doc-alpha-1111
gran automation approve-artefact <artefact-id>
gran automation reject-artefact <artefact-id> --note "needs tighter action items"
gran automation rerun <artefact-id>

Inspect recovery candidates and trigger explicit recovery flows:
gran automation health
gran automation health --severity error
gran automation recover <issue-id>

How It Works
gran sync and gran sync --watch enrich each durable sync event with the meeting title,
folders, tags, and transcript readiness. The toolkit then flows through three layers:
- sync discovers changes and persists events
- rules decide whether a change is interesting
- actions execute once per matched event/action pair and persist their final run state
That idempotent run log is what keeps automation observable instead of burying side effects inside the sync loop.
Pipeline artefacts also keep their own audit trail. Generated candidates can be edited, approved, rejected, and rerun later, and every step is appended to the artefact history so the review queue can show what changed and why.
When an artefact is approved, the toolkit looks for rule actions whose trigger is approval and
whose sourceActionId matches the pipeline action that created the artefact. Those delivery
actions are then executed exactly once for that artefact, whether the approval came from the review
queue or from approvalMode: "auto".
Processing health sits next to that audit trail. The toolkit detects:
- stale sync state
- meetings that still have no transcript after a grace period
- failed pipeline runs
- meetings whose latest artefact is missing or older than the meeting itself
Each issue gets a durable id, and recovery can either re-run sync, re-run a matching pipeline, or replay the latest artefact pipeline depending on the failure type.
Surface Support
- CLI: inspect rules, matches, runs, generated artefacts, and processing-health issues; recover issues, replay pipelines, resolve pending ask-user runs, and approve or reject generated artefacts
- Web: browse processing-health issues, trigger recovery, review generated artefacts against the current meeting notes, edit them inline, and approve, reject, or rerun them
- TUI: open the automation review overlay with u to recover health issues and approve, reject, or rerun generated artefacts and pending ask-user runs
Notes on Exports
Automation exports run through the same file writers and export-job history as manual exports, but
they do not hijack the active UI view. Meeting-scoped automation exports default to stable
subdirectories under _meetings/<meeting-id>/ when scopedOutput is enabled.
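The scoped export action from the rule shape earlier on this page, repeated here for reference:

```json
{
  "id": "notes-markdown",
  "kind": "export-notes",
  "outputDir": "./automation/notes",
  "scopedOutput": true
}
```

With scopedOutput enabled, the notes land under ./automation/notes/_meetings/&lt;meeting-id&gt;/ rather than directly in the output directory.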