Every automation project begins the same way. Someone identifies a repetitive task, picks a tool — n8n, Zapier, Make, a cron job and a prompt — and a week later the workflow is live. It runs. It saves time. Everyone agrees it's a win.
Then, ninety days later, nobody is using it.
The pattern
You can predict the collapse. Look at the codebase, the runbook, the Slack channel where the workflow posts its results. The decline follows the same script:
- Weeks 1–4. The workflow is new. Everyone watches it. Small tweaks get made. Edge cases get handled.
- Weeks 5–8. The tweaks stop. The workflow mostly works. People stop looking at the output.
- Weeks 9–12. An edge case appears that nobody notices. A small upstream change (an API update, a label rename, a person leaving) silently changes the shape of the data. The workflow keeps running, but what it's doing stops matching what it's supposed to do.
- Day 90. Someone finally checks. The workflow has been producing garbage for three weeks. It gets turned off. Six months later, someone proposes a new automation for the same task.
This isn't a tooling problem. It's a missing-layer problem.
Mass — what your system has learned
The first missing layer is mass. Think of it as accumulated intelligence. Every cycle a workflow runs, it should be recording what happened — not just for audit, but so it can be consulted on the next cycle.
- Which subject lines got replied to, and how quickly.
- Which lead sources converted, and at what stage.
- Which cases the workflow flagged, and whether a human agreed with the flag.
- What time of day the job ran fastest.
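In practice this can be as small as an append-only log. Here is a minimal sketch in Python, assuming a JSONL learnings file; the filename and every field name are illustrative, not part of any particular tool:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LEARNINGS = Path("learnings.jsonl")  # one JSON record per cycle, append-only

def record_cycle(outcome: dict) -> None:
    """Append what happened this cycle to the learnings file."""
    entry = {"ts": datetime.now(timezone.utc).isoformat(), **outcome}
    with LEARNINGS.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def load_learnings() -> list[dict]:
    """Read every past cycle so the next run can consult them."""
    if not LEARNINGS.exists():
        return []
    with LEARNINGS.open() as f:
        return [json.loads(line) for line in f]

# After each run, write down what happened. Field names are illustrative:
record_cycle({
    "subject_line": "Quick question about your stack",
    "replied": True,
    "reply_hours": 3.2,
})
```

JSONL keeps the write cheap (one line per cycle) and the read trivial, which matters because the file only earns its keep if the next run actually loads it.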
Without a learnings file, every cycle starts from zero. The workflow executes, produces output, forgets. Month three looks exactly like month one. There is no curve.
A system with mass gets better. Not because the code is cleverer — because the system has a memory it can reference. The same workflow, pointed at the same learnings file, produces sharper output in month six than it did in month one. That's the shape of a compounding system.
Accretion — the measurement loop
The second missing layer is accretion. If mass is the memory, accretion is the review.
Accretion is the loop that asks: is the system actually getting better? Every thirty days, you look at the learnings file and you ask two questions:
- What did the workflow learn this month that it didn't know last month?
- What rule, prompt, or heuristic should now be updated to reflect that?
Without accretion, mass accumulates but never gets read. The system has a memory it never consults. Results plateau. Nobody can explain why.
With accretion, the workflow improves every month without you spending more time on it. The setup cost is fixed. The value compounds.
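What the monthly read can look like, again as a hedged sketch against the same hypothetical learnings.jsonl: the code only answers the first question; the second (which rule or prompt to update) stays a human decision.

```python
import json
from collections import Counter
from datetime import datetime, timedelta, timezone
from pathlib import Path

def monthly_review(path: Path = Path("learnings.jsonl")) -> dict:
    """Summarize the last thirty days of cycles for human review."""
    entries = [json.loads(line) for line in path.open()] if path.exists() else []
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)
    recent = [e for e in entries if datetime.fromisoformat(e["ts"]) >= cutoff]
    replied = [e for e in recent if e.get("replied")]
    return {
        "cycles": len(recent),  # did it run at all?
        "reply_rate": len(replied) / len(recent) if recent else 0.0,
        # Which subject lines actually earned replies this month:
        "top_subjects": Counter(
            e["subject_line"] for e in replied if "subject_line" in e
        ).most_common(3),
    }

print(monthly_review())
# The second question is yours: fold the winner back into the prompt,
# the template, or the routing rule, then let next month's data tell
# you whether the change held up.
```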
The diagnostic
So: your automation stopped working after ninety days. Ask two questions:
- Does this workflow have a place where it writes down what it has learned? If not, it's missing mass. Every cycle starts from zero.
- Does someone review what it has learned, on a cadence? If not, it's missing accretion. The memory exists but is never consulted.
The workflow itself is fine. It's the layers around it that decide whether it runs for ninety days, or nine hundred.
What changes when you add the layers
Once mass and accretion are in place, the shape of the project changes. You stop thinking about workflows as one-off installs. You start thinking about them as planets — things that hold an orbit, accumulate mass, and tighten that orbit with each cycle.
That's the frame we use to build. Every planet has five components, and mass and accretion are two of them. When any of the five is missing, the planet doesn't sustain its orbit — and you have a ninety-day automation on your hands.
Build the memory. Read it. The system takes care of the rest.