CadenceGrid
Methodology · 10 February 2026 · 10 min read

The 72-hour IC memo: what the tooling stack has to get right

Cutting a six-week workflow into a three-day one is not a productivity story. It is an architecture story: what the ingestion, orchestration and review layers have to look like for a memo to clear on a compressed clock.

CadenceGrid · Architecture note

The IE report takes six to ten weeks. The financial model and market report sit alongside it for another two to three. An IC memo built from three disconnected artefacts therefore runs roughly eight weeks end-to-end, not counting reconciliation time the analyst absorbs.

Cutting that to 72 hours is not a story about writing faster. It is a story about an architecture where the three artefacts are never separate to begin with. Here is what the layers have to look like.

Layer one: ingestion that keeps its receipts

Every source document (SIS pack, connection register snapshot, AEMO publications, Cornwall or Aurora curves, counterparty filings) lands in a normalised schema with every extracted field linked back to the source passage. Page number, table row, footnote anchor, all first-class fields. If an agent later writes 'the project's SCR at POC is 2.8' the sentence carries a pointer to the page and table it came from.

Without that, the downstream review layer has no surface to audit. Reviewers end up re-reading source material to verify claims, which is the exact six-week-consulting pattern the stack is meant to replace.

Layer two: orchestration that resumes cleanly

A memo is a graph of tasks. Ingest the SIS. Extract the grid-code compliance table. Score the red flags. Pull the revenue curve. Run the dispatch optimiser. Draft the revenue section. Reconcile against the counterparty section. Assemble. Render.

Every step checkpoints. Every step has typed inputs and typed outputs. If the dispatch optimiser fails mid-run because a curve service timed out, the orchestrator resumes the optimiser without re-running ingestion. If a reviewer rejects a section, the orchestrator re-runs only that section with the feedback threaded into the prompt.
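A checkpoint-and-resume step runner can be sketched in a few lines. This is an illustration under stated assumptions, not the production orchestrator; the step functions and checkpoint location are hypothetical stand-ins:

```python
import json
import pathlib
import tempfile

# Hypothetical checkpoint store; a real orchestrator would use durable storage.
CHECKPOINT_DIR = pathlib.Path(tempfile.mkdtemp())

def run_step(name, fn, *inputs, force=False):
    """Run a step, or reuse its checkpointed output if one exists."""
    ckpt = CHECKPOINT_DIR / f"{name}.json"
    if ckpt.exists() and not force:
        return json.loads(ckpt.read_text())
    result = fn(*inputs)  # typed inputs/outputs in a real stack; dicts here
    ckpt.write_text(json.dumps(result))
    return result

# Toy step functions standing in for real pipeline stages.
def ingest_sis():
    return {"scr_at_poc": 2.8}

def extract_compliance(sis):
    return {"scr_ok": sis["scr_at_poc"] >= 2.0}

sis = run_step("ingest_sis", ingest_sis)
compliance = run_step("extract_compliance", extract_compliance, sis)
# If a downstream step (say, the dispatch optimiser) fails, re-running the
# graph resumes from these checkpoints instead of re-running ingestion.
```

The `force=True` path is what a reviewer rejection triggers: one step re-runs with fresh inputs while everything upstream stays cached.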

The alternative (prompting a single large model with the full project context and hoping for the best) produces a draft that is untraceable, unreviewable, and unfit for an IC. It is also a waste of the model's context window.

Layer three: review that the reviewer will actually use

A memo does not clear the 72-hour window if a senior analyst has to manually re-verify every claim against source material. The review surface has to present the draft alongside the source trail, with every quantitative claim hyperlinked to the source passage and every model output hyperlinked to the scenario assumptions.

Rejecting a claim has to route back to the generating agent with structured feedback, not free text. That is the difference between a review loop that converges in 90 minutes and a review loop that does not converge at all.

Layer four: the artefact itself

The PDF the IC reads is the smallest part of the stack. It is a render of the underlying typed memo object. The live dashboard is a second render of the same object. Both are recreated on demand as the underlying data changes.
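One way to picture 'two renders of the same typed object' (a minimal sketch; the memo structure and render targets here are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class MemoSection:
    title: str
    body: str
    citations: list[str] = field(default_factory=list)

@dataclass
class Memo:
    """The typed memo object; PDF and dashboard are both renders of it."""
    project: str
    sections: list[MemoSection]

def render_pdf_outline(memo: Memo) -> str:
    """One render target: a flat outline destined for the PDF pipeline."""
    return "\n".join(f"{i + 1}. {s.title}" for i, s in enumerate(memo.sections))

def render_dashboard(memo: Memo) -> dict:
    """A second render of the same object, for the live dashboard."""
    return {s.title: {"body": s.body, "citations": s.citations}
            for s in memo.sections}

memo = Memo("Project A", [
    MemoSection("Grid connection", "SCR at POC is 2.8", ["sis_pack_v3 p.41"]),
])
```

When the underlying data changes, both renders are regenerated from the object; neither is hand-edited, so they cannot drift apart.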

This matters at reuse time. When a client screens a neighbour project on the same node, the memo for the first project becomes a typed input to the second. The stack does not re-derive what it already knows.

Why the architecture matters more than the model

The question we get most often is 'what model do you use?'. It is the wrong question. Any current-generation model can write a passable revenue paragraph. None of them can produce a memo that holds together across eight typed sections, against real source material, with reviewer-surfaced citations, on a 72-hour clock.

The architecture is the product. The model is a component.

This is the architecture CadenceGrid is being built on. We will set out the details further in the BESS DD Methodology v1 paper, in preparation for Q3 2026.