Marvin produces a structured, source-cited draft of everything you would otherwise read first, so the writing starts from a dossier instead of a blank page. Built for the analyst whose deliverable is the note.
Built for
Sector analysts and research associates at broker research divisions, publishing on the companies they cover.
The note is the deliverable
A buy-side analyst who reads a 10-Q in three hours and updates the model in silence has done the job. A sell-side analyst has to do the reading, update the model, write a first-look note, publish it before the peer at the next firm does, and be available when the sales force starts calling. Time spent on reading and extraction before the writing begins is time the note does not get written.
Marvin compresses that phase. It does not write the note. It produces a structured, cited draft of everything you would otherwise read first, so the writing starts from a dossier rather than a blank page. The thesis, the rating context, and the client-facing take stay where they belong: with the analyst.
The reading and extraction layer, compressed
The shape of a first-look note is predictable: numbers vs. consensus, guidance delta, management tone, a couple of drivers to watch. The slow part isn't the writing. It's the reading underneath the writing. A Deep Research Agent produces a structured delta of each release within minutes of the filing, with quote-level citations to the press release, call, and 10-Q. Agents can be pre-configured to draft each release directly in the firm's note template, so the analyst opens a partial draft in house format instead of a blank page.
Initiation reports compress the same way. Two to three weeks of absorption (years of filings, eight or so earnings calls, peer reading) becomes a structured primer that covers business model, segment economics, guidance track record, and management commentary, with citations throughout. The analyst reviews it, adds the thesis, and builds the model. The extraction shortens. The judgment phase doesn't.
Peer comparison notes are where the compression compounds the most. Multiple companies are queried in parallel, producing structured peer tables and interactive charts. Sector notes that pull the same data across ten names compress more than single-name notes, since the work was largely cross-name extraction in the first place.
The headcount question
This is worth being direct about, because analysts ask. AI tools like ours are sometimes pitched to research management as a way to cover the same universe with fewer people. We do not think that is the most honest framing, and it is not how most of the analysts we talk to actually use the product.
The reading and extraction phase compresses. What happens to the recovered time is a desk-level decision, not a product outcome. Three patterns we see:
- Same headcount, more names. Teams under pressure to expand coverage absorb the new names with existing analysts. This tends to be the most defensible use.
- Same headcount, more depth. Teams keep coverage size fixed and redirect reclaimed time toward higher-conviction work: more thesis notes, more client conversations, more custom research on the names already covered.
- Fewer people doing the same work. This happens. We would rather the market make that decision clearly than pretend it does not. If the firm's plan is headcount reduction regardless of tooling, the tooling is not the reason. If the plan is headcount reduction because of tooling, that is a leadership choice, not a product feature.
The product does not decide which of these happens. It compresses the reading. What a team does with the recovered time is up to the team.
Start your free evaluation
Analyze 15 leading companies immediately. No registration, no credit card, no sales call.
What the saved hours actually become
The reading and extraction phase is not where the note's value lives. The value is in the rating call, the variant view, the way the analyst frames the quarter for a portfolio manager who has read three other notes already. Compression on the reading buys back the hours that should have gone to those parts.
The work the saved hours go toward:
- Sharper notes. More time on the rating call, the framing, and the parts that distinguish the note from the consensus reaction.
- More client conversations. The half-hour calls with PMs and buy-side analysts that drive vote-and-pay decisions do not fit a calendar already eaten by extraction work.
- Wider or deeper coverage. Whether the desk wants more names per analyst or more depth on the names already covered, the binding constraint stops being mechanical reading.
Citations carry through to the published note. Every claim in a Marvin output ships with a link to the specific passage in the primary document, so compliance review starts with explicit sourcing rather than reconstructing it after the fact. Whether review ends up faster in practice depends on the firm. The inputs are cleaner.
Built for one analyst, scales when ready
The product is single-player by default. One analyst on one desk gets the full experience, with no dependency on the rest of the firm. No CTO sign-off, no firm-wide license, no procurement cycle. This is the canonical way the product is used, not a preview of a "real" team deployment.
When colleagues want in, they get invited. When the desk wants the broader team on the same coverage, that scales too. Neither is a precondition for the analyst who started.