The Unexpected AI Workflow: Repackaging Research for the Right Audience
A Workflow We Did Not Anticipate
We built Marvin Labs to help analysts research faster. The focus was on ingestion, extraction, and analysis: turning filings, earnings calls, and press releases into material insights at speed.
But a use case has emerged that we did not design for. Analysts are using AI not just to produce research, but to repackage it after the research is done.
The pattern looks like this: an analyst completes a detailed piece of work, whether from our platform or their own notes, and then uses AI to convert that output into the right format for a specific audience. A 10-page sector deep dive becomes five bullet points for a client email. A quarterly earnings summary becomes talking points for the internal morning meeting. English-language research notes become Mandarin summaries for ultra-high-net-worth clients in Asia.
This is not research creation. It is research communication. And it turns out to be one of the most time-consuming parts of an analyst's day.
The Instruction Layer
What makes this workflow effective is not the AI model itself. It is what analysts build on top of it: a persistent instruction layer.
Analysts define conversion rules in plain language. These rules specify:
- Target audience: Portfolio managers, retail clients, institutional investors, internal teams
- Format: Bullet points, narrative paragraphs, slide-ready summaries, email drafts
- Level of financial sophistication: Technical jargon for PMs, accessible language for retail
- Language: English to Mandarin, Japanese, German, or any language the client base requires
- Length constraints: 200-word email summaries, single-page briefs, three-slide decks
These rules are created once and applied repeatedly across documents. The analyst writes the instruction set, tests it on a few outputs, refines it, and then runs it as a repeatable conversion pipeline.
This is the key difference between ad hoc prompting and a genuine workflow. The instruction layer turns a one-off AI interaction into a persistent, reusable process that maintains quality and consistency across outputs.
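As a rough sketch, an instruction set like the one described above can be captured as a small, reusable structure that combines stored conversion rules with each new research note. Everything here is illustrative (the `InstructionSet` fields and `build_prompt` helper are assumptions, not a real Marvin Labs API), and the actual model call is left out:

```python
from dataclasses import dataclass

@dataclass
class InstructionSet:
    """A persistent, reusable set of conversion rules (hypothetical structure)."""
    name: str
    audience: str          # e.g. "portfolio managers", "retail clients"
    output_format: str     # e.g. "five bullet points", "email draft"
    sophistication: str    # e.g. "technical", "accessible"
    language: str          # e.g. "English", "Mandarin"
    max_words: int

    def build_prompt(self, research_note: str) -> str:
        """Combine the stored rules with a research note into one conversion prompt."""
        return (
            f"Convert the research note below for {self.audience}.\n"
            f"Format: {self.output_format}. Tone: {self.sophistication}.\n"
            f"Output language: {self.language}. "
            f"Maximum length: {self.max_words} words.\n\n"
            f"{research_note}"
        )

# Written once, tested, refined, then reused across every note the analyst produces.
CLIENT_EMAIL = InstructionSet(
    name="client-email",
    audience="retail clients",
    output_format="five bullet points followed by a one-line takeaway",
    sophistication="accessible, no jargon",
    language="English",
    max_words=200,
)

prompt = CLIENT_EMAIL.build_prompt("Q3 revenue beat consensus by 4%...")
```

The point of the structure is that the rules live outside any single session: the same `CLIENT_EMAIL` object converts every quarterly note the same way, which is what makes the output consistent across documents.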
Research Repackaging in Practice: Global Equity Team
A mid-size asset manager covers 55 companies across the US, Europe, and Asia. The research team produces detailed quarterly notes in English. Before adopting an instruction-layer workflow:
- Analysts spent 30 to 45 minutes per company reformatting notes into client-facing summaries
- A separate process translated key research into Mandarin and Japanese for APAC clients
- Morning meeting talking points were manually extracted from longer reports
After building instruction sets for each output type:
- Client-facing summaries are generated in under 5 minutes per company, then reviewed by the analyst
- Multilingual outputs are produced simultaneously with the English version
- Morning meeting talking points are extracted automatically with consistent structure
Total time on repackaging dropped from roughly 25 hours per week during earnings season to under 5 hours. The quality of output formatting improved because the instruction sets enforce consistency that manual reformatting often lacks.
Why This Matters More Than It Looks
On the surface, reformatting research does not feel like high-value work. That is precisely the problem. It occupies a large share of analyst time without being recognized as a productivity bottleneck.
Consider the math. A typical sell-side analyst covers 15 to 20 companies. Each earnings cycle generates a detailed note. That note then needs to be adapted for:
- Internal distribution: Morning meeting summaries, PM-facing highlights, sector overviews
- Client communication: Tailored emails for different client segments, varying in detail and technical depth
- Cross-border distribution: Translations for non-English-speaking clients or regional offices
- Marketing and business development: Simplified versions for prospect outreach or conference materials
A single research note can produce four to six derivative outputs. Multiply that by 15 companies per quarter and the repackaging burden becomes substantial. Analysts using this workflow report reductions of 80% or more in the time spent on formatting and conversion work, depending on complexity.
The Instruction Layer vs. Prompt Engineering
There is an important distinction between what analysts are doing here and generic prompt engineering.
Prompt engineering is ad hoc. The analyst writes a new instruction each time and hopes the output matches what they need. The instruction layer is different. It is a curated, tested set of conversion rules that persist across sessions and documents. It functions more like a style guide or a template system than a one-off query.
This matters because the outputs are client-facing. Inconsistent formatting, missed context, or wrong tone can erode trust. The instruction layer gives analysts control over the conversion process while still benefiting from the speed of AI execution.
Quick Start: Build Your Own Instruction Layer
- Identify your recurring outputs: List every format you regularly produce from a single research note (client emails, morning briefs, PM summaries, translations)
- Write conversion rules for one output type: Start with the format you produce most often. Specify audience, tone, length, what to include, and what to omit
- Test on three recent notes: Run the instruction set against recent work and compare to what you produced manually
- Refine and expand: Adjust the rules based on output quality, then repeat for the next output type
- Maintain a library: Store instruction sets by output type. Update them quarterly as client needs or internal processes evolve
The goal is not perfection on the first pass. It is a reviewed draft that takes 2 minutes to produce and 3 minutes to refine, instead of 30 minutes to write from scratch.
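To make the "maintain a library" step concrete, here is a minimal sketch of instruction sets stored by output type, with one note run through several of them at once. All names and rules are illustrative, and the model call is stubbed out as a comment:

```python
# A minimal sketch of an instruction-set library, keyed by output type.
# Rules and output types are illustrative, not a prescribed taxonomy.

INSTRUCTION_LIBRARY = {
    "morning-brief": (
        "Extract three talking points for the internal morning meeting. "
        "Technical language is fine. Maximum 100 words."
    ),
    "client-email": (
        "Summarize as five bullet points for retail clients. "
        "Plain language, no jargon. Maximum 200 words."
    ),
    "apac-summary": (
        "Summarize in Mandarin for ultra-high-net-worth clients. "
        "Formal register. Maximum 150 words."
    ),
}

def repackage(note: str, output_types: list[str]) -> dict[str, str]:
    """Run one research note through several instruction sets in one pass."""
    drafts = {}
    for output_type in output_types:
        rules = INSTRUCTION_LIBRARY[output_type]
        prompt = f"{rules}\n\n---\n\n{note}"
        # In practice the prompt would go to a model for conversion,
        # e.g. drafts[output_type] = call_llm(prompt); here we keep the prompt.
        drafts[output_type] = prompt
    return drafts

drafts = repackage(
    "Q3 note: margins expanded 120bps...",
    ["morning-brief", "client-email"],
)
```

Updating the library quarterly then means editing a few strings, not rewriting prompts from scratch each time client needs change.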
The Broader Implication
So much of an analyst's time is not spent creating insights. It is spent communicating them to the right people in the right format. Research departments have long recognized this implicitly. Junior analysts spend hours reformatting senior analysts' work. Teams maintain template libraries. Translation services add days to cross-border distribution.
AI does not change the insight. It changes the cost of distributing that insight across audiences, formats, and languages. When the translation layer between research and communication is handled by AI, analysts can redirect that time toward the work that actually differentiates them: analysis, judgment, and conviction.
This is not a replacement for the analyst. It is the removal of a bottleneck that has been accepted as unavoidable for decades.
What We Are Learning
We did not build Marvin Labs for this workflow. But it makes sense in hindsight. The same capabilities that let AI extract material insights from earnings calls (understanding financial context, maintaining accuracy, handling nuance) also make it effective at converting research into audience-appropriate formats.
The analysts who adopt this workflow fastest tend to share a few characteristics:
- They already produce research for multiple audiences
- They cover global names where cross-language communication is routine
- They work in teams where consistent formatting matters for internal consumption
For firms evaluating AI tools, this is worth considering. The value of an AI platform is not limited to the research it helps create. It extends to how efficiently that research reaches the people who need it.
If you are using AI to repackage your research for different audiences, we would like to hear what formats you are producing and what instruction patterns are working. Reach out through our platform or connect with us on LinkedIn.

Alex is the co-founder and CEO of Marvin Labs. Prior to that, he spent five years in credit structuring and investments at Credit Suisse. He also spent six years as co-founder and CTO at TNX Logistics, which exited via a trade sale. In addition, Alex spent three years in special-situation investments at SIG-i Capital.
