AI Adoption in Equity Research: Why Bottom-Up Beats Top-Down

8 min read · Alex Hoffmann, Co-Founder and CEO

Most firms are applying the wrong framework to AI adoption in equity research. They're using the playbook that worked for CRMs and ERPs: centralized planning, top-down rollout, change management consultants. The results are predictable: low adoption rates, frustrated analysts, and minimal productivity gains.

The tools work. Earnings call analysis, document monitoring, and information extraction consistently save 40-70% of analyst time. The problem isn't the technology. The problem is treating AI adoption like an enterprise IT project when it is really a matter of individual productivity improvement.

AI for equity research isn't an ERP. It isn't a collaborative system requiring standardization; it's a productivity tool for individual contributors with different workflows. The analyst covering 25 healthcare companies has fundamentally different needs from one covering 40 software companies. The organizations that succeed will embrace bottom-up experimentation instead of top-down deployment.

The Multiplayer vs Single-Player Software Distinction

The last decade of enterprise IT was dominated by multiplayer systems: CRMs, ERPs, back office platforms. These tools only deliver value when everyone uses them consistently. If half your sales team logs calls in Salesforce and half keeps notes in spreadsheets, the system fails. You can't forecast, pipeline visibility disappears, management can't track progress.

[Figure: Multiplayer vs. Single-Player Software Deployments]

This reality drove the familiar top-down implementation pattern: six-month planning cycles, steering committees, change management programs, Big Four consultants. The approach makes sense for multiplayer systems because their value depends on network effects. One person using a CRM differently creates problems for everyone else.

After 30 years of enterprise IT, we've already done the easy projects. What remains are risky, change-heavy integrations that genuinely require heavy governance. The project-management approach isn't overkill for these systems; it's appropriate for genuinely complex work.

AI Is Single-Player Software in Its Early Phase

AI tools for equity research operate differently. An analyst using AI to analyze earnings calls doesn't require their colleague to use the same tool. There's no network effect. No shared database that breaks if people use it inconsistently. The value is individual productivity, not collaborative efficiency.

This changes everything:

  • Workflow fit beats standardization: Tools match individual analyst needs, not centralized IT preferences
  • Failure is cheap: If an analyst abandons a tool after two weeks, nothing breaks
  • Experimentation drives learning: Analysts need freedom to try approaches and share what works

But there's a more fundamental difference: AI for equity research is where traditional IT was 30 years ago. We're doing the easy projects now, which also means we don't have established best practices yet.

The technology is evolving rapidly, with new capabilities emerging every quarter. We have some sense of which workflows may be productivity-enhancing: earnings call analysis (2-4 hours per call), document monitoring (scanning dozens of 8-Ks), company onboarding (days spent reading years of filings). These tasks are high-frequency, time-intensive, and largely mechanical, with minimal integration requirements.

But we've been continually surprised by emergent capabilities in new model generations. What doesn't work today may work in six months. This is exactly when you need experimentation and fast feedback loops, not heavy governance.

Eventually, we'll tackle harder AI projects: integration with portfolio management systems, AI-driven trading strategies, organization-wide knowledge platforms. Those will need steering committees. But right now, the opportunity is capturing dozens of easy wins through experimentation.

The Framework: Three Simple Rules

Treat AI adoption as bottom-up business process improvement, not as an IT project. Three rules:

1. Provide Budget and Time for Experimentation

Give each analyst a reasonable budget for AI tools and explicit permission to spend time experimenting.

The budget should be theirs to allocate. If they want to try three different tools for earnings analysis and two for document search, that's fine. The goal is learning what works, not standardizing on a single platform.

2. Basic Vendor Vetting (If Required)

If your organization requires it, have IT or compliance do a basic vendor check on security and data handling. This should be lightweight. Not tool selection, not feature evaluation. Just a quick pass on whether vendors meet basic security standards.

Some analysts will want guidance on which vendors are approved. If that's the case, create a simple list and update it periodically. But which tool an analyst chooses from that list should be entirely up to them.

3. Forum for Best Practice Sharing

Create a lightweight forum for analysts to share what's working. This could be a monthly lunch session, a Slack channel, or a quarterly presentation series.

The goal isn't standardization. It's knowledge sharing. When the software analyst discovers that Material Summaries cut their earnings analysis time by 70%, other analysts should hear about it. They might adopt the same approach, or they might try something different. When the industrials analyst figures out how to extract CapEx guidance across manufacturing segments in minutes instead of hours, that becomes institutional knowledge available to anyone covering capital-intensive businesses.

This knowledge compounds. The first analyst to automate earnings calls shares their workflow. Three colleagues try it; each discovers slight variations that work better for their sector. They share those refinements. Within six months, the firm has developed sector-specific best practices through distributed experimentation that no centralized team could have designed.

Let adoption spread organically based on demonstrated value, not mandated usage.

4. That's It

This is the entire framework. Budget, knowledge sharing, and basic vendor vetting if needed. Don't overcomplicate it with steering committees, implementation roadmaps, or change management consultants.

This Is Kaizen for Equity Research

This approach isn't new. It's Kaizen, the continuous improvement framework Toyota developed 70 years ago. The core insight: assembly line workers closest to the work were best positioned to identify process improvements. Rather than having engineers design processes from headquarters, Toyota empowered workers to experiment with incremental improvements.

The parallels are direct:

Kaizen in Manufacturing | AI Adoption for Analysts
Assembly line workers closest to the process | Analysts closest to their workflows
Budget and time for experimentation | Annual tool budget per analyst, 2-3 hours/week of initial time
Quality circles for sharing improvements | Monthly forums for best practice sharing
Many small incremental changes | Dozens of workflow improvements, not one big project
Organic spread of what works | Adoption based on demonstrated value

Kaizen worked because you can't design optimal workflows from a conference room. You need practitioners experimenting, learning, and sharing. Toyota didn't mandate specific improvements. They created an environment where improvement was expected, supported, and rewarded. The improvements emerged from the people doing the work.

Same principle here. Don't mandate which AI tools analysts use. Create an environment where experimentation is funded, safe, and shared.

Stop the Hype

Hype: "AI will transform your organization's workflows overnight with a centralized platform rollout."

Reality: AI adoption succeeds through individual experimentation and organic spread. The transformation comes from dozens of small workflow improvements, not one big project. Organizations that mandate AI usage the way they mandated CRM adoption will see 30% usage rates and wonder why productivity gains never materialized.

How to Start

If you manage a team of analysts (Head of Research, VP Research, Team Lead, or similar), here's what you should try:

  1. Allocate budget: Give each analyst a budget for AI tool subscriptions. Make it clear this is theirs to spend on experimentation without the need for immediate ROI justification.

  2. Create a forum: Set up a fortnightly or monthly session, or an internal channel, for analysts. Encourage them to share what they're trying, what works, and what doesn't. Don't make attendance mandatory.

  3. Step back: Let analysts experiment. Resist the urge to create implementation roadmaps or standardize too early. Adoption will happen organically if the tools deliver value.

If your organization requires vendor vetting, have compliance or IT do a basic security check, and set up some ground rules on what's acceptable use. But don't let this become a bottleneck. The goal is experimentation, not approval committees.

If you're an analyst whose organization hasn't provided this framework, see if you can start anyway. Pick a tool with a free tier and test it on one workflow for a week. Track time saved. Share results with your manager. Bottom-up adoption often starts with one person demonstrating value, then spreads through informal networks faster than any implementation plan.
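One way to make "track time saved" concrete is to keep a simple before/after log for the trial week and total it up. Here's a minimal sketch in Python of that tally; every workflow name and duration below is a hypothetical placeholder, not a benchmark, and a spreadsheet works just as well:

```python
# A minimal before/after time log for a one-week AI tool trial.
# All workflows and durations are hypothetical placeholders.

# Each entry: (workflow, baseline minutes without AI, minutes with the AI tool)
trial_log = [
    ("Earnings call analysis (one call)", 180, 45),
    ("Daily 8-K monitoring scan", 60, 20),
    ("CapEx guidance extraction", 90, 30),
]

baseline_total = sum(baseline for _, baseline, _ in trial_log)
ai_total = sum(with_ai for _, _, with_ai in trial_log)
saved = baseline_total - ai_total

print(f"Baseline: {baseline_total} min | With AI: {ai_total} min")
print(f"Saved: {saved} min ({saved / baseline_total:.0%} of baseline)")
```

Even a rough tally like this turns "the tool feels faster" into a number a Head of Research can act on.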

Why This Creates Competitive Advantage

The firms that lead in AI adoption won't be those with sophisticated implementation plans. They'll be those that let analysts experiment, share what works, and continuously improve.

The advantage compounds. Analysts at firms using bottom-up adoption discover effective workflows months or years ahead of peers trapped in centralized rollout cycles. They expand coverage 15-20% while competitors debate vendor selection. They build institutional knowledge through hundreds of experiments while others wait for documented best practices.

This isn't hypothetical. Some firms already have analysts processing earnings calls in 45 minutes instead of 3 hours, expanding coverage without adding headcount. Their competitors struggle with the same capacity constraints they've always had.

The gap will widen as AI capabilities improve. Firms with experimentation cultures continuously adopt new capabilities. Firms with centralized governance wait for steering committees, vendor integration, implementation plans. The time lag between capability emergence and adoption determines competitive positioning.

The choice between bottom-up experimentation and top-down deployment isn't an implementation detail. It's a strategic decision about how fast your organization learns. We're in an early-stage, high-uncertainty, rapid-innovation phase. That means Kaizen, not centralized IT deployment.

by Alex Hoffmann

Alex is the co-founder and CEO of Marvin Labs. Prior to that, he spent five years in credit structuring and investments at Credit Suisse. He also spent six years as co-founder and CTO at TNX Logistics, which exited via a trade sale. In addition, Alex spent three years in special-situation investments at SIG-i Capital.

Get Started

Experience professional-grade AI for equity research, validate insights for yourself, and see how it fits into your workflow.