Chapter 1 | Part 1: Foundation

The Stack

A clear map of the AI tools worth using and the mental model for choosing which to reach for.


Before the first tool, you need the map.

Why This Matters

The AI tools available to executives in 2026 are not a single category. They do different things, require different inputs, produce different outputs, and fail in different ways.

The executive who treats them as interchangeable — reaching for ChatGPT when they need a database query, or asking Claude to take their board meeting notes — will get mediocre results from everything. The executive who knows what each tool is for gets excellent results from each, quickly, without the experiments.

This chapter is the map.

The Five Categories

1. Builders

Tools that create software: scripts, dashboards, automations, prototypes, internal tools.

These tools take your description of a problem and produce working code. They run in a terminal (or an IDE that acts like one) and interact with your file system. They are the reason a non-technical executive can build a tool that does exactly what they need — not a generic version sold by a SaaS company, but the specific thing they actually want.

The tools:

Claude Code — Anthropic's command-line agent. You install it, open a terminal, describe what you need. It reads your files, writes code, runs it, fixes what breaks, and reports back. Best for: building tools that connect to your existing data, personal scripts, prototypes you want to understand. Requires a terminal. Runs on Mac, Linux, or Windows with Git Bash.

Cursor — An AI-native code editor. If the terminal feels like too much of a barrier, Cursor is the middle ground: it looks like a standard coding application, but AI is built into every layer. Sebastian Siemiatkowski (CEO, Klarna) uses Cursor to prototype ideas before involving his engineering team. Best for: executives who want a visual interface with AI power underneath.

Gemini CLI — Google's command-line agent, equivalent to Claude Code. Works best if your organization is deeply in the Google ecosystem (Google Drive, Docs, Sheets). Best for: teams where the data lives in Google Workspace.

Codex CLI — OpenAI's command-line agent, equivalent to Claude Code and Gemini CLI. Runs OpenAI's reasoning and coding models in the terminal. Best for: organizations already standardized on OpenAI's API or teams that want to avoid Anthropic's platform specifically.

Replit / Lovable / v0 — Browser-based building tools that require no installation. The lowest barrier to entry. Best for: complete beginners or quick one-off prototypes where you don't need the result to last. Less powerful than the terminal-based tools for serious use.

When to use a Builder: You have a specific, small problem. You need a tool that doesn't exist anywhere else because nobody else has your exact requirements. You're willing to stay present for a 2-4 hour session and actually understand what gets built.

When not to: You need something that scales, that customers depend on, or that touches production data. Hand that to your engineering team.

2. Researchers

Tools that find information, synthesize documents, and answer specific questions with sourced responses.

These are not search engines. They don't return a list of links. They read the sources, combine what they find, and give you a direct answer — with citations you can check. They are faster than a Google search for anything requiring synthesis, and more reliable than asking a general-purpose AI to recall facts from its training data.

The tools:

Perplexity — The primary research tool for executives. Ask a question, get a sourced answer. Handles competitive intelligence, market research, news synthesis, regulatory questions. Unlike ChatGPT or Claude in chat mode, Perplexity cites every claim. Best for: any research question where you need to know where the answer came from. Free tier is sufficient for most use.

Claude / ChatGPT Deep Research — Both offer a "deep research" mode that runs a multi-step search across dozens of sources, synthesizes findings, and returns a structured report. Takes several minutes, not seconds. Best for: substantial research tasks where you want depth over speed — preparing for a board meeting, understanding a new industry, evaluating an acquisition target.

NotebookLM — Google's tool for working with your own documents. Upload a PDF, a contract, a long report — ask questions against it. Unlike uploading to a general AI chat, NotebookLM keeps the document as the source of truth. Best for: extracting information from large documents you don't have time to read fully, or comparing multiple documents against each other.

When to use a Researcher: You need to understand something outside your existing knowledge. Competitive landscape, regulatory changes, market sizing, technology assessment. Anything you'd currently ask an analyst or an EA to research.

When not to: The question requires institutional knowledge your organization holds internally — relationships, history, context. No AI researcher knows what your sales team knows about a specific client. Don't substitute AI research for human intelligence on things where the human has access you don't.

3. Writers

Tools that draft, refine, and restructure text.

This is where most executives start, and where the most confusion lives. AI writing tools are not ghostwriters. They don't replace your voice. They handle the structural work — organizing, expanding, tightening — so your time goes to the judgment that only you can apply.

The tools:

Claude / ChatGPT in conversation — Paste your notes. Describe the document you need. Get a first draft. The draft will be structurally correct and probably too long. Your job is to cut it to the version that sounds like you. Best for: proposals, reports, client updates, board memos — anything where the structure is the hard part.

Claude with long context — Claude can hold very large documents in context (on the order of 200,000 tokens). This makes it the right tool for tasks like: reviewing a 150-page contract and summarizing what you need to know, comparing two proposals against a set of criteria, or editing a long document for consistency.

Notion AI / Docs AI — If your team works in Notion, the built-in AI is useful for in-context drafting and editing. Doesn't require switching tools. Best for: smaller writing tasks within an existing workflow.

When to use a Writer: You have something to say but not the time to say it well. Your notes are clear, your argument is formed — you need the structure. Use AI to build the skeleton. You fill in the judgment.

When not to: The document is the relationship. A personal note to a key client, a difficult conversation with a direct report, a message that needs to carry exactly your register — these cannot be AI-drafted without losing the thing that makes them work. You know which emails those are. Write them yourself.

4. Listeners

Tools that capture, transcribe, and process spoken information.

Meetings are expensive. They produce decisions, action items, and institutional knowledge — most of which evaporates within 48 hours because nobody captured it accurately. AI meeting tools solve this without disrupting the meeting itself.

The tools:

Fathom — Free, installs as a browser extension, joins your video calls, transcribes and summarizes. The summary arrives before the call ends. Best starting point for most executives because the setup is ten minutes and the cost is zero.

Otter.ai — More powerful transcription, with a focus on in-person meetings (via phone) as well as video. Better for teams who hold physical meetings that need to be captured.

Fireflies.ai — The enterprise tier of this category. CRM integration, search across all past meetings, team-wide deployment. Best for: organizations where meeting intelligence needs to flow into sales or client management systems.

When to use a Listener: Every recurring meeting where action items matter. Every client call. Every interview. The setup cost (ten minutes) pays back within the first meeting.

When not to: Conversations where the other party's candor depends on there being no record. Some conversations should remain off the record. Use judgment about which. Always inform participants that a recording tool is running.

5. Local / Private Systems

AI that runs entirely on your hardware. No cloud. No API. No third party ever sees the data.

The first four categories all require sending your text to a company's servers — Anthropic, OpenAI, Google, Perplexity. For most work, this is fine. For some work, it isn't.

Legal strategy documents. M&A discussions. Board materials. Personnel decisions. Competitive intelligence you haven't disclosed. Anything where the value is in the information itself and the risk is that information leaving your control.

Local AI systems run the model on your own machine. The data never leaves. There is no API to get breached, no terms of service to change, no company that can be pressured to hand over your queries.

The tools:

Ollama — The core runtime. Installs in minutes, downloads open-source models from a catalog, runs them locally via a simple command. Everything else in this category is built on top of it (or something equivalent). This is where you start.

LM Studio — A standalone desktop application that puts a clean interface on local models, no terminal required. You pick a model from a catalog, download it, and chat. The easiest entry point for non-technical executives. Windows, Mac, Linux.

Open WebUI — A self-hosted web interface that connects to Ollama. Looks and feels like ChatGPT, but the AI is running on your machine. Good for teams where multiple people need access to the local system.

Continue.dev — A VS Code extension that connects to local models for coding assistance. The local equivalent of Cursor — your code never leaves the machine. Best for: developers on your team who need AI assistance on proprietary codebases.

Jan — A fully offline alternative to LM Studio. Heavier on features, runs without any internet connection at all after the initial model download.
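Under the hood, tools like Open WebUI and Continue.dev talk to Ollama through a small HTTP API it serves locally (by default on port 11434). A minimal sketch of the same call from Python — assuming Ollama is running and the named model has been pulled; `ask_local` is an illustrative helper, not part of any tool above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    # One-shot, non-streaming completion payload for Ollama's REST API.
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local(model: str, prompt: str) -> str:
    # Sends the prompt to the model running on your own machine.
    # Nothing here leaves localhost.
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The point of the sketch: "local" is not a metaphor. The request goes to your own machine, which is why a breach of any vendor's servers cannot touch these queries.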

The models worth knowing:

You install Ollama; then you choose which model to run. The main ones:

Model — Who Made It — Best At

  • Llama 3.3 (70B) — Meta — General use, closest to cloud quality
  • Mistral Large — Mistral (France) — European data, multilingual, strong reasoning
  • Qwen 2.5 Coder — Alibaba — Code generation specifically
  • Phi-4 — Microsoft — Runs fast on limited hardware
  • DeepSeek R1 — DeepSeek (China) — Strong reasoning; note: Chinese company, review data practices
  • Gemma 3 — Google — Lightweight, runs on modest hardware

For most executives starting out: Llama 3.1 (8B) is fast on any modern laptop; Llama 3.3 (70B) is significantly more capable but needs either a Mac with 64GB+ RAM or a dedicated NVIDIA GPU.

The hardware reality:

  • MacBook Pro M3/M4 Pro (36–64GB RAM): Runs 7B–30B models well. Good for general use and writing tasks.
  • MacBook Pro M3/M4 Max (96–128GB RAM): Runs 70B models comfortably. Approaches cloud quality.
  • PC with NVIDIA RTX 4090 (24GB VRAM): Runs 70B models fast. Best value for Windows users.
  • Older hardware: Runs smaller models (3B–7B). Useful but noticeably limited on complex reasoning.
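The RAM figures above follow from a back-of-envelope rule: memory for the weights alone is parameter count times bytes per weight. A rough sketch — the 4-bit default is typical for quantized local models, and the result is a floor, since KV cache and activations add roughly 20–50% on top:

```python
def model_memory_gb(params_billion: float, bits_per_weight: int = 4) -> float:
    """Rough RAM/VRAM needed just to hold the weights.

    Assumption: local models are usually run quantized to ~4 bits per
    weight. Runtime overhead is not included, so treat this as a floor.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight / 1e9


print(model_memory_gb(8))   # 8B model at 4-bit: 4.0 GB -> fits any modern laptop
print(model_memory_gb(70))  # 70B model at 4-bit: 35.0 GB -> needs 64GB Mac or big GPU
```

This is why a 70B model wants a 64GB+ Mac or a 24GB-VRAM GPU: 35 GB of weights plus overhead leaves no room on ordinary hardware.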

The honest tradeoff:

Local models at the 7B–13B size are measurably weaker than Claude Sonnet or GPT-4o on complex reasoning tasks. The gap narrows significantly at 70B+. The tradeoff is simple: cloud AI is smarter; local AI is private.

The right answer for most executives is both. Use cloud tools for work that doesn't carry confidentiality risk. Use local for the work that does.

When to use a Local System: The content you're working with cannot leave your machine. Legal proceedings, non-public M&A discussions, board deliberations, personnel matters, anything covered by NDA or regulation. Also: when you want zero marginal cost at high volume, or want independence from any specific company's platform.

When not to: You need the best available reasoning and the content isn't sensitive. Cloud tools are still ahead on raw capability. Don't handicap important work with a weaker model when the privacy tradeoff doesn't apply.

The Agentic Layer

There is a sixth category emerging: tools that don't just answer or produce, but act. They book meetings, file documents, run multi-step workflows, use a computer on your behalf.

Claude Cowork, Anthropic's newest product, is the clearest example. It can manage files, coordinate with Google Workspace, and execute multi-step tasks while you work on other things.

This category is real and growing fast. It is also the least mature. The tools make mistakes in ways that are harder to catch than a wrong sentence in a draft. For now: be aware of it, experiment with low-stakes tasks, and do not put anything irreversible in its hands without a human in the review loop.

This guide covers the five categories above in depth. The agentic layer will get its own treatment as the tools stabilize.

How to Choose

When you have a task, ask two questions:

1. What is the output?

  • A piece of software → Builder
  • A researched answer → Researcher
  • A written document → Writer
  • A record of a conversation → Listener
  • Any of the above, but the content is sensitive → Local / Private System
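The first question reduces to a lookup table. A toy sketch — the labels and the `choose_tool_category` helper are illustrative, not an API from any tool above:

```python
def choose_tool_category(output: str, sensitive: bool) -> str:
    """Map question 1 onto a category.

    Sensitivity overrides everything: if the content cannot leave your
    machine, the answer is local regardless of the output type.
    """
    if sensitive:
        return "Local / Private System"
    categories = {
        "software": "Builder",
        "researched answer": "Researcher",
        "written document": "Writer",
        "conversation record": "Listener",
    }
    return categories.get(output, "unclear — sharpen the task first")


print(choose_tool_category("software", sensitive=False))         # Builder
print(choose_tool_category("written document", sensitive=True))  # Local / Private System
```

Note the ordering: confidentiality is checked before output type, which mirrors how the categories are defined above.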

2. How confident are you in the output?

All five categories produce outputs that require your review. The Researcher's answer might cite a paywalled source you can't check. The Writer's draft might miss the nuance that changes the meaning. The Builder's code might work correctly but do the wrong thing.

Stay present. The tool accelerates the work. The judgment remains yours.

The Setup Investment

Tool — Cost — Setup Time — Technical Barrier

  • Fathom — Free — 10 min — None
  • Perplexity — Free (basic) — 5 min — None
  • Claude / ChatGPT — $20/month — 5 min — None
  • NotebookLM — Free — 5 min — None
  • LM Studio — Free (hardware cost) — 20 min — Low
  • Cursor — $20/month — 20 min — Low
  • Ollama + Open WebUI — Free (hardware cost) — 45 min — Medium
  • Claude Code — $20–100/month — 30 min — Medium
  • Gemini CLI — Free (API key) — 30 min — Medium
  • Codex CLI — Usage-based — 30 min — Medium

Start with the free tools. Get comfortable with Perplexity for research and Fathom for meetings. Then, when you have a specific thing to build, add a Builder. If privacy requirements apply to some of your work, add LM Studio in parallel — it runs alongside your cloud tools, not instead of them.

The mistake is installing everything at once. You'll use none of them well.

Next: What AI Builders Are


Ormus — Diego Bodart