Chapter 8 | Part 3: Operating

Research and Intelligence

Perplexity, deep research, NotebookLM. Real workflows for the research executives actually need.

8 min read

The most expensive research is the research that takes three weeks and answers the wrong question.

The Research Problem

Running a company requires constant intelligence: what are competitors doing, what does this regulation actually mean, what's happening in the market, what does this 80-page contract say, what do analysts think about this sector.

The traditional path is delegation. You ask an analyst, an EA, or a consultant. The research takes days or weeks. It comes back filtered through their interpretation of what you needed to know. Then you ask follow-up questions. Then it takes another few days.

AI research tools collapse this loop. Not for all questions — but for a specific class of questions, they return a sourced, structured answer in minutes that you can then interrogate directly.

This chapter covers what that class looks like, which tools handle it, and where the limits are.

The Core Mental Model

AI research tools are not search engines. They don't return links — they return answers. The difference matters.

When you search Google for "freight forwarding market size 2025," you get links to pages that might answer your question. You click through, read several pages, cross-reference estimates, decide what to trust.

When you ask Perplexity the same question, it reads those pages, combines the estimates, notes where they disagree, and gives you the answer directly — with citations you can click to verify.

The output is faster. But the verification responsibility doesn't disappear — it shifts. You're no longer doing the reading. You're doing the checking. That's a smaller job, but it's still your job.

The rule: Never use an AI-researched fact in a document, presentation, or decision without clicking at least one source. The AI is nearly always right. The exceptions are expensive.

Tool 1: Perplexity — Your Default Research Layer

Perplexity is where most research questions should start. Free to use, no installation required, and designed specifically for sourced answers rather than conversation.

What it handles well

Competitive intelligence: "What are the main freight forwarding companies in Panama and what services do they offer?" Perplexity will synthesize findings from company websites, directories, and industry publications. You get an overview in 60 seconds that would have taken 30 minutes of individual site visits.

Market sizing: "What is the estimated market size of the US truckload freight brokerage market in 2025?" It will surface analyst estimates, note the range, and cite the reports.

News and developments: "What's happened to Flexport in the last 6 months?" It searches recent news, not just training data. (This is where it beats ChatGPT — Perplexity searches in real time.)

Regulatory and compliance: "What are the FMCSA requirements for freight brokers?" It pulls from official sources and explains them in plain language.

Person and company backgrounds: "Who is Lucas Grizz of Raven Cargo?" It aggregates LinkedIn, press mentions, company bios, and public records.

What it doesn't handle

  • Internal questions: It doesn't know what your team knows, what your clients have told you, or what your competitive moat actually is.
  • Highly recent or obscure topics: If it happened in the last 48 hours or lives in a paywalled database, Perplexity may not have it.
  • Questions requiring judgment: "Should we expand to Panama?" is not a Perplexity question. "What are the logistics infrastructure strengths and challenges of Panama?" is.

Practical workflow

The most effective way to use Perplexity is to ask a series of specific, scoped questions — not one broad one.

Instead of: "Tell me about the logistics industry."

Ask: "What are the primary modes of freight forwarding used for US-to-LatAm shipments?" Then: "Who are the main digital-first freight forwarders in Latin America?" Then: "What are the main compliance requirements for a US company operating as an NVOCC?"

Each answer informs the next question. Thirty minutes of this produces better intelligence than a week-old analyst briefing — for many questions.
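If you'd rather script this sequence of scoped questions than type them into the web UI, Perplexity also offers an API that follows the OpenAI chat-completions format. A minimal sketch, assuming the `sonar` model name and the top-level `citations` field described in Perplexity's current API docs (check both before relying on them):

```python
import json
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"  # OpenAI-compatible endpoint

def build_query(question, model="sonar"):
    """Build the request payload for one scoped research question."""
    return {
        "model": model,  # model name per Perplexity's docs; may change
        "messages": [
            {"role": "system",
             "content": "Answer concisely and cite your sources."},
            {"role": "user", "content": question},
        ],
    }

def ask(question, api_key, model="sonar"):
    """Send one question; return (answer_text, citation_urls)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_query(question, model)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    answer = body["choices"][0]["message"]["content"]
    # "citations" is Perplexity-specific; hedge with .get in case it's absent
    return answer, body.get("citations", [])

# The scoped-question workflow: each answer informs the next prompt.
questions = [
    "What are the primary modes of freight forwarding used for US-to-LatAm shipments?",
    "Who are the main digital-first freight forwarders in Latin America?",
]
```

The design point carries over from the prose: send several narrow questions in sequence rather than one broad one, and keep the returned citation URLs so you can click through and verify before anything lands in a document.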

Tool 2: Deep Research — For Substantial Tasks

Both Claude (with its Research feature) and ChatGPT (with Deep Research mode) offer a more intensive research capability: multi-step, multi-source research that runs over several minutes and returns a structured report.

This is different from a Perplexity answer. It's closer to a research brief.

When to use it

  • Preparing for a major decision (entering a new market, evaluating an acquisition, assessing a technology investment)
  • Understanding an unfamiliar industry before a significant client engagement
  • Getting up to speed quickly on a regulatory environment you haven't operated in

What to expect

A deep research run typically takes 3-10 minutes. The output is a structured document — sections, headings, numbered findings, citations. It reads like a junior analyst wrote it, which is accurate: it's the kind of synthesis a capable but non-expert analyst would produce.

It will not replace a boutique research firm with ten years of industry-specific relationships. It will replace the $500 desk research you'd commission before you knew whether the direction was worth pursuing.

How to prompt it

Be specific about what decision you're trying to inform.

Weak: "Research the freight forwarding industry."

Strong: "I'm a logistics company CEO considering whether to build dedicated US-to-Panama freight capabilities. Research: (1) the volume and growth rate of US-Panama trade lanes, (2) the main competitors already serving this route, (3) the regulatory requirements for a US company operating in Panama as a freight forwarder, (4) the role of Panama's Tocumen Airport and Colon Free Zone as logistics nodes. I need to know whether this is a viable market entry and what the main risks are."

The more context you give about the decision, the more useful the research.

Tool 3: NotebookLM — When the Document Is the Source

Sometimes the question isn't "what does the internet know" — it's "what does this document say."

NotebookLM (Google, free) lets you upload documents and then ask questions against them. It treats the document as the source of truth, not as context to blend with everything else.

What it handles

  • Contracts: Upload a 60-page supplier agreement. Ask: "What are the termination provisions?" "Does this agreement restrict us from working with competitors?" "What are the payment terms?"
  • Reports: Upload an industry analyst report or a lengthy board deck. Ask: "What are the three main risks identified?" "What does this report say about pricing trends?"
  • Research papers or regulations: Upload a regulatory document. Ask: "What are the specific requirements that apply to freight brokers?" "Is there any provision about handling hazardous materials?"
  • Multiple documents: Upload your five main competitor websites or their annual reports. Ask: "What do all of these say about their technology differentiation?" "Which of these companies mentions sustainability?"

What it doesn't replace

NotebookLM is a tool for extraction and synthesis — not for legal interpretation. "What does this clause say" is a NotebookLM question. "Whether this clause is enforceable against us in Illinois" is a lawyer question.

Use it to get oriented fast. Use it to prepare the questions you bring to the expert. Don't use it to make the call the expert exists to make.

Practical workflow

  1. Upload the document(s) — PDFs, Google Docs, or pasted text
  2. Start with a broad orientation: "Summarize the main sections and their key points"
  3. Then ask specific extraction questions
  4. Copy the answers into your working document with a note that the source is the uploaded document

The Research Workflow in Practice

Here is how these tools fit together in a real executive research session:

Scenario: You're meeting a potential client in a new industry — maritime logistics — in three days. You know nothing about the space.

Step 1: Perplexity for orientation (20 minutes)

  • "What is maritime logistics and how does it differ from freight forwarding?"
  • "Who are the major players in maritime logistics in the Americas?"
  • "What are the main challenges and trends in maritime logistics in 2025?"

Step 2: Deep Research for depth (30 minutes, runs in background)

  • Launch a deep research query on the client's specific sub-sector while you do other work
  • "Research the container shipping market serving US Gulf Coast to LatAm ports, including main carriers, rate trends, and competitive dynamics"

Step 3: NotebookLM for their documents (15 minutes)

  • Upload the client's annual report or website content
  • Ask: "What does this company say about their competitive differentiation?" "What problems do they describe?" "What are their stated priorities?"

Result: With roughly an hour of work spread across the three days, you walk into the meeting understanding the industry, having read about their specific situation, with questions that reflect genuine knowledge of their world. The meeting starts at a different level.

What AI Research Cannot Do

Access proprietary databases: Bloomberg terminals, legal case databases, specialized industry databases behind paywalls. If your industry uses these, you still need them.

Talk to people: The most valuable competitive intelligence often comes from conversations — with customers, with people who've worked at competitors, with lawyers who've seen the contracts. AI cannot replace relationship-based intelligence.

Apply your institutional knowledge: You know things about your industry, your clients, and your competitors that no public source captures. AI research gives you external context. You apply your internal context to interpret it.

Be accountable: If the research is wrong and you act on it, the AI does not bear the consequence. You do. This is not a criticism — it's a reminder that the stakes are yours, which means the verification is yours.

A Note on Citing AI Research

When you share AI-researched findings — in a board presentation, a client pitch, an internal memo — cite the primary sources, not the AI tool. Perplexity gives you the citations. Use them.

"According to Perplexity..." is not a citation. "According to the Freightos Baltic Index (October 2025), container spot rates on the Asia-Europe lane were..." is a citation.

The AI found it. You verified it. The source is the source.

Next: Writing and Communication


Ormus — Diego Bodart