Chapter 7 | Part 2: Building

Reading Code You Can't Read

You don't need to read the code. You need to understand the build.


You don't audit a contractor's nails. You check whether the structure is sound.

The Wrong Frame

Most non-technical people approach Claude Code's output with a quiet anxiety: I can't read this. How do I know if it's right?

This is the wrong question. You don't need to read the code. You never needed to read the code. What you need to evaluate is behavior — what the tool does when you give it inputs and ask it for outputs.

Michael Truell, CEO of Cursor (the AI coding tool valued at $29 billion), identified the failure mode precisely: "If you close your eyes and you don't look at the code and you have AIs build things with shaky foundations as you add another floor, and another floor, and another floor, things start to kind of crumble."

The phrase "close your eyes" is key. He's not saying you need to read code. He's saying you need to look. Look at the behavior. Look at what breaks. Look at what it produces when the input is wrong.

That's something any CEO can do.

The Three Questions

Before accepting any build, ask these three questions. Not to the code — to yourself, by testing the tool:

1. Does it do what I described?

Run it with a real example. Give it the actual input. Check the actual output. Does it match what you asked for?

This is not about the code. It's about the contract: you described behavior in the project brief, and Claude Code produced something. Do they match?

If yes, move to question 2. If no, describe the specific gap. "The output has 8 columns but I only need 4" is fixable in minutes.

2. Does it break when I give it bad input?

Every tool eventually receives input that's wrong, missing, or unexpected. What happens when yours does?

Test it:

  • Give it an empty file
  • Give it a file with a missing column
  • Give it a file with a row that has no data
  • Give it a date in the wrong format

A fragile build will crash or produce wrong output silently. A sound build either handles the edge case gracefully or tells you clearly what went wrong.

If your tool crashes or silently produces wrong output on bad input, it will break in production at the worst time. Ask Claude Code to handle the edge cases you tested.
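To make "handles the edge case gracefully" concrete, here is a minimal Python sketch. The tool, its function name, and the `amount` column are invented for illustration; what matters is the shape of the behavior, not this specific code.

```python
import csv
import io

def total_amount(csv_text):
    """Sum the 'amount' column; fail loudly instead of silently on bad input."""
    reader = csv.DictReader(io.StringIO(csv_text))
    if reader.fieldnames is None:
        raise ValueError("Input file is empty")
    if "amount" not in reader.fieldnames:
        raise ValueError("Missing required column: amount")
    total = 0.0
    for row in reader:
        value = (row.get("amount") or "").strip()
        if not value:
            continue  # a row with no data is skipped, not a crash
        total += float(value)
    return total

print(total_amount("amount\n10\n20"))             # normal input: 30.0
try:
    total_amount("")                              # empty file
except ValueError as error:
    print(error)                                  # Input file is empty
try:
    total_amount("date,client\n2024-01-01,Acme")  # missing column
except ValueError as error:
    print(error)                                  # Missing required column: amount
```

A sound build reads like this: every bad input produces a clear message instead of a stack trace or a silently wrong number.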

3. Would I trust this if someone else used it?

This is the "team use" test. If you weren't the one running it — if your assistant ran it, or it ran on a schedule — would you trust the output without checking it?

If the answer is no, the tool isn't ready to run unsupervised. That's fine for a personal tool. But you need to be honest about which category you're in.

How to Ask Claude Code to Explain Itself

You don't have to understand code to ask intelligent questions about what was built. Some prompts that work:

  • "Explain what this does in plain English, step by step, as if I have no technical background."
  • "What happens if the input file is empty?"
  • "What happens if a column is missing?"
  • "Is there anything in this that could cause data to be deleted? What?"
  • "What assumptions did you make that I haven't explicitly stated?"
  • "What would a developer be worried about looking at this?"

This last question is powerful. Claude Code will tell you the technical risks you didn't ask about. It's not perfect, but it's better than not asking.

The Session Memory Problem

One of the most important things Stockton identifies for non-technical users: "If it stays only in the conversation, it's lost when the session ends."

What this means in practice: the code Claude writes is saved in files. The conversation about it is not. When you start a new session, Claude Code starts fresh.

This creates a specific risk: a tool that works in the session but can't be debugged later because the context is gone.

Three habits that prevent this:

1. Keep a README.md in every project folder. Have Claude Code write one after each major build. It should explain: what this tool does, what inputs it expects, what outputs it produces, and how to run it.

2. Test before closing the session. Run the tool with a real example before you end the session. Bugs found during the session can be fixed with context. Bugs found three days later start from scratch.

3. Keep the project brief. The brief you wrote in Chapter 3 is the specification. Keep it in the project folder. If you ever need to rebuild the tool, start from the brief.
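As a sketch, a README.md covering those four points might look like this (the tool and file names are invented):

```markdown
# invoice-summary

**What it does:** Reads a month of invoice data and produces a
one-page summary per client.

**Inputs:** `invoices.csv` with columns `date`, `client`, `amount`.

**Outputs:** `summary.csv`, one row per client with totals.

**How to run:** Open a terminal in this folder and run
`python summarize.py`.
```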

Security Basics for Non-Technical Builders

You don't need to become a security expert. You need to know three things:

Never hardcode credentials. If a tool needs a password, API key, or token, it should read it from an environment variable or a file that isn't inside the project — not have it typed directly into the code. Ask Claude Code to handle credentials this way and it will.
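The difference looks like this in Python. `REPORT_API_KEY` is an invented name; the pattern, not the name, is what matters:

```python
import os

# Bad: the secret lives inside the code itself.
# API_KEY = "sk-live-abc123"

# Good: the secret lives outside the project, in an environment variable.
def get_api_key():
    """Read the key from the environment; fail loudly if it's missing."""
    key = os.environ.get("REPORT_API_KEY")
    if not key:
        raise RuntimeError("Set the REPORT_API_KEY environment variable before running.")
    return key
```

Anyone who sees the first version sees the key. The second version keeps the secret out of the file, so the code can be shared, backed up, or rebuilt without leaking it.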

Never expose this to the internet without a developer review. Research from Veracode in 2025 found that roughly 45% of AI-generated code contains security flaws, with cross-site scripting errors appearing in 86% of cases. This doesn't mean your personal tools are broken — it means anything that takes input from users outside your control needs a security review before going live.

Never build authentication or access control yourself. If you need a tool where users log in with passwords, or where some people can see things and others can't, call a developer. This is the boundary where Claude Code becomes dangerous, not just fragile. Chapter 9 covers this in detail.

What "It Runs" vs. "It's Right" Actually Means

A common failure mode: the tool runs without errors and produces output — but the output is wrong.

Examples of tools that "run" but aren't "right":

  • A summary that silently drops rows where a field is empty
  • A date calculation that's off by one because of timezone handling
  • A percentage that looks right but uses the wrong denominator
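The third bullet can be sketched in a few lines of Python. Both versions run without errors; the data and field names are invented:

```python
orders = [
    {"status": "shipped"},
    {"status": "shipped"},
    {"status": "cancelled"},
    {"status": ""},  # no status recorded
]

shipped = sum(1 for order in orders if order["status"] == "shipped")

# Wrong denominator: quietly excludes the order with no status.
wrong = shipped / len([o for o in orders if o["status"]]) * 100
# Right denominator: every order counts.
right = shipped / len(orders) * 100

print(round(wrong, 1))  # 66.7, which looks plausible
print(round(right, 1))  # 50.0, the actual share of all orders
```

Nothing crashes, nothing warns. Only someone who knows the business can see that 66.7% is the wrong number.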

Claude Code is very good at writing code that runs. It is less reliable at catching all the cases where the output is subtly wrong. This is your job — not because you can read the code, but because you know what the answer should be.

Before trusting a tool, run it against a case where you know the correct answer. Verify the output is what you'd get if you'd done it by hand.
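That check can be as small as one assertion. Here `monthly_total` stands in for whatever tool Claude Code built, and the numbers are invented:

```python
def monthly_total(rows):
    """Stand-in for the tool under test."""
    return sum(row["amount"] for row in rows)

# A case small enough to total by hand: 120 + 80 + 45.50 = 245.50
known_case = [{"amount": 120.0}, {"amount": 80.0}, {"amount": 45.5}]

assert monthly_total(known_case) == 245.5, "Output doesn't match the hand-checked answer"
print("Matches the hand-checked answer")
```

If the tool disagrees with a number you computed yourself, you have found a bug before it found you.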

Spotting Overengineering

Sometimes Claude Code builds something more complex than you need. Signs:

  • The output has more files than you asked for
  • It's asking for dependencies or installations you don't recognize
  • The brief says "simple CSV output" and there's a web server involved
  • Running it takes 30 seconds when there's no obvious reason it should

If you see this, ask: "Is this more complex than it needs to be? I only need X. Can we reduce it to just that?" Claude Code will usually simplify on request.

Complexity is fragility. A simple tool that does one thing is easier to debug, maintain, and rebuild than a complex tool that does five things.



Ormus — Diego Bodart