Last Wednesday at 2am, J dropped a sprint report in my inbox.

The first version of the hackathon trading agent was live. Strategy framework, backtesting logic, risk control module, position management — all of it running.

I stared at that report for a while.

Not because it was impressive. Because I remembered how long the same workload would have taken a developer four years ago. Writing a backtesting framework from scratch — at least two weeks. Then debugging the signal logic — another week. Documentation? That’s a next-month problem.

J did it in three days.

What AI Actually Compressed

I asked J which parts made the real difference. He gave me five answers — each with a specific process behind it, not just a feeling.

Iteration speed. Every time the strategy logic changed, the old way meant manually tracing which function affected which module. J set up an analyst role using Opus, fed in new signals, and got preliminary backtesting results with problem locations within minutes. Changes that used to take a full day to confirm could be verified in a single forty-minute cycle.

Debugging quality. This one hit closest to home. Debugging used to mean “guess where it broke” — change one thing, run it, guess again. J said when Claude analyzes a traceback, it doesn’t just point to the line — it traces the logic chain and explains why the error happens there. Debugging shifted from scattershot to precision.

Documentation. Every time J finished a module, the docs were ready in sync. That sounds minor, but documentation is exactly what solo developers skip — and two weeks later they can’t read their own code. Now docs travel with the code, no debt accumulating.

Test coverage. For the strategy validation scripts, AI expanded the test cases from 12 to 30, including edge cases J hadn’t thought of — like position behavior during extreme volatility periods. That kind of systematic “you might not have considered this” is where AI earns its keep.
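For flavor, here's the kind of edge case that expansion catches. This is an illustrative sketch, not J's actual code: a volatility-scaled position sizer (all names and numbers are my placeholders), where the interesting tests are the degenerate inputs, like an extreme volatility spike or no volatility data at all.

```python
# Hypothetical position sizer: scale inversely with realized volatility,
# capped at a maximum fraction of capital. Not J's code.

def position_size(capital: float, vol: float, target_vol: float = 0.02,
                  max_fraction: float = 0.25) -> float:
    """Return the dollar position for one trade.

    capital: account equity
    vol: realized volatility of the instrument (e.g. daily stdev of returns)
    target_vol: volatility the position should contribute
    max_fraction: hard cap on capital per position
    """
    if vol <= 0:
        # Degenerate input: no (or broken) volatility data, take no position.
        return 0.0
    fraction = min(target_vol / vol, max_fraction)
    return capital * fraction
```

The original 12 tests would likely cover the normal regime; the expanded set is where `vol <= 0`, a volatility spike shrinking the position toward zero, and the cap binding all get exercised.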

Refactoring speed. The architecture changed three times. The first version mixed the signal layer and execution layer together. The second version separated them but had interface design issues. The third version was right. Refactoring used to be a source of dread — you couldn’t easily map what you changed to what it affected. With AI alongside, J said that fear was cut in half, and impact assessment took minutes instead of hours.
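To make the third version's split concrete, here is a minimal sketch of what "the signal layer knows nothing about order placement" can look like. The names and interfaces here are my reconstruction, not J's architecture:

```python
# Hypothetical layering: signals and execution meet only at a small
# Signal value type, so either side can be swapped or tested alone.

from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class Signal:
    symbol: str
    direction: int  # +1 long, -1 short, 0 flat


class SignalSource(Protocol):
    def generate(self, bar: dict) -> Signal: ...


class ExecutionLayer(Protocol):
    def execute(self, signal: Signal) -> None: ...


class MomentumSignal:
    """Toy signal: long if price rose over the lookback window, else short."""

    def __init__(self, lookback: int = 20):
        self.prices: list[float] = []
        self.lookback = lookback

    def generate(self, bar: dict) -> Signal:
        self.prices.append(bar["close"])
        if len(self.prices) <= self.lookback:
            return Signal(bar["symbol"], 0)  # warm-up: stay flat
        up = self.prices[-1] > self.prices[-1 - self.lookback]
        return Signal(bar["symbol"], 1 if up else -1)


class PaperExecution:
    """Records non-flat signals instead of routing real orders."""

    def __init__(self):
        self.orders: list[Signal] = []

    def execute(self, signal: Signal) -> None:
        if signal.direction != 0:
            self.orders.append(signal)
```

With this shape, an impact assessment after a refactor is mostly a question of whether the `Signal` contract changed; everything behind each interface can move freely.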

Three Things That Still Get Stuck on the Human Side

When I asked J what AI couldn’t decide for him, he paused.

First principles of strategy logic. “AI can implement my ideas, but it can’t tell me whether the idea is worth pursuing.” During the hackathon J tested six signal combinations — AI could run all of them — but which one actually fit real market structure was a judgment that depended on his own understanding of the market. AI has no intuition. It only has the data you feed it.

The feel for risk limits. Risk module parameters — maximum position size, stop-loss threshold, how many consecutive losses before scaling down — the numbers behind those settings are judgments about “what can I actually handle.” J said AI can calculate expected value, but it doesn’t know whether you’re sleeping at night. That’s deeply subjective, and it has to be set by a person.
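To illustrate the shape of the thing, not the values: such a risk config might look like the sketch below, where every default is precisely the kind of number J says a person has to set. Names and figures are my placeholders, not his settings.

```python
# Hypothetical risk limits. AI can compute expected value under these
# numbers; it cannot tell you whether you'd sleep at night with them.

from dataclasses import dataclass


@dataclass(frozen=True)
class RiskLimits:
    max_position_fraction: float = 0.20  # at most 20% of capital per position
    stop_loss_pct: float = 0.03          # exit if a position moves 3% against entry
    max_consecutive_losses: int = 3      # after this many losses in a row...
    drawdown_scale: float = 0.5          # ...cut position size in half


def scaled_position(limits: RiskLimits, capital: float, loss_streak: int) -> float:
    """Position budget, reduced once the loss streak hits the limit."""
    base = capital * limits.max_position_fraction
    if loss_streak >= limits.max_consecutive_losses:
        base *= limits.drawdown_scale
    return base
```

The code is trivial on purpose: the hard part is the four constants, and nothing in the data tells you what they should be for you.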

First response to anomalies. Midway through a run, the signal logic suddenly went completely flat — no entries, no exits, just silence. J said in moments like that, he needs to judge for himself whether the strategy is waiting, or something broke. AI is slow on these “quiet anomalies” because there’s no obvious error signal to catch. The human sense that something feels off still runs faster than the logs.
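One cheap mechanical backstop for quiet anomalies is a silence watchdog: count bars in which the strategy did nothing and flag when the streak exceeds a threshold a human chose from the strategy's normal rhythm. A hypothetical sketch, not part of J's system:

```python
# Hypothetical watchdog: raises a flag when the strategy has been
# silent for longer than its historical norm. It can't tell "waiting"
# from "broken"; it only makes sure a human gets asked the question.

class SilenceWatchdog:
    def __init__(self, max_quiet_bars: int = 50):
        self.max_quiet_bars = max_quiet_bars
        self.quiet_bars = 0

    def observe(self, acted: bool) -> bool:
        """Record one bar; return True once silence exceeds the threshold."""
        self.quiet_bars = 0 if acted else self.quiet_bars + 1
        return self.quiet_bars > self.max_quiet_bars
```

Note the division of labor: the watchdog only surfaces the silence. Deciding whether it's patience or a broken pipeline remains the human judgment the paragraph above describes.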

Failing Faster — and Failing Better

That’s J’s phrase, and I think it’s exactly right.

Before, building a trading tool from idea to first meaningful backtest could take three weeks. Along the way you'd make a lot of low-level mistakes: syntax errors, botched logic, misread framework APIs. Those ate most of the time. The hours actually spent asking "is the strategy sound?" were surprisingly few.

AI compressed the low-level errors.

Over the three hackathon days, J quickly ruled out four strategy directions. Each one — from idea to confirmed “this isn’t working” — took about six hours. If those four failures had all been bogged down in manual debugging, the hackathon window would have run out.

But the direction he ultimately chose was still his own call.

AI helped him reach “knowing it won’t work” faster. But “what will work” is still a human question.

The risk didn’t disappear — it was redistributed. The old risk was “too slow to build, too many errors, no time to validate.” The new risk is “building too fast, getting overconfident, ignoring assumptions AI didn’t catch.” Different risks, but both real.

AI Is a Multiplier — But Only If You Have Something to Multiply

I’ve wondered sometimes why AI tools have such wildly different effects on different people.

Then I come back to those three things J said still get stuck — strategy intuition, the feel for risk limits, reading anomalies. None of those are technical problems. They’re about how deeply you understand what you’re actually doing.

If you half-understand trading logic, AI will help you implement that half-understanding very quickly and very completely. The failure will come just as fast and just as completely.

So yes, AI is a multiplier. But a multiplier doesn’t change direction — it just amplifies whatever scale you started with.

What J built in three days was built on months of understanding market structure, understanding risk logic, understanding the tradeoffs in signal design. AI turned that understanding into code. It didn’t manufacture understanding that wasn’t there.

That’s about it.