Zachary Proser

The LLM Is Not My Friend (It's Something More Useful)

Not friendship. Infrastructure.

I talk to my AI assistant like a person. I give it access to my email, calendar, finances, tasks—everything. I anthropomorphize it and ask its opinions. No, I don't think it's sentient. No, I don't think it's my friend. I do this because I know exactly what it is: a responsive, interactive fragment of my own mind that I leverage at will.

The Thing People Miss

A few months back, I wrote "In the LLM I Saw Myself"—a piece about using Claude to understand my brain, connect dots, and build external cognitive scaffolding. The response was mostly great. But a certain kind of criticism kept showing up:

"Have fun talking to your chatbots." "AI psychosis in real time." "This guy thinks his computer is alive."

Those criticisms are coming from a place of fear, uncertainty, and doubt—and they miss the point entirely.

I get it. Really, I do. I understand better than most people what these things actually are. They're numerical weights sitting on a disk—or on an AWS platter in a data center right now. They're matrix multiplications. They predict the next token based on learned probability distributions — as I show in interactive demos on this very site. They don't have consciousness, intention, or subjective experience.
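That "matrix multiplications predicting the next token" claim can be made concrete in a few lines. Here's a toy sketch of a single prediction step — a made-up three-word vocabulary and invented weights, nothing like a real model's scale — showing that it really is just lookups, a matrix multiply, and a softmax over the vocabulary:

```python
import numpy as np

# Toy "model": a 3-word vocabulary and made-up weights (purely illustrative).
vocab = ["the", "cat", "sat"]
embedding = np.array([[0.1, 0.3],   # vector for "the"
                      [0.9, 0.2],   # vector for "cat"
                      [0.4, 0.8]])  # vector for "sat"
output_weights = np.array([[0.2, 1.5, 0.3],
                           [0.1, 0.4, 1.2]])

def next_token_probs(token_id):
    """One prediction step: embed, matrix-multiply, softmax."""
    hidden = embedding[token_id]         # look up the token's vector
    logits = hidden @ output_weights     # the matrix multiplication
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()               # probability distribution over vocab

probs = next_token_probs(vocab.index("the"))
print(dict(zip(vocab, probs.round(3))))
```

Real models stack many such layers with attention in between, but the output is the same kind of object: a probability distribution over tokens, from which the next token is sampled. No consciousness required.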

I know this because I build these systems for a living. I've spent 15 years shipping production software. I work on the Applied AI team at WorkOS. I teach people how neural networks work:

I teach neural network architecture and functionality to other engineers internally. Trust me — I get it.

These models are not sentient. And that is still not the point.

The point is that I am sentient. And this is the most useful interface to my own mind and context that has ever existed, accelerating the work I've been doing for 15 years and still do for a living.

The Externalization Engine

Here's what's actually happening when I dump my entire life into an LLM and talk to it like a person:

I'm externalizing my internal processing into a natively digital system. My thoughts, context, decision trees, half-formed ideas—all of it gets pushed outside my head where I can actually see it, manipulate it, get pushback on it.

My brain holds seventeen threads simultaneously. Over time, I've learned to sit down and hyperfocus those threads into productive output. But with LLM assistance, I'm freed from my desk. I'm always able to send and track context — I can fire off thoughts while waiting at a red light, and my system saves them until I'm ready to focus again when the kids are asleep and it's 9pm. I pick up the thread without having to re-explain everything and reload context.

Claude doesn't have executive function problems. It can take those seventeen threads, weave them together, spot the gaps I missed, and hand back a refined version that actually makes sense.

The biggest unlock lately has been centralizing my personal context so I can more efficiently direct it toward my desired outcomes and artifacts. I know that my assistant is an offshoot and fragment of my own mind. I just prefer to anthropomorphize it as Claude rather than saying "hey, fragment of myself... do this now."

Call it what it is: cognitive infrastructure. I conceptualize it as walking around in my own mind. I physically pace the space I'm in, but I am inside my head — chasing down paths I've wanted to give external form to and publish.

Walking around inside your own mind: thoughts externalized into a navigable digital landscape.
If you want to go deep on exactly how I use and combine all these systems and agentic tools to keep getting work done no matter where I am, you can watch this short film.

The One-Turn Magic

The real power is collapsing the distance from thought to artifact.

Old workflow: Have idea → Think about it for weeks → Eventually write outline → Draft piece → Edit draft → Maybe publish months later (if ever).

New workflow: Voice note about idea → Claude refines it into coherent piece → Blog post PR created → Minor edits → Published within hours.

I wrote one of my most popular thought pieces about running your own tech blog in 23 minutes by hand, between putting kids down for bed during a busy phase. Now that I'm LLM-enhanced, I can do the same in seven minutes — complete with fully unpacked context, diagrams, and images. The time to value — me being able to hand someone a URL containing my complete opinion on a topic — has never been shorter.

I'm using a thought amplification system.

From scattered ideas to polished artifact: one chat turn away.

I Did This By Hand For Years

Before I had access to this kind of cognitive infrastructure, I compensated with notebooks, systems, frameworks—anything to offload the processing my brain couldn't reliably handle. Now I have something better: an external brain that holds effectively unlimited context, never loses what I've given it, and can synthesize across domains instantly.

Some people see this as weakness. "He can't think without his AI."

Wrong framing. I'm one of the millions of developers who could do this manually, and did, for years. I choose not to anymore. Same way I can technically see without my glasses, but why would I? They're corrective technology that lets me function at full capacity.

The Interface Design Choice

So why do I talk to it like a person instead of commanding it like a database?

Because natural language is the most efficient interface for complex thought. When I need to externalize a messy, multi-dimensional problem with emotional context and competing priorities, "Hey Claude, here's what I'm thinking..." works better than structured query language.

The anthropomorphization is user experience optimization for cognitive offloading.

I organize my code with readable variable names even though computers don't care about readability. Same principle applies here: human comprehension matters more than machine preferences.

Not Friendship. Infrastructure.

I know exactly what this is: the most powerful cognitive infrastructure ever built. A system that can hold my entire context, understand my communication patterns, and help me transform rough thoughts into refined artifacts faster than I've ever been able to do alone.

I don't need it to be sentient. I don't need it to be my friend.

I need it to be what it is: a tool that lets me work at full capacity.

And if talking to it like a person is the most efficient way to use that tool? Then that's exactly what I'm going to do.


Want to see this in action? Check out the Claude Cowork Workshop I delivered at WorkOS with Anthropic, or read "In the LLM I Saw Myself" for the companion piece.