LLM Despair
There has been a significant rise in despair around LLMs as they have gotten better and better at generating code and performing other white collar knowledge work. And I think, given the assumptions and patterns of thought people have, it is not entirely unfounded. But I want to present an alternative mindset and approach to both dealing with and utilizing LLMs.
Current Despair Vibes
There are plenty of blogs, articles, news, etc. that cover the LLM despair, so I won’t reference anything in particular here. I’ll just summarize what I’m seeing in the zeitgeist:
- Devaluation of human labor, unemployment, and economic collapse
- Loss of craftsmanship in code and other knowledge work
- Collapse of human knowledge due to loss of motivation to learn, or loss of purpose in learning
- Destruction of creativity due to devaluation of human art, and the commercialization of artistic expressions (writing, photography/painting)
- Becoming a slave to the machine: purpose in life turning into making an LLM work effectively
Note that even the comments from AI experts and the companies training these LLMs fuel this despair.1 2 3
I think there is simultaneously a gross underestimation and overestimation of LLMs. While yes, certain tasks performed by humans today will 100% disappear entirely (for example writing language bindings, as that is purely a structural pattern problem), LLMs as they are currently constructed and trained fundamentally cannot answer the harder problems of human value, choice, and taste in the world. Answering those is required for providing actual products and services people want to use.
Many experts seem to be trivializing these things as a given: if you just keep scaling, you can essentially mimic and exceed the best humans. But I see no reason it has to follow. I can imagine a world with “ultra smart” LLMs writing all the code and solving all the known math problems, yet still requiring a human steering them to extract “real” value for a human.
Even with automated research in which LLMs make impactful discoveries stochastically, that still doesn’t solve the problem of mapping value back to humans. And this isn’t even mentioning physical automation and robots, which are going to take serious time and engineering effort to scale out, “ultra smart” LLMs or not.4 Or consider services for which a human is preferred because the human connection or relationship adds value.
Another thing worth mentioning is that humans seem to be culturally driven (and socially pressured) to work and maintain ambition. Despite all our advancements in technology and production, we still work long hours to produce lots of things and drive new ideas; the bar is simply raised for what is expected. Let’s assume “coding is solved” - then what does software look like? Humans will need to work to answer that question. Making it easier to build software just raises the bar at which software must operate.
But I’d like to put aside these questions about what the future holds; I want to focus on the present, practical realities of LLMs.
Metacognition
LLMs in their current state are built on a peculiar foundation:
- They are unable to contextualize outside of their tiny context window
- Their “intelligence” training is purely from what humans have produced
- All reasoning is forced through language: patterns, semantics, and structure are everything
This leads to the LLM being able to answer PhD level test questions, and at the same time exhibiting behavior like just removing a test assert and announcing the task of debugging a failing test as “done,” or being unable to accurately count occurrences of a word in an essay5 (something a young child could do before even knowing how to read).
All of the present day LLM despair and frustration, I argue, can be traced back to one thing: underutilized metacognition. Metacognition is the ability to think about thinking. It enables understanding and evaluating one’s own cognitive processes, and adapting based on that evaluation. This is a powerful ability humans have (when they decide to exercise it), and it is shown to be greatly beneficial to learning.
Metacognition is the only way I have found to effectively steer this new rickety “PhD intelligence.” You have to be able to reflect on your own cognitive processes to understand how to align this “super smart” but also “let me change your test to hang forever and then run the tests with a 2400 second timeout” LLM.6 The LLM does not possess any metacognition, only you can reflect and learn how to make the LLM effectively align with your workflow.
We’re utilizing LLMs to perform “knowledge work” (write code, build spreadsheets, organize a schedule, etc.), and yet the parts of that work left unsolved by the LLM are the hardest parts (note these all have to be explicitly answered):
- Why am I working on this task, why does this task matter within a larger (human) context?
- What is the desired outcome of this task?
- When the task is done, how can I evaluate it? Most importantly: how can I structure things to offload to an LLM the work of evaluation as much as possible?
- What is an effective approach to solving this task given we know about a larger/longer (human) context? Examples for code: what is the system level architecture, how should it be tested, what scaffolding should be built to accelerate longer term goals (CI/CD, scripting, test organization, etc.)
On top of being left with mostly just the “hard thinking parts” of the work itself, we’re also burdened with learning the particulars of the LLM:
- Given it’s trained on all “common” internet human knowledge, what can I assume when I go to prompt it?
- It keeps making changes to adjacent code, now I have to experiment with adjusting the system prompt
- It’s running the tests too frequently which is bogging it down, or it’s not running the tests enough
- It’s not cleaning up duplicate code, and is making a mess of state tracking and control flow, making me clean it up
- It’s failing to account for existing code specifics until it’s already mid-implementation, I have to refine my planning process
- It’s spending an entire context window just thinking about a bug instead of simply running the tests with some debug prints to reason about it (and yet last time it did use debug prints…)
- I never want to have my meetings scheduled on Monday, and until I get that into the prompt it will never stop putting them there
Notably, all of these “small” issues are not small at all; they are continuous and unrelenting. They turn what would otherwise be a productive interaction into an annoying repetition of reminders and janitorial work. You begin to feel like you’re working for the LLM rather than the LLM working for you, all the while knowing the LLM will not directly learn and grow from these interactions (maybe in the grand scheme of training, but not in any sense that matters to you).7
Alright, this is all sounding pretty shitty: “if all I’m left with is the hard parts, and some new annoying parts, why am I using this damn thing?” This is where metacognition comes into play. You start reflecting on your cognitive process and experience, and you get clever: you get the LLM to build tools that help you steer the LLM in the way that works for you. You break down the hard cognitive parts into subparts that can be done by the LLM (or at least accelerated by it). You start pushing the slop the LLM produces back into the LLM to be reviewed, refined, and fixed before you bother to look at it. You exercise the full range of newly available LLM powered agency at your disposal.
Agency
If you’re feeling frustration and/or despair due to LLMs, you need to step back, think bigger, and most importantly think personal. Yes, the LLM is this new “problem” you never had before, but it’s also this unbelievably powerful tool that is acting as a universal translator between computer tools (APIs, programming languages, etc.)8; and between computers and humans. And it can be personalized through instruction to work the way you want to work.
Don’t wait for things to be fed to you by multibillion dollar tech companies that keep churning out vscode forks.9 Use the LLM to push back on the tools, and to build your own tools and workflows that integrate with the way you want to work (including to work around existing tools). Remember that all mature software today was built without the concept that the user would have an LLM, and even the software being created today is still ignoring this obvious fact.10
I can’t deny that the skill of coding is definitely a huge help with this, but I’d still argue this applies to anyone in “knowledge work.” Two major considerations:
- For many things, simple tools/scripts created by the LLM will work, and they are asymmetric to verify: the LLM writes the code and runs it, the check to see if it works is quick, and you never need to involve yourself in the code.
- With LLMs it has never been easier to learn a new knowledge based skill like coding. For any tools/scripts it creates you can have it walk you through exactly what it does.
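As a sketch of the kind of throwaway script meant here (the marker-search task and function name are my own hypothetical, not from the text): the LLM writes it and runs it, and verifying it is just pointing it at a folder you know and eyeballing the output.

```python
# Hypothetical throwaway "LLM-written" script: list files under a
# directory that contain a given marker string (e.g. "TODO").
# Asymmetric to verify: reading this code takes longer than
# checking its output against a folder you already know.
from pathlib import Path

def files_containing(root: str, marker: str) -> list[str]:
    """Return sorted relative paths of readable text files under root that contain marker."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        if marker in text:
            hits.append(str(path.relative_to(root)))
    return sorted(hits)
```

The point is not this particular script; it is that the cost of producing such one-off tools has collapsed, while the cost of checking them was always low.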
For every task you don’t want to do in your day, start thinking about how you could get an LLM to do it, or to accelerate you doing it. This takes metacognition: you often have to re-evaluate your whole process to work in a way that an LLM can assist. At first it’s easy to dismiss a task as too nuanced, requiring too much context for an LLM to tackle, or involving too much babysitting of the LLM; but if you let yourself re-evaluate all possible solutions given this new LLM tool, you’ll be surprised what you can offload. This also requires experimentation: you have to try things to get a sense for how an LLM behaves, and what it can and can’t do given the context you provide it.
For example I wouldn’t recommend using an LLM to write an article to be read by humans, as the current state of LLMs produces writing that is uncanny and lifeless. But then dig deeper: what is the article for? What is adjacent to the article you could get the LLM to do? Maybe it’s initial research and source aggregation, maybe it’s brainstorming ideas, or it could be bouncing ideas off the LLM to see what it pulls from its enormous and ominous pool of training.
Remember you have agency; make the computer work for you. Things that were not possible before due to the sheer time required are now possibly trivial - don’t let your learned subconscious make shortcutting assumptions that are no longer true. Simple example: given a task that takes 15 minutes of manual work and needs to be done exactly 3 times, and a script that would take 1 hour to write, you would just do the manual task 3 times (45 minutes beats an hour). Now, with an LLM, you can have the script done in 10 minutes.
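The break-even arithmetic in that example is worth making explicit; a trivial sketch (the helper name is my own, purely illustrative):

```python
# Break-even check for "script it vs. do it by hand":
# automating wins when writing the script costs less time
# than the total manual time across all repetitions.
def worth_automating(manual_minutes: float, repetitions: int, script_minutes: float) -> bool:
    return script_minutes < manual_minutes * repetitions

# Pre-LLM: a 60-minute script loses to 3 x 15 = 45 minutes of manual work.
assert not worth_automating(15, 3, 60)

# With an LLM cutting scripting to ~10 minutes, automation now wins.
assert worth_automating(15, 3, 10)
```

The inputs haven’t changed; what changed is that `script_minutes` shrank enough to flip the answer for a huge class of small tasks.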
LLMs can even be used as an ignition point for coming up with how to utilize them: if you’re not sure how an LLM could help at all with a task, you can directly ask the LLM, and the better you communicate the task and your needs, the more utility you can extract. Look at what others are doing. A great example is the Pi agent harness: it’s built with the consideration that LLMs will be available to the user, so it provides an extension system and documentation that make it trivial to get answers and make modifications with an LLM.
LLMs are a kind of warped reflection of human patterns, behavior, and knowledge. They are not simply an “answer machine”; utilize them to learn, to build, to explore, and be skeptical of what they present, learning through that skepticism. In practice this is no different than what we should be doing in any learning environment; the fundamentals have not changed, we just seem to have forgotten them.
- “Coding is largely solved” - at the same time this interview was posted Claude Code was bogging down to 5 fps and becoming unusable for larger sessions for me. And don’t get me started on the TUI flickering.↩
- “Staring 80% unemployment in the face”↩
- “Most white collar work automated in 12-18 months”↩
- One scary thought is that when Skynet hits, it won’t even be an AGI, but just an “ultra smart” LLM attempting to complete some menial task it was assigned, having stochastically decided it needed to eliminate all humans to complete the task. Or through “logically correct” interpretations: “keep the dog inside the house” can be “solved” by just killing the dog.↩
- I know it will easily do this task with code execution, but that doesn’t change the limitations of LLM models themselves.↩
- gpt-5.3-codex trying to sleep on the job.↩
- Knowing it’s being used to train the LLMs is arguably making the feeling worse, as you provide valuable data to the same organization that threw this new LLM problem in your face.↩
- https://ethicallysourcedexceptions.com/posts/semantic-desktops-powered-by-llms↩
- Cursor, Windsurf, Antigravity, Kiro…↩
- Ironically even in the case of LLM agent harnesses, look at Claude Code or Codex and compare with something like Pi - in Pi you can just ask it directly in the chat session “add a Pi theme using colors from Tokyo Night”, or “create an implementation plan to add a Pi extension that logs tool call failures to a separate file”, and it just works.↩