Technical Judgment
I recently published a small JavaScript library that allows two texts to be compared and reflects the differences as tracked changes within the same Microsoft Word document. It is a very modest contribution to the open source community but I found it difficult to get there.
My main purpose is to let everyone know that there is now a way to develop software that doesn't require you to know or learn any programming in the traditional sense. How it works is that you download a coding agent. Then you just tell it, in plain English, what you want to create, and the agent, which is AI-powered, will write all the code for you.
This new approach has been called, among other things, "vibe coding", a term coined by Andrej Karpathy, a founding member of OpenAI. The idea is that you code based on your "vibes" when you converse with the coding agent in natural language. Apparently even kids are doing it nowadays, creating their own video games and the like.
There is (or was) a movement encouraging kids to learn things like robotics and coding. One or two years ago, I walked past a classroom of children learning coding in a mall. Today I'm wondering: is this still relevant? Why learn coding when the AI can write everything for you?
I've swung back and forth on this, and the answer is: I'm not sure. It could be that learning coding is valuable not for the programming syntax but because you learn how to solve problems. Or it could be that learning coding is no longer relevant because you just need to learn how to solve problems. Is it a chicken-and-egg question?
Those who know me from my high school days would know that I studied Computing then, a relatively uncommon subject at that level. It's roughly equivalent to an introductory undergraduate programming course. That was more than 20 years ago, and I've forgotten all my C++ (the programming language we learned).
Since a young age I'd always wanted to be a programmer. But the first time I formally learned coding was in high school, and the experience was so disheartening that it changed my mind about my career choice.
What was disheartening wasn't the course content or concepts. Coding, at the end of the day, is about problem solving, and I've always enjoyed that. It was the fact that many of my coursemates had ALREADY learned coding at a very young age. I think some had won Olympiads. They were at a much higher level than I was.
I got the impression that I would never be able to compete with them, and that there would be even more such prodigies at university if I chose to pursue a Computer Science degree. When you read their code, it looked minimalist and elegant. Those faraway memories still shape my intuition of what good code looks like today.
The straw that broke the camel's back was our final Computing lab exam. We were given a problem and had to develop a program to solve that problem. The lab exam was broken down into sessions over a few days. In between, you could work on the solution on your own computer back at home. But you couldn't bring anything into the lab. So, assuming you solved the problem at home (which I did), you still had to re-implement it in the lab.
Back then, there was this concept called "compilation" of your code. The concept still exists today but is no longer as daunting. In the old days, before your program could run, you had to "compile" it; in practice, that meant clicking a button or shortcut in your coding environment. The scary thing was to see your program fail to compile. This could be due to the smallest of things, like a missing semicolon or closing bracket somewhere in the code, and there was no automated way to pick that up, at least not in the lab exam. Your code could be perfect in its logic yet fail to compile over a simple typo.

That's basically what happened to me. I had a fully working solution at home, which I had re-implemented in the lab, except that my code just couldn't compile. I debugged down to the last minute but couldn't resolve it before time was up. So I left the lab feeling deflated, thinking that I would never want to do this for a living.
So fast forward 20+ years to today, when all this AI stuff is in play. Sometime last year, an MS Word add-in caught my eye, because it could take selected text and rewrite it using AI, within MS Word itself. From a contract-vetting perspective that is useful: you could tell the AI to review a clause in a certain way. There are other use cases too, like proofreading. Another plus was that the add-in let you use your own self-hosted large language model (LLM), which helps if you are reviewing confidential data that you don't want OpenAI and the like to see.
When I tried it out, however, I realized that it had a flaw. It could only reflect the changes as tracked changes at the block level. In other words, it would delete the entire block of text and replace it with the new block, even if only a few words had changed. What I wanted was for only the changed words to be reflected as tracked changes. In industry parlance, that's called a "word-level diff".
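To make the distinction concrete, here is a minimal sketch of a word-level diff in JavaScript, built on a textbook longest-common-subsequence (LCS) walk. This is purely illustrative and is not my library's actual code; the function and names are my own.

```javascript
// Word-level diff: classify each word as equal, deleted, or inserted,
// so that only the words that actually changed are marked up.
function wordDiff(oldText, newText) {
  const a = oldText.split(/\s+/).filter(Boolean);
  const b = newText.split(/\s+/).filter(Boolean);

  // dp[i][j] = length of the LCS of a[i..] and b[j..].
  const dp = Array.from({ length: a.length + 1 }, () =>
    new Array(b.length + 1).fill(0)
  );
  for (let i = a.length - 1; i >= 0; i--) {
    for (let j = b.length - 1; j >= 0; j--) {
      dp[i][j] = a[i] === b[j]
        ? dp[i + 1][j + 1] + 1
        : Math.max(dp[i + 1][j], dp[i][j + 1]);
    }
  }

  // Walk the table, emitting one operation per word.
  const ops = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) {
    if (a[i] === b[j]) {
      ops.push({ op: 'equal', word: a[i] }); i++; j++;
    } else if (dp[i + 1][j] >= dp[i][j + 1]) {
      ops.push({ op: 'delete', word: a[i] }); i++;
    } else {
      ops.push({ op: 'insert', word: b[j] }); j++;
    }
  }
  while (i < a.length) ops.push({ op: 'delete', word: a[i++] });
  while (j < b.length) ops.push({ op: 'insert', word: b[j++] });
  return ops;
}
```

Run on "the quick brown fox" versus "the slow brown fox", this marks only "quick" as deleted and "slow" as inserted, leaving the other three words untouched; a block-level diff would instead delete and re-insert the whole sentence.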
So I first searched for alternative open source solutions for word-level diffs. None existed (at that time). There were paid solutions, like enterprise-grade software with hefty subscriptions, which suggested the problem had a solution.
So that was my first serious foray back into coding. I really didn't want to do it, because I was still under the impression that coding was what it had been 20 years ago, i.e. having to learn a new language (JavaScript) from scratch, which would be very time-consuming and difficult.
It was around then that I discovered AI-assisted coding, or vibe coding. As I mentioned, the AI can take your intent, expressed in plain English, and code for you. It's really very powerful: you see it running commands and writing code autonomously, and best of all, you no longer have to worry about stray brackets in your code.
But this story isn't "AI coded what I wanted in 15 minutes". On the contrary, even with a superpowered AI helping me, it took me almost 3 months, working many nights after the kids were asleep, to solve this problem.
Coding assistants, I think, are near but not yet at Artificial General Intelligence (AGI) level (https://www.ibm.com/think/topics/artificial-general-intelligence), which some define as ability equal to or surpassing humans in most or all domains. Programs in narrow, specialized fields (like chess) have long surpassed humans; when a computer is vastly better than the best humans in the world at a task, that is called Artificial Super Intelligence (ASI). What fascinates, excites, or frightens people (depending on who you ask) is the prospect that one day AI will attain ASI in all human domains, rendering human cognitive abilities obsolete.
If the coding agent were at AGI level (never mind ASI), it would be at the level of a real senior software engineer, who could probably come up with a solution in a shorter time than I did. But AI coding agents, and AI in general, have what is called "jagged" intelligence: they are very much better than humans at certain things, but very much worse at others. I think Prof Simon Chesterman has likened it to a brilliant but drunk intern.
The value of AI in its present state, then, is not that it can replace experts completely, but that it can help novices like me reach much higher skill levels. It is like an exoskeleton that amplifies your skill in a way no technology has done before. Someone like me, who barely knows any coding, can now create things that would have been impossible before the advent of AI coding agents.
Technical Judgment

However, there are some less obvious implications. AI reveals who has technical judgment and who doesn't, and that's the new dividing line.
The thing is, that line is not at all obvious. That is not an indictment of the AI literacy of the majority of people, who cannot be expected to be AI or tech wizards. But it leads to two categories of people who may hold themselves out as such.
The first category are people who are more AI-literate than average and aim to educate others. On the whole, this is fine; they mean well. The slight wrinkle is that they may miss important deeper layers. For example, you can't just be teaching prompt engineering in 2026. You have to start thinking about evals, temperature settings and model choices.
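To give a flavour of what "evals" means in practice, here is a minimal, hypothetical sketch in JavaScript: run a model over a set of test prompts, grade each output with a per-case check, and track the pass rate. The names (`runEvals`, `check`) are my own invention, not any real framework's API.

```javascript
// Minimal eval harness sketch. `model` is any async function that maps
// a prompt string to an output string; each test case supplies its own
// grading rule via `check`.
async function runEvals(model, cases) {
  let passed = 0;
  for (const { prompt, check } of cases) {
    const output = await model(prompt);
    if (check(output)) passed++; // grade this case's output
  }
  return { passed, total: cases.length, score: passed / cases.length };
}
```

In real use you would swap the stub model for an actual LLM call and rerun the same cases whenever you change the prompt, the temperature, or the model, so you can see whether the change helped or hurt.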
The second category is a lot more dangerous. Whether deliberately or not, they come up with plausible-sounding things that are plain wrong or misleading (just like a hallucinating AI!). Stuff like "Claude Code is just outputting boilerplate of low value, I know this because my vLLM KV cache is highly utilized".
When you lack technical judgment, everything looks equally hard - or equally easy. You can't distinguish between: (a) a problem that's hard because it requires novel algorithmic thinking, (b) a problem that just requires knowing the right library exists, or (c) a problem that seems hard but is actually trivial once you understand the domain.
Did I create a defensible technical moat because "it took me three months to build this with AI"? Three months for a novice with AI might be three days for someone who knows what they're doing - or three hours using an existing library they didn't know about.
I reiterate that I'm not expecting everyone to have technical judgment, nor am I claiming that I have it. But the people who should have it (e.g. AI engineers) or who hold themselves out as having it (e.g. someone who won an accolade for being an "AI thought leader") really ought to. If the people who call themselves experts aren't really experts, the general level of AI literacy in society gets dragged down, because then how does anyone trust anything anyone says?
Business Judgment (aka the False Moat of Effort)

A technically perfect product is useless if people don't need or use it. Conversely, an imperfect product can win. Take the modern QWERTY keyboard: the usual story is that its layout was deliberately chosen to slow typists down, because typing too fast caused typewriters to jam. It's not technically perfect, yet everyone uses it.
People confuse "this took me a long time to build" with "this is valuable." But effort doesn't equal value. The market doesn't care how hard something was to make. It cares whether it solves a real problem people will pay for (or use, if you're going the open source route). With AI lowering the barrier to building, we're going to see an explosion of solutions to problems that don't really exist, or exist for such a tiny niche that they're not sustainable.
The New Hierarchy

So here's how I think about it now:
1. Business judgment - Are you solving a real problem people care about?
2. Technical judgment - How hard is it to build? Are you solving it in a sensible way?
3. Technical execution - Can you actually build it? (This is where AI helps most)
4. Prompting skills - Can you get the AI to do what you want? (Table stakes)
Most of the content out there focuses on level 4, maybe touches on level 3. Almost nothing talks seriously about levels 1 and 2.
But in a world where everyone can build, knowing what to build and whether to build it matters a lot more than how to build it.
The N=1 Exception: Personal Productivity Tools

There's an important caveat to the above. Some of the most valuable things you can build have a target market of exactly one person: you.
Personal productivity tools occupy a special category. If you have a workflow problem that bugs you daily, and you build a tool that saves you 15 minutes every day, that's a genuinely valuable use of your time - even if literally no one else ever uses it.
Actually, AI coding agents might spark a renaissance in personal automation. In the past, building custom tools for yourself wasn't worth it unless:
- You were already a programmer (low opportunity cost)
- The time savings were enormous
- You'd use it for many years
Now the barrier to building is so much lower that you can justify creating bespoke solutions for relatively minor annoyances.
Where Business Judgment Still Matters (A Little)

Even for N=1 tools, there's still a lightweight version of business judgment:
- Time ROI: Is the time saved worth the time invested?
- The "good enough" test: Is there already a solution that's 80% as good and takes 5% of the time to implement? Sometimes the best tool is the boring spreadsheet.
- Scope creep awareness: Personal projects are where scope creep kills you. You start with "I just need to extract these fields" and end up building a full document management system. Know when to stop.
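The Time ROI question reduces to simple arithmetic. A hypothetical back-of-envelope helper (the inputs are your own guesses, and the function name is mine):

```javascript
// How many days of use before a personal tool pays for itself?
// buildHours: hours spent building it.
// minutesSavedPerDay: time it saves you each day you use it.
function breakEvenDays(buildHours, minutesSavedPerDay) {
  return Math.ceil((buildHours * 60) / minutesSavedPerDay);
}
```

For example, a tool that took 10 hours to build and saves 15 minutes a day breaks even after 40 days of use. If you won't still be using it in 40 days, the boring spreadsheet wins.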
Coming Full Circle

So here I am, more than two decades after walking away from programming entirely, having finally created something I can share with the world.
The barrier between wanting to build something and actually building it has never been lower. AI coding agents have genuinely changed the game. Someone like me, who couldn't write a functional program without AI assistance, can now create working software that solves real problems. That's remarkable.
But the hard parts haven't gone away. The real skills aren't what we think they are. It's not about prompting - that's table stakes now. It's not even about traditional coding - the AI handles syntax and boilerplate.
The skills that matter now are judgment:
Technical judgment - distinguishing real complexity from apparent complexity, detecting genuine moats from false ones, understanding when someone's technical claim is substance versus BS, knowing enough about how things work to recognize when the AI is confidently wrong.
Business judgment - understanding whether you're solving a real problem or just your problem, knowing why solutions don't already exist, validating before you build, recognizing when effort doesn't equal value. And knowing which framework applies. Personal productivity tools where N=1 is fine? Build away. Something you want others to use or pay for? Better ask some hard questions first.
The democratization is real but incomplete.
Yes, anyone can generate code now. But building something robust, valuable, and actually used still requires judgment that doesn't come from prompting skills alone.
AI hasn't eliminated expertise. It's changed where expertise matters - from syntax and implementation to systems thinking, architecture and validation, from technical execution to problem selection and market understanding.
The exciting part? AI actually helps you develop this judgment faster. The feedback loops are tighter. You can experiment more, fail faster, see what breaks and why. You learn by building in ways that weren't possible before.
The most important takeaway, to me, is to build, build, build. Try stuff out. Run your own inference servers. Do stuff till it breaks. Know whether a 30B model can do contract review, RAG, or neither. Without actual hands-on building experience, you'd be none the wiser as to whether a so-called expert really is one.