A few days ago, I wrote an article about React. It was a rough-toned hit piece that could essentially be summarized by the sentence “React is trash.” I was really frustrated with React. I have used it for 8 years, and lately I have grown tired of it and the entire Node.js ecosystem. I was frustrated because React has become the default way to build frontends, even when all we want is a survey form or a blog (and if you don't know what's wrong with that, then this is exactly what I'm talking about!!). My anger stemmed from the fact that we use React EVERYWHERE, even when it is not the right tool for the job. The web is over-engineered in many cases, and the market has incentivized a generation of 'Library Developers' rather than 'Web Developers'. Many developers from this generation, who started web development after 2015, don't know how to do web development without React (or a similar frontend framework).
So, I let it all out on the metaphorical paper, spilling my anger and frustration into those lines. The result was a case study in confirmation bias. Publishing it wouldn't have started a conversation; it would have just been me shouting into an echo chamber.
I knew I would get some assenting voices because of the over-the-top tone and the confidence with which I spoke. But I felt that something was off. I wanted to fact-check what I was saying first and make sure my arguments were sound. I did not want to sound like a biased, uninformed ass.
So, I did something that I have been trying to do more of: writing with the help of AI. And by that, I don't mean that I ask an LLM to write for me and then add my name to the byline. I have a very specific way of using LLMs for writing, and it's simple yet quite powerful. The gist of it is that you use the AI to critique your writing. You write a complete first draft. Then you ask the LLM to critique it without holding back: no mercy, no mincing words, just ruthless critique from the point of view of someone who completely disagrees with what you're saying. Here's the prompt:
Here is an article I wrote about <ARTICLE TOPIC>. I need you
to wear the hat of someone who completely disagrees with what
I say in the article. Draft a response to all my points and
critique them as brutally as possible, focusing on technical
details, and only concede the points I made that are technically
and logically sound. I need you to be brutally honest and not
sugarcoat anything, and I need you to start a discussion with
me from the perspective of an experienced, technically savvy
developer who would disagree with my point of view.
Here is my article:
<ARTICLE HERE>

That started a really interesting and fruitful discussion with ChatGPT. I learned a lot, and I felt that burning sensation in your chest you get when someone brutally dismantles every argument you make and you realize you are being schooled and are totally wrong. This was great! And ChatGPT did not hold back. LLMs are often accused of being soft and telling you what you want to hear, but with this prompt, ChatGPT was ruthless and showed no observable bias towards me. Granted, that is what I had asked for, so you could argue it still gave me what I wanted to hear; but that's fine if what I want to hear is a fair bit of criticism. It did concede some good points I made as well, just as I asked it to. Nevertheless, it expertly dismantled my central claim, and after 3 rounds of back and forth on 7 key points, I came out with a more holistic view of the whole topic and more of a middle-ground position.
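If you want to script this workflow instead of pasting into a chat window, here is a minimal Python sketch, using nothing beyond the standard library: the prompt template from above, plus a loop that keeps the conversation history across rounds. Every name here (`debate`, `ask`, and so on) is my own illustration, not part of any SDK; the `ask` callback is where you would plug in your LLM client of choice.

```python
# Sketch of the multi-round critique workflow. The LLM call is abstracted
# behind `ask`, so any chat API (or a stub for testing) can be plugged in.
from typing import Callable, Dict, List

CRITIQUE_PROMPT = (
    "Here is an article I wrote about {topic}. I need you to wear the hat "
    "of someone who completely disagrees with what I say in the article. "
    "Draft a response to all my points and critique them as brutally as "
    "possible, focusing on technical details, and only concede the points "
    "I made that are technically and logically sound. Be brutally honest, "
    "do not sugarcoat anything, and argue from the perspective of an "
    "experienced, technically savvy developer who disagrees with me.\n\n"
    "Here is my article:\n\n{article}"
)

Message = Dict[str, str]  # {"role": "user" | "assistant", "content": ...}

def debate(topic: str, article: str,
           replies: List[str],
           ask: Callable[[List[Message]], str]) -> List[Message]:
    """Run the initial critique plus follow-up rounds; return the transcript.

    `replies` are your human-written rebuttals, one per round; `ask` takes
    the message history so far and returns the model's next answer.
    """
    history: List[Message] = [
        {"role": "user",
         "content": CRITIQUE_PROMPT.format(topic=topic, article=article)}
    ]
    history.append({"role": "assistant", "content": ask(history)})
    for reply in replies:  # you write every rebuttal yourself
        history.append({"role": "user", "content": reply})
        history.append({"role": "assistant", "content": ask(history)})
    return history
```

The point of the `ask` indirection is the key rule of the exercise: the model only ever sees the draft and your own hand-written rebuttals, never a request to argue your side for you.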
I like how this went. The discussion became more fruitful, and the issues discussed were more mature than just “React is trash.” I came out with more respect and appreciation for React as a technology, an understanding of its limitations, and a better sense of the trade-offs. Reading the original article again, I'm amused by the emotionally charged hot takes. Now, my position has gone from:
- “Learning React is a waste of time.” to “React should not be the default learning path. Most developers learn React before learning the browser.”
- “React's unnecessary complexity is useless” to “Most apps do not need client-side state machines. We massively over-invest in frontend abstractions.”
- “Frontend engineers were insecure about the lack of complex architecture on their side compared to the backend, so they invented artificial complexity to feel smarter.” to “Businesses wanted richer UX and more complex UI, which requires complexity as a result. It is the price paid for the web's evolution toward modern-day requirements and expectations.”
The industry undervalues HTML, CSS, and native browser capabilities. There are interesting problems to solve on the frontend regardless of the tech being used. And yes, the Node ecosystem is a disaster when it comes to the amount of black-box code that can be a landmine of security vulnerabilities (the recent react2shell is the latest painful reminder). But that's not a React issue per se.
I also realized where this anger stems from. It's not the technology itself, but my nature as a person. I value simplicity, minimalism, and architectural restraint. I would sacrifice feature density for a simpler stack. But this is a product decision, and each team has to make it for themselves; it should not be a standard decided by convention. I was only mad that React was the convention, even when what I needed did not require the complexity that comes with it. React was not the villain I had wanted it to be.
The exchange was extremely valuable. Looking at the points above, it might seem obvious to you that my initial position was over-the-top. But from inside one's echo chamber, it is not as obvious. This is why using an LLM like this is a great way to ground oneself in a more balanced view and a more defensible position.
The key here is to treat the LLM as a fair adversary. You do not ask it to do the thinking for you, and very importantly, you do not cave in and ask it to help you come up with arguments for your side of the debate. You must think through everything and come up with the points yourself. You can, of course, Google things to make sure they're factually correct. You can even mull it over for some time and come back the next day; the chat context will be saved. But the key is to go through the pain and effort of thinking through the arguments yourself, feeling the pain of getting your comfortable ideas challenged, and being honest with yourself and creative in your approach to constructing valid arguments.
Here are some caveats and points to keep in mind when following this approach:
- Demand technical evidence for the claims that the LLM presents. Ask it for references and citations, and actually go through them. This is not supposed to be a quick thing that you do in a couple of minutes. It will take time, but IT IS WORTH IT!
- Similarly, do not just take the points that the LLM concedes as a victory. Check and confirm whether they are actually true. Ask for citations here as well and go through them. Sometimes LLMs can't give you citations (or hallucinate them); in that case, you can manually search for the concepts mentioned instead.
- You can continue the discussion by also challenging the middle ground itself. The matter may not, in fact, be a middle-ground issue. Maybe one extreme is actually more reasonable than the other. A middle ground is not always the right place to be.
- Take your conversation with a grain of salt. The real source of knowledge is the sources cited by the LLM and real-life experience. Use this discussion to inform and educate, without relying on it completely as the source of your truth.
Following the prompt and the framework above, you are sure to get your money's worth out of your LLM. This is a great way to use LLMs to build you up instead of letting them atrophy your brain. So, go out there and get schooled by ChatGPT. It will sting, but it'll feel great!