AI Overuse: A Developer's Farewell to ChatGPT

The Allure of AI

Should we view AI as a malevolent force threatening our livelihoods? I think not.

Since ChatGPT first emerged, I’ve been closely following AI-related articles for over three years. This sustained interest stems from the rapid evolution of the field, with new developments and news appearing daily.

It’s conceivable that AI could dominate the Nobel Prizes in the future, and the world is already captivated by the capabilities of ChatGPT.

AI is progressing exponentially, seemingly on the cusp of achieving Artificial General Intelligence (AGI). While Large Language Models (LLMs) are currently spearheading AI advancements, generative AI’s rise follows a pattern observed in earlier breakthroughs in machine learning (ML) and deep learning (DL), which demonstrated immense potential in image and video processing.

Before this, the widespread adoption of the internet ushered in the Information Age.

Prior to that, the proliferation of machinery sparked the Industrial Revolution.

And long before that, the introduction of tools led to the Agricultural Revolution.

It’s essential to critically examine whether these transitions were seamless and universally beneficial.

(Note: Subsequent references to AI will specifically refer to LLM-powered generative AI.)

Echoes of the Industrial Revolution

What legacy did the Industrial Revolution leave us?

Accelerated production of innovative manufactured goods, improved working conditions, and immense wealth.

These are among the many benefits that we enjoy today thanks to the Industrial Revolution. But did the people living through that era share in these benefits?

The Dark Side of Progress

Did working conditions improve immediately with the introduction of machines?

In many cases, tasks that once required significant physical strength were simplified into basic machine operations, leading to the replacement of adult workers with children. Factories began operating around the clock to maximize efficiency, and the resulting wealth was disproportionately concentrated in the hands of factory owners (the bourgeoisie). Did the workers passively accept this situation? No. This gave rise to the Luddite movement.

Despite these challenges, do we believe that the introduction of machines has ultimately transformed people’s lives for the better?

I would argue that the answer is ‘yes.’ The changes have been overwhelmingly positive.

Wait, you’ve painted a negative picture of the Industrial Revolution, so why are you suddenly saying it was positive?

While our lives have undeniably improved, many of the problems associated with the Industrial Revolution stemmed from a failure to anticipate and mitigate the social disruptions caused by the rapid introduction of machines. If a social safety net had been in place, fewer people would have suffered, and the negative consequences would have been minimized.

Okay, but what does any of this have to do with AI?

AI: The Second Industrial Revolution

U.S. President Donald Trump announced a plan under which companies such as SoftBank and OpenAI would invest roughly 700 trillion won in AI infrastructure.

LLMs require substantial amounts of power. Companies that generate this power are steadily growing, and Nvidia, which develops AI chips for computation, has achieved the highest market capitalization in the world.

Where will these companies invest? Naturally, they’ll invest where they can make money.

And where is the world currently investing? In AI.

The Profitability of AI

But where will AI’s profitability come from?

AI does not produce products. AI does not run factories.

However, AI can potentially reduce labor costs for companies by automating tasks that are currently performed by humans.

From an economic perspective, what is the cost of a single employee? Assuming an average career span of 30 years (from age 30 to 60) and an average annual salary of 45 million won, a company will pay a single employee 1.35 billion won over their career.

In other words, a company is ‘buying’ a single employee for 1.35 billion won. A company with 300 employees would spend over 400 billion won on labor across those 30 years.
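As a quick sanity check, here is that arithmetic written out as a minimal sketch (the salary, career span, and headcount are the same illustrative figures as above):

    # Illustrative labor-cost arithmetic using the figures from the text
    annual_salary_won = 45_000_000   # average annual salary: 45 million won
    career_years = 30                # average career span: age 30 to 60
    headcount = 300                  # employees at a mid-sized company

    cost_per_employee = annual_salary_won * career_years   # 1,350,000,000 won
    total_labor_cost = cost_per_employee * headcount        # 405,000,000,000 won

    print(f"{cost_per_employee:,} won per employee over a career")   # 1,350,000,000
    print(f"{total_labor_cost:,} won for {headcount} employees")     # 405,000,000,000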

Do you still believe that AI is not profitable? Can you still not see why the world is investing in AI?

AI-driven workforce reductions will generate significant profits for companies. This is the alpha and omega of AI investment.

The Limitations of AI

AI does not guarantee 100% success or 100% failure.

I once demonstrated a deep learning model for detecting drowsy driving. While the model ultimately classified certain situations as ‘drowsy driving,’ we, as developers, defined it as ‘a high probability of drowsy driving.’
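To make that distinction concrete, here is a minimal sketch of how such a detector’s output is typically turned into a decision; the threshold value and function name are hypothetical, not the actual project code:

    # Hypothetical sketch: the model outputs a probability, and the team
    # chooses the threshold that turns it into a label. That choice is ours,
    # not the model's.
    DROWSY_THRESHOLD = 0.8  # assumed cut-off picked by the development team

    def classify(drowsy_probability: float) -> str:
        """Map the model's estimated probability to an actionable label."""
        if drowsy_probability >= DROWSY_THRESHOLD:
            return "high probability of drowsy driving"
        return "probably alert"

    print(classify(0.93))  # -> high probability of drowsy driving
    print(classify(0.41))  # -> probably alert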

Let me reiterate: AI does not offer guarantees of absolute success or failure.

Hallucinations stem from the same property. Because models infer rather than simply retrieve facts, they can generate incorrect answers. This inferential nature is both a driver of AI’s progress and a drawback.

If the model incorrectly identifies me as drowsy while I am not, who is responsible?

The responsibility lies with us, the team that defined the model’s criteria.

AI does not take responsibility. We are the ones who make decisions based on the answers provided by AI.

So what? What are we supposed to do now? Does this mean AI is going to take our jobs?

Approaching AI

Yes, that’s right. AI is going to take our jobs.

The world is fiercely competing to use AI to take our jobs.

I believe this is inevitable, and that a ‘Second Industrial Revolution’ is on the horizon.

What should we do to ensure a smooth transition?

We need to be interested in AI, use it, and maintain both a positive and a critical perspective.

Many people may become disillusioned with life after seriously considering this information. I know I did.

Why should I bother developing myself and studying development if I’m just going to be replaced by AI?

AI can develop code for me, so why should I?

At this point, we need to consider humanism.

Transcending Humanism

In order to transition from a theocratic society where religion governed the nation to an era where ‘kings’ could exploit religion, something had to transcend ‘god.’ Kings used religion, but the bourgeoisie, who possessed the means of production, lacked a comparable tool. They began to promote the idea that humanity itself was important, and this gave rise to ‘humanism.’ Humanism, in turn, led to the emergence of capitalism, communism, fascism, and other ideologies.

In other words, humanism is an effort to break free from the god of a theocratic society.

Some who tried to escape this religious society were branded as heretics and witches, and were considered to be terrible criminals. How do we view them from our current perspective? Do we not see that they were right?

The idea that ‘AI is better than humans’ (or, more narrowly, ‘better than me’) is an act of transcending humanism.

Perhaps this is a natural way of thinking. I believe that we are currently in a transitional period where AI development is causing us to gradually break free from humanism. This is natural, but I hope we can minimize the resulting panic.

What Should We Do?

As mentioned above, we should simply use AI naturally, enjoy it, maintain a critical perspective, and, above all, do what we want to do.

There may be negative aspects in this process. The following sections will finally explain ‘why I want to stop using AI in development.’

AI in Development

AI undeniably boosts productivity.

The languages we use are programming languages. Just as we use Korean to write this blog, we use programming languages to develop programs.

LLM-based generative AI is specialized in writing. Therefore, it will naturally be effective in writing programming languages. So, should we use AI in programming? Absolutely!

However, if you are a developer who is ‘studying,’ you should consider how to use it.

For the following reasons, I have decided not to use AI, at least during the learning process.

AI Steals My Error Notes

When do we typically use AI? I used it often when debugging.

Why doesn’t this work? → Copy the error message and the code → Paste it into ChatGPT

What’s the problem? Will developers who are weary of errors and debugging always carefully examine, understand, and use the code provided by ChatGPT? In many cases, they will simply copy and paste the code without thinking, and if it doesn’t work, they will use AI again.

User Prompt: This doesn’t work, I’m getting this error.

ChatGPT: Oops, my mistake, let me revise the code.

Will I never make this mistake again? It is highly likely that I will make the same mistake again and seek help from AI again. The possibility of internalizing the knowledge and learning from the mistake is greatly reduced.

If I know 99% of the calculation process but can’t reach the final 1%, have I coded well? I am simply delegating my brain to AI because I am tired. I am entrusting AI with the most critical part, the part that I don’t know and can’t do. This hinders deep understanding and reinforces reliance on external tools rather than internal knowledge. Avoiding AI during the learning phase allows developers to cultivate problem-solving abilities and build a stronger foundation of knowledge.

Robbing the Code-Friendly, Unconscious Environment

There are many developers in the world. It is highly likely that a developer on the other side of the world has experienced the same error as me. But did that developer experience the error in the exact same situation? Is the code they wrote the same as the code I wrote? It will be different. The same error can occur in completely different situations.

AI blocks access to information about the surrounding context. It only debugs the code I send and provides information about that code, but it does not show the process required to write the code. Learning to code involves more than just fixing syntax errors. It’s about understanding the underlying logic, the design patterns, and the best practices that contribute to well-structured and maintainable code. Over-reliance on AI removes the opportunity to delve into these crucial aspects of development.

“Of course, you can use prompt engineering to ask for a detailed explanation, right?”

Put your hand on your heart and think about how often you’ve been too tired and just copied and pasted the code.

To search for and investigate an error, you need prior knowledge. Do I really have all of that prior knowledge? One blog post explains one situation; another explains something different. Do I understand all of these situations? When searching on Google, you have to be able to read a result and recognize, ‘ah, this is different from my situation,’ in order to move on and find the right information. Developing the ability to sift through information, identify relevant sources, and adapt solutions to unique contexts is crucial for any developer.

Even this simple act of searching can make developers more code-friendly. Exposure to a wide range of code examples, discussions, and tutorials cultivates a more intuitive understanding of programming concepts. It allows developers to recognize patterns, anticipate potential problems, and develop a deeper appreciation for the art of coding.

Isn’t ChatGPT the same? If you keep using it while coding, isn’t it the same thing?

ChatGPT, while helpful, provides solutions in a packaged format, often lacking the nuance and context that come from independent research. It’s like receiving a pre-written essay instead of learning how to research and write one yourself.

The Importance of the Unconscious Environment

The best example of an unconscious environment is the home environment.

Here are two children, growing up in different families. Each child sees a bird flying by and asks their parents:

“Mom (Dad), what’s that bird?”

The parents’ answers differ:

  1. A magpie.
  2. I’m curious what kind of bird it is too. Let’s look it up. It could be a magpie or a crow, but it looks like a magpie.

The first family provides a direct answer and presents a practical solution.

The second family provides an indirect answer and suggests a creative approach to exploring the answer.

How will these children grow up if they are raised in these different environments?

The child from the first family will be efficient at finding the correct answer, but may not be efficient at dealing with problems where the answer is not readily available. → ChatGPT

The child from the second family may take longer to find a simple answer, but will be more comfortable thinking about problems where the answer is not readily available. → Search and Learning (Googling)

The unconscious environment is formed in this way and is used in all aspects of daily life. The seemingly insignificant details, the subtle cues, and the surrounding context all contribute to a developer’s overall understanding and problem-solving skills.

What do you think development is? I think it’s the latter, but I’ll leave the choice to each individual. A good developer is not just someone who can write code, but someone who can think critically, analyze problems, and develop innovative solutions.

Freud’s iceberg model of the mind captures this idea: we are unconsciously influenced by the people around us and everything we come into contact with. Even if we pay no attention to a passerby saying, ‘food A is delicious these days,’ it plants a shallow awareness that ‘food A is delicious.’ When we later come across food A, we may enjoy it more than we otherwise would, or be more disappointed when it falls short of that expectation. Either way, it makes a real difference compared to never having heard the passerby at all. Our subconscious absorbs countless bits of information that shape our perceptions and influence our decisions.

Even the small piece of information that I encountered while diligently searching for information about development - information that I didn’t consciously see - will eventually become an asset. The unconscious has a much greater impact than we think. This emphasizes the importance of immersing oneself in the world of code, even if it means encountering seemingly irrelevant information along the way.

In Conclusion: My Development Philosophy

My conclusion is that ‘LLMs should be avoided as much as possible when studying, but can be used for productive activities.’ While AI tools like LLMs can be invaluable for experienced developers to enhance productivity and streamline workflows, they should be approached with caution during the initial learning stages. A strong foundation built on independent problem-solving and critical thinking is essential for long-term success in the field.

We must adapt to the post-AI era, learn how to use AI, experience its impact firsthand, and maintain a positive yet critical perspective on it. We must recognize that AI will eventually take our jobs and always consider what other impacts it may have beyond that. Let’s reflect on whether the way we use AI is helpful to our lives and our thinking, and avoid delegating our brains to AI. In essence, we must be mindful of how AI impacts our learning process and ensure that it complements, rather than replaces, our own cognitive abilities.

After much confusion, I have finally established my development philosophy: It’s important to approach development with a personal touch, infusing each line of code with intention and insight.

Infuse every line of code with my thoughts. Let’s not just create simple letters or sentences, but rather imbue them with my philosophy and thinking.

That is the difference between AI and me. AI may generate functional code, but it lacks the creativity, the intuition, and the human element that distinguishes a truly exceptional developer.

Good luck to everyone! By embracing a mindful and intentional approach to development, we can ensure that we remain valuable contributors in the ever-evolving landscape of technology.

Extra: Treating Weak Willpower, Blocking LLM Sites

Weak willpower is a disease. It is illogical to try to cure weak willpower with willpower, the very thing that is lacking. Just as with quitting smoking or drinking, it is better to introduce external measures than to rely on willpower alone. For those who struggle with the temptation to overuse AI tools, limiting access itself can be an effective strategy.

Similarly, I decided it would be good for my mental health to block LLM sites. Blocking them helps me regain control over my learning process and pushes me toward more active, independent problem-solving. The following is my method for blocking them on a Mac:

  1. Open the hosts file from the terminal (the exact commands are sketched after this list).

  2. Press i to switch to insert mode. Add a line mapping the site to the 127.0.0.1 host, as shown in the sketch after this list, with a tab between the address and the hostname.

  3. Press ESC to leave insert mode, then type :wq to save. The hosts file is consulted before any DNS (Domain Name System) lookup, so the entry ‘127.0.0.1 chatGPT.com’ means that typing chatGPT.com into the address bar resolves to 127.0.0.1 (my own machine’s loopback address) and the site never loads.
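A minimal sketch of those three steps, assuming the standard macOS hosts file at /etc/hosts and the vi editor (add one line per site you want to block):

    # 1. Open the hosts file with administrator rights (you will be asked for your password)
    sudo vi /etc/hosts

    # 2. Press i for insert mode and add a line like the following,
    #    with a tab between the address and the hostname:
    #
    #       127.0.0.1    chatGPT.com
    #
    # 3. Press ESC, then type :wq to save and quit.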

Let’s cure our weak willpower together! By taking proactive steps to manage our relationship with AI, we can harness its power without sacrificing our own cognitive development and critical thinking skills.