First of all, I want to clarify not so much what this piece is, but what it is not. This piece is not about whether artificial intelligence (LLM derivatives or some future architecture) has the capability to replace software engineers, or whether it can reach general intelligence.

This piece aims to show that, if we assume artificial intelligence can replace software engineers, the question "will AI end software work?" is only a small part of a much bigger picture.


We can approach this from a few different angles, but let's start with a definition: what does a software engineer do? While there are many ways to define it, at its core we can think of it as the process of "turning incoming requests into a product."

What we call "requests" here can be text, images, video, sound, or a mixture of these that defines the product. The set of incoming requests is, therefore, simply a collection of data.

When we look at the output, we see the written code, the environment it runs in, and the processes through which that code becomes a product. So, at its most fundamental, software engineering is the work of transforming the incoming-requests dataset into the product dataset.

Even though this looks like a complex process, for a machine or an AI it is a transformation independent of the physical world: take data A as input, produce data B as output.
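The abstraction above can be sketched as a typed function. This is a purely illustrative model, not a claim about any real system; the type names and the trivial transformation inside are invented for the example:

```python
from dataclasses import dataclass, field

# Hypothetical container types: "requests" can be any mix of text,
# images, video, or sound that defines the product.
@dataclass
class RequestData:
    texts: list[str]
    images: list[bytes] = field(default_factory=list)

# The output side: code plus the configuration it runs with.
@dataclass
class ProductData:
    code: str
    config: str

def knowledge_work(requests: RequestData) -> ProductData:
    """Software engineering, reduced to its most abstract form:
    a transformation from the request dataset to the product dataset."""
    # A deliberately trivial stand-in: turn each textual request
    # into a TODO line of "code".
    code = "\n".join(f"# TODO: {t}" for t in requests.texts)
    return ProductData(code=code, config="")

result = knowledge_work(RequestData(texts=["add a login page"]))
```

The point of the sketch is only the signature: data in, data out, with no dependence on the physical world.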


In general—though there are exceptions—we can define almost every job done in front of a computer or screen in this way.

For example, a radiologist transforms incoming image data into written diagnosis data. An accountant takes various financial data and fits it into certain standards and the formats institutions require. A financial analyst takes companies' past balance sheets, current market news, and economic indicators, runs them through statistical models, and transforms them into an investment report containing recommendations.

These examples can be multiplied. Let's call all of these professions—meaning professions that work in the digital world, take a dataset as input, and produce another dataset as output—"knowledge workers."

Let's even divide knowledge workers into two: routine knowledge workers and creative knowledge workers.


Let's define routine knowledge workers as people whose data transformations are fully defined in advance, and creative knowledge workers as people whose transformations are not defined at the outset, and who set the rules themselves during the process, depending on context.

In fact, even though both basically pass data through a function and may look formally similar, from the standpoint of human thinking the former acts according to a fixed list of rules. The latter changes its behavior depending on the situation, requires "creativity," and can work even when the inputs are not clear.
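The distinction can be made concrete with a toy sketch. Everything here is invented for illustration (the tax rates, the rules, the phrasing); the contrast is what matters: the routine worker applies a table fixed in advance, while the creative worker derives its rules from the input as it goes:

```python
# Routine knowledge work: every transformation is defined in advance.
TAX_RULES = {"food": 0.08, "books": 0.0, "electronics": 0.20}  # invented rates

def routine_worker(item: str, price: float) -> float:
    # A fixed lookup; an input outside the rule table simply fails.
    return price * (1 + TAX_RULES[item])

# Creative knowledge work: the rules are set during the process,
# depending on context, and even unclear inputs must be handled.
def creative_worker(request: str) -> str:
    if "urgent" in request:           # a rule decided on the spot
        return "ship a minimal fix first, refactor later"
    if not request.strip():           # even an empty request gets a decision
        return "ask the stakeholder what the actual problem is"
    return f"design a solution for: {request}"
```

Formally both are just functions over data, which is exactly why, from a machine's point of view, the two kinds of work look closer to each other than they feel to a human.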

We see that the work done by routine knowledge workers has already been taken over by software products. For example, whereas in the past you needed a human to add two numbers, abacuses, then mechanical computers, and then calculators progressively delegated this work to machines.


With the LLM and AI revolution, delegating the second kind of knowledge work—namely the creative part—to AI is now on the table.

Software engineering is actually one of the relatively complex areas of "knowledge work." Inputs and desired outputs are often ambiguous; the core work is a set of processes that requires creativity and the ability to resolve that ambiguity. Of course, from a machine's perspective it has some advantages too—for example, checking outputs is easier than in other professions.

Because AI emerged right next to software engineers, and because those who first used it, productized it, or solved their own problems with it were, again, software engineers, the perception that this technology will eliminate the software engineering profession is very strong.

Maybe this argument is correct. However, a world where AI can do software engineering from start to finish is a world where it can also perform a large majority of creative knowledge work.


A brief aside is needed here: a portion of creative knowledge workers are "regulated creative knowledge workers"—professional fields whose responsibilities and rules of work are set by states or institutions: those who make high-level financial decisions, doctors, lawyers, and judges.

AI being able to do a job and being allowed to do it are, for some professional groups, different things. These roles, and some others, require "social trust"; society may therefore prefer that these jobs continue to be done by humans.

But returning to the main point: in a world where AI can do software engineering, it can do a large part of creative knowledge work as well.


The critical point is this: creative knowledge workers—who make up most of what we call the white-collar segment—constitute 20–30% of the workforce worldwide, and 60–70% of the workforce in "developed" countries.

The group that does software work is less than 1% of the workforce.

Therefore, if AI can do this work, it will not stop at those who deal with software; within a short period it will become capable of doing the work of the majority of this population.


Whether this is good or bad is the subject of another piece and discussion. But what is clear is this: when this process is completed, not only software developers but all creative knowledge workers will be affected.

Therefore, the issue will be far beyond the loss of a single professional group; it will be a collective workforce transformation.