Intelligence itself is difficult to define, and there is often a gap between philosophers and engineers in how they understand artificial intelligence (AI). Philosophers distinguish between “weak” and “strong” AI. In the first case, machines merely act as if they were intelligent; in the latter, they would actually be thinking rather than just simulating thought, which includes the capacity for reflection (an activity philosophers associate with consciousness). Many AI researchers take the “weak” view for granted and do not distinguish between real intelligence and its simulation. Typically, philosophers emphasize the role of consciousness in the performance of an action, while engineers are more interested in the result of that action, independent of its motivation or the process behind it.
As a general-purpose technology, AI has countless applications, and most of us cannot even imagine the breadth of its use. The impact of AI has been, and will be, so great that it is often compared to the invention of fire or electricity, as Brian Patrick Green notes in Ethical Reflections on Artificial Intelligence. On the other hand, more than a few prophets have voiced concern over AI’s wide and ever-spreading infiltration into almost every aspect of life, including its negative impact on employment, structural changes in the way we live and what we do, and even in how we perceive our existence and identity. Should we, then, be worried or even afraid, or should we embrace the coming technological revolution?
This question cannot be answered without looking at the relationship between Christianity and science and technology, especially given the centuries-long animosity between the Church and science. We should be wary of knee-jerk suspicion, making sure that any criticism of technology is not simply blind adherence to this fraught relationship.
Defining Good and Bad
There are relatively few technologies whose use a Christian must summarily reject. What, then, are the criteria for accepting a given technology? Speaking generally, a technology can be good, neutral or bad. From a Christian moral perspective, a “good” technology is one that supports and promotes life and human dignity, while an “intrinsically bad” technology acts against these core values. This distinction is necessary in order to oppose a “technocratic paradigm” which holds that technology is an “unconditional blessing,” that is, a solution to (almost) every problem. Of course, technology has brought immense improvements to human life; even so, we should not forget that technology is a means to some specific end and not an end in itself. It should be clear that a Christian is not against technology as such, but against “bad” technology or its improper use.
AI (alongside bio- and nanotechnology) has recently emerged as a technology shaping our everyday life, and, according to experts, we are still only on the brink of its real capacities. AI is a clear example of a “dual-use” technology – one whose immense benefits come with enormous challenges. That is why we evaluate such technologies using the criterion above: whether or not they support life and dignity. One aspect is especially important: the question of equality and justice, understood as a minimum condition for AI’s participation in the life of the human community.
There are many challenges connected to ethical reflection on AI, such as security, transparency and privacy, destructive potential, and legal status (specifically, whether AI should be considered a moral agent). For our purposes here, we will focus on the impact of AI on employment and equality. Christian social teaching, which reflects Christ’s words about “the least of these” in Matthew 25:40, expresses its concern for “the poor.” And in the second half of the 20th century, a clear link was made between anthropology (the human condition) and theology, between human promotion and evangelization. This relationship stirs significant controversy. Some say that the “digital revolution” (of which AI is an important constituent) and its effects are essentially the same as those we have witnessed many times in the past whenever a technological paradigm shifted. Others fear a radical change that endangers our way of life.

Is the digital revolution, and specifically AI, a change structurally analogous to those of the past (for example, the industrial revolution of the 19th century), or is there something fundamentally new and unprecedented here? In the 19th century, the Luddites broke machines to prevent them from taking their jobs. But after a transition period, new technologies created many more jobs than ever before. Labour productivity also grew, and salaries followed. There have been many more such changes: massive industrialization, the entry of women into employment during wartime, new digital technologies, the onset of automation in industry, and so on. We have always witnessed the same pattern: a transient period followed by an increase in labour productivity and wages. Why, then, should we not assume everything is headed in essentially the same direction, and why worry about AI at all?
Love’s Labour Lost
In today’s globalized world, the push toward cost-effective, global product distribution is driving the emergence of “winner-take-all” markets dominated by a single player. For a long time this could be observed in sports or music, for example, where the top earner takes more of the profits than all the other participants combined. Today, however, the clearest example of monopolistic structures is the tech market. The statistics confirm that today’s technological changes can influence, in a new and radical way, how the labour market functions. There is a very real risk that robotization and automation (one specific application of AI) will eliminate many more jobs than they create. This kind of change is structural, which is not new; what is unprecedented is that it endangers up to 70 percent of positions in developed countries.
Until now, every structural change has principally moved the labour force from production sectors toward the service sector. Today’s revolutionary change, however, affects that sector as well. Moreover, until now, the general population (as a labour force) participated in the growth of the wealth created, both directly (through wages) and indirectly (through the use of that wealth). In the last two to three decades, however, the relationship has shifted, with 80 percent of economic growth enjoyed by only five percent of Americans. This trend will only grow stronger with the ongoing automation of the labour force. It is possible to imagine that one day all production will be done without people, with all business profit going to a single person – the business owner – and no sharing of benefits (i.e., wages).
There is nothing wrong with inequality per se. We have been given different capacities, which we develop under different conditions in different environments, and this logically produces different outcomes. As Christians, however, we are called to follow the example of Jesus Christ, who was concerned about inequality, particularly imbalances of power and wealth. A situation like the one described above creates a prohibitive inequality that harms human dignity: it sets a small group of people ever further above society and denies active participation to the rest. The very dangerous tendency of today’s model of AI development is that this group of the “excluded” is growing rapidly. If the current trend continues, we could very quickly reach a situation in which the majority of society is excluded from the benefits of economic and technological advancement, appropriated instead by a small elite of owners of humanless factories and offices. This would, in effect, break the basic anthropological character of the human person as a relational being.
So now what?
We hear daily from tech companies that these trends and phenomena are necessary consequences of digital technologies. It must be pointed out, however, that what we are experiencing (or are about to experience) is only one possible path of development. AI could be employed in areas where it would greatly enhance human productivity – for example, precision manufacturing, healthcare or education. In these sectors there is huge scope for the individualized improvements AI makes possible. Our current experience of life restricted by COVID-19 provides an excellent example. In some countries, such as South Korea and the Czech Republic, an “intelligent quarantine” was applied instead of the full-scale “blind quarantine” seen elsewhere: data from mobile operators and AI algorithms were used to identify the infected and trace possible chains of contagion. Applied more widely, this use of AI could have saved a significant share of the millions of jobs lost in recent months.
For many, losing a job means losing the ability to provide for a family, forcing people to search for any solution possible. This, in turn, can lead to a loss of dignity, which, as noted above, stands in direct contradiction to Christian social teaching. But what we are experiencing today is a world in which the automation paradigm – that is, labour replacement – is preferred. This is not the result of “free market” conditions, as one might assume, but of current policies that favour investing in automation rather than in things that would increase human productivity. That said, neither automation nor many of the accompanying phenomena of an AI revolution (such as ever-increasing incursions on privacy and human dignity, job losses and wage stagnation) are a necessary “price” of “progress.” Nor is there any guarantee of a causal relationship between technological change and increased well-being and social progress. The technology itself is not the problem; technology is value-neutral. The politics and policies we attach to it are not. Whether we are talking about benefits or negative effects, both are the result of the decisions we make about its use. Once COVID-19 restrictions are loosened, we will have to make a choice: will human beings be replaced by virus-resistant AI applications, or will we decide to create an AI paradigm that supports and facilitates human-machine cooperation? Certainly, “progress” can mean a reduction in noisy and repetitive activities and in the need for dangerous jobs; but when automation leads only to cost reduction, with increased inequality and a loss of human dignity, we certainly cannot speak of progress.