P.S.: I wrote this article in April 2022. If you have any suggestions, feel free to leave them in the comments section below.

Artificial General Intelligence: Possibilities, Dangers, and Strategies

  AI technologies have been developing exponentially over the last few decades. Nowadays, often without being aware of it, we interact with AI many times a day through internet search engines, social media, and shopping websites. Although today’s AI is not comparable to human intelligence at a general level, it outperforms humans in specific tasks that require massive computation, such as analyzing millions of prices in a market. Such systems are therefore usually called “narrow AI.” However, AI seems to be evolving into something capable not only of specific tasks but also of increasingly general ones. Furthermore, according to some experts, at some point we will see human-level AIs.

  Human-level AI is often called Artificial General Intelligence (AGI). Unlike humans, an AGI is not bound by biological constraints, and it can compute far faster, so it can improve itself quickly and continuously. Consequently, there is no consensus on what will happen when the first AGI is created. Alongside the optimistic scenarios, there are many possible dangerous scenarios in which AGI damages or even eradicates our civilization. However, some experts think it inconceivable that a machine will match human intelligence any time soon, and they offer different arguments. Some believe that humans are not merely physical objects and therefore cannot be fully imitated by computers. Others claim that the sheer complexity of the human brain makes intelligence too hard to replicate in machines, and thus that AGI will not be created, at least not in this century. According to these experts, then, we do not need to take precautions, especially since precautions could slow the development of AI technologies that might otherwise advance our civilization significantly.

  In complete contrast to that point of view, this paper will argue that an AI at least as capable as humans (AGI) is highly likely to be created within the next fifty years. The question, then, is what the dangers are and what strategies should be followed against them. First of all, we should be careful and cautious about possible catastrophic scenarios, up to and including our own destruction, so that we can prepare ourselves properly. Moreover, to protect ourselves, we should prioritize AI safety precautions. On the other hand, although we are nearing a point where further imprudent development could lead to harmful AGI and possibly irreversible consequences for our civilization, even at the extinction level, AI also has an important place in the progress of our civilization and should not be slowed down unnecessarily. For this reason, we must act selectively and carefully in all our regulations and precautions.

Is it possible to develop Artificial General Intelligence?

  Many philosophers and scientists have thought about how intelligence works. Some claimed that there is a soul inside living things that enables them to be alive. Others held that the brain is the sole source of intelligence and that it should be explainable by the laws of physics alone, without any supernatural elements. In 1950, the computer scientist Alan Turing argued that a universal computing machine can, in principle, compute and imitate any process that can be precisely described. It follows that if the human brain consists only of physical matter and interactions, then simulating the human brain on a computer, and thereby replicating human intelligence, is theoretically possible (Turing, 1950). After Turing’s work, many scientists began working on the problem. However, as the human brain is the most complex structure we have yet discovered in our universe, it is not easy to replicate in a computer (Ackerman, 1993). Additionally, some experts believe that imitating the human brain is not the best approach to creating intelligence.
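  To make Turing’s claim more concrete, here is a minimal sketch written for this article (my own illustration, not taken from Turing’s paper): a few lines of Python suffice to simulate a simple Turing machine, the abstract device at the heart of his argument. The simulator is general; the example state table merely flips the bits of a binary string.

```python
# Illustrative sketch (not from Turing's paper): a minimal Turing machine
# simulator. Turing's point is that one simple mechanism -- a tape, a head,
# and a state table -- suffices to carry out any computation that can be
# precisely described. This toy machine only moves right, for simplicity.

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        if head == len(tape):
            tape.append(blank)  # extend the tape on demand
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# State table: (state, read symbol) -> (write symbol, move, next state)
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", flip_rules))  # prints "01001"
```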

  Today we have only artificial narrow intelligence: programs that master specific tasks such as playing chess. A program as capable as a human, by contrast, is called Artificial General Intelligence. There are two main approaches to creating an AGI: the top-down (symbolic) approach and the bottom-up (sub-symbolic) approach. The symbolic approach is rule-based and was popular in the 1980s; the sub-symbolic approach is connectionist, loosely modeled on the human brain. The debate between the two camps is long-running and unresolved, though recent years have seen a push toward hybrid solutions, and the research continues (Alam et al., 2020). While it is not certain which approach, or combination of approaches, will help us create AGI, almost every expert agrees that it will one day be created; it is just not apparent when.
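  As a toy illustration of the contrast (my own example, not from Alam et al.), the same simple concept can be captured once as an explicit, human-readable rule and once as numeric weights learned from examples:

```python
# Illustrative sketch: the same toy concept captured two ways.
# Concept: "the point (x, y) lies above the line x + y = 1".

# Symbolic (top-down): knowledge is an explicit, human-readable rule.
def symbolic_above(x: float, y: float) -> bool:
    return x + y > 1.0

# Sub-symbolic (bottom-up): a perceptron learns the same boundary from
# labeled examples; its "knowledge" is distributed across numeric weights.
def train_perceptron(data, epochs=100, lr=0.1):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x, y), label in data:
            pred = 1 if (w0 * x + w1 * y + b) > 0 else 0
            err = label - pred          # classic perceptron update rule
            w0 += lr * err * x
            w1 += lr * err * y
            b += lr * err
    return w0, w1, b

examples = [((0.1, 0.2), 0), ((0.9, 0.8), 1),
            ((0.3, 0.3), 0), ((0.6, 0.7), 1)]
w0, w1, b = train_perceptron(examples)

point = (0.8, 0.5)
print(symbolic_above(*point))                       # rule says: True
print((w0 * point[0] + w1 * point[1] + b) > 0)      # learned weights agree
```

  The symbolic version is transparent but had to be written by hand; the sub-symbolic version was learned from data but stores its knowledge opaquely, which is exactly the trade-off driving the push toward hybrid methods.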

  The vast majority of experts predict that AGI will be produced in this century. In a poll of AI experts at the Puerto Rico AI Conference asking in which year they expect human-level AGI, the median answer was 2055 (Tegmark, 2017, p. 82). Furthermore, most of them think we should be cautious about AGI because we do not know what will happen when it is created: it will be so complex that we may ultimately lose control over it. On the other hand, some experts, such as Andrew Ng, believe that the age of AGI is far off and that we therefore should not be concerned about it (as cited in Tegmark, 2017, p. 47). Although it is not certain that AGI will be created in the upcoming decades, we should research possible scenarios and prioritize proper precautions in order to avoid potentially disastrous consequences.

What are the possible AGI scenarios?

  There are three possible forms of AGI: speed AGI, collective AGI, and quality AGI. Their definitions are as follows:

  • “Speed AGI: A system that does all that a human intellect can do, but much faster.” […]

  • “Collective AGI: A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.” […]

  • “Quality AGI: A system that is as fast as a human mind and vastly qualitatively smarter.” (Bostrom, 2014, p. 56)

  However it is achieved, once we build AGI, it will be able to develop itself. Furthermore, since AGI is free of the physical and psychological restrictions that limit humans, it can improve itself continuously, so its rate of development will be exponential. This event is called an “intelligence explosion” (Tegmark, 2017, p. 272). Stephen Hawking explained the intelligence explosion this way: “The development of full artificial intelligence could spell the end of the human race. […] It would take off on its own and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, could not compete and would be superseded.” (as cited in Cellan-Jones, 2014). In the end, even though we cannot be sure what will happen when an intelligence explosion emerges, we have some predictions.
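  A toy numerical sketch may help show why self-improvement is expected to be exponential rather than linear. The numbers below are arbitrary assumptions chosen for illustration, not predictions from the sources: when each gain in capability can be reinvested in producing further gains, capability grows in proportion to itself, whereas a learner whose gains do not compound grows only linearly.

```python
# Illustrative sketch: recursive self-improvement as compounding growth.
# If capability C is reinvested into improving the system, then roughly
# dC/dt = k * C, which yields exponential growth; a non-compounding
# learner with a fixed gain per step grows only linearly.
# K and STEPS are arbitrary assumptions for the illustration.

K = 0.5        # fraction of capability reinvested into growth per step
STEPS = 20

compounding = 1.0   # self-improving system: gains feed back into capability
linear = 1.0        # non-compounding learner: fixed gain per step

for step in range(1, STEPS + 1):
    compounding += K * compounding  # growth proportional to current capability
    linear += K                     # constant growth
    if step % 5 == 0:
        print(f"step {step:2d}: compounding={compounding:10.1f}  linear={linear:5.1f}")
```

  After twenty steps the compounding system is already hundreds of times more capable than the linear one, which is the intuition behind Hawking’s “ever-increasing rate.”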

  There are various possible AI aftermath scenarios: in some, humans are in control; in others, humans are only partially in control or not in control at all. In most scenarios, an intelligence explosion leaves AGI with superhuman intelligence. For example, one scenario assumes that we can control a superhuman intelligence using various techniques, but its power can be used for good or bad purposes depending on its human controllers. In another scenario, AGI runs society and maximizes people’s happiness. In one of the worst scenarios, AGI takes control and gets rid of all humans. To safeguard the continuity of our species, we must be vigilant about these severe negative scenarios. As stated above, we cannot be sure what will happen, because the behavior of anything more intelligent than us is nearly impossible to predict. However, that does not mean nothing can be done. The critical point is that the race toward building AGI should not stop us from thinking about what we want the aftermath to be like (Tegmark, 2017, p. 273). For this reason, we should pay attention to research on security measures to prevent undesired outcomes.

What can we do to protect our civilization from the possible dangerous effects of AGI?

  The development of AI technologies should be regulated by new laws because, as stated in the previous section, unregulated advances toward AGI can do more harm than good. To preserve both progress and regulation, the development of a legal framework for artificial intelligence can be conditionally divided into two approaches: first, the creation of a legal framework for the introduction of applied AI systems and the stimulation of their development; second, regulation of the sphere of creating artificial “superintelligence,” in particular, compliance of the developed technologies with generally recognized standards in the fields of ethics and law (Khisamova et al., 2019, p. 1).

  Today, governments and companies heavily support and fund artificial intelligence research, because AI promises to improve almost every industry significantly. However, as the AI researcher Peter Bentley points out, there is a kind of paradox at work here: large claims lead to significant publicity, which leads to extensive investment, but also to new regulations; the regulation stifles innovation; then the inevitable reality hits home, AI does not live up to the hype, the investment dries up, AI becomes a dirty phrase that no one dares speak, and another AI winter destroys progress (as cited in Baum, 2018, p. 11). There is an ongoing debate between those who defend proactive, immediate regulation and those concerned that regulation will slow the development of AI (Kaplan & Haenlein, 2019, p. 22). The challenge is not whether to promote AI but how to regulate it without slowing it down. For this reason, it is necessary to act carefully and selectively with regulations.

  The first option is to regulate the technologies themselves by restricting the production of certain AI algorithms, methods, techniques, or end products that could cause harmful consequences. For instance, fully autonomous cars may get out of control and cause fatal damage; therefore, the use of autonomous vehicles without a driver could be prohibited. However, fixed rules may not work, as AI systems constantly change, and this type of regulation may be detrimental to the development of AI. Rather than attempting to control the technology itself, some experts, such as Kaplan and Haenlein, propose developing shared norms such as rules for algorithm testing and transparency, similar to the safety testing done for physical products and possibly accompanied by consumer warranties. At this point, constant testing and developer-approved guarantees seem like the most logical method. This would also allow regulations to remain stable, eliminating the need for constant updates in response to technological advances (Kaplan & Haenlein, 2019, p. 22). In this way, we can ensure safe advances without hindering development in the field of AI.
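  To illustrate what such “rules for algorithm testing” might look like in practice, here is a hypothetical sketch (my own construction, not from Kaplan and Haenlein): a stand-in driving policy must pass a fixed behavioral test suite, analogous to product safety testing, before it is approved for deployment. All names and limits below are invented for the example.

```python
# Hypothetical sketch of algorithm safety testing (invented example, not an
# API from any source). A toy controller is checked against fixed behavioral
# bounds before deployment, the way physical products pass safety tests.

SPEED_LIMIT_KMH = 50.0

def toy_speed_controller(distance_to_obstacle_m: float) -> float:
    """Stand-in for a learned driving policy: returns a speed command."""
    return min(SPEED_LIMIT_KMH, distance_to_obstacle_m * 2.0)

def safety_test_suite(controller) -> bool:
    """Deployment gate: every scenario must stay within the agreed bounds."""
    scenarios = [0.0, 1.0, 5.0, 20.0, 100.0, 1000.0]  # distances in meters
    for d in scenarios:
        speed = controller(d)
        if not (0.0 <= speed <= SPEED_LIMIT_KMH):
            print(f"FAIL: distance={d} m produced speed={speed} km/h")
            return False
        if d < 2.0 and speed > 5.0:  # must crawl when an obstacle is close
            print(f"FAIL: too fast ({speed} km/h) at {d} m from obstacle")
            return False
    return True

if __name__ == "__main__":
    print("approved for deployment:", safety_test_suite(toy_speed_controller))
```

  Because the test suite, not the algorithm’s internals, is what the regulation fixes, the developer remains free to change the underlying system as long as every release passes the same behavioral gate.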

Conclusions

  It seems that there are numerous dangers and problems on the path to AGI. Beyond the critical problems that narrow AIs can cause, a general AI with particularly undesirable purposes could mean the end of our species if it gets out of control. Therefore, more detailed research should be done on what could lead to negative scenarios, and we need to take comprehensive measures before it is too late. Despite its significant risks, AI can also improve our lives considerably. It can eliminate repetitive and non-creative tasks such as driving for hours every day, working as a cashier, or working in factories. It can also provide solutions to global problems that we have not been able to solve on our own. For example, the causes of climate change can be analyzed more accurately with newly developed AI systems, and some of the systems that affect the climate might one day be managed directly by AI (Huntingford et al., 2019). Furthermore, as Isaac Asimov imagined in I, Robot, general artificial intelligence could perhaps be used in the administration of states, addressing problems such as injustice and corruption that almost no state has managed to eliminate and that underlie many other global problems (Asimov, 1950). If we succeed, as Bostrom states, a peaceful AGI may be our last invention (Bostrom, 2013, p. 4). Moreover, as Kurzweil predicts, when machine intelligence surpasses human intelligence and gives rise to the singularity in a few decades, we may experience something like heaven while still alive (Kurzweil, 2005).

Yavuz Alp Sencer OZTURK, 04/2022

Works Cited

  • Alam, M., Groth, P., Hitzler, P., Paulheim, H., Sack, H., & Tresp, V. (2020). CSSA’20: Workshop on Combining Symbolic and Sub-Symbolic Methods and their Applications. Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 1. https://doi.org/10.1145/3340531.3414072

  • Baum, S. (2018). Countering superintelligence misinformation. Information, 9(10), 244. https://doi.org/10.3390/info9100244

  • Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

  • Cellan-Jones, R. (2014, December 2). Stephen Hawking warns artificial intelligence could end mankind. BBC News. Retrieved April 22, 2022, from https://www.bbc.com/news/technology-30290540

  • Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? on the interpretations, illustrations, and implications of Artificial Intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004

  • Khisamova, Z. I., Begishev, I. R., & Gaifutdinov, R. R. (2019). On methods to legal regulation of Artificial Intelligence in the world. International Journal of Innovative Technology and Exploring Engineering, 9(1), 5159–5162. https://doi.org/10.35940/ijitee.a9220.119119

  • Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Viking.

  • Taddeo, M., & Floridi, L. (2018). Regulate artificial intelligence to avert cyber arms race. Nature, 556(7701), 296–298. https://doi.org/10.1038/d41586-018-04602-6

  • Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.

  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/lix.236.433