Blog of Professor Slobodan Đukanović, PhD
As Artificial Intelligence (AI) plays an increasingly significant role in the economic, social, scientific, medical, financial, and military spheres, it makes sense to consider the potential benefits and risks it may bring to humanity.
The benefits have already been discussed in terms of AI’s achievements, particularly in medicine and climate science. Our entire civilization is a product of human intelligence, so access to substantially greater machine intelligence would raise the ceiling of our ambitions considerably. AI and robotics could liberate humanity from tedious, repetitive tasks and substantially increase the production of goods and services, ushering in an era of abundance and peace, the ultimate goal of humanity. Accelerating scientific research with the help of AI could yield cures for currently incurable diseases and solutions to climate change and resource shortages. The statement by the head of Google DeepMind, “First, solve AI, then use AI to solve everything else,” succinctly captures this desired trajectory.
1. Lethal Autonomous Weapons: Yes, the Terminator! The United Nations defines such a weapon as one that can locate, select, and eliminate human targets without human intervention. Most concerning is its scalability: because no human oversight or intervention is needed, a small group of people could deploy an arbitrary number of such weapons against human targets.
2. Surveillance and Influence on Decision-Making: AI (speech recognition, computer vision, and natural language understanding) can be used for large-scale surveillance of individuals and for detecting activities of interest. By selecting information with machine learning techniques and delivering it to individuals through social media, political opinions can be modified and, to an extent, controlled, as was evident in the 2016 U.S. elections. Fabricated audio and video recordings, so-called “deepfakes,” can be immensely powerful in this context. For example, an audio recording of a politician can be manipulated to make it seem that the person holds racist or sexist views, even if they never said anything of the sort. A single high-quality fake of this kind can destroy that person’s political campaign.
3. Biased Decision-Making: Careless or intentional misuse of machine learning algorithms can result in decisions biased by race, gender, or other attributes. One example is the automated assessment of parole or loan applications.
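One way such bias can be detected is by auditing a model's decisions for group-level disparities. The following is a minimal sketch, using hypothetical loan-decision data and a simple metric (the gap in approval rates between two groups, often called the demographic parity difference); the data and function names are illustrative, not taken from any real system.

```python
def approval_rate(decisions):
    """Fraction of positive (approve) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    """Absolute gap in approval rates between two groups.
    A value near 0 suggests similar treatment; a large value flags disparity."""
    return abs(approval_rate(decisions_a) - approval_rate(decisions_b))

# Hypothetical loan decisions (1 = approved, 0 = denied) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6/8 = 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 = 37.5% approved

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375
```

A large gap does not by itself prove discrimination (the groups may differ in legitimate ways), but it is a standard first signal that prompts closer scrutiny of the model and its training data.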
4. Impact on Employment: The concern that machines will replace human jobs is centuries old, and the story is not black and white. Machines take over certain tasks from humans, but in doing so they make workers more productive and thereby more employable; they also make companies more profitable, enabling them to pay higher wages. Machines can make some tasks economically viable that would otherwise be impractical. Their use increases overall wealth, but it tends to shift wealth from workers to capital owners, exacerbating inequality. Technological advances in the past have caused dramatic drops in employment in particular sectors, yet people have always found new types of work afterward; it is possible that AI will have the same effect. Managing this transition is becoming one of the key challenges for economies worldwide.
5. Cybersecurity: AI techniques are useful in defending against cyberattacks, but they will also contribute to the spread and resilience of malware. For example, reinforcement learning methods have already been used to create highly effective tools for automated, personalized extortion.
Particular concern has recently been raised about Artificial Superintelligence, intelligence that far surpasses human capabilities, especially in light of recent advances in deep learning, the publication of books such as Superintelligence by Nick Bostrom (2014), and public statements by Stephen Hawking, Bill Gates, and Elon Musk. In fact, Alan Turing expressed the same concern as early as 1951, doubting whether humans could control a machine more intelligent than themselves.
The discomfort with the idea of creating superintelligent machines is natural. This is known as the “gorilla problem”: around 7 million years ago, humans and gorillas shared a common ancestor, yet today gorillas have no control over their future; it is in the hands of humans. By creating superintelligence, would humans be consciously relinquishing control over their own future?