The theme of robots versus humanity has been a constant in science fiction since the beginning of the 20th century. One hundred years later, Artificial Intelligence systems are still in their infancy. However, since technology develops exponentially, not linearly, concerns about its dangers are real and should be taken seriously. Although jobs, privacy, and national security are all on the list of possible drawbacks with huge impact, there are other issues we should be worried about right now.
Is AI necessary?
Since the list of possible risks is long, it makes sense to ask whether we need this tool at all, or whether it is too dangerous and should be abandoned. So far, a weak form of AI has proven very beneficial. The pros include speeding up processes and helping e-commerce companies grow by personalizing their offers to their customers’ needs and desires. These algorithms can also assist in drug testing and improve healthcare, as in the case of artificial intelligence company InData Labs’ “Flo” app, focused on women’s reproductive health through pattern analysis. These advancements mean thousands of man-hours saved and redirected towards other purposes, as well as better and more affordable products. But what about the turning point when rapid development becomes a threat?
Today’s thought leaders and future-minded enthusiasts, including Stephen Hawking, Elon Musk, Bill Gates, and Mark Zuckerberg, have yet to reach a consensus on the Artificial Intelligence Singularity, a moment defined as “a point at which artificial intelligence outstrips our own and machines go on to improve themselves at an exponential rate.” The real question is how a lack of conscience and empathy would combine with excellence at tasks. Since that moment is still far away, let’s look at the current concerns.
Risks Of Artificial Intelligence
The general sentiment surrounding Artificial Intelligence and Big Data is positive, and hardly any sales pitch mentions the downsides. Yet those downsides are real, and they should be discussed and considered.
Technological advancements have always had the short-term side effect of displacing certain jobs, followed by unemployment. Historically, only a small fraction of displaced employees fail to retrain and find new work. Yet if AI becomes highly skilled, it is thought that it could make the entire concept of human work redundant.
We’ve seen AI diagnosing cancer, handling customer service, and even creating art. Is there anything left for people to do besides being software engineers and designing machine learning algorithms? Although AI can be trained to be extremely skilled, it is only capable of replicating what it has learned. It lacks human reasoning and still needs guidance or mentoring from a human expert, so there is still a place for highly skilled specialists. The only employees who should rethink their careers are those with easy-to-automate jobs, such as bookkeepers and telemarketers.
Another top concern related to AI is personal space and the invasion of privacy. Humorous depictions of this concept appear in various movies, often as an omnipresent personal assistant with a sarcastic personality. The concept isn’t too far-fetched, considering Zuckerberg’s project Jarvis, an AI butler that takes text and voice commands, performs facial recognition to admit and welcome guests, and can even entertain a child.
Although convenient, some users could feel that an AI system attuned to their every move is too intrusive. On the other hand, a smart AI assistant with voice features could be a great companion for lonely people. The risk, in this case, is that it may deepen an individual’s depression, since a machine can only mimic emotions, not genuinely feel them.
The most unsettling thing about AI is its unpredictability, which leads to serious safety concerns. Although it is a fast and diligent learner, there is often no record of how it arrives at a particular decision. Even when the outcome is correct, it comes out of a black box; if something goes wrong, there is no way to fix it other than retraining the machine on a large enough set of good examples and hoping for the best.
This is one of the most significant risks, regardless of the application. A wrong diagnosis could be fatal, and in the case of a driverless car, it takes only one crossing on a red light.
Trusting AI too much
The safety concerns mentioned above stem from the way AI, and specifically machine learning, works. It delves into data, identifies patterns, makes connections, and offers a conclusion. It is a trial-and-error process, and it can sometimes misbehave or produce curious results. Essentially, the system is only as good as the data it has been fed. This is another good reason for AI to remain under human supervision for now, used as an assistant rather than a decision maker.
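To make the “only as good as its data” point concrete, here is a minimal sketch in pure Python. The learner (a simple 1-nearest-neighbour classifier) and the loan-application data are entirely made up for illustration, but the behaviour is representative: a model trained on a biased sample faithfully reproduces that bias at prediction time.

```python
# Hypothetical example: a 1-nearest-neighbour classifier trained on
# biased data. Feature layout (invented for this sketch):
#   (group_flag, application_score) -> "approve" or "reject"

def nearest_neighbour(train, point):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda example: dist(example[0], point))[1]

# Biased training set: every applicant with group_flag == 1 was labelled
# "reject", regardless of the more relevant second feature (the score).
train = [
    ((1, 90), "reject"),
    ((1, 85), "reject"),
    ((0, 40), "approve"),
    ((0, 35), "approve"),
]

# A high-scoring applicant from the disadvantaged group is still rejected,
# because the model has only ever seen rejections for that group.
print(nearest_neighbour(train, (1, 95)))  # -> reject
```

Nothing in the algorithm is “wrong”; the flaw lives in the training data, which is why the only remedy, as noted above, is retraining on a better sample.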
Developing a sense of self
This is not an immediate concern, as a strong form of AI, one that is independent and able to reason and make its own decisions, exists only in fiction for now. Yet it does not hurt to run thought experiments about the new frameworks needed to accommodate such an entity. What would its legal status be? Would it have rights and duties? Would humans still have total command over such forms of intelligence? Would it learn to think about itself and want things? Where do you draw the line between machine and humanoid?
Biggest risk: not letting it realize its full potential
Although the dangers of giving AI too much power too soon should not be neglected, the reverse risk is depriving the system of the chance to reach its full potential. AI should be treated as a promising new member of the staff. It won’t get the corner office in its first week, but it should get a fair chance at learning and promotion under close supervision. After all, it works and learns 24/7 and never asks for a raise.