10 Most Common Myths About AI
Will AI take over our jobs, the world, or humanity? The myths surrounding artificial intelligence are endless, and many are far-fetched. Discover how to separate the AI myths from the real potential of artificial intelligence.
When it comes to myths about AI, many theories might seem far-fetched, yet some are grounded in fact. Given the hype around the technology, it can be difficult to determine the true disruptive power of artificial intelligence on society.
Many of these theories revolve around AI somehow taking over human civilization. The idea may even seem plausible when paired with the science fiction trope of granting great power to artificial intelligence and robots. However, a look at the technology's actual capabilities and how it is used today makes clear that such a takeover isn't possible.
In this article, we will delve deeper into some of the most popular myths about artificial intelligence. We will also investigate the real-world implications and possibilities of such theories and attempt to separate hype from reality in the AI sphere.
Artificial Intelligence: Fact vs. Fiction
Let’s first look at the reality of AI. Today, artificial intelligence is simply a computer program that mimics certain aspects of human thinking and intelligence. These programs are created with a narrow focus and cannot function outside their prescribed task. They cannot generalize what they have learned and apply it elsewhere in the real world, because their sole purpose is to perform one well-defined task with precision and speed.
Some algorithms are also built to improve upon themselves by learning. This type of artificial intelligence program is called a machine learning algorithm because it continues to iterate on itself: with each progressive version, it becomes better at solving its problem faster or more accurately.
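To make this concrete, here is a minimal sketch in plain Python (no ML library required; the data and learning rate are made up for illustration) of the iterative idea behind machine learning: each pass over the data nudges a learned parameter so the program's prediction error shrinks.

```python
# Toy machine learning loop: fit y = w * x by gradient descent.
# The data points and step size are illustrative, not from the article.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # (input, target) pairs, y is roughly 2x

w = 0.0  # the single parameter the program "learns"
for step in range(100):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # take a small step that reduces the error

    if step % 25 == 0:
        error = sum((w * x - y) ** 2 for x, y in data) / len(data)
        print(f"step {step:3d}: w = {w:.3f}, error = {error:.4f}")
```

Each iteration produces a slightly better version of the same program, which is exactly the "improves upon itself" behavior described above.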
Human intelligence, by comparison, relies heavily on a concept known as transfer of learning. When a human being learns, they solve a given problem using cognitive processes. Through transfer of learning, this knowledge is generalized and applied to problems similar to the one solved the first time.
For example, a child learning arithmetic in a class can use that knowledge when calculating the bill for groceries. The knowledge is directly carried over from previous problems, thus allowing an individual to easily come to the solution without learning the method again.
Transfer of knowledge is also possible in AI through a technique known as transfer learning: a model trained on one task is reused as the starting point for a related task. Unlike humans, who can generalize across a vast variety of problems, transfer learning in AI works only for a narrow subset of related tasks.
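As an illustration, here is a minimal, hedged sketch of transfer learning using PyTorch (assuming torch and torchvision are installed; the 10-class output size is a hypothetical example): a network pretrained on one task is reused, and only its final layer is retrained for a related task.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network already trained on ImageNet classification.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so their learned knowledge is carried over unchanged.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final layer to fit a new, related task
# (a hypothetical 10-class problem, purely for illustration).
model.fc = nn.Linear(model.fc.in_features, 10)

# Train just the new head; everything else is reused learning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Note how narrow this is: the reused knowledge only helps because the new task is closely related to the original one, which is exactly the limitation described above.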
Transfer of learning is one of many things that set human intelligence apart from AI. Currently, many complex cognitive processes and shortcuts cannot be recreated in an AI setting. This means that all AI programs created today are narrow AI – programs designed to do one task specifically. AI that can generalize knowledge and exhibit human-like functionality is termed general AI.
General AI is still a far-off reality, as the main focus right now is on using narrow AI as a tool. Narrow AI has found great adoption in the enterprise sector, as it can vastly improve the speed and accuracy of many company processes. Moreover, AI programs can also solve complex business problems and help enterprises make informed business decisions.
This means that most AI programs today fall into a utility category: AI is simply a tool that helps companies make better decisions or solve a problem at great speed. Even virtual assistants, such as Google Assistant and Siri, are simply AI programs optimized to process language and convert human speech into text.
AI has been widely adopted thanks to the vast amount of computing power at our disposal. The powerful hardware available today makes it possible to run complex AI programs efficiently and obtain good results.
Learn More: How Artificial Intelligence Has Evolved
Most Common Myths About AI
Movies like ‘Terminator’, ‘I, Robot’, and ‘Eagle Eye’ have catapulted villainous AI into pop culture. The idea of machines thinking for themselves has been around since the late 1950s: first as a mainstay villain for science fiction writers, then as a technology hailed as the next big revolution. AI has been relevant for a long time.
In these portrayals, AI is omniscient and all-powerful, with access to indestructible robots or a military arsenal. AI today, however, is simply a collection of complex computer programs. Most myths about AI taking over the world are therefore not possible with today's technology. Let us look at these myths in detail.
Myth 1: AI Will Take Over Jobs
This is, by far, the most popular AI theory. Its proponents claim that AI will become so advanced and cheap for companies that hiring human labor will simply seem too expensive. Mass unemployment will follow as AI takes over the same job roles as humans and executes them faster and at lower cost.
This cannot occur, for a multitude of reasons. Primarily, a vast variety of jobs today require cognitive skills at which humans far outperform AI. While low-level jobs, such as data entry and sorting, will be at risk, the rest of the working population will simply benefit from having AI at their disposal.
AI will be used as a tool to augment existing jobs, even as it replaces some low-level ones. Employees will therefore need to be upskilled and moved into newer roles that involve using AI as a tool rather than being replaced by it.
Adding to this, Daniel Shaw-Dennis, SVP Global Strategic Marketing and Alliances, Yellowfin, shares, “From an analytics perspective, the biggest myth we hear is, ‘the machine will take over my job’, particularly for data analysts. The truth is that AI technology for analytics today is largely about automating the things that are done manually.

From the machine doing the ‘discovery’ aspect of data discovery to technology that utilizes algorithms to automatically surface statistical change, AI technology is sifting through millions of data points to find what might be of interest.

It’s still up to the analyst to understand what’s important, add context for their organization, and present that to their business users. AI is actually freeing them up to perform more high-value tasks.

As people work more closely with this technology, understand it, and can apply the benefits, we’ll see our processes become more fluid and adaptive – this technology is shifting the way organizations are operating.

One key thing is that the foundation you’re building this upon has to be governed. Particularly in the data world, it’s ‘garbage in, garbage out’; when you apply this type of AI technology without a governed data foundation, it becomes garbage in and garbage out, at scale!”
Learn More: The Top 5 Artificial Intelligence Books to Read
Myth 2: AI Will Control the World
AI taking over the world has long been a mainstay of science fiction. Many stories describe artificial intelligence programs infiltrating the most secure systems in the world and taking over world governments. Some even describe these programs taking control of the nuclear arsenals of a country, causing the extinction of humanity.
This is not possible in any realistic scenario. Firstly, national-security privileges of this kind would never be handed to artificial intelligence; AI programs will be used only as tools and will not make such decisions without human intervention. Moreover, they cannot ‘hack’ their way into the security systems in place today, as they possess only narrow intelligence for the problem they were programmed to solve.
AI also cannot rule the world through existing systems, as there is no way to train a narrow AI to lead a country. Nor is rule through surveillance a realistic concern, since nothing suggests that the goal of an AI, or even a superintelligent AI, would be to rule humans.
Myth 3: AI Robots Will Rule Humans
Usually, robots and evil AI programs go hand in hand when it comes to science fiction villains. The myth that AI will create robots that can think for themselves and lead a rebellion against human beings is one of the most popular theories.
Firstly, robot technology is far behind what science fiction describes. Robots today are usually not paired with intelligent algorithms; instead, they are programmed to perform repetitive tasks, such as working on an assembly line. Even personal assistant robots are restricted to a small set of tasks.
AI-powered robots getting hold of weapons is a commonly raised concern. However, the cognitive capability this would require demands not only enormous computing power but also complicated algorithms. Such algorithms cannot run on a standalone robot in the field, as they need considerable back-end computing infrastructure.
Myth 4: AI Will Develop on Its Own Without Human Knowledge
This theory states that AI will suddenly ‘go rogue’ and develop itself to a point where humans cannot stop it. In reality, AI researchers have been dedicating resources to creating AI that can learn not only from data but also from its own results, and progress in this field, such as advances in unsupervised machine learning, has been slow and incremental.
Beyond that, it is simply not possible for an AI to develop consciousness and sentience on its own and rebel against humanity. AI algorithms cannot develop beyond their own code; they are simply smart programs that use computing power to solve complex problems.
If an accurate general artificial intelligence is created in the future, this scenario has a remote possibility of occurring. However, given the forward-thinking regulatory practices being applied to AI today, it is unlikely to occur in the near or distant future.
Myth 5: AI Will Eventually Learn to Function like the Human Brain
Artificial intelligence algorithms are just that: algorithms. They are simply a complex set of commands for a computer to follow and do not work like the human brain in any form. Modern AI advancements, such as neural networks, draw inspiration from the architecture of the human brain but are not capable of thinking like humans.
There are many cognitive processes that cannot be replicated by a computer program, at least in the case of AI as we know it today. Moreover, it is extremely difficult to reproduce a human brain completely in software form, as there are still many unknown variables in play. Barring ground-breaking advancements in both AI and neuroscience, this myth won’t turn into reality for the foreseeable future.
Learn More: How Ethically Can Artificial Intelligence Think?
Myth 6: Only Big Companies Can Use AI
Today’s AI landscape is dominated by software giants, such as Google, Amazon, Facebook, and Microsoft. This has led many to believe that only big companies can effectively utilize and control AI. This introduces the possibility of a dystopian world where these companies use AI to control the population.
Of course, this theory is unfounded. Not only is the AI startup sphere booming, but these same companies have undertaken efforts to democratize AI. Believing that more attention and research will bring better advancements, most of them have open-sourced the tools they use to build AI algorithms. AI marketplaces also exist in great numbers, allowing even small teams to plug complex algorithms into their applications.
Shaw-Dennis says, “The technology can be varied in its application and cost. For an SMB, our advice would be to look at the problem that they’re trying to solve or the solution they’re looking to deliver and see where it would benefit from this type of technology.
This could vary from using products with these capabilities that enhance areas, such as marketing or sales. They may be looking to deliver an enhanced customer experience, such as a chatbot on their website that may, in fact, mean a lower customer acquisition cost than traditional methods. Others may go down the path of embedding this technology as part of their application.”
Myth 7: More Data Means Better AI
Another myth holds that more data automatically results in better AI. In truth, AI is only as good as the data it ingests. If the data is inaccurate or structured incorrectly, most of the benefits of training the AI disappear, because AI simply combs through the data to find the most effective solution; it does not improve upon the data it is given.
This means that any data given to the AI must be in a machine-readable format, which requires human labeling to be truly effective. Even big companies struggle to label the large amounts of data they collect. More data can mean better AI, but only if the data is clean and easily readable by the algorithm.
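A minimal sketch (assuming scikit-learn and NumPy are installed; the dataset is synthetic) can illustrate this point: a model trained on a small, clean dataset typically beats one trained on a much larger but badly labeled dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data standing in for real business data.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model 1: a small training set with correct labels.
clean = DecisionTreeClassifier(random_state=0).fit(X_train[:500], y_train[:500])

# Model 2: the full training set, but with 30% of the labels flipped,
# simulating inaccurate or carelessly labeled data.
rng = np.random.default_rng(0)
noisy_y = y_train.copy()
flip = rng.random(len(noisy_y)) < 0.3
noisy_y[flip] = 1 - noisy_y[flip]
noisy = DecisionTreeClassifier(random_state=0).fit(X_train, noisy_y)

print("small set, clean labels:", clean.score(X_test, y_test))
print("large set, noisy labels:", noisy.score(X_test, y_test))
```

On runs like this, the smaller clean model usually scores noticeably higher despite seeing a sixth of the data, which is the "garbage in, garbage out" effect in miniature.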
Myth 8: AI Puts Our Data at Risk
Given the predatory data collection practices employed by many software giants, AI has found itself at the center of security concerns. These include protecting the privacy of the customers providing the data and preventing the leakage of private information through data breaches. As this data is fed to algorithms, private user data ends up being used, indirectly, for financial gain through the insights derived from it.
Regulations such as the European Union’s General Data Protection Regulation (GDPR) are softening the blow for consumers by requiring companies to use data responsibly and to avoid compromising consumer privacy. However, companies may not abandon these practices even with such regulations in place, so this concern remains partly justified.
Myth 9: Superintelligent AI Will Take over the World
Superintelligent AI, or simply superintelligence, is an advanced general AI that learns not only from data but also from itself. Theoretically, this should result in exponential growth of the program’s intelligence, an ‘intelligence explosion’ that produces superintelligent AI.
Separating myth from fact about superintelligent AI is not an easy task. Primarily, superintelligent AI is still far off in the future; today’s technology is nowhere close to achieving it. While the myths about superintelligence are well-rooted in theory, it is simply not possible to create superintelligent AI with today’s technology.
If a superintelligent AI does come along sometime in the future, the best preparation is regulation and law. The main worry is that a superintelligent AI’s goals may not be aligned with humanity’s; in that case, it could surpass all human barriers and take over the world.
Myth 10: Technological Singularity Is Not Far Off
Technological singularity is a hypothetical event in which human technology transcends itself to the point where the universe is irreversibly changed. Prominent author and thinker Ray Kurzweil uses the concept of ‘computronium’, matter engineered to perform computation, to describe what singularity could look like: by harnessing the energy and computation contained within atoms, he predicts, it will become possible to use almost anything to run complex calculations and AI.
In the ever-expanding quest for more computronium, Kurzweil speculates that we will eventually convert the entire universe into it, thus reaching the singularity. However, Kurzweil, along with other thinkers in the field, does not attempt to predict what happens after the singularity, as it is simply unknowable and cannot be forecast with any semblance of accuracy.
Learn More: The Potential Use of Artificial Intelligence in Cyberattacks
Closing Thoughts
Myths about AI are widespread, and many might seem believable. What AI can and cannot do is not widely known, and the hype surrounding the technology can make it seem like a miraculous solution. While many companies are adopting AI to optimize their processes, this hype might lead them to make mistakes when dealing with it.
Learning to distinguish fact from hype is extremely important when it comes to artificial intelligence, especially given popular opinion. Pop culture has cemented AI’s place as the ultimate supervillain, or an omnipresent and omniscient ghost in the machine. In the real world, it is simply a complex computer program, as much under our control as a word processor or web browser.
Hence, anyone taking the AI path forward will need to form their own views on what the technology can do while keeping firmly in mind what it cannot. General AI and superintelligent AI are best left to future generations, or to science fiction authors.