Artificial Intelligence (AI) may be the tech field with the most jargon around it, and in 2019 it deserves a little clarification. Typically conjuring visions of a push-button paradise, or alternatively a Terminator nightmare, the term “AI” is currently on everybody’s lips, yet few people have a real understanding of what it looks like today.
The term AI is sometimes used synonymously with Machine Learning (ML), which further muddies the water. AI is really an umbrella over what are, essentially, separate fields of research. Because an advance in one branch often propels innovation in another, it can be hard to follow developments logically.
An IT relocation provider for small businesses sees AI falling into place all the time: tech assistance now routinely involves weighing up AI applications that could optimise a host of business aspects. The benefits for small businesses are immediate and tangible. AI application is not merely inevitable; it is already here and growing in leaps and bounds.
To turn that muddy water into crystal clarity, here’s a summary of the main components of AI and what each does best.
Robotics
First, let’s get the Terminator out of the way. AI is best seen as a broad umbrella, and for many it reaches far enough to encompass robots. Officially, robotics is a separate pursuit in its own right, but there’s a massive overlap with broad AI. Indeed, it’s only logical that we anticipate the highest level of AI in robots that share our lives, like self-driving cars, or those that most closely emulate us, like Sophia.
However, robotics is not synonymous with AI; rather, robotics is concerned with building machines that can perform specific tasks, into which intelligence may be embedded. For example, AI gives self-driving cars their ability to navigate variable or dynamic environments. Robotics, though, involves far more mechanical processes than AI, along with a host of other engineering disciplines, aspects an AI coder would typically never handle.
Machine Learning (ML)
Depending on who you talk to, Machine Learning has been the real spur behind AI over the last few years. Concerned with developing systems that improve with experience, ML has at times merged with other arenas of AI, to the extent that many use the terms AI and ML interchangeably.
Rather than putting a cyborg in every home (which is more the retail end of AI), ML research is focused on large-scale machine learning: think of a production line of car-making robots becoming better and more skilled over time. Scaling ML algorithms to ever-larger datasets is currently a core pursuit, and the field is more concerned with “organically” enhancing systems than with retail AI applications.
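To make “improving with experience” concrete, here is a deliberately tiny sketch in Python. It is a hypothetical one-parameter model, not any production ML system: the model starts out knowing nothing and, by repeatedly nudging its single weight to reduce its error on examples, ends up very close to the true rule y = 2x.

```python
# Toy illustration of "learning from experience": a one-parameter model
# learns the rule y = 2x from examples. Illustrative only; real ML
# systems scale this same idea to huge datasets and many parameters.

def train(examples, steps=100, lr=0.01):
    w = 0.0  # the model starts knowing nothing
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y
            w -= lr * error * x  # nudge the weight to shrink the error
    return w

examples = [(x, 2 * x) for x in range(1, 6)]
w = train(examples)
print(round(w, 2))  # close to 2.0 after enough experience
```

The more examples and passes the model sees, the closer its weight gets to the true value: precisely the “system that improves with experience” that ML is concerned with.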
Reinforcement Learning (RL)
Of the branches of AI, Reinforcement Learning (RL) perhaps most closely emulates how humans learn. Modelled on the way young humans learn from their environment, RL is based on numerical rewards that accrue over time: an RL agent learns to combine or repeat actions so as to optimise its long-term reward.
Very human in construct, the RL environment registers successes and failures, all accruing towards a greater long-term goal. Just as a human infant grows in understanding and sophistication, RL centres on emulating that growing cognisance, intelligence and ability. When people speak of “AI,” it’s often because they’ve seen the results of RL in systems that appear to grow their understanding and performance just as a maturing child does. Google DeepMind’s AlphaGo program is a prime example: it beat world champion Lee Sedol at the game of Go in 2016 thanks to its accumulated experience and understanding of the game.
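As a minimal, hypothetical sketch of reward-driven learning, the Python below implements a classic two-armed “bandit”: the agent tries two actions, tracks the average numerical reward each returns, and comes to prefer the better-paying one. The reward values and parameters here are invented for illustration; AlphaGo applies the same principle at vastly greater scale.

```python
import random

# A tiny reinforcement-learning loop: the agent accumulates numerical
# rewards and learns which of two actions pays better over the long run.

def run_bandit(rewards=(0.2, 0.8), episodes=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = [0.0, 0.0]  # the agent's running estimate of each action's payoff
    counts = [0, 0]
    for _ in range(episodes):
        if rng.random() < epsilon:
            action = rng.randrange(2)       # explore: try something at random
        else:
            action = 0 if values[0] >= values[1] else 1  # exploit what it knows
        reward = rewards[action] + rng.gauss(0, 0.1)     # noisy payoff
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return values

values = run_bandit()
print(values[1] > values[0])  # True: the agent has learned action 1 pays more
```

The balance between exploring new actions and exploiting known good ones is the core tension of RL, and it mirrors the trial-and-error way a child learns from its environment.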
Deep Learning (DL)
A subset of Machine Learning, Deep Learning (DL) deals with models inspired by the biological neurons of the human brain. DL stacks these artificial neural networks into many layers, powering familiar applications like speech recognition, translation, image recognition and game-playing.
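The building block of those networks is the artificial neuron: a weighted sum of inputs plus a bias, passed through a nonlinear activation. As an illustrative sketch (a single hypothetical neuron, not a deep network), the Python below trains one neuron to reproduce the logical OR function.

```python
import math

# A single artificial neuron of the kind deep networks stack in layers:
# weighted inputs, a bias, and a sigmoid activation, trained by
# gradient descent to reproduce logical OR.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, steps=5000, lr=1.0):
    w1 = w2 = b = 0.0
    for _ in range(steps):
        for (x1, x2), target in data:
            out = sigmoid(w1 * x1 + w2 * x2 + b)
            grad = (out - target) * out * (1 - out)  # squared-error gradient
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
    return lambda x1, x2: sigmoid(w1 * x1 + w2 * x2 + b)

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
neuron = train_neuron(data)
print(round(neuron(1, 0)))  # 1: the neuron has learned OR
```

Deep Learning stacks thousands of such neurons into many layers, but the principle is this same weigh-sum-adjust loop, repeated at enormous scale.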
It’s important to remember that both the nature of computing and the scope of what is possible have changed dramatically over the last two decades. Where capability was once limited to what coders could explicitly build, AI now allows us to expect machines to improve themselves, using their accumulated experience and overall intelligence to increase productivity and precision. If AI is starting to look a lot like neuroscience, it’s no coincidence.
Other aspects of AI include what’s destined to become positively banal: the Internet of Things (IoT), which will surely bring “domesticated” AI to every home on the globe very soon. The IoT can be likened to the retail version of AI, while more exotic research pursuits include Natural Language Processing (NLP), Computer Vision (CV) and Neuromorphic Computing (NC).
This latter pursuit is a direct outcome of the neuron-based models. Neuromorphic chips emulate the hard-wiring of a human brain, so they don’t separate processing from memory the way conventional chips do. With NC, chips can not only process and store data together but also develop “synapses” as needed, with big savings in time and cost for the businesses using the technology.