
3 Breakthroughs that will propel AI into the Future

To understand where AI is headed and what will enable it in the future, it helps to look back at its genesis. The term Artificial Intelligence (AI) was coined at the Dartmouth Summer Research Project in 1956 for a branch of study dedicated to finding out how to make computers perform intelligent tasks. The shared vision of the authors was to enable computers to solve real-world problems unassisted.

According to the proposal: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Fast forward to 2020, when remarkable advances in Machine Learning (ML), Natural Language Processing (NLP) and Computer Vision have effectively mainstreamed AI. Autonomous vehicles, AI drones and AI-assisted drug discovery are some of the more popular applications of AI, but to make these ideas practical and scalable, breakthroughs are required in the following domains:

Deep Reinforcement Learning

Among cognitive abilities, learning is at the core of determining whether a person or a system can be called ‘intelligent.’ Machine Learning, a subset of AI, is made up of three predominant learning models (a minimal sketch of all three follows this list):

  • Supervised Learning - comprises regression and classification tasks performed by algorithms where human intervention is required to train the system to identify patterns based on past examples of correct input-output combinations.
  • Unsupervised Learning - learning without labelled examples, typically by clustering observations into groups based on their similarity. Generative models, whose training is likewise unsupervised, learn to produce new data; Generative Adversarial Networks (GANs) started out by generating images used as training data and are predicted eventually to mimic human creativity.
  • Reinforcement Learning - a trial-and-error approach where the system is rewarded for coming up with a solution to the problem.
  • Any combination thereof, also known as ‘hybrids.’
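
To make these learning models concrete, here is a minimal Python sketch: the supervised and unsupervised parts use scikit-learn, while the reinforcement learning loop is a toy two-armed bandit written from scratch. The data, reward probabilities and hyperparameters are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression   # supervised: classification
from sklearn.cluster import KMeans                     # unsupervised: clustering

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                # 100 toy observations with 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # known labels for the supervised case

# 1. Supervised learning: learn from labelled input-output pairs.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# 2. Unsupervised learning: group similar observations without any labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))

# 3. Reinforcement learning: learn action values from rewards (toy 2-armed bandit).
q_values, counts = np.zeros(2), np.zeros(2)
true_reward = [0.3, 0.7]                     # hidden payout probabilities
for step in range(1000):
    explore = rng.random() < 0.1             # occasionally try a random action
    action = rng.integers(2) if explore else int(np.argmax(q_values))
    reward = float(rng.random() < true_reward[action])
    counts[action] += 1
    q_values[action] += (reward - q_values[action]) / counts[action]
print("estimated action values:", q_values)  # should approach [0.3, 0.7]
```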

Deep Learning is a subset of ML that utilizes artificial neural networks to mimic human cognitive functions. When you heard in the news a couple of years ago that AlphaGo, a deep reinforcement learning system, beat the world champion of the ancient abstract strategy board game Go, you probably didn’t realize that its neural network knew nothing about the game at the outset. It is phenomenal that a self-taught algorithm learned to make decisions through rewards and penalties and eventually became good enough at the game to beat a human world champion.
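
AlphaGo itself pairs deep neural networks with tree search, which is well beyond a blog snippet, but the reward-and-penalty idea it relies on can be shown with tabular Q-learning, where a table stands in for the neural network. The toy corridor environment below is invented for illustration only.

```python
import numpy as np

# Toy corridor: states 0..4, start at state 0, reward for reaching state 4.
N_STATES, ACTIONS = 5, [-1, +1]            # actions: step left or step right
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration rate
Q = np.zeros((N_STATES, len(ACTIONS)))     # action-value table learned by trial and error
rng = np.random.default_rng(0)

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(np.argmax(Q[state]))
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.01   # small penalty per step
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
        state = next_state

print(Q.round(2))   # the 'step right' column should dominate after training
```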

The ability of artificial neural networks to process unstructured data will determine our success in utilizing Deep Reinforcement Learning models to finally enable a complete and irreversible transition from legacy methods to an AI-integrated society where:

  • Smart cities improve the quality of life of citizens by offering transparent governance, better infrastructure, resource distribution and transportation, and a reduced environmental footprint.
  • The time involved in drug discovery, vaccine development and commercialization is reduced from years to months.
  • Autonomous Vehicles eliminate accidents and safety-related incidents on the road.
  • Factories are fully automated, so that workers are retrained and transitioned from performing routine, repetitive tasks to higher-value activities such as programming and customer experience management.

Edge AI

Research from Strategy Analytics estimates that the installed base of connected devices will reach nearly 40 billion by 2025. Add to this the inevitable global rollout of the new telecom standard 5G, which dramatically reduces latency. Combined, they are powerful drivers for Artificial Intelligence (AI), but streaming data to the cloud and running AI algorithms in the cloud presents its own set of challenges.

To put this in perspective, let us understand how the current generation of IoT systems works. An IoT system made up of devices and sensors generates data and connects to the internet to stream that data to the cloud, where AI algorithms are run to make ‘inferences’. As you can imagine, transmitting data from the source to the cloud and sending inferences back to the source results in latency. Plus, there is the debate around the privacy and security of data in the cloud.

Enter Edge Computing, where computing power is decentralized and data is processed at the edge, that is, on the IoT endpoint device itself. According to Gartner, by 2025, 75% of enterprise-generated data will be created and processed outside a centralized data center or cloud. Tech giants such as Intel have already started partnering with educational organizations to train developers on building Edge AI applications that run AI algorithms directly on the local device.
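
As a hedged illustration of what on-device inference looks like, the sketch below loads a pre-converted TensorFlow Lite model and runs it entirely on the local device, so no sensor data has to leave it. The model file name and input data are placeholders; a later sketch in this section shows one way such a file could be produced.

```python
import numpy as np
import tensorflow as tf

# Hypothetical pre-converted model file deployed to the edge device.
interpreter = tf.lite.Interpreter(model_path="sensor_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A fake sensor reading shaped to match the model's expected input.
reading = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

# Inference happens on the endpoint itself: no round trip to the cloud.
interpreter.set_tensor(input_details[0]["index"], reading)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print("local inference result:", prediction)
```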

The key challenges to mitigate if you want to make Edge AI practical are:

  • Deep learning models often cannot be run on a single endpoint, but edge clusters can be created within an IoT ecosystem by combining the computing power of multiple nearby devices (distributed computing).
  • GPU-accelerated computing is a true enabler, where the power of the Graphics Processing Unit (GPU) is harnessed along with the CPU on the edge device to run processing-intensive algorithms such as deep learning and predictive analytics.
  • AI acceleration necessitates new hardware architectures and availability of dedicated AI chipsets that offer more computational power by taking up less space and energy.
  • AI algorithms have to be written in such a way as to reduce the size of deep learning models “without losing their capabilities”. This trend of miniaturizing AI algorithms is called Tiny AI (see the quantization sketch after this list).
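
One common way to shrink a model for the edge is post-training quantization, sketched below with the TensorFlow Lite converter. The small Keras model is only a stand-in for a real trained network, and in practice you would measure accuracy before and after conversion to confirm that capabilities are preserved.

```python
import tensorflow as tf

# Stand-in Keras model; any trained model could take its place.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Post-training quantization: weights are stored as 8-bit integers instead of
# 32-bit floats, typically cutting the model size by roughly 4x.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("sensor_model.tflite", "wb") as f:
    f.write(tflite_model)
print("quantized model size:", len(tflite_model), "bytes")
```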

Quantum Computing

Let us look at some of the recent announcements in the field of Quantum Computing where claims of “Quantum Supremacy” are made:

  • In October 2019, Google published the results of a successful “quantum supremacy” experiment, stating that it had developed a 54-qubit quantum processor named “Sycamore.” The machine performed a computation in 200 seconds that would take the fastest supercomputer 10,000 years.
  • In December 2019, Princeton University announced that it has “demonstrated that two quantum-computing components, known as silicon “spin” qubits, can interact even when spaced relatively far apart on a computer chip.”
  • In March 2020, according to its own press release, Honeywell is “on track to release a quantum computer with a quantum volume of at least 64, twice that of the next alternative in the industry.”
  • Again in March 2020, Google released TensorFlow Quantum, an open source library for Quantum Machine Learning.

As you can already imagine, the volume of data and the speed with which it can be processed by a Quantum ML (QML) algorithm cannot be matched by classical ML algorithms. The experience of an end user interacting with a voice assistant powered by QML, as predicted by Intel, will be similar to interacting with a person in terms of the rapidity and relevance of the responses. Furthermore, there are computer models, such as global weather simulations, that require immense computational power, making them prohibitively expensive and time-consuming to run even on a supercomputer. Breakthroughs in QML and quantum computers will bring down the time taken to run such models by orders of magnitude.
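
To give a feel for the building blocks these libraries work with, here is a minimal sketch using Cirq, the circuit library that TensorFlow Quantum builds on. It prepares and samples an entangled two-qubit Bell state, the kind of small circuit a QML model would embed as a trainable layer; it is an illustration of the primitives, not a QML algorithm in itself.

```python
import cirq

# Two qubits on a line.
q0, q1 = cirq.LineQubit.range(2)

# Bell-state circuit: a Hadamard followed by a CNOT entangles the qubits,
# so measurements of the pair should agree (00 or 11) almost every time.
circuit = cirq.Circuit(
    cirq.H(q0),
    cirq.CNOT(q0, q1),
    cirq.measure(q0, q1, key="m"),
)

result = cirq.Simulator().run(circuit, repetitions=1000)
print(circuit)
print(result.histogram(key="m"))   # expect counts concentrated on 0 (00) and 3 (11)
```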
