A Quick Guide to Artificial Intelligence (AI)
AI defined in a nutshell for corporate enterprise practitioners
What is artificial intelligence (AI)?
Artificial intelligence (AI) is, at its core, the science of simulating human intelligence with machines. One common definition is the branch of computer science that deals with recreating the human thought process. The focus is on making computers human-like, not on making computers human. The goals of artificial intelligence usually fall into one of three categories: building systems that think the same way that humans do; building systems that complete a job successfully without necessarily recreating human thought; or using human reasoning as a model but not as the ultimate goal.
With the advent of the internet of things (IoT), the interconnection via the Internet of computing devices embedded in everyday objects, AI is poised to play a large role. Artificial intelligence already plays a growing part in IoT, with some IoT platform software offering integrated AI capabilities.
Several sub-specialties make up the field. Although many of these terms are used interchangeably with artificial intelligence, each has unique properties that contribute to the topic.
Machine Learning vs. AI
Artificial intelligence and machine learning (ML) are terms that are often used interchangeably in data science, though they are not the same thing. ML is a subset of artificial intelligence built on the idea that data scientists should give machines data and allow them to learn on their own. Much of ML uses neural networks, computer systems modeled after the way the human brain processes information. A neural network is an algorithm designed to recognize patterns, calculate the probability of a certain outcome occurring, and “learn” from errors and successes through a feedback loop. Neural networks are a valuable tool, especially for neuroscience research. Deep learning, which builds on neural networks, can establish correlations between two things and learn to associate them with each other. Given enough data to work with, it can predict what will happen next.
There are two main frameworks of ML: supervised learning and unsupervised learning. In supervised learning, the learning algorithm starts with a set of training examples that have already been correctly labeled. The algorithm learns the correct relationships from these examples and applies the learned associations to new, unlabeled data it is exposed to. In unsupervised learning, the algorithm starts with unlabeled data; it is only concerned with inputs, not outputs. You can use unsupervised learning to group similar data points into clusters and learn which data points have similarities. In unsupervised learning, the computer teaches itself, whereas in supervised learning, the computer is taught by the labeled data. With the introduction of Big Data, neural networks are more important and useful than ever because they can learn from these large datasets. Deep learning is usually linked to artificial neural networks (ANNs), variations that stack multiple layers of neural networks to achieve a higher level of perception. Deep learning is already being used in the medical field to accurately diagnose more than 50 eye diseases.
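To make the distinction concrete, here is a minimal sketch of both frameworks, assuming Python with scikit-learn; the Iris dataset and the particular model choices are illustrative and not drawn from the sources above.

```python
# Minimal sketch of supervised vs. unsupervised learning with scikit-learn.
# The Iris dataset and model choices are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: train on correctly labeled examples, then apply
# the learned associations to new, unlabeled data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)                      # learn from labeled examples
print("accuracy on unseen data:", clf.score(X_test, y_test))

# Unsupervised learning: no labels, only inputs; the algorithm groups
# similar data points into clusters on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments:", clusters[:10])
```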
Predictive analytics draws on several statistical techniques, including ML, to estimate future outcomes. It helps analyze future events based on outcomes from similar events in the past. Predictive analytics and ML go hand in hand because the predictive models used often include an ML algorithm. Neural networks are one of the most widely used predictive models.
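As a rough illustration of a predictive model at work, the sketch below fits a small neural network to made-up historical data and uses it to estimate a future outcome; the scenario, data, and parameters are invented for illustration and assume scikit-learn and NumPy are available.

```python
# Minimal sketch of a predictive model: a small neural network
# (MLPRegressor) learns from past observations to estimate future ones.
# The synthetic "past events" data below is purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
hours_studied = rng.uniform(0, 10, size=(200, 1))                     # past inputs
exam_scores = 50 + 5 * hours_studied.ravel() + rng.normal(0, 3, 200)  # past outcomes

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(hours_studied, exam_scores)          # learn from historical data

# Estimate the outcome of a similar future event.
print("predicted score for 7 hours of study:", model.predict([[7.0]])[0])
```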
Natural Language Processing
Natural language processing (NLP) began as a combination of artificial intelligence and linguistics. It is a field that focuses on “computer understanding and manipulation of human language.” NLP is a way for computers to analyze and extract meaning from human language so that they can perform tasks like translation, sentiment analysis, and speech recognition, among others. Each of these tasks deals with textual data in a different way. One such task is machine translation, where a computer automatically converts one natural language into another while preserving the meaning. It is difficult even by artificial intelligence standards, as it requires knowledge of word order, word sense, pronouns, tense, and idioms, which vary widely across languages. In machine translation, the computer scans text that has already been translated by humans to look for patterns. Like machine learning, NLP has progressed by leaps and bounds by using neural network models that allow it to learn pattern recognition. Services like Google Translate use statistical machine translation techniques. There is still a long way to go until a computer can be considered completely fluent in a given language, though.
Classification and clustering are two different ways that ML performs pattern recognition. Classification is assigning things to a specific label, while clustering is grouping similar things together. You can apply either of these approaches to NLP. Text classification aims to assign a document or fragment of text to one or more categories to make it easier to sort through. Text classification is a technique used in spam detection and sentiment analysis, where a sentiment is assigned to the text being analyzed. Successful text classification, or document classification, occurs when an algorithm takes text input and reliably predicts what custom category that text falls into. Document clustering is a technique that clusters, or groups, similar documents into categories to create structure within a collection of documents. The algorithm can do this even without understanding or being fluent in the language of the text input, because it learns statistical associations between inputs and categories. It can also perform information extraction from a chunk of text.
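A minimal text classification sketch along these lines, assuming scikit-learn, might look like the following; the tiny spam/not-spam training set is invented for illustration.

```python
# Minimal sketch of text classification for spam detection.
# The hand-written training texts and labels are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "Win a free prize now, click here",
    "Limited offer, claim your reward today",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the quarterly report draft?",
]
train_labels = ["spam", "spam", "not_spam", "not_spam"]

# The pipeline learns statistical associations between words and categories,
# without any real "understanding" of the language.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(train_texts, train_labels)

print(classifier.predict(["Claim your free reward now"]))      # likely "spam"
print(classifier.predict(["Can you review the meeting agenda?"]))  # likely "not_spam"
```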
Question answering works in a similar way. A question answering system answers questions posed in natural language. This practice is often used in customer service chatbots that can answer the most frequent or basic questions before escalating the query to a real human, if needed. These are different from bots, which are automated programs that crawl the internet looking for a specific type of information. The highest form of a question answering algorithm would pass the Turing test, a test to see if a machine’s text-based chat capabilities can fool a human into thinking they are talking to another human. A machine using text generation could arguably pass the Turing test. Text generation is the ability of a machine to generate coherent, human-like dialogue. Ethical concerns exist for AI text generation because its output is so similar to human-written text.
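A bare-bones question answering sketch in this spirit, assuming scikit-learn, could match an incoming question to the closest stored FAQ entry and escalate when nothing matches well; the FAQ content and similarity threshold are illustrative assumptions.

```python
# Minimal sketch of a FAQ-style question answering system: the incoming
# question is matched to the most similar stored question, and that
# question's canned answer is returned. The FAQ content is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
    "What are your support hours?": "Support is available 9am to 5pm, Monday to Friday.",
    "How do I cancel my subscription?": "Go to Account > Billing and choose Cancel.",
}

questions = list(faq.keys())
vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def answer(user_question, min_score=0.2):
    """Return the best FAQ answer, or escalate to a human below a threshold."""
    scores = cosine_similarity(vectorizer.transform([user_question]), question_vectors)[0]
    best = scores.argmax()
    if scores[best] < min_score:
        return "Let me connect you with a human agent."
    return faq[questions[best]]

print(answer("I forgot my password, what do I do?"))  # matches the reset-password entry
print(answer("Tell me about hardware pricing"))       # no good match -> escalate
```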
Speech
A major area of speech in AI is speech to text, the process of converting audio and voice into written text. It can assist users who are visually or physically impaired and can promote safety with hands-free operation. Speech to text tasks use machine learning algorithms that learn from large data sets of human voice samples. These data sets train speech to text systems to meet production-quality standards. Speech to text has value for businesses because it can aid in video or phone call transcription. Text to speech converts written text into audio that sounds like natural speech. These technologies can be used to assist individuals who have speech disabilities. Amazon’s Polly is an example of a technology that uses deep learning to synthesize human-sounding speech for e-learning, telephony, and content creation applications.
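For example, a text to speech call to Amazon Polly might look roughly like the sketch below, assuming boto3 is installed and AWS credentials are configured; the chosen voice and output filename are arbitrary illustrative choices, not recommendations from the sources.

```python
# Minimal sketch of text to speech with Amazon Polly via boto3.
# Assumes boto3 is installed and AWS credentials are configured;
# the voice "Joanna" and the output filename are illustrative choices.
import boto3

polly = boto3.client("polly")

response = polly.synthesize_speech(
    Text="Your order has shipped and should arrive on Friday.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The synthesized audio comes back as a stream; save it to a file.
with open("confirmation.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())
```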
Speech recognition is a task where speech is received by a system through a microphone and checked against a vocabulary bank for pattern recognition. When a word or phrase is recognized, the system responds with the associated verbal response or performs a specific task. You can see examples of speech recognition in Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana and Google’s Google Assistant. These products need to be able to recognize the speech input from a user and assign the correct speech output or action. Even more advanced are attempts to create speech from brainwaves for those who lack or have lost the ability to speak.
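The "check against a vocabulary bank" step can be sketched very simply, assuming the audio has already been transcribed by a separate speech-to-text component; the commands and canned responses below are invented for illustration.

```python
# Minimal sketch of matching a transcribed phrase against a vocabulary bank
# and returning the associated verbal response. The transcription itself is
# assumed to come from a separate speech-to-text component.
VOCABULARY_BANK = {
    "turn on the lights": "Turning on the lights.",
    "what time is it": "It is 3:42 PM.",
    "play some music": "Playing your favorite playlist.",
}

def respond(transcribed_speech):
    """Look up a recognized phrase and return the associated verbal response."""
    phrase = transcribed_speech.lower().strip(" ?.!")
    return VOCABULARY_BANK.get(phrase, "Sorry, I didn't catch that.")

print(respond("What time is it?"))  # recognized phrase -> associated response
print(respond("Order a pizza"))     # unknown phrase -> fallback
```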
Expert Systems
An expert system uses a knowledge base about its application domain and an inference engine to solve problems that would normally require human intelligence. Expert systems have been applied in financial management, corporate planning, credit authorization, computer installation design and airline scheduling. Expert systems also have potential value in IoT applications. For example, an expert system for traffic management can aid the design of smart cities by acting as a “human operator” that relays traffic feedback to the appropriate routes.
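A toy version of this architecture, with a hand-written knowledge base of if-then rules and a forward-chaining inference engine, might look like the following sketch; the credit-authorization rules are invented for illustration.

```python
# Minimal sketch of an expert system: a knowledge base of if-then rules
# plus a forward-chaining inference engine. The credit-authorization rules
# below are illustrative assumptions, not taken from the sources.
RULES = [
    ({"income_verified", "good_credit_history"}, "low_risk"),
    ({"low_risk", "requested_amount_within_limit"}, "approve_credit"),
    ({"missed_payments"}, "high_risk"),
    ({"high_risk"}, "refer_to_human_underwriter"),
]

def infer(facts):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"income_verified", "good_credit_history", "requested_amount_within_limit"}
print(infer(facts))  # includes "low_risk" and "approve_credit"
```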
A limitation of expert systems is that they lack the common sense that humans have, such as an understanding of the limits of their own skills and of how their recommendations fit into the larger picture. They lack the self-awareness that humans have. Expert systems are not substitutes for decision makers because they do not have human capabilities, but they can drastically reduce the human work required to solve a problem.
Planning, scheduling and optimization
AI planning is the task of determining the course of action a system should take to reach its goals in the most optimal way possible. It means choosing a sequence of actions that has a high likelihood of transforming the state of the world, step by step, until the goal is achieved. When this task is successful, it allows for task automation. These solutions are often complex, and in dynamic environments with constant change they require frequent trial-and-error iteration to fine tune. Scheduling is the creation of schedules: temporal assignments of activities to resources that take the necessary goals and constraints into account.
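A minimal sketch of planning as state-space search is shown below, assuming Python 3.10 or later; the warehouse-robot states and actions are invented for illustration.

```python
# Minimal sketch of AI planning as state-space search: breadth-first search
# finds a sequence of actions that transforms the start state into the goal
# state. The toy "robot in a warehouse" actions are illustrative assumptions.
from collections import deque

# Each action maps a current state to a next state.
ACTIONS = {
    "at_dock": {"drive_to_shelf": "at_shelf"},
    "at_shelf": {"pick_item": "holding_item", "drive_to_dock": "at_dock"},
    "holding_item": {"drive_to_packing": "at_packing_with_item"},
    "at_packing_with_item": {"drop_item": "item_packed"},
}

def plan(start, goal):
    """Return the shortest sequence of actions from start to goal, if any."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions_so_far = frontier.popleft()
        if state == goal:
            return actions_so_far
        for action, next_state in ACTIONS.get(state, {}).items():
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions_so_far + [action]))
    return None  # no plan reaches the goal

print(plan("at_dock", "item_packed"))
# ['drive_to_shelf', 'pick_item', 'drive_to_packing', 'drop_item']
```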
Where planning determines what actions to take, scheduling determines the order and timing of the actions the plan generates. These tasks are typically executed by intelligent agents, autonomous robots and unmanned vehicles. When they are done successfully, they can solve planning and scheduling problems for organizations in a cost-efficient manner compared to hiring more staff, which increases overhead costs. Optimization can be achieved with one of the most popular ML and deep learning optimization strategies: gradient descent. Gradient descent trains a machine learning model by iteratively changing its parameters to minimize a given function toward a local minimum.
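Gradient descent itself can be sketched in a few lines; the quadratic loss below is a purely illustrative stand-in for a real model's training objective.

```python
# Minimal sketch of gradient descent: iteratively adjust a parameter in the
# direction that reduces a loss function until it settles near a minimum.
# The simple quadratic loss is an illustrative stand-in for a real
# model's training objective.
def loss(w):
    return (w - 3.0) ** 2          # minimized at w = 3

def gradient(w):
    return 2.0 * (w - 3.0)         # derivative of the loss

w = 0.0                            # initial parameter value
learning_rate = 0.1

for step in range(100):
    w -= learning_rate * gradient(w)   # move against the gradient

print(w)  # approximately 3.0, the minimum of the loss
```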
Robotics
Artificial intelligence is at one end of the spectrum of intelligent automation, while robotic process automation (RPA), the science of software robots that mimic human actions, is at the other. One is concerned with replicating how humans think and learn, while the other is concerned with replicating how humans do things. Robotics develops complex sensorimotor functions that give machines the ability to adapt to their environment. Robots can sense the environment using computer vision.
Robots are used in the global manufacturing sector for assembly, packaging and customer service, and they are also sold as open-source platforms that users can teach to perform custom tasks. Collaborative robots, or cobots, are robots designed to physically interact with humans in a shared workspace. They can be valuable to organizations that wish to remove human participation from dirty, dull and/or dangerous tasks.
The main idea of robotics is to make robots as autonomous as possible through learning. Although robots have not achieved human-like intelligence, there are still many successful examples of robots executing autonomous tasks, such as swimming, carrying boxes, and picking up objects and putting them down. Some robots can learn decision making by associating an action with a desired result. Kismet, a robot at M.I.T.’s Artificial Intelligence Lab, is learning to recognize both body language and voice and how to respond appropriately.
Computer vision
Computer vision is defined as computers obtaining a high-level understanding from digital images or videos; in other words, image recognition. It is a fundamental component of many IoT applications, including household monitoring systems, drones, and car cameras and sensors. When computer vision is coupled with deep learning, it combines the best of both worlds: optimized performance paired with accuracy and versatility. Deep learning gives IoT developers greater accuracy in object classification.
Machine vision takes computer vision one step further by combining computer vision algorithms with image capture systems to better guide robot reasoning. An example of computer vision is a computer being able to “see” the unique set of stripes on a UPC barcode and recognize it as a unique identifier. Optical character recognition (OCR) uses image recognition of letters to decipher paper printed records and handwriting, despite the multitude of fonts and handwriting variations across people. Another example is Apple’s Face ID, which allows your iPhone to unlock your screen only when it recognizes your face. A machine can use image recognition to interpret input it receives through computer vision and categorize what that input is. With training, its computer vision can learn to recognize input in different states, much as humans do. Computer vision can also enable machine-assisted moderation of images.
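As a small, concrete sketch of image recognition in the spirit of OCR, the following trains a classifier on scikit-learn's built-in handwritten digit images and uses it to categorize new ones; the dataset and model are illustrative stand-ins for a production system.

```python
# Minimal sketch of image recognition in the spirit of OCR: a classifier is
# trained on labeled images of handwritten digits and then categorizes new
# ones. Uses scikit-learn's built-in digits dataset as an illustrative
# stand-in for real scanned documents.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()                              # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

model = SVC(gamma=0.001)                            # support vector classifier
model.fit(X_train, y_train)                         # learn from labeled images

print("recognition accuracy:", model.score(X_test, y_test))
print("predicted digit for one new image:", model.predict(X_test[:1])[0])
```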
References
A Beginner’s Guide to Neural Networks and Deep Learning [webpage]. (n.d.). Skymind.
Alford, E. (2018, August 21). Bots, chatbots, robots, AI! Here’s why knowing the difference could set your company apart. ClickZ.com
Amazon. Amazon Polly: Turn text into lifelike speech using deep learning. aws.amazon.com.
Bellhawk Systems Corporation. Real-Time Artificial Intelligence for Scheduling and Planning Make-to-Order Manufacturing. industrytoday.com.
CFB Bots. (2018, April 20). The Difference between Robotic Process Automation and Artificial Intelligence. Medium.com.
Clark, S. (n.d.). Part II: NLP Applications: Statistical Machine Translation [slides]. Cl.cam.ac.uk.
Computer Vision. (n.d.). Microsoft Azure. azure.microsoft.com.
Copeland, J. (2000, May). What is Artificial Intelligence? AlanTuring.net.
DeNisco Rayome, A. (2018, April 10). Google's Speech-to-Text now includes improved phone call and video transcription, automatic punctuation, and recognition metadata. techrepublic.com.
Donges, N. (2018, March 7). Gradient Descent in a Nutshell. Medium.com.
Falcon, W. (2019, February 18). OpenAI’s Realistic Text-Generating AI Triggers Ethics Concerns. Forbes.
Technology Quarterly: Finding a Voice. (2017, May 1). The Economist.
Geitgey, A. (2018, August 15). Text Classification is Your New Secret Weapon. Medium.com.
Ghaffari, P. (2015, January 12). NLP and Text Analytics Simplified: Document Clustering. LinkedIn.com.
Greene, T. (2018). A beginner’s guide to AI: Computer vision and image recognition. Thenextweb.com.
Hammond, C. (2015, April 10). What is artificial intelligence? Computer World.
Hardesty, L. (2017, April 14). Explained: Neural networks. MIT News.
Harris, T. (2002, April 16). How Robots Work. HowStuffWorks.com.
IBM. (n.d.). AI Planning. researcher.watson.ibm.com.
IBM. (n.d.). Text to Speech. ibm.com.
Introduction to Unsupervised Learning [webpage]. (2018, April 9). Algorithmia blog.
Joshi, K. (n.d.). Expert Systems and Applied Artificial Intelligence. Umsl.edu.
Kamesh, D.B.K., Sumadhuri, D.S.K., Sahithi, M.S.V. and Sastry, J.K.R. (2017). An Efficient Architectural Model for Building Cognitive Expert System Related to Traffic Management in Smart Cities. Journal of Engineering and Applied Sciences, 12: 2437-2445.
Kismet. (n.d.). ai.mit.edu.
Kunze, L. (2018, December 7). We’re thinking about the Turing Test all wrong. Quartz (qz.com).
Maini, V. (2017, August 19). Machine Learning for Humans, Part 2: Supervised Learning. Medium.com.
Maini, V. (2017, August 19). Machine Learning for Humans, Part 3: Unsupervised Learning. Medium.com.
Marr, B. (2018, February 14). The Key Definitions of Artificial Intelligence (AI) That Explain Its Importance. Forbes.
Marr, B. (2018, August 29). The Future Of Work: Are You Ready For Smart Cobots? Forbes.
Mozilla. (Updated 2019, March 5). Using the Web Speech API. developer.mozilla.org.
Mozilla. (n.d.). Speech and Machine Learning. research.mozilla.org.
Perez, J.A., Deligianni, F., Ravi, D. and Yang, G.Z. (n.d.). Artificial Intelligence and Robotics. arxiv.org.
Pesce, A. (2013). Natural Language Processing in the kitchen. L.A. Times.
Robotics Online Marketing Team. (2018, September 11). How Artificial Intelligence is Used in Today’s Robots. Robotic Industries Association: Robotics online.
Sauer, J. (2003). Planning and Scheduling: An Overview.
Schatsky, D., Kumar, N., & Bumb, S. (2018, May). Bringing the Power of AI to the Internet of Things. Wired.
Servick, K. (2019, January 2). Artificial intelligence turns brain activity into speech. Science.
Talluri, R. (2017, November 29). Conventional computer vision coupled with deep learning makes AI better. Network World.
Vincent, J. (2018, August 13). DeepMind’s AI can detect over 50 eye diseases as accurately as a doctor. The Verge.
Wakefield, K. (n.d.). Predictive analytics and machine learning. SAS.
Wiggers, K. (2019, January 23). Google releases dataset to train more sophisticated question-answering systems. Venture Beat.