What is Artificial Intelligence

Artificial intelligence, or AI, is a term that often comes up in discussions of modern technology. But what is artificial intelligence? And what can AI do?

One way to define artificial intelligence is as the simulation by machines of processes characteristic of human intelligence. A similar definition describes AI as the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. Both ideas imply a direct correspondence between human intelligence and machine behavior: human intelligence can be described precisely enough for a machine to mimic it and execute tasks in a human-like manner. Perhaps the simplest definition puts it this way: AI, or artificial intelligence, is a technology that makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks.

Those are the AI basics in principle, but what is AI in more concrete terms? Any introduction to artificial intelligence might see it explained in terms of how it automates repetitive learning and discovery through data. This distinction sets AI apart from hardware-driven, robotic automation. Rather than simply automating manual tasks, AI performs frequent, high-volume, computerized tasks and adapts through progressive learning algorithms (complex mathematical formulas), letting the data do the programming without the need for constant human intervention.

So, among the basics of artificial intelligence, we would include a body of data that provides training material for the underlying software, mathematical processes (algorithms) enabling an AI system to learn from this data, and mechanisms enabling the system to put that learning to use.

An introduction to AI must, therefore, look at the principle of artificial intelligence itself, together with the underlying processes and technologies that enable it to operate. An AI overview encompasses fields including computer programming, machine learning, analytical models, natural language processing, deep learning, and neural networks.

What does artificial intelligence mean for practical applications? Today’s AI systems are trained to perform clearly defined tasks. AI finds structure and regularities in data so that the algorithm acquires a skill; then, this skill is used to inject intelligence into existing products. For example, the digital assistant Siri was added to a range of Apple products to provide them with an interactive and intelligent edge.

History of Artificial Intelligence

The idea of investing inanimate objects with human-like characteristics dates back to ancient times. So, for example, myths surrounding the Greek god Hephaestus depict the deity as forging servants made out of gold that resemble the concept of modern robots. In ancient Egypt, statues of the gods were built with holes and joints so they could be animated by priests.

Some analysts trace the history of artificial intelligence in the pre-modern era to 1308, when the Catalan poet and theologian Ramon Llull published his Ars generalis ultima (The Ultimate General Art), a treatise perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts. The mathematician and philosopher Gottfried Leibniz expanded on this in 1666 with the publication of Dissertatio de arte combinatoria (On the Combinatorial Art), a dissertation suggesting that all ideas are nothing but combinations of a relatively small number of simple concepts.

The history of AI took a further step towards modernity in 1763, when Thomas Bayes developed a framework for reasoning about the probability of events. Bayesian logic is now one of the cornerstones of machine learning theory. Similarly, in 1854 George Boole introduced Boolean principles into the mix with his assertion that logical reasoning could be performed systematically in the same manner as solving a system of equations. Several years earlier, in 1836, Cambridge University mathematician Charles Babbage and Augusta Ada Byron, Countess of Lovelace, had proposed a design for the first programmable machine.

In the 1940s, Princeton mathematician John Von Neumann conceived the architecture for the stored-program computer – a system whose software applications and the data it processes could be held in the computer’s memory. Another notable turning point in artificial intelligence history came in 1943, when Warren McCullough and Walter Pitts published “A Logical Calculus of Ideas Immanent in Nervous Activity,” a paper proposing the first mathematical model for building a neural network.

In 1950, the British mathematician and World War II code-breaker Alan Turing published “Computing Machinery and Intelligence,” a paper in which he speculated about “thinking machines” that could reason at the level of a human being. He also proposed a criterion now widely known as the “Turing Test,” under which a machine may be judged to be “thinking” if a human questioner, conversing with it, cannot reliably tell its responses apart from those of a person. Turing’s work, in many ways, established the fundamental goal and vision of artificial intelligence. 1950 was also the year when Harvard undergraduates Marvin Minsky and Dean Edmonds built SNARC, the first neural network computer.

But when was artificial intelligence created? When was AI invented? And who invented artificial intelligence? For many observers, modern AI history truly began in 1956, at a summer conference at Dartmouth College. The event, sponsored by the Defense Advanced Research Projects Agency (DARPA), was attended by AI pioneers John McCarthy, Marvin Minsky, Oliver Selfridge, and others who went on to shape the field in the following years. At the conference, Allen Newell and Herbert Simon demonstrated Logic Theorist (LT), an artificial intelligence program that would eventually prove 38 of the first 52 theorems in Whitehead and Russell’s Principia Mathematica.

McCarthy, who officially coined the term when referring to the “Dartmouth Summer Research Project on Artificial Intelligence,” is the man whose name most often comes up when asking who invented AI. In 1958, he developed Lisp, which was to become the most popular programming language used in artificial intelligence research. Known as “the father of AI,” McCarthy went on to publish “Programs with Common Sense” in 1959, in which he describes the Advice Taker, a program for solving problems by manipulating sentences in formal languages. The ultimate objective of the project was to make programs “that learn from their experience as effectively as humans do.”

In 1965, Joseph Weizenbaum developed ELIZA, an interactive program capable of carrying on a dialogue in the English language on any topic. The same year, Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi began working on DENDRAL – the first expert system – at Stanford University. The system automated the decision-making process and problem-solving behavior of organic chemists. Research at Stanford University later gave rise to MYCIN in 1972, an early expert system for identifying bacteria causing severe infections and recommending antibiotics.

Progress and investment in AI have had their ups and downs – and the history of artificial intelligence has been characterized by so-called “AI Winters.” The first began in the mid-1970s, when frustration with the lack of progress in AI development led to major DARPA cutbacks in academic grants; it lasted roughly from 1974 to 1980.

In 1982, Japan’s Ministry of International Trade and Industry launched the Fifth Generation Computer Systems project, whose ambition was to develop a functioning supercomputer and a platform for AI development. A year later, the US government launched its Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.

Due to increased interest and investment, 1985 saw companies spending more than a billion dollars a year on expert systems – and an entire industry known as the Lisp machine market emerged to support them. However, as computing technology improved, cheaper alternatives emerged, and the Lisp machine market collapsed in 1987, ushering in the “Second AI Winter.” This lasted until around 1993.

During this period, members of the IBM T.J. Watson Research Center published “A statistical approach to language translation” in 1988. This research marked a shift from rule-based to probabilistic methods of machine translation and reflected a broader shift to “machine learning” based on statistical analysis of known examples, rather than comprehension and “understanding” of the task at hand – a trend that would continue as AI began to resurge in the mid-1990s.

In 1995, inspired by Joseph Weizenbaum’s ELIZA program and assisted by the growth of the World Wide Web, Richard Wallace developed the chatbot A.L.I.C.E (Artificial Linguistic Internet Computer Entity). The project incorporated natural language sample data collection on an unprecedented scale.

1997 saw Sepp Hochreiter and Jürgen Schmidhuber propose Long Short-Term Memory (LSTM), a type of recurrent neural network that is used today in handwriting recognition and speech recognition. That same year, Deep Blue became the first computer chess-playing program to beat a reigning world chess champion (Garry Kasparov).

The 21st century has witnessed a number of successes for AI. For example, in 2000, Cynthia Breazeal of MIT developed Kismet, a robot that could recognize and simulate emotions. In 2005, STANLEY, a self-driving car, won the DARPA Grand Challenge. In 2009, Google began secretly developing a driverless car. In 2014, it became the first autonomous vehicle to pass a US state self-driving test in Nevada. 2011 was the year that the natural language question-answering computer Watson competed on the Jeopardy! TV game show, and defeated two former champions. And in 2016, Google DeepMind’s AlphaGo defeated the reigning Go champion Lee Sedol.

On the theoretical front, in 2006, Oren Etzioni, Michele Banko, and Michael Cafarella coined the term “machine reading,” defining it as an inherently unsupervised “autonomous understanding of text.” That same year, Geoffrey Hinton published “Learning Multiple Layers of Representation,” a paper laying the foundations for new approaches to deep learning.

 

How Does AI Work?

Understanding artificial intelligence as a concept is key to understanding how it works. One of the fundamental principles defining AI is the Turing Test, which essentially demands that an artificially intelligent entity be capable of holding a conversation with a human agent (in person, by telephone, through a chat interface, etc.). Crucially, the human agent should not be able to conclude that they are talking to an artificial intelligence. To meet this standard, the AI needs to possess these qualities:

  • Natural Language Processing (NLP) to enable it to communicate successfully.
  • Knowledge Representation to act as its memory.
  • Automated Reasoning that enables the system to use stored information to both answer questions and draw new conclusions.
  • Machine Learning, which enables the system to detect patterns and adapt to new circumstances.

These are the characteristics that must underlie an artificial intelligence. From a developer’s perspective, AI works through the reverse-engineering of human traits and capabilities into a machine, and fulfilling this requires a diverse set of components fusing together several disciplines, including mathematics, computer engineering, neuroscience, philosophy, economics, psychology, linguistics, control theory, and cybernetics.

How does AI work at this conceptual level? Let’s take mathematics as an example. Computation and probability are required to enable machines to understand logic. Computation provides the procedures that let algorithms carry out calculations efficiently, while probability allows for predictions about future outcomes, which form the basis on which these algorithms make decisions.
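
As a rough, hypothetical illustration of that idea, the short Python sketch below applies Bayes' theorem with made-up numbers to decide whether a message is likely spam; the probabilities are invented purely for demonstration.

```python
# A minimal, illustrative use of Bayes' theorem; all numbers are hypothetical.
# P(spam | word) = P(word | spam) * P(spam) / P(word)

p_spam = 0.2                 # prior probability that any message is spam
p_word_given_spam = 0.6      # how often the trigger word appears in spam
p_word_given_ham = 0.05      # how often it appears in legitimate mail

# Total probability of seeing the word at all
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior: probability the message is spam, given that the word appeared
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(f"P(spam | word) = {p_spam_given_word:.2f}")   # 0.75

# A simple decision rule built on that probability
print("flag as spam" if p_spam_given_word > 0.5 else "deliver")
```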

How does artificial intelligence work at the level of the machine? We can examine this by looking at natural language processing (NLP), which enables a machine to understand human language as it is spoken. Most NLP systems use machine learning to draw insights from human language.

The system must first assemble all the required data, including database files, spreadsheets, email communication chains, recorded phone conversations, notes, and other relevant inputs. An algorithm applied to this data removes extraneous information and normalizes words that share the same meaning. The resulting text is then divided into units known as tokens. The NLP program next analyzes the available data to identify patterns, the frequency of these patterns, and other statistics, in order to understand how the tokens are used and in what contexts they apply.
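
As a minimal sketch of those steps (plain Python; the sample documents and the normalization table are invented for illustration), the code below normalizes the text, splits it into tokens, and counts token frequencies. Production NLP systems do far more, but the shape of the pipeline is similar.

```python
import re
from collections import Counter

# Hypothetical inputs gathered from emails, notes, transcripts, etc.
documents = [
    "The delivery was late, and the customer was unhappy.",
    "Customer reports a late delivery; customers are unhappy!",
]

# Normalize words that carry the same meaning (a tiny, illustrative mapping)
normalize = {"customers": "customer", "deliveries": "delivery", "reports": "report"}

def tokenize(text):
    """Lowercase, strip punctuation, split into tokens, and normalize."""
    words = re.findall(r"[a-z']+", text.lower())
    return [normalize.get(w, w) for w in words]

tokens = [tok for doc in documents for tok in tokenize(doc)]

# Frequency statistics over the tokens reveal usage patterns
print(Counter(tokens).most_common(5))
```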

So, how does an AI work? In essence, an artificial intelligence system collects data relating to its specific area of application, applies mathematical processes (algorithms) that allow it to draw inferences or conclusions from that data, and then performs a function on the basis of those inferences. Systems with machine learning capability are able to adapt and improve their performance over time as more data is accumulated.
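
A toy version of that loop, sketched in plain Python with invented data, might look like the following: the "model" is simply a per-class average of the observations, predictions choose the nearest average, and adding more data refines those averages.

```python
from statistics import mean

# Step 1: collect data for the application area (hypothetical labeled examples;
# each point is (feature_1, feature_2) with a class label).
training_data = {
    "cat": [(4.0, 30.0), (4.5, 28.0), (5.0, 33.0)],
    "dog": [(25.0, 60.0), (30.0, 55.0), (22.0, 65.0)],
}

# Step 2: apply a simple "algorithm": compute the average point per class.
def fit(data):
    return {label: tuple(mean(p[i] for p in points) for i in range(2))
            for label, points in data.items()}

# Step 3: draw an inference: assign a new observation to the nearest average.
def predict(model, point):
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist(model[label], point))

model = fit(training_data)
print(predict(model, (6.0, 31.0)))   # -> "cat"

# Step 4: as more data accumulates, refitting improves the model.
training_data["cat"].append((5.5, 29.0))
model = fit(training_data)
```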

Types of Artificial Intelligence

There are several types of AI, with classifications depending on how a machine compares to humans in terms of its adaptability and performance. The various types of artificial intelligence are judged to be more or less advanced according to whether they can perform with human-equivalent levels of proficiency (more advanced) or whether their functionality and performance are more basic (less advanced).

One method of classifying artificial intelligence and AI-enabled machines is based on their similarity or otherwise to the human mind, and their capacity to think or even feel like a human. Under this system, there are four artificial intelligence types:

1. Reactive Machines

The oldest type of AI, with machines that have no memory-based functionality and, therefore, no capacity to learn from past experiences or interactions. These machines can respond to a limited set or combination of inputs. IBM's Deep Blue, the machine that beat chess Grandmaster Garry Kasparov in 1997, is one example.

2. Limited Memory Machines

These systems have the same ability as reactive machines, but are also capable of learning from historical data. This includes machines that apply machine and deep learning to large volumes of training data that they store in their memory as a reference model for solving future problems. Nearly all existing applications of AI come under this category.

3. Theory of Mind

An artificial intelligence at the theory of mind level will be able to discern the needs, emotions, beliefs, and thought processes of the humans it's dealing with, and be able to understand humans as individuals whose minds can be shaped by multiple factors. Though some research in the field of artificial emotional intelligence is currently underway, the development of a true theory of mind AI will require input from other branches of artificial intelligence and is still some years away.

4. Self-aware AI

At this level, an artificial intelligence will have become so advanced that it develops self-awareness, with emotions, needs, beliefs, and potentially desires of its own. This is the "Terminator" doomsday scenario, in which AI could develop ideas like self-preservation, which directly or indirectly conflict with the interests of humanity. It's the next step on from theory of mind (which hasn't actually emerged yet) and is perhaps decades or even centuries away from being practical.

The other main method of classification identifies three types of AI:

Artificial Narrow Intelligence (ANI)

Commonly known as narrow or weak AI, artificial narrow intelligence includes systems that can only perform a specific task autonomously, using human-like capabilities. This encompasses all reactive and limited memory AI systems currently in existence – even the most complex AI that uses machine learning and deep learning to teach itself.

Artificial General Intelligence (AGI)

Also known as general or strong AI, this category includes the yet-to-emerge types of agents in artificial intelligence that will essentially be able to learn, perceive, understand, and function completely like a human being.

Artificial Super-Intelligence (ASI)

With greater memory, faster data processing, analysis, and decision-making capabilities than even AGI, artificial super-intelligence would potentially do much more than simply replicate the many-layered intelligence of human beings.

To facilitate the development of different kinds of AI, advanced algorithms are being written and combined in new ways to analyze large volumes of data more quickly, and at multiple levels. There are several types of AI algorithms, including:

Classification Algorithms, which come into play when the outcome to be predicted must be one of a fixed set of predefined categories. Examples include Naive Bayes, Decision Trees, Random Forests, and Logistic Regression.
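
As a brief, hedged example (assuming the scikit-learn library is installed; the tiny data set is invented), the sketch below trains two of the algorithms named above, Naive Bayes and a Decision Tree, to predict one of a fixed set of labels.

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Hypothetical examples: [hours_studied, hours_slept] -> pass (1) or fail (0)
X = [[1, 4], [2, 5], [3, 6], [6, 7], [8, 8], [9, 6]]
y = [0, 0, 0, 1, 1, 1]

for model in (GaussianNB(), DecisionTreeClassifier()):
    model.fit(X, y)                      # learn from the labeled examples
    print(type(model).__name__, model.predict([[7, 7], [1, 6]]))
```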

Regression Algorithms are employed when the target output is a continuous quantity. The initial data set needs to have labels, so regression algorithms fall within the category of Supervised Machine Learning. Linear Regression is the simplest and most widely used form of this class of algorithm.
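
For instance, a minimal linear regression sketch (scikit-learn assumed; the numbers are invented) predicts a continuous quantity, here a house price, from a single labeled feature.

```python
from sklearn.linear_model import LinearRegression

# Hypothetical labeled data: house size in square meters -> price in thousands
X = [[50], [80], [100], [120], [150]]
y = [150, 240, 300, 360, 450]

model = LinearRegression()
model.fit(X, y)                          # supervised learning from labels
print(model.predict([[110]]))            # predicted price for a 110 m^2 house
```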

Clustering Algorithms group the input data into two or more clusters based on the similarity of various features. Clustering is a form of Unsupervised Machine Learning, in which the algorithm learns patterns and useful insights from data without any guidance from a labeled data set. K-Means Clustering is the simplest algorithm of this type.
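
A short K-Means sketch (scikit-learn assumed; the customer data points are invented) groups unlabeled observations into two clusters purely on the basis of feature similarity.

```python
from sklearn.cluster import KMeans

# Unlabeled, hypothetical customer data: [visits_per_month, average_spend]
X = [[2, 10], [3, 12], [2, 9], [20, 90], [22, 95], [19, 85]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)           # no labels supplied: unsupervised
print(labels)                            # e.g. [0 0 0 1 1 1]
print(kmeans.cluster_centers_)           # the two learned cluster centers
```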

Ensemble Learning Algorithms come into play when there’s an abundance of data and prediction precision is of high value. These algorithms combine several models, often drawn from the categories above, to produce more accurate results than any single model would on its own.
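
As a sketch of the idea (scikit-learn assumed; the data is the same invented pass/fail set used in the classification sketch), a Random Forest combines the votes of many decision trees and usually generalizes better than any single tree trained on the same data.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical examples: [hours_studied, hours_slept] -> pass (1) or fail (0)
X = [[1, 4], [2, 5], [3, 6], [6, 7], [8, 8], [9, 6]]
y = [0, 0, 0, 1, 1, 1]

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)                         # each tree sees a random slice of the data
print(forest.predict([[7, 7], [2, 4]]))  # the prediction is a vote across the trees
```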

Various components contribute to the different types of artificial intelligence. Among the elements of AI as it exists today are:

Machine Learning (ML): This employs statistics, physics, operations research, and neural networks to extract insights from data without a need for the system to be specifically programmed to look in a particular area.

Neural Networks: A type of machine learning system made up of interconnected units (like neurons in the human nervous system). Neural networks process information by responding to external inputs, relaying information between each unit. This approach enables them to make multiple passes at a set of data to find hidden connections.
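
The sketch below (NumPy assumed; the weights are random rather than trained) shows the basic mechanics of such a network: an input flows through interconnected units, and each layer transforms the signal it receives before relaying it onward.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny untrained network: 3 inputs -> 4 hidden units -> 2 outputs
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

def forward(x):
    hidden = np.tanh(x @ W1 + b1)        # each hidden unit responds to its inputs
    return hidden @ W2 + b2              # the outputs relay the hidden activations

x = np.array([0.5, -1.2, 3.0])           # an external input
print(forward(x))                        # training would adjust W1, b1, W2, b2
```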

Deep Learning (DL): This employs huge neural networks with many layers of processing units, which, with enough computing power and improved training techniques, enables systems to identify complex patterns in large amounts of data.

Cognitive Computing: The aim of cognitive computing is to develop machines that can simulate human thought processes through the ability to interpret images and speech – and interact seamlessly with humans on the basis of that interpretation.

Computer Vision: Computer vision systems use pattern recognition and deep learning to process, analyze, and understand images. Systems can capture images or video in real-time, and interpret their surroundings.

Natural Language Processing (NLP): NLP enables computers to analyze, understand, and generate human language, including speech.

Importance of Artificial Intelligence

With applications in practically every walk of life, the importance of artificial intelligence is being clearly demonstrated on a daily basis. AI automates repetitive learning and discovery through the use of data and performs frequent, high-volume tasks consistently and with the capacity to work 24/7.

The technology adds intelligence to existing products and facilitates a range of tasks in both the commercial sphere and at the level of individual citizens. From global supply chain and distribution management through to healthcare, customer service, and the convenience of conversational platforms, chatbots, and interactive technology, AI is helping companies improve their operations and productivity, and enhancing the lives of consumers.

Another reason why AI is important lies in its potential for future development. In healthcare, for example, AI is providing new ranges of diagnostic tools, enabling personalized drug protocols, and even powering robots that can assist in surgery. As a supporting mechanism for smart infrastructures that make more efficient use of resources, artificial intelligence may also have a role to play in creating more sustainable economies and managing environmental concerns.

Benefits of Artificial Intelligence

By replacing repetitive tasks with automated mechanisms and processes, one of the benefits of artificial intelligence is its ability to improve efficiencies and increase productivity. Of course, observers of the various artificial intelligence pros and cons might argue that this same benefit has a darker side, in that the people who were formerly responsible for performing these repetitive tasks could lose their jobs.

However, one of the advantages of artificial intelligence that’s often overlooked in this argument is the potential it holds for job creation. While it’s possible that AI will eliminate low-skilled jobs, emerging technologies in the field may equally create high-skilled job opportunities that will span all sectors of the economy. We’re already witnessing this to some extent, with the growing demand for professionals with AI and related skills such as data scientists, system architects, and artificial intelligence engineers.

In previous observations, we touched on how the various uses of artificial intelligence are streamlining tasks and paying dividends in the commercial sector. So, for example, in healthcare, AI applications are facilitating the management of medical records and other data, automating repetitive tasks, and thereby enabling cost and process efficiencies. Elsewhere, AI technology is providing a framework for digital and virtual consultations, precision medicine, and the creation of new drugs and treatment regimens.

Other uses of AI are fueling advances in areas that include manufacturing, transport, city management, and agriculture. Autonomous or self-driving cars and smart traffic management systems have the potential to eliminate congestion on the streets and reduce the occurrence of accidents, some 95% of which are attributable to human error. Automation and data analytics are assisting manufacturers and agriculturists to increase production and make more efficient use of their available resources.

More generally, AI technology is increasing the power and scope of data analysis in all sectors, with concepts such as predictive and prescriptive analytics for preventative maintenance, and better strategic planning.

Risks of Artificial Intelligence

Despite its potential to create jobs in new and existing areas related to the technology, one of the most frequently cited dangers of artificial intelligence is that it will cause mass unemployment among human workers who are replaced by automation and analytical processes. Related to this, one of the negative effects of artificial intelligence lies in its potential to make humans over-reliant on automation, lazy, and lacking in initiative.

Speaking of initiative, as more tasks are given over to the technology, another of the risks of artificial intelligence is that its current inability to think “outside the box” could deprive us of innovative ideas for new products and services. Today’s AI machines can only perform those tasks which they are specifically designed or programmed to do, leaving little room for the adaptability and inferred logic that’s required for new inventions.

Among the disadvantages of artificial intelligence in its current form is the lack of bonding and human connection that could make the technology an integral part of a team. As such, until AI systems can gain a greater degree of “personality” (through natural language response, friendlier interfaces, etc.), there will be a tendency for humans to work around, rather than with, the technology.

One of the major factors limiting the impact of artificial intelligence is the huge investment required to develop its technology, and keep it running. In contrast to standard software development, AI operates at a much larger scale, demanding a correspondingly higher input in terms of computing resources and expertise. Additionally, the current lack of standards in AI software development means that it’s difficult for different systems to communicate with each other, which slows innovation and drives costs up further.

Is artificial intelligence a threat? There are a number of areas in which it might be perceived as such – and these are far more immediately relevant than the doomsday scenario of artificial super-intelligence popularized by science fiction, which may never actually occur.

Given that an AI does what it’s programmed to do, a lot hinges on the ethical and other standards of the developers. For example, autonomous or robotic weapons are artificial intelligence systems that are programmed to kill. In the wrong hands, these weapons could easily cause mass casualties. Giving autonomous vehicles a license to make spot decisions that affect who lives and who dies in a given traffic scenario might be something that we come to regret.

Even systems whose primary focus is analysis or prediction could have negative consequences if their output is in error. Weather prediction is one example: faulty forecasts have the potential to hurt businesses, disrupt travel, and endanger lives. Applications of AI in the criminal justice system have already been the focus of accusations of bias, in terms of “profiling” citizens along controversial lines such as race or religion.

Future of Artificial Intelligence

Artificial intelligence has been the driving force behind emerging technologies like big data analytics, robotic process automation, and the IoT, and the future of AI will likely see continued development in all of these areas. An evolutionary cycle that began with knowledge engineering, progressed to model- and algorithm-based machine learning, and is now shifting its focus to perception, reasoning, and generalization will put AI at the center of much of the world’s technological innovation in the years to come.

All sectors of the economy will be influenced by this artificial intelligence future. Though autonomous vehicles are in their early stages of development, AI in the future could potentially replace our current transport systems with a new convention that integrates self-driving vehicles with smart city and traffic planning. In fields like manufacturing and healthcare, we can expect advances on the current state of affairs, in which AI-powered robots work alongside humans to perform a limited range of tasks. Virtual modeling and big data analytics are mapping out new production processes, treatment regimens, and products, while organizations use them to create more personalized and focused experiences for consumers and patients.

The future of AI in education will see an expanded role at the level of formal institutions, where early-stage virtual tutors are already assisting human instructors, and facial analysis algorithms are assessing students’ emotions in real time as they learn. As AI takes over an increasing range of tasks in the workplace, there will be a need for increased investment in education to retrain people for new jobs.

The future of artificial intelligence technology in the near term is likely to be influenced by current explorations in two areas: reinforcement learning and generative adversarial networks (GAN).

Reinforcement learning takes a reward-or-punishment approach to training systems, rather than relying on labeled data. Google DeepMind’s AlphaGo Zero (a successor to the AlphaGo system that defeated reigning Go champion Lee Sedol) operates on this principle.
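
A minimal reward-and-punishment sketch in plain Python (the five-cell corridor environment is invented purely for illustration) shows the idea: tabular Q-learning nudges the value of each state-action pair toward the rewards the agent actually receives, with no labeled data involved.

```python
import random

N_STATES, ACTIONS = 5, (-1, +1)          # a corridor; the agent moves left or right
GOAL = N_STATES - 1
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: reward +1 for reaching the goal, a small penalty otherwise."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else -0.01), nxt == GOAL

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally, otherwise exploit the best-known action
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: move the estimate toward reward + discounted future value
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# The learned policy should choose +1 (move toward the goal) in every non-terminal state
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)])
```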

Generative adversarial networks (GANs) pit two neural nets against each other as a way to allow computer algorithms to create data, rather than simply assess it. Systems that can generate original images and audio based on learning about a certain subject or a particular type of music are early examples of this technology.
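
The loop below is a compact sketch of that adversarial setup (PyTorch assumed; the "real" data is just samples from an invented one-dimensional Gaussian): the generator gradually learns to produce numbers the discriminator cannot tell apart from the real samples.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data for illustration: samples from a Gaussian with mean 4, std 1.25
def real_batch(n=64):
    return torch.randn(n, 1) * 1.25 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # Train the discriminator: real samples should score 1, generated samples 0
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake), torch.zeros(64, 1)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator: try to make the discriminator output 1 for its fakes
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())   # should drift toward ~4 and ~1.25
```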

At a larger scale, our AI future will hopefully see the technology adopting a greater role in promoting sustainability, and dealing with climate change and other environmental issues. Evolving IoT technologies and increasingly sophisticated sensors could enable planners to set policies and rules limiting congestion, pollution, and other factors, with greater focus and accuracy.

Ultimately, how artificial intelligence will change the future depends on the attitude and will of the governments and technologists who develop and use it. If ethics and social awareness are factored into the programming and development of AI systems and applications, dystopian visions of a world enslaved by killer machines won’t necessarily be our future.

Examples of Artificial Intelligence

Several examples of artificial intelligence already feature in our daily lives. Self-driving and self-parking vehicles are AI examples that use deep learning to “recognize” the spaces around the machine, with technologies that enable them to navigate a nearly infinite range of possible driving scenarios. Current AI technology is deployed in cars made by Toyota, Mercedes-Benz, Audi, Volvo, and Tesla.

Digital assistants like Apple’s Siri, Google Now, Amazon’s Alexa, and Microsoft’s Cortana are artificial intelligence examples that have permeated the consumer market. These apps learn from interaction with their users, allowing them to improve their recognition of speech patterns, and to serve up suggestions and options based on a user’s personal preferences. The systems can perform a range of tasks, including scheduling, web searches, and transmitting commands to other apps.

Mention “examples of AI,” and robots are usually one of the first things that spring to mind. Industrial robots of all sizes are currently working alongside humans in production and research environments, performing repetitive or highly intricate tasks that would otherwise be prone to human error. On their own, AI-powered robots are able to work in environments that are hazardous to human health.

Artificial intelligence applications in communications and social media are now quite common. The spam filters that keep unwanted messages out of your email or messaging inbox use simple rules or algorithms to flag keywords and phrases. Gmail uses a similar approach to categorize your email messages into Primary, Social, and Promotions tabs. Platforms like Facebook and Pinterest use technologies like facial recognition and computer vision to personalize your newsfeed and make content recommendations.
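
A keyword-rule filter of that kind can be sketched in a few lines of Python (the keyword list and messages are invented; real filters combine many more signals):

```python
# Hypothetical keyword rules; real filters score many signals, not just keywords
SPAM_KEYWORDS = {"winner", "free", "prize", "urgent", "click here"}

def is_spam(message, threshold=2):
    """Flag a message when it contains at least `threshold` suspicious phrases."""
    text = message.lower()
    hits = sum(1 for kw in SPAM_KEYWORDS if kw in text)
    return hits >= threshold

print(is_spam("URGENT: click here to claim your FREE prize!"))   # True
print(is_spam("Meeting moved to 3pm, see agenda attached."))     # False
```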

How is AI used in other areas? One of the industries that benefit most from artificial intelligence is the finance sector. AI is used to create systems that can learn what types of transactions are fraudulent. Some financial AI systems use machine learning to assess a customer’s creditworthiness and help banks and financial institutions determine whether that customer is likely to honor the terms of a loan.
