What is artificial intelligence?

Introduction

Artificial Intelligence (AI) is a rapidly advancing field of technology with the potential to revolutionize many aspects of our lives. It is a branch of computer science focused on creating intelligent machines that can think and learn like humans. AI is already used in many areas, from healthcare to education, from manufacturing to finance, and from navigation systems to robotics. Because it may change how we live and work, it is important to understand what AI is, how it works, and how it can be used in the future. In this article, we explore the fundamentals of AI and discuss its various applications.





What is artificial intelligence?

Artificial intelligence (AI) is a broad branch of computer science that focuses on the creation of smart machines capable of performing tasks that typically require human intelligence.

There are many approaches to building such systems, and advances in machine learning and deep learning over the past few years have significantly reshaped the technology industry.

What are the definitions of artificial intelligence?

The fundamental goal and vision of artificial intelligence were laid out by the English mathematician Alan Turing in his 1950 paper “Computing Machinery and Intelligence”. He asked a simple question: “Can machines think?” and went on to propose the famous test that now bears his name.

At its core, AI is a branch of computer science that seeks to answer Turing’s question in the affirmative. It is an attempt to reproduce or simulate human intelligence in machines.

The global goal of artificial intelligence still raises many questions and disputes. The main limitation of defining AI as simply “intelligent machines” is that neither scientists nor philosophers can agree on what intelligence is and what exactly makes a machine smart.

Stuart Russell and Peter Norvig, authors of the textbook “Artificial Intelligence: A Modern Approach”, organized their work around the idea of intelligent agents in machines and defined AI as “the study of agents that receive percepts from the environment and perform actions.”

Speaking at the Japan AI Experience in 2017, DataRobot CEO Jeremy Achin opened with the following definition of how AI is used today:

“AI is a computer system capable of performing tasks that require human intelligence… Many of these systems are based on machine learning, others are based on deep learning, and some of them are based on very boring things like rules.”

Although these definitions may seem abstract, they help set the main directions of theoretical research in computer science and point to concrete ways of building artificial intelligence programs that solve applied problems.

What is Turing’s contribution to the development of AI?

In the middle of the last century, Alan Turing laid the theoretical foundation that was ahead of its time and formed the basis of modern computer science, for which he was called the “father of computer science.”

In 1936, the scientist described an abstract computing device – the so-called Turing machine – a cornerstone of the theory of algorithms that formed the basis of modern computers. In theory, such a machine can carry out any algorithm.

In turn, a programming language is said to have “Turing completeness” if any algorithm that can run on a Turing machine can also be expressed in that language. C#, for example, is Turing complete, while HTML is not.
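To make the idea concrete, here is a minimal sketch in Python of a Turing machine simulator. The transition table is a made-up toy machine that simply flips the bits of its input tape; it is an illustration, not a standard example.

```python
# Minimal Turing machine simulator: a tape, a head, a state, and a transition table.
def run_turing_machine(tape, transitions, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        # Each rule maps (state, symbol) -> (new state, symbol to write, head move).
        state, write, move = transitions[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Toy machine: walk right, flipping 0 <-> 1, and halt at the first blank cell.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("0110_", flip_bits))  # -> "1001_"
```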

A thought experiment is also named after the mathematician – the Turing test – which concerns not the abstract machine but artificial intelligence directly. In the scientific community it is widely held that once a machine passes this test, it will be possible to speak seriously about the emergence of intelligent machines.

The essence of the game is that a human judge, communicating only by text, interacts simultaneously with a machine and with another person. The computer’s task is to mislead the judge by convincingly impersonating a human.

What kinds of AI are there?

Artificial intelligence is usually divided into two broad categories:

  • Weak AI: sometimes referred to as “narrow AI”, this kind of artificial intelligence works in a limited context and imitates human intelligence. Weak AI is usually focused on performing a single task very well, and although these machines may seem smart, they operate within tight limitations.
  • Artificial General Intelligence (AGI): sometimes called “strong AI”, this is the kind of artificial intelligence we see in movies, like the robots of Westworld or the hologram Joi in Blade Runner 2049. AGI is a machine with general intelligence that, much like a human, can apply that intelligence to any problem.

What is weak artificial intelligence?

Weak AI is all around us, and it is the most successful implementation of artificial intelligence to date.

Focused on narrow tasks, weak AI has produced many breakthroughs over the past decade that have brought “significant societal benefits and contributed to the nation’s economic vitality,” according to “Preparing for the Future of Artificial Intelligence,” a report published by the Obama administration in 2016.

Here are some examples of weak AI:

  • Google search;
  • Image recognition software;
  • Siri, Alexa and other voice assistants;
  • Unmanned vehicles;
  • Netflix and Spotify recommendation systems;
  • IBM Watson.

How does weak AI work?

Much of weak AI is built on advances in machine learning and deep learning. The similarity of these concepts can be confusing, but they should be distinguished. Venture capitalist Frank Chen offered the following definition:

“Artificial intelligence is a set of algorithms that try to mimic human intelligence. Machine learning is one of them, and deep learning is one of the machine learning methods.”

In other words, machine learning feeds a computer data and uses statistical methods to help it learn to perform tasks without being explicitly programmed for them, eliminating the need for millions of lines of hand-written code. Popular types of machine learning are supervised learning (on labeled datasets), unsupervised learning (on unlabeled datasets), and reinforcement learning.
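As a rough illustration of learning from data rather than from hand-written rules, the sketch below fits a small classifier with scikit-learn on a tiny invented dataset; the features and labels are made up for the example.

```python
# Learning from examples instead of writing explicit rules:
# a tiny supervised-learning sketch with scikit-learn (toy, invented data).
from sklearn.tree import DecisionTreeClassifier

# Features: [message length, number of links]; labels: 1 = spam, 0 = not spam.
X = [[120, 5], [40, 0], [200, 8], [35, 1], [150, 6], [60, 0]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier().fit(X, y)   # the "rules" are learned from the data
print(model.predict([[180, 7], [30, 0]]))    # likely [1 0]: spam, not spam
```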

Deep learning is a type of machine learning that processes input data through a neural network architecture based on biological principles.

Neural networks contain a number of hidden layers through which data is processed, allowing the machine to go “deeper” in its learning, make connections, and weight the inputs for the best results.

What is machine learning?

Artificial intelligence and machine learning are not the same thing. Machine learning is just one sub-field of AI.

The most common types of machine learning are supervised, unsupervised, and reinforcement learning.

Supervised learning

Supervised methods are used when developers have a labeled dataset and know what the algorithm should look for.

Supervised learning is usually divided into two categories: classification and regression.

Classification is used when objects must be assigned to known classes, for example in spam filters, language detection, or flagging suspicious transactions.

Regression is used when an object must be mapped to a continuous value, for example to predict the price of securities, demand for a product, or a medical prognosis.
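A minimal regression sketch along these lines, assuming scikit-learn is available; the price and demand figures are invented toy numbers.

```python
# Regression: predicting a continuous value from labeled examples.
from sklearn.linear_model import LinearRegression

prices = [[10], [12], [15], [20], [25]]   # feature: product price (invented)
units_sold = [200, 180, 150, 110, 80]     # target: demand at that price (invented)

model = LinearRegression().fit(prices, units_sold)
print(model.predict([[18]]))              # estimated demand at a price of 18
```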

Unsupervised learning

A less widely used type of machine learning, because its results are harder to predict. Algorithms are trained on unlabeled data and must find features and patterns on their own. It is often used for clustering, dimensionality reduction, and association rule mining.

Clustering is similar to classification, but without predefined classes: the algorithm must find similarities between objects on its own and group them into clusters. It is used to analyze and label new data, compress images, or merge nearby markers on a map.

Dimensionality reduction generalizes many specific features into a higher-level abstraction. It is often used to determine the topics of texts or to build recommender systems.

Association rule mining has found applications in marketing, for example in planning promotions and sales or analyzing user behavior on a website. It can also be used to build recommender systems.
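A short unsupervised-learning sketch with scikit-learn’s KMeans; the two-dimensional points are invented, and the algorithm receives no labels at all.

```python
# Clustering: grouping unlabeled points by similarity with k-means.
from sklearn.cluster import KMeans

# Invented 2-D points forming two rough groups.
points = [[1, 2], [1, 1], [2, 2], [8, 9], [9, 8], [8, 8]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # cluster index assigned to each point
print(kmeans.cluster_centers_)  # the two cluster centers the algorithm found
```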

Reinforcement learning

This is the training of an agent to act successfully in the environment it inhabits. The environment can be anything from a video game to the real world.

For example, there are algorithms that play Super Mario as well as humans do, while in the real world Tesla’s Autopilot and robot vacuum cleaners do everything they can to avoid obstacles in their path.

Reinforcement learning rewards an agent for correct actions and penalizes it for mistakes. The algorithm does not have to remember all of its previous experience or enumerate every possible scenario; it must learn to act according to the situation.

Remember when a machine beat a human at Go? Long before that, scientists had established that there are more possible move sequences in the game than atoms in the universe, and no computer program in existence could enumerate every way a game might unfold. Yet AlphaGo, the algorithm from Google’s DeepMind, managed it not by calculating every move in advance but by responding to the situation on the board with remarkably high accuracy.
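The reward-and-penalty loop described above can be sketched with tabular Q-learning. The tiny corridor environment below is a made-up example for illustration only; it is not how AlphaGo works (AlphaGo combines deep networks with tree search).

```python
# Tabular Q-learning on a made-up 1-D corridor: states 0..4, the goal is state 4.
# Reaching the goal earns a reward; every other step costs a little.
import random

n_states, actions = 5, [-1, +1]           # actions: move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore sometimes; otherwise pick the action with the highest Q-value.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 10.0 if next_state == n_states - 1 else -1.0
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy should always step toward the goal (+1 everywhere).
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)])
```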

What are neural networks and deep learning?

The concept of artificial neural networks is not new. It was first formulated by the American scientists Warren McCulloch and Walter Pitts in 1943.

Any neural network consists of neurons and the connections between them. A neuron is a function with many inputs and one output. Neurons exchange information through these connections, each of which has a certain weight.

A weight is a parameter that determines the strength of the connection between two neurons. The neuron itself has no understanding of what it transmits, so weights are needed to regulate which inputs it should respond to and which it should ignore.

For example, if a neuron sends the number 50 over a connection with weight 0.1, the result is 5.
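In code, that weighted connection is just a multiplication, and a whole artificial neuron is a weighted sum of its inputs passed through an activation function. A minimal sketch in plain Python, with made-up input values:

```python
# A single artificial neuron: weighted sum of inputs, then an activation function.
import math

def neuron(inputs, weights, bias=0.0):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))    # sigmoid activation squashes the sum into (0, 1)

# One connection with weight 0.1 scales a signal of 50 down to 5 before activation.
print(50 * 0.1)                          # -> 5.0
print(neuron([50, 10], [0.1, -0.2]))     # combines several weighted inputs into one output
```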

As neural network architectures grew more complex, researchers began connecting neurons not arbitrarily but in layers. Within a layer, neurons do not interact with one another; they receive information from the previous layer and pass it on to the next.

As a rule, the more layers a neural network has, the more complex and accurate the model. But decades ago researchers ran into the limits of available computing power, the technology proved a disappointment, and it was largely forgotten for many years.

The approach returned to the spotlight in 2012, when University of Toronto researchers Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the ImageNet computer vision competition. Their convolutional neural network classified images with an error rate of 15.3%, more than 10 percentage points lower than the runner-up. The deep learning revolution that followed has been driven largely by the development of graphics cards.

Deep learning differs from ordinary neural networks mainly in the methods used to train large networks. In practice, developers rarely worry about which networks count as “deep”: even a five-layer network is typically built with “deep” libraries such as Keras, TensorFlow, or PyTorch.
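For instance, a five-layer network of the kind mentioned above takes only a few lines in PyTorch; the layer sizes here are arbitrary choices for the sketch, not anything prescribed.

```python
# A small multi-layer ("deep") network defined with PyTorch.
import torch
from torch import nn

model = nn.Sequential(              # five stacks of weights, one after another
    nn.Linear(4, 32), nn.ReLU(),    # input layer: 4 features in, 32 out
    nn.Linear(32, 32), nn.ReLU(),   # hidden layers
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 1),                # output layer: a single prediction
)

x = torch.randn(10, 4)              # a batch of 10 random examples with 4 features each
print(model(x).shape)               # -> torch.Size([10, 1])
```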

The most popular networks today are Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN).

CNNs are often used for face recognition, finding objects in photos and videos, improving image quality, and similar tasks. Recurrent networks have found application in machine translation and speech synthesis; since 2016, for example, Google Translate has been based on an RNN architecture.

Generative adversarial networks (GANs) have also gained popularity. A GAN is built from two neural networks: one generates data, such as an image, while the other tries to distinguish genuine samples from generated ones. Because the two networks compete with each other, the result is an adversarial game.
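That adversarial game can be sketched in a few lines of PyTorch: a generator turns random noise into fake samples, a discriminator scores real versus fake, and each is updated against the other. The one-dimensional “real” data below is invented, and the loop is reduced to a single training step.

```python
# Minimal GAN sketch (PyTorch): a generator and a discriminator trained against each other.
import torch
from torch import nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real = torch.randn(64, 1) * 0.5 + 3.0      # invented "real" samples from a toy distribution
fake = generator(torch.randn(64, 8))       # generator maps random noise to fake samples

# Discriminator step: push real samples toward label 1, generated samples toward 0.
d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
          + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make the discriminator label the fakes as "real".
g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

print(float(d_loss), float(g_loss))        # losses after one adversarial step
```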

GANs are often used to create photorealistic images. The image repository This Person Does Not Exist, for example, consists of portrait photos of “people” created by a generative neural network.

What is artificial general intelligence?

Building a machine with human-level intelligence that can be applied to any task is the Holy Grail for many AI researchers, but the quest for AGI comes with some challenges.

General AI has long been the muse of dystopian science fiction, in which superintelligent robots overrun humanity, but experts agree it is not something we need to worry about anytime soon.

American inventor and futurist Ray Kurzweil has predicted that general AI will arrive by 2029. His colleague Rodney Brooks is not so optimistic and expects the turning point in machine intelligence technologies to come closer to 2300.

Stuart Russell, one of the authors of the textbook “Artificial Intelligence: A Modern Approach”, suggests that the invention of AGI may come about almost by accident, much like the discovery of nuclear energy in 1933. In his view, this is a vivid example of how pointless it is to make predictions about a technology that is so unpredictable and still so poorly understood.


Conclusion

Artificial Intelligence (AI) is a quickly evolving field of technology that has the potential to revolutionize many aspects of life, from health and safety to scientific exploration and beyond. AI is a powerful tool that can be used to solve complex problems and create innovative solutions. While AI has its limits, it has already made strides in areas such as robotics and machine learning, and it will continue to be a driving force in the technological revolution of the future.

FAQ

What is artificial intelligence?

Artificial intelligence (AI) is a branch of computer science that aims to create intelligent machines capable of performing tasks that typically require human intelligence. AI algorithms and techniques enable machines to perceive their environment, understand natural language, learn, and solve problems.
