Not everything we call AI is actually artificial intelligence. Here’s what to know

(ORDO NEWS) — In August 1955, a group of scientists requested US$13,500 in funding for a summer seminar at Dartmouth College in New Hampshire. The area they proposed to explore was artificial intelligence (AI).

Although the funding request was modest, the researchers’ hypothesis was not: “Every aspect of learning, or any other characteristic of intelligence, could in principle be so accurately described that it could be simulated by a machine.”

Ever since these humble beginnings, films and the media have romanticized AI or portrayed it as a villain. However, for most people, AI has remained a matter of discussion and not a part of conscious life experience.

AI has entered our lives

Late last month, AI, in the form of ChatGPT, broke out of sci-fi speculation and research labs onto the desktop computers and phones of the general public.

This is what’s known as “generative AI”. Suddenly, a cleverly worded prompt could write an essay, make a recipe and shopping list, or compose a poem in the style of Elvis Presley.

While ChatGPT was the most impressive newcomer in a year of generative AI successes, similar systems have shown even greater potential for creating new content, with text-to-image tools used to create vibrant images that have even won art competitions.

AI may not yet have the living consciousness or theory of mind popularized in sci-fi movies and novels, but it’s getting close to at least disrupting what we think AI systems can do.

Researchers working closely with these systems have even been convinced they show signs of sentience, as in the case of Google’s LaMDA large language model (LLM), a model trained to process and generate natural language.

Generative AI also raises concerns about plagiarism, exploitation of the original content used to build models, the ethics of information manipulation, breaches of trust, and even “the end of programming”.

At the center of it all is a question that has become increasingly relevant since the summer workshop at Dartmouth: is AI different from human intelligence?

What does “AI” really mean?

To be considered AI, a system must exhibit a certain level of learning and adaptation. For this reason, decision systems, automation, and statistics are not AI.

AI is broadly divided into two categories: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). To date, AGI does not exist.

The key challenge for creating general AI is to adequately model the world, with all of its knowledge, in a consistent and useful way. This is a massive undertaking, to say the least.

Most of what we know as AI today is narrow intelligence, where a particular system solves a particular problem.

Unlike human intelligence, such narrow AI intelligence is only effective in the area in which it is trained: fraud detection, face recognition, or social recommendations, for example.

General AI, in contrast, would function in the same way as human intelligence. At the moment, the most prominent example of an attempt to achieve this is the use of neural networks and “deep learning” trained on huge amounts of data.

Neural networks are inspired by how the human brain works. Unlike most machine learning models, which run their calculations over the training data in one go, neural networks work by passing each data point through an interconnected network one at a time, tweaking the parameters a little each time.

As more and more data passes through the network, the parameters stabilize; the end result is a “trained” neural network that can then produce the desired output on new data, such as recognizing whether an image contains a cat or a dog.
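To make that training loop concrete, here is a minimal sketch of a single artificial “neuron” trained by passing toy data points through it one by one and nudging its parameters each time. The data, learning rate, and number of passes are all made up for illustration; real systems scale this same basic loop to billions of parameters.

```python
import math
import random

def sigmoid(x):
    # Squashes any number into the range (0, 1), read as a class probability.
    return 1.0 / (1.0 + math.exp(-x))

def train(data, epochs=2000, lr=1.0, seed=0):
    """Train a single neuron: pass each point through, nudge the parameters."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)]  # random start
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = pred - y  # gradient of the log loss w.r.t. the pre-activation
            w[0] -= lr * err * x1  # tweak each parameter a little
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x1, x2):
    return round(sigmoid(w[0] * x1 + w[1] * x2 + b))

# Toy data: points near the origin are class 0, points near (1, 1) are class 1.
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]
w, b = train(data)
```

After enough passes the parameters stabilize, and the trained neuron can classify unseen points such as (0.15, 0.15) or (0.85, 0.85) that were not in its training data.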

A significant part of today’s leap forward in artificial intelligence is driven by technological improvements in how large neural networks are trained, adjusting huge numbers of parameters in each run thanks to the power of large cloud-computing infrastructures.

For example, GPT-3 (the AI system that powers ChatGPT) is a large neural network with 175 billion parameters.
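To give a sense of where such a number comes from, here is a rough sketch of how parameters are counted in a fully connected network: each layer contributes one weight per input-output connection, plus one bias per output. The layer sizes below are invented for illustration and bear no relation to GPT-3’s actual architecture.

```python
def dense_param_count(layer_sizes):
    """Total weights and biases in a fully connected network."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # weight matrix between the two layers
        total += n_out         # one bias per output neuron
    return total

# A toy network: 784 inputs -> 256 hidden units -> 10 outputs.
# 784*256 + 256 + 256*10 + 10 = 203,530 parameters.
count = dense_param_count([784, 256, 10])
```

Even this small toy network has over 200,000 parameters; modern large language models stack far wider layers, many times over, to reach the billions.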

What is needed for AI to work?

For AI to be successful, three things are needed.

First, we need high-quality, unbiased data, and lots of it. Researchers building neural networks use the large datasets that have come about as society has digitized.

Copilot, which assists programmers, draws on billions of lines of code shared on GitHub. ChatGPT and other large language models use the billions of websites and text documents stored online.

Text-to-image tools such as Stable Diffusion, DALL-E 2, and Midjourney use image-text pairs from datasets such as LAION-5B.

AI models will continue to evolve, becoming more sophisticated and influential as we increasingly digitize our lives and provide them with alternative data sources, such as simulated data or data from game environments such as Minecraft.

AI also needs computing infrastructure for effective training. As computers become more powerful, models that currently require intensive effort and large-scale computation may soon be run locally.

Stable Diffusion, for example, can already be run on local computers rather than in cloud environments.

The third thing AI needs is improved models and algorithms. Data-driven systems continue to make rapid progress in one area after another once considered the territory of human intelligence.

However, as the world around us is constantly changing, AI systems need to be constantly retrained using new data.

Without this crucial step, AI systems will give answers that are incorrect, or fail to take into account new information that has emerged since they were trained.

Neural networks are not the only approach to AI. Another prominent camp in artificial intelligence research is symbolic AI. Instead of processing huge datasets, it relies on rules and knowledge similar to the human process of forming internal symbolic representations of specific phenomena.
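Here is a minimal sketch of the rule-based style that symbolic AI relies on: knowledge is written down as explicit if-then rules, and the system chains them together to reach conclusions, with no training data involved. The facts and rules below are toy examples invented for illustration.

```python
# Knowledge base: each rule says "if all these facts hold, conclude this".
rules = [
    ({"has_fur", "says_meow"}, "is_cat"),
    ({"is_cat"}, "is_mammal"),
    ({"is_mammal"}, "is_animal"),
]

def infer(facts):
    """Forward chaining: keep applying rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire a rule only if its conditions are met and it adds something new.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

conclusions = infer({"has_fur", "says_meow"})
```

Starting from two observed facts, the chain of rules concludes the subject is a cat, therefore a mammal, therefore an animal; the trade-off is that every rule must be authored and maintained by hand.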

But the balance of power over the past decade has tilted heavily toward data-driven approaches, and the founding fathers of modern deep learning were recently awarded the Turing Award, computer science’s equivalent of the Nobel Prize.

Data, computation, and algorithms form the basis of the future of AI.

All indications are that rapid progress will be made in all three categories for the foreseeable future.
