
Leading computer scientists are debating the next steps for AI in 2021



The 2010s were a huge decade for artificial intelligence, thanks to advances in deep learning, a branch of AI made practical by the growing capacity to collect, store, and process large amounts of data. Today, deep learning is not just a topic of scientific research but a key component of many everyday applications.

But a decade of research and application has made it clear that, in its current state, deep learning is not the final answer to the ever-elusive challenge of creating human-level AI.

What do we need to push AI to the next level? More data and larger neural networks? New deep learning algorithms? Approaches other than deep learning?

This question has been much debated in the AI community, and it was the focus of an online event that Montreal.AI hosted last week. Titled “AI Debate 2: Moving AI Forward: An Interdisciplinary Approach,” the debate brought together researchers from a variety of backgrounds and disciplines.

Hybrid artificial intelligence

Cognitive scientist Gary Marcus, who took part in the debate, reiterated some of the main shortcomings of deep learning, including its excessive data requirements, its limited capacity to transfer knowledge to other domains, its opacity, and its lack of reasoning and knowledge representation.

Marcus, an outspoken critic of deep learning approaches, published an article in early 2020 in which he proposed a hybrid approach that combines learning algorithms with rule-based software.

Other speakers also pointed to hybrid artificial intelligence as a possible solution to the challenges of deep learning.

“One of the key issues is to identify the building blocks of AI and how to make AI more reliable, explainable, and interpretable,” said computer scientist Luis Lamb.

Lamb, co-author of the book Neural-Symbolic Cognitive Reasoning, proposed a foundational approach to neural-symbolic AI based on both logical formalization and machine learning.

“We use logic and knowledge representation to represent the reasoning process as [it] is integrated with machine learning systems, so that we can also effectively reform neural learning using deep learning machinery,” said Lamb.
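
To make the idea concrete, here is a minimal sketch of what such a neural-symbolic hybrid can look like, assuming a toy setup of our own; it is not code from Marcus or Lamb, and every name in it is hypothetical. A learned perception module extracts symbols from raw input, and a hand-written rule base then reasons over those symbols:

```python
# Hypothetical neural-symbolic hybrid: a learned model extracts symbols
# from raw input, and a rule base reasons over them. The "neural" stage
# is stubbed with a toy scorer standing in for a trained network.

def neural_perception(pixels):
    """Stand-in for a trained classifier: maps raw input to symbols."""
    brightness = sum(pixels) / len(pixels)  # a real system would run a network here
    return {"is_light": brightness > 0.5}

RULES = [
    # (precondition over symbols, conclusion to add)
    (lambda s: s.get("is_light"), "daytime"),
    (lambda s: not s.get("is_light"), "nighttime"),
]

def symbolic_reasoning(symbols):
    """Rule-based layer: derives conclusions from extracted symbols."""
    return [conclusion for condition, conclusion in RULES if condition(symbols)]

def hybrid_pipeline(pixels):
    symbols = neural_perception(pixels)  # learning component
    return symbolic_reasoning(symbols)   # reasoning component

print(hybrid_pipeline([0.9, 0.8, 0.7, 0.6]))  # -> ['daytime']
```

The appeal of this division of labor is that the rules stay inspectable and editable even when the perception module is an opaque network.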

Inspiration from evolution

Fei-Fei Li, a professor of computer science at Stanford University and former chief AI scientist at Google Cloud, noted that in the history of evolution, vision has been one of the most important catalysts for the emergence of intelligence in living beings. Likewise, work on image classification and computer vision helped trigger the deep learning revolution of the past decade. Li is the creator of ImageNet, a dataset of millions of labeled images used to train and evaluate computer vision systems.

“As scientists, we ask ourselves, what is the next North Star?” said Li. “There is more than one. I have been extremely inspired by evolution and development.”

Li pointed out that intelligence in humans and animals emerges from active perception and interaction with the world, a trait sorely lacking in today’s AI systems, which rely on data curated and labeled by humans.

“There is a fundamental critical loop between perception and actuation that drives learning, understanding, planning and reasoning. And this loop can be better realized when our AI agent can be embodied, can dial between exploratory and exploitative actions, is multimodal, multi-task, generalizable and often social,” she said.

At her Stanford lab, Li is currently working on building interactive agents that use perception and actuation to understand the world.

OpenAI researcher Ken Stanley also discussed lessons from evolution. “There are properties of evolution in nature that are just so profoundly powerful and have not been explained algorithmically yet, because we cannot create phenomena like those that are created in nature,” Stanley said. “Those are properties we should continue to chase and understand, and they are properties not only of evolution but of ourselves.”

Reinforcement learning

Computer scientist Richard Sutton pointed out that work on AI has mostly lacked a “computational theory,” a term coined by neuroscientist David Marr, who is best known for his work on vision. A computational theory defines the goal an information-processing system seeks and why it seeks that goal.

“In neuroscience, we lack a high-level understanding of the goal and purpose of the overall mind. It is also true in artificial intelligence, perhaps more surprisingly so in AI. There is very little computational theory in Marr’s sense in AI,” Sutton said. He added that textbooks often define AI simply as “getting machines to do what people do,” and that most current conversations in AI, including the debate between neural networks and symbolic systems, are about “how to achieve something, as if we already understood what it is we are trying to do.”

“Reinforcement learning is the first computational theory of intelligence,” Sutton said, referring to the branch of AI in which agents are given the basic rules of an environment and left to discover ways to maximize their reward. “Reinforcement learning is explicit about the goal, about the whats and the whys. In reinforcement learning, the goal is to maximize an arbitrary reward signal. To this end, the agent has to compute a policy, a value function, and a generative model,” Sutton said.
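
As a concrete illustration of that framing, here is a minimal tabular Q-learning sketch; the corridor environment and all hyperparameter values are illustrative assumptions of ours, not code from the debate. The agent receives nothing but a reward signal, estimates a value function from it, and derives a policy by acting greedily on that function:

```python
import random

# Tabular Q-learning on a toy corridor: states 0..4, reward only upon
# reaching the right end. The agent learns action values from the
# reward signal alone and derives a greedy policy from them.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # step left / step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # step size, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """One environment transition; reward arrives only at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, float(nxt == GOAL), nxt == GOAL

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(500):                    # training episodes
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2, r, done = step(s, a)
        # Move the value estimate toward the reward-maximizing target.
        target = r + GAMMA * max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

print({s: greedy(s) for s in range(N_STATES)})  # every state should prefer +1
```

Note that nothing in the code states what a “good” state is; the goal is expressed entirely through the reward signal, which is exactly the explicitness Sutton is pointing at.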

He added that the field needs to further develop an agreed-upon computational theory of intelligence, and said that reinforcement learning is currently the standout candidate, though he acknowledged that other candidates might be worth exploring.

Sutton is a pioneer of reinforcement learning and co-author of a seminal textbook on the subject. DeepMind, the AI lab where he works, is deeply invested in “deep reinforcement learning,” a variant of the technique that integrates neural networks into core reinforcement learning methods. In recent years, DeepMind has used deep reinforcement learning to master games such as Go, chess, and StarCraft 2.

While reinforcement learning bears striking similarities to the learning mechanisms in the brains of humans and animals, it also suffers from the same challenges that plague deep learning. Reinforcement learning models require extensive training to learn the simplest things and are strictly confined to the narrow domain they are trained on. For the time being, developing deep reinforcement learning models requires very expensive computational resources, which limits research in the area to deep-pocketed companies such as Google, which owns DeepMind, and Microsoft, the quasi-owner of OpenAI.

Integrating the world’s knowledge and common sense into AI

Computer scientist and Turing Award winner Judea Pearl, best known for his work on Bayesian networks and causal inference, stressed that AI systems need world knowledge and common sense to make the most of the data they are fed.

“I think we should build systems that have a combination of knowledge of the world along with data,” Pearl said, adding that AI systems based solely on collecting and blindly processing large amounts of data are doomed to fail.

Knowledge does not come from data, Pearl said. Instead, we use the innate structures in our brains to interact with the world, and we use data to interrogate and learn from the world, as witnessed in newborns, who learn many things without being explicitly instructed.

“That kind of structure must be implemented externally to the data. Even if we succeed by some miracle in learning that structure from data, we still need to have it in a form that can be communicated with humans,” Pearl said.
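
As a toy illustration of that division between structure and data (a minimal sketch of our own, not Pearl’s code; the example and the numbers are invented), the causal structure “rain causes wet grass” can be supplied as prior knowledge, while the data only fill in the conditional probabilities, which a query then combines via Bayes’ rule:

```python
# Knowledge + data, illustratively: the structure Rain -> WetGrass is
# supplied by hand; only its probabilities are estimated from data.
data = [  # (rain, wet_grass) observations, invented for illustration
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]

# Parameters of the hand-specified structure, estimated from the data.
p_rain = sum(r for r, _ in data) / len(data)
p_wet_given = {
    r: sum(w for rr, w in data if rr == r) / max(1, sum(rr == r for rr, _ in data))
    for r in (True, False)
}

# Query the model with Bayes' rule: P(rain | wet_grass = True).
num = p_wet_given[True] * p_rain
den = num + p_wet_given[False] * (1 - p_rain)
print(f"P(rain | wet grass) = {num / den:.2f}")  # -> 0.67
```

The direction of the arrow is knowledge that the data alone cannot supply, which is precisely the gap Pearl is pointing at.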

University of Washington professor Yejin Choi also stressed the importance of common sense and the challenges its absence poses to current AI systems, which are focused on mapping input data to outcomes.

“We know how to solve a dataset without solving the underlying task with today’s deep learning,” Choi said. “That is due to the significant difference between AI and human intelligence, especially knowledge of the world. And common sense is one of the fundamental missing pieces.”

Choi also pointed out that the reasoning space is infinite, and that reasoning is itself a generative task, very different from the categorization tasks that today’s deep learning algorithms and evaluation benchmarks are suited to. “We never enumerate very much. We just reason on the fly, and this is going to be one of the key fundamental intellectual challenges as we go forward,” Choi said.

But how do we achieve common sense and reasoning in AI? Choi proposed a wide range of parallel research areas, including combining symbolic and neural representations, integrating knowledge into reasoning, and constructing benchmarks that are not just about categorization.

We do not yet know the full path to common sense, Choi said, adding: “But one thing is for sure, we can’t just get there by making the tallest building in the world taller. Therefore, GPT-4, -5 or -6 does not cut it.”
