What do we need for AGI? Some final thoughts

One way to get a good sense of the recipe for stronger artificial intelligence is to talk to the experts who gather at today’s conferences and trade shows to think about what the near future will look like.

There was CES earlier this month, and other industry presentations run throughout the year, but there are also symposia and smaller conference events where people close to the industry talk about probabilities, priorities and solutions.

As I’ve talked to some of these people informally and listened to formal presentations, some common themes are emerging about what we’ll need to put into the next generation of AI—artificial intelligence systems that are more alive and more capable than we have now.

Here’s some of that secret sauce people are talking about to make cutting-edge AI breakthroughs in 2025.

Physics-aware systems

To be truly impressive, AI systems must understand the world around them. This is difficult because they lack biological bodies, which come naturally equipped with all kinds of sensors to help navigate three-dimensional space.

However, AI systems are learning physics the same way they learn everything else – through huge amounts of training data and extremely complex neural networks that converge on the right results through processes like backpropagation and stochastic gradient descent.
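To make that concrete, here is a minimal sketch of the idea that a model can absorb a physical law purely from data. The dataset, coefficient and learning rate are all invented for illustration: a single learnable weight is fit to noisy free-fall measurements by gradient descent, and the model recovers the physics (0.5 × g) without ever being given the formula.

```python
import numpy as np

# Toy example: learn the free-fall law d = 0.5 * g * t^2 from data alone.
# The model never sees the formula -- it fits one coefficient by
# gradient descent, the same loss-driven process that trains large networks.

rng = np.random.default_rng(0)
t = rng.uniform(0.1, 3.0, size=200)                # drop times (s)
d = 0.5 * 9.81 * t**2 + rng.normal(0, 0.05, 200)   # noisy distances (m)

x = t**2        # feature the weight multiplies
w = 0.0         # single learnable weight
lr = 0.01

for _ in range(2000):
    pred = w * x
    grad = 2 * np.mean((pred - d) * x)   # gradient of mean squared error
    w -= lr * grad

print(f"learned coefficient: {w:.3f}  (true 0.5*g = {0.5 * 9.81:.3f})")
```

Scaled up by many orders of magnitude, this is the same mechanism by which video-trained models pick up intuitions about gravity, momentum and occlusion.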

So we are getting closer to that component of strong artificial general intelligence, or AGI.

Continuous memory

Another big element of AI getting stronger is systems becoming better able to remember what they’ve experienced in the past.

This comes in all sorts of forms – information about previous interactions with people, sensory input from the surrounding world, and other data that is either experiential or informs the machine’s experience.

For example, when asked, ChatGPT defines dynamic memory as “the storage and retrieval of information over long periods” and lifelong learning as “the ability to continuously acquire and improve knowledge without catastrophic forgetting.”

Catastrophic forgetting?

This is somehow poetic.
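One common mitigation for catastrophic forgetting is rehearsal: keep a bounded sample of past experiences and mix them back into new training. Here is a minimal sketch of such a replay buffer using reservoir sampling (the class name and usage are hypothetical, not from any particular framework):

```python
import random

class ReplayBuffer:
    """Bounded store of past experiences.

    Reservoir sampling keeps a uniformly representative sample of
    everything seen so far, so old knowledge can be rehearsed alongside
    new data -- a common mitigation for catastrophic forgetting.
    """
    def __init__(self, capacity=1000, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            # Replace a random slot so every item seen so far has an
            # equal chance of remaining in the buffer.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        return self.rng.sample(self.items, min(k, len(self.items)))

buf = ReplayBuffer(capacity=100)
for step in range(10_000):
    buf.add(step)
print(len(buf.items), buf.seen)  # memory stays bounded; history stays sampled
```

Production systems use far richer mechanisms (vector stores, episodic memory modules, regularization schemes), but the core tension is the same: remember the past without drowning in it.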

Physical interaction and sensorimotor skills

Artificial intelligence is only as strong as its hardware systems and its footprint in the physical world.

In other words, there’s not much a computer can do from a desktop. It must be able to move and interact with physical systems. That means having complex sensory systems, but it also means having physics-aware bionic structures that can navigate three-dimensional space.

When we talk about robot agility, this is the category we are dealing with.

Access to training data

Then, too, AI is only as good as its training data. It needs accurate data to produce useful results. This is where people get concerned about “hallucinations”, or AI simply making flatly false statements.

This is also where built-in biases and miscalibration of AI systems come into the conversation.

This might not be a big deal if you’re making music recommendations, but it can be a very big deal if the AI is responsible for, say, approving loans or helping people find jobs.

Multidimensional AI

Here’s another idea I got recently from some experts who were talking about the path to AGI itself.

AI, they argue, is not linear. It is multidimensional. It follows not just one trajectory but several, which combine to form the elements of what we see as the frontier of artificial intelligence.

This is where I think the work of Marvin Minsky comes into play. In his book The Society of Mind, Minsky was quite specific about his theory that the human brain is not a single computer, but a collection of cooperating components that work together to produce real-time human cognition.

You can also call them “agents”.

This year, we have people talking about “agentic AI” and multi-agent collaborative intelligence systems. When computers can divide up work and delegate tasks, they can begin to build complex systems that work more like the human brain.
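The division-of-labor idea can be sketched in a few lines: a coordinator routes each task to whichever specialist “agent” handles that kind of work. The agent names and task types below are invented purely for illustration; real agentic systems route between full models, tools and APIs rather than toy functions.

```python
# A toy "society of agents": a coordinator delegates each task to the
# specialist registered for that kind of work.

def math_agent(task):
    return f"math result for {task!r}"

def text_agent(task):
    return f"summary of {task!r}"

SPECIALISTS = {
    "calculate": math_agent,
    "summarize": text_agent,
}

def coordinator(task_type, task):
    # Look up the right specialist; fall back gracefully if none exists.
    agent = SPECIALISTS.get(task_type)
    if agent is None:
        return f"no specialist for {task_type!r}"
    return agent(task)

print(coordinator("calculate", "2 + 2"))
print(coordinator("summarize", "quarterly report"))
```

The interesting behavior comes not from any single agent but from the routing and collaboration among them – which is exactly Minsky's point about the mind.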

Early science fiction writers got it wrong – it’s not about linearly scanning human brain activity and replicating it. It’s about artificial systems evolving to the point where they work like the human brain and deliver much the same performance.

These are some of the things I’ve been hearing over the past few weeks as we get ready for a banner year in AI. Keep an eye on this space – because there will be a lot going on, not only in terms of models and devices, but also in terms of planning, and hopefully, a regulatory framework. We need to reckon with the power of AI to harness it in the right ways. And that will take work.
