Generative Agents

A few weeks ago we were talking here about the NPC revolution.

We fell short. It's more than that. It's mind-blowing.

If this is the first time you've heard of Generative Agents, the next five minutes of reading could change your life.

A few days ago, a mind-blowing new paper from Stanford University and Google was released (link):


The idea is simple: create a kind of game board, a sandbox, like in The Sims, where different AI agents are created and given a personality, a background, and an objective. This creates a test environment in which you can simulate human behavior. Some kind of simplified Matrix with AI agents.

For example, one of the agents is assigned the objective of throwing a party. This objective involves interacting with other agents, who get invited to the party, and therefore generates social situations that we can simulate and analyze.

What is an AI agent?

Autonomous agents/Generative agents/AutoGPTs are systems that can receive tasks, strategize, browse the internet, and refine their approach in real time.

How does it work?

Let's see an example.

I've created an AI agent for Belobaba with this goal: create a trading competition for students so they can learn in a paper trading environment.

You can create yours here.

What does this mean?

The AI just needs a goal from you. It automatically creates tasks, executes them, and reviews the results. These agents don't need human supervision to do things; they simply create new tasks for themselves to accomplish the goal.
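That create-execute-review loop can be sketched in a few lines. This is a minimal illustration, not the actual AutoGPT code; `ask_llm` is a hypothetical stand-in for a real language-model API call, which you inject yourself:

```python
# Minimal sketch of an autonomous task loop (AutoGPT-style).
# `ask_llm` is any function that takes a prompt and returns model text.
from typing import Callable

def run_agent(goal: str, ask_llm: Callable[[str], str],
              max_steps: int = 10) -> list[str]:
    tasks = [f"Break down the goal: {goal}"]  # seed task
    done: list[str] = []
    for _ in range(max_steps):
        if not tasks:                          # no tasks left: goal loop ends
            break
        task = tasks.pop(0)
        # Execute the current task
        result = ask_llm(f"Goal: {goal}\nTask: {task}\nExecute and report.")
        done.append(f"{task} -> {result}")
        # Review progress and generate follow-up tasks
        new = ask_llm(f"Goal: {goal}\nDone so far: {done}\n"
                      "List new tasks, one per line.")
        tasks.extend(t for t in new.splitlines() if t.strip())
    return done
```

The key point is that the human appears only once, to set the goal; everything after that is the model talking to itself.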

It’s frightening!

But let's get back to the new Stanford paper. What they have done, basically, is create an environment (a very simplified representation of reality where you can control the variables) and set 25 of these agents free to interact and work.

And in these interactions, incredible things happen:


Those who know me know about my obsession with Stanley Milgram and his experiments on obedience, which explain, among many other things, how an advanced society like Germany in the 1930s could end up becoming Nazi Germany.

Can you imagine a toy like this in the hands of someone like Milgram? Sociologists, psychologists, political scientists, and humanists: you are welcome.

As the paper describes:

Computational software agents simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day.

A lot of things can be studied about human behavior in these environments: How is information transmitted? What about fake news? Or wrong ideas? You can run so many experiments: on consumption, racism, feminism, fundamentalism, fanaticism, fascism, and a long etcetera. And create variations to understand how each variable influences the outcome, as Milgram did in his experiments. What happens if we kill an NPC in a community? If we remove a park that is the meeting point where they interact? What happens if half of the agents speak one language and the other half another?

Imagine a hypothesis, create scenarios, run the test, and compare the results.

What if they had to compete for scarce resources like water? Or knowledge?

What if instead of a town this environment simulates a company? Can we create scenarios to simulate what happens under certain business decisions?


I've learned many new things thanks to AI. Perhaps these agents will discover better ways to solve some problems.

Let’s try to understand a little bit better how they work:

Each agent has three cognitive capacities: observation, planning, and reflection.
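What could such an agent look like in code? Here is my own simplification, not the paper's implementation: memories are plain timestamped strings, and where the paper uses LLM calls for planning and reflection, these methods are trivial placeholders:

```python
# Toy sketch of an agent with the three capacities:
# observation, planning, and reflection.
import time

class GenerativeAgent:
    def __init__(self, name: str):
        self.name = name
        # The memory stream: (timestamp, event text) pairs
        self.memory_stream: list[tuple[float, str]] = []

    def observe(self, event: str) -> None:
        """Record a new observation in the memory stream."""
        self.memory_stream.append((time.time(), event))

    def plan(self) -> str:
        """Produce a next action from recent memories (an LLM call in the paper)."""
        recent = [e for _, e in self.memory_stream[-5:]]
        return f"{self.name} plans around: {', '.join(recent)}"

    def reflect(self) -> str:
        """Condense recent memories into a higher-level summary (also an LLM call in the paper)."""
        return f"{self.name} reflects on {len(self.memory_stream)} memories"
```

Everything interesting in the paper happens inside `plan` and `reflect`, where the language model turns raw observations into believable behavior.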

This is also a revolution for game developers, for example, as you can read in the paper:


Westworld is coming.

You can see here a detailed journey of one of these agents, who works on developing a new mobile app:


Periodically, each agent enters a reflective state. It examines the 100 most recent objects in its memory stream and uses the prompt: "Given only the information above, what are 3 most salient high-level questions we can answer about the subjects in the statements?"


And here is an example of the memory stream:

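When an agent needs to act, the paper retrieves memories by scoring each one on recency, importance, and relevance. A toy version of that retrieval might look like this; the decay rate, the equal weighting, and passing relevance in as precomputed numbers (in the paper it comes from embedding similarity, and importance from an LLM rating) are all illustrative assumptions:

```python
# Toy memory-stream retrieval: score = recency + importance + relevance.
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    created_at: float   # timestamp of the observation
    importance: float   # 1-10, LLM-rated in the paper

def recency(mem: Memory, now: float, decay: float = 0.995) -> float:
    """Exponentially decay a memory's weight with its age."""
    return decay ** (now - mem.created_at)

def retrieve(stream: list[Memory], relevance: dict[str, float],
             now: float, k: int = 3) -> list[Memory]:
    """Return the k memories with the highest combined score."""
    def score(m: Memory) -> float:
        return recency(m, now) + m.importance / 10 + relevance.get(m.text, 0.0)
    return sorted(stream, key=score, reverse=True)[:k]
```

The combination matters: a fresh but trivial observation ("saw a dog") can lose to an older but important and relevant one ("planning a party") when the agent decides what to do next.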

Not convinced yet that this can take us to a new level of understanding of human behavior?

You need to watch the HBO series The Rehearsal (El método Fielder):

But do we need to set some limits here?

Earlier this week, Carnegie Mellon chemists released a new paper showing that these agents can synthesize drugs just by being connected to the API of a chemical lab. For the moment they have only successfully synthesized chemicals like ibuprofen, but where are the limits?

Are we playing God?

What do you think: is this too much? Are we going too fast?

Yours in crypto and AI.