I’ve actually been trying to write a post about parenting for a while, but life has been pretty busy since the beginning of February, and I’m realizing that it takes a lot of time to do justice to someone else’s ideas. So for this post, I’ll just write what I think, about something topical but not exactly[1].
First, two starting points for the rest of the post.
Premise 1
There’s no magic involved in the human mind.
Human beings are physical beings, and all the things that go on in our minds that are so special to us are the results of physical processes. This also means that, in principle, non-human and non-biological entities are capable of processes that are equivalent to things that we humans experience.
This doesn’t mean that we can explain our emotions or our thoughts in terms of this atom doing this and that atom doing that. I mean, we have trouble explaining what happens when we have more than a few atoms.
All I’m saying is that we don’t need to add a new law of physics to explain why we can think and feel.
Premise 2
Evolution is usually the driving force behind things (or ideas) taking over.
Evolution is what happens when you have things that can make new copies of themselves, and when some of those things are more likely to copy themselves than others. Biological evolution is where this was first discovered, and it is why the human mind is good at what it does. Having minds that can plan, talk to each other, have feelings, etc. was advantageous for our ancestors, and that’s why we have them.
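If you want to see that definition in action, here’s a toy simulation in Python (something I made up to illustrate this post, not real biology): each “replicator” is just a number representing how likely it is to copy itself, copies mutate a little, and the better copiers end up dominating.

```python
import random

random.seed(0)

# Ten replicators; each number is that replicator's "fitness":
# its chance of leaving a copy of itself in a given round.
population = [0.5] * 10

for generation in range(100):
    next_gen = []
    for fitness in population:
        next_gen.append(fitness)  # the replicator itself survives
        # With probability equal to its fitness, it also leaves a
        # slightly mutated copy of itself.
        if random.random() < fitness:
            mutated = min(1.0, max(0.0, fitness + random.gauss(0.0, 0.05)))
            next_gen.append(mutated)
    # Resources are finite: at most 50 replicators make it to the next round.
    population = random.sample(next_gen, min(50, len(next_gen)))

print(f"mean copy chance after 100 generations: "
      f"{sum(population) / len(population):.3f}")
```

Run it and the average copy chance creeps up toward 1, even though the culling at the end of each round is completely random. That’s all I mean by evolution here: differential copying does the work.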
Cultural evolution[2] is also a big part of the world we have. Humans learned how to use tools, build machines, run governments, etc. These are ideas that spread across the world, because humans found them useful and they kept copying and using them.
Speculations about the far future
When people talk about the future with AI, positive or negative, most are concerned with short- and medium-term futures: years, decades, and maybe centuries.
I want to step way back and think about millions or billions of years. This is where maybe you can start reading this post as sci-fi rather than non-fiction, but it is something that I think can happen with some non-zero probability[3].
Superhuman AI
I don’t think it’s too controversial in the year 2023 to say that there will be superhuman AI sometime in the future. They may or may not be able to do things that a human or a group of humans could never do in principle, but they will definitely be much faster at doing what a human does.
Self-replicating AI
AI programs run on physical computers and collections of computers. And right now, humans have to design and manufacture those physical devices (with some automation) and humans have to provide the power and start up the program (again with some automation). So there’s no self-replication there.
But at some point, with AI programs being given agency to do things in the physical world, a self-replicating AI will be possible[4]. And I can imagine scenarios where humans would like to create a robot that can copy itself, because that would make the robot self-sustaining in a way. And once this kind of thing exists, we may start an evolution of AI algorithms or AI robots or AI something-that’s-hard-to-imagine-right-now.
AI evolution
One scenario that would be kind of a dead end is if the thing that’s evolving is more like a virus than a self-sustaining system. Maybe an AI “virus” takes over all the computers that are connected to the internet, and that’s it.
What would be more interesting is if the self-replicator actually produces more hardware that runs the identical algorithm or something slightly different[5]. Then some classes of AI self-replicators will inhabit an ecosystem, maybe together with humans or something post-human, at least for a while[6].
They may be capable of civilization, or they may be good at copying themselves and not much else. This is the part of the story I’m most uncertain about. But if they are capable of civilization, there’s fun stuff that follows.
Space exploration
I think “Let’s send humans to Mars” is the dumbest idea that a lot of smart people have[7]. If human civilization or something that follows it is to spread across the solar system and beyond, it’s more likely that some sort of robots would do the spreading and not humans or anything that went through purely biological evolution on Earth.
Which do you think is easier?
1. Send humans (or other living things) through empty space, making sure they are well-fed, well-oxygenated, well-protected, etc. Then provide an Earth-like environment wherever they end up.
2. Design robots that can withstand conditions during the trip and at the destination, and send them there.
One possibility is for humans to design such robots, and make them capable of self-replication because that enables the robots to sustain themselves as long as they have the necessary resources.
The other possibility is for evolving AI robots to decide that they want to get out themselves, or to send other self-replicating robots out. The robots would be capable of planning, communicating, etc., if that’s what gives them an evolutionary advantage. So they should be able to run a project to send robots to the Moon or Mars or wherever.
Last words on this
AI robots may or may not expand out into space in the far future of human civilization. But I think that is much more likely than any purely biological entity colonizing another planet. And I think this is true about any biological thing that evolved on any other planet in the universe[8].
So, it seems to me that if we want to look for extraterrestrial intelligence, we should be looking for technosignatures (signs of technology), rather than biosignatures (signs of life).
Does what I’m talking about affect how we should think about AI as it exists now? Not really. We are so, so far from having anything like what I described.
But I do want to point out one thing. If we are starting to get AI systems that interact with the outside world in ways reliable and sophisticated enough that they can create copies of themselves, we probably have AI that is capable of acting as a moral agent. I don’t know if this possibility has been thought about enough by people who aren’t philosophers of AI.
What I’m listening to now
After some wild speculations about the far future, why not some Benny Sings? Nothing too out there. Just good, polished indie pop that I think is hard to dislike.
[1] It also helps that this post has many holes!
[2] i.e. evolution of memes, in its original meaning
[3] Depending on the mood, I can convince myself that the probability is pretty high. But obviously, prediction is hard.
[4] There’s a Wikipedia page on self-replicating machines, which I really should read. I’m just writing what I think so I won’t do that right now.
[5] How we might get such a “mutation” is an interesting question.
[6] The other possibility is that human (or post-human) civilization ends before we get to these things.
[7] If some billionaire wants to work on it as a vanity project, I guess I wouldn’t object too much. But vanity is exactly what we would get out of it.
[8] Because outer space is so different from habitable planets.