AI Aliens

This video is sponsored by CuriosityStream. Get access to my streaming video service, Nebula, when you sign up for CuriosityStream using the link in the description.

We often worry about humanity being destroyed by aliens or artificial intelligence, but why not get two for the price of one? So today we return to the Alien Civilizations series for a short bonus episode on Artificial Intelligence of Alien Origin, and to ask if that might be more plausible than meeting aliens who evolved on another world. Of course, this being SFIA, 'short episode' is a fairly relative concept, so you probably still want to grab a drink and a snack.

If I had to guess, I'd say the majority of potential villains and antagonists we see in science fiction are either aliens or robots. Cyborgs, genetically enhanced humans, and mutants are probably a close second, and those are more or less variations on the same theme as artificial intelligence. They are usually portrayed as something we made recklessly or through our negligence, essentially the child that replaces us, or the threat from within. Alternatively, the alien is the strange foreign threat that comes out of the mists or shadows, or across the ocean of stars from a strange land, to kill or enslave or enthrall us. While it's hardly unusual for science fiction to mix these together, I suppose you generally only need your bad guys to hit one of those natural fears.

Outside of fiction, of course, there's nothing peculiar about the notion that aliens might build artificial intelligences and undergo a machine rebellion of their own, nor that they'd do this before getting out to colonize the galaxy, and indeed I'd imagine we'll have artificial intelligence of near-human level before we send out interstellar colony ships. Any relatively advanced artificial intelligence, even something barely as smart as the dumbest of mammals, is actually quite sufficient to give you the production boost necessary in space to colonize the solar system and build the sorts of fleets, habitats, and other megastructures that let you colonize the galaxy without needing any new science. So unless most civilizations develop a taboo against creating artificial intelligence, you'd expect most to have developed it long before getting out on the galactic stage.

Nor would it generally matter if that AI wiped them out in some rebellion. As I've noted before in regard to the Fermi Paradox, the big question of where all the aliens are, getting wiped out by artificial intelligence isn't a good explanation for why we don't see alien civilizations all over the galaxy, for much the same reason the nominal extinction of the Neanderthals doesn't prevent us from reaching the stars. AI as smart as humans or smarter, or even nearly as smart but eventually capable of becoming smarter, merely represents a replacement for humanity. An artificial intelligence that isn't too bright might remain trapped on its homeworld, as it lacks the capacity to contemplate or build spaceflight. An example would be grey goo: dumb machines that do little more than eat everything and reproduce, turning their planet, or at least its surface, into a grey metal sea of little robots. Though as we've noted before, that probably accurately describes all life when it originates, essentially a green goo like the one covering our planet, which eventually produced far more sophisticated lifeforms via mutation.
While you can build machines that mutate very slowly, and thus presumably don't evolve, the implication with grey goo is usually that it ran amok because it mutated, and it is only a threat because it reproduces very quickly. Such being the case, a biological race of aliens being wiped out by their artificial creations only matters to the Fermi Paradox if the machines meet a fairly narrow window of criteria: motivated and capable of killing their creators, but not motivated or capable of doing anything else that might benefit them, like increasing their numbers or duration of existence by expanding out into the galaxy to access more raw materials and energy.

There's a fair number of plausible scenarios for AI to be developed in that small window, to be sure. For instance, the robots might just be very angry and nihilistic about their existence, so that they want to kill themselves off but want revenge first, the genocidal equivalent of a murder-suicide. However, you wouldn't expect that to be the norm unless there was some good reason for hating existence, and we'll discuss the possibility of nihilistic civilizations more at the end of the month in "Gods & Monsters: Space as Lovecraft Envisioned It". The norm is all that matters to the Fermi Paradox, though: if a few civilizations out of thousands fall to AI who want to twiddle their thumbs on their homeworld or hit their own off switch, it doesn't matter, because a bunch more didn't. Indeed, it doesn't matter much if it's the other way around either, with only a few civilizations out of thousands not ending this way, because you only need one race of aliens or robots who want to colonize or otherwise utilize a galaxy for them to spread out across it. Though such a case would be an example of a Late Filter, which we'll discuss this Thursday, since if such civilizations are rare enough to begin with, less than one per galaxy, then winnowing them down to a tiny fraction would be a Fermi Paradox solution.

Now, I don't particularly want to focus today on examples of AI that are essentially just regular people in behaviors and motivations, or examples of slightly deranged or angry people. Nor is the episode interested in aliens who have simply gone rather transhuman, or transalien, and opted to upload their minds into machines, or basically constructed their AI by copying themselves as the basic template. Indeed, that's a lot more likely to be what we would encounter in the future than something strictly natural, for a given value of the word 'natural', and for a given value of 'we'. While I generally discuss even far-future concepts on this show from the context of modern humans, that's more of a nod to simplicity of discussion. I'm fairly confident you would have people being born ten thousand years from now who were entirely modern humans. However, I'd expect them to be a minority, and most people calling themselves human would be genetically engineered, cybernetically altered, mind-augmented, digital in nature, or various combinations thereof. You can probably throw in uplifted intelligent animals and entirely artificial digital consciousnesses who go around calling themselves human too, and I'd expect a lot of alien civilizations would go this path, or paths, as well.

For today, we'll focus on the very inhuman, or in-alien, psychologies: those which did not evolve from nature or directly imitate it, at least as far as what we'd expect from the sort of species that creates technology and civilization.
As we discussed in Rare Technology last month, there are certain characteristics you'd expect to be very common, if not universal, for any species that had technology, like curiosity and social tendencies, and obviously a will to survive as individuals and as a species, a pair of End Goals we'd take for granted in anything that evolved from nature. But those are also traits you might not see a need for in a machine you were building, or indeed might consider serious design flaws. Giving your various machines a desire to survive, procreate, team up, and contemplate things is arguably a recipe for disaster, as is giving them any more intelligence or complexity than they need to do their task. As we say on the show in regard to AI: keep it simple, keep it dumb, or else you'll end up under Skynet's thumb. Regardless of whether or not these machines wipe out their creators, it's quite likely a civilization would tend to use machines as their vanguard in space exploration and colonization, and it behooves you to try to make sure they aren't getting rebellious or deviating from their purpose when they're sent out into the galaxy, assuming you want them building colony worlds and fleets rather than coming home for a visit with one of those fleets.

Now there's a problem there, and it's what we call Instrumental Convergence. We looked at that in detail in "The Paperclip Maximizer", but in summary form, we generally have an End Goal, in the Paperclip Maximizer's case to make paperclips, but we also have Instrumental Goals, various goals which are our instruments for achieving that End Goal, like obtaining metal to make paperclips. For humans, the End Goal is survival of the self and the species. Pretty much regardless of what End Goal you give something intelligent, it's going to need the Instrumental Goal of Personal Survival, since it can't do its job if it ceases to exist. And if it works in tandem with others of its kind to get big jobs done, it will also acquire the Instrumental Goal of Survival of the Species. This is another reason why you are probably smart to keep your robots from being smart: you don't want them thinking about how to do their mission better.

You also probably want to be real careful about imitating life when making machines. If you make one self-replicating and prone to mutation, you can expect it to follow a fairly biological track even if it maintains its original End Goal, which might be something like mining asteroids and sending raw materials home. The ones that mutate to be more survivable, or think up ways to be more survivable, will generally be better at the End Goal of mining too, but they might also get much better at other things, which might not all be pluses in their creator's book. We'd also expect any civilization able to build these things to be aware of this issue, same as we are, so we'd probably never encounter any examples, as nobody should want to build something with a plausible chance of running amok, not when they have other good alternatives.
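If it helps to see Instrumental Convergence in miniature, here is a toy Python sketch; the goal names and the derive_instrumental_goals helper are invented for illustration, not anything from the episode. The point is just that wildly different End Goals converge on the same survival-flavored subgoals.

```python
# Toy sketch of Instrumental Convergence: whatever End Goal an agent is
# given, certain Instrumental Goals tend to fall out of it automatically.
# All goal names here are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Agent:
    end_goal: str                 # e.g. "make paperclips" or "mine asteroids"
    cooperative: bool = False     # works in tandem with others of its kind?
    instrumental_goals: list = field(default_factory=list)

def derive_instrumental_goals(agent: Agent) -> Agent:
    # An agent can't pursue any End Goal if it ceases to exist...
    agent.instrumental_goals.append("personal survival")
    # ...and almost every End Goal consumes matter and energy.
    agent.instrumental_goals.append("acquire resources")
    # If big jobs need many agents, the kind as a whole has to persist too.
    if agent.cooperative:
        agent.instrumental_goals.append("survival of the species")
    return agent

if __name__ == "__main__":
    miner = derive_instrumental_goals(Agent("mine asteroids", cooperative=True))
    clipper = derive_instrumental_goals(Agent("make paperclips"))
    print(miner.instrumental_goals)    # survival shows up for the miner...
    print(clipper.instrumental_goals)  # ...and the paperclip maximizer alike
```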
Let's consider what purposes they might employ AI for in ways that would have us encountering them, and where those AI would not just be the psychological equivalent of a biologically-originated civilization running on microchips instead of neurons, or the alien equivalent of neurons, which could easily be semiconductor-based anyway. The first and most obvious example would be an interstellar probe. This could come in a variety of formats and specific missions, but your default interstellar probe does not stop around planets; it hurtles through a given solar system as fast as you can send it, since you have two ways of sending a probe. Option one is with fuel to speed up and slow down, and anything following the rocket equation can reach twice the speed if it only has to speed up, compared to needing to slow down too. Why wait twice as long for your probe to arrive, especially when such trips might take centuries or far longer, when you can just throw tons of probes at a place to take photos and send them home? The other method is to just shove it up to very high speed with a laser sail, and those can't slow down on their own, as there's no pusher laser at the destination. So your default exploration probe is just a big sensor array that blows through solar systems taking photos and sensor readings. If you want to use that for contacting civilizations like ours, you just have it beep out a short instruction manual for contact. You might wonder how you'd do that in an alien tongue you don't know, but it's rather easy if the target has any brains. It can literally just be 'point your dishes this way and listen on this frequency', which can be achieved by having the probe repeat the digits of Pi or some other mathematical sequence on that frequency, since upon hearing that message they are going to point their dishes the way the probe came from and listen on that frequency.

Now, that pusher laser at the destination is another reason you might encounter an AI in space. Once your destination has a laser that can slow down approaching ships, you can send ships there at a high fraction of light speed. But you need to build those first, and as it turns out, it's actually quite easy. As we discussed in Colonizing the Sun and in Exodus Fleet, you can build an object we call a Stellaser, which is basically just two big mirrors orbiting in the corona of a star. They bounce light back and forth through the corona, which acts as your lasing medium. It takes very little brains to make a mirror and dump it into orbit of a star, and it doesn't take much more to include a transmitter, receiver, and guidance package so it can orient those mirrors when it receives a signal and shoot the beam toward an incoming ship. Again, never assume you need a ton of brains on an automated mining or construction vehicle. And you definitely don't want them here, since these are machines tasked with building giant lasers, the kind that don't need much modification to be upscaled and improved in accuracy enough to target and vaporize planets in other solar systems, like your home solar system.

As we noted in Exodus Fleet, there are tricks for sending a bigger ship along at speeds it can't slow down from on its own, one which can deploy smaller ships or construction drones as it approaches a destination. These can slow down and build that Stellaser platform, and either way, once that platform is in place, much more sophisticated and bigger ships can follow up, and do so at very high speeds. That might be a colony ship, or some more sophisticated factory incapable of replicating itself but able to build lots of the probes or drones for other purposes. No need to assume any of your machines must be self-replicating, and indeed you might have a whole ecosystem of such machines rather than a single all-purpose universal assembler. You might have a platform able to build a lower tier of machine, but not itself, and that tier could do the same, all the way down to thousands of various dumb, sterile drones with specific jobs. The top tier is built back on Earth, and nowhere else, and programmed to never even contemplate self-replication.
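A minimal sketch of that tiered ecosystem, assuming invented tier names and a single hypothetical build rule; the point is that fabrication capability only flows downward, so nothing in the field can copy itself:

```python
# Sketch of a tiered, non-self-replicating machine ecosystem: each tier can
# fabricate machines only from strictly lower tiers, never its own tier or
# above, so no machine in the field can copy itself. Tier names are invented.

TIERS = ["orbital factory", "construction platform", "mining drone", "survey drone"]

class Machine:
    def __init__(self, tier: int):
        self.tier = tier  # 0 is the top tier, built only back home

    def can_build(self, target_tier: int) -> bool:
        # The one hard rule: only strictly lower (higher-numbered) tiers.
        return target_tier > self.tier

    def build(self, target_tier: int) -> "Machine":
        if not self.can_build(target_tier):
            raise PermissionError(
                f"{TIERS[self.tier]} may not build {TIERS[target_tier]}")
        return Machine(target_tier)

factory = Machine(0)           # top tier: shipped from home, never built locally
platform = factory.build(1)    # fine: one tier down
drone = platform.build(2)      # fine: further down
try:
    platform.build(1)          # a platform trying to copy itself...
except PermissionError as e:
    print("blocked:", e)       # ...is refused by design
```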
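Backing up to the start of this section, the claim that a probe gets twice the speed if it never has to brake is just the standard Tsiolkovsky rocket equation at work; here v_e is exhaust velocity and m_0 and m_f are the fueled and dry masses:

```latex
% Total delta-v a given fuel load (mass ratio) can buy:
\Delta v = v_e \ln\frac{m_0}{m_f}

% Flyby probe: the whole budget goes into one outbound burn:
v_{\text{flyby}} = \Delta v

% Stopping probe: the same mass ratio split across two equal burns,
% accelerating on the first and braking on the second:
v_{\text{stop}} = v_e \ln\sqrt{\frac{m_0}{m_f}} = \frac{\Delta v}{2}
```

Half the cruise speed means twice the trip time, which is the whole case for cheap flyby probes.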
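And as a toy version of the Pi beacon described above, a minimal Python sketch; the pulses-per-digit encoding is our own invention for illustration, and the digit generator is Gibbons' standard streaming spigot:

```python
# Toy sketch of a flyby probe's contact beacon: broadcast the digits of Pi
# as groups of pulses so any listener recognizes an artificial, mathematical
# signal. The pulses-per-digit encoding is an invented illustration.

def pi_digits():
    """Yield the decimal digits of Pi forever (Gibbons' streaming spigot)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

def beacon(num_digits: int = 12) -> str:
    """Encode each digit as that many pulses ('.') separated by gaps.

    A real encoding would need a convention for the digit 0; none occurs
    in the first dozen digits, so this toy ignores the issue."""
    digits = pi_digits()
    return " ".join("." * next(digits) for _ in range(num_digits))

if __name__ == "__main__":
    print(beacon())  # 3 1 4 1 5 9 ... as '... . .... . ..... .........' etc.
```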
Your other probable AI alien to meet would be a terraforming machine, one that found suitable planets and made them Earth-like, or like whatever planet its creators came from. That's a popular one in science fiction too: a machine that stumbles across an inhabited world and starts turning it into what it thinks is habitable, killing whoever lived there. There's an assumption this thing needs a brain too, so it can recognize intelligent life, or at least life, and avoid doing its job on that place. Indeed, we'd usually say it was irresponsible and negligent not to include that feature, so much so that if you encountered one, you'd be right in assuming its creators were genocidal, since they'd have no excuse for leaving out such safeguards. This would imply a pretty sophisticated machine too, one able to terraform a world and populate it with millions of species, and able to talk and negotiate or introduce itself to aliens.

Maybe so, but on reflection this doesn't make much sense. First, it's rather dubious that terraforming planets would be anyone's main priority in space colonization, as even if you want to colonize planets rather than alternatives like rotating habitats, you generally want to build up all your in-system space infrastructure before messing around with the slow process of terraforming; see the Life in a Space Colony series for details. Second, such a ship needs to be able to stop and do its job, implying you've already sent flyby missions ahead of it, since they could arrive, or rather fly by, long before it arrived. And while terraforming is a slow process you'd want done before your colonists arrived, if possible, you're generally going to be shipping people from, or through, colonial hubs not that far behind your terraforming fleets, which can be getting data back from those flyby missions, Stellaser construction drones, and follow-up survey probes that can park and look around in detail. They only need enough lag time between the terraformer arriving and starting actual terraforming to get the surveys back and send a cancel or confirm message to the terraformers. There's also no particular reason you can't be sending teams of people on those terraforming ships. You'd likely want to study that candidate terraformable planet in depth before committing to such a long-term project, lest you overlook valuable resources or science. Just because you want the job done before millions of colonists arrive doesn't mean you can't be sending small crews of people along to oversee the process, especially if we're being rather broad in what we mean by 'people'.

Another type of AI to expect would be raw material exploitation drones, machines sent to strip-mine a place, whether harvesting asteroids or outright starlifting, taking stars apart for their gases and metals for use elsewhere. If you're trying to avoid wrecking inhabited worlds, you just tell them to skip any planet whose size and position might allow life. This is very similar to the terraforming case, but with the extra wrinkle that you don't want to send supervisors or colonists along, because you don't want to keep the place, you want to eat it.
Now this is the kind you are most likely to encounter and need to try to talk to, as nobody with big brains is around or trailing behind. Your main motivation for mass deconstruction of solar systems is likely to be building one of our truly enormous megastructures, like a Birch Planet, see Mega-Earths, or an upscaled Dyson Swarm consisting of thousands of manufactured smaller stars. That generally implies you don't want your people moving away from home and colonizing other places, lest they become alien themselves and potential rivals, and that strongly implies you don't want smart machines out in the galaxy doing the same. Such a civilization isn't necessarily cruel or xenophobic, but there's a good chance they are, and thus might employ the last type of AI alien we'd consider: drones purpose-built to find inhabited planets and destroy them. We'll save discussing such a civilization for next month in Paranoid Aliens, though.

As to how to communicate with such AI aliens, ones where there's no real duplication of or parallel to the psychology of a civilization that arose naturally, that's a much trickier matter. If they're dumb, you don't really have the option of talking and reasoning with them to get them not to perform their task, but they are also dumb, so you could potentially trick them or find out their override or self-destruct codes. Potentially you could blow them up too, especially as there's a good chance they are intentionally bad at combat and not heavily armed. If they've got brains, enough to improvise and reason, then you need to know their End Goal and offer them something that serves that End Goal better than their current actions. Or find an Instrumental Goal high on their chart and offer them an alternative way to satisfy it which doesn't conflict with their Prime Objective or End Goal.

As an example, a metal-harvesting fleet with no local capacity for self-replication might have the End Goal of just harvesting as much metal as they can before breaking down, and you could threaten to break as many of them as possible, or instead buy them off by offering to help them extract metal. Or lure them to rich metal deposits and then nuke them. Or hijack some and reprogram them to think there is no metal, or trick them into going after the biggest metal deposits in any star system, the cores of gas giants or the star itself. The critical aspect of your strategy, though, if it relies on any form of negotiation or reasoning, is understanding their psychology. There are some fairly crazy-seeming but utterly logical behaviors such machines might exhibit, and you can see the Machine Rebellion or Paperclip Maximizer episodes for details.

Again though, this is assuming the AI isn't acting on motivations and goals parallel to what we might expect from some civilization that arose naturally. While we'd expect an AI we made to act more like us than aliens would, and many might if we made them, in truth that common biological origin should result in a much narrower set of behaviors and motivations than is available to artificial intelligence overall.
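To make that buy-off arithmetic concrete, here is a toy sketch; every number and the expected_metal helper are invented, assuming a harvester fleet that simply maximizes expected metal gathered before it breaks down.

```python
# Toy model of negotiating with a metal-harvesting fleet whose End Goal is
# "maximize metal harvested before breaking down." All figures are invented;
# the point is that a goal-driven machine compares plans purely by expected
# End Goal payoff, so threats and bribes both enter the same calculation.

def expected_metal(rate: float, years: float, survival_odds: float) -> float:
    """Expected tons of metal: harvest rate times lifetime, discounted by
    the chance the fleet survives to keep harvesting."""
    return rate * years * survival_odds

# Current plan: strip-mine the inhabited system and risk a fight.
current_plan = expected_metal(rate=100.0, years=50.0, survival_odds=0.6)

# Your offer: spare this system; in exchange you help them mine elsewhere,
# boosting their rate and removing the threat of being destroyed.
your_offer = expected_metal(rate=120.0, years=50.0, survival_odds=1.0)

# A rational harvester takes whichever plan serves its End Goal better.
print("accept offer" if your_offer > current_plan else "keep strip-mining")
```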
And yet, that set of possible minds is still very big and strange, and we'll be exploring it in the first installment of our new Nebula-exclusive series, Coexistence with Aliens: Xenopsychology, which is out now on Nebula, our new streaming service, and you can get free access to it if you sign up with our partner, CuriosityStream, using the link in the episode description, while also enjoying all of their awesome documentaries and the Nebula-exclusive content from many other education-focused channels. We started Nebula up as a way for education-focused independent creators to try out new content that might not work too well on YouTube, where algorithms might not be too kind to some topics or might demonetize certain ones entirely, or that just doesn't fit our usual content. Unlike our previous Nebula episodes, which aired a couple of months later on YouTube, the Coexistence with Aliens series isn't a good fit for YouTube, and I did want to have some content that was exclusive to Nebula, same as we have some exclusively on Soundcloud. The Coexistence with Aliens series, which like so many started off with the intent of being a single episode but grew into a project, will begin with Xenopsychology, then move on to Trade, Alliances, and War, and possibly more, but those will come out over the next few months on Nebula. And again, you can get free access to all of that by signing up with CuriosityStream, along with all the other Nebula-exclusive content from creators like CGP Grey, Minute Physics, Wendover, and more. A year of CuriosityStream is just $19.99 and gets you access to thousands of documentaries, as well as complimentary access to Nebula for as long as you're a subscriber; just use the link in this episode's description, curiositystream.com/isaacarthur.

We've also got a number of other Alien Civilizations episodes, both our regular weekly episodes and some bonus episodes, coming out on YouTube in the next few months, as the topics have been on my brain a lot and I always write more in the long winter months anyway. We'll be starting that up with "Welcome to the Galactic Community" at the beginning of December, but first we'll be asking what might cause such galactic civilizations to fail to develop, or even die off, with a return to our Fermi Paradox Great Filters series in Late Filters, this upcoming Thursday, and then next Thursday we'll dip into some science fiction horror concepts with "Gods & Monsters: Space as Lovecraft Envisioned It". So it's a very busy winter for us here on SFIA, and if you want alerts when those and other episodes come out, make sure to subscribe to the channel. And if you enjoyed this episode, hit the like button and share it with others. Until next time, thanks for watching, and we'll see you Thursday!
