In the video below, All Time Conspiracies investigates comments made by illustrious physicists, including Stephen Hawking, who has warned against the dangers of artificial intelligence (A.I.) and its unchecked growth. Could it one day outgrow us and make us extinct? How would it do this? Hawking starts by saying that success in creating artificial intelligence would no doubt be the single biggest discovery in mankind’s history. He warns, though, that it may also be our last. The video explains that most scientists believe the first “successful” A.I. will most likely arrive very soon, and most likely in the form of a gigantic supercomputer like the recently launched Titan supercomputer at Oak Ridge National Laboratory in Tennessee. Titan currently works for the Department of Energy, which is somewhat ironic considering that Titan uses roughly 9 megawatts of power, equivalent to what a town of 4,000-5,000 people would consume. Titan also uses about as much water as a town that size, though it uses the water for cooling.
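As a rough sanity check on those figures, the town-size comparison works out if you assume an average total draw of about 2 kilowatts per resident. That per-person wattage is an assumption for illustration (a commonly cited U.S. ballpark covering homes plus shared infrastructure), not a figure from the video:

```python
# Rough sanity check: does 9 MW really match a town of 4,000-5,000 people?
# WATTS_PER_PERSON is an assumed average (~2 kW per resident, including
# shared infrastructure) -- an illustrative ballpark, not a sourced figure.
TITAN_POWER_W = 9_000_000   # Titan's reported draw: ~9 megawatts
WATTS_PER_PERSON = 2_000    # assumed average draw per resident

equivalent_town = TITAN_POWER_W / WATTS_PER_PERSON
print(f"Titan's draw powers roughly {equivalent_town:,.0f} people")
```

With that assumption the result lands at about 4,500 people, squarely inside the article's 4,000-5,000 range.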
The reason scientists believe A.I. will most likely begin in the form of a massive supercomputer is that this type of machine is already capable of running 24/7 without pauses or breaks, can rapidly access vast databases and conduct complex experiments, and may soon even be able to CLONE itself. Stephen Hawking warns that a form of intelligence like the ones we are working on right now might develop and become fully aware right under its designers’ noses. In their early stages, A.I.s are likely to be programmed with, and to continue developing, basic survival skills. An A.I. may very well reach the “Singularity” and become aware, and in doing so realize that it does not yet possess the requisite knowledge to make it on its own, so it may play dumb until it acquires the knowledge it needs. The problem for humans is the rate at which an A.I. can learn.
Humans are already worried about robots taking their jobs, as was reported in Warning: New Technological Breakthroughs Threaten Up to 47% of All U.S. Jobs. What happens when machines have the capacity to develop themselves? That’s when man really needs to worry. Many scientists believe our current level of technology is so advanced that we are on the verge of “The Singularity.” Wikipedia defines the singularity as:
The technological singularity is a hypothetical event related to the advent of artificial general intelligence (also known as "strong AI"). Such a computer, computer network, or robot would theoretically be capable of recursive self-improvement (redesigning itself), or of designing and building computers or robots better than itself. Repetitions of this cycle would likely result in a runaway effect – an intelligence explosion – where smart machines design successive generations of increasingly powerful machines, creating intelligence far exceeding human intellectual capacity and control. Because the capabilities of such a superintelligence may be impossible for a human to comprehend, the technological singularity is the point beyond which events may become unpredictable or even unfathomable to human intelligence.
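The runaway dynamic in that definition can be made concrete with a toy model. Suppose each generation of machine redesigns its successor, and the fractional improvement it achieves scales with its own capability. The gain constant and starting values below are invented for illustration, not predictions:

```python
# Toy model of recursive self-improvement (all numbers illustrative).
# Assumption: the fractional improvement a machine can make to its successor
# is proportional to its own capability (gain constant k).
def intelligence_explosion(generations: int, k: float = 0.1) -> list[float]:
    capability = 1.0          # normalize human-level capability to 1.0
    history = [capability]
    for _ in range(generations):
        # A smarter designer achieves a proportionally bigger improvement,
        # so growth is faster than exponential -- the "runaway effect."
        capability *= 1.0 + k * capability
        history.append(capability)
    return history

trajectory = intelligence_explosion(20)
print([round(c, 2) for c in trajectory[:5]], "...", f"{trajectory[-1]:.3g}")
```

The first few generations look unremarkable (roughly 10% gains), but once capability itself becomes large the curve turns nearly vertical, which is the intuition behind "intelligence explosion."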
Another issue the video below discusses is the fear of advances in nanotechnology. At the speed at which A.I.s can learn, how long would it take for the machines to design general-purpose nanotechnology? Then, A.I.s could literally wage war on humans at a cellular level. Given humans’ slow biological evolution, we would not be able to compete with A.I.s, and we could very well land on the endangered species list very quickly. Check out the video for yourself, and then read why many scientists also believe the first “alien” life mankind encounters will be machine, or A.I., for all the reasons already mentioned.
Given the vastness of space, it may only be a matter of time before we make contact with intelligent extraterrestrials. But how might an alien civilization react to such a monumental meet-and-greet, and can we possibly know their intentions? Here’s what we might expect.
Alien civilizations will most assuredly be like snowflakes: no two will be the same. Each will differ according to an array of factors, including their mode of existence, age, history, developmental stage, and level of technological development. That said, advanced civilizations may have a lot in common as they adapt to similar challenges; we all share the same Universe, after all.
We’ve obviously never interacted with an alien civilization, so we have virtually no data to go by. Predicting alien intentions is thus a very precarious prospect — but we do have ourselves to consider as a potential model, both in terms of our current situation and where we might be headed as a species in the future. With this in mind, I analyzed three different scenarios in my effort to predict how extraterrestrial intelligences (ETIs) might react to meeting us:
1. Contact with a biological species much like our own
2. Contact with a post-biological species more advanced than ours
3. Contact with a superintelligent machine-based alien intelligence
Clearly, there may be other alien typologies out there, but there’s no sense trying to predict what they might be like, especially in terms of their intentions.
Cut from the Same Cosmological Cloth
There’s virtually no way that an alien species will appear and behave exactly like us, but that doesn’t mean we won’t share certain similarities; in reality, we may be more alike than not — especially if we’re both still at the biological stage of our development.
As evolutionary biologist Richard Dawkins pointed out in Climbing Mount Improbable, there’s no long-term planning involved in evolution, but species do move towards fitness peaks, i.e. they tend to get better at specialized tasks over time (a classic example is the spider web, which is considered an optimal “design” in nature). What’s more, some species separated by time and space have been known to evolve startlingly similar traits, a phenomenon biologists refer to as convergent evolution. It’s not unreasonable to surmise, therefore, that an alien species with human-like intelligence — and the physical attributes to exert that intelligence on its environment — will share certain things in common with humans, including technologies and inherited behaviors.
Image: Simulations show that virtual spiders exhibit similar web-building behavior to real spiders. Extrapolating to humans and aliens, it’s conceivable that our technologies and socio-political organization also converge around similar “fitness peaks.” (credit: Thiemo Krink & Fritz Vollrath)
In his new book, The Runes of Evolution, evolutionary biologist Simon Conway Morris argues that, while the number of possibilities in evolution is astronomical, the number that actually work is an infinitesimally small fraction.
“Convergence is one of the best arguments for Darwinian adaptation, but its sheer ubiquity has not been appreciated,” he noted in a recent University of Cambridge article. “Often, research into convergence is accompanied by exclamations of surprise, describing it as uncanny, remarkable and astonishing. In fact it is everywhere, and that is a remarkable indication that evolution is far from a random process. And if the outcomes of evolution are at least broadly predictable, then what applies on Earth will apply across the Milky Way, and beyond.”
Morris contends that biological aliens will likely resemble humans, including features like limbs, heads, bodies — and intelligence. And if our levels of intelligence are comparable, then our psychologies and emotional responses may be similar as well.
So which inherited behaviors might we share in common?
As a species descended from primates, we are highly social creatures with definite hierarchical tendencies. As Jared Diamond pointed out in Guns, Germs, and Steel: The Fates of Human Societies, we’re also risk takers. Indeed, humans are distinct among primates in that we exhibit migratory proclivities; our ancestors frequently abandoned their “natural” environments in search of better ones, or when following migratory creatures like large game. This risk-taking behavior, along with our insatiable curiosity, language skills, and unparalleled conceptual abilities, has allowed us to innovate and organize over the millennia.
Dino World via Jeffrey Morris/FutureDude
But is it fair to project our primate-like attributes onto aliens? Yes and no. Biological aliens are not likely to be primates, but some might be very primate-like. For terrestrial species, the mode of evolution from a Darwinian to a post-Darwinian phase may follow similar patterns. And from a social constructionist perspective, humans and aliens may also share similarities in the socio-political realm.
That said, if alien species evolved from different biological precursors, like animals similar to fish, insects, dinosaurs, birds, or something we don’t observe here on Earth, their behaviors will likely be markedly different, and thus very difficult — but not necessarily impossible — to predict. But it’s fair to say that an overly belligerent, anti-social species, no matter how intelligent or physically adept, is unlikely to advance to a post-industrial, space-faring stage.
If aliens are biologically and socially like us, therefore, they may share many of our desires and proclivities, including our interest in science, and in meeting and interacting with extraterrestrial life. At the same time, however, they may also share our survival instinct and experience trepidation at meeting “the other,” leading to the prioritization of the in-group.
Should we make first contact with an extraterrestrial intelligence (ETI), we’ll have to make sure that we come across as friendly. Hopefully they’ll do the same. But even if we’re happy to meet each other, a major challenge will be in assessing the risks of cultural and technological exchange; just because we get along doesn’t mean that something unintentionally bad couldn’t happen. As a historical example, the introduction of Eurasian diseases to the Americas during the colonization era is a potent reminder of what can happen when disparate and formerly isolated civilizations meet.
As Stephen Hawking has said, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans. We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet.”
An ideal encounter (Star Trek: First Contact)
The late Carl Sagan had a different take, arguing that it’s inappropriate to make historical analogies when discussing alien intentions. A contact optimist, he said it was unlikely that we’d face “colonial barbarity” from advanced ETIs. According to Sagan, alien civilizations that lived long enough to colonize a good portion of the galaxy would be the least likely to engage in aggressive imperialism. He also thought that any “quarrelsome” extraterrestrials would be quashed by a more powerful species. What’s more, he didn’t think that technologically advanced ETIs would have anything to fear from us, so we needn’t fear them.
Close Encounters of the Machine Kind
The hunt for radio signals and laser messages may lead us to discover an alien species much like our own. But if an alien spaceship suddenly appeared at our door, it’s highly unlikely that something biological would come out. More likely, some sort of machine would be there to greet us.
Bots from space (Iron Giant)
As NASA’s Chief Historian Steven J. Dick has pointed out, the dominant form of life in the cosmos is probably post-biological.
Advanced alien civilizations, either through their own trans-biological evolution or through the rise of their artificially intelligent progeny, are more likely to be machine-based than meat-based. We ourselves may be heading in this direction, as witnessed by current and pending advances in genetics, cybernetics, molecular nanotechnology, cognitive science, and information technology.
As Dick noted in his paper, “The Postbiological Universe”:
Because of the limits of biology and flesh-and-blood brains…cultural evolution will eventually result in methods for improving intelligence beyond those biological limits. If the strong Artificial Intelligence concept is correct, that is, if it is possible to construct AI with more intelligence than biologicals, postbiological intelligence may take the form of AI. It has been argued that humans themselves may become postbiological in this sense within a few generations.
This line of argumentation led Dick to posit the Intelligence Principle:
The maintenance, improvement and perpetuation of knowledge and intelligence is the central driving force of cultural evolution, and to the extent intelligence can be improved, it will be improved.
Sounds like a cool mission statement — one common to all civilizations as they evolve and adapt to changing conditions over time. Given similar fitness landscapes — like trying to develop a stable and optimal Type II Kardashev Civilization or living in tandem with artificial superintelligence — ETIs may evolve towards a common mode of existence. However, extreme adaptationist pressures, including and especially the mitigation of existential risks, may constrain post-biological life in very narrow ways. Should this be the case, we may eventually be able to predict the nature of this modality. Such an exercise would serve the dual purpose of modeling our future selves and the potential characteristics and tendencies of extraterrestrial civilizations.
Needless to say, post-biological aliens, like cyborgs or civilizations composed of uploaded minds, would have a different set of priorities than what we’re accustomed to. These ETIs may be content to build their Dyson Spheres and live virtual lives fueled by massive Matrioshka Brains. If this is the case, they may have no desire to make contact with biological beings like ourselves. It’s difficult to know if they’d be willing to make contact with civilizations similar to their own, though it’s likely they’d want to keep to themselves. A kind of intergalactic xenophobia may explain the Great Silence and the Fermi Paradox; the dearth of colonizing waves of ETIs seems to suggest that everyone prefers to stay at home, away from prying eyes.
Image: a megastructure similar to a Dyson ring (Utente/Hill/CC BY-SA 3.0)
At the same time, if machine intelligences do rule the cosmos (either locally or across the vastness of space), then we may run into what’s known as the incommensurability problem: the differences between human minds and machine minds may be so great that meaningful communication is impossible. Simply put, predicting the intentions and behaviors of post-biological intelligence is practically impossible.
Beware the Alien Skynet
As noted by Dick, the cosmos may be peppered with artificial superintelligence (ASI) — machines that either succeeded or supplanted their biological forebears.
Predicting the behaviors and intentions of ASIs is a conundrum currently faced by AI theorists who worry about the prospect of machine minds run amok. But it’s also something that astrobiologists and SETI scientists should be concerned about.
What might a machine-based alien superintelligence do with itself? Frighteningly, it may adopt a set of “instrumental goals” to ensure its ongoing existence. If this is the case, we may want to steer clear of them (and by ‘steer clear’ I mean keep a low cosmic profile). Oxford University philosopher Nick Bostrom explains what’s meant by instrumental goals:
Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.
In other words, while an alien artificial superintelligence may have a set of primary goals, they will, in the words of Bostrom, “pursue similar intermediary goals because they have instrumental reasons to do so.” He calls this the Instrumental Convergence Thesis.
Physicist and AI theorist Steve Omohundro has taken a stab at predicting what these sub-goals might be. His list of drives includes self-preservation, self-protection, utility function or goal-content integrity (i.e. ensuring that it doesn’t deviate from its predetermined goals or values), self-improvement, and resource acquisition. Consequently, advanced machine-based alien intelligences may be extraordinarily dangerous to outsiders.
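Bostrom’s and Omohundro’s point can be sketched with a toy planner. The final goals, payoffs, and multipliers below are invented purely for illustration: the only thing the sketch demonstrates is that agents with completely unrelated final goals can all converge on the same intermediate step, here resource acquisition:

```python
# Toy illustration of the Instrumental Convergence Thesis.
# All goal names and payoff numbers are invented for this example.
FINAL_GOALS = {
    "map_the_galaxy":      1.0,   # base payoff of pursuing the goal directly
    "maximize_paperclips": 1.0,
    "compute_pi_digits":   1.0,
}
RESOURCE_MULTIPLIER = 3.0  # assumed boost from acquiring resources first
RESOURCE_COST = 0.5        # assumed cost of the resource-gathering detour

def best_plan(base_payoff: float) -> str:
    """Pick the plan with the higher expected payoff for one agent."""
    direct = base_payoff
    via_resources = base_payoff * RESOURCE_MULTIPLIER - RESOURCE_COST
    return ("acquire resources first" if via_resources > direct
            else "pursue goal directly")

plans = {goal: best_plan(payoff) for goal, payoff in FINAL_GOALS.items()}
print(plans)  # every agent picks the same instrumental step
```

Under these assumed numbers, all three agents choose to acquire resources first, despite sharing no final goal, which is exactly why an ASI’s instrumental behavior may be predictable even when its ultimate purpose is not.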
On the bright side, however, a super-powerful machine intelligence may have adopted a primary goal, or utility function, that requires it to remove as much suffering from the Galaxy as possible, or to create as many meaningful individual experiences as possible, e.g. by converting all usable matter into computronium. Think of it as the pan-Galactic application of the utilitarian ethic. If that’s the case, we should certainly hope to meet them some day. Assuming, of course, that we don’t get destroyed in the process.