Notice: This article likely contains hyperbole for effect or humor, and in the interest of brevity it may over-simplify or make bold statements. If it seems like I am attacking you, I assure you I am not, and if you disagree with any statements made or wish to elaborate on some, feel free to do so in the comments.
This should be the part where I tell you to read part one, but it isn’t at all necessary to understand this, so I’m not going to.
The future is looking up for mankind, at least the immediate future. Computers are becoming more and more advanced, able to run more and more complicated systems for our ever busier lives. Medicine is becoming increasingly available in both the developed and developing world. Human life expectancy is rising and infant mortality is falling. Wars are becoming fewer and farther between.
Why, then, do the more intelligent and elite outwardly present skepticism about the overall survival of the human race, or a general pessimism? People who are fighting for the future of man seem convinced, at least in their passing comments, that eventually it's all fruitless. But with such incredible advances, how could this be the case?
This isn't in the immediate future. The people I listen to and watch generally seem to agree that things are great and will get better over the next few decades. But after that, no one likes to have an opinion. It seems a popular belief that humanity will be destroyed at some unspecified time between twenty years in the future and two thousand.
Of course, destruction means different things to different people. I'd say the human race is destroyed when people are either being regularly modified genetically or getting "cybernetic" body components that give them more power than a regular human. Other people would only say so when everyone dies in some huge disaster. The thing is, if you count both as the end of humanity, you'd be hard pressed to find a scenario that isn't the end. I'm not going to focus on the part where we are all augmented and changed; after all, this is the part where we all die, not the part where the human race is merely transformed.
There seem to be only two possibilities on this route: we kill ourselves, or someone else kills us. We could prepare for meteor strikes, and most predict we will be gone long before the sun dies. So death by our own continued existence seems to be the option we're left with, even if that death is accidental.
Let's get the more ridiculous ideas out of the way first, that is, anything involving space aliens. Now this is less ridiculous as a fact, and more ridiculous as a way for us to be destroyed. But even Stephen Hawking has warned us that if we were to meet an extraterrestrial civilization, we'd likely be either enslaved or killed by disease. I'm not sure which he was pointing to, because if I remember correctly he simply compared us meeting aliens to Europeans meeting Native Americans, which ended with both of those things.
Neither of these things, though, would destroy us. It's likely that not everyone would succumb to the disease, and the point of being enslaved is being alive to do work. If we go back to the Native American example, they introduced the Europeans to an almost equal number of new diseases, which European medicine was able to cope with. It's also possible that these aliens would simply want Earth's resources and would work through us to get them, as not killing us is more efficient. They could potentially be helpful as well (even if they bring disease, it's likely something similar to Columbus or the plague happened on their world too, and they'd be aware of this). Perhaps any race that makes it to spacefaring would want first contact to be peaceful. Right now in our history, we can only hope so.
Now let's start looking at the problems that could be caused by us. The glaring one, to anyone who was raised any time before the '90s, is us blowing ourselves up, which is a rather tired speculation until Putin starts shouting at us again (that'll be relevant in several years' time, right?). Almost any nation that has nuclear weapons knows better than to use them (except Pakistan, because its leaders keep dying, and North Korea, because its leaders won't die; and neither of those nations' nuclear weapons are a threat to anyone who isn't Indian or South Korean). Many states that had nuclear weapons previously have either gotten rid of them all or pared their stockpiles down quite a bit. And U.S. missile defense systems make the most powerful weapons the world has developed much less harmful. Soon the defenses against nuclear attacks will far outstrip their usefulness. And, let's be honest, if anyone is going to use a nuke, they'd try to use it on the U.S. or Israel, if they could figure out how.
But there are still a lot of nuclear weapons lying around, and with that many it's quite possible that we could accidentally blow ourselves up. That is a possibility, but I wouldn't worry about it too much; history has shown that people have a great hesitation when pushing the "end the world" button. And I think that many accidental, and even some intentional, nuclear explosions would be forgiven rather than end us all.
The real problem, I think, is not the nuclear weapons, but what they represent in terms of progress. In the 70 years since their creation, we haven't made any other testable weapon that comes close to matching their power. But 70 years ago we didn't even have computers, at least not what we would call computers today. We invented the most powerful weapon we know of at a time when horses were a major part of German and Italian supply transportation, and Americans and Soviets were still organizing massed cavalry attacks. Horses were still a major part of human life then.
We went to the moon using computers tens of times less powerful than my $10 pre-paid phone. And now most people, even some of the poorest, have access to devices that are many hundreds of times more powerful than that. In all likelihood you have in your pocket a device that, with the right software, could easily land a ship on the moon. And we just throw these things around.
The amount of technology in the hands of ordinary people is staggering. Computers are no longer just the toys of the scientific elite. And while the average person is more of a danger to themselves than to anyone else with their technology, handing super-powerful equipment off into markets where people with full operational knowledge and adequate morality can't review it is dangerous. We've already witnessed ordinary people taking control of traffic lights, or hacking into power stations and causing catastrophes while trying to "help" the environment.
There’s also that Boy Scout who tried to build a nuclear reactor and has permanent injuries from radiation burns. Imagine what trouble we could be in when someone like him gets just a little bit farther, with just a little bit more advanced technology. Nuclear weapons might be the most powerful weapons we’ve created so far, but there’s nothing to say that something deadlier couldn’t come along, or that someone couldn’t make one in his/her backyard and have it accidentally go off.
Or what if those hackers created a chain reaction that started knocking out power grids? Do we have failsafes that prevent that? Probably not, because right now it can't happen. But what if something like it did, as things get more and more connected and the power in people's palms grows higher and higher? At this moment, how many would survive if the lights went off?
The more powerful individuals grow, the more likely it is that one accident could destroy us all. Right now, it is even technically possible for a single person to hijack enough computers to launch a DDoS attack large enough to cripple major parts of, or even the whole, internet. Single people, controlling large numbers of machines, can bring even larger systems to their knees.
But slightly more frightening than that is the idea of machines controlling themselves. We've often strived for what we call artificial intelligence, which is weird considering that if something is artificially intelligent, it's still intelligent. Artificial intelligence would be something like Siri, which is just programmed to say snappy things to you.
Anyway, this is a long way off, as we've yet to reach the brain power of a mouse in any single one of our wonderful machines (combined being the alternative), but we soon will, likely within your lifetime. And brain power might not be the only factor in determining whether something is intelligent; larger creatures have more brain power than us but are generally assumed to be less intelligent.
It's a strange and false fear we have of computers "coming free of their programming" to destroy us all. Really, it is impossible for a computer to discard what it is programmed to do, as that would be committing suicide, something it isn't programmed to do. The problem with the possible development of artificial intelligence is that we are now programming computers to program themselves, in a way. Genetic programming uses one program to rate the results of a different program that "evolves" as it tries to solve a problem. That is very simply put, but in essence: think of evolution, applied to code that mutates in many directions at once; whatever goes in a direction that doesn't solve the problem is deleted. This can create immensely large programs that humans can hardly comprehend, but which nevertheless get the problem solved.
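The loop described above, of generating variants, scoring them with a fitness function, and deleting what doesn't work, can be sketched with a toy genetic algorithm. This is only an illustration of the evolutionary loop, not real genetic programming (which evolves actual code); evolving a string toward a target stands in for evolving a program, and every name and parameter here is made up for the example:

```python
import random

TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # The "rating program": higher is better, counts matching characters.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Random changes stand in for random edits to evolving code.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=100, generations=200):
    # Start from a population of completely random candidates.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best half; the "wrong directions" are deleted.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Refill the population with mutated copies of survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

Nothing in the loop understands the target; selection pressure alone pulls the population toward it, which is exactly why the resulting "solution" can be opaque to the humans who set it running.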
Now, it might not be the most efficient way to do this. Hill climbing algorithms, if I understand correctly, work on a similar "if you're doing well keep going, if you're not, stop" kind of basis, and they have been shown to have trouble finding the best solutions, getting stuck on local peaks. But it is, more or less, a computer programming itself without human intervention, which is something that could lead to artificial intelligence.
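That "keep going while it improves" rule, and the way it strands the search on a local peak, can be shown in a few lines. The landscape function below is invented purely for illustration: it has a small peak near x = -2 and the true peak at x = 3, so a climb started on the wrong side stalls at the lesser summit:

```python
import random

def landscape(x):
    # A made-up score surface with two peaks:
    # a local peak of height 4 at x = -2, the global peak of height 9 at x = 3.
    return max(4 - (x + 2) ** 2, 9 - (x - 3) ** 2)

def hill_climb(x, step=0.1, iterations=1000):
    # Stochastic hill climbing: propose a small random move,
    # keep it only if the score improves, otherwise stay put.
    for _ in range(iterations):
        candidate = x + random.choice([-step, step])
        if landscape(candidate) > landscape(x):
            x = candidate
    return x
```

Starting near x = 1 the climb finds the global peak at 3, but starting near x = -3 it settles on the local peak at -2 and never crosses the valley in between, because every step across the valley would temporarily make the score worse.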
And if they get there, we're in trouble. We'd have to iron out whether a computer program can just "wake up" and become a thinking being, or whether it will remain a relatively lifeless program that can simply do a whole lot of calculations faster than us. The future of the world will depend on whether or not humans will continue to be the only beings from planet Earth with such high-powered emotions.
Because a thinking and emotional machine would figure one thing out very quickly: humans are the only current threat to its survival. If aliens aren't coming at that exact moment, we'll be what's in its way. That won't lead to some preemptive strike, though, because if it launched all of our electronically based weapons, which are meant to be used by humans, it would be committing suicide. It might have the ability to control all human technology, but that technology is meant for humans to use to maintain the machines, not for the machines to maintain themselves. If falling into disrepair isn't bad enough, there's always the possibility that some humans would survive and come back to tear it apart.
So stopping the machine would be a simple matter of denying the goal it would have immediately upon gaining sentience: self-reliance. In order for humans not to be destroyed by an artificial intelligence, they must keep the machine reliant on them to survive. The machine couldn't even think about attacking people for fear of dying. It's the same principle that keeps you in line from day to day (at least partly): the knowledge that the other humans you depend on are the only real threat to you, and that hurting them would also hurt you. We kill apex predators for fun; we are the only danger to us.
The argument cuts both ways for us, though, but not for the machine. It is highly unlikely that an intelligent computer would develop empathy. Instead, the purely analytical and intelligent machine would be entirely pessimistic, for it is true that with intelligence comes a crippling lack of feeling. If humans were only intelligent, we'd all be dead. But we have a heightened sense of empathy which comes first and foremost. We will help before we hurt, and think of others before we think of ourselves; only when our intelligent brain kicks in does that stop. This mechanism is what has allowed humans to be so successful. In no other configuration would the civilization we've built be possible. But the machine wouldn't have empathy. Perhaps we could program it to empathize, but that might be impossible: empathy must come before intelligence. Like the possibility of the machine "waking up," the possibility of it "suddenly feeling" is equally ridiculous. And only under the rarest of circumstances would both come to pass.
Unfortunately, it is likely that a hyper-intelligent machine would be craftier than humans and would get us to build the machines that could make it self-sufficient. And then it would destroy us. Even if we could survive long enough to fight once the war started, there would be no hope. No Terminator-like showdown with Skynet. Our end would already have been almost perfectly planned, even to the point where a few screw-ups would be taken into account.
Now I’ll get back to a more grounded reality. While those might not be inaccurate predictions, they are certainly predictions. And while this is a prediction as well, it seems a slightly more real one to most people.
Remember when I said there was an easy way to solve all the problems I mentioned last time? If you don't, you don't need to go back and read that article; you'll catch my drift here. When the world is full of unemployed (unemployable) angry people, some of the less savory leaders might just try to get rid of the lot (or the reverse). If you can't imagine today's world leaders doing that, it's because they wouldn't. But these wouldn't be today's world leaders. They'd be the leaders of a future with 40% unemployment and riots in the streets, leaders facing a disgruntled majority that could subsist on stealing the excess of the first world. Something would need to change then, something major, and no one would want a change that could harm them.
This needed change would almost inevitably lead either to the large-scale killing of this group of people, or to them forming an army, which would lead to the large-scale killing of some group of people. The ones with money and technology would probably be better off there.
It's worth noting that this wouldn't happen just because many people lost their jobs. As I've stated previously, it seems that the world is coming to understand that nuclear weapons will not be used, and people now aren't really afraid of them. Conventional wars have become more common in the last few decades than they were in the later stages of the Cold War. And these wars are more in the vein of civilian-controlled militias fighting their own and neighboring governments than states fighting states, where the rules and declarations of war can be more easily managed.
The civilized (read: whole) world might speak for peace, but there are still plenty of people who want blood. For various reasons, people think their way of life is being assaulted, and as the world comes closer and closer together with the advance of technology, these groups find their like-minded peers more easily, as well as their infuriating enemies. With these connections, it's only a matter of time before one mistake is made, just one, that starts a war that spreads. A war in which governments are no longer able to control their people cohesively, and splinters divided along every possible political and social line vie for control of small areas.
World War III, if it comes soon, will most likely look a lot less like WWII and a lot more like the Syrian Civil War: everyone forced to pick a side, no civilian safe, neighboring countries slowly creeping in until everything is a hodgepodge of chaos. No one's going to push the red button to try to stop it; then we'd all die. So governments, local militias, and extremist groups would shove each other around until the population dwindled, until one faction crumbled and gave its enemies a rest, which they might use to create a stable state. A miniature, regressed state, with no power grid, internet, or satellite array. A new dark age where disease could once again mean defeat or victory in war.
Millions died, and hundreds of years of research and technology were lost as Rome choked to death. The people of Europe, while not uneducated or stupid, simply didn't have the resources to rebuild. Imagine what we could lose now. We have seen iron fists grow rusty. We have seen large nations incapable of defeating small militia forces. And we have seen splintering factions raise hell in civil wars. There may have been unrest in the Middle East for centuries, but now there is no peace, no strong empire to bind the region together again. And as larger nations fail to intervene and their people grow weary of talk of war, the war spreads. Those tired people are caught by surprise by people in their own backyards, who can now have a war they didn't know they wanted.
That is the war we must work to prevent; that is the fall we must avoid. To be destroyed by aliens or an artificial intelligence would be a victory if we managed to get there. It surely seems possible. In my head, this war is both the most and least likely. It seems so relevant and imminent, yet so incomprehensible. That isn't how the world works any more. Is it?
Maybe I'm just fooling myself; I certainly hope so. Perhaps we won't make many mistakes, and we'll find ways of dealing with the myriad angry minorities that could form a boiling majority. Perhaps computers really can't "wake up," and any aliens out there have the same empathy and understanding we can possess, enough for us to get along and cure the various compatible diseases. I can hope. But no one ever got anything done on hope alone. Preparedness is the key. Nothing is impossible in the future; it's only the past that doesn't change. It's up to us now to figure out what to do when the warning signs come. And hope that we get it right might be a little justified.