The Future: Part 2 – The Part Where We All Most Likely Die

Notice: This article likely contains hyperbole for either effect or humor. And in order to make things brief, it may over-simplify or make bold statements. If it seems like I am attacking you, I assure you I am not, and if you disagree with any statements made or wish to elaborate some, feel free to do so in comments.

This should be the part where I tell you to read part one, but it isn’t at all necessary to understand this, so I’m not going to.

The future is looking up for mankind, at least the immediate future. Computers are becoming more and more advanced, able to run more and more complicated systems for our ever busier lives. Medicine is becoming increasingly available in both the developed and developing world. Life expectancy is rising while infant mortality falls. Wars are becoming fewer and farther between.

Why, then, do the more intelligent and elite express such skepticism about the overall survival of the human race, or such a general outward pessimism? People who are fighting for the future of man seem convinced, at least in their passing comments, that eventually it’s all fruitless. But with such incredible advances, how could this be the case?

This isn’t in the immediate future. The people I listen to and watch generally seem to agree that things are great and will get better over the next few decades. But after that, no one likes to have an opinion. It seems a popular belief that humanity will be destroyed some unspecified time between twenty years and two thousand years in the future.

Of course, destruction means different things to different people. I’d say that the human race would be destroyed when people are either being regularly modified genetically or getting “cybernetic” body components that give them more power than a regular human. Other people would just say it’s when all people die because of some huge disaster. The thing is, if you think both are the end of humanity, you’d be hard pressed to find a scenario that isn’t the end. I’m not going to focus on the part where we are all augmented and changed; after all, this is the part where we all die, not the part where the human race is destroyed.

There seem to be only two possibilities on this route: we kill ourselves, or someone else kills us. We could prepare for meteors striking us, and most predict we will be gone long before the sun dies. So death by continued existence seems to be the option we’re left with, even if this death is accidental.

Let’s get the more ridiculous ideas out of the way first; that is, anything involving space aliens. Now, this is less ridiculous as a fact and more ridiculous as a way for us to be destroyed. But even Stephen Hawking has warned us that if we were to meet an extraterrestrial civilization, we’d likely be either enslaved or killed by disease. I’m not sure which he was pointing to, because if I remember correctly he simply compared us meeting aliens to Europeans meeting Native Americans, which ended up with both of those things.

Neither of these things, though, would destroy us. It’s likely that not everyone would succumb to the disease, and the point of being enslaved is being alive to do work. If we go back to the Native American example, they introduced the Europeans to an almost equal number of new diseases, which European medicine was able to cope with. It’s also possible that these aliens would simply want to take Earth’s resources and would work through us to do that, as not killing us is more efficient. They could potentially be helpful as well (even if they bring disease, it’s likely something similar to Columbus or the plague happened on their world too, and they’d be aware of this). Perhaps any race that might make it to spacefaring would want first contact to be peaceful. Right now in our history, we can only hope so.

Now let’s start looking at the problems that could be caused by us. The glaring one, to anyone who was raised any time before the ’90s, is us blowing ourselves up, which is a rather tired speculation until Putin starts shouting at us again (that’ll be relevant in several years’ time, right?). Almost any nation that has nuclear weapons knows better than to use them (except Pakistan, because its leaders keep dying, and North Korea, because its leaders won’t die; and neither of those nations’ nuclear weapons are a threat to anyone who isn’t Indian or South Korean). Many states that had nuclear weapons previously have either gotten rid of them all or pared them down quite a bit. And U.S. missile defense systems make the most powerful weapons the world has developed much less harmful. Soon the defenses against nuclear attacks will far outstrip their usefulness. And, let’s be honest, if anyone is using a nuke, they’d try to use it on the U.S. or Israel, if they could figure out how.

But there are still a lot of nuclear weapons lying around, and with that many it’s quite possible that we could accidentally blow ourselves up. That is a possibility, but I wouldn’t worry about it too much; history has shown that people have a great hesitation about pushing the “end the world” button. And I think that many accidental, and even some intentional, nuclear explosions would be forgiven before we let them end us all.

The real problem, I think, is not the nuclear weapons, but what they represent in terms of progress. In the 70 years since their creation, we haven’t made any other testable weapons that come close to matching their power. But 70 years ago we didn’t even have computers; well, at least not what we would call computers today. We invented the most powerful weapon we know of at a time when horses were a major part of German and Italian supply transportation, and Americans and Soviets were still organizing massed cavalry attacks. Horses were still a major part of human life then.

We went to the moon using computers tens of times less powerful than my $10 pre-paid phone. And now most people, even some of the poorest, have access to devices that are many hundreds of times more powerful than that. In all likelihood you have in your pocket a device that, with the right software, could easily land a ship on the moon. And we just throw these things around.

The amount of technology in the hands of ordinary people is staggering. Computers are no longer just the toys of the scientific elite. And while the average person is more a danger to themselves than to anyone else with their technology, handing super-powerful equipment off into markets where people with full operational knowledge and adequate morality are not able to review it can be dangerous. We’ve already witnessed ordinary people taking control of traffic lights, or hacking into power stations and causing catastrophes while trying to “help” the environment.

There’s also that Boy Scout who tried to build a nuclear reactor and has permanent injuries from radiation burns. Imagine what trouble we could be in when someone like him gets just a little bit farther, with just a little bit more advanced technology. Nuclear weapons might be the most powerful weapons we’ve created so far, but there’s nothing to say that something deadlier couldn’t come along, or that someone couldn’t make one in his/her backyard and have it accidentally go off.

Or what if those hackers created a chain reaction that started knocking out power grids? Do we have failsafes that prevent that? Probably not, because right now it can’t happen. But what if something like it did, as things get more and more connected, and the power in people’s palms grows higher and higher? At this moment, how many would survive if the lights went off?

The more powerful individuals grow, the more likely it is that one accident could destroy us all. Right now, it is even technically possible for a single person to hijack enough computers and launch a DDoS attack large enough to cripple major parts of, or even the whole, internet. Single people, controlling large numbers of machines, can bring even larger systems to their knees.

But slightly more frightening than that is the idea of machines controlling themselves. We’ve often striven for what we call artificial intelligence, which is weird considering that if something is artificially intelligent, it’s still intelligent. “Artificial” intelligence would be something like Siri, which is just programmed to say snappy things to you.

Anyway, this is a long way off, as we’ve yet to reach the brain power of a mouse in a single one of our wonderful machines (as opposed to many combined), but we soon will, likely within your lifetime. And brain power might not be the only factor when determining whether something is intelligent; larger creatures have more brain power than us but are generally assumed to be less intelligent.

It’s a strange and false fear that we have of computers “coming free of their programming” to destroy us all. Really, it is impossible for a computer to discard what it is programmed to do, as that would be committing suicide, something it isn’t programmed to do. The problem with the possible development of artificial intelligence is that we are now programming computers to program themselves, in a way. Genetic programming uses one program to rate the results of a different program that “evolves” as it tries to solve a problem. That is very simply put, but in essence, think of evolution applied to code: candidate programs mutate in multiple directions, and whatever moves in a direction that doesn’t solve the problem is deleted. This can create immensely large programs that humans can hardly comprehend but that, nevertheless, get the problem solved.
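The evolve-rate-delete loop described above can be sketched as a toy genetic algorithm. This is my own illustration, not from any real genetic-programming system: the “programs” here are just bit strings, the problem is matching a target string, and every name in it is made up.

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # the "problem" to solve

def fitness(candidate):
    """The rating program: score a candidate by how many positions match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Flip each bit with small probability: the code "moving in multiple directions"."""
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(pop_size=20, generations=300, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half; whatever goes the direction that
        # doesn't solve the problem is deleted.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        if fitness(population[0]) == len(TARGET):
            break
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Real genetic programming evolves actual program trees rather than bit strings, but the shape of the loop (rate, delete the losers, vary the winners) is the same.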

Now, it might not be the most efficient way to do this. Hill-climbing algorithms, if I understand correctly, work on a similar “if you’re doing well, keep going; if you’re not, stop” kind of basis, and they are known to have trouble finding the best areas, settling on lesser peaks instead. But genetic programming is, more or less, a computer programming itself without human intervention, which is something that has the possibility of leading to artificial intelligence.
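A hill climber in that “keep going while it improves” sense fits in a few lines. The landscape below is a hypothetical toy of my own, chosen so that the climber can stall on a lower peak, which is the trouble with “finding the best areas” mentioned above.

```python
def score(x):
    """A landscape with a local peak at x=2 (height 3) and the true peak at x=8 (height 10)."""
    return {0: 0, 1: 2, 2: 3, 3: 1, 4: 0, 5: 2, 6: 5, 7: 8, 8: 10, 9: 7}[x]

def hill_climb(start):
    x = start
    while True:
        # Look only at the immediate neighbors that stay on the landscape.
        neighbors = [n for n in (x - 1, x + 1) if 0 <= n <= 9]
        best = max(neighbors, key=score)
        if score(best) <= score(x):
            return x          # no uphill step left: we're at *a* peak
        x = best              # if you're doing well, keep going

print(hill_climb(1))  # → 2  (stuck on the local peak)
print(hill_climb(5))  # → 8  (happens to reach the global peak)
```

Which peak you end up on depends entirely on where you start, which is why evolutionary approaches keep a whole population exploring in parallel instead of one climber.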

And if they get there, we’re in trouble. We’d have to iron out whether a computer program can just “wake up” and become a thinking being, or whether it will remain a relatively lifeless computer program that can simply do a whole lot of calculations faster than us. The future of the world will depend on whether or not humans will continue to be the only beings from planet Earth with such high-powered emotions.

Because a thinking and emotional machine would figure one thing out very quickly: humans are the only current threat to its survival. If aliens aren’t coming at that exact moment, we’ll be what’s in its way. That won’t lead to some preemptive strike, though, because if it launched all of our electronically based weapons, which are meant to be used by humans, it would be committing suicide. It might have the ability to control all human technology, but that technology is meant for humans to use to maintain the machines, not for the machines to maintain themselves. And if falling into disrepair isn’t bad enough, there’s always the possibility that some humans will survive and come back to tear you apart.

So stopping the machine would be a simple matter of blocking the goal it would have immediately upon gaining sentience: self-reliance. In order for humans not to be destroyed by an artificial intelligence, they must make the machine reliant on them to survive. The machine couldn’t even think about attacking people for fear of dying. It’s the same principle that keeps you in line from day to day (at least partly): the knowledge that the other humans you depend on are the only things that are a threat to you, and that hurting them would also hurt you. We kill apex predators for fun; we are the only danger to us.

The argument cuts both ways for us, though, but not for the machine. It is highly unlikely that an intelligent computer will develop empathy. Instead, the purely analytical and intelligent machine will be entirely pessimistic, for with intelligence alone there comes a crippling lack of feeling. If humans were only intelligent, we’d all be dead. But we have a heightened sense of empathy, which comes first and foremost. We will help before we hurt, and think of others before we think of ourselves; it is only when our intelligent brain kicks in that this stops. This mechanism is what has allowed humans to be so successful. In no other configuration would the civilization we’ve built be possible. But the machine wouldn’t have empathy. Perhaps we could program it to empathize, but that might be impossible: empathy must come before intelligence. Like the possibility of the machine “waking up,” the possibility of it “suddenly feeling” is equally ridiculous. And only under the rarest of circumstances would both come to pass.

Unfortunately, it is likely that a hyper-intelligent machine would be more crafty than humans, and get us to develop the machines that could make it self-sufficient. And then it would destroy us. Even if we could survive long enough to fight once the war started, there would be no hope. No Terminator-like showdown with Skynet. Our end would already have been almost perfectly planned, even to the point where a few screw-ups would be taken into account.

Now I’ll get back to a more grounded reality. While those might not be inaccurate predictions, they are certainly predictions. And while this is a prediction as well, it seems a slightly more real one to most people.

Remember when I said there was an easy way to solve all the problems I mentioned last time? If you don’t, you don’t need to go back and read that article, so you’ll catch my drift here. When the world is full of unemployed (unemployable) angry people, some of the less-savory leaders might just try to get rid of the lot (or the reverse). If you can’t imagine today’s world leaders doing that, it’s because they wouldn’t. But these wouldn’t be today’s world leaders. They’d be leaders in the future with 40% unemployment and riots in the streets. The leaders with a disgruntled majority that could subsist on stealing the excess of the first world. Something would need to change then, something major, and no one would want a change that could harm them.

This needed change would almost inevitably lead either to the large-scale killing of this group of people, or to them forming an army, which would lead to the large-scale killing of some group of people. The ones that have money and technology would probably come out better there.

It’s worth noting that this wouldn’t just happen if many people lost their jobs. As I’ve stated previously, it seems that the world is coming to understand that nuclear weapons will not be used. And people now aren’t really afraid of them. Conventional wars have become more popular in the last few decades than they were in the later stages of the cold war. And these wars are more in the vein of civilian-controlled militias fighting their own and neighboring governments than states fighting states where the rules and declarations of war can be more easily managed.

The civilized (read: whole) world might speak for peace, but there are still plenty of people who want blood. For various reasons, people think their way of life is being assaulted, and as the world comes closer and closer together with the advance of technology, these groups find their like-minded peers more easily, as well as their infuriating enemies. With these connections, it’s only a matter of time before one mistake is made, just one, that starts a war that spreads. A war where governments are no longer able to control their people cohesively, and splinters divided along every possible political and social line vie for control of different small areas.

World War III, if it comes soon, will most likely look a lot less like WWII and a lot more like the Syrian Civil War, where everyone is forced to pick a side, where no civilian is safe, where neighboring countries slowly creep in until everything is a hodgepodge of chaos. No one’s going to push the red button to try and stop it; then we’d all die. So governments, and local militias, and extremist groups would shove each other around until the population dwindled, until one faction crumbled and allowed rest to its enemies, which they might use to create a stable state. A miniature, regressed state, with no power grid, internet, or satellite array. A new dark age where disease could once again mean defeat or victory in war.

Millions died, and hundreds of years of research and technology were lost as Rome choked to death. The people of Europe, while not uneducated or stupid, simply didn’t have the resources to rebuild. Imagine what we could lose now. We have seen iron fists grow rusty. We have seen large nations incapable of defeating small militia forces. And we have seen splintering factions raise hell in civil wars. There may have been unrest in the Middle East for centuries, but now there is no peace, no strong empire to bind them together again. And as larger nations fail to intervene, and their people grow weary of talk of war, the war spreads. Those tired people are caught by surprise by people in their own backyards, who can have a war they didn’t know they wanted.

That is the war we must work to prevent; that is the fall we must avoid. To be destroyed by aliens or an artificial intelligence would be a victory if we managed to get there. It surely seems possible. In my head, this war is both the most and least likely. It seems so relevant and immediate, yet so incomprehensible. That isn’t how the world works any more. Is it?

Maybe I’m just fooling myself; I certainly hope so. Perhaps we won’t make many mistakes, and we’ll find ways of dealing with the myriad of angry minorities that could form a boiling majority. Perhaps computers really can’t “wake up,” and any aliens that may be out there have the same empathy and understanding we can possess, enough for us to get along and cure the various compatible diseases. I can hope. But no one ever got anything done on hope alone; preparedness is the key. Nothing is impossible in the future; it’s only the past that doesn’t change. It’s up to us now to figure out what to do when the warning signs come. And then a little hope might be justified.

The Future Part 1: The Part Where We all Lose our Jobs

Notice: This article likely contains hyperbole for either effect or humor. And in order to make things brief, it may over-simplify or make bold statements. If it seems like I am attacking you I assure you I am not, and if you disagree with any statements made or wish to elaborate some, feel free to do so in comments.

After reviewing this article before its publication, I have realized that I may sound a bit hard-lined or rash in it. I assure you I am not, at least to the severity it may appear. But I do believe that in order to get something said one must make up their mind to say something and then say something, and I think something should be said about this topic. This does not mean that I can’t change my mind later, though, and definitely does not mean I shouldn’t. Opinions exist to be discarded for better ones, and if you don’t agree with something being said, feel free to try and change my mind (politely if you can).

This is a topic on which I will write many more related articles, because the future is a scary place. And I find it interesting what conclusions I have come to “on my own” in regards to other people’s conclusions about the matter.

I’m not saying that anyone else would agree with me entirely on where we are going and how to fix future problems (or agree at all). But I have noticed some parallels in my thinking and that which I have been reading/watching. It is for this reason, though, that I won’t name any names.

I recognized some time ago that I was in the minority when it came to how I wanted copyright to be handled. In other words, I actually respected it, unlike a good 95% of the population. This is not to say I was perfect in doing so, but that is a tangent I won’t dive into. I recognized very quickly that people pirating things would slowly degrade the entertainment industry (enjoy your ads!). And it has, to some extent.

After some research and thinking, however, preserving the entertainment industry seemed more important to me than many other things. And this is because it will likely be the only industry in a few decades. And the decade after that it’ll be gone forever.

This is because we are approaching a pseudo-post-scarcity economy (a real one is technically impossible, but the real limitation isn’t that technical impossibility, so much as the many impossibilities before it). This is starting to take shape in post-scarcity markets, where workers (or, as I’m really discussing, machines) are in ready supply and at little cost. Stores are now filled with self-checkout lines and security cameras. Sure, we still need people to restock shelves and to catch anyone who steals things (or, as in most cases, not to catch anyone who steals things). But if you’ve been in a supermarket in the last few years, you’d know that that is not really happening. Things aren’t really being cleaned up or properly restocked in most locations, because as the machines creep up, the value of these people’s work goes to zero, both to them and to you.

A better example will be coming in the near future, when self-driving cars (that is a clunky name) start replacing taxi drivers, bus drivers, etc. Limo drivers will either be the first or last affected, I can’t tell. This seems wonderful, even though your citizen-taxi apps are now worthless you can be driven anywhere for minimal cost. You don’t even have to have a car. And the cars can be electric, and pollute the earth in less obvious ways. It’ll be great.

And a few years ago I’d’ve said that was great. I hated cars then, and I still do, mostly because in high school I realized that if some of my classmates were driving, the world was not a safe place. So I stayed off the road. I didn’t want one of those idiots to kill me.

At the time I would’ve said self-driving cars (or the more train-like system I envision) would’ve been the savior of civilization. But then I saw their beginnings in television commercials: new cars that parked for you, stopped when you were about to hit something, and alerted you to potential problems. They had GPS and knew where you were at all times so they could call for help, and you didn’t have to worry about a thing. And I was immediately repulsed. I hated it. I wanted nothing like those things on the road. And if they became the norm I might not even ride in a car again (which would make my already difficult life even more difficult).

Now, this is just my gut reaction, based on no fact, and I likely hate this with more energy than it deserves. But that doesn’t make it a good thing. First off, if a manufacturer is to sell such a car, it has to make sure the car follows the law. The problem would arise when the people who abide by the law and sit in their regular cars would be at a disadvantage to those who modified or used older cars to break the law. You end up with the gun problem, where the only people that have guns are the bad guys and the cops, and there aren’t nearly (nor will there ever be) enough cops to protect you.

It was a good idea by the state of California to require a steering wheel in the Google car, but that will only go so far before it’s eliminated. Then suppose a life-and-death situation arises where you indeed need to run into something, and your car suddenly stops short. You suddenly can’t veer off the road as someone comes screeching up behind you with an assault rifle, because your car won’t let you. And what will self-driving cars do with the more benign desire that people like me have to go out into the desert or the forest sometime, away from paved roads? That’s someplace the new self-driving cars would never go.

But the real problem here isn’t the fact that a minority of people like me might hate it, but that the majority of people will like it. And that cars that drive themselves are the future of all transportation. And they will eliminate the large portion of the workforce that I mentioned earlier. And those jobs are irrecoverable: these people will be unemployed forever. And I mean forever, because there is literally no new job market that needs those people. Every job market is already bloated. Everyone already wants your job, or your friend’s job, and now these people will, too. They will quickly be joined by everyone from the supermarket, because if a robot can drive a car, it can stock shelves. In a decade or two (starting right now: I mean right now) the majority of the people you know will be unemployed. And no new market is coming to save them.

But this is where pseudo-post-scarcity comes in. If we have robots making our food (farm equipment is already starting to run itself), delivering it to the store, stocking the shelves, and driving us there (why doesn’t it just drive the food into our mouths?), then why couldn’t we just all take what we need and share the cars, and live a happy little small life? To which I reply “The reason I didn’t share my toys in kindergarten is because other people are absolutely terrible at taking care of things,” or “People suck and are selfish and will always want more than they have” (but I only say the latter during parties I don’t want to be at).

1984 is an uninteresting book that I hate (it’s one of the few books I won’t keep a copy of, but that’s more because of the memories associated with reading it while my feet nearly froze off), but it contains a wonderful example of this. If you’ve read it, you remember when Winston is remembering his childhood: his mother is struggling to get food, they’re barely getting enough, and Winston is still eating more than his share even as his family starves. Things like that happen in real life. They are probably happening right now. And we might think we’d never do that, or that he was starving and not in his right mind. But think for just a bit more about that. Problems scale with living conditions. We still think that the problems we have are as bad as the problems we’d have if we were much poorer. Our brain can only compute two types of problems: very bad, and life-threatening. That’s why first-world problems haven’t disappeared, even after we started mocking them. Our brain is still interpreting them as terrible problems.

Winston was in his right mind, as the human brain is selfish in many ways. So what’s to stop someone who’s more hungry on one day than the next from taking more than his fair share? If your answer is the store regulations, then he can just take it from someone else. And if your answer to that is an ever-present robot police force, congratulations, you’ve just graduated to tyranny.

But that’s just taking it by brute force. Why would you want to do that if you could just convince people (and the people bit is important, not the machines) that they should give you more? That’s really what Winston did: he convinced his mother to let him have more than he needed. There are always going to be people who want more and can convince people that they need more for various reasons. I’ll admit that I’m one. I do like having things and knowing things just to say that I do. And the way you get things is half the fun (read: haggling). I’m not saying I’d ever want anyone to suffer so that I can have more, but being able to have more than some other people is a key factor in keeping me moving. I wouldn’t be the one taking from people (and if everyone has the same thing, there’s always room to take); I’d likely just die. But there are people out there who want more than their fair share a lot more than I do, and they will use every trick at their disposal to take it.

But that’s beside my main point (although likely a more convincing argument), which is that I absolutely hate sharing things, and sharing my transportation, and goods delivery services, would again just make me curl up and die.

When I moved to the city I budgeted for a bus pass that I never got and never will. Why? Because public transportation is horrible, just like public restrooms, and public parks, and public everything (even libraries, despite the fact that no one goes into them). They’re all filthy and torn up, covered in God knows what. Public anything is terrible. Beaches and parks are covered in litter and clogged with people. I would never want to go to any of them, or use any public services, because some people are disgusting and stupid (and it not being theirs makes it easier for them to tear up someone else’s good intentions). That self-driving car that now belongs to everyone is going to be covered not only in whatever crap you had in your car (admit it: it’s a lot) but in whatever tens or hundreds of other people had in theirs. It’ll be awful. People don’t take care of their own things, and they’ll never be able to take care of a public thing.

The over-arching point of these last few paragraphs is that a post-scarcity economy with no jobs isn’t really that great in the immediate future, which would come as a big shock to the middle-school me who thought that was what we were all working toward (but that was before I got to high school and realized I hated people). People, when left to their own devices, are terrible. Many people will tell you that humans are naturally violent or selfish, while other, more optimistic people will tell you we are naturally good and will always move toward peace and helpfulness (I’ve got a few history lessons here). Either of these statements is like saying we’re born dead. They’re true if you give them enough time and the right circumstances. Studies show that a human’s first instinct is to help, but if they are given time to think over their actions, they will come to a more selfish conclusion. If you had to divide a cookie between you and someone else and they were in the room, you’d be likely to give them half. But if you were each shown the cookie and taken to a different room for a while to think about how to divide it, you’d be more likely to try and take the whole thing for yourself.

That isn’t the best example. So I’ll try another: say the world is ending (metaphorically: there is a disaster), it’s the first day, you’re trying to escape as average Joe Person, and you hear someone who is hurt calling for help as they’re being bandaged by someone who is well-prepared with a basic supply kit. You’d likely try and help them, maybe join their group and get out together. After all, you weren’t prepared, so this guy who is prepared will benefit you. Bam, right there; the thought of the supplies came after helping this person get out. When given time to think, your decision to help makes more sense for more selfish reasons (unless you didn’t try to help in which case you better have a very good reason).

But fast-forward a few more weeks into the disaster and that person won’t be calling for help, because instead of a friendly, helpful person like you on day one, they’ll find you partially starving, alone, and with a lot of time to think. And that version of you would kill them and take the supplies, because someone else would just slow you down, and you’re barely making it anyway. The more time you have to think, the more selfish things make sense.

Now that I’ve said why the new world with all of the jobs being replaced by machines won’t be wonderful, I guess I’ve got to say something about how to fix it. And I’d say that at the moment we have no real way to fix it (except the super easy one which I’ll cover later). Many people will be unemployed by these machines, and there won’t be some utopia for them to go to. And, with most of the jobs the machines are going to take, there won’t be any reason to start using humans again. The first part of the future, as in the next few decades, will begin to fill with unemployable people that we don’t have the systems or the culture to handle. I don’t have a solution, really, and I know that seems like a lame way to end an article. But the real point here is that we are going to need to find a solution: we absolutely have to. And maybe if I think and talk a little more, and you think and talk a little more, then we can find a solution to this.