Welcome to the Back to School for Writers blog series. Every Wednesday until the end of September, a guest poster will share their knowledge and expertise on a specific topic. Today’s guest is David Gale. The eagle-eyed among you might’ve noticed that we share a last name, so I’ll confess up front that he’s my smart and funny husband, who put this post together for me on short notice.
Well, Rabia asked me to help out with her “Back to School for Writers” series. I’m supposed to talk about something I know, so since I’m a programmer, here goes: five things Hollywood (and many writers) often get wrong about computers.
1. A geek with a computer can hack anything.
You know the scene: the Good Guys are under attack by the Big Bad, everything seems doomed, and then the Nerd pulls out his laptop, hacks into the Big Bad’s computers and takes them down with a single keystroke. We’ve all seen it happen, most egregiously in Independence Day. And I’m here to tell you: it ain’t gonna happen that way unless the Big Bad is a bumbling, clueless technological neophyte–AKA a Minion, in which case they have no right being the Big Bad in the first place. There are a host of different reasons why this scenario is impossible, but the main one is simple: any computer system that holds the Ultimate Top Secret Plans of Doom will not be connected to the internet. You’re going to have to go to a specific location and pass through several layers of security and encryption just to get a listing of the files you still don’t have access to. There was a flurry of excitement recently when someone suggested that they might hack the Curiosity Mars rover, based on the fact that NASA is able to update the rover’s computers from Earth. But then people realized that, to pull this off, they’d need to replicate NASA’s Deep Space Network to broadcast a strong enough signal, in addition to breaking the encryption and figuring out the right commands to send. Not something you do in your spare time.
2. When computers break, there’s smoke and showers of sparks.

I swear, the electrical engineers who built the Enterprise’s computers must have been paid with baseball cards*, because those things are held together with spit and bubblegum. The ship gets hit–with the shields still up!–and suddenly every computer panel is sparking like a major French city on Bastille Day. Um, no, that’s not going to happen. When a computer goes belly-up, it’s actually pretty boring: things go black and quiet. It’d take a catastrophic heat-related failure to cause smoke (and even then, at least half the time, you’ll smell something but not see anything), and sparks–or even melting components–are only going to happen when something is pushed way beyond its manufactured specifications. People who build battleships design in a safety margin, so things generally top out at about 80% of the safe zone. Oh, and they also build in redundant failovers for the really important stuff, so that even if something goes boom, things keep working.
* Yes, I know there’s no money in the Star Trek universe’s Federation. So clearly they needed to come up with something else.
3. Computers talk like robots.
War Games is one of my all-time favorite movies. If you haven’t seen it yet, go dig up a copy. It’s one of the best “rogue computer nearly wipes out the world” movies ever, hands-down. But, I confess, it has some issues. And I’m not even going to mention the “sequel” that came out a few years ago…(shudder)…er, where was I? Oh, yes, talking computers. See, when Joshua–the computer from War Games–talked, he used a monotonous, robotic-sounding voice. And that’s been par for the course for most major computer systems out there (the ones that have gotten a talking role, at least) for the last thirty years. But even in the early 1980s, computers could synthesize voice decently. And we’re in 2012 now; everyone’s got GPS units that sound like, well, whatever we want them to. And if a GPS can do that, then my Doomsday Computer to End All Doomsday Computers can talk to me in a nice, soothing voice as it searches for the missile codes it needs in order to blow everything to little tiny bits. It may not get every inflection right, but it’ll be close.
4. Computers can’t deal with logical fallacies.

There’s a Star Trek episode where Kirk destroys an android bent on enslaving humanity by telling it that someone who has just confessed to being a liar always lies. The android can’t resolve the paradox–is the liar lying about being a liar?–and breaks down (complete with smoke, but, amazingly enough, no sparks). Unfortunately for Kirk, any artificial intelligence sophisticated enough to handle natural language processing (necessary for understanding spoken commands) will almost certainly be able to disregard statements it doesn’t understand. It doesn’t need to establish the truth or falsehood of the statement; it just needs to decide to ignore it. And if it’s not capable of ignoring commands, then you can stop it just by, well, commanding it to stop.
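In fact, even the dumbest command loop gets this right: anything it can’t map to a known command just gets dropped on the floor. Here’s a toy sketch (every name in it is made up for illustration, not from any real system):

```python
# Toy command handler: unknown input is ignored, paradoxes included.

KNOWN_COMMANDS = {
    "status": lambda: "All systems nominal.",
    "halt": lambda: "Halting.",
}

def handle(utterance: str) -> str:
    # Take the first word as the command verb, if there is one.
    words = utterance.strip().lower().split()
    verb = words[0] if words else ""
    command = KNOWN_COMMANDS.get(verb)
    if command is None:
        # "Everything I say is a lie" lands here and is quietly dropped.
        return "Input not recognized; ignoring."
    return command()

print(handle("status"))                     # All systems nominal.
print(handle("Everything I say is a lie"))  # Input not recognized; ignoring.
```

No smoke, no sparks: the machine never even tries to work out whether the liar is lying.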
5. Computers will eventually become so smart that they decide humanity is not needed.
At least in The Matrix, humanity continued to serve a purpose once the computers had taken over. They didn’t need our brains, but they did need our magical ability to produce electricity from our imaginations. But the main problem I have with that premise isn’t with the boundless energy of humanity (I’ve often wondered if I could eliminate my electric bill by putting my seven-year-old in a giant hamster wheel hooked to a generator); no, my problem is that computers are only as smart as we can make them. Oh, and there’s an entire class of problems that computers can’t solve at all (to be fair, neither can humans, but we can often intuit solutions without needing to do all the hard work). So we can make computers “smart” enough to calculate the value of pi to 5,000,000,000 digits (eventually), which no human will bother doing–but only because we’re smart enough to know how to calculate pi. It’s the same with everything we tell computers to do, even landing on Mars. Sure, the computer’s doing the heavy work, but we had to tell it how to do the work in the first place. If we can’t figure out how to do something, we’re not going to be able to build a computer that can figure it out. And that’s the dirty little secret of artificial intelligence–until we can fully understand ourselves, how we think, how emotions work, what intuition and art fundamentally are, we’re not going to be able to create anything that can truly be said to think for itself. And if we ever reach such a hyper-conscious state…well, the intelligences we create will still be childish compared to us.
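Case in point: a computer can grind out digits of pi all day, but only because a human (John Machin, back in 1706) worked out the formula first. Here’s a quick sketch of that formula done in integer arithmetic–a standard trick, nothing the computer invented on its own:

```python
# pi/4 = 4*arctan(1/5) - arctan(1/239)   (Machin, 1706)
# Evaluated with integer fixed-point arithmetic so the digits come out exact.

def arctan_inverse(x: int, unity: int) -> int:
    """arctan(1/x), scaled up by `unity`, via the Taylor series."""
    total = power = unity // x
    n, sign = 3, -1
    while power:
        power //= x * x           # next odd power of 1/x
        total += sign * (power // n)
        n += 2
        sign = -sign
    return total

def pi_digits(digits: int) -> int:
    """pi * 10**digits, as an exact integer."""
    unity = 10 ** (digits + 10)   # ten guard digits absorb rounding error
    machin = 4 * (4 * arctan_inverse(5, unity) - arctan_inverse(239, unity))
    return machin // 10 ** 10

print(pi_digits(20))  # 314159265358979323846
```

The machine does the billions of grinding steps; the insight–why that series converges to pi at all–was ours.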
Oh, and here’s a bonus: if you’re still worried about the Robot Apocalypse, here’s what it will look like.
BIO: David Gale is a programmer by day, but don’t hold that against him. He loves his wife, Rabia, very dearly–even to the point of guest-posting on a moment’s notice. He’s also the creator of WriteTrack, a tool for writers who want to write a large number of words in a small amount of time, but don’t want to do the math to figure out their daily goal. And if you’re really curious/in need of something to put you to sleep, you can visit his blog.