The Time of EVE

I just finished watching “The Time of EVE”, an anime film about androids, what else. I’m no particular fan of Japanese cartoons, but horrendous graphics aside, they aren’t afraid to make you reconsider what you take for granted. They like to fuck with your moral compass, and by the end of one you often find yourself thinking the world might not be as black and white as you previously assumed.

EVE is no different. It doesn’t ignore the fact that human-like robots would have epic moral repercussions, and suggests a world where three major groups have formed: those who promote the use of androids (and allow themselves to bond with them on an emotional level), those who treat them like any other robot, and those who oppose them.

The plot, in much the same way, revolves around three storylines. The main one, the least important even though it follows the narrator, tells the story of a high school kid who falls into the second category: he owns a “female” android and treats “her” like shit, like an object. The second storyline, the one furthest in the background but still the most important, is about his friend, pretty much an android-hater. And the third is about the bar they stumble into, “The Time of EVE.”

In a society where robots are indistinguishable from humans and surpass them in every possible way, it would only be natural for us fleshlings to feel threatened. As a result, androids are easily identified by a halo (called a “ring”) above their heads that displays their status and a bunch of Japanese text like “I WISH I COULD FUCK”, though I’m just guessing that. They also obey Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These are in order of importance. Easy enough.
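Just to make that ordering concrete, here’s a toy sketch of my own (nothing from the film, and certainly nothing canonical): the laws as a strict priority list, where the highest-ranked law whose condition fires decides what the robot does. That’s the whole trick, and also why an order can lose out to preventing harm.

```python
# The Three Laws as a strict priority list (my own toy model, not the
# film's). Lower index = higher priority; the first law whose condition
# applies wins, so an order is only obeyed if no human is in danger.

LAWS = [
    ("First Law",  lambda s: s["human_in_danger"],   "prevent harm"),
    ("Second Law", lambda s: s["order"] is not None, "obey order"),
    ("Third Law",  lambda s: s["self_at_risk"],      "protect self"),
]

def decide(situation):
    """Return (law, action) for the highest-priority law that applies."""
    for name, applies, action in LAWS:
        if applies(situation):
            return name, action
    return None, "idle"

# A robot under orders, with a human in danger: the First Law outranks
# the order, so "obey order" never gets a look-in.
print(decide({"human_in_danger": True, "order": "stay silent", "self_at_risk": False}))
# -> ('First Law', 'prevent harm')
```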
Trivia: before Asimov’s time, stories about robots almost invariably had them going berserk and destroying whatever they could because they saw fit. Asimov found those plots “unbearably tedious” and introduced the three laws, only to have them scanned for loopholes and endlessly raped by directors ever since. Oh, irony.

EVE, for once, is not about robots breaking the laws; quite the contrary. If it relates to them at all, it is in the interpretation of robot behavior through those laws, figuring out why a robot behaves the way it does.
Instead, it is about a bar where outside rules like wearing the “ring” no longer apply and androids and humans are treated as equals. All robotic mannerisms are dropped, and some androids even adopt child-like behavior. And nobody, not even the androids themselves, can tell robots apart from humans.

What ensues is a little questionable, to say the least. Robots fall in love, or at least act in love, start to believe they’re human, and lie to their owners (lying, notably, is nowhere in the rules) about going there.

Okay, so, bottom line: a kid was raised by an old-school robot and unavoidably got attached to it. But then the robot is ordered by the kid’s father to stop speaking because of the secrets the two share, and it does so despite the child’s crying and begging. The child loses faith in all robots and ends up as the protagonist’s android-hating friend. Spoiler alert, by the way. Whoops.

In the end, the kid is proven wrong. At the bar, the robot does speak in order to defend the kid’s life (breaking the Second Law in favor of the First) and thus proves that ohmigod, it loves the kid after all, and even ends up saying so. Spoiler alert again. Ha, ha.

This, ladies and gentlemen, is false logic. Sure, everybody roots for the friendly, sexy robots and feels for them when they apparently have tear ducts, but seems to forget that, without contradicting itself, the movie simply proves that while surprisingly complicated, these androids are still machines that function as programmed. I even found it quite clever how each android’s behavior is traced back to its programming, like the security android who refuses to have herself fixed because her mental anomalies would be discovered and she would be scrapped, leaving her owner vulnerable and potentially harmed. Aww, real love, right? Wrong.

I’ve said this before: if you make a robot that looks sad, it isn’t actually sad. If you make it look sad for a good reason, it still isn’t actually sad. It’s imitating human behavior, and no matter how convincing, that still doesn’t make it human. When a child-like robot cries because its mommy (some tower crane, I dunno) is demolished, it doesn’t give any more of a fuck than your toaster does watching you have a heart attack. Robots are not human and never will be, by definition.

So where is the line, then? In all honesty, I have to say I don’t know, and I don’t think there is any way of knowing, because of how convincing emotional communication can be. If you replaced part of a human’s brain with a computer, would that make the person in question inhuman? At what point is he “dead” and replaced by a laptop? Could his consciousness be copied and a second entity created, or would that be imitation?

Nobody can tell. It’s a dilemma that can’t be solved objectively. But if there is one thing I’ve learned, it is that problems can also be tackled through subjective means.
There are two extremes in this case: either assume that you can never be sure and simply disregard any form of emotion that isn’t your own (and second-guess even your own), or simply “love all” and include the “grey zone”, where the difference is hard to tell, among the things you consider worth bonding with.

If you look at the effects of that choice, it seems logical to go with the latter, because you don’t want to end up a sociopath. I think we’d be better off simply accepting any shown emotion, imitation or genuine, as “real” and acting accordingly. However, this too can have dire consequences.
The example in the movie, where emotional bonds are instantly cut and the child is left heartbroken, is a good one, but certainly not the worst. We have to remember that a robot still has nothing like free choice; its show of emotion is programmed by somebody who chooses to do so because it benefits him somehow.

If you were to adopt a robot child like in the movie A.I. (Jude Law and nu-metal, bitches), and it started screaming every time you mentioned the name of a competing manufacturer, it wouldn’t be long before those names gained a negative connotation. Our feelings would be played and programmed through those of inanimate objects. Don’t think this is beneath their morals, or have you forgotten the millions of smiling faces on every cardboard box at the supermarket?

In order to prevent major mistakes tomorrow, it’s best if we answer these questions today. But nobody takes them seriously, so why would we?

Either way, see this film. The graphics aren’t half bad (3D rendering, WTF) and neither is the storyline. And Maynard willing, it will make you think.

