Saturday, December 8, 2007

Learning to Let “Goto”

Have you ever had a conversation like this before:

“So, how do I get from my office to the restaurant?”

“Well, you take the 10 freeway and go east. Then when you hit the 405 interchange, head north.”

“Head north. So stay in the left lane on the off-ramp?”

“No, actually, the right lane.”

“The right lane??? No way. You’re going east on the 10 and so if you want to go north, you’ll have to be on the left.”

“No, the right lane of the off-ramp goes north.”

“That’s impossible. North is on your left when you’re going east.”

...and so on. While a conversation such as this might not necessarily have earth-shaking consequences — at worst, this insistent instructee may have to turn around and lose some travel time — it does reflect yet another situation that might play out differently if more of us knew a little more about computer science.

Some may be surprised to realize that the central object of study in computer science is not the computer. On the contrary, the computer is merely a tool for creating, studying, and making use of the real star of computer science: the algorithm.

While there are many formal definitions for an algorithm, in keeping with the “daily life” tone of this blog, let’s stay relatively informal: in essence, an algorithm is a set of instructions. It’s a “how-to,” a “1-2-3,” a “step-by-step guide.” And yes, to borrow yet another catch phrase, in many respects an algorithm is meant “for dummies.”

No, this doesn’t mean computer scientists are dummies — it means that computers are. The “dummy” aspect of algorithms lies not in their discovery, definition, or creation — that’s squarely in the realm of computer science, and certainly not for “dummies” — but in their execution. An algorithm should include all of the information necessary to complete it successfully — no more, no less. And this is why computers are the ideal “algorithm executives” — they rely completely on the information provided by the algorithm, and nothing else. No questions asked. If an algorithm’s instructions will make a computer “freeze,” then the computer will “freeze.” If these instructions will make a computer produce a convincing facsimile of Angelina Jolie playing Grendel’s mother, then that’s exactly what you get.

In other words, computers follow instructions, and the information provided within those instructions, without injecting (or needing) any other information. There is no “insight,” or “intuition,” or “experience,” or “understanding” involved. The bottom line: much as we tend to anthropomorphize computers, they actually operate in a manner that is almost contrary to how we do. One can argue that the previous four concepts — insight, intuition, experience, and understanding — are fundamental to being “human.” And yet, they have nothing to do with the amazing things that computers can accomplish; in fact, they may even get in the way.

Which brings us back to the conversation above: the driving directions being given in that conversation can be viewed as constituting an algorithm — an algorithm for traveling from one’s office to some restaurant. The recipient of this algorithm, however, is a living, breathing human being...who appears to know better. Indeed, prior experience and an understanding of directions do make these directions seem incorrect: north is to your left when you’re going east. So of course it would be the left lane of an off-ramp that would head north.

There are two ways to go here: first, one can be “more human” and state that no, the person receiving directions does not actually have enough understanding or experience. Sometimes, off-ramps do turn in opposing directions — it just depends on how the roads were built. So it is possible to head north by staying in the right lane of an eastbound off-ramp.

But wait — the would-be restaurant guest was already being told to stay in the right lane in order to head north on the 405. The issue was that, because said guest had “understanding,” “experience,” and “insight,” he or she was doubting the instructions being given. This guest is human, after all. Had these instructions been given to a computer, then no questions would have been asked, and the computer would have arrived at the restaurant sooner than the human.
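To make the point concrete, here is a small sketch of my own (the step wording and the `follow` function are entirely made up for illustration, not anything from the actual exchange): the directions are just data, and the “executor” applies each one verbatim, in order, without second-guessing.

```python
# Illustrative sketch: driving directions as an algorithm.
# The executor applies each instruction literally — no "insight,"
# no "but shouldn't north be on my left?"

directions = [
    "take the 10 freeway east",
    "at the 405 interchange, stay in the RIGHT lane of the off-ramp",
    "head north on the 405",
]

def follow(steps):
    """Execute each instruction in order, exactly as given."""
    for i, step in enumerate(steps, start=1):
        # A computer never doubts the instruction; it just performs it.
        print(f"Step {i}: {step}")

follow(directions)
```

Notice that the program has no representation of “east” or “north” at all — it cannot object to the right-lane instruction, because it has no competing model of the world to object with. That is precisely why it gets to the restaurant.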

Now that you’ve come this far, you might think that this entry is some sort of invective against that which makes us human — if this were The Matrix, I’d be rooting for Agent Smith, or if this were Terminator, I’d be squarely on Arnold’s side. And why not — I’m a computer scientist after all, right?

On the contrary, this entry is actually all about the perks of being human: we have a choice. We can choose to invoke our insight, understanding, and experience...and, when the situation is right, we can also choose not to do so. In the case of the dogged dinnergoer, he or she probably should have recognized that the directions’ provider was authoritative, and his or her instructions should simply be followed. There will be other times to invoke one’s sense of direction or navigation; this probably was not one of them.

With this perspective, you can probably think of some other times when it’s better to just put our “computer faces” on and follow instructions. Even without inborn culinary aptitude, one can probably whip up a decent meal as long as he or she sticks to a good recipe (and executes its instructions correctly). I’m also sure that teachers everywhere have, at one time or another, asked their students to “just please follow the instructions.” And how many times have negative consequences, ranging from the comic to the tragic, emerged due to a failure to “use as directed?”

A little computer science provides the perspective and the skill set to discern when to invoke our human gifts (i.e., develop the algorithm) and when to simply let go and follow instructions (i.e., perform the algorithm). There are right and wrong times to behave one way or the other, and for the moment, only a fully realized human can compute that correctly.
