Thursday, June 2, 2011

Breakdowns of Imagination

Through the years, there's been a lingering - not necessarily strong, just enduring - idea that the goal of a science fiction writer is to predict the future. On an individual basis, maybe that's true for a few of them, but the kind of arrogance it takes to say "this is how the future will unfold, and I am totally right about it" is entirely different from the sort of arrogance that drives a writer to seal an envelope or send off an electronic submission and say "this is an incredibly awesome piece of writing and you will buy and publish it." Or at least think it.

Personally, I've always preferred the notion that the sf writer is better described as a forward scout: running ahead in one direction or another, taking a look around, then coming home and telling everyone what's over that particular hill. Good sf isn't supposed to be prognostication; it should more properly be speculation about the sorts of things that might happen. I'd argue that it's thanks to decades of such speculation about the pitfalls and possibilities of human cloning, for example, that there are already laws in place to deal with it before it's even been demonstrated to be viable.

Nevertheless, it's entirely possible to just not see something coming. There's nothing wrong with that; it's human. What can be a problem - indicative of potential speedbumps in thinking outside a speculative fiction mindset - is to dismiss outright the possibility of something coming. Which, realistically, is a third kind of arrogance.

"A rocket will never be able to leave the Earth's atmosphere." - The New York Times, 1936.

I'm prompted to this specifically by Hugo Gernsback, the Father of Science Fiction, despite the fact that he's been dead for forty-four years. In the letter column of the October 1931 issue of Wonder Stories, I came across his editorial response to a letter from Milton Kaletsky - whom you may recall as the writer of "Revolt of the Ants" - expressing concerns that once robots became sufficiently inexpensive and reliable, the laws of economics would result in them replacing human workers wholesale. Kaletsky suggested that people needed to start preparing then for the changes that would result, in order to ensure society wasn't vastly disrupted by them.

Gernsback spent three paragraphs on his response, but what really caught my eye was this one.

Popularly, the robot is supposed to be a man-like machine that walks on its feet, and performs all sorts of manual work. While it may be possible, in a hundred thousand years hence, to have a mechanical man who performs simple tasks, it will manifestly remain impossible, even then, to have, for instance, to mention one of thousands - a robot cook. Not by the wildest stretch of imagination, can you think of a robot which could cook an entire meal all by itself without the human agency entering the work.

I can understand Gernsback, or people in general, being skeptical about the possibilities of robotics. I can understand the notion that some things that seem inordinately simple to us are fiendishly difficult to get a computer to comprehend. But a hundred thousand years? Really? What is it about the act of cooking that makes it so ungraspable to a robot that even if we worked at it for twice as long as it's been since we achieved behavioral modernity, we still wouldn't manage it?

In looking ahead, there's conservatism, and then there's something close to blindness. Granted, the huge span of time Gernsback cites may just be a product of his era; I've noticed other works from the early 20th century that seem to take extreme technological stasis as something of a given. But blindness is of no use to a scout.
