A Conversation With Bill DeSmedt

by Claire E. White

Bill DeSmedt likes to say that he's spent his life
living by his wits and his words. They have served him well in a number of careers: Soviet expert, computer programmer, Artificial Intelligence researcher and his current profession as a knowledge engineer.

As a college student, he considered becoming a physicist. During the Cold War that meant learning Russian, which he did. As an exchange student in the USSR, he learned to love Russia and its culture, becoming an expert on the country. But when his Soviet Area expertise became less useful after the fall of the Soviet Union, Bill looked around for another career. He turned to computers, eventually working as a programmer and system designer, consulting for both startups and Fortune 500 companies. He also developed a specialty in Artificial Intelligence and natural-language research. Today Bill is a highly-regarded knowledge engineer: someone who gathers knowledge and incorporates it into computer programs such as expert systems and natural-language processing systems.

Although he already had a successful career, a happy marriage, children and even grandchildren, Bill had one dream he hadn't yet fulfilled: to be a novelist. A lifelong voracious reader, especially of SF and science nonfiction, Bill decided it was finally time to take action. After seeing a TV special narrated by Carl Sagan, he was struck by one of the greatest natural mysteries on our planet: the Tunguska Event. On June 30th, 1908, a giant fireball streaked across the daytime sky above the Tunguska River in Siberia, then exploded in the atmosphere, flattening millions of trees in the forest in a circular pattern. The fallen trees all radiated outward from a centerpoint. Scientists estimate that the explosion, which occurred between 2 and 9 miles above the Earth's surface, released energy equivalent to a nuclear bomb with 500 to 2,000 times the force of the Hiroshima blast. Shockwaves were reported almost 1,000 miles away from the blast point, but no crater was ever found, which seems to rule out a simple meteor or comet strike. In eastern Siberia the night sky was bright enough to read a newspaper by. The area was blanketed in radiation; low-level radiation remains to this day, as do biological mutations in the area. Many theories have been propounded for the Tunguska Event: a meteor, a comet, a miniature black hole, antimatter, or the accidental detonation of an alien spacecraft.
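For readers who want to check the arithmetic: taking the Hiroshima bomb's yield at the commonly cited figure of about 15 kilotons of TNT (an assumption, not a figure from the article), the 500-to-2,000-times range converts to megatons as follows:

```python
# Back-of-the-envelope check of the Tunguska yield estimate quoted above.
# Assumption (not from the article): the Hiroshima bomb yielded ~15 kilotons of TNT.
HIROSHIMA_KT = 15.0

def tunguska_yield_megatons(multiplier: float) -> float:
    """Convert an 'N times Hiroshima' figure to megatons of TNT."""
    return multiplier * HIROSHIMA_KT / 1000.0  # 1 megaton = 1,000 kilotons

low = tunguska_yield_megatons(500)    # 7.5 megatons
high = tunguska_yield_megatons(2000)  # 30.0 megatons
print(f"Estimated yield: {low:.1f} to {high:.1f} megatons")
```

That 7.5-to-30-megaton result sits comfortably inside the two-to-forty-megaton range DeSmedt himself cites later in the interview.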

Bill DeSmedt was fascinated by the mystery and couldn't stop thinking about it. And so his first novel was born. Singularity (Per Aspera) is a fast-paced techno-thriller based on hard science, which postulates that the Tunguska explosion was caused by a microscopic black hole, smaller than an atom yet heavier than a mountain. The twist? The black hole never exited and is still orbiting inside the Earth, tunneling through the mantle in a decaying orbit that could eventually devour the entire planet. And a Russian billionaire who has been hiring rogue scientists with specialties in creating weapons of mass destruction has his own ideas about how to exploit the situation. Now it will be up to a maverick astrophysicist who proves his theory about the tiny black hole, a beautiful rookie government agent and a high-level intuitive consultant to combine their skills to avert a worldwide disaster. With its fascinating science, big ideas, exciting action sequences and complex plotting, Singularity is drawing comparisons to the works of Michael Crichton, Tom Clancy and Dan Brown. The sequel, Dualism, is due out next year.

Bill lives with his wife of 37 years in Milford, Pennsylvania, "a town whose long tradition in speculative literature serves as a constant source of inspiration," he says. When he's not working or writing, you might find him spending time with his family or keeping up with the latest scientific discoveries. Bill spoke with us about Singularity and his decision to launch a new career as a novelist. He also discusses the science behind the book, recent advances in artificial intelligence, and why not teaching evolution and science in schools could spell the beginning of the end of our civilization.

What did you like to read when you were growing up?

I confess to having been one of those teenagers who took refuge from the Sturm und Drang of adolescence in the safe, if at times unsettling haven of science fiction. Not that the whole of SF's attraction lay in its escapist aspects. Science fiction is a funhouse mirror we hold up to ourselves, a way of exploring what it means to be human and how far the bounds of the human can be stretched. There are worse literatures for a young person, struggling with issues of personal identity, to get caught up in.

How did you get interested in the Soviet Union?

It didn’t take a great deal of effort to get interested back during the Cold War. I mean, the fact that somebody's got nuclear missiles pointed at your head tends to focus the attention all by itself. In my case, though, the interest born of self-preservation was reinforced by other factors. Chief among them was the fact that I toyed for a while with the notion of becoming a physicist, and for prospective physicists Russian was the recommended way to fulfill their foreign-language graduation requirement (remember those?). I eventually abandoned that hard-science career path, or it me, but by then it was too late: the Russian language had me hooked. I wound up taking eighteen months of it at the Defense Language Institute in Monterey, California, followed by a BA and an MA in Soviet Area Studies, capped off by ten months on the US-USSR student exchange in the mid-seventies. By the time the dust cleared, I was well on my way toward a professorship in Soviet politics -- that’s when my subject matter got shot out from under me, as the country whose political system I was studying ceased to exist.

How did you get interested in computers? Did the speed of the home computing revolution surprise you at all?

Computers were sort of what happened after the Soviet Union went away. And, to some extent, a career in programming and system design was a natural extension of my earlier interests. As I have Jonathan Knox saying in Singularity, "it all comes down to language; moving from Russian to C++ just means swapping one set of formalisms for another." I don’t necessarily agree with everything my protagonist says or does, but he took those words right out of my mouth. And in my case the whole thing came full circle, since some of my work with computers actually involved trying to get them to understand natural language. As for the home-computer revolution, apart from a few visionaries like Alan Kay, saying you weren’t surprised by it is the same as saying you remember the sixties -- a sure sign you weren’t there at the time.

I'd like to talk about your new novel, Singularity. What was your inspiration for this book? What sparked your imagination?

Cover of Singularity by Bill DeSmedt
I’d always had what they call an "educated layman’s" interest in relativity and quantum mechanics, and that naturally led to black holes, the point where those two very different theories -- of nature on the very largest and very smallest scales, respectively -- come together. In addition, general reading over the years had given me at least a subliminal awareness of the Tunguska Event, the still-unexplained explosion that devastated an area of Central Siberia half the size of the state of Rhode Island back in 1908.

All that came to a head one rainy Saturday afternoon in the mid-nineties, while I was watching a rerun of Carl Sagan’s Cosmos TV series; more specifically, the episode where he talks about comet and meteorite impacts. Maybe halfway through, Carl briefly mentioned Tunguska, and then touched even more briefly on Albert A. Jackson and Michael P. Ryan’s theory that the explosion, estimated at anywhere from two to forty megatons, might have been caused by a submicroscopic black hole hitting the earth. In the next breath, though, he went on to dismiss the idea: it seems a black hole that tiny should have passed unimpeded through the solid body of the planet and out the other side, resulting in a second explosion of equal or greater violence. Since no such "exit event" had been detected, the theory itself was consigned to the dustbin of astrophysical history. Q.E.D.

It was then it struck me: what if there was no exit event, because there was no exit? What if the Tunguska Event was caused by a black hole smaller than an atom, and that black hole’s still down there, circling around and around inside the earth, and slowly eating it alive?

Well, serves me right for watching the tube when I should have been helping out around the house. Because, once I’d thought of it, the idea wouldn’t leave me in peace. It’s not like it "sparked my imagination" -- not in a good way, anyhow. It was more like one of those tunes you can’t get out of your head. It’d come back at odd moments, and bring its friends: stray pieces of my personal history that somehow fit into the puzzle (or maybe made a larger puzzle). I’d catch myself buying books on black-hole and Tunguska-related topics, just on the off chance that.... you know? I even tried giving the concept away free to not one, but two published authors I knew slightly, hoping they’d take the monkey off my back. Nothing doing. Finally, I exorcised the story idea the only way I could -- by sitting down and writing the story myself.

There are numerous theories about what caused the Tunguska event in 1908, including meteors, UFOs, nuclear weapons and the theory you use in your novel -- the subatomic black hole. Do you think we will ever know what happened? Has all the evidence we would need to solve the puzzle been erased?

Photo of Trees Felled by Tunguska Event
Photo taken in 1927 of trees felled by the Tunguska explosion.
Actually, I could add a few more theories to the ones you’ve cited. Like the possibility that Tunguska was caused by a solar plasmoid ejected from the sun, or by one of inventor Nikola Tesla’s experiments gone terribly awry, or by a laser beamed at us by extraterrestrials in an ill-advised attempt to communicate. There's even one theory that holds that the Tunguska impactor didn't fall down from the sky at all, but rose up out of the ground instead -- a "geometeorite" or a gigantic gas eruption that incidentally threw enough flaming material skyward to make eyewitnesses believe they had seen something streak across the heavens prior to impact. And, of course, the local Evenki tribesmen remained convinced that the whole thing was a visitation from their storm-god Ogdy.

The Jackson-Ryan hypothesis, that the Tunguska Event was the collision of a primordial black hole smaller than an atom, seems downright tame by comparison. As to whether we'll ever know for certain what caused Tunguska, well -- absent a logical contradiction, the proposition that science will never manage to accomplish [fill in your own impossible dream here] has historically not been the way to bet. If, on the other hand, you’re asking whether we'll solve this particular puzzle within our lifetimes, then the confidence factor drops considerably. As to what shape the evidence is in nowadays, that depends to some extent on which theory you’re trying to prove.

For instance, the modified Jackson-Ryan hypothesis -- what I call the "Vurdalak Conjecture" ("Vurdalak" being Russian for "werewolf" or "vampire") -- makes at least one prediction that ought to be testable, even if the Tunguska impact site itself were to fall victim to runaway exploitation by mining consortiums. That’s because, if there is a submicroscopic black hole still hurtling around beneath the surface of the earth, then we've very nearly got the capability to detect that fact from low-earth orbit right now.

I'm thinking in particular of the Gravity Recovery And Climate Experiment (GRACE) mission that NASA's been running jointly with the German Aerospace Center. GRACE is specifically designed to detect anomalies in earth’s gravitational field. It would require some retooling and/or reprogramming to pick up one as fast-moving as Vurdalak, but the bottom line is: for the Jackson-Ryan theory at least, a conclusive test of the evidence still looks doable.

The heroine of the novel is Marianna Bonaventure, a rookie field agent who is talented but whose impulsivity gets her into trouble. What was the greatest challenge in creating Marianna? Were there any traits you were specifically trying to avoid with her?

"To write at all, you've got to tap into some pretty personal stuff, so it can be hard not to take it personally when your readers offer even the most constructive criticism. You've got to get past that though, because, if you do succeed -- if you soldier on through to the point of seeing your work in print -- you're going to see a lot more criticism, too."
Not that Marianna wasn’t challenge enough in her own right, but the real challenge I’d set myself went beyond the writing of any single character. I wanted, you see, to try writing a thriller that wouldn’t be just a "guy thing," full of cool machines and earth-shattering threats. Those are in there too, but I wanted to write something that might also appeal to women. And that meant writing a strong, but believable female hero. Or at least as believable as a time-warping Russian billionaire or a pattern-sensing elite consultant. That created some problems, though ...

Because thriller convention dictated that Marianna be "drop-dead" gorgeous, which can make it hard for women to identify with her. Of course, I could have simply cast her against type, made her less breathtaking than the norm for the genre, but that felt like a cop-out. So I did the next best thing: I wrote a female character who, though undeniably beautiful, doesn’t necessarily see herself as such. Marianna's unease with herself goes beyond surface appearances, though -- it's symptomatic of her larger uncertainty about whether she can cut it in a quintessentially male profession. But that element of characterization does at least start with looks. If the average guy can't go seven seconds without thinking about sex, as the sexist cliché would have it, then I'm betting the average woman’s equivalent thought has to do with physical self-consciousness. I wanted a heroine who -- precisely because everyone else can see how smashing she is -- would give the lie to those feelings of inadequacy, as a way of saying that maybe it's time for all women to set those feelings aside. So, is Marianna a male fever-dream? Of course she is, in the trivial sense that all fictional characters are the author's "dreams," and the author in this case is a male (me). But I hope she's a bit more than that, too.

Jonathan Knox, the consultant who can see patterns in seemingly unrelated events and gets dragged into Marianna's violent world, is a likeable hero who rises to the occasion. What was the greatest challenge in writing Jonathan?

By comparison with Marianna, Jon Knox wasn't much of a stretch. He's based in large measure on a composite of consultants I’ve known and worked with. By the same token, the hardest part was keeping him in character as a consultant. Consultants tend to cultivate an air of detachment, of objectivity, as a matter of professional survival: they can't get too involved in any one assignment, because the minute they've solved whatever problem they were called in on, they're outa there! The problem was: how to get that stance of disengagement to play in a thriller format. And how to have a "hero" like that rise to the occasion, when the only weapons he's got are words.

Still, I felt it was worthwhile to try. There are a lot of consultants out there (the Department of Labor estimates a couple hundred thousand), and odds are your daily life is at least as likely to be affected by something a consultant advised some corporate client to do as by a doctor or a lawyer. Yet, legal and medical thrillers are legion, and who do consultants get as a fictional role model? Nobody, really. (You can't count Tom Clancy's Jack Ryan -- what consultant in his right mind would quit his practice to go off and be President of the United States?) So, I felt it was time a real consultant got his turn at bat.

I especially enjoyed the character of Dr. Jack Adler, the astrophysicist who wangles his way onto the Tunguska expedition only to find himself the target of an assassin. Who was your inspiration for Jack?

I'm glad you liked Jack -- he’s one of my favorites too. As to who the inspiration for him was, no one in particular. I knew he'd have to be a Texan, since it was in Texas that Al Jackson and Mike Ryan first came up with the Tunguska/black hole hypothesis. And he needed an easygoing, laidback style to make his sometimes intense subject matter go down easier. In his gift for plainspoken exposition of difficult science, Jack is modeled after Paul Blass, an engineer friend of mine who was kind enough to serve as Singularity’s principal technical advisor. Beyond that, though, Jack Adler sprang entirely from my febrile imagination.

Marianna works for a fictional government agency called CROM (Critical Resources Oversight Mandate) which tracks down scientists who might sell WMD technology to terrorists, especially those who used to work for the USSR and who are now unemployed and whose expertise is highly valued by terrorists and rogue nations. What is the U.S. government doing today about this threat? How much of a danger are these scientists?

Well, the Department of Energy has got a mandate to oversee the Initiative for Proliferation Prevention, and they do have a Defense Nuclear Nonproliferation program up and running. So, while CROM itself may be fictional (though I sincerely hope it’s got some real-world counterpart), the rest of the infrastructure is factual.

As to the danger presented by rogue WMD researchers, I guess the good news is that
"Think of the scientific progress of civilization as a multistage rocket. The first stage has been firing, lifting us higher and higher, for all of recent history. But that progress comes at a price: the increasing exploitation and exhaustion of earth’s natural resources. The Hubbert Peak, the point where we hit an all-time maximum petroleum output, is fast approaching (if we haven't passed it already); thereafter, energy supplies decline, while demand continues to grow."
they're no longer in a position to bring civilization to an end, at least not all at once. The bad news is that the people looking to acquire their services nowadays are no longer constrained by even the modicum of basic humanity and common sense that reigned during the Cold War. It took the Islamicists to make the architects of Mutual Assured Destruction look good, and, believe me, they do. So, while the threat of wholesale nuclear holocaust may be a thing of the past, there's a retail version of it heading for a shopping district near you. Even one such event is too many, and it needn't take all that many WMD experts to whip one up. So, yes, I think these renegade scientists represent a real danger. And I'm not alone: point your browser at the websites of the Nuclear Threat Initiative or the Russian American Nuclear Security Advisory Committee, and search on keyphrase "brain drain."

Authors of thrillers have to make a decision: to include romance or not to include it. What went into your decision to include a romance between the two lead characters as a major plot component? What are your thoughts about love scenes, in general? It is certainly somewhat unusual to have a steamy love scene in a book that also discusses astrophysics, subatomic particles, and magnetohydrodynamics.

The question makes it sound like there was an identifiable point prior to which romance wasn't going to be included, and after which it was. But that's something of a false dichotomy. In actuality, it's hard to remember a time when sex, love, romance -- more broadly, relationship -- was not going to be an integral and intimate part of Singularity. In a sense, the novel is as much an exploration of, and a speculation on, the nature of relationships -- more particularly, the question of whether they're even possible any more -- as it is an exploration of, and a speculation on, the Tunguska Event or cosmology or black holes. I see both sets of issues as, in their way, equivalently cosmic in scope and significance. In that sense, a decision not to include a relationship between Jon Knox and Marianna Bonaventure would have been tantamount to a decision not to write the book at all, or at least not this book.

I'd like to talk about the mechanics of writing. Will you take us through a typical writing day?

A good part of my writing, scheduling especially, is conditioned by the need to hold down a day-job. And there I count myself fortunate, in that my work as a knowledge engineer holds challenges and inspirations nearly the equal of those I encounter in writing. But it does mean that about the only time I've got available for writing is early mornings and weekends. By "early," I mean starting at around 5:00 a.m., and going straight through till around 7:30, when it's time to head to work. And I'm not by any stretch of the imagination what you'd call a morning person. Or I wasn't before I launched this project. Still, those hours immediately after waking are, as it turns out, the most creative and productive of the day for me. Thoreau enjoins us to "morning work!" and, by the blushes of Aurora and the music of Memnon, it's certainly worked for me. And, in keeping with the early hour, I favor silence and stillness. The tranquility of the Delaware River Valley as the sun comes up is rhapsody enough.

How do you approach the editing phase of writing? Do you let anyone else read your work in progress?

I've only done this once, you understand, this novel-writing thing. And it's notoriously hard to generalize from a single instance. But for what it's worth... I can certainly attest to the truth of the old editor's maxim: "Real books aren't written, they're re-written." The first draft of Singularity took me six months, Memorial Day to Thanksgiving. I thought it was perfection incarnate, ready to start making the rounds of literary agencies and publishing houses. Just on the off chance, though, I engaged a professional line editor (a.k.a. a "book doctor") to give it a quick once-over first. Little did I know. About a year later, it really was ready. One of the things I found worked best for me, during both the writing and the re-writing, was to enlist (or maybe "build" is a better word) a community of first readers around my project. I'm not talking after the first draft. No, I began foisting my work on other folks for comment once I had the first draft of the first chapter! If you think it's hell being a writer, try being the friend of one! [smiles]

Now, the downside of that is, it leaves you vulnerable. To write at all, you've got to tap into some pretty personal stuff, so it can be hard not to take it personally when your readers offer even the most constructive criticism. You've got to get past that though, because, if you do succeed -- if you soldier on through to the point of seeing your work in print -- you're going to see a lot more criticism, too. Your first readers can help you there too, by inoculating you against, or at least inuring you to, what's coming. It's not like you're writing for yourself, after all. You're writing for a readership. You can't begin building and engaging that readership too soon.

What is the current state of artificial intelligence R&D? How close are we to creating artificial intelligence with the capacities of a human brain? And when do I get my robot maid, like Rosie on The Jetsons?

My personal sense is that we're not as close as we seem to think.
"Roboticist Masahiro Mori explains that as a machine gets more and more humanlike (read: cuter and cuter) its emotional appeal grows correspondingly; there's a rising curve of attachment, in other words, just as you might expect. But then a funny thing happens: at the point where the simulation is almost, but not quite, perfect, attraction suddenly turns to abhorrence -- the curve stops rising and plunges instead into a deep trough of revulsion."
For the past half century or so, artificial intelligence (AI) researchers have more or less pinned their hopes on what's called the "symbolic" approach. That is, proceeding as if, at some bedrock level, human intelligence consists solely of the manipulation of symbols. That had a number of advantages. For one thing, insofar as the process of symbol manipulation can be abstracted away from any particular physical implementation, it means that whatever neurons can do, microchips should be able to duplicate. For another, manipulating symbols is an entirely "transparent" operation -- it all takes place in plain sight, so to speak; and that, in turn, means that if we can duplicate human intelligence in this manner, we'll also, by the very fact of duplicating it, be able to understand it.

Yet another nice thing about this symbol-manipulation approach to intelligence is it offers an unambiguous criterion for success. Back in 1950, computer pioneer Alan Turing devised what he called "The Imitation Game" as a way to tell whether a machine can think. That game, better known nowadays as "The Turing Test," involves a judge sitting in a closed room containing nothing but two teletype terminals (this was the 1950s, after all). At the far end of one of those teletypes, sight unseen, is a human; at the far end of the other, a computer. It's up to the judge to figure out which is which, based solely on his or her teletyped conversations with them. The judge is free to type in anything he or she wants to and see how the other party responds. But if, at the end of the day, the judge can't reliably tell the human from the computer, then, Turing claimed, we would have to admit that the computer is thinking. In other words, Turing reduced the whole range of intelligent behavior to an ability to manipulate the symbols we call "words" so as to carry on a conversation, and in so doing he provided AI with the same sort of I'll-know-it-when-I-see-it standard as politicians used to apply to sniff out obscenity.
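The protocol DeSmedt describes can be sketched as a toy harness -- judge, human, and machine reduced to simple Python functions. All the names here are hypothetical stand-ins, not a real chatbot:

```python
import random

# A minimal sketch of Turing's Imitation Game. The judge converses with
# two unlabeled respondents -- one human, one machine -- and must guess
# which is which from the replies alone.

def human(prompt: str) -> str:
    return "Hmm, let me think about that for a moment..."

def machine(prompt: str) -> str:
    # A perfect mimic, for the sake of illustration.
    return "Hmm, let me think about that for a moment..."

def imitation_game(judge_questions, respondent_a, respondent_b) -> str:
    """Return the judge's verdict: which respondent ('A' or 'B') is the machine."""
    transcript_a = [respondent_a(q) for q in judge_questions]
    transcript_b = [respondent_b(q) for q in judge_questions]
    if transcript_a == transcript_b:
        # Indistinguishable replies: the judge can do no better than guess,
        # which is exactly the condition under which Turing says the
        # machine passes.
        return random.choice(["A", "B"])
    # A real judge would weigh the transcripts; this sketch just picks one
    # to stay self-contained.
    return "A"

verdict = imitation_game(["What is a sonnet?"], human, machine)
print(f"The judge accuses respondent {verdict} of being the machine")
```

The point of the sketch is the `transcript_a == transcript_b` branch: once conversation alone can't separate the two, the judge is reduced to a coin flip, and Turing's criterion is met.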

Well, this is all pretty heady stuff. And, as I say, it's managed to power the field for fifty or sixty years. Recently, though, there have been signs it's running out of gas. Signs like the leader of the longest-running, most ambitious symbolic AI project in the history of the field throwing in the towel as regards the goal of using artificial intelligence to shed light on the real thing, when Doug Lenat said of his Cyc project: "Absolutely none of my work is based on a desire to understand how human cognition works. I don't understand, and I don't care to understand" (MIT Technology Review, 105/2).

What went wrong? Philosopher John Searle hints at the real problem in his answer to the Turing Test. Called the "Chinese Room," Searle's thought-experiment envisions a man locked in a room together with some rulebooks. From time to time, a slot in the door opens to deliver a piece of paper containing a string of Chinese characters. The man in the room speaks no Chinese, but he can look up the characters in his books and follow the rules to produce another (to him, equally incomprehensible) string of characters, which he then shoves back through the door-slot.

Searle assumes, for the sake of argument, that a sufficiently detailed set of rules would enable the man in the room to pass the Turing Test -- in Chinese! But, Searle doesn't stop there. Instead, he goes on to claim that, Turing Test notwithstanding, the experience of the man in the Chinese room is fundamentally different from that of a person who actually understands Chinese, because the man in the room doesn't understand anything -- he's just mechanically applying incomprehensible rules to convert one string of incomprehensible symbols into another. The implication is that an AI system could pass a Turing Test without exhibiting anything that we would recognize as thinking, because the system could pass the test without ever having had the experience of thinking.
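Searle's point -- flawless rule-following with zero understanding -- is easy to caricature in a few lines of Python: a bare lookup table standing in for the rulebooks. The entries below are invented placeholders, of course, not a real corpus of conversational rules:

```python
# A toy Chinese Room: the "rulebook" is a lookup table mapping incoming
# symbol strings to outgoing ones. The program follows the rules
# perfectly while understanding nothing about either side of the
# exchange. (Entries are illustrative placeholders.)
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",      # "How's the weather?" -> "It's lovely."
}

def chinese_room(slip_of_paper: str) -> str:
    """Mechanically apply the rulebook; fall back to a stock reply otherwise."""
    return RULEBOOK.get(slip_of_paper, "对不起，我不明白。")  # "Sorry, I don't understand."

reply = chinese_room("你好吗？")
print(reply)
```

A big enough `RULEBOOK` could, in principle, pass Searle's hypothetical test -- yet nowhere in `chinese_room` is there anything that could be said to understand Chinese, which is precisely the argument.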

To make the same point in a different way, consider the conundrum I call "half a machine translation program." Start from the premise that a person might learn a foreign language simply in order to read some literary work in the original, as the critic Edmund Wilson is said to have learned Russian, at Vladimir Nabokov's urging, solely in order to read Tolstoy's War and Peace. Then further assume that, having accomplished this feat, the person does nothing further with it -- never produces a translation of the work in question, nor writes an essay about it, never even discusses it with friends. This might seem a little odd, but it's not an exercise in futility by any means: the person's purpose was to derive personal enjoyment and edification from the work, after all, and that purpose was achieved. Now consider a machine-translation program that halts halfway through like this. It reads in, say, War and Peace in the original Russian and then just stops, never going on to produce any output. Of what earthly use is such a program? How could we even be sure it was working? Above all, whom does it benefit? In the case of the person who learns to input Russian, even if no corresponding output is ever forthcoming, there's still one obvious beneficiary: the person's own self. But our half-a-machine-translation program has no such excuse. There's no self inside to benefit. It's like what Gertrude Stein said of Oakland: "There is no there there." As far as our machine is concerned, there can be no further reflection on the lives of Prince Andrei or Pierre Bezukhov. Once the program terminates, whatever internal representation it might have constructed from Tolstoy's masterpiece is simply dumped. Robin Williams once said of Albert Einstein that, when you looked at a photo of his face, you could tell that "all the lights are on and everybody's home." Well, staring into the face of contemporary symbolic AI, it's pretty clear that the house is dark and empty.

That's not to say that AI itself is impossible by any means. Just that the traditional symbol-manipulation approach to it seems to have hit a dead end. The implication of all this as far as my own work is concerned is that, when I went to create an AI character for Dualism, the sequel to Singularity, I had it embody a radically different approach to emulating intelligence -- one with, in my opinion, a far better hope of succeeding.

What is the greatest challenge in making a robot speak like a human?

The real challenge is to get it to understand what it is saying. And that's at least conceptually a separate issue from those addressed in your previous question: namely, how to get a robot to think up something worth saying. So, let's abstract away from that thornier issue and consider the simple case of getting a robot to read an arbitrary prepared text aloud the way a human announcer might. And, while we're at it, let's assume that our robot's speech synthesizer is equal to the task of accurately generating all the phonemes found in English. And knows how to handle all the so-called collocational effects, where the sound of one phoneme is subtly (or not so subtly) altered in the presence of a preceding or following one. And can string isolated words together into continuous speech. None of this requires a real technological breakthrough. In fact, we're pretty close to doing most of it right now. The question is: would all of that enable our robot to "speak like a human"? And there the answer is no. What's missing from even the most perfect speech synthesis in and of itself is any ability to produce natural-sounding intonation, to put the emphasis on the appropriate word(s) in the sentence, and on the appropriate sentence(s) in the paragraph. What's still missing is any hint of a mind behind the words, of an intention behind the action, of a desire to communicate a meaning to the listener. Because, currently at least, there is none. And without that, the listener will sooner or later stop listening.

What are the ethical issues that arise in AI research? Are the concerns justified?

So far there haven't been any, because the research has yet to produce an artificial intelligence intelligent enough to raise them. If and when it does, I'd imagine the ethical issues might be not unlike those involved in the decision to have a baby. No big deal, right? Unless, of course, the proud prospective parents happen to be named Mr. and Mrs. Schickelgrueber.

Point is, I can't imagine an AI as such that would turn out more of a monster than some of the ones we've produced in the good old-fashioned biological way, and yet nobody's arguing that people should stop having kids on ethical grounds.

On the other hand, there may be ethical issues in creating lightning-fast intelligences who, because they are created to serve humans, are doomed to slow their interactions down to the plodding pace of biochemical thought processes. Imagine trying to maintain a coherent conversation if, by your own innate internal clockspeed, you could only utter one word per week. And it's going to stay like that, subjectively, forever. Sounds like a one-way ticket for a descent into despair and madness. Anything, even nonexistence, might be preferable to enduring such a fate.

So the real ethical conundrum real AIs might raise is -- should we let them pull their own plug?

Most of the SF that has been written about AI has a very negative, frightening aspect to it, from I, Robot, to 2001: A Space Odyssey to Spielberg's AI to The Terminator movies. Why do humans fear AI so much?

That's a really good question, and one whose answer gets more elusive the more you think about it. After all, here you've got this thing -- this system, this being -- that's been created by humans, ostensibly for human purposes. Yet, now that we've brought it into existence by dint of great (and, as yet, we can't know just how great) effort, it turns out we're afraid of it? Well, it's not like that hasn't happened before. The twentieth century unleashed its fair share of Frankenstein's monsters, up to and including atomic energy. But those mostly concerned blind processes impervious to reason and liable to spin out of control. Neither of those factors should hold for an AI. In the first place, it's got to display something akin to human reasoning ability, by definition. And, as regards the control issue, there's at least the possibility of building in the fail-safes along with the intelligence, à la Asimov's Three Laws of Robotics.

So, no, I don't think the answer lies with our old friend Frankenstein's monster, but rather with his perennial sidekick, the Wolfman. Go anywhere in the world where wolves are found, and you'll find the equivalent of our werewolf legends too. Seems like almost everybody's got their own version of Lon Chaney sprouting facial hair, huffing and puffing. Yet, all the evidence suggests that wolves themselves are actually quite admirable creatures: they're good to their young, organized, capable of planning -- worthy competitors, in other words. So, why does every culture that encounters them feel compelled to demonize them?

It's that competition thing. Wolves aren't just good at competing with us, they're too good at it. Too good for our comfort, anyway. Social psychologists call it "the theory of the proximate enemy." The things that scare us the worst are the things that're most like us without being us -- almost identical, but not quite. Enough like us to be a threat, in other words, not enough to be a friend. (Although, as I write this, a domesticated wolf lies curled contentedly at my feet.)

Since the 1970s, AI researchers have been grappling with an analogue to the "proximate enemy" phenomenon called the "Uncanny Valley." As described by Japanese roboticist Masahiro Mori, it works like this: As a machine gets more and more humanlike (read: cuter and cuter) its emotional appeal grows correspondingly; there's a rising curve of attachment, in other words, just as you might expect. But then a funny thing happens: at the point where the simulation is almost, but not quite, perfect, attraction suddenly turns to abhorrence -- the curve stops rising and plunges instead into a deep trough of revulsion. Surprise, surprise, our human-mimicking machine has just fallen into the "Uncanny Valley." And the only way up and out is to duplicate, not just simulate, humanity. So, not only does AI have this "very negative, frightening aspect to it," as you correctly point out, but odds are it's going to get more negative, more frightening, before it gets better.

Russia has undergone enormous changes in the last twenty-five years and its society has had some trouble adjusting. What is the greatest challenge Russian society is grappling with today? Is President Putin a moderate, as he appears to be to most Americans?

Well, I haven't been a professional observer of Russian society for the better part of twenty years now. But, the view from my bench on the sidelines is that Russia continues to struggle with navigating a passage between Charybdis and Scylla -- between a descent into anarchy and a return to some version of dictatorship. And the increasing stratification of society isn't helping there: it just fuels the kind of popular resentment that a would-be tyrant can tap into on his way to the top. As to Putin, I think he's a moderate, all right -- a moderate autocrat. If you want to get a feel for his commitment to democracy, take a look at the strong-arm tactics he's used to take down the oligarchs, think about his ongoing assault against a free press, consider the $300 million he's said to have spent to fix the Ukrainian election for his preferred (anti-Western) candidate. Not that Putin's particularly to blame for his anti-democratic proclivities. Like everybody else, he's a product of his environment, and models of non-authoritarian leadership are few and far between in the Russian historical experience.

Mainstream media has focused less and less on science and more on the culture of celebrities, sports and scandals. Many schools are cutting back on science courses, and now there is a controversy about whether evolution should even be taught at all. What are your thoughts about the place of science in our modern culture? Can fiction be a tool to get people more interested in scientific discoveries?

To answer your last question first: I sure hope so. We can only hope that fiction -- or something -- will stir the popular imagination, and get us back on track. Because we're running out of time.

To see why, and to understand why we need more and more science, instead of less and less, think of the progress of civilization as a multistage rocket. The first stage has been firing, lifting us higher and higher, for all of recent history. But that progress comes at a price: the increasing exploitation and exhaustion of Earth's natural resources. The Hubbert Peak, the point of all-time maximum petroleum output, is fast approaching (if we haven't passed it already); thereafter, energy supplies decline, while demand continues to grow. Same story on all the other non-renewable resources: ores, arable soil, potable water, you name it.

And a dwindling resource-base is a sure recipe for conflict, which will burn even more resources even faster, and to no earthly purpose. But even if we can somehow avert a ruinous worldwide resource war, the trajectory of human civilization is still nearing its first-stage peak. And, if we let it flame out and fall back to earth, it'll never rise again. Because, before we crash and burn (if we do) we'll have consumed all the readily harvestable bounty of the planet, leaving behind nothing to jumpstart a successor civilization with. So, we really haven't got much of a choice. We've got to find a way to fire that metaphorical second stage and keep on going, on out to the stars. And by "we," I mean the folks who are coming of age right now. The same ones who're being fed stuff like "evolution is just a theory, not a fact" (as if the people who parrot that could tell the difference). Turning our backs on science is more than just a big mistake, in other words. It's a mistake that we'll never get another chance to correct.

What are your pet peeves in life?

Next to the one I just cited, nothing else even comes close.

What are your favorite ways to relax and have fun?

Nothing very complicated, I'm afraid: Family and friends (including the aforementioned domesticated wolf -- a wirehaired dachshund named Pepper), reading and writing pretty much top the list. Everything else is a distant second.

If you could travel back in time to have a chat with your 16 year-old self, what would you tell yourself? Would you be a receptive audience?

I think I'd tell myself to try getting involved with writing sooner, and stick with it. Hard as it was to finally commit to doing it, writing Singularity has definitely been one of the most rewarding experiences of my life, and my one regret is that I put it off as long as I did. How receptive would I have been to that message at the age of 16? Not very. But then, most 16-year-olds, myself included, probably haven't had enough life experience to have something they really want to write about. I'm lucky there in that I have, and I do.





Return to the December 2004 issue of The IWJ.