OK, I’ve been accused of being wordy before, even verbose, occasionally sesquipedalian, but never of grammatolatry, or even verbolatry. Now, before you say “huh?”, a little background: my friend David turned me on to the Wordsmith.org A.Word.A.Day email, which sends you some pretty spiffy vocab to contemplate each and every day, for free, with no unwanted spam attached. Definitely a worthwhile thing, to my mind.
Anyway, grammatolatry is a very cool word that I’m not at all sure I’ll remember well enough to use later, so I thought I’d better use it now. It put me in mind of a very interesting book I read recently, David Stork’s HAL’s Legacy: 2001’s Computer as Dream and Reality. Using the most notorious computer intelligence yet created as his muse, Stork dips into the state of the art today and asks how close we are to the 12th of January, 1992 (HAL’s birthday, for you non-technogeeks). And grammatolatry might be at the root of the problem Stork outlines.
The short answer to the big question posed by Stork’s book, where are we compared to where Clarke thought we’d be by now, is in essence this: while we’ve actually built computer systems with more memory than Sir Arthur thought we’d manage, we’re nowhere near having a box that can genuinely channel Douglas Rain (for you non-geeks, he’s the actor who supplied HAL’s voice). In fact, some of the biggest names in the field of AI, like Allen Newell and Marvin Minsky, are less optimistic about our chances of ever getting there than they were when the pursuit began in the 1950s.
So what’s the problem? Well, the fact is, computers are great at handling tasks like dialing phones, searching data, or processing web orders for books, but they’re not so hot at tasks that require a broad understanding of the world around them, and let’s face it, understanding language probably tops that list.
Ray Kurzweil (yep, the guy whose name is on so many way-cool synthesizers in so many way-cool bands) is also a student of AI. Whereas once he believed we’d have HAL nailed by the dawn of the 21st century, he now notes that the likely reason we haven’t achieved that goal is a lack of sophisticated computer architecture; in other words, it’s not the capacity of the machine that presents the greatest challenge, it’s whether we can build a reasonably effective analog of the human brain’s neural network. Kurzweil points out that even today’s supercomputers have nothing approaching the capacity of the human brain, which has “about a hundred billion neurons, each of which has an average of a thousand connections to other neurons” (163). Advances in circuit design, such as three-dimensional circuits, will increase the capacity of computers, but even more important is achieving breakthroughs in architecture, in the arrangement of circuits.
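If you want a feel for the scale Kurzweil is describing, his numbers make for a fun back-of-envelope calculation. This little sketch just multiplies the two figures from the quoted passage; everything beyond those two numbers is my own illustrative framing, not anything from the book:

```python
# Back-of-envelope tally of the brain's "wiring," using the figures
# Kurzweil cites: ~1e11 neurons, ~1,000 connections each.
# Purely illustrative arithmetic, not a claim from the book itself.
neurons = 100_000_000_000        # about a hundred billion neurons
connections_per_neuron = 1_000   # average connections per neuron

total_connections = neurons * connections_per_neuron
print(f"total connections: {total_connections:.0e}")  # prints "total connections: 1e+14"
```

A hundred trillion connections, give or take, which is why raw memory capacity alone was never going to get us to HAL.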
These are just a couple of vignettes from the book. Stork is the Chief Scientist at the Ricoh California Research Center and a Visiting Professor of Psychology at Stanford, so he’s what you could call a well-informed insider in this field of study. The book grew out of a team of the brightest stars in computer research that he put together a while back: cutting-edge experts in virtually every facet of the pursuit of AI, computer design, and all their most interesting offshoots. After assembling this supergroup, he asked them to speak to where we are now vis-à-vis Sir Arthur’s vision. The book collects essays by all these folks, many of which are truly fascinating.
And no, you don’t need to be a supergeek to understand it; Lord knows I’m not, and I think I grokked it. Bet you will too!