Computer Science, Philosophy, & Quantum Physics
Fri, 21st Nov '03, 3:35 am::

(If you're a CS major, you should read the original article that this long blog entry is based on. If that is too confusing, feel free to read my interpretation and extension of it.)

Here's a long article linking computer science, philosophy, and quantum physics, by Jaron Lanier. You probably haven't heard of him (I certainly hadn't) but you've definitely heard one little term he coined in the 1980s: "Virtual Reality." He looks more like an English major with a Philosophy minor than a computer scientist who "co-developed the first implementations of virtual reality applications in surgical simulation, vehicle interior prototyping, virtual sets for television production, and assorted other areas." In this article, he looks at computers from an entirely different angle than the one we are used to. Computer scientists (and in turn the rest of the world) basically think of computers as a bunch of electrical signals being passed over wires at very high speed. At any given time, there's really only one thing happening on your computer. You may think you are reading this website, listening to music, moving your mouse, and chatting online all at the same time, but at the lowest CPU level, only one of these programs runs for a few nanoseconds before the operations for the next piece of software get their turn. The context switch happens so fast that we get the perception that everything is running at the same time, which it isn't. It's just like a film reel running at 30 frames per second: a series of still pictures played just fast enough to give us the perception of motion.
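
(A little aside for the programmers: here's a toy sketch, in Python, of what that constant context switching looks like. It's my own illustration, not anything from Lanier's article; the "programs" are just generators taking turns.)

```python
# A toy illustration of how an OS fakes multitasking: only one "program"
# makes progress at any instant, and rapid switching between them creates
# the illusion of simultaneity.

def music_player():
    for i in range(3):
        yield f"music: decoded audio chunk {i}"

def web_browser():
    for i in range(3):
        yield f"browser: rendered part {i} of the page"

def chat_client():
    for i in range(3):
        yield f"chat: checked for new messages ({i})"

def round_robin(tasks):
    """Run exactly one task's next step at a time, in rotation."""
    while tasks:
        task = tasks.pop(0)
        try:
            print(next(task))   # only this single task runs right now
            tasks.append(task)  # context switch: back into the queue
        except StopIteration:
            pass                # task finished; drop it

round_robin([music_player(), web_browser(), chat_client()])
```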

The real world, of course, doesn't work this way. There isn't really some smallest slice of measurable time (at least none that we can measure with current technology). And in real life things do happen at the same time: you can be driving and drinking coffee while talking on the cell phone and scratching your head. Computers can only fake this kind of multitasking, and that is where the problem lies. Fifty years ago it was very easy to conceptualize computers as simple, straightforward machines that receive input and produce output. This led to immediate implementations of the mathematical model of the Turing machine and the first software. Sadly, that is exactly what we are still doing five decades since ENIAC: giving input and getting output. That, explains Lanier, is why image recognition, voice recognition, video analysis, and almost every other application of artificial intelligence fails to produce smart, intelligent results: the current computer architecture is built to be perfect only under perfect conditions.
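
(Again my own toy model, not Lanier's: the whole "input in, output out" worldview fits in a few lines. This little Turing-style machine walks a tape and flips every bit, deterministically, every single time.)

```python
# A minimal sketch of the input -> output model: a toy Turing-style machine
# that scans a tape of bits and inverts each one. The states and transition
# table are invented purely for illustration.

def run_machine(tape):
    # transition table: (state, symbol) -> (new_symbol, head_move, new_state)
    rules = {
        ("scan", "0"): ("1", +1, "scan"),
        ("scan", "1"): ("0", +1, "scan"),
    }
    tape = list(tape)
    head, state = 0, "scan"
    while head < len(tape):              # halt when the head runs off the tape
        symbol = tape[head]
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape)

print(run_machine("010011"))   # same input, same output, every time: "101100"
```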

Think about it this way. Theoretically, if you move your mouse to the left, the cursor MUST move to the left. There is nothing written in the mouse driver's code that would move your cursor otherwise. But the mouse driver isn't the only process running on your system. Your stupid Word document crashes the system because there was a big photograph in it, and now your mouse is stuck in the right corner of the screen, refusing to move. Theoretically the mouse cursor code did not fail; your operating system did, and in doing so it brought down the perfectly functional cursor code with it. The mouse cursor code is thus written to work perfectly, but only under perfect conditions. The alternative, he offers, is to write individual pieces of code that don't rely on perfect protocols and perfect systems in order to function with high accuracy. In his own words, "Wouldn't it be nicer to have a computer that's almost completely reliable almost all the time, as opposed to one that can be hypothetically perfectly accurate, in some hypothetical ideal world other than our own, but in reality is prone to sudden, unpredictable, and often catastrophic failure in actual use?"
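
(Here's a contrived contrast, my own sketch rather than anything Lanier wrote, between code that is perfect under perfect conditions and code that is merely "almost completely reliable almost all the time.")

```python
# Toy contrast: a brittle handler that assumes perfect input versus a
# tolerant one that degrades gracefully when the rest of the system
# hands it garbage. All names and fields here are invented.

def brittle_cursor_update(event):
    # assumes the event dict is always well formed; one bad event and we crash
    return (event["x"] + event["dx"], event["y"] + event["dy"])

def tolerant_cursor_update(event, last_position=(0, 0)):
    # if the event is malformed, keep the cursor where it was
    # instead of failing outright
    try:
        return (event["x"] + event["dx"], event["y"] + event["dy"])
    except (KeyError, TypeError):
        return last_position

good = {"x": 100, "y": 200, "dx": -5, "dy": 0}
bad = {"x": 100}   # the rest of the system misbehaved

print(tolerant_cursor_update(good))   # (95, 200)
print(tolerant_cursor_update(bad))    # (0, 0) -- degraded, but still running
# brittle_cursor_update(bad) would raise a KeyError and take the "driver" down
```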

Now if you've used a computer for more than a week, you know that computers are NOT perfect in practice. But in theory, the science behind computers is perfect and predictable, mainly because it is built on the logic and functions of mathematics. If you add 2 and 2, you must get 4 under all circumstances; find me a computer on which the Windows calculator gives anything but 4 for 2 + 2. The problem, he says, is that perfection is only relatively easy to achieve at a small scale. Making small programs that work to specification is easy. But a 10-million-line program that analyzes the structure of DNA is never going to be perfect, simply because of its scale. And Microsoft Windows has 50 million lines of code! How can one expect every line to function in tandem with the other 49,999,999 lines?
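
(Some back-of-the-envelope arithmetic of my own, just to make the scale vivid: if any line of a 50-million-line system could in principle interact with any other line, the number of pairings is staggering.)

```python
# Rough illustration of why scale kills perfection: the count of possible
# pairwise interactions among 50 million lines of code (n choose 2).

lines = 50_000_000
pairs = lines * (lines - 1) // 2
print(f"{pairs:.3e} possible pairwise interactions")   # about 1.25e+15
```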

One obvious solution is to write better code and reuse the same code modules. I'm sure the brains at MS thought of that long before I said it, and surely they tried to reuse as much code as possible. Yet they still ended up with 50 million lines. That only means that today's computer technology requires them to write 50 million lines to accomplish what they want: to provide us with an operating system that can play music, burn CDs, run datacenters, operate critical hospital equipment, and let you sell stuff on eBay. The keyword here is "today's," because nothing other than the limits of current technology stops anyone from writing smaller, more efficient code. There is no speed-of-light limit to obey in order to write more compact code, and no mathematical formula which says that accomplishing a 'burn a CD' operation must take 30,000 lines of code. Theoretically, we could design a CD burner that knows everything there is to know about burning a CD, and all we would have to tell it is which songs or files to burn. Instead, we use a full-fledged piece of CD-burning software to help us burn CDs. Then the software fails, the burn process stalls midway, and the CD has to be thrown away. Sure, there is error correction built into the CD burner that will avoid jitter and prevent buffer underruns, but that is a unique solution to a unique problem. According to Lanier, there really should be no need for that kind of error correction. The CD burner should talk to the computer, and as long as the computer manages to say 'hey, I'm all OK' with 99% accuracy, it should go ahead and burn the CD.
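
(If I had to sketch what that 'hey I'm all OK' idea might look like, it would be something like the toy code below. The names report_health and burn are made up for illustration; this is obviously not a real CD-burning API.)

```python
# Toy sketch of a "good enough" handshake: the burner proceeds as long as
# the computer reports itself healthy enough, instead of demanding a
# perfect, fully specified protocol. Names and thresholds are invented.

import random

def report_health():
    """Pretend the computer self-reports how healthy its data stream is."""
    return random.uniform(0.95, 1.0)   # fraction of buffers delivered on time

def burn(tracks, confidence_threshold=0.99):
    confidence = report_health()
    if confidence >= confidence_threshold:
        print(f"health {confidence:.3f} >= {confidence_threshold}: burning {len(tracks)} tracks")
    else:
        print(f"health {confidence:.3f} too low: slow down and retry, don't ruin the disc")

burn(["track1.wav", "track2.wav"])
```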

Yeah, I agree this sounds just as theoretically perfect and practically useless as every marketing campaign for some quasi-revolutionary killer-app released every other day. But it's hard to deny that, given the current state of technological affairs, unless something is done to reduce the complexity of code being written for large projects (think of your utility company, the telephone companies, the stock market, etc.), there is only so much that computer programming will be able to accomplish. Pseudo-smart code can let credit card companies determine whether someone's card might have been used fraudulently, but only if they first write the exact code to catch it. We humans don't learn anything that exactly. When I drive a car, I keep my left hand on the left side of the steering wheel and my right hand at the bottom, even though my driving instructor taught me to put both hands in the 10-2 position. With him as my instructor (programmer), I learnt efficient driving but made my own adjustments to function better. Given the current logic circuits and architecture of computers, it's almost impossible to build an artificial intelligence system that can adapt to the world the way humans do. That is why we don't have talking robots and flying cars yet: it is simply not possible to design complex, adaptive systems out of nothing but zeros and ones. We've gone about as far as we can using fuzzy logic to mimic true quantum states. If there's ever a next stage in computers, if computer scientists ever want to break through the bounds of for-loops, return-values, and type-casts, they'll have to think of computers VERY differently from the way they do today.
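
(This is what I mean by having to write the exact code: a hand-written rule set for flagging card transactions, with every field and threshold invented by me for illustration. Anything not spelled out in these rules sails straight through.)

```python
# Toy rule-based fraud check: the program only "knows" the patterns that
# someone explicitly wrote down. Fields, merchants, and thresholds are
# all hypothetical.

def looks_fraudulent(txn, recent_countries):
    rules = [
        txn["amount"] > 5000,                      # unusually large purchase
        txn["country"] not in recent_countries,    # sudden change of location
        txn["merchant"] in {"wire-transfer", "gift-cards"} and txn["hour"] < 5,
    ]
    # any fraud pattern not captured by these three lines goes undetected
    return any(rules)

txn = {"amount": 7200, "country": "RU", "merchant": "electronics", "hour": 3}
print(looks_fraudulent(txn, recent_countries={"US"}))   # True
```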

How differently, you ask? Think about maglev trains and dog-pulled sleds. Both do the same thing, take you from point A to point B, but on entirely different levels. Today's technology is the bullet-speed maglev train that can get you somewhere incredibly fast, as long as the electricity is running, the magnetic track is well maintained, and the passenger load is within specifications. The pack of huskies pulling a sled, on the other hand, may be fifty times slower, but they will easily walk around a big rock on icy terrain without being reined to do so. Today's best computer software fails to achieve that. Think about it: as breathtaking as it was, the Mars Pathfinder rover was barely able to move around the mildly rocky Martian surface on its own. A two-year-old child can run around faster and better. Why? Because the child's brain has billions of neurons and trillions of connections all working at the same time, unlike the CPU of the robot, which, no matter HOW fast, will always perform one instruction at a time.
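
(One more toy sketch of my own to drive the point home: even when software simulates lots of 'neurons', a single CPU still updates them strictly one after another inside a loop.)

```python
# Simulating "parallel" neurons on a sequential machine: every update below
# happens one at a time, in order, no matter how parallel it looks on paper.

neurons = [0.0] * 1000
inputs = [0.5] * 1000

for step in range(3):
    for i in range(len(neurons)):   # strictly one neuron update per iteration
        neurons[i] = 0.9 * neurons[i] + 0.1 * inputs[i]

print(f"after 3 steps, neuron 0 = {neurons[0]:.3f}")   # every update ran serially
```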

That is why Kasparov can still hold his own against the machines: he can think about ten moves at the same time while recalling fifty different positions from past games, whereas the computer can only consider one move and one layout at a time, albeit a billion times a second. And a billion is still nowhere near 10 to the power of 50. Someday the computer will be fast enough, sure, but it still won't be able to laugh at a blonde joke. That is, if computers and software keep progressing in the direction they currently are (and have been for fifty years). What is needed is an entirely different perspective on algorithms to get to the next generation; otherwise we'll have to stick to feeling 'intelligent' for writing software that opens doors for cats.
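
(The quick arithmetic, using the 10^50 figure from above: even at a billion positions per second, brute force is hopeless.)

```python
# How long would it take to grind through 10^50 chess positions at a
# billion positions per second? (10^50 is the figure used in this post.)

positions = 10 ** 50
per_second = 10 ** 9                 # a billion positions per second
seconds_per_year = 60 * 60 * 24 * 365

years = positions / per_second / seconds_per_year
print(f"about {years:.1e} years")    # on the order of 10^33 years
```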

In addition to the current state of computer science, Lanier also connects computers with philosophical ideas, like the existence of objects, something Peter Unger did in a few of his papers. At the moment, I am obviously unqualified to analyze Lanier's philosophical theories. Hopefully someday I will be qualified enough.

