Shrink Rap Radio #126

Shrink Rap Radio Live #3, ''Artificial Life and Artificial Intelligence''
December 16th, 2007

Transcribed by Jason Howard.

  • Introduction
  • Logical Atomism, the Anthropomorphic Divide and Intelligence
  • Cognitive Simulation and Early Intelligence
  • The Singularity and Machine Intelligence
  • Measuring Machine Intelligence
  • Power of Computing
  • Living in the Machine
  • Action, Free Will and the Private Mind


    Van Nuys: Welcome to Shrink Rap Radio, the planet's premier psychology podcast. This is your host, Dr. Dave, coming to you from the San Francisco Bay area. What follows is the third episode of Shrink Rap Radio Live. It took place on Sunday morning, December 16th, 2007, with my co-host Jerry Trumbule in Denver, Colorado and our guest, Tom Barbalet, who called in from Las Vegas, Nevada. Tom Barbalet is originally from Australia and as a teenager had become very interested in the concept of artificial life, which refers to the notion of creating a whole microecology of virtual beings within the computer. It's a form of computer simulation and Tom is among the pioneers to experiment with this sort of thing. My understanding of artificial life is that the designer/programmer sets up a set of rules or evolutionary principles in a computer-generated world and then sees what consequences evolve. Ideally, as in the real world, more complex virtual organisms may develop as a result of competition for resources and interbreeding, and unexpected things may happen. Tom's Noble Ape Simulation was among the first and has garnered a world-wide following and put him in touch with many of the leading figures in the field. In addition to being a regular Shrink Rap Radio listener, Tom is also a prolific blogger and podcaster in his own right. In the show notes, you'll find links to his podcast, as well as a number of other links mentioned in our discussion. What's all this got to do with psychology, you might ask? Well, there are lots of connections. Computers are evolving as thinking machines that amplify our own thought processes. Robotic toys and pets are likely to be our future friends to some degree, and to the extent that we can simulate evolution and consciousness, we will come to understand both better.

    Barbalet: Dr. Dave, Jerry, good to talk to you both.

    Van Nuys: Excellent.

    Trumbule: Hey Tom.

    Van Nuys: We're so glad that you're here. I also want to acknowledge that Anne, otherwise known as Anne the Man, in the chat room. Tiger Lily is asking if we have done any psychology focusing on only children and the only child as an adult. No, we have not. That's a good topic suggestion and it's something we can look at in the future. But I want to take full advantage of the fact that we have our guest here, Tom Barbalet. I'm going to bring him on, and as Jerry was giving a little bit of an intro -- and Tom, you can tell us more about your background. You're a podcaster and a blogger and I know that you've been very involved in the whole realm of artificial intelligence and artificial life. Maybe we should start off by having you introduce yourself a bit and telling us what artificial life is.

    Logical Atomism, the Anthropomorphic Divide and Intelligence

    Barbalet: Well, to go full-circle from the end of our discussion last week when we were talking about ideas coming in -- sense data coming in -- and this idea of the one. My background -- I started programming at a very early age. My interest with regards to programming was to remove myself from my age in some regard. To be able to develop software and technology where people didn't know that I was in my early teens or my mid teens. Towards my late teens I started to realize that I'd developed all these different bits of software that did various things -- virtual reality games, language analysis, all this kind of stuff -- but it didn't have a focal point. I studied philosophy and physics at university. In my philosophy class in particular I was exposed to the ideas of logical atomism, a movement that was started by Bertrand Russell around the turn of the last century. The idea of logical atomism is that we are constantly getting sense data from a wide variety of sources. This kind of atomization of the information we are receiving gelled very heavily with me. So, using this in practical terms, if you think of sense data like matchsticks for example, I'm currently picking up matchsticks and then feeding matchsticks out through this podcast to Anne in Israel and folks that are listening to this in non-real time. All these kinds of things seem very powerful in terms of analyzing the difference between the external world and the internal world which is, I guess, fundamentally what Shrink Rap Radio is about as well. So from this idea I thought that I needed to create two kinds of simulation. I needed to create a simulation that represented the external world -- that simulated, as close as I could get with computers at the time, something that could approximate an idealistic, utopian real world, if such a thing could exist. And also simulate the internal workings of these ape-like creatures, the Noble Apes, which is what the project ended up being called.
So, from this idea of two kinds of simulation and tying it initially to Bertrand Russell's idea of sense data, a lot of stuff has been done in the past twelve years with regards to Noble Ape. Rather than giving a blow-by-blow, we end up currently with Apple Computer and Intel using Noble Ape to tweak and test various aspects of their systems. I'm developing it in different directions as well. I have a friend at NASA, Bruce Damer, who talks about Noble Apes exploring space. So you get this amazing linking of, on one side, psychology, and on another side, biology, and on another side, physics and mathematics and philosophy, and they all Venn together in artificial life.

    Van Nuys: OK. Now, we're getting a question on the chat here. Are you originally Australian, Tom?

    Barbalet: (laughter) Yes. That's the important thing. I started developing it when I was in Australia.

    Van Nuys: OK.

    Barbalet: However, through Noble Ape, I've been able to travel quite extensively. I launched the project officially in Malaysia. This is another interesting point that we may talk about later. In Malaysia I was exposed to large groups of wild monkeys, and also large groups of feral cats. I know, Dr. Dave, you're a fan of the Sopranos and gangster movies and these kinds of things.

    Van Nuys: True enough.

    Barbalet: If you observe feral cats in the wild, you see sit-downs as you would see in mobster movies. They come together. The main feral cats will move in and they're flanked by their lieutenants. They'll sit and have what appear, from a distance, to be intimate discussions. What fascinated me observing feral monkeys as well was that they were very protective of their families. Where my mother was living -- literally a street over -- they were building in a forest that had previously been the sole domain of these monkeys. They would come down -- the male monkeys in particular would run amok, which is a beautiful Malay word -- and cause all kinds of havoc protecting their family groups. I'd always had questions with regards to the anthropomorphic divide, but this struck me as a fundamental breakdown in the anthropomorphic divide. For people not familiar with the anthropomorphic divide, I guess Disney and Beatrix Potter and all these kinds of things are great examples of a popular anthropomorphism where animals talk and interact and behave like humans. But if you talk about serious biology -- and to a certain extent I believe psychology as well, Dr. Dave -- there is a hard division that some academics would like to make between the human mind and the animal mind. This is a division which I saw immediately break away in observing large animal populations in their natural environment.

    Van Nuys: Yes, and I think that position has been softening somewhat in recent years. I know that when Jerry and I first got into psychology -- Jerry, you can comment on this -- there were huge warnings against anthropomorphizing the animal mind. It seems to me that that's softened. Is that your impression, Jerry?

    Trumbule: Yeah. It used to be a sin in experimental psychology to impart any thought or consideration to the animal. You couldn't say, ''Well, the animal thinks that the reward is over here.'' You just had to look at behavior. Of course, I was coming from a Skinnerian background.

    Van Nuys: So, Tom, what would we see if we were looking at your Noble Ape program that you created for the computer? Would we see little figures of monkeys running around, or does this purely have some kind of mathematical representation?

    Barbalet: Well, I'm working on the figures of monkeys currently. In fact, ever since I started developing Noble Ape I've been working on the figures of monkeys. I've always liked the idea of the Noble Ape more in a philosophical sense than in a physical sense. Although I've mapped it down to a physical reality at a number of points in the simulation, I think it defeats the broader methodology. Now, I have friends in the 3D graphics community and the gaming communities that would like to see Noble Apes rendered in their full glory. But what your audience probably should appreciate is that Noble Ape is a completely amateur and privately funded development in that regard. The amount of time needed to create 3D models of apes and cats and trees and these kinds of things takes away from some of the other aspects of the development that I'm more heavily focused on. I have an interesting point. I'm also the editor of Biota, which is an artificial life forum. We do bring in biologists and paleontologists and things like that. There was a podcast we did recently about whether artificial life could explain the Cambrian explosion. If we return to the idea of the mind and anthropomorphism, once you eliminate the anthropomorphic divide, you need to start asking the questions: where does intelligence begin, where does the mind follow from intelligence, and do all forms of life have intelligence? I think this is a broader question in artificial life as well. What was fascinating in that podcast in particular was that we had on Roy Plotnick, who is a paleobiologist. He described the movement slightly before the start of the Cambrian period where these little creatures that were originally just kind of floating around started getting ideas about -- sorry, I'm using the worst possible kind of anthropomorphic language. They discovered feeding grounds and then they started changing their movement based on the feeding grounds.
Obviously it was a positive reinforcement loop because the feeding grounds then gave them nutrition which meant that they could improve. This is the Cambrian period fundamentally. Now, somewhere between those little floating things that looked a bit like floating mattresses that you put in swimming pools through to humans, intelligence and the mind evolved. But as Dr. Dave would say, I think this is an important question that merits further investigation about where this actually starts to exist. Even if you look at creatures like bees for example. They've recently discovered that bees have a dancing mechanism that shows where the food is. They send out scout bees, they come back to the hive, and they do quite an elaborate dance that explains to the other bees where this food is and what direction and distance they need to take. That's really the basis of language in some regard.
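
The feeding-ground loop Barbalet describes -- initially random movement that gets reinforced when it pays off in food -- can be sketched in a few lines. Everything here (the circular one-dimensional world, the feeding-ground positions, the 0.1 reinforcement increment) is an illustrative assumption, not anything from the Noble Ape code:

```python
import random

# Hypothetical sketch: an agent drifts on a circular line with a few
# fixed feeding grounds. Feeding reinforces whichever direction the
# agent last moved, so directed movement can emerge from a random walk.
FEEDING_GROUNDS = {20, 50, 80}   # positions with food (assumed values)

def run_agent(steps=1000, seed=0):
    rng = random.Random(seed)
    pos = 0
    bias = {+1: 1.0, -1: 1.0}    # preference weights for each direction
    meals = 0
    for _ in range(steps):
        total = bias[+1] + bias[-1]
        step = +1 if rng.random() < bias[+1] / total else -1
        pos = (pos + step) % 100
        if pos in FEEDING_GROUNDS:
            meals += 1
            bias[step] += 0.1    # positive reinforcement of the last move
    return meals, bias

meals, bias = run_agent()
```

Each meal nudges the weight of the direction that produced it, which is the positive reinforcement loop in miniature: movement that finds food makes similar movement more likely.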

    Cognitive Simulation and Early Intelligence

    Trumbule: Tom, if I may, I'd like to back up a second. I went to your site, downloaded the program, and unzipped the files so I could try to get a feel for what it is that you're doing. I saw three of the screens. One was a broad map, one was a close-up map, and the third one was the brain activity. I couldn't get a handle on that brain activity window. I wonder if you could explain that a little bit.

    Barbalet: Well, when I started developing Noble Ape, in terms of the internal simulation -- the brain simulation -- I thought there were two fundamentals that I could start mathematically modeling. The first is fear. This is instantaneous fear. This is the kind of fear where, if you've ever been struck by a car, or shocked -- to capture that emotion in real-world terms. The second thing that I wanted to mathematically model was this idea of desire. That we have these things in the future that we are looking forward to or planning for and these kinds of things. I thought that I could mathematically model both of those. So what you see in the brain is actually these two things competing in a mathematical sense. The brain, in fact, is a third thing. If you're talking about a kind of dimensionality argument, you have fear, you have desire, you have incoming sense data that maps onto those two things, and then you have the brain, which is the resolution of this. What you see in the brain currently is actually the change over time as opposed to any of the underlying detail. I think, from what Dr. Dave has told me, this is fundamentally Jungian as well. That the ability to actually interpret the brain information is not primary but is in fact secondary or tertiary, and some of it cannot be intuited in time snapshots or even over time. I think this was an interesting thing that came out of the initial brain simulation, and I've stuck to my guns with regards to the cognitive simulation that I initially developed. There are other people who like removing that and putting their own program simulation bubbles in and seeing what happens. That ability is available through Noble Ape as well.
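
The architecture Barbalet outlines -- instantaneous fear, slower-moving desire, sense data feeding both, and a brain value that is the resolution of the contest -- can be sketched minimally. The decay constants and the subtraction used for the resolution are assumptions for illustration; the actual Noble Ape equations are not given in the interview:

```python
# Hypothetical sketch of two competing drives resolved into a brain value.
# Sense data excites fear quickly and feeds desire slowly; the "brain"
# is the resolution of the contest between them.

def update_brain(fear, desire, sense):
    fear = 0.5 * fear + 0.5 * sense          # fast, instantaneous response
    desire = 0.95 * desire + 0.05 * sense    # slow, anticipatory response
    brain = desire - fear                    # resolution of the contest
    return fear, desire, brain

fear, desire = 0.0, 0.0
trace = []
for sense in [0.0, 1.0, 1.0, 0.0, 0.0, 0.0]:  # a brief shock, then quiet
    fear, desire, brain = update_brain(fear, desire, sense)
    trace.append(brain)
```

Running a brief ''shock'' through it shows fear spiking immediately and desire lagging behind, and what the trace records is the change over time rather than any underlying detail, much as Barbalet describes the brain window.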

    Trumbule: OK. I think I understand a little bit better now. But getting back to what you were talking about earlier, where do you see this development of consciousness? What is your current opinion of that in terms of the phylogenetic scale?

    Barbalet: I think it's a very interesting question. I'm in no way an expert with regards to this, and this is why I always like the fact that through Biota I have primary contact with embryologists and paleobiologists and people who are really at the cutting edge of this research in reality. My interest is with regards to simulation. What interests me looking at the biology and looking, as you say, at these early processes, is the computational modeling of that. Popular questions like machine intelligence and whether machines can think -- these kinds of issues come from that. What we're discussing with regards to where it evolved, it's a question in reality that I would like to leave with the paleobiological and embryological communities because I think they're the ones making the breakthroughs in those fields.

    Trumbule: OK. So you're dodging the question.

    Barbalet: I'm not dodging the question. I've read a lot of this recently because I'm working on a chapter of a book with a fellow. What I'm finding, reading it, is that we are at the point where a breakthrough could occur in the next 6-10 years. So, even though podcasts date quickly, I feel there is so much research of interest currently.

    Trumbule: Where would we look for that breakthrough? Where would you expect to see it?

    Barbalet: Well, there are two modes of thought here. The first is that we are all encoded with the answer. On one dimension is the idea of the human genome. By following the human genome there are points of adaptation that could be pegged to specific changes, which could then be linked to ideas of cognition. This is a kind of third-step-removed process. However, there is another view that, purely through the fossil record, and by interpreting and modeling -- ultimately it butts back into the simulation community, which is why these researchers have such a connection with artificial life -- just looking at the way these creatures moved and interacted could also give the answer.

    Van Nuys: Yeah. We have a question from the chat room where Anne -- also known as Anne the Man -- is asking, ''Is consciousness and intelligence a term in biology at all?'' In other words, do people in biology even think in terms of consciousness?

    Barbalet: I think so. When they publish on consciousness they certainly talk candidly about it. Intelligence, probably -- and certainly in the conversation with Roy Plotnick he was very careful not to use terms like consciousness with regards to these early feeding patterns. But I don't know. I think it certainly is discussed privately, and my surveying of the literature doesn't immediately suggest that it's used in the literature as much as these researchers talk about it privately.

    The Singularity and Machine Intelligence

    Van Nuys: One of the things I was hoping we could touch upon is something that's been referred to as the singularity.

    Barbalet: Ah, yes.

    Van Nuys: I see you're familiar with this concept.

    Trumbule: Ah, yes.

    Van Nuys: For our listeners I'll give my sense of it, and then you guys can sharpen it up if I don't have it quite right. I believe the singularity is referring to the idea that computing power, and therefore artificial intelligence, will come to a place where it will make such a qualitative leap that it will kind of leave us in the dust. That something new in the universe will emerge in terms of a super-intelligence. Do either one of you want to refine what I've just said about it?

    Barbalet: I'll let Jerry go first because I have very strong opinions with regards to it.

    Trumbule: Yes. Well, I can expose my ignorance, I guess. I've been interested in this idea for some time. When I first got into computers, which was back in 1980, I thought, ''Wow, this is eventually all going to hook up. We think of ourselves as neurons and pretty soon we'll be relating to each other in new and exciting ways.'' This emergence of some kind of new behavior. Then I saw that idea developed in books and magazines. In particular there was an article in 2005 in Wired magazine. It pretty well encapsulated the whole thing. The idea was that if you look at the world wide web as a collection of neurons, each computer being a super-neuron with thousands of millions of transistors in it, we're starting to approach the capacity of the human brain if we look at this giant global brain. Now, with grid computing and some things like that coming around, it looks like we're sort of inching towards this singularity event when we wake up one morning, turn our computers on, and find a new message to us from our big brain.

    Van Nuys: Right. Certainly there have been a lot of science fiction movies that present that as kind of a scary option. Tom, you say you have strong opinions on the subject?

    Barbalet: I have very strong opinions on the subject. This is another aspect of the stuff that I do through Biota. This ultimately links with Noble Ape, so I'm going to give a bit of a longer answer to this. When I started Noble Ape, it was based on an argument I had with a fellow student about the ability of computers to think and simulate and form what is ultimately described in some regard in the singularity movement. Ironically, whilst I kind of pottered along -- went to the Bay Area, went to the UK, what have you -- this fellow followed his academic strain and is now an academic at Oxford University. One of his co-academics is a fellow by the name of Nick Bostrom who is very big in the singularity movement, probably second only to Kurzweil in terms of his publications. This fellow James Morauta passed me Nick Bostrom's paper probably four or five months ago with regards to the question of whether we would ever live in a computer simulation. Now, this takes the idea of machine intelligence; it takes the idea of the singularity; it takes aspects of artificial life, and it ultimately puts it together in a single paper. What I found fascinating reading the paper was that they had no primary contact with simulators. So when I approach the idea of machine intelligence in particular, I always approach it with the view of the anthropomorphic divide. In some regard, people have this view that animals are not capable of thinking or having a mind -- maybe this has been softening in recent years -- and feelings with regards to computers are very similar. There seems to be a claim made -- almost a truism -- that the experience one has with contemporary computing indicates that machines won't think. What strikes me from that is that our experiences with regards to computing, and the example provided with regards to opening an email client and seeing a message and these kinds of things, are all the very low end of processing power.
We underutilize processing power considerably. Looking through this argument, there were two things that came through. Firstly, a naivety -- and this is a naivety with regards to the singularity movement -- in terms of what processing power is, how it operates, and the immense power of processing that we currently have. So these people that claim that this will happen in the future are underestimating contemporary computing. The other idea was a very naive view of what a simulation is. My feeling, having tinkered with simulations for more than a decade, is that you can look at the legal system, the financial system, even things like the road system as being simulations in their own right. Our ability to interact and be entities in a far broader scale of things seems to replicate what I'm seeing in the Noble Ape Simulation with Noble Apes wandering around -- only obviously we are biological entities. So from that thinking -- and I've written quite a bit in recent weeks -- the idea came through that perhaps we've already passed the singularity. Perhaps the events being described with regards to computation have already occurred and we are just unaware of them. The discussion that I passed back to James Morauta and back to Nick Bostrom related to the fact that all these events they were claiming would happen in the future have demonstrably already happened.

    Van Nuys: What would be an example of that?

    Barbalet: A good example of that is the description that you gave with regards to the use of email communication. The way in which people interact with machines currently. The question of machine intelligence seems to presuppose that the machines are going to, as you say, become some superhuman intelligent entity. If you look, for example, at the horse and the car. When people talk about machine intelligence, particularly with regards to replicating the human mind, I always think of a group of horses sitting in a field looking out over a freeway. One horse says to another horse, ''Surely they can move quickly, but they'll never actually be horses.'' The irony is that we use horsepower to describe what the car is.

    Van Nuys: I was thinking about that very thing. That we're still using horsepower in this modern age. When you're looking at a new car it's still described in terms of horsepower.

    Barbalet: Exactly. And here's another interesting thing. When Apple and Intel use Noble Ape, the metric that they use to test their processing power and how they're optimizing is ape-brain cycles per second. That is the metric that they use. The simulated ape brain. The number of those cycles they can get per second out of the machine. I think there's some disconnect. I think this may be linguistic, this may be philosophical, this may even be psychological with regards to ideas of machine intelligence and our own intelligence.
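
The ''ape-brain cycles per second'' figure is a throughput metric: run the brain update in a loop for a fixed wall-clock window and count completed cycles. A sketch with a stand-in update function (the real Noble Ape kernel is, of course, far more involved, and the state size and arithmetic here are assumptions):

```python
import time

def brain_cycle(state):
    # stand-in for one full simulated-brain update
    return [(0.9 * v + 0.1) % 1.0 for v in state]

def cycles_per_second(window=0.25):
    # run the update repeatedly for a fixed wall-clock window,
    # then report completed cycles per second
    state = [0.0] * 1024
    cycles = 0
    deadline = time.perf_counter() + window
    while time.perf_counter() < deadline:
        state = brain_cycle(state)
        cycles += 1
    return cycles / window

rate = cycles_per_second()
```

A benchmark like this measures how well a compiler or processor optimizes one fixed, realistic workload, which is presumably why it is useful for tuning systems.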

    Van Nuys: Well, Tom, I'm intrigued by your notion that the revolution may have already happened without us noticing it. I often reflect that we are living in a reality that not so long ago would have seemed like science fiction. For example, I'm thinking of my iPhone, or cell phones generally. And even what we're doing right now.

    Barbalet: Very much so.

    Van Nuys: The way that we're having this conversation. These things are extensions of our intelligence. They're tools that extend our capabilities and one of the remarkable things is how quickly we as humans adapt to these technologies and they become ho-hum and ordinary. So we don't see it as revolutionary. The revolutionary feeling -- the feeling of breakthrough -- lasts about a week. Then we're, ''OK. What can you do for me now?''

    Barbalet: If you look at the aspect of the dissolving of history that is supposed to occur after the singularity, I think this has already occurred. I started deconstructing the whole singularity movement, having had this correspondence with these academics at Oxford. The more that I read, the more that I realized that it was actually their misunderstanding with regards to the technology that was causing them to push this date into the future, as opposed to an understanding of the deeper underlying technology with regards to computation and simulation which actually pushes the date -- if the date exists, I think it's more a smearing -- into the past.

    Van Nuys: Were you able to persuade anybody to your point of view?

    Barbalet: It's a current project so I'm introducing it to the listeners of Shrink Rap Radio before I'm introducing it formally to the academic community. So you're hearing it for the first time. I'm more than willing to accept all correspondence with regards to this. tom at if you want to shoot me an email. I think it's a highly plausible argument. Also -- and this comes to the idea of Biota as an inclusive movement -- I'm always concerned by these movements that are in some regard exclusive and also try to make claims with regards to things that they're not actually delivering upon. That's another secondary concern I have with regards to the singularity movement and a number of other movements that you would think should be on the cutting edge of artificial intelligence or on the cutting edge of artificial life, and are in fact just producing populist directions that don't really communicate any of the wonder or any of the brilliance that is coming out of these fields currently.

    Measuring Machine Intelligence

    Van Nuys: I'm under the impression that artificial intelligence has been very successful in very narrow domains. For example, developing software that would optimize a complex shipping problem, let's say of organizing stock and shipments to go out, and so on. But it's been disappointing as far as creating anything that would work like a human brain in terms of having more general intelligence. Is that something you can comment on?

    Barbalet: Well, I have to respectfully disagree with you, and I would like to point your listeners to the Talking Robots podcast, which is coming out of Switzerland. In terms of artificial intelligence and robotics in particular it shows the cutting edge of that field. I think the problem here -- and this is returning to a comment I made a little earlier -- is that our own experiences with regards to computers as generalists are typically very poor. But look at the cutting edge research -- the Roomba, the iRobot technology. A fellow by the name of Rodney Brooks, who skirted artificial intelligence and artificial life for a number of years, is the CEO of that company and I think also still at MIT. There are a lot of practical robots that are coming out currently. I think the question of what machine intelligence is versus what human intelligence is needs to be specified with regards to domains: where is human intelligence still superior, and are you looking for esoteric qualities in artificial intelligence? That's really my concern with regards to the generalist debate currently. People would ideally look for an exact replication of the human, but as the horse-car analogy shows, the robotics and artificial intelligence communities are looking for applied solutions currently.

    Van Nuys: I know that, for me, one place where this impacted my life is in the realm of chess. Since computers have become such excellent chess players -- that's a narrow domain, but one in which computers really excel -- I don't know if the question has been totally resolved now whether or not computers can outdo grandmaster chess players. It seems to me, the last I heard, maybe they're kind of tied and it's not totally clear. But it's certainly clear that computers are better at chess than I am, and it's had an interesting psychological effect on me, which has been to reduce my interest in chess.

    Barbalet: (laughter) This is a fascinating thing. When you start looking at machine intelligence, the well-quoted test for machine intelligence was always the Turing Test. This is a situation where you're on one end of the phone, and on the other end of the phone there is a computer with a voice that sounds human, and a human. In order for the Turing Test to succeed, you need to be in a position where you can't determine which one is the computer and which one is the human. Ironically, if you call your bank, and if you stay on hold for 30 minutes or so, and then you speak to someone who you assume is a human, you may have trouble with this person. They're reciting something which doesn't seem to be answering what you're saying, and then you ask to speak to the manager, and you have a similar experience. My feeling is that in contemporary life we've been able to find counterexamples of the Turing Test already in terms of our ability to interact and our ability to determine whether we're dealing with humans. Now, in some cases, you're not dealing with humans anymore when you call banks and things like that. I think these counterexamples to the Turing Test mean we need to find ideas of machine intelligence, or comparison tests, which are more valid. Now the chess one is a good example, because what you start exploring once you get past the Turing Test is that there are very few measures of intelligence you could formulate which actually put humans ahead.

    Van Nuys: Could you say that again?

    Barbalet: If you want quantifiable measures of intelligence, other than the Turing Test, it's very difficult to find ones in contemporary computing that will put the human ahead. In double blind things, where you're not relying on, obviously, situations where you need hands or something that is intimately connected with being a human.

    Van Nuys: OK. Jerry, any thoughts or comments about that?

    Trumbule: Well, I was just thinking, that might explain some of the difficulties I've been having with my bank recently.

    Barbalet: Very much so. I love using that example because the Turing Test is used by classicists as the be-all and end-all with regards to human and machine intelligence. I think we need a better scale.

    Trumbule: From my point of view, the other side of the coin is what has happened to my brain since I wired into the Internet. That was early on. Now I feel like a major portion of my database is in the computer. When the computer's not on, I'm going like, ''Oh, yeah. I have to write that down to remember to ask the computer.'' Mostly, of course, through Google and Wikipedia and things like that. In some sense, while I feel like I'm keeping my brain active, I'm not sure that I'm building up my own database.

    Van Nuys: Yeah. Anne is asking -- in terms of your assertion, Tom, that computers excel in just about every realm -- he raises a question, ''What about speech recognition? What about image recognition?''

    Barbalet: I think they're pretty good at image recognition. The problem with speech recognition is, I think, part of the nature of the research. It is improving. As people immediately alluded to, I come from Australia. (laughter) You folks come from the US. I constantly do a juggling in my mind to translate what would be colloquial terms or just phraseology and words. So this is the difficulty. Because we've been able to live in our little villages and communicate amongst ourselves for thousands of years, we've created something which -- this is a good point -- is of a level of complexity where you would need international linguists, a wide variety of intellectuals, to participate in order to resolve the current issues. So, yes, speech recognition is a good one, but I think it's actually more a problem of the intellectual community pooling resources than it is of fundamental computing power.

    Van Nuys: Yeah. And speech recognition is improving by leaps and bounds, I must say. People are using Dragon NaturallySpeaking on a PC, where years ago you would have needed a whole roomful of computers to achieve what people can achieve on just a generic PC that they would pick up at a warehouse store. Authors have claimed to have written entire books using just Dragon NaturallySpeaking. I know that when I had a different cell phone carrier -- I think it was Sprint -- they had very good voice recognition where I didn't even have to train it. I thought, my goodness! You know? It's able to handle regional differences and so on, I guess. I thought it was pretty remarkable that I didn't even have to train their speech recognition system: I could say, ''Call Jerry Trumbule,'' and it would recognize it the first time out if I had Jerry Trumbule in my address book.

    Barbalet: There's also not a lot of processing power in a cell phone, although they're increasingly becoming like supercomputers. When you use a cell phone, it's amazing how little processing power you actually need to do single-person speech recognition. The harder case is what Anne has pointed out: multi-region, multi-dialect, all of these kinds of things. So this is a complexity problem, in some regard, that academics can solve. There is a difference between industry and academia, and in artificial life there is also a hobbyist community as well. We are all advancing in different areas. It's getting that intercommunication that's really critical.

    Power of Computing

    Trumbule: Tom, I wanted to ask about grid computing and how it relates to your thoughts on the singularity.

    Van Nuys: Jerry, would you say what you mean by grid computing before he answers?

    Trumbule: Yeah. Well, as I understand it, and I've participated in a couple of these, you make a choice to turn your computer over to some larger system, install a little software, and then when you're not using your computing power, this larger system takes over and uses your CPU as part of a large grid of CPUs which are all working on a massive problem.
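    [Editor's note: the work-unit loop Jerry describes can be sketched in a few lines. This is a hypothetical illustration, not any real grid client's API -- all of the names and the toy "computation" are invented for the example.]

    ```python
    # Hypothetical sketch of a volunteer-computing client: a coordinator
    # hands out independent work units, idle machines process them, and
    # results flow back to the central server.

    WORK_QUEUE = [list(range(10)), list(range(10, 20))]  # stand-in for server-issued units
    RESULTS = []  # stand-in for the central server's collected results

    def machine_is_idle():
        # Real clients check CPU load or screensaver state; here we always say yes.
        return True

    def fetch_work_unit():
        # Real clients download a unit over the network; here we pop a local queue.
        return WORK_QUEUE.pop(0) if WORK_QUEUE else None

    def process(unit):
        # Stand-in for the real computation (e.g. scoring one protein fold).
        return sum(x * x for x in unit)

    def report(result):
        RESULTS.append(result)

    # The client loop: only consume cycles when the owner isn't using them.
    while True:
        unit = fetch_work_unit()
        if unit is None:
            break  # no more work from the coordinator
        if machine_is_idle():
            report(process(unit))

    print(RESULTS)
    ```

    The essential property is that the units are independent, so thousands of donated machines can each take one without coordinating with each other.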

    Van Nuys: Oh, yeah. That's like the SETI project -- the Search for Extra-Terrestrial Intelligence -- where you can get a screensaver that uses your unused cycles to analyze radio telescope data for signs of extraterrestrial intelligence.

    Trumbule: Well, there's another one I wanted to mention which I signed up for that has to do with protein folding -- trying to figure out where complex proteins are going to bend. That was of interest for a while, but then there were some rumors about these kinds of programs being used to bring viruses into your individual computer, and I eventually got off of both of them.

    Barbalet: I want to address that first. The protocols used for these kinds of projects -- the way the information you're moving to the central points is described -- are pretty much standardized currently, so I think there may have been some misinformation in that, in some regard. But it's certainly something I've participated in with regards to SETI, and there's always discussion of artificial life developers mirroring what SETI did, particularly the screensaver component. But I want to talk a little bit more about this idea of computation and the amount of computing power that is required to solve particular problems. When I started developing Noble Ape, I started developing it on a machine that was running at, I think, around 6.8 or 8 MHz. I'm now running it on multi-core machines that are thousands of times faster, if not -- well, considerably faster. It's very difficult to quantify with contemporary processors because they diverge in all different directions. The interesting part of a lot of these kinds of problems is optimization. If you look at optimizing for particular processors, and if you look at distribution and all these kinds of things, you can get a lot of additional power through looking at how you're pushing the information around. Now, relating it back to the singularity problem -- or movement, or whatever you want to call it -- my feeling is that the issue is what processing you do, not how the processing is actually being done. I think contemporary computing -- what Intel and AMD are doing with multi-core processors, ideas of atomizing simulations and distributing them over networks -- is just delivering a different magnitude of power. However, on a desktop computer, if you have the time, the cycles are there to solve a wide variety of computational problems as well.
I'm not really -- because I'm a kind of post-singularist, I can't see grid computing as leading towards something. It's just a very productive way of using what I've described as the processing down-time that we all have with regards to the computers we use for email and things like that.

    Living in the Machine

    Trumbule: And let me jump to a different question. If the singularity has already happened -- I can understand that point of view -- what about this concept that at some point we would be able to upload our brains into the web and then we would live on forever as a virtual entity?

    Barbalet: You walked into this perfectly, Jerry, because you've already described that you maintain half your brain on the Internet currently. What is interesting with regards to this question is the idea of: are we the same person now that we were 10 years ago or 15 years ago? The mathematician who came out of San Jose State, Rudy Rucker, a science fiction author, has a book called ''The Lifebox, the Seashell, and the Soul''.

    Trumbule: Tom, could you give that name again please?

    Barbalet: Rudy Rucker.

    Van Nuys: Yeah. I've read a couple of his books.

    Barbalet: Yeah. So he's a science fiction author and he also writes popular science. He's a mathematician. He was big in the cyberpunk movement. But his most recent book, in almost a kind of project discussion as opposed to a traditional argument, lays this out. My concern with that has always been the way that people change over time. If it were possible to upload me, I would like to upload myself at various time intervals. I think each iteration would be fundamentally different. It would actually be quite cute for future generations to watch the various iterations of me arguing with the other iterations of me. So I think about the idea of a continuum with regards to how we develop -- I mean, as people get older, their political views change, for example. This is a good example. Perhaps a radically leftist youth will retire and take on a center-right position. They hold their views a particular way at various points in their life. Now, one could say that this is an evolution of thought -- they collect more information, and all these things accumulate into what they are at the end, and what we need to store is the person at the end. But my concern is always with that evolution over time.

    Trumbule: The youthful brain as opposed to the senile brain. (laughter)

    Van Nuys: Yeah. (laughter)

    Barbalet: Well, it's not just that. After people have had major life events and major experiences they tend to have some resounding effect. I mean, I've had experiences throughout my life which have changed my perspective with regards to certain things diametrically and it's not because of any logical or rational thing that's occurred. It's because my relationship with the event has dramatically changed my thought processes. So, I think it's a complicated question, and it's a fascinating one to think about in terms of how we evolved as people based on external and internal events.

    Trumbule: Well, to follow up on that, Tom, how do we get control of our on-line entity? Mine doesn't do anything when I'm asleep. I want mine to be working for me while I'm sleeping.

    Barbalet: (laughter) Yes. I don't know. It's one of these funny things with regards to the singularity, because in some regard I feel it's almost a nightmare. I mean, if you don't have any resting time -- if you're constantly working -- when do you actually have the time to reflect and be a person? Ultimately what I would like is some division between my simulated self and myself: my simulated self doing a certain amount of work and then being on holiday a portion of the time, and my actual self taking the remaining holiday, if you see what I'm saying. So, in terms of getting yourself working better on-line, it's a complicated problem currently. Maybe this is really the question for the future. My concern currently is that humans are increasingly acting like simulated entities in the real world, increasingly working all hours, and not taking as much advantage of their simulated selves as they could be. I think that's a problem for the future, as Dr. Dave would say.

    Van Nuys: Yeah. I recognize that in some ways I've become a slave to the machine.

    Trumbule: I'll second that.

    Van Nuys: (laughter) Yeah. My legs are turning into flippers or something because I don't walk enough.

    Trumbule: Atrophy. They're atrophying.

    Van Nuys: They're atrophying. That's the word I was looking for.

    Trumbule: I also realize that my on-line entity is working for me while I'm sleeping because, for example, just this morning I got up and found a very interesting comment to one of the blog items that I had posted yesterday. So my blog item was out there acting like me and attracting comments from people who were awake while I was asleep. So I guess I've already got that.

    Barbalet: Very much so.

    Action, Free Will and the Private Mind

    Van Nuys: Yeah. Anne the Man is asking, ''Don't we need to differentiate thinking from communicating and acting?'' There are various comments going by here, and I assume neither one of you are able to see the chat flow. By the way, we now have seven minutes remaining, so we all need to be aware of that as we begin to wind things down a bit.

    Barbalet: Certainly.

    Trumbule: And we'll try to end the show when it actually ends this time.

    Van Nuys: Yeah. We'll stop talking at the end.

    Barbalet: Yeah. I thoroughly encourage listeners to check out the previous show. It was very edifying.

    Trumbule: Really.

    Van Nuys: Yeah. The machine ran amuck. That's all I can say.

    Barbalet: Yes. Yes. So returning to Anne's question, the idea of the quiet mind versus the vocal mind is an interesting one. I think -- and this returns to the argument Nick Bostrom made -- the idea of a simulated self is ultimately the speaking mind in some regard. The question is: do you want to simulate the quiet mind as well, and is the quiet mind critical to the speaking mind? What are your thoughts on this, Dr. Dave?

    Van Nuys: Oh my goodness. I'm not sure I get the distinction. Help me out.

    Barbalet: Well, I think one is shown by action, and the other is what motivates the action -- something you may never be privy to. So on one side, you are speaking and acting; you have action. That can show some degree of intelligence, but there's a lot of other stuff going on underneath.

    Van Nuys: Yeah. The private mind versus the public mind. And what's your question about that?

    Barbalet: I think the question Anne was posing is whether, by looking only at action -- which is the easiest way for machine intelligence to show itself -- we're in fact neglecting the additional aspect of intelligence, which is the private mind.

    Van Nuys: Well, that certainly raises an interesting question. For future research, as Dr. Dave would say.

    Barbalet: (laughter)

    Van Nuys: (laughter) Yeah. Because that was the thing that tripped up behaviorists for so long -- they were unwilling to talk about or consider what was going on in what they called the black box. So, in some way, we may face a similar conundrum as we look at our computing devices. We have certain outputs through which they communicate with us, but there remains the question: is there some other kind of process going on in there that we're not privy to? In other words, are they thinking thoughts, and are those thoughts generous towards us or not?

    Barbalet: I mean, my concern in the human realm has always been the movement towards human automation, where you're completely divorced from the private mind of those around you in work environments and the like -- where you're so totally driven by productivity that the human becomes the machine. Ultimately, it maps onto the idea of free will. Really, free will is, in some regard, the quiet mind tinkering away -- the private mind, to use Dr. Dave's wording. I think it's interesting. I think the minimization of free will is something we need to address in the post-singularity world.

    Van Nuys: When you talked about being plugged into the corporation, it's interesting that the colloquial description for that is ''working for the man.'' But in fact, as you point out, we become sort of dehumanized when we're working for ''the man.'' It's not really ''the man'' that you're working for but a sort of machine, in a way. The corporation as machine.

    Barbalet: Certainly. Or I think of it as a simulation, fundamentally, as well. The analysis that comes out of this -- which is why I'm interested in pushing the singularists, the folks associated with the singularity movement, into being post-singularists -- is the idea of: OK, it's already happened. We're already past there. Now what does humanity need to do in order to remain human? And that's the question that hasn't really been answered.

    Van Nuys: Yeah. Now Anne had some interesting questions here relating to free will, wanting to know, well, what do you mean by free will? And is a machine capable of spontaneity?

    Barbalet: I think a machine is perfectly capable of spontaneity. Certainly machines are capable of doing random things very easily. But free will isn't completely random; there's a quality somewhere between completely random and completely predetermined where free will resides. You're right that when you interact with machines, you're often not aware of the actual processing they're doing, but I think we're talking about something a bit more fundamental in terms of free will. Certainly, it is a broad philosophical question. If you have a machine that is doing all this processing in order to make a response, is that not exactly the same as human free will in some regard?
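    [Editor's note: the spectrum Tom describes -- behavior sitting between completely predetermined and completely random -- can be made concrete with a toy sketch. This is an illustration only, not a model of minds; the action names and the `freedom` parameter are invented for the example.]

    ```python
    import random

    # Toy agent whose behavior interpolates between two extremes:
    # freedom = 0.0 always follows a fixed policy (completely predetermined);
    # freedom = 1.0 ignores the policy entirely (completely random).

    ACTIONS = ["wait", "explore", "respond"]

    def policy(state):
        # A fully predetermined rule: the same state always yields the same action.
        return ACTIONS[state % len(ACTIONS)]

    def choose(state, freedom, rng):
        if rng.random() < freedom:
            return rng.choice(ACTIONS)   # the random extreme
        return policy(state)             # the predetermined extreme

    rng = random.Random(42)
    predetermined = [choose(s, 0.0, rng) for s in range(6)]  # same on every run
    mixed = [choose(s, 0.5, rng) for s in range(6)]          # partly policy, partly chance

    print(predetermined)
    print(mixed)
    ```

    At intermediate values of `freedom`, the agent is neither fully predictable nor fully random -- the region of the spectrum where, on Tom's account, something like free will would sit.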

    Van Nuys: I need to cut in here. We've got less than a minute remaining, so I want to make sure that we have time to thank you, Tom, for being our guest today, and giving us lots of provocative things to think about.

    Barbalet: Not a problem. It was my pleasure. And if folks want to communicate with me more, as I've said, tom at It's noble as in noble person as opposed to the explosives expert. Alternatively, I'm also the editor of Biota,, and there's a podcast series associated with both Biota and Noble Ape.

    Van Nuys: I want to thank our listeners who have joined us in the web chat area. Thank you very much for hanging in there with us. Maybe we'll see you next week. Jerry?

    Trumbule: Well, I just wanted to say. Remember, it's all in your mind.

    Van Nuys: I don't know why I keep forgetting to say that here, but it's true. (laughter) It's either all in your mind or it's all in the computer.

    Barbalet: Or it's somewhere in between.

    Van Nuys: It's all in the relationship.