Mar 30 2012
 

One of the things which I’d been meaning to look at, but hadn’t got around to, is how the braincode changes over time on a minute-by-minute basis (i.e. the temporal resolution of the simulation).  With some new additions to the interactive longterm console it’s now possible to plot the braincode for an individual ape over 24 hours of simulation time.

Here’s an example.  The code is on the horizontal axis, with colors representing different instructions and parameters, and time is on the vertical axis, with 12am at the top.

The commands to produce this are:

nalongterm – to run the interactive console

run <steps> – to run for some number of steps

top – to list the top apes (alternatively you can use ls or dir)

watch <ape name> – to watch a particular ape

step – to run for 24 hours of simulation time

A file called temporal_braincode.png is then saved to the current directory.

The activity seems to vary between individuals and also over time.  There is less braincode activity during sleep, probably because the cognitive simulation is less active.  The right-hand side of the image represents the outer part of the braincode, so here you can sometimes see attentional switching effects.

Some more examples:

Some apes don’t have much cognitive activity going on.  For these individuals their braincode program remains mostly static, or becomes colonized by the ambient informational environment.  In the example below it looks as if there are a couple of counter instructions incrementing and then rolling over, but otherwise no significant changes.

Mar 27 2012
 

If you are ever looking for my “Ape Brain Narcissism Misses the Singularity: An Artificial Life View” article from H+ Magazine, the original page seems to have disappeared. I printed a PDF of the article a while back. In the spirit of maintaining information, please feel free to read my copy.

@timoreilly Provocative: Ape Brain Narcissism Misses the Singularity: An Artificial Life View

Shucks.

Mar 26 2012
 

I contacted Steve Omohundro about Bob Mottram’s recent post. Here is Steve’s response.

My writings have really been about the consequences of rational systems with different kinds of goals and the extent to which the technologies we are building are likely to be well-described by these models. I’m not a “Singularitarian” in the sense that I don’t think extremely rapid technological change is good for humanity and much of my work is about how to create systems that change slowly enough for humanity to make thoughtful and well-considered choices.

Bob talks about “informationally closed systems”. I think that’s an interesting class of systems to understand but most of my writing is not about them. Rational systems have goals in the physical world and act to try to bring them about. They learn by interacting with the world and by seeing the consequences of their actions.

The phrase “maladaptive goal” is a bit odd. A goal can only be maladaptive relative to some other goal. Systems can be built with many different kinds of goals. A system with a particular goal is not ever going to think its own goal is maladaptive because its goal is its very purpose in the world. In the paper “The Basic AI Drives”, I did identify 3 situations in which rational agents will want to change their goals because the physical form of the goal interacts with the informational content but these situations are pretty obscure. For most rational agents, their goals are what they are trying to bring about in the world and changing them would go against their very purpose.

Systems can be given abstract goals, however, such as creating greater happiness, or greater peace, or being compassionate which might have many different concrete subgoals as possible realizations. Those concrete realizations can then certainly change as the world changes or the system learns more.

In an economist’s sense, a utility function is a measure of the desirability of an entire history of the universe, so for most utilities a system can’t reach “100% performance on its utility function” while there is any universe history left.

Bob asks where goal systems come from and what is the origin of human values. These are the critical and important questions! Evolutionary psychology and ethical philosophy have proposed some answers but I think there is much left to discover. Humans are not fully rational but act approximately rationally when we are clear about what we really want. One of our challenges is that we are not yet completely clear on what we want but our technology is rapidly moving forward ready to give it to us! As in countless genie stories, if we ask for the wrong thing, we won’t like what we get.

If we build technological systems with amorphous or unclear goals and they are allowed to self-modify and replicate, they are unlikely to behave in ways that are positive for humanity. My papers analyze a number of drives which appear for many simple goals if they are not explicitly counteracted including self-preservation, replication, and resource-acquisition. I and others are working very hard to design classes of systems which will act in support of humanity rather than just playing out these drives with anti-human consequences. As long as systems are simple and confined to controlled environments like the Noble Ape experiments, there is unlikely to be any danger. But as systems become more powerful, we must be very careful. The more understanding we have of the behavior of intelligent systems and of human values and goals, the more likely we will be to create technologies with careful forethought and for the benefit of humanity. So I applaud your inquiry into these important topics.

Best,

Steve

Mar 19 2012
 

In recent years something called Artificial General Intelligence (AGI) has become fashionable, but the question of what constitutes a good design for a mind is actually much older.  So what is the maximally general mind which could in principle exhibit any sort of activity within the constraints of memory and time?

I think the answer to this is fairly unremarkable.  It would just be a Turing machine.  That is, a program composed from some instruction set, in which some instructions can write to or read from the environment, and others do things like adding, multiplying and moving data from one address to another.  Something like the following, where programs A and B are different individuals in a population.
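
As a concrete illustration, below is a minimal sketch in C of that sort of machine.  It is not the actual braincode instruction set – the opcode names, program size and the sensor/actuator hooks are just assumptions for the purposes of the example.

    /* A minimal sketch of the kind of machine described above.  This is not
       the actual Noble Ape braincode instruction set - the opcodes, sizes
       and the sensor/actuator hooks are illustrative assumptions. */

    #define PROGRAM_SIZE 64
    #define DATA_SIZE    64

    enum {
        OP_ADD,    /* data[a] = data[a] + data[b]           */
        OP_MUL,    /* data[a] = data[a] * data[b]           */
        OP_MOVE,   /* data[a] = data[b]                     */
        OP_SENSE,  /* data[a] = value read from environment */
        OP_ACT,    /* write data[a] to the environment      */
        OP_JUMP    /* jump to instruction a if data[b] != 0 */
    };

    typedef struct {
        unsigned char opcode;
        unsigned char a, b;          /* addresses into the data space */
    } instruction;

    typedef struct {
        instruction program[PROGRAM_SIZE];
        int         data[DATA_SIZE];
        int         pc;              /* program counter */
    } individual;

    /* execute one instruction of an individual's program */
    void step(individual * ind, int (*sense)(void), void (*act)(int))
    {
        instruction * i = &ind->program[ind->pc % PROGRAM_SIZE];
        int a = i->a % DATA_SIZE;
        int b = i->b % DATA_SIZE;

        switch (i->opcode) {
        case OP_ADD:   ind->data[a] += ind->data[b]; break;
        case OP_MUL:   ind->data[a] *= ind->data[b]; break;
        case OP_MOVE:  ind->data[a]  = ind->data[b]; break;
        case OP_SENSE: ind->data[a]  = sense();      break;
        case OP_ACT:   act(ind->data[a]);            break;
        case OP_JUMP:  if (ind->data[b] != 0) { ind->pc = a; return; } break;
        }
        ind->pc = (ind->pc + 1) % PROGRAM_SIZE;
    }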

These would be able to do anything which is computable, and when individuals met they could form a temporary shared address space which allowed reading and writing between them, like this:

Individuals might have a finite life span, and new individuals might be introduced with initially random programs as a source of new information, or novelty. Under this kind of system – which is not even as sophisticated as genetic programming – programs which are good at surviving in the environment (whatever that might be) and transmitting their data from one individual to another would tend to stick around and multiply.

At this point you could say that, with a system such as this, the AGI problem has been solved, so long as you don’t believe that hypercomputation is an important requirement.  Even the limitations of Gödel’s incompleteness could be addressed to some degree if there is a population of heterogeneous programs, such that things not provable or complete within one mind could be provable by another with a different perspective – a triumph of overlapping capabilities.  I think that some Alife systems do work in a similarly general manner, and the above description is also similar to how the braincode within Noble Ape works.

However, there are some downsides to complete generality.  In a system where any kind of computation is possible it may take an extraordinarily long time to discover solutions that are in any way comparable to the kinds of things which, when we observe them in the natural world, we would describe as being “intelligent”.  Even for quite small programs and instruction sets, the space of all possible programs is going to be colossal, especially when programs can move around between individuals such that they effectively span across minds.

In my estimation, real intelligent systems, like primates or dolphins, aren’t completely general like this.  They come with a lot of preexisting evolutionary baggage, which constrains their mind architecture in a way which provides more immediate adaptive value for survival.  So in Noble Ape there are also other systems which are part of the mind design, but which are not Turing complete.  These are:

  • The social graph, which stores information about other individuals who have been met
  • An episodic memory, which stores information about events which have recently occurred
  • The attention system, which can shift attention between entries in the social graph or episodic memory
  • A drives system, which attempts to keep some important parameters within reasonable limits
  • An affect system, which ascribes affective weighting to memories, depending upon their emotional impact

None of these systems are programs themselves, but the braincode system is able to access and recombine them in novel ways, such that the mind as a whole is still a sort of Turing machine.  It’s this ability of a particular kind of language system to re-organize and transform existing legacy modules which I think is important in human cognition.  So, for example, if you’re reading a book which contains some new idea, then areas from your visual system, auditory system and higher-level conceptual system might be re-combined in a new way, forming a new synthesis which can itself be remembered for future reference.  If the language system is recursively enumerable then an endless number of syntheses are potentially possible, even though the language itself contains only a finite number of words or instructions.  It also means that arbitrary syntheses can be generated, to produce imaginary scenarios which have never previously been experienced.
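
For concreteness, here is a rough sketch of what this sort of mind design might look like as data structures, with the braincode as the only programmable element.  The names, fields and sizes are illustrative assumptions rather than the actual Noble Ape definitions.

    /* Illustrative sketch: the non-Turing-complete subsystems as plain data,
       with the braincode as the only programmable part of the mind. */

    #define SOCIAL_SIZE   16
    #define EPISODIC_SIZE 32
    #define DRIVES        4       /* e.g. hunger, social, fatigue, sex */

    typedef struct {
        unsigned int  being_id;       /* who was met                  */
        int           friend_or_foe;  /* genetic/learned disposition  */
    } social_entry;

    typedef struct {
        unsigned int  time;           /* when the event happened      */
        unsigned int  being_id;       /* who was involved             */
        int           affect;         /* emotional weighting          */
    } episodic_entry;

    typedef struct {
        social_entry   social_graph[SOCIAL_SIZE];
        episodic_entry episodic[EPISODIC_SIZE];
        int            attention;       /* index of the current focus      */
        int            drive[DRIVES];   /* kept within homeostatic limits  */
        unsigned char  braincode[1024]; /* the only Turing complete part   */
    } ape_mind;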

Mar 18 2012
 

A common concept amongst Singularitarian thinkers is that of the seed AI.  The seed AI is some initially simple computational system which is able to both read and modify its own source code, and then undergoes a rapid evolution driven by positive feedback in which its intelligence increases under the direction of an unchanging utility function.  It’s sometimes also referred to as “the intelligence explosion”.

I think this idea is problematic, and whenever I hear it I get the feeling that someone is attempting to sell me some variety of snake oil.  The difficulty, in essence, seems to be due to a misguided notion of what intelligence is about.  The seed AI is typically portrayed as an informationally closed system.  It might have an external energy supply, but other than that no information enters or leaves the black box during its ascent to superintelligence.

There are some variants of the scenario in which there is information leakage, such as the AI getting humans to do things for it via Mechanical Turk, or by gambling on the stock market, but as soon as the seed AI gets involved with such stuff it will encounter fundamental constraints which act as a brake on its progress, and it will also become vulnerable to infiltration by meta-machines from the human cultural realm.  If the AI produces informational output which interacts with the wider world, then the world system will respond accordingly and the inflexible goal system of the seed AI which is necessary to guide its optimization process will no longer remain relevant to the new context.  Optimizing to a now maladaptive goal (superstupidity rather than superintelligence) becomes a recipe for failure and annihilation by entropic accumulation or external colonization.  So it seems that any extra-curricular excursions by the seed AI outside of its black box surely doom its progress.

Suppose that the seed AI avoids these pitfalls and reaches 100% performance on its utility function.  What then?  Since the function cannot change this spells the end of history as far as the AI is concerned, and no further evolution is possible.  It could choose to remain within this happy state of affairs, or it could break out from its box, but as soon as it does so all the previously described problems return with a vengeance.

The consensus amongst leading Singularitarian thinkers, such as Stephen Omohundro, seems to be that advanced AIs will seek to preserve their goal systems under all conditions.  There is of course the possibility of modifiable goal systems, but this is viewed with considerable alarm and consternation as being a scary idea.

If this were a podcast it’s at this point that Omohundro’s commentary about self-improvement would be interrupted by a scratched-record sound effect.  Hold on a moment.  This whole narrative seems somewhat dubious.  Are intelligent systems just utility maximizers?  Where do the goal systems come from?  What are the “human values” which he speaks of, and what does alignment with human values mean from an information-theoretic perspective?

Noble Ape and Radical Self-modification

One of the great fears amongst Singularitarians seems to be what has been described as “radical self-modification”.  Under this scenario goals may be malleable, “unfriendly” or merely non-existent.  Noble Ape is a system in which explicitly predefined goals are largely absent.  There can be goals to seek mates or to navigate to particular locations, but other than that apes don’t have any fixed aims in life.  Worse still, the braincode programs are able to read and modify themselves in an unconstrained way, and the 3D cognitive simulation can be restructured into arbitrary virtual machines by the braincode system.

According to the Singularitarian narratives which I know of, systems with unfavorable or missing goal systems will bring about disaster (“untold damage” according to Omohundro).  I’ve heard extraordinarily florid language being used in relation to this, such as “the destruction of everything we value”, or that the universe might be “turned into paperclips”.  Noble Ape, and similar artificial life systems which contain an aspect of open-ended program evolution may provide existence proofs that such scenarios are predicated upon misguided assumptions.

I’d argue that in order to get interesting behavior it’s a good idea to try to minimize the influence of explicit goals.  They’re undoubtedly useful in some contexts, but too high a degree of goal orientation is also likely to constrain the behavior in a way which may eliminate the potential for ongoing cultural evolution.

Mar 18 2012
 

If you consider simple kinds of automata, like the ones described by Stephen Wolfram in A New Kind of Science, and then consider the cultures of primates or humans to be forms of automata, just with many more dimensions, then what kinds of automata would these be?

I think Wolfram does a reasonably convincing job in that book of showing – perhaps even to a gratuitous extent – that the classifications which simple 1D automata fall into are also applicable in higher dimensions, such that they represent general properties of information systems as they evolve over time.
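
As a reminder of what these simple automata are, here is a minimal 1D binary automaton stepper.  Rule 110 is Wolfram’s canonical class IV example and rule 30 is class III; the cell count and the printing loop are just illustrative choices.

    #include <stdio.h>
    #include <string.h>

    #define CELLS 64

    /* advance a 1D binary automaton by one time step under the given rule */
    void ca_step(const unsigned char * current, unsigned char * next,
                 unsigned char rule)
    {
        int i;
        for (i = 0; i < CELLS; i++) {
            int left   = current[(i + CELLS - 1) % CELLS];
            int centre = current[i];
            int right  = current[(i + 1) % CELLS];
            int index  = (left << 2) | (centre << 1) | right;
            next[i] = (rule >> index) & 1;
        }
    }

    int main(void)
    {
        unsigned char row[CELLS] = {0}, next[CELLS];
        int t, i;
        row[CELLS / 2] = 1;                    /* single seed cell */
        for (t = 0; t < 32; t++) {
            for (i = 0; i < CELLS; i++) putchar(row[i] ? '#' : '.');
            putchar('\n');
            ca_step(row, next, 110);           /* try 30, 90, 110 ... */
            memcpy(row, next, CELLS);
        }
        return 0;
    }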

Class I

What would a class I culture look like?  It might begin in a random sort of way, but soon converges to a single highly stereotyped set of behaviors.  This is an “end of history” kind of situation, or a minimum information state in terms of algorithmic compressibility.  Many animal cultures do seem to be like this.  For example the social organization of bees or termites was pretty much the same 1000 or 10000 years ago as it is today.  Not much qualitative change seems to have occurred in the way that their cultures communicate and organize.  If you could compress bee culture into an algorithm of minimum size, that algorithm would only change slowly over long periods of time, mainly in line with underlying changes in genetics or climatic conditions.

Class II

In a class II culture you’d see oscillations in its behavior over time.  There might even be secondary oscillations at the level of communication between individuals, but these would be of a symmetrical kind.  A program representing this sort of culture would be a fractal.  Here I’ll go out on a limb and conjecture that most existing primate cultures fall into this category.  So you can have developments such as termite fishing, cracking nuts with rocks, washing food in the sea or bathing.  My guess is that these local cultural adaptations will wax and wane over long periods of time, but that they are non-cumulative and so don’t have any particular historical direction.  There are dynamics, but only within a limited range.  Possibly bird song might also fall into this category.  Classes I and II could be described as being steady state or equilibrium cultures.

Class III

A class III culture would be chaotic, such as during wars, revolutions or natural disasters.  I don’t know of any cultures which are perpetually chaotic or random-looking, although amongst social creatures ranging from insects to primates there are phases of unpredictable change as factions compete for resources or respond to environmental change, such as the disruption of an ant nest by a predator.

Class IV

This is the more interesting type of culture, and as far as I know it’s only possessed by humans.  A class IV culture exists somewhere between classes II and III.  It can contain cycles, such as the secular cycles observed in human history, but has a non-repeating and cumulative structure.  Crucially, these cultures can be Turing complete, and contain meta-machinery which is not simply symmetrical as a fractal would be, nor totally random.  Meta-machines within these cultures would be analogous to Ross Ashby’s notion of “organisms”.

 

Some additional speculation

Conjecture I: “Interesting” cultures which can support advanced features such as folklore, art, religion or complex technologies are those which are Turing complete and can contain meta-machines.

Conjecture II: These cultures are a function of the grammatical complexity of their communications systems.

Conjecture III:  At some point in human evolution there was a transition from an equilibrium culture to a Turing complete one.  This I call “the computational revolution”.  Whether other creatures have also done this, such as cetaceans, is still unknown but it should in principle be possible to find out by studying their communication behavior.  After the computational revolution, humans became symbionts with their own meta-machines.

Mar 16 2012
 

Noble Apes have a number of drives which they try to maintain within a homeostatic regime.  One of these is a social drive, and it’s related to crowding.  If there are few other apes in the vicinity then the social drive increases, and if there are many apes around then the drive decreases.  This is like loneliness or over-stimulation, and Cynthia Breazeal used a similar idea for regulation of social behavior in robotics.  It’s also related to the idea of behavioral sink.

Previously the threshold value that the social drive must exceed in order for interactions to take place was just hard-coded, but I’ve made this variable using the SOCIAL_THRESHOLD macro.  It seems that the typical degree of sociality for any species is partly innate – some are highly social (sheep being the obvious example) whereas others are mostly solitary – so I’ve made the threshold value partly genetic and partly learned.
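
As a rough sketch of the idea – using hypothetical values and accessors, since the real SOCIAL_THRESHOLD macro and drive update in the source differ in detail – the threshold combines a genetic and a learned component, with crowding nudging the drive up or down:

    /* threshold is partly genetic and partly learned (illustrative average) */
    #define SOCIAL_THRESHOLD(genetic, learned)  (((genetic) + (learned)) >> 1)

    /* crowding regulates the social drive towards a homeostatic regime */
    int update_social_drive(int drive, int apes_nearby)
    {
        if (apes_nearby == 0) drive++;    /* loneliness       */
        if (apes_nearby > 3)  drive--;    /* over-stimulation */
        if (drive < 0) drive = 0;
        return drive;
    }

    /* interactions only take place once the drive exceeds the threshold */
    int wants_to_interact(int drive, int genetic, int learned)
    {
        return drive > SOCIAL_THRESHOLD(genetic, learned);
    }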

Mar 11 2012
 

I periodically get asked how to build the Noble Ape Simulation for Windows. Thankfully Microsoft now provides a pretty snappy free compiler. I’m going to use a Google search link to find it, to avoid linking to a particular dated version.

Visual C++ Express

You’ll need a copy of the simulation source. I’d recommend getting the latest tarball.

And you will need something to extract the tarball. I use 7Zip, although there are many other programs. Once you have it decompressed, the final step is covered in a YouTube video I recorded.

Enjoy!

Mar 08 2012
 

For a while I’ve been thinking about how to possibly implement something akin to a theory of mind, and now I have a provisional solution. Previously, when Noble Apes chatted they did so in the same way with different interlocutors, and for a primate this obviously isn’t very realistic. A better strategy, which allows for all sorts of devious outcomes, is to be able to communicate in different ways with different individuals, and this necessitates having some kind of model, or at least partial representation, of others.

To implement this I’ve transferred the braincode programs from the noble_being structure into the social graph (social_event structure). The first entry in the graph is always the self, so this is the inner part of the internal dialogue, with the outer part being stored within links to other individuals.

The idea is illustrated in the following diagram, which shows what happens when two apes are communicating:

Having the braincode attached to every link in the social graph does increase the overall memory consumption, but this change doesn’t add much to the CPU usage.  The blue boxes represent braincode programs, and when two programs are linked they form a temporary shared address space which allows data to move between them and for values in one program to become parameters for another.  It’s a very abstract representation of a language system, which makes no attempt to simulate individual words, concepts or articulations, but does allow for recursively enumerable grammatical structure.  The idea is that behavior may be observed which has a qualitative similarity to real biological language systems.
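
A minimal sketch of the shared address space idea, assuming a flat byte array of braincode per social graph link.  The sizes, layout and mapping rule are illustrative assumptions rather than the actual implementation.

    #define BRAINCODE_SIZE 128

    typedef struct {
        unsigned char braincode[BRAINCODE_SIZE];   /* program and its data */
    } braincode_link;

    /* during a chat the two programs are mapped into one address space:
       addresses 0..127 refer to the speaker and 128..255 to the listener,
       so a value written by one can become a parameter read by the other */
    unsigned char shared_read(braincode_link * speaker,
                              braincode_link * listener,
                              unsigned int address)
    {
        if ((address % (2 * BRAINCODE_SIZE)) < BRAINCODE_SIZE) {
            return speaker->braincode[address % BRAINCODE_SIZE];
        }
        return listener->braincode[address % BRAINCODE_SIZE];
    }

    void shared_write(braincode_link * speaker,
                      braincode_link * listener,
                      unsigned int address, unsigned char value)
    {
        if ((address % (2 * BRAINCODE_SIZE)) < BRAINCODE_SIZE) {
            speaker->braincode[address % BRAINCODE_SIZE] = value;
        } else {
            listener->braincode[address % BRAINCODE_SIZE] = value;
        }
    }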

A Meeting of Minds

When apes meet for the first time the (outer) braincode associated with the new social graph entry needs to be initialized.  If there are no other individuals in the graph then the inner braincode is copied (a kind of bootstrapping effect, where you use yourself as a model for other conspecifics); otherwise braincode from the most similar individual in the graph is copied over to the new entry.  Similarity is decided based upon the friend-or-foe value, because this is initially calculated from a variety of genetic and learned parameters which could be called the prejudice function.
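
Here is a sketch of that initialization rule, assuming a simple array-based social graph in which entry zero is the self.  The structure and field names are illustrative rather than the actual social_event definition.

    #include <stdlib.h>
    #include <string.h>

    #define BRAINCODE_SIZE 128
    #define SOCIAL_SIZE    16

    typedef struct {
        unsigned int  being_id;
        int           friend_or_foe;             /* from the prejudice function */
        unsigned char braincode[BRAINCODE_SIZE];
        int           in_use;
    } social_entry;

    /* initialize the outer braincode for a newly met individual */
    void init_outer_braincode(social_entry * graph, int new_index)
    {
        int i, best = -1, best_difference = 0;

        /* find the existing (non-self) entry closest in friend-or-foe value */
        for (i = 1; i < SOCIAL_SIZE; i++) {
            int difference;
            if (i == new_index || !graph[i].in_use) continue;
            difference = abs(graph[i].friend_or_foe -
                             graph[new_index].friend_or_foe);
            if (best == -1 || difference < best_difference) {
                best = i;
                best_difference = difference;
            }
        }

        if (best == -1) {
            /* nobody else known: bootstrap from the self (entry zero) */
            memcpy(graph[new_index].braincode, graph[0].braincode, BRAINCODE_SIZE);
        } else {
            memcpy(graph[new_index].braincode, graph[best].braincode, BRAINCODE_SIZE);
        }
    }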

The Socially Constructed Self

This kind of cognitive architecture results in an identity which is socially and historically constructed.  When not communicating, or even when sleeping, the cognitive process of the ape becomes an internal dialogue between a number of different imagined actors, some of whom could represent individuals who are no longer living but remain persistent and who may continue to exert influence as informational creatures of the ideosphere.  Instructions within the braincode can trigger switches in attention between the different individuals in the social graph or between events in the episodic memory, and there is also an attempt to maintain a consistency of narrative flow such that the next focus of attention may have some properties in common with the current attentional content.
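
As a sketch, an attention switch with some consistency of narrative flow might look like the following, where the “properties in common” are scored from the friend-or-foe and affect values.  The scoring rule and the names are purely illustrative assumptions.

    #include <stdlib.h>

    #define SOCIAL_SIZE 16

    typedef struct {
        unsigned int being_id;
        int          friend_or_foe;
        int          affect;          /* emotional weighting */
        int          in_use;
    } social_entry;

    /* how much a candidate has in common with the current focus of attention */
    static int in_common(const social_entry * current,
                         const social_entry * candidate)
    {
        return -(abs(current->friend_or_foe - candidate->friend_or_foe) +
                 abs(current->affect        - candidate->affect));
    }

    /* called when a braincode instruction triggers an attention switch:
       returns the index of the next focus, preferring entries which share
       some properties with the current attentional content */
    int switch_attention(const social_entry * graph, int current_index)
    {
        int i, best = current_index, best_score = 0, have_best = 0;
        for (i = 0; i < SOCIAL_SIZE; i++) {
            int score;
            if (i == current_index || !graph[i].in_use) continue;
            score = in_common(&graph[current_index], &graph[i]);
            if (!have_best || score > best_score) {
                best = i;
                best_score = score;
                have_best = 1;
            }
        }
        return best;
    }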

At the end of a lecture about the phenomenal self model, Thomas Metzinger hints at this kind of portable selfhood when he says “nobody is born, and nobody ever dies”.  It’s an idea similar to the Cartesian theater, but not quite in the same sense as meant by Daniel Dennett.

Some Implications

A couple of consequences logically follow from this sort of architecture.  Firstly, it’s possible that an ape could experience something resembling grief or estrangement.  When individuals are removed from the social graph this literally changes the cognitive dynamics.  Less controversial would be to say that an ape undergoes some psychological disturbance when they lose contact with another individual who previously played a significant role in their cognitive dynamics.

Another consequence is that fantasies involving beings who never existed could be supported within this architecture.  When spreading anecdotes during chat there is some probability of miscommunication or deliberate fabrication of information, and in principle this could turn into a persistent folklore or proto-religion perhaps similar to ancestor worship.

Also, although not currently implemented, it’s possible to imagine autistic apes.  In this scenario there would be some limitation on the creation or memory size of braincode for each link in the social graph, with the result that the number of actors on the mental stage, or the complexity with which they can be modeled, becomes more constrained, or reduced exclusively to the self.  A possible hypothesis about the rapidly escalating size of the human brain over the course of its evolution is that this was a ratcheted response to a complex social environment in which the ability to better model other individuals within the group had significant adaptive value, with creative fantasies, elaborate traditions and religions being a byproduct of this.