Why it’s hard to make machines think original thoughts

Robust artificial creativity systems are an important step towards the ultimate commodity: a mass-producible product that in turn produces solutions and ideas on demand. Think how this could add to our capacity for problem solving. The idea is as exciting as the challenges involved in realizing it, and many questions remain unanswered.

Not only do we lack understanding of our own creative mechanisms, but the basics of computer programs seem to oppose the very idea of achieving unbounded originality. Here’s a look at that important, fundamental problem in implementing creativity. In easily digestible format, no less.

A Brief Introduction to Creativity

A painting by the computer program Aaron

Crucial to what follows is pointing out that creativity is ill-defined and people generally have very different ideas of what it is. This can make it difficult to discuss and debate.

Art is typically strongly tied to creativity, and many scientists focus their work there. The painting on the side was created by the computer program Aaron, one of the more famous creative systems. But while creativity exhibits itself very strongly amongst artists and is easily associated with them, that’s nowhere near the whole story. For example, Thaler’s neural networks have invented new, patentable physical materials. This is another type of creative expression.

To go even further: in Emergence of Creativity, my chapter in the book Intelligent Complex Adaptive Systems, I explain and define creativity and its origins in a way that accounts for even the actions of primitive organisms, not only human abilities.

But for this article’s purposes, all we have to agree on is that creating something new or being original is an essential part of creativity. Given this agreement, we shouldn’t run into a problem with the following explanations. But even so, keep in mind how extremely multifaceted creativity is and that I’m simplifying the concept (to keep this article from becoming a book).

A problem when creating creative systems

To properly explain the problem, namely how programming seems to oppose creativity, we must first understand what computer programs are: instructions. A set of steps the computer executes. Typically, when we create computer programs we specify a certain problem and in turn devise a set of instructions that addresses this problem.

A program that can add numbers is a very simple example of this: we specify that its inputs are numbers and operators, how it should apply the operators to the numbers, and that the output should be the result of the computation. Note that before we create a program we need to know what we want it to do and what instructions achieve that purpose.
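To make that concrete, such a program can be sketched in a few lines (Python here, chosen purely for illustration). Every behavior, down to the handling of each operator, is spelled out before the program ever runs:

```python
# A minimal calculator: a set of instructions fixed entirely in advance.
# Before writing it we already knew its inputs (numbers and an operator)
# and exactly which output each input should produce.

def calculate(a, op, b):
    """Apply an operator to two numbers, following fixed instructions."""
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    if op == "*":
        return a * b
    raise ValueError("unknown operator: " + op)

print(calculate(2, "+", 3))  # -> 5
```

Nothing here can happen that we did not write down: the program is its instructions.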

Computer programs are instructions, even when they become more complex.

An example of a creative system

Consider an intelligent agent model. An agent is a system that perceives its environment (input) and acts upon that environment (output), and broadly speaking, an agent’s input can be anything from keystrokes to streaming video (or a combination).

Our agent is a writer, to stay within a creativity setting most are comfortable with (here’s hoping you think people like Shakespeare are creative). In this particular case, the input is a human demanding to hear a story about a particular subject, such as detectives or robots. Our agent composes a story, puts it in a file and then acts upon the environment by displaying it on-screen.

Our agent perceives human input from keyboard and displays a story on screen

In between receiving input and presenting output is, of course, a program that maps the input to output. Its brain, loosely speaking. We’ve already stated the agent’s high-level goal: to write a story. This is the part of the agent that decides where to put the plot twist in which we learn that the protagonist’s mother, Alice, wasn’t really an actress but a government agent.

But in order to make our agent write something other than gibberish, he must have a dictionary of words and he must know grammar. He must also have common sense about how the world works, or otherwise we’d get stories where a bucket drinks from a detective.

In the real world we would have to take our agent’s architecture quite a bit deeper. We would have to give him some way of choosing plots, paragraphs and words, for example. But we’re going to look past that and just focus on what we already have at this point.
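To make the agent a little more tangible, here is a deliberately crude sketch in Python. The word lists, the sentence template and the `write_story` name are all made up for illustration; they stand in for the dictionary, grammar and common-sense knowledge discussed above, not for any real system:

```python
import random

# Toy knowledge: a "dictionary" of actors per topic and a list of verbs.
SUBJECTS = {
    "detectives": ["the detective", "the suspect"],
    "robots": ["the robot", "the engineer"],
}
VERBS = ["investigated", "followed", "questioned"]

def write_story(topic, sentences=3, seed=None):
    """Perceive a topic (input) and act by returning a story (output)."""
    rng = random.Random(seed)
    actors = SUBJECTS.get(topic, ["someone"])
    lines = []
    for _ in range(sentences):
        # The "grammar": a single subject-verb-object template.
        subject, obj = rng.choice(actors), rng.choice(actors)
        lines.append(f"{subject.capitalize()} {rng.choice(VERBS)} {obj}.")
    return " ".join(lines)

print(write_story("detectives", seed=1))
```

Everything this agent can ever say is determined by those lists and that one template; the randomness only shuffles combinations we put there ourselves.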

Instructions are limitations

Note now that when we gave our agent a dictionary, a goal, grammar knowledge and common sense, we effectively restricted him: he’s not a painter. He’s not a musical composer. He’s not a programmer, a witch, a lion or a wardrobe. And when we look at him as a creative writer, we begin to see he’s not that creative at all.

A goal limits the objectives of a system and thereby helps us organize how the system will behave [1]: our agent should write a story; he’s not about to write a groundbreaking paper about artificial creativity. And what about his stories? He has common sense that dictates no man can fly without the help of machines. We killed our creative agent’s Superman right there.

But these restrictions were also necessary for him to do anything at all. To explain this with a familiar analogy, it’s like writing a cooking recipe: to bake a cake we need certain ingredients. When we bake it, the ingredients define what kind of cake it becomes. But we’re baking a cake, not bread. And the cake is sweet, not sour. The ingredients are restricted to define a particular outcome of the baking. Similarly, the instructions we devise are what define a program’s behavior and outcome.

Basically, to make it do what we want it to, we impose restrictions — a confined set of rules out of all the possible rules in the world.
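This confinement can even be made explicit: because the rules are fixed and finite, we can enumerate every sentence such a system could ever produce before running it. The toy vocabulary below is an assumption for illustration:

```python
from itertools import product

# Toy vocabulary and one common-sense rule ("no man can fly").
actors = ["the detective", "the robot"]
verbs = ["flew", "investigated"]

def common_sense_allows(actor, verb):
    """Our restriction: flying is ruled out, Superman included."""
    return verb != "flew"

# The complete output space, known before the program ever "creates".
all_possible = [f"{a} {v}" for a, v in product(actors, verbs)
                if common_sense_allows(a, v)]
print(all_possible)  # -> ['the detective investigated', 'the robot investigated']
```

The list is the program’s entire creative universe, written down in advance by us.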

What kind of instructions make limitless systems if instructions themselves are limitations?

Now, finally, here’s the core of the problem: we agreed at the beginning of this article that an essential part of creativity is originality and creating something new. But as we’ve discussed above, we must know how a program should behave before we make it, including what it should produce.

Diagram depicting a programmer knowing what his program will do

So how can we make a program when we don’t want to know beforehand what it should do, and when we want it to be as limitless as possible? If we must tell the program what to do, can the program ever be original? Can it surprise us? Can it make something novel?

The basics of programming require us to explicitly design mechanisms that produce certain outcomes. By giving these explicit instructions we inadvertently decrease the program’s potential to surprise us, since it means we know beforehand how it will behave.

The instructions that define our program (and make it work) are in turn the exact reason it can’t produce surprising, novel and interesting ideas.

But how about a self-organizing program that writes its own code on the fly to overcome its restrictions? Yes, that sounds appealing and is what many scientists working on artificial creativity are trying to do, in one form or another. And it would be really easy too… if the program didn’t have to be creative to write new code!

Edit (Aug. 25th): Due to some comments from readers (thank you) I feel inclined to emphasize what I mentioned in the article: many creative systems have already been made (have a look through the creativity category).

I’ve personally created and worked on systems that present creative behavior. Making them more robust is just a question of time, research and development. The example used here is intentionally simple and raw to flesh out an essential problem that scientists face when developing creative systems—but this is a problem we are overcoming.

Links and References

  • Aaron painting retrieved from Wikipedia
  • [1] Stuart Russell and Peter Norvig (2002). Artificial Intelligence: A Modern Approach (2nd edition). Prentice Hall. p. 60.


15 Comments

  1. Why can’t you just make 100 billion small, light, virtual machines and implement the rules we know about neurons on each machine and connect certain sets of virtual machines to various simple inputs?

    Have them make new connections, sum near-simultaneous interactions into new output, abolish latent connections, establish directionality of connections, reinforce connections that were used early or often, and abolish unused or overused VMs early in the game.

    Some get connected to photo-sensitive scintillators, just like rods and cones. Some get connected to narrow-band microphones, just like hair cells of the inner ear. Some get connected to strain gauges, just like sensory corpuscles. Some get connected to motors, speakers, etc. It would presumably take a while to hack all the initial pathways, but it seems doable. A lot of the anatomy is sufficiently detailed at this point, even at the molecular biology level, that such a construct would actually help the molecular biologists understand where they might have gaps in their own taxonomic efforts to detail every molecule.

    I know 100 billion seems like a large number, but you could do it within the construct of something like folding-at-home.

  2. >> And it would be really easy too… if the program wouldn’t have to be creative to write new code!

    That doesn’t make any sense. WE are creative and we can’t yet write programs that are creative.

    Maybe we can’t write programs that are creative, because WE aren’t actually creative…

  3. Please educate yourself; for example, go read the book “A New Kind Of Science” for a bunch of examples of computer-generated creativity. I don’t know where to begin to point out all the ways in which your thesis is wrong. Study complex systems, adaptive systems and so on – there are all kinds of algorithms whose outputs we cannot really explain.

    Also, if we always knew the outcomes of the computations, we would not bother to write those programs. Obviously we write programs because they will produce unexpected results – otherwise there would be no need to write them.

    Furthermore, limitations do not imply that creativity is impossible. The human brain has limitations, too, yet manages to be creative.

  4. @Björn – I explicitly state in this article that there are programs that create unpredictable results. And obviously I wouldn’t be researching the field of artificial creativity if I thought it were a dead end.

    I mention several examples of creative systems. Thaler’s neural networks that have invented patentable ideas, Aaron that makes original paintings. If you page through my articles on creativity, you’ll also find a heap of other examples.

    As to a New Kind of Science and complex adaptive systems – I’ve referenced it on more than one occasion in my papers on creative systems. I’ve personally created and worked on complex systems that present unpredictable creative behavior. Like I also mention in this article. Some of which are award winning.

    Furthermore, limitations do not imply that creativity is impossible. The human brain has limitations, too, yet manages to be creative.

    Nowhere do I state that artificial creativity is impossible, nor would I ever think such a thing. I never say that this simple example is the end-all nature of all computer systems, in fact, I state the exact opposite.

    This is an introductory article for people unfamiliar with programming and creative systems. I’m explaining a fundamental problem that people working on creative systems have to overcome. Difficulties on the way to creating human-level creative systems. The “thesis” here is only that instructions introduce limitations, and that these limitations have only been overcome to a certain extent in AI systems. If you know an example of a system that has no limitations, and want, for example, to challenge AAAI fellow Peter Norvig on his statement that goals limit systems, then please do so.

    Next time you’re going to criticize someone’s article, please read it before you write.

  5. Sorry, yes I skimmed your article and got stuck on the sentence “By giving these explicit instructions we inadvertently decrease the potential of the program surprising us since clearly it means that we know beforehand how it will behave.”

    I have just heard this prejudice against computer programs too many times. But it is wrong to argue against a prejudice, rather than against your actual article, so my apologies. It is a bit of a pity that you wrote that article in an introductory article, because the recipients are exactly the people who will pick up that prejudice.

    As you no doubt know, some of the “creative” algorithms are not even very complex.

  6. I have just heard this prejudice against computer programs too many times.

    I hear it all the time as well. Creativity is perhaps the only thing people still have a death-grip on as separating themselves from machines. They’re wrong.

    [...] so my apologies. It is a bit of a pity that you wrote that article in an introductory article, because the recipients are exactly the people who will pick up that prejudice.

    Apology accepted and feel free to swing by again. Your comment is absolutely right, it just doesn’t apply to me. Good point that the recipients may pick up more prejudice — perhaps I should’ve emphasized more that we do have creative systems. I have a sequel to this article in the works, discussing how we’re gradually overcoming the limitations, that will hopefully make up for it.

  7. Why can’t you just make 100 billion small, light, virtual machines and implement the rules we know about neurons on each machine and connect certain sets of virtual machines to various simple inputs?

    Connectionist AI definitely has promise and applications in creative systems (as can be expected, since it simulates animal brain mechanisms to some extent). But it remains hard to mold such systems into doing what we want them to do, so it’s hard to develop broad, general creativity.

    I know 100 billion seems like a large number, but you could do it within the construct of something like folding-at-home.

    Well, not only is 100 billion a large number, but it’s estimated that even the number of hyperlinks on the web is still smaller than the number of synapses in the human brain. Applying something like folding-at-home would require a major effort that probably wouldn’t be enough, even though the number of transistors on the internet is greater than the number of neurons in the brain.

    If people like Kevin Kelly and Kurzweil are correct, however, a single computing device matching the power of the human brain will cost $1k in 2019. And the internet in the year 2040 will have processing power that matches all 7 billion human brains on Earth.

    Now we wait.

  8. Yang Yu

    It has been a while since my last post on here. Thought I’d join in on this one. I’m currently working on a thesis about intelligence, and creativity and AI are of course part of it.

    I believe the main focus of creativity should not be on evaluating whether an output is creative (surprising), but rather on the reason for creativity. In other words, why are things creative and what ingredients are necessary to achieve this behavior?

    I love to give this example, because everyone can relate: the example of an ant. An ant, with a brain smaller than a grain of sand, outperforms supercomputers. It does not have a storage capacity like ours, meaning its memory and learned behavior are limited; however, it can process information just as well, and just as fast. It has inputs such as its senses, with one huge one we lack: its antennae.

    I believe that humans have a disadvantage in terms of evolutionary advances to our bodies, since our reliance on technology and tools has kept our natural bodies from adapting to our needs; rather, we invent tools and innovations to adapt to the environment. Some might argue that this reliance is inevitable and critical to our survival; however, ants have taken over most parts of the earth using naturalistic tools, making them part of nature, and thus, through evolution, they have a better chance of survival.

    My point in all this is that the reason for creativity comes from survival, and that creativity is the behavioral representation of intelligence.

    Intelligence does not require 15 billion VMs or a human brain, so creativity can be obtained from something much simpler.

    I totally agree with the fact that instructions limit functionality, and the overall capability of creativity. However, like I previously said, the importance of creativity is not the human judgment of a particular output, but rather the reason for it.

    The AI Aaron draws a picture; why did it draw it? Did the drawing have an underlying purpose that would benefit the survival of Aaron in its environment? Did it draw it willingly? If the answer is no to those questions, then I think Aaron is not being creative, but simply following instructions.

    The paper that I am writing focuses on intelligence from a reason point of view, and the purpose of life for any object in the universe. I’ll post a link once I am finished, if you are interested.

    Finally, I will attempt to answer the question, “Why it’s hard to make machines think original thoughts?”

    The answer is simple: you cannot MAKE it. It has to do it itself.
    It has to have a purpose that is self-beneficial, not human-beneficial. With that being said, it must be able to know its needs and wants, and deduce what is beneficial. It must follow a concept of life and death, and have a subconscious willingness to be alive. These are fundamental requirements of an intellectual being rather than simply a program.

    Like you said, the closest research for this type of system is genetic and evolutionary algorithms, where the program mutates, reproduces and so on to solve a problem based on health. I propose a few tweaks on this design. First, remove the problem: a true being has only one problem, and that is existence. Second, there is no health: health is a judgment on output, and intelligence cannot be judged by a process outside the intellect’s environment; it must judge itself using its own thinking process.

    Agree or disagree?

  9. DF

    Great article, as always!

    Like Björn, at first I got the wrong vibe, even though you do make it clear that you are only presenting the limitations AI “seems” to have, not agreeing with them.

    But is the goal really “limitless systems”? I don’t think these exist, not even in the real world. To me it’s all a matter of complexity.

    Rain surprises us all the time, though it’s pure physics: a very (very) big system with a very (very) well-defined set of rules. That’s what I think our brain is.

    So it would only be a matter of processing power until “the perfect AI” is doable (following Niels’ recipe in the comments). And, as you said, this probably won’t take long…

    Then the big question will be: will perfect AI lead to “a better world” or to… TERMINATOR (much cooler reference than The Matrix)? Note how I’m totally ignoring the much harder question: what is “a better world”?

    But I digress! Thanks for the article, Hrafn (and I didn’t forget that AI Games wiki thing! :)

  10. DF

    Just one more thing on Yang’s great comment about AI lacking intent.

    That’s one of the main arguments I hear against the possibility of simulating the human mind: even if you build the perfect AI, how do you know it “feels” the way we feel?

    I understand and respect this argument, but in order to consider it in this kind of scientific study, we must be able to understand it, and “intent” can’t be measured. I agree I can’t prove whether a robot has intent, but I also can’t prove anybody else has it!

  11. Yang Yu

    DF, in my opinion, no system is “better” than another; they are simply different. You could, however, define “better” as being most fit in one’s environment.

    Like I said in my previous comment, humans rely on tools and technology, and we are no longer achieving individual evolutionary advances but rather societal, mankind-wide advances. If you view society as a “super organism”, then the ultimate goal in the organism’s evolution is becoming more efficient and more advanced in all areas of science. An inevitable achievement would be advanced AI and robotics.

    The human race is great in that we can love, we all have individual wishes and dreams, and we can all pursue different goals. But in the end, we are all selfish. That selfishness is ultimately our weakness in terms of the “super organism”, since our disconnection and indecisiveness in decision making reduce output efficiency. On the other hand, selfishness and disconnectivity promote competitiveness and creativity. Therefore, humans’ role in the “super organism” will always be important, and robots will co-exist with us even if they prove to be more powerful. I believe an intelligent AI will deduce this, so the likelihood of a Terminator is slim.

  12. Yang Yu

    Just a side note…

    The survival of AI and robotics as the dominant species on this planet is highly unlikely. In the evolutionary theory of survival of the fittest, “fittest” does not refer to the relationships between different species, but rather to the relationship with the environment. Of course other species impact the environment, but ultimately, the fittest is the one best correlated with its environment. In this regard, the fitness of ants is much greater than that of humans, due to our negative and unnaturalistic approach to survival in our reliance on nonrenewable resources, technologies and tools.

    With that being said, a totally mechanical body’s fitness and survival in our environment must be lower than that of humans, since reliance on technology is the soul of its existence while our reliance is simply on a tool (even though we are getting more and more attached). So in my opinion, for human survival in the long run, we must become more renewable to the earth and everything on it.

    Coming back to creativity: in order to create something creative, the creator has to be creative. If creativeness could be measured by a coefficient, the coefficient of the creator would always be higher than that of the created, since the model or algorithm used cannot exceed the creator’s own knowledge.

  13. Great comments guys :) A lot of thought worthy issues.

  14. Just wanted to point you to our new product Artisteer, which may be the first professional Web design automation software and achieves quite good-looking results with mostly random actions. Not sure if we’d define it as truly artificial creativity, since we don’t measure user feedback but have a set of limited rules/instructions that logically (not creatively) produce a good outcome most of the time. The question is whether this is true creativity, since the logic is very simple, yet users feel that it produces creative output.
    Feel free to check our product out at http://www.artisteer.com or contact me to discuss further.
