In 2013, I wrote a breakthrough article on the nascent examples of computers beginning to generate ideas in a way similar to human creativity. In the years since, this ability has grown by leaps and bounds. Here I revisit the article with all-new evidence showing how close we are to artificial creativity.
One of the favorite stories in science fiction is of a future where robots are so advanced that they have taken on human characteristics and act as advanced servants. Boston Dynamics currently makes the most advanced robots of this kind, able to move freely and interact with people in many ways. But even sci-fi has difficulty imagining a world where robots can come up with their own ideas. That world is closer than you may think.
In the not-too-distant future, machines and robots will not only become more advanced, they will also begin to exhibit aspects of creativity, and may soon exceed people in the ability to produce simple creative outputs. However, while I believe robots will be able to imitate a human's ability to craft creative work, I don't believe this is the same as true creativity.
Skeptical? Let me outline the technological advances which will lead to these breakthroughs, and then see my predictions of the jobs robots will soon steal from creative people:
1. Elementary, my dear Watson (modelling the human mind)
A lot of advances in robot technology have been about making robots more independent (able to move around a new space on their own, recognise faces and commands, and so on). The big upcoming leaps come from research into how machines can emulate the human thought process. In recent years, big data, deep learning algorithms, and the ability to spread processing power across thousands of computers in the cloud have made this process more and more effective. For example, Skype is now able to translate a video chat between two people speaking different languages, in real time.
The EU has already begun investing €1 billion into modelling the human brain over the next 10 years, which will likely include experiments into modelling thought processes.
Even before that, IBM created a new type of knowledge supercomputer called Watson, which managed to win the game show 'Jeopardy!'. Unlike previous supercomputers, which were simply used to search data faster, Watson had to handle questions that are often ambiguous and rely on cryptic connotations, so it needed to analyse queries in a more human-like manner to respond, and did so very successfully.
Chef Watson – creating new recipes
IBM partnered with the food magazine Bon Appétit and allowed Watson to data-mine its database of thousands of recipes. This gave Watson examples of successful recipes, from which it could learn which ingredients go well together. Watson then takes this information several steps further, analysing things like the molecular makeup of individual ingredients, their flavour profiles and how they react to cooking.
How is this useful? Well, someone can then ask Watson to come up with a recipe based on any combination of criteria. Even ones that no human chef would usually think of, such as “Chinese style dishes with plantains”, or “something using beef stock, goji berries, mustard and sweetcorn”. In fact, Watson famously created new recipes for Bengali Butternut BBQ Sauce, Cheese and Chocolate Panini and Italian grilled lobster.
Best of all, if you have your doubts, you can try it for yourself for free. Chef Watson is a recently launched site where you give Watson the criteria for a dish and let it work its magic behind the scenes, giving you new ideas you would never have come up with yourself.
Writing its own TED talk
While this is still a few years away, IBM has launched the Watson AI X Prize. As explained in the video below, this is a $5m prize for the team which develops an AI that creates its own convincing TED talk, to be awarded in 2020.
The prize will be awarded to the team which shows the most progress in AI-human cognitive computing, and the programme itself will present or co-present its own findings.
(I just hope that during the presentation they don’t at any point say “I’m sorry Dave, I can’t let you do that”)
2. Machine Learning
In order to make machines more independent, many researchers are looking into robots building their own awareness of their surroundings over time. The aim is to reduce the need for humans to programme all of the information machines require in advance. So now there are machines which learn new information in the same way that toddlers do, and learn about their own bodies in order to work out how to move. Some can even begin to imagine what is going on in the minds of the people they are interacting with.
While that is interesting, the real changes will come from letting learning computers loose on the internet's data so that they can learn human concepts. In 2012, Google created a neural network of 16,000 computers and fed it random image thumbnails from YouTube. Without any previous knowledge, it was able to form a concept of similarity between many images, and to learn what the most common object was. In case you were guessing, it was a cat. Thanks, YouTube! Given more processing power and time, these machines will soon look at objects and see not only the descriptions humans have programmed, but the meaning people give to them.
Last year, Google took this a step further and allowed these machines to look at images in a "dreamlike" state. They did this to learn how their system was actually "seeing" the images it was fed. They allowed it to take the parts of an image which it thought were similar to images it already understood and exaggerate them. So if it sees a butterfly with an eye-shaped pattern, it will make those patterns look more like eyes. Or if it sees a cloud which it thinks looks a bit like a dog's head, it will make it look more like a dog's head.
You can try this yourself with images at Google’s Deep Dream Generator. But be aware, the results can be scary when you see how a computer interprets what it “sees”. For example, here is the before and after of a stock photo for “innovation” I fed into it:
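To get a feel for the mechanism, here is a toy sketch of the core Deep Dream idea (this is an illustrative simplification, not Google's actual code): repeatedly nudge the image in the direction that increases a chosen "feature activation", so patterns the system already recognises become exaggerated in the image itself. The `activation` and `feature` here are invented stand-ins for a real network layer.

```python
import numpy as np

def activation(img, feature):
    # Stand-in for a network layer's response: how strongly the
    # image correlates with a learned feature pattern.
    return float(np.sum(img * feature))

def dream_step(img, feature, lr=0.1):
    # In this toy version the gradient of the activation w.r.t. the
    # image is just the feature pattern itself; real systems
    # backpropagate through a deep network to get it.
    grad = feature
    return np.clip(img + lr * grad, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((8, 8))       # a small greyscale "photo"
feature = rng.random((8, 8))   # an "eye-like" pattern the net knows

before = activation(img, feature)
for _ in range(20):
    img = dream_step(img, feature)
after = activation(img, feature)
# The image now expresses the feature more strongly than before.
```

The key design point is that nothing new is painted onto the image from outside: the loop only amplifies whatever the system already "sees", which is why the results look like hallucinations of its training data.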
What this technology is doing is allowing computers and neural nets to develop a skill they were previously terrible at: interpretation.
Soon, an evolved version of this will be able to produce work which previously didn't exist, based on some starting information and criteria for what to produce.
3. Big data, predictions and instant experimentation
'Big Data' is one of the biggest trends in analytics of the past few years, already doing everything from predicting what you will search for in Google Autocomplete, to recommending which toaster Amazon should show you, to suggesting which movies Netflix thinks you would like to watch on a Tuesday evening. By feeding a system enough data, it is able to discern underlying trends more effectively than a person ever could and make predictions of what may work in the future. It is already predicting what music you will listen to.
One of my other favorite experiments is called Yossarianlives, a metaphorical search engine. Instead of searching for specific data, it lets you search for a concept and returns results of what its databank of internet searches says are related metaphorical concepts. It's close to a digital brainstorming session.
Pandora's Music Genome Project gets input from music experts on thousands of songs, including how the lyrics work, aspects of the base melody, genre, style, speed and impact. It also runs thousands of experiments with its millions of users when producing a personal track list, streamed as a radio station, and gets real-time feedback on how successful it was from how the user interacts with the suggested music. This helps it figure out how people react to and enjoy aspects of music in different settings, and so it is able to produce a list of new music a customer may like.
But what about the next evolution of big data? Computers are already able to understand voice, language structure and word meanings. If big data analysed the lyrics to every song released in the last 100 years and saw how well each fared, it could likely find the underlying patterns and predict new lyrics. More than that, it could instantly test them with people. Imagine a programme able to take a concept, find metaphors for it, use big data to predict which potential lyrics would be popular, and then produce 100 slightly different versions of a song. It could "sing" the lyrics using a computer voice over a synthesised track and release each version on YouTube or a streaming radio service. Based on user feedback and popularity, it would then amend the content and style, run the experiment again, get more feedback, and repeat until it had a song which users loved, then release it to its iTunes account, without any human ever writing a note.
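The generate-test-refine loop described above can be sketched as a simple evolutionary search (a hypothetical illustration; the "listener score" below is an invented stand-in for real audience feedback, and a real system would be vastly more complex):

```python
import random

random.seed(42)

def listener_score(variant):
    # Stand-in for real-world feedback (plays, likes, skips).
    # Here the hidden "ideal song" has every parameter equal to 1.0,
    # and variants score worse the further they are from it.
    return -sum((v - 1.0) ** 2 for v in variant)

def mutate(variant):
    # Amend the content and style slightly between experiment rounds.
    return [v + random.uniform(-0.1, 0.1) for v in variant]

# Start with 100 random variants of a 5-parameter "song".
population = [[random.random() for _ in range(5)] for _ in range(100)]

for generation in range(30):
    # "Release" every variant, rank by audience reaction,
    # keep the best-received versions and mutate them.
    population.sort(key=listener_score, reverse=True)
    survivors = population[:20]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]

best = max(population, key=listener_score)
# After repeated rounds of feedback, "best" sits far closer to what
# the simulated audience wants than any of the random first drafts.
```

No single round needs to produce a hit; the loop only needs a feedback signal and cheap iteration, which is exactly what releasing 100 variants to streaming audiences would provide.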
Similarly, big data could be used to analyse previous links between all forms of media and internet chatter and their effect on the success of media released afterwards. Would there have been a way to predict the success of vampire-based books earlier? Could it predict the rise of a music genre representing the attitudes of a demographic, like grunge did in the 90s? How far in advance could you predict what will be popular? Big data will eventually enable all of this, or at least enhance it.
So what comes next?
While I do believe that machines will soon replace certain aspects of the creative process, I don't think they will ever be truly creative. This is due to the distinct difference between creativity (the generation of new and valuable ideas) and craft (turning those ideas into something tangible). Machines will overtake humans in craft, and in many cases already have (manufacturing, image processing); they can produce the 'What?' and 'How?', but not the 'Why?'. Until there is a machine which has gone beyond treating inputs as data and started treating data as experiences, all of its information, no matter how much analysis went into it from however many millions of sources, is still second-hand from humans.
It is the intentionality of being creative that will continue to separate humans from machines for the foreseeable future.
That being said, here are my predictions of creative jobs that will be at least partially replaced by machines in the next decade:
- Advertising: Programs will produce hundreds or thousands of designs, slogans etc., and try them out at small scale on the internet before a full campaign launch. Based on user reaction they will refine the campaign and iterate until an ideal message is found.
- Music: The first fully digitally written, sung and produced song will be released. It will likely have very generic lyrics about ‘Love’, ‘Beauty’ and use the word ‘Baby’ a lot. But the second album will show a lot more nuance and variety. And the live performances will have a lot of lighting effects but not much soul.
- Architecture & Design: By providing the exact functionality required from a building or product, a programme will produce several very different designs which all meet the underlying requirements.
- Writing, Screenwriting & TV: By finding the underlying trends in public opinion, software will be able to predict which books, films and TV shows will be popular in one, two and three years' time. It will then compare this against previous releases to suggest story arcs which the book, film or TV show should follow to improve its likelihood of success.
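The advertising prediction above already has a standard mechanism behind it: an "epsilon-greedy" test that shows candidate messages to small audiences and shifts budget toward whichever earns the most clicks. Here is a minimal sketch; the slogans and click rates are invented for illustration.

```python
import random

random.seed(1)

# Hidden ground truth the advertiser does not know.
true_click_rate = {"Slogan A": 0.02, "Slogan B": 0.05, "Slogan C": 0.11}

shows = {s: 0 for s in true_click_rate}
clicks = {s: 0 for s in true_click_rate}

def observed_rate(slogan):
    return clicks[slogan] / shows[slogan] if shows[slogan] else 0.0

for impression in range(5000):
    if random.random() < 0.1:
        # Explore: occasionally try a random slogan on a small audience.
        slogan = random.choice(list(true_click_rate))
    else:
        # Exploit: spend most of the budget on the current best performer.
        slogan = max(true_click_rate, key=observed_rate)
    shows[slogan] += 1
    if random.random() < true_click_rate[slogan]:
        clicks[slogan] += 1

winner = max(true_click_rate, key=observed_rate)
```

The campaign never needs a human to guess the winning message in advance; it only needs enough cheap impressions to let the feedback separate the candidates.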
Do you think that machines will ever be able to produce truly creative work? Let us know in the comments below (we read all comments).