I’ve said it before and I’ll say it again.
Before going any further, watch the short video above.
It is a short film called Zone Out, created in only 48 hours by a director called Benjamin…
…and it is complete and utter garbage.
But that isn’t a problem, because the director’s, actors’ or writers’ feelings won’t be hurt by my comment.
That is because Benjamin is not a person.
He is an artificial intelligence.
Benjamin is an artificial intelligence developed by Ross Goodwin, a creative technologist at Google, and LA-based director Oscar Sharp. They first put Benjamin to work for the London Sci-Fi Film Festival, feeding a series of science fiction movie scripts into his algorithms to produce a completely new script, which was then acted out by real actors, including Silicon Valley’s Thomas Middleditch, to make a movie in 48 hours.
The result was the weird pseudo-sci-fi film Sunspring, which you can also watch below:
For 2018, Sharp and Goodwin didn’t only let Benjamin’s algorithms write the script. They also let it direct the scenes, speak the dialogue and create the soundtrack.
In order to achieve this feat, first they filmed some real actors in front of a green screen to capture video of facial expressions.
Then they once again allowed Benjamin to write a script, but this time also fed it hours of old public domain film, allowing it to take versions of the captured actors’ faces and superimpose them on top of the faces in the stock footage (a process known as “deepfakes”), speaking the script in a fully digitised voice.
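At its core, the superimposition step composites one face region onto another frame. The toy sketch below shows only that compositing idea as a simple alpha-blend with NumPy — the function name, array shapes, coordinates and mask are all invented for illustration, and a real deepfake pipeline uses trained neural networks rather than blending:

```python
import numpy as np

def composite_face(frame, face, mask, top, left):
    """Blend `face` onto `frame` at (top, left) using `mask` as alpha (0..1).

    frame: (H, W, 3) target footage frame
    face:  (h, w, 3) captured actor's face
    mask:  (h, w)    soft alpha mask around the face
    """
    out = frame.astype(float).copy()
    h, w = mask.shape
    region = out[top:top + h, left:left + w]
    alpha = mask[..., None]  # broadcast the mask over the colour channels
    out[top:top + h, left:left + w] = alpha * face + (1 - alpha) * region
    return out.astype(frame.dtype)

# Toy data: a dark 8x8 "stock footage" frame and a bright 4x4 "face"
frame = np.zeros((8, 8, 3), dtype=np.uint8)
face = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.ones((4, 4))  # fully opaque for this toy example
result = composite_face(frame, face, mask, top=2, left=2)
```

After the call, pixels inside the masked region take the face’s values while the rest of the frame is untouched; a soft (feathered) mask would blend the edges instead of cutting them hard.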
For a behind-the-scenes look at how the whole process came together, check out the video below (which, ironically, is longer than the film itself).
While the output may not win any visual or artistic awards, what is more important to consider is how quickly this technology is advancing.
Because in a few years’ time, we may no longer be able to tell the difference between a real performance, one that was altered using software, and potentially even one directed by software.
We have also seen the development of tools which can create robot voices that sound surprisingly human (or even pretend to be someone else), superimpose an actor’s face onto someone else’s body (often for nefarious purposes) and even create fake videos of famous (and soon everyday) people acting out someone else’s words, like in the explanation below.
Considering how quickly both the tools and algorithms are advancing, it may soon be possible for everyone to create Hollywood-quality digital approximations or alterations of everything from famous film scenes to brand new content.
It also means that in the coming years, as the technology progresses further, there will be more and more examples of artificial creativity producing its own content.
And as it uses machine learning to understand what people consider high quality and creative, its output will improve over time.
Eventually, it will become very challenging to determine whether media has been produced by a person or software.
In previous articles, I have asked whether artificial creativity can truly create art, or just replicate someone else’s ideas in a new way.
But perhaps this new generation of software just further illustrates that art is in the eye of the beholder.