Witness ‘Fear and Loathing in Las Vegas’ Through Google’s Deep Dream
Without getting too immersed in the MIT-level details, suffice it to say Google’s Deep Dream algorithm makes ordinary images look like all-out LSD trips. After Google announced the algorithm last month, the question immediately became: What would happen if you applied it, and its breakthrough in neural networks, to something already exceedingly trippy? The answer is this acid-fried clip from Fear and Loathing in Las Vegas, the 1998 film adaptation of Hunter S. Thompson’s gonzo adventure.
In this scene, Thompson (played by Johnny Depp) flashes back to the Electric Kool-Aid Acid Tests; however, the Google Deep Dream algorithm adds an extra layer of psychedelia over what’s already a surreal scene.
As Depp wades through the San Francisco scene, with Jefferson Airplane’s “Somebody to Love” pounding in the background, borders morph and people’s faces melt briefly into basset hounds and other household pets. In one instance, when an LSD-seeking man portrayed by Lyle Lovett yells out to solicit drugs, the interior of his mouth houses a third eyeball. It’s the kind of terrifying acid trip only the brain of a computer can dream up (or Fear and Loathing artist Ralph Steadman).
As the Google Research Blog explained when they announced Deep Dream, “We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the ‘output’ layer is reached. The network’s ‘answer’ comes from this final output layer.”
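The blog’s description of an input passing through stacked layers until an “output” layer produces the answer can be sketched in a few lines of code. This is a minimal, illustrative toy, not Google’s actual network: the layer sizes, weights, and biases below are invented for demonstration.

```python
# Illustrative sketch of the stacked-layer idea described above:
# an input is fed through successive layers, and the final layer's
# activations are the network's "answer". All numbers are made up.

def relu(x):
    """Simple nonlinearity applied to each neuron's weighted sum."""
    return x if x > 0 else 0.0

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums plus a nonlinearity."""
    return [relu(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(image, layers):
    """Feed the input through each layer in turn; return the output layer."""
    activations = image
    for weights, biases in layers:
        activations = layer(activations, weights, biases)
    return activations

# Toy "network": 3 inputs -> 2 hidden neurons -> 2 outputs.
layers = [
    ([[0.2, -0.1, 0.4], [0.7, 0.3, -0.5]], [0.0, 0.1]),
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),
]
print(forward([1.0, 0.5, 0.25], layers))
```

Training, as the blog notes, means gradually adjusting those weights over millions of examples; Deep Dream then runs the process in reverse, tweaking the image itself to amplify whatever the layers think they see.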
The algorithm was then released for tech-savvy users to experiment with, resulting in the Fear and Loathing clip created by YouTube user Roelof Pieters as well as countless other hallucinogenic images dreamt up by computers.