What if the future town was created by machines? Could they cope with the unwritten human law of cohabitation?

A participatory experimental artwork that evolves over time, Machine Maps is an exhibition using machine learning, AI and town-planning design to build future arrangements and configurations of space. It is a playful experiment in creating new digital maps and encouraging audiences to rethink their spaces and the interpretation of them.

Ashley James Brown

Machine Maps.

June 2017.


 

What if computers could dream?

 

Machine Maps draws on the concepts of machine learning and artificial intelligence through conditional adversarial networks (cGANs). pix2pix uses the cGAN process to train a computer to learn a mapping between two sets of images, drawn differently but depicting the same underlying scene. Once learnt, the computer can attempt to recognise features in one style of image and redraw them from memory (much like how we dream) in the other style. Machine Maps was trained on a series of drawn and aerial satellite images of various topographies of Milton Keynes.
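For the technically curious, the mapping described above is learnt by optimising a combined objective: an adversarial term that rewards the generator for fooling a discriminator, plus an L1 term that keeps each output close to its paired target image. A minimal NumPy sketch of that objective is below; the λ = 100 weighting follows the pix2pix paper, but the function names and toy values are illustrative assumptions, not the exhibition's actual training code.

```python
import numpy as np

# Weight on the L1 reconstruction term; the pix2pix paper uses 100.
LAMBDA = 100.0

def sigmoid(z):
    """Squash discriminator logits into probabilities in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def l1_loss(generated, target):
    """Mean absolute pixel difference between the generator's output
    and the paired ground-truth image (e.g. drawing -> aerial photo)."""
    return np.mean(np.abs(generated - target))

def generator_loss(disc_fake_logits, generated, target, lam=LAMBDA):
    """cGAN generator objective: adversarial term + weighted L1 term.

    The adversarial term is binary cross-entropy against 'real' labels,
    so the generator is rewarded when the discriminator scores its
    outputs as real."""
    adversarial = -np.mean(np.log(sigmoid(disc_fake_logits)))
    return adversarial + lam * l1_loss(generated, target)
```

In training, the generator and discriminator would be updated in alternation; here only the generator's side of the objective is sketched.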

 

This installation gives audiences the power to create simple feature drawings that the computer then tries to dream into a real image reconstructed from its memory banks. Strange, beautiful and not entirely accurate, Machine Maps allows for creative experimentation with one's landscape and a chance to truly play with the idea of a future Milton Keynes, in which computers may try to create our town's topography from historical data with minimal human input. Will AI become the future of design, architecture and town planning, or will it merely inform design thinking?

 

Please draw roads, lakes, forests and building plots to design your very own block of Milton Keynes and have it printed to join the growing aerial landscape exhibition that combines design from both man and machine.

 

Credits:

Machine Maps is part of The Dystopian Town Planner, a series of work by Ashley James Brown created for Digitalis that explores the themes of steganography, street signage and the way in which place is designed by authority and repurposed by citizens.

 

Machine Maps is based upon the amazing groundwork by Gene Kogan and the Machine Learning for Artists toolkit, which makes heavy use of the pix2pix research model and resulting software (Image-to-Image Translation with Conditional Adversarial Networks, by Phillip Isola, Jun-Yan Zhu, Tinghui Zhou and Alexei A. Efros, published on arXiv on 21 November 2016). pix2pix in turn borrows heavily from the DCGAN and Context-Encoder deep learning frameworks.

 

With thanks to Pete Ashton for training such large amounts of data for me.

 

Tools:

macOS 10.12.5

NVIDIA CUDA – GPU computing platform for deep learning

NVIDIA cuDNN – GPU-accelerated library of deep neural network primitives

pix2pix – image-to-image translation with conditional adversarial networks

Torch – a scientific computing framework

LuaJIT – a just-in-time compiler for Lua

Lua – a lightweight scripting language

Many other contributed libraries and resources.

Process.