Deep Ink: Machine Learning fantasies in black and white

Thursday, January 12, 2017

One of the New New Things in Machine Learning is the concept of adversarial networks. Just like in coevolution, where both the leopard and the antelope become faster in competition with each other, the idea here is to have one network learn something and to have another network learn to judge the results of the first. The process does a remarkable job of generating photorealistic images. Deep Dreaming has been generating crazy, psychedelic images for a while now, and RNNs have been imitating writers for a few years with remarkable results.
Deep Dreaming, by Jessica Mullen from Austin, TX

The state-of-the-art neural networks used to classify images contain many connected layers. You feed in an image on one side and a classification rolls out on the other side. One way to think about this process is to compare it with how the visual cortex of animals is organized. The lowest layer recognizes only pixels; the layer just above it knows about edges and corners. As we go up through the layers, the level of abstraction increases too. At the very top the network will say "I saw a cat." One or two levels below that, though, it might have neurons that respond to an eye, a paw or a fur pattern.
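To make that hierarchy concrete, here is a minimal sketch of peeking at those layers, assuming TensorFlow's stock InceptionV3; the post doesn't say which network its code actually uses.

```python
# A sketch using TensorFlow's off-the-shelf InceptionV3 classifier;
# the post's own code may use a different framework or model.
import tensorflow as tf

model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')

# 'mixed0' sits near the bottom of the stack (edges and corners);
# 'mixed10' sits near the top (whole parts like eyes and paws).
print(model.get_layer('mixed0').output.shape)
print(model.get_layer('mixed10').output.shape)
```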

Deep Dreaming reverses this process. Rather than asking the network "do you see an eye somewhere in this picture?", it asks "how would you modify this picture to make you see more eyes in it?" There's a little more to it, but that is the basic principle.
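In code, that reversal comes down to gradient ascent on the input image instead of on the network's weights. A minimal sketch, reusing the model from above; the layer ('mixed4'), channel, and step size are illustrative choices of mine, not the post's:

```python
# Maximize one channel's activation by nudging the *pixels*, not the weights.
layer_model = tf.keras.Model(model.input, model.get_layer('mixed4').output)

def dream_step(image, channel=13, step_size=0.01):
    with tf.GradientTape() as tape:
        tape.watch(image)  # we want gradients w.r.t. the image itself
        loss = tf.reduce_mean(layer_model(image)[..., channel])
    grads = tape.gradient(loss, image)
    grads /= tf.math.reduce_std(grads) + 1e-8  # normalize for stable steps
    return image + step_size * grads           # ascent: see *more* of the feature
```

Run that step in a loop over a photo of clouds or noise and the channel's favorite patterns start to surface.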

So Software can imitate Art, just as Life does. It tends to be fairly literal though. The Deep Dreaming images, fascinating as they are, mostly reflect patterns the network has seen elsewhere, painted into clouds or on top of random noise. So that got me thinking: what happens if we force some stark restrictions on what the network can do?


Deep Ink works similarly. But instead of starting with an image of a cloud, we start with a white picture that has a little blob of black pixels in the middle, a little bubble of ink if you will. We then run the network over this image, but rather than allowing it to adjust the pixels a tiny bit in a certain direction, the only thing it can do is flip pixels, either from black to white or the other way around.
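The starting canvas is easy to reproduce. A sketch, assuming the same 299×299 input as above; the blob radius and the exact values standing in for black and white are my guesses (the post only says the two values are kept very close together):

```python
import numpy as np

SIZE = 299
BLACK, WHITE = 0.45, 0.55  # assumed values; "very close together" per the post

# An all-white canvas with a round blob of ink in the middle.
canvas = np.full((1, SIZE, SIZE, 3), WHITE, dtype=np.float32)
y, x = np.ogrid[:SIZE, :SIZE]
blob = (y - SIZE // 2) ** 2 + (x - SIZE // 2) ** 2 <= 20 ** 2  # radius guessed
canvas[0, blob, :] = BLACK
image = tf.constant(canvas)
```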

The network can't do much with areas that are pure black or pure white, so in effect it will only flip pixels at the border of the ink bubble in the middle. It's as if it takes a pen and draws from the center outward in random directions, making patterns in the ink. Turning the process into an animated GIF shows it off nicely.
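A flip-only step might look like the sketch below; the sign-of-the-gradient flip rule is my reconstruction, not necessarily what the actual repo does:

```python
def ink_step(image, channel=13):
    # Same gradient as in Deep Dreaming...
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = tf.reduce_mean(layer_model(image)[..., channel])
    grads = tape.gradient(loss, image)
    # ...but the only move allowed is a full black/white flip per pixel.
    g = tf.reduce_mean(grads, axis=-1, keepdims=True)  # one signal per pixel
    black = tf.reduce_mean(image, axis=-1, keepdims=True) < (BLACK + WHITE) / 2
    image = tf.where(black & (g > 0), WHITE, image)   # gradient wants brighter
    image = tf.where(~black & (g < 0), BLACK, image)  # gradient wants darker
    return image

for _ in range(200):  # snapshot each frame and you have the animated GIF
    image = ink_step(image)
```

Inside a solid black or white region the gradient is close to zero, so in practice the flips cluster along the border of the blob, which is exactly the pen-at-the-edge effect described above.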
You can find the code, as always, on GitHub. You can experiment with which layer to activate and which channel in that layer. Activating a channel in the top layer doesn't seem to draw the thing that channel represents, though. The other thing to play with is the pair of values representing black and white in the network. I keep them very close together: the further apart they are, the more high frequencies sneak in.



