Douwe Osinga's Blog: 2017

Wednesday, May 10, 2017

Movie Recommendations

Today's project is a small Python notebook to recommend movies. I know, I know, there's a million of those out there, but this one is special, since it is not trained on user ratings, but on the outgoing links of the Wikipedia articles of the movies.


Why is that good? Two reasons. One is using diverse data. When you build a recommender system just on user ratings, you do get an Amazon-like "people who liked this movie also liked that movie" system. But if you're not using information like the year of the movie, the genre or the director, you are throwing away a lot of relevant features that are easy to get.

The second reason is that when you start a new project, you probably don't have enough user ratings to be able to recommend stuff from the get-go. On the other hand, for many knowledge areas it is easy to extract the relevant Wikipedia pages.

The outgoing links of a Wikipedia page make for a good signature. Similar pages will often link to the same pages. Estimating the similarity between two pages by calculating the Jaccard distance would probably already work quite well. I went a little further and trained an embedding layer over the outgoing links.
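To make that baseline concrete, here is a minimal sketch of the Jaccard idea, assuming you already have each movie's outgoing links as a list of page titles (the helper and its inputs are hypothetical, not the notebook's actual code):

def jaccard_similarity(links_a, links_b):
    # overlap of the two pages' outgoing link sets; 1 minus this is the Jaccard distance
    a, b = set(links_a), set(links_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# e.g. jaccard_similarity(links['The Matrix'], links['Inception'])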

The result is not Netflix quality, but it works reasonably well. As an extra bonus, I projected the resulting movies onto a two-dimensional plane, rendering their movie posters as placeholders. It's fun to explore movies that way. Go play with it.
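If you want to roll your own 2D map, a projection like t-SNE over the movie vectors is the kind of thing to reach for. This is just a sketch of the general approach, not necessarily what the notebook does, and movie_vectors is assumed to be the embedding matrix from the trained model:

from sklearn.manifold import TSNE

# movie_vectors: hypothetical (n_movies, embedding_size) array taken from the embedding layer
coords = TSNE(n_components=2).fit_transform(movie_vectors)  # one (x, y) point per movie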

Wednesday, February 8, 2017

Amazon Dash and Philips Hue

When the Amazon Dash came out a few years ago, I thought it was an April Fool's joke. A button that you install somewhere in your house to order one specific product from Amazon? That's crazy!


It didn't take long for people to figure out how to use the button for something other than ordering products from Amazon. No serious hacking is required and at $4.99 a pop, the buttons are quite affordable. When not in use, the button isn't connected to the wifi. So when you click it, the first thing it does is set up that connection. A script monitoring the local network can easily detect this event and then do something arbitrary - like order beer.
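The detection itself can be as simple as sniffing for ARP traffic from the button's MAC address. Here is a minimal sketch using scapy; the MAC address is a placeholder and this is an illustration, not the project's actual code:

from scapy.all import ARP, sniff

DASH_MAC = 'xx:xx:xx:xx:xx:xx'  # placeholder: your button's MAC address

def on_packet(pkt):
    # when the Dash wakes up to join the wifi, it sends ARP traffic; seeing its
    # MAC address on the network is a reliable "button was pressed" signal
    if pkt.haslayer(ARP) and pkt[ARP].hwsrc.lower() == DASH_MAC:
        print('dash button pressed!')

sniff(filter='arp', prn=on_packet, store=0)  # needs to run with root privileges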

There are many hacks around, but none of them do exactly what I want:
  • When the last person leaves the house, all lights should switch off
  • When the first person comes back, all lights should switch back on
So I wrote a script that doesn't just monitor the Dash button, but also the presence of the phones of me and my wife on the local network. The basic rules are:
  • If any lights are on, switch them off when:
    • the button was pressed
    • no phones were seen on the network for 20 minutes
  • If any lights have been previously switched off, switch them on when:
    • the button was pressed
    • a phone is seen after 20 minutes of no phones
This way, the button can always be used to switch the lights on or off, but if you don't switch off the lights when leaving home, they will go off automatically. Unlike with a motion-controlled setup, this won't happen if you are home but not moving (though it will happen if your phone runs out of battery). When you come home and you had previously switched off the lights using the button, they will come on automatically.
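For illustration, the decision logic might look roughly like the sketch below, using the phue library to talk to the bridge. This is a simplified, hypothetical version of the rules, not the actual auto_lights script; the monitoring that sets phone_on_network and button_pressed is left out, and the bridge IP is a placeholder.

import time
from phue import Bridge

PHONE_TIMEOUT = 20 * 60                  # seconds without a phone before "nobody home"
bridge = Bridge('192.168.1.2')           # placeholder bridge IP
last_phone_seen = time.time()
auto_switched_off = False

def any_lights_on():
    return any(light.on for light in bridge.lights)

def loop_once(phone_on_network, button_pressed):
    # called periodically by the network monitoring loop (not shown)
    global last_phone_seen, auto_switched_off
    now = time.time()
    if phone_on_network:
        away_long_enough = now - last_phone_seen > PHONE_TIMEOUT
        last_phone_seen = now
        if auto_switched_off and away_long_enough:
            bridge.set_group(0, 'on', True)    # first person back: lights on
            auto_switched_off = False
            return
    nobody_home = now - last_phone_seen > PHONE_TIMEOUT
    if any_lights_on() and (button_pressed or nobody_home):
        bridge.set_group(0, 'on', False)       # lights off: button press or everyone left
        auto_switched_off = True
    elif button_pressed and auto_switched_off:
        bridge.set_group(0, 'on', True)        # lights back on via the button
        auto_switched_off = False

In the real script this loop would run alongside the network sniffing that watches for the phones and the Dash button.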

To get this working, check out the code, install the requirements and run: 

python auto_lights --hue_bridge=<bridge-ip> --phone_mac=<phone-macs> --dash_mac=<dash-mac>

While running, the program will also print any new MAC addresses it detects and for extra convenience it also prints the manufacturer. You can use this to find out the MAC address of your phone and of the Dash button - switch your phone to airplane mode, wait for things to quiet down and when you switch airplane mode off, you should see your phone's MAC address.

It works reasonably well. The longest I've seen my phone not contacting the wifi was 13 minutes, so 20 minutes seems safe. Coming home, it takes a little longer than ideal for the phone to reconnect to the wifi, but you can use the Dash button if you are in a hurry.

As always the code is on GitHub.

Thursday, January 26, 2017

Building a Reverse Image Search Engine using a Pretrained Image Classifier

In the Olden Days, say more than 10 years ago, building Image Search was really quite hard (see xkcd). In fact, when AltaVista first came out with a search engine for images, they couldn't really do it. What they did was return the image that had text around it that best matched your query. It's a wonder that that worked at all, but it had to do for years and years.
Why we need Reverse Image Search: find more cat pictures (from: wikipedia)
How things have changed. These days Neural Networks have no problem detecting the actual content of pictures, in some categories outperforming their human masters. An interesting development here is reverse image search - supply a search engine with an image and it will tell you where else this or similar images occur on the web. Most articles on the web describing approaches to this focus on things like perceptual hashing. While I am sure that is a good way, it struck me that there is a much simpler way.

Embeddings! Algorithms like Word2Vec train a neural network for a classification task, but they don't use the learned classification directly. Instead they use the layer just before the classification as a representation of the word. Similarly, we can use a pre-trained image classifier and run it on a collection of images, but rather than using the final layer to label the result, we get the layer before that and use that as a vector representation of the image. Similar images will have a similar vector representation. So finding similar images becomes just a nearest neighbor search.
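As a concrete sketch of that idea: with Keras you can take a pretrained classifier and keep the layer just below the classification layer as the image's vector. The choice of VGG16 and its 'fc2' layer here is just an example, not necessarily the classifier the demo uses:

import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

base = VGG16(weights='imagenet')
# keep everything up to the layer just before the classification layer
embedder = Model(base.input, base.get_layer('fc2').output)

def embed(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return embedder.predict(x)[0]          # a 4096-dimensional vector per image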

As with a lot of things like this, getting the data to run the algorithm on is more work than getting the algorithm to run. Where do we get a set of representative images from? The images from Wikipedia are a good start, but we might not want all of them. Most articles are about specific instances of things - for a reverse image search demo, classes of things are more interesting. We're interested in cats, not specific cats.

Luckily, Wikidata annotates its records with an 'instance of' property. If you have imported a Wikidata snapshot into Postgres, getting all values of the instance-of property, together with how often they occur, is a simple SQL statement:
SELECT properties->>'instance of' AS thing, COUNT(*) AS c
FROM wikidata GROUP BY thing

For some of these, Wikidata also provides us with a canonical image. For others we have to fetch the Wikipedia page and parse the wikicode. We're just going to get the first image that appears on the page, nothing fancy. After an hour of crawling, we end up with a set of roughly 7 thousand images.
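"Nothing fancy" can be taken quite literally; a sketch of pulling the first image reference out of raw wikitext can be little more than a regular expression (this is an illustration, not the project's actual parsing code):

import re

def first_image(wikitext):
    # grab the file name of the first [[File:...]] or [[Image:...]] link
    match = re.search(r'\[\[(?:File|Image):([^|\]]+)', wikitext)
    return match.group(1).strip() if match else None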

scikit-learn provides us with a k-nearest-neighbor implementation and we're off to the races. We can spin up a Flask-based server that accepts an image as a POST request and feeds that image into our pre-trained classifier. From that we get the vector representing the image. We then feed that vector into the nearest neighbor model and out fall the most similar images. You can see a working demo here.
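Wired together, the serving side might look something like the sketch below. It reuses the hypothetical embed() helper from above and assumes vectors (the matrix of image embeddings) and names (the matching image names) were computed offline; none of this is the demo's literal code.

from flask import Flask, request, jsonify
from sklearn.neighbors import NearestNeighbors

nn = NearestNeighbors(n_neighbors=10).fit(vectors)   # vectors: (n_images, d) embedding matrix
app = Flask(__name__)

@app.route('/similar', methods=['POST'])
def similar():
    uploaded = request.files['image']
    vec = embed(uploaded)            # embed() as sketched above; may need adapting for file objects
    _, idx = nn.kneighbors([vec])
    return jsonify([names[i] for i in idx[0]])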

It mostly works well. If you feed it a cat, it will return pictures of cats, the definition of success on the Internet. On mobile, you can directly upload a picture from your phone's camera and that seems to go ok, too. The biggest limitation I've come across so far is that the algorithm is bad at estimating how good its guesses are. So if there aren't any suitable pictures in the training set, it will return the one that it thinks is the closest match, but to the human eye it seems fairly unrelated.

As always, you can find the code on Github.

Tuesday, January 17, 2017

Learning to Draw: Generating Icons and Hieroglyphs

In this blog post we'll explore techniques for machine drawn icons. We'll start with a brute force approach, before moving on to machine learning, where we'll teach a recurrent neural network to plot icons. Finally we'll use the same code to generate pseudo hieroglyphs by training the network on a set of hieroglyphs. With the addition of a little composition and a little coloring, we'll end up with this:


Last week's post "Deep Ink" explored how we can simulate computers playing with blobs of ink. But even if humans see things in these weird drawings, Neural Networks don't. If you take the output of Deep Ink and feed it back into something like Google's Inception, it offers no opinions.

The simplest thing I could come up with to generate icons was brute force. If you take a grid of 4x4 pixels, there are 2^16 possible black and white images. Feed all of them into an image classifier and see if anything gets labeled. It does. But 16 pixels don't offer a lot of space for expression, so the results are somewhat abstract if you want to be positive, or show weaknesses in the image classifier if you are negative. Here are some examples:


We can do a little better by going to 5x5. To give the model a little more to play with, we can add a permanent border. This will increase the size to 7x7, but we'll only flip the middle pixels. Unfortunately the amount of work we need to do going from 4x4 to 5x5 increases by a factor of 512. Trying all 4x4 icons takes about an hour on my laptop. Exploring the 5x5 space takes weeks:


These are better and easier to understand. In some cases they even somewhat explain what the network was trying to see in the 4x4 images.
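For reference, the brute force loop over the 4x4 space can be sketched like this with a pretrained InceptionV3 from Keras; the pixel-repeating upscale and the 0.2 score threshold are arbitrary choices for illustration, not the project's exact code:

import numpy as np
from itertools import product
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)

model = InceptionV3(weights='imagenet')

def best_label(bits, threshold=0.2):
    # blow the 4x4 grid up to 299x299 by repeating pixels, then stack to RGB
    img = np.array(bits, dtype=np.float32).reshape(4, 4) * 255
    big = np.kron(img, np.ones((75, 75), dtype=np.float32))[:299, :299]
    rgb = np.stack([big] * 3, axis=-1)[np.newaxis]
    preds = model.predict(preprocess_input(rgb), verbose=0)
    label = decode_predictions(preds, top=1)[0][0]    # (class id, name, score)
    return label if label[2] > threshold else None

for bits in product([0, 1], repeat=16):               # all 2**16 black-and-white 4x4 icons
    hit = best_label(bits)
    if hit:
        print(bits, hit[1], hit[2])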

They say that if brute force doesn't work, you're just not using enough. In this case though, there might not be enough around. 8x8 for icons is tiny, but it would take my laptop something like 3 times the age of the universe to try all possibilities. Machine Learning to the rescue. Recurrent Networks are a popular choice to generate sequences, for example fake Shakespeare, recipes and Irish folk music, so why not icons?

I found a set of "free" icons at https://icons8.com/. After deduping, it gives us about 4500 icons. Downsample them to 8x8 and we can easily encode them as sequences of pixels to be turned on. An RNN can learn to draw these quite quickly. Add a little coloring for variety and on a 15x10 grid you'll get this sort of output:


These are pretty nice. They look like monsters from an 8 bit video game. The network learns a sense of blobbiness that matches the input icons. There's also a sense of symmetry and some learned dithering when just black and white doesn't cut it. In short, it learns the shapes you get when you downsample icons to 8x8.
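For the 8x8 case, the pixel-sequence encoding and a small recurrent model can be sketched in a few lines. This is a hypothetical illustration of the approach, not the repo's exact model:

import numpy as np
from tensorflow.keras import layers, models

END = 64  # end-of-icon token; pixel positions are tokens 0..63

def encode_icon(icon):
    # icon: 8x8 binary array; emit the index (row * 8 + col) of every "on" pixel
    ys, xs = np.nonzero(icon)
    return [int(y) * 8 + int(x) for y, x in zip(ys, xs)] + [END]

# a small sequence model over the 65-token vocabulary, predicting the next pixel
model = models.Sequential([
    layers.Embedding(END + 1, 32),
    layers.LSTM(128, return_sequences=True),
    layers.Dense(END + 1, activation='softmax'),
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')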

As cute as these throwbacks to the 80s are, 64 pixels still isn't a lot for an icon. Especially since the input isn't a stream of optimized 8x8 icons, but rather downsampled 32x32 icons (the lowest resolution the icons8 pack comes in).

We can't use the same encoding for 32x32 icons though. With a training set of 4500, having a vocabulary of 64 for the 8x8 icons is OK. Each pixel will occur on average 70 times, so the network has a chance to learn how they relate to each other. On a 32x32 grid, we'd have a vocabulary of 1024 and so the average pixel would only be seen 4 times, which just isn't enough to learn from.

We could run-length encode: rather than storing the absolute position of the next black pixel, store its offset from the previous one. This works, but it makes it hard for the network to keep track of where it is in the icon. An encoding that is easier to learn specifies for each scanline which pixels are turned on, followed by a newline token. This works better:

The network does seem to learn the basic shapes it sees and we recognize some common patterns like the document and the circle. I showed these to somebody and their first reaction was "are these hieroglyphs?" They do look a bit like hieroglyphs, of course, which raises the question: what happens if we train on actual hieroglyphs?

As often with these sorts of experiments, the hard thing is getting the data. Images of hieroglyphs are easy to find on the Internet; getting them in nice 32x32 pixel bitmaps is a different story though. I ended up reverse engineering a seemingly abandoned icon rendering app for the Mac that I found on Google Code (itself abandoned by Google). This gave me a training set of 2500 hieroglyphs.

The renderer responsible for the image at the beginning of this post has some specific modifications to make it more hieroglyphy: icons appear underneath each other, unless two subsequent icons fit next to each other. Also, if the middle pixel of an icon is not set and the area it belongs to doesn't connect to any of the sides, it gets filled with yellow - the Ancient Egyptians seem to have done this.

Alternatively we can run the image classifier over the produced hieroglyphs:


You can see it as mediocre labeling by a neural network that was trained for something else. Or as hieroglyphs from an alternate history where the Ancient Egyptians developed modern technology and needed hieroglyphs for "submarine", "digital clock" and "traffic light".

As always you can find the code on GitHub.

Thursday, January 12, 2017

Deep Ink: Machine Learning fantasies in black and white

One of the New New Things in Machine Learning is the concept of adversarial networks. Just like in coevolution, where both the leopard and the antelope become faster in competition with each other, the idea here is to have one network learn something and a second network learn to judge the results of the first. The process does a remarkable job generating photorealistic images. Deep dreaming has been generating crazy and psychedelic images for a while now. RNNs have been imitating writers for a few years with remarkable results.
Deep Dreaming, by Jessica Mullen from Austin, TX

The state of the art Neural Networks used to classify images contain many connected layers. You feed in the image on one side and a classification rolls out on the other side. One way to think about this process is to compare it with how the visual cortex of animals is organized. The lowest level recognizes only pixels, the layer just above it knows about edges and corners. As we go up through the layers, the level of abstraction increases too. At the very top the network will say "I saw a cat." One or two levels below that though, it might have neurons seeing an eye, paw or the skin pattern.

Deep Dreaming reverses this process. Rather than asking the network, do you see an eye somewhere in this picture, it asks the network, how would you modify this picture to make you see more eyes in it? There's a little more to this, but this is the basic principle.
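The core of that reversal is just gradient ascent on the input image. A minimal sketch with a pretrained InceptionV3 might look like this; the choice of the 'mixed3' layer, the step size and the number of steps are arbitrary examples, not the actual Deep Dream code:

import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')
dream_model = tf.keras.Model(base.input, base.get_layer('mixed3').output)

@tf.function
def dream_step(img, step_size=0.01):
    with tf.GradientTape() as tape:
        tape.watch(img)
        loss = tf.reduce_mean(dream_model(img))       # "how strongly does this layer fire?"
    grad = tape.gradient(loss, img)
    grad /= tf.math.reduce_std(grad) + 1e-8
    return tf.clip_by_value(img + step_size * grad, -1.0, 1.0)

img = tf.random.uniform((1, 299, 299, 3), minval=-1, maxval=1)
for _ in range(100):                                  # nudge the image to excite the layer more
    img = dream_step(img)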

So Software can imitate Art, just as Life does. It tends to be fairly literal though. The Deep Dreaming images, fascinating as they are, reflect patterns seen elsewhere in clouds or on top of random noise. So that got me thinking, what happens if we force some stark restrictions on what the network can do?


Deep Ink works similarly. But instead of starting with an image of a cloud, we start with a white picture that has a little blob of black pixels in the middle, a little bubble of ink if you will. We then run the network over this image, but rather than allowing it to adjust the pixels a tiny bit in a certain direction, the only thing it can do is flip pixels, either from black to white or the other way around.

The network can't do much with areas that are pure black or pure white, so in effect it will only flip pixels at the border of the ink bubble in the middle. It's like it takes a pen and draws from the center in random directions to the sides, making patterns in the ink. Making that into an animated gif shows off the process nicely.
You can find the code, as always, on GitHub. You can experiment with which layer to activate and which channel in that layer. Activating a channel in the top layer doesn't seem to draw something representing that channel though. The other thing to play with is the pair of values representing black and white in the network. I keep them very close together - the further apart they are, the more high frequencies sneak in.
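The flip rule itself can be sketched along these lines: use the gradient of the chosen activation with respect to the image to score which pixel flips would help most, and flip the best few. The BLACK and WHITE values, kept close together as described above, are hypothetical placeholders, and this is an illustration rather than the actual Deep Ink code:

import numpy as np

BLACK, WHITE = -0.05, 0.05    # kept close together to avoid high frequencies sneaking in

def flip_pixels(img, grad, n_flips=10):
    # first-order gain in the activation from flipping each pixel to the other value
    gain = np.where(img == BLACK, grad * (WHITE - BLACK), grad * (BLACK - WHITE))
    best = np.argsort(gain.ravel())[::-1][:n_flips]   # pixels whose flip helps most
    out = img.copy()
    ys, xs = np.unravel_index(best, img.shape)
    out[ys, xs] = np.where(out[ys, xs] == BLACK, WHITE, BLACK)
    return out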




Monday, January 9, 2017

Building Spotify's Song Radio in 100 lines of Python

Somebody said that when it comes to deep learning, it is still very early days, like 1995 (I was going to look up the quote, but I can't find it - it was probably Churchill or Mark Twain). I disagree. The early days are gone; it is more like 1958. Fortran has just been invented. The early days of having to implement the mechanics of neural networks by hand were akin to writing machine code in the Fifties. Platforms like TensorFlow, Theano and Keras let us focus on what we can do, rather than how.


Marconi the inventor (from Wikipedia)

Marconi is a demonstration of this. It shows how to build a clone of Spotify's Song Radio in less than a hundred lines of Python using open source libraries and data easily scraped from the Internet. If you include the code that scrapes and preprocesses the data, it is almost 400 lines, but still.

Pandora was the first company to successfully launch a product that produced playlists based on a song. It did this by employing humans who would listen to songs and characterize each on 450 musical dimensions. It worked well. Once you have all songs mapped into this multi-dimensional space, finding similar songs is a mere matter of finding songs that occupy a similar position in this space.

Employing humans is expensive though (even underpaid musicians). So how do we automatically map songs into a multi-dimensional space? These days the answer has to be Machine Learning. Now you could build some sort of model that really understands music. Probably. It sounds really hard though. Most likely you'd need more than a hundred if not more than a thousand lines of code.

Marconi doesn't know anything about the actual music. It is trained on playlists. The underlying notion here is that similar songs will occur near each other in playlists. When you are building a playlist, you do this based on the mood you are in, or maybe the mood you want to create.

A good analogy here is Word2Vec. Word2Vec learns word similarities by feeding it sentences. After a lot of sentences, it knows that "coffee" and "espresso" are similar, because it notices that in a sentence where you use the one, you might as well expect the other. It even learns deeper relationships between words, for example that the words "king" and "man" have the same relationship as the words "queen" and "woman".

Fascinating stuff. Usefully, the Python library gensim contains a great implementation of Word2Vec. So if we feed it playlists containing song IDs, rather than sentences containing words, it will after a while learn relationships between songs. Suggesting a playlist based on a song then again becomes a straightforward nearest neighbor search.
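In gensim terms that comes down to very little code. A sketch, assuming gensim 4.x and a playlists variable holding lists of track IDs; the parameters and the seed ID are placeholders:

from gensim.models import Word2Vec

# playlists: list of playlists, each a list of Spotify track ids
model = Word2Vec(playlists, vector_size=64, window=5, min_count=2, workers=4)

# "song radio" for one seed track: its nearest neighbors in the embedding space
for track_id, score in model.wv.most_similar('SEED_TRACK_ID', topn=10):
    print(track_id, score)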

The trick is collecting the data. How do we get our hands on a large set of playlists? To their great credit, Spotify has a wonderful API that lets you get info on songs, artists and playlists. It does not, however, grant access to a dump of (public) playlists, which would be the ideal input for this project.

The workaround I use is to search for words in the titles of playlists. We start with the word 'a'. This will return a thousand playlists containing the word 'a'. We store those. Then we count, for all playlists scraped so far, how often we see any of the words in the titles of those playlists. We pick the word that appears most often that we haven't searched for yet. Rinse and repeat. So after 'a', you'll see 'the', 'of' etc. After a while 'greatest' and 'hits' appear.
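Using the spotipy client (called session here, as in the snippet further down), that crawl might be sketched like this; the limits and the error handling are simplified and hypothetical:

import collections

def crawl_playlists(session, max_searches=50):
    playlists, counts, searched = {}, collections.Counter(), set()
    word = 'a'
    for _ in range(max_searches):
        searched.add(word)
        res = session.search(q=word, type='playlist', limit=50)
        for pl in res['playlists']['items']:
            if pl and pl['id'] not in playlists:
                playlists[pl['id']] = pl
                counts.update(w.lower() for w in pl['name'].split())
        # next query: the most frequent title word we haven't searched for yet
        word = next(w for w, _ in counts.most_common() if w not in searched)
    return playlists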

It works and quickly returns a largish set of public playlists. It's not ideal in that it is hardly a natural way to sample playlists. For example, by searching for the most popular words, chances are you'll get the most popular playlists. The playlists returned also seemed very long (hundreds of songs), but maybe that's normal.

Next up: get the tracks that are actually in those playlists. Thanks to Spotify's API, that's quite simple. Just keep calling:

res = session.user_playlist_tracks(owner_id,  playlist_id,
    fields='items(track(id, name, artists(name, id), duration_ms)),next')
There's a bunch of parsing, boilerplate and timeout handling to make it work in practice, but it's all fairly straightforward.

Once we have the training data, building the model is also quite easy. I throw out playlists that are too long or that have only songs from one artist, and the model is trained with a lowish number of dimensions. To make this accessible online, I import the song vectors into Postgres. Recommending music then becomes as simple as this SQL statement:

SELECT song_id, song_name, artist_name, 
       cube_distance(cube(vec), cube(%(mid_vec)s)) as distance
FROM song2vec ORDER BY distance
LIMIT 10;
Here mid_vec is the vector representing the song that was used as input, or the midpoint of a set of vectors if multiple songs were provided.


How does it perform? Well, you can try for yourself, but I think it works pretty well!

There's a lot of room for more experiments here, I think. Building an artists-only model would be a simple extension. Looking at the meta information of the songs and using it to build classifiers might also be interesting. Even more interesting would be to look at the actual music and see if there are features in the wave patterns that we can map onto the song vectors.