Friday, September 23, 2016

t-SNE in Python (part 2)

As promised, in this post I'll talk about my implementation of the t-SNE dimensionality reduction algorithm in Python. Sorry in advance for the lack of syntax highlighting -- I haven't figured that out yet. I've made a version that explicitly calculates the gradient of the cost with respect to each vector $y_i$ in the reduced dataset, and another version that employs Theano's grad function. The Numpy version was a bit tricky to implement. The Theano version, for once, was actually easier than the Numpy version, because all you do is just slam in T.grad. I'll start with the Numpy version, and then move on to the Theano version. Before we jump into any code, let's state the gradient of the cost function:
$$
\frac{\partial Cost}{\partial y_{i}} = 4\sum_{j} (p_{ij} - q_{ij})(y_i - y_j)(1 + ||y_i - y_j||^2)^{-1}
$$
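Here's a rough sketch of what the explicit Numpy version can look like (the names here are just illustrative -- this isn't necessarily my exact code):

import numpy as np

def tsne_grad_numpy(P, Q, Y):
    # dC/dy_i = 4 * sum_j (p_ij - q_ij)(y_i - y_j)(1 + ||y_i - y_j||^2)^(-1)
    diff = Y[:, np.newaxis, :] - Y[np.newaxis, :, :]     # (n, n, d) array of y_i - y_j
    inv_dist = 1.0 / (1.0 + np.sum(diff ** 2, axis=2))   # (n, n) Student-t terms
    W = (P - Q) * inv_dist                                # elementwise weights
    return 4.0 * np.einsum('ij,ijd->id', W, diff)         # sum over j for each i

And here's a sketch of the Theano version, where you only write down the cost and let T.grad work out the gradient:

import theano
import theano.tensor as T

P = T.dmatrix('P')   # high dimensional joint probabilities
Y = T.dmatrix('Y')   # low dimensional embedding, shape (n, d)

# Student-t similarities q_ij, with the diagonal zeroed out
diff = Y.dimshuffle(0, 'x', 1) - Y.dimshuffle('x', 0, 1)
inv_dist = 1.0 / (1.0 + T.sum(T.sqr(diff), axis=2))
inv_dist = inv_dist * (1.0 - T.identity_like(inv_dist))
Q = inv_dist / T.sum(inv_dist)

# KL divergence cost; the small epsilon keeps log() finite on the diagonal
cost = T.sum(P * T.log((P + 1e-12) / (Q + 1e-12)))
tsne_grad_theano = theano.function([P, Y], T.grad(cost, Y))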
Whaaaaaaa??? Take 11 lines of code and turn it into 2? Theano, you've stolen my heart! The crazy part about this is that it even runs a little faster than the Numpy version. As always, there is some overhead (around 3 seconds on my machine) to compile the function that computes the gradient. When 'training' the t-SNE algorithm, however, the Theano version is about 1.5x faster, so you quickly make back this time.

Tuesday, September 20, 2016

t-SNE in Python (part 1)

In reading some papers about Tomas Mikolov's word2vec algorithm for creating word embeddings, I came across a cool method for visualizing high dimensional data. I have some experience with methods that are used for dimensionality reduction like PCA and autoencoders (check out an old but presumably working version here), but I'd never encountered something that was purpose built for visualizing high dimensional data in two or three dimensions. t-SNE (t-distributed stochastic neighbor embedding) attempts to minimize a 'difference' (I'll clarify this in a hot sec) function between what amounts to distances in high and low dimensional space. Here, we are calculating 'distances' (really probabilities) between vectors in the dataset. t-SNE has a homepage here, where you can find a bunch of resources. The original paper (also on the homepage) is pretty straightforward in its explanation of the algorithm. I'll quickly describe what's going on with the algorithm before discussing one aspect of it that I've been working on.

Say you have a set of vectors $X = \{x_0, x_1, x_2, \ldots, x_N\}$, each with dimension $k$. Since we're doing dimensionality reduction, assume that $k$ is something large. For example, we could be dealing with the MNIST dataset, where $k$ would be 784. The goal is to generate some good representation of $X$ in a lower dimensional space, say of dimension $l$. Ideally $l$ would be something like 2 or 3, so we could see our data on a 2- or 3-D scatter plot. Let's call this low dimensional representation $Y$. The goal here is to preserve the relative position of each vector in $X$ in our new representation $Y$. In other words, for each vector $x_i$ we want to be able to create an embedding $y_i$ that preserves some notion of distance to every other vector, in both $X$ and $Y$. t-SNE uses two different distance metrics -- one for the vectors in $X$ and another for the vectors in $Y$. For a detailed look at why they do this, check out their paper. For each $x_i$ we can calculate an associated probability $p_{j|i}$ that looks as follows:
$$
p_{j|i} = \frac{exp(-{||x_i - x_j||}^{2} / 2\sigma_i^2)}{\sum_{k\ne i}exp(-{||x_i - x_k||}^{2} / 2\sigma_i^2)}
$$

Here $x_j$ is another vector in $X$. As such we are calculating the probability of vector $j$ given vector $i$. I don't think that this distribution is entirely intuitive, but you can think of it as the probability that $x_i$ would 'choose' vector $x_j$. As van der Maaten points out in the paper, vectors that are close to each other (as in their Euclidean distance is near zero) will have a relatively high probability, while those with larger distances will have a much lower probability. In practice, one doesn't actually use this conditional distribution. Instead, we use a joint probability distribution over indices $i$ and $j$. I was puzzled by the normalization factor, and it only became clear to me once I took a peek at some code.
$$
p_{ij} = \frac{exp(-{||x_i - x_j||}^{2} / 2\sigma^2)}{\sum_{k\ne l}exp(-{||x_k - x_l||}^{2} / 2\sigma^2)} \\
p_{ij} = \frac{p_{i|j} + p_{j|i}}{2n}
$$

The second line above has to do with a symmetry consideration. Again, if you're interested, check out van der Maaten's paper. The way I think of this is as follows: we calculate the $p_{j|i}$ as we normally would. What we get from this is an $n \times n$ matrix, where $n$ is the number of vectors in our dataset. Note that as a joint probability matrix, this sucker is not yet normalized! We then add the transpose of this matrix to it, and normalize the result. Van der Maaten and crew also define a joint probability distribution for the vectors in $Y$. Instead of using the same distribution as for the vectors in $X$, t-SNE employs a Student t-distribution with a single degree of freedom. This choice is well motivated: it avoids overcrowding in the lower dimensional space, since the Student t-distribution has fatter tails than the Gaussian used in the high dimensional space. Note that we could use the same Gaussian kernel (with no sigma factor), and we would simply be doing symmetric SNE. Anyways, the joint probability distribution looks as follows:
$$
q_{ij} = \frac{{(1 + {||y_i - y_j||}^{2})}^{-1}}{\sum_{k\ne l}{(1 + {||y_l - y_k||}^{2})}^{-1}}
$$
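To make this concrete, here's a rough Numpy sketch of how the $P$ and $Q$ matrices can be built (the helper names are made up, and this isn't necessarily my exact implementation):

import numpy as np

def p_conditional(X, sigmas):
    # Row i holds p_{j|i} for a given set of sigma_i values
    sq_dist = np.sum((X[:, np.newaxis, :] - X[np.newaxis, :, :]) ** 2, axis=2)
    E = np.exp(-sq_dist / (2.0 * sigmas[:, np.newaxis] ** 2))
    np.fill_diagonal(E, 0.0)                    # p_{i|i} = 0
    return E / E.sum(axis=1, keepdims=True)     # normalize each row

def p_joint(X, sigmas):
    # Symmetrize: add the transpose, then normalize over the whole matrix
    P_cond = p_conditional(X, sigmas)
    return (P_cond + P_cond.T) / (2.0 * X.shape[0])

def q_joint(Y):
    # Student-t similarities, normalized over the whole matrix
    sq_dist = np.sum((Y[:, np.newaxis, :] - Y[np.newaxis, :, :]) ** 2, axis=2)
    inv_dist = 1.0 / (1.0 + sq_dist)
    np.fill_diagonal(inv_dist, 0.0)             # q_{ii} isn't used
    return inv_dist / inv_dist.sum()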

As before, that $l$ index confused me quite a bit before I saw how this algorithm got implemented in some code. Basically the idea is that we calculate a matrix full of unnormalized $q_{ij}$s, and then normalize over the whole matrix. The cost function that we minimize in order to generate the best embedding $y_i$ for each vector $x_i$ looks as follows.

$$
Cost = \sum_{i} \sum_{j} p_{ij}log(\frac{p_{ij}}{q_{ij}})
$$
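In code the cost is then just a couple of lines (again a sketch, reusing the made-up helpers from above):

import numpy as np

def kl_cost(P, Q, eps=1e-12):
    # KL divergence between the two joint distributions; eps avoids log(0)
    return np.sum(P * np.log((P + eps) / (Q + eps)))

# e.g. cost = kl_cost(p_joint(X, sigmas), q_joint(Y))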
We can minimize this function using conventional neural net techniques, like gradient descent. Van der Maaten and co. spend some time talking about the gradient of the cost function, but I thought I would leave it up to Theano to do that... The problem I've been concerned with is calculating the free parameter $\sigma_i$ in $p_{j|i}$. In their paper, van der Maaten and co. say that they use binary search to find an optimal value of $\sigma_i$. I thought it would be cool if I could leverage the machinery of Scipy/Numpy (and even Theano!) to do this. First, however, a description of the problem. t-SNE has a hyperparameter, called the perplexity, that determines the value of $\sigma_i$ for each $x_i$. The perplexity is the same for every vector in $X$. One calculates the perplexity as follows:
$$
Perplexity(x_i) = 2^{H_i} \\
H_i = - \sum_{j} p_{j|i} log_{2}(p_{j|i}) $$
$H_i$ is called the entropy (which looks the same as entropy in Physics, minus the Boltzmann constant). So let's say we set the perplexity to be 10; the goal is then to find a value of $\sigma_i$ for each $x_i$ such that $Perplexity(x_i)$ is 10. In principle this is just a matter of creating some Python function that takes $\sigma_i$ as an argument, and using Scipy's optimize module to find the minimum of the squared difference between the chosen perplexity and the calculated value. I wondered, however, if I could use Newton's method, in conjunction with Theano's symbolic gradient, to do this. It turns out that I couldn't, so I just stuck with Scipy's optimize module (a rough sketch of that search is below). In the next post, I'll talk about my implementation of the t-SNE algorithm, using conventional Numpy and Theano.
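Here's roughly what that perplexity search can look like with Scipy -- a sketch with made-up names, assuming we already have the squared distances from $x_i$ to every other vector:

import numpy as np
from scipy import optimize

def perplexity(sigma, sq_dist_i):
    # Perplexity of p_{j|i} for a single vector x_i, given a trial sigma_i
    # (sq_dist_i holds squared distances from x_i to every other vector, self excluded)
    E = np.exp(-sq_dist_i / (2.0 * sigma ** 2))
    p = E / np.sum(E)
    H = -np.sum(p * np.log2(p + 1e-12))   # the entropy H_i, in bits
    return 2.0 ** H

def find_sigma(sq_dist_i, target=10.0):
    # Minimize the squared difference between the chosen and calculated perplexity
    obj = lambda s: (perplexity(s, sq_dist_i) - target) ** 2
    res = optimize.minimize_scalar(obj, bounds=(1e-3, 1e3), method='bounded')
    return res.x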

Saturday, September 3, 2016

Solving differential equations in Theano

I've spent some time messing around with Theano, building and training neural nets. For my senior thesis, I used my own implementation of a Theano neural net to classify waveforms in the XENON100 dark matter detector. I've even spent some time working on LSTMs for text generation (the latter project is essentially my attempt to implement Karpathy's char-rnn in Theano). My understanding is that Theano was built with training neural networks in mind, hence all the emphasis on automatic differentiation and GPU support. In reality, Theano is a general purpose symbolic math library with lots of convenient neural net functionality, like built-in squashing functions, downsampling functions, and even 2D convolution functions. Theano is fast because it compiles the computational graph into C code on the fly. We make Theano functions by creating computational graphs out of symbolic variables. This means that there is some overhead when running Theano code -- my LSTM might take as much as a minute to compile before it starts actually training.
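As a tiny illustrative example of that workflow (not from any particular project of mine):

import theano
import theano.tensor as T

# Build a computational graph out of symbolic variables...
x = T.dvector('x')
y = T.sum(T.tanh(x) ** 2)

# ...then compile it -- this is where the C-compilation overhead happens
f = theano.function([x], [y, T.grad(y, x)])

value, gradient = f([0.1, -0.5, 2.0])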