Thursday, August 3, 2017

(Love letter to) Anand Al Saeed Restaurant

Anand Al Saeed, or simply Anand, is my all-around favorite restaurant in Abu Dhabi. It's located in the same super block as Al Mariah mall (not to be confused with Al Maryah Island/mall). Parking can be a bit of a pain around there, so I generally take advantage of the ample parking by Sun and Sand Sports, a little bit east of the restaurant in the same super block. Like many restaurants in Abu Dhabi, Anand has a family seating area in the back. Unlike many restaurants in Abu Dhabi, this area is often pretty full of families or mixed-gender couples, especially during busy times.

Anand has a small menu, and they specialize in (correct me if I'm wrong in the comments) West Indian veg food. The dal that comes with their thali reminds me a lot of the sweet and spicy dal that my friend's mom (whose family is from Gujarat) cooks up on special occasions. Their menu also features snacks and Punjabi food. I've tried most of the snacks, but I've never had any of the stuff off the Punjabi part of the menu. Walking in at dinner time, I'd say only one out of every ten customers has something off the Punjabi part of the menu, so I've left it alone for now. Anand doesn't have menu cards; rather, they write the menu up on a whiteboard at the back of the restaurant.


My favorite thing off the Anand menu is the thali. Never the same two days in a row, it features three stews, a fried vegetable dish, and chapatis and/or rice. Most of the time the thali will come with another side dish, like warm curdled yoghurt (I'm not a big fan) or just plain yoghurt. Recently, during the later evening (after 8 or 9), they've had pulao available in addition to rice. Pulao is rice that has been cooked with chopped-up veggies (normally carrots and green beans), nuts, and some dried fruit. It is similar to biryani, but the spice profile is a little different, and I would argue that it's more of a side or complementary dish than a meal in itself. As for the stews, generally there will be some combination of sweet and spicy dal, a thick chickpea stew, rajma (kidney bean) stew, or the potato- and tomato-based bhaji they serve with puri bhaji. The fried veggies are my favorite part. I often find myself stuffing my face with oily cabbage or "ivy gourd" (the vegetable in tindly) scooped up with warm chapati.

Thali at Anand is an exercise in restraint and in refusing food. Waiters will come around very frequently with refills for everything. I often end up eating second and third helpings of some dishes before I realize that I'm super full. My tactic is to eat only one or two chapatis and to save the dal for the rice they serve at the end. Fourteen dirhams truly gets you "all you can eat".

Anand only starts serving thali in the afternoon. If I roll up before noon, I generally get the puri bhaji. Puri bhaji is a tomato and potato stew served up with deep-fried dough, or puris. This dish is pretty simple, but also amazing.


Puri Bhaji
A friend of mine, who's also a big fan of Anand, had been telling me for weeks to get their chaat, but I never bothered until very recently. The other day I had the sev puri, pav bhaji, vada pav, and a single pani puri, courtesy of a too-full stranger. The sev puri and vada pav were pretty standard (but nonetheless scrumptious!), while the pav bhaji was a little strange if you're used to the Mumbai-style pav bhaji you get at places like Salam Bombay/Chhappan Bhog.


Sev Puri

Pav Bhaji à la Anand
The thing that I appreciate most about Anand is the fact that the food, for the most part, isn't super indulgent. I can happily eat there a few times a week and not feel icky. Anand feels like the day-to-day cooking that I might do at home. The flip side of this is that if I were to have a guest visit me for a few days in Abu Dhabi, I probably wouldn't take them to Anand. Having a friend visit for a few days is a special occasion, and the food should be celebratory and rich, verging on gratuitous. Anand is tasty, but it's also something you could eat every day.

Tuesday, July 25, 2017

Opal Restaurant

One of my favorite cheap restaurants in Abu Dhabi is Opal, in the port. Open 24 hours a day, Opal serves everything from karak and snacks to full meals. I often find myself going for lunch and for late-night snacks.

For lunch, I generally get thali. I wouldn't say that Opal's thali is the best in the city, but it might be the cheapest. Eight dirhams gets you unlimited stews and rice or chapati, depending on what you choose. I'm partial to the "round rice" with an added one-dirham chapati. The rice goes well with the thinner, more soup-like curries, and the chapati is good for eating up the delicious fried veggies. The stews are, in my mind, pretty unusual in the thali scene of Abu Dhabi. Places like Anand Al Saeed or Evergreen pretty consistently have potato stew (the bhaji that goes with puri bhaji), some sort of sweet dal, and one other legume stew (last time I went to Anand it was kidney bean, or rajma, stew). All this will seem pretty familiar to anyone acquainted with the pure veg fare of Abu Dhabi. Opal's stews are generally fish or coconut based, doing away with the legume-heavy curries of other spots. Perhaps my favorite coconut-based dish at Opal is the diced carrots and beets fried with shaved coconut. Eating this with a piping hot chapati has become one of my greatest culinary pleasures in the city, second only to the oily tindly at Anand.

Part of Opal's thali is a little chunk of grilled fish. This stuff is enormously flavorful, but often very bony. During the lunchtime rush after noon, they also serve a variety of other grilled fish for an additional price (a 10-inch shaari costs ~15 dirhams).

Thali. Starting at the bottom left and going clockwise: fish stew, coconut/lentil stew, pickles, green beans and coconut, chana in a coconut gravy, and some unidentified stew that tastes like salty socks. The fish is buried under the rice, under the unidentified stew. Check out that fat rice! If anyone can tell me what this rice is called, I'll buy you a karak.

As Opal is open 24 hours, friends and I will often end up there late at night (or early in the morning). Milky sweet karak, Chips Oman (and all its delicious variants), and the Zinker are the things to get. I wrote an article a while back about the Chips Oman sandwich, which is a masterpiece of junk food. The basic Chips Oman sandwich is as follows: a freshly made paratha, slathered with cream cheese spread and topped with a crunched-up bag of Chips Oman chili-flavored chips. Roll it up with optional but entirely necessary dakoos (hot sauce, but not Tabasco). For a long time, I was content with this three-to-four-dirham snack. Recently, however, I've ventured into some of the other Chips Oman-based creations they have at places like Opal. Adding a makanak, or hot dog, sliced lengthwise and fried, is pretty fantastic. The "Francisco", a Chips Oman with bits of tandoori chicken, is also darn scrumptious. Opal and neighboring City Gate Restaurant have a whole list of Chips Oman sandwiches. I think the Opal special is eight dirhams and has a bunch of seafood on it.
Chips Oman wa makanak: Chips Oman with a hot dog.
The Francisco: Chips Oman with Tandoori Chicken. You can bet your last dirham I got some dakoos with this

Karak :)
The Zinker is a sandwich modeled off (as far as I can tell) KFC's Zinger sandwich. Usually it consists of a hot dog bun with fried chicken, cream cheese spread, some dakoos, and some veggies. Most Zinkers that I've had have also been grilled to crisp up the trash hot dog bun.

Opal, and the port in which it resides, is a space in which men from all parts of Abu Dhabi society come to eat and shop. I say men because rarely does one see women in this part of the city. On the few occasions I've seen women at Opal or in the port, they've either stayed in their cars or been in a mixed-gender group. Sitting at Opal in the evening, I've seen all sorts of men roll up to get a cup of karak and a snack: taxi drivers, street cleaners, construction workers, men in slick, pressed kanduras and ghutrahs, men in business suits, men wearing shalwar kameez, and men in slacks and button-down shirts. I've heard people speaking tons of languages from South Asia and beyond, some I can identify, others I cannot. There are few other places in the city where you can see so many different people from so many different strata of Abu Dhabi society. A sort of Arabic/English/Urdu-plus-others pidgin has emerged in places like the port to contend with the fact that there are people speaking so many different languages. Most of the people manning the vegetable and fruit stands here can speak this pidgin.


Monday, February 27, 2017

Configuring Raspberry Pi as Bluetooth speaker

Let's say you have a rad stereo system in your house. Maybe you even have an aux cable you use to play music from your phone or laptop. What if you want to be able to control the stereo over Bluetooth? While I'm sure there are other options, I decided to use a Raspberry Pi 3 and a HiFiBerry DAC+. The normal audio out on the Raspberry Pi is really quite awful, so the DAC+ is necessary if you want to play any sort of high-quality sound.

Configuring the device to play sound over Bluetooth is not as straightforward as it is on your laptop. The purpose of this post is to put all the necessary resources for going about this in one place. I spent about 40 minutes digging around forums before I found an explanation that worked for me.

The first thing you're going to want to do is configure the Bluetooth on your Raspberry Pi 3 to be able to play audio. User `dlech` has a great tutorial on getting this to work on this GitHub page. Basically, you have to install `PulseAudio`, replace a few configuration files, create a PulseAudio service, and then connect to the device from which you want to play music.

Next, you have to configure the DAC+. This article explains how to replace the default audio out with the newly installed DAC+ board. 

In summary:

- Play audio over Bluetooth: Here
- Configure DAC+ board: Here


Tuesday, January 10, 2017

Theano LSTM

I'm just going to show a code snippet that shows the main forward step of a single-layer Long Short-Term Memory (LSTM) recurrent neural network (RNN). I'm not going to go through how it all works, as there are a ton of great resources online for RNNs. Check out this, by Andrej Karpathy, this tutorial on implementing RNNs in NumPy and Theano, or this fantastic explanation of LSTMs. Note that I've gotten this to work with the MNIST dataset, resulting in some crazy low error rates. If you're interested in seeing that code, let me know!
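Here's a minimal sketch of that forward step. The layer sizes, weight initialization, and variable names below are just illustrative choices, not anything canonical:

```python
import numpy as np
import theano
import theano.tensor as T

n_in, n_hidden = 28, 100  # illustrative sizes (e.g. one MNIST row per timestep)
floatX = theano.config.floatX

# Input-to-hidden and hidden-to-hidden weights for the four gates
# (input, forget, output, candidate), stacked side by side.
W = theano.shared(0.01 * np.random.randn(n_in, 4 * n_hidden).astype(floatX))
U = theano.shared(0.01 * np.random.randn(n_hidden, 4 * n_hidden).astype(floatX))
b = theano.shared(np.zeros(4 * n_hidden, dtype=floatX))

def _slice(m, k):
    # pull out the k-th gate's pre-activation
    return m[:, k * n_hidden:(k + 1) * n_hidden]

def step(x_t, h_prev, c_prev):
    # one forward step of the LSTM
    preact = T.dot(x_t, W) + T.dot(h_prev, U) + b
    i = T.nnet.sigmoid(_slice(preact, 0))  # input gate
    f = T.nnet.sigmoid(_slice(preact, 1))  # forget gate
    o = T.nnet.sigmoid(_slice(preact, 2))  # output gate
    g = T.tanh(_slice(preact, 3))          # candidate cell state
    c_t = f * c_prev + i * g               # update the cell state
    h_t = o * T.tanh(c_t)                  # gated view of the cell state
    return h_t, c_t

# x has shape (timesteps, batch, n_in); scan applies step along the first axis
x = T.tensor3('x')
(h_seq, c_seq), _ = theano.scan(
    step,
    sequences=x,
    outputs_info=[T.zeros((x.shape[1], n_hidden)),
                  T.zeros((x.shape[1], n_hidden))])
```

For MNIST you'd feed each image in row by row (28 timesteps of 28 pixels each) and stick a softmax layer on top of the last entry of `h_seq`.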



Saturday, December 24, 2016

LED shirt with pocket

Over the past few years my brother has given me some pretty sweet t-shirts. He is a professional screen printer, so he has at his disposal some pretty awesome facilities for making shirts and posters. This year I decided to give him a shirt in return. Instead of trying to print something, I figured I use my Arduino skillz to make a cool wearable. Originally I wanted to arrange some LED sequins in the shape of two boobs, but after seeing the LEDs I decided to arrange them randomly on the shirt. The LEDs are really bright, so I thought it would be best if I let them speak for themselves.

This was the first time I've made something outside of NYUAD. I had to order the everything online, which ended up being kind of pricey. It's sort of crazy how spoiled we get at school! For now, I've just got the LEDs on all the time. I'm using a Lilypad USB for this project. I chose it because it has a lot of pins, and its relatively small. I ended up buying a big battery (2000 mAh) so the shirt will stay on for several consecutive days. The size of the battery meant that I had to create some means of making sure the battery didn't swing around when the user (my brother) was wearing the shirt. I ended up just sewing on a pocket, with the help of my mom.

Below you can see the progression of how the shirt was made.

Friday, September 23, 2016

t-SNE in Python (part 2)

As promised, in this post I'll talk about my implementation of the t-SNE dimensionality reduction algorithm in Python. Sorry in advance for the lack of syntax highlighting -- I haven't figured that out yet. I've made a version that explicitly calculates the gradient of each vector in the reduced dataset $Y$, and another version that employs Theano's grad function. The Numpy version was a bit tricky to implement. The Theano version, for once, was actually easier than the Numpy version, because all you do is just slam in T.grad. I'll start with the Numpy version and then move on to the Theano version. Before we jump into any code, let's state the gradient of the cost function:
$$
\frac{\partial Cost}{\partial y_{i}} = 4\sum_{j} (p_{ij} - q_{ij})(y_i - y_j)(1 + ||y_i - y_j||^2)^{-1}
$$
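A minimal sketch of the explicit NumPy version looks something like this. The variable names are mine, and `P`, `Q`, and the matrix of $(1 + ||y_i - y_j||^2)^{-1}$ values are assumed to be computed elsewhere in the implementation:

```python
import numpy as np

def tsne_grad(P, Q, Y, inv_dist):
    # P, Q: (n, n) joint probability matrices in high/low dimensional space
    # Y: (n, l) current low dimensional embedding
    # inv_dist: (n, n) matrix of (1 + ||y_i - y_j||^2)^-1 values
    n, l = Y.shape
    PQ = (P - Q) * inv_dist  # elementwise (p_ij - q_ij)(1 + ||y_i - y_j||^2)^-1
    dY = np.zeros((n, l))
    for i in range(n):
        # sum over j of PQ[i, j] * (y_i - y_j)
        dY[i] = 4.0 * np.dot(PQ[i], Y[i] - Y)
    return dY
```

The Theano version builds the same quantities symbolically, so the gradient itself collapses into a single call to T.grad. Again, a sketch with my own names (the epsilon is there to keep the log away from the zeroed diagonal):

```python
import theano
import theano.tensor as T

P = T.matrix('P')  # high dimensional joint probabilities, precomputed
Y = T.matrix('Y')  # low dimensional embedding

# pairwise squared distances between the rows of Y
sum_Y = T.sum(T.sqr(Y), axis=1)
D = sum_Y.dimshuffle(0, 'x') + sum_Y.dimshuffle('x', 0) - 2.0 * T.dot(Y, Y.T)
inv_dist = (1.0 / (1.0 + D)) * (1.0 - T.identity_like(D))  # zero the diagonal
Q = inv_dist / T.sum(inv_dist)

eps = 1e-12
cost = T.sum(P * T.log((P + eps) / (Q + eps)))
dY = T.grad(cost, Y)  # Theano does the calculus
grad_fn = theano.function([P, Y], dY)
```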
Whaaaaaaa??? Take 11 lines of code and turn it into 2? Theano, you've stolen my heart! The crazy part about this is that it even runs a little faster than the Numpy version. As always, there is some overhead (around 3 seconds on my machine) to compile the function that computes the gradient. When 'training' the t-SNE algorithm, however, the Theano version is about 1.5x faster, so you quickly make back this time.

Tuesday, September 20, 2016

t-SNE in Python (part 1)

In reading some papers about Tomas Mikolov's word2vec algorithm for creating word embeddings, I came across a cool method for visualizing high dimensional data. I have some experience with methods that are used for dimensionality reduction, like PCA and autoencoders (check out an old but presumably working version here), but I'd never encountered something that was purpose-built for visualizing high dimensional data in two or three dimensions. t-SNE (t-distributed stochastic neighbor embedding) attempts to minimize a 'difference' function (I'll clarify this in a hot sec) between what amounts to distances in high and low dimensional space. Here, we are calculating 'distances' (really probabilities) between vectors in the dataset. t-SNE has a homepage here, where you can find a bunch of resources. The original paper (also on the homepage) is pretty straightforward in its explanation of the algorithm. I'll quickly describe what's going on with the algorithm before discussing one aspect of it that I've been working on.

Say you have a set of vectors $x_0, x_1, x_2, \dots, x_N \in X$, each with dimension $k$. As we're dealing with dimensionality reduction, assume that $k$ is something large. For example, we could be dealing with the MNIST dataset, where $k$ would be 784. The goal is to generate some good representation of $X$ in a lower dimensional space, say of dimension $l$. Ideally $l$ would be something like 2 or 3, so we could see our data on a 2- or 3-D scatter plot. Let's call this low dimensional representation $Y$. The goal here is to preserve the relative position of each vector in $X$ in our new representation $Y$. In other words, for each vector $x_i$ we want to be able to create an embedding $y_i$ that preserves some distance metric between each other vector in both $X$ and $Y$. t-SNE uses two different distance metrics -- one for the vectors in $X$ and another for the vectors in $Y$. For a detailed look at why they do this, check out their paper. For each $x_i$ we can calculate an associated probability $p_{j|i}$ that looks as follows:
$$
p_{j|i} = \frac{exp(-{||x_i - x_j||}^{2} / 2\sigma_i^2)}{\sum_{k\ne i}exp(-{||x_i - x_k||}^{2} / 2\sigma_i^2)}
$$

Here $x_j$ is another vector in $X$. As such, we are calculating the probability of vector $j$ given vector $i$. I don't think that this distribution is entirely intuitive, but you can think of it as the probability that $x_i$ would 'choose' vector $x_j$. As van der Maaten points out in their paper, vectors that are close to each other (as in, their euclidean distance is near zero) will have a relatively high probability, while those with larger distances will have a much lower probability. In practice, one doesn't actually use this probability distribution. Instead, we use a joint probability distribution over indices $i$ and $j$. I was puzzled by the normalization factor, and it only became clear to me once I took a peek at some code.
$$
p_{ij} = \frac{exp(-{||x_i - x_j||}^{2} / 2\sigma^2)}{\sum_{k\ne l}exp(-{||x_k - x_l||}^{2} / 2\sigma^2)} \\
p_{ij} = \frac{p_{i|j} + p_{j|i}}{2n}
$$

The second line above has to do with a symmetry consideration. Again, if you're interested, check out van der Maaten's paper. The way I think of this is as follows: We calculate the $p_{j|i}$ as we normally would. What we get from this is an $n \times n$ matrix, where $n$ is the number of vectors in our dataset. Note that as a joint probability matrix, this sucker is not yet normalized! We then add the transpose of the unnormalized probability distribution to the $p_{j|i}$ matrix, and normalize that.

Van der Maaten and crew also define a joint probability distribution for the vectors in $Y$. Instead of using the same distribution as for the vectors in $X$, t-SNE employs a Student t-distribution with a single degree of freedom. This choice is well motivated, as it avoids overcrowding in the lower dimensional space: the Student t-distribution has fatter tails than the exponential distribution. Note that we could use an exponential distribution (with no sigma factor), and we would simply be doing symmetric SNE. Anyways, the joint probability distribution looks as follows:
$$
q_{ij} = \frac{{(1 + {||y_i - y_j||}^{2})}^{-1}}{\sum_{k\ne l}{(1 + {||y_k - y_l||}^{2})}^{-1}}
$$

As before, that $l$ index confused me quite a bit before I saw how this algorithm got implemented in some code. Basically the idea is that we calculate a matrix full of unnormalized $q_{ij}$s, and then normalize over the whole matrix. The cost function that we minimize in order to generate the best embedding $y_i$ for each vector $x_i$ looks as follows.

$$
Cost = \sum_{i} \sum_{j} p_{ij} log(\frac{p_{ij}}{q_{ij}})
$$
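To make the bookkeeping above concrete, here's a small NumPy sketch of how I think of these matrices. The function names and the epsilon guard are my own:

```python
import numpy as np

def joint_P(cond_P):
    # cond_P[i, j] holds p_{j|i}; symmetrize, then normalize over the whole matrix
    n = cond_P.shape[0]
    return (cond_P + cond_P.T) / (2.0 * n)

def joint_Q(Y):
    # Student t-distribution (one degree of freedom) over pairwise distances in Y
    sum_Y = np.sum(Y ** 2, axis=1)
    D = sum_Y[:, None] + sum_Y[None, :] - 2.0 * Y @ Y.T
    inv = 1.0 / (1.0 + D)
    np.fill_diagonal(inv, 0.0)  # a point's 'distance' to itself doesn't count
    return inv / inv.sum()

def kl_cost(P, Q):
    # the cost above: KL divergence between the two joint distributions
    eps = 1e-12  # keep the log away from the zeroed diagonal
    return np.sum(P * np.log((P + eps) / (Q + eps)))
```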
We can minimize this cost using conventional neural net techniques, like gradient descent. Van der Maaten and co spend some time talking about the gradient of the cost function, but I thought I would leave it up to Theano to do that... The problem I've been concerned with is calculating the free parameter $\sigma_i$ in $p_{j|i}$. In their paper, Van der Maaten and co. say that they use binary search to find an optimal value of $\sigma_i$. I thought it would be cool if I could leverage the machinery of Scipy/Numpy (and even Theano!) to do this. First, however, a description of the problem. t-SNE has a hyperparameter, called the perplexity, that determines the value of $\sigma_i$ for each $x_i$. The perplexity is the same for every vector in $X$. One calculates the perplexity as follows:
$$
Perplexity(x_i) = 2^{H_i} \\
H_i = - \sum_{j} p_{j|i} log_{2}(p_{j|i}) $$
$H_i$ is called the entropy (which looks the same as entropy in physics, minus the Boltzmann factor). So let's say we set the perplexity to be 10; the goal is to find a value of $\sigma_i$ for each $x_i$ such that $Perplexity(x_i)$ is 10. In principle this is just a matter of creating some Python function that has $\sigma_i$ as an argument, and using Scipy's optimize module to find the minimum of the squared difference between the chosen perplexity and the calculated value. I wondered, however, if I could use Newton's method, in conjunction with Theano's symbolic gradient, to do this. It turns out that I couldn't, so I just stuck with Scipy's optimize module; a sketch of that search is below. In the next post, I'll talk about my implementation of the t-SNE algorithm, using conventional Numpy and Theano.
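Here's a minimal sketch of that search for a single $x_i$. The function names and the bounds are my own choices, and a real implementation would loop this over every row of the pairwise distance matrix:

```python
import numpy as np
from scipy import optimize

def perplexity(sigma, sq_dists_i):
    # sq_dists_i: squared distances ||x_i - x_j||^2 for all j != i
    d = sq_dists_i - sq_dists_i.min()    # shift for stability; cancels in the ratio
    p = np.exp(-d / (2.0 * sigma ** 2))
    p /= p.sum()                         # the p_{j|i} for this i
    H = -np.sum(p * np.log2(p + 1e-12))  # the entropy H_i
    return 2.0 ** H

def find_sigma(sq_dists_i, target=10.0):
    # minimize the squared difference between achieved and target perplexity
    err = lambda s: (perplexity(s, sq_dists_i) - target) ** 2
    res = optimize.minimize_scalar(err, bounds=(1e-3, 1e3), method='bounded')
    return res.x

# e.g. for the first vector of a random 'dataset':
X = np.random.randn(100, 784)
sq_d = np.sum((X[0] - X[1:]) ** 2, axis=1)
sigma_0 = find_sigma(sq_d, target=10.0)
```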