Tuesday, September 8, 2015
I've taken a little break from neural networks lately to work on some code that solves Poisson's equation in two dimensions using the finite element method (FEM). This is something I've been wanting to do for a long time now, but I've spent far more time lusting than doing. FEM is cool because it can deal with irregular domains. I've messed around a little with the finite difference method (FDM), but that only works on regular domains. If you wanted to use FDM on a non-rectangular shape (or, in polar coordinates, a non-circular one), even just an 'L'-shape, you'd have to do some nasty boundary condition matching, or just give up. If you do FEM on a rectangular domain, the matrix you end up solving looks exactly like the one you'd use for FDM. In this post, I'm just going to derive the weak form of the Poisson equation with zero Dirichlet boundary conditions.
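For reference, the weak form I'm after is the standard textbook statement, nothing specific to my code: start from -∇²u = f on the domain Ω with u = 0 on the boundary ∂Ω, multiply by a test function v that also vanishes on the boundary, integrate over Ω, and use Green's identity to move one derivative onto v. The boundary term drops out because v = 0 on ∂Ω, leaving

\int_\Omega \nabla u \cdot \nabla v \, dx \;=\; \int_\Omega f \, v \, dx
\qquad \text{for all } v \in H^1_0(\Omega).

The FEM linear system then comes from restricting u and v to a finite-dimensional space of basis functions defined on the mesh.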
Tuesday, September 1, 2015
Recursive Neural Nets
I've been looking for a way to up my haiku game for a while now, and I think I've found it. Recursive, or recurrent, neural networks (RNNs) are a novel (for me) neural network architecture that can be "'deep' in both space and time, in the sense that every piece of information passing either vertically or horizontally through the computation graph will be acted on by multiple successive weight matrices and nonlinearities" (Graves, 2014). Sorry for the long quote there, but I feel like it encapsulates the architecture of RNNs pretty well. Instead of just passing input vectors through a series of vertical (or horizontal) layers, like in a feedforward network, the activations at each step are formed using both the current input and the previous activations of the same layer, through that layer's own weight matrix, and are then fed forward through the network in the usual manner. If you want to visualize this a little better, look at this page (which is also the source of my inspiration for this little project). A lot of articles, like the one I quoted above, talk about propagation in time and space. This is just a convenient way of talking about 'moving through' (another way of saying 'doing lots of matrix-vector products') a neural network. A normal multilayer perceptron (MLP), like the one I made/borrowed, only moves through the spatial direction, which means that successive iterations have no effect on each other. RNNs are essentially feedforward networks that are connected to each other at some level, usually through the hidden layer. This means that the hidden layers have their own recurrent weight matrices, as sketched below. If you think that my understanding of RNNs or MLPs is incorrect, please let me know, as I'm still trying to learn and understand this stuff.
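To make the "hidden layers have their own weight matrices" idea concrete, here is a minimal NumPy sketch of a single recurrent step. Everything in it (the names rnn_step, W_xh, W_hh, W_hy, the toy sizes) is just illustrative, not code from my MLP or from any particular library:

import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, W_hy, b_h, b_y):
    # The hidden layer mixes the current input with its own previous
    # activation through a separate recurrent weight matrix W_hh.
    h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)
    # The output is then produced in the usual feedforward way.
    y_t = W_hy @ h_t + b_y
    return h_t, y_t

# Toy dimensions and random weights, just to show the shapes involved.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 4
W_xh = 0.1 * rng.standard_normal((n_hid, n_in))
W_hh = 0.1 * rng.standard_normal((n_hid, n_hid))
W_hy = 0.1 * rng.standard_normal((n_out, n_hid))
b_h, b_y = np.zeros(n_hid), np.zeros(n_out)

h = np.zeros(n_hid)                        # initial hidden state
for x_t in rng.standard_normal((3, n_in)): # a short input sequence
    h, y = rnn_step(x_t, h, W_xh, W_hh, W_hy, b_h, b_y)

Dropping the W_hh @ h_prev term recovers an ordinary feedforward layer, which is exactly the sense in which successive iterations of an MLP have no effect on each other.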