The post Structures for Neural Language Modeling approached the “structures or not” question from one side: network design. In this post I want to talk about how networks, without explicit, man-made structures, can learn structures from the data themselves.
In this post I will focus on structures in the model rather than structures in the language.
I will post mini-reviews of papers & code I read (hopefully) every week. This post covers papers on neural language modeling with different forms of syntactic information.
A super short introduction to two models by MILA-UDEM.
This is an installation guide for people who struggle with Theano with CUDA, and possibly OpenBLAS, on Windows, which is typically a Dark Art itself… Technical, not interesting.
During summer 2014 I traveled to the US and spent a month at Cornell. Here are some photos and notes.
The pencil of legends. Arguably the best pencil ever.
A pleasure just to hold in hand.