Common Representation Learning

Publications

  1. Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning
    Janarthanan Rajendran, Mitesh M Khapra, Sarath Chandar, Balaraman Ravindran, arXiv, 2015.
    [pdf]

  2. Correlational Neural Networks
    Sarath Chandar, Mitesh M Khapra, Hugo Larochelle, Balaraman Ravindran, To appear in Neural Computation, 2015.
    [pdf]

  3. From Multiple Views to Single View: A Neural Network Approach
    Subendhu Rongali, Sarath Chandar, Balaraman Ravindran, Second ACM-IKDD Conference on Data Sciences, 2015.
    [pdf]

  4. An Autoencoder Approach to Learning Bilingual Word Representations
    Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh M Khapra, Balaraman Ravindran, Vikas Raykar, Amrita Saha, NIPS 2014.
    [pdf, code]

  5. Multilingual Deep Learning
    Sarath Chandar, Mitesh M Khapra, Balaraman Ravindran, Vikas Raykar, Amrita Saha, NIPS Deep Learning Workshop, 2013.
    [pdf]


Word Vectors

These word vectors were learnt using binary bag-of-words reconstruction training with merged bags-of-words. For details on how they are trained, refer to the NIPS 2014 paper.
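The training idea can be sketched as follows. This is a hypothetical toy illustration, not the released code: an autoencoder is trained to reconstruct a merged bilingual binary bag-of-words vector, so the hidden layer becomes a representation shared across both languages. All sizes, variable names, and the toy data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy vocabulary and hidden sizes (illustrative only).
vocab_en, vocab_de, hidden = 6, 6, 4
dim = vocab_en + vocab_de

W1 = rng.normal(0, 0.1, (hidden, dim))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (dim, hidden))
b2 = np.zeros(dim)

# Toy "parallel corpus": each row is a merged binary bag-of-words
# [English words | German words] for one aligned sentence pair.
data = (rng.random((20, dim)) < 0.3).astype(float)

def forward(z):
    h = sigmoid(W1 @ z + b1)       # shared bilingual representation
    z_hat = sigmoid(W2 @ h + b2)   # reconstruction of the merged bag
    return h, z_hat

def xent(z, z_hat):
    eps = 1e-9
    return -np.sum(z * np.log(z_hat + eps) + (1 - z) * np.log(1 - z_hat + eps))

lr = 0.1
losses = []
for epoch in range(100):
    total = 0.0
    for z in data:
        h, z_hat = forward(z)
        total += xent(z, z_hat)
        # Backprop for a sigmoid output with cross-entropy loss.
        d_out = z_hat - z
        d_hid = (W2.T @ d_out) * h * (1 - h)
        W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
        W1 -= lr * np.outer(d_hid, z); b1 -= lr * d_hid
    losses.append(total)

# At test time, a bag from one language alone can be encoded by
# zeroing the other language's half of the merged vector.
en_only = data[0].copy()
en_only[vocab_en:] = 0.0
h_en, _ = forward(en_only)
```

In this sketch, encoding a monolingual bag simply zeroes the other half of the input, which is what lets a single hidden layer serve as a common bilingual space; the paper's actual training details differ, so treat this only as a conceptual illustration.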

English - German
English - French
English - Spanish

If you use these representations in any of your experiments or applications, please cite the NIPS 2014 paper.

Code

Code for the NIPS 2014 paper is available here. If you have any queries regarding the code, mail sarathcse2008@gmail.com.