Deep Learning Toolbox


A Matlab toolbox for Deep Learning.
Deep Learning is a new subfield of machine learning that focuses on learning deep hierarchical models of data. It is inspired by the human brain’s apparent deep (layered, hierarchical) architecture. A good overview of the theory of Deep Learning theory is Learning Deep Architectures for AI

For a more informal introduction, see the following videos by Geoffrey Hinton and Andrew Ng.

If you use this toolbox in your research please cite:

Prediction as a candidate for learning deep hierarchical models of data (Palm, 2012)

Directories included in the toolbox
NN/ – A library for Feedforward Backpropagation Neural Networks

CNN/ – A library for Convolutional Neural Networks

DBN/ – A library for Deep Belief Networks

SAE/ – A library for Stacked Auto-Encoders

CAE/ – A library for Convolutional Auto-Encoders

util/ – Utility functions used by the libraries

data/ – Data used by the examples

tests/ – Unit tests to verify the toolbox is working

For references on each library, check REFS.md.
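To give a feel for how the libraries fit together, here is a minimal sketch of training a feedforward network with the NN/ library on the bundled MNIST data. The function names (nnsetup, nntrain, nntest) and the mnist_uint8 data file follow the toolbox’s own examples, but exact signatures may differ between versions, so treat this as a sketch rather than canonical usage:

```matlab
% Load MNIST (shipped with the toolbox in data/ as mnist_uint8.mat)
load mnist_uint8;
train_x = double(train_x) / 255;   % scale pixels to [0, 1]
test_x  = double(test_x)  / 255;
train_y = double(train_y);
test_y  = double(test_y);

rand('state', 0);                  % fix the seed for reproducibility
nn = nnsetup([784 100 10]);        % 784-100-10 feedforward network
opts.numepochs = 1;                % one pass over the training data
opts.batchsize = 100;              % mini-batches of 100 examples
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);  % er = classification error rate
```

The same setup/train/test pattern recurs in the CNN, DBN, and SAE libraries; see the tests/ directory for worked examples of each.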

 

 

https://github.com/rasmusbergpalm/DeepLearnToolbox

mathworks.com/matlabcentral/fileexchange/38310-deep-learning-toolbox

Deep Learning Research Groups

Some labs and research groups that are actively working on deep learning:

University of Toronto – Machine Learning Group (Geoffrey Hinton, Rich Zemel, Ruslan Salakhutdinov, Brendan Frey, Radford Neal)

Université de Montréal – MILA Lab (Yoshua Bengio, Pascal Vincent, Aaron Courville, Roland Memisevic)

New York University – Yann LeCun, Rob Fergus, David Sontag and Kyunghyun Cho

Stanford University – Andrew Ng’s, Christopher Manning’s, and Fei-Fei Li’s groups

University of Oxford – Deep learning group, Nando de Freitas, Phil Blunsom, Andrew Zisserman

Google Research – Jeff Dean, Geoffrey Hinton, Samy Bengio, Ilya Sutskever, Ian Goodfellow, Oriol Vinyals, Dumitru Erhan, Quoc Le et al.

Google DeepMind – Alex Graves, Karol Gregor, Koray Kavukcuoglu, Andriy Mnih, Guillaume Desjardins, Xavier Glorot, Razvan Pascanu, Volodymyr Mnih et al.

Facebook AI Research (FAIR) – Yann LeCun, Rob Fergus, Jason Weston, Antoine Bordes, Soumith Chintala, Léon Bottou, Ronan Collobert, Yann Dauphin et al.

Twitter’s Deep Learning Group – Hugo Larochelle, Ryan Adams, Clement Farabet et al.

Microsoft Research – Li Deng et al.

SUPSI – IDSIA (Jürgen Schmidhuber’s group)

UC Berkeley – Bruno Olshausen’s group, Trevor Darrell’s group, Pieter Abbeel

UCLA – Alan Yuille

University of Washington – Pedro Domingos’ group

IDIAP Research Institute – Ronan Collobert’s group

University of California Merced – Miguel A. Carreira-Perpiñán’s group

University of Helsinki – Aapo Hyvärinen’s Neuroinformatics group

Université de Sherbrooke – Hugo Larochelle’s group

University of Guelph – Graham Taylor’s group

University of Michigan – Honglak Lee’s group

Technical University of Berlin – Klaus-Robert Müller’s group

Baidu – Kai Yu’s and Andrew Ng’s groups

Aalto University – Juha Karhunen’s and Tapani Raiko’s groups

U. Amsterdam – Max Welling’s group

CMU – Chris Dyer

U. California Irvine – Pierre Baldi’s group

Ghent University – Benjamin Schrauwen’s group

University of Tennessee – Itamar Arel’s group

IBM Research – Brian Kingsbury et al.

University of Bonn – Sven Behnke’s group

Gatsby Unit @ University College London – Maneesh Sahani, Peter Dayan

Computational Cognitive Neuroscience Lab @ University of Colorado Boulder

Deep Learning software


  1. Theano – CPU/GPU symbolic expression compiler in python (from MILA lab at University of Montreal)
  2. Torch – provides a Matlab-like environment for state-of-the-art machine learning algorithms in lua (from Ronan Collobert, Clement Farabet and Koray Kavukcuoglu)
  3. Pylearn2 – Pylearn2 is a library designed to make machine learning research easy.
  4. Blocks – A Theano framework for training neural networks
  5. TensorFlow – TensorFlow is an open source software library for numerical computation using data flow graphs.
  6. MXNet – MXNet is a deep learning framework designed for both efficiency and flexibility.
  7. Caffe – Caffe is a deep learning framework made with expression, speed, and modularity in mind.
  8. Lasagne – Lasagne is a lightweight library to build and train neural networks in Theano.
  9. Keras – A Theano-based deep learning library.
  10. Deep Learning Tutorials – examples of how to do Deep Learning with Theano (from LISA lab at University of Montreal)
  11. DeepLearnToolbox – A Matlab toolbox for Deep Learning (from Rasmus Berg Palm)
  12. Cuda-Convnet – A fast C++/CUDA implementation of convolutional (or more generally, feed-forward) neural networks. It can model arbitrary layer connectivity and network depth. Any directed acyclic graph of layers will do. Training is done using the back-propagation algorithm.
  13. Deep Belief Networks. Matlab code for learning Deep Belief Networks (from Ruslan Salakhutdinov).
  14. RNNLM – Tomas Mikolov’s Recurrent Neural Network based Language Models Toolkit.
  15. RNNLIB – RNNLIB is a recurrent neural network library for sequence learning problems. Applicable to most types of spatiotemporal data, it has proven particularly effective for speech and handwriting recognition.
  16. matrbm. Simplified version of Ruslan Salakhutdinov’s code, by Andrej Karpathy (Matlab).
  17. deeplearning4j – Deeplearning4j is an Apache 2.0-licensed, open-source, distributed neural net library written in Java and Scala.
  18. Estimating Partition Functions of RBMs. Matlab code for estimating partition functions of Restricted Boltzmann Machines using Annealed Importance Sampling (from Ruslan Salakhutdinov).
  19. Learning Deep Boltzmann Machines Matlab code for training and fine-tuning Deep Boltzmann Machines (from Ruslan Salakhutdinov).
  20. The LUSH programming language and development environment, which is used @ NYU for deep convolutional networks
  21. Eblearn.lsh is a LUSH-based machine learning library for doing Energy-Based Learning. It includes code for “Predictive Sparse Decomposition” and other sparse auto-encoder methods for unsupervised learning. Koray Kavukcuoglu provides Eblearn code for several deep learning papers on this page.
  22. deepmat – Matlab-based deep learning algorithms.
  23. MShadow – MShadow is a lightweight CPU/GPU matrix/tensor template library in C++/CUDA. It aims to be an efficient, device-invariant, and simple tensor library for machine learning projects, with both simplicity and performance in mind. Supports CPU/GPU/multi-GPU and distributed systems.
  24. CXXNET – CXXNET is a fast, concise, distributed deep learning framework based on MShadow. It is a lightweight and easily extensible C++/CUDA neural network toolkit with a friendly Python/Matlab interface for training and prediction.
  25. Nengo – Nengo is a graphical and scripting based software package for simulating large-scale neural systems.
  26. Eblearn is a C++ machine learning library with a BSD license for energy-based learning, convolutional networks, vision/recognition applications, etc. EBLearn is primarily maintained by Pierre Sermanet at NYU.
  27. cudamat is a GPU-based matrix library for Python. Example code for training Neural Networks and Restricted Boltzmann Machines is included.
  28. Gnumpy is a Python module that interfaces in a way almost identical to numpy, but does its computations on your computer’s GPU. It runs on top of cudamat.
  29. The CUV Library (github link) is a C++ framework with python bindings for easy use of Nvidia CUDA functions on matrices. It contains an RBM implementation, as well as annealed importance sampling code and code to calculate the partition function exactly (from AIS lab at University of Bonn).
  30. 3-way factored RBM and mcRBM is python code calling CUDAMat to train models of natural images (from Marc’Aurelio Ranzato).
  31. Matlab code for training conditional RBMs/DBNs and factored conditional RBMs (from Graham Taylor).
  32. mPoT is python code using CUDAMat and gnumpy to train models of natural images (from Marc’Aurelio Ranzato).
  33. neuralnetworks is a Java-based GPU library for deep learning algorithms.
  34. ConvNet is a Matlab-based convolutional neural network toolbox.
  35. Elektronn is a deep learning toolkit that makes powerful neural networks accessible to scientists outside the machine learning community.
  36. OpenNN is an open source class library written in C++ programming language which implements neural networks, a main area of deep learning research.
  37. NeuralDesigner is an innovative deep learning tool for predictive analytics.

 

 

 

 

Datasets

These datasets can be used for benchmarking deep learning algorithms:

Symbolic Music Datasets


Natural Images


Artificial Datasets

Faces


Text


Speech


Recommendation Systems

  • MovieLens: Two datasets available from http://www.grouplens.org. The first dataset has 100,000 ratings for 1682 movies by 943 users, subdivided into five disjoint subsets. The second dataset has about 1 million ratings for 3900 movies by 6040 users.
  • Jester: This dataset contains 4.1 million continuous ratings (-10.00 to +10.00) of 100 jokes from 73,421 users.
  • Netflix Prize: Netflix released an anonymised version of their movie rating dataset; it consists of 100 million ratings, done by 480,000 users who have rated between 1 and all of the 17,770 movies.
  • Book-Crossing dataset: This dataset is from the Book-Crossing community, and contains 278,858 users providing 1,149,780 ratings about 271,379 books.

Misc

