Fundamentals Of Practical Deep Learning
29 Feb 2016
In the first week of this month I had the pleasure of attending Fundamentals Of Practical Deep Learning - a two-day course organised by Deep Learning London.
I have been interested in deep learning for a while but didn't have the time or a relevant project to work on. My interest was boosted first by confirmation that androids do dream of electric sheep, and then by the news that Google was open sourcing its AI engine. This was enough to get me started - I attended a few workshops, both very practical and theoretical ones. At that stage I started to look for a more substantial effort to harness deep learning. Fundamentals Of Practical Deep Learning looked just ideal.
Workshop
The organisers suggested experience programming in C++, Python and Java (I only had the first two) and basic knowledge of calculus, linear algebra, and probability theory (all checked). I think this was the bare minimum needed to follow the workshop - it was also well worth having working experience in data processing. This should be enough to crunch through the offerings, which were:
- theory, delivered by Dr. Tapani Raiko and Pyry Takala, his PhD student;
- a description of NVIDIA hardware by Alison B. Lowndes, Deep Learning Solution Architect;
- practice sessions covering the most popular frameworks - Caffe, Torch and Theano;
- networking with other deep learners;
- good venue and food, always convenient when you want to focus on learning.
Was it worth it?
It was great, well worth the price:
- I learned a lot more theory than I could have on my own. I now have a lot of additional references, which I am still digging through.
- The practical sessions supported my understanding of the theory and gave me a taste of the different frameworks. They were run in the cloud, so it was possible to follow along on any hardware.
- I met a lot of people at different levels of deep learning experience - the whole spectrum from true experts to starters like me. There was also a truly geographical spread: participants came not only from the UK but from as far as Finland, Russia, Israel and Singapore. Discussions with them were very rewarding. Equally important was discussing the uses of deep learning - a lot of people came with specific ideas, which again helped me understand how to apply it.
Overall I felt this was a very good way to learn more about deep learning and get ready to use it in practice. There is a lot of work ahead of me, but this workshop gave me enough background knowledge to actually start. One of the biggest takeaways was understanding the limitations of deep learning projects and (I think) how to handle them in practice.
Next step
Before I came on the course I had located a possible project within the university that I could use deep learning on. As part of my preparation I discussed the use of HPC computing at the University of Nottingham. Discussions with NVIDIA staff at the workshop gave me a better understanding of what hardware I would need to carry out the task. The original project has evolved in the meantime, which led me to a slight change of plans.
At the moment I want to train a small Neural Network locally using TensorFlow by the end of March. Why TensorFlow, after learning about Caffe, Torch and Theano? From discussions with other users I find it well documented and reasonably easy to get running, if you are happy with Docker. There is also a free Udacity course covering it. Based on this experience I would like to look at an image recognition related project and its implementation on the University of Nottingham HPC's GPUs, hopefully before the end of May. If this works well, I would anticipate starting work on the project that got all this started towards the end of the year.
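To give a flavour of what "training a small neural network" involves, here is a framework-agnostic sketch in plain NumPy (all names here are my own, for illustration only - not from the course or any framework): a single hidden layer learning the XOR problem, with the gradients written out by hand. A framework like TensorFlow would compute these gradients automatically; the point of the sketch is just the forward pass / backward pass / weight update loop.

```python
import numpy as np

# Toy data: the XOR problem, a classic sanity check for a small network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(5000):
    # Forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))
    # Backward pass: gradients of the squared error via the chain rule
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print("initial loss:", losses[0], "final loss:", losses[-1])
```

Writing the gradients manually like this is exactly the tedium that the frameworks covered at the workshop take away - they differ mainly in how the model is described, not in this underlying loop.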
Will keep you posted.