Fundamentals Of Practical Deep Learning

In the first week of this month I had the pleasure of attending Fundamentals Of Practical Deep Learning - a two-day course organised by Deep Learning London.

I have been interested in deep learning for a while but didn't have the time or a relevant project to work on. My interest was boosted first by confirmation that androids do dream of electric sheep, and then by the news that Google was open sourcing its AI engine. This was enough to get me started - I attended a few workshops, including a very practical one and a theoretical one. At that stage I started looking for a more substantial way to harness deep learning. Fundamentals Of Practical Deep Learning looked just ideal.

Workshop

The organisers suggested experience of programming in C++, Python and Java (I only had the first two) and basic knowledge of calculus, linear algebra, and probability theory (all checked). I think this was the bare minimum needed to follow the workshop - it was also well worth having working experience in data processing. This should be enough to crunch through the offerings, which were:

Was it worth it?

It was great, well worth the price:

Overall I felt this was a very good way to learn more about deep learning and get ready to use it in practice. There is a lot of work ahead of me, but this workshop gave me enough background knowledge to actually start on it. One of the biggest takeaways was understanding the limitations of a deep learning project and (I think) how to work within them in practice.

Next step

Before I came on the course I had located a possible project within the university that I could use deep learning on. As part of my preparation I discussed the use of HPC computing at the University of Nottingham. Discussions with NVIDIA staff at the workshop gave me a better understanding of what hardware I need to carry out the task. The original project has evolved in the meantime, which led me to a slight change of plans.

At the moment I want to train a small neural network locally using TensorFlow by the end of March. Why TensorFlow, after learning about Caffe, Torch and Theano? From discussions with other users I found it well documented and reasonably easy to set up, if you are happy with Docker. There is also a free Udacity course covering it. Based on this experience I would like to look at an image-recognition-related project and its implementation on the University of Nottingham HPC's GPUs, hopefully before the end of May. If this works well, I anticipate starting work on the project that got all this started towards the end of the year.
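For anyone curious what the Docker route looks like, here is a rough sketch (the image name is the official TensorFlow one on Docker Hub; the mounted directory is just an illustration of how I plan to share code with the container):

```shell
# Pull the official TensorFlow image
docker pull tensorflow/tensorflow

# Start an interactive container, mounting a local working
# directory so scripts and data survive between sessions
docker run -it -v "$PWD/notebooks:/notebooks" tensorflow/tensorflow bash
```

From inside the container you get a Python environment with TensorFlow already installed, which is what makes this the low-friction option for local experiments.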

Will keep you posted.

If you liked this post, you can Tweet about it or Follow me on Twitter