Diving into TensorFlow
11 Dec 2016
Do androids dream of electric sheep?
Since February this year I have been learning more and more about neural networks and their recent reincarnation known as deep learning. This approach became famous outside its usual field not only through its association with Google’s work on AI, but also through a number of articles comparing the output of deep learning to the hallucinations experienced by people in, let’s call it, a chemically excited state of mind.
This almost gives the impression that deep learning is just a good party trick. It is definitely more than that. The concept is to replicate the workings of the human brain and create self-learning machines. In their current form, neural networks are extremely efficient at finding patterns, though as with every machine-learning approach a lot of what they find can be noise (over-fitting), and a lot of effort goes into avoiding this. If you are interested in learning more, check online - there is a lot of information. A few things I can recommend:
- Peter Goldsborough’s talk from PyData London in December;
- Andrej Karpathy blog;
- Christopher Olah blog;
- Stanford CS class;
- NVIDIA courses - they are the guys behind the most popular deep learning GPU hardware;
- MIT article;
- Fei-Fei Li’s TED talk.
And if you want to try out a neural network online:
let’s go TensorFlow
There are a number of really good frameworks supporting this. I have set my sights on TensorFlow for three reasons:
- it has the support of Google and an active community
- it can be operated using Python
- Google Cloud supports it natively
I have just started a small side project looking at utilising deep learning.
my own copy of TensorFlow
Until recently the easiest way to install TensorFlow was to use a Docker image. This great tool is much better served by Linux than Windows. The main problem is the need to use VirtualBox as part of the deployment - you can run Linux containers, but you can’t compile anything for your actual hardware (they are deployed inside a virtual Linux machine), nor can you fully explore the command-line capabilities of this great tool.
An alternative is to install TensorFlow natively. Again, great for Linux, not so good for Windows. While the logical conclusion is to switch to Linux, I still need to use some of my high-performance university machines that have to run Windows.
Over a week ago, Google released v0.12, which [works natively on the Windows platform](https://developers.googleblog.com/2016/11/tensorflow-0-12-adds-support-for-windows.html). The section below explains how to install it on a PC. Note that we are looking at installing the CPU-based version. It is much slower than the GPU one and generally not recommended for anything beyond the basics. I tried to install the GPU version; despite having an NVIDIA card that should be supported by CUDA, I only managed to crash my machine. I would be interested to hear from people who managed to install it.
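Once the installation described below is done, a quick sanity check I would suggest is to print the installed version and ask the session to log which device each operation lands on. This is only a sketch - the constants are arbitrary placeholders - but log_device_placement is a standard ConfigProto option, and on the CPU-only build everything should be placed on the CPU device:

import tensorflow as tf

print(tf.__version__)  # the pip package installed below should report a 0.12 release

# log_device_placement makes the session print the device chosen for every operation
a = tf.constant([1.0, 2.0, 3.0], name='a')
b = tf.constant([4.0, 5.0, 6.0], name='b')
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(a + b))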
- Install Anaconda; it’s a great data-science-oriented distribution, so you will find it useful anyway
- Create a new environment
conda create -n tensorflow python=3.5
conda info --envs
activate tensorflow
- Install TensorFlow into it
pip install tensorflow
conda install ipython
conda install scipy
- Test your installation
- run ipython - make sure it identifies the correct Python 3.5.x and does not flag missing libraries; install any missing libraries using conda install xxx
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))  # should print b'Hello, TensorFlow!'
a = tf.constant(10)
b = tf.constant(32)
print(sess.run(a + b))  # should print 42
- Run your first neural network
- get the env path using
conda info --envs
- run example
cd [YOUR_ENV_PATH]\Lib\site-packages\tensorflow\models\image\mnist\
python convolutional.py
- This will take a long time on a CPU - now you know why people run neural networks on GPU-rich servers.
- if you read through my introduction, this is the equivalent of this ConvNetJS example
- If the above steps executed successfully, you have a working TensorFlow - well done!
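If you want to go one step beyond the bundled script, below is a minimal sketch of a hand-written network in the same spirit: a single-layer softmax classifier on MNIST, written against the 0.12-era API (placeholders, Session, feed_dict) in the style of the classic TensorFlow beginners’ tutorial. Treat it as a sketch - the learning rate, batch size and number of steps are arbitrary choices - but it should run inside the environment created above and reach roughly 92% test accuracy.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# download MNIST on first run and load it with one-hot labels
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# a single-layer softmax classifier: 784 pixels in, 10 digit classes out
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# cross-entropy loss trained with plain gradient descent
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# 1000 small-batch training steps - enough for a quick test, nothing more
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# accuracy on the test set
correct = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

The bundled convolutional.py tackles the same task with a convolutional network - hence the much longer run time on a CPU - and is the closer analogue of the ConvNetJS demo mentioned above.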