AWS Tutorial 5: Install TensorFlow and Run MNIST
Summary: Install TensorFlow, the open-source Python package for machine learning, and run the MNIST machine learning example.
Audience: readers who are new to machine learning
Time: about 1500 words; 3 min to read, 8 min to follow along
Prerequisite: AWS Tutorial 4: Install Anaconda, manage multiple Python environments, and run jupyter notebook
We hear about artificial intelligence and machine learning all the time. If we only want to try them out, or get a first feel for what they do, we can run a few examples with tools that others have already built. TensorFlow is Google's open-source machine learning framework, and it is available as a Python package. Earlier we compared a Python package to a book; here, it is as if Google wrote a book about machine learning, full of tools, and anyone who installs this book can use those tools to solve many problems. That book is called TensorFlow. This tutorial explains how to install TensorFlow on the server and run a machine learning example, MNIST, in jupyter notebook.
Installing TensorFlow
Start and log in to the server. Create a new Python virtual environment named "tensorflow" and add it as a jupyter notebook kernel. I use Python 3.6 in this example. Google's complete official documentation is here.
roden@ip-172-31-2-87:~$ conda create -n tensorflow python=3.6 ipykernel
# the new virtual environment is named tensorflow and uses Python 3.6
Fetching package metadata .........
... # omitted
Proceed ([y]/n)? # press Enter to confirm
libsodium-1.0. 100% |################################| Time: 0:00:04 291.92 kB/s
...
ipykernel-4.6. 100% |################################| Time: 0:00:00 466.71 kB/s
#
# To activate this environment, use:
# > source activate tensorflow
#
# To deactivate this environment, use:
# > source deactivate tensorflow
#
roden@ip-172-31-2-87:~$ source activate tensorflow
# enter the new environment
(tensorflow) roden@ip-172-31-2-87:~$ python -m ipykernel install --user --name tensorflow
# register this environment as a jupyter notebook kernel
(tensorflow) roden@ip-172-31-2-87:~$
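If you want to double-check that the new kernel was registered (an optional step, not part of the original instructions), you can ask jupyter to list the kernels it knows about:
(tensorflow) roden@ip-172-31-2-87:~$ jupyter kernelspec list
# the output should include an entry named "tensorflow" pointing to the new environment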
We now have a new environment, and it is available in jupyter notebook. Next we install TensorFlow into this environment. Following the official instructions, find the URL of the TensorFlow Python package. Here we need the Linux, CPU-only, Python 3.6 build (see https://www.tensorflow.org/install/install_linux#python_36). We use this link: https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.1.0-cp36-cp36m-linux_x86_64.whl. We install it with pip, the Python package manager. Like a librarian who acquires new books and retires old ones, pip installs and upgrades Python packages.
(tensorflow) roden@ip-172-31-2-87:~$ pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.1.0-cp36-cp36m-linux_x86_64.whl
# install the TensorFlow package with pip, using the link provided by Google
Collecting tensorflow==1.1.0 from https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.1.0-cp36-cp36m-linux_x86_64.whl
Downloading https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.1.0-cp36-cp36m-linux_x86_64.whl (31.4MB)
99% |████████████████████████████████| 31.4MB 55.9MB/s eta 0:00:01
... # omitted
Installing collected packages: werkzeug, six, setuptools, protobuf, wheel, numpy, tensorflow
Successfully installed numpy-1.13.0 protobuf-3.3.0 setuptools-36.0.1 six-1.10.0 tensorflow-1.1.0 werkzeug-0.12.2 wheel-0.29.0
(tensorflow) roden@ip-172-31-2-87:~$ # installation succeeded
Verify that the installation succeeded.
(tensorflow) roden@ip-172-31-2-87:~$ python
Python 3.6.1 |Continuum Analytics, Inc.| (default, May 11 2017, 13:09:58)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
Then enter the following Python code line by line, pressing Enter after each line to run it.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
The result should look similar to this.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
2017-06-12 18:16:41.255674: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-12 18:16:41.255755: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-12 18:16:41.255831: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-12 18:16:41.255888: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-12 18:16:41.255925: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
>>> print(sess.run(hello))
b'Hello, TensorFlow!'
>>>
If the last command prints the message below, the installation was successful.
>>> print(sess.run(hello))
b'Hello, TensorFlow!'
Running MNIST
First start jupyter notebook, then create a new notebook inside it that uses the tensorflow kernel.
(tensorflow) roden@ip-172-31-2-87:~/tf_notebook$ nohup jupyter notebook &
[1] 1710
(tensorflow) roden@ip-172-31-2-87:~/tf_notebook$ nohup: ignoring input and appending output to 'nohup.out'
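If you are not sure whether the notebook server is actually running, or you need its address again, one optional check (not part of the original steps) is to list the running servers:
(tensorflow) roden@ip-172-31-2-87:~/tf_notebook$ jupyter notebook list
# prints the URL (including the port and token, if any) of each running notebook server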
Then enter the server's "PublicIP:jupyterport" in the browser, e.g. "http://52.10.20.197:9999/", to open jupyter notebook. Create a new notebook with "New -> tensorflow". You can rename the notebook: on the first line of the new window, left-click "Untitled" and change it, e.g. to "MNIST_demo".
Now we are ready to run MNIST, often called the "Hello World" of artificial intelligence. It trains a model that teaches the computer to recognize handwritten digits, 0 through 9. Google's official tutorial is here, and there are Chinese translations online, such as the one from the TensorFlow Chinese community (http://www.tensorfly.cn/tfdoc/tutorials/mnist_pros.html). The code below is excerpted from that tutorial and combined into one piece. Paste it into a cell in the jupyter notebook.
This part downloads and imports the data.
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
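As an optional sanity check (not part of the original tutorial), you can look at the shapes of the arrays that were just loaded; each image is stored as a flattened vector of 784 pixel values and each label as a one-hot vector of length 10:
# the standard split has 55000 training images and 10000 test images
print(mnist.train.images.shape)   # expected: (55000, 784)
print(mnist.train.labels.shape)   # expected: (55000, 10)
print(mnist.test.images.shape)    # expected: (10000, 784)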
The code below contains the model definition, training, and evaluation. For what each part means, see the materials in the recommended reading.
import tensorflow as tf
sess = tf.InteractiveSession()

# placeholders for the input images (28x28 = 784 pixels) and the one-hot labels (10 classes)
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

# helpers for weights, biases, convolution, and 2x2 max pooling
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')

# first convolutional layer
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1, 28, 28, 1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# second convolutional layer
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

# densely connected layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# dropout
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# readout layer
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

# loss, optimizer, and accuracy
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# train for 20000 steps, reporting the training accuracy every 100 steps
sess.run(tf.global_variables_initializer())
for i in range(20000):
    batch = mnist.train.next_batch(50)
    if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={
            x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("step %d, training accuracy %g" % (i, train_accuracy))
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

# evaluate on the test set
print("test accuracy %g" % accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
If you see output similar to the following, the program is running correctly.
step 0, training accuracy 0.08
step 100, training accuracy 0.84
step 200, training accuracy 0.92
step 300, training accuracy 0.9
step 400, training accuracy 0.96
...
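The trained variables live only in the current session, so they are lost when the notebook kernel or the instance is shut down. If you want to keep them, a minimal sketch (not part of the original tutorial; the file name mnist_model.ckpt is just an example) is to write a checkpoint with tf.train.Saver in a new cell after training finishes:
# create the Saver after all variables have been defined
saver = tf.train.Saver()
save_path = saver.save(sess, './mnist_model.ckpt')  # writes checkpoint files to the current directory
print('Model saved to %s' % save_path)
# later, the weights can be restored into an identically defined graph with:
# saver.restore(sess, './mnist_model.ckpt')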
You will notice that training is quite slow. That is because we are using a very basic EC2 instance. In the next tutorial we will cover how to upgrade to an EC2 server with much more computing power: save everything we have installed into an AMI, then use it to launch an instance of a different EC2 type.
Recommended reading
Installing TensorFlow on Ubuntu: Google's official documentation, and its Chinese translation.
The advanced MNIST tutorial: Google's documentation, and its Chinese translation.
Cheat sheet
Install TensorFlow
conda create -n tensorflow python=3.6 ipykernel
# create a new Python 3.6 virtual environment
source activate tensorflow
# enter the new environment
python -m ipykernel install --user --name tensorflow
# register the new environment as a jupyter ipython kernel
pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.1.0-cp36-cp36m-linux_x86_64.whl
# install TensorFlow
Test whether the installation succeeded
(tensorflow) roden@ip-172-31-2-87:~$ python # start python
Python 3.6.1 |Continuum Analytics, Inc.| (default, May 11 2017, 13:09:58)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
If you see the output below, the installation succeeded.
b'Hello, TensorFlow!'
Run MNIST in jupyter notebook
Start jupyter notebook:
nohup jupyter notebook &
Create a new notebook: "New -> tensorflow".
Enter the Python code from the main text that defines, trains, and evaluates the model, and run it.
Editor: 思考问题的熊
AWS is a registered trademark of Amazon Web Services, Inc. and/or its affiliates.