For the basics of univariate linear regression and stochastic gradient descent, please refer to other articles; they are not covered in detail here.
For TensorFlow basics, please refer to the other posts on this blog.
This post only gives example code and the resulting plots.
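For quick reference (not a substitute for the articles mentioned above), the model, loss, and gradient-descent update that the code below implements are the standard ones:

\hat{y} = w x + b, \qquad L(w, b) = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2

w \leftarrow w - \eta \frac{\partial L}{\partial w}, \qquad b \leftarrow b - \eta \frac{\partial L}{\partial b}

where N is the number of samples and η is the learning rate (learning_rate in the code).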
The TensorFlow code for univariate linear regression is given below:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

num_samples = 100
num_epoch = 200
learning_rate = 0.5
log_dir = "/tmp/log"

# Training data: y = 0.1 * x + 0.3 plus a little Gaussian noise
x_data = np.random.rand(num_samples).astype("float32")
y_data = (x_data * 0.1 + 0.3 + np.random.normal(0, 0.005, num_samples)).astype("float32")

# Model parameters and prediction
w = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = w * x_data + b

# Mean squared error loss and the gradient descent training op
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train = optimizer.minimize(loss)

# Record the loss as a scalar summary for TensorBoard
with tf.name_scope('loss'):
    tf.summary.scalar('loss', loss)

init = tf.global_variables_initializer()
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(init)
    summary_writer = tf.summary.FileWriter(log_dir, sess.graph)
    for step in range(num_epoch):
        sess.run(train)
        result = sess.run(merged)
        summary_writer.add_summary(result, step)
    summary_writer.close()

    # Plot the data and the fitted line (w.eval()/b.eval() need the active session)
    xy = sorted(zip(x_data, y_data))
    (x_data, y_data) = zip(*xy)
    plt.plot(x_data, y_data)
    line_xs = np.arange(min(x_data) - 0.1, max(x_data) + 0.1, 0.05)
    line_ys = w.eval() * line_xs + b.eval()
    plt.plot(line_xs, line_ys)
    plt.show()
The resulting plot is shown below:
The change of the loss function is shown below:
A few things to note:
If your TensorFlow version is too old, you can upgrade it:
sudo pip install --upgrade tensorflow
sudo pip install --upgrade tensorflow-gpu
The command to launch TensorBoard is:
tensorboard --logdir=/tmp/log
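By default TensorBoard serves on port 6006; open http://localhost:6006 in a browser, and the loss recorded above appears under the Scalars tab.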
Reference:
Below is another linear regression model. We first prepare a set of x and y data with an underlying linear relationship, then define the model and its parameters w and b; after starting a session and running the training op, the program learns the values of w and b from the input data alone.
import tensorflow as tf
import numpy as np

# Prepare train data: y = 2 * x + 10 plus Gaussian noise
train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10

# Define the model
X = tf.placeholder("float")
Y = tf.placeholder("float")
w = tf.Variable(0.0, name="weight")
b = tf.Variable(0.0, name="bias")
loss = tf.square(Y - tf.multiply(X, w) - b)
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

# Create session to run
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    epoch = 1
    for i in range(10):
        # Stochastic gradient descent: one update per sample
        for (x, y) in zip(train_X, train_Y):
            _, w_value, b_value = sess.run([train_op, w, b], feed_dict={X: x, Y: y})
        print("Epoch: {}, w: {}, b: {}".format(epoch, w_value, b_value))
        epoch += 1
After 10 epochs, the learned w and b are basically what we expect. Given a different dataset we can train different parameters, and a more complex model could be used to learn more complex relationships in the data (see the sketch below).
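As a hypothetical illustration of what "a more complex model" could look like (this sketch is not from the original post; the quadratic data and the names w2, w1, b are assumptions), the same training pattern can fit a second-order polynomial:

import tensorflow as tf
import numpy as np

# Synthetic data with a quadratic relationship (assumed for illustration)
train_X = np.linspace(-1, 1, 100)
train_Y = 3 * train_X ** 2 + 2 * train_X + 10 + np.random.randn(*train_X.shape) * 0.33

X = tf.placeholder("float")
Y = tf.placeholder("float")
w2 = tf.Variable(0.0, name="weight2")  # coefficient of X^2
w1 = tf.Variable(0.0, name="weight1")  # coefficient of X
b = tf.Variable(0.0, name="bias")

# Quadratic prediction and per-sample squared error
pred = w2 * tf.square(X) + w1 * X + b
loss = tf.square(Y - pred)
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(10):
        for (x, y) in zip(train_X, train_Y):
            sess.run(train_op, feed_dict={X: x, Y: y})
        print("Epoch: {}, w2: {}, w1: {}, b: {}".format(
            epoch + 1, sess.run(w2), sess.run(w1), sess.run(b)))

The training loop is identical to the linear case; only the prediction expression changes, which is the main appeal of defining the model as a computation graph.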