import tensorflow as tf  # TensorFlow 1.x API

use_batchnorm = True
# input_dim, output_dim and learning_rate are assumed to be defined earlier in the post.
X = tf.placeholder(tf.float32, [None, input_dim], name="X")
Y = tf.placeholder(tf.float32, [None, output_dim], name="Y")
# Batch normalization behaves differently at training and inference time,
# so the current mode is fed in through a boolean placeholder.
is_training = tf.placeholder(tf.bool, name="is_training")
W1 = tf.get_variable("W1", shape=[input_dim, 16],
                     initializer=tf.contrib.layers.xavier_initializer())
b1 = tf.Variable(tf.random_normal([16]))
L1 = tf.matmul(X, W1) + b1
if use_batchnorm:
    # training=True normalizes with batch statistics; training=False uses the moving averages.
    L1 = tf.layers.batch_normalization(L1, training=is_training)
hypothesis = tf.nn.relu(L1)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=hypothesis, labels=Y))
# The moving-average updates created by batch normalization are collected in
# UPDATE_OPS; they must run together with the training step.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
...
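With this graph, training and evaluation differ only in the value fed to is_training; the moving-average updates run automatically because of the control dependency above. Below is a minimal sketch of the session loop. The batch_xs/batch_ys and test_xs/test_ys arrays are hypothetical placeholders for your data and are not part of the original code.

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for step in range(1000):
    # Hypothetical mini-batch; is_training=True so batch statistics are used
    # and the moving averages get updated.
    _, c = sess.run([optimizer, cost],
                    feed_dict={X: batch_xs, Y: batch_ys, is_training: True})

# At evaluation time, feed is_training=False so the accumulated moving averages are used.
correct = tf.equal(tf.argmax(hypothesis, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
print(sess.run(accuracy, feed_dict={X: test_xs, Y: test_ys, is_training: False}))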
References:
https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization
https://github.com/hunkim/DeepLearningZeroToAll/blob/master/lab-10-6-mnist_nn_batchnorm.ipynb
https://shuuki4.wordpress.com/2016/01/13/batch-normalization-%EC%84%A4%EB%AA%85-%EB%B0%8F-%EA%B5%AC%ED%98%84/