DL DNN Optimization Techniques: Parameter Optimization in DNN Optimizers — A Case Study and Visual Comparison of Four Parameter-Update Methods (SGD/Momentum/AdaGrad/Adam)
Overview of the Four Optimization Methods
See also: DL DNN Optimization Techniques: An Introduction to the GD/SGD Algorithms (BP algorithm) — Overview, Intuition, Code Implementation, and SGD's Drawbacks and Improvements (Momentum/NAG/the Ada Family/RMSProp)
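For quick reference, the update rules implemented below can be written as follows, where $\eta$ is the learning rate, $L$ the loss, and $g = \partial L/\partial W$ (standard formulations; the Adam form matches the code below, which folds bias correction into the step size):

SGD:      $W \leftarrow W - \eta\, g$
Momentum: $v \leftarrow \alpha v - \eta\, g,\quad W \leftarrow W + v$
AdaGrad:  $h \leftarrow h + g \odot g,\quad W \leftarrow W - \eta\, g / (\sqrt{h} + \varepsilon)$
Adam:     $m \leftarrow m + (1-\beta_1)(g - m),\quad v \leftarrow v + (1-\beta_2)(g^2 - v),\quad W \leftarrow W - \eta_t\, m / (\sqrt{v} + \varepsilon)$, with $\eta_t = \eta \sqrt{1-\beta_2^t} / (1-\beta_1^t)$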
Case Study of the Optimizers
Output
Design Approach
Core Code
import numpy as np  # used by all of the optimizers below

#T1. The SGD algorithm
class SGD:
    """Stochastic gradient descent: step each parameter against its gradient."""
    def __init__(self, lr=0.01):
        self.lr = lr  # learning rate

    def update(self, params, grads):
        for key in params.keys():
            params[key] -= self.lr * grads[key]
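All four classes share the same dict-based interface: params maps parameter names to NumPy arrays and grads holds the matching gradients, so one training step is a single update call. A minimal usage sketch with a hypothetical one-parameter problem (not from the original post):

import numpy as np

# minimize f(w) = w**2, whose gradient is 2*w
params = {'W1': np.array([2.0])}
grads = {'W1': 2 * params['W1']}
SGD(lr=0.1).update(params, grads)  # updates params in place
print(params['W1'])                # [1.6], i.e. 2.0 - 0.1 * 4.0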
#T2. The Momentum algorithm
class Momentum:
    """SGD with momentum: accumulate a velocity v and move the parameters along it."""
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.v = None  # velocity, initialized lazily on the first update

    def update(self, params, grads):
        if self.v is None:
            self.v = {}
            for key, val in params.items():
                self.v[key] = np.zeros_like(val)
        for key in params.keys():
            self.v[key] = self.momentum * self.v[key] - self.lr * grads[key]
            params[key] += self.v[key]
#T3. The AdaGrad algorithm
class AdaGrad:
    """AdaGrad: shrink each parameter's effective learning rate by its accumulated squared gradients."""
    def __init__(self, lr=0.01):
        self.lr = lr
        self.h = None  # running sum of squared gradients, initialized lazily

    def update(self, params, grads):
        if self.h is None:
            self.h = {}
            for key, val in params.items():
                self.h[key] = np.zeros_like(val)
        for key in params.keys():
            self.h[key] += grads[key] * grads[key]
            params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)  # 1e-7 guards against division by zero
#T4. The Adam algorithm
class Adam:
    """Adam: combines Momentum-style first moments with AdaGrad/RMSProp-style second moments."""
    def __init__(self, lr=0.001, beta1=0.9, beta2=0.999):
        self.lr = lr
        self.beta1 = beta1
        self.beta2 = beta2
        self.iter = 0
        self.m = None  # first-moment (mean) estimates
        self.v = None  # second-moment (uncentered variance) estimates

    def update(self, params, grads):
        if self.m is None:
            self.m, self.v = {}, {}
            for key, val in params.items():
                self.m[key] = np.zeros_like(val)
                self.v[key] = np.zeros_like(val)
        self.iter += 1
        # fold the bias correction of m and v into a per-step learning rate
        lr_t = self.lr * np.sqrt(1.0 - self.beta2**self.iter) / (1.0 - self.beta1**self.iter)
        for key in params.keys():
            self.m[key] += (1 - self.beta1) * (grads[key] - self.m[key])
            self.v[key] += (1 - self.beta2) * (grads[key]**2 - self.v[key])
            params[key] -= lr_t * self.m[key] / (np.sqrt(self.v[key]) + 1e-7)
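To reproduce the kind of trajectory plot the title's visual comparison refers to, all four optimizers can be run from the same starting point on a simple anisotropic bowl. A minimal sketch, assuming the test function f(x, y) = x**2/20 + y**2 commonly used for such comparisons; the starting point, step count, and learning rates here are illustrative choices, not taken from the original post:

import numpy as np
import matplotlib.pyplot as plt

def grad_f(p):
    """Gradient of f(x, y) = x**2 / 20 + y**2."""
    return {'x': p['x'] / 10.0, 'y': 2.0 * p['y']}

optimizers = {'SGD': SGD(lr=0.95), 'Momentum': Momentum(lr=0.1),
              'AdaGrad': AdaGrad(lr=1.5), 'Adam': Adam(lr=0.3)}

for name, opt in optimizers.items():
    p = {'x': -7.0, 'y': 2.0}            # common starting point
    xs, ys = [p['x']], [p['y']]
    for _ in range(30):                  # 30 update steps per optimizer
        opt.update(p, grad_f(p))
        xs.append(p['x']); ys.append(p['y'])
    plt.plot(xs, ys, 'o-', markersize=3, label=name)

plt.plot(0, 0, 'r+')                     # the minimum at the origin
plt.xlabel('x'); plt.ylabel('y')
plt.legend(); plt.show()

Because the bowl is much flatter along x than y, plain SGD zigzags, Momentum overshoots but dampens the oscillation, AdaGrad damps the steep y direction quickly, and Adam traces an intermediate path.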
Related Articles
DL DNN: Training a custom five-layer DNN (5x100 units + ReLU, with the four optimizers SGD/Momentum/AdaGrad/Adam) on the MNIST dataset to compare the methods' performance