Adam optimizer reports an error in Chainer?

Version: Chainer 2.0.2. I use the Adam optimizer and it reports an error. I found it is caused by this code in adam.py (fix1 == 0?):
@property
def lr(self):
    fix1 = 1. - math.pow(self.hyperparam.beta1, self.t)
    fix2 = 1. - math.pow(self.hyperparam.beta2, self.t)
    return self.hyperparam.alpha * math.sqrt(fix2) / fix1
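The ZeroDivisionError occurs because this property is read before the first update step: as the traceback below shows, ExponentialShift's initialize() calls getattr(optimizer, 'lr') while self.t is still 0, and any base raised to the power 0 is 1, so fix1 == 1 - beta1**0 == 0. A minimal sketch of the arithmetic (hyperparameter values assumed to be Chainer's Adam defaults):

import math

alpha, beta1, beta2 = 0.001, 0.9, 0.999  # Chainer's Adam defaults
t = 0  # step counter before the first parameter update

fix1 = 1. - math.pow(beta1, t)  # 1 - 0.9**0 == 0.0
fix2 = 1. - math.pow(beta2, t)  # 1 - 0.999**0 == 0.0
print(alpha * math.sqrt(fix2) / fix1)  # ZeroDivisionError: float division by zero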
Error log:
Traceback (most recent call last):
File "AU_rcnn/train.py", line 237, in <module>
main()
File "AU_rcnn/train.py", line 233, in main
trainer.run()
File "/root/anaconda3/lib/python3.6/site-packages/chainer/training/trainer.py", line 285, in run
initializer(self)
File "/root/anaconda3/lib/python3.6/site-packages/chainer/training/extensions/exponential_shift.py", line 48, in initialize
self._init = getattr(optimizer, self._attr)
File "/root/anaconda3/lib/python3.6/site-packages/chainer/optimizers/adam.py", line 121, in lr
return self.hyperparam.alpha * math.sqrt(fix2)/fix1
ZeroDivisionError: float division by zero
What value are you trying to change with 'exponential_shift'? Note that Adam uses 'alpha' as its learning rate; 'lr' itself is not supposed to be touched. – corochann
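A small sketch of what this means in Chainer 2.x (values are illustrative):

import chainer

optimizer = chainer.optimizers.Adam(alpha=0.001)
print(optimizer.alpha)    # 0.001 -- the hyperparameter acting as Adam's step size
optimizer.alpha = 0.0005  # safe to adjust directly between updates

# optimizer.lr is a value *derived* from alpha, beta1, beta2 and the
# step count t; reading it at t == 0 raises ZeroDivisionError.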
Then how am I supposed to use the Adam algorithm? Can I not set the lr? – machen
@corochann Is there any example code showing how to use Adam? And yes, I decay the lr every epoch with ExponentialShift at rate 0.9. – machen
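Following corochann's comment, a sketch of how the per-epoch 0.9 decay could be attached to 'alpha' instead of 'lr'. The tiny Linear model and the MNIST iterator are placeholders just to make the snippet self-contained; substitute your own setup:

import chainer
import chainer.links as L
from chainer import training
from chainer.training import extensions

# Placeholder model and dataset so the sketch runs end to end
model = L.Classifier(L.Linear(None, 10))
train, _ = chainer.datasets.get_mnist()
train_iter = chainer.iterators.SerialIterator(train, batch_size=100)

optimizer = chainer.optimizers.Adam(alpha=0.001)
optimizer.setup(model)

updater = training.StandardUpdater(train_iter, optimizer)
trainer = training.Trainer(updater, (20, 'epoch'))

# Decay 'alpha' (Adam's actual learning rate) by 0.9 every epoch.
# Shifting 'lr' fails because Adam's lr is a derived property.
trainer.extend(extensions.ExponentialShift('alpha', 0.9),
               trigger=(1, 'epoch'))

trainer.run()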