
To which weight is the correction added in a perceptron?


I'm experimenting with single-layer perceptrons, and I think I understand (mostly) everything. However, what I don't understand is to which weights the correction (learning rate*error) should be added. In the examples I've seen it seems arbitrary.


Well, it looks like you half answered your own question: it's true that you correct all of the non-zero weights, but you don't correct them all by the same amount.

Instead, you correct each weight in proportion to its incoming activation. So if unit X activated really strongly and unit Y activated only a little, and there was a large error, then the weight going from unit X to the output would be corrected far more than unit Y's weight to the output.

The technical term for this process is the delta rule, and its details can be found in its wiki article. Additionally, if you ever want to upgrade to multilayer perceptrons (single-layer perceptrons are very limited in computational power; see a discussion of Minsky and Papert's argument against them here), an analogous learning algorithm called backpropagation is discussed here.
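Here is a minimal sketch of that update step in Python (names like `update_weights`, `learning_rate`, `inputs` are illustrative, not from the original post):

```python
def update_weights(weights, inputs, target, output, learning_rate=0.1):
    """Delta-rule step: each weight is corrected in proportion to the
    activation that arrived over it (learning_rate * error * input)."""
    error = target - output
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

# Unit X fired strongly (1.0), unit Y barely fired (0.1):
weights = [0.5, 0.5]
inputs = [1.0, 0.1]
new_weights = update_weights(weights, inputs, target=1, output=0)
print(new_weights)  # [0.6, 0.51] -- X's weight moves 10x more than Y's
```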


Answered my own question.

According to http://intsys.mgt.qub.ac.uk/notes/perceptr.html, "add this correction to any weight for which there was an input". In other words, do not add the correction to weights whose neurons had a value of 0.
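A quick numeric check of that rule (illustrative values only): since the correction is learning rate * error * input, an input of 0 produces a correction of 0, so that weight is left untouched.

```python
learning_rate, error = 0.1, 1
weights = [0.5, 0.5]
inputs = [1.0, 0.0]  # the second neuron had a value of 0
new_weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
print(new_weights)   # [0.6, 0.5] -- only the weight with a non-zero input changed
```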
