Does it make sense to use an "activation function cocktail" for approximating an unknown function through a feed-forward neural network?

I just started playing around with neural networks and, as I would expect, in order to train a neural network effectively there must be some relation between the function to approximate and the activation function.

For instance, I had good results using sin(x) as an activation function when approximating cos(x), or two tanh(x) to approximate a Gaussian. Now, to approximate a function about which I know nothing, I am planning to use a cocktail of activation functions, for instance a hidden layer with some sin, some tanh, and a logistic function. In your opinion, does this make sense?
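
For concreteness, here is a rough sketch of what I mean, written in NumPy purely to illustrate the idea (the layer sizes and weight scales are arbitrary, and I have not actually trained this):

```python
import numpy as np

# Illustrative only: one hidden layer whose units are split between
# sin, tanh and logistic activations (the "cocktail" layer).
rng = np.random.default_rng(0)
n_in, n_hidden = 1, 12                      # arbitrary sizes
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
b1 = np.zeros((n_hidden, 1))

def mixed_hidden(x):
    """Forward pass of the mixed hidden layer for inputs of shape (n_in, N)."""
    z = W1 @ x + b1
    third = n_hidden // 3
    return np.concatenate([
        np.sin(z[:third]),                       # sin units
        np.tanh(z[third:2 * third]),             # tanh units
        1.0 / (1.0 + np.exp(-z[2 * third:])),    # logistic units
    ], axis=0)

h = mixed_hidden(np.linspace(-3, 3, 50).reshape(1, -1))
print(h.shape)  # (12, 50)
```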

Thank you,

Tunnuz


While it is true that different activation functions have different merits (mainly for either biological plausibility or a unique network design, like radial basis function networks), in general you should be able to use any continuous squashing function and expect to approximate most functions encountered in real-world training data.
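
For example, here is a minimal sketch (plain NumPy; the layer size, learning rate, and step count are arbitrary choices, not a recommendation) of a single tanh hidden layer fitted to cos(x) with ordinary full-batch gradient descent; any other squashing function could be substituted for the tanh:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(1, -1)   # inputs, shape (1, N)
t = np.cos(x)                                        # targets, shape (1, N)

H = 10                                               # hidden units
W1 = rng.normal(scale=0.5, size=(H, 1)); b1 = np.zeros((H, 1))
W2 = rng.normal(scale=0.5, size=(1, H)); b2 = np.zeros((1, 1))
lr = 0.05
N = x.shape[1]

for step in range(20000):
    h = np.tanh(W1 @ x + b1)        # hidden activations, shape (H, N)
    y = W2 @ h + b2                 # network output, shape (1, N)

    err = y - t                     # dL/dy up to the 1/N factor (mean squared error)
    dW2 = err @ h.T / N
    db2 = err.mean(axis=1, keepdims=True)
    dz = (W2.T @ err) * (1.0 - h ** 2)   # back-prop through tanh: tanh'(z) = 1 - tanh(z)^2
    dW1 = dz @ x.T / N
    db1 = dz.mean(axis=1, keepdims=True)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float(np.mean((W2 @ np.tanh(W1 @ x + b1) + b2 - t) ** 2)))
```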

The two most popular choices are the hyperbolic tangent and the logistic function, since they both have easily calculable derivatives and interesting behavior around the axis.
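
To spell that out, both derivatives can be written in terms of the function's own output, which keeps back-propagation cheap (a small NumPy sketch):

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def dtanh(x):
    return 1.0 - np.tanh(x) ** 2        # d/dx tanh(x) = 1 - tanh(x)^2

def dlogistic(x):
    s = logistic(x)
    return s * (1.0 - s)                # d/dx sigma(x) = sigma(x) * (1 - sigma(x))
```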

If neither of those allows you to accurately approximate your function, my first response wouldn't be to change activation functions. Rather, you should first investigate your training set and network training parameters (learning rates, number of units in each pool, weight decay, momentum, etc.).
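
As a purely hypothetical example of the kind of settings worth auditing first (every value below is a placeholder, not a recommendation):

```python
# Hypothetical training setup to review before blaming the activation function.
training_config = {
    "learning_rate": 0.01,   # too high diverges, too low stalls
    "hidden_units": 20,      # capacity of each hidden pool
    "weight_decay": 1e-4,    # regularization strength
    "momentum": 0.9,         # smooths the gradient updates
    "epochs": 5000,          # enough passes for the loss to level off
}
```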

If you're still stuck, step back and make sure you're using the right architecture (feed-forward vs. simple recurrent vs. fully recurrent) and learning algorithm (back-propagation vs. back-prop through time vs. contrastive Hebbian vs. evolutionary/global methods).

One side note: make sure you never use a linear activation function (except in output layers or for trivially simple tasks), as these have very well-documented limitations, namely the need for linear separability.
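
A quick way to see the limitation: stacking linear layers collapses into a single linear map, so extra depth buys nothing and the network can only handle linearly separable problems (NumPy sketch, bias terms omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 2))      # first "layer"
W2 = rng.normal(size=(1, 4))      # second "layer"
x = rng.normal(size=(2, 5))       # a batch of five 2-D inputs

deep   = W2 @ (W1 @ x)            # two linear layers, no squashing in between
single = (W2 @ W1) @ x            # one equivalent linear layer
print(np.allclose(deep, single))  # True: the extra layer adds nothing
```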
