    Activation function (Types)

    3. Hyperbolic Tangent Function — (tanh)


    tanh is similar to the sigmoid but generally performs better. It is non-linear, so layers can be stacked, and its output ranges over (-1, 1).
    The main advantage of this function is that strongly negative inputs are mapped to strongly negative outputs, and only near-zero inputs are mapped to near-zero outputs, so the output is zero-centred and training is less likely to get stuck.
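
    A minimal NumPy sketch of tanh (the helper name and the sample inputs are my own illustration, not from the slides):

        import numpy as np

        def tanh(x):
            # Hyperbolic tangent: non-linear, zero-centred, output in (-1, 1)
            return np.tanh(x)

        x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
        print(tanh(x))   # strong negatives map close to -1; only inputs near 0 map near 0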

    Activation function (Types)

    4. Rectified Linear Units — (ReLU)


    ReLU is the most widely used activation function in CNNs and ANNs; its output range is [0, ∞).
    It outputs x when x is positive and 0 otherwise. At first glance it seems to share the drawback of the linear function, since it is linear on the positive axis, but ReLU is non-linear, and a combination of ReLUs is also non-linear. In fact, ReLU is a good approximator: any function can be approximated by a combination of ReLUs.
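
    A minimal NumPy sketch, assuming a hand-rolled relu helper, of the piecewise behaviour and of how a combination of shifted ReLUs becomes a non-linear (piecewise-linear) function:

        import numpy as np

        def relu(x):
            # ReLU: outputs x for positive inputs, 0 otherwise; range [0, +inf)
            return np.maximum(0.0, x)

        x = np.linspace(-2.0, 2.0, 9)
        print(relu(x))

        # A weighted sum of shifted ReLUs is piecewise linear, which is why
        # stacked ReLU layers can approximate non-linear functions.
        tent = relu(x + 1.0) - 2.0 * relu(x) + relu(x - 1.0)   # small "tent" bump around 0
        print(tent)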

    Activation function (Types)


    Networks trained with ReLU have been reported to converge about six times faster than with the hyperbolic tangent function.
    ReLU should only be applied in the hidden layers of a neural network. For the output layer, use a softmax function for classification problems and a linear function for regression problems.
    One problem is that some gradients are fragile during training and can die: a weight update can push a unit into a region where it never activates again on any data point. In other words, ReLU can produce dead neurons.
    To fix the dying-neuron problem, Leaky ReLU was introduced. It adds a small slope for negative inputs to keep the updates alive, so Leaky ReLU ranges from -∞ to +∞.
    ReLU vs Leaky ReLU
    The leak extends the range of the ReLU function; the slope a is usually set to about 0.01.
    When the slope a is not fixed at a small constant but sampled at random during training, the variant is called Randomized ReLU.
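
    A minimal NumPy sketch contrasting ReLU and Leaky ReLU on negative inputs (the helper names and the slope value are illustrative):

        import numpy as np

        def relu(x):
            return np.maximum(0.0, x)

        def leaky_relu(x, a=0.01):
            # The small slope a keeps the gradient non-zero for negative inputs,
            # so the unit cannot "die" the way a plain ReLU unit can.
            return np.where(x > 0, x, a * x)

        x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
        print(relu(x))         # negative inputs are zeroed out
        print(leaky_relu(x))   # negative inputs are scaled by a = 0.01 instead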

    How does the Neural network work?

    Let us take the example of predicting the price of a property. To start with, we have several factors assembled in a single row of data: Area, Bedrooms, Distance to city, and Age.
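
    A minimal NumPy sketch of a forward pass through such a network; the layer sizes, weights, and feature values are made-up assumptions for illustration only:

        import numpy as np

        def relu(x):
            return np.maximum(0.0, x)

        # One row of data: Area, Bedrooms, Distance to city, Age (values are invented).
        x = np.array([120.0, 3.0, 5.0, 10.0])

        rng = np.random.default_rng(0)
        W1 = rng.normal(scale=0.1, size=(4, 3))   # input features -> 3 hidden units
        b1 = np.zeros(3)
        W2 = rng.normal(scale=0.1, size=3)        # hidden units -> single price output
        b2 = 0.0

        hidden = relu(x @ W1 + b1)   # each hidden unit combines the raw factors
        price = hidden @ W2 + b2     # linear output unit, suitable for regression
        print(price)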

