Introduction to Machine Learning (Mashinali o‘qitishga kirish). Nosirov Xabibullo Xikmatullo o‘g‘li, Doctor of Philosophy (PhD), Associate Professor, Head of the TRET Department
Date: 11.01.2024
Mashinali o‘qitishga kirish, 21-ma’ruza (Introduction to Machine Learning, Lecture 21)

Introduction to Artificial Neural Networks (Sun’iy neyron tarmoqlariga kirish)
- https://cds.cern.ch/record/487162/files/0102224.pdf
- https://www.kdnuggets.com/2019/10/introduction-artificial-neural-networks.html
- https://towardsdatascience.com/an-introduction-to-artificial-neural-networks-5d2e108ff2c3
- https://towardsdatascience.com/introduction-to-artificial-neural-networks-for-beginners-2d92a2fb9984
- https://ulcar.uml.edu/~iag/CS/Intro-to-ANN.html
- https://orbit.dtu.dk/files/2717948/imm2443.pdf
- http://toritris.weebly.com/perceptron-2-logical-operations.html
Artificial Neural Networks - Artificial Neural Networks (ANNs) have been around for quite a while: they were first introduced back in 1943 by the neurophysiologist Warren McCulloch and the mathematician Walter Pitts. The early successes of ANNs until the 1960s led to the widespread belief that we would soon be conversing with truly intelligent machines. When it became clear that this promise would go unfulfilled (at least for quite a while), funding flew elsewhere and ANNs entered a long dark era. In the early 1980s there was a revival of interest in ANNs as new network architectures were invented and better training techniques were developed. But by the 1990s, powerful alternative Machine Learning techniques such as Support Vector Machines were favored by most researchers, as they seemed to offer better results and stronger theoretical foundations.
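The McCulloch-Pitts model mentioned above can be made concrete with a tiny sketch. This is not code from the lecture; it is a minimal illustration, assuming the classic threshold-unit formulation: the neuron outputs 1 when the weighted sum of its binary inputs reaches a threshold, and with hand-picked weights it can implement simple logical operations such as AND and OR (as in the perceptron logic-operations link listed above).

```python
# Minimal sketch of a McCulloch-Pitts-style threshold neuron (illustrative,
# not the lecture's code). The neuron "fires" (outputs 1) when the weighted
# sum of its inputs reaches a fixed threshold.

def mcp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted input sum reaches the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def logical_and(x1, x2):
    # Both inputs must be active for the sum to reach the threshold of 2.
    return mcp_neuron((x1, x2), weights=(1, 1), threshold=2)

def logical_or(x1, x2):
    # A single active input is enough to reach the threshold of 1.
    return mcp_neuron((x1, x2), weights=(1, 1), threshold=1)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"x1={a} x2={b}  AND={logical_and(a, b)}  OR={logical_or(a, b)}")
```

Note that the weights and thresholds here are fixed by hand; learning the weights from data came later, with Rosenblatt's perceptron training rule and, much later, backpropagation.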
Finally, we are now witnessing yet another wave of interest in ANNs. Will this wave die out like the previous ones did? There are a few good reasons to believe that this one is different and will have a much more profound impact on our lives: - There is now a huge quantity of data available to train neural networks, and ANNs frequently outperform other ML techniques on very large and complex problems.
- The tremendous increase in computing power since the 1990s now makes it possible to train large neural networks in a reasonable amount of time. This is in part due to Moore’s Law, but also thanks to the gaming industry, which has produced powerful GPU cards by the millions.
- The training algorithms have been improved. To be fair, they are only slightly different from the ones used in the 1990s, but these relatively small tweaks have a huge positive impact.
- Some theoretical limitations of ANNs have turned out to be benign in practice. For example, many people thought that ANN training algorithms were doomed because they were likely to get stuck in local optima, but it turns out that this is rather rare in practice (or when it is the case, they are usually fairly close to the global optimum).
- ANNs seem to have entered a virtuous circle of funding and progress. Amazing products based on ANNs regularly make the headline news, which pulls more and more attention and funding toward them, resulting in more and more progress, and even more amazing products.