In-Depth Artificial Neural Networks

Hello everyone! In this article, I will walk you through the logic behind artificial neural networks, which are recognized as the lifeblood of deep learning and form the basis of artificial intelligence, along with deep learning methods. To make the material easier to consolidate and understand, I recommend that you have a moderate level of programming knowledge and some familiarity with data structures.

[Image: a simple neural network with 3 input neurons and an output layer producing 2 outputs, y1 and y2]

As you can see in the image above, we have 3 input neurons, and the output layer produces 2 outputs, y1 and y2. Actually, I first want to tell you how I met deep learning.

As soon as I realized that deep learning methods can produce better-optimized results than the classical machine learning methods I had used in my previous projects, I threw myself into the arms of artificial neural networks. Come on, let's all digest artificial neural networks in a pleasant way so that we can quickly move on to deep learning methods.


📌 Before I talk about artificial neural networks, I would like to tell you what a nerve cell is. Artificial neural networks are a machine learning method inspired by the human brain. A nerve cell, or neuron, is the basic functional unit of the nervous system that carries out the transfer of information.

[Image: a typical biological nerve cell (neuron)]

What you see above is a typical nerve cell, a neuron. Artificial neural networks are modeled on these cells; because they are inspired by the human brain, their basic units are also called neurons.

📌 Just like neurons in the nervous system, neural networks are responsible for carrying data from input to output. In summary, artificial neural networks are a machine learning technology inspired by the workings of the human brain. Today, artificial neural network models are used to solve problems in many fields, such as engineering, economics, and health.
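That input-to-output transfer can be sketched in a few lines of NumPy. Below is a minimal forward pass for the 3-input, 2-output network from the first figure; the weight values, biases, and sigmoid activation are illustrative assumptions on my part, not values from the article:

```python
import numpy as np

def sigmoid(z):
    # Squash each value into the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-z))

# 3 input neurons, as in the figure.
x = np.array([0.5, -1.2, 3.0])

# One weight per input for each of the 2 output neurons.
W = np.array([[0.2, -0.4, 0.1],    # weights feeding y1
              [0.7,  0.3, -0.5]])  # weights feeding y2
b = np.array([0.1, -0.2])          # one bias per output neuron

# Forward pass: weighted sum plus bias, then the activation.
y = sigmoid(W @ x + b)             # y1 and y2
print(y.shape)                     # (2,)
```

Each output neuron simply computes a weighted sum of all the inputs, adds its bias, and passes the result through an activation function.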

Why Is Deep Learning More Successful?

[Image: chart comparing the performance of deep learning and classical machine learning methods]

In the classical (traditional) machine learning methods used previously, the performance curve sits somewhat lower than those of the newer methods. In contrast, training a very large neural network will often produce higher performance. And as this training continues, the machine keeps learning, increasing its knowledge capacity day by day.

Viewed from today, classical machine learning methods count among the older approaches. Did you also notice that the method with the highest performance in the chart is deep learning? So, is deep learning the same thing as an ANN? I can hear you saying no.

According to François Chollet, deep learning is a subfield of machine learning that achieves increasingly useful representations of data by processing it through successive layers.

One of the most basic examples given when artificial neural networks come up is undoubtedly the graph of T-shirt price versus demand. Let's discuss together the example given in the first week of AI for Everyone, my favorite course, taught by Andrew Ng. It matches real life: the higher the price of a T-shirt, the lower the demand for it will be.

[Image: graph of T-shirt demand versus price]

In such charts, it should be noted that the demand curve never falls below zero. Demand may decrease a great deal and approach zero, but it will not become negative. Andrew Ng says that this blue line turns out to be perhaps the simplest possible neural network.

As the chart progresses a little further along the price axis, we see the blue line flatten out and hold steady at zero.
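This line that decreases and then flattens at zero is exactly what a ReLU-style unit computes. Here is a minimal sketch, where the slope and intercept are made-up numbers for illustration:

```python
def demand(price, a=-2.0, b=10.0):
    # Linear decrease with price, clipped at zero: max(0, a*price + b).
    # Like the demand curve, the result can approach zero but never goes negative.
    return max(0.0, a * price + b)

print(demand(1.0))   # 8.0 -> high demand at a low price
print(demand(5.0))   # 0.0 -> demand bottoms out
print(demand(10.0))  # 0.0 -> still zero, never negative
```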

[Image: the price-demand line flattening out at zero]

If we want to pour this structure into a neural network, the usual setup is that the price is given to the network as input and the demand variable comes out as output. In the ANN structure, inputs and outputs are expressed by nerve cells, i.e. the neurons we saw above. Hidden layers may also sit between them, and the number and content of the layers change completely according to the architecture of the network.
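As a sketch of that structure, here is a tiny price-to-demand network with one hidden layer in NumPy. The layer sizes and random weights are placeholder assumptions; a real model would learn these values from data:

```python
import numpy as np

rng = np.random.default_rng(0)

# One input (price) -> 3 hidden neurons -> one output (demand).
W1, b1 = rng.normal(size=(3, 1)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

def predict_demand(price):
    h = np.maximum(0.0, W1 @ np.array([price]) + b1)  # hidden layer with ReLU
    return (W2 @ h + b2)[0]                           # single output neuron

print(predict_demand(4.0))  # an (untrained) demand estimate
```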

Going into a little more detail, deep neural networks involve important topics such as the gradient descent algorithm, tensor operations, the backpropagation algorithm (the chain rule of derivatives), and loss functions. In this article, I have shown you the logic of neural networks to prepare you for these topics. For me, doing a lot of research and staying open to new information is always the most important thing. Step by step, we will reach the level where we train our own neural network model. Hope to see you! Have a healthy day ✨
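As a small taste of the gradient descent algorithm mentioned above, here is a hand-rolled loop minimizing a one-parameter squared loss; the loss function, starting point, and learning rate are arbitrary choices for illustration:

```python
# Minimize loss(w) = (w - 3) ** 2 by repeatedly stepping against the gradient.
w = 0.0              # arbitrary starting point
learning_rate = 0.1

for _ in range(100):
    grad = 2 * (w - 3)          # derivative of (w - 3) ** 2 with respect to w
    w -= learning_rate * grad   # gradient descent update

print(round(w, 4))  # 3.0 -> converges to the minimum at w = 3
```

Real networks do the same thing with millions of weights, using backpropagation to compute all the gradients at once.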


  1. François Chollet, Deep Learning with Python, Manning Publications.
  2. Andrew Ng, AI For Everyone, Coursera.
  3. Wikipedia, "Neural Network".
  4. Albahnsen, "Building AI Applications Using Deep Learning", June 6, 2017.
