Application of CNN

Hello everyone. In this last blog post, I want to discuss a simple application of my favorite topic, CNNs. I chose the MNIST data set, one of the easiest data sets for this task.

The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems. The MNIST database contains 60,000 training images and 10,000 testing images. Half of the training set and half of the test set were taken from NIST’s training dataset, while the other half of the training set and the other half of the test set were taken from NIST’s testing dataset. The images contained within have a width of 28 pixels and a height of 28 pixels.

Figure: Sample images from the MNIST test dataset


The data set is imported from the TensorFlow library. A Session is used to run the code, and global_variables_initializer is called so that the variables are ready. The data must be given to the model piece by piece during training, so the batch size is set to 128. A function called "training step" performs the training; inside it, a for loop runs the training iterations. MNIST images are fetched with x_batch, y_batch = mnist.train.next_batch(batch_size), so we feed images to the model batch by batch. feed_dict_train assigns the images and labels of the current batch to our placeholders. The optimizer and the loss are run in a single line so that we can optimize the model and watch the loss change at the same time. An if statement is used to monitor training: training accuracy and training loss are printed every 100 iterations. A test_accuracy function is defined to see how the model predicts data it has not encountered before.
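The batch-feeding idea described above can be sketched in plain NumPy; the helper name next_batch, the toy data, and the loop counts below are my own assumptions for illustration, not the original TF1 code:

```python
import numpy as np

def next_batch(images, labels, batch_size, rng):
    """Return a random mini-batch, mimicking mnist.train.next_batch."""
    idx = rng.choice(len(images), size=batch_size, replace=False)
    return images[idx], labels[idx]

# Toy stand-ins for the MNIST training images (28x28 = 784 pixels each).
rng = np.random.default_rng(0)
train_images = rng.random((1000, 784))
train_labels = np.eye(10)[rng.integers(0, 10, size=1000)]  # one-hot labels

batch_size = 128
for i in range(500):
    x_batch, y_batch = next_batch(train_images, train_labels, batch_size, rng)
    # In the TF1 code, these would be fed to the placeholders:
    # feed_dict_train = {x: x_batch, y_true: y_batch}
    # session.run([optimizer, loss], feed_dict=feed_dict_train)
    if i % 100 == 0:
        pass  # print training accuracy and loss here, every 100 iterations
```

Each pass through the loop draws a fresh random batch, which is the same behavior the blog relies on when it feeds pictures "in the form of batch."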

Two convolutional layers are used for the MNIST data set. Experiments showed that accuracy increased as the number of convolutional layers, the number of training steps, and the filter sizes were increased. The first convolutional layer has 16 filters, all of size 5×5; the second convolutional layer has 32 filters, also 5×5. The layers are connected with the necessary arrangements through the max pooling function. ReLU and softmax are used as activation functions, and Adam is used as the optimization algorithm. A very small learning rate of 0.0005 is used, and the batch size is set to 128 to improve training. Training accuracy and training loss are printed every 100 iterations to check the accuracy of the model. When the code is executed for 10,000 iterations, a test accuracy of 0.9922 is obtained.
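As a rough illustration of how this architecture transforms an image, here is a minimal NumPy sketch of the shape flow through the two convolutional layers (16 and then 32 filters of size 5×5, each followed by ReLU and 2×2 max pooling). The random weights and helper functions are assumptions for illustration only; they are not the trained model:

```python
import numpy as np

def conv2d(x, filters):
    """'Same'-padded 2D convolution with stride 1, followed by ReLU.
    x: (H, W, C_in), filters: (k, k, C_in, C_out)."""
    k = filters.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W, _ = x.shape
    out = np.empty((H, W, filters.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]                      # (k, k, C_in)
            out[i, j, :] = np.tensordot(patch, filters, axes=3)  # one value per filter
    return np.maximum(out, 0.0)  # ReLU activation

def max_pool(x):
    """2x2 max pooling with stride 2."""
    H, W, C = x.shape
    return x.reshape(H // 2, 2, W // 2, 2, C).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.random((28, 28, 1))            # one MNIST-sized image
w1 = rng.standard_normal((5, 5, 1, 16))    # layer 1: 16 filters of 5x5
w2 = rng.standard_normal((5, 5, 16, 32))   # layer 2: 32 filters of 5x5

h1 = max_pool(conv2d(image, w1))  # 28x28x16 after conv, 14x14x16 after pooling
h2 = max_pool(conv2d(h1, w2))     # 14x14x32 after conv, 7x7x32 after pooling
```

The 7×7×32 output would then be flattened and passed to the fully connected softmax layer for the 10 digit classes.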

Figure: Estimation mistakes made by the model

The figure above shows some examples that our model predicted incorrectly. The model can sometimes make wrong predictions, which may be because the digit is faint or unclear. In the first example, we see that our model predicts the digit 4 as a 2.

Figure: Graph of the loss function

The loss graph is a visualized version of the loss values we observed during training. As shown in the figure, the loss decreases over time; our goal is to bring the loss value closer to zero. The loss graph also lets us judge whether the learning rate is appropriate. Looking at the figure, we can say that our learning rate is good, because the decrease in the graph does not slow down.

In this blog post, I built an application in Python using a CNN with the MNIST data set. Thank you to everyone who has followed my blog closely until today; goodbye until we meet again…




Support Vector Machines Part 1

Hello everyone. Image classification is among the most common uses of artificial intelligence. There are many ways to classify images, but in this blog post I want to talk about support vector machines.

In machine learning, support vector machines are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Since the algorithm does not require any joint distribution information about the data, SVMs are distribution-independent learning algorithms. The Support Vector Machine (SVM) can be used for both classification and regression challenges, but it is mostly used for classification problems.

How to solve the classification problem with SVM?

In this algorithm, we plot each data item as a point in n-dimensional space. Next, we classify by finding the hyperplane that separates the two classes best: the separating line is placed so that it passes as far as possible from the nearest elements of the two classes. SVM is a nonparametric classifier. It can classify both linear and nonlinear data, but it generally tries to separate the data linearly.

SVMs apply a classification strategy that uses a margin-based geometric criterion instead of a purely statistical one. In other words, SVMs do not need estimates of the classes' statistical distributions in order to perform classification; they define the classification model using the concept of margin maximization.

In the SVM literature, a predictor variable is called an attribute, and a transformed attribute used to define the hyperplane is called a feature. The task of choosing the most appropriate representation is known as feature selection. A set of features that describes one case is called a vector.

Thus, the goal of SVM modeling is to find the optimal hyperplane separating the vector sets, with the cases of one category of the target variable on one side of the plane and the cases of the other category on the other side.
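To make this concrete, here is a tiny sketch of classifying points by which side of a hyperplane they fall on; the weight vector w and bias b below are made-up values for illustration, not learned ones:

```python
import numpy as np

# A hyperplane in n-dimensional space is the set of points x with w . x + b = 0.
# These values of w and b are hypothetical, chosen only for the example.
w = np.array([1.0, 1.0])
b = -3.0

def classify(x):
    """Assign a point to a class by which side of the hyperplane it lies on."""
    return 1 if np.dot(w, x) + b >= 0 else -1

points = [np.array([1.0, 1.0]), np.array([3.0, 2.0])]
labels = [classify(p) for p in points]  # [-1, 1]
```

The first point falls on the negative side of the plane and the second on the positive side, which is exactly the "one category on each side" separation described above.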

Classification with SVM

The mathematical algorithms of SVM were originally designed for the classification of two-class, linearly separable data, and were later generalized to multi-class and nonlinear classification. The working principle of SVM is based on finding the most appropriate decision function that can distinguish the two classes; in other words, defining the hyperplane that separates the two classes from each other in the most appropriate way (Vapnik, 1995; Vapnik, 2000). In recent years, intensive studies have been carried out on the use of SVMs in the field of remote sensing, where they are used successfully in many areas (Foody et al., 2004; Melgani et al., 2004; Pal et al., 2005; Kavzoglu et al., 2009). To determine the optimum hyperplane, two hyperplanes parallel to it that bound the classes must be determined. The points that lie on these bounding hyperplanes are called support vectors.

How to Identify the Correct Hyper Plane?

It is quite easy to find the correct hyperplane with packages such as R or Python, but we can also find it manually with simple methods. Let's consider a few simple examples.

Here we have 3 different hyperplanes a, b and c. Now let’s define the correct hyperplane to classify the star and the circle. Hyperplane b is chosen because it correctly separates stars and circles in this graph.

If all of our hyperplanes separate classes well, how can we detect the correct hyperplane?

Here, maximizing the distance between the hyperplane and the nearest data points of each class helps us decide on the correct hyperplane. This distance is called the margin.
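A minimal sketch of computing that margin, assuming a made-up hyperplane w · x + b = 0: the perpendicular distance from a point to the plane is |w · x + b| / ||w||, and the margin is twice the distance of the closest point:

```python
import numpy as np

w = np.array([3.0, 4.0])   # hypothetical hyperplane normal vector
b = -5.0                   # hypothetical bias

def distance_to_hyperplane(x):
    """Perpendicular distance from point x to the hyperplane w . x + b = 0."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

# The margin is twice the distance of the closest data point to the plane.
points = np.array([[3.0, 4.0], [0.0, 0.0], [2.0, 1.0]])
margin = 2 * min(distance_to_hyperplane(p) for p in points)  # 2.0
```

Among candidate hyperplanes that all separate the classes, SVM picks the one for which this margin value is largest.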

We can see that hyperplane C has a higher margin than both A and B. Hence, we choose C as the correct hyperplane.

SVM for linearly inseparable data

In many problems, such as the classification of satellite images, it is not possible to separate the data linearly. In this case, the problem of some training data lying on the wrong side of the optimum hyperplane is solved by defining positive slack variables. The balance between maximizing the margin and minimizing misclassification errors is controlled by a regularization parameter C (0 < C < ∞) that takes positive values (Cortes et al., 1995). In this way, the data can be separated linearly and the hyperplane between the classes can be determined. Support vector machines can also apply nonlinear transformations mathematically with the help of a kernel function, allowing the data to be separated linearly in a higher-dimensional space.

For a classification task performed with support vector machines (SVM), it is essential to choose the kernel function to be used and the optimum parameters of this function. The kernel functions most commonly used in the literature are the polynomial, radial basis function, PUK, and normalized polynomial kernels.
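As a small illustration of two of these kernels, here is a sketch of the radial basis function and polynomial kernels in NumPy; the parameter values gamma, degree, and c are arbitrary choices for the example, not recommended settings:

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """Radial basis function kernel: k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def poly_kernel(x, y, degree=3, c=1.0):
    """Polynomial kernel: k(x, y) = (x . y + c)^degree."""
    return (np.dot(x, y) + c) ** degree

# Identical points have maximal RBF similarity (1.0); distant points approach 0.
same = rbf_kernel([1.0, 2.0], [1.0, 2.0])   # 1.0
far  = rbf_kernel([0.0, 0.0], [3.0, 4.0])   # exp(-12.5), close to 0
```

The kernel plays the role of a similarity measure between two samples, which is what lets the SVM separate the data linearly in the implicit higher-dimensional space.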

SVM is used for applications such as disease recognition in medicine, consumer credit decisions in banking, and face recognition in artificial intelligence. In the next blog post, I will try to talk about SVM applications in software packages. Goodbye until we meet again…




Past, Present and Future of Artificial Intelligence (AI)

We talked with Ergi Şener about the evolution of artificial intelligence (AI) from the past to the present, its impact today, and its effect on the future.

Ergi Sener, cited as one of the 20 Turkish people to follow in the field of technology (*), received a BS in Microelectronics Engineering in 2005 and a double MS in Computer Science & Management in 2007 from Sabancı University. He is pursuing a PhD in Management.

Ergi began his career as the co-founder and business development director of New Tone Technology Solutions in 2007 with the partnership of Sabancı University’s Venture Program. After the successful exit of this company, between 2009 and 2013, he worked as a CRM manager at Garanti Payment Systems and as a senior product manager in the New Technology Business Division of Turkcell.

In 2013, he joined MasterCard as a business development and innovation manager for emerging markets and managed the SEE cluster. After his corporate career, Ergi acted as a serial entrepreneur and founded three companies in fintech, IoT, and AI. He was also the managing director of a Dutch-based incubation center. He is currently the co-founder and CEO of a new-generation fintech, Payneer. During his career, among many other honors, Ergi received the “Big Data Analysis & Data Mining Innovation Award” in 2017 and 2018, the “Global Telecoms Business Innovation Award” in 2014, the “MasterCard Europe President’s Award for Innovation” in 2013, the “Payment System of the Year Award,” and the Turkcell CEO Award in 2012.

Ergi is an instructor at Sabancı University and Bahcesehir University, and a technology editor. He is also an angel investor in Wastespresso.


Question 1: What do you think about the development of artificial intelligence from past to present? Since the corona pandemic outbreak, we have had a paradigm shift and everything has transformed very fast. What do you think about the development of artificial intelligence in the post-corona period?

Although the term AI suffers from too many definitions, we can simply define it as “trying to make computers think and act like humans.” The more humanlike the desired outcome is, the more data and processing power are required. Since AI is one of the most prominent technology trends of today, we regularly find ourselves in conversations about when artificial intelligence will replace our jobs or which sectors will be disrupted by AI… Indeed, AI is a branch of science that is changing the world in many ways. It is constantly growing and evolving on a large scale that includes research, education, and technological developments.


Designed by pikisuperstar / Freepik (ai)

Almost 70 years have passed since the father of AI, Alan Turing, laid the foundations of this discipline. Since the first studies began, the major aim has been to have computers act like humans. Although some technology giants like Google or Amazon claim that they have passed the Turing test, there are still many years ahead before such progress is real.

As we enter 2021, it would not be wrong to say, to calm the pessimistic scenarios, that AI technology will not replace many jobs in the short term, but it will cause radical changes in business practices and processes. It will also transform many sectors and jobs. Therefore, it is very important to follow developments in the field of AI closely and to plan its integration into our businesses. Danger bells will ring for those who do not care about the progress of AI or think it is too early to act.

I should state it this way: “today, AI is eating the world.” A serious change is taking place with the development of AI in many different areas, from driverless vehicles to image processing, from natural language processing to optimization problems, from robotic systems to drones, and from speech recognition technologies and virtual assistants to process automation.

AI will be with us even more in the post-corona period. We can recognize this more clearly if we analyze how AI has been used to fight Corona, one of the major crises of humanity. A global epidemic like the Coronavirus once again revealed the importance of technology, artificial intelligence, and data science, and their effectiveness in addressing the epidemic and returning to regular life by getting rid of the virus faster.

AI was used to monitor and predict the spread of epidemics

The better we track the spread and effects of the virus, the better we can fight it. By analyzing virus-related news, social media posts, and government documents, artificial intelligence platforms can predict the spread of an epidemic. BlueDot, a Canadian start-up, uses artificial intelligence to track infectious disease risks. BlueDot’s AI warned about the Corona threat days before the World Health Organization or disease control and warning centers did. To do so, BlueDot first picked up articles on 27 cases published in China for the initial findings in Wuhan and added them to its warning system. Then, global airline ticketing data was used to determine the cities and countries that can be reached directly from Wuhan, in order to find people who were likely to be infected and traveling. The international destinations that BlueDot predicted would attract the most passengers from Wuhan were Bangkok, Hong Kong, Tokyo, Taipei, Phuket, Seoul, and Singapore, which were the first places the Coronavirus was seen after Wuhan.

AI was used to diagnose the virus

Infervision, a Beijing-based start-up that conducts AI-focused studies, developed an artificial intelligence-based system that allows the disease to be effectively detected and monitored. This solution increases the speed of diagnosis and also helps reduce the growing panic in hospitals. Infervision’s software detects and interprets the symptoms of pneumonia caused by the virus. The Chinese e-commerce giant Alibaba also implemented an AI-powered system that managed to diagnose the virus with 96% accuracy within seconds. Alibaba’s system was developed by training on images and data from 5,000 confirmed Coronavirus cases and is currently used in more than 100 hospitals in China. Both systems work by analyzing patients’ chest CT (tomography) scans.

Color Coded Health Assessment Application

Despite its controversial technology and use of AI, China’s advanced surveillance system uses SenseTime’s facial recognition technology and temperature detection software to identify people who have a fever and are more likely to be infected. The Chinese government, with the support of tech giants such as Alibaba and Ant Financial, also implemented a color-coded health rating system to help track millions of people returning to work after the rapidly spreading Coronavirus outbreak. With this application, people are divided into three categories, “green, yellow, or red,” according to their health condition, travel history, whether they have visited places where the virus is common, and their interactions with infected people. Based on the program’s analysis, it is determined whether individuals in the relevant category will be quarantined or permitted to move about. Individuals granted a green health code may enter public places, subways, and office buildings; a QR code is sent to their mobile application and scanned by the authorities.

Delivery of medical devices by drones

One of the fastest and safest ways to deliver necessary medical devices during an epidemic is the use of drones. For this purpose, the Japanese company Terra Drone supports the transportation of medical devices and quarantine materials between disease control centers and hospitals in Xinchang with minimum risk. Drones can also be used to monitor public spaces, check compliance with quarantine practices, and perform thermal imaging.

Use of robots in sterilization, food and material supply

Since physical robots cannot be infected with viruses, they can be used to perform many routine tasks (cleaning, sterilization, food and medicine supply, etc.) and to reduce human contact. Robots from the Denmark-based Blue Ocean Robotics use ultraviolet light to kill bacteria and viruses. In addition, Pudu Technology’s robots, which deliver food in hospitals, are deployed in 40 hospitals across China.

Use of chatbots to share information and answer questions

Citizens can get free health consultancy from chatbot services delivered through WeChat, China’s popular social messaging application. These chatbots also share information about recent travel procedures and circulars.

Using AI in new drug development

Google’s DeepMind division has used state-of-the-art AI algorithms and deep learning systems to understand the proteins of the virus, and published its findings to help develop therapies. Similarly, BenevolentAI uses AI systems to help treat the Coronavirus; after the outbreak, the company used its predictive capabilities to identify existing drugs on the market that could be effective against the virus.

Using autonomous taxis

Autonomous vehicles have become one of the most popular AI use cases in recent years. We are still waiting for autonomous vehicles in traffic, but there has been great progress from the automotive industry and over-the-top technology companies to make this dream a reality. During the epidemic, again in China, autonomous vehicles were used as taxis to reduce the spread of the virus.

All the applications above show that AI adoption has accelerated with Corona, and we will see it in many different applications in many different sectors. It is also clear that companies that are advanced in AI and investing in it will be effective in many areas and will have the ability to disrupt many sectors. Apple is a good example with its new Apple Watch: with its predictive analytics platforms, Apple can track Corona symptoms and heart attacks.

Question 2: Turning to Turkey, what are we doing in artificial intelligence as a country? Are we at a level that can compete with the world? Has artificial intelligence taken its place in our professional lives in all sectors?

As with any popular technology, we should separate the truth from the “hype” when talking about AI. With the popularization of AI, many new trends in the tech world have started to be associated with AI, and AI has become sought after in everything (increased investment in AI also has a direct and significant effect on this). Unfortunately, in Turkey, many people in business do not have a correct understanding of, or enough information about, AI. What can be accomplished with AI is truly limitless: virtual digital assistants, chatbots, driverless vehicles, real-time translation services, AI-powered physical robots, and so on. In real terms, AI has begun to deeply affect both our daily lives and business processes, and in 2021 we will deeply feel many of its concrete uses. In this context, it is obvious that AI will have a transformative effect on consumers, institutions, and even government organizations around the world. In addition to the actual potential of artificial intelligence, how we manage such a profound technological revolution and its impact on our professional and personal lives should also be seriously discussed, and a strategic road map should be determined for Turkey in the bigger picture.


Designed by vectorpouch / Freepik (ai)

Today, AI has reached a stage where intelligence faster than a human’s is turned into money. The most common use case of AI in Turkey, as in many other countries, is chatbots that answer consumers’ questions and direct them to the appropriate place or person. It is a common example of AI, and one most people have experienced personally. Applications of AI can be diversified across areas such as fraud detection, prediction, optimization, product recommendation, pricing forecasts, and personalized marketing. We have seen pilot uses for all of these in different sectors. But the real question is whether we have produced a publicized success story from them or not.

AI in particular, combined with a fast and robust infrastructure, provides the opportunity to access real-time data and reach every customer with personalized content at the right time and with the right message. Analytical and camera capabilities have been on the agenda of every technology giant, but the real, untapped opportunity lies in how few companies have platforms or products ready to deliver that value in daily life.

As Turkey, we should also increase our focus and investment on AI. We actually have very valuable academics and experts working on AI; however, these professionals mainly work abroad. One of my friends is currently the director of the personalization platform at Netflix, one of the most advanced companies in terms of AI implementations. Another friend of mine is in a managerial role at Google’s autonomous vehicle company, Waymo. We also have great start-ups developing state-of-the-art AI use cases and many AI innovations, and great projects take place in universities, led by researchers and academics. But these initiatives should be supported in a structured manner with a clear strategy. So far, we have not seen concrete results from our AI efforts.

In this period, AI is likely to affect our lives more and more every day. It is important for our country to advance work in this field strategically and to follow this focus systematically.

Question 3: We know that singularity, or the technological singularity, is the hypothetical belief that in the future, artificial intelligence will go beyond human intelligence and will visibly change civilization – the nature of humanity. So, what do you think about singularity? Can artificial intelligence go beyond human intelligence? Or will humans always be one step ahead, as artificial intelligence will develop at the same rate as humans?

As I mentioned, from the first appearance of the AI concept 70 years ago, the major goal has been to build systems at the same level as humans. Today, platforms claiming to pass the famous Turing test are increasing day by day (the Turing test refers to a situation in which a person asking questions cannot distinguish whether the answers are given by a human or by a machine). Google, at its recent events, mentioned that the Google Assistant passed the Turing test. However, it is not proper to claim that the test was passed based on examples tried in an extremely limited setting. Given the uncertain processes we are in, machines that can understand and connect with human reactions, natural languages, and our world as well as the human brain have not yet been built.

On the other hand, according to a recent Stanford report, AI computational power is growing faster than traditional processor development. The speed of AI computation doubles every three months, according to Stanford University’s 2019 AI Index report. These improvements show how fast AI is advancing, but there is still a long way to go.


Designed by iuriimotov / Freepik (ai)

I believe the singularity concept is exaggerated. Elon Musk’s Neuralink initiative is crucial as a first step toward augmented humans, but many factors will affect this vision. We still need time to see many of the most popular AI use cases, such as autonomous vehicles and physical robots, in our daily lives. So, the singularity can be considered something like a science fiction concept, but we should also be aware that there is critical progress on this issue.

Besides, leaders in the field of AI, including Elon Musk and Google DeepMind’s Mustafa Suleyman, have signed a letter calling on the United Nations to ban lethal autonomous weapons, otherwise known as “killer robots.” In their petition, the group states that the development of such technology would usher in a “third revolution in warfare” that could equal the invention of gunpowder and nuclear weapons. The letter was signed by the founders of 116 AI and robotics companies from 26 countries.

Musk has a history of expressing serious concerns about the negative potential of AI. I agree with him and believe that if we do not find ways to control the improvement of AI now, we will not be able to control it later. So, there should be a global consensus for the sake of humanity. However, we should understand that AI will be one of the crucial factors affecting the competitiveness of countries. Russian president Putin stated that whichever country leads the way in AI research will come to dominate global affairs. So, it will not be easy to reach such a global consensus, and the competition may also accelerate an AI-driven crisis.

Thank you to Ergi Şener for the nice interview.




Veganism and Artificial Intelligence

Veganism and artificial intelligence are good topics to discuss because they both look mysterious to me. We know how artificial intelligence is everywhere, but what about its relationship with veganism; do we know about it?
I was a vegetarian for a few years and, to be honest, it was one of my better diets. Turkey is often associated with cuisine built around meat, but we also have many meat-free dishes. Okay, turning back to our topic: I wonder how artificial intelligence can affect veganism, whether in a good or a bad way. First, however, let’s have a look at what vegan means.

What is veganism?

The term “vegan” was chosen by combining the first and last letters of “vegetarian.” It is an undeniable fact that being vegan is really popular, whether the purpose is health or caring about animals. Of course, there are different kinds of reasons behind being vegan, such as ethics, health, and the environment.
This is not something for a few months; it is a lifestyle, and vegan people choose not to consume animals or animal products such as dairy, eggs, cheese, meat, fish, etc. However, this should not suggest that they can eat only vegetables and fruits, because that is totally wrong! For example:

  • Beans
  • Lentils
  • Tofu
  • Hummus
  • Seeds
  • Plant milk
  • Whole grains

Those are just a few examples of foods that vegans can consume.
Before finishing this short explanation of veganism, I would like to add some bullet points on the types of veganism:

  • Whole-food vegans
  • Raw-food vegans
  • Dietary vegans

What impact does Artificial Intelligence have on veganism?

There is huge investment in the plant-based industry, and I think in the future many big companies and businesses will be involved in it more than before. This investment is not only in food; it also covers clothes and shoes. Sometimes I get advertisements or see comments related to “vegan clothes,” “vegan creams,” etc. on my social media accounts; I think it is because I am interested in these areas.

If you would like to learn more about advertising, you can read this article:

According to my research, artificial intelligence can have a positive impact. As we all know, we are going through difficult times due to the pandemic, it will not be the last, and we will also not live on this planet forever because of climate change. There is a question from my side: does a plant-based product really mean a healthy one? I mean, if we talk about the health motivation for being vegan, the products are supposed to be healthy; if they are not, then what is the purpose of this health factor? I think there are many points that shape things, whether we realize them or not.

Some Examples:

While some questions arise, if companies are going to produce plant-based products, they first need to search the planet for eligible plants. After that, an analysis step will certainly be required. As you can guess, we are talking about the whole planet, which means they need scientists and specific processes to deal with a huge dataset. AI can help because it has the capacity to run complex predictive algorithms. Moreover, AI can easily combine different recipes and predict which ones humans will love, based on human behavior, because those behaviors are also data that AI uses.
The other part is that we need to be careful about what we eat because of our body’s needs. We need a balance of protein, carbohydrates, and so on, and what we have been taught is that animal products contain molecules necessary for our bones, for example. So companies need to find replacements for animal products, and here AI helps to produce cheaper and tastier options for consumers.


A Phenomenon: Time loop

We have heard about time loops, parallel worlds, and so on, but do we know exactly what they are? Basically, a time loop is a phenomenon in which a period of time is repeated and re-experienced by somebody. The same things happen over and over again. We also see it in many movies; those movies and series are my favorites so far. There is a series called Russian Doll on Netflix, and it is about a time loop. It is based on Nadia’s personal journey as she goes through repeated moments. She dies repeatedly, always returning to the same moment she was in, and of course she tries to figure out what is happening to her. There are some sub-topics, like addictions and other issues, but my main topic is the time loop, and I would like to go deep.

What is the Time loop?

Mostly we call it “déjà vu,” and I am sure most of us have experienced it at least once. These areas are really deep and confusing if we are not familiar with the terms and/or moments. Honestly, I am not familiar with it, but it grabs my interest, because even if I cannot totally understand it, that does not mean it does not happen to someone else. Therefore, I did some quick research, and according to it, there are two different kinds of time loops. The first is called the “causal paradox,” and the other is called the “ontological paradox,” also known as the bootstrap paradox.
The causal paradox exists when events in the future trigger a sequence of events in the past, whereas the ontological paradox involves an object or person that creates the loop. As a note, their origin cannot be determined.
The time loop happens without ending, and our memories are reset once we restart the repeated moments. The thing is, everything looks normal, and we live normally, until the point where we experience the same things again.

Time loop: Why, How, When…?

Of course, we would like to travel to a different time, whether in the past or the future. Sometimes we may want to change events that could happen in the future; it is an inevitable wish. As an example of how we desire to learn something about the future, in my culture we have fortune-telling. Sometimes for fun and sometimes for real, fortune-telling becomes an important part of the moment. I know this example is not exactly about the time loop, but it is about time travel, and these topics are related to each other.
On the other hand, there is something related to human behavior here, because we would like to know more about mysteries, about how our brains function, and about how we react to things…

If you would like to learn about brain activities and developments, check this article out:

I often ask myself what role AI will play in the time-loop area, given that AI is going to affect every single field. It is not easy to get a definitive answer, but I will try to understand.

Artificial Intelligence and the Time Loop

As I tried to mention above, there are several different concepts involved: how to get there, whether to the past or the future, and, after getting there, how things can be changed by events and/or people. What AI can do is predict futures that are likely to happen. Even with basic models, we try to predict the possible future and take action based on it. In today’s world, from the house to the factory, we use many tools developed with machine learning algorithms and expect them to make our lives easier and more valuable. So I ask: can we use AI for time travel? Why not? AI collects and monitors data from different sources and, based on it, builds machine learning models intended to shape the future. How exciting.
However, a big challenge is that we might change the possibilities simply by acting on our understanding of future events. And it is not only about changing possibilities; the challenge is also to manage multiple pathways with multiple streams of data. It looks really complicated to me.


Designing an Artificial Human Eye: EC-Eye

The eye is one of the organs with the most complex biological structure. Thanks to this structure, it provides a very wide viewing angle, processes both distant and near objects in detail, and adapts incredibly well to different environments and light conditions. In addition to its neural networks, layers, and millions of photoreceptors, it has a spherical shape, all of which make it very difficult to copy.
Despite all these difficulties, scientists from the Hong Kong University of Science and Technology continued their work in this area and developed a bionic eye using a light-sensitive perovskite material. This bionic eye, which they call the “Electrochemical Eye” (EC-Eye), is about to do much more than merely copy a human eye.

The cameras we have today may sound like a replica of vision. But at small sizes their resolution and viewing angle do not match the characteristics of the human eye, so solutions such as microchips are used instead. And, as we said before, designing them on a spherical surface is not easy. So how does the EC-Eye do it?
We can say that the electrochemical eye consists of two parts. On the front there is a lens that functions as a human iris. The same side also has an aluminum shell filled with an electrically charged liquid. This liquid plays the role of the gel-like biological fluid that fills the inside of the human eye, known as the “vitreous”.

On the back of the EC-Eye, wires send the generated electrical activity to a computer for processing. It also has a silicone eye socket to make contact. Finally, and most importantly, there are the sensitive nanowires that perform the actual detection. These nanowires are so sensitive that their response speed is faster than the photoreceptors in a normal human eye. Transmission takes place when the electrical reactions that occur on the nanowires are relayed to the computer. Of course, even if it sounds like a simple process when described this way, it is an application that pushes the limits of technology. It is even more intriguing that all of this works with a power and capability that could leave the human eye behind.
To see how it works, an interface was created between the EC-Eye and a computer, and some letters were shown to the EC-Eye through this interface. The detection results proved that a higher-resolution image was obtained. In the next stages it will face much more complex tests, and development work will continue.
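To get an intuition for this kind of letter-recognition test, here is a minimal, purely illustrative Python sketch; none of the names, grid sizes, or thresholds below come from the actual EC-Eye system. It treats the sensor array as a small grid of photocurrent readings, thresholds it into a binary image, and matches it against letter templates.

```python
# Illustrative only: a toy model of reading a letter from a sensor grid.
# The 5x5 grid, threshold, and templates are all invented for this sketch.

LETTER_TEMPLATES = {
    "T": [
        "#####",
        "..#..",
        "..#..",
        "..#..",
        "..#..",
    ],
    "L": [
        "#....",
        "#....",
        "#....",
        "#....",
        "#####",
    ],
}

def binarize(readings, threshold=0.5):
    """Turn raw sensor readings into a binary pixel grid."""
    return [["#" if v >= threshold else "." for v in row] for row in readings]

def match_letter(readings, threshold=0.5):
    """Return the template letter that agrees with the most pixels."""
    pixels = binarize(readings, threshold)
    best, best_score = None, -1
    for letter, template in LETTER_TEMPLATES.items():
        score = sum(
            pixels[r][c] == template[r][c]
            for r in range(5) for c in range(5)
        )
        if score > best_score:
            best, best_score = letter, score
    return best

# Simulated readings: strong signal along the shape of a "T".
readings = [
    [0.9, 0.8, 0.9, 0.8, 0.9],
    [0.1, 0.1, 0.9, 0.2, 0.1],
    [0.0, 0.1, 0.8, 0.1, 0.0],
    [0.1, 0.0, 0.9, 0.1, 0.1],
    [0.0, 0.1, 0.8, 0.0, 0.1],
]
print(match_letter(readings))  # → T
```

The real device of course does far more than template matching; the sketch only shows the basic idea of turning an array of electrical responses into a recognizable image.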

It is very clear that this bionic eye needs to pass many more tests before it can replace the human eye. In particular, although it looks like a small device, the stage of connecting the nanowires to a computer for processing is still a problem. With so many nanowires involved, installing and using them in a practical way seems very difficult, so it may take a while longer for these bionic eyes to be commercialized and used by everyone. But for now, it gives great hope for the future.
If it reaches a point where it can perceive things the human eye cannot, it can be said to have enormous potential. The things we see in science fiction movies and dismiss with “these only happen in movies anyway”, such as recording, seeing far, night vision, and viewing other wavelengths, no longer seem so inaccessible. Just as some of these can already be done comfortably with phone cameras, it is not hard to predict that high-end applications involving artificial intelligence will do them easily.
Artificial Intelligence has already begun to be a part of us in every field.

Looking to the Future: Creating an Artificial Eye


A Step-By-Step Journey To Artificial Intelligence

Machine learning (ML) is the study of computer algorithms that improve automatically through experience [1]. According to Wikipedia, machine learning involves computers discovering how to perform tasks without being explicitly programmed [2]. The first thing that comes to mind for most of you when it comes to artificial intelligence is undoubtedly robots, as you can see in the image. Today I have researched the relevant introductory courses on machine learning and artificial intelligence for you, and here I will list the DataCamp and Coursera courses that I am most pleased with.
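As a tiny illustration of that definition, the sketch below “learns” from labeled examples instead of hard-coded rules: a one-nearest-neighbor classifier in plain Python. The data points and labels are made up for this example and do not come from any of the courses listed.

```python
# A minimal 1-nearest-neighbor classifier: no explicit rules are
# programmed; the prediction comes entirely from labeled examples.

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`.

    `train` is a list of ((x, y), label) pairs.
    """
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    _, label = min(train, key=lambda pair: dist2(pair[0], query))
    return label

# "Experience": a few labeled 2-D points (invented data).
train = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((8.0, 9.0), "large"),
    ((9.0, 8.5), "large"),
]

print(nearest_neighbor(train, (1.1, 0.9)))  # → small
print(nearest_neighbor(train, (8.5, 9.1)))  # → large
```

Changing the training data changes the predictions without touching the code, which is exactly what “learning from experience without being explicitly programmed” means in miniature.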

DataCamp Courses

💠 Image Processing with Keras in Python: During this course, you will be taught how to build, train, and evaluate CNNs. You will also learn how these networks learn from data and how to interpret the results of training.
Click to go to the course 🔗
💠 Preprocessing for Machine Learning in Python: You’ll learn how to standardize your data into the right format for your model, create new features to make the most of the information in your dataset, and choose the best features to improve your model’s performance.
Click to go to the course  🔗
💠 Advanced Deep Learning with Keras: It shows you how to solve various problems using the versatile Keras functional API by training a network that performs both classification and regression.
Click to go to the course 🔗
💠 Introduction to TensorFlow in Python: In this course, you will use TensorFlow 2.3 to develop, train, and make predictions with recommendation systems, image classification, and models that power significant advances in fintech. You will learn both high-level APIs that let you design and train deep learning models in 15 lines of code, and low-level APIs that let you go beyond ready-made routines.
Click to go to the course 🔗
💠 Introduction to Deep Learning with PyTorch: PyTorch is one of the leading deep learning frameworks, both powerful and easy to use. In this course, you will use PyTorch to learn the basic concepts of neural networks before creating your first neural network to recognize digits from the MNIST dataset. You will then learn about CNNs and use them to build more powerful models that deliver more accurate results. You will evaluate the results and use different techniques to improve them.
Click to go to the course 🔗
💠 Supervised Learning with scikit-learn: 

  • Classification
  • Regression
  • Fine-tuning your model
  • Preprocessing and pipelines

Click to go to the course 🔗

💠 AI Fundamentals:

  • Introduction to AI
  • Supervised Learning
  • Unsupervised Learning
  • Deep Learning & Beyond

Click to go to the course 🔗

Coursera Courses

💠 Machine Learning: Classification, University of Washington: 

  • Solving both binary and multi-class classification problems
  • Improving the performance of any model using boosting
  • Scaling methods with stochastic gradient ascent
  • Using techniques for handling missing data
  • Evaluating models using precision-recall metrics

Click to go to the course 🔗

💠 AI For Everyone:

  • What AI realistically can and cannot do
  • How to identify opportunities to apply artificial intelligence to problems in your own organization
  • What it is like to build machine learning and data science projects
  • What it is like to work with an AI team and build an AI strategy in your company
  • How to navigate ethical and societal discussions about artificial intelligence

Click to go to the course  🔗

💠 AI for Medical Diagnosis:

  • In Lesson 1, you will create convolutional neural network image classification and segmentation models to diagnose lung and brain disorders.
  • In Lesson 2, you will create risk models and survival predictors for heart disease using statistical methods and a random forest predictor to determine patient prognosis.
  • In Lesson 3, you will create a treatment effect predictor, apply model interpretation techniques, and use natural language processing to extract information from radiology reports.

Click to go to the course 🔗
As a priority step in learning artificial intelligence, I took Artificial Neural Networks and Pattern Recognition courses during my Master’s degree. I developed projects in these areas and had the opportunity to present them, and I realized that I learn even more when I pass on what I know. In this article, I have summarized the DataCamp and Coursera courses I think you should take. Before all of these, I strongly recommend that you also finish the Machine Learning Crash Course.


  1. Mitchell, Tom (1997). Machine Learning. New York: McGraw Hill. ISBN 0-07-042807-7. OCLC 36417892.
  2. From Wikipedia, The free encyclopedia, Machine learning, 19 November 2020.
  3. DataCamp,
  4. Coursera,


The Movie “Her”: An Approach to Human-Intelligent Machine Interactions

Seven years ago, a not-so-classic movie directed by Spike Jonze was released, although it contains a classic romance at its core: Her. As in all romantic movies, the girl saves the man from a depressive period, and then leaves him in solitude just when their relationship reaches top speed. Despite this classic script, it was the most talked-about film of the year it was released, and it is still being analyzed.

Scientists Develop “Mini Brains” to Help Robots Recognize Pain and Self-Repair

Scientists at Nanyang Technological University (NTU, Singapore) are using a brain-inspired approach to find a way to give robots the artificial intelligence (AI) to recognize pain and to repair themselves when damaged. Robots built on NTU’s work may take their place in our lives in the near future.

The system contains AI-enabled sensor nodes that sense an applied physical force and process and respond to the “pain” arising from the pressure. It also allows the robot to detect and repair its own damage when it is slightly “injured”, quickly fixing itself without the need for human intervention.

Designed by stories / Freepik

Today, robots use a network of sensors to generate information about their immediate surroundings. For example, a disaster-rescue robot uses camera and microphone sensors to find a survivor under debris, and then pulls the person out, guided by touch sensors in its arms. An industrial robot working on an assembly line uses vision to guide its arm to the right position and touch sensors to determine whether the object slips when it is picked up. In other words, today’s sensors typically do not process information themselves; they send it to a single large, powerful, central processing unit, where the learning takes place. This causes delayed response times. It also leads to damage that requires maintenance and repair, which can be lengthy and costly.

The NTU scientists’ new approach embeds AI into a network of sensor nodes connected to many small, less powerful processing units that act like “mini brains” distributed across the robotic skin. According to the scientists, this means that learning happens locally, and that the robot’s wiring requirements and response time are reduced by five to ten times compared with conventional robots.
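The contrast between one central processor and distributed “mini brains” can be sketched with a toy simulation; all the numbers, names, and thresholds below are invented for illustration and do not come from the NTU system. Each node thresholds its own pressure reading locally and forwards only pain events, instead of streaming every raw reading to a central unit.

```python
# Toy comparison: centralized vs. distributed handling of pressure
# readings. Entirely illustrative; thresholds and data are made up.

PAIN_THRESHOLD = 0.8  # hypothetical normalized pressure level

def centralized(readings_per_node):
    """Every raw reading is sent to one central unit, which decides."""
    messages = sum(len(r) for r in readings_per_node)
    events = [
        (node, t)
        for node, readings in enumerate(readings_per_node)
        for t, value in enumerate(readings)
        if value >= PAIN_THRESHOLD
    ]
    return messages, events

def distributed(readings_per_node):
    """Each node decides locally; only pain events travel over the wire."""
    events = [
        (node, t)
        for node, readings in enumerate(readings_per_node)
        for t, value in enumerate(readings)
        if value >= PAIN_THRESHOLD
    ]
    messages = len(events)  # nothing else needs to be transmitted
    return messages, events

# Three sensor nodes, five time steps each (simulated pressures).
readings = [
    [0.1, 0.2, 0.9, 0.1, 0.1],   # node 0 is pressed hard at t=2
    [0.0, 0.1, 0.1, 0.0, 0.1],   # node 1 stays quiet
    [0.2, 0.1, 0.1, 0.85, 0.1],  # node 2 is pressed at t=3
]

c_msgs, c_events = centralized(readings)
d_msgs, d_events = distributed(readings)
print(c_msgs, d_msgs)          # → 15 2
print(c_events == d_events)    # → True
```

Both schemes detect exactly the same events, but the distributed one moves far less data, which is the intuition behind the reported reduction in wiring and response time.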


Assoc. Prof. Arindam Basu of the School of Electrical and Electronic Engineering, a co-author of the project, said: “For robots to one day work alongside humans, one concern is how to ensure they interact with us safely. For that reason, scientists around the world are finding ways to bring a sense of awareness to robots, such as being able to ‘feel’ pain, react to it, and withstand harsh operating conditions. However, the complexity of putting together the multitude of sensors required, and the resulting fragility of such a system, is a major barrier to widespread adoption.”

Rohit Abraham John, first author of the study and a Research Fellow at the NTU School of Materials Science and Engineering, said: “The self-healing properties of these new devices help the robotic system to repeatedly stitch itself together when ‘injured’ with a cut or scratch, even at room temperature. This mimics how our biological system works, much like the way human skin heals on its own after a cut.”


Building on their earlier work on neuromorphic electronics, such as using light-activated devices to recognize objects, the NTU research team is now considering collaborating with industry partners and government research laboratories to develop their system for larger-scale applications, continuing to build “mini brains” that help robots recognize pain and repair themselves. Robots built on NTU’s work are set to become a part of our lives.



What is the Squidbot?

We can say that an era is beginning in which artificial intelligence is also present under the sea. The quality of the products being developed is improving day by day. If the continuity of such projects can be maintained, there will be many people working and contributing in this field.