WordCloud and Sentiment Analysis with Python

One of the most popular topics today is the word cloud and the work built around it. People often use the nltk library to experiment with word clouds, and the aim here is to handle the preprocessing that comes before the natural language processing stages. Since the Python programming language reaches a wider audience every day, the variety of projects built with nltk keeps growing. Beginners often analyze tweets posted on any topic and produce visualizations, analyses, and inferences from them. One of the key points when creating a word cloud is the number of repetitions: the more often a word repeats, the more prominent it becomes in the cloud. In the image below, the part indicated with q is the word or phrase you are searching for. To do this, you must first have a Twitter Developer account; through the API it provides, you can pull tweets into your local workspace and process them there.
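Since repetition drives prominence, the core of any word cloud is just a frequency count. Here is a minimal sketch with Python's collections.Counter, using a few made-up tweets in place of results pulled from the API:

```python
from collections import Counter

# Toy tweets standing in for results returned by a Twitter API search
# (the "q" query mentioned above); repetition drives prominence in the cloud.
tweets = [
    "samsung launches new phone",
    "new samsung phone review",
    "samsung phone battery test",
]
words = " ".join(tweets).lower().split()
freq = Counter(words)
print(freq.most_common(2))  # the most repeated words render largest
```

Feeding such a frequency dictionary into a word cloud library is then just a matter of passing it to the generator.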

As can be seen in the photo above, the most critical part of the job is to structure the functions (the def blocks) well and decide which variables to work with. You can save the token and key values somewhere safe and read them from there. I did not show the key values here for privacy reasons, but once you have a Twitter Developer account you can access these values yourself and make the necessary assignments. If you want to target specific values while analyzing Twitter data, you can run custom searches; I will put the necessary documentation in the Resources section so you can review it and integrate it into your own code and analysis. As extra information, there are several nltk resources we will use: the ones I use personally are “stopwords” and “wordnet”. You can change the “english” option according to the language you want to work on; the English collection is comprehensive in terms of word coverage and effectiveness. If you are working in your own language and have a good word collection, you can mention it in the comments so we can keep the interaction high. You can observe the part I explained here in the image below.
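As a rough illustration of the stopwords idea, the sketch below filters common words before counting. The real list would come from nltk.corpus.stopwords.words("english") (after nltk.download("stopwords")); a tiny hardcoded subset is used here so the example stands alone:

```python
# Tiny hardcoded subset standing in for nltk's English stopword list,
# so the example is self-contained.
stop_words = {"the", "is", "a", "on", "and", "of"}

text = "the new phone is a big step on the road of mobile photography"

# Keep only the meaningful words; these are what should feed the word cloud.
tokens = [w for w in text.split() if w not in stop_words]
print(tokens)
```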

I chose the word ‘samsung’ for the word cloud study. By entering the abbreviation of the language in the lang parameter, you can pull the hashtag data you have chosen into your workspace. First we add the necessary libraries; then you can change the background color according to your personal preference. In addition, if you set plt.axis to “on”, you can observe the axes of the plot; since the frequency of repeating words is already conveyed by the highlighting itself, I found it unnecessary to show the axes. What I do here is set up a basic word cloud structure so that you gain something even at entry level. If you are planning a career in natural language processing, you can see this as a start and keep improving by following the necessary notes. The word cloud is the entry point for this kind of work. While I teach this at a junior level, I also continue to work in this field myself; a natural language processing career requires staying up to date on what others are doing, how they code, and which projects appear on platforms such as GitHub, along with large-scale resource browsing.

In this last part, I will show you an entry-level sentiment analysis. Before starting, note that when writing code in Python you need to structure your functions well and feed your logic through them. This accelerates functional processes and produces understandable, scalable code for the people who will work on it after you; clean code built on well-defined functions is easy to hand over. Returning to sentiment analysis: we can do this scoring work via the textblob library. TextBlob classifies each tweet based on its content and the positive and negative words it contains, and after this classification it gives you a ready-made column for analysis. You can analyze this however you wish and try different studies; for example, you can chart it, observe the number of repeated words, compare these values, and integrate the output into a background image you have edited yourself.
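TextBlob's sentiment.polarity returns a score in [-1, 1] for each text. The toy scorer below is only a stand-in that mimics the idea with a tiny made-up lexicon, so you can see how a polarity score turns into a positive / negative / neutral column:

```python
# Hypothetical mini-lexicon; TextBlob uses a much larger one internally.
lexicon = {"great": 0.8, "good": 0.5, "bad": -0.6, "terrible": -1.0}

def polarity(text):
    # Average the word scores, defaulting unknown words to 0.0
    scores = [lexicon.get(w, 0.0) for w in text.lower().split()]
    return sum(scores) / len(scores) if scores else 0.0

def label(text):
    # Turn the numeric score into the kind of column TextBlob gives you
    p = polarity(text)
    return "positive" if p > 0 else "negative" if p < 0 else "neutral"

print(label("samsung camera is great"))  # positive
print(label("terrible battery life"))    # negative
```

With TextBlob itself, the equivalent score is simply TextBlob(text).sentiment.polarity.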


Contour Extraction Using OpenCV

In image processing, a contour is a closed curve that connects all the continuous points sharing the same color or intensity. Contours represent the shapes of objects in an image, and contour detection is a useful technique for shape analysis and for object detection and recognition. When we do edge detection, we find the points where the intensity changes significantly and mark those pixels. Contours, however, are abstract collections of points and segments that correspond to the shapes of the objects in the image. As a result, we can process contours in our program: counting them, using them to categorize the shapes of objects, cropping objects from an image (image segmentation), and much more.
Computer vision
🖇 Contour detection is not the only algorithm for image segmentation; there are many others, such as state-of-the-art semantic segmentation, the Hough transform, and K-Means segmentation. For good accuracy, the pipeline we will follow to successfully detect contours in an image is:

  • Convert the image to a binary image; it is common practice for the input to be binary (the result of thresholding or edge detection).
  • Find the contours using the OpenCV findContours() function.
  • Draw these contours and show the image on the screen.

Applying a Contour in Photoshop

Adobe PS
Before moving on to coding the contour extraction, I will first walk through a Photoshop example to build some intuition.
Contour extraction from a layer
As a first step, to access the window you see above, right-click on any layer in Photoshop’s Layers window and select blending options.
🔎 If the Layers window is not active, you must activate the layers by clicking the Window menu from the top menu. The hotkey for Windows is F7.
You can select the contour color and opacity you want by choosing the Contour tab from the left section. Then the background is removed so that the contour extraction in the image stands out.
People silhouette
After removing the background in the image you see here, I made a selection in yellow tones so that the object in the foreground is visible. Once the background is removed, the outer contour is applied to the image and the detection is more successful.
People contour

Contour Extraction with Python OpenCV

As a platform I use Google Colab with the Python programming language; to anyone who codes in Python regularly, it is a platform I can definitely recommend! Come on, let’s start coding step by step.
📌 Let’s import the libraries required for our project as follows.
Loading the required libraries
📌 As the second step, we get our image with the imread function.
Reading the image
📌 As you know, in the world of image processing our images arrive in BGR format. The BGR image must first be converted to RGB, and then converted to the grayscale color space.
Converting Color Spaces
📌 As the fourth step, a binary threshold operation is performed by specifying a threshold value in the image. To access the mathematics that the binary threshold function runs in the background, you must examine the following formula 👇
Binary threshold: dst(x, y) = maxval if src(x, y) > thresh, otherwise 0
If you have noticed, the image to which the threshold is applied is the gray-level image, not the RGB one; please pay attention at this stage. When you follow these steps in order, you will get the following output.
📌 In this step, we will use the findContours function to find the contours in the image. The contours will be determined on the binary image that we obtained from thresholding.
Find Contours
📌 We will use drawContours function to draw these contours visually.
Draw Contours
🖇 The cv2.CHAIN_APPROX_SIMPLE parameter removes all redundant points and compresses the contour, saving memory.
📌 Now we can print our contour extracted image on the screen.
Imshow contours

In this way, we completed our contour extraction. Hope to set off into the world of other projects in another article… Stay healthy ✨


  1. Contour Tracing, https://www.imageprocessingplace.com/downloads_V3/root_downloads/tutorials/contour_tracing_Abeer_George_Ghuneim/intro.html.
  2. Edge Contour Extraction, https://www.cse.unr.edu/~bebis/CS791E/Notes/EdgeContourExtraction.pdf, Pitas, section 5.5, Sonka et al., sections 5.2.4-5.2.5.
  3. Retrieved from https://www.thepythoncode.com/article/contour-detection-opencv-python.
  4. Retrieved from https://www.subpng.com/png-m7emk6/.
  5. OpenCV, https://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html.
  6. OpenCV, https://docs.opencv.org/master/d4/d73/tutorial_py_contours_begin.html.

Buy Sell Algorithm with Moving Average

Data scientists working in finance usually handle calculations such as portfolio optimization, trading transactions, and portfolio return. This work is very important in the stock market, because every decision made affects the amount of profit. Therefore, each step should be chosen carefully before being integrated into the system. The stock exchange is a mechanism that interacts with the whole world, and companies that adapt quickly to its changes can make a difference and stay sustainable. In doing so, they can reveal their difference, change their marketing style, and remain active in the market. Companies that offer consultancy as a brand and have a high potential to adapt to change get their names mentioned frequently. Machine learning and deep learning algorithms work deep in the background of robo-advisors, and every company offering them has a solid infrastructure of its own. Even if the coding part is a bit complicated, the moment we reach the conclusion we will see the whole result with our own eyes. Based on this, I put the output as an example in the picture below.

The picture you see above represents the final state of the project. For those who want to reproduce it, I will leave the whole code in the Resources section so you can easily adapt it to your own systems. As a note: I did this coding using Aselsan, a company listed on the Turkish stock exchange, and nothing you see here is investment advice. After specifying these, we add the libraries as you see below and read our data set. Then we call the describe() function to get statistical output about the data. The variable we are dealing with is ‘close’, which represents the closing price. I made my own analysis by taking the data from January 1, 2017 onward. You can run your analysis over any period you want, but the historical data for the stock must be available through the necessary libraries for you to use it as I do; otherwise, your code will not run and will raise errors. You can examine the details of the code at the GitHub link I provide, and if you have any questions you can contact me at my e-mail address.
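A hedged sketch of the setup described above, with synthetic closing prices standing in for the Aselsan history (a real run would pull the data from a market-data library instead):

```python
import numpy as np
import pandas as pd

# Synthetic daily closes standing in for the real price history
dates = pd.date_range("2016-06-01", periods=400, freq="D")
rng = np.random.default_rng(0)
df = pd.DataFrame({"close": 20 + rng.normal(0, 1, 400).cumsum()}, index=dates)

df = df.loc["2017-01-01":]      # keep only the analysis window
print(df["close"].describe())   # count, mean, std, min, quartiles, max
```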

There are many different technical analysis methods in the stock market; here we will focus entirely on the moving average, one of the most common. Thanks to this method, many people follow trading-style transactions in the market instantly. There are other technical analysis methods we could add to these, such as RSI, Bollinger Bands, MACD, and Fibonacci retracement levels. The lines you see at the bottom are the moving averages, computed with the rolling window function, that will generate buy and sell transactions for us. The blue line in the image represents the actual prices; the intersection points of the other lines come back to us as buy and sell signals, and we can measure the return ourselves. The function I named buy_sell takes the necessary actions and does the preparation for us; once it runs, all of the transactions are complete. After the necessary assignments are made, the visual representation of the function is as I showed at the beginning; the matplotlib library will help you with this.
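The buy_sell idea can be sketched as a moving-average crossover: buy when the short average crosses above the long one, sell when it crosses back below. The function below is a simplified, hypothetical version of the article's helper, computed with pandas' rolling window:

```python
import pandas as pd

def buy_sell(prices, short_window=3, long_window=5):
    # Hypothetical simplified version of the article's buy_sell helper:
    # buy when the short moving average crosses above the long one,
    # sell when it crosses back below.
    df = pd.DataFrame({"close": prices})
    df["sma_short"] = df["close"].rolling(window=short_window).mean()
    df["sma_long"] = df["close"].rolling(window=long_window).mean()
    signals, holding = [], False
    for s, l in zip(df["sma_short"], df["sma_long"]):
        if pd.isna(s) or pd.isna(l):
            signals.append("hold")  # not enough history yet
        elif s > l and not holding:
            signals.append("buy")
            holding = True
        elif s < l and holding:
            signals.append("sell")
            holding = False
        else:
            signals.append("hold")
    df["signal"] = signals
    return df

prices = [1, 2, 3, 4, 5, 6, 7, 6, 5, 4, 3, 2]  # synthetic closes
result = buy_sell(prices)
print(result[["close", "sma_short", "sma_long", "signal"]])
```

On this rising-then-falling series, the crossover produces one buy as the uptrend establishes itself and one sell as the averages cross back on the way down.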

The rest of this article will come as I improve myself; I am thinking of writing this as a series. I aim to explain the effects of the trading and technical analysis methods used in the stock exchange and help everyone considering a career in this field. There are many start-ups in the stock market that trade through robo-advisors, and large companies across the sector keep discovering new things in the market by investing in many small, development-minded companies working in this field. As is well known, the stock market can be affected by even the smallest things, and profit and loss situations change quickly; large companies that anticipate such volatile environments preserve their profit margins by taking firm steps in the market. The analysis systems behind these contain many technical analysis methods, and the scalability of such processes also determines how reliably the system will respond. I will continue to evaluate share prices and work through technical analysis methods in the Python programming language; you can follow along and give feedback.




A Phenomenon: Time loop

We have all heard about time loops, parallel worlds, and so on, but do we know exactly what they are? Basically, a time loop is a phenomenon in which some period of time is repeated and re-experienced by somebody: things happen over and over again. We also see it in many movies, and those movies and series are my favorites so far. There is a series on Netflix called Russian Doll, and it is about a time loop. In short, it is based on Nadia's personal journey as she goes through repeated moments: she dies repeatedly, always returning to the same moment, and of course she tries to figure out what is happening to her. There are sub-topics such as addiction and other issues, but my main topic is the time loop, and I would like to go deep.

What is the Time loop?

Mostly we call it “déjà vu”, and I am sure most of us have experienced it at least once. These areas are really deep and confusing if we are not familiar with the terms or the moments. Honestly, I am not familiar with it either, but it grabs my interest: just because I cannot fully understand it does not mean it does not happen to someone else. So I did some quick research, and according to it there are two different kinds of time loops. The first is called the “causal paradox” and the other the “ontological paradox”, also known as the bootstrap paradox.
The causal paradox exists when events in the future trigger a sequence of events in the past, whereas the ontological paradox involves an object or person that creates the loop; notably, their origin cannot be determined.
The time loop happens without ending, and our memories are reset once we restart the repeated moments. The thing is, everything looks normal, and we live normally, until the point where we experience the same things again.

Time loop: Why, How, When…?

Of course, we would all like to travel to a different time, whether in the past or the future. Sometimes we may want to change events that could happen in the future; it is an inevitable wish. As an example of how we desire to learn about the future, in my culture we have fortune-telling. Despite the real world, sometimes for fun and sometimes in earnest, fortune-telling becomes one of the important moments. I know this example is not exactly about the time loop, but it is about time travel, and these topics are related to each other.
On the other hand, there is something here related to human behavior, because we would like to know more about mysteries, about how our brains function, and about how we react to things.



I usually ask myself what role AI will play in the time loop area, if AI is going to affect every single field. It is not easy to get the correct answer, but I will try to understand.

Artificial Intelligence and the Time Loop

As I tried to mention above, there are several different concepts here: how to get there, whether past or future, and, after getting there, how things can be changed by events and/or people. What AI can do is predict the future that is likely to happen. Even with basic concepts, we try to predict the possible future and take action based on it. In today's world, from house to factory, we use many tools developed with machine learning algorithms and want them to make our lives easier and more valuable. So, I am asking: can we use AI for time travel? Why not? AI uses and monitors data from different sources and builds machine learning models on it to have an impact on the future. How exciting!
However, a big challenge is that we might change the possibilities based on our understanding of future events. And it is not only about changing possibilities; the challenge is also to manage multiple pathways with multiple data. It looks really complicated to me.



Designing an Artificial Human Eye: EC-Eye

The eye is one of the organs with the most complex biological structure. Thanks to this structure, it provides a very wide viewing angle, processes both far and near in detail, and adapts incredibly well to the environment and light conditions. In addition to its neural networks, layers, and millions of photoreceptors, it also has a spherical shape, making it very difficult to copy.
Despite all these difficulties, scientists from the Hong Kong University of Science and Technology continued their work in this area and developed a bionic eye with a light-sensitive superconducting perovskite material. This bionic eye, which they call the “Electrochemical Eye” (EC-Eye), promises not just to copy the human eye but to go well beyond it.

The cameras we have now can seem like a replica of vision, but at small sizes their resolution and viewing angle do not match the characteristics of the human eye; instead, solutions such as microchips are used. And, as we said before, designing them on a spherical surface is not easy. So how does the EC-Eye do it?
We can say that the electrochemical eye consists of two parts. On the front there is a lens that functions as a human iris, along with an aluminum shell filled with an electrically charged liquid on the same side. This liquid corresponds to the gel-like biological fluid that fills the inside of the eye, known as the vitreous in the human eye structure.

On the back of the EC-Eye, wires send the generated electrical activity to a computer for processing, and a silicone eye socket holds it in contact. Finally, and most importantly, there are the sensitive nanowires that perform the detection. These nanowires are so sensitive that their response speed is faster than the photoreceptors in a normal human eye; transmission takes place by passing the electrical reactions occurring on the nanowires to the computer. Of course, even if it sounds like an easy process when described this way, it is an application that pushes the limits of technology. It is even more intriguing that all these processes work with a capability that could leave the human eye in the background.
To see how it works, an interface was created between the EC-Eye and a computer, and some letters were shown to the EC-Eye through it. The detection proved that a higher-resolution image was obtained. In the next stages it will face much more complex tests, and studies will continue for its development.

It is very clear that this bionic eye needs to pass many more tests before it can replace the human eye; in particular, although it looks like a small device, connecting the nanowires to a computer for processing is still a problem. With a large number of nanowires, installing and using them practically seems very difficult, so these bionic eyes may take a little longer to commercialize and reach everyone. But for now, it gives great hope for the future.
If it reaches a point where it can do things the human eye cannot perceive, it can be said to have a lot of potential. We used to say of what we see in science fiction movies, “these only happen in movies anyway”; now it seems that recording, seeing far, night vision, and viewing frequencies at other wavelengths are not that inaccessible anymore. Just as these can be done very comfortably even with phone cameras, it is not hard to predict that high-end technological applications including artificial intelligence will do them easily.
Artificial Intelligence has already begun to be a part of us in every field.



Credit Scoring / Credit Analysis

There are certain start-ups that companies invest in or help with financial development. As a result of certain analyses, the investor company determines which company to invest in and acquire. Taking development into account, the contribution to be provided, in direct proportion to the return, is calculated in advance. A similar analysis method has been developed by data scientists in banks for their customers: in short, credit scoring is carried out between the bank and the customer during the loan application. The purpose is basically to evaluate, with tests, whether people actually pay, or will be able to pay, the loan they receive. In machine learning this is called credit scoring. After the transactions, positive or negative feedback is given to the loan applicant. Many metrics feed this evaluation; examples include the applicant's wage, career history, previous loan status, and many other features examined in more detail. As a result of the evaluation, the resulting 1 and 0 values give us a positive or negative meaning.

Banks do extensive research on this subject, as on most subjects, and after analyzing the data they have, they feed it into machine learning processes. As a result, the final model is prepared by performing a few optimization operations in the validation steps. These models are then accelerated and applied to almost every loan applicant, with 0 and 1 assigned as outcome values. An output of 0 tells us not to give credit to this person, and conversely an output of 1 says “you can give credit to this person”, performing the customer segmentation for us. After this last step is completed by the data science staff, what remains is to return this information to the required departments and finalize individuals' applications according to the results. The importance of the analysis is critical for a bank, because the smallest mistakes can cause large losses; for this reason, every credit scoring transaction should return to the bank positively.

Credit scoring is of great importance for every bank: money leaving the safe, and the borrower failing to fulfill their responsibility, can cause major financial problems. Therefore, the data science team working in the background should be expert in this field and evaluate the measures under every circumstance. In addition, people's personal information should be analyzed thoroughly so that a sound response to their application can be made. Another critical issue in credit scoring is the data preprocessing steps and the analysis that follows: after arranging the preprocessing and handling the necessary variables, the job is mostly about getting the data ready. The data science team should do the feature engineering themselves and analyze the effects of the variables and their correlations correctly. After these processes, a sound result becomes almost inevitable; minimizing the margin of error is all about preparing the data almost perfectly and evaluating the necessary parameters.

At the very beginning of the credit scoring process, the machine learning algorithm must be created, and the variables should be checked once more before modeling, because the transactions depend entirely on the variables: categorical and numerical variables affect the model differently. The model must also be set up carefully. In the Python programming language specifically, candidate parameters can be tested with the GridSearchCV() method and the most suitable ones integrated into the model, so the credit scoring can proceed more successfully. This raises the level of service provided, so that people's expectations are met with a service personalized to them. People with a high level of satisfaction strengthen their bond with the bank and feel more confident psychologically; one of the most basic human needs is to feel belonging or connection somewhere. Providing this can increase the bank's customer potential: if you keep a good bond with your customers, they advertise for you, and their loyalty increases. One of the things that directly affects this is undoubtedly credit scoring.
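A small sketch of the GridSearchCV() step on synthetic data; the features and labels here are made up, standing in for the bank's engineered variables (income, career history, past loans, and so on):

```python
# Hypothetical sketch: tuning a credit-scoring classifier with GridSearchCV.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic applicants: 6 made-up features, label 1 = repays, 0 = defaults
X, y = make_classification(n_samples=300, n_features=6, random_state=42)

# Candidate regularization strengths to test via cross-validation
param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}
search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)    # the C value that scored best
print(search.predict(X[:3]))  # 1 = "credit can be given", 0 = "do not give credit"
```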


HTC (Hybrid Task Cascade) Network Architecture

As a result of my recent literature research on image segmentation, I have come across very different segmentation architectures. Before this article, I told you about the Mask R-CNN architecture. Just like Mask R-CNN, the Cascade Mask R-CNN structure has appeared in the literature. I will try to enlighten you about this with the information I have collected from the original academic papers and the research I have read.

Cascade is a classic yet powerful architecture that improves performance in a variety of tasks. However, how to introduce the cascade into instance segmentation remains an open question. A simple combination of Cascade R-CNN and Mask R-CNN provides only limited gains. In exploring a more effective approach, it was found that the key to a successful instance segmentation cascade is to take full advantage of the reciprocal relationship between detection and segmentation.
Hybrid Task Cascade for Instance Segmentation proposes a new Hybrid Task Cascade (HTC) framework that differs in two important respects:

  1. Instead of cascading the two tasks separately, it interleaves them for joint multi-stage processing.
  2. It adopts a fully convolutional branch to provide spatial context, which helps distinguish the hard foreground from the cluttered background.

The basic idea is to leverage spatial context to improve the flow of information and further improve accuracy by incorporating the cascade and multitasking at each stage. In particular, a cascaded pipeline is designed for progressive refinement; at each stage, bounding box regression and mask prediction are combined in a multi-task manner.
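The interleaved flow can be sketched in plain Python. This is only a conceptual outline, not the paper's implementation; refine_boxes and predict_mask are placeholder stand-ins for the real network heads, and the key point is that each stage refines the boxes, predicts a mask, and forwards its mask features to the next stage:

```python
def refine_boxes(features, boxes, stage):
    # Placeholder: a real box head would regress offsets from pooled features
    return [(x + stage, y + stage, w, h) for (x, y, w, h) in boxes]

def predict_mask(features, boxes, prev_mask_feat, stage):
    # Placeholder: a real mask head would fuse prev_mask_feat with pooled features
    mask_feat = (prev_mask_feat or 0) + 1
    return [f"mask@stage{stage}" for _ in boxes], mask_feat

def htc_forward(features, proposals, num_stages=3):
    boxes, mask_feat, outputs = proposals, None, []
    for t in range(num_stages):
        boxes = refine_boxes(features, boxes, t)                       # box branch
        masks, mask_feat = predict_mask(features, boxes, mask_feat, t)  # mask branch
        outputs.append({"boxes": boxes, "masks": masks})
    return outputs

stages = htc_forward(features=None, proposals=[(0, 0, 10, 10)])
print(len(stages))  # 3 progressive refinement stages
```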

Innovations ✨

The main innovation of the HTC architecture is a cascading framework that connects object detection and segmentation, providing better performance. The information flow is also changed through direct branches between the previous and subsequent mask heads. The architecture also includes a fully convolutional branch that improves spatial context, which can improve performance by better distinguishing instances from cluttered backgrounds.

Hybrid Task Cascade: An Instance Segmentation Framework
  • It interleaves bounding box regression and mask prediction instead of executing them in parallel.
  • It creates a direct path to strengthen the information flow between mask branches by feeding the mask features of the previous stage into the current one.
  • It aims to gain more contextual information by adding an additional semantic segmentation branch and fusing it with the box and mask branches.
  • In general, these changes in the framework architecture effectively improve the information flow, not only between stages but also between tasks.

Table 1 compares the HTC network's instance segmentation approach with state-of-the-art methods on the COCO dataset. The Cascade Mask R-CNN described in Chapter 1 is taken as a strong baseline for the method used in the article. Compared to Mask R-CNN, the naive cascade baseline brings gains of 3.5% and 1.2% in box AP and mask AP, respectively; it is noted that this baseline is already higher than PANet, the previous state-of-the-art method for instance segmentation. HTC makes consistent improvements on different backbones, proving its effectiveness: it provides gains of 1.5%, 1.3%, and 1.1% for ResNet-50, ResNet-101, and ResNeXt-101, respectively.
📌 Note: Cascade Mask R-CNN extends Cascade R-CNN to instance segmentation by adding a mask header to the cascade [3].


The image below shows the results of this segmentation on the COCO dataset.
In the results section of the article, the advantages of the HTC model over other models are discussed.

The authors propose the Hybrid Task Cascade (HTC), a new cascade architecture for instance segmentation. It interleaves box and mask branches for joint multi-stage processing and uses a semantic segmentation branch to provide spatial context. The framework progressively refines mask predictions and combines complementary features at each stage. Without bells and whistles, the proposed method achieves a 1.5% improvement over a strong Cascade Mask R-CNN baseline on the MS COCO dataset; notably, the overall system reaches 48.6 mask AP on the test-challenge dataset and 49.0 mask AP on test-dev.

📌 Finally, in order to understand the changes of variables in the table, I leave you a table of MS COCO metrics as a note.


  1. Kai Chen, Jiangmiao Pang, Jiaqi Wang, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jianping Shi, Wanli Ouyang, Chen Change Loy, Hybrid Task Cascade for Instance Segmentation, April 2019.
  2. Zhaowei Cai and Nuno Vasconcelos, Cascade R-CNN: Delving into High Quality Object Detection, In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  3. https://paperswithcode.com/method/cascade-mask-r-cnn.
  4. https://cocodataset.org/#home

The Movie “Her”: An Approach to Human-Intelligent Machine Interactions

Seven years ago, under the direction of Spike Jonze, a not-so-classic movie was released, although it contains a classic romance at its core: Her. As in all romantic movies, the girl saves the man from a depressive period, then leaves him in solitude just when their relationship reaches full speed. Despite this classic script, it was the most talked-about film of its year and is still being analyzed.

During Covid: Technological Innovations

What are the most important technological innovations we have learned about during Covid? We are currently experiencing the second wave of Covid, and it seems it will be tougher than the first. Many countries have decided to apply new lockdown rules. On the other hand, there are good signs that vaccines may be developed in the near future.
The advent of technology had transformed our lives, from artificial intelligence to nanotechnology, even before Covid. However, while going through these global challenges, we have all come to understand how key technological innovations are for us.
Drawing on the Top 10 Emerging Technologies of 2020 report, I would like to provide some insights into healthcare.

1. Virtual Patients

I really like this virtual patient idea because it can create opportunities such as faster and safer clinical trials. The idea requires high-quality images of organs, and mathematical and statistical models are another necessary part of understanding how these organs function. Why do I like this technology? Because, as I mentioned before, Covid is a huge challenge for humanity, and doctors in particular are suffering more than other professions because they have to be in contact with patients somehow; it is a huge risk! Instead of physical contact, virtual organs or body systems could be helpful in the initial assessment of treatments. That makes things safer, too.

2. Digital Medicine

What is digital medicine, and what improvements does it bring? These medicines contain sensors that send data to apps to detect issues. First of all, this is a good opportunity for people who have limited access to health services. Again, my example will be related to Covid, because people from rural areas unfortunately lack health services. As we all know, early diagnosis is really significant, and with this kind of medicine, the app could help track therapies.

3. Microneedles

The big advantage of microneedles is painless injections, for sure; at least for me. I have been scared of needles since childhood, and I cannot stand the sight of blood, or even a single drop. Unfortunately, it unsettles me, and I have a memory to match: my first experience donating blood. I went to the hospital in the morning without having breakfast, and after donating, as soon as I got up, I fainted; the only thing I remember when I opened my eyes is the nurses.
To return to the topic: these microneedles penetrate the skin without pain and can even be mixed into creams, which means we can use them easily without any trouble. Microneedles could also allow us to do our blood tests at home, analyze them, and then send the results to the hospital. That is another good part, because we would not need to wait in long lines for blood testing, and the process will probably not be expensive. Thus, care will become more accessible, from urban areas to rural areas.


Imagine a world where we have developed many things to help humanity and eradicated diseases and other problems. How amazing!




Data Labeling Tools For Machine Learning

Tagging data is a crucial step in any supervised machine learning project. Tagging is the process of defining regions in an image and describing which object belongs to each region. By labeling the data, we prepare it for ML projects and make it more readable. In most of the projects I have worked on, I created subsets of the dataset, did the tagging myself, and ran my training on the tagged images. In this article, I will introduce the data labeling tools I encounter most often, sharing my experience in this field with you.
Labeling Image


Colabeler is a program that allows labeling for localization and classification problems. It is a labeling program frequently used in the fields of computer vision, natural language processing, artificial intelligence, and voice recognition [2]. The visual example below shows the labeling of an image; the classes you see here correspond to the car class. In the tool section on the left side, you can outline objects with curves, polygons, or rectangles. This choice may vary depending on the boundaries of the data you want to tag.
Labeling Colabeler
Then, in the section labeled 'Label Info', you type the names of the objects you want to tag. After you finish all the tags, you save them by confirming with the blue tick button, and you can move to the next image with Next. Note that every image we save is listed to the left of this blue button, so it is also possible to check the images you have already recorded. One of the things I like most about Colabeler is that it can also run artificial intelligence algorithms.
📌 I performed tagging via Colabeler in a project I worked on before, and it is a software with an incredibly easy interface.
📽 The video on Colabeler's official website describes how to do the labeling.
Localization of Bone Age
Above, I gave a sample image from the project I worked on earlier. Because this project is a localization project in the machine learning sense, labeling was done accordingly. Localization means isolating the subregion of the image where a feature is located; for this project, it simply means drawing rectangles around the bone regions in the image [3]. In this way, I labeled the classes likely to be extracted from the bone images as ROI zones. I then exported these tags via the XML/JSON export provided by Colabeler. Many machine learning practitioners will like this part; it worked very well for me!
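To make "isolating the subregion" concrete, here is a minimal sketch (a toy image and hypothetical ROI values, not the project's real data) showing how a labeled rectangle is cut out of an image grid:

```python
# Hypothetical example: a localization label is just a rectangle (x, y, w, h);
# "isolating the subregion" means cutting that rectangle out of the image grid.
# The image here is a toy 2D list standing in for a grayscale bone X-ray.
H, W = 256, 256
image = [[255 if 50 <= r < 120 and 40 <= c < 200 else 0 for c in range(W)]
         for r in range(H)]           # pretend the bright block is a bone region

x, y, w, h = 40, 50, 160, 70          # labeled ROI: top-left corner plus size
roi = [row[x:x + w] for row in image[y:y + h]]   # the isolated bone subregion

print(len(roi), len(roi[0]))          # 70 160
print(min(min(row) for row in roi))   # 255 -> every ROI pixel is bright
```

A detector trained for localization learns to predict exactly such `(x, y, w, h)` rectangles.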

♻️ Export Of Labels

Exporting XML Output
At this stage, I saved the output as JSON because I will be working with JSON data; you can save your data in other formats. In the image below, you can see the locations of the classes I created in the JSON output. In this way, your data is prepared in labeled form.
JSON Format
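As an illustration of consuming such an export, the snippet below parses a made-up JSON label file; the exact schema Colabeler produces may differ, so treat the field names here as assumptions:

```python
import json

# Illustrative JSON mimicking a labeling-tool export: one object class with a
# bounding rectangle. The field names are assumptions, not Colabeler's schema.
exported = '''
{
  "path": "bone_001.png",
  "objects": [
    {"class": "roi", "x": 40, "y": 50, "width": 160, "height": 70}
  ]
}
'''

data = json.loads(exported)
for obj in data["objects"]:
    # Convert (x, y, w, h) to the (x_min, y_min, x_max, y_max) corner format
    box = (obj["x"], obj["y"], obj["x"] + obj["width"], obj["y"] + obj["height"])
    print(obj["class"], box)   # roi (40, 50, 200, 120)
```

Most training pipelines expect one of these two box formats, so a small conversion loop like this is usually the first step after export.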


ImageJ is a Java-based image processing program developed at the National Institutes of Health and the Laboratory for Optical and Computational Instrumentation (LOCI, University of Wisconsin). ImageJ’s plugin architecture and built-in development environment have made it a popular platform for teaching image processing [3].

Above, you can see a screenshot of ImageJ taken from Wikipedia. As can be seen, this software is not overly complex; it is a tool used in many areas regardless of profession. 📝 The documentation provided as a user's guide on the official ImageJ website describes how to perform labeling and how to use the software tool.
📌 I have also used the Fiji-ImageJ software for images I had to tag in a machine learning project. I think its interface looks much older than the other labeling programs I have worked with. Of course, you can perform the operations you need from a software point of view, but for me, software also needs to satisfy the user from a design point of view.
The image above is a screenshot I took on my personal computer during the project I was working on. In order to use the data while working on the Matlab platform, it first had to be updated, so after updating, I continued identifying the images. Below is the package that is installed with the Matlab plugin for ImageJ users.
ImageJ Matlab

📍Matlab Image Labeler

The Image Labeler app provides an easy way to mark rectangular region-of-interest (ROI) labels, polyline ROI labels, pixel ROI labels, and scene labels in a video or image sequence. For example, using this app you can [4]:

  • Manually label an image frame from an image collection
  • Automatically label across image frames using an automation algorithm
  • Export the labeled ground truth data

Image Toolbox Matlab
In the image above, we can perform segmentation using the Matlab Image Labeler software; more precisely, it is possible to label by dividing the data into ROI regions. In addition, you can use existing algorithms, as well as test and run your own algorithm on the data.
Selection ROI
In this image, taken from Matlab's official documentation, the label names of the bounding regions you select are entered in the left menu, and a label color is assigned according to the object's class. It is also quite possible for us to create our labels this way. In the next article, I will talk about other labeling tools. Hope to see you ✨

  1. https://medium.com/@abelling/comparison-of-different-labelling-tools-for-computer-vision-f3afd678da76.
  2. https://www.colabeler.com.
  3. From Wikipedia, The Free Encyclopedia, ImageJ, https://en.wikipedia.org/wiki/ImageJ.
  4. MathWorks, Get Started with the Image Labeler, https://www.mathworks.com/help/vision/ug/get-started-with-the-image-labeler.html.
  5. https://chatbotslife.com/how-to-organize-data-labeling-for-machine-learning-approaches-and-tools-5ede48aeb8e8.
  6. https://blog.cloudera.com/learning-with-limited-labeled-data/.

SSD (Single Shot MultiBox Detector) Model from A to Z

In this article, we will learn the SSD MultiBox object detection technique from A to Z, with all its details. Because the SSD model works much faster than the R-CNN or even Faster R-CNN architectures, it is often preferred for object detection.
This model, introduced by Liu and his colleagues in 2016, detects objects in a single forward pass [2]. Single Shot MultiBox Detector means exactly that: single-shot, multi-box detection (SSD), with fast and easy modeling. And what is meant by "one shot"? As you can tell from the name, it gives us the ability to detect objects in one pass.

I have collated many documents and videos to give you accurate information, and now I will walk you through the whole alphabet of the job. In R-CNN networks, regions likely to contain objects are identified first, and then those regions are classified with fully connected layers. While object detection is performed in two separate stages with the R-CNN network, SSD performs these operations in a single step.
As a first step, let's examine the SSD architecture closely. If the image looks a little small, you can zoom in to see the contents and dimensions of the convolution layers.

An image is given as input to the architecture as usual. The image is then passed through convolutional neural networks whose dimensions, as you may have noticed, differ. In this way, different feature maps are extracted by the model, which is desirable. A set of bounding rectangles is obtained by applying a 3×3 convolutional filter to the feature maps.
Because these rectangles sit on activation maps of different resolutions, they are very good at detecting objects of different sizes. In the first image, a 300×300 image was sent as input; notice that the image dimensions shrink as you progress through the network, down to 1 in the last convolutional layer. During training, the ground-truth boxes are compared with the predicted boxes, and a 50% overlap criterion is used to pick the best among these predictions: a prediction with more than 50% overlap is selected. You can think of it as analogous to the threshold in logistic regression.
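The "50%" selection criterion mentioned above is an overlap score between two boxes, commonly computed as intersection over union (IoU). A minimal sketch with made-up box coordinates:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)        # overlap area (0 if none)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)             # overlap / union

ground_truth = (10, 10, 110, 110)   # toy coordinates
prediction   = (30, 10, 130, 110)
score = iou(ground_truth, prediction)
print(round(score, 3))   # 0.667 -> above 0.5, so this prediction is kept
```

Predictions scoring above the 0.5 threshold against a ground-truth box are treated as matches; the rest are discarded.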
For example, the feature-map dimensions are 10×10×512 in Conv8_2. When the 3×3 convolutional operation is applied with 4 bounding boxes per cell, each bounding box has (classes + 4) outputs. Thus, in Conv8_2, the output is 10×10×4×(C+4). Assume there are 10 object classes for detection plus an additional background class; the output is then 10×10×4×(11+4) = 6,000, and the number of bounding boxes reaches 10×10×4 = 400. The network turns the image it receives as input into a sizeable tensor output. In a video I researched, I heard a descriptive comment about this region selection:

Instead of performing different operations for each region, we perform all forecasts on the CNN network at once.
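The Conv8_2 arithmetic above can be reproduced in a few lines (the layer name and counts follow the example in the text):

```python
# Reproducing the Conv8_2 arithmetic from the text: a 10x10 feature map with
# 4 default boxes per cell and 10 object classes + 1 background class.
feature_map_size = 10     # Conv8_2 spatial size (10x10)
boxes_per_cell = 4
num_classes = 11          # 10 object classes + background

num_boxes = feature_map_size * feature_map_size * boxes_per_cell
outputs = num_boxes * (num_classes + 4)   # class scores + 4 box offsets per box

print(num_boxes)   # 400 default boxes
print(outputs)     # 6000 output values
```

Repeating this for every feature map in the network (38×38, 19×19, 10×10, 5×5, 3×3, 1×1 in SSD300) is what yields the model's full set of default boxes.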

While the image on the left above is the original, 4 bounding boxes are estimated in each cell of the area on the right [3]. Within the grid structures seen here are the bounding rectangles, which attempt to estimate the actual region in which the object is located.
Among the documents I researched, I was struck by the example above, and I really wanted to share it with you because it is an excellent resource for understanding the SSD architecture. Notice that the model assigns a probability to each object that might be in the image; for example, it gave the car a 50% score, but the classes with probabilities above 50% win out. So, in this image, a person and a bicycle are more likely than a car. I hope you have understood the SSD structure; in my next article, I will show you how to code the SSD model. Hope you stay healthy ✨


  1. Face and Object Recognition with computer vision | R-CNN, SSD, GANs, Udemy.
  2. Dive into Deep Learning, 13.7. Single Shot Multibox Detection (SSD), https://d2l.ai/chapter_computer-vision/ssd.html.
  3. https://jonathan-hui.medium.com/ssd-object-detection-single-shot-multibox-detector-for-real-time-processing-9bd8deac0e06.
  4. https://towardsdatascience.com/review-ssd-single-shot-detector-object-detection-851a94607d11.
  5. https://towardsdatascience.com/understanding-ssd-multibox-real-time-object-detection-in-deep-learning-495ef744fab.
  6. Single-Shot Bidirectional Pyramid Networks for High-Quality Object Detection, https://www.groundai.com/project/single-shot-bidirectional-pyramid-networks-for-high-quality-object-detection/1.

Data Mining and Being a Data Miner

Hello everyone. As a statistician, I can say that most statisticians dream of becoming a data miner, but the road to follow is long and bumpy. According to Google Trends data, "data mining" and "data miner" searches in Google Web Search are very popular around the world. So what makes data mining so attractive?
Currently, the sheer volume of data, and the difficulty of extracting the required information from it, have increased the need for data mining.
Data mining is an automatic or semi-automated technical process used to analyze and interpret large amounts of dispersed information and turn it into knowledge. It is frequently used in marketing, retail, banking, healthcare, and e-commerce.
Stages of Data Mining

We can basically consider the data mining process as follows:

  1. Obtain and secure the data stack
  2. Smoothing
  3. Dummy variable creation and optimization
  4. Data Reduction
  5. Normalization
  6. Applying Related Data Mining Algorithms
  7. Training and testing in the relevant software languages (R, Python, Java)
  8. Evaluation and presentation of results
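As a small illustration of the normalization step (step 5), here is a minimal min-max scaling sketch in Python; the column values are made up:

```python
# Minimal sketch of the normalization step: min-max scaling of a numeric
# column into the [0, 1] range, a common transformation before mining.
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

ages = [18, 30, 45, 60]               # toy column
scaled = min_max_normalize(ages)
print(scaled)                         # [0.0, 0.2857..., 0.6428..., 1.0]
```

Scaling features to a common range like this keeps algorithms that rely on distances (clustering, k-NN, and so on) from being dominated by the column with the largest raw values.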

Becoming a data miner requires programming, mathematics, statistics, machine learning, and certain personal skills. Let's examine these requirements in a little more detail together.

1) Programming:

  • Algorithmic approach
  • Programming logic
  • Big data technologies (Spark, Hive, Impala, DBS, etc.)
  • SQL (databases), NoSQL, Bash scripting, R, Python, Scala, SPSS, SAS, MATLAB, etc.
  • Cloud technologies (AWS, Google Cloud, Microsoft Azure, IBM, etc.)

2) Statistical Learning (SL):

  • Tidy data process and data preprocessing
  • Regression Models
  • Linearity and causality
  • Inference Statistics
  • Multivariate Statistical Methods

3) Machine Learning (ML):

  • Classification
  • Clustering
  • Association Rule Learning
  • Text Mining, NLP
  • Reinforcement Learning
  • Deep Learning

4) Personal Skills:

  • Being Able To Ask The Right Questions
  • Analytical Perspective
  • Problem Solving Ability
  • Storytelling and presentation ability

As a result, we talked briefly about the definition, stages, and requirements of data mining in this blog. Hope to see you in our next blog.

Mobile Application Development

FaCiPa Series – 3

After FaCiPa Series 2, I wanted to write about the mobile application side, the last article in this series, because I received very nice feedback on my earlier articles. It is an amazing feeling to be able to talk to you today about the project I have been developing for a year! In this article, we will look at FaCiPa's mobile interface.
Since the project already included the Python programming language and API-side code, different options such as Kivy or Ionic were available as a platform. You can find my other articles on Ionic at the links below; they briefly cover what Ionic is, how an Ionic project is structured, and its use with Semantic UI. In addition, since the code is written in TypeScript, you can also review the article I wrote about that. Below are the most essential points about the Ionic Framework:

👉 This open source library is built on Cordova.
👉 It is a library that allows even Web developers to develop mobile applications.

Mobile Application Design
First, we start by creating a new project on the Ionic Framework, the mobile platform for FaCiPa.

Then we create a page with the ionic generate command.
Generate Page
Ionic Page
In the application, there is a home page, registration page, and analysis page to start with, so 4 pages should be created together with the home page in total.
All files


The framework that will be used in FaCiPa’ s mobile interface has been selected as Ionic. More use of mobile devices than computers, the increase of mobile applications, the diversity of mobile devices, and the presence of different operating systems have led software developers to find different mobile solutions. In addition to native application development, it has become an important need to create an application structure that can also be run on any platform over time, and hybrid applications that can be developed with the support of languages such as HTML5 and JavaScript have emerged [1].
The Ionic Framework is usually the first choice of programmers with JS, Angular.js, or Angular 2 experience. Open source, Ionic is home to thousands of mobile apps and has thousands of followers and supporters. The Ionic Framework, which by its own account has "first-class" documentation, is a convenient and easy-to-learn library.
🏗 The Ionic Framework is built on Cordova. Cordova provides access to the hardware and system resources of the mobile device, so you can run your app on mobile operating systems such as Android, iOS, or Windows Phone, and you can even publish it as a mobile-compatible website in a convenient way. Applications are developed with Ionic using HTML, JavaScript, and Angular; knowing JS is essentially enough. Visual Studio Code was used as the development platform in this project. The application's layouts are files with the HTML extension, such as src\pages\home\home.html, written in HTML5, and the necessary CSS styling is done in files with the SCSS extension, such as src\pages\home\home.scss [1].
📷 The photos that will be used in the project are determined to be taken from the user in the first step and then reduced to 1 photo in order to not tire the user and reduce the processing load of the machine. The user receives the app from Mobile stores and instantly takes photos and sends this photo to the server for processing.
🛡 The backend section of the application lives in files with the TS extension, such as src\pages\home\home.ts, written in TypeScript.
Upload Camera


A warning appears above the content of the application and must be manually dismissed by the user before they can continue interacting with the application. In the application, an ion-alert warning prompts the user to take the correct photo.
🔎 Title: Title of the alert box
🔎 Subtitle: Warning text
🔎 Buttons: The buttons used to dismiss the alert; if the OK button is clicked, the photoOne() method runs and the photo is taken.
Ionic Alert


The Ionic camera plug-in is needed for taking photos or videos on mobile devices. It requires the Cordova plugin cordova-plugin-camera and takes options such as:
🔎 quality: Image quality
🔎 destinationType: Destination type
🔎 encodingType: Encoding type
🔎 mediaType: Media type (Picture)
Install Camera
Install Cam


Wireframe Templates
As for the content, you can design your application's pages completely freely. The wireframe drawing above was designed when the project first started; we then created the project's final designs. As a footnote, I should say that, unfortunately, our product does not support English, so I have to share it in Turkish.
The visuals I have given above are the analysis page of the project and the feedback on the analysis result. Thus, we have come to the end of FaCiPa. Thank you for following it patiently. Stay healthy ✨


  1. https://devnot.com/2016/hibrit-uygulama-catisi-ionic-i-taniyalim/
  2. R. L. Delinger, J. M. VanSwearingen, J. F. Cohn, K. L. Schmidt, “Puckering and Blowing Facial Expressions in People With Facial Movement Disorders,” J. Phys Ther, vol. 88, pp. 909-915, August 2008.
  3. The Spreading of Internet and Mobile Technologies: Opportunities and Limitations, Hasan GULER, Yunis SAHİNKAYASİ, Hamide SAHİNKAYASİ. Journal of Social Sciences Volume 7 Issue 14 December 2017, 03.10.2017-27.10.2017.

Article Review – Tooth Detection with Mask RCNN

In this article, I will review the paper 'Tooth Detection and Segmentation with Mask R-CNN' [1], published at the Second International Conference on Artificial Intelligence in Information and Communication. The paper describes the implementation of automatic tooth detection and segmentation with Mask R-CNN on dental images. The aim of the paper is to identify only the teeth and divide them into segments.

It should be noted that Mask RCNN has a good segmentation effect even in complex and crowded dental structures ⚠️

If you work in this area like me, the first things to pay attention to when reviewing a paper are the keywords. The keywords of this paper are Mask R-CNN, Object Detection, Semantic Segmentation, and Tooth, so we continue our research around these keywords.

One-stage networks such as the Fully Convolutional Network (FCN), You Only Look Once (YOLO), and the Single Shot MultiBox Detector (SSD) are 100-1000 times faster than region-proposal algorithms [3], [4], [5].

Technical Approaches

❇️ Data Collection

Since there is no public dataset, 100 images were collected from a hospital and used to train the network. Of these, 80 images form the training data, 10 the validation data, and the remaining 10 the test data. Images at different distances and lighting conditions, of people of different sexes and ages, were selected for the project (a challenge for the network).

❇️ Image Annotation

Labelme is an image annotation tool developed by MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) [6]. It provides tools for tracing object edges. When annotating the images, polygons are drawn around each tooth; an example can be seen in Figure 1. The tool saves the corner coordinates for an image in a JSON file. Since annotation is a manual operation, small errors occur, but they do not affect the overall evaluation of the model. Since there is only one category, the tooth regions are labeled as 1, and the rest, considered background, is labeled as 0.
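As a sketch of what such an annotation looks like in use, the snippet below parses a minimal Labelme-style JSON (real Labelme files contain more fields than this assumed subset) and rasterizes the polygon into a 0/1 mask by ray casting:

```python
import json

# Illustrative Labelme-style export: polygon corner coordinates in JSON.
# Real Labelme files carry more fields; this is a minimal assumed subset.
annotation = json.loads('''
{"shapes": [{"label": "tooth", "points": [[2, 1], [7, 1], [7, 5], [2, 5]]}]}
''')

def point_in_polygon(x, y, pts):
    """Even-odd ray casting: is point (x, y) inside the polygon?"""
    inside = False
    n = len(pts)
    for i in range(n):
        (x1, y1), (x2, y2) = pts[i], pts[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

# Rasterize into a binary mask on a toy 10x8 grid: tooth = 1, background = 0.
pts = annotation["shapes"][0]["points"]
mask = [[1 if point_in_polygon(c + 0.5, r + 0.5, pts) else 0 for c in range(10)]
        for r in range(8)]
print(sum(map(sum, mask)))   # number of tooth pixels
```

This 0/1 mask is exactly the form the segmentation branch of the network is trained against.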

❇️ Deep Network Architecture Details


Mask RCNN Workflow

                                                           Mask R-CNN Architecture

You can see the Mask R-CNN architecture in the figure above. Mask R-CNN consists of several modules. An extension of Faster R-CNN, Mask R-CNN adds a convolutional network branch to perform the instance segmentation task. The backbone is a standard convolutional neural network that serves as a feature extractor; in principle, it can be any network that extracts image features, such as ResNet-50 or ResNet-101. In addition, a feature pyramid network (FPN) is used on top of the backbone to perform multi-scale detection. FPN improves the standard feature-extraction pyramid by adding a second pyramid that takes the top-level features from the first pyramid and passes them down to the lower layers. The deeper ResNet-101 + FPN backbone was used in this project.
Step by Step Detection

                                                                   Mask R-CNN Working Structure

🔍 Details Of Architecture

RoIAlign, a method replacing RoI pooling, was proposed; RoIAlign can maintain approximate spatial positions. RPN regression results are usually fractional and would normally need rounding to integers, since the boxes obtained by the RPN must be pooled to the same size before entering the fully connected layer. RoIAlign eliminates this quantization and preserves the fractional coordinates, which makes detection and segmentation more accurate. The total loss combines the classification, RoI regression, and segmentation losses; the classification and RoI regression losses are no different from those of normal object detection networks. The mask branch is a convolutional neural network that takes an RoI as input and outputs a small mask of size 28×28.
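The combined loss described above can be sketched as a toy computation; the loss values and the tiny 2×2 mask below are purely illustrative stand-ins (the real mask is 28×28, and the real losses come from the network's outputs):

```python
import math

# Hedged sketch of the multi-task loss: total = classification loss +
# RoI regression loss + per-pixel mask loss. All numbers here are toy values.
def bce(p, y):
    """Binary cross-entropy for one pixel (prediction p, label y)."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def mask_loss(pred_mask, true_mask):
    """Average BCE over the mask (a tiny 2x2 stand-in for the real 28x28)."""
    pixels = [(p, y) for pr, tr in zip(pred_mask, true_mask)
              for p, y in zip(pr, tr)]
    return sum(bce(p, y) for p, y in pixels) / len(pixels)

cls_loss = 0.10   # toy classification loss
box_loss = 0.05   # toy RoI regression loss
m_loss = mask_loss([[0.9, 0.2], [0.8, 0.1]], [[1, 0], [1, 0]])
total = cls_loss + box_loss + m_loss
print(round(m_loss, 3), round(total, 3))   # 0.164 0.314
```

The per-pixel binary cross-entropy is what lets the mask branch be trained independently per class, one binary mask at a time.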

✅ Results

The data was trained for 50 epochs in total: the first 20 epochs to start with, and the remaining 30 epochs to fine-tune all layers. The total loss value is 0.3093, consisting of the bounding-box loss, class loss, mask loss, and RPN loss. The total loss curve is shown in Figure 4, and the final test results show (a) the best result and (b) the worst.

                                                                         Total loss curve

The Pixel Accuracy (PA) method is the simplest and most effective method of evaluating the results. The best result was 97.4% PA and the worst 90.1%. Since there were only a small number of prosthetic samples among the dental samples in the project, the accuracy of prosthesis detection was low.
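Pixel accuracy as described can be computed in a few lines; the tiny masks below are toy stand-ins for real prediction and ground-truth masks:

```python
# Pixel Accuracy (PA): the fraction of pixels whose predicted label matches
# the ground truth. The masks here are toy 0/1 grids.
def pixel_accuracy(pred, truth):
    total = correct = 0
    for pr, tr in zip(pred, truth):
        for p, t in zip(pr, tr):
            total += 1
            correct += (p == t)
    return correct / total

truth = [[1, 1, 0], [0, 1, 0]]
pred  = [[1, 0, 0], [0, 1, 0]]
pa = pixel_accuracy(pred, truth)
print(pa)   # 5 of 6 pixels correct -> 0.8333...
```

PA is simple but can look flattering when the background dominates, which is one reason the paper's worst case (90.1%) is still a high number.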
Final Test Results

              Final test results. (a) best individual result example, (b) worst individual result example 


  1. Guohua Zhu, Zewen Piao, Suk Chan Kim, Department of Electronics Engineering, Pusan National University, Tooth Detection and Segmentation with Mask R-CNN, ICAIIC 2020.
  2. https://github.com/fcsiba/DentAid.
  3. Shelhamer, E., Long, J., and Darrell, T., Fully Convolutional Networks for Semantic Segmentation, IEEE Trans. Pattern Anal. Mach. Intell. 39, 4 (Apr. 2017), 640–651.
  4. Redmon, J., and Farhadi, A., YOLOv3: An Incremental Improvement, arXiv (2018).
  5. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., and Berg, A. C., SSD: Single Shot MultiBox Detector.
  6. B. Russell, A. Torralba, and W. T. Freeman, Labelme, The Open Annotation Tool MIT, Computer Science, and Artificial Intelligence Laboratory [Online]. Available: https://labelme.csail.mit.ed.
  7. Zhiming Cui, Changjian Li, Wenping Wang, The University of Hong Kong, ToothNet: Automatic Tooth Instance Segmentation and Identification from Cone Beam CT Images.

The Future of Environmental Sustainability: AI and Greenhouse Emissions

Climate change continues to be one of the most important issues humankind faces today. One of its main drivers is the greenhouse effect: simply put, the increase in the earth's temperature caused by emissions of gases such as carbon dioxide, nitrous oxide, methane, and ozone, collectively known as greenhouse gases. The emission of these gases, and the resulting increase in the greenhouse effect, is significantly correlated with human activities. However, environmental sustainability studies suggest that AI-based activities could make a difference in these processes. PwC forecasts that the use of AI for environmental sustainability can lower worldwide greenhouse emissions by 4% by the end of 2030. That percentage corresponds to 2.4 Gt, the combined annual greenhouse gas emissions of Australia, Canada, and Japan. The anticipation is that such quantities will lead many institutions to develop their sustainability models with the help of AI.
Considering AI's ability to process data more efficiently than ever before, the report suggests this ability can be used to analyze data linked to environmental issues. Such analyses would assist environmental sustainability by identifying patterns and making forecasts. As a current practice, IBM has developed AI systems that process extensive weather-model data in order to make weather forecasts more reliable; the company states that the system increased forecast accuracy by 30%. In terms of sustainability, this accuracy may allow large institutions to manage their energy use and minimize greenhouse emissions.
Moreover, AI can help reduce greenhouse emissions through its applications in transportation. Autonomous vehicles can have a promising impact on this reduction, since they use less fossil fuel thanks to fuel-efficient systems. Furthermore, if AI-based systems are used to calculate efficient routes for car-sharing services, autonomous vehicles may change passenger habits: with the benefit of efficient route calculation, many passengers would prefer car-sharing services or public transportation over individual vehicle use. Autonomous vehicles would also reduce traffic, since the vehicles would be informed of each other, and this reduction in traffic, combined with communicative systems, may help vehicles use energy more efficiently. Such shifts in transportation could have a significant effect on environmental sustainability, since the sector has a remarkable emission ratio.
On the sectoral side, AI can also be used to manage companies' emissions. The electric services company Xcel Energy's practice with AI is one instance of such management. Previously, after producing electricity by burning coal, the Xcel factory released greenhouse gases such as nitrous oxide into the atmosphere, like many other factories in the sector. To limit these emissions, the company upgraded its Texas factory's smokestacks with artificial neural networks. The upgrade helped the factory develop a more efficient system and, most significantly, limit its emissions. The International Energy Agency forecasts that such systems may reduce nitrous oxide emissions by 20%, and today hundreds of factories besides Xcel Energy are using these systems for their environmental sustainability.
However, alongside these significant developments, AI systems have carbon footprints of their own, since training on data requires a considerable amount of energy. Some sources even suggest that these quantities of energy could outweigh AI's benefits for energy efficiency. On the other hand, it is also suggested that as AI's own energy efficiency improves, this could become a minor factor compared with AI's contributions to energy efficiency and to limiting greenhouse emissions.
These intersections of AI with social and scientific issues are likely to be crucial points for society's future. According to the research "The Role of Artificial Intelligence in Achieving Sustainable Development Goals", AI can help resolve 95% of the issues directly related to the environmental SDGs: climate action, life below water, and life on land. Considering this effect, AI can be the tool used to take a step forward in environmental sustainability.
Vinuesa, R., Azizpour, H., Leite, I. et al. The role of artificial intelligence in achieving the Sustainable Development Goals. Nat Commun 11, 233 (2020). https://doi.org/10.1038/s41467-019-14108-y

Importance of Data Quality and Data Processing

Data is the subject the whole world talks about, and it is now seen as the most important asset of the new order. Data is processed in many different ways to extract information from it, and on its own it has the power to change the direction of the world. Today, companies are worth only as much as the knowledge they hold. Ready-made data may be inferior to data you have collected yourself, whose details you know. Collecting your own data, however, means spending a lot of time on data processing and extending the project timeline, which can be a big disadvantage. It is entirely up to you to measure the quality of incoming data and arrange it into a certain order. Even if the data quality is really poor, it can still be integrated into the system once the data processing steps have been applied carefully and the final preparations are made.
The biggest mistake beginner-level developers make is working only with data that comes already cleaned. To level up, create a dataset yourself and analyze it. This builds self-confidence, and the solutions you find when facing difficulties are what lead to real progress, developing the ‘problem-solving ability’ that many big companies value. Dealing with data you collect yourself prepares you for real-life problems. People who want to pursue a career in data science should solve a real problem by collecting their own data and shaping it until they can finally reach the product stage. Thanks to such project phases, they can continue their careers with a high level of experience in processing information, developing products, and solving real-life problems.

The most important asset for a data scientist is data. Without data, no solution can be found, and those who hold the data will hold the power in the new era. Data is what will give direction to the future world order. Data flows live through every stage of life, and processing it and drawing logical inferences from it is an extremely important skill for our century. Understanding the information obtained from data and finding solutions to the problems that arise will also make it easier to find a job in the future. What matters most in an artificial intelligence effort is the project itself and the existence of quality data for that project. Data quality largely determines how long the project takes and how far it can go. Few things matter as much as data quality, because poor-quality data leads to many problems.
Another issue, just as important as data quality, is performing the data processing steps correctly. Whether you call it data science, machine learning, deep learning, or artificial intelligence, all of these need data to become a product. The quality of that data, and how well the data processing steps are carried out, directly affect the progress of such projects. The most critical stage for anything that will be presented as a product is passing through the data processing steps. Once you have overcome these vital points, you can move quickly through the rest of the work, applying mathematical, engineering, or statistical knowledge to turn it into a product. This accelerates your project and keeps you motivated, letting you move to a different level with the momentum you gain.
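As a rough illustration of what these data processing steps can look like in practice, here is a minimal Python sketch. The dataset, field names, and cleaning rules below are invented for demonstration and are not tied to any particular project.

```python
# A minimal, illustrative sketch of common data processing steps
# (deduplication, imputing missing values, type normalization) on a
# toy list-of-dicts dataset. Records and field names are made up.

raw_records = [
    {"name": "Alice", "age": "34", "city": "Istanbul"},
    {"name": "Bob",   "age": None, "city": "Ankara"},
    {"name": "Alice", "age": "34", "city": "Istanbul"},  # exact duplicate
    {"name": "Carol", "age": "29", "city": None},        # missing city
]

def clean(records):
    seen, cleaned = set(), []
    ages = [int(r["age"]) for r in records if r["age"] is not None]
    default_age = round(sum(ages) / len(ages))   # impute with the mean age
    for r in records:
        key = (r["name"], r["age"], r["city"])
        if key in seen:                          # drop exact duplicates
            continue
        seen.add(key)
        cleaned.append({
            "name": r["name"],
            "age": int(r["age"]) if r["age"] is not None else default_age,
            "city": r["city"] or "unknown",      # fill missing category
        })
    return cleaned

print(clean(raw_records))
```

Real projects usually do this with a library such as pandas, but the steps themselves (deduplicate, impute, normalize types) are the same ones referred to above.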

The conditions of the world will keep changing throughout the century we live in, and data is what drives this change. Data, called the new oil, is literally the petroleum of the new century. Processing it and obtaining logical results is everyone’s main goal. People working in this field must have strong numerical knowledge and experience in data processing, and should benefit the teams they work with by actively using their problem-solving skills from the moment they receive the data. Data processing is a technique with the power to change success scores in machine learning and deep learning; used correctly, it can reach the maximum achievable score levels.
What has enabled smart systems to develop and fully enter our lives is quality data. If you want to produce quality work on a project, you must first collect your data on solid, quality foundations. If that is not possible, you can still prepare your project by performing a very good data processing phase before the project begins. This saves time, gives you confidence in the quality of the data you will work with, and minimizes the data problems you will need to solve during the project steps. Data quality is the lifeblood of projects. People who have had the chance to work with good data know exactly what I mean. Remember: good data means a good project, a good workflow, and good results.

I hope you liked this article. If you did, please let me know in the comments.

Gig Economy: Uber, Netflix, Amazon, Airbnb

Is the Gig Economy really important? How have Amazon, Netflix, Uber, and Airbnb become so successful? What is the main purpose of Uber and Netflix? Many of us want to be rich, successful, and respected people, and some of us want to build billion-dollar business models to become more powerful. Have you ever considered that it might not be as difficult as we think?

  • Amazon: Changed the way people shop, taking over a huge part of the retail industry.
  • Airbnb: Changed the way people book hotel stays.
  • Netflix: Reinvented the video and movie rental industry.
  • Uber: Changed the way people get around, e.g. through ride-sharing.

Let’s get back to our question – how? It is undeniable that they invested in IT infrastructure, but technology alone could not account for their success. Many of us have heard the term “user experience,” and it has become really important because those unicorns focused on their users’ expectations, needs, and desires. I think that is the basic explanation of user experience.
Let’s look at more detailed explanations of the two of them:


What is the main purpose of Uber as a Gig Economy company?

There are many Gig Economy examples, such as Amazon, Netflix, Uber, and Airbnb. In this paragraph, however, I am going to focus on Uber.
According to Wikipedia, Uber is an American company that offers vehicles for hire, food delivery (Uber Eats), package delivery, couriers, freight transportation, and, through a partnership with Lime, electric bicycle and motorized scooter rental. Uber can be considered an early adopter of the Gig Economy, and it is obvious that its service changed the way people travel. To give you a statistic: according to Forbes, Uber provided 6.9 billion rides in 2019 – an astonishing figure!
Looking at things from a different perspective really matters, because as we all know, taxis are bound by regulations. For example, if I want to go somewhere, I have to call a cab or search for one, and only if I am lucky will I find one. Or, if I am on the street and need a cab, the taxi driver first has to be satisfied that my trip is “long” enough to be worth the fare; if it is not, they may not even let me get in. Uber, on the other hand, provides a passenger-friendly riding experience: it is easy to order a cab with a simple app, and pricing is flexible whether the trip is long or short, because the whole experience is user-centric.


What is the main purpose of Netflix as a Gig Economy company?

There are many Gig Economy examples, such as Amazon, Netflix, Uber, and Airbnb. In this paragraph, however, I am going to focus on Netflix.
The main purpose of Netflix is to focus on its users and what they want. According to the latest numbers, Netflix earned $5.77 billion in revenue in Q1 of 2020. The story behind it is similar to Uber’s: as I mentioned, they focused on what customers want and desire, and on their frustrations. What do customers want? To watch many kinds of movies without high prices and limitations. So the basic answer to what Netflix does is: a subscription-based rental system. Moreover, Netflix has different pricing structures around the world. Here’s what you get with each plan:

  • Basic: The Basic streaming plan costs $8.99 per month and has the most limited features. You can only use it on a single screen at a time (which is fine if you’re the only user of the account), and resolution is limited to standard definition (SD), which is equivalent to old, pre-HD television.
  • Standard: The Standard streaming plan costs $13.99 per month and allows you to watch on two screens at a time in high definition (HD).
  • Premium: The Premium streaming plan costs $17.99 per month. For that, you can watch on four screens at once (ideal for a large family), and you can watch video programming in HD or 4K Ultra HD, if available.
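The plan tiers above can be captured in a small data structure. The helper below is purely illustrative – it is not a Netflix API, and it uses only the prices and limits quoted in the text.

```python
# Illustrative only: plan data mirrors the prices quoted above;
# the selection helper is a hypothetical example, not Netflix's API.

PLANS = {
    "Basic":    {"price": 8.99,  "screens": 1, "max_resolution": "SD"},
    "Standard": {"price": 13.99, "screens": 2, "max_resolution": "HD"},
    "Premium":  {"price": 17.99, "screens": 4, "max_resolution": "4K"},
}

def cheapest_plan(min_screens):
    """Return the cheapest plan supporting at least `min_screens` screens."""
    eligible = {n: p for n, p in PLANS.items() if p["screens"] >= min_screens}
    return min(eligible, key=lambda n: eligible[n]["price"])

print(cheapest_plan(2))  # a two-screen household -> "Standard"
```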

If you would like to read about how advertisement systems work, here is a related article: https://globalaihub.com/ai-for-advertisements/


All these businesses run their services with the help of big data and data analysis. They identify their customers’ expectations and analyze them, but of course, to make data meaningful they need huge datasets. In addition, they need to keep expanding their businesses, which means attracting new customers is the next step. Therefore, they need to improve their business models and revenues with different solutions, and all of these solutions are related to each other.



Relationship Between Human Brain and CNN

Hello, we all know that the image classification process of convolutional neural networks is influenced by the working principle of neural networks in the human brain. Let’s examine the relationship between them.
Convolutional Neural Networks (CNNs) are a deep learning architecture widely used in computer vision tasks such as image classification, localization, and object detection. CNNs apply machine learning to the analysis of visual imagery: they pick out distinctive features from pictures to distinguish the given figure. The same process happens in our brains unconsciously.
Biological Inspiration of Convolutional Neural Network (CNN)
Mapping of the human visual system and CNN
Research in Sensory Processing (1960s and 1970s)
The pioneering work here was done by Dr. Hubel and Dr. Wiesel in the area of sensory processing. They inserted a micro-electrode into the primary visual cortex of a partially anesthetized cat, so that it could not move, and showed the cat images of lines at different angles.
Through the micro-electrode, they found that some neurons fired very rapidly when shown lines at specific angles, while other neurons responded best to lines at different angles. Some neurons responded differently to light and dark patterns, while others responded to motion in a certain direction.
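A loose computational analogy to this finding can be sketched with convolution kernels: a kernel tuned to one orientation responds strongly to an edge at that orientation and weakly to others, much like an orientation-selective neuron. The kernels and image patches below are toy values, not a model of real neurons.

```python
# A rough analogy to Hubel and Wiesel's finding: a small convolution
# kernel "fires" (produces a large response) when the image patch
# contains an edge at the orientation the kernel is tuned to.
# Kernels and patches are toy examples, not a neuroscience model.

def correlate(patch, kernel):
    """Sum of element-wise products of a 3x3 patch and a 3x3 kernel."""
    return sum(patch[i][j] * kernel[i][j] for i in range(3) for j in range(3))

# Kernel tuned to vertical edges (dark-to-bright, left to right).
vertical_kernel = [[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]]

# One patch containing a vertical edge, one containing a horizontal edge.
vertical_edge   = [[0, 0, 1],
                   [0, 0, 1],
                   [0, 0, 1]]
horizontal_edge = [[0, 0, 0],
                   [0, 0, 0],
                   [1, 1, 1]]

print(correlate(vertical_edge, vertical_kernel))    # strong response: 3
print(correlate(horizontal_edge, vertical_kernel))  # weak response: 0
```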

Where is the visual cortex located in the human brain?


Figure 1: Functional areas of the human brain

The visual cortex is the part of the cerebral cortex of the brain that processes visual information. Visual nerves from the eyes run straight to the primary visual cortex. Based on its structural and functional characteristics, it is divided into different areas, as shown in the following picture:

Figure 2: Different areas of visual cortex

Visual Cortex: Functions
Visual information is passed from one cortical area to another, and each cortical area is more specialized than the last. Neurons in each area respond only to specific stimuli.
Some of them with their functions are as follows:

  1. Primary visual cortex or V1: It preserves the spatial location of visual information, i.e. the orientation of edges and lines. It is the first area to receive signals from what the eyes have captured.
  2. Secondary visual cortex or V2: It receives strong feed-forward connections from V1 and sends strong connections to V3, V4, and V5. It also sends strong feedback to V1. Its function is to collect the spatial frequency, size, color, and shape of the object.
  3. Third visual cortex or V3: It receives inputs from V2. It helps process global motion and gives a complete visual representation.
  4. V4: It also receives inputs from V2. It recognizes simple geometric shapes and contributes to object recognition. It is not tuned for complex objects such as human faces.
  5. Middle temporal (MT) visual area or V5: It detects the speed and direction of moving visual objects, i.e. motion perception. It also detects the motion of complex visual features. It receives direct connections from V1.
  6. Dorsomedial (DM) area or V6: It detects wide-field and self-motion stimulation. Like V5, it receives direct connections from V1. It is extremely sharply selective for the orientation of visual contours.

Structure of Convolutional Neural Networks
A CNN processes the image through a series of layers.
Layers of CNN
Input Layer: The raw data is passed into the network.
Convolutional Layer: Detects features.
Non-Linearity Layer: Introduces non-linearity into the system.
Pooling (Down-sampling) Layer: Reduces the number of weights and helps control overfitting.
Flattening Layer: Prepares the data for the classical neural network.
Fully Connected Layer: A standard neural network used for classification.
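As a minimal sketch, the layer pipeline above can be strung together in plain Python. The shapes, image values, and dense-layer weights below are toy values chosen for illustration; real CNNs use a framework such as TensorFlow or PyTorch.

```python
# A minimal pure-Python sketch of the layer pipeline listed above:
# convolution -> non-linearity -> pooling -> flattening -> fully connected.

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def relu(fmap):
    """Non-linearity layer: clamp negative activations to zero."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool(fmap, size=2):
    """Down-sampling layer: keep the maximum in each size x size window."""
    return [[max(fmap[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

def flatten(fmap):
    """Flattening layer: turn the 2D feature map into a 1D vector."""
    return [v for row in fmap for v in row]

def dense(vector, weights, bias):
    """Fully connected layer: one weighted sum per output neuron."""
    return [sum(v * w for v, w in zip(vector, ws)) + b
            for ws, b in zip(weights, bias)]

# Toy 5x5 input image and a 2x2 edge-like kernel (made-up values).
image  = [[0, 0, 1, 1, 0],
          [0, 1, 1, 0, 0],
          [1, 1, 0, 0, 1],
          [1, 0, 0, 1, 1],
          [0, 0, 1, 1, 0]]
kernel = [[1, -1],
          [-1, 1]]

fmap   = relu(conv2d(image, kernel))   # 4x4 feature map
pooled = max_pool(fmap)                # 2x2 after pooling
vec    = flatten(pooled)               # length-4 vector
scores = dense(vec, weights=[[0.5, -0.5, 0.25, 1.0]], bias=[0.1])
print(scores)                          # one classification score
```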

Figure 3: ConvNet Diagram

As a result, CNNs imitate the work of the visual cortex in our brain. If we look at a picture of a plane, we can identify it by separating out features such as its two wings, engines, and windows. A CNN does the same thing, but it first detects low-level properties such as curves and edges, and builds them up into more abstract concepts. Don’t you think that’s great? Hope to see you in our next blog.

  1. https://scipy.github.io/old-wiki/pages/Cookbook/Matplotlib/HintonDiagrams
  2. https://medium.com/@tuncerergin/convolutional-neural-network-convnet-yada-cnn-nedir-nasil-calisir-97a0f5d34cad
  3. https://medium.com/@gopalkalpande/biological-inspiration-of-convolutional-neural-network-cnn-9419668898ac
  4. Kuş, Zeki. “Mikrokanonikal Optimizasyon Algoritması ile Konvolüsyonel Sinir Ağlarında Hiper Parametrelerin Optimize Edilmesi.” Fatih Sultan Mehmet University, 2019 (pp. 16–21)