Preprocessing with Image Processing Techniques

For many of the projects we aim to build, the images must first go through preprocessing steps. In this article, we will walk through the preprocessing stages with Gaussian and mean filters, threshold filters, and the Canny edge detector. As a platform, you can work in Colab like me! This way, you can run your projects both very quickly and without taking up space on your machine.

Image processing techniques are methods for obtaining various kinds of information by analyzing existing images. Depending on where they are used, they rely on different mathematical expressions, from the simplest algorithms to the most complex ones.

To use image processing methods, we will process real-world data captured with a camera. The operations are: reading the data with OpenCV, inspecting the pixel values across the color channels, eliminating the noise contained in the image, and applying the existing filters.

It is best to prepare the dataset we will use for our projects in advance. Images are read from the files in your dataset using the imread() method. Let's get to work by loading the libraries necessary for this process.

📍NOTE: After performing a preprocessing step, I usually use the imshow() function to check the image. But let's not forget that since OpenCV's imshow function does not work in Colab, we must import cv2_imshow from google.colab.patches instead!

Below is code that allows you to obtain the image and examine its pixel values.
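This is a minimal sketch rather than the original screenshot, assuming a Colab environment and a hypothetical dataset path:

```python
import cv2
from google.colab.patches import cv2_imshow  # imshow replacement for Colab

# Hypothetical path; point it at an image from your own dataset folder.
PATH = "dataset/rose.jpg"

image = cv2.imread(PATH)  # OpenCV reads the image in BGR channel order
print(image.shape)        # (height, width, number of channels)
print(image)              # the raw pixel values
```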

After loading our libraries, we create a class and assign the path of the dataset folder we will use to the PATH variable, because our image processing techniques will run on the images contained in this folder.

As you can see in the imread function, I pass 0 next to the file name, since the images I use are gray-level images. This way, even if an image contains color, it will be loaded in grayscale. We can then print our first image to the screen using the imshow method.

When we pass 0 next to our image, we obtain the image you see below. It is possible to print it to the screen with the cv2_imshow(image) command. After this step, we can move on to the image processing steps.
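As a sketch, the grayscale read described above looks like this (PATH is the hypothetical variable defined earlier):

```python
# Passing 0 (cv2.IMREAD_GRAYSCALE) loads the image as a single-channel
# gray-level image, even if the source file contains color.
gray_image = cv2.imread(PATH, 0)
cv2_imshow(gray_image)
```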

Image Processing Steps

If you want to see the pixel values of your RGB or grayscale image, you can print them to the screen using the print command. This way, you can also check which channel you are working on. Since I use an RGB rose image here, the pixel values show the numbers below.

📌 RGB Color Channel: RGB is the most commonly used color space. In this color model, each color is represented by its red, green, and blue spectral components. The model is based on a Cartesian coordinate system.

The code required to convert our image to RGB and examine the pixel values is given as follows.
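A minimal sketch of that conversion, continuing with the image variable loaded earlier:

```python
# OpenCV stores images in BGR order; convert to RGB to examine the
# channels in the familiar R, G, B order.
rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
print(rgb_image[0, 0])  # R, G, B values of the top-left pixel
```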

📌 HSV Color Channel: The name of the HSV space comes from the initials of hue, saturation, and value. While RGB defines a color as a mixture of primary colors, HSV defines it with hue, saturation, and brightness values. Saturation determines the vividness of the color, while value refers to its brightness.

📌 CIE-LAB Color Channel: The CIE 1931 color spaces were the first defined quantitative links between distributions of wavelengths in the electromagnetic visible spectrum and physiologically perceived colors in human color vision. The mathematical relationships that define these color spaces are essential tools for color management, which is important when dealing with color inks, illuminated displays, and recording devices such as digital cameras.

LAB Color Channel
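As a sketch, both conversions use the same cvtColor call; only the flag changes:

```python
hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)  # BGR -> HSV
lab_image = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)  # BGR -> CIE-LAB
cv2_imshow(hsv_image)
cv2_imshow(lab_image)
```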

Elimination of Noise In The Image

Because images are real-world data coming from a camera, they often contain Gaussian noise caused by current fluctuations on the camera's sensor. Noisy images lead to worse performance in edge detection, which we use for object detection, so it is important to reduce this noise.

🖇 There are many methods in the literature for reducing noise. Today we will discuss two of them with you.

  1. Adaptive Threshold Gaussian
  2. Adaptive Threshold Mean

➡️ Adaptive Threshold Gaussian

Below is the Python code in which the Gaussian method is applied to produce noise-reduced versions of our images. It is possible to reach the desired result by manipulating the parameters of the adaptiveThreshold method here.
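A minimal sketch of that call; blockSize 11 and C 2 are example values worth tuning for your own images:

```python
# Each pixel is compared against a Gaussian-weighted average of its
# 11x11 neighborhood minus the constant C = 2.
gauss = cv2.adaptiveThreshold(gray_image, 255,
                              cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY, 11, 2)
cv2_imshow(gauss)
```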

When the Gaussian and mean threshold filters, which are frequently used in the literature, are applied to these images, we can see that they reach almost the same level of blur (smoothing). These methods are the adaptive Gaussian filter and the adaptive mean filter, respectively.

➡️ Adaptive Threshold Mean

Adaptive thresholding is a method in which the threshold value is calculated for smaller regions, so different regions end up with different threshold values.
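A sketch of the mean variant with the same example parameters, so the two methods are directly comparable:

```python
# Here the local threshold is the plain average of the 11x11
# neighborhood minus the same constant C = 2.
mean = cv2.adaptiveThreshold(gray_image, 255,
                             cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY, 11, 2)
cv2_imshow(mean)
```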

You will appreciate that there are only very minor nuances between the Gaussian and mean filters. You can continue with whichever filter you prefer by changing the parameter values yourself.

➡️ Edge Detection

Edge detection is an important technique used to detect features. The Canny edge detection algorithm, one of the most common edge detection techniques, has been run on the images.

Canny Code
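The original code screenshot is not reproduced here; a minimal sketch, with 100 and 200 as example hysteresis thresholds:

```python
# Gradients above 200 are strong edges; pixels between 100 and 200 are
# kept only if they connect to a strong edge.
edges = cv2.Canny(gray_image, 100, 200)
cv2_imshow(edges)
```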

Canny Image

REFERENCES

  1. Medium, Cerebro, Using Artificial Intelligence in Image Processing Techniques, April 2018.
  2. Wikipedia, The Free Encyclopedia, ‘Image Processing’, September 2020.
  3. C. Gonzalez, Rafael, E. Woods, Richard, Digital Image Processing, Palme Publishing, (Ankara, 2014).
  4. S. Singh and B. Singh. “Effects of noise on various edge detection techniques”. In: 2015 2nd International Conference on Computing for Sustainable Global Development (INDIACom). Mar. 2015, pp. 827–830.
  5. https://www.tutorialspoint.com/opencv/opencv_adaptive_threshold.htm.
  6. Ajay Kumar Boyat and Brijendra Kumar Joshi. “A Review Paper: Noise Models in Digital Image Processing”. In: CoRR abs/1505.03489 (2015). arXiv: 1505.03489. url: http://arxiv.org/abs/1505.03489.
  7. T. Silva da Silva et al. “User-Centered Design and Agile Methods: A Systematic Review”. In: 2011 Agile Conference. Aug. 2011, pp. 77–86. doi: 10.1109/AGILE.2011.24.

 

Contour Extraction Using OpenCV

In image processing, a contour is a closed curve that joins all the continuous points sharing a color or intensity. Contours represent the shapes of objects in an image. Contour detection is a useful technique for shape analysis and for object detection and recognition. When we do edge detection, we find points where the image intensity changes significantly, and we turn those pixels on. Contours, however, are abstract collections of points and segments that correspond to the shapes of the objects in the image. As a result, we can manipulate contours in our programs: counting the number of contours, using them to categorize the shapes of objects, cropping objects from an image (image segmentation), and much more.

Computer Vision

🖇 Contour detection is not the only algorithm for image segmentation; there are many others, such as state-of-the-art semantic segmentation, the Hough transform, and K-Means segmentation. For good accuracy, this is the pipeline we will follow to successfully detect contours in an image:

  • Convert the image to a binary image; it is common practice for the input image to be binary (the result of thresholding or edge detection).
  • Find the contours using the OpenCV findContours() function.
  • Draw these contours and show the image on the screen.

Applying Contours in Photoshop

Adobe PS
Before moving on to coding the contour extraction, I will first give you a Photoshop example so that you get a better grasp of the idea.
Contour extraction from a layer
As a first step, to access the window you see above, right-click on any layer in Photoshop's Layers window and select Blending Options.
🔎 If the Layers window is not active, you must activate it from the Window menu at the top. The hotkey on Windows is F7.
By selecting the Contour tab in the left section, it is possible to choose the color and opacity of the contour you want to create in the image. Then the background is removed so that the resulting contour extraction stands out.
People silhouette
After removing the background in the image you see here, I made the selection in yellow tones so that the object in the foreground stays visible. Once the background is removed, the outer contour is applied to the image and detection becomes more successful.
People contour

Contour Extraction with Python OpenCV

I use Google Colab and the Python programming language as my platform. For those who regularly code in Python, it is a platform I can definitely recommend! Come on, let's start coding step by step.
📌 Let’s import the libraries required for our project as follows.
Loading the required libraries
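A minimal sketch of this step, assuming a Colab environment:

```python
import cv2
from google.colab.patches import cv2_imshow  # imshow replacement for Colab
```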
📌 As the second step, we get our image with the imread function.
Reading the image
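A sketch with a hypothetical file name:

```python
image = cv2.imread("people.jpg")  # replace with your own image file
```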
📌 As you know, in the world of image processing our images come in BGR format. The BGR image must first be converted to RGB format and then converted to the grayscale color channel.
Converting Color Spaces
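A sketch of the two conversions:

```python
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # BGR -> RGB
gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)  # RGB -> gray level
```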
📌 As the fourth step, a binary threshold operation is performed by specifying a threshold value for the image. The mathematics that the binary threshold function runs in the background is given by the following formula 👇
dst(x, y) = maxVal if src(x, y) > thresh, otherwise 0
Binary threshold
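A sketch with 127 as an example threshold value:

```python
# Pixels brighter than 127 become 255 (white); all others become 0.
ret, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
```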
If you noticed, the image to which the threshold is applied is the gray-level image, not the RGB one. Please pay attention at this stage. When you follow these steps in order, you will get the output below.
Background
📌 In this step, we will use the findContours function to find the contours in the image. The image in which the contours are detected is the binary image we obtained from thresholding.
Find Contours
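A sketch of the call (OpenCV 4.x returns two values here; 3.x versions return three):

```python
# findContours expects a binary image; RETR_TREE retrieves all contours
# together with their full nesting hierarchy.
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE,
                                       cv2.CHAIN_APPROX_SIMPLE)
```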
📌 We will use the drawContours function to draw these contours visually.
Draw Contours
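A sketch, drawing on a copy so the original image stays untouched:

```python
# contourIdx=-1 draws every contour; (0, 255, 0) is green, 2 px thick.
result = image.copy()
cv2.drawContours(result, contours, -1, (0, 255, 0), 2)
```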
🖇 The cv2.CHAIN_APPROX_SIMPLE parameter in this call removes all redundant points and saves memory by compressing the contour.
📌 Now we can print our contour extracted image on the screen.
Imshow contours
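A one-line sketch of that final step:

```python
cv2_imshow(result)  # show the image with the contours drawn on it
```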

In this way, we completed our contour extraction. Hoping to head into the world of other projects in another article … Stay healthy ✨

REFERENCES

  1. Contour Tracing, http://www.imageprocessingplace.com/downloads_V3/root_downloads/tutorials/contour_tracing_Abeer_George_Ghuneim/intro.html.
  2. Edge Contour Extraction, https://www.cse.unr.edu/~bebis/CS791E/Notes/EdgeContourExtraction.pdf, Pitas, section 5.5, Sonka et al., sections 5.2.4-5.2.5.
  3. https://www.thepythoncode.com/article/contour-detection-opencv-python.
  4. https://www.subpng.com/png-m7emk6/.
  5. OpenCV, https://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html.
  6. OpenCV, https://docs.opencv.org/master/d4/d73/tutorial_py_contours_begin.html.

Designing an Artificial Human Eye: EC-Eye

The eye is one of the organs with the most complex biological structures. Thanks to this structure, it not only provides a very wide field of view but also processes both far and near objects in detail, and it adapts incredibly well to ambient and lighting conditions. In addition to the neural networks, the layers, and the millions of photoreceptors it contains, its spherical shape makes it very difficult to replicate.
Despite all these difficulties, scientists from the Hong Kong University of Science and Technology continued working in this field and developed a bionic eye using a light-sensitive perovskite material. This bionic eye, which they named the "Electrochemical Eye" (EC-Eye), is about to do much more than merely copy a human eye.
The cameras we have today may seem like a copy of the visual function. However, at small sizes their resolution and field of view do not really match the human eye's; solutions such as microchips are used instead. And as we said before, designing these on a spherical surface is not an easy task. So how does the EC-Eye do it?
We can say that the electrochemical eye basically consists of two parts. At the front, there is a lens that performs the task of the human iris. On the same side, it has an aluminum shell filled with an electrically charged liquid. This liquid is the counterpart of the gel-like biological fluid that fills the inside of the eye, known as the "vitreous" in the human eye.
At the back of the EC-Eye are wires that send the generated electrical activity to a computer for processing. To make contact, it also has a silicone eye socket. Finally, and most importantly, there are the sensitive nanowires that do the sensing. These nanowires are so sensitive that their response speed is faster than that of the photoreceptors in a normal human eye. Transmission takes place when the electrical reactions occurring on the nanowires are relayed to the computer. Of course, although it sounds like a simple process when described this way, it is in fact an application that pushes the limits of technology. That all of this runs with a power and quality that leaves the human eye behind makes it even more intriguing.
To see how it works, an interface was set up between the EC-Eye and a computer, and some letters were shown to the EC-Eye through this interface. The resulting detection proved that a higher-resolution image was obtained. In later stages, it will face much more complex tests, and work on its development will continue.
It is very clear that this bionic eye must pass many more tests before it can replace the human eye. In particular, even though it looks like a small device, connecting the nanowires to a computer for processing currently poses a problem. With so many nanowires involved, placing them and using them practically looks quite difficult; in other words, commercializing these bionic eyes and making them available to everyone may take a while longer. For now, though, it gives great hope for the future.
If it reaches a point where it can also perceive things the human eye cannot, it can be said to have a potential with a great many capabilities. Recording video, seeing very far, night vision, imaging frequencies at other wavelengths: the things we saw in science fiction films and dismissed as "that only happens in the movies" no longer seem so out of reach. If these can already be done quite easily even with phone cameras, it is not hard to predict that high-end technological applications that also involve artificial intelligence will do them with ease.
Artificial intelligence has already begun to become a part of us in every field.
REFERENCES

Looking to the Future: Creating an Artificial Eye


https://www.nature.com/articles/s41586-020-2285-x.pdf?origin=ppub
https://tr.euronews.com/2020/05/21/insanlar-ve-robotlar-icin-gelistirilen-biyonik-goz-ilk-testleri-gecti-potansiyelde-s-n-r-y


Data Labeling Tools For Machine Learning

The process of labeling data is a crucial step in any supervised machine learning project. Labeling is the process of defining regions in an image and describing which object belongs to each region. By labeling the data, we prepare it for ML projects and make it more readable. In most of the projects I have worked on, I created the sets in the dataset, did the labeling myself, and ran my training on the labeled images. In this article, I will introduce the data labeling tools I encounter most often, sharing my experience in this field with you.
Labeling Image

📍COLABELER

Colabeler is a program that allows labeling for localization and classification problems. It is a labeling program frequently used in the fields of computer vision, natural language processing, artificial intelligence, and voice recognition [2]. The visual example below shows the labeling of an image. The classes you see here mostly correspond to the car class. In the tool section on the left side, you can mark objects with curves, polygons, or rectangles. This choice may vary depending on the boundaries of the data you want to label.
Labeling Colabeler
Then, in the section labeled 'Label Info', you type the names of the objects you want to tag. After finishing all the labels, you save them by confirming with the blue tick button, and you can then move on to the next image with Next. Note that every image we save is listed to the left of this blue button, so it is also possible to review the images you have already saved. One of the things I like most about Colabeler is that it can also use artificial intelligence algorithms.
📌 I performed the labeling via Colabeler in a project I worked on before, and it is software with an incredibly easy interface.
📽 The video on Colabeler's official website describes how to do the labeling.
Localization of Bone Age
Above, I gave a sample image from the project I worked on earlier. Because this is a localization project in the machine learning sense, the labeling was done accordingly. Localization means isolating the subregion of the image where a feature is located. For example, defining bone regions for this project simply means creating rectangles around the bone regions in the image [3]. In this way, I labeled the classes likely to be extracted from the bone images as ROI regions. I then obtained these labels through the XML/JSON export provided by Colabeler. A lot of machine learning practitioners will like this part; it worked very well for me!

♻️ Export Of Labels

Exporting XML Output
At this stage, I saved it as JSON output because I will be using JSON data; you can save your data in other formats as well. In the image below, you can see the locations of the classes I created in the JSON output. In this way, your data is prepared in labeled form.
JSON Format
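The exact keys depend on Colabeler's export format, so the following is a purely hypothetical illustration of how such a JSON file might be read:

```python
import json

# Hypothetical file name and keys, shown only to illustrate the idea.
with open("labels.json") as f:
    data = json.load(f)

for obj in data["objects"]:
    print(obj["class"], obj["bbox"])  # e.g. "roi" and [x, y, w, h]
```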

📍ImageJ

ImageJ is a Java-based image processing program developed at the National Institutes of Health and the Laboratory for Optical and Computational Instrumentation (LOCI, University of Wisconsin). ImageJ’s plugin architecture and built-in development environment have made it a popular platform for teaching image processing [3].

As I mentioned above, you can see a screenshot of ImageJ from Wikipedia. As can be seen, this software is not overly complex. It is a tool used in many areas regardless of profession. 📝 The documentation provided as a user's guide on the official ImageJ website describes how to perform labeling and how to use the software tool.
📌 I have also used the Fiji-ImageJ software for images I had to label in a machine learning project. I think its interface feels much older than the other labeling programs I have worked with. Of course, you can perform the operations you want from a software point of view, but for me, software also needs to satisfy the user from a design point of view.
Fiji-ImageJ
The image above is a screenshot I took while working on the project on my personal computer. To be able to use the data while working on the Matlab platform, the software first had to be updated. For this reason, I continued identifying the images after updating. Below is the package that is installed when setting up the Matlab plugin for ImageJ users.
ImageJ Matlab

📍Matlab Image Labeler

The Image Labeler app provides an easy way to mark rectangular region-of-interest (ROI) labels, polyline ROI labels, pixel ROI labels, and scene labels in a video or image sequence. For example, using this app you can [4]:

  • Manually label an image frame from an image collection
  • Automatically label across image frames using an automation algorithm
  • Export the labeled ground truth data

Image Toolbox Matlab
In the image you see above, we perform segmentation using Matlab's Image Labeler software. More precisely, it is possible to label by dividing the data into ROI regions. In addition, you can use already existing algorithms, as well as test and run your own algorithm on the data.
Selection ROI
In this image, taken from Matlab's official documentation, the label names of the bounding regions you select are entered in the left menu, and a label color is assigned according to the class of the object. It is quite possible for us to create our labels in this way. In the next article, I will talk about other labeling tools. Hope to see you ✨

REFERENCES

  1. https://medium.com/@abelling/comparison-of-different-labelling-tools-for-computer-vision-f3afd678da76.
  2. http://www.colabeler.com.
  3. From Wikipedia, The Free Encyclopedia, ImageJ, https://en.wikipedia.org/wiki/ImageJ.
  4. MathWorks, Get Started with the Image Labeler, https://www.mathworks.com/help/vision/ug/get-started-with-the-image-labeler.html.
  5. https://chatbotslife.com/how-to-organize-data-labeling-for-machine-learning-approaches-and-tools-5ede48aeb8e8.
  6. https://blog.cloudera.com/learning-with-limited-labeled-data/.

FaCiPa Series – 3

After FaCiPa Series 2, I wanted to write about the mobile application side, the last article of this series, because I received very nice feedback on my earlier articles. It is an amazing feeling to be able to talk to you today about the project I have been developing for a year! In this article, we will look at FaCiPa's mobile interface together.
Since the project included the Python programming language and API-side code, different platform options such as Kivy or Ionic were available. You can find my other articles about Ionic at the links below; in them, you can briefly learn what Ionic is, how an Ionic project is structured, and how it is used with Semantic UI. In addition, since the code is written in TypeScript, you can also review the article I wrote about that. Below are the most common points about the Ionic Framework:

👉 This open source library is built on Cordova.
👉 It is a library that allows even Web developers to develop mobile applications.

Mobile Application Design
First, we start by creating a new project on the Ionic Framework, the mobile platform for FaCiPa.

Then we create a page with the ionic generate command.
Generate Page
Ionic Page
In the application, there is a home page, a registration page, and an analysis page to start with, so four pages should be created in total together with the home page.
All files

FACIPA MOBILE INTERFACE

The framework used for FaCiPa's mobile interface is Ionic. The fact that mobile devices are now used more than computers, the increase in mobile applications, the diversity of mobile devices, and the presence of different operating systems have led software developers to seek different mobile solutions. In addition to native application development, creating an application structure that can run on any platform has become an important need over time, and hybrid applications that can be developed with the support of languages such as HTML5 and JavaScript have emerged [1].
The first choice of programmers with JavaScript or Angular experience is usually Ionic. Open source, Ionic is home to thousands of mobile apps and has thousands of followers and supporters. The Ionic Framework, which in its own words has "first-class" documentation, is a convenient and easy-to-learn library.
🏗 The Ionic Framework is built on Cordova. Cordova provides access to the hardware and system resources of the mobile device. You can run your app on mobile operating systems such as Android, iOS, or Windows Phone, and you can even publish it as a mobile-compatible website. Applications are developed with Ionic using HTML, JavaScript, and Angular, so knowing JavaScript is basically enough. Visual Studio Code was used as the development platform in this project. The application's views are laid out with HTML5 in files with the .html extension, such as src\pages\home\home.html, and the necessary CSS styling is done in files with the .scss extension, such as src\pages\home\home.scss [1].
📷 To avoid tiring the user and to reduce the processing load on the machine, it was decided that the photos used in the project would be taken from the user in a first step and then reduced to a single photo. The user downloads the app from the mobile stores, instantly takes a photo, and sends it to the server for processing.
🛡 The back-end logic of the application is written in TypeScript, in files with the .ts extension such as src\pages\home\home.ts.
Upload Camera

IONIC ALERT (ION-ALERT) PLUGIN

A warning appears above the content of the application and must be manually dismissed by the user before they can continue interacting with the application. In this application, an ion-alert warning is shown so that the user takes the photo correctly.
🔎 Title: Title of the alert box
🔎 Subtitle: Warning text
🔎 Buttons: The buttons used to dismiss the alert; if the OK button is clicked, the photoOne() method is executed and the photo is taken.
Ionic Alert

IONIC CAMERA PLUGIN

The Ionic camera plugin is required for taking photos or videos from mobile devices. It depends on the Cordova plugin cordova-plugin-camera.
🔎 quality: Image quality
🔎 destinationType: Destination type
🔎 encodingType: Encoding type
🔎 mediaType: Media type (Picture)
Install Camera
Install Cam

FIRST DRAFT DRAWINGS OF THE PROJECT

Wireframe Templates
You can design your application's pages completely freely. The wireframe drawing you see above was made when the project first started; afterwards, we created the project's final designs. As a footnote, I have to say that, unfortunately, our product does not support English, so I have to share the screenshots in Turkish.
Facipa
The visuals I have given above are the project's analysis page and the feedback on the analysis result. Thus, we have come to the end of FaCiPa. Thank you for following along patiently. Stay healthy ✨

REFERENCES

  1. http://devnot.com/2016/hibrit-uygulama-catisi-ionic-i-taniyalim/
  2. R. L. Delinger, J. M. VanSwearingen, J. F. Cohn, K. L. Schmidt, “Puckering and Blowing Facial Expressions in People With Facial Movement Disorders,” J. Phys Ther, vol. 88, pp. 909-915, August 2008.
  3. The Spreading of Internet and Mobile Technologies: Opportunities and Limitations, Hasan GULER, Yunis SAHİNKAYASİ, Hamide SAHİNKAYASİ. Journal of Social Sciences Volume 7 Issue 14 December 2017, 03.10.2017-27.10.2017.

Facial Paralysis Assistant: FaCiPa

Hello everyone, as I promised you before, I am here to introduce FaCiPa. I will walk you through the details of how to build an application from scratch, step by step, with the FaCiPa application, which you have come across in many interviews, talks, and demos. Excuse my excitement today, because I feel like every project I do is my child. So much so that this project is very valuable to me, as it also contains memories from my own life.