Preprocessing with Image Processing Techniques

Many of the projects we aim to build require image processing steps. In this article, we will walk through the preprocessing stages with the Gaussian and mean filters, threshold filters, and the Canny edge detector. As a platform, you can work in Colab like me! This way, you can run your projects very quickly and without taking up local disk space.

Image processing techniques are methods for extracting various kinds of information by analyzing existing images. Depending on where they are used, they rely on different mathematical expressions, from the simplest algorithms to the most complex.

To apply image processing methods, we will process real-world data captured with a camera. The operations are: reading the data with OpenCV, checking the pixel values across color channels, eliminating the noise contained in the image, and applying the available filters.

It is best to prepare the dataset we will use for our projects in advance. Images are read from the dataset files with the imread() method. Let's get to work by loading the necessary libraries for this process.

📍NOTE: After a preprocessing step, I usually check the image with the imshow() function. But let's not forget that since OpenCV's imshow function does not work in Colab, we must import the cv2_imshow helper from google.colab.patches instead!

The code below lets you load the image and examine its pixel values.
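Since the original code was shared as a screenshot, here is a minimal sketch of what it might look like; the dataset path is a placeholder:

```python
import cv2
from google.colab.patches import cv2_imshow  # Colab replacement for cv2.imshow

PATH = "dataset/rose.jpg"  # hypothetical path to an image from your dataset

image = cv2.imread(PATH)   # OpenCV reads images in BGR channel order
print(image.shape)         # (height, width, channels)
print(image[0, 0])         # pixel values of the top-left pixel: [B G R]
cv2_imshow(image)
```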

After importing our libraries, we create a class and copy the path of the dataset we will use into the PATH variable, because our image processing techniques will run on the images contained in this folder.

As you can see in the imread function, I pass 0 as the second argument, since the images I use are gray-level images. This way, even if an image contains color, it will be loaded as grayscale. We can print our first image to the screen with the cv2_imshow method.

When we pass 0 for our image, we obtain the image you see below. It is possible to print it to the screen with the cv2_imshow(image) command. After this step, we can move on to the image processing steps.

Image Processing Steps

If you want to see the pixel values of your RGB or grayscale image, you can print them to the screen with the print command. This also lets you check which channel you are working on. Since I use an RGB rose image here, the pixel values come out as follows.

📌 RGB Color Channel: RGB is the most commonly used color space. In this color model, each color appears as a combination of its main spectral components red, green, and blue. The model is built on a Cartesian coordinate system.

The code required to convert our image to RGB and examine the pixel values is given as follows.
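A hedged sketch of that step, continuing with the image loaded above:

```python
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; reorder to RGB
print(rgb[0, 0])                              # [R G B] values of a single pixel
```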

📌 HSV Color Channel: The name of the HSV space comes from the initials of hue, saturation, and value. The HSV color space defines a color with the terms hue, saturation, and value. While RGB uses a mixture of primary colors, HSV uses hue, saturation, and brightness values. Saturation determines the vividness of the color, while value refers to its brightness.

📌 CIE-LAB Color Channel: The CIE 1931 color spaces were the first defined quantitative links between the distribution of wavelengths in the electromagnetic visible spectrum and physiologically perceived colors in human color vision. The mathematical relationships that define these color spaces are essential tools for color management, which is important when dealing with devices such as color inks, illuminated displays, and digital cameras.

LAB Color Channel
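A minimal sketch of the HSV and CIE-LAB conversions described above, using the same loaded image:

```python
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)  # hue, saturation, value channels
lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)  # CIE-LAB lightness and color axes
cv2_imshow(hsv)
cv2_imshow(lab)
```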

Elimination of Noise In The Image

Because images are real-world data coming from a camera, they often contain Gaussian noise due to current fluctuations on the camera's sensor. Noisy images lead to worse performance in edge detection, which we later use for feature detection, so it is important to reduce this noise.

🖇 There are many methods in the literature to reduce noise. Today we will discuss two of them:

  1. Adaptive Threshold Gaussian
  2. Adaptive Threshold Mean

➡️ Adaptive Threshold Gaussian

Below is the Python code that applies the Gaussian method to remove Gaussian noise from our images. You can reach the desired result by tuning the parameters of the adaptiveThreshold method.
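Since the original snippet was shared as a screenshot, here is a minimal sketch of the idea; blockSize=11 and C=2 are assumed values to tune per image:

```python
gray = cv2.imread(PATH, 0)  # adaptiveThreshold expects a single-channel image
gauss = cv2.adaptiveThreshold(
    gray, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # Gaussian-weighted neighborhood mean
    cv2.THRESH_BINARY,
    11,  # blockSize: size of the neighborhood (must be odd)
    2,   # C: constant subtracted from the weighted mean
)
cv2_imshow(gauss)
```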

When the Gaussian and mean threshold filters frequently used in the literature are applied to these images, roughly the same level of blur (softening) is obtained. These methods are, respectively, the adaptive Gaussian filter and the mean filter.

➡️ Adaptive Threshold Mean

Adaptive thresholding computes the threshold value for smaller regions, so there will be different threshold values for different regions of the image.
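A sketch of the mean variant with the same assumed parameters; only the adaptive method flag changes:

```python
mean = cv2.adaptiveThreshold(
    gray, 255,
    cv2.ADAPTIVE_THRESH_MEAN_C,  # plain neighborhood mean instead of Gaussian
    cv2.THRESH_BINARY,
    11, 2,
)
cv2_imshow(mean)
```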

You will notice that there are only minor differences between the Gaussian and mean filters. You can continue with whichever filter you want by tuning the parameter values yourself.

➡️ Edge Detection

Edge detection is an important technique used to detect features. The Canny edge detection algorithm, one of the best-known edge detection techniques, was run on the images.

Canny Code
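Since the original code appears as a screenshot, a minimal sketch might be; the threshold values 100 and 200 are assumptions to tune:

```python
edges = cv2.Canny(gray, 100, 200)  # lower/upper hysteresis thresholds (assumed)
cv2_imshow(edges)
```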

Canny Image


Contour Extraction Using OpenCV

In image processing, a contour is a closed curve that connects all the continuous points sharing a color or intensity. Contours represent the shapes of objects in an image. Contour detection is a useful technique for shape analysis and for object detection and recognition. When we do edge detection, we find the points where the intensity changes significantly, and then we mark those pixels. Contours, however, are abstract collections of points and segments corresponding to the shapes of the objects in the image. As a result, we can manipulate contours in our programs: counting them, using them to categorize the shapes of objects, cropping objects from an image (image segmentation), and much more.

Computer Vision

🖇 Contour detection is not the only algorithm for image segmentation; there are many others, such as state-of-the-art semantic segmentation, the Hough transform, and K-means segmentation. The pipeline we will follow to successfully detect contours in an image is:

  • Convert the image to a binary image; it is common practice for the input to be binary (the result of thresholding or edge detection).
  • Find the contours using the OpenCV findContours() function.
  • Draw these contours and show the image on the screen.

Applying a Contour in Photoshop

Adobe PS
Before moving on to coding the contour extraction, I will first give you a Photoshop example to build better intuition.
Contour extraction from a layer
As a first step, to access the window you see above, right-click any layer in Photoshop's Layers window and select Blending Options.
🔎 If the Layers window is not active, you must activate it from the Window menu at the top. The hotkey on Windows is F7.
You can select the contour color and opacity you want by choosing the Contour tab from the left section. Then the background is removed to make the contour that will appear in the image stand out.
People silhouette
After removing the background of the image you see here, I made a selection in yellow tones so that the object is visible in the foreground. Once the background is removed, the outer contour is applied to the image and the detection is more successful.
People contour

Contour Extraction with Python OpenCV

I use Google Colab and the Python programming language as my platform. If you code Python regularly, it is a platform I can definitely recommend! Come on, let's start coding step by step.
📌 Let’s import the libraries required for our project as follows.
Loading the required libraries
📌 As the second step, we get our image with the imread function.
Reading the image
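Since the snippets above were shared as screenshots, a minimal sketch of these two steps might look like this (file name assumed):

```python
import cv2
import numpy as np
from google.colab.patches import cv2_imshow  # Colab replacement for cv2.imshow

image = cv2.imread("image.jpg")  # hypothetical file name from your dataset
```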
📌 As you know, in the world of image processing our images come in BGR format. The BGR image must first be converted to RGB format and then converted to the grayscale color channel.
Converting Color Spaces
📌 As the fourth step, a binary threshold operation is performed by specifying a threshold value for the image. The binary threshold function runs the following formula in the background 👇
dst(x, y) = maxVal if src(x, y) > thresh, otherwise 0
Binary threshold
If you noticed, the image the threshold is applied to is the gray-level image, not the RGB one. Please pay attention at this stage. When you follow these steps in order, you will get the output below.
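A hedged sketch of the conversion and thresholding steps; 127 is an assumed threshold value:

```python
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # BGR -> RGB
gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)  # RGB -> grayscale

# Binary threshold: pixels above 127 become 255 (white), the rest become 0.
ret, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
cv2_imshow(binary)
```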
Background
📌 In this step, we will use the findContours function to find the contours in the image. The image whose contours will be extracted is the binary image we obtained by thresholding.
Find Contours
📌 We will use the drawContours function to draw these contours visually.
Draw Contours
🖇 The cv2.CHAIN_APPROX_SIMPLE flag removes all redundant points and compresses the contour, saving memory.
📌 Now we can print our contour extracted image on the screen.
Imshow contours
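Putting the contour steps together, a minimal sketch might look like this (OpenCV 4.x return signature):

```python
# Find contours on the thresholded binary image.
contours, hierarchy = cv2.findContours(
    binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE
)

# Draw all contours (-1) in green with a thickness of 2, then display.
result = image.copy()
cv2.drawContours(result, contours, -1, (0, 255, 0), 2)
cv2_imshow(result)
print("number of contours:", len(contours))
```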

In this way, we completed our contour extraction. Hoping to head into the world of other projects in another article… Stay healthy ✨


Designing an Artificial Human Eye: EC-Eye

The eye is one of the organs with the most complex biological structure. Thanks to this structure, it provides a very wide viewing angle, processes both far and near in detail, and adapts incredibly well to ambient light conditions. Besides its neural networks, layers, and millions of photoreceptors, it also has a spherical shape, which makes it very hard to replicate.
Despite all these difficulties, scientists from the Hong Kong University of Science and Technology continued their work in this area and developed a bionic eye with a light-sensitive perovskite material. This bionic eye, which they call the “Electrochemical Eye” (EC-Eye), is about to do much more than merely copy a human eye.

The cameras we have today may sound like a replica of vision, but at small sizes their resolution and viewing angle do not match the characteristics of the human eye; instead, solutions such as microchips are used. And, as we said before, designing them on a spherical surface is not that easy. So how does the EC-Eye do it?
We can say that the electrochemical eye consists of two parts. At the front, there is a lens that functions as a human iris. On the same side, it has an aluminum shell filled with an electrically charged liquid. This liquid plays the role of the gel-like biological fluid that fills the inside of the eye, known as the “vitreous” in the human eye.

 
On the back of the EC-Eye, wires send the generated electrical activity to a computer for processing. It also has a silicone eye socket to make contact. Finally, and most importantly, there are the sensitive nanowires that perform the detection. These nanowires are so sensitive that their response speed is faster than the photoreceptors in a normal human eye. Transmission takes place by relaying the electrical responses that occur on the nanowires to the computer. Of course, even if it sounds like an easy process when described this way, it is an application that pushes the limits of technology. It is even more intriguing that all of this works with capabilities that could leave the human eye behind.
To see how it works, an interface was created between the EC-Eye and a computer, and some letters were shown to the EC-Eye through this interface. The detection results showed that a higher-resolution image was obtained. In the next stages, it will face much more complex tests, and development will continue.

It is very clear that this bionic eye needs to pass many more tests before it can replace the human eye. In particular, although it looks like a small device, connecting the nanowires to a computer for processing is still a problem. With a large number of nanowires, installing and using them practically seems very difficult, so these bionic eyes may take a while longer to commercialize and reach everyone. For now, though, it gives great hope for the future.
If it reaches a point where it can do things the human eye cannot perceive, it has a lot of potential. The things we see in science fiction movies and dismiss with “these only happen in movies anyway” now seem within reach: recording, seeing far, night vision, viewing frequencies at other wavelengths. Just as phone cameras already do some of this comfortably, it is not hard to predict that high-end applications including artificial intelligence will do it easily.
Artificial intelligence has already begun to be a part of our lives in every field.
 

Mobile Application Development

FaCiPa Series – 3

After the very nice feedback I received on my FaCiPa Series 2 article, I wanted to write about the mobile application side, the last part of this series. It's an amazing feeling to be able to talk to you today about the project I've been developing for a year! In this article, we will look at FaCiPa's mobile interface.
Since the project includes the Python programming language and API-side code, different platform options such as Kivy or Ionic were available. You can find my other articles about Ionic at the links below. In them, you can briefly learn what Ionic is, how an Ionic project is structured, and how it is used with Semantic UI. In addition, since Ionic code is written in TypeScript, you can also review the article I wrote about that. Below are the most common points about the Ionic Framework:

👉 This open source library is built on Cordova.
👉 It is a library that allows even Web developers to develop mobile applications.

Mobile Application Design
First, we start by creating a new project on the Ionic Framework, the mobile platform for FaCiPa.

Then we create a page with the ionic generate command.
Generate Page
Ionic Page
In the application, there is a home page, a registration page, and an analysis page to start with; so, together with the welcome page, 4 pages should be created in total.
All files

FACIPA MOBILE INTERFACE

The framework used for FaCiPa's mobile interface is Ionic. The fact that mobile devices are used more than computers, the growth in mobile applications, the diversity of mobile devices, and the presence of different operating systems have pushed software developers toward different mobile solutions. In addition to native application development, building an application that can run on any platform has become an important need over time, and hybrid applications that can be developed with the support of languages such as HTML5 and JavaScript have emerged [1].
The first choice of programmers with JavaScript or Angular experience is usually Ionic. Open source, Ionic is home to thousands of mobile apps and has thousands of followers and supporters. The Ionic Framework, which in its own words has “first-class” documentation, is a convenient and easy-to-learn library.
🏗 The Ionic Framework is built on Cordova. Cordova provides access to the hardware and system resources of the mobile device. You can run the app on mobile operating systems such as Android, iOS, or Windows Phone, and you can even conveniently publish it as a mobile-compatible website. Applications are developed with Ionic using HTML, JavaScript, and Angular, so knowing JavaScript is enough to get started. Visual Studio Code was used as the development platform in this project. The application's layouts are in files with the .html extension, such as src\pages\home\home.html, and are written with HTML5. The necessary CSS styling is done in files with the .scss extension, such as src\pages\home\home.scss [1].
📷 It was decided that the photos used in the project would be taken from the user in a first step and then reduced to a single photo, in order not to tire the user and to lighten the machine's processing load. The user installs the app from the mobile stores, instantly takes a photo, and sends it to the server for processing.
🛡 The logic of the application lives in files with the .ts extension, such as src\pages\home\home.ts, written in TypeScript.
Upload Camera

IONIC ALERT (ION-ALERT) PLUGIN

An alert appears above the application's content and must be manually dismissed by the user before they can continue interacting with the app. In this application, an ion-alert is shown to help the user take the photo correctly.
🔎 Title: the title of the alert box
🔎 Subtitle: the alert text
🔎 Buttons: the buttons that dismiss the alert; when the OK button is clicked, the photoOne() method runs and the photo is taken.
Ionic Alert

IONIC CAMERA PLUGIN

The Ionic camera plugin is needed for taking photos or videos on mobile devices. It requires the Cordova plugin cordova-plugin-camera.
🔎 quality: photo quality
🔎 destinationType: destination type
🔎 encodingType: encoding type
🔎 mediaType: media type (Picture)
Install Camera
Install Cam

FIRST DRAFT DRAWINGS OF THE PROJECT

Wireframe Templates
You are completely free to design your application's pages. The wireframe drawing you see above was designed when the project first started; we then created the final designs. As a footnote, I have to say that our product unfortunately does not support English, so I have to share the screens in Turkish.
Facipa
The visuals above are the project's analysis page and the feedback on the analysis result. And so we have come to the end of FaCiPa. Thank you for following along patiently. Stay healthy ✨

REFERENCES

  1. Devnot, “Hibrit Uygulama Çatısı Ionic’i Tanıyalım”, http://devnot.com/2016/hibrit-uygulama-catisi-ionic-i-taniyalim/.
  2. R. L. Delinger, J. M. VanSwearingen, J. F. Cohn, K. L. Schmidt, “Puckering and Blowing Facial Expressions in People With Facial Movement Disorders,” J. Phys Ther, vol. 88, pp. 909-915, August 2008.
  3. Hasan Guler, Yunis Sahinkayasi, Hamide Sahinkayasi, “The Spreading of Internet and Mobile Technologies: Opportunities and Limitations”, Journal of Social Sciences, Volume 7, Issue 14, December 2017.

Facial Paralysis Assistant: FaCiPa

Hello everyone! As I promised, I'm here to introduce FaCiPa. Step by step, I will show you the details of how the FaCiPa application, which you may have come across in many interviews, talks, and demos, was built from scratch. Excuse my excitement today, because I feel like every project I do is my child. This project is especially valuable to me, as it also contains memories from my own life.

Feature extraction techniques with MATLAB

Feature extraction, a method commonly used in computer vision, image processing, and artificial intelligence projects, is the application of dimensionality reduction to raw data [1]. As you know, machine learning has seen dramatic developments recently, which has drawn great interest from industry, academia, and popular culture 🏭👩‍🔬. With the recent introduction of machine learning and deep learning models in healthcare, intelligent systems can detect many diseases in advance and catch details an expert might miss [2]. On the MRI images common in medical practice, there are distinct regions where disease can be detected ☢️. To concentrate on these regions, feature selection and feature extraction are performed, the results are fed into various algorithms, and the machine learns to detect the diseases a human would.

Detection of lesion sites in a sample brain MR image by feature extraction [3]

I am going to continue working on the left-hand wrist MRI images I mentioned earlier in my article “Preliminary Steps in Bone Age Detection with Image Processing”. Of course, the images used are entirely up to you; if you wish, you can perform feature extraction on a different image dataset ✔️. The important thing is that we can recognize the objects in the image and determine what features they contain. Figure 2 briefly illustrates the image preprocessing stages in the MATLAB environment. During preprocessing, certain filters were used to erase trivial details in the image, reduce the lighting factor, and sharpen certain areas.

MATLAB pre-processing steps
Examining Feature Extraction

📌 Feature extraction is the acquisition of details by reducing dimensionality in order to positively affect a project's performance. Used in machine learning, pattern recognition, and image processing, feature extraction creates derived values (features) from the measured data given as input [3].

📌 A feature in machine learning is an individually measurable property of observed data. Features are the inputs fed into a machine learning model to make a prediction or classification [4].

Steps of Data Analysis

Feature Extraction Techniques

Feature extraction aims to reduce the number of features in a dataset by creating new features from the existing ones and then discarding the originals [6]. In line with this, the color channels of the images were first checked in MATLAB. Then color conversions were performed according to the specified channel, and numerical values were obtained from the RGB and gray-level information. These numerical values will later be used in machine learning, and we will first check manually whether they belong to the same classes.


🔎 RGB, HSV, LAB Color Spaces and Examining GLCM

🔗 RGB Color Space: RGB is the most widely used color space. In this color model, each color appears as a combination of the main spectral components red, green, and blue. The Cartesian coordinate system underlies this model. The color subspace of interest is examined as a cube, which is frequently used in image processing [7].

📌 When working in the RGB color channel, let's first check that the loaded image is suitable for the RGB color channel, and then store the matrix values of the red, green, and blue channels in variables, as sketched below.

Parsing the image into R, G and B channels

Representation of Sample Red Channel Values
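The original steps are MATLAB screenshots; a rough Python/OpenCV equivalent of the channel split might look like this (file name assumed):

```python
import cv2

img = cv2.imread("wrist_mri.png")  # hypothetical file name
b, g, r = cv2.split(img)           # OpenCV stores channels in B, G, R order
print(r)                           # matrix of red-channel values
```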


🔗 HSV Color Space: The name of the HSV space comes from the initials of hue, saturation, and value. The HSV color space defines a color with the terms hue, saturation, and value. While RGB uses a mixture of primary colors, HSV uses hue, saturation, and brightness values. Saturation determines the vividness of the color, while value refers to its brightness. The HSI space separates the intensity component of a color image from the hue and saturation, which carry the color information [9].

Parsing the image into H, S and V channels

Example Representation of V Channel Values
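A corresponding Python sketch for the HSV split, assuming the same img as above:

```python
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
print(v)  # matrix of V (value/brightness) channel values
```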


🔗 CIE Color Space: The CIE 1931 color spaces were the first defined quantitative connections between the distribution of wavelengths in the electromagnetic visible spectrum and physiologically perceived colors in human color vision. The mathematical relationships that define these color spaces are essential tools for color management, which is important when dealing with devices such as color inks, illuminated displays, and digital cameras [11]. To split an RGB image into CIELAB channels in MATLAB, the conversion is performed with the rgb2lab command.

Parsing the image into L, a and b channels

Representation of Sample L Channel Values
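A corresponding Python sketch for the LAB split (OpenCV's analogue of MATLAB's rgb2lab):

```python
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)
print(L)  # matrix of lightness (L) channel values
```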


🔎 GLCM (Gray-Level Co-Occurrence Matrix): Several texture features can be extracted with the gray-level co-occurrence matrix. The texture filter functions provide a statistical view of texture based on the image histogram. These functions give useful information about the texture of an image but cannot provide information about shape, that is, the spatial relationships of pixels in the image [12].

Calculation of sample GLCM values [12]

Creating the Gray-Level Co-Occurrence Matrix for the Image
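As a Python sketch of the same idea (the original uses MATLAB), scikit-image can compute a GLCM and texture statistics; note that versions before 0.19 spell the functions greycomatrix/greycoprops:

```python
import cv2
from skimage.feature import graycomatrix, graycoprops

gray = cv2.imread("wrist_mri.png", 0)  # hypothetical file name, read as grayscale

# Co-occurrence of gray levels for pixel pairs one step to the right.
glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
print(graycoprops(glcm, "contrast"))
print(graycoprops(glcm, "homogeneity"))
```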

🔔 Creating the feature vector: In machine learning, feature vectors are used to represent the numerical or symbolic properties of an object, called features, in a mathematical, easily analyzable way. They matter in many areas of machine learning and pattern recognition. Machine learning algorithms typically require a numerical representation of objects so that they can perform processing and statistical analysis. Feature vectors are the equivalent of the vectors of explanatory variables used in statistical procedures such as linear regression [13].

The resulting feature vector is a 1×28 vector.

Plotting the Feature Vector
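Purely as an illustration (the article's actual 1×28 vector comes from the MATLAB pipeline), stacking a few of the statistics computed above into one vector and plotting it could look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical feature vector: channel means plus GLCM texture statistics.
features = np.hstack([
    r.mean(), g.mean(), b.mean(),            # RGB channel means
    graycoprops(glcm, "contrast").ravel(),   # texture features
    graycoprops(glcm, "homogeneity").ravel(),
])
plt.plot(features)
plt.title("Feature vector")
plt.show()
```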

In this way we have obtained the feature vector. Hope to see you in my next post 🙌🏻

REFERENCES

[1] Sadi Evren Seker, “Feature Extraction”, December 2008, http://bilgisayarkavramlari.sadievrenseker.com/2008/12/01/ozellik-cikarimi-feature-extraction/.

[2] M. Mert Tunalı, “Brain tumor detection via MRI images Part 1 (U-Net)” taken from Medium.

[3] Shahab Aslani, Michael Dayan, Vittorio Murino, Diego Sona, “Deep 2D Encoder-Decoder Convolutional Neural Network for Multiple Sclerosis Lesion Segmentation in Brain MRI”, September 2018, Conference Paper, MICCAI2018 (BrainLes Workshop).


[4] MC.AI, The Computer Vision Pipeline, Part 4: Feature Extraction, October 2019, https://mc.ai/the-computer-vision-pipeline-part-4-feature-extraction/.

[5] Javier Gonzalez-Sanchez, Mustafa Baydoğan, Maria-Elena Chavez-Echeagaray, Winslow Burleson, Affect Measurement: A Roadmap Through Approaches, Technologies, and Data Analysis, December 2017.

[6] Pier Paolo Ippolito, “Feature Extraction Techniques”, Towards Data Science, https://towardsdatascience.com/feature-extraction-techniques-d619b56e31be.

[7] C. Gonzalez, Rafael, E. Woods, Richard, Digital Image Processing, Palme Publishing, (Ankara, 2014).

[8] Retrieved from https://favpng.com/png_view/light-rgb-color-space-rgb-color-model-light-png/BsYUHtec.

[9] Asst. Prof. Caner Ozcan, Karabuk University, CME429 Introduction to Image Processing, “Color Image Processing”.

[10] Retrieved from https://tr.pinterest.com/pin/391179917623338540/.

[11] From Wikipedia, The Free Encyclopedia, “CIE 1931 Color Space”, April 2020, https://en.wikipedia.org/wiki/CIE_1931_color_space.

[12] Matlab, Image Processing Toolbox User’s Guide, “Using a Gray-Level Co-Occurrence Matrix (GLCM)”, http://matlab.izmiran.ru/help/toolbox/images/enhanc15.html.

[13] Brilliant, “Feature Vector”,  https://brilliant.org/wiki/feature-vector/, April 2020.

Preliminary Steps in Bone Age Detection with Image Processing

Today, studies in artificial intelligence for healthcare continue unabated. The biggest assistants of health personnel in every field are now artificial intelligence, algorithms, and robotics working in healthcare. While growing up, you have probably come across a child with height and growth problems caused by genetic or developmental disorders. To assess short stature and growth impairment in these individuals, wrist X-ray examinations are requested. So what do you think doctors get from this X-ray film 👨‍⚕️? From these images, the degree of skeletal maturation, called bone age, is calculated from the bones in the wrist. I can hear you asking how this differs from the age computed from the day a person is born 🗓️. The age calculated from the date of birth is the calendar age. Bone age, which develops under the influence of hormones and nutrition, is usually calculated from a left-wrist X-ray 🦴

Properties of the wrist bone and heat map

The major mismatch between skeletal (bone) age and chronological calendar age occurs in children with obesity or early puberty. If you wish, let's examine the carpal bones of the human wrist together and find out how bone age is determined 🔍

What is Bone Age?

The degree of maturation of the bones is expressed as bone age. In a child with normal bone maturation, bone age should equal chronological age. However, some deviation between bone age and chronological age can be normal.

🚩 Hand Wrist Carpal Bones

The bones of the hand and wrist consist of 29 bones in total: 2 forearm bones (radius and ulna), 8 wrist bones (carpals), referred to as Ossa Carpi in medical language, 5 metacarpals, and 14 finger bones (phalanges). We will deal with the carpal bones, which are used for bone age determination. Let's get to know these bones better 🔍

📍 Figure 2 below shows X-ray images of the carpal bones in the wrist, examined from an anatomical point of view.

📍 In children aged 0-6 years, the most important criteria when examining X-ray images are the number and size of the secondary ossification centers in the epiphysis region and of the wrist bones.

Epiphysis and Diaphysis Region Representation in Bone

📍 As shown in Figure 5, when the structural analysis of a long bone is performed, the epiphysis is one of the two ends of the long bone. The diaphysis is the middle section of the long bone, and the metaphysis lies in the area between the epiphysis and the diaphysis. The metaphysis is the area where the secondary ossification center is located. By proportioning the areas measured in these regions, a person's age can be determined. In this article, I show you how the different filters that make up the preliminary stages of image processing work. I chose MATLAB as the programming platform; in MATLAB's GUI environment, the image is easily selected and the filters are applied.

🚩 Applying Background Subtraction

Background subtraction is a method often used in image processing applications to capture and track moving objects against a fixed background. Removing unwanted radiological marks in the images we use is possible with this background removal method. The background is extracted using the defined structuring element.

Structuring Element Selection
Structuring Element Selected for Background Subtraction

When the required operations are executed, the background of the selected image is removed, as sketched below.
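The original code is a MATLAB screenshot; a rough Python/OpenCV sketch of the same idea (structuring element size assumed) is:

```python
import cv2

gray = cv2.imread("wrist_xray.png", 0)  # hypothetical file name, grayscale

# Morphological opening with a large structuring element estimates the background.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))  # assumed size
background = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
foreground = cv2.subtract(gray, background)  # background-removed image
```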

Original Image
Removed Background Image
🚩 Threshold Detection

Thresholding is a method used to convert an input image into a binary image, that is, an image defined only in black and white. The goal is to isolate the object by reducing noise. As the first stage, thresholding is applied to the MRI image. This way, the object can be easily detected in non-colored, black-and-white images. Thresholding is performed with OpenCV's threshold function using the cv2.THRESH_BINARY threshold type and the values 127-255.

Original Image & Thresholded Image

🚩 Contrast Enhancement

Image enhancement techniques were examined one by one, and contrast enhancement was chosen as the method that gave the best results.

Contrast Stretching

It is the process of expanding the gray-level range to span the entire range of the recording medium or display device.

Contrast Enhancement Matlab Code

The imadjust method maps the intensity values in the grayscale image to new values.

Image toning as a result of the imadjust method

Contrast is enhanced using histogram equalization with the histeq method.

Histogram Equalization Matlab Code

Histogram Equalization Result Image Toning

Contrast-limited adaptive histogram equalization is applied with the adapthisteq method.

Adaptive Histogram Equalization Matlab Code

Adaptive Histogram Equalization Result Image Toning
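As a hedged Python sketch of the three MATLAB methods above (imadjust, histeq, adapthisteq), scikit-image offers close equivalents; the file name and clip limit are assumptions:

```python
from skimage import exposure, io

img = io.imread("wrist_xray.png", as_gray=True)  # hypothetical file name

stretched = exposure.rescale_intensity(img)                   # ~ imadjust
equalized = exposure.equalize_hist(img)                       # ~ histeq
adaptive = exposure.equalize_adapthist(img, clip_limit=0.03)  # ~ adapthisteq (CLAHE)
```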

🚩 Applying Mean and Median Filters

The mean filter is the simplest filter and is created with the help of a kernel: each pixel is replaced by the average of its surrounding pixels.

Mean Filter Matlab Code

Mean-Filtered Image Result

The median filter assigns to each pixel the median of the values in its given neighborhood.

Median Filter Matlab Code

Image Result With Median Filter Applied
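A minimal Python/OpenCV sketch of both filters; the 5×5 neighborhood is an assumed size:

```python
import cv2

gray = cv2.imread("wrist_xray.png", 0)     # hypothetical file name
mean_filtered = cv2.blur(gray, (5, 5))     # average of each 5x5 neighborhood
median_filtered = cv2.medianBlur(gray, 5)  # median of each 5x5 neighborhood
```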

🚩 Applying Laplace and Sobel Filters

For image sharpening, the two-dimensional Laplace operator is applied with a filter parameter of 0.2.

Laplace Filter Matlab Code

Image Result Of Laplace Filter Application

As another image sharpening method, the Sobel operator reveals more detail in the image.

Sobel Filter Matlab Code

Sobel Filter Applied Image Result
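A rough Python/OpenCV equivalent of the two sharpening operators (the MATLAB original uses fspecial):

```python
laplacian = cv2.Laplacian(gray, cv2.CV_64F)           # 2-D Laplace operator
sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
sobel_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
```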

🚩 Canny Edge Detection

The Canny edge detection algorithm is a multi-step algorithm that can detect edges while simultaneously suppressing noise. It includes the steps of noise reduction with a Gaussian filter, gradient calculation using gradient operators, and edge detection with threshold values. For example, let's run the Canny detector with threshold values (20, 70) on an MRI image of an individual's hand and look at the results.

Edge detection with the Canny edge detector

Canny Filter Matlab Code

Canny Filter Applied Image Result
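In Python/OpenCV, the same experiment with the thresholds from the text is a one-liner:

```python
edges = cv2.Canny(gray, 20, 70)  # lower/upper hysteresis thresholds from the text
```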

🚩 Application of Erosion and Dilation Filters for Morphological Processes

The erosion filter erodes a grayscale or binary image according to the values of the given structuring element. The dilation filter expands the eroded areas of the image, making it easier to process. Let's apply erosion and dilation filters with the defined structuring elements, as sketched below.
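A minimal Python/OpenCV sketch of the two morphological filters; the 5×5 structuring element is an assumed choice:

```python
import numpy as np

kernel = np.ones((5, 5), np.uint8)                # assumed structuring element
eroded = cv2.erode(gray, kernel, iterations=1)    # shrinks bright regions
dilated = cv2.dilate(gray, kernel, iterations=1)  # expands bright regions
```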

Erosion and Dilation Filter Matlab Code

Erosion and dilation filters applied to the image result

The preprocessing stages described above can be tested with different filters, changing the structuring elements according to the projects at hand. With that, I have covered the most commonly used image preprocessing stages. Happy coding 💻


Image Processing Color Spaces | RGB, HSV and CMYK 🌈

Welcome to the world of image processing 🎉 I'm going to talk about color spaces in image processing, one of the most important areas of computer vision today. You know that image processing basically performs operations on an image. The use of color in image processing rests on two factors. First, color is an identifier that facilitates object recognition and object extraction from an image. Second, humans can distinguish thousands of shades and intensities, compared to far fewer shades of gray. For image analysis, we also need to specify the color channel to use when performing operations on the image.

Before learning the color channels, let's learn a little about color image processing. Scientifically, the foundations of the concept of color were laid in 1665 by the British physicist Isaac Newton. In an experiment carried out in a dark room, he noticed that the light coming through a hole was split by a prism into a color spectrum. Below is a drawing summarizing this experiment.

We mentioned the color spectrum but didn't say what it means. The color spectrum is the separation of white light into its component colors as it passes through a special prism. In fact, we all know it very well: rainbows, which enthrall us with their colors after the rain, are the most beautiful example of colors forming through the refraction of light 🎆.

Color spectrum formed when white light is passed through the prism 🌈

💎 The main reason for mentioning all this is to present the spectrum range the human eye can perceive in the most descriptive way. The human eye can detect wavelengths from roughly 400 to 700 nm; this interval is defined as the visible region. This is exactly where the splitting of white light through the prism in the photograph comes into play: the visible color spectrum. We will work within this visible region. Now let's talk about the most commonly used color channels!

As you know from everyday life and from image processing, the primary colors are red, green, and blue. The color channel consisting of these colors is called RGB in scientific terms. These primary colors combine to form the intermediate colors we use. Different image channels have been created from the primary and intermediate colors.

🔎 As you can see, the primary colors red, green, and blue are brought together to form the yellow, cyan, and magenta colors. We will see these again when we examine the color spaces.

Formation of basic and intermediate colors 🌈

We have basically covered the concept of color and the widely used RGB structure up to this point. Now that we have the basics, we can move on to the color spaces. In image processing, a grayscale image has only one channel, and each pixel takes a value from 0 to 255; the image's apparent shading depends on these pixel values. In gray images, instead of colors such as red, green, and blue in the RGB channel, the intensity level, i.e., brightness, is handled. Brightness is essentially the colorless component of intensity. In color images, the storage requirement increases because more than one channel is used.

The purpose of the color spaces (or models) I will now describe is to make colors easy to identify. A color space defines a coordinate system and a subspace in which each color is represented by a single point. The RGB channel is widely used in image processing for color monitors and color video cameras, the CMY and CMYK channels for color printing, and the HSI (HSV) channel was created to match how people describe and interpret color. These are the leading channels in image processing, so they are the ones I will talk about today.


RGB Color Channel

RGB is the most widely used color space. In this color model, each color appears as a combination of its main spectral components red, green, and blue. A Cartesian coordinate system underlies this model. The color subspace of interest is examined as a cube, which is frequently used in image processing.

📝 When this cube is examined, the RGB primary color values are found at three corners of the cube, with cyan, yellow, and magenta at the other three corners. The R, G, and B values are expressed as vectors in the coordinate system. As you can see, in the RGB color space the different colors are located on the cube as points. When representing an RGB image with 24 bits, assuming 8 bits (1 byte) per channel, the total number of colors is (2⁸)³ = 16,777,216. The cube you see above is a solid object containing those 16,777,216 colors. To use the colors in this cube, there are color codes or values written in specific color models; many websites document them, w3schools being one example. Using OpenCV, a very common library in image processing, we will examine how to define an RGB color model and how to extract an RGB histogram of an image.

📃 In OpenCV, the RGB color space is stored as BGR. A histogram is a graph that shows the counts of the color values in an image. To plot the distribution of the values in an image, the X and Y coordinates are first derived from the image matrix; then the histogram is drawn by specifying how many bins (boxes) there will be and shown on the screen. When creating a histogram of an image, we first convert the original BGR image to gray.

Create a histogram chart
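Since the chart-creation code was shared as a screenshot, here is a minimal sketch (file name assumed):

```python
import cv2
import matplotlib.pyplot as plt

img = cv2.imread("bird.jpg")                      # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # convert BGR image to gray

plt.hist(gray.ravel(), bins=256, range=(0, 256))  # one bin per gray level
plt.show()
```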

The image used was chosen as one where the RGB color model predominates.

📝 Histogram equalization is a method for resolving the uneven color distribution that occurs when the color values in a picture cluster in a specific region. In the graph generated below, the values cluster in the 50-100 range.

Output of the histogram chart with 256 (2⁸) boxes 📊

Since the image is a color image, the RGB values are processed: the colors are separated, and the histogram is balanced for each of the red, green, and blue channels.

Create a histogram chart with 32 and 8 boxes
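A hedged sketch of the per-channel histograms with coarser bins:

```python
for i, col in enumerate(("b", "g", "r")):  # OpenCV channel order is B, G, R
    plt.hist(img[:, :, i].ravel(), bins=32, range=(0, 256), color=col, alpha=0.5)
plt.show()  # repeat with bins=8 for the coarser chart
```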

Histogram chart with 32 boxes 📊

Histogram chart with 8 boxes 📊

Histogram Normalization

To convert this histogram into a probability distribution function, each value is divided by the sum of all values.

Creating a normalized histogram with the normed=True module
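A sketch of the normalization step; note that Matplotlib has since renamed the normed argument to density:

```python
# density=True divides each bin by the total, giving a probability distribution.
plt.hist(gray.ravel(), bins=256, range=(0, 256), density=True)
plt.show()
```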

Normalized histogram boxes 📊

HSV | HSI Color Channel

Color models such as RGB, CMY, and CMYK are not practical in terms of human interpretation. For example, no one describes the color of a house by giving the percentages of the primary colors that make it up. When we look at a colored object, we describe it with hue, saturation, and brightness. For this reason, the concepts of hue, saturation, and brightness, which make it easy for us to define colors, were put forward. The name of the HSI space comes from the initials of hue, saturation, and intensity; the HSV color space defines color with the terms hue, saturation, and value. While RGB uses a mixture of primary colors, HSV uses hue, saturation, and brightness values. Saturation determines the vividness of the color, while value refers to its brightness. The HSI space separates the intensity component of a color image from the hue and saturation, which carry the color information.

The hue, saturation, and brightness values used in HSV space are obtained from the RGB color cube. For black in HSV space, the brightness value is zero, while the hue and saturation values can take any value between 0 and 255; for white, the brightness value is 255.

Conversion from RGB color space to HSV color space
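A minimal sketch of the conversion, continuing with the image loaded above:

```python
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)  # panels: (a) hue, (b) saturation, (c) value
```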

Original : Image Converted to RGB and HSV Space

(a) – hue (b) – saturation (c) – intensity

CMY & CMYK COLOR CHANNEL

In the CMY model, the pigment primary colors cyan, magenta, and yellow, combined in equal amounts, should produce black. In practice, combining these colors for printing produces a muddy-looking black tone. A fourth color, black, is added, giving the CMYK color model, to produce a true black tone.

As mentioned earlier, this color model is used in image processing to produce hard copies. Equal amounts of the CMY pigments should produce black, but to obtain a true black in printing, a black channel was added to the CMY color space, yielding the CMYK color space. In publishing houses, “four-color printing” refers to CMYK, while “three-color printing” refers to the CMY color model.

  9. Retrieved from https://people.eecs.berkeley.edu/~sequin/CS184/TOPICS/ColorSpaces/Color_0.html