Python Data Science Libraries 2 – Numpy Methodology

One of the most important and fundamental libraries in Python is undoubtedly NumPy. Continuing this series, I will move on from pandas to NumPy. Its core is built on a more robust numerical infrastructure than most other libraries, so it can carry out mathematical operations quickly and reliably. The name itself stands for Numerical Python (num + py), which hints at its strong mathematical focus and at how quickly and easily you can reach the results you need. It is one of the indispensable building blocks of Machine Learning and Deep Learning, working in the background of almost every operation: arrays, matrices, the operations between them, and the calculation of their outputs form the basis of most projects. We see this most often in image processing, so anyone who will work in that field needs solid NumPy knowledge.

 

This library offers mathematical structures suited to the models you will build, which also makes the descriptive analysis of your work more meaningful. As mentioned in the paragraph above, matrix operations are central to the mathematics involved; they run through everything you do, and NumPy makes layer-based processing much easier. When we actively process images, the most important layer operations become visible. Even though OpenCV carries much of the load during such operations, work that does not go through NumPy's array structure is not sustainable, because matrices and matrix products lie behind most of these computations. It is a genuinely user-friendly library given the possibilities of its functional structure, and practitioners regularly rank it among the most useful Python libraries, so its areas of use keep growing.

 

Contrary to popular belief, Deep Learning and Machine Learning are not just about writing long lines of code. Many people start writing code, or even building a career in this field, without knowing what is going on in the background. Behind it lies a substantial body of mathematics and statistics, and image processing is the best example: underneath it all is mathematics, those operations are matrix operations, and NumPy sits inside the libraries used. This is strong evidence that the library is active almost everywhere; few Python libraries are this multi-functional. Two libraries are needed in almost every area of this work: NumPy and pandas. They make it easier both to process data and to perform numerical operations on it, and they give us different perspectives on the data. This shows how important data processing and data analysis libraries are in Python.

 

 

I can clearly say that NumPy makes a great difference in shaping and preparing data. It has many useful functions such as reshape, array, exp, std, min, and sum. This is, at the most basic level, what distinguishes it from other libraries. For those who want more detail, I will leave links in the resources section; from the cheat sheet or NumPy's own website you can explore which features you can take advantage of and what conveniences it offers for numerical work.
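A minimal NumPy sketch of the functions mentioned above, just to make the paragraph concrete (the values are made up for illustration):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # create an array from a Python list
m = a.reshape(2, 3)                              # reshape the 1-D array into a 2x3 matrix

print(np.exp(a))        # element-wise exponential
print(a.std())          # standard deviation of the values
print(m.min())          # smallest element in the matrix
print(m.sum(axis=0))    # column-wise sums of the 2x3 matrix

# A matrix product, the kind of operation that sits behind most ML/DL layers:
print(m @ m.T)          # (2x3) @ (3x2) -> 2x2 result
```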

 

Thank you for reading and following my articles so far; I wish you a good day.

 

References:

-https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Numpy_Python_Cheat_Sheet.pdf

-https://numpy.org/

-https://cs231n.github.io/python-numpy-tutorial/

-https://www.w3schools.com/python/numpy_intro.asp

-https://globalaihub.com/python-veri-bilimi-kutuphaneleri-1-pandas-metodoloji/

-https://globalaihub.com/python-data-science-libraries-1-pandas-methodology/

Psychiatric Illness and Social Media

Can your posts on social media reveal whether you have a psychiatric illness? I saw a post about this on LinkedIn last week, with a link to the research behind it. It is fascinating research because, as we all know, whatever technological developments occur, our brain and psychology are affected somehow. Seeing how social media can help detect illness is especially interesting.

I would like to give a very short summary: data was collected from 223 participants, and their Facebook posts were used. Those posts included images and text, which were used to understand the participants' emotions, reactions, and moods. The main method was machine learning, and the algorithms could flag someone with a mental illness a year before their first hospitalization. If you want more detail, you can check the resources at the end of this article.

What Affects Our Mental Health?

It is a well-known fact that we all live in a society, and living in it shapes our lives. Society consists of many different kinds of people, and all of us are exposed to their behaviors somehow. For example, we have different cultures, education systems, languages, and so on worldwide, and those differences can create challenges for our psychology. I am not sure "challenge" is the right word, but I believe our psychology is influenced by what we see during the formative years of adolescence.

Furthermore, our family environment is really important for all of us, whether it is a nuclear or extended family. Our education begins with, and evolves within, the family. Generally, we copy our parents' behaviors and carry that system into our own lives. If your family tends toward violence, it is undeniable that you will somehow be affected by that environment.

What about Technology…

On the other hand, in this new era we cannot deny the pace of technological development. We are already deeply involved in social media such as Facebook, Instagram, Twitter, and so on. We all spend too much time with these tools, sometimes just for fun, but sometimes for fighting with or explaining ourselves to people we have never met.

How Does Social Media Help to Identify Illness?

According to statistics, the rate of social media usage is very high, and it is most common among young people. The rate of mental illness is also highest among young people compared to other age groups.

So social media can help because young people are both the heaviest users of social media and among those at the highest risk of developing mental illness. Based on their posts, it is not difficult to arrive at a result.

Let's think about it a bit more deeply. For example, when I feel nervous, I usually watch TED talks to relax and to see other people who have the same problems as me. Why? Because that way I feel I am not alone. In my case, instead of typing something aggressive on Facebook, I prefer to watch videos. However, some people, especially young people, might prefer to post messages on Facebook, and the language they use can reveal their state of mind at that moment.

As we know, the Facebook algorithm gathers information about us using our location, political views, religion, and so on, and identifies many things about us even when we think we are good at hiding them. Even when we cannot share our feelings with another person, social media somehow helps us not to feel lonely. At the same time, it can help uncover psychological issues if we have them.

If you would like to check another article, please have a look at it:

https://globalaihub.com/time-to-break-up-with-technology-is-it-possible/

 

Resources

https://www.nature.com/articles/s41537-020-00125-0?utm_medium=affiliate&utm_source=commission_junction&utm_campaign=3_nsn6445_deeplink_PID100095187&utm_content=deeplink

 

Application of CNN

Hello everyone. In my last blog post, I wanted to discuss a simple application of my favorite topic, CNNs. I chose the MNIST dataset, one of the easiest datasets for this subject.

The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems. The MNIST database contains 60,000 training images and 10,000 testing images. Half of the training set and half of the test set were taken from NIST’s training dataset, while the other half of the training set and the other half of the test set were taken from NIST’s testing dataset. The images contained within have a width of 28 pixels and a height of 28 pixels.

Figure : Sample images from MNIST test dataset

Application

The dataset is imported from the TensorFlow library. The Session function is used to run the code, and global_variables_initializer is called so that the variables are ready. The data should be fed to the model piece by piece, so batch_size is set to 128. A function called training_step performs the training: a for loop inside it runs the training iterations, and MNIST images are fetched with x_batch, y_batch = mnist.train.next_batch(batch_size) so that images are fed to the model in batches. feed_dict_train assigns the images and labels in the dataset to our placeholders. The optimization step and the loss value are evaluated in a single line so that we can watch the loss change while the model is optimized. An if statement prints the training accuracy and training loss every 100 iterations so we can observe how the training is going. A test_accuracy function is defined to see how the model predicts data it has not encountered before.

Two convolutional layers are used for the MNIST dataset. Experiments showed that accuracy increases as the number of convolutional layers, training steps, and filter sizes increase. The first convolutional layer has 16 filters of size 5×5, and the second has 32 filters of size 5×5. The layers are connected, with max pooling applied where needed. ReLU and softmax are used as activation functions, and Adam is used as the optimizer. A very small learning rate of 0.0005 is chosen, and the batch size is set to 128 to improve training. Training accuracy and training loss are printed every 100 iterations to check the model, and a test accuracy of 0.9922 is obtained after 10,000 iterations.
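For readers who want to see the pieces together, here is a minimal tf.keras sketch of the architecture described above. The original post used the Session-based TF1 API, so this is only a modern equivalent: the layer sizes, activations, optimizer, learning rate, and batch size follow the text, while everything else (variable names, epoch count) is my own assumption.

```python
import tensorflow as tf

# Load MNIST, scale pixels to [0, 1], and add a channel dimension for the conv layers.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Two convolutional layers (16 and 32 filters, 5x5), max pooling, ReLU and softmax.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 5, activation="relu", padding="same", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 5, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Adam with the small learning rate (0.0005) and batch size 128 mentioned above.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=5, validation_data=(x_test, y_test))
```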

Figure : Estimation mistakes made by the model

In the figure above, some examples that our model predicted incorrectly are shown. The model sometimes makes wrong predictions, which may be because the digit is faint or unclear. In the first example, the model predicts the digit 4 as a 2.

Figure :  Graph of Loss function

The loss graph is a visualization of the loss values observed during training. As shown in the figure, the loss decreases over time; our goal is to bring the loss value closer to zero. The loss graph also lets us judge whether the learning rate is appropriate. Looking at the figure, we can say our learning rate is good, because the decrease in the graph does not slow down.

In this blog I built an application on the MNIST dataset in Python using a CNN. Thank you to everyone who has followed my blogs closely until today, and goodbye until we meet again…

 

REFERENCES

  1. https://en.wikipedia.org/wiki/MNIST_database
  2. https://www.udemy.com/course/yapayzeka/learn/lecture/8976370?start=135#que

Hate Speech and AI: Issues in Detection

Hate speech is a form of expression that attacks someone, mostly based on their race, gender, ethnicity, or sexual orientation. The history of hate speech goes back a long way; however, with the expansion of the internet and social media it has reached its most accelerated form. Today, 41% of the American population have experienced some form of online harassment, as a Pew Research Center report suggests. The high correlation between suicide rates and verbal harassment in migrant groups also shows how crucial it is to detect and prevent the spread of hate speech. Additionally, as a recent example, after the mass shooting at the Pittsburgh synagogue it emerged that the murderer had been constantly posting hateful messages about Jews before the attack.

 

 

Retrieved from: https://www.kqed.org/news/11702239/why-its-so-hard-to-scrub-hate-speech-off-social-media

 

Furthermore, the Pew Research Center's report also suggests that 79% of Americans think that detecting hate speech and online harassment is the responsibility of online service providers. Hence, many providers are aware of the importance of the issue and work closely with AI engineers to solve it.

When it comes to the logic of hate speech detection, there are many complex points. First, much of the complexity comes from current AI technologies' limited ability to understand the context of human language: they fail to detect hate speech, or return false positives, when the context changes. For example, researchers from Carnegie Mellon University suggested that the toxicity of speech may differ with the race, gender, and ethnic characteristics of the people involved. To improve the quality of the data and of the detection, the researchers argue, it is important to identify the characteristics of the author while identifying hate speech and its toxicity rate; such identification can also reduce the bias current algorithms have.

Retrieved from: https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/pi_2017-07-11_online-harassment_0-01/

 

However, current AI technologies have difficulty detecting such characteristics. First, it is hard to identify authors' demographics and characteristics, since in most cases that information is not available online, which makes distinguishing hate speech harder. Second, even if the author clearly states such information, detection can still be difficult because of the cultural context: the dynamics of countries, and even of regions within countries, change constantly and are closely tied to culture and language. These differences and ongoing changes also shape the outcomes, so some systems may miss hate speech or flag false positives due to cultural differences that statistics cannot capture.

 

 

Language is one of the most complicated and most significant capacities of humankind. There are many different ways and contexts of communicating with language that even neuroscientists have not fully mapped. With artificial intelligence, however, scientists are one step closer to describing the patterns and mechanisms of language. In that sense hate speech detection, a crucially important subject in the age of the internet, has an advantage, since machine learning algorithms make it much easier to detect online harassment. Nevertheless, given the issues detection still faces, there is no way to take humans out of the detection loop with today's technology.

 

References 

https://bdtechtalks.com/2019/08/19/ai-hate-speech-detection-challenges/

https://deepsense.ai/artificial-intelligence-hate-speech/

https://www.kqed.org/news/11702239/why-its-so-hard-to-scrub-hate-speech-off-social-media

 

Python Data Science Libraries 1 – Pandas Methodology

I am turning the topics I have been working on into a series that I will share with you one by one, explaining the methodology and usage of almost all the libraries I actively work with. I'll start with pandas, which handles everything from reading the data to functional pre-processing operations. With this library we can easily carry out the pre-processing steps that are vital for data science, such as spotting missing data and removing it from the dataset. You can also inspect data types and handle the early stages of the numerical or categorical operations you will apply to them, which provides significant convenience before moving on. Each library in Python has its own specialties, but pandas is responsible for all the up-front modifications to the data that form the basis of the data science workflow. Data classification steps in pandas can be designed and activated quickly with a few functional lines of code, which is the most critical point of the data pre-processing stage, before the modeling steps.

 

 

We can store data as a DataFrame or a Series and perform operations on it. Because pandas performs every operation on data in a functional, easy, and fast way, it reduces the workload of data scientists: it handles the earliest and most difficult parts of the process, such as data pre-processing, so they can focus on the later steps of the job. By reading data prepared in different formats such as .csv, .xlsx, .json, or .txt, it brings data that has been entered or collected through data mining into Python for processing. Thanks to its DataFrame structure, pandas is more practical, and arguably more sustainable, than other libraries for making data functional and scalable. People who will work in this field should study the methodology of pandas, one of the basic and robust parts of the Python ecosystem, rather than jumping straight into writing code, because it lets you reassign values, rename columns, group variables, remove empty observations from the data, or fill empty observations in a specific way (mean, 0, or median imputation).
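A minimal sketch of the pre-processing steps mentioned above; the file name and the column names ("region", "amount") are hypothetical placeholders:

```python
import pandas as pd

# Read data from a file (placeholder name); .xlsx -> pd.read_excel, .json -> pd.read_json
df = pd.read_csv("sales.csv")

# Rename a column and count missing observations per column.
df = df.rename(columns={"amt": "amount"})
print(df.isna().sum())

# Either drop empty observations or fill them in a specific way (mean, 0, or median).
df_dropped = df.dropna()
df["amount"] = df["amount"].fillna(df["amount"].median())

# Group a variable and summarize it.
print(df.groupby("region")["amount"].mean())
```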

 

 

Data can hardly be processed or analyzed before the pandas library is known; to put it plainly, pandas can be called the heart of data science. Purpose-built tools such as apply(), drop(), iloc[], dtypes, and sort_values() are the features that make this library special. Even if these operations were not the original reason the library was created, it has become indispensable for them. It has a tremendous feature set and a comparatively simple syntax for the steps you need to take, and results coming out of loops can be collected and converted into a DataFrame or Series. Speeding up these processes is a great functional advantage when the project runs on a time-dependent schedule, which is usually the case. Looking at its other capabilities, it is one of the most efficient Python libraries, and its suitability for many areas is a great additional strength. Pandas ranks among the top data-processing libraries in polls of developers who use Python; you can find the Dataquest article I am referring to in the sources section.
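A short sketch of the functions named above, using a small hand-made DataFrame (the names and values are invented) so it runs on its own:

```python
import pandas as pd

df = pd.DataFrame({"name": ["Ada", "Berk", "Cem"],
                   "score": [82, 91, 74],
                   "city": ["Ankara", "Izmir", "Bursa"]})

print(df.dtypes)                                  # data type of each column
print(df.sort_values("score", ascending=False))   # order rows by score
print(df.iloc[0])                                 # positional access to the first row

df["passed"] = df["score"].apply(lambda s: s >= 80)   # element-wise function on a column
df = df.drop(columns=["city"])                        # remove a column we no longer need
print(df)
```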

 

 

The concept of "data science", which has been growing since around 2015, has brought the pandas library to the forefront, and this library, which had been developing quietly for years, has come into the light. After pandas I will cover NumPy and talk about numerical and matrix operations. In general, pandas is a library with high-level features for basic data analysis and data processing. If you let me know the topics you would like me to cover, I can plan more useful content. I hope this series of articles helps people who will work in this field. In the future I will add cheat-sheet style content that I prepare on GitHub to the bibliography; my GitHub account is in the resources section, so you can easily access it there.

 

 

References:

https://www.geeksforgeeks.org/python-pandas-dataframe/

https://medium.com/deep-learning-turkiye/adan-z-ye-pandas-tutoriali-ba%C5%9Flang%C4%B1%C3%A7-ve-orta-seviye-4edf0094e0d5#:~:text=Pandas%2C%20Python%20programlama%20dili%20i%C3%A7in,sonuca%20kolayca%20ula%C5%9Fmak%20i%C3%A7in%20kullan%C4%B1lmaktad%C4%B1r.

https://www.dataquest.io/blog/15-python-libraries-for-data-science/

https://github.com/tanersekmen/

https://www.edureka.co/blog/python-pandas-tutorial/

https://globalaihub.com/importance-of-data-quality-and-data-processing/

https://globalaihub.com/hareketli-ortalama-algoritmasiyla-al-sat-tavsiyeleri/

https://www.dataquest.io/course/pandas-fundamentals/

Data Analysis and Visualization with Python – 2

We continue making visualizations on the Iris dataset I used in my previous article. Two libraries are most frequently used for data visualization: matplotlib, which many people know, and seaborn. In this article we will visualize the data with the help of these libraries.

🔐 You can access the Colab notebook I use via the link.

Data Visualization Libraries

1. Seaborn: Statistical Data Visualization Library

Seaborn is a Python data visualization library based on Matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics. Visit the installation page to see how you can download the package and start using it.

Seaborn

We can say that the difference compared to Matplotlib is that it has more customization options.

Seaborn Samples

In the image I gave above, we see how we can visualize the data thanks to Seaborn. It is possible to display our data in many different graphics and forms.

2. Matplotlib: Visualization with Python

Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python.

Matplotlib Logo

Matplotlib was originally written by John D. Hunter and has had an active development community ever since.

Plots

Likewise, in the visual I have given here, there are visualization forms that can be made with Matplotlib.

🧷 Click on the link to view the plot, or graphics, in the Matplotlib library.

  • Line Plots: They show the relationship between two variables as lines.

Line plots

  • Scatter Plots: As the name suggests, the relationship between two variables is shown as scattered points (a short Matplotlib sketch of both plot types follows below).

Scatter Plots
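A minimal Matplotlib sketch of the two plot types above, using generated data so it runs on its own:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Line plot: the relationship between two variables drawn as a line.
ax1.plot(x, np.sin(x))
ax1.set_title("Line plot")

# Scatter plot: the same kind of relationship shown as scattered points.
ax2.scatter(x, np.sin(x) + np.random.normal(0, 0.2, size=x.size), s=15)
ax2.set_title("Scatter plot")

plt.show()
```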

✨ I wanted to use the seaborn library to measure the relationship between the variables in the Iris data set.

Uploading Seaborn

After including the Seaborn library in our project, we draw the graph by passing various parameters. Here we compare the relationship between the sepal_length and petal_width attributes of the dataframe. The cmap argument determines the color palette used in the chart and can be changed as desired, while the size argument controls the size of the points in the scatter chart.
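A possible version of that scatter plot; the exact calls in the notebook may differ, and here I route matplotlib's c/cmap/s arguments through the Iris data bundled with seaborn:

```python
import seaborn as sns
import matplotlib.pyplot as plt

iris = sns.load_dataset("iris")   # the same Iris data, bundled with seaborn

# Relationship between sepal_length and petal_width; color and point size
# carry extra information, and cmap sets the color palette.
plt.scatter(iris["sepal_length"], iris["petal_width"],
            c=iris["petal_length"], cmap="viridis",
            s=40 * iris["petal_length"])
plt.colorbar(label="petal_length")
plt.xlabel("sepal_length")
plt.ylabel("petal_width")
plt.show()
```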

Data Visualization

We have come to the end of another article. Stay healthy ✨

REFERENCES

  1. https://seaborn.pydata.org.
  2. https://matplotlib.org.
  3. Machine Learning Days | Merve Noyan | Data Visualization | Study Jams 2 |, https://www.youtube.com/watch?v=JL35pUrth4g&t=640s.
  4. Matplotlib, Wikipedia, The Free Encyclopedia, https://en.wikipedia.org/wiki/Matplotlib.
  5. https://jakevdp.github.io/PythonDataScienceHandbook/04.02-simple-scatter-plots.html.
  6. https://jakevdp.github.io/PythonDataScienceHandbook/04.01-simple-line-plots.html.
  7. https://matplotlib.org/3.1.1/tutorials/colors/colormaps.html.

 

 

 

 

 

 

 

Data Analysis and Visualization with Python

Hello, it's one more beautiful day! In this article we will continue coding Python together. So what are we doing today? We will talk about one of my favorite topics, data analysis. You can get your dataset from sites such as Kaggle or the UCI repository. I did some research on the Iris Flower Dataset and chose it for you.

The Iris flower dataset is a multivariate dataset introduced by the British statistician and biologist Ronald Fisher in his 1936 article on the use of multiple measurements in taxonomic problems. It is sometimes called the Anderson Iris dataset because Edgar Anderson collected the data to measure the morphological variation of Iris flowers of three related species. The dataset consists of 50 samples from each of three Iris species (Iris setosa, Iris virginica, and Iris versicolor).

Four properties were extracted from each sample:

    1. The length of the sepals in centimeters
    2. The width of the sepals in centimeters
    3. The length of the petals in centimeters
    4. The width of the petals in centimeters

This dataset has become a typical test case for many statistical classification techniques in machine learning, such as support vector machines.

Iris dataset

The visual you see above is included in the notebook I created in Colab and shows examples from the dataset. You can access the notebook via the Colab link at the end of the article. It is already one of the most frequently and fundamentally used datasets in the data science literature.

STEPS

✨ The necessary libraries must be imported in Colab, and then the path of the dataset in the folder must be specified. Then you can print the df variable to see the contents of the dataset, or use the df.head( ) command to access the first five rows.

Importing the dataset and libraries

Examining the Dataset

✨ If you wish, let's run the df.head( ) command and see what kind of output we get.

The head Command
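A minimal version of that step; the file name is a placeholder for wherever the CSV sits in your Colab folder:

```python
import pandas as pd

# Path to the Iris CSV inside the working folder (placeholder name).
df = pd.read_csv("IRIS.csv")

# First 5 rows: sepal_length, sepal_width, petal_length, petal_width, species
print(df.head())
```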

✨ Above we see the values of the features in the dataset. Variables like sepal_length and petal_width are numerical variables, while the flower type referred to as species is a categorical variable. First of all, it is useful to know which type each variable falls into.

⚠️ If we want to predict the categorical variable, namely the flower type, from the numerical variables (the features from sepal_length to petal_width), this is a classification problem.

Descriptive Statistics

✨ Descriptive statistics are printed with pandas' describe method; if you want to follow along, you can consult the official pandas documentation. This shows how many values each feature contains, which also makes missing data visible, along with the standard deviation, mean, minimum, and maximum of each feature.

Describe Method

For example, in this data the sepal_length feature has a count of 150 rows in total (displayed as 150.000000 in the describe output), and the standard deviation of its values is approximately 0.83.

⏳ The 25% and 75% values are known as quartiles. By examining these values, the data can be analyzed.

✨ To get information about the data set, df.info( ) command should be run.

According to this information, we see that there is no row with an empty value. We also see that the numerical features have the float type.

✨ The df.isna( ) command checks whether there is missing data (NaN, Not a Number) in the dataset. A row with missing data would show 'True'; however, as we saw above, we have no missing data.

NaN Any

✨ The df.isna( ).any( ) command returns True for a column if the dataset contains even one missing value in it.

Not a Number Value
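Those inspection steps, gathered in one short sketch (continuing from the df loaded above):

```python
# Descriptive statistics: count, mean, std, min, quartiles (25% / 75%), max.
print(df.describe())

# Column types and non-null counts; the numerical features appear as float64.
df.info()

# Missing-value checks: per-cell flags, then a per-column True/False summary.
print(df.isna().head())
print(df.isna().any())
```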

🖇 NOTE: Click on the link for the Colab notebook I mentioned above.

In the second article of the series I will touch on the finer points of data analysis and on visualization. Stay healthy ✨

REFERENCES

  1. https://pandas.pydata.org/pandas-docs/stable/index.html.
  2. https://www.kaggle.com/arshid/iris-flower-dataset.
  3. Machine Learning Days | Merve Noyan | Data Visualization | Study Jams 2 |, https://www.youtube.com/watch?v=JL35pUrth4g.
  4. https://www.kaggle.com/peterchang77/exploratory-data-analysis.

 

Time to Break up with Technology – Is it possible?

Perhaps it is time to break up with technology; I would be really happy if there were any chance of that. 🙂 What I have noticed is that my screen time is increasing day by day. Unfortunately, it is not only because of work; it is also because, due to the pandemic, we are all stuck at home: watching movies on Netflix, videos on YouTube, playing video games, checking social media, and, on weekdays, working on the computer.

There has to be more to life than this. Actually, there was before the pandemic; for example, at least we were all working at the office and knew when our duties were done. However, even when we now finish our daily duties, we again need to turn to technology to find something to enjoy. How sad!

According to Jean Twenge, a psychology professor at San Diego State University, “there’s lots of great things to do online, but moderation is often the best rule for life, and it’s no different when it comes to screens.”

If you want you can have a look at this article:

https://globalaihub.com/during-covid-technological-innovations/

What problems can technology cause?

I think we all know what kinds of problems can occur, but I would like to touch on some of them again with examples from my daily life.

Although I know the causes and consequences, I cannot stop myself from checking my phone before sleeping. I also watch some series on Netflix, and if they are particularly exciting or emotional, they affect the quality of my sleep. When I wake up in the morning I feel tired and need more sleep, even if I slept more than 7 hours. As a result, I sometimes have trouble focusing on the productive tasks I need to work on.

What is the solution?

Well, it is difficult to give a list of how to protect ourselves, but there is a lot of research about it. Based on that research:

  1. We need to define what kinds of screen time make us unhappy and affect us. As I mentioned, in my case it could be watching emotional series. For you, it could be reading certain articles, or spending time on Facebook or other social media platforms such as Twitter, Instagram, etc.
  2. The most important part is a no-phone zone, because according to Dr. Twenge, people who keep phones in their bedrooms sleep more poorly. I have been hearing this since my childhood, but the question I ask myself is: do I apply it in my life? Unfortunately, my answer is no. The blue light from screens can affect our eyes and also trick our brains into staying engaged with what we read. In order to create no-phone zones, we need to agree with ourselves about when and where: for example, no phone during dinner and lunchtime, no phone in the room while sleeping, no phone while spending time with family, and so on.
  3. If the phone is with us, we need to turn off all notifications except a few necessary ones. In my case, all WhatsApp group and almost all app notifications are off, and my phone is in silent mode without vibration. At least it helps me stay away from my phone a bit.
  4. The other thing is physical activity, such as exercise and sports. Cleaning a room can help too 🙂 It is important because physical activity definitely increases our dopamine levels, which helps us take pleasure in what we do.

Some Statistics

I would like to give you some statistics:

Social Media Usage Statistics

  • According to a study in 2017, there are about 210 million people addicted to the internet and social media worldwide.
  • American smartphone users, on average, launch social media apps 17 times a day. On the other hand, countries like Thailand, Argentina, Malaysia, and Mexico, open social media apps more than 40 times daily.
  • Young, single women are the most addicted to social media among all market segments.
  • 52% of US adults get their news on social media.

Statistics of Effects of Social Media Addiction Facts

  • People who use Facebook more than once per hour are more likely to experience conflicts with their partners.
  • 21% of the same age group feel restless when they’re unable to check messages on social media.
  • Between 11%-43% of social media users in the US feel bad when their posts receive only a few likes.

Resources

https://comparecamp.com/technology-addiction-statistics/#:~:text=According%20to%20a%20study%20in,the%20average%20American%20internet%20user.

https://www.nytimes.com/2020/11/25/technology/personaltech/digital-detox.html

 

 

 

 

Effects of Activation Functions

In deep learning, activation functions play a key role and have a direct impact on success rates. The main reason is that, before the success values are reached in the output layers of a neural network, the activation function is what lets us see how that success changes as the chosen coefficients and weights change. Functions are generally either linear or nonlinear in structure; which one suits a model for tasks such as clustering or regression can be found by experiment, or you can consult the links I have left in the resources section. Each activation function has its own formula, and we must build the corresponding code carefully. The formula of a neuron's operation consists of weights and a bias value, so statistical knowledge is one of the most important points in these processes. Even if writing the code seems to be the critical part, what really matters is knowing what you are doing: knowledge of mathematics and statistics cannot be ignored, and both play an important role in every part of data science, deep learning, and machine learning.

 

 

As you can see in the formula above, the additional error parameter known as beta is actually the bias. The bias structure is one of the most important concepts taught in statistics courses, and in the neural networks we build it is extremely valuable and cannot be ignored. The selection of activation functions matters greatly for the result, both at the input and at the output side of the network. The behavior of these functions, which contribute to learning, varies with the inputs, the parameters, and the coefficients. The image below shows what is treated as the inputs and outputs of the activation functions; I will leave a link at the end for those who want to access the code. Success criteria vary from one activation function to another. Softmax is mostly used in the output layer because it is more meaningful and more successful there. The two most commonly known names are softmax and sigmoid; most people starting a career in this field hear about these two first, and data scientists working on neural networks usually experiment with ReLU as an initial step.

 

 

Activation functions behave differently along the x and y axes depending on the success parameters. The goal is for the success rate to approach its peak as the data increases along the y-axis; achieving this depends on the parameter values, the tuning of the coefficients, and the activation function selected during training. Through back-propagation and forward-propagation the coefficients are re-determined and kept at an optimal level, which has an enormous place throughout neural networks. These operations are entirely mathematical: you need to know derivatives, and if you work in this field the important thing is not writing code but knowing exactly what you are doing. As I show at the bottom, the backward pass is always a matter of taking derivatives; neural networks are built on a mathematical background. After these steps, we can judge the activation functions by finding the one that is most suitable at the output. In this way we can easily find the optimal function for the model and see how it is used on a project basis, and the resulting success varies accordingly.

 

 

In the last part I will list the activation functions and briefly describe what they do. Examples include the Step, Linear, Sigmoid, Hyperbolic Tangent, ReLU, Leaky ReLU, Swish, and Softmax functions; a short NumPy sketch of several of them follows the list.

Step Function: Performs binary classification using a threshold value.

Linear Function: Produces a range of activation values, but its derivative is constant.

Sigmoid Function: A function known by almost everyone; it gives output in the range [0, 1].

Hyperbolic Tangent Function: A nonlinear function that outputs values in the range [-1, 1].

ReLU Function: Essentially a nonlinear function; it outputs 0 for negative inputs and passes positive values through unchanged, so its range is [0, +∞).

Leaky ReLU Function: Its distinctive feature is that, instead of flattening to 0, it gives negative inputs a small slope near zero, preserving the gradients that die in ReLU's negative region.

Swish Function: Produces the product of the input and the sigmoid of the input as its output.

Softmax Function: Used for multi-class classification problems; it produces outputs in [0, 1] that represent the probability that a given input belongs to each class.
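A minimal NumPy sketch of some of these functions, matching the definitions above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))        # output in [0, 1]

def tanh(x):
    return np.tanh(x)                       # output in [-1, 1]

def relu(x):
    return np.maximum(0.0, x)               # 0 for negatives, identity for positives

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)    # small slope keeps negative-side gradients alive

def swish(x):
    return x * sigmoid(x)                   # input times its sigmoid

def softmax(x):
    e = np.exp(x - np.max(x))               # subtract the max for numerical stability
    return e / e.sum()                      # probabilities that sum to 1

z = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(z), leaky_relu(z), softmax(z), sep="\n")
```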

 

I have listed the sources of the images and definitions I used in the references section. If you liked my article, I would appreciate your feedback.

 

References :

-https://www.derinogrenme.com/2018/06/28/geri-yayilim-algoritmasina-matematiksel-yaklasim/

-https://medium.com/smlr-center/sinir-a%C4%9Flar%C4%B1-ve-derin-%C3%B6%C4%9Frenme-vi-hesaplama-grafi%C4%9Fi-1caf57ec03f9

-https://ayyucekizrak.medium.com/derin-%C3%B6%C4%9Frenme-i%C3%A7in-aktivasyon-fonksiyonlar%C4%B1n%C4%B1n-kar%C5%9F%C4%B1la%C5%9Ft%C4%B1r%C4%B1lmas%C4%B1-cee17fd1d9cd

-http://buyukveri.firat.edu.tr/2018/04/17/derin-sinir-aglari-icin-aktivasyon-fonksiyonlari/

-https://www.aliozcan.org/ai/aktivasyon-fonksiyonlari/

-https://globalaihub.com/degisken-secimi-hakkinda-temel-bilgiler/

-https://globalaihub.com/basic-information-about-feature-selection/


Interactive IPython and Shell Commands

One of the annoyances you will encounter when working with the standard Python interpreter is the need to switch between multiple windows to access Python tools and system command-line tools. IPython bridges this gap and gives you a syntax for executing shell commands directly from the IPython terminal [1]. We will continue with the terminal commands together. The command lines I will write now work only on a Unix-like system such as Linux or Mac OS X.

While doing research on IPython, I came across the following in an article:

IPython is a programming tool with the Python kernel but with some advantages over Python. One of the features that makes this tool superior is that it has its own graphical interface and a pleasant development environment [2].

We will deal with this later. For now I'll offer a quick introduction for beginners working in the shell. Let's continue with our first example.

For example, here we can see the directories and files contained within a user's system. First, the command that allows us to print text in the terminal is echo.

Shell Commands

✳️ Echo: With this command, just like the print function in Python, we print data to the screen in the terminal. As we can see, it prints the text in quotation marks on the screen.

Echo command

✳️ Pwd (Print Working Directory): Prints the working directory, as the name implies.

PWD Command

✳️ Ls: Lists the contents contained in the working directory.

LS Command

✳️ Cd: I assume many of you know this command; it is one I use very often. With the cd command you can navigate between directories. For example, in the following image we move to the Documents folder.

Cd Command

✳️ Mkdir: Of course, it is also possible to create a directory while you are in the terminal! Create a sample folder using the mkdir command. I then continued by moving to the parent directory; to switch to the parent directory, just use the cd .. command.

Mkdir Command

Quick Introduction to IPython

IPython is a growing project with increasingly language-independent components. From IPython 4.0 onwards, the language-independent parts of the project (the notebook format, message protocol, qtconsole, notebook web application, and so on) have moved to new projects under the name Jupyter. IPython itself is focused on interactive Python, part of which is providing a Python kernel for Jupyter. So we'll try the commands we've been using in the shell on Jupyter. First, we start Jupyter Notebook.

Jupyter

After creating a Python 3 notebook in Jupyter, we can try the commands we want. For example, I listed my directory and its contents and printed a text on the screen.

Ipython Commands
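A short sketch of what those cells look like: in IPython and Jupyter, a leading ! passes the rest of the line to the system shell (a Unix-like system here, as noted above). The folder name is just an example.

```python
# Run inside an IPython terminal or a Jupyter code cell.
!echo "Hello from the shell"   # print text, like Python's print
!pwd                           # print the working directory
!ls                            # list the contents of the working directory
!mkdir sample_folder           # create a directory (name is just an example)
!ls                            # the new folder now appears in the listing

# Shell output can even be captured back into a Python variable:
contents = !ls
print(contents)
```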

Partial List Of Debug Commands

There are many more available commands for interactive debugging than we list here; the table below contains a description of some of the more common and useful ones:

Debugging List

When you need help, you can use these debug commands directly in IPython. I have shown a few of them in the images below so that the necessary commands are easy to reach.

REFERENCES

  1. Jake VanderPlas, Python Data Science Handbook, Essential Tools For Working with Data.
  2. IPython, https://mustafaakca.com/ipython/.
  3. IPython Interactive Computing, Official Website, https://ipython.org.
  4. Introduction the Shell, https://datacarpentry.org/shell-genomics/01-introduction/index.html.
  5. Info World, https://www.infoworld.com/article/3193969/10-unix-commands-every-mac-and-linux-user-should-know.html.

Support Vector Machines Part 1

Hello everyone. Image classification is among the most common uses of artificial intelligence. There are many ways to classify images, but in this blog I want to talk about support vector machines.

In machine learning, support vector machines are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Since the algorithm does not require any information about the joint distribution of the data, support vector machines are distribution-free learning algorithms. A Support Vector Machine (SVM) can be used for both classification and regression challenges, but it is mostly used for classification problems.

How to solve the classification problem with SVM?

In this algorithm, we plot each data item as a point in n-dimensional space. Next, we classify by finding the hyperplane that separates the two classes best. The separating line is positioned between the two classes so that it passes as far as possible from the nearest elements of each. It is a nonparametric classifier. SVMs can classify both linear and nonlinear data, but generally try to separate the data linearly.

SVMs apply a classification strategy that uses a margin-based geometric criterion instead of a purely statistical one. In other words, SVMs do not need estimates of the classes' statistical distributions to carry out the classification task; they define the classification model using the concept of margin maximization.

In the SVM literature, a predictor variable is called an attribute, and a transformed attribute used to describe the hyperplane is called a feature. The task of choosing the most appropriate representation is known as feature selection, and a set of features describing a case is called a vector.

Thus, the purpose of SVM modeling is to find the optimal hyperplane separating the vector sets, with the cases of one category of the target variable on one side of the plane and the cases of the other category on the other side.

Classification with SVM

The mathematical algorithms of SVM were originally designed for the classification problem of two-class linear data and were later generalized to multi-class and nonlinear data. The working principle of SVM is based on finding the most appropriate decision function that can separate the two classes, in other words, defining the hyperplane that separates the two classes from each other in the most suitable way (Vapnik, 1995; Vapnik, 2000). In recent years intensive studies have been carried out on the use of SVMs in remote sensing, where they have been applied successfully in many areas (Foody et al., 2004; Melgani et al., 2004; Pal et al., 2005; Kavzoglu et al., 2009). To determine the optimal hyperplane, two hyperplanes parallel to it, which define its boundaries, must be determined. The points that lie on these boundary hyperplanes are called support vectors.

How to Identify the Correct Hyper Plane?

It is quite easy to detect the correct hyperplane with packages such as R and Python, but we can also find it manually with simple methods. Let's consider a few simple examples.

Here we have three different hyperplanes: a, b, and c. Now let's identify the correct hyperplane to classify the stars and the circles. Hyperplane b is chosen because it correctly separates the stars and circles in this graph.

If all of our hyperplanes separate classes well, how can we detect the correct hyperplane?

Here, maximizing the distance between the hyperplane and the nearest data points of either class helps us decide on the correct hyperplane. This distance is called the margin.

We can see that the margin of hyperplane C is larger than that of both A and B. Hence, we choose hyperplane C as the correct one.

SVM for linearly inseparable data

In many problems, such as the classification of satellite images, it is not possible to separate the data linearly. In this case, the problem caused by some of the training data falling on the wrong side of the optimal hyperplane is solved by introducing a positive slack variable. The balance between maximizing the margin and minimizing misclassification errors is controlled by a regularization parameter C that takes positive values (0 < C < ∞) (Cortes et al., 1995). In this way the data can be separated and a hyperplane between the classes determined. Support vector machines can also perform nonlinear transformations with the help of a kernel function, allowing the data to be separated linearly in a higher-dimensional space.

For a classification task performed with support vector machines (SVM), it is essential to choose the kernel function to be used and the optimal parameters of that function. The kernel functions most commonly used in the literature are the polynomial, radial basis function (RBF), PUK, and normalized polynomial kernels.
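A small scikit-learn sketch of that choice, trying two of the kernels named above on the Iris data; the parameter values here are illustrative only:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# The regularization parameter C balances a wide margin against misclassification errors;
# the kernel lifts the data so it can be separated linearly in a higher dimension.
for kernel in ("poly", "rbf"):
    clf = SVC(kernel=kernel, C=1.0, gamma="scale")
    clf.fit(X_train, y_train)
    print(kernel, "test accuracy:", clf.score(X_test, y_test))
```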

SVMs are used for tasks such as disease recognition in medicine, consumer credit scoring in banking, and face recognition in artificial intelligence. In the next blog I will try to talk about their applications in software packages. Goodbye until we meet again…

REFERENCES

  1. https://dergipark.org.tr/en/download/article-file/65371
  2. https://www.analyticsvidhya.com/blog/2017/09/understaing-support-vector-machine-example-code/
  3. http://nek.istanbul.edu.tr:4444/ekos/TEZ/43447.pdf
  4. https://www.harita.gov.tr/images/dergi/makaleler/144_7.pdf
  5. https://www.slideshare.net/oguzhantas/destek-vektr-makineleri-support-vector-machine
  6. https://tez.yok.gov.tr/UlusalTezMerkezi/tezSorguSonucYeni.jsp#top2
  7. https://medium.com/@k.ulgen90/makine-%C3%B6%C4%9Frenimi-b%C3%B6l%C3%BCm-4-destek-vekt%C3%B6r-makineleri-2f8010824054
  8. https://www.kdnuggets.com/2016/07/support-vector-machines-simple-explanation.html

 


A Quick Start to Keras and TensorFlow

Keras is a deep learning library designed in the Python language. If you have worked on a deep learning project or are familiar with this area, you have definitely encountered Keras. It contains many options that allow you to create deep learning models and provides an environment for training your data.

Keras was originally developed to allow researchers to conduct faster trials.

Indeed, Keras makes data training and pre-processing as fast as possible. If you want to get to know Keras better, you can access its documentation via this link.

Prominent Advantages of Keras

🔹Allows you to perform operations on both the CPU and GPU.

🔹It contains predefined modules for convolutional and recurrent networks.

Keras is a deep learning API written in Python that runs on top of machine learning platforms such as TensorFlow and Theano.

🔹Keras supports all versions starting with Python 2.7.

Keras, Tensorflow, Theano and CNTK

Keras is a library that offers structures for building high-level deep learning models. In this article we will look at the backend engines that we use in many of our projects. These engines run in the background; below, we include the use of TensorFlow.

Keras Upload

Activation Function

🔹 We can select and apply the libraries we want to use as shown below. There are three backend implementations that we use: TensorFlow, Theano, and the Microsoft Cognitive Toolkit (CNTK).

Uploading Library

The platforms you see below are the ones we encounter most in deep learning. As a footnote, I recommend GPU-based work when using TensorFlow; in terms of performance, you will find that GPU usage gives you faster results.

In summary, Keras works in harmony with these three libraries; the backend engine can be swapped among them without changing your code. Let's take a closer look at TensorFlow, which we can use together with Keras.

TensorFlow

➡️ First, check the versions of Python and pip installed for the project you are going to work on.

Version Control

➡️ I am continuing with my Mask R-CNN project, on which I am actively working. You can also create any project, or build a segmentation project like mine. If you want to follow the same project, you can access the list of required libraries by clicking the link.

Collecting Requirements

If you want, you can also install these libraries one by one, but to be quick I install them from a requirements.txt file.

➡️ Let's get back to Keras and TensorFlow without losing sight of our goal; we can meet in another article for my Mask R-CNN project. Now let's make a quick start with TensorFlow: we import it into our project and print the version we are using.

TensorFlow

➡️ As you can see in the output, I am using TensorFlow version 2.3.1. As I said, you can use it CPU- or GPU-based.

Output Version
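That step in code form, as a minimal check:

```python
import tensorflow as tf

# Print the installed TensorFlow version (2.3.1 in the output shown above).
print(tf.__version__)
```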

➡️ When pre-processing data with TensorFlow, we can continue by importing the keras.preprocessing module. It appears inactive because I am not calling any of its methods yet; once we write the method we will use, its color will change automatically.

Tensorflow Preprocessing

➡️ As an example, we can perform pre-processing with TensorFlow as follows. We build training and validation sets from our data, and with the validation_split argument 20% of it is set aside as validation data.
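A sketch of that kind of split, based on the tf.keras.preprocessing utilities available in TensorFlow 2.3; the directory path is a placeholder for wherever your text files live:

```python
import tensorflow as tf

# Build training and validation datasets from a folder of text files (path is a placeholder).
# validation_split=0.2 holds back 20% of the files; subset/seed pick the same 20% each time.
train_ds = tf.keras.preprocessing.text_dataset_from_directory(
    "aclImdb/train", batch_size=32,
    validation_split=0.2, subset="training", seed=42)

val_ds = tf.keras.preprocessing.text_dataset_from_directory(
    "aclImdb/train", batch_size=32,
    validation_split=0.2, subset="validation", seed=42)
```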

In this way, we have made a fast start to Keras and TensorFlow with you. I hope to see you in my next post. Stay healthy ✨

REFERENCES

  1. https://keras.io/about/.
  2. Wikipedia, The free encyclopedia, https://en.wikipedia.org/wiki/Keras.
  3. https://keras.rstudio.com/articles/backend.html.
  4. Francois Chollet, Deep Learning with Python, Publishing Buzdagi.
  5. https://www.tensorflow.org.
  6. https://www.tensorflow.org/tutorials/keras/text_classification.

 

 

Basic Information About Feature Selection

Machine learning, deep learning, and artificial intelligence, which we now encounter in every part of our lives, are fields where everyone is working and where predictions are measured by a success score. In business processes, machine learning has critical importance. The data you hold, or that the company collects itself, arrives at the Feature Engineering phase, is carefully examined from many angles, prepared into its final form, and handed over to the Data Scientist, who can then make inferences for the firm by making sense of it. If the resulting product or service is tested with customers and meets the necessary success criteria, its performance can be made sustainable. One of the most important steps here is the scalability of the product and how quickly it can be adapted to business processes. Another is to obtain from the dataset the importance levels of the features (for example via correlation), make sense of them, and have the feature engineer determine them before the modeling phase. We can think of feature engineers as an additional force that accelerates and eases the data scientist's work.

 

 

When looking for a job, we may frequently encounter "Feature Engineer" postings. The critical information we learn from the data is obtained during the feature selection process in the data preparation phase. Feature selection methods are intended to reduce the number of input variables to those believed to be most useful for a model predicting the target feature. If they are chosen sensibly during data pre-processing, feature selection steps greatly ease the workload; as I mentioned, there is a dedicated role for this. Feature selection affects how well the data performs in modeling and directly affects the success of the values to be predicted. For this reason, the most important part of the journey from raw data to product is the practitioner's decision about which features to choose; if that goes well, the product comes to life in a short time. Making statistical inferences from the data is as important as determining, through algorithms, which features matter and how much. Statistics should play a role throughout data science processes.

 

 

There are also feature selection methods based on statistical filters, and the appropriate measure differs with the type of feature. Unfortunately, many people working in this field do not care enough about statistical significance; among some people working on data science and artificial intelligence, writing code is seen as the essence of the work. A dataset contains categorical and numerical variables, and these are further divided: numerical features are integer or float, while categorical features are nominal, ordinal, or boolean. You can find this summarized in the image below. These variable types are literally vital for feature selection. During evaluation, the choice can be made together with a statistician, and the analysis of the selected features should rest on a solid basis. One of the most necessary skills for people in this field is the ability to interpret and analyze well; that is how they can present the data they prepare as products whose foundations match the logic.

 

 

There is almost no single exact method. Feature selection for each dataset has to be evaluated with good analysis, because the appropriate operations can vary per feature: one dataset may contain mostly integer or float values, while another you work on may be largely boolean. Therefore, feature selection methods may differ for each dataset; what matters is to adapt quickly, understand what the dataset offers, and produce solutions accordingly, so that the decisions taken along the way stay on a healthy footing. Categorical variables can be assessed with methods such as the chi-square test, which is powerful and can push efficiency higher. Throughout product or service development, feature selection is the most important step contributing to the success criteria of a model.
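A small scikit-learn sketch of the chi-square filter mentioned above, run on the Iris data because its features are non-negative (a requirement of the chi-square test); the choice of k is illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

X, y = load_iris(return_X_y=True)           # 4 numerical features, 3 classes

# Score each feature against the target with the chi-square test
# and keep only the k most informative ones.
selector = SelectKBest(score_func=chi2, k=2)
X_selected = selector.fit_transform(X, y)

print("chi2 scores:", selector.scores_)
print("kept feature indices:", selector.get_support(indices=True))
print("reduced shape:", X_selected.shape)   # (150, 2)
```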

 

References:

https://globalaihub.com/basic-statistics-information-series-2/

https://globalaihub.com/temel-istatistik-tanimlari-ve-aciklamalari/

https://machinelearningmastery.com/feature-selection-with-real-and-categorical-data/#:~:text=Feature%20selection%20is%20the%20process,the%20performance%20of%20the%20model.

https://www.istmer.com/regresyon-analizi-ve-degisken-secimi/

https://towardsdatascience.com/the-5-feature-selection-algorithms-every-data-scientist-need-to-know-3a6b566efd2

Relationship between Video Games and Brain

Do you think there is a relationship between video games and the brain?

I know many people who play video games, and some of them are really addicted. For those people it does not matter when or where; they can play almost any time. For example, one of my cousins, who is a year older than me, has been playing video games since his childhood, and what I can say is that he improved his English thanks to the games. 🙂

I have many memories of him trying to convince me to play, but I am just not that into it. Oh, but I did play arcade games for a long time as a kid, and one of my favorite platform games was "Super Mario".

For sure, video games and platforms have developed over the years as technology has advanced. In fact, many countries are eager to develop new and powerful video games because this is a huge market. Let's look at some countries by game revenues:

  1. China and revenues (USD): $40,854M
  2. The United States and revenues (USD): $36,921M
  3. Japan and revenues (USD): $18,683M
  4. South Korea and revenues (USD): $6,564M
  5. Germany and revenues (USD): $5,965M

Source: https://newzoo.com/insights/rankings/top-10-countries-by-game-revenues/

Well, a question arises: how do video games affect humans?

Our brains and video games

As far as we have heard and seen in media channels, newspapers and articles, video games have an effect on our health and behavior. I cannot say whether the impact is entirely positive or negative, but what I can say is that gaming is a really popular activity in today's world.

According to research, our brain performance and structure can be changed by video games. For example, video games can have a positive impact on memory. How so? The details are complex, but the idea is that memory benefits from rich and complex information, and 3-D games provide plenty of it, which helps to boost memory.

Secondly, video games affect our attention, because they require different types of attention, such as selective attention. To accomplish a task in a game there are many steps to pass, and success depends on how strong your memory is and whether your attention is good enough to catch the details.

On the other hand, I have mostly thought that video games have a negative impact on our behavior because, as far as I can see, some games have violent content. If you are exposed to violence almost every day or for most of your time, you may become angrier, more sensitive, and more prone to violence. Also, if you become addicted to games, you are likely to drift away from your social life. In the end, all of this depends on what kinds of games you prefer, so it is not right to generalize across all possibilities.

One more thing I would like to add is that video games can increase the efficiency of brain regions related to visual skills. I had never thought about it, but it makes sense: while playing, there are many visuals, and those visuals are important to remember in order to accomplish the task, so gamers have to pay close attention to them.

Summary

Video games have two sides, and both keep changing as new developments enter our lives. So, what do all these brain changes mean? "We focused on how the brain reacts to video game exposure, but these effects do not always translate to real-life changes," says Palaus.

Resources

https://www.sciencedaily.com/releases/2017/06/170622103824.htm

https://www.medicalnewstoday.com/articles/318345#Video-games-boost-memory

Article Review: Multi-Category Classification with CNN

Classification of Multi-Category Images Using Deep Learning: A Convolutional Neural Network Model

In this post, I review the article 'Classifying multi-category images using Deep Learning: A Convolutional Neural Network Model', presented in India in 2017 by Ardhendu Bandhu and Sanjiban Sekhar Roy. An image classification model using a convolutional neural network is presented with TensorFlow, a popular open-source library for machine learning and deep neural networks. A multi-category image dataset is considered for classification. A traditional back-propagation neural network has an input layer, hidden layers, and an output layer; a convolutional neural network additionally has convolutional layers and max-pooling layers. The proposed classifier is trained to learn the decision boundary of the image dataset. Real-world data is mostly unlabeled and unstructured; this unstructured data can be images, audio, and text. Useful information cannot easily be derived from shallow neural networks, meaning those with few hidden layers. A deep neural network-based CNN classifier is therefore proposed, which has many hidden layers and can extract meaningful information from images.

Keywords: Image, Classification, Convolutional Neural Network, TensorFlow, Deep Neural Network.

First of all, let's examine what classification is so that we can understand the steps laid out in the project. Image classification refers to assigning images to one of several classes. To classify an image dataset into multiple classes or categories, there must be a good correspondence between the dataset and the classes.

In this article;

  1. Convolutional Neural Network (CNN) based on deep learning is proposed to classify images.
  2. The proposed model achieves high accuracy after 10,000 training iterations on the dataset of 20,000 dog and cat images, which takes about 300 minutes to train and validate.

In this project, a convolutional neural network consisting of a convolutional layer, a ReLU activation, a pooling layer, and a fully connected layer is used. A convolutional neural network is the natural choice when it comes to image recognition with deep learning.

Convolutional Neural Network

For classification purposes, the convolutional network has the architecture [INPUT-CONV-RELU-POOL-FC]:

INPUT- Raw pixel values as images.

CONV- Computes the outputs of neurons connected to local regions of the input.

RELU- It applies the activation function.

POOL- Performs downsampling.

FC- Calculates the class score.
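As a small illustration of this pipeline, the sketch below builds a minimal Keras model with the same INPUT-CONV-RELU-POOL-FC structure. The 64x64 input size and the two classes follow the review above, while the filter count and other settings are my own illustrative assumptions rather than the authors' exact configuration.

```python
# A minimal Keras sketch of the INPUT-CONV-RELU-POOL-FC pipeline.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu",      # CONV + RELU
                  input_shape=(64, 64, 3)),           # INPUT: raw 64x64 pixels
    layers.MaxPooling2D(pool_size=(2, 2)),            # POOL: downsampling
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),            # FC: scores for cat/dog
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```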

In this publication, a multi-level deep learning system for image classification is designed and implemented. In particular, the proposed framework:

1) shows how to find local regions of the image that are discriminative and informative for the classification problem, and

2) given these regions, shows how to train the image-level classifier.

METHODS

A dataset of 20,000 dog and cat images from Kaggle was used; the Kaggle database has a total of 25,000 images available. The images are divided into a training set of 12,000 images and a test set of 8,000 images. Splitting the data into training and test sets enables cross-validation and provides a check against errors; cross-validation verifies whether the proposed classifier labels cat and dog images correctly.

The following experimental setup is done on Spyder, a scientific Python development environment.

  1. First of all, the necessary libraries, SciPy, NumPy, and TensorFlow, are imported.
  2. A start time, a training path, and a test path are defined as constants. Image height and width are set to 64 pixels. The image dataset containing 20,000 images is then loaded; because of its size, it is resized and iterated over, which takes approximately 5-10 minutes.
  3. This data is fed into TensorFlow. In TensorFlow, all data is passed between operations in a computation graph, so features and labels must be in matrix (tensor) form.
  4. TensorFlow prediction: to feed data into the model, we run the session with an additional argument that maps each placeholder to the corresponding data. Because data in TensorFlow is held in variables, these must be initialized before a graph can be run in a session; to change a variable's value, we define an update operation that can then be run.
  5. After the variables are initialized, we print the initial value of the state variable and run the update operation. Next comes the activation function; its choice has a great influence on the behavior of the network. The activation function of a node maps the node's set of inputs to its output.
  6. Next, we define the hyperparameters we will need to train our model; more complex neural networks involve more hyperparameters. One of them is the learning rate.
    Another hyperparameter is the number of iterations for which we train on our data. The next one is the batch size, which sets how many images are sent for classification at one time.
  7. Finally, we start the TensorFlow session, which is what makes TensorFlow actually run, because without starting a session nothing is executed. After that, our model starts the training process (a toy sketch of these steps follows below).
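The steps above describe a TensorFlow 1.x-style workflow: placeholders, variables, an update operation, hyperparameters, and a session. The toy sketch below, written against the tf.compat.v1 API, only mirrors that flow; the names and values are illustrative, not the authors' original code.

```python
# A toy TF 1.x-style sketch of steps 3-7: placeholders, a variable with an
# update op, hyperparameters, and a session that actually runs the graph.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Hyperparameters (step 6) - illustrative values only.
learning_rate = 0.001
num_iterations = 10000
batch_size = 100

# Placeholders for image features and labels (step 3).
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 64, 64, 3], name="images")
y = tf.compat.v1.placeholder(tf.float32, shape=[None, 2], name="labels")

# A state variable and an update operation (steps 4-5).
state = tf.compat.v1.Variable(0, name="state")
update = tf.compat.v1.assign(state, state + 1)

# Nothing executes until a session is started (step 7).
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(state))    # initial value of the state variable: 0
    print(sess.run(update))   # value after running the update op: 1
```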

RESULTS

🖇 As the deep architecture, a convolutional neural network was used, implemented with the TensorFlow deep learning library. The experiments below were run in Spyder, a scientific Python development environment; 20,000 images were used and the batch size was fixed at 100.

🖇 It is essential to examine model accuracy on test data rather than training data. The convolutional neural network was run with the CPU version of TensorFlow on a Windows 10 machine with 8 GB of RAM.

📌 As the number of iterations increases, training accuracy increases, but so does training time. Table 1 of the paper lists the iteration counts together with the accuracy obtained.

Number of iterations vs. accuracy: the curve becomes almost flat after several thousand iterations. Different batch size values can lead to different results; we set the batch size to 100 for the images.

✨ In this article, a high accuracy rate was obtained in classifying images with the proposed method. The CNN was implemented using TensorFlow and the classifier performed well in terms of accuracy. However, a CPU-based system was used, so the experiment took extra training time; with a GPU-based system, training time would be shortened. The CNN model can be applied to complex image classification problems in medical imaging and other fields.

REFERENCES

  1. https://www.researchgate.net/figure/Artificial-neural-network-architecture-ANN-i-h-1-h-2-h-n-o_fig1_321259051.
  2. Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. science, 313(5786), 504- 507.
  3. Yoshua Bengio, “Learning Deep Architectures for AI”, Dept. IRO, Universite de Montreal C.P. 6128, Montreal, Qc, H3C 3J7, Canada, Technical Report 1312.
  4. Yann LeCun, Yoshua Bengio & Geoffrey Hinton, “Deep learning “, NATURE |vol 521 | 28 May 2015
  5. Yicong Zhou and Yantao Wei, “Learning Hierarchical Spectral–Spatial Features for Hyperspectral Image Classification”, IEEE Transactions on cybernetics, Vol. 46, No.7, July 2016.

Past, Present and Future of Artificial Intelligence (AI)

We talked with Ergi Şener about how artificial intelligence (AI) has changed from the past to the present, its impact today and its effect on the future.

Ergi Şener, named one of the 20 Turkish people to follow in the field of technology (*), received a BS in Microelectronics Engineering in 2005 and a double MS in Computer Science & Management in 2007 from Sabancı University. He is pursuing a PhD in Management.

Ergi began his career as the co-founder and business development director of New Tone Technology Solutions in 2007 with the partnership of Sabancı University’s Venture Program. After the successful exit of this company, between 2009 and 2013, he worked as a CRM manager at Garanti Payment Systems and as a senior product manager in the New Technology Business Division of Turkcell.

In 2013, he joined MasterCard as a business development and innovation manager for emerging markets and managed the SEE cluster. After his corporate career, Ergi acted as a serial entrepreneur and founded three companies in fintech, IoT and AI. He was also the managing director of a Dutch-based incubation center. He is currently the co-founder and CEO of a new-generation fintech, Payneer. During his career, among many other honors, Ergi received the "Big Data Analysis & Data Mining Innovation Award" in 2017 & 2018, the "Global Telecoms Business Innovation Award" in 2014, the "MasterCard Europe President's Award for Innovation" in 2013, the "Payment System of the Year Award" and the Turkcell CEO Award in 2012.

Ergi is an instructor at Sabancı University and Bahcesehir University, and a technology editor at hurriyet.com.tr. He is also an angel investor in Wastespresso and kadinlarinelinden.com.

 

Question 1: What do you think about the development of artificial intelligence from past to present? Since the corona pandemic outbreak, we have had a paradigm shift and everything has transformed very fast. What do you think about the development of artificial intelligence in the post-corona period?

Although the term AI suffers from too many definitions, we can simply define it as "trying to make computers think and act like humans". The more humanlike the desired outcome is, the more data and processing power are required. Since AI is one of the most prominent technology trends of today, we regularly find ourselves in conversations about when artificial intelligence will replace our jobs or which sectors will be disrupted by AI… Indeed, AI is a branch of science that is changing the world in many ways. It is constantly growing and evolving on a large scale that includes research, education, and technological development.


Designed by pikisuperstar / Freepik (ai)

Almost 70 years have passed since the father of AI, Alan Turing, laid the foundations of this discipline. From the very first studies, the major aim has been to make computers act like humans. Although some technology giants like Google or Amazon claim to have passed the Turing test, such progress is still many years away.

As we enter 2021, and to offer some relief against the pessimistic scenarios, it is fair to say that AI will not replace many jobs in the short term, but it will cause radical changes in business practices and processes. It will also transform many sectors and jobs. Therefore, it is very important to follow the developments in the field of AI closely and plan for its integration into our business. Danger bells will ring for those who do not care about the progress of AI or think it is too early.

To put it plainly, "today, AI is eating the world". A serious change is taking place with the development of AI in many different areas: from driverless vehicles to image processing, from natural language processing to optimization problems, from robotic systems to drones, and from speech recognition technologies and virtual assistants to process automation.

AI will be with us even more in the post-corona period. We can see this clearly if we analyze how AI has been used to fight corona, one of the major crises of humanity. A global epidemic like the coronavirus once again revealed the importance of technology, artificial intelligence and data science, and their effectiveness in addressing the epidemic and helping us return to regular life by getting rid of the virus faster.

AI was used to monitor and predict the spread of epidemics

The better we track the spread and effect of the virus, the better we can fight it. By analyzing virus-related news, social media posts, and government documents, artificial intelligence platforms can predict the spread of the epidemic. BlueDot, a Canadian start-up, uses artificial intelligence to track infectious disease risks, and its AI warned about the corona threat days before the World Health Organization or the disease control and warning centers. To do so, BlueDot first picked up articles published in China reporting 27 cases in Wuhan and added them to its warning system. Then, to find people who were likely to be infected and traveling, global airline ticketing data was used to determine the cities and countries that can be reached directly from Wuhan. The international destinations that BlueDot predicted would attract the most passengers from Wuhan were Bangkok, Hong Kong, Tokyo, Taipei, Phuket, Seoul and Singapore, which were indeed the first places the coronavirus appeared after Wuhan.

AI was used to diagnose the virus

Infervision, a Beijing-based start-up focused on AI, developed a system that allows the disease to be effectively detected and monitored. Thanks to this solution, which speeds up diagnosis, it is also possible to reduce the growing panic in hospitals. Infervision's software detects and interprets the symptoms of pneumonia caused by the virus. Chinese e-commerce giant Alibaba also implemented an AI-powered system that diagnoses the virus with 96% accuracy within seconds; it was developed by training on images and data from 5,000 confirmed coronavirus cases and is currently used in more than 100 hospitals in China. Both systems work by analyzing patients' chest CT (tomography) scans.

Color Coded Health Assessment Application

Despite its controversial technology and use of AI, China's advanced surveillance system uses SenseTime's facial recognition technology and temperature detection software to identify people who have a fever and are more likely to be infected. With the support of tech giants such as Alibaba and Ant Financial, the Chinese government also implemented a color-coded health rating system to help track millions of people returning to work after the rapidly spreading coronavirus outbreak. With this application, people are divided into three categories, green, yellow or red, according to their health condition, travel history, whether they have visited places where the virus is common, and their interactions with infected people. Based on the program's analysis, it is determined whether individuals in a given category will be quarantined or allowed to move about. Individuals given a green health code can enter public places, subways and office buildings; a QR code is sent to their mobile application and scanned by the authorities.

Delivery of medical devices by drones

One of the fastest and safest ways to deliver necessary medical devices during an epidemic is by drone. For this purpose, the Japanese company Terra Drone supports the transportation of medical devices and quarantine materials between disease control centers and hospitals in Xinchang with minimum risk. Drones can also be used to monitor public spaces, check compliance with quarantine practices, and perform thermal imaging.

Use of robots in sterilization, food and material supply

Since physical robots cannot catch viruses, they can be used to perform many routine tasks (cleaning, sterilization, food and medicine supply, etc.) and to reduce human contact. Robots from Denmark-based Blue Ocean Robotics use ultraviolet light to kill bacteria and viruses. In addition, Pudu Technology's robots, which are used to deliver food in hospitals, are deployed in 40 hospitals across China.

Use of chatbot to share information or answer questions

Citizens can get free health consultancy services from chatbots delivered through WeChat, China's popular social messaging application. The chatbots also share information about recent travel procedures and circulars.

Using AI in new drug development

Google's DeepMind division has used state-of-the-art AI algorithms and deep learning systems to understand the proteins of the virus, and published the findings to help develop therapies. The BenevolentAI company likewise uses AI systems to help treat the coronavirus; after the outbreak, it used its predictive capabilities to identify drugs already on the market that could be effective against the virus.

Using autonomous taxis

Autonomous vehicles have become one of the most popular AI use cases in recent years. We are still waiting for autonomous vehicles in traffic, but there is great progress from the automotive industry and big technology companies to make this dream a reality. During the epidemic, again in China, autonomous vehicles were used as taxis to reduce the spread of the virus.

All the applications above show that AI adoption has accelerated with corona, and we will see many more applications in many different sectors. It is also clear that companies that are advanced in AI and investing in it will be effective in many areas and will have the ability to disrupt many sectors. Apple is a good example with the new Apple Watch, which can help track corona symptoms and heart problems with its predictive analytics platforms.

Question 2: Turning to Turkey, what are we doing in artificial intelligence? Are we at a level that can compete with the world? Has artificial intelligence taken its place in our professional life across all sectors?

As with any popular technology, we should separate the truth from the "hype" when talking about AI. With the popularization of AI, many new trends in the tech world have started to be associated with AI, and AI has become sought after in everything (the increased investment in AI also has a direct and significant effect on this). Unfortunately, in Turkey, many people in the business world do not have a correct understanding of, or enough information about, AI. What can be accomplished with AI is truly vast: virtual digital assistants, chatbots, driverless vehicles, real-time translation services, AI-powered physical robots, and more. In real terms, AI has begun to deeply affect both our daily life and business processes, and in 2021 we will feel many concrete uses of it. In this context, it is obvious that AI will have a transformative effect on consumers, institutions and even government organizations around the world. In addition to the actual potential of artificial intelligence, how we manage such a profound technological revolution and its impact on our professional and personal lives should also be seriously discussed, and a strategic road map should be determined for Turkey within a bigger picture.


Designed by vectorpouch / Freepik (ai)

Today, AI has reached the stage of turning intelligence faster than a human's into money. The most common use case of AI in Turkey, as in many other countries, is chatbots that answer consumers' questions and direct them to the appropriate place or person. It is a common example of AI, and one most people have experienced personally. Other applications of AI span areas such as fraud detection, prediction, optimization, product recommendation, pricing forecasts and personalized marketing. We have seen pilot uses for all of these in different sectors, but the real question is whether we have produced and shared a genuine success story yet.

Especially when combined with a fast and robust infrastructure, AI provides the opportunity to access real-time data and reach every customer with personalized content at the right time and with the right message. Analytics and camera capabilities are on the agenda of every technology giant, but the real "untapped opportunity" lies in the fact that many companies do not yet have platforms or products ready that will be worthwhile in daily life.

As Turkey, we should also increase our focus and investment on AI. We actually have very valuable academics and experts working on AI; however, these professionals mainly work abroad. One of my friends is currently the director of the personalization platform at Netflix, one of the most advanced companies in terms of AI applications. Another friend of mine is in a managerial role at Google's autonomous vehicle company, Waymo. We also have great start-ups developing state-of-the-art AI use cases and many AI innovations, and great projects are carried out in universities by researchers and academics. But these initiatives should be supported in a structured manner with a clear strategy; so far, we have not seen concrete results from our AI efforts.

In this period, AI is likely to affect our lives more and more every day. It is important to strategically advance the work in this field in our country and to pursue this focus systematically.

Question 3: We know that the singularity, or technological singularity, is the hypothetical belief that in the future artificial intelligence will go beyond human intelligence and visibly change civilization and the nature of humanity. So, what do you think about the singularity? Can artificial intelligence go beyond human intelligence? Or will humans always be one step ahead, since artificial intelligence will develop at the same rate as humans?

As I mentioned, when the concept of AI first appeared 70 years ago, the major goal was to build systems at the same level as humans. Today, platforms claiming to pass the famous Turing test are increasing day by day (the Turing test refers to a situation where the answer to a question cannot be distinguished as coming from a human or a machine). Google mentioned at its recent events that the Google Assistant passed the Turing test; however, it is not proper to claim the test was passed with examples tried in an extremely limited setting. Given the uncertainty we are in, machines that can understand and connect with human reactions, natural languages and our world as well as the human brain have not yet been built.

On the other hand, according to a recent Stanford report, AI computational power is growing faster than traditional processor development: based on Stanford University's 2019 AI Index report, the speed of AI computation doubles every three months. This shows how fast AI is advancing, but there is still a long way to go.


Designed by iuriimotov / Freepik (ai)

I believe the singularity concept is exaggerated. Elon Musk's Neuralink initiative is significant as a first step toward augmented humans, but many factors will affect this vision. We still need time to see many of the most popular AI use cases, like autonomous vehicles and physical robots, in our daily lives. So the singularity can be considered something like a science-fiction concept, but we should also be aware that there is real progress on this issue.

Besides, leaders in the field of AI, including Elon Musk and Google DeepMind's Mustafa Suleyman, have signed a letter calling on the United Nations to ban lethal autonomous weapons, otherwise known as "killer robots." In their petition, the group states that the development of such technology would usher in a "third revolution in warfare" that could equal the invention of gunpowder and nuclear weapons. The letter was signed by the founders of 116 AI and robotics companies from 26 countries.

Musk has a history of expressing serious concerns about the negative potential of AI. I agree with him and believe that if we do not find ways to control the development of AI now, we will not be able to control it later, so there should be a global consensus for the sake of humanity. However, we should understand that AI will be one of the crucial factors affecting the competitiveness of countries; Russian president Putin has stated that whichever country leads the way in AI research will come to dominate global affairs. So it will not be easy to reach such a global consensus, and the race around AI may itself accelerate the crisis.

Thank you to Ergi Şener for the nice interview.

 

Designed by pikisuperstar / Freepik

Designed by vectorpouch / Freepik

Designed by iuriimotov / Freepik

 

Preprocessing with Image Processing Techniques

Many of the projects we aim for require image processing steps to be realized. In this article, we will walk through the pre-processing stages with Gaussian and mean filters, threshold filters, and the Canny edge detector. As a platform, you can work in Colab like me! This way, you can carry out your projects very quickly and without taking up local disk space.

Image processing techniques are methods for extracting various kinds of information by analyzing existing images. Depending on where they are used, they rely on different mathematical expressions, from the simplest algorithms to the most complex ones.

To use image processing methods, we will process real-world data previously captured with a camera. The operations are: reading the data through OpenCV, checking pixel values across color channels, eliminating the noise contained in the image, and applying existing filters.

It is best to prepare the data set we will use for our projects in advance. Images will be read from your dataset folder using the imread() method. Let's get to work by loading the necessary libraries for this process.

📍NOTE: After performing a preprocessing step, I usually use the imshow() function to check the image. But don't forget that since cv2.imshow does not work in Colab, we must import the cv2_imshow helper from google.colab.patches instead!
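Since the original code screenshots are not reproduced here, a minimal import block for this Colab setup might look like the following; the exact set of libraries is an assumption based on the steps described in this article.

```python
# Minimal Colab-friendly imports for the steps in this article.
# cv2_imshow comes from google.colab.patches because cv2.imshow
# does not work inside Colab notebooks.
import cv2
import numpy as np
from google.colab.patches import cv2_imshow
```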

The code described below obtains the image and lets you examine its pixel values.

After importing our libraries, we must create a class and copy the path of the data set we will use into the PATH variable, because our image processing techniques will work on the images contained in this folder.

As you can see in the imread function, I pass 0 next to the file name, since I want gray-level images. This way, even if the image contains color, it is loaded in grayscale. We can then print our first image to the screen with the imshow method.

When we pass 0 for our image, we obtain the grayscale image shown below. It is possible to print this image to the screen with the cv2_imshow(image) command. After this step, we can move on to the image processing steps.
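A minimal sketch of this step, assuming a hypothetical file name inside your dataset folder:

```python
# Read an image as grayscale (flag 0) and display it in Colab.
import cv2
from google.colab.patches import cv2_imshow

PATH = "dataset/rose.jpg"      # placeholder path to an image in your dataset
image = cv2.imread(PATH, 0)    # 0 -> load as a single-channel gray image
print(image.shape)             # (height, width): no color channel
cv2_imshow(image)
```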

Image Processing Steps

If you want to see the pixel values of your RGB or grayscale image, you can print them to the screen with the print command. This way, you can also check which channel you are working on. Since I use an RGB rose image here, the pixel values show the numbers below.

📌 RGB Color Channel: RGB is the most commonly used color space. In this color model, each color is represented by its red, green, and blue spectral components. The model is built on a Cartesian coordinate system.

The code required to convert our image to RGB and examine the pixel values is given as follows.
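The original code image is not reproduced here, so the following is a hedged sketch of that step; the file path is a placeholder.

```python
# Load the image in color, convert OpenCV's default BGR order to RGB,
# and inspect the pixel values.
import cv2
from google.colab.patches import cv2_imshow

bgr = cv2.imread("dataset/rose.jpg")          # OpenCV reads images as BGR
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)    # reorder channels to RGB
print(rgb.shape)                              # (height, width, 3)
print(rgb[0, 0])                              # R, G, B values of the first pixel
cv2_imshow(bgr)                               # cv2_imshow expects BGR order
```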

📌 HSV Color Channel: The name of the HSV space comes from the initials of hue, saturation and value (brightness). The HSV color space defines a color in terms of Hue, Saturation, and Value. While RGB represents a color as a mixture of primaries, HSV uses hue, saturation, and brightness: saturation determines the vividness of the color, while value refers to its brightness.

📌 CIE-LAB Color Channel: The CIE 1931 color spaces are the first defined quantitative links between the distribution of wavelengths in the visible electromagnetic spectrum and the colors physiologically perceived in human vision. The mathematical relationships that define these color spaces are essential tools for color management, which is important when dealing with color inks, illuminated displays, and recording devices such as digital cameras.

LAB Color Channel
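For completeness, here is a short sketch of the HSV and CIE-LAB conversions described above; the file path is again a placeholder.

```python
# Convert the same BGR image into the HSV and CIE-LAB color spaces.
import cv2

bgr = cv2.imread("dataset/rose.jpg")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # hue, saturation, value channels
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)   # lightness, a*, b* channels
print(hsv[0, 0], lab[0, 0])                  # compare the first pixel per space
```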

Elimination of Noise In The Image

Because images are real-world data coming from a camera, they often contain Gaussian noise caused by current fluctuations on the camera's sensor. Noisy images can lead to worse performance in edge detection, which we use for feature detection, so it is important to reduce this noise.

🖇 There are many methods in the literature for reducing noise. Today we will discuss two of them.

  1. Adaptive Threshold Gaussian
  2. Adaptive Threshold Mean
➡️ Adaptive Threshold Gaussian

Below is the Python code in which the adaptive Gaussian method is applied to our images. It is possible to reach the desired result by tuning the parameters of the adaptiveThreshold method.
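A hedged sketch of that call is below; the block size of 11 and the constant C of 2 are illustrative starting values, not settings taken from the original screenshots.

```python
# Adaptive Gaussian thresholding on a grayscale image.
import cv2
from google.colab.patches import cv2_imshow

gray = cv2.imread("dataset/rose.jpg", 0)       # adaptiveThreshold needs grayscale
gauss = cv2.adaptiveThreshold(gray, 255,
                              cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY,
                              11,              # neighborhood (block) size, must be odd
                              2)               # constant subtracted from the weighted mean
cv2_imshow(gauss)
```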

When the Gaussian and mean threshold filters, both frequently used in the literature, are applied to these images, it can be seen that they reach almost the same level of smoothing. These methods are the adaptive Gaussian filter and the adaptive mean filter, respectively.

➡️ Adaptive Threshold Mean

Adaptive thresholding is a method in which the threshold value is calculated for smaller regions, so different regions end up with different threshold values.

You will notice that there are only very minor differences between the Gaussian and mean filters. You can continue with whichever filter you prefer by changing the parameter values yourself.
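A matching sketch with the mean variant, where only the adaptive method argument changes:

```python
# Adaptive mean thresholding: same call, different adaptive method.
import cv2
from google.colab.patches import cv2_imshow

gray = cv2.imread("dataset/rose.jpg", 0)
mean = cv2.adaptiveThreshold(gray, 255,
                             cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY,
                             11, 2)
cv2_imshow(mean)
```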

➡️ Edge Detection

Edge detection is an important technique used to detect features. The Canny edge detection algorithm, one of the standard edge detection techniques, has been run on the images.

Canny Code
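Since the code screenshot is not reproduced here, a minimal Canny sketch follows; the two hysteresis thresholds (100 and 200) are common starting values, not values from the original experiment.

```python
# Canny edge detection on the grayscale image.
import cv2
from google.colab.patches import cv2_imshow

gray = cv2.imread("dataset/rose.jpg", 0)
edges = cv2.Canny(gray, 100, 200)   # lower and upper hysteresis thresholds
cv2_imshow(edges)
```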

Canny Image

REFERENCES

  1. Medium, Cerebro, Using Artificial Intelligence in Image Processing Techniques, April 2018.
  2. Wikipedia, The Free Encyclopedia, ‘Image Processing’, September 2020.
  3. C. Gonzalez, Rafael, E. Woods, Richard, Digital Image Processing, Palme Publishing, (Ankara, 2014).
  4. S. Singh and B. Singh. “Effects of noise on various edge detection techniques”. In: 2015 2nd International Conference on Computing for Sustainable Global Development (INDIACom). Mar. 2015, pp. 827–830.
  5. https://www.tutorialspoint.com/opencv/opencv_adaptive_threshold.htm.
  6. Ajay Kumar Boyat and Brijendra Kumar Joshi. “A Review Paper: Noise Models in Digital Image Processing”. In: CoRR abs/1505.03489 (2015). arXiv: 1505.03489. url: http:// arxiv.org/abs/1505.03489.
  7. T. Silva da Silva et al. “User-Centered Design and Agile Methods: A Systematic Review”. In: 2011 Agile Conference. Aug. 2011, pp. 77–86. doi: 10.1109/AGILE.2011.24.

 

Contour Extraction Using OpenCV

In image processing, a contour is a closed curve that joins all the continuous points sharing a given color or intensity. Contours represent the shapes of objects in an image, and contour detection is a useful technique for shape analysis and for object detection and recognition. When we do edge detection, we find the points where the color intensity changes significantly and turn those pixels on. Contours, on the other hand, are abstract collections of points and segments corresponding to the shapes of the objects in the image. As a result, we can manipulate contours in our programs: counting them, using them to categorize object shapes, cropping objects from an image (image segmentation), and much more.

Computer Vision

🖇 Contour detection is not the only algorithm for image segmentation; there are many others, such as state-of-the-art semantic segmentation, the Hough transform, and K-Means segmentation. For good accuracy, the pipeline we will follow to successfully detect contours in an image is:

  • Convert the image to a binary image; it is common practice for the input to be binary (the result of thresholding or edge detection).
  • Find the contours using the OpenCV findContours() function.
  • Draw these contours and show the image on the screen.

Apply Contour on Photoshop

Adobe PS
Before moving on to coding the contour extraction, I will first walk through a Photoshop example to build better intuition.
Contour extraction from a layer
As a first step, to reach the window you see above, right-click on any layer in Photoshop's Layers window and select Blending Options.
🔎 If the Layers window is not active, you must activate it from the Window menu at the top. The hotkey on Windows is F7.
It is possible to choose the contour color and opacity you want by selecting the Contour tab from the section on the left. Then the background is removed to make the contour that will appear in the image stand out.
People silhouette
After removing the background in the image you see here, I made a selection in yellow tones so that the object in the foreground is visible. Once the background is removed, the outer contour is applied to the image and the detection is more successful.
People contour

Contour Extraction with Python OpenCV

I use Google Colab as the platform and Python as the programming language. For those who regularly code in Python, it is a platform I can definitely recommend! Come on, let's start coding step by step.
📌 Let's import the libraries required for our project as follows.
Loading the required libraries
📌 As the second step, we get our image with the imread function.
Reading the image
📌 As you know, in the world of image processing our images come in BGR format. The BGR image must first be converted to RGB format and then converted to the grayscale color space.
Converting Color Spaces
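Because the code screenshots are not reproduced here, the following is a hedged sketch of these first steps; the input file name is a placeholder.

```python
# Steps 1-3: imports, reading the image, and converting BGR to RGB and grayscale.
import cv2
from google.colab.patches import cv2_imshow

image = cv2.imread("people.png")                  # placeholder input image
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)      # BGR -> RGB
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)    # thresholding will use this
```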
📌 As the fourth step, a binary threshold operation is performed by specifying a threshold value for the image. The mathematics the binary threshold function runs in the background is the following rule 👇
Formula: dst(x, y) = maxval if src(x, y) > thresh, otherwise 0
Binary threshold
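A minimal sketch of this thresholding step, assuming the same placeholder image; the threshold value of 127 is an illustrative choice.

```python
# Step 4: fixed binary threshold on the grayscale image.
import cv2
from google.colab.patches import cv2_imshow

gray = cv2.cvtColor(cv2.imread("people.png"), cv2.COLOR_BGR2GRAY)
ret, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)  # >127 -> 255, else 0
cv2_imshow(binary)
```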
As you may have noticed, the image the threshold is applied to is the gray-level image, not the RGB one; please pay attention at this stage. When you follow these steps in order, you will get the result below.
Background
📌 In this step, we will use the findContours function to find the contours in the image. The image on which the contours are determined is the binary image that we obtained by thresholding.
Find Contours
📌 We will use the drawContours function to draw these contours visually.
Draw Contours
🖇 The cv2.CHAIN_APPROX_SIMPLE parameter in this call removes all redundant points and compresses the contour, saving memory.
📌 Now we can print our contour-extracted image to the screen.
Imshow contours
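Putting the last steps together, here is a hedged sketch of finding, drawing and displaying the contours; the green color and line thickness are illustrative choices.

```python
# Steps 5-7: find the contours on the thresholded image, draw them on the
# original, and display the result.
import cv2
from google.colab.patches import cv2_imshow

image = cv2.imread("people.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# OpenCV 4.x returns (contours, hierarchy); CHAIN_APPROX_SIMPLE compresses
# straight segments down to their end points to save memory.
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE,
                                       cv2.CHAIN_APPROX_SIMPLE)
print("Number of contours found:", len(contours))

cv2.drawContours(image, contours, -1, (0, 255, 0), 2)   # -1 draws all contours
cv2_imshow(image)
```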

With this, we have completed our contour extraction. Hope to dive into other projects in another article… Stay healthy ✨

REFERENCES

  1. Contour Tracing, http://www.imageprocessingplace.com/downloads_V3/root_downloads/tutorials/contour_tracing_Abeer_George_Ghuneim/intro.html.
  2. Edge Contour Extraction, https://www.cse.unr.edu/~bebis/CS791E/Notes/EdgeContourExtraction.pdf, Pitas, section 5.5, Sonka et al., sections 5.2.4-5.2.5.
  3. Retrieved from https://www.thepythoncode.com/article/contour-detection-opencv-python.
  4. Retrieved from https://www.subpng.com/png-m7emk6/.
  5. OpenCV, https://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html.
  6. OpenCV, https://docs.opencv.org/master/d4/d73/tutorial_py_contours_begin.html.

TensorFlow

Hello everybody, in this blog I want to talk about one of the most widely used free, open-source deep learning libraries: TensorFlow. So why do we call it open source? Open source means that users can view and edit the software's code and stay informed about its development. With TensorFlow you can easily create models, build end-to-end machine learning pipelines with TensorFlow Extended (TFX), and train and deploy models in JavaScript environments with TensorFlow.js. You can also create complex topologies with features such as the Functional API and the Model Subclassing API.

What is TensorFlow?

TensorFlow was initially developed by the Google Brain team to conduct machine learning and deep neural network research, and in 2015 its code was made available to everyone. TensorFlow is a library for numerical computation using data flow graphs; the literal meaning of "tensor" is a geometric object in which multidimensional data can be represented.

As seen above, tensors are multidimensional arrays that allow you to represent higher-dimensional data. In deep learning, we deal with high-dimensional data sets, where the dimensions refer to the different properties found in the data set.
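As a tiny, hedged illustration of tensors as multidimensional arrays (my own example, not taken from the official tutorials):

```python
# Tensors of increasing rank in TensorFlow.
import tensorflow as tf

scalar = tf.constant(3.0)                        # rank 0
vector = tf.constant([1.0, 2.0, 3.0])            # rank 1
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # rank 2
print(scalar.shape, vector.shape, matrix.shape)
```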

Usage examples of TensorFlow

1) TensorFlow can be used efficiently in sound-based applications with artificial neural networks: voice recognition, voice search, sentiment analysis and flaw detection.

2) Further popular uses of TensorFlow are text-based applications such as sentiment analysis (CRM, social media), threat detection (social media, government) and fraud detection (insurance, finance). As an example, PayPal uses TensorFlow for fraud detection.

3) It can also be used in face recognition, image search, image classification, motion detection, machine vision and photo clustering, and in the automotive, aviation and healthcare industries. As an example, Airbnb uses TensorFlow to categorize images and improve the guest experience.

4) TensorFlow time series algorithms are used for analyzing time series data in order to extract meaningful statistics. As an example, Naver automatically classifies shopping product categories with TensorFlow.

5) TensorFlow neural networks also work on video data. This is mainly used for motion detection and real-time threat detection in gaming, security, airports and UX/UI fields. As an example, Airbus uses TensorFlow to extract information from satellite imagery and provide insights to customers.

Where can I learn TensorFlow?

You can join the course "Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning" on Coursera and "Intro to TensorFlow for Deep Learning" on Udacity for free. Tutorials for beginners and experts are available on TensorFlow's official site. There you can find the MNIST data set and other "Hello World" examples that I have also tried before.
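For reference, a minimal "Hello World"-style MNIST sketch in Keras might look like the following; this is my own simplified version, not the official tutorial code.

```python
# Train a tiny dense network on MNIST as a "Hello World" of TensorFlow/Keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0    # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
print("Test accuracy:", model.evaluate(x_test, y_test)[1])
```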

To sum up, we talked about the meaning of the word tensor, what TensorFlow is, where it is used and how you can learn it. As can be understood from this post, world-leading companies prefer TensorFlow for many tasks such as image classification, voice recognition and disease detection. Step into this magical world without wasting time! Hope to see you in our next blog…

REFERENCES

https://www.tensorflow.org/?hl=tr

https://www.biltektasarim.com/blog/acik-kaynak-kodu-nedir

https://devhunteryz.wordpress.com/2018/06/27/tensorflowun-temeli-mantigi/

https://tr.wikipedia.org/wiki/Tens%C3%B6r

http://devnot.com/2019/tensorflow-nedir-nasil-kullanilir/

https://www.exastax.com/deep-learning/top-five-use-cases-of-tensorflow/#:~:text=Voice%2FSound%20Recognition,Automotive%2C%20Security%20and%20UX%2FUI


The Story of Artificial Intelligence

The story of artificial intelligence dates back to antiquity. Classical philosophers who tried to explain human thinking as a mechanical process of manipulating symbols seeded the idea behind AI technology. Significantly, with the invention of programmable computers in the 1940s, scientists took this philosophy a step further and began to research whether it was possible to build an electronic brain that functions like the human one. The period of tremendous technological development accelerated by World War 2 lasted for the two decades after 1940 and was the most important era for the birth of AI.

During this period, important work was done on relating machine and human functions to each other, and cybernetics played an important role in it. According to the leader of the field, Norbert Wiener, the aim of cybernetics was to create a theory that could be used to understand the control and communication mechanisms of both animals and machines. Moreover, in 1943, Warren McCulloch and Walter Pitts created the first computational and mathematical model of the biological neuron. By analyzing models of neurons and their networks, they devised logical functions built from idealized artificial neurons. This invention was the foundation of today's neural networks.

Computing Machinery and Intelligence by Alan Turing.

Retrieved from: https://quantumcomputingtech.blogspot.com/2018/12/turing-computer-machinery-and.html

 

Well-known works questioning the possible intelligence of machines, such as Alan Turing's "Computing Machinery and Intelligence", appeared at the beginning of the 1950s. Turing addressed the question in his paper with what is now called the Turing Test, which suggested that if computers reach the point where they cannot be distinguished from humans in a conversation, then it can be said that they are thinking like humans. Even though the test has been much debated, it is known as the first serious philosophical claim about AI. The work of Alan Turing and John von Neumann also had a significant influence on AI's future. Although their work was not referred to as AI, it carried the main logic behind it: they laid out the decimal and binary logic of computers and showed that computers can be used universally to execute programs.

The term and the discipline of 'AI' were founded at the 1956 summer conference at Dartmouth College, in particular in a workshop organized during the conference. The participants of the workshop, including John McCarthy and Marvin Minsky, became the leaders of the AI discipline for the following years. They foresaw that a machine that thinks like a human could be developed before long, and they were funded for that vision. After this significant workshop, important works followed, such as programs for reasoning as search, natural language and micro-worlds; sophisticated new programs led computers to solve mathematical and geometrical problems and learn languages. These influential works increased optimism about AI's future. According to Marvin Minsky, one of the AI leaders of the era, artificial intelligence would be substantially solved within a single generation.

The future leaders of AI at the Dartmouth Summer Conference, 1956.

Retrieved from: https://medium.com/cantors-paradise/the-birthplace-of-ai-9ab7d4e5fb00

 

However, the optimism did not last long. Criticism of AI grew quickly, especially at the beginning of the 1970s, and mainly concentrated on the relatively slow progress the field was making in an era of over-anticipation. Eventually, government funding for research was cut, and a serious slowdown in AI advancement began, known as the 'First AI Winter'. After this period of slow progress, in the 1980s the advancement of expert systems (computers that hold the knowledge of a human expert about a subject) and the invention of microprocessors accelerated AI again, and funding was directed once more, especially toward knowledge-based expert systems. However, even though such projects were significant in the history of artificial intelligence, the 'Second AI Winter' started in the late 1980s due to similar criticisms and irrational hype. Funding was cut again in the late 1980s and early 1990s, which were financially difficult times for AI researchers. For instance, the number of AI-related articles in the New York Times started to decrease in 1987 and reached its lowest point in 1995.

Deep Blue vs. Garry Kasparov

Retrieved from: https://www.theguardian.com/technology/2017/dec/07/alphazero-google-deepmind-ai-beats-champion-program-teaching-itself-to-play-four-hours

 

Even in such difficult times, developments in the field continued. With the help of Moore's Law, computers gained much higher capacity while working faster than ever. The application of other concepts from computer science and mathematics, such as probability, decision theory and Bayesian networks, also had a strong influence on AI's development. Eventually, in 1997, IBM's Deep Blue defeated chess grandmaster Garry Kasparov; especially for regaining public enthusiasm, this victory was an important milestone in AI history.
After that, as is well known, the advancements of the 2000s and especially the 2010s were exponential, helped by tremendous amounts of data and much faster processing systems. In 2020, loads of new articles about AI research and development are published every day. The ups and downs described in this history of AI are what created the advanced technology we have today.

References 
https://www.coe.int/en/web/artificial-intelligence/history-of-ai
http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
https://towardsdatascience.com/history-of-the-second-ai-winter-406f18789d45
https://www.techopedia.com/what-is-the-ai-winter-and-how-did-it-affect-ai-research/7/33404#:~:text=The%20%E2%80%9Cwinter%E2%80%9D%20has%20been%20blamed,expensive%20Lisp%20machines%20in%20performance.