Hate Speech and AI: Issues in Detection

Hate speech is a form of expression that attacks a person or group, most often on the basis of race, gender, ethnicity, or sexual orientation. Hate speech itself is nothing new; however, with the expansion of the internet and social media, it has spread faster and more widely than ever. According to a Pew Research Center report, 41% of Americans have experienced some form of online harassment. The high correlation between suicide rates and verbal harassment among migrant groups also shows how crucial it is to detect and prevent the spread of hate speech. As a recent example, after the mass shooting at the Pittsburgh synagogue, it emerged that the shooter had been constantly posting hateful messages about Jews before the attack.

 

 

Retrieved from: https://www.kqed.org/news/11702239/why-its-so-hard-to-scrub-hate-speech-off-social-media

 

Furthermore, the same Pew Research Center report suggests that 79% of Americans think that detecting hate speech and online harassment is the responsibility of online service providers. Many providers are indeed aware of the importance of the issue and work closely with AI engineers to address it.

When it comes to how hate speech detection actually works, there are many complications. The first source of complexity is current AI technologies' limited ability to understand the context of human language: existing systems miss hate speech, or flag false positives, when the context changes. Researchers from Carnegie Mellon University, for instance, have suggested that the toxicity of a piece of speech may differ depending on the race, gender, and ethnicity of the person who wrote it. According to the researchers, identifying characteristics of the author alongside the text therefore improves both the quality of the data and the accuracy of toxicity detection, and it can also reduce the bias that current algorithms exhibit.

Retrieved from: https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/pi_2017-07-11_online-harassment_0-01/

 

However, current AI technologies have difficulty detecting such characteristics. First, it is hard to identify authors' demographics, since in most cases this information is simply not available online, which makes distinguishing hate speech harder. Second, even when an author clearly states such information, detection can still be complicated by the cultural nuances of the given context. The dynamics of countries, and even of regions within countries, keep changing and are closely tied to culture and language. These differences and ongoing shifts strongly affect the outcome of the detection process: some systems miss hate speech, or produce false positives, because of cultural differences that do not show up in the statistics.
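As a rough illustration of the idea discussed above, the sketch below combines the text of a message with a coarse author/context flag in a single classifier. It is a minimal toy example assuming scikit-learn and pandas are available; the tiny dataset, the "author_context" feature, and the model choice are invented for illustration and are not the researchers' or any platform's actual system.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy data: message text, a *hypothetical* coarse author/context flag,
# and a binary "hateful" label. All values are invented for illustration.
data = pd.DataFrame({
    "text": [
        "I hope you have a great day",
        "people like you should not exist",
        "thanks for sharing this article",
        "go back to where you came from",
    ],
    "author_context": ["general", "general", "in-group", "general"],
    "hateful": [0, 1, 0, 1],
})

# Text n-grams and the one-hot encoded context flag feed one linear model.
features = ColumnTransformer([
    ("text", TfidfVectorizer(ngram_range=(1, 2)), "text"),
    ("context", OneHotEncoder(handle_unknown="ignore"), ["author_context"]),
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(data[["text", "author_context"]], data["hateful"])

# Classify a new message together with its (hypothetical) context flag.
print(model.predict(pd.DataFrame({
    "text": ["you people are the problem"],
    "author_context": ["general"],
})))
```

Even in this toy form, the sketch shows both halves of the argument: the text features alone carry most of the signal, but the context flag only helps if that information exists and is reliable, which, as noted above, it usually is not.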

 

 

Language is one of the most complicated and most significant capacities of humankind. There are so many different ways and contexts of communicating through language that even neuroscientists have not fully mapped them yet. With artificial intelligence, however, scientists are a step closer to describing the patterns and mechanisms of language. In that sense, hate speech detection, a crucially important task in the age of the internet, also benefits: machine learning algorithms make it far easier to detect online harassment at scale. Nevertheless, given the issues in the detection process described above, with today's technology there is no way to take humans out of the loop.

 

References 

https://bdtechtalks.com/2019/08/19/ai-hate-speech-detection-challenges/

https://deepsense.ai/artificial-intelligence-hate-speech/

https://www.kqed.org/news/11702239/why-its-so-hard-to-scrub-hate-speech-off-social-media

 

The Story of Artificial Intelligence

The story of artificial intelligence dates back to antiquity. Classical philosophers who tried to explain human thinking as a mechanical process of manipulating symbols planted the seed of the idea that later became AI. With the invention of programmable computers in the 1940s, scientists took this line of thought a step further and began to investigate whether it is possible to build an electronic brain that functions like the human one. The period of tremendous technological development that accelerated with World War II lasted roughly twenty years after 1940, and it was the most important era for the birth of AI.

During this period, important work was done on relating the functions of machines and humans, and cybernetics played a central role in it. According to the field's leading figure, Norbert Wiener, the aim of cybernetics was to create a theory for understanding the control and communication mechanisms of both animals and machines. Moreover, in 1943, Warren McCulloch and Walter Pitts created the first computational and mathematical model of the biological neuron. By analyzing networks of these idealized artificial neurons, they showed that such networks can compute logical functions. This model became the foundation of today's neural networks.
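To make the model concrete, here is a minimal sketch of a McCulloch-Pitts style threshold unit in Python: binary inputs, fixed weights, and a threshold are enough to compute logical functions such as AND and OR. The weights and thresholds below are the standard textbook choices, shown purely for illustration.

```python
# Minimal sketch of a McCulloch-Pitts style threshold neuron.
# Inputs and outputs are binary (0/1); the unit "fires" when the
# weighted sum of its inputs reaches a fixed threshold.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Logical AND: both inputs must be active for the unit to fire.
logical_and = lambda x1, x2: mcculloch_pitts_neuron([x1, x2], [1, 1], threshold=2)

# Logical OR: a single active input is enough.
logical_or = lambda x1, x2: mcculloch_pitts_neuron([x1, x2], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", logical_and(a, b), "OR:", logical_or(a, b))
```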

Computing Machinery and Intelligence by Alan Turing.

Retrieved from: https://quantumcomputingtech.blogspot.com/2018/12/turing-computer-machinery-and.html

 

Well-known works that question whether a machine could be intelligent, such as Alan Turing's "Computing Machinery and Intelligence", appeared at the beginning of the 1950s. Turing addressed the question in his paper with what is now called the Turing Test: if a computer cannot be distinguished from a human in a conversation, then it can be said to be thinking like a human. Even though the test has been widely debated, it is regarded as the first serious philosophical claim about AI. The work of Alan Turing and John von Neumann also had a significant influence on AI's future. Although their work was not referred to as AI, it carried the main logic behind it: they laid out the decimal and binary logic of computers and showed that computers can be used universally to execute programs.

The term and the discipline of 'artificial intelligence' were founded at the Dartmouth College summer conference of 1956, specifically in a workshop organized during it. The participants of the workshop, including John McCarthy and Marvin Minsky, became the leaders of the AI discipline in the following years. They foresaw that a machine that thinks like a human could be developed before long, and they were funded for that vision. After this landmark workshop, important work followed, such as programs for reasoning as search, natural language, and micro-worlds, and sophisticated new programs led computers to solve mathematical and geometrical problems and to learn languages. These influential results increased the optimism about AI's future: according to Marvin Minsky, one of the leaders of the era, the problem of artificial intelligence would be substantially solved within a single generation.

The future leaders of AI at the Dartmouth Summer Conference, 1956.

Retrieved from: https://medium.com/cantors-paradise/the-birthplace-of-ai-9ab7d4e5fb00

 

However, the optimism did not last long. Criticism of AI grew rapidly, especially at the beginning of the 1970s, and it concentrated mainly on how slowly the field was progressing relative to the enormous expectations. Eventually, government funding for research was cut, and a serious slowdown in AI advancement began, now known as the 'First AI Winter'. After this period of slow progress, the advancement of expert systems in the 1980s (computers holding the knowledge of a human expert in a particular subject) and the arrival of microprocessors accelerated AI again, and funding was once more directed to the field, especially to knowledge-based expert systems. However, even though these projects were significant in the history of artificial intelligence, the 'Second AI Winter' started in the late 1980s due to similar criticism and irrational hype. Funding was cut again in the late 1980s and early 1990s, which made those years financially difficult for AI researchers. For instance, the number of New York Times articles related to AI started to decline in 1987 and reached its lowest point in 1995.

Deep Blue vs. Garry Kasparov

Retrieved from: https://www.theguardian.com/technology/2017/dec/07/alphazero-google-deepmind-ai-beats-champion-program-teaching-itself-to-play-four-hours

 

Even in these difficult times, developments in the field continued. As Moore's Law played out, computers gained far more capacity and speed than ever before. Concepts from elsewhere in computer science and mathematics, such as probability, decision theory, and Bayesian networks, also had a strong influence on AI's development. Eventually, in 1997, IBM's chess computer Deep Blue defeated grandmaster Garry Kasparov. The victory was an important milestone in AI history, not least because it rebuilt public anticipation.
After that, as is well known, advances in the 2000s and especially the 2010s were exponential, driven by tremendous amounts of data and much faster processing systems. In 2020, new articles about AI research and development are published every day. The ups and downs described in this history are what created the advanced technology that AI is today.

References 
https://www.coe.int/en/web/artificial-intelligence/history-of-ai
http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
https://towardsdatascience.com/history-of-the-second-ai-winter-406f18789d45
https://www.techopedia.com/what-is-the-ai-winter-and-how-did-it-affect-ai-research/7/33404

The Future of Environmental Sustainability: AI and Greenhouse Emissions

Climate change continues to be one of the most important issues humankind faces today. One of its main drivers is the greenhouse effect: put simply, the increase in Earth's temperature caused by emissions of gases such as carbon dioxide, nitrous oxide, methane, and ozone, collectively the greenhouse gases. The emission of these gases, and the resulting strengthening of the greenhouse effect, is strongly linked to human activity. Environmental sustainability studies suggest, however, that AI-based approaches could make a difference here. PwC forecasts that using AI for environmental applications could lower worldwide greenhouse gas emissions by 4% by 2030. That percentage corresponds to 2.4 Gt, roughly the combined annual greenhouse gas emissions of Australia, Canada, and Japan. The expectation is that quantities of this scale will lead many institutions to build their sustainability models with the help of AI.
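As a back-of-envelope check on how the 4% and 2.4 Gt figures relate, the few lines below simply assume the percentage applies to a projected 2030 global emissions baseline; the baseline value is implied by the two numbers, not taken from the PwC report itself.

```python
# Back-of-envelope check of the PwC figures: if a 4% reduction corresponds
# to 2.4 Gt of greenhouse gases, the implied 2030 baseline follows directly.
reduction_share = 0.04   # 4% forecast reduction
reduction_gt = 2.4       # gigatonnes of greenhouse gases avoided

implied_baseline_gt = reduction_gt / reduction_share
print(f"Implied global emissions baseline: {implied_baseline_gt:.0f} Gt")  # ~60 Gt
```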
Given AI's ability to process data more efficiently than ever before, the report suggests this ability can be used to analyze data linked to environmental issues. Such analyses support environmental sustainability by identifying patterns and making forecasts. As a current example, IBM developed AI systems that process extensive weather-model data to make weather forecasts more reliable; the company states that the system increased forecast accuracy by 30%. In terms of sustainability, that accuracy may help large institutions plan their energy use and minimize greenhouse gas emissions.
Moreover, AI can help reduce greenhouse gas emissions through its applications in transportation. Autonomous vehicles look promising here, since fuel-efficient driving systems burn less fossil fuel. Furthermore, if AI-based systems are used to calculate efficient routes for car-sharing services, autonomous vehicles may change passenger habits: with efficient routing, many passengers may prefer car-sharing or public transportation over individual car use. Autonomous vehicles would also reduce traffic congestion, since the vehicles would be aware of each other; less congestion and coordinated driving would in turn make each vehicle's energy use more efficient. Shifts of this kind could have a significant effect on environmental sustainability, since transportation accounts for a remarkable share of global emissions.
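As a simple illustration of what "calculating efficient routes" can mean in practice, the sketch below runs a standard shortest-path (Dijkstra) search over a small, made-up road graph; real routing systems layer live traffic, demand, and vehicle data on top of this basic idea.

```python
import heapq

# A tiny, invented road network: each edge carries a travel cost (e.g. minutes).
roads = {
    "depot":    {"midtown": 7, "ringroad": 4},
    "midtown":  {"airport": 6},
    "ringroad": {"midtown": 2, "airport": 9},
    "airport":  {},
}

def cheapest_route(graph, start, goal):
    """Dijkstra's shortest-path search; returns (total_cost, route)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + step, nxt, route + [nxt]))
    return float("inf"), []

print(cheapest_route(roads, "depot", "airport"))  # (12, ['depot', 'ringroad', 'midtown', 'airport'])
```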
On the industrial side, AI can also be used to manage companies' emissions. The electric utility Xcel Energy's practice with AI is one example. Previously, after producing electricity by burning coal, Xcel's plant released greenhouse gases such as nitrous oxide into the atmosphere, like many other plants in the sector. To limit these emissions, the company upgraded the smokestacks of its Texas plant with artificial neural networks. The upgrade helped the plant run more efficiently and, most importantly, limit its emissions; the International Energy Agency forecasts that such systems may reduce nitrous oxide emissions by 20%. Today, hundreds of other plants besides Xcel Energy's use similar systems for environmental sustainability.
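As a rough sketch of the underlying idea (not Xcel's actual system), a small neural network can learn how operating parameters relate to NOx output from historical measurements, and the learned model can then be used to search for lower-emission settings. The data below is synthetic, the parameter names are invented, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic "historical" boiler data: two operating parameters and measured NOx.
# The relationship here is invented purely for illustration.
rng = np.random.default_rng(0)
air_fuel_ratio = rng.uniform(0.9, 1.3, size=200)
burner_tilt = rng.uniform(-10, 10, size=200)
nox = 120 - 40 * air_fuel_ratio + 0.5 * burner_tilt**2 + rng.normal(0, 2, size=200)

# Fit a small neural network that maps operating settings to predicted NOx.
X = np.column_stack([air_fuel_ratio, burner_tilt])
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X, nox)

# Evaluate a grid of candidate settings and pick the one the model predicts
# to emit the least NOx -- the basic loop behind neural-network combustion tuning.
candidates = np.array([[r, t] for r in np.linspace(0.9, 1.3, 9)
                              for t in np.linspace(-10, 10, 9)])
best = candidates[np.argmin(model.predict(candidates))]
print("Predicted lowest-NOx settings (air/fuel ratio, burner tilt):", best)
```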
However, alongside these significant developments, AI systems have a carbon footprint of their own, since training models on large datasets requires a considerable amount of energy. Some sources even suggest that this energy use could outweigh AI's benefits for energy efficiency. On the other hand, it is also argued that as AI's own energy efficiency improves, its footprint could become a minor factor compared with its contributions to energy efficiency and to limiting greenhouse gas emissions.
Intersections of AI with social and scientific issues like these are likely to be crucial to society's future. According to the study "The Role of Artificial Intelligence in Achieving Sustainable Development Goals", AI can help resolve 95% of the issues directly related to the environmental SDGs, namely climate action, life below water, and life on land. Given that potential, AI can be the tool used to take a step forward in environmental sustainability.
 
References
Vinuesa, R., Azizpour, H., Leite, I. et al. The role of artificial intelligence in achieving the Sustainable Development Goals. Nat Commun 11, 233 (2020). https://doi.org/10.1038/s41467-019-14108-y
https://www.greenbiz.com/article/what-artificial-intelligence-means-sustainability
https://cyfuture.com/blog/how-is-artificial-intelligence-helping-in-the-fight-against-climate-change/
https://www.irdcsabanci.com/post/wildfires-of-2020-and-solastalgia-insights-of-climate-change
https://www.americanprogress.org/issues/green/reports/2016/11/18/292588/the-impact-of-vehicle-automation-on-carbon-emissions-where-uncertainty-lies/
https://www.infoworld.com/article/3568680/is-the-carbon-footprint-of-ai-too-big.html
https://www.pwc.co.uk/services/sustainability-climate-change/insights/how-ai-future-can-enable-sustainable-future.html