Artificial intelligence is entering our lives more and more every day, and these developments bring many questions with them. Although artificial intelligence can do amazing things, it still falls short when it comes to understanding human emotions.
Day by day, AI has shown up in more functions across more areas: health, transportation, communication… Some scientists say it is not possible to hand all authority over to artificial intelligence, especially in these areas. When it comes to ethics, artificial intelligence has not yet fully earned people's trust. For example, in 2018 a Tesla vehicle crashed while on Autopilot, and the driver died. According to the company's statement, the driver was at fault: he had not heeded the system's warnings, which could have prevented the accident. Although exactly what happened in that moment remains a mystery, such news frightens us.
In addition, vehicles with this type of artificial intelligence act as independent decision-makers while on autopilot, and when the object in their path is a human, they must perceive it and decide what to do very quickly. For example, suppose a car is approaching fast and the vehicle has to swerve to the right, but there is something on the right, perhaps a child. Should the vehicle hit the oncoming car or the child? Can it detect such situations and react in time? What will it do? All these questions are very important, because each question is a key that starts us toward a solution, and each key brings further improvement.
So how do we know whether an artificial intelligence is reliable? To answer this question, a group of researchers spent months on a project called “DeepTrust.” The study's academic article explains what DeepTrust is as follows:
“…a Subjective Logic (SL) inspired framework that constructs a probabilistic logic description of an AI algorithm and takes into account the trustworthiness of both dataset and inner algorithmic workings.” (Mingxi Cheng, Shahin Nazarian, and Paul Bogdan, 2020)
It can be described as a subjective-logic computation involving a great deal of mathematics, especially probability. Subjective logic itself is a kind of probabilistic logic, commonly used to model and analyze situations involving uncertainty and unreliability. It defines many operations and probability calculations, and it reaches its results by combining values coming from different components. The mathematics is explained in detail in Audun Jøsang's book “Subjective Logic: A Formalism for Reasoning Under Uncertainty,” a PDF version of which is available in the references.
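To make this concrete, the basic object in subjective logic is a binomial opinion: a tuple of belief, disbelief, and uncertainty that sums to 1, plus a base rate (a prior probability). Jøsang's formalism projects an opinion onto an expected probability as E = b + a·u. The sketch below, with illustrative values of my own choosing, shows that calculation; it is a minimal illustration of the formalism, not code from the DeepTrust study.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A binomial opinion in subjective logic: belief b, disbelief d,
    uncertainty u (with b + d + u = 1), and base rate a."""
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def __post_init__(self):
        total = self.belief + self.disbelief + self.uncertainty
        assert abs(total - 1.0) < 1e-9, "b + d + u must equal 1"

    def expected_probability(self) -> float:
        # E = b + a * u : the uncertainty mass is distributed
        # according to the base rate (prior probability).
        return self.belief + self.base_rate * self.uncertainty

# A hypothetical opinion that mostly trusts a component,
# with some remaining uncertainty.
op = Opinion(belief=0.7, disbelief=0.1, uncertainty=0.2)
print(op.expected_probability())  # 0.8
```

Note how uncertainty is handled explicitly rather than being folded into the probability: two opinions with the same expected probability can carry very different amounts of uncertainty, which is exactly the distinction that matters when judging trust.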
DeepTrust's job is not to check the millions of data points an AI algorithm was trained on, or what each of them is connected to, but to provide insight into the structure that this data creates. Think of an artificial neural network: imagine trying to read all the data inside it and verify the accuracy of every item one by one. That would take an enormous amount of time, right? What really matters here is whether the architecture of the network is trustworthy overall, and that is what DeepTrust tries to check.
DeepTrust, which is largely based on subjective logic, can estimate how reliable an algorithm is through probability calculations. For example, it was able to determine that a prediction that Clinton would win the 2016 United States presidential election was untrustworthy. And as we know, Clinton did not win that election.
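One way such a system can combine trust evidence from different components is with subjective logic's standard operators. The sketch below implements Jøsang's cumulative fusion operator, which merges two opinions about the same thing so that the fused uncertainty shrinks as evidence accumulates; the input values are hypothetical, and this illustrates the general operator rather than DeepTrust's specific pipeline.

```python
def cumulative_fusion(op_a, op_b):
    """Jøsang's cumulative fusion of two binomial opinions,
    each given as a (belief, disbelief, uncertainty) tuple."""
    b_a, d_a, u_a = op_a
    b_b, d_b, u_b = op_b
    denom = u_a + u_b - u_a * u_b
    # The closed form below requires some uncertainty in at least one opinion.
    assert denom > 0, "at least one opinion must carry uncertainty"
    belief = (b_a * u_b + b_b * u_a) / denom
    uncertainty = (u_a * u_b) / denom
    disbelief = 1.0 - belief - uncertainty
    return (belief, disbelief, uncertainty)

# Two hypothetical trust opinions about the same component:
fused = cumulative_fusion((0.6, 0.2, 0.2), (0.7, 0.1, 0.2))
print(fused)  # fused uncertainty is lower than either input's 0.2
```

The key property is visible in the output: after fusion the uncertainty drops below both inputs, reflecting that two independent sources of evidence together warrant a more confident opinion.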
These and other examples show us that precision and reliability are not always the same thing: a definitive-looking result may not pass the reliability filter.
Although trust is an intrinsic feeling, it is exciting to see that it can also be calculated mathematically, and that the reliability of artificial intelligence can be measured! According to its researchers and developers, DeepTrust is the first study to analyze trust in the field of AI. According to Bogdan (2020), their aim is to make artificial intelligence more aware of itself so that it can adapt better.
While dozens of studies are being carried out on developing artificial intelligence, the existence of other studies that scrutinize them shows us how remarkably active this field is.
As IBM said: “Artificial intelligence is no longer the future …”