Explainable Artificial Intelligence: Principles and Applications
One of my brothers had an accident and suffered a knee injury. Based on the scan, the doctor advised surgery. I got a second opinion; that doctor suggested a different kind of surgery. At a third hospital, the doctors suggested not undergoing surgery at all! Which option should I choose? Some of us might have faced similar problems. What can help us make an informed decision? How can technology help here? What if I had an AI app that helps me understand the nature of the injury and the rationale behind each suggestion? Can Explainable Artificial Intelligence principles play a role here? I believe they can. When AI models are used, there is a possibility of adverse social impact, and I believe Explainable Artificial Intelligence principles will help us address some of these social issues.
AI models might help in this kind of situation and in everyday human life. Explainable Artificial Intelligence, which can give recommendations along with the reasons for them, is vital for addressing social issues. However, trustworthy AI models are required. Ultimately, we need socially responsible AI models to solve some of our day-to-day problems.
Explainable Artificial Intelligence in everyday life!
In the Tamil language, there is a proverb: "Food is the medicine." This concept is part of many cultures. However, over time this practice has eroded, contributing to various lifestyle diseases. As a countermeasure, many health apps on the market now guide food intake depending on workouts, the calories a person should consume, and so on. Some apps recommend food menus based on health conditions, emotional state, and workout parameters.
Explainable Artificial Intelligence can help provide the reasons behind the right food choice. The critical point is that when users understand the rationale behind an AI model's recommendations, their confidence increases. As confidence increases, users make better decisions.
However, we need to ensure that any adverse effects produced by automated AI models are mitigated. So we need trustworthy models and apps that are socially responsible!
Explainable Artificial Intelligence for solving technical issues
In the past, some of us might have heard about vehicles being recalled due to malfunctions. Sometimes even the manufacturers might not know the real reason. In these cases, predictive analytics helps predict the causes of failure. However, engineers expect to understand the rationale of the prediction even more than the prediction itself. Explainable Artificial Intelligence principles explain the rationale of a prediction and increase the trustworthiness of the model.
Explainable AI for solving Social Issues
- Drones, cameras, and sensors are increasingly deployed for surveillance. What about people's privacy, security, and basic rights? What measures should be taken to address these challenges?
- Data sets play a very important role in the output of AI models. Incorrect data sets lead to incorrect results and incorrect automated decisions.
- How do we build moral and ethical values into an automated decision-making process? Human society is diverse in its social, economic, and ethnic makeup. How do we build models that consider this diversity?
- Socio-economic divide: Many of us are aware that a small fraction of people hold most of the wealth (roughly 10% of people hold about 90% of it). How do we democratize AI models? How can advanced technologies be fair to common people?
- Malicious purposes: Technically, AI models are not malicious. AI models are mostly black boxes producing certain results. However, what guidelines and approval mechanisms can help prevent AI models from being deployed with malicious intent?
- These are social and economic challenges. Technology plays an important role in solving them, but people from all walks of life must be involved in developing socially responsible solutions!
Challenges in adopting artificial intelligence
- AI model development is expensive and still in the incubation phase, so these models are accessible mainly to those who can invest. Building explainable AI models requires additional effort and investment.
- Technological evolution: technology evolves very fast, and adapting to new technologies during development is a challenge.
- The costs of training and developing human capital are spiraling. As competition increases, the cost of training and retaining people also increases.
AI Economy & Explainable Artificial Intelligence
PwC estimates that AI will contribute $15.7 trillion to the global economy by 2030. AI affects almost all industry segments, and it directly impacts human lives in healthcare, automotive, finance, jobs, everyday life, and more.
- The competition landscape is changing from conventional resources to data and models!
- The competition is going to be about "who can build models faster?"
- When the economy is getting dependent on automation, the nature of jobs changes.
- Competition is moving towards the collaboration of physical & cyberspace. Metaverse is an example.
However, how can human factors and considerations be built into these kinds of automated models? This needs to be discussed with people from all walks of life.
Need for Explainable Artificial Intelligence principles
Below are some of the reasons for the need for Explainable Artificial Intelligence.
- Avoiding polarized recommendations: recommendation engines provide recommendations based on previous searches, which can reinforce existing views.
- Influencing decisions/verdicts: automatically skewed search results on selected topics, or a selective history of legal rulings and examples, might lead to incorrect decisions.
- Human behavior: what guarantees that human factors are considered in building some models?
- Self-learning and explanation: how do we prevent malicious data from being fed into model training?
Principles of Explainable Artificial Intelligence
- Explainable:
  - This is the most important aspect of Explainable Artificial Intelligence. The model needs to explain the following:
    - The AI model itself
    - The output: the rationale behind the output/recommendations
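To illustrate this principle, here is a minimal sketch in Python. It ties back to the food-app example earlier: the rules, thresholds, and function names are all hypothetical, but they show a model returning its output together with the rationale behind it.

```python
# Sketch: a model output paired with its rationale.
# The rules and thresholds below are hypothetical illustrations.

def recommend_meal(calories_burned, sugar_level):
    """Return a food recommendation together with the reasons behind it."""
    reasons = []
    if sugar_level > 140:
        reasons.append("blood sugar above 140 mg/dL: avoid sugary foods")
    if calories_burned > 500:
        reasons.append("high workout burn: add protein-rich items")
    recommendation = "high-protein, low-sugar menu" if reasons else "regular balanced menu"
    return recommendation, reasons

rec, why = recommend_meal(calories_burned=600, sugar_level=150)
print(rec)
for reason in why:
    print(" -", reason)
```

When the user sees not just the menu but the two reasons behind it, the recommendation becomes easier to trust, or to question.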
- Reliability:
  - How reliable can the predictions be, given how the model was built?
  - The quality of the data and the size of the training data set matter here.
- Consistency and accuracy:
  - How consistently can the model deliver its output? Even a small fraction of misjudgment can be expensive.
  - The models are expected to be accurate for their intended purpose. However, confirming this accuracy under different conditions is critical.
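To make the accuracy-under-different-conditions point concrete, here is a minimal sketch. The rule-based "model", its thresholds, and the two data slices are all hypothetical; the idea is simply that a model can look accurate in one operating condition and miss cases in another.

```python
# Sketch: checking a toy model's accuracy on different data slices.
# Model, thresholds, and data are hypothetical illustrations.

def model(temperature, vibration):
    """Toy classifier: flag a part as faulty above fixed thresholds."""
    return temperature > 80 or vibration > 5.0

def accuracy(samples):
    """Fraction of samples where the prediction matches the label."""
    correct = sum(model(t, v) == label for t, v, label in samples)
    return correct / len(samples)

# Two slices representing different operating conditions: (temp, vibration, faulty?)
normal_load = [(70, 2.0, False), (85, 3.0, True), (60, 1.0, False), (90, 6.0, True)]
heavy_load = [(82, 4.0, True), (78, 5.5, True), (75, 4.5, True), (88, 2.0, True)]

for name, data in [("normal load", normal_load), ("heavy load", heavy_load)]:
    print(f"{name}: accuracy = {accuracy(data):.2f}")
```

Here the model scores perfectly under normal load but misses a faulty part under heavy load, which is exactly the kind of condition-dependent gap that needs to be confirmed before trusting the model.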
AI Model Scoring!
One thought process we need to consider is "AI model scoring!"
- Rating AI models, like app ratings and food review ratings!
- The ratings might be based on various parameters such as data quality, ethics, human consideration, accuracy, and so on.
- Like ethical hacking, ethical AI model hacking might also be possible as a way to understand AI models.
- Deploying tools and making models transparent: increasing the adoption of tools such as LIME, SHAP, and AI Explainability 360 is beneficial.
This is a complex topic by itself; nevertheless, we need to consider various factors, such as auditing, certification, and so on. This will help us develop more socially responsible models.
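As a rough sketch of what such a score could look like, the snippet below combines per-parameter ratings into one weighted score, much like an app rating. The parameter names, weights, and ratings are hypothetical, not a published standard.

```python
# Sketch: a simple weighted "AI model score", similar to an app rating.
# Parameter names, weights, and ratings below are hypothetical.

def model_score(ratings, weights):
    """Combine per-parameter ratings (0-5) into one weighted score."""
    total_weight = sum(weights.values())
    return sum(ratings[p] * w for p, w in weights.items()) / total_weight

weights = {"data_quality": 3, "ethics": 3, "human_consideration": 2, "accuracy": 2}
ratings = {"data_quality": 4.0, "ethics": 3.5, "human_consideration": 4.5, "accuracy": 5.0}

print(f"Overall model score: {model_score(ratings, weights):.2f} / 5")  # 4.15 / 5
```

A real scoring scheme would of course need agreed-upon parameters, auditable inputs, and an independent body behind it; the sketch only shows the aggregation idea.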
Who is responsible?
The following are some of the expected impacts of automated AI models.
- Technical specialists develop AI models to solve certain problems. However, there is a need to validate the security, privacy, and unbiased output of the models.
- These models and advanced technologies influence social changes, but many people may not be aware of the changes. Increasing awareness is important.
- Lifestyle changes such as the nature of jobs, searching for information, booking hotel rooms, etc., may become much easier. Despite the benefits, people may feel that freedom is gone.
- The nature of jobs and the required knowledge/skill/competency development will change. Developed economies may be better prepared than the developing world; we need to support developing economies.
- In developing AI models, the AI ecosystem involves – Platform developer / Owner, companies developing AI models, Data Providers, Solution Integrators, and other stakeholders. In a complex system like this, who will be responsible for connecting all the stakeholders and ensuring responsibility across the entire value chain?
As a responsible society, we need to look at the above aspects. A few points for consideration:
- Intervention through government regulations
- Social Interest groups to create awareness
- Transparent publishing mechanisms about the data being used, model behavior, risk factors, etc. However, disclosure to the public may be a challenge in some cases, so a regulated disclosure and auditing system should be developed, similar to the financial reports published by public companies.
The role of Explainable Artificial Intelligence principles and the transparent mechanism will be of immense help. A technical/social forum for Explainable Artificial Intelligence principles can be a starting point.
Next era: The benefits of implementing Explainable Artificial Intelligence principles
- AI solutions are changing the competitive landscape. Companies like Ant and Intuit demonstrate how AI platforms can be leveraged for scalable businesses. These businesses provide services to people for whom they were not available earlier. In addition to the reach, these models provide economic benefits and quick service. However, when denial of service comes into the picture, issues arise. Explainable Artificial Intelligence principles are one means of providing the rationale for a denial of service, and this becomes a key differentiator for organizations.
- Assessment of organizational processes: organizations have numerous processes, which may be validated or audited regularly for compliance with various regulatory and statutory requirements. This is one area in which Explainable AI provides immense automated insight.
- Productivity improvement and efficient operations: remote operation of automated industrial and manufacturing systems, continuous remote monitoring, and helping humans take corrective actions.
- Increasing product sales, enhancing customer experience, and improving profitability are other areas where Explainable AI plays a role.
From the above benefits, we can infer that Explainable Artificial Intelligence principles become a differentiator in chosen areas. The opportunities are wide open, although there are concerns. But as we become more socially responsible, I am sure we will find ways to move forward.
Next blog… if you are curious!