What is explainable AI (XAI)?

by Carl Sandburg
May 9, 2023
in Opinion

XAI involves designing AI systems that can explain their decision-making process through various techniques. The goal is to enable external observers to better understand how an AI system's output comes about and how reliable it is. This matters because AI can produce direct and indirect adverse effects that impact individuals and societies.
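
One widely used family of XAI techniques is post-hoc feature attribution: measuring how much each input feature contributed to a model's predictions. The sketch below is purely illustrative and is not drawn from this article; it assumes a Python environment with scikit-learn and uses permutation importance, one simple attribution method, to rank the features a "black box" classifier relies on.

# Illustrative sketch (assumes scikit-learn is available); not from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an ordinary "black box" classifier on a public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much test accuracy drops:
# the features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")

A ranked list of this kind gives an external observer at least a coarse answer to "what is this system basing its decisions on?", which is the sort of insight described above.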

Just as explaining what constitutes AI can be daunting, so can explaining its results and inner workings, especially where deep-learning systems come into play. To help non-engineers envision how AI learns and discovers new information, these systems can be pictured as relying on complex internal circuits shaped similarly to the neural networks of the human brain. 

The neural networks that facilitate AI’s decision-making are often called “deep learning” systems. It is debated to what extent decisions reached by deep learning systems are opaque or inscrutable, and to what extent AI and its “thinking” can and should be explainable to ordinary humans.
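
To make the opacity concrete, here is a toy forward pass through a tiny two-layer network (a hypothetical illustration with random weights, not drawn from this article). The "hidden" activations that drive the final score are just arrays of numbers with no obvious human-readable meaning, which is part of why such systems are so often called black boxes.

# Toy two-layer network with random weights; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=3)  # input layer -> hidden layer
W2, b2 = rng.normal(size=(3, 1)), rng.normal(size=1)  # hidden layer -> output

x = np.array([0.2, -1.0, 0.5, 3.1])                   # one input example
hidden = np.maximum(0, x @ W1 + b1)                   # ReLU "neuron" activations
score = 1 / (1 + np.exp(-(hidden @ W2 + b2)))         # sigmoid decision score

print("hidden activations:", hidden)  # opaque intermediate numbers
print("decision score:", score)       # the output a person is asked to trust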

Scholars debate whether deep learning systems are truly black boxes or entirely transparent; the consensus, however, is that most decisions should be explainable to some degree. This is significant because the deployment of AI systems by state or commercial entities can negatively affect individuals, making it crucial that these systems are accountable and transparent.

The Dutch Systeem Risico Indicatie (SyRI) case is a prominent example of the need for explainable AI in government decision-making. SyRI was an AI-based automated decision-making system, developed by Dutch semi-governmental organizations, that used personal data and other tools to identify potential fraud through opaque processes later characterized as black boxes.

The system came under scrutiny for its lack of transparency and accountability, with national courts and international bodies finding that it violated privacy and other human rights. The SyRI case illustrates how governmental AI applications can harm people by replicating and amplifying bias and discrimination: SyRI unfairly targeted vulnerable individuals and communities, such as low-income and minority populations. 

SyRI aimed to find potential social welfare fraudsters by labeling certain people as high-risk. As a fraud-detection system, it was deployed only to analyze people in low-income neighborhoods, since such areas were considered “problem” zones. Because the state ran SyRI’s risk analysis only in communities already deemed high-risk, it is no wonder that more high-risk citizens were found there relative to neighborhoods not considered “high-risk”. 
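
The selection effect just described can be illustrated with a tiny simulation. The numbers below are entirely hypothetical and are not SyRI data; the point is only that if screening is run solely where risk is already assumed, the resulting flags will appear to confirm that assumption.

# Hypothetical simulation of a "screen only where suspected" feedback loop;
# all rates are invented for illustration and are not SyRI data.
import random

random.seed(0)
TRUE_FRAUD_RATE = 0.02      # assume fraud is equally rare everywhere
FLAG_SENSITIVITY = 0.9      # chance the system flags an actual fraud case
FALSE_POSITIVE_RATE = 0.10  # chance it flags an innocent resident

def run_screening(n_residents):
    flags = 0
    for _ in range(n_residents):
        fraud = random.random() < TRUE_FRAUD_RATE
        p_flag = FLAG_SENSITIVITY if fraud else FALSE_POSITIVE_RATE
        flags += random.random() < p_flag
    return flags

flags_high_risk = run_screening(10_000)  # screening deployed in the "high-risk" area only
flags_elsewhere = 0                      # other areas are never screened, so never flagged

print("flags in screened neighborhood:", flags_high_risk)
print("flags elsewhere:", flags_elsewhere)  # looks "cleaner" only because it was never screened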

This label, in turn, encouraged stereotyping and reinforced a negative image of the residents of those neighborhoods, even those never named in a risk report or returned as a “no-hit”, because the data entered comprehensive cross-organizational databases and was recycled across public institutions. The case illustrates that when AI systems produce unwanted adverse outcomes such as bias, those outcomes may go unnoticed if transparency and external oversight are lacking.

Besides states, private companies develop or deploy many AI systems in which transparency and explainability are outweighed by other interests. Although it can be argued that the structures enabling today's AI would not exist in their current form without past government funding, a significant and steadily growing share of progress in AI is now privately funded. In fact, private investment in AI in 2022 was 18 times higher than in 2013.

Commercial AI “producers” are primarily accountable to their shareholders and may therefore be heavily focused on generating profit, protecting patent rights and fending off regulation. Hence, where the functioning of commercial AI systems is insufficiently transparent and enormous amounts of data are privately hoarded to train and improve them, it becomes essential to understand how such systems work. 

Ultimately, the importance of XAI lies in its ability to provide insight into the decision-making process of AI models, enabling users, producers, and monitoring agencies to understand how and why a particular outcome was produced. 

This arguably helps to build trust in governmental and private AI systems. It increases accountability and helps ensure that AI models are not biased or discriminatory. It also helps prevent low-quality or illegally obtained data from being recycled through the comprehensive cross-organizational databases that feed algorithmic fraud-detection systems in public institutions.




