In-Q-Tel explains explainable artificial intelligence

By GovFresh · July 26, 2019

Photo: In-Q-Tel

The intelligence community's venture capital arm, In-Q-Tel, published a helpful primer on explainable artificial intelligence.

I'm not an expert in AI and, if you're not either, these excerpts may elevate your understanding:

As use of AI and Machine Learning (ML) becomes increasingly common across industries and functions, interdisciplinary stakeholders are searching for ways to understand the systems they are using so that they can trust the decisions such systems inform. This effort is sometimes referred to as “Explainable AI” or “XAI”.

...

The focus on trust and understanding that is driving the XAI movement relates to important questions of law and policy. An explanation for an AI or ML system can put the system’s reasoning into the open for debate about whether it is equitable or just, or may enable some sort of actionable understanding around why a decision was made.(5)

...

Some researchers, like Facebook’s Chief AI Scientist Yann LeCun and Google Brain’s Geoff Hinton, have argued that asking systems to “explain” themselves is a complex, infeasible task that may not lead to actionable insight.(6,7) Others disagree, arguing that explainability is necessary, as technologists need to consider the social implications of all parts of their AI systems.(8,9,10) Moreover, they argue, evolving research may make the task increasingly feasible.(11)

...

One helpful way to characterize efforts in XAI is by applicability — for example, whether a technique can be used to interpret or justify a single model or many, or whether it can be used to interpret or justify a single decision or larger trends.

...

While most new work and research on these techniques is coming from the academic sector, XAI tools are beginning to materialize in the market. Whether XAI companies will be able to stand on their own, or if these tools will primarily be absorbed as a feature by established AI/ML players, remains to be seen.
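To make the primer's "applicability" distinction a bit more concrete, here is a minimal sketch of my own (not from the primer) that contrasts a global, model-agnostic explanation of a model's larger trends with a local explanation of a single decision. The dataset, model, and perturbation scheme are illustrative assumptions, not a prescribed XAI technique.

```python
# Sketch only: contrasts a "global" explanation (trends across many decisions)
# with a "local" explanation (one decision), both model-agnostic.
# Assumes scikit-learn and NumPy; the dataset and model are arbitrary choices.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global view: permutation importance measures how much test accuracy drops
# when each feature is shuffled -- an explanation of larger trends.
global_imp = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0)
top = np.argsort(global_imp.importances_mean)[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]}: {global_imp.importances_mean[i]:.3f}")

# Local view: perturb one feature of a single example and watch the predicted
# class-1 probability move -- an explanation of a single decision.
x = X_test[0].copy()
base = model.predict_proba([x])[0, 1]
for i in top:
    perturbed = x.copy()
    perturbed[i] = X_train[:, i].mean()  # swap in the training-set mean
    delta = model.predict_proba([perturbed])[0, 1] - base
    print(f"Setting '{data.feature_names[i]}' to its mean shifts "
          f"the class-1 probability by {delta:+.3f}")
```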

Read more: Explainable AI
