Tech glossary

Defining IT & technology terms

Responsible AI

Responsible AI refers to a set of frameworks that promote accountable, ethical and transparent Artificial Intelligence (AI) development and adoption. From approving loan applications to selecting job candidates, many AI use cases are sensitive in nature. Organizations adopt responsible AI practices to guard against bias, which can be built into an AI system's design or the data it is trained on.

As AI becomes more sophisticated and prevalent, its ethical implications demand serious attention. Industry leaders and end users alike are calling for more AI regulation.

Responsible AI best practices generally follow these guidelines:

  • Evaluating why AI is being used for each use case
  • Establishing management policies that address accountability and potential flaws
  • Committing to appropriate and secure data use
  • Ensuring that humans audit an AI's decision-making and results (a minimal example of such a check appears after this list)
  • Recognizing that bias can be unconsciously built into AI design
  • Creating documentation that explains how the AI works
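
To illustrate the auditing point above, the sketch below applies one simple fairness check, demographic parity, to hypothetical loan-approval decisions. The column names, sample data and 0.2 tolerance are assumptions made for this example only; real audits use richer metrics and human domain review.

    import pandas as pd

    def approval_rate_by_group(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
        """Approval rate (share of positive decisions) for each demographic group."""
        return df.groupby(group_col)[decision_col].mean()

    def demographic_parity_gap(rates: pd.Series) -> float:
        """Largest difference in approval rates between any two groups."""
        return float(rates.max() - rates.min())

    # Hypothetical decisions produced by a loan-approval model.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "B", "B", "B"],
        "approved": [1,   0,   1,   1,   1],
    })

    rates = approval_rate_by_group(decisions, "group", "approved")
    gap = demographic_parity_gap(rates)

    # The 0.2 tolerance is an illustrative assumption, not an industry standard.
    if gap > 0.2:
        print(f"Approval-rate gap of {gap:.2f} exceeds tolerance; flag for human review.")

A check like this is only a starting point: it surfaces a disparity for humans to investigate, in keeping with the guideline that people, not the model, remain accountable for outcomes.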


Related terms

  • Artificial Intelligence (AI)
  • Big data

Featured content for responsible AI

  • Insight report: The Path to Digital Transformation: Where Leaders Stand in 2023
  • eBook: Outsmarting Ransomware: A Quick Response Guide for Military
  • eBook: 4 Technology Trends Impacting Effective Military Operations
  • Datasheet: Elevate the Military With AI
