The era of AI-powered automation is here, and we must ensure it is free of prejudices and biases so that it does not perpetuate human fallibilities. Here’s how.

Building ethical AI-powered automation products

Fast Company Executive Board

The Fast Company Executive Board is a private, fee-based network of influential leaders, experts, executives, and entrepreneurs who share their insights with our audience.

BY Akhil Sahai · 2 minute read

Artificial intelligence (AI) offers countless possibilities for “smart agency” to augment human intelligence and enhance human agency. However, underlying AI models are susceptible to prejudices and biases, and AI-powered automation products that rely on such models inherit them in turn. With generative AI increasingly being incorporated into such products, additional forms of bias are introduced.

BIASES GALORE IN AI AND GENERATIVE AI

Biases in AI models can stem from various sources: the training data and data labeling, which may carry inherent or introduced bias; algorithmic bias, whereby programmers encode biased weightings based on their personal opinions or experiences; and cognitive bias, such as favoring one dataset over another.

AI-powered automation products are now also beginning to rely heavily or entirely on generative AI models such as large language models (LLMs) and small language models (SLMs). This introduces further forms of bias. These include linguistic bias, where a particular style of writing, cultural preference, or vocabulary is favored; group attribution bias, whereby specific characteristics are attributed to a group of people; and, most importantly, automation bias, where machines are assumed to be infallible and their suggestions and recommendations are taken on blind faith. Additionally, the embeddings LLMs use to represent natural language can further amplify the biases present in training data.

The prevalence of these biases in AI models results in situations like what was observed with Google Gemini. The replication or amplification of human preferences or biases by AI algorithms poses a significant threat to the cause of ethical AI-powered products and underscores the urgent need to address this issue.

TRUSTWORTHY HUMAN-CENTERED AI

AI-powered automation products are increasingly designed with the understanding that humans play a significant role in interacting with them. Addressing AI biases and ensuring ethical AI development is not just a necessity, but a responsibility.

Human-centered AI has to be trustworthy. Per the European Commission’s Ethics Guidelines for Trustworthy AI, AI must comply with applicable laws and regulations, allow for human oversight, avoid prejudice and bias, benefit human beings, and be transparent and accountable.

THE WAY FORWARD

Transparency is critical to AI-powered automation products because it provides an understanding of how decisions were reached and what confidence scores informed them, especially in the face of ambiguity. Having an audit trail of activities to review the decisions and the activity flow is also essential. Thus, “explicability” should be a vital trait of AI-powered automation products.
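One way to picture such an audit trail is a record that stores each decision alongside its confidence score and rationale, flagging low-confidence (ambiguous) cases. This is only an illustrative sketch; the names `DecisionRecord` and `log_decision` and the 0.8 threshold are assumptions, not any specific product’s design.

```python
# Minimal sketch of an auditable decision log with confidence scores.
# All names and thresholds here are illustrative assumptions.
import time
from dataclasses import dataclass, asdict, field

@dataclass
class DecisionRecord:
    """One audit-trail entry: what was decided, how confidently, and why."""
    input_summary: str
    decision: str
    confidence: float   # model confidence score in [0, 1]
    rationale: str      # human-readable explanation for later review
    timestamp: float = field(default_factory=time.time)

def log_decision(trail: list, record: DecisionRecord,
                 review_threshold: float = 0.8) -> bool:
    """Append the record to the audit trail; return True when the
    confidence is low enough that a human should review the decision."""
    trail.append(asdict(record))
    return record.confidence < review_threshold

trail: list = []
needs_review = log_decision(
    trail,
    DecisionRecord("invoice #123 field extraction", "approve",
                   confidence=0.62, rationale="vendor name matched ledger"),
)
print(needs_review)  # 0.62 < 0.8, so this decision is flagged for review
```

The trail itself is plain data, so it can be reviewed, exported, or replayed later, which is the essence of “explicability.”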

Responses from LLMs must be vetted before they are used for decision-making, as they are prone to hallucinations and non-deterministic responses. This is easier said than done because inserting humans in the loop can slow down decision-making. Therefore, automated mechanisms should be built into such products for verifying the responses and identifying and flagging any underlying issues as exceptions that may need human review. 
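As a sketch of what such an automated verification mechanism might look like, the function below runs simple deterministic checks on a model response and flags failures as exceptions for human review. The function name, checks, and allowed-value vocabulary are all hypothetical assumptions for illustration.

```python
# Illustrative sketch (not any specific product's mechanism) of vetting
# an LLM response before it drives a decision.
import re

def vet_llm_response(response: str, allowed_values: set) -> dict:
    """Run deterministic checks on a model response; an empty issue
    list means the response can proceed without human review."""
    issues = []
    # 1. Structural check: the response must name exactly one allowed value.
    found = [v for v in allowed_values
             if re.search(rf"\b{re.escape(v)}\b", response, re.I)]
    if len(found) != 1:
        issues.append("ambiguous or out-of-vocabulary answer")
    # 2. Guard against non-committal or boilerplate output, a common
    #    symptom of an unreliable (possibly hallucinated) response.
    if re.search(r"\b(as an ai|i cannot|probably|might be)\b", response, re.I):
        issues.append("non-committal response")
    return {"approved": not issues,
            "issues": issues,
            "answer": found[0] if len(found) == 1 else None}

verdict = vet_llm_response("The document category is: invoice.",
                           {"invoice", "purchase order", "receipt"})
print(verdict)  # approved, answer "invoice"
```

Responses that fail any check would be routed to a human as exceptions, keeping people in the loop only where it matters.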

Carefully curating data sets for training AI models (e.g., for fine-tuning LLMs/SLMs) to ensure that the training data is free of biases and prejudices helps. Also, periodic and regular auditing of AI algorithms used in such products to weed out personal preferences and prejudices is essential.
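One simple curation check along these lines is comparing positive-label rates across a sensitive attribute before a dataset is used for fine-tuning. The sketch below assumes rows with hypothetical `group` and `label` fields and an arbitrary disparity threshold; it is one illustrative audit, not a complete bias review.

```python
# Hedged sketch of a dataset bias audit: compare positive-label rates
# across groups and surface large gaps for re-curation.
from collections import defaultdict

def label_rate_by_group(rows):
    """Return the fraction of positive labels per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        positives[row["group"]] += int(row["label"])
    return {g: positives[g] / totals[g] for g in totals}

def disparity(rates):
    """Largest gap in positive-label rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Toy data: group A is labeled positive twice as often as group B.
rows = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]
rates = label_rate_by_group(rows)
print(rates, disparity(rates))  # a large gap would prompt re-curation
```

Run periodically, checks like this make the “regular auditing” described above a concrete, repeatable step rather than a one-time review.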

The importance of human intervention when such products fail to meet expectations or exhibit biases cannot be overstated. Humans must be able to assume control as needed and address any irregularities in decision-making, and these controls must be built into the products themselves.

The era of AI-powered automation is here and now. We must ensure it is free of prejudices and biases so that it does not perpetuate human fallibilities. 



ABOUT THE AUTHOR

Seasoned executive with roles at LexisNexis, HP, Dell, and VMware who has scaled startups to successful exits. Chief Product Officer at Kanverse.ai.
