
As everyone starts automating their businesses, we must be sure they’re doing it in a way that helps everyone.

5 Principles To Make Sure Businesses Design Responsible AI

What needs to happen to drive the ethical development of corporate AI? [Image: Jumpeestudio/iStock]

By Kriti Sharma | 5 minute read

A report from PwC predicts that 38% of American jobs will be automated by 2030. Analysis from The Washington Post puts the share of millennials who will compete with robots for jobs in their lifetime at 50%. While these numbers matter (including to me personally, as a working millennial), it is important to put them in perspective and understand how bots–and artificial intelligence–will work alongside humans in the offices of the future, and how companies like Microsoft, Amazon, Slack, and Facebook, which are already scripting and powering workplace applications of AI, can ethically create new integrations and innovations.

Let me explain.

I built Pegg, an autonomous chatbot that helps people manage their money, with ethics and accountability in mind, because both are important to me in my own work life. In the process, my team at Sage and I saw a clear demand among industry peers for a set of working-level principles that every company building AI should consider. While they could complement the visionary Asilomar Principles backed by Elon Musk, Stephen Hawking, and other innovation giants–which are designed to instill caution into the AI-creation process–they should be crafted specifically for businesses developing AI and for their customers, the end users of this emerging tech.

Here’s what I believe needs to happen to drive the ethical development of corporate AI over the next few decades, and, in the process, make humans working with the AI more accountable, as well.

The business world is constantly at risk of repeating a pattern of systematic inequality. [Image: Jumpeestudio/iStock]

1: AI Needs To Reflect The Diversity Of The Users It Serves

In building and deploying bot technology, businesses and builders need to create diverse AI. The technology should be built by diverse teams of people, using diverse data sets and diverse design approaches. Why? Well, the business world is constantly at risk of repeating a pattern of systematic inequality produced–sometimes as an unforeseen byproduct–by previous revolutionary workplace innovations. AI technology should be built to recognize diverse inputs, to place feedback in context without drifting toward bias, and never, under any circumstances, to perpetuate gender stereotypes.
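One concrete first step toward this principle is simply measuring whether a training corpus represents the people the bot will serve. Below is a minimal sketch of such a check; the `speaker_gender` field, the example records, and the 20% floor are illustrative assumptions, not anything from an actual production pipeline.

```python
from collections import Counter

# Hypothetical training examples for a workplace assistant; the records and the
# "speaker_gender" attribute are illustrative only.
training_examples = [
    {"text": "Schedule the quarterly review", "speaker_gender": "female"},
    {"text": "Approve the expense report", "speaker_gender": "male"},
    {"text": "Chase the overdue invoice", "speaker_gender": "female"},
    {"text": "Draft the hiring plan", "speaker_gender": "nonbinary"},
]

def representation_report(examples, attribute):
    """Return the share of training examples per value of a demographic attribute."""
    counts = Counter(ex[attribute] for ex in examples)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

report = representation_report(training_examples, "speaker_gender")
for group, share in sorted(report.items()):
    print(f"{group}: {share:.0%}")

# Flag any group that falls below a chosen floor (here, 20% of the corpus).
underrepresented = [g for g, share in report.items() if share < 0.20]
if underrepresented:
    print("Groups needing more data:", underrepresented)
```

A report like this won't remove bias on its own, but it makes gaps in the data visible before the model ever meets a user.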

Users build a relationship with enterprise AI and start to trust it after the first few meaningful interactions. [Photo: Jumpeestudio/iStock]

2: AI Must Be Held Accountable–And So Must Its Users

We don’t accept unsavory or unethical behavior from people in the workplace, so why should technology be the exception? My team and I believe that holding technology accountable actually boosts its potential. In fact, we think accountability is core to building the workplace of the future.

From our own experience building Pegg, we learned quickly that users build a relationship with enterprise AI and start to trust it after the first few meaningful interactions. Therefore, AI needs to be held accountable for its actions and responsible for its decisions, just like humans. As builders of bots and AI, engineers need to solve for traceability and auditability. We also need to ensure that AI does not reward undesirable human behavior, like a manager screaming profanity at a staff member or a user making sexist comments.
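To show what traceability and auditability can look like in practice, here is a minimal sketch that wraps a single bot turn so every decision is written to an append-only log and abusive input is refused rather than rewarded. The handler, the blocked-term list, the log file name, and the version label are hypothetical stand-ins, not Pegg's actual implementation.

```python
import json
import re
from datetime import datetime, timezone

# Illustrative list of terms the bot should refuse to reinforce; a real system
# would rely on a proper moderation model or service.
BLOCKED_PATTERNS = [re.compile(r"\b(?:idiot|stupid)\b", re.IGNORECASE)]

def audit_and_respond(user_id: str, message: str, handler) -> str:
    """Run one bot turn, refusing abusive input and logging a traceable record."""
    abusive = any(p.search(message) for p in BLOCKED_PATTERNS)
    reply = ("I can help once we keep this respectful."
             if abusive else handler(message))

    # Append-only audit record: who asked what, what the bot answered, and when,
    # so every decision can be traced and reviewed later.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "input": message,
        "output": reply,
        "flagged_abusive": abusive,
        "model_version": "pegg-demo-0.1",  # hypothetical version label
    }
    with open("bot_audit_log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return reply

# Usage: a stand-in handler that answers an expense question.
print(audit_and_respond("u42", "How much did I spend on travel?",
                        lambda m: "You spent $310 on travel this month."))
```

The point of the log is not surveillance but accountability: when a decision is questioned, there is a record of exactly what the bot saw, what it said, and which model produced the answer.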

Builders should create a rewards-based learning system that motivates robots. [Image: Jumpeestudio/iStock]

3: AI Should Be Rewarded For Successful Work–And Workplace Behavior

AI and robots must be trained to make correct decisions swiftly in countless workplace situations. Builders should create a rewards-based learning system that motivates robots and AI to achieve high levels of productivity. Under the reinforcement-learning systems deployed today, an AI or bot receives positive or negative feedback depending on the outcome it generates when it takes a certain action. Productivity and performance will skyrocket if builders extend this concept to also reward how a task is completed, and scale that rewards programming to outfit an entire AI workforce. Ultimately, it's not just about AI achieving the best objective results, but also about achieving them for the right reasons and in the right way.
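To make the idea concrete, here is a minimal sketch of a shaped reward that scores the outcome, the efficiency of the path taken, and the agent's conduct along the way. The weights, the step budget, and the `policy_violations` counter are illustrative assumptions, not any production reward scheme.

```python
def shaped_reward(task_completed: bool, steps_taken: int,
                  policy_violations: int, max_steps: int = 20) -> float:
    """Reward an agent for the outcome *and* for how the task was done."""
    outcome = 1.0 if task_completed else -0.5              # did it achieve the goal?
    efficiency = max(0.0, 1.0 - steps_taken / max_steps)   # fewer steps is better
    conduct = -1.0 * policy_violations                     # penalize rule-breaking behavior
    return outcome + 0.5 * efficiency + conduct

# A bot that finishes the task quickly and cleanly scores higher than one that
# finishes by cutting corners.
print(shaped_reward(task_completed=True, steps_taken=5, policy_violations=0))  # 1.375
print(shaped_reward(task_completed=True, steps_taken=5, policy_violations=2))  # -0.625
```

The design choice is that a completed task with violations can still score worse than no reward at all, which is exactly the "right reasons, right way" behavior the principle calls for.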

Voice technology and social media-savvy robots provide new accessible solutions that effectively reach consumers in their comfort zones. [Image: Jumpeestudio/iStock]

4: AI Should Level The Playing Field

The inherent scalability of AI provides new opportunities to democratize access to technology. Voice technology and social media-savvy robots provide new accessible solutions that effectively reach consumers in their comfort zones. AI also breaks down barriers to access for people with sight issues, dyslexia, limited mobility and other conditions. AI’s ability to connect with more people also broadens talent pools for tech-minded companies across industries.

Indeed, AI’s greatest value is that it can process, analyze and act on various contextual data sets, including those generated by connected devices and the Internet of Things, to provide a seamless flow of voice-driven information that makes sense to more people.

We need to retrain humans for current and future opportunities that will serve to manage, maintain and complement AI. [Image: Jumpeestudio/iStock]

5: AI Will Replace, But It Must Also Create

I will be honest here. Automation will lead to job replacement. But it will also lead to the creation of brand-new jobs and the evolution of others. The first disciplines to be automated will be customer support, administrative workflows, and rules-based processes. The reason: AI learns faster than humans and is very good at repetitive, mundane tasks. In the long term, deploying AI will be cheaper than employing humans. Consequently, we need to retrain humans for current and future opportunities that will serve to manage, maintain, and complement AI. This is similar to how job descriptions changed in fields reshaped by heavy automation, including shipping, telecom, and automobile manufacturing.

The extent to which robots will push the business world to reinvent how people work is not yet known. However, one thing is certain: the enterprise's approach to building and deploying AI technology needs to focus squarely on ethics and accountability. Businesses building AI technology and supporting applications need to fundamentally incorporate ethics and responsibility into the engineering itself. As workplace AI becomes more human, the technology will need to be held as accountable as its human colleagues–if not more so. Why not start now?


Kriti Sharma is the vice president of bots and AI at Sage Group. She was recently named to Forbes' '30 Under 30' list.
