
Can chatbots make workers more ethical?

Thorny issue at work? Don’t worry about reading that thick tome filled with legalese that you may have received on your first day of work.

[Photo: Christina Morillo/Pexels]

Some questions in life and work have clear answers. But many are more opaque. Consider a question like this: “I have a small business that I am building on nights and weekends. Is that an issue with my current employer?”


The answer: It depends.

If you’re a full-time employee and the business isn’t in direct conflict or competition with your employer, it may not be a problem. But you’d need to make a formal disclosure to your employer to be sure.

Questions like this seldom have yes or no answers, but the details should be in the employer’s code of ethics and conduct. “Like most large companies, Kimberly-Clark maintains dozens of policies, procedures, and instructions to guide employees,” says Ronnie Kann, director of global ethics and compliance at Kimberly-Clark. “These are only effective if employees know there is a policy, where to find that policy, and what the policy requires.” In other words, if you have no idea where that code might be, you’re probably not alone, especially if you work at a large corporation. The other challenge is that these codes are often thick tomes filled with legalese that you may see only once, when you’re hired (and handed a stack of other paperwork).

[Image: courtesy of Convercent]

Ditching the 100-page PDF

But when questions arise regarding what constitutes harassment or fair labor practices, it’s not the type of thing you want to be digging through a 100-page PDF to find.

That’s why Convercent’s CEO Patrick Quinlan and his chief ethics officer, Katie Smith, decided to use their ethics and compliance technology to create a new kind of Code of Conduct. Tapping the power of emerging AI like chatbots, predictive analytics, and natural language processing, they developed an interactive Code of Ethics for themselves as a test case.

The first version, says Quinlan, “was like souped-up web content.” Highly interactive, the code led users through clickable sections instead of having to read page after page of a static document. The second version debuted “Finn” (named after an employee’s dog), a chatbot that popped up to respond to questions or allow the employee to report an issue directly in the chat. Finn is depicted as a gender-neutral robot, which Quinlan says was intentional.


The backend featured an analytics dashboard that could alert company leaders to potential issues by notifying them when there was an uptick in activity, like: “30% of the marketing department in New York accessed the sexual harassment page six times in the last four days.”
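An alert like that boils down to a simple threshold rule over page-view logs. Here is a minimal sketch of that idea in Python; the department names, page names, and threshold are invented for illustration and are not Convercent’s actual logic:

```python
from collections import Counter

def uptick_alerts(access_log, threshold=0.5):
    """Flag policy pages viewed by an unusually large share of a team.

    access_log: list of (department, page) tuples from a recent time window.
    Returns (department, page, share) for pages whose views exceed
    `threshold` as a share of that department's total activity.
    """
    dept_totals = Counter(dept for dept, _ in access_log)
    page_counts = Counter(access_log)
    alerts = []
    for (dept, page), views in page_counts.items():
        share = views / dept_totals[dept]
        if share >= threshold:
            alerts.append((dept, page, share))
    return alerts

# Hypothetical log: two of three marketing-NY events hit one policy page.
log = [
    ("marketing-ny", "sexual-harassment"),
    ("marketing-ny", "sexual-harassment"),
    ("marketing-ny", "travel-expenses"),
    ("engineering", "gifts-policy"),
]
print(uptick_alerts(log))
```

A production dashboard would also window the data by time and compare against each department’s baseline rather than a fixed threshold; this sketch only shows the share-of-activity trigger.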

Although a number of bots, apps, and platforms have launched in the last couple of years to combat workplace harassment, discrimination, fraud, and other issues, Convercent’s offering stands apart by combining each company’s unique code of conduct with an AI component.

Among the first companies to pilot the interactive code was Kimberly-Clark. Kann says that based on employee feedback, the company realized it had an opportunity to substantially improve how policies were organized and stored. Among the concerns were how best to engage and support day-to-day decision-making, and how to provide helpful, real-time information and business-friendly guidance. The result was a multimedia experience that includes policies, training, videos, and interactive search. Their chatbot was branded KayCee (get it?).

[Image: courtesy of Convercent]

Getting Answers

Since the test rolled out last year, more than 21,000 people at Kimberly-Clark across more than 60 countries have used Convercent’s interactive Code in 19 languages. It drove a 2.5x increase in Helpline questions, and employees spent an average of 3.45 minutes on a page. They also initiated more than 3,000 chats with KayCee.

The chats are critical to the mission of getting employees’ questions answered and their reports dealt with promptly and with care. Quinlan says that 34% of Convercent helpline reports fall into the critical category, including harassment, discrimination and abuse of power, fraud and bribery, ethics and compliance violations, wrongful termination and retaliation, or violence.

[Image: courtesy of Convercent]
The bot itself asks a fairly basic question, designed to buck a traditional industry practice that required the reporting party to do a lot of categorization of their own allegation. “It takes great courage to speak up,” Quinlan observes, “in the midst of anguish,” so it doesn’t help when reporters are asked to explain whether they are calling out bribery versus corruption. This “Simple Intake” simply says, “Tell us what happened,” allowing for open-ended reporting. The system reads the response and can route the report to HR, legal, or other teams. Convercent saw 70% more text come through in reporting descriptions with this feature across its customer base, which stands at roughly 6.7 million people globally.
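The routing step described above can be sketched as a keyword classifier over the free-text report. This is a hedged illustration only; the team names, categories, and keywords below are invented and are not Convercent’s actual taxonomy:

```python
# Invented routing table: each team owns a list of trigger keywords.
ROUTES = {
    "hr": ["harassment", "discrimination", "retaliation"],
    "legal": ["bribery", "fraud", "corruption"],
}

def route_report(text):
    """Return the teams a free-text 'tell us what happened' report goes to."""
    lowered = text.lower()
    teams = {team for team, words in ROUTES.items()
             if any(word in lowered for word in words)}
    # Anything unmatched goes to a human triage queue rather than being dropped.
    return sorted(teams) or ["triage"]

print(route_report("I witnessed bribery and then retaliation for reporting it"))
```

The key design point mirrored here is that the reporter never picks a category: classification happens after intake, and ambiguous reports fall through to human review.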

advertisement

Philip Winterburn, Convercent’s chief strategy officer, believes that a chat interface removes a number of barriers, both real and perceived, when it becomes the first point of entry for employees to raise concerns, ask questions, and have a meaningful dialogue.

“Currently our chatbot is ‘rules-based’ and leverages a combination of customizable questions and answers as well as keyword matching to offer assistance,” Winterburn explains. But Quinlan adds that they’re adopting machine learning and natural language processing to improve the conversations. “We’re still on the one yard line,” Quinlan maintains. However, Winterburn adds that machine learning will further help the bot know when it should hand the conversation over to a real human.
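A rules-based bot of the kind Winterburn describes, customizable Q&A pairs plus keyword matching, with a handoff when nothing matches, can be sketched in a few lines. The questions, answers, and keywords here are hypothetical, not the product’s actual content:

```python
# Hypothetical Q&A rules: tuples of trigger keywords mapped to canned answers.
FAQ = {
    ("side business", "moonlighting"):
        "Outside work may be fine if it doesn't compete with the company, "
        "but you need to file a formal disclosure. Want me to start one?",
    ("gift", "vendor"):
        "Gifts from vendors above a nominal value must be declined or reported.",
}

def answer(question):
    """Match the question against keyword rules; fall back to a human."""
    lowered = question.lower()
    for keywords, reply in FAQ.items():
        if any(k in lowered for k in keywords):
            return reply
    # No rule matched: this is the machine-learning-assisted handoff point.
    return "Let me connect you with a member of the ethics team."

print(answer("Is my side business a problem for my employer?"))
```

The fallback branch is where the machine learning Quinlan mentions would plug in: instead of handing off on any miss, a trained model could recognize paraphrases of known questions and reserve the human handoff for genuinely novel ones.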

Machine learning is a tricky thing to manage, as Microsoft learned a couple of years ago when users gamed its chatbot Tay into generating hate speech. Quinlan admits this is a recent conversation within his company, which is trying to be proactive in ensuring people aren’t weaponizing these interactive codes of ethics, for example, to take out a competitor for a new job. “We don’t know of an incident where that happened yet,” Quinlan maintains, but he concedes that with more than 250,000 cases a year, the law of inevitability could change that.

Still, Kann says on balance the experience has been a good one for the company and its people. “Employees are no longer limited to searching for what they need,” Kann says, “Rather, we are able to push information based on the analytics.” Kimberly-Clark’s ambition is to provide the “right content at the right time to the right employee.”

Quinlan puts the onus on employers, which at one point were simply looking for signoffs on their codes of conduct to protect themselves legally. He contends that now it’s about building trust. “What is the follow-up when people engage?” he posits, and what does organizational justice look like? “We can’t just expect people to speak up” and then not have their employer take satisfactory action.

Ultimately, Kann says, “the Interactive Code’s greatest value is driving our ethics and compliance core mission: business engagement.”


About the author

Lydia Dishman is a reporter writing about the intersection of tech, leadership, and innovation. She is a regular contributor to Fast Company and has written for CBS Moneywatch, Fortune, The Guardian, Popular Science, and the New York Times, among others.
