

The secret to rebuilding trust in tech: More emotionally intelligent humans

[Photo: ROBYN BECK/AFP/Getty Images]

By Meghan E. Butler

There is a social truth universally understood but difficult to admit: We don’t like what we don’t trust, and we don’t trust what we don’t understand.

Herein lies the most modern of human dilemmas–navigating advanced technologies, specifically weighing game-changing developments that improve the human experience against the unknown outcomes of “thinking” machines and AI.

In an effort to better understand the trust gap between humans and advanced technologies–and find solutions to close it–I’ve hosted conversations with industry leaders at companies like Dell and data scientists like Jon Christiansen. The main takeaways are not shocking, but they are hugely important because, quite simply, they can actually be solved:

  • The trust gap is the result of poor communications standards
  • Complex technology like AI requires a new approach to trust building
  • The responsibility for trust building lies with organizational leaders

Perhaps most profoundly, everyone pointed to a simple truth either directly or indirectly: For technology made by humans for humans, the secret to closing the trust gap lies in our very humanity–our emotional intelligence.

According to the experts, here’s how AI leaders can set standards within their organizations and throughout the industry to collectively close the trust gap and ensure a greater understanding and adoption of AI.

Acknowledge the trust gap

The “black box” effect, where decisions and actions are made out of sight and beyond collective understanding, is largely responsible for this trust gap between humans and advanced technologies. That ultimately means the trust gap is the result of a massive failure to communicate.

“We trust complicated systems. Artificial intelligence is considered a complex system, and that is a very different animal,” offered Olaf Groth, founder and managing partner of Cambrian AI, and coauthor of Solomon’s Code: Humanity in a World of Thinking Machines.

Groth explained that complicated systems are different from complex systems in that they are sophisticated designs requiring human expertise to understand, but for the most part, they are the sum of their parts. If something tragic happens, like an airplane crash, the failure can be traced back to a problem within the system.

Complex systems, however, cannot be reduced to their parts and often arise from the phenomenon of emergence. Artificial intelligence, a complex system, is a product of emergence: the parts don’t act the same together as they would individually, and they are largely influenced by external factors and data sets. This makes AI seem mysterious and unpredictable, and therefore untrustworthy.

By demonstrating organizational self-awareness and first acknowledging fears of the unknown, leaders can make authentic connections that hot-wire human trust. And hot-wiring human trust is the responsibility of the organization’s communications function, in collaboration with its leadership.

Be transparent about intentions

For those of us outside the hallowed halls of innovation who are nonetheless expected to adopt its products, it’s not enough to hot-wire our trust by acknowledging that it is lacking. Our trust must also be kept, and doing so requires clarity and transparency.

Make no mistake–being transparent does not mean revealing algorithms or trade secrets. It means fulfilling our need to understand the intentions and considerations behind the technology more so than the actual mechanics.

It’s not enough to talk about deep deliberation in development; that deliberation has to be made visible. IBM, for one, has created a standard to ensure accountability among AI creators and designed it to be used by any organization developing AI products.

“We should give more information. Transparency builds trust in those that will use the technology,” offered Francesca Rossi, the AI ethics global leader at IBM. “We believe people should be more aware and educated [on the considerations of the developers]. Our Everyday Ethics for Artificial Intelligence outlines seven requirements and the questions that each AI team member must answer.”

Much like parents are accountable for their children, every person responsible for bringing up AI should be held accountable for their creation.

Emphasize assisting, not replacing, humans

Easing fears and building trust does not come easily or quickly, but it can be done efficiently by improving explainability in general and by positioning advanced technologies in terms of human improvement–baby steps for technology meant to help humans, not giant leaps toward diminishing mankind.

Take Dell, for example. It is creating human-centered AI by keeping users in their optimal zone of productivity. Its AI augments the work experience with a digital twin that can tend to the ephemera of daily computing activity so the user can concentrate on more important work than, say, managing incoming junk mail.

“AI won’t be successful if it does not reduce human stress, or calm them, or allow them to enjoy the experience,” said John Roese, president and CTO of Dell EMC. “One of our measures of success is marking whether or not it improved the human condition. And the best way to measure that was assessing the stress level of the people using it. Are they frustrated? Do their biophysical markers indicate they are exerting more energy to do the tasks, or less?”

Narratives like Dell’s that are in service of us–rather than those about replacing us–will earn our trust, our attention, and our adoption at a far greater pace than those that tell us AI will take over the world.

Gabriel Fairman, founder and CEO of Bureau Works, an AI-driven platform to orchestrate global content delivery within organizations, also believes the trust gap boils down to a communication challenge. The solution lies in preparing people so they can leverage the best of AI rather than fear it.

“AI is especially useful when recognizing data patterns in huge data sets. That does not mean it will replace humans in analyzing data sets,” he said. “It means that humans will be responsible for understanding the ethical implications of those data sets, indirect correlations between data sets, actions desired from these data sets.”

To accomplish this understanding, AI organizations’ communications strategies must focus on education and explainability in a way that emphasizes human values and the user’s responsibility for the outcomes–an approach that is far more impactful than technical specifics.


Foster the humanity of the technologists

The collective “we,” the external assembly expected to adopt AI, is only one side of the trust gap. The other side accounts for the technologists, those geniuses responsible for coding, programming, and developing the technology–the human representatives of the technology itself.

While “soft skills” and other programming related to emotional intelligence are available to the senior-most leaders, such training has been slow to be tailored for the rest of the organization, especially for younger generations and emerging leaders.

“Just like we go through sexual harassment training at work, there should be an emotional intelligence curriculum, especially for the data scientists who are writing the models,” said Lisa Seacat DeLuca, director of IoT & digital twin at IBM.

The entire goal of AI neural networks–the digital equivalent of the human brain–is to replicate what our brains do, but with computational power far beyond our own abilities. AI as cognitive, reasoning technology is all about comprehension, learning, intelligence, and insight generation.

“What cognition, and AI, don’t do, is feel. There’s no emotion in it,” said Jon Christiansen, PhD, data scientist and chief intelligence officer of Sparks Research. “That doesn’t suggest that feeling and emotion are antonyms of cognition, because they are not. They are complementary. Put the two together, and you have a really powerful mechanism. But that starts with the people on the other side of the computer.”

Humans are messy, non-linear, and can’t be optimized, a constant pain point for the most analytical and technical thinkers. So instead of asking technologists only to answer questions about their end users, we should expect them to be able to answer questions about themselves.

Going a step beyond IBM’s suggested inquiries in its Everyday Ethics for Artificial Intelligence, every person responsible for the creation of an AI product should be able to answer the following questions:

  • What do they believe about the tech they’re creating?
  • Why do they believe it?
  • What do they hope to accomplish?
  • How would they describe the intended impact?
  • And perhaps most importantly, why do they care?

This takes us full circle, back to self-awareness as the most fundamental component of emotional intelligence. And until technologists are expected to understand and articulate themselves and their inherent biases, the brains behind AI cannot be held accountable for the consequences of their creations.

Show your people, not your algorithms

Cambrian AI’s Groth offered this: “In the technology-driven economy, power goes to those who hold the technical expertise and the financial capital.”

And with great power comes social responsibility.

Every organization creating and deploying artificial intelligence carries the burden to market it without exposing trade secrets. And the most emotionally intelligent solution is to show us your people, not your algorithms. Show us who they are and, perhaps most importantly, why they care.

“If [the organizations] brought their technologists and data scientists out of the dark and showcased them as actual human beings to the public, it would demonstrate how they are looking at programming AI as more than an outcome. That they are people developing for other people,” explained Aleksandra Przegalinska, an assistant professor at Kozminski University and an AI research fellow at MIT Sloan School of Management. “This would help eliminate the black box mentality.”

Humans are still the heartbeat of artificial intelligence. While there exists malevolence among us, the majority of technologists are well intentioned.

Just like the technology that intimidates us, we all have the ability to change. To evolve. To become more emotionally intelligent for ourselves and the artificial intelligence we’re ushering in.

And commitments to do so will signal to us all that we should not be afraid.


Meghan E. Butler is on a mission to create a more emotionally intelligent world by helping people make real-time connections that matter. She counsels the C-suite and other organizational leaders on internalizing the emotional intelligence framework and deploying it across their teams. She wrote a column at Inc. and contributes to Fast Company, Thrive Global, The Muse, and Rhapsody Magazine for United Airlines, among others. She is a cofounder and partner at Frame+Function, a communications consultancy, and is an emotional intelligence development coach, certified by MHS Systems to administer and coach to the EQ-i 2.0 and EQ-i 360 psychometric assessments.

