
Winning in this new hyper-lean environment requires the entire organization to shift gears and move faster.

Fast Company Executive Board

The Fast Company Executive Board is a private, fee-based network of influential leaders, experts, executives, and entrepreneurs who share their insights with our audience.

BY Greg Samios | 4 minute read

From the moment ChatGPT was first introduced at the end of 2022, it began to transform how we think about innovation. While artificial intelligence (AI) and large language models (LLMs) have been in use for years, the rapid development and success of increasingly advanced ChatGPT models sparked worldwide interest and new research in generative AI applications. The Economist predicts 2024 will be the year generative AI will go mainstream.   

One of the ways generative AI will transform industries is by speeding up technology development. With generative AI, companies are developing the code for new software faster than ever before. Take, for example, Elon Musk’s chatbot Grok, which was trained in just four months. The team behind Grok disclosed they were able to develop the chatbot so quickly by using a development toolkit to accelerate product engineering and research.

In addition, companies are outsourcing coding to AI to remove one of the biggest barriers slowing their speed to market. In a presentation at GitHub Universe 23, Shopify’s VP and head of engineering announced that, using GitHub Copilot, Shopify had developed nearly a million lines of code without human developers.


The increasing speed of innovation is compounded by generative AI’s rapid self-learning capabilities. The faster new code is developed, the faster generative AI learns, iterates, and fine-tunes in an ever-accelerating learning loop.

We need to reinvent the operating model for innovation to keep up with generative AI. We can look to development models of the past for some lessons, but those models weren’t designed to answer the challenges, opportunities, and questions posed by generative AI-based technology.

From my background in engineering and technology, I believe we are entering the hyper-lean stage of rapid product development.

A BRIEF HISTORY OF DEVELOPMENT MODELS AND THE INTRODUCTION OF HYPER-LEAN THINKING

The use of product development models dates back to the 1970s, when Dr. Winston W. Royce introduced the concept of the waterfall model, which uses a logical progression of steps with set endpoints to guide teams toward their end development goal. Often, waterfall timelines would stretch over several years before a new product was introduced.

When software developers began working on internet applications, competitive pressures increased and there was a need to bring products to market faster. Developers then shifted to a lean mindset, which entails identifying the value to the customer, creating a simplified, continuous flow to bring that product to the customer, and managing that process toward perfection to achieve low-cost, high-quality, rapid throughput.

In the early 2000s, technologists first introduced agile thinking to give teams more flexibility to continuously learn and innovate throughout the development process. Lean and agile thinking remain the primary methodologies used in product development.

Today, developers face the new challenge of keeping up with generative AI as it iterates and learns at hyper-speed. How do you systematically engage customers and get enough feedback to keep up with the ever-accelerating learning loop? How do you create customer intimacy on an institutional level to ensure the end product resonates with your user? Winning in this new hyper-lean environment requires the entire organization to shift gears and move faster.

On the flip side, the introduction of rapid generative AI-driven innovation will also require organizations to put a secure system of checkpoints in place to catch any errors or miscalculations quickly. Given the rapid pace of generative AI self-learning and the potential for unforeseen implications, organizations need short feedback loops to catch any warning signals early before they become bigger problems.


BRINGING HYPER-LEAN INNOVATION TO HEALTHCARE

The need for a hyper-lean approach coupling rapid innovation with security checkpoints is clear when you look at how companies are starting to test generative AI applications in healthcare.

Recently, Long Island University researchers challenged ChatGPT with a series of health-related questions, such as questions about potential drug interactions. The research team found the chatbot produced false or incomplete answers for 29 out of 39 questions. In a press release, one of the study authors stated, “Anyone who uses ChatGPT for medication-related information should verify the information using trusted sources.”

To help stakeholders navigate the use of generative AI in healthcare, the American Medical Association (AMA) published a set of principles for the development, deployment, and use of augmented intelligence. The organization calls for “a risk-based approach,” suggesting healthcare organizations establish policies to ensure generative AI technologies are used in line with the current standard of care. In addition, the principles state that the level of oversight provided should be “proportionate to the potential overall or disparate harm and consequences the AI system might introduce.”  

It’s not just medical professionals and societies calling for a responsible approach to embedding generative AI in healthcare. A recent national consumer study found Americans have concerns about the use of generative AI in their own care, most notably with understanding where generative AI gets information. In fact, 86% of Americans agree a problem with using generative AI in healthcare is not knowing where the information comes from and how it is validated, and 82% agree a problem is that information may be coming from internet searches with no filter.

Still, the same survey showed half of Americans believe generative AI will be embedded in clinician/patient relationships within the next five years. With generative AI on the horizon, healthcare organizations must adapt quickly, but taking a hyper-lean approach with extensive piloting and testing does not have to mean extended timelines.

Instead, we should look for ways to marry existing gold-standard healthcare tools with the power of generative AI—taking advantage of this transformative technology without compromising the rigor required for vetting any new product being introduced into patient care.

ABOUT THE AUTHOR

Greg Samios is President and CEO of Clinical Effectiveness at Wolters Kluwer Health.

