
The most innovative AI applications today have this in common

The most forward-thinking developers are blending domain intelligence and trust to make LLMs work for corporate America.

BY Mark Sullivan
3 minute read

Generative AI models don’t know what they don’t know. Ask a question on a subject they haven’t encountered in their training, and they might just make something up. For an individual playing around with the tech, that’s an annoyance. For a company, it’s a nightmare.

That’s why the best AI application developers are building their own AI around the foundation model, leveraging unique technology or data to get the good out of LLMs while managing the bad. When done well, these apps or services can do far more—and create far more real-world value—than an LLM-powered chatbot can by itself. 

“In an LLM-enabled world, it is quite tempting for a large model to feel like a hammer, and everything else a nail,” says Chris Kauffman, a partner at the VC firm General Catalyst. “In production, the truth is more nuanced,” he says, adding that the most forward-thinking builders are creating what he calls “integrated circuits of intelligence” with this blended approach.  

Some of the most effective AI applications add domain expertise that doesn’t exist in the vast body of internet content on which most LLMs are trained. Casetext adds value to its legal assistant AI, CoCounsel, via specialized data: CoCounsel gets its basic text summarization and writing capabilities from OpenAI’s GPT-4 language model, but the LLM also has access to “ground truth” information from proprietary databases of verified legal data.

Interplay Learning, which develops training resources for electricians and others in the skilled trades, married its extensive multimedia knowledge base with an LLM to launch its training assistant SAM (Skill Adviser and Mentor), which lets workers ask specific questions while they’re in the middle of a job.

And Seekr, which makes a search engine that scores news content on its reliability, fine-tunes its LLM with media expertise collected from a panel of independent journalists. It also trains the AI on a large, painstakingly assembled repository of well-reported, well-written news articles. The goal is an AI tool that functions something like a human editor with a distaste for spin, exaggeration, partisan bias, and clickbait headlines.
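The pattern these companies describe is often called retrieval-augmented generation: before the model answers, the application looks up verified source material and instructs the model to answer only from it. Here is a minimal sketch of that idea in Python. The sample corpus, the toy keyword-overlap retriever, and the prompt wording are all illustrative assumptions, not any vendor’s actual implementation; only the OpenAI chat API call is real.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Hypothetical throughout: the corpus, retriever, and prompt are
# illustrative stand-ins, not Casetext's or Seekr's actual systems.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for a proprietary database of verified "ground truth" documents.
verified_docs = [
    "Smith v. Jones (2019): the appellate court held that ...",
    "State filing rules: motions must be served within 21 days of ...",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by keyword overlap with the question.
    A production system would use embeddings and a vector index instead."""
    q_words = set(question.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer(question: str) -> str:
    # Ground the model: pass the retrieved sources in the prompt and
    # tell it to refuse rather than guess when the sources fall short.
    context = "\n\n".join(retrieve(question, verified_docs))
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Answer using ONLY the provided sources. "
                           "If the sources do not contain the answer, say so.",
            },
            {
                "role": "user",
                "content": f"Sources:\n{context}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content

print(answer("What is the deadline for serving a motion?"))
```

The design choice that matters here is the system instruction: because the model is constrained to the retrieved, verified material and told to admit when it can’t answer, the application gets the LLM’s fluency while limiting the made-up answers the article opens with.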


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

