
Lobe Is A Machine Learning Platform For Complete Idiots

From Apple, Facebook, and Nest alum Mike Matas, Lobe makes it so you don’t need to be a data scientist to build incredible AI tools.

[Photo: courtesy of Lobe]

The theory of machine learning isn’t hard to grasp. If you want to train software to spot a face in a photograph, you amass a large pile of photographs containing faces. You draw a box around each face. And over millions of rounds of practice, the software learns to spot faces in pretty much any photograph–as if it were twisting a wax key in a lock over and over again until it can unlock the door effortlessly.


The problem is that while the theory is largely understandable, the tools are hard to use, let alone master. You have to write all sorts of custom bits of code, plug them into multiple pieces of software, and bring a near-intuitive grasp of advanced data analytics to get anywhere.

Or maybe that was the case, until the launch of Lobe, which looks like the most user-friendly take on machine learning yet. All you need is a big pile of images or sounds, which you drag and drop onto Lobe’s website. From there, Lobe automatically begins building a machine that’s capable of learning pretty much anything. There’s no coding required, and you can even stack existing bits of AI onto your project, much like Lego bricks.

Meanwhile, everything you build and output is based upon the popular TensorFlow standard–what’s known as a machine learning framework–and can be deployed to apps on the web, iOS, and Android.

Lobe, the company, is a venture-backed startup that’s just come out of stealth following nearly two years of development. Its head of design is Mike Matas, who founded the landmark Delicious Library before working on products for Apple and Nest, and who is most recently known for helping launch the lauded but ill-fated Paper app at Facebook. As Matas tells me, he himself is a designer who can’t code. And the ML revolution is leaving a lot of designers behind. He hopes that Lobe can level the playing field for creative thinkers who would like to develop, or at least rapidly prototype, ML tools.

“As a UI designer when I first started, I made everything in Photoshop. Everything I designed was through a static interface. So every solution was a button on the screen,” says Matas. “Then I learned UX prototyping tools. With them, we could do iterative prototyping, and suddenly we could solve problems with motions, interaction, and gestures. And you get things like the iPhone X home swipe.”

“To me, AI is like the next wave of user interfaces, where UI can just disappear and work,” he says. And as a proof point, many of the demos his team at Lobe is sharing were actually created by Matas on his own.


So how does it work? Matas demonstrated by building an image recognition system that could translate a person’s gestures into emoji, like a thumbs-up or a wave.

First, he built a training set by creating folders labeled with various emoji. Then he filled each folder with photos of people making the corresponding gesture.

To train the machine, he dragged and dropped this whole set of folders into Lobe’s browser window. Lobe translated this into a clear workflow as its servers began running trials to link each gesture to its emoji. This isn’t just barebones image classification: Lobe automatically blurs and brightens his source photos to add more variance to the training set. That means the model can learn a whole lot more from far fewer photos, which saves a lot of time.
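For readers who want a sense of what Lobe is automating here, this is a rough sketch of the same workflow in plain TensorFlow/Keras. The folder names, image size, augmentation choices, and tiny network are illustrative assumptions on my part, not Lobe’s actual internals:

import tensorflow as tf

# Each subfolder of "gestures/" is one label, e.g. gestures/thumbs_up/, gestures/wave/
train_ds = tf.keras.utils.image_dataset_from_directory(
    "gestures/", image_size=(160, 160), batch_size=32)
num_classes = len(train_ds.class_names)

# Augmentation roughly analogous to the brightening and flipping Lobe applies
# automatically to stretch a small training set further.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomBrightness(0.2),
    tf.keras.layers.RandomContrast(0.2),
])

model = tf.keras.Sequential([
    augment,                                  # only active during training
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)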

In real time, Lobe’s dashboard shows how accurate the image recognition model has become, and lets you tap through your examples one by one to see the conclusion the computer has drawn in each instance. If Matas wanted, he could drill down into pretty much anything to tweak what’s going on. In the photo module, he can adjust parameters he’d like to add as edge cases (like telling the machine to mix in examples with extreme brightness, or to flip a hand left-to-right). He can also access a view that shows literally what the computer “sees” as it processes an image, in an attempt to reduce the opacity of how these ML systems develop. It’s tempting to imagine this sort of view helping to one day reduce algorithmic bias.
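Lobe’s dashboard is essentially surfacing what a few lines of evaluation code would tell you. Continuing the hypothetical sketch above, tapping through examples corresponds to something like this:

import numpy as np

# Inspect the model's conclusion on individual examples, one by one,
# the way Lobe's dashboard lets you tap through your training images.
for images, labels in train_ds.take(1):
    probs = model.predict(images)
    for true_label, p in zip(labels.numpy(), probs):
        pred = int(np.argmax(p))
        print(f"true={train_ds.class_names[true_label]}  "
              f"predicted={train_ds.class_names[pred]}  "
              f"confidence={p[pred]:.2f}")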


Lobe’s next trick is that you don’t have to start from scratch with your emoji project. You can access Lobe’s database of “lobes” (see a theme?). These are like ready-made plugins provided by Lobe (but eventually open to any user to submit) that allow you to chain ML logic together, and essentially build your project atop the intelligence of one that’s already made.

In this case, Matas can add a lobe that’s capable of spotting hands to his emoji project. That means his machine can eliminate the noise of the photo, and just focus on the hand itself. In other words, his machine no longer needs to squint through an entire photo to figure out where the hand sign is. Every photo will be logically cropped to just the hand, so it can learn faster.
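Conceptually, chaining lobes is just piping one model’s output into another’s input. Here is a hypothetical sketch of the hand example–hand_detector stands in for whatever ready-made hand-spotting lobe you’d pull from the library, and its interface is my assumption:

import tensorflow as tf

def classify_gesture(image, hand_detector, gesture_classifier):
    """Detect the hand first, then classify only the cropped region."""
    # Hypothetical detector returning a normalized box (ymin, xmin, ymax, xmax).
    ymin, xmin, ymax, xmax = hand_detector(image)
    h, w = image.shape[0], image.shape[1]
    crop = image[int(ymin * h):int(ymax * h), int(xmin * w):int(xmax * w)]
    crop = tf.image.resize(crop, (160, 160))
    # The downstream classifier never sees the rest of the photo.
    return gesture_classifier(tf.expand_dims(crop, 0))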

“We’re making AI more modular,” says Matas. “That’s how other software is built. You have layers connected together, and you leverage people’s code to build more abstract things. Right now, a lot of AI stuff is, one model does everything.” For instance, if someone were to train an AI to detect songs by Beethoven, they might train one model by feeding it a lot of Beethoven songs. But perhaps it would be faster, and more flexible in the long run, to build this model upon models that can identify instruments, and genres of music, and other artists, first.
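In Keras terms, that kind of modularity might look like freezing a couple of existing models and training a small new layer on top of their combined outputs. The instrument_model and genre_model below are hypothetical stand-ins, and the input shape is an assumption:

import tensorflow as tf

def build_composer_classifier(instrument_model, genre_model, num_composers):
    """Hypothetical: stack a small classifier atop two existing audio models."""
    instrument_model.trainable = False  # reuse the lower-level models as-is
    genre_model.trainable = False
    audio_in = tf.keras.Input(shape=(16000,))  # assumed one second of audio
    features = tf.keras.layers.Concatenate()(
        [instrument_model(audio_in), genre_model(audio_in)])
    out = tf.keras.layers.Dense(num_composers, activation="softmax")(features)
    return tf.keras.Model(audio_in, out)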

In about 90 hours of server time, Matas has his hand-to-emoji translator, and it runs with about 90% accuracy. This translator “model” could then be exported to JavaScript, or run directly on an iPhone to power an iOS app. If someone actually knows TensorFlow or Keras well, they can access and adjust the deepest bits of ML code inside Lobe, too. It’s just not necessary for the newbs.
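The export step itself relies on the standard converters for the underlying framework. Assuming the hypothetical Keras model from the earlier sketch, the usual routes to the browser and to an iPhone look roughly like this (the tensorflowjs and coremltools packages are separate installs, and this is my approximation rather than Lobe’s actual export path):

# To the browser: save the Keras model in TensorFlow.js format.
import tensorflowjs as tfjs
tfjs.converters.save_keras_model(model, "web_model/")

# To an iPhone: convert the same model to Core ML for use in an iOS app.
import coremltools as ct
mlmodel = ct.convert(model)
mlmodel.save("GestureClassifier.mlmodel")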

Lobe also lets you work with audio files, which for now allows you to create tools like music visualizers, though Matas hopes to expand it to support voice. And you can create generative images–much like we’ve seen with terrifying faces and Van Goghs. Basically, most of the ML-meets-media stuff you might want to do seems covered by Lobe. Since I haven’t tested it myself, or spoken to any serious ML programmer who has, the limitations are tough to know. Still, Lobe seems to take the user-friendly approach to ML that Google has with its AI Experiments, but it matures that idea from a toy into real, shapeable code that anyone can ship.


How do you try Lobe, and how much does it cost? The company is accepting beta testers on a small, case-by-case basis right now, and it’s not discussing its plans for monetization. Given that much of Lobe’s service beyond that handy front-end interface is actually its server resources, which spend considerable compute time training your machines, access could be expensive, and may even require a pricing model that’s metered by how much compute you use rather than some one-size-fits-all monthly rate. We asked about Lobe’s business model, but the company declined to elaborate. What we do know is that Lobe won’t be a piece of software you can download. It’s a platform and a service that wants to own a sizable chunk of the burgeoning ML market.


About the author

Mark Wilson is a senior writer at Fast Company. He started Philanthroper.com, a simple way to give back every day.
