When a company’s bottom line depends on your app working the right way, you can’t just hand them a delightful user experience and expect them to pay per seat. Training, productivity and infallibility are the priorities, not the enthusiastic approval of the user — but that presents a real tension if your product is a freemium service relying on word-of-mouth growth.
“For enterprise, the elephant in the room is the expense of training the people who operate the software,” explains Alex Balva, CTO of InboundWriter, a search optimization service for bloggers and content marketers. “That’s the reason there are deterministic apps out there: they save money.”
“Deterministic,” in this case, means “no fun at all.” A deterministic app railroads you through a step-by-step process without giving you any way to screw up. That’s fine on the tiny screen of a mobile device (think Instagram!) but it’s a bore on desktop computers (think Bluetooth wizard). Finding a middle ground is precarious, but InboundWriter is using a pattern recognition engine that may have solved the problem. Here’s how it works.
InboundWriter targets people like yours truly, who spend much of their day in Microsoft Word, Apple Pages, Evernote, or WordPress. Writing inside InboundWriter had to feel like those familiar apps, while discreetly adding a powerful, usable optimization tool. To succeed, Balva and his technical team had to build a foolproof enterprise app that doesn’t completely suck to use.
The company just opened its cloud-based service to enterprises this month, and if anyone needed to crack the code of the user-friendly enterprise app, it was these guys. “There are huge problems in our space,” says Skip Besthoff, InboundWriter’s CEO. “There are lots of different analytics products on the market, but content creators don’t want all that stuff. They want simplicity. They want push-button functionality where they don’t have to make this massive investment in understanding analytics and how to optimize content.”
InboundWriter basically looks over your shoulder as you write, offering suggestions for improvement without interrupting the creative process. “There’s no pop-up that says, ‘You’ve got a typo! Shame on you!’” says Balva. “We are quite discreet.”
The process begins when InboundWriter starts analyzing the very first words you write, in order to better understand the potential context of your piece and smartly narrow its suggestions. Based on trigrams, or groups of three words, InboundWriter identifies the central topic of your post, before using these small, correlative groups of words to try to parse the difference between, say, a car company named Lotus and a flower. Once it’s figured out your topic, InboundWriter begins researching sites which the system knows to be authorities on the subject, looking at the combinations of terms that are characteristic. Based on the niche of your article, InboundWriter predicts its Google ranking and associated keywords, summarizing the findings in a set of dynamic tips you can use to boost your post’s position in search results.
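The article doesn’t disclose InboundWriter’s implementation, but the trigram idea it describes can be sketched in a few lines. This is a minimal illustration, not the company’s code: the most frequent three-word groups serve as topic signals, and the words that co-occur in them are what disambiguate one “Lotus” from another.

```python
from collections import Counter

def trigrams(text):
    """Split text into overlapping three-word groups (trigrams)."""
    words = text.lower().split()
    return [tuple(words[i:i + 3]) for i in range(len(words) - 2)]

def topic_hints(text, top_n=3):
    """Return the most frequent trigrams as rough topic signals.

    Co-occurring words carry the sense: a trigram like
    ("lotus", "exige", "handles") points to the car company,
    while ("lotus", "flower", "petals") points to the plant.
    """
    return [t for t, _ in Counter(trigrams(text)).most_common(top_n)]
```

A real system would also weight trigrams against a reference corpus rather than by raw frequency, but the principle — small correlative word groups standing in for topic — is the same.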
What the app advises, and when it chooses to offer the tip, is up to its pattern recognition engine. InboundWriter re-evaluates the structure and terminology of your writing with every keystroke, looking for patterns in your verbiage or layout that it predicts will cause your post to suffer in search rankings. “When the app knows a pattern was triggered, it can tell you about it in a user-friendly way and show you a recommendation which is aimed at undoing it,” says Balva. Typical bad patterns include inconsistent terminology throughout the article, overly dense text, a lack of multimedia, and sentences that are too long or too short.
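The pattern checks the article names can be approximated with simple heuristics. The thresholds below are illustrative guesses, not InboundWriter’s actual rules:

```python
import re

def flag_patterns(text, key_term):
    """Flag rough writing patterns of the kind the article lists.

    `key_term` is the post's central topic term; the numeric
    thresholds are hypothetical, chosen only for illustration.
    """
    flags = []
    sentences = [s for s in re.split(r'[.!?]+\s*', text) if s]
    # Inconsistent terminology: the central term appears too rarely.
    if text.lower().count(key_term.lower()) < 2:
        flags.append("key term used inconsistently")
    # Overly long or short sentences.
    for s in sentences:
        n = len(s.split())
        if n > 30:
            flags.append("sentence too long")
        elif n < 4:
            flags.append("sentence too short")
    # Dense text: a long piece with no paragraph breaks.
    if len(text.split()) > 150 and "\n\n" not in text:
        flags.append("text too densely structured")
    return flags
```

Running such checks on every keystroke is cheap, which is consistent with the in-browser execution described later in the piece.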
The overall SEO score of the post is continuously monitored by a simple gauge. Behind that modest display, however, sits an enormous number of computing operations: each keyword generated takes a full minute to discover and requires over 2,000 API calls.
Finally, an extraction algorithm pulls out specific keywords and rates them for features like search popularity and competitiveness. The result is a list of keywords rated zero to five stars, which you can see below.
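The article doesn’t specify how the star ratings are computed, but one plausible sketch combines the two features it mentions — search popularity and competitiveness — into a 0-to-5 scale. The weighting here is purely hypothetical:

```python
def star_rating(popularity, competitiveness):
    """Map a keyword's features onto a 0-5 star score.

    `popularity` is normalized search volume (0-1) and
    `competitiveness` is how contested the keyword is (0-1,
    higher = harder to rank for). Popular-but-winnable
    keywords score highest. The formula is an illustration,
    not InboundWriter's.
    """
    score = popularity * (1 - competitiveness)
    return round(score * 5)
```

Any real extraction pipeline would derive these two inputs from the thousands of API calls described above; the star score is just the final, human-readable compression.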
The pattern recognition engine is time-sensitive. “We prioritize those more fundamental patterns when you’re first writing, and once you address those, we give you more fine-grained spit and polish recommendations,” says Balva. Remarkably, all this software is so well-packed it can run in the user’s browser, not on the server side.
“There is no way for us to enumerate the ways that people write content, so we want to unconstrain them as much as we can,” says Balva. “That’s the thing that excites me about the product most of all: The user experience was an original idea, the idea that you can build an application driven by the customer interaction, where we only constrain you in the cases where you can actually shoot yourself in the foot.”
Since Google constantly updates its black-box search algorithms, as it recently did with Panda and Penguin, InboundWriter must constantly adapt. “Ours is a very complicated algorithm, and it’s always changing,” says Balva. “But if you are serious about building a product of analytical measure and you’re constantly making changes to the algorithm, you have to ask repeatedly: Is this really better? There are a lot of companies which pull things out of their butt. They say, this is how we combine the data and we think we’re right. But the important question is: Can you close the loop and prove it with trends and correlations?”
Defining what makes a “better” algorithm is a process of trial and error, says Balva, but there are general principles you can follow to make the process manageable.
To evaluate the efficacy of each iteration on their pattern recognition engine, the InboundWriter team starts with a piece of content that has already been indexed and ranked by Google. For the purposes of the experiment, they set the known ranking aside, use their own algorithms to predict it, and then measure how far the prediction misses — a measurement known as an error function. Here’s how they do it.
- Form a theory or conjecture about how to improve your algorithm’s accuracy, which you then try to prove or disprove. Expect to be wrong. “We sometimes model a theory and it makes perfect sense,” says Balva. “Then it blows up in testing.”
- Take a baseline reading from your error function. Since you’re looking to see if your next iteration is better than the baseline, assume that your current algorithm performs at a rating of 100.
- Find a correlation you’ll use as a metric for good/bad change. Better yet, choose several correlations, in order to simulate different use cases and different types of users.
- Define your level of precision. How sensitive do you want your results to be to detailed statistics? Say InboundWriter would like to optimize an article onto page one of Google’s rankings; is any position on page one good enough? Now suppose there’s a body of stats telling you that the very first result on page one gets 50% of the clicks while the second gets only 15%. Will your error function be sensitive enough to account for that kind of statistic?
Lastly, don’t go nuts. Simplify things by examining correlations that are easy to evaluate in the marketplace, and assess only one theory at a time. “Don’t worry about being too exact,” says Balva. “You don’t have to outrun the bear, just the other tourists.”
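The evaluation loop above can be sketched as follows. Both `predict_rank` and the corpus of already-ranked articles are stand-ins for InboundWriter’s internals; only the method (baseline at 100, one change at a time, measured error) comes from the article:

```python
def error(predict_rank, corpus):
    """Mean absolute error between predicted and actual Google ranks.

    `corpus` is a list of (article_text, actual_rank) pairs whose
    rankings are already known; `predict_rank` is the algorithm
    iteration under test.
    """
    total = sum(abs(predict_rank(text) - rank) for text, rank in corpus)
    return total / len(corpus)

def is_improvement(old_algo, new_algo, corpus):
    """Normalize the current algorithm to a baseline of 100 and
    check whether the new iteration scores below it."""
    baseline = error(old_algo, corpus)
    candidate = error(new_algo, corpus)
    # Scale so the baseline reads 100, as in the article's method.
    return (candidate / baseline) * 100 < 100
```

In practice you would run this over several corpora, one per correlation you chose as a metric, so that different use cases and user types are each represented — and, per Balva’s advice, change only one theory per run.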
[Image: Flickr user G A R N E T]