
Why Tech’s Biggest Players Favor The Illusion Of Progress Over Real Innovation

Science fiction promised a future of intelligent devices designed to serve their owners. But today’s technology serves its manufacturers more than its end user.

BY Daniel W. Rasmus

On the bridge of the Starship Enterprise, people routinely talk to the computer in a casual, conversational manner. It is not an uncommon representation of the future in science fiction. At the core of those computers sits a deep understanding of human language.

Although voice recognition has improved dramatically over the last decade, it remains a statistical activity, not a cognitive one. In other words, computers may be able to recognize, discern and disambiguate sentences through the identification of phonemes, but they are unable to understand what they recognize.

Many current technologies use statistical methods to approximate intelligent responses: voice recognition systems in call centers, the matching of advertisements to webpage viewers, buying suggestions in online marketplaces, influence scores in social media, and sentiment analysis in marketing.

In early artificial intelligence work that sought to emulate a theory of mind, researchers would discuss the edge of the domain, where expertise within the system became irrelevant. A program that contained all of the knowledge of a manufacturing function was useless at solving even the most analogous problem in another domain or situation. But for all of this fragility, those systems were able to explain their reasoning, to illustrate the complex path of logic that had them choose one answer over another.

Statistical methods offer no such transparency, and they are no more robust. A Bayesian or hidden Markov model works only within the domain in which it is trained. The generalized algorithms, like the inference engines before them, can be applied to more than one problem, but for the most part they are limited to one problem at a time.

The use of statistical models to mimic intelligence is not the only failing of our current technology. As desktop sales continue to plummet, companies like Intel and Microsoft talk about innovation but offer very little in the way of software that takes advantage of Intel’s ever more powerful processors. It is not that we have solved all the problems of computing and must now be content with incremental innovation. What we face is a world where the large manufacturers of hardware and software have become protective of their franchises and unwilling to disrupt their own markets. The technology exists for radical reinvention, but it is more likely to come from Kickstarter than from companies like Microsoft, Google, or Apple. When those companies see a disruption, they invest not their intellect but their marketing and legal budgets in making sure the new idea is perceived as either scary or irrelevant. Or they buy the technology, often slowing its trajectory, or worse, squirreling it away in favor of existing investments. Most likely they ignore anything that doesn’t fit into their core products or services, only to rally against it when it becomes evident that it matters in a market they care about.

We don’t have to look to the heady area of artificial intelligence, or debate the value of statistical models versus cognitive models, to see how our future has been stalled. We need look no further than the user interface, our personal information, or our applications to see that even simple activities that should be commonplace are not just uncommon; they simply don’t exist from any of the major suppliers of technology.

The User Interface: I have grown weary of hearing how the movie Minority Report foretold the future of the user interface. We find very little evidence of it in all but the most advanced prototypes. We still have time, of course, to meet the gesturing, voice-recognizing, face-recognizing, transparent world of Minority Report, but the real issue lies not in our failure to achieve the movie’s vision, but in our failure to imagine anything beyond it. We have grown complacent with windows and mice. Human gestures, as innovative as they may be with touch systems or skeletal recognition systems like Microsoft’s Kinect, essentially substitute one input device for another. And rather than invent new systems that understand gestures and voice at a fundamental level, and use those approaches to drive innovation and design, software manufacturers layer new interfaces atop aging kernels.

User interfaces (UIs) have evolved very little since Xerox invented the graphical user interface in the 1970s. Like many technologies, they have been greatly refined, blinged out with color, wallpaper, and three dimensions, but they remain fundamentally unchanged. The advent of Microsoft’s tiled interface in Windows 8 could be considered a retrenchment from the empowerment of end users because it offers less, rather than more, capability to craft a personally meaningful computing environment. Windows 8 offers a generic computing environment where the only customization comes in the form of which tiles are visible. Google’s Chrome similarly fails to offer a more compelling computing environment, stripping away much of the offline data and offline application access and creating machines that serve the cloud rather than the user. Both become “windowing systems” in a more literal sense, offering windows into data they do not possess. iOS, with its tightly curated App Store and mostly walled-off apps, also fails to facilitate adaptive interfaces.

The designs of Windows 8, Chrome, and iOS, however, are only symptoms of a deeper problem with user interfaces: UIs don’t reflect the integrated way that people work with data, tools, and processes. User interfaces remain a hodgepodge of inconsistent, often redundant features and functions that the user must assemble into a coherent work environment. No wonder we use only a fraction of the features provided, given that most people’s core job is not to create their work environment but to perform work.

We were promised user interfaces that grokked our work, that were contextual and information aware, that aided and assisted us, that understood the data they were managing and matched it up with applications that could provide insight or analysis, that could invoke processes, structured and ad hoc, to facilitate our own learning, to provide value to others, or to accomplish a task or goal. Today’s operating systems may be colorful and animated, customizable and increasingly secure, but what they fail to do is partner with their owners to proactively help them get their work done.

Information Management: The cloud wastes computing resources. To prove this we need do little more than look at a multi-core device, be it powered by Intel, AMD, or Samsung, when we are not doing anything, and recognize that except for the most mundane maintenance, security, or synchronization tasks, our devices aren’t doing anything either. Their autonomous functions carry on while their higher processing potential idles. The cloud certainly facilitates the exchange of information, and when employed properly it greatly reduces the possibility of data loss. If a hard disk crashes, storage in the cloud can make files immediately available elsewhere, or even recover whole system images, depending on what people were willing to spend on services.

The cloud, however, adds nothing to information management. The cloud simply stores bits. The cloud does not help make sense of our information.

When I managed Microsoft’s Center for Information Work, one of our demonstrations showed end users making sense of relationships between data elements, from word processing documents to appointments and images, and creating logical relationships between those elements. Although folders create a relationship, the items inside folders remain unaware of each other. In this speculative demonstration, the documents knew they were related, and therefore one could ask questions like “What is the next appointment related to this collection of information?” or issue commands like “Show me all images related to this document.” And rather than common, columnar representations of items in a folder, three-dimensional visualization permitted an exploration of not only the documents but their relationships.
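To make the idea concrete, here is a minimal sketch, in Python, of what “items that know they are related” might look like. It is not the Center for Information Work prototype; the item names, fields, and relationship model are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Item:
    """A piece of personal information that knows what it is related to."""
    name: str
    kind: str                      # "document", "appointment", or "image"
    when: datetime | None = None   # only meaningful for appointments
    related: set[str] = field(default_factory=set)

def link(a: Item, b: Item) -> None:
    """Record a two-way relationship between items."""
    a.related.add(b.name)
    b.related.add(a.name)

def next_appointment(items: dict[str, Item], doc_name: str) -> Item | None:
    """Answer: what is the next appointment related to this document?"""
    doc = items[doc_name]
    upcoming = [i for i in items.values()
                if i.kind == "appointment"
                and i.name in doc.related
                and i.when is not None
                and i.when > datetime.now()]
    return min(upcoming, key=lambda i: i.when) if upcoming else None

# Invented example data: a proposal linked to a review meeting and a diagram.
items = {i.name: i for i in (
    Item("proposal.docx", "document"),
    Item("design review", "appointment", when=datetime(2030, 1, 15, 10, 0)),
    Item("architecture.png", "image"),
)}
link(items["proposal.docx"], items["design review"])
link(items["proposal.docx"], items["architecture.png"])

print(next_appointment(items, "proposal.docx"))   # the design review
print([n for n in items["proposal.docx"].related
       if items[n].kind == "image"])              # images related to this document
```

In a real system the relationships would be inferred rather than hand-coded, but even this toy version shows why items that are aware of each other can answer questions a folder hierarchy cannot.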

We further surmised that in the future, massive local storage and processing would work in parallel with cloud storage and services, with local processing making sense of data by drawing upon the very personal context of its end user. Multiple systems could be configured to belong to a person, their data would be synchronized, and that synchronized data would then be analyzed, even while the computers were “sleeping,” so that they “woke up” more intelligent than when they were last used. This “dream state” for the computer would be the equivalent of one aspect of human dreaming, which appears to involve making sense of the day’s data collection.

Despite massive improvements in processor power, and the ability of some processors to work even while the overall device rests, software engineers have failed to pursue technology that helps the owners of data better organize what they own, let alone provide insight into it. They have also done very little to take advantage of home networking to offer distributed computing capabilities. That is not altogether surprising: most computing can easily take place on a single machine, and the companies looking to perpetuate current models, or to rapidly develop new applications for the mobile market, think about quick turns and small footprints. I can share all of my family pictures easily enough, but they end up duplicated, misnamed, misclassified, and scattered among hard drives, SD cards, and USB sticks, not to mention Dropbox and iCloud. A distributed network could manage images effectively, and passively. Unfortunately, such an investment, though it would save computer owners hundreds of hours of personal curation, doesn’t offer computer manufacturers a clear revenue stream.

One other small area brings information management issues into clear relief: the bookmark. One of the simplest background tasks for any system, be it Windows, OS X, Chrome, or Linux, would be to monitor the validity of bookmarks and let their owner know when they become invalid, or when their content changes in a significant way. Third-party applications offer this in batches, run like a 1970s job on a mainframe computer, but not one major computer or software manufacturer has ever offered this, or any information management or awareness technology, in their core operating system. Why not? Because, like image management, there is no revenue to be had, and, to be completely transparent, the major technology suppliers already know more about the sites you visit than you likely remember about the ones you wish to return to. Providing more value to the end user does not bring them additional knowledge.
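To show how small this missing feature really is, here is a minimal sketch of a bookmark checker in Python, using only the standard library. The bookmark list and the content-hashing scheme are invented for the example; a real implementation would read the browser’s own bookmark store and run quietly as a background task.

```python
import hashlib
import urllib.request
from urllib.error import HTTPError, URLError

# Illustrative bookmark store: URL -> hash of the page the last time it was checked.
bookmarks: dict[str, str | None] = {
    "https://example.com/": None,
    "https://example.com/a-page-that-moved": None,
}

def check_bookmark(url: str, last_hash: str | None) -> tuple[str, str | None]:
    """Return (status, new_hash) for a single bookmark."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
    except (HTTPError, URLError, OSError):
        return "broken", last_hash
    new_hash = hashlib.sha256(body).hexdigest()
    if last_hash is None:
        return "first check", new_hash
    if new_hash != last_hash:
        return "content changed", new_hash
    return "unchanged", new_hash

for url, last in bookmarks.items():
    status, bookmarks[url] = check_bookmark(url, last)
    print(f"{status:15s} {url}")
```

Flagging broken links is trivial; deciding what counts as a “significant” content change is the harder, and more interesting, problem a real operating system service would have to solve.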

We were promised a future where our computers retrieved information concisely and altruistically, without the need for our interactions to be paid for by advertising. We were promised a future where our information found us, waiting patiently if unimportant, chasing us down for critical events. That future seems as far away as ever, and the science fiction stories that presaged it seem increasingly naïve and anachronistic.

Apps: The abbreviation of the word application to “app” does not bode well for its future. By adopting a cute, consumer-driven word and definition, coding has become more and more focused on the development of siloed, standalone pieces of code that do one thing. Of course, developers of enterprise systems, with their vast networks of interrelated processes and data feeds, know that applications of the traditional form still exist, but the future we were promised deploys something in between simplification and contingent complexity. That future promised applications that knew where to get the information they needed without being spoon-fed.

We live in a complex, uncertain world, and that means software necessarily contains hard edges where applications just don’t work for the problem at hand. Although we can use a spreadsheet to manage a list, that underutilization of a sophisticated tool for building numerical models does not mean a spreadsheet makes even an adequate word processor.

The problem with applications runs much deeper than the functions they offer; it runs to how those functions, and the various inputs, are managed by operating systems. Take a PDF file as an example. Any number of programs can open a “.pdf” file. All the operating system knows is that a particular extension has been associated with a particular “.exe” file. That association launches the “app” when the file is clicked, or it exposes files with that extension when a file open dialog box is invoked.

All of that knowledge and action is based on a simple text extension to the file. It is not uncommon for an association to fail and for the application to report that it cannot read the file you double-clicked. Consider a future, however, where applications were aware of their capabilities and understood the intent of the user, so that the system invoked not just the assigned PDF reader but the reader most likely to offer the services the end user needed, such as collaboration or markup. If computers knew about data formats, and applications included metadata about what they could read and what they could do with that data, software would become a configurable toolbox in which applications partner with people to do a particular task.
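The shape of that future is easy to sketch. In the hypothetical Python example below, applications declare what formats they can read and what they can do with them, and the system chooses by the user’s intent rather than by a single hard-wired file association. The manifests, app names, and capability labels are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AppManifest:
    """Hypothetical capability metadata an application could ship with."""
    name: str
    reads: set[str]          # file formats the app can open
    capabilities: set[str]   # what it can do with them

REGISTERED_APPS = [
    AppManifest("QuickView",  {"pdf"},         {"view"}),
    AppManifest("MarkupPro",  {"pdf"},         {"view", "annotate", "collaborate"}),
    AppManifest("SheetWorks", {"xlsx", "csv"}, {"edit", "chart"}),
]

def choose_app(file_ext: str, intent: str) -> AppManifest | None:
    """Pick an app that opens the format AND supports the user's intent,
    falling back to any app that can at least open the format."""
    readers = [a for a in REGISTERED_APPS if file_ext in a.reads]
    capable = [a for a in readers if intent in a.capabilities]
    candidates = capable or readers
    return candidates[0] if candidates else None

# "Open this PDF so I can mark it up" -> MarkupPro, not just the default viewer.
print(choose_app("pdf", "annotate"))
print(choose_app("pdf", "view"))
```

Publish the same metadata to an app store, and shoppers could search by task rather than by keyword.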

Think about the implication for “app” stores. Currently, if you type “PDF” into an app store you end up with any number of apps that can read a PDF file, or create one. The person shopping must then read through capabilities and features lists, as well as reviews, to find the app that does what he or she wants, should it exist at all (or be described in the text).

If we combine application metadata and services with the issues of information management, we can then conceive of a store that would take a question like “Show me the best PDF app for annotating a map that has been used by one or more people in my social network.” Apps become as searchable as people in a social network with verified competencies. And apps then understand not only what they can do, but what other apps can do with the data they produce, creating internal workflows that hand off information, creating collaborative software that provides its owner with increased, proactive value even when they aren’t sitting at a machine doing anything.

Reinventing the Future
By acquiescing to the cloud and giving our data over to it, by continuing to pander to backward compatibility, by insisting on familiar interfaces, by underutilizing the processing power of our devices, by accepting applications that create silos of process, we have allowed Microsoft, Google, Apple, Amazon, and others to stall our future. We pay for software while receiving unrequested advertisements with little more than a grumble. We recognize that the majority of investments in technology have gone to making sense of us so that we can be more easily marketed to, more easily understood as a member in a set of statistics. These companies don’t need intelligence beyond statistics because their business model is based on transforming their user base into statistics.

No major computer or software manufacturer has offered anything more than improvements on old ideas for years. Not Microsoft with Kinect, not Apple with the iPhone or iPad, not Amazon with its marketplace, not Google with its search engine or Chromebooks. Devices have become smaller, but the basic functions of I/O and print, message forwarding and memory management, remain the same.

The current approach to technology offers the illusion of progress because we can do things faster and more beautifully. At the most basic level we simply continue to refine technological dead ends. Microsoft has not fundamentally reinvented its approach to word processing since its inception, even as new models at the edge demonstrate new approaches to thinking about words and the construction of documents (think Literature and Latte’s Scrivener, Mural.ly, and Prezi). Neither Apple nor Adobe has fundamentally changed the way we manage images or video, or how we edit either. Google put a Linux kernel in their Chromebooks and turned webpages into Windows.

I want more. As I look at the shelves of science fiction books sitting alongside books about evolution, complexity, artificial life, and artificial intelligence, I know that we can not only imagine a better future, but that we also have the intellectual capacity to create it. Unfortunately, the business models that lead to entrenchment of the mediocre have forfeited our future out of complacency, fear, or greed. My future doesn’t work on statistics alone, or run inside graphical user interfaces. Technology in my future serves me, not its makers, and I, for one, want that future back.

This post scratches only the most obvious issues related to how the large technology firms are co-opting our future. It doesn’t explore social networks, security, digital rights management, or personal ownership rights over hardware, software, and the configuration of machines, both owned and borrowed (or assigned). And it doesn’t examine the symbiotic relationships between large implementation consultancies and technology suppliers. Those feedback loops and emergent opportunities sit at the edge of contention and complacency.

A new study by SimCorp StrategyLab reports that enterprises and governments working on legacy systems spend more on maintenance than on new development, and that the percentage of budget dedicated to maintenance continues to increase. Like these IT departments, the legacy software industry is now spending inordinate amounts of customers’ cash (revenue) and shareholders’ value fulfilling a 1970s vision of the future. It should be investing in reshaping expectations, inspiring young people to get involved in invention, and reconfiguring, dismantling, or breaking up its businesses to create new opportunities for innovation.

[Image: Flickr user Jan Faborsky]

ABOUT THE AUTHOR

Daniel W. Rasmus, the author of Listening to the Future, is a strategist who helps clients put their future in context.

