Why Tech's Biggest Players Favor The Illusion Of Progress Over Real Innovation

Science fiction promised a future of intelligent devices designed to serve their owners. But today's technology serves its manufacturers more than its end user.

On the bridge of the Starship Enterprise, people routinely talk to the computer in a casual, conversational manner, a representation of the future common in science fiction. At the core of those fictional computers sits a deep understanding of human language.

Although voice recognition has improved dramatically over the last decade, it remains a statistical activity, not a cognitive one. In other words, computers may be able to recognize, discern and disambiguate sentences through the identification of phonemes, but they are unable to understand what they recognize.

Many current technologies use statistical methods to approximate intelligent responses: voice recognition systems in call centers, the matching of advertisements to webpage viewers, buying suggestions in online marketplaces, influence scores in social media and sentiment analysis in marketing.

In early artificial intelligence work that sought to emulate a theory of mind, researchers would discuss the edge of the domain, where expertise within the system became irrelevant. A program that contained all of the knowledge of a manufacturing function was useless at solving even the most analogous problem in another domain or situation. But for all of this fragility, those systems were able to explain their reasoning, to illustrate the complex path of logic that led them to choose one answer over another.

Statistical methods offer no such transparency, and they are no more robust. A Bayesian or hidden Markov model works only within the domain in which it is trained. The generalized algorithms, like the inference engines before them, can be applied to more than one problem, but for the most part they are limited to one problem at a time.
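The limitation is easy to demonstrate. Below is a minimal sketch, with invented labels and training phrases, of the kind of statistical classifier this paragraph describes: a naive Bayes model that can route phrases from the call-center domain it was trained on, but has no basis for judging words from any other domain.

```python
import math
from collections import Counter, defaultdict

# A minimal multinomial naive Bayes classifier (add-one smoothing).
class NaiveBayes:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> training examples
        self.vocab = set()

    def train(self, label, words):
        self.word_counts[label].update(words)
        self.label_counts[label] += 1
        self.vocab.update(words)

    def classify(self, words):
        best, best_score = None, float("-inf")
        total_examples = sum(self.label_counts.values())
        for label in self.label_counts:
            # log prior + smoothed log likelihood of each word
            score = math.log(self.label_counts[label] / total_examples)
            total = sum(self.word_counts[label].values())
            for w in words:
                score += math.log(
                    (self.word_counts[label][w] + 1) / (total + len(self.vocab)))
            if score > best_score:
                best, best_score = label, score
        return best

# Trained on a (made-up) call-center domain, it routes in-domain phrases:
nb = NaiveBayes()
nb.train("billing", ["invoice", "charge", "refund"])
nb.train("support", ["crash", "error", "reboot"])
print(nb.classify(["refund", "charge"]))  # billing
# Outside that domain, every word is unseen, so every label gets the same
# smoothed likelihood: the model still returns an answer, but it carries
# no insight whatsoever.
print(nb.classify(["sonnet", "iambic"]))
```

The classifier never understands anything; it only counts. Handling a second domain means training a second model, which is exactly the one-problem-at-a-time limitation described above.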

The use of statistical models to mimic intelligence is not the only failing of our current technology. As desktop sales continue to plummet, companies like Intel and Microsoft talk about innovation but offer very little in the way of software that takes advantage of Intel’s ever more powerful processors. It is not that we have solved all the problems of computing and must now be content with incremental innovation. What we face is a world where the large manufacturers of hardware and software have become protective of their franchises and unwilling to disrupt their own markets. The technology exists for radical reinvention, but it is more likely to come from Kickstarter than from companies like Microsoft, Google or Apple, each of whom, when they see a disruption, will invest not their intellect but their marketing and legal budgets in making sure the new idea is perceived as either scary or irrelevant. Or they will buy the technology, often slowing its trajectory, or worse, squirreling it away in favor of existing investments. Most likely, they will ignore anything that doesn’t fit into their core products or services, only to rally against it when it becomes evident that it matters in a market they care about.

We don’t have to look to the heady area of artificial intelligence, or debate the value of statistical models versus cognitive models, to see how our future has been stalled. We need look no further than the user interface, our personal information or our applications to see that even simple capabilities that should be commonplace are not just uncommon; they simply don’t exist in the offerings of any major technology supplier.

The User Interface: I have grown weary of hearing how the movie Minority Report foretold the future of the user interface. We find evidence of that future only in the most advanced prototypes. We still have time, of course, to meet the gesturing, voice-recognizing, face-recognizing, transparent world of Minority Report, but the real issue lies not in our failure to achieve the movie’s vision; it lies in our failure to imagine anything beyond it. We have grown complacent with windows and mice. Human gestures, as innovative as they may be with touch systems or skeletal recognition systems like Microsoft’s Kinect, essentially substitute one input device for another. And rather than invent new systems that understand gestures and voice at a fundamental level, and use those approaches to drive innovation and design, software manufacturers layer new interfaces atop aging kernels.

User interfaces (UIs) have evolved very little since Xerox invented the graphical user interface in the 1970s. Like many technologies, they have been greatly refined, blinged out with color, wallpaper and three dimensions, but they remain fundamentally unchanged. The advent of Microsoft’s tiled interface in Windows 8 could be considered a retrenchment from the empowerment of end users because it offers less, rather than more, capability to craft a personally meaningful computing environment. Windows 8 offers a generic computing environment where the only customization comes in the form of which tiles are visible. Google’s Chrome OS similarly fails to offer a more compelling computing environment, stripping away much offline data and offline application access, creating machines that serve the cloud rather than the user. Both systems become “windowing systems” in a more literal sense, offering windows into data they do not possess. iOS, with its tightly curated App Store and mostly walled-off apps, also fails to facilitate adaptive interfaces.

The designs of Windows 8, Chrome OS and iOS, however, are only symptoms of a deeper problem with user interfaces: UIs don’t reflect the integrated way that people work with data, tools and processes. User interfaces remain a hodgepodge of inconsistent, often duplicative, features and functions that the user must assemble into a coherent work environment. No wonder we use only a fraction of the features provided, given that most people’s core job is not to create their work environment but to perform work.

We were promised user interfaces that grokked our work; that were contextual and information-aware; that aided and assisted us; that understood the data they were managing and matched it up with applications that could provide insight or analysis; that could invoke processes, structured and ad hoc, to facilitate our own learning, to provide value to others or to accomplish a task or goal. Today’s operating systems may be colorful and animated, customizable and increasingly secure, but what they fail to do is partner with their owners to proactively help them get their work done.

Information Management: The cloud wastes computing resources. To prove this, we need do little more than look at a multi-core device, be it powered by Intel, AMD or Samsung, when we are not doing anything, and recognize that except for the most mundane maintenance, security or synchronization tasks, our devices aren’t doing anything either. Their autonomous functions carry on while their higher processing potential idles. The cloud certainly facilitates the exchange of information, and when employed properly it greatly reduces the possibility of data loss. If a hard disk crashes, storage in the cloud can make files immediately available elsewhere, or even recover whole system images, depending on what its owner was willing to spend on services.

The cloud, however, adds nothing to information management. The cloud simply stores bits. The cloud does not help make sense of our information.

When I managed Microsoft’s Center for Information Work, one of our demonstrations illustrated end users making sense of relationships between data elements, from word processing documents to appointments and images, and creating logical relationships between those elements. Although a folder creates a relationship, the items inside it remain unaware of each other. In this speculative demonstration, the documents knew they were related, and therefore one could ask questions like “What is the next appointment related to this collection of information?” or issue commands like “Show me all images related to this document.” And rather than common, columnar representations of items in a folder, three-dimensional visualization permitted an exploration of not only the documents but their relationships.
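The idea fits in a few lines. The sketch below is a hypothetical reconstruction, not the original demonstration code; the Workspace class, item names and dates are all invented for illustration.

```python
from datetime import datetime

# Items that "know" they are related: each item carries symmetric links to
# the other items it belongs with, so questions can be asked of the graph.
class Workspace:
    def __init__(self):
        self.items = []  # each item: {"kind", "name", "links", "when"}

    def add(self, kind, name, related_to=(), when=None):
        item = {"kind": kind, "name": name, "links": set(related_to), "when": when}
        for other in self.items:
            if other["name"] in item["links"]:
                other["links"].add(name)  # relationships are symmetric
        self.items.append(item)
        return item

    def related(self, name, kind=None):
        links = next(i["links"] for i in self.items if i["name"] == name)
        return [i for i in self.items
                if i["name"] in links and (kind is None or i["kind"] == kind)]

    def next_appointment(self, name):
        appts = [i for i in self.related(name, kind="appointment") if i["when"]]
        return min(appts, key=lambda i: i["when"], default=None)

ws = Workspace()
ws.add("document", "budget.docx")
ws.add("image", "chart.png", related_to={"budget.docx"})
ws.add("appointment", "budget review", related_to={"budget.docx"},
       when=datetime(2013, 4, 2, 10, 0))

# "Show me all images related to this document."
print([i["name"] for i in ws.related("budget.docx", kind="image")])
# "What is the next appointment related to this collection of information?"
print(ws.next_appointment("budget.docx")["name"])
```

Unlike a folder, which merely co-locates items, the links here are part of the items themselves, which is what makes the two spoken-style queries answerable.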

We further surmised that in the future, massive local storage and processing would work in parallel with cloud storage and services, with local processing making sense of data by drawing upon the very personal context of its end user. Multiple systems could be configured to belong to a person, their data would be synchronized, and that synchronized data would then be analyzed, even while the computers were “sleeping,” so that they “woke up” more intelligent than when they were last used. This “dream state” for the computer would be the equivalent of one aspect of human dreaming, which appears to be sense-making of the day’s data collection.

Despite massive improvements in processor power, and the ability of some processors to work even while the overall device rests, software engineers have failed to pursue technology that helps the owners of data better organize what they own, let alone provide insight into it. They have also done very little to take advantage of home networking to offer distributed computing capabilities, a not altogether surprising fact given that most computing can easily take place on a single machine, and that the companies looking to perpetuate current models, or to rapidly develop new applications for the mobile market, think in terms of quick turns and small footprints. I can share all of my family pictures easily enough, but they end up duplicated, misnamed, misclassified and scattered among hard drives, SD cards and USB sticks, not to mention in Dropbox and iCloud. A distributed network could manage images effectively, and passively. Unfortunately, such an investment, though it would save computer owners hundreds of hours of personal curation, doesn’t offer computer manufacturers a clear revenue stream.
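Even the first step of such passive image management, finding the duplicates scattered across drives, requires nothing exotic: hash the bytes of each file and group the matches, regardless of name or folder. A minimal sketch, where the directory roots are whatever the owner points it at:

```python
import hashlib
import os

def fingerprint(path, chunk=1 << 16):
    """Hash a file's contents so duplicates match even when renamed."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def find_duplicates(roots):
    """Walk the given directories and pair each duplicate with its original."""
    seen = {}    # content hash -> first path encountered
    dupes = []   # (duplicate path, original path)
    for root in roots:
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                digest = fingerprint(path)
                if digest in seen:
                    dupes.append((path, seen[digest]))
                else:
                    seen[digest] = path
    return dupes
```

A background task built on this could run while the machine idles, exactly the kind of unglamorous, owner-serving work the text argues no manufacturer has bothered to ship.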

One other small area brings information management issues into clear relief: the bookmark. One of the simplest background tasks for any system, be it Windows, OS X, Chrome OS or Linux, would be to monitor the validity of bookmarks and let their owner know when they become invalid, or when their content changes in a significant way. Third-party applications offer this in batches, run like a 1970s job on a mainframe, but not one major computer or software manufacturer has ever offered this, or any information management or awareness technology, in its core operating system. Why not? Because, like image management, there is no revenue to be had, and, to be completely transparent, all the major technology suppliers already know more about the sites you visit from their databases than you likely do from the bookmarks you wish to return to. Providing more value to the end user does not bring them additional knowledge.
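The core of that background task is small. Here is a minimal sketch, assuming a crude dead/changed/unchanged classification based on a fingerprint of the page bytes; a real monitor would run on a schedule, follow redirects and ignore cosmetic changes such as rotating advertisements.

```python
import hashlib
import urllib.request

def fingerprint(content: bytes) -> str:
    """A fingerprint of the page bytes; any byte change alters it."""
    return hashlib.sha256(content).hexdigest()

def check_bookmark(url, last_fingerprint=None, timeout=10):
    """Return (status, fingerprint), where status is 'dead', 'changed'
    or 'unchanged'. A first-ever check (no prior fingerprint) reports
    'unchanged' and simply records the baseline."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
    except Exception:
        return "dead", last_fingerprint
    digest = fingerprint(body)
    if last_fingerprint is None or digest == last_fingerprint:
        return "unchanged", digest
    return "changed", digest
```

Run over a bookmarks file once a night, this is the whole job: store the returned fingerprint, and surface the dead and changed entries to the owner.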

We were promised a future where our computers retrieved information concisely, and altruistically, without the need for our interactions to be paid for by advertising. We were promised a future where our information found us, waiting patiently if unimportant, chasing us down for critical events. That future seems as far away as ever, and the science fiction stories that presaged it, increasingly naïve and anachronistic.

Apps: The truncation of the word application into “app” does not bode well for its future. By adopting a cute, consumer-driven word and definition, coding has become more and more focused on the development of siloed, standalone pieces of code that do one thing. Of course, developers of enterprise systems, with their vast networks of interrelated processes and data feeds, know that applications of the traditional form still exist, but the future we were promised sits somewhere between that simplification and that contingent complexity. The future promised applications that knew where to get the information they needed, without being spoon-fed.

We live in a complex, uncertain world, and that means software necessarily contains hard edges where applications just don’t work for the problem at hand. Although we can use a spreadsheet to manage a list, that underutilization of a sophisticated tool for creating numerical models does not mean a spreadsheet makes even an adequate word processor.

The problem with applications runs much deeper than the functions they offer; it runs to how those functions, and the various inputs, are managed by operating systems. Take a PDF file as an example. Any number of programs can open a “.pdf” file. All the operating system knows is that a particular extension has been associated with a particular “.exe” file. That association launches the “app” when the file is clicked, or it exposes files with that extension when a file open dialog box is invoked.

All of that knowledge and action is based on a simple text extension to the file. It is not uncommon for an association to fail and for the application to report that it cannot read the file you double-clicked. Consider a future, however, where applications were aware of their capabilities, and understood the intent of the user, so that not just the assigned PDF reader was invoked, but the reader most likely to offer the services the end user needed: collaboration or markup, for instance. If computers knew about data formats, and applications included metadata about what they could read and what they could then do with the data, software would offer a configurable toolbox in which applications partner with people to do a particular task.
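Such capability-aware association could look something like the sketch below. The registry entries, app names and capability vocabulary are all invented for illustration; the point is only that the system matches intent against published metadata rather than a single hard-wired extension mapping.

```python
# A hypothetical registry: each app declares what formats it reads and
# what it can do with them, instead of merely claiming an extension.
APP_REGISTRY = [
    {"app": "QuickView",  "reads": {"pdf"},        "capabilities": {"view"}},
    {"app": "MarkupPro",  "reads": {"pdf", "png"},
     "capabilities": {"view", "markup", "collaborate"}},
    {"app": "FormFiller", "reads": {"pdf"},
     "capabilities": {"view", "fill-forms"}},
]

def resolve(file_format, intent):
    """Pick the app that reads the format and best matches the user's intent
    (a set of desired capabilities). Returns None if nothing fits."""
    candidates = [a for a in APP_REGISTRY if file_format in a["reads"]]
    scored = [(len(intent & a["capabilities"]), a["app"]) for a in candidates]
    if not scored or max(scored)[0] == 0:
        return None
    return max(scored)[1]

print(resolve("pdf", {"markup", "collaborate"}))  # MarkupPro
print(resolve("pdf", {"fill-forms"}))             # FormFiller
```

The same double-click on a “.pdf” file thus opens different tools depending on what the user is trying to accomplish, which is precisely what a bare extension-to-executable table cannot express.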

Think about the implication for “app” stores. Currently, if you type “PDF” into an app store you end up with any number of apps that can read a PDF file, or create one. The person shopping must then read through capabilities and features lists, as well as reviews, to find the app that does what he or she wants, should it exist at all (or be described in the text).

If we combine application metadata and services with the issues of information management, we can then conceive of a store that could take a question like “Show me the best PDF app for annotating a map that has been used by one or more people in my social network.” Apps become as searchable as people in a social network with verified competencies. And apps then understand not only what they can do, but what other apps can do with the data they produce, creating internal workflows that hand off information, creating collaborative software that provides its owner with increased, proactive value, even when the owner isn’t sitting at a machine doing anything.

Reinventing the Future
By acquiescing to the cloud and giving our data over to it, by continuing to pander to backward compatibility, by insisting on familiar interfaces, by underutilizing the processing power of our devices, by accepting applications that create silos of process, we have allowed Microsoft, Google, Apple, Amazon and others to stall our future. We pay for software while receiving unrequested advertisements with little more than a grumble. We recognize that the majority of investments in technology have gone to making sense of us so that we can be more easily marketed to, more easily understood as members of a set of statistics. These companies don’t need intelligence beyond statistics because their business models are based on transforming their user bases into statistics.

No major computer or software manufacturer has offered anything more than improvements on old ideas for years. Not Microsoft with Kinect, not Apple with the iPhone or iPad, not Amazon with its marketplace, not Google with its search engine or Chromebooks. Devices have become smaller, but the basic functions of I/O and print, message forwarding and memory management, remain the same.

The current approach to technology offers the illusion of progress because we can do things faster and more beautifully. At the most basic level, we simply continue to refine technological dead ends. Microsoft has not fundamentally reinvented its approach to word processing since the product’s inception, even as new models at the edge demonstrate new ways of thinking about words and the construction of documents (think Literature and Latte’s Scrivener, Mural.ly and Prezi). Neither Apple nor Adobe has fundamentally changed the way we manage images or video, or how we edit either. Google put a Linux kernel in its Chromebooks and turned webpages into Windows.

I want more. As I look at the shelves of science fiction books sitting alongside books about evolution, complexity, artificial life and artificial intelligence, I know we can not only imagine a better future but also have the intellectual capacity to create it. Unfortunately, the business models that entrench the mediocre have forfeited our future out of complacency, fear or greed. My future doesn’t work on statistics alone, or run inside graphical user interfaces. Technology in my future serves me, not its makers, and I, for one, want that future back.

This post scratches only the most obvious issues in how the large technology firms are co-opting our future. It doesn’t explore social networks, security, digital rights management, or personal ownership rights over hardware, software and the configuration of machines, both owned and borrowed (or assigned). And it doesn’t examine the symbiotic relationships between large implementation consultancies and technology suppliers. Those feedback loops and emergent opportunities sit at the edge of contention and complacency.

A new study by SimCorp StrategyLab reports that enterprises and governments running legacy systems spend more on maintenance than on new development, and that the percentage of budget dedicated to maintenance continues to increase. Like these IT departments, the legacy software industry is now spending inordinate amounts of customers’ cash (revenue) and shareholders’ value fulfilling a 1970s vision of the future. It should be investing in reshaping expectations, inspiring young people to get involved in invention, and reconfiguring, dismantling or breaking up its businesses to create new opportunities for innovation.

[Image: Flickr user Jan Faborsky]
