In Her, Spike Jonze’s latest film, which won an Oscar for best screenplay, a man falls in love with his computer operating system. The OS, Samantha, has an incredible level of intelligence and general knowledge, is to all appearances sentient, and boasts the voice of actress Scarlett Johansson. By most measures, she seems the perfect girlfriend for the reclusive protagonist, except that she doesn’t have a body–and even that doesn’t prove insurmountable. But of all of Samantha’s characteristics, the one you’re most likely to encounter in the not-very-distant future is her ability to respond to human emotion.
The term for this technology is “affective computing” and it involves reading, interpreting, and even simulating emotion, for the purpose of interacting with and sometimes influencing human behavior. Essentially, affective computing uses pattern recognition algorithms to identify an individual’s emotional state from visual and audible cues. Such technology can distinguish whether a person is happy, sad, angry, indifferent, and so on–even if they try to disguise it. These techniques aren’t limited to faces and voices, but can also be applied to traits such as gait and posture. Because this could provide immediate, accurate information about the viewer’s response, it has powerful potential as a feedback mechanism.
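The core idea–mapping measured cues to an emotional label via pattern recognition–can be sketched in a few lines. Everything below is an illustrative toy, not a real facial-analysis system: the feature names (mouth curvature, brow raise, voice pitch) and all the numbers are invented assumptions, and a production system would extract such features from video and audio with computer-vision and speech pipelines before classifying them.

```python
import math

# Hypothetical "trained" centroids: average cue vectors
# (mouth_curve, brow_raise, voice_pitch) for each labeled emotion.
# All values are invented for illustration only.
CENTROIDS = {
    "happy":       (0.8, 0.6, 0.7),
    "sad":         (-0.6, -0.4, -0.5),
    "angry":       (-0.3, -0.8, 0.6),
    "indifferent": (0.0, 0.0, 0.0),
}

def classify_emotion(features):
    """Nearest-centroid classification of an observed cue vector."""
    def dist(centroid):
        return math.sqrt(sum((f - c) ** 2 for f, c in zip(features, centroid)))
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))

print(classify_emotion((0.7, 0.5, 0.6)))   # prints "happy"
```

A real system would use far richer features and a learned model rather than hand-set centroids, but the shape of the problem–cues in, emotional label out–is the same.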
Since an enormous share of human communication is nonverbal, it makes sense that we’d want to tap into it to improve how computers work with us. For instance, in a recent North Carolina State University study, a computer program used video cameras to analyze student expressions, then flagged those students who were struggling with the course. These technologies can draw from a wide range of visual and audible cues, many of them undetectable by a human observer. Combined with the strengths of machine learning, such programs can already routinely outperform us at reading them.
Affective computing’s potential uses are myriad. A system that can read and respond to human behavior in fractions of a second has obvious applications in surveillance and law enforcement, though not all of these are necessarily respectful of civil liberties. Behavioral therapy programs could benefit from its feedback. Social networks will have the ability to share still another layer of information about their users. But perhaps the fields with the most commercial potential are those of marketing and advertising.
The ability to influence public interest in and response to products, services and candidates has long been of keen interest to marketers. Up to now, though, the tools used have been less than rigorous, making ad and marketing campaigns as much of an art as a science. But what if marketing and advertising could interact with their customer base one-on-one, self-modifying on the fly in response to immediate nonverbal feedback? Combined with other technologies such as augmented reality and Big Data, this would achieve unprecedented interactivity, and marketing’s impact could go through the roof.
So how might this work? Imagine a typical shopping district in a typical city. A fashion-conscious 20-something walks along, casually window shopping. She’s wearing a pair of stylish glasses that have video display capability a la Google Glass. They’re set to translucent, augmented reality mode, allowing her to see price comparisons overlaid onto the different clothing items she’s interested in. This all takes place automatically, with the processing handled by her smartphone.
As our shopper nears one particular store, exterior shop cameras detect her approach. Software services identify her age, gender, and demographic. Because she subscribes to a number of coupon services, one of which the store participates in, the store can send offers directly to her. Another series of software services assesses her clothing style based on what she’s currently wearing, gauges her height, weight and dress size, then immediately renders a 3-D avatar that looks amazingly like our shopper. The avatar is dressed and kitted out with a jacket that’s been the shop’s biggest seller this season. The rotatable image is sent to the shopper along with the price, suggested accessories and a 20% off coupon while she’s still 30 feet from the store.
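The storefront scenario above is essentially a chain of service calls: detect the shopper, look up her subscriptions, and assemble a personalized offer. The sketch below models that flow with stand-in functions; every name and data value is a hypothetical placeholder for the camera, demographics, and coupon services described, not a real API.

```python
def detect_shopper(frame):
    """Stand-in for the exterior-camera detection and profiling services."""
    # A real service would run face/body analysis on the video frame.
    return {"age_range": "20-29", "gender": "F", "subscribed": True}

def build_offer(profile):
    """Assemble a personalized offer, but only for coupon subscribers."""
    if not profile["subscribed"]:
        return None   # no opt-in, no push offer
    return {
        "item": "best-selling jacket",
        "discount": 0.20,
        "render": "rotatable 3-D avatar in the shopper's size and style",
    }

offer = build_offer(detect_shopper(frame="camera_feed_stub"))
print(offer["item"], f"{offer['discount']:.0%} off")
```

The key design point the scenario implies is the gate in `build_offer`: the pipeline only pushes offers to shoppers who have opted in via a participating coupon service.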
The shopper isn’t particularly interested, since she already has a very similar jacket at home. Had she made her recent purchases public via her social network, the store could have known this already; however, she chose to opt out. The services assess her response from the camera feed. Changes in her expression, gait, and posture are fed to an algorithm that calculates an 83.4% likelihood she already has a similar jacket–so at least they’re on the right track, stylistically speaking. Because her skirt is identified as being from last year’s line, she’s sent an offer for a more current style. This is calculated to have a 90.2% chance of drawing her inside. The offer is successful, and our shopper enters the store, where she soon makes a purchase.
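How might a system turn expression, gait, and posture changes into a single likelihood figure like 83.4%? One common approach is a logistic model: weight each cue, sum, and squash into a probability. The sketch below assumes that technique; the cue readings and weights are invented for illustration, and a deployed system would learn them from labeled shopper data rather than use these numbers.

```python
import math

def likelihood(cues, weights, bias):
    """Logistic combination of cue scores into a probability in (0, 1)."""
    z = bias + sum(w * c for w, c in zip(weights, cues))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical cue readings–change in expression, gait, posture–each
# scaled to roughly [-1, 1] by upstream computer-vision services.
cues = (0.9, 0.4, 0.6)        # strong expression signal, milder gait/posture
weights = (1.5, 0.8, 1.0)     # assumed learned importance of each cue
p = likelihood(cues, weights, bias=-0.5)
print(f"{p:.1%} chance she already owns a similar jacket")
```

With all cues at zero the model returns exactly 50%, and stronger cue readings push the probability toward 1–the kind of instant, per-shopper scoring the scenario describes.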
The exact implementation of these technologies will take time to work out. In the next five years, the impact of affective computing on advertising will become increasingly evident, though it probably won’t fully mature for a decade. Because of the way the different elements interact, it’s probable that a diverse ecosystem of services and devices will develop to take advantage of this new approach to advertising. Indeed, it seems likely most such stores will eventually need to subscribe to some, if not all, of the available services in order to remain competitive. In the meantime, savvy shoppers will see it as the best way to find exactly what they want at an attractive price. All because of increased interconnectedness, communication, and the changing relationship between businesses and consumers. It’s the kind of technology you could fall in love with.