If Your Company Isn’t Doing It Yet, Get Ready: Companies Will Soon Be Tracking Your Emotions

Live Scenario

Deep in the bowels of Houston’s 72,000-seat NRG Stadium, in a curtained-off makeshift room near the court where the Villanova Wildcats and the University of North Carolina Tar Heels are playing for the NCAA basketball championship, a small team of engineers and data scientists from a company called Lightwave huddles over laptops watching a stream of real-time data. But the engineers aren’t looking at shooting percentages. The millions of data points show how excited the fans are every 10th of a second–whether they’re clapping, screaming, jumping up and down, or sitting sullenly.

Throughout the stadium, fans wear custom-built wrist bands that send real-time biometric data to the engineers, while dozens of hidden sensors record decibel levels and other intel. When something big happens, another Lightwave team in New York City races to design and tweet slick infographics. For almost 30 seconds before Villanova made its game-winning buzzer beater, fans of both teams sat motionless and quiet, utterly transfixed. Lightwave’s hard data showed an audience at peak engagement–information that marketers live for.

The Real Thing

Lightwave, which calls itself an “applied neuroscience platform,” is the creation of a 29-year-old named Rana June–a former professional DJ fond of blue-dyed hair and vintage heavy metal T-shirts, whose appearance contrasts starkly with her tendency to talk tech jargon. Since it launched in 2012, the 10-person startup has parsed people’s biometrics for Google, Pepsi, 20th Century Fox, iHeartRadio, and Jaguar, among others. For the NCAA championship, tournament sponsor Degree antiperspirant–owned by the $140 billion conglomerate Unilever–hired Lightwave to study fan excitement.

Lightwave is one of several companies furiously at work creating a new field–let’s call it the emotion economy–focused on sensing and analyzing consumers’ mental states. In January, Apple bought a San Diego startup called Emotient, which uses facial-tracking technology to identify people’s feelings. A few months earlier, the consumer-research giant Nielsen bought Boston-based Innerscope, which combines facial-cue recognition with Lightwave-style wearables data.

And 2,700 miles from June’s office in San Francisco, in a low-rise building wedged between a strip mall and railroad tracks outside of Boston, another emotion-measuring pioneer named Rana–Rana el Kaliouby–has spent the past year and a half strategizing to make the facial-cue recognition company she co-founded, Affectiva, the essential hub of the emerging emotion economy. She wants to make it the platform any business–from an app maker to a car company–can use to add emotion sensing to its products.

Thus far, most of the emotion economy’s high-profile projects have been small scale. In Houston, only 150 students wore Lightwave’s wristbands. The data was used to create entertaining tweets, not to help Degree sneak into people’s wallets by targeting their emotional states. Nor are stores sending you mood-targeted offers as you wander the aisles. But Paul Zak, director of the Center for Neuroeconomics Studies at Claremont Graduate University, in Claremont, California, says that emotion-optimized products and services will be standard fare “very, very soon. I want to say ‘today.’” As June puts it, “Any business that has a customer is going to be affected by the ability to measure the emotional reaction of the customer.”

June, whose given name is Rana June Sobhany, stumbled onto the idea for Lightwave when she was an electronic dance music DJ in her early 20s. After dropping out of college, she’d helped found a mobile ad measurement company called Medialets in 2008, which was sold to global ad giant WPP in 2015. (She left before that sale.)

When the first iPad came out in 2010, June sensed what seemed like a wholly different opportunity: The tablet could be a musical instrument. EDM was exploding, and yet, during live performances, star DJs couldn’t leave their banks of computers and turntables; the best they could manage for onstage theatrics was to periodically throw their arms in the air to strike the much-mocked “Jesus pose.” June, who grew
up playing in punk bands around Washington, D.C., started DJ’ing with a system she’d rigged together that included six iPads and an exoskeleton of sorts that let her attach iPads to her arms and roam the stage like a lead guitarist.

She played 100 shows a year, for thousands of dollars each–Vegas one night, New York a few nights later, then L.A. But as the shows got bigger, she realized she had little insight into how the crowd was responding while she performed; in larger venues, bright stage lights often prevent artists from seeing past the first few rows of the crowd. Were they dancing wildly, or idly standing around? She didn’t know. “Every night, I’d get off the stage and check what people were saying on Twitter. But you don’t know who they are. And if they were tweeting during a show, were they really engaged?” She shakes her head. “It’s such an incomplete data set.”

She decided to try something new while DJ’ing the People and Time party celebrating the 2012 White House Correspondents’ Dinner: She used Microsoft’s gesture-control technology, Kinect, to create a perimeter of motion detectors around the room and let the resulting heat map of crowd density guide her performance. If part of the crowd seemed thinly populated or bored, she’d head in that direction. It was, she says, her eureka moment. She started to shape a business that would be the real-time crowd analytics brain for events. 

She’s wearing a ripped AC/DC T-shirt and metal skull-tipped Alexander McQueen stilettos as she tells me this, and she’s perched on a sofa in the bar on the first floor of her office in San Francisco’s SoMa neighborhood. The three-story townhouse was a music venue before Lightwave moved in, and the company left the bar and stage intact, for holding parties and testing its technology. June seeded the company with the proceeds from DJ’ing and funding from friends. Since then, it has run on its own revenue, and June has invested its earnings back into the company.

For its first partnership, Lightwave created a “bioreactive” concert with Pepsi at Austin’s South by Southwest Interactive conference in 2014. Attendees wore wristbands and “unlocked” prizes, like a round of drinks, by getting hot and sweaty while dancing. That event garnered good press, and other high-profile clients came calling.
The global ad agency Mindshare hired Lightwave to measure attendees at the Cannes advertising festival, and connected the company to Jaguar to analyze the crowd at the Wimbledon tennis championship. There was a Google-sponsored concert in Singapore featuring star DJ Paul Oakenfold, a Cisco event at which Lightwave data determined the winner of a pitch competition, and a TED
conference at which Lightwave compared attendees’ self-perceptions with their responses to video scenes meant to evoke feelings like fear and compassion. (Lightwave found people often underrated their reactions.)

Perhaps most intriguingly, last year 20th Century Fox employed Lightwave to measure viewers’ reactions to prerelease screenings of The Revenant, the Oscar-winning Leonardo DiCaprio epic. Typically, movie screenings are followed by a survey, but such feedback can be unreliable. Viewers can be swayed by others, or report what they think they should think. Survey data also does a poor job evaluating specific moments, because it is gathered after the fact. For The Revenant screenings, audience members wore wristbands that measured physiological responses–heart rate variability, skin conductance (sweat, basically), body temperature, movement, and noise–throughout the film. Among other things, the study identified 15 moments when the audience experienced the fight-or-flight response (as determined by a specific heart-rate pattern) and 4,716 seconds during which viewers were motionless, signaling peak filmgoer engagement.
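
For readers who want a concrete sense of how such moments can be flagged, here is a minimal sketch of turning per-second wristband readings into simple engagement signals. It is not Lightwave’s actual pipeline; the field names, thresholds, and one-sample-per-second rate are all illustrative assumptions.

```python
# Hypothetical sketch of flagging "peak engagement" moments from wristband data.
# NOT Lightwave's method; fields, thresholds, and sampling rate are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    t: int             # seconds since the screening started
    heart_rate: float  # beats per minute
    movement: float    # accelerometer magnitude, arbitrary units

def motionless_seconds(samples: List[Sample], movement_threshold: float = 0.05) -> int:
    """Count seconds in which the wearer barely moved (a crude proxy for rapt attention)."""
    return sum(1 for s in samples if s.movement < movement_threshold)

def arousal_spikes(samples: List[Sample], baseline_bpm: float, spike_bpm: float = 15.0) -> List[int]:
    """Return timestamps where heart rate jumps well above baseline,
    a rough stand-in for a fight-or-flight-like response."""
    return [s.t for s in samples if s.heart_rate - baseline_bpm > spike_bpm]

if __name__ == "__main__":
    # Fake data: a calm stretch, then a jolt around t=3.
    data = [Sample(0, 68, 0.02), Sample(1, 70, 0.01), Sample(2, 69, 0.03),
            Sample(3, 92, 0.20), Sample(4, 71, 0.02)]
    print(motionless_seconds(data))               # -> 4
    print(arousal_spikes(data, baseline_bpm=70))  # -> [3]
```

The real signals are sampled far faster and are much noisier, but the principle is the same: derive simple per-moment features and map them back onto the film’s timeline.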

By mapping those emotional responses to the corresponding plot points, the studio gleaned objective data about the film–something that would otherwise be judged subjectively. The Revenant was finished and locked when the screenings took place, but the project captured the attention of many in Hollywood. June says Lightwave now works with several other studios, “much earlier in the creative process”–during the making of the film as well as in the formation of marketing plans. It’s not hard to see why. If a studio learns that women and men respond differently to various scenes, it might cut separate trailers depending on which audience it’s targeting. If data were to show diminishing engagement in the latter parts of a film, it might be reedited. In The Revenant‘s case, the moment of highest overall emotional intensity came right at the end, suggesting that the film, at two hours 36 minutes, was not, in fact, too long.

What do moviegoers really think? Lightwave’s data from screenings of The Revenant found them transfixed, not bored, despite its epic length.

Jeff Malmad, Mindshare North America’s head of mobile, sees that as a model for how emotional data will be used–to understand consumers’ “moments of receptivity,” and not only to target those moments with ads but also to use them to create better products. “What are the things that get you excited in a store, or really stress you out when you board an airplane?” he says. “Those are very positive things to learn.”

Aside from the shared name and profession, Rana June and Rana el Kaliouby could hardly be more different. While June is an artist at heart, el Kaliouby, who’s 37, is pure scientist, steeped in the academic literature of computer science and psychology. She was a college student in Cairo in the late ’90s when she first learned of some of the pioneering work on emotion-sensing computing being done by MIT professor Rosalind Picard. Several years later, when el Kaliouby was finishing her PhD at Cambridge, she managed to meet
Picard when the professor visited the U.K. The two hit it off, and soon after they teamed up at MIT’s Media Lab, armed with a near-million-dollar National Science Foundation grant, to prototype a sort of emotional hearing aid for autistic people–essentially a wearable camera that scanned people’s facial expressions to interpret social cues, in real time, for the person wearing the device.

A very cool and noble idea, but not one targeting the biggest market. In 2008, el Kaliouby posted a demo of the software–called MindReader–to a section of the Media Lab site where sponsoring companies test the latest inventions. The number of inquiries–from Toyota, Microsoft, Fox, Hallmark, and many others–changed everything. The companies wanted to test TV ads, detect sleepy drivers, spot possible security threats–there were dozens of other uses for MindReader. The lab’s director suggested hiring a CEO and
spinning out as a startup.

Affectiva was born in 2009, and the first CEO el Kaliouby hired zeroed in on the most immediate opportunity: ad testing. The company created a program called Affdex that works with standard webcams to scan people’s faces as they watch a computer or TV screen. Affectiva went on to raise more than $30 million from investors including the Silicon Valley venture capital powerhouse Kleiner Perkins, and grew to 20-some employees in the U.S. and another 20 in Cairo, who manually code facial expressions to feed into the company’s machine-vision algorithms.
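
For a rough sense of what the front end of any webcam-based facial-coding system looks like, here is a generic sketch using OpenCV’s stock face detector. It is illustrative only, not Affectiva’s Affdex code, and Affdex’s own models and interfaces are not shown.

```python
# Generic webcam face-detection pass -- the first stage of a facial-coding
# pipeline. Illustrative only; this is NOT Affectiva's Affdex code.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # In a real system, each detected face region would be cropped and passed
    # to an expression classifier trained on manually coded examples.
    print(f"{len(faces)} face(s) detected")
cap.release()
```

The harder part, classifying expressions frame by frame, is where the manually coded training data comes in.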

And yet, el Kaliouby was restless. “I had this moment one day in the late summer of 2014,” she recalls, “when I woke up and said, ‘What are we doing here?’” Advertising was never her dream. So the company created a version of its tech for mobile devices, released a developer’s kit to allow other
companies to use its facial-cue recognition system, and began to reposition itself to serve a broader array of clients, in fields as diverse as health care, education, and automotive. “Advertising and media continues to be a big chunk of our revenue,” she says. “But you can do the thing in front of you that’s very low risk, or you can do the thing that’s potentially huge.”

El Kaliouby argues that, as we spend ever more time with our mobile devices, and more products around us are connected via the internet of things, they will need to get better at adapting to our moods. “Studies tell us that humans with high emotional intelligence are more likable, more persuasive, and more successful,” she says. “Our thesis is that digital devices and services need emotional intelligence as well,” so they can realize the same benefits and serve us better.

Perhaps the best example of how this will work is the growing category of so-called “social robots,” like Amazon’s Echo, a cylinder that sits in your living room and responds to voice commands to control services ranging from ordering groceries to playing music to managing your schedule. “For better or worse, people develop intimate relationships with these digital assistants,” el Kaliouby says. “People confide in Siri that they’re sexually abused, or depressed. Right now, if you do that, it will just Google the phrase. It should really show empathy. It should say, ‘Oh, my goodness. How awful. Can I get you some help?’”

Since she expanded Affectiva’s horizons, el Kaliouby has again found herself surprised by what people dream up for her technology. A human resources startup uses it to screen video interviews. An education company uses it to create professional training scenarios. A Middle Eastern country wants to use it to study the public mood. 

Especially since Apple’s acquisition of Emotient this year essentially validated the space, she says, the volume of new ideas has increased rapidly. “We don’t know what Apple is going to do with Emotient,” she says–though it’s not hard to imagine its usefulness to Siri, or the much-rumored Apple car. But as a closed system, Apple leaves room for another company to become an open system: think Google taking
on the iPhone with Android. “The opportunity for us,” she says, “is to become the platform that powers all these other creative scenarios.”

“You can do the thing in front of you, or you can do the thing that’s potentially huge.” –Rana el Kaliouby, CSO and Co-founder, Affectiva

Back in San Francisco

Rana June outlines a similar vision. “We’re building an empathy brain for technology,” she explains, “because right
now, technology does not understand the human experience. You have companies that are really good at search, or good at social, or hardware.
This is a new planet in the tech solar system: I think you’re going to have a company emerge that’s really good at emotions that remains independent and provides a tool set or operating system for other companies to incorporate.”

So which Rana wins? “The face remains the best window we have on moment-to-moment changes in emotional response,” says Paul Ekman, a psychologist who, in 1978, co-published the Facial Action Coding System, a seminal, 527-page reference tome of every possible facial muscle movement and how it maps to seven fundamental emotions (happiness, sadness, surprise, fear, anger, disgust, and contempt). His work is the foundation on which all efforts to algorithmically read faces–including Affectiva’s–are built. Ekman believes that Lightwave-esque efforts to track emotion using physiological cues are scientifically shaky. “There is no consensus among researchers,” he says, about
whether involuntary functions like heart rate can signal emotion accurately.

But Lightwave can collect data in almost any environment, no matter how chaotic. “We’re saying, ‘Don’t worry. Just do whatever you would be doing, and we’ll take the data from there,’” June says. Facial tracking can’t quite do that. “The conditions you need to do face tracking are very specific,” she says. “If you’re watching a film, you need enough light to illuminate the face. It just puts you in these unnatural
environments. Let’s say you’re at a sporting event–are you going to put hundreds or thousands of facial-tracking inputs around the stadium?”

Or maybe, as the internet of things stitches itself together and more devices and products speak to one another, they’ll simply share data from different emotion sensors, el Kaliouby predicts. “The way I see it, it doesn’t matter that your Fitbit doesn’t have a camera, because your phone does, and your laptop does, and your TV will. All that data gets fused with biometrics from your wearable device and builds an emotional profile for you.” Affectiva, she adds, is exploring ways to measure emotion using the sound of your voice.

Lightwave, meanwhile, is exploring ways to make its sensors ever more invisible to wearers, and easier for the company to activate. For the NCAA championship at NRG Stadium, June and her team had to recruit students from each school to wear the wristbands; afterward, they hustled to rendezvous with the students to recover the devices. She envisions people getting stamped with temporary conductive skin tattoos when they enter a Lightwave event; the stamps will take measurements and transmit data. (A Boston company called MC10 already makes stickerlike “biostamp” sensors for the health care market.)

One thing is certain: Privacy battles will erupt as our inner lives become a currency. El Kaliouby says she’s repeatedly turned away clients who want to use her technology for any kind of surveillance: “We want to support the uses where people want to share their emotions, not uses that try to suck information out of you that you have not decided to share.” But, she predicts, in three to five years, most of our devices will be emotion-aware. And just as location tracking went from creepy to standard in a few years, emotion data will simply become a standard part of business.

El Kaliouby became Affectiva’s CEO in May and is pursuing a fourth round of funding. June is mulling taking on investors after three years of bootstrapping. Both sense a very big game is under way. “Think about it,” June says. “I tweet to my airline that my bag got lost, and I expect a response. What if I’m in your store and something happens that makes me start to feel angry? You as a business are going to have to learn to respond to that, in the same way that you had to learn to monitor social media.” Which is to say: Get ready.

Author: Tom Foster
