March 4, 2020 9:41 am

…And Soon There Will Be No Place Left to Hide


Affectiva founder Dr. Rana el Kaliouby

Boaz Lavie

Last December, 10-year-old Adam el Kaliouby took part in the United States Junior Open Squash Championships. His mother, Rana, who was on her way to London while he played, used the plane’s Wi-Fi to livestream the Massachusetts tournament. Suddenly she saw that her son’s match had been halted. In reply to her message, Adam’s coach, Karim, wrote to her in a mixture of English and Arabic that the boy had been injured and had a bloody nose. While still in the air, Rana posted the exchange on Twitter, confessing to the agony she felt at not being there for her young son when he was hurt. “Mom guilt,” she called it.

Twitter swallows everything, and there’s nothing unusual about an emotional striptease on the social network. But in this case the mother who shared the incident with her 19,000 followers holds a PhD in computer science and is the CEO of a startup in the field of emotional artificial intelligence. Not by chance, Rana el Kaliouby is out to place emotion at the center of every discussion. The price she is paying, and which we may all ultimately pay, is the loss of every barrier between what we feel and what can be divined about our feelings from the outside.

Dr. el Kaliouby was born in Egypt in 1978. She lived in Kuwait until the first Gulf War, in 1990, then moved with her family to the United Arab Emirates. After completing two university degrees in computer science in Egypt, she did her doctorate at the University of Cambridge. In 2009, while a research scientist at the MIT Media Lab, she cofounded Affectiva. On Twitter, as in interviews and articles, el Kaliouby adopts a posture more characteristic of reality-show stars than of high-tech entrepreneurs. She speaks openly about her marriage to an Egyptian entrepreneur and about her divorce a few years later. She talks about the pressure her father exerted on her to return permanently to Egypt in order to save the marriage, and is forthright about how she came to realize that the price was too high and the marriage was lost. She admits to the difficulty of raising two children by herself in a Boston suburb, and recounts a visit by her mother, who came especially from Egypt to help her.

El Kaliouby’s approach is the absolute antithesis of the uptight, ostensibly businesslike tone that dominates the language of most executives in the American high-tech industry. From within the emotional-familial symbiosis in which el Kaliouby finds herself, she conceived the idea for the project that is the basis for her company.

“How can we trust in technology that strips us of what makes us human?” el Kaliouby asked in an interview with Re-Work, a forum that produces high-tech conferences, adding, “AI [artificial intelligence] needs to be able to understand all things human.” The months of being disconnected from her family in the Middle East during her studies led her to start thinking about an emotional interface that would enrich the experience of using tools like Skype. Her vision is to have an emotional chip embedded in every product. Her company’s first product, Affdex, which achieved a certain success in identifying emotion from facial photographs, marked the direction.

Bank of America wanted to detect the emotions of people using its ATMs; NASA examined the possibility of providing artificial emotional support for astronauts in space; and Affdex was also used to gauge viewers’ emotional responses to a televised debate between Barack Obama and Mitt Romney during the 2012 presidential campaign.

In 2011, Affectiva partnered with a major player in market research, and since then has conducted experiments that collectively examine and analyze the facial expressions of people watching thousands of commercials; the aim was to understand what generates emotional engagement with an advertisement, without resorting to a verbal questionnaire. Now, more than a decade after the company was launched, the whole field seems close to a boiling point. Every few weeks, the emotional artificial intelligence market, which several sources have estimated to be worth about $20 billion, delivers unprecedented news. Last August, for example, Amazon announced the development of a tool capable of reading fear on customers’ faces.

How can we not be amazed? The problem is that the theory on which this recognition technology is based is, at the very least, controversial.

The bible of the field in its present form is “The Facial Action Coding System,” a 500-page manual first published in the year el Kaliouby was born. It presents an approach developed largely by the American psychologist Paul Ekman. According to Ekman, the facial expression of emotions is not an acquired trait. Rather, he argues, it is universal: independent of individual, social or specific cultural experience, and of any geographical location, language or personal circumstances. In his research, Ekman distilled our inner world into six basic emotions: happiness, anger, sadness, disgust, fear and surprise – to which contempt was later added. With the aid of photographs of students from different countries, along with those of members of a tribe from Papua New Guinea (all of the respondents were photographed as they reacted to situations presented by the researchers), he showed that these emotions and their manifestations are universal. His thick tome is effectively a guide to identifying emotions through the facial expressions that reflect them.
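The mechanics of that guide are easy to make concrete. FACS decomposes the face into numbered “action units” (AUs), individual muscle movements, and treats each basic emotion as a characteristic combination of them. Here is a minimal sketch in Python of that logic; the specific AU combinations are an illustrative subset drawn from commonly cited summaries of Ekman’s system, not from the manual itself, and the `classify` helper is a hypothetical stand-in for how a lookup might work.

```python
# FACS in miniature: basic emotions as combinations of numbered facial
# "action units" (AUs). These pairings are an illustrative subset of
# commonly cited mappings, not Ekman's full 500-page manual.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},            # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},         # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},      # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},      # brow lowerer + lid raiser/tightener + lip tightener
    "disgust":   {9, 15, 16},        # nose wrinkler + lip depressors
    "fear":      {1, 2, 4, 5, 20, 26},  # tensed brows and lids + lip stretcher + jaw drop
}

def classify(observed_aus: set[int]) -> str:
    """Return the prototype emotion whose AU set best overlaps the observation."""
    def overlap(emotion: str) -> float:
        proto = EMOTION_PROTOTYPES[emotion]
        return len(proto & observed_aus) / len(proto)
    best = max(EMOTION_PROTOTYPES, key=overlap)
    return best if overlap(best) > 0 else "neutral"

print(classify({6, 12}))     # -> happiness
print(classify({1, 4, 15}))  # -> sadness
```

Even this toy version shows the assumption that the critics quoted below will object to: whatever the face is doing gets forced into the nearest of a handful of prototypes, context be damned.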

Ekman’s approach won him many followers. Animators use it to create the emotional range of characters in animated movies. (Ekman himself was an adviser to Pixar in the production of the 2015 film “Inside Out,” in which only five of the emotions appear; surprise and contempt were left out, for some reason.) The method purportedly assists in detecting potential terrorists in airports, solely by analyzing facial expressions.

For her part, el Kaliouby also adopted this approach, which enables her to argue that emotions are objective and completely measurable, and that all that’s required to train algorithms to recognize them is data. Her team claims to possess what is called the largest emotion database in the world: five billion frames of expressions on 6.5 million faces, collected in 87 countries. Artificial neural networks were trained to identify the connection between the expressions and the basic emotions they are supposed to reflect; when the Affectiva system is exposed to new faces, it uses the recognition capabilities it has acquired to determine the emotion being expressed.
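In outline, this is the standard supervised-learning recipe. Below is a minimal sketch in Python, assuming PyTorch, 48×48 grayscale face crops, a toy two-layer convolutional network and random stand-in data; Affectiva’s actual architecture, input format and training corpus are proprietary, so every specific choice here is an assumption.

```python
# A minimal sketch of the training recipe described above: a convolutional
# network mapping face crops to one of seven basic-emotion labels.
# Architecture, image size and the random stand-in data are assumptions.
import torch
import torch.nn as nn

EMOTIONS = ["happiness", "anger", "sadness", "disgust", "fear", "surprise", "contempt"]

class EmotionNet(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48x48 -> 24x24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24x24 -> 12x12
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = EmotionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 grayscale 48x48 face crops with random emotion labels.
images = torch.randn(8, 1, 48, 48)
labels = torch.randint(0, len(EMOTIONS), (8,))

for step in range(3):  # a real pipeline would loop over millions of labeled frames
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# At inference the highest-probability class is reported as "the" emotion;
# that single-label reduction is precisely what the critics object to.
probs = model(images[:1]).softmax(dim=1)
print(dict(zip(EMOTIONS, probs.squeeze().tolist())))
```

Note that nothing in such a pipeline checks whether the labels themselves are sound; the network can only reproduce, at scale, whatever theory of emotion was baked into the annotations.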

The tools Affectiva and other firms use have lately come under attack, in part because their underlying theory never acquired canonical status. In fact, it recalls 18th-century theories of character classification by physiognomy. Not much damage was caused as long as Ekman’s methods were utilized in narrow fields (though the idea of pulling “suspects” out of line in airports based solely on their facial expression is infuriating). However, the criticism becomes more acute when tech giants adopt tools that operate on the basis of the theory.

In a comprehensive article published half a year ago in the journal Psychological Science in the Public Interest, Lisa Feldman Barrett, a professor of psychology at Northeastern University, in Boston, and her colleagues examined about 1,000 studies in the realm of emotional classification that are also used in various forms to construct recognition tools. The authors agreed that expressions have emotional significance, but maintained that this significance is diverse and is context- and culture-dependent. For example, there is no way to infer with certainty an emotion of happiness from a smile, as that same smile can express anger or sadness as well, depending on the context – and the same applies to every expression.

Emotion, in short, is a complex business, and deciphering it with crude technological tools reduces it to the level of a stereotype. A report by AI Now, a research institute at New York University that deals with the social implications of artificial intelligence, went a step further: last December it called for a total ban on the use of emotional artificial intelligence by corporations and government authorities in the United States, because of the technology’s inherent biases. Given that tools of this sort are already being used by corporations such as IBM, Unilever and Dunkin’ Donuts to screen job candidates, the identification of a certain expression as threatening, divorced from any context, is indeed highly problematic.

Affectiva is also involved in therapeutic projects, such as helping individuals on the autism spectrum to “read” other people’s expressions, and monitoring for warning signs of suicide. However, not everyone is persuaded by these socially beneficial applications.

“Here and there you get a fig-leaf study about recognizing autism or people about to have a stroke, but that’s not where the big money is,” says Romi Mikulinsky, head of the master’s program in industrial design at Bezalel Academy of Art and Design in Jerusalem – and who, in conjunction with her students, is engaged in applied research about emotional artificial intelligence. “In the end, this technology, as I am familiar with and follow it, is intended to manipulate people into thinking that things are better for them.”

Like others, Dr. Mikulinsky thinks that the technology is based largely on erroneous data and on a method that needs to be rethought. “Even if very superficial things are being produced here,” she adds, “at least some of the energy needs to be directed to good causes – not only to make the rich richer and help the police regiment us.”

It’s likely that emotion-recognition abilities based on external data will improve, and that in the future certain machines will be capable of deciphering complex nuances in our facial expressions, even without truly understanding us. The electronic devices, in whose company we were until recently able to spend our time without feeling self-conscious, will then join those who are constantly checking up on us, and making sure that we are efficient and polite. The freedom we had to scowl at a dead battery, to cast an angry eye at the laptop, or just to be fed up with the printer, will be taken from us, too – because of our fear that some algorithm will classify us as problematic types. Perhaps one day Rana el Kaliouby, who is so revealing about her feelings of maternal guilt, will share with us more serious pangs of guilt. Even if her intentions are good, she is liable to discover that the road to hell is paved with emotional chips.

  • Boaz Lavie is a writer, comics creator and lecturer on creative artificial intelligence.
