Building Trust in AI Part IV: Trust in Business

Data Begets Data

In today’s steadily growing digital economy, information is the new currency, data is the new oil, and attention is a scarce resource. The amount of data on earth is doubling at a rate that is arguably unmeasurable. There are many studies about data, and they all measure different things: How much data is transferred over the Internet? How much data is stored on devices? All of these questions are proxies; the truth is that no one really knows anymore. The thing is that data begets data, so having more information doesn’t necessarily mean we have more ground truth. In an information-rich world, a wealth of information means a dearth of something else: the scarcity of whatever it is that information consumes, which in our case is the attention of its intended recipients.

The companies that build our favorite digital tools don’t just collect data; they monetize it. They compete with each other to see who can gather the most data on you and then offer it to advertisers. We are not just their customers; we are what they sell. It isn’t all bad, though: the same machinery makes it easy for companies to create different web pages for different people. Sometimes that customization is helpful, such as when you see search results for restaurants near you, and today’s consumers have come to expect personalization from the private sector. Sometimes it is creepy, such as when ads follow you around from website to website. And sometimes customization can cost you money: Orbitz, for instance, showed higher-priced hotels to owners of Mac computers.
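To make the Orbitz example concrete, here is a minimal, hypothetical sketch of device-based result steering. The `Hotel` class, the User-Agent check, and the sorting rule are all illustrative assumptions, not Orbitz’s actual implementation; note that the reported behavior was re-ordering results, not charging different prices, which is what the sketch mirrors.

```python
from dataclasses import dataclass

@dataclass
class Hotel:
    name: str
    nightly_rate: float

def rank_hotels(hotels: list[Hotel], user_agent: str) -> list[Hotel]:
    """Order search results, steering users inferred to be on macOS
    toward pricier options by listing them first (illustrative only)."""
    is_mac = "Macintosh" in user_agent  # the OS leaks via the User-Agent header
    # Same inventory, same prices; only the ordering changes per device.
    return sorted(hotels, key=lambda h: h.nightly_rate, reverse=is_mac)

hotels = [Hotel("Budget Inn", 79.0), Hotel("Grand Plaza", 249.0)]
mac_ua = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)"
print([h.name for h in rank_hotels(hotels, mac_ua)])
# ['Grand Plaza', 'Budget Inn'] for Mac users; the reverse otherwise
```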

Emotional Contagion

Facebook knows who all your online friends are. It has seen all your photos; it uses facial recognition across your devices; it knows where you live. Yet Facebook still buys data from data brokers about your offline life to enhance its profile of you: what car you drive, the cost of your mortgage, what you buy at the supermarket. Even though this data is often sloppy or inaccurate, it feeds a multi-billion dollar industry. Why do all this? To serve you targeted ads, of course. Facebook calls this the optimization of your Facebook experience, but perhaps more accurately it is the monetization of your behavior.

In 2014 Facebook released a paper in which its researchers manipulated the News Feeds of 689,003 of its users, showing that emotional states can be transferred to others via emotional contagion. Emotional contagion is the idea that emotions, both positive and negative, can be transferred between people. The 2014 study showed that emotional contagion can occur outside of in-person interaction, by reducing the amount of emotional content in the News Feed. Simply put, when the researchers reduced the amount of positive news in participants’ feeds, those individuals expressed more negativity in general (fewer positive expressions). The opposite effect (more positive expression) was observed when they instead reduced the amount of negative news in participants’ feeds. What they showed was that it is actually possible to lead a group of people to experience the same emotions without their awareness, without any direct interaction between people, and in the complete absence of nonverbal cues. So it turns out that Facebook, and social media platforms at large, can deliberately alter your emotions and hence how you think. This gives the market, among other things, an easier and more strategic doorway into manipulating your mind. For instance, studies show that women feel most vulnerable on Mondays and feel best about themselves on Thursdays, so naturally particular ads are delivered during peak vulnerability moments. Or say a woman just posted a selfie with the hashtag beautiful: to best target her, the market aligns with her current emotions and feeds her weekend style, fashion, and fun, framed with a slogan such as how to be a “true independent woman.”
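A minimal sketch can make the study’s mechanism clearer. The function below probabilistically withholds posts carrying the targeted emotion from a user’s feed; the sentiment labels, `omit_prob`, and function name are assumptions for illustration, not the study’s actual pipeline.

```python
import random

def filter_feed(posts, condition, omit_prob=0.5, seed=42):
    """Probabilistically withhold emotional posts from a feed.

    posts: list of (text, sentiment) pairs, sentiment in
           {'positive', 'negative', 'neutral'}.
    condition: 'reduce_positive' or 'reduce_negative'.
    """
    rng = random.Random(seed)
    target = "positive" if condition == "reduce_positive" else "negative"
    feed = []
    for text, sentiment in posts:
        if sentiment == target and rng.random() < omit_prob:
            continue  # withhold this post from the experimental group
        feed.append(text)
    return feed

posts = [("Great day!", "positive"), ("Ugh, traffic.", "negative"),
         ("Meeting at 3.", "neutral"), ("Love this view!", "positive")]
print(filter_feed(posts, "reduce_positive"))  # some positive posts vanish
```

The experimental group then sees systematically less of one emotion, and the study measured how their own subsequent posts shifted in response.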

The Belmont Report was written by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Commission, created as a result of the National Research Act of 1974, summarizes the ethical principles and guidelines that should underlie the conduct of biomedical and behavioral research involving human subjects. It identifies three core areas of application for its principles: informed consent, assessment of risks and benefits, and selection of subjects.

The Declaration of Helsinki is a set of ethical principles regarding human experimentation developed for the medical community by the World Medical Association. Although it is not a legally binding instrument under international law, it is widely regarded as the cornerstone document on human research ethics. The fundamental principles of the Declaration of Helsinki boil down to respect for the individual (Article 8) and their right to self-determination and to make informed decisions (Articles 20, 21 and 22) regarding participation in research, both initially and during the course of the research. The investigator’s duty is solely to the patient (Articles 2, 3 and 10) or volunteer (Articles 16, 18), and while there is always a need for research (Article 6), the subject’s welfare must always take precedence over the interests of science and society (Article 5), and ethical considerations must always take precedence over laws and regulations (Article 9).

An editor on the study, Susan Fiske, had this to say in her interview with The Guardian:

“People are supposed to be told they are going to be participants in research and then agree to it and have the option not to agree to it without penalty.” –Susan Fiske, 2014

Susan Fiske’s comment suggests that despite Facebook’s awareness of research best practices, ethics, and international guidelines, the company still did not feel the need to inform its users of the nature of the research or its potential impacts. Can this, and other cases like it such as the fake news debacle, be taken as evidence that research best practices, ethics, and international guidelines for AI and machine learning systems, when not enforced by international law, will largely be ignored by big digital corporations?

9:1 Negativity Bias

Bad is stronger than good. When we look at high performance in the brain, one part is especially important: the prefrontal cortex (PFC). This is the part of the brain that makes us different from animals; it arrived very late in evolution, and we need it for rational, higher-level processing and decision making. That is what companies are looking for in the people they hire: those who can really perform well and have well-functioning brains. The PFC only fully matures around age 18 to 21. It is also responsible for inhibition, delay of gratification, and executive control, the ability to plan ahead and not grab everything at once; as such, the PFC is implicated in addiction.

The PFC is routinely overruled by another part of the brain, the limbic system, because the limbic system has far more connections projecting to the PFC than vice versa, giving it a much greater influence on the PFC. The limbic system is where pleasure, reward, emotion, memory, and reinforcement are processed, and it is far older from an evolutionary perspective. When people are in a reward state, dopamine is released from the limbic system, which has a very positive impact on the functioning of the PFC; people who are in a good mood and feeling well perform better. When people are in threat mode, the PFC shuts off, triggering the fight-or-flight response. That shutdown is important for making quick decisions: you don’t want to weigh ten different ways of escaping a tiger, you just run. The problem is that social situations also kick off this threat circuit. They are not really a danger to our life, but our brain processes them as if somebody were about to hurt us, and so we lose access to our full mental power.

This is the 9:1 negativity bias: between reward and threat, the brain does not treat positive and negative stimuli equally; we process negative experiences roughly nine times more strongly than positive ones. Negative information is like Velcro. We are now experiencing real-time manipulation of our dopamine by companies that have understood how to shape our attitudes through gamification of our dopamine responses. It has become commonplace for companies to use AI and machine learning algorithms to maximize dopamine response; they know how to build a system that maximizes user engagement to make them money. It is surely no coincidence, then, that the activities social networking sites foster are fake news, gossip, virtue signaling, rumor-mongering, and the ever-shifting movements of popular culture and fad.
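As a toy illustration of how engagement maximization and the negativity bias interact, here is a sketch of a feed ranker in which predicted engagement drives ordering and negatively valenced items receive a heavier weight, echoing the 9:1 asymmetry above. The weights, feature names, and scoring rule are assumptions for illustration, not any platform’s real model.

```python
NEGATIVITY_WEIGHT = 9.0  # assumed, mirroring the 9:1 ratio described above
POSITIVITY_WEIGHT = 1.0

def engagement_score(item: dict) -> float:
    """item: {'predicted_clicks': float, 'valence': float in [-1, 1]}."""
    valence = item["valence"]
    # Negative valence is boosted ~9x as strongly as positive valence.
    boost = (NEGATIVITY_WEIGHT * -valence) if valence < 0 else (POSITIVITY_WEIGHT * valence)
    return item["predicted_clicks"] * (1.0 + boost)

feed = [
    {"id": "calm-news",     "predicted_clicks": 0.30, "valence":  0.1},
    {"id": "outrage-rumor", "predicted_clicks": 0.25, "valence": -0.8},
]
ranked = sorted(feed, key=engagement_score, reverse=True)
print([item["id"] for item in ranked])  # the outrage item ranks first
```

Even with a lower baseline click prediction, the negative item wins, which is exactly the dynamic that rewards fake news and rumor-mongering.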

The AI and machine learning algorithms that accomplish this are largely hidden from view, remaining opaque even when we are prompted to examine them, and they are rarely subject to the same checks and balances as human decision-makers. Google uses algorithms for almost all of its services, everything from search engine results to driving directions on Google Maps, and these can make connections that aren’t just strange but actually discriminatory. Consider the Google search results leading up to the 2012 presidential election: people who typed Obama were shown Obama-related results in their subsequent searches, but those who queried Romney were not shown Romney-related results in subsequent searches. Google’s response was that its machine learning algorithm had found that people who searched for Obama wanted more Obama results, while those who searched for Romney didn’t. For a traditional media company such disparity would be viewed as biased coverage; in the era of algorithms, however, it is much harder to blame a machine for bias. The question that arises: in an ever-growing age of digital customization, how do we balance the benefits of personalization against its side effects?
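The Obama/Romney disparity reads like a classic personalization feedback loop. Below is a minimal sketch, under assumed click-through updating and an assumed visibility threshold, of how a small initial gap in engagement can compound into one topic vanishing entirely; nothing here reflects Google’s actual ranking system.

```python
import random

def simulate(click_rate: float, rounds: int = 20, seed: int = 0) -> float:
    """Toy feedback loop: a topic's weight rises when clicked and
    decays once it stops being shown, so an initial gap compounds."""
    rng = random.Random(seed)
    weight = 0.5                      # both topics start out neutral
    for _ in range(rounds):
        if weight > 0.2:              # only sufficiently weighted topics are shown
            clicked = rng.random() < click_rate
            weight += 0.3 * ((1.0 if clicked else 0.0) - weight)
        else:
            weight *= 0.9             # unseen topics fade further from view
    return weight

print(f"high-engagement topic: {simulate(0.8):.2f}")  # stays visible
print(f"low-engagement topic:  {simulate(0.1):.2f}")  # disappears from results
```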

Corporations are using AI and machine learning algorithms to consciously exploit vulnerabilities in human psychology. Not only have these algorithms become interrupting, but it perhaps even feels normal for many of us to keep checking and reaching for our phones with the expectation that there’s a notification. Perhaps you know what I’m talking about, perhaps you do this yourself, and perhaps you’re doing it right now while reading this post… The thing is, we don’t do this consciously but habitually, just like Pavlov’s dog, the famous example of classical conditioning. To put this into context: Pavlov’s bell is your notification ringtone, and the treat is that one message a day you might get that makes you happy.
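To make the conditioning loop concrete, here is a toy sketch of an intermittent, variable-ratio notification schedule, the reinforcement pattern most strongly associated with compulsive checking. The probability, event names, and function are illustrative assumptions, not any app’s real scheduler.

```python
import random

def next_notification(rng: random.Random, treat_prob: float = 0.1) -> str:
    """One ring of the 'bell': usually filler, occasionally the treat."""
    if rng.random() < treat_prob:
        return "message from a close friend"  # the rare rewarding treat
    return "promotional ping"                 # the common filler

rng = random.Random(7)
checks = [next_notification(rng) for _ in range(20)]
print(f"treats in 20 checks: {checks.count('message from a close friend')}")
# Because the payoff is unpredictable, every ring compels a check,
# just as Pavlov's bell did even when no food followed.
```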

AI becomes the tool of inattention, and a potential tool for the dismantling of democracy. We often lose sight of the actual trade-off because we are so focused on the benefit. We really need to think hard about what makes corporations behave unethically, because it’s not the people within them. There is something about our current business models and economic systems that makes corporations behave legally yet unethically, and that means we either need to change the laws to make more of these behaviors illegal, or we need to rethink the incentives of the system.

Hope this helps…