Deepfakes through an ethical lens

Creating a false narrative using deepfakes is dangerous and can cause intentional and unintentional harm to individuals and society at large. The unprecedented realism of deepfakes demands ethical scrutiny.

Ashish Jaiman
9 min read · Aug 2, 2020
Greatest philosophers and thinkers: illustration licensed from matiasdelcarmine (stock.adobe.com)

Imagine, a few days before an election, a video of a candidate is released in which the candidate spews hate speech, racial slurs, and epithets, undercutting his pro-minority image. Imagine a teenager waking up to an explicit video of herself on social media. Now imagine a CEO on a roadshow to take her company public when an audio clip voicing her fears about regulation and anxieties about her product is sent to her investors. None of these scenarios is real; they are examples of the malicious use of AI-generated synthetic media, also known as deepfakes [1].

Recent advancements in artificial intelligence (AI) and cloud computing have rapidly increased the sophistication of audio, video, and image manipulation techniques. Commodity cloud computing, publicly available AI research algorithms, and abundant data have created a perfect storm that democratizes the creation of deepfakes. Their distribution is likewise democratized at scale by open and closed social platforms.

The Book is now available at Amazon — https://www.amazon.com/Deepfakes-aka-Synthetic-Media-Humanity-ebook/dp/B0B846YCNJ/

While there are numerous positive use cases of deepfakes, such as art, expression, accessibility, and business, the technology can also be weaponized for malicious purposes. Deepfakes can harm individuals, society, business, and democracy, and accelerate the already declining trust in media. Such erosion of trust in truth will promote a culture of factual relativism, unravelling the increasingly strained fabric of democracy and civil society. Deepfakes can additionally enable the least democratic and authoritarian leaders to thrive, as they may leverage what is called the Liar's Dividend, whereby any inconvenient truth is quickly discounted as "fake news."

The unprecedented realism of deepfakes requires comprehensive ethical analysis. Creating a false narrative using deepfakes is dangerous and can cause intentional and unintentional harm to individuals and society at large. Deepfakes could worsen the global post-truth crisis because they are not just fake but so realistic that they betray our most innate senses of sight and sound. Putting words into someone else's mouth, swapping one person's face onto another, and creating synthetic images and digital puppets of public personas to systematize deceit are ethically questionable acts, and their perpetrators should be held responsible for the potential harm to individuals and institutions.

Deepfakes created to intimidate, humiliate, or blackmail an individual are unambiguously unethical. In this article, I will explore the ethical implications of deepfakes through a few scenarios, their effects on the democratic process, and the responsibilities of technology and social media platforms.

FaceSwap

The first malicious use of deepfakes was in celebrity and revenge pornography. Deepfake pornography sits in the macro-context of gender inequality; it exclusively targets and harms women, inflicting emotional and reputational damage and, in some cases, violence. According to Deeptrace [2], 96% of deepfakes are pornographic videos, with over 135 million views on pornographic websites alone.

Pornographic deepfakes can threaten, intimidate, and inflict psychological harm on an individual. Deepfake porn reduces women to sexual objects and torments them, causing emotional distress, reputational harm, and abuse, and in some cases material harm like financial loss and collateral consequences like losing a job.

Deepfake pornography strikes most people as intuitively disturbing and immoral. The moral imperative is straightforward for non-consensual deepfake pornography, whether celebrity face swaps or individual revenge porn. Several sites (e.g., Reddit, Pornhub, LinkedIn) have pre-emptively banned deepfakes.

The ethical issue is more convoluted for consensual synthetic pornography. Some services will help create deepfake pornography with consent, and it may be argued that this is equivalent to the morally acceptable practice of sexual fantasy. The conundrum with consensual deepfakes is that they can normalize the idea of artificial pornography, which could further exacerbate concerns about pornography's negative impact on psychological and sexual development. Realistic virtual avatars could similarly invite online bullying and other negative experiences, since people may find it more morally acceptable to act adversely toward a virtual avatar than toward a real human.

Synthetic Resurrection

The ethical issues around synthetic resurrection are very muddy. The right of publicity applies to living people, giving an individual the right to control the commercial use of their likeness. In a few US states the right extends beyond death; the picture may be different and more complex in other countries.

For public personalities, the main question is who owns their face and voice posthumously. Can it be used for publicity, propaganda, and commercial gain? There are moral and ethical concerns about how deepfakes could misrepresent political leaders posthumously to attain political and policy goals. There are some legal protections against using a deceased person's voice and face for commercial gain, but where heirs hold the legal rights to the likeness, they can use it for their own commercial benefit.

Another potential ethical concern is creating deepfake audio or video of a loved one after they have passed. Some voice technology companies will create a synthetic voice as a new kind of bereavement therapy, or to help people remember the deceased and remain connected with them [3]. There is moral ambiguity in using the voice and face of the departed, even creating realistic digital puppets of them, for personal reasons. Some may argue it is akin to keeping pictures and videos of dead loved ones.

Synthetic Voice

Voice assistants like Alexa, Cortana, and Siri are acquiring very realistic voices; still, consumers can differentiate a synthetic voice from real human interaction. Improvements in speech technology now let voice assistants imitate the human and social elements of speech, such as pauses and verbal cues. Google's Duplex was released with the hypothesis that it would be indistinguishable from a human voice when booking an appointment or interacting with the person on the other side of the conversation [4].

The realistic human voice raises ethical concerns because deepfake voice technology is built to pass a synthetic voice off as a real person, undermining genuine social interaction. There are also concerns about racial and cultural bias: a voice assistant may process dialects, voices, and accents differently, or fail to decipher them, because of bias in its training dataset. Training AI on biased data is a broader concern for artificial intelligence, not limited to deepfakes.

Synthetic voice deepfakes can also be used to deceive people for monetary and commercial gain. Automated call centers, deepfake audio of public personalities, and phone scammers can all use synthetic voices for their benefit, which is unethical. Custom voice fonts raise a further dilemma for voice artists, as they may limit current employment opportunities, affect control over their voice likeness, and reduce future recording prospects.

Image Generation

Deepfake technology can create a face, a person, or an object that is entirely artificially generated. There are vast numbers of good use cases for an artificial computer-generated face or avatar, but the synthetic artifact can also be used for deception. There are documented cases of fake digital identities being unethically created and enhanced for fraud, infiltration, and espionage [2].

A deep learning algorithm (an artificial intelligence system) requires a great deal of training data to build an effective model and generate a deepfake. A synthetic face is generated by training the algorithm on many real face images. It is unethical to train a model on real faces if a proper consent framework for using those faces is not in place.
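The idea above can be sketched in a few lines. Real deepfake pipelines use deep generative models (GANs or autoencoders), but a simple linear "eigenfaces" model built with PCA illustrates the same mechanism and the same ethical gate: the generator is fit only on images whose subjects consented. All names and the random stand-in data below are hypothetical illustrations, not any real system's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a dataset of real face images (200 flattened 8x8 "faces"),
# each paired with a consent flag recorded during data collection.
faces = rng.normal(size=(200, 64))
consent = rng.random(200) > 0.1  # True where the subject consented

# Ethical gate: train only on images whose subjects consented.
training_set = faces[consent]

# Fit a simple linear latent model (PCA / eigenfaces) on consented images.
mean = training_set.mean(axis=0)
centered = training_set - mean
_, s, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:16]              # 16 "eigenface" directions
latent = centered @ components.T  # project data into latent space

# Generate a new synthetic "face" by sampling the latent distribution.
z = rng.normal(size=16) * latent.std(axis=0)
synthetic_face = mean + z @ components

print(synthetic_face.shape)  # one new image-sized vector, shape (64,)
```

The point of the sketch is that the generated face is entirely a function of the training images: whatever consent (or lack of it) lies behind the dataset is baked into every output the model produces.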

Democratic discourse and process

In politics, stretching the truth, over-representing a policy position, and presenting alternative facts about the opposition are accepted tactics. They help mobilize, influence, and persuade voters and donors. Political opportunism, though unethical, is considered fair game.

Using deepfakes and synthetic media may have a profound impact on the outcome of elections, and the means to the end of winning must be examined through an ethical lens. The utilitarian principle of ethics, which holds that the outcome of an action determines its morality (the concept of the greater good), is often used to justify whatever it takes to win an election.

Deepfakes can harm both voters and candidates. Deception profoundly harms individuals because it impedes their ability to make informed decisions in their own best interests. Intentionally distributing false information about the opposition, or presenting an alternative truth about one's own candidate, manipulates voters into serving the interests of the deceiver [5]. These practices are unethical and offer limited legal recourse. Similarly, a deepfake used to intimidate voters into not voting is immoral.

Deepfakes may also be used for misattribution: telling a lie about a candidate, falsely amplifying their contributions, or inflicting reputational harm. A deepfake intended to deceive, intimidate, misattribute, or inflict reputational harm in order to perpetuate disinformation is unambiguously unethical. It is equally unethical to invoke the liar's dividend by dismissing reality as a deepfake. The liar's dividend is when a genuine piece of media, or an undesirable truth, can be dismissed as fake news by a leader.


Stakeholder obligations

All stakeholders in the deepfake issue, both creators and distributors, must ensure that synthetic media is used ethically. Big technology platforms like Microsoft, Google, and Amazon, which provide the tooling and cloud computing to create deepfakes quickly and at scale, bear a moral responsibility. Social media platforms like Facebook, Twitter, LinkedIn, and TikTok, which offer the ability to distribute a deepfake at scale and at internet speed, along with news organizations and journalists, legislators and policymakers, and civil society, must all subscribe to ethical and social responsibility around deepfakes.

The ethical obligation of the platforms is to prevent harm. While users on those platforms do have a responsibility for what they share and consume, structural and informational asymmetries make it hard, and arguably unethical, to expect users to play the primary role in responding to malicious deepfakes. Shifting that burden to users might seem defensible, but platforms must do the right thing and bear the primary responsibility for identifying and preventing the spread of misleading manipulated media.

Most technology and social platforms have policies covering disinformation and malicious synthetic media. Platforms must align their deepfake policies with their ethical principles. For example, if a deepfake can cause significant harm (reputational or otherwise), platforms should remove the content. They should also apply dissemination controls or differential promotion tactics, such as limited sharing or downranking, to stop the spread of deepfakes on their networks. Labeling content is another effective tool, and platforms must deploy labels objectively and transparently, without political bias or business-model conflicts.

Platforms also bear an ethical obligation to create and maintain the dissemination norms of their user communities. The framing of community standards, community identity, and user-submission constraints can have a real impact on content producers. Norms and community guidelines, including examples of desirable behavior and positive expectations of users as community participants, reinforce behavior consistent with those expectations. Terms of use and platform policies play a pivotal role in preventing the spread of harmful fabricated media. Institutions interested in combating manipulated media likewise have an ethical obligation to ensure access to educational programs.

Alongside its published AI principles of fairness, inclusiveness, transparency, reliability and safety, accountability, and privacy and security, Microsoft has developed Responsible Innovation [6], a best-practices toolkit. The toolkit and its harms modeling give creators a set of development best practices for anticipating and addressing the potential negative impacts of technology on people. The framework is grounded in four pillars for examining how people's lives can be negatively affected by technology: potential injuries to individuals, denial of consequential services, infringement on human rights, and erosion of democratic and societal structures.

Parting thoughts

Deepfakes make it possible to create realistic fabricated media (face swaps, lip-syncing, and puppeteering), mostly without consent, and they are a threat to individual psychology, security, political stability, and business. The weaponization of deepfakes can have a massive impact on the economy, personal freedom, and national security. The ethical implications are enormous. Deepfake threat models, harm frameworks, ethical AI principles, and commonsense regulations must be developed through public and private partnerships with civil society oversight, in ways that promote awareness, encourage advancement, and do not stifle innovation.

References

[1] R. Chesney and D. K. Citron, "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security," 107 California Law Review 1753, p. 68, 21 July 2018.

[2] G. Patrini, H. Ajder, F. Cavalli and L. Cullen, "The State of Deepfakes," Deeptrace, 2019.

[3] H. Ajder, "The ethics of deepfakes aren't always black and white," 16 June 2019. [Online]. Available: https://thenextweb.com/podium/2019/06/16/the-ethics-of-deepfakes-arent-always-black-and-white/.

[4] N. Lomas, "Duplex shows Google failing at ethical and creative AI design," TechCrunch, 10 May 2018. [Online]. Available: https://techcrunch.com/2018/05/10/duplex-shows-google-failing-at-ethical-and-creative-ai-design/.

[5] N. Diakopoulos and D. Johnson, "Anticipating and Addressing the Ethical Implications of Deepfakes in the Context of Elections," New Media & Society, p. 27, 2019.

[6] M. Lane, "Responsible Innovation: The Next Wave of Design Thinking," Microsoft, 2020.


Ashish Jaiman

thoughts on #AI, #cybersecurity, #techdiplomacy sprinkled with opinions, social commentary, innovation, and purpose https://www.linkedin.com/in/ashishjaiman