This month’s assembly elections saw a slew of viral political videos that turned out to be deepfakes. Amitabh Bachchan did not take digs at Madhya Pradesh Chief Minister Shivraj Singh Chouhan on Kaun Banega Crorepati. Telangana minister Malla Reddy did not tell voters they wouldn’t get jobs if they re-elected K Chandrashekar Rao. Congress leader Kamal Nath did not warn he’d dismantle a women’s scheme in MP.
The controversy surrounding such content, including a warning from Prime Minister Narendra Modi, has not only raised the issue of online authenticity again but also highlighted the real threat deepfakes pose to democracy, given their power to shape public perception.
Deepfakes, in simple terms, involve altering media — images, video, or audio — using technologies such as artificial intelligence (AI) and machine learning, thereby blurring the line between fiction and reality. Beyond the erosion of online authenticity, victims of deepfakes face obstacles in claiming privacy violations.
Governments have to strike a balance between protecting individual reputations and preserving citizens’ freedom of expression, but there are measures they can put in place.
In the US, Congress has mandated that national security agencies work toward countering deepfakes. The US Department of Homeland Security has also published detailed reports on the harmful consequences of deepfake technologies, as well as mitigation measures. The public is also awakening to the danger. A series of three attitudinal studies conducted in 2020-21 by Northwestern University researchers found that “(a)cross all scenarios, people were extremely willing to punish those who made and distributed deepfake videos”.
While India lacks specific laws to address deepfakes, legal provisions in the Information Technology (IT) Act are currently being used. Earlier this month, the IT ministry issued notices to social media platforms under the IT Act and the IT Rules 2021, stating that online impersonation is illegal.
Apart from this, the Copyright Act 1957 and the Indian Consumer Protection Act include provisions that may be invoked for certain types of offences involving the creation and dissemination of deepfakes.
However, such measures are limited in scope. For instance, IT ministry directives place the responsibility on social media platforms, but WhatsApp, Meta, X, and the like cannot fully regulate the private content generated by individuals, and attempting to do so could itself lead to privacy violations.
There is a need, therefore, for the Indian government to draw up a comprehensive policy addressing the complete deepfake value chain — from content creation (traceability) to communication (social media and other channels) to enabling technologies (datasets and models such as FakeAVCeleb, FaceForensics, and so on).
What other countries are doing
Many countries are taking legislative measures to address the challenges posed by deepfake technology.
In the US, the 2020 Identifying Outputs of Generative Adversarial Networks (IOGAN) Act aims to develop metrics and standards for technologies to detect deepfakes. Several states have also enacted deepfake legislation.
Additionally, the proposed DEEPFAKES Accountability Act of 2023 (which builds upon a 2019 proposal) criminalises the failure to identify “malicious deepfakes”, including content related to foreign interference in any election and criminal incitement, among other things.
China, through the Cyberspace Administration of China (CAC) office, has introduced provisions that address the entire deepfake life cycle, from creation to communication and consumption. China has also introduced laws mandating the disclosure of the use of deepfake technology in videos and other media.
The European Union has issued guidelines requiring tech companies such as Google, Meta, and X to take measures to counter deepfakes. The Digital Services Act and the proposed EU AI Act address the monitoring of digital platforms for misuse. The UK’s freshly minted Online Safety Act also addresses the sharing of sexual deepfakes.
India needs a strong regulation & policy framework
A robust government policy is essential to tackle the entire deepfake cycle, from content creation and communication to the role of technology apps.
To start, defining what constitutes a deepfake is necessary. The legal framework should consolidate provisions from various existing laws under one umbrella, encompassing AI regulation, data protection, copyright issues, and action plans on disinformation.
All people utilising technologies to create, publish, or communicate content should be required to obtain consent, verify identities, report deepfake information, provide watermark disclaimers, and so on.
The policy should include measures to build public awareness and fund research & development of deepfake detection technologies. Further, it should establish mechanisms for effective coordination across ministries and social media intermediaries.
The biggest challenge, of course, lies in enforcement. Malicious users often operate anonymously, adapt quickly to technological advancements, and find ways to exploit loopholes in the law.
On the flip side, the policy must also guard against overreach. It should address possibilities for human rights violations and safeguard the right to privacy and personal data protection.
Where tech, law & public awareness meet
An effective policy must address the legal as well as technological aspects of deepfakes.
A potential tool to combat fake content is the use of deepfake recognition software on social media platforms. These apps can detect fake videos or images before they enter the social sphere and can also be used to watermark content generated using deepfake technology. Computer scientists from the University at Buffalo claim a high success rate in detecting deepfakes through a tool that uses light reflections in the eyes, and there is plenty of ongoing research on detection technologies.
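The watermarking idea mentioned above can be illustrated with a toy least-significant-bit scheme. This is a simplified sketch, not how production provenance systems work (those rely on cryptographically signed metadata and perceptually robust marks); the `AI-GEN` tag and the function names here are purely hypothetical.

```python
# Toy invisible watermark: hide a tag in the least significant bits of
# 8-bit grayscale pixel data. Purely illustrative; real provenance
# systems use signed metadata and marks that survive re-encoding.

WATERMARK = b"AI-GEN"  # hypothetical tag marking AI-generated content

def embed_watermark(pixels: bytearray, tag: bytes = WATERMARK) -> bytearray:
    """Write each bit of the tag into one pixel's least significant bit."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the LSB only
    return out

def read_watermark(pixels: bytearray, length: int = len(WATERMARK)) -> bytes:
    """Reassemble the hidden bytes from the pixels' least significant bits."""
    tag = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        tag.append(byte)
    return bytes(tag)

image = bytearray(range(200, 256)) * 2  # stand-in for real pixel data
marked = embed_watermark(image)
print(read_watermark(marked))  # b'AI-GEN'
```

Because only the lowest bit of each pixel changes, the mark is invisible to the eye — which is also why this naive version is fragile: any compression or resizing destroys it, one reason policy discussions favour standardised, tamper-evident provenance formats.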
Social media algorithms, however, are not currently great at detecting deepfakes — for instance, the winning algorithm in a detection challenge hosted by Facebook picked up just two-thirds of a set of manipulated videos.
The domain of deepfakes is notoriously murky, which any policy must take into account. For instance, in a wrongful death suit against Tesla, Elon Musk’s lawyers contended that videos of him promoting the safety of Tesla Autopilot could be deepfakes.
Creating deepfakes requires sophisticated, trained algorithms — typically generative adversarial networks (GANs) — that can recognise and manipulate faces, and their output is difficult to detect. Software apps like FakeApp, Faceswap, and Zao have nevertheless made creating deepfakes easy for the average person.
However, active information campaigns can help mitigate the detrimental effects of deepfakes. Educating citizens about the prevalence of deepfakes on social media, as well as their negative consequences, is the most cost-effective way to combat the issue. This awareness-building has to be done in a calibrated manner to enable citizens to distinguish deepfakes from genuine content. Else, authentic information may start losing credibility as well.
Democratic governments have to regulate deepfakes responsibly. Policy changes, along with efforts to foster awareness, can empower citizens to become more discerning about what they see online, thus reducing the negative impacts of deepfakes.
Ramakanth Desai is a management consultant & strategic advisor to startups in fintech, medtech, and edtech. He was formerly president & co-CEO of the IT services business Happiest Minds Technologies.
(Edited by Asavai Singh)