Generative AI has advanced tremendously over the past half decade, and the change is plainly visible in the content we consume. A deceased figure singing a Frank Sinatra song, somebody speaking fluent Spanish despite not knowing the language, political rivals gleefully hugging each other: these are all examples you may have seen circulating in your WhatsApp group chats. You may also have noticed the improving caliber of these images, which are at times difficult to distinguish from real, non-AI-generated ones, and that raises the question of authenticity.
Authenticity & Identification
Authenticity, in this context, is the degree of genuineness we ascribe to something; it determines how likely we are to view that thing as truthful, sincere, or made in good faith. Authenticity is established through identification, the process by which we assign identities to things. To judge whether something is authentic, we examine its characteristics and check whether they line up with the identities we deem genuine; if they do, we consider it authentic. These characteristics are known as identifiers, and we use them as tools to corroborate authenticity.
These identities range from those of inanimate products to our own as people, and over time the systems used to create and verify them have grown. This growth is not just technological but also sociological, given that identification now rests on factors that may not have existed a generation ago. Because sanctioned identifiers are so useful, there have been concerted efforts to spread them, with results such as a 15-percentage-point increase in birth registrations between 2000 and 2019. And because the legitimacy of these identifiers is what confers authenticity, their replicability and the trustworthiness of the authorities behind them have always been in question. Given how easily things can be replicated in our digital landscape, a far higher degree of scrutiny is required to check whether these identifiers are indeed genuine.
This phenomenon applies to all published media, and media censorship in Soviet Russia offers a practical historical example. Stalin was notorious for doctoring photographs, often erasing anyone he considered a transgressor. While we would now call the published material inauthentic, authenticity simply wasn't a variable people often considered at the time, especially for photographic material.
Digitized media and its replicability
In the digital landscape, media is not only propagated at immense speed but published just as fast: over 500 hours of content are uploaded to YouTube every minute. This raises the question of what fraction of that content is inauthentic or unreliable. A digital landscape lets anyone adopt a separate identity, and even the presence of seemingly credible identifiers, such as a voice or a face, gives no assurance that what is presented is authentic. Advances in technology mean that even these physical identifiers may no longer be adequate in a digital setting; speech synthesis and facial deepfakes have been shown to be capable of doing immense harm.
Digital platforms can also be tools for propaganda. An increasingly common example is social media personalities, at times unknowingly, using their influence to spread sponsored narratives. Because of the reach that influencers have, governments have begun employing them as vehicles for propaganda, and given the nature of propaganda, the influencers themselves may not even be aware of the part they have played in spreading misinformation or disinformation.
Another concern is that, because of the algorithmic nature of these platforms, those most vulnerable to misinformation are the most likely to see it. An informed person who encounters blatant misinformation will mostly ignore it and move on; if a less informed person interacts with it, however, the algorithm is more likely to push similar content. Empirically, older people see significantly more fake news on their feeds.
Authentication tools and systems
One form of online authentication that is well cemented in most social media platforms is the verification badge. While this helps distinguish agencies and prominent figures, it only confirms that the account's credentials match a person's identity. The content posted by that figure or organization carries no assurance of reliability, a fact many people fail to acknowledge.
One countermeasure being developed for AI-based imagery is the deepfake-detection or AI-watermarking tool, which can do an apt job of catching AI-generated images and videos. Such detection technology is ultimately a cat-and-mouse game, however: advances in generative AI may make detection harder and generated content less conspicuous. Furthermore, because detection only happens after someone feeds the media into a tool, a person must already be skeptical enough to verify an image, which is rarely the case given the sheer volume of media people scroll through nowadays. Although a reasonable measure, after-the-fact detection tools are certainly not a definitive solution.
Blockchain technology can act as a highly reliable, tamper-evident ledger, which makes it a promising tool for authentication. Every transaction on the blockchain is shared across a peer-to-peer network of systems around the world, each of which verifies it before it is appended to a publicly accessible and permanent history of transactions. The rise of cryptocurrencies such as Bitcoin is due to these technological capabilities, and we may well see more identification tools built on blockchain. It is worth noting, however, that the technology is highly resource-intensive and unsustainable from an environmental standpoint.
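To make the tamper-evidence property concrete, here is a minimal sketch in Python, not a real blockchain client: each block commits to the hash of its predecessor, so altering any old entry invalidates every link that follows. The field names and example data are purely illustrative.

```python
# Minimal sketch of a hash-chained ledger (not a real blockchain client):
# every block stores the hash of the previous block, so tampering with any
# earlier entry breaks the chain and is immediately detectable.
import hashlib
import json


def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    encoded = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(encoded).hexdigest()


def append_block(chain: list, data: str) -> None:
    # New blocks commit to the hash of the block before them.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})


def verify_chain(chain: list) -> bool:
    # Valid only if each block still points at the current hash of its predecessor.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True


chain = []
append_block(chain, "photo A registered by agency X")   # hypothetical entries
append_block(chain, "photo B registered by agency Y")
print(verify_chain(chain))   # True

chain[0]["data"] = "photo A registered by someone else"  # retroactive edit
print(verify_chain(chain))   # False: the tampering is evident
```

In a real network, the same check is performed independently by many peers, which is what makes the shared history so hard to rewrite quietly.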
Another promising approach is to attach credibility to media directly. Content credentials act as a badge of authenticity, similar to a food safety label, and are an endeavor backed by many organizations that aim to give credence to online content. They also record whether AI has been used to create or alter an image. The system works through metadata embedded in the image, and because the credential is attached in advance, it can bestow credence far more reliably than after-the-fact detection.
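The gist of such a scheme can be shown with a short, hypothetical Python sketch. The real Content Credentials standard uses certificate-based signatures and a structured manifest in the file's metadata; here an HMAC merely stands in for that signature, and the key and claim format are invented for illustration.

```python
# Illustrative sketch only: binds a signed edit-history claim to a hash of
# the image bytes, so any change to the pixels or the claim is detectable.
import hashlib
import hmac

ISSUER_KEY = b"issuer-secret"  # hypothetical signing key held by the issuer


def issue_credential(image_bytes: bytes, history: str) -> dict:
    # Hash the content, pair it with the history claim, and sign the pair.
    digest = hashlib.sha256(image_bytes).hexdigest()
    claim = f"{digest}|{history}"
    signature = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"content_hash": digest, "history": history, "signature": signature}


def verify_credential(image_bytes: bytes, cred: dict) -> bool:
    # Recompute the hash and signature; edited pixels or a forged claim fail.
    digest = hashlib.sha256(image_bytes).hexdigest()
    claim = f"{digest}|{cred['history']}"
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["signature"])


photo = b"\x89PNG...raw image bytes..."               # placeholder image data
cred = issue_credential(photo, "edited with generative AI")
print(verify_credential(photo, cred))                  # True
print(verify_credential(photo + b"tamper", cred))      # False
```

Because the credential travels with the file and is checked against the content itself, a viewer can see at a glance whether the image has been altered since it was signed and whether AI was involved.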
What this means for our future
This may all seem daunting: if nobody can govern the issue in its present state, what does our future look like? A key thing to note, however, is that there is a market for authenticity; people crave it. The propaganda model of influencers conveying disinformation is powerful precisely because we regard these people as authentic and genuine. The same demand is visible in how brands now market themselves through personal stories, portraying something genuine so that a 'real' connection is formed. Ultimately, what matters is not just our demand for authenticity but also the scrutiny we place on that which is inauthentic.
Soham Kale