Facebook was the worst thing to happen to user privacy over the last two decades. Artificial intelligence could be the worst thing to happen in the days ahead.
In an increasingly AI-driven world, blockchain could play a critical role in preventing the sins committed by apps like Facebook from becoming widespread and normalized.
Artificial intelligence platforms such as ChatGPT and Google’s Bard have entered the mainstream and have already been accused of inflaming the political divide with their biases. As foretold in popular films such as The Terminator, The Matrix and most recently, Mission: Impossible — Dead Reckoning Part One, it’s already become evident that AI is a wild animal we’ll likely struggle to tame.
From democracy-killing disinformation campaigns and killer drones to the total destruction of individual privacy, AI can potentially transform the global economy and likely civilization itself. In May 2023, global tech leaders penned an open letter that made headlines, warning that the dangers of AI technology may be on par with nuclear weapons.
One of the most significant fears surrounding AI is the lack of transparency in its training and programming, particularly in deep learning models whose inner workings can be difficult to interpret. Because sensitive data is used to train AI models, the models themselves can be manipulated if that data is compromised.
In the years ahead, blockchain will be widely utilized alongside AI to enhance the transparency, accountability and auditability of its decision-making processes.
For instance, when training an AI model using data stored on a blockchain, the data’s provenance and integrity can be ensured, preventing unauthorized modifications. Stakeholders can track and verify the decision-making process by recording the model’s training parameters, updates and validation results on the blockchain.
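To make the idea concrete, here is a minimal sketch, in Python, of what such a provenance record could look like. The `ProvenanceLedger` class and its fields are illustrative assumptions, not any real blockchain's API: it is simply an append-only, hash-chained log in which a dataset's fingerprint and a training run's parameters are recorded, so that any later tampering breaks the chain.

```python
import hashlib
import json


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


class ProvenanceLedger:
    """Minimal append-only, hash-chained log (a stand-in for a blockchain)."""

    def __init__(self):
        self.blocks = []

    def record(self, payload: dict) -> dict:
        # Each block's hash covers the previous block's hash plus its payload,
        # chaining the entries together.
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps(payload, sort_keys=True)
        block = {"prev": prev, "payload": payload,
                 "hash": sha256((prev + body).encode())}
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        # Recompute every hash; any modified payload invalidates the chain.
        prev = "0" * 64
        for block in self.blocks:
            body = json.dumps(block["payload"], sort_keys=True)
            if block["prev"] != prev or block["hash"] != sha256((prev + body).encode()):
                return False
            prev = block["hash"]
        return True


# Record the training dataset's fingerprint and the run's parameters.
ledger = ProvenanceLedger()
dataset = b"...training examples..."
ledger.record({"event": "dataset", "sha256": sha256(dataset)})
ledger.record({"event": "training_run", "epochs": 10, "learning_rate": 0.001})

print(ledger.verify())  # True: the recorded history is intact

# Tampering with a recorded entry breaks verification.
ledger.blocks[0]["payload"]["sha256"] = "0" * 64
print(ledger.verify())  # False
```

A real deployment would anchor these hashes on an actual distributed ledger rather than an in-memory list, but the auditing principle is the same: stakeholders can recompute the hashes and detect any unauthorized modification to the training record.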
With this use case, blockchain will play a leading role in preventing the unintentional misuse of AI. But what about the intentional? That’s a much more dangerous scenario, which, unfortunately, we’ll likely face in the coming years.
Even without AI, centralized Big Tech has historically profited by selling the power to manipulate individuals, and even democratic processes, to the highest bidder, as Facebook's Cambridge Analytica scandal made famous. In 2014, the "Thisisyourdigitallife" app offered to pay users for personality tests, which required permission to access their Facebook profiles and those of their friends. Essentially, Facebook allowed Cambridge Analytica to spy on users without permission.
The result? Two historic, mass-targeted psychological campaigns that significantly influenced the outcomes of both the 2016 United States presidential election and the United Kingdom's European Union membership referendum. Has Meta (previously Facebook) learned from its mistakes? It doesn't look like it.
Enter AI. If the after-effects of the Cambridge Analytica scandal warranted concern, can we even begin to comprehend the impact of a marriage between this kind of invasive surveillance and the godlike intelligence of AI?
The unsurprising remedy here is blockchain, but the solution isn’t as straightforward.
One of the main dangers of AI rests in the data it can collect and then weaponize. Regarding social media, blockchain technology can potentially enhance data privacy and control, which could help mitigate Big Tech’s data harvesting practices. However, it’s unlikely to “stop” Big Tech from taking sensitive data.
To truly safeguard against the intentional dangers of AI and ward off future Cambridge Analytica-like scenarios, decentralized, preferably blockchain-based, social media platforms are required. By design, they reduce the concentration of user data in one central entity, minimizing the potential for mass surveillance and AI disinformation campaigns.
Put simply, through blockchain technology, we already have the tools needed to safeguard our independence from AI at both the individual and national levels.
Shortly after signing the open letter to governments on the dangers of AI in May, OpenAI CEO Sam Altman published a blog post proposing several strategies for responsible management of powerful AI systems. They involved collaboration among the major AI developers, greater technical study of large language models and establishing a global organization for AI safety.
While these measures are a good start, they fail to address the systems that make us vulnerable to AI in the first place: centralized Web2 entities such as Meta. To truly safeguard against AI, more development is urgently required in blockchain-based cybersecurity and in a genuinely competitive ecosystem of decentralized social media apps.
This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.