
As AI-powered surveillance expands, the need for decentralized confidential computing (DeCC) is more apparent than ever. DeCC offers a decentralized approach to safeguarding sensitive information while preserving transparency and accountability.

When Oracle CTO Larry Ellison presented his vision for a global network of AI-powered surveillance aimed at ensuring citizens’ “best behavior,” critics were quick to draw parallels to George Orwell’s dystopian novel, 1984. Mass surveillance infringes on privacy, carries negative psychological effects, and deters individuals from participating in protests.

What is particularly alarming about Ellison’s vision is that AI-powered mass surveillance is no longer just a concept – it is already a reality. During the 2024 Summer Olympics, the French government enlisted four tech companies to conduct video surveillance in Paris, using AI analytics to monitor behavior and enhance security.

While France may be the first EU country to legalize AI-powered surveillance, the use of video analytics is not new. The UK began installing CCTV systems in its cities in the 1960s, and as of 2022, 78 of the 179 countries surveyed were using AI for facial recognition. Demand for this technology is expected to grow as AI advances enable more accurate and extensive analysis.

Governments have historically used technological advances to upgrade their mass surveillance systems, often outsourcing the work to private companies. In the case of the Paris Olympics, tech firms were given the opportunity to test their AI models at scale, collecting data on the location and behavior of millions of individuals.

The ethical dilemma surrounding AI surveillance comes down to the balance between privacy and public safety. Proponents argue that surveillance enhances public safety and accountability, while privacy advocates contend that it restricts individual freedom and induces anxiety. Whether tech firms should have access to public data, and how sensitive information should be managed between the various parties involved, remain open questions.

Decentralized Confidential Computing (DeCC) emerges as a potential answer to the privacy challenges AI poses. DeCC aims to eliminate single points of failure in AI training models, establishing a decentralized, trustless system for data processing and analytics. Techniques such as zero-knowledge proofs (ZKPs), fully homomorphic encryption (FHE), and multi-party computation (MPC) are being tested to keep data secure and confidential throughout that process.
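To make one of these primitives concrete, the sketch below implements a toy Schnorr-style zero-knowledge proof in Python: a prover demonstrates knowledge of a secret exponent x behind a public value y = g^x mod p without ever revealing x. This is an illustrative assumption on our part, not a protocol drawn from any specific DeCC project, and the group parameters are deliberately tiny, far too small for real security.

```python
import hashlib
import secrets

# Toy Schnorr zero-knowledge proof, made non-interactive via Fiat-Shamir.
# Parameters are deliberately tiny for readability; real systems use
# standardized groups with ~256-bit prime order.
p = 23  # safe prime: p = 2q + 1
q = 11  # prime order of the subgroup
g = 4   # generator of the order-q subgroup of Z_p*

def hash_challenge(*values: int) -> int:
    """Derive the challenge from the public transcript (Fiat-Shamir)."""
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)     # one-time nonce
    t = pow(g, r, p)             # commitment
    c = hash_challenge(g, y, t)  # challenge bound to the transcript
    s = (r + c * x) % q          # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Accept iff g^s == t * y^c (mod p), which holds iff the prover knew x."""
    c = hash_challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret_x = secrets.randbelow(q)
y, t, s = prove(secret_x)
assert verify(y, t, s)
print("statement verified; the secret itself was never disclosed")
```

The verifier learns only that the prover knows x; this is the same property that would let a DeCC network attest that a computation ran correctly without exposing the data behind it.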

MPC in particular has shown promise in enabling transparent settlement and selective disclosure, allowing encrypted data to be processed and analyzed without exposing the sensitive information underneath. By leveraging DeCC techniques, it becomes possible to run facial recognition and surveillance workloads while keeping the underlying data confidential.
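As a hedged illustration of how that can work, the Python sketch below uses additive secret sharing, a common MPC building block. The scenario (three camera operators pooling flagged-event counts), the field size, and the function names are our own illustrative assumptions, not details from any deployed system: each private count is split into random-looking shares, and the parties jointly disclose only the aggregate.

```python
import secrets

# Minimal additive secret sharing, the core primitive behind many MPC
# protocols. An illustrative sketch only: real deployments add secure
# channels and integrity checks (e.g., SPDZ-style MACs).
FIELD = 2**61 - 1  # prime modulus, chosen here purely for convenience

def share(secret: int, n: int) -> list[int]:
    """Split `secret` into n additive shares; any n-1 shares reveal nothing."""
    parts = [secrets.randbelow(FIELD) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % FIELD)
    return parts

def reconstruct(shares: list[int]) -> int:
    """Recombine shares; only the full set recovers the value."""
    return sum(shares) % FIELD

# Hypothetical scenario: three camera operators each hold a private count
# of flagged events and want only the citywide total disclosed.
private_counts = [17, 4, 29]
n = 3

sharings = [share(c, n) for c in private_counts]  # one sharing per input
# Party i receives the i-th share of every input: noise-like on its own.
held_by_party = [[s[i] for s in sharings] for i in range(n)]

# Each party sums its shares locally; combining those partial sums reveals
# only the aggregate, never any individual operator's count.
partials = [sum(row) % FIELD for row in held_by_party]
print(reconstruct(partials))  # -> 50
```

No single party ever holds enough shares to reconstruct an individual input, so selective disclosure is enforced by arithmetic rather than by trust in any one operator.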

Although decentralized confidential computing is still in its early stages, its potential to address the risks of trusted, centralized systems and to protect individual privacy is significant. As machine learning becomes integrated into ever more sectors, DeCC will play a crucial role in ensuring data protection and privacy. To prevent a dystopian future, decentralizing artificial intelligence is imperative.