With companies such as Meta, Apple, Microsoft, and Epic Games all working to bring 3D virtual worlds to the public, security issues are more relevant now than ever. In light of recent data scandals at big tech companies (e.g., the $5 billion FTC fine Meta, then Facebook, paid for failing to stop Cambridge Analytica from harvesting user data), it is clear that ethical guidelines around sensitive personal data are not always adhered to by these corporations. The amount of personal data required to create the metaverse’s curated domains will be enormous, so a policy framework for data security in the metaverse is clearly required. We’re heading into unknown territory regarding data privacy and protection in the virtual world, and there is much that platform providers, developers, and users need to be aware of to avoid legal and ethical trouble.

The Metaverse Will Require New Privacy Protection Regulations

The success of a metaverse platform depends on how well it can balance different users’ privacy laws while remaining accessible to people from all corners of the globe. The European Union’s General Data Protection Regulation (GDPR) is a comprehensive data privacy and protection law that safeguards the personal data of individuals in the European Economic Area (EEA) and applies to any organization processing that data, wherever the organization is based. The GDPR has stringent rules, including special provisions for the usage, processing, and transfer of EU residents’ personal data. This means metaverse platform providers and third parties may have to comply with varying standards before handling a user’s sensitive information—such as identifying the purpose for collecting data, the kind of information collected, and its intended use. Personal data protection would ideally have separate provisions for collecting, using, and sharing a child’s information, another set of rules depending on the user’s location, and different clauses again for special categories of personal data (like biometric data) used sensitively. The GDPR stipulates strict parameters for the collection, use, and transfer of “special categories of personal data”: “Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data […] shall be prohibited.”

Contradictory Laws of Data Protection

Will the metaverse’s facade be shattered by differing criteria and the complex provisions of international data and privacy laws? There isn’t yet a complete legal architecture that allows users worldwide to enjoy their rights in a global metaverse. To keep user information secure, lawmakers will need to create privacy protection regulations specifically for the metaverse. These rules should anticipate issues that could arise as more people begin using the virtual world, and they should be fluid enough to adapt as the technology changes; well-designed regulations can head off both new and existing threats. If someone exploited vulnerabilities in unprotected data—such as what can be inferred from users’ digital avatars, social interactions, or physiological and biometric data (for example, eye movements)—it could cause real-world damage, including virtual identity theft, identity fraud, and phishing attacks. What can platform providers and third-party services do to reassure consumers and address the worries of wary users over poor data handling?

A fundamental rule followed by many international data privacy laws is that platform providers must collect data in a way that makes users clearly aware of what is collected, who collects it, how it will be used, and why. Stringent GDPR rules hold platform providers responsible if data is collected, stored, or transferred to third parties without explicit consent. With the GDPR and other regulatory authorities shaping digital governance, metaverse platform developers must keep this in mind as they explore Web 3.0 frontier territories. Aside from establishing a clear user policy, new consent models that adequately cover the metaverse’s novel data-gathering methods will undoubtedly be needed. Most worldwide regulatory bodies also require platform providers to use transparent terms of service that seek voluntary consent. This means metaverse providers must obtain permission that data subjects give freely, with no negative consequences for those who decline. European data protection guidance on consent defines what qualifies as voluntary permission: “Any element of pressure or influence which could affect the outcome of that choice renders the consent invalid.” To give users control over their data—especially given VR wearables’ ability to collect granular data such as analyzed gaze—consent mechanisms within the metaverse should be set up accordingly. The perils of holding a user’s sensitive personal information underline the need for metaverse platforms and third-party partners to develop standards that enable data sovereignty, for example by incorporating features that allow users to restrict, modify, or delete personal data at will. Under GDPR Article 7(3), the data subject has the right to withdraw consent at any time, and withdrawing consent must be as easy as giving it. Consent must be obtained each time data subjects enter virtual spaces, when a new type of data is collected, or if third parties plan to use their data for other purposes. Users usually agree to allow access to their data within certain parameters, such as context, environment, or time period. Requesting consent at each relevant access point helps ensure consent mechanisms are reliable, cover all bases, and can adapt to the metaverse’s live state.
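As a sketch of how such scoped, revocable consent might be tracked in practice, the snippet below models each grant as a record bound to a data category, purpose, context, and expiry, with withdrawal as easy as granting. The class and field names (`ConsentRegistry`, `ConsentRecord`) are illustrative assumptions, not any platform’s real API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ConsentRecord:
    """One scoped grant: what data, for which purpose, in which context, until when."""
    data_category: str   # e.g. "gaze_tracking"
    purpose: str         # e.g. "avatar_rendering"
    context: str         # e.g. "world:plaza"
    expires_at: datetime
    withdrawn: bool = False

class ConsentRegistry:
    def __init__(self):
        self._records: list[ConsentRecord] = []

    def grant(self, data_category, purpose, context, ttl_hours=1):
        # Consent is time-boxed: it expires on its own if never renewed.
        rec = ConsentRecord(data_category, purpose, context,
                            datetime.utcnow() + timedelta(hours=ttl_hours))
        self._records.append(rec)
        return rec

    def withdraw(self, data_category):
        # Withdrawal must be as easy as granting: one call revokes all matching grants.
        for rec in self._records:
            if rec.data_category == data_category:
                rec.withdrawn = True

    def is_permitted(self, data_category, purpose, context):
        # A use is permitted only if a live, unwithdrawn grant covers
        # this exact category, purpose, and context.
        now = datetime.utcnow()
        return any(
            rec.data_category == data_category
            and rec.purpose == purpose
            and rec.context == context
            and not rec.withdrawn
            and rec.expires_at > now
            for rec in self._records
        )
```

Because permission is keyed on purpose and context, data consented to for avatar rendering in one world cannot silently be reused for ad targeting in another.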

On the other hand, more comprehensive consent methods could also create obstacles that interfere with a user’s online experience. If safeguards like refreshed consent become routine, will users constantly have to deal with popup permission requests? Companies that want to avoid breaking data protection laws requiring informed consent may need to ask users for permission each time they enter a new virtual store or space. If those companies’ internal privacy practices don’t let them market and promote their products proactively while staying compliant, users might become disgruntled at the constant interruption of their otherwise seamless virtual experience. Finally, when users are targeted with advertisements based on their data, they must be informed and their consent obtained. If platform providers or third parties use personal data (e.g., sociodemographic or physiological information) to place tailored ads in front of metaverse netizens, users must be made aware that they are being shown a paid promotion rather than an organic part of the world. This will help prevent covert worldbuilding from shaping how users perceive a space, allowing them to see at each stage whether their personal information is being used.

Potential Dangers of Combining Biometric Data With AI

While gathering private data may be necessary to generate a genuinely individualized experience, automated systems powered by artificial intelligence and machine learning will give metaverse providers access to user information on an unprecedented scale. Extended reality (XR) wearables, AI, machine learning, and other behavioral learning technologies will allow businesses to collect data at a far more granular level than previously possible. This new category of data collection will enable platform providers and third parties to track personal signals such as voice cadence, eye movements, facial expressions, and even vital signs from heart and respiratory monitors. The data collection practices of today’s social media platforms are modest compared to what is being developed, and there is a worry that these new technologies will not have strong enough safeguards to prevent misuse of the collected information. We don’t yet fully understand how significant the future impact could be. For example, Clearview AI, a facial recognition company, broke several international privacy laws by scraping billions of people’s images from the internet without notice or permission. Clearview’s data collection practices landed it in serious hot water: the United Kingdom’s data protection authority fined it roughly £7.5 million, and Australia and Canada also took action against the US company. Once regulators examined how this data was being used, it became evident that it had been collected unethically. The GDPR’s Article 9 sets out clear rules for “special categories of personal data,” which include biometric information, and strictly prohibits processing biometric data for the purpose of uniquely identifying a person unless a narrow exception, such as explicit consent, applies. Violations carry penalties and fines.
For the metaverse to function properly, we need data privacy and protection laws like the EU’s GDPR that stipulate rules surrounding the collection, use, and potential misuse of biometric data. However, very few global jurisdictions currently have such comprehensive laws in place, so there are measures platform providers and third parties should take themselves to safeguard users’ data. Users should be informed, in an obvious manner, whenever their physiological and biometric data is collected—primarily via artificial intelligence—and specific consent mechanisms, such as those outlined in the GDPR, should be implemented.
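One way a platform could operationalize Article 9’s gating of special-category data is to refuse such data at the pipeline boundary unless explicit consent accompanies it. The sketch below is a minimal illustration under that assumption; the category list, function names, and exception type are all hypothetical.

```python
# GDPR Art. 9 enumerates "special categories" that need explicit consent
# (or another narrow legal basis) before processing. A pipeline can refuse
# such data up front unless an explicit-consent flag travels with it.
SPECIAL_CATEGORIES = {
    "biometric", "genetic", "health", "racial_or_ethnic_origin",
    "political_opinions", "religious_beliefs", "trade_union_membership",
}

class SpecialCategoryError(Exception):
    """Raised when special-category data arrives without explicit consent."""

def process_sample(category: str, payload: bytes, explicit_consent: bool = False) -> int:
    # Gate check happens before any processing touches the payload.
    if category in SPECIAL_CATEGORIES and not explicit_consent:
        raise SpecialCategoryError(
            f"'{category}' is special-category data; explicit consent required"
        )
    return len(payload)  # placeholder for the real processing pipeline
```

Placing the check at the ingestion boundary means a downstream bug cannot accidentally process biometric data that was never consented to.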

AI-Driven Data Influencing Consumer Behavior

The metaverse will host real people alongside NPCs, virtual agents, chatbots, and AI-driven virtual humans. When AI-driven virtual humans engage with users and access their sensitive personal data (with or without permission), they could use it to influence consumer behavior. To ensure a ’trusting’ virtual agent doesn’t funnel a user’s data to an indifferent corporation, we need compliance mechanisms rooted in informed consent. If platforms and third-party companies do not follow the regulations concerning the collection, storage, and transfer of a user’s physical and biometric information, AI-controlled virtual assistants could be used to manipulate users in increasingly frequent and subtle ways. Furthermore, given the security risks of storing sensitive information, measures must be taken to let users delete their data once the need for it has passed. Platform providers and third parties could then use data to deliver personalized experiences and afterward delete or quarantine it to protect and anonymize users. New laws will dictate what platform providers and other companies may do with users’ private information in the virtual world, and the GDPR will set a precedent for other global data privacy regimes to mirror. It’s worth remembering that AI, at its core, has proven a valuable and successful instrument in the automation and innovation of entire sectors, enabling data uniformity through tried-and-true statistical pattern recognition. It is in the ethical problems AI raises, and the advantage it offers today’s fast-paced business practices, that intended use can deviate into unethical territory.
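The delete-and-quarantine idea above can be sketched as a retention store that anonymizes values once their purpose has lapsed and supports outright erasure on request. The class name, salted-hash scheme, and retention policy below are illustrative assumptions, not a prescribed design.

```python
import hashlib
from datetime import datetime, timedelta

class RetentionStore:
    """Keeps personal data only while its purpose is live, then anonymizes it.

    purge_expired() replaces each expired raw value with a salted hash, so
    aggregate record counts survive but the user can no longer be
    re-identified from the stored value. erase() is the right-to-erasure path.
    """
    def __init__(self, retention: timedelta):
        self.retention = retention
        self._rows = {}  # user_id -> (value, stored_at)

    def put(self, user_id: str, value: str):
        self._rows[user_id] = (value, datetime.utcnow())

    def purge_expired(self, salt: str = "rotate-me"):
        now = datetime.utcnow()
        for user_id, (value, stored_at) in list(self._rows.items()):
            if now - stored_at >= self.retention:
                digest = hashlib.sha256((salt + value).encode()).hexdigest()
                self._rows[user_id] = (digest, stored_at)

    def erase(self, user_id: str):
        # On user request, drop the row entirely rather than anonymize it.
        self._rows.pop(user_id, None)
```

Running `purge_expired` on a schedule gives the “use, then quarantine” lifecycle the paragraph describes: personalized experiences are served while consent and purpose are live, and only non-identifying residue remains afterward.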

The Challenge of Enforcing Third-Party Accountability

The metaverse will be where people have unique experiences and access content suited to their interests. As a result, brands and other businesses will want to capitalize on this data to better sell and advertise their products and services. Data use at this level will require the retention, merging, and transmission of enormous data archives from numerous sources. Third-party marketing operations such as those in the AdTech ecosystem already gather and use information that infers a user’s age, gender, geolocation, and preferences. Secure data sharing still has to be established through appropriate privacy standards and accountability systems. AdTech uses software to streamline and automate the buying, selling, delivery, and measurement of digital advertising campaigns, governing an extensive roster of contracts with many unique requirements, specifications, and clauses. Since the metaverse will operate at a similar scale, we can expect its data-sharing apparatus to be modeled after AdTech’s ecosystem. An obvious application of such big data sets will be training increasingly automated artificial intelligence. Unsurprisingly, new regulatory requirements will need to cover the real-time meta-environments of tomorrow’s world. Users in a borderless metaverse will encounter an overabundance of advertisements and paid promotions, and targeting them appropriately will require an unrivaled amount of personal data. What if businesses operating within the metaverse are unaware that they’re advertising to EU residents and monitoring their data? In a borderless environment, how would they even know they’ve broken the GDPR?
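One partial answer to the jurisdiction question is a pre-flight check before any targeting decision: resolve the region tied to the user’s session, load that region’s rule set, and refuse opt-in-required targeting without consent. The abbreviated country table and rule fields below are illustrative only, not a legal determination of GDPR applicability.

```python
# Abbreviated EEA table for the sketch; a real system would resolve the
# full jurisdiction list and keep it under legal review.
EEA_COUNTRIES = {"DE", "FR", "IE", "NL", "SE"}

def rules_for(country_code: str) -> dict:
    """Return the (hypothetical) rule set governing ad targeting in a region."""
    if country_code in EEA_COUNTRIES:
        return {"regime": "GDPR", "requires_opt_in": True,
                "special_categories_blocked": True}
    return {"regime": "default", "requires_opt_in": False,
            "special_categories_blocked": False}

def may_target(country_code: str, has_opt_in: bool) -> bool:
    # Targeting proceeds only if the region permits it or the user opted in.
    rules = rules_for(country_code)
    return has_opt_in or not rules["requires_opt_in"]
```

Even a crude gate like this shifts the default from “target unless forbidden” to “target only where demonstrably permitted,” which is the posture a borderless metaverse would need.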

Blockchain to Facilitate Contracts, Interoperability, and Dataflow?

So, the question is: Will advertisers and third parties be able to enter the kind of contract-driven AdTech model that can be signed and bargained among advertisers and publishers, and ensure the agreements they join have adequate privacy and responsibility standards at their core? How will they operationalize a potentially infinite number of agreements in the metaverse, and guarantee compliance with an equally daunting string of external data and privacy regulations? The AdTech system has clear structure and governance via a plethora of automated communications. Many different worlds may exist in the metaverse, and achieving interoperability will demand seamless dataflow between third-party services and platform providers; an interconnected metaverse is impossible if you can’t move your digital assets from one world to the next. To allow the safe use, processing, and transfer of large data sets between parties, developers and brands must agree to multilateral data-sharing agreements. Blockchain technology lets smart contracts execute verified permissions and uphold contractual obligations on an immutable ledger, so the opportunity for data exploitation and non-compliance can be mitigated through tamper-evident transactions. Meanwhile, the traditional third-party cookie is slated to be phased out, which will end advertisers’ and publishers’ current data collection, use, and transfer methods. Another potential solution is designating data controllers for each metaverse who are responsible for collecting, using, and processing user data; data responsibility would then fall to centralized administrators rather than relying on numerous contracts.
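The tamper-evidence property the paragraph attributes to blockchain ledgers can be illustrated without a full chain: a hash-linked, append-only log of data-sharing grants, in which altering any past grant breaks every later hash. This is a toy sketch of the mechanism, not a production smart-contract system, and all names in it are hypothetical.

```python
import hashlib
import json

class PermissionLedger:
    """Append-only, hash-chained log of data-sharing grants.

    Each entry embeds the hash of its predecessor, so rewriting history
    invalidates every later hash -- the property blockchain ledgers rely on
    to make recorded permissions tamper-evident.
    """
    def __init__(self):
        self._chain = []

    def append(self, grantor: str, grantee: str, scope: str) -> str:
        prev = self._chain[-1]["hash"] if self._chain else "genesis"
        body = {"grantor": grantor, "grantee": grantee,
                "scope": scope, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self._chain.append({**body, "hash": digest})
        return digest

    def verify(self) -> bool:
        # Recompute every hash; any edited entry or broken link fails.
        prev = "genesis"
        for entry in self._chain:
            body = {k: entry[k] for k in ("grantor", "grantee", "scope", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

A real deployment would distribute the chain across parties so no single administrator could rewrite it, but even this single-node version shows why recorded permissions become auditable rather than merely asserted.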

Users Are Responsible for Their Own Security

The metaverse aspires to copy many aspects of the real world by placing users inside an online experience rather than having them view it from the outside. Data integrity and interoperability hold this endeavor together, so platform providers must ensure these vulnerabilities are mitigated, not intensified, for tomorrow’s virtual realms to succeed. In this article, we’ve attempted to provide answers for platform builders and third parties who must deal with compliance and ensure secure data privacy and protection in a live, interoperable world such as the metaverse. Many questions about the metaverse remain open; we have raised some of the most pressing concerns dominating the current discourse. The metaverse has the potential to transform society’s relationship with technology entirely, but only time will tell whether developers can deliver a metaverse that prioritizes confidentiality, transparency, and integrity over profit. There will be plenty of opportunity for big tech to continue down its current path of unethical data usage in the metaverse, as it has frequently done before. Netizens who venture into the metaverse have an opportunity to shape the unknown digital terrain of Web 3.0 and usher in a new era of digital democracy. To realize this goal, users must actively appreciate the value of their data and become aware of their control over its collection, use, and transfer.

This content is accurate and true to the best of the author’s knowledge and is not meant to substitute for formal and individualized advice from a qualified professional. © 2022 Ashley Mangtani
