A recent interview with DeepMind's CEO suggests that the future holds unprecedented developments in AI that may surpass our current understanding. AI is poised to gain deeper insight into individual data than even governmental bodies hold, and the influence of AI entities in the global economy is expected to grow significantly. This raises critical questions not only about ethics and regulation but also about legitimacy, global governance, the role of government and, ultimately, how we determine what is morally right or wrong. To what extent will governments be able to influence and shape their role in the future economy?
INTRODUCTION
AI is reshaping our world, from healthcare to finance and education to security, and, through the rise of AGI (Artificial General Intelligence), may in future reach into human cognition and interests. As the world navigates this transformative landscape, the diverse ecosystem bears a growing collective responsibility to help guide policies that harness the potential of AI while safeguarding humanity and the planet.
Collaboration efforts in the UAE have attracted significant investment from major players such as G42 and Microsoft, particularly in emerging technologies, including data centres and advances in AI. The synergy between OpenAI, Microsoft, and other entities in the region signals strong momentum toward driving innovative, collaborative solutions in the UAE.
The Saudi Data and AI Authority was recently established to drive AI development, contributing to a wider GCC and Asian landscape of innovation. Noteworthy collaborations, such as the ASEAN initiative between Singapore, Malaysia, and Thailand, focus on enhancing AI skills through public-private partnerships, preparing the workforce for the future economy.
Asia is beginning to drive regional initiatives to develop critical skill sets. The recent AI retreat highlighted a strong focus on ecosystem partnering: workshops covered a wide range of topics, with particular emphasis on infrastructure and policy, and ministerial presence was notable throughout. Most significant were the announcements leading up to and following the retreat, including the Dubai government's appointment of twenty-two chief AI officers across various government sectors.
There have recently been announcements involving major players such as Apple and OpenAI. With the growing importance of data sets stored on devices and in centralized entities, data privacy and security measures will face greater scrutiny. The sourcing of chipsets for AI's high-compute needs, driven by the expected rise of AGI (Artificial General Intelligence), also carries geopolitical implications. As AI continues to advance, the availability and reliability of these chipsets will become increasingly critical, and regions must consider their strategic positioning accordingly. Various regions around the world are already investing, and collaborating, to secure access to the necessary technology.
There have been noticeable shifts in the distribution of technology, research, and intellectual property, not only in this region but also in the United States. China has also made technological advances that are expected to continue at a rapid pace. Additionally, individuals who previously worked with major tech companies have voiced concerns about responsible AI. These concerns deserve attention, especially when those developing the algorithms or devices are also the ones interpreting the data, which raises questions about responsibility, ownership, and governance in this sphere.
A group called "A Right to Warn about Artificial Intelligence", which includes former and current employees of leading AI firms, has been established to raise awareness about AI and advanced AI. It is developing guiding core principles on the enforcement of agreements that restrict criticism, raising concerns about the verifiability and anonymity of data used in algorithmic processes, calling for governance in this area, and emphasizing the importance of fostering a culture of open criticism.
INITIAL THOUGHTS OF PARTICIPANTS
- Exploring the impact of legislation on AI is crucial. How does law influence the development and regulation of AI? Furthermore, what changes can we expect in our society due to the rise of AI technology? Public and private sector collaboration is vital in fostering a shared vision for ensuring security and safety in AI, placing emphasis on these factors over corporate interests like growth and profit.
- The topic of big tech is a major focus as regulators are holding us responsible for any AI applications in our bank. When we adopt big tech solutions, how can we collaborate with technology companies to share accountability effectively?
- What are the frameworks in the current unregulated landscape? Collaboratively establishing frameworks that support the legal rights for consumers to own their data is a vital aspect that we need to address.
- One of the key considerations is how consumers can monetize their data and regulate the information that is being shared. Another topic to explore is data sustainability and the frameworks needed to manage and utilize data effectively.
- One key consideration revolves around the safety of children and youth. The impact of proper management is incredibly positive, but conversely, the repercussions of negligence can be equally severe. It is essential to acknowledge the responsibility we bear in this regard.
- An important issue is the disparity between the private and public sectors. Typically, the private sector outpaces the public sector in terms of speed and innovation across industries. However, in certain cases, allowing this gap can lead to significant challenges. Individuals may prioritize personal gain over collective welfare, posing a threat to the overall welfare of society.
- A topic of interest is how policy can keep pace with innovation. At present, policies are created based on existing knowledge, but future developments are unpredictable. How can we effectively address this uncertainty? This is a critical issue that needs to be addressed.
- Mapping the utilization of AI by key sectors of the economy and the private sector poses a challenge. Friction between nation states and platforms, as well as the scale and sustainability of platforms, further complicate this analysis.
- The future of AI will largely be shaped by consumer needs and behaviours. Currently, the landscape is unregulated, especially in terms of how individuals on platforms like YouTube and TikTok utilize AI technologies. Neglecting the impact of consumer perceptions on AI could have repercussions on enterprise operations. If the public becomes wary or adversely affected by AI, it will inevitably affect all enterprise endeavours.
- Understanding the capabilities and limitations of AI is essential for engaging in discussions about governance. Without a shared set of values and goals, global systems may struggle to function effectively. We need to shift the focus of conversations from solely discussing risks to also considering the trade-offs involved in utilizing AI technologies.
- AI is already being used in intergenerational manipulation, which will impact children. There is a need to explore the intersection of AI with biology, including DNA data and brain interfaces. It is essential to consider implications beyond AI alone in order to stay ahead in policy-making.
ETHICS & RESPONSIBLE AI
There is an emergence of responsible AI initiatives by major tech companies, such as the Microsoft Responsible AI framework, which sets a valuable standard for ethical AI practices. This framework could serve as a model for comprehensive guidelines that should be implemented universally. Ethical AI encompasses issues such as algorithmic bias, transparency, and accountability in AI deployment. How can policies ensure responsible AI deployment that incorporates ethical practices and protects consumers and citizens?
There should be a global baseline no-go zone. When discussions start to revolve around this restricted area, there may be various cultural differences that need to be considered. What is deemed acceptable in one country may be deemed unacceptable in another. This is where arbitrage comes into play. This ties into the concept of a risk-based approach and trade-offs. We may be willing to forego potential benefits to mitigate risks and maintain control in certain areas. It is essential to establish these boundaries to prevent any potential negative outcomes.
AI regulation should focus on controlling the specific use cases of AI, rather than the technology itself. For example, regulations should target areas like missile development to prevent misuse of AI. It is important to address how we can discourage harmful use of AI while promoting beneficial applications, such as using AI for simulations. Additionally, the massive data centres needed to train AI models can have negative impacts on the environment, posing potential risks to climate change. You cannot govern AI fully, but you can govern the use cases.
NIST (the US National Institute of Standards and Technology) has recently developed a framework to better manage the risks that artificial intelligence (AI) poses to individuals, organizations, and society. The framework aims to improve trust in AI systems by incorporating safety, security, privacy, and fairness as core considerations.
- EU AI ACT
The effectiveness of self-regulation is questionable: it has been observed how Microsoft was able to circumvent OpenAI's internal self-regulatory body responsible for evaluating GPT-4, despite assurances that it would not do so. Looking ahead, the EU AI Act, coming into effect in August, will establish a framework for identifying high-risk AI systems and will impose obligations on AI providers, including conducting fundamental rights impact assessments, implementing technical and operational safeguards required by law, ensuring human oversight, and maintaining records and monitoring compliance.
The EU AI Act, much like the GDPR’s impact on data regulation, will extend its influence on a global scale. Its significance will be impossible to overlook, as it introduces substantial fines based on a percentage of global revenue. Companies will undoubtedly take heed when faced with the realization that non-compliance carries significant financial consequences.
- Jobs & Skills
In the realm of job cuts, Klarna, a prominent fintech company based in Europe, recently announced a hiring freeze and expected reduction of approximately 20% of its workforce, citing AI efficiencies as a significant factor. This trend may become a common rationale for downsizing in the future.
The primary concern at hand is the repurposing of skills. If fintechs and other technology companies across industries begin citing AI as a justification for downsizing, the issue deserves careful examination from a policy standpoint.
- Misinformation & Deepfakes to Data Identification
With the increased prevalence of fake news and deepfakes, it is crucial to address the spread of misinformation.
Just recently, in Australia, a student was arrested and placed under investigation for taking images of schoolgirls in his year and using them to create deepfake pornographic content. The incident has garnered significant attention from Australian and global media outlets, including CNN. The sharing and manipulation of such images raise important questions about accountability, access to AI tools capable of creating misinformation, and the need for considerations around ethical use. These issues underscore the importance of educating and guiding younger generations in the responsible use of technology.
The advancement of technology has made deepfake videos more accessible to the public. It is becoming increasingly important for consumers to be aware that not all videos they see are authentic. It would be beneficial for technology to develop a system that can quickly recognize the digital signatures of manipulated videos, potentially alerting users if a video appears to be AI-generated. Although it may be challenging to regulate how technology is used, implementing detection mechanisms could help mitigate the spread of misleading content.
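The detection idea above can be illustrated with a toy sketch of signature-based content provenance: a publisher signs a hash of the original video bytes, and a player later checks that the bytes it received still match. This is a simplified, hypothetical scheme using a shared secret; real provenance standards such as C2PA use public-key certificates and metadata embedded in the media file.

```python
import hashlib
import hmac

# Hypothetical shared signing key, for illustration only; a real
# deployment would use asymmetric keys issued by a trusted authority.
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Return a provenance signature over a hash of the original content."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Check whether received content still matches its original signature."""
    expected = sign_content(content)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature)

original = b"...original video bytes..."
sig = sign_content(original)

print(verify_content(original, sig))        # untouched content verifies
print(verify_content(b"manipulated", sig))  # altered content fails
```

A player built on this pattern could warn users whenever a video arrives without a valid signature, which is the "alert if a video appears manipulated" behaviour the paragraph above envisions.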
In 2019, Sam Altman quietly co-founded Worldcoin, a company developing a physical identification system for online users. The technology aims to verify authenticity, and thereby help detect deepfake content, through blockchain-based identity, providing individuals with a secure and verifiable online presence. Online identity verification has clearly become crucial in today's digital landscape, which raises the question of the role of governments in this process. The European Union is poised to excel in this area, particularly in verifying online identities, giving it a unique advantage over other states.
There is a need for comprehensive data-mapping capabilities, facilitated within each system itself. Current legislation does not address tokenization or integration across data categories such as health records, ownership information, or personal images. The UAE, given its size, is ideally positioned to implement tokenized IDs and assets.
THE DATA DILEMMA
The UAE should implement specific guidelines for data scraping used in LLM training to ensure that data is only accessed with consent.
- Data Ownership & Legacy
Who is responsible for the interactions created over time? Will this information be passed down to future generations? Existing legislation on data usage leaves vast unregulated territory. As we continue to generate content, it is crucial to address scenarios in which engines access data posthumously, with consent from relatives. How do we examine these facets of human existence from a policy standpoint?
- Medical/Health & Financial Data
It is important to discuss potential restrictions and determine a baseline for acceptable practices.
The idea of combining mental health and physical health data with credit risk assessment is concerning and unethical.
There should be clear guidelines in place to prevent certain use cases that could result in unethical practices. Before considering any potential benefits, it is important to establish a moral baseline to prevent misuse of data.
- Data Controllers vs. Data Processors
For a data controller, the responsibility lies in the legal obligations associated with data liability. Legislation differentiates between data controllers, who have control and influence over data-management decisions, and data processors, who simply follow instructions without discretion. The distinction matters because some businesses have historically claimed to be processors in order to avoid certain obligations.
In specific situations, tech companies have often claimed to be simply acting as a platform and not taking on any responsibility. However, certain regulatory bodies, such as the DIFC, have started to challenge this concept. They are now asserting that companies utilizing AI or autonomous systems can be considered controllers and can be held accountable. This shift in perspective is both intriguing and necessary in the evolving landscape of technology regulation. It is likely that we will witness more instances of this kind of re-evaluation in the future.
- Data Bias & Representation
Several key areas arise when discussing bias, privacy, user data, safety, and transparency on digital platforms. One is representation: ensuring diversity and inclusivity in content and imagery. Existing data predominantly consists of English content from the Global North, creating bias. Governments, such as the UAE government, can help mitigate this by promoting the digitization of cultural archives, preserving cultural heritage in local languages and making it accessible to a wider audience, with funding drawn from collective efforts. Digitizing cultural archives into digital libraries is crucial for preserving and sharing important cultural assets.
PERSONALIZED ALGORITHMS
In the future, the creation and ownership of algorithms may become increasingly important. Your digital legacy could be tied to the algorithms you create, potentially outliving you. This raises questions about legacy and what happens to your creations after you are gone. Data sustainability in the context of AI is also an important aspect to consider when discussing these themes.
An AI algorithm representing someone's digital presence raises concerns about control, legacy management, and potential revenue streams, now and in the future. In the person's absence, who will oversee and benefit from the algorithm? There are also concerns about possible hacking incidents and the need for robust rights, liability, and insurance policies to mitigate risks effectively. The expected evolution toward personalized algorithms that represent and work for you in the digital world will have implications for digital revenue streams, cross-border taxation, and other local and global personal income assessments. It will also enable economic uplift and equalization for people outside their home countries and when offline, marking a turning point in economic inclusion through personal algorithmic representation.
You can design your own GPTs, a task individuals can complete independently. It is envisioned that people will soon be able to create a digital representation of themselves, essentially a personalized file serving as an extension of their cognitive capabilities.
It is interesting to see developments in the tech industry, such as Nvidia's advances and Microsoft's venture into building its own chips; many companies are exploring chip production and other innovative technologies. Corporate responsibility in the realm of AI is crucial, and while some may question the ethics of corporations, responsible AI practices are evidently beneficial not only for society but also for business.
Collaboration with external stakeholders is key in developing a framework that is widely accepted and inclusive. By working with industry and government partners, companies can ensure that AI initiatives are transparent and trustworthy.
When you transfer a large, medium, or small language model to the endpoint, you are essentially moving a virtual replica of yourself, a digital twin, into an android application. Nvidia has developed an Omniverse that closely mirrors real-world physics. At the push of a button, you could be transformed into a robot, your essence effectively downloaded into a mechanical form. This goes beyond operating digitally in another location; it entails physically embodying an android and engaging with others in a lifelike manner, complete with your unique voice, emotions, and thought processes, while managing an audience that has become tangible.
Trademarks operate on a first come, first served basis, meaning individuals can trademark almost anything they desire. Ultimately, however, this comes down to individual rights: the idea of control and the increasing prevalence of multiple personas. As businesses tailor their products and services to specific individuals, they engage in profiling, which creates a need for individuals to have agency in managing and refining their own personal brand.
The emergence of new marketing and profiling laws highlights the growing shift towards empowering individuals in controlling their online persona.
LAST THOUGHTS & RECOMMENDATIONS
- One framework that is crucial is the development of academic research on metrics pertaining to fairness, particularly in unsupervised learning and generative AI. While many discussions revolve around bias and fairness, the focus on this specific area is lacking. The academic community in the UAE can take the lead in advancing this critical area of study.
- Universities should make it mandatory for all graduating students to take a class on ethics and responsible AI. Even if students may not have in-depth knowledge on topics such as unsupervised learning, engaging in dialogue and being aware of these issues is essential for creating a more informed and responsible society.
- It is imperative to prevent the centralization of power in artificial intelligence and to oppose any form of power consolidation in a particular platform, country, nation state, or institution. Policies should prioritize equal access to economic opportunities for all individuals and communities, to ensure that the potential societal impact of AI, and potentially AGI in the future, benefits humanity rather than being controlled and manipulated by a select few.
- The UAE lacks a comprehensive youth safety policy, particularly concerning online social media, in contrast to the US, which has stricter regulations. Arab countries have yet to enforce similar policies, leaving a gap in safeguarding youth. It is imperative that we act, implementing a youth safety policy developed in collaboration with academia and industry so that innovation is not hindered. Companies, including the one I work for, should not fear these policies, as they provide necessary guardrails. Collaboration, proper parental consent, and a focus on child and youth safety policies related to AI and social media are essential.
- One significant area of discussion revolves around the positive impact of AI technology. While the discussion encompasses different perspectives, the benefits of AI are undeniable. Yet in underprivileged communities and developing nations, the reach of AI remains limited, and many individuals never have the opportunity to experience its potential benefits. AI literacy and digital technology inclusion are necessary if the future economy is to be AI-inclusive across the world. We need an AI Inclusion Policy and an AI Inclusion Fund.
- From a policy perspective, it is crucial to consider the long-term implications of our current approach. The increasing rate at which jobs are becoming obsolete each year necessitates a re-evaluation of our strategies for preparing future generations. Without proper intervention, highly educated individuals entering the workforce may find themselves without viable career prospects. It is imperative that we prioritize the preservation and fair compensation of jobs to ensure economic stability and opportunities for all. Merely upskilling individuals is not enough; we must actively work to safeguard existing employment opportunities. This proactive approach is essential to address the widening income disparity in society and secure a sustainable future for all individuals in the workforce.