From AI ethics, collaborations and investments to technology leaps, R&D and how every business will change with generative AI, the diverse panel of experts shared their insights into all things AI.
Expert Panel:
◾ Oskar Mencer, CEO, Maxeler Technologies, a Groq company. Oskar is one of the world’s leading technology visionaries.
◾ Fahed Bizzari, Founder & Coach, The ChatGPT Accelerator; Founder & Public Speaker, Bellamy Alden Institute for AI Transformation.
◾ Paul Masitow, Terra VC. Paul leads the next-gen computing, artificial intelligence, computational biology & cybersecurity verticals.
◾ Moderator: Nader AlBastaki, Managing Director, Dubai Future District Fund; Co-Founder & Strategic Advisor, trippal UAE.
Nader AlBastaki: Where is AI today, and where is it heading?
Oskar Mencer: The exploration of the brain’s functionality began in the 1950s, when a psychologist hypothesized that the brain operates like a system that processes inputs and generates outputs, with multiple such systems operating in parallel. Despite this discovery, the practical implications were unclear. In the 1990s, significant progress was made in developing the mathematical concepts behind this theory, yet application remained elusive. Recently, there has been a breakthrough in understanding this concept, enabling communication with the brain-like system. This newfound ability has sparked further investigation, and talking to today’s AIs is akin to nurturing a one-year-old child, in anticipation of the exciting journey ahead.
Fahed Bizzari: When viewed through the lens of AI application, the progress may resemble a high-speed race car. However, many business leaders remain tentative and are holding back, opting to take a cautious “wait and see” approach. If implementing AI is not prioritized soon, the task of adapting to a rapidly evolving future will only become more difficult.
Paul Masitow: There are approximately 100 trillion synapses in the human brain, making it one of the most successful neural networks in history. In comparison, the GPT-4 neural network has about 1.2 trillion parameters, the rough equivalent of synapses, showing significant progress but still falling far short of the human brain’s capacity. Once such a model reaches parity with the human brain in terms of synapse count, its intelligence will be on par with that of a human; the main difference between the two will be how they are utilized. This is offered as an interesting observation rather than a settled fact, and the emphasis is on practical application rather than theoretical knowledge.
My perspective leans towards the technical infrastructure of technology, including hardware frameworks and databases, rather than just the fundamental concepts. Today, we will discuss various levels of technology, spanning from basic advancements to more complex systems. As we move towards Artificial General Intelligence (AGI), there will be a shift in focus to more advanced technologies.
Nader AlBastaki: What are your views on the hesitations and the opportunities, and how do we bridge the gap?
Paul Masitow: One major concern surrounding data privacy is the potential misuse of uploaded information, despite assurances in signed documents that it will not be used for training purposes. It is important to recognize that anything posted online may be utilized for training other models.
There have been significant scandals involving Samsung and other companies, where private information was leaked using ChatGPT. Some employees inadvertently input sensitive information into ChatGPT without considering its potential consequences. This highlights a major concern regarding data privacy that has yet to be addressed. It is imperative that this issue be resolved promptly to ensure the protection of sensitive information in the future.
The bias of models is heavily influenced by the dataset they are trained on. This poses a particular challenge for the Arabic world, as the data in the Arabic language differs significantly from that in English-speaking countries. Therefore, training models in Arabic presents unique engineering tasks. These models often encounter various issues due to the inherent bias in the dataset.
Fahed Bizzari: Many concerns surrounding technology stem from individuals allowing it to take the lead. It is crucial to view ourselves as the pilots with artificial intelligence serving as our copilots. Failure to adopt this mindset may result in AI taking control, leaving us as mere passengers. It is important to understand this distinction to maintain control and direction over technology.
If the alignment is off, it may cause the vehicle to veer in a different direction than intended. Currently, we have control over the steering, and we hope to maintain that control in the future. In the corporate world, hesitation is a common human emotion that can impact decision-making. Through my observations, I have noticed that the key difference between leaders who are proactive and those who are hesitant is their experience with pivotal breakthrough moments.
Oskar Mencer: I do not observe any hesitation; on the contrary, many are actively engaged in AI-related activities because of the advances in LLMs. This newfound capability of machines to comprehend and generate language as we speak has brought about an amount of change that is beyond imagination.
Nader AlBastaki: Is there an opportunity cost of lagging behind?
Oskar Mencer: How can we each truly comprehend the situation at hand? Who will be the one to interpret it accurately? As the saying goes, “the harder I work, the luckier I get.” This acknowledges the uncertainty of the future but emphasizes the importance of putting in effort. The allure lies in the unpredictability of what lies ahead.
Nader AlBastaki: You have successfully created the fastest computer in the world. Who has been using this advanced technology?
Oskar Mencer: Creating the most high-speed computers globally is certainly no simple task. In fact, it requires a great deal of effort and expertise. We have adopted a similar sales approach to those in the field of heart replacements.
Amid the financial crisis, we sold the computer to JP Morgan. It was a critical moment as they were in urgent need. The situation appeared dire, with the potential for catastrophic consequences for global finance. JP Morgan held a significant portion of credit derivatives worldwide. Faced with the urgency of needing a risk report to conduct trades throughout the day, they realized they required a high-speed computer to meet their needs. As a result, we delivered the world’s fastest financial derivatives risk computer for use in their trading operations.
Nader AlBastaki: Where are we today with ChatGPT? What are the opportunities we can leverage from this technology?
Fahed Bizzari: I am now able to complete tasks that previously took me a month in just a day. Recently, I gathered testimonials from students who have spent four weeks with me, asking them about the impact on their productivity. All of them reported a significant increase, with some even mentioning achieving tasks that were previously considered impossible.
How can you quantify the increase in productivity from that? Essentially, I see it as the distinction between intelligence and superintelligence. When discussing superintelligence in the field of AI, most people envision the AI achieving superhuman levels of intelligence down the line. This is the goal we are striving for.
At present, a person with AI capabilities can enhance their intelligence to a super level. This external augmentation allows them to perform highly intelligent tasks with superior speed, efficiency, and thoroughness. This exoskeleton-like enhancement extends their cognitive abilities, enabling them to achieve remarkable feats.
A crucial distinction in AI development lies between replacing humanity and augmenting humanity. Regrettably, capitalism tends to favour the former approach, as it offers greater operational efficiency and cost savings.
In the current phase of transitioning towards increased artificial intelligence replacing human labour, there is an inevitable shift taking place. While the timeline for full automation is uncertain, meaningful human work could eventually be automated. However, there is still time before superintelligent systems are fully realized. In this interim period, individuals can enhance their capabilities and intelligence. Technologies like GPT and other language models represent efforts to simulate the human mind, with GPT showing promising results.
AI is not inherently intelligent, but its potential under proper guidance is truly impressive. However, to date, we have only scratched the surface of what AI can offer, both on an individual and corporate level. This lack of exploration has hindered us from fully addressing concerns about job displacement. It is possible that those who are hesitant to fully embrace AI may be the ones who thrive in the long run. In contrast, those who rely too heavily on AI risk a bleak future if technology fails. It is essential to approach AI cautiously and consider all possible outcomes.
It is inevitable that we all will eventually follow this path. Those who choose to avoid it will face the most challenges. In my view, those who begin to embrace and accept this direction will thrive the most in the long run.
Nader AlBastaki: Who will be most affected by AI?
Fahed Bizzari: It seems that robotics is focusing on the labour-intensive tasks, while language models are tackling more intellectual work. This is because knowledge work heavily relies on language. We interact with words constantly in our daily tasks, from compiling reports to formulating thoughts about various subjects.
The language models can gradually replace human translators in certain tasks. The speed and efficiency at which these models operate are remarkable. For example, consider a translation company with 50 translators. With the introduction of AI, basic-level translators may no longer be needed as AI can rapidly complete translation tasks. However, a higher-level individual is now required to oversee and guide the AI in its translation work.
One must then ascend to the next level, where the AI’s output is edited by a human editor. The issue is that, at each level, when the editor provides feedback to the AI, the AI assimilates the edits and becomes increasingly intelligent. Consequently, there is a significant demand for all individuals to enhance their skills. It is reminiscent of the 90s, when we were all acquainting ourselves with IT literacy. In the past, accountants and writers relied on manual methods, but now there is an imperative to embrace computers in various capacities.
It is essential for everyone to continuously enhance their skills. One might assume that individuals in leadership positions, such as fund managers or group chairpersons, are exempt from personal upskilling, but in truth upskilling is a necessary requirement for all, without exception.
Nader AlBastaki: What are the opportunities and risks for investors?
Paul Masitow: I currently have over 1,000 companies on my tracking list. The issue is that GPT-5 may wipe out a significant portion of these products through no fault of the founders, which is frustrating and makes assessments unreliable. What is needed is a more sophisticated and intelligent system that not only provides the correct answer but also ensures its accuracy. When utilizing other AI systems, a lack of response can be disheartening and create doubts about their effectiveness. Therefore, it is crucial to have a reliable and precise solution in place.
My concern is that GPT-5 could disrupt half of the market, impacting the companies I am currently assessing. Furthermore, some companies are seeking funding without a clear business model in place, disregarding revenue generation. In addition, there are companies that lack clarity on their products and their intended market. Ultimately, there appears to be a significant focus on headcount rather than on product quality.
Two notable companies exemplifying successful ventures in the field are Adept and Inflection. Adept, distinguished for its expertise in autonomous agents and browser automation, recently achieved a valuation of $1 billion following a $400 million fundraising round. Notably, the founder, who previously served as VP of engineering at OpenAI, retains a 60% stake in the company. In a similar vein, Inflection, a new venture spearheaded by Reid Hoffman, secured an impressive $1.3 billion in funding, with notable participation from Nvidia in the investment round.
Many sound investors contributed to the $1.3 billion round, with roughly half of the funds flowing straight back to Nvidia. This proves to be a lucrative arrangement for Nvidia, as investing in the company yields substantial returns. As an external investor, I am struck by the trend of companies raising over $100 million and allocating most of the funds towards hardware. It will be interesting to see how these investments pan out.
Could they emerge as a new force in the field of AI? A French team has been displaying exceptional performance recently, even surpassing OpenAI. By not acting, you risk failure, or your model may not achieve the same level of success as competing models. Another important factor is the speed at which open-source models are gaining popularity, in contrast to closed-source models like OpenAI’s. While OpenAI is currently closed off, open-source models benefit from the collaborative efforts of the community to improve their capabilities. Utilizing open-source models strategically can now provide solutions for various tasks. Furthermore, integrating different open-source models can lead to even more effective outcomes.
By leveraging a multi-agent system and incorporating frameworks such as LangChain and other techniques, it is possible to create a scalable solution using open-source technologies. The low barrier to entry allows anyone to utilize open-source tools to build sophisticated systems. The primary differentiator among competitors will be the quality and uniqueness of their data sets. Access to proprietary data sets presents a significant advantage in this regard.
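As a rough illustration of the kind of composition described above, the sketch below chains two open-source models into one pipeline using the Hugging Face transformers library. The specific model names, the summarize-then-translate design, and the English-to-Arabic direction are illustrative assumptions, not something the panel prescribed.

```python
# A minimal sketch of composing open-source models, assuming the Hugging Face
# `transformers` package is installed; the model names below are illustrative.
from transformers import pipeline

# Stage 1: condense a long document with an open-source summarization model.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Stage 2: translate the condensed text with a separate open-source model.
translator = pipeline("translation_en_to_ar", model="Helsinki-NLP/opus-mt-en-ar")

def summarize_then_translate(document: str) -> str:
    """Chain two open-source models: summarize in English, then translate to Arabic."""
    summary = summarizer(document, max_length=120, min_length=20)[0]["summary_text"]
    return translator(summary)[0]["translation_text"]

if __name__ == "__main__":
    report = "Open-source language models are improving quickly across many tasks ..."
    print(summarize_then_translate(report))
```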
The more unique data you have access to, the better the results will be. In the B2B sector, we are entering a new phase of process automation, akin to RPA 3.0. Automating rule-based, formalized processes has already become commonplace.
RPA can already automate complex, rule-based tasks within a company, such as those found in the industrial sector. The new opportunity is to automate tasks that are challenging to formalize and that require input from multiple individuals and data sources. Building smart systems, like multi-agent systems, allows agents with different roles to collaborate on these difficult tasks. Where there are unique challenges, leveraging tools like the GPT-5 API can provide solutions. This advancement represents a new level of automation capability within the organization.
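To make the multi-agent idea more concrete, here is a minimal sketch of agents with different roles (planner, worker, reviewer) cooperating on a loosely defined task. The call_llm helper is a hypothetical stand-in for whatever model endpoint an organization actually uses, whether an open-source model or an API such as the GPT-5 API mentioned above; the role names and prompts are assumptions for illustration.

```python
# Minimal multi-agent sketch: three role-specialized agents cooperating on one task.
# `call_llm` is a hypothetical placeholder for any model endpoint (open-source or API).
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call an actual model endpoint.
    return f"[model output for: {prompt[:60]}...]"

@dataclass
class Agent:
    role: str          # e.g. "planner", "worker", "reviewer"
    instructions: str  # role-specific guidance prepended to every prompt

    def act(self, task: str) -> str:
        return call_llm(f"You are the {self.role}. {self.instructions}\nTask: {task}")

def run_pipeline(task: str) -> str:
    planner = Agent("planner", "Break the task into concrete steps.")
    worker = Agent("worker", "Carry out the steps and draft a result.")
    reviewer = Agent("reviewer", "Check the draft against the task and correct errors.")

    plan = planner.act(task)
    draft = worker.act(f"{task}\nPlan: {plan}")
    return reviewer.act(f"{task}\nDraft: {draft}")

if __name__ == "__main__":
    print(run_pipeline("Compile a risk summary from last quarter's incident reports."))
```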
Nader AlBastaki: How do you work with universities to turn research products and solutions into commercially viable opportunities?
Paul Masitow: The process is ongoing. Initially, we are collaborating with top academic institutions worldwide. Some laboratories are being considered for specific projects, focusing on a group of postdoctoral researchers. We are identifying their current research areas, as many PhDs who previously worked in robotics are now transitioning to LLMs. Additionally, we are incorporating process automation into our initiatives. A notable example is a respected researcher at Berkeley, originally from Carnegie Mellon, who previously specialized in robotics but is now also delving into LLMs. The challenge we face now is how to commercialize these advancements.
How can research be monetized in a business venture? Whether attending top-tier universities like Khalifa University and Mohammed bin Zayed University or more cost-effective options like Egyptian universities, the key lies in developing a process to successfully commercialize the research findings. In collaboration with scientific and engineering teams, the focus should not only be on advancing knowledge but also on efficiently applying it to produce marketable products.
We are encouraging our engineers and scientists to proactively seek out clients and companies to address their needs. If a client requires a specific solution such as automation of cybersecurity, and you identify similar tasks among multiple clients, that could signal a potential product opportunity. Conduct thorough research before deciding whether to focus on providing consultancy services rather than developing a product, as the latter can involve greater risk. Remain client-focused and tailor your solutions to meet their specific requirements to maximize success. This approach aims to cultivate a more client-centric mindset among our team members.
Audience: How do you mitigate AI-generated misinformation risks?
Fahed Bizzari: The default behaviour of AI and the capacity to guide it play a pivotal role in managing misinformation. Often, individuals encounter misinformation when interacting with AI in its default mode.
When working with clients, it is crucial to understand the nature of language models. These models appear intelligent because they recognize patterns, like how we operate. Sometimes we may say something profound and surprise ourselves with our own words. It is important to recognize that we, like language models, rely on patterns from our education and experiences. If you view ChatGPT as an AGI, you will likely be disappointed. Similarly, expecting language models to be the pinnacle of AI we are working towards will also lead to disappointment.
The effectiveness of ChatGPT depends on the hands it is in. If you are a marketer focusing on marketing tasks, ChatGPT can yield impressive results for you. However, if you attempt to use ChatGPT for legal purposes in place of a lawyer, you may encounter disappointment. ChatGPT may provide inaccurate information, such as referencing non-existent cases, or may not meet the standards of a diligent legal professional. It is important to use ChatGPT within the appropriate scope of its capabilities to ensure optimal outcomes.
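One concrete way to move a model out of the "default mode" described above is to constrain it with explicit instructions about sources and uncertainty before it answers. The sketch below uses the OpenAI Python client to do this; the model name, the wording of the guardrail instructions, and the context-plus-question framing are assumptions for illustration, not a recommendation from the panel.

```python
# A minimal sketch of steering a model away from its default behaviour,
# assuming the `openai` Python package; model name and instructions are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUARDRAIL_INSTRUCTIONS = (
    "Answer only from the provided context. "
    "If the context does not contain the answer, say you do not know. "
    "Never invent citations, case names, or statistics."
)

def guarded_answer(question: str, context: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": GUARDRAIL_INSTRUCTIONS},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,  # reduce variability for factual tasks
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(guarded_answer("Which cases support this claim?", "No relevant cases provided."))
```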
Audience: What are your thoughts on the fact that a large percentage of AI investment is in hardware?
Paul Masitow: I used to review over 150 different companies in the hardware industry. This was the first area I focused on systematically, due to the realization that Moore’s Law cannot continue indefinitely. As transistor sizes reach their limits, companies must explore other architectures or chipsets to improve energy efficiency. Currently, GPUs dominate data centre operations, but inference is shifting towards ASICs for faster processing speeds. Novel architectures are also being considered to ensure low latency in real-time inference applications.
Other innovations, such as in-memory computing, are sought after to address potential bottlenecks in traditional architectures, particularly around memory. In-memory computing is increasingly being adopted to mitigate these challenges. Additionally, there is growing interest in analogue computing, despite its current lack of commercial viability. Companies are exploring analogue and hybrid digital-analogue computing for their potential to significantly improve energy efficiency compared to leading ASICs. These advancements align with the goal of approaching the energy efficiency of the human brain. Efforts to mimic the human brain at the software and architecture levels are underway.
Furthermore, there are a few projects that involve placing live neurons, sourced from mouse brains, onto an electrical platform to send and receive electrical signals. The aim is to help the neurons understand and react to external stimuli. For example, live neurons have been trained to play a game of Pong, where they interact with a small ball. If errors occur, the system can be reset to default settings. This development signifies a shift from the GPU era to something new and intriguing.
Oskar Mencer: The company I am now a part of, Groq, recently announced groundbreaking results in AI and LLM inference capabilities. Independent studies have shown that we are five to ten times faster, more energy efficient, and superior in AI inference for specific models. This achievement is the result of our unique strategy of optimizing how far data travels within the computer, rather than focusing solely on computational power as traditional designs do, which are limited by Moore’s Law. Our approach challenges traditional assumptions and highlights the importance of data movement optimization. The key point of interest lies not in the result, but in the innovative methodology that led us to this point.
The speed of LLMs plays a crucial role in the potential for building upon it. A faster infrastructure not only improves energy efficiency and reduces costs, but also allows for more extensive enhancements. This ability to differentiate products built on top of LLMs is key for staying competitive in the market. While discussing potential applications is important, investing in the infrastructure driving these innovations is a more secure investment strategy, as predicting successful ventures can be challenging.
Paul Masitow: We should plan an additional panel once quantum advantage is achieved, as current quantum algorithms have already shown potential in solving fundamental tasks.
Oskar Mencer: We can run quantum algorithm simulations on Groq computers as well. This infrastructure can be utilized in various applications. It is indeed a versatile tool that can be applied to a wide range of tasks.
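For readers unfamiliar with what "running a quantum algorithm simulation" involves on classical hardware, the sketch below simulates a tiny two-qubit circuit (a Hadamard gate followed by a CNOT, producing a Bell state) with plain NumPy matrix-vector products. It is a generic illustration of why this workload suits accelerators built for dense linear algebra, not Groq-specific code.

```python
# Generic sketch of a classical quantum-circuit simulation: the workload reduces
# to dense matrix-vector products, the kind of operation AI accelerators excel at.
import numpy as np

# Single-qubit Hadamard gate and the 2x2 identity.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

# Two-qubit CNOT gate (control = qubit 0, target = qubit 1).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H to qubit 0, then CNOT -> Bell state (|00> + |11>) / sqrt(2).
state = np.array([1, 0, 0, 0], dtype=complex)
state = np.kron(H, I) @ state
state = CNOT @ state

# Measurement probabilities for the basis states |00>, |01>, |10>, |11>.
print(np.round(np.abs(state) ** 2, 3))  # expected: [0.5, 0.0, 0.0, 0.5]
```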
Audience: What are the change management principles that will help guide us to prepare for the future?
Oskar Mencer: Change is inevitable. The key consideration is identifying opportunities for personal growth and development. How can we make the most of each new day that we wake up to? What potential lies ahead for tomorrow? Engaging in prediction can be a stimulating exercise. It is up to everyone to make their own predictions and assess the outcomes. By refining our predictive skills, we can strive for improvement and enjoy the process as a challenge.