Artificial Intelligence:
Navigating the fine line between fostering AI and keeping it safe

Authors:

Jules Antoine Goffre, Chief Solutions Officer and Board Member, Symanto
Andrei Belitski, AI-based Data Products Lead, Symanto

Context

Throughout history, disruptive innovations like the discovery of fire, the agricultural revolution, the printing press, the steam engine, electricity and the internet have torn humans between awe and fear. Yet AI surpasses all previous innovations in terms of societal and economic impact as well as the magnitude of its upsides and downsides. It affects virtually all sectors, manufacturing and services, public and private; almost every human being will have used it, benefited from it, or suffered from its use.

It will be a daunting task to strike the right balance between fostering AI development, so that humankind can benefit from its advantages, and curbing its risks by imposing regulations. In the face of huge challenges, AI can help us live longer. It can represent the next wave of productivity gains to compensate for hyperinflation, protectionist commerce, decoupled supply chains, natural resource scarcity and an ageing population, not to mention climate change (through water management and intelligent farming). The list is impressive.

But at what cost does this come? The list of risks is just as long:

  • criminal or unethical use of AI
  • discrimination
  • bias intensification
  • a negative spiral of fake data generating more fake data
  • unfair competition
  • infringements of intellectual property rights
  • excessive job losses
  • rogue or hijacked machines
  • Lethal Autonomous Weapons Systems (LAWS)

And these are just the most prominent risks, the ones picked up by the media today.

In this article, we present Symanto’s view on a comprehensive agenda for AI, looking at the main aspects of its disruption. We believe that, along the lines of Influence, Relevance and Growth¹) (IRG), we will collectively have to do a much better job of educating, informing and guiding the choices of the general population and, more importantly, of policymakers, who struggle to understand what AI really is and what kind of impact it can deliver – for good or for bad. We also take the view that it is our collective duty to make AI safe, trustworthy and human-centric, as we see no going back on this disruption, arguably the single most important in the long history of our evolution.

Shedding light on recent developments in AI globally – Europe the laggard

Three models of development for AI have emerged in the three relevant geographic regions:

  • North America: a success model based on entrepreneurship, a brain drain from other regions, little to no legislation, and a handful of leading companies together with a vibrant start-up community.
  • Europe: technically competent companies and some talent, but not always ready to take entrepreneurial risks without a clear legislative environment.
  • Asia: very heterogeneous, with China, where successful tech players like Baidu, Tencent and Alibaba and state-owned companies follow a long-term master plan to dominate the field; robotics-enamored Japan; high-tech powerhouse Korea; and incredibly competent, tech-savvy India, whose leading IT companies have amazing access to a large talent pool but modest success in actually scaling AI solutions into a leadership position.

The United States’ pro-AI stance and its risk-taking companies and funds have attracted talent from around the world and created a very strong base of AI companies upstream, with quite a few successful start-ups downstream. Most remarkable are Nvidia, which has become the dominant player on the chip side and is also moving downstream into AI applications like Nvidia Canvas; the hyperscalers, with their deep pockets and dominance in cloud infrastructure and in the development of large language models; and companies like Tesla, betting the farm on visionary activities such as autonomous driving. The lack of a regulatory environment and the “Just do it” or “Yes you can” mentality has facilitated driverless taxis, cashier-less supermarkets and automated food delivery carts in California.

Europe, on the other hand, realizing that it was losing ground to the United States (with AI investment in Europe roughly three times lower than in the US back in 2018) and that it might also lose out to China, started working on its legislative agenda in 2018, leading to a white paper in February 2020²) and to AI legislation planned for the end of 2023. The motivation has been to promote AI while establishing guardrails against its misuse.

China, as usual, is taking a longer-term view of how to achieve supremacy in this space by fostering AI within its research institutes and educational programs, and even by attracting talent from abroad. According to the Carnegie Endowment for International Peace³), nine policies on AI were issued between 2017 and 2023 by five ministries. The clear goal for China is to dominate the AI landscape by 2030, a plan laid out as early as 2017. One of the stated aims of these initiatives is to revitalize economic growth.

Fast forward a few years to today: Europe is still lagging in AI development compared to other regions, especially in upstream AI. Its immigration policy for talent still cannot compete with those of the US and Canada. As a result, most leading AI companies tend to be based in the United States and China.

GenAI – all hell breaks loose

First, AI is not homogeneous. There is embedded AI (or narrow AI), focusing on areas such as healthcare, energy management or autonomous driving. Then there is “Broad AI”, or what we call Human-level AI, which attempts to mimic the human brain and is currently driven much more by web-based data, e.g. social media and news.

Narrow AI is focused on very specific use cases and embedded in specific applications. Its risks have less to do with scalability than with cybersecurity, liability for rogue systems, limitations on the use of Lethal Autonomous Weapons Systems (LAWS), and so on. What is at stake is closing the technology gap to the U.S. and allowing companies to develop, test and quickly commercialize their products, while keeping these closed or semi-closed systems secure. In embedded AI, Europe still has a competitive chance thanks to its competencies in application engineering: automotive, defense systems, MedTech, industrial automation, infrastructure equipment and power management, to name a few areas.

In contrast, the stakes in Human AI are about scale on one side (data access, processing power, size of models) and the protection of people on the other (opt-in rights, protection against automated decision-making, discrimination, copyright law, etc.). Particularly in Human AI, we see a glaring gap between Europe and its regional contenders, the United States and China. It is also the area with the biggest loopholes in terms of legislation and guardrails.

Enter GenAI. The first wave of GenAI products broadly commercialized over the last year has focused on mass B2B and B2C consumption. The simple implication is that millions of users have gained access to these tools in precisely the segment of AI (Human AI) with the biggest gap in legislation. This has had a Pandora’s box effect: the level of legislative preparedness was at best adequate to cope with dozens, hundreds or perhaps thousands of responsible companies, but not with hundreds of millions of sorcerer’s apprentices gaining access to content generation for text, computer code, images and videos.

The United States, which had little to no regulatory framework in place, was caught completely off guard, and individual states passed legislation focused on authors’ rights, consumer opt-outs, bans on automated decision-making, inappropriate or criminal use of deepfakes and even protection against job losses. The federal government, realizing it was being left behind, started a flurry of activity with soundings and hearings, executive orders and draft bills. In Europe, the relevant laws were still in preparation, so legislators were also caught by surprise: the texts had not yet been aligned with the member states and had obviously not been enacted.

Policymaking going forward

Right now, we are witnessing a frenzy of initiatives (policies, self-regulation proposals, laws, guidelines, committees) to curb the risks of AI. The chaotic situation on both sides of the Atlantic requires immediate attention, but more importantly, it is crucial for experts in the field to educate, inform and guide policymakers in their choices for regulation. A lack of coordination, or over-regulation, could put a huge brake on AI’s development. On the other hand, sluggish regulation might lead to public sentiment turning frankly negative, should the broader public lose trust in AI. At this moment, the balance is tipping towards the latter, as the media are quick to report on misuse but do little to educate people on the right and safe application of AI. Corporates, educational bodies and public administrations have also been gathering experience with Human AI, but they have not been very effective at communicating about the Safe AI agenda. Bad experiences, such as reported cases of models being trained on confidential data, data privacy breaches and discriminatory recommendations, as well as company bans on the use of GenAI, show that the current guardrails are not sufficient. The good part of this story is that there is a clear call to action, but what to do and how to solve the problem remain vague.

Priority 1) Ensuring a level playing field in Human AI development

We have seen a few companies dominate the upstream access to large models. A brute-force approach of “large is better” is disconcerting, but it has been very successful and commercially rewarding. OpenAI’s success is mostly due to large investments in data access and in transformer-based large language models with billions of parameters, combined with extensive annotation, the human labeling used to properly categorize the model’s outputs and steer it towards accurate results. The overall result is surprisingly good: many users are finding helpful answers to their questions, which is driving the uptake numbers we have all witnessed. The risk is that, with this leadership position and these adoption rates, the successful tech companies will distance themselves from the rest of the pack and create situations like Google’s in search or Microsoft’s in operating systems, but with far larger consequences given the magnitude of what is at stake.
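For readers less familiar with how such models are consumed in practice, the short sketch below illustrates, purely as an example, how a small open transformer model can be prompted to generate text. The model and library named in it are our own illustrative assumptions, not the systems discussed above.

```python
# A minimal, purely illustrative sketch of transformer-based text generation.
# The model ("gpt2") and the Hugging Face "transformers" library are our own
# illustrative choices, not tools referenced by the companies discussed above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change the way we"
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(outputs[0]["generated_text"])
```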

It is not too late. The current models give people brute-force access to insights and to automatically generated content. The new neural network models have been trained on huge amounts of data; they are good at understanding the context of language, at categorizing and summarizing in human-like language, and even at mimicking emotions, but they fall short of complex human reasoning. That reasoning ability will make up the next wave of Human AI development. Should the companies that have the commercial edge now buy up all the smaller companies developing AI that resembles human reasoning, we would put ourselves in a position of huge upstream dependency, even if downstream, application-specific use cases flourish on the back of these models.

We need to ensure that the general population and companies do not fall into the trap of having to resort to only a couple of companies for their Human AI insights. This should be seen not only from the economic point of view but also from the moral perspective, as we are already witnessing intensive discussions on what is ethical and moral, with freedom-of-speech defenders on one side and moralists wanting to ban any post based on incorrect data or on statements that might harm others on the other. Do we want to have just a few companies to choose from, or do we want to promote a plurality of views? We think most will agree that we need to go for plurality and not be held to ransom by only one version of the truth, especially when fake data can feed into more fake data. This is precisely where self-regulation has its limits. In response, we need to foster a healthy ecosystem of contributors to avoid the concentration of power and allow for healthy competition.

Another element of a level playing field is access to data and processing power. We need to make sure that we avoid vertical data integration in which only certain players have access to data or can afford to pay for it. Similarly, we need to ensure that processing power is not in the hands of only a few companies, as it might otherwise constitute a large entry barrier and suffocate competition. We also need to rethink the “bigger is better” direction that Human AI is taking and allow smarter solutions to compete with larger ones.

Priority 2) Establishing the right policies and guardrails to guide the development of Human AI

Here we see three points that all stakeholders need to work on:

  1. Putting human-centric guardrails in place and avoiding full automation when it comes to Human AI. No machine should be able to make decisions entirely on its own. Microsoft’s co-pilot concept is an example of good practice, but it does not reach far enough.
  2. Making sure that there is traceability and trust in all data processing; a minimal illustration of what content traceability could look like is sketched after this list. If we do not act quickly, we will have a situation of mistrust that creates a huge breach of confidence in machines.
  3. Placing proper oversight to ensure the safe and ethical use of AI and to prevent its rogue use. It is not only up to the creators of AI platforms to be responsible for what they offer, but also up to users to consume it correctly (for example by listing data sources or acknowledging generative AI content). Only some form of oversight at all levels will be able to monitor adherence to the guardrails, and this will have to be a mix of self-regulation and public authorities overseeing the right behavior.
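
To make the second point more concrete, here is a minimal, hypothetical sketch of what basic content traceability could look like: a cryptographic fingerprint and a source record kept alongside a piece of content so that later tampering can be detected. The field names and overall scheme are our own illustrative assumptions, not an existing standard or product.

```python
# A minimal, hypothetical sketch of content traceability: a verifiable fingerprint
# and source record that could accompany a piece of (generated or human) content.
# All field names and the overall scheme are illustrative assumptions only.
import datetime
import hashlib
import json

def provenance_record(content: str, source: str, generator: str | None = None) -> dict:
    """Build a simple provenance record to be stored or published alongside content."""
    return {
        "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),  # changes if the text changes
        "source": source,          # e.g. newsroom, author or system name
        "generator": generator,    # None for purely human-written content
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

record = provenance_record("Example article text ...", source="example-newsroom", generator="example-llm")
print(json.dumps(record, indent=2))
```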

Priority 3) Ensuring that Embedded AI solutions are resilient

The third, and by no means least important, priority is to make sure that closed, semi-closed and embedded systems perform correctly. Here we need to make sure that automated control systems, such as autonomous driving, manufacturing facilities or energy management, operate in a resilient way and are trained to deal with different situations. With more and more sensors, data interfaces, and edge and cloud processing, the risk of security breaches grows and the negative implications become greater. We will need to triple down on cybersecurity and create resilient data architectures to prevent systems from being hijacked. The danger here is that many systems operate in real time with automated decision-making. Clearly, risk management and systems oversight will need to be bolstered to prevent critical infrastructure or military systems from being remotely accessed and controlled.

Engagement of all constituents

We can only make a solemn call on all constituents – regulators, academia, media, AI experts, research institutes, lawyers, large tech and beneficiaries such as companies, consumers and activists – to collectively shape the options that exist. Everyone can contribute to making AI safe.

At Symanto, we are stepping up as well. We have been working on a product, leveraging our own developments, that helps users protect themselves from the risks of GenAI. The product:

  1. Identifies content that is created by AI, including which GenAI model was used to create it and how much “human” editing has been done (a purely illustrative sketch of one detection heuristic follows this list).
  2. Fact-checks content from multiple sources to validate the credibility and reliability of, for example, news and documents, including the use of blockchain technology.
  3. Detects IP violations, fake news, hate speech and even bias or stereotypes in AI-generated content.
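
As an indication of how such detection can work at its very simplest, the sketch below scores a text’s perplexity under a small open language model, a well-known heuristic in which machine-generated text tends to score lower. This is not Symanto’s product or method; the model, library and threshold are our own illustrative assumptions.

```python
# A purely illustrative sketch of one well-known heuristic for flagging AI-generated
# text: scoring its perplexity under an open language model (machine-generated text
# often scores lower). This is NOT Symanto's product or method; the model ("gpt2")
# and the threshold below are illustrative assumptions only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the language-model perplexity of `text` (lower = more 'machine-like')."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # cross-entropy loss over the text
    return float(torch.exp(out.loss))

THRESHOLD = 30.0  # hypothetical cut-off; real systems combine many signals
sample = "Artificial intelligence is transforming every sector of the economy."
score = perplexity(sample)
print(f"perplexity={score:.1f}",
      "-> possibly AI-generated" if score < THRESHOLD else "-> likely human-written")
```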

We are currently raising funds for this and aim to launch the first part of the product by the end of this year.

We invite all contributors to join us, or other initiatives, to help keep Human AI safe and allow for positive development with fewer downside risks.

Footnotes:

  1. Napolitano, F. (2023). Influence, Relevance and Growth for a Changing World. Milan, Italy: Bocconi University Press.
  2. European Commission (2020). White Paper on Artificial Intelligence. https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf
  3. Carnegie Endowment for International Peace (2023). China’s AI Regulations and How They Get Made. https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117

About Symanto

Symanto is an artificial intelligence company aiming to provide data-driven understanding through the combination of psychology and AI. The company was founded in 2010 and today employs a team of over 80 full-time staff, located in its headquarters in Nuremberg and in its resource hubs in Valencia and Skopje. We work in close partnership with professors and researchers at Columbia University and the Technical University of Valencia.

Through unique text-processing technology and psycholinguistic algorithms, Symanto empowers market researchers, brands, agencies and organisations to understand emotions, attitudes and motivation and drive insights-driven strategies with impact.

About the authors

Jules Goffre ([email protected]) is Chief Solutions Officer and Board Member of Symanto Research GmbH and Partner Emeritus of Kearney. He has focused on developing and implementing AI solutions for his corporate clients. He holds a B.S. and an M.S. from Columbia University in the fields of Applied Mathematics and Operations Research.

Andrei Belitski ([email protected]) leads the development of AI-based data products for strategic business insights at Symanto. He holds a Ph.D. in Computational Neuroscience from the Eberhard Karls University of Tübingen.