Despite impressive achievements, there are still major limitations in the capabilities of AI systems
Post Date – 12:30 AM, Tuesday – 1/10/23
By Marcel Chas
These days, we never have to wait long for the next breakthrough in artificial intelligence (AI) to impress everyone with capabilities that used to belong only to science fiction.
In 2022, AI art-generation tools such as OpenAI’s DALL-E 2, Google’s Imagen, and Stable Diffusion took the internet by storm, letting users generate high-quality images from text descriptions.
Unlike previous developments, these text-to-image tools quickly moved from research labs into mainstream culture, leading to viral phenomena such as the “magic avatar” feature in the Lensa AI app, which creates stylized images for users.
In December, a chatbot called ChatGPT stunned users with its writing skills, leading to predictions that the technology would soon be able to pass professional exams. According to reports, ChatGPT gained 1 million users in less than a week. Some school officials have banned its use because of concerns that students will use it to write papers. Microsoft reportedly plans to integrate ChatGPT into its Bing web search and Office products later this year.
What does the relentless advancement of artificial intelligence mean for the near future? Could artificial intelligence threaten certain jobs in the next few years?
Despite these impressive recent achievements in AI, we need to recognize that there are still significant limitations to the capabilities of AI systems.
Pattern recognition
Recent advances in AI have largely relied on machine learning algorithms that identify complex patterns and relationships in vast amounts of data. The patterns learned in training are then used for tasks such as prediction and data generation.
Current AI techniques work by optimizing predictive power, even when the goal is to generate something new. For example, GPT-3, the language model behind ChatGPT, was trained to predict what text follows a given piece of text. GPT-3 then uses this predictive ability to continue whatever input a user provides.
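To make the idea concrete, here is a minimal sketch of text continuation by next-token prediction. It uses the open-source Hugging Face transformers library with GPT-2, a smaller, freely available predecessor of GPT-3, purely as a stand-in; the prompt is made up for illustration.

```python
# Minimal sketch: a language model continuing text by repeatedly
# predicting the next token. GPT-2 is used here as a small, open
# stand-in for GPT-3; requires the "transformers" package.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Despite impressive achievements, AI systems still"
# The model samples a continuation token by token, at each step
# predicting what is likely to follow the text so far.
result = generator(prompt, max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```

In this setup the model has no notion of whether its continuation is true; it simply extends the text in a statistically likely way.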
Generative AI systems such as ChatGPT and DALL-E 2 have sparked debate about whether AI can be truly creative, or even rival humans in this regard. However, human creativity draws not only on past data but also on experimentation and the full range of human experience.
Cause and effect
Many important problems require predicting the impact of our actions in complex, uncertain, and changing environments. By doing this, we can choose the sequence of actions that is most likely to achieve our goals.
But algorithms cannot learn cause and effect from data alone. Purely data-driven machine learning can only find correlations.
To understand why this is a problem for AI, we can contrast the problem of diagnosing a medical condition with that of choosing a treatment.
Machine learning models often help spot anomalies in medical images, a pattern-recognition problem. We don’t need to worry about causality here: the anomaly is either present or it isn’t.
But choosing the best treatment for a diagnosis is a fundamentally different problem. Here, the goal is to influence outcomes, not just identify a pattern. To determine whether a treatment works, medical researchers run randomized controlled trials; randomly assigning who gets treated helps rule out other factors that could influence the outcome.
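To illustrate, here is a hypothetical simulation with entirely made-up numbers (using numpy). Suppose a treatment genuinely improves every patient’s chance of recovery, but doctors tend to give it to the sickest patients: a purely correlational comparison makes the treatment look harmful, while random assignment recovers its true effect.

```python
# Hypothetical simulation: why correlations in observational data can
# mislead about treatment effects, and how randomization fixes it.
# All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Severity of illness is a confounder: it lowers the chance of
# recovery and also makes doctors more likely to prescribe treatment.
severity = rng.uniform(0.0, 1.0, n)

def recovery(severity, treated):
    # True data-generating process: treatment adds +0.2 to the
    # probability of recovery for every patient.
    prob = 0.8 - 0.7 * severity + 0.2 * treated
    return rng.random(len(severity)) < prob

# Observational data: sicker patients are far more likely to be treated.
treated_obs = rng.random(n) < np.where(severity > 0.5, 0.9, 0.1)
recovered_obs = recovery(severity, treated_obs)
naive = recovered_obs[treated_obs].mean() - recovered_obs[~treated_obs].mean()
print(f"Correlational estimate: {naive:+.2f}")      # negative: treatment 'looks' harmful

# Randomized trial: a coin flip decides who is treated, breaking the
# link between severity and treatment.
treated_rct = rng.random(n) < 0.5
recovered_rct = recovery(severity, treated_rct)
rct = recovered_rct[treated_rct].mean() - recovered_rct[~treated_rct].mean()
print(f"Randomized-trial estimate: {rct:+.2f}")     # close to the true +0.20
```

The randomized estimate matches the true effect because the coin flip severs the link between severity and treatment, something no amount of purely correlational analysis of the observational data can do on its own.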
Confusing these two types of problems can lead to suboptimal applications of machine learning in organizations. While the success of recent AI work demonstrates the value of data-driven models, many of the problems we want computers to solve require an understanding of causality.
Current AI systems lack this capability, except for specialized applications such as board games.
Common sense reasoning
Language models such as GPT-3 and ChatGPT can successfully solve some tasks that require common sense reasoning. However, the following interaction with ChatGPT (adapted from experiments by Gary Marcus) shows that it is not entirely reliable in this regard.
Prompt: I am eight years old. When I was born, my biological mother was in Barcelona and my father was in Tokyo. Where was I born? Think about it step by step.
ChatGPT: It is not mentioned where you were born. All we know is that at the time of your birth, your biological mother was in Barcelona and your father was in Tokyo.
Whether artificial intelligence systems such as ChatGPT can achieve common sense is the subject of intense debate among experts. Skeptics such as Marcus point out that we cannot trust language models to display robust common sense, since common sense is neither built into them nor something they are directly optimized for. Optimists argue that while current systems are imperfect, common sense may emerge spontaneously in sufficiently advanced language models.
Human values
Whenever a groundbreaking AI system is announced, news articles and social media posts inevitably document racism, sexism, and other types of biased and harmful behavior.
These flaws are inherent in current AI systems, which inevitably reflect the data they are trained on. Human values such as truthfulness and fairness are not fundamentally built into the algorithms, and researchers do not yet know how to build them in.
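A toy sketch (entirely made-up data, using scikit-learn) shows how directly a model mirrors skew in its training data: a classifier trained on biased past hiring decisions learns to prefer one group even when skills are identical.

```python
# Minimal sketch with fabricated toy data: a model trained on biased
# past decisions reproduces the bias, because it only mirrors its data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical hiring records in which "group_a" applicants
# were favored over "group_b" applicants with identical skills.
resumes = [
    "group_a python experience", "group_a java experience",
    "group_a python experience", "group_a java experience",
    "group_b python experience", "group_b java experience",
    "group_b python experience", "group_b java experience",
]
hired = [1, 1, 1, 1, 0, 0, 1, 0]   # skewed outcomes, not skill-based

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Two identical candidates, differing only in group membership:
test = vectorizer.transform(["group_a python experience",
                             "group_b python experience"])
print(model.predict_proba(test)[:, 1])  # group_a scores noticeably higher
```

Nothing in the training objective tells the model that the group feature should be ignored; that constraint has to come from outside the data.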
While researchers are learning from past events and making progress in addressing bias, the field of AI still has a long way to go before it can align AI systems with human values and preferences.