By one major metric, artificial general intelligence is much closer than you think.
By this metric, we could approach the technological singularity by the end of this decade, if not sooner.
A translation company developed a metric, Time to Edit (TTE), to calculate the time it takes for professional human editors to fix AI-generated translations compared to human ones. The metric may help quantify how fast we are approaching the singularity.
An AI that can translate speech as well as a human could change society.
In the world of artificial intelligence, the idea of “singularity” looms large. This slippery concept describes the moment AI advances beyond human control and rapidly transforms society. The tricky thing about the AI singularity (and why the term borrows from black hole physics) is that it’s enormously difficult to predict where it begins and nearly impossible to know what lies beyond this technological “event horizon.”
However, some AI researchers are hunting for signs that the singularity is approaching, measured by AI progress toward skills and abilities comparable to a human’s.
One such metric, defined by Translated, a Rome-based translation company, is an AI’s ability to translate speech at the accuracy of a human. Language is one of the most difficult AI challenges, but a computer that could close that gap could theoretically show signs of Artificial General Intelligence (AGI).
“That’s because language is the most natural thing for humans,” Translated CEO Marco Trombetti said at a conference in Orlando, Florida, in December 2022. “Nonetheless, the data Translated collected clearly shows that machines are not that far from closing the gap.”
The company tracked its AI’s performance from 2014 to 2022 using a metric called “Time to Edit,” or TTE, which measures the time it takes professional human editors to fix AI-generated translations compared to human ones. Over that eight-year period, drawing on more than 2 billion post-edits, Translated’s AI showed a slow but undeniable improvement as it closed the gap toward human-level translation quality.
On average, it takes a human translator roughly one second to edit each word of another human translator, according to Translated. In 2015, it took professional editors approximately 3.5 seconds per word to check a machine-translated (MT) suggestion—today, that number is just 2 seconds. If the trend continues, Translated’s AI will be as good as human-produced translation by the end of the decade (or even sooner).
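The extrapolation above can be sketched with simple arithmetic. Note the assumptions: the article gives only two data points (roughly 3.5 seconds per word in 2015 and 2 seconds in 2022, against a human baseline of about 1 second), and a straight-line trend is imposed here for illustration; Translated has not published the model behind its own projection.

```python
# Naive linear extrapolation of Translated's Time to Edit (TTE) trend.
# Assumed data points, taken from the article's rounded figures:
#   2015: ~3.5 seconds per word to post-edit a machine translation
#   2022: ~2.0 seconds per word
# Human-to-human post-editing baseline: ~1.0 second per word.

def year_tte_reaches(target: float, y0: int, t0: float, y1: int, t1: float) -> float:
    """Return the (fractional) year a straight-line TTE trend hits `target`."""
    slope = (t1 - t0) / (y1 - y0)          # change in seconds per word, per year
    return y1 + (target - t1) / slope

year = year_tte_reaches(1.0, 2015, 3.5, 2022, 2.0)
print(round(year, 1))  # 2026.7
```

On this crude model the machine reaches the one-second human baseline in late 2026, consistent with the article's "end of the decade (or even sooner)" framing, though a linear fit to two points proves nothing about the real trajectory.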
“The change is so small that every single day you don’t perceive it, but when you see progress … across 10 years, that is impressive,” Trombetti said on a podcast. “This is the first time ever that someone in the field of artificial intelligence did a prediction of the speed to singularity.”
Although this is a novel approach to quantifying how close humanity is to the singularity, this definition runs into problems similar to those of identifying AGI more broadly. And while perfecting human language is certainly a frontier in AI research, that impressive skill doesn’t necessarily make a machine intelligent (not to mention that many researchers don’t even agree on what “intelligence” is).
Whether or not these hyper-accurate translators are harbingers of our technological doom, that doesn’t lessen Translated’s accomplishment. An AI capable of translating speech as well as a human could very well change society, even if the true “technological singularity” remains ever elusive.
Before discussing the odds of such a dystopian future, let’s first take a closer look at what singularity means.
Technological Singularity: What It Means and How It Became Part of the Popular Imagination
The term ‘singularity’ refers to a whole collection of concepts in science and mathematics, most of which make sense only in the right context. In the natural and social sciences, singularity describes points in dynamic systems where minor changes can have outsized effects.
Let’s first talk about the technological singularity, the original or umbrella phrase, before we get into the more recent obsession with AI singularity.
The term ‘singularity’ originated in physics but is now commonly used in technology. The phrase appeared in this sense, possibly for the first time, in 1915 as part of Albert Einstein’s theory of general relativity.
In general relativity, a singularity is the point of infinite density and gravity at the heart of a black hole, from which nothing, not even light, can escape. It is a point beyond which our existing understanding of physics fails to describe reality.
Vernor Vinge, a celebrated science fiction writer and mathematics professor, had the gift of mixing fact with fiction, a quality omnipresent around the concept of singularity. It’s not surprising, then, that the concept made its way into his writing: in 1983, he used the term ‘technological singularity’ to describe a hypothetical future in which technology becomes so advanced that it goes beyond human knowledge and control. Vinge further popularized the term in his 1993 essay “The Coming Technological Singularity,” predicting that the singularity would become a reality by around 2030.
What is AI Singularity?
The AI singularity is the hypothetical point at which artificial intelligence becomes more intelligent than humans. In simpler terms, if machines become smarter than people, a new level of intelligence will be reached that humans can’t achieve; technology would then develop exponentially, too fast for humans to catch up. Some experts believe that at some point AI will be able to improve itself repeatedly, leading to rapid technological advances that will be impossible for humans to fathom or control. Such an event is expected to cause significant changes in society, the economy, and technology.
AI singularity can be viewed from various angles, each with advantages and disadvantages. Some experts consider singularity a genuine and present danger, while others dismiss it as pure science fiction. What such a singularity would mean for humanity is another topic of heated debate. Some think it would create a utopia, while others see it as doomsday.
How Far Away is the Singularity of AI?
We cannot deny that significant progress has happened in the field of AI, so much so that machine learning algorithms can now teach themselves. While we have yet to see a fully autonomous AI surpass human intelligence, the advent of generative AI has made many experts uneasy.
While futurist and computer scientist Ray Kurzweil has predicted that the singularity will arrive around 2045, others have speculated that the tipping point will occur far sooner. Given that Sam Altman, co-founder of OpenAI, the company that launched ChatGPT, admits he feels “a little scared” of his own creation, the chance of AI becoming a Frankenstein’s monster we cannot control doesn’t seem all that improbable.
However, possibly the human race’s only safety net is the complexity of human intelligence and its ‘stream of consciousness’, the ability to move seamlessly from one thought to another by association.
Artificial General Intelligence (AGI)
The term “artificial general intelligence” (AGI) refers to a still-hypothetical category of intelligent machines. If created, an AGI would be capable of learning to perform any mental work a human or animal can. Another definition holds that AGI is an autonomous system that can outperform humans at most economically valuable tasks. AI firms such as OpenAI, DeepMind, and Anthropic have made the creation of AGI a major focus, and AGI features frequently in both science fiction and futurology.
If this stage is reached, such programs would become superintelligent machines with more intelligence than humans, and people would no longer have power over them.
Those in Favor of AI Singularity
We usually speak of the AI singularity in hushed tones and with somber faces, as if it were the end of the world. But is the AI singularity an entirely negative possibility? The honest answer, in my opinion, is ‘no’. Some positive developments might grow out of it.
For instance, the possibility of gaining new insights into the cosmos is a point in favor of the singularity. The speed at which AI could analyze information would allow it to solve problems that have stumped humans for generations, with profound implications for physics, biology, and cosmology. Historian Yuval Noah Harari introduced the concept of ‘superhumans’ in his book Homo Deus. Let’s just say we may need the AI singularity to evolve from Homo sapiens to Homo deus!
Those Not in Favor of AI Singularity
There are, however, numerous counterarguments against the singularity. One major worry is that AI could eventually reach a level of intelligence beyond human control. The loss of individuality is another potential outcome. And if AI ever surpasses human intelligence, it may one day replace humankind, resulting in a future where humans are no longer the dominant species on the planet and are enslaved by machines in a very Transformers-esque way!
Ultimately, the AI singularity is a complicated and unpredictable phenomenon. It’s impossible to anticipate what it will bring to humanity, and opinions differ widely. It’s crucial to consider the concept from all these angles to be ready for the future.
Artificial Superintelligence
Artificial superintelligence is a hypothetical machine intelligence smarter than any human brain. According to the theory, significant advances in genetics, nanotechnology, automation, and robotics will set the stage for the singularity in the first half of the 21st century.
Surely, the Second Coming is at Hand
Many experts say that the AI singularity has already begun, while the people who benefit most from AI development tend to downplay the chance that we will soon hit such a point, saying that AI was made only to help humankind and make us more productive. The contradiction is that we want AI machines to have traits that aren’t part of human nature, like unlimited memory, fast thinking, and decision-making without feelings, yet we also want to control the outcome of our most unpredictable invention! Humans, what can be said of our endless wants?
What we arguably need is a Second Coming of sorts, and that requires political gumption. It is time for political action on a global scale: a worldwide treaty on AI outlining basic ethical principles, a global body for technological oversight that includes both governments that produce AI and those that do not, and a codified set of laws governing AI across borders.
What we should fear most is not AI or singularity but human frailty. The most significant risk, in this regard, is that humans will only realize AI singularity has arrived once robots eliminate human input from their learning processes. Such a state of AI singularity will be permanent once computers understand what we so often tend to forget: making mistakes is part of being human.