OpenAI CEO Sam Altman expects AGI, or artificial general intelligence (AI that outperforms humans at most tasks), around 2027 or 2028. Elon Musk's prediction is either 2025 or 2026, and he has claimed that he was "losing sleep over the threat of AI danger." Such predictions are wrong. As the limitations of current AI become increasingly apparent, most AI researchers have come to the conclusion that simply building bigger and more powerful chatbots will not lead to AGI.
However, AI will still pose a major threat in 2025: not from artificial superintelligence, but from human misuse.
Some of this misuse will be inadvertent, as when lawyers over-rely on AI. Since the release of ChatGPT, for example, many lawyers have been sanctioned for using AI to prepare inaccurate court briefs, apparently unaware of chatbots' propensity to fabricate content. In British Columbia, lawyer Chong Ke was ordered to pay costs for opposing counsel after including fake AI-generated cases in a legal filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for submitting false citations. In Colorado, Zachariah Crabill was suspended for a year for using fictitious court cases generated by ChatGPT and blaming a "legal intern" for the errors. The list is growing rapidly.
Other misuse is intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. The images were created using Microsoft's "Designer" AI tool. Although the company had guardrails in place to prevent generating images of real people, a misspelling of Swift's name was enough to bypass them. Microsoft has since fixed this error. But Taylor Swift is only the tip of the iceberg, and non-consensual deepfakes are spreading widely, in part because open-source tools for creating them are publicly available. Legislation around the world seeks to combat deepfakes in hopes of preventing harm. Whether it will be effective remains to be seen.
In 2025, it will become even harder to distinguish what is real from what is fabricated. The fidelity of AI-generated audio, text, and images is remarkable, and video will be next. This could lead to the "liar's dividend": those in positions of power dismissing evidence of their misbehavior by claiming it is fake. In 2023, Tesla argued that a 2016 video of Elon Musk might be a deepfake, in response to allegations that the CEO had overstated the safety of Tesla Autopilot, leading to a crash. An Indian politician claimed that audio clips of him admitting to corruption in his political party were doctored (the audio in at least one clip was verified as authentic by a press outlet). And two defendants in the January 6 riots claimed that the videos they appeared in were deepfakes. Both were found guilty.
Meanwhile, companies are exploiting public confusion to sell fundamentally dubious products under the "AI" label. This can go horribly wrong when such tools are used to classify people and make consequential decisions about them. The hiring company Retorio, for example, claims that its AI predicts candidates' job suitability from video interviews, but one study found that the system can be tricked by the mere presence of glasses or by swapping a plain background for a bookshelf, showing that it relies on superficial correlations.
There are also dozens of applications in healthcare, education, finance, criminal justice, and insurance where AI is currently being used to deny people important life opportunities. In the Netherlands, the Dutch tax authority used an AI algorithm to identify child welfare fraud. It wrongly accused thousands of parents, often demanding repayment of tens of thousands of euros. In the fallout, the Prime Minister and his entire cabinet resigned.
In 2025, we expect AI threats to arise not from AI acting on its own, but from what people do with it. That includes cases where it seems to work well and is over-relied upon (lawyers using ChatGPT); when it works well and is misused (non-consensual deepfakes and the liar's dividend); and when it is simply not fit for purpose (denying people their rights). Mitigating these risks is a huge task for companies, governments, and society. It will be hard enough without being distracted by sci-fi worries.