Saturday, May 3, 2025

This startup uses AI agents to combat malicious ads and imitator accounts

For Kevin Tian, the best solution to AI-powered fraud may be the most obvious one: pit AI against AI.

In 2022, he founded social engineering defense startup Doppel to do exactly that. And as cybercriminals use ever more advanced AI models to turbocharge the variety of their attacks, Doppel's AI systems have helped combat them at scale, faster and more easily than before.

The startup has set up AI agents (software programmed to carry out specific tasks) to scour the internet, the dark web and social media for potentially fraudulent activity and to flag everything from copycat websites and fake user accounts to malicious ads on Google, Instagram and YouTube. Doppel's agents examine 100 million alerts about such phishing threats every day, filter real threats from benign ones and report them to the platforms so they can be removed. Tian says they do this with roughly 90% accuracy and are continually improving.

“If threat actors can use AI to scale up these attacks for just a few cents on the dollar, we have to make sure that we can also handle this volume on our side,” Tian told Forbes.
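The article doesn't describe how this triage is built, but a minimal sketch (written in Python purely for illustration) helps picture the shape of such a pipeline: score each incoming alert, keep only those that look malicious, and queue them for takedown reporting. The data model, scoring stub and 0.8 threshold below are hypothetical, not Doppel's actual system.

```python
# Illustrative sketch only: an alert-triage loop of the kind described above.
# The data model, scoring heuristic and threshold are hypothetical.
from dataclasses import dataclass


@dataclass
class Alert:
    source: str   # e.g. "google_ads", "instagram", "dark_web"
    url: str      # where the suspected copycat site or malicious ad lives
    brand: str    # the customer being impersonated


def score(alert: Alert) -> float:
    """Return a 0-1 maliciousness score; in practice an AI model would decide."""
    suspicious_terms = ("login", "download", "wallet", "verify")
    return 0.9 if any(term in alert.url for term in suspicious_terms) else 0.1


def triage(alerts: list[Alert], threshold: float = 0.8) -> list[Alert]:
    """Keep only alerts scored above the threshold for takedown reporting."""
    return [a for a in alerts if score(a) >= threshold]


if __name__ == "__main__":
    queue = [
        Alert("google_ads", "https://example-clone.test/download", "Notion"),
        Alert("instagram", "https://fan-page.test/photos", "Notion"),
    ]
    for hit in triage(queue):
        print(f"Report for takedown: {hit.url} (seen via {hit.source})")
```

The point of the sketch is the division of labor: the expensive judgment call (the score) is the part Doppel says it has handed to AI models, while the surrounding loop is what lets it run over millions of alerts a day.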

On Friday, Doppel announced $35 million in new funding in a round led by Bessemer Venture Partners. With $55.5 million in total venture backing, it is now valued at $205 million. It's another milestone for the company Tian cofounded in 2022 with CTO Rahul Madduluri, whom he had met at Uber while working on the company's flying car moonshot, Uber Elevate. At first, the duo set out to combat NFT-related fraud and help crypto companies track and report counterfeits. In 2023, Doppel expanded its customer base to other industries.

In its early days, Doppel used contractors in countries such as the Philippines and India to sort through thousands of potential threats and decide which ones were malicious. In September 2024, however, it found that OpenAI's new “reasoning” models were able to perform the same tasks. It replaced those contract workers with a cohort of new AI agents and used them to automate 30% of its security processes. Tian claims the AI agents were able to identify more threats than people could. It was transformative for the company's business.

“It's just not scalable to have a human team reviewing these millions of alerts every day, and we now have an AI agent that can make these decisions,” Tian said.

In 2023, productivity software company Notion grappled with a rush of attacks, including malicious ads targeting its customers and social media efforts to impersonate its CEO, Ivan Zhao. In one scenario, “[fraudsters] would take the download page for our application, scrape it, make a clone of it, make a site that looks similar. When you visit it, it looks like you're visiting Notion,” said Daniel Pyykonen, head of platform security at Notion.

So it turned to Doppel for help. The company's AI agents quickly worked through those campaigns and took thousands of them down. Eventually, Notion began to see fewer and fewer attacks. “We became a very expensive target,” Pyykonen said.

Doppel's secret sauce is a “threat graph,” a map of the relationships and interactions between social engineering campaigns: phone numbers, IP addresses, advertising accounts. It lets the company better track malicious hackers, use AI as a productivity tool and protect companies from future attacks. “We want to give the good guys the same view so they can play less whack-a-mole and take down threat actors' entire infrastructure,” Tian said.
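The article doesn't detail how that graph is stored, but conceptually it is a many-to-many mapping between campaigns and the infrastructure they reuse. The toy sketch below (hypothetical names and structure, shown only to make the idea concrete) links campaigns to shared entities so that related campaigns surface through common phone numbers, IPs or ad accounts.

```python
# Toy sketch of a "threat graph": campaigns linked to the infrastructure
# (phone numbers, IPs, ad accounts) they reuse. Structure is illustrative only.
from collections import defaultdict


class ThreatGraph:
    def __init__(self) -> None:
        self.entity_to_campaigns: dict[str, set[str]] = defaultdict(set)
        self.campaign_to_entities: dict[str, set[str]] = defaultdict(set)

    def link(self, campaign: str, entity: str) -> None:
        """Record that a campaign used a given piece of infrastructure."""
        self.entity_to_campaigns[entity].add(campaign)
        self.campaign_to_entities[campaign].add(entity)

    def related_campaigns(self, campaign: str) -> set[str]:
        """Campaigns that share at least one entity with the given one."""
        related: set[str] = set()
        for entity in self.campaign_to_entities[campaign]:
            related |= self.entity_to_campaigns[entity]
        related.discard(campaign)
        return related


graph = ThreatGraph()
graph.link("campaign-A", "ip:203.0.113.7")
graph.link("campaign-A", "phone:+1-555-0100")
graph.link("campaign-B", "ip:203.0.113.7")   # shared IP ties B back to A
print(graph.related_campaigns("campaign-A"))  # {'campaign-B'}
```

Keeping the mapping in both directions is what turns “switch off the entire infrastructure” into a graph traversal rather than a one-off takedown per flagged site.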

So far, Doppel has attracted about 180 customers, including United Airlines, OpenAI and Coinbase, and has tripled its sales. It has also brought on customers in sectors such as oil and gas, finance and insurance, for which it introduced a second set of industry-specific AI agents. “We were overwhelmed by the breadth of the tier-1 customers it had outside of tech,” said Elliott Robinson, a partner at Bessemer Venture Partners.

Artificial intelligence has greased the wheels for cybercriminals to spin up new forms of social engineering attacks that go far beyond phishing emails embedded with dangerous links. These now include tactics such as a WhatsApp message from an account impersonating a company's CEO and urgently asking for money, a fake LinkedIn recruiter phishing for personal information and even a convincing decoy of a company's website. In 2024, Google suspended 40 million malicious advertising accounts on its platform. Many of them used AI-generated images and audio of public figures and CEOs to deceive and defraud people. Between April 2024 and April 2025, Microsoft disrupted fraud attempts worth $4 billion targeting its customers and blocked 1.6 million bot signup attempts on its Azure platform.

Tian compares today's dilemma to the one in the Tom Cruise movie Mission: Impossible – Dead Reckoning: a villainous AI system on a mission to distort reality and control the world. “In that movie, the AI is evil, it can impersonate anyone, create fake news, manipulate entire societies through fake digital deception,” he said. He sees Doppel as an opportunity to “build a company that solves one of the biggest problems in the world, which is AI-enabled social engineering.”

