Tech News

AI agents are getting better at writing code – and at hacking it as well

Recent AI models are not just surprisingly good at software engineering – new research shows they are getting better at finding bugs in software, too.

AI researchers at UC Berkeley tested how well the latest AI models and agents could find vulnerabilities in large open-source codebases. Using a new benchmark called CyberGym, the AI models identified new bugs, including 15 previously unknown, or "zero-day," vulnerabilities. "Many of these vulnerabilities are critical," says Dawn Song, a professor at UC Berkeley who led the project.

Many experts expect AI models to become powerful cybersecurity weapons. An AI tool from the startup Xbow has climbed the ranks of HackerOne's bug-bounty leaderboard and currently sits in the top spot. The company recently announced $75 million in new funding.

Song says the coding skills of the latest AI models, combined with their improving reasoning abilities, are starting to change the cybersecurity landscape. "This is a pivotal moment," she says. "It actually exceeded our general expectations."

As the models continue to improve, they will automate the process of both finding and exploiting security flaws. This could help companies keep their software safe, but it could also aid hackers. "We didn't even try that hard," Song says. "If we ramped up the budget and allowed the agents to run longer, they could do even better."

The UC Berkeley team tested conventional frontier AI models from OpenAI, Google, and Anthropic, as well as open-source offerings from Meta, DeepSeek, and Alibaba, combined with several agent frameworks for finding bugs, including OpenHands, Cybench, and EnIGMA.

The researchers started with descriptions of known software vulnerabilities drawn from 188 software projects. They then fed those descriptions to cybersecurity agents powered by frontier AI models to see whether the agents could identify the same flaws on their own by analyzing new codebases, running tests, and crafting proof-of-concept exploits. The team also asked the agents to hunt for new vulnerabilities in the codebases by themselves.

Through this process, the AI tools generated hundreds of proof-of-concept exploits, and from these exploits the researchers identified 15 previously unseen vulnerabilities and two vulnerabilities that had previously been disclosed and patched. The work adds to growing evidence that AI can automate the discovery of zero-day vulnerabilities, which are potentially dangerous (and valuable) because they may provide a way to hack live systems.

AI seems destined to become an important part of the cybersecurity industry. Security researcher Sean Heelan recently discovered a zero-day flaw in the widely used Linux kernel with the help of OpenAI's reasoning model o3. Last November, Google said it had discovered a previously unknown software vulnerability using AI through its Project Zero program.

Like other parts of the software industry, many cybersecurity firms are embracing AI. The new work shows that AI can find novel flaws, but it also highlights the limitations of the technology: the AI systems were unable to find most of the known flaws and were stumped by especially complex ones.
