

Researchers at Stanford have found that using AI-based code generation systems makes developers more likely to introduce security vulnerabilities, TechCrunch reports.
The study focused on Codex, the system OpenAI introduced in August 2021. The researchers recruited 47 developers of varying skill levels to complete security-related programming tasks in several languages, including Python, JavaScript, and C.
According to the study, participants who used Codex were more likely to write incorrect and insecure code than the control group. Programmers who used the AI were also more likely to be confident that their solutions were correct.
The researchers believe that developers without a background in cybersecurity should use such tools with caution.
“Those who use them to speed up tasks in which they are already skilled should carefully double-check the results and context,” the scientists added.
Study co-author Megha Srivastava emphasized that the results are not a condemnation of Codex and other code generation systems. Such tools are useful for low-risk tasks, she said.
The researchers proposed several ways to make AI code generation systems more secure, including a mechanism for “refining” prompts. They likened it to a supervisor who reviews draft code.
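The paper does not spell out an implementation, but one way to picture such a refinement loop is the sketch below. It is only illustrative: `generate` stands in for any code generation API, and the pattern list is a hypothetical stand-in for a real security scanner.

```python
# Hypothetical sketch of a "prompt refinement" loop: ask the model for a
# draft, scan it for risky patterns, and feed the findings back into the
# prompt, much like a supervisor returning notes on a code draft.
import re

# Illustrative patterns only; a real system would use a proper analyzer.
RISKY_PATTERNS = {
    r"\beval\(": "avoid eval(); parse the input explicitly",
    r"\bMODE_ECB\b": "use an authenticated cipher mode such as GCM",
    r"\bmd5\b": "use a modern hash such as SHA-256",
}

def refine(prompt: str, generate, max_rounds: int = 3) -> str:
    """Iteratively tighten the prompt until the draft passes the scan."""
    draft = ""
    for _ in range(max_rounds):
        draft = generate(prompt)  # `generate` is any text-to-code callable
        issues = [hint for pat, hint in RISKY_PATTERNS.items()
                  if re.search(pat, draft)]
        if not issues:
            return draft
        # Fold the reviewer's feedback back into the prompt and retry.
        prompt += "\nConstraints: " + "; ".join(issues)
    return draft
```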
They also suggested that developers of cryptographic libraries make their default settings secure, since the defaults picked up by AI-generated code are not always free of exploitable flaws.
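As an illustration of the kind of default the researchers have in mind (this example is ours, not from the study, and assumes the PyCryptodome library):

```python
# A minimal sketch of why crypto library defaults matter.
# Requires the PyCryptodome package (pip install pycryptodome).
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)  # 256-bit key

# Pattern often seen in generated code: ECB mode encrypts identical
# plaintext blocks to identical ciphertext blocks, leaking structure.
weak = AES.new(key, AES.MODE_ECB)

# Safer choice: an authenticated mode such as GCM, which provides both
# confidentiality and integrity; PyCryptodome generates a nonce for us.
cipher = AES.new(key, AES.MODE_GCM)
ciphertext, tag = cipher.encrypt_and_digest(b"example plaintext")
print(cipher.nonce.hex(), ciphertext.hex(), tag.hex())
```

The point is that if the insecure option were harder to reach than the secure one, a model trained on existing code would be less likely to reproduce it.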
“Our goal is to make a broader statement about the use of code generation models. More work needs to be done to study these issues and develop methods to address them,” said study co-author Neil Perry.
In his view, introducing security vulnerabilities is not the only drawback of AI code generation systems. He also pointed to possible copyright infringement stemming from the publicly available code used to train Codex.
“[For these reasons] we are largely cautious about using these tools as a substitute for teaching beginner developers about reliable coding techniques,” Srivastava added.
As a reminder, in May a developer discovered that Copilot was “leaking” private keys from crypto wallets.
In October, a group of programmers announced a class-action lawsuit against Microsoft for training AI on their code.
In July 2021, Copilot was suspected of copying copyrighted pieces of free software.