Paper Highlight: Advances in AI-Powered Code Security (LLMSA)
I invited Chengpeng's team for a GitHub Day of Learning talk to discuss LLM-powered security analysis:
Joining us live from Purdue University, Chengpeng Wang and Prof. Xiangyu Zhang will share their latest research on AI-powered code security analysis and how their work has led to the discovery of new vulnerabilities in real-world projects.
Through these discussions, they hope to showcase a vision for static analysis and bug detection, in particular how LLMs can address the limitations of traditional static analysis techniques by enabling customization, compilation-free analysis, multi-language support, and multi-modal analysis.
Building on their previous work on agentic and agentless LLM-powered dataflow modeling and security testing (LLMDFA and LLMSAN), this session focuses on recent advances published in December 2024 as LLMSA: A Compositional Neuro-Symbolic Approach to Compilation-free and Customizable Static Analysis.