
[suggestion] Automated Unit Test Improvement using Large Language Models at Meta #12

Open

profnandaa opened this issue Feb 29, 2024 · 1 comment

profnandaa commented Feb 29, 2024

Abstract

This paper describes Meta's TestGen-LLM tool, which uses LLMs to automatically improve existing human-written tests. TestGen-LLM verifies that its generated test classes successfully clear a set of filters that assure measurable improvement over the original test suite, thereby eliminating problems due to LLM hallucination. We describe the deployment of TestGen-LLM at Meta test-a-thons for the Instagram and Facebook platforms. In an evaluation on Reels and Stories products for Instagram, 75% of TestGen-LLM's test cases built correctly, 57% passed reliably, and 25% increased coverage. During Meta's Instagram and Facebook test-a-thons, it improved 11.5% of all classes to which it was applied, with 73% of its recommendations being accepted for production deployment by Meta software engineers. We believe this is the first report on industrial scale deployment of LLM-generated code backed by such assurances of code improvement.
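
The filtering idea in the abstract (a candidate test must build, pass reliably, and add coverage before it is recommended) can be sketched roughly as below. This is an illustrative sketch, not TestGen-LLM's actual code: the callables `builds`, `runs_green`, and `coverage` are hypothetical stand-ins for real build/test/coverage tooling.

```python
from typing import Callable, Iterable

def passes_filters(
    candidate: str,
    builds: Callable[[str], bool],
    runs_green: Callable[[str], bool],
    coverage: Callable[[str], float],
    baseline_coverage: float,
    repeat: int = 5,
) -> bool:
    """Admit an LLM-generated test class only if it clears every filter,
    guaranteeing a measurable improvement over the original suite."""
    if not builds(candidate):
        # Filter 1: the generated test class must build correctly.
        return False
    if not all(runs_green(candidate) for _ in range(repeat)):
        # Filter 2: it must pass repeatedly, rejecting flaky tests.
        return False
    # Filter 3: it must raise coverage above the existing suite's baseline.
    return coverage(candidate) > baseline_coverage

def improved_tests(candidates: Iterable[str], **env) -> list[str]:
    """Return only the candidates that survive all three filters."""
    return [c for c in candidates if passes_filters(c, **env)]
```

If this reading is right, the three filters correspond to the pass rates quoted in the abstract: 75% of candidates built correctly, 57% passed reliably, and 25% increased coverage.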

Links

Presenters

  • Anthony Nandaa
  • Bildan Urandu
  • Timothy Kaboya

Timeframe

  • April or May meetup
  • Now scheduled for July 9, 2024 12 noon GMT+3

profnandaa commented Jul 10, 2024
