
Assess results for different prompting strategies and models #116

Open
jwmatthews opened this issue Mar 27, 2024 · 0 comments
Assignees: jwmatthews
Labels: experiment, prompt-engineering, solution-server

Comments

@jwmatthews (Member)

We want to gather information to help assess results across models and prompting strategies.
By prompting strategies, I am referring specifically to three broad themes (sketched in code after this list):

  1. Zero-shot with minimal guidance: Prompt instructions and input file to migrate
  2. Zero-shot with analysis info/hints: Prompt instructions, analysis guidance/hints, and input file to migrate
  3. Few-shot with analysis info/hints: Prompt instructions, analysis guidance/hints, related solved examples, and input file to migrate
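
For illustration, a minimal sketch of how the three prompt variants might be assembled. The helper name, strategy identifiers, and templates below are hypothetical, not the actual solution-server implementation:

```python
# Hypothetical sketch of the three prompting strategies; names and templates
# are illustrative only and do not reflect the real prompt-builder in this repo.

def build_prompt(strategy: str, instructions: str, source_file: str,
                 analysis_hints: str = "",
                 solved_examples: list[str] | None = None) -> str:
    """Assemble a prompt for one of the three strategies."""
    parts = [instructions]

    if strategy in ("zero-shot-hints", "few-shot-hints"):
        # Strategies 2 and 3: include analysis guidance/hints.
        parts.append("Analysis hints:\n" + analysis_hints)

    if strategy == "few-shot-hints":
        # Strategy 3: include related solved examples.
        for example in solved_examples or []:
            parts.append("Solved example:\n" + example)

    # All three strategies end with the input file to migrate.
    parts.append("Input file to migrate:\n" + source_file)
    return "\n\n".join(parts)


if __name__ == "__main__":
    prompt = build_prompt(
        strategy="few-shot-hints",
        instructions="Migrate this file to the target platform.",
        source_file="// contents of the input file ...",
        analysis_hints="Rule hint: replace the deprecated API usage flagged by analysis.",
        solved_examples=["// before/after pair from a similar, already-solved migration ..."],
    )
    print(prompt)
```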
jwmatthews self-assigned this on Mar 27, 2024
shawn-hurley added the prompt-engineering and experiment labels on Jan 13, 2025