"Limit responses to your data content" seems not to be working? #5

Open
DannyvanderKraan opened this issue Aug 18, 2023 · 0 comments
Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

1. Deploy the AI model "gpt-35-turbo" in Azure AI Studio.
2. In "Assistant Setup", click "Add your data (preview)" and add some files, for instance via blob storage (just complete the wizard).
3. Make sure "Limit responses to your data content" is checked.
4. Chat with your deployed model in the playground and observe that it only knows the information you provided.
5. Start a new C# .NET console app with the Azure.AI.OpenAI (1.0.0-beta.6) NuGet package.
6. Set up your OpenAIClient with the correct endpoint URL and key credentials.
7. Call GetChatCompletionsAsync on the client with the correct deployment model name.
8. Prompt it with a question that is not in the information you provided, and watch how it answers anyway.
9. Prompt it with a question it should know from the information provided, and watch how it doesn't know the answer.
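For reference, the console-app call in the steps above boils down to a single REST request to the deployment. A minimal Python sketch of that request follows; the resource name, deployment name, and api-version are placeholder assumptions, not taken from the report. The point it illustrates: nothing in this plain chat-completions payload references the data sources configured in the portal, which would be consistent with the ungrounded answers observed.

```python
import json

# Hypothetical placeholders -- substitute your own resource, deployment, and key.
ENDPOINT = "https://my-resource.openai.azure.com"
DEPLOYMENT = "gpt-35-turbo"
API_VERSION = "2023-05-15"  # assumed api-version; check your service's supported versions

def build_chat_request(question: str):
    """Build the URL and JSON body for a plain chat-completions call.

    Note that nothing here carries the "Add your data" configuration
    made in the portal -- the playground adds that separately.
    """
    url = (
        f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
        f"/chat/completions?api-version={API_VERSION}"
    )
    body = {"messages": [{"role": "user", "content": question}]}
    return url, body

url, body = build_chat_request("What does the provided data say about X?")
print(url)
print(json.dumps(body, indent=2))
```

Sending this body (e.g. with an `api-key` header) reaches the bare model, so answers are not limited to your data.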

Any log messages given by the failure

Expected/desired behavior

I expected the model not to know anything beyond the provided information, since I grounded it with my data.
I also expected the model to know everything about the information I provided, yet it didn't know anything about it.

OS and Version?

Windows 11

Versions

Azure.AI.OpenAI 1.0.0-beta.6

Mention any other details that might be useful

I noticed a mention online of an "inScope" parameter for the REST call to Azure OpenAI that indicates the model should be grounded to the provided data. I don't know if that helps?
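If the "inScope" route is the right one, it would look roughly like the sketch below: the "on your data" preview accepts a `dataSources` section on an extensions variant of the chat-completions endpoint. All names here (the `/extensions/` path, the preview api-version, the `AzureCognitiveSearch` type, and the search endpoint/key/index placeholders) are assumptions drawn from that preview surface, not verified against the beta.6 SDK.

```python
import json

# Hypothetical placeholders -- substitute your own resource, deployment, and key.
ENDPOINT = "https://my-resource.openai.azure.com"
DEPLOYMENT = "gpt-35-turbo"
API_VERSION = "2023-06-01-preview"  # assumed preview version exposing extensions

def build_grounded_request(question: str, search_endpoint: str,
                           search_key: str, index_name: str):
    """Build a chat-completions request against the extensions endpoint,
    attaching a Cognitive Search data source with inScope=True so the
    model should answer only from the provided data."""
    url = (
        f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
        f"/extensions/chat/completions?api-version={API_VERSION}"
    )
    body = {
        "messages": [{"role": "user", "content": question}],
        "dataSources": [
            {
                "type": "AzureCognitiveSearch",  # assumed data source type
                "parameters": {
                    "endpoint": search_endpoint,
                    "key": search_key,
                    "indexName": index_name,
                    "inScope": True,  # limit responses to the data content
                },
            }
        ],
    }
    return url, body

url, body = build_grounded_request(
    "What does the provided data say about X?",
    "https://my-search.search.windows.net", "<search-key>", "my-index",
)
print(url)
print(json.dumps(body, indent=2))
```

If this is correct, the beta.6 C# client's plain GetChatCompletionsAsync call would not be sending any of this, which might explain the behavior.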


Thanks! We'll be in touch soon.
