
Clean up GPT Grammar #80

Merged: 14 commits merged from pipelining into main on Jul 1, 2024
Conversation

@C-Loftus (Owner) commented Jun 30, 2024

  • Adds last as a source
  • Adds verbal as a response option if TTS is installed
  • Renames the list insertionMethod to responseMethod, since that naming is clearer
  • Renames the responseMethod browser to windowed to better fit the adverb grammar
  • Implements a beta command that reverses the chain order and puts user.text at the end, allowing easier chaining and better recognition
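
A rough sketch of what the reversed-order beta command could look like in a .talon file. This is an illustration pieced together from the examples in the discussion, not the actual diff; the capture, list, and action names (user.modelSource, user.gpt_apply_prompt, and so on) are assumptions based on the rest of this repo's grammar.

```talon
# Hypothetical sketch, not the actual diff: the source and
# response method are spoken first, and the freeform
# user.text prompt sits at the very end of the phrase,
# which makes chaining and recognition easier.
on {user.modelSource} responding {user.modelDestination} model please <user.text>$:
    text = user.gpt_get_source_text(modelSource)
    result = user.gpt_apply_prompt(user.text, text)
    user.gpt_insert_response(result, modelDestination)
```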

@C-Loftus C-Loftus requested a review from jaresty June 30, 2024 01:39
@C-Loftus (Owner Author)

Examples: on last model please make this more terse, on clipboard responding clipped model please make this into bullet points

When the order is reversed I think we need on and responding with the textSource and responseMethod, since otherwise it is a bit difficult to parse mentally (e.g. last windowed model please make this more terse is a bit odd, imo).

@C-Loftus (Owner Author)

@jaresty with this grammar you can essentially give the model state and have it respond in any fashion you please

i.e. in the context of coding

  • responding clipped model please create a react component for a navbar
  • on last responding windowed model please make that more terse with dark mode styling

in the context of writing a research paper
on this responding verbal model please explain if there are any inconsistencies with my thinking

@jaresty (Collaborator) commented Jun 30, 2024

This looks really powerful. I'll give it a test run when I can. Excited! Thanks.

@jaresty (Collaborator) commented Jun 30, 2024

Should we change the model blend grammar to match this too?

@C-Loftus (Owner Author)

> Should we change the model blend grammar to match this too?

I think that is out of scope for this PR at least on my end. It could fit in the grammar but you would have to do some if statement checking in python that gets a little messy/tedious to match all the cases.

@jaresty (Collaborator) commented Jun 30, 2024

I'm mostly thinking it would make sense to just change the "model blend clip" into "on clip model blend" to reduce interference. It wouldn't have to handle all cases, just to avoid having to keep a different mental model for the one command.

@jaresty (Collaborator) commented Jun 30, 2024

There's something a bit weird about the way it is matching. I didn't expect it to match this: on section responding at dot model summarize, but it did. Is that expected behavior? I was trying to say: on selection responding windowed model summarize. I later realized that I should have been saying on this responding windowed model summarize. When I did that it worked really well. Nice work!

@jaresty (Collaborator) commented Jun 30, 2024

I played with it a bit and it's great. I really loved the power of being able to pipeline transformations. I ran into a couple of issues that are somewhat orthogonal to this pull request:

  1. The browser window doesn't format the model responses very well. I think we should display the response inside of a preformatted HTML tag so that it handles new lines in a way that is readable.
  2. I find myself wanting to respond below and selected. Maybe the selected argument should be appended to above or below instead of being an alternative.
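
A minimal sketch of the first point, using a hypothetical helper (the repo's actual display function is not shown here): wrap the response in a `<pre>` tag so newlines and indentation render readably, escaping the text so model output can't inject markup.

```python
import html

def format_response_for_browser(response: str) -> str:
    # Hypothetical helper, not the repo's actual function.
    # <pre> preserves newlines and indentation when rendered;
    # html.escape keeps model output from injecting markup.
    return f"<pre>{html.escape(response)}</pre>"

print(format_response_for_browser("- point one\n- point <b>two</b>"))
```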

@jaresty (Collaborator) commented Jun 30, 2024

One thought on the grammar with respect to cursorless. In cursorless we usually say "verb target to destination". This grammar is essentially "on target responding destination verb". It might be more cursorless-like to reorder it. You could probably even use the cursorless grammar and just have these act as virtual targets/destinations. Something like: "model summarize clip/last to clip/window"

@C-Loftus (Owner Author)

> There's something a bit weird about the way it is matching. I didn't expect it to match this: on section responding at dot model summarize, but it did. Is that expected behavior? I was trying to say: on selection responding windowed model summarize. I later realized that I should have been saying on this responding windowed model summarize. When I did that it worked really well. Nice work!

Should be fixed now. It seems like it matched a cursorless tag; I moved the cursorless target into the cursorless-specific file.

@C-Loftus (Owner Author)

> One thought on the grammar with respect to cursorless. In cursorless we usually say "verb target to destination". This grammar is essentially "on target responding destination verb". It might be more cursorless-like to reorder it. You could probably even use the cursorless grammar and just have these act as virtual targets/destinations. Something like: "model summarize clip/last to clip/window"

One of my goals with pipelining is to allow easier use of model please for arbitrary requests. (and integrate it better alongside the static prompts in a format that is more consistent)

Model please doesn't flow particularly well if you have arbitrary user-requested text in the middle and then have to break out of it at the very end of the phrase to say a specific keyword. That was my main justification for putting it at the end.

@jaresty (Collaborator) commented Jun 30, 2024

Got it. That is a good point. I'm curious to hear thoughts from @pokey on this one. I'm also interested in adding a way to add additional textual context to every prompt, so this grammar does solve that nicely.

@jaresty (Collaborator) commented Jun 30, 2024

Fwiw, one way you could use the cursorless grammar and still allow adding arbitrary text at the end would be to use a conjunction like "and". Example:

  • model summarize clip to window and text
  • model ask to window and text

@C-Loftus C-Loftus changed the title Allow for Pipelining GPT Responses Clean up GPT Grammar and Slim Down Repo Jun 30, 2024
@C-Loftus C-Loftus changed the title Clean up GPT Grammar and Slim Down Repo Clean up GPT Grammar Jun 30, 2024
@C-Loftus (Owner Author) commented Jun 30, 2024

Pokey and I discussed this a lot today.

  • I decided against this pipelining approach. It doesn't really match the cursorless grammar, and it is probably way too much cognitive load. We want to be able to split up commands into multiple parts, not put everything into one huge phrase.
  • I added model please and model ask into the modelPrompt capture so they work like the others.
  • I changed some of the source and target names around a bit.
  • I removed some old parts of the repo that aren't being used.
  • We changed paste and select to have a separate after-the-fact action, model take last.
    • Most users are not going to know whether they want to select the text until after it is already pasted.

Some of these might alter some workflows but Pokey and I agreed that we wanted to support a grammar that mimics the action verb source target grammar of cursorless, reduces cognitive load, and generally works for the average user.
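
As a sketch only (assuming the capture and list names match the snippet quoted later in this thread), the shape settled on mirrors the cursorless verb source target order, with selection split out into its own follow-up command:

```talon
# Hypothetical sketch of the resulting grammar shape.
model <user.modelPrompt> [{user.modelSource}] [{user.modelDestination}]:
    text = user.gpt_get_source_text(modelSource or "")
    result = user.gpt_apply_prompt(modelPrompt, text)
    user.gpt_insert_response(result, modelDestination or "")

# Selecting the pasted response is a separate action,
# spoken after the fact:
model take last: user.gpt_select_last()
```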

@jaresty (Collaborator) commented Jun 30, 2024

Quick note: "model take that" would be more cursorless-like than "model take last"

@C-Loftus (Owner Author)

Yeah, honestly I think I'm going to change it to be model take response anyway, since I think I want to refer to the last dictated phrase from Talon.

Pokey and I did discuss this during the meetup, and I thought that was a bit ambiguous, particularly in the context where we have the response as well as the last dictated phrase.

@jaresty (Collaborator) commented Jun 30, 2024

I think I like "last" since it also is used as a target

@jaresty (Collaborator) commented Jun 30, 2024

Btw, I was using selected/below quite a bit. Hope there's something analogous in the new grammar.

@jaresty (Collaborator) commented Jun 30, 2024

An aside: it would be nice if there were a way to view the clipboard contents in a browser window (model view clipboard?), since we sometimes put things into it without ever seeing them.

@C-Loftus (Owner Author)

> Btw, I was using selected/below quite a bit. Hope there's something analogous in the new grammar.

Below is the same. Pokey recommended removing selected and just using model take last, since selected isn't a destination and so doesn't fit the grammar. Most users also don't know whether they want it selected until after it is pasted.

I think for the time being, if you want to select it in advance, it would make sense to create a simple custom command for it. In the future we can think about another PR for destination modifiers.

@C-Loftus (Owner Author)

If you feel strongly and really need selected to be part of the destination grammar, though, then I can just keep it in.

@jaresty (Collaborator) commented Jun 30, 2024

I do really appreciate having it; it acts almost like a working memory. Have you ever seen a clipboard ring? It reminds me of that (example: https://marketplace.visualstudio.com/items?itemName=SirTobi.code-clip-ring)

@jaresty (Collaborator) commented Jun 30, 2024

Ideally, I think it would be an optional addendum to paste destinations like before, after, and pasting with no modifiers, since it is really more of a behavioral modification than a destination. I'm OK making do with whatever you decide, however. I had been relying on it as a way to transform the last prompt result while I can still look at it (since last and clipboard end up invisible to the user).

@C-Loftus (Owner Author) commented Jul 1, 2024

For the time being I think I would prefer that you use a custom command for selection in your repo, i.e. the one below. I will discuss this with Pokey at the next meetup. I just want to get this merged, since I am not sure how I want to do modifiers right now (i.e. before or after the destination, and what other modifiers might be reasonable, if any).

model <user.modelPrompt> [{user.modelSource}] [{user.modelDestination}] selected$:
    text = user.gpt_get_source_text(modelSource or "")
    result = user.gpt_apply_prompt(modelPrompt, text)
    user.gpt_insert_response(result, modelDestination or "")
    user.gpt_select_last()

You can also just chain it by saying model X model take response, which is more verbose but means we don't need to introduce new parts into the grammar.

@jaresty (Collaborator) commented Jul 1, 2024

Ok. I'll use that.

@C-Loftus (Owner Author) commented Jul 1, 2024

We also now have pass as an action that doesn't do any transformation and just passes the raw source to the target. So model pass to speech or model pass to browser both work to visualize or verbalize info.

@C-Loftus (Owner Author) commented Jul 1, 2024

Merging this. I will talk with Pokey again in the future and get his feedback, and then we can iterate if you are unhappy or think the grammar should be made more complex. I have myself refrained from adding some things that might be useful for my own workflow, since I am trying to make the repo a bit more generalized and fitting to a clear, predictable grammar. Hope this tradeoff is reasonable.

@C-Loftus C-Loftus merged commit 5db9ac6 into main Jul 1, 2024
3 checks passed
@C-Loftus C-Loftus deleted the pipelining branch July 1, 2024 00:55