ChatGPT, Claude, Perplexity, and Gemini integrations for chat, real-time information retrieval, and text processing tasks such as paraphrasing, simplifying, or summarizing. With support for third-party proxies and local LLMs.
- Configure Hotkeys to quickly access the current chat, the archive, and the inference actions.
- For instance ⌥⇧A, ⌘⇧A, and ⌥⇧I (optional).
- Install the SF Pro font from Apple to display icons.
- Enter your API keys for the services you want to use.
- Configure your proxy or local host settings in the Environment Variables (optional).
- For example configurations see the wiki.
Converse with your primary service via the ask keyword, Universal Action, or Fallback Search.
- ↩ Continue the ongoing chat.
- ⌘↩ Start a new conversation.
- ⌥↩ View the chat history.
- Hidden Option
- ⌘⇧↩ Open the workflow configuration.
- ↩ Ask a question.
- ⌘↩ Start a new conversation.
- ⌥↩ Copy the last answer.
- ⌃↩ Copy the full conversation.
- ⇧↩ Stop generating an answer.
- ⌘⌃↩ View the chat history.
- Hidden Options
- ⇧⌥↩ Show configuration info in a HUD.
- ⇧⌃↩ Speak the last answer out loud.
- ⇧⌘↩ Edit a multi-line prompt in a separate window.
- ⇧↩ Switch between the editor and the Markdown preview.
- ⌘↩ Ask the question.
- ⇧⌘↩ Start a new conversation.
- Type to filter archived chats based on your query.
- ↩ Continue archived chat.
- ⌥ View the modification date.
- ⌘↩ Reveal the chat file in Finder.
- ⌘L Inspect the unabridged preview as Large Type.
- ⌘⇧↩ Send conversation to the trash.
Inference Actions¹
Inference Actions provide a suite of language tools for text generation and transformation. These tools enable summarization, clarification, concise writing, and tone adjustment for selected text. They can also correct spelling, expand and paraphrase text, follow instructions, answer questions, and improve text in other ways.
Access a list of all available actions via the Universal Action or by setting the Hotkey trigger.
- ↩ Generate the result using the configured default strategy.
- ⌘↩ Paste the result and replace selection.
- ⌥↩ Stream the result and preserve selection.
- ⌃↩ Copy the result to clipboard.
The inference actions are generated from a JSON file called actions.json, located in the workflow folder. You can customize existing actions or add new ones either by editing actions.json directly, or by editing actions.config.pkl and then evaluating it with pkl.
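As a sketch of what a customized entry might look like (the field names here are illustrative assumptions, not the workflow's authoritative schema; inspect the bundled actions.json for the exact keys):

```json
{
  "title": "Summarize",
  "description": "Condense the selected text into a short paragraph.",
  "prompt": "Summarize the following text in three sentences or fewer:"
}
```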
Important
Always back up your customized Inference Actions before updating the workflow, or your changes will be lost.
A prompt is the text that you give the model to elicit, or "prompt," a relevant output. A prompt is usually in the form of a question or instructions.
- General prompt engineering guide
- OpenAI prompt engineering guide | Prompt Gallery
- Anthropic prompt engineering guide | Prompt Gallery
- Google AI prompt engineering guide | Prompt Gallery
The primary configuration setting determines the service that is used for conversations.
OpenAI Proxies²
If you want to use a third-party proxy, define the corresponding host, path, API key, model, and, if required, the URL scheme or port in the environment variables.
The variables are prefixed as alternatives to OpenAI, because Ayai expects the returned stream events and errors to mirror the shape of those returned by the OpenAI API.
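For example, a connection through OpenRouter might be described like this (the left-hand labels are illustrative, not the workflow's actual variable names; the endpoint details are OpenRouter's publicly documented ones):

```
host:    openrouter.ai
path:    /api/v1/chat/completions
api key: <your OpenRouter key>
model:   anthropic/claude-3.5-sonnet
scheme:  https   # optional; https is the usual default
```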
Local LLMs³
If you want to use a local language model, define the corresponding URL scheme, host, port, path, and, if required, the model in the environment variables to establish a connection to the local HTTP server initiated and maintained by the method of your choice.
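For instance, a connection to a local Ollama server, which exposes an OpenAI-compatible endpoint on port 11434 by default, might look like this (the left-hand labels are illustrative, not the workflow's actual variable names):

```
scheme: http
host:   localhost
port:   11434                  # Ollama's default port
path:   /v1/chat/completions   # Ollama's OpenAI-compatible endpoint
model:  llama3.1               # whichever model you have pulled locally
```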
Note: Additional stop sequences can be provided via the shared finish_reasons environment variable.
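To make the "OpenAI-shaped stream" requirement concrete, here is a minimal Python sketch (not part of the workflow) that consumes OpenAI-style server-sent events: `data: {...}` lines carrying `chat.completion.chunk` objects, terminated by a `data: [DONE]` sentinel. Any proxy or local server that emits this shape looks identical to an Ayai-style client.

```python
import json

def collect_stream_text(sse_lines):
    """Concatenate the delta content from OpenAI-style SSE lines."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip comments and keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

# Example events in the shape the OpenAI API streams back:
events = [
    'data: {"object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {"role": "assistant", "content": "Hel"}, "finish_reason": null}]}',
    'data: {"object": "chat.completion.chunk", "choices": [{"index": 0, "delta": {"content": "lo"}, "finish_reason": "stop"}]}',
    "data: [DONE]",
]
print(collect_stream_text(events))  # → Hello
```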
Footnotes
1. Ayai will make sure that the frontmost application accepts text input before streaming or pasting, and will simply copy the result to the clipboard if it does not. This requires accessibility access, which you may need to grant in order to use inference actions. To disable the safety check for specific applications, add the application's bundle identifier to the relevant environment variable. ↩
2. Third-party proxies such as OpenRouter, Groq, Fireworks, or Together.ai. ↩
3. Local HTTP servers can be set up with interfaces such as LM Studio or Ollama. ↩