The prompt sent to the FIM LLM follows this structure:
```lua
provider_options = {
    openai_fim_compatible = {
        template = {
            prompt = function(context_before_cursor, context_after_cursor) end,
            suffix = function(context_before_cursor, context_after_cursor) end,
        },
    },
}
```
The template contains two main functions:

- `prompt`: returns the language and the indentation style, followed by `context_before_cursor` verbatim.
- `suffix`: returns `context_after_cursor` verbatim.
Both functions can be customized to provide additional context to the LLM. The `suffix` function can be disabled by setting `suffix = false`, which will result in only the `prompt` being included in the request.
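As a point of reference, here is a minimal sketch of the behavior described above (illustrative only; the actual default implementation includes more detail):

```lua
template = {
    -- Returns the language and indentation style, then the code before the cursor.
    prompt = function(context_before_cursor, context_after_cursor)
        return '# language: ' .. vim.bo.filetype .. '\n' .. context_before_cursor
    end,
    -- Returns the code after the cursor verbatim.
    suffix = function(context_before_cursor, context_after_cursor)
        return context_after_cursor
    end,
}
```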
Note for Ollama users: do not include special tokens (e.g., `<|fim_begin|>`) within the prompt or suffix functions, as these will be automatically populated by Ollama. If your use case requires special tokens not covered by Ollama's default template, first set `suffix = false` and then incorporate the special tokens within the prompt function.
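For example, here is a hypothetical template for a model whose special tokens are not covered by Ollama's default template (the token names below are placeholders; substitute your model's actual FIM tokens):

```lua
template = {
    prompt = function(context_before_cursor, context_after_cursor)
        -- Placeholder tokens: replace with the tokens your model expects.
        return '<|fim_begin|>'
            .. context_before_cursor
            .. '<|fim_hole|>'
            .. context_after_cursor
            .. '<|fim_end|>'
    end,
    suffix = false, -- send everything through the prompt alone
}
```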
For chat-based providers, the system prompt is assembled from the following template:

```
{{{prompt}}}\n{{{guidelines}}}\n{{{n_completion_template}}}
```
The default `prompt` component:

```
You are the backend of an AI-powered code completion engine. Your task is to
provide code suggestions based on the user's input. The user's code will be
enclosed in markers:

- <contextAfterCursor>: Code context after the cursor
- <cursorPosition>: Current cursor location
- <contextBeforeCursor>: Code context before the cursor

Note that the user's code will be prompted in reverse order: first the code
after the cursor, then the code before the cursor.
```
The default `guidelines` component:

```
Guidelines:
- Offer completions after the <cursorPosition> marker.
- Make sure you have maintained the user's existing whitespace and indentation.
  This is REALLY IMPORTANT!
- Provide multiple completion options when possible.
- Return completions separated by the marker <endCompletion>.
- The returned message will be further parsed and processed. DO NOT include
  additional comments or markdown code block fences. Return the result directly.
- Keep each completion option concise, limiting it to a single line or a few lines.
- Create entirely new code completion that DO NOT REPEAT OR COPY any user's
  existing code around <cursorPosition>.
```

And the default `n_completion_template` component, whose `%d` is filled in from `config.n_completions`:

```
- Provide at most %d completion items.
```
The default few-shot examples are defined as follows:

```lua
local default_few_shots = {
    {
        role = 'user',
        content = [[
# language: python
<contextAfterCursor>
fib(5)
<contextBeforeCursor>
def fibonacci(n):
<cursorPosition>]],
    },
    {
        role = 'assistant',
        content = [[
'''
Recursive Fibonacci implementation
'''
if n < 2:
    return n
return fib(n - 1) + fib(n - 2)
<endCompletion>
'''
Iterative Fibonacci implementation
'''
a, b = 0, 1
for _ in range(n):
    a, b = b, a + b
return a
<endCompletion>
]],
    },
}
```
The chat input represents the final prompt delivered to the LLM for completion. Its template follows a structure similar to the system prompt and can be customized using the following format:
```
{{{language}}}\n{{{tab}}}\n<contextAfterCursor>\n{{{context_after_cursor}}}\n<contextBeforeCursor>\n{{{context_before_cursor}}}<cursorPosition>
```
Components:

- `language`: the programming language the user is working on
- `tab`: the user's indentation style
- `context_before_cursor` and `context_after_cursor`: the text content before and after the cursor position
Each subcomponent must be defined by a function that takes two parameters (`context_before_cursor`, `context_after_cursor`) and returns a string value.
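For instance, here is a sketch of overriding a single subcomponent, assuming the chat input components live under a `chat_input` table as in the default configuration (check the default config for the exact location):

```lua
chat_input = {
    -- Both parameters are passed in, even when unused.
    language = function(context_before_cursor, context_after_cursor)
        return '# language: ' .. vim.bo.filetype
    end,
}
```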
You can customize the `template` by enclosing placeholders within triple braces. These placeholders will be interpolated using the corresponding key-value pairs from the table. Each value can be either a string or a function that takes no arguments and returns a string.
Here's a simplified example for illustrative purposes (not intended for actual configuration):

```lua
system = {
    template = '{{{assistant}}}\n{{{role}}}',
    assistant = function() return 'you are a helpful assistant' end,
    role = 'you are also a code expert.',
}
```
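With these values, the template interpolates to `you are a helpful assistant\nyou are also a code expert.` (with `\n` rendered as a newline).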
Note that `n_completion_template` is a special placeholder: it contains one `%d`, which will be replaced with `config.n_completions`. If you want to customize this template, make sure your version also contains exactly one `%d`.
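For instance, a hypothetical override that preserves the single `%d`:

```lua
system = {
    n_completion_template = '- Provide up to %d completion candidates.',
}
```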
Similarly, `few_shots` can be either a table in the following form, or a function that takes no arguments and returns such a table:

```lua
{
    { role = 'user', content = 'something' },
    { role = 'assistant', content = 'something' },
    -- ...
    -- You can pass as many turns as you want
}
```
Below is an example of configuring the prompt based on filetype:

```lua
require('minuet').setup {
    provider_options = {
        openai = {
            system = {
                prompt = function()
                    if vim.bo.ft == 'tex' then
                        return [[your prompt for completing prose.]]
                    else
                        return require('minuet.config').default_system.prompt
                    end
                end,
            },
            few_shots = function()
                if vim.bo.ft == 'tex' then
                    return {
                        -- your few shots examples for prose
                    }
                else
                    return require('minuet.config').default_few_shots
                end
            end,
        },
    },
}
```
There's no need to replicate unchanged fields. The plugin will automatically merge modified fields with the default values using the `tbl_deep_extend` function.
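Conceptually, the merge behaves like Neovim's built-in deep extend; a sketch of the idea (the variable names below are illustrative):

```lua
-- Conceptual sketch: user-supplied fields override defaults,
-- unspecified fields keep their default values.
local merged = vim.tbl_deep_extend('force', default_config, user_config)
```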