Added a text iterator that outputs a portion of characters on each run, splitting the text safely on newline characters and chunk size so a line is never broken in the middle. chunk_overlap specifies how many characters overlap between consecutive chunks. This enables batch input of long texts: just click repeatedly, or enable loop execution in ComfyUI, and it will run automatically. Remember to enable the is_locked attribute so the workflow locks itself once the input is exhausted, preventing further execution. Example workflow: Text Iterative Input
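The chunking behavior described above can be sketched as follows. This is a minimal illustration, not the node's actual implementation; the function name and defaults are hypothetical:

```python
def iter_chunks(text, chunk_size=1000, chunk_overlap=100):
    """Yield successive chunks of `text`, preferring to break on newlines."""
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        if end < len(text):
            # Back up to the last newline inside the window, if any,
            # so a line is never split in the middle.
            nl = text.rfind("\n", start, end)
            if nl > start:
                end = nl + 1
        yield text[start:end]
        if end == len(text):
            break
        # The next chunk starts `chunk_overlap` characters before this one ended.
        start = max(end - chunk_overlap, start + 1)
```

Each yielded chunk would correspond to one iteration of the node; `is_locked` would stop the loop once the generator is exhausted.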
Added the model name attribute to the local LLM loader, local llava loader, and local GGUF loader. If it is empty, the node uses its own local path widgets. If it is set and present in config.ini, the node uses the path you configured there. If it is set but not in config.ini, the node downloads the model from Hugging Face or loads it from the Hugging Face model save directory. To download from Hugging Face, fill in the model name attribute in repo-id format, e.g. THUDM/glm-4-9b-chat. Note: models loaded this way must be compatible with the Transformers library.
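The resolution order above can be sketched like this. It is a simplified illustration, assuming config.ini maps model names to paths inside its sections; the actual node logic and config layout may differ:

```python
import configparser
import os

def resolve_model_source(model_name, local_path, config_file="config.ini"):
    """Decide where to load a model from, mirroring the priority described above.

    Returns a (source, location) tuple, where source is "local", "config",
    or "huggingface".
    """
    if not model_name:
        # Empty model name: fall back to the node's own local path widget.
        return ("local", local_path)
    config = configparser.ConfigParser()
    if os.path.exists(config_file):
        config.read(config_file)
        for section in config.sections():
            if model_name in config[section]:
                # A path for this name is configured in config.ini.
                return ("config", config[section][model_name])
    # Otherwise treat the name as a Hugging Face repo id, e.g. "THUDM/glm-4-9b-chat".
    return ("huggingface", model_name)
```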
Adapted CosyVoice; you can now use TTS functionality without downloading any models or API keys. Currently this interface only supports Chinese.
Added a JSON file parsing node and a JSON value extraction node, which let you retrieve the value of a key from a file or a text string. Thanks to guobalove for the contribution!
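The value-extraction behavior can be sketched as below. This is a hypothetical helper, not the node's actual code; it accepts either a file path or a raw JSON string and searches nested structures for the key:

```python
import json
import os

def extract_json_value(source, key):
    """Return the value of `key` from a JSON file path or a JSON string."""
    if os.path.isfile(source):
        with open(source, "r", encoding="utf-8") as f:
            data = json.load(f)
    else:
        data = json.loads(source)
    # Search nested dicts/lists so a key anywhere in the document is found.
    stack = [data]
    while stack:
        node = stack.pop()
        if isinstance(node, dict):
            if key in node:
                return node[key]
            stack.extend(node.values())
        elif isinstance(node, list):
            stack.extend(node)
    return None
```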
Improved the tool invocation code: LLMs without native tool-calling support can now also use tools by enabling the is_tools_in_sys_prompt attribute (local LLMs are adapted automatically and do not need to enable it). When enabled, tool information is injected into the system prompt so the LLM can call tools. Related paper on the implementation principle: Achieving Tool Calling Functionality in LLMs Using Only Prompt Engineering Without Fine-Tuning
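The idea of injecting tool information into the system prompt can be sketched as follows. The prompt wording and JSON reply format here are illustrative assumptions, not the project's exact template:

```python
import json

def build_tool_system_prompt(base_prompt, tools):
    """Append tool schemas to the system prompt so a model without native
    tool calling can still emit tool invocations as JSON text."""
    tool_lines = "\n".join(json.dumps(t, ensure_ascii=False) for t in tools)
    return (
        base_prompt
        + "\n\nYou can call the following tools. To call one, reply with a "
        + 'JSON object like {"tool": "<name>", "arguments": {...}} and nothing else:\n'
        + tool_lines
    )
```

The caller would then parse the model's reply as JSON and dispatch to the named tool, feeding the result back as a new message.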
Created a custom_tool folder for storing custom tool code. Refer to the example code in that folder, place your own tool code there, and the LLM can then call your custom tool.
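Loading tools from such a folder could look like the sketch below. The convention of a module-level TOOLS list is an assumption for illustration; the project's real discovery mechanism may differ:

```python
import importlib.util
import pathlib

def load_custom_tools(folder="custom_tool"):
    """Import every .py file in `folder` and collect its exported tools.

    A "tool" is assumed here to be any module-level callable whose name
    appears in the module's TOOLS list (a hypothetical convention).
    """
    tools = {}
    for path in pathlib.Path(folder).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)  # run the tool file as a module
        for name in getattr(module, "TOOLS", []):
            tools[name] = getattr(module, name)
    return tools
```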
Added a knowledge graph tool so the LLM can interact seamlessly with a knowledge graph: it can modify the graph based on your input and reason over it to produce the answers you need. Example workflow reference: graphRAG_neo4j
Added the functionality to connect agents to Discord. (Still in testing)
Added the functionality to connect agents to Feishu, thanks a lot to guobalove for the contribution! Reference workflow Feishu Bot.
Added a universal API call node and many auxiliary nodes for constructing request bodies and capturing information from responses.
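The request-body construction and response-capture helpers can be sketched as below. Both the template placeholder syntax and the dotted response path are illustrative assumptions, not the nodes' actual interfaces:

```python
def build_request_body(template, **fields):
    """Fill placeholders in a request-body template dict.

    Values of the form "{name}" are replaced by fields["name"];
    everything else is copied through unchanged.
    """
    body = {}
    for key, value in template.items():
        if isinstance(value, str) and value.startswith("{") and value.endswith("}"):
            body[key] = fields.get(value[1:-1], value)
        else:
            body[key] = value
    return body

def pick_from_response(response, path):
    """Walk a dotted path like "choices.0.message.content" through a response."""
    node = response
    for part in path.split("."):
        node = node[int(part)] if isinstance(node, list) else node[part]
    return node
```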
Added a model clearing node, allowing you to unload the LLM from memory at any point in the workflow!
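Unloading a model typically amounts to dropping its references and asking Python and CUDA to reclaim memory, roughly as sketched here. The dict-shaped model holder is an assumption for illustration:

```python
import gc

def unload_model(model_holder):
    """Drop references to a loaded model and reclaim memory.

    `model_holder` is assumed to be a dict-like container such as
    {"model": ..., "tokenizer": ...}; the real node's structure may differ.
    """
    for key in list(model_holder):
        model_holder[key] = None  # release the Python-side reference
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # release cached GPU memory back to the driver
    except ImportError:
        pass  # CPU-only environments have nothing extra to free
```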
Added the chatTTS node; thanks a lot to guobalove for the contribution! The model_path parameter can be left empty. HF mode is recommended for loading the model: it downloads automatically from Hugging Face with no manual steps. If using local loading, place the model's asset and config folders in the root directory (Baidu Cloud Address, extraction code: qyhu); if using custom mode loading, place the model's asset and config folders in model_path.