
(v0.4.0) 【Bookish Tea Talk】 chatTTS support! KG graphRAG neo4j support! More access to social apps!

@heshengtao released this on 03 Aug 10:04 · 1055 commits to main since this release

✨v0.4.0✨【书香茶话】【Bookish Tea Talk】

This release includes the following features:

  1. Added a text iterator that outputs a portion of the text on each run. It splits the text safely on newline characters and the chunk size, so it never cuts the text in the middle of a line; chunk_overlap controls how many characters consecutive chunks share. This lets you feed in very long texts in batches: just keep clicking, or enable loop execution in ComfyUI, and it will run through automatically. Remember to enable the is_locked attribute so the workflow locks itself when the input is exhausted and stops executing. Example workflow: Text Iterative Input. (A chunking sketch follows this list.)
  2. Added a model name attribute to the local LLM loader, local llava loader, and local guff loader. If it is empty, the node loads from its own local path fields. If it is not empty, the node loads from the path you filled in for that name in config.ini. If it is not empty and not present in config.ini, it is treated as a Hugging Face repo id and downloaded from Hugging Face, or loaded from the Hugging Face model save directory. To download from Hugging Face, fill in the model name attribute in a format like THUDM/glm-4-9b-chat. Note: a model loaded this way must be compatible with the transformers library. (See the loading-order sketch after this list.)
  3. Adapted CosyVoice; you can now use TTS functionality without downloading any model or providing any API key. Currently this interface only supports Chinese.
  4. Added a JSON file parsing node and a JSON value extraction node, letting you read the value of a key from a file or from text. Thanks to guobalove for the contribution! (A JSON extraction sketch follows this list.)
  5. Improved the tool invocation code: LLMs without native tool-calling support can now also enable the is_tools_in_sys_prompt attribute (local LLMs do not need to enable it; they are adapted automatically). When enabled, tool information is added to the system prompt so the LLM can call tools. Related paper on the implementation principle: Achieving Tool Calling Functionality in LLMs Using Only Prompt Engineering Without Fine-Tuning. (See the prompt sketch after this list.)
  6. Created a custom_tool folder for custom tool code. Refer to the code already in custom_tool, drop your own tool code into that folder, and the LLM can then call your custom tool. (A minimal skeleton follows this list.)
  7. Added a knowledge graph tool so the LLM can interact with a knowledge graph: it can modify the graph based on your input and reason over the graph to get the answers you need. Example workflow reference: graphRAG_neo4j. (See the Neo4j sketch after this list.)
  8. Added the functionality to connect agents to Discord (still in testing).
  9. Added the functionality to connect agents to Feishu; thanks a lot to guobalove for the contribution! Reference workflow: Feishu Bot.
  10. Added a universal API call node and many auxiliary nodes for constructing request bodies and extracting information from responses. (A request sketch follows this list.)
  11. Added a model clearing node that lets you unload the LLM from VRAM at any point in the workflow! (See the unload sketch after this list.)
  12. Added the chatTTS node; thanks a lot to guobalove for the contribution! The model_path parameter can be empty! It is recommended to load the model in HF mode, which downloads it automatically from Hugging Face with no manual download needed. If you load in local mode, place the model's asset and config folders in the root directory (Baidu Cloud address, extraction code: qyhu). If you load in custom mode, place the model's asset and config folders under model_path.
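
For item 1, a rough sketch of the chunking behaviour only, not the node's actual code; `split_text`, `chunk_size`, and `chunk_overlap` are illustrative names matching the attributes described above:

```python
def split_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 100):
    """Split text into chunks on newline boundaries, never mid-line.

    Each chunk is roughly at most chunk_size characters, and consecutive
    chunks share about chunk_overlap characters of trailing context.
    """
    chunks, current = [], ""
    for line in text.split("\n"):
        # Start a new chunk once adding this line would exceed chunk_size.
        if current and len(current) + len(line) + 1 > chunk_size:
            chunks.append(current)
            # Carry over the tail of the previous chunk as overlap, trimmed
            # back to the nearest newline so no line is cut in half.
            tail = current[-chunk_overlap:] if chunk_overlap > 0 else ""
            current = tail[tail.find("\n") + 1:] if "\n" in tail else ""
        current = line if not current else current + "\n" + line
    if current:
        chunks.append(current)
    return chunks

# Feed one chunk per (loop) execution.
text = "line one\nline two\nline three\n" * 200
for chunk in split_text(text, chunk_size=300, chunk_overlap=30):
    print(len(chunk))
```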
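
For item 2, the loading order can be pictured roughly as below. This is a hedged sketch: `resolve_model_source`, the `LOCAL_MODEL` section name, and the config.ini layout are assumptions for illustration; only the transformers calls and the THUDM/glm-4-9b-chat example come from the notes above.

```python
import configparser
from transformers import AutoModelForCausalLM, AutoTokenizer

def resolve_model_source(model_name: str, node_local_path: str,
                         config_path: str = "config.ini") -> str:
    """Decide where to load the model from, mirroring the order in item 2."""
    if not model_name:
        # Empty model name: fall back to the path set on the node itself.
        return node_local_path
    cfg = configparser.ConfigParser()
    cfg.read(config_path)
    # If config.ini maps this name to a path, use that path.
    if cfg.has_option("LOCAL_MODEL", model_name):  # section name is assumed
        return cfg.get("LOCAL_MODEL", model_name)
    # Otherwise treat it as a Hugging Face repo id such as "THUDM/glm-4-9b-chat";
    # transformers will download it or reuse the local HF cache.
    return model_name

source = resolve_model_source("THUDM/glm-4-9b-chat", node_local_path="")
tokenizer = AutoTokenizer.from_pretrained(source, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(source, trust_remote_code=True)
```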
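
For item 4, conceptually the JSON nodes parse a file or a string and pull out one key's value; `get_json_value` is an illustrative helper, and the real nodes may handle nested keys differently.

```python
import json

def get_json_value(source: str, key: str, is_file: bool = False):
    """Return the value of `key` from a JSON file or a JSON string."""
    if is_file:
        with open(source, "r", encoding="utf-8") as f:
            data = json.load(f)
    else:
        data = json.loads(source)
    return data.get(key)

print(get_json_value('{"name": "party", "version": "0.4.0"}', "version"))  # 0.4.0
```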
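
For item 5, the prompt-engineering idea looks roughly like this: the tool schema goes into the system prompt and the model is asked to answer with a JSON tool call, which the caller then parses. The prompt wording and `parse_tool_call` are assumptions, not the project's exact code.

```python
import json

tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {"city": "string"},
}]

# Describe the tools inside the system prompt so that models without a
# native tool-calling API can still request a call in plain JSON.
system_prompt = (
    "You can call the following tools. To call one, reply ONLY with JSON "
    'of the form {"tool": "<name>", "arguments": {...}}.\n'
    f"Tools: {json.dumps(tools, ensure_ascii=False)}"
)

def parse_tool_call(reply: str):
    """Return (tool_name, arguments) if the reply is a JSON tool call, else None."""
    try:
        data = json.loads(reply)
        return data["tool"], data.get("arguments", {})
    except (json.JSONDecodeError, KeyError, TypeError):
        return None

print(parse_tool_call('{"tool": "get_weather", "arguments": {"city": "Beijing"}}'))
```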
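
For item 6, a custom tool is an ordinary ComfyUI node class dropped into custom_tool. The skeleton below only shows the generic ComfyUI node shape; how the tool is exposed to the LLM (the output schema and any registration step) should be copied from the examples already in that folder, so everything project-specific here is an assumption.

```python
import json

class GetTimeTool:
    """Toy example: a node that outputs a description of a 'get_time' tool."""

    @classmethod
    def INPUT_TYPES(cls):
        # Standard ComfyUI node interface: declare the node's inputs.
        return {"required": {"is_enable": ("BOOLEAN", {"default": True})}}

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("tool",)
    FUNCTION = "build"
    CATEGORY = "custom_tool"  # placeholder category

    def build(self, is_enable=True):
        if not is_enable:
            return (None,)
        # Assumed shape: a JSON description the LLM node can read.
        tool = {
            "name": "get_time",
            "description": "Return the current local time",
            "parameters": {},
        }
        return (json.dumps(tool, ensure_ascii=False),)
```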
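
For item 7, the graphRAG_neo4j workflow ultimately reads and writes triples in Neo4j; below is a minimal sketch with the official neo4j Python driver. The URI, credentials, labels, and Cypher pattern are placeholders, not values from the project.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_fact(subject: str, relation: str, obj: str):
    """Write one (subject)-[relation]->(object) triple into the graph."""
    with driver.session() as session:
        session.run(
            "MERGE (a:Entity {name: $s}) "
            "MERGE (b:Entity {name: $o}) "
            "MERGE (a)-[:REL {type: $r}]->(b)",
            s=subject, r=relation, o=obj,
        )

def query_facts(subject: str):
    """Read back what the graph knows about a subject, for the LLM to reason over."""
    with driver.session() as session:
        result = session.run(
            "MATCH (a:Entity {name: $s})-[r:REL]->(b) "
            "RETURN r.type AS rel, b.name AS obj",
            s=subject,
        )
        return [(rec["rel"], rec["obj"]) for rec in result]

add_fact("ComfyUI", "supports", "custom nodes")
print(query_facts("ComfyUI"))
driver.close()
```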
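
For item 10, the spirit of the universal API call node is: build a request body, send the request, and pluck values out of the JSON response. A sketch with requests; the URL, headers, and field names are placeholders.

```python
import requests

# Build the request body (what the "construct request body" helper nodes do).
body = {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "hello"}]}
headers = {"Authorization": "Bearer <your_api_key>", "Content-Type": "application/json"}

resp = requests.post("https://api.example.com/v1/chat/completions",
                     json=body, headers=headers, timeout=60)
resp.raise_for_status()

# Capture a specific field from the response (what the "extract from response" nodes do).
data = resp.json()
answer = data["choices"][0]["message"]["content"]
print(answer)
```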
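
For item 11, unloading a model from VRAM mid-workflow generally comes down to dropping the Python references and asking PyTorch to release cached memory; a hedged sketch, assuming a PyTorch-backed transformers model rather than the node's actual code.

```python
import gc
import torch

def clear_model(model):
    """Release a loaded LLM so its VRAM can be reused by later nodes.

    The caller must also drop its own reference (e.g. `model = None`)
    for the memory to actually be freed.
    """
    if model is not None:
        model.to("cpu")   # move weights off the GPU first
    del model
    gc.collect()          # drop dangling Python references
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached blocks to the CUDA driver

# usage:
# clear_model(model)
# model = None
```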
