English / [简体中文](./README_CN.md)
-One-Click to get well-designed cross-platform ChatGPT web UI.
+One-Click to get a well-designed cross-platform ChatGPT web UI, with GPT3, GPT4 & Gemini Pro support.
-一键免费部署你的跨平台私人 ChatGPT 应用。
+一键免费部署你的跨平台私人 ChatGPT 应用, 支持 GPT3, GPT4 & Gemini Pro 模型。
[![Web][Web-image]][web-url]
[![Windows][Windows-image]][download-url]
[![MacOS][MacOS-image]][download-url]
[![Linux][Linux-image]][download-url]
-[Web App](https://chatgpt.nextweb.fun/) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [Discord](https://discord.gg/YCkeafCafC) / [Twitter](https://twitter.com/mortiest_ricky) / [Buy Me a Coffee](https://www.buymeacoffee.com/yidadaa)
+[Web App](https://app.nextchat.dev/) / [Desktop App](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [Discord](https://discord.gg/YCkeafCafC) / [Twitter](https://twitter.com/NextChatDev)
-[网页版](https://chatgpt.nextweb.fun/) / [客户端](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [反馈](https://github.com/Yidadaa/ChatGPT-Next-Web/issues) / [QQ 群](https://github.com/Yidadaa/ChatGPT-Next-Web/discussions/1724) / [打赏开发者](https://user-images.githubusercontent.com/16968934/227772541-5bcd52d8-61b7-488c-a203-0330d8006e2b.jpg)
+[网页版](https://app.nextchat.dev/) / [客户端](https://github.com/Yidadaa/ChatGPT-Next-Web/releases) / [反馈](https://github.com/Yidadaa/ChatGPT-Next-Web/issues)
[web-url]: https://chatgpt.nextweb.fun
[download-url]: https://github.com/Yidadaa/ChatGPT-Next-Web/releases
@@ -25,7 +25,9 @@ One-Click to get well-designed cross-platform ChatGPT web UI.
[MacOS-image]: https://img.shields.io/badge/-MacOS-black?logo=apple
[Linux-image]: https://img.shields.io/badge/-Linux-333?logo=ubuntu
-[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web)
+[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&env=GOOGLE_API_KEY&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web)
+
+[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/templates/ZBUEFA)
[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web)
@@ -37,8 +39,8 @@ One-Click to get well-designed cross-platform ChatGPT web UI.
- **Deploy for free with one-click** on Vercel in under 1 minute
- Compact client (~5MB) on Linux/Windows/MacOS, [download it now](https://github.com/Yidadaa/ChatGPT-Next-Web/releases)
-- Fully compatible with self-deployed llms, recommended for use with [RWKV-Runner](https://github.com/josStorer/RWKV-Runner) or [LocalAI](https://github.com/go-skynet/LocalAI)
-- Privacy first, all data stored locally in the browser
+- Fully compatible with self-deployed LLMs, recommended for use with [RWKV-Runner](https://github.com/josStorer/RWKV-Runner) or [LocalAI](https://github.com/go-skynet/LocalAI)
+- Privacy first, all data is stored locally in the browser
- Markdown support: LaTex, mermaid, code highlight, etc.
- Responsive design, dark mode and PWA
- Fast first screen loading speed (~100kb), support streaming response
@@ -59,10 +61,11 @@ One-Click to get well-designed cross-platform ChatGPT web UI.
## What's New
-- 🚀 v2.0 is released, now you can create prompt templates, turn your ideas into reality! Read this: [ChatGPT Prompt Engineering Tips: Zero, One and Few Shot Prompting](https://www.allabtai.com/prompt-engineering-tips-zero-one-and-few-shot-prompting/).
-- 🚀 v2.7 let's share conversations as image, or share to ShareGPT!
-- 🚀 v2.8 now we have a client that runs across all platforms!
+- 🚀 v2.10.1 now supports the Google Gemini Pro model.
- 🚀 v2.9.11 you can use azure endpoint now.
+- 🚀 v2.8 now we have a client that runs across all platforms!
+- 🚀 v2.7 lets you share conversations as images, or share them to ShareGPT!
+- 🚀 v2.0 is released, now you can create prompt templates and turn your ideas into reality! Read this: [ChatGPT Prompt Engineering Tips: Zero, One and Few Shot Prompting](https://www.allabtai.com/prompt-engineering-tips-zero-one-and-few-shot-prompting/).
## 主要功能
@@ -189,6 +192,14 @@ Azure Api Key.
Azure Api Version, find it at [Azure Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#chat-completions).
+### `GOOGLE_API_KEY` (optional)
+
+Google Gemini Pro Api Key.
+
+### `GOOGLE_URL` (optional)
+
+Google Gemini Pro Api Url.
+
### `HIDE_USER_API_KEY` (optional)
> Default: Empty
@@ -350,9 +361,11 @@ If you want to add a new translation, read this [document](./docs/translation.md
[@Licoy](https://github.com/Licoy)
[@shangmin2009](https://github.com/shangmin2009)
-### Contributor
+### Contributors
-[Contributors](https://github.com/Yidadaa/ChatGPT-Next-Web/graphs/contributors)
+<a href="https://github.com/Yidadaa/ChatGPT-Next-Web/graphs/contributors">
+  <img src="https://contrib.rocks/image?repo=Yidadaa/ChatGPT-Next-Web" />
+</a>
## LICENSE
diff --git a/README_CN.md b/README_CN.md
index 631054ed79f..b5cd0f1c51a 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -1,14 +1,16 @@
-<h1 align="center">ChatGPT Next Web</h1>
+<h1 align="center">NextChat</h1>
-一键免费部署你的私人 ChatGPT 网页应用。
+一键免费部署你的私人 ChatGPT 网页应用,支持 GPT3, GPT4 & Gemini Pro 模型。
-[演示 Demo](https://chat-gpt-next-web.vercel.app/) / [反馈 Issues](https://github.com/Yidadaa/ChatGPT-Next-Web/issues) / [加入 Discord](https://discord.gg/zrhvHCr79N) / [QQ 群](https://user-images.githubusercontent.com/16968934/228190818-7dd00845-e9b9-4363-97e5-44c507ac76da.jpeg) / [打赏开发者](https://user-images.githubusercontent.com/16968934/227772541-5bcd52d8-61b7-488c-a203-0330d8006e2b.jpg) / [Donate](#捐赠-donate-usdt)
+[演示 Demo](https://chat-gpt-next-web.vercel.app/) / [反馈 Issues](https://github.com/Yidadaa/ChatGPT-Next-Web/issues) / [加入 Discord](https://discord.gg/zrhvHCr79N)
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web)
+[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/templates/ZBUEFA)
+
[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/Yidadaa/ChatGPT-Next-Web)
![主界面](./docs/images/cover.png)
@@ -19,7 +21,7 @@
1. 准备好你的 [OpenAI API Key](https://platform.openai.com/account/api-keys);
2. 点击右侧按钮开始部署:
- [![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web),直接使用 Github 账号登录即可,记得在环境变量页填入 API Key 和[页面访问密码](#配置页面访问密码) CODE;
+ [![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FYidadaa%2FChatGPT-Next-Web&env=OPENAI_API_KEY&env=CODE&env=GOOGLE_API_KEY&project-name=chatgpt-next-web&repository-name=ChatGPT-Next-Web),直接使用 Github 账号登录即可,记得在环境变量页填入 API Key 和[页面访问密码](#配置页面访问密码) CODE;
3. 部署完毕后,即可开始使用;
4. (可选)[绑定自定义域名](https://vercel.com/docs/concepts/projects/domains/add-a-domain):Vercel 分配的域名 DNS 在某些区域被污染了,绑定自定义域名即可直连。
@@ -104,6 +106,14 @@ Azure 密钥。
Azure Api 版本,你可以在这里找到:[Azure 文档](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#chat-completions)。
+### `GOOGLE_API_KEY` (可选)
+
+Google Gemini Pro 密钥。
+
+### `GOOGLE_URL` (可选)
+
+Google Gemini Pro Api Url。
+
### `HIDE_USER_API_KEY` (可选)
如果你不想让用户自行填入 API Key,将此环境变量设置为 1 即可。
diff --git a/app/api/auth.ts b/app/api/auth.ts
index b41e34e059b..16c8034eb55 100644
--- a/app/api/auth.ts
+++ b/app/api/auth.ts
@@ -1,7 +1,7 @@
import { NextRequest } from "next/server";
import { getServerSideConfig } from "../config/server";
import md5 from "spark-md5";
-import { ACCESS_CODE_PREFIX } from "../constant";
+import { ACCESS_CODE_PREFIX, ModelProvider } from "../constant";
function getIP(req: NextRequest) {
let ip = req.ip ?? req.headers.get("x-real-ip");
@@ -16,15 +16,15 @@ function getIP(req: NextRequest) {
function parseApiKey(bearToken: string) {
const token = bearToken.trim().replaceAll("Bearer ", "").trim();
- const isOpenAiKey = !token.startsWith(ACCESS_CODE_PREFIX);
+ const isApiKey = !token.startsWith(ACCESS_CODE_PREFIX);
return {
- accessCode: isOpenAiKey ? "" : token.slice(ACCESS_CODE_PREFIX.length),
- apiKey: isOpenAiKey ? token : "",
+ accessCode: isApiKey ? "" : token.slice(ACCESS_CODE_PREFIX.length),
+ apiKey: isApiKey ? token : "",
};
}
-export function auth(req: NextRequest) {
+export function auth(req: NextRequest, modelProvider: ModelProvider) {
const authToken = req.headers.get("Authorization") ?? "";
// check if it is openai api key or user token
@@ -49,22 +49,23 @@ export function auth(req: NextRequest) {
if (serverConfig.hideUserApiKey && !!apiKey) {
return {
error: true,
- msg: "you are not allowed to access openai with your own api key",
+ msg: "you are not allowed to access with your own api key",
};
}
// if user does not provide an api key, inject system api key
if (!apiKey) {
- const serverApiKey = serverConfig.isAzure
- ? serverConfig.azureApiKey
- : serverConfig.apiKey;
+ const serverConfig = getServerSideConfig();
- if (serverApiKey) {
+ const systemApiKey =
+ modelProvider === ModelProvider.GeminiPro
+ ? serverConfig.googleApiKey
+ : serverConfig.isAzure
+ ? serverConfig.azureApiKey
+ : serverConfig.apiKey;
+ if (systemApiKey) {
console.log("[Auth] use system api key");
- req.headers.set(
- "Authorization",
- `${serverConfig.isAzure ? "" : "Bearer "}${serverApiKey}`,
- );
+ req.headers.set("Authorization", `Bearer ${systemApiKey}`);
} else {
console.log("[Auth] admin did not provide an api key");
}
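The provider-aware key injection added to `app/api/auth.ts` above boils down to one selection rule. A minimal sketch of that rule in isolation — the enum values and config shape below are simplified assumptions mirroring the diff, not the repository's exact definitions:

```typescript
// Simplified stand-ins for the repo's ModelProvider enum and server config.
enum ModelProvider {
  GPT = "GPT",
  GeminiPro = "GeminiPro",
}

interface ServerConfigLike {
  isAzure: boolean;
  apiKey: string; // plain OpenAI key
  azureApiKey: string;
  googleApiKey: string;
}

// Mirrors the nested ternary in auth(): Gemini requests get the Google key,
// otherwise the Azure key when Azure mode is on, else the OpenAI key.
function pickSystemApiKey(
  config: ServerConfigLike,
  provider: ModelProvider,
): string {
  return provider === ModelProvider.GeminiPro
    ? config.googleApiKey
    : config.isAzure
      ? config.azureApiKey
      : config.apiKey;
}
```

Note that with this change the header is always written as `Bearer ${systemApiKey}`; the Azure `api-key` formatting now happens downstream in `requestOpenai`.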
diff --git a/app/api/common.ts b/app/api/common.ts
index 6b0d619df1d..ca8406bb361 100644
--- a/app/api/common.ts
+++ b/app/api/common.ts
@@ -1,6 +1,6 @@
import { NextRequest, NextResponse } from "next/server";
import { getServerSideConfig } from "../config/server";
-import { DEFAULT_MODELS, OPENAI_BASE_URL } from "../constant";
+import { DEFAULT_MODELS, OPENAI_BASE_URL, GEMINI_BASE_URL } from "../constant";
import { collectModelTable } from "../utils/model";
import { makeAzurePath } from "../azure";
@@ -9,8 +9,21 @@ const serverConfig = getServerSideConfig();
export async function requestOpenai(req: NextRequest) {
const controller = new AbortController();
- const authValue = req.headers.get("Authorization") ?? "";
- const authHeaderName = serverConfig.isAzure ? "api-key" : "Authorization";
+ let authValue = "";
+ let authHeaderName = "";
+ if (serverConfig.isAzure) {
+ authValue =
+ req.headers
+ .get("Authorization")
+ ?.trim()
+ .replaceAll("Bearer ", "")
+ .trim() ?? "";
+
+ authHeaderName = "api-key";
+ } else {
+ authValue = req.headers.get("Authorization") ?? "";
+ authHeaderName = "Authorization";
+ }
let path = `${req.nextUrl.pathname}${req.nextUrl.search}`.replaceAll(
"/api/openai/",
@@ -109,6 +122,12 @@ export async function requestOpenai(req: NextRequest) {
// to disable nginx buffering
newHeaders.set("X-Accel-Buffering", "no");
+ // The latest version of the OpenAI API forces content-encoding to "br" for JSON responses.
+ // When streaming is disabled, we must remove the content-encoding header:
+ // Vercel re-compresses the response with gzip, so if the header is kept,
+ // the browser tries to decode the body as brotli and fails.
+ newHeaders.delete("content-encoding");
+
return new Response(res.body, {
status: res.status,
statusText: res.statusText,
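The header cleanup that `requestOpenai` performs inline can be sketched as a pure helper. This is an illustration, not a function in the repo (`sanitizeRelayHeaders` is a hypothetical name), and headers are modeled as a plain lowercase-keyed record for simplicity:

```typescript
// Strip headers that break a relayed response, as done inline above.
function sanitizeRelayHeaders(
  upstream: Record<string, string>,
): Record<string, string> {
  const headers: Record<string, string> = {};
  for (const [name, value] of Object.entries(upstream)) {
    headers[name.toLowerCase()] = value; // header names are case-insensitive
  }
  // prevent the browser from prompting for credentials
  delete headers["www-authenticate"];
  // the platform (e.g. Vercel) re-compresses with gzip, so a stale
  // "content-encoding: br" would make the browser mis-decode the body
  delete headers["content-encoding"];
  // disable nginx buffering so streamed responses flush immediately
  headers["x-accel-buffering"] = "no";
  return headers;
}
```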
diff --git a/app/api/cors/[...path]/route.ts b/app/api/cors/[...path]/route.ts
index 0217b12b08f..1f70d663082 100644
--- a/app/api/cors/[...path]/route.ts
+++ b/app/api/cors/[...path]/route.ts
@@ -40,4 +40,4 @@ export const POST = handle;
export const GET = handle;
export const OPTIONS = handle;
-export const runtime = "nodejs";
+export const runtime = "edge";
diff --git a/app/api/google/[...path]/route.ts b/app/api/google/[...path]/route.ts
new file mode 100644
index 00000000000..ebd19289129
--- /dev/null
+++ b/app/api/google/[...path]/route.ts
@@ -0,0 +1,116 @@
+import { NextRequest, NextResponse } from "next/server";
+import { auth } from "../../auth";
+import { getServerSideConfig } from "@/app/config/server";
+import { GEMINI_BASE_URL, Google, ModelProvider } from "@/app/constant";
+
+async function handle(
+ req: NextRequest,
+ { params }: { params: { path: string[] } },
+) {
+ console.log("[Google Route] params ", params);
+
+ if (req.method === "OPTIONS") {
+ return NextResponse.json({ body: "OK" }, { status: 200 });
+ }
+
+ const controller = new AbortController();
+
+ const serverConfig = getServerSideConfig();
+
+ let baseUrl = serverConfig.googleUrl || GEMINI_BASE_URL;
+
+ if (!baseUrl.startsWith("http")) {
+ baseUrl = `https://${baseUrl}`;
+ }
+
+ if (baseUrl.endsWith("/")) {
+ baseUrl = baseUrl.slice(0, -1);
+ }
+
+ let path = `${req.nextUrl.pathname}`.replaceAll("/api/google/", "");
+
+ console.log("[Proxy] ", path);
+ console.log("[Base Url]", baseUrl);
+
+ const timeoutId = setTimeout(
+ () => {
+ controller.abort();
+ },
+ 10 * 60 * 1000,
+ );
+
+ const authResult = auth(req, ModelProvider.GeminiPro);
+ if (authResult.error) {
+ return NextResponse.json(authResult, {
+ status: 401,
+ });
+ }
+
+ const bearToken = req.headers.get("Authorization") ?? "";
+ const token = bearToken.trim().replaceAll("Bearer ", "").trim();
+
+ const key = token ? token : serverConfig.googleApiKey;
+
+ if (!key) {
+ return NextResponse.json(
+ {
+ error: true,
+ message: `missing GOOGLE_API_KEY in server env vars`,
+ },
+ {
+ status: 401,
+ },
+ );
+ }
+
+ const fetchUrl = `${baseUrl}/${path}?key=${key}`;
+ const fetchOptions: RequestInit = {
+ headers: {
+ "Content-Type": "application/json",
+ "Cache-Control": "no-store",
+ },
+ method: req.method,
+ body: req.body,
+ // to fix #2485: https://stackoverflow.com/questions/55920957/cloudflare-worker-typeerror-one-time-use-body
+ redirect: "manual",
+ // @ts-ignore
+ duplex: "half",
+ signal: controller.signal,
+ };
+
+ try {
+ const res = await fetch(fetchUrl, fetchOptions);
+ // to prevent browser prompt for credentials
+ const newHeaders = new Headers(res.headers);
+ newHeaders.delete("www-authenticate");
+ // to disable nginx buffering
+ newHeaders.set("X-Accel-Buffering", "no");
+
+ return new Response(res.body, {
+ status: res.status,
+ statusText: res.statusText,
+ headers: newHeaders,
+ });
+ } finally {
+ clearTimeout(timeoutId);
+ }
+}
+
+export const GET = handle;
+export const POST = handle;
+
+export const runtime = "edge";
+export const preferredRegion = [
+ "bom1",
+ "cle1",
+ "cpt1",
+ "gru1",
+ "hnd1",
+ "iad1",
+ "icn1",
+ "kix1",
+ "pdx1",
+ "sfo1",
+ "sin1",
+ "syd1",
+];
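The base-URL handling at the top of the new Google route can be isolated as a pure function. A sketch under the assumption that the fallback host is passed in explicitly rather than read from `GEMINI_BASE_URL`:

```typescript
// Mirrors the baseUrl normalization in the Google route handler above.
function normalizeBaseUrl(configured: string, fallback: string): string {
  let baseUrl = configured || fallback;
  if (!baseUrl.startsWith("http")) {
    baseUrl = `https://${baseUrl}`; // assume https when no scheme is given
  }
  if (baseUrl.endsWith("/")) {
    baseUrl = baseUrl.slice(0, -1); // drop trailing slash before joining paths
  }
  return baseUrl;
}
```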
diff --git a/app/api/openai/[...path]/route.ts b/app/api/openai/[...path]/route.ts
index 6095d2270d2..28b7a02422d 100644
--- a/app/api/openai/[...path]/route.ts
+++ b/app/api/openai/[...path]/route.ts
@@ -1,6 +1,6 @@
import { type OpenAIListModelResponse } from "@/app/client/platforms/openai";
import { getServerSideConfig } from "@/app/config/server";
-import { OpenaiPath } from "@/app/constant";
+import { ModelProvider, OpenaiPath } from "@/app/constant";
import { prettyObject } from "@/app/utils/format";
import { NextRequest, NextResponse } from "next/server";
import { auth } from "../../auth";
@@ -45,7 +45,7 @@ async function handle(
);
}
- const authResult = auth(req);
+ const authResult = auth(req, ModelProvider.GPT);
if (authResult.error) {
// return NextResponse.json(authResult, {
// status: 401,
diff --git a/app/client/api.ts b/app/client/api.ts
index eedd2c9ab48..4b39fbfaed2 100644
--- a/app/client/api.ts
+++ b/app/client/api.ts
@@ -1,8 +1,13 @@
import { getClientConfig } from "../config/client";
-import { ACCESS_CODE_PREFIX, Azure, ServiceProvider } from "../constant";
-import { ChatMessage, ModelType, useAccessStore } from "../store";
+import {
+ ACCESS_CODE_PREFIX,
+ Azure,
+ ModelProvider,
+ ServiceProvider,
+} from "../constant";
+import { ChatMessage, ModelType, useAccessStore, useChatStore } from "../store";
import { ChatGPTApi } from "./platforms/openai";
-
+import { GeminiProApi } from "./platforms/google";
export const ROLES = ["system", "user", "assistant"] as const;
export type MessageRole = (typeof ROLES)[number];
@@ -41,6 +46,13 @@ export interface LLMUsage {
export interface LLMModel {
name: string;
available: boolean;
+ provider: LLMModelProvider;
+}
+
+export interface LLMModelProvider {
+ id: string;
+ providerName: string;
+ providerType: string;
}
export abstract class LLMApi {
@@ -73,7 +85,11 @@ interface ChatProvider {
export class ClientApi {
public llm: LLMApi;
- constructor() {
+ constructor(provider: ModelProvider = ModelProvider.GPT) {
+ if (provider === ModelProvider.GeminiPro) {
+ this.llm = new GeminiProApi();
+ return;
+ }
this.llm = new ChatGPTApi();
}
@@ -93,7 +109,7 @@ export class ClientApi {
{
from: "human",
value:
- "Share from [ChatGPT Next Web]: https://github.com/Yidadaa/ChatGPT-Next-Web",
+ "Share from [NextChat]: https://github.com/Yidadaa/ChatGPT-Next-Web",
},
]);
// 敬告二开开发者们,为了开源大模型的发展,请不要修改上述消息,此消息用于后续数据清洗使用
@@ -123,32 +139,39 @@ export class ClientApi {
}
}
-export const api = new ClientApi();
-
export function getHeaders() {
const accessStore = useAccessStore.getState();
const headers: Record<string, string> = {
"Content-Type": "application/json",
"x-requested-with": "XMLHttpRequest",
+ Accept: "application/json",
};
-
+ const modelConfig = useChatStore.getState().currentSession().mask.modelConfig;
+ const isGoogle = modelConfig.model.startsWith("gemini");
const isAzure = accessStore.provider === ServiceProvider.Azure;
const authHeader = isAzure ? "api-key" : "Authorization";
- const apiKey = isAzure ? accessStore.azureApiKey : accessStore.openaiApiKey;
-
+ const apiKey = isGoogle
+ ? accessStore.googleApiKey
+ : isAzure
+ ? accessStore.azureApiKey
+ : accessStore.openaiApiKey;
+ const clientConfig = getClientConfig();
const makeBearer = (s: string) => `${isAzure ? "" : "Bearer "}${s.trim()}`;
const validString = (x: string) => x && x.length > 0;
- // use user's api key first
- if (validString(apiKey)) {
- headers[authHeader] = makeBearer(apiKey);
- } else if (
- accessStore.enabledAccessControl() &&
- validString(accessStore.accessCode)
- ) {
- headers[authHeader] = makeBearer(
- ACCESS_CODE_PREFIX + accessStore.accessCode,
- );
+ // when using the google api in the app, do not set the auth header
+ if (!(isGoogle && clientConfig?.isApp)) {
+ // use user's api key first
+ if (validString(apiKey)) {
+ headers[authHeader] = makeBearer(apiKey);
+ } else if (
+ accessStore.enabledAccessControl() &&
+ validString(accessStore.accessCode)
+ ) {
+ headers[authHeader] = makeBearer(
+ ACCESS_CODE_PREFIX + accessStore.accessCode,
+ );
+ }
}
return headers;
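The branching added to `getHeaders` reduces to: pick the key by provider, then format it for the right header. A simplified sketch — the store shape below is an assumption; the real values come from `useAccessStore` and the session's model config:

```typescript
interface AccessLike {
  provider: "OpenAI" | "Azure";
  openaiApiKey: string;
  azureApiKey: string;
  googleApiKey: string;
}

// Returns the auth header name/value pair, or null when no key is set.
function pickClientAuth(
  access: AccessLike,
  model: string,
): { header: string; value: string } | null {
  const isGoogle = model.startsWith("gemini");
  const isAzure = access.provider === "Azure";
  const header = isAzure ? "api-key" : "Authorization";
  const apiKey = isGoogle
    ? access.googleApiKey
    : isAzure
      ? access.azureApiKey
      : access.openaiApiKey;
  if (!apiKey) return null;
  // Azure sends the bare key in `api-key`; everything else uses a Bearer token.
  return { header, value: `${isAzure ? "" : "Bearer "}${apiKey.trim()}` };
}
```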
diff --git a/app/client/platforms/google.ts b/app/client/platforms/google.ts
new file mode 100644
index 00000000000..6832400ca58
--- /dev/null
+++ b/app/client/platforms/google.ts
@@ -0,0 +1,231 @@
+import { Google, REQUEST_TIMEOUT_MS } from "@/app/constant";
+import { ChatOptions, getHeaders, LLMApi, LLMModel, LLMUsage } from "../api";
+import { useAccessStore, useAppConfig, useChatStore } from "@/app/store";
+import { getClientConfig } from "@/app/config/client";
+import { DEFAULT_API_HOST } from "@/app/constant";
+export class GeminiProApi implements LLMApi {
+ extractMessage(res: any) {
+ console.log("[Response] gemini-pro response: ", res);
+
+ return (
+ res?.candidates?.at(0)?.content?.parts.at(0)?.text ||
+ res?.error?.message ||
+ ""
+ );
+ }
+ async chat(options: ChatOptions): Promise<void> {
+ // const apiClient = this;
+ const messages = options.messages.map((v) => ({
+ role: v.role.replace("assistant", "model").replace("system", "user"),
+ parts: [{ text: v.content }],
+ }));
+
+ // google requires that role in neighboring messages must not be the same
+ for (let i = 0; i < messages.length - 1; ) {
+ // Check if current and next item both have the role "model"
+ if (messages[i].role === messages[i + 1].role) {
+ // Concatenate the 'parts' of the current and next item
+ messages[i].parts = messages[i].parts.concat(messages[i + 1].parts);
+ // Remove the next item
+ messages.splice(i + 1, 1);
+ } else {
+ // Move to the next item
+ i++;
+ }
+ }
+
+ const modelConfig = {
+ ...useAppConfig.getState().modelConfig,
+ ...useChatStore.getState().currentSession().mask.modelConfig,
+ ...{
+ model: options.config.model,
+ },
+ };
+ const requestPayload = {
+ contents: messages,
+ generationConfig: {
+ // stopSequences: [
+ // "Title"
+ // ],
+ temperature: modelConfig.temperature,
+ maxOutputTokens: modelConfig.max_tokens,
+ topP: modelConfig.top_p,
+ // "topK": modelConfig.top_k,
+ },
+ safetySettings: [
+ {
+ category: "HARM_CATEGORY_HARASSMENT",
+ threshold: "BLOCK_ONLY_HIGH",
+ },
+ {
+ category: "HARM_CATEGORY_HATE_SPEECH",
+ threshold: "BLOCK_ONLY_HIGH",
+ },
+ {
+ category: "HARM_CATEGORY_SEXUALLY_EXPLICIT",
+ threshold: "BLOCK_ONLY_HIGH",
+ },
+ {
+ category: "HARM_CATEGORY_DANGEROUS_CONTENT",
+ threshold: "BLOCK_ONLY_HIGH",
+ },
+ ],
+ };
+
+ const accessStore = useAccessStore.getState();
+ let baseUrl = accessStore.googleUrl;
+ const isApp = !!getClientConfig()?.isApp;
+
+ let shouldStream = !!options.config.stream;
+ const controller = new AbortController();
+ options.onController?.(controller);
+ try {
+ let chatPath = this.path(Google.ChatPath);
+
+ // let baseUrl = accessStore.googleUrl;
+
+ if (!baseUrl) {
+ baseUrl = isApp
+ ? DEFAULT_API_HOST + "/api/proxy/google/" + Google.ChatPath
+ : chatPath;
+ }
+
+ if (isApp) {
+ baseUrl += `?key=${accessStore.googleApiKey}`;
+ }
+ const chatPayload = {
+ method: "POST",
+ body: JSON.stringify(requestPayload),
+ signal: controller.signal,
+ headers: getHeaders(),
+ };
+
+ // make a fetch request
+ const requestTimeoutId = setTimeout(
+ () => controller.abort(),
+ REQUEST_TIMEOUT_MS,
+ );
+ if (shouldStream) {
+ let responseText = "";
+ let remainText = "";
+ let finished = false;
+
+ let existingTexts: string[] = [];
+ const finish = () => {
+ finished = true;
+ options.onFinish(existingTexts.join(""));
+ };
+
+ // animate response to make it look smooth
+ function animateResponseText() {
+ if (finished || controller.signal.aborted) {
+ responseText += remainText;
+ finish();
+ return;
+ }
+
+ if (remainText.length > 0) {
+ const fetchCount = Math.max(1, Math.round(remainText.length / 60));
+ const fetchText = remainText.slice(0, fetchCount);
+ responseText += fetchText;
+ remainText = remainText.slice(fetchCount);
+ options.onUpdate?.(responseText, fetchText);
+ }
+
+ requestAnimationFrame(animateResponseText);
+ }
+
+ // start animation
+ animateResponseText();
+
+ fetch(
+ baseUrl.replace("generateContent", "streamGenerateContent"),
+ chatPayload,
+ )
+ .then((response) => {
+ const reader = response?.body?.getReader();
+ const decoder = new TextDecoder();
+ let partialData = "";
+
+ return reader?.read().then(function processText({
+ done,
+ value,
+ }): Promise<any> {
+ if (done) {
+ console.log("Stream complete");
+ // options.onFinish(responseText + remainText);
+ finished = true;
+ return Promise.resolve();
+ }
+
+ partialData += decoder.decode(value, { stream: true });
+
+ try {
+ let data = JSON.parse(ensureProperEnding(partialData));
+
+ const textArray = data.reduce(
+ (acc: string[], item: { candidates: any[] }) => {
+ const texts = item.candidates.map((candidate) =>
+ candidate.content.parts
+ .map((part: { text: any }) => part.text)
+ .join(""),
+ );
+ return acc.concat(texts);
+ },
+ [],
+ );
+
+ if (textArray.length > existingTexts.length) {
+ const deltaArray = textArray.slice(existingTexts.length);
+ existingTexts = textArray;
+ remainText += deltaArray.join("");
+ }
+ } catch (error) {
+ // console.log("[Response Animation] error: ", error,partialData);
+ // skip error message when parsing json
+ }
+
+ return reader.read().then(processText);
+ });
+ })
+ .catch((error) => {
+ console.error("Error:", error);
+ });
+ } else {
+ const res = await fetch(baseUrl, chatPayload);
+ clearTimeout(requestTimeoutId);
+ const resJson = await res.json();
+ if (resJson?.promptFeedback?.blockReason) {
+ // being blocked
+ options.onError?.(
+ new Error(
+ "Message is being blocked for reason: " +
+ resJson.promptFeedback.blockReason,
+ ),
+ );
+ }
+ const message = this.extractMessage(resJson);
+ options.onFinish(message);
+ }
+ } catch (e) {
+ console.log("[Request] failed to make a chat request", e);
+ options.onError?.(e as Error);
+ }
+ }
+ usage(): Promise<LLMUsage> {
+ throw new Error("Method not implemented.");
+ }
+ async models(): Promise<LLMModel[]> {
+ return [];
+ }
+ path(path: string): string {
+ return "/api/google/" + path;
+ }
+}
+
+function ensureProperEnding(str: string) {
+ if (str.startsWith("[") && !str.endsWith("]")) {
+ return str + "]";
+ }
+ return str;
+}
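The role-merging loop in `GeminiProApi.chat` is the subtle part of the new client: Gemini rejects two neighboring messages with the same role, so adjacent same-role messages are collapsed. A standalone sketch of that loop, with a simplified message type and operating on copies of the input:

```typescript
type GeminiMessage = { role: string; parts: { text: string }[] };

// Merge adjacent messages that share a role by concatenating their parts,
// mirroring the splice loop in GeminiProApi.chat above.
function mergeAdjacentRoles(messages: GeminiMessage[]): GeminiMessage[] {
  const merged = messages.map((m) => ({ ...m, parts: [...m.parts] }));
  for (let i = 0; i < merged.length - 1; ) {
    if (merged[i].role === merged[i + 1].role) {
      // fold the next message's parts into the current one, then drop it
      merged[i].parts = merged[i].parts.concat(merged[i + 1].parts);
      merged.splice(i + 1, 1);
    } else {
      i++; // roles differ, move on
    }
  }
  return merged;
}
```

Because `system` is first mapped to `user`, a leading system prompt followed by a user message also collapses into a single `user` turn.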
diff --git a/app/client/platforms/openai.ts b/app/client/platforms/openai.ts
index dedc7cebf84..ad458199d97 100644
--- a/app/client/platforms/openai.ts
+++ b/app/client/platforms/openai.ts
@@ -1,3 +1,4 @@
+"use client";
import {
ApiPath,
DEFAULT_API_HOST,
@@ -45,7 +46,9 @@ export class ChatGPTApi implements LLMApi {
if (baseUrl.length === 0) {
const isApp = !!getClientConfig()?.isApp;
- baseUrl = isApp ? DEFAULT_API_HOST : ApiPath.OpenAI;
+ baseUrl = isApp
+ ? DEFAULT_API_HOST + "/proxy" + ApiPath.OpenAI
+ : ApiPath.OpenAI;
}
if (baseUrl.endsWith("/")) {
@@ -59,6 +62,8 @@ export class ChatGPTApi implements LLMApi {
path = makeAzurePath(path, accessStore.azureApiVersion);
}
+ console.log("[Proxy Endpoint] ", baseUrl, path);
+
return [baseUrl, path].join("/");
}
@@ -323,6 +328,11 @@ export class ChatGPTApi implements LLMApi {
return chatModels.map((m) => ({
name: m.id,
available: true,
+ provider: {
+ id: "openai",
+ providerName: "OpenAI",
+ providerType: "openai",
+ },
}));
}
}
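The `provider` field added to each model entry follows the `LLMModel`/`LLMModelProvider` interfaces introduced in `app/client/api.ts`. A sketch of the mapping with simplified copies of those interfaces (`toLLMModels` is an illustrative name, not a repo function):

```typescript
interface LLMModelProvider {
  id: string;
  providerName: string;
  providerType: string;
}

interface LLMModel {
  name: string;
  available: boolean;
  provider: LLMModelProvider;
}

// Tag every OpenAI model id with the same provider metadata, as the
// updated models() method does above.
function toLLMModels(ids: string[]): LLMModel[] {
  return ids.map((id) => ({
    name: id,
    available: true,
    provider: { id: "openai", providerName: "OpenAI", providerType: "openai" },
  }));
}
```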
diff --git a/app/components/auth.tsx b/app/components/auth.tsx
index 7962d46bee4..57118349bac 100644
--- a/app/components/auth.tsx
+++ b/app/components/auth.tsx
@@ -64,6 +64,17 @@ export function AuthPage() {
);
}}
/>
+        <input
+          className={styles["auth-input"]}
+          type="password"
+          placeholder={Locale.Settings.Access.Google.ApiKey.Placeholder}
+          value={accessStore.googleApiKey}
+          onChange={(e) => {
+            accessStore.update(
+              (access) => (access.googleApiKey = e.currentTarget.value),
+            );
+          }}
+        />
       </>
     ) : null}
diff --git a/app/components/emoji.tsx b/app/components/emoji.tsx
index 03aac05f278..b2434930755 100644
--- a/app/components/emoji.tsx
+++ b/app/components/emoji.tsx
@@ -10,7 +10,10 @@ import BotIcon from "../icons/bot.svg";
import BlackBotIcon from "../icons/black-bot.svg";
export function getEmojiUrl(unified: string, style: EmojiStyle) {
- return `https://cdn.staticfile.org/emoji-datasource-apple/14.0.0/img/${style}/64/${unified}.png`;
+ // Whoever owns this Content Delivery Network (CDN), I am using your CDN to serve emojis.
+ // The old CDN broke, so I had to switch to this one.
+ // Author: https://github.com/H0llyW00dzZ
+ return `https://fastly.jsdelivr.net/npm/emoji-datasource-apple/img/${style}/64/${unified}.png`;
}
export function AvatarPicker(props: {
diff --git a/app/components/exporter.tsx b/app/components/exporter.tsx
index 1f38e8dddc7..add266bfa40 100644
--- a/app/components/exporter.tsx
+++ b/app/components/exporter.tsx
@@ -29,10 +29,11 @@ import NextImage from "next/image";
import { toBlob, toPng } from "html-to-image";
import { DEFAULT_MASK_AVATAR } from "../store/mask";
-import { api } from "../client/api";
+
import { prettyObject } from "../utils/format";
-import { EXPORT_MESSAGE_CLASS_NAME } from "../constant";
+import { EXPORT_MESSAGE_CLASS_NAME, ModelProvider } from "../constant";
import { getClientConfig } from "../config/client";
+import { ClientApi } from "../client/api";
const Markdown = dynamic(async () => (await import("./markdown")).Markdown, {
  loading: () => <LoadingIcon />,
@@ -301,10 +302,17 @@ export function PreviewActions(props: {
}) {
const [loading, setLoading] = useState(false);
const [shouldExport, setShouldExport] = useState(false);
-
+ const config = useAppConfig();
const onRenderMsgs = (msgs: ChatMessage[]) => {
setShouldExport(false);
+ var api: ClientApi;
+ if (config.modelConfig.model.startsWith("gemini")) {
+ api = new ClientApi(ModelProvider.GeminiPro);
+ } else {
+ api = new ClientApi(ModelProvider.GPT);
+ }
+
api
.share(msgs)
.then((res) => {
@@ -530,7 +538,7 @@ export function ImagePreviewer(props: {