Langchainstream not working #15

Open
scottklein7 opened this issue Oct 12, 2023 · 0 comments
Labels
enhancement New feature or request

Hello all! I am having an issue using the LangChainStream functionality while saving the chat history to my Supabase DB. The main problem is that the stream seems to get interrupted and never completes, and as a result the chat is not saved to the DB. The LangChain chat history does not seem to be working either. Any help would go a long way @jacoblee93.

```ts
import { getSession } from '@/app/supabase-server';
import { Database } from '@/lib/db_types';
import { templates } from '@/lib/template';
import { nanoid } from '@/lib/utils';
import { PineconeClient } from '@pinecone-database/pinecone';
import { createServerActionClient } from '@supabase/auth-helpers-nextjs';
import { LangChainStream, Message, StreamingTextResponse } from 'ai';
import { ConversationalRetrievalQAChain } from 'langchain/chains';
import { ChatOpenAI } from 'langchain/chat_models/openai';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { BufferMemory } from 'langchain/memory';
import { PineconeStore } from 'langchain/vectorstores/pinecone';
import { cookies } from 'next/headers';
import { redirect } from 'next/navigation';
import { NextResponse } from 'next/server';
import { Configuration, OpenAIApi } from 'openai-edge';

export const runtime = 'nodejs';

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY
});

const openai = new OpenAIApi(configuration);

const formatMessage = (message: Message) => {
  return `${message.role === 'user' ? 'Human' : 'Assistant'}: ${
    message.content
  }`;
};

export async function POST(req: Request) {
  const cookieStore = cookies();
  const supabase = createServerActionClient<Database>({
    cookies: () => cookieStore
  });
  const session = await getSession();
  const userId = session?.user.id;

  if (!userId) {
    return new Response('Unauthorized', {
      status: 401
    });
  }

  const json = await req.json();
  // const messages: Message[] = json.messages ?? [];
  const { messages } = json;
  const formattedPreviousMessages = messages.slice(0, -1).map(formatMessage);
  const question = messages[messages.length - 1].content;

  try {
    const sanitizedQuestion = question.trim().replaceAll('\n', ' ');
    const pinecone = new PineconeClient();
    await pinecone.init({
      environment: process.env.PINECONE_ENVIRONMENT ?? '',
      apiKey: process.env.PINECONE_API_KEY ?? ''
    });

    const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX_NAME!);

    const vectorStore = await PineconeStore.fromExistingIndex(
      new OpenAIEmbeddings(),
      { pineconeIndex }
    );

    const { stream, handlers } = LangChainStream({
      async onCompletion(completion) {
        const title = json.messages[0].content.substring(0, 100);
        const id = json.id ?? nanoid();
        const createdAt = Date.now();
        const path = `/chat/${id}`;
        const payload = {
          id,
          title,
          userId,
          createdAt,
          path,
          messages: [
            ...messages,
            {
              content: completion,
              role: 'assistant'
            }
          ]
        };

        await supabase.from('chats').upsert({ id, payload }).throwOnError();
      }
    });

    const streamingModel = new ChatOpenAI({
      modelName: 'gpt-4',
      streaming: true,
      verbose: true,
      temperature: 0
    });

    const nonStreamingModel = new ChatOpenAI({
      modelName: 'gpt-4',
      verbose: true,
      temperature: 0
    });

    const chain = ConversationalRetrievalQAChain.fromLLM(
      streamingModel,
      vectorStore.asRetriever(),
      {
        qaTemplate: templates.qaPrompt,
        questionGeneratorTemplate: templates.condensePrompt,
        memory: new BufferMemory({
          memoryKey: 'chat_history',
          inputKey: 'question', // The key for the input to the chain
          outputKey: 'text', // The key for the final conversational output of the chain
          returnMessages: true // If using with a chat model (e.g. gpt-3.5 or gpt-4)
        }),
        questionGeneratorChainOptions: {
          llm: nonStreamingModel
        }
      }
    );

    chain.call(
      {
        question: sanitizedQuestion,
        chat_history: formattedPreviousMessages.join('\n')
      },
      [handlers]
    );

    // Return the readable stream
    return new StreamingTextResponse(stream);
  } catch (error) {
    console.error('Internal server error ', error);
    return NextResponse.json('Error: Something went wrong. Try again!', {
      status: 500
    });
  }
}
```
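
For comparison, below is a stripped-down route that only wires LangChainStream to a streaming ChatOpenAI call, with no Pinecone retrieval, memory, or Supabase writes. This is just a sketch following the basic LangChainStream pattern from the ai package around this version (ai 2.x / langchain 0.0.x), not my exact setup; the onCompletion body is a placeholder where the Supabase upsert would go:

```ts
import { LangChainStream, Message, StreamingTextResponse } from 'ai';
import { ChatOpenAI } from 'langchain/chat_models/openai';
import { AIMessage, HumanMessage } from 'langchain/schema';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const { stream, handlers } = LangChainStream({
    // Placeholder for the Supabase upsert; runs once after the full completion has streamed.
    async onCompletion(completion) {
      console.log('completion length:', completion.length);
    }
  });

  const llm = new ChatOpenAI({ streaming: true });

  // Intentionally not awaited so the response can start streaming immediately;
  // the .catch surfaces errors that would otherwise become unhandled rejections.
  llm
    .call(
      (messages as Message[]).map((m) =>
        m.role === 'user' ? new HumanMessage(m.content) : new AIMessage(m.content)
      ),
      {},
      [handlers]
    )
    .catch(console.error);

  return new StreamingTextResponse(stream);
}
```

One thing worth noting: because `chain.call` in the route above is not awaited, any rejection from it happens after `POST` has already returned, so it never reaches the surrounding try/catch. Chaining `.catch(console.error)` onto that call, as in the sketch, should at least log whatever is interrupting the stream.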