Welcome to the LangSmith Cookbook, your practical guide to mastering LangSmith. While the standard documentation covers the basics, this repository digs into common patterns and real-world use cases so you can get more out of your LLM applications.
LangSmith helps you debug, evaluate, test, and continuously improve your LLM applications. The recipes here present real-world scenarios for you to adapt and extend.
Your Input Matters
Help us make the cookbook better! If there's a use-case we missed, or if you have insights to share, please raise a GitHub issue (feel free to tag Will) or contact the LangChain development team. Your expertise shapes this community.
Tracing allows for seamless debugging and improvement of your LLM applications. Here's how:
- Tracing without LangChain: learn to trace applications independent of LangChain using the Python SDK's @traceable decorator (see the first sketch after this list).
- REST API: get acquainted with the REST API's features for logging LLM and chat model runs, and understand nested runs (see the request sketch after this list). The run logging spec can be found in the LangSmith SDK repository.
- Customizing Run Names: improve UI clarity by assigning bespoke names to LangSmith chain runs—includes examples for chains, lambda functions, and agents.
- Tracing Nested Calls within Tools: include all nested tool subcalls in a single trace by using run_manager.get_child() and passing the resulting callbacks to the calls made inside the tool (see the tool sketch after this list).
- Display Trace Links: add trace links to your app to speed up development. This is useful when prototyping your application in its unique UI, since it lets you quickly see its execution flow, add feedback to a run, or add the run to a dataset.
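For the tracing-without-LangChain recipe, here is a minimal sketch of the @traceable decorator. The function names are hypothetical, and it assumes LANGCHAIN_API_KEY and LANGCHAIN_TRACING_V2=true are set in the environment:

```python
from langsmith import traceable


@traceable(name="format_prompt")                  # logged as its own run
def format_prompt(subject: str) -> str:
    return f"Tell me a short fact about {subject}."


@traceable(run_type="chain", name="fact_pipeline")
def fact_pipeline(subject: str) -> str:
    prompt = format_prompt(subject)               # shows up as a nested child run
    # Call your LLM of choice here; a canned string keeps the sketch self-contained.
    return f"(model output for: {prompt})"


fact_pipeline("tracing")
```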
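For the REST API recipe, a hedged request sketch using Python's requests. The endpoint and field names below follow the public run logging pattern, but verify them against the run logging spec in the SDK repository before relying on them:

```python
import datetime
import os
import uuid

import requests

API_URL = "https://api.smith.langchain.com"
headers = {"x-api-key": os.environ["LANGCHAIN_API_KEY"]}
run_id = str(uuid.uuid4())

# Create the run when the LLM call starts.
requests.post(
    f"{API_URL}/runs",
    json={
        "id": run_id,
        "name": "MyLLMRun",
        "run_type": "llm",
        "inputs": {"prompt": "Hello, world"},
        "start_time": datetime.datetime.utcnow().isoformat(),
    },
    headers=headers,
)

# Patch in the outputs when the call finishes.
requests.patch(
    f"{API_URL}/runs/{run_id}",
    json={
        "outputs": {"text": "Hi there!"},
        "end_time": datetime.datetime.utcnow().isoformat(),
    },
    headers=headers,
)
```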
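And for nested calls within tools, a sketch of the run_manager.get_child() pattern inside a custom LangChain tool (the SummarizeTool itself is a hypothetical example):

```python
from typing import Optional

from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.chains import LLMChain
from langchain.tools import BaseTool


class SummarizeTool(BaseTool):
    """Hypothetical tool that calls an LLMChain internally."""

    name: str = "summarize"
    description: str = "Summarize the provided text."
    chain: LLMChain

    def _run(self, text: str, run_manager: Optional[CallbackManagerForToolRun] = None) -> str:
        # Forward the child callbacks so the inner chain run is nested under
        # this tool's run in the trace instead of starting a separate trace.
        callbacks = run_manager.get_child() if run_manager else None
        return self.chain.run(text, callbacks=callbacks)
```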
Efficiently manage your LLM components with the LangChain Hub. For dedicated documentation, please see the hub docs.
- RetrievalQA Chain: use prompts from the hub in an example RAG pipeline.
- Prompt Versioning: ensure deployment stability by pinning specific prompt versions instead of 'latest' (see the sketch after this list).
- Runnable PromptTemplate: streamline the process of saving prompts to the hub from the playground and integrating them into runnable chains.
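As a quick sketch of pulling a prompt from the hub and pinning a version: rlm/rag-prompt is a public hub prompt, the commit hash is a placeholder, and the langchainhub package is assumed to be installed.

```python
from langchain import hub
from langchain.chat_models import ChatOpenAI

# Pull the latest version of a public prompt...
prompt = hub.pull("rlm/rag-prompt")
# ...or pin a specific commit for deployment stability (placeholder hash):
# prompt = hub.pull("rlm/rag-prompt:<commit-hash>")

chain = prompt | ChatOpenAI(temperature=0)
chain.invoke(
    {
        "context": "LangSmith lets you trace and evaluate LLM applications.",
        "question": "What does LangSmith do?",
    }
)
```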
Test and benchmark your LLM systems using methods in these evaluation recipes:
- Q&A System Correctness: evaluate your retrieval-augmented Q&A pipeline end-to-end on a dataset. Iterate, improve, and keep testing.
- Evaluating Q&A Systems with Dynamic Data: use evaluators that dereference labels at evaluation time to handle data that changes over time.
- RAG Evaluation using Fixed Sources: evaluate the response component of a RAG (retrieval-augmented generation) pipeline by providing retrieved documents in the dataset.
- Evaluating an Agent's intermediate steps: compare the sequence of actions taken by an agent to an expected trajectory to grade effective tool use.
- Evaluating a Conversational Chat Bot: Evaluate chatbots within multi-turn conversations by treating each data point as an individual dialogue turn. This guide shows how to set up a multi-turn conversation dataset and evaluate a simple chat bot on it.
- Evaluating an Extraction Chain: measure the similarity between the extracted structured content and structured labels using LangChain's JSON evaluators.
- Comparison Evals: use labeled preference scoring to contrast system versions and determine which outputs are preferred.
You can also incorporate LangSmith into your existing testing framework:
- LangSmith in Pytest: benchmark your chain in pytest and assert aggregate metrics meet the quality bar.
- Unit Testing with Pytest: write individual unit tests and log assertions as feedback (see the pytest sketch after this list).
- Evaluating Existing Runs: add AI-assisted feedback and evaluation metrics to existing run traces.
- Naming Test Projects: manually name your test projects with run_on_dataset(..., project_name='my-project-name'), as in the evaluation sketch after this list.
- How to download feedback and examples from a test project: export the predictions, evaluation results, and other information to programmatically add to your reports.
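For the dataset-driven recipes above (end-to-end Q&A correctness and test-project naming), here is a hedged sketch using the langchain.smith test runner. The dataset name is hypothetical, and the built-in "qa" evaluator grades answers against the dataset's reference labels:

```python
from langchain.chat_models import ChatOpenAI
from langchain.smith import RunEvalConfig, run_on_dataset
from langsmith import Client

client = Client()

# Configure an LLM-based correctness evaluator.
eval_config = RunEvalConfig(evaluators=["qa"])

run_on_dataset(
    client=client,
    dataset_name="my-qa-dataset",                        # hypothetical dataset
    llm_or_chain_factory=lambda: ChatOpenAI(temperature=0),
    evaluation=eval_config,
    project_name="my-project-name",                      # names the test project
)
```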
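And a sketch of logging a pytest assertion as feedback. The summarize function and the feedback key are hypothetical, and langsmith_extra is used to pin a known run id so the feedback attaches to that exact trace:

```python
import uuid

from langsmith import Client, traceable

client = Client()


@traceable(name="summarize")                     # hypothetical pipeline under test
def summarize(text: str) -> str:
    return text[:50]                             # stand-in for a real LLM call


def test_summary_is_short():
    run_id = uuid.uuid4()
    result = summarize(
        "a very long block of input text ...",
        langsmith_extra={"run_id": run_id},
    )
    passed = len(result) <= 50
    # Record the assertion outcome on the trace itself.
    client.create_feedback(run_id, key="summary_is_short", score=int(passed))
    assert passed
```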
Incorporate LangSmith into your TS/JS testing and evaluation workflow:
- Evaluating JS Chains in Python: evaluate JS chains using custom Python evaluators, adapting methods from the "Evaluating Existing Runs" guide.
- Logging Assertions as Feedback: convert CI test assertions into LangSmith feedback, enhancing trace visibility with minimal modifications.
Harness user feedback, "AI-assisted" feedback, and other signals to improve, monitor, and personalize your applications. Feedback can be user-generated or "automated" using functions or even calls to an LLM:
- Streamlit Chat App: a minimal chat app that captures user feedback and shares traces of the chat application.
  - The vanilla_chain.py contains an LLMChain that powers the chat application.
  - The expression_chain.py contains an equivalent chat chain defined exclusively with LangChain expressions.
- Next.js Chat App: explore a simple TypeScript chat app demonstrating tracing and feedback capture.
- Building an Algorithmic Feedback Pipeline: automate feedback metrics for advanced monitoring and performance tuning. This lets you evaluate production runs as a batch job (see the sketch after this list).
- Real-time Automated Feedback: automatically generate feedback metrics for every run using an async callback. This lets you evaluate production runs in real-time.
- Real-time RAG Chat Bot Evaluation: This Streamlit walkthrough showcases an advanced application of the concepts from the Real-time Automated Feedback tutorial. It demonstrates how to automatically check for hallucinations in your RAG chat bot responses against the retrieved documents. For more information on RAG, check out the LangChain docs.
- LangChain Agents with LangSmith: instrument a LangChain web-search agent with tracing and human feedback.
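For the algorithmic feedback recipe, a minimal batch-job sketch. The project name and the heuristic metric are hypothetical stand-ins for the model-based evaluators the walkthrough uses:

```python
from langsmith import Client

client = Client()

# Fetch recent chain runs from a (hypothetical) production project.
runs = client.list_runs(project_name="my-chat-app", run_type="chain")

for run in runs:
    # Toy heuristic: flag empty responses. The cookbook attaches richer,
    # AI-assisted metrics here instead.
    answer = (run.outputs or {}).get("output", "")
    client.create_feedback(run.id, key="non_empty_response", score=int(bool(answer)))
```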
Fine-tune an LLM on collected run data using these recipes:
- OpenAI Fine-Tuning: list LLM runs and convert them to OpenAI's fine-tuning format efficiently (see the sketch after this list).
- Lilac Dataset Curation: further curate your LangSmith datasets using Lilac to detect near-duplicates, check for PII, and more.
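A hedged sketch of the fine-tuning export: the project name is hypothetical, and the exact shape of each run's inputs and outputs depends on how the runs were logged, so inspect a few runs in the UI before trusting specific keys.

```python
import json

from langsmith import Client

client = Client()

# Pull LLM runs from a (hypothetical) project to use as training examples.
runs = client.list_runs(project_name="my-chat-app", run_type="llm")

with open("finetune.jsonl", "w") as f:
    for run in runs:
        # Naive extraction: serialize whatever was logged. A real pipeline maps
        # the logged payloads onto proper chat messages before writing the file.
        record = {
            "messages": [
                {"role": "user", "content": json.dumps(run.inputs)},
                {"role": "assistant", "content": json.dumps(run.outputs or {})},
            ]
        }
        f.write(json.dumps(record) + "\n")
```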
Turn your trace data into actionable insights:
- Exporting LLM Runs and Feedback: extract and interpret LangSmith LLM run data, making them ready for various analytical platforms (see the sketch below).
- Lilac: enrich datasets using the open-source analytics tool, Lilac, to better label and organize your data.
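As a sketch of the export step, here is one way to pull LLM runs into a pandas DataFrame; the project name is hypothetical.

```python
import pandas as pd
from langsmith import Client

client = Client()

runs = client.list_runs(project_name="my-chat-app", run_type="llm")

# Flatten the fields you care about; feedback can be joined in afterwards via
# client.list_feedback(run_ids=[...]).
df = pd.DataFrame(
    [
        {
            "run_id": str(run.id),
            "name": run.name,
            "inputs": run.inputs,
            "outputs": run.outputs,
            "start_time": run.start_time,
            "end_time": run.end_time,
        }
        for run in runs
    ]
)
print(df.head())
```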