
Commit

Merge pull request #13 from pinecone-io/spruce
Bump @pinecone-database to v2.0.0
austin-denoble authored Jan 16, 2024
2 parents 78edaea + 2ffca08 commit bc975f2
Showing 12 changed files with 156 additions and 105 deletions.
5 changes: 3 additions & 2 deletions .env.example
Original file line number Diff line number Diff line change
@@ -1,3 +1,4 @@
PINECONE_API_KEY=
PINECONE_ENVIRONMENT=
PINECONE_INDEX=semantic-search
PINECONE_INDEX="semantic-search"
PINECONE_CLOUD="aws"
PINECONE_REGION="us-west-2"
6 changes: 2 additions & 4 deletions .github/actions/integrationTests/action.yml
@@ -4,9 +4,6 @@ inputs:
pinecone_api_key:
description: "API key"
required: true
pinecone_environment:
description: "Environment/region to target"
required: true
runs:
using: "composite"
steps:
@@ -15,8 +12,9 @@ runs:
env:
CI: true
PINECONE_API_KEY: ${{ inputs.pinecone_api_key }}
PINECONE_ENVIRONMENT: ${{ inputs.pinecone_environment }}
PINECONE_INDEX: "semantic-search-testing"
PINECONE_CLOUD: "aws"
PINECONE_REGION: "us-west-2"
run: npm run test
- name: "Report Coverage"
if: always() # Also generate the report if tests are failing
3 changes: 0 additions & 3 deletions .github/workflows/regularCheck.yml
@@ -8,8 +8,6 @@ on:
secrets:
PINECONE_API_KEY:
required: true
PINECONE_ENVIRONMENT:
required: true
jobs:
run-integration-tests:
name: Integration tests
@@ -28,4 +26,3 @@ jobs:
uses: ./.github/actions/integrationTests
with:
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
3 changes: 0 additions & 3 deletions .github/workflows/validate.yml
@@ -7,8 +7,6 @@ on:
secrets:
PINECONE_API_KEY:
required: true
PINECONE_ENVIRONMENT:
required: true

jobs:
basic-hygiene:
@@ -49,4 +47,3 @@ jobs:
uses: ./.github/actions/integrationTests
with:
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
103 changes: 57 additions & 46 deletions README.md
@@ -5,6 +5,7 @@ In this walkthrough we will see how to use Pinecone for semantic search.
## Setup

Prerequisites:

- `Node.js` version >=18.0.0

Clone the repository and install the dependencies.
@@ -17,24 +18,27 @@ npm install

### Configuration

In order to run this example, you have to supply the Pinecone credentials needed to interact with the Pinecone API. You can find these credentials in the Pinecone web console. This project uses `dotenv` to easily load values from the `.env` file into the environment when executing.
In order to run this example, you have to supply the Pinecone credentials needed to interact with the Pinecone API. You can find these credentials in the Pinecone web console. This project uses `dotenv` to easily load values from the `.env` file into the environment when executing.

Copy the template file:

```sh
cp .env.example .env
```

And fill in your API key and environment details:
And fill in your API key and index name:

```sh
PINECONE_API_KEY=<your-api-key>
PINECONE_ENVIRONMENT=<your-environment>
PINECONE_INDEX=semantic-search
PINECONE_INDEX="semantic-search"
PINECONE_CLOUD="aws"
PINECONE_REGION="us-west-2"
```

`PINECONE_INDEX` is the name of the index where this demo will store and query embeddings. You can change `PINECONE_INDEX` to any name you like, but make sure the name will not collide with any indexes you are already using.

`PINECONE_CLOUD` and `PINECONE_REGION` define where the index should be deployed. Currently, this is the only available cloud and region combination (`aws` and `us-west-2`), so it's recommended to leave them at these defaults.
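The loader and query scripts read these values through small helpers in `utils/util.js` (`getEnv` and `validateEnvironmentVariables`, imported later in this walkthrough). A minimal sketch of what such helpers might look like — the repo's actual implementation may differ:

```typescript
// Hypothetical sketch of the env helpers in utils/util.js.
// Reads a variable and fails fast with a clear message when it is missing.
function getEnv(key: string): string {
  const value = process.env[key];
  if (!value) {
    throw new Error(`Missing required environment variable: ${key}`);
  }
  return value;
}

// Validate everything up front so failures happen before any API calls.
function validateEnvironmentVariables(): void {
  ['PINECONE_API_KEY', 'PINECONE_INDEX', 'PINECONE_CLOUD', 'PINECONE_REGION']
    .forEach(getEnv);
}
```

Failing fast like this means a missing `.env` entry surfaces as a clear error before any Pinecone request is made.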

### Building

To build the project please run the command:
@@ -52,8 +56,8 @@ There are two main components to this application: the data loader (load.ts) and
The data loading process starts with the CSV file. This file contains the articles that will be indexed and made searchable. To load this data, the project uses the `papaparse` library. The loadCSVFile function in `csvLoader.ts` reads the file and uses `papaparse` to parse the CSV data into JavaScript objects. The `dynamicTyping` option is set to true to automatically convert the data to the appropriate types. After this step, you will have an array of objects, where each object represents an article​.

```typescript
import fs from "fs/promises";
import Papa from "papaparse";
import fs from 'fs/promises';
import Papa from 'papaparse';

async function loadCSVFile(
filePath: string
@@ -63,7 +67,7 @@ async function loadCSVFile(
const csvAbsolutePath = await fs.realpath(filePath);

// Create a readable stream from the CSV file
const data = await fs.readFile(csvAbsolutePath, "utf8");
const data = await fs.readFile(csvAbsolutePath, 'utf8');

// Parse the CSV file
return await Papa.parse(data, {
Expand All @@ -85,19 +89,19 @@ export default loadCSVFile;
The text embedding operation is performed in the `Embedder` class. This class uses a pipeline from the [`@xenova/transformers`](https://github.com/xenova/transformers.js) library to generate embeddings for the input text. We use the [`sentence-transformers/all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model to generate the embeddings. The class provides methods to embed a single string or an array of strings in batches, which will come in handy a bit later.

```typescript
import type { PineconeRecord } from "@pinecone-database/pinecone";
import type { TextMetadata } from "./types.js";
import { Pipeline } from "@xenova/transformers";
import { v4 as uuidv4 } from "uuid";
import { sliceIntoChunks } from "./utils/util.js";
import type { PineconeRecord } from '@pinecone-database/pinecone';
import type { TextMetadata } from './types.js';
import { Pipeline } from '@xenova/transformers';
import { v4 as uuidv4 } from 'uuid';
import { sliceIntoChunks } from './utils/util.js';

class Embedder {
private pipe: Pipeline | null = null;

// Initialize the pipeline
async init() {
const { pipeline } = await import("@xenova/transformers");
this.pipe = await pipeline("embeddings", "Xenova/all-MiniLM-L6-v2");
const { pipeline } = await import('@xenova/transformers');
this.pipe = await pipeline('embeddings', 'Xenova/all-MiniLM-L6-v2');
}

// Embed a single string
@@ -132,23 +136,22 @@ class Embedder {
const embedder = new Embedder();

export { embedder };

```
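The batching in `embedBatch` relies on the `sliceIntoChunks` helper imported from `utils/util.js` above. A plausible minimal implementation — an assumption for illustration, not necessarily the repo's exact code:

```typescript
// Hypothetical sketch of the chunking helper in utils/util.js.
// Splits an array into consecutive chunks of at most `chunkSize` items,
// which is how a large document list becomes model-sized batches.
function sliceIntoChunks<T>(arr: T[], chunkSize: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < arr.length; i += chunkSize) {
    chunks.push(arr.slice(i, i + chunkSize));
  }
  return chunks;
}
```

For example, `sliceIntoChunks(documents, 100)` turns a 250-document array into three batches of 100, 100, and 50.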

## Loading embeddings into Pinecone

Now that we have a way to load data and create embeddings, let's put the two together and save the embeddings in Pinecone. In the following section, we get the path of the file we need to process from the command line. We load the CSV file, create the Pinecone index, and then start the embedding process. The embedding process is done in batches of 100. Once we have a batch of embeddings, we insert them into the index.

```typescript
import cliProgress from "cli-progress";
import { config } from "dotenv";
import loadCSVFile from "./csvLoader.js";
import cliProgress from 'cli-progress';
import { config } from 'dotenv';
import loadCSVFile from './csvLoader.js';

import { embedder } from "./embeddings.js";
import { embedder } from './embeddings.js';
import { Pinecone } from '@pinecone-database/pinecone';
import { getEnv, validateEnvironmentVariables } from "./utils/util.js";
import { getEnv, validateEnvironmentVariables } from './utils/util.js';

import type { TextMetadata } from "./types.js";
import type { TextMetadata } from './types.js';

// Load environment variables from .env
config();
@@ -162,7 +165,7 @@ let counter = 0;

export const load = async (csvPath: string, column: string) => {
validateEnvironmentVariables();

// Get a Pinecone instance
const pinecone = new Pinecone();

@@ -178,16 +181,25 @@ export const load = async (csvPath: string, column: string) => {
// Extract the selected column from the CSV file
const documents = data.map((row) => row[column] as string);

// Get index name
const indexName = getEnv("PINECONE_INDEX");

// Check whether the index already exists. If it doesn't, create
// a Pinecone index with a dimension of 384 to hold the outputs
// of our embeddings model.
const indexList = await pinecone.listIndexes();
if (indexList.indexOf({ name: indexName }) === -1) {
await pinecone.createIndex({ name: indexName, dimension: 384, waitUntilReady: true })
}
// Get index name, cloud, and region
const indexName = getEnv('PINECONE_INDEX');
const indexCloud = getEnv('PINECONE_CLOUD');
const indexRegion = getEnv('PINECONE_REGION');

// Create a Pinecone index with a dimension of 384 to hold the outputs
// of our embeddings model. Use suppressConflicts in case the index already exists.
await pinecone.createIndex({
name: indexName,
dimension: 384,
spec: {
serverless: {
region: indexRegion,
cloud: indexCloud,
},
},
waitUntilReady: true,
suppressConflicts: true,
});

// Select the target Pinecone index. Passing the TextMetadata generic type parameter
// allows typescript to know what shape to expect when interacting with a record's
@@ -202,7 +214,7 @@ export const load = async (csvPath: string, column: string) => {
await embedder.embedBatch(documents, 100, async (embeddings) => {
counter += embeddings.length;
// Whenever the batch embedding process returns a batch of embeddings, insert them into the index
await index.upsert(embeddings)
await index.upsert(embeddings);
progressBar.update(counter);
});

@@ -246,11 +258,11 @@ Index is ready.
Now that our index is populated we can begin making queries. We are performing a semantic search for similar questions, so we should embed and search with another question.
```typescript
import { config } from "dotenv";
import { embedder } from "./embeddings.js";
import { Pinecone } from "@pinecone-database/pinecone";
import { getEnv, validateEnvironmentVariables } from "./utils/util.js";
import type { TextMetadata } from "./types.js";
import { config } from 'dotenv';
import { embedder } from './embeddings.js';
import { Pinecone } from '@pinecone-database/pinecone';
import { getEnv, validateEnvironmentVariables } from './utils/util.js';
import type { TextMetadata } from './types.js';

config();

@@ -259,9 +271,9 @@ export const query = async (query: string, topK: number) => {
const pinecone = new Pinecone();

// Target the index
const indexName = getEnv("PINECONE_INDEX");
const indexName = getEnv('PINECONE_INDEX');
const index = pinecone.index<TextMetadata>(indexName);

await embedder.init();

// Embed the query
@@ -272,7 +284,7 @@ export const query = async (query: string, topK: number) => {
vector: queryEmbedding.values,
topK,
includeMetadata: true,
includeValues: false
includeValues: false,
});

// Print the results
@@ -285,7 +297,6 @@ export const query = async (query: string, topK: number) => {
}))
);
};

```
The querying process is very similar to the indexing process. We create a Pinecone client, select the index we want to query, and then embed the query. We then use the `query` method to search the index for the most similar embeddings. The `query` method returns a list of matches. Each match contains the metadata associated with the embedding, as well as the score of the match.
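The `TextMetadata` generic keeps those matches typed when reading `metadata` off each result. A plausible shape for the type and the mapping into the printed `{ text, score }` objects — an assumption for illustration; see `types.ts` in the repo for the actual definition:

```typescript
// Hypothetical shape of the metadata stored alongside each vector.
type TextMetadata = {
  text: string;
};

// The fields of a query match that the demo prints.
type Match = { score?: number; metadata?: TextMetadata };

// Map raw matches into the { text, score } objects shown below.
function formatMatches(matches: Match[]): { text?: string; score?: number }[] {
  return matches.map((m) => ({ text: m.metadata?.text, score: m.score }));
}
```

Typing the metadata this way means `match.metadata?.text` is checked by the compiler instead of being an untyped record lookup.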
@@ -301,11 +312,11 @@ The result for this will be something like:
```js
[
{
text: "Which country in the world has the largest population?",
text: 'Which country in the world has the largest population?',
score: 0.79473877,
},
{
text: "Which cities are the most densely populated?",
text: 'Which cities are the most densely populated?',
score: 0.706895828,
},
];
@@ -322,11 +333,11 @@ And the result:
```js
[
{
text: "Which cities are the most densely populated?",
text: 'Which cities are the most densely populated?',
score: 0.66688776,
},
{
text: "What are the most we dangerous cities in the world?",
text: 'What are the most we dangerous cities in the world?',
score: 0.556335568,
},
];