Merge pull request #1 from supabase-community/with-supa
With supa
kiwicopple authored Jul 26, 2024
2 parents 7f5081d + 8c79012 commit a087676
Showing 195 changed files with 27,205 additions and 4,497 deletions.
7 changes: 6 additions & 1 deletion .github/workflows/tests.yml
@@ -7,12 +7,17 @@ on:
pull_request:
branches:
- main

types:
- opened
- reopened
- synchronize
- ready_for_review
env:
TESTING_REMOTELY: true

jobs:
build:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest

strategy:
356 changes: 10 additions & 346 deletions README.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/snippets/2_installation.snippet
@@ -7,6 +7,6 @@ npm i @xenova/transformers
Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using [ES Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules), you can import the library with:
```html
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@3.0.0-alpha.0';
</script>
```
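For context, a complete page built around that import might look like the following sketch. The `sentiment-analysis` task shown here is illustrative only (this example app uses Florence-2, not sentiment analysis), and the first call downloads model weights over the network:

```html
<!doctype html>
<html lang="en">
  <body>
    <p id="out">Loading…</p>
    <script type="module">
      import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@3.0.0-alpha.0';

      // Create a pipeline; the model is fetched and cached on first use.
      const classify = await pipeline('sentiment-analysis');
      const [result] = await classify('Transformers.js now runs in the browser!');
      document.querySelector('#out').textContent =
        `${result.label} (${result.score.toFixed(2)})`;
    </script>
  </body>
</html>
```

Top-level `await` is valid inside `<script type="module">`, so no wrapper function is needed.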
3 changes: 1 addition & 2 deletions docs/snippets/4_custom-usage.snippet
@@ -1,7 +1,6 @@


By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2/dist/), which should work out-of-the-box. You can customize this as follows:

By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@3.0.0-alpha.0/dist/), which should work out-of-the-box. You can customize this as follows:

### Settings

15 changes: 15 additions & 0 deletions docs/snippets/6_supported-models.snippet

Large diffs are not rendered by default.

17 changes: 15 additions & 2 deletions docs/source/_toctree.yml
@@ -48,6 +48,21 @@
title: ONNX
title: Backends
isExpanded: false
- sections:
- local: api/generation/parameters
title: Parameters
- local: api/generation/configuration_utils
title: Configuration
- local: api/generation/logits_process
title: Logits Processors
- local: api/generation/logits_sampler
title: Logits Samplers
- local: api/generation/stopping_criteria
title: Stopping Criteria
- local: api/generation/streamers
title: Streamers
title: Generation
isExpanded: false
- sections:
- local: api/utils/core
title: Core
@@ -61,8 +76,6 @@
title: Tensor
- local: api/utils/maths
title: Maths
- local: api/utils/generation
title: Generation
- local: api/utils/data-structures
title: Data Structures
title: Utilities
21 changes: 21 additions & 0 deletions examples/florence2-webgpu/.eslintrc.cjs
@@ -0,0 +1,21 @@
module.exports = {
root: true,
env: { browser: true, es2020: true },
extends: [
'eslint:recommended',
'plugin:react/recommended',
'plugin:react/jsx-runtime',
'plugin:react-hooks/recommended',
],
ignorePatterns: ['dist', '.eslintrc.cjs'],
parserOptions: { ecmaVersion: 'latest', sourceType: 'module' },
settings: { react: { version: '18.2' } },
plugins: ['react-refresh'],
rules: {
'react/jsx-no-target-blank': 'off',
'react-refresh/only-export-components': [
'warn',
{ allowConstantExport: true },
],
},
}
24 changes: 24 additions & 0 deletions examples/florence2-webgpu/.gitignore
@@ -0,0 +1,24 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

node_modules
dist
dist-ssr
*.local

# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
8 changes: 8 additions & 0 deletions examples/florence2-webgpu/README.md
@@ -0,0 +1,8 @@
# React + Vite

This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.

Currently, two official plugins are available:

- [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react/README.md) uses [Babel](https://babeljs.io/) for Fast Refresh
- [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react-swc) uses [SWC](https://swc.rs/) for Fast Refresh
12 changes: 12 additions & 0 deletions examples/florence2-webgpu/index.html
@@ -0,0 +1,12 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Florence2 WebGPU</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.jsx"></script>
</body>
</html>
30 changes: 30 additions & 0 deletions examples/florence2-webgpu/package.json
@@ -0,0 +1,30 @@
{
"name": "florence2-webgpu",
"private": true,
"version": "0.0.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build",
"lint": "eslint . --ext js,jsx --report-unused-disable-directives --max-warnings 0",
"preview": "vite preview"
},
"dependencies": {
"@xenova/transformers": "github:xenova/transformers.js#v3",
"react": "^18.3.1",
"react-dom": "^18.3.1"
},
"devDependencies": {
"@types/react": "^18.3.3",
"@types/react-dom": "^18.3.0",
"@vitejs/plugin-react": "^4.3.1",
"autoprefixer": "^10.4.19",
"eslint": "^8.57.0",
"eslint-plugin-react": "^7.34.2",
"eslint-plugin-react-hooks": "^4.6.2",
"eslint-plugin-react-refresh": "^0.4.7",
"postcss": "^8.4.38",
"tailwindcss": "^3.4.4",
"vite": "^5.3.1"
}
}
6 changes: 6 additions & 0 deletions examples/florence2-webgpu/postcss.config.js
@@ -0,0 +1,6 @@
export default {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
}
218 changes: 218 additions & 0 deletions examples/florence2-webgpu/src/App.jsx
@@ -0,0 +1,218 @@
import { useEffect, useState, useRef, useCallback } from 'react';

import Progress from './components/Progress';
import ImageInput from './components/ImageInput';

const IS_WEBGPU_AVAILABLE = !!navigator.gpu;

function App() {

// Create a reference to the worker object.
const worker = useRef(null);

// Model loading and progress
const [status, setStatus] = useState(null);
const [loadingMessage, setLoadingMessage] = useState('');
const [progressItems, setProgressItems] = useState([]);

const [task, setTask] = useState('<CAPTION>');
const [text, setText] = useState('');
const [image, setImage] = useState(null);
const [result, setResult] = useState(null);
const [time, setTime] = useState(null);

// We use the `useEffect` hook to set up the worker as soon as the `App` component is mounted.
useEffect(() => {
if (!worker.current) {
// Create the worker if it does not yet exist.
worker.current = new Worker(new URL('./worker.js', import.meta.url), {
type: 'module'
});
}

// Create a callback function for messages from the worker thread.
const onMessageReceived = (e) => {
switch (e.data.status) {
case 'loading':
// Model loading has started: update the status and message.
setStatus('loading');
setLoadingMessage(e.data.data);
break;

case 'initiate':
// Model file download initiated: add a new progress item to the list.
setProgressItems(prev => [...prev, e.data]);
break;

case 'progress':
// Model file progress: update one of the progress items.
setProgressItems(
prev => prev.map(item => {
if (item.file === e.data.file) {
return { ...item, ...e.data }
}
return item;
})
);
break;

case 'done':
// Model file loaded: remove the progress item from the list.
setProgressItems(
prev => prev.filter(item => item.file !== e.data.file)
);
break;

case 'ready':
// Pipeline ready: the worker is ready to accept messages.
setStatus('ready');
break;

case 'complete':
setResult(e.data.result);
setTime(e.data.time);
setStatus('ready');
break;
}
};

// Attach the callback function as an event listener.
worker.current.addEventListener('message', onMessageReceived);

// Define a cleanup function for when the component is unmounted.
return () => {
worker.current.removeEventListener('message', onMessageReceived);
};
}, []);
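The `initiate`/`progress`/`done` branches above amount to a small reducer over the list of in-flight files. A minimal sketch of that bookkeeping as a pure function (the `applyMessage` name is illustrative, not part of the app):

```javascript
// Pure sketch of the progress-item bookkeeping in the effect above.
// Messages mirror the worker protocol: initiate / progress / done.
function applyMessage(items, msg) {
  switch (msg.status) {
    case 'initiate':
      // A new model file started downloading: track it.
      return [...items, msg];
    case 'progress':
      // Merge updated progress fields into the matching file's entry.
      return items.map((item) =>
        item.file === msg.file ? { ...item, ...msg } : item,
      );
    case 'done':
      // File finished downloading: stop tracking it.
      return items.filter((item) => item.file !== msg.file);
    default:
      return items;
  }
}

let items = [];
items = applyMessage(items, { status: 'initiate', file: 'model.onnx', progress: 0 });
items = applyMessage(items, { status: 'progress', file: 'model.onnx', progress: 50 });
console.log(items); // [{ status: 'progress', file: 'model.onnx', progress: 50 }]
items = applyMessage(items, { status: 'done', file: 'model.onnx' });
console.log(items.length); // 0
```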

const handleClick = useCallback(() => {
if (status === null) {
setStatus('loading');
worker.current.postMessage({ type: 'load' });
} else {
setStatus('running');
worker.current.postMessage({
type: 'run', data: { text, url: image, task }
});
}
}, [status, task, image, text]);
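The `load` and `run` messages posted above assume a worker that replies with the statuses handled in the effect. A hedged outline of that contract, with `createWorkerHandler`, `loadPipeline`, and `post` as hypothetical names (the real `worker.js` would use `self.postMessage` and the Transformers.js pipeline APIs):

```javascript
// Sketch of a worker-side handler matching the App's message protocol.
// Dependencies are injected so the control flow is easy to inspect.
function createWorkerHandler({ loadPipeline, post }) {
  let pipe = null;
  return async function onMessage({ type, data }) {
    if (type === 'load') {
      post({ status: 'loading', data: 'Loading model...' });
      pipe = await loadPipeline(); // downloads files, emitting initiate/progress/done
      post({ status: 'ready' });
    } else if (type === 'run') {
      const start = performance.now();
      const result = await pipe(data); // run the selected vision task
      post({ status: 'complete', result, time: performance.now() - start });
    }
  };
}
```

Keeping the heavy lifting in a worker this way leaves the main thread free, which is why the UI above only exchanges messages instead of calling the model directly.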

return (
IS_WEBGPU_AVAILABLE
? (<div className="flex flex-col h-screen mx-auto items justify-end text-gray-800 dark:text-gray-200 bg-white dark:bg-gray-900 max-w-[630px]">

{status === 'loading' && (
<div className="flex justify-center items-center fixed w-screen h-screen bg-black z-10 bg-opacity-[92%] top-0 left-0">
<div className="w-[500px]">
<p className="text-center mb-1 text-white text-md">{loadingMessage}</p>
{progressItems.map(({ file, progress, total }, i) => (
<Progress key={i} text={file} percentage={progress} total={total} />
))}
</div>
</div>
)}
<div className="h-full overflow-auto scrollbar-thin flex justify-center items-center flex-col relative">
<div className="flex flex-col items-center mb-1 text-center">
<h1 className="text-6xl font-bold mb-2">Florence2 WebGPU</h1>
<h2 className="text-xl font-semibold">Powerful vision foundation model running locally in your browser.</h2>
</div>

<div className="w-full min-h-[220px] flex flex-col justify-center items-center p-2">

<p className="mb-2">
You are about to download <a href="https://huggingface.co/onnx-community/Florence-2-base-ft" target="_blank" rel="noreferrer" className="font-medium underline">Florence-2-base-ft</a>,
a 230 million parameter vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks like captioning, object detection, and segmentation.
Once loaded, the model (340&nbsp;MB) will be cached and reused when you revisit the page.<br />
<br />
Everything runs locally in your browser using <a href="https://huggingface.co/docs/transformers.js" target="_blank" rel="noreferrer" className="underline">🤗&nbsp;Transformers.js</a> and ONNX Runtime Web,
meaning no API calls are made to a server for inference. You can even disconnect from the internet after the model has loaded!
</p>

<div className="flex w-full justify-around m-4">
<div className="flex flex-col gap-2 w-full max-w-[48%]">
<div className="flex flex-col">
<span className="text-sm mb-0.5">Task</span>
<select
className="border rounded-md p-1"
value={task}
onChange={(e) => setTask(e.target.value)}
>
<option value="<CAPTION>">Caption</option>
<option value="<DETAILED_CAPTION>">Detailed Caption</option>
<option value="<MORE_DETAILED_CAPTION>">More Detailed Caption</option>
<option value="<OCR>">OCR</option>
<option value="<OCR_WITH_REGION>">OCR with Region</option>
<option value="<OD>">Object Detection</option>
<option value="<DENSE_REGION_CAPTION>">Dense Region Caption</option>
<option value="<CAPTION_TO_PHRASE_GROUNDING>">Caption to Phrase Grounding</option>
{/* <option value="<REFERRING_EXPRESSION_SEGMENTATION>">Referring Expression Segmentation</option> */}
{/* <option value="<REGION_TO_SEGMENTATION>">Region to Segmentation</option> */}
{/* <option value="<OPEN_VOCABULARY_DETECTION>">Open Vocabulary Detection</option> */}
{/* <option value="<REGION_TO_CATEGORY>">Region to Category</option> */}
{/* <option value="<REGION_TO_DESCRIPTION>">Region to Description</option> */}
{/* <option value="<REGION_TO_OCR>">Region to OCR</option> */}
{/* <option value="<REGION_PROPOSAL>">Region Proposal</option> */}
</select>
</div>
<div className="flex flex-col">
<span className="text-sm mb-0.5">Input Image</span>
<ImageInput className="flex flex-col items-center border border-gray-300 rounded-md cursor-pointer h-[250px]" onImageChange={(file, result) => {
worker.current.postMessage({ type: 'reset' }); // Reset image cache
setResult(null);
setImage(result);
}} />
</div>
</div>
<div className="flex flex-col gap-2 w-full max-w-[48%] justify-end">
{
task === '<CAPTION_TO_PHRASE_GROUNDING>'
&& (<div className="flex flex-col">
<span className="text-sm mb-0.5">Text input</span>
<input className="border rounded-md px-2 py-[3.5px]"
value={text}
onChange={(e) => setText(e.target.value)}
/>
</div>)
}

<div className="flex flex-col relative">
<span className="text-sm mb-0.5">Output</span>
<div className="flex justify-center border border-gray-300 rounded-md h-[250px]">
{result?.[task] && (<>
{
typeof result[task] === 'string'
? <p className="pt-4 px-4 text-center max-h-[205px] overflow-y-auto">{result[task]}</p>
: <pre className="w-full h-full p-2 overflow-y-auto">
{JSON.stringify(result[task], null, 2)}
</pre>
}
{
time && <p className="text-sm text-gray-500 absolute bottom-2 bg-white p-1 rounded border">Execution time: {time.toFixed(2)} ms</p>
}
</>)
}
</div>

</div>
</div>
</div>

<button
className="border px-4 py-2 rounded-lg bg-blue-400 text-white hover:bg-blue-500 disabled:bg-blue-100 disabled:cursor-not-allowed select-none"
onClick={handleClick}
disabled={status === 'running' || (status !== null && image === null)}
>
{status === null ? 'Load model' :
status === 'running'
? 'Running...'
: 'Run model'
}
</button>
</div>
</div>

</div>)
: (<div className="fixed w-screen h-screen bg-black z-10 bg-opacity-[92%] text-white text-2xl font-semibold flex justify-center items-center text-center">WebGPU is not supported<br />by this browser :&#40;</div>)
)
}

export default App