Datasette running in your browser using WebAssembly and Pyodide
Live tool: https://lite.datasette.io/
More about this project:
- Datasette Lite: a server-side Python web application running in a browser
- Joining CSV files in your browser using Datasette Lite
- Plugin support for Datasette Lite
Datasette Lite runs the full server-side Datasette Python web application directly in your browser, using the Pyodide build of Python compiled to WebAssembly.
When you launch the demo, your browser downloads and starts executing a full Python interpreter, installs the datasette package (and its dependencies), downloads one or more SQLite database files, and starts the application running in a browser window (actually in a Web Worker attached to that window).
Datasette Lite uses the most recent stable Datasette release from PyPI.
To use the most recent preview version (alpha or beta), add `?ref=pre` to the URL. To use a specific release, pass the version number as `?ref=`.
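These options follow the same query string pattern as the rest of Datasette Lite. For example (`1.0a13` here is a hypothetical version number - substitute any Datasette release published to PyPI):

```
https://lite.datasette.io/?ref=pre
https://lite.datasette.io/?ref=1.0a13
```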
You can load data from a CSV file hosted online (provided it is served with an `access-control-allow-origin: *` header) by passing its URL as a `?csv=` parameter - or by clicking the "Load CSV by URL" button and pasting in a URL.
This example loads a CSV of college fight songs from the fivethirtyeight/data GitHub repository:
You can pass `?csv=` multiple times to load more than one CSV file. You can then execute SQL joins to combine that data.
This example loads the latest Covid-19 per-county data from the NY Times, the 2019 county populations data from the US Census, joins them on FIPS code and runs a query that calculates cases per million across that data.
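The general pattern is to repeat the parameter once per file. A sketch (the placeholder values stand in for any two CORS-enabled CSV URLs):

```
https://lite.datasette.io/?csv=URL-TO-FIRST-CSV&csv=URL-TO-SECOND-CSV
```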
If you have data in a JSON file that looks something like this, you can load it directly into Datasette Lite using the `?json=URL` parameter:
[
{
"id": 1,
"name": "Item 1"
},
{
"id": 2,
"name": "Item 2"
}
]
This also works with JSON documents where one of the keys is a list of objects, such as this one:
{
"rows": [
{
"id": 1,
"name": "Item 1"
},
{
"id": 2,
"name": "Item 2"
}
]
}
In this case it will search for the first key that contains a list of objects.
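That key-scanning behaviour can be sketched in Python - a simplified illustration of the heuristic described above, not Datasette Lite's actual implementation:

```python
import json

def find_rows(document):
    """Return the first value that is a list of objects (dicts).

    Simplified sketch of the heuristic described above - not the
    code Datasette Lite actually uses.
    """
    if isinstance(document, list):
        return document
    for value in document.values():
        if isinstance(value, list) and all(isinstance(item, dict) for item in value):
            return value
    raise ValueError("no key containing a list of objects was found")

doc = json.loads('{"count": 2, "rows": [{"id": 1, "name": "Item 1"}]}')
print(find_rows(doc))  # [{'id': 1, 'name': 'Item 1'}]
```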
If a document is a JSON object where every value is a JSON object, like this:
{
"anchor-positioning": {
"spec": "https://drafts.csswg.org/css-anchor-position-1/#anchoring"
},
"array-at": {
"spec": "https://tc39.es/ecma262/multipage/indexed-collections.html#sec-array.prototype.at"
},
"array-flat": {
"caniuse": "array-flat",
"spec": "https://tc39.es/ecma262/multipage/indexed-collections.html#sec-array.prototype.flat"
}
}
Each of those objects will be loaded as a separate row, with a `_key` primary key column containing the object key.
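In Python terms the transformation looks roughly like this - an illustrative sketch, not the actual Datasette Lite code:

```python
def objects_to_rows(document):
    # Each value becomes a row; the object key lands in a _key column
    return [{"_key": key, **value} for key, value in document.items()]

doc = {
    "array-at": {"spec": "https://tc39.es/ecma262/multipage/indexed-collections.html#sec-array.prototype.at"},
    "array-flat": {"caniuse": "array-flat", "spec": "https://tc39.es/ecma262/multipage/indexed-collections.html#sec-array.prototype.flat"},
}
for row in objects_to_rows(doc):
    print(row)
```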
This example loads scraped data from this repo.
Newline-delimited JSON works too - for example a file that looks like this:
{"id": 1, "name": "Item 1"}
{"id": 2, "name": "Item 2"}
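Parsing that format amounts to one `json.loads` call per line - a minimal Python sketch:

```python
import json

ndjson = '{"id": 1, "name": "Item 1"}\n{"id": 2, "name": "Item 2"}'

# One JSON object per non-empty line becomes one row
rows = [json.loads(line) for line in ndjson.splitlines() if line.strip()]
print(rows)  # [{'id': 1, 'name': 'Item 1'}, {'id': 2, 'name': 'Item 2'}]
```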
You can use this tool to open any SQLite database file that is hosted online and served with an `access-control-allow-origin: *` CORS header. Files served by GitHub Pages automatically include this header, as do database files that have been published online using `datasette publish`.
Copy the URL to the `.db` file and either paste it into the "Load SQLite DB by URL" prompt, or construct a URL like the following:
https://lite.datasette.io/?url=https://latest.datasette.io/fixtures.db
Some examples to try out:
- Global Power Plants - 33,000 power plants around the world
- United States members of congress - the example database from the Learn SQL with Datasette tutorial
To load a Parquet file, pass its URL to `?parquet=`.
For example this file:
https://github.com/Teradata/kylo/blob/master/samples/sample-data/parquet/userdata1.parquet
can be loaded by passing its URL as the `?parquet=` parameter.
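Following the pattern above, the full URL looks like this (Datasette Lite converts the GitHub page URL to its "raw" equivalent automatically):

```
https://lite.datasette.io/?parquet=https://github.com/Teradata/kylo/blob/master/samples/sample-data/parquet/userdata1.parquet
```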
You can also initialize the `data.db` database by passing the URL to a SQL file. The easiest way to do this is to create a GitHub Gist.
This example SQL file creates a table and populates it with three records. It's hosted in this Gist.
https://gist.githubusercontent.com/simonw/ac4e19920b4b360752ac0f3ce85ba238/raw/90d31cf93bf1d97bb496de78559798f849b17e85/demo.sql
You can paste this URL into the "Load SQL by URL" prompt, or pass it as the `?sql=` parameter.
SQL will be executed before any CSV imports, so you can use initial SQL to create a table and then use `?csv=` to import data into it.
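A sketch of how the two parameters combine (both placeholder values stand in for URLs to files you host yourself):

```
https://lite.datasette.io/?sql=URL-TO-SETUP-SQL&csv=URL-TO-DATA-CSV
```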
To skip loading the default databases and just provide `/_memory` - useful for demonstrating plugins - pass `?memory=1`, for example:
https://lite.datasette.io/?memory=1
Datasette supports metadata, as a `metadata.json` or `metadata.yml` file.
You can load a metadata file in either of these formats by passing its URL to the `?metadata=` query string option.
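As a minimal sketch, a metadata file can be as simple as this (hosted anywhere that serves open CORS headers, such as a Gist):

```json
{
  "title": "My example data",
  "description": "Data loaded into Datasette Lite"
}
```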
A tricky thing about using Datasette Lite is that the files you load via URL need to be hosted somewhere that serves open CORS headers.
Both regular GitHub and GitHub Gists do this by default. This makes them excellent options to host data files that you want to load into Datasette Lite.
You can paste in the "raw" URL to a file, but Datasette Lite also has a shortcut: if you paste in the URL to a page on GitHub or a Gist it will automatically convert it to the "raw" URL for you.
Try the following to see this in action:
- https://lite.datasette.io/?json=https://gist.github.com/simonw/7eacc70cd8b2868be0a18796cec078b9 (this Gist)
- https://lite.datasette.io/?csv=https://github.com/nytimes/covid-19-data/blob/master/us-counties-recent.csv (this file)
Datasette has a number of plugins that enable new features.
You can install plugins into Datasette Lite by adding one or more `?install=name-of-plugin` parameters to the URL.
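For example, to launch Datasette Lite with the datasette-json-html plugin pre-installed:

```
https://lite.datasette.io/?install=datasette-json-html
```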
Not all plugins are compatible with Datasette Lite at the moment. Plugins that load their own JavaScript and CSS do not currently work; see issue #8.
Here's a list of plugins that have been tested with Datasette Lite, plus demo links to see them in action:
- datasette-packages - Show a list of currently installed Python packages - demo
- datasette-dateutil - dateutil functions for Datasette - demo
- datasette-schema-versions - Datasette plugin that shows the schema version of every attached database - demo
- datasette-debug-asgi - Datasette plugin for dumping out the ASGI scope - demo
- datasette-query-links - Turn SELECT queries returned by a query into links to execute them - demo
- datasette-json-html - Datasette plugin for rendering HTML based on JSON values - demo
- datasette-haversine - Datasette plugin that adds a custom SQL function for haversine distances - demo
- datasette-jellyfish - Datasette plugin that adds custom SQL functions for fuzzy string matching, built on top of the Jellyfish Python library - demo
- datasette-pretty-json - Datasette plugin that pretty-prints any column values that are valid JSON objects or arrays - demo
- datasette-yaml - Export Datasette records as YAML - demo
- datasette-copyable - Datasette plugin for outputting tables in formats suitable for copy and paste - demo
- datasette-mp3-audio - Turn `.mp3` URLs into an audio player in the Datasette interface - demo
- datasette-multiline-links - Make multiple newline-separated URLs clickable in Datasette - demo
- datasette-statistics - SQL functions for statistical calculations - demo
- datasette-simple-html - simple SQL functions for stripping tags and escaping or unescaping HTML strings - demo
By default, hits to https://lite.datasette.io/ are logged using Plausible.
Plausible is a privacy-focused, cookie-free, GDPR-compliant analytics system.
Each navigation within Datasette Lite is logged as a separate event to Plausible, capturing the fragment hash and the URL to the currently loaded file.
The site is hosted on GitHub Pages, which does not offer any analytics that are visible to the site owner. GitHub Pages can only log visits to the https://lite.datasette.io/ root page - it will not have visibility into any subsequent `#` fragment navigation.
To opt out of analytics, add `?analytics=off` or `&analytics=off` to the URL. This will prevent any analytics being sent to Plausible.