The archive crate will likely be a standalone crate in its own repo so it can be reused in other projects as needed. Projects in active development can point their Cargo.toml dependencies straight at the GitHub repo, and then, once they're ready for a pull request, pin to a specific version that we'll also publish to crates.io.
The interface itself will be really basic to begin with:
- A `create` interface that takes a new document and inserts it into the backend repository
- An `update` interface that takes a document and updates the copy in the backend repository
- A `get` interface that retrieves a document by ID or other criteria (potentially split into multiple functions)
- Possibly a `delete` interface, although we don't necessarily want most consumers to be able to remove data
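The interface described above might look something like the following sketch. Everything here is illustrative rather than the crate's actual API: the real crate would expose async methods backed by MongoDB, so a synchronous in-memory stand-in is used to keep the example self-contained.

```rust
use std::collections::HashMap;

// Illustrative document-type enum; the real crate's variants may differ.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum DocumentType {
    Account,
    Transaction,
}

// Hypothetical shape of the archive interface. Documents are opaque
// bytes here because the interface doesn't inspect their contents.
trait Archive {
    fn create(&mut self, doc_type: DocumentType, id: &str, doc: Vec<u8>);
    fn update(&mut self, doc_type: DocumentType, id: &str, doc: Vec<u8>);
    fn get(&self, doc_type: DocumentType, id: &str) -> Option<&Vec<u8>>;
}

// Toy in-memory backend standing in for MongoDB.
struct InMemoryArchive {
    docs: HashMap<(DocumentType, String), Vec<u8>>,
}

impl Archive for InMemoryArchive {
    fn create(&mut self, doc_type: DocumentType, id: &str, doc: Vec<u8>) {
        self.docs.insert((doc_type, id.to_string()), doc);
    }
    // In this toy backend, update is just a replace; a real backend
    // would distinguish inserting from updating an existing copy.
    fn update(&mut self, doc_type: DocumentType, id: &str, doc: Vec<u8>) {
        self.docs.insert((doc_type, id.to_string()), doc);
    }
    fn get(&self, doc_type: DocumentType, id: &str) -> Option<&Vec<u8>> {
        self.docs.get(&(doc_type, id.to_string()))
    }
}
```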
The interface itself won't delve into the contents of the objects/documents/blobs passed in; it will assume that we can use Rust's serde_derive to serialise and deserialise to a format that is conducive to the backend.
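A document type handed to the interface might then look like this sketch (it requires the serde crate with the derive feature). The struct name and fields are purely hypothetical; the actual shapes come from the LASR data structures.

```rust
use serde::{Deserialize, Serialize};

// Hypothetical transaction document. serde_derive generates the
// (de)serialisation code, so the archive interface never needs to
// inspect the contents itself -- the backend just converts the value
// to its native format (e.g. BSON for MongoDB).
#[derive(Serialize, Deserialize)]
struct TransactionDoc {
    id: String,
    from: String,
    to: String,
    amount: u64,
}
```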
The above interfaces will specify which archive backend to use and which instance. In the short term, the only backend will be MongoDB, and the instance equates to the MongoDB database name (allowing multiple instances to be hosted on the same infrastructure in cases where that makes sense).
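Backend and instance selection could be modelled as a small enum along these lines; the names are assumptions for illustration, not the crate's actual API.

```rust
// Hypothetical backend selector. MongoDB is the only variant for now;
// the instance is the MongoDB database name, so several logical
// archives can share one deployment.
#[derive(Debug, Clone, PartialEq)]
enum ArchiveBackend {
    MongoDb { instance: String },
}

// Illustrative helper describing where a given configuration points.
fn connection_target(backend: &ArchiveBackend) -> String {
    match backend {
        ArchiveBackend::MongoDb { instance } => format!("mongodb (db: {})", instance),
    }
}
```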
The MongoDB backend will also use the document type (a simple enum with no real awareness of the data structures themselves) to determine which collection (table) the document should be written to. This supports the short-term goal of handling account and transaction data and archiving each in a separate repository. Other document types/repos can be added, and additional archive backends written in the future can likely do the same.
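The document-type-to-collection mapping could be as simple as a match over the enum. Both the enum variants and the collection names below are assumptions for illustration.

```rust
// Simple enum with no awareness of the underlying data structures.
#[derive(Debug, Clone, Copy, PartialEq)]
enum DocumentType {
    Account,
    Transaction,
}

// Maps each document type to the MongoDB collection it is written to,
// keeping accounts and transactions in separate collections.
fn collection_for(doc_type: DocumentType) -> &'static str {
    match doc_type {
        DocumentType::Account => "accounts",
        DocumentType::Transaction => "transactions",
    }
}
```

Adding a new document type then only means adding an enum variant and a match arm, without touching the interface itself.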
I have a start on the crate itself. There were some Rust hurdles to get around, including async traits not being native to the language and the available support being incomplete. I need to fix up some serialisation issues, and then I should be able to push an initial implementation that allows archive writes. This will let us start plugging in the LASR code to at least start archiving data -- potentially on a test network. The advantage of having this path somewhat ready is that it gives other folks data they can start looking into for a potential block explorer.