refactor: chunk -> share (celestiaorg#3240)
Rename instances of `chunk` to `share`.

## Motivation

The rsmt2d repo uses `chunk` and `share` interchangeably, and readability improves if one term
is used consistently. A few people have expressed a preference for `share` over
`chunk` because of its broader usage in celestia-node and celestia-app.

See celestiaorg/rsmt2d#215

Note: There are no occurrences of `shard` or `buffer` that need renaming in
this repo. `Shard` is used frequently, but it matches the usage in
https://github.com/filecoin-project/dagstore/blob/master/docs/design.md#shards
and is not meant to refer to a share.
rootulp authored Mar 6, 2024
2 parents 32bc7a9 + 2469e7a commit 0cb1b02
Showing 4 changed files with 21 additions and 21 deletions.
das/doc.go (2 changes: 1 addition & 1 deletion)

@@ -2,7 +2,7 @@
Package das contains the most important functionality provided by celestia-node.
It contains logic for running data availability sampling (DAS) routines on block
headers in the network. DAS is the process of verifying the availability of
-block data by sampling chunks or shares of those blocks.
+block data by sampling shares of those blocks.
Package das can confirm the availability of block data in the network via the
Availability interface which is implemented both in `full` and `light` mode.
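
For illustration, the `Availability` interface referenced above can be sketched roughly as below; the type names and method set are assumptions, and the actual definition in celestia-node may differ.

```go
package das

import "context"

// Root stands in for the data availability header (share.Root) that commits
// to a block's shares; it is a hypothetical placeholder for this sketch.
type Root struct{}

// Availability is a sketch of the interface described above. Full nodes pull
// and reconstruct the entire block data, while light nodes sample a random
// subset of shares.
type Availability interface {
	// SharesAvailable validates that the shares committed to by root can be
	// retrieved from the network.
	SharesAvailable(ctx context.Context, root *Root) error
}
```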
docs/adr/adr-011-blocksync-overhaul-part-1.md (32 changes: 16 additions & 16 deletions)

@@ -94,7 +94,7 @@ are kept with additional components introduced. Altogether, existing and new components make up the
foundation of our improved block storage subsystem.

The central data structure representing Celestia block data is EDS(`rsmt2d.ExtendedDataSquare`), and the new storage design
-is focused around storing entire EDSes as a whole rather than a set of individual chunks, s.t. storage subsystem
+is focused around storing entire EDSes as a whole rather than a set of individual shares, s.t. storage subsystem
can handle storing and streaming/serving blocks of 4MB and more.

#### EDS (De-)Serialization
@@ -174,9 +174,9 @@ and getting data by namespace.

```go
type Store struct {
	basepath string
	dgstr    dagstore.DAGStore
	topIdx   index.Inverted
	carIdx   index.FullIndexRepo
	mounts   *mount.Registry
...
@@ -189,18 +189,18 @@ func NewStore(basepath string, ds datastore.Batching) *Store {
	carIdx := index.NewFSRepo(basepath + "/index")
	mounts := mount.NewRegistry()
	err := mounts.Register("fs", &mount.FSMount{FS: os.DirFS(basepath + "/eds/")}) // registration is a must
	if err != nil {
		panic(err)
	}

	return &Store{
		basepath: basepath,
		dgstr:    dagstore.New(dagstore.Config{...}),
		topIdx:   index.NewInverted(ds),
		carIdx:   carIdx,
		mounts:   mounts,
	}
}
```

@@ -224,7 +224,7 @@ out of the scope of the document._
//
// The square is verified on the Exchange level and Put only stores the square trusting it.
// The resulting file stores all the shares and NMT Merkle Proofs of the EDS.
// Additionally, the file gets indexed s.t. Store.Blockstore can access them.
func (s *Store) Put(context.Context, DataHash, *rsmt2d.ExtendedDataSquare) error
```
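
As a hedged illustration (not part of the ADR), a `Put` call site might look like the following; the import paths and helper name are assumptions:

```go
import (
	"context"

	"github.com/celestiaorg/celestia-node/share"
	"github.com/celestiaorg/celestia-node/share/eds"
	"github.com/celestiaorg/rsmt2d"
)

// putEDS is a hypothetical helper illustrating the Put flow.
func putEDS(ctx context.Context, s *eds.Store, dh share.DataHash, square *rsmt2d.ExtendedDataSquare) error {
	// The square was already verified on the Exchange level; Put persists it
	// as a CARv1 file and indexes it so Store.Blockstore can serve its shares.
	return s.Put(ctx, dh, square)
}
```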

@@ -243,9 +243,9 @@ ___NOTES:___

```go
// GetCAR takes a DataHash and returns a buffered reader to the respective EDS serialized as a CARv1 file.
//
// The Reader strictly reads the full EDS, and its integrity is not verified.
//
// Caller must Close returned reader after reading.
func (s *Store) GetCAR(context.Context, DataHash) (io.ReadCloser, error)
```
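
A hedged sketch of the intended `GetCAR` usage, honoring the Close requirement (the helper name is an assumption):

```go
// streamCAR is a hypothetical helper that streams the EDS identified by dh
// as a CARv1 file into w.
func streamCAR(ctx context.Context, s *eds.Store, dh share.DataHash, w io.Writer) error {
	r, err := s.GetCAR(ctx, dh)
	if err != nil {
		return err
	}
	defer r.Close() // the caller must close the returned reader

	_, err = io.Copy(w, r)
	return err
}
```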
@@ -265,8 +265,8 @@ ___NOTES:___
- EDIT: We went with a custom implementation.

```go
// Blockstore returns an IPFS Blockstore providing access to individual shares/nodes of all EDS
// registered on the Store. NOTE: The Blockstore does not store whole Celestia Blocks but IPFS blocks.
// We represent `shares` and NMT Merkle proofs as IPFS blocks and IPLD nodes so Bitswap can access those.
func (s *Store) Blockstore() blockstore.Blockstore
```
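
A hedged sketch of reading a single share (or NMT proof node) through the returned Blockstore; the context-taking `Get` signature follows recent go-ipfs-blockstore releases and may differ in older versions:

```go
// getShareBlock is a hypothetical helper fetching one IPFS block by CID from
// the Store-wide blockstore.
func getShareBlock(ctx context.Context, s *eds.Store, c cid.Cid) ([]byte, error) {
	blk, err := s.Blockstore().Get(ctx, c)
	if err != nil {
		return nil, err
	}
	return blk.RawData(), nil
}
```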
@@ -284,8 +284,8 @@ ___NOTES:___
blocks._

```go
// CARBlockstore returns an IPFS Blockstore providing access to individual shares/nodes of a specific EDS identified by
// DataHash and registered on the Store. NOTE: The Blockstore does not store whole Celestia Blocks but IPFS blocks.
// We represent `shares` and NMT Merkle proofs as IPFS blocks and IPLD nodes so Bitswap can access those.
func (s *Store) CARBlockstore(context.Context, DataHash) (dagstore.ReadBlockstore, error)
```
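
Similarly, a hedged sketch for the per-EDS `CARBlockstore`; the `ReadBlockstore` method set varies across dagstore versions, so treat the `Get` call as an assumption:

```go
// getShareFromEDS is a hypothetical helper reading one share/node block out
// of a specific EDS via its CAR-backed blockstore.
func getShareFromEDS(ctx context.Context, s *eds.Store, dh share.DataHash, c cid.Cid) ([]byte, error) {
	bs, err := s.CARBlockstore(ctx, dh)
	if err != nil {
		return nil, err
	}
	blk, err := bs.Get(ctx, c)
	if err != nil {
		return nil, err
	}
	return blk.RawData(), nil
}
```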
@@ -301,7 +301,7 @@ The `GetDAH` method returns the DAH (`share.Root`) of the EDS identified by `DataHash`.

```go
// GetDAH returns the DataAvailabilityHeader for the EDS identified by DataHash.
func (s *Store) GetDAH(context.Context, share.DataHash) (*share.Root, error)
```

##### `eds.Store.Get`
@@ -315,7 +315,7 @@ ___NOTE:___ _It's unnecessary, but an API ergonomics/symmetry nice-to-have._

```go
// Get reads EDS out of Store by given DataHash.
//
// It reads only one quadrant (1/4) of the EDS and verifies the integrity of the stored data by recomputing it.
func (s *Store) Get(context.Context, DataHash) (*rsmt2d.ExtendedDataSquare, error)
```
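
A hedged sketch combining `Get` and `GetDAH` (the helper is illustrative, not part of the ADR):

```go
// loadEDS is a hypothetical helper reading an EDS and its DAH back from the
// Store; Get recomputes the square from one quadrant, verifying integrity.
func loadEDS(ctx context.Context, s *eds.Store, dh share.DataHash) (*rsmt2d.ExtendedDataSquare, *share.Root, error) {
	square, err := s.Get(ctx, dh)
	if err != nil {
		return nil, nil, err
	}
	dah, err := s.GetDAH(ctx, dh)
	if err != nil {
		return nil, nil, err
	}
	return square, dah, nil
}
```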
docs/adr/adr-012-daser-parallelization.md (6 changes: 3 additions & 3 deletions)

@@ -10,7 +10,7 @@

## Context

-DAS is the process of verifying the availability of block data by sampling chunks or shares of those blocks. The `das` package implements an engine to ensure the availability of the chain's block data via the `Availability` interface.
+DAS is the process of verifying the availability of block data by sampling shares of those blocks. The `das` package implements an engine to ensure the availability of the chain's block data via the `Availability` interface.
Verifying the availability of block data is priority functionality for celestia-node. Its performance could benefit significantly from parallelization, making it able to fully utilize network bandwidth.

## Previous approach
@@ -61,7 +61,7 @@ amount of workers: 32, speed: 11.33
amount of workers: 64, speed: 11.83
```

Based on basic experiment results, values higher than 16 don’t bring much benefit. At the same time, increased parallelization comes with a cost of higher memory consumption.
Future improvements will be discussed later and are out of the scope of this ADR.
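
For illustration (not from the ADR), a bounded worker pool of the kind benchmarked above might be sketched as follows; `sample` is a hypothetical stand-in for sampling one header's shares:

```go
import (
	"context"
	"sync"
)

// runWorkers drains heights with at most concurrencyLimit concurrent workers.
func runWorkers(ctx context.Context, heights <-chan uint64, concurrencyLimit int,
	sample func(context.Context, uint64) error) {
	var wg sync.WaitGroup
	for i := 0; i < concurrencyLimit; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for h := range heights {
				if err := sample(ctx, h); err != nil {
					continue // a real coordinator would requeue failed heights
				}
			}
		}()
	}
	wg.Wait()
}
```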

## Status
@@ -70,7 +70,7 @@ Implemented

## Future plans

Several parameter values that come hardcoded in DASer (`samplingRange`, `concurrencyLimit`, `priorityQueueSize`, `genesisHeight`, `backgroundStoreInterval`) should become configurable, so the node runner can define them based on the specific node setup. Default values should be optimized by performance testing for the most common setups, and could potentially vary for different node types.
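
A hypothetical shape for that configuration; the field names mirror the parameters listed above, but the struct itself is an assumption rather than the actual celestia-node type:

```go
import "time"

// Parameters collects the currently hardcoded DASer knobs as configuration.
type Parameters struct {
	SamplingRange           uint64        // width of the header range sampled per job
	ConcurrencyLimit        int           // maximum number of concurrent sampling workers
	PriorityQueueSize       int           // capacity of the recent-headers priority queue
	GenesisHeight           uint64        // height sampling starts from
	BackgroundStoreInterval time.Duration // how often sampling progress is persisted
}
```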

## References

share/ipld/add.go (2 changes: 1 addition & 1 deletion)

@@ -47,7 +47,7 @@ func AddShares(
return eds, batchAdder.Commit()
}

-// ImportShares imports flattened chunks of data into Extended Data square and saves it in
+// ImportShares imports flattened pieces of data into Extended Data square and saves it in
// blockservice.BlockService
func ImportShares(
	ctx context.Context,