
Use the IPFS core object to post block data during proposal #178

Merged · 27 commits · Mar 15, 2021

Conversation

@evan-forbes (Member) commented Mar 4, 2021

Description

Here's a rough draft for using the core IPFS object to post block data while proposing a block. This PR shows some of the difficulties that come with using an IPFS core object. The IPFS object is not initialized until OnStart is called, but we need to get the core object to where the block is being proposed. While hacky, if we simply add a field containing an IPFS core object to the consensus.State struct, we can fill that field during OnStart and have access to the IPFS object when the block is being proposed.
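For illustration, a minimal sketch of that approach (the IpfsAPI field name matches what this PR adds to consensus.State; the surrounding wiring and helper names here are simplified assumptions, not the exact diff):

// consensus/state.go
type State struct {
	// ... existing consensus state fields ...

	// IpfsAPI stays nil until OnStart has initialized the embedded IPFS node.
	IpfsAPI ipfsapi.CoreAPI // ipfsapi is "github.com/ipfs/interface-go-ipfs-core"
}

// node/node.go, inside OnStart, once the IPFS node exists (assumed wiring):
api, err := coreapi.NewCoreAPI(ipfsNode) // "github.com/ipfs/go-ipfs/core/coreapi"
if err != nil {
	return err
}
consensusState.IpfsAPI = api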

There is a second problem, which is that the erasure data is not cached. Unless we want to pass the IPFS object all the way to where we are first generating the erasure data, we have no way to access it. This PR simply generates the data again, but another option would be to cache the erasured data in the Block struct and use it from there.

As mentioned in #170, there's the option to not use the IPFS coreapi object and instead use some other IPFS client.

closes #185

@evan-forbes changed the title from "Use the IPFS core ojbect to post block data during proposal without computing the ExtendedDataSquare twice" to "Use the IPFS core ojbect to post block data during proposal" on Mar 4, 2021
@evan-forbes self-assigned this Mar 4, 2021
@liamsi (Member) left a comment

Great start 👍🏼 🚀

types/block.go Outdated
@@ -259,6 +264,128 @@ func mustPush(rowTree *nmt.NamespacedMerkleTree, id namespace.ID, data []byte) {
}
}

func (b *Block) PutBlock(ctx context.Context, api ipfsapi.CoreAPI) error {
@liamsi (Member) commented Mar 4, 2021

Should we pass in a NodeAdder instead of the whole API directly?

It looks like we can even just pass in a pinning NodeAdder: https://github.com/ipfs/interface-go-ipfs-core/blob/b935dfe5375eac7ea3c65b14b3f9a0242861d0b3/dag.go#L12
(which since 0.8.0 seems to use the data store directly instead of coupling it with the DAG - hence it got faster with the last release as far as I understand).
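For concreteness, a sketch of the suggested change (signatures only; format here is "github.com/ipfs/go-ipld-format", and the exact shape is just a suggestion at this point):

// current draft: the method takes the whole CoreAPI
func (b *Block) PutBlock(ctx context.Context, api ipfsapi.CoreAPI) error

// narrower dependency: only a NodeAdder is actually needed
func (b *Block) PutBlock(ctx context.Context, nodeAdder format.NodeAdder) error

// caller side, handing in the pinning NodeAdder from the CoreAPI
err := block.PutBlock(ctx, api.Dag().Pinning())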

@evan-forbes (Member, Author)

Nice! I didn't notice the Pinning method in the interface, that simplifies things a fair bit. 29244a0

@liamsi (Member) commented Mar 5, 2021

nice!

Member

referencing #167 s.t. we know that we are already pinning some of the data.

types/block.go Outdated
Comment on lines 281 to 311
extendedDataSquare, err := rsmt2d.ComputeExtendedDataSquare(shares, rsmt2d.RSGF8, rsmt2d.NewDefaultTree)
if err != nil {
	panic(fmt.Sprintf("unexpected error: %v", err))
}

squareWidth := extendedDataSquare.Width()
originalDataWidth := squareWidth / 2

// add namespaces to erasured shares and chunk into tree sized portions
leaves := make([][][]byte, 2*squareWidth)
for outerIdx := uint(0); outerIdx < squareWidth; outerIdx++ {
	rowLeaves := make([][]byte, squareWidth)
	colLeaves := make([][]byte, squareWidth)
	for innerIdx := uint(0); innerIdx < squareWidth; innerIdx++ {
		if outerIdx < originalDataWidth && innerIdx < originalDataWidth {
			rowShare := namespacedShares[outerIdx*originalDataWidth+innerIdx]
			colShare := namespacedShares[innerIdx*originalDataWidth+outerIdx]
			rowLeaves[innerIdx] = append(rowShare.NamespaceID(), rowShare.Data()...)
			colLeaves[innerIdx] = append(colShare.NamespaceID(), colShare.Data()...)
		} else {
			rowData := extendedDataSquare.Row(outerIdx)
			colData := extendedDataSquare.Column(outerIdx)
			parityCellFromRow := rowData[innerIdx]
			parityCellFromCol := colData[innerIdx]
			rowLeaves[innerIdx] = append(copyOfParityNamespaceID(), parityCellFromRow...)
			colLeaves[innerIdx] = append(copyOfParityNamespaceID(), parityCellFromCol...)
		}
	}
	leaves[outerIdx] = rowLeaves
	leaves[2*outerIdx] = colLeaves
}
@liamsi (Member) commented Mar 4, 2021

Might be orthogonal to this PR, but we should either refactor this s.t. it is not duplicated here, or use only the rsmt2d lib for getting the row/column roots (the latter is preferable if it works) and to collect the ipfs nodes.

Or, we could also just use the NMT + a "node collector" function here directly and skip using DataSquareRowOrColumnRawInputParser.

https://github.com/lazyledger/lazyledger-core/blob/899f5b23d738e6d0118fddfbd3c0410d2f562834/types/block.go#L213-L244

@evan-forbes (Member, Author)

That is a good idea! I ended up refactoring out the redundant code: 5bfb03f. I don't think we can use the rsmt2d libs, because we have to add the namespaces back to the data before generating the roots, but I could be missing something. We could definitely use the node collector in this package instead of DataSquareRowOrColumnRawInputParser, but I didn't for the time being in order not to export nmtNode from the plugin lib.

Member

I don't think we can use the rsmt2d libs, because we have to add the namespaces back to the data before generating the roots,

Alright! Yeah, I knew there was a catch but I couldn't remember what it was. We could add this kind of processing to the rsmt2d lib though: celestiaorg/rsmt2d#12 (comment)

But yeah, this is also completely orthogonal to this PR. Thanks for the refactoring! Looks good so far.

We could definitely use the node collector in this package instead of DataSquareRowOrColumnRawInputParser, but I didn't for the time being in order not to export nmtNode from the plugin lib.

Couldn't we simply export the NodeCollector and keep the nmtNode private?

@evan-forbes (Member, Author)

Good idea. Is this what you had in mind? 47b4125

Member

Yes, that looks good!

Comment on lines +184 to +194
func prependNode(newNode node.Node, nodes []node.Node) []node.Node {
	nodes = append(nodes, node.Node(nil))
	copy(nodes[1:], nodes)
	nodes[0] = newNode
	return nodes
}
@liamsi (Member) commented Mar 7, 2021

It is not the most pressing issue, but I think @adlerjohn was worried about unnecessary allocations here (voiced in #152 or #155 - can't find the comment rn, there were so many). Now that this is a struct that has the nodes as a field, couldn't this also be a method of NmtNodeCollector that modifies that field directly, instead of returning the modified node slice here?
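Roughly something like this (a sketch, assuming the collector keeps its slice in a nodes field):

func (n *NmtNodeCollector) prepend(newNode node.Node) {
	n.nodes = append(n.nodes, node.Node(nil))
	copy(n.nodes[1:], n.nodes)
	n.nodes[0] = newNode
}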

@adlerjohn (Member) commented Mar 9, 2021

I believe this method only avoids allocations if the slice being prepended to never has to grow. It looks like n.nodes is reserved with a size of extendedRowOrColumnSize and should never exceed that, so there should be no allocations. That being said, as @liamsi said, it might be a good idea to make this method more tightly linked to NmtNodeCollector to prevent it from being used without this guarantee.

@liamsi (Member) commented Mar 9, 2021

The production code will now use a batch and immediately pass each node object to that batch the moment it is created, instead of keeping track of the nodes in a slice (see NmtNodeAdder below). Not sure about the allocs in that batch object, but we'd need to pass the nodes there anyway.
I'd be OK with making the NmtNodeCollector private again, as previously, and only exposing the batch adder version of it.

@evan-forbes (Member, Author) commented Mar 10, 2021

I made NmtNodeCollector a private struct in c99bc38, but I have kept prependNode the way it is for now until further discussion. The main reason being that we are now only using it for testing purposes, and the second being that I think we should default to the idiomatic approach and optimize later. Given that a slice is kinda a pointer under the hood, my gut is telling me that it (weirdly) might not save on allocations or speed. I'm happy to do a quick fix on this though, should we decide it's worth it.

Member

The main reason being that we are now only using it for testing purposes, and the second being that I think we should default to the idiomatic approach and optimize later.

Absolutely agree 💯

Member

In case we even go further and rework the entire IPLD representation of NMT nodes, this code will very likely disappear entirely.

P.S. As you can see I am not satisfied with the current solution, but that is ok for now :)

Member

P.S. As you can see I am not satisfied with the current solution, but that is ok for now :)

Can you jot down what you dislike and what you would suggest in an issue? That would be helpful :-)

@@ -32,5 +33,5 @@ var (
	ParitySharesNamespaceID = namespace.ID{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}

	// change accordingly if another hash.Hash should be used as a base hasher in the NMT:
-	newBaseHashFunc = sha3.New256
+	newBaseHashFunc = sha256.New
Member

ref: #187

types/block.go Outdated
Comment on lines 15 to 17
format "github.com/ipfs/go-ipld-format"

"github.com/lazyledger/lazyledger-core/p2p/ipld/plugin/nodes"
Member

From a dependencies perspective, it is probably desirable to not add any networking-related dependencies here to the types package. We can still merge this as is, but we should track this in an issue then.

Note that the main reason to make this thing a method of Block was that we could potentially cache the extended data square as a field and reuse it for putting the block data to ipfs. Maybe we can achieve the same without adding the actual networking logic here. Either way, we can figure this out later.

@evan-forbes (Member, Author)

This is a great point! You're right, PutBlock definitely does not have to be a method of Block. #196
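As a rough sketch of that direction (the package and signature here are assumptions; #196 tracks the actual change), the IPFS write could live next to the IPLD plugin instead of in types:

// e.g. in p2p/ipld, so the types package keeps no IPFS/networking dependency:
func PutBlock(ctx context.Context, nodeAdder format.NodeAdder, block *types.Block) error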

codecov-io commented Mar 10, 2021

Codecov Report

Merging #178 (8cbaa42) into master (eff282a) will increase coverage by 1.07%.
The diff coverage is 14.54%.

@@            Coverage Diff             @@
##           master     #178      +/-   ##
==========================================
+ Coverage   60.87%   61.95%   +1.07%     
==========================================
  Files         261      258       -3     
  Lines       23640    23069     -571     
==========================================
- Hits        14392    14292     -100     
+ Misses       7759     7291     -468     
+ Partials     1489     1486       -3     
Impacted Files Coverage Δ
cmd/tendermint/commands/init.go 2.22% <0.00%> (-1.78%) ⬇️
config/toml.go 60.86% <ø> (ø)
consensus/state.go 68.30% <0.00%> (-0.11%) ⬇️
node/node.go 54.77% <ø> (-3.25%) ⬇️
proxy/client.go 0.00% <ø> (-23.08%) ⬇️
test/e2e/generator/generate.go 0.00% <ø> (ø)
types/block.go 77.08% <ø> (-0.90%) ⬇️
config/ipfs_config.go 83.33% <83.33%> (ø)
config/config.go 77.88% <100.00%> (+0.21%) ⬆️
... and 23 more

@evan-forbes force-pushed the evan/put-ipfs-api-object-compute-twice branch 2 times, most recently from 4d572a6 to 3b52870 on March 10, 2021 05:32
fix node collection and commit data to ipfs

remove extra plugin var

unexport node collector

fix missed unexport
update test result, which changed because of the updated hash function

update to the new PutBlock API

// TODO(evan): don't hard code context and timeout
if cs.IpfsAPI != nil {
	// longer timeouts result in block proposers failing to propose blocks in time.
	ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond*1500)
Member

There is a special context.TODO() for such cases.

Also, what's the rationale behind having a timeout here?

@evan-forbes (Member, Author)

Nice, I didn't know that. In this case we need the timeout because PutBlock is synchronous and could otherwise take too long, stopping the block proposer from proposing a block in time.
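For context, the shape of the bounded call is roughly the following (the diff is only partially quoted above, so the error handling shown here is an assumption):

ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond*1500)
defer cancel()
// PutBlock is synchronous, so the deadline keeps a slow IPFS write from
// delaying the proposal; on timeout we just log and propose without it.
if err := block.PutBlock(ctx, cs.IpfsAPI.Dag().Pinning()); err != nil {
	cs.Logger.Error("failed to post block data to IPFS", "err", err)
}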

@@ -1100,6 +1103,19 @@ func (cs *State) defaultDecideProposal(height int64, round int32) {
	} else if !cs.replayMode {
		cs.Logger.Error("enterPropose: Error signing proposal", "height", height, "round", round, "err", err)
	}

// post data to ipfs
// TODO(evan): don't hard code context and timeout
Member

For this case we can pass the context through already; having an issue for that seems like overmanagement :)

p2p/ipld/plugin/nodes/nodes.go (resolved)
@musalbas changed the title from "Use the IPFS core ojbect to post block data during proposal" to "Use the IPFS core object to post block data during proposal" on Mar 10, 2021
@Wondertan (Member) left a comment

I think we need to add the Nmt codec to the cid.Codecs and cid.CodecToStr maps in the init func. We don't have that and only register the MH codec. While this does not break anything for now, it may in the future.

@evan-forbes (Member, Author)

@Wondertan that makes sense. What string should be used in association with that codec? "nmt-inner-or-leaf-node"? @liamsi can I include this in this PR, or does it belong in another?

@Wondertan (Member)

@evan-forbes, considering that we have two node types, nmtNode and nmtLeafNode, we should have two CID codecs - "nmt-inner" and "nmt-leaf" respectively. Also, I would rename the nmtNode struct to nmtInnerNode so as not to introduce confusion.

@liamsi (Member) commented Mar 13, 2021

@liamsi can I include this in this PR, or does it belong in another?

It's certainly orthogonal to this PR but it's a small change and if it is a no-brainer, feel free to include it here.

That said, we were planning to submit a PR to go-cid to see if the maintainers would accept Nmt in cid.Codecs and that csv file instead - as soon as we have a running mvp. I'm not aware of any downsides of simply not including Nmt in that map in the plugin's code. Other plugins do not seem to follow that pattern of adding to the Codecs map locally. But I also do not see any downsides of simply including it in the map.
It's probably good practice to make sure we are not trying to use a codec that is already in use for some other plugin accidentally. Is that assumption right @Wondertan?

What string should be used in association with that codec?

The string should probably just be "nmt" or "nmt-node" as that codec is only used to register a block decoder.

@evan-forbes, considering that we have two node types, nmtNode and nmtLeafNode, we should have two CID codecs - "nmt-inner" and "nmt-leaf" respectively.

@Wondertan yeah, you brought that up in that first LL plugin PR. It's a good point! Feel free to open an issue about it so we have it in our backlog and if you really feel that it would add immediate value, feel free to start working on it (I certainly agree that it feels cleaner - simply not sure if it's worth the effort right now).
It's also orthogonal to this PR though. Otherwise, I was planning to revisit it after we have a running MVP anyway. Same with renaming the unexported node-types (also a good point though).

@Wondertan (Member) commented Mar 14, 2021

Is that assumption right @Wondertan?

Yes. Although, I come at it from the intention: "if you introduce any codec, please be kind and register it in your app." :)

if it's worth the effort right now

I can agree here, but I can also agree that this is a no-brainer. I would compare this to tech debt, or to following the good practice of one node type = one codec type. We can quickly resolve this in the PR without starting a separate discussion branch; I don't think adding a few lines (two string consts and two map entries) warrants that.

@liamsi (Member) commented Mar 14, 2021

@evan-forbes can you please add the Nmt to the cid.Codecs map?
@Wondertan Regarding "one node type = one codec type", do you mind opening a separate PR? I think it's a bit more involved than the first issue. And I'd like to keep this PR focused on writing block data into the merkle dag / ipld during consensus.

@Wondertan (Member)

@liamsi, ok, will do

@liamsi (Member) commented Mar 15, 2021

The data race that shows up in the CI seems unrelated to the changes in this PR. Here is the output though:

Expanded logs:
--- FAIL: TestTransportHandshake (0.04s)
    transport_test.go:620: read tcp 127.0.0.1:42939->127.0.0.1:37038: i/o timeout
FAIL
coverage: 72.2% of statements
FAIL	github.com/lazyledger/lazyledger-core/p2p	12.079s
ok  	github.com/lazyledger/lazyledger-core/p2p/conn	2.148s	coverage: 85.9% of statements
?   	github.com/lazyledger/lazyledger-core/p2p/mock	[no test files]
?   	github.com/lazyledger/lazyledger-core/p2p/mocks	[no test files]
ok  	github.com/lazyledger/lazyledger-core/p2p/pex	22.200s	coverage: 79.2% of statements
ok  	github.com/lazyledger/lazyledger-core/p2p/trust	0.092s	coverage: 87.1% of statements
?   	github.com/lazyledger/lazyledger-core/p2p/upnp	[no test files]
ok  	github.com/lazyledger/lazyledger-core/privval	9.372s	coverage: 79.6% of statements
?   	github.com/lazyledger/lazyledger-core/proto/tendermint/blockchain	[no test files]
?   	github.com/lazyledger/lazyledger-core/proto/tendermint/consensus	[no test files]
?   	github.com/lazyledger/lazyledger-core/proto/tendermint/crypto	[no test files]
?   	github.com/lazyledger/lazyledger-core/proto/tendermint/libs/bits	[no test files]
?   	github.com/lazyledger/lazyledger-core/proto/tendermint/mempool	[no test files]
?   	github.com/lazyledger/lazyledger-core/proto/tendermint/p2p	[no test files]
?   	github.com/lazyledger/lazyledger-core/proto/tendermint/privval	[no test files]
?   	github.com/lazyledger/lazyledger-core/proto/tendermint/state	[no test files]
ok  	github.com/lazyledger/lazyledger-core/proto/tendermint/statesync	0.032s	coverage: 26.9% of statements
?   	github.com/lazyledger/lazyledger-core/proto/tendermint/store	[no test files]
?   	github.com/lazyledger/lazyledger-core/proto/tendermint/types	[no test files]
?   	github.com/lazyledger/lazyledger-core/proto/tendermint/version	[no test files]
ok  	github.com/lazyledger/lazyledger-core/proxy	0.761s	coverage: 46.2% of statements
?   	github.com/lazyledger/lazyledger-core/proxy/mocks	[no test files]
Tendermint running!
==================
WARNING: DATA RACE
Write at 0x000004301f58 by goroutine 25:
  github.com/lazyledger/lazyledger-core/rpc/core.SetEnvironment()
      /home/runner/work/lazyledger-core/lazyledger-core/rpc/core/env.go:37 +0x9a4
  github.com/lazyledger/lazyledger-core/node.(*Node).ConfigureRPC()
      /home/runner/work/lazyledger-core/lazyledger-core/node/node.go:1049 +0x316
  github.com/lazyledger/lazyledger-core/rpc/client/local.New()
      /home/runner/work/lazyledger-core/lazyledger-core/rpc/client/local/local.go:53 +0x44
  github.com/lazyledger/lazyledger-core/rpc/client_test.getLocalClient()
      /home/runner/work/lazyledger-core/lazyledger-core/rpc/client/rpc_test.go:54 +0x5c
  github.com/lazyledger/lazyledger-core/rpc/client_test.GetClients()
      /home/runner/work/lazyledger-core/lazyledger-core/rpc/client/rpc_test.go:61 +0xaf
  github.com/lazyledger/lazyledger-core/rpc/client_test.TestBlockEvents()
      /home/runner/work/lazyledger-core/lazyledger-core/rpc/client/event_test.go:57 +0x36
  testing.tRunner()
      /opt/hostedtoolcache/go/1.15.8/x64/src/testing/testing.go:1123 +0x202

Previous read at 0x000004301f58 by goroutine 123:
  github.com/lazyledger/lazyledger-core/rpc/core.UnsubscribeAll()
      /home/runner/work/lazyledger-core/lazyledger-core/rpc/core/events.go:106 +0x18e
  runtime.call32()
      /opt/hostedtoolcache/go/1.15.8/x64/src/runtime/asm_amd64.s:540 +0x3d
  reflect.Value.Call()
      /opt/hostedtoolcache/go/1.15.8/x64/src/reflect/value.go:337 +0xd8
  github.com/lazyledger/lazyledger-core/rpc/jsonrpc/server.(*wsConnection).readRoutine()
      /home/runner/work/lazyledger-core/lazyledger-core/rpc/jsonrpc/server/ws_handler.go:381 +0xaab

Goroutine 25 (running) created at:
  testing.(*T).Run()
      /opt/hostedtoolcache/go/1.15.8/x64/src/testing/testing.go:1168 +0x5bb
  testing.runTests.func1()
      /opt/hostedtoolcache/go/1.15.8/x64/src/testing/testing.go:1439 +0xa6
  testing.tRunner()
      /opt/hostedtoolcache/go/1.15.8/x64/src/testing/testing.go:1123 +0x202
  testing.runTests()
      /home/runner/work/lazyledger-core/lazyledger-core/rpc/jsonrpc/server/ws_handler.go:226 +0xb9
  github.com/lazyledger/lazyledger-core/libs/service.(*BaseService).Start()
      /home/runner/work/lazyledger-core/lazyledger-core/libs/service/service.go:140 +0x538
  github.com/lazyledger/lazyledger-core/rpc/jsonrpc/server.(*WebsocketManager).WebsocketHandler()
      /home/runner/work/lazyledger-core/lazyledger-core/rpc/jsonrpc/server/ws_handler.go:91 +0x5c8
  github.com/lazyledger/lazyledger-core/rpc/jsonrpc/server.(*WebsocketManager).WebsocketHandler-fm()
      /home/runner/work/lazyledger-core/lazyledger-core/rpc/jsonrpc/server/ws_handler.go:74 +0x64
  net/http.HandlerFunc.ServeHTTP()
      /opt/hostedtoolcache/go/1.15.8/x64/src/net/http/server.go:2042 +0x51
  net/http.(*ServeMux).ServeHTTP()
      /opt/hostedtoolcache/go/1.15.8/x64/src/net/http/server.go:2417 +0xaf
  github.com/rs/cors.(*Cors).Handler.func1()
      /home/runner/go/pkg/mod/github.com/rs/[email protected]/cors.go:219 +0x239
  net/http.HandlerFunc.ServeHTTP()
      /opt/hostedtoolcache/go/1.15.8/x64/src/net/http/server.go:2042 +0x51
  github.com/lazyledger/lazyledger-core/rpc/jsonrpc/server.maxBytesHandler.ServeHTTP()
      /home/runner/work/lazyledger-core/lazyledger-core/rpc/jsonrpc/server/http_server.go:234 +0x187
  github.com/lazyledger/lazyledger-core/rpc/jsonrpc/server.(*maxBytesHandler).ServeHTTP()
      <autogenerated>:1 +0x9d
  github.com/lazyledger/lazyledger-core/rpc/jsonrpc/server.RecoverAndLogHandler.func1()
      /home/runner/work/lazyledger-core/lazyledger-core/rpc/jsonrpc/server/http_server.go:207 +0x461
  net/http.HandlerFunc.ServeHTTP()
      /opt/hostedtoolcache/go/1.15.8/x64/src/net/http/server.go:2042 +0x51
  net/http.serverHandler.ServeHTTP()
      /opt/hostedtoolcache/go/1.15.8/x64/src/net/http/server.go:2843 +0xca
  net/http.(*conn).serve()
      /opt/hostedtoolcache/go/1.15.8/x64/src/net/http/server.go:1925 +0x84c
==================
E[2021-03-15|13:39:57.891] Failed to write response                     module=rpc-server protocol=websocket remote=127.0.0.1:56756 err="write tcp 127.0.0.1:39049->127.0.0.1:56756: write: broken pipe" msg="RPCResponse{1 7B7D}"
E[2021-03-15|13:39:57.891] error while stopping connection              module=rpc-server protocol=websocket error="already stopped"
E[2021-03-15|13:39:58.095] Failed to write response                     module=rpc-server protocol=websocket remote=127.0.0.1:56758 err="websocket: close sent" msg="RPCResponse{1 7B7D}"
E[2021-03-15|13:39:58.095] error while stopping connection              module=rpc-server protocol=websocket error="already stopped"
--- FAIL: TestBlockEvents (0.40s)
    testing.go:1038: race detected during execution of test
E[2021-03-15|13:39:58.428] Failed to write response                     module=rpc-server protocol=websocket remote=127.0.0.1:56764 err="websocket: close sent" msg="RPCResponse{1 7B7D}"
E[2021-03-15|13:39:58.429] error while stopping connection              module=rpc-server protocol=websocket error="already stopped"
E[2021-03-15|13:39:58.706] Failed to write response                     module=rpc-server protocol=websocket remote=127.0.0.1:56768 err="websocket: close sent" msg="RPCResponse{1 7B7D}"
E[2021-03-15|13:39:58.706] error while stopping connection              module=rpc-server protocol=websocket error="already stopped"
FAIL
coverage: 89.7% of statements
E[2021-03-15|13:40:11.599] Stopped accept routine, as transport is closed module=p2p numPeers=0
E[2021-03-15|13:40:11.599] Error serving server                         err="accept tcp 127.0.0.1:39049: use of closed network connection"
E[2021-03-15|13:40:11.599] Error starting gRPC server                   err="accept tcp 127.0.0.1:41473: use of closed network connection"
FAIL	github.com/lazyledger/lazyledger-core/rpc/client	15.306s

I'll restart CI now.

@evan-forbes (Member, Author)

I was still trying to debug, and restarted the CI a few times with no luck... Looks like it passed this time! 😬

I'll open an issue regarding the nondeterministic test failures. #216

// iterate through each set of col and row leaves
for _, leafSet := range leaves {
	// create a batch per each leafSet
	batchAdder := nodes.NewNmtNodeAdder(ctx, format.NewBatch(ctx, nodeAdder))
Member

I think it is better to extract a local var from format.NewBatch(ctx, nodeAdder) and use that to commit to that batch.
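i.e. something along these lines (sketch only):

batch := format.NewBatch(ctx, nodeAdder)
batchAdder := nodes.NewNmtNodeAdder(ctx, batch)
// ... push the leafSet through the NMT with batchAdder as the visitor ...
if err := batch.Commit(); err != nil {
	return err
}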

Comment on lines +58 to +60
// register the codecs in the global maps
cid.Codecs[NmtCodecName] = Nmt
cid.CodecToStr[Nmt] = NmtCodecName
Member

Should we panic in case the codec already exists? (e.g. because another project registered 0x7700 for sth else)
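One possible shape for that (whether panicking is the right behaviour is exactly the question; the collision check itself is only a sketch):

func init() {
	// fail loudly if another project already claimed the name or the code point
	if code, ok := cid.Codecs[NmtCodecName]; ok && code != Nmt {
		panic(fmt.Sprintf("codec name %q already registered with code 0x%x", NmtCodecName, code))
	}
	if name, ok := cid.CodecToStr[Nmt]; ok && name != NmtCodecName {
		panic(fmt.Sprintf("codec 0x%x already registered as %q", Nmt, name))
	}
	cid.Codecs[NmtCodecName] = Nmt
	cid.CodecToStr[Nmt] = NmtCodecName
}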

@liamsi (Member) left a comment

I left two nits. They can also be addressed in follow-up PRs instead.

Overall, this is amazing work 🚀 Thanks so much @evan-forbes!

Development

Successfully merging this pull request may close these issues.

Store block Data using IPLD API
6 participants