Ledger Enclave Strategy #402
I think that instead of mimicking Fabric semantics completely, the strategy should deliberately deviate from Fabric by:
By doing (1), validation logic is reduced and, most importantly, doesn't need to play catch-up with Fabric changes. To achieve (1), there are several relaxations (which are, in effect, semantic restrictions) that can be taken into account:
By doing the above, we essentially decouple FPC transaction validation from Fabric configuration block processing and the Fabric transaction lifecycle, greatly simplifying the code. To realize (2), I think the best way would be to introduce a new transaction type to Fabric and have a custom transaction processor for that transaction. While it does modify Fabric, I believe it is the right approach in the long term because it doesn't require changing any of the existing Fabric endorsement transaction logic, which is already complex, and it also gives more flexibility and freedom to the FPC transaction validation logic, as it is no longer bound by the format and structure of the endorsement transaction. To the user, FPC would still appear to have a similar transaction flow; only internally would it be easier to implement and to reason about security.
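To make this concrete, here is a minimal Go sketch of the idea: a new transaction (header) type dispatched to its own processor, so FPC validation never touches the endorsement-transaction code path. All names here (`HeaderTypeFPCTransaction`, `Processor`, `dispatch`) are illustrative assumptions, not actual Fabric APIs.

```go
package main

import (
	"errors"
	"fmt"
)

// HeaderType mirrors the role of common.HeaderType in Fabric transaction headers.
type HeaderType int32

const (
	HeaderTypeEndorserTransaction HeaderType = 3   // existing Fabric endorsement type
	HeaderTypeFPCTransaction      HeaderType = 100 // hypothetical new FPC type
)

// Processor validates and applies one category of transactions.
type Processor interface {
	Process(payload []byte) error
}

// fpcProcessor would contain only FPC-specific validation (enclave signature
// checks, ercc registry lookups, ...), independent of the endorsement format.
type fpcProcessor struct{}

func (p *fpcProcessor) Process(payload []byte) error {
	// ... verify the enclave signature over payload, consult ercc, etc. ...
	fmt.Printf("validating FPC transaction (%d bytes)\n", len(payload))
	return nil
}

// dispatch routes a transaction to the processor registered for its header type.
func dispatch(t HeaderType, payload []byte, procs map[HeaderType]Processor) error {
	p, ok := procs[t]
	if !ok {
		return errors.New("no processor registered for header type")
	}
	return p.Process(payload)
}

func main() {
	procs := map[HeaderType]Processor{
		HeaderTypeFPCTransaction: &fpcProcessor{},
	}
	if err := dispatch(HeaderTypeFPCTransaction, []byte("fpc-tx-bytes"), procs); err != nil {
		panic(err)
	}
}
```

The point of the dispatch-by-type structure is exactly what the comment argues: the FPC processor can evolve independently of the (complex) endorsement validation code.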
The challenge, though, is that the current trusted ledger architecture relies not only on validation of FPC chaincode transactions, but also on proper validation of (a) the standard Fabric channel definition with, e.g., MSP and orderer info, and any related updates, and (b) one "normal" chaincode, the enclave registry (ercc). While with more invasive changes in Fabric we can probably get away without (b), the reliance on (a) is pretty fundamental for security, so simply bifurcating validation won't really work. But as mentioned by both of you, we do not have to support the complete generality of validation (e.g., custom validation or endorsement plugins). I guess the main challenge is to define the (restricted) scope as cleanly and narrowly as possible, and then also to have good processes in place to keep the code relation traceable and change-trackable (e.g., clear attribution in our code to where the Go equivalent is, maybe also keeping similar code structure/naming, and maybe some scripts which can easily identify whether Fabric changes will require corresponding changes in our code). The mid/long-term strategy of course would be to re-use common code, but that would need (a) the dust settling on Go support for SGX -- there are some early PoCs based on Graphene, but they are not yet in a stable enough state that I would like to base it on -- and, potentially, (b) some modularization of the validation code in Fabric.
I think you only need to implement a small subset of the logic for the channel definition: the logic that parses the consensus definitions and can perform signature validation on block signatures.
Therefore, as long as a block is signed by the required number of orderers, and the peers remain functioning after ingesting the block, the config block is valid. If the Ledger Enclave resides at an even lower level, then as long as it sees that the signatures on a config block are correct, and that this block is the only one with its sequence number, it can accept it without problems.
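For illustration, here is a minimal Go sketch of the "signed by the required number of orderers" check as the Ledger Enclave might perform it, assuming ECDSA orderer keys taken from the previously accepted channel config. The types and the quorum rule are placeholders, not Fabric's actual block-validation code.

```go
package ledgerenclave

import (
	"crypto/ecdsa"
	"crypto/sha256"
	"errors"
	"math/big"
)

// BlockSignature pairs an orderer's public key with its ECDSA signature over
// the block bytes.
type BlockSignature struct {
	PubKey *ecdsa.PublicKey
	R, S   *big.Int
}

// VerifyConfigBlock accepts blockBytes iff at least quorum distinct consenters
// (orderer keys from the previously accepted channel config) signed it.
func VerifyConfigBlock(blockBytes []byte, sigs []BlockSignature,
	consenters []*ecdsa.PublicKey, quorum int) error {

	digest := sha256.Sum256(blockBytes)
	used := make([]bool, len(consenters)) // count each consenter at most once
	valid := 0
	for _, sig := range sigs {
		for i, known := range consenters {
			if used[i] || !known.Equal(sig.PubKey) {
				continue
			}
			if ecdsa.Verify(sig.PubKey, digest[:], sig.R, sig.S) {
				used[i] = true
				valid++
			}
			break
		}
	}
	if valid < quorum {
		return errors.New("insufficient valid orderer signatures on config block")
	}
	return nil
}
```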
Hmm, I don't think you can rely on this. If the trusted ledger doesn't validate the config changes but just takes them at face value, a bad peer could create one with a new (insecure) orderer configuration. Yes, good peers might panic, but that doesn't stop the bad peer from giving it to the trusted enclave and then feeding it bogus ordering messages, which could then lead to revelation of supposedly secret state. PS: After some discussion with Bruno: Did you mean that the orderers actually validate the (complete?) content of the config messages themselves (i.e., signed by enough orgs according to the corresponding lifecycle policy and the like) and forward an ordered config message only if it is valid? If so, then I could see that we don't have to repeat validation, as we have to unconditionally trust orderers anyway. But then a (properly ordered/signed) config block would always be accepted by peers? At least unless the orderers are corrupted, but in that case all bets are off anyway.
Recall that I said:
This means that the peer cannot create a config block which the orderer didn't validate beforehand. If "good peers" panic, then it is irrelevant what the "bad peer" that surrounds your enclave is doing, since all honest players in the system are now out of the game and you have bigger problems to face.
The orderer has always been validating configuration blocks. More specifically, it takes the config update and tries to "simulate" it by "proposing" it to the current config.
An orderer validates everything in the configuration besides a single thing: the capability level of the peers. However, a capability level is a binary "yes or no", and if a peer doesn't have the capability it will panic; therefore it is possible that peers will panic due to a config update that the orderer signed, but nothing more. This is by design, because the orderer can be at a higher capability version than the peers and it doesn't know the capability version of the peers.
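A toy Go sketch of that orderer-side "simulation", under the simplifying assumption that a config is a flat key/value map with a sequence number; real Fabric config updates carry versioned read/write sets and per-key modification policies, which this deliberately omits.

```go
package ledgerenclave

import "errors"

// Config is a toy stand-in for a channel configuration.
type Config struct {
	Sequence uint64
	Values   map[string]string
}

// ConfigUpdate is a toy stand-in for a config update transaction.
type ConfigUpdate struct {
	Sequence uint64 // must match the config the update was built against
	Writes   map[string]string
}

// ProposeConfigUpdate applies the update to a copy of the current config and
// returns the candidate next config, rejecting updates built against a stale
// configuration. The orderer would only sign the result if this succeeds.
func ProposeConfigUpdate(current Config, update ConfigUpdate) (Config, error) {
	if update.Sequence != current.Sequence {
		return Config{}, errors.New("config update built against stale config")
	}
	next := Config{Sequence: current.Sequence + 1, Values: map[string]string{}}
	for k, v := range current.Values {
		next.Values[k] = v
	}
	for k, v := range update.Writes {
		next.Values[k] = v // real validation would also check modification policies
	}
	return next, nil
}
```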
Thanks for the clarification. In this case, as mentioned in my PS, I agree with you that verifying the orderer signature on config-update messages should be enough. Given that we need MSP-rooted signature verification, (subsets of) policy evaluation, and the parsing of the protobufs elsewhere (and hence corresponding library functions), it's less clear how much complexity we can save in this context. That said, it is certainly key to first identify what we really need and where we can subset -- e.g., for MSP we clearly need only X.509, only the new lifecycle, and no custom validator, endorser, or decorator plugins. It is also important to distinguish between what falls outside the subset that we can silently ignore -- e.g., for chaincodes other than FPC chaincodes and the special ercc "normal" chaincode, we can ignore any use of custom plugins or different MSPs -- and what will have to trigger an abort when encountered -- e.g., any unexpected MSP in channel or lifecycle policies applicable to "our" chaincodes will have to lead to an abort. Probably the best approach would be to first describe in high-level pseudo-code the validation flows Fabric performs (maybe with some references to where in the code the corresponding logic is) and then annotate what we have to do and how we handle it. That could put us on safe ground, and by keeping it a living document (and also cross-referencing our code to it), it will hopefully also help keep the security invariants enforced as time goes by.
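As a first stab at the annotated pseudo-code suggested above, here is a Go-flavored sketch that records, per transaction category, what the trusted ledger repeats from Fabric, what it silently ignores, and where it must abort; all function and type names are placeholders.

```go
package ledgerenclave

import "errors"

type TxType int

const (
	TxConfigUpdate TxType = iota
	TxEndorsement
)

type Tx struct {
	Type      TxType
	Chaincode string
}

// isFPCRelevant is an assumed helper returning true for ercc and for
// registered FPC chaincodes.
func isFPCRelevant(chaincode string) bool { return chaincode == "ercc" }

func validate(tx Tx) error {
	switch tx.Type {
	case TxConfigUpdate:
		// Repeat: orderer signature check on the config block (see above).
		// Abort: any MSP other than X.509 in channel/lifecycle policies.
		return verifyOrdererSignatures(tx)
	case TxEndorsement:
		if !isFPCRelevant(tx.Chaincode) {
			// Ignore: custom plugins or exotic MSPs of unrelated chaincodes.
			return nil
		}
		// Repeat: restricted lifecycle/endorsement policy evaluation.
		// Abort: custom validation/endorsement plugins, state-based endorsement.
		return verifyRestrictedEndorsement(tx)
	default:
		return errors.New("unknown transaction type")
	}
}

// Stubs standing in for the real checks sketched elsewhere in this thread.
func verifyOrdererSignatures(tx Tx) error     { return nil }
func verifyRestrictedEndorsement(tx Tx) error { return nil }
```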
You don't want to implement the Fabric configuration parsing in its entirety. It's a deep rabbit hole and you might hit the earth's inner core. Also, the policy engine is very flexible and you don't need to implement all of it. You just need to implement verification for a subset of use cases and support some reasonable policy, not every possible policy, and the Fabric deployment use case will just need to adjust and configure a policy that is supported.
Why do you need a lifecycle at all? Assuming you never support FPC reading non-FPC and vice versa (which I think is the way to go) and only support a simple, naive endorsement policy of "one-of-any-enclave", do you need a lifecycle at all? Endorsement policies exist because you cannot assume honest execution, but with SGX you can.
No, there are lots of corner cases and gotchas, especially with state-based endorsement... I suggest you not implement anything beyond the bare minimum, as I think "less is more" applies here.
Ultimately, we have to be able to reconstruct and maintain the channel definition with the MSP configs for the orderer and the orgs in the channel, plus related policies. I've definitely noticed that the protobuf parsing is non-trivial given, e.g., the nesting of sub-protobufs via binary blobs and the like. Protobuf as a technology certainly seems to have room for improvement :-)
Oh, of course we don't want to support all possible policies; that's why I mentioned a subset of policies. E.g., for channel policies and lifecycle policies, I guess one could reasonably restrict to majority (which, if I'm not mistaken, is now also the default for these policies)? For our ercc chaincode we also need the same policy as lifecycle, so this should be covered as well, and for FPC we anyway have our "custom subset" (initially a single org as an OR term); see the sketch after this comment.
This is not completely true: on the one hand, we need the lifecycle for ercc, which is a standard chaincode. On the other hand, we also need it for FPC chaincode to get the initial explicit agreement from all orgs on a particular chaincode (essentially the same reason Fabric needs it: to bootstrap the root-of-trust and get a common agreement on it).
Oh, state-based endorsement we have already explicitly ruled out as unsupported, guessing it raises lots of issues which we'd better not tackle at this point in time :-)
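A minimal Go sketch of that restricted policy subset: only a majority rule (channel/lifecycle/ercc) and a single-org OR term (initial FPC endorsement). The `SignedBy` callback stands in for the MSP-rooted X.509 signature verification assumed elsewhere; the names are illustrative, not Fabric's policy engine API.

```go
package ledgerenclave

import "errors"

// SignedBy reports whether the named org contributed a valid signature.
type SignedBy func(orgID string) bool

// EvalMajority passes iff strictly more than half of the orgs signed.
func EvalMajority(orgs []string, signed SignedBy) error {
	count := 0
	for _, org := range orgs {
		if signed(org) {
			count++
		}
	}
	if 2*count <= len(orgs) {
		return errors.New("majority policy not satisfied")
	}
	return nil
}

// EvalSingleOrgOr passes iff the one designated org signed -- the initial
// "custom subset" for FPC chaincode endorsement.
func EvalSingleOrgOr(designated string, signed SignedBy) error {
	if !signed(designated) {
		return errors.New("designated org did not sign")
	}
	return nil
}
```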
Right. For FPC, retrieving the crypto material of the organizations should be sufficient.
We are definitely on that path with the current version, which requires "one enclave at a designated peer" (#273).
The "one-of-any-enclave" policy is reasonable for some (but not all) use cases. There are two risks.
As a side note, these issues also came up in the context of the Private Data Objects project. The first is addressed through a set of provisioning services for the contract state keys, and a set of storage services for the state. The second is addressed by (optionally) letting users re-run the computation on other enclaves for verification.
With FPC Lite / FPC 1.0 this issue is tabled for now, until we come back to future extensions including rollback protection.
The integration of the Ledger Enclave is one of the fundamental pillars of the FPC security architecture; however, the implementation is also challenging.
This issue continues the discussion based on the FPC RFC.