Can you shoehorn SSI into a device's tiny footprint? The TinyML community undertook this years ago to squeeze machine learning models onto resource-constrained devices.
How small a footprint?
We don't have measurements of the compute, memory, power, storage, or bandwidth costs of each protocol part in the trust stack.
We don't have a model that product managers, product designers, or developers can use to scope how much of the stack can fit, which bits to leave out, how to configure the remaining bits to preserve resources, workarounds for what doesn't fit, etc.
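To make the shape of such a scoping model concrete, here is a minimal sketch in Python. Every protocol part, cost figure, and budget below is an invented placeholder, not a measurement; the point is only to show what a per-part cost table plus a budget check could look like.

```python
# Hypothetical back-of-the-envelope budget model for fitting SSI
# protocol parts onto a constrained device.
# All numbers below are invented placeholders, NOT measured costs.

from dataclasses import dataclass

@dataclass(frozen=True)
class ProtocolPart:
    name: str
    flash_kb: int   # code + static data footprint
    ram_kb: int     # peak working memory

# Invented example figures, in priority order, for illustration only.
PARTS = [
    ProtocolPart("did:key resolution", flash_kb=40, ram_kb=8),
    ProtocolPart("Ed25519 signing", flash_kb=20, ram_kb=4),
    ProtocolPart("DIDComm messaging", flash_kb=120, ram_kb=32),
    ProtocolPart("VC verification", flash_kb=90, ram_kb=24),
]

def fits(parts, flash_budget_kb, ram_budget_kb):
    """Greedily keep parts (in priority order) that stay within budget."""
    chosen, flash, ram = [], 0, 0
    for p in parts:
        if (flash + p.flash_kb <= flash_budget_kb
                and ram + p.ram_kb <= ram_budget_kb):
            chosen.append(p.name)
            flash += p.flash_kb
            ram += p.ram_kb
    return chosen

# e.g. a small MCU with 256 KB flash and 48 KB RAM left for the stack
print(fits(PARTS, flash_budget_kb=256, ram_budget_kb=48))
```

With real per-part measurements substituted in, a table like this is exactly what would let a product team answer "how much of the stack fits, and what do we drop?" before committing to hardware.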
When extensions to SSI models are proposed, does anyone calculate the resource tax imposed on devices that use them and their connectivity?
Are there lightweight versions of, say, DIDComm that would do the job well enough?