
CIP-0128? | Preserving Order of Transaction Inputs #758

Merged: 16 commits, Aug 20, 2024

Conversation

solidsnakedev
Contributor

@solidsnakedev solidsnakedev commented Feb 2, 2024

This CIP is desired by Plutus developers to improve the validation efficiency of transaction inputs.

We propose the introduction of a new structure for transaction inputs aimed at significantly enhancing the execution efficiency of Plutus contracts.

This CIP facilitates explicit ordering of transaction inputs, diverging from the current state. This explicit ordering enables seamless arrangement of input scripts intended for utilization within the application's business logic.

A similar idea was previously discussed in an older CIP:
#231

Rendered Version

@Ryun1 Ryun1 added the Category: Plutus Proposals belonging to the 'Plutus' category. label Feb 4, 2024
@Ryun1
Collaborator

Ryun1 commented Feb 4, 2024

This proposal would benefit from comments from the IOG Plutus and Ledger teams 🙏
@michaelpj @lehins

@WhatisRT
Contributor

WhatisRT commented Feb 5, 2024

I don't fully understand the use-case. Why is the list of indices necessary? Can you not filter the set of inputs by their redeemer? Is that too slow? Also, how does having a list of inputs fix this issue? If your script expects all relevant inputs in the beginning of the list, it still needs to check that this is in fact the case, which should be almost as expensive as filtering them.

So if you need a list of indices without duplicates, the easiest way to check this is by also requiring the list to be sorted (failing if it isn't). Then it can be done easily in linear time. Is that maybe the problem? Do you want a list of indices that selects a sublist in an arbitrary ordering?
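The linear-time check WhatisRT describes can be sketched as follows (an illustrative Python sketch, not Plutus code; the function name is invented for the example):

```python
# Validate a redeemer-supplied index list and select the corresponding
# inputs in a single pass. Requiring the indices to be strictly increasing
# makes "sorted and duplicate-free" checkable in linear time.

def select_by_sorted_indices(inputs, indices):
    # Strictly increasing => sorted with no duplicates, checked in O(n).
    if any(i >= j for i, j in zip(indices, indices[1:])):
        raise ValueError("indices must be strictly increasing")
    return [inputs[i] for i in indices]
```

Selecting a sublist in an *arbitrary* order would need a different (more expensive) duplicate check, which is exactly the distinction raised above.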

Like @michaelpj, I also don't understand the two points in the Alternatives section. The first looks like it may relate to some of the things I mentioned, but I'm not sure about that.

@lehins
Contributor

lehins commented Feb 6, 2024

In general I am in favor of this change, for reasons unrelated to how Plutus sees those inputs. I haven't gone through the CIP in detail, since that is unfortunately not going to be on our priority list until the next era. However, here is some quick feedback.

First of all, it should not be a list of inputs, but an ordered set of inputs, because we can't allow duplicates in that field.

Furthermore this is a fairly complicated topic in general and we'd have to be very careful in order to get it right, because:

  • we can't change the behavior of the old scripts, so PlutusV1 through PlutusV3 will probably continue getting those in sorted order
  • redeemers currently rely on the sorted order, which is not good, so we'll need to address that somehow in a backwards compatible manner, which is gonna be pretty tricky.

So, as it currently stands, the CIP is definitely far from how this should be addressed.
But, I promise you that I will revisit this topic once Conway is released.

@colll78
Contributor

colll78 commented Feb 7, 2024

I don't fully understand the use-case. Why is the list of indices necessary? Can you not filter the set of inputs by their redeemer? Is that too slow? Also, how does having a list of inputs fix this issue? If your script expects all relevant inputs in the beginning of the list, it still needs to check that this is in fact the case, which should be almost as expensive as filtering them.

The benefit of being able to control the ordering of the inputs is not to move everything to the beginning of the inputs list. It is to be able to order the elements with respect to how they will be processed so that it becomes unnecessary for a validator to traverse the tx inputs multiple times to find all the inputs they need (and sort them into the order that the validator needs them in). Additionally, the benefit is that having full control over the order of the inputs vastly simplifies the following design pattern and makes it accessible to DApps that do not have extremely specialized (proprietary) offchain tooling:

```haskell
validatorB :: AssetClass -> BuiltinData -> BuiltinData -> ScriptContext -> ()
validatorB stateToken _ _ ctx =
  let ownInput   = findOwnInput ctx
      authInput  = findAuth ctx
      goodOutput = findOutputWithCriteria ctx
   in validate ownInput authInput goodOutput
  where
    -- Linear search: scan the inputs until one holds the state token.
    findAuth :: ScriptContext -> Maybe TxInInfo
    findAuth ScriptContext{scriptContextTxInfo = TxInfo{txInfoInputs}} =
      find (\TxInInfo{txInInfoResolved} ->
              assetClassValueOf (txOutValue txInInfoResolved) stateToken == 1)
           txInfoInputs

    -- Linear search: scan the outputs for one matching the criteria.
    findOutputWithCriteria :: ScriptContext -> Maybe TxOut
    findOutputWithCriteria ScriptContext{scriptContextTxInfo = TxInfo{txInfoOutputs}} =
      find criteria txInfoOutputs
```

The above script is extremely inefficient. For every input and output we are searching for, we apply expensive checks to each element in the inputs / outputs until we find the elements that pass the checks.

It can be vastly improved to:

```haskell
validatorB :: AssetClass -> BuiltinData -> (Integer, Integer, Integer) -> ScriptContext -> ()
validatorB stateToken _ (inputIdx, outputIdx, authIdx) ctx =
  let ownInput  = elemAt inputIdx inputs
      authInput = elemAt authIdx inputs
      ownOutput = elemAt outputIdx outputs
   in -- check that the element at authIdx does indeed hold the auth token
      (assetClassValueOf (txOutValue (txInInfoResolved authInput)) stateToken == 1)
      -- check that the element at inputIdx is indeed the input being unlocked
      && (ownOutRef == txInInfoOutRef ownInput)
      -- check that the output at outputIdx does indeed satisfy the unlock criteria
      && criteria ownOutput
  where
    txInfo  = scriptContextTxInfo ctx
    inputs  = txInfoInputs  txInfo
    outputs = txInfoOutputs txInfo
    Spending ownOutRef = scriptContextPurpose ctx
```

You can read more about the above:
https://github.com/Anastasia-Labs/design-patterns/blob/main/UTXO-INDEXERS.md

This whole design pattern takes advantage of the deterministic script evaluation property. Because the inputs to Plutus scripts are fixed, we never need to search for anything on-chain: since we know what the script context looks like at the time of transaction construction, we can search for the element off-chain and provide the element's index to the on-chain code (via the redeemer); the on-chain code then only needs to verify that the element at the provided index does indeed satisfy the criteria we are expecting. Even without O(1) index lookup this is extremely powerful (from O(n) conditional executions to O(1) conditional executions). When we get O(1) index-access data structures this becomes even more powerful, as it will allow validators to look up anything in O(1) with this pattern.
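The off-chain search / on-chain verification split described above can be sketched as follows (an illustrative Python sketch; the helper names are invented for the example):

```python
# Off-chain: find the element's position; this costs no execution units.
def find_index_offchain(xs, predicate):
    for i, x in enumerate(xs):
        if predicate(x):
            return i
    raise ValueError("no matching element")

# On-chain analogue: a single indexed lookup plus one predicate check.
# If the index hint supplied in the redeemer is wrong, validation fails.
def check_onchain(xs, i, predicate):
    assert predicate(xs[i]), "indexed element does not satisfy the criteria"
    return xs[i]
```

Determinism is what makes this sound: the script context seen on-chain is exactly the one known at transaction-construction time, so a correct index hint cannot become stale.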

The issue is that without this CIP, this design pattern is extremely difficult to implement because of the complexity it introduces to off-chain code. There is currently no open-source off-chain transaction framework capable of taking advantage of this design pattern. The problem you run into when you attempt to use this pattern with existing off-chain frameworks is as follows: when building the tx off-chain, you search through the inputs and add the indices of the inputs you are looking for to the redeemer (in the above example we pass the redeemer as `new Constr(0, [INDEX_OF_OWN_INPUT, INDEX_OF_INPUT_WITH_BATCHERTOKEN])`). But after you balance the transaction, new inputs are added, so the indices you put in the redeemer are no longer correct; and if you then adjust the redeemer to reflect the new locations of the inputs you are interested in, rebalancing can change the input set yet again. So currently you need to first put in a dummy redeemer with the same size as the actual redeemer, then perform balancing, and finally swap out the dummy redeemer for the redeemer with the correct indices.
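The dummy-redeemer workaround can be sketched roughly like this (Python pseudocode for an imaginary framework; `balance` and both helper functions are hypothetical stand-ins, not any real off-chain API):

```python
def find_indices(inputs, wanted):
    """Locate each wanted input's position in the final (sorted) input list."""
    return [inputs.index(w) for w in wanted]

def build_with_index_redeemer(draft_inputs, wanted, balance):
    # Pass 1: balance with a dummy redeemer of the same encoded size as the
    # real one, so the fee (and therefore the input set) stops changing.
    dummy = [0 for _ in wanted]
    balanced_inputs = balance(draft_inputs, redeemer=dummy)
    # Pass 2: the input set is now fixed, so compute the real indices and
    # swap the dummy redeemer for the correct one.
    redeemer = find_indices(balanced_inputs, wanted)
    return balanced_inputs, redeemer
```

The size-matched dummy is the key trick: replacing it afterwards does not change the fee, so the balanced input set stays stable.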

Filtering by redeemer is far too expensive. Right now, nearly every smart contract protocol on Cardano has a RequestValidator

RequestValidator: the validator that users interact with. A user sends a UTxO to the request validator, and the datum of that UTxO describes the action the user wants to perform (and thus the action the protocol is allowed to perform with that UTxO). Technical users run bots to continuously process these UTxOs in bulk (in some protocols this action can only be performed by a permissioned actor, whereas in others anyone can perform it).

For these protocols, the number of requests they can process in a transaction is extremely important because it represents the protocol throughput. The more requests they can process in a transaction, the higher throughput the protocol can achieve. So for instance a DEX that can process 50 requests in a transaction will have 5 times the throughput of a DEX that can only process 10 requests in a transaction.

The criteria that is required to process a request depends on the content of the datum of the request. For instance, for a request UTxO for any of the major DEXs, the most common request type is swap. The criteria to process a swap request is:

For each request UTxO in the inputs (i.e. being processed by the transaction) there must be a corresponding output that pays out the requested amount of tokens (or an amount within some acceptable slippage percentage of that amount, in which case the allowed slippage percentage is also present in the request UTxO) to the address provided in the request UTxO's datum. A corresponding output that fulfills the request is not in and of itself sufficient: to process a request UTxO, you must also correctly modify some global state (global in the sense that it is shared across all request UTxOs in the transaction, i.e. a pool UTxO) based on the content of the request.

In general, the criteria for processing any request UTxO (of any type, on any DApp on Cardano) typically require that there be one or more outputs (often just one) that correspond to the request UTxO; these outputs "fulfill" the request and are commonly referred to as "destination outputs" or "payouts".

When processing requests in bulk, this means each request UTxO in the transaction inputs needs to be matched with one or more "destination outputs" / "payouts" in the transaction outputs. Because each request has some impact on a shared state (ie pool UTxO), the order in which requests are processed is important and DApps need to be able to efficiently control this.

The common way to "match" these request UTxOs with the outputs that fulfill them is to create two lists:

  1. List of request UTxOs from transaction inputs [i0, i1, i2, ..]
  2. List of payouts (from the transaction outputs) [p0, p1, p2] such that p0 fulfills request i0, p1 fulfills request i1 and so forth.

Then we traverse the two lists together and check that the conditions required to fulfill each request are indeed satisfied by the corresponding UTxO in the payouts list, and that the pool state is adjusted properly with respect to each request. Currently, locating the list of payouts is easy: we provide an index in the redeemer to the location of the first payout and then grab n elements starting at that index in the tx outputs list, where n is the number of requests; this is linear time. Creating the list of request UTxOs is much harder. We cannot control the order of the tx inputs, so we cannot enforce that the first request UTxO to be processed precedes the other request UTxOs in the list. This means we either have to traverse the tx inputs list once for each request UTxO, making the time complexity quadratic, or gather all the request inputs and then sort them, which is also quadratic on-chain.
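The two matching strategies described above can be contrasted in a small sketch (illustrative Python; `fulfills` and `is_request` are placeholder predicates):

```python
# With a preserved, controlled input order: requests[k] is fulfilled by
# payouts[k], so one linear pass over the two lists suffices.
def process_linear(requests, payouts, fulfills):
    return len(requests) == len(payouts) and all(
        fulfills(r, p) for r, p in zip(requests, payouts))

# Without control over input order: first extract the requests from the
# transaction inputs and sort them into processing order. This extra
# traversal/sort is the (up to quadratic on-chain) cost the CIP removes.
def process_quadratic(tx_inputs, is_request, payouts, fulfills):
    requests = sorted(x for x in tx_inputs if is_request(x))
    return process_linear(requests, payouts, fulfills)
```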

This problem (of matching inputs to corresponding outputs) is broadly described in the following CPS:
https://github.com/cardano-foundation/CIPs/blob/5ec27da12ce01197fc9202f890e4914231ee9d4a/CIP-%3F%3F%3F%3F/README.md

Do you want a list of indices that selects a sublist in an arbitrary ordering?

Yes, that would be great to have on top of this. But this CIP will still be more efficient, since we can group the related elements together, provide the index of the first element, and then grab the N elements starting from there.

@michele-nuzzi

I'm sorry, I struggle to see what problem this CIP solves.

The inputs are ordered on the ledger, but don't have to be in the transaction.

The plutus script context is constructed based on the order of the inputs in the CBOR, not on the ledger.

If a transaction is built with a certain order, the order will be preserved in the script context, and will be ordered on the ledger (but that doesn't matter for plutus)

so the property you are looking for is already there.

Perhaps the tools you are using to build transactions will order the inputs, but that is not necessary; plu-ts-offchain preserves the inputs order as they are specified and I have never had problems with the transaction ordering.

P.S. redeemer indexing is handled by the ledger, so the redeemer index will need to be the one of the sorted set, not the one of the order of the transaction CBOR

@lehins
Contributor

lehins commented Feb 8, 2024

If a transaction is built with a certain order, the order will be preserved in the script context, and will be ordered on the ledger (but that doesn't matter for plutus)

That is actually not true.

The order in which inputs or redeemers are placed into the transaction is not preserved; it will always be sorted by the ledger. This is a problem we would like to fix in the ledger, and a way to solve it would be to preserve the order of inputs as they were placed on the wire.

@michele-nuzzi

That is actually not true.

The order in which inputs or redeemers are placed into the transaction is not preserved; it will always be sorted by the ledger. This is a problem we would like to fix in the ledger, and a way to solve it would be to preserve the order of inputs as they were placed on the wire.

I'll run some test transactions to make sure my understanding is correct

@michele-nuzzi

That is actually not true.

The order in which inputs or redeemers are placed into the transaction is not preserved; it will always be sorted by the ledger. This is a problem we would like to fix in the ledger, and a way to solve it would be to preserve the order of inputs as they were placed on the wire.

@lehins that is in fact true; I was wrong, thank you for the correction.

At this point, though, it is strange that a transaction whose inputs are not ordered still passes phase-1 validation.

I believe either this CIP should be implemented, or some restriction should be added in the ledger for consistency.

@WhatisRT
Contributor

WhatisRT commented Feb 9, 2024

Ok, so from the above discussion it seems to me like this would really solve the following two problems:

  • coin selection is hard with the currently viable approaches to selecting (reordered) sublists of inputs, and
  • it is evidently unintuitive behaviour to not preserve the order.

I think these are somewhat good points, but for the sake of completeness, here are some counterarguments to those points:

  • Coin selection was always going to be hard in the presence of scripts (we've had discussions about this before Alonzo was launched). A general solution to this problem probably has to accept slightly overpaying the fees sometimes. Fixing this problem here doesn't address the general issue.
  • While unintuitive, it's consistent with how we present other information to scripts, first and foremost multi-asset bundles. It is possible to put them in an arbitrary order inside a transaction as well, and they'll just get sorted. And that's a good thing in this case, since people really don't think that two token bundles are different just because you swap out the order. And arguably that's the same with inputs: it doesn't matter in what order I'm taking my cash out of the drawer, just that it's sufficient to pay for whatever I want.

Now maybe this is one of these cases where a specific use-case trumps somewhat abstract arguments. I still consider the case for this somewhat weak, but I think I'm now at the point where I'd regard it as a nuisance rather than an anti-feature.

For completeness, let me repeat my main argument against preserving the order: It will likely make future scripts completely uninteroperable. It is likely that every script will simply assume that the inputs it cares about will be at the beginning, and two such scripts will be unable to run at the same time, even if it would have been perfectly fine to do it otherwise. For example, I know that (at least some versions of) the MuesliSwap script don't care at all about other inputs. So any user can do swaps more efficiently by combining them with some other transaction if they want to. I think that implementing this CIP closes the door on that opportunity for future scripts.

@michele-nuzzi

I would add that it could prove incredibly useful to extend the CIP to reference inputs, which are also a set in babbage.cddl and hence ordered in the script context

@michaelpj
Contributor

For completeness, let me repeat my main argument against preserving the order: It will likely make future scripts completely uninteroperable. It is likely that every script will simply assume that the inputs it cares about will be at the beginning, and two such scripts will be unable to run at the same time, even if it would have been perfectly fine to do it otherwise.

I think this argument proves too much, since the same argument applies today to outputs. Very many scripts care about outputs, if your argument holds then today they would all just assume that their outputs came first, rendering them non-interoperable. I think this means that making inputs a list can't make things much worse.

@solidsnakedev
Contributor Author

I would add that it could prove incredibly useful to extend the CIP to reference inputs, which are also a set in babbage.cddl and hence ordered in the script context

I think it makes sense; I'll update the CDDL accordingly

@ch1bo
Contributor

ch1bo commented Apr 23, 2024

I just wanted to chime in on this proposal and mention that the Hydra protocol would benefit from this too, as we have a quite constrained validator that is not expected to compose. Furthermore, I remember we too were quite puzzled and annoyed by inputs being "re-ordered".

@lehins I see that cardano-ledger has an OSet type now, which could be used for the spending inputs of a tx body? Besides that, and updating plutus-ledger-api to reflect this, e.g. in a PlutusV4, do you see any other steps this needs?

@solidsnakedev
Contributor Author

I've updated the CIP as we are getting close to the Conway transition, addressing the concerns from @lehins @michaelpj @michele-nuzzi @rphair

@solidsnakedev solidsnakedev changed the title CIP-???? | Transaction Inputs as List CIP-???? | Transaction Inputs as Unordered Set Jul 12, 2024
@rphair
Collaborator

rphair commented Jul 14, 2024

thanks @solidsnakedev ... have put on CIP meeting agenda for Review next time; hope you can make it (cc @michele-nuzzi @MicroProofs): https://hackmd.io/@cip-editors/93

@fallen-icarus

fallen-icarus commented Jul 17, 2024

For completeness, let me repeat my main argument against preserving the order: It will likely make future scripts completely uninteroperable. It is likely that every script will simply assume that the inputs it cares about will be at the beginning, and two such scripts will be unable to run at the same time, even if it would have been perfectly fine to do it otherwise.

@WhatisRT I don't think this is true. I think this misses the economic utility that is possible with DApp composability. Imagine if there was an options contract you wanted to buy, but the sale price was in WMT while you only had DJED. You have your assets in DJED to protect yourself from market volatility. If you had to first convert your DJED to WMT in one transaction and then buy the options contract in another, you are exposing yourself to the market volatility of WMT while you wait to see if you can actually get the options contract (before someone else buys it). What if someone beats you to the options contract after you converted your DJED to WMT? You now need to convert it back to DJED, and you likely lost money (due to the tx fees + DApp fees + market volatility).

The above scenario is entirely avoidable if you just compose converting DJED to WMT with buying the options contract. If the options contract is bought before your transaction is processed, your composed transaction will fail due to the options contract UTxO being missing (no collateral is lost either since no scripts need to be run). This also means your DJED wasn't unnecessarily converted to WMT. Composing the actions guarantees that your DJED will only be converted to WMT if, and only if, you successfully buy the options contract. The risk of loss in the case where you don't get the options contract is entirely eliminated due to DApp composability. Risk management plays a huge role in economics (and regulations) and I think DeFi DApps that compose will out-compete those that do not. If corporations are also going to eventually use DeFi, they need to be able to manage their economic risk as much as possible. For the big players (governments, corporations, etc), throughput is nowhere near as important as risk management.

Another example that doesn't deal very much with risk management is the ability to unify the liquidity across all stablecoins. Currently, if you have DJED but need USDC to buy something, you need to take the same two step approach as in the previous example. The reason is, despite both DJED and USDC effectively being USD, smart contracts cannot securely know this and therefore, must treat them differently. As a consequence, the USD liquidity in DeFi is fractured across all stablecoins. But what if you could convert DJED to USDC (for a slight conversion fee) in the same transaction where you buy the item? If you could, the liquidity would no longer be fractured across all stablecoins; the ability to compose converting them with other DApps means DeFi would effectively have a single "meta" stablecoin. And again, the conversion would only happen if you successfully buy the item at the end of the composed chain of actions. DApps that sacrifice composability in the name of throughput will be cutting themselves off from this "meta" stablecoin liquidity.

IMO, DApp composability is the killer feature for eUTxO. You can compose 5-10 separate DApps on Cardano right now; AFAIU this would be prohibitively expensive on an account-style blockchain. I think this composability will make things possible in DeFi that aren't even possible in TradFi. I'm sure some DApps will sacrifice composability for throughput in the short term (like they are currently doing), but I think the economic utility strongly favors composable DApps to the point where future DApps will prioritize composability.

I am personally in favor of this CIP because I think it will actually help composability in certain scenarios. For example, for some of my protocols, the order of the protocol's required outputs depends on the order of the protocol's inputs. The only thing that matters is the order of the protocol's inputs/outputs: other inputs/outputs can be interspersed and the required inputs/outputs can appear at any point in their respective lists. (This is for throughput reasons since it allows me to only traverse the inputs and outputs list once each.) Right now, I can't control the order of the inputs which means I can't control the order of the outputs. This doesn't stop me from being able to compose my protocols, but if there was another DApp where the input/output requirements were more strict (for whatever reason), that DApp would be more likely to compose with my protocols if the order of the inputs could be controlled (and therefore the order of the outputs could also be controlled). Even if the customized ordering is less efficient overall, composing the actions in a single transaction can actually save the end-user money on net, and decrease their overall economic risk.

@WhatisRT
Contributor

@fallen-icarus I fully agree that composability is a fantastic feature, but that's the point I was making: the average script would be less composable with this feature. The cheapest way to get the inputs for your script is just to require that they are at the beginning of the list. So if you want your script to be composable, you now have to argue that it's better to linearly search through the list of inputs (or maybe include that information in the redeemer), both of which increases execution cost. So now the cheapest way to implement your script is non-composable.

@fallen-icarus

So now the cheapest way to implement your script is non-composable.

This is what I was trying to argue is false. I think you are thinking about a specific script in isolation which I do not think is realistic. If Alice is considering buying an options contract with an asset she doesn't have, what matters to her is the total cost of the high-level action (DEX conversion + options purchase). Whether this requires one transaction or two transactions is (mostly) irrelevant to her.

In an extreme case, if Alice chooses to sacrifice composability, she could possibly save 1 ADA in execution costs for this high-level action. But since she couldn't compose, she is now subject to market volatility which could easily be 5%. If 1000 ADA was involved, the total extra cost to Alice is the 50 ADA volatility loss.

However, if she chooses to compose, she pays the extra 1 ADA in execution costs, but doesn't have to deal with the market volatility so the total extra cost to her is just the 1 ADA in extra tx fees. Alice saves 49 ADA by composing! To break even would require market volatility of 0.1% which is extremely likely to be exceeded by pretty much all trading pairs...

The math seems to clearly favor composable DApps. The cheapest way for end-users to use DApps is to compose them, even if individually the DApps are slightly more expensive.
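The break-even arithmetic above, spelled out (numbers taken directly from the example):

```python
# Alice's choice: pay ~1 ADA extra in execution fees to compose, or skip
# composing and accept market-volatility risk while the two txs settle.
extra_fee = 1          # ADA, extra execution cost of the composed tx
amount = 1000          # ADA exposed between the two uncomposed txs
volatility = 0.05      # 5% adverse move in the worst case described

loss_if_not_composed = amount * volatility    # volatility loss in ADA
savings = loss_if_not_composed - extra_fee    # net saving from composing
break_even = extra_fee / amount               # volatility that equals the fee
```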

So if you want your script to be composable, you now have to argue that it's better ...

I don't think you need to argue anything. The market will naturally punish (through increased costs) those who choose to not compose DApps. The same goes for DApp developers who sacrifice composability. Users will gravitate towards DApps that allow them to accomplish their high-level actions more cheaply, which likely means composable DApps.

@WhatisRT
Contributor

Well, you're making the argument that users want composable scripts, but the question is whether authors will make them. Writing and auditing a script is very expensive, and script authors might not care much about composability themselves. Also, users might simply not have the choice here: if somebody wants to execute a particular script, and it happens to be non-composable, then they only have the choice of not using it. Maybe they'll complain to the script author, but what's the realistic chance that the author will spend a bunch of extra money and effort on making a composable version? And will it be adopted properly? There are still massive amounts of Plutus V1 transactions being made.

So I think it's unlikely that market forces are strong enough to ensure composability. Companies really like to make walled gardens for all sorts of things, and lots of people dislike it, but the dislike is clearly not strong enough to put pressure on them. I don't see why it would be different here.

Collaborator

@rphair rphair left a comment


The last CIP meeting was in favour of giving this a CIP number sooner rather than later... as I recall, changes (like this one) requiring a hard fork are generally considered around the time of the previous hard fork so it seems a good time to robustly discuss this idea.

@solidsnakedev one of the things that should be discussed & resolved early on is not only the question of the now oppositely-sensed title, but also the changes in the text that would be required to resolve this crucial ambiguity (#758 (comment)).

Please change the directory name to CIP-0128 and update the link that points to the rendered version of your proposal with the new pathname. 🎉

@colll78
Contributor

colll78 commented Jul 27, 2024

Well, you're making the argument that users want composable scripts, but the question is whether authors will make them. Writing and auditing a script is very expensive, and script authors might not care much about composability themselves. Also, users might simply not have the choice here: if somebody wants to execute a particular script, and it happens to be non-composable, then they only have the choice of not using it. Maybe they'll complain to the script author, but what's the realistic chance that the author will spend a bunch of extra money and effort on making a composable version? And will it be adopted properly? There are still massive amounts of Plutus V1 transactions being made.

So I think it's unlikely that market forces are strong enough to ensure composability. Companies really like to make walled gardens for all sorts of things, and lots of people dislike it, but the dislike is clearly not strong enough to put pressure on them. I don't see why it would be different here.

Yes, the author of this CIP wants composability. This CIP actually helps facilitate composability, and I don't think any developer in the ecosystem willing to invest the money and development hours to get an application to mainnet and pay for an audit would sacrifice composability (a powerful feature that attracts liquidity and users) to save an extremely small amount of ex-units by enforcing that the expected inputs appear at the beginning of the list. If there were any demand for that trade-off, the DApps on mainnet today would already be doing it for the outputs.

The amount of ex-units you save by enforcing that all validation-relevant inputs appear at the beginning of the inputs list is completely negligible. What developers actually care about is that the ordering of inputs is preserved. In practice, developers will require that the relevant inputs form a contiguous sub-list within the transaction's inputs, and it will not matter where that sub-list starts, because its start will be indexed via the redeemer. This design pattern is already used in nearly every major DApp protocol today, except it is far more error-prone: because the canonical ordering cannot be relied upon, the redeemer must carry a full list of indices, one per relevant input, instead of just the start of the contiguous, already-ordered sub-list. The cost of indexing the start of the relevant inputs is trivial (`dropList indexOfSubListFromRedeemer txInputs`).

If you want composability, you do not have to argue for linearly searching the list of inputs. Linear search is extremely inefficient and unnecessary, and it should never be performed in on-chain code under any circumstance, because it completely throws away perhaps the biggest advantage Cardano's smart contract platform has over those in other ecosystems: the deterministic script evaluation property. We made huge design sacrifices to obtain this property; not taking advantage of it would be like running a marathon with ankle weights. Deterministic evaluation means we know exactly what the inputs list looks like at transaction-construction time, so we can pass in the index where the element we are looking for should be and fail if it is not there; assuming the transaction was built correctly, validation succeeds. Anything can therefore be looked up in O(1) checks by providing the on-chain code, via the redeemer, with the index where the element is supposed to be, and erroring if the indexed element is not what we are looking for. The fact that any element can be found on-chain without a linear search is an extremely powerful property of our smart contract platform that simply doesn't exist outside our ecosystem.
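The two patterns described above (O(1) redeemer-indexed lookup, and indexing the start of a contiguous sub-list of relevant inputs) can be sketched as follows. This is a Python illustration of the idea only, not Plutus code: all names (`indexed_lookup`, `relevant_window`, the example input list) are hypothetical, and a real validator would operate on the script context rather than plain lists.

```python
def indexed_lookup(tx_inputs, claimed_index, expected_ref):
    """O(1) lookup: off-chain code computed claimed_index at
    transaction-construction time; on-chain we only verify the claim
    instead of searching the list."""
    if claimed_index >= len(tx_inputs):
        return False
    # Succeed only if the input at the claimed position is the expected one.
    return tx_inputs[claimed_index] == expected_ref

def relevant_window(tx_inputs, start_index, count):
    """Contiguous sub-list pattern: the redeemer carries only the start
    of the already-ordered window of relevant inputs."""
    window = tx_inputs[start_index:start_index + count]
    if len(window) != count:
        raise ValueError("redeemer start index out of range")
    return window

# At transaction-construction time (off-chain) the final ordering is known,
# so the redeemer can simply carry the positions used below.
inputs = ["fee_utxo", "collateral", "pool_utxo", "order_1", "order_2"]
assert indexed_lookup(inputs, 2, "pool_utxo")
assert relevant_window(inputs, 3, 2) == ["order_1", "order_2"]
```

The key design point is that the validator never searches: it checks a claimed position and fails if the claim is wrong, which is safe precisely because deterministic script evaluation guarantees the on-chain view matches what the transaction builder saw.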

@rphair rphair changed the title CIP-0128? | Transaction Inputs as an Ordered Set CIP-0128? | Preserving Order of Transaction Inputs Jul 30, 2024
@solidsnakedev (Contributor, Author)

@rphair I’ve made the updates based on the feedback. Please let me know if there’s anything else that needs attention.

@rphair (Collaborator) commented Jul 31, 2024

thanks @solidsnakedev - I think all the points have been addressed; we don't have much CIP bandwidth now, so I've put it on the agenda again with an updated title & assigned CIP number, so hopefully it can move to Last Check at that meeting (https://hackmd.io/@cip-editors/94) or in the meantime if editors & Ledger reviewers are in agreement.

cc @Ryun1 @Crypto2099
cc @lehins @WhatisRT
cc @colll78 @fallen-icarus @michele-nuzzi

@rphair (Collaborator) left a comment:
Pending categorisation as per #758 (comment), the CIP meeting has decided this should be Last Check for the next meeting (https://hackmd.io/@cip-editors/95), since the construction & validity of the proposal itself were considered satisfactory. I'll ✅ this as soon as the pending conversations are resolved.

@rphair rphair added the State: Last Check Review favourable with disputes resolved; staged for merging. label Aug 6, 2024
@rphair (Collaborator) left a comment:
Just went back and checked that all previously raised issues have been resolved... mainly the confirmation of the Ledger category. Looking forward to seeing this merged at next CIP meeting unless there are any further reservations.

@rphair rphair requested review from Crypto2099 and Ryun1 August 9, 2024 03:05
@rphair rphair merged commit 6bae516 into cardano-foundation:master Aug 20, 2024
@rphair rphair removed the State: Last Check Review favourable with disputes resolved; staged for merging. label Sep 3, 2024