Since every model gives different results (including the pre-built models), it would be useful to have a comparison utility that helps users decide which model to use.
Proposal:
Add a comparison utility for different models.
Pseudocode:
# compare_models
# input_param: array of models to compare
# output_param: comparison graph
compare_models(*custom_models)
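The pseudocode above could be fleshed out along these lines. This is a minimal sketch under assumptions not stated in the issue: models are assumed to be callables sharing a common interface, and the comparison is scored with mean absolute error on a shared benchmark set (the actual model API and metrics would depend on the library). The returned mapping could then be rendered as the proposed comparison graph, e.g. a bar chart.

```python
from typing import Dict, Sequence, Tuple

def compare_models(
    test_data: Sequence[Tuple[float, float]],
    *custom_models,
) -> Dict[str, float]:
    """Evaluate each model on the same (input, expected) pairs.

    Returns a mapping of model name -> mean absolute error, which the
    caller can tabulate or plot to compare models side by side.
    """
    scores: Dict[str, float] = {}
    for model in custom_models:
        # Assumes each model is callable on a single input value.
        errors = [abs(model(x) - y) for x, y in test_data]
        scores[getattr(model, "__name__", repr(model))] = sum(errors) / len(errors)
    return scores
```

For example, comparing two toy models on the same data would yield one score per model, with the lower-error model easy to pick out:

```python
def double(x):
    return 2 * x

def triple(x):
    return 3 * x

scores = compare_models([(1.0, 2.0), (2.0, 4.0)], double, triple)
# scores["double"] is 0.0; scores["triple"] is 1.5
```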
This is very interesting! For a general-use API, there are a lot of dynamics that would need to be captured (I think). For example, most quantum algorithms require many circuit compilations during execution, and different steps of the same algorithm might compile very differently. There are also many metrics that users might be interested in. How would you propose handling these different situations/dynamics?