Allow objects to be in multiple sites #1712

Open
Daniel63656 opened this issue May 16, 2024 · 0 comments

Motivation
I noticed that in polyphonic/pianoform music, music21 organizes objects into Voice streams. Because a Voice is part of a Measure, which in turn is part of a PartStaff, a voice cannot change staff, a case that sometimes occurs in piano music.
Unfortunately, music21 is currently set up so that an object can be in only one site, which prevents it from expressing voice and staff independently, since that would require registering an object in two different sites (a Voice and a Staff). I think this library really needs to allow this in order to be a truly general representation of music notation. It would also be useful in other cases: consider tuplets and beams, for example. Right now, they are set up as objects stored on notes (correct me if I'm wrong). From a modelling perspective, it would make a lot of sense to treat beams and tuplets as sites that live in a Voice and contain objects themselves. For that, see these figures from the musicdiff paper: https://inria.hal.science/hal-02267454v2/document
[Figures: nested beam and tuplet structures, from the musicdiff paper]
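To make the idea concrete, here is a minimal sketch of what multi-site membership could look like. This is hypothetical pseudo-model code, not music21's actual API: the `Element`, `Site`, `Voice`, and `Staff` classes and their methods are illustrative names I made up for this example.

```python
# Hypothetical sketch (NOT music21's actual API): an element that can
# register itself in multiple independent containers ("sites"), so a
# note can belong to a Voice and, separately, to a Staff.

class Element:
    def __init__(self, name):
        self.name = name
        self.sites = []          # every container this element belongs to

    def site_of_type(self, cls):
        # Look up the (first) containing site of a given container type.
        return next((s for s in self.sites if isinstance(s, cls)), None)


class Site:
    def __init__(self, name):
        self.name = name
        self.elements = []

    def insert(self, el):
        self.elements.append(el)
        el.sites.append(self)    # back-reference: element knows all its sites


class Voice(Site): pass
class Staff(Site): pass


# A note that lives in voice 1 but is rendered on the lower staff:
note = Element("C4")
Voice("voice 1").insert(note)
Staff("lower staff").insert(note)

print([s.name for s in note.sites])   # ['voice 1', 'lower staff']
```

With this shape, "which voice?" and "which staff?" become two independent lookups on the same object, so a voice crossing to the other staff is just a different Staff site for some of its notes.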

As the figures show, this makes it easy to express the nested nature of beams and tuplets, but it runs into the same problem as Voice/Staff: objects must be in multiple sites (both located inside the Voice). Because beams can extend across tuplet borders, you really need two independent structures.
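The beam-across-tuplet case can be sketched in a few lines. Again this is an illustrative toy, not music21 code; the note names and group labels are invented for the example:

```python
# Hypothetical sketch: beams and tuplets as two INDEPENDENT groupings
# over the same sequence of notes. Because a beam may cross a tuplet
# boundary, the same note must be a member of both groups at once,
# so neither grouping can simply be nested inside the other.

notes = ["e1", "e2", "e3", "e4", "e5", "e6"]

# Two triplets covering notes 1-3 and 4-6 ...
tuplets = [("triplet A", notes[0:3]), ("triplet B", notes[3:6])]
# ... while one beam joins notes 3-4, crossing the tuplet border.
beams = [("beam A", notes[0:2]), ("beam B", notes[2:4]), ("beam C", notes[4:6])]

membership = {n: [] for n in notes}
for name, group in tuplets + beams:
    for n in group:
        membership[n].append(name)

print(membership["e3"])   # ['triplet A', 'beam B']
print(membership["e4"])   # ['triplet B', 'beam B']
```

Notes e3 and e4 each sit in one tuplet and one beam from overlapping spans, which is exactly the situation a strict single-parent hierarchy cannot represent.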

I think if music21 overcame this specific limitation, it would become the library OMR researchers like myself desperately need right now. I would happily work on this if someone is open to discussing the issue. What makes me qualified to do so?
I have spent the last decade developing a music model of my own that can handle all these cases, albeit less dynamically (objects have fixed parents and cannot be instantiated or used without them). I know the model can express all these cases because I also wrote a custom score renderer, playback, and MusicXML/MIDI import/export for it. The model can also be edited and automatically tracks changes to tell the renderer what to update. The problem is that it is written in Java, and I need a powerful model for my work in OMR, especially to try to provide a common standard for comparing OMR results. You can look at the structure I came up with:
[Figure: music_model class structure]

I don't want to translate my model, because it is designed for other use cases, but I would happily work on this library if it makes it more versatile and usable for my research. My dataset contains, and is explicitly tested for, voices changing staves, so I cannot use music21 right now (among other smaller issues, see #1633, #1638).
Please tell me whether you think having objects in multiple sites is feasible.

Intent

[x] I plan on implementing this myself.
[ ] I am willing to pay to have this feature added.
[ ] I am starting a discussion with the hope that community members will volunteer their time to create this. I understand that individuals work on features of interest to them and that this feature may never be implemented.
