How should we represent MIDI data?
One thing is clear: we should represent it atomically (all bytes bundled together) rather than as a stream like the Max `midiin` object.
In Jamoma1 we use dictionaries. It's a little hard to tell whether this is too heavy-handed or whether it just feels that way because the dictionary implementation in JamomaGraph is so poor.
We could have a bunch of types for different MIDI events. Or just a "midievent" type that is a `std::array<int>`?
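As a concrete point of reference, here is a minimal sketch of the single-type option, assuming all bytes of one message are bundled atomically as argued above; the name `midi_event`, the fixed three-byte payload, and the accessors are illustrative assumptions, not a proposed design:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical atomic MIDI event: all bytes of one message bundled together,
// rather than arriving one byte at a time as in a stream-style API.
struct midi_event {
    std::array<std::uint8_t, 3> bytes {};  // status byte + up to two data bytes
    std::size_t                 size  {};  // how many of the bytes are valid (1-3)

    std::uint8_t status()  const { return bytes[0] & 0xF0; }  // message-type nibble
    std::uint8_t channel() const { return bytes[0] & 0x0F; }  // channel nibble
};

// Example: note-on, channel 0, middle C, velocity 100
// midi_event e { { 0x90, 60, 100 }, 3 };
```

One immediate wrinkle: a fixed three-byte array cannot hold sysex messages, which is an argument either for a variable-size container or for the bunch-of-types approach.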
One place we could start is thinking about the use cases (a dispatch sketch for the first one follows this list):

- MIDI input driving a synth in a C++ coded app
- MIDI input for mapping (and learning mappings) in Jamoma Modular
- Feeding a VST plug-in
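For the first use case, a bundled event makes the dispatch side of a C++ app fairly direct. This sketch reuses the hypothetical `midi_event` from above; `my_synth` is a stub invented purely for illustration:

```cpp
#include <cstdint>
// relies on the midi_event sketch above

// Stub synth interface, invented for this example.
struct my_synth {
    void note_on (std::uint8_t ch, std::uint8_t pitch, std::uint8_t vel) { /* start a voice */ }
    void note_off(std::uint8_t ch, std::uint8_t pitch)                   { /* stop a voice  */ }
};

// Because the event is atomic, a note-on arrives as one complete unit
// and dispatch is a single switch on the status nibble.
void handle(const midi_event& e, my_synth& synth) {
    switch (e.status()) {
        case 0x90: synth.note_on (e.channel(), e.bytes[1], e.bytes[2]); break;
        case 0x80: synth.note_off(e.channel(), e.bytes[1]);             break;
        default:   break;  // other message types ignored in this sketch
    }
}
```

(A real handler would also treat a note-on with velocity 0 as a note-off, per the MIDI spec.)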
Is there anything we can learn from the Max/Pd/SuperCollider implementations?
Another approach is not to represent "MIDI" data at all. Instead, we would have our own "note" data format, and MIDI gets converted into (or out of) it. This abstracts the thing we are trying to represent away from the transport.
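Here is a rough sketch of what a transport-neutral note type and the MIDI-to-note conversion could look like; the struct layout, field names, and normalized ranges are assumptions for discussion only:

```cpp
#include <array>
#include <cstdint>

// Hypothetical transport-neutral note: floating-point pitch and velocity,
// so the representation is not tied to MIDI's 7-bit integer resolution.
struct note_event {
    double pitch    {};  // MIDI note-number scale, but continuous (60.0 = middle C)
    double velocity {};  // normalized 0.0 - 1.0
    bool   on       {};  // note-on vs note-off
};

// Convert a complete 3-byte MIDI note message into the neutral format.
// Note-on with velocity 0 is treated as note-off, per the MIDI spec.
note_event from_midi(const std::array<std::uint8_t, 3>& bytes) {
    const bool is_on = (bytes[0] & 0xF0) == 0x90 && bytes[2] > 0;
    return { static_cast<double>(bytes[1]), bytes[2] / 127.0, is_on };
}
```

Going the other direction (note to MIDI) would have to quantize pitch and velocity back to 7 bits, which is where this abstraction shows its cost.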
Thoughts from @lossius, @nwolek, or @Nilson?