feat: modules as plugins #1002 #3062

Draft · wants to merge 45 commits into main

Conversation

@Julusian (Member) commented Oct 1, 2024

in progress..

This needs some further UX thought, but the functionality is making progress.

Managing modules

There is a new 'modules' tab, which uses the split layout approach.
(screenshot of the modules tab)

The left column lists all the modules that are installed, allowing you to select which one to manage. Some more details should be added to this table at some point.
It has a similar style of filtering to what we use for the connections list. One of the options is to show all modules on the store.

At the top of the page, you can upload a module package (a single module, perhaps from the module author, or downloaded from the store), or a module bundle (multiple modules in one archive, for bulk importing).

The list from the store is refreshed when opening this view if it hasn't been updated in a while, and it can be refreshed on demand by clicking the button.
Currently this is using a temporary 'api', pulling json files and module builds from https://github.com/Julusian/companion-module-repository-poc. This repo has been seeded with the module versions from a few releases of companion.

In the right panel, when a module is selected, the table lists all the versions of the module either reported by the store or already installed.
You can install/uninstall versions here with the button on the left. That button changes to different icons if a version can't be installed for some reason (eg, the module-api is too new for companion, or the api doesn't give a download url).
The icons on the right indicate whether each version of the module is in use.

The data for this table is fetched from the api separately, so it has its own refresh button.
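
To make the data flow concrete, here is a rough TypeScript sketch of the kind of listing data the UI currently assumes from the temporary api; the interface and field names are illustrative guesses, not the final schema:

```typescript
// Illustrative only: the real store api does not exist yet, so these
// interface and field names are assumptions based on what the UI needs to render.
export interface StoreModuleVersion {
  version: string // semver of the module build
  apiVersion: string // the module-api version it was built against
  isPrerelease: boolean
  downloadUrl: string | null // null disables the install button ("api doesn't give a download url")
}

export interface StoreModuleListing {
  id: string // eg 'my-module' (hypothetical id)
  name: string
  versions: StoreModuleVersion[]
}

// The versions table in the right panel is then a merge of what the store
// reports and what is already installed locally.
export interface ModuleVersionRow extends StoreModuleVersion {
  isInstalled: boolean
  isInUse: boolean
}
```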

Managing connections

When adding a connection, you can choose what version of the module to use. You can choose between the 'dev' version (if one is found), installing the latest stable version from the store, and any already installed stable versions.
(screenshot of the module version picker when adding a connection)

On any connection, you can now open the right panel, even when it is disabled/broken. In here you can change the label of the connection, the version of the module that it is using, and the update policy. This includes more versions than when adding the module.
(screenshot of the connection edit panel)

Some additional behaviours:

  • When selecting a version that is not installed, it will be installed before the connection is activated (see the sketch after this list)
  • After installing a new version of a module, connections will automatically restart if needed. When a missing version becomes available, any connections waiting to use it will be started.
  • The 'add connections' tab loads and caches the list of modules. The intention is to scrape the api on demand, or automatically after some number of hours.
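
As a rough illustration of the install-before-activate behaviour above, here is a minimal sketch; the helper names are hypothetical, not the actual implementation:

```typescript
// Minimal sketch of the install-before-activate flow, using hypothetical
// helper functions; the real implementation is structured differently.
interface ConnectionConfig {
  moduleId: string
  moduleVersion: string | null // null means 'use latest stable'
}

async function activateConnection(
  config: ConnectionConfig,
  isInstalled: (moduleId: string, version: string) => boolean,
  installFromStore: (moduleId: string, version: string) => Promise<void>,
  startConnection: (config: ConnectionConfig) => Promise<void>
): Promise<void> {
  if (config.moduleVersion !== null && !isInstalled(config.moduleId, config.moduleVersion)) {
    // Install the missing version first, then start the connection
    await installFromStore(config.moduleId, config.moduleVersion)
  }
  await startConnection(config)
}
```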

TODO:

  • All the UX
  • Differentiating between stable and prerelease doesn't work currently
  • Hook it up to a real api
  • Figure out how/where to show a hint that there are newer versions of modules that could be installed.
  • one click update to latest for an individual module
  • Better handling of offline environments
  • Much error handling
  • Implement more granular reloading of connections after module install/uninstall
  • Should builtin modules be moved into the same store modules path, so that they persist when upgrading/downgrading? builtin modules will be removed
  • Installed modules shouldn't be stored inside the release-specific config dir
  • Make sure that the versions persist correctly through import/export
  • Install modules during import process
  • How should the legacyId flow work now?
  • should the module declaring itself as a prerelease be part of the manifest, the api, or a file inside the archive?
  • bulk import of module-bundles
  • include all modules in search, and install on first add
  • move modules tab to near settings
  • add warning to module dir, to discourage editing
  • remove builtin modules, once there is a package to import
  • better api cache expiry
  • can't edit the version of an enabled connection where the needed version isn't installed
  • rework 'manage modules' page to always open sidebar when no version is installed
  • allow forcing a connection across modules? (in case legacyId is missing)
  • allow module version selection during page import?

@Julusian Julusian added this to the v3.5 milestone Oct 7, 2024
@Julusian Julusian linked an issue Oct 10, 2024 that may be closed by this pull request
@krocheck (Member)

What's the intent in terms of bundling modules in the future? Should we have multiple download options, such as:

  • minimal - no bundled modules
  • common - top 50? modules
  • full - all current modules

In terms of the UI, I'm thinking we use the Connections tab for inspiration.

  • Have a main listing of all modules "in the system" (bundled, dev, installed) filterable and searchable. Have the "Import Module" button at the top here.
  • "+ Add connection" would be the "Discover" tab showing Bitfocus cache and display any not installed or modules with new versions. "Add" button is instead "Install" with a popup to select which version to install.
  • Clicking an existing module pulls up what would be the "edit" tab but called something different. There you can see all the versions available, purge unused ones, check the help files for each one ... potentially see a listing of the other versions available (same as under the discover tab).

I will want to talk about data structures once #3045 is merged. I think the cache data should live in the cache store. I standardized on _ instead of - in key name stores as part of that. Could parts of this be better served with their own table instead of full JSON blobs? I'm sure you've already considered some of these points.

@Julusian (Member, Author)

What's the intent in terms of bundling modules in the future? Should we have multiple download options, such as:

I think that having multiple download options will add more confusion and complexity than it is worth.

My view is that we should always include 'common' modules. And the versions bundled in the betas will be updated much less often, maybe only just before a stable release.
As part of this 'store', I think it needs to be possible to download the modules from the bitfocus website, to cater to offline users who can't use the store inside of companion.

I'm not yet sure if it should be possible to include modules inside exports, or if they need to be installed separately (for online users, this could be automatic).

In terms of the UI, I'm thinking we use the Connections tab for inspiration.

I think that makes sense. That list could have icons to indicate what 'types' of build each module has.

I'll give this a try, this is just the inspiration I need to have some direction :)

I will want to talk about data structures once #3045 is merged.

The code is still in the 'getting things working' stage, and I haven't thought much about the cache beyond knowing that what I am doing is temporary. Tbh, until there is an api to scrape, I don't have any idea how the fetched data will look, or whether I am assuming it will provide more at the first level than it does.

@Julusian Julusian force-pushed the feat/loadable-modules2 branch 4 times, most recently from 781b691 to 31cc6f0 Compare October 13, 2024 20:02
@dnmeid (Member) commented Oct 16, 2024

Thank you Julian for digging into this super long-standing request.
Here are a few thoughts I have:

  • Should we call it "module" or "plugin" in the UI? Personally I'm fine with both, maybe plugin is the more correct term.
  • Would it be possible to automatically include the store modules in search, and download the selected version on first use (if the user has an internet connection), right from the connections tab? Then a user with internet wouldn't need to take much care of module management.
  • I see the point of the order of the tabs when you are forced to use the Modules tab first to get any module. But when downloading modules is also possible from the Connections tab, I see module management more like a setting and would place the tab closer to the settings tab on the right.
  • I would not make any differentiation between custom modules and non-custom modules. I think there should just be modules/plugins. So the whole concept of the local development modules path would be superseded by a single directory where all the modules are stored. The user should be free to store modules there from the OS, or Companion can download modules there, so this would be an accessible directory. The only tiny caveat here is if there are multiple modules with the same version number; maybe then add the filename and/or date for unambiguity.
  • There should be a method of mass management, like downloading a lot or all modules, downloading updates of all used modules, deleting outdated versions
  • Isn't the "discover modules" sub tab somehow redundant?
    Right now the workflow seems to be: 1. go to modules, 2. discover modules on the right, 3. add the module so it appears on the left, 4. select the module on the left so it can be managed on the right, 5. use the module in the connections tab.
    Couldn't we have just one list of modules in the management tab, showing a combination of all modules, the discovered store modules and the locally available ones? Filters can still be applied to show only the local ones, only the remotely available ones, only the betas, or only the ones with available updates.

According to the open questions:

Figure out how/where to show a hint that there are newer versions of modules that could be installed

I'd suggest showing a hint in the module list of the management tab, maybe a little up arrow or something. On the tab header we could have a dot with the number of updates available.

Better handling of offline environments

That's where the mass management is a benefit. You are online once, download all modules and go to the job. Also, the user-accessible module directory helps users who are installing completely offline. They can install Companion offline and then just copy/unzip a bundle with all the needed modules into the directory.

Should builtin modules be moved into the same store modules path, so that they persist when upgrading/downgrading?

I'd say yes. Then the only difference between a builtin module and all other modules is that the builtins are always delivered with the core application.

How should the legacyId flow work now?

Why should there be any difference from today? I see that the legacyId system is not super robust and there might be an increase of problems if the user can downgrade modules more easily. Do you have any particular problems in mind?

What's the intent in terms of bundling modules in the future?

My suggestion would be to always bundle the internal modules and that's it, no different installers for the core application. But at the same time there needs to be a convenient download option for an "all modules bundle", so you can grab them e.g. as a zip archive and then just drop them into the module directory, with only one additional step if you are doing a complete offline installation.

@dnmeid
Copy link
Member

dnmeid commented Oct 16, 2024

And another thing comes to my mind right now: today we are talking about modules for connections. Maybe in the future we want to have plugins also for surfaces, APIs, Themes, Language-Packs...
Maybe we should keep an eye on these possibilities right now when it comes to the interface.

@krocheck (Member)

@dnmeid Most of what you mentioned above @Julusian and I have been discussing in Slack, so I'll add some of that here.

I tend to agree that the Modules tab should be more of a 'settings' aspect of the workflow instead of a stopping point in getting a working installation going (except for maybe offline users). Custom modules having their own section was discussed, and conflicts related to version numbers in custom vs published releases were a main sticking point for keeping them clearly separated. I also think it would be better for the "built in" modules to not necessarily show up in the "installed" list; my preference is that they install on demand as part of creating a connection, and I thought it was a good option to allow JIT install of a newer "discovered" version of a module at the time of creating the connection. But none of that negates the need for the Modules tab to purge unused versions, upload custom versions, or bulk upload a module bundle. Part of not installing the built-in modules was so that any used modules would be copied over as installed-on-use, so that they are retained on a core update.

I'm not sure yet what the module upgrade notifications should look like, but that's definitely going to be a part of this somehow.

The offline workflow is going to be the more interesting aspect. I don't like the concept of bundling "commonly used" modules because it's doubtful that's going to 100% cover offline users' needs anyway. My thoughts have been to have a 'minimal' and 'full' version (minimal having no modules and full being what we do today). Those could just as easily be called 'online' and 'offline'. Or we go with no bundled modules and have an offline module bundle download. I'm good with either. But the notion that someone can "sync up" after they download, before they take the system back offline, is not workable for my use-cases because we need the stable installs on a shared drive for the offline systems. Those would not have an opportunity to grab modules from an online system right after downloading and installing.

Julian and I have discussed a module approach for Surfaces as a start, so that has been in the back of our minds during this. Services would be next on the list after that.

@Julusian (Member, Author)

Should we call it "module" or "plugin" in the UI? Personally I'm fine with both, maybe plugin is the more correct term.

I don't feel strongly on this, just that later on surfaces will also use a similar 'plugin' system. So if we call this plugins, would the surface ones share the ui, or be called something else? There is already some confusion from users over how to enable additional surface types.

Would it be possible to automatically include the store modules in search

Good point, I don't see why not. It might be a little confusing, as I'm not sure if we can show the module help until we have installed the module (the store api is very unknown at this point).

I would not make any differentiation between custom modules and non-custom modules.
Only tiny caveat here is if there are multiple modules with the same version number,

This is my current reasoning for it. If we have a module on disk claiming to be 'v1.1.1', is that the same as the version on the store, or is it a user-modified/custom build? That affects how we should treat it in the ui. If it's the same as the store then we shouldn't offer to download that version. If it's different then we should.
This separation is intended to force the separation of 'we know where this came from' and 'the user gave us this, it could be anything'. There are other ways to solve this, depending how strict/accurate we want this to be, and how much we trust users to blindly copy these folders and make edits to the files they contain.

The user should be free to just store modules there from the OS, or Companion downloads modules there, so this would be an accessible directory.

I disagree with this, I don't want this directory to be accessible.
The current implementation of managing these modules assumes a very fixed folder structure, in part to make its life easier, and to keep things predictable. Without that predictability we will get weird behaviour when version numbers clash.

There should be a method of mass management, like downloading a lot or all modules, downloading updates of all used modules, deleting outdated versions

Yeah, not something I've thought about. I think this should be a follow up PR, none of it sounds critical to making this PR useful.

Isn't the "discover modules" sub tab somehow redundant?

I'm not sure that it is. Even if the store modules are automatically included in search, being able to install a module without adding a connection for it might be a useful thing?
Maybe more so if there is a filter, or if it only shows non-installed modules?

Why should there be any difference from today? I see that the legacyId system is not super robust and there might be an increase of problems if the user can downgrade modules more easily. Do you have any particular problems in mind?

I could write a lot on the thoughts I have about this, but trying to keep it short:
The main thing is where the definition exists. In the api, it makes sense to have a deprecated field for each module, so that in the lists and elsewhere we can show that a module is deprecated. It would be ideal to know from that same api payload which module to direct users towards.
I suppose that I'm really just trying to justify making this the way to do it, instead of having each module version declare what it replaced.
But this approach has some downsides: it doesn't work offline, and it raises the question of whether we want to add all the current legacyIds as modules in the api for this to work.

I haven't really thought too much about when/how we push connections across from one module to another, because being able to pin connections to a specific version of a module makes that harder (especially as we should allow going backwards across the rename, just like we do within a module).

Should builtin modules be moved into the same store modules path, so that they persist when upgrading/downgrading?

The default/recommended way of adding a module will be to leave it set to 'use latest version'. Meaning that most of the time, when you update, it won't try to use the old versions.
I am exposing manual selection so that users can pin an old version when needed, such as because of a bug in a newer version, or for testing a new version as a secondary module.

I propose that modules should be copied into the same store modules path just before they are first used. Doing it for every module automatically both wastes disk space and adds noise to that folder and to companion (forcing users to clean out old versions of modules they never use). Copying on first use still provides the safety net of preserving the current version of a module when updating.

My suggestion would be to always bundle the internal modules and that's it
But at the same time there needs to be a convenient download possibility for a "all modules bundle"

My concern here is that the number of modules is only going to increase. Currently we are at, I think, ~400, taking 280MB on disk across over 7400 files. (That size might jump by ~35MB if we repackage the legacy modules so they are no longer handled in a special way.)
We aren't talking huge numbers, but those 7400 files are noticeable when installing on windows.

If we are ok with shipping these 400, is there a point where we will say that there are too many?
That's a lot of modules and files, of which each user uses only a small minority. Some of them don't work anymore, some have a target audience of one organization, and some need internet to work so don't need to be available in an offline system.

That said, I think we need to keep them all in for a couple of releases. That might be necessary anyway to ensure that we aren't deleting modules out from under people.

But a question I have: while there is a simplicity to being able to load in 'all modules', is that something that is actually useful? Surely you already know what kit you have beforehand, so you can narrow it down to a small list of modules?

I should add, one thing I am leaning towards is that full exports probably wouldn't package up the modules they use, which could make doing a full import on a new machine painful if we don't ship all the modules (although those could be the wrong/old versions).

@dnmeid (Member) commented Oct 19, 2024

I would not make any differentiation between custom modules and non-custom modules.
Only tiny caveat here is if there are multiple modules with the same version number,

This is my current reasoning for it. If we have a module on disk claiming to be 'v1.1.1' is that the same as the version on the store, or is that a user modified/custom build?

I still think there should be no difference.

  1. The division between custom modules and non-custom modules sounds like a developer's solution to a developer's problem.
  2. The division doesn't actually solve this problem; you can still end up with the same version numbers in each section, and then everything gets even messier.

We can't avoid that users want to install module x.y.z that they have obtained somewhere, and it may have the same version number although it may or may not have different code. We can't guarantee that version numbers in the store are always correct either.
I think the best solution is to make sure we have at least one thing that makes a module unique in an installation and that can be easily understood by the user. I think the filename would be the best such thing. All files in a folder need to have a unique name, and the OS will enforce that no matter how a file got into a folder. That means if we have e.g. blackmagic-atem_v3.14.1 it is a different name from blackmagic-atem_v3.13.0, and if a user somehow manages to get a custom module where the developer didn't change the version number, and it is named blackmagic-atem_v3.14.1 too, and he wants to store it in the modules folder, there will be an error and he will be forced to cancel, overwrite the existing file, or rename one of the files. All this is known to users and can easily be understood. The only part Companion would need to do is to check whether there are multiple files of the same module with the same version, and then additionally show the filename in the GUI so the user can pick the module he really wants.
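
A rough sketch of that duplicate detection, assuming modules are stored as files named `<module>_v<version>` in a single folder (the naming is taken from the example above, not an agreed format):

```typescript
import { readdirSync } from 'node:fs'

// Group module files by "<moduleId>@<version>" based on a "<moduleId>_v<version>"
// filename convention (taken from the example above, not an agreed format).
// When a key occurs more than once, the GUI would additionally show the
// filename so the user can pick the file they really want.
function findAmbiguousModuleFiles(moduleDir: string): Map<string, string[]> {
  const byKey = new Map<string, string[]>()
  for (const filename of readdirSync(moduleDir)) {
    const match = filename.match(/^(.+)_v(\d+\.\d+\.\d+\S*?)(\.\w+)?$/)
    if (!match) continue
    const key = `${match[1]}@${match[2]}`
    byKey.set(key, [...(byKey.get(key) ?? []), filename])
  }
  // Keep only the entries where more than one file claims the same module+version
  return new Map(Array.from(byKey).filter(([, files]) => files.length > 1))
}
```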
I also don't think we have to care whether a module is really the same as in the store or not. If a user has a modified version and it has an unmodified version number, he will know the reasons why he has it and how to deal with it. We, on the other hand, don't know the reasons or how to deal with it, and so I think we should not try to deal with it.
But we definitely should raise module developers' awareness of that topic and make sure they use e.g. build numbers or alpha or beta markers for their custom builds.

But a question I have, while there is a simplicity to being able to load in 'all modules', is that something that is actually useful? Surely you already know what kit you have beforehand, so you can narrow it down to a small list of modules?

Yeah, for me that is a must have. I would roughly divide the Companion users into two groups here: 1. users with a slowly changing set of devices, like studios, streamers, churches, home users, small companies with limited scope; 2. users with dynamically changing devices, like freelance technicians, full service companies, rental companies.
The first group tends to have internet connectivity, they can download just the modules they need, and they usually do more complex Companion setups, so they are more affected by breaking changes and more concerned about version management. The second group is usually challenged with very dynamic situations where gear is added at the last minute; they often don't have internet and they usually don't have much time, so the goal is not to make the most fancy setup, but to make this additional thing work asap. They tend to have company computers that are updated maybe once a year. For them it is important to have something to carry on a USB-drive that gives you a Companion with all options, without having to think a lot or make additional selections.
Size does matter of course, but I'd rather download and carry 2GB of modules than invest 5 minutes in making a selection of the modules I may need on my next job just to find out I missed one when I'm there.
Hence my suggestion: don't bundle any modules except the internal ones with Companion, and additionally offer an all-modules bundle. With bundle I mean a bundle of only the modules, without Companion itself. That bundle could automatically be updated the same way the beta version is updated today.

Another thing just came to my mind: do we want to show module vs. core compatibility anywhere? Version numbers are one thing, but the used version of the base-module / API is a different one.

@Julusian (Member, Author)

We can't avoid that users want to install module x.y.z that they have obtained somewhere and that may have the same version number although it may or may not have different code. We can't guarantee that version numbers in the store are always correct either.

What I am afraid of is a user reporting a bug in 1.2.3 of a module, and the maintainer then spending some hours trying to reproduce the issue and failing, because the user was running a modified build of the module which caused the bug. Maybe they modified it, maybe someone else they work with did and didn't tell them.
This might be me being overly paranoid about not trusting users to be sensible, but users are frequently unpredictable and do unexpected things. So if a module dev can't trust that a bug report saying 1.2.3 is broken is actually about 1.2.3, that can make their life a lot harder.
If I'm modifying some code quickly to make it work, I often wouldn't bother updating the version number, especially if I had no intention to distribute it when I made the change.

But it sounds like this separation has to go anyway, to make the bundle import work with similar ability to installing from the store. If I really want to keep it, perhaps adding another file to each module which contains checksums of the other files would be enough to cause it to be flagged as modified, unless someone really wanted to avoid that.
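
A minimal sketch of that checksum idea, assuming a flat `checksums.json` written at packaging time; the file name, layout, and helper names are assumptions, not part of the current implementation:

```typescript
import { createHash } from 'node:crypto'
import { readdirSync, readFileSync, writeFileSync } from 'node:fs'
import { join } from 'node:path'

// Write a checksums.json next to the module files at packaging time.
// A flat directory is assumed for brevity; a real implementation would
// walk subdirectories too.
function writeChecksums(moduleDir: string): void {
  const sums: Record<string, string> = {}
  for (const file of readdirSync(moduleDir)) {
    if (file === 'checksums.json') continue
    sums[file] = createHash('sha256').update(readFileSync(join(moduleDir, file))).digest('hex')
  }
  writeFileSync(join(moduleDir, 'checksums.json'), JSON.stringify(sums, null, 2))
}

// A module whose files no longer match their recorded checksums would be
// flagged as 'modified' in the UI.
function isModified(moduleDir: string): boolean {
  const expected: Record<string, string> = JSON.parse(readFileSync(join(moduleDir, 'checksums.json'), 'utf8'))
  return Object.entries(expected).some(
    ([file, sum]) => createHash('sha256').update(readFileSync(join(moduleDir, file))).digest('hex') !== sum
  )
}
```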

Another thing just came to my mind: do we want to show module vs. core compatibility anywhere? Version numbers are one thing but the used version of the base-module / API is different.

That is a good point to consider here. When installing from the store, companion will know what api versions it supports, so it doesn't offer to install ones which it knows are incompatible. The same applies when importing a single module downloaded in the browser; we just need to make sure the browser version can give users a good indicator of version support.

For the 'module-packs', I guess they will need to be versioned for what version of companion they target. To keep it simple, they could use the same major+minor version numbers. We could either generate them at the same time as doing a release of companion, or have them follow their own cycle but under the same numbering.
You often wouldn't be getting the latest version of each module, but it would be no worse than today.
And if there is one you explicitly want the newer version of, you could always download it separately.
Making these packs is going to require some thought on workflow, but I am still leaning towards the first release still bundling every module, and dropping some/all in the release after.

@dnmeid (Member) commented Oct 21, 2024

So if a module dev can't trust that a bug report saying 1.2.3 is broken is actually about 1.2.3, that can make their life a lot harder.

That's why I say we should raise developers' awareness of that topic. I think the users that modify the code themselves won't file a bug report for the modified version. The problem is developers sending out modules with modified code and the same version number; they are shooting themselves in the foot.
So far I have often done it the same way myself, but with a plugin system you never know where a preview version spreads, so I'd rather use unique beta version numbering.
A checksum though is always a good thing even if it is only for ensuring integrity.

wip: remove separation of custom versions
wip: install when adding
wip: split out file
wip: move modules into shared dir
wip: implement install latest
wip: module download headers
wip: better error handling
wip: format versions table dates better
wip: improve refresh and last updated styling
wip: some better error handling
wip: use url routing for modules manager
wip
wip: tidy
fix: module filter producing bad results
wip: docs
wip: add file on disk to indicate that store version is a prerelease
wip: refactor
wip: versions visibility
wip: refactor visibility header buttons to avoid duplication
wip: better tables
wip: implement things a little more
wip: start of module manage page
wip: start of per module data fetching
wip: remove unused files
wip: change types
wip: remove old modules poc ui
wip: some ui boilerplate
wip: move cache storage into cache
wip: some refinement of modules discover tab
wip: new ui to discover and install modules
wip: add import module button
wip: new ui design
fix: reimplement reloading dev module (untested)
wip: some more flow
wip: some crude ui to install, backend to install modules
wip: crude start of module discover ui
wip: start of backend to scrape module store
fix: add warning
wip: present as proper table
wip: fix module adding
wip: adding and removing custom modules
wip: first version of importing custom modules
wip: alternate list of all modules
wip: fixup module list page a bit
wip: fix help modal
fix: incompat warning
wip: config fixup
wip: choose module versions to use
wip: reworking (again)
wip: try using specific version when adding
wip: better
wip: attempt at ui for adding modules
wip: something
wip: change to single dev module
wip:
wip: list versions in ui
wip: Some ui
wip: pump more data to the ui
wip: remove separation between legacy and bundled
wip: something ui
Until we have a store, or a bundle to import we need the builtin modules

This reverts commit 34c26c5b1a0f1c1989ad08af2986db65911a5757.
@Julusian (Member, Author) commented Nov 5, 2024

Would it be possible to automatically include the store modules in search, and download the selected version on first use (if the user has an internet connection), right from the connections tab? Then a user with internet wouldn't need to take much care of module management.

I'm struggling to figure out how to present this in a vaguely sane way.
To give all the options, this list needs to include (better names needed, going for clarity in this comment; a rough type sketch follows the list):

  • Latest already installed version
  • Latest already installed prerelease
  • Install latest version from store
  • Install latest prerelease from store
  • Each specific version
    • These could be deprecated which would be good to show.
    • These should indicate whether they are installed or not
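
A rough TypeScript sketch of the choices this dropdown would have to represent (purely illustrative; these names are not from the actual code):

```typescript
// Rough sketch of the choices the dropdown would have to represent;
// purely illustrative, these names are not from the actual code.
type ModuleVersionChoice =
  | { type: 'installed-latest-stable' }
  | { type: 'installed-latest-prerelease' }
  | { type: 'install-latest-stable-from-store' }
  | { type: 'install-latest-prerelease-from-store' }
  | {
      type: 'specific-version'
      version: string
      isInstalled: boolean // needs an indicator in the UI
      isDeprecated: boolean // when not installed, hidden unless explicitly enabled
    }
```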

It is also worth noting that installing a new version could affect other connections using that module, but that can be indicated with a warning shown below the dropdown when one of those versions is selected.

I don't know how to present this in a dropdown that doesn't feel confusing/cluttered.
A large part of the challenge is that a dropdown doesn't give a nice way to filter the options. I can see reasons to only want to be offered versions which are installed, and we should not show deprecated versions that aren't installed unless explicitly enabled.
Maybe the answer here is to reuse something like the module management table as the way of selecting the version, but I'm not sure if that is a good idea.

So I am tempted to treat this as two different scenarios.

  1. If a version of the module is installed, leave it as is. This also matches the versions offered in the connection config panel.
  2. Module is not installed, so offer just 'install latest version' and 'install latest prerelease'.

If the user wants other versions, they can do that in the modules tab where we have space and options on how to present this in a meaningful manner.
Whatever happens, the options available in the connection config panel should match what is available when adding a connection to avoid confusion, and putting a table of modules here would be strange.

@Julusian (Member, Author)

I am getting near to the end of my todo list, and one of the chunks I haven't reimplemented yet is the legacyId logic.

The problems I see with the current system are:

  1. With it being defined in the manifest, it could be different across different versions of the module. For simplicity we would use the latest; could that lead to weird cases?
  2. With it being defined in the manifest, we need to have a manifest of the new module to know to nudge users across to it. This could be mirrored in the module list api (like we are doing for other properties), but should it be?
  3. The api has the ability to deprecate a module. It would be really nice if the api could also know what module to send users to. This could be defined at the time of deprecating the old module, but it means doing this in two places.
  4. It would be easier to check that the legacyIds are valid and aren't malicious if modules push you to another, rather than pull you from one.

None of this sounds like a dealbreaker, just some edge cases to be careful of.
The last two points could possibly be handled by the api backend: when publishing a module, check for any unexpected legacyIds, and automatically compile the list of suggested replacement modules based on the legacyIds of others.
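
A sketch of what that backend check could look like, using hypothetical names rather than an existing api:

```typescript
// Hypothetical sketch: when modules are published, compile a map of
// "old module id -> suggested replacement" from the legacyIds of all modules.
interface PublishedModule {
  id: string
  legacyIds: string[]
}

function compileReplacementMap(modules: PublishedModule[]): Map<string, string> {
  const replacements = new Map<string, string>()
  for (const mod of modules) {
    for (const legacyId of mod.legacyIds) {
      if (replacements.has(legacyId)) {
        // Two modules claiming the same legacyId is the ambiguous case;
        // a real backend would probably reject or flag the publish here.
        console.warn(`legacyId "${legacyId}" is claimed by both ${replacements.get(legacyId)} and ${mod.id}`)
      }
      replacements.set(legacyId, mod.id)
    }
  }
  return replacements
}
```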

The downsides I see with moving it away from the manifest:

  1. When offline, how will companion know what rules to use?

I think this one will be a big problem, with offline being considered a common scenario.

Given this list, I have no strong argument for changing it, so for now I won't change it.

@dnmeid (Member) commented Nov 24, 2024

  1. With it being defined in the manifest, it could be different across different versions of the module. For simplicity we would use the latest, could that lead to weird cases?

Afair our legacyId system doesn't rewrite the instance type in the db, right? It just reinterprets the instances on the fly and hides the legacy module. So one module can take over several older ids (n:1), but one old module can't be taken over by several newer ones (1:n). So we actually have one edge case right now: if multiple modules declare themselves to be the successor of one module. I guess the latest one wins then.
Replacement so far has been done automatically. But now users are able to up- and downgrade. That means we can't hide or remove legacy or deprecated modules or versions.

For the normal workflow I think we will need a replacement now on a per-module basis and not per system. E.g. new has legacyIds = ['old','wrong']. The user has multiple instances of old, and now he installs module new or it is added to the store. I think it should now show the versions of new on top of the versions of old, and the user can update "cross-module".

If new has only been a local prerelease and is not in the store, and the user removes it from the local installation, nothing is lost. Now we just can't find the module/version the instance is set to, and the user can choose how to fix it.

I think it is safe to show all versions of new on top of all versions of old. Again, 1:n shouldn't happen, so we don't need to decide between multiple news which is the newest of them.
But we can have a chain of legacy modules. If there is wrong -> old -> new, then old should have legacyIds ['wrong'] and new should have ['wrong','old']. In this case I think it is acceptable to show them in the order of appearance in the array of new. So everything will work if new legacyIds are just added. And if the order is wrong in the array, I think that is only a cosmetic problem.
But if new only declares ['old'] we will miss wrong. I don't know how many modules have or might have that problem, but I think if this happens, we should fix it in the manifests and not make a super complex id replacement.
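
A rough sketch of the picker ordering described above (versions of the newest module on top, then the older modules in the order of its legacyIds array), with assumed data shapes:

```typescript
// Assumed data shapes, for illustration only.
interface ReplacementModule {
  id: string
  legacyIds: string[] // eg ['wrong', 'old'] for the chain wrong -> old -> new
}

// Build the version picker rows for a connection currently using `currentId`:
// versions of the replacement module first, then each legacy module in the
// order it appears in the legacyIds array. The module name is only shown
// when it differs from the module currently in use.
function buildVersionPicker(
  currentId: string,
  newModule: ReplacementModule,
  versionsByModule: Map<string, string[]>
): Array<{ moduleId: string; version: string; label: string }> {
  const order = [newModule.id, ...newModule.legacyIds]
  const rows: Array<{ moduleId: string; version: string; label: string }> = []
  for (const moduleId of order) {
    for (const version of versionsByModule.get(moduleId) ?? []) {
      const label = moduleId === currentId ? version : `${version} (${moduleId})`
      rows.push({ moduleId, version, label })
    }
  }
  return rows
}
```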

I don't think I've understood the deprecation point yet. If you want to deprecate a module, I think you have to update a field in the manifest. Does that mean you have to do a new version for it? Actually yes, otherwise there could be the same version with or without the deprecated flag. Or do you deprecate older versions in the manifest, like "with this version I declare 1.0 deprecated"? Or is deprecation stored somewhere else?
And in any of these cases, what is the ultimate goal of the deprecation?
As I see it, we have some modules that are inactive with no successor. E.g. I have written the module for Barco PDS, and although I updated it for v3, this hardware has not been sold for many years and there will be no update for it and no bugfix. I can imagine giving users a visual hint for this module that it is EOL.
Then there are modules that are still fine but there may be a better alternative even if it is not a successor, like Analogway AWJ for Analogway Livepremiere. They are both independent modules and you can use both, but AWJ uses a different API and can do more and better. So I'd like to give users of Livepremiere a hint to change to AWJ, but I wouldn't call it deprecated because it is still valid.
And then there are the modules that just changed their name because of a typo or clarification and are actually one module. I think this should be a normal version upgrade from the point of view of the user, and not something where the old module is deprecated.
And what if I just release v2.0 of a module because the hardware I'm controlling made an API change and my v1.x doesn't work any more? Would I deprecate v1.x? Or is it enough to have a new major version? There may be users out there who don't update the firmware of the device and still need v1.x.
So how does it work and what is it for?

@Julusian (Member, Author)

Afair our legacyId system doesn't rewrite the instance type in the db, right?

No, it does update the instance type in the db.

So we actually have one edge case right now: if multiple modules declare themselves to be the successor of one module. I guess the latest one wins then.

Yep, which one wins is not particularly deterministic; it depends on the order of keys in a map.

Replacement so far has been done automatically. But now users are able to up- and downgrade. That means we can't hide or remove legacy or deprecated modules or versions.

Yeah, which is why we need to make sure that we can easily figure out which other modules either replace or are replaced by the current module.

For the normal workflow I think we will need a replacement now on a per module basis and not per system

So far the extent of my thought on this chunk is 'it needs to be possible to upgrade and downgrade across id changes'. I am also worried about the confusion it could add to the version picker, as we will need to differentiate versions with the old module name.

Maybe the legacyId field could be changed to be more descriptive: rather than just the names, include a few properties so that we know if it was a simple rename or a version number reset.

I don't think I've understood the deprecation point yet. If you want to deprecate a module, I think you have to update a field in the manifest. Does that mean you have to do a new version for it? Actually yes, otherwise there could be the same version with or without the deprecated flag. Or do you deprecate older versions in the manifest, like "with this version I declare 1.0 deprecated"? Or is deprecation stored somewhere else?

I think that deprecation is a property of the store, and not of module packages.
The offline bundles shouldn't include any deprecated modules; if a module is deprecated then there should be a newer non-deprecated version or a replacement module, so I don't see why we would include deprecated versions in that bundle.
If you're installing a loose single module, then again, why would you use a deprecated version of the module unless the deprecation doesn't matter to you?

I see this as: we can deprecate a module, typically when it has been replaced by another. And I see two modes for deprecating a version of a module (a rough sketch of the data follows after the list):

  1. soft - just a warning on the version
  2. hard - the version will no longer be downloadable
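
Sketching what that store-side deprecation data might look like (the field names are assumptions, not an existing schema):

```typescript
// Assumed store-side metadata, not an existing schema.
interface VersionDeprecation {
  mode: 'soft' | 'hard' // soft: show a warning; hard: no longer downloadable
  reason: string
  replacedByModuleId?: string // where to point users, if there is a successor
}
```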

Some common scenarios where deprecation would be used:

  1. When the version is no longer useful (eg, it talks to a cloud api which is no longer usable)
  2. A serious issue was discovered that makes the module 'unusable' or dangerous to use. Perhaps the zoom module, where if a viewer sets their name to a certain sequence it causes a redos, making the module continually crash. Or I made a silly mistake in atem logic, causing the connection to stall but not disconnect when the packet counter wraps around (this was a bug in the atem library at some point).

Your Barco PDS example: I feel like that shouldn't be marked as deprecated. It's still perfectly valid to use it, even though it likely won't see any updates.
For AWJ, I'm not sure what to do there. While simply setting the legacyId would allow for swapping between the two, that will likely cause pain with the upgrade scripts if both keep getting developed

Renaming modules would be fine with just deprecating the module and adding the legacyId. We definitely need a way of hiding the old module when adding a new connection, but at the same time it still needs to exist there so that the update logic can find it and figure out what to do.

@dnmeid (Member) commented Nov 24, 2024

'it needs to be possible to upgrade and downgrade across id changes'

Exactly, that is also what I mean with "cross-module" upgrades.
If the module type is changed in the db, then that means whenever you do a cross up- or downgrade the type needs to be updated.
The modules with cross-updateability can be determined from the legacyIds, and we know which module is the newest one.
In the version picker I think the best solution is to show the available versions of the newest module on top and then the versions of the older module. For the module that you are currently using it is enough to show only the version number; for a different module (i.e. a cross upgrade) we should also show the name of the module. Then I think we don't need to include more info in the manifest; later versions will always be on top.

There is a general problem, but it already exists today: we don't know at what point a module ran an upgrade script and whether that upgrade script made a breaking change. But I also think that we should allow downgrading to older versions even with possibly broken data, because people will have a reason to downgrade, and then it is better to live with maybe one broken action.

I would really appreciate a more formalized version history for the modules, which we can show in the store and which lets us show users the exact changes of a new version instead of just a version number notification.

The offline bundles shouldn't be including any deprecated modules

Ah, agreed. That is a good point for deprecation: we want them to be downloadable if someone really needs them, but not included in the bundle.

Successfully merging this pull request may close these issues: Modules as downloadable plugins