Replies: 8 comments 20 replies
-
I think there are a lot of ideas here that warrant discussing separately, so I'm going to start new comment threads for each area I have comments on. (These will trickle in as I find time to think about them today.)
Yeah, the left sidebar is something I rarely interact with, so I think there is a lot of scope to move more things over there, but I wonder if it will decrease discoverability, with some things ending up too hidden? I don't know, something to be careful of. I do wonder if a tree view like that could end up being a bit overwhelming to new users who aren't really sure where to start.
-
I 100% agree that surfaces should be handled via some plugin system. Whether that is the same plugin system used for modules, or a separate one, is open for discussion. I haven't put much thought into this yet, as without being able to represent surfaces properly in Companion I am not keen on opening the gates to supporting many more types.
The bit that concerns me about plugins is making sure that the various native libraries they each need will behave. But seeing as this is working today for native libraries in modules, this may be a solved problem.
-
This is an area I haven't thought about much myself, so I don't have much insight.
I think we need to be careful here. While this will give the surface modules large amounts of flexibility in how they draw, it will also make it immensely hard to add or change anything about the drawing. Instead of changing a couple of patches of code that we control, adding a new 'font-flip' property will mean updating tens of surface integrations.
-
Ignoring the window and cells portion, as I cover that in another point: I have a very different proposal, #1187. While we could make the modules responsible for understanding what an encoder is and how to map that onto the device they control, that would continue down the route of every module being slightly different. One module will let you choose how much a single click of the encoder changes; others won't expose it and it will be fixed. In some, pressing the encoder will reset it to 0; in others it will mute/unmute. So in my proposal, an encoder would have the same press & release actions as today, but the rotate actions would be replaced with 'binding' a property. I am unsure if it should be possible to keep the rotate actions of today for compatibility.
-
When I first heard the pitch for an infinite canvas and windows a while ago I was on board, but over time I have been going off it. Take the loupedeck-live as an example (the ct has the same problem, but also more).

I would also like to question whether a large grid is in fact easier to work with than pages. While on a technical level it is superior, from a usability standpoint I don't think it is. I think it will be much easier to understand having a streamdeck on page 50 than having a streamdeck at some arbitrary x/y position.

I have been leaning towards a concept I am calling 'spaces'. It essentially boils down to having multiple sets of pages, and the ability to create a 'space' for a certain device type. So when I want to use a loupedeck-ct, I will have to create a new 'space' for it, and it will get its own set of pages. I think this will solve a common confusion for new users, in that it will then be possible to have different pages for different surfaces without having to assign different ones to different surfaces. And it will give a better experience when programming for non-button-grid surfaces, as the UI will tailor the programming to the device you are working with.

We can make it possible to 'symlink' and 'hardlink' the control definitions into multiple places (the new data structures in 3.0 are designed to allow for hardlinks), as long as the control type matches. So it will be possible to have the same buttons in different places without the maintenance cost of editing multiple places.

I don't have an alternate proposal for windows here, and I do wonder if lacking that will be a dealbreaker. I do see some benefits to them that I don't have an alternative to replicate (tabs, sticky buttons).
-
I am not a fan of moving the actions off of buttons in the UI. I think that doing so will make it much harder to follow what will happen when you press a button. It is possible that this can be solved with some UX work, but does clicking a cell in the grid, then clicking 'view actions' which opens another panel/view, help with understanding what happens, or make it harder?

I wrote about this the other week somewhere else, but I view triggers as an 'advanced' feature that is designed to be powerful and, as a consequence, is less intuitive and harder to use. But buttons/controls should be pretty self-explanatory and understandable to beginners. I'll admit that it is entirely possible that the steps functionality goes against this, but it should be easy to make that less prominent.

That said, I have no problem with having the ability to create stepped scripts or other complex things, as long as they don't increase that beginner burden. Perhaps this will even be 'free' with the ability to 'hardlink' controls? I am expecting them to live in a pool, so it would be possible to have a control not assigned to a button. That control could be invoked directly, so it would in effect be this stepped script.

Fun fact: triggers are also a different form of 'control'. They can't be placed, but live in the same data structure as buttons do.
-
@dnmeid your ideas for a new interface look wonderful!
-
Seeing a lot of the proposed ideas here integrated in Buttons, and I really like them there from what I've messed with. I think this sort of position-based management is much more intuitive than pages. I think having the surface overlay grid be separate from the buttons grid is the way to go. Hoping that Buttons will help enable Companion to continue to grow!
-
Compañeros,
a few ideas and thoughts about future Companion development. I think every one of you has some ideas, visions, and dreams of how to develop Companion. Some of them are in the GitHub issues, some in the GitHub projects, some are buried behind the 90-day Slack paywall, some are in private conversations or still in your head. So forgive me if I'm not up to date on what may have already been discussed. These are just my ideas, and some of them go back to the very early days of Companion.
Before I start with the technical stuff, let me say that I also see organisational challenges. We do not have a roadmap, we do not have an organisation, just some dudes who do whatever they like. On one hand I like the freedom here, but on the other hand I feel a little more structure couldn't hurt. E.g. I don't think we should have a super tight release schedule, but we should have a clear workflow towards a release, with a feature freeze and time for QA. We should have agreed standards for modules and check them, and we should have a way to work out how to agree on something and how to make team decisions.
We have, for example, tons of feature requests; often a user requests a feature only they need, and even then only for one very special purpose. These feature requests are often very specific, and the proposed solution tackles exactly one problem. We don't discuss new features, and in the absence of an overall strategy, features are often implemented just because they are easy to implement.
This has led to a bunch of new checkboxes here and there, modals, new tabs and so on.
I'd vote for having a general vision of how Companion should look and work. Feature requests should always be checked against it; if they don't fit, they are either rejected or we find a way to come up with a fitting feature that also solves the requester's problem.
So let me elaborate on how I could imagine Companion to look and work.
I think Companion has grown to a state where it looks bloated and not very intuitive any more. We have to admit that after all these years we have so many great new features that we just can't reach the simplicity of the first Companion any more. But at the moment every new feature makes Companion look even more convoluted. I think with the existing GUI we don't have much headroom.
My proposal is a three columns design.
It could look like this:
Control layout
Surface layout
Window layout
At the moment we have three menu-like elements: the sidebar, which annoyingly is always extended at page load and only adds very few essential links; the main tab selector; and, in some tabs, sub-tabs changing only a part of the tab. The tree would unify all of them, making it more intuitive to navigate to the wanted part. Additionally, it would be quite easy to dynamically add branches and leaves to the tree. So it might be one click more to get to some places, but at least one click less to get to most places. Users with large configurations will have to do some scrolling, but I think it should still be an overall improvement. Additionally, scrolling could be reduced by something like a customisable favourites section.
The adjustment pane always has the possible adjustments for the branch or leaf, e.g. if the connections branch is selected you have today's connections tab and can add connections. If you select a connection directly, you have the configuration of that connection.
On to surfaces. We've been talking about this for a long time and it is quite a challenging topic, but I think we agree that 1. we want to open Companion up to even more different surfaces, 2. the current implementation of everything which is not a Streamdeck XL is not optimal, and everything which is not a Streamdeck Classic, Mini, or Pedal is even worse, 3. we want and have to improve in this area.
Let me break it down in different aspects.
The surfaces themselves should be defined by plugins, so we have an abstraction layer between the core and the real device. This would make it much easier to adapt to new surfaces. For this we need a description language covering all possible types of surfaces, where we can describe input elements like buttons, faders, joysticks, other proportional inputs, gyros, touch elements and so on, and output elements like LEDs, LED strips, displays, speakers, and so on. I want to emphasise the distinction between input and output elements. In the end, the description should have all the information needed to draw an emulator. It doesn't have to be photorealistic, only functionally identical, and it should also be able to offset some controls to better fit the regular raster.
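To make the idea concrete, here is a minimal sketch of what such a surface description could look like. All type and field names (`SurfaceDescription`, `InputElement`, and so on) are hypothetical illustrations, not an existing Companion API:

```typescript
// Hypothetical surface description language: separate input and
// output elements, each positioned on the regular grid raster.
type InputKind = 'button' | 'encoder' | 'fader' | 'joystick' | 'touch'
type OutputKind = 'led' | 'rgb-led' | 'lcd-display' | 'pixel-display' | 'speaker'

interface SurfaceElement {
  id: string
  // position and size on the raster, in grid cells (at least 1x1)
  x: number
  y: number
  w: number
  h: number
}

interface InputElement extends SurfaceElement {
  kind: InputKind
}

interface OutputElement extends SurfaceElement {
  kind: OutputKind
  // e.g. pixel resolution for displays; absent for simple LEDs
  resolution?: { width: number; height: number }
}

interface SurfaceDescription {
  model: string
  inputs: InputElement[]
  outputs: OutputElement[]
}

// A toy 2x1 device: two buttons, each with a small display behind it
const toyDevice: SurfaceDescription = {
  model: 'toy-2key',
  inputs: [
    { id: 'key1', kind: 'button', x: 0, y: 0, w: 1, h: 1 },
    { id: 'key2', kind: 'button', x: 1, y: 0, w: 1, h: 1 },
  ],
  outputs: [
    { id: 'disp1', kind: 'pixel-display', x: 0, y: 0, w: 1, h: 1, resolution: { width: 72, height: 72 } },
    { id: 'disp2', kind: 'pixel-display', x: 1, y: 0, w: 1, h: 1, resolution: { width: 72, height: 72 } },
  ],
}

// The emulator only needs this data to draw a functional stand-in
function gridFootprint(desc: SurfaceDescription): { w: number; h: number } {
  const all = [...desc.inputs, ...desc.outputs]
  return {
    w: Math.max(...all.map((e) => e.x + e.w)),
    h: Math.max(...all.map((e) => e.y + e.h)),
  }
}
```

A functional (not photorealistic) emulator could be generated entirely from this data, which is the point of keeping the description declarative.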
Communication with the surface is handled by the plugin, so it would be easier to integrate devices with their own drivers; MIDI and serial devices should be possible. Different webbuttons would just be surface modules. Surface modules? At the moment, a Companion module for a connection is able to talk to an external device, transporting data from Companion to the device and from the device back to Companion. There are so many similarities that it should be possible to make the normal modules work for surfaces too. At the same time, normal connections could also benefit: maybe it would be possible to have an emulator for the device defined by the module. And finally, what is the big difference between a surface and an API? It is just the method of assigning an incoming command to an executable element. So why not have the APIs in modules too? More modularity means more security, more performance, and easier maintenance.
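A plugin contract along these lines might be very small; the core never talks to the hardware directly. Everything below (`SurfacePlugin`, `LoopbackPlugin`, the method names) is a made-up sketch of the shape such an interface could take, not real Companion code:

```typescript
// Hypothetical plugin contract: the plugin owns the transport
// (USB, MIDI, serial, HTTP...), the core only sees events and draws.
interface SurfaceEvents {
  // the plugin reports input changes back to the core
  input(elementId: string, value: number): void
}

interface OpenSurface {
  // the core pushes rendered output to the plugin
  draw(elementId: string, pixels: Uint8Array): void
  close(): Promise<void>
}

interface SurfacePlugin {
  discover(): Promise<string[]>
  open(deviceId: string, events: SurfaceEvents): Promise<OpenSurface>
}

// A toy in-memory plugin standing in for a real transport
class LoopbackPlugin implements SurfacePlugin {
  lastDraw: Uint8Array | undefined

  async discover(): Promise<string[]> {
    return ['loopback-0']
  }

  async open(deviceId: string, events: SurfaceEvents): Promise<OpenSurface> {
    // simulate the device sending one key press as soon as it opens
    events.input('key1', 1)
    const self = this
    return {
      draw(elementId: string, pixels: Uint8Array) {
        self.lastDraw = pixels
      },
      async close() {},
    }
  }
}
```

Because webbuttons or API endpoints would only need to implement the same small contract, they really could be "just surface modules" under this model.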
How to deal with these surfaces in Companion? I think that while the concept of pages shared by all surfaces, as we have it, is quite easy once you get the point, we nonetheless have to replace it. This is going to be far more advanced and flexible for the power user, but should be similarly easy for the novice user. First of all, I suggest that we stick with a grid; let's call it the canvas. Every cell in the canvas has x and y coordinates, and the canvas' size is only limited by the JS number range.
Now we can place surfaces on the canvas completely to our liking, but a surface always snaps to a grid cell. The point is that we should have matches between I/O elements of our surface and the grid cells. That doesn't mean that a control has to be exactly a 1x1 cell, it just has to be at least 1x1.
Surfaces can also overlap partly or completely but don't have to. So now we can emulate what we are doing today: Button 1 is button 1 on all surfaces.
Surface rotation is tbd, maybe one individual module per orientation, maybe an option.
How can actions and feedbacks be placed on a control? The easy way would be to just assign everything to a cell: all output controls from all surfaces positioned at that cell get the output data, and all input controls feed the action. The only difference compared to today would be that we don't number the buttons page/number, but page/x/y.
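The change in addressing is mechanical. A rough sketch, assuming the legacy 8x4 page layout (banks 1..32), of how today's page/number could map onto page/x/y:

```typescript
// Assumed legacy layout: 8 columns x 4 rows, banks numbered 1..32
// row by row. These constants are an assumption for illustration.
const COLS = 8

function bankToXY(bank: number): { x: number; y: number } {
  const i = bank - 1
  return { x: i % COLS, y: Math.floor(i / COLS) }
}

// legacy "page/bank" -> proposed "page/x/y" cell id
function cellId(page: number, bank: number): string {
  const { x, y } = bankToXY(bank)
  return `${page}/${x}/${y}`
}
```

For example, bank 9 on page 2 (first button of the second row) would become cell 2/0/1.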
But I think that is too easy, because now our page is huge and we are always switching the whole canvas. Actually, many users have built complex menu structures where they shuffle buttons around, all with absolute addressing. A nightmare to program and maintain, but pages won't allow for more. More and more complex surfaces will raise the already existing demand for a more flexible system.
The solution could be windows, just like computer desktop windows. A window would have an x/y position which always snaps to a canvas cell; the window size also snaps to the grid. That means the smallest window is 1x1.
A window has content, which is a grid like the canvas; that means the window's content can be larger than the shown part of the window. The visible part of the window can be scrolled to show any part of the window's canvas.
A window cell is used like we use a button today: you assign actions and feedbacks to a window cell. A window cell can be referenced by a three-part number, window/x/y.
There should be internal actions to scroll the visible part of the window to fixed positions or, e.g., by the window height or width. That can exactly resemble today's pages: page up/down just scrolls a window.
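The "pages are just scrolling" equivalence can be sketched in a few lines. The `Win` shape and function names here are invented for illustration:

```typescript
// Hypothetical window state: a viewport (viewW x viewH cells) scrolled
// over the window's larger content grid.
interface Win {
  viewW: number
  viewH: number
  scrollX: number
  scrollY: number
}

// 'page down' is just scrolling by one viewport height
function pageDown(w: Win): Win {
  return { ...w, scrollY: w.scrollY + w.viewH }
}

// scrolling to a fixed position resembles jumping to a named page
function scrollTo(w: Win, x: number, y: number): Win {
  return { ...w, scrollX: x, scrollY: y }
}

// the window cell visible at viewport position (vx, vy) maps to this content cell
function contentCell(w: Win, vx: number, vy: number): { x: number; y: number } {
  return { x: w.scrollX + vx, y: w.scrollY + vy }
}
```

With an 8x4 viewport, two `pageDown` calls land you where legacy "page 3" would be: eight rows down from the origin.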
A user can create as many windows as they like. Windows live in a window bin and don't have to be open all the time. A user can open, close, and reopen a window, or drag it to a different location on the canvas. Windows can also overlap or cover each other; windows have a z-order and there are actions to move a window to the top or bottom.
Unlike surfaces, which are "transparent" (i.e. control data will be passed to all surfaces under a cell), windows really cover each other, so only the logic of the topmost window cell will interact with the surfaces below. But covered or hidden window cells can still be used by APIs at all times.
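Resolving which window "wins" at a canvas cell is a simple z-order lookup. A sketch, with invented names, of that resolution rule:

```typescript
// Hypothetical placed window: position/size on the canvas plus z-order.
interface PlacedWindow {
  id: string
  x: number
  y: number
  w: number
  h: number
  z: number
  open: boolean
}

// Only the topmost open window covering a canvas cell interacts with
// the surfaces below it; covered cells stay reachable for APIs.
function topmostAt(windows: PlacedWindow[], cx: number, cy: number): PlacedWindow | undefined {
  return windows
    .filter((w) => w.open && cx >= w.x && cx < w.x + w.w && cy >= w.y && cy < w.y + w.h)
    .sort((a, b) => b.z - a.z)[0]
}
```

Closed windows simply drop out of the lookup, which is what lets them live in the "window bin" without affecting the canvas.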
You see how this can improve flexibility? E.g. if I create a 2x1 window with two buttons and place it on top, I have created sticky/fixed buttons which always stay there even if a window below scrolls. If I create a 1x5 window and use its buttons to scroll a different window to defined positions, I have created a tab controller.
Two more aspects of windows: 1. Today we have a shared set of pages, and each surface can be at a different page. I think this should be an option with the windows. That means by default all surfaces under a window show the same part of the window's canvas; if you change the position on one surface, it changes for all surfaces. Optionally, you can have individual positions per surface. Why? The most common use of Companion is single-operator use. The question is only relevant if you have more than one surface, so let's think of a single operator with multiple surfaces. They more often than not want to synchronise their surfaces, and this not only allows it, but is also the expected behaviour if you have never worked with Companion before. The legacy workflow would be to have all surfaces placed on top of each other and opt for individual positions within a single window. The new workflow would allow placing surfaces side by side and spanning a large window across multiple surfaces, or having individual windows for each surface, or whatever you can imagine.
2. When working with multiple surfaces at different positions and with different sizes, you may find yourself programming the same buttons in different windows because you want to have them on different surfaces but the surfaces' layouts are too different. That's why all windows should share the same canvas, just scrolled to an individual position. So the user is actually programming all of their buttons or controls on one huge canvas, and then has the possibility to take parts of the canvas and arrange them like crazy on the surfaces. Since we are losing page names and user programming may spread over a large area, named bookmarks for positions should be available. They would also make it possible to scroll to a bookmark position, making it easier to move stuff around if you find out later that you need more space somewhere.
I suggest separating the buttons from their actions and unifying this with the triggers. What do I mean by this? A button has some steps with actions; a trigger has some actions. A button's actions are triggered by a press or release; trigger actions are triggered by a self-made condition. What if we tell the trigger to run on a specific button press? Or, the other way round, what if the only action of a button is to run a trigger? Let's take the steps and stuff from the buttons and move them to a new element, say a "stepped script". The actual trigger and condition part of the trigger becomes part of the stepped script element, because it defines when to run or advance the stepped script. A button can also trigger or advance the stepped script, so we are actually generating a new state (at which step are we now), and many buttons can share that script. In the long term there can also be other types of scripts, e.g. user-defined JavaScripts. They would have the trigger part to decide when the script is run, and then they could access variables and run actions and so on, all from the JavaScript.
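The key structural change is that the step state lives in the script, not the button, so many buttons can share and advance the same script. A minimal sketch with invented names:

```typescript
// Hypothetical 'stepped script': the step pointer is part of the
// script itself, so any number of buttons (or triggers) can share it.
interface SteppedScript {
  steps: Array<() => void>
  current: number
}

// Whatever invokes the script (button press, trigger condition, API
// call) runs the current step and advances, wrapping around.
function runAndAdvance(script: SteppedScript): void {
  script.steps[script.current]()
  script.current = (script.current + 1) % script.steps.length
}
```

A button's "press" event and a time-based trigger could both call `runAndAdvance` on the same script, which is exactly the unification proposed above.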
So far I've been silently ignoring surface heterogeneity. I said a window's cell interacts with all controls below it. How can this be done? I think the solution we have with the rotary controls for the SD+ was good for making rotaries available in a very short time, but the way forward can't be more and more checkboxes and special actions for different controls. First we need to decide when an action is triggered, which is a "bang", but not all controls generate a bang. Think of the X-Keys T-Bar. Today's workflow is to have a trigger watch for a variable change and then trigger an action which uses the variable in an option.
If every surface exposed its controls' values as variables, like the internal module does for X-Keys, all these variables could be used to trigger things like scripts with ultimate flexibility and very fine-grained control, but at the cost of ease of use. I think for 98% of today's Companion programming it is good enough if we just use all the bangs of the controls below to trigger zero to n scripts. So if you don't want to use something more complicated to trigger a script, you can decide to add more events to the script trigger itself, or omit triggering it from a button and do all the logic in the script's trigger.
Next we need to deal with values which are not a bang. How should they be transported? Exposing all of this via variables is good, but it would require much programming on the user's side, is likely to break if a button is moved, and would only help a little if different controls trigger the same button. I think it would be good to transport normalised and original non-bang values with the event. We actually need this today for the rotary, which not only gives a bang (turning speed is not 0) but also a directional turning speed. There needs to be some standardised and normalised format, plus the possibility to also transport raw data. The rest is up to the action. Not every action has to be 100% compatible with all events.
At the moment rotaries trigger different special actions; with this proposal it would be only one action with only one event, and the direction would be in the event data. For the future, and also for many other analog and even fancier controls, I think this will simplify programming for the user dramatically, because the action can decide what to do with the extra values. E.g. a set-fader action can use a proportional value from the event to set the value at the connection. A joystick can send its x/y position and the action decides what to do with it. API calls to Buttons, e.g. OSC, can pass additional data. Following that logic, an ordinary button would actually have to send a value too: whether it is pressed or released. That means press and release would trigger the same action, and the state of the button would be in the event data.
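A sketch of what such a normalised event payload could look like, and of actions consuming it. The `ControlEvent` shape and field names are assumptions for illustration, not a defined format:

```typescript
// Hypothetical normalised event: one action, one event shape, with
// the control's state riding along instead of separate action types.
interface ControlEvent {
  controlType: 'button' | 'rotary' | 'tbar' | 'joystick'
  value?: number // normalised absolute value, 0..1 (fader, t-bar)
  delta?: number // signed relative value (rotary turn speed/direction)
  pressed?: boolean // button state; press and release fire the same action
  raw?: unknown // original, device-specific data for advanced use
}

// One action handles both press and release: the state is in the event
function toggleMuteAction(ev: ControlEvent, state: { muted: boolean }): void {
  if (ev.pressed) state.muted = !state.muted
  // release (pressed === false) deliberately does nothing here
}

// A set-fader action consumes a proportional value directly
function setFaderAction(ev: ControlEvent, fader: { level: number }): void {
  if (ev.value !== undefined) fader.level = ev.value
}

// A rotary feeds the same fader action family via its delta
function nudgeFaderAction(ev: ControlEvent, fader: { level: number }): void {
  if (ev.delta !== undefined) {
    fader.level = Math.min(1, Math.max(0, fader.level + ev.delta))
  }
}
```

Note how `toggleMuteAction` replaces today's separate press/release action lists, and `nudgeFaderAction` replaces the special rotate-left/rotate-right actions with one action reading the signed delta.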
Today all actions are just triggered and have only very little knowledge of why. The decision between different usages like press, release, turn left, turn right is made upstream in the button. I don't think it is practical to add ever more possibilities at the button level for other input elements and map them all to just a trigger. With this proposal, the decision of how to react to a value change of an input can be made in the action.
On the other hand, we have to provide a way not to break the existing release-action workflow or the existing rotaries workflow, and generally it would be good if users had the ability to be more creative with extra event data.
This can be done with internal helper actions, which can manipulate the event data or change the action's execution in a conditional way.
It would also be possible to use this event data in the trigger expression of the script. Like I said earlier, the actions wouldn't be part of the button any more; they would live in a script which is triggered from the button (actually from the controls below the button).
It would be easy to extend the script's condition to only run the script if event.button.state === 1, or if it is 0.
Or within one script we could have an action which aborts the script if there is no event.tbar.value, or whatever you can think of.
That way we can make all the existing actions compatible with additional event data by optionally externalising the branching decision, and today's release actions or rotary actions would become scripts with a special condition based on event data.
This being the input side, there's also an output side, like LEDs and displays.
Just like inputs, in my dream outputs should be reusable and independent from the button. One or many buttons can use a set of output elements, maybe also a combination of more than one output set. That means what is today the combination of button styling and feedback would become an output set. Each output set can contain feedbacks and the like.
It is no secret that our current graphics generation is outdated and actually never was very sophisticated. I plan to start a large overhaul soon, which doesn't necessarily have to end in what I describe here, but would be a foundation for it.
I would introduce a user-defined number of layers to the graphics. Today we actually already have layers: the background colour in the back, then a user image or images from feedbacks, and finally the text on top. Users should be able to use more layers and define the content of a layer by choosing among different elements like text, image, rectangle, triangle, value formatter, or gauge. Layers have some sort of id, and feedbacks can reference that id; that means a feedback could e.g. change the colour of the rectangle "background", of the rectangle "left indicator", or of the text "top textbox", and so on.
Additionally, it would be good to have a way of changing the look of the output completely without having to change many things in many layers. I was thinking of different states earlier, but now I think it would be even better to have layer groups which can be activated and deactivated. Group activation is surely handled best with an expression. (Another benefit is that, unlike with states, all the manipulation targets are there all the time.)
I think it should then be left up to the surface module to render and show the information. A small display will render to a small skia-canvas, a larger display to a larger skia-canvas; an RGB LED will show the colour of the lowest rectangle; an on/off LED may light up if the brightness is greater than 50%; an LCD text display may show the first text layer.
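Tying the layer model and the per-module rendering together, here is an illustrative sketch. The `Layer` type, the layer ids, and the mapping rules are assumptions chosen to match the examples in the text, not a real data model:

```typescript
// Hypothetical layer stack for one control, lowest layer first.
type Layer =
  | { id: string; kind: 'rectangle'; color: string }
  | { id: string; kind: 'text'; text: string; color: string }

// A feedback targets a layer by id, e.g. recolour the 'background' rectangle
function setLayerColor(layers: Layer[], id: string, color: string): void {
  const layer = layers.find((l) => l.id === id)
  if (layer) layer.color = color
}

// Each surface module extracts what its output element can show:
// an RGB LED takes the colour of the lowest rectangle layer...
function rgbLedColor(layers: Layer[]): string | undefined {
  return layers.find((l) => l.kind === 'rectangle')?.color
}

// ...while an LCD text display takes the first text layer.
function lcdText(layers: Layer[]): string | undefined {
  const t = layers.find((l) => l.kind === 'text')
  return t && t.kind === 'text' ? t.text : undefined
}

const button: Layer[] = [
  { id: 'background', kind: 'rectangle', color: '#000000' },
  { id: 'left indicator', kind: 'rectangle', color: '#333333' },
  { id: 'top textbox', kind: 'text', text: 'CAM 1', color: '#ffffff' },
]
```

The point is that one feedback edit to the shared layer stack automatically changes what every attached output element shows, each in its own reduced form.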
Even if not every possible layer combination is 100% compatible with every possible surface output element, I think this is quite advanced, and with surfaces being able to be at different positions I don't expect too many fancy combinations of output capabilities.
To wrap it up: in this proposal, buttons would actually lose most of their own functionality. The functionality would be provided by other elements independent from the button, like scripts or output sets. The button just uses those elements, and they are not exclusive to a button. For convenience, there could be inline creation and editing possibilities; then programming buttons would feel much the same as today. You just start with an empty button, add an existing or new output set, and the options of the output set will be shown in the button editor. Same with scripts. And of course, 'button' wouldn't be the best name for this control any more.
I've been thinking about this concept for quite a while now and I think it is an overall solution which is consistent, easy to understand and works. I've tried to omit details as much as possible and only give the big picture.
So, although this may all seem like a huge effort and many big changes, I think it is quite doable in smaller chunks and with coordinated collaboration. The overall goal should be to make Companion a tool which can grow further and offer more functionality, by replacing the design-by-feature approach with a flexible and expandable workflow and GUI design.
Now it's your turn. What do you think about it? Should we take this route or are there other proposals? Anything you really dislike or really like? Any comments are welcome but please don't get lost in details, this is about the big picture.