[ts] Add first event audio implementation to webplayer #2169
base: 4.2
Conversation
I forgot to mention an important aspect of the Web Audio API. It made me realize just now that if the UI is disabled by configuration, the audio on/off button will never be shown and the user cannot enable audio.
Do you have any thoughts about this?
Today I implemented:
I think I will wait for your feedback before proceeding further with the other open points, since there are multiple possible solutions:
Sorry, I overlooked the PR. Without having looked too much into the implementation, this is going to need testing across all supported platforms/browsers. Sadly, the Audio API is a bit flaky across some (older) browsers, and you already found the trouble with supporting various audio file formats. I'll do a proper review sometime this week!
Thanks for the reply :) Last thing: these are the same APIs used by the Unity WebGL exporter. I'll search their website for a browser compatibility statement.
All APIs but two are supported by browsers from 2014-2015: Chrome >14, Edge >12, Safari >6, Firefox >25, Opera >22; IE not supported. The two APIs for pause/resume (AudioContext.resume, AudioContext.suspend) are supported by browser versions from 2016-2017: Chrome >41, Edge >14, Safari >9, Firefox >40, Opera >28; IE not supported. So I probably need to add a check for suspend/resume to avoid a crash if they are called when not defined. The only other caveat is that Safari does not support OGG. Last thing: I was able to play OGG files in Safari using this library (the same library used to play media on Wikipedia), but I don't think bringing in a dependency for this is worth it.
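For the suspend/resume check, something like this minimal sketch could work (hedged: the function names are illustrative, not this PR's actual code):

function pauseAudio(context: AudioContext) {
    // Pre-2016 browsers may not implement suspend/resume, so guard the calls.
    if (typeof context.suspend === "function") context.suspend();
}

function resumeAudio(context: AudioContext) {
    if (typeof context.resume === "function") context.resume();
}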
A few bugs and behavioural suggestions:

Scrubbing causes queueing up of audio sources

Speaker button visibility/discoverability
The speaker button should only be visible if audio is configured. Users have to know they have to click on the player to enable audio. I propose this for better discoverability: upon loading the player, if audio is configured, show a big speaker button. The button can disappear from the top left corner after, say, 5 seconds and will then show up in the player UI (toolbar). You can work out the details to make it nice :D

Player disposal
A player can be disposed (see the dispose example). All audio should immediately stop when a player is disposed.

Testing & documentation
I've tested this in Chrome, Edge, Safari, and Firefox on Windows and macOS and it works as intended. It must also be tested at least in Chrome for Android and Safari for iOS, including older versions. We need to document the audio feature here: http://en.esotericsoftware.com/spine-player

I have commented on other things directly in the PR code.
Replies to your open points:

Resolved
Yes.
Yes.
Log to the console. This is not a critical error that requires a complete shutdown of the player.
That was a contribution from a user that was needed in their context. It was pretty non-invasive, so it's kept as is.
That is something @erikari may be able to help with :)
Not supporting inline audio data is fine.

Unresolved
The Spine Editor only stores the audio file name in an event, but not its full path. E.g. in your modified mix-and-match project, you have a folder containing the audio files next to the project. As such, requiring the audioFolderUrl in the player configuration works around that. Ideally, the editor would actually store the full audio file path relative to the path given in the Audio Files node. Related: do all browsers support at least one common compressed audio format like MP3? If not, then it might make sense to switch audioFolderUrl to a dictionary mapping audio file names to URLs. If there's one common compressed format, then we don't need this more elaborate configuration scheme.
@@ -0,0 +1,370 @@
mix-and-match-pro.png
Rename the mixAndMatch folder to audio and move it to examples/mix-and-match/. Add the audio events to the examples/mix-and-match/mix-and-match-pro.spine Spine project file and make sure the Audio Files node has its path set to ./audio. Modify examples/export/runtimes.sh to copy the .json, .atlas, and .png files to spine-ts/player/example/assets/, and also make it copy everything from examples/mix-and-match/audio to spine-ts/player/example/assets/audio. Remove example-audio.html and instead add a third player to example.html that loads the mix-and-match example.

To explain why: we want a single source of truth for the Spine projects that are used in all Spine Runtimes examples. We store them in examples/. When one of these Spine projects changes, or when we release a new Spine Editor major.minor version, we run examples/export/export.sh. That exports all Spine projects to .json, .skel, and .atlas files, stored in each project's respective $PROJECT_NAME/export folder. We then run examples/export/runtimes.sh to sprinkle those files around the example projects of each runtime.
<script>
  // Creates a new spine player. The debugRender option enables
  // rendering of viewports and padding for debugging purposes.
  new spine.SpinePlayer("container", {
See comment above. Move this to example.html as a third player. Remove example-audio.html.
spine-ts/spine-player/src/Player.ts (Outdated)
@@ -532,6 +544,25 @@ export class SpinePlayer implements Disposable {
  if (skeletonData.animations.length == 1 || (config.animations && config.animations.length == 1)) this.animationButton!.classList.add("spine-player-hidden");
}

// Load audio
if (config.audioFolderUrl) {
  const audioPaths = skeletonData.events.reduce((acc, event) => {
Why do you hate for loops?! :D
It already works that way. Like with Spine's images path, if your audio path is relative, it is resolved against the configured folder.

Re: inline audio data, can this be enabled via the normal rawDataURIs mechanism?

Re: AssetManager, do we want to decode all the audio into an AudioBuffer when the skeleton data is loaded? There could be a lot of audio events, and many might not get used. I agree we only want to decode into an AudioBuffer once, but is it worth doing so lazily, the first time the AudioBuffer is needed?
Lots of things to say, but first of all thanks for the suggestions and review.

Scrubbing causes queueing up of audio sources
Fixed; however, audio will start only if the audio event is encountered while the animation is running.
Right now I simply avoided queueing audio to play, as suggested. The problem above can be fixed, since it is possible to play audio from a certain offset, but it is not so easy: an audio buffer source can be played only once, so they would basically have to be recreated for each "timeline seek".
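For reference, playing from an offset would look roughly like this (a sketch only; playFrom is a hypothetical helper, not code from this PR):

function playFrom(context: AudioContext, buffer: AudioBuffer, offsetSeconds: number): AudioBufferSourceNode {
    // An AudioBufferSourceNode is single-use: to seek, stop the old node
    // and create a fresh one starting offsetSeconds into the buffer.
    const source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(0, offsetSeconds); // start(when, offset)
    return source;
}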
What to do when audio decode fails?
Done.

Player disposal
Done, but it can probably be improved, since cached audio buffers are not cleaned up right now. I will take a closer look.

Speaker button visibility/discoverability
Done.
Last note: the button is horrible and I'll ask @erikari to help me make it fancy.

Playback speed/rate
I noticed it was easy to change the audio speed/rate according to the player speed.

Audio buffer caching
In the previous version, I was storing audio nodes keyed by the combination of audio path, balance, and volume. The AudioContext is initialised only when the user presses the unmute button, which means I can do the audio decoding only after that.
In any case, I noticed that storing the aforementioned combination of audio nodes led me to decode the same audio file multiple times. I fixed this behaviour and now, for each audio file requested to be played, the decoding operation is done only once (there is an edge case in which an audio event requires an audio file whose decoding is already in progress). However, now the AudioBuffer and AudioNodes are stored in the Player, while the ArrayBuffer is stored in the AssetManager.
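A rough sketch of the decode-once idea (an assumption, not the PR code: a cache keyed by audio path; storing the in-flight Promise also covers the edge case of a second request arriving while decoding is still running):

// Hypothetical decode cache: concurrent requests for the same path share
// a single decodeAudioData call.
const decodeCache = new Map<string, Promise<AudioBuffer>>();

function getAudioBuffer(context: AudioContext, path: string, data: ArrayBuffer): Promise<AudioBuffer> {
    let pending = decodeCache.get(path);
    if (!pending) {
        // decodeAudioData may detach the ArrayBuffer, so hand it a copy.
        pending = context.decodeAudioData(data.slice(0));
        decodeCache.set(path, pending);
    }
    return pending;
}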
Inline base64 audio file
It can easily be done, actually. I just need to give the user another way to provide the URL, i.e. the rawDataUri, for each audio file.
Users can also provide a different audio file from the one used within the editor.

Audio format browser compatibility
MP3 seems to be supported everywhere. I converted an OGG file to MP3 using an online tool and it works on Safari and mobile Chrome. This can easily be verified using this tool, which tests audio decoding of several audio file formats using the same API.
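Support can also be probed at runtime with the standard HTMLMediaElement.canPlayType API (a small sketch, not part of the PR):

const probe = document.createElement("audio");
// Returns "", "maybe", or "probably" depending on the browser's decoder.
const mp3Support = probe.canPlayType("audio/mpeg");
console.log("MP3 support: " + (mp3Support || "none"));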
Example project
I used a different skeleton because I had the source for that one.

Things I have not worked on yet
If the player can do it, that would be neat (FWIW the editor does it), but we can leave it for later/never if it's complex.
There could be a long-running audio event (like background music) that started before the user clicked unmute. Ideally, for audio events that have already occurred, we'd seek to the appropriate position when unmuting, rather than play nothing until new audio events occur.
A skeleton could have many audio events. Currently we download them all to ArrayBuffers via AssetManager. We can't assume the player has a single animation, because the AnimationState can be used directly by the user to play multiple animations at once. We could use an AnimationStateListener to download audio only for the animations actually being played. Doing this would mean that when animations change, new audio events can't play until the audio is downloaded. Maybe that isn't great, and the simplicity of just downloading all the audio up front is OK. Loading only for the current animations could be left for later/never.
I wonder, should AssetManager also own the decoded AudioBuffers?
I'm not a fan of this, as mentioned in my last comment. It makes sense for data URIs, but not to work around browser limitations. Users shouldn't need to list every audio path in the skeleton. If it's common to need to replace e.g. .ogg with .mp3, we could handle that more directly. If the user really wants to customize audio paths with their own code, they could modify the actual skeleton data just after it is loaded.
It's probably best to modify an existing skeleton to have audio events. Spineboy has footstep events, but no audio. That's the default project loaded by the editor, so maybe we can leave it without audio. Adding the squeak footstep to the mix-and-match project seems fine. Be sure to base the new version on the latest project file.
I think it's fine to stick to the current simple behaviour.
You can detect if a player is visible inside the browser viewport. Only start the timer if it is.
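One way to detect that (a sketch assuming IntersectionObserver is available; the element id and hide logic are hypothetical):

const playerElement = document.getElementById("player-container"); // hypothetical id
if (playerElement) {
    const observer = new IntersectionObserver(entries => {
        if (entries.some(entry => entry.isIntersecting)) {
            observer.disconnect();
            // Start the 5-second timer only once the player is visible.
            setTimeout(() => { /* move the big speaker button into the toolbar */ }, 5000);
        }
    });
    observer.observe(playerElement);
}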
A pitch change would not be desirable. IIRC, the Spine Editor doesn't do pitch shifting when playback speed is increased. Let's stick to the simple behaviour (I'm also pretty sure pitch shifting in the Web Audio API is broken in some implementations, because audio is always terrible).
Ah, yes, I totally forgot that we need user interaction before being able to get the audio context. Let's keep it as is. That said, the key is currently the audio path, balance, and volume. It makes more sense to key by audio path + event name; those are unique within the skeleton.
Raw data URI support looks good!
I finally have some time to work on this. Here are some small updates; I'll do some other improvements today.

rawDataURIs
Yeah, I completely misunderstood how rawDataURIs worked, and I had indeed implemented a redundant mechanism.

Speaker button visibility/discoverability
Done.

Speed change -> Pitch change

Example project
Actually, the mix-and-match project is not in the examples folder of the player; there are only Spineboy and Raptor. See here.

When manually moving the timeline, also play audio events that already started, seeking to the right point
I have to think about the best way to implement it. Probably, when an animation starts or the play button is pressed, I should schedule all audio events of the currently active animations. In such a case I need a time schedule of these events, but I did not find a convenient way to access the EventTimeline.

Things I have not worked on/answered yet
I've commented on this extensively in my initial code review. All examples are stored in the examples/ folder at the repository root. The Spine projects in the examples/ folder are the single source of truth.
What I suggested was to follow the steps from my review comment above.
This is the workflow when adding a new asset to a runtime's example project. We do want to minimize manual copying and duplication of "canonical" Spine projects.
Yeah, that's a bit tricky. The EventTimeline can give you the events fired within a given time range; that's one part of the issue solved. The other part, detecting the scrubbing and reacting to it by manually calculating audio playback positions, is actually quite a bit harder. I think we can postpone scrubbing support though, unless you like a challenge :)
I'd say at the beginning, so it's likely to be ready when the animation starts.
The current storage in the Player seems fine to me.
Documentation is written in the CMS in Markdown. The audio feature would need entries in this section, analogous to the other features. http://en.esotericsoftware.com/spine-player#Configuration
Maybe Erika can help with that?
Not sure what that is.
For starting audio events when unmute occurs, you can use EventTimeline apply to obtain all events fired from the start of the animation up to the current time. Pseudo code:
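(Sketched here in spine-ts terms; assumes EventTimeline.apply's standard signature, and playAudioEvent is a hypothetical helper.)

function catchUpAudio(skeleton: spine.Skeleton, entry: spine.TrackEntry) {
    const time = entry.getAnimationTime();
    const firedEvents = new Array<spine.Event>();
    for (const timeline of entry.animation!.timelines) {
        // lastTime = -1 makes apply() report every event from time 0 up to `time`.
        if (timeline instanceof spine.EventTimeline)
            timeline.apply(skeleton, -1, time, firedEvents, 1, spine.MixBlend.setup, spine.MixDirection.mixIn);
    }
    for (const event of firedEvents)
        playAudioEvent(event, time - event.time); // seek by how long ago it fired
}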
I think the only downside is that if unmute occurs just after an animation loops, any audio events near the end of the animation won't be played. That's not perfect, but probably fine.
Example project
I am so sorry, I completely forgot you already explained that to me. You should have told me to read your previous message 😅

Events triggered in the past when audio is unlocked or the timeline is moved
Thanks to your suggestions and secrets about the runtimes, I rewrote how audio is played so that it is easier to play and seek audio.

Howler.js - external dependency to manage audio
I wanted to avoid using an external library, but now that the code has grown I think that using an external library to simplify some things and increase robustness is not a bad idea. As you can see from the stars, howler.js is the go-to library for audio in browsers. I actually think that right now the implementation does most of what is useful for us, and using this library would lead to another heavy refactor. But before finalising this feature I want to be sure we have analysed all the possibilities.

TrackEntry usage
Am I doing it right?

Other stuff
I'll leave a list of things I still have to work on, as usual:
This is great and works exactly as I'd have imagined it. I do not think this needs any improvement. Good job!
I've used Howler.js in the past; it's OK. However, we strive for zero dependencies. The amount of code added to Player.ts for audio support is still < 200 LOC and I think it's pretty easy to follow. Howler would not enable us to add much more functionality, so I do not think switching to Howler is an improvement. If we considered it, it would have to be an optional dependency, which complicates the build and deployment further. So, my answer is no :)
Yup, that looks good!
I don't think we need that. A separate audio class seems fine.
I eventually used the raptor project since it was easier to find audio for its animations (the usual external example). I exported the project using the export script. I probably made something wrong, but that can be fixed quickly. Below you can find the documentation I wrote. English can obviously be improved.

Audio events
In order to play audio when audio events occur, it is necessary to provide the absolute or relative prefix URL where audio files can be downloaded.

<script>
  new spine.SpinePlayer("player-container", {
    ...
    audioFolderPath: "assets/audio"
  });
</script>

The player looks for all audio events within your skeleton and downloads the audio file corresponding to the combination of the event name and audio file path. The player starts with the audio muted to respect autoplay policies. To unmute audio the user can:
Be aware that not all browsers support decoding all audio formats. Use MP3 for wider compatibility.
This is a first attempt to implement audio playback for events in the webplayer, as requested in #1620.
You can have a try here (caveat: Safari works only for the walk animation, since it does not support .ogg audio files and I left it like that as an example). The following implementation is a draft and I just want to know if the approach I used is the right one.
I extended AssetManager to load audio.
I added a new configuration variable audioFolderUrl to specify the base path where audio files are placed. Audio assets are loaded only when the skeleton data is parsed, since the audio file names are stored within the skeleton.
The main question is whether the approach I used is correct. I attached a listener to the animationState event callback, and the sound is played within the event callback. Is it OK to use such a high-level API?
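Roughly, the listener looks like this (a sketch of the approach, not the exact PR code; playAudioForEvent is hypothetical):

animationState.addListener({
    event: (entry: spine.TrackEntry, event: spine.Event) => {
        // EventData.audioPath holds the audio file name set in the Spine Editor.
        if (event.data.audioPath) playAudioForEvent(event); // hypothetical helper
    }
});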
Open points:
- Right now, playing sounds do not pause/stop when the player is stopped. -> Implemented for the play/pause button, but not when the animation is changed. Should audio be stopped when the animation is changed?
- Balance works only on recent versions of Safari. There is another API for audio balancing with wider support, but it is a little trickier to use. -> Used the older API to implement balance (see the sketch below).
- Is the audioFolderUrl configuration correct, or would a dictionary with audio file names and URLs be better? (Right now it is not possible to use an "inline base64 audio file".)
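For the balance point above, a sketch of how the older, widely supported PannerNode API is commonly used for stereo balance (the newer, less supported alternative being StereoPannerNode); the helper name and the [-1, 1] balance range are assumptions:

function createBalanceNode(context: AudioContext, balance: number): PannerNode {
    // Place the source on the x axis to approximate left/right balance
    // with the equal-power panning model.
    const panner = context.createPanner();
    panner.panningModel = "equalpower";
    panner.setPosition(balance, 0, 1 - Math.abs(balance));
    return panner;
}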