
Libass Integration #448

Open · wants to merge 51 commits into master
Conversation

ramtinak
Collaborator

Hi,

I've made some changes to FFmpegInteropX to support libass as a subtitle renderer. The implementation is largely inspired by the libass integration for JavaScript, which you can find here:
https://github.com/libass/JavascriptSubtitlesOctopus

By default, libass operates as follows:

  • Initialize the library using ass_library_init.
  • Initialize the renderer using ass_renderer_init.
  • Create a subtitle track using ass_read_memory (other methods exist, but we're constrained by UWP).
  • Load the subtitle header using ass_process_codec_private.
  • Add subtitle chunks from FFmpeg using ass_process_chunk.
  • The issue I'm encountering is with creating IMediaCue.

Libass uses ass_render_frame to generate an ASS_Image, which works well for rendering. However, since this process must happen in real time, I'm unsure whether it's feasible to create IMediaCue instances based on the current implementation. Is it possible to display subtitles accurately using the media duration?

P.S. I’ve noticed a recent issue compiling FFmpegInteropX with the target platform set to 10.0.22000.0. To resolve it, I switched to 10.0.26100.0.

Thanks.
#439

@ramtinak ramtinak requested review from brabebhin and lukasf December 30, 2024 07:09
@ramtinak
Collaborator Author

I just tested my C# sample from #439 (comment), and instead of using a timer to update the UI, I switched to mediaPlayer.PlaybackSession.PositionChanged for updates.

However, I noticed that PositionChanged is significantly slower compared to using a timer to repeatedly query mediaPlayer.PlaybackSession.Position.

@brabebhin
Collaborator

Thanks for sharing this.
It is a good starting point for the integration.

@brabebhin
Collaborator

brabebhin commented Dec 31, 2024

This is almost complete.
We have to call Blend in the CreateCue method, and then create a bitmap from that blend result. Nothing too fancy there.

The problem you are having, which is the same problem I was having, is that ass_render_frame does not work: it returns NULL, and therefore nothing is blended for any of my test files. This happens despite ass_read_memory and ass_process_chunk being called and the track correctly having events inside. My approach was slightly different, using an ass_track for each cue, which is safer in seek and flush situations, but this can be refactored later.
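For reference, the blend step amounts to per-pixel alpha compositing of each ASS_Image's single-color coverage bitmap onto the target surface, as in JavascriptSubtitlesOctopus. A minimal sketch, using a simplified stand-in struct for libass's ASS_Image (the color is 0xRRGGBBTT, where the last byte is transparency, not alpha) rather than the real type:

```cpp
#include <cassert>
#include <cstdint>

// Simplified stand-in for libass's ASS_Image: an 8-bit coverage bitmap plus
// one RGBT color, where the last byte is *transparency* (0 = fully opaque).
struct AssImage {
    int w, h, stride;       // bitmap size and row pitch
    const uint8_t* bitmap;  // w*h coverage values
    uint32_t color;         // 0xRRGGBBTT
};

// Blend one ASS image onto a BGRA8 destination surface (straight alpha).
void BlendAssImage(const AssImage& img, uint8_t* dst, int dstStride)
{
    const uint32_t r = (img.color >> 24) & 0xFF;
    const uint32_t g = (img.color >> 16) & 0xFF;
    const uint32_t b = (img.color >> 8) & 0xFF;
    const uint32_t opacity = 255 - (img.color & 0xFF);  // invert transparency

    for (int y = 0; y < img.h; ++y) {
        const uint8_t* src = img.bitmap + y * img.stride;
        uint8_t* row = dst + y * dstStride;
        for (int x = 0; x < img.w; ++x) {
            // Effective alpha = glyph coverage * style opacity.
            const uint32_t a = src[x] * opacity / 255;
            uint8_t* px = row + x * 4;  // BGRA byte order
            px[0] = static_cast<uint8_t>((b * a + px[0] * (255 - a)) / 255);
            px[1] = static_cast<uint8_t>((g * a + px[1] * (255 - a)) / 255);
            px[2] = static_cast<uint8_t>((r * a + px[2] * (255 - a)) / 255);
            px[3] = static_cast<uint8_t>(a + px[3] * (255 - a) / 255);
        }
    }
}
```

Getting the transparency byte inverted, or the channel order wrong, produces exactly the kind of wrong-color output discussed later in this thread.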

brabebhin and others added 8 commits December 31, 2024 19:16
When you enable subtitle streams, exceptions are thrown; I have no idea what this is:

FlushCodecsAndBuffers
libass: Event at 24354, +96: 499,0,Default - Copier,,0,0,0,fx,{\an5\pos(649,63)\bord2\shad0\be0\}To
libass: Event at 24354, +28: 514,2,Default - Copier,,0,0,0,fx,{\galovejiro\an5\blur0\bord4.0909\pos(755,63)\fad(0,200)\t(0,100,\blur8\3c&H0000FF&\fscx125\fscy125)\t(100,180,\fscx100\fscy100\bord0\blur0)}the
libass: Event at 24354, +28: 515,2,Default - Copier,,0,0,0,fx,{\galovejiro\an5\blur0\bord4.0909\pos(755,63)\fad(0,200)\t(0,100,\blur8\3c&H0000FF&\fscx100\fscy100)\t(100,180,\fscx100\fscy100\bord0\blur0)}the
Exception thrown at 0x00007FFCECCEFB4C (KernelBase.dll) in MediaPlayerCPP.exe: WinRT originate error - 0xC00D36B2 : 'The request is invalid in the current state.'.
Seek
SeekFast
 - ### Backward seeking
FlushCodecsAndBuffers
Exception thrown at 0x00007FFCECCEFB4C in MediaPlayerCPP.exe: Microsoft C++ exception: winrt::hresult_error at memory location 0x000000EACCBFEBB8.
@ramtinak
Collaborator Author

ramtinak commented Jan 1, 2025

Happy New Year.

This issue occurs when you don't call ass_set_fonts after initializing ASS_Renderer. I realized I had forgotten to include this step.

Regarding your point, I'm not entirely sure you're correct. Calling ass_render_frame inside CreateCue doesn't seem appropriate (at least, I don't think so). ass_render_frame should only be called when a frame changes, and I don't believe CreateCue handles this scenario.

I made some adjustments, and while the changes work to some extent, the SoftwareBitmap isn't being displayed as expected.

Here's what I tested:
I used the MediaPlayerCS sample, added an Image control to the UI, and set up a CueEntered event as follows:
(This actually worked)

private async void OnTimedTrackCueEntered(TimedMetadataTrack sender, MediaCueEventArgs args)
{
    if (args.Cue is ImageCue cue)
    {
        await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, async () =>
        {
            var sub = cue.SoftwareBitmap;
            var bitmapSource = new SoftwareBitmapSource();

            await bitmapSource.SetBitmapAsync(sub);

            image.Source = bitmapSource;

            Debug.WriteLine($"{cue.StartTime} | {cue.Duration} | {sub.PixelWidth}x{sub.PixelHeight}");
        });
    }
}
<Image x:Name="image" Grid.Row="1" Width="400" Height="300" />
  • Update Issue: The image doesn't update consistently, but it's a start.
  • Pixel Calculation Error: There's an issue with pixel calculations somewhere in the code.

libass requires ass_render_frame to be called for every frame. So, how should the ImageCue handle StartTime and Duration in this context?

MediaPlayerCS: (screenshot)

PotPlayer: (screenshot)

@softworkz
Collaborator

softworkz commented Jan 1, 2025

Using ImageCues for ASS rendering is not suitable. The two just don't go together.
ImageCue timed tracks are for static bitmap subtitles like dvdsub, dvbsub, or HDMV/PGS.

Even though the definition of ASS events involves a start time and a duration, an ASS event doesn't necessarily stand for a bitmap which remains static (unchanged) over the duration of an event - but that's what ImageCue bitmaps are designed for.

Also, there's another mismatch: ASS events can overlap in time, so multiple can be active at the same time. You are creating an ImageCue for each ASS event, which would still be fine in the case of static bitmaps and without libass. But libass doesn't render any output that is related to a specific ASS event, so in turn it also can't render anything that is related to a specific image cue.

Even further, an ImageCue is supposed to have a fixed position and size, but libass doesn't give you anything like that. Both can change from frame to frame.

The overall conclusion is simply that TimedMetadataTrack and ImageCue aren't suitable APIs for the way libass renders its output: frame-wise, not per ASS event.

You need to call ass_render_frame() once for each video frame being shown (or every second frame, etc.), and when detect_change is 1, you need to bring that output on screen in some way.
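The shape of that loop can be sketched as pure logic; here renderFrame is only a stand-in for ass_render_frame() returning its detect_change flag, not a real libass call:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Render once per video frame and only re-present when the renderer reports
// a change. 'renderFrame' stands in for ass_render_frame() and returns the
// detect_change flag for the given timestamp.
int RunPresentLoop(const std::vector<long long>& frameTimesMs,
                   const std::function<int(long long)>& renderFrame)
{
    int presents = 0;
    for (long long t : frameTimesMs) {
        if (renderFrame(t) != 0)
            ++presents;  // blend + copy to screen only when the output changed
    }
    return presents;
}
```

The point is that presentation is driven by the playback clock and detect_change, not by cue start/end times.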

@softworkz
Collaborator

softworkz commented Jan 1, 2025

In case you want to render ASS subs statically, without animation:

That would of course be possible with ImageCues, but the problem here is that the FFmpegInteropX SubtitleProvider base implementation is not suitable for this, since it assumes that each AVPacket creates one MediaCue, and that doesn't work out in this case.

It would work like this:

  • Feed all ASS events (AVPacket) into libass
    (this happens at the start of playback, because ASS subs are not interleaved in the stream)
  • While doing so, for each ASS event:
    • Modify the event to strip all animations (I have code for that)
    • Put the start and end time of each event into a list of timestamps (one-dimensional, without distinguishing start from end)
  • Finally, de-duplicate and sort that list
  • Now, iterate through that list, and for each timestamp:
    • ass_render an image
    • use graphical algorithms to detect regions with content
    • create an image cue for each region
    • all cues share the same start and duration (from the current timestamp to the next one in the list)

Finally, there's one problem to solve: You don't want to create all the images on playback start, so you need to synchronize in some way with the playback position and make sure you only create image cues for e.g. the next 30s.

This will give you static, non-animated rendering of ASS subs, and that can be done using ImageCue. You also don't need to care about the rendering but can let Windows Media do it (so no listening to cue-entered events).
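The timestamp-segmentation step above can be sketched as pure logic; this is only an illustration of the algorithm, not FFmpegInteropX code:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// (start, end) of one ASS event, in milliseconds.
struct AssEvent { long long start, end; };

// Collapse all event boundaries into one sorted, de-duplicated timeline and
// emit a (start, duration) interval per adjacent pair. Each interval would
// become one batch of static image cues.
std::vector<std::pair<long long, long long>>
BuildCueIntervals(const std::vector<AssEvent>& events)
{
    std::vector<long long> stamps;
    for (const AssEvent& e : events) {
        stamps.push_back(e.start);
        stamps.push_back(e.end);
    }
    std::sort(stamps.begin(), stamps.end());
    stamps.erase(std::unique(stamps.begin(), stamps.end()), stamps.end());

    std::vector<std::pair<long long, long long>> intervals;
    for (size_t i = 0; i + 1 < stamps.size(); ++i)
        intervals.emplace_back(stamps[i], stamps[i + 1] - stamps[i]);
    return intervals;
}
```

Within each interval the (de-animated) rendering is constant, which is what makes the ImageCue model fit.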

PS: Happy New Year as well!

@brabebhin
Collaborator

Happy new year everyone!
Excellent work. This is an important milestone.
We can modify the SubtitleProvider to return a list of ImageCues, one for each individual frame in the animation.

@ramtinak
Collaborator Author

ramtinak commented Jan 1, 2025

I’ve added a new function to the SubtitleProvider class, which creates a new collection of IMediaCue objects (IVector<IMediaCue>). In this function, I populate a list of cues based on position and duration. For this implementation, I used a loop with a duration of 500 milliseconds for each cue.
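The 500 ms splitting described here could look roughly like this; SplitIntoFixedCues is a hypothetical helper for illustration, not the actual FFmpegInteropX code:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Split one subtitle event into fixed-length cues (500 ms by default), so
// each cue can carry one rendered frame of the animation.
std::vector<std::pair<long long, long long>>  // (start, duration) in ms
SplitIntoFixedCues(long long start, long long duration, long long step = 500)
{
    std::vector<std::pair<long long, long long>> cues;
    for (long long t = 0; t < duration; t += step)
        cues.emplace_back(start + t, std::min(step, duration - t));
    return cues;
}
```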

Despite this, subtitles still don’t display unless they’re manually added in C#.

Additionally, I implemented a new function in FFmpegMediaSource to capture the current frame from libass directly. Here is the result of that implementation:

https://1drv.ms/v/c/6ad13c09a43a4b36/Ef1Xvke1IG9MutjWh7NkQUkBP_BewvVVJwhKaahjI9nmNg?e=fIAgZ0

However, there's an issue with blending colors: the color calculation is incorrect. None of the displayed colors match the intended colors. For reference, the correct colors should look like this:
https://1drv.ms/v/s!AjZLOqQJPNFqf5OHo1X5i1OD_WA?e=bm7fle

@brabebhin
Collaborator

brabebhin commented Jan 1, 2025

Hmm, I was under the impression that the blending algorithm came from the JavaScript implementation? I haven't looked much at it, although IIRC something caught my eye at some point that seemed incorrect.

In any case, colours aren't so important. We need to think through the animation side.

Assuming the animation fps is the same as the video fps (the libass API seems to point in that direction), we can use the sample request events to drain the subtitle provider of animation frames. It would work similarly to avcodec or the filter graph, so some more extensive refactoring might be necessary here.

I see no reason ImageCue cannot handle animations, assuming 1 cue = 1 animation frame. Other than maybe potential performance problems in MPE.

@softworkz
Collaborator

I see no reason ImageCue cannot handle animations

I do. One full-size PNG image for each video frame? Seriously?

@ramtinak
Collaborator Author

ramtinak commented Jan 1, 2025

I found out why the cue doesn't appear in the UI: the (x, y) cuePosition was set to 100. I changed it to 0, and the subtitle displayed correctly.

The ConvertASSImageToSoftwareBitmap function was created with the help of ChatGPT, so I’m not sure where it came from. However, I referenced multiple sources from different projects to ChatGPT, but none of them seem to work correctly.

I also tried animations again, but many cues are dropped, and most of them don't show up. However, as you can see, it works fine when you render it yourself with a timer: it's fast and it works (except for the color part, of course).

As @softworkz mentioned, I think the ImageCue is not meant to be used for animation effects.

A side thought: Is it possible that our data (styles and colors) are incorrect when appending it to libass?

@softworkz
Collaborator

A side thought: Is it possible that our data (styles and colors) are incorrect when appending it to libass?

You can easily find out by not doing it. For actual ASS subtitles, this shouldn't be done anyway.

@softworkz
Collaborator

For the canvas control, only the .Dispose() needs to be done on the UI thread: https://microsoft.github.io/Win2D/WinUI2/html/M_Microsoft_Graphics_Canvas_UI_Xaml_CanvasVirtualControl_CreateDrawingSession.htm

I'm still not sure whether the canvas control is the right thing for the job. It's definitely easy, though.

@brabebhin
Collaborator

CanvasAnimatedControl is not available in winui3. We should stick to swap chains.

@softworkz
Collaborator

CanvasAnimatedControl is not available in winui3. We should stick to swap chains.

Yes of course. I thought this would generally apply to a CanvasDrawingSession.

@lukasf
Member

lukasf commented Jan 25, 2025

Yes I know, but the only time you call await (letting execution yield) is while there's a pending DrawingSession. (=inside the using).

Wrong, please look closely.

It doesn't matter how long the other code takes to execute. What matters is that you are blocking any other dispatched code from executing (in response to your method calls).

await is not blocking. The UI thread is blocked less than 1ms per frame. During the 30ms of render, UI thread is free to do whatever it likes. Only after render is done, the rest of the loop is scheduled on UI thread again.

If I call RenderSubtitle() instead of await RenderSubtitleAsync(), then the UI is blocked, getting totally unresponsive and crashing. But doing the render in the background and just awaiting it certainly does not block the UI.

The sane way is to wrap the whole execution in Task.Run() (=threadpool thread)

This is what RenderSubtitleAsync does

private Task<SubtitleRenderResult> RenderSubtitleAsync(CanvasRenderTarget target)
{
    return Task.Run(() => RenderSubtitle(target));
}

It does not matter if you run the loop on the dispatcher and run RenderSubtitle on a thread, or if you do the inverse (run the loop in the background and invoke Present on the dispatcher thread). In the end, the same parts are done on the dispatcher. The only thing that matters is how long and how often you block and invoke the UI thread. And that's exactly the same for both approaches.

@softworkz
Collaborator

softworkz commented Jan 25, 2025

Yes I know, but the only time you call await (letting execution yield) is while there's a pending DrawingSession. (=inside the using).

Wrong, please look closely.

I was referring to this loop, which has only a single await:

(screenshot)

It does not matter if you run the loop on the dispatcher and run RenderSubtitle on a thread, or if you do the inverse (run the loop in the background and invoke Present on the dispatcher thread). In the end, the same parts are done on the dispatcher.

It does matter (at least it can matter).

What I meant by "blocking" is that you don't give other things a chance to execute which are dispatched as well.
The only point where you release the UI thread and allow it to execute other things is the single await call inside the two using blocks; you never allow it to yield at any other place.

The red lines are all places where the UI thread might be needed to execute something else, which you do not allow because you are running on that thread. That's what I meant by "blocking".
Essentially, you never allow any code execution on the main thread without having a pending drawing session and render target.

@softworkz
Collaborator

softworkz commented Jan 25, 2025

In the end, the same parts are done on the dispatcher.

No - not in the same order.

The only thing that matters is how long and how often do you block and invoke the UI thread.

And at which moments in time.

And that's exactly the same for both approaches.

No. If you did it the way I'm suggesting (doing only what's absolutely needed on the UI thread), there would be multiple dispatch invokes and thus more chances for the UI thread to execute other things.

With the way I'm suggesting, no Task.Delay will be needed.

EDIT:
Of course you can also insert more Task.Delay calls - but it's against the principle of not hijacking the UI thread.

@lukasf
Member

lukasf commented Jan 25, 2025

What I meant by "blocking" is that you don't give other things a chance to execute which are dispatched as well. The only point where you release the UI thread and allow it to execute other things is the single await call inside the two using blocks; you never allow it to yield at any other place.

That single await gives the UI thread 30 ms of time to do anything else. The rest of the loop (in this case) takes about 2 ms; that's the only time where the UI thread is blocked. So more than 90% of the time, the UI thread is free. It might be possible to do even more stuff on the background thread in the case of the swap chain.

The canvas image (I originally thought you were talking about that one) does not allow background thread access. But then, in the canvas image loop, the UI thread is blocked about 0.1 ms per 30 ms loop. So we sure do not have any issue there.

The red lines are all places where the UI thread might be needed to execute something else, which you do not allow because you are running on that thread. That's what I meant by "blocking".

Yes, blocking 2 ms of 32 ms per loop. I guess it can be further reduced, since the swap chain allows more stuff to be called from the background. But the app is super responsive anyway. I am not saying that this is finished, just that we do not have any major issue with the UI thread being blocked.

If we blocked the UI thread for the 30 ms of render (like in the very first version), then we'd have a major issue. But that's not the case anymore.

@softworkz
Collaborator

softworkz commented Jan 25, 2025

What you are saying is all fine, but you miss my point: it is not about the amount of time for which the UI thread is running. It is about the moments in time, and the states of other objects, at which it can execute something.

For example, it is not possible for UI thread execution to happen at a moment when there's no active drawing session:
From the moment the current one is closed until the next one is created, there is no opportunity for code that is dispatched to the UI thread to run. Just as one example.

I mean, what are we talking about? You are wondering why you need an additional await Task.Delay() and I'm just explaining it... 😆

@lukasf
Member

lukasf commented Jan 25, 2025

What you are saying is all fine, but you miss my point: it is not about the amount of time for which the UI thread is running. It is about the moments in time, and the states of other objects, at which it can execute something.

The longest period where the UI thread is blocked is about 2 ms. Yes, this can be further improved, but it is nowhere near critical. And it does not matter if a drawing session is open or not.

I mean, what are we talking about? You are wondering why you need an additional await Task.Delay() and I'm just explaining it... 😆

The additional Delay() is not needed in the swap chain loop. It is only needed in the canvas image loop, and it has to do with the fact that the device seems to be in use. As I said, the UI thread is only blocked for ~0.2 ms in a 30 ms loop in that scenario. It is not about the UI thread being blocked.

@lukasf
Member

lukasf commented Jan 25, 2025

Oh, and I am not holding a drawing session or anything else in the canvas image loop! Still, there I need an additional await in between draw and starting the next render.

@softworkz
Collaborator

softworkz commented Jan 25, 2025

The additional Delay() is not needed in the swap chain loop.

Ah, I thought you meant it's needed there. But the same applies to the other case.

You need to understand that other components can be using threads as well, and at times they need to dispatch code to run on the UI thread. But the point in time at which this code is executed matters: for example, if you continue your loop and do other things first, it can already be too late for the other component's dispatched code to run. That's why you need to force the yielding via Task.Delay(). It doesn't matter whether it's 20 ms or 0.2 ms: if there's some dispatched code which needs to be executed in a certain state and you don't let it, things can go wrong.
That's exactly why you need to insert the Task.Delay().

and it has to do with the fact that the device seems to be used.

Because it hadn't had the chance to dispatch some code on the UI thread

@brabebhin
Collaborator

brabebhin commented Jan 25, 2025

In the swap chain loop, aside from the trivial size checks and the ass_image rendering, nothing is going on the CPU. And the ass_image is on a background thread. So that loop is actually very light, because almost everything is done either on a background thread or on the GPU. All the UI thread does are some trivial checks and calls to keep everything in order. Nothing to be worried about.

@softworkz
Collaborator

Nothing to be worried about.

It was @lukasf being worried about the need for the await Task.Delay(5) call.

@softworkz
Collaborator

softworkz commented Jan 25, 2025

Sorry, got it.

@brabebhin
Collaborator

brabebhin commented Jan 25, 2025

There are probably private calls happening in the Image control that cause the app to freeze and thus lose the DirectX device. But that doesn't matter: since the swap chain is significantly faster and can achieve the theoretical maximum fps, we don't really need the Image implementation other than maybe as a fallback for regression checks.

We do need to handle the device lost scenario on the swap chain as well. Reallocate everything and so on.

Technically our in-house C++ swap chain should work, but it does not. I'll see what's up with that.

@softworkz
Collaborator

What's actually the advantage of using CanvasDrawingSession and CanvasRenderTarget? I mean, this involves at least one additional copy of the image (in Session::Draw), albeit in GPU memory.

Why not use IDXGISwapChain::GetBuffer() directly? AFAIU, the CanvasSwapChain can be cast to IDXGISwapChain1, right?

@brabebhin
Collaborator

That would be what the cpp sample frontend is trying to do, except without Win2D.

@lukasf
Member

lukasf commented Jan 25, 2025

@softworkz Nothing time critical can ever be done on the dispatcher thread; this is sure not the reason I had trouble. The dispatcher thread can easily and repeatedly be bogged down for extensive periods of time. E.g. navigate to a new page with lots of list items with complex templates: the dispatcher thread might spend half a second or more creating and laying out hundreds of controls, and no one will get their continuations run on the dispatcher during that extensive period of time. Anyone who uses it expecting tight timing behavior and instant response is totally going to fail. That's absolutely not what it is made for, and it never has and never will provide that. Using it for a few ms is nothing, though it's best to use it as little as possible. We will be working on that, but it is trial and error to find out which APIs you can call from the background and which you can't.

Why not use IDXGISwapChain::GetBuffer() directly? AFAIU, the CanvasSwapChain can be casted to IDXGISwapChain1, right?

That would of course be the best way. You just can't easily do that from C# code. When going native, we can just as well go directly with the DXGI interfaces, allowing better optimizations.

@brabebhin
Collaborator

Actually the truth is somewhere in between. You may get device lost errors in DirectX if the dispatcher thread gets bogged down. However, the timeouts are on a somewhat different scale (milliseconds vs. seconds).

That would of course be the best way. You just can't easily do that from C# code. When going native, we can just as well go directly with the DXGI interfaces, allowing better optimizations.

So I was able to create the swap chain, attach it to the panel, and rendering + presenting seems to go without errors.
Yet no subtitles are shown :(

@lukasf
Member

lukasf commented Jan 25, 2025

As expected, the canvas swap chain is much more thread friendly than the canvas image. Basically the whole loop can run on a background thread (just done that). I'd guess even the size change could be done in the background; we'd just need to pass in the new size and dpi.

I wonder what's the best approach to handle size changes with the swap chain, to allow smooth resizing without artifacts. ResizeBuffers will cause the image to disappear until a new one is rendered. Maybe it would be better to create a new swap chain, render to it, and then exchange the old for the new swap chain? Not that it's important now. Just noticing that the sub disappears and re-appears during resize.

@brabebhin
Collaborator

brabebhin commented Jan 25, 2025

I wonder what's the best approach to handle size changes with the swap chain, to allow smooth resizing without artifacts. ResizeBuffers will cause the image to disappear until a new one is rendered. Maybe it would be better to create a new swap chain, render to it, and then exchange the old for the new swap chain? Not that it's important now. Just noticing that the sub disappears and re-appears during resize.

In my frame server mode implementation, I simply redraw the swap chains after a resize, somewhat outside the main callbacks. This is only done when playback is Paused, because when it is playing, the loop will simply pick up the change before the user can see anything.

I think in the end we could go down the frame server mode route and render video + subs on the same swap chain. This should theoretically be the most efficient way.
I have pretty much figured out everything there (including HDR, which is something others said doesn't work), except the threading model; I wasn't quite sure what could and couldn't be used in the background lol.

We can even use the Media Foundation subtitle rendering for non-ass subtitles.

@softworkz
Collaborator

I wonder what's the best approach to handle size changes with the swap chain, to allow smooth resizing without artifacts.

Create a static copy of the current image and display it in an image control with auto-resizing. The swapchain remains hidden (or has clear content). On each resize message, restart a timer (like 500 ms). When it fires, resize the swapchain, hide the static image, and continue swapping.

@softworkz
Collaborator

I wonder what's the best approach to handle size changes with the swap chain, to allow smooth resizing without artifacts.

Create a static copy of the current image and display it in an image control with auto-resizing. The swapchain remains hidden (or has clear content). On each resize message, restart a timer (like 500ms). When it fires, resize the swapchain, hide the static image and continue swapping.

Or, to avoid stopping animations:

  • When the size is reduced, keep the swapchain size and only render smaller images, aligned top-left
  • When the size is increased, grow the swapchain to a much larger size and only render the ass images according to the view size

In both cases this allows just a few swapchain resizes, as opposed to one per size change.

@softworkz
Collaborator

@softworkz Nothing time critical can ever be done on the dispatcher thread; this is sure not the reason I had trouble. The dispatcher thread can easily and repeatedly be bogged down for extensive periods of time. E.g. navigate to a new page with lots of list items with complex templates: the dispatcher thread might spend half a second or more creating and laying out hundreds of controls, and no one will get their continuations run on the dispatcher during that extensive period of time. Anyone who uses it expecting tight timing behavior and instant response is totally going to fail. That's absolutely not what it is made for, and it never has and never will provide that. Using it for a few ms is nothing, though it's best to use it as little as possible

You are still totally misunderstanding what I'm trying to say. It's not about the period of time that it is unavailable (I've said it four times).
Anyway, not the most important thing at the moment if it works OK.

Why not use IDXGISwapChain::GetBuffer() directly? AFAIU, the CanvasSwapChain can be casted to IDXGISwapChain1, right?

That would of course be the best way. You just can't easily do that from C# code.

You can :-)
=> https://www.nuget.org/packages/JeremyAnsel.DirectX.Dxgi/3.0.33

@softworkz
Collaborator

As expected, the canvas swap chain is much more thread friendly than the canvas image. Basically the whole loop can run on a background thread (just done that)

This also gives you better control over the scheduling of things that need to run on the UI thread, as you can set a priority when invoking.

We will be working on that, but it is trial and error to find out which APIs you can call from background and which you can't

I would have started by putting every call inside a Dispatcher.RunAsync() lambda, and then tried each one, one after another, executing directly on the background thread.

var height = mediaPlayerElement.ActualHeight;
swapChain?.ResizeBuffers((float)width, (float)height, displayInfo.LogicalDpi);
swapChainSizeChanged = false;
});
Collaborator


You should now use .ConfigureAwait(false) on all await calls. Currently, each await synchronizes back to the same threadpool thread as before the await, and when that thread isn't available, it will block until it is.

@softworkz
Collaborator

As expected, the canvas swap chain is much more thread friendly than the canvas image. Basically the whole loop can run on a background thread (just done that).

This sounds a bit suspicious. Probably there's some magic in the canvas swap chain to make it convenient? In that case, the question would be what the cost of it is.

@lukasf
Member

lukasf commented Jan 26, 2025

This sounds a bit suspicious. Probably there's some magic in the canvas swap chain to make it convenient? In that case, the question would be what the cost of it is.

This is actually what I expected and how it is documented (dxgi swap chain docs). Only I was not sure if the win2d abstractions add some dependency on the dispatcher. It's good that that's not the case. The canvas swap chain could be quite usable, if only it would expose the buffers directly.
