It has been almost three years since we landed initial support for screensharing on Wayland using PipeWire in the WebRTC project. This enabled screensharing support in both major Linux browsers. Last year I implemented support for window sharing, added support for PipeWire 0.3, and added support for DMA-BUF and MemFD buffer types. The problem was that, as it turned out, the DMA-BUF support was not implemented correctly.
The original implementation used mmap() to get the buffer content. This worked correctly on current Intel GPUs, but was terribly slow on e.g. AMD GPUs. The proper solution is to use an OpenGL context to download the content from the buffer. However, there were already many implementations using mmap(), including WebRTC, and we needed a way to properly communicate between the server and the client that when the client advertises DMA-BUF support, it doesn't use mmap() but goes through an OpenGL context instead.
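To give an idea of what the OpenGL path looks like, here is a minimal sketch assuming an EGL display with a current context and the EGL_EXT_image_dma_buf_import extension. The function name and the single-plane layout are illustrative only; real buffers may have multiple planes and also need the modifier attributes:

```cpp
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

GLuint ImportDmaBufAsTexture(EGLDisplay display, int dmabuf_fd,
                             EGLint width, EGLint height,
                             EGLint drm_fourcc, EGLint stride, EGLint offset) {
  // The extension entry points have to be resolved at runtime.
  auto create_image = reinterpret_cast<PFNEGLCREATEIMAGEKHRPROC>(
      eglGetProcAddress("eglCreateImageKHR"));
  auto destroy_image = reinterpret_cast<PFNEGLDESTROYIMAGEKHRPROC>(
      eglGetProcAddress("eglDestroyImageKHR"));
  auto image_target_texture =
      reinterpret_cast<PFNGLEGLIMAGETARGETTEXTURE2DOESPROC>(
          eglGetProcAddress("glEGLImageTargetTexture2DOES"));

  const EGLint attribs[] = {
      EGL_WIDTH, width,
      EGL_HEIGHT, height,
      EGL_LINUX_DRM_FOURCC_EXT, drm_fourcc,
      EGL_DMA_BUF_PLANE0_FD_EXT, dmabuf_fd,
      EGL_DMA_BUF_PLANE0_OFFSET_EXT, offset,
      EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
      EGL_NONE};

  // Wrap the DMA-BUF file descriptor in an EGLImage...
  EGLImageKHR image = create_image(display, EGL_NO_CONTEXT,
                                   EGL_LINUX_DMA_BUF_EXT, nullptr, attribs);

  // ...and bind it to a texture; the pixels can then be read via the GPU
  // (e.g. glReadPixels through an FBO) instead of a slow CPU mapping.
  GLuint texture = 0;
  glGenTextures(1, &texture);
  glBindTexture(GL_TEXTURE_2D, texture);
  image_target_texture(GL_TEXTURE_2D, image);
  destroy_image(display, image);
  return texture;
}
```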
Here are some issues if you want to read about the details:
- PipeWire bug#1055: No way for source node to know whether sink node interacts with DMA buffers the right way
- PipeWire bug#1084: General problems with dmabufs
- Mutter bug#1736: DMA-BUF screensharing crashes applications
This all resulted in a completely different way of communicating between the consumer and the producer in order to use DMA buffers for much faster and smoother screensharing. Both sides are now required to query the list of all supported modifiers and add it as a new stream parameter, flagged as mandatory; the rest of the stream parameters are kept as before, so we can keep using the other buffer types in case DMA-BUFs are not supported by the producer. Once both sides' expectations match, we can query whether the negotiated stream includes modifiers; if it does, we know we can use DMA buffers, which we now properly open using an OpenGL context, while mmap() is kept as a fallback for MemFd buffer types. This results in faster screensharing support in your web browsers.
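Here is a simplified sketch of the consumer side of this negotiation, loosely following the pattern used in WebRTC's PipeWire capturer. The modifier list passed in is a placeholder; real code first queries which modifiers the GPU driver supports (via EGL) before building this parameter:

```cpp
#include <spa/param/video/format-utils.h>
#include <spa/pod/builder.h>

static struct spa_pod* BuildFormat(struct spa_pod_builder* builder,
                                   uint32_t video_format,
                                   const uint64_t* modifiers,
                                   size_t n_modifiers) {
  struct spa_pod_frame frames[2];
  struct spa_rectangle min_size = {1, 1};
  struct spa_rectangle max_size = {8192, 8192};
  struct spa_rectangle default_size = {320, 240};

  spa_pod_builder_push_object(builder, &frames[0], SPA_TYPE_OBJECT_Format,
                              SPA_PARAM_EnumFormat);
  spa_pod_builder_add(builder, SPA_FORMAT_mediaType,
                      SPA_POD_Id(SPA_MEDIA_TYPE_video), 0);
  spa_pod_builder_add(builder, SPA_FORMAT_mediaSubtype,
                      SPA_POD_Id(SPA_MEDIA_SUBTYPE_raw), 0);
  spa_pod_builder_add(builder, SPA_FORMAT_VIDEO_format,
                      SPA_POD_Id(video_format), 0);

  if (n_modifiers > 0) {
    // The modifier property is flagged MANDATORY, so this parameter only
    // matches a producer that also understands modifiers (and therefore the
    // OpenGL path); DONT_FIXATE lets the producer pick one value from the
    // choice. A second parameter without modifiers is offered separately as
    // the MemFd/mmap() fallback.
    spa_pod_builder_prop(
        builder, SPA_FORMAT_VIDEO_modifier,
        SPA_POD_PROP_FLAG_MANDATORY | SPA_POD_PROP_FLAG_DONT_FIXATE);
    spa_pod_builder_push_choice(builder, &frames[1], SPA_CHOICE_Enum, 0);
    spa_pod_builder_long(builder, modifiers[0]);  // preferred value
    for (size_t i = 0; i < n_modifiers; i++)
      spa_pod_builder_long(builder, modifiers[i]);
    spa_pod_builder_pop(builder, &frames[1]);
  }

  spa_pod_builder_add(
      builder, SPA_FORMAT_VIDEO_size,
      SPA_POD_CHOICE_RANGE_Rectangle(&default_size, &min_size, &max_size), 0);
  return static_cast<struct spa_pod*>(
      spa_pod_builder_pop(builder, &frames[0]));
}

// Later, in the stream's param_changed callback, the presence of the
// modifier property in the negotiated format tells the consumer whether
// the DMA-BUF (OpenGL) path can be used for this stream:
static bool StreamUsesDmaBuf(const struct spa_pod* format) {
  return spa_pod_find_prop(format, nullptr, SPA_FORMAT_VIDEO_modifier) !=
         nullptr;
}
```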
Last but not least, I made screensharing even faster, regardless of the buffer type we use. Originally, when we received a buffer from PipeWire, we copied it to a local buffer so we could apply cropping and adjust the position, and only after that did we copy the adjusted content into a DesktopFrame, which each DesktopCapturer (a class representing a screensharing implementation) is supposed to return so the browser can display it. That means we performed two copy operations per frame. I improved this implementation so that we now copy the PipeWire buffer content directly into a desktop frame we can return right away, one copy operation fewer than before (a sketch follows the numbers below). I didn't take exact measurements, but simply running htop and comparing the usage of the top 5 processes when sharing a 4K screen, I got:
- Original result: 66%, 64%, 26%, 23%, 10%
- Updated result: 41%, 39%, 19%, 17%, 12%
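To illustrate the change, here is a hypothetical, heavily simplified sketch using WebRTC's DesktopFrame API; the real capturer also applies the cropping and position adjustment while performing this single copy:

```cpp
#include <memory>

#include "modules/desktop_capture/desktop_frame.h"
#include "modules/desktop_capture/desktop_geometry.h"

std::unique_ptr<webrtc::DesktopFrame> FrameFromPipeWireBuffer(
    const uint8_t* pw_buffer, int pw_stride, webrtc::DesktopSize size) {
  // Before: PipeWire buffer -> intermediate local buffer (for cropping and
  // position adjustment) -> DesktopFrame, i.e. two copies per frame.
  // Now: the content goes straight into the DesktopFrame that the
  // DesktopCapturer returns to the browser, i.e. one copy per frame.
  auto frame = std::make_unique<webrtc::BasicDesktopFrame>(size);
  frame->CopyPixelsFrom(pw_buffer, pw_stride,
                        webrtc::DesktopRect::MakeSize(size));
  return frame;
}
```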
I also have some other improvements on my TODO list, all of which should bring additional optimizations. I will keep you informed once I have news to share with you.
Both changes have been merged into WebRTC, which means they should be in Chrome/Chromium 96 (released during November 2021).