Jan Grulich

Making PipeWire the default option for Firefox camera handling

Starting with Fedora 41, we will make PipeWire the default backend for camera handling in Firefox. This is part of the effort to integrate support for Intel IPU6 cameras. You can read more about IPU6 cameras in the proposal for this feature, but the important part is that these cameras can (as of now) only be used with PipeWire.

Is PipeWire camera support mature enough?

Yes! At least I think so. I know there are people using it every day, and I've been working hard to fix all the issues I could and close the gaps between the PipeWire and V4L2 backends. There has also been a lot of progress in making libcamera's software ISP work with the PipeWire camera support in WebRTC. This was mainly the work of Robert Mader and Kieran Bingham: Robert submitted a patch to libcamera, and Kieran helped me verify and test changes on the WebRTC side, where I had to add support for all the additional video formats [1, 2].

Screenshot of libcamera's software ISP in action in Firefox. Rumors are that Kieran is already using it regularly for softISP calls 🙂

There has also been amazing work by Hans de Goede on this front. Besides all the IPU6 work, Hans fixed duplicate device entries in PipeWire's libcamera implementation and pushed fixes to the V4L2 support in PipeWire to address some races in device enumeration. This should fix cameras not being properly detected at startup, which prevented the camera portal from correctly advertising camera availability, something Firefox relies on. Speaking of duplicate device entries, I recently also removed the duplicate camera entries created by IR cameras. These used to show up as extra cameras in the device list, but if the user accidentally selected the IR camera, it didn't work at all.

Another set of PipeWire camera fixes in Firefox is currently under review, mainly WebRTC backports and fixes, plus the missing support for device change updates. I also have to mention Andreas Pehrson from Mozilla, who has been a great and active reviewer of all the changes in upstream WebRTC and especially in Firefox, which also contributes to the adoption of PipeWire camera support.

PipeWire camera support in Chromium

PipeWire camera support was implemented in Chromium by Michael Olbrich from Pengutronix and has been merged for M127. To use it, you just need to go to chrome://flags and enable the PipeWire camera support flag. Unfortunately, this version is broken, because neither Michael nor I tested it with an official build. Official builds enable Control Flow Integrity, which can be violated by PipeWire calls because of the way the PipeWire library is used (it is dlopened). We had the same problem in the past with screencast support. The issue went uncaught into the release, but I have already fixed it in time for the fix to be backported to M128, which is just around the corner. Chromium will also benefit from all the fixes mentioned above, because they all live in WebRTC, which is shared by Chromium and Firefox.

Testing PipeWire camera support

Testing is something we need right now to catch all the bugs in time for the Fedora 41 release. For Chromium, you will need to wait for Chromium 128 and enable the flag as mentioned above. For Firefox, you can go to about:config and enable the media.webrtc.camera.allow-pipewire option; on Fedora 41+ you shouldn't need to do this, as we've already made the switch in the latest Firefox build. You can also read my previous blog post for more details, but you can ignore the issues described there, as they have already been fixed. If you do find a problem, the best place to report it is the WebRTC bug tracker, as it's most likely a general issue rather than something specific to Chromium or Firefox.

In most (default) setups you will probably end up using the PipeWire camera with the V4L2 backend in PipeWire, but you can also install libcamera and the libcamera plugin for PipeWire, as mentioned in my previous blog post. The only thing that has changed is that recent WirePlumber versions avoid duplicate camera entries, so the libcamera nodes are hidden. To filter out the V4L2 nodes and show the libcamera ones instead, you can create a ~/.config/wireplumber/wireplumber.conf.d/disable-v4l2.conf file with the following content:

wireplumber.profiles = {
  main = {
    monitor.v4l2 = disabled
  }
}

After restarting WirePlumber, you should see only libcamera nodes. You can check this with wpctl status.

Now let’s make Fedora 41 another great release, packed with the latest technologies.

PipeWire camera support in Firefox

New year, new challenges.

We finally reached a major milestone with Chromium 110, the release where screen sharing finally became enabled by default on Wayland, so you no longer have to go into the preferences and enable the flag yourself. That doesn't mean my work there is over, but I've shifted my focus to something related but slightly different: PipeWire camera support.

Work on PipeWire camera support started in 2021 and was done by Michael Olbrich (Pengutronix). He submitted a huge change to Chromium to add this support and had trouble finding a reviewer, because there was actually no one in the Chromium project who knew anything about PipeWire. I saw his change request by accident, we got in touch, and we decided to move the work to WebRTC instead, because having it lower in the stack means other browsers, like Firefox, get it automatically. Michael attended one of the meetings we used to have regularly for screen sharing support in WebRTC, and we discussed how to implement PipeWire camera support in WebRTC and how to reuse some of the code we already had for screen sharing to avoid duplication. After a few submitted and reverted reviews (usually because a change breaks Chromium code not covered by CI, which has happened to me many times), we ended up with PipeWire camera support in WebRTC at the beginning of this year.

Journey to PipeWire camera support in Firefox

Until recently my work has been roughly 95% WebRTC and 5% Chromium, and I was not familiar with Firefox at all (not counting WebRTC backports). I actually started by fixing screen sharing support there before moving to the camera, because I noticed a few issues after Firefox (finally) rebased its WebRTC copy to a newer version. They've since started doing monthly WebRTC rebases, which is really a good thing and I'm glad to see it happening. Anyway, even though Firefox ships a more recent WebRTC these days, when I started in February there was still no PipeWire camera support at all, because its WebRTC copy was still a few months behind, so I had to backport all the patches and make them work with Firefox. Only then could I finally start working on the actual PipeWire camera support on top of WebRTC.

While working on the backports I was still in WebRTC territory, so everything was somewhat familiar. Implementing the actual PipeWire support in Firefox was a different story and took me some extra time to understand how everything works: the camera API on the WebRTC side, camera support on the Firefox side, and all the APIs used specifically in Firefox. Admittedly, learning new things is fun too. After some trial and error it started to work and I was able to share my camera using the PipeWire camera backend from WebRTC. You will have to trust me that the picture below is not using the V4L2 backend.

I went ahead and submitted my WebRTC backports and the PipeWire camera backend implementation for review to Firefox. Unfortunately, I was told that the code where I placed my implementation can also be reached through the WebRTC JavaScript API, which is used by bots to check for camera presence on the client side, something I didn't know as someone who had only recently started working on camera support. This was a problem, because we get PipeWire access through xdg-desktop-portal, and that involves showing a dialog asking the user for camera access. Randomly showing a camera request dialog would not be a good experience. Going back to the drawing board, I talked to Andreas Pehrson (Mozilla/WebRTC). Andreas was a great help and we came up with a way to implement it properly in Firefox and avoid the problem I just described. It involved some reorganization in WebRTC, where I split the xdg-desktop-portal and PipeWire parts of the PipeWire video capture code, so that Firefox can request camera access only when appropriate and the backend only touches PipeWire once access has been granted. So I implemented it again, this time along the lines Andreas and I agreed on, and it worked.
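To give a rough idea of what requesting camera access looks like at the portal level, here is a minimal, hypothetical GDBus sketch against the org.freedesktop.portal.Camera interface. It is not the actual Firefox or WebRTC code, and a real implementation is asynchronous: it waits for the Response signal of the request returned by AccessCamera() before calling OpenPipeWireRemote().

#include <gio/gio.h>
#include <gio/gunixfdlist.h>

// Hypothetical sketch: ask the camera portal for access and, once granted,
// fetch a PipeWire remote fd. Real code does this asynchronously and only
// proceeds after the portal's Request emits its Response signal.
int RequestCameraAccess() {
  GError* error = nullptr;
  GDBusProxy* portal = g_dbus_proxy_new_for_bus_sync(
      G_BUS_TYPE_SESSION, G_DBUS_PROXY_FLAGS_NONE, nullptr,
      "org.freedesktop.portal.Desktop", "/org/freedesktop/portal/desktop",
      "org.freedesktop.portal.Camera", nullptr, &error);
  if (!portal)
    return -1;

  // AccessCamera() returns a Request object path; the user's decision is
  // delivered later via that Request's Response signal (omitted here).
  GVariantBuilder options;
  g_variant_builder_init(&options, G_VARIANT_TYPE_VARDICT);
  g_variant_builder_add(&options, "{sv}", "handle_token",
                        g_variant_new_string("camera_example"));
  GVariant* request = g_dbus_proxy_call_sync(
      portal, "AccessCamera", g_variant_new("(a{sv})", &options),
      G_DBUS_CALL_FLAGS_NONE, -1, nullptr, &error);
  if (!request)
    return -1;
  g_variant_unref(request);

  // Once access has been granted, OpenPipeWireRemote() hands us a file
  // descriptor of a PipeWire remote restricted to camera nodes.
  GVariantBuilder empty;
  g_variant_builder_init(&empty, G_VARIANT_TYPE_VARDICT);
  GUnixFDList* fds = nullptr;
  GVariant* result = g_dbus_proxy_call_with_unix_fd_list_sync(
      portal, "OpenPipeWireRemote", g_variant_new("(a{sv})", &empty),
      G_DBUS_CALL_FLAGS_NONE, -1, nullptr, &fds, nullptr, &error);
  if (!result)
    return -1;

  gint32 handle = 0;
  g_variant_get(result, "(h)", &handle);
  int pipewire_fd = g_unix_fd_list_get(fds, handle, &error);
  g_variant_unref(result);
  return pipewire_fd;
}

The point of the split described above is that only the first half of this flow (the access request and its dialog) is triggered from the Firefox side when appropriate, while the PipeWire backend only deals with the already granted remote.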

This is now submitted for review again, and hopefully this time it will only need minor fixes rather than a complete rewrite, so you will be able to try it sooner rather than later. The main change is submitted here, accompanied by other changes carrying WebRTC backports and tweaks that make those backports build with Firefox. For the first version of the change I had a Fedora COPR repository, but I had to discontinue it because it was too hard to keep it buildable on top of a stable Firefox. You can be sure, though, that Fedora will be the very first consumer of these changes once they are merged.

Why do we need this?

For many reasons. I recommend reading the blog post from Christian Schaller, where everything is explained in detail and you can learn more about the camera stack. The main reasons are:

  • Security
    • Access to the camera must be granted by the user, so you can be sure that no one is using your camera behind your back.
  • Flexibility
    • Your camera can be accessed by multiple clients simultaneously.
  • Libcamera support
    • Needed for ARM devices or devices using ChromeOS

Chromium support

While the WebRTC support was finished a few months ago, Chromium originally didn't use the WebRTC video capture API for cameras, so that had to be added as well. Michael implemented it and the change is still pending review, which means both Chromium and Firefox now have implementations waiting for approval.

Future plans

Most importantly, I want to get everything merged and working seamlessly, but I'm already aware of some issues and missing functionality in the PipeWire backend in WebRTC. We also have the same problem we used to have with screen sharing: it is not yet enabled by default, unit tested, or feature complete, and these things take time to fix.

WebRTC (Chromium): Year end report

Although Wayland screen sharing is still not enabled by default in Chromium, which is what I hoped to achieve this year, I think I can say we are almost there and you can expect it sooner rather than later. Let's summarize what we accomplished this year to make this change happen:

Stream restoration support

You probably remember that you had to go through two portal (xdg-desktop-portal) dialogs every time you wanted to share your screen. The first portal dialog was needed to get your selected screen into the Chromium preview dialog, and yet another one to actually share that screen with the web page itself. This was quite annoying, as users had to make the same selection twice. Thanks to a new addition to the portal API, I was able to implement stream restoration support in WebRTC to bypass the second portal dialog and have your selection shared with the web page instantly once you confirm it in the Chromium dialog. This was released in Chromium 105.
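The portal addition in question is the persist_mode/restore_token pair in the ScreenCast portal. Here is a hedged sketch of how a client could opt in when building the options for the SelectSources() call; the helper name and surrounding code are made up:

#include <gio/gio.h>
#include <string>

// Hypothetical helper: opt in to stream restoration in the options passed to
// the ScreenCast portal's SelectSources(). With persist_mode set to 2
// ("persist until explicitly revoked") the Start() response carries a
// restore_token that can be passed back next time to skip the portal dialog.
void AddRestoreOptions(GVariantBuilder* options,
                       const std::string& previous_restore_token) {
  g_variant_builder_add(options, "{sv}", "persist_mode",
                        g_variant_new_uint32(2));
  if (!previous_restore_token.empty()) {
    g_variant_builder_add(options, "{sv}", "restore_token",
                          g_variant_new_string(previous_restore_token.c_str()));
  }
}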

Tests for PipeWire (streaming) code

This is not a feature visible to users, but it is an important part of the whole process. It was a long effort to make this happen, as it's not trivial to test and the tests need some dependencies in order to run. As a first step we had to bring PipeWire and some of its dependencies into the infrastructure. As easy as that sounds, to add a new dependency there you have to add it in the form of a CIPD (Chrome Infrastructure Package Deployment) package, which means writing recipes describing how to build and fetch your package, all on a CentOS 7 based distribution where you have to work around old or missing libraries and tools (e.g. Meson). The packages then become available in the third_party directory, which is shared by Chromium and WebRTC.

The next step was to write the tests themselves. The only way I could test our PipeWire code, which is all about receiving frames over a PipeWire stream, was to write a second "testing" stream that sends us frames with known parameters, so we know what to expect and can verify that we received what we were supposed to. For the tests themselves I used the GTest framework, which is very well documented and quite comprehensive.

Sadly, we were still only in the middle of the process, and one would hope it couldn't get more complicated, but the opposite is true. There is a whole runtime setup the tests need: PipeWire and a PipeWire session manager have to be running, otherwise our streams would never activate and connect. To create this setup I wrote a Python wrapper script that sets all the environment variables PipeWire needs to find everything in the third_party directory (plugins, libspa, etc.) and then runs both PipeWire and the session manager before running the test. As a last step, because the test is started through a script, we had to add a mapping in the infrastructure that makes the script the launcher of our test, and limit all of this to the x86_64 architecture, as that is the only one where PipeWire is available as a CIPD package.
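The real tests obviously go through an actual PipeWire stream, but to give an idea of their shape, here is a minimal, hypothetical GTest sketch: the made-up FakeFrameSource stands in for the PipeWire test stream and produces frames with known parameters that the test then verifies.

#include <gtest/gtest.h>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the PipeWire test stream described above: it
// produces frames with known dimensions and a known fill byte so the
// receiving side can verify exactly what it got.
struct FakeFrameSource {
  int width = 640;
  int height = 480;
  uint8_t fill = 0x42;
  std::vector<uint8_t> NextFrame() const {
    return std::vector<uint8_t>(static_cast<size_t>(width) * height * 4, fill);
  }
};

TEST(PipeWireCaptureTest, ReceivesFrameWithExpectedParameters) {
  FakeFrameSource source;
  const std::vector<uint8_t> frame = source.NextFrame();

  // In the real tests the frame travels through a PipeWire stream; here we
  // only illustrate the "known input, verified output" structure.
  EXPECT_EQ(frame.size(), static_cast<size_t>(source.width * source.height * 4));
  EXPECT_EQ(frame.front(), source.fill);
  EXPECT_EQ(frame.back(), source.fill);
}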

Here are the upstream changes that implement everything mentioned above:

UX improvements in Chromium preview dialog

All these improvements were made by Alexander Cooper. Alex is a Google engineer working on the screen sharing stack in Chromium and WebRTC. I have been working intensively with Alex for the past year and he is my go-to person when it comes to code reviews or anything related to screen sharing in Chromium. Alex made a great set of UX improvements to the Chromium preview dialog. My original implementation always invoked the portal dialog automatically when screen sharing was initiated, which led to one dialog overlapping the other. In the most recent Chromium versions (starting with 107, I think) users are presented with the Chromium preview dialog first, where they can for example pick a web tab, and only once they choose to share a screen are they presented with the portal dialog. You can also now re-request the portal dialog in case you picked the wrong screen to share.

Web Engines Hackfest in A Coruña

I travelled to A Coruña in May to attend the Web Engines Hackfest and meet Alex Cooper from Google there. This was a perfect opportunity for us to meet in person and I'm really grateful for that. For me it was a bit of a step outside my comfort zone, as I went somewhere I didn't know anyone (besides Alex), but as I later found out, everyone there was super friendly and I'm happy that I met some new faces. Alex and I had very productive conversations, and having enough time to talk in person was beneficial for both sides, as we could explain to each other the technical details of our backgrounds and talk about future plans and current issues.

Firefox and WebRTC rebase

Firefox upstream had been behind on our WebRTC changes for about two years. This changed recently with Firefox 106, where they finally rebased their WebRTC copy to M103 (Chromium 103). I also provided them with a list of additional backports to pick up in order to get fixes for some crashes and issues we had fixed since then. Sadly, as I later found out, even though they did the rebase and all the backports, the new screen sharing code is not actually in use; they still keep and use the old code, because they don't have all the dependencies in place, and it has been blocked on that ever since. Hopefully it gets sorted out soon so that Firefox users can also benefit from all the improvements we made over the past two years. This is not an issue for Fedora users, as I created a patch for our Firefox package that enables the new code.

Plans for the future

  • Extend PipeWire code test coverage. Currently we test only the essential parts of the code, but I would like to extend the tests to cover all the kinds of metadata we can use (damage regions, crops, mouse cursor, etc.).
  • Use the portal dialog as the default one. This has some requirements that need to be fulfilled before Chromium can rely on it: we need to show screen/window previews, because currently it is easy to pick the wrong screen (especially when you have two identical monitors), and we need a way to invoke the Chromium dialog so users can still share a web tab.
  • Fix bugs. I'm currently not aware of any issues, and we already fixed a bunch of them this year as people use this more often, but I expect more to appear as Wayland becomes more and more dominant and we finally enable this by default.

DMA-BUF support in WebRTC

It has been almost three years since we landed the initial support for screen sharing on Wayland using PipeWire in the WebRTC project. This enabled screen sharing support in both major Linux browsers. Last year I implemented support for window sharing, added support for PipeWire 0.3, and added support for the DMA-BUF and MemFD buffer types. The problem was that, as it turned out, the DMA-BUF support was not implemented correctly.

The original implementation used mmap() to get the buffer content. This worked correctly on current Intel GPUs, but was terribly slow on, for example, AMD GPUs. The proper solution is to use an OpenGL context to get the content out of the buffer. However, there were already many implementations using mmap(), including WebRTC, and we needed a way for the server and the client to properly communicate that when a client advertises DMA-BUF support, it no longer uses mmap() and goes through an OpenGL context instead.
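To give a rough idea of what the OpenGL path looks like, here is a simplified, hypothetical helper (single plane only, no error handling or cleanup) that imports a DMA-BUF fd as an EGLImage and binds it to a texture instead of mmap()ing it. This is not the actual WebRTC code.

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <cstdint>

// Hypothetical helper: import a single-plane DMA-BUF as an EGLImage and bind
// it to a GL texture instead of mmap()ing the file descriptor. A real
// implementation also handles multi-plane formats, errors and cleanup.
GLuint ImportDmaBufAsTexture(EGLDisplay display, int fd, int width, int height,
                             uint32_t drm_format, uint32_t stride,
                             uint64_t modifier) {
  const EGLAttrib attribs[] = {
      EGL_WIDTH, width,
      EGL_HEIGHT, height,
      EGL_LINUX_DRM_FOURCC_EXT, static_cast<EGLAttrib>(drm_format),
      EGL_DMA_BUF_PLANE0_FD_EXT, fd,
      EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
      EGL_DMA_BUF_PLANE0_PITCH_EXT, static_cast<EGLAttrib>(stride),
      EGL_DMA_BUF_PLANE0_MODIFIER_LO_EXT,
      static_cast<EGLAttrib>(modifier & 0xffffffff),
      EGL_DMA_BUF_PLANE0_MODIFIER_HI_EXT,
      static_cast<EGLAttrib>(modifier >> 32),
      EGL_NONE};
  EGLImage image = eglCreateImage(display, EGL_NO_CONTEXT,
                                  EGL_LINUX_DMA_BUF_EXT, nullptr, attribs);

  GLuint texture = 0;
  glGenTextures(1, &texture);
  glBindTexture(GL_TEXTURE_2D, texture);
  // glEGLImageTargetTexture2DOES comes from GL_OES_EGL_image and has to be
  // resolved at runtime.
  auto bind_image = reinterpret_cast<PFNGLEGLIMAGETARGETTEXTURE2DOESPROC>(
      eglGetProcAddress("glEGLImageTargetTexture2DOES"));
  bind_image(GL_TEXTURE_2D, image);
  // The texture can now be attached to a framebuffer and read back, e.g. with
  // glReadPixels(), to obtain the screen content.
  return texture;
}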

Here are some issues if you want to read about the details:

All of this resulted in a completely different way for the consumer and the producer to communicate in order to use DMA buffers for much faster and smoother screen sharing. Both sides are now required to query the list of supported modifiers and add it as a new stream parameter, flagged as mandatory; the rest of the stream parameters are kept as before, so we can keep using the other buffer types in case the producer doesn't support DMA-BUFs. Once both sides agree, we can query whether the negotiated stream format includes modifiers; based on that we know we can use DMA buffers, which we now properly import through an OpenGL context, while mmap() is kept as a fallback for the MemFd buffer types. The result is faster screen sharing in your web browser.
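As a rough illustration, here is a hedged sketch using PipeWire's SPA pod API (not the exact WebRTC code) of how the consumer side can advertise its supported modifiers in the EnumFormat parameter, flagged as mandatory but not fixated so the producer can pick one. It assumes the modifier list is non-empty.

#include <spa/param/video/format-utils.h>
#include <spa/pod/builder.h>
#include <cstdint>
#include <vector>

// Hedged sketch: build an EnumFormat param that advertises the DMA-BUF
// modifiers we can import. The property is MANDATORY (the producer must pick
// a modifier from the list for this format) and DONT_FIXATE (the final value
// is negotiated later). Formats without this property stay available as a
// fallback for MemFd/SHM buffers.
const struct spa_pod* BuildFormatWithModifiers(
    struct spa_pod_builder* b, const std::vector<uint64_t>& modifiers) {
  struct spa_pod_frame format_frame;
  struct spa_pod_frame modifier_frame;

  spa_pod_builder_push_object(b, &format_frame, SPA_TYPE_OBJECT_Format,
                              SPA_PARAM_EnumFormat);
  spa_pod_builder_add(b,
      SPA_FORMAT_mediaType, SPA_POD_Id(SPA_MEDIA_TYPE_video),
      SPA_FORMAT_mediaSubtype, SPA_POD_Id(SPA_MEDIA_SUBTYPE_raw),
      SPA_FORMAT_VIDEO_format, SPA_POD_Id(SPA_VIDEO_FORMAT_BGRA),
      0);

  spa_pod_builder_prop(b, SPA_FORMAT_VIDEO_modifier,
                       SPA_POD_PROP_FLAG_MANDATORY |
                       SPA_POD_PROP_FLAG_DONT_FIXATE);
  spa_pod_builder_push_choice(b, &modifier_frame, SPA_CHOICE_Enum, 0);
  // The first value is the preferred one, followed by the full list.
  spa_pod_builder_long(b, static_cast<int64_t>(modifiers[0]));
  for (uint64_t modifier : modifiers)
    spa_pod_builder_long(b, static_cast<int64_t>(modifier));
  spa_pod_builder_pop(b, &modifier_frame);

  return static_cast<const struct spa_pod*>(
      spa_pod_builder_pop(b, &format_frame));
}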

Last but not least, I made screen sharing even faster, regardless of the buffer type used. Originally, when we received a buffer from PipeWire, we copied it into a local buffer so we could apply cropping and adjust the position, and only then copied the adjusted content into a DesktopFrame, the object each DesktopCapturer (the class representing a screen sharing implementation) returns so the browser can display it. That means we performed two copy operations per frame. I improved this so we now copy the PipeWire buffer content directly into the desktop frame we return, which is one copy operation less than before (a simplified sketch follows the numbers below). I didn't take exact measurements, but simply running htop and comparing the usage of the top 5 processes while sharing a 4K screen I got:

  • Original result: 66%, 64%, 26%, 23%, 10%
  • Updated result: 41%, 39%, 19%, 17%, 12%
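Here is a simplified, hypothetical sketch of the direct-copy idea; the real WebRTC code works with DesktopFrame objects and also handles cropping, strides and buffer metadata coming from PipeWire.

#include <cstdint>
#include <cstring>

// Minimal sketch of the single-copy approach (names simplified): instead of
// copying the PipeWire buffer into a temporary and then into the frame handed
// to the browser, copy each row straight into the destination frame.
// src_stride/dst_stride are the row sizes of the PipeWire buffer and the
// destination frame; width/height describe the (already cropped) area.
void CopyFrameDirectly(const uint8_t* src, int src_stride,
                       uint8_t* dst, int dst_stride,
                       int width, int height, int bytes_per_pixel = 4) {
  const int row_bytes = width * bytes_per_pixel;
  for (int y = 0; y < height; ++y) {
    std::memcpy(dst + y * dst_stride, src + y * src_stride, row_bytes);
  }
}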

I also have some other improvements on my TODO list, all of which should bring additional optimizations. I will keep you informed once I have news to share.

Both changes have been merged into WebRTC, which means they should be in Chrome/Chromium 96 (released in November 2021).

WebRTC/Chromium updates in 2020

In 2019, I made my first contribution to WebRTC. It was all about screen sharing support in Linux Wayland sessions, using xdg-desktop-portal and PipeWire. Back then things were quite simple: we only had PipeWire 0.2 and all portal backends supported only screen sharing (no window sharing). While this was relatively easy, it was not ideal, as each screen sharing request involved two portal dialogs to get the screen content onto the web page. For me it was a big success, because I made a significant contribution to a large project used by many people and by all modern web browsers.

At the beginning of 2020, the year everyone would like to erase from their memory, we got PipeWire 0.3 (with a slightly different API), and later in the year xdg-desktop-portal-gtk and xdg-desktop-portal-kde gained the ability to share application windows. Support for all of this was missing in WebRTC, because none of it existed when the original code was written. I wanted to tackle everything at once: bring window sharing support and get rid of the portal "dialog hell", which had become even worse with the new window sharing capabilities in the portal backends.

This is what the situation looked like. With each request to share a screen, you got the preview dialog from Chromium. This dialog consists of three pages: one for screen sharing, which makes one portal request, a second one for window sharing, which makes another portal request, and a last one that just lets you share a web page you have open. You had to confirm both portal dialogs, then confirm the Chromium dialog, and finally you got one more portal dialog (ouch) to get the screen content onto the web page itself.

I had a solution: I gave every portal call an ID and shared this ID (and the portal call behind it) in Chromium between both pages of the preview dialog and the request made for the web page itself. With this we only had ONE portal dialog. It seemed like a perfect solution. I started working on it at the beginning of the year and exchanged many emails with people from the Chromium UX team, because I also wanted to make some minor UI changes to the preview dialog. Unfortunately, those were rejected for the sake of consistency across platforms. That was not a big deal, so I submitted my changes for review keeping the UI as it was, just adding all the necessary bits to Chromium and WebRTC to make it work.

I wish I could say things went smoothly from then on, but the opposite is true. It took a while to get everything reviewed, which is probably no surprise given how weird this year has been, with many people working from home in less than ideal conditions. Anyway, a few months passed and I ended up rewriting my changes many times, not even counting the hours I spent on it. This all resulted in me becoming obsessed with the change; it mattered so much to me to get it merged. I was constantly thinking about how to make it better and often fixing issues in the evening (as the reviewers were mostly US based) instead of spending time with my family. Even wasting time with my beloved PlayStation would have been a better use of those evenings. This had a really negative impact on my mental health, and I realized it had to stop. I simply gave up, because I couldn't continue that way and needed a break. I abandoned both changes (WebRTC and Chromium) and decided to just pick the parts I would be able to successfully upstream. I probably made my change too ambitious and complicated, or maybe Chromium just isn't ready for this kind of change, because some tweaks were specific to my use case. It's also hard to say I wish the upstream developers had helped me more, because there is so much to understand around Wayland, portals, PipeWire and the way it all works together.

Anyway, with a fresh start and without the pressure, after giving up on the big change I picked the most important pieces and submitted them separately. I was surprised how smoothly this went and how quickly those changes were upstreamed, simply because they were small, understandable and easy to review. I haven't given up on fixing the "dialog hell" completely; I have some other ideas, but next time I will submit them step by step and keep some distance, and my free time.

And what changes can you expect in the upcoming Chromium release in 2021?

Support for PipeWire 0.3

You can now build Chromium/WebRTC against both PipeWire 0.2 and PipeWire 0.3. There is a new "rtc_pipewire_version" option you can pass to your build configuration.

Window sharing support

There is probably no description needed: you will be able to share application windows in case you don't want to share the whole screen.

Support for DMA-BUF and MemFD buffer types

This should allow faster transfer of your screen content from your Wayland compositor, through PipeWire to your browser.

Fewer portal dialogs involved

If you look back at the screenshot I posted above, you can see there are two portal dialogs opened just for the Chromium preview dialog. I at least managed to reduce this to a single portal dialog, by removing the page for window sharing, because the screen share request already handles both screens and windows.

I think you can expect the above mentioned changes in Chromium 89, and I hope you will appreciate at least some of these improvements, even though I didn't deliver everything I wanted to. Also, thanks to Martin Stránský from our Firefox team, you can expect all of these changes to be part of Firefox as well.

Happy holidays and see you in a better year.

Screen sharing support in WebRTC for Wayland sessions

Last time I wrote about the possibility of sharing a screen in a Plasma Wayland session, using xdg-desktop-portal and our xdg-desktop-portal-kde backend implementation. The problem was that at the time there was no application implementing support for it, leaving my previous effort useless so far. Luckily, this should change pretty soon. Together with my Red Hat colleagues Tomáš Popela and Eike Rathke, I have been working for the past few weeks on bringing screen sharing on Wayland to web browsers. All modern browsers use WebRTC for audio-video communication, including screen sharing, meaning that in a perfect world just one implementation would be needed, which is not exactly the case. Let's go into the details first.

Each platform (Windows, Mac and X11) in WebRTC reimplements an abstract class called DesktopCapturer, which defines the API applications use for screen sharing. For our Wayland support we started a new implementation that uses PipeWire as the core technology for screen content delivery; for the communication part, to request which screen to share and to obtain PipeWire stream information, we use xdg-desktop-portal, which provides a simple API for exactly this. The advantage of using xdg-desktop-portal is that it also works in sandboxes (Flatpak and Snap), and that both Plasma (through the xdg-desktop-portal-kde backend) and GNOME (through the xdg-desktop-portal-gtk backend) support it using the same API. With our desktop capturer implementation, WebRTC starts communicating with xdg-desktop-portal: we set up a session associated with our request, we tell xdg-desktop-portal that we are interested in screen sharing, the portal asks your backend implementation to show a dialog to select a screen to share, and once this is settled we request to start screen sharing. The backend sets up a PipeWire stream and sends us back the file descriptor of a PipeWire remote, which we can open and connect to. Once we are connected, we finally start receiving buffers with screen data from PipeWire and handing them to applications. This all sounds simple, as if the work is basically done, but unfortunately this is not an ideal world.
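To make the sequence a bit more concrete, here is a heavily simplified, hypothetical sketch of the portal side using GDBus. The `screencast_portal` proxy is assumed to point at org.freedesktop.portal.ScreenCast on /org/freedesktop/portal/desktop; the real flow is asynchronous, and the session handle and stream information arrive via each Request's Response signal, which is only hinted at in the comments.

#include <gio/gio.h>
#include <gio/gunixfdlist.h>

// Heavily simplified, hypothetical sketch of the portal sequence described
// above. Real code is asynchronous: each method returns a Request object path
// and the actual results (session handle, stream node id) are delivered via
// that Request's Response signal, which is omitted here.
int OpenScreenCastRemote(GDBusProxy* screencast_portal,
                         const char* session_handle) {
  GError* error = nullptr;

  // 1) CreateSession()        -> session handle (via the Response signal)
  // 2) SelectSources(session) -> the backend shows its screen picker dialog
  // 3) Start(session)         -> PipeWire node id of the selected screen
  // 4) OpenPipeWireRemote(session) -> fd of the PipeWire remote we connect to

  GVariantBuilder options;
  g_variant_builder_init(&options, G_VARIANT_TYPE_VARDICT);
  GUnixFDList* fds = nullptr;
  GVariant* result = g_dbus_proxy_call_with_unix_fd_list_sync(
      screencast_portal, "OpenPipeWireRemote",
      g_variant_new("(oa{sv})", session_handle, &options),
      G_DBUS_CALL_FLAGS_NONE, -1, nullptr, &fds, nullptr, &error);
  if (!result)
    return -1;

  gint32 handle = 0;
  g_variant_get(result, "(h)", &handle);
  int pipewire_fd = g_unix_fd_list_get(fds, handle, &error);
  g_variant_unref(result);
  return pipewire_fd;
}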

We found out that Firefox, for example, uses an older copy of WebRTC, so while working on WebRTC trunk to have our work ready for upstream, we had to slightly adapt our changes for Firefox's older copy in order to test them. There is also one thing we haven't figured out yet: Firefox has its own dialog to select a screen (used with the other capturer implementations) and we were not able to avoid displaying it when our capturer is used. Another thing is Chromium, which seems to use WebRTC differently from Firefox, as Chromium uses plugins, so that is also something we have to figure out. There are probably still plenty of other things to do before all of this can land upstream, but we have made great progress so far. We even had a couple of BlueJeans calls where both sides could share their screens, one running a GNOME Wayland session and me running a Plasma Wayland session.

And for those who like adventures, I have set up a Fedora COPR repository (currently building) with a Firefox containing our changes so you can test it yourself; we will need testers sooner or later anyway. You just need to make sure that PipeWire is running, otherwise this won't work. The portals (both xdg-desktop-portal and the backend implementation) should be started automatically.

Screenshot of Firefox in action:
