Announcing expanded support for Asus, Framework, and Surface Devices

Over the past few weeks, contributors have reworked parts of our build system to support more hardware. Reusing cloud patterns lets Universal Blue offer pretty awesome Nvidia support that's baked right into the image. That used to be a special case, something we did only for Nvidia; we've now productized the pattern into something we can repeat quickly.

Devices like certain Asus laptops sometimes need special tools, and those laptops are supported by projects like asus-linux, which provides not only the tools but also the patched kernels you need. Similarly, the linux-surface project provides special kernels for Microsoft Surface devices, along with the tools and repositories needed to get Fedora working great on them. But you have to set all of that up by hand. Ewww.

Bazzite on a Surface

One of the strengths of the cloud-native pattern is our ability to start from existing Fedora base images, automate the post-installation steps as new composable layers, and mix and match what we need. Both Asus and Surface devices can also come with Nvidia cards, which means we can use the existing Nvidia images to compose Nvidia-enabled images on top of the good stuff from asus-linux and linux-surface! These are now ready to test!
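To make the layering idea concrete, here's a minimal sketch of what one of these composed images looks like as a Containerfile. The image name, Fedora version, and repo URL below are illustrative assumptions, not the exact contents of our build files:

```dockerfile
# Illustrative sketch only: image tag, repo URL, and package names are hypothetical.
# Start from an existing Nvidia-enabled Universal Blue base image...
FROM ghcr.io/ublue-os/base-nvidia:example-tag

# ...then layer in the linux-surface repository and its patched kernel on top.
RUN curl -Ls https://pkg.surfacelinux.com/fedora/linux-surface.repo \
      -o /etc/yum.repos.d/linux-surface.repo && \
    rpm-ostree override replace --experimental \
      --from repo=linux-surface \
      kernel kernel-core kernel-modules && \
    rpm-ostree install iptsd libwacom-surface
```

Because each device's quirks live in one layer like this, supporting a new device is mostly a matter of swapping the base image and the package set.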

The Framework Experiment

The Framework 13 laptop was the prototype for this approach. It was initially available only as a custom spin of Bluefin to gauge interest. We've since reworked that repo to line up with the rest, so instead of being Bluefin-specific you can now enjoy a wider variety of images on your Framework.

If you're wondering whether we plan to support the Framework 16 and the new AMD Framework 13, you're in luck: they're just another set of parameters we can add. We're hoping to get feedback from folks who have this hardware so we can enable it!


Why do it this way?

Both asus-linux and linux-surface have existed for a long time, so what's the benefit of doing it this way? We list the benefits in Why would I use these?, but these images are a real-world example of how the pattern works.

Both of these projects have done the hard work and provided the community with the right bits. All we do is automate that, running it every night and on every merge. If it works, it ends up on your machine; if it doesn't, your machine keeps working. The breakage stays in our GitHub repository and never reaches you. Instead we lean on one of the strongest advantages of open source: it's easier to fix things when you do it together. When the nerds figure it out, the builds pass, and THEN you get the image on your machine. GitOps for the win!
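The nightly plus on-merge cadence described above boils down to a small CI trigger. This is a hedged sketch of what such a GitHub Actions workflow could look like; the image name and build command are placeholders, not our actual workflow:

```yaml
# Hypothetical sketch: rebuild the image on every merge to main and every night.
on:
  push:
    branches: [main]
  schedule:
    - cron: "0 6 * * *" # nightly rebuild picks up upstream kernel/package updates

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # If this step fails, nothing is pushed and users keep their working image.
      - name: Build and push the image
        run: podman build -t ghcr.io/example-org/surface-nvidia:latest .
```

The key property is that a failed build simply publishes nothing, which is what makes the "if it doesn't work, your machine still works" guarantee cheap to provide.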

The one gotcha is that we still need to do some post-installation steps manually. You might have done this as part of installation, where we ask you to run `just framework-13` or `just nvidia-set-kargs` to put the right kernel parameters in place. We expect this limitation to go away someday and hope to declare kernel arguments directly in the Containerfile.
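Under the hood, a recipe like that is just a thin wrapper around `rpm-ostree kargs`. This is an illustrative sketch of what such a just recipe might contain; the recipe name matches the command above, but the specific kernel arguments are assumptions for the example:

```
# Hypothetical justfile recipe: append kernel arguments if they aren't set yet.
nvidia-set-kargs:
    rpm-ostree kargs \
      --append-if-missing=rd.driver.blacklist=nouveau \
      --append-if-missing=modprobe.blacklist=nouveau \
      --append-if-missing=nvidia-drm.modeset=1
```

Since `rpm-ostree kargs` mutates the deployed system rather than the image, this is exactly the step that can't yet live in the Containerfile, which is why it remains a manual post-install command for now.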

Testers Wanted!

The number of images we're generating keeps growing, so give these a spin and let us know how you get on!