A love song for Linux gamers with old GPUs (EOY 2025)
AMD GPUs are famous for working very well on Linux. But what about the very first GCN GPUs? Do they work as well as the new ones? In this post, I’m going to summarize how well these old GPUs are supported and what I’ve been doing to improve the situation.
This story is about the first two generations of GCN: Southern Islands (aka. SI, GCN1, GFX6) and Sea Islands (aka. CIK, GCN2, GFX7).
Working on old GPUs
While AMD GPUs generally have a good reputation on Linux, these old GCN graphics cards have been a sore spot for as long as I’ve been working on the driver stack.
It occurred to me that resolving some of the long-standing issues on these old GPUs might be a great way to get started on the amdgpu kernel driver, and would help improve the default user experience for Linux users on these GPUs. I figured it would give me a good base understanding, and that later I could also start contributing code and bug fixes for newer GPUs.
Where I started
The RADV team has supported SI and CIK GPUs for a long time; support was already there even before I joined the team in mid-2019. Daniel added ACO support for GFX7 in November 2019, and Samuel added ACO support for GFX6 in January 2020. More recently, Martin added a Tahiti (GFX6) and a Hawaii (GFX7) GPU to the Mesa CI, which run post-merge “nightly” jobs, so we can catch regressions and test our work on these GPUs quite quickly.
The kernel driver situation was less fortunate.
On the kernel side, amdgpu (the newer kernel driver) has supported CIK since June 2015 and SI since August 2016. DC (the new display driver) has supported CIK since September 2017 (i.e. since DC’s beginning), and SI support was added in July 2020 by Mauro. However, the old radeon driver remained the default. Unfortunately, radeon doesn’t support Vulkan, so in the default configuration, users couldn’t play most games or benefit from any of the Linux gaming-related work we’ve been doing for the last 10 years.
In order to get working Vulkan support on SI and CIK, we needed to use the following kernel params:
radeon.si_support=0 radeon.cik_support=0 amdgpu.si_support=1 amdgpu.cik_support=1
Then, you could boot with amdgpu and enjoy a semblance of a good user experience ― until the GPU crashed or hung, or until you tried to use some functionality missing from amdgpu, or until you plugged in a display that the display driver couldn’t handle.
It was… not the best user experience.
Where to go from there?
The first question that came to mind was: why wasn’t amdgpu the default kernel driver for these GPUs? Since the “experimental” support had been there for the better part of a decade, we had thought the kernel devs would eventually just enable amdgpu by default, but that never happened. At XDC 2024 I met Alex Deucher, the lead developer of amdgpu, and asked him what was missing. Alex explained that the main reason the default wasn’t switched was to avoid regressions for users who rely on functionality not supported by amdgpu:
- Display features: analog connectors in DC (or DP/HDMI audio support in non-DC)
- VCE1 for video coding on SI
It doesn’t seem like much, does it? How hard can it be?
Display features
On a 2025 summer afternoon…
I messaged Alex Deucher to get some advice on where to start. Alex was very considerate and helped me get a good understanding of how the code is organized, how the parts fit together and where I should start reading. Harry Wentland also helped a lot with making a plan for how to fit analog connectors into DC. Then I plugged my monitors into my Raphael iGPU to use it as the primary GPU, added an old Oland card as a secondary GPU, and started hacking.
Focus
For the display, I decided that the best way forward was to add what was missing from DC for these GPUs and then use DC by default. That way, we can eventually get rid of the legacy display code (which was always meant as a temporary solution until DC landed).
Additionally, I decided to focus on dedicated GPUs because these are the most useful for gaming and are easy to test using a desktop computer. There is still work left to do for CIK APUs.
Analog connector support in DC
Analog connectors were actually quite easy to deal with once I understood the structure of the DC (display core) codebase, and I could use the legacy display code as a reference. The DAC (digital-to-analog converter) is actually programmed by the VBIOS; the driver just needs to call the VBIOS to tell it what to do. Easier said than done, but not too hard.
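As a rough illustration of that pattern, here is a minimal sketch in plain C. All the names here are made up (the real parameter layouts, helpers and table indices live in amdgpu’s atombios code); the point is only the shape of the interaction: fill a parameter block, then ask the VBIOS to execute the matching command table.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical parameter block; the real layout is defined by the VBIOS
 * and mirrored in the driver's atombios headers. */
struct dac_encoder_control_args {
	uint16_t pixel_clock_10khz; /* pixel clock in units of 10 kHz */
	uint8_t  dac_standard;      /* placeholder: which analog standard */
	uint8_t  action;            /* placeholder: enable or disable */
};

/* Stub standing in for the driver helper that runs a VBIOS command table. */
static int vbios_exec_table(int table_index, void *args, size_t size)
{
	printf("VBIOS: executing table %d with %zu bytes of args\n",
	       table_index, size);
	return 0;
}

/* Enable the DAC for a mode: fill in the args and let the VBIOS do the
 * actual register programming. */
static int dac_enable(unsigned int pixel_clock_khz)
{
	struct dac_encoder_control_args args;

	memset(&args, 0, sizeof(args));
	args.pixel_clock_10khz = pixel_clock_khz / 10;
	args.dac_standard = 1; /* placeholder value for "VGA" */
	args.action = 1;       /* placeholder value for "enable" */

	/* The table index is a placeholder; the real driver looks it up. */
	return vbios_exec_table(42, &args, sizeof(args));
}

int main(void)
{
	return dac_enable(148500); /* a 1080p60 pixel clock */
}
```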
It also turned out that some chips that already defaulted to DC (eg. Tonga, Hawaii) also have analog connectors, which apparently just didn’t work on Linux by default. I managed to submit the first version of this in July. Then I was sidetracked with a lot of other issues, so I submitted the second version of the series in September, which then got merged.
Going shopping
It is incredibly difficult to debug issues when you don’t have the hardware to reproduce them yourself. Some developers have a talent for writing patches that fix issues they’ve never seen with their own eyes, but I feel I still have a long way to go to be that good. It was pretty clear from the beginning that the only way to make sure my work actually works on all SI/CIK GPUs is to test all of them myself.
So, I went ahead and acquired at least one of each SI and CIK chip. I got most of them from used-hardware ad sites, and Leonardo Frassetto sent me a few as well.
Fixing DC support on SI (DCE6)
After I got the analog connector working using the old GPUs as secondary GPUs, I thought it was time to test how well they work as primary GPUs. You know, the way most actual users would use them.
So I disabled the iGPU and booted my computer with each dGPU in turn, with amdgpu.dc=1, to see what would happen.
This is where things started going crazy…
- Tahiti (R9 280X) booted into “no signal”, absolutely dead
- Oland (R7 250) had heavy flickering and hung very quickly
- Oland (Radeon 520) booted into “unsupported signal” with DC and massive flickering with non-DC
- Pitcairn (R9 270X) had some artifacts
- Cape Verde (HD 7770) I didn’t even plug it in at this point…
- Hainan fortunately doesn’t have DCE (display controller engine) so that wasn’t a problem
The way to debug these problems is the following:
- Boot with amdgpu.dc=0, dump all DCE registers using umr: umr -r oland.dce600..* > oland_nodc_good.txt
- Boot with amdgpu.dc=1, dump all DCE registers using umr: umr -r oland.dce600..* > oland_dc_bad.txt
- Compare the two DCE register dumps using a diff viewer, eg. Meld. Try to find which register differences are responsible for the bad behaviour.
- Use umr (either the GUI or CLI) to try to change the registers in real time, poke at it until the issue is gone.
- Wait until headache is gone.
- Read the code that sets the problematic registers and develop an actual fix.
I decided to fix the bugs before adding new features. I sent a few patch series to address a bunch of display issues mainly with SI (DCE6):
- Fixed broken PLL programming and some mistakes
- Fixed DC “overclocking” the display clock and a few other issues ― many years ago, someone fixed an issue on Polaris by unconditionally raising the display clock by 15%, but unfortunately this was also applied to older GPUs; on top of that, the display clock was already being set higher than the maximum
- Fixed DVI-D/HDMI adapters ― these would just give a black screen when I plugged in a 4K monitor
- While at it, I also fixed them in the legacy code
- Fixed a freeze caused by relying on a non-existent interrupt ― it seems that DCE6 is not capable of VRR, so we just shouldn’t try to enable it or rely on interrupts that don’t exist on this HW
- Fixed another black screen by rejecting too-high pixel clocks ― technically, DP supports the bandwidth required by 4K 120Hz using 6-bit color with YUV420 on SI/CIK/VI, so DC would happily advertise this mode, but these GPUs don’t actually support a display clock high enough for 4K 120Hz (see the sketch after this list)
- Fixed an issue with the DCE6 scaler not being properly disabled ― apparently the VBIOS sets up the scaler automatically on some GPUs, which needs to be disabled by the kernel, otherwise you get weird artifacts
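Here is that sketch: a minimal, hedged illustration of the kind of mode validation involved. The structures, the simplified display clock / pixel clock relationship, and all the numbers are placeholders of mine, not the real DC code or the real DCE6 limits.

```c
#include <stdbool.h>
#include <stdio.h>

/* A display mode reduced to what matters for this check. */
struct mode {
	unsigned int pixel_clock_khz; /* raw pixel clock of the mode */
	bool yuv420;                  /* YUV420 halves the effective rate */
};

/* Reject modes whose effective pixel clock exceeds what the GPU's display
 * clock can actually drive. The real relationship between display clock
 * and pixel clock is more involved; this is deliberately simplified. */
static bool mode_supported(const struct mode *m, unsigned int max_disp_clk_khz)
{
	unsigned int effective = m->yuv420 ? m->pixel_clock_khz / 2
					   : m->pixel_clock_khz;
	return effective <= max_disp_clk_khz;
}

int main(void)
{
	/* Roughly a 4K 120Hz mode; the clock limit is a placeholder. */
	struct mode m = { .pixel_clock_khz = 1188000, .yuv420 = true };

	printf("4K120 YUV420 supported: %s\n",
	       mode_supported(&m, 400000) ? "yes" : "no");
	return 0;
}
```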
DisplayPort/HDMI audio support on SI (DCE6)
I noticed that HDMI audio worked fine on all GPUs with DC (as expected), but DP audio didn’t (which was unexpected). Strangely, it did work when both DP and HDMI were plugged in… After consulting with Alex and doing some trial and error, it turned out that this was just due to a clock frequency being set the wrong way: Fix DP audio DTO1 clock source on DCE6.
In order to figure out the correct frequencies, I wrote a script that set the frequency using umr and then played a sound. I just sat back and let the script run until I heard the sound. Then it was just a matter of figuring out why that frequency was the correct one.
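The script itself was throwaway, but in spirit it looked something like this reconstruction. The register path and frequency range are placeholders I invented, and the exact umr write syntax should be double-checked against umr --help:

```c
/* A rough reconstruction of the sweep, not the actual script: write a
 * candidate frequency with umr, play a test tone, repeat. Stop the loop
 * when you finally hear something. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Placeholder frequency range and step. */
	for (unsigned int freq = 2000; freq <= 3000; freq += 10) {
		char cmd[128];

		/* "oland.dce600.mmAUDIO_DTO" is a made-up register path. */
		snprintf(cmd, sizeof(cmd),
			 "umr -w oland.dce600.mmAUDIO_DTO %u", freq);
		printf("trying %u\n", freq);
		if (system(cmd) != 0)
			break;

		/* Any short test sound works here. */
		system("aplay /usr/share/sounds/alsa/Front_Center.wav");
	}
	return 0;
}
```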
A small fun fact: it turns out that DP audio on Tahiti didn’t work on any Linux driver before. Now it works with DC.
Poweeeeer
The DCE (Display Controller Engine), just like other parts of the GPU, has its own power requirements and needs certain clocks, voltages, etc. It is the responsibility of the power management code to make sure DCE gets the power it needs. Unfortunately, DC didn’t talk to the legacy power management code. Even more unfortunately, the power management code itself was buggy, so that’s what I started with.
- SI power management fixes ― contains some fixes to get Tahiti to boot with DC, also it turns out that the SMC (system management controller) needs a longer timeout to complete some tasks
- More SI power management and PLL fixes ― most importantly this disables ASPM on SI, which caused “random hangs”
- Finally, this series hooks up SI to the legacy DPM code ― this is required because the power management code needs to know how many and what kind of displays are connected
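To give an idea of why this hookup matters, here is a tiny hedged sketch of the kind of information the display code has to hand over; the structure and function names are invented stand-ins for the real amdgpu DPM interface:

```c
#include <stdbool.h>
#include <stdio.h>

/* Invented summary of the display configuration; the real driver tracks
 * more (clocks, watermarks, etc.), but the idea is the same: DPM cannot
 * pick safe power states without knowing what displays are lit. */
struct display_config {
	unsigned int display_count;
	bool has_analog;                  /* e.g. a VGA monitor on the DAC */
	unsigned int max_pixel_clock_khz; /* fastest mode currently active */
};

/* Stub for "hand the current display configuration to DPM". */
static void dpm_update_display_config(const struct display_config *cfg)
{
	printf("DPM: %u display(s), analog=%d, max pixel clock %u kHz\n",
	       cfg->display_count, cfg->has_analog, cfg->max_pixel_clock_khz);
}

int main(void)
{
	struct display_config cfg = {
		.display_count = 2,
		.has_analog = true,
		.max_pixel_clock_khz = 148500,
	};

	dpm_update_display_config(&cfg);
	return 0;
}
```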
After I was mostly done with SI, I also fixed an issue with CIK, where the shutdown temperature was incorrectly reported.
VCE1 video encoding on SI
Video encoding is usually an afterthought ― not something most users think about unless they’re interested in streaming or video transcoding. It was definitely an afterthought for the hardware designers of SI, which has the first-generation VCE (video coding engine) that only supports H.264 and only up to 2048 x 1152. However, the old radeon kernel driver supports this engine, so for anyone relying on this functionality, switching to amdgpu would be a regression. So we need to support it. There was already some work by Alexandre Demers to support VCE1, but that work had stalled due to issues caused by the firmware validation mechanism.
In order to switch SI to amdgpu by default, I needed to deal with VCE1. So I started a conversation with Christian König (amdgpu expert) to identify what the problem actually was, and with Alexandre to see how far along his work was.
- It turns out that due to some HW/FW limitations, the firmware (VCPU BO) needs to be mapped at a low 32-bit address.
- It also needs to be in VRAM for optimal performance (otherwise it would incur too many roundtrips to system RAM).
- However, there was no way for amdgpu to place something in VRAM and map it in the low 32-bit address space. (Note, it can actually do this for the address space of userspace apps, just not inside the kernel itself.)
Christian helped me a lot with understanding how the memory controller and the page table work.
After I got over the headache, I came up with this idea:
- Let amdgpu place the VCPU BO in VRAM
- Map the GART (graphics address remapping table) in the low 32-bit address space (instead of using best fit)
- Insert a few page table entries in the GART which would practically map the VCPU BO into the low 32-bit address space
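Here is a minimal sketch of that idea, with invented structures standing in for the real amdgpu GART code: the GART aperture lives below 4 GiB, and a few of its page table entries are pointed at the VCPU BO’s pages in VRAM, so the firmware sees a 32-bit address while the data never leaves VRAM.

```c
#include <stdint.h>
#include <stdio.h>

#define GPU_PAGE_SIZE 4096ull

/* Placeholder PTE flag; the real bit layout is hardware-specific. */
#define PTE_VALID 1ull

/* Alias a VRAM buffer (here: the VCE firmware / VCPU BO) into the GART
 * aperture by writing page table entries that point at VRAM instead of
 * system RAM. Since the aperture itself lives below 4 GiB, the firmware
 * can address the BO with 32 bits, with no roundtrips to system RAM. */
static void map_vram_bo_into_gart(uint64_t *gart_ptes, uint64_t gart_offset,
				  uint64_t vram_addr, uint64_t size)
{
	for (uint64_t off = 0; off < size; off += GPU_PAGE_SIZE) {
		uint64_t idx = (gart_offset + off) / GPU_PAGE_SIZE;

		gart_ptes[idx] = ((vram_addr + off) & ~(GPU_PAGE_SIZE - 1))
				 | PTE_VALID;
	}
}

int main(void)
{
	uint64_t gart_ptes[16] = { 0 };

	/* Pretend the VCPU BO was placed at this VRAM address. */
	map_vram_bo_into_gart(gart_ptes, 0, 0x800000ull, 4 * GPU_PAGE_SIZE);

	for (int i = 0; i < 4; i++)
		printf("pte[%d] = 0x%llx\n", i,
		       (unsigned long long)gart_ptes[i]);
	return 0;
}
```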
With that out of the way, the rest of the work on VCE1 was pretty straightforward. I could use Alexandre’s research, as well as the VCE2 code from amdgpu and the VCE1 code from radeon as a reference. Finally, a few reviews and three revisions later, the VCE1 series was accepted.
Final thoughts
Who is this for?
In the current economic climate, I expect that people are going to use GPUs for much longer and replace them less often. And when an old GPU is replaced, it doesn’t die; it goes to somebody who upgrades an even older GPU. Eventually it will reach somebody who can’t afford a better one. There are some efforts to use Linux to keep old computers alive, for example this one. My goal with this work is to make Linux gaming a good experience for those who use old GPUs, too.
Other than that, I also did it for myself. Not because I want to run old GPUs myself, but because it has been a great learning experience to get into the amdgpu kernel driver.
Why amdgpu? Why DC?
The open source community ― including AMD themselves, as well as other entities like Valve, Igalia, Red Hat, etc. ― has invested a lot of time and effort into amdgpu and DC, which now support many generations of AMD GPUs: GCN1-5, RDNA1-4, as well as CDNA. In fact, amdgpu now supports more GPU generations than what came before, and it looks like it will support many generations of future GPUs.
By making amdgpu work well with SI and CIK, we ensure that these GPUs remain competently supported for the foreseeable future.
By switching SI and CIK to use DC by default, we enable display features like atomic modesetting, VRR, HDR, etc. and this also allows the amdgpu maintainers to eventually move on from the legacy display code without losing functionality.
What is left to do?
Now that amdgpu is at feature parity with radeon on old GPUs, we switched the default to amdgpu on SI and CIK dedicated GPUs. It’s time to start thinking about what else is left to do.
- Add support for DRM format modifiers for all SI/CIK/VI/Polaris GPUs. This would be a huge step forward for the Vulkan ecosystem: it would enable purely Vulkan-based compositors, Zink, and other features.
- Add support for TRAVIS and NUTMEG display bridges, so that we can also switch to amdgpu by default for CIK APUs. I couldn’t find the hardware for this work (mainly looking for Kaveri APUs), so if you have it and want to help, please reach out. Your dmesg log will mention if the APU uses TRAVIS or NUTMEG.
- Refactor SI and KV power management so that we can retire the legacy power management code, which would further ease the maintenance burden of these GPUs.
- Eventually retire the non-DC legacy display code from amdgpu to ease the maintenance burden.
- Deal with a few lingering bugs, such as the power limit on the Radeon 430 and a black screen with the analog connector on the Radeon HD 7790; also reenable compute queues on SI, mitigate VM faults on SI/CIK, etc.
- Verify sparse mapping (PRT) support. I already wrote a kernel fix and a Mesa MR for enabling it.
- Implement transfer queue support for old GPUs in RADV.
What have I learned from all this?
It isn’t that scary
Kernel development is not as scary as it looks. It is a different technical challenge than what I was used to, but not in a bad way. I just needed to figure out a good workflow: how to configure a code editor, and a good way to test my work without rebuilding everything all the time.
Maintainers are friendly
AMD engineers have been very friendly and helpful to me all the way. Although there are a lot of memes and articles on the internet about the attitude and rude/toxic messages of some kernel developers, I didn’t see any of that in amdgpu, at least.
My approach was to start talking to the maintainers (who would eventually review my patches) before I wrote a single line of code, to find out what a good solution would look like to them and how to get my work accepted. Communicating with the maintainers saved a lot of time and made the work faster, more pleasant and more collaborative.
Development latency
Sadly, there is a huge latency between a Linux kernel developer working on something and the work reaching end users. Even if the patches are accepted quickly, it can take 3-6 months until users can actually use them.
- In Mesa, we can merge any feature into the next Mesa release up to the branch point, and afterwards we backport bug fixes to that release. Simple and efficient.
- In the Linux kernel, there is a deadline for merging new features into the next release (and it’s not clearly communicated when that is). Bug fixes may be backported to previous releases at any time, but upstream maintainers aren’t involved in that.
In hindsight, if I had focused on finishing the analog support and VCE1 first (instead of fixing all the bugs I found), my work would have ended up in Linux 6.18 (including the bug fixes, as there is no deadline for those). Due to how I prioritized bug fixing, the features I developed are only included in Linux 6.19, so that will be the version where SI and CIK switch to amdgpu by default.
XDC 2025
I presented a lightning talk on this topic at XDC 2025, where I talked about the state of SI and CIK support as of September 2025. You can find the slide deck here and the video here.
Acknowledgements
I’d like to say a big thank you to all of these people. All of the work I mentioned in this post would not have been possible without them.
- Alex Deucher, Christian König (amdgpu devs)
- Harry Wentland, Rodrigo Siqueira (DC devs)
- Marek Olsák, Pierre-Eric Pelloux-Prayer (radeonsi devs)
- Bas Nieuwenhuizen, Samuel Pitoiset (radv devs)
- Martin Roukala, né Peres (CI expert)
- Tom St Denis (umr dev)
- Mauro Rossi (DCE6 in DC)
- Alexandre Demers (VCE1 research)
- Leonardo Frassetto (HW donation)
- Roman Elshin and others (testing)
- Pierre-Loup Griffais (Valve)
Which graphics cards are affected exactly?
When in doubt, consult Wikipedia.
GFX6 aka. GCN1 - Southern Islands (SI) dedicated GPUs: amdgpu is now the default kernel driver as of Linux 6.19. The DC display driver is now the default and is actually usable on these GPUs. DC now supports analog connectors, power management is less buggy, and video encoding is now supported by amdgpu.
- Tahiti
- Radeon HD 7870 XT, 7950, 7970, 7990, 8950, 8970, 8990
- Radeon R9 280, 280X
- FirePro W8000, W9000, D500, D700, S9000, S9050, S10000
- Radeon Sky 700, 900
- Pitcairn
- Radeon HD 7850, 7870, 7970M, 8870, 8970M
- Radeon R9 265, 270, 270X, 370, 370X, M290X, M390
- FirePro W5000, W7000, D300, R5000, S7000
- Cape Verde
- Radeon HD 7730, 7750, 7770, 8730, 8760
- Radeon R7 250E, 250X, 350, 450
- FirePro W600, W4100, M4000, M6000
- Oland
- Radeon HD 8570, 8670
- Radeon R5 240, 250, 330, 340, 350, 430, 520, 610
- FirePro W2100
- various mobile GPUs
- Hainan
- various mobile GPUs
GFX7 aka. GCN2 - Sea Islands (CIK) dedicated GPUs: amdgpu is now the default kernel driver as of Linux 6.19. The DC display driver is now the default for Bonaire (was already the case for Hawaii). DC now supports analog connectors. Minor bug fixes.
- Hawaii
- Radeon R9 290, 290X, 295X2, 390, 390X
- FirePro W8100, W9100, S9100, S9150, S9170
- Bonaire
- Radeon HD 7790/8870
- Radeon R7 260/360/450
- Radeon RX 455, FirePro W5100, etc.
- various mobile GPUs
GFX8 aka. GCN3 - Volcanic Islands (VI) dedicated GPUs: DC now supports analog connectors. (Note that amdgpu and DC have been supported on these GPUs since release.)
- Tonga
- Radeon R9 285, 380, 380X
- (other chips of this family are not affected by the work in this post)
Comments
The blog doesn't have comments, but feel free to reach out to me on IRC (Venemo on OFTC) or Discord (sunrise_sky) to discuss.