Ah, too bad the GPU driver is still closed source. Hopefully they do indeed release the open-source driver they've apparently promised... I kinda expected most RISC-V boards would be open hardware, but maybe I was wrong to assume that, heh
I understand the link's author got mesa3d working.
But to do so, they used git head mesa3d, experimental kernel patches to get the kernel part working, and firmware blobs they took from the proprietary driver.
We'll have to wait a little longer, but we will get system images with all of that already working. Eventually, it'll all be upstream.
It helps that large numbers of these boards (more than every previous RISC-V SBC combined) are getting to developers. That was the whole point of the boards, after all.
> they used git head mesa3d, experimental kernel patches to get the kernel part working,
In fairness, that's what I'd expect for new hardware; out of tree patches, then head, then unstable/rc/beta releases, then eventually normally stable versions. The firmware blobs are unfortunate, though.
While it is reasonable to expect this much, we seldom get anywhere near that.
The usual fare is, unfortunately, just an Android kernel source tarball modified to work with the hardware, with no change history and no documentation of the hardware.
The situation with GPUs, specifically, is really bad.
Intel, AMD, and now Imagination Technologies are about the only ones I'm aware of who seem to care about open drivers.
Yeah, the open drivers alone don't matter a whole lot when they just load the closed-source proprietary blob into the GPU. It's a step in the right direction, but it can be very misleading.
The good news is that with the new open Linux kernel drivers from NVIDIA, Nouveau folks will be able to use the same signed proprietary firmware as the proprietary userspace component of the NVIDIA drivers. So that should unblock development of the open userspace component of the libre Nouveau drivers too.
They do, IIRC, but, interestingly, Nvidia doesn't, only because they put a RISC-V processor on board that keeps its own blob, starting a few generations back (20xx?). I'm not sure that's actually more open, but I guess if you can run it with all open-source drivers and no proprietary blob, that's good?
With x86 PC compatibles you expect the opposite: you can use old software and the system will still boot and be mostly usable. Not so with SoC boards :shrug:
Why is it that an open CPU can be developed, but not an open GPU?
A GPU, as I understand it, is simpler than a CPU, just massively parallel. To me that seems like it would lower barriers to entry. But that obviously isn't the case, so I must be missing something.
GPUs are much more complex than CPUs (at least in terms of their interface; obviously a Xeon is not simple). Plus they haven't been around as long so their entire architecture is not as stable (e.g. consider the switch away from fixed function).
The V instruction set extension of RISC-V started out from efforts at creating an open alternative to GPU compute. So at least in principle, you just need something that exposes a framebuffer and everything else can be done via software rendering. This VisionFive-2 chip does not seem to implement V, but future chips likely will.
From researching this in the past, it seems like there are many more learning resources about CPU design than GPU design. Not to mention that CPU design is often covered in college curricula, while I'm not aware of any GPU design courses (though if anybody knows of some, please do share).
RISC-V (+embedded Rust) is my tech-skill resolution for the year. I like the openness and the modularity. I've massively ignored everything close to the metal over the last couple of years so it's kind of fun to dive into an ISA. A Sparkfun Red-V arrived at my doorstep on Friday and I'll stick to embedded and rv32imac for now but plan to dive into rv64g eventually (and possibly down the whole osdev for fun rabbit hole).
I think there's still a pretty hard requirement to know and write C for embedded dev. It's not that you can't get away with using just Rust, but the full Rust support for chips is really not there. I too started down a similar path. It's a great way to learn, but I found myself needing to reach into C for nearly everything practical.
Yeah, I'm more interested in the ISA and diving back into low-level stuff. Doing everything in Rust is more of a nice-to-have, but my goal is to default to Rust and only reach for C if it is absolutely necessary (I'm OK with C but by no means an expert).
Thankfully it seems like there's a lot of people experimenting and building cool stuff in the Rust embedded space. At least from my initial research there was good documentation and plenty of tutorials around :)
> Actually, the full UI even feels much smoother than on my RPi4
Is KDE using the accelerated graphics on Raspberry Pi 4 now? Last I checked, which admittedly was a year or so ago, the answer seemed to be that it uses software rendering, with no plans to change that.
Of course hardware accelerated rendering is much smoother than pure software rendering. I still recall just how much smoother Windows 3.11 became when I got that Tseng ET6000 graphics card, my first with any acceleration.
Raspberry Pi 4's SoC is widely known to have a very bottlenecked GPU, which simply can't fill the screen fast enough to ever keep up at even 1080p.
The VisionFive 2 claims 4x the GPU performance, so assuming acceleration on both SBCs, it would be easy for the VisionFive 2 to come out ahead.
Fair point. When I did try KDE shortly after the Pi 4 launched, there definitely wasn't hardware acceleration and the desktop was noticeably less fluid than the default Gnome. Though that explains why even Gnome never felt very snappy.
>When the driver is FOSS they could release new boards.
The driver is FOSS. OP used mesa3d to get acceleration. What it isn't is finished and rock solid. It is very hard to get to that point w/o hardware availability. This is why VisionFive 2 exists.
Making a chip on a competitive node has a lot of upfront cost. To make it viable, they'd have to sell many millions of them. The approach here has been to design a chip that can go almost anywhere: a very low-power design with great performance per watt, error correction, and an extremely useful set of peripherals, all in a chip rated for an industrial temperature range, while at room temperature, even without a heatsink, it stays far below its maximum, by something like 60°C.
The VisionFive 2 SBC is the "development board" for this chip. Its purpose is to accelerate upstream support for this SoC.
Of course, as the very first such board under $100, the fastest to date, and the first to ship in the tens of thousands (more than all previous RISC-V development boards combined), it also accelerates the RISC-V ecosystem as a whole, so everybody wins.