

How much VRAM does alacritty use? On my machine, nvidia-smi reports 6MiB for konsole, which seems to be some default reserved by Qt apps (e.g. dolphin reports the same amount)


Is buying a smartphone with a proprietary OS from an EU company really a smart decision after chat control?
I think I’m going to be sticking with Graphene
Don’t use Mint or Ubuntu, use Bazzite. It actually “just works”, with the added benefit that you can’t break it. It’s perfect both for beginners and for experienced users who want to do work rather than tinker with their OS.
And if you have a graphics card (which you probably do since you mentioned gaming), Bazzite comes with Nvidia or AMD drivers preinstalled, so you don’t have to do anything extra to get it to work.
But if you really want to follow the YT influencer Linux memes, at least go with Ubuntu instead of Mint. Mint is just Ubuntu with a different default desktop, but less reliable (edit: toned down the exaggeration)
You don’t need to use the terminal/command line for this. Just open the settings app and look for the Bluetooth section. Pairing your keyboard is pretty much the same process as on a phone or tablet.
Btw, Bazzite has different versions. Which did you install?


I don’t like this. Flatpaks are a huge step forward, but Fedora Flatpaks are two steps back. I’m not at all convinced by his arguments here or in the rejected proposal.
The only potential benefit that might make sense is that they will contain the same Fedora-specific patches found in the Fedora RPMs… Except that is exactly the type of thing Flatpaks were supposed to prevent! Neither users nor developers want a middleman adding or removing features from their software. It has historically been one of the biggest pain points for migrating to Linux as a user, or supporting it as a developer. It was necessary in the past for compatibility reasons, but Flatpaks fixed that. Now, developers can publish one Flatpak that works on all distros, and users don’t have to wonder whether they’ll be able to use some app, or whether it will work… Unless they’re on Fedora
But I don’t like this post or the wording in the proposal. He doesn’t actually outline why he wants this to go through. I’m not claiming there’s any tinfoil-hat conspiracy behind the scenes; it’s just that his argument is not well articulated. If someone wants to use an app with Fedora-specific patches for some reason, they can layer the RPM on top of their Atomic distro. There’s no reason to add uncertainty and confuse users by turning those into Flatpaks.
If you already know cron and are too lazy to learn something new, then use cron with the knowledge that it’s a personal failure and not a real technical decision… Otherwise, use systemd timers.
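For reference, here’s what a minimal systemd timer pair might look like (unit names and the script path are made up for illustration): a .service that describes the job, and a matching .timer that schedules it.

# /etc/systemd/system/backup.service (hypothetical)
[Unit]
Description=Nightly backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer (hypothetical)
[Unit]
Description=Schedule the nightly backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now backup.timer, and systemctl list-timers shows when each timer last fired and fires next.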


I know nothing about what it takes to develop a laptop. Are these issues (BIOS updates, virtualization support, USB4 support, etc) something the laptop manufacturer needs to develop solutions for in-house? Wouldn’t that be the job of Qualcomm? Or are Tuxedo saying that these things aren’t supported on the Linux side yet? Qualcomm claimed to be contributing to the kernel last year, so idk if that just hasn’t happened yet or if they just lied.
Either way this is disappointing, but understandable. There’s no sense in working to release a laptop with previous-gen hardware that’s not going to be competitive.


Where is the closed source user space of Intel and AMD drivers?
They’re not in user space; they’re in the firmware of the GPUs. It’s embedded in some chip on the card or on the motherboard. The open source components communicate with that closed part.
Nvidia previously implemented nearly everything in their nonfree kernel module driver. Today, they’ve pushed enough of the parts they’re protective of into the firmware that they can release the kernel module as open source/GPL.
they use Mesa for the best possible compatibility.
Mesa is just the userspace implementation of higher-level graphics APIs like OpenGL and Vulkan, which communicates with the underlying drivers. I actually think it’s a good thing that Nvidia has their own implementation of this, as it creates competition, and they’re positioned to improve consistency across Windows/Linux since they likely reuse a lot of code on both platforms.
I’ve read comments by people bashing the recent Baldur’s Gate 3 Linux release for being full of graphics glitches. Then they list their hardware as proof of how great it is, and they all have Nvidia GPUs.
That’s Larian’s fault for releasing a buggy port. They probably only tested on AMD because they only care about the Steam Deck on Linux. GPU drivers are always buggy, even on Windows. The only way to ensure compatibility is to spend the time and effort to test on all of them.


Sure, you get an A for answering the question, but my point was that the hate they get today on Linux is misguided because people only have vague or non-specific complaints. The only specific instance of assholery that I know of is the one you pointed out, which is vintage at this point.
When Nvidia announced that they were going to move the proprietary parts of their driver into the GPU firmware and open source the kernel module, there was a lot of hate about how they’re being assholes for not releasing the whole thing as open source, relying on proprietary blobs, etc. Yet that’s stupid, because it’s literally the exact same thing AMD and Intel do for their much-beloved drivers. Because of the vague and non-specific criticisms, people feel inclined to draw negative conclusions like that.
I took your original reply further up to mean that Nvidia does deserve that kind of response today, even though they haven’t done anything particularly evil in the Linux world lately (AFAIK)


The GBM controversy is the (only) one I know about. Afaik, their drivers support GBM today so it’s kind of outdated.


Can you give an example? There has been a ton of vague non-specific hate towards them in the Linux community ever since Linus gave them the finger. So for the sake of not being ignorant, I’d like to hear specific examples if you have them.
Don’t get me wrong, I’m all for hating on Nvidia for many different reasons. I’ve hated them since even before the AI and crypto eras.


Lazy headline from Phoronix makes it sound like Nvidia is just complaining about Wayland. It’s a technical presentation aimed at Wayland developers to discuss shortcomings that make it difficult to implement screen casting. A talk like this, from a hardware vendor that actively contributes to Wayland and develops/maintains drivers, is very helpful, and the first step toward addressing/fixing the issues.
I hate Nvidia just as much as the next guy, but they’re currently a valuable asset for Wayland and Linux graphics in general. In case you aren’t aware, Nvidia was the main driving force behind getting explicit sync support into Wayland, which is a feature that greatly improves performance for modern graphics APIs.


The same way you did, via the name of the member: my_test.test2.b = 'x';
The unnamed struct provides the type for a member named test2. Doing it this way saves you the trouble of defining the struct externally and giving it a name. It’s identical to this, except in this example you can reuse the struct definition:
struct crappyname {
    char b;
    float c;
};

struct test {
    int a;
    struct crappyname test2;
    double d;
};
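For instance, here’s the kind of reuse the named version enables (struct pair is made up for illustration):

struct pair {
    struct crappyname first;  // same definition reused twice
    struct crappyname second;
};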
I see, by PC you mean you don’t want a traditional “tower” PC, which is perfectly reasonable. I personally consider anything within the umbrella of “PC gaming” to be a PC, including laptops (even MacBooks).
That’s a very strange opinion to read on programming.dev’s Linux Lemmy community


Everyone knows that memory safety isn’t the only source of security vulnerabilities (unless you’re bickering about programming languages on the internet, in which case 100% of security vulnerabilities are related to memory safety)
Rust users are one of Rust’s biggest weaknesses.


You can already do that in standard C like this:
struct test {
    int a;
    struct {
        char b;
        float c;
    } test2;
    double d;
};
I can’t think of any particular reason why you’d want an unnamed struct inside a struct, but you definitely would want to be able to have an unnamed struct inside a union. I suspect the struct-inside-struct thing can become useful in some scenarios involving unions.
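For example, unions are the basis of the classic tagged-union pattern, and an unnamed struct inside the union lets you group the related fields without adding an extra name to the access path. A minimal C11 sketch (all names made up):

#include <stdio.h>

// A tagged union: 'kind' records which of the overlapping
// members is currently valid.
struct value {
    enum { KIND_INT, KIND_POINT } kind;
    union {
        int i;                  // valid when kind == KIND_INT
        struct { float x, y; }; // valid when kind == KIND_POINT
    };
};

int main(void) {
    struct value v = { .kind = KIND_POINT };
    v.x = 1.0f; // accessed directly, not v.something.x
    v.y = 2.0f;
    if (v.kind == KIND_POINT)
        printf("point: (%.1f, %.1f)\n", v.x, v.y);
    return 0;
}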


I’m sitting around doing IT shit, waiting for things to download/backup/install/etc, and have nothing better to do, so here’s an AI-free explanation with code samples:
It’s basically just a code style thing. Standard C allows you to declare unnamed structs/unions within other structs/unions. They must be unnamed, so it’d look like this:
struct test {
    int a;
    struct {
        char b;
        float c;
    };
    double d;
};
Which is fine, but the -fms-extensions flag enables you to do the same thing with named structs. For example:
struct test {
    int a;
    struct test2 {
        char b;
        float c;
    };
    double d;
};
Without -fms-extensions, the above will compile, but it won’t do what you might assume: b and c will be members of struct test2, not test. So something like this won’t compile:
struct test my_test;
my_test.b = 1; // error: ‘struct test’ has no member named ‘b’
But with the flag, not only does it work, it also lets you do some convenient things like this:
struct test2 {
    char b;
    float c;
};

struct test {
    int a;
    struct test2;
    double d;
};

// ...
struct test my_test;
my_test.b = 1; // OK
That is, you can reuse an existing struct definition, which gives you a nice little tool to organize your code.
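And in case it saves anyone a search: the flag-dependent examples above would be compiled with something like gcc -fms-extensions test.c (file name made up; clang accepts the same flag).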
Source: https://gcc.gnu.org/onlinedocs/gcc/Unnamed-Fields.html
It’s neat, but not a serious competitor to something like Framework. The MNT laptops are just cool shells around a Rockchip RK3588, an octa-core ARM SoC (four Cortex-A76 performance cores plus four Cortex-A55 efficiency cores). It’s a good competitor in the Raspberry Pi world, but not a serious contender in the x86 one.
If they somehow release a modern x86 version, RIP Framework. Otherwise, I don’t think many existing FW customers will be switching to MNT. (Although there are a lot of other, better laptops on the market they could switch to)
Wrong. If that were true, it wouldn’t have suddenly gone up 22% this past year.
…I wonder, did something happen recently that might have led to an influx of incels/cucks/betas into the chad Linux community?