Have you used Facebook in the last 5 years?
The UX is godawful. More than half my feed is just random crap suggestions and ads.
Installing Linux after Windows should be fine without disconnecting drives.
The reverse is troublesome. Microsoft’s installer is all too happy to shit on your drives, even the ones you’re not using for installation. But Linux installers are much more friendly to dual-booting and all kinds of complex setups.
Haven’t heard of Hiren’s BootCD in like 15 years. Good to see it’s still around!
Yeah, I had to disconnect all my SATA HDs to stop the Windows installer from shitting all over them.
I’d be worried about Windows updates doing the same thing now, after the recent glitch that broke bootloaders.
F-Droid link for the lazy: https://f-droid.org/packages/com.junkfood.seal/
Definitely going to check this out. I’ve been using yt-dlp via command line in Termux but that experience is less than ideal.
It was bought out and cleaned up a few years ago. It’s legit again now, though I don’t think it’ll ever really recover from that fiasco.
Chromium itself will. Other Chromium-based browser vendors have confirmed that they will maintain v2 support for as long as they can. So perhaps try something like Vivaldi. I haven’t tried PWAs in Vivaldi myself, but it supports them according to the docs.
Debian still supports Pentium IIs. They axed support for the i586 architecture (original Pentium) a few years back, but Debian 12 (current stable, AKA Bookworm) still supports i686 chips like the P2.
Not sure how the rest of the hardware in that Compaq will work.
See: https://www.debian.org/releases/stable/i386/ch02s01.en.html
Probably ~15TB through file-level syncing tools (rsync or similar; I forget exactly what I used), just copying up my internal RAID array to an external HDD. I’ve done this a few times, either for backup purposes or to prepare to reformat my array. I originally used ZFS on the array, but converted it to something with built-in kernel support a while back because it got troublesome when switching distros. Might switch it to bcachefs at some point.
With dd specifically, maybe 1TB? I’ve used it to temporarily back up my boot drive on occasion, on the assumption that restoring my entire system that way would be simpler in case whatever I was planning blew up in my face. Fortunately never needed to restore it that way.
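Roughly what those two approaches look like, for anyone curious. The paths are made up, and I’ve pointed everything at scratch files/dirs here so the sketch is harmless to actually run:

```shell
# File-level sync: against the real array it'd be something like
#   rsync -aH --info=progress2 /mnt/array/ /mnt/backup/
# Demonstrated on temp dirs so nothing real gets touched:
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "some data" > "$SRC/file.txt"
rsync -aH "$SRC/" "$DST/"   # -a: archive mode, -H: preserve hardlinks

# Block-level image with dd: the real thing would be more like
#   dd if=/dev/nvme0n1 of=/mnt/backup/boot.img bs=4M conv=fsync status=progress
# Again demonstrated on a small scratch file:
truncate -s 1M "$SRC/disk.img"
dd if="$SRC/disk.img" of="$DST/disk.img" bs=64K conv=fsync status=none
```

The trade-off is the usual one: rsync only copies files (and can resume/skip unchanged ones), while dd copies the whole block device, bootloader and free space included.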
Hopefully they have better defenses against legal action from Nvidia than ZLUDA did.
In the past, re-implementing APIs has been deemed fair use in court (for example, Oracle v Google a few years back). I’m not entirely sure why ZLUDA was taken down; maybe just to avoid the trouble of a legal battle, even if they could win. I’m not a lawyer so I can only guess.
Validity aside, I expect Nvidia will try to throw their weight around.
It’s worth mentioning that with a large generational gap, the newer low-end CPU will often outperform the older high-end one. An i3-1115G4 (11th gen) should outperform an i7-4790 (4th gen), at least in single-core performance. And it’ll do it while using a lot less power.
Interesting. I’m not sure that’s a Lemmy thing per se, maybe specific to your client, or some extension or something altering CSS?
I just checked in my browser’s inspector, and the italicized text’s <em> tag has the same calculated font setting as the main comment’s <div> tag.
FWIW, I’m using Firefox with my instance’s default Lemmy web UI.
YES.
And not just the cloud, but internet connectivity and automatic updates on local machines, too. There are basically a hundred “arbitrary code execution” mechanisms built into every production machine.
If it doesn’t truly need to be online, it probably shouldn’t be. Figure out another way to install security patches. If it’s offline, you won’t need to worry about them half as much anyway.
Hospitals and airports typically have their own backup generators, yeah. Not entirely sure how long they’re prepared to operate off-grid.
Both.
The good: CUDA is required for maximum performance and compatibility with machine learning (ML) frameworks and applications. It is a legitimate reason to choose Nvidia, and if you have an Nvidia card you will want to make sure you have CUDA acceleration working for any compatible ML workloads.
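A quick way to sanity-check that, using PyTorch as the example framework (the function name is just mine, and this degrades gracefully if PyTorch isn’t installed):

```python
# Hedged sketch: report which CUDA runtime the ML stack was built against
# and whether a GPU is actually visible. PyTorch is only one example;
# other frameworks have equivalent checks.
def cuda_report():
    try:
        import torch
    except ImportError:
        return "PyTorch not installed"
    # torch.version.cuda is None on CPU-only builds
    return (f"torch {torch.__version__}, built for CUDA {torch.version.cuda}, "
            f"GPU available: {torch.cuda.is_available()}")

print(cuda_report())
```

If that prints `GPU available: False` on a machine with an Nvidia card, welcome to the fun described below.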
The bad: Getting CUDA to actually install and run correctly is a giant pain in the ass for anything but the absolute most basic use case. You will likely need to maintain multiple framework versions, because new ones are not backwards-compatible. You’ll need to source custom versions of Python modules compiled against specific versions of CUDA, which opens a whole new circle of Dependency Hell. And you know how everyone and their dog publishes shit with Docker now? Yeah, have fun with that.
That said, AMD’s equivalent (ROCm) is just as bad, and AMD is lagging about a full generation behind Nvidia in terms of ML performance.
The easy way is to just use OpenCL. But that’s not going to give you the best performance, and it’s not going to be compatible with everything out there.
Backing up / in its entirety might cause issues, since there will be a lot of special files and crossed mount points. You should probably exclude /proc and any system folders from the backup. See: https://github.com/bit-team/backintime/blob/dev/FAQ.md#does-back-in-time-support-full-system-backups
Since you’re planning to start with a clean Nobara install, you can probably exclude those during the restore step. Just be careful not to restore files that are in active use by the running system.
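For illustration, here’s a rough sketch of what excluding those looks like with plain rsync (Back In Time has its own exclude UI, but the idea is the same). It’s demonstrated on a mock root in temp dirs so it’s safe to run; on the real system you’d point it at / with sudo:

```shell
# Mock root standing in for /; the exclude list is the part that matters.
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/proc" "$SRC/sys" "$SRC/home/user"
echo "keep me" > "$SRC/home/user/notes.txt"
echo "kernel junk" > "$SRC/proc/version"

# On a real system this would be more like:
#   sudo rsync -aHX --exclude=/proc --exclude=/sys --exclude=/dev \
#     --exclude=/run --exclude=/tmp / /mnt/backup/
rsync -a --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
  "$SRC/" "$DST/"
```

The leading slash in each exclude anchors it to the root of the transfer, so `/proc` only matches the top-level proc, not some random `proc` directory buried in your home.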
Have you tested restoring from your backup? Can you do it from the live USB?
Does that really work for RAID 0? Since RAID 0 is striped (with “zero” redundancy), I wouldn’t expect an array with a missing device to work at all. But I can’t say I’ve ever tried.
As a reminder: with Nvidia’s official driver stack, the same (closed-source) user-space components for OpenGL / OpenCL / Vulkan / CUDA are used regardless of which kernel driver option you pick.
CUDA hell remains. :(
Perhaps you are a more discerning filesystem user than I am, but I don’t think I’ve actually noticed any difference on btrfs except that I can use snapshots and deduplication.
There’s one called Redox that is entirely written in Rust. Still in fairly early stages, though. https://www.redox-os.org/