Every community I care about is dead

  • 31 Posts
  • 138 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • Everyone is fully missing the point here. This is the banner image for !linux@programming.dev (not where we are right now, for the record), and it weighs 7.7MB as a normal JPEG. When it’s served as WebP it’s 3.8MB. OP is correct that this is very stupid and wasteful for a web content image. It’s a triple-monitor 1440p wallpaper being used verbatim, and it should instead be compressed down to something bandwidth-friendly. I was able to get it to 1.4MB at JPEG quality 80, and when I swap it out in dev tools and A/B test the two I can’t tell the difference. This should be brought to the attention of a mod on that community so it can stop sucking up people’s data for no reason.
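
    For anyone who wants to reproduce the recompression, ImageMagick can do it in one line (a sketch - filenames are placeholders, and on older ImageMagick 6 installs the binary is convert instead of magick):

        # recompress the banner at JPEG quality 80 (filenames are examples)
        magick banner.jpg -quality 80 banner-q80.jpg

        # compare before/after sizes
        du -h banner.jpg banner-q80.jpg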



  • JXL is the best image codec we have so far and it’s not even close. I did a breakdown on some of its benefits here. JXL can losslessly convert PNG, JPG, and GIF into itself, and can losslessly send them back the other way too. The main downside is that Google has been blocking its adoption by keeping support out of Chromium in favor of pushing AVIF, which created a chicken-and-egg problem where no one wants to adopt it until everyone else does. If you want to be an early adopter you can feel free to use JXL, just know that 3rd-party software support is still maturing.

    Something you might find interesting is that the original JPEG is such a badass format that they’ve taken a lot of their findings from JXL and built a modern JPEG encoder with them, named jpegli. Oddly, jpegli-based JPEGs can’t yet be losslessly compressed into JXL files, per this issue - hopefully that will be fixed at some point.
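
    If you want to see the lossless transcoding in action, the reference libjxl tools can round-trip an existing JPEG (a sketch - assumes cjxl/djxl are installed; filenames are placeholders):

        # losslessly recompress an existing JPEG into JXL (bit-exact reversible)
        cjxl photo.jpg photo.jxl

        # reconstruct the original JPEG and verify it's byte-identical
        djxl photo.jxl restored.jpg
        cmp photo.jpg restored.jpg && echo "bit-identical"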



  • Arch should be fine for university stuff. The main problem with Arch is not Arch itself, but that all the software it tracks is very fresh. You’ll be pulling updates as they come down the line, and that may result in temporary bugs or day-to-day workflow changes - caused by the software developers themselves. I don’t think an Arch system is unusually unstable or prone to breaking, but last year they did brick everyone’s GRUB loaders by pushing an update too early (post-mortem here). It’s up to you, but if you want to err on the side of system/software stability I would go for Mint/OpenSUSE Tumbleweed/Debian.

    I don’t have any practical experience with EndeavourOS but TMK it’s just preconfigured Arch and it uses the default repos, so that sounds good to me. Vanilla Arch is not inherently better or worse, it’s just a more minimal starting point.




  • > Conduit is also licensed under Apache 2.0, so it could also be taken closed source at any point in time. The reason this wouldn’t impact Conduit as much is that there’re other contributors, whilst Synapse and Dendrite are almost exclusively developed by Element.

    Right. The current perspective is based on the idea that if Synapse/Dendrite went closed-source right now, an open source fork would be as good as dead. Element is responsible for 95% of Synapse/Dendrite development, and I’m sure a community fork would have to play a lot of catch-up to figure out how to keep it going. If the community were more involved in Synapse/Dendrite development (and if Element let them), there would be less cause for alarm, since closing the source would just mean an immediate community fork and putting Element on ignore. Also, to reiterate: the Matrix Foundation is not going along with Element on this move, and even if Element pulled something shady, the Matrix core spec etc. would remain open and under the Foundation’s control, so the most we stand to lose is Synapse/Dendrite and all of Element’s developers.

    As for the rest, I agree, and I do actually trust that Element is simply playing the only card they have. These maneuvers are all required for Element to survive as a company at all, but they unfortunately leave this backdoor open as a consequence. Matthew has pinky-promised over and over that they are only acting in good faith and would never use the backdoor, but it’s understandable that its mere presence puts everyone on edge. Best case, we take this as a warning that if Element drops dead tomorrow, Matrix is dead with it. If people don’t want Matrix to be practically owned by Element, we should diversify and prepare escape plans.



  • This is actually quite a controversial change, mainly because of their switch to a CLA. The CLA indirectly gives them the ability to relicense the project as closed source whenever they feel like it in the future. Semi-controversially, they are also primarily making this AGPL change in order to begin selling dual licenses to companies. The Matrix Foundation itself does not support this change from Element, though Element is within its rights to make it.

    You can read some more thoughts on this from the pessimistic folks at Hacker News. My main takeaway is that I don’t trust Element because I don’t trust anyone. I’m sure they’re doing this in good faith, but I don’t like the power they currently hold. I hope this is what’s needed to begin focusing efforts on alternative homeserver implementations like Conduit.




  • Flatpak is an alternative packaging system that exists outside of your distro’s normal packaging model (apt/dnf/pacman, etc.). The killer features are that a single universal package works on any distro, and that software versions stay cutting-edge without needing cutting-edge system dependencies. Flatpaks run against their own runtimes and generally don’t rely on anything from the host system - this means you can have arbitrary software on your machine that your distro/repo maintainers don’t need to compile/quality-control/stability-test/etc. It also comes with an easy sandboxing framework out of the box as a bonus.

    In my case I usually use Flatpaks to get more current versions of software without messing up Debian’s “Debian does not break” stability model - Debian is meticulously maintained so that its Stable branch only carries ultra-stable versions of software, at the expense of those packages being older and frozen. If you use a distro with smaller package repos (e.g. OpenSUSE or Fedora) you’ll probably appreciate finding Flatpak versions of software that you’d otherwise need to compile manually.

    Flatpaks are cool, and they have a specific use. They’re not the be-all and end-all of packaging, and they’re (hopefully) not going to replace apt/dnf/pacman. As for why they hate apt, I have no idea. apt is good, and you can even make it a little nicer by installing nala and using that instead of apt.
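
    If you haven’t used Flatpak before, the day-to-day workflow is pretty small (a sketch - the app ID is just an example):

        # one-time setup: add the Flathub remote
        flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

        # install and run an app (example ID)
        flatpak install flathub org.mozilla.firefox
        flatpak run org.mozilla.firefox

        # update all Flatpaks, independently of apt/dnf/pacman
        flatpak update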

    If the point of this thread is that you’re digging for distro recommendations, I’d personally steer you towards Linux Mint and OpenSUSE Tumbleweed for their ease of use. Debian is a little more difficult to set up than Linux Mint, but not tremendously so. Arch is more of an “intermediate” difficulty distro where the main challenge is that your system packages are fast-moving and can break/change in small ways from day to day. If you aren’t comfortable with Linux you might get frustrated by minor bugs that you don’t know how to troubleshoot. Conversely, if you want to learn Linux, dealing with Arch’s shenanigans will naturally expose you to various parts of the system.


  • Yote.zip@pawb.social to Linux@lemmy.ml · What do you think about this? (1 year ago)

    The video is clickbait, and a few of the distros are placed in categories just for dramatic effect. I do share Chris’s criteria for “pointless” distros, however, and I hope his main “clickbait motive” was to stop people hopping from gimmick distro to gimmick distro when the real magic has always been the Debian/Arch base under the hood. I don’t care to give Chris the attention he wants, so I’d rather answer your questions than talk about the video directly:

    I agree that Debian and Arch are “S-tier” distros. Not that they’re better than everything else for every usecase, but they are very high quality community-run distros with large package bases, and they accomplish their mission statements with ease. If you’re a Linux power user for long enough you may eventually settle into one of these two, because they give you a lot of room to mold your configuration without downstream distro maintainers imposing their opinions.

    Linux Mint is very good, and it’s probably the only “fork distro” I recommend, because it makes Debian/Ubuntu simple and usable for new users and has done so for many years with a great track record. I currently run Debian Stable, but if you put a gun to my head and said “you can only run Linux Mint from now on” I’d be fine with it. Specifically, I prefer LMDE (the Debian-based edition), but the normal version is good too.

    You can run cutting-edge gaming stuff on Debian Stable and Linux Mint by using Flatpak Lutris/Steam, which bundle their own cutting-edge Mesa instead of using the system’s, and you can install a cutting-edge kernel on these stable distros via Debian backports or e.g. XanMod. I prefer running a stable distro like Debian Stable and pulling cutting-edge versions of the packages that matter through Flatpak or other means, which gives you a “stable base and rolling top”.
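
    Concretely, on Debian Stable that looks something like this (a sketch - assumes Flathub is set up and a bookworm-backports line is in your APT sources):

        # cutting-edge Steam with its own bundled Mesa
        flatpak install flathub com.valvesoftware.Steam

        # newer kernel from Debian backports
        sudo apt install -t bookworm-backports linux-image-amd64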

    I think the general usecase for Arch has diminished from half a decade ago due to Flatpak’s popularity, and IMO a stable base setup makes more sense if you can get everything important that you need from Flatpaks. With Arch, not only are the programs you care about bleeding-edge, everything is bleeding-edge, and you may end up with annoying bugs from packages you didn’t even know existed.

    If you want a more modern version of the Linux desktop without the bleeding-edge of Arch I think OpenSUSE Tumbleweed is a great cutting-edge distro. They have extensive automatic testing that ensures high system stability even while living near the edge of package freshness. The main downside is OpenSUSE’s smaller package base compared to Debian/Arch-based distros.


  • Yote.zip@pawb.social to Linux@lemmy.ml · What is the best distro for gaming? (1 year ago)

    I no longer use Arch, but this wouldn’t have happened to me because I used vanilla Arch. On Manjaro it can happen at any moment: an AUR package silently depends on functionality from a dependency version that Manjaro’s older repos don’t yet ship. The AUR doesn’t bother to figure out exact version requirements for a program’s dependencies, because you are expected to have a fully up-to-date Arch system before installing. If the AUR cared about Manjaro compatibility it would need to mark every dependency with a minimum version, but that’s a lot of effort, and the AUR understandably doesn’t care about supporting Manjaro’s repos. If Manjaro stood up its own AUR this would no longer be a problem.

    (Personally, I don’t think AUR packages are a good idea for system stability/security even on vanilla Arch, but it is understandable that people like them for their convenience.)


  • Yote.zip@pawb.social to Linux@lemmy.ml · What is the best distro for gaming? (1 year ago)

    Arch has made a lot of mistakes, and their most recent one - bricking everyone’s GRUB loaders - is the one that made me stop recommending it generally. This sort of thing would never happen in Debian, and pretending that “every distro makes massive mistakes!” is disrespectful to distros that actually put a ton of effort into making sure these things don’t happen. Sweeping those mistakes under the rug is harmful to new users who don’t know what they’re signing up for when they download the distro you are sugarcoating. That is the primary reason to make sure anyone considering Manjaro is aware of its past, so they can make their own decision.

    > Security updates aren’t delayed in Manjaro, they’re pushed through out of band.

    Manually. Also read as: delayed. The comment from Arch’s security team that you are minimizing is part of the reason why this is a bad idea: “They just forward our security advisories without reading them. Leaving critical security issues to rot in their “stable” repositories while only pushing forward issues that are publicized or users telling them about”. Once again, why would I trust the Manjaro team to be on top of security when they can’t figure out how to keep an SSL cert alive? Their security mailing list hasn’t even been updated in a year.

    > Once you’ve compiled an AUR package it will remain compatible with the system you compiled it on until you update and introduce an incompatibility.

    You are dodging the real dependency problem by focusing on this half. The real problem is that when an AUR package updates and Manjaro’s packages are not new enough for the update, it breaks. AUR packages are built against Arch’s repos with no care whatsoever for the versions Manjaro holds back. Updating your AUR packages frequently all but guarantees that you will eventually hit an AUR update requiring a newer dependency version than Manjaro provides, and that app will break (or worse, the AUR package is itself a dependency of other apps, which causes further breakage). Even Manjaro knows this: “Using AUR also implies Arch stable branch - which is only achievable by using Manjaro unstable or testing branch.” Also take it from their team: “The AUR is neither officially supported by Arch nor Manjaro. If you do use the AUR on Manjaro, use our unstable branch. Problem solved.”
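
    To make the mechanism concrete, here’s the shape of the problem (a sketch - libfoo is hypothetical; AUR PKGBUILDs are just bash):

        # a typical AUR PKGBUILD dependency array - usually unversioned:
        depends=('libfoo')   # implicitly means "libfoo as currently shipped by Arch"

        # Arch's repos ship libfoo 2.1, so the AUR package is written and built
        # against the 2.1 API. Manjaro stable is still holding libfoo 2.0 back,
        # so the freshly built package breaks at build time or at runtime.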

    > That’s not the “Arch’s security team”, it’s one person on a 3rd party forum, with a history of issuing personal statements reeking of personal grudge. Yeah I know that comment unfortunately. It’s a singular, isolated piece of flamebait and it makes me sad to see it’s still being bookmarked and passed around 5 years later.

    Yes, very sad that a member of Arch’s security team issued a warning about Manjaro’s security 5 years ago, and still we have people pretending it’s “flamebait” because that’s a convenient excuse to dismiss it.


  • Yote.zip@pawb.social to Linux@lemmy.ml · What is the best distro for gaming? (1 year ago)

    The receipts that I just linked show far more than 2 mistakes. I don’t care whether they have fixed them or not, I care that they have made so many. Trust arrives on foot and leaves on horseback. Distro forks are nothing special, so why use the one with a history of bad management? Use Arch proper or any of the countless Arch forks that use the real Arch repos, which will inherently sidestep a lot of issues that Manjaro created for itself.

    You say that delaying packages makes things more stable, but there is a clear history of that not being the case, as described in the links I posted - most importantly for security updates, which arrive late. You also don’t understand how the AUR interacts with outdated Manjaro packages, which causes dependency problems and leads to breakage. This is very simple cause and effect, so I’m not sure how you can assert that “everyone else must misunderstand how dependencies work”.

    As for the last bit: no, Arch is obviously not hurt when Manjaro is called out. If anything, I’ll bet Arch wishes Manjaro would stop tripping over itself and giving Arch a bad name. They are already sick of Manjaro users using the AUR and complaining every time it breaks their packages, and you can read what Arch’s security team thinks about Manjaro here on r/archlinux (image mirror here if you don’t want to visit that site).




  • I prefer recertified ones if they’re significantly cheaper, but that’s up to you. Recertified drives will likely fail sooner, but when they’re around ~60% of the cost it makes sense to gamble.

    As for which RAID level, that’s up to you and how you’re setting up your array. If you’re running ZFS then mirrored pairs are fairly flexible, since you can add a pair of any disk size whenever you want, but they cost you 50% of your disk space in redundancy. For RAID5/6 you want the disk sizes to match, and note that ZFS can’t yet add disks to an existing RAID5/6 (RAIDZ) array - that feature (RAIDZ expansion) is coming in the next OpenZFS release, which is about a year out.
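
    For example, growing a ZFS pool of mirrored pairs looks like this (a sketch - the pool name and device names are placeholders):

        # create a pool from one mirrored pair
        zpool create tank mirror /dev/sda /dev/sdb

        # later: grow the pool by adding another mirrored pair (pair sizes can differ)
        zpool add tank mirror /dev/sdc /dev/sdd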



  • A couple of nits to pick: BTRFS doesn’t use/need journaling because of its CoW nature - data on the disk is always consistent because writes never go back over the block they came from. Only once the new data is written successfully is the pointer moved to the newly-written block. Also, instantaneous copies on BTRFS are actually due to reflinking rather than CoW per se (XFS can also do reflinking despite not being CoW, and ZFS didn’t have this feature until OpenZFS 2.2, which was just released).
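
    Reflinking is easy to try for yourself on BTRFS/XFS (a sketch - filenames are placeholders):

        # instant "copy" that shares extents with the original until either file is modified
        cp --reflink=always bigfile.iso bigfile-copy.iso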

    I agree with the ZFS bit, and I’m firmly in the BTRFS/ZFS > Ext4/XFS/etc camp unless you have a specific performance usecase. The ability to scrub data against checksums is invaluable in my opinion, not to mention all the other killer features. People have been running Ext4 systems for decades pretending that if Ext4 does not see the bitrot, the bitrot does not exist. (Then BTRFS picks up a bad checksum and gets scolded for being a bad filesystem.)
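
    For reference, scrubbing on BTRFS is a single command (a sketch - the mount point is an example):

        # read every block and verify it against its checksum; -B stays in the
        # foreground and prints a summary when finished
        sudo btrfs scrub start -B /mnt/data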


  • Yote.zip@pawb.social to Linux@lemmy.ml · UNRAID on sale 23-27 November (1 year ago)

    Where’s the fun in paying someone else to do it all for you?

    MergerFS+SnapRAID will give you a very similar set of features/flexibility compared to UNRAID storage. OpenMediaVault has native MergerFS+SnapRAID support and can also do ZFS - I would look at that for a comparable alternative. Otherwise, I’m very fond of a Proxmox host with a TrueNAS VM for ZFS pool management, or just managing the ZFS pool with the Proxmox host itself through this cockpit extension.
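
    For a sense of scale, the core of a MergerFS+SnapRAID setup is only a few lines of config (a sketch - mount points and disk layout are placeholders):

        # /etc/fstab: pool /mnt/disk1..N into a single mergerfs mount
        /mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,category.create=mfs 0 0

        # /etc/snapraid.conf: one parity disk protecting the data disks
        parity /mnt/parity1/snapraid.parity
        content /var/snapraid/snapraid.content
        data d1 /mnt/disk1/
        data d2 /mnt/disk2/

        # run periodically (e.g. from cron): update parity, then verify old blocks
        snapraid sync
        snapraid scrub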