On one hand I agree, on the other hand I just know that some people would immediately abuse it and put relevant data into comments.
At the very least it failed in a way that’s obvious, by giving you contradictory statements. If it left you with only the wrong statements, that’s when “AI” becomes really insidious.
register hours in Windows. We also all have iPhones that we only use for 2FA.
Without background information that sounds kind of insane. Switching to alternative time-tracking software and getting YubiKeys or similar for 2FA instead would’ve saved so much money, as well as time, every day.
What’s more, there’s also a clean split between the game and the framework they’ve built for it. So people can actually build their own games or tools using the osu!framework, and some already have.
Which is neat, because based on what I’ve seen trying the new client, it seems really performant and, of course, low-latency.
Considering the movie industry is currently at a point where it’s even punishing paying customers with low-quality 720p for daring to use the “wrong” browser, I don’t think the industry will figure out anytime soon that there’s a market out there for high-quality DRM-free media.
There really should be a right to adequate human support that’s not hidden behind multiple barriers. As you said, it can be a timesaver for the simple stuff, but there’s nothing worse than the dread when you know that your case is going to need some explanation and an actual human who can do more than just follow a flowchart.
Depends a bit on the clients.
Assuming you only have one desktop and one mobile client, you should never run into any issues. If you do have multiple KeePassXC clients, it’s all fine as well, as long as Syncthing always has another client it can sync with.
Amazingly, this setup is also unexpectedly resilient against merge conflicts and can sync even when two copies have changed. You wouldn’t expect that from tools relying on third-party file syncing.
I still try to avoid it, but every time it accidentally happened, I could just merge the changes automatically without losing data.
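For anyone wondering why this works at all: KeePassXC merges databases per entry, roughly newest-modification-wins (the real KDBX merge also handles groups, deletions, and entry history). Here’s a toy sketch of that idea, with made-up entry names and timestamps, not the actual KeePassXC code:

```python
def merge_databases(local, remote):
    # Toy model of a per-entry, newest-wins merge, roughly how
    # KeePassXC reconciles two diverged copies of a database.
    # Entries map name -> (password, last_modified).
    merged = dict(local)
    for name, (password, mtime) in remote.items():
        if name not in merged or mtime > merged[name][1]:
            merged[name] = (password, mtime)
    return merged

# Both copies changed, but different entries: nothing is lost.
local = {"mail": ("old", 1), "bank": ("new-bank-pw", 5)}
remote = {"mail": ("new-mail-pw", 7), "bank": ("old", 1)}
print(merge_databases(local, remote))
```

Because the merge happens per entry rather than per file, two diverged copies only truly conflict when the *same* entry was edited on both sides, which is rare in practice.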
Oh yeah, you’re right. Both are degradation in some way, just through entirely different causes.
Technically you can do everything through email, because everything online can be represented as text. Doesn’t mean you should.
PRs also aren’t just a simple back and forth anymore: tagging, assignees, inline reviews, CI with checks, progress tracking, and yes, reactions. Sure, you can kinda hack all of that into a mailing list, but at that point it becomes really clunky and abuses email even more for something it was never meant to handle. Having a purpose-built interface for that is just so much nicer.
I’m sorry to be blunt, but mailing lists just suck for group conversations and are a crutch that only gained popularity due to the lack of better alternatives at the time. While the current solutions also come with their own unique set of drawbacks, it’s undeniable that the majority clearly prefers them and wouldn’t want to go back. There’s a reason why almost everyone switched over.
I’d guess because the same argument could be made for the website you’re on right now. Why use that when we could just use mailing lists instead?
More specifically: Sure, Git is decentralized at its core, but all the tooling that has been built around it, like issue tracking, is not. Suggesting to go back to email, even if some projects still use it, isn’t the way to go forward.
Same thing with Stable Diffusion if you’ve ever used a generated image as an input and repeated the same prompt. You basically get a deep-fried copy.
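The “deep-frying” is easy to model as a feedback loop: each generation slightly amplifies its own artifacts, and the clipping compounds. A deliberately crude toy model (not actual Stable Diffusion code; the gain and pixel values are made up):

```python
def generation(pixels, gain=1.1):
    # One img2img-style pass, modeled as a slight contrast boost
    # plus clipping: the model re-amplifying its own output.
    return [min(1.0, max(0.0, (p - 0.5) * gain + 0.5)) for p in pixels]

pixels = [0.3, 0.5, 0.7]
for _ in range(30):          # feed each output back in as the input
    pixels = generation(pixels)
print(pixels)                # midtones collapse toward the extremes
```

After a few dozen passes everything but the exact midpoint has been pushed to pure black or pure white, which is basically what the deep-fried look is.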
I’ve been trying to find some better/original sources [1] [2] [3] and from what I can gather it’s even worse. It’s not even an upscaler of any kind, it apparently uses an NPU just to control clocks and fan speeds to reduce power draw, dropping FPS by ~10% in the process.
So yeah, I’m not really sure why they needed an NPU to figure out that running a GPU at its limit has always been wildly inefficient. Outside of getting that investor money of course.
AI acceleration for 3D upscaling
Isn’t that not only similar to, but exactly what DLSS already is? A neural network that upscales games?
And Linux will slowly turn into Windows.
Some distros maybe, but I’d say that instead we’d quickly have another golden era of malware.
Yup. I’ve never done anything besides installing NVIDIA drivers. I just switched the secondary monitor’s cable to the motherboard ports and it worked. No reboot even, just making sure that adaptive sync is enabled in KDE or wherever.
VRR does not work if you have an NVIDIA card and more than one monitor enabled.
I recently learned that’s not entirely correct for Wayland. The critical thing is that VRR stops working if more than one enabled monitor is connected to the NVIDIA GPU. Meaning that if you connect only one display to the NVIDIA GPU and the other monitors to the integrated GPU, it should just work.
I felt pretty stupid when I realized that I could’ve just switched a single cable and been using VRR way earlier. It didn’t even need a reboot to work. For reference, I’m using an NVIDIA GPU + AMD CPU with one G-Sync display as my main monitor and one non-VRR display as my secondary.
Yup. I can get away with prepaid 1GB/month for 3€ because I’m almost always near Wi-Fi and don’t really need anything bandwidth-heavy when I’m not.
I also find it wild how some people will get an expensive contract that comes with a “free” phone, but then don’t switch to an equal but cheaper contract (without a “free” phone) when the contract term expires, or at the very least renew the term so they get a new phone.
That’s assuming people actually use a parser and don’t build their own “parser” to read values manually.
And before anyone asks: Yes, I’ve known people who did exactly that and to this day I’m still traumatized by that discovery.
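For anyone who hasn’t seen it in the wild, here’s a hypothetical `homemade_parse` in the spirit of what I mean. It passes a quick sanity check and then falls over the moment the keys are reordered, something a real parser handles trivially:

```python
import json

def homemade_parse(text, key):
    # The hand-rolled approach: scan for '"key":' and grab whatever
    # follows up to the next comma. No escaping, no nesting, no
    # awareness of where the value actually ends.
    marker = f'"{key}":'
    start = text.index(marker) + len(marker)
    rest = text[start:]
    end = start + rest.index(",") if "," in rest else len(text)
    return text[start:end].strip().strip('"')

doc_a = '{"port": 8080, "debug": false}'
doc_b = '{"debug": false, "port": 8080}'   # same data, keys reordered
print(homemade_parse(doc_a, "port"))       # "8080" -- looks fine
print(homemade_parse(doc_b, "port"))       # "8080}" -- oops
print(json.loads(doc_b)["port"])           # a real parser: 8080
```

And that’s the friendly failure mode. Add nested objects or a string value that happens to contain a comma and the hand-rolled version starts returning garbage silently.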
But yes, comments would’ve been nice.