I use Karch, btw.
And that’s mostly the “bullshit IoT” category. It’s not like demand for phones and laptops has exploded in recent years; it’s IoT, AI, and other useless crap - regardless of the process node.
We could start by not requiring new chips every few years.
For 90% of users, there hasn’t been any actual gain within the last 5-10 years. Older computers work perfectly fine, but artificial slowdowns and bad software make laptops feel sluggish for most users.
Phones haven’t really advanced either. But apps and OSes are bloated and the hardware is impossible to repair, so a new phone it is.
Every device nowadays needs WiFi and AI for some reason, so of course a new dishwasher has more computing power than an early Cray, even though none of it is ever used.
What exactly do you think these chips are used for?
Because it’s often enough AI, crypto and bullshit IoT.
Usually ~/devel/
On my work laptop I have a separate subdir for each project and basically try to mirror the GitLab group/project structure, because some fucktards like to split every project into 20 repos.
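If you ever want to automate that layout, here’s a minimal sketch of the idea - assuming the python-gitlab package, with the server URL, group name, and token env var as made-up placeholders:

```python
import os
import subprocess

import gitlab  # pip install python-gitlab

# Hypothetical server and token - replace with your instance.
gl = gitlab.Gitlab("https://gitlab.example.com",
                   private_token=os.environ["GITLAB_TOKEN"])
group = gl.groups.get("my-group")  # hypothetical top-level group

for project in group.projects.list(include_subgroups=True, iterator=True):
    # path_with_namespace is e.g. "my-group/subgroup/repo", which maps
    # straight onto the on-disk layout under ~/devel/
    dest = os.path.expanduser(os.path.join("~/devel", project.path_with_namespace))
    if not os.path.isdir(dest):
        subprocess.run(["git", "clone", project.ssh_url_to_repo, dest], check=True)
```

That way even the 20-repo projects at least land in one predictable place.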
Ansible is actually pretty nice once you get the hang of it. Not perfect, but better than triple-tunnel SSH.
You could simply automate step by step: each time you change something, add it to the playbook, and over time you’ll end up with a good setup.
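You can even close the loop from a script. A minimal sketch, assuming the ansible-runner package and a hypothetical playbook.yml - since tasks are idempotent, re-running the whole thing after every addition is safe:

```python
import ansible_runner  # pip install ansible-runner

# Re-run the whole (growing) playbook; tasks that are already applied
# just report "ok", so repeated runs are harmless.
result = ansible_runner.run(private_data_dir=".", playbook="playbook.yml")
print(result.status)  # "successful" once everything has converged
print(result.stats)   # per-host ok/changed/failed counts
```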
Flaky dev setups are productivity killers.
The real question is why you’re torturing yourself by manually fixing that stuff? Don’t you terraform your Ansibles?
Admittedly, I only ever entered an operating room under anesthesia, but could you just, you know, put the displays somewhere else?
This seems like one of those infomercial “problems”.
You’re oversimplifying things, drastically.
Corporations don’t have one project, they have dozens, maybe hundreds. And those projects need staffing.
It’s not a chair factory where more people equals faster delivery.
And that’s the core of your folly - latency versus throughput. Yes, putting 10 new devs on a project won’t magically increase speed. But 200 developers can get 20 projects done where 10 devs only finish one.
This is just the peak power. The average power is much less. And batteries can maybe work on a grid scale for smoothing, but not for an individual consumer like a data center.
Exactly. But the mods here are too butthurt to accept that and would rather delete my comments so they can live in their delusions - which was my point.
As I wrote: sanctions. That’s what compliance means.
Realistically, outsourcing is often a tool to get volume, not to cut costs.
There’s a reason so many people went to coding boot camps: there was huge demand for developers. Here in Germany, for quite a while you literally couldn’t get developers unless you paid outrageous salaries. There were none. So if you needed a lot of devs, you had the choice to either outsource or cancel the project.
I actually talked to a manager about our “nearshoring”, and it wasn’t actually that much cheaper once you accounted for all the friction, but you could deliver volume.
BTW: there’s a big difference between hiring the cheapest contractors you can find and opening an office in a low-income country. My colleagues from Poland, Estonia and Romania were paid maybe half what I got, but those guys were absolutely solid, no complaints.
I get your point, but have you looked into the power demands of data centers? They already have room-filling batteries for power outages, but those are just enough to keep the lights on while the diesel generators start.
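Quick back-of-the-envelope with made-up but plausible numbers, just to show why those battery rooms are only a bridge:

```python
# Assumed figures, not real specs for any facility.
facility_load_mw = 20.0      # assumed average draw of a mid-size data center
battery_capacity_mwh = 5.0   # assumed installed UPS battery capacity

runtime_min = battery_capacity_mwh / facility_load_mw * 60
print(f"Battery bridge time: {runtime_min:.0f} minutes")  # -> 15 minutes
```

Minutes of runtime - enough to ride out a generator start, nowhere near enough to smooth a daily load curve.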
None. There is no model that can output anything even remotely usable in that tiny amount of RAM, and certainly not with the few CPU cycles your VPS has to offer.
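Rough arithmetic behind that, with assumed figures:

```python
# Assumed figures: a "small" 7B-parameter model, aggressively quantized.
params = 7e9            # parameters
bits_per_param = 4      # 4-bit quantization
weights_gb = params * bits_per_param / 8 / 1e9
print(f"Weights alone: ~{weights_gb:.1f} GB")  # ~3.5 GB
```

That’s more than a typical 1-2 GB budget VPS has in total, before you count the OS, the KV cache, or anything else running on the box.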
Businesses (at least the larger ones) replace their hardware every few years anyway. They don’t care whether their new Optiplexes run Windows 10 or 11; most hardware bought since 2022 probably shipped with Windows 11 already, and probably everything since 2020 supports it. So there’s hardly a problem here. (BTW, I’m taking the management view here; I know it’s a pain to actually deploy, but that doesn’t matter to management.)
The new Intel chips have already addressed that, at least for notebook-class devices.
Realistically, there wasn’t much reason for Intel and AMD to be super power-efficient, simply because there wasn’t any competition for quite a while. It took Apple Silicon to show how powerful ARM can be and how easy the transition could be.
I find it extremely frustrating how much documentation gets its level of detail exactly wrong: extremely detailed in all the wrong places, and often lacking examples for the common use cases.
I learned a while ago that news articles are supposed to have increasing levels of detail from top to bottom: each paragraph adds a bit more context, but the general picture should already be in the first one. Hardly any documentation follows that pattern.