From what I understand OP’s images aren’t the same image, just very similar.
Short of them all using the same WordPress host or whatnot, that is.
That’s the thing, that’s common practice. It’s basically a given nowadays for shared web hosting to use one IP for a few dozen websites, or for a service to leverage a load/geo-balancer with 20 IPs into a CDN serving static assets for thousands of domains.
I’m thinking Ctrl+C quits and Ctrl+S is scroll lock; is that correct?
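On the Ctrl+S part: in most terminals Ctrl+S is XOFF (it pauses output, which looks like scroll lock) and Ctrl+Q is XON to resume. A quick way to check your own terminal’s settings from a shell (output format varies a little between `stty` versions):

```shell
# Show the terminal's flow-control characters; typically prints
# something like "start = ^Q; stop = ^S" among the other settings.
stty -a | grep -E 'stop|start'

# If Ctrl+S keeps "freezing" the terminal, software flow control can
# be disabled so the key is free for other uses:
stty -ixon
```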
With infrastructure the size of Twitter you can also blackhole their whole IP range.
Just one note, services the size of Twitter typically use cloud infrastructure so if you block that indiscriminately you risk blocking a lot of unrelated stuff.
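For completeness, blackholing an IP range on a Linux box is a one-liner; a minimal sketch, assuming root and using the 192.0.2.0/24 documentation range as a placeholder:

```shell
# Drop all traffic to the offending range at the routing layer;
# for large ranges this is cheaper than per-packet firewall rules.
ip route add blackhole 192.0.2.0/24

# Or with nftables, drop inbound traffic *from* the range instead:
nft add table inet filter
nft 'add chain inet filter input { type filter hook input priority 0 ; }'
nft add rule inet filter input ip saddr 192.0.2.0/24 drop
```

As the comment above notes, though, do check who else shares that range before you do this.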
Any PC can do that, it’s a BIOS/UEFI setting called “status after power off” (or “restore on AC power loss”) or something like that.
It stops working occasionally but they release fixed versions pretty fast.
Looking through the packages available for OpenWRT I would suggest Tcl, Lua, Erlang or Scheme (the latter is available through the Chicken interpreter). Try them out, see what you like.
I’ve actually tried using PHP on OpenWRT and other embedded systems before. It’s not exactly lightweight; it’s a memory and CPU hog. Keep in mind that the kind of machine that runs OpenWRT might only have 32 or even 16 MB of RAM to work with.
Also, PHP is not the first language that comes to mind for data processing and/or functional programming. You can do it, but the language doesn’t lend itself well to either.
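If RAM is the constraint, it’s easy to check what you’re working with before picking a language. A hedged sketch of commands to run on the OpenWRT box itself (exact package names vary between releases, so double-check against your feed):

```shell
# See how much RAM the router actually has free:
free -m            # or: cat /proc/meminfo

# List the scripting languages available in the package feeds:
opkg update
opkg list | grep -iE '^(lua|tcl|erlang|chicken)'

# Install one and compare its footprint to PHP's:
opkg install lua
opkg info lua | grep -i size
```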
They buy the hardware once then sell services based on it.
Because AI reversed the ratio.
It’s much worse. Generally speaking, projects in large corporations at least try to make sense and to have a decent chance of returning something of value. But with AI projects it’s like they all went insane: they disregard basic things, common sense, fundamental logic, etc.
They typically use internal personnel and are parsimonious about it, so you’re right about that.
Well, probably not just Nvidia, but the next likely beneficiaries are in the same range (Microsoft etc.).
The most successful ML in-house projects I’ve seen took at least three times as long as initially projected to become usable, and the results were underwhelming.
You have to keep in mind that most corporate ML undertakings are fundamentally flawed because they don’t use ML specialists. They use eager beavers who are enthusiastic about ML, entirely self-taught, will move on in a year, and want “AI” on their resume when they leave.
Meanwhile, any software architect worth their salt will diplomatically avoid giving you any clear estimate for anything having to do with ML – because it’s basically a black box full of hopes and dreams. They’ll happily give you estimates and build infrastructure around the box but refuse to touch the actual thing with a ten-foot pole.
There was talk at the beginning of 2023 about exploring a port of the engine to iOS, but AFAIK the conclusion was that it’s a significant undertaking and probably not worth it just for the EU market.
There’s no Firefox engine for iOS and Mozilla says it doesn’t make financial sense to port it.
If you go forward 12 months the AI bubble will have burst. If not sooner.
Most companies that bought into the hype are now (or will soon be) realizing it’s nowhere near the ROI they hoped for, that the projects they’ve been financing are not working out, and that forcing their people to use Copilot did not bring significant efficiency gains. More and more are also realizing they’ve been exchanging private and/or confidential data with Microsoft, and boy, there’s a shitstorm gathering on that front.
For now, but the EU will force Apple to allow non-WebKit engines on iOS. At which point only Google will have enough money to spare porting an entire engine to a small market.
You don’t have to install drivers or CUPS on client devices. Linux and Android support IPP out of the box. Just make sure your CUPS on the server is multicasting to the LAN.
You may need to install Avahi on the server if it’s not already there (that’s what does the actual multicasting). The printer(s) should then automagically appear in the print dialogs of apps on Linux clients and in the printer service on Android.
On Linux the printer may take a few seconds to appear after you turn it on, and may not appear while it’s off. On Android it shows up anyway, as long as the CUPS server is up.
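To sanity-check the setup described above, a few server-side commands; a hedged sketch assuming a systemd-based distro with CUPS and Avahi packaged normally (the printer name `myprinter` is a placeholder):

```shell
# On the server: tell CUPS to share printers on the LAN, and mark the
# printer itself as shared. A shared printer plus browsing is what
# Avahi picks up and advertises over mDNS.
cupsctl --share-printers
lpadmin -p myprinter -o printer-is-shared=true

# Make sure Avahi is running (it does the actual multicasting):
systemctl enable --now avahi-daemon

# From a Linux client: the printer should show up as an IPP service.
avahi-browse -rt _ipp._tcp
```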