This is the way!
Way simpler than using any GUI tool or somehow recreating the partition and manually copying the files.
~~“Batteries” is a rather broad category.
Are we talking hydroelectric batteries? Other potential or kinetic batteries? Chemical batteries (and which subcategory)? Or maybe hydrogen-based power storage?
Since there’s a dam on the list, I’d imagine “batteries” to be electrochemical power stores or hydrogen fuel cells, but the visualization remains lazy and perhaps borderline misinformative (depending on how nit-picky you are).
EDIT: The illustration might also use a simplified definition of a battery (something that stores power, excluding conversion between kinds of power) instead of the different battery technologies which exist or the full definition, which could lead one to argue that batteries aren’t renewable by definition.
Though, that might be reading too much into it.~~
Actually, never mind, I’m probably too tired to go on an adventure through the technicalities of the definition of “battery” and still make any real amount of sense without falling into edge cases.
I also misread “energy source” as “renewable”…
You don’t have to sanitize the weights, you have to sanitize the data you use to get the weights. Two very different things, and while I agree that sanitizing an LLM after training is close to impossible, sanitizing the data you give it is much, much easier.
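To make the distinction concrete, here’s a minimal sketch of sanitizing data *before* training rather than trying to scrub knowledge out of trained weights. The record format, license whitelist, and filter rules are made up for illustration:

```python
# Toy illustration: filter a corpus before training instead of
# trying to remove knowledge from trained weights afterwards.
# The record format and rules here are hypothetical.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitize(records):
    """Keep only records we are allowed to train on, with obvious PII masked."""
    for rec in records:
        if rec.get("license") not in {"cc0", "cc-by", "owned"}:
            continue                                  # drop data we have no rights to
        text = EMAIL.sub("[EMAIL]", rec["text"])      # mask email addresses
        yield {**rec, "text": text}

corpus = [
    {"text": "Contact me at jane@example.com", "license": "cc0"},
    {"text": "Scraped blog post", "license": "unknown"},
]
print(list(sanitize(corpus)))
# -> only the first record survives, with the address masked
```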
Oh no, it’s very difficult, especially on the scale of LLMs.
That said, the rest of us (those of us with any amount of respect for ourselves, our craft, and our fellow humans) have been sourcing our data carefully since way before NNs, such as asking the relevant authority for it (e.g. asking the postal service for images of handwritten addresses).
Is this slow and cumbersome? Oh yes. But it delays the need for over-restrictive laws, just like with RC craft before drones. And by extension, it allows those who could not source the material they needed through conventional means, or those small new startups with no idea what they were doing, to skirt the gray area and still get a small and hopefully usable dataset.
And now, someone had the grand idea to not only scour and scavenge the whole internet with wild abandon, but also boast about it. So now everyone gets punished.
Lastly: don’t get me wrong, laws are good (duh), but less restrictive or incomplete laws can be nice as long as everyone respects each other. I’m excited to see what the future brings in this regard, but I hate the idea that those who facilitated this change are likely the only ones who’ll go free.
So now LLM makers actually have to sanitize their datasets? The horror…
There has to be a law against such heretical actions somewhere! Even if it’s .00, this computer is an affront to order! I propose we burn it alongside those frivolous computers who think they can simply name themselves .0 or .255!
Huh, I’m not sure they are comparable.
Didn’t USB-A and USB-B use a master/slave relationship in which the male end would (generally) always be the slave, whereas USB-C uses negotiation to decide the master and slave roles regardless of connector gender?
Please do correct me if I’m wrong. Also, do we say “agent” now instead of “slave”, or what is the new term?
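For the curious, here’s a toy model of how two USB-C dual-role ports could settle their roles. Real hardware does this with Rp/Rd pull resistors on the CC pins and strict timing rules; this sketch only illustrates the “agreement and discussion” idea and is not the spec:

```python
# Toy model of USB-C role resolution between two dual-role ports (DRPs).
# Heavily simplified: real ports advertise source (Rp) or sink (Rd) on the
# CC pins with defined toggle timing; this just captures the basic idea.
import random

def resolve_roles():
    """Each DRP toggles between advertising source and sink until the two
    ports disagree, at which point the roles lock in."""
    while True:
        a = random.choice(["source", "sink"])   # port A's current advertisement
        b = random.choice(["source", "sink"])   # port B's current advertisement
        if a != b:
            # The source becomes the "host" side, the sink the "device" side;
            # USB Power Delivery can still swap roles afterwards.
            return {"A": a, "B": b}

print(resolve_roles())  # e.g. {'A': 'source', 'B': 'sink'} --
                        # connector gender never enters the decision
```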
And then links to a similar-sounding but ultimately totally unrelated site.
I’m pretty sure it’s more like
Junior dev: Got all the nice add-ons, RGB lighting, only uses dark theme, got all the stickers, works from either a café or mom’s basement.
vs. Senior dev: Works on company standard-issue hardware, barely customizes visuals (but has a script that makes a cup of coffee on the shared machine in exactly 2 minutes and 30 seconds), works in a shared office, has an old rolling cabinet with unknown artifacts last touched 10+ years ago.
Obviously this is an overgeneralization and not a catch-all, you might even say that it’s “programmer humor”.
AI: “Can I copy your work?”
Phil: “Just don’t make it obvious.”
AI:
Why should it be enough? We already have multiple Linux communities across different instances, decentralized and with alternative modteams; should we just merge them all into one conglomerate community with a single point of failure?
Agreed for induction, but I’d much rather spend one or two minutes more cleaning the knobs than almost cook my finger on a conventional stove’s 60-90 °C touch surface just to change the plate from step 7 to 4 for 10 FUKKEN SECONDS! OUCH!
Having to restart it 2-3 times during cooking because it got confused (pan moved slightly to the side) is also rather annoying.
Edit & tl;dr: Touch works decently on induction, just please keep it far away from any conventional stoves.
Oh right, I do actually have track, volume, and “take call” on the wheel. I think I did use them once, but it just never stuck since they felt awkward to use.
I’m more concerned about fog lights, hazard lights, and window heating, as the law usually requires you to be able to use them when conditions call for it.
Same, I’ve got an Opel Corsa from 2016, so it’s pretty much brand new.
The only things on the wheel are the cruise control, wipers, and standard lights.
For everything else required for driving, such as fog lights, hazard lights, front and rear window heating, AC, radio, and of course the gear stick, I’ll need to take a hand off the wheel.
Luckily for me, the touchscreen in the middle only handles less important things like navigation and external music sources.
Neural nets are one technology under the umbrella term “machine learning”. Deep learning also falls under machine learning, just more specialized towards large NN models.
You can absolutely train NNs on your own machine; after all, that’s what I did for my master’s before ChatGPT and all that, defining the layers myself, and it’s what I do right now with CNNs. That said, LLMs tend to become so large that anyone without a supercomputer can at most fine-tune them.
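For the passersby, “defining the layers myself” looks roughly like this in PyTorch. The layer sizes are arbitrary and assume 28x28 grayscale inputs (MNIST-like); a model this size trains comfortably on an ordinary machine:

```python
# A small CNN you can realistically train on your own hardware.
# Layer sizes are arbitrary; assumes 28x28 grayscale inputs.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 -> 16 channels
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
dummy = torch.randn(8, 1, 28, 28)   # a fake batch of 8 images
print(model(dummy).shape)           # torch.Size([8, 10])
```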
“Decision tree stuff” would be regular AI, which can be turned into ML by adding a “learning method” like a KNN, neural net, genetic algorithm, etc., which isn’t much more than a more complex decision tree where decision thresholds (weights) were automatically estimated by analysis of a dataset. More complex learning methods are even capable of fine-tuning themselves during operation (LLMs, KNN, etc.), as you stated.
One big difference between NN-based methods and other learning methods is that NNs like to add non-weighted layers which, instead of making decisions, transform the data to allow for a more diverse decision process.
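To make the “thresholds estimated from a dataset” point concrete, here’s scikit-learn fitting a small decision tree. The dataset is just a stock example; the split thresholds in the printout are learned from the data, not written by hand:

```python
# The split thresholds below are not hand-coded; they are estimated
# automatically from the training data, which is what turns a plain
# decision tree into a *learned* model.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))   # prints learned rules like "feature_3 <= 0.80"
```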
EDIT: Some corrections, now that I’m fully awake.
While very similar in structure and function, the NN is indeed no decision tree. It functions much the same as one, as is a basic requirement for most types of AI, but whereas every node in a decision tree has unique branches with their own unique nodes, each of an NN’s nodes is connected to every node of the following layer. This is also one of the strong points of an NN, as something that seemed outrageous to it a moment ago might have become much more plausible when looking at it from a different point of view, such as after a transformative layer.
Also, other learning methods usually don’t have layers, or, if one were to define “layer” as “one-shot decision process”, they pretty much only have one or two layers. In contrast, the NN can theoretically have an infinite number of layers, allowing for pretty much infinite complexity as long as the input data is not abstracted beyond reason.
Lastly, NNs don’t back-propagate by default, though they make it easy to enable such features given enough processing power and optionally enough bandwidth (in the case of ChatGPT). LLMs are a little different, as I’m decently sure they implement back-propagation as part of the technology’s definition, just like KNN.
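To illustrate the interconnected-layers point, here’s a bare-bones forward pass in NumPy. The weights are random placeholders; back-propagation would be the extra training step that adjusts them:

```python
# Minimal fully-connected forward pass. Each weight matrix connects *all*
# nodes of one layer to *all* nodes of the next, which is the structural
# difference from a decision tree's unique branches. Weights are random
# here; training (back-propagation) would tune them against a dataset.
import numpy as np

rng = np.random.default_rng(0)
layers = [4, 8, 8, 3]                      # input -> two hidden -> output
weights = [rng.normal(size=(m, n)) for m, n in zip(layers, layers[1:])]

def forward(x):
    for w in weights:
        x = np.tanh(x @ w)                 # transform, then pass everything on
    return x

print(forward(rng.normal(size=4)))         # 3 output activations
```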
This became a little longer than I had hoped; it’s just a fascinating topic. I hope you don’t mind that I went into more detail than necessary, it was mostly for the random passersby.
AI is a very broad term, ranging from physical AI (material and properties of a robotic grabbing tool), to classical AI (as seen in many games, or in a robotic arm calculating the path from current position to target position), to MLAI (LLMs, neural nets in general, KNN, etc.).
I guess it’s much the same as asking “are vehicles bad?”. I don’t know, are we talking horse carriages? Cars? Planes? Electric scooters? Skateboards?
Going back to your question, AI in general is not bad, though LLMs have become too popular too quickly and have thus ended up being misunderstood and misused. So you can indeed say that LLMs are bad, at least when not used for their intended purposes.
It appears that, with the increase in popularity of machine learning, the percentage of people who properly source and sanitize their training data has steeply decreased.
As you stated, an MLAI can only be as good as the data it was trained on, and is usually way worse. The popularity and application of MLAIs built with questionable practices scare me; then again, at least their fuckups will keep me employed and likely busier than ever.
Huh? Isn’t this about Microsoft swapping out a button with a well-established use, in order to take advantage of muscle memory and the unobservant?
I don’t think it’s much to do with people opposing technological advancement, but rather with opposing another company wanting to make fools of them.
You thought journalism had reached rock bottom already? Watch this: