LLMs can reason about information. It’s fine to call them intelligent systems.
It’s reasonable to refer to unsupervised learning as “learning on its own”.
An LLM trained exclusively on Facebook would be hilarious. It’d be like the Monty Python argument skit.
My hypothesis is that wealth causes brain damage.
It’s an obvious overreach.
An AI-generated image is essentially the solution to a math problem. Say such images are, or become, illegal. Is it then also illegal to possess the input to that equation? The input can be used to perfectly replicate the illegal image, after all. What if I change a word in the prompt so that the subject of the generated image becomes clothed? Is that then suddenly legal?
I understand the concern, but it’s just incredibly messy to legislate what amounts to thought crimes.
Maybe we could do something to discourage distribution, but the law would have to be very carefully worded to prevent abuse.
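To make the “solution to a math problem” point concrete, here is a toy sketch in Python. A hash stands in for the (deterministic) image generator, and all names are hypothetical; the point is only that with fixed weights, prompt, and seed, the output is a pure function of its inputs, and changing one word of the prompt yields a completely different output.

```python
import hashlib

def generate_image(model_weights: bytes, prompt: str, seed: int) -> bytes:
    # Toy stand-in for a diffusion model with deterministic sampling:
    # the "image" is a pure function of weights, prompt, and seed.
    h = hashlib.sha256(model_weights + prompt.encode() + seed.to_bytes(8, "big"))
    return h.digest()

weights = b"frozen-model-v1"  # hypothetical frozen model

a = generate_image(weights, "a person at the beach", seed=42)
b = generate_image(weights, "a person at the beach", seed=42)
c = generate_image(weights, "a clothed person at the beach", seed=42)

assert a == b  # same inputs -> bit-identical "image"
assert a != c  # one word changed -> entirely different output
```

So the prompt-plus-seed really is a recipe that reproduces the image exactly, which is what makes legislating possession of the inputs so messy.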
Not so. There are plenty of use cases that already have better solutions.
The point being that Denmark also has regulations…
I live in Copenhagen, and there are new developments going up every day.
That’s absolutely not true where I live, so maybe be careful with the generalizations.
https://en.m.wikipedia.org/wiki/Dodge_v._Ford_Motor_Co.
Among non-experts, conventional wisdom holds that corporate law requires boards of directors to maximize shareholder wealth. This common but mistaken belief is almost invariably supported by reference to the Michigan Supreme Court’s 1919 opinion in Dodge v. Ford Motor Co.
Lol
I can tell GPT to do a specific thing in a given context and it will do so intelligently. I can then provide additional context that implicitly changes the requirements and GPT will pick up on that and make the specific changes needed.
It can do this even if I’m trying to solve a novel problem.
GPT can write and edit code that works. It simply can’t be true that it’s merely pattern-matching language with no semantic understanding.
To fix your analogy: the Spanish speaker will happily sing along. They may notice the occasional odd turn of phrase, but the song as a whole is perfectly understandable.
Edit: GPT can literally write songs that make sense. Even in Spanish. A metaphor aiming to elucidate a deficiency probably shouldn’t use an example that the system is actually quite proficient at.
Ternary expressions aren’t switches, though.
Why?
It’s perfectly readable.
https://en.m.wikipedia.org/wiki/Supernormal_stimulus