Sure, but you can get that with something more long-form, too; it’s not exclusive to Twitter/microblogging.
Sorry about that.
I would argue that the format incentivizes short quips and discussions that sacrifice nuance for brevity, so yes, it’s “bad” (to use their term) to use Twitter even if Musk weren’t turning it into Truth Social.
Well, arguably the microblogging format does have some intrinsic disadvantages.
Are you speaking legally or morally when you say someone “ought” to do something?
You most certainly can. The discussion about whether copyright applies to the output is nuanced but certainly valid, and notably separate from whether copyright allows copyright holders to restrict who or what gets trained on their work after it’s released for general consumption.
The article is literally about someone suing to prevent their art from being used for training. That’s the topic at hand.
Are you confused, or are you trying to shoehorn a different but related discussion into this one?
I was under the impression we were talking about using copyright to prevent a work from being used to train a generative model. There’s nothing in copyright that says anything about training anything. I’m not even convinced there should be.
There’s nothing in copyright law that covers this scenario, so anyone that says it’s “absolutely” one way or the other is telling you an opinion, not a fact.
I subscribed to releases! Good work so far!
Yeah, I read that, but I don’t have the knowledge to say “what a rookie mistake” versus “in hindsight, that was a bad idea”. I take it it’s the former?
I’m not a cybersecurity expert. Did they make a foolish decision that would warrant a lack of trust, or were they just unlucky?
I can’t say I fully understand how LLMs work (can anyone??), but I know a little, and your comment doesn’t seem to reflect how they use training data. They don’t use their training data to “memorize” sentences; they use it as an example (among billions) of how language works. It’s still just an analogy, but it really is pretty close to LLMs “learning” a language by seeing it used over and over. Keeping in mind that we’re still in an analogy: it isn’t considered “derivative” when someone learns a language from examples of that language and then goes on to write a poem in that language.
Copyright doesn’t even apply, except perhaps in extremely fringe cases. If a journalist put their article up online for general consumption, then it doesn’t violate copyright to use that work to train an LLM on what the language looks like when used properly. There is no aspect of copyright law that covers this, but I don’t see why it would be any different from the human equivalent. Would you really back up the NYT if they claimed that using their articles to learn English was a violation of their copyright? Do people need to attribute where they learned a new word or strengthened their understanding of a language when they answer a question using that word? Does that even make sense?
Here is a link to a high level primer to help understand how LLMs work: https://www.understandingai.org/p/large-language-models-explained-with
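And just to make the “patterns, not memorized sentences” idea a bit more concrete, here’s a deliberately tiny Python sketch (a bigram word counter over a made-up corpus, nowhere close to a real LLM) of how training can boil text down to usage statistics rather than stored quotes:

```python
# Toy illustration, NOT a real LLM: a bigram "language model" that only
# counts which word tends to follow which. After "training", what it holds
# is usage statistics, not the original sentences.
from collections import Counter, defaultdict
import random

corpus = [  # made-up stand-in for training text
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased the dog",
]

# "Training": tally how often each word follows each other word.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def generate(start, length=5):
    """Sample a short continuation from the learned statistics."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug" -- a remix, not a stored quote
```

A real LLM replaces the counting with a huge neural network trained on billions of examples, but the takeaway is the same: what it retains is how language tends to be used, not a library of the original articles (with the fringe-case caveat above that specific passages can sometimes get regurgitated).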
You can disable it to install stuff if you want.
Check out VanillaOS. I think it’s pretty neat. Their webpage doesn’t really get into the benefits as much as I think it should, but a very quick summary is that it leverages distrobox and a custom package manager to let you seamlessly install and run packages from other distros. It’s also kind of an immutable OS (but not really). It lets you pick which types of apps you want during the install (Snap, Flatpak, AppImage, etc.).
I am not super in the loop about why people are so against snaps, but I don’t like the centralized nature of them, and if that’s also the general concern, then flatpak should be fine, since it’s decentralized.
I saw a couple youtube videos about VanillaOS; I could certainly find you one of them if you want to know more.
This is probably right. LLMs can be used as a replacement for people (well, almost), or they can be used as a tool for people. Where that line falls will be crucial.
I also don’t think it’s the same kind of """AI""" as the kind that would be used to recreate a person’s likeness. That’s almost certainly going to be covered under copyright. (I bring this up because the article mentions it.)
And even if there somehow is no line, and any script written even partially by an AI cannot be copyrighted (unlikely, I think), the resulting film is still eligible for copyright protection.
I’m not sure your second point is as strong as you believe it to be. Do you have a specific example in mind? For most vehicle problems that would require an emergency responder, I think a tow service can easily deal with the car whether or not a human is involved. It’s not as though the problem is more easily solved just because a human is there. For minor-to-moderate accidents that just require a police report, things might get messy, but that’s an issue with the law, not necessarily something inherently wrong with the concept of self-driving vehicles.
Also, your first point is on shaky ground, I think. I don’t know why the metric is accidents with fatalities, but since that’s what you used, what do you think having fewer humans involved does to the chance of killing a human?
I’m all for numbers being crunched, and to be clear (as you were, I think), the numbers are the real deciding metric here, not thought experiments.
And I think it’s 100% true that autonomous transportation doesn’t have to be perfect, just better than humans. Not that you disagree with this, but it is probably what people are thinking when they say “humans do this too”.
How sure are you? If licenses were such valuable troves of information, surely at least one person would have thought of using a small hidden camera, right?
You don’t think there is a camera aimed at the register?
Oh. I was thinking private like a password.
Well, that’s a good point, but I still think there are better services than Twitter/microblogging for that. Like our old friend RSS.