No no you see once in Lemmy you can engage in the communist left vs liberal center-left battles. It’s turtles all the way down.
Ooph. I guess we can get really dismayed at this, or maybe we should just accept that, given his infamy, it’s only natural he’s able to monetize it one way or another. So many influencers produce content that is, on the face of it, of very little value. Granted, not many are making a career out of promoting the most toxic aspects of (so-called) masculinity. Even Jordan Peterson has redeeming qualities compared to this. Of course, that’s a low bar. A low bar that seemingly many men are happy to clear, and to fork over money for in the hopes of bettering themselves.
F
Knowledge and unconditional love should do the trick 🤞 I have found that the men most prone to this are men with a chronic lack of love and self-esteem.
I am not sure that technology alone is to blame. But I have been thinking for a while now that recommendation algorithms should be open, scrutinized, and regulated even.
I think you’re spot on.
Omg 800,000 people (presumably mostly young men) signed up for that bullshit? Have we failed so many that they will seek help with the top douchebag himself?
Unfortunately, unless you are a tiny niche community that is never targeted by spam or idiots (and how common is that, really?), moderators are a necessary evil. You probably don’t hate moderators. You probably hate bad/aggressive/biased/etc. moderators. Or maybe sometimes you are the problem, I don’t know. It is not a problem with an easy solution. Large forums with no moderation usually become unbearable to most people quickly. And then moderators become, in turn, unbearable to some people.
Maybe a trusted AI can do a better job at this - like give it the community rules and ask it to enforce them objectively, transparently, and dispassionately, unless a certain number of participants complain, in which case it can reverse its decision and learn from that.
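The idea above could be sketched roughly like this: an automated moderator that removes posts flagged against the community rules, logs its reasoning, and reverses itself once enough participants complain. This is only a toy illustration of the mechanism; the keyword check stands in for an actual AI classifier, and every name and threshold here is hypothetical.

```python
# Toy sketch of a rules-based auto-moderator with community override.
# The keyword matching is a stand-in for a real AI model judging rule
# violations; the class names, fields, and threshold are all made up.

REVERSAL_THRESHOLD = 3  # complaints needed to overturn a removal


class AutoModerator:
    def __init__(self, rules):
        # rules: mapping of rule name -> list of banned keywords
        self.rules = rules
        self.removed = {}     # post_id -> (post text, rule violated)
        self.complaints = {}  # post_id -> number of complaints so far

    def review(self, post_id, text):
        """Remove the post if it matches a rule; return a decision log."""
        for rule, keywords in self.rules.items():
            if any(k in text.lower() for k in keywords):
                self.removed[post_id] = (text, rule)
                return f"removed: violates '{rule}'"
        return "approved"

    def complain(self, post_id):
        """Participants dispute a removal; reverse it past the threshold."""
        if post_id not in self.removed:
            return "nothing to dispute"
        self.complaints[post_id] = self.complaints.get(post_id, 0) + 1
        if self.complaints[post_id] >= REVERSAL_THRESHOLD:
            del self.removed[post_id]
            return "decision reversed by community"
        return "complaint recorded"


mod = AutoModerator({"no spam": ["buy now", "free crypto"]})
decision = mod.review(1, "Check out this FREE CRYPTO deal")
# decision == "removed: violates 'no spam'"; three calls to
# mod.complain(1) would then reverse the removal
```

The transparency part is the decision log each call returns; the “learn from that” part would need the reversals fed back into whatever model replaces the keyword check.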
Do you use the integrated AI in new versions of Excel or do you ask ChatGPT or some other AI to write it out for you?
Granted, our tendency towards anthropomorphism is near ubiquitous. But it would be disingenuous to claim that it does not play out in very specific and very important ways in how we speak and think about LLMs, given that they are capable of producing very convincing imitations of human behavior. And as such also produce a very convincing impression of agency. As if they actually do decide things. Very much unlike dice.
The AI did not “decide” anything. It has no will, and no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust in the output of such models. Let’s stop supporting this weird anthropomorphizing of LLMs. In fact, we should probably become much more discerning in using the term “AI”, because it alludes to a general intelligence akin to human intelligence, with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.
Nice but paywalled for me
It was a more optimistic time, perhaps a more naive time depending on your perspective. A time when most people felt that crowds were wise and the truth would surface spontaneously. When the internet would help us spread knowledge and democracy and none of the bad things. When conspiracy theories, disinformation, outright hatred and bigotry were considered fringe phenomena that could be kept at bay. When people would point to 4chan as the worst the internet had to offer, if they even knew about it. When politicians and their voters could argue passionately without necessarily feeling that the other side were “extremists” or “fascists” who would literally “destroy our country” if they won an election.
The world is cracking at the seams lately, and this leads more people to wanna put the brakes on the internet. Liberals especially, who are witnessing with horror the surge of the far right and attributing it in part to the internet’s unmatched ability to amplify anything: any voice, any shitty little take, no matter how extreme, misinformed, or bigoted. Most likely misinformed and bigoted with someone like Musk at the helm, the thinking goes. In short, liberals have shifted from the exuberant naïveté of the past into protection mode, trying to stem the tide of right-wing populism and perhaps, ultimately, fascism. And so they come off as overbearing censors to anyone who doesn’t understand why they do what they do, or who is still optimistic that a lack of censorship will only lead to good things.
Freedom only works with a social contract in place: some consensus, some ground truths about the world that we can all agree on, or that a solid, relatively stable majority at least can agree on. When that starts to break down, the freedom to say and do whatever you want online may in fact bring the downfall sooner by stoking the fires of division. Of course, the likes of Musk probably do think they are fighting the good fight and championing free speech. But he seems to be shifting increasingly to the right politically, fighting less for some ideal of freedom and democracy than for his presumed right to shape the world in his image and grow his business empire unchecked. The likes of him, businessmen with nearly unchecked power and ultimately more concern for their business and personal aspirations than for democracy, are probably going to become a bigger threat to our freedoms than the government of Australia. Maybe. Probably.
I personally do think that liberals have often gone overboard in their speech policing zeal, but on the other hand understand why they do what they do. Policing the internet seems like a much easier alternative than actually addressing all the major, sometimes seemingly existential socioeconomic challenges liberal democracies face today. The latter would deprive right wing populists and extremists of much of their influence, but is of course way, way harder than policing speech.
Well, it is one thing to automate a repetitive task in your job, and quite another to eliminate entire professions. The latter has serious ramifications and shouldn’t be taken lightly. What you call “menial bullshit” is the entire livelihood and profession of quite a few people, taxi drivers for one. And the means to make some extra cash for others. Also a stepping stone for immigrants who may not have the skills or means to get better jobs, but who can thus make a living legally. And sometimes the refuge of white-collar workers down on their luck. What are all these people going to do when taxi driving is relegated to robots? Will there be (less menial) alternatives? Will these offer a livable wage? Or will such people end up long-term unemployed? Will the state have enough cash to support them and help them upskill, or whatever else is needed to survive and prosper?
A technological utopia is a promise from the 1950s. Hasn’t been realized yet. Isn’t on the horizon anytime soon. Careful that in dreaming up utopias we don’t build dystopias.
Though I am not a lawyer by training, I have been involved in such debates personally and professionally for many years. This post is unfortunately misguided. Copyright law makes concessions for education and creativity, including criticism and satire, because we recognize the value of such activities for human development. Debates over the excesses of copyright in the digital age were specifically about humans finding the application of copyright to the internet and all things digital too restrictive for their educational, creative, and yes, also their entertainment needs. So any anti-copyright arguments back then were specifically in the spirit of protecting the average person and public-interest non-profit institutions, such as digital archives and libraries, from big copyright owners who would sue and lobby for total control over every file in their catalogue, sometimes severely limiting human potential in the process.
AI’s ingesting of text and other formats is “learning” in name only, a term borrowed by computer scientists to describe a purely computational process. It does not hold the same value socially or morally as the learning that humans require to function and progress individually and collectively.
AI is not a person (unless we get definitive proof of a conscious AI, or are willing to grant personhood to every implementation of a statistical model). Also, AI is not vital to human development, and as such one could argue it does not need special protections or special treatment to flourish. AI is a product, even more clearly so when it is proprietary and sold as a service.
Unlike past debates over copyright, this is not about protecting the little guy or organizations with a social mission from big corporate interests. It is the opposite. It is about big corporate interests turning human knowledge and creativity into a product they can then use to sell services to - and often to replace in their jobs - the very humans whose content they have ingested.
See, the tables are now turned, and it is time to realize that copyright law, for all its faults, has never been only or primarily about protecting large copyright holders. It is also about protecting your average Joe from unauthorized uses of their work, specifically uses that may cause damage to the copyright owner or to society at large. While a very imperfect mechanism, it is there for a reason, and its application need not be the end of AI. There is a mechanism for individual copyright owners to grant rights for specific uses: it’s called licensing, and in my view it should be mandatory, at least for the development of proprietary LLMs.
TL;DR: AI is not human, it is a product, one that may augment some tasks productively, but is also often aimed at replacing humans in their jobs - this makes all the difference in how we should balance rights and protections in law.
Be careful with that logic, these are jobs forever lost to robots. They will eventually come for your job or the job of someone you know. Increasingly the question won’t be whether robots can do X better than humans, but whether they should.
Interesting. And shady. Though not about recording conversations.
A marketing agency claiming they do something of the sort isn’t proof that conversations are being monitored en masse. Security researchers can test for this, and probably have, without finding clear, verifiable evidence; otherwise we would have known. Also, this stuff can be blocked at the OS level, and I find it hard to imagine (especially without solid proof) that Google or Apple would jeopardize their reputations to this extent by enabling such unauthorized listening in on users’ conversations.
Of course it’s good to keep watching this space but we shouldn’t jump to conclusions.
Lusted after one as a teenager but could not afford one. It was a bit of a luxury item where I grew up.
The real question here is why the researcher “librarian” didn’t even attempt to anonymize the dataset before making it available. Full anonymization isn’t a trivial task, but at least removing unique identifiers or replacing them with randomly generated ones would be good practice.
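The basic practice described above can be sketched in a few lines: replace each unique identifier with a randomly generated pseudonym, keeping the mapping consistent so the same user maps to the same pseudonym throughout the dataset. The field names here are hypothetical, and, as noted, this is pseudonymization rather than full anonymization; rare combinations of attributes can still re-identify people.

```python
# Minimal pseudonymization sketch: swap unique identifiers for random
# ones while preserving within-dataset consistency. Field names are
# hypothetical; this is NOT full anonymization.
import uuid


def pseudonymize(records, id_fields=("user_id", "username")):
    mapping = {}  # original identifier -> random pseudonym
    out = []
    for rec in records:
        clean = dict(rec)  # copy so the original records are untouched
        for field in id_fields:
            if field in clean:
                original = clean[field]
                if original not in mapping:
                    # uuid4 gives an unpredictable, non-reversible stand-in
                    mapping[original] = uuid.uuid4().hex[:12]
                clean[field] = mapping[original]
        out.append(clean)
    return out


data = [
    {"user_id": "alice01", "post": "hello"},
    {"user_id": "alice01", "post": "again"},
    {"user_id": "bob99", "post": "hi"},
]
clean = pseudonymize(data)
# both "alice01" rows now share one random pseudonym; no real IDs remain
```

A random mapping like this (rather than, say, an unsalted hash of the username) matters because hashes of known identifiers can be trivially reversed by brute force.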