The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

  • 0 Posts
  • 282 Comments
Joined 8 months ago
Cake day: January 12th, 2024



  • The backlash to this is going to be fun.

    In some cases it’s already happening - since the bubble forces AI-invested corporations to shove it in everywhere. Cue Microsoft Recall, and the outrage against it.

    It has virtually no non-fraud real world applications that don’t reflect the underlying uselessness of the activity it can do.

    It is not completely useless but it’s oversold as fuck. Like selling you a bicycle with the claim that you can go to the Moon with it, plus a “trust me = be gullible, eventually bikes will reach Mars!” A bike is still useful, even if they’re building a scam around it.

    Here are three practical examples:

    1. I use ChatGPT as a translation aid. Mostly to list potential translations for a specific word, or as a conjugation/declension table. Also as a second layer of spell-proofing. I can’t use it to translate full texts without it shitting its own virtual pants - it inserts extraneous info, repeats sentences, removes key details from the text, butchers the tone, etc.
    2. I was looking for papers concerning a very specific topic, and got a huge pile (~150) of them. Too much text to read on my own. So I used the titles to pre-select a few of them into a “must check” pile, then asked Gemini to provide three-paragraph summaries for the rest. A few of them were useful; without Gemini I’d probably have missed them.
    3. [Note: reported use.] I’ve seen programmers claiming that they do something similar to #1, with code instead. Basically asking Copilot how a function works, or to write extremely simple code (if you ask it to generate complex code it starts lying/assuming/making up non-existent libraries).

    None of those activities is inherently useless, but they share common ground - they don’t require you to trust the bot’s output at all. It’s either things you wouldn’t do otherwise (#2) or things where you can reliably say “yup, that’s bullshit” (#1, #3).
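    For what it’s worth, the deterministic half of workflow #2 - the title-based pre-selection; the summarising itself is the part that needs the bot - can be sketched in a few lines of Python. The titles and keywords below are made up for illustration:

```python
# Sketch of the pre-selection step from example #2: split a pile of papers
# into a "must check" pile (title matches your topic keywords) and a "rest"
# pile (the one you'd feed to a summariser). Titles/keywords are made up.

def triage(titles, keywords):
    must_check, rest = [], []
    for title in titles:
        if any(kw in title.lower() for kw in keywords):
            must_check.append(title)
        else:
            rest.append(title)
    return must_check, rest

papers = [
    "Vowel reduction in Old Norse dialects",
    "A survey of GPU scheduling",
    "Old Norse declension tables revisited",
]
must, rest = triage(papers, ["old norse", "declension"])
print(must)  # the two Old Norse titles
print(rest)  # the GPU paper, left for the summariser
```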



  • It’s interesting how interconnected those points are.

    Generative A"I" drives GPU prices up. NVidia now cares more about it than about graphics. AMD feels no pressure to improve GPUs.

    Stagnant hardware means that game studios, which used to rely on assumptions like “our game currently runs like shit, but future hardware will handle it”, get wrecked. And gen A"I" hits them directly due to FOMO plus corporations buying trends without understanding how the underlying tech works, wasting talent by firing people in the hope that A"I" can replace them.

    Large game companies are also suffering due to their investment in the mobile market. A good example is Ishihara; sure, Nintendo simply ignored his views on phones replacing consoles, but how many game company CEOs thought the same and rolled with it?

    I’m predicting that everything will go down once it becomes common knowledge that LLMs and diffusion models are 20% actual usage, 80% bubble.



  • You know, the ban here was enlightening for me, about certain people from my social circles. Four examples:

    1. Resumed their Twitter shitposting on Bluesky. Different URL, no mention of Twitter.
    2. Cheered Twitter being gone, as they were only using it for their contacts but felt like shit doing it. Criticised how Moraes did it, but not the goal itself.
    3. LARPed as anti-fascist, but screeched nonstop on Bluesky about Twitter being gone, as if the world revolved around their own convenience.
    4. Left microblogging altogether.

    But I digress (as this has barely anything to do with the OP). People like Musk are bound to “creatively reinterpret” words: in one situation orange is yellow, in another it’s red, or both, or neither. Sometimes it isn’t “ackshyually” related to red or yellow at all, it’s “inverted blue”. And suckers fall for it. That’s what Musk is doing with fascism.


  • My prediction is different: I think that, in the long term, banning targeted ads will have almost no impact on the viability of ad-supported services, or on the amount of ads per page.

    Advertisement is an arms race; everyone needs to use the most efficient technique available, not just to increase their sales but to prevent them from decreasing - as your competitor using that technique will get the sales instead.

    But once a certain technique is banned, you aren’t the only one who can’t use it; your competitors can’t either.

    And the price of the ad slot is intrinsically tied to that. When targeted ads were introduced, advertisers became less willing to pay for non-targeted ads; decreased demand led to lower prices, and thus lower revenue for the people offering those ad slots on their pages, forcing them to offer slots with targeted advertisement instead. Banning targeted ads will simply revert this process, placing the market value of non-targeted ad slots back where it used to be.
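    To make the reversion concrete, here’s a toy model with made-up numbers. It assumes the advertisers’ total budget is fixed and only its split between targeted and non-targeted slots changes:

```python
# Toy model, made-up numbers: a fixed ad budget is split between targeted
# and non-targeted slots. The share captured by targeted ads is demand
# (and thus price) drained from non-targeted slots; a ban returns it.

TOTAL_BUDGET = 1_000_000  # hypothetical total ad spend
SLOTS = 10_000            # non-targeted ad slots on offer

def slot_price(demand_share):
    # average price a non-targeted slot fetches for a given share of demand
    return TOTAL_BUDGET * demand_share / SLOTS

before_targeting = slot_price(1.0)  # all demand on non-targeted slots
with_targeting = slot_price(0.3)    # targeted ads capture 70% of budgets
after_ban = slot_price(1.0)         # the ban reverts the split

print(before_targeting, with_targeting, after_ban)  # 100.0 30.0 100.0
```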



  • The difference is sort of like the difference between a qualified ESL teacher and a native English speaker […]

    This example is perfect - native teachers (regardless of the language being taught) are often clueless about which parts of their language are hard to master, because they simply take it for granted. Just like zoomers with tech - they take for granted that there’s some “app” that you download, without any further thought about where it’s stored or how it’s programmed or anything like that.


  • As others highlighted, this is not surprising, given that Gen Z uses phones a lot more than computers, and writing on one is completely different from writing on the other.

    [Discussion from multiple comments ITT] It’s also damn slower to write on a phone screen, simply because it’s smaller - you need a bit more precision to hit the keys, and there’s no room to use all your fingers (unlike on a physical keyboard).

    Swiping helps, but it brings its own problems: the keyboard application needs to “guess” what you’re typing, and correcting mistakes consumes time; you need to look at the word being “guessed” instead of the keyboard or the text being written, so your accuracy goes down (increasing the odds of wrong “guesses”); and eventually you need to tap-type a few words anyway, so you’re basically required to master two input methods instead of one to get any semblance of speed.



  • This is bad on three levels. Don’t use AI:

    1. to output info, decisions or advice that nobody will check. Will anyone actually check if the AI is accurate at identifying why the kids aren’t learning? (No; it’s a teacherless class.)
    2. where its outcome might have a strong impact on human lives. Dunno about you guys, but teens’ education looks kind of like a big deal. /s
    3. where nobody will take responsibility for it. “I did nothing, the AI did it, not my fault.” The school environment is all about blaming someone else - now it’s something else.

    In addition to that, I dug up some info on the school. Comparing this map with this one, it seems to me that the school’s target students are from one of the poorest areas of London, the Tower Hamlets borough. “Yay”, using poor people as guinea pigs /s


  • And is it ethical to keep using it?

    No. And I’ll go further: if you still use it, you’re at the very least an entitled arsehole who ranks their own dopamine above the well-being of everyone else. And you deserve to be treated as such.

    But I’ve had some of the most interesting conversations of my life on there, both randomly, ambling about, and solicited, for stories:

    They’re weighing the emotional investment in the platform, caused by their earlier interactions with it, as if it mattered when deciding future usage. It does not; that’s a fallacy (read: stupid shit) called “sunk cost”.

    fast realised that I would never get 70,000 followers on there like I had on Twitter. It wasn’t that I wanted the attention per se, just that my gang wasn’t varied or noisy enough

    Refer to what I said about the title.

    Stopped reading here. This article is a waste of my time.


  • Yup, 100% this. And there’s a crowd of muppets arguing “ackshyually wut u’re definishun of unrurrstandin/intellijanse?” or “but hyumans do…”, but come on - that’s bullshit, and more often than not sealioning.

    Don’t get me wrong - model-based data processing is still useful in quite a few situations. But they’re only a fraction of what big tech pretends that LLMs are useful for.


  • It goes without saying that this shit doesn’t really understand what it’s outputting; it’s stringing words together into a grammatically coherent whole, with barely any regard for semantics (meaning).

    It should not be trying to provide you info directly; it should be showing you where to find it. For example, linking this or this*.

    Adding insult to injury, in this case it isn’t even providing you info; it’s bossing you around. Typical Microsoft “don’t inform the user, tell it [yes, “it”] what it should be doing” mindset. Especially bad in this case because the cost-benefit varies a fair bit depending on where you are, and often there’s no single “right” answer.

    *OP, check those two links, they might be useful for you.


  • You’re right that it will never be completely undesirable for bots. However, you can make it less desirable, to the point that the botters say “meh, who cares? That other site is better to bot”.

    I’ll give you an example. Suppose the following two social platforms:

    • Orange Alien: large userbase, overexcited about consumption, people get banned for mocking brands, the typical user is tech-illiterate enough to confuse your bot with a human.
    • White Rat: small userbase, full of communists, even the non-communists tend to outright mock consumption, the typical user is extremely tech-savvy, so they spot and report your bot all the time.

    If you’re a botter advertising some junk, you’ll probably want to bot both platforms, but that is not always viable - coding the framework for the bots takes time, you don’t have infinite bandwidth and processing power, etc. So you’re likely going to prioritise Orange Alien, and you’ll only bot White Rat if you can spare the effort and resources.

    The main issue with step #1 is that there’s only so much room to make the environment unattractive to bots before making it unattractive to humans too. Like, you don’t want to shrink your userbase on purpose, right? You can still do things like encouraging people to hold a more critical view, teaching them how to detect bots, and asking them to report bots (that also helps with #4), but it only goes so far.

    [Sorry for the wall of text.]


  • As others said, you can’t prevent them completely - only partially. You do it in four steps:

    1. Make it unattractive for bots.
    2. Prevent them from joining.
    3. Prevent them from posting/commenting.
    4. Detect them and kick them out.

    The sad part is that, if you go too hard on bot eradication, it’ll eventually inconvenience real people too. (Cue Captcha: that shit is great against bots, but it’s cancer if you’re a human.) Or it’ll be laborious/expensive and won’t scale well. (Cue “why do you want to join our instance?”.)