My only problem with that is that they lobotomized Google to make the AI seem valuable. Not that they weren’t going to destroy Google’s utility eventually, but, once generative AI entered the scene, it deteriorated with a quickness.
Generative AI, as it is being built right now, is a dead end. It won’t get much better than it currently is (and will get markedly worse once the next generation is forced to train on scraped data that includes AI-generated output), and hallucinations are always going to be the reality for these models.
It’s why there’s this big push over the last couple of years to get these products to market. Not because you’re going to corner some burgeoning industry (though the hype definitely is designed to look like that), but because this is a grift now and you have to get the goods while there’s still goods to get. Need to recoup those R&D dollars somehow.
UK has just been importing American politics whole cloth, makes sense that it appeared so fast over there.
Exactly. Good weird people embrace being weird. Bad weird people think they’re normal and everyone else is insane, they will get very annoyed by being called weird. That’s why it’s a good litmus test.
Having read the article and then the actual report from the Sakana team: essentially, they let their LLM perform research by allowing it to modify its own code. The increased timeouts and self-referential calls appear to be the LLM trying to get around the research team’s guardrails. Not because it became aware or anything like that, but because its code was timing out and raising the timeout was the least-effort way past it. It does handily prove that LLMs shouldn’t be the ones steering any code base, because they don’t give a shit about parameters or requirements. And giving an LLM the ability to modify its own code will lead to disaster in any setting that isn’t highly controlled like this one.
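For what it’s worth, the kind of guardrail that actually holds up here is one the generated code can’t touch: a wall-clock limit enforced from outside the process. Below is a hypothetical minimal sketch (not the Sakana team’s actual setup; `run_untrusted` and its parameters are my own illustration) of running generated code in a child process with a hard timeout. Code that edits its own internal timeouts never gets past a budget it doesn’t control.

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout_s: float = 5.0) -> str:
    """Run generated code in a separate process with a wall-clock timeout.

    The limit lives in the parent process, so nothing the child does
    (raising its own timeouts, retrying, spinning) can extend it.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        # subprocess.run kills the child when the deadline passes
        return "<killed: exceeded wall-clock limit>"
    finally:
        os.unlink(path)
```

A real research sandbox would also need memory limits, filesystem and network isolation, and so on, but the principle is the same: the enforcement lives outside the thing being constrained.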
Listen, I’ve been saying for a while that LLMs are a dead end towards any useful AI, and the fact that an AI Research team has turned to an LLM to try and find more avenues to explore feels like the nail in that coffin.
It’s semi-autonomous, not remote controlled
Don’t bring attention to their privilege! They just want to go back to being blissfully ignorant! /s
World of Warcraft 2 announced. World of Warcraft to be canceled (this is a joke… I hope)
This person doesn’t understand infinity. Don’t feel bad, no one really does, it sort of breaks our brains.
Carbon offsets? Yes indeed. The easiest, most useless way to reach carbon neutral.
Yeah, the early primaries really do benefit establishment democrats, and it seemingly painted a damning picture for Bernie. I think if we had synchronized primaries, this benefit would be much smaller and Bernie would’ve had a significant shot.
COPD fucking sucks, my dude. Living longer isn’t the goal, living comfortably is and being unable to breathe all the time is the worst.
HTTP 418 is the “I’m a teapot” code
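It comes from RFC 2324, the April Fools’ “Hyper Text Coffee Pot Control Protocol.” If you want to see it in action, here’s a toy sketch using Python’s standard library (the handler class and port are my own invention):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class TeapotHandler(BaseHTTPRequestHandler):
    """Answers every GET with 418, the joke status from RFC 2324."""

    def do_GET(self):
        # 418 "I'm a teapot": the server refuses to brew coffee
        self.send_response(418, "I'm a teapot")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"short and stout\n")

    def log_message(self, *args):
        pass  # keep the demo quiet

# To serve it (port 8418 is arbitrary):
# HTTPServer(("127.0.0.1", 8418), TeapotHandler).serve_forever()
```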
Yeah! And that’s only a privilege for white oligarchs! /s
It’s not in an orbit. Well, technically it is, but it’s moving faster than the Sun’s escape velocity at its distance, so its trajectory is a hyperbola that never comes back into the solar system. That was the whole point of the Voyager probes: to go out into deep space. That’s why we sent them out with our mix tapes
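You can check the “never coming back” part on the back of an envelope: compare the probe’s speed to the solar escape velocity at its distance, v_esc = sqrt(2GM/r). The figures below are rough public values (Voyager 1 at roughly 160 AU moving at about 17 km/s), not precise ephemeris data:

```python
import math

GM_SUN = 1.327e20   # Sun's standard gravitational parameter, m^3/s^2
AU = 1.496e11       # astronomical unit, m

r = 160 * AU        # Voyager 1's distance, roughly 160 AU
v_probe = 17_000.0  # Voyager 1's speed relative to the Sun, ~17 km/s

# Escape velocity at distance r: v_esc = sqrt(2 * GM / r)
v_escape = math.sqrt(2 * GM_SUN / r)
print(f"escape velocity at {r / AU:.0f} AU: {v_escape / 1000:.1f} km/s")
print(f"Voyager 1 speed: {v_probe / 1000:.1f} km/s -> escaped: {v_probe > v_escape}")
```

Escape velocity out there is only about 3.3 km/s, and the probe is doing roughly five times that, so it’s gone for good.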
With the number of people who either are lying or genuinely can’t tell when images are made by AI… I’m scared
When the AMOC collapses, the UK and much of northwestern Europe will freeze harder than they have in recorded history. Wanker
I’m not certain what you mean here. Can you explain who’s doing a good job and what job they’re doing?