My understanding of quantum algorithms is that they set up parallel computations in such a way that incorrect solutions cancel out and correct ones reinforce each other. They indicate the existence of multiple universes to the same extent that the double slit experiment does.
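(A toy illustration of that cancel/reinforce behavior, my own sketch rather than anything from the thread: one Grover iteration over four candidates, done with plain numpy matrices instead of a quantum SDK. The oracle flips the sign of the "correct" amplitude, the diffusion step reflects every amplitude about the mean, and the wrong answers cancel exactly.)

```python
import numpy as np

N = 4         # four candidate solutions (2 qubits)
marked = 2    # index of the "correct" solution (arbitrary choice)

# Uniform superposition: every candidate starts with equal amplitude.
state = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the sign of the marked candidate's amplitude.
oracle = np.eye(N)
oracle[marked, marked] = -1

# Diffusion: reflect all amplitudes about their mean.
u = np.full(N, 1 / np.sqrt(N))
diffusion = 2 * np.outer(u, u) - np.eye(N)

state = diffusion @ (oracle @ state)
print(state)  # [0. 0. 1. 0.] -- wrong answers cancelled, right one reinforced
```

For N = 4 a single iteration happens to succeed with certainty; in general it takes about √N of them. Nothing in the arithmetic forces the many-worlds reading, which is the point: it's the same interpretational question the double slit poses.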
One thing to bear in mind is that, whenever someone accustomed to one platform explores another, they’ll tend to ascribe any differences between the communities to the other platform being an echo chamber of some kind.
Presumably there was something in the training data that caused this pattern. The difficulty is in predicting beforehand what the exact effect of changes to the training data will be.
Is it a simple error that OpenAI has yet to address
Since LLMs can’t be understood or controlled at that level of granularity even by their own creators, that seems to go without saying. Although calling it an “error” implies that there’s some criterion for defining correct behavior, which no one has yet agreed on.
or has someone named David Mayer taken steps to remove his digital footprint
Sure—someone gained a level of control over ChatGPT beyond that of its own developers, and used that power to prevent it from inventing gossip about himself. (ChatGPT output isn’t even a digital footprint, it’s a digital hallucination.)
A quick Google search of the name leads to results about British adventurer and environmentalist David Mayer de Rothschild…
Oh, FFS.
It’s to be expected that an industry would want to study the safety of its own products, though that in itself suggests a need for more independent research. What is a cause for concern is this:
Another study published in Scientific American found that meta-analyses by industry employees were 22 times less likely to have negative statements about a drug than those run by unaffiliated researchers.
That does suggest that the meta-analyses (as opposed to primary studies) are being used more for marketing than for product improvement.
Enshittification is defined as the gradual deterioration of a service or product brought about by a reduction in the quality of service provided, especially of an online platform, and as a consequence of profit-seeking.
I think that’s overly broad in comparison to Doctorow’s original meaning (which they also cite in the article). The critical element missing from their definition is that the enshittified product/service never had a viable business model to begin with: it uses the hype cycle to sell users and investors on an unsustainable mirage before inevitably collapsing.
It’s so secret, it’s already scrubbed itself from the internet.
the tech community keeps waiting for everyday people to take the baton of self-hosting. They never will—because the effort and cost of maintaining self-hosted services far exceeds the skill and interest of the audience.
The same argument could have been used a century ago to claim that everyday people would never switch from trains to private cars, because the effort and cost of maintaining a car exceeds the skill and interest of most travelers. That may have been true at one point, and may be true again in the future—but it’s contingent on changing circumstances, not a categorical truth.
If the AT protocol allows public access to content, they can’t create a proprietary training set. But the content is available for anyone who wants to add it to a public training set.
If n is the day the item is introduced, the total quantity is n(13 - n) = 42¼ - (n - 6.5)², which peaks at 42 on days 6 and 7.
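(Quick sanity check, my addition and assuming this is the running total of each gift in “The Twelve Days of Christmas”, where the day-n gift arrives n at a time on each of the 13 - n remaining days:)

```python
# Verify that n*(13 - n) matches the completed-square form 42.25 - (n - 6.5)**2.
for n in range(1, 13):
    direct = n * (13 - n)             # n per day, on days n through 12
    squared = 42.25 - (n - 6.5) ** 2  # completed-square form
    assert direct == squared
    print(n, direct)                  # maximum is 42, on days 6 and 7
```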
Interesting approach: detecting fake news by simulating humans’ reactions to it rather than judging the content itself.
Not with a typewriter, though.
Yeah, that’s why we need at least… two of them.
TIL Habermas is still alive.
Many authors stipulate that their books must be sold on Amazon without DRM, so their readers can back up and use their books outside Amazon’s ecosystem. Does preventing users from accessing their files violate any conditions that were implied when those books were bought and sold with that feature?
There is one thing I would find genuinely useful that seems within its current capabilities. I’d like to be able to give an AI a summary of my current knowledge on a subject, along with a batch of papers or articles, and have it give me one or more of the following:
A summary of the papers omitting the stuff I already know
A summary of any prerequisite background info that I don’t already know but that isn’t in the papers
A summary of all the points on which the papers are in agreement
A summary of any points on which the papers are in contention
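(None of that needs new model capabilities; it’s mostly prompt plumbing. A minimal sketch of the idea, mine rather than anything from the thread, assuming the OpenAI Python client and some hypothetical local files:)

```python
# Sketch only: the model name, file paths, and prompt wording are all assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

known = open("my_current_knowledge.md").read()  # hypothetical notes on what I know
papers = open("papers_combined.txt").read()     # hypothetical concatenated papers

tasks = [
    "Summarize the papers, omitting anything already covered in my notes.",
    "List prerequisite background the papers assume but my notes lack.",
    "List the points on which the papers agree.",
    "List the points on which the papers are in contention.",
]

for task in tasks:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; any capable model would do
        messages=[
            {"role": "system", "content": "You are a careful research assistant."},
            {"role": "user", "content": f"My notes:\n{known}\n\nPapers:\n{papers}\n\n{task}"},
        ],
    )
    print(f"## {task}\n{response.choices[0].message.content}\n")
```

The hard part, as usual, is trust: each of those summaries would still need spot-checking against the papers themselves.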
Another advantage of Nextcloud over Syncthing is selective syncing: Syncthing replicates the entire collection of synced files on each peer, but Nextcloud lets clients sync and unsync subfolders as needed while keeping all the files on the server. That could be useful for OP if they have a terabyte of files to sync but don’t have that much drive space to spare on every client.
“I do feel like it added a level of distance to it that wasn’t a bad thing,” he told Ars Technica. “Maybe a bit like a personal assistant who stays professional and has your back even in the most awful situations, but yeah, more than anything it felt unreal and dystopian.”
If the single word “dystopian” is how the editors decided to summarize that description, I’m not sure they’re doing any better than the AI.
Seems like they could have avoided this by having the Sandy Hook families join the bid with an arbitrarily high dollar amount—which they’d immediately get back as creditors.