  • This is a long post and I’m not even going to try to address all of it, but I want to call out one point in particular: the idea that if we somehow made a quantum leap from the current generation of models to AGI (there is, for the record, zero evidence of any path to that happening), it would magically hand us the solutions to anthropogenic climate change.

    That is absolute nonsense. We know all the solutions to climate change. Very smart people have spent decades telling us what those solutions are. The problem is that those solutions ultimately boil down to “Stop fucking up the planet for the sake of a few rich people getting richer.” It’s not actually a complicated problem, from a technical perspective. The complications are entirely social and political. Solving climate change requires us to change how our global culture operates, and we lack the will to do that.

    Do you really think that if we created an AGI and it told us to end capitalism in order to save the planet, we’d suddenly drop all our objections and do it? Do you think that an AGI created by Google or Microsoft would even be capable of saying “Stop allowing your planet’s resources to be hoarded by a privileged few”?


  • Powered flight was an important goal, but that wouldn’t have justified throwing all the world’s resources at making Da Vinci’s flying machine work. Some ideas are just dead ends.

    Transformer-based generative models have no demonstrable path to becoming AGI, and we’re already hitting a hard ceiling of diminishing returns on the very limited set of things they actually can do. Developing better versions of these models requires exponentially larger amounts of data, at exponentially scaling compute costs (yes, exponentially… to the point where current estimates are that there literally isn’t enough training data in the world to get past another generation or two of development on these things).

    Whether or not AGI is possible, it has become extremely apparent that this approach is not going to be the one that gets us there. So what is the benefit of continuing to pile more and more resources into it?


  • “it’s been incorporated into countless applications”

    I think the phrasing you were looking for there was “hastily bolted onto.” Was the world actually that desperate for tools that make bad summaries of data and sometimes write short-form emails for us? Does that really justify the billions upon billions of dollars being thrown at this technology?


  • Maybe because we’re all getting really tired of industry propaganda designed to sell us on the “inevitability” of genAI when anyone who’s paying even a little attention can see that the only thing inevitable about this current genAI fad is it crashing and burning.

    (Even when content like this comes from a place of sincere interest, it becomes functionally indistinguishable from the industry propaganda, because the primary goal of the propagandists is to keep genAI in the public conversation, thus convincing their investors that it’s still the hottest thing around and that they should keep shoveling money into it so they don’t miss the boat.)

    OpenAI, the company behind that giant bubble in the middle there, loses two dollars and thirty-five cents for every dollar of revenue. Not profit. Revenue. Every interaction with ChatGPT costs them a ridiculous amount of money, and the percentage of users willing to actually pay for those interactions is unbelievably small. Their enterprise sales are even smaller. They are burning money at an absolutely staggering pace, and that’s with the deeply discounted rate they currently get on their compute costs.

    No one has proposed anything that will lower their backend costs to the point where this model is profitable, and even doubling prices (which is their current plan) won’t get them there either. Literally not one person at OpenAI has put forth a concrete plan for the company to reach profitability. And that’s the biggest player in the game. If the most successful genAI company on the planet can’t figure out a way to actually make a profit off this thing, it’s dead. Not just OpenAI; the whole idea.
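
    To put rough numbers on that (a back-of-envelope sketch: the $2.35 figure is the one above, and it assumes usage volume and compute costs stay flat, which is already generous):

    ```python
    # Back-of-envelope check on the "double the prices" plan, using the
    # $2.35-of-cost-per-$1-of-revenue figure above. Usage volume and
    # compute costs are assumed to stay flat, which is already generous
    # given that their compute is currently sold to them at a discount.
    cost_per_dollar_of_revenue = 2.35

    for price_multiplier in (1, 2):
        revenue = 1.00 * price_multiplier
        cost = 1.00 * cost_per_dollar_of_revenue   # cost tracks usage, not price
        print(f"price x{price_multiplier}: revenue ${revenue:.2f}, "
              f"cost ${cost:.2f}, net ${revenue - cost:.2f}")

    # price x1: revenue $1.00, cost $2.35, net $-1.35
    # price x2: revenue $2.00, cost $2.35, net $-0.35
    ```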

    The numbers don’t lie; users, at best, find it moderately interesting and fun to play around with for a while. Barely anyone wants this, and absolutely nobody needs it. Not one single genAI product has created a meaningful use case that would justify the staggering cost of building and running a transformer-based model. The entire industry is just a party trick that’s massively overstayed its welcome.




  • Point four I’ve already answered; the need to liquidate stock amplifies any costs, with the potential to create a catastrophic snowball that could lead to a significant collapse in his fortune (nothing could ever make Musk “poor” by any sane standard, but he could become significantly poorer, which I’m sure to him would be the end of the world).

    Point three is answered by him being overleveraged. He took on a lot of debt to buy Twitter, which makes taking on additional debt significantly harder. You’ve both tried to dispute this, while simultaneously confirming it. We’ll get to that with point one.

    Point two is misleading. While Twitter does have its own accounts, those coffers are bare. Either Musk foots the bill out of his own pocket, or the company goes bankrupt. Either way, he’s still on the hook for about $800,000,000 a year in interest payments on the debt he took on to buy it.

    Which brings us to point one: you’ve tried to dispute this point by offering the evidence that confirms it. As you correctly state, Musk went into business with a murderers’ row of the kind of merciless loan sharks you only do business with if the banks have all laughed at you. As I mentioned previously, the interest on the debt he took on to buy Twitter is $800 million a year. You don’t accept those kinds of financing terms if you have better options. The fact that he did is all the proof you need that his credit is shit. The banks know damn well how precarious his wealth is. And if further evidence were needed, consider this: why did he trigger a significant collapse in Tesla’s stock price last year by selling off stock to service those debts, if he had the option of simply borrowing against his assets, as you claim?
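
    (If you want to sanity-check that interest figure, the arithmetic is simple. The principal and rate below are my own rough assumptions, not reported loan terms; they just land in the neighbourhood of the number quoted above.)

    ```python
    # Rough sanity check on that annual interest figure. Both numbers here
    # are illustrative assumptions, not reported loan terms.
    assumed_principal = 13_000_000_000   # ~$13bn of acquisition debt (assumption)
    assumed_blended_rate = 0.06          # ~6% blended interest rate (assumption)

    annual_interest = assumed_principal * assumed_blended_rate
    print(f"~${annual_interest / 1_000_000:,.0f} million per year")
    # ~$780 million per year
    ```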



  • Twitter’s revenue has cratered hard, and because it’s privately owned, every dollar Twitter loses is a dollar that Elon has to come up with.

    Because his wealth is entirely in overinflated Tesla stock, and because he’s already massively overleveraged from buying Twitter, coming up with that money means selling Tesla stock. And because the Tesla stock price is based on dreams and unicorn farts, any amount he sells tends to sink the price.

    This means that covering even $400,000 could easily cost Elon tens of millions in net worth. And there’s no telling when the Tesla stock price will just collapse entirely as investors finally start valuing it like a car manufacturer, and not like some kind of predestined savior of the human race (for context, Tesla in its entirety is currently valued at $800bn. Ford is currently valued at $40bn. And Ford sells a LOT more cars than Tesla).
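
    (To make the amplification concrete: the stake size and the size of the dip below are illustrative assumptions on my part; the ~$800bn valuation is the one above.)

    ```python
    # Illustrative only: how a small, sale-triggered dip in the share price
    # can dwarf the cash actually raised. The stake and the dip are assumptions;
    # the ~$800bn market cap is the figure quoted above.
    tesla_market_cap = 800_000_000_000   # ~$800bn
    assumed_stake = 0.13                 # assumed ~13% ownership
    assumed_dip = 0.0005                 # assumed 0.05% price dip from the sale

    cash_raised = 400_000
    net_worth_hit = tesla_market_cap * assumed_stake * assumed_dip
    print(f"cash raised:   ${cash_raised:>13,}")
    print(f"net-worth hit: ${net_worth_hit:>13,.0f}")
    # cash raised:   $      400,000
    # net-worth hit: $   52,000,000
    ```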


  • I don’t think there’s anything inherently wrong with the idea of using a GUI, especially for a non-professional who mostly just wants to get into self-hosting. Not everyone has to learn all the ins and outs of every piece of software they run. My sister is one of the least technical people in the world, and she has her own Jellyfin server. It’s not a bad thing that this stuff has become more accessible, and we should encourage that accessibility.

    If, however, you intend to use these tools in a professional environment, then you definitely need to understand what’s happening under the hood and at least be comfortable working in the command line when necessary. I work with Docker professionally, and Dockge is my go-to interface, but I can happily maintain any of my systems with nothing but an SSH connection when required. What I love about Dockge is that it makes this parallel approach possible. The reason I moved my organization away from Portainer is precisely because a lot of the more advanced command-line interactions would outright break the Portainer setup if attempted, whereas Dockge had no such problems.


  • “The thing is, those poor design decisions have nothing to do with those features, I claim that every feature could be implemented without ‘holding the compose files hostage’.”

    Yes, this is exactly my point. I think I’ve laid out very clearly how Portainer’s shortcomings are far more than just “It’s not for your use case.”

    Portainer is designed, from the ground up, to trap you in an ecosystem. The choices they made aren’t there because a usable Docker GUI has to work that way; they’re there solely because they do not want you to be able to easily move away from their platform once you’re on it.


  • Not the point. If you want to interact with the compose files directly through the command line, they’re all squirrelled away in a deep nest of folders, and Portainer throws a hissy fit when you touch them. Dockge has no such issues; it’s quite happy for you to switch back and forth between command line and GUI interaction as you see fit.

    It’s both intensely frustrating whenever it comes up as an issue directly, and indicative of a deeper problem with Portainer’s underlying philosophy.

    Dockge was built as a tool to help you; it understands that its role is to be useful, and to get the fuck out of the way when it’s not being useful.

    Portainer was built as a product. It wants to take over your entire environment and make you completely dependent on it. It never wants you to interact with your stacks through any other means and it gets very upset if you do.

    I used Portainer for years, both in my homelab and in production environments. Trust me, I’ve tried to work around its shortcomings, but there’s no good solution to a program like Portainer other than not using it.


  • Please don’t use Portainer.

    • It kidnaps your compose files and stores them all in its own grubby little lair
    • It makes it basically impossible to interact with docker from the command line once it has its claws into your setup
    • It treats console output - like error messages - as an annoyance, showing a brief snippet on the screen for 0.3 seconds before throwing the whole message in the shredder.

    If you want a GUI, Dockge is fantastic. It plays nice with your existing setup, it does a much better job of actually helping out when you’ve screwed up your compose file, it converts run commands to compose files for you, and it gets the fuck out of the way when you decide to ignore it and use the command line anyway, because it respects your choices and understands that it’s here to help your workflow, not to direct your workflow.
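
    (If you’re curious what that run-command-to-compose conversion amounts to, here’s a rough Python toy of the idea. It is not Dockge’s actual code; the run_to_compose helper and the Jellyfin command are made up for illustration, it only handles a few common flags, and it needs PyYAML installed.)

    ```python
    # Toy sketch of the "docker run" -> compose conversion idea. This is NOT
    # Dockge's implementation; it only understands a handful of common flags.
    import shlex

    import yaml  # pip install pyyaml


    def run_to_compose(run_cmd: str) -> str:
        tokens = shlex.split(run_cmd)
        if tokens[:2] != ["docker", "run"]:
            raise ValueError("expected a 'docker run ...' command")
        tokens = tokens[2:]

        name = "app"
        service = {"image": None, "ports": [], "volumes": [], "environment": []}
        i = 0
        while i < len(tokens):
            tok = tokens[i]
            if tok in ("-d", "--detach"):
                i += 1
            elif tok in ("-p", "--publish"):
                service["ports"].append(tokens[i + 1])
                i += 2
            elif tok in ("-v", "--volume"):
                service["volumes"].append(tokens[i + 1])
                i += 2
            elif tok in ("-e", "--env"):
                service["environment"].append(tokens[i + 1])
                i += 2
            elif tok == "--name":
                name = tokens[i + 1]
                i += 2
            elif tok == "--restart":
                service["restart"] = tokens[i + 1]
                i += 2
            else:
                # Treat the first unrecognised token as the image; a real
                # converter also handles commands, networks, labels, etc.
                service["image"] = service["image"] or tok
                i += 1

        service = {k: v for k, v in service.items() if v}  # drop empty fields
        return yaml.safe_dump({"services": {name: service}}, sort_keys=False)


    print(run_to_compose(
        "docker run -d --name jellyfin --restart unless-stopped "
        "-p 8096:8096 -v /srv/media:/media jellyfin/jellyfin"
    ))
    ```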

    Edit to add: A great partner for Dockge is Dozzle, which gives you a nice unified view for logs and performance data for your stacks.

    I also want to note that both Dockge and Dozzle are primarily designed for homelab environments and home users. If we’re talking professional, large scale usage, especially docker swarms and the like, you really need to get comfortable with the CLI. If you absolutely must have a GUI in an environment like that, Portainer is your only option, but it’s still not one I can recommend.





  • “But I don’t think even that is the case, as they can essentially just ‘swap out’ the video they’re streaming”

    You’re forgetting that the “targeted” component of their ads (while mostly bullshit) is an essential part of their business model. To do what you’re suggesting they’d have to create and store thousands of different copies of each video, to account for all the different possible combinations of ads they’d want to serve to different customers.



  • Comparatively speaking, a lot less hype than their earlier models produced. Hardcore techies care about incremental improvements, but the average user does not. If you try to describe to the average user what is “new” about GPT-4, other than “It fucks up less”, you’ve basically got nothing.

    And it’s going to carry on like this. New models are going to get exponentially more expensive to train, while producing less and less consumer interest each time, because “Holy crap, look at this brand new technology” will always be more exciting than “In our comparative testing, version 7 is 9.6% more accurate than version 6.”

    And for all the hype, the actual revenue just isn’t there. OpenAI are bleeding around $5-10bn (yes, with a b) per year. They’re currently trying to raise around $11bn in new funding just to keep the lights on. It costs far more to operate these models (even at the steeply discounted compute costs Microsoft are giving them) than anyone is actually willing to pay to use them. Corporate clients don’t find them reliable or adaptable enough to actually replace human employees, and regular consumers think they’re cool, but in a “nice to have” kind of way. The product isn’t essential enough for people to pay big money for, but it can only be run profitably by charging big money.