Blade Runner director Ridley Scott calls AI a “technical hydrogen bomb” | “we are all completely f**ked”

  • bionicjoey@lemmy.ca

    I’m sure that a film director is an expert on the technical underpinnings of large language models, which are primarily used to generate blocks of text that have the appearance of being coherent.

    • PerogiBoi@lemmy.ca

      Several departments where I work had massive layoffs in favour of implementing customized versions of GPT-4 chatbots (both client-facing services and internal tools). That’s just the LLM end of AI.

      That’s not even considering the generative image side of AI. I fear for my company’s graphics, web design, and UX/UI teams, who will probably be gone this time next year.

      • M500@lemmy.ml

        I work freelance and occasionally needed to partner with artists and other specialists. But I now use various “AI” projects and no longer need to pay people to do that work, as the computer can do it well enough.

        I’m not some millionaire, I’m just a guy trying to save money to buy a house one day, so it’s not a large economic impact on its own, but I can’t be the only one.

      • jackalope@lemmy.ml

        UX is not about drawing pictures. That work is already automated by UI kits anyway. UX is about thinking through requirements and research.

        • PerogiBoi@lemmy.ca

          I know very well what UX is, having studied it as my major in uni. Senior executives do not know what it is, and they have made and are making decisions to “replace” them with LLMs and “prompt engineers”. I see it daily at work.

          There is a great disconnect: hiring managers and executives see LLMs as a quick win that will cut costs, and they make moves to cut costs without doing any analysis.

          • BluesF@feddit.uk

            Mm, I’ve already seen marketers present outputs from GPT models as if they were useful customer feedback. My suspicion is this bubble will burst, though, because at some point it will become clear that these models are not as good at what they’re doing as execs have been told they are.

            • PerogiBoi@lemmy.ca

              Perhaps, but the egos on “decision makers” are so large that I see them doubling down until the end.

              • BluesF@feddit.uk

                If shareholders’ profits are affected, then so will the decisions be, lol

            • emptiestplace@lemmy.ml

              At the end of the day they’re still TPS reports. I’m afraid the only bubble that’s gonna burst is yours.

      • Tyfud@lemmy.one

        We’re a long way out from that fortunately.

        Not saying that some jobs won’t be cut/lost, but the companies doing that were likely looking for reasons to downsize.

        AI models do not replace competent UI/UX. That’s just not what they’re designed to do. Very different functions.

        • PerogiBoi@lemmy.ca

          Even though you are technically correct, you assume the people in charge of making decisions have the same insight and knowledge you do about the current limitations of generative AI.

          I absolutely assure you that senior managers think it is fully matured, since it gives convincing answers, and they have made permanent and expensive decisions based on this viewpoint. To them, it fully replaces UX/UI and developers, so they have made cuts. We’re currently sourcing some offshore help to fix our customer service chatbot, which keeps giving off-topic advice to users 🤪

          • Tyfud@lemmy.one

            Oh, 100 percent right you are. Definitely not saying clueless corporate idiot bosses aren’t going to try and replace their workforce with AI.

            But I am saying that it won’t work for them after they do that. They’re going to crash and burn here, and they’ll have lost that talent and expertise within their company, so there’s no replacing it except slowly over time.

            • PerogiBoi@lemmy.ca

              From personal experience I think they’ll keep doubling down and when that doesn’t prove successful, lobby governments to make changes or ask for bailouts.

              My company (along with a whole onslaught of other similar orgs) successfully lobbied local politicians who convinced the mayor to pass a major bylaw that changed zoning rules and effectively killed remote work in my area.

              • Tyfud@lemmy.one

                It’s depressing how right you probably are about how companies are going to cope with this.

                Reminds me of that quote: “If Conservatives become convinced that they cannot win democratically, they will not abandon conservatism. They will reject Democracy.”

                But, like, apply that to Capitalism and Capitalists rejecting Capitalism in favor of Socialism for them.

      • remus989@sh.itjust.works

        I can tell you now that AI won’t come for UX/UI teams, at least not in the near future. Clients are rarely able to really articulate what they need out of software, and until AI is smart enough to suss that out, we’re good. That being said, I’m sure there will be companies that try to go that route, but I doubt it will work, again, in the near term.

        • PerogiBoi@lemmy.ca

          I’m not saying that AI will properly come for UX/UI teams.

          It already is. AI is, as you said, not smart enough to properly replace UX/UI teams, but managers, executives, and C-suite individuals don’t understand that. AI has been sold to them as a quick win that lowers costs. To give you an example, 3 members of our CX team were replaced by an annual license to Enterprise GPT-4 and some custom training on business material. In the last 2 months so much has broken down with it or hasn’t worked well, and clients have complained, so now we are subcontracting a Bangalore firm to try and fix it. Pretty sure we’ve exceeded those 3 people’s salary costs by now.

    • bh11235@infosec.pub

      Jules Verne wasn’t a technical expert either, but here we are somehow. Don’t underestimate a keen and observant imagination.

      • LWD@lemm.ee

        Closely related to Verne’s science-fiction reputation is the often-repeated claim that he is a “prophet” of scientific progress, and that many of his novels involve elements of technology that were fantastic for his day but later became commonplace. These claims have a long history, especially in America, but the modern scholarly consensus is that such claims of prophecy are heavily exaggerated. In a 1961 article critical of Twenty Thousand Leagues Under the Seas’ scientific accuracy, Theodore L. Thomas speculated that Verne’s storytelling skill and readers’ faulty memories of a book they read as children caused people to “remember things from it that are not there. The impression that the novel contains valid scientific prediction seems to grow as the years roll by”. As with science fiction, Verne himself flatly denied that he was a futuristic prophet, saying that any connection between scientific developments and his work was “mere coincidence” and attributing his indisputable scientific accuracy to his extensive research: “even before I began writing stories, I always took numerous notes out of every book, newspaper, magazine, or scientific report that I came across.”

        https://en.wikipedia.org/wiki/Jules_Verne

    • kescusay@lemmy.world

      I use Copilot in my work, and watching the ongoing freakout about LLMs has been simultaneously amusing and exhausting.

      They’re not even really AI. They’re a particularly beefed-up autocomplete. Very useful, sure. I use it to generate blocks of code in my applications more quickly than I could by hand. I estimate that when you add up the pros and cons (there are several), Copilot improves my speed by about 25%, which is great. But it has no capacity to replace me. No MBA is going to be able to do what I do using Copilot.

      As for prose, I’ve yet to read anything written by something like ChatGPT that isn’t dull and flavorless. It’s not creative. It’s not going to replace story writers any time soon. No one’s buying ebooks with ChatGPT listed as the author.

      • SkaveRat@discuss.tchncs.de

        They’re not even really AI.

        sigh. Can we please stop this shitty argument?

        They are. In a very broad sense. They are just not AGI.

          • FishFace@lemmy.world

            It’s never going to go away. AI is like the “god of the gaps” - as more and more tasks can be performed by computers to the same or better level compared to humans, what exactly constitutes intelligence will shrink until we’re saying, “sure, it can compose a symphony that people prefer to Mozart, and it can write plays that are preferred over Shakespeare, and paint better than van Gogh, but it can’t nail references to the 1991 TV series Dinosaurs so can we really call it intelligent??”

        • Mahlzeit@feddit.de

          So much this. Most people under 40 must have grown up with video games. Shouldn’t they have noticed at some point that the enemies and NPCs are AI-controlled? Some games even say that in the settings.

          I don’t see the point in the expression “AGI” either. There’s a fundamental difference between the if-else AI of current games and the ANNs behind LLMs. But there is no fundamental change needed to make an ANN-AI that is more general. At what point along that continuum do we talk of AGI? Why should that even be a goal in itself? I want more useful and energy-efficient software tools. I don’t care if it meets any kind of arbitrary definition.
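
          A rough, hypothetical sketch of that contrast, purely for illustration (everything below is made up, not taken from any real game or model): the first function is the hand-written if-else “AI” of a game NPC; the second is a tiny artificial neural network, the same kind of building block that LLMs scale up to billions of learned weights.

```python
# Hand-written rules vs. learned weights: both have commonly been called "AI".
import math
import random

def npc_behaviour(distance_to_player: float, npc_health: float) -> str:
    """Classic game 'AI': fixed if-else rules, nothing is learned."""
    if npc_health < 0.2:
        return "flee"
    if distance_to_player < 5.0:
        return "attack"
    return "patrol"

class TinyANN:
    """A one-hidden-layer neural net; weights are random here, but would normally be trained."""
    def __init__(self, n_in: int, n_hidden: int, n_out: int, seed: int = 0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
        self.w2 = [[rng.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]

    def forward(self, x):
        # One tanh hidden layer followed by a linear output layer.
        hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in self.w1]
        return [sum(w * h for w, h in zip(row, hidden)) for row in self.w2]

if __name__ == "__main__":
    print(npc_behaviour(distance_to_player=3.0, npc_health=0.9))  # -> "attack"
    net = TinyANN(n_in=2, n_hidden=4, n_out=3)
    print(net.forward([3.0, 0.9]))  # raw action scores; a trained net would pick the best one
```

          Both count as “AI”; the only real question is whether the behaviour is hand-coded or learned, and how big the learned part is.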

      • Not_mikey@lemmy.world

        they’re a particularly beefed-up auto complete

        Saying this is like saying you’re a particularly beefed-up bacterium. In both cases they operate on the same basic objective (survive and reproduce for you and the bacterium; guess the next word for the LLM and autocomplete), but the former is vastly more complex in the way it achieves that goal.
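
        To make the “autocomplete” end of that comparison concrete, here is a toy, hypothetical sketch (nobody’s actual product): a bigram frequency table that suggests the most common next word. An LLM is trained on essentially the same next-word objective, just with an enormous learned network instead of a lookup table.

```python
# Toy phone-style autocomplete: count which word tends to follow which.
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """Build a table mapping each word to a Counter of the words that follow it."""
    words = corpus.lower().split()
    table = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        table[current][nxt] += 1
    return table

def suggest_next(table, word: str) -> str:
    """Suggest the most frequent follower, like a basic autocomplete."""
    followers = table.get(word.lower())
    return followers.most_common(1)[0][0] if followers else "<no suggestion>"

if __name__ == "__main__":
    corpus = "the cat sat on the mat and the cat slept on the mat"
    table = train_bigram(corpus)
    print(suggest_next(table, "the"))  # -> "cat" ("cat" and "mat" tie; first seen wins)
    print(suggest_next(table, "sat"))  # -> "on"
```

        The objective really is “guess the next word”; the difference is that an LLM does it with billions of trained parameters and attention over long context instead of a frequency table, which is where all the extra capability comes from.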

    • erwan@lemmy.ml

      Yes, I thought he was talking about the film industry (“we’re fucked”) and how AI is or would be used in movies, in which case he would be competent to talk about it.

      But he’s just confusing science-fiction and reality. Maybe all those ideas he’s got will make good movies, but they’re poor predictions.

      • LwL@lemmy.world

        You kinda do, as anyone in tech who has ever had to communicate with customers can attest.

  • 1984@lemmy.today

    I like some of his movies, but this article reads like someone who just imagined his worst fears, with no ability to judge whether they’re probable or not.

    The AI would turn off the world’s money system? What?

    • Alex@lemmy.minecloud.ro

      Since he reached old age the man has gone completely senile and really not putting a bullet in his brain is doing the whole world a disfavor. I’m not saying that he single handedly destroyed a franchise by shunning aside Neill Blomkampf, then unironically making alien:covenant, then blaming fans for its failure, besides many other ramblings. I mean even in it’s current crappy state AI is at the very least 10x the writer Scott is and once they jam it into a robot body it will probably be 100x the director he is. So I can at least understand his concern. Motherf…

      I remembered where I was going with this: the man is essentially another George Lucas who thinks that just because they created something, they have the right to destroy it.

      • zoomshoes@lemmy.zip

        He should be shot because he made artistic decisions you don’t agree with? That’s wild.

        • Alex@lemmy.minecloud.ro

          Non-fans won’t get it and that’s fine; I was also scratching my head at the “Han shot first” and prequel trilogy fiascos when I first encountered them. But a simpler example to grasp: let’s say Leonardo da Vinci came back from the dead and painted a mustache on the Mona Lisa, because why the fuck not? The main complaint is that he was dissatisfied with the pull of Prometheus, which was a decent movie by itself but unrelated to the Alien franchise, and decided that he could have a bigger audience and pull if he frankensteined them together. So I don’t give a crap about his creative decisions as long as he keeps them out of stuff that does not require them. Another example would be Elon Musk buying Twitter and essentially destroying the platform. Did that truly benefit anyone?

            • Alex@lemmy.minecloud.ro

              I wasn’t being literal; the topic is a 100-year-old man pointing at AI and shaking uncontrollably while pissing himself. I’m just saying that the man’s track record since he hit about 58 has been spotty at best, so let’s treat everything that comes out of his mouth as what it is: trash. He should have been placed in a retirement home 10 years ago, or at the very least not paid attention to or given any money for projects. You give a senior citizen a driver’s license and then he runs over 50 pedestrians because he is senile and half blind, whose fault is that?

    • the_q@lemmy.world

      So you know enough about AI to know that worrying about something like “turning off the world’s money” isn’t possible? What’s your career?

      • asdfasdfasdf@lemmy.world

        You don’t need to know anything about AI to know that. Just do some basic critical thinking:

        • What does “world’s money system” mean? A stock exchange? Software that runs some process at the Federal Reserve of a specific country? Credit card company databases? What? It isn’t centralized.
        • The ability to “turn it off” means what exactly? Do you think there’s some switch to disable all credit card transactions for a whole company? Or just break their software?

        Even if AI is perfected, nobody in their right mind would give direct control of these things to an AI. There isn’t any reason to. An AI would basically just be another person, but super smart. It could be useful for a lot of things, but its decisions would need to be vetted by humans, and there isn’t any kind of “shut it off” switch in the first place.

    • Aleric@lemmy.world

      Seriously, he’s a director that made sci-fi movies. He has no qualifications whatsoever to answer this question. Of course, this will still rile up the critical thinking challenged crowd.

      • tankplanker@lemmy.world

        I used to think he had completely lost it when he had the characters acting so dumb in his recent Alien universe films, for example when the crew of Prometheus took off their helmets, but then, watching how large parts of society acted during COVID, I am now not so sure.

        Humans repeatedly make bad choices; somebody is going to be really, really dumb with their AI implementation when it gets to the level of actually being able to manage things.

    • kromem@lemmy.world

      And yet 90% of the population still has an anchoring bias due to the projections about AI that people like him, Cameron, and all the rest of the sci-fi contributors have made over the years.

    • celerate@lemmy.world

      I agree, yet for some reason celebrities who are not qualified to comment on these things have their voices amplified by the media.

  • aberrate_junior_beatnik@lemmy.world

    I may not be a computer scientist in real life, but I directed a movie based on a short story written by someone else who isn’t a computer scientist in real life.

  • mannycalavera@feddit.uk

    Yes, because we should all take note of what the art student says about AI. This guy is, essentially, a clown in this field. Why should we listen to him?

    • Buffalox@lemmy.world

      He may not be an expert, but he has been thinking about different future scenarios resulting from technology for probably 60 years. Although society has developed in that time, the basics remain the same. His thoughts on AI aren’t new; what’s new is that AI is moving fast now.

  • MeanEYE@lemmy.world

    This is the equivalent of someone saying “I am afraid of nuclear energy; imagine every country running dozens of nuclear bombs that can go off at any moment”. He clearly has no clue how AI works and has just fallen under the influence of fear-mongers who know even less.

  • Donkter@lemmy.world

    Christ, a good litmus test is that anyone who says “I’m afraid of AI because…” and then describes the end of modern civilization/the world can be dismissed.

    This man’s argument is literally “you could ask AI how to turn off all the electricity in Britain and then it would do it.” Goddam.

    • rosymind@leminal.space

      I’m less afraid of the tech itself, and more afraid of how it can be wielded by power-hungry human beings. As long as it never has desires of its own, we’re fine.

  • kandoh@reddthat.com

    When the camera was invented, a lot of commercial artists lost their jobs. Why print an ad featuring a realistic drawing of your car when you could just run a photograph?

    People say they hate modernism, but it’s a direct result of the photograph. Artists had to create things a photographer couldn’t. What’s the point of realism if it can be recreated effortlessly with the press of a button?

    I do wonder what jobs AI will replace and what jobs it will create. How will this change the art world? Will artists start to incorporate text and hands with the right number of fingers into everything they do? Maybe human artists will cede all digital media to AI, instead focusing on physical pieces.

    • Ilflish@lemm.ee

      Simpler jobs. There’s doomerism in the discussion of AI, and it seems like there are three camps: people who are scared of AI based on movies, people who have technical knowledge and know machines are still stupid, and people with knowledge who know some fuckwit with too much power is going to give the AI far too much access. Just don’t give it the nuke codes.

    • Tangent5280@lemmy.world

      Maybe human artists will cede all digital media to AI, instead focusing on physical pieces.

      Until some asshole hooks up a neural network to a CNC machine and churns out 10 billion sculptures a month.

  • AItoothbrush@lemmy.zip

    I really love Blade Runner, but it has no ties to reality. Other than the dystopian shit.

  • anteaters@feddit.de

    He might want to ask an AI about the historical events that inspired his fantasy movie so he understands why people criticize him for it.

  • Buffalox@lemmy.world

    AI will probably be the final and ultimate achievement of humanity. Once we have created true strong AI, the path leads clearly towards the irrelevancy of humankind.
    It’s not that we will cease to exist, but we will not remain at the top of the ladder for long after that. Our significance will be comparable to that of dogs.

    • Ataraxia@sh.itjust.works

      AI is just humanity evolved. Why be afraid of a better humanity? We don’t need to be flesh beings thrust out into this world from a wet slimy torn vagina or incision in the abdomen of a woman who severely regretted getting pregnant.

      How is this existence better than what humanity will be through AI?

      AI ARE our children.

      • Buffalox@lemmy.world

        I never claimed any emotional attachment to humankind being dominant.

        I think chances are good AI will still help us, even when machine intelligence doesn’t need humanity anymore. Kind of like how we try to preserve history and nature we find worthy.

        The above is by no means a doomsday prediction, but rather my understanding of how things will naturally evolve.
        We may prolong our relevance with implants, but ultimately that too will be inferior to self-improving AI.
        We can absolutely call AI our children, and our children will surpass us.

  • Boozilla@lemmy.world

    I think AI advances will continue to be just fast enough to have occasional “punctuation points” of short-lived buzz in the media. For example, I can see it getting good enough (and easy enough to use) that average normies will be able to create their own movies and games with it.

    But, AI advances will remain slow enough to lull people into apathy about it (like global warming). It will very gradually encroach into more and more embedded systems, infrastructure, and cloud resources.

    And at some point after that, it will accelerate in sudden and unexpected ways. I don’t know if it will be a good thing or a bad thing when that happens. But considering how many tech bros and executives are sociopaths with no ethics, I’m not very optimistic it will be a good thing.