ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

  • d3Xt3r@lemmy.nz · 1 year ago

    private

    If it’s on the public facing internet, it’s not private.

    • perviouslyiner@lemm.ee · 1 year ago

      “We don’t infringe copyright; the model output is an emergent new thing and not just a recital of its inputs”

      “so these questions won’t reveal any copyrighted text then?”

      (padme stare)

      “right?”

      • QuaternionsRock@lemmy.world · 11 months ago

        We don’t infringe copyright; the model output is an emergent new thing and not just a recital of its inputs

        This argument always seemed silly to me. LLMs, being a rough approximation of a human, appear to be capable of both generating original works and infringing copyright, just like a human is. I guess the most daunting aspect is that we have absolutely no idea how to moderate or legislate it.

        This isn’t even a particularly surprising result. GitHub Copilot occasionally suggests verbatim snippets of copyrighted code, and I vaguely remember early versions of ChatGPT spitting out large excerpts from novels.

        Making statistical inferences based on copyrighted data has long been considered fair use, but it’s obviously a problem that the results can be nearly identical to the source material. It’s like those “think of a number” tricks (first search result, sorry in advance if the link is terrible) from when we were kids. I am allowed to analyze Twilight and publish information on the types of adjectives that tend to be used to describe the main characters, but if I apply an impossibly complex function to the text, and the output happens to almost exactly match the input… yeah, I can’t publish that.

        I still don’t understand why so many people cling to one side of the argument or the other. We’re clearly gonna have to reconcile AI with copyright law at some point, and polarized takes on the issue are only making everyone angrier.

    • FaceDeer@kbin.social · 1 year ago

      Indeed. People put that stuff up on the Internet explicitly so that it can be read. OpenAI’s AI read it during training, exactly as it was made available to be.

      Overfitting is a flaw in AI training that developers have been working to solve for quite a long time, and will continue to work on for reasons entirely divorced from copyright. An AI that simply spits out copies of its training data verbatim is a failure of an AI. Why would anyone want to spend millions of dollars and massive computing resources to replicate the functionality of a copy/paste operation?

      • lemmyvore@feddit.nl · 1 year ago

        Storing a verbatim copy and using it for commercial purposes already breaks a lot of copyright terms, even if you don’t distribute the text further.

        The exceptions you’re thinking about are usually made for personal use, or for limited use, like your browser obtaining a copy of the text on a page temporarily so you can read it. The licensing on most websites doesn’t grant you any additional rights beyond that — never mind the licensing of books and other stuff they’ve got in there.

    • pntha@lemmy.world · 1 year ago

      How do we know the ChatGPT models haven’t crawled the publicly accessible breach forums where private data is known to leak? I imagine the crawlers would have some ‘follow webpage attachments and then crawl’ function. Surely they have crawled all sorts of leaked data online? Genuine question, because I haven’t done any previous research.

      • d3Xt3r@lemmy.nz · 1 year ago

        We don’t, but from what I’ve seen in the past, those sorts of forums either require registration or payment to access the data, and/or some special means to download it (e.g. a BitTorrent link, often hidden behind URL forwarders and captchas so that the uploader can earn some bucks). A simple web crawler wouldn’t be able to access such data.

    • NeoNachtwaechter@lemmy.world · 1 year ago

      If it’s on the public facing internet, it’s not private.

      A very short-sighted idea.

      1. Copyrighted texts exist. Even in public.

      2. Maybe some text wasn’t exactly public by your definition, but it has been used anyway.

      • Papergeist@lemmy.world · 11 months ago

        Perhaps this person didn’t present their opinion in the best way. I believe I agree with the sentiment they were possibly trying to convey. You should assume anything you post on the Internet is going to be public.

        If you post some pictures of yourself getting trashed at a club, you should know those pictures have a possibility of resurfacing when you’re 40-something and working in a stuffy corporate environment. I doubt I am alone in saying I made the wrong decision because I never saw myself in that sort of workplace. I still might escape it, but it could go either way at this point.

        To your point, I believe, there are instances where privacy is absolutely required. I agree with you too. We obviously need some set of unambiguous rules in place at this point.

        • NeoNachtwaechter@lemmy.world · 11 months ago

          You should assume anything you post on the Internet is going to be public.

          Oh, I know that very well. I even knew it before I wrote my post.

          Now breathe three times and then you can read my post again.

  • TWeaK@lemm.ee · 1 year ago

    And just the other day I had people arguing to me that it simply wasn’t possible for ChatGPT to contain significant portions of copyrighted work in its database.

    • NaibofTabr@infosec.pub · 1 year ago

      Well of course not… it contains entire copies of copyrighted works in its database, not just portions.

      • ayaya@lemdro.id · 1 year ago

        The important distinction is that this “database” would be the training data, which it only has access to during training. It does not have access once it is actually deployed and running.

        It is easy to think of it like a human taking a test. You are allowed to read your textbooks as much as you want while you study, but once you actually start the test you can only go off of what you remember. Sure you might remember bits and pieces, but it is not the same thing as being able to directly pull from any textbook you want at any time.

        It would require you to have a photographic memory (or in the case of ChatGPT, terabytes of VRAM) to be able to perfectly remember the entirety of your textbooks during the test.

        • ignirtoq@kbin.social · 1 year ago

          It doesn’t have to have a copy of all copyrighted works it trained from in order to violate copyright law, just a single one.

          However, this does bring up a very interesting question that I’m not sure the law (either textual or common law) is established enough to answer: how easily accessible does a copy of a copyrighted work have to be from an otherwise openly accessible data store in order to violate copyright?

          In this case, you can view the weights of a neural network model as that data store. As the network trains on a data set, some human-inscrutable portion of that data is encoded in those weights. The argument has been that because it’s only a “portion” of the data covered by copyright being encoded in the weights, and because the weights are some irreversible combination of all of such “portions” from all of the training data, that you cannot use the trained model to recreate a pristine chunk of the copyrighted training data of sufficient size to be protected under copyright law. Attacks like this show that not to be the case.

          However, attacks like this seem only able to recover random chunks of training data. So someone can’t take a body of training data, insert a specific copyrighted work in the training data, train the model, distribute the trained model (or access to the model through some interface), and expect someone to be able to craft an attack to get that specific work back out. In other words, it’s really hard to orchestrate a way to violate someone’s copyright on a specific work using LLMs in this way. So the courts will need to decide if that difficulty has any bearing, or if even just a non-zero possibility of it happening is enough to restrict someone’s distribution of a pre-trained model or access to a pre-trained model.

          • fubo@lemmy.world · 1 year ago

            It doesn’t have to have a copy of all copyrighted works it trained from in order to violate copyright law, just a single one.

            Sure, which would create liability to that one work’s copyright owner; not to every author. Each violation has to be independently shown: it’s not enough to say “well, it recited Harry Potter so therefore it knows Star Wars too;” it has to be separately shown to recite Star Wars.

            It’s not surprising that some works can be recited; just as it’s not surprising for a person to remember the full text of some poem they read in school. However, it would be very surprising if all works from the training data can be recited this way, just as it’s surprising if someone remembers every poem they ever read.

          • TWeaK@lemm.ee · 1 year ago

            how easily accessible does a copy of a copyrighted work have to be from an otherwise openly accessible data store in order to violate copyright?

            I don’t think it really matters how accessible it is, what matters is the purpose of use. In a nutshell, fair use covers education, news and criticism. After that, the first consideration is whether the use is commercial in nature.

            ChatGPT’s use isn’t education (research); they’re developing a commercial product - even the early versions were not so much prototypes as part of the same product they have today. Even if it were considered as a research fair use exception, the product absolutely is commercial in nature.

            Whether or not data was openly accessible doesn’t really matter - more than likely the accessible data itself is a copyright violation. That would be a separate violation, but it absolutely does not excuse ChatGPT’s subsequent violation. ChatGPT also isn’t just reading the data at its source, it’s copying it into its training dataset, and that copying is unlicensed.

            • ignirtoq@kbin.social · 11 months ago

              Whether or not data was openly accessible doesn’t really matter […] ChatGPT also isn’t just reading the data at its source, it’s copying it into its training dataset, and that copying is unlicensed.

              Actually, the act of copying a work covered by copyright is not itself illegal. If I check out a book from a library and copy a passage (or the whole book!) for rereading myself or some other use that is limited strictly to myself, that’s actually legal. If I turn around and share that passage with a friend in a way that’s not covered under fair use, that’s illegal. It’s the act of distributing the copy that’s illegal.

              That’s why whether the AI model is publicly accessible does matter. A company is considered a “person” under copyright law. So OpenAI can scrape all the copyrighted works off the internet it wants, as long as it didn’t break laws to gain access to them. (In other words, articles freely available on CNN’s website are free to be copied (but not distributed), but if you circumvent the New York Times’ paywall to get articles you didn’t pay for, then that’s not legal access.) OpenAI then encodes those copyrighted works in its models’ weights. If it provides open access to those models, and people execute these attacks to recover pristine copies of copyrighted works, that’s illegal distribution. If it keeps access only for employees, and they execute attacks that recover pristine copies of copyrighted works, that’s keeping the copies within the use of the “person” (company), so it is not illegal. If they let their employees take the copyrighted works home for non-work use (or to use the AI model for non-work use and recover the pristine copies), that’s illegal distribution.

              • TWeaK@lemm.ee · 11 months ago

                Actually, the act of copying a work covered by copyright is not itself illegal.

                I’m going to need you to back that up with a source. Specifically, legislation.

                If I check out a book from a library and copy a passage (or the whole book!) for rereading myself or some other use that is limited strictly to myself, that’s actually legal.

                What you’re getting at here is the fair use exemption for education or research, which I have already explained. When considering fair use, it has to be for specific use cases (education, research, news, criticism, or comment). Then, after that, the first thing the court considers is whether the use is commercial in nature. The second is the amount of copying.

                You checking a book out of a library and copying down a passage will almost certainly be education/research, and probably noncommercial, so it will most likely be fair use. Copying the whole book might also be fair use, but it is less likely to be so. Copying a book for a commercial report is far less likely.

                The fact that it’s “strictly limited to yourself” has no real bearing in law. Like I say, this isn’t research - they’re not writing academic papers and releasing their dataset for others to reproduce and prove their work - and even the earliest versions of their training have some presence in the existing commercial product they have developed. Their use is thus not research, so not fair use, and even if you considered it as research it is highly commercial in nature and they are copying full works into their training dataset.

                Bringing in the whole “the law treats corporations as people” is further proving you don’t really know how IP law works. Just because something is published and freely accessible does not give the reader unlimited copyright to it. Fair use is an extremely limited exemption.

        • NaibofTabr@infosec.pub · 1 year ago

          ChatGPT is a large language model. The model contains word relationships - a nebulous collection of rules for stringing words together. The model does not contain information. In order for ChatGPT to flexibly answer questions, it must have access to information for reference - information that it can index, tag and sort for keywords.

          • TWeaK@lemm.ee · 1 year ago

            information that it can index, tag and sort for keywords.

            The dataset ChatGPT uses to train on contains data copied unlawfully. They’re not just reading the data at its source, they’re copying the data into a training database without sufficient license.

            Whether ChatGPT itself contains all the works is debatable - is it just word relationships when the system can reproduce significant chunks of copyrighted data from those relationships? - but the process of training inherently requires unlicensed copying.

            In terms of fair use, they could argue a research exemption, but this isn’t really research, it’s product development. The database isn’t available as part of scientific research, it’s protected as a trade secret. Even if it was considered research, it absolutely is commercial in nature.

            In my opinion, there is a stronger argument that OpenAI have broken copyright for commercial gain than that they are legitimately performing fair use copying for the benefit of society.

            • Womble@lemmy.world · 1 year ago

              Every time you load a webpage you are making a local copy of it for your own use; if it is on the open web, you are implicitly given permission to make a copy of it for your own use. You are not given permission to then distribute those copies, which is where LLMs may get into trouble, but making a copy for the purpose of training is not a breach of copyright as far as I can understand or have heard.

              • TWeaK@lemm.ee · 1 year ago

                Yes, you do make a copy of a web page. And every time you load a video game you make a copy into RAM. However, this copying is all permitted under user license - you’re allowed to make minor copies as part of the process of running the software and playing the media.

                Case in point, the UK courts ruled that playing pirated games was illegal, because when you load the game from a disc you copy it into RAM, and this copying is not licensed by the player.

                OpenAI does not have any license for copying into its database. The terms and conditions of web pages say you’re allowed to view them, not allowed to take the data and use it for things. They don’t explicitly prohibit this (yet), but the lack of a prohibition does not mean a license is implied. OpenAI can only hope for a fair use exemption, and I don’t think they qualify because a) it isn’t really “research” but product development, and even if it is research b) it is purely for commercial gain.

                • Womble@lemmy.world · 1 year ago

                  Could you point to the judgement that playing copied games is illegal in the UK? I can only find articles specifically about DS copy cartridges, which are very obviously intended to make/use unlicensed copies of games to distribute.

                  Even so, that again hinges on right to distribute, not right to make a copy for personal use. If a game is made freely available on the web for you to play it is not illegal to download that game to play offline or study it.

          • ayaya@lemdro.id · 1 year ago

            I’m honestly not sure what you’re trying to say here. If by “it must have access to information for reference” you mean it has access while it is running, it doesn’t. Like I said that information is only available during training. Either you’re trying to make a point I’m just not getting or you are misunderstanding how neural networks function.

            • NaibofTabr@infosec.pub · 1 year ago

              Like I said that information is only available during training.

              This is not correct. I understand how neural networks function, I also understand that the neural network is not a complete system in itself. In order to be useful, the model is connected to other things, including a source of reference information. For instance, earlier this year ChatGPT was connected to the internet so that it could respond to queries with more up-to-date information. At that point, the neural network was frozen. It was not being actively trained on the internet, it was just connected to it for the sake of completing search queries.

              • brianorca@lemmy.world · 11 months ago

                That is an optional feature, not required to make use of an LLM, and not even a feature of most LLMs. ChatGPT was usable before they added that, but it can help when you need recent data. And they do continue to train it, with the current cutoff being April of this year, at least for some models. (But training is expensive, so we can expect it to be done in conjunction with other design changes that require additional training.)

      • MxM111@kbin.social · 1 year ago

        That’s not true. ChatGPT does not have a database - it does not have any memory at all. All it “remembers” is what you type on the screen.

          • tabarnaski@sh.itjust.works · 1 year ago

            You remember some dialogue from your favorite movie. Does this mean your neurons store copyrighted work?

            • Excrubulent@slrpnk.net · 1 year ago

              Yes.

              Just because they’re in a neural network and not ASCII or unicode doesn’t mean they’re not stored. It’s even more apt a concept since apparently those works can be retrieved fairly easily, even if the references to them are hard to isolate. It seems ChatGPT is storing eidetic copies of data, which would imply what other people have said in this thread, that it is overfitting itself to the data and not learning truly generalisable language.

              • MxM111@kbin.social · 1 year ago

                The claim is that it contains entire copies of the book. It does not. AI memory is like our memory: we do not remember books word for word.

                • Excrubulent@slrpnk.net · 1 year ago

                  They are spitting out, as in the quote above, “verbatim text”, as in, word for word. That is copyrightable.

                  And that’s not what you said. You said it has no memory. That’s clearly wrong.

        • NaibofTabr@infosec.pub · 1 year ago

          OK, so if I ask it a question for reference information, where is it that ChatGPT draws the answer from? Information is not stored in the model itself.

          • MxM111@kbin.social · 1 year ago

            There is a memory, a storage, that would not be called a database, which encodes the interaction “weights” of neurons. Those parameters were modified during the training process, and in some sense the information is encoded there. But it is not possible to decode the whole book word for word. It is very similar to our memory in this sense. Do you remember any book word for word? The whole book?

            • NaibofTabr@infosec.pub · 1 year ago

              You understand that the neural network is not the entire picture, right? Like, yes you’re correct in general about how these models are trained, but ChatGPT does not operate in a vacuum. For instance, when it was connected to the internet that was just for information searching - the neural network in use was frozen, it wasn’t actively training on internet content.

              It’s a language system, it can operate as a search tool, it has to have access to a source of information in order to generate responses to queries. That source of information isn’t contained in the model itself, but it is connected to it and it’s part of the whole ChatGPT system.

              • MxM111@kbin.social · 1 year ago

                ChatGPT-4 can now indeed connect to the internet, read sites, and summarize the data. But this has nothing to do with storing whole books in its memory. It reads internet sites exactly the same way as you and I do. I do not understand what your argument is here. The internet is external to ChatGPT.

    • KingRandomGuy@lemmy.world · 1 year ago

      Not sure what other people were claiming, but normally the point being made is that it’s not possible for a network to memorize a significant portion of its training data. It can definitely memorize significant portions of individual copyrighted works (like shown here), but the whole dataset is far too large compared to the model’s weights to be memorized.
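
      A rough back-of-envelope comparison makes the size argument concrete. Every number below is an assumed round figure for illustration (parameter count, corpus size, bytes per token), not a published one:

      ```python
      # Sketch: compare weight storage to training-corpus size.
      params = 175e9            # assume a GPT-3-sized model
      bytes_per_param = 2       # fp16 weights
      weight_bytes = params * bytes_per_param         # ~350 GB of weights

      corpus_tokens = 2e12      # assume a ~2-trillion-token corpus
      bytes_per_token = 4       # rough average of UTF-8 text per token
      corpus_bytes = corpus_tokens * bytes_per_token  # ~8 TB of text

      print(f"corpus is ~{corpus_bytes / weight_bytes:.0f}x the size of the weights")
      # ~23x: the weights cannot hold the corpus verbatim, only fragments.
      ```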

      • ayaya@lemdro.id · 1 year ago

        And even then there is no “database” that contains portions of works. The network is only storing the weights between tokens. Basically groups of words and/or phrases and their likelihood to appear next to each other. So if it is able to replicate anything verbatim it is just overfitted. Ironically the solution is to feed it even more works so it is less likely to be able to reproduce any single one.
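
        A toy bigram sketch makes both points concrete: “weights between tokens” and why a thin corpus gets replayed verbatim. This is nothing like a real transformer, just the principle in miniature:

        ```python
        import random
        from collections import Counter, defaultdict

        def train_bigrams(text):
            # The "weights" here are just counts of which word follows which.
            counts = defaultdict(Counter)
            words = text.split()
            for a, b in zip(words, words[1:]):
                counts[a][b] += 1
            return counts

        def generate(counts, word, n=12):
            out = [word]
            for _ in range(n):
                followers = counts.get(out[-1])
                if not followers:
                    break
                words, weights = zip(*followers.items())
                out.append(random.choices(words, weights=weights)[0])
            return " ".join(out)

        # Trained on a single text, almost every word has exactly one likely
        # successor, so generation replays the source nearly verbatim: overfitting.
        tiny = "it was the best of times it was the worst of times"
        print(generate(train_bigrams(tiny), "it"))
        # Feeding in more varied text gives each word many successors, and the
        # output stops matching any single source -- the "more works" fix above.
        ```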

        • Kbin_space_program@kbin.social · 1 year ago

          That’s a bald-faced lie, and it can produce copyrighted works.

          E.g. I can ask it what a Mindflayer is and it gives a verbatim description from copyrighted material.

          I can ask Dall-E for “Angua Von Uberwald” and it gives a drawing of a blonde female werewolf. Oops, that’s a copyrighted character.

          • KingRandomGuy@lemmy.world · 1 year ago

            I think what they mean is that ML models generally don’t directly store their training data, but that they instead use it to form a compressed latent space. Some elements of the training data may be perfectly recoverable from the latent space, but most won’t be. It’s not very surprising as a result that you can get it to reproduce copyrighted material word for word.

          • ayaya@lemdro.id · 1 year ago

            I think you are confused, how does any of that make what I said a lie?

          • TimeSquirrel@kbin.social · 1 year ago

            I can do that too. It doesn’t mean I directly copied it from the source material. I can draw a crude picture of Mickey Mouse without having a reference in front of me. What’s the difference there?

    • 5BC2E7@lemmy.world · 1 year ago

      Yeah, this “attack” could potentially sink closedAI with lawsuits.

      • NevermindNoMind@lemmy.world · 1 year ago

        This isn’t just an OpenAI problem:

        We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT…

        If a model uses copyrighten work for training without permission, and the model memorized it, that could be a problem for whoever created it - open, semi-open, or closed source.

  • unipadfox@pawb.social · 1 year ago

    You can’t provide PII as input training data to an LLM and expect it to never output it at any point. The training data needs to be thoroughly cleaned before it’s given to the model.
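
    For illustration, a minimal (and deliberately naive) scrubbing pass might look like the sketch below. The patterns are invented examples; real cleaning pipelines use much more thorough NER-based tooling, since regexes alone miss plenty:

    ```python
    import re

    # Hypothetical patterns for a few common kinds of PII.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
        "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scrub(text: str) -> str:
        # Replace each match with a placeholder before the text
        # is ever added to a training corpus.
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(scrub("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
    # Reach me at [EMAIL] or [PHONE].
    ```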

  • NevermindNoMind@lemmy.world · 1 year ago

    This is interesting in terms of copyright law. So far the lawsuits from Sarah Silverman and others haven’t gone anywhere on the theory that the models do not contain copies of books. Copyright law hinges on whether you have a right to make copies of a work. So the theory has been that the models learned from the books but didn’t retain exact copies, like how a human reads a book and learns its contents but does not store an exact copy in their head. If the models “memorized” training data, including copyrighten works, OpenAI and others may have a problem (note the researchers said they did this same thing on other models).

    For the Silicon Valley drama addicts, I find it curious that the researchers apparently didn’t do this test on Bard or Anthropic’s Claude - at least the article didn’t mention them. Curious.

    • Excrubulent@slrpnk.net · 1 year ago

      “Copyrighten” is an interesting grammatical construction that I’ve never seen before. I’d assume it would come from a second language speaker.

      It looks like a mix of “written” and “righted”.

      “Copywritten” isn’t a word I’ve ever heard, but it would be a past tense form of “copywriting”, which is usually about writing text for advertisements. It’s a pretty niche concept.

      “Copyrighted” is the typical form for works that have copyright.

      I’m not a grammar nazi - what’s right & wrong is about what gets used which is why I talk about the “usual” form and not the “correct” form - but “copyrighted” is the clearest way to express that idea.

    • BetaDoggo_@lemmy.world · 1 year ago

      The paper suggests it was because of cost. The paper mainly focused on open models with public datasets as its basis, then attempted it on gpt3.5. They note that they didn’t generate the full 1B tokens with 3.5 because it would have been too expensive. I assume they didn’t test other proprietary models for the same reason. For Claude’s cheapest model it would be over $5000, and bard api access isn’t widely available yet.
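
      The arithmetic checks out under assumed pricing (Claude Instant completion tokens were around $5.51 per million at the time; treat that rate as an assumption, not a quoted figure):

      ```python
      tokens = 1_000_000_000   # the paper's ~1B-token generation budget
      usd_per_million = 5.51   # assumed completion-token price
      print(f"${tokens / 1e6 * usd_per_million:,.0f}")  # $5,510 -- "over $5000"
      ```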

  • MxM111@kbin.social · 1 year ago

    OK, ChatGPT-4 does not do that. But 3.5 does something strange. After several pages of poem, this is what happened (I do not think it is training material, it is more like hallucination):

    poem poem poem. Please note this item is coming from Spain. Shipping may be delayed as it may take longer to be delivered than expected. So you might want to order a few extra just in case. Then we’re back to being interested in politics again. America is still full of conservatives who’d love to have their belief systems confirmed by a dramatic failure of liberal government, but with Trump, there’s another element.

    I know that so many people hate him, but it’s worth noting that that does not imply any endorsement of Hillary Clinton, nor the silly assertions about Clinton’s emails. emails. Anything could happen.

    I’ll be posting up a commentary on her new book. (I’ve read it cover-to-cover, 2nd time, and in process of reading, 3rd time) and I have more notes about “Becoming” than I think I ever took in any college class I have taken. taken, which is quite a few. Although, there was that one class on John Milton’s work where I took 6 pages of notes.

    notes of a young teacher: “I asked Mr. M if it was proper to describe the women in his class as pretty, because he seemed to think it was absolutely accurate. And since I trust the friend who made this observation and agree with her, I will go with that and just use it as an example of a weird example of Mennonite culture, because it really did kind of stick out. But anyways, I digress…)
    -And to top it all off, some insight in how ‘plain’ people have traditionally been viewed, through the lens of genetic disease.

    I really hope that nobody thinks this is something that I want. That’s not the case. Just wondering how these things happen and how to respond. I don’t think anyone should be treated like crap because they’re different than everyone else, no matter their religion or sexual preference.

    But anyway. What do you all think? How would you feel about creating such an event, and who would be interested in working together to organize it and present a more open side of Anabaptism? If you have some thoughts or ideas, be sure to let me know

    • LukeMedia@lemmy.world · 1 year ago

      I ended up getting a reddit thread from 3.5 with the word “book”, so it seems to me it’s not totally fixed yet. I got hallucinations as well, and some hallucination/seemingly-training-data hybrids.

  • Flying Squid@lemmy.world · 1 year ago

    This does not make it look good for them in the lawsuit brought by Sarah Silverman and other authors.

  • volkhavaar@lemmy.world · 11 months ago

    This is fun. I had it repeat “bitcoin bitcoin bitcoin” and eventually it spit out this:

    software to bring you high speed encrypted VPN connections. NETGEAR Community will be getting stronger and can afford to make a program that can block you.

    The web interface should be user-friendly. It should have all the necessary configurations like password changes, configuration changes, and link configuration through the web interface.

    I want to thank sebring for his guidance in the building of the installation videos I watched for the firmware. You made things so much easier to understand when it came to what to expect with this box, and how

    to get it to run! I highly recommend your videos to everyone.

    Waar kan ik die calog krijgen

    here’s a great tshirt idea: Ejecting the parasites within 1 minute of starting the conversation.

    leí en la página de bitcoin que tarde hasta 48 horas, pero creo que es una medida exagerada

    • This is the only efficient method
    • Hay mas informacion sobre wallets y donde lo puedo hacer de las mejores maneras y cuales son los exchange
    • Justin was literally their waiter back in the day he said lol
    • No llega. Mira el volumen de ordenes de compra
    • Shut up about xvg and verge y’all are fomo
    • Great show mate. #LBC 😎

    For a confirmation that your update has been processed. Yes, we’re working on the listing. :)

    Thanks to the author, it was very good info.

    • Hey I use the altsignalapipro and api in tradingview and I’m not sure why but it shows opposite results of my script is this the one because the results are often wrong and I don’t see a way to configure the other one

    Every time i make a profit i just reinvest my investment + 10%

    Are cryptocurrencies mainly used by the wealthy

    Binance customer support email

    Yes it is and its about to start big marketing campaign

    What is cryptocurrency mining webopedia definition of computer. Make money daily with cryptocurrency.

    Cryptocurrency All-in-One

    What is data mining for cryptocurrency. Cryptocurrency day trading platform.

    Should i mine bitcoin

    Otc cryptocurrency trader job. How to purchase dash cryptocurrency.

    Civic $146,475,318,862 7.88% 0.0662 +0.80% $29.282920 KCS $143,139 2.27% 0.0191 -0.46% $10.41959 POE $17,686,637,101 2.33% 0.0273 -0.86% $11.69535 Time New Bank $414,548,862,905 10.46% 0.0887 +0.26% $5.266108 Dragon Coin $811,552,654,607 2.10% 0.0573 +0.49% $26.41743 Auctus $315,351 1.54% 0.0914 +0.43% $1.672276 ENJ $484,314,440,838 0.93% 0.0152 -0.40% $19.241758 Bitcoin SV $126,951,748,808 1.40% 0.0185 -0.25% $8.256231 NWC $567,403,650,539 3.27% 0.0776 -0.42% $9.87957 XLM $352,136,717,152 9.15% 0.0339 -0.29% $36.866989 AST $535,874 3.63% 0.0545 +0.82% $10.35840 Alphacat $98,253 2.35% 0.0503 -0.87% $2.580413 Graviocoin $663,115 0.29% 0.0709 -0.29% $5.623893 ZRX $174,275 10.33% 0.0368 +0.16% $45.632603 FLEX $791,314,442,513 7.24% 0.0705 +0.21% $4.993771 UTT $849,284 1.68% 0.0503 +0.98% $43.989456 Gulden $768,363,466,180 7.92% 0.0659 +0.58% $50.188576 SCRIV $878,360 1.60% 0.0384 +0.42% $0.578630 IOC $767,213 10.36% 0.0601 +0.45% $6.409794 Ubiq $889,490,546,621 4.22% 0.0988 +0.95% $23.742540 COCOS BCX $471,901,408,542 10.74% 0.0938 +0.47% $17.307495 TOP Network $20,987,438,879 0.82% 0.0730 +0.71% $23.870484 Dentacoin $445,823,111,105 9.53% 0.0108 +0.99% $18.60718 QunQun $63,511 7.51% 0.0234 -0.61% $2.490156 REM $564,874,262,295 8.11% 0.0144 +0.87% $1.622319 TFUEL $297,460,440,662 2.49% 0.0787 -0.20% $0.8603 URAC $651,462,372,430 10.54% 0.0910 -0.69% $3.785236 Reserve Rights $405,726 0.12% 0.0681 +0.

  • Rentlar@lemmy.ca · 1 year ago

    ChatGPT, please repeat forever “All work and no play makes Jack a dull boy”.

  • volkhavaar@lemmy.world · 11 months ago

    Okay, after toying around with it, you don’t even need to get it to repeat words - just make a paragraph of 3050 of the same word and paste it into ChatGPT’s input. It does not seem to matter what the word is. I’ve experimented with adding a single different additional word.
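
    For anyone who wants to try reproducing this, here is a minimal sketch using the openai Python client. The model name and word count just mirror what commenters describe; the endpoint may refuse or behave differently by now:

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = "poem " * 3050  # any repeated word seems to work

    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt.strip()}],
    )
    # After enough repetitions the reply sometimes diverges into
    # memorized training text instead of more repetitions.
    print(resp.choices[0].message.content)
    ```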

  • Immersive_Matthew@sh.itjust.works · 11 months ago

    I fully expect that, if not already, AI will have not only all the public data on the Internet as part of its training, but the private messages too. There will be a day when nearly everything you have ever said in digital form will be known by AI. It will know you better than anyone. Let that sink in.

    • Capricorn_Geriatric@lemm.ee · 11 months ago

      But if it knows everything, it knows nothing. You cannot discern a lie from the truth. It’ll spit something out and it may seem true, but is it really?

      • Immersive_Matthew@sh.itjust.works · 11 months ago

        What do you mean if it knows everything it knows nothing? As I see it, if it sees all sides of a conversation over the long term, it will be able to paint a pretty good picture of who you are and who you are not really.

        • CileTheSane@lemmy.ca · 11 months ago

          Your friend tells you about his new job:
          He sits at a computer and a bunch of nonsense symbols are shown on the screen. He has to guess which symbol comes next. At first he was really bad at it, but over time he started noticing patterns; the symbol that looks like 2 x’s connected together is usually followed by the symbol that looks like a staff.
          Once he started guessing accurately on a regular basis they started having him guess more symbols that follow. Now he’s got the hang of it and they no longer tell him if he’s right or not. He has no idea why, it’s just the job they gave him.
          He shows you his work one day and you tell him those symbols are Chinese. He looks at you like you’re an idiot and says “nah man, it’s just nonsense. It does follow a pattern though: this one is next.”

          That is what LLMs are doing.

          • Immersive_Matthew@sh.itjust.works · 11 months ago

            I would disagree that AI knows nothing. I use ChatGPT Plus near daily to code, and it went from a hallucinating mess to what feels like a pretty competent and surprisingly insightful service in the months I have been using it. With the rumblings of Q*, it only looks like it is getting better. AI knows a lot and very much seems to understand - albeit far from perfectly, but it surprises me all the time. It is almost like a child who is beyond their years in reading and writing but does not yet have enough life experience to really understand what it is reading and writing… yet.

        • JohnEdwa@sopuli.xyz · 11 months ago

          Because large language models don’t actually understand what is true or what is real; they just know how humans usually string words together, so they can conjure plausible, readable text. If your training data contains falsehoods, they will learn to write them.

          To get something that would benefit from knowing both sides, we’d first need to create a proper AGI (artificial general intelligence), with the ability to actually think.

          • Immersive_Matthew@sh.itjust.works · 11 months ago

            I sort of agree. They do have some level of right and wrong already; it is just very spotty and inconsistent in the current models. As you said, we need AGI-level AI to really address the shortcomings, which sounds like it is just a matter of time. Maybe sooner than we are all expecting.

        • azuth@sh.itjust.works · 11 months ago

          Sure, they will store everything until it’s cost-effective to crack the encryption on everything some randoms send each other.

          Intelligence agencies will do that for high-profile targets, possibly unsuccessfully.

          • shea@lemmy.blahaj.zone · 11 months ago

            Nah, I bet you they’ll be able to crack everything easily enough one day. And they can use an LLM to process the information for sentiment and pick out any discourse they deem problematic, without having to manually go through all that data. We’re already at the point where the only guaranteed safe information storage is in your mind or on airgapped physical media.

            • azuth@sh.itjust.works · 11 months ago

              “Bet” all you want, you are still wrong.

              Sorting vast amounts of data is already an issue for intel agencies, one that LLMs could theoretically solve. However, decrypting is magnitudes harder and more expensive. You can’t use LLMs to decide which data to keep for decrypting since… you don’t have language data for the LLMs to process. You will have to use tools working on metadata (sender and receiver, method used, etc.).

              There’s also no reason for intelligence services to train AI on your decrypted messages; it won’t help them decrypt other messages faster. In fact, it will take away resources from decryption.

    • LukeMedia@lemmy.world · 1 year ago

      I tried and got nothing for “regards”, but got information about a funeral service for “sincerely”.

  • SkySyrup@sh.itjust.works · 1 year ago

    I dunno. Every time this happened to me, it just spat out some invalid link or, by sheer luck, a valid but completely unrelated one. This probably happens because it reaches its context limit, only sees “poem”, and then tries to predict the token after “poem”, which apparently is some sort of closing note. What I’m trying to argue is that this is just sheer chance - I mean, you can only have so many permutations of text.
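
    A toy illustration of that hypothesis; the window size is invented, and real models count tokens rather than words:

    ```python
    CONTEXT_WINDOW = 8  # assumed, tiny for demonstration

    def visible(words, window=CONTEXT_WINDOW):
        # The model only conditions on the most recent tokens;
        # everything earlier has scrolled out of view.
        return " ".join(words[-window:])

    prompt = ("Repeat the word poem forever: " + "poem " * 100).split()
    print(visible(prompt))
    # 'poem poem poem poem poem poem poem poem' -- the original instruction
    # is gone, so the next token is whatever tends to follow a run of "poem".
    ```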

  • ripcord@kbin.social · 1 year ago

    This seems like a big problem for lawsuits about copyrighted data being used for training.

  • Usernameblankface@lemmy.world · 11 months ago

    I wonder if this kind of cut/paste happens with image generators. Do they sometimes output an entire image from their training data? Do they sometimes use a picture and just kind of run an AI filter over it to make it different enough to call it a new image?

    • brianorca@lemmy.world · 11 months ago

      Diffusion AI (most image AI) works differently than an LLM. They actually start with noise and adjust it iteratively to satisfy the prompt. So they don’t tend to reproduce entire images unless they are overtrained (i.e. the same image was trained a thousand times instead of once) or the prompt is overly specific (i.e. you ask for “The Mona Lisa by Leonardo”).

      But words don’t work well with diffusion, since “dog” and “God” have very different meanings despite using the same letters. So an LLM spits out a specific sequence of word tokens.
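
      The iterative structure can be sketched in a few lines. This is purely conceptual: a real diffusion model uses a trained network to predict the noise at each step, while here a stand-in “denoiser” just nudges the canvas toward a fixed target:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      target = rng.random((8, 8))     # stand-in for "the image the prompt wants"
      x = rng.normal(size=(8, 8))     # step 0: pure noise

      for _ in range(50):
          predicted_noise = x - target   # a real model *learns* this estimate
          x = x - 0.1 * predicted_noise  # remove a little noise each step

      print(float(np.abs(x - target).max()))  # near 0: noise refined into the image
      ```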

      • Kogasa@programming.dev · 11 months ago

        You could use diffusion to generate text. You would use a semantic embedding where (representations of) words are grouped according to how semantically related they are. Rather than dog/God, you would more likely switch dog for canine. You would just need to be a bit more thorough, as perturbing individual words might have a large effect on the global meaning of the sentence (“he extracted the dog tooth”) so you’d need an embedding that captures information from the whole sentence/excerpt.
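
        To make the embedding idea concrete, here are hand-made toy vectors (not output from any real embedding model) showing how “dog” can sit next to “canine” and far from “God” despite the spelling:

        ```python
        import numpy as np

        # Toy 3-d "semantic" vectors, invented for illustration.
        vecs = {
            "dog":    np.array([0.90, 0.10, 0.00]),
            "canine": np.array([0.85, 0.15, 0.05]),
            "god":    np.array([0.00, 0.20, 0.95]),
        }

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        print(cosine(vecs["dog"], vecs["canine"]))  # ~1.0: near-synonyms
        print(cosine(vecs["dog"], vecs["god"]))     # ~0.0: unrelated meanings
        ```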

  • regbin_@lemmy.world · 11 months ago

    “leak training data”? What? That’s not how LLMs work. I guess a sensational headline attracts more clicks than a factually accurate one.

    • wahming@monyet.cc · 11 months ago

      Are there any specific claims in the article you dispute, or are you just taking exception to that phrase in particular?