• The Rabbit R1 AI box is essentially an Android app running on limited $200 hardware, built on AOSP without Google Play.
  • Rabbit Inc. is unhappy that details of its tech stack are public and is threatening action against unauthorized emulators.
  • AOSP is a logical choice for mobile hardware, as it provides the essential functionality without the need for Google Play.
  • TurboWafflz@lemmy.world · +242/-1 · 6 months ago

    It’s so weird how they’re just insisting it isn’t an android app even though people have proven it is. Who do they expect to believe them?

    • rtxn@lemmy.world · +138/-5 · 6 months ago

      The same question was asked a million times during the crypto boom. “They’re insisting that [some-crypto-project] is a safe passive income when people have proven that it’s a ponzi scheme. Who do they expect to believe them?” And the answer is, zealots who made crypto (or in this case, AI) the basis of their entire personality.

    • Fisk400@feddit.nu · +49 · 6 months ago

      Their target audience are the most gullible tech evangelists in the world that think AI is magic. If there was a limit to the lies those people are willing to believe, they wouldn’t be buying the thing to begin with.

      • capital@lemmy.world · +8 · 6 months ago

        This will flop though. So will the stupid Humane pin.

        Either there are very few people that gullible or that group isn’t quite as gullible as you think.

      • Veraxus@lemmy.world · +6 · 6 months ago

        You know, pairing an LLM with Playwright is actually a pretty great idea. But that’s something I can totally roll on my own.
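
        Something like this, for example; a rough sketch only, where the OpenAI client and the “gpt-4o-mini” model name are stand-ins for whichever LLM you’d actually pair with Playwright’s Python API:

        ```python
        # Rough sketch: an LLM decides which URL to open, Playwright drives the browser.
        # Assumes `pip install playwright openai` and `playwright install chromium`;
        # the model name and prompts are illustrative, not anyone's actual stack.
        from openai import OpenAI
        from playwright.sync_api import sync_playwright

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def ask_llm_for_url(task: str) -> str:
            """Ask the LLM to translate a plain-English task into a single URL to visit."""
            reply = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user",
                           "content": f"Reply with only a URL that best accomplishes: {task}"}],
            )
            return reply.choices[0].message.content.strip()

        def run(task: str) -> str:
            url = ask_llm_for_url(task)
            with sync_playwright() as p:
                browser = p.chromium.launch(headless=True)
                page = browser.new_page()
                page.goto(url)
                text = page.inner_text("body")[:2000]  # grab some page text to summarize
                browser.close()
            summary = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user",
                           "content": f"Summarize this page for the task '{task}':\n{text}"}],
            )
            return summary.choices[0].message.content

        if __name__ == "__main__":
            print(run("compare prices for a 256 GB SanDisk memory card"))
        ```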

    • sickhack@lemmy.world · +34 · 6 months ago

      It’s the Juicero strategy.

      “You can’t squeeze our juice packs! Only our special machine can properly squeeze our juice packs for optimal taste!”

      • wjrii@lemmy.world · +15/-1 · 6 months ago

        Ahh, the good ol’ days, before we knew how batshit AvE was.

        • quantumantics@lemmy.world · +11 · 6 months ago

          I’m assuming you’re talking about the YouTuber; it’s been since before the pandemic that I last watched AvE. What did he do?

          • wjrii@lemmy.world · +15 · 6 months ago

            Leaned hard into anti-vax and sympathizing with the Canadian trucker protests, and made it a fairly prominent part of his videos. Not entirely surprising that he held some of the views, but he got high on his own LIBERTARIAN!!! supply and started thinking that if he thought it, his audience must want to hear it.

      • capital@lemmy.world · +4 · 6 months ago

        Reviewer proceeds to squeeze more juice out with their hands than the machine managed.

    • will_a113@lemmy.ml · +8 · 6 months ago

      Investors who don’t bother reading past the letters A and I in the prospectus.

    • Anamana@feddit.de · +17/-13 · 6 months ago

      They have thought of a specific design for the device using its own interaction modality and created a product that is more than just software.

      Therefore I don’t get why people refer to it as just an app. Does it make it worth less because it runs on Android? Many devices, e.g. e-readers, are just Android apps as well. If it works, it works.

      In this case it doesn’t, so why not focus on that?

      • NegativeInf@lemmy.world · +26 · edited · 6 months ago

        The point being, they are charging 200 bucks for hardware that is superfluous and low-end, wrapped around an incomplete software experience that could be delivered as a plain app. The question is, are you going to give up your smartphone for this new device? Are you going to carry both? Probably not.

        “It can do 10% of the shit your phone can do, only slower, on a smaller screen, with its own data connection, and inaccurately because you have to hope that our “AI” is sufficiently advanced to understand a command, take action on that command, and respond in a short amount of time. And that’s not to even speak about the horrible privacy concerns or that it’s a brick without connection!”

        Everything about this project seems lackluster at best, other than maybe the aesthetic design from teenage engineering, but even then, their design work seems a bit repetitive. But that may be due to how the company is asking for the work. “We wanna be like Nothing and Playdate!!” “I gotchu fam!”

        To address your point about e-readers, they have specific use cases. Long battery lives, large, efficient e-ink displays, and the convenience of having all your books, or a large subset, available to you offline! But when those things aren’t a concern, yea, an app will do.

        Like with most contemporary product launches, I simply find myself asking, “Who is this for?”

        • HelterSkeletor@lemmy.world · +3 · 6 months ago

          They’ve said they are working on integration with other apps, and have said the ultimate goal is the AI could create its own interface for any app. I dunno if that’s gonna happen but if it did it would be closer to an actual assistant, imagine “rabbit, log onto my work schedule app and check my vacation hours” or “rabbit, compare prices for a SanDisk 256 gig memory card on Amazon, eBay, and Newegg”.

          More than likely it’ll just fuck it all up but that’s the dream I think.

        • nonfuinoncuro@lemm.ee · +3 · 6 months ago

          I mean I have an eReader but most of the time I’m too lazy to go find it and my Kindle app works just fine. I am eyeing those eink phones though…

        • Anamana@feddit.de · +1/-10 · 6 months ago

          It’s an experimental device and by buying it you invest in R&D. It’s not meant to replace a smartphone as of now, but similar devices eventually will.

          My point stands, because they are offering a completely new (but obviously lacking) experience with novel design solutions. What they made is a toy, which is not really unusual for teenage engineering. But if they do as they did with other devices in the past, this thing might actually rock in the future. They are not inexperienced and usually offer super long support for their devices.

          TE is way older than Nothing and Playdate btw…

          • aesthelete@lemmy.world · +7 · edited · 6 months ago

            It’s an experimental device and by buying it you invest in R&D.

            This is laughably untrue. By buying this you’ve proven to them that their marketing oriented approach to product development is correct, and that customers will throw away good money on half-designed, disposable shit.

            By the looks of this shitty project, they spent most of their money on design idiots that think they’re the next coming of Steve Jobs, and blathering marketing morons that think if they say AI and “the future” enough that it doesn’t matter that the products they actually deliver are half-done, also-ran, clout-chasing garbage with hardware from the clearance section of Alibaba.

          • conciselyverbose@sh.itjust.works · +1 · 6 months ago

            No, they won’t. Because it’s just a shitty downgraded smartphone controlled by a super shady company with massive security and privacy concerns.

      • capital@lemmy.world · +9 · 6 months ago

        Why even try to sell me another device though?

        Anything and everything this square does, my phone can do better already and has the added benefit of already being in my pocket and not a pain in the ass to use.

        • Anamana@feddit.de · +2/-10 · edited · 6 months ago

          Because, you know, technological development? Someone has to fund R&D, because it’s not cheap. And in 10 years everyone will have similar AI-enhanced devices. No one thought smartphones would make it back in the day either. And I’m already looking forward to the time when I don’t have to look down anymore to get information.

          • aesthelete@lemmy.world · +12 · edited · 6 months ago

            And in 10 years everyone will have similar AI-enhanced devices.

            In 10 years (or actually 0 years because it’s already kinda true) people will have an AI enhanced device… And it’ll be their phone.

            Also, you’re arguing something I’m going to name the inevitability fallacy (for my own amusement). It’s not inevitable that everyone will have one of these particular type of devices in the same way it wasn’t inevitable that everyone would start watching 3d TV in their houses.

            This is just another in a long line of things that supply side economics driven companies are trying to sell us. There’s next to no need or demand for this thing, and there’s no guarantee that there will be.

      • conciselyverbose@sh.itjust.works · +2 · 6 months ago

        No, they’re not.

        An ereader is a piece of hardware that has a distinct purpose that cannot be matched by other hardware (high quality, high contrast, low power draw static content). Some of them do run Android, and that’s a huge value add. But the actual hardware is the reason it exists.

        This is just a dogshit Android phone. There is no unique hardware niche it’s filling. It’s an extremely obvious scam that is very obviously massively downgraded in all of value, utility, and performance by being forced onto separate hardware.

      • Num10ck@lemmy.world · +5/-20 · 6 months ago

        My Honda is just Android software too, if that’s the only part you look at.

        • macrocephalic@lemmy.world · +17 · 6 months ago

          This is more like someone offering a “brand new method of personal travel” to replace your car, but it turns out that it’s just an old Honda with only one seat, a fuel tank that only holds 10L, and a custom navigation app. There’s nothing it does that your Honda can’t do better, and you won’t want to replace your Honda with this.

          • Num10ck@lemmy.world · +1/-16 · 6 months ago

            True, but we all have tons of successful devices that are secretly like this: smart doorbells and flood lights and watches, etc. We have also all seen terrible ones. It’s the implementation that isn’t magical.

            • FlorianSimon@sh.itjust.works · +3/-1 · 6 months ago

              A lot of those smart devices are nothing but a waste of rare earth elements. I don’t think switching on your lights remotely, or starting your car engine with an app are “features”. This is consumerist bullshit that we can very well live without any meaningful change in quality of life.

              There are disruptors that truly bring something new to the table, and then you have smart dildos.

        • grue@lemmy.world · +4 · 6 months ago

          No it’s not. Your Honda has several different computers in it, only one of which is likely to be running Android.

    • MonkderDritte@feddit.de · +2/-4 · 6 months ago

      ‘Android’ is a certification with requirements around preinstalled Google apps and homescreen links, so there’s that.

  • hark@lemmy.world · +136/-2 · 6 months ago

    The AI boom in a nutshell. Repackaged software and content with a shiny AI coat of paint. Even the AI itself is often just repackaged ChatGPT.

    • FlorianSimon@sh.itjust.works · +9 · 6 months ago

      Repackaging ChatGPT is arguably a very nice potential value add, because going to a website is not always very convenient. But it needs to be done right to convince users to use a new method to access ChatGPT instead of just using their website.

    • tabarnaski@sh.itjust.works · +2/-9 · 6 months ago

      What’s interesting about this device is that it (supposedly) learns how apps work and how people use them, so if you ask it something that requires using an app it could do it.

      So while it might be “just an android app”, if it does what’s advertised that would be impressive.

      • hark@lemmy.world · +10 · 6 months ago

        Apps are designed to be easy to use. If this device works as advertised (and that’s a huge if), then it wouldn’t offer much in the way of convenience anyway. From what I’ve been reading, it doesn’t work well at all.

    • nonfuinoncuro@lemm.ee · +11/-20 · 6 months ago

      Perplexity for this device. Still, excited to get my pre-order, if only to add to my teenage engineering collection.

        • voxel@sopuli.xyz · +8 · edited · 6 months ago

          must be a cool device to jailbreak and mess around with just for the sake of it tho
          it has a very unique form factor after all

      • ChaoticNeutralCzech@feddit.de · +8/-1 · 6 months ago

        Unless you have tons of money, why preorder? Just wait for the company to inevitably go under and people start reselling their now-useless devices, and then scoop up as many as you want from eBay. Even if the company survives for a while, the functionality is so underwhelming that owners might start getting rid of them way sooner.

  • Felix@lemmy.ml · +104 · 6 months ago

    I heard someone even leaked the APK. LMAO, it’s hilarious that your 200 dollar product can literally be pirated.

  • De_Narm@lemmy.world · +117/-14 · 6 months ago

    Why are there AI boxes popping up everywhere? They are useless. How many times do we need to repeat that LLMs are trained to give convincing answers, not correct ones? I’ve gained nothing from asking this glorified e-waste something and then pulling out my phone to verify it.

    • cron@feddit.de · +58 · 6 months ago

      What I don’t get is why anyone would like to buy a new gadget for some AI features. Just develop a nice app and let people run it on their phones.

      • no banana@lemmy.world · +27 · edited · 6 months ago

        That’s why though. Because they can monetize hardware. They can’t monetize something a free app does.

        • knotthatone@lemmy.one · +9 · 6 months ago

          Plenty of free apps get monetized just fine. They just have to offer something people want to use that they can slather ads all over. The AI doo-dads haven’t shown they’re useful. I’m guessing the dedicated hardware strategy got them more upfront funding from stupid venture capital than an app would have, but they still haven’t answered why anybody should buy these. Just postponing the inevitable.

    • exanime@lemmy.today · +23/-1 · 6 months ago

      The answer is “marketing”

      They have pushed AI so hard in the last couple of years they have convinced many that we are 1 year away from Terminator travelling back in time to prevent the apocalypse

      • sudo42@lemmy.world · +6 · 6 months ago
        • Incredible levels of hype
        • Tons of power consumption
        • Questionable utility
        • Small but very vocal fanbase

        s/Crypto/AI/

    • Blackmist@feddit.uk · +11 · 6 months ago

      Because money, both from tech hungry but not very savvy consumers, and the inevitable advertisers that will pay for the opportunity for their names to be ejected from these boxes as part of a perfectly natural conversation.

    • TrickDacy@lemmy.world · +5/-1 · 6 months ago

      I have now heard of my first “ai box”. I’m on Lemmy most days. Not sure how it’s an epidemic…

      • De_Narm@lemmy.world · +10 · 6 months ago

        I haven’t seen much of them here, but I use other media too. E.g., not long ago there was a lot of coverage about the “Humane AI Pin”, which was utter garbage and even more expensive.

    • XEAL@lemm.ee · +11/-7 · 6 months ago

      It’s not black or white.

      Of course AI hallucinates, but not everything an LLM produces is garbage.

      Don’t expect a “living” Wikipedia or Google, but it sure can help with things like coding or translating.

      • De_Narm@lemmy.world · +9 · 6 months ago

        I don’t necessarily disagree. You can certainly use LLMs and achieve something in less time than without it. Numerous people here are speaking about coding and while I had no success with them, it can work with more popular languages. The thing is, these people use LLMs as a tool in their process. They verify the results (or the compiler does it for them). That’s not what this product is. It’s a standalone device which you talk to. It’s supposed to replace pulling out your phone to answer a question.

      • Paradox@lemdro.id · +3 · 6 months ago

        I quite like Kagi’s Universal Summarizer, for example. It lets me know if a long-ass YouTube video is worth watching.

      • Croquette@sh.itjust.works · +3/-1 · 6 months ago

        I use LLMs as a starting point to research new subjects.

        The Google/DDG search quality is hot garbage, so an LLM at least gives me the terminology to be more precise in my searches.

    • BaroqueInMind@lemmy.one · +4/-2 · edited · 6 months ago

      There is a fuck ton of money laundering coming from China nowadays, and they invest millions in any stupid tech-bro idea to dump their illegal cash.

    • OneOrTheOtherDontAskMe@lemmy.world · +4/-3 · 6 months ago

      I just started diving into the space from a localized point yesterday. And I can say that there are definitely problems with garbage spewing, but some of these models are getting really really good at really specific things.

      A biomedical model I saw seemed to be lauded for its consistency in pulling relevant data from medical notes for the sake of patient care instructions, important risk factors, fall risk level, etc.

      So although I agree they’re still giving well phrased garbage for big general cases (and GPT4 seems to be much more ‘savvy’), the specific use cases are getting much better and I’m stoked to see how that continues.

    • Blue_Morpho@lemmy.world · +7/-6 · 6 months ago

      I think it’s a delayed development reaction to Amazon Alexa from 4 years ago. Alexa came out, voice assistants were everywhere. Someone wanted to cash in on the hype but consumer product development takes a really long time.

      So product is finally finished (mobile Alexa) and they label it AI to hype it as well as make it work without the hard work of parsing wikipedia for good answers.

      • AIhasUse@lemmy.world · +9/-5 · 6 months ago

        Alexa is a fundamentally different architecture from the LLMs of today. There is no way that anyone with even a basic understanding of modern computing would say something like this.

        • Blue_Morpho@lemmy.world · +4/-4 · edited · 6 months ago

          Alexa is a fundamentally different architecture from the LLMs of today.

          Which is why I explicitly said they used AI (LLM) instead of the harder to implement but more accurate Alexa method.

          Maybe actually read the entire post before being an ass.

    • MxM111@kbin.social · +8/-28 · 6 months ago

      The best convincing answer is the correct one. The correlation of AI answers with correct answers is fairly high. Numerous tests show that. The models have also significantly improved (especially the paid versions) since their introduction just 2 years ago.
      Of course that does not mean it can be trusted as much as Wikipedia, but it is probably a better source than Facebook.

      • De_Narm@lemmy.world · +21/-3 · 6 months ago

        “Fairly high” is still useless (and doesn’t actually quantify anything, depending on context both 1% and 99% could be ‘fairly high’). As long as these models just hallucinate things, I need to double-check. Which is what I would have done without one of these things anyway.

        • AIhasUse@lemmy.world · +3/-8 · 6 months ago

          Hallucinations are largely dealt with if you use agents. It won’t be long until it gets packaged well enough that anyone can just use it. For now, it takes a little bit of effort to get a decent setup.

        • TrickDacy@lemmy.world · +2/-9 · 6 months ago

          1% correct is never “fairly high” wtf

          Also if you want a computer that you don’t have to double check, you literally are expecting software to embody the concept of God. This is fucking stupid.

          • De_Narm@lemmy.world · +11/-2 · edited · 6 months ago

            1% correct is never “fairly high” wtf

            It’s all about context. Asking a bunch of 4 year olds questions about trigonometry, 1% of answers being correct would be fairly high. ‘Fairly high’ basically only means ‘as high as expected’ or ‘higher than expected’.

            Also if you want a computer that you don’t have to double check, you literally are expecting software to embody the concept of God. This is fucking stupid.

            Hence, it is useless. If I cannot expect it to be more or less always correct, I can skip using it and just look stuff up myself.

            • TrickDacy@lemmy.world · +1/-11 · 6 months ago

              Obviously the only contexts that would apply here are ones where you expect a correct answer. Why would we be evaluating software that claims to be helpful against a 4-year-old asked to do calculus? I have to question your ability to reason for insinuating this.

              So confirmed. God or nothing. Why don’t you go back to quills? Computers cannot read your mind and write this message automatically, hence they are useless

              • De_Narm@lemmy.world · +7/-1 · 6 months ago

                Obviously the only contexts that would apply here are ones where you expect a correct answer.

                That’s the whole point, I don’t expect correct answers. Neither from a 4 year old nor from a probabilistic language model.

                • TrickDacy@lemmy.world · +1/-8 · 6 months ago

                  And you don’t expect a correct answer because it isn’t 100% of the time. Some lemmings are basically just clones of Sheldon Cooper

          • SpaceNoodle@lemmy.world · +6/-1 · 6 months ago

            Perhaps the problem is that I never bothered to ask anything trivial enough, but you’d think that two rhyming words starting with “L” would be simple.

            • CaptDust@sh.itjust.works · +2 · 6 months ago

              “AI” is a really dumb term for what we’re all using currently. General LLMs are not intelligent; they’re assigning priorities to tokens (words) in a database, based on what tokens were provided before, to compare and guess the next most logical word or phrase, really really fast. Informed guesses, sure, but there aren’t enough parameters to consider all the factors required to identify a rhyme.

              That said, honestly I’m struggling to come up with 2 rhyming L words? Lol even rhymebrain is failing me. I’m curious what you went with.
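
              For illustration, this is roughly what “guess the next most likely token” looks like with an off-the-shelf model (GPT-2 via the Hugging Face transformers library, used purely as a stand-in and not as anything the R1 actually runs):

              ```python
              # Sketch of next-token prediction with GPT-2 (a stand-in model, not the R1's).
              import torch
              from transformers import AutoModelForCausalLM, AutoTokenizer

              tokenizer = AutoTokenizer.from_pretrained("gpt2")
              model = AutoModelForCausalLM.from_pretrained("gpt2")

              prompt = "Two rhyming words that start with L are"
              inputs = tokenizer(prompt, return_tensors="pt")

              with torch.no_grad():
                  logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

              # Turn the scores for the next position into probabilities and show the top guesses.
              next_token_probs = torch.softmax(logits[0, -1], dim=-1)
              top = torch.topk(next_token_probs, k=5)
              for prob, token_id in zip(top.values, top.indices):
                  print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
              ```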

            • MxM111@kbin.social · +2/-2 · 6 months ago

              OK, so by “asking” you mean that you find questions somewhere that someone has already identified as being answered wrongly by an LLM, and ask them yourself.

        • magic_lobster_party@kbin.run · +5/-3 · 6 months ago

          I’ve asked GPT4 to write specific Python programs, and more often than not it does a good job. And if the program is incorrect I can tell it about the error and it will often manage to fix it for me.

          • FlorianSimon@sh.itjust.works · +2/-1 · 6 months ago

            You have every right not to, but the “useless” word comes out a lot when talking about LLMs and code, and we’re not all arguing in bad faith. The reliability problem is still a strong factor in why people don’t use this more, and, even if you buy into the hype, it’s probably a good idea to temper your expectations and try to walk a mile in the other person’s shoes. You might get to use LLMs and learn a thing or two.

            • TrickDacy@lemmy.world · +1/-2 · 6 months ago

              I only “believe the hype” because a good developer friend of mine suggested I try copilot so I did and was impressed. It’s an amazing technical achievement that helps me get my job done. It’s useful every single day I use it. Does it do my job for me? No of fucking course not, I’m not a moron who expected that to begin with. It speeds up small portions of tasks and if I don’t understand or agree with its solution, it’s insanely easy not to use it.

              People online mad about something new is all this is. There are valid concerns about this kind of tech, but I rarely see that. Ignorance on the topic prevails. Anyone calling ai “useless” in a blanket statement is necessarily ignorant and doesn’t really deserve my time except to catch a quick insult for being the ignorant fool they have revealed themselves to be.

              • FlorianSimon@sh.itjust.works · +3/-1 · 6 months ago

                I’m glad that you’re finding this useful. When I say it’s useless, I speak in my name only.

                I’m not afraid to try it out, and I actually did, and, while I was impressed by the quality of the English it spits out, I was disappointed with the actual substance of the answers, which makes this completely unusable for me in my day to day life. I keep trying it every now and then, but it’s not a service I would pay for in its current state.

                Thing is, I’m not the only one. This is the opinion of the majority of people I work with, senior or junior. I’m willing to give it some time to mature, but I’m unconvinced at the moment.

                • TrickDacy@lemmy.world · +1/-2 · edited · 6 months ago

                  You would need to be pulling some trickery on Microsoft to get access to copilot for more than a single 30 day trial so I’m skeptical you’ve actually used it. Sounds like you’re using other products which may be much worse. It also sounds like you work in a conservative shop. Good luck with that

      • dimeslime@lemmy.ca · +15 · 6 months ago

        It’s a shortcut for experience, but you lose a lot of the tools you get with experience. If I were early in my career I’d be very hesitant to rely on it, as it’s a fragile ecosystem right now that might disappear, in the same way that you want to avoid tying your skills to a single company’s product. In my workflow it slows me down because the answers I get are often average or wrong; it’s never “I’d never have thought of doing it that way!” levels of amazing.

      • Bahnd Rollard@lemmy.world · +14/-3 · 6 months ago

        You used the right tool for the job; it saved you from hours of work. General AI is still a very long way off and people expecting the current models to behave like one are foolish.

        Are they useless? For writing code, no. For most other tasks, yes, or worse, as they will be confidently wrong about what you ask them.

        • Semi-Hemi-Demigod@kbin.social · +11 · 6 months ago

          I think the reason they’re useful for writing code is that there’s a third party - the parser or compiler - that checks their work. I’ve used LLMs to write code as well, and it didn’t always get me something that worked but I was easily able to catch the error.
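
          A minimal sketch of that loop, with Python’s own parser acting as the third party (the OpenAI client and model name are placeholders for whichever LLM you use):

          ```python
          # Sketch: generate code with an LLM and let Python's parser be the third-party check.
          # Model name and prompts are illustrative; in practice you'd also strip markdown fences.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          def generate_checked_code(task: str, max_attempts: int = 3) -> str:
              prompt = f"Write a single self-contained Python script that does this: {task}"
              for _ in range(max_attempts):
                  reply = client.chat.completions.create(
                      model="gpt-4o-mini",
                      messages=[{"role": "user", "content": prompt}],
                  )
                  code = reply.choices[0].message.content
                  try:
                      compile(code, "<llm-output>", "exec")  # the parser catches syntax errors
                      return code
                  except SyntaxError as err:
                      # Feed the error back, the same way you'd paste it into a chat by hand.
                      prompt = f"This code fails with {err!r}. Fix it and reply with only code:\n{code}"
              raise RuntimeError("The model never produced syntactically valid code")
          ```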

        • TrickDacy@lemmy.world · +3/-11 · 6 months ago

          Are they useless?

          Only if you believe most Lemmy commenters. They are convinced you can only use them to write highly shitty and broken code and nothing else.

          • Bahnd Rollard@lemmy.world · +8/-2 · 6 months ago

            This is my experience with LLMs; I have gotten them to write me code that can at best be used as a scaffold. I personally do not find much use for them, as you functionally have to proofread everything they do. All it does is change the workload from a creative process to a review process.

            • TrickDacy@lemmy.world · +5/-5 · 6 months ago

              I don’t agree. Just a couple of days ago I went to write a function to do something sort of confusing to think about. By the name of the function, copilot suggested the entire contents of the function and it worked fine. I consider this removing a bit of drudgery from my day, as this function was a small part of the problem I needed to solve. It actually allowed me to stay more focused on the bigger picture, which I consider the creative part. If I were a painter and my brush suddenly did certain techniques better, I’d feel more able to be creative, not less.

              • FlorianSimon@sh.itjust.works · +1/-1 · 6 months ago

                I would argue that there just isn’t much gain in terms of speed of delivery, because you have to proofread the output - not doing it is irresponsible and unprofessional.

                I don’t tend to spend much time on a single function, but I can remember a time recently where I spent two hours writing a single function. I had to mentally run all cases to check that it worked, but I would have had to do it with LLM output anyway. And I feel like reviewing code is just much harder to do right than to write it right.

                In my case, LLMs might have saved some time, but training the complexity muscle has value in itself. It’s pretty formative and there are certain things I would do differently now after going through this. Most notably, in that case: fix my data format upfront to avoid edge cases altogether and save myself some hard thinking.

                I do see the value proposition of IDEs generating things like constructors, and sometimes use such features, but reviewing the output is mentally exhausting, and it’s necessary because even non-LLM output sometimes comes out broken. Even assuming it worked 100% of the time, I’m still not convinced it amounts to much time saved at the end of the day.

            • TrickDacy@lemmy.world · +2/-3 · 6 months ago

              So you want me to go into one of my codebases, remember what came from copilot and then paste it here? Lol no

      • FlorianSimon@sh.itjust.works · +5/-1 · edited · 6 months ago

        This is not really a slam dunk argument.

        First off, this is not the kind of code I write on my end, and I don’t think I’m the only one not writing scripts all day. There’s a need for scripts at times in my line of work but I spend more of my time thinking about data structures, domain modelling and code architecture, and I have to think about performance as well. Might explain my bad experience with LLMs in the past.

        I have actually written similar scripts in comparable amounts of time (a day for a working proof of concept that could have gone to production as-is) without LLMs. My use case was to parse JSON crash reports from a provider (undisclosable due to NDAs) and serialize them to my company’s binary format. A significant portion of that time was spent on deciding what I cared about and what JSON fields I should ignore. I could have used ChatGPT to find the command line flags for my Docker container but it didn’t exist back then, and Google helped me just fine.

        Assuming you had to guide the LLM throughout the process, this is not something that sounds very appealing to me. I’d rather spend time improving on my programming skills than waste that time teaching the machine stuff, even for marginal improvements in terms of speed of delivery (assuming there would be some, which I just am not convinced is the case).

        On another note…

        There’s no need for snark, just detailing your experience with the tool serves your point better than antagonizing your audience. Your post is not enough to convince me this is useful (because the answers I’ve gotten from ChatGPT have been unhelpful 80% of the time), but it was enough to get me to look into AutoGen Studio which I didn’t know about!

      • sudo42@lemmy.world · +4/-1 · 6 months ago

        Who’s going to tell them that “QA” just ran the code through the same AI model and it came back “Looks Good”.

        :-)

      • knotthatone@lemmy.one · +3 · 6 months ago

        I don’t think LLMs are useless, but I do think little SoC boxes running a single application that will vaguely improve your life with loosely defined AI features are useless.

              • best_username_ever@sh.itjust.works · +4 · edited · 6 months ago

                In one of those weird return None combinations. Also, I don’t get why it insists on using try/catch all the time. Last but not least, it should have been one script only, with sub-commands using argparse; that way you could refactor most of the code.

                Also weird license, overly complicated code, not handling HTTPS properly, passwords in ENV variables, not handling errors, a strange retry mechanism (copy pasted I guess).

                It’s like a bad hack written in a hurry, or something a junior would write. Something that should never be used in production. My other gripe is that OP didn’t learn anything and wasted his time. Next time he’ll do that again and won’t improve. It’s good if he’s doing that alone, but in a company I would have to fix all this and it’s really annoying.
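
                For reference, the argparse sub-command layout being suggested looks roughly like this (the “fetch” and “upload” commands are hypothetical, just to show the shape):

                ```python
                # Sketch of one script with sub-commands instead of several copy-pasted scripts.
                # The "fetch"/"upload" commands are made up; the point is the shared structure.
                import argparse

                def cmd_fetch(args: argparse.Namespace) -> None:
                    print(f"fetching {args.url}")

                def cmd_upload(args: argparse.Namespace) -> None:
                    print(f"uploading {args.path}")

                def main() -> None:
                    parser = argparse.ArgumentParser(prog="tool")
                    sub = parser.add_subparsers(dest="command", required=True)

                    fetch = sub.add_parser("fetch", help="download something")
                    fetch.add_argument("url")
                    fetch.set_defaults(func=cmd_fetch)

                    upload = sub.add_parser("upload", help="upload something")
                    upload.add_argument("path")
                    upload.set_defaults(func=cmd_upload)

                    args = parser.parse_args()
                    args.func(args)  # shared setup/teardown lives here instead of per-script

                if __name__ == "__main__":
                    main()
                ```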

      • AIhasUse@lemmy.world · +5/-18 · 6 months ago

        There’s no sense trying to explain to people like this. Their eyes glaze over when they hear AutoGen, agents, CrewAI, RAG, Opus… To them, generative AI is nothing more than the free version of ChatGPT from a year ago; they’ve not kept up with the advancements, so they argue from a point in the distant past. The future will be hitting them upside the head soon enough and they will be the ones complaining that nobody told them what was coming.

        • FlorianSimon@sh.itjust.works · +6/-1 · 6 months ago

          Thing is, if you want to sell the tech, it has to work, and what most people have seen by now is not really convincing (hence the copious amount of downvotes you’ve received).

          You guys sound like fucking cryptobros, which will totally replace fiat currency next year. Trust me bro.

          • AIhasUse@lemmy.world · +1/-4 · 6 months ago

            Downvotes by a few uneducated people mean nothing. The tools are already there. You are free to use them and think about this for yourself. I’m not even talking about what will be here in the future. There is some really great stuff right now. Even if doing some very simple setup is too daunting for you, you can just watch people on youtube doing it to see what is available. People in this thread have literally already told you what to type into your search box.

            In the early 90s, people exactly like you would go on and on about how stupid the computerbros were for thinking anyone would ever use this new stupid “internet” thing. You do you; it is totally fine if you think that because a handful of uneducated, vocal people on the internet agree with you, technology has mysteriously frozen for the first time in history and you must all be right.

            • FlorianSimon@sh.itjust.works · +4 · edited · 6 months ago

              If everybody in society “votes” that kind of stuff “down”, the hype will eventually die down and, once the dust has settled, we’ll see what this is really useful for. Right now, it can’t even do fucking chatbots right (see the Air Canada debacle with their AI chatbot).

              Not every invention is as significant as the Internet. There are things like crypto which are the butt of every joke in the tech community, and people peddling that shit are mocked by everyone.

              I honestly don’t buy that we’re on the edge of a new revolution, or that LLMs are close to true AGI. Techbros have been pushing a lot of shit that is not in alignment with regular folks’ needs for the past 10 years, and have kept tech alive artificially without interest from the general population because of venture capital.

              However, in the case of LLMs, the tech is interesting and is already delivering modest value. I’ll keep an eye on it because I see a modest future for it, but it just might not be as culturally significant as you think it may be.

              With all that said, one thing I will definitely not do is spend any time setting things up locally, running an LLM on my machine, or paying any money. I don’t think this gives a competitive edge to any software engineer yet, and I’m not interested in becoming an early adopter of the tech given the mediocre results I’ve seen so far.

        • GluWu@lemm.ee · +3/-11 · 6 months ago

          They aren’t trying to have a conversation, they’re trying to convince themselves that the things they don’t understand are bad and that they’re making the right choice by not using it. They’ll be the boomers that needed millennials to send emails for them. Been through that, so I just pretend I don’t understand AI. I feel bad for the zoomers and gen alphas that will be running AI and futilely trying to explain how easy it is. It’s been a solid 150 years of extremely rapid invention and innovation of disruptive technology. But THIS is the one that actually won’t be disruptive.

          • FlorianSimon@sh.itjust.works · +4/-1 · 6 months ago

            I’m not trying to convince myself of anything. I was very happy to try LLM tools for myself. They just proved to be completely useless. And there’s a limit to what I’m going to do to try out things that just don’t seem to work at all. Paying a ton of money to a company to use disproportionate amounts of energy for uncertain results is not one of them.

            Some people have misplaced confidence with generated code because it gets them places they wouldn’t be able to reach without the crutches. But if you do things right and review the output of those tools (assuming it worked more often), then the value proposition is much less appealing… Reviewing code is very hard and mentally exhausting.

            And look, we don’t all do CRUD apps or scripts all day.

            • AIhasUse@lemmy.world · +1/-4 · 6 months ago

              Tell me about when you used Llama 3 with AutoGen locally, and how in the world you managed to pay a large company to use disproportionate amounts of energy for it. You clearly have no idea what is going on at the edge of this tech. You think that because you made an OpenAI account, you now know everything that’s going on. You sound like an AOL user in the 90s who thinks the internet has no real use.

              • FlorianSimon@sh.itjust.works · +3/-1 · 6 months ago

                I don’t care about the edge of that tech. I’m not interested in investing any time making it work. This is your problem. I need a product I can use as a consumer. Which doesn’t exist, and may never exist because the core of the tech alone is unsound.

                You guys make grandiloquent claims that this will automate software engineering and be everywhere more generally. Show us proof. What we’ve seen so far is ChatGPT (lol), Air Canada’s failures to create working AI chatbots (lol), a creepy plushie and now this shitty device. Skepticism is rationalism in this case.

                Maybe this will change one day? IDK. All I’ve been saying is that it’s not ready yet from what I’ve seen (prove me wrong with concrete examples in the software engineering domain) and given that it tends to invent stuff that just doesn’t exist, it’s unreliable. If it succeeds, LLMs will be part of a whole delivering value.

                You guys sound like Jehovah’s Witnesses. Get a hold of yourselves if you want to be taken seriously. All I see here is hyperbole from tech bros without any proof.

                • AIhasUse@lemmy.world · +1/-3 · 6 months ago

                  You’re just saying that you will only taste free garbage wine, and nobody can convince you that expensive wine could ever taste good. That’s fine, you’ll just be surprised when the good wine gets cheap enough for you to afford or free. Your unwillingness to taste it has nothing to do with what already exists. In this case, it’s especially naive since you could just go watch videos of people using actually good wine.

  • deafboy@lemmy.world · +84/-3 · 6 months ago

    I’m confused by this revelation. What did everybody think the box was?

    • casual_turtle_stew_enjoyer@sh.itjust.works · +26 · 6 months ago

      Magic

      In all reality, it is a ChatGPTitty "fine"tune on some datasets they cobbled together for VQA and Android app UI driving. They did the initial test finetune, then apparently the CEO or whatever was drooling over it and said “lEt’S mAkE aN iOt DeViCe GuYs!!1!” after their paltry attempt to racketeer an NFT metaverse game.

      Neither this nor Humane do any AI computation on device. It would be a stretch to say there’s even a possibility that the speech recognition could be client-side, as they are always-connected devices that are even more useless without Internet than they already are with.

      Make no mistake: these money-hungry fucks are only selling you food cans labelled as magic beans. You have been warned and if you expect anything less from them then you only have your own dumbass to blame for trusting Silicon Valley.

      • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.dbzer0.com · +2 · 6 months ago

        If the Humane could recognise speech on-device, and didn’t require its own data plan, I’d be reasonably interested, since I don’t really like using my phone for structuring my day.

        I’d like a wearable that I can brain dump to, quickly check things without needing to unlock my phone, and keep on top of schedule. Sadly for me it looks like I’ll need to go the DIY route with an esp32 board and an e-ink display, and drop any kind of stt + tts plans

        • casual_turtle_stew_enjoyer@sh.itjust.works · +2 · 6 months ago

          Sadly for me it looks like I’ll need to go the DIY route with an esp32 board and an e-ink display, and drop any kind of stt + tts plans

          Latte Panda 2 or just wait a couple years. It’ll happen eventually because it’s so obvious it’s literally unpatentable.

    • TheHarpyEagle@lemmy.world · +17/-1 · 6 months ago

      I think the issue is that people were expecting a custom (enough) OS, software, and firmware to justify asking $200 for a device that’s worse than a $150 phone in most every way.

      • fidodo@lemmy.world · +1 · 6 months ago

        I don’t know how much work they put into customizing it, but being derived from Android does not mean it isn’t custom. Ubuntu is derived from Debian; that doesn’t mean it isn’t a custom OS. The fact that you can run the APK on other Android devices isn’t a gotcha. You can run Ubuntu .deb files on other Debian distros too. An OS is more of a curated collection of tools; you should not be going out of your way to make applications for a derivative OS incompatible with other OSes derived from the same base distro.

      • w2tpmf@sh.itjust.works · +2/-4 · 6 months ago

        I would expect bespoke software and OS in a $200 device to be way less impressive than what a multi billion dollar company develops.

    • WanderingCat@lemm.ee · +13 · 6 months ago

      Without thinking too much into it, I would have expected some more custom hardware, some on-device AI acceleration happening. For someone to go and purchase the device, it should have been more than just an Android app.

      • deafboy@lemmy.world · +13 · 6 months ago

        The best way to do on-device AI would still be a standard SoC. We tend to forget that these mass-produced mobile SoCs are modern miracles for the price, despite the crappy software and firmware support from the vendors.

        No small startup is going to revolutionize this space unless some kind of new physics is discovered.

        • Buddahriffic@lemmy.world · +3 · 6 months ago

          I think the plausibility comes from the fact that a specialized AI chip could theoretically outperform a general purpose chip by several orders of magnitude, at least for inference. And I don’t even think it would be difficult to convert an NN design into a chip or that it would need to be made on a bleeding edge node to get that much more performance. The trade-off would be that it can only do a single NN (or any NNs that single one could be adjusted to behave identically to, e.g. to remove a node you could just adjust the weights so that it never triggers).

          So I’d say it’s more accurate to put it as “the easiest/cheapest way to do an AI device is to use a standard SoC”, but the best way would be to design a custom chip for it.
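
          The weight trick mentioned above is easy to check in a toy example (plain NumPy, unrelated to any real accelerator design): zeroing a hidden unit’s outgoing weights gives the same outputs as deleting the unit outright.

          ```python
          # Toy demo: zeroing a hidden unit's outgoing weights is equivalent to removing it.
          import numpy as np

          rng = np.random.default_rng(0)
          W1 = rng.normal(size=(4, 3))   # input -> 3 hidden units
          W2 = rng.normal(size=(3, 2))   # hidden -> output

          def forward(x, w1, w2):
              h = np.maximum(w1.T @ x, 0)  # ReLU hidden layer
              return w2.T @ h

          x = rng.normal(size=4)

          # "Remove" hidden unit 1 by zeroing its outgoing weights...
          W2_pruned = W2.copy()
          W2_pruned[1, :] = 0.0

          # ...which matches physically deleting that unit from both weight matrices.
          W1_small = np.delete(W1, 1, axis=1)
          W2_small = np.delete(W2, 1, axis=0)

          print(np.allclose(forward(x, W1, W2_pruned), forward(x, W1_small, W2_small)))  # True
          ```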

      • fidodo@lemmy.world · +1 · 6 months ago

        The hardware seems very custom to me. The problem is that the device everyone carries is a massive superset of their custom hardware making it completely wasteful.

      • AdrianTheFrog@lemmy.world · +1 · 6 months ago

        Qualcomm is listed as having $10 billion in yearly profits (Intel has ~$20B, Nvidia has ~$80B), while the news articles I can find about Rabbit say it’s raised around $20 million in funding ($0.02 billion). It takes a lot of money to make decent custom chips.

    • anlumo@lemmy.world · +8/-1 · 6 months ago

      Same. As soon as I saw the list of apps they support, it was clear to me that they’re running Android. That’s the only way to provide that feature.

    • fidodo@lemmy.world · +5/-1 · 6 months ago

      Isn’t Lemmy supposed to be tech savvy? What do people think the vast majority of Linux OSs are? They’re derivatives of a base distribution. Often they’re even derivatives of a derivative.

      Did people think a startup was going to build an entire OS from scratch? What would even be the benefit of that? Deriving from Android is the right choice here. This R1 is dumb, but this is not why.

    • aname@lemmy.one · +3 · 6 months ago

      It could have been a local AI with some special AI chip not found in all Android phones, but since it is run in the cloud, privacy is really a problem.

  • Echo Dot@feddit.uk
    link
    fedilink
    English
    arrow-up
    62
    arrow-down
    1
    ·
    6 months ago

    The processing was done server-side, as it is with the other thing. If you find a way to do it client-side, let me know; otherwise I’m not interested in your dumb product.

    • Natanael@slrpnk.net
      link
      fedilink
      English
      arrow-up
      35
      ·
      6 months ago

      Yes, but it’s also unauthenticated (it doesn’t verify that requests come from a real device, or even run under an account belonging to a device owner)

      You just need the app

    • Zink@programming.dev
      link
      fedilink
      English
      arrow-up
      14
      ·
      6 months ago

      Spoiler: by the time they let you know about the better device, your phone will already be much better at the same client-side processing anyway.

    • fidodo@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      ·
      6 months ago

      What, you aren’t excited about a future where everything is cloud computing spyware that sends all your activity to an AI to be analyzed and picked apart by strangers?

  • TomMasz@kbin.social
    link
    fedilink
    arrow-up
    59
    arrow-down
    1
    ·
    6 months ago

    So it’s just a single app running on a minimal Android build, the AI runs on remote servers, and it still gets lousy battery life? Sounds like they dropped the ball on design. In any case, no one who doesn’t already have a phone that can do everything the Rabbit does is going to carry this. It has no reason to exist.

    • Crikeste@lemm.ee
      link
      fedilink
      English
      arrow-up
      3
      ·
      edit-2
      6 months ago

      Yes, they have come out since this discovery saying that there is no ‘app’ and that the AI computes requests in the cloud.

      These people basically found the connection to the cloud.

      But yeah, it’s a stupid product that does practically nothing [that a phone can’t].

  • Matriks404@lemmy.world
    link
    fedilink
    English
    arrow-up
    54
    ·
    6 months ago

    I don’t even understand what the point is of this product. Seems like e-waste at first glance.

    • fidodo@lemmy.world
      link
      fedilink
      English
      arrow-up
      14
      ·
      6 months ago

      It’s just marketing to be like “look at how capable our AI is with just one button”. I mean, if you want to be charitable, it’s an interesting design exercise, but it’s wasteful and frivolous when everyone is already carrying devices that are far more capable supersets of this.

  • conciselyverbose@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    53
    arrow-down
    2
    ·
    6 months ago

    lol at calling running Android an “emulator”.

    Also, don’t they have to distribute the actual source code for the OS if it’s lightly altered Android?

    • PlasticExistence@lemmy.world
      link
      fedilink
      English
      arrow-up
      38
      ·
      6 months ago

      My understanding is that if you only add modules on top, those can stay closed source. It’s possible the AOSP portion of the stack is still stock and untouched.

      • pacmondo@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        9
        ·
        6 months ago

        I don’t know; one of the reasons they’re decrying everyone running the APK is that they claim they’ve made a bunch of “bespoke alterations” to the AOSP version they’re using.

    • jbk@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      3
      ·
      6 months ago

      AOSP is fully Apache-2.0 licensed except for the Linux kernel, so only their kernel changes would have to be distributed. It’s also an important reason why Android was (and is) so successful.

    • fidodo@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      6 months ago

      Depends on which part is altered. Lots of Linux distros are just curated collections of software, drivers, and configuration. You can easily achieve your OS goals without touching the code of the base distro at all. If they didn’t need to modify the base code, then there’s nothing to distribute back. That would be like being expected to distribute your personal power-user config settings. If you’re not touching the source, there’s nothing to contribute.

    • cley_faye@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      6 months ago

      Having seen what this device does, they may not even have had to alter anything in the base AOSP image. Just set your app as the launcher and you’re good to go.

  • finkrat@lemmy.world
    link
    fedilink
    English
    arrow-up
    39
    arrow-down
    1
    ·
    6 months ago

    This is why I cringe at cell phone manufacturers selling cloud AI features as exclusive to certain phone models. You’re not running that cloud on the handset, so why gatekeep the product behind a specific model? It can’t require that many on-device resources; it’s a cloud app!

    • WIZARD POPE💫@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      6 months ago

      It’s to make you spend more to buy the better model. If you really want that AI, you won’t mind spending a bit more.

      • finkrat@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        6 months ago

        I know what you’re getting at (this isn’t directed at you, and I know that’s why it’s done), but the capabilities of the phone have no bearing on the use of the AI, so why gatekeep it? It’s a dumb way to make a profit.

        • Malfeasant@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          6 months ago

          It’s a dumb way to make a profit.

          If it works, is it dumb?

          When VHS was still around, DVDs were priced higher even though they were much cheaper to produce. If people are willing to pay more, producers/distributors will charge more. Yay capitalism.

        • WIZARD POPE💫@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          6 months ago

          I know. It’s dumb as hell, just like everything being priced at 4.99 instead of 5.00. People are just stupid, and it seems to work out for the companies.

  • 0x2d@lemmy.ml
    link
    fedilink
    English
    arrow-up
    25
    ·
    6 months ago

    Their page to link accounts to it was not a real web app; it was a noVNC page that would connect to an Ubuntu VM running Chrome with no sandboxing and the basic password store, under the Fluxbox window manager.

    Someone dumped the home directory from it.

    • cheet@infosec.pub
      link
      fedilink
      English
      arrow-up
      14
      ·
      6 months ago

      Holy shit, that’s actually hilarious. I imagine someone would have noticed when their paste/auto-type password managers didn’t work.

      For those confused: it sounds like, instead of making a real website, they spin up a VM, embed a remote desktop tool into their website, and have you log in through Chrome running on their VM. This is sooooo sketchy, it’s unreal anyone would use this in a public product.

      Imagine if, to sign into Facebook from an app, you had to go to someone else’s computer, log in, and save your credentials on their PC. Would that be a good idea?

      • brotkel@programming.dev
        link
        fedilink
        English
        arrow-up
        2
        ·
        6 months ago

        What I don’t understand is why. This sounds like way more work than spinning up some out-of-the-box framework with OAuth or a Google login and hosting it on Lambda or Azure. What is logging in on a VM even going to do for the device?

        • Ramenator@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          ·
          6 months ago

          I’ve looked it up, and it’s even uglier, though I can kinda understand why they did it this way. Basically, for their “integrations” they aren’t using any official APIs. Instead they just use the websites and automate them via the Playwright framework. So for each user they have a VM running a Chrome browser to access the services. That leaves them with the problem of getting their users’ session cookies into that browser, and the easiest solution is having the users access their VM via VNC and log into the automated browser themselves.
          This is such a hacky solution that I’m actually in awe of its shittiness. That’s something you throw together in an all-nighter during a hackathon, not a production-ready solution.
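
          To picture what that pattern looks like, here is a rough Python sketch of the Playwright approach described above: drive a website’s UI with a saved login session instead of calling an official API. Everything in it (the URL, the selectors, the storage_state file name) is hypothetical and purely for illustration; it is not Rabbit’s actual code.

```python
# pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def order_food(item: str) -> None:
    """Drive a delivery site's UI the way a human would, reusing a saved login."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        # Reuse cookies captured from an earlier interactive login
        # (the storage_state file name is hypothetical).
        context = browser.new_context(storage_state="user_session.json")
        page = context.new_page()

        # Placeholder URL and selectors; a real integration would target
        # whatever the service's web pages actually contain.
        page.goto("https://food-delivery.example.com")
        page.fill("#search", item)
        page.click("button#search-submit")
        page.click("text=Add to cart")
        page.click("text=Checkout")

        browser.close()

if __name__ == "__main__":
    order_food("burrito")
```

          That storage_state file is Playwright’s standard way of reusing cookies from a previous session, which is the gap the VNC login step above appears to be papering over.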

      • SereneHurricane@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        6 months ago

        It basically implies that they cobbled together some standard technology, but they didn’t even put it together very well.

        It’s like a solution that’s held in place with chewing gum and Band-Aids.

      • Possibly linux@lemmy.zip
        link
        fedilink
        English
        arrow-up
        1
        ·
        6 months ago

        The “login” was actually a browser-based remote access tool. You were signing in on a machine on their network running Chrome.

        Someone dumped the contents of that machine.