• Aabbcc@lemm.ee · 1 year ago

    Because I have nested loops and only want to see certain cases and I’m not smart enough to set up conditional breakpoints or value watching
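
    (A guarded print like the one below is basically a hand-rolled conditional breakpoint; here is a minimal C sketch, with the loop bounds and the condition made up for illustration.)

        #include <stdio.h>

        int main(void) {
            for (int i = 0; i < 100; i++) {
                for (int j = 0; j < 100; j++) {
                    /* Only report the cases of interest; this condition is
                       purely illustrative -- substitute whatever "certain
                       cases" means for your data. */
                    if (i == 42 && j >= 97) {
                        printf("i=%d j=%d\n", i, j);
                    }
                }
            }
            return 0;
        }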

      • leo85811nardo@lemmy.world · 1 year ago

        Ackchyually, value watching in a debugger almost guarantees you see the value at its address, but printf in some languages passes by value, unnecessarily making a copy of the watched variable, so the value printed is the copied data instead of the original.
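
        (A rough C illustration of the distinction: a debugger watch observes the variable at its address, while printf receives a copy of the value as an argument. The variable here is invented for the example.)

            #include <stdio.h>

            int main(void) {
                double x = 0.5;

                /* A debugger watch (e.g. "watch x" in gdb) observes the object
                   at &x, so it always shows the current value in place. */

                /* printf gets x by value: a copy of 0.5 is passed as the
                   argument, and it is that copy which gets formatted, not the
                   original object. */
                printf("x = %f (a copy), original lives at %p\n", x, (void *)&x);

                return 0;
            }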

    • idunnololz@lemmy.world · 1 year ago

      Someone should make that bell curve meme with people using print statements at both the newb end and the advanced end. Debuggers are great, but for me it’s about a 50/50 split between debugger and logger when I’m debugging something.

  • qprimed@lemmy.ml · 1 year ago

    because, sometimes, having your program vomit all over your console is the best way to remain focused on the problem.

    • charliespider@lemmy.world · 1 year ago

      This is the reason for me. Sometimes I don’t want to step through the code and just want to scan through a giant list of output.

      • AeroLemming@lemm.ee · 1 year ago

        Sometimes, I don’t know what’s wrong, I just know that something in a specific area or small set of variables isn’t working right. It’s a lot easier to notice anomalies by looking through a giant wall of print statements than by stepping through the program. “Oh, that value is supposed to be in the range [0,1), why is it 3.6857e74?”
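
        (A minimal C sketch of that workflow: dump every sample and flag anything outside the expected range so it jumps out of the wall of output. The sample() function and its contrived outlier are placeholders.)

            #include <stdio.h>

            /* Stand-in for the computation under suspicion; the outlier at
               i == 7 is contrived so one value escapes the expected range. */
            static double sample(int i) {
                return (i == 7) ? 3.6857e74 : (double)(i % 10) / 10.0;
            }

            int main(void) {
                for (int i = 0; i < 20; i++) {
                    double v = sample(i);
                    /* Expected range is [0,1); mark anything else. */
                    printf("%4d  %-12g%s\n", i, v,
                           (v < 0.0 || v >= 1.0) ? " <-- ANOMALY" : "");
                }
                return 0;
            }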

    • ripcord@kbin.social · 1 year ago

      Similarly, every once in a while I’ll throw warning messages (which I can’t ship) to encourage me to go back and finish that TODO instead of letting it linger.
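
      (One way to get that nag from the compiler itself, sketched in C. #warning is a GCC/Clang extension only standardised in C23, and the function here is hypothetical.)

          /* Hypothetical function, used only to illustrate the technique. */
          int frobnicate(int x) {
              /* The compiler complains on every build until the TODO is done,
                 so it can't quietly linger the way a plain comment can. */
          #warning "TODO: handle negative x properly"
              if (x < 0) {
                  return 0; /* placeholder behaviour until the TODO is finished */
              }
              return x * 2;
          }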

    • MajorHavoc@lemmy.world · 1 year ago

      Exactly. And there’s plenty of places where setting up a live debug stream is a massive PITA, and finding the log files is only a huge PITA.

      Edit: But I’ve been promised AI will do both soon, so I’ll stop stressing over it. /s

  • malockin@lemmy.world · 1 year ago

    because sometimes you need to investigate an issue that happens only on the production machines, and you can’t/shouldn’t set up debugging on those.

  • Scrath@feddit.de · 1 year ago

    Try debugging a distributed embedded real-time system that crashes when you sit at a breakpoint too long because the heartbeat stops responding

    • Croquette@sh.itjust.works · 1 year ago

      The Nordic Semi Bluetooth stack was like that when I worked with it a few years ago. If you hit a breakpoint while the Bluetooth stack was running, it would crash the program.

      So printf to dump data while it ran and only break when absolutely necessary.
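
      (The usual pattern for that, sketched in C: a printf wrapper that compiles away entirely when the debug flag is off, so release timing is untouched. The macro, flag, and handler names are illustrative, not Nordic’s API.)

          #include <stdio.h>

          /* Build with -DDEBUG_TRACE to get the output; without it the calls
             vanish at compile time and cost nothing at run time. */
          #ifdef DEBUG_TRACE
          #define TRACE(...) printf(__VA_ARGS__)
          #else
          #define TRACE(...) ((void)0)
          #endif

          /* Illustrative event handler, not the real Bluetooth stack API. */
          static void on_ble_event(int evt, int status) {
              TRACE("ble evt=%d status=%d\n", evt, status); /* cheap enough for hot paths */
          }

          int main(void) {
              on_ble_event(3, 0);
              return 0;
          }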

        • Croquette@sh.itjust.works · 1 year ago

          Yeah. I mostly code in C because the tools come natively in C. I also do Python and C# to create tools for my embedded projects.

  • corytheboyd@kbin.social · 1 year ago

    Debugger good for microscopic surgery, log stream good for real time macro view. Both perspectives needed.

  • GravelPieceOfSword@lemmy.ca · 1 year ago

    As someone who has done a lot of debugging and also written many log analysis tools, it’s not an either/or; they complement each other.

    I’ve seen logs dismissed a lot in these threads recently, and while I love the debugger (I’d boast that I know very few people who can play with gdb like I can), logging is an art, and just as essential.

    The beginner printf thing is an inefficient learning stage that people get past early in their careers after learning the debugger, but eventually they’ll need to relearn the art of proper logging too, and understand how to use both tools (logging and debugging).

    There’s a stage when you love prints.

    Then you discover debuggers and realize they are much more powerful. (For those of you who haven’t used gdb enough: you can script it to iterate STL (or any other) containers, and test your fixes without writing any code yet.)

    And then, once your (and everyone else’s) code has been in production a while and some random client reports a bug that only happened for a few hard-to-trace events, guess what?

    Logs are your best friend. You use them to get the scope of the problem and the region of the problem (much easier if you have indexing tools like Splunk, though grep/awk/sort/uniq also work). You also get the input parameters and output results, and often notice the root cause without needing to spin up a debugger. Saves a lot of time for everyone.

    If you can’t, you replicate, which often takes a bit of time, but at least your logs give you a better chance of using the right parameters. Then you spin up the debugger (the heavy guns) when all else fails.

    It takes more time, and in production systems you often have a lot of issues that are working as designed, plus a lot of upstream/downstream issues that logs will help you with much faster.

    • fibojoly@sh.itjust.works · 1 year ago

      I’ve spent inordinate amounts of my career going through logs from software in prod. I’m amazed anyone would dismiss their usefulness!

    • xmunk@sh.itjust.works · 1 year ago

      Ahem, I’m a senior developer and I love prints. I appreciate debuggers and use them when I’m in a constrained situation, but if the source code is easily mutable (and I work primarily in an interpreted language), then I can get denser, higher-quality information with print, in a format I can consume more comfortably than a debugger’s (whether something hard-core like gdb or a fancy IDE debugger plugin).

      That said, I agree about logging the shit out of everything. It’s why I’m quaking in my boots about Cisco acquiring Splunk.

      • GravelPieceOfSword@lemmy.ca · 1 year ago

        Splunk is already very expensive, to be honest, with their policy of charging based on indexed logs (hit by searches) as opposed to used logs, and the necessity of indexing a lot of logs 'in case something breaks'. Bit of hearsay there: while I don’t work for the team that manages indexing, I have had quite a few conversations with our internal team.

        I was surprised we were moving from Splunk to a lesser-known proprietary competitor (we tried and gave up on Elasticsearch years ago). Splunk is much more powerful for power users, but the alternative cost 7-10 times less, and unfortunately most users didn’t use enough of Splunk’s power-user functionality to justify it over the competitor.

        Being a power user with lots of dashboards, my team still uses Splunk for now, and I’m having background conversations to make sure we don’t lose it. I think Cisco would lose out if they jacked up prices; perhaps they’d instead use Splunk as an additional value-add for their infrastructure offerings?

        • xmunk@sh.itjust.works · 1 year ago

          With the acquisition my concern partially lies in cost, but is more focused on quality. Cisco does big expensive stuff with big expensive certifications. I’m concerned they’ll try to enterprise-ify it, make it HIPAA compliant, and (add insane features here), with the result being a quickly degrading customer experience.

          My company has developers aplenty, but we also have a lot of less technical people who give our platform value… Splunk makes our logs accessible and usable to those people without requiring a technical liaison.

          I’m concerned with needing to divert developers (and like, senior ones that have a lot of trust) to find better solutions or, god forbid, try to roll our own.

  • Knusper@feddit.de · 1 year ago

    I found debuggers practically unusable with asynchronous code. If you’ve got a timeout in there, it will break as soon as you stop at a breakpoint.

    Theoretically, this could be solved by ‘pausing’ the clock that does the timeouts, but that’s not trivial.
    At least, I haven’t seen it solved anywhere yet.
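
    (The core of the problem in a few lines of C: the deadline is measured against the wall clock, which keeps running while the process is stopped at a breakpoint, so the timeout path fires as soon as you resume. The names and the 2-second budget are invented.)

        #include <stdio.h>
        #include <time.h>

        static double seconds_since(const struct timespec *start) {
            struct timespec now;
            clock_gettime(CLOCK_MONOTONIC, &now);
            return (now.tv_sec - start->tv_sec) + (now.tv_nsec - start->tv_nsec) / 1e9;
        }

        int main(void) {
            struct timespec start;
            clock_gettime(CLOCK_MONOTONIC, &start);

            /* ... imagine the asynchronous work here; set a breakpoint on the
               next line and sit on it for a few seconds ... */

            if (seconds_since(&start) > 2.0) { /* invented 2-second budget */
                /* Fires because real time kept passing while the process was
                   stopped, not because the operation was actually slow. */
                printf("timed out\n");
                return 1;
            }
            printf("completed in time\n");
            return 0;
        }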

    I mean, I’m still hoping I’m wrong about the above, but at this point I find it kind of ridiculous that debuggers are so popular to begin with.
    Because it implies that synchronous code or asynchronous code without timeouts is still quite popular.

    • bort@feddit.de · 1 year ago

      Because it implies that synchronous code […] [is] still quite popular.

      it isn’t?

      • Knusper@feddit.de · 1 year ago

        I’m sure it is, I’m just not terribly happy about that fact.

        Thing is, any code you write ultimately starts from input and funnels into output. Those two ends have to be asynchronous, because IO fundamentally is.
        That means, if at any point between I and O you want to write synchronous code, you have to block execution of that synchronous code while output is happening. And if you’re not at least spawning a new thread per input, you may even block your ability to handle new input.

        That can be fine, if your program has only one job to do at a time. But as soon as it needs to do a second job, that blocking becomes a problem and you’ll need to refactor lots of things to become asynchronous code.
        If you just build it as asynchronous from the start, it’s significantly less painful.
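
        (The blocking problem in its smallest form, sketched in C: a single synchronous loop, so while output is in flight nothing can pick up the next input. The sleep stands in for slow, blocking I/O.)

            #include <stdio.h>
            #include <unistd.h>

            int main(void) {
                char buf[256];
                /* Single-threaded and fully synchronous: while the output below
                   is blocking (simulated by sleep), the loop cannot return to
                   fgets, so any new input just has to wait. */
                while (fgets(buf, sizeof buf, stdin) != NULL) {
                    fputs(buf, stdout); /* blocking output */
                    fflush(stdout);
                    sleep(1);           /* stand-in for slow, blocking I/O */
                }
                return 0;
            }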

        But yeah, I guess, it’s the usual case of synchronous code being fine for small programs, so tons of programmers never learn to feel comfortable with asynchronous code…

  • serratur@lemmy.wtf · 1 year ago

    I still use print as a quick “debug”; it’s just convenient to have multiple prints in the code when you’re iterating and changing code.

  • irotsoma@lemmy.world · 1 year ago

    Depends on the language and platform as well as how asynchronous things are. For example, lots of platforms have little to no debugging support for scripting languages. I write a lot of Groovy on a platform whose debugger is mostly too much trouble to connect my IDE to, since the platform can’t run locally. And even then it doesn’t debug the Groovy code at all.

    And with asynchronous stuff it’s often difficult to tell whether something isn’t running in the right order without some kind of debug logging. Though in most cases I use the logger rather than printing directly to the console, so the calls can be left in and configured to only print when the logging level is set to debug, which can be configured per environment.
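
    (A minimal sketch of that idea in C rather than Groovy: route debug output through a level check read from the environment, so the calls can stay in the code and only speak up when asked. The LOG_LEVEL variable and level names are made up.)

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        enum { LOG_INFO = 1, LOG_DEBUG = 2 };

        /* Reads the threshold once from the (made-up) LOG_LEVEL variable;
           defaults to INFO, so debug lines stay quiet in normal environments. */
        static int log_level(void) {
            static int level = -1;
            if (level < 0) {
                const char *env = getenv("LOG_LEVEL");
                level = (env && strcmp(env, "debug") == 0) ? LOG_DEBUG : LOG_INFO;
            }
            return level;
        }

        #define LOG_DBG(...) \
            do { if (log_level() >= LOG_DEBUG) fprintf(stderr, __VA_ARGS__); } while (0)

        int main(void) {
            LOG_DBG("step 1 done, queue depth = %d\n", 42); /* silent unless LOG_LEVEL=debug */
            printf("normal output\n");
            return 0;
        }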

  • eestileib@sh.itjust.works · 1 year ago

    Embedded Oldster : “Of course, we had it rough. I had to use a single blinking red LED.”

    Unix Oldster : “An LED? We used to DREAM about having an LED, I still have hearing loss from sitting in front of daisy wheel printers.”

    Punch-card Oldster : “Luxury.”