Currently studying CS and some other stuff. Best known for previously being top 50 (OCE) in LoL, expert RoN modder, and creator of RoN:EE’s community patch (CBP).

(header photo by Brian Maffitt)

  • 2 Posts
  • 29 Comments
Joined 1 year ago
Cake day: June 17th, 2023



  • Thanks for so politely and cordially sharing that information


    edit: I would be even more appreciative if it were true: https://www.rockpapershotgun.com/rocket-league-ending-mac-and-linux-support-because-they-represent-less-than-0-3-of-active-players

    Quoting their statement:

    Regarding our decision to end support for macOS and Linux:

    Rocket League is an evolving game, and part of that evolution is keeping our game client up to date with modern features. As part of that evolution, we’ll be updating our Windows version from 32-bit to 64-bit later this year, as well as updating to DirectX 11 from DirectX 9.

    There are multiple reasons for this change, but the primary one is that there are new types of content and features we’d like to develop, but cannot support on DirectX 9. This means when we fully release DX11 on Windows, we’ll no longer support DX9 as it will be incompatible with future content.

    Unfortunately, our macOS and Linux native clients depend on our DX9 implementation for their OpenGL renderer to function. When we stop supporting DX9, those clients stop working. To keep these versions functional, we would need to invest significant additional time and resources in a replacement rendering pipeline such as Metal on macOS or Vulkan/OpenGL4 on Linux. We’d also need to invest perpetual support to ensure new content and releases work as intended on those replacement pipelines.

    The number of active players on macOS and Linux combined represents less than 0.3% of our active player base. Given that, we cannot justify the additional and ongoing investment in developing native clients for those platforms, especially when viable workarounds exist like Bootcamp or Wine to keep those users playing.







  • Intel fumbled hard with some of their recent NICs including the I225-V,[1][2] which took them multiple hardware revisions in addition to software updates to fix.

    AMD also had to be dragged kicking and screaming into letting earlier AM4 motherboard buyers upgrade to Ryzen 5000 chips,[3][4] and basically lied to buyers about support for sTRX4, requiring an upgrade from the earlier TR4 to run third-gen Threadripper but at least committing to “long-term” longevity in return.[5][6] They then turned around and released no new CPUs for the platform, leaving people stranded on it despite the earlier promises.[7]

    I know it’s appealing to blindly trust one company’s products (or specific lineup of products) because it simplifies buying decisions, but no company or person is infallible (and companies in particular are generally going to profit-max even at your expense). Blindly trusting one unfortunately does not reliably lead to good outcomes for end-users.


    edit: “chipset” (incorrectly implying TRX40) changed to “platform” (correctly implying sTRX4); added explicit mention of “AM4” in the context of the early motherboard buyers.







  • Submitted for good faith discussion: Substack shouldn’t decide what we read. The reason it caught my attention is that it’s co-signed by Edward Snowden and Richard Dawkins, who evidently both have blogs there I never knew about.

    I’m not sure how many of the people who decide to comment on these stories actually read up about them first, but I did, including actually reading the linked Atlantic article. I would personally feel very uncomfortable about voluntarily sharing a space with someone who unironically writes a post called “Vaccines Are Jew Witchcraftery”. However, the Atlantic article also notes:

    Experts on extremist communication, such as Whitney Phillips, the University of Oregon journalism professor, caution that simply banning hate groups from a platform—even if sometimes necessary from a business standpoint—can end up redounding to the extremists’ benefit by making them seem like victims of an overweening censorship regime. “It feeds into this narrative of liberal censorship of conservatives,” Phillips told me, “even if the views in question are really extreme.”

    Structurally this is where a comment would usually have a conclusion to reinforce a position, but I don’t personally know what I support doing here.




  • Typically no; the top two PCIe x16 slots are normally wired directly to the CPU, though when both are populated they each drop down to x8 connectivity.

    Any PCIe x4 or x1 slots run off the chipset, as does some of the I/O and any third or fourth x16 slot.

    I think the relevant part of my original comment might’ve been misunderstood – I’ll edit to clarify, but I’m already aware that the 16 “GPU-assigned” lanes come directly from the CPU (including when doing 2x8, if the board is designed that way – the GPU-assigned lanes aren’t what I’m getting at here).

    So yes, motherboards typically do implement more I/O connectivity than can be used simultaneously, though they try to avoid disabling USB ports or dropping their speed, since regular customers won’t understand why.

    This doesn’t really address what I was getting at, though. The OP’s point was basically “the reason there isn’t more USB is that there’s not enough bandwidth – here are the numbers”. The bandwidth shortfall they mention is real, but we already design boards with more ports than bandwidth – which is why it doesn’t seem like a great answer, despite being a helpful addition to the discussion.
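    To make the oversubscription point concrete, here’s a rough sketch. The port mix and figures are made up for illustration (not any specific board), using ballpark per-lane numbers that ignore protocol overhead:

    ```python
    # Hypothetical rear-I/O layout to illustrate oversubscription:
    # a chipset hanging off a 4x PCIe 4.0 uplink, hosting more total
    # USB bandwidth than that uplink can carry at once.
    PCIE4_LANE_GBPS = 2.0  # ~2 GB/s per PCIe 4.0 lane (ignoring overhead)

    uplink_gbps = 4 * PCIE4_LANE_GBPS  # 8 GB/s chipset uplink

    # Example port mix (invented for this sketch): count, GB/s each
    ports = {
        "USB 3.2 Gen 2x2 (20 Gbps)": (1, 2.5),
        "USB 3.2 Gen 2 (10 Gbps)":   (4, 1.25),
        "USB 3.2 Gen 1 (5 Gbps)":    (4, 0.625),
    }

    total = sum(n * bw for n, bw in ports.values())
    print(f"uplink: {uplink_gbps} GB/s, ports combined: {total} GB/s")
    # The port total exceeding the uplink is normal board design:
    # real workloads almost never saturate every port simultaneously.
    ```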


  • Isn’t this glossing over that (when allocating 16 PCIe lanes to a GPU as per your example), most of the remaining I/O connectivity comes from the chipset, not directly from the CPU itself?

    There’ll still be bandwidth limitations, of course, since you can only max out the bandwidth of the chipset link (in this case 4x PCIe 4.0 lanes). But it implies that it’s not only okay but normal to ship designs that can’t sustain maximum theoretical bandwidth on every port at once, so we don’t need to allocate PCIe lanes <-> USB ports as stringently as your example calculations require.

    Note to other readers (I assume OP already knows): PCIe lane bandwidth doubles/halves when going up/down one generation respectively. So 4x PCIe 4.0 lanes are equivalent in maximum bandwidth to 2x PCIe 5.0 lanes, or 8x PCIe 3.0 lanes.

    edit: clarified what I meant about the 16 “GPU-assigned” lanes.
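    The doubling-per-generation rule is easy to sanity-check. This sketch uses an approximate figure of ~1 GB/s per PCIe 3.0 lane and ignores encoding/protocol overhead:

    ```python
    # Per-lane bandwidth roughly doubles each PCIe generation.
    def lane_gbps(gen: int) -> float:
        """Approx. per-lane bandwidth in GB/s: PCIe 3.0 ~= 1 GB/s, doubling each gen."""
        return 1.0 * 2 ** (gen - 3)

    def link_gbps(gen: int, lanes: int) -> float:
        """Max theoretical bandwidth of a link with the given lane count."""
        return lanes * lane_gbps(gen)

    # 4x PCIe 4.0 == 2x PCIe 5.0 == 8x PCIe 3.0 in max bandwidth (~8 GB/s):
    assert link_gbps(4, 4) == link_gbps(5, 2) == link_gbps(3, 8)
    ```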


  • Sure, but not much of that battery improvement comes from migrating the APU’s process node. Moving from TSMC’s 7nm process to their 6nm process is only an incremental improvement: a “half-node” shrink rather than a full-node shrink like going from their 7nm to their 5nm.

    The biggest battery improvement is (almost definitely) from having a 25% larger battery (40 Wh -> 50 Wh), with the APU and screen changes each providing smaller battery-life improvements than that. Hence the APU change improving efficiency “a little”.
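    The capacity math alone tells most of the story: at a constant average power draw, runtime scales linearly with battery capacity. The baseline runtime below is a hypothetical figure, not a measured one; only the Wh numbers come from the comment above:

    ```python
    # Back-of-the-envelope: at constant average power draw, runtime
    # scales linearly with battery capacity.
    old_wh, new_wh = 40.0, 50.0          # pack capacities from the comment
    capacity_gain = new_wh / old_wh - 1  # 0.25 -> a 25% larger battery

    # Hypothetical baseline: 2 hours of demanding gameplay on the old pack.
    old_hours = 2.0
    new_hours = old_hours * (new_wh / old_wh)
    print(f"capacity gain: {capacity_gain:.0%}, est. runtime: {new_hours:.1f} h")
    # Any further gains on top of this come from the more efficient
    # APU and the new screen, each individually smaller than the 25%.
    ```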


  • They were careful with how they phrased it, leaving the possibility of a refresh without a performance uplift still on the table (as speculated by media). It looks like the OLED model’s core performance will be only marginally better due to faster RAM, but the APU itself is the same design on a shrunk process node (which improves efficiency a little).


    See also: PCGamer article about an OLED version. They didn’t say “no”, and (just like with the previously linked article), media again speculated about a refresh happening.

    It looks like they were consistent in saying it wasn’t simple to just drop in a new screen and leave everything else as-is, and they used that opportunity to upgrade basically everything a little bit while they were tinkering with the screen upgrade.