• 40 Posts
  • 237 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • Multi-threading is parallelism and is poised to scale by a similar factor. Good enough is the engineering game, and having massive chunks of silicon lying around unused is a much more serious problem. At present, the choke point is not the parallelism of the math but the L2-to-L1 bus width and cycle timing; the primary issue is simply getting tensors in and out of the ALU. The ALU itself can handle the load. The AVX-512 instruction set can load a 512-bit word in a single instruction; the problem is just moving those words in and out in larger volume.
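
    To make the "512-bit word in one instruction" point concrete, here is a minimal sketch in C using the AVX-512F intrinsics (assuming an AVX-512-capable CPU and gcc/clang with -mavx512f; the array contents are placeholders):

    ```c
    /* One _mm512_load_si512 moves a full 512-bit (64-byte) word per
     * instruction; the ALU then does 16 int32 adds in a single op.
     * Build: gcc -O2 -mavx512f avx_sketch.c */
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int32_t a[16] __attribute__((aligned(64)));
        int32_t b[16] __attribute__((aligned(64)));
        int32_t out[16] __attribute__((aligned(64)));
        for (int i = 0; i < 16; i++) { a[i] = i; b[i] = 100 * i; }

        __m512i va = _mm512_load_si512((const void *)a); /* one 512-bit load  */
        __m512i vb = _mm512_load_si512((const void *)b); /* one 512-bit load  */
        __m512i vc = _mm512_add_epi32(va, vb);           /* 16 lanes at once  */
        _mm512_store_si512((void *)out, vc);             /* one 512-bit store */

        printf("%d ... %d\n", out[0], out[15]); /* 0 ... 1515 */
        return 0;
    }
    ```

    The math itself is trivial; the point is that feeding loads and stores like these at volume is exactly where the L2-to-L1 path becomes the wall.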

    I speculate that the only reason this has not been done already is pretty much the marketability of single-thread speeds. Present thread speeds are insane, well into the radio-frequency realm of black-magic wizardry. I don’t think it is possible to make these bus widths wider and maintain those thread speeds, because doing so has too many LCR (inductance/capacitance/resistance) consequences. At around 5 GHz, the notion of wires as connections and gaps as insulators becomes a fallacy: capacitive coupling can make a connection across any sufficiently small gap.
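
    A quick back-of-the-envelope check on that coupling claim (the 100 fF value below is purely illustrative, not a measured on-die figure): capacitive reactance falls with frequency, so at 5 GHz even a tiny parasitic capacitance stops looking like an open circuit.

    $$X_C = \frac{1}{2\pi f C} = \frac{1}{2\pi \cdot (5\times 10^{9}\,\mathrm{Hz}) \cdot (100\times 10^{-15}\,\mathrm{F})} \approx 318\ \Omega$$

    A few hundred ohms is in the same ballpark as interconnect impedances, which is why a small gap no longer behaves like an insulator at those clock speeds.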

    Personally, I think this is a problem that will take on a whole new architectural solution. It is anyone’s game, unlike any other time since the late 1970s. It will likely be the beginning of the real RISC-V age and the death of x86. We are presently in the age of the 20+ thread CPU. If a redesign can deliver a 50-500 logical-core CPU that is slower in single-thread speed but capable of all workloads, I think it will dominate easily. Choosing the appropriate CPU model will become much more relevant.


  • Mainstream is about to collapse. The exploitation nonsense is faltering. Open source is emerging as the only legitimate player.

    Nvidia is just playing it conservative because it was massively overvalued by the market. The GPU’s use for AI is a stopgap hack until hardware can be developed from scratch; the real life cycle of hardware is 10 years from initial idea to first consumer availability. The issue with the CPU in AI is quite simple. It will be solved in a future iteration, and then the GPU will be relegated back to graphics or may even become redundant entirely. Once upon a time the CPU needed a separate math coprocessor to handle floating point. That experiment ended with the coprocessor absorbed into the CPU, proving that a general monolithic solution is far more successful. No data center operator wants two types of processors for dedicated workloads when one type can accomplish nearly the same task. The CPU must be restructured around a wider-bandwidth memory cache. This will likely require slower thread speeds overall, but it is the most likely solution in the long term. Solving this issue will likely come with more threading parallelism, and it therefore has the potential to render the GPU redundant in favor of a broader range of CPU scaling.

    Human persistence of vision is not capable of matching the ever-higher speeds, which are ultimately only marketing. The hardware will likely never support this stuff anyway, because no billionaire is putting up the funding to back the marketing with tangible hardware investments. … IMO.

    Neo-feudalism is well worth abandoning. Most of us are entirely uninterested in this business model. I have zero faith in the present market. I have AAA-capable hardware for AI. I play and mod open source games. I could easily be a customer in this space, but there are no game manufacturers selling on my terms. I do not make compromises in ownership: if I buy a product, my terms of purchase are full ownership with no strings attached whatsoever. I don’t care what everyone else does. I am not for sale, and I will not sign myself over to anyone’s legalese nonsense or pay ownership costs to rent from some neo-feudal overlord.


  • Yes .docx.

    It appears as though the encoding is off in such a way that nothing in Linux recognizes the file. The underlying CLI tools don’t have a way of converting it. I tried with Python’s python-docx module and with iconv. It has to be encoding related, because some tools initially load the file as runs of Asian characters instead of English. However, there are no stretches of obviously binary-looking data. Archiving tools do not open the file to reveal anything else, like a metafile or header. Neovim shows garbled nonsense throughout. bat warns that the file is binary. Python won’t load the file, nor will OnlyOffice. LibreOffice and AbiWord initially load Asian characters before crashing.
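
    One low-level check that might narrow it down, sketched in C (a real .docx is documented to be a ZIP container starting with PK\x03\x04, while a legacy binary .doc is an OLE2 compound file):

    ```c
    /* Print which container format the file's magic bytes match.
     * A genuine .docx is a ZIP archive; an old-style .doc is OLE2.
     * If neither matches, the file is damaged or something else. */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv) {
        if (argc != 2) { fprintf(stderr, "usage: %s FILE\n", argv[0]); return 2; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 2; }
        unsigned char magic[8] = {0};
        fread(magic, 1, sizeof magic, f);
        fclose(f);

        static const unsigned char zip[4] = {0x50, 0x4B, 0x03, 0x04};
        static const unsigned char ole[8] = {0xD0, 0xCF, 0x11, 0xE0,
                                             0xA1, 0xB1, 0x1A, 0xE1};
        if (memcmp(magic, zip, sizeof zip) == 0)
            puts("ZIP container: a genuine .docx (OOXML)");
        else if (memcmp(magic, ole, sizeof ole) == 0)
            puts("OLE2 container: a legacy .doc misnamed as .docx");
        else
            puts("neither ZIP nor OLE2: damaged or a different format entirely");
        return 0;
    }
    ```

    If it reports OLE2, renaming the file to .doc may be enough for LibreOffice to open it; if it reports neither, the bytes themselves are damaged and no converter is likely to help.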

    The only option is likely going to be setting up the W10 machine and converting a bunch of files within it.

    Ultimately, my old man thinks he can be an author all of a sudden and is trying to write. He’s not very capable of learning, and I’m not confident he can learn to use FOSS to do the same things he has been doing. This post was just to see if there are options I am not already aware of that might actually work in practice. I can easily do everything I need in FOSS, and I can do everything he needs to do. I’m more concerned about becoming his tech support when he forgets how to copy pasta. He already fails to separate internet hardware connectivity from the web browser and the operating system within his mental model of technology.


  • So software like CAD is funny. Under the surface, 3D CAD like FreeCAD (or a mesh modeler like Blender) takes vertices and places them in a Cartesian space (X/Y/Z axes). Then it builds objects in that space by calculating the mathematical relationships in serial. Each feature you add appends math problems to a tree, and each feature on the tree is built linearly, relying on the previously calculated math.
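
    As a toy illustration of that serial dependency (the feature names and arithmetic below are placeholders, not a real geometry kernel):

    ```c
    /* Toy feature tree: each feature's math consumes the result of the
     * previous one, so a rebuild is strictly serial. */
    #include <stdio.h>

    typedef double (*feature_fn)(double prev);

    static double base_sketch(double prev)  { return prev + 10.0; } /* hypothetical */
    static double pad_extrude(double prev)  { return prev * 2.0;  } /* hypothetical */
    static double fillet_edges(double prev) { return prev - 1.5;  } /* hypothetical */

    int main(void) {
        feature_fn tree[] = { base_sketch, pad_extrude, fillet_edges };
        double model = 0.0;
        /* linear rebuild: feature i cannot run until feature i-1 is done */
        for (size_t i = 0; i < sizeof(tree) / sizeof(tree[0]); i++) {
            model = tree[i](model);
            printf("feature %zu -> %g\n", i, model);
        }
        return 0;
    }
    ```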

    Editing anything upstream in the tree is a massive issue called the topological naming problem. All parametric CAD has this issue, and all fixes are hacks and patches that are incomplete solutions (part of it comes down to π and rounding floating point at every stage of the math).
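
    The rounding part is easy to demonstrate: with IEEE doubles, the same sum evaluated in a different order gives different bits, which is one reason recomputed geometry never lands exactly where it was.

    ```c
    /* Floating-point addition is not associative: regrouping the same
     * three terms changes the final bits. */
    #include <stdio.h>

    int main(void) {
        double a = 0.1, b = 0.2, c = 0.3;
        double left  = (a + b) + c;  /* 0.60000000000000009 */
        double right = a + (b + c);  /* 0.59999999999999998 */
        printf("left  = %.17g\nright = %.17g\nequal = %d\n",
               left, right, left == right);
        return 0;
    }
    ```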

    Now, this is only the beginning. Assemblies are made of parts that each have their own Cartesian coordinate system. Often, individual parts have features that reference other parts in a live relationship, where a change in part A also changes part B.

    Now imagine modeling a whole car, a game world, a movie set, or a skyscraper. The assemblies get quite large depending on what you’re working on. Just a single complete 3D printer modeled in FreeCAD was more than my last computer could handle.

    Most advanced CAD needs to get to a level of hardware integration where the generalizations made for something like Wayland simply are not sufficient. For example, the default CPU scheduler (CFS on Linux) is set up to maximize throughput at all costs, which is not optimal for CAD. Process niceness may be enough in most cases, but there may be times when true CPU-set isolation is needed to prevent anything from interrupting the math as it renders. How this is split and managed with a GPU may be important too.
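
    For what that can look like in practice, here is a minimal sketch in C of pinning a process to specific cores and raising its priority (the core numbers are illustrative; true isolation also needs something like the isolcpus= boot option or cgroup cpusets, which this does not set up):

    ```c
    /* Pin the current process to CPUs 2-3 and raise its priority,
     * approximating "keep the solver off contended cores". */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);                 /* illustrative core numbers */
        CPU_SET(3, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0)
            perror("sched_setaffinity");

        /* nice -10: favor this process; needs CAP_SYS_NICE or root */
        if (setpriority(PRIO_PROCESS, 0, -10) != 0)
            perror("setpriority");

        printf("affinity and priority set; run the CAD recompute here\n");
        return 0;
    }
    ```

    Niceness covers most cases, as noted above; affinity pinning like this is the next step before full cpuset isolation.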

    I barely know enough to say this much. When I was pushing my last computer too far with FreeCAD, tuning the CPU scheduler stopped a crashing problem and extended my use slightly, but it was not worth much; I really needed a better computer. Looking into the issue deeply was interesting, though. It revealed how CAD is a solid outlier workload that is extremely demanding and very different from the rest of the computer, where user experience is the focus.