• 0 Posts
  • 27 Comments
Joined 8 months ago
Cake day: January 22nd, 2024

  • Oh yeah, rust has to win, but I think this was an empathy-free paradigm war masquerading as an innocent request for information. I think trying to bolt rust into Linux is a strategic error. It’s going to cause quite a lot of unnecessary friction and an awful lot of unnecessary technical complication, and will be absolutely riddled with complexities and ways of doing things that are inherently unsafe. Instead, build a POSIX-compliant OS in rust from the bottom up and it’ll knock the spots off Linux and will be rock solid. It’ll take well over a decade, but it’ll be far, far better.


  • TL;DR: A vast culture clash that the rust guys didn’t perceive and the C guys hated, plus the false assertion that “you don’t need to learn rust”, based on an inexplicably naive failure to consider that maintenance might be necessary.

    If someone builds a rust API on top of your C code inside your project, you have exactly five choices: (1) preserve the assumptions the rust code is making, (2) only change your code when you have a rust expert handy to collaborate with, (3) edit the rust code yourself, (4) break the rust assumptions, leading to hard-to-find bugs, or (5) break the build. The C guys hated all five of those options, and the rust guys told them they didn’t need to worry their pretty little heads about it. OK, they weren’t as dismissive as that, but they either didn’t understand those as issues, didn’t care about them, or dismissed them.

    The rust guys were asking the C guys to tell them the semantics so that they could fix the type signatures for their rust functions, and the C guys were reluctant to do that because they wanted to be able to change the semantics if that turned out to be useful to them. They didn’t want to commit to something that was documented in a way they weren’t familiar with, because they felt that even if they wanted to, they couldn’t ensure their code stayed compliant with this specification going forward, because they didn’t understand the rust type signature fully. (They got hung up on the self argument and launched a rant against OOP.)

    The rust guys knew instinctively that the Result return type meant that the operation could fail, and could tell from its two type parameters both the ways it could fail and every kind of answer it could produce if it succeeded, but the C guys found almost none of that obvious. This was for just one function in the rust API, but it also radically changed the way of doing things. This one rust call replaced the whole algorithm of ask, check the answer, if none, check this and that, otherwise do this, blah blah blah. The C guys are used to keeping everything lean and simple with a single purpose, and were being asked to think of a whole collection of procedural knowledge and edge cases as one handle-everything monolith. But they were audibly reluctant to commit to that being all the edge cases, because they don’t think of all of those tests as one thing and instinctively wouldn’t write something that checks for all of the edge cases, because (a) in a lot of circumstances the code they’re writing only needs to know that there was a problem and will give up quickly and move on, and (b) they want to be able to freely add other edge cases in the future, like they normally do, without having to worry about the rust code breaking.
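
    To make that concrete, here is a rough sketch of the kind of signature involved (the names and variants are mine, invented purely for illustration, not taken from the actual kernel discussion): the success side and the failure side of the Result each enumerate their possible shapes, so one call carries what would otherwise be a chain of separate C-style checks.

    ```rust
    // Hypothetical illustration only: both enums list every outcome the caller
    // can see, so the signature itself documents the edge cases.
    enum LookupError {
        NotFound,
        Busy,
        PermissionDenied,
    }

    enum Lookup {
        Exact(u64),   // found the exact entry
        Nearest(u64), // no exact match; nearest entry returned instead
    }

    fn find_entry(key: u64) -> Result<Lookup, LookupError> {
        // Stand-in body; a real version would consult the C-side data structure.
        match key {
            0 => Err(LookupError::NotFound),
            1 => Err(LookupError::Busy),
            2 => Err(LookupError::PermissionDenied),
            k if k % 2 == 0 => Ok(Lookup::Exact(k)),
            k => Ok(Lookup::Nearest(k + 1)),
        }
    }

    fn main() {
        // The compiler insists that every listed outcome is handled somewhere.
        match find_entry(3) {
            Ok(Lookup::Exact(id)) => println!("exact: {id}"),
            Ok(Lookup::Nearest(id)) => println!("nearest: {id}"),
            Err(LookupError::NotFound) => println!("not found"),
            Err(LookupError::Busy) => println!("busy, try again"),
            Err(LookupError::PermissionDenied) => println!("not allowed"),
        }
    }
    ```

    Change the C code in a way that adds a new failure mode and you add a variant here, and every rust caller then fails to compile until it handles it, which is exactly the guarantee the rust guys wanted and exactly the commitment the C guys didn’t want to make.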

    They weren’t complaining that they were being asked to write rust; they were complaining that they would have to learn rust, and they complained because they could see that to preserve all the rust API type signatures they would have to understand them, the expectations around them, and memory safety principles, so that a rust programmer in the future wouldn’t have to change the rust type signature.

    The rust guys would have gained a lot more traction by just asking the C guys to keep a bunch of comments up to date detailing the semantics and error-checking procedures, and promising to edit their rust API if the C code changed, but I suspect they didn’t ask for that because they know that no guarantees come from a comment, and they want to be sure that the rust code works across all the possible scenarios, and in rust culture that is always documented in the type system, where it can be enforced.

    The rust guys spoke as if it were self-evident that having a monolithic API with a bunch of stuff guaranteed by the rust compiler was best, but seemed not to have realised that this is a massive culture clash, because the C guys come from a culture of rejecting the idea of compiler guarantees anyway (because they have long had confidence in their ability to hand-optimise their code to be faster than some prescriptive compiler’s output, and look down on people who choose to have the guardrails up).

    They felt like they were being asked to help write an interface definition in a monolithic style that they have always rejected, to achieve goals that they have long resisted, in a language that they find alien, with no guarantees for them that the rust guys were going to stick around to agree and implement the rust changes necessary if they changed the C code, and with no confidence that they understood what would count as a breaking change at the rust level.

    This perceived straitjacket made them particularly cross. They complained about the inability to change their C code and its semantics, and about needing to learn enough rust to understand quickly what not to change. But they didn’t want to stop changing things, and they would need to edit the rust API at the same time as the C code if they didn’t want the rust build to break, and then there would be even more downstream changes from that. So realistically, not only would they need to be able to understand the rust type signatures, they would need to be able to edit both the type signatures and the functions themselves, and basically maintain all the downstream rust, and they would want to be sure they were writing efficient rust, well aware that it took them decades to reach the level of extreme efficiency they achieve in pure C, a much simpler language.

    The rust guys said “Just tell us what your code means so we can write our type signatures”, but the C guys didn’t want to help build themselves a prison whose walls were of a strange and intricate design they found hard to perceive, made out of materials they had no experience working with. They felt like the rust guys were asking them how all the doors, windows, chimneys, air vents etc. of the house they had built by hand would ever be used, so that it could be encased in a stainless steel shell and made part of a giant steel city. The C guys said “but I might want to build an extension or a wider garage!” The rust guys claimed that the C guys didn’t have to learn how to weld or manufacture steel sheets, and that their house would be much safer, but for some reason this didn’t win the C guys round to the plan. And now there’s a bunch of people online calling the C guys tech luddites for not liking the whole thing, and saying that they were wrong to think they needed to learn rust just because the rust guys said they didn’t - but that claim is actually completely incorrect, unless you think it’s OK to stop the project compiling with your pull request, or you think that changes to the C code should be banned wherever a rust API is built on top of it.


  • I saw the clip previously. The rust guys are absolutely assuming that the C guys would go for something because (a) the compiler guarantees it’s memory safe and (b) the semantics would be encoded in the type system. They demonstrate this using rust terminology and algebraic data types. Algebraic data types are the bee’s knees (though not with that syntax and clumsiness), and compiler guarantees are the bee’s knees, but that’s not how a middle-aged C programmer sees the world, it just isn’t. Your typical middle-aged C programmer grew up telling pascal programmers that automatic array bounds checking is for wimps and real men use pointer arithmetic and their programs run five times as fast. They were always right, because their programs really did run significantly faster, but now rust comes along and it’s fast and safe. Why wouldn’t C programmers like it? Because the speed was the excuse, and the lack of guardrails was the real reason they liked C.

    I said it’s a massive culture clash that the rust folks didn’t realise they were having because they just assume that “memory safe” wins people round, whereas C folks value their freedom from automatic compiler-based safety, and here you are, sounding like a rust person, saying it isn’t a culture clash at all and that the rust folks are right about memory safety and the C folks are just being irresponsible.


  • Expecting C programmers to like a compiler-based approach to memory safety is like expecting petrolheads to like a car purely because it’s electric. They have always viewed compiler based memory safety techniques as guard rails for novices. In their view, good bowlers don’t need guard rails at the bowling alley. It’s a massive massive clash of cultures and the rust folks come into the discussion with an assumption that C devs would leap with joy at the chance to automate memory management. Rust and C are complete opposites, but rust programmers seem to assume that just because rust is fast C programmers will love it.





  • Yes. The person I was replying to thought it was somehow bad for the battery to outlast the car. I was making the point that that’s fine. In response to your point about the cost of an engine, I should say that batteries are a far bigger part of the cost of an electric car - it’s really just not very complicated apart from that - very few moving parts indeed compared to a combustion engine. That’s why the car companies aren’t very keen - unless they make their own batteries, they’re not adding as much value when they manufacture them. They prefer to push the hybrids which have the complexity of both and a lot less battery capacity (but very much don’t have the advantages of both for the driver).


  • Well, the original model Nissan Leaf has been available in the UK since about early 2011, which is more like 13.5 years than 11, and I did a quick search on Autotrader for 2017 Nissan LEAFs with more than 100k miles: only one of them had lost any battery capacity at all, and it still had over 90%. Another had 120k miles on the clock and was still at 100% battery capacity. You can mistreat a car and it won’t last as long, yes, but it really is the older model that has the common battery problems. The new ones don’t. And there are brands with much better battery care than the Leaf, with active cooling etc.

    You see, the reason we know they’re lasting longer is, you know, science and math, where they measure stuff and do the sums, and given that the old type of battery declined a lot in the first 8 years and the new type isn’t declining, then all you’ve got left on your hands at the end is just an awful lot of FUD about battery life peddled by an awful lot of people who don’t actually know.








  • They changed the policy so that wind farms could only be built on land designated by local councils as wind farm land. There’s no sense preemptively designating land for wind farms if no one is trying to build a wind farm, and there’s no sense preemptively buying land for a wind farm unless it’s designated for wind farms. Effectively, the policy designated the entire country as unsuitable for wind farms and made it easy for anyone to have their objections count against a new wind farm. Opposition to wind farms is very much in the minority, but it’s very vocal, very well organised and has the backing of fossil fuel industries.

    By contrast, fracking was pushed through against the local council’s objections and very much against the majority of local opinion. This is what you do with energy projects that you view as nationally important.

    The Conservatives felt that it was important to preserve and further subsidise the fossil fuel industry, so they supported fracking, no matter how absurdly expensive or unpopular, no matter how much water was permanently polluted and locked away from use. It was only when literally hundreds and hundreds of minor earthquakes (that they said weren’t important or indicative of a problem) led to a more major earthquake that made bad headlines for them that they paused it for a while until the news died down.

    Anyway, most large energy projects are not subject to local objections, except, of course, for the cheapest form of energy today, which is onshore wind, which was subject to local objections with extra hurdles in the way compared to any other building projects.

    So it wasn’t technically banned, but everyone called it a ban because it was easier to get planning permission for a skyscraper in the Lake District than a wind farm on the Pennines.


  • UTC exists as a historical compromise because the British felt that GMT was the bee’s knees and the French felt differently. The letter order is most definitely a compromise between French and English word order: the English “coordinated universal time” would give CUT and the French “temps universel coordonné” would give TUC, so they settled on UTC, which matches neither.

    Historically, GMT became the international time reference point because the Greenwich observatory used to be the leader in the field of accurately measuring time. It probably helped that the British navy had been dominant earlier and lots of countries around the world and across time zones had been colonised by the British.

    UTC is an international standard for measuring time, based on atomic clocks adjusted using satellite and astronomical data about the position and orientation of the earth, whereas GMT is a time zone. Nowadays, GMT is defined in terms of UTC, not independent telescopic observation.

    What’s the difference? You can think of a time zone as an offset from UTC, in the same sense that a 24h clock time is an offset from midnight. GMT = UTC+0.

    Technically, UTC isn’t a time zone, any more than “midnight” is a valid 24h clock time. UTC+0 is a time zone and UTC isn’t, in a similar sense to the way 00:00 is a time on the 24h clock and “midnight” isn’t.

    Of course, and perfectly naturally, I can use midnight and 00:00 interchangeably and everyone will understand, and I can use UTC and UTC+0 interchangeably and few people care, but GMT = UTC+0 feels like the +0 is doing nothing to most eyes.
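
    As a toy illustration of the offset idea (entirely my own sketch, nothing to do with how real timekeeping libraries work internally): a zone’s wall clock is just UTC plus a fixed number of minutes, and GMT is simply the zone whose offset happens to be zero.

    ```rust
    // Toy sketch: a time zone as a fixed offset from UTC, like a clock time
    // is an offset from midnight. Real libraries also handle DST, leap seconds, etc.
    fn wall_clock(utc_minutes_since_midnight: i32, offset_minutes: i32) -> i32 {
        (utc_minutes_since_midnight + offset_minutes).rem_euclid(24 * 60)
    }

    fn main() {
        let utc = 14 * 60 + 30;        // 14:30 UTC
        let gmt = wall_clock(utc, 0);  // GMT = UTC+0: same wall clock
        let cet = wall_clock(utc, 60); // CET = UTC+1
        println!("UTC 14:30 -> GMT {:02}:{:02}, CET {:02}:{:02}",
                 gmt / 60, gmt % 60, cet / 60, cet % 60);
    }
    ```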

    Fun fact: satellite data is very accurate and can track the UTC meridian independently from the tectonic plate on which the Greenwich observatory stands. The UTC meridian will drift slowly across England as the plates shift. Also, the place in the stars that Greenwich was measuring was off by a bit, because they couldn’t have accounted for the effect of the local terrain on the gravitational field, so the UTC meridian was placed about a hundred metres (over 300’) away from the Greenwich prime meridian. I suspect that there was a lot more international politics than measurement in that decision, and also in making the technical distinction between UTC and GMT, but I’m British, so you should take that with a pinch of salt.


  • The problem isn’t having empty values, it’s not tracking that in the type system, so neither the programmer nor the compiler has any information about whether a value can be null or not, and the programmer has to figure it out by hand. In a complex program that’s essentially completely impossible. The innocently created bomb that causes your program to crash can be in absolutely any value.

    There are ways to track it all by disallowing null and using optional values instead, but some folks would rather stick with type systems that haven’t moved on since the 1960s.
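
    For the sake of illustration (my own toy example, not something from the thread): when the possibility of “no value” lives in the type, the signature tells both the reader and the compiler exactly where absence can occur.

    ```rust
    // Toy sketch: nullability is visible in the type, not in the programmer's memory.
    struct User {
        name: String,             // can never be missing
        nickname: Option<String>, // may be absent, and the compiler knows it
    }

    fn greet(u: &User) -> String {
        // The compiler forces a decision about what "no nickname" means here;
        // using the nickname directly as a String would not compile.
        match &u.nickname {
            Some(nick) => format!("Hi, {nick}!"),
            None => format!("Hi, {}!", u.name),
        }
    }

    fn main() {
        let u = User { name: "Ada".into(), nickname: None };
        println!("{}", greet(&u));
    }
    ```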


  • In a discussion about whether null should exist at all, and what might be better, saying that Optional values aren’t available in languages with type systems that haven’t moved on since the 1960s isn’t a strong point in my view.

    The key point is that if your type system genuinely knows reliably whether something has a value or not, then your compiler can prevent every single runtime null exception from occurring by making sure it’s handled at some stage and tracking it for you until it is.

    The problem with null is that it is pervasive - any value can be null, and you can check for it and handle it, but other parts of your code can’t tell whether that value can or can’t be null. Tracking potential nulls is left to the memory of the programmer instead of being deduced by the compiler, and checking for nulls everywhere is tedious and slow, so no one does that. Hence null bugs are everywhere.
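
    A small sketch of that tracking (again invented purely for illustration): a potential absence stays visible in every signature it passes through, and the compiler only lets you get at the inner value once some caller has actually handled the missing case.

    ```rust
    // Toy sketch: the "might be missing" fact travels with the type until handled.
    fn lookup(id: u32) -> Option<String> {
        if id == 1 { Some("Ada".to_string()) } else { None }
    }

    // Still Option: this function hasn't handled the missing case, so its
    // signature carries the possibility along to its own callers.
    fn shout_name(id: u32) -> Option<String> {
        let name = lookup(id)?; // early-return None if absent
        Some(name.to_uppercase())
    }

    fn main() {
        // Only here is the absence finally dealt with. Trying to use the result
        // as a plain String without checking would be a compile error, not a
        // runtime null crash.
        match shout_name(2) {
            Some(n) => println!("{n}"),
            None => println!("no such user"),
        }
    }
    ```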

    Tony Hoare, an otherwise brilliant computer scientist, called it his billion-dollar mistake back in 2009.


  • Well, UTC didn’t exist in 1800; it would have been GMT, and that might not have been too popular so soon after the War of Independence. Even if you convinced all of the USA to use one time zone for the railways, it would be different elsewhere and you’d still get time zones.

    Maybe you’d get further with the project with the airlines in the first half of the twentieth century, but I’m not sure that that level of internationalism would have gone down well in a rather war-torn world.