I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.
This one is about why it was a mistake to call 1024 bytes a kilobyte. It's about a 20-minute read, so thank you very much in advance if you find the time to read it.
Feedback is very much welcome. Thank you.
Yeah, I'm with you. I read most of it, but I just don't know where the disdain comes from. At most infrastructure scales these days, you can use the two interchangeably because the difference is immaterial in practical applications.
Like, if I'm going to provision 2TB, I don't really care whether that's 2000 or 2048GB; I'll be resizing it when it hits 1800 either way. And if I actually needed to store 2TB, I'd create a 3TB volume. Storage is cheap; my time spent calculating the difference is not.
Wait until you learn how different fields use different levels of precision for pi.
It's not 2000 vs. 2048. It's 1,862 vs. 2,048: a decimal 2TB is 2×10¹² bytes, which comes out to only about 1,862 GiB.
The GB gets smaller too.
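For anyone who wants to check that arithmetic, here's a quick sketch (Python, assuming "2TB" means the decimal 2×10¹² bytes that drive vendors use):

```python
# TB vs. TiB: the same "2 terabytes" measured in binary GiB.
TB = 10**12   # SI terabyte:   1,000,000,000,000 bytes
TiB = 2**40   # IEC tebibyte:  1,099,511,627,776 bytes
GiB = 2**30   # IEC gibibyte:  1,073,741,824 bytes

two_tb_in_gib = 2 * TB / GiB    # a decimal "2 TB" volume, expressed in GiB
two_tib_in_gib = 2 * TiB / GiB  # a binary 2 TiB volume, expressed in GiB

print(f"2 TB  = {two_tb_in_gib:,.1f} GiB")   # 2 TB  = 1,862.6 GiB
print(f"2 TiB = {two_tib_in_gib:,.1f} GiB")  # 2 TiB = 2,048.0 GiB
```

So the honest comparison is 1,862 vs. 2,048, a roughly 10% gap at the tera scale, because the discrepancy compounds at every prefix step.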