Yeah, taken as a guideline and an observation that computer speed/storage/etc. continue to improve, I think it's fair. It may not always be a doubling, but it's still significantly different from other physical processes that have "stagnated" by a similar metric (like the top speed or miles per gallon of an average vehicle).
I do have a 64GB M1 MacBook Pro, and man, that thing screams at LLM inference. I use it to serve models locally throughout my house (usually using about half the RAM for LLM duty), while it otherwise still works as a fantastic computer. I still prefer a 4080 for image generation, though.