It is not strictly because of the GPU drivers, as I could get around that to a degree or change cards, but more that VR development tools are not well supported on Linux. I develop for the Meta Quest with Unity and thus I am stuck on Windows. :-(
I just wish hardware companies like NVIDIA would offer more support. I would love to jump to Linux, but as a VR dev I am stuck on Windows, which is only getting worse with each passing year.
These are the limitations of the tech today, but a wave of innovation is inbound. Seven years from now, XR glasses are going to be much more slick and will have solved many of the issues you have raised. If not by 2030, then at some point before 2040, unless we go extinct first.
Hard to say, but I would guess that by the end of this decade more people will be wearing XR glasses than looking at smartphones. My guess too is that the price will never be less than an iPhone today, but the value will be higher as it will do more. A lot more.
The same was argued about the iPhone, and while I will agree that nothing really new was brought to the table from a hardware perspective, their integration is unsurpassed, it would seem. We will have to wait and see, but being a heavy VR/AR user myself, the overall package Apple has provided is the gold standard right now for immersive computing interaction. Those who have used it have said it was like magic, and even a sharp developer who made their own version of the gaze interface on a Meta Quest Pro said it was like magic. That is the innovation that others have missed. The Meta Quest Pro has eye tracking and could have easily had the magical user interface that uses eye gaze, but it did not. I bet they will add it in the next one, as this does seem like the path forward. This is where Apple innovated, and it really does show that they tried a lot of input methods before they found one that just made it feel seamless.
While I personally have not tried it (nor have many others), based on the comments of those who have, and my own assessment from video of the device, the Vision Pro seems poised to once again define how we interface with the next computing platform: the spatial/immersive computing platform. It is apparently like magic, and I cannot wait to try it.
Can you add a search function, please?
The price of a guilty conscience?
That is an easy one: the free and superior DaVinci Resolve. Not open source, however, but it is very, very good. Many have switched to it. https://www.blackmagicdesign.com/products/davinciresolve
Agreed. Plus if you know Photoshop, you know GIMP as they are very similar and GIMP can even open Photoshop files. I use GIMP and Blender 3D daily.
I was an Adobe customer years ago, but due to things like the above, they inadvertently pushed me to open source alternatives, and I have not looked back. So I guess a "Thank You, Adobe" is in order. I hope the fine is actually big, as they really played dirty.
Since I started using AI in March, it has not stopped getting better and better, and it seems to be accelerating, not decelerating. If we are heading to a plateau, there are no signs of it yet. I am sure there will be plateaus, and maybe we even had one earlier this year, but with the speed of AI development, that may mean a plateau of months versus the normal years. We will see. Bill has been saying a lot of things these past years that make no sense, unless you look at them from a money/market manipulation perspective.
If you mean “some” technical success…I would agree.
I am not so sure it is successful, though. They are missing their targets by years, left, right, and center, and even recently admitted they will not be ready for NASA's Artemis program. https://www.space.com/spacex-starship-problems-delay-artemis-3-2026
Even SpaceX is missing targets in a big way.
That is what all YouTubers do, as they do not want to bite the hand that feeds them.
I sort of agree. They do have some sense of right and wrong already; it is just very spotty and inconsistent in the current models. As you said, we need AGI-level AI to really address the shortcomings, which sounds like it is just a matter of time. Maybe sooner than we are all expecting.
I would disagree that AI knows nothing. I use ChatGPT Plus near daily to code, and in the months I have been using it, it went from a hallucinating mess to what feels like a pretty competent and surprisingly insightful service. With the rumblings of Q*, it only looks like it is getting better. AI knows a lot and very much seems to understand; albeit far from perfect, it surprises me all the time. It is almost like a child who is beyond their years in reading and writing but does not yet have enough life experience to really understand what it is reading and writing…yet.
Somewhere public on the Internet, so AI gets trained on it.