Years ago Nvidia was already playing around with ideas like this as the far future of DLSS.
Imagine something like a remake: once generative AI like this can consistently pump out frames in real time, you could literally just feed the gameplay from GoldenEye 64 into a model that redraws the graphics with CGI levels of detail.
And if the models can also predict upcoming inputs from the inputs so far, there wouldn't even be perceptible lag (GeForce Now actually does something along these lines).
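Roughly what I mean by the input-prediction part, as a toy sketch (not Nvidia's or GeForce Now's actual implementation; `InputPredictor` and `render_frame` are made-up stand-ins for a learned predictor and a generative frame model): guess the next input, render a frame speculatively, and serve the pre-rendered frame immediately if the guess was right, so the render latency is hidden from the player.

```python
# Toy illustration of hiding render latency with input prediction.
# All names here are hypothetical; a real system would use a learned
# sequence model for prediction and a generative model for rendering.
from collections import deque


class InputPredictor:
    """Predicts the next input from recent history (naive 'repeat last')."""

    def __init__(self, horizon: int = 8):
        self.history = deque(maxlen=horizon)

    def observe(self, user_input: str) -> None:
        self.history.append(user_input)

    def predict(self) -> str | None:
        # A real predictor might be a small sequence model; here we just
        # guess that the player keeps doing what they did last frame.
        return self.history[-1] if self.history else None


def render_frame(state: dict, user_input: str) -> str:
    # Stand-in for an expensive generative-model render call.
    return f"frame(tick={state['tick']}, input={user_input})"


def game_loop(inputs: list[str]) -> None:
    predictor = InputPredictor()
    state = {"tick": 0}
    speculative = None  # (predicted_input, pre-rendered frame)

    for actual in inputs:
        if speculative and speculative[0] == actual:
            frame = speculative[1]  # prediction hit: frame was ready before the input arrived
        else:
            frame = render_frame(state, actual)  # miss: pay the full render latency
        print(frame)

        predictor.observe(actual)
        state["tick"] += 1
        guess = predictor.predict()
        # Start rendering the *next* frame before the next input arrives.
        speculative = (guess, render_frame(state, guess)) if guess else None


if __name__ == "__main__":
    game_loop(["forward", "forward", "forward", "jump", "jump"])
```

When the prediction hits, the frame for the current input already exists, so the player sees no added delay; only mispredictions fall back to the slow path.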
This tech is going to get pretty wild.