They also claim it only takes about 8 seconds to generate a variety of good images.
Might want to clarify: the “model” in this case is not a full model like Stable Diffusion, but rather a small add-on applied like a patch, more comparable to something like LoRA.
I don’t think that anyone would misunderstand anyway, but better safe than sorry
That’s the real meat of this. The future of models will be these smaller, focused “patches” that have some kind of traceable lineage. At least when it comes to marketing and selling these.
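For intuition, a LoRA-style “patch” is just a small low-rank correction added on top of a frozen pretrained weight matrix, so only the tiny correction needs to be trained and shared. A minimal sketch (the names, sizes, and scaling factor here are made up for illustration, not taken from NVIDIA’s paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen weight matrix from one layer of a pretrained model.
d_out, d_in, rank = 8, 8, 2
W = rng.standard_normal((d_out, d_in))

# The "patch": two small low-rank factors trained on the new concept.
# Only B and A would be distributed -- W stays untouched.
B = rng.standard_normal((d_out, rank)) * 0.01
A = rng.standard_normal((rank, d_in)) * 0.01
alpha = 1.0  # strength of the patch

x = rng.standard_normal(d_in)

# Applying the patch at inference: base output plus low-rank correction.
y = W @ x + alpha * (B @ (A @ x))

# The patch is much smaller than the layer it modifies.
print(W.size)           # 64 parameters in the base layer
print(B.size + A.size)  # 32 in the patch (the gap widens fast as d grows)
```

Because the base model is never modified, many such patches can be trained independently and swapped in and out of the same checkpoint, which is what makes the traceable-lineage marketing angle plausible.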
I’m always sceptical about those claims.
Let them prove it, and then we can decide if it’s good or not, instead of getting our hopes up for empty promises.
Not the first time people have made outlandish claims about AI, even though of course you’d expect someone like Nvidia to be cognisant about this kind of marketing.
NVIDIA’s marketing overhypes, but their technical papers tend to be very solid. Obviously it always pays to remain skeptical, but they have a good track record here.
Release it, and let us see. Don’t just claim stuff.
Where can we download the model?
Pretty neat. The training process takes a while for textual inversion, which I have enjoyed playing around with. I hope Automatic1111 gets support for this method of training, if it takes off!
Can this be adapted to LLMs?
Great question, I wondered the same thing. I’ve got a decent knowledge base where Stable Diffusion (text-to-image etc.) is concerned and understand the applications of this Nvidia process, but I’m not familiar enough with customization options for LLMs. I haven’t really seen references to hypernetwork/LoRA-type applications for LLMs, or anything that “plugs into” your existing model to augment results the way Stable Diffusion is geared for customization. In my limited understanding, customization for LLMs requires changing the training data and running a completely new training process for the actual model, rather than patching a reference model the way you can with SD.