We are going to generate content at a volume orders of magnitude larger than our current, already excessive volume, and finding the stuff that carries real meaning and a real message is going to get even harder.
It could go both ways: similar software could “compress” video (especially AI-generated video) into text prompts that could then re-create it, so the video itself never has to be stored. (Currently, of course, the processing cost would be higher than the storage cost for the raw video—but the scenario in which we’re cranking out excessive amounts of AI-generated content implies that the high processing costs have been eliminated.) That would also have the side effect of making it easier to find and organize videos based on their “meaning”.
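To make that concrete, here's a minimal sketch of the idea, with a seeded noise function standing in for a real video model; `PromptArtifact`, `regenerate`, and `hypothetical-video-model-v1` are all made-up names for illustration, not anyone's actual system:

```python
# Toy sketch of "prompt as compression": instead of storing raw frames,
# store only the recipe needed to regenerate them. The generator here is
# a stand-in (a seeded noise function), not a real video model.
from dataclasses import dataclass
import numpy as np

@dataclass
class PromptArtifact:
    prompt: str    # natural-language description
    seed: int      # noise seed used at generation time
    model_id: str  # exact model/weights the prompt was authored against

def regenerate(artifact: PromptArtifact, frames=24, h=64, w=64) -> np.ndarray:
    """Stand-in for re-running the generative model: deterministic given
    the same seed and 'model', but it costs compute on every decode."""
    rng = np.random.default_rng(artifact.seed)
    return rng.random((frames, h, w, 3), dtype=np.float32)

artifact = PromptArtifact("a dog chasing a ball on a beach at sunset",
                          seed=42, model_id="hypothetical-video-model-v1")
video = regenerate(artifact)

raw_bytes = video.nbytes  # cost of storing the pixels themselves
artifact_bytes = len(artifact.prompt) + 8 + len(artifact.model_id)
print(f"raw frames: {raw_bytes} B, prompt artifact: ~{artifact_bytes} B")
# The storage win is enormous; the catch is that every playback (or every
# cache miss) pays the generation compute cost instead.
```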
I think the idea of using natural language to generate video is flawed for the vast majority of applications we'd want. Imagine you could hand a script to one of these models and have it output a TV show episode. Even though we can make these models deterministic, it seems like the vast majority of generative content with any real quality requires injecting random noise throughout the process. Do we want TV episodes whose visual quality and little details shift from model to model? Why not store a plain-text description inferred by some model, and keep the video component in a medium less prone to misinterpretation? We may use deep-learning compression for video and audio in the future if there are significant advances, but I doubt the compression target will be English.
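A toy illustration of that objection, assuming nothing about any real model: with a frozen model, a prompt plus a seed reproduces the same output, but swap in a different model (or next year's weights) and the same recipe decodes to something else. `toy_model` below is entirely invented; it just stands in for "embed the prompt, add seeded noise, run it through the weights":

```python
# Same prompt and seed only reproduce the same output under the exact same
# model. Change the "model" (here, a different weight matrix) and the decoded
# result drifts -- which is what you don't want from a storage format.
import numpy as np

def toy_model(weights: np.ndarray, prompt: str, seed: int) -> np.ndarray:
    """Stand-in generator: deterministic prompt embedding + seeded noise,
    projected through the model's weights."""
    embedding = np.frombuffer(prompt.encode().ljust(64, b"\0")[:64],
                              dtype=np.uint8).astype(np.float32)
    noise = np.random.default_rng(seed).normal(size=64).astype(np.float32)
    return weights @ (embedding + noise)

prompt, seed = "opening scene: rain on a neon street", 7
model_a = np.random.default_rng(0).normal(size=(16, 64))
model_b = np.random.default_rng(1).normal(size=(16, 64))  # "next year's model"

out_a1 = toy_model(model_a, prompt, seed)
out_a2 = toy_model(model_a, prompt, seed)
out_b = toy_model(model_b, prompt, seed)

print(np.allclose(out_a1, out_a2))  # True: deterministic for a frozen model
print(np.allclose(out_a1, out_b))   # False: same prompt+seed, different model
```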