Here is a rabbit hole. And this might get me off YouTube at last. I feel like I'm going mad. Like I can't trust anything. Maybe that's the point, or maybe it's projection.
Is something going on? Or are people reacting to a bad environment and seeing digital demons everywhere?
Some of this reminds me of how audiophiles hear all kinds of things that aren't there. Could this just be "compression"?
-
@futurebird One comment made sense: "They are reducing the quality of the video and using AI upscaling to reduce the bandwidth - saves them millions of dollars on bandwidth."
-
@Nazani @futurebird It can't save bandwidth. It might save storage. But the cost in compute would be way larger than the cost of the storage.
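A back-of-envelope sketch of this cost argument. Every number here is an invented assumption for illustration, not a real platform figure:

```python
# Illustrative-only comparison: store a full-quality encode vs. store a
# low-res copy and neural-upscale it server-side on every view.
# All prices and sizes below are assumptions for the sake of the sketch.

STORAGE_PER_GB_MONTH = 0.02   # assumed cold-storage price, USD
GPU_SECOND = 0.0003           # assumed GPU price per second, USD

full_res_gb = 2.0             # assumed 1080p encode size
low_res_gb = 0.5              # assumed 480p encode size
upscale_seconds = 60          # assumed GPU time to upscale one view
views_per_month = 100

store_full = full_res_gb * STORAGE_PER_GB_MONTH
low_plus_upscale = (low_res_gb * STORAGE_PER_GB_MONTH
                    + views_per_month * upscale_seconds * GPU_SECOND)

print(f"store full encode: ${store_full:.2f}/month")
print(f"low-res + upscale: ${low_plus_upscale:.2f}/month")
```

Under these assumptions the compute to re-upscale per view swamps the storage saved, which is the point being made; the conclusion flips only if the upscaled result can be computed once and cached, at which point you're just storing the big file again.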
-
@Nazani @futurebird The only way "AI compression" could be profitable in storage & bandwidth for the video platform is if they could offload all the compute onto the user's client device. This isn't happening, because it'd reduce battery life to a few minutes, and the web platform won't allow it anyway (shipping that much site-specific model data, access to "AI"-scale compute, etc.).
-
@dalias @Nazani @futurebird I'm not sure that's right. Isn't DLSS "AI" upscaling for games run on the user's GPU? It actually saves processing vs. rendering at a higher resolution, because it replaces the precise calculation of rendering with quicker estimates for upscaling. I could see a way to process video which uses similar techniques to upscale and reduce decompression artifacts, which could give a video a too-sharp, uncanny quality. I don't know that YT engineers have built such a thing, or if this person is misattributing normal compression artifact differences from traditional re-encoding techniques.
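The arithmetic behind the DLSS point can be sketched like this. The per-megapixel and upscale costs are invented placeholder numbers, not measured DLSS timings:

```python
# Toy model: shading cost grows roughly with pixel count, so rendering
# at 1080p plus a fixed-cost upscale pass can beat rendering 4K natively.
# ms_per_megapixel and upscale_pass_ms are invented placeholders.

def shade_cost_ms(width, height, ms_per_megapixel=2.0):
    """Per-frame shading time, assumed proportional to pixel count."""
    return (width * height / 1e6) * ms_per_megapixel

native_4k = shade_cost_ms(3840, 2160)
upscale_pass_ms = 3.0                              # assumed fixed cost
low_res_plus_up = shade_cost_ms(1920, 1080) + upscale_pass_ms

print(f"native 4K:       {native_4k:.1f} ms/frame")
print(f"1080p + upscale: {low_res_plus_up:.1f} ms/frame")
```

The win exists only because native rendering at 4K shades four times as many pixels; for already-encoded video there is no expensive "native render" to skip, which is where the game analogy breaks down.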
-
@raven667 @Nazani @futurebird That isn't "AI", just use of the "AI" marketing label for a large compression (or "enhancement") dictionary. It's like calling 2xSaI "AI".
-
@raven667 @dalias @futurebird Much of this tech talk is beyond my grasp. Suggest you head over to the video & post your comments. I just know it's going to ruin the art history videos I like to watch.
-
@dalias @raven667 @Nazani @futurebird this kind of "AI" involves lots of training to generate a lightweight model that would then be employed for each video, so the hefty compute cost is one-time.
I believe it's attention-based. Something like "use more bits on faces" and "smooth this part since nobody's looking at it." A musician will be looking at things differently than a general viewer, and since this is a generalized algorithm, the artifacts are more obvious to them.
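A minimal sketch of that saliency-weighted idea (hypothetical, not YouTube's actual pipeline): split a fixed bit budget across blocks in proportion to how important a saliency model rates each block.

```python
import numpy as np

def allocate_bits(saliency, total_bits):
    """Split a bit budget across blocks proportionally to saliency."""
    weights = saliency / saliency.sum()
    return np.round(weights * total_bits).astype(int)

# Toy 1x4 saliency map: one "face" block, three background blocks.
saliency = np.array([0.7, 0.1, 0.1, 0.1])
bits = allocate_bits(saliency, total_bits=1000)
print(bits)  # the salient block gets most of the budget
```

Real perceptual rate control adds motion, masking, and temporal stability on top, but the allocation step reduces to this kind of weighting, and a viewer who attends to the "low-saliency" regions (like a musician watching hands) sees the starved blocks.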
-
@adamhotep @raven667 @Nazani @futurebird Yeah, that's a mix of really exhaustive search for optimal compression dictionaries with perceptual models for where to allocate bits, which really isn't anything fundamentally fancier than the MP3 psychoacoustic model.
-
@dalias @adamhotep @raven667 @Nazani
Why is the "look" of the compression making people think of AI, though?
-
@dalias @adamhotep @raven667 @Nazani
I'm interested in the technical discussion, but this post was also about a "bad feeling": a corrosive feeling that comes from not knowing if what you are looking at is real or not.
I think the problem there isn't so much technical as human.
No one has said "YouTube would never do such a thing without telling the creators and users!" Because no one has that kind of confidence in them. And no one should.
-
@futurebird They’re enshittifying YouTube by applying AI filters.
-
Maybe! We don't really know, and I want to make that clear. I haven't found evidence beyond concerns like those raised in this video and what I've seen myself.
I cannot, however, say that this isn't totally possible.
-
I've worked with "AI" since before it meant "LLMs." I've worked with machine learning and MCMC since long before the "Attention Is All You Need" paper existed.
This use of "AI" is far, far older than LLMs; it has been used as a marketing term in this general direction since at _least_ the 1980s. Ceding the term, and criticizing everything under the label, just because of LLMs strikes me as not a productive use of time.