Is it a conspiracy? For months, YouTubers have been quietly griping that something looks off about their recent uploads. Following a deeper analysis by a popular music channel, Google has now confirmed that it has been testing a feature that uses AI to artificially enhance videos. The company says this is part of its effort to "provide the best video quality," but it's odd that the experiment began without notifying creators or offering any way to opt out.
Google's test raised eyebrows almost immediately after it began rolling out in YouTube Shorts earlier this year. Users reported strange artifacts, edge distortion, and a distracting smoothness that gave the appearance of AI alteration. If you've ever zoomed in on a smartphone photo only to find it looks oversharpened or like an oil painting, that's roughly the look Google's video processing test produces.
According to Rene Ritchie, YouTube's head of editorial, this isn't quite like the AI features Google has been cramming into every other product. In a post on X (formerly Twitter), Ritchie said the feature is not based on generative AI but instead uses "traditional machine learning" to reduce blur and noise while sharpening the image. That said, this is a distinction without a difference: it's still a form of AI being used to modify creators' videos.
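For a sense of what "traditional" (non-generative) enhancement means in practice, here is a minimal sketch in Python using OpenCV. Google has not disclosed anything about its actual pipeline, so the function, file names, and parameter values here are illustrative assumptions; the point is that a classic denoise-then-sharpen pass, applied too aggressively, produces exactly the smeared, oversharpened look creators have been describing.

```python
# Illustrative only: a classic "denoise then sharpen" pass on a single frame.
# This is NOT YouTube's pipeline, just a sketch of non-generative enhancement.
import cv2

def enhance_frame(frame, denoise_strength=10, sharpen_amount=1.5):
    """Denoise a BGR frame, then apply an unsharp mask."""
    # Non-local means denoising: strong settings smooth away fine texture,
    # which is where the painterly, "oil painting" artifacts come from.
    denoised = cv2.fastNlMeansDenoisingColored(
        frame, None, denoise_strength, denoise_strength, 7, 21
    )
    # Unsharp mask: blend against a blurred copy to exaggerate edges,
    # which is where halos and edge distortion come from.
    blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=3)
    return cv2.addWeighted(denoised, sharpen_amount, blurred, 1 - sharpen_amount, 0)

frame = cv2.imread("frame.png")  # hypothetical extracted video frame
cv2.imwrite("enhanced.png", enhance_frame(frame))
```

Nothing in a pass like this generates new content, which is presumably the basis for Ritchie's distinction; it only redistributes the detail that is already in the frame, for better or worse.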
YouTuber Rhett Shull began investigating what was happening to his videos after discussing the issue with a fellow creator. He quickly became convinced that YouTube was applying AI video processing without telling anyone. Shull calls this "upscaling," though Ritchie contends it is not technically upscaling.