Streaming Giant Introduces New Policies to Combat AI Fraud and Protect Artists' Voices
Spotify has confirmed that it removed over 75 million tracks flagged as "spammy" in the past year as part of a comprehensive crackdown on AI misuse and fraudulent uploads. The streaming platform has outlined significant new measures designed to protect artists from impersonation and to give its vast listener base greater transparency.
A key part of the new policy specifically targets AI voice clones and vocal deepfakes. Spotify has stated that the unauthorized use of an artist's voice will now be prohibited unless officially licensed. The company is also expanding safeguards to prevent fraudulent uploads that appear under another artist's profile. This move comes after earlier reports of AI-generated tracks featuring deceased musicians appearing on the platform, raising industry-wide concerns. Later this year, Spotify will roll out a new spam detection system to block uploads employing tactics like duplicates, artificially short tracks, or other methods designed to exploit its recommendation algorithms.
Additionally, Spotify is supporting an industry standard for AI disclosures in music credits, allowing artists and labels to specify AI contributions to a track, which will then be visible in the app's credits. These measures aim to empower artists in controlling AI's use in their work and build listener trust as generative technology becomes increasingly pervasive in the music industry.