Photographers are frustrated after OpenAI failed to meet its self-imposed 2025 deadline for a promised tool designed to exclude their work from AI training data.
In May 2024, OpenAI announced plans for a “Media Manager” tool that would let creators identify their copyrighted content, such as text, images, and videos, and specify whether it could be used for training. The initiative aimed to address concerns about copyright violations and to defuse ongoing legal disputes. Since then, however, updates on Media Manager have been almost nonexistent. A former OpenAI employee told TechCrunch, “To be honest, I don’t remember anyone working on it.”
Why Media Manager Is Crucial for Photographers
After the launch of DALL-E 3, OpenAI assured photographers they could opt out of having their work included in AI training. However, the process required creators to submit each piece individually, along with detailed descriptions—an overly burdensome task.
Ed Newton-Rex, founder of Fairly Trained, criticized the system, saying, “Most creators won’t even know this exists, yet their work will still be exploited without their consent.”
Many photographers argue that their content is being used for AI training without permission or compensation, which they see as a violation of their rights. Media Manager was expected to resolve these concerns by providing a simple opt-out option. Despite committing to a 2025 deadline, OpenAI has gone silent about the tool; the last update came in August, when the company said it was “still in development.”
Intellectual property attorney Adrian Cyhan pointed out the complexity of such a project, noting that even large platforms like YouTube and TikTok struggle to implement effective content ID systems at scale. “Ensuring creator protections and potential compensation requirements is challenging, especially with the rapidly evolving legal landscape,” Cyhan explained.
For now, photographers continue to wait for OpenAI to deliver a solution that respects their rights while balancing innovation and fairness.