Abstract
This project examines how news organizations disclose their use of AI in news reporting. Driven by rising public demand for transparency around AI-generated content, the research investigates the policies and practices of major American news organizations in communicating AI involvement to their readers. This work addresses ethical concerns related to audience trust and the integrity of AI use in journalism. The study combines a qualitative analysis of publicly available AI policies with a content review of AI-labeled articles. Research tasks included examining news outlets' policy disclosures and identifying any visible labels on AI-generated content. The findings reveal that very few organizations publish a policy outlining their AI practices, and public labeling of AI-generated content is minimal. This limited transparency points to a need for standardized labeling practices to maintain audience trust and support ethical AI use in news media. The study concludes that establishing clear guidelines and labeling protocols could improve transparency, build audience trust, and lay a foundation for responsible AI integration in journalism. Further research is needed to assess how these practices affect reader perceptions and trust.
Presenters
Natalya Vodopyanova, Assistant Professor, Corporate Communication, Pennsylvania State University, Pennsylvania, United States
Details
Presentation Type
Paper Presentation in a Themed Session
Keywords
News Reporting, Artificial Intelligence, Transparency, Disclosure of AI Use