An influential global news agency has released guidelines on the use of artificial intelligence in journalism. The Associated Press says it will continue experimenting with AI but will not use it to create publishable content and images.
As industries across the board adopt increasingly accessible and capable generative tools, the news sector is wrestling with hard questions. Chief among them: can the public genuinely trust news providers that rely on AI to produce content?
While some organizations are imposing strict rules on how AI fits into their workflows, others are embracing the technology openly.
A recent job advertisement from Newsquest Media Group seeks an “AI-assisted reporter” to play a leading role in a new era of journalism. The role involves using AI to create national, local, and hyper-local content for the company’s news brands while applying traditional journalistic skills. The AI-assisted reporter will work alongside AI to produce news articles and integrate AI-generated content into newsrooms of all sizes.
Job ads like this illustrate the industry’s divided views on using AI to create news content. Training programs have sprung up to teach newsrooms how to incorporate AI into their operations. Earlier this summer, Euronews Next spoke with Charlie Beckett, who leads the JournalismAI initiative at LSE. Envisioning a ‘novel landscape’ for journalism, Beckett suggested that AI now touches everything a journalist covers.
Whatever approach newsrooms take to deploying AI, Beckett stressed that it is a ‘language machine’, not an arbiter of truth, underlining that the human element remains indispensable to journalism. Here is a look at how different news organizations have responded to the ongoing AI revolution.
The Associated Press (AP) has published guidelines on the use of artificial intelligence in journalism, now incorporated into a chapter of its influential stylebook.
Amanda Barrett, AP’s vice president of news standards and inclusion, said: “Our objective is to offer a transparent approach that allows for experimentation while upholding safety.” She urged journalists to treat AI-generated content like material from any other source, subjecting it to rigorous vetting. The agency also said AI-generated photos, video, or audio should be used only when they are themselves the subject of a story.
While the AP sees potential in AI for tasks such as compiling story digests for newsletters, it remains cautious about its application. Over the past decade, the agency has experimented with simpler forms of AI, generating short news items from sports scores and corporate earnings reports. That experience is valuable, Barrett said, but the transition to this new phase demands prudence and a firm commitment to protecting the integrity and credibility of its journalism.
The agency also wants its journalists to become familiar with the technology, Barrett added, since they will be covering stories about it for years to come. Reuters, AP’s rival news agency, has likewise pledged to use AI ethically, with a focus on accuracy and trust.
The Guardian was among the first major news organizations to set out its approach to generative AI. In a joint statement published in June, the British newspaper’s CEO and editor-in-chief explained how it will, and will not, use AI tools. The Guardian said AI would be used editorially only where it genuinely improves the creation and distribution of original journalism, and always with human oversight and the approval of a senior editor.
The publication also intends to use the technology to help journalists with tasks such as analyzing large datasets, suggesting corrections, and easing time-consuming administrative work. Another pillar of its approach is choosing AI tools that account for permissions, transparency, and fair reward for the data they were trained on. That issue has fueled considerable debate, notably around popular chatbots such as ChatGPT: OpenAI, its creator, has been accused of training its language models on copyrighted material, raising serious ethical and legal questions.
While major news organizations tread carefully in adopting AI, the technology could open opportunities for smaller newsrooms with limited staff and budgets. News Corp Australia, for instance, reportedly produces around 3,000 articles a week using generative AI.
There, small teams use the technology to publish numerous localized stories each week on topics such as weather, fuel prices, and traffic conditions.
Meanwhile, a local newspaper in Nottinghamshire, UK, recently announced that it is experimenting with AI. In a letter to readers, Natalie Fahy, senior editor of the Reach-owned title, explained that the regional daily will begin using AI to produce short bullet-point summaries of certain longer articles.
The summaries will be reviewed by an editor before publication, and each article will carry a note at the end disclosing that AI was used.