The Use of AI in Journalism – Opportunities and Challenges
AI is revolutionizing journalism through automation, expanded coverage, and new storytelling formats, while raising critical questions about credibility, transparency, and journalistic independence. Navigating this transformation demands concrete implementation strategies, clear-eyed economic analysis, and ethical frameworks from media organizations.
By Kreatized's editorial team
The Associated Press has expanded its automated earnings coverage from 300 to roughly 4,000 companies, with AI writing the financial stories in seconds. In 2023, Bloomberg deployed BloombergGPT, a specialized language model trained on financial data to enhance reporting on market movements and trends. The revolution isn't coming; it's already here.
Artificial intelligence has become journalism's most powerful new collaborator, transforming how news is gathered, produced, and consumed. As newsrooms face shrinking resources and mounting competitive pressures, AI offers both salvation and disruption.
At Kreatized, we examine this intersection of narrative, technology, and ethics through our "showrunner" framework—where journalists, like TV producers, maintain creative direction while orchestrating AI-powered collaboration. Our approach helps media professionals harness these tools without surrendering editorial control.
Quick Guide: Best Practices for AI in Journalism
Human oversight: All AI-generated content requires editorial review
Transparency: Clearly disclose when and how AI tools were used
Accountability: Journalists remain responsible for published content
Data security: Establish protocols to protect confidential information
Regular evaluation: Continuously test AI systems for bias and accuracy
See our full "Concrete Guidelines" section for comprehensive recommendations
Background: What does AI mean in a journalistic context?
When we talk about AI in journalism, we're primarily referring to two distinct but related technologies: automated news production and generative AI.
Automated News Production: The First Wave
Automated news production has been around for nearly a decade, with organizations like the Associated Press using algorithmic systems to transform structured data into news stories. These rule-based systems excel at:
Sports results and statistics
Financial earnings reports
Weather updates
Election results tabulation
Traffic and transit reports
These systems follow rigid templates to produce factual content at scale without requiring human writers for routine, data-focused stories.
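To make the mechanism concrete, here is a minimal sketch of how such a template-driven generator might work. The field names, threshold, and phrasing are illustrative, not a reconstruction of any agency's actual system.

```python
# Minimal sketch of template-driven news automation, in the spirit of
# automated earnings coverage. All field names and the beat/miss logic
# are illustrative, not any vendor's actual implementation.

def earnings_story(report: dict) -> str:
    """Render a structured earnings report into a short news item."""
    surprise = report["eps"] - report["eps_estimate"]
    verdict = "beat" if surprise > 0 else "missed" if surprise < 0 else "met"
    return (
        f"{report['company']} reported quarterly earnings of "
        f"${report['eps']:.2f} per share, which {verdict} analyst "
        f"estimates of ${report['eps_estimate']:.2f}. Revenue came in at "
        f"${report['revenue_m']:,} million."
    )

print(earnings_story({
    "company": "Example Corp",
    "eps": 1.42,
    "eps_estimate": 1.35,
    "revenue_m": 5120,
}))
```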
Generative AI: The Current Revolution
Generative AI, exemplified by large language models like GPT-4, represents a significant evolution. Rather than following explicit rules, these systems learn patterns from vast amounts of text data, enabling them to:
Draft articles on virtually any topic
Generate compelling headlines
Suggest story angles from research
Create content in multiple formats
Adapt writing to different audience needs
Since 2015, we've witnessed a rapid progression from simple news bots to sophisticated AI systems. The pace of development accelerated dramatically in late 2022 with the public release of ChatGPT, which demonstrated capabilities with "profound potential impact for journalism" and prompted AI-driven innovation to become "the urgent focus of senior leadership teams in almost every newsroom."
Key Players Shaping AI Journalism
Technology providers: OpenAI, Google, Anthropic, Microsoft
News agencies: Reuters, Associated Press
Financial services: Bloomberg (with BloombergGPT trained specifically for financial content)
Media organizations: New York Times, Washington Post, Gannett
Startups: Artifact (from Instagram founders), Semafor Signals
Local media: Increasingly exploring AI for community journalism
Opportunities: New tools, new storytelling
Automated news coverage
AI has already proven valuable for covering data-driven stories at scale. Bloomberg and Associated Press have been pioneers in this area, using AI to transform earnings reports and sports statistics into readable news stories within seconds of the data becoming available.
For example, Bloomberg's automated system provides readers with "a well-structured story that provides all relevant numbers from the company's reported revenue and how those stand compared to analyst estimates" along with commentary and context – all generated automatically within seconds of an earnings release.
These automated systems excel in areas where speed matters and the underlying information is highly structured. They can cover thousands of quarterly earnings reports, local sports games, or election results that would be impossible for human journalists to handle individually. This frees journalists to focus on stories requiring deeper analysis and human judgment.
Personalization and user experience
AI is transforming how audiences discover and consume news content. Media organizations are using machine learning to analyze reading patterns and provide personalized recommendations, potentially increasing engagement and subscription retention.
Norwegian public broadcaster NRK has experimented with AI-generated summaries to reach younger audiences who might not otherwise engage with their content. Media organizations are exploring chatbots that allow users to interact conversationally with news archives and current reporting.
However, this personalization introduces the risk of echo chambers, where algorithms reinforce existing preferences rather than exposing readers to diverse perspectives. The challenge lies in balancing personalized experiences with the broader civic role of journalism in creating shared understanding.
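For readers curious about the mechanics, here is a minimal content-based recommendation sketch using TF-IDF vectors and cosine similarity from scikit-learn. The sample articles are illustrative, and a production system would also deliberately inject diverse items to counter the echo-chamber risk described above.

```python
# A minimal content-based recommendation sketch: represent articles as
# TF-IDF vectors and recommend the most similar ones. Real newsroom
# systems are far more sophisticated; the texts here are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Central bank raises interest rates amid inflation concerns",
    "Local team wins championship after dramatic overtime",
    "Markets rally as inflation data comes in below expectations",
    "City council approves new housing development downtown",
]

vectors = TfidfVectorizer().fit_transform(articles)

def recommend(read_index: int, k: int = 2) -> list[str]:
    """Return the k articles most similar to the one just read."""
    scores = cosine_similarity(vectors[read_index], vectors).ravel()
    scores[read_index] = -1  # exclude the article the reader just saw
    top = scores.argsort()[::-1][:k]
    return [articles[i] for i in top]

print(recommend(0))  # the inflation story surfaces the markets story first
```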
Data journalism and analysis
Journalists are increasingly using AI to sift through massive datasets, identify patterns, and generate insights that would be difficult or impossible to uncover manually.
These tools can analyze thousands of documents for investigations, detect anomalies in financial records, or identify connections between seemingly unrelated events. They're also enhancing visual storytelling through automated data visualization, making complex information more accessible to audiences.
The Associated Press has been using AI to examine public data and identify potential stories that might otherwise remain hidden. This approach combines the data-processing capabilities of AI with human journalistic judgment about what constitutes news value.
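As a hedged illustration of this pattern-finding, the sketch below flags records that deviate sharply from the rest of a dataset using a robust, median-based z-score. The payment data and the 3.5 cutoff are illustrative, and any flagged record is a lead to investigate, not a finding.

```python
# A minimal anomaly-detection sketch of the kind used to surface leads in
# large public datasets: flag records far from the rest using a robust
# (median-based) z-score. Data and vendor names are purely illustrative.
from statistics import median

payments = {
    "Vendor A": 10_200, "Vendor B": 9_800, "Vendor C": 10_500,
    "Vendor D": 98_000,  # the kind of outlier a reporter should examine
    "Vendor E": 10_100,
}

values = list(payments.values())
med = median(values)
mad = median(abs(v - med) for v in values)  # median absolute deviation

def modified_z(x: float) -> float:
    """Robust z-score (Iglewicz & Hoaglin); stable in small, skewed samples."""
    return 0.6745 * (x - med) / mad

flagged = {v: a for v, a in payments.items() if abs(modified_z(a)) > 3.5}
print(flagged)  # {'Vendor D': 98000}
```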
AI as a writing assistant
Perhaps the most immediately transformative application is AI as a collaborative writing partner. Just as AI can serve as an "infinite writers' room" in creative writing, journalists can use these tools to:
Generate multiple story angles instantly
Draft compelling headlines and subheads
Create background sections on complex topics
Suggest follow-up questions for interviews
Produce initial drafts for refinement
In this model, the journalist functions as a "showrunner" – maintaining creative direction while leveraging AI to explore possibilities more efficiently.
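As an illustration of this showrunner workflow, the sketch below asks a model for headline options within tight editorial parameters, leaving the choice to a human. It uses the OpenAI Python SDK; the model name, prompt wording, and workflow are assumptions rather than a prescribed setup, and any output still requires editorial review.

```python
# A minimal "showrunner"-style sketch: the journalist sets tight creative
# parameters and asks a model for options to choose from. The model name
# and prompt are illustrative assumptions, and outputs are suggestions
# for a human editor, never publishable copy.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

brief = (
    "You are assisting a news editor. Suggest 5 headline options, "
    "max 70 characters each, neutral tone, no sensationalism.\n\n"
    "Story summary: The city council voted 7-2 to approve a new "
    "affordable housing development downtown after months of debate."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": brief}],
)
print(response.choices[0].message.content)  # options for a human to judge
```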
The collaborative approach
This approach to AI-powered storytelling establishes clear creative parameters while allowing AI to suggest variations and alternatives within those boundaries. Effective systems provide:
Structured templates for different content types
Built-in quality checks for inconsistencies
Dynamic idea generation within boundaries
Flexibility without losing creative control
The key principle remains the same across all creative disciplines: the human maintains the guiding vision while using AI to expand creative possibilities.
From assistant to amplifier
When used skillfully, AI tools don't merely assist but amplify journalistic capabilities:
Reporters can explore more story angles in less time
Subject specialists can focus on analysis rather than routine writing
Editors can quickly generate varied headline options
Journalists with specialized knowledge can produce more content
News organizations can tackle complex topics with greater depth
These tools offer remarkable creative possibilities, allowing journalists to focus on the aspects of their work that most require human judgment and perspective – treating AI as an editorial colleague rather than a replacement.
Challenges: When technology becomes the storyteller
Accuracy and hallucinations
AI language models have a well-documented tendency to "hallucinate" – generating plausible-sounding but factually incorrect information. For journalism, where accuracy is paramount, this represents a significant risk. These hallucinations are particularly dangerous because:
They often contain a mix of accurate and inaccurate information
The errors can be subtle and difficult to detect without domain expertise
AI delivers falsehoods with the same confident tone as facts
Hallucinations often involve numbers, dates, and quotes—the core elements of news
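Because hallucinations so often involve figures, one cheap editorial safeguard is a script that checks whether every number in an AI draft also appears in the trusted source material. The sketch below is a crude illustration of that idea: it catches only numeric mismatches, and it is no substitute for human fact-checking.

```python
# Verify that every figure in an AI draft also appears in the trusted
# source text. A crude safeguard: it flags unsourced numbers only, and
# says nothing about whether the surrounding claims are correct.
import re

def numbers_in(text: str) -> set[str]:
    """Extract numeric tokens (handles commas and decimals)."""
    return set(re.findall(r"\d[\d,]*\.?\d*", text))

source = "Q3 revenue was $5,120 million, up 8.2% from a year earlier."
draft = "The company posted revenue of $5,210 million, a rise of 8.2%."

unsupported = numbers_in(draft) - numbers_in(source)
if unsupported:
    print(f"Flag for human review, unsourced figures: {unsupported}")
# -> Flag for human review, unsourced figures: {'5,210'}
```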
Case study: CNET's AI backlash
Media organizations exploring AI-generated content have already encountered embarrassing errors. CNET faced significant public criticism after AI-generated financial explainers contained multiple factual inaccuracies, including basic mathematical errors in financial calculations that undermined reader trust. The publication was forced to issue corrections and ultimately scaled back its AI content program.
Rising verification burden
The challenge becomes even more acute when considering the explosion of AI-generated misinformation across the broader information ecosystem. Journalists now face a threefold burden:
Verifying their own AI-assisted content for accuracy
Developing methods to identify and debunk synthetic content designed to deceive
Explaining to audiences how to distinguish between reliable and unreliable sources
Transparency and trust
Should readers be informed when content is produced with AI assistance? If so, how? These questions remain largely unresolved across the industry, with different organizations taking varied approaches:
Full disclosure approach: Some outlets explicitly label all AI-enhanced content
Selective disclosure: Others only disclose when AI plays a substantial role
Process transparency: Some describe AI use in general terms without article-specific labels
No disclosure: Some argue that AI tools are simply another journalistic resource not requiring special disclosure
The Financial Times commits to explaining "how the work was created and the steps they took to mitigate risk, bias and inaccuracy" when using AI tools.
Audience perception challenges
Reuters Institute research indicates that audiences have deep concerns about AI in news production, with comfort levels for fully AI-generated content remaining very low. Transparency may be essential for maintaining trust, but excessive disclosure could paradoxically undermine credibility even when AI is used responsibly.
Beyond bylines: Algorithmic transparency
This question extends beyond bylines to the entire news production process, where AI systems increasingly influence:
Which stories receive coverage
Which sources are prioritized
How information is presented
What content readers see first
These algorithmic judgments often remain invisible to both journalists and readers without appropriate transparency mechanisms. As one UK-based journalist noted, these tools can act like "an unreliable calculator" when their limitations aren't clearly understood.
Ethical dilemmas
AI systems reflect the data used to train them, which means they can inherit and amplify existing societal biases. This raises profound ethical concerns for journalism, which has a responsibility to represent diverse perspectives fairly.
Research has shown that AI systems can reproduce stereotypes about race, gender, and other identities in multiple ways:
Underrepresenting certain demographics in generated content
Using different language when describing different groups
Perpetuating historical biases present in training materials
Prioritizing dominant cultural perspectives
When deployed in newsrooms without appropriate safeguards, these biases could manifest in story selection, framing, or language choices.
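One simple, runnable form of bias auditing is to measure how different groups appear across a corpus of AI-assisted drafts. The sketch below counts gendered pronouns as a crude proxy; a real newsroom audit would use far richer categories and much larger samples.

```python
# A crude bias-audit sketch: tally gendered pronouns across drafts to see
# who gets voice and attribution. Pronoun counting is only a rough proxy,
# and the sample drafts are illustrative.
import re
from collections import Counter

drafts = [
    "He said the merger would close by March. She disagreed sharply.",
    "The CEO said he expects growth; analysts said he may be optimistic.",
    "Residents protested; one said she had lived there for 40 years.",
]

pronouns = Counter()
for text in drafts:
    for token in re.findall(r"\b(he|him|his|she|her|hers)\b", text.lower()):
        pronouns["masculine" if token in {"he", "him", "his"} else "feminine"] += 1

print(pronouns)  # Counter({'masculine': 3, 'feminine': 2})
```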
The responsibility gap
Equally concerning are questions about journalistic responsibility when AI is involved in news production:
If an AI system makes a defamatory statement, who bears legal responsibility?
How should journalists verify information suggested by AI systems?
What ethical obligations exist when using AI to analyze sensitive data?
Who is accountable when algorithmic decisions affect news coverage?
As one industry guide notes, "When AI is involved, it is difficult to determine where content and/or ideas originate," creating complex questions about attribution, verification, and accountability.
Balancing efficiency with values
News organizations face difficult trade-offs between:
Speed and thoroughness
Cost savings and quality control
Scale and personalization
Automation and human judgment
These questions require both ethical frameworks and practical guidelines that maintain journalism's core commitment to truth, fairness, and public service even as technology transforms production practices.
Job roles and professional identity
As AI assumes more tasks traditionally performed by humans, journalists face understandable anxiety about their professional futures. The routine beats that have traditionally served as training grounds for entry-level reporters are increasingly being automated:
Financial earnings reports
Sports game recaps
Weather updates
Real estate listings
Election result tabulations
The Associated Press's expansion of automated earnings coverage from 300 to roughly 4,000 companies means thousands of stories that no longer require human writers.
Emerging opportunities
Yet the relationship between AI and employment in journalism isn't simply subtractive. New roles are emerging:
AI editors: Specialists who verify and refine AI-generated content
Prompt engineers: Experts at directing AI systems to produce specific outputs
Data journalists: Skilled at using AI to analyze large datasets
Subject-matter specialists: Providing domain expertise AI lacks
AI ethics officers: Ensuring responsible technology implementation
These hybrid roles can command higher salaries and offer greater job security than the traditional entry-level positions they replace.
Identity transformation
The evolution of journalistic identity may prove even more significant than specific job changes. Journalists have traditionally derived professional pride from craft skills like writing and reporting. As automation touches these core functions, the profession must reorient around uniquely human capabilities:
Ethical judgment and values-based decision making
Critical thinking and investigation
Emotional intelligence and human connection
Cultural context and nuanced understanding
Creative thinking and storytelling vision
As journalism professor Charlie Beckett notes, newsrooms must "maintain journalist accountability for content, even when AI tools are used in its creation," preserving the profession's core values even as its practices evolve.
Concrete Guidelines for Responsible AI Implementation
While the opportunities and challenges of AI in journalism are becoming increasingly clear, news organizations need specific guidance on how to implement these technologies responsibly. Based on industry best practices and emerging standards, here are concrete recommendations for responsible AI adoption:
Editorial Oversight and Human Review
Establish clear editorial workflows that include human review of all AI-generated content before publication
Develop explicit criteria for evaluating AI outputs, focusing on accuracy, fairness, and alignment with journalistic standards
Create designated editor roles responsible for AI oversight and quality control
Maintain journalist accountability for content, even when AI tools are used in its creation
According to industry guidance, news organizations should "develop a workflow that incorporates human review of AI-generated content before publication or distribution" and "assign experienced editors to oversee AI-assisted decision-making processes."
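As a sketch of what such a workflow might look like in code, the example below enforces a human sign-off before any AI-assisted article can be published. The Article fields and the exception-based gate are assumptions about how a CMS could implement the principle, not a reference implementation.

```python
# A minimal sketch of enforcing "human review before publication" in a
# CMS workflow. Real systems would also log reviewer identity and time;
# all fields here are illustrative.
from dataclasses import dataclass

@dataclass
class Article:
    headline: str
    body: str
    ai_assisted: bool
    human_reviewed: bool = False

def publish(article: Article) -> None:
    if article.ai_assisted and not article.human_reviewed:
        raise PermissionError(
            "AI-assisted content requires editor sign-off before publication"
        )
    print(f"Published: {article.headline}")

draft = Article("Council approves housing plan", "...", ai_assisted=True)
draft.human_reviewed = True  # set only after an editor signs off
publish(draft)
```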
Transparency and Disclosure
Clearly disclose when AI has been used in content creation, specifying how it was employed
Develop standardized language for AI disclosures that readers can easily understand
Create a public-facing AI policy that explains your organization's approach
Maintain a register of AI tools and applications used in the newsroom
The Financial Times' guidelines state that "all newsroom experimentation with AI will be recorded in an internal register" to maintain accountability and transparency.
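A register of this kind can be as simple as a structured log. The sketch below shows one possible shape; the fields are assumptions about what a newsroom might record, not the FT's actual register format.

```python
# A sketch of an internal AI register in the spirit of the guideline
# quoted above: a structured log of which tools were used, for what,
# and whether readers were told. Field names are assumptions.
import csv, datetime

REGISTER = "ai_register.csv"
FIELDS = ["date", "tool", "task", "article_slug", "disclosed_to_readers"]

def log_ai_use(tool: str, task: str, slug: str, disclosed: bool) -> None:
    """Append one AI-use entry to the newsroom's internal register."""
    with open(REGISTER, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # write a header for a brand-new register
            writer.writeheader()
        writer.writerow({
            "date": datetime.date.today().isoformat(),
            "tool": tool, "task": task,
            "article_slug": slug, "disclosed_to_readers": disclosed,
        })

log_ai_use("gpt-4o-mini", "headline suggestions", "housing-vote", True)
```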
Training and Literacy
Provide comprehensive training on AI tools for all journalists and editors
Develop specialized training for those responsible for AI implementation and oversight
Foster broader AI literacy to ensure journalists understand capabilities and limitations
Create channels for ongoing learning as technologies evolve
Where newsroom guidelines mention training, it is typically "linked to mitigating the risks of generative AI and being accountable and transparent towards the audience."
Data Security and Privacy
Establish protocols to prevent sensitive information from being shared with external AI tools
Develop clear policies on what types of content can be input into third-party AI systems
Implement data minimization practices to limit exposure of confidential information
Create secure infrastructure for internal AI applications
Media organizations should "develop policies for data collection, storage, and sharing that adhere to relevant privacy laws and industry best practices" and "implement robust security measures."
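As one concrete, deliberately modest illustration of such a protocol, the sketch below redacts obvious personal identifiers before text is sent to an external AI service. The patterns are illustrative and far from exhaustive; truly sensitive source material should not leave internal systems even after redaction.

```python
# A minimal redaction sketch: strip obvious personal identifiers before
# text ever reaches a third-party AI service. The patterns are
# illustrative, not a complete privacy solution.
import re

PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE": r"\+?\d[\d\s().-]{7,}\d",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, f"[REDACTED {label}]", text)
    return text

note = "Source: jane.doe@example.org, call +1 (555) 014-2267 after 6pm."
print(redact(note))
# Source: [REDACTED EMAIL], call [REDACTED PHONE] after 6pm.
```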
Ethical Sourcing and Attribution
Ensure all training data and information sources are ethically obtained and properly credited
Develop clear citation practices for AI-assisted research
Regularly audit AI tools for potential copyright issues or attribution failures
Balance efficiency gains with ethical responsibility to content creators
The Council of Europe's guidelines on AI in journalism cover "incorporating AI systems into professional practice" with a focus on ethical implementation throughout the journalistic process.
These guidelines provide a starting framework, but each news organization should adapt them to their specific context, audience needs, and journalistic values. The Council of Europe and various media institutions have developed more comprehensive frameworks that can serve as valuable resources for creating organization-specific protocols.
Economic Perspective: The Business Realities of AI in Journalism
The integration of AI technologies into journalism involves significant economic considerations that go beyond the immediate technological possibilities. News organizations must carefully evaluate the financial implications of AI adoption:
Implementation Costs and Resource Requirements
The initial investment in AI systems can be substantial, particularly for smaller news organizations. Research shows that "larger, better resourced news organizations are more likely to engage in in-house AI development" while "smaller ones opt for third-party solutions from platform companies due to the high costs associated with custom AI." These costs include:
Licensing fees for third-party AI tools and platforms
Technical infrastructure and computing power
Staff training and potential new specialized roles
Integration with existing content management systems
Ongoing maintenance and updates
Many newsrooms face a challenging economic decision: develop custom AI solutions at significant expense or rely on third-party tools that may limit control and independence. The convenience of platform offerings makes them attractive, "allowing publishers to leverage AI capabilities without the financial burden of in-house development."
Impact on Business Models
AI is transforming traditional journalism business models in several ways:
New revenue opportunities through personalized content offerings
Potential for more efficient content production and distribution
Ability to serve niche audiences with targeted content
Challenges to subscription models as AI-generated summaries become available
Competition from AI-powered news aggregators and generative search
As advertising migrated to tech platforms, news organizations shifted toward "integrated and adapted business models," including maximizing revenue from subscribers willing to pay and diversifying their portfolios. AI both accelerates these trends and offers potential solutions.
ROI and Performance Metrics
When evaluating AI investments, news organizations must establish clear metrics for success:
Productivity gains in content creation and research
Audience engagement and retention improvements
Subscription or advertising revenue impacts
Cost savings from automation of routine tasks
Quality improvements in content and user experience
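The underlying arithmetic is straightforward, as the back-of-the-envelope sketch below shows. Every number in it is a placeholder for illustration, not a benchmark.

```python
# A back-of-the-envelope ROI sketch with purely illustrative numbers,
# showing the kind of arithmetic behind an AI investment decision.
annual_license = 60_000        # third-party AI tooling (assumed)
training_and_integration = 40_000
hours_saved_per_week = 50      # routine drafting and transcription
loaded_hourly_cost = 45        # salary plus overhead per newsroom hour

annual_cost = annual_license + training_and_integration
annual_savings = hours_saved_per_week * 52 * loaded_hourly_cost

roi = (annual_savings - annual_cost) / annual_cost
print(f"Savings ${annual_savings:,} vs cost ${annual_cost:,}; ROI {roi:.0%}")
# Savings $117,000 vs cost $100,000; ROI 17%
```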
Some media organizations have reported success, such as "a local media outlet that increased revenue by 40% after implementing an AI-based content recommendation system." However, results vary widely based on implementation strategy and audience acceptance.
Journalistic Independence: The Platform Dilemma
The relationship between news organizations and the technology companies that develop AI tools presents a fundamental challenge to journalistic independence. This dependency creates several significant risks:
Technology Dependency and Control
The complexity of AI technologies "increases platform companies' control over news organizations, creating lock-in effects" that keep newsrooms tethered to technology companies. This dependency manifests in several ways:
Reliance on AI infrastructure controlled by tech giants like Google, Microsoft, and Amazon
Limited ability to verify or customize proprietary algorithms
Vulnerability to pricing changes or service discontinuations
Increasing technical barriers to independent operation
As news organizations integrate AI more deeply into their workflows, the ability to operate independently from major technology platforms diminishes. This "limits news organizations' autonomy and renders them vulnerable to price hikes or the shifting priorities of technology companies" that may not align with their own.
Data and Training Material Concerns
A profound ethical tension exists around the use of journalistic content as training material for AI systems:
News content is being used to train AI models without compensation or consent
The very AI tools journalists use may have been trained on their own outlet's content
Licensing agreements with tech companies create complex dependencies
Content produced with AI assistance feeds back into training data, creating cyclical dependency
As publications use AI tools from tech companies, they may be "improving the AI systems of major technology giants," which provides "a pathway for platform companies to build better general-purpose AI products and services, further cementing their control over information."
Building Independence Through Collaboration
To maintain journalistic independence while leveraging AI capabilities, news organizations should consider:
Forming industry collaborations to develop shared AI resources
Advocating for fair compensation for use of news content in AI training
Developing open-source alternatives to proprietary AI systems
Establishing clear boundaries on what editorial functions remain human-controlled
Creating transparent standards for AI use across the industry
Experts recommend that "the tech community must be engaged in these conversations with journalists and outlets as equal partners" in the design and deployment phases, not as those pushing their products and power on the profession.
Perspective: Journalist, not machine – but maybe together?
The future of journalism will likely be neither fully human nor fully automated, but a thoughtful integration of both. AI shows tremendous promise as a support system – processing data, generating drafts, and expanding creative possibilities – but the core journalistic functions of judgment, verification, and ethical decision-making remain distinctly human domains.
This complementary relationship leverages the respective strengths of each: AI excels at processing vast amounts of information and generating options, while humans excel at contextual understanding, critical thinking, and making value judgments.
The most promising path forward involves writing with AI rather than surrendering to it. As David Caswell from the Reuters Institute notes, journalists should "engage with these new tools, explore them and their potential, and learn how to pragmatically apply them in creating and delivering value to audiences."
The showrunner concept is instructive here. Just as successful television creates collaborative processes with clear creative direction, journalists can establish frameworks that harness AI's generative capabilities while preserving their essential role as stewards of quality, ethics, and public trust.
Conclusion: Who will tell the stories of the future?
The integration of AI into journalism represents neither utopia nor apocalypse, but rather a profound transformation that requires active shaping. The technologies themselves are neutral – their impact will be determined by how media organizations, individual journalists, and society at large choose to deploy them.
For journalists and editors, the imperative is clear: approach these tools with both curiosity and caution. Experiment to understand their capabilities, but remain vigilant about their limitations. Develop frameworks for responsible use that preserve journalism's core values while embracing new possibilities.
The ultimate question is not whether machines will replace human journalists – they won't – but rather how journalism as a profession will evolve in response to these new capabilities. The answer will determine not just the economic future of news organizations, but the health of our information ecosystem and, by extension, our democracy itself.
The stories of the future will be told through new collaborative relationships between human and machine intelligence. How we shape those relationships today will determine whether they strengthen or weaken journalism's essential role in society.
Frequently Asked Questions (FAQ)
Can AI replace journalists?
No, AI cannot replace the core journalistic values of critical thinking, ethical judgment, and contextual understanding. Instead, AI functions as a powerful tool that can free journalists from routine tasks, allowing them to focus on in-depth research, complex analysis, and narrative aspects that require human judgment.
How do readers know if content is produced with AI?
There is not yet an industry standard for how AI-assisted content should be labeled. Some media outlets use explicit notes, others disclose general AI use in their editorial guidelines, while some don't disclose it at all. Research shows that readers generally want transparency about AI use, which argues for clear labeling practices.
What are the economic implications of AI for news organizations?
AI implementation requires significant initial investment but can potentially deliver returns through efficiency gains, expanded coverage, and new products. Smaller news organizations risk becoming dependent on third-party AI solutions, creating economic vulnerability and potential control issues. Some media outlets have reported revenue increases of up to 40% after implementing AI-based content recommendations.
Can AI content be trusted?
AI-generated content requires careful editorial verification, as large language models can "hallucinate" – producing convincing but factually incorrect information. This underscores the need for human editing, fact-checking, and clear chains of accountability for all AI-generated content before publication.
What types of journalistic tasks is AI best suited for?
AI excels at data-intensive tasks such as transcription, data analysis, routine coverage of financial results and sporting events, and generating variations of existing content for different platforms. Most editorial guidelines recommend that AI primarily be used for research, idea generation, and structuring – not for independent production of publishable content.
How does AI affect journalistic independence?
Dependency on AI tools from large technology companies poses a potential risk to journalistic independence. When media outlets use proprietary AI systems from tech giants, technical, economic, and data-related dependencies emerge. This challenges media autonomy, especially when their own data is used to train AI systems that may later compete with them for audience attention.
Source List: For Further Reading
Reports and Research Articles
Beckett, C., & Yaseen, L. (2023). "AI and Journalism: What's Next?" Reuters Institute for the Study of Journalism.
Becker, K.B. et al. (2023). "Towards Guidelines for Guidelines: Examining AI Guidelines at European and US News Media." SocArXiv.
Newman, N. et al. (2024). "Digital News Report 2024: Public Attitudes Towards the Use of AI in Journalism." Reuters Institute.
Porlezza, C., & Schapals, A.K. (2024). "AI Ethics in Journalism (Studies): An Evolving Field Between Research and Practice." Journalism Studies.
Quinonez, C., & Meij, E. (2024). "A New Era of AI-Assisted Journalism at Bloomberg." AI Magazine.
Radsch, C.C. (2024). "Journalism Needs Better Representation to Counter AI." Brookings Institution.
Sjøvaag, H. (2024). "The Business of News in the AI Economy." AI Magazine.
Guidelines and Practical Resources
Associated Press (2023). "Standards Around Generative AI."
Council of Europe (2023). "Guidelines on the Responsible Implementation of AI Systems in Journalism."
Gannett (2023). "AI Guidelines for Journalistic Practices."
JournalismAI (2023). "Toolkit for AI in Newsrooms." Polis, London School of Economics.
Kapoor, S., Schellmann, H., & Narayanan, A. (2023). "AI Reporting Checklist." Princeton University.
NBCU Academy (2025). "Top AI Tools and How Journalists Can Use Them."
Platforms and Resource Centers
AI Journalism Labs, Craig Newmark Graduate School of Journalism.
Center for News, Technology & Innovation. "Artificial Intelligence in Journalism."
Global Investigative Journalism Network. "AI Tools for Journalists."
Journalist's Toolbox AI. "Tools and Resources for AI in Journalism."
Reuters Institute for the Study of Journalism. "AI and the Future of News."