From AI Project to AI Product

Building Sustainable Solutions for Journalism:
A Guide for Kenyan Newsrooms

DW AI in the Newsroom Fellowship
SimPPL Product Development Framework
February 2026

Why This Matters

Your projects work. You've proven AI can help with election coverage, misinformation detection, energy data, artivism storytelling, and more. The challenge now is different: turning prototypes into tools that serve real audiences over time, sustainably and reliably.

Election Coverage

Automated monitoring, candidate profiles

Misinfo Detection

Coordinated campaigns, deepfakes

Data Hubs

Energy, SRHR, infrastructure

Artivism

AI documentary, protest data

Common mentor feedback: "Projects are too ambitious." "Need clearer MVPs." "Define users specifically." "Focus on what you can do manually first." Today is about closing the gap between prototype and product.

Project vs Product: Know the Difference

              A Project              A Product
Purpose       Prove it works         Solve user problems
Audience      You and your team      Specific users with needs
Lifespan      Until demo is done     Ongoing, maintained
Success       It runs                People use it repeatedly

A project proves something is possible. A product solves a real problem for specific people, works reliably, and people use it without you explaining how. The gap between these two is what we're bridging today.

Start with the Real Problem

Before writing more code, understand the problem better than anyone else.

Not this

"We need AI for elections."

Too vague. Doesn't explain who needs it or why.

This

"With limited staff, we cannot cover 24 electoral areas during by-elections, which means voters lack information to make informed decisions."

Specific, urgent, and measurable.

Your problem statement must answer:

  1. Who experiences this problem? Identify the specific people or groups affected.
  2. What makes it painful or urgent? Describe the consequences of not solving it.
  3. What happens when it's not solved? Explain the real-world impact on users.
Exercise: Write one paragraph describing your problem without mentioning AI. If you can't explain why it matters without technology, you don't understand the problem well enough.

Know Your Users Deeply

You cannot build for "journalists" or "the public." Those groups are too broad. Think about the actual person who will open your tool on Tuesday morning when they're stressed and on deadline.

Disinformation tracking

  • Who: Fact-checkers during election periods
  • Context: Tracking multiple narratives, time pressure, limited staff
  • Need: Spot coordination patterns without learning complex tools

Infrastructure reporting

  • Who: Urban planning journalists
  • Context: Writing about failures, lack technical tools
  • Need: Show patterns without GIS expertise
Create user profiles: For each user, write down Who (specific person), When (time and situation), Pain (what frustrates them), and Goal (what they need). Can you name a real person, not a category?

Research Your Users This Week

You don't need a big budget. Here's what to do:

  1. Interview 5-7 potential users. Ask about current workflow, pain points, what they've tried before. Record and transcribe if possible.
  2. Shadow someone. Watch them work. Note where they get stuck, what shortcuts they take, what frustrates them.
  3. Survey with specific questions. Not "would you use this?" but "how often do you face this problem?" and "what do you do now?"
  4. Test your prototype. Watch them use it without helping. Write down every confusion point. Don't explain, observe.

Key questions to ask

  • Walk me through your current workflow...
  • Where do you get stuck or frustrated?
  • What have you tried before to solve this?
  • How often does this problem occur?
  • What would make your job easier?
Recall your social media data session: everyone engaged because it solved an immediate problem. That is the standard your product should meet.

Define Your MVP with MoSCoW

An MVP is not a broken version of your big idea. It's the smallest thing that solves the core problem.

  • Must Have: core features without which the tool can't solve the problem. Example (writing assistant): text input, basic grammar correction, structure suggestions.
  • Should Have: important features that significantly improve the experience. Example: accept/reject suggestions, learning from past edits.
  • Could Have: nice additions if time allows. Example: multiple writing styles, CMS integration.
  • Won't Have: features you're explicitly not building yet. Example: fact-checking, multi-language support, collaborative editing.
Your mentor feedback noted that proposals are "too ambitious." MoSCoW focuses your effort on what truly matters. The Won't Have column is the most important: writing it down forces discipline.

Validate Your Assumptions

Your project is built on assumptions. Test them before going further.

The Hallway Test

Show your tool to a colleague who didn't build it. Can they use it without instructions?

Success: zero explanation needed.

The Accuracy Test

Manually verify 100 AI outputs. What's your error rate?

Success: <5% error rate for MVP.

The Deadline Test

Offer it to someone on actual story deadline. Will they use it when it matters?

Success: they choose it under pressure.

The Abandonment Test

Watch where users stop using your tool. That's where it fails them.

Success: complete task without dropping off.

If users can't complete tasks without help, your product isn't ready. Fix the friction points first.
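The Accuracy Test above can be scripted as a quick sanity check. This is a minimal sketch, not part of any specific tool: label a sample of AI outputs by hand as correct or incorrect, then compute the error rate against the 5% MVP threshold. The sample data and threshold here are illustrative.

```python
# Sanity check for the Accuracy Test: compute the error rate of a
# manually labelled sample of AI outputs and compare it to the
# 5% MVP threshold.

def error_rate(labels):
    """labels: list of booleans, True = the AI output was correct."""
    if not labels:
        raise ValueError("need at least one labelled output")
    errors = sum(1 for ok in labels if not ok)
    return errors / len(labels)

def passes_mvp(labels, threshold=0.05):
    """True if the error rate is below the MVP threshold."""
    return error_rate(labels) < threshold

# Example: 100 manually verified outputs, 3 of them wrong
sample = [True] * 97 + [False] * 3
print(f"error rate: {error_rate(sample):.1%}")  # 3.0%
print("MVP ready" if passes_mvp(sample) else "fix accuracy first")
```

One hundred labelled outputs is small; treat the number as a signal, not a guarantee, and re-run the check whenever you change the model or the data.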

Build for Real Constraints

AI projects fail when they meet reality. Design for actual conditions.

Technical realities

  • Unreliable internet in newsrooms
  • Older hardware, not powerful machines
  • API costs add up at scale
  • Government data is often late or incomplete

Resource realities

  • Journalists work to tight deadlines: save them time, don't add steps
  • Users must interpret outputs correctly; can they, without training?
  • Someone must maintain the tool when platforms and APIs change

Trust realities

  • How do users verify AI didn't make mistakes?
  • How does translation preserve meaning?
  • What happens when the AI is wrong?

Ethics questions

  • Attribution: Who created the content your AI uses?
  • Consent: Did sources agree to data processing?
  • Privacy: How do you protect people in datasets?
  • Bias: What biases exist in training data?
  • Errors: How do users know when to trust your tool?
Your tool should work even when conditions aren't perfect. Plan for failure modes from day one. AI cannot verify facts: it predicts patterns. Always have humans in the loop for verification and editorial decisions.

Build Trust Through Transparency

AI in journalism carries special responsibilities. Show users clearly what your tool does and does not do.

What to communicate

  • What the AI does and what it doesn't do
  • What data you collect and why
  • Known limitations and error rates
  • How to verify outputs independently
  • What happens when the AI is wrong

How to build it in

  • Label AI-generated content clearly in the interface
  • Show confidence scores where possible
  • Provide source links so users can verify
  • Log errors and surface them, don't hide them
  • Write a one-page "how this works" for users
Journalists' credibility depends on the tools they use. If your tool gets something wrong and a journalist publishes it, that's your responsibility. Build transparency into the product, not as an afterthought.
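The "build it in" list above can start very small. Here is a minimal sketch, assuming hypothetical field names and an illustrative 0.8 confidence cutoff, of how every AI-generated item might carry its label, confidence score, and source links by default:

```python
# Transparency by default: every AI-generated item carries a label,
# a confidence score, and source links so a journalist can verify it.
# Field names and the 0.8 cutoff are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AIOutput:
    text: str
    confidence: float                 # 0.0-1.0, from your model
    sources: list = field(default_factory=list)

    def render(self) -> str:
        """Plain-text rendering with the AI label always visible."""
        flag = "high confidence" if self.confidence >= 0.8 else "verify before use"
        lines = [f"[AI-GENERATED | {self.confidence:.0%} | {flag}]", self.text]
        # If no sources are attached, say so loudly instead of hiding it.
        lines += [f"  source: {s}" for s in self.sources] or \
                 ["  source: none - do not publish unverified"]
        return "\n".join(lines)
```

The design choice that matters is that the label and sources are part of the data structure itself, so no interface built on top of it can accidentally omit them.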

Your Two-Week Action Plan

Concrete steps to move from project to product.

Week 1: Validate & Focus

  1. Write your problem statement. No AI mentioned: focus on the human problem.
  2. Interview 5 potential users. Ask about workflow, pain points, current solutions.
  3. Define your MVP using MoSCoW. Be ruthless about what you won't build yet.
  4. Identify top 3 assumptions. Design quick tests for each.
  5. Document one failure mode. Plan how you'll handle it gracefully.

Week 2: Build Foundation

  1. Create user documentation for your core feature. Assume the reader knows less than you do.
  2. Set up basic error logging. Track what fails and when.
  3. Define 3 metrics that matter. Not vanity metrics: signals of real value.
  4. Map workflow integration. Where does your tool fit in existing processes?
  5. Write sustainability requirements. Who maintains it? What's the cost?
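Step 2 above, basic error logging, can be a few lines with Python's standard logging module. A minimal sketch (the log file name and step names are illustrative):

```python
# Basic error logging for Week 2: record what failed and when, so
# failure patterns become visible over time. The file name is an
# illustrative assumption; Python's standard logging does the work.
import logging

logging.basicConfig(
    filename="tool_errors.log",
    level=logging.ERROR,
    format="%(asctime)s %(levelname)s %(message)s",
)

def safe_run(step_name, func, *args):
    """Run one pipeline step; log the failure with traceback, then re-raise."""
    try:
        return func(*args)
    except Exception:
        logging.exception("step failed: %s", step_name)
        raise
```

Wrapping each pipeline step in something like `safe_run("fetch_results", fetch_results, url)` means every failure leaves a timestamped trace you can review weekly, instead of vanishing silently.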

Building What Lasts

Moving from project to product is not about writing more code. It's about deeply understanding the problem, knowing your users, building the smallest thing that works, testing in the real world, and improving based on what you learn.

"A good product is not the one with the most features. It's the one that solves a real problem so well that people keep coming back."

Know Your Users
Solve Real Problems
Iterate & Improve

Appendix

Additional slides for reference


User Profile Template

Fill this out for your primary user before building further.

  1. Who is this person? Job title, organization, experience level.
  2. When do they need your tool? Specific situation, time pressure, context.
  3. What frustrates them today? Current pain points, workarounds.
  4. What does success look like? What outcome makes them come back?
  5. What would make them stop using it? Deal-breakers, friction, trust issues.

MoSCoW Prioritization Worksheet

List your features in each category. Be honest about what you won't build.

Must Have

Without these, the tool doesn't solve the core problem. List 2-3 features maximum.

Should Have

Important but not blocking. Would significantly improve the experience.

Could Have

Nice to have. Only build if Must and Should are done.

Won't Have

Explicitly out of scope. Writing this down prevents scope creep.


Validation Checklist

Run through these checks before calling your tool "ready."

Usability

  • Can a new user complete the core task without help?
  • Does it work on the hardware your users actually have?
  • Does it work on slow or intermittent internet?
  • Is the interface in the right language(s)?
  • Can users tell when AI is confident vs guessing?

Sustainability

  • Who maintains this after the fellowship?
  • What are the monthly running costs?
  • What breaks when an API changes?
  • Is there documentation for someone new to maintain it?
  • Do you have a plan for when grant funding ends?