How to Audit Your Media Monitoring Workflow and Find the Time You're Losing

By Gabriel Tan | March 2026

Every PR agency does media monitoring. Very few have looked at how they actually do it.

The daily monitoring report is one of those deliverables that runs on muscle memory. Someone logs into the platform, runs the searches, exports the results, formats them into a template, retrieves the articles, compiles the file, and sends it to the client. It gets done. It goes out. And because it gets done, nobody questions whether the process behind it is costing more time than it should.

I recently sat down with a senior associate at a financial communications firm to map her daily monitoring workflow step by step. Not the version she would describe in a meeting ("I run the searches, format the report, and send it by 9am"). The actual version. Every click, every export, every formatting decision.

What we found was not surprising. About half the time she spent on the report each day was mechanical: formatting article entries, sorting them under the correct section headers, making sure the publication name, date, headline, and hyperlink were laid out the way the client expected. The other half was judgment: deciding which articles were relevant, filtering out noise from broad keyword searches, and identifying the pieces that mattered enough to flag.

The formatting was repetitive and rule-based. The judgment was not. And the two were tangled together in a single workflow, which meant every morning started with a senior person doing 30 to 40 minutes of work that did not require their expertise.

This is not unique to one firm. It is the default in most agencies I have worked with. The monitoring workflow was built once, years ago, around the tools that were available at the time. Nobody has audited it since. It works well enough. But "well enough" is hiding a real cost.

Here is how to audit yours.

Step 1: Map the actual workflow, not the assumed one

Ask the person who produces the monitoring report to walk you through every step. Not from memory in a meeting room. Sitting next to them while they do it, or on a screen share while they run through a live cycle.

When I did this with the associate I mentioned, the workflow looked like this:

1. She logged into Meltwater and ran six separate keyword searches, one per section of the client's report.
2. She exported each search as a CSV file.
3. She reviewed each export manually to select relevant articles and filter out noise.
4. She opened the Outlook email template and formatted each article entry under the correct section header: publication, date, title, hyperlink.
5. She retrieved the full article text for each selected article and copied it into the email body.
6. She identified client-specific articles, retrieved their full text, and compiled them into a PDF attachment.
7. She did a final check and sent it.

Seven steps. Some took five minutes. Some took twenty. The total varied between two and three hours depending on news volume that day.

Your version will look different. The platform might be different, the template might be different, the client's requirements might be different. But the principle is the same: until you map the steps and time each one, you are working from assumptions.
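If it helps to see what the mapping produces, here is a minimal sketch of the per-step timing log you end up with. The step names and durations below are hypothetical placeholders, not the associate's measured numbers:

```python
# Hypothetical per-step timing log from one monitoring cycle, in minutes.
# Step names and durations are illustrative, not measured data.
steps = {
    "run platform searches": 10,
    "export results to CSV": 5,
    "filter for relevance": 35,
    "format email template": 25,
    "retrieve full article text": 40,
    "compile PDF attachment": 20,
    "final check and send": 10,
}

total = sum(steps.values())
# Print the steps largest-first, with each step's share of the total.
for name, minutes in sorted(steps.items(), key=lambda s: -s[1]):
    print(f"{name:<28} {minutes:>3} min  ({minutes / total:.0%})")
print(f"{'total':<28} {total:>3} min")
```

Even a rough log like this makes the next step possible: you cannot separate formatting from judgment until each step has a number next to it.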

Step 2: Separate formatting from judgment

Once you have the workflow mapped, go through each step and ask: is this formatting or judgment?

Formatting is anything rule-based and repetitive. If the instructions can be written down clearly enough that someone with no industry knowledge could follow them, it is formatting. Sorting articles by section. Populating a template with publication name, date, headline, and hyperlink. Structuring the email body in the client's preferred layout. Naming the file correctly.

Judgment is anything that requires context, experience, or editorial discretion. Deciding whether an article is relevant to the client. Filtering out noise from a broad keyword search. Knowing which articles the client would want to see versus which are background. Spotting a risk signal in a headline.

In the workflow I audited, the split was roughly 50/50. About half the time was formatting. About half was judgment. The formatting was spread across multiple steps, which made it feel like part of the work rather than a separate burden. But when we isolated it, the total was 30 to 40 minutes per day on tasks that did not require a senior person's brain.

That is 10 to 13 hours per month on formatting alone. For one client. For one deliverable.

Step 3: Identify what can move to AI and what cannot

This is where most people get it wrong. They look at the monitoring workflow and think: AI can do all of this. It cannot. Or they think: AI cannot do any of this reliably. It can do parts of it very well.

The honest split, based on what I have seen across multiple agency workflows:

What AI can handle well. Formatting the report structure. Once you have your article list confirmed, Claude can take the raw CSV export and sort the articles by section, generate the formatted index in the exact structure your template requires, and produce the complete email body ready to paste. This eliminates formatting inconsistencies and saves 15 to 20 minutes.

AI can also do a first pass on relevance filtering. Meltwater keyword searches are broad by design and always return noise. Claude can review article titles and snippets from the export and flag low-relevance items, shortening the list the human needs to work through. The human still makes the final call on what stays and what goes.
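To show how mechanical the formatting half really is, here is a minimal Python sketch that groups rows from a monitoring export by section and renders a publication / date / headline / hyperlink layout. The column names and layout are my assumptions, not a real Meltwater schema; in practice you would hand this job to Claude with your template as the instructions.

```python
import csv
from collections import defaultdict
from io import StringIO

# Stand-in for a monitoring-platform CSV export; real column names will differ.
raw_export = """section,publication,date,headline,url
Company News,Financial Times,2026-03-02,Acme raises guidance,https://example.com/a
Industry,Reuters,2026-03-02,Sector outlook steady,https://example.com/b
Company News,Bloomberg,2026-03-02,Acme CFO interview,https://example.com/c
"""

def format_report(csv_text: str) -> str:
    """Group articles by section and render the client's entry layout."""
    by_section = defaultdict(list)
    for row in csv.DictReader(StringIO(csv_text)):
        by_section[row["section"]].append(row)

    lines = []
    for section in sorted(by_section):
        lines.append(section.upper())
        for art in by_section[section]:
            lines.append(f"- {art['publication']} | {art['date']} | "
                         f"{art['headline']} | {art['url']}")
        lines.append("")  # blank line between sections
    return "\n".join(lines)

print(format_report(raw_export))
```

Everything in that function is a rule someone wrote down once. Nothing in it requires knowing the client, the sector, or the news.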

What AI cannot handle. Running the platform searches. Without API access, the searches still need to be done manually each morning. Retrieving full article text is the bigger constraint. Most monitoring platform exports contain headlines, dates, URLs, and short snippets, not the full text. For open-access sources, retrieval might be possible. For paywalled sources, it is not. If your client requires full article text in the report, that step stays manual.

Compiling PDF attachments of full articles has the same constraint. If the platform has a built-in PDF export for saved articles, check whether that is a shortcut. If not, this step stays manual. And final QA stays with a human. Always.

Be direct about these limits. The realistic time saving on the workflow I audited was about 30 to 40 minutes per day out of a process that took two to three hours. That is meaningful across a month. It is not a full transformation. The honest framing matters because it sets expectations correctly with your team and, later, with your client.

Step 4: Ask the question most agencies never ask

This is the step that changes the economics.

Once you know what can and cannot be automated, you will likely find that the biggest time cost sits in a step that cannot be automated: retrieving full article text and compiling it into the report or a PDF attachment.

Before you accept that cost, ask this: does the client actually need full article text in the body of the report?

Most agencies never ask this question. The report format was agreed years ago, it has always included full article text, and nobody has revisited whether that is still what the client wants or needs. The format became a fixed assumption.

But the client's needs may have changed. A well-structured summary paragraph for each article plus a direct hyperlink to the source might be just as useful to the person reading the report at 8:30am. In many cases, it is more useful. Executives skim. They want to know what happened and why it matters. They click through to the full article only when something requires deeper reading.

If the client accepts a summary-plus-hyperlink format, the workflow changes significantly. The full article text retrieval step disappears. The PDF compilation step disappears or simplifies. AI can generate the summary paragraphs from the article snippets in the export. The total time saving goes from 30 to 40 minutes per day to potentially 70% of the workflow or more.
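As an illustration of the summary step, the sketch below assembles a summarisation prompt from the snippet fields already in the export. The field names and prompt wording are my assumptions; the resulting string would be sent to Claude (for example via the Anthropic Messages API), and the output still gets a human read before it goes in the report.

```python
# Hypothetical article record taken from one export row; field names are assumptions.
article = {
    "publication": "Financial Times",
    "headline": "Acme raises full-year guidance",
    "snippet": "Acme Corp lifted its 2026 revenue outlook, citing strong demand...",
    "url": "https://example.com/a",
}

def build_summary_prompt(article: dict) -> str:
    """Assemble a summarisation prompt from the snippet already in the export."""
    return (
        "Write a two-sentence executive summary of the article below for a "
        "PR monitoring report. State what happened and why it matters to the "
        "client. Do not invent facts beyond the snippet.\n\n"
        f"Publication: {article['publication']}\n"
        f"Headline: {article['headline']}\n"
        f"Snippet: {article['snippet']}\n"
        f"Link: {article['url']}"
    )

print(build_summary_prompt(article))
```

The key constraint is in the prompt itself: the summary is built from the snippet the platform already gives you, so no paywall retrieval is needed.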

This is not a conversation about cutting corners. It is a conversation about whether the deliverable format still matches how the client actually uses the report. Frame it that way: "We've been reviewing how we can get you the monitoring report earlier and with more consistency. If we move from full article text to structured summaries with direct links, we can deliver 60 to 90 minutes earlier each day. Would that work for your team?"

Some clients will say no. The format is fixed, and that is fine. You have still saved 30 to 40 minutes on the formatting side. But some clients will say yes. And when they do, the entire workflow opens up.

Step 5: Redesign the workflow and test it in parallel

Do not switch overnight. Run the new workflow alongside the old one for one to two weeks. The old report goes to the client as normal. The new version is produced internally for comparison.

During the parallel run, track three things: time per report (old versus new), error count (did the new workflow miss anything the old one caught?), and whether the output quality is equivalent. If the new workflow is faster, equally accurate, and produces a report the client would accept, you have your proof of concept.

Only then do you retire the old process and switch over.

What This Looks Like Across a Year

Saving 30 to 40 minutes per day on one client's monitoring report does not sound dramatic. But run the maths across a team.

If your agency has eight monitoring clients and each report saves 30 minutes per day after the workflow redesign, that is four hours per day returned to your team. Twenty hours per week. Over 1,000 hours per year. Those hours were being spent on formatting, sorting, and copying. Now they are available for advisory work, client development, or capacity that lets you take on another retainer without hiring.
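The arithmetic above is easy to check, assuming a five-day week counted across all 52 weeks of the year:

```python
# Assumptions from the text: 8 clients, 30 minutes saved per client per day,
# a five-day week, counted across all 52 weeks.
clients = 8
minutes_saved_per_client_per_day = 30
working_days_per_week = 5
weeks_per_year = 52

hours_per_day = clients * minutes_saved_per_client_per_day / 60
hours_per_week = hours_per_day * working_days_per_week
hours_per_year = hours_per_week * weeks_per_year

print(hours_per_day, hours_per_week, hours_per_year)  # 4.0 20.0 1040.0
```

Even after subtracting holidays and leave, the total stays close to a thousand hours.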

The cost of not auditing the workflow is not visible on any P&L. It shows up as overwork, as senior people doing junior tasks, as teams that feel stretched even when headcount looks adequate. The monitoring report gets done. It goes out. And nobody questions whether it should take this long.

Until you sit down, map the steps, and time them. That is where it starts.

Gabriel Tan is the founder of Mekong Bridge Advisory. He builds structured execution systems for PR and communications firms.

gabriel.tan@mekongbridge.com | www.mekongbridge.com
