AHCJ – Association of Health Care Journalists


Reporter exposes network of AI-generated ‘local’ newsletters

Nieman Lab reporter Andrew Deck sensed something was off when he received the "Good Day Fort Collins" e-newsletter, which claimed to be produced by members of the local community. After discussing it with a local journalist source, the two agreed it appeared to be AI-generated.

His recent story, "Inside a network of AI-generated newsletters targeting 'small town America'," describes how AI-generated newsletters have infiltrated some 355 smaller towns across the country while revealing little about who is behind them. For example, the newsletters tend to show up in residents' inboxes without a subscription, and testimonials supposedly from a resident of one town are repeated verbatim in support of newsletters in other towns.

In this "How I Did It," Deck discusses how he sleuthed out the backstory of these newsletters, and what they mean for journalism today.

Responses have been lightly edited for brevity and clarity.

You did a lot of legwork to find the founder/editor. Can you share that process?

It was really hard to find any additional information about who was running these newsletters, who owned them, and when they had started expanding across the country. It was after I started reaching out to some of the advertisers to ask them how their ads had been placed in these newsletters that the reporting kicked off. That put some pressure on the owner to respond to my email request and start a conversation.

On the newsletters, beyond a generic contact email, there's no information about Matthew Henderson, the person I found to be the owner, where he operates from, or the entity behind the newsletters. The email used for the website domain registrations, which I looked up, was tied to a blank website. I did all of the normal investigative reporting things you would do to try to find out who owns a site. It was only after I made a $5 reader donation to one of the newsletters that I was able to trace that charge and find the entity that owns the website. It's a company called Good Daily Inc., which was incorporated in Delaware and New York. But even then, it was hard to use that information to find anything that would be useful for our reporting, like the motivation for launching these newsletters, or how exactly the operation was earning its revenue. That's why the conversation with Matthew Henderson, the owner, was key to our reporting, to unpack those additional questions.
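For readers curious about the domain-registration lookups Deck mentions, that kind of check can also be scripted. The snippet below is a minimal, hypothetical Python sketch that shells out to the standard whois command and surfaces registrant-related lines; the domain is a placeholder, not one from the story.

    import subprocess

    def whois_lookup(domain: str) -> str:
        """Run the system `whois` command and return its raw output."""
        result = subprocess.run(
            ["whois", domain],
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout

    # Hypothetical placeholder domain, not one from the reporting.
    record = whois_lookup("example.com")

    # Print lines that often identify the registrant, their email, or organization.
    for line in record.splitlines():
        if any(key in line.lower() for key in ("registrant", "email", "organization")):
            print(line.strip())

As Deck notes, a lookup like this often dead-ends when registrations are private or tied to a blank site, which is why the payment trail ended up mattering more.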

Once you finally reached him, how did that work out?

There was a little fire under him because we were already in communication with some of the advertisers. Some were not aware of the extent to which the newsletter was AI-generated. Some were unbothered, but others did not like what they were seeing in terms of the way that he was conducting his business. So, there was an incentive for him to answer some of our questions and tell his side of the story.

How did you realize that testimonials on the site for one location's newsletter were duplicated for others?

It wasn't more complicated than a good old Google search. I was searching the entire quote verbatim, and you could see that across the 355 newsletters that we identified in our reporting, there was not a single attempt to alter the names or copy of those testimonials, even though they were ostensibly from residents of each of these small towns across the U.S. So there was, even at that stage in the reporting, a clear deceit, or at least a lack of transparency, on the part of the newsletter operator in pretending that these were real local residents.
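The verbatim-quote check Deck describes can also be approximated in code. The sketch below is a hypothetical illustration of flagging word-for-word duplicate testimonials across a handful of newsletter pages; the site URLs and the crude quote-extraction helper are assumptions for illustration, not details from the story.

    from collections import defaultdict
    import re
    import requests

    # Hypothetical newsletter sites to compare; placeholders only.
    SITES = [
        "https://example-town-one.test",
        "https://example-town-two.test",
    ]

    def extract_quotes(html: str) -> list[str]:
        """Crudely pull out quoted passages long enough to be testimonials."""
        quotes = re.findall(r'[“”"]([^“”"]{40,})[“”"]', html)
        return [q.strip() for q in quotes]

    seen = defaultdict(list)  # testimonial text -> sites where it appears verbatim
    for site in SITES:
        html = requests.get(site, timeout=10).text
        for quote in extract_quotes(html):
            seen[quote].append(site)

    # Any testimonial appearing word-for-word on more than one site is suspicious.
    for quote, sites in seen.items():
        if len(sites) > 1:
            print(f"Repeated on {len(sites)} sites: {quote[:80]}")

In practice, Deck's manual approach, pasting the full quote into a search engine, accomplishes the same thing without any code.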

How do AI newsletters like this work? Are they just scrubbing any news from these communities and then regurgitating it?

It's not reinventing the wheel; it's aggregation. One of the things that stood out to me about this instance was the lack of transparency. It's one thing to use AI for aggregation, to use automation to try to build this local news network in 355 cities across the U.S. as a single person who's not residing in any of these areas, but I think it's another to do that without any disclosure to your audience that you're using AI. And that was the sticking point for a lot of folks responding to the reporting.
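To make the aggregation point concrete, here is a rough, hypothetical sketch of how a pipeline of this general kind could be assembled: pull recent headlines from a local outlet's RSS feed and have a language model rewrite them as a daily roundup. It illustrates the general technique only, not Good Daily's actual system; the feed URL, the model name, and the reliance on an API key in the environment are all assumptions.

    import feedparser
    from openai import OpenAI

    # Hypothetical local-outlet feed; a real operation would track many feeds per town.
    FEED_URL = "https://example-local-paper.test/rss"

    feed = feedparser.parse(FEED_URL)
    headlines = [entry.title for entry in feed.entries[:10]]

    client = OpenAI()  # assumes an API key is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "user",
                "content": "Rewrite these local headlines as a short, friendly "
                           "daily newsletter roundup:\n" + "\n".join(headlines),
            }
        ],
    )
    print(response.choices[0].message.content)

The technical bar is low, which is part of Deck's point: the harder questions are about disclosure, not capability.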

I tried not to be too dogmatic in our story about telling people how they should feel about this, but overwhelmingly, the response was that people felt like they were being lied to. There are also some subtle ways that the newsletters purported to be coming from the perspective of someone in the community they were covering, and that just wasn't the case. It's an interesting question for readers of that story, and for other journalists: if a real, genuine local newsroom used the same underlying technology and came out with an AI-generated daily roundup of its own stories, how would people feel about that? I don't have an answer to that question because it seems like there are some use cases here that could be ethical.

Some people you interviewed said these newsletters showed up in their inboxes without them ever signing up. How was that possible?

That's another way this operation may not be behaving ethically. I was not able to prove in the course of my reporting exactly how that happened, so I only want to speculate to a certain degree. Every person I spoke to for the story said that they had no memory of signing up for the newsletter for their town or city, and that one day it just started appearing in their inbox. Some of them unsubscribed. Some of them found it useful; nonetheless, it was not something they had opted into. Matthew Henderson, in response to my questions, had a whole slew of explanations for why that might be the case. He said that family members might have signed people up on their behalf without telling them, and that some cyber hacking scams involve signing people up for newsletters en masse, and maybe some of his newsletters got caught up in that web. He wasn't really able to provide any evidence to support those explanations.

You also found out that these newsletters had contests where readers could vote for their favorite local nonprofit organization. Many people had no idea they were part of it.

Half of the newsletters that were operating at the time of our story were running these reader voting campaigns, where people could vote for a local nonprofit, and at the end of the calendar year, that newsletter would share 10% of its advertising profits with that nonprofit. Some local nonprofits found out about this and started launching subscription campaigns on their social media accounts to try to drive some of their supporters to vote for them, so it seemed to be a source of sign-ups for some of these newsletters. It was a pretty smart way to drive engaged local community members to your newsletter. The problem was, when I reached out to some of the winners of the 2024 voting campaigns, they had not received their prize money, did not know they had won, or did not know they had been part of the voting campaign at all. When I reached out to Matthew Henderson, he said that he was still settling his books for the year, that it was possible that for some of these markets there would be no advertising profits even if there had been revenue, and that he would instead be sharing advertising credits for his newsletters with them.

What kind of reaction did the community have to your story?

I got the idea to do a follow-up because once we published our story, I started getting all this inbound from local news researchers and pink-slime watchdog groups, but also from readers in some of the communities that had been targeted and audience professionals in local newsrooms whose publications had been aggregated in these newsletters. They were all asking me, "Is Good Daily operating in my town? I think I might have seen something like it." So, I decided to put together a comprehensive database, or as comprehensive as we could make it, and try to make our reporting as accessible as possible. That was all done manually, and it was somewhat tedious, but I think it was worth it because I've had people reach out and say they're using it.

Overall, what should journalists know about these AI-generated newsletters?

I think this is the canary in the coal mine. I think these networks of automated or AI-generated local news sites and newsletters are going to continue to crop up. The fact that a single person in New York City created this national network, I think, is quite revealing about where the technology is today, and how easy it is to scale up something like this. It does require technical expertise, but probably a lot less than it used to, and a lot less capital. It's something we need to be on guard about in terms of the integrity of our local news ecosystems, understanding that AI-generated news could be entering these ecosystems without us knowing about it or without it being disclosed explicitly.

In terms of local news journalists in particular, what stood out to me about this story is how easy it is for one of these operations to siphon off local advertising revenue from legitimate, human-run newsrooms. Good Daily still relied on human labor to produce its newsletters; it was just the labor of the local news journalists in the newsrooms being aggregated, which it was not paying for directly.

These newsletters do not have the infrastructure to do original journalism, to do original reporting, to embed in communities, and to have sources and all the things that good local journalism has. But they do have the ability to chase advertising away from the existing operations in these communities.