Just 7% of nonprofits have successfully integrated AI into their workflows, according to a recent TechSoup survey of roughly 1,300 organizations.
That means, despite the transformative potential of AI to multiply a single individual’s impact, most mission-driven organizations are still trying to figure out what AI can — and can’t — do for them.
Two longtime digital marketers, Catherine Seda and Jenny Halasz, want to help close that gap.
Their brand-new book, AI-Powered Content Marketing and SEO, translates decades of marketing know-how into practical guidance for teams who want to use AI effectively and responsibly, without losing the human heart behind the message.
Twenty Years at the Digital Frontier
Both Seda and Halasz built their careers during the early days of digital marketing.
“I got into SEO completely by accident,” Halasz recalls. “I graduated in 2000, and back then, people were asking how to get ranked on Yahoo!, InfoSeek, or WebCrawler. I ended up learning more and more and rode that SEO wave from the very beginning.”
Seda began her journey around the same time, working for one of the first digital agencies that built websites. “They called it SEO, but it was just putting keywords into meta tags,” she laughs.
As luck would have it, the pair would eventually meet while leading digital strategy for LL Flooring. Together, they experimented with an early, home-built version of a custom GPT model — long before the feature existed publicly.
“We trained it on everything we had, and over time, it got very good at writing first drafts of content,” Halasz explains. “We even named the GPT ‘Stella Jameson’ after one of our copywriter’s favorite drinks.”
Their friendship and collaboration flourished, and they eventually decided to coauthor a book.
Drawing Lessons From the First-Mover’s Missteps
For nonprofits just beginning to experiment, Seda and Halasz offer reassurance: they’re not behind; they’re just on a different timeline and can benefit from other people’s experiences.
“I don’t know if it’s great being the first,” Seda says. “You kind of want other people to make the mistakes, especially if it’s a big branding or security risk.”
Among those mistakes? Failing to fact-check AI-generated content.
The legal field’s unfortunate mishaps are infamous by now, with many attorneys facing fines and reprimands for submitting briefs citing nonexistent cases.
Halasz shares another example: “A sheriff was running for reelection, and their team created a bunch of AI content that nobody was reviewing. But the content had endorsements for individual products and entities, which is a very big problem because of federal ethics laws.”
The key is to use AI as a tool without abdicating human oversight.
“Always, always, always double-check your work,” Seda advises. “Especially if you’re quoting a person or a statistic.”
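One lightweight way to act on that advice is to automatically flag any sentence in an AI draft that contains a statistic or a direct quote, so a human verifies it against primary sources before publishing. This is a minimal sketch, not a method from the book; the `flag_for_review` helper and its regex patterns are illustrative assumptions.

```python
import re

def flag_for_review(draft: str) -> list[str]:
    """Return sentences from an AI draft that contain statistics or
    quotations, so a human can verify them before publication."""
    flagged = []
    # Split on sentence-ending punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", draft):
        # Percentages, dollar amounts, or multi-digit numbers.
        has_statistic = bool(re.search(r"\d+(\.\d+)?%|\$\d|\b\d{2,}\b", sentence))
        # Straight or curly double quotes suggest an attributed quote.
        has_quote = '"' in sentence or "\u201c" in sentence
        if has_statistic or has_quote:
            flagged.append(sentence.strip())
    return flagged

draft = (
    'Our food bank served 12,400 families last year. '
    'Volunteers make it possible. '
    '"Every dollar goes further here," one donor said.'
)
for item in flag_for_review(draft):
    print("CHECK:", item)
```

A filter like this never replaces the human review itself; it only makes sure the riskiest sentences land in front of a person.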
Balancing Efficiency With Authenticity
Before LL Flooring, Seda worked in-house for a nonprofit.
“I strongly believe in the value of subject matter experts and talented writers, so I actually pushed to prohibit copywriters from using AI at all,” she admits. “Back in 2023, people were just saying ‘write this’ and it was all just regurgitated slop.”
But while prohibition may have protected quality in the short term, Seda believes it’s no longer realistic.
The real question is balance: where can AI handle efficiency work, and where do we insert the human element that brings authenticity, credibility, and authority?
Where to Begin?
Since the trouble with AI is hallucinations and uninspiring, middle-of-the-road output, Seda and Halasz recommend a simple starting point: verified, human-created material.
“It’s safer to start with, say, an annual impact report,” Seda shares. “Have an AI tool parse it up, give it some guardrails and guidelines, and then ask it to create social media posts or blog articles from some of your pre-checked content.”
The advantage of this approach is twofold. First, with source material to work from, the AI is less likely to make up facts or statistics. Second, since the source document was written in your voice, the output will reflect some of that voice as well.
“It really helps if the input you’re using has already been validated by humans. Start with some core truth, and then have it help you generate additional content assets,” says Seda.
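In practice, that workflow amounts to wrapping your pre-checked material in a prompt that tells the model to stay inside it. Here is a minimal sketch of such a prompt builder; the `build_grounded_prompt` helper and its wording are assumptions for illustration, and the step of sending the prompt to whichever AI tool your team uses is left out.

```python
def build_grounded_prompt(source_text: str, asset_type: str,
                          guardrails: list[str]) -> str:
    """Assemble a prompt that restricts the model to a pre-verified
    source document, such as an annual impact report."""
    rules = "\n".join(f"- {rule}" for rule in guardrails)
    return (
        f"Using ONLY the facts in the source material below, "
        f"draft a {asset_type}.\n"
        f"Rules:\n{rules}\n"
        f"If a claim is not supported by the source, omit it.\n\n"
        f"--- SOURCE MATERIAL (human-verified) ---\n{source_text}"
    )

report = "In 2024 we placed 300 shelter dogs and opened two new clinics."
prompt = build_grounded_prompt(
    report,
    "three-sentence social media post",
    ["Keep the organization's warm, first-person voice",
     "Do not invent statistics, names, or quotes"],
)
print(prompt)
```

The guardrails list is where the "core truth" constraint lives: every generated asset inherits the same rules, and the source report travels with the request.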
Good Reasons to Be Hesitant About AI
Nonprofits that take government funding are, of course, subject to limitations. The new administration has effectively banned a slew of words, not to mention the already familiar restrictions on partisanship.
“It can get very, very difficult when you’re working with something that doesn’t have the decades of experience most people at nonprofits have,” Halasz explains.
That’s why Seda and Halasz emphasize the importance of developing and implementing an AI policy. What information can you give it? What types of content is it allowed to write? What are the procedures for fact-checking?
It’s important to answer those questions, even if informally, before diving in.
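Even an informal policy can be written down as data that your tooling enforces. The sketch below encodes answers to those three questions as a small gate function; the policy values (allowed content types, restricted words, the fact-check flag) are placeholders, since a real policy would come from your leadership, legal counsel, and funding requirements.

```python
# Placeholder policy values for illustration only.
AI_POLICY = {
    "allowed_content_types": {"social_post", "blog_draft", "newsletter"},
    "restricted_words": {"guarantee", "endorse"},  # e.g. funder-imposed terms
    "requires_human_factcheck": True,
}

def policy_check(content_type: str, draft: str, factchecked: bool) -> list[str]:
    """Return a list of policy violations; an empty list means the
    draft is cleared to move forward."""
    violations = []
    if content_type not in AI_POLICY["allowed_content_types"]:
        violations.append(f"content type '{content_type}' not allowed for AI")
    for word in AI_POLICY["restricted_words"]:
        if word in draft.lower():
            violations.append(f"restricted word: '{word}'")
    if AI_POLICY["requires_human_factcheck"] and not factchecked:
        violations.append("human fact-check not recorded")
    return violations

print(policy_check("social_post", "We guarantee results!", factchecked=False))
```

Running every AI draft through a gate like this turns the policy from a document people forget about into a step nobody can skip.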
The Benefit of Getting Started, Now
Nonprofits are notoriously tight on resources. AI isn’t a panacea, but the right approach can significantly multiply what you can accomplish in a day.
Halasz shares the benefits she personally felt from AI: “We create writing briefs for clients, and AI saves us a ton of time. Since implementing it, we can complete that brief about 50% faster, and it helps us turn around edits too.”
Seda adds, “If you’re feeling like you’re behind, it’s okay. Nonprofits are just coming up a bit slower, and now you’ve got a lot of great research and corporate use cases to model. If you can jump in now, it’s a great opportunity.”
Their book, which highlights the revolutionary impact of AI and shows how to balance human creativity with machine efficiency while staying ahead of legal and ethical risks, is available for preorder today.
