AI Usage Policy
Effective date: 10/21/2025
At Capital Campaign Pro, we believe in transparency, accuracy, and putting human expertise first. This AI Usage Policy describes how we use AI tools (e.g., large language models and other generative AI) in our work, what we do not allow, and how we safeguard integrity in all of our content.
Guiding Principles
1. Human-first content
All of our public-facing content (blog posts, guides, email campaigns, social media, training materials, and client communications) is drafted, edited, and reviewed by real people (our team or vetted contractors). We use AI as a support tool, not a substitute for human insight, judgment, or expertise.
2. Fact-based and verifiable
Any information we publish must be traceable to credible sources (peer-reviewed research, published studies, nonprofit sector best practices, official data, etc.). Claims must be backed by citations or documented evidence. AI should never serve as the sole justification for a statement of fact.
3. Privacy and data safety
We will not feed client-sensitive or proprietary information into AI systems that do not guarantee privacy and security. Before using AI tools, we assess whether the tool meets our confidentiality standards.
4. No “fully AI-generated” content
We will never publish content that is fully generated by AI. Period. When we do publish content with any AI input, it will always be subject to human review, correction, and fact-checking. Any AI output will always be validated by a human with subject-matter knowledge before release.
5. Responsible and limited use
We use AI primarily for:
- Creating interactive tools, such as our Gift Range Chart, using AI-powered no-code applications.
- Brainstorming and ideation (e.g. generating potential blog titles, content outlines, alternative phrasing, conceptual frameworks).
- Raw data analysis that is always reviewed, revised, and approved by subject-matter experts before publication.
- Content enhancement (e.g. grammar checking, readability improvements) so long as the core meaning and voice remain human-authored.
We have also developed our own LLM: Andie, the AI Campaign Companion. Andie was created in-house and subjected to nearly a year of training and testing to ensure the highest-quality responses. The works of Amy Eisenstein and Andrea Kihlstedt (books, blog posts, podcast transcripts, etc.) were the primary training materials. Andie’s responses to client questions are reviewed weekly, and training materials are updated regularly.
6. Attribution and transparency
For any content where AI played a substantive role (beyond trivial editing), we will:
- State that fact in the content (e.g. “This post was co-drafted with AI”).
- Note that a human expert verified the content.
- (Optionally) Note the AI tool or model used (e.g. “Drafted with the help of ChatGPT 5.”)
7. No misuse or misrepresentation
We will never represent AI output as wholly human thought or original intellectual property if it was not. We will not use AI to intentionally mislead, manipulate, or misinform clients or the public.
Workflow: How AI Is Used in Our Processes
Below is a typical process illustrating how AI may assist us and where humans intervene:
1. Idea generation / planning
Team members or writers brainstorm content topics or campaign ideas. AI may be used to suggest alternative angles or titles (e.g. “10 possible blog post ideas about capital campaigns in 2025”).
2. Research / fact gathering
AI support may be used to surface relevant sources, summarize reports, or aggregate data. But the writer or researcher must verify each fact from original sources (studies, sector reports, authoritative organizations).
3. First draft
A human writer writes the first version of content. Optionally, the writer may ask AI to suggest phrasing improvements, transitions, or rewordings of complex sentences, which the writer then selects, adjusts, or discards.
4. Human editing / subject-matter vetting
A content editor or domain expert reviews the full draft, checks citations, corrects errors, and ensures the tone, voice, and accuracy of the content.
5. Final review and sign-off
Before publication, a senior staff member or subject-matter expert gives final approval. If AI had a substantial role (beyond light edits), a disclosure statement is added.
6. Publication with transparency
The published version includes a short statement disclosing AI assistance and confirming human review.
7. Periodic auditing
We periodically review content published with AI assistance to ensure ongoing accuracy, relevance, and consistency with our values.
Additional Safeguards & Best Practices
- AI “confidence check.” Writers or editors must flag any AI-provided fact that seems uncertain or out of context and cross-check it against original sources.
- Bias and neutrality. AI-generated suggestions should be reviewed for implicit bias, exclusion, or framing issues. We will guard against perpetuating misinformation, stereotypes, or unintended framing.
- Training and awareness. All team members will receive training on AI ethics, the limitations of AI (hallucinations, errors, outdated knowledge), and this AI Usage Policy.
- Periodic review. We will revisit this AI Usage Policy at least annually (or more often, as AI evolves) to ensure we are following best practices and adapting to new tools responsibly.
Why This Policy Matters for Capital Campaign Pro
- To maintain credibility. As a thought leader in the nonprofit and capital campaign space, we are expected to deliver accuracy, trustworthiness, and deep expertise to our clients.
- To support transparency. We emphasize “lifting the veil of secrecy” in campaign consulting. This policy extends that transparency in how our content is produced.
- To balance innovation with integrity. We value innovation and technology at Capital Campaign Pro. Our aim is to use AI as an efficiency tool without replacing the incredible value of human expertise.
- To protect our reputation. Erroneous or misleading content, particularly in the philanthropic sector, could damage trust. Strong human oversight mitigates that risk.
- To honor our values. This policy serves to uphold our claims of reliability, expertise, and transparency, while allowing us to leverage useful AI tools responsibly.