The Privacy Question You Should Be Asking
Every time a professional considers using an AI email tool, the first question is usually "does it work?" The first question should be "where does my client data go?"
Professional service providers — attorneys, CPAs, financial advisors, healthcare providers — operate under strict confidentiality obligations. Attorney-client privilege, CPA-client confidentiality, SEC regulations, HIPAA. These are not suggestions. They are legal requirements with real consequences for violations.
Most AI email tools were built for general consumers. They were not designed with professional confidentiality requirements in mind. Before you connect your client inbox to any AI tool, you need to understand exactly what happens to your data.
The Five Questions
1. Is client email content used to train AI models?
This is the most important question and the one most AI companies answer vaguely. When you send email content through an AI API for processing, does that content get used to improve the AI model? If yes, your client's confidential information is being incorporated into a system that other people use.
At AssistantAI, we use Anthropic's Claude API with zero-retention settings. Email content is processed in real time and not stored by the AI provider. It is not used for model training. This is not a privacy policy choice — it is a contractual guarantee from the AI provider.
2. Where is email data stored?
When the AI processes your email, the original email content, the generated draft, and any extracted metadata (contacts, deadlines, project references) need to be stored somewhere. Where?
Key things to verify:
- Is data stored in the United States or in a jurisdiction with adequate privacy protections?
- Is data encrypted at rest (when stored) and in transit (when moving between systems)?
- Who has access to stored data — just you, or also the tool's employees?
- What is the data retention policy — how long is data kept, and can you delete it?
Our architecture: email data is stored in Supabase (PostgreSQL) with AES-256-GCM encryption for OAuth tokens, row-level security on all tables, and service-role-only access policies. No AssistantAI employee accesses client email data in the normal course of operations.
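Row-level security is enforced by the database itself, but the behavior it guarantees is easy to state: every read is implicitly scoped to the requesting user. A minimal TypeScript sketch of that guarantee (the field names are illustrative, not our actual schema):

```typescript
// Conceptual sketch of row-level security: every query is scoped to the
// requesting user, so one client can never see another client's rows.
// Field names (ownerId, subject) are hypothetical, not the real schema.
type EmailRow = { ownerId: string; subject: string };

const emailTable: EmailRow[] = [
  { ownerId: "client-a", subject: "Q3 tax filing" },
  { ownerId: "client-b", subject: "Estate plan draft" },
];

// In Postgres, this filter is a policy applied by the database itself,
// so the application layer cannot forget to add it.
function selectEmails(requestingUserId: string): EmailRow[] {
  return emailTable.filter((row) => row.ownerId === requestingUserId);
}
```

The point of pushing the filter into the database is that a bug in application code (or a compromised API key without the service role) still cannot read across tenant boundaries.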
3. How are API credentials secured?
To process your email, the tool needs access to your email account — typically through OAuth (Google, Microsoft) or API keys. These credentials are the keys to your inbox. How are they protected?
Red flags:
- Credentials stored in plain text
- Credentials accessible to support staff
- No credential rotation policy
- OAuth tokens that never expire
Our approach: Gmail OAuth tokens are encrypted with a unique AES-256-GCM key before storage. The encryption key is stored as an environment variable, never in the database. Tokens can be revoked by the client at any time through their Google account.
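As a sketch of what this kind of token encryption looks like, here is AES-256-GCM using Node's built-in `crypto` module. The key handling is simplified for illustration — in production the key comes from an environment variable, and this is not our exact implementation:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Illustrative only: in production this key is loaded from an environment
// variable, never generated at runtime and never stored in the database.
const key = randomBytes(32); // 256-bit key for AES-256-GCM

function encryptToken(plaintext: string): string {
  const iv = randomBytes(12); // unique nonce for every encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // authentication tag: tampering breaks decryption
  // Store iv + tag + ciphertext together as one opaque blob
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

function decryptToken(blob: string): string {
  const buf = Buffer.from(blob, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // verify integrity; a modified blob throws here or at final()
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

GCM matters here because it is authenticated encryption: an attacker who can modify the database cannot silently alter a stored token — decryption simply fails.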
4. What happens if the tool is breached?
Every company can be breached. The question is: what is the blast radius? If an attacker gains access to the tool's systems, what client data is exposed?
Good architecture minimizes blast radius through:
- Encryption at rest (compromised database = encrypted blobs, not readable text)
- Segmented access (each client's data is isolated from other clients)
- Minimal data retention (do not store what you do not need)
- Audit logging (you can tell what was accessed and when)
5. Can you audit what the AI is doing with your data?
Professionals need to be able to demonstrate compliance. That means audit trails. Every email processed, every draft generated, every action taken by the AI should be logged and reviewable.
Our system logs every API call with token counts and cost estimates (api_usage table), every email classification decision, every draft generation, and every approval or rejection. Clients can export their complete activity history at any time.
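An audit record like that can be as simple as an append-only table. A hypothetical TypeScript sketch of the shape of what gets logged (the field names and per-token rates are illustrative, not our actual `api_usage` schema or real pricing):

```typescript
// Hypothetical audit entry for each AI API call. The real api_usage table
// lives in Postgres; this sketch only shows the shape of a logged record.
type AuditEntry = {
  timestamp: string;
  action: "classify" | "draft" | "approve" | "reject";
  emailId: string;
  inputTokens: number;
  outputTokens: number;
  estimatedCostUsd: number;
};

const auditLog: AuditEntry[] = [];

// Illustrative per-token rates; real pricing depends on the model used.
const INPUT_RATE = 3 / 1_000_000;
const OUTPUT_RATE = 15 / 1_000_000;

function recordApiCall(
  action: AuditEntry["action"],
  emailId: string,
  inputTokens: number,
  outputTokens: number,
): AuditEntry {
  const entry: AuditEntry = {
    timestamp: new Date().toISOString(),
    action,
    emailId,
    inputTokens,
    outputTokens,
    estimatedCostUsd: inputTokens * INPUT_RATE + outputTokens * OUTPUT_RATE,
  };
  auditLog.push(entry); // append-only: entries are never updated or deleted
  return entry;
}

// Clients can export their complete activity history at any time.
function exportHistory(): string {
  return JSON.stringify(auditLog, null, 2);
}
```

The design choice worth noting: the log is append-only and exportable, so a compliance review can reconstruct every AI action without depending on the vendor's cooperation.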
Profession-Specific Considerations
Attorneys
The American Bar Association's Formal Opinion 477R (2017, updated 2024) addresses technology and confidentiality. The key principle: attorneys must make "reasonable efforts" to prevent inadvertent or unauthorized disclosure of client information when using technology.
"Reasonable efforts" includes:
- Understanding the technology (you should know how the AI processes your email)
- Using appropriate safeguards (encryption, access controls, audit logs)
- Obtaining client consent when appropriate (particularly for novel technology)
- Having a response plan for breaches
Using AI for email does not inherently violate confidentiality obligations. Using AI without understanding or controlling the data flow does.
CPAs
AICPA Professional Standards Section 1.700.001 requires CPAs to maintain confidentiality of client information. The Confidential Client Information Rule applies to information obtained in the course of professional services.
For CPAs using AI email tools, the key requirements are:
- Client information processed by AI must not be accessible to unauthorized parties
- Tax return data, financial statements, and advisory communications must be encrypted
- Data retention must comply with IRS and state requirements
Financial Advisors
SEC Rule 204-2 (the Advisers Act books-and-records rule) and FINRA Rules 3110 and 4511 require firms to maintain books and records of client communications. Financial advisors using AI email tools must ensure:
- All client communications (including AI-drafted emails) are archived per regulatory requirements
- AI-generated content is reviewable by compliance officers
- The tool does not make investment recommendations or provide financial advice without human review
The Bottom Line
AI email tools can be used safely and compliantly by professional service providers. But not all tools are created equal, and "it works" is not a sufficient evaluation criterion when client confidentiality is at stake.
Ask the five questions. Verify the answers. And choose a tool that was built for professionals, not adapted for them after the fact.
Want to see this in action?
Free 14-day trial. No credit card required.