AI-powered tools are showing up everywhere in therapy practice: note-taking assistants that generate session summaries, intake form processors that extract key information, scheduling assistants that handle client communication, and even AI-informed treatment planning suggestions. The promise is compelling—less time on administrative tasks, more time for clinical work. But for Canadian therapists, adopting any AI tool that touches client data opens a minefield of privacy, consent, and regulatory questions that most vendors are not equipped to answer.

This guide breaks down what the Office of the Privacy Commissioner (OPC), provincial privacy commissioners, and regulatory colleges have actually said about AI in healthcare settings, and gives you a practical framework for evaluating whether a specific tool is safe to use in your practice.

The OPC's Position on AI and Personal Information

The Office of the Privacy Commissioner of Canada has been increasingly vocal about AI since releasing its Principles for Responsible, Trustworthy and Privacy-Protective Generative AI Technologies. While these principles are not legally binding on their own, they signal how the OPC will interpret PIPEDA complaints involving AI and are likely to shape future enforcement actions.

The OPC's core positions relevant to therapy practices include: obtaining valid, meaningful consent before personal information is fed into an AI system; collecting and using only what is necessary and proportionate to a legitimate purpose; being open about when and how AI is used; and remaining accountable for how the system handles personal information, even when a third-party vendor operates it.

Provincial Privacy Commissioners

Provincial commissioners have added their own layers. Ontario's Information and Privacy Commissioner (IPC) has published guidance on AI in healthcare that emphasizes data minimization: AI tools should access only the minimum personal health information necessary to perform their function. British Columbia's OIPC has focused on the requirement for Privacy Impact Assessments (PIAs) before deploying AI tools that process personal information, a requirement any B.C. therapist considering an AI note-taking assistant should assume applies to them.

Alberta's OIPC has been particularly direct: their position is that health information processed by AI must remain within the custodian's control, and cloud-based AI tools that transmit health information to third-party servers require explicit, informed consent from the individual and may require a PIA filed with the commissioner's office.

The Data Residency Problem

This is where most AI tools fail the Canadian compliance test. The vast majority of AI-powered note-taking, transcription, and documentation tools route data through servers located in the United States. Some route through servers in multiple countries. When your client's session content leaves Canada, several legal frameworks come into play: PIPEDA's accountability principle makes you responsible for ensuring, typically by contract, that the foreign processor protects the information to a comparable standard and that clients are told it may leave the country; provincial health privacy laws hold you, as custodian of the record, responsible for what your service providers do with it; and data held by a US provider can be compelled by US authorities under laws such as the CLOUD Act, regardless of any agreement you have signed.

Key question to ask any AI vendor: "Where exactly is my data processed? Where is it stored? Is any data, including temporary processing data, transmitted outside Canada at any point?" If they cannot give you a clear, specific answer, that is your answer.

What Your Regulatory College Says

Canadian regulatory colleges have been slower to issue specific guidance on AI, but several have now weighed in:

CRPO (Ontario)

CRPO's Professional Practice Standards require that any technology used in practice meet the same confidentiality standards as in-person service delivery. While CRPO has not issued AI-specific guidance as of early 2026, their existing standards on electronic records and third-party service providers apply directly. If you use an AI tool to generate session notes, CRPO expects you to vet the provider's confidentiality safeguards as you would any third-party service, obtain the client's informed consent, and review and take professional responsibility for every generated note before it enters the record.

BCACC (British Columbia)

BCACC has been more explicit, issuing a practice advisory in 2025 reminding members that the use of AI tools in clinical practice does not diminish the counsellor's professional responsibility for clinical documentation. In BCACC's view, the counsellor remains fully accountable for the accuracy and confidentiality of AI-assisted records, clients must be informed when AI tools are used, and AI output is a draft for professional review, never a finished clinical record.

CCPA National Standards

CCPA's updated Standards of Practice address technology use broadly, emphasizing that counsellors must be competent in the technologies they employ and must ensure that third-party technology providers cannot access client information in an unauthorized manner. CCPA has also stressed that the therapeutic relationship must not be compromised by technology use—if a client is uncomfortable with AI being used in their sessions, the counsellor must respect that and provide the same quality of service without AI assistance.

Consent Requirements for AI-Assisted Documentation

Based on the combined guidance from privacy commissioners and regulatory colleges, meaningful consent for AI use in therapy should address the following:

  1. What the AI tool does — describe in plain language what the tool does (e.g., "I use a software tool that uses artificial intelligence to help me generate session notes based on our conversation").
  2. What data it accesses — specify whether the tool processes audio recordings, transcripts, typed notes, or other session data.
  3. Where data is processed and stored — disclose the country and, ideally, the specific cloud provider where processing occurs.
  4. Data retention — state how long the AI provider retains data and whether it is deleted after processing or stored indefinitely.
  5. Model training — state whether your client's data is used to train or improve the AI model; this must be explicitly disclosed.
  6. Right to opt out — clients must have the genuine ability to decline AI-assisted documentation without it affecting their care. You must be prepared to take manual notes if a client opts out.
  7. Human review — confirm that all AI-generated content is reviewed and approved by the therapist before being finalized.

Practical tip: Do not combine AI consent with your general intake consent. Create a separate, standalone AI disclosure and consent form. This makes it easier to demonstrate meaningful consent in the event of a complaint and allows clients to consent to therapy without feeling pressured to also consent to AI.

Practical Steps for Using AI Tools Compliantly

If you have evaluated the landscape and decided to proceed with AI tools, here is a step-by-step framework:

1. Conduct a Privacy Impact Assessment

Before adopting any AI tool, complete a PIA that documents what personal information the tool will access, how it processes and stores that information, what risks are involved, and what safeguards are in place. Depending on your province this may be a legal requirement (Alberta's Health Information Act, for example, requires custodians to submit PIAs to the OIPC, and B.C. mandates them for public bodies), and it is strongly recommended everywhere else. Keep the PIA on file; your college may ask for it.
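If it helps to keep assessments consistent across tools, the PIA elements described above can be captured in a simple structured record. The sketch below is illustrative only; it is not an official PIA template, and all names in it are our own.

```python
from dataclasses import dataclass


@dataclass
class PrivacyImpactAssessment:
    """Minimal record of a PIA for one AI tool (illustrative, not an official template)."""
    tool_name: str
    information_accessed: list[str]   # e.g. ["session audio", "transcripts"]
    processing_locations: list[str]   # countries where data is processed or stored
    risks: list[str]
    safeguards: list[str]

    def is_complete(self) -> bool:
        """Every category should name at least one item before the tool is adopted."""
        return all([self.tool_name, self.information_accessed,
                    self.processing_locations, self.risks, self.safeguards])

    def leaves_canada(self) -> bool:
        """Flag the cross-border transfer question raised by provincial commissioners."""
        return any(loc.strip().lower() != "canada" for loc in self.processing_locations)
```

A record like this makes it easy to see at a glance, for each tool, whether the data-residency question has been answered and whether any category of the assessment is still blank.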

2. Vet the Vendor

Ask the AI vendor these questions in writing and keep their responses:

  1. Where is my data processed and stored, and does any of it, including temporary processing data, leave Canada at any point?
  2. How long is data retained after processing, and can I have it deleted?
  3. Is client data used to train or improve your models?
  4. Who at your company, and which subcontractors, can access the data?
  5. How will I be notified of breaches or of changes to your terms of service?

3. Choose Canadian-First Solutions

Where possible, choose AI tools that process and store data exclusively on Canadian servers. The Canadian AI ecosystem for healthcare is growing, and there are options that do not require cross-border data transfer. If you must use a US-based tool, ensure you have a robust cross-border transfer agreement and explicit client consent for the transfer. We help therapy practices evaluate and implement AI solutions that meet Canadian privacy requirements.

4. Update Your Consent Process

Add a standalone AI disclosure and consent form to your intake process. Review it annually with ongoing clients. Make it clear, specific, and free of legal jargon. Your clients should understand exactly what they are consenting to after reading it once.

5. Establish a Human-in-the-Loop Workflow

Never let AI-generated content go directly into a clinical record without human review. Establish a workflow where AI generates a draft, you review and edit it for accuracy and clinical appropriateness, and only then does it become part of the official record. Document this workflow in your practice policies.
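As a sketch of what a documented human-in-the-loop workflow can mean in practice, the gate below refuses to file a note that has not been marked as reviewed. The names and structure are hypothetical, not tied to any particular records system.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SessionNote:
    ai_draft: str
    # Set only after the therapist has edited and approved the draft.
    reviewed_text: Optional[str] = None


def finalize(note: SessionNote) -> str:
    """Return the text allowed into the clinical record.

    AI output is never filed directly: an unreviewed note is rejected.
    """
    if note.reviewed_text is None:
        raise ValueError("Draft has not been reviewed by the therapist; cannot file.")
    return note.reviewed_text
```

The point of the design is that the AI draft and the filed note are distinct fields, so skipping review is a hard error rather than a silent default.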

6. Monitor and Audit

Regularly review what data your AI tools are actually accessing and transmitting. Check for vendor policy changes—AI companies frequently update their terms of service, sometimes expanding what they do with your data. Set a calendar reminder to review your AI vendor agreements quarterly.
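One low-effort way to catch terms-of-service changes between quarterly reviews is to keep a fingerprint of the terms text you last reviewed and compare it against the current version. A minimal sketch, with hypothetical function names; how you obtain the current terms text (a saved copy, a download) is up to you:

```python
import hashlib


def fingerprint(terms_text: str) -> str:
    """Stable fingerprint of a vendor's terms-of-service text."""
    return hashlib.sha256(terms_text.strip().encode("utf-8")).hexdigest()


def terms_changed(saved_fingerprint: str, current_terms_text: str) -> bool:
    """True if the vendor's terms differ from the version you last reviewed."""
    return fingerprint(current_terms_text) != saved_fingerprint
```

A changed fingerprint does not tell you what changed, only that a re-read is due, which is exactly the trigger the quarterly review needs.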

What to Avoid

A few common practices that put Canadian therapists at risk:

  1. Relying on a vendor that cannot say clearly where client data is processed and stored.
  2. Burying AI disclosure inside a general intake consent form.
  3. Filing AI-generated notes in the clinical record without reviewing them.
  4. Adopting a tool without a documented Privacy Impact Assessment.
  5. Assuming a vendor's terms of service still say what they said at sign-up.

The Path Forward

AI tools have genuine potential to reduce the administrative burden that contributes to therapist burnout. Session note generation alone can save 30 to 60 minutes per day for a full-time practitioner. But the path to adoption must be deliberate and compliant. Rush it, and you risk a privacy commissioner investigation, a college complaint, or a loss of client trust that is far more damaging than the time AI was supposed to save.

The regulatory landscape is evolving quickly. The federal government's proposed Artificial Intelligence and Data Act (AIDA) died on the order paper with Bill C-27, but successor legislation is widely expected, and provincial regulatory colleges are likely to release more specific AI guidance throughout 2026. Stay current, document your compliance decisions, and when in doubt, err on the side of client privacy.

If you are looking for help evaluating AI tools for your therapy practice or need assistance building a compliant AI workflow, explore our AI solutions for therapy practices or reach out for a consultation.