
Is it safe to put patient notes into ChatGPT?

The Short Answer: No. You should never put personally identifiable information (PII) about a patient into the standard version of ChatGPT or other public AI tools. Consumer AI services may use the data you enter to train their models, meaning private patient details could theoretically be retained, exposed, or surface in future outputs. That would be a potential breach of the Australian Privacy Principles (APPs) and your confidentiality obligations.

The Safe Workaround: You can still use AI to assist with clinical summaries or letters, but you must de-identify the data first (a short illustrative sketch follows the checklist below).

  • Remove Names: Replace “Jane Doe” with “Patient X” or “The Client.”
  • Remove Dates/Locations: Strip out birth dates, addresses, and specific appointment times.
  • Focus on Symptoms: Only input the clinical signs, symptoms, and pathology markers relevant to the case.
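If you would rather build this step into a small script than edit notes by hand, here is a minimal sketch in Python. The `deidentify` helper, the regex patterns, and the sample note are all illustrative assumptions, not a validated de-identification tool; anything it produces should still get a manual read-through before it leaves your practice software.

```python
import re

def deidentify(note: str, patient_name: str) -> str:
    """Strip common identifiers from a clinical note before it leaves your system.

    Simplified illustration only; real de-identification should use validated
    tools plus a manual review step.
    """
    text = note

    # Replace the known patient name with a neutral label.
    text = re.sub(re.escape(patient_name), "Patient X", text, flags=re.IGNORECASE)

    # Redact dates such as 03/07/1985 or 2024-05-12.
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)
    text = re.sub(r"\b\d{4}-\d{2}-\d{2}\b", "[DATE]", text)

    # Redact long digit runs (phone numbers, Medicare-style identifiers).
    text = re.sub(r"\b\d{8,11}\b", "[NUMBER]", text)

    return text


note = "Jane Doe, DOB 03/07/1985, presented on 2024-05-12 with fatigue and low ferritin."
print(deidentify(note, "Jane Doe"))
# -> "Patient X, DOB [DATE], presented on [DATE] with fatigue and low ferritin."
```

Only the de-identified output should ever be pasted into a public AI tool; the original note stays in your practice management software.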

The Thriving Practitioner Approach: Think of AI as a generic medical textbook, not a secure filing cabinet. Use it to brainstorm treatment pathways or summarise complex mechanisms of action, but keep the “human” details of your patient locked safely in your practice management software.