Privacy and Data Security

In alignment with Yale’s values, we have applied rigorous privacy standards and data governance in deploying the AI chatbots for the ongoing pilot project. When you interact with one of Yale’s Secure Chatbots, the following security measures are in place to safeguard your data:

  • All conversations between you and the AI bot are isolated from the publicly available ChatGPT/OpenAI corpus. Chat content never reaches OpenAI, is not used to train any publicly available model, and is not otherwise retained by OpenAI or any third party.

  • Your chat history with the chatbot is not saved anywhere. If you want to keep a record, use the “Copy Chat History” button to copy your interactions with the chatbot and paste them into your desired destination.

  • You must be CAS-authenticated and have been granted permission before you can use and interact with one of the chatbots.

  • All interactions with the bot are discarded from Yale’s servers once the chat session ends. Yale does not retain or log any conversation for any reason; this applies to both input and output data.

  • Your session with the chatbot will time out after 20 minutes of inactivity. Upon time-out, an error prompt will appear, and you must reload the website to resume interacting with the chatbot. (A minimal sketch of how such an inactivity timer might work appears after this list.)

  • Microsoft retains conversations for 30 days as a matter of practice to detect and prevent “harmful use.” There are currently seven harm categories that can trigger a “harmful use” flag: hate and fairness, sexual, violence, self-harm, jailbreak risk, protected material for text, and protected material for code. Only when a conversation is flagged may designated Microsoft employees review its full text, with the sole purpose of investigating abuse. After 30 days, those conversations are irretrievable to Microsoft. Please refer to Microsoft’s website for more information on its abuse monitoring and content filtering. (A sketch of how an application might read such category flags appears after this list.)

  • You may not use high-risk data in the chatbot at this time. You may use moderate-risk data, which includes: non-public, University-owned research data not considered high-risk; student and applicant data; employment applications and personnel files; non-public contracts, internal memos, and email; and non-public reports, budgets, plans, and financial information. For a full overview of the data classifications, please refer to Yale’s data classification policy.
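
To make the inactivity rule concrete, below is a minimal Python sketch of how a 20-minute inactivity timeout might be enforced. The ChatSession class and its methods are hypothetical illustrations, not the pilot’s actual implementation.

```python
import time

# Hypothetical illustration of the 20-minute inactivity rule;
# not the pilot's actual code.
TIMEOUT_SECONDS = 20 * 60  # 20 minutes

class ChatSession:
    def __init__(self) -> None:
        # Start the inactivity clock when the session opens.
        self.last_activity = time.monotonic()

    def touch(self) -> None:
        # Reset the inactivity clock on every user interaction.
        self.last_activity = time.monotonic()

    def expired(self) -> bool:
        # True once 20 minutes pass without activity; the UI would
        # then show the error prompt and require a page reload.
        return time.monotonic() - self.last_activity > TIMEOUT_SECONDS
```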
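
Similarly, the sketch below shows how an application might surface the seven harm categories from a content-filter annotation. The category names mirror the list above, but the payload shape and the flagged_categories helper are assumptions made for illustration, not a confirmed Microsoft schema.

```python
# Hypothetical annotation shape; Microsoft's actual schema may differ.
HARM_CATEGORIES = (
    "hate_and_fairness", "sexual", "violence", "self_harm",
    "jailbreak_risk", "protected_material_text", "protected_material_code",
)

def flagged_categories(filter_results: dict) -> list[str]:
    """Return the harm categories marked as filtered in an annotation."""
    return [c for c in HARM_CATEGORIES
            if filter_results.get(c, {}).get("filtered", False)]

# Example annotation: only the jailbreak-risk check fired.
example = {
    "hate_and_fairness": {"filtered": False, "severity": "safe"},
    "jailbreak_risk": {"filtered": True},
}
print(flagged_categories(example))  # ['jailbreak_risk']
```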