Lamar University

Campus-wide Access to Microsoft Copilot

We’re pleased to announce the Microsoft Copilot with Data Protection service, a new service endorsed by Lamar University through the Center of Teaching and Learning Enhancement and Information Technology. Microsoft Copilot with Data Protection, a generative AI-powered platform designed specifically for organizations, is now available for Lamar University faculty, staff, and students.

Previously branded Bing Chat Enterprise (BCE), Copilot with Data Protection ensures that organizational data is protected against threats. In Copilot with Data Protection, user and organizational data are protected: chat data is not saved, and it is not available in any capacity to Microsoft or to other large language models for training their AI tools. This layer of protection is what sets Copilot with Data Protection apart from the consumer version of Copilot.

In addition, Copilot with Data Protection supports its generated content with verifiable citations, is designed to assist organizations in researching industry insights and analyzing data, and can provide visual answers, including graphs and charts. While it is built on the same tools and data as ChatGPT, Copilot with Data Protection has access to current Internet data, whereas the free version of ChatGPT (3.5) only includes data through 2021.*

*Note that while this tool is available to you immediately, future policies governing its use and related data may be released soon.

Getting Started

Navigate to the Copilot site and log in using your LEA ID and password. Once signed in, look for the message confirming “Your personal and company data are protected in this chat” above the chat input box and a green “protected” notice in the upper right corner to confirm you are using Copilot with Data Protection. You should also see the Lamar University logo and name in the top left corner if you are logged in correctly. Copilot with Data Protection is currently available in Edge (desktop and mobile) and Chrome (desktop), with support for other browsers coming soon. It is not currently supported in the Bing mobile app for iOS or Android.

[Image: the Copilot UI, with arrows pointing to the university logo, the username, and the text input box.]

Example of Copilot with Data Protection when logged in using an LEA ID and password.

Tips for using Copilot with Data Protection

  • Be cautious. Lamar University only allows information that is already publicly available to be entered into generative artificial intelligence tools, including Copilot, without appropriate approvals.
  • Log in. Always ensure you are logged in with your LEA account when using the service to ensure your data is protected.
  • Potential uses. Content generation, course development assistance, brainstorming, data analysis, document summarization, learning new skills, writing code, and more. Faculty should visit CTLE for ideas.
  • Judicious use. When using Copilot with Data Protection, exercise care when entering information into the prompt. Copilot with Data Protection is being offered for use with Public Information only; data at any other sensitivity level should not be entered. As with other services, we do not recommend including personal information about yourself in prompts. To ensure privacy is maintained, you may not enter personal information about coworkers, students, or others. Failure to follow this guidance may result in violations of law (e.g., FERPA, HIPAA, etc.). Similarly, when using the service, you must ensure adherence to copyright and intellectual property protections. See more considerations for protecting privacy when using Generative AI tools below. Note that you must abide by all privacy, technology, data, and acceptable use policies already in place for Lamar University and the Texas State University System.
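As an illustration of the "Public Information only" guidance above, a department could screen prompts for obvious personal identifiers before anything is submitted. The patterns and function below are a hypothetical sketch, not a Lamar-provided tool, and the regular expressions are illustrative rather than exhaustive.

```python
import re

# Hypothetical pre-submission screen: flags prompts that appear to contain
# common direct identifiers. Patterns are examples only, not a complete list.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in the prompt."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

# A prompt containing a student email and SSN should be flagged for review
# before it ever reaches the chat box.
flags = screen_prompt("Summarize grades for jdoe@lamar.edu, SSN 123-45-6789")
```

A screen like this catches only well-formatted identifiers; it complements, rather than replaces, the human judgment the policy above requires.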

Protecting Privacy in Copilot and other Generative AI Tools

Generative AI encompasses artificial intelligence models that create content in various forms, such as text, images, and audio, employing deep-learning algorithms and training data to produce new content approximating the training data. In light of its growing popularity and transformative nature, the following general guidance is provided for Lamar University, with a focus on data privacy. Please note that this guidance is not legal advice and is not intended to be exhaustive.

If you use generative AI in regular work

  • Explore options to purchase or license a business or enterprise version of the software. Enterprise software usually brings contractual protection and additional resources such as real-time support.
  • Begin discussions with your colleagues about the privacy considerations listed in the next section.
  • Consider where and how existing policies and best practices can be updated to better protect user privacy.
  • Remember to validate the output of Generative AI, and if using Generative AI in a workflow, consider implementing formal fact-checking, editorial, and validation steps in your workflow.
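The validation steps suggested above can be made explicit in code rather than left to convention. The sketch below assumes a hypothetical publishing workflow where a draft cannot be released until both a fact-check and an editorial sign-off have been recorded; all names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """An AI-generated draft and the review steps it has completed."""
    text: str
    fact_checked: bool = False
    editor_approved: bool = False

def release(draft: Draft) -> str:
    """Refuse to publish a draft that skipped any review step."""
    if not (draft.fact_checked and draft.editor_approved):
        raise ValueError("draft has not completed fact-checking and editorial review")
    return draft.text

draft = Draft(text="AI-generated summary of enrollment trends.")
draft.fact_checked = True      # verified against the underlying source data
draft.editor_approved = True   # human editor sign-off recorded
published = release(draft)
```

Encoding the gate this way makes it impossible to "forget" a review step: unreviewed output raises an error instead of being published silently.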

If you create or develop generative AI

  • Provide transparency about how your Generative AI models are trained. Inform users what data might be collected about them when using generative AI and create accessible mechanisms for users to request data deletion or opt-out of certain data processing activities.
  • Explore incorporating privacy enhancing technologies in your initial design stages to mitigate privacy risks and protect user data. Consider technologies that support data deidentification and anonymization, PII identification and data loss prevention, and always incorporate principles of data minimization.
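To make the deidentification and data-minimization ideas above concrete, the sketch below shows one simple approach: replacing common direct identifiers with placeholder tokens before text is used as training data. The patterns and placeholder names are assumptions for illustration; production deidentification typically uses dedicated PII-detection tooling rather than a handful of regexes.

```python
import re

# Illustrative deidentification pass for text records prior to model training.
# Placeholder tokens and patterns are assumptions for this sketch.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
]

def deidentify(text: str) -> str:
    """Replace common direct identifiers with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

clean = deidentify("Contact jdoe@lamar.edu or 409-555-1234.")
```

Running every record through such a pass before it enters a training set is one small piece of data minimization; indirect identifiers (names, addresses, dates) require more sophisticated handling.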

If you would like assistance as you consider data minimization, data anonymization, or data deidentification in your AI, the IT Team can help. Contact servicedesk@lamar.edu.

Supplementary Guidance

The realm of Generative AI is not novel, and apprehensions about its application and potential repercussions have been deliberated and will continue to be deliberated over time. Despite the recent surge in popularity and widespread access to generative AI capabilities, it is imperative to acknowledge the existence of established policies and practices, as well as scholarly, historical, and theoretical frameworks, that should be considered alongside contemporary discussions. University employees must be careful to adhere to all relevant laws, university policies, and contractual obligations.

Within the university context, specific privacy laws, such as the U.S. Privacy Act, state privacy laws like PIPA, and industry-specific regulations including FERPA, HIPAA, and COPPA, as well as global laws like the GDPR and PIPL, are pertinent considerations. Given the unprecedented proliferation of AI and generative AI capabilities, market dynamics are fostering intense competition to integrate AI into existing offerings. This competitive pressure may compromise ethical standards and integrity when new features and capabilities are hastily introduced to the market. Exercise due diligence.

It is essential to acknowledge that training data may encompass information collected in violation of copyright and privacy laws, potentially tainting the model and any products utilizing it. The societal and business impacts of such violations may only become evident over an extended period. We will continue to monitor these concerns.

Efforts to identify and remove personally identifiable information (PII) from large language models are relatively untested, potentially complicating responses to data subject requests within regulated timeframes. Additionally, the inclusion of PII in large language models may enable generative AI to expose such information in the output. The use of input data as training data, coupled with the interactive and conversational nature of data collection, may lead users to inadvertently share more information than intended.

Users may lack the technical literacy to discern that Generative AI mimics human behavior and can be intentionally misled into believing they are interacting with a human. The prolonged and conversational interaction may cause users to lower their guard, inadvertently divulging personal information. The extent of personal information, user behavior, and analytics recorded and retained by these tools may not be apparent to users.