We’re pleased to announce Microsoft Copilot with Data Protection, a new service endorsed by Lamar University through the Center for Teaching and Learning Enhancement and Information Technology. Microsoft Copilot with Data Protection, a generative AI-powered platform designed specifically for organizations, is now available to Lamar University faculty, staff, and students.
Previously branded as Bing Chat Enterprise (BCE), Copilot with Data Protection ensures that organizational data is protected against threats. In Copilot with Data Protection, user and organizational data is protected: chat data is not saved, is not available in any capacity to Microsoft, and is never used to train its AI models or any other large language models. This layer of protection is what sets Copilot with Data Protection apart from the consumer version of Copilot.
In addition, Copilot with Data Protection supports its generated content with verifiable citations, is designed to assist organizations in researching industry insights and analyzing data, and can provide visual answers, including graphs and charts. Although it is built on the same tools and data as ChatGPT, Copilot with Data Protection has access to current Internet data, whereas the free version of ChatGPT (GPT-3.5) only includes data through 2021.*
*Note that while this tool is available to you immediately, future policies governing its use and related data may be released soon.
Navigate to the Copilot sign-in page and log in using your LEA ID and password. When signed in, look for a message confirming “Your personal and company data are protected in this chat” above the chat input box and a green “protected” notice in the upper-right corner to confirm you are using Copilot with Data Protection. You should also see the Lamar University logo and name in the top-left corner if you are logged in correctly. Copilot with Data Protection is currently available in Edge (desktop and mobile) and Chrome (desktop), with support for other browsers coming soon. It is not currently supported in the Bing mobile app for iOS or Android.
Example of Copilot with Data Protection when logged in using LEA ID and password.
Generative AI encompasses artificial intelligence models that create content in various forms, such as text, images, and audio, using deep-learning algorithms and training data to produce new content that approximates the training data. In light of its growing popularity and transformative nature, the following general guidance is provided for Lamar University, with a focus on data privacy. Please note that this guidance is not legal advice and is not intended to be exhaustive.
If you would like assistance as you consider data minimization, data anonymization, or data de-identification in your AI projects, the IT Team can help. Contact servicedesk@lamar.edu.
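As a concrete illustration of data minimization, the sketch below strips obvious personally identifiable information (email addresses, phone numbers, Social Security numbers, and campus ID numbers) from a prompt before it is sent to any generative AI tool. This is a minimal example only: the regular expressions and the assumed letter-plus-eight-digits campus ID format are hypothetical, and a production workflow should rely on a vetted de-identification library or service rather than this short list.

```python
import re

# Hypothetical patterns for common PII; adjust or extend for your data.
PII_PATTERNS = {
    "email": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "phone": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Assumed campus ID format (one letter followed by eight digits).
    "campus_id": re.compile(r"\b[A-Z]\d{8}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each PII match with a labeled placeholder before the text leaves your machine."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarize this advising note: student L12345678 "
              "(jane.doe@example.edu, 409-555-0182) asked about FERPA holds.")
    print(redact_pii(prompt))
    # Summarize this advising note: student [REDACTED CAMPUS_ID]
    # ([REDACTED EMAIL], [REDACTED PHONE]) asked about FERPA holds.
```

Redacting before submission is a defense-in-depth measure; it does not replace the protections described above or your obligations under FERPA and related laws.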
Generative AI is not novel, and concerns about its application and potential repercussions have been debated for some time and will continue to be. Despite the recent surge in popularity and widespread access to generative AI capabilities, it is important to acknowledge the established policies, practices, and scholarly, historical, and theoretical frameworks that should inform contemporary discussions. University employees must take care to adhere to all relevant laws, university policies, and contractual obligations.
Within the university context, specific privacy laws, such as the U.S. Privacy Act, state privacy laws like PIPA, and industry-specific regulations including FERPA, HIPAA, and COPPA, as well as global laws like the GDPR and PIPL, are pertinent considerations. Given the unprecedented proliferation of AI and generative AI capabilities, market dynamics are fostering intense competition to integrate AI into existing offerings. This competitive pressure may compromise ethical standards and integrity when new features and capabilities are hastily introduced to the market. Exercise due diligence when evaluating these tools.
It is essential to acknowledge that training data may encompass information collected in violation of copyright and privacy laws, potentially tainting the model and any products utilizing it. The societal and business impacts of such violations may only become evident over an extended period. We will continue to monitor these concerns.
Efforts to identify and remove personally identifiable information (PII) from large language models are relatively untested, potentially complicating responses to data subject requests within regulated timeframes. Additionally, the inclusion of PII in large language models may enable generative AI to expose such information in the output. The use of input data as training data, coupled with the interactive and conversational nature of data collection, may lead users to inadvertently share more information than intended.
Users may lack the technical literacy to recognize that generative AI mimics human behavior, and they can be intentionally misled into believing they are interacting with a human. Prolonged, conversational interaction may cause users to lower their guard and inadvertently divulge personal information. The extent of personal information, user behavior, and analytics recorded and retained by these services may not be fully apparent to users.