Generative Artificial Intelligence Acceptable Use Standard
Document: Generative Artificial Intelligence Acceptable Use Standard
Campus: MSU Billings
Last Update: 08/28/25
Contact: Co-Chair(s) of Data Governance Council
Purpose: The purpose of this standard is to outline the acceptable use of generative artificial intelligence (AI) at Montana State University Billings (MSUB). This standard is created to protect the safety, privacy, and intellectual property rights of MSUB and its students and employees. This standard will be updated as new tools and vendors are evaluated and brought into the university's technical environment.
Background: Generative AI tools offer many capabilities and efficiencies that can greatly enhance our work. These tools have the potential to enhance productivity by assisting with tasks like drafting documents, editing text, formulating ideas, or generating software code. Generative AI can include standalone systems, be integrated as features within search engines, or be overtly or transparently embedded in other software tools. When using these tools, members of the MSUB community must consider issues related to data security, privacy, compliance, and academic integrity. These tools also come with potential risks that include inaccuracies, bias, and unauthorized use of intellectual property in the content generated. In addition to the content generated by AI, the information submitted into these systems poses security or privacy concerns for the students, staff, and faculty of MSUB, as the data may be used by the developer for training AI models, sold for profit, or maliciously obtained by bad actors through a data breach.
Definitions:
- Generative AI uses advanced technologies such as predictive algorithms, machine learning, and large language models to process natural language and produce content in the form of text, images, videos, audio, and software code. Generative AI models learn the patterns and structures of their training data and use this knowledge to generate new data based on user prompts.
- Institutional data are data elements that are created, received, maintained, and/or transmitted while meeting the institution's administrative and academic requirements. The MSUB Data Stewardship Standards establish the need to protect institutional data and define institutional data classifications.
Standard: This Standard establishes the minimum guidelines for the use of generative AI technology by MSUB employees and for related MSUB activities. This Standard applies to all use cases involving MSUB, including but not limited to:
- generating content for emails, announcements, letters, text messages, and documentation;
- summarizing and proofreading documents;
- business decisions and data analysis;
- meeting notes, action items, and summaries;
- chat bots or virtual assistants;
- development of software code; and
- research (which must also have IRB approval for human subjects research).
Responsibilities: The following are the responsibilities for all MSUB employees when utilizing generative AI tools:
- The MSUB Data Governance Council is responsible for the maintenance of this Standard.
- Public data classifications may be input into generative AI tools. Restricted data classifications may be input only into preapproved generative AI tools (see below). Confidential data shall never be input into generative AI tools.
- For guidance on how generative AI tools intersect with academic honesty, it is recommended that instructors contact the Center for Teaching and Learning. Also see the Montana State University Billings Policy on Faculty and Student Use of Artificial Intelligence (AI) in Academic Work for academic guidelines on the use of generative AI.
- Use of publicly available generative AI tools may inadvertently expose proprietary information, including intellectual property (IP), to unauthorized parties. This may violate copyright or patent protections or compromise research integrity. Users are responsible for ensuring that inputs and outputs of AI tools are appropriately safeguarded and do not include protected, unpublished, or sensitive materials unless the tool explicitly supports secure use. Some granting agencies, including the National Institutes of Health, have policies prohibiting the use of generative AI tools in certain activities, such as the peer review process. Failure to comply with these policies may jeopardize current and future funding requests.
- Before acting upon or disseminating responses generated from generative AI, outputs shall be reviewed by human operators (defined as individuals who possess the necessary expertise and understanding to accurately assess and validate AI-generated outputs) to ensure accuracy, appropriateness, privacy, and security requirements are met.
- AI tools used to process data from other individuals, such as meeting participants, may be used only with consent for the generative AI use from all individuals whose data may be processed by the tool.
- AI generated content may be misleading or inaccurate. Generative AI technology may create citations to content that does not exist. Responses from generative AI tools may contain content and materials from other authors and may be copyrighted. It is the responsibility of the tool user to review the accuracy and ownership of any AI generated content.
- Generative AI shall not be used for any activities that are harmful, illegal, or in violation of an individual's federal and state constitutional rights, federal or state law, Executive Order, or MSUB/MSU/OCHE policy.
- Unless explicitly preauthorized by the MSUB Data Governance Council through a project request, all contractors are prohibited from using MSUB Restricted or Confidential data in generative AI queries or for building or training proprietary generative AI programs.
- Use of generative AI language models is limited to those based and hosted within the United States.
- As with any other software purchases, procurement of AI tools must be submitted through and approved by the Information Technology department.
Approved generative AI tools: An AI platform is approved when MSUB has a contractual agreement with the software company and the agreement states that our data will not be used to train the company's AI models or sold to other entities. MSUB IT is actively evaluating AI platforms, and this list will be updated as the software is vetted and approved.
- Microsoft Copilot Chat
- Microsoft Copilot for M365
- OpenAI ChatGPT Team (not the free version of ChatGPT)
- Panopto Elai
- Adobe Firefly
University employees should not enter any institutional data that is categorized above the Public classification into any unvetted AI tools. If a software platform is desired, please reach out to the Information Technology department for review.
Those who wish to use unvetted AI platforms should think carefully about what happens to the information entered into these tools before engaging with them. By using an unvetted AI platform, the user assumes the risk but also exposes the university to potential legal, reputational, or data security risks. Many AI companies state that they have access to all information entered into the system, including account information and any inputs used to generate a response. This data could be breached by cybercriminals and used to create malware or phishing email campaigns, sold for profit, or used for other fraudulent purposes. Additionally, information entered in a prompt could be used to train the underlying large language model, meaning that whatever data is in the prompt could then be inadvertently exposed to another user. We are all responsible for keeping our institutional data secure, especially as AI becomes more widely available.