Guidelines for using Generative AI Tools at LASP

Purpose of this guideline

This guideline provides information and expectations for the use of generative AI tools at LASP.

Overview

CU’s Office of Information Technology (OIT) has released guidelines and a list of approved tools for generative AI. LASP employees who intend to use generative AI tools for work-related purposes are required to abide by CU’s guidelines. Please see the list of approved tools. If the tool you wish to use is not listed, the Resources & Guidance page has a link to contact OIT to request its review.

What are Generative AI Tools?

ChatGPT and other tools in this category are artificial intelligence (AI) systems that can perform tasks normally requiring human-level intelligence. They are AI-based large language models (LLMs) designed to generate human-like text and to analyze text for meaningful conclusions and other results. DALL-E, Midjourney, and other tools in this category are generative image models designed to produce images from text descriptions.

LASP-specific concerns

While these tools hold the potential to significantly advance research capabilities and streamline operations, they also introduce risks related to data privacy, intellectual property exposure, and the integrity of scientific research. As a research institution, we have additional considerations that are not specifically called out in the CU guidelines:

  • Data Privacy, Security, and Export Control: CU guidelines list approved data levels for use with approved tools, and link to the CU data classification scheme. Note that research proposals and internal business documents are considered Confidential, not Public; this also covers software that has not been published. Content that is covered under ITAR/EAR or CUI is considered Highly Confidential. Zoom and Teams AI capabilities should not be enabled or used for meetings that involve ITAR-, EAR-, or CUI-restricted content.

  • Blindly Using Generated Code: Generated code could contain malicious elements, posing a risk to system security. Even absent explicit security issues, generated scripts may be poorly written, containing elements that could inadvertently harm the systems on which they execute. All code produced by generative AI tools must be reviewed, understood, and scrutinized before it is run on LASP systems, and that review should be performed by someone sufficiently familiar with the programming language.

  • Ethics: There are concerns about whether the data used to train a given generative AI tool has been properly licensed and/or used with permission; use of these tools has the potential to violate the copyright of writers and artists who own the original content. In addition, the computing processes required to run generative AI are extremely resource intensive, raising concerns about environmental impact and sustainability that may conflict with LASP and CU’s climate goals.

Responsibility

In the event of any uncertainty or ambiguity regarding the use of these tools, LASP members should seek clarification or assistance from their supervisors, contract officials, compliance officers, and/or CU OIT to mitigate potential risks and ensure adherence to all relevant guidelines, regulations, laws, and institutional policies. While LASP does not have an official organizational element defined to administer and support the use of Generative AI tools, questions can be posed to the #chatgpt Slack channel for general guidance. Feedback or questions can be provided to your ITGC representatives.

Terminology

AI: Artificial Intelligence.

LLM: “Large Language Model,” a text-based generative AI system designed to understand and generate human-like language.

GPT: “Generative Pre-trained Transformer,” the foundational technology that powers large language models designed to generate text that mimics human writing.

Generative AI: AI that can generate new content after learning from large datasets.

Permissive Image Models: Generative AI models that are specifically designed or configured to allow broader use without strict copyright or usage restrictions.

PII (Personally Identifiable Information): Information that can be used on its own or with other information to identify, contact, or locate a single person, or to identify an individual in context.

EAR (Export Administration Regulations): United States regulations that govern the export and re-export of items for national security, foreign policy, and nonproliferation reasons.

ITAR (International Traffic in Arms Regulations): U.S. regulations that control the export and import of defense-related articles and services on the United States Munitions List.

CUI (Controlled Unclassified Information): Information that requires safeguarding or dissemination controls pursuant to and consistent with law, regulations, and government-wide policies, excluding classified information.

References

Credit: Content adapted from a Confluence guide created by Chris Pankratz and last updated by Alex Ware on Jan 09, 2025.