The following is a set of initial guidelines concerning the use and procurement of AI tools at Amarillo College. Examples of popular AI tools include OpenAI's ChatGPT, Google Gemini, Claude AI, ProWritingAid, Grammarly, and Copilot.
General Principles Regarding AI
The College supports responsible experimentation and use of AI tools, but there are important considerations to keep in mind when using these tools, including information security and data privacy, compliance, copyright, and academic integrity.
AI, most notably generative AI, is a rapidly evolving technology that is being incorporated into many online products and services. Generative AI goes beyond traditional AI systems by creating new content that resembles human-created work. While AI technology is being rapidly adopted, the governance and security practices surrounding AI tools may not yet be mature. This means the information you provide to an AI tool may be used in ways you did not intend or anticipate.
It is important to note that these guidelines are not new college policy; rather, they leverage existing college policies. Amarillo College will continue to monitor developments and incorporate feedback from the college community to update our guidelines accordingly.
Guidelines for Use of AI Tools
- Safeguard Sensitive, Confidential, and Restricted Data: Do not enter sensitive or confidential information, including non-public research data and student and/or employee data, into AI tools not authorized by the college for such usage. Information shared with third-party AI tools could be exposed to unauthorized parties.
- Sensitive Data can include:
- Internal correspondence, e-mail, budgets, financial information, and confidential reports
- Donor financial information and records of non-public or anonymous gifts
- Non-disclosure agreements (NDAs) and related information
- Employee and/or student compliance/complaint reports
- Legal documents
- Confidential and Restricted data
- Includes all data related to HIPAA and FERPA, contracts, credit card data, Social Security numbers, HR data, compliance reports, and associated legal data
- Responsible Use: You are responsible for any content you produce or publish, including AI-generated content. AI-generated content can be inaccurate, misleading, or entirely fabricated (sometimes called “hallucinations”), or may unintentionally contain copyrighted, sensitive, or even confidential material. AI-generated content can also contribute to misinformation. Always review your AI-generated content before publication or sharing.
- Examples of AI-generated content and its uses include, but are not limited to:
- proprietary or unpublished research
- legal analysis or advice
- decision-making in personnel recruitment or disciplinary action
- completion of academic work in a manner not allowed by the instructor
- creation of non-public instructional materials and grading
- data analysis generated from raw data, survey data, and/or data dashboards
- Academic Integrity: Students' use of AI tools must adhere to college requirements regarding academic integrity found in college policies, handbooks, codes of conduct, and other such documents. Faculty should be clear with students in their courses regarding permitted uses, if any, of AI tools. Students are also encouraged to ask their instructors for clarification about permitted uses of AI tools as needed.
- Acquisition and Official Use of AI Tools: Faculty and staff seeking to use or acquire AI tools must adhere to college policies and standards regarding the acquisition of new IT products and services. Established procurement processes ensure that IT products and services have sufficient compliance, privacy, and security protections to safeguard our faculty, staff, and students.