Objective
The Responsible Use of Generative Artificial Intelligence (AI) Task Force is charged with developing and evolving university policy, guidelines, training and support services for the use and procurement of generative AI tools by K-State students, faculty, researchers and staff. Areas of focus will include:
- Academic Integrity and Ethical Use
- Protection of Unpublished Research, Confidential and Clinical Data, and Intellectual Property (IP)
- Responsibility and Accountability
- Security: Scams/Phishing and Cybersecurity
- Procurement Process and Vetting by IT
- AI Literacy Programs
Key Activities
Key activities will include, but not be limited to:
- Create Definitions
- Establish working definitions of generative AI, agentic AI and related terms.
- Develop Assessment and Benchmarking
- Review strategies, policies and practices related to AI use and to expanding AI literacy in higher education, including those at peer institutions.
- Policy Development
- Define principles for responsible and ethical use of generative AI that align with K-State’s mission and core values.
- Develop recommendations for a set of policies or guidelines to govern the use of AI by students, faculty, researchers and staff. This includes reviewing individual proposed policies generated by different working groups over the past year.
- Recommend ongoing governance structures to monitor and update policies as necessary.
- Educational/AI Literacy Initiatives
- Propose strategies, plans and resources needed to integrate AI education/literacy, training and support services into academic programs, faculty development initiatives and operational processes.
- Develop Clinical Guidelines
- Develop Research Recommendations
- Identify IP Issues
- Ensure that IP issues are clearly addressed, including those involving faculty-created course content and research IP.
- Implementation Roadmap
- Identify the key milestones, a timeline for developing and implementing an integrated set of AI-related policies and practices, and an action plan for advancing AI literacy at the institutional level.