03/17/2026 | News release | Distributed by Public on 03/17/2026 08:22
Marquette has established a university-wide artificial intelligence task force to guide the responsible use of generative AI across campus. This cross-disciplinary group, comprising five workgroups, is charged with identifying where existing policies, procedures and support structures should be updated or adapted, with a focus on ensuring that AI adoption at Marquette is responsible, effective and aligned with the university's Catholic, Jesuit mission.
While the task force continues its work and prepares recommendations, many members of the Marquette community are already exploring how generative AI fits into their teaching, research and daily work. The following FAQ offers guidance on some of the most common questions about using AI at Marquette right now, helping faculty and staff navigate emerging tools while broader policies and resources are still taking shape.
Marquette's guidance is intentionally enabling rather than restrictive. The goal is to help faculty and staff use AI tools confidently and responsibly, not to discourage experimentation or efficiency.
Information Technology Services has launched a centralized site to help the campus community use generative AI appropriately and effectively. The site brings together approved tools, data protections by tier, responsible use expectations, informational events and training resources - all in one place.
Faculty and staff should treat AI tools the same way they treat any system that handles university information:
Additional information on the general guidelines for the use of AI can be found here.
It is important to remember that not all AI tools are created equal; some AI platforms use your inputs and uploaded data to inform future answers. If you share confidential data with such a tool, you could accidentally expose that data to others outside of Marquette. To assist faculty and staff, university guidance on the use of AI tools has been established.
For most faculty and staff, Microsoft Copilot is the recommended AI tool for institutional use. Copilot is managed within Marquette's Microsoft 365 environment, which provides privacy and compliance protections and keeps content in a secure, containerized environment that is not used to train public AI models. The Microsoft Copilot tools are:
Further details on the use of these tools, and on all AI tools currently reviewed and deemed acceptable, are located here.
Additionally, ITS is available to advise on which AI tools are available and how they should be used. In some cases, other AI tools may be useful, but usually only for low-risk scenarios such as:
AI can be used responsibly for many day-to-day tasks, especially when outputs are reviewed by a person before being shared publicly or relied upon. Examples include drafting or refining emails or memos, organizing ideas or notes, summarizing discussions, generating meeting minutes using approved tools, brainstorming, and improving clarity or tone. Additional examples can be found within the AI guidelines Do's and Don'ts section.
Use extra care when accuracy, attribution or professional judgment are required, or when working with sensitive or regulated data such as FERPA-protected student data, PCI, HIPAA, research protocols or employee information. In these situations, choose approved tools, limit the data you share and verify outputs carefully. Additional guidance can be found in the AI guidelines' section on sensitivity levels for data.
Additionally, when preparing materials for publication or for use in a proposal, be aware that appropriateness of AI use may be governed by external policies. These policies should be reviewed in advance to ensure compliance.
For collaborative projects, discuss AI use early in the process so all contributors agree on whether and how AI tools will be used. Establishing expectations upfront supports transparency, consistency and shared accountability.
AI should not be used to:
No. Many faculty and staff are excited to explore these tools, while others prefer not to. Marquette aims to reduce uncertainty and provide a clear path forward, not to prescribe uniform adoption. Additional information on the general guidelines for the use of AI can be found here.
University events and training related to AI can be found here; content will continue to grow as training support becomes available.
The landscape of artificial intelligence is constantly evolving. The AI Task Force is building a living, adaptive institutional approach to AI, ensuring any guidance is a starting point, not a final word.
Expect continued updates on: