Marquette University

03/17/2026 | News release | Distributed by Public on 03/17/2026 08:22

AI at Marquette: FAQ on responsible use, recommended tools for faculty and staff

Marquette has established a university-wide artificial intelligence task force to guide the responsible use of generative AI across campus. This cross-disciplinary group, comprising five workgroups, is charged with identifying where existing policies, procedures and support structures should be updated or adapted, with a focus on ensuring that AI adoption at Marquette is responsible, effective and aligned with the university's Catholic, Jesuit mission.

While the task force continues its work and prepares recommendations, many members of the Marquette community are already exploring how generative AI fits into their teaching, research and daily work. The following FAQ offers guidance on some of the most common questions about using AI at Marquette right now, helping faculty and staff navigate emerging tools while broader policies and resources are still taking shape.

Marquette's guidance is intentionally enabling rather than restrictive. The goal is to help faculty and staff use AI tools confidently and responsibly, not to discourage experimentation or efficiency.

New site provides guidance on AI usage

Information Technology Services has launched a centralized site to help the campus community use generative AI appropriately and effectively. The site brings together approved tools, data protections by tier, responsible use expectations, informational events and training resources - all in one place.

What do I need to know about privacy, compliance and responsible use?

Faculty and staff should treat AI tools the same way they treat any system that handles university information:

  • Protect data: Use only approved tools for institutional, sensitive or regulated information. Keep university data within systems under Marquette's control.
  • Minimize data sharing: Provide only what is needed for a task.
  • Verify accuracy: AI outputs may be confident but incorrect. People remain responsible for the accuracy and appropriateness of content.
  • Prevent harm and bias: Be mindful that AI can generate content that is out of context, insensitive or biased. Do not use AI to automate or distance responsibilities related to harassment, discrimination or support for individuals in vulnerable situations. These require human care.

Additional information on the general guidelines for the use of AI can be found here.

What AI tools should I use for university work?

It is important to remember that not all AI tools are created equal; some AI platforms retain your inputs and uploaded data and use them to inform future answers. If you share confidential data with such a tool, you could accidentally expose it to others outside of Marquette. To assist faculty and staff, university guidance on the use of AI tools has been established.

For most faculty and staff, Microsoft Copilot is the recommended AI tool for institutional use. Copilot is managed within Marquette's Microsoft 365 environment, which provides privacy and compliance protections and keeps content in a secure, contained environment that is not used to train public AI models. The Microsoft Copilot tools are:

  1. Microsoft 365 Copilot Full license: The most robust AI tool offered by the university. It integrates fully into Microsoft 365 apps such as Word, Excel, Outlook and Teams. Departmental approval is required to purchase the annual license.
  2. Microsoft Copilot chat (university managed): A free version available to all faculty and staff. The tool assists with questions, drafts, summaries and brainstorming. The chat cannot access Microsoft 365 apps directly, but content or files can be shared directly with the chat to assist the AI with context.
  3. Microsoft Copilot chat (personal account): A free version that has the same functionality as the university-managed Copilot chat, but without data security for university-managed content. Only publicly available data should be used within this application.

Further details on the use of these tools and all currently reviewed acceptable AI tools are located here.

Additionally, ITS is available to advise on which AI tools are available and how they should be used. In some cases, other AI tools may be useful, but usually only for low-risk scenarios such as:

  • Brainstorming
  • Improving general writing clarity
  • Learning how AI works
  • Working with non-sensitive or non-institutional content

When is it appropriate to use AI - and when isn't it?

AI can be used responsibly for many day-to-day tasks, especially when outputs are reviewed by a person before being shared publicly or relied upon. Examples include drafting or refining emails or memos, organizing ideas or notes, summarizing discussions, generating meeting minutes using approved tools, brainstorming, and improving clarity or tone. Additional examples can be found within the AI guidelines Do's and Don'ts section.

Use extra care when accuracy, attribution or professional judgment is required, or when working with sensitive or regulated data such as FERPA-protected student data, PCI, HIPAA, research protocols or employee information. In these situations, choose approved tools, limit the data you share and verify outputs carefully. Additional guidance can be found within the AI guidelines under Sensitivity levels for data.

Additionally, when preparing materials for publication or for use in a proposal, be aware that appropriateness of AI use may be governed by external policies. These policies should be reviewed in advance to ensure compliance.

For collaborative projects, discuss AI use early in the process so all contributors agree on whether and how AI tools will be used. Establishing expectations upfront supports transparency, consistency and shared accountability.

AI should not be used to:

  • replace human accountability or become the author of record
  • store, reuse or train on university data outside institutional control
  • bypass required policies or compliance processes

Is using AI required?

No. Many faculty and staff are excited to explore these tools, while others prefer not to. Marquette aims to reduce uncertainty and provide a clear path forward, not to prescribe uniform adoption. Additional information on the general guidelines on the use of AI can be found here.

What training and resources are available?

University events and training related to AI can be found here; content will continue to grow as training support becomes available.

What is next for AI at Marquette?

The landscape of artificial intelligence is constantly evolving. The AI Task Force is building a living, adaptive institutional approach to AI, ensuring any guidance is a starting point, not a final word.

Expect continued updates on:

  • supported tools and responsible-use guidelines
  • training opportunities
  • examples from teaching, research and operations