Students enter graduate programs with widely varying experience using generative artificial intelligence (AI), and even advanced users may lack the skills to apply it effectively in an educational setting. For this reason, Penn’s Master of Health Care Innovation (MHCI) holds AI orientations for new and returning students, with two primary goals:

  1. To promote appropriate use by helping students identify use cases that align with institutional and program policies and engage with established good practices. 

  2. To create a more equitable learning environment by helping students avoid common mistakes and build AI competency in educational settings. 

In this post, we will explain why orienting students to AI is so important and then offer a framework to help other graduate programs do the same. 

Why Orient Students to AI?

Providing robust AI guidance is increasingly essential, even when building AI competency is not a program goal.

The fact is that students already use AI in their coursework, and policies and technologies intended to stop its spread will not change that reality. In 2025, nearly 90% of undergraduate and graduate students used AI. Usage has grown steadily since the release of ChatGPT in 2022, and the trend is unlikely to reverse.

Yet inappropriate use can have serious consequences for learning, institutional governance, and student culture.

In terms of educational outcomes, inappropriate use can both disrupt students’ immediate educational success and hinder their long-term goals. Two studies from 2024 show that unguided use of AI as a study aid can lead students into an overconfidence trap: they may overestimate their proficiency in course materials and underperform on tests where the technology is not allowed.

Even more concerning, as business professor Michael Gerlich convincingly argues, sustained use of generative AI may “erode essential cognitive skills such as memory retention, analytical thinking, and problem-solving.” Misuse, in other words, may weaken the very skills that help students translate their education into future pursuits.

For institutions, unguided use poses a number of serious challenges:

  • Academic integrity
    Using AI to write all or part of an assignment submission is plagiarism by most standards. Such cases may cause reputational harm to the institution and strain campus resources for adjudicating them.

  • Privacy
    Students using AI tools that are not vetted and licensed by the institution may expose student and university data.

  • Security
    Agentic AI tools, such as Perplexity’s Comet web browser, act independently using students’ login credentials. They therefore have the potential to take harmful actions in learning management systems, billing platforms, financial aid portals, and any other university service a student can access.

Finally, unguided use can exacerbate inequity. Consider a student entering a graduate program from a career building AI-enabled phone apps, and another whose day-to-day job rarely, if ever, touches AI. If students are allowed to use AI but lack guidance, the experienced user will gain an unfair advantage, while the novice will struggle to catch up.

A Framework for AI Training

Training students on educationally appropriate and effective uses of AI presents the most reasonable compromise between AI’s risks and the fact of its ubiquity. To be successful, that training must:

  • Speak to students who have varying levels of prior knowledge, and encourage them to discuss AI with each other.

  • Outline challenges and opportunities related to AI in educational environments.

  • Interface with institutional policies and institutionally licensed tools.

  • Provide opportunities to practice using AI to complete tasks like those in their coursework, as well as opportunities to reflect.

Varying Levels of Knowledge

An overview of what generative AI is and how it works will serve as a review for more advanced students and as necessary context for the rest.

In the MHCI student orientation, we remind students that AI chatbots:

  1. Generate responses by predicting the next token based on context, not by retrieving or verifying factual information like a search engine or database (see the sketch after this list).

  2. Are designed to please the user and will “hallucinate,” if necessary, to answer a prompt.
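
A toy sketch can make the first point concrete. The example below is ours, not part of the MHCI orientation materials, and it is deliberately crude: real chatbots use large neural networks over long contexts. But the basic move is the same. The model produces a statistically likely continuation; it does not look up a verified fact.

```python
# Toy "next-token prediction": count which word follows which in a tiny
# corpus, then predict the most frequent continuation. Purely illustrative.
from collections import Counter, defaultdict

corpus = "the patient sees the doctor and the doctor sees the chart".split()

# Tally each token's observed continuations.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation of `token` in the corpus."""
    return next_counts[token].most_common(1)[0][0]

print(predict_next("the"))  # 'doctor': the likeliest continuation, not a fact
```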

Then, acknowledging that prompt engineering is complex, we give students basic techniques to improve chatbot responses: being specific about the task, the format of outputs, the audience, and the perspective from which the chatbot should write. A sketch of a fully specified prompt follows the slide below.

[Slide: “How can we use AI effectively?” On the left, guidance for effective prompting: a clearly defined ask; context about the chatbot’s role and the audience; directions for formatting output; any additional guidance. On the right, a screenshot of a conversation with Microsoft Copilot.]
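
As a hypothetical illustration of those techniques (again ours, not a slide from the orientation), a prompt might spell out each element explicitly before being pasted into a licensed chatbot:

```python
# Illustrative only: assemble a prompt that is explicit about perspective,
# task, audience, and output format, per the guidance above. The draft
# text is a made-up placeholder.
perspective = "You are a writing coach for graduate students in health care."
task = "Critique the draft below for clarity, structure, and persuasiveness."
audience = "The intended readers are hospital administrators."
output_format = "Respond as a bulleted list of at most five suggestions."

draft = "Our clinic should adopt asynchronous triage because ..."

prompt = "\n".join([perspective, task, audience, output_format, "", "Draft:", draft])
print(prompt)
```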

Challenges and Opportunities

Many of the challenges students will face when using AI tools in class have been articulated earlier in this post: equity, academic integrity, the overconfidence trap, and the erosion of cognitive skills. In the MHCI orientation, we also highlight that AI is not designed for accuracy. It is designed to please the user, sometimes by providing inaccurate or flattering responses, and it is a black box, unable to provide a truthful accounting of how it arrives at a given answer.

On the other hand, we tell students that AI tools can be a valuable supplement to their education when used within guardrails. In coursework, AI chatbots are particularly effective as:

  • Thought partners
    Helping students draw connections among their ideas, identify arguments, and create an outline of topics that may be relevant to a particular line of inquiry.

  • Copyeditors
    Suggesting how to improve the clarity, structure, and persuasiveness of draft writing—especially when writing is framed for a specific audience.

  • Data analysts
    Helping students find patterns among data points that might not otherwise be obvious.

Policies and Tools

While compliance may seem dry, understanding policies and approved tools empowers students to use AI confidently and responsibly. During the MHCI orientation, we introduce students to the University of Pennsylvania’s licensed AI tools and explain that working under an enterprise license protects privacy and intellectual property.

Additionally, because our students work in health care, we add that they must still be doubly sure to avoid entering confidential or protected health information, even into licensed tools.

The orientation distinguishes between using AI in addition to students’ own work and using AI instead of it:

  • If students use AI to supplement their own original thought and writing—for example, by asking a chatbot to critique a draft—that is acceptable and educationally valid.

  • If students use AI to do coursework on their behalf—for example, by having it complete and digest readings for them, or by having it write their assignment submissions—that is unacceptable and a violation of academic integrity standards.

Finally, we explain that, per the MHCI’s policy, students who choose to use AI in an assignment submission must be transparent by including a description of how they used it (for example, “I used Microsoft Copilot to suggest structural edits to my draft”).

Practice and Reflection

With all of this in mind, MHCI students practice using AI to complete the types of tasks they are likely to encounter in class. They form small groups, and each group takes on one task from a common set: writing, editing, research, ideation, or data analysis. They have 15 minutes to work together with a chatbot to complete it.

[Slide: Directions for the group exercise. Students work in groups to use generative AI on tasks like the ones they might be assigned in class, choosing among writing, editing, research, ideation, and data analysis.]

At the end of that time, groups report out to everyone on:

  • The techniques they used to prompt the AI.

  • The strengths and weaknesses of its output.

  • What the next steps would be if they were to transform their work into an assignment submission for a course.

The discussion helps students reflect on how AI might fit into their educational journeys: how it could enhance their thought processes, where it might make them more efficient, and where it might be better to keep AI out of the loop.

For the MHCI, where a significant amount of coursework comes in the form of interactive, asynchronous discussion forums, one effective question for students is: Given that you will review and respond to your classmates’ work on a weekly basis, how much AI-supplemented writing do you want to read?

What Are the Results?

Group practice in course-like tasks allows students who already possess more advanced AI skills to model competency and fluency for their peers. And it spurs discussions of where AI fits in, specifically in educational contexts.

For example, groups in the MHCI orientation who asked the AI chatbot to complete a writing task found that they needed very specific prompts to get useful results, and enough content knowledge to verify the accuracy of the output. They also learned that even after multiple iterations of a prompt, all output needed to be revised, especially for specificity.

Groups who asked the chatbot for research help got a good general overview, but they too found that they needed prior knowledge to determine accuracy. And even when the chatbot was web-search enabled, it frequently produced citations of marginal quality.

By practicing with AI in practical scenarios, students developed a shared framework for how generative AI could help their learning. And through the guidance the orientation provided, they began fitting that framework into programmatic and institutional expectations.

Does this approach unerringly point students in a productive direction? No. The balance between AI’s value and its cognitive risks cannot be struck in one short session, and even students with the best intentions may feel compelled at times to seek a shortcut.

However, taking a pedagogical and social approach to managing AI can help students make more informed, productive choices in their academic work. By training students to use AI to support deeper understanding and skill development, and by setting community expectations and norms, we can reduce risk to students and institutions, even as we improve student outcomes.

The orientation framework discussed in this post was cocreated by J. Meryl Krieger, PhD, and Adam D. Zolkover, MA.