IT Teaching Resources

AI Solutions Suite

This quick guide is designed for students, faculty, and staff at the GSE to intentionally explore the use of AI. Below, you will find resources to frame your understanding, navigate common questions, and map out your own use of AI tools.


Introduction

What is AI? What is Generative AI?

Artificial Intelligence (AI) refers to the capability of machines to perform tasks that typically require human intelligence. AI systems learn from data, make decisions using algorithms, and interact with humans through technologies like speech recognition.

Generative AI (GenAI) is a branch of AI that uses patterns in data to create digital content such as text, images, and videos. These tools can provide answers and also iterate on ideas over the course of a conversation. Unlike earlier AI tools, GenAI tools such as ChatGPT have been adopted on a massive scale, including in educational settings.

How do LLMs work?

AI models like ChatGPT are called large language models, or LLMs. They have been trained on vast amounts of data from internet sites like Wikipedia and Reddit. By processing the patterns in this data, they have become very good at predicting the next word or words in a sentence. You can think of LLMs as word completion on your smartphone, but at a scale where they can compose entire essays. This is a simplified overview, so read on to better understand the nuances of how LLMs operate and where they fit for you.
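To make the word-completion analogy concrete, here is a toy sketch of next-word prediction. It simply counts which word follows which in a tiny made-up corpus and predicts the most frequent follower. Real LLMs learn far richer statistics over billions of parameters, but the underlying task, predicting the next word from prior context, is the same.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent follower.
corpus = (
    "the model predicts the next word "
    "the model learns patterns in the data"
).split()

# Map each word to a tally of the words that follow it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" follows "the" most often in this corpus
```

An LLM does the same kind of prediction repeatedly, appending each predicted word to the context, which is how a short prompt can grow into an entire essay.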

General Questions

What are some key “dos” and “don’ts” when using AI tools?
  • Do try it out yourself. Everyone’s experience will be unique because LLMs produce slightly different results for each user. Try out some AI models for yourself and see how they match or don’t match your expectations. Stanford’s approved AI Playground is a great place to test out the exciting capabilities of AI in a semi-secure environment.
  • Do consider how to use AI tools responsibly. You should never input high-risk data, such as credit card numbers or sensitive research data, in AI environments unless they have been approved for that level of security. You should also be cognizant of inputting medium- or low-risk data, and be mindful of whether this information is approved to share broadly.
  • Do familiarize yourself with Stanford’s guidance for AI use in the classroom. Per the Office of Community Standards, “absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person.” In addition, it is generally a good practice to be transparent and open about the role that AI services or tools play in your work or interaction with colleagues or learners.
  • Don’t rely on AI’s results and sources to be accurate and factual. Many AI tools operate on probability, so the output you get may not be 100% accurate; instead, it may just be the best available response, which can still be wrong. It is a good practice to check the output of AI tools before moving forward.
  • Don’t include any personal or confidential information when using AI. As with any non-secure technology, it is a good practice to not put any information into the tool that you would not be comfortable having on the public internet. Your privacy is not guaranteed, and in some cases the information may be used by the AI provider to refine its technology.
What is AI doing with my data?

AI systems collect data from various sources, either online or from information you share (e.g., prompts), including numbers, text, images, and sounds. The system analyzes this data to learn patterns and relationships, which helps the AI make decisions or predictions. Over time, AI improves its predictions by processing more data, which in turn helps it become more accurate and efficient.

Usage data is the information gathered when you use the tool. This can include your conversations and the account information linked to the tool. The AI provider may use this data to train the model further. All of this data may be stored in the provider's systems; storage itself can be safe, but be aware of the risks of data breaches, the sale of data, and what happens when you delete your account. You can generally find out what data is stored, how it is stored, and whether it is used for training in the terms of service agreement.

Which AI tools keep my data safe?

University IT (UIT) has set up an AI Playground. This is a terrific sandbox to try out various chatbots and get a feel for how each performs. You can attach documents in this environment, too. UIT notes that AI providers have committed to not retaining user data in the AI Playground. However, they encourage inputting only low-risk data.

Also, UIT’s GenAI Tool Evaluation Matrix site lists AI tools currently being evaluated for potential implementation in various contexts at Stanford; however, UIT advises that “[t]his page listing does not imply that each tool or program is approved for general use at Stanford.” For more information, you can refer to UIT’s Responsible AI at Stanford page which includes a comprehensive overview of considerations when using AI.

How much does AI cost?

The cost of AI varies widely. Many consumer-facing AI tools, such as ChatGPT, may be free to use with a limited set of AI models. Often, these providers also offer paid access to more advanced models with expanded capabilities. You may also discover AI tools integrated into products such as productivity suites or research engines; these often start out free but limit usage until you pay for a subscription or license. Finally, for individuals with more technical skills, it is also possible to build custom AI tools with APIs that connect to services like OpenAI. Ongoing costs of those custom builds can include maintenance, updates, and scaling the system to handle more data or users.
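For those curious what "connecting with an API" looks like, the sketch below assembles the request body for an OpenAI-style chat completion call. The model name and endpoint URL are illustrative assumptions; check the provider's current documentation before relying on either. The sketch only builds the request and does not send it, since a real call requires a paid API key.

```python
import json

# Illustrative endpoint for an OpenAI-style chat completions API.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(user_prompt, model="gpt-4o-mini"):
    """Assemble the JSON body for a single-turn chat completion request.

    The model name is a placeholder; available models and pricing vary
    by provider and change over time.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
    }

body = build_chat_request("Summarize this syllabus statement.")
print(json.dumps(body, indent=2))
# Actually sending this requires an API key header, e.g.:
#   Authorization: Bearer $OPENAI_API_KEY
```

Note that API access is typically metered per token processed, which is one of the ongoing costs mentioned above.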

What are the ethical considerations of using AI?

AI can at times be a “black box” in terms of how it works and what data it was trained on, so it is difficult to pinpoint what data might have been included. You might consider using services that prioritize transparency in how they collect data and how they train their AI models.

In some situations, AI can offer a shortcut past points of friction in the learning process. In these cases, it is important to be mindful of how you want to use AI and to be clear with others about your standards. A clear syllabus statement can be a very effective tool in addressing this. In addition, if you are in conversations with students, emphasize the reasoning behind the instances in which you discourage or encourage the use of AI. For more, see this resource from the Stanford Center on Teaching and Learning.

What are the downsides of using AI at this time? How reliable is it?

AI is known for making errors, sometimes called “hallucinations,” such as incorrect facts, bias, and nonexistent citations. We recommend that you proofread and fact-check anything you produce with AI.

Some AI tools work differently on different platforms, including between Mac and PC, as well as mobile and desktop. We recommend testing out your tool on any device you intend to use it on, including mobile if you want others to have access to it.

Where can I find more resources about Stanford policy and tool access?

University IT and Stanford’s Information Security Office (ISO) provide several resources on technology policies and digital tool access. You can learn more about the tools they are currently evaluating and how they recommend approaching AI at Gen AI Tool Evaluation Matrix and Responsible AI at Stanford, and about how to effectively use AI via their articles on GenAI Use-Cases for Experimenting and GenAI Prompt Guide.

Also, if you are considering purchasing software or hardware (whether or not it is AI-related), we recommend reading UIT’s guidance on this topic. For academic-related policies, the Office of Community Standards also provides a resource about Generative AI Policy Guidance and how it affects the honor code.

If you have specific questions pertaining to your role at GSE, please see the personas in the section below.

Role-Specific Questions

  • Instructor / Advisor
  • Researcher
  • Staff
  • Student

Careful Charli

Charli is an instructor who wants to use AI in their classroom but is not sure how. They have thought about adding an AI statement to their syllabus, but they might need a template to get started. They are cautious about AI and want to know more before they allow their students to use it.


What are the guidelines for using AI for teaching at Stanford?

The Stanford Office of Community Standards’ Generative AI Policy Guidance states instructors have full discretion as to whether AI usage is permitted or discouraged in their course. In addition to familiarizing oneself with AI and its pedagogical capabilities in the GSE AI Dos and Don’ts for Instructors, the Stanford Teaching Commons advises instructors to include clear statements about their AI policy on course syllabi. You can also use this CTL Worksheet on Creating Your AI Course Policy to get started.

How can instructors incorporate AI into their teaching, such as using it for grading and giving feedback?

We recommend that you start introducing AI in the classroom gradually, before moving into grading and feedback. See CTL’s AI Teaching Guide for pedagogical approaches, and GSB’s Starting Small with AI in the Classroom for AI literacy and integration.

If you want to use an AI tool for grading or giving feedback, please consult with GSE IT. We can help to research and test the tool before you upload sensitive student data.

Are AI detectors accurate?

Not 100%. Furthermore, detectors can exhibit bias against non-native English speakers. Since a false positive can have serious implications for students, we strongly encourage instructors to be careful when relying on AI detectors.

What are factors that faculty advisors should consider when guiding doctoral students on using AI for research and degree requirements?

Verify that AI usage is permitted based on relevant professional associations and accrediting bodies’ guidance. Discuss the merits and the risks of using AI for research and for writing, including ethical concerns, intellectual property, biases, data security, and data accuracy. Draft a clear action plan outlining how AI will be used and how it will be cited in the research.

Should instructors require that students buy a tool?

In general, the GSE recommends that you try to find free tools for students to use, so that all students have equal access.

Many tools have an “education license” that is a fraction of the regular cost. You can also find software available through Stanford at the UIT Software Licensing page, and see the status of an AI tool with the GenAI Tool Evaluation Matrix.

If you are considering purchasing options, please reach out to GSE IT for a consultation.

Analytical Aysha

Aysha is a researcher and is aware of AI tools. She wants to go beyond just exploring tools in order to use them meaningfully in her research. However, she is uncertain about policies and best practices. She has the budget to purchase an account and pilot something, but she will need help from IT to implement it.


What are the guidelines for using AI for research at Stanford?

If you are hoping to publish your research, be aware that guidance can vary by field, and even by journal. Some research communities do not accept any AI-generated content, including images or text, while others accept it as long as the use of AI tools is disclosed in the acknowledgements.

You can consult the Stanford Research Policy Handbook and HRPP or the APA Journals AI Policy for further guidance on your research project.

How can researchers cite AI in their work?

Citation practices with AI are still evolving, but feel free to refer to these articles for some guidelines on different citation styles:

Diligent Diego

Diego is a staff member and has not used any AI tools. He has heard of them but is wary of using them before he fully understands the implications of the technology. He is eager for more knowledge, and he may need guided hands-on sessions before he feels confident using AI on his own.


What are the guidelines for using AI to perform professional tasks?

It is good practice to discuss the possibility of using AI for your tasks with your supervisor and develop mutually agreed-upon expectations when using AI, particularly if the work being produced will be public-facing such as newsletters and website content.

If you want to use AI to assist with writing and brainstorming, be sure to always proofread and fact-check. It’s important to remember that what you share, even in an email, can reflect on you and be seen as something you take responsibility for.

If you want to use AI transcription or recording tools, California is a two-party consent state (Penal Code 632) which means you need to obtain consent from everyone present in the meeting before you record audio and/or video. This is true whether you are using AI tools or traditional recording devices.

Do not ask AI to help with confidential and/or private information. Never disclose medium- or high-risk information to AI tools including budgets, financial accounts, personnel files, and health information.

What is the best option for staff based on how their teams work?

Start small. One method is to treat AI as you would a new member of the team, and feel out which tasks might be appropriate. It could become a partner that you hand tasks over to, brainstorm with, or get feedback from.

This article on How Non-Technical Teams Can Use GenAI suggests that you “consider working with your team(s) to create a job description/wish list for GenAI that could inform how it is integrated into workflows moving forward.”

Earnest Em

Em is a student and may have played around with some AI tools once or twice. They are wary of accidentally doing something wrong, like violating the honor code or using data inappropriately. They are looking for guidance from their instructors and advisors about which tools and uses are allowed so that they can take full advantage of learning at Stanford.


What are the guidelines for using AI for writing research papers?

If you are hoping to publish your research, be aware that guidance can vary by field and even by journal. Some research communities do not accept any AI-generated content, including images or text, while others have embraced the use of AI tools.

You can consult with your advisor and the Stanford Research Policy Handbook and HRPP or the APA Journals AI Policy for further guidance on your research project.

Can students use AI to clean the data they will be using for their capstone or dissertation?

Consider speaking with your advisor before proceeding with any AI tools. It is possible to use AI to clean code, label data, and help with data visualizations. However, it is important to note that AI can replicate any biases present in the dataset.

What are the guidelines for using AI for assignments at Stanford?

The Stanford Office of Community Standards’ Generative AI Policy Guidance states that unless otherwise stated by an instructor, “use of or consultation with generative AI shall be treated analogously to assistance from another person. In particular, using generative AI tools to substantially complete an assignment or exam (e.g. by entering exam or assignment questions) is not permitted. Students should acknowledge the use of generative AI (other than incidental use) and default to disclosing such assistance when in doubt.”

You can also refer to the Student Guide to Artificial Intelligence by Elon University and AAC&U for some guidelines on approaching and using AI.

How can students cite AI in their work?

Citation practices with AI are still evolving, but please refer to these articles for guidelines on different citation styles:

Next steps

Now that we have addressed the main considerations for using AI, we continue our AI journey through three different pathways: Learn, Explore, and Make. Each pathway includes several options for your level of understanding and facility.

  • Learn
  • Explore
  • Make

For those wanting to dive deep into GenAI’s mechanics and learn how to apply it in everyday scenarios.


Getting Started
Foundational Knowledge
In-depth Courses
More at Stanford

For those wanting to try GenAI in a low-stakes, low-risk environment.


Sandboxes
Workshops

For those ready to buy and/or build their own AI.


GSE AI Tinkery
Workshops
Tools