This section includes some additional questions you might have right now. If you have additional questions that you would like to see added to this guide, please contact the CTT using the "Email Us" link in the bottom right corner of the screen or by emailing ctt@unl.edu.

Instructors

What sort of policy or statement should I include in my syllabus?

In our "Classroom Implications" section of this resource, Kathy Castle, associate professor of practice at UNL, shares the statement she is using in one of her courses. For guidance on creating your own policy, please see "Developing Course Policies around A.I." And, to see a variety of policies from many institutions, take a look at this crowd-sourced collection, "Classroom Policies for AI Generative Tools."


Should I move to using only handwritten assignments completed in class?

In discussions around generative AI, some instructors are considering a return to handwritten essays completed while the instructor is present. While taking technology out of the equation does stop generative A.I. from being used, it also removes many benefits for you and your students. Computers have reduced writing time, enhanced the ability to revise work, made writing more readable, and allowed for spelling and grammar checking, among many other things. They have also significantly improved the ability of many students with disabilities to complete coursework; requiring those students to get an accommodation to use a computer adds an extra burden and would single them out in the classroom. There may be a few situations where handwritten work is appropriate, but for most coursework, we encourage you to use other approaches.


Should I use an AI detector to scan all student work?

Several AI writing detectors, like GPTZero, have been developed and claim the ability to determine whether a human or an AI has written a specific piece of work. While this may sound promising, the reality is that these detectors are flawed in many ways. Despite the prevalence of conversation around ChatGPT, there are a number of generative AI tools on the market. The detectors tend to be trained using a specific AI, meaning that they often fail to detect work written by other AI tools. It is also possible for the detectors to falsely flag work written by humans. At present, the AI detection tools available are not of high enough quality to recommend using them to assess the validity of student work.


Does this mean all my students are going to cheat?

The generative AI tools are presently available for free to any student, which means there is opportunity for cheating. However, responding to this new technology by focusing on policies and punishments can break trust with students and, in some cases, even drive them to use the strategies you’re trying to deter. Instead, it can be more useful to have a conversation with your students about what you are worried about and get feedback from them. Before implementing policies around AI use, explain to students why you find those policies to be necessary and how they connect to your intended learning outcomes. Often, the best way to encourage academic honesty is to build trust with your students and show them the value of the assignments you’re asking them to complete.


How much do my students already know about this technology?

Many of us operate under the assumption that our students know technology better than we do. You hear jokes about people asking their five-year-old to fix their phone. The reality is that not all students know how to use all resources. With new platforms like ChatGPT and other AI tools, many students are just as inexperienced as you are. Think of this as an opportunity for you and your students to learn and explore together. That said, YouTube presenters specializing in AI prompting are rapidly teaching their viewers to use AI for research, writing, and job searches.


I don't want to create an account. How can I see AI in action?

Contact an instructional designer assigned to your college. The CTT maintains an account to make it possible for faculty to gain hands-on experience.


How can AI be used to enhance student learning and engagement?

Many educators are experimenting with different ways to use AI in their teaching and learning. A recent project, "100+ Creative ideas to use AI in education," collected a variety of ways teachers are experimenting with AI. Each approach outlines what the instructor aimed to achieve as well as where the inspiration for the idea came from. Common suggestions include brainstorming, first drafts, outlines, and receiving feedback on writing.


Are there ethical concerns with using an AI in my class?

Ethical considerations include surveillance, transparency, and data and privacy policies. For example, user interactions with an AI may be used to further train that AI.

AI Checker FAQ

How do AI checkers work?

AI checkers look for statistical relationships between words. They evaluate how probable it is that the words in a sentence, and in the composition as a whole, would appear together, and then return a probability that part or all of the composition was created by an AI.
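The idea can be sketched with a toy example. The snippet below is a deliberately simplified, hypothetical illustration: real detectors score text with large language models, but the underlying notion of measuring how "predictable" each word is under a reference model is the same. The function name, the tiny corpus, and the scoring scheme here are all invented for illustration, not taken from any actual detector.

```python
import math
from collections import Counter

def avg_word_logprob(text, corpus_counts, total):
    """Score text by the average log-probability of its words under a
    simple unigram model (a toy stand-in for the language models that
    real detectors use)."""
    words = text.lower().split()
    vocab = len(corpus_counts) + 1  # +1 reserves mass for unseen words
    score = 0.0
    for w in words:
        # Laplace smoothing: unseen words get a small nonzero probability
        p = (corpus_counts[w] + 1) / (total + vocab)
        score += math.log(p)
    return score / len(words)

# Toy "reference corpus" of very common phrasing
corpus = ("the quick brown fox jumps over the lazy dog "
          "the cat sat on the mat the dog sat on the log").split()
counts = Counter(corpus)
total = len(corpus)

predictable = "the dog sat on the mat"                 # common wording
unusual = "quixotic zephyrs vex jumbled sphinxes"      # rare wording

# A detector-style heuristic: more-predictable text scores higher.
assert avg_word_logprob(predictable, counts, total) > \
       avg_word_logprob(unusual, counts, total)
```

Because "AI-like" here just means "statistically predictable," this sketch also hints at why formulaic human writing, such as stock phrases and clichéd openings, can be falsely flagged.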

Do AI checkers work?

The short answer is no, at least not with the certainty typically hoped for. Here's why:

  • AI checkers have to be trained on specific AI models (each generative AI is different, and there are hundreds of them).
  • AI checkers often evaluate based on average prompt engineering and only return estimates in the form of probabilities, which leads to false positives. More sophisticated prompting further reduces the confidence of AI checkers.
  • Students who use AI to write long-form essays would likely be flagged with a high probability that the work was written by AI. But students who use AI do not necessarily use it for full content generation. Student work that is fully AI generated often raises instructor suspicions based on past work -- just as students who copied others' work did before AI was even a threat.

There are simply too many AI tools, too many creative ways AI can be used, and too many unknowns and false positives to state with certainty that AI checkers work.

What groups of students are most susceptible to false positives in AI checkers?

Students who are more likely to be flagged for AI content include students for whom English is not their first language and students who are neurodivergent. Both groups often rely on pattern recognition when writing. Students learning English (or other languages) often learn stock phrases that they repeat in their work. Students who are neurodivergent often excel at pattern recognition and use common phrases that AI checkers may flag. Students who lean on clichéd opening lines ("Merriam-Webster's dictionary defines…") or formulaic patterns to increase word counts are also often flagged for potential AI usage when none occurred. Finally, expertly written documents such as peer-reviewed journal articles, which use phrases and terms from other sources, can be flagged.

Does UNL have a supported AI checker?

No, not at this time.

An AI Checker I was looking at states it has a false positive rate of less than 1%. Why are we not using it?

The reported success rates of AI checkers are often based on tests against large amounts of AI-generated text rather than short answers. TurnItIn reported a less than 1% false positive rate when its AI checker was first announced. The rate was revised to 5% when it came out that the reported confidence applied only to ChatGPT and only to multi-page samples of text. Here are reasons why an AI checker's reported success rate against student material is questionable:

  • Students who use AI most often do not submit multiple pages of AI-generated text.
  • Students can also write their prompts to change the language so it is less detectable by AI checkers.
  • People can train an AI on their own work so that it learns their voice.
  • Several AI tools in use rewrite text to be less detectable by AI checkers.

For help creating AI-resistant assessments, contact an instructional designer assigned to your college.

I believe one or more students are using AI to cheat. How do I approach this?

While AI has made cheating more accessible, students have long used a variety of methods to avoid doing their own work. Whatever process you followed in the past for violations of academic integrity remains relevant. Some colleges and departments have created additional policies or guidance on academic integrity. Talk with an instructional designer assigned to your college for help developing strategies that support academic integrity in your class. To learn more about creating AI policies for your courses, visit our AI policy resource.

Students

How should ChatGPT and other AI be cited?

Citation authorities, journals, and universities are discussing how to cite ChatGPT and other AI resources and tools. For more information about specific styles and how to cite other types of electronic or "unusual" sources, see University Libraries and for specific guidance, talk to your instructor.


Is it OK to use ChatGPT as a "paraphraser tool?"

No, probably not. To paraphrase is to put something you understand into your own words. The source material is then also cited, allowing the reader to compare their understanding of the source material with yours as expressed in your paraphrase. Doing your own paraphrasing is also an important learning check: if you find it difficult to paraphrase something, you may not understand it as fully as you need to. For specific direction on how to use ChatGPT in a particular class, talk to your instructor.