Artificial Intelligence already has some clear implications for classroom learning and student use (for better or for worse…), but what about its use for student support?
Our team at Ribbon Education partnered with Sean Dagony-Clark, Founder and CEO of effectivEDU, to facilitate a workshop specifically for student support staff as part of our “AI in Adult Education” webinar series. In this workshop, Sean shared some effective use cases for leveraging AI for student support, and answered our questions about what AI really is and whether we should trust it.
Missed the event? Read on for our key takeaways and highlights.
Leveraging AI for Student Support: 3 Practical Ideas
With so many possible uses for AI, figuring out where to begin can be overwhelming. But as leaders in the learner success space, we’re always trying to figure out how we can leverage the tools and technology available to us to become more efficient and effective in our work, and to free up more time for 1-on-1 connections with students.
Here are three ways that you can start leveraging AI for your student support tasks now:
1. Leverage AI for Data Analysis
Before we dive into this example, one important caveat: be very careful not to divulge any personally identifiable information when using AI tools. Most AI tools learn from the data they’re given, so if you provide them with your students’ information, it may become part of the system.
That said, AI can be really useful for data analysis. For one example, Sean used the Bing Chat sidebar in Microsoft Edge to generate an analysis of a recent Zoom meeting transcription.
Here’s how to do it:
Generate a transcript of your Zoom recording and save it as a PDF file.
In Bing Chat, enable file access (go to Settings → Sidebar → Bing Chat → Allow File Access).
Open your meeting transcript in the Edge browser. You can just drag it into an open tab.
Write a prompt in the Bing sidebar telling it how to analyze your file. In our example, we asked the system to write a 250-word summary of the file, including bullet points of the main categories that were covered and examples, and finish with takeaways and a summary of next steps and to-dos.
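If you ever want to script this kind of analysis instead of using the browser sidebar, the same prompt can be assembled programmatically and sent to any chat-based AI tool. This is a minimal sketch of our own, not something shown in the webinar; the `build_transcript_prompt` helper and its wording are illustrative.

```python
def build_transcript_prompt(transcript: str, word_limit: int = 250) -> str:
    """Assemble an analysis prompt mirroring the steps above.

    The transcript text would come from your Zoom export. Remember:
    strip out any personally identifiable information before sharing
    the transcript with an AI tool.
    """
    return (
        f"Write a {word_limit}-word summary of the meeting transcript below. "
        "Include bullet points of the main categories covered, with examples, "
        "and finish with takeaways and a summary of next steps and to-dos.\n\n"
        f"Transcript:\n{transcript}"
    )

# The resulting prompt can be pasted into Bing Chat or ChatGPT alongside
# the transcript, or sent through an AI provider's API.
prompt = build_transcript_prompt("Advisor: Welcome everyone...")
```

The advantage of templating the prompt is consistency: every meeting gets analyzed the same way, which makes the summaries easier to compare over time.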
The response that we got from Bing was useful, although it did make some errors which were likely due to the PDF formatting. You always want to verify the accuracy of your outputs. But what it did really nicely was:
surfacing points that were raised multiple times
performing sentiment analysis, capturing the overall tone of the meeting and the feedback
pulling out representative quotes
identifying what went well and what needs improvement
This is just one example of how to leverage AI for data analysis. It works for meeting notes, but it could also work for analyzing emails, student feedback, survey responses, questionnaires, or other data.
To learn more, watch this video walkthrough.
2. Leverage AI for Learning
Don’t forget about the “Chat” in ChatGPT! The conversational function of ChatGPT, Bing Chat, and other chat-based AI tools can be useful for learning, role-playing, brainstorming and so on.
For example, you can use a tool like Bing Chat or ChatGPT to roleplay and improve your communication or processes.
Here’s how to do it:
Start with a really specific prompt. For example, you can ask ChatGPT to help you prepare for a conversation with a struggling learner.
As ChatGPT acts as the student, respond as you typically would as the advisor, and refine your approach as the role-play unfolds. You can use this in a variety of situations, and ChatGPT is pretty good at acting as different personas.
After you’ve gone through the role-play, you can also prompt ChatGPT to give you feedback on your responses or give you examples of what you did well and what can be improved. This is a great way to get feedback on your advising and refine your ideas.
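For teams that want to run this kind of role-play through an API rather than the chat interface, the conversation maps naturally onto a list of role-tagged messages. A minimal sketch, assuming the common system/user/assistant message format used by chat-based AI APIs; the persona wording and function names are our own.

```python
def start_roleplay(scenario: str) -> list[dict]:
    """Seed a role-play conversation in the standard chat-message format."""
    return [{
        "role": "system",
        "content": (
            "You are role-playing as a struggling adult learner. "
            f"Scenario: {scenario} "
            "Respond as the student; the user is their advisor. "
            "When asked, give feedback on the advisor's responses."
        ),
    }]

def advisor_says(messages: list[dict], text: str) -> list[dict]:
    """Append the advisor's turn. In a real integration, the AI's reply
    would be appended as an 'assistant' message after each API call."""
    return messages + [{"role": "user", "content": text}]

chat = start_roleplay("The student has missed two weeks of class.")
chat = advisor_says(chat, "Hi! I noticed you've been out. How are things going?")
```

Keeping the scenario in the system message means the persona stays consistent across the whole conversation, even as the advisor turns accumulate.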
Here’s a Loom video walking through these steps, or check out the transcript below.
3. Leverage AI for Student Communication
One thing to keep in mind is that there are both generalist AI tools and specialist AI tools. ChatGPT, Bing Chat, and so on—these are generalist AI tools. Then you have tools like our Ribbon Message Builder which are designed to use AI for a specific purpose—in this case, drafting emails for student communication.
Here’s how to do it:
Go to Ribbon’s Message Builder. You can start by selecting a basic reason for this student communication, for example, let’s say “a concerning grade.”
Then fill in what you’d like the learner to do: for example, schedule a meeting to discuss next steps with their advisor.
Next, click “Draft Email” and you’ll see the tool come up with an entire email draft based on your input.
From there, you can adjust the email draft based on your own tone of voice and the specific learner needs. Do you want it to be more professional or more upbeat? Is there specific information about the learner that you want to include?
Using a tool like this is much faster than writing an email from scratch, and can really speed up your student communication processes. You’ll get something written that is probably close to your voice; copy and paste it into your email client, make some final revisions, and send.
Here’s a walkthrough video of how to do this.
Sean answered these questions, rapid-fire style, during the webinar. We've transcribed and consolidated his answers here.
If you’re curious to learn more about AI in Adult Education, join our Learner Success Guild, where we’re having conversations like these regularly with leaders, innovators and practitioners in adult learning.
What’s the difference between AI, generative AI, and machine learning?
So, to start with, artificial intelligence is the broadest category here, and that can encompass any machine intelligence. It’s a simulation of something that approaches human-level intelligence by a machine. That could be learning, it could be high-level functioning of a skill or a domain, or it could be something else that approaches human-level ability or intelligence, or at least appears to.
Machine learning is a subset of AI that focuses on learning, pattern building and prediction, and is based on huge amounts of data. This is pretty similar to how the human brain works, actually, where you feed it a huge amount of data and it learns to make predictions based on patterns it observes.
Generative AI uses machine learning to generate new content based on that vast exposure to data. In short: machine learning exposes a model to huge amounts of data, and generative AI uses the patterns the model has learned to create new content based on what it’s seen.
Can I trust the answers from AI?
My answer is not at face value.
There’s the potential for AI to have bad training data, and a lot of its training data has come from the web. We all know how biased things can get on the web, and we’re seeing those same biases show up in the responses from ChatGPT and other generative AI tools. Then there’s the issue of hallucinations: AI making up answers, statistics, or examples that just don’t exist. So we can’t simply trust AI’s answers. That’s not to say we shouldn’t use it; it can produce incredibly useful information, but we also need to verify its accuracy.
Does generative AI actually create original things?
So the way generative AI works is by repurposing what it’s seen in its training data, combined with probability calculations about what output is desired.
So the images it creates are original, in that there’s probably not another image in the world that looks exactly like them. My point, though, is that this is repurposed art; it’s derivative, based on the training data. The catch is that much of human art is derivative too. So in the best case, generative AI creates unique works in the same way that great artists do, by drawing on inspiration. And in the worst case, generative AI is a copyright infringement machine.
Is AI actually intelligent?
So, I would say yes. AI currently excels at some of our standard definitions of human intelligence. ChatGPT, specifically the GPT-4 version, outperforms 99.9% of us on common intelligence tests.
But it’s also artificial intelligence. As the name states, AI responds in programmatic ways, based on its training and the predicted probability of success. However, I could also argue that humans are programmed from birth to give answers and act in ways that have a high probability of success. So that's also a problematic argument against AI intelligence.
Now, AI does have failings. It has hallucinations. It has bias. It has bad data. It makes basic arithmetic errors. It can still fail at some really basic logic. The best way I’ve seen the intelligence question explained is that in ChatGPT, and generative AI in general, the intelligence is not like a human’s, but it is definitely intelligent.
How can we better detect AI?
It is becoming a lot harder to detect. There are a lot of well-known plagiarism detectors for schools, but so far, attempts to build reliable detectors for AI-generated writing have failed. The one answer I do have is this: if we treat both the production of knowledge and the questioning of that knowledge as part of the assessment, then we can potentially take the machine out of the equation.
So, what I mean by this is, think of a Ph.D. defense. The person doing the Ph.D. writes this thesis, but then they present it and it gets picked apart by a team of scholars and they have to defend it. If we can, in a school, treat a paper as a similar process—with less intensity, of course—to a Ph.D. defense, then we start to get at what the person is actually thinking and how they came to the things that they wrote, as opposed to just judging the final product of what they wrote. That's a way that you can actually test the human brain and not just see the production of something that might be AI. But in terms of actually detecting what AI wrote, it's becoming really, really hard.
What’s the best way to use AI in my work without “getting caught”?
I think that the best way to use these systems is to not just take what it gives you and then use it as your own. The best way that I can see to use AI right now for creation is to use it as a partner.
So obviously, we're not thinking about how to use it without being caught, but we are thinking about “how do I use this to improve my work?” Then we can use it to bounce ideas off of, to reflect on ideas, to question ideas, to potentially brainstorm new ideas, or to expand on things you've written. Those are some ways that you can use it to really supplement your work as opposed to just replacing it and copying and pasting.
Will AI take over the world?
My answer to this would be, if we let it.
Another way to say this is: hopefully not, but it depends on us. It is definitely intelligent by human standards. We don't know exactly what's going on in there. We don't think that it's actually thinking—we're pretty sure it's not at this point—but we don't know how it's producing intelligent answers.
That said, allowing AI to make decisions that influence humans, without it being human or sharing the human experience and understanding the human issues it could affect, could become exceptionally dangerous. So we really can’t put our heads in the sand on this. We have to learn to use AI as a tool. We have to learn how to validate it, how to check its answers, and basically be its masters, rather than letting it become ours.
This is the third part of a series of events on Harnessing AI in Adult Education. If you enjoyed reading this recap, consider:
RSVPing to upcoming live events or watching past recordings