
How to Create Responsible AI Policies For Your Teams

If you work in education, you’re already keenly aware of the need for responsible AI use—both inside and outside of the classroom. Yet creating any kind of guidelines or policies for that usage can be overwhelming. Where do you even begin? 

Our team at Ribbon Education partnered with Dr. Laurie A. Pinkert from the University of Central Florida to facilitate a workshop for student success leaders and program administrators on how to craft responsible AI policies for their teams. In the workshop, Dr. Pinkert dove into the factors that should inform AI policies, offered guidance on how to approach AI policy development, and discussed how to find the right balance between risks and rewards.

Missed the event? Read on for an actionable recap of the factors to consider and elements to include as you craft your own AI use policy.

Five considerations before crafting an AI use policy

Before you set out to craft your AI policy, Dr. Pinkert offers five considerations to keep in mind.

“To start, you want to think about what are those privacy or security issues, what quality questions we have, and how do we ensure that what we’re doing [with AI] ultimately produces reliable results for folks?” - Dr. Laurie Pinkert

1. The types of data you handle 

What types of data are your team and your employees managing on a day-to-day basis? And what types of data might you not want to put into an AI tool?

For example, are you working with: 

  • student work submissions

  • information about course content 

  • personal student details 

  • proprietary or confidential information 

You should also consider if the data you’re working with is public or private. All of these considerations can impact how you craft your AI policy, and what is or isn’t okay to use AI tools for. In general, if data is confidential, personal, or protected, it probably shouldn’t be used in an AI tool. 

While this may seem obvious, it’s worth making these expectations clear to the entire team. 

2. The nature of your team’s tasks

What types of tasks are you doing? Which of those are well suited to AI, and which aren’t?

Considering this at both a high level and a more granular one can help you address, with your team and within your policy, when and where AI has a useful place in your workflows. For example, you might consider differences between:

  • the types of communication (e.g., bulk communications vs. 1:1 emails)

  • internal vs. external tasks

  • the level of the task (e.g., brainstorming vs. final reports)

3. The variation in AI tools 

What AI tools does your team have access to, and how do those tools compare in quality, security, and reliability?

AI tools have widely varying levels of security and privacy, and understanding the varying policies and terms of different tools can help you make wise decisions about how (and if) to use each of those tools within your organization. 

For example, you might consider using only tools your institution has a partnership with or API access to, rather than public versions. Or you might limit the types of tasks or data allowed on public versions of AI tools.

4. Stakeholder responsibilities 

Consider who is responsible for the various aspects of AI tools, their output, and their management. For example:

  • Whose responsibility is it to approve adoption or prohibition of AI tools, or similar technologies? 

  • Who is ultimately responsible for each team’s output? 

  • Who is responsible for the quality of AI-assisted or AI-generated work?

You may also want to identify the relevant stakeholders and responsible parties for individual use as compared to large-scale institutional adoption of these tools.

A key point to consider with generative AI tools is that their terms of use generally place the responsibility for AI output on the user. Since the AI can’t take responsibility for itself or for what it generates, that responsibility gets transferred to the user. Who will ultimately be responsible for that within your team, and for how you use the tools?

In short, Dr. Pinkert highlights, what we’re really trying to think about is “who’s ultimately responsible for input, what happens to my input, and then what happens with that output.”  

5. Risks and rewards 

You want to weigh the potential of AI tools to increase efficiency and effectiveness against the risks to security, quality, and reliability.

“What we’re really wanting to do with an AI policy is to weigh those risks and rewards and to develop something that mitigates whatever risks might exist.” - Dr. Pinkert

That could include thinking about: 

  • how you monitor the output of AI 

  • how you ensure the output of AI is accurate or valuable 

  • what biases might already be present in the AI’s data set (therefore biasing its output) 

  • what limitations are present in the training data set 

Emerging Pieces of AI Policies: 7 Things to Include When Creating Yours 

Once you’ve considered the foundational principles, you can start drafting. Dr. Pinkert outlined seven core facets of emerging AI policies:

1. An explanation of the basics of use and/or prohibition 

You can think of this section as setting your ground rules or outlining your overall philosophy on AI tools for your team. You don’t have to go into detail, but instead provide a high-level overview that team members can quickly understand and keep top-of-mind. 

2. Limitations of usage for specific tasks 

In this section, you can break down more granularly the prohibited uses of AI based on types of tasks. Just as some tasks might be enhanced by AI, there are almost certainly others that demand an approach that would preclude the use of AI tools—and this is the section where you should outline exactly what those are. 

Some examples may be: 

  • employment-related decisions, such as hiring, promotions, or performance-related reviews or screenings

  • empathy-based tasks or tasks that require high levels of EQ 

  • any use that creates legal liability or violates your organization’s privacy or security policies

  • external-facing communications for sensitive situations 

3. Limitations of use for sensitive data 

Just as you want to outline which tasks are inappropriate for AI tools, you’ll also want to make clear the expectations for the types of data that should not be input into an AI platform, regardless of task. 

Some examples could be: 

  • sensitive or confidential information, whether institutional or personal

  • proprietary information

  • emails of individuals or other personal identifiers or characteristics 

  • copyrighted work or IP 

4. Requirements for permission or review of AI-generated work 

After you outline limitations, you’ll want to clarify expectations for work that is AI-generated or assisted.

For example, is AI-generated work:

  • something employees need to seek approval for before doing?

  • something they need to report when they do it?

  • something whose output they must review themselves to ensure the right level of expertise?

  • something someone else needs to review for quality or expertise first?

In addition, you should consider what kinds of records you want to keep around AI-generated work, such as when AI was used in a project, how, what information was disclosed to the AI tool, how the output was used, and so on. 

5. Permitted and prohibited AI platforms 

As Dr. Pinkert reminds us, “there’s a wide variation in what’s out there [when it comes to AI platforms]—tools that are available to individuals, tools available to organizations… you may want to think about which you trust and use as a team.”

If you have specific platforms that you want employees to use—or don’t want them to use—that should be clearly outlined. 

Some organizations may also want to specify which devices can and cannot have approved AI software installed.

6. A contact person for questions 

As with other employee policies, you should expect team members to have questions about your AI policy. Designate a contact person for those questions, and ensure that person has all the relevant details and information to support implementation and adherence.

7. A plan for review and updates 

“You really want to think of your document as a living document,” Dr. Pinkert said, “not as a static document, but one that will move you forward and gets reviewed regularly.” Tech changes, and it changes quickly. 

As such, you should build into your policy process (whether or not you share this with the whole team) a plan for when and how often the policy will be reviewed, and for updating it regularly as AI technology evolves.

“What I tend to think about is really principle-driven approaches to policy making,” Dr. Pinkert concluded. “So it’s not all or nothing. It’s thinking about what is the appropriate use, what is the kind of data we use safely, and how do we ensure these kinds of things?” 


This is the fourth part of a series of events on Harnessing AI in Adult Education. Was this recap helpful for you? Check out our: 

