Webinar Recording
Video Transcription
Thank you everybody for joining us. We're just going to let you file on in here to our Zoom room. If you'd like to use the chat to say where you're from, we'd love to hear about how the snow is or rain or whatever's going on across the country these days. So feel free to jump in the chat and tell us where you're from. Hello, Brenda Lee. Hello. Hello. That's great. All right. Yay. Yep. Hi, Christy. My daughter's in Chicago. She told me they got to work from home today. Thanks everybody. All right, well, we're going to get started, but feel free to keep using the chat to say hi and hello to your colleagues and hello to us. Thank you so much for being here today. My name is Meg Natter, and I'm the director of community colleges and foundations here at CASE. I'm just so grateful to our two presenters today. Our college and university foundation folks have been trying to offer more webinars, and this one is our first one, and it is a fantastic one. As you know, it's Generative AI in Higher Education Foundations. And just a little bit of housekeeping before we get started, as you know, it's being recorded. But the chat's not for questions today. The chat's to say hi, and hi to everybody who's checking in there. That's great. Down at the bottom of your screen, you should see a Q&A button next to Share and More. One of the little tools down there is Q&A, and that's where you can put your questions. And we're probably going to wait a little bit to get to questions because our presenters have a lot of information to share, and they may answer all of your questions while they're doing their thing. So again, just use the Q&A box for your questions, not the chat. But let's get rolling. I'd like to introduce you to our two presenters. I'm going to start with Rosa Unal, who's the vice president and chief information officer at the Iowa State University Foundation. She also serves on CASE's College and University Foundation Leadership Committee. And then we also have Abby Sawhawk, the chief legal officer at the University of Wisconsin Foundation and Alumni Association. So Rosa put this together, and I just can't thank her enough. Wait till you guys see the PowerPoint, because it is excellent. So Rosa and Abby, take it away. Sounds good. As Meg mentioned, my name is Abby Sawhawk. I'm chief legal officer at the Wisconsin Foundation and Alumni Association, also referred to as the University of Wisconsin Foundation. We love having different names and confusing everybody about which one to use, but we are the same thing. I have been at the foundation for almost six years now, and we always like to keep me busy. So I will say that, and I'll kick it over to Rosa for her to introduce herself a little bit. Yes. Well, as Meg said, I'm the chief information officer at the Iowa State University Foundation, and I've worked here since 1994. So some questions we're going to explore today. We are going to explore what generative AI is and what it can do within your organization. We're also going to talk a lot about the risks of generative AI. Obviously, there are a lot of opportunities that generative AI can present, but there are also some risks. So how does generative AI fit into your broader enterprise risk management efforts? And what are some of the actual, sort of more concrete risks of generative AI? We'll also talk a little bit about how those risks can be mitigated.
And then we'll end with some case studies of how each of our organizations have approached mitigating those risks so that we can actually take advantage of this technology. Great. So let's start with a brief overview of artificial intelligence. AI is a very old field of study, and it started in its modern form with the advent of computers. It's gotten a lot of attention in the last few years with the public launch of ChatGPT in November of 2022. AI systems, in general, aim to replicate or augment human capabilities, often enabling automation, decision-making, and predictive analytics. Some of the most well-known tasks that AI has been trained to perform include understanding natural language, recognizing patterns, solving problems, and creating new content. And examples of AI applications that you may know include face recognition, self-driving cars, and smart assistants like Alexa and Siri. But in today's webinar, we will focus specifically on generative AI. Next slide, please. So what is generative AI? It is a type of AI designed to create new content in response to human prompts. That new content can be in the form of text, usually through conversations with a chatbot. It can also generate images and video with tools like DALL-E and Midjourney. And it can create new songs or clone voices. Generative AI can also help developers write new code or fix programming bugs. There are many applications of generative AI out there. And vendors are now in a race to get their tools and technology out so they can be the ones that get the most adoption and prevail in the future. Some software vendors like Microsoft and Salesforce are embedding generative AI tools in their applications. And others like OpenAI and Anthropic are offering it as a standalone tool. I recently heard a quote. It said, generative AI is like salt on the dinner table. You can sprinkle it on everything. And so you can expect this year to see every product have an offering with generative AI in it. As this technology becomes increasingly prevalent, there are many promises of improvements and efficiencies, increased creativity, and increased automation. But there are also concerns raised about potential misuse, regulatory compliance, and other risks. So, Abby will share with us about risk and risk management. So, we'll start by talking about how you can manage the risk of generative AI through your enterprise risk management efforts. So, obviously, with the opportunities, as you heard from Rosa, there are many ways generative AI could be beneficial to your operations. And there are a lot of products that are incorporating an AI function. With those opportunities also come threats. So, through your enterprise risk management process, you should monitor the threats created by generative AI. We've generally seen, and this is a small sample size, right, but we've generally seen two approaches to incorporating generative AI into your ERM. The first would be having generative AI be sort of a standalone risk that is monitored at that level. The benefit of doing it in that manner is that you get a ton of visibility on generative AI. But maybe your organization isn't embracing generative AI fully yet. The other option that we've been seeing is making generative AI a component of other risks. So, for example, you might have a cybersecurity risk that has a generative AI component. The sort of natural result of that structure is that generative AI is sort of subservient to the larger cyber risk.
It just sort of depends on which you would like to elevate. Regardless of the approach you select, the key through all of this is tracking, monitoring, and adapting. So, there are a number of different kinds of risks posed by generative AI. And we're going to get into some of these areas in a little more detail, the cyber and privacy and the legal and compliance in particular. But we'll start by talking a little bit about some of the others. So, on the decision-making side of things, there is always the risk of making invalid decisions by using misleading or fabricated information obtained from AI. There is research that shows that generative AI can provide misinformation. And so, if you rely on that information without fact-checking it against more reliable sources, you're going to be making decisions based on inaccurate information. So, make sure that you're using it as a source of information for your decision-making, but it shouldn't be your only source of information. On the financial side of things, there are strategic and operational decisions that, in a similar vein, if you make them using bad AI-produced data, could impact your fundraising results in a negative way. On the reputational side of things, if your organization is seen to be relying on invalid, misleading, or fabricated information, it could create issues with your reputation. AI has also, in some cases, been shown to produce discriminatory and biased results. And we can talk a little bit more about why that is. I am by no means a technical expert on AI, but I can speak to some of those issues from the non-technical perspective. There's also the risk of loss of opportunity. These technologies present a huge potential for organizations to streamline their efficiency and get more things done in less time. Being entirely resistant to generative AI could create a loss of opportunity and growth for your organization. The last two areas of risk, cyber and privacy and legal and compliance, are two that we will talk about in more detail. Starting with cyber and privacy, like all technology tools, these apps are subject to cyber attacks that could result in a data breach. And a breach could have a significant impact if the AI model was fed confidential and proprietary information. Rosa will speak to this item in a little bit here. And we're going to spend a little time here talking about legal and compliance next. The first thing to know is these risks can be balanced with appropriate mitigation efforts, but those efforts should be informed by your risk appetite as an organization and your desired use cases for this technology. Not all organizations are going to approach incorporating this technology to the same degree, so what works for us might not work for you. All right, so let's talk a little bit about the legal and compliance risks. The first thing I will say is I am a lawyer, but I am not your lawyer. So I am here to give you some general advice, but obviously talk to your own experts in this space. The next thing I will say is the law is notoriously behind technology, always and forever. One of the biggest gripes of attorneys is that the law moves slowly and technology moves quickly. So these things are actively being worked out. I would expect over the next several years you will see a lot of refinement in this space about what the legal standards are for generative AI. So I'll speak to just sort of some general concepts and where things are now.
In the intellectual property space, again, this is a moving target, but in general, I would not assume that you can claim intellectual property rights over anything created through generative AI. So, for example, if there is a specific use case in which it is important for you to have intellectual property rights over a particular image, generative AI probably isn't the way to go. So, for example, probably don't use DALL-E or Midjourney to create your organizational logo. You might want to actually employ a person to do that for you because it does make the intellectual property protections a lot more clear. On the reverse side, these tools are actively being sued for copyright infringement based on what is being fed into them. So, for example, the New York Times is currently suing OpenAI and Microsoft for unauthorized use of its published works being used to train their AI models. That litigation is actively ongoing, but we will definitely see some clearer guidance about how these models can use other people's works. In terms of privacy and data protection, generative AI and especially large language models rely on a massive amount of inputs in order to generate their models. If the inputs are personal data, this can pose issues with the disclosure of personal information to these tools, and you may unwittingly violate privacy laws. Similarly, if your inputs are your business information, you may lose trade secret or confidentiality protection over that information by inputting it into these tools. Your comfort level with inputs may depend on the specific tool. So, for example, you may have greater comfort with a Microsoft-owned tool if you are already a Microsoft organization and you have some enterprise privacy and security protections in place versus a random web-based tool like ChatGPT where you have little to no control over what is done with your information. Moving on to terms and conditions, one thing to keep in mind is you are agreeing to terms and conditions by using these tools, and you should review those terms and conditions before you use them or before you sort of bless your team, your organization, to use them. You might find some interesting things when you review the terms and conditions. So, for example, many of the tools say, point blank, you cannot represent our output as human-generated, and in some cases, you must affirmatively disclose that the material was AI-generated. Some of you will see, like if you go to the CNN webpage and they use an AI-generated image, it will say AI-generated. That is becoming a best practice, and depending on your use case, the people using your tool might not want to do that. A lot of times, these tools reserve the right to use your output in any way they want. There's little to no guarantee of any accuracy, and these next two are more on the legal side of things. There are significant limitations on liability. So, for example, OpenAI caps damages on free usage of its tools at $100, right? So, basically, no support whatsoever, and many of them include mandatory arbitration and class action waivers. Those things might not be important to you depending on how you're using them, but they are things that you should consider. The last thing is bias and discrimination. These tools are only as good as the information fed into them. So, models trained on discriminatory information will create discriminatory outputs. Where you see a lot of concern is in the HR space.
Using these tools to perform HR functions can be quite troublesome, but that's something to keep in mind. I think the tools are improving in this space because there has been a lot of focus on them, but it is something to keep in mind. And now I'll turn it over to Rosa to talk a little bit about the cyber risks. Thank you. So, generative AI in the area of cyber risk introduces new attack vectors that increase our risks. For example, AI can inadvertently cause a data breach by exposing sensitive information when generating responses. As Abby already mentioned, if you input that information into the AI, it can then at some point expose your information. You may also have heard that AI-generated images, videos, and audio have already been used for fraud, political disinformation, or impersonation. AI can also be used to create highly convincing phishing email messages or fake websites that mimic real communications, increasing the risk of social engineering attacks. It can also assist criminals in writing malicious code, automating tasks and attacks, and improving malware evasion techniques, making cyber threats more sophisticated. Attackers can also manipulate AI inputs to deceive these models and bypass security systems, poisoning the training data of some AI tools and introducing biases, vulnerabilities, or backdoors. And last but not least, employees using generative AI tools without governance or oversight might expose confidential data and make an organization susceptible to external threats. So given all of these concerns that Abby and I just talked about, the implementation of AI tools in an organization should be approached from a risk management perspective, ideally using an AI governance framework. AI governance is still in its infancy, but it's evolving rapidly, and hopefully soon we will have good models to follow in this area. But in the meantime, here are some measures that we have identified that can be used to mitigate AI risk. First, implementing policies and practices to provide employees with guidance on how to use these tools in a responsible manner and in compliance with laws and regulations. It is not enough to introduce a policy, though. Employee training and education are also very important. Evaluating AI vendors and their contracts is critical before introducing new tools. This evaluation can be accomplished through a vendor risk management or procurement program and ensures the selection of reliable software or services when looking for new AI functionality. And it is also important to review any existing cybersecurity, privacy, confidentiality, and information retention policies and procedures to ensure they are aligned with AI. Finally, the retention of legal counsel with expertise in AI regulations can also be of help. As Abby mentioned, laws and regulations in this area are continuously changing and it's very difficult to keep track of them on our own. Now back to Abby. As Rosa mentioned, step number one is probably crafting a generative AI policy. If nothing else, this can go a long way toward mitigating your risk. The first step is to figure out when you need one. And in general, if your employees are using these tools, you probably need an AI policy.
The important steps are to collaborate with your internal stakeholders and your external advisors on a policy that is both sound from a legal and risk perspective, but is also realistic in terms of what employees will be using these tools for and how they will be engaging with them. So make sure that you collaborate not only with your attorneys and your information security or IT team, but also with somebody who can represent the actual users of these tools, who can speak intelligently and thoughtfully on the specific use cases that they're envisioning. The other important thing is to make sure that your policy addresses the critical elements that are necessary. So for example, what's going to be your process for approving tools? Are you going to require staff members to inform their supervisor that they're using these tools? Just think about all of the questions that your employees might have as they're trying to interact with these tools and figure out the best ways to use them. So we'll start with the case study from over at the Wisconsin Foundation and Alumni Association. We began internal discussions in early 2023 about whether or not we needed a generative AI policy. As Rosa mentioned, ChatGPT was rolled out in November 2022. And in early 2023, we realized our staff members were using ChatGPT. Whether we liked it or not, they were using it. And we needed to be a little more proactive in providing some guardrails. So in our case, this policy was largely reactionary to the demands of our workforce. We tried to keep our internal work group small. So it was primarily myself, our senior director of IT, and our VP of digital experience and innovation. The idea was the three of us could speak to 95% of the concerns and issues we might have. After we had a close-to-final version, we did send it a little bit further out to get some reaction. But we intentionally kept the group small, because we have learned that if you have too many cooks in the kitchen, it can be really difficult to make progress. We also worked with an external advisor in the form of our outside counsel, specifically to vet some of the IP concerns related to using generative AI. This took us about six months of regular effort to end up with what we felt was a final draft. But we probably could have condensed it a bit more. You know, it was just sort of something we did in the background. We rolled out version one, I believe, mid-2023. And then we are currently rolling out, like today, version two. So I'll talk a little bit about what we did with version one that we changed in version two. Version one specifically excluded tools that we might actually purchase for generative AI. It was largely focused on these web-based tools like ChatGPT. The reasoning for that distinction is that because tools we purchase go through legal review of all the contracts and IT review of all the technical requirements and the data privacy and the security and all of that, we had a higher level of comfort with purchased tools as opposed to free tools that somebody just might go to the internet and use. So the focus of version one was really on web-based generative AI tools that we weren't going to centrally manage. Version one had several guardrails. The first was that tools had to be approved by both IT and legal.
So legal would review the terms and conditions of the tool, IT would also review the terms and conditions, and we would sort of meet together to talk about our concerns. This was really helpful to us because we surfaced some tools that people wanted to use where we found little pieces in their terms and conditions that we either weren't comfortable with or we needed to address. So for example, and I haven't looked at the Midjourney terms and conditions in a while, but at the time, Midjourney included a provision in their terms and conditions, and this is an image generator that our marketing team was interested in using. They have a provision that basically says if you work for an organization and your organization has revenue over a million dollars annually, you have to buy an actual business license. We wouldn't have found that requirement unless we looked at the terms and conditions. Another requirement we had was that the use must be approved by the supervisor. We wanted supervisors to have some visibility into what people were using generative AI for. It allowed us to raise potential performance issues resulting from use. We wanted to be mindful of employees sort of relying on generative AI for what we would consider their normal work, and there was also concern of overuse of generative AI to perform their job functions. We also included a requirement that written outputs must be reviewed and improved by staff, and if they were going to be used externally, they also needed to be run through a plagiarism checker. We went the extra step of identifying certain employees who we consider sort of writers, where their job duties were actually to write, and we actually went out and bought them subscriptions to Grammarly, both because there's some Grammarly functionality that they could find helpful and also because it has a plagiarism checker. In version one, we also said that image outputs could not be used externally. They could only be used internally for sort of like concept art and just sort of general usage, and they needed to be saved in locations or manners that indicated that they are AI-generated. For example, they needed to be saved in a separate folder or they needed to include a watermark. Under version one, we also did not allow video or audio outputs. We indicated that you could not input proprietary or personal data. We had a lot of questions about this, so if I want to use ChatGPT to draft a letter to a donor to thank them for, you know, giving $10 million in support of a new football practice facility, our guidance was you can still do that, but maybe leave out some of the details and call them Mr. Smith or Mrs. Smith, whatever, something that you can just easily change once you get the output. So there's a little bit of a training aspect there. We also made clear the employee was ultimately responsible for the use of generative AI, and what they did in work was deemed their work product. So if you used a poorly written generative AI piece to send an email to a donor and it did not go well, that reflected on you, which is, again, part of the reason we had the requirement to both review and improve generative AI output if you're going to use the content for external purposes. We also had a requirement that if the content was going to be used externally, you had to disclose the role of generative AI in creating the content.
We are now, like today, rolling out version 2, and the reason we have version 2 is because we have decided to implement Microsoft Copilot organization-wide. We demoed it with a group of people and we all found it very, very helpful, and we have made an organizational decision to buy licenses for our entire organization. Our hope is that this will sort of drive people to Copilot and away from some of these web-based tools. So version 2 of our policy, which is the one that was included in the materials, advises staff to use Copilot as the first option in all cases. It doesn't rule out using other tools, but it advises that they should use Copilot in the first instance. It also communicates that because Copilot is governed by our enterprise security settings with Microsoft, and frankly, Microsoft has access to all of our information anyways, there is greater flexibility for native Copilot functionality within Word, for example. You can go into Word and you can actually use the donor's name because you were going to do that anyways, and you can use Copilot to sort of adjust the letter throughout. So that's another thing that we're hoping is a draw to bring people to Copilot. Our new policy also does allow for AI-generated images, audio, and video within certain guardrails, and I can speak to those. So you cannot use AI to recreate a person's likeness without their consent, so you can't use it to create an actual person in an image. You cannot use generative AI to manipulate logos or trademarks, so you can't use it in our situation to manipulate Bucky, right? You can't create a generative AI Bucky. You can use generative AI to enhance the technical quality of an image, but you cannot use it, for example, to change the subject matter of the image. You cannot use generative AI in a way that misleads, confuses, or misrepresents something or the accuracy of an image. And again, you must also save these things in a way that indicates that they are AI-generated. And then we've also maintained our requirement that if you're going to use one of these things externally, you have to disclose the role of generative AI in the creation of the output. So that's sort of where we are currently at Wisconsin. I am going to kick it to Rosa to talk about Iowa State. And I'm just going to jump in and add that, again, these policies that Abby and Rosa have at their institutions are available to all of you. There are instructions in the chat for how to access them, because it is so wonderful that they've done all of the work and now you are benefiting from all of that. So thank you to Abby and Rosa. So please continue, Rosa. I just had a feeling there was a question coming, like, where do we get an example of these? And you all have access to them. So that's absolutely true. We have shared our policies, but as Abby mentioned, they are changing constantly. And they should be, because this is, as we already described, a very dynamic area. So similarly, as described by Abby, when ChatGPT was introduced, several of our foundation employees raised their hand and said, we want to utilize ChatGPT. How do we go about it? So we identified at that time the need to provide guidance on the use of this technology. And we already had in place an interdisciplinary innovation team. So we said, maybe this team can take on the task of creating some guidelines. And this team is composed of technology staff, strategic planning, communications and marketing, and development staff.
So this group drafted the first guidelines on AI usage in January of 2023. And they were approved in March of 2023 after review by our external legal counsel. We had the same kinds of discussions and concerns as described by Abby. But in drafting these guidelines, our approach was a little bit different. Rather than focusing on specific tools or use cases, we kind of tried to create a policy that provides general guardrails applicable to all types of AI tools, though we had ChatGPT in mind at that time. And then we later added, in a subsequent revision, Microsoft Copilot and clarified that Copilot is to be the preferred tool to use for business purposes. Next slide, please. So these are the guardrails that are included in the current version of our policy. First, all AI-created messaging should be thoroughly reviewed by a human with an eye toward content clarity and appropriateness, and sharing raw AI-generated content externally without review is not allowed. Additionally, all employees are expected to use these tools responsibly, applying critical thinking, reviewing sources, and being responsible for the accuracy of results, as they would be if this were content they were creating themselves. They are also responsible for protecting the confidentiality of personally identifiable information or proprietary foundation content, which means no input of proprietary or personal data into an AI, although this has been relaxed a bit with Copilot. We are all required to be transparent about our use of AI and avoid misrepresenting ownership of content. There is also the requirement to disclose the role of AI in producing content. Employees should be aware of potential bias and avoid discriminatory, harmful, or inappropriate usage of AI-generated content. We provide employee training on AI and privacy laws and regulations to ensure that tools are being used lawfully. And we also utilize our cybersecurity training program to make employees aware of risks created by the misuse of AI tools. So in the spring of 2023, we rolled out our first AI policy and offered an employee virtual forum in which we presented potential use cases and discussed risks. This session was optional, but well attended. It was followed by several postings on our internal intranet, announcing the policy and providing tips and tricks on using ChatGPT, which included discussions on risk. Since then, employees have been required to review the latest AI policy regularly and sign off on their understanding. All new AI use cases that are proposed and all new AI vendors need to be vetted and evaluated for risk before implementing. And how do we evaluate this? We use our vendor risk management process that includes legal, IT, and security review. And we recently added AI as a separate risk in our enterprise risk management heat map, which is shared with the audit committee and our board of directors. This helps ensure visibility and transparency of this risk at the highest levels. And it allows for leadership feedback on organizational risk appetite. You are muted, Abby. Thank you. So we've shared a little bit with each of you about what our organizations are doing right now, but Rosa and I have some predictions for the future. We'll find out if they're correct, but we wanted to share them with you, maybe to generate some discussion. So the first is expect change often, right? Like this is definitely going to be something that changes over the next couple of years.
Some of these things are going to even out. We're already seeing some best practices emerge. So the example of disclosure of when something is AI-generated is sort of becoming a norm, but you will see some of those things even out over time. We also predict the rise of purpose-built, use-specific tools. In the legal field, just speaking for myself, we are seeing this more, right? Like we're seeing large language models that are specific to contract review. You are seeing these specific uses pop up. And my personal feeling is that's where you're going to see the most benefit, as opposed to just tools like ChatGPT, which are basically souped-up search functions, right? I would also predict the consolidation of tools within the marketplace, right? You're going to see tools bought and sold and sort of see some consolidation. And you're probably going to see some front runners. So both Iowa State and Wisconsin are using Copilot. And I think you're going to hear that a lot more, but there might be two, three others that emerge as viable alternatives. And then I want to kick the last one to Rosa for her to share a little bit more about what she's seeing with AI agents. Yes, well, in addition to general AI features that are being added into systems and applications, there is a rise in AI agents, which are more advanced AI tools that can autonomously perform tasks, make decisions, and adapt to new environments. And obviously this has the potential of removing human oversight from the loop, which raises risk levels. So another area to watch for. And with that, we've sort of concluded our prepared information, and I think we're ready to open it up for questions. Okay, well, we do have a question. I guess there's a big group up at Suffolk University in Boston. Woo-hoo, hi, Boston. Tara would like to know, why did both of your institutions choose Copilot? What is it about Copilot? Abby, I think you kind of got there, but maybe a little bit more about that. I can sort of share our stuff, then I'll kick it to Rosa. We're a Microsoft organization, right? So we use Microsoft 365, and we had already started using Teams during the pandemic. And so Copilot was a pretty natural next step for us. The demo group especially found Copilot useful for meetings. We're like a very meeting-heavy organization. So people really love the transcribe function. You can have Copilot sort of digest the transcription and come up with action items. So those were things people had tried out and really liked. So Copilot for us was a natural fit. We already had a security infrastructure for the information stored in Microsoft 365. So it made a little bit more sense there. I'll let Rosa share her thoughts. It was pretty similar for us, being a Microsoft organization where all our office applications are Office 365. Microsoft was very savvy to include Copilot. You may know that Microsoft is a heavy investor in OpenAI, which is the producer of ChatGPT. So the version of ChatGPT that is incorporated into Microsoft tools is called Copilot, but it's based on the same technology and the same LLM, large language model. So they were very savvy and introduced it already and got their users who are already using Office 365 to start testing Copilot.
And when we reviewed the terms and conditions and the confidentiality and privacy parts of the contract with Microsoft for Copilot, we learned that it's pretty similar to what we already are doing with putting all our documents, Word, Excel, Outlook, email, everything, Teams into Microsoft products. So we may as well just use their generative AI product because we have similar levels of protection. So that's where we ended up, for now. I would say our decision was sort of an in-for-a-penny, in-for-a-pound decision. They already have our information. So yeah, same. All right, thank you. We have some other questions from the Suffolk University bunch. Here's one. If you extensively edit an AI-generated draft, do you still need to acknowledge the use of AI in its creation? I think it's yes, but Abby, what do you think about that one? I think it's a little bit of an open question, and it's really fact-specific, right? Like, and this is truly the challenge, right? So if it's like an email, I probably wouldn't disclose it, right? But if it's going in a publication, I probably would disclose it. So I think it really depends on how much has been changed, what the nature of those changes were, and where the content is being used. And I think that's where we're going to see a lot of complexity and uncertainty over the next couple of years, of, does this rise to the level of something that needs a disclosure? I would always err on the side of disclosure, but it also kind of begs the question, right? Like if you're using generative AI to write an article for one of our publications and you're a writer, like, you know, there's some natural questions that come from that, but that's just my take. All right, Rosa, anything from you on that one? It's the same. I don't know that it's that much different from you searching for some topic in Google and then finding references. And if you cut and paste the whole section and put it into your article or publication, that is not appropriate, but if you just read it and then you paraphrase and write it on your own, that may be okay. So I don't know that there is an exact answer, but it depends. One thing to consider is that one of the reasons we bought a Grammarly subscription is, I believe, if you run stuff through Grammarly's plagiarism checker and it identifies something, it will give you a source. So that's part of the reason we have stuff run through a plagiarism checker, and then they can sort of double-check the sources, and maybe they just need to cite their sources, right? Like maybe they can put quotes in or whatever and go from there. All right, well, you know, we are an advancement organization. We've got a bunch of fundraisers in our group, and here come the fundraising questions. I bet you know where this is going. Are either of your foundations using AI to conduct prospect research? And if so, how and with what guardrails? Yes, that's a good question. We haven't found that Copilot or ChatGPT are that good as sources for actual research; it's more when you want to generate content or a write-up or something that they will be more helpful. But there are other AI tools that may be coming out there. I know of several examples of vendors at recent conferences who are utilizing some of those tools or adding them into their products. Like, for example, Salesforce may be adding it at some point, and there are other vendors. So it's coming.
At this point, we are focusing mostly on the generation of content, whether it's text or images or programming code, but not necessarily doing the research on prospects and donors. Although one interesting use case that we had from our gift planning department, and I hope it's okay that I mention it here, is that they send out these happy birthday emails to our gift planning prospects. On the day you were born, this happened and that happened. And they were getting that from a separate vendor, but now they are looking into using generative AI to obtain specific content only from Iowa State. On the day you were born, this happened at Iowa State, or something related to that. So it's helpful in that manner. On our side, I have not heard of our research and prospect management team using a specific tool or specifically using AI for research. I'm sure those tools are coming. This is sort of in the vein of custom-built, purpose-specific tools. I would not be surprised if those are coming. We would vet them carefully, I think, to ensure we understood what they're searching and all of that. Where we might be using it right now is to write up summaries. So, for example, they sort of do an information dump in a Word document and then turn on Copilot and have Copilot produce a summary. I could see that happening right now, but actual research and prospect management tools, we have yet to buy any as far as I know. All right. The next question, and the director of online education here at CASE is helping me answer this one. Christy, thank you for your help with this. It's in the chat also, and it's about how donors react to proposals that note they were generated with AI. And we did have a webinar, there's a link to it in the chat, regarding AI and advancement. So it wasn't about policies. It was more about how do you use these tools? Do you wanna use these tools? I would say for major gifts and one-on-one prospects, I would just stay away from AI. You wanna make them personal. You want this person to feel special. Perhaps for big alumni letters that go out to thousands and thousands of people, it might not matter as much, but just be careful. You know, we're in a relationship-building business. And if AI is all over a document, it's just not gonna feel good to a prospect. Think about the solicitations you get from nonprofits. You're gonna support the ones you care about. But I think I would be more interested in something that comes directly to me and is personalized to me, and hopefully from someone who knows me, not just an AI that's made assumptions about me. So that's just my response, but anyone else wanna comment? I agree with you, Meg, that it's something we are not interested in doing because of that potential reputation issue. I mean, you don't have time to write to me? You have to use an AI to do it? But interestingly, I heard of a case in which a major donor received a letter written by the president of an organization, not ours. And they responded asking, did you write this with AI? When it was not true. I mean, it was just written, but somehow it looked like it was written by AI. So they are already asking those questions. I can also see, I mean, I think for proposals to donors, there is a strong feeling that they should actually be written by the people working on the proposal.
Where I could see us using it is, so for example, somebody might ask ChatGPT, what are the common components of a funding proposal that is submitted to a private foundation, right? Like they might do that to get an outline or general topics, but the actual... And schools, yeah. Yes, but the actual content is written by humans thus far. Right, I'm gonna try and keep it real, keep it authentic as much as possible, even though I know folks who are watching this may have thousands and thousands of prospects, and they're trying to make the most of their time and be smart about it. So I get the question. AI is wonderful, but you still want to be authentic. And like everything, it's a risk decision. So some institutions may be okay with taking that risk of maybe having an issue with a donor on that; others may not. So it's just probably a decision to be made individually by each organization. Right, well, guess what? We don't have any other questions in the Q&A box, but you know what I hear? I hear applause. I hear it from all the people. You hear the applause? They're very grateful. Oh, and a big thanks from everybody up at Suffolk. They're saying thank you in the chat, and also for the policies. Abby and Rosa's emails are right there on the screen. So if you have questions you'd like to share with them individually, they're offering their time and their expertise. Really, just thank you so much for sharing it with us, Abby and Rosa, because we couldn't do this without you, and it's free of charge for our members. So thanks to you. So we appreciate it. And everyone's thanking you in the chat. So thanks for joining us, everybody. We hope this has been helpful. Don't let AI scare you. We're all gonna make it through this and figure it out together. All right, have a great day, everybody. Thank you.
Video Summary
In this webinar hosted by CASE, experts from the Iowa State University Foundation and the University of Wisconsin Foundation discussed the integration of generative AI in higher education foundations. The session, led by Abby Sawhawk and Rosa Unal, focused on the capabilities and risks of using AI, specifically addressing its impact on operations, privacy, and compliance. Generative AI, which uses models like ChatGPT to produce content, presents opportunities for automation and creativity but also carries legal and ethical concerns, particularly around intellectual property and misinformation.

The speakers emphasized the importance of incorporating AI into enterprise risk management and crafting a robust generative AI policy that aligns with an organization's risk tolerance and operational goals. They shared their respective foundations' approaches, highlighting tools like Microsoft Copilot, which integrates into their existing Microsoft infrastructure, providing security and functionality advantages.

The discussion covered how AI could assist in drafting proposals and communications while underscoring the need for human oversight to maintain authenticity and accuracy. Participants were encouraged to consider their individual risk appetites and strategic goals when adopting AI technologies. The session concluded with a Q&A segment addressing practical applications and ethical considerations in using AI-generated content in donor and fundraising communications.
Keywords
Generative AI
Higher Education
AI Integration
Enterprise Risk Management
AI Policy
Intellectual Property
Misinformation
Microsoft Copilot
Donor Communications