Artificial Intelligence in Advancement: What Does the Future Hold?
Video Transcription
Hi everyone, welcome. Just going to give everyone a couple of seconds to trickle into the room here. So, hi, I'm Christy Grimm, CASE's Director of Online Education. I'm delighted to welcome you to this webinar, Artificial Intelligence in Advancement: What Does the Future Hold? I want to thank our partner GiveCampus for making this webinar possible. Before we get started, I just have a few very brief housekeeping notes. This webinar is being recorded and will be provided to all registrants following the session. For those of you who like to follow along with the slides, we will be providing those in just a few minutes via the chat. We will be taking questions at the end, so you can use that Q&A box to send in your questions throughout the webinar. And with that, I'm going to go ahead and hand it over to my colleague Deborah Trumbull to get us started. Hi everyone, welcome. As you all are joining, if you could just hop into the chat, you'll see a chat box at the bottom, and just let us know where you're from and what your role is at your institution, that would be really great. So as you all are doing that, I'm going to introduce you to today's presenters. We'll start with me. My name is Deborah Trumbull. I'm the Senior Director of Research here at CASE, and I am joined by my colleague at CASE, Jenny Cook-Smith, who is our Senior Director of Insights Solutions. We also have two guest speakers with us today, Josh Hirsch and Jen Schilling. I will let them introduce themselves. Josh, would you like to go first? Sure. Good morning, good afternoon, or good evening, wherever you are in the world joining us today. I like to say I'm a retired Director of Development and Director of Marketing after 15-plus years, now working on the consulting side for a nonprofit consulting firm, helping nonprofits from A to Z in strategic planning, brand development, board governance, and everything else in between. Great. Thanks, Josh. Jen? Hey, everybody.
I'm Jen Schilling, Head of Product at GiveCampus. I've been here for about four years, but started my career in education, so it's been fun to come full circle back to the space I love in a tech role. Excited to chat with you all today. Great. Thank you both for joining us. Hi, everyone. Glad to see you. Really enjoying seeing everyone's name pop up in the chat. I saw it go by really quickly, but I love that someone has the title of, I think it was, Data Analytics and Integrity. I think that goes really well as we're thinking about our topic today. But as Deborah said, my name is Jenny Cook-Smith, and I am thrilled to bring you the second in this series of topical research, thanks to CASE's partnership with GiveCampus. This research, which is based on Pulse surveys, is meant to gather insights that are topical and timely, and it really aims to supplement the data that comes from CASE's annual benchmarking surveys. So think of this as sort of a deeper dive on topics that we hear are important to you. In September, we published the first in this series, CASE Insights on Advancement Metrics that Matter. That was for the United States. Today, we'll actually be bringing in some global data, which Deborah will talk more about in just a few moments. But for that Advancement Metrics that Matter, what we did was ask advancement professionals, so many of you, to rank key metrics in the areas of philanthropy or fundraising, alumni engagement, and marketing and communication, as well as take that as a chance to share some of your common data challenges. And we pulled that together to give you some takeaways to help you as you start to evaluate and optimize your approaches to measuring impact. But today, we're really excited to bring you the second in our series, focused on a topic that I think is pervasive to life as a human being, or even maybe a robot, in 2024. And that's, of course, the topic of AI.
And we at CASE are so grateful for the support of GiveCampus so that we can bring this research to you today. So before we delve in, I wanted to just take one moment and tell you about who we are, and let Jen share a little bit about what GiveCampus does as well. So as Deborah mentioned, we both represent our CASE Insights team. And as you can see here, the idea is that we're bringing you the data, the standards, and the research that you need to do your jobs. And when we think about data, we're really looking at all of those orbiting circles and moving to a place where you have data that supports an integrated advancement model, because we know that what happens in fundraising isn't in a vacuum, and all of these pieces really depend on one another. This work is governed by our bright yellow sunshine of the CASE Global Reporting Standards. Everything we do revolves around those standards. And I think now more than ever, particularly as we're talking about the topic today, the idea of standards for our profession, counting practices, and ethical principles is clearly important. And so nothing happens without using those standards. And that's what allows us to have transparent and accountable benchmarks. And then finally, research. At CASE, we really think of research as exactly what we're doing today, how we bring you all the things that we know you care about, but we need to really draw on an entire universe of partners to get there. And so today's session is a great example of the kinds of research we provide. So, Jen, I'm going to pass things over to you to talk a little bit about GiveCampus. Thanks, Jenny. So the most important thing to know about GiveCampus is that we build fundraising technology specifically and exclusively for fundraisers at educational institutions. Our partner community includes schools of every shape and size, big and small, public and private, higher ed and K-12.
We work with everyone from large research universities with 500-person development offices and multimillion-dollar budgets, all the way down to dozens of one-person, two-person, and three-person advancement shops. So we really run the gamut. And something I'm incredibly proud of is that since 2015, 1,300 colleges, universities, and K-12 schools have used our platform to raise more than $5 billion. And that's something that I rest easy on every night. So GiveCampus is a comprehensive fundraising platform. We host more giving days and more crowdfunding campaigns than all other companies in the market combined. But giving days and crowdfunding are actually just one piece of what we do. We provide technology to power year-round online giving programs, to enable fundraising and engagement volunteers and streamline your management of them, to communicate with your constituents via text, email, and personalized video, to enrich your data with insights about your donors' wealth and giving capacity, to help you benchmark against your peers and other schools, to power your event registration and ticketing, and to increase the productivity and efficiency of your major and leadership gift officers and their managers. So as you can imagine, the product and engineering teams at GiveCampus are very busy people. But at the end of the day, our goal is really simple. We exist to help you raise more money from more people in less time and with less effort. That is what we wake up thinking about every day at GiveCampus. All right, great. As Jenny mentioned earlier, we created this series of Pulse surveys with GiveCampus to gather insights on topical and timely issues. And I think if you are on LinkedIn and paying any attention to the topics people are talking about, artificial intelligence is very high on that list.
So we designed this survey specifically to learn a little bit more about how advancement teams are approaching their use of AI, how AI is currently being used, what ethical concerns are being taken into consideration, and whether policies and training guidelines have been developed for the use of AI. The survey itself was a short survey. We had about 25 questions. We had 211 respondents. We had at least some respondents in all of our CASE global regions. The majority were from the U.S.; 64 percent of respondents were from the U.S. But we had a pretty good number of respondents from the United Kingdom and Ireland and other parts of Europe. They made up about 25 percent of respondents. And when we asked respondents about their primary area of focus, it was interesting to see that fundraising was the highest at 28 percent. I found it interesting that the technology and the data management and analytics folks were really in the minority here. Although, you know, maybe folks are just sort of prioritizing advancement services as their broad category. But that just gives you a little bit of a sense of who responded to the survey. So as you're looking at this data, keep that in mind. These are the folks answering those questions. We have a nice, robust group of participants today, and we thought we would use this as an opportunity to hear a little bit from you all. So if you could take a moment, scan the QR code, and you're going to get a prompt, I think you get three choices, to give us an idea of the top three words that come to mind when you think of artificial intelligence. And we'll go check out the results in just a moment. So keep those coming in, everybody. At least if you're here on the East Coast, we're moving into the slot after the lunch break, and we want to keep this as interactive as possible. So while you're entering, I'm just going to go check in on our results. All right. And as these are coming in, hopefully everyone has had a chance to scan the QR code.
Now's your chance. And I'm seeing these coming in. So I'm going to go ahead and reshare my screen. Hopefully you all are starting to see some of those themes emerge. And so interesting. I think we'll get to this in our panel discussion, that helpful and scary got sort of equal weight. I think that actually aligns quite well with what we saw in the survey as well. Great. Well, thank you so much for playing. Deborah, do you want to share some observations we heard? Yeah. I mean, we asked a very similar question in the survey. We asked, what is the first thing that comes to mind when you hear the words artificial intelligence? And we pulled some quotes from the responses. And they really mirror a lot of the words that you all added to that word cloud, you know, from some very positive feelings around AI, that it can really superpower our human abilities, but then also some more negative perceptions, that it might be cheating or that it's going to replace humans. So I think your responses are very much aligned with what we saw in the survey. And just a couple of stats here to get us started. You know, we are going to show some data from the report, but we really want to make the majority of this a discussion. So just kind of very high level: 70 percent. That's OK. You were just on the ball. You know, 70 percent of respondents reported that they're using AI as individuals. But just five percent said that their advancement team had a formal initiative to guide the adoption of AI. So a lot of individuals are tinkering around, but there seems to be a pretty slow movement toward having some formal initiatives in place. And just 13 percent have received formal training or resources related to AI in their role. So, like I said, we're going to show some data during this.
But what we really want to do is talk a little bit with Josh and Jen about what they're seeing. And we'll be bringing in some data along the way. Right. Thanks, Deborah. And I am just going to stop sharing so we can see each other's faces, and our participants can see your faces as well. As we do shift gears, we would love to learn a little bit from you all as experts: what you thought in terms of some of the results of this particular report, but also, as people in the field, what you're really seeing out there, where you've got some great observations. But before we do that, I'm actually going to take a step back, because I think it might be helpful to just hear from you: when we say AI, what does that mean? And how are you seeing it being used? So, Josh, I'm going to go to you first. So I'm going to be the naysayer and say AI is not scary. I think that a better term would be overwhelming, and not knowing really where to begin. As we saw from the survey, very few are actually getting formal training in it. And it's probably the most powerful tool that we have available to us in fundraising, period, over the last 30 years. I mean, we have not seen technology evolve at such a rapid pace, and at all facets within fund development, from frontline fundraisers to major gift officers, to grant writers, to strategists, even on the nonprofit operations side. You know, and also some of the responses: oh, well, it's cheating. And it's not cheating. So I like to push back on that, because you pull your monthly KPIs, your key performance indicators, for your email, your social. You're tracking all those numbers that you're getting from a third-party source, and then putting that into Excel to do your ongoing data management and tracking trends. And you're letting formulas run in Excel. Well, that's not cheating. AI is just another tool that we have available to us.
And it's much more pervasive now than it was ever before. And from an equitable access standpoint, through tools like ChatGPT, it's everywhere. I mean, we've been using AI, and people need to think back. Like, remember those early days of Microsoft Office, when you had your little Clippy assistant? That's AI. You know, but now we fast forward to today, and it's becoming much more accessible, in the sense that third-party CRMs are starting to incorporate AI tools within their search functionality. So you're not going to have to sit there and worry about, all right, well, what variable fields am I pulling? Am I going to use an AND or an OR? And whoops, I should have used OR, but I used AND, so I'm not getting the data I needed. Now you're literally going to be able to say: find me major gift prospects that live in the Chicagoland area that have two kids under the age of 13. And if you have clean data, you're going to be able to pull this. So, you know, it's always, I don't want to say disheartening, but I like trying to educate people that AI is not scary. It's going to be your best friend. Well, and as part of that education, Josh is actually going to do a short demo for us at the end of this panel discussion, really showing you some of what he just said. Jen, your thoughts? Yeah, so if you ask what AI is, my definition is it's the only entity that's capable of making me feel incredibly smart and painfully stupid at the same time. And I'm half joking, but it's also partly true. Josh is right. It is incredibly powerful. What's interesting to me as a technologist is, like he said, AI is not new. We all think of ChatGPT. I saw that on the list. And all of these webinars are happening largely because of the explosion of generative AI. These are all examples of, cheesy phrase, but pop-culture AI: things that pop up on your Instagram feed, things that pop up in webinars, because they are hot topics right now.
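(A quick editorial aside on the AND/OR pitfall Josh describes: the difference is easy to see in a tiny sketch. The field names and records below are invented for illustration, not any real CRM's data or API.)

```python
# Hypothetical prospect records (illustrative only, not real donor data).
prospects = [
    {"id": "P-1001", "region": "Chicagoland", "kids_under_13": 2},
    {"id": "P-1002", "region": "Chicagoland", "kids_under_13": 0},
    {"id": "P-1003", "region": "Denver", "kids_under_13": 2},
]

def in_region(p):
    return p["region"] == "Chicagoland"

def has_two_young_kids(p):
    return p["kids_under_13"] >= 2

# AND: both conditions must hold -- a narrower pull.
with_and = [p["id"] for p in prospects if in_region(p) and has_two_young_kids(p)]

# OR: either condition is enough -- a much broader pull.
with_or = [p["id"] for p in prospects if in_region(p) or has_two_young_kids(p)]

print(with_and)  # only the record matching both conditions
print(with_or)   # every record matching either condition
```

Using AND where you meant OR silently narrows the pull, which is exactly the "I'm not getting the data I needed" problem; a natural-language interface just hides this boolean logic from the user.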
But the reality is, Google Maps uses AI. Netflix recommending videos, and even the thumbnail that it shows you for a movie, that's all AI. GiveCampus donation forms use AI to block systemic fraud attempts on our pages, right? None of this is new in the past two years. This has all been around. Generative AI is really the technology that's exploded. And gen AI is really good at summarizing and really good at writing. So in terms of what we're seeing, we're seeing a couple of use cases be the most popular. One, and I imagine a lot of you are using ChatGPT for this, is drafting text messages and emails for event invitations, solicitations, campaign updates, stewardship. Yeah, exactly. Writing tasks. Incredibly popular for that. Another use case we see schools really taking advantage of is synthesizing and summarizing decades of call reports into donor bios that you can scan as you're on your way into a Starbucks or on your way into a meeting with a prospect. Instead of having to read all those call reports, it just synthesizes them all for you. And my favorite, which I've seen in person, and it's really cool to watch, is a gift officer dictating a rambling narrative of a meeting they just had with a prospect into their phone while driving, and it turning into, because of the power of AI, a cogent call report with really clear next steps. So a lot of it, the generative AI stuff, is around synthesizing, summarizing, and a lot of writing. Thanks. I feel like, between the two of you, you kind of nailed it, but I just want to ask Deborah, anything else to add to that? Yeah, no, you can see here, Jenny put up on the screen the responses from the survey, and we did allow people to select more than one response. But most people did only select one, or at least half of respondents picked one way that they're using AI.
And there were, I think, about another 30% that selected two options, but you can see those writing and communications areas popping up to the top. Where you see much lower percentages is on sort of that data and analytics side. But again, are people using it and not even realizing that they're using it, to Jen's point? And I think that's actually probably a good segue into the next question I had for the panel. We talked about usage, but let's talk a moment about the idea of adoption. The results of this study showed that it was about 50-50, right? Which again is interesting, because I feel like our word cloud was similar, in that we had about 50% that were eager adopters, and then the other 50% somewhere between cautious and all-out resistant. And it strikes me that this is probably not terribly surprising when we look at sort of general adoption of disruptive or beneficial technology. But I'd love to get your thoughts on sort of this adoption. And as a side question to that, as we think about adoption, who are the people that are really driving AI? And Jen, can you weigh in on this first, please? Sure. So, you see, technology adoption follows a bell curve. You've got your early adopters, your innovators on one side, all the way over to your laggards at the other side. The key difference with generative AI, and Josh alluded to this at the beginning, is that this was the first major technological revolution that was democratized at warp speed. It took smartphones two years to get to 100 million users, but it took generative AI two months. And the barrier to entry is much lower than that of a smartphone, given how ubiquitous smartphones are now and everyone has access to the internet. So what's driving adoption? There are two key players that are kind of at opposite ends of the spectrum that we're seeing. One is tinkerers, and the other is leadership.
And on the tinkerer side, or people who tinker, these are your early adopters, people who are innately curious. They like to play around, try new things. It won't work this way, so they'll try it that way. They don't like the status quo, and they always have their eyes peeled for new tools that could make them better or faster at whatever they're trying to do. They're often young, and they're the people who, as children, took apart their Easy-Bake Oven just to kind of see how it works, right? These are your tinkerers. That's one end of the spectrum, and we see a lot of that, and I think the results of this survey point to a lot of tinkerers in our respondent group. However, on the other side of this, across the broader economy, individual users like you or me are not the primary players driving adoption of AI. It's actually enterprises or businesses driving adoption, because they're looking for technologies to automate processes, boost employee productivity, and reduce costs. And as a kind of example that I'm sure everyone learned about in history, think about what happened to factories that didn't modernize during the Industrial Revolution, or to department stores that were slower to put their inventories online. This is another wave like that, where you're going to see a lot of pressure from big business and enterprise to drive adoption because of the efficiencies that can be gained. So while right now it looks like we're seeing this groundswell from the bottom up in our space, if you look around the corner a bit, there's writing on the wall that pretty strongly suggests that this is changing. And what we're hearing from trustees and board members at schools is that they're expecting schools to use AI. And this is because many of them come from the private sector. And so they see that it's not being viewed as a nice-to-have or something to be afraid of.
It's a "must-have or we're going to be eaten alive by the competition" vibe over there. And so when they then step into their role as a board member, they want to make sure that the school whose board they're on is staying current and competitive with how technology is changing. So we've in fact heard kind of a palpable sense of fear from a couple of VPs who need to go to their next board meeting ready to answer the question: how are you using AI? And Josh, I see some vigorous head nods from you. So the floor is yours. No, I mean, Jen did a great job of summarizing it. We're not at a need or a want stage. It's really becoming a must-have. And especially for nonprofits of all sizes, it's giving you the opportunity, because there are no problems, there are opportunities for solutions. So using AI, we can reverse engineer whatever we want. And I was one of those nerds, those DIY tinkerers. You know, it's like, okay, I need to create a custom GPT that's going to allow me to be a brand persona for the vice president of advancement at a higher ed organization. And we're at the point that we can use the training based upon our past content. And this is the way my mind works: take your last five years of subject lines, tracking your KPIs from open rate, click-through rate, and conversion rate, if it's actually an ask, feed that into ChatGPT, and say, based upon this data, create a list of subject lines that are optimized for open rate, click-through rate, and conversion. And obviously, it's tied back to a campaign that you're training it upon. And going back to your earlier comments about the call reports, where you're sitting there analyzing, the speed and efficiency at which we can work to make our jobs better and easier is going to allow us to free up more time to actually focus on the important things, which are donor stewardship and donor retention.
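(The KPI prep Josh describes, computing open, click-through, and conversion rates from past sends before handing them to a model, is plain arithmetic. A minimal Python sketch; the subject lines and figures below are invented for illustration.)

```python
# Invented send stats for three past subject lines (hypothetical data).
campaigns = [
    {"subject": "You made this possible", "sent": 1000, "opens": 320, "clicks": 64, "gifts": 16},
    {"subject": "Annual fund deadline",   "sent": 1000, "opens": 250, "clicks": 40, "gifts": 8},
    {"subject": "A note from campus",     "sent": 1000, "opens": 410, "clicks": 41, "gifts": 10},
]

def kpis(c):
    # Open rate is per send; click-through is per open; conversion is per click.
    return {
        "subject": c["subject"],
        "open_rate": c["opens"] / c["sent"],
        "click_through": c["clicks"] / c["opens"],
        "conversion": c["gifts"] / c["clicks"],
    }

rows = [kpis(c) for c in campaigns]

# This small table is the sort of thing you would paste into a prompt,
# asking the model to propose new subject lines optimized on these metrics.
best_open = max(rows, key=lambda r: r["open_rate"])
print(best_open["subject"])
```

The value the AI adds is in pattern-matching across the subject lines themselves; the rates feeding it are the same formulas you would already be running in Excel.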
So we all know how to write an appeal letter; we all know how to write an email. But the data analysis part, to me, that's the secret sauce. That's where we're really able to harness our own data to create a better content strategy, a better fund development strategy, and save more time to raise more money and create better experiences for our donors. And Josh, if I could put you on the spot. As we were chatting in advance of this session, I thought you gave a great scenario of, you know, take an SPCA. Would you mind just running through a scenario like that? Because I think it's a good example to get juices flowing among our attendees today. Sure. So we should all be surveying our constituents in one form or another. Testimonials and feedback are how we're going to be able to know that we're doing a good job, or if we're not doing a good job and we need to be able to grow from our inefficiencies. So using AI, we're able to take past testimonials and feed all that data into ChatGPT. And I keep referring to ChatGPT because it's the Kleenex, the Tylenol, the Xerox; it's that brand leader. It's my preferred tool. There's a multitude of tools out there. I highly recommend that if you are getting into this world of AI for your organization, you need to have an AI usage policy. I'll actually drop a link into the chat of a template that you can use if your organization does not have one. I'd be remiss not to, and I would assume a lot of you do have that. But we want to make sure that it outlines the use specifically, and especially when we're talking about donor data privacy, we never want to put a donor's name directly into a tool like ChatGPT. But we could assign a donor ID record to them that we can then cross-reference later.
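(Josh's donor-ID substitution can be sketched in a few lines of Python. This is a hypothetical helper, not a feature of any particular tool: the name-to-ID map stays inside your own systems, and only the pseudonymized text would ever be pasted into an external AI tool.)

```python
def pseudonymize(text, donors):
    """Replace real donor names with internal IDs before text leaves your systems.

    donors: dict mapping real name -> internal donor ID. Keep this map private
    so the AI tool's output can be cross-referenced back to the record later.
    """
    for name, donor_id in donors.items():
        text = text.replace(name, donor_id)
    return text

# Illustrative only -- invented name, ID, and note.
donors = {"Jane Q. Donor": "DONOR-0042"}
note = "Jane Q. Donor loved the scholarship lunch and asked about planned giving."
safe = pseudonymize(note, donors)
print(safe)  # the real name is gone; DONOR-0042 stands in for it
```

A real implementation would also need to catch nicknames, email addresses, and other identifying details, which is part of why a written AI usage policy matters.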
So we're going to take all those testimonials, we're going to feed them into ChatGPT, and say: I want to have identified insights and trends based upon sentiment, to move those from negative to neutral to positive sentiment over six months. From that we're able to then create a specific communications and fund development plan. And the time it would take to do this, you know, we're talking days, and now we're crunching it down to a matter of almost minutes. And it's wild that literally anything that you think of, we can pretty much do with AI now. Thanks. Deborah, and I'm going to share my screen again, can you comment on this idea of who is using AI? Yeah, so among the questions that we asked in the survey, we asked, what is your approach to adopting new technologies such as AI? And I think, you know, Jenny mentioned this sort of half-and-half. You can see 52% of respondents consider themselves eager in their approach to adopting new technologies like AI. And you can see sort of the other half is mostly cautious, but there's also some resistance there as well. What I thought was really interesting was that we also asked them, what do you think your advancement team's approach is to adopting new technologies? And you can see that eager percentage is a lot smaller. There's a much higher percentage that are a little bit more cautious, but also a higher percentage that is resistant. And part of why we wanted to have this conversation today is to familiarize people with AI a little bit more and start to alleviate some of those concerns that might be there. So that's a little bit about approaches to adoption. We also saw that 83% of respondents reported that use of AI was driven by individuals. So there was not a formal initiative within the advancement team. And, you know, when we asked the question, who was advocating for...
Apologies, my slide is advancing on its own. Our computer just really wants to go, doesn't it? It does not like this. I'll continue. Yeah. When we asked who was advocating for AI use, 40% actually said no one was advocating. The next highest category was senior leadership at 24%. So there's that senior leadership bubbling up a little bit, Jen, as you mentioned, right? They're hearing it from their boards and they know this is something that needs to, you know, that needs to happen. But mostly, you know, 40%, they don't see anyone advocating for it within their advancement teams. So we have next, our next slide, you can go there now, Jenny. So our next slide is a quick poll about who is driving the adoption of AI in your advancement office. We're curious to hear from this group. This is the question that we asked in the survey. So whether it's a little bit more formal coming through senior management or leadership, is it your tech team that's bubbling up, individual staff, consultants, or no one? I'll be curious to see how these responses come in. I'm relying on Christy to share this when it's, when we feel like we've gotten some good responses in. Yep, I will share it right now. It looks like responses have slowed down. Okay, great. All right. Yeah, so interesting. So 25% said no one. Okay, the largest percentage was staff at 34%, senior management leadership at 25%. And the IT tech team. All right, great. Thank you so much. Yeah, so pretty similar, although a little bit smaller percentage falling into the no one category. You know, as we reviewed the responses that came in from the survey, we did see, you know, an interesting contrast between how respondents described their own approach and how they saw their team's approach to adopting AI. And, you know, in most cases, senior leadership is not actively promoting the use, but leaders are also not prohibiting it. Just one data point that came in was that just 10% said they were instructed not to use AI. 
So that is happening. 10% of respondents did say they've had that instruction from their leadership, but mostly leaders are not prohibiting it. When we asked, what is the biggest challenge you face when implementing or using AI in your role, here are some of the responses that we received. So this sort of leads into our next topic around trust, around training, around guidelines for use of AI. You can see here there's lack of information or resources, no clear direction from the institution. So here, the first data point to point out: we're just looking at whether the respondent or their team has trust in AI. So let me just tell you what's on this graph first. On the far left is all respondents. And what we asked was, do you trust completely, do you trust somewhat, or do you not trust AI at all? In the second two bars, the middle bar is showing, if you use AI, what does your trust percentage look like? Okay, so 88%. If you're using AI, you have somewhat higher levels of trust in the results, in the outcomes of what you're using. If you are not currently using AI, you can see there's still a pretty good percentage that trust, but you start to see that 20% that do not trust at all, which certainly suggests that this is still new for a lot of people, and they're still going to need some hand-holding and some guidelines around using AI. And then speaking about guidelines, this next slide, well, it could tell us a lot of things, but it is really telling us that the lack of clear guidance could certainly impede widespread adoption. We did ask survey respondents if they were aware of any ethical guidelines for AI use provided by their institution, and how often they're taking those ethical considerations into account in decisions related to AI implementation. So again, this is that same breakout. All respondents on the left. If you're using AI, you're in the middle.
If you're not using AI, on the right. And those that are using AI are somewhat more aware of the guidelines, but look at the percentages that are unsure of guidelines or are not aware at all of any guidelines existing for the use of AI in both categories, whether you're currently using it or not. So I think these are definitely areas where institutions, you know, leadership and senior management, need to step up and start thinking about how this is going to work for their institutions. It's also possible that the, sorry, Deborah, just to build on what you said, it's also possible that the existence of guidelines helps drive adoption. Like, it could go that way as well. If there are guidelines, there's an increased sense of trust, and, like, we know what we're doing, and, oh, this AI thing is a real enough thing for us to have guidelines, let me check it out. So it could work that way too. It creates some safety. It creates a little safety net for folks. Absolutely. Awareness for sure. And as we're moving into the final part of our panel discussion, I'll just remind all of you that if anybody has questions for our panelists, please do put those in the Q&A. We will save time at the end to make sure we address them. And Jen, and then Josh, I'm going to give you both the final word, really continuing on this topic that Deborah was just sharing. What we're seeing is a need for institutional adoption. We know there are some barriers. And so I just want to get your thoughts on how do we make this easier? What are suggestions as we start to think about the need for institutional adoption? And I'd also say, especially as we do consider the ethics, as Deborah spoke to. So what we're seeing is, in the absence of a policy, the policy is to allow AI to be used across the board. That is the consistent thing we're seeing. And really, it's thinking about AI almost like the shift to cloud computing. I mean, eventually it is just going to be commonplace.
What we've seen be effective is schools establishing a task force. And these are central teams, cross-functional usually, that are charged with searching for and highlighting opportunities to use AI, encouraging departments to think about how they can use it, showcasing examples. They're basically in charge of change management at the institution on this topic. And one of the things we get excited about is they're partnering with tech companies like GiveCampus that are already innovating, that are already investing resources to stand up tools that they don't have the ability to build on their own. And we get asked all the time to present to these task forces to understand what is cutting edge. Basically, they're in this kind of knowledge absorption phase, oftentimes at the beginning, to learn about what's out there. What are the use cases? What tech exists? How might it help them? And then inspire people to narrow down the potential use cases to the ones that they think are going to be the most impactful for the institution. Pick a few, budget for them, and then work with peers and with third-party tech providers to get the most out of this technology, using things that are built out of the box versus trying to build yourselves. So that's what we're seeing be effective, but it's early, and there's a lot of learning still that institutions are going through, and it's been fun to be a part of the journey. It looks like some nice sharing happening in the chat as well. Josh? Yeah, so I'll build off what Jen's saying. We're super early into this world. It's not that hot trend that's going to go away in a month. We are in a cycle that I believe is going to go on for years. I think we're just going to see that technology develop more and more. We're going to see third-party vendors start to incorporate more and more within their platforms.
And we're just going to see more startups providing us tools that we didn't even know we needed, where all of a sudden it's, okay, well, we can do these features that we didn't realize before. One of the best tools that we use at my firm is an AI note taker. So we're no longer having to take notes at meetings. This AI employee shows up, captures all the audio, captures all the video. And within a matter of minutes afterwards, we have a complete summary, action items, who's taking care of what specific responsibilities. And then, taking it to the next level, we'll use that AI note taker when we're meeting with constituents and meeting with key stakeholders. So we're able to capture those interviews, bring them back into an AI like ChatGPT, using that summary to identify those insights and trends, and then taking a multitude of those key stakeholder interviews and doing ongoing analysis on all of those. The time that it would otherwise take to do that is exponential, and we're literally, you know, making it happen in a matter of minutes. And it's super exciting to me. You know, people are like, well, where do we begin? Just get your feet wet, you know, just download ChatGPT. It's free. There is a paid subscription model; it's the best $20 a month you'll ever spend. But before you go that route, start with the free version. I think the very easy low-hanging fruit is: draft me an email for, you know, a major gift prospect that has supported the university at this level, and this is their interest. But then you can get more advanced. When you have good, clean data sets, you're able to, in theory, no longer have that annual appeal that's just, you know, your variable data from the mail merge, giving amount and name.
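As an editorial aside, the difference between that classic variable-data merge and the richer, interest-aware personalization Josh goes on to describe can be sketched in a few lines of Python. All field names and donor details here are hypothetical examples, not anything from the webinar's systems:

```python
# A classic mail merge swaps in only name and gift amount, while a
# richer donor record lets the letter reference interests and history.
# The templates and the donor dict below are illustrative only.

CLASSIC_TEMPLATE = "Dear {name}, thank you for your gift of ${amount:,}."

RICHER_TEMPLATE = (
    "Dear {name}, your {years} years of support for {interest} "
    "have made you a true partner in our work."
)

donor = {
    "name": "Jen",
    "amount": 10000,
    "years": 15,
    "interest": "journalism education",
}

# str.format ignores unused keys, so both templates can share one record.
classic = CLASSIC_TEMPLATE.format(**donor)
richer = RICHER_TEMPLATE.format(**donor)
print(classic)
print(richer)
```

The point of the sketch: once the data set is clean, moving from the classic to the richer letter is only a template change, not a new process.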
Well, now I can actually send a letter to Jen and speak to Jen's interests and how Jen is connected to the institution, and make Jen not feel like a number but feel like Jen is a partner in what we're trying to do. It's just so much fun. I'm a big nerd. I'm a DIY tinkerer. So, like, I'm that early adopter; I was setting up pages for nonprofits on MySpace before we even knew it was a thing. But that's how we have to be. You know, we saw that a majority of who's leading the way is staff, and it's finding those nerds. And listen, nerds, that's not a bad thing. Nerds are people who are passionate about a subject. Exactly. Nerds rule. I love it, Jen. But that's what it is. We want to find those champions within our organization and let them run with it. And then it's attending professional development opportunities like this. You know, you could, in theory, go to a webinar, read a blog, or read different ebooks on a daily basis and still not consume enough professional development opportunities for it. It's so much fun. And Josh, I think we're going to give you the floor to show us some fun. Yeah, exactly. Show us some fun. So we've got 5-10 minutes to dive into ChatGPT, just so you can all understand really what is possible. At its core, generative AI is pattern generation. It's trying to guess the next word that you want, and the contextual features within the content you're putting in, within your actual inputs, will determine that output. Grammar, spelling, punctuation, they don't matter, because it's picking up on those specificities within the content itself. It's a natural language processing tool, so have a conversation with ChatGPT. If you've never seen it before, this is your base model. This is the free one; you can swap up to GPT-4.0, but for today, we're going to look at GPT-3.5. These are all different conversations within ChatGPT. Each one is mutually exclusive. So whatever content I create here, it's not going to have any recognition going forward.
So what's most important when you're working within ChatGPT: I could sit here right now and say, create a letter for a major gift prospect that lives in the Chicago area for my organization. But if I haven't spent time training it, it's going to feel very vanilla, it's going to be very generic, and not specific to our organization. Whenever I start prompting, the first thing I do is what's called the persona pattern. So we're basically telling ChatGPT who we want it to act like. And that's literally my first words. So, act like the vice president of advancement for the University of Florida, because I'm a Gator. So we're going to use them today. Go Gators. Act like the vice president of advancement for the University of Florida. You are a 30 year veteran of the nonprofit sector, having raised hundreds of millions of dollars. You are a subject matter expert in higher education, fund development, major gifts, and nonprofit leadership management. You are well respected. Oops. Well respected by your peers, have published multiple journal articles and professional publications, and present at conferences annually across the country. That is the prompt formula that I use every time I start off. So it's always: act like a role at an organization. And then this is kind of boilerplate. I use that number, 30 years, because I've tested it out. Five years is early into your career; 30 years shows that you're a veteran, a stalwart, you've been through a lot, and you understand what's going on. We always like to say how much they've raised. I specifically want to put a subject matter expert, because these are the blinders that they're going to think within. So obviously for today's exercise, we're focusing on higher education. And then the "well respected by your peers" just makes it look as if you are a thought leader and someone of authority. What's exciting to me with AI is I never know what my output is going to be.
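As an aside, Josh's persona formula is regular enough to capture as a small reusable template. This is only a sketch of the pattern he types by hand; the helper function and its parameter names are hypothetical, not part of any tool shown in the webinar:

```python
# A reusable version of the persona-pattern prompt: role, organization,
# years of experience, amount raised, subject areas, and the boilerplate
# markers of authority. The wording mirrors Josh's demo formula.

def persona_prompt(role, org, years, raised, subjects):
    """Assemble an 'act like' persona prompt from its structured pieces."""
    return (
        f"Act like the {role} for {org}. "
        f"You are a {years} year veteran of the nonprofit sector, "
        f"having raised {raised}. "
        f"You are a subject matter expert in {', '.join(subjects)}. "
        "You are well respected by your peers, have published multiple "
        "journal articles and professional publications, and present at "
        "conferences annually across the country."
    )

prompt = persona_prompt(
    role="vice president of advancement",
    org="the University of Florida",
    years=30,
    raised="hundreds of millions of dollars",
    subjects=["higher education", "fund development",
              "major gifts", "nonprofit leadership management"],
)
print(prompt)
```

Writing it this way makes the formula easy to reuse across institutions: swap the role, organization, and subject list, and the boilerplate authority language stays constant.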
So I hit enter, and it starts working. So this right here, I don't care about any of the output. Literally what this is doing now is setting the stage for who it's going to be. So here's some proactive solutions and insights tailored to the university's advancement efforts, and then it goes through specifics. So for today's exercise, and we've got about three or four minutes left, we're going to focus on drafting an appeal letter to someone from the College of Journalism and Communications, because I was a JCOM grad myself. They are in the industry, having been a news reporter for 25 years, and are currently supporting the university at an annual gift of $10,000 or more. And for the JCOM school, we consider that a major gift. So: we need to draft a segmented appeal letter for the College of Journalism and Communications. This letter is segmented towards major donors who contribute a minimum of $10,000 annually. This letter is specifically for an individual named Bob Smith, who is a news anchor for WPTV News Channel 8 in West Palm Beach, Florida. They have supported the College of Journalism and Communications for the past 15 years. This letter is part of our overall annual campaign with a goal to raise $1 trillion in the next three months. This letter, and when we're prompting, we want to be very specific with our desired output, so I'm going to say: this letter should be a minimum of four paragraphs long, include a PS, and have a donation ask spread out in each paragraph. The letter should be written from the perspective of the Dean of the College of Journalism and Communications. All right, I think that's good. Let's see what we get. All right. So here's the thing that I hate about generative AI. Nine times out of 10, you're going to get a letter that starts, Dear Mr. Smith, I hope this letter finds you well. That's icky. That's gross. And why is that? Excuse me. It's the biases in the data points of the LLMs, those large language models.
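As an aside, the second half of the prompt, the task and its output constraints, can be assembled the same way as the persona. The structure below mirrors the demo's wording; the variable names are hypothetical, and this is a sketch of the pattern rather than anything ChatGPT requires:

```python
# The task prompt from the demo, built from structured pieces: the
# segment, the named donor, the campaign context, and an explicit list
# of output constraints. Being specific about the desired output is the
# point: each constraint becomes one clause of the instruction.

segment = ("a segmented appeal letter for the College of Journalism and "
           "Communications, targeted at major donors who contribute a "
           "minimum of $10,000 annually")
donor = ("an individual named Bob Smith, a news anchor for WPTV News "
         "Channel 8 in West Palm Beach, Florida, who has supported the "
         "college for the past 15 years")
campaign = ("our overall annual campaign, with a goal to raise "
            "$1 trillion in the next three months")
constraints = [
    "be a minimum of four paragraphs long",
    "include a PS",
    "have a donation ask spread out in each paragraph",
    "be written from the perspective of the Dean of the College of "
    "Journalism and Communications",
]

task_prompt = (
    f"Draft {segment}. This letter is specifically for {donor}. "
    f"This letter is part of {campaign}. "
    f"The letter should {'; '.join(constraints)}."
)
print(task_prompt)
```

Keeping the constraints in a list makes it easy to iterate, which matches Josh's workflow: when the first output disappoints, you add or tighten a constraint and regenerate.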
So when it's scouring the internet and bringing in all of those data points, it says, okay, well, we believe a best-practice letter starts this way. Horrible, horrible, horrible, horrible. And that's the other thing, too. Nine times out of 10, your first output, your first generation, is not going to be your final. You're going to continue to iterate, continue to revise and bring it back. So let's just read through this and see what we get. As the Dean of the College of Journalism and Communications at the University of Florida, I'm writing to express my sincere gratitude for your steadfast support over the past 15 years. Your commitment to advancing the field of journalism through your generous contributions has made a significant impact on our students, faculty, and programs. Excuse me. Your dedication to excellence in journalism is truly commendable, and it's evident in your distinguished career as a news anchor at WPTV News Channel 8. Your passion for storytelling and commitment to upholding the highest standards of integrity and professionalism inspire us all. I'm thrilled to share that our college continues to thrive, thanks in no small part to the generosity of donors like you. Your ongoing support enables us to provide our students with unparalleled opportunities for learning, growth, and hands-on experience in the field of journalism and communications. As we embark on our annual campaign with the ambitious goal of raising $1 trillion in the next three months, I'm reaching out to you once again to humbly request your support. Your contribution of $10,000 or more will directly impact the lives of students by funding scholarships, internships, state-of-the-art technology, and innovative research initiatives. Mr. Smith, your partnership with the College of Journalism and Communications has been instrumental in shaping the future of journalism education at the university.
With your continued support, we can continue to empower the next generation of journalists to tell compelling stories, uphold the principles of truth and accuracy, and make a positive impact on society. In closing, I want to express my deepest appreciation for your unwavering commitment, yada, yada, yada. P.S., your contribution of $10,000 or more will be instrumental in helping us achieve our ambitious goal of raising $1 trillion in the next three months. Thank you for your generosity and unwavering support. Eh, I don't love it. I think it has some good points in there, but it could use a lot of work. I know we're kind of at the Q&A time, so I'm going to wrap it up here, but I just wanted to show you really briefly the way my thought process works when we're developing content. I know everyone on here knows how to write an appeal letter. Think about ChatGPT also as a way to refine and edit your content. We know the secret sauce is that sixth-to-eighth-grade sort of readability level. One great tool I love to use is called Hemingway. I'll drop that link in the chat, so you could take a letter that you've already written, bring it into Hemingway, identify the readability level, and let's say it's at the 12th grade reading level, then bring it back into ChatGPT and say, redraft this letter to be at the seventh grade reading level. Like I said, there are so many possibilities, and it's really endless what you can do with it. I can't unmute, sorry. So just a couple of follow-ups to the presentation. We saw at least one question in the Q&A, so we'll try to get to that real quick, but first I just want to let you all know the report is available. This is the link to access it from our website. The CASE library also has an AI subject guide available for CASE members, so these are links that you can check out for more information. I'll let Jen speak to this slide. Yeah. So Josh just did an awesome demo of ChatGPT.
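As an aside, the grade-level check Josh runs through Hemingway can be approximated with the published Flesch-Kincaid formula. Hemingway's exact algorithm isn't public, and the syllable counter below is a rough vowel-group heuristic, so treat this as an approximation only:

```python
import re

# Flesch-Kincaid grade level:
#   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
# Syllables are estimated by counting vowel groups, with a crude
# adjustment for a silent final 'e'.

def syllables(word):
    """Approximate syllable count via vowel groups (rough heuristic)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = max(1, len(groups))
    if word.lower().endswith("e") and count > 1:
        count -= 1  # drop a typical silent final 'e'
    return count

def fk_grade(text):
    """Estimate the Flesch-Kincaid grade level of a passage."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syl / len(words) - 15.59

simple = "We thank you. Your gift helps students learn."
dense = ("Your philanthropic contributions have been instrumental "
         "in advancing institutional priorities.")
print(fk_grade(simple), fk_grade(dense))
```

The short, plain sentences score far below the long, polysyllabic one, which is exactly the gap the "redraft this at a seventh grade reading level" prompt is meant to close.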
I can tell, Josh, you are a true nerd tinkerer at your core, and from one tinkerer to another, I respect that so much. What's tough for participants and people out there working full-time jobs is that Josh has put a lot of time into learning what works, into learning through tinkering how to do this well and refine instructions until they're just right, to get an output that's good enough for you to take over the finish line. And this is because ChatGPT is a generic tool with no specific use case in mind. Josh had to give it all of that context up front every time. By comparison, what you see on this screen is something we built. It's free. Anyone can use it. It took a lot of R&D work for us. It's called Donor Outreach AI, and it was designed specifically and exclusively for educational fundraising purposes. So instead of spending all that time figuring out how to describe to ChatGPT what you need, what you want, who your persona is, we've done all of that training and that work on our end, and actually, close to 80% of our development time for this was around the prompt refinement and context and inputs. And then it just guides you through a couple clicks to tell it what it needs to know in order to generate a draft for you to look at and then iterate from there. What I think is most cool about this, and I would encourage you all to check it out, just play with it, tinker for fun, is if you leverage other GiveCampus solutions, you can then feed the content across the platform and actually make use of it in real time. So you can tinker and have fun with the free version, but then you could also, for example, use it to generate an invite to an event, and if you have GC Events, it can pull in information about your event instead of you needing to type it in. And so what we've done is try to automate a lot of the steps you would take manually with something like ChatGPT through software, to just remove steps for you so you can get this work done faster.
Great, thank you, Jen. So I think while we're wrapping things up, we do have a couple of questions. I thought as we're covering some of those questions, if you all could just add into the chat what has been your biggest takeaway from today's webinar, and then maybe while you all are doing that, we can cover some of the questions that have come in. So in the Q&A, I can see the first question is: while AI can analyze data and personalize interactions on a large scale, it might lack the genuine human touch, empathy, and understanding that can be crucial in building long-term donor relationships. How much of a concern is this? So I don't know, Jen or Josh, if one of you wants to take that one. I can jump in. Two thoughts. One, I think this points to a really important thing to do as part of a task force, or whoever is leading the charge, which is to set really clear values that your institution is going to have around AI. One that we have is that AI should make fundraising more human. It should add more humanity to fundraising, not less. And when you feel yourself making decisions that stray from those values, it's a good opportunity to pause and go back. But to the direct question, another principle that we have is human in the loop, which is this idea that at all stages of the development of content using generative AI, at least at this point, it's not good enough, we believe, for there to not be a review step. And so we don't build anything that just goes out on its own. There is always a step where the human is in the loop. And to that question, that is the point where you evaluate and make sure it does send the right message, it does have the humanity in it, and it's empathetic, and it's real, and it feels genuine and authentic. It's really a starting point more than it is the right answer. All right.
The second question we have is: any thoughts on the future of development correspondence when AI bots are communicating with other AI agents, summarizing emails, briefs, etc.? I mean, I think we're going to be there much sooner than we realize. I think Microsoft Copilot is going to help a lot with that. I think the issue with Copilot right now is that it's only available at the enterprise level, and even at the enterprise level, it is pretty cost-prohibitive for a user license. But we're going to start to see the time where you'll have an inbox filled with 30 things, and all of a sudden, AI is going to say, all right, here are those three emails that you really need to stay on top of. And more so than that, this is what you need to say, because we're able to directly pull data from your database and say, okay, based upon the giving history, these are the things that you should speak about in your email to them. I think we're going to get there much sooner than we actually realize. Some other questions that I did see pop up in the chat were around security. And I think, Josh, you mentioned this in a couple of places, like, oh, we're not putting donor information into these prompts. Do you want to just expand on that a little bit? I mean, this is where obviously having policy and guidelines is going to make a huge difference for institutions. You must have a policy in place, because that way there are no gray areas. There are no gray areas around image creation, because generative AI, from our perspective, makes it very quick and easy to do. If you don't have that picture that you need of, you know, a student on campus, you could very easily create it. But then there's that whole ethical, you know, conundrum. Do you use an image that was generated? And if you had that within your policies in place, you'd know which way, you know, yes or no. At the same respect, everything that's out there is already public.
So, like, you know, you've got to almost take this with a grain of salt. We're not going to put our donors' private information in, but if it can be found out there on the internet, like, they know it about us. So there's a really gray area of what we're okay with putting in there, because we know that it's already available out there on the internet. But at the ground level, do not put in, you know, first name, last name, mailing address. However, you know, things like zip codes might be okay. Birthdates, you know, having all that stuff within your policy is important, because that data can then help with predictive analysis, saying, okay, we want to identify the potential legacy donors within our CRM based upon these specifics. And it can do that much faster than you could otherwise. I do want to say, though, that the worlds of predictive analytics and generative AI, sorry, predictive AI and generative AI, are two different camps altogether. I live in the generative AI world. Predictive AI is for people that are way smarter than me and have access to a lot better tech than I do. I'm just some nerd who likes to play around on ChatGPT. So maybe that'll be a follow-up for the next survey and webinar. All right. Great. Well, thank you, everyone, for joining us today. Christy, can you remind everyone where they can find the presentation and a recording? Yep. So, the same event link where you used to get the Zoom link to join today. I will be sending a follow-up email after this event is over, so look for that later today. And I'll take all those great links that were shared in the chat and pop those in there as well, so everybody has easy access. Great. Thank you so much. Thank you, everyone, for joining us today. We really appreciate you taking the time. Thanks, everyone. And thanks for your great participation throughout. Thanks, everybody. Have a great day.
Video Summary
This webinar discussed the topic of artificial intelligence (AI) and its potential impact on the advancement field. The presenters discussed the current adoption and usage of AI, as well as the challenges and opportunities associated with its implementation. They emphasized the importance of having clear guidelines and training for staff to effectively use AI. The presenters also provided a demonstration of the chat-based AI tool ChatGPT to showcase its capabilities in generating personalized content. They discussed the need for institutions to adopt AI and highlighted the role of task forces in driving its implementation. The panelists addressed concerns about AI lacking the human touch and emphasized the importance of setting clear values and reviewing AI-generated content to ensure authenticity and empathy. The webinar concluded with a discussion on the future of AI in donor interactions and the need for policies to address security and privacy concerns. Overall, the presenters highlighted the potential of AI to enhance fundraising and engagement efforts and encouraged participants to explore its applications in their institutions.
Keywords
artificial intelligence
AI
advancement field
adoption
challenges
opportunities
guidelines
training
chat-based AI tool
ChatGPT