In a fast-paced digital era, AI and digitization are reshaping the landscape of customer service, enabling organizations to provide more personalized and efficient support.
In this conversation with Paul Sweeney of Webio, we explore the potential impact of AI and digitization on customer service operations, including the benefits of continuous data collection, contextual understanding, and hyper-personalization.
The conversation also delves into the role of conversational AI models, the importance of leveraging real conversational data, and the potential for AI to enhance customer-agent interactions.
Find out more about Webio here.
- The advent of AI and digitization allows for continuous data collection and analysis, providing organizations with more context and insights into customer behavior.
- AI models have the potential to generate hyper-targeted solutions by building a contextual understanding of each customer, enabling personalized service and better outcomes.
- Conversational AI models can act as intelligent assistants for customers, enhancing web chat and consumer interactions.
- Organizations should aim to have their own AI models to ensure control, accountability, and compliance in regulated environments.
- Conversational AI can help automate conversations, leading to improved efficiency and allowing human agents to focus on more complex or intimate interactions.
- AI can assist agents in their roles, providing real-time insights and suggested responses to prioritize and enhance customer interactions.
- The ultimate goal is to improve customer experience and outcomes by understanding and addressing individual needs more effectively.
- AI-driven customer service can be a catalyst for transforming industries, with the potential to automate processes and improve overall service delivery.
- Responsible use of AI and adherence to ethical considerations are essential to ensure customer trust and protect sensitive data.
- Organizations should focus on training and upskilling agents to leverage AI tools effectively and enhance their expertise in delivering exceptional customer service.
Hi, everyone. I’m here today with Paul Sweeney from Webio. He’s the co-founder and chief strategy officer of Webio, working in the chatbot and automation space. So Paul, thanks very much for joining me today. I really appreciate it.
It’s a pleasure to be here.
So I know we caught up recently about a few things. You’ve got a podcast launching called Credit Shift, which is going to be available on all podcast platforms, and you’ve just launched a chat product as well. So I thought it’d be great to get you on to talk about some of the things you’ve been seeing. We’ve also been pinging backwards and forwards about large language models and generative chat, given your experience and what you do, so I particularly wanted to chat about that. On the podcast that’s coming, maybe just explain a little bit about what it is and what you’re going to focus on with Dan, your podcast partner in crime.
Yeah. So I guess it stems out of the conversations we’ve been having across the industry for a couple of years now. I think it’s pretty well established at the senior level that everyone is going digital, that they need to be much more digital, and that digital transformation is the name of the game. But that’s step one of the journey. What happens when everything goes digital is a lot more data, and a lot more questions about what your business is going to be doing in the future, what your role is, how you differentiate, what your services look like. What we wanted to do was start the conversation: okay, everything’s going digital, and then you’re going to have AI rolling in pretty quickly behind that. What does the future look like? What kind of services do people expect to get? What are the implications for the industry itself? And I don’t think it’s out of place to say this is such a big shift. I was around for the shift around 2002 to hosted services, cloud, and software as a service, and I remember people saying, oh, it’s not going to happen, who’s going to do things in the cloud, who’s going to allow their software to be up there? By 2008 it was a done deal; everything was cloud. Then we had mobile overlapping with that, people remotely checking accounts and managing things on their phones, and that was a huge change. I think social was a bit of a misfire; it aggregated into the business of Facebook and Twitter, mostly Facebook, so we didn’t really see big changes from that. But AI is definitely a platform-level change. So we want to tease out some of the implications of what that might be, and what the systemic effects might be.
And then what it means for competitive positioning and competitive strategy: what is it that people really have to start wrestling with?
And where do you think we are on the change curve with that? Take cloud computing as an example: I remember when cloud computing was being talked about and no one was doing it, everything was still on premise, and it really took three, four, five years. You could even argue the pandemic was a big thing that changed cloud computing, because everyone wanted software as a service as a result, and it really accelerated adoption. Where do you think we are with this? There’s a lot of noise around AI, a lot of noise around large language models. Are we just at the start of that change curve, or do you think it’s going to accelerate really quickly?
I think it’s going to accelerate really quickly, though I tend to overestimate how quickly, so I self-correct by nature. The difference with these technology curves these days is that once the internet was in place, the next wave of technology sat on top of it. Mobile services leveraged an awful lot of cloud to make them happen, and now mobile is in place, so computers are ubiquitous because we all have our phones. The next level of innovation through AI now has access to all of our phone-based technology. So you see something like ChatGPT now has an Apple app: you can download ChatGPT and start using an intelligent assistant on your phone, and that leapfrogs so many other barriers to adoption. I don’t know if you’ve downloaded it on your phone yet, but when you start pressing the button, saying something into it, and it starts generating the response, vibrating in your hand, you go, wow, that’s something else. And I think that’s just the equivalent of getting email on the internet; it’s the base-level thing it’ll do. There’s so much more coming behind that. I follow this daily, and when you see the pace at which things are changing, I pity decision makers. You make a decision today, tomorrow something happens, and you go, that was the wrong way, we need to turn back.
I mean, it’s had incredible adoption, hasn’t it? I can’t remember the stats, but it’s one of the fastest product launches ever, particularly for ChatGPT. I read a stat earlier saying that in the US something like 40% of people are now using it as part of their jobs, so adoption has been huge. I also think a lot of people are using it and not necessarily saying they’re using it. And in my conversations with North America, it feels like it’s been even more adopted over there than here in Europe.
So I think there’s a bit of a culture difference there as well, and we’ll come back to the safety aspects, the safe-use aspects. But what it’s showing, in business terms, is a signal: people want to use something like this to help them do their jobs. And I love asking people, when their bosses aren’t sitting there: so how are you using it? Are you using it to do some of the coding work? Are you getting it to parse things for you, to turn one kind of code into another? And it’s, oh yeah, I’m using it to do this, and I’m using it to do that. All the developers everywhere are using some version of a copilot to write basic stuff or do basic tasks. All the marketing people are saying, get me a first draft of this: first draft press release, first draft marketing materials, first draft email. And it’s getting good enough now that the first draft is nearly done, just a little bit of fiddling and out it goes, and now I’m super productive. I’ve shown this to people and they’ve literally said, I’ve got to go now, put it on their phone, and gone straight out. It just gave them a secret weapon they can use to get all the work they need to get done, done. And when you go to a senior person and ask, are you using anything? What would I use it for? Okay, let’s start with simple things. Are you writing anything to your company? What about the CEO letter, the update? How are we doing with the stakeholder letter, with our investor letters? Oh, I didn’t realize I could use it for that. And then you start thinking: are you aware that the generative aspect of this means you can change how you interact with the technology? Just change the persona.
So, acting as, let me give a football example, acting as Alex Ferguson, the famous manager: I want you to write a pep talk to my team as if we’re two goals down going into the second half, and I want you to inspire them in the manner that he would, using all the available TV footage, all the available material. And you get back something that’s inspiring, that’s fiery, that has adopted some of the language and maybe the tone he would put into that message. Say you find that incredibly useful as a first go. Then you might say, yeah, but how would Alex Ferguson identify players he needs to take aside and give an extra bit of coaching? So: acting as Alex Ferguson, and using the following sets of information and data, identify the player he needs to have a talk with, and the tone of the intervention that would result in increased performance for that player. And it generates: hey Paul, I realize this is a high-pressure environment, but remember, people are counting on you, and your teammates are really looking to you to run that midfield. And I go, okay, I’ve got to step up for the team; it’s not about me. That was the perfect intervention for me, right? So I think that as we learn more and more of the potential spaces for using this technology as it unfolds, you’ll see much more of it being used. We’re only at the beginning of it, but I think it’s going to be everywhere really quickly.
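The persona prompting Paul describes can be sketched as a plain string template. This is a minimal illustration, not any vendor's API: the function name and fields below are our own assumptions.

```python
def persona_prompt(persona: str, context: str, task: str) -> str:
    """Compose a persona-style prompt for a generative model."""
    return (
        f"Acting as {persona}, and using the following information:\n"
        f"{context}\n\n"
        f"Task: {task}"
    )

# Paul's football example, expressed as a prompt:
prompt = persona_prompt(
    persona="Alex Ferguson, the famous football manager",
    context="We are two goals down going into the second half.",
    task="Write a fiery, inspiring half-time team talk.",
)
```

Swapping only the `task` field ("identify the player he needs to have a talk with…") reuses the same persona for the coaching example.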
Yeah, it’s almost like the invention of the car or the washing machine. Once you’ve used it, it’s really hard to put it down, because it just saves so much time. If you’re washing clothes manually and someone gives you a washing machine, in this case pretty much for free, would you ever go back to not using it? And the whole question around whether we should have controls around it or not: it just feels like it’s going to be too hard for people to stop, because if you’re saving 20% of your time and you’ve got all this extra functionality, why would you not use it?
If I could just expand on that a little bit: Hans Rosling, the data guy who’s famous for his bubble graphs, I think he’s on YouTube, he famously showed a graph of family size against the GDP of a nation. As family size decreased, GDP went up and average incomes increased, and he showed how it evolved over time. It’s a very good data visualization example. And he had another example where he said that one of the great things the clothes washing machine brought was that his mother was freed up from all the time she was spending washing. And what she did with that time was read. She was able to educate herself, because she hadn’t received an education at that time in the country she was in. That meant she could get another job, and then she was able to bring her family the next step out of that cycle. I used to see something similar around 2000, when I was working at a telco. We used to think about broadband, DSL and ADSL, as being able to deliver things like video on demand: movies pumped into your house at lightning speed, and it’ll be amazing, and the competitors will be the cable companies providing multi-channel experiences and telephony, so we’ll be competing with each other. Broadband was going to be faster movies; that’s essentially how they conceived it. But actually, broadband brought us remote working, and with remote working we were able to be at home with our families.
So broadband brought us time with our families: to develop better relationships with our kids, to be there when they come home from school, or to collect them from school. The second-order and third-order impacts of technologies can be far bigger; they just take a little bit longer to wind out.
That’s an interesting point. Do you think the psychology is actually a bigger driver than the business driver? For example, the Apple goggles have just been announced, and there’s Microsoft Copilot, and they talk about work-life balance. But it’s often framed as: I’m doing something at home, and this allows me to do my work as well. The business piece comes first and family is introduced as a secondary order, when actually it should be the other way around: I want to spend time at home, it’s really all about my family, and this allows me to do the business stuff, which is how I earn the money. I just wonder if it’s presented one way, but the driver of adoption is actually something else.
That’s a really interesting point. I think it’s about who the buyer is: who’s going to buy this technology? It’s going to be the company, right? And what does the company want? Productivity, and happy employees, because happy employees make for better customer service experiences and better interpersonal experiences with their co-workers. So it’s in your interest to develop a better work-home balance and to manage the tempers, emotions, and conflicts we all have as humans. But you can talk all day long about what you think it’ll do; what people actually use it for is the gold standard for what we should do next. Copilots were explained to me really cogently by Brett Kinsella, one of the leading analysts in this space. He explained that previously, when we were looking at AI, it was in front of us: it was the Alexa, it was the Google Assistant, we were talking to it, and that was the relationship. But the relationship is moving to the AI being alongside us, looking over our shoulder, going: oh, I could help you a little bit with that, or it might be easier if you did this, or have you noticed this other thing? Oh gee, I hadn’t, great, thanks. It’s doing little jobs on the side and handing them over to you, or making you notice something, like a really helpful co-worker, a copilot. And I think it’s a good brand name; its strength has been such that it’s become ubiquitous as a description. Other companies’ assistants now get described the same way: it’s not an assistant, it’s a copilot, just as it’s not a word processing document, it’s Microsoft Word. So I think they’ve done a great job of that. It’s analogous to IBM back around 2003, I would say, when they came out with the word cloud, because there was no cloud concept before that.
Then they started to popularize the word cloud, and people started going, oh, this must be a normal thing, and started spending more time looking at it or evaluating it. I think we’re at that kind of stage: what is this space called, what’s it doing, what can I expect it to do? If I was running a class and saying to people, how would I get a technology out there in front of the world as fast as possible, no matter what it is, I’d say: find a way for someone to represent themselves in it. Do you remember those apps where you take a picture of yourself and it shows you in 20 years, or shows you as the opposite gender? People love it. We love seeing ourselves in things. So I don’t think it’s any mistake that the visual aspects of generative AI really grabbed people’s attention. Here’s me as Lego, here’s me as a character from a Lego movie, here’s me as a Muppet, here’s me as a Muppet on the moon, here’s a movie of me as a Muppet on the moon. But you’ll notice that it’s still kind of entertainment; that’s how it gets into a mind, and obviously it’s really quite powerful. But then you start thinking, okay, how would I use this at work? That’s the next thing. If I had pictures of my sneakers on people, or if I was selling clothes, what would change? If I was selling services, what would change? And you start thinking more in that area.
Yeah, the productivity changes are huge, aren’t they? It just allows you to be so much more productive. How does that come back to things like security and concerns around privacy? It’s been talked about: do we need to regulate it? You’ve got the EU AI Act, which in some ways has been in the works for a few years, so they were almost predicting this. What’s your view on regulation, and particularly on the dangers the media has run with? Is it just a case of being afraid of something we don’t understand, or do you think there’s more to it?
I think there’s more to it. But from a practical point of view: we’ve been in this space for years. We ran a conference called ConverCon back in 2017, and we always had a track for AI ethics and AI governance, because we strongly believed that AI was coming and it needed not to turn into social media. It needed guide rails, it needs rules and regulations, but companies also need to understand behavior and governance. The quick lessons are: for anyone who pays attention to data integrity anyway, GDPR, data at rest, where it’s going, this is significantly the same. Where’s my data? Where’s it going? How’s it being processed? What’s it being used for? Do we have the right permissions in place to do this? Is this managed under a policy, and who’s in charge of it? These are the standard enterprise questions you’d expect anyone in our business to ask; everybody just needs to start thinking like that. Do the standard good job of understanding how your data is being used, where it’s going, and what actions are being performed on it. That goes a long way towards removing the concerns, and it will drive out any issues, because your compliance people will see it. That’s the first thing. The second thing is the large language models themselves, and I’ll just use the term ChatGPT because everyone understands what that is. When you’re using ChatGPT, it’s like an acceptable-use policy. You had an employee internet-use policy that said, look, we don’t mind if you’re using the internet, everyone has to check when their deliveries are due or whatever, but we also know you have to use it to do your job. Everyone has to use the internet at some point to do their job now, and it’ll be similar: everyone will need access to AI to do their job. So it’ll be the same kind of governance questions: what’s our policy, is it clearly communicated, does everyone understand it, how do we enforce it, how do we measure it, how do we prove it? That, I believe, is how it will get handled from a practical point of view. And when you look at a developer and say, you didn’t just copy all the code and paste it into ChatGPT, did you? Uh huh. No, don’t ever do that. You have to educate people that you can’t just indiscriminately throw things at an outside business like that without risking compromising something the company cares about. So again, it’s part policy, part education. To give you a quick example: say you’re writing a letter, which is about the lowest-risk thing you could do there. Don’t use your real name, don’t use your company’s name, don’t use the name of the company you’re writing to, don’t use any identifying details of the account. All you want it to do is structure the letter and give you the right language. Be limited, let it generate, then take the output, edit it, confirm you’re happy with it, and send it. Just be careful about the names and details you put in. There are always scare stories about every technology out there, but this is a little bit of common sense: just don’t throw your company information into an open environment like that.
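Paul's advice about keeping names and account details out of the prompt can be sketched as a simple redact-then-restore pass. The placeholder scheme and helper names below are our own illustration, not Webio's implementation:

```python
def redact(text: str, secrets: dict[str, str]) -> str:
    """Replace each sensitive value with its placeholder before the text leaves the company."""
    for placeholder, value in secrets.items():
        text = text.replace(value, placeholder)
    return text

def restore(text: str, secrets: dict[str, str]) -> str:
    """Swap placeholders back for the real values once the edited draft returns."""
    for placeholder, value in secrets.items():
        text = text.replace(placeholder, value)
    return text

# Hypothetical sensitive values for the letter example:
secrets = {"[NAME]": "Paul Sweeney", "[COMPANY]": "Webio", "[ACCOUNT]": "IE29-1234"}
letter = "Dear Paul Sweeney, your Webio account IE29-1234 is overdue."
safe = redact(letter, secrets)  # this is what you'd paste into an external model
```

Only the redacted `safe` text would ever reach the outside service; the real names never leave your environment.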
It’s almost like my little mini helper. For example, when writing something, you can have it structure a letter, those kinds of things. And if you had someone working for you doing that, you wouldn’t just get someone off the street, say go and write me a letter, accept it as-is, and put it straight into production. You’d have checks and balances: you’d review the letter before it went out, and you’d make sure they were trained properly before you gave them additional responsibility. I know that ChatGPT, or generative AI, is maths at the end of the day, but if we think of it almost like my mini helper, then I should have the same controls I would normally have. I just wonder if our relationship with computers is going to have to change, from the computer being always right and perfect, to the computer might be right, just like a human might be. We need segregation of duties, we need some sort of governance around it, just as if it was a human.
So today, tomorrow, and into the future are different time horizons for this technology. Today, it’s really good practice to treat your AI as an intern, a smart graduate coming to work in your business. They’re really smart, but they don’t understand everything. You go: great work; okay, that’s very good; no, that’s not what we call that, that’s not what that means. And that works really fine. Then there are going to be tricks, I call them tricks but they’re really advances in the technology: you might say, remember all the ways I like to edit, remember the corrections I make to these kinds of letters, and when you’re generating in the future, generate them with those edits in them. So it’ll learn your style, it’ll learn how you like to communicate, it’ll learn your policies for how you want to do things; it will remember your settings, or whatever you want to call it. Where that becomes interesting is: how much more information do you want to give it, and how much more action do you want it to take on your behalf? That’s policy-level stuff. I want you to be able to respond to an email for me, but I don’t want you to commit to anything. Or: yes, respond to it, and if it’s a request for a meeting or a call or anything else, organize it, put it in my calendar, and commit to it if I can. So it’ll be levels of actions, levels of permissions.
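The "levels of actions, levels of permissions" idea might look something like this in code. The specific levels and their names are assumptions made for illustration, not an existing product's policy model:

```python
from enum import IntEnum

class Permission(IntEnum):
    DRAFT_ONLY = 1  # generate text, never send anything
    RESPOND = 2     # reply to email, but commit to nothing
    COMMIT = 3      # schedule meetings, make commitments on your behalf

def allowed(action_level: Permission, policy_ceiling: Permission) -> bool:
    """An assistant action is allowed only at or below the user's policy ceiling."""
    return action_level <= policy_ceiling

# A user who lets the assistant respond but never commit:
policy = Permission.RESPOND
```

Under this policy, `allowed(Permission.RESPOND, policy)` holds but `allowed(Permission.COMMIT, policy)` does not, matching Paul's "respond, but don't commit to anything" example.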
That’s a governance thing as well, a human governance thing; we’re going to need governance on top of it.
Correct. And then the third angle on it is that your relationship to it is different at the moment. For instance, you should never treat anything it produces as fact. It is not fact; it’s a generative thing, it’s not a fact engine. It doesn’t know what your balance is, it doesn’t know what your next payment date is, it doesn’t know how likely or not you are to do anything. It doesn’t have that data, and it’s not able to make those judgments. But I believe that as the technology evolves, there will be other rails put into the process, into the pipeline: actually, you should be looking for a piece of data here; does that exist; do we have a link to something that says that’s the balance, or that’s the next date, or this is the person, something that validates it? It will evolve, and therefore it will link to APIs, it will link to enterprise data, and there will be policies to put rails on everything, because that’s how an industry matures. So I don’t think this is going to blow up and do something silly that everyone laughs about for the moment it lasted. This is a fundamental shift in what computers are capable of, and I think it’s going to be a longer-term change.
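The "rails" Paul describes, filling factual slots from a system of record instead of letting the model guess, could be sketched roughly like this. The data source, account ID, and helper name are all hypothetical:

```python
# A stand-in for enterprise data reached over an API; values are invented.
ACCOUNTS = {"A-100": {"balance": "€240.00", "next_payment": "2023-08-01"}}

def grounded_fill(template: str, account_id: str) -> str:
    """Fill factual slots from the system of record, never from the model."""
    facts = ACCOUNTS[account_id]  # raises KeyError if the account is unknown
    return template.format(**facts)

# The model is only trusted with the wording; the facts come from the lookup.
msg = grounded_fill(
    "Your balance is {balance} and your next payment is due on {next_payment}.",
    "A-100",
)
```

Anything the lookup cannot validate simply isn't said, which is the point of putting rails in the pipeline.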
This is a bit of a concern I have around the talk of regulation: if people don’t understand how it works or what it actually does, the regulation gets misplaced, based on what we think it does rather than what it actually does. For example, some of the issues raised around copyright: did it learn everything from copyrighted material, when most or all of our own education was learned from such material too? Is it going to have undue control over things, or get things wrong, when actually there are policy controls around that? I just wonder, if regulation becomes too focused on privacy, or on certain aspects as we understand them today and what we think it’s going to do, will that lead to the wrong regulation that then restricts development?
Yeah, I can only speculate on the way I think it’s going to go. I think they’re going to make generative AI something that falls under all the regular AI ethics rules that the EU has drawn up. And it’s not about the technology; it’s about what you’re doing with it, and whether you have the permissions for it. Take visual surveillance. You say: I don’t mind having cameras in London, I don’t mind cameras on the street, I don’t mind cameras in the shop, actually I don’t mind a camera in my car, I don’t want a camera outside my house watching me come in, and I don’t mind Alexa being in my house. Okay, so you don’t mind any individual piece of that. But if someone had access to all of it, and was able to do something on top of it that was against your interests, you’d go: I don’t like that. So it’s more than what individual systems are doing; it’s what all the systems together are doing, and then what your intention is. What are you, as a company or an organization or an individual, trying to achieve with that information? Is it in the interests of society, or is it being used against people’s interests in some way? And that’s why you need a body there for those sorts of things. Yes, I think you do need to register. But more to the point, what we found having this conversation over the years at ConverCon is that the best companies are on top of this already: they have committees. So who’s on the committee for AI ethics and oversight? Who’s in charge? Who holds the bucket for this? Somebody has to be accountable for it. Who’s accountable? Nobody? Okay, that’s your first disconnect.
Second, who’s on the committee that discusses this, evaluates it, and communicates it to the rest of the business? Oh, it’s all of us: five college-educated, middle-aged white guys sitting at a table. That’s your second disconnect: you don’t have the requisite diversity that reflects society and how it’s likely to respond to the services you’re providing and how you’re performing. So you have to make sure you’ve got representation on that body, and then you’ve got to bring those policies out to the rest of the business. I’ve been really impressed by the way some companies have managed that in a really considered fashion: for every new technology and every new use case that comes in, there’s a review process. People look at it, they evaluate it, and they have people who are used to making those decisions. There are some bigger companies who are very well set up in this area. But in principle, I think companies will need to start making someone responsible for the AI, putting the policies in place, and communicating them. That’s how it ultimately gets managed at the company level.
And I know the EU AI Act is managed at that policy level — like you regulate financial services rather than regulating the technology itself. Regulate the policy, the inputs, the outputs, and the outcomes, and let the technology do its thing in between — to protect society as much as anything.
Yeah, I think that's the only way to go. Personally, I'm not a lawyer, and I'm no expert in AI ethics. But again, look at how quickly everything changes in this area: the minute you regulate for one thing, another major game-changer could occur. I'll give you an example. Around February, when ChatGPT came out and GPT-4 was the big one, they were talking about the number of parameters, the number of tokens, the size of the dataset, and we were hearing these numbers and thinking, oh my God, these are huge. Then another company would come out and say, we've got twice as many parameters and three times as many tokens — good God, it's three times as big as the last big thing. And then a moment occurred where researchers at a university said, we've got a language model that does all that stuff, and it only has a million parameters. Hold on, I thought we were going up? We didn't need to go up — they changed the way the model works, and it needed far fewer parameters. And you can put this on a phone? Yes, you can actually put it on a phone. What's next? Oh, I've got a smaller model, and it's three times the power of that last model. Hold on a second, we're going the other way now. So how would you regulate for that kind of breakthrough at a tech level? That's why I think it's about controlling it at the business level. There's operations, governance, oversight — in the kinds of businesses we're in, we understand how that has to happen. It's very regulated; there's strong governance in place. It fits into that mindset, and people who have that mindset anyway will understand: don't do that, can't do that, wouldn't do that. And they won't fall for it.
I think it's the people who have access to this technology now and are able to generate things. The cases that grab the headlines will always be the very negative use cases, because it's human nature to respond to those. But nonetheless, if your image is out there — hey, I'm pretty sure I could go through all your podcasts, throw them into a large language model, clone your voice, replicate your gestures, and create an entirely synthetic podcast for you. I could get ChatGPT to structure the episode, pick the topic, generate the news, and say it all in your voice. And it would probably get more views, because you'd be able to optimize it, right? Just to give you an example — and this is speculation — of how that could be used today: the guys at Spotify are saying, hold on a second. We've got all these people like, say, Joe Rogan — a hundred million or more listeners every couple of days on the podcast. And you're a car dealer down here in County Cork, and you're saying, you know what, I'd really love it if, on my Spotify stream, I had Joe Rogan saying, "Paul Sweeney's BMW sales — they're amazing cars, he's the best guy, go talk to him, I know him very well." And that ad comes out in Joe Rogan's voice, and it goes just to listeners in the Munster region of Ireland, who have been identified through IP address and segmentation. So Joe Rogan's voice has been cloned, the copy has been generated — just "generate me the best copy to sell a BMW if Joe Rogan were the guy saying it" — and then you pay your money and pump it through the network. And that's just localization: maximizing a brand voice, a brand asset, on Spotify, and redistributing it. That might be a harmless case.
But what if they got your podcast voice from the internet, made you say some really horrible things, promoted the podcast, defamed you, and got you picked up by various news outlets that fell for it? That's really negative. But it's the same tech — the same tech used with entirely different intentions. So I've yet to really see — actually, no, that's incorrect. I have seen some amazing, amazing things happen with the baseline technology. It doesn't give me pause, exactly, but it tells me this is bigger than we are capable of understanding right now. Let me give you an example of why I think that's the case. The same underlying technology was used to understand how proteins fold from genomic data. And it was untrained: it trained itself, figured out what was going on, and figured out how the proteins fold. Prior to this, it would take a PhD researcher two to three years to establish what was going to happen — they'd have to write it all out, do their experiments, and that would be their thesis. The model did all the proteins in three months. All of them. This was an impossible project, and it was done in three months. And you start going, okay, what else could this start to make sense of? What other things could a model like this be used to sense? I think the innovation space in front of us is huge, which is why we need to really start thinking about it.
There are a couple of things there. Just going back to the fraud case: think about debit cards and transactions as an example. They've got great uses, but they have this fraud use as well. And I quite like analogies. So do we think of that as an analogy for these kinds of models? These things are going to happen — how do you put guardrails, policies, and investigations in place to make sure the harm doesn't happen? Can we use analogies like that to put some of those guardrails in place?
I think we have to. That is the job: we have to figure out how to start thinking about it. So thinking in analogies — okay, this is our process today; if that were to happen tomorrow, how would it change? Identity and fraud is an obvious one. If your voice can be cloned, if it can generate an absolutely pitch-perfect voice, how do you deal with that? You go to your vendor and say, if this happens, how are you set up to identify it? What's your strategy? And they go, actually, we don't have a strategy for that, because we haven't seen it yet. And you say, you need to come back with a strategy, because we think it's inevitable this is going to crop up. And now they're feeding in the concerns they're hearing from the market, going, hey, we need to figure this one out.
And I suppose that creates a whole new level of services that end up being produced. So what gets saved through the productivity gains ends up getting replaced by new services that have to get created as a result, I would think.

I suspect so. I like keeping track of the things I called wrong as much as the things I called right, because you probably learn more from them. I remember when facial ID and card-tap technology were coming out: are you really telling me it's so convenient that I won't just take out my credit card and tap? I have to tap my phone instead? That's your big innovation? And actually, it was totally a game-changer. Then I don't even have to do anything — it just sees me and validates me. It's touchless, cardless: it sees me, authenticates me, enables the transaction, gives me the receipt, and off I go. It was just that little bit easier. Now, we had the pandemic, which accelerated the whole thing. But I like thinking of it this way: I carry my phone with me, as I'm sure everyone does. Because my card is on my phone, I no longer carry a wallet. And because I no longer carry a wallet, I hardly ever carry a bag anymore. That effect-on-effect. So I think what will happen is you'll see systemic changes like that kicking across everything. What happens when you can be instantly authenticated by whatever service, because you're just carrying your phone and it passively knows it's you? Say I ask my assistant — my regular Google or Apple assistant, and I know they don't act like this right now — it would be very handy to just be able to say, hey, can you make sure XYZ happens today? And inform me if my daughter spends more than 20 euro today, because I'd really like to keep an eye on her spending. And I go away, and my intelligent agent goes away, and it says, okay, nothing's happened yet, nothing's happened yet —
She's spent 30 euro. It comes back, pops up an alert: she's just spent 30 euro — you told me, so I'm going to put a block on your card now, and that's your spending gone for the week. So behaviors will change, I think. It's going to be really interesting to see what behaviors change and how they cascade. But the identification and verification thing — it's like a passive service in the background. And I'm interested in what happens next. So, what next?
Yeah, I think it frees the brain up to do something else. When you were going through that example — one of the things I'm finding with facial ID is that you end up forgetting more passwords, right? You try to work out what a password was, or you're having to reset them all the time. It's almost like unintended consequences, behavioral change as a result of us doing these things.
Absolutely. And analogous to that, I think we start forgetting names too, and navigation. How do I get from here to there — do I go this way or that way at this time of day? You stop thinking and just turn it on: what's the best way?
And on facial recognition — we've not met in person, right? But when we do meet in person, it'll be like, you're a little bit different from the screen, because you're in 3D, in reality. You might be saying, Chris, you're carrying a little more weight than you do on screen — hopefully not — but the perceptions are slightly different. Through the pandemic, the "real" person became the person on the screen, versus the person you actually meet. So in some ways we're trying to blend reality and non-reality.
I think that's a good example of the gradations: digital interactions, interactions between humans and artificial humans, and then human-to-human contact, physically in the same space as somebody. These are gradations in what kind of relationship you need to have with someone to get something done. For instance, meeting online: it's really important in your workday to stop and ask the person how they are — what's going on in your area, how's your dog — and build some social time in, because we've built our meetings back-to-back, Zoom, Zoom, Zoom, all day long, and we actually degrade our relationships because we don't take the time to care for the other person, to show them they're part of something, that I remembered what you said last time. That's how you build relationships. Now, even that can change. I was using Google Meet for the first time in a long time yesterday, and it said, your background seems a little dark — would you like to brighten it? Just a little wipe across the screen, and everything brightened up. Oh, that's nice. Next: you're looking a little tired today — would you like me to remove the circles from under your eyes and give you a little bit of a skin glow? And actually, I would like that. Or: your voice would be more receptive to others if you just took the upturn down a little, more like Joe Rogan — actually, do that a little bit, that'd be great. So now you're talking to me, meeting me, interacting with me — but you're not, because I've changed things a little to present myself a little better. And you go, that doesn't seem like a good thing to do.
Kids do it day in, day out, for everything. They'll Snapchat as bunny-rabbit heads, they'll create characters to represent themselves, they'll put things on their heads — they do this all day long anyway. And if your voice is going through a service like this, and it's taking the top out and smoothing things out, just making you sound a little bit better, a little higher quality — you go, that's just good service. But is it me? Is that really me?
Do you think we're becoming part of the machine? When we talk about embedding technology into our lives, do you think we're getting embedded into the technology's life, so it becomes one and the same? There was a term for that point of unity — I remember what it was: singularity. Do you think we're getting near that? Is there a danger of it?
No. Look, I'm sure there are people far smarter than me going, he just doesn't understand the danger. But let me give you a really silly example from a completely different area, and hopefully it'll relate back. My wife and I were out in our local forest a couple of years ago, walking along in late August, and we walked by this apple tree. Oh my God, all the crab apples are out. We looked at them and thought, we could make crab apple jam, it'd be lovely. So we're picking them off and — hold on a second, we need something to put them in. Ah, it'd be really simple to make a basket. Then you look around and there's nothing really there to make a basket from, and we tried, and everything fell apart. A basket is a basic tool. A knife is a basic tool. You put the apples in a basket and carry them somewhere else; it lets you carry more things than you could before, without damaging them. We have always developed tools. We've always carried our knife for cutting; we've always carried our hammer and our ruler and our things for our jobs. And you could say, you'd never see him without his toolkit, you'd never see that guy without his whatever — he carries it with him, it becomes part of his job and maybe part of his identity, because I'm a carpenter, I'm a mechanic, I'm an accountant; an identity rolls into it. I don't see this as being any different. It's just another tool that you'll figure out how to use, that will help you do your job or not. And maybe you'll develop some relationship with it, or not. But I'm not really convinced yet that we develop relationships with technology. We don't wake up and go, I must talk to Mr. Busy and see how he's doing today, like some Tamagotchi thing. We just don't do that.
But there are people out there who, because they don't have social contact, because they're isolated, have a proxy for a relationship: a digital human they can talk to, who asks them questions, remembers things about them, and seems to care about them. But that's not us wanting a relationship with the technology; it's the technology substituting for a relationship we don't have in real life. So I don't think we're heading towards any kind of singularity. I think we have a set of tools that are more likely to be passively keeping an eye on us for our own health and well-being than proactively engaging with us in some bantering way. If you're using an Apple Watch now — if you really wanted to get into it, it could figure out: Paul, your heart rate went up during that meeting; were you worried about something? It's not doing that, but it could. It keeps track of my heart rate and my respiration; it could keep track of my glucose, my blood sugar. And then it needs to triangulate what's driving it — time of day, exercise, eating? Does it need to know more about those, and what's it doing to help me? It's passively managing these things until we ask it to do something. Like: hey, Apple, I think I'd like to lose a couple of pounds over the next few weeks, what do you suggest? I suggest you walk after your meal, and every second night you do the following, and your estimated weight will be down half a stone in two months just by doing that. Oh, that's great — would you remind me to do that? Yeah, sure, I'll send you a little notification. That's AI working to help you manage your physical well-being. But it's not really invasive; it's not mind-melding with you in some way. All it's doing is gathering data, and it has the tools to act on that data.
And you'll probably have to tell it what you want it to do. It's not going to turn around and surprise you with "I've spotted these things you should be worried about" — you'd take the watch off in a minute and go, I don't want to be worried about whether I'm going to get cancer from walking down a city street, stop. The interesting things, I think, come from the fact that there's so much more to measure, so much more to sense. We can sense our bodies' performance, what's going on with our bodies. Before, you'd have to go to a doctor, go to a big surgery with all the machines, and they'd do that once in a while. The technology now allows us to do this on a continuous basis, which gives us more data and more context in our lives. And the more context and data it has, the more it's able to sense and help us understand why certain things are happening, or what might happen.
Yeah. I suppose the only concern I have — and I don't have big concerns at the moment — is that as human beings we can only sense, say, 20 or 50 factors, and this, at the extreme, could do 50,000. And in terms of writing: if you get it to produce a piece that might take me a day or half a day, it'll do it in 10 minutes — and it'll produce a hundred of them in 10 minutes. It's almost like a competition we could lose. That might be down the road, and I suppose it's about policy: making sure that doesn't happen, making sure it's done for the benefit of humans and society rather than the machine itself. That's the only concern I have at the moment, and it's projecting quite far into the future, because right now it's helping me do my job, right?
Yeah. Again, I'm not entirely convinced by the runaway-AI theory yet. I'm not saying it couldn't happen, but at the moment I'm not too worried about it. I think all the interesting things that are happening are things we would not have thought about. Separate, amazing breakthroughs are meeting each other: genomic databases and our understanding of the very essence of biology, meeting the breakthroughs in AI, in data, and in cloud, all coming together. We now have enough computing power, processing, and understanding to start figuring out the enormous complexity of a biological system — such that we can figure out triggers for why something happens: that protein shifts here when this happens, because you're predisposed to it because of this relationship. And then we can generate a hyper-targeted solution to that problem — compounds that can be created, manufactured, and delivered to you within a day, entirely hyper-personalized. That's a complete game-changer, up there with having healthcare at all. We take it for granted. I always go to my dentist and thank them for their education, because going to a dentist a hundred years ago was no small event — it was a big deal. So I'm grateful for the standards of dentistry and care we have these days, and the same with the healthcare system. What happens when the next layer of breakthrough technologies arrives and we're able to get into the finer and finer details of the things making us unhealthy — and possibly the other way around? Once you start going, this is all because of saturated fats, these kinds of oils have these kinds of impacts on these proteins, which have these impacts on your body — once you understand those relationships, you can go back and say, well, guys, you can't make food like that; you've got to change those ingredients and drive them out of the system.
So I know we've gone very broad on all the AI stuff. But if I were to make one major point, it's this: we're on the cusp of a change that is huge. We've had cloud, we've had mobile, we now have AI, and the cycles are coming faster and faster because they're layering one on top of the other. So I think it's going to spread really quickly. People have already shown they have a strong desire to use it. It will change the nature of work for a while, and then we'll reorganize ourselves — we'll figure out, okay, we've just changed the way work is organized.
Do you think people today, if they're not using it, are going to fall behind? Some people have adopted it — let's say 20 to 30 percent of people, and they're adopting it more than we think, as we talked about at the start. If you're not using it, are you going to lose out in the competition with the people who are?
A hundred percent. I have absolutely no doubt — zero doubt — you're going to lose out. You have to figure it out.
Yeah. And I want to come back to web chat and consumer interaction. I know you guys have been working on that with debt chats and some of that area. Where do you think large language models take that world — the messaging and communications world, the two-way chat — into the future? What's next, do you think?
It's interesting. We have our view — we believe strongly in it, but I'm not going to say it's the absolute truth, because it could evolve another way. Our view is that every organization should have their own AI model. They should be able to control it from head to toe, understand how everything in it works, and be fully accountable for what happens in that model. And you have to be, because you operate in a regulated environment: you have to be able to explain how everything was decided, and then back up why it did what it did. Every industry, to some degree, will have to struggle with how to do that. The view we've taken, from our experience working with companies, is that simple things — that was a successful payment, that was a successful promise-to-pay, that was a successful XYZ — all get defined slightly differently in every company. They all have slightly different ways of thinking about something being done, completed, defined, or labeled. So what we want to do is give everyone a model trained on their labels, how they've defined things, their data, and their conversations. It's not synthetic data; it's actual conversational data from your customers speaking to you. It's not theoretical. The analogy is: would you rather drive a car that was driven around Birmingham for a couple of months, or buy a car that was trained on YouTube videos? No — I'll take the car trained on the real, live data, please. So our view is: you take the real conversational data, you understand it, and you train the model to understand the context of that data.
Let me give you an example. Even a short phrase can be very ambiguous, like "I'd love to pay, but I can't pay right now," or "I don't seem to be able to pay right now." You've got to be able to pick up all the details of that sentence: "I'd love to pay" — positive; "but I can't" — negative; "right now" — why right now? At the beginning of a conversation, that may mean they don't have available funds, and the follow-on question is: has something happened, have circumstances changed, that we need to understand? But at the end of the conversation, after some length of exchange, it may mean the gateway isn't taking the payment — it's rejecting the card, they can't get their PIN right, they're having a problem and can't pay right now. So where phrases sit in a sentence, and where the sentence sits in a conversation, can change its meaning. And that gets you into the concept of different embeddings: they're structured differently, and the relationships between words and phrases are different. So we've spent a good lot of time working with customers, building these models with them, and those models are the conversational AI that drives their conversations. The tricky bit then is: okay, that's great, but where's the data coming from? You've got to be in a relationship with a company where they're willing to share the data with you, and they share it over an API: this is the amount of money, this is the date, this is the time, this is the person. That's the kind of data you can't just throw over to a generative AI. That's why we have generative-AI-style technology and conversational AI: we have our own natural language understanding engine, our own dialogue engine, and our own API.
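The point about conversational position changing an utterance's meaning can be sketched in a toy way. This is an illustrative sketch, not Webio's actual engine or labels — a real system would use learned embeddings rather than keyword rules, and the label names here are made up:

```python
# Toy sketch: the same ambiguous utterance gets a different label depending
# on where it occurs in the conversation, mirroring the idea that
# conversational context (not just the words) carries the meaning.

def classify_payment_utterance(text: str, turn_index: int, total_turns: int) -> str:
    """Label an ambiguous payment utterance using its position as context."""
    text = text.lower()
    if "can't pay" in text or "able to pay" in text:
        # Early in the conversation: more likely an affordability problem.
        if turn_index <= 1:
            return "no_available_funds"
        # Late in the conversation: more likely a payment-mechanics problem
        # (gateway rejecting the card, PIN trouble, and so on).
        if turn_index >= total_turns - 1:
            return "payment_failure"
        return "needs_clarification"
    return "unknown"

utterance = "I'd love to pay, but I can't pay right now"
print(classify_payment_utterance(utterance, 0, 10))  # early turn
print(classify_payment_utterance(utterance, 9, 10))  # final turn
```

The same string yields different labels purely because of its position, which is the behavior an embedding-based model captures in a far richer way.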
So together, you're able to get the data in, workflow it properly, and then use the right pieces of AI together in a governed fashion to make sure everything's happening the way it should.
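The earlier point about every company defining "success" in its own terms, and training on its own labeled conversations, can be sketched roughly like this. The label names and the prompt/completion JSONL shape are assumptions for illustration, not Webio's real schema:

```python
import json

# Hypothetical, company-specific label scheme: each organization defines
# "successful payment", "promise to pay", etc. in its own terms.
conversations = [
    {"text": "I can pay 50 euro on Friday", "label": "promise_to_pay"},
    {"text": "Payment went through, thanks", "label": "successful_payment"},
    {"text": "I lost my job last month", "label": "financial_vulnerability"},
]

def to_finetune_jsonl(records):
    """Serialize labeled utterances into a JSONL fine-tuning file: one
    {"prompt": ..., "completion": ...} object per line."""
    return "\n".join(
        json.dumps({"prompt": r["text"], "completion": r["label"]})
        for r in records
    )

print(to_finetune_jsonl(conversations))
```

Because the training pairs come from the company's real conversations and its own definitions, the resulting model reflects that company's meaning of "done" rather than a generic one.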
Simplistically, it's almost like you have the general, base-level primary education, and then you have the university education that's specific to particular companies, based off their data, so they get really accurate predictions as a result.
Correct. It's basically a fine-tuned large language model — that's what it is. And when you look at a simple chat solution — I feel bad doing this, but it's also a bit of fun — you go to a website, you press on the chat, and you say, hey, I'd like to look at your whatever, right? It comes back going, hey, thank you for thinking of us. You say something else, and it goes, please leave your email. That's not web chat; that's a way of collecting someone's email. And the amount of times that happens — why do you even bother having that bubble there? With conversational chat, you're trying to get to a stage where you've got a balance: you maybe understand why a customer is on this page, or this part of the page; you let them say something and go, okay, actually, you're looking for this — handling the exception to the 80 to 90 percent of reasons people are on this page — and come right back with, this is the thing you were looking for. And it is going to be as good as your data, as good as your training, as good as your design. Nobody should turn around and say everything's magic out of the box — they've probably never worked with these things before. But we're very excited about it, because we have a background in all this conversation design, and now we're seeing: SMS conversations, Messenger conversations, web chat conversations — they're all conversations, and you're using your AI to drive all of them. And now you can execute things: you can do transactions, confirm people's payments, take payments, confirm a future payment date, do account-level interactions with them — which, of course, allows you to automate.
So when I'm talking to people about AI, sure, there are interesting things about models and how multiple models can be used together to make this happen. But as a business, you're probably asking: does this help me automate? Yes — because it's just automating the conversation.
But in some ways it's a bit like the ChatGPT example: it becomes your mini helper, doesn't it? It allows you to do things, which allows you to redeploy resources elsewhere to do something more interesting and more valuable.
I'd almost say, keep an eye on this space — I'm not going to say what we're releasing soon, because that'd be breaking my marketing people's hearts. But you're right: the best way to think about this is that it's an intelligent assistant for the customer. And then, what happens next? Web chat is on every website, but it's so under-utilized. Customers know how to use it — there it is, I know what to expect, I click it, something should happen — and most of the time nothing's happening. Now imagine that chat assistant becoming a real digital assistant. It doesn't matter that it's chat: it could pop up, it could take half your page, it could guide you places, you could speak to it, you could ask it things. It will become a digital assistant, or a customer co-pilot. On the other side of it, you have an agent. So how can you use AI to help an agent do their job? That's the co-pilot role. We've been talking to customers about this and working with them, and the things you might think they want, they don't. How many times do they want you to generate the response for them? They know 90 percent of the time what the response should be — they've gone through this conversation a lot of times. So you just want the right option populated; you don't want to generate it, you want to know: that's the option, that's the text reply. I think one of the things agents really wanted was just speed — time to understanding that a thing needs to be done, whatever that thing is: understanding that this message needs to be responded to because it's important, the client has specified constraints that make it very time-sensitive, and some other factors. So the AI understands that the message needs to be prioritized.
And it’s just brought to the agent’s attention. As one of the companies we’re working with said, just having the ability to stop an agent and go, “Oh, think about this,” that was valuable. Just go: stop, think, I should actually deal with that this week, because they probably know what needs to happen. A great example, actually, bringing all this together, is vulnerability. Where you have people saying, “I’d love to pay that bill, but unfortunately I lost my job very recently and we’re going through a tough time. Is there something I can do?” And you’re saying, okay, that text is expressing a financial vulnerability. You’re giving it some sort of measure, like a lost job is a measure that you want to put on it. And now you want to label that as financially vulnerable, and then do something with it. It might just be saying, this person has expressed a financial vulnerability, please acknowledge that, and the agent can go, “I’m really sorry to hear that, Paul. That can be a very tough change-of-life situation. Let’s figure out how we’re going to deal with it.” That gives you, as an agent, a reset that says: I understand that this is the case, and I want to deal with it. Or maybe they’re saying, that’s a vulnerable person, I need to send them to a specialist who’s trained in the way this should be dealt with. So it’s, “I’m very sorry to hear that. I’m going to hand you over to someone who’s really well trained in this and who has a great record of helping people out.” And the speed at which you can recognize that, and the surety of the action you can take on the back of it, isn’t of the magnitude of “we’re going to develop an AI that understands all the bits of how you’re speaking and all the nuances of your situation and generate the best response for you.” Okay, I can see in the future that could be a thing that might occur.
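The workflow Paul describes here, spotting a message that expresses financial vulnerability, attaching a label, and either prompting an acknowledgement or routing to a trained specialist, could be sketched roughly as below. This is only an illustrative sketch: the keyword cues, labels, and routing rules are invented assumptions, and a real system like Webio’s would use a trained classifier rather than keyword matching.

```python
# Hypothetical sketch: label an inbound message and decide the next action.
# Cues, labels, and routing thresholds are illustrative, not a real model.

VULNERABILITY_CUES = [
    "lost my job", "can't pay", "tough time", "made redundant", "bereavement",
]

def classify_message(text: str) -> dict:
    """Return a label, a priority, and a suggested next action for a message."""
    lowered = text.lower()
    hits = [cue for cue in VULNERABILITY_CUES if cue in lowered]
    if hits:
        return {
            "label": "financially_vulnerable",
            "priority": "high",  # surface to the agent immediately
            # Multiple cues -> route to a trained specialist; one cue ->
            # prompt the agent to acknowledge and reset the conversation.
            "next_action": ("route_to_specialist" if len(hits) > 1
                            else "acknowledge_and_reset"),
            "evidence": hits,
        }
    return {"label": "standard", "priority": "normal",
            "next_action": "suggest_reply"}

result = classify_message(
    "I'd love to pay that bill, but unfortunately I lost my job recently "
    "and we're going through a tough time. Is there something I can do?"
)
print(result["label"], result["next_action"])
# -> financially_vulnerable route_to_specialist
```

The point of the sketch is the shape of the decision, not the detection method: detect, label with a measure, then pick a concrete next action fast rather than trying to generate the perfect reply.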
But today, you just want to use the tools that are there to say: okay, don’t let someone sit too long when they have expressed a particular situation, or make sure that people who have been vulnerable are followed up with, that you’ve continued the conversation, done check-ins with them, made sure they’ve done the things they said they were going to do. I’ve been reading some of the reports recently from some of the bigger financial services companies in our space, and what they seem to be leading with these days is: we’ve helped 4 million people get out of debt successfully. And that’s a really interesting reframing of an industry. Rather than saying we’re the most effective debt collection agency, or the most dollars-per-pound-collected agency, they’re saying we actually helped 4 million people get out of debt. We did so in a way where people were happier at the end of the process than they were at the beginning. And we impact brand equity in the following measures, which has a market valuation of Y. And you go, okay, that’s actually a really interesting turn of events. What might be required going into the future isn’t just getting a date, or a future payment date, or getting promises to pay. It’s going to be figuring out who needs to be interacted with, what kind of problems they’re going to need solved, what kind of skills your agents are going to need to help them navigate that, and who’s doing this. I think that’s really interesting, and a genuine shift in what an industry might actually end up thinking its job is. And again, there is always this: we need to be responsible in credit and the type of credit that’s given, people need to be responsible in the credit that they take out, and we need to be responsible in how we help them deal with whatever credit situations they’re in. And there are many different types.
And I think, with the availability of so many tools and so many digital products, I’m wondering what the service looks like going forward. Is it micro-targeting, micro-niching, micro-journeys? Is it much more behavioral? Or is it just simply helping people have better conversations as individuals?
But I think it’s interesting what you’re saying: we can talk about all these exciting things, and it’s great that we can look at all these extra parameters, but really, at the end of the day, you could have narrowed it down to one or two things and made customers’ lives easier. And the aha moment might be that what we thought, from our frame of reference, the customer wanted is actually not what they wanted. Or the solution might actually be very simple, rather than something way more complicated. Even though we go through all this complexity, you actually come back to some very simple and very familiar measures; we can just do them in a lot more detail than we ever could before.
Yeah, yep. And again, that feels correct; that feels like how life works out. Think of the amount of complexity and development that goes into this phone being able to see you and authenticate you through a camera so that you can tap and go in a supermarket. The amount of technology behind that is astronomical, and the experience is: I just walk by a little machine and I tap my way out. It’s a small moment with an enormous stack of development behind it. And I have a feeling that’s the way it ends up looking in the application: just a number of small things that have a huge layer underneath to make them happen. So you could work for a year on a feature and it just shows as one little thing on a sidebar that goes: actually, this customer has gone from being likely to pay and happy, to unlikely to pay and unhappy, in the last two turns of this conversation. We need to think about what happened here and figure out what the next best action is. And I think about this in the context of the other challenges in the industry. How do we recruit people? How do we train people? How do we keep people motivated? How do we keep the culture that we want to have? And how do we use our copilot to do that? Maybe what we get is people ramped up a lot faster. We give them hints and understandings of what’s happening in a conversation, live, and they get to be the best agents they could have been, a lot faster, and suddenly they’re much happier in their roles, because they have mastery of the role: they know what’s happening, they know how to deal with it, they’re not surprised by things, and they get to make a difference. I say this quite openly: I grew up in a house where we got bills arriving that we couldn’t pay, with all the stresses and strains. I’ve been through it myself.
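The “sidebar” feature Paul mentions, flagging that a customer has shifted from likely-to-pay and happy to unlikely-to-pay and unhappy over the last two turns, could be sketched like this. The per-turn scores and the threshold logic here are assumptions for illustration; in practice such scores would come from conversation models, not be hand-set.

```python
# Hypothetical sketch of a per-turn "shift" alert. Scores are invented:
# a real system would derive propensity and sentiment from a model.

def detect_negative_shift(turns: list[dict]) -> bool:
    """turns: chronological list of dicts with 'propensity_to_pay' and
    'sentiment' scores in [0, 1]. True if both worsened over the last
    two turns of the conversation."""
    if len(turns) < 2:
        return False
    prev, curr = turns[-2], turns[-1]
    return (curr["propensity_to_pay"] < prev["propensity_to_pay"]
            and curr["sentiment"] < prev["sentiment"])

conversation = [
    {"propensity_to_pay": 0.8, "sentiment": 0.7},  # likely to pay, happy
    {"propensity_to_pay": 0.3, "sentiment": 0.2},  # unlikely to pay, unhappy
]
if detect_negative_shift(conversation):
    print("alert: customer shifted to unlikely-to-pay and unhappy")
```

As Paul says, a year of feature work can surface as one line like this in a sidebar; the value is in prompting the agent to stop and pick the next best action.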
And it can be a very stressful situation. The biggest mistake people make is that they don’t engage. If you can engage with people, then you can build that rapport with them and build a way that we’re going to solve this, we’re going to figure it out, we’ll understand how we’re going to move through this. And people just get that sense of: it was okay, the world didn’t end, it was nowhere near as bad as I thought it was going to be. They experience something and realize, actually, I’m going to get through this, and I can get through other things. And then they do get through it. I think if agents understood that the person they helped or met two months ago has now become debt free, and that they played their role in that, that would change things a little bit. Right? You go home and you tell someone: the three people I met and worked with were declared debt free today. And we sent them a card, you know, from all of us. And it was great.
I certainly hope that this whole conversation allows us to do some of those nicer, more human things. It sort of has to be the hope of it, doesn’t it?
It’s already changing. I think you were saying, do I need to look at this? 100%, you need to look at this. This isn’t an “if” and “but”; you’ve got to look at this. The major thing is: yes, the technology is important, but if you’ve got the right oversight, governance and processes, you’ll drive out those issues, you’ll figure that out. Just make sure that this is a part of the process you’re running. And then the next layer: it’s probably going to change some small things in your applications and services first, but then it could gradually become more and more. It’s that snowballing effect. Once it’s in place, and once you’ve got your processes and relationships in place, then it cascades. Again, what we’ve seen is that some processes are like 50/50 automated and human. The ones that are 100% automated disappear for the company, and they have outsized effects. If I could automate that whole process, something fundamentally changes in the business. And I think that end-to-end automation will happen for some conversations, and then that opens up the space to change the nature of the service that you’re giving later on. That’s why I think digitization and AI are going to change the nature of the services that people deliver in the longer term.
Yeah, it’s going to be fascinating to watch. Paul, thank you very much for making the time. I really appreciate it, and we’ve got to make sure we catch up with your podcast, Credit Shift, as that comes out as well. So sign up there too.
Hopefully, there’ll be some interesting conversations to be had.
I’ve no doubt I’ve no doubt. Thanks, Paul. No problem.