Positively Changing Behaviour – Science and AI – [FULL INTERVIEW]

In this interview, Hanif Joshaghani, CEO of Symend, discusses the transformative potential of artificial intelligence (AI) and natural language processing (NLP) in various industries including Financial Services and Telecommunications.

He emphasizes the growing importance of data and analytics in decision-making processes, which can be used to understand and transform outcomes.

Hanif also highlights the evolving role of professionals in data-centric fields and encourages individuals to stay up to date with AI advancements. There is still a big role for people to play, and keeping pace is essential to maximise the opportunity.

Find out more about Symend -> Here.


Key Points

  1. Hanif emphasizes the significance of data-driven decision-making and the role of AI in enhancing it.
  2. AI and NLP can simplify complex data analysis by enabling users to interact with information using natural language.
  3. The conversation touches on the adoption of AI technologies in financial services and collections.
  4. AI has the potential to automate and streamline tasks currently performed by traditional software and consultants.
  5. A prediction that AI adoption will accelerate, making innovation and experimentation easier and more accessible.
  6. The cloud and other technologies are driving increased agility in organizations.
  7. Invest time in learning about AI to remain competitive in a career.
  8. AI will transform job roles but not necessarily eliminate them.
  9. There are parallels between AI advancements and historical technological shifts.
  10. AI automation will require professionals to become strategic orchestrators of AI tools.
  11. The conversation highlights the importance of staying updated on AI developments and trends.
  12. The potential of AI in personalized content creation and its societal implications is discussed.

Key Takeaways

  1. AI and NLP are poised to transform decision-making processes through natural language interaction with complex data.
  2. AI adoption is expected to accelerate, simplifying tasks and fostering innovation.
  3. Traditional software and consulting services may face automation and transformation due to AI.
  4. Professionals across industries will need to become adept at AI to remain competitive.
  5. AI will change job roles but may not necessarily eliminate them.
  6. AI advancements parallel historical technological shifts, such as the transition from ice production to refrigeration.
  7. Professionals should aim to be strategic orchestrators of AI tools.
  8. Staying updated on AI developments is crucial for success in data-centric fields.
  9. AI’s potential extends to personalized content creation, with significant societal implications.
  10. The cloud and other technologies are enhancing organizational agility.
  11. AI adoption is making experimentation and innovation more accessible.
  12. Learning about AI and its applications is a valuable investment in one’s career.

Interview Transcript

0:02
So hi, everyone. I’m here with Hanif Joshaghani, who’s the CEO of Symend and works in the behavioural analytics and contact optimization space. So, Hanif, thanks very much for joining me today, I really appreciate it. And I know it’s very early in the morning where you are.

0:18
No problem, happy to do it. It’s probably one of the few times that it’s quiet here.

0:21
And it’s always good to get up early and get some stuff done, that’s for sure. So I thought it would be good to talk about behavioural science, at least a little bit, as a bit of a primer: what is behavioural science, are we thinking about those kinds of things correctly, and why is it important for a lot of us, particularly in the collections space, which is where I work?

0:44
The discipline of behavioural science — it’s a fairly big academic area, and the applications are quite broad. Behavioural scientists end up working in everything from healthcare, to government jobs where they’re trying to figure out policy and how to design things for a specific outcome with a population, to the military, and large technology companies like Samsung and Apple employ a lot of behavioural scientists too. The applications are broad because, at the end of the day, whatever form the manifestation takes, what you’re ultimately trying to do is apply a set of scientific principles to the way you design the things people receive, so that you can account for the emotional component of how they feel about them, from a human perspective. In healthcare, one of the most common applications is behavioural design in labelling, even the nomenclature for medicine, or nudging people in how they feel about things like sugar or self-help. In government, the most obvious case — the one everybody teaches you in behavioural science — is the back of your driver’s licence. It’s a really big policy one: governments want as many people as possible, if God forbid they have an accident, to be organ donors. The thing is, if you say, hey, would you like to opt in, and the checkbox is to opt in to being an organ donor, you only get something like a 20 to 30% uptake rate. Whereas if you flip it so that people have to mentally decide to remove that benefit from the rest of society, that little change takes you from 20 or 30% to 60 or 70%. So that’s really what behavioural science is about: designing communication, visuals, labelling — anything — with human psychology and human behaviour in mind, so that you can modulate and control for it and optimise towards a certain outcome.

3:08
And I suppose some of the psychology of it has probably been around for a while. So why are we seeing this big surge, particularly in financial services, telecommunications, utilities and so on? Why are we seeing a surge in industries where we haven’t really heard about it before? It feels like it has become an area of interest and a focus, almost like new technology to a certain extent, even though some of the ideas might have been around for a while in academia.

3:35
So I think the patient zero for why you hear about it now is more or less one really important thing, which is the ability to deploy it, from a distribution perspective, in a consistent and scalable way. I’m not sure if you’re familiar with the two sigma problem in education, where they took two groups: one group got personalised tutoring and classwork, and the other group got a very, very good education, but not personalised to the individual and how they want to learn. Invariably, the people who get the personalised experience always end up better educated. Well, the same is true in almost everything. The reason personalisation matters, at the end of the day — if you want to simplify the world, imagine you had nine parameters, and every parameter could be high, medium or low. The different permutations of the way I would receive education, or receive a message, versus you, are going to be completely different. Now, in the real world it’s not nine parameters, it can be significantly more, and there aren’t just three choices per parameter, there are significantly more. So all of a sudden the way you could almost sequence individuals becomes incredibly personalised. And if you can find a very cost-efficient way to personalise to the person — which requires, first, the ability to control all of those variables, and second, to learn and iterate so you can map whatever you’re doing to that individual’s ideal way of receiving the information — then the lift is significantly more. The big challenge when the two sigma problem originally came up was that it’s too expensive to give everybody personal tutoring. So technology, I think, especially with the advent of AI and efficiencies like cloud compute — the ability to personalise in a meaningful, scientific way, not just for the sake of personalisation but using a data-driven, science-first approach, and not just in engagement and collections and the things we’re going to talk about but in almost every field — is going to have a profound impact on the affordability of personalisation in a way that improves how people get their services. And that’s really what’s making behavioural science so important now: it’s one of the guiding areas of learning in academia where, if you productise it now, it’s going to have real-life positive impacts.
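
As a rough editorial illustration of the combinatorics Hanif describes, here is a minimal Python sketch. The parameter and level counts are simply the simplified numbers from his example, not anything Symend has published:

```python
# Illustrative only: counting the personalisation permutations described above.
# With 9 parameters and 3 levels each (high/medium/low), the number of distinct
# "treatment" combinations is 3 ** 9.
levels_per_parameter = 3
parameters = 9
combinations = levels_per_parameter ** parameters
print(combinations)  # 19683 distinct ways one person could receive the same message

# In reality there are more parameters and more levels per parameter,
# so the space explodes, e.g. 15 parameters with 5 levels each:
print(5 ** 15)  # 30517578125
```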

6:27
And you talked about AI — I think we probably met before ChatGPT came out, I can’t quite remember — and the world has kind of changed since then. How do you think that interfaces with behavioural science? Because, as you say, rather than surfing the web, you can actually ask questions of something, which is so much more efficient. It’s almost like personalised search, or personalised information about my account, and the amount of time I have to spend doing something is much less because I can just ask direct questions. But how do you wrap that in with behavioural science? Because it’s also about the psychology of what questions I’m asking and how it responds back. What’s your current thinking around that?

7:16
It’s a great question. This is still developing, so I can only give you what I think as of today. The accessibility of information has significantly improved. I use ChatGPT pretty much every day — I use AI tools every day in my job now. I used to read 25 to 50 pages a night; now all I’m doing is reading about the new products and tooling that’s coming out, because of the speed at which it’s arriving. As far as behavioural science and the way it’s applied is concerned, the accessibility to get to minimum viable good is actually a complete game changer. Because, as you said, behavioural science is a very well researched field of academia, so there’s a tonne of white papers and scholarly work out there that these big foundation models have very likely been trained on — for the models I’ve had access to, I’m sure of it. The thing I worry about, and I think this is going to be true in a lot of fields to be honest with you, is that if you get the science wrong, even a little, the impact could be profound. To go back to my example of the matrix of permutations: personalising within that is super important, but getting one of those parameters wrong because you didn’t have the analytic context — and, when it gave you an answer, you couldn’t tell whether it was a good answer or not — that’s the thing I worry about. It can look like a really legitimate, good answer, but if it’s off, then when you try to put it into practice it could do more harm than good. I think that’s going to get solved — you’re starting to see it in things like Bing, where they’re starting to source everything — but who really looks at the sourcing is what I’m worried about. So I think it’s going to have a very profound positive impact on accessibility, but on accuracy, I don’t think the need for experts is going away, at least not until you can really rely on the consistency and the quality of the response. When you’re applying something like behavioural science to something really important, where the degree by which you can be off is small, you have to accept it for what it is, which is a good starting point. Don’t take it as gospel and the Ten Commandments and go apply it right away — that, I think, is too risky.

10:11
I mean, I think that’s good coaching. We have this thing called spurious accuracy, right? You can calculate, I don’t know, g or pi to a thousand decimal places, but that’s just a calculation — it doesn’t necessarily mean it’s accurate. And if you’ve got it wrong, it might look right, but it might not actually be right. So you’ve got to have that sense check: does this actually make sense? The other thing you made me think of is that it’s almost like interpersonal relationships: if you say something that’s super personalised, it might go down really, really well, but it might also have the opposite effect. Think about it in terms of conversations you have — you’ve got to be careful with that, I suppose, when you systematise it and you don’t have the oversight from a human point of view. So those are important considerations, at least to me anyway. But bringing it back to financial services and telecommunications, how do you see these kinds of use cases getting used in those businesses?

11:11
So look, at the highest level — and I think this problem is going to get more and more acute — one of the things that not just AI but this whole string of innovations has done is really democratise people’s ability to send messages, and to send a lot of them, often. And we live in a world where people are making more decisions every day than ever before — we’re probably up to something like 12 or 13 thousand decisions a day — so by the end of the day you’re pretty exhausted and you’re in a state of paralysis. Being personalised is not a silver bullet, but it’s one of the key mechanisms for breaking through that freeze. So when you think about the application in a world of communication where there’s a lot of noise, and the rate of noise is only going up, again you need personalisation. We always say to our customers: look, the goal is not to send more messages, the goal is to send fewer — to have the right conversation and get the right outcome with as few interactions with your customer as possible. Because the more messaging you load onto them, the more you’re exacerbating that digital fatigue and decision fatigue; it becomes white noise and it becomes less and less effective over time. So the ability to leverage behavioural science to trigger a better emotional response sooner, so the message doesn’t get washed away in the noise, is super important, and I think that problem is going to become a bigger one over time. A lot of these companies have spent a lot of money building massive tech stacks that sometimes look more like a Frankenstein thing — different leadership, different groups, different sets of technology — and it’s really hard to rip and replace. It takes years to do it sometimes with these orgs, so they just keep patching things on top. What you end up with is a lack of agility: they just send more messages, they’re not able to be agile and learn and iterate, some of those things are not at their disposal, so they basically just load the customer up more. Do that across lots and lots of people and the need to be specifically and personally relevant just gets bigger and bigger. So I think spamming is going to become a bigger and bigger problem, messaging is going to become a bigger and bigger problem, and companies that can break through and have the right conversation with their customers with fewer touches — that, I think, is going to be the key to success.

13:56
So I suppose we’ve always had this concept, certainly in the telephony world, of first point of contact: you want to try and get things resolved on the call. And what you’re saying is, don’t just look at telephone calls, look at all the different communication channels you’ve got, and try to minimise them. It’s a great metric, isn’t it — the number of contacts to resolution — and how do you reduce it?

14:21
You have to, because what you’ll see — and a lot of people don’t measure these things — is that the rate of engagement, if you’re not being smart about this, will drop overall, and then all of a sudden you’re basically eroding it. If you think about engagement effectiveness, the reduction in the effectiveness of your contacts with the customer is effectively a write-down on the technology investments you’ve made. So it’s really important to be thoughtful about the way you do it.

14:51
I think with digital technology — with digital contacts — we’ve been lulled into almost a false sense of security, or a false equivalence I suppose, which is that it costs so little to do another contact that people just put it out there, because the return is probably there. But I remember the first time we chatted, we talked about it almost like a decay curve of contact: every time you make the next contact it’s less effective, and less effective, and less effective, which is just what you were saying, right? It’s almost like building up a tolerance to a drug.

15:21
So it’s an interesting point you’re making. I’d say there are two problems: the long problem and the short problem. In the short term, the decay rate is going to happen no matter what, because people get habituated to messaging. But the more messaging you do, the more you load them up, the faster that decay rate gets. So by taking a kind of shotgun pattern — just trying to throw everything at it — you’re actually exacerbating the decay problem. And the problem was already getting pretty bad because of the forces I highlighted, including all the spamming and everything else they’re getting in the market. Plus, we’re living in a world where, for the first time, the bulk of the workforce was born into social media, and that has a pretty big effect: when you’re born into it, it almost remaps your brain in the way you receive digital information. So that half-life is accelerating, and you shouldn’t try to make it worse. That’s the immediate thing. There’s actually a bigger impact that a lot of these companies don’t measure — not correctly, anyway; I’ve seen maybe one that does — which is: can you draw an attribution between the customer experience and the long term? In the near term, what they measure is the immediate thing: did they open this, did they pay the bill — they’re worried about this journey, this outcome. What they don’t think about is the survival rate of customers over time as you deliver a very poor experience or, alternatively, a very positive experience — the lifetime value. I always use this example: it’s like waves on the shore, reshaping the rock little by little over a longer window of time. Most people’s metrics and KPIs are very short: getting to the payment today, or getting them to buy something today, is really, really important; the long-term way your messaging strategy is shaping the way they think about you is not. Think about it as if the company were a person. Let’s say I’m Acme Telco and you’re my customer, but let’s personify the telco as a human being: if I badgered you every day and treated you poorly over the long run, the next time you wanted to go get a beer, I’m probably not your first phone call. That happens almost subconsciously, and people don’t measure it properly and don’t track it properly. We’re big believers in it: the long-term improvement of that relationship has an incredible impact. But in order to actually care about that, you need to think about long-term shareholder value and long-term brand and relationship value between customers and the organisation. Most of these companies aren’t scorecarded like that, to be honest.
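
To picture the “decay curve of contact” being discussed, a toy model can help. This is an editorial sketch, not something from the interview: the exponential form, the function name and the numbers are assumptions chosen purely for illustration.

```python
import math

def expected_response_rate(contact_number: int, base_rate: float = 0.10,
                           decay_per_contact: float = 0.35) -> float:
    """Toy model of contact fatigue: each additional touch in a short window
    is assumed to be exponentially less effective than the last.
    Functional form and parameters are illustrative only."""
    return base_rate * math.exp(-decay_per_contact * (contact_number - 1))

for n in range(1, 6):
    print(f"contact {n}: expected response ~ {expected_response_rate(n):.3%}")
```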

18:18
I mean, some of that stuff comes up over here when we talk about vulnerable customers, or customers getting into financial difficulty. It’s almost like a marketing problem: how do you prime the messaging so that when they do get into difficulty, they know what to do and see you as a trusted partner? How do we best think about measuring that? Because it always becomes a question, right: I’m going to spend all this money over here and I’m not getting anything back — even though you’re saying, well, you probably are, it just might be in six months’ time.

18:45
Yeah. So that’s where — look, this is one of the things we look at: the impact you’re having, not at any one level. The smallest unit of measure is the impact you’re having with this outreach. The next unit of measure is the relationship you’re having with the customer across this journey. And the next unit of measure is the way your experiences overall are impacting that customer, and that cohort of customers and all the look-alikes, in their relationship with every decision they have to make about your brand. That last piece — you have to measure it, and it’s doable, because what you’re looking at is the attribution between what I do when I zoom all the way in, the next level out, and the next level out again, and then you measure those impacts over time.
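
A minimal sketch of the three “zoom levels” of measurement described here — single outreach, customer journey, then cohort — might look something like this. The class names and fields are hypothetical, chosen for illustration rather than taken from any real system:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Outreach:
    engaged: bool   # did the customer open/click/respond?
    resolved: bool  # did this touch achieve the journey outcome?

@dataclass
class Journey:
    customer_id: str
    outreaches: list[Outreach]

    @property
    def touches_to_resolution(self) -> int | None:
        # number of touches needed before the outcome was achieved
        for i, o in enumerate(self.outreaches, start=1):
            if o.resolved:
                return i
        return None  # journey not resolved

def cohort_view(journeys: list[Journey]) -> dict:
    # zoomed-out view: how the cohort behaves across whole journeys
    resolved = [j.touches_to_resolution for j in journeys if j.touches_to_resolution]
    return {
        "resolution_rate": len(resolved) / len(journeys),
        "avg_touches_to_resolution": mean(resolved) if resolved else None,
    }

journeys = [
    Journey("a", [Outreach(False, False), Outreach(True, True)]),
    Journey("b", [Outreach(True, True)]),
    Journey("c", [Outreach(False, False), Outreach(False, False)]),
]
print(cohort_view(journeys))  # {'resolution_rate': 0.666..., 'avg_touches_to_resolution': 1.5}
```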

19:37
I mean, all of this requires quite a lot of data, I suppose, and understanding and infrastructure to be able to do it. What kind of infrastructure is needed, where are the gaps now, and how are you helping to plug those gaps?

19:52
That’s a good question. On data: if you look at five or ten years ago, third-party augmentation was the biggest thing — people buying data from data vendors, building profiles and things like that. Now you’re seeing — Europe is actually leading the charge, and the FCA is one of the most progressive regulatory bodies out there — but everybody else is going to follow. The world in general is not going to become less regulated when it comes to privacy and consumer data; people might be at different points on the path, but the world is becoming more restrictive. That’s our belief, at least. So with that belief as an underlying ethos and guiding principle, there are really only two types of data you can use. First-party data about the customer: their financial transactions, all the things they’re doing with you specifically. And then, in the world of engagement, second-party data that you’re gaining from the telemetry you’re measuring and recording as you run the campaigns. Within that telemetry you still have two sub-buckets. One is what we call engagement telemetry: where are you spending time, where are you clicking, where are you opening, how do you navigate through things, what time of day is this all occurring — getting the richness of how they’re interacting with the communications, not one node at a time, but profiled as journeys, and journeys over journeys, so it’s a kind of three-dimensional system. The second type of second-party data, which I think is pretty important, is tagged information. If you’re using behavioural science and you’re building a gradient — where you use one behavioural strategy on this message and, depending on how they interact with it, dynamically go this way or that way with another behavioural tactic — that dynamic synthesis of the science into the campaign needs to be tagged. If you can tag that, combine it with the engagement telemetry, and then map it — in an abstracted way, of course — onto the first-party data you have, and you think about the speed of that measurement coming in and being refreshed with every interaction, then all of a sudden you have personas and profiles about you, specifically, Chris, and all the look-alikes that look similar to you. The look-alike profiling and clustering can evolve over time, and all of a sudden you have a really rich way of understanding your customers, which you can then translate into how to interact with them.
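
As an illustration of how the data types described above could hang together — first-party data, engagement telemetry and tagged behavioural tactics joined into one per-customer profile — here is a hedged sketch. The record shapes and field names are assumptions for illustration, not Symend’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FirstPartyData:              # what the customer does with you directly
    customer_id: str
    balance: float
    days_past_due: int

@dataclass
class EngagementTelemetry:         # how they interact with communications
    customer_id: str
    channel: str                   # e.g. "email", "sms"
    opened_at: datetime | None
    clicked: bool
    time_of_day: str

@dataclass
class TacticTag:                   # which behavioural tactic the message carried
    message_id: str
    customer_id: str
    primary_tactic: str            # e.g. "social_norm", "loss_aversion"
    secondary_tactic: str | None = None

def build_profile(first_party, telemetry, tags):
    """Join the three sources into one per-customer profile that downstream
    clustering / look-alike models could consume (sketch only)."""
    return {
        "customer": first_party,
        "engagement": telemetry,
        "tactics_seen": [t.primary_tactic for t in tags],
    }
```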

22:34
What about the speed of the feedback loop of these algorithms? You talked about tagging different behavioural routes — one might be more successful than another, and it’s going to change over time with customers or cohorts. What about the speed of the algorithm in terms of recommending what the next step is? How quickly does that evolve?

22:56
It depends. I always tell people that the speed at which you can evolve a system like this is a byproduct of its complexity. Think about the number of different components now in the equation you’re optimising: you have segmentation, then the campaign, then all the logic and the conditional stuff that goes into it, then the tactics being synthesised into the scripts — you can even have more than one, primary and secondary, and the way they’re placed together — then the gradient of how they’re rolled out over time within the campaign, and campaign over campaign. It becomes a pretty big system. This is where AI is super helpful, because at the end of the day it gives you the ability to surface insights about what’s working and what’s not, so you can learn fairly quickly. That’s really one of the key applications of AI: once things get quite complicated, being able to manage the complicated in a user-accessible way. I will say that, as we take on these large datasets, you’ve got to be very mindful about not trying to kill a fly with an elephant gun. You can build a platform that is insanely complex, but it’s about the way it’s configured and set up. What I’ve always recommended to customers is: think about the customers you have, think about what you’re trying to achieve, think about how many customers you have, because you want to be able to separate signal from noise. You want to strike the right balance in terms of complexity against the population, so you don’t end up with more noise than signal, basically.

24:38
Yeah. And I suppose, to a certain extent, if you look at social media as an example, they use some of these feedback algorithms. But there’s a difference between doing it at scale in the consumer world versus doing it in financial services, which is a slightly different scale in terms of the number of segments you can have, I would imagine. So how do you take that into account? Because if you have too many segments, you end up with segments of one, or with no population in them, and it becomes kind of meaningless — you get into those accuracy issues if you don’t have the data to reinforce it.

25:11
Yes, that’s kind of what I was suggesting: you can introduce hyper-personalisation inside a cluster. You can use scripting attributes that basically say, hey, for this message, pull in this very specific, unique information about the individual, look back at what they did in the past, and allow that to inform what they see today. So you can do that — depending on the platform, of course — without creating segments of one. But what you’re talking about is absolutely right. Say I give you a system where a bunch of risk segments come in, and then we’re sub-segmenting within them from a behavioural and engagement standpoint. Now we have a matrix. Even if I think small — let’s say there are five risk segments, and across those we have five personas — now we have a 25-cell grid, and within that there’s a whole bunch of dynamicism. Well, if you only have, say, 10,000 delinquencies a day and you try to put them through that, you’re not going to learn much — or it’s going to take you a long time. So if you want your learning, and your granularisation against the population, to be attributable in a data- and science-first way, you need to start smaller. And it’s so amazing: we go into companies and they’re so sure about what they’ve done in terms of risk, and I ask, have you ever run a test to see if you could collapse these two segments? They look awfully similar. Did you try doing something different on engagement against them — would it even make a difference, would it be better, would it be worse, do you have a handle on this? Oftentimes they’ve been sold really, really complicated things out of the gate by whatever vendor, and they haven’t got to the world of complicated incrementally over time — they’ve just implemented something very big upfront. We’ve had examples where we’ve said, hey, I think you can collapse these 15 segments into nine and rebuild, and you won’t be any worse off. And we’ve been right.
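
The back-of-the-envelope arithmetic behind this point can be made explicit. The grid sizes and daily volume below mirror the example in the answer; the number of variants and the per-variant observation threshold are illustrative assumptions, not quoted figures:

```python
# Does the population support the complexity of the grid?
risk_segments = 5
personas = 5
cells = risk_segments * personas          # 25-cell grid
daily_delinquencies = 10_000

per_cell_per_day = daily_delinquencies / cells
print(per_cell_per_day)                    # 400.0 accounts per cell per day

# If each cell also splits across, say, 4 treatment variants, and you want
# roughly 1,000 observations per variant before trusting a read:
variants = 4
observations_needed = 1_000
days_to_first_read = observations_needed / (per_cell_per_day / variants)
print(days_to_first_read)                  # 10.0 days per cell just to get a first read
```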

27:23
But I suppose that’s the value of experience, and of having that human oversight and lived experience from a science point of view — in terms of materiality and that kind of sense check, to a certain extent. You’re making my head explode a little in terms of measuring the effectiveness of all this, because you’ve got all this complexity going on, you’ve got the data, you’ve got a wealth of attributes just looking at interaction data in a contact channel. How do we best think about reporting on that, particularly around evidencing whether something was a good outcome or not — did it actually achieve the outcome? What are the tools and techniques you think about in that area?

28:02
Well, the simplest thing is to do a bunch of A/B tests, but that was invented 20 years ago — it doesn’t get you very far. The challenge you run into with A/B testing and things like that is that this is an interdependent journey, a sequence of nodes that are highly interdependent and correlated with each other. So what you do here may look good, but what are the downstream impacts, and how does it affect the next cycle, and things of that nature? What you’re really optimising is a multivariable equation with interdependencies. So, for example — and oftentimes we have to partner with our customers and coach them through some of this — what they’ll end up doing is making decisions like, hey, I’m going to change x because I want y outcome. But x doesn’t exist in isolation from the rest of the system. You may achieve y, but the downstream impacts, if you care about these other things, could be materially worse. So what we’ve always tried to do is build a business case exercise with the client: all of these variables matter, and they matter in a certain proportion, a ratio. Then you can define those ratios for the optimisation equation you’re looking for, and every permutation, every node, every iteration is evaluated against that exact equation, rather than any one variable in isolation.
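
A minimal sketch of the kind of weighted “business case” equation described here — scoring every variant against an agreed blend of outcomes rather than any single metric. The metric names and weights are illustrative assumptions only:

```python
# Agreed outcomes and their ratios; negative weights mean "lower is better".
# All metrics are assumed to be normalised to a comparable 0-1 scale beforehand.
WEIGHTS = {
    "payment_rate": 0.5,
    "cost_per_resolution": -0.2,
    "complaint_rate": -0.1,
    "repeat_delinquency_rate": -0.2,
}

def composite_score(variant_metrics: dict[str, float]) -> float:
    """Weighted sum across the agreed outcome ratios."""
    return sum(WEIGHTS[k] * variant_metrics[k] for k in WEIGHTS)

variant_a = {"payment_rate": 0.42, "cost_per_resolution": 0.30,
             "complaint_rate": 0.05, "repeat_delinquency_rate": 0.20}
variant_b = {"payment_rate": 0.45, "cost_per_resolution": 0.55,
             "complaint_rate": 0.04, "repeat_delinquency_rate": 0.22}

print(composite_score(variant_a))  # 0.105
print(composite_score(variant_b))  # 0.067 -> a higher payment rate alone isn't enough
```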

29:38
You’re making me think a little about expert systems, or I suppose expert policy rules. You have expert rules and you have data-led rules, and at what point does it become a conflict between what the data says and what the expert — which is how we humans basically think the world works, almost like a different type of AI — says? How much of a conflict does that become, when the data says this but our opinion says that? Can that be a challenge in terms of implementing some of the best strategies?

30:06
I would frame it the other way, which is: look, the human can have a hypothesis based on the data, but they shouldn’t be able to just go with their judgement and their hypothesis unless the data validates it. That has always been our policy, and I think having people on top of this is an important guardrail. One thing we haven’t talked about is the power of generative AI, not in terms of generating text — that’s the easiest example of what you can do with this stuff — but rather for being able to prompt and query your information. If you have a large dataset, and you’re trying to build a multivariable equation, optimise it, introduce AI into it and so on, and you put gen AI on top of it, then the language you use to interface with your information is no longer SQL or whatever — it’s English. You can create the equations in the English language, because it can convert that into syntax, and effectively you can create scripts using plain English. Those scripts can define the things you’re optimising for, and you can daisy-chain these components together and have single-point control over a sequence of components in a large system. That’s where we get really excited: I think doing the complicated thing will become more accessible for people who are trained properly. The key thing is, you’re still going to need people who are trained properly. I don’t think you’re going to be able to — at least today, and as I said at the beginning of the conversation, these are my views at a point in time and they may change — but I don’t think you can get rid of the people. What people can do, and do quickly, at massively lower cost — there has already been a seismic shift in that, and it’s going to get even bigger — but you can’t get away from human judgement.

32:09
I kind of see it as adding an extra arm to the human, to be able to do more. It takes away some of the stuff we don’t actually want to do — some of the more mundane things, or things that would take a lot of thought — and gives you a bit of a lift up, to a certain extent.

32:24
Yeah, exactly. The basic examples I have are things like analytical work — especially if it’s multi-component in nature — that used to take a team and days. It’s not easy to do, and that’s still the paradigm in most companies: I’ve got to go through this thing over here that pulls the data. We have clients like this, where the state we walk into is siloed components, a lot of data scientists and a lot of analysts. The way we view the world — and the future state, which I believe Symend will be a part of — is that reducing the friction of doing that job significantly improves the likelihood of the job being done well and generating value. The minute you have a lot of friction in it, the minute you have complexity, you may technically be able to do the job, but as soon as you get busy, it falls apart. The more coordination and effort something requires, the less likely you are to do it well.

33:33
Yeah. And so, in terms of the state of adoption, particularly in financial services and some of the collections areas, where do you see the opportunities for adopting some of these technologies — behavioural science, behavioural scoring, but also some of the contact and data capabilities? What’s your sense of how developed it is and where we’re going from here?

33:57
In terms of the data?

33:59
The data, but also the science — putting these kinds of tools and techniques into the industry. Where are the opportunities?

34:06
You know, we’re at a really cool point in time, honestly. Yes, budgets are tight and all of that, but I’ve had to do a lot less convincing recently that this is important. When we’re trying to evangelise for a problem, the first thing we do is convince people that this is a problem they should care about, and here’s our view of how that problem can be solved or incrementally mitigated. The first part is making them believe — and it feels like I’m preaching to the converted most of the time these days. It’s a much easier journey, so that part is getting better by the day, and we’re getting more and more buy-in; in fact, it almost feels like they were waiting for us to arrive. The part I still think is challenging for big orgs is that there’s just so much they could do, and they’re moving in such a cautious manner that it’s going to take time for them to do the transformation and do it properly. I also hear about lots and lots of companies — and I’m not trying to make this a plug for Symend — where the typical enterprise software solution, from a company that built their software 10 or 20 years ago, is a 12- to 24-month implementation and millions of dollars upfront just to turn the lights on. That is a big, big decision, it takes a long time, and it fails half the time — they run out of money. Just in the last year I’ve seen a bunch of these things stall out and get written off. There’s one company that bought another piece of software a couple of months — one or two quarters — before they bought Symend. By the time they had bought Symend, we had turned it on and a year of us delivering value had gone by before they even turned the other thing on. With these old-school software companies, somebody is getting paid off of that: consultants, professional services, systems integrators. I think a lot of that work should be automated away, and we are, and are going to be, incrementally more at the point where a lot of the hard stuff — especially the foundations, your data platforms, everything else — is solid, and you’re building data-centric applications on top of it correctly. I think one of the biggest unlocks is going to be the speed at which people can test, try and innovate. Hopefully, in the future, trying things is not going to be such a big decision anymore, because I think that’s the way companies should be built — they’ve just lost some of their agility over time. And I think things like the cloud and platforms like that are going to solve a lot of it.

37:19
I mean, we’ve seen huge amounts of change, even in the last four or five years, with the cloud and the ability to adopt and put things into play — the speed of acceleration. And what I’m hearing is that, based on some of the technology from just the last year, it’s probably going to get even faster, so we can trial and try out things even faster as well.

37:37
That’s exactly it. We want to live in a world where onboarding Symend is light years faster than anything else — and it is — but it’s still slow; the reality is I want it to be even faster. I want to be able to tell every major enterprise: hey, for no cost and no friction, you can switch this on and try it. If you like it, great; if you don’t, that’s okay too. That’s the world we should live in, where friction is not a moat — technical friction, I mean. Because if the only reason you’re doing business with someone is that it’s too hard to do business with other people, that is going to get fixed. A lot of big companies have been built that way, but I think that’s the worst kind of friction: the kind where I have no other choice, rather than being excited and delighted to work with you every day.

38:29
Yeah, I’ll write that down: friction is not a moat — I like that. So, last question: if we had the same conversation in five years’ time, what do you think we’ll be talking about? Where are we going from here?

38:41
The part I’m most excited about is the applications of gen AI beyond just scripting — beyond research and all the obvious use cases. The bigger use case is the one I just mentioned: the way it’s going to change how we can interface with data, create equations, pick models and apply those models to data to solve problems and execute workflows. That piece, where scripting becomes unnecessary and the English language becomes the way you interact with complicated information — I think we’re in the infancy of that, honestly. I work with the tools every day right now, and we’re not even in the first inning yet. We’re very, very early and it’s already transformative, and I think it’s going to be more transformative still. Anybody who’s working in the fields of data analysis, data science, engagement — anything where your job is to make decisions and a lot of data is required as input into how those decisions are made — I would encourage anybody listening to this to spend an hour a day learning about AI and trying things out. As a hobby on my weekends, I’ve just signed up for an AI filmmaking course. My job has nothing to do with AI filmmaking, but they’re able to personalise commercials to the individual and render them in ten minutes, and I thought, well, I’ve got to learn that, because I think it’s going to have huge implications for society at large. So I just encourage you: if you’re passionate about it, you’ve got to get your hands on it, you’ve got to try it, you’ve got to work with it. I think every professional’s job in a traditional white-collar business, like a bank or a telco, will be different in ten years. No one will be exempt from needing to be an expert at this stuff to continue to be successful in their careers. Learning it is really, really key.

40:46
So part of the optimism of that is that the jobs are going to change, not necessarily go away. It’s going to be more user-friendly than it was before — you just need to understand how the dynamics work, to a certain extent.

40:57
Yeah — again, you know, 100 years ago the number one industry in the US was making ice, and then the fridge came along and just blew that out of the water. But the people didn’t disappear; they did other things — they made fridges, they did cooler things. So I’m excited. Look, I can only speak to the people in my circle, but most people I talk to who are involved with data are excited that all of this low-level work is going to get automated, and that there are going to be puppet masters and master orchestrators. The biggest thing it’s going to require is that you’re strategic, because being the orchestrator is what’s valuable. If you can make that mental shift, then your productive velocity is going to be better than ever.

41:43
Well, Hanif, thanks very much for the time, I really appreciate it. I know it’s early in the morning where you are and you’ve got calls coming in now, so I really appreciate you taking the time. It’s fantastic — always interesting to chat with you, I love our conversations. So I really appreciate it. You’re very welcome. Have a great day. Okay, thanks. Cheers.

#Symend

