Decisioning, Data: Using the Right Tools – [FULL INTERVIEW]

In this conversation, Adi Hazan, from Analycat, explores the intricate dynamics of decision-making in the data-driven world.

The discussion around AI and its impact on various industries delves into the evolution of decision logic, the human brain’s role in AI decision-making, and the challenges posed by data overload and bias.

It also covers the environmental and operational costs associated with large-scale AI systems, emphasizing the need for a balanced approach that combines AI capabilities with human expertise.

Find out more about Analycat -> Here.

Key Points

  1. Evolution of Decision Logic: The conversation starts with an exploration of how decision-making methodologies have evolved in the AI and data space.
  2. Human Brain vs. AI: A comparison is drawn between human decision-making processes and AI methodologies.
  3. Data Overload: The speakers discuss the challenges and noise created by the ever-increasing volume of data.
  4. AI Bias and Data Quality: The risk of bias in AI, especially with AI-generated data re-entering the system, is a significant concern.
  5. Environmental Impact of AI Systems: The discussion highlights the substantial energy and resources demanded by large-scale AI systems.
  6. AI Predictions and Reliability: There’s a level of skepticism around the accuracy and reliability of AI predictions.
  7. Human-AI Collaboration: The need for a balance between AI efficiency and human judgment is emphasized.
  8. Challenges in AI Implementation: Practical difficulties in implementing AI solutions and achieving ROI are acknowledged.
  9. Future of AI: Predictions and expectations about AI’s evolution and its societal impact are shared.
  10. Role of AI in Decision-Making: The conversation underlines AI’s role in modern decision-making, emphasizing its potential and limitations.
  11. Critical Approach Towards AI: A need for a more critical and balanced approach towards AI-generated outcomes is highlighted.
  12. Media Influence on AI Perception: The influence of media in shaping public perception of AI and its capabilities is discussed.

Key Statistics

  1. Electricity Consumption by Data Centers: Data centers globally are consuming more electricity than the entire United Kingdom.
  2. ROI Challenges: Over 90% of AI projects are currently failing to deliver a return on investment.
  3. Server Efficiency: In one retail deployment, a single server running two hours a day replaced an estimated 200 servers.
  4. Energy Efficiency in AI: New IBM infrastructure can reduce energy consumption significantly, with one new IBM core equating to approximately 60 normal cores.
  5. AI Project Failures: The high rate of failure in AI projects and initiatives points to widespread disappointment across the industry.

Key Takeaways

  • The evolution of AI and decision-making methodologies marks a significant shift in data-driven industries.
  • Understanding the differences between human and AI decision-making is crucial for effective implementation.
  • The increasing volume of data presents both opportunities and challenges, requiring careful management.
  • Addressing AI bias and ensuring data quality are essential for reliable outcomes.
  • The environmental impact of AI systems is a growing concern, necessitating energy-efficient solutions.
  • Skepticism and critical analysis are necessary when evaluating AI predictions and models.
  • A synergistic approach that combines AI and human expertise can optimize decision-making processes.
  • Practical challenges in AI implementation and the struggle for positive ROI are notable industry concerns.
  • The future of AI is both promising and uncertain, with potential societal impacts.
  • The media’s portrayal of AI influences public perception and expectations.
  • A critical, balanced approach towards AI-generated outcomes is essential for success.
  • The conversation urges a return to simplicity and strategic investments in technology, avoiding unnecessary complexity.

Interview Transcript

Hi, everyone. I’m here with Adi Hazan, who’s the owner of Analycat, and you’re in the decisioning space, in the data space. Adi, thanks very much for joining me. My pleasure. So we originally met talking about one of your products, a bot named Sue, and we were talking about decisioning methodologies. We had quite an interesting conversation looking at how you almost have this decisioning space. I envisioned it almost like a surface, a decisioning surface, where you have different services, different minimums and maximums in terms of what a decision might be. What have you seen in terms of the evolution of decision logic? What are the complexities of decision logic within the industry? Has it evolved, or do you think there’s a big opportunity there?

Look, we obviously thought there was an opportunity, otherwise we wouldn’t have built it. That space you’re talking about is actually a cluster of neurons in your head. So the goal of our product is literally to copy the way a human being takes a decision. We’re not as data-centric as our colleagues. Unfortunately, although everybody feels their data is more valuable than oil, your data always comes from the past, and your business has to move into the future. It’s only a human brain that extrapolates. And what happens with experts: if I start to say, let’s do a podcast on this, we’ll link it to that, you would almost see it in your head; you use your spatial neurons. What Sue does is sample your spatial neurons and try to set up a little set of neurons, 20 or 30 of them, to think like you, whereas most of the current methodologies look at big data from the past, hundreds of thousands of records normally, for a start, and try to model what was done. So I think decisioning, and using AI to take decisions, is happening already, all the time, to varying levels of success. Everybody who’s used a computerized chatbot when they’ve tried to contact their insurance or phone company can attest to some of those levels of success being a little lower than they should be.

And we’re in this world of massive amounts of data, and it’s almost like the data just keeps on increasing. Do you think that’s causing problems, almost like getting lost in the noise? Because you need the data to be able to come up with statistical decisioning, versus, I suppose, humans being one step removed; we do that filtering first. Is that how you see it? Absolutely.

Most of your data is repetitive. If I look at, let’s say, IoT data, things work 99.9% of the time, meaning that 99% of your data could be replaced with one sample. Unfortunately, with most learning algorithms, you never know which one that is. And the rarer an event, probably the more important it is. Going back to IoT: if your device is on fire, it does not send any data, but that’s probably your most important event. So the absence of data is probably your rarest and most meaningful event, and it’s probably never handled. So with the proliferation of data, people are looking for more and more rare events. And it’s much like words: if you use a rare word, it’s normally because you’re trying to convey a very specific meaning. Same thing with data, but flooding yourself with a million books doesn’t give you that specific piece of information. And the cost of this data is flying through the roof; the data centers in the world are using more power than the UK. It’s a big expense, a big environmental expense, a big managerial expense, and it requires more and more expertise to handle these volumes. Finding a needle in a haystack is a skilled job. If I have a million data points and one of them is causing a problem, I need some really specialized people to find it.
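His point about the absence of data can be made concrete with a small sketch. Rather than learning from the readings a device does send, a monitor can treat prolonged silence itself as the alarm condition. This is an illustrative example, not Analycat’s product; the device names, interval, and threshold are all hypothetical.

```python
# Hypothetical heartbeat monitor: the most meaningful "event" is silence.
HEARTBEAT_INTERVAL = 10.0   # seconds a healthy device waits between pings
MISSED_LIMIT = 3            # heartbeats missed before we raise an alarm

def check_devices(last_seen, now):
    """Return devices whose silence exceeds the alarm threshold.

    last_seen maps device id -> timestamp of its last heartbeat.
    A device that is on fire sends nothing, so a model trained only on
    the readings it did send never sees the failure; only the gap does.
    """
    alarm_after = HEARTBEAT_INTERVAL * MISSED_LIMIT
    return [dev for dev, ts in last_seen.items() if now - ts > alarm_after]

last_seen = {"pump-1": 100.0, "pump-2": 128.0}
print(check_devices(last_seen, now=131.0))  # pump-1 has been silent for 31s
```

The point is structural: the rule fires on what is not in the data, which no amount of training on the 99.9% of healthy samples would surface.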

I suppose there’s a bit of a balance there between using human evolution to know where to look for the needles in the haystack, which I suppose is part of what we’d do with expert systems, versus finding new needles that we’d never discovered before in the massive data, which is the big data approach to a certain extent. How do we think about the blend of the two? Because certainly, the first side is going to cost a lot less money; it’s what we’ve done for millennia. Is there an equation there around what the cost is, and what the cost-benefit is of finding those extra insights? Because basically, you have to retrain evolution to come up with new expert insights, to a certain extent.

Chris, I think it’s a bit of a myth that’s being perpetuated, that this thing can find needles you didn’t know were there. If I had to describe AI in a nutshell, metaphorically: imagine you were in a room, and people were handing in Chinese characters, and you had a list of rules to tell you which characters to hand back. There’s absolutely no space for innovation and creativity. If I design an AI that will look at a picture and tell me if I should build in brick, glass or concrete, it’s absolutely impossible for that AI to say, hang on, what about a combination of brick and concrete here, unless I’ve told it that’s one of its options. There’s no framework, even theoretical, even quantum at the moment, that will give you what we call in the industry a counterfactual, that will tell you something you didn’t tell it. So there’s this myth being perpetuated that we drop the data in and it will suddenly tell us things, very often things that run completely counter to common sense as well. And then we run around raving for a couple of months, or a couple of years, until we discover that it’s just not true. I haven’t seen anything that surpasses human understanding yet. I have seen them make mistakes that a human being would never make.

One of the things that concerns me, particularly with some of the recent developments, is bias. Do you think that bias could potentially increase as the data is almost getting dirty? Because we’ve now got AI-generated data going into the data as well, and that’s getting reread. So is there a danger there? We had a set of data, it worked well, but now we’re retraining and retraining with data that might also include AI-generated text, as an example. Could that amplify biases as well? What’s the danger around that, do you think?


Life is getting constricted all the time. At the moment, you have an algorithm telling you which reviews you should read online when you look at the news. You have Netflix’s algorithm telling you what it thinks you will like. Your chances of getting out of those bounds are getting smaller and smaller, and they’re all working in concert in the same way. A nice example: I had this old list of MP3s that I downloaded a hundred years ago, outtakes, rare things, and I thought, I don’t need to carry MP3s with me, I’ve got a streaming service. But 10 or 15 of those songs are not available on the streaming service, because they’re rare outtakes. So what you have is your list of options shrinking all the time. It looks like there are a zillion songs out there, and maybe there are, but any model has to simplify, and to simplify, you have to delete. So we’re definitely in a process; it’s not necessarily bias so much as normalizing all of us. Finding something rare and unusual is becoming more and more difficult. It’s what I call the Pavarotti effect: you know, the guy that sings down the road here in Italy is probably almost as good, but he never clicked with the algorithm, and he will never be famous. In the past, he would have been partially famous, but not anymore. Now it’s all or nothing. If you click on opera, you get the five most famous. You don’t get diversity.

And the reality is we’re probably living in a time where we’ve got more choice and more access to information than we’ve ever had in the history of humanity, to a certain extent. The challenge seems to be, how do you find it? Someone said to me the other day, it’s almost like social media is the filtering mechanism for you to be able to find it, except that the social media mechanism is driven not just by your friends, it’s driven by an algorithm as well. So we’ve got a breadth of choice, but now we’ve got a very narrow presentation of what’s available.

And that narrow presentation is what is going to be most profitable for the presenter. So we’re in a situation now where, initially, Google used to scour the net and didn’t know how it was going to make money, but now it surely does. And I mean, it can’t be that anybody who’s going to watch this hasn’t noticed how the results have become a nightmare. You know, there are the three advertisers who, because they paid for advertising, are on the first page, the third page and the fifth page. And when it comes down to something that’s not monetized, it now says your search has finished here. So we’re being led only where it’s commercially viable: social media, and search, and news.

I started looking at social media in particular around not just what I might be interested in and recommended, but what the people around me might be interested in; you know, I have proximity with people who have similar interests to me. And what does that say about my interests even being a predictor of things that I should look at, or things that my friends should look at, as well as almost second-guessing the algorithms? It used to be that

it was looking at your social friends and all that; now it’s looking at your social friends to see which of the paid advertisers are most likely to land. So that paradigm has shifted in the last 10 years. They’re completely commercialized now, and unfortunately, their interests do not lie in giving you what you like most. That’s changed, which is very unfortunate. So what you’re getting with AI is the absence of customization. People are being normalized into silos of profitability, and you’re being led towards making more profit for them. And do you

think that process, particularly for generative predictive text, the GPTs, the transformer kind of AI, has that normalization fully happened yet? Or are we still in the early stages, do you think? It feels like maybe it’s starting to get commercial, but it hasn’t quite reached the levels of some of the social media companies, as an example.

Look, in a lot of places it isn’t at the level where it’s actually useful. For a start, all of the words it puts out had to be put in first, so copyright is becoming a big issue, and there are countless lawsuits about to take place. Because it can only blend. For example, with pictures, it generates a picture, but it can only blend pictures it’s got. It’s a CPU; it gets numbers; it doesn’t see the outside world like we do. So we have this perception of something new, but it’s a blend. And it’s happening in music a lot as well. Have you noticed how much of music is mashups from the 80s? There was a real burst of creativity in the 80s and 90s, and most of what you hear nowadays will have remixes of that worked in. So it’s happening in the whole of society and the whole of culture. And we don’t have a product that stops that, by the way; it’s just the way things are at the moment.

What I thought was interesting about our conversation was this idea of local minimums. If you just look at these large datasets, it’s almost like everyone falls down, like a gravity well, to the minimum, which is the lowest common denominator. What I took from our conversation is that there are peaks and troughs, other local minimums and other peaks, almost like on this sheet of elastic. And the expert system allows you to find those other peaks; you’re mapping the landscape, rather than just rolling people down the hill, to a certain extent.

So one of the problems that we have solved is that we don’t just absorb data, we ask questions. What we’re able to do is analyze all the possibilities and say, but hang on, I know nothing about this; now what happens? I’ll take an insurance company as an example. The guy in charge of accepting policies, the underwriter: each insurer has a flavor of risk that they like. Some specialize in power stations; they know what they look for in a power station that has made them profitable in the past and is likely to in the future. That guy is not there by mistake. And all the data in that insurer doesn’t include the people that were excluded. So if you’re going to model human decisions, which is what we do, but on a small scale, you need to know also what is not there. And that we have solved. We can’t give you an answer that you didn’t tell us about, but for the first time, we can at least ask you for a data point that you haven’t given us, which is really a bit of a breakthrough. We can also ask: but what happens when there is no data? Oh my God, call the fire engine.
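The question-asking behavior he describes can be sketched in miniature. The field names, risk weights, and thresholds below are invented for illustration; this is not Sue’s actual logic. The point is only that a rule-based system knows which inputs its expert rules need, so when one is missing it can ask for it rather than silently infer from past data.

```python
# Hypothetical underwriting helper: an expert's questions, encoded as rules.
RULES = {
    "sprinklers": lambda v: 0 if v else 40,      # risk weights are
    "turbine_age_years": lambda v: min(v, 30),   # illustrative only
    "flood_zone": lambda v: 25 if v else 0,
}

def assess(policy):
    """Score a policy, or ask for any data point the rules need but lack."""
    missing = [q for q in RULES if q not in policy]
    if missing:
        return {"decision": "ask", "questions": missing}
    score = sum(rule(policy[q]) for q, rule in RULES.items())
    return {"decision": "decline" if score > 50 else "accept", "score": score}

print(assess({"sprinklers": True}))  # asks for the two missing data points
print(assess({"sprinklers": True, "turbine_age_years": 10, "flood_zone": False}))
```

Unlike a model trained on past policies, the rule set makes its blind spots explicit: an absent input produces a question, not a guess.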

I suppose in this sort of big data world, one thing we didn’t talk about was the cost of running things. I know we were talking about expert systems being light on data, and they’re also light on resources, including decisioning resources, because you’re using human-based filtering; the tools make decisions quickly. I think you used an example at a supermarket, wasn’t it, around being able to look at SKUs and run it very fast. But there’s a cost, isn’t there, to using lots of data, and I think that’s coming up in terms of environmental cost; there’s been a lot of discussion around that recently.

So costs are enormous at the moment. And we play: we load up an image-generating program, I won’t name one, and you write “mutant salad banana fish” and up it comes.

Yeah.

But they take thousands of cores, and those cores use tons of electricity. The data centers in the world are now burning more electricity than the whole of the UK, and a lot of companies and a lot of people are becoming very cognizant of this. Ours was designed with this in mind. One of the retailers we worked with worked out they would need 200 servers, and we’re using one server for two hours a day. Now, that’s a reduction in footprint and in power consumption, and an increase in speed; it has a lot of benefits. And it’s very difficult for the larger AI companies to attend to that. The only one that’s really doing anything in that regard, I think, is IBM. IBM have just brought out a new infrastructure which can massively shrink your footprint; we found that one of the new IBM cores is worth about 60 normal ones. So that’s an enormous saving, and what they’ve done, all right, let me not go into it technically, but they’ve done something really good. But not everyone’s going to buy an IBM machine. So I think we’re going to have a couple of waves going forward: a wave where people start to really rebel against the commercialization, and a wave where lighter algorithms are given preference. The best thing to move sand is a backhoe loader. But honestly, if I have this much sand, the best thing to move it with is a shovel and a bucket, and the people who sell backhoe loaders will never say that to you. So if I have 100 cases that need AI, 95 of them probably don’t need big data, probably don’t need a data scientist, and certainly don’t need the algorithms of generative AI. Trying to shoehorn them in is a problem. But again, even with AI, you’ve got very few options. There’s a list of about 40 algorithms that are being used, and innovation is actually quite small. They’re making bigger and bigger ones, faster and faster configurations. But if you look at the state of the art, Nvidia’s H100 machine, it’s a monster, no doubt. But it costs a lot and it’s difficult to run.


So do you think there’s almost a spectrum between this generalized GPT type, which uses huge amounts of data and comes out with the most generalized kind of output, through a sequence of more specialized and maybe lighter models, all the way down to expert systems, which can be very light because they’re based on human intelligence? And really, it’s about finding the right model for the application.

I think diversity has got to come back at all the levels of this thing: different sizes of AIs, different kinds. ChatGPT is amazing; I really take nothing away from those people. But I’ve seen very little useful live use of it. Everybody’s saying, oh, it’s just changed everything. Do you know of anybody who’s changed something with it? Because I haven’t; I’ve yet to see a success case. The stat in the industry is that over 90% of projects are failing at the moment. That can’t go on forever. Everybody’s very excited, there’s a lot of hype. All you hear in the news is this project was launched and that project was launched. You never get that little announcement: this project is finished and it worked. I haven’t seen that.

And what are you finding when you’re out in the market, particularly around decisioning and decisioning systems? Are people ready to adopt? Is the stress around operating expense? What’s been the driver for adoption? Because it does feel like the world is going through quite a stressful time in terms of cost, and people are looking at how to take cost out, to a certain extent.

So there’s a lot of noise underneath the noise. It’s like this: if you have 100 organizations, 95 can’t afford to get started. Of the five that have gotten started, two years ago we would have heard something to the effect of, what, you only need one server? No, we’re going big. And now what I’m getting more and more is severe disappointment. The feeling out there is, we’ve spent all this money, where’s the ROI? I have yet to meet a delighted client; there’s disappointment, and people are now open to alternatives. And I think it’s going to happen to more and more firms, because nobody’s being paid to tell you, hang on, the stuff doesn’t work the way you think. And I’m not saying it doesn’t work anywhere, caveat, obviously; it’s awesome stuff. But it’s very difficult to implement fully and successfully. You need a lot of expensive staff. It’s like a battleship: a navy doesn’t need 300 of them. You need five or six if you want to dominate half the world; the rest of the time, you’re going to need smaller boats, more efficient ways of operating, etc.

Do you think we’re losing our way in terms of this focus on complexity? Because as human beings, we like shiny things, we like complexity, we like to make things more complicated rather than simpler. Are we losing our way on that? Should we just be making things simpler, really cutting stuff back to the core of, this is what we need to do?

It’s tantalizing to have a bigger model, a better model, your own model. Unfortunately, none of the models can tell the difference between truth and fiction; in our industry we call it hallucinating. So whatever happens, you have to check it anyway. I think at the moment everybody’s finding out that you can run these things and then cross-check them with a human, and they’re effective. Definitely, the one thing that’s coming is that everybody’s going to get off this idea of trying to replace the humans with it.

Undoubtedly. I suppose it does help with taking the cognitive load off in certain tasks, right? There are just dangers, or risks, associated with that. And that’s what technology has done throughout human history, hasn’t it? We’re pretty general-purpose machines as human beings, but I use a car because it helps me get from A to B quicker. There are dangers associated with that, but I have to be careful about how I use it. Are we seeing almost a blend between human and machine? Do you think we’ll see that evolving going forward, where we just have to be aware of the dangers? Or do you think we’ll keep doing what we’re doing now?

Utility will take a little time to penetrate; some people are more stubborn than others. But even now, when I drive my car, if I slip lanes on the highway, my car rattles me and tells me, which I think is fantastic, because over long distances you lose concentration, and it wakes me up. And after two hours, it tells me to stop and have a break, and starts moaning at me. So augmented intelligence, I’m all for it. Show me, as an attorney, which cases I might have missed, but let me take the decision and check it. Tell me, if I’m a doctor, about a couple of rare diseases. Computers don’t forget; humans do. Computers don’t understand; humans do. So there are pluses and minuses for both entities, but only one of them is conscious. One is thinking, one is processing. And the combination is killer: if you use AI with human intelligence, it runs like a dream.

It strikes me that one of the real dangers, outside of what we talked about with the data and what that does for generating bias, is the human belief that the computer is always right. We’ve grown up over the last 30-odd years believing that whatever comes out of the computer is right, because we’ve seen computers almost as engineering machines. They’re doing very basic tasks, although they’re complicated, and they’re giving out engineering-type answers. But when we get into these statistical processes, we get into much more of a human-type discussion: well, it’s probably right, or it might not be right. Which is actually a human characteristic: if you tell me something, I might question whether you have all the facts or not. So do we need to relate to computers differently?

Now, there are a couple of things. First of all, interestingly, that has a downside for computers as well: if the computer makes a single mistake, and you can’t fix it, and it keeps repeating the mistake, human beings won’t accept the device or the program. If it keeps doing the wrong thing, it elicits a very bad reaction from us. And then, I think we’re being brainwashed very systematically. Nobody’s being paid to give us the whole story. I was sitting with an attorney; she specializes in GDPR and privacy, and she says, yeah, but did you see, some kid was sick, there was an article, he had a very rare tropical disease, and they typed it all into Google, and Google gave them the right diagnosis and none of the doctors did. Now, we call that neglect of probability. We marvel at coincidences instead of looking at the stats, and nobody writes an article about the 4,000 times that a doctor gets told, but maybe I’ve got this, and the doctor says, yes, I know you did 30 seconds on Google, but I’ve been studying this for 30 years, and actually, no. So nobody advertises the outtakes, if that makes any sense. They advertise the one success, because they have budgets. And nobody’s going to benefit commercially from telling you all the times it goes wrong.


I think the other thing there is that the media, as an example, also likes talking about the outliers. We like talking about the exception that kind of proves the rule; we talk about the extremes. But actually, in medicine, you have to deal with the mean, don’t you; you’re dealing with the mass in the middle, making sure that’s statistically right. Yet the conversation happens around the extremes, and this is happening in politics as well. The extremes get the airtime versus the center, which is really the vast majority; it’s like the extremes are influencing the center, to a certain extent.

And it’s only interesting and newsworthy if it’s an outlier. But nobody is being paid to give you the whole story. The best example: how many years of your life, Chris, have you watched the news and this person comes on to predict tomorrow’s weather? Now, they’re using the most sophisticated computers in the world. How often are they right more than a day out? Very rarely. Yet they come back, cold-blooded, impervious to embarrassment. And we’ve reached the point where we react to it as if it’s true. Oh, it’s going to be a great day tomorrow; let’s have a barbecue. We go, we buy steaks. Meanwhile, it rains. The next night they say it’s going to be bad weather, so we’d better wear our macintosh. And their grip on the future is tenuous. Yeah.

Maybe we need an alternative weather channel where they say, look, we ran these 12 different simulations with different starting points, and the probability of this particular scenario happening is x, and these others are 2x, 3x, 4x, and just talk about the complexity. It might take an hour and a half. They

do a good job; I just wish that they would present it that way. And now they do try: a 30% chance of rain. But you understand, if I tell you there’s a 30% chance I’m going to go and have lunch now, it means I actually have no idea what I’m about to do. And we need to start to take all of it with a pinch of salt. Elon Musk’s job is to tell you that full self-driving is a year away; they’re there to sell cars, right? That’s what they do. So we need to start to buy with a skeptical eye, test things thoroughly, and not accept scope shift: “AI was going to answer all your clients’ questions” becomes “it will only answer client questions if they have these seven words in them.”

What’s the antidote to that for us as consumers? Because it’s becoming more complex, and it’s layered, isn’t it? I think this is another layer that’s being added, where we’re further away from the detail. What’s the antidote to it? Because there’s a competitive advantage around being able to take quick decisions, or having this extra information, or being able to look at legal cases; definitely a competitive advantage. But as you say, there are dangers with that.

I mean, what’s the biggest pressure? Your board calling you and saying, everybody we drink with has spent 4 million on AI and we’ve spent nothing; now, what do you say to your board? And they’re all claiming great benefits, because none of them will tell you the truth. So I think it’s just a painful learning process we’re all going to go through. The stuff I’m saying is starting to become more widespread, especially in high-end organizations. Mathematical modeling has been around for a long time; it’s extremely effective, but that’s all it is. There’s no intelligence; these are mathematical models. They are advancing, but they’re nowhere near where they should be, and I think this is going to be a process of pain. There are a lot of people now making a lot of money off proofs of concept that would never have flown. And I don’t think that’s dishonest; I think they believe that they’re on their way to something. I think Tesla is trying to get somewhere and working really hard to bring it forward, but unfortunately, it’s not as easy as they had hoped. And one of the things we are going to realize is that the human being actually is quite special. The world’s view is, of course, that the problem is the humans, but they’re actually not the problem. They’re wonderful.

Do you think the hype often outruns the reality by about ten years? It feels like maybe that's shortening a little bit. So for example, the hype around the internet at the turn of the century outran the reality of what we see today by about ten years or so. And it just feels like we get a lot of noise around things, but it's actually a little bit more difficult than people think.

I no longer rent DVDs or VHS, I no longer get a lot of physical post, and I have access to a lot more media. How much else has the internet changed for you? Has life as you know it transformed? We're still shopping with our kids, God bless them, bleeding us dry and making us work. Life carries on. And I was sitting with a regulator in one of the countries, and they said, we need to put you in touch with our innovation hub; their job is to disrupt. And I said, I don't know, there are certain words you don't use in front of me, and disrupt is one of them. Because after all this disruption, how much has changed? Yeah. So we're not disrupting nearly as much as we claim. Let's just focus on what we can do, and let's all start to be more realistic. I'm not sure we're ten years away; in fact, I am sure we're not ten years away from human-like intelligence. These things cannot generate anything that was not put into them.

Just an Italian example. If you go round Herculaneum, or Pompeii as well, you realize there how little life has changed, right? In many ways, so much of it is very familiar. Sure, today you'll see some of the finer bits at the higher end, with the cars and those sorts of things; they didn't have the technology. But at the end of the day, life is kind of the same, and it's very relatable. Very, very relatable.

Yeah, you're going to be eating and meeting people and living your life. Yeah. And it's a bit of a question how much weather forecasting has changed life for us. Most of us would look outside and know whether to take something today; most of us still get caught out once in a while. It's just life. Yeah. Now climate change, on the other hand, is definitely changing patterns and things for us. And interconnectivity, things like COVID, I would say that's pretty novel. We need to start making adjustments for those things. We need to start looking for technologies that do less environmental damage. We need to start thinking more carefully about how we respond to certain kinds of threats. On that I'm in full agreement. But with the disrupters, I'm not looking outside and seeing driverless cars while I sip the Martini that my robot made for me. I don't know if you are. No, I'm not, no. It's here, it's a year away; it's been a year away for long enough. Let's rest it now, settle down and say, right, does this work now? Can I get a return on this investment? Is it worth doing for me?

So Adi, thanks very much for making the time. My takeaway from the discussion is that we need to go back to thinking through processes carefully, probably also simplifying them, and then making really strategic investments, making things easier rather than just adding on more complexity. To a certain extent, that seems to be your coaching from this. Undoubtedly there's new maths you can use to look at decisions, but it's about making sure there's real value in all of our processes, rather than getting too excited by it. Yeah, it's too much hype, too little delivery. Fantastic. Thanks very much, Adi.

Alright, thanks for the time. Thanks.

