Cool, all right, you ready to go? Ready to go. Excellent, make sure that we are recording. We are recording. Here we go.

Hello and welcome to episode 59 of Great Things with Great Tech, the podcast highlighting companies doing great things with great technology. My name is Anthony Spiteri, and in this episode we're talking to a company focused on cloud optimization via a platform that reduces cost, optimizes DevOps and automates disaster recovery. They achieve this with an intelligent optimization engine delivering cost-efficient, high-performing and resilient infrastructure for every Kubernetes workload. That company is CAST AI, and I'm talking to Leon Kuperman, CTO at CAST AI. Welcome to the show, Leon. Anthony, how are you, sir? Excellent today, and looking forward to talking about two things that are really hot in today's world: Kubernetes and AI.

But before that, I just want to give a bit of a shout-out to the show. If you love Great Things with Great Tech and would like to feature in a future episode, you can click on the link in the show notes or go to gtwgt.com to register your interest. Just as a reminder, all episodes of GTwGT are on all good podcasting platforms, Google, Apple, Spotify, all hosted and distributed by Anchor.fm. And as a reminder, the YouTube channel is @GTwGTPodcast, so go there and subscribe.

So with that, let's talk about CAST AI. I guess the first thing to talk about, Leon, is your background and the founding of the company, because I think it solves a really interesting problem that a lot of people are facing today and are going to face. But first, let's go back to your background.
It's really interesting; you're a serial startup guy. So give us a bit of background on how you came to found CAST AI. Yeah, well, we can go back really far. I'm a software engineer by trade. My first and only big-company job was when I left university and worked for IBM, on this cool product called WebSphere Commerce Suite; back then it was IBM's introduction into e-commerce. So I was one of the early developers there, and I realized something pretty early on — Anthony, if you live in a family of entrepreneurs, that tug is very strong. As immigrants to North America, my parents weren't entrepreneurs, but my brother is an entrepreneur with a family business, and I had that whole itch to get into a business of my own. The inkling I had there was: it takes a year and a half to ship this shrink-wrapped product from IBM, why can't we just offer it to people as a service? This was before "as a service" really existed. So that was my journey into startup life, and since then I've only really been at one other big company, Oracle, and that was due to an acquisition, which was a lot of fun. But everything in between was focused around my software engineering skills, as a technology leader and also as an engineer, in that startup arena.

So in terms of the space you were focusing on, was it more security focused? Has it always been around security, or has it varied across the previous companies leading up to CAST AI? That's really interesting.
So, no, I didn't start out with a security focus. In 1996 I started out with a specialization in e-commerce: how do you bring product online, how do you sell it, how do you do payment processing. And then I was attacked in one of these jobs, in one of these startups, and it was ransomware before there was Bitcoin — you used to get a ransom note saying, Western Union, five thousand dollars. Oh, that's really old school, isn't it. There you go, the old-school ransomware. So it was a DDoS attack that was affecting our business, and I had all these vendors that I thought were going to protect me, but the technology for protection was still very immature. What I found was, ultimately, we were pretty naked: we didn't have the skill set or the technology we needed to really adequately defend our business. I ended up paying the ransom, finally, after a week of struggling. Oh, okay. Yeah, and I got this interesting email back from the attacker: well, you've been such a good customer, give us a couple of competitor names and we'll attack them free of charge. Obviously I didn't give them any competitor names. But that's what led to the idea of Zenedge, the first cybersecurity company I was involved in founding — we needed to figure out how to protect customers who don't have any expertise in this field, and extend that to web application security in general.
And that kind of started my cybersecurity career, but it came out of a pain point from solving a previous problem. That's interesting. And then, as I understand it, CAST AI was born out of another problem you had at that company, which was the spiraling cost of cloud. So explain how that problem influenced the creation of this next company. Yeah, so when we were building this really good product — I think analysts and customers and the whole ecosystem really liked it, very easy to use compared to some of the legacy stalwarts in the market — we had one problem. We launched the company on AWS, in all regions. It was a reverse proxy, so essentially we would proxy all traffic and pay AWS for the egress, but also for the compute to do all of the threat analysis in real time. So every time we signed a large customer, our bill would jump, and our gross margin was suffering — and with software-as-a-service valuations, as you know, Anthony, gross margin is extremely important. Every month the bill would come in from AWS and we would have this constant debate with the finance team and my CEO: why is this going out of control? I'm trying to add profit to the bottom line. The cost problem kind of went away because it was hidden under the platform, so we never really solved it, but it kept bothering me all through that transition. It was nagging at me as an unsolved problem: I can't be the only CTO in the world having this significant challenge with cost and budgeting. Absolutely. And then, at the beginning of the pandemic, we had the chip shortage, and that's when it really hit home — all of the cloud providers were struggling to provide capacity, and everyone was being asked to do manual efficiency projects to bring down the number of CPUs they were committing to from the cloud. It was all manual.
And I'm like, wow, this is a global-scale problem that we can impact with some critical thinking and intelligence, and that's how the company was born. Yeah, that's really interesting, because it's been a problem in the cloud space, the public cloud space specifically, for the longest time. In fact, if you take it back a few years — and as I've mentioned before, having worked for a service provider in the past — we would sell an allocation pool to customers, and that allocation pool was effectively a best guess at what they thought their total burstable resource would be. Now, we loved that, because 99 times out of 100 they were over-provisioning for what they really needed, and obviously the way you make money in that sense is via over-provisioning and under-consumption, right? So we loved that fact, but it exposed the idea that people don't really know what they need. And then you've got the problem of moving from the physical world to the virtualization world to the containerization world, which fundamentally is still all about computing resources and their consumption, but all of a sudden you haven't just got one IT guy doing the allocation — you've now got different teams: developers, IT ops people, SRE teams. You've got all these people putting their two cents in on what they think the resources should be, so it's a nightmare, right? Let alone when it goes up to the C-levels and someone has to explain why the cost has blown out so much. Yeah, and if you've ever seen a bill from one of these cloud providers, you literally have to be a PhD in economics to understand the thousands of line items on a simple bill, never mind for a complex organization. That's kind of part of the trick of the trade, isn't it: make it as complex as possible so it looks like it's just too hard to actually go through and work out what we may or may not need.
Yeah, and I think AWS particularly suffers from this, and then all of the other clouds have kind of taken it on. They feel the need to charge on multiple dimensions, so you're getting charged for the compute cost, you're getting charged for the egress, the ingress, the load balancer — there are like ten things that make up this one line item, and it's very difficult to understand at the end. Right, absolutely. So I guess we haven't really talked about CAST AI as such — we've talked about the problems and your history — so talk a little bit about CAST AI and what you do, in light of what we've just talked about. Yeah, so we started with this thesis, this assumption: the world will eventually move to containerization, and if that's true, then we need an orchestration strategy that makes sense. There are two narratives around orchestration. The cloud providers, because they always want to move up the value chain, will say: we'll just give you containers as a service, like Fargate or a container instance provider, and don't worry about the control plane, don't worry about all of the management, we'll just take care of it for you. Well, that comes at a very significant price — a price in dollars and in control. And then the other, open-source vision of that is: we have infrastructure as a service, and the clearly winning orchestration platform is Kubernetes. Kubernetes, for those folks who don't know, is a container orchestration platform that allows you to run many containers, in various configurations, across a cluster of computers. It came from Google, and ironically enough, inside Google its predecessor was called Borg. Sorry, I didn't know that — it feels like something I should know, but I didn't. Yeah, and when they open sourced it they changed the name to Kubernetes, which is a Greek word for helmsman, or navigator. So that's why Docker and all of these nautical names — it's all linked together in some sort of "let's take a bit of a dig at the competition" kind of way. Yeah.
So if you assume those two things — and I'm a strong believer that all things will converge to containers and some flavor of Kubernetes in the near, medium and long term — then this platform is really hard. If you've ever run Kubernetes, or ever done the tutorial — there's a really nice tutorial called Kubernetes the Hard Way, if you want to go through the steps of setting it up yourself. Okay, I'll link that in the show notes. It's very difficult to run, as an SRE and even as an application owner. So we have this principle that there are too few people in the industry — on the managed service provider side, on the SRE side, on the dev side — and we need to figure out how to get them out of doing this low-level grunt work on these cloud platforms and elevate them. It's not about replacing them and not having a DevOps person; it's about elevating them to do more creative work at the platform level, and letting the mundane stuff be optimized by the machine, by the computer, because that's what computers are really good at. So we came up with this concept of autonomous Kubernetes that solves what we call day-two operations problems in multiple verticals, and the first vertical we tackled was cost, because it was near and dear to our hearts and it's a major industry problem today, going into what potentially looks like a recession. Yeah, absolutely. I've been to a few AWS re:Invents — my first one was in 2017 — and I was just blown away by the number of companies there that were literally dedicated to cost, right, how do we reduce cost. And I feel like in the six years since then we've come a long way in the underlying tech that can actually enable that. We've obviously seen a consolidation of sorts of those companies as well — the good ones have stuck around — and then you come in and, from what I can tell, have been very successful in a space that was quite busy and quite full of other companies. So what made you stand out initially?
What's the success behind CAST AI at the moment? So there are a couple of pillars to the way we approach the market, both top-down and bottom-up, but if I had to boil it down: all of the cost-optimization-focused companies from that 2017 era were about how do we consume the AWS bill, make it understandable to finance teams, and then provide some recommendations to improve the situation. The problem with recommendations is that no one will take them, because if it's not built into your standard operating procedure, if you're not exercising that muscle all the time, you're scared to exercise it — what if I reduce these resources and it breaks my application, what happens then? So there's a bit of analysis paralysis: you see some savings on the table, but you never actually execute on them, and those platforms don't live up to their full potential. We approached it a different way. We said, look, people aren't going to read reports — maybe they'll read some reports for some results — but what we had to start with was an optimization-first strategy: how do we take the quote-unquote recommendations that our engine is producing and just remediate immediately? And if we're going to do that, what's the resolution of remediation? We got to an extremely fast resolution of 15 seconds. We remediate our customers' clusters on a 15-second cycle to get them the best possible cost and performance for their apps.
And how is that achieved? I know, from the research I did into your company, that number one, you've got a free tier, which installs an agent of sorts on the Kubernetes cluster and then creates a report — and that's free. And then I think the next step customers can take is to actually have you do the optimization, and you do that by replacing the inbuilt Kubernetes load balancer — so you're effectively replacing that function within a Kubernetes cluster? Yeah, what we replace is not so much the load balancer, it's the autoscaler — the cluster autoscaler. On Amazon these are called Auto Scaling groups, and on other platforms they're called scale sets, but essentially Kubernetes under the covers is broken down into these things called node pools: groups of computers, and these groups of computers are homogeneous — the same computer over and over again. But when you're solving a Tetris puzzle or a jigsaw puzzle, you don't use the same size piece to solve the whole puzzle. So what we do is use the right computer for the workload, and we find — out of all of them; Amazon says something like 544 SKUs available in EC2, available on demand, with reserved instances, as well as spot instances — there's some very high, exponential number of combinations of those computers and lifecycles that makes the most sense for your app. That's a very computationally expensive problem to solve — it's an expensive graph problem — but also humans just can't do it; they can't keep up with the rate of change. So there are four or five vectors of optimization, but in a nutshell we're trying to match the market clearing price of the compute resource with what the customer needs today, without worrying about what they need in six months, because nobody knows.
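To make that Tetris analogy a little more concrete, here is a minimal sketch of the idea of picking a cheap instance shape for a set of pending pod requests. It is purely illustrative: the instance names, sizes and prices are made-up placeholder values, and a real autoscaler (CAST AI's included) solves a far richer multi-node, multi-lifecycle problem than this greedy single-node pick.

```python
# Illustrative only: a toy "right-size the node" picker, not CAST AI's engine.
# Instance names, shapes and prices below are hypothetical placeholder values.
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    vcpu: float
    mem_gib: float
    hourly_usd: float  # placeholder price

CATALOG = [
    InstanceType("small-2x4", 2, 4, 0.05),
    InstanceType("medium-4x16", 4, 16, 0.12),
    InstanceType("large-8x32", 8, 32, 0.23),
]

# Pending pod requests: (cpu, memory GiB)
pods = [(0.5, 1), (1.0, 2), (0.25, 0.5), (2.0, 4)]

def cheapest_fit(pods, catalog):
    """Greedy sketch: find the single cheapest instance that fits all requests.
    A real autoscaler packs across many nodes and lifecycles (spot, reserved...)."""
    need_cpu = sum(c for c, _ in pods)
    need_mem = sum(m for _, m in pods)
    candidates = [i for i in catalog if i.vcpu >= need_cpu and i.mem_gib >= need_mem]
    return min(candidates, key=lambda i: i.hourly_usd) if candidates else None

choice = cheapest_fit(pods, CATALOG)
if choice:
    need_cpu = sum(c for c, _ in pods)
    print(f"need {need_cpu} vCPU -> pick {choice.name} at ${choice.hourly_usd}/h")
```

Even this toy version shows why a fixed, homogeneous node pool tends to overshoot: the cheapest fit changes as the pending workload changes.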
Yeah, and you hit on a good point earlier when you talked about cost efficiencies in times of financial stress — all roads seem to point to some sort of quick, sharp, vicious recession this year, right? If you can provide cost savings on any level, that's going to be attractive to any organization, especially at scale. Obviously if you've only got a couple of worker nodes you can get away without it, but if all of a sudden your application scales and you've got no tools, and we're talking thousands and thousands of nodes, that's going to add up. Even saving two or three percent on an instance in AWS — which I guess is what you're working towards — if you multiply that by a thousand, that's quite a big saving per month, right? So with that in mind, how do the mechanics work? How do you get the machine — and I guess, obviously, AI is in your name — how do you get that pool of understanding as to what a particular application, running on a particular node, node set or node pool, is actually doing, and how do you then make the decision to say, let's now change this instance from one type to another?
Yeah, so the agent that we install collects a whole bunch of information every 15 seconds, and then we report the delta — the change in position or state — back to our data analysis platform every 15 seconds. So what kind of data do we collect? All of the pods you're running, all of the nodes they're running on; we understand the cost of those lifecycles and the cost of those nodes; we understand the nodes' capabilities. We also understand what the pod is asking for and what the pod is actually using, and there's always a discrepancy between those two.
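As a rough illustration of that requests-versus-usage gap, here is a minimal sketch using the official Kubernetes Python client and the metrics.k8s.io API. It assumes a reachable kubeconfig, a recent `kubernetes` client package, and that metrics-server is installed; it is not the CAST AI agent, just the kind of comparison being described.

```python
# Sketch: compare what pods request vs. what they actually use (CPU).
# Assumes `pip install kubernetes`, a valid kubeconfig, and metrics-server running.
from kubernetes import client, config
from kubernetes.utils import parse_quantity  # handles "250m", "1Gi", "12345678n", etc.

config.load_kube_config()
core = client.CoreV1Api()
metrics = client.CustomObjectsApi()

# Requested CPU per pod, summed over containers that declare requests
requested = {}
for pod in core.list_pod_for_all_namespaces().items:
    cpu = sum(parse_quantity(c.resources.requests.get("cpu", "0"))
              for c in pod.spec.containers
              if c.resources and c.resources.requests)
    requested[(pod.metadata.namespace, pod.metadata.name)] = float(cpu)

# Actual CPU usage from the metrics.k8s.io API (provided by metrics-server)
usage = metrics.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "pods")
for item in usage["items"]:
    key = (item["metadata"]["namespace"], item["metadata"]["name"])
    used = sum(float(parse_quantity(c["usage"]["cpu"])) for c in item["containers"])
    req = requested.get(key, 0.0)
    if req:
        print(f"{key[0]}/{key[1]}: requested {req:.2f} CPU, using {used:.2f} "
              f"({used / req:.0%} of request)")
```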
And then there's another vector that's interesting: customers have these things called HPAs. If you're not familiar with an HPA, it stands for Horizontal Pod Autoscaler. What does that mean? When you're scaling an application in drastic steps — say I'm going from a thousand requests per second to ten requests per second — there are a couple of ways you can do that. You can give a single pod a lot more CPU and memory, but often there's a cap; a single-threaded process, for example, just can't use more than one CPU. The other way is to create replicas — many, many copies of NGINX, or of your web app — and when you create those replicas, you do it through an HPA. So you have this kind of harmonica effect, and as those things scale up and down, they're creating waste on the way up and creating waste on the way down. Those are a couple of the vectors we're very heavily optimizing: a focus on aggressive, cost-effective scale-down, which is at least a 10 to 20 percent difference versus the traditional approach, and then, as you're scaling up, making sure you're using the right type of computers for the job at the lowest possible price.
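To picture that harmonica effect, here is a back-of-the-envelope sketch comparing what a naive replica-based scaler allocates against what the traffic actually needs across a spike. The numbers and the scaling rule are invented for illustration; a real HPA reacts to observed metrics with its own stabilization windows.

```python
# Toy model of replica-based scaling waste around a traffic spike.
# Numbers are illustrative; a real HPA has its own metrics and timing behavior.
CPU_PER_REPLICA = 1.0          # each replica is granted 1 CPU
REQS_PER_CPU = 100             # assume one CPU serves ~100 requests/second
traffic = [100, 400, 1000, 1000, 300, 100, 100]   # requests/second per interval

replicas, allocated, needed = 2, [], []
for rps in traffic:
    target = max(2, -(-rps // REQS_PER_CPU))   # ceiling division, minimum 2 replicas
    # naive lag: scale up immediately, scale down only one replica per interval
    replicas = target if target > replicas else max(target, replicas - 1)
    allocated.append(replicas * CPU_PER_REPLICA)
    needed.append(rps / REQS_PER_CPU)

waste = sum(a - n for a, n in zip(allocated, needed))
print(f"allocated CPU-intervals: {sum(allocated):.0f}, needed: {sum(needed):.0f}, "
      f"wasted: {waste:.0f} ({waste / sum(allocated):.0%})")
```

Most of the waste in this toy run shows up on the way down, which is one reason cost-effective scale-down gets called out as a separate optimization vector.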
Right, okay, so you're effectively right-sizing on the fly. It's dynamic right-sizing, as I understand it, through learned best practice across your customer base. I think it's genius that you've got the agent sitting there gathering this information, and then, by effectively crowdsourcing it, you understand exactly what this pod is doing, what it should have, and how to actually optimize it — so I think it's brilliant from an optimization point of view. I've seen the term sandbagging thrown around when it comes to resource optimization. What does that actually mean, when people are sandbagging? I guess we've touched on it, but what is it from your point of view? So imagine you're an engineer and you're building some app, and you say, okay, I have to deploy this app — how many resources should I ask for? What do you do? You kind of put your finger in the air and say, oh, I need two CPUs — ah, just to be safe, let's give it four — and this much memory. Then let's say that gets deployed through your CI/CD pipeline, and your SRE team looks at it and says, well, we don't want to get woken up in the middle of the night, so let's just add maybe 20, 30, 40 percent more capacity. That goes up the chain, and the more layers you have managing an application, the fatter that buffer, that headroom, becomes. Before you know it, the thing that only needs one is being allocated six. Now imagine replicating that at scale, and you have a bunch of computers sitting idle that have allocated capacity but aren't using it — that's basically what sandbagging means. And that then lends itself to, I think, one of the fundamental problems you're trying to solve, which is getting rid of the waste — effectively making data centers more efficient, energy usage, that sort of thing. As we move forward with computing in general, I think it's just going to become more and more important to be efficient in what we're using — don't waste.
And you touched on the chip shortage early on — that created real pressure. I remember Azure ran out of compute capacity in England, from memory, last year. That could happen, right? So if everyone is sandbagging, the potential is that compute becomes a finite resource, which is actually the opposite of what cloud should be, if we don't manage it carefully from the consumer's point of view. Because obviously, as a cloud provider, you try to add scalability, you try to guess which way your customers are going and add scale, but if everyone is basically taking as much as they think they need, then waste is going to happen. I thought of an analogy there: it's the same way IPv4 space got used up real quick, because everyone just went, yeah, I'll take a Class C, no worries, no one's going to need that — and all of a sudden it's, well, we actually might need a bit of that back. So that's an interesting analogy I had in my head with regards to sandbagging. Yeah, and if you look at IPv4 space, the cost is just escalating, and for whatever reason people are not adopting IPv6 widely. Right. But just to go back to that: when you have this waste built into the standard operating procedure of your business, you're not just affecting your bill — you're affecting, as you said, Anthony, the resource. Other people can't use those resources.
The cloud is trying to be true to what it says you're allocated, but those CPUs are cycling whether you're using them or not. They're generating heat, they have to be cooled, they're using power, and that power has to be paid for and generated — it can be clean or dirty, but as we know, electricity is dirtier than we'd like to admit. So you're not just wasting money, you're actually slowly contributing to the killing of the planet, and that's what we're really passionate about: reducing this footprint. Because if we can get people to use just the compute they need, efficiently — imagine if we're saving 30 or 40 percent on a bill, we're also saving 30 or 40 percent on the power consumption and the cooling requirements of every data center. I've never really thought of it that way — I've never thought about my potential footprint when I provision a server or a service, wherever it might be — but you're right, and I think we do have to start thinking that way. I think a lot of people are, but I don't think a lot of companies at your level are actually contributing to making that better and being outwardly positive about it as well, so kudos to you guys — it's definitely a feather in your cap. Yeah, thank you. It's important to us that we create sustainable environments, and efficiency is just one leg of that.
As I mentioned, with this concept of an autonomous platform we're just tackling the tip of the spear, if you will; there are many other day-two operations that need to be tackled, and security is one we've talked about. I was going to ask about security, actually, because a lot of the headlines I see from you are around security. So how are you contributing to Kubernetes and making it more secure? I think cloud security in general has come a long way over the last five years. There's a CIS Benchmark that allows you to understand your posture, and there's a lot of policy-as-code that's been introduced across the various service providers. But once you get into Kubernetes, it's kind of like, oh, that's a completely different black box — we don't know what's actually going on there from a security perspective. A lot of the anomaly detection doesn't go into those black boxes, and containers are fraught with vulnerabilities, as you know, Anthony, because you're essentially building an operating system under the covers. So we found that the maturity level of K8s users is actually much lower than the cloud — if you look at security as the superset, the cloud below that is slightly less mature but getting there, and then K8s really needs to step up its maturity level. So we think that continuously securing a Kubernetes cluster is not easy, and we want to help customers understand which vulnerabilities they need to address now, and what can wait a little bit as security debt.
Where are they in the remediation funnel, essentially — that's the angle we're approaching it from. Yeah, right. So as part of that initial setup that you do, is it optimization and security in the one report, or can users pick and choose when they install it? Or is it all in one, and then it's, okay, let's see how we're going with optimization, let's see how we're going with security — they can take the report as is and just do what they want with it, or they can take, like we talked about, those next steps. And from a security point of view, how do you handle the remediation around that? The remediation part is a good question, and we don't have a complete answer for it right now, but I'll tell you where we're going. Right now we have so much stuff that we give away for free — I don't know if I can keep it that free and open, but I'm going to try. When you install that agent, you get all of the cost reporting capabilities — imagine something like Kubecost, which is open source and which you have to install on your own cluster; you get that out of the box with the agent. You also get all of the recommendations for how to size your cluster properly, at no cost, and you don't have to host anything — you just get this ongoing report that shows you your progress, even without automation. But you also get a security report, and that security report tells you which best practices you're violating and how to remediate them, plus all of your container vulnerabilities: in the containers you've built, what layers you're using, where patches are required, and which packages you're using in your app that actually have vulnerabilities in them. And then you can go and remediate them based on known fixes.
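To illustrate that "fix now versus security debt" triage, here is a minimal sketch that sorts a hypothetical vulnerability report. The report structure, image names and CVE entries are invented for the example; this is not CAST AI's actual report format or logic.

```python
# Illustrative triage of container image findings: fix-now vs. can-wait.
# The report structure and CVE data below are invented for the example.
findings = [
    {"image": "web:1.4", "cve": "CVE-2023-0001", "severity": "CRITICAL", "fix_available": True},
    {"image": "web:1.4", "cve": "CVE-2022-1234", "severity": "MEDIUM",   "fix_available": True},
    {"image": "worker:2.0", "cve": "CVE-2021-9999", "severity": "HIGH",  "fix_available": False},
]

def triage(findings):
    """Fix now: high/critical issues that already have a patched package.
    Security debt: everything else, tracked but not blocking."""
    fix_now = [f for f in findings
               if f["severity"] in ("CRITICAL", "HIGH") and f["fix_available"]]
    debt = [f for f in findings if f not in fix_now]
    return fix_now, debt

fix_now, debt = triage(findings)
for f in fix_now:
    print(f"REMEDIATE NOW  {f['image']}: {f['cve']} ({f['severity']})")
for f in debt:
    print(f"track as debt  {f['image']}: {f['cve']} ({f['severity']})")
```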
Okay, cool. And I guess in general, we talk about Kubernetes being a black box to most people, and my experience is that — obviously there are a lot more people today, versus maybe five years ago, who are comfortable with the building blocks of Kubernetes and don't see it as just a black box — a lot of the people who interface with a Kubernetes platform are typically developers rather than those SRE types, and they don't really care as much. It's not their fault; they're doing what they're doing: they want to build an application, they want to put it somewhere that works, they want to make sure it actually does its thing. So you're saying that, from the point of view of the people operating the platform, the platform ops guys, you're giving them a great report to be able to go and remediate and make sure they're running the most security-focused Kubernetes platform they can.
Yeah. And then for the remediation, there are some traditional channels you go down: there's something called a SOAR platform — security orchestration, automation and response — for what you do when you find a problem, and companies typically have pipelines to fix those problems; it could be a Jira ticket, or it could be something more sophisticated. What I'm really interested in exploring, Anthony, is the GitOps approach. In other words, if I know that you have to patch a container version, can I submit a pull request to your repository with the fix, have one of your engineers review it, apply it through the pipeline, and remediate it that way? Because what's the point of me fixing an image in your cluster if it's just going to get overwritten by the CI/CD platform anyway? We have to shift — the term is called shift left — meaning get to the source of the problem, and I want to do that through GitOps.
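As a sketch of what that GitOps-style remediation could look like in practice, here is a minimal example that proposes a patched image tag as a pull request instead of mutating the running cluster. It assumes the PyGithub library and a GITHUB_TOKEN with repository access; the repository name, branch and manifest path are hypothetical, and this illustrates the idea rather than CAST AI's actual mechanism.

```python
# Sketch: propose a patched container image via pull request instead of
# editing the live cluster. Repo name, branch and file path are hypothetical.
# Requires `pip install PyGithub` and a GITHUB_TOKEN with repo access.
import os
from github import Github

REPO = "example-org/example-app"          # hypothetical repository
FILE_PATH = "deploy/deployment.yaml"      # hypothetical manifest path
OLD_IMAGE, NEW_IMAGE = "nginx:1.24.0", "nginx:1.25.3"

gh = Github(os.environ["GITHUB_TOKEN"])
repo = gh.get_repo(REPO)

# Branch off the default branch
base = repo.default_branch
source = repo.get_branch(base)
branch = "fix/bump-nginx-image"
repo.create_git_ref(ref=f"refs/heads/{branch}", sha=source.commit.sha)

# Patch the manifest on the new branch
manifest = repo.get_contents(FILE_PATH, ref=branch)
patched = manifest.decoded_content.decode().replace(OLD_IMAGE, NEW_IMAGE)
repo.update_file(FILE_PATH, f"Bump {OLD_IMAGE} to {NEW_IMAGE}",
                 patched, manifest.sha, branch=branch)

# Open the pull request for a human to review and merge
repo.create_pull(title=f"Security: bump image to {NEW_IMAGE}",
                 body="Proposed remediation for a known CVE in the base image.",
                 head=branch, base=base)
```

The point of going through a pull request is exactly the one made above: the fix lands in the source of truth, so the next CI/CD run redeploys the patched image instead of overwriting a hand-edited cluster.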
Yeah, interesting, because I take that approach from a backup perspective, in a way: if your application has changed all of a sudden, you want to insert into that pipeline the ability to back it up on a change and have that fed back into the CI/CD — there's that whole way of looking at it. So I think I understand what you're talking about, and I think it's going to be pretty successful. With regards to that — you mentioned you don't need to host anything, there's no infrastructure as such, right? So you've got your SaaS platform, you've got the agent, and that's the only thing people need to install, basically? Yeah, there are like three Helm charts. There's an agent; when you go to automation there's a cluster controller, a spot handler, and something called an Evictor, which does our bin packing within the clusters — so there are about four components — and then there are a couple of security agents: one that detects your traffic anomalies, another that does image scanning within your cluster. So there are a couple of things, but we orchestrate all of that as an operator and just deal with it for you; you don't have to worry about it.
All right, good stuff. So I want to finish off by talking about the AI component, because obviously it's pretty prominent in your name, and it's again very topical for where we're at in the IT world at the moment with regards to AI — we'll touch on that just to finish off. But how did AI find its way into the company? Because obviously it's very prominent: CAST AI. And I think there's an element of risk in that, and I say that from the point of view that there are lots of people claiming they're doing AI today. So how have you been able to be bold enough to put it into the name, but also deliver on it as well?
Yeah, so there are some problems that can be solved with simple heuristics, and our methodology is: if we can solve a problem with a traditional algorithm, we will. There are some problems that require prediction — prediction of a future state — and that's where machine learning comes in. So out of the whole field of AI, we don't deal with natural language processing; we deal with prediction problems, and those prediction problems are usually time-series problems. For example, we've got a market clearing price for a particular computer — what will that market clearing price look like in an hour, or in a day? We have to take those factors into consideration. When a spot instance gets interrupted, we only have two minutes' warning on AWS and 30 seconds on Google — can we give ourselves more notice? Can we predict spikes in a workload so we can be ready for them, rather than being reactive? Those are some of the problems we have to use machine learning to solve. There are other problems where we don't have to use machine learning, and we're very pragmatic about which algorithm or model goes where.
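As a tiny example of the kind of time-series prediction being described — illustrative numbers and a deliberately simple model, not CAST AI's actual forecaster — simple exponential smoothing over recent spot prices gives a one-step-ahead estimate that a scheduler could act on.

```python
# Minimal one-step-ahead forecast of a spot/market price with exponential smoothing.
# Prices are made-up hourly samples; a production model would be far richer.
prices = [0.112, 0.115, 0.111, 0.118, 0.130, 0.127, 0.125]  # $/hour, hypothetical

def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: next value estimated from a weighted history."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

next_hour = ses_forecast(prices)
print(f"expected market price next hour: ${next_hour:.4f}")
# Forecasts like this can feed decisions such as: provision spot capacity now,
# hold off, or pre-provision replacement nodes ahead of an expected spike.
```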
Yeah, and I think that's a great answer. I think that separates you from the hype today around the NLP stuff, which is obviously making all the news with ChatGPT — and this week there was Bard, and then there was the Bing and Edge search chat that got announced today by Microsoft. So what's your theory on that, in terms of where that industry is today, how mature it is and where it's going? I've done a couple of other podcasts around this and talked about the fact that it's exciting and scary in the same sentence, the same breath. So where do you see it going? How is it going to make our world better? So, it is exciting. What's interesting to point out is that the model OpenAI is using, and has trained on, is a neural network transformer — Google invented it years ago, a few years before OpenAI implemented it. I think Google was just caught on its heels a little bit, and OpenAI was in the perfect position to train on the vast majority of the internet, and they just did it. Of course, interesting. So it's not like OpenAI has some technological advantage that isn't in open source — that's why you're going to see a lot of these copies. I think the biggest problem happens when you get to commercialized usage of these kinds of natural language processors, these things that are essentially breaking what's called the Turing test — you don't know if you're talking to a human or a robot. The question is: how do you attribute back to the input source? We've played with this, right — how do we generate source code to solve a computer science problem? Well, that came from GitHub somewhere; it came from somebody's snippet; it learned on material that was in the public domain. But if you don't attribute the credit back to the author, I think there's a significant legal question that has to be answered first. So that's kind of my two cents.
Before we can broadly commercialize, we have to battle that part out and get an understanding. I hadn't thought about that — I mean, obviously I've been using ChatGPT since it came out, and that's been quite a journey and all that kind of stuff, but I never thought about what you just mentioned, which is that it's been fed this information from something real and tangible that someone created — humans created all the data points it got fed in order to give its varied responses to whatever you throw at it. So does the fact that it's so fragmented, drawn from all of these source points, mean that it's not someone's work? That's really interesting. Yeah, and because it's a neural network, it's highly unexplainable — you can't go into the black box. The reason search engines worked is that you would click on a link and see the source of the data; it wasn't owned by Google in that case. In this case it's opaque, so that's kind of the conundrum right now.
That's really interesting, and I don't think I've ever heard that particular thread brought up in this era we're in. So yeah, I wonder what they're going to do, because you're right. I mean, ChatGPT introduced a $42 paid instance where, if capacity reaches a certain level, you can still get access — so there are already levels of commercialization, and I think the big one today was some sort of upsell Microsoft is trying to bring into it. So it's like they're cashing in straight away, and that brings us back to what we started with, which is a good way to round it out: that in itself is cost blowout, right? People are going to start to say, I can spend five bucks on this iPhone app that gives me a good avatar — guilty as charged here, spending the money just to see what it's about — but that then leads to wastage as well: compute, storage, everything we've talked about. So it's come full circle. It's interesting to think about what this added stress on the compute networks we have will do to the world. Yeah, and the machine learning apps are a little bit worse, because you often use these GPU-optimized computers, which are very expensive, especially for training — OpenAI has so many millions of cores that they're using to train. It's a very interesting problem.
Yeah, exactly. Well, hey, this has been a really good conversation — I'm sure we could have talked about that particular topic for an hour and a half as well — but we've almost reached the end. I just wanted to let everyone know that I'm going to post a lot about what CAST AI is doing; we'll link to some of the web materials, and there are some great resources online and great learning around Kubernetes in general. I think you guys are doing a really good job — I love the ethos of the company, how it started, and what you're looking to do. So I really thank you, Leon, for being on the show. And just as a final reminder, if you like Great Things with Great Tech, please head to gtwgt.com, go to the website, go to YouTube, click the link, and like and subscribe. Leon, thank you very much for being on, and we'll catch you next time on Great Things with Great Tech. Thank you so much, Anthony, great to talk to you.