June 15, 2023

Generative AI & Cognitive Automation in the Enterprise with Zero Systems | Episode #67

Elevating Enterprise Productivity with Zero Systems: Harnessing Generative AI & Cognitive Automation!

In this episode, I'm speaking with Alex Babin, Co-Founder & CEO of Zero Systems, a groundbreaking company at the forefront of utilizing generative AI to significantly enhance enterprise productivity. Zero Systems' cutting-edge platform integrates generative AI apps that augment knowledge workers in various enterprises. We delve into the capabilities of their innovative solutions, how cognitive automation is transforming the workplace, and the endless possibilities that generative AI holds for the future.

The company was founded in 2016 and is located in the United States.

☑️ Support the Channel by buying a coffee: https://ko-fi.com/gtwgt

☑️ Technology and Technology Partners Mentioned: Generative AI, Large Language Models (LLMs), Cognitive Automation, Data Security

☑️ Web: https://www.zerosystems.io

☑️ Crunch Base Profile: https://www.crunchbase.com/organization/zero-systems

 

☑️ Interested in being on #GTwGT? Contact via Twitter @GTwGTPodcast or go to https://www.gtwgt.com

☑️ Subscribe to YouTube: https://www.youtube.com/@GTwGTPodcast?sub_confirmation=1

 

• Web - https://gtwgt.com

• Twitter - https://twitter.com/GTwGTPodcast

• Spotify - https://open.spotify.com/show/5Y1Fgl4DgGpFd5Z4dHulVX

• Apple Podcasts - https://podcasts.apple.com/us/podcast/great-things-with-great-tech-podcast/id1519439787

 

☑️ Music: https://www.bensound.com

Transcript

"There are tailwinds of generative AI, and we've been preparing for that for four years. We knew it was coming, so we got prepared, and this tailwind is just helping us to move really, really fast..."

Hello and welcome to episode 67 of Great Things with Great Tech, the podcast highlighting companies doing great things with great technology. My name's Anthony Spiteri, and in this episode we're exploring the transformative power of cognitive automation and generative AI in professional services, with a company that's been recognized as a Gartner Cool Vendor for its innovative integration of AI into the legal, financial, insurance and consulting industries. That company is Zero Systems, and I'm delighted to be speaking with co-founder and CEO Alex Babin. Welcome to the show, Alex.

Thank you, Anthony. Happy to be here.

Excellent. So before we dive into another episode about AI, and I think a more exciting one because we get to talk about applications in specific verticals and in business in general: if you love Great Things with Great Tech and would like to feature in future episodes, you can click on the link in the show notes or go to gtwgt.com to register your interest. Just as a reminder, all episodes of GTwGT are available on all good podcasting platforms, the Googles, the Apples and the Spotifys, all hosted and distributed by Spotify Podcasts. Also go to YouTube, hit the like and subscribe button, and you'll get all past and future episodes. And with that, Alex, let's talk about yourself firstly, where you've come from. You're sitting today in Silicon Valley,
working at a great startup, but also in a great industry, in the startup world, in the area of AI. But before we talk about that, maybe give a bit of background about yourself and how you came to found Zero Systems.

Thank you, Anthony. Zero Systems is my third startup. The first company was in the hardware space: hybrid vehicle technology, transmissions, funded by DFG, and we'd been building transmissions for hybrid vehicles. Then I decided I would never go into hardware again, because that's really, really hard. We then started a company in the computer vision AI space, and then came Zero Systems with my co-founder, and it's been quite a journey since then.

Yeah. So a little bit of background about yourself: where did you get your start in computers, and what led you to be interested in the AI space?

I've always been more or less on the product side of things. I really wanted to help people with a better product, and that's actually where the name Zero comes from: zero time wasted. Literally, a lot of the software and products we have make us spend more time using them rather than reducing the time spent, and that's what separates great products from bad products. If you look at Apple and compare it with other historical platforms, Apple products are amazingly well designed and easy to use, and that makes people use them effectively. The same can be applied to other software products, but not every product is the same, and we do believe that AI is the way to help people spend less time and produce more. Of course, in an ideal world the best interface is no interface, but the job still gets done. In any case, when we started, my co-founders and I looked at how people work and what they spend a lot of time on. We realized that AI supporting knowledge workers in their most tedious, time-consuming and laborious processes would be a good answer to those problems, and that's how we embarked on that journey.
So that, in effect, is almost your problem statement, right? Trying to make things easier for people so they can do their work effectively.

Correct.

So, cognitive automation. Just to explain a little about what that actually is, because I think a lot of people listening to the podcast might not have a total grasp of what it means: what is cognitive automation?

To start talking about cognitive automation, we need to separate it into two pieces: cognitive, and automation. Everyone knows what automation is: automation is when you produce more with less, when processes are done by a machine. Unfortunately, in many cases that concept cannot be applied to knowledge workers, for a very particular reason: every knowledge worker works differently, and if you try to apply a one-size-fits-all concept to everyone, it's not going to work, because change management is a really hard thing to address. People don't like changing their processes. For example, you, Anthony, work with your email differently than I do, and you handle calendars, or recording, or whatever cognitive process you're involved in, differently. It's always different from other people, even inside the same organization. That means automation needs to mimic the way you work, not force you to change your processes. That's where cognitive automation comes in, and it requires AI, because you can't just fix this with brute force; you need a cognitive component, which means AI.

And that cognitive component speaks to the differences in people, in terms of how I do things and how you do things. Does that mean the cognitive component can adapt to the person who's interfacing with the system?

It not only can, it should, because otherwise it isn't cognitive automation. The systems being built on top of large language models, which are taking the world by storm right now, are a beautiful example of the cognitive component, because when you work with those types of systems they adapt to
the way you want the information to be presented, they adapt to the questions you ask, and so on, instead of forcing you to do things the way they were designed to do them.

Yeah. Is that the whole reinforcement model play as well, where the system learns a bit about you and then how to interact with you better, because it's reinforced learning?

Yes, it's reinforcement learning with a human in the loop, because that creates a new data layer that never existed before. When you interact with an AI system as a knowledge worker, as a knowledge producer, you basically create a data layer that gets fed back into the model, into the system, and it learns from your behavior. In the future, well, even right now, it can already predict the next step of what you're going to do. This is absolutely incredible; it's a piece of the big deep tech we're also working on, action transformers. It doesn't just predict the next token, or a sequence of tokens, it can actually predict what step you would take next to accomplish the task. That's also part of cognitive automation: proactive automation, or insights. It's pretty fascinating.
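To make the "predict the next step" idea concrete, here is a deliberately tiny sketch that treats a worker's recorded steps as a sequence and suggests the most likely next action. It uses simple bigram counts as a stand-in for the action transformer Alex describes; the session log, step names and functions are hypothetical illustrations, not Zero Systems' implementation.

```python
from collections import Counter, defaultdict

# Hypothetical log of past sessions: each session is the ordered list of
# steps one worker took to produce a monthly report.
SESSIONS = [
    ["open_email", "open_crm", "export_sales_csv", "open_sheet", "build_report", "send_report"],
    ["open_email", "open_crm", "export_sales_csv", "open_sheet", "build_report", "send_report"],
    ["open_email", "open_sheet", "build_report", "send_report"],
]

def train_next_step(sessions):
    """For every step, count which step tends to follow it (a bigram model)."""
    following = defaultdict(Counter)
    for steps in sessions:
        for current, nxt in zip(steps, steps[1:]):
            following[current][nxt] += 1
    return following

def predict_next(model, current_step):
    """Return the most frequently observed next step, or None if unseen."""
    candidates = model.get(current_step)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

model = train_next_step(SESSIONS)
print(predict_next(model, "open_crm"))      # -> export_sales_csv
print(predict_next(model, "build_report"))  # -> send_report
```

An action transformer would learn far richer context than a frequency table, but the interface is the same: given what you just did, propose what you are likely to do next.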
Yeah, and I think back to one of my favorite sci-fi movies of all time, Minority Report. The people predicting the murders were precogs, weren't they? They were cognitive AIs, basically, predicting murder. Okay, that's giving me chills, in terms of how far away we are from that. But just trying to marry a piece of sci-fi that was so brilliant, yet so farcical, with what we're talking about now: yes, we're a long way away, but that's exactly what you're describing. It's crazy to think we're stepping into this sort of world.

Yes, and on the scale you describe it is far away, but it already produces a lot of results right now, and it doesn't have to be predicting the near future. It's enough to predict two, three, five steps in your process. For example, you get an email with a request from someone to create a report. You haven't even opened the email, but the AI agent that sits on your computer and works as your personal co-pilot knows what typical steps you take to get that report done, which system you'd pull the data out of, and how you compile it into the report. You open the email and the AI says, "Hey, here is the report you've been asked for." To you as a user it looks like black magic, but it's not, because the action transformer can reconstruct those steps proactively and say, "Here is the work I've done for you." So it's actually real.

Yeah, the timing is amazing, because I love the Lex Fridman podcast, and I listened to him interview Mark Zuckerberg a couple of weeks ago, though I only got to it recently, and Mark was talking about the future for content creators, but also knowledge workers, with exactly the kind of co-pilot you're talking about. So Zero Systems is an example of bringing this to life pretty quickly, right?

It's actually building it.

There you go. For me, I've had a couple of conversations about AI and where we're going; I think we'll get into the morals a little later on.

I would say more: right now, at this moment, over 10,000 AI co-pilots are already working alongside humans.

Wow, there you go. I think the key part, the part that shouldn't be scary, even from the first five or ten minutes of this conversation, is that because of the cognitive part it's kind of you; it's just an extension of you. That's what people need to keep in mind from a positive point of view: this isn't taking over you. We'll talk about this a little later on, we're kind of foreshadowing, but the cognitive piece means it is almost an extension of you, as a helper.

Think of it as a box of superpowers a knowledge worker can get. One of the reasons we started the company is actually really interesting. Back when we were whiteboarding the mission, why we're doing this, I still remember drawing two circles on the board. One circle we called "what I like doing" and the other "what I'm good at", and we overlapped them a bit, maybe 30 percent. In that middle,
where what I'm good at and what I like doing overlap, is the genius zone. That's where people are absolutely at their best. Everything else on the sides can and should be automated, can be done by AI.

Amazing. So, just going back to the start, we didn't fully flesh out the beginning of the company. It was founded in about 2015 or 2016 and actually started with email categorization; that's kind of where you kicked it off. It's an interesting pivot, so maybe explain that.

Not exactly a pivot, but we started with the idea of building an algorithm. There were no large language models then, so we started building our own model. We were four or five people, literally in a garage here in Silicon Valley.

I love that.

We started with the observation that people spend a lot of time reading long texts and sorting information into different categories. If we talk about email, that's what people do: they sort it into folders. If we talk about long text, the best way to solve that time-consuming problem is to summarize it. That's what ChatGPT and other models do perfectly right now, but back then, in 2015, there were no models to do that, so we started training our own model, and we realized the best way to train it was to apply it to email management. That's how "let's help people with email management" was born: we were helping users automatically sort emails into categories and summarize long emails into a single paragraph.

Exactly. And then, as we started building it, it evolved: more processes being automated, more models being built and trained. Around 2020 we got our first LLM built and trained, not as big as ChatGPT and not designed to do what ChatGPT or GPT-4 does, but similar in concept, and that gave birth to all the other components that we now know as the AI engine, Hercules, on top of which we build our AI applications.
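The early email workflow Alex describes, sorting a message into a folder and compressing a long thread into a paragraph, can be sketched today with off-the-shelf open models. This is an illustrative stand-in using Hugging Face pipelines, not Zero Systems' original in-house model; the folder labels and checkpoint names are assumptions chosen for the example.

```python
from transformers import pipeline

# Public checkpoints used purely for illustration; Zero Systems trained its own models.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

FOLDERS = ["invoices", "client requests", "internal updates", "newsletters"]

def triage(email_body: str) -> dict:
    """Pick the most likely folder and produce a one-paragraph summary."""
    category = classifier(email_body, candidate_labels=FOLDERS)["labels"][0]
    summary = summarizer(email_body, max_length=80, min_length=20, do_sample=False)[0]["summary_text"]
    return {"folder": category, "summary": summary}

print(triage("Hi team, attached is the Q2 invoice for the Acme engagement, due in 30 days..."))
```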
Yeah, we'll talk about Hercules next, I think. But interestingly, you set up and focused on a particular vertical. Did you hit the legal area first as a vertical to target? Did you do that on purpose, or just naturally, because those guys are so unorganized?

Well, it's both. They're actually really, really well organized, and that's exactly what we wanted to help them with, because they were spending an enormous amount of time doing it manually; in legal it's called compliance. Another reason we went into legal is that, if we talk about knowledge workers, legal professionals (as well as accountants and consultants, but legal professionals first) are probably the biggest producers of very structured content. They also consume a lot of content: creating a contract takes a lot of time, and the tolerance for error is really low, because a mistake might be critical for the client. So we decided to go with legal as a beachhead market. It's a beautiful market once you're inside, and a terrible market when you're trying to win it over, because no one was brave and stupid enough at the same time to actually try; law firms are notoriously the most conservative clients ever, staid, traditional and set in their ways.

And I think, on the automation side of the argument, they're almost against it. They want to protect the fact that they're very manual in their practice; I think that's how they perceive value to the outside world, potentially.

You're right, but they're protective when it relates to the practice of law, when they create content that is legal-focused. We never focus on the practice of law; we help them with the business of law: document management, timekeeping, all those processes no one likes doing. I keep saying that no lawyer went to law school to file emails into a content management system or a document management system. They want to help their clients. Instead they're forced to spend 40 percent of their time doing monotonous work no one likes, and that's what we help them with, as well as other industries like financial services, accounting, consulting and so on.

Yeah, I think that speaks to automation at its core: how do you get better at the actual crux of your job? In fact, with the circles you talked about, the Venn diagram, how do you become more efficient in that middle portion? It's exactly what you've described, so it completely makes sense. All right, so
you mentioned the Hercules engine. Give us a bit of an overview of it: what it is, how it works, and how you then interact with your customers so they can build applications with the framework.

Yeah. To describe Hercules, which we've been building for four years, we need to look at the whole market as it is right now, especially the market for AI and generative AI. I keep comparing it to a Wright brothers' first flight kind of moment. For the last six to eight months we've been in the air, flying 300 yards, and the airplane is made of dried wood, duct tape and stuff. It's an amazing moment, and everyone understands that. But enterprise clients, and we're talking about large enterprises, don't need that type of plane. They need a Boeing 777: reliable, secure, with a runway, and the cocktails should be served in first class. That's what they expect from an enterprise-grade system. That's where Hercules comes to life, because it is the framework, the engine and the building blocks to produce, build, deploy and run enterprise-grade AI applications. It has all the guard rails, structured and unstructured data labeling, and data source verification, and it all runs inside the security perimeter of the organization. It has all the required components, like Lego blocks, to create an AI application in under eight weeks.

And those AI applications are interesting, because when I mention AI applications people think, "oh, some app", when actually an AI application can be solving a billion-dollar problem. I keep saying that in the old world, even two years ago, each of those AI applications that can now be created in under eight weeks could have been a billion-dollar company on its own as SaaS, software as a service, with millions of dollars invested and years spent building it. Now it becomes a model as a service running on top of Hercules. It's a new world we're in, a brave new world of AI.
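As an illustration of the "Lego blocks" idea only: an enterprise AI application assembled from a retriever over in-perimeter data, a guard-railed model and a deployment target can be described as composable configuration. The class and field names below are invented for this sketch and are not Hercules' actual API.

```python
from dataclasses import dataclass, field

# Hypothetical building blocks for an enterprise AI app. The names are made up;
# they only illustrate composing retrieval, guard rails and deployment from parts.

@dataclass
class Retriever:
    source: str                    # e.g. the firm's document management system
    verify_sources: bool = True    # keep a trace back to ground-truth documents

@dataclass
class Guardrails:
    allow_unsourced_answers: bool = False  # refuse output that cannot cite a source
    human_review_required: bool = True     # human-in-the-loop approval step

@dataclass
class AppConfig:
    name: str
    retriever: Retriever
    guardrails: Guardrails = field(default_factory=Guardrails)
    deployment: str = "on_prem"            # runs inside the client's security perimeter

contract_intake = AppConfig(
    name="contract-intake-summary",
    retriever=Retriever(source="dms://client-perimeter/contracts"),
)
print(contract_intake)
```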
So you talked about the security boundaries, and that's very important; it's one of the points that comes up quite a bit with AI: the security factor, the data that goes in and what comes out. So do you deploy the Hercules framework per customer? It's not a centralized model; it's deployed into a customer's perimeter, nice and secure, which means it only has customer data in it?

Correct, and that's a critical component for any enterprise. It might be on-prem, it might be private cloud, it might be servers, but that's essential and critical, because a lot of people say, "wait a second, we can still send some data outside." But if we talk about large enterprises like insurance, consulting, legal, financial services, sometimes it's not even the data of that customer; it's the data of their customers, other organizations, individuals and so on. They don't have the rights to send it to a third party for processing. That limits the whole ability of AI systems to work the way we see them working in the consumer market. Security and privacy is a pretty big concern and one of the components of what we call the great AI chasm: enterprises are stuck on one side of it, and there is no effective way to cross over right now.

Yeah, and you think about a model like ChatGPT, which is obviously the most popular and well known at the moment: we really don't know where that data has come from. We've got some understanding of it, but OpenAI has kept it quite quiet. Clearly, at some point, the model data they've pulled was created by someone, somewhere, so the whole question of IP comes into play: who owns the fragments of what's being produced? In your case with Hercules, that doesn't come into play, because it's so segmented and purpose-built for the actual enterprise?

Correct. The data inside the security perimeter is owned by the client or by its customers and stays centralized there, and the derivative actions, or the derivative models that have been fine-tuned and trained on this data, belong to the client, so there are no issues on the security, privacy or IP side. It's peace of mind for clients.

Yeah, so you're solving the privacy and security of an AI model and, more importantly, ensuring data security, compliance and regulatory policy, which are pretty big in the worlds you're playing in, right?

Yes, correct. That's a big challenge for organizations right now.
So talk to me a little bit about the pitfalls. Is there any downside? I know there's a lot of naysaying about these models, but in the way you're deploying and framing this, I feel like you're solving a lot of the negative connotations of the large language models that are open. From your end, are there any pitfalls in generative AI for enterprises?

Well, yes, there are many, but let's focus on the three main things this great AI chasm consists of. Number one, super obvious: the data privacy concerns we just discussed. Number two: the quality of the output; models hallucinate. If you train a model on internet-scale data, you will get internet-scale biases and hallucinations. Imagine you're running the best bank in the world using state-of-the-art AI technologies with a three percent random chance of sending someone's money to someone else: you'll run out of money and customers pretty quickly. Of course, in this example I'm exaggerating, but there are so many cases where the quality of AI output is critical. We have a client who processes hundreds of millions of dollars worth of transactions every month. Imagine a model hallucinates and, instead of 10 million, sends 100 million. For a regular, general model that's nothing, a tiny bit of hallucination; for regulated industries it might be catastrophic. In those cases the AI output should be absolutely hallucination-free, and we can talk about how that can be achieved and the components of Hercules that do it. The third problem is derivative of those two: the lack of end-to-end AI applications. Enterprises are looking at this beautiful new world of possibilities, at this magic wand they can wave so that processes they were never able to solve or automate before can be solved, but they understand that this magic wand can randomly turn gold into something else, that bad things can happen. That stops them and prevents them from adopting AI technologies at scale. A lot of companies are trying to address this by building guard rails and supporting pieces, but it's still in its infancy.

You're right. The biggest problem that even I've had when interacting with the platforms out there is that at
some point the output you get isn't even remotely related to what you've been talking about; all of a sudden it goes off on a different tangent and pulls different data in, and you have to rein it in. I've said to ChatGPT a couple of times, "you're tripping", like something has gone really wrong here, and I feed it back and say, "what's going on? You were on the right track and now you've gone off; get back on track." I actually talk to it that way to bring it back to what we were talking about. That's okay for me, because I'm just tinkering and having a bit of fun with the platform as it is. But you're right: if this is business, if we're talking about people's livelihoods, as well as money and all that emotive stuff, you want to make sure the guard rails are right.

There was a case a week or so ago where a lawyer used ChatGPT for a brief he submitted to the court. He asked ChatGPT about previous cases, and ChatGPT completely made the cases up, and it was convincing; it's a convincing bullshitter, and it convinced the lawyer. The lawyer was suspicious, that's human nature, he suspected the cases were made up, so he asked ChatGPT, "are those real?", and ChatGPT said, "they're absolutely real, trust me." The lawyer submitted it to the court, the judge caught him, and it was a big shitstorm.

Imagine. I think I read that one, yeah. Effectively he wasn't looking for automation, he was looking for shortcuts.

Well, it actually was automation: he wanted not just to search for the cases, the precedents, he wanted the distilled knowledge so he didn't have to spend time doing his own research. He trusted consumer-grade AI for a business use case where potentially millions of dollars might have been involved, and someone's life.

Yeah, absolutely.
So how does Hercules protect against that? You mentioned the guard rails that are in place and how it interacts with the applications tapping into it. Very quickly, how do you circumvent that potential issue?

There are many components of Hercules that play into this; I'll just highlight two. Number one: ground-source data, ground truth. When a model works with corporate data that is literally owned by the customer, you can always trace back to the source of the data, so models have fewer opportunities to hallucinate and make things up. The second component is a human in the loop. If you build the human-computer interaction, the AI interaction, the right way, so that in between steps a human oversees what the AI is doing, it also dramatically reduces the level of hallucination or wrong output. There are many other components, but those two play the critical role.
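A minimal sketch of the two safeguards just described: refuse to answer when nothing in the client-owned corpus supports the question, and route any draft through a human reviewer before it is released. The toy corpus, keyword retrieval and reviewer callback are placeholders, not Hercules internals.

```python
# Toy illustration: answers must be traceable to documents the client owns,
# and a person approves the draft before it leaves the system.

CORPUS = {
    "MSA-2023-ACME": "Payment terms: net 45 days from invoice date.",
    "NDA-2022-ACME": "Confidential information must be returned on request.",
}

def search_corpus(question: str) -> list[dict]:
    """Very crude keyword retrieval over in-perimeter documents."""
    hits = []
    for doc_id, text in CORPUS.items():
        overlap = len(set(question.lower().split()) & set(text.lower().split()))
        if overlap:
            hits.append({"doc_id": doc_id, "text": text, "score": overlap})
    return sorted(hits, key=lambda h: h["score"], reverse=True)

def draft_answer(question: str, passages: list[dict]) -> str:
    """Stand-in for a model call constrained to the retrieved passages."""
    return f"Based on {passages[0]['doc_id']}: {passages[0]['text']}"

def grounded_answer(question: str, reviewer_approves) -> str | None:
    passages = search_corpus(question)
    if not passages:
        return None  # no owned source to cite: refuse instead of hallucinating
    draft = draft_answer(question, passages)
    citations = [p["doc_id"] for p in passages]
    # Human-in-the-loop gate: a person reviews the draft and its sources.
    return draft if reviewer_approves(draft, citations) else None

print(grounded_answer("What are the payment terms?", lambda draft, cites: True))
```

In production the retrieval step would be a proper search over the firm's document store, but the shape of the gate is the point: no source, no answer, and a person signs off before anything leaves the system.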
Okay, that's interesting, and I think that's what separates a consumer model from an enterprise model, effectively. In all areas of software over the years, that's what has differentiated consumer, open source and enterprise: it's always those little bits that make it more tangible for the customer, stopping bad things from happening, or support, or bugs, or whatever it might be.

So here's a question. We've talked about the open world; the AI is obviously out there. There was a survey from Salesforce on generative AI released this week or last week, and you had some interesting takeaways from it. Do you want to talk a little bit about your takeaways from that survey? I'll link to it in the show notes, by the way.

Yeah, absolutely, and it actually ties back into what we just discussed perfectly. The survey has many interesting points, but there's one I wanted to highlight specifically. They surveyed people with a question phrased roughly as "how much do you trust AI to provide data security and the right outputs" (I'm rephrasing). They interviewed a range of people, from the C-suite down to low-level management, with senior directors and IT people in between. You would be surprised, but the higher people are in the ranks, and typically the less technically savvy they are, the more they trust AI. The further down the ladder you go, specifically the people on the ground, in the trenches, who have the responsibility to implement AI, the less they trust it. The numbers are eye-watering: 83 percent of C-suite people trust AI with data security and privacy and all of that, and they might not even know how deep the rabbit hole is, while down in the trenches it's 29 percent. It's roughly a three-times difference.

Incredible. I've got a theory on that. If I parallel it to the public cloud, when enterprises were shifting from on-premises to public cloud, if you look at where the push came from, it came from the C-suite,
and they just said, "just do it, because I've heard it's good, I've heard it's the future, let's do it." Then the people on the ground had to implement the change, do the migrations and manage the applications they shifted from on-prem to the cloud. They're the ones who were left with all the craziness, and then all of a sudden it was costing too much and it wasn't performing, and the C-suite would say, "but we thought this was going to be the future," and that's another reason they go back on-prem.

They go back, exactly. There's a back-migration happening right now.

Yeah, the repatriation. But I think it's kind of the same thing, right? Not to diminish what the C-suite is doing, but the people making the decisions at the top are typically the ones who push it down the chain no matter what, and they just give the go-ahead. So you're saying the same thing is happening now?

Absolutely, and it's interesting, because with this new technology, the GPT-4 API and other systems, you can actually prototype things pretty effectively. When you prototype, you see the art of the possible, you see this magic wand available, and then the C-suite says, "well, it's good enough, let's implement it." Then the CISOs, IT directors and CIOs look at it and say, "wait a second, we can't do that, because we'll be exposed to so much risk." And then they come to us saying, "we know it's possible now; can we replicate the same thing inside the security perimeter?" Those are large insurance companies, banks, regulators, law firms, and it plays really well into our strategy, because we tell them: what you just saw is possible, and it can now be done and replicated effectively at the edge, inside your security perimeter.
Yeah. Pivoting a little to a question about how you do this, I think this is very interesting: you've got a partnership, or a news article coming out, with Intel about how you're leveraging CPUs instead of GPUs for this large language model platform, or engine. That's amazing, because all this time I'm sitting here thinking there's a reason the Nvidia stock price has gone through the roof.

Well, they have their reasons, because it's all powered by GPUs, and training, model training, still requires GPUs. But running the model, and sometimes smaller models at the edge, doesn't have to happen on a GPU; you don't have to spend an enormous amount of money on that. So CPUs are the way, and that's one reason we got the Cool Vendor recognition from Gartner: we do the data processing and run the models on CPUs, utilizing the same infrastructure organizations already have. Instead of running it in Azure or AWS and paying them millions of dollars, you can effectively run it there using CPUs, or even inside your security perimeter, on-prem, using CPUs. And yes, there is a case study with Intel and Zero coming out; it's not live yet, probably in a couple of days, so I'll send you the link and you can put it in the notes for the podcast. It's a case study of how Zero, with some of our partners, used the power of CPU processing and the methodology we developed to run on Intel CPUs for one of our clients, and how much cost saving it produced for the client. Now multiply that by the scale at which enterprises want to apply AI to every corner of the business, every business line, in the very near future: it's going to be hundreds of millions of dollars of savings for those large organizations, because running AI applications is very expensive if you spend it all on GPUs.

Yeah, that's amazing. That's a really good feather in the cap and a great value-add for Zero Systems.
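As a small illustration of CPU-only inference, the kind of edge deployment Alex describes, here is a sketch that loads a compact open checkpoint with Hugging Face Transformers and runs generation on the CPU. The model name is only an example of a small model; Zero's own models and any Intel-specific optimizations (quantization, OpenVINO or IPEX backends, and so on) are not shown.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "distilgpt2"  # an arbitrary small public checkpoint for the demo

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.to("cpu")   # explicit: inference runs on the CPU, no GPU required
model.eval()

prompt = "Summary of the signed engagement letter:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```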
I just want to finish off, we've got about five minutes left, by talking a little bit about this concept of closed versus open-source models and what it means for the industry and the democratization of these LLMs. What's your take on that?

Well, I think the open-source part of the market will win over time. They have fewer resources in terms of money; Microsoft, OpenAI, Google and Meta can spend billions of dollars, but billions of dollars are not the equivalent of people's brains. They can spend it on compute and on other things, but the hundreds of thousands of people involved in open source can produce things faster and iterate faster, and I would say in the long run open source will win. Then there's the flexibility of those models, the ability to run at the edge and on small devices; we've already seen models running on a Raspberry Pi and even an iPhone, and that's going to improve dramatically. That's happening because this part of the market, the open-source community, has scarce resources; they don't have billions of dollars to spend on ineffective systems. OpenAI, Microsoft, Google, they have too many resources, and that's why they're not innovating in the right way.

Well, I think it's a question of trust as well. If you have a closed system, and we've talked about this through the episode, we've answered a lot of the questions around whether it's going to replace knowledge workers, what it means for efficiency, the cognitive space, but effectively, when we're talking about the C-suites, you need trust, and if a more general system is closed, not open source, I think the trust factor is always going to be questioned at that level, right? And people don't trust big tech as it is, for whatever reason, and there are reasons not to, right?

Absolutely right.

So I think that's an interesting question. From a security point of view, and from the point of view of finding bugs, there's a plus-and-minus conversation to have there.

Yeah. There is one aspect, actually, that's a great example of why open source might be more effective and why
closed systems, while not exactly unscalable, are not flexible enough. I'll bring up an example: when an AI system needs to be hooked into a legacy system through an API. What's happening right now is that when you send a prompt to a large language model, let's say GPT-4, it gives you an output, but you don't know how the model evolves, and sending a prompt today doesn't mean the same prompt will give you the same result tomorrow. In fact, it doesn't.

Exactly, and you can't build a sustainable, dependable system when you don't know what it's going to produce next time. Unlike an API, where you send a request to a legacy system and get exactly the same output back; you know it's predictable. With open-source models, where you know why the system gives you a particular reply, it's much more dependable. With a closed-source model you have no idea what it will return tomorrow, and you can't build predictable systems on that.

Great answer.
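One way to get the predictability being contrasted here is to pin an open model's weights locally and decode greedily, so the same prompt produces the same output on every run. A minimal sketch, assuming the same small example checkpoint as above; in practice you would also pin an exact model revision.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "distilgpt2"  # example checkpoint; pin an exact revision in practice

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

def answer(prompt: str) -> str:
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30, do_sample=False)  # greedy decoding
    return tokenizer.decode(out[0], skip_special_tokens=True)

first = answer("The payment terms in the contract are")
second = answer("The payment terms in the contract are")
assert first == second  # same weights + greedy decoding: deterministic output
print(first)
```

A hosted model that is silently updated cannot give this guarantee, which is the dependability gap Alex points to when wiring AI into legacy systems.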
Just to finish off, in under a minute, talk a little bit about where Zero Systems goes from here. What are you doing, what are you looking to innovate, to continue to disrupt the market in this amazing space that's just starting to take off?

You're absolutely right, there are tailwinds of generative AI, and we've been preparing for that for four years. We didn't know how big it would be, we didn't anticipate it being this big, but we knew it was coming, so we got prepared, and this tailwind is just helping us to move really, really fast. When the tide goes up, all the boats are lifted, and in this case we see the whole market evolving: more open-source models and more solutions like LangChain and others appear, we absorb them into Hercules as our engine, and of course we add our own capabilities and deliver it to enterprise clients. That whole movement is benefiting us incredibly well, and we want to see more and more of it, so enterprises can get better apps faster and solve problems that have never been solved before.

Amazing. Hey, great conversation. I really love what you're doing and the way you've described it, and we hit all the points we wanted to hit in the episode: the impact of AI, this generative world that's coming up, how it impacts workers, but also the benefits, and the biggest thing about not being scared. What I really loved was the Venn diagram and getting people to be better at what they're best at, which is all about moving forward. So hey, thanks for being on the show, Alex. Just as a final reminder, if you're not subscribed to the show and would like to hear future episodes, you can subscribe to the podcast at gtwgt.com, or head to the YouTube channel or any podcast platform. With that, this has been episode 67. Thank you to Alex and Zero Systems, and we'll see you next time on Great Things with Great Tech.

[Music]