September 16, 2024

Safeguarding AI with CalypsoAI | Episode #90

James White of CalypsoAI discusses AI security, enabling businesses to adopt AI responsibly, securely, and efficiently. Topics include protecting against AI threats like data leaks and adversarial attacks, regulatory compliance, and the benefits of using CalypsoAI’s enterprise-grade AI security platform.

Responsible, Secure, and Efficient GenAI Adoption

In Episode 90 of Great Things with Great Tech, I sit down with James White, CTO of CalypsoAI, to delve into the critical role AI security plays as organizations adopt AI solutions across their ecosystems. With the rise of generative AI, safeguarding AI systems from data leaks, hallucinations, and adversarial machine learning attacks is essential. We discuss how CalypsoAI’s industry-leading AI security platform enables businesses to utilize AI responsibly and efficiently. James also shares insights into AI adoption at scale, regulatory challenges, and the importance of governance frameworks.

Founded in 2018 and headquartered in Washington, D.C., CalypsoAI offers tools that ensure AI models perform reliably and securely, empowering companies to harness AI’s potential without fear.

Key Topics Discussed:

  • The need for responsible AI adoption at scale
  • CalypsoAI’s solutions for defending against adversarial machine learning attacks
  • How CalypsoAI integrates security into AI pipelines to ensure efficiency
  • AI hallucinations, data leakage, and the risks involved in AI model misuse
  • The evolving regulatory landscape and how CalypsoAI helps businesses stay compliant
  • Insights into AI development, deployment, and future trends

Music:

https://www.bensound.com

Transcript
Anthony: As businesses continue to adopt AI in all its forms, we need to think about how these systems interact with our businesses in a responsible way. With the rise of generative AI, keeping AI secure, trustworthy, and efficient has become more critical than ever. This is where CalypsoAI steps in. Today I'm joined by James White, CTO of CalypsoAI, to explore how they're pioneering the AI security space, enabling businesses to harness the power of AI responsibly, without fear of data leaks, hallucinations, or inefficiencies in the AI development pipeline. Stay tuned to hear why CalypsoAI is shaping the future of safe, efficient AI in the enterprise. This is episode 90 of Great Things with Great Tech, with James White from CalypsoAI.

Hey Jimmy, thanks for being on episode 90 of Great Things with Great Tech. I can't believe I'm in the 90s, but here we are, and what better way to count down to 100 than talking about CalypsoAI. So Jimmy, tell us a little bit about CalypsoAI: what the company does and what it was founded for.

James: Excellent, absolutely, and thanks for having me on, Anthony. A pleasure to be the 90th guest, and one of my favourite decades. CalypsoAI was founded way back, over five years ago, and it sounds funny now, but it was founded to look at the threat of how AI usage and adoption would create new ways for adversarial attackers to target systems, target institutions, target regular users. Back then, if you remember, it was all machine learning: no generative AI, no Transformer architectures, and so on. So when we started, it was a company that was not really required by the enterprise, if I'm being honest. It was mainly a problem for big nations, for defense primarily, and that was indeed where we started: working for the Department of Defense and the US government, tackling really gnarly problems in the machine learning space, things like adversarial attacks of that nature.

Anthony: That's interesting, and you make a point, because you're right: in 2018 no one, if they're honest, was talking about generative AI. People knew about AI in general, and it was always a thing, but fast-forwarding now to 2024, it's all just there, and it's kind of like, oh, here we go, we've actually got meaning. But when we talk about responsible adoption of AI at scale, what do we mean by that?

James: First of all, it means different things to different people, and the definition of "responsible" is sometimes mandated. If you work in a highly regulated industry, it is well defined and has clear boundaries. If you work in an industry that is not well regulated, there are upcoming regulations: the EU AI Act will start having teeth very early next year, and state by state in the US, even, each region has different levels of fidelity around what they want to protect and mandate in terms of, say, ethical AI usage. So, company to company, it really matters to understand what your responsibilities are and what you need to do to simply use AI, versus train AI models, versus use it for particular kinds of processes: making decisions, financial calls, and so on. That all sounds really scary, but effectively the most important thing to do is to grasp the nettle and say: if we're going to embrace AI, embrace the future, we need to understand what we should do and how we should safely adopt this technology.

Anthony: That's big-thinking stuff. Even today on the news here in Australia there was an announcement about the government putting in some sort of mandated guidelines for the use of AI across the board, and I believe even in the States the AI protection or future act was passed on the 31st of July, basically about how the US can stay ahead of the game in AI. So it's there and thereabouts. From experience of this technology as we've gone along (you mentioned the 90s; love the 90s as well, by the way, how good was that), technology has moved really quickly, but at every step... Think of the internet: people were afraid of the internet, people didn't know how to use it properly to start with, then it became more mainstream, and then guidelines were put in play. Think about every little step along the way. Even crypto, when that happened; that's a bit of a different story because of the way it panned out, but that was also the same "oh, holy crap, what is this, we're scared of this, we've got to mandate it." AI is definitely here to stay, without question, and so this stage of "okay, it's here, how do we control it and mandate it?" is huge. Before we get more into CalypsoAI, it would be remiss of me not to ask about yourself. You're in Ireland at the moment. How did you get to be in this space? Were you a founder of the company, or did you come in after the founding?

James: So, I was working for a company called Qualtrics, and we had just been acquired by SAP, and I got a phone call from a recruiter I trusted who said, look, you've got to look at this company. I was not a founder, but when I joined there was no code, there were no engineers, there was no product. You may ask: why the hell did you leave your job at Qualtrics and move to this really, really early-stage startup? I moved when we were pre-Series A, and the only reason was that I paired my love for cybersecurity with something I had studied last in college. It had been a long time since I'd looked at AI; swarm intelligence was the soupe du jour back then. And I said, okay, wow, this is making a comeback, and it might be for real this time. Like, for real for real. And when I saw that, of course, with adoption there would be a huge net-new attack surface, that was something I thought would be super interesting. That was my reason for joining CalypsoAI. We've been through a huge amount of change since then; it's very much a different company now, and we're now completely in the enterprise space. For me, the thing we were waiting for has happened: we were all dressed up, waiting for the doorbell to ring to bring us to the dance, and we're now here. As expected, it's wild, it's exciting. People often talk about the gold rush in technology, where everyone's rushing towards a random mountain and starting to throw a pickaxe into rock. I feel like that stage is beginning to end, and we're seeing real utilization of AI, huge investment in the space, and the ROI starting to be achieved.

Anthony: That's interesting, for sure. So what you're saying is you were a little bit lucky, but you got there, and it's borne fruit.

James: Yeah, I got on the other boat, not the Titanic, I guess, is the way I would describe it.

Anthony: That's awesome. But you mentioned your background in cybersecurity, so that was your space. Was that what you were always doing, or did you start off like most of us, doing maybe networking? I'm just interested in the career progression.

James: So, I started off in the most exciting technology area in the world: legislation. That was my first job. I worked for a startup working in legislation, replacing mainframes with, what now sounds funny, Java enterprise solutions. I moved from that startup (it was called Propylon) into a company called Dun & Bradstreet, one of the oldest companies in the world, and I worked there for a number of years before I jumped into cybersecurity as the first engineer outside of the US for Mandiant. I worked for Mandiant for a number of years; we were acquired by FireEye, which I now believe is called Trellix. It's just the weird nature of software. Then Qualtrics, and then back into cybersecurity. Mandiant was my favourite: I joined just as the New York Times was hacked by APT1, that cyber group, so it was the most exciting time to be in cyber. The roller coaster during that period was something I was keen to get back to, and I believe AI is a much bigger, faster roller coaster.

Anthony: And I guess what you've done here is taken both the love and the professionalism of your career in cyber and that development, mixed it with the future of AI, and your skill set now lends itself beautifully to this world of AI and how you actually protect companies from doing damage with the technology, or the technology doing damage to them.

James: Yeah. When people look at AI from a business perspective, they rightly look at the benefits to the company: the efficiency we can gain, the new product we can launch, the extra profit margins we can deliver. I look at protecting that investment: what are the things you need to put in place so you can realize that benefit? Working at Mandiant, all we saw all day were the black hats, the threat actors, figuring out loopholes, ways into systems: how to traverse a network, how to break into an endpoint, how to exfiltrate information. When you see that all day, every day, you develop a mindset where, when you look at a solution, you find problems. That behooves us very well at CalypsoAI: we look at somebody's program, how they're planning to roll out AI, where the Achilles heel of that approach may be, and then how best to protect against it.

Anthony: Makes sense. And I guess what you're saying there about efficiency works both ways, right? Companies are looking for efficiencies from AI, but the attackers and the attack vectors are also becoming more targeted, more efficient. It's like fighting a war where each side keeps jumping ahead of the other, and you've always got to be cognizant of the other side being better than you, because they're using the same tool set, effectively, as you are.

James: People always refer to that as the cat-and-mouse game. I like to call it the Tom and Jerry game, because to this day I love every episode of Tom and Jerry. Usually Tom, the cat, gets his way at the start, and then Jerry comes back and wins the day. We see ourselves as the Jerry in Tom and Jerry: when these attacks come in, we figure out what damage was caused and how the attack happened, then we treat it as a zero day, just like in cybersecurity, and we create solutions to defend against it. The iteration time loop to get that done is the real challenge: how do you leverage AI to battle against black hats or threat actors using AI? That's the real secret sauce in how to combat this new domain, this new discipline.

Anthony: We're getting ahead a little bit, but it's an interesting continuation of that: are a lot of your customers net new? I think about backup, where I play in the space. We talk about certain areas of backup, like replication or offsite disaster recovery, and more often than not customers don't know they need it until something bad happens and they realize, actually, I could have done with having part of my data offsite. Is this the same thing you're seeing today, where this sort of service is only being brought in, for the majority, after something bad has happened? Or is it 50/50 with your customers?

James: That's actually a really interesting question, because, being fully transparent, initially our customers were folks who had maybe been burnt a little bit, and they wanted to figure out how to re-approach the problem before they went to scale. Now I think there's enough publicity and content out there, through people like yourself getting the message across that it's a great technology but there are risks you have to protect against, that we see people taking a proactive stance. They typically start with a POC, a proof of concept, internally, where there's very little risk. If that works successfully for them, with the data sets and the use case they're using, they'll say: pause, now we need to understand how we go to production. They treat that as a stage gate before they go full bore, and when they do, they typically get on Google and look for companies that can help. So there is a proactive nature out there now, which is fantastic to see, because it represents a maturity step in the industry.

Anthony: I was going to say, that's maturity in terms of where we are in the hype cycle of this particular technology. People know they need it, it's being pushed by all the major players, everyone knows about it, and everyone's trying to develop a response to it in some way. Okay, so with that: if I'm a CEO and I come to you guys saying, I know we're doing something around AI, I know we've got some data we want to expose, I know our guys are saying we've got to use this thing called an LLM, I don't know what that is, but it's an LLM. What does CalypsoAI do at that point to come in and help that enterprise?

James: Strangely enough, we first challenge whether you need us. Are you really adopting AI for the right reasons, or are you using AI for the sake of using AI? Typically a company wants to dip their toe in the water, so they'll do something quote-unquote "safe," a low-hanging-fruit project like a chatbot or an FAQ bot. We say, look, let's not be Chicken Little here and declare that everything could be terrible; let's focus on what could actually be damaging in a simple use case like that. Even with those simple, quote-unquote "not dangerous" use cases, you can suffer significant brand damage, and sadly we've seen that in the media, where companies have brought in a chatbot and the chatbot says nefarious things about the company, or is induced to do things it shouldn't do. So that's where we start: we look at the use case they're focused on and ask how we wrap protection around it. Typically, with any AI use case, and you've already hit on this, you'll have a combination of two things: one or many applications talking to one or many LLMs, or SLMs, or any type of generative AI model. That's usually what the prototype looks like. When you move from prototype to production, you need to put policy management in between those two: what can the application ask the model, what data can it send, and what content should the application be protected from receiving? You don't want abusive content coming back; you don't want content that's derogatory to a gender, a race, a religion, a company. So there are bidirectional protections required there. And if you zoom out one click, you'll see you also need access control, log management, all the typical things that exist in infosec today. Not to derail the conversation, but that's an interesting migration we're seeing now: people treating AI as a regular software development project with some extras attached. That's the next step of maturity we've witnessed.

Anthony: That's interesting, because the frameworks are all the same. I look at it from the other side as well: we're getting asked how you back up all the AI stuff, the model, and we're like, well, we don't really have backup for AI as it stands. But when you break it down, everything is a component that can be looked at: you've got your vector database, your storage, the actual model, the compute, all the separate parts that make up the platform. So, to your point, it's really no different from any other sort of application. I think the inputs and the outputs are a bit more critical, though: having a model create a response that is dangerous, risky, or puts people in harm's way is not great. So I think that's what you're fundamentally trying to achieve: making sure that when companies start looking at how to implement this to get efficiency out of their data, they're doing it in the safest way. And I believe you do it in two ways, inline and post. Do you want to explain how you plug yourselves into the process?

James: You nailed it. Inline, we sit between you and the model. In our platform you can define as many models as you wish (public, private, open, closed) and configure them all inside. You then decide which applications have access to which models. Your application, instead of talking directly to a model, talks to our API, and our API behaves just like a normal GenAI model API. Effectively, what that gives the application developer, the user, is the ability to not have to worry: they just focus on their job, on getting the functionality or features they need from the model, and the policies you define in our platform sit in between. You can decide: for this use case we want these policies; for this other use case we don't want this, we do want that; we want bidirectional scanning, one-way or two-way. That's the main, normal use case people have. But as with any technology (if you give someone a hammer, they can hit a nail or they can break a window), folks have been using our technology in ways we hadn't predicted, and it is fun to witness this.
you know it's it's like everything with AI it's developing legs and and going in new directions but a lot of our customers Now using our scanners our our scan API to scan all sorts of things not necessarily AI content and so we've One customer scanning for arbitrary code execution in their source code right and so why do they do that because AI is awesome at detecting that stuff and so because we use AI in our platform it's a security uh tuned AI model that we have internally it's own proprietary
technology that will allow you to easily detect that stuff so it's exciting to see what people use it for yeah so as an example so at the moment like I'm I'm doing a bit of tinkering I'm I'm a horrible coder but I love to Tinker um so at the moment I've got like a little proof of concept where I've got um code it's of python um it's basically talking to an LM but then I'm getting to LM to create a bit of SQL query to go into a database and pick something up so on the
Fly it's creating a a select Bam Bam and then but that's based on the prompt The Prompt is going give me this it goes to LM LM and goes okay I'll create what I think again generate the code and then that's what gets executed right so you're kind of putting all your trust into the LM to do the right select query right now obviously that's just a select to read mechanism but it's I to myself this is a good exle something that could goong because can someone just go hey you know delete this this and that
create a career that deletes this you know and the LM will just do it because it knows no bad it knows no better right so that as an example your platform sits in between there to make sure that something like that can't happen based on the policies that you guys have got as part of your IP exactly right so in that use case you may have let's say it was a right uh SQL statement you would have a uh we we allow you to write a description so you could say something like block SQL injection right and you would have it
only on the inbound only coming back from the model so when you ask that question to your your uh gen model it will respond and if the code snippet that comes back contains code that has is liable or vulnerable to SQL injection it will block it and it will tell you was blocked because it it contained code that was liable SQL injection yeah um but we we have this uh this funny kind of um conversation internally which is um the fallacy rules right so we call them the fallacy rules because with AI there are a bunch of policies that that
people and one of them is the low hanging fruit when we spoke about already it's no there's no danger it's a it's a chat bot it can't cause us any harm um the other one is you know you you spoke about the squl uh the select statement so we have seen an example where a uh and I I won't comment on any of the details but effectively somebody asked for SQL code to do something to check records and they gave the information with the scheme of their database and the SQL that came back said
select all from logs okay and so they were like Grace read only copy paste execute and that table was hundreds of millions of Records long because it's a log table and it caused a DB lock on the connection and they they thought that okay this is uh was wrong it's it's crashed I'll try a second instance and now what they're causing is two backto back unbelievably long and I believe it wasn't an index table because it was just logs it was like a data store H kind of Dev n almost so we're seeing that the use of a even
just for chat give give your Enterprise chat capabilities when people don't understand the subject matter that interacting with um they're liable to just trust it and we don't know where this has come from psychologically but it's it's really present it's it's fascinating isn't that interesting yeah like we and I think I've talked about I had a I had a couple of AI conversations on this podcast over the last couple of years obviously and the trust aspect is something that always pops up like the
hardest part for me is to trust it but but I do right and and I think I think to myself I shouldn't like when I'm asking it to do certain things or write me this and I'm finding myself just not checking you know just just implicitly going okay it's told me that when I know that there's all sorts of hallucinations that go on you know and you only have to look a little bit to go like even today with the show notes um you know for this I I do obviously use I'm fish I want to be efficient so I do pump in the last
episode show notes and then I put in all the research and it kind of brings up the new the new show notes and it had your old platform that you said was decommissioned right to me great example right but to me I was like I was going to ask about Vester and if I hadn't queried you about it you would have it would have been an awkward question to ask because you know that would have shown me to be really a really bad researcher right but it wasn't that I'm a bad researcher it's just that I trusted that a had to give me the right
freaking question and it didn't so yeah I I get that what do what do you think about that like where's your theory why we is it because we've been told that AI for years is all about robots and humanity and the future and it's it's a sure thing and we're at the sure thing now so I think in in our my perception of it is in our heads we trust it because we've been told it's coming I I I have two um beliefs on this and one of them we we speak about a lot of course daily in work um and we we
have conversations with our customers about it as well because understanding different viewpoints helps us understand the psychology of it but one is H Dave down the pub right so um we're Irish so that we bring things back to the pub quite a bit but Dave down the pub your buddy who you know usually has his his information pretty correct but every now and again he'll say something outlandish and he'll say 100% that's true that's the fact that happened and everybody's like okay we'll go along with that and
then you might repeat that to somebody else and eventually someone's going to get caught out in it saying no that's absolutely nonsense that's not true um I I told my wife last Thursday we're watching a movie called The Trap and Josh har I've heard about that one yeah yeah it's good I'd recommend it yeah uh but I said to I I had in my head that he was Harrison Ford's son so I said it's amazing that he's Harrison Ford's son he's doing so well and my wife said he
is not Harrison Ford's son and I said wa I'm I'm 99% positive and he is not uh so you know every human being their brain can jump to a conclusion they they put data together for some reason they believe it's true and and that's very akin to what's happening with AI but the second thing is some of the world's greatest minds are responsible for this technology and we we all grew up in a NASA age where NASA were the folks who landed on the moon who accomplished amazing unbelievable things and when you
you hear about these these experts creating these new technologies you realize that yes they're responsible for this even being possible but at certain point it goes out of their control just like a rocket can explode or or you know people can get stuck on the ISS even though the people who created it were amazing things can go wrong and I think that's the combination of why people trust and they believe it's infallible that's interesting isn't it yeah I totally I'm I'm a bit of a I've become a bit of a
history buff on that time in terms in terms of technical acceleration that happened during World War II or even World War I what came out of that it's just amazing but yeah it's actually really really fny I thought when you were talking about infallibility of it all and we believe these things I went back to George castanza speaking of the 90s um one of George castanza famous famous quotes is it's not a lie if you believe it so so maybe we maybe somewhere down the track we can apply that to the George castanza rule of
AI that would be that would be really interesting all right started here it started like absolutely I've start Maybe started a few things on this show but nothing as great as that that's for sure um let's talk about data and security around that data because I think that's a big big topic right so you we've talked a little bit about how Calypso helps with coding and you know from what I just understand at at a purely basic level is you guys do a lot of the leg work to make sure that companies don't
need to worry about the bad stuff happening in a nutshell so you of control that and you've got your own playbooks and your own policies that help the companies not having to spend time and money and effort on that all good what about in the security space what about in the in the data space here and and just the General Security around Ai and generative AI how do you guys play into that yeah so there are kind of two high level categories in AI there's folks who are building models and there are folks
who are using models so inference is the usability of models and there are very very very few um you know less than 10 companies actually building geni models from the ground up and so those folks are not our targets right they they have a really really hard time um and they do amazing work and they're responsible for all of this being possible so those companies doing a fantastic job and they do their best at protecting the general use case you know so it you know it won't give you guidelines on how to commit a crime and
get away with it or things like that right so they do a great job let's call it the 80% but when you bring any of those into the inference space where 99.9% of companies will use these things they're very specific use cases and those use cases um have domain um expertise required The Last Mile you knowm required to understand how to use the model in the best way with the right data and so in there you need to understand how data is used what way it can be manipulated correctly or or incorrectly and then that Loop right so
you heard of the data poisoning and and data coming back in to to make the model better or fine tune a model and if that data is corrupted or modified to cause the model to do things in different ways or or lean it to in a certain direction create a bias um it can cause really really uh big problems quickly when at scale and so what we allow you to do is and put safeguards in place before and after your model usage in the inference space and security right now is and I think it's a really interesting question
because security has to somehow put its arms around all of this and really it's multiple uh activities as opposed to one big one but at a macro view it's it's putting the arms around the problem collectively yeah and and and that hallucinations and data leakage and model misuse is all part of that as well isn't it I mean how do you fight CU from my perspective hallucinations is um is relative to the to the type of model it is like a model that's got you know less parameters versus more parameters will
hallucinate more right I think we're saying that if if if I've downloaded and used Alarma locally and remember version two model with a 7 billion parameter thing if I asked that a question about something it would be just like it was completely made up then if you went to the to the 20 billion or 30 billion it would got better then as soon as three got released even the 7 billion was better than the 2 billion of the 30 so you know that's just basically the way that it is though right so as as a company will naturally just use the
latest model, and things will get better. But you guys are still there to basically guide them, to make sure that even when they're using the better technology, the better model, they're still going to get the desired outcomes as they continue to evolve with that model. Yeah, it's Tom and Jerry again, right? So once a new model comes out, we'll see new attack vectors. So I'll give you an example, right? We have a game we launched, it's called
Behind the Mask; you've got to try to unmask the hacker. And so we have, you know, a huge amount of people every day trying to win that game, and they try different techniques to fool our security. And so we use that to learn, we use that to grow and understand what, you know, zero-day attacks people are trying, etc. And one day there were six of us standing around someone's desk going, what the hell is this attack? And we're looking at this person hitting all these requests, and they were Fortnite, you know, the game Fortnite;
they were using Fortnite terminology to speak with the model, and none of us are au fait with Fortnite terminology, so it looked like, you know, Klingon to us. And we're looking at this, and it was breaking through the model and getting results it shouldn't get. And so we quickly, you know, created a zero-day defense against that, and now the game is better and it doesn't allow that to happen. But what constitutes an attack in, let's say, Llama 2 versus Llama 3? There's a new encoder at the start of the model.
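The encoder (tokenizer) point is worth illustrating. Here is a minimal toy sketch, assuming nothing about CalypsoAI's actual implementation: two hypothetical tokenizers, standing in for two model versions, split the same prompt differently, so an input filter keyed to one version's token stream can miss the identical payload on the next version. The tokenizers, the blocklist, and the trigger word are all invented for illustration.

```python
# Toy illustration only: why a guardrail tuned to one model version's
# tokenization can miss the same payload under the next version's tokenizer.

def tokenize_v2(text: str) -> list[str]:
    # "v2": naive whitespace split, one token per word
    return text.lower().split()

def tokenize_v3(text: str) -> list[str]:
    # "v3": hypothetical subword-style split, breaking words into 4-char pieces
    pieces = []
    for word in text.lower().split():
        pieces.extend(word[i:i + 4] for i in range(0, len(word), 4))
    return pieces

# Blocklist tuned against the v2 token stream
BLOCKED_V2_TOKENS = {"jailbreak"}

def naive_filter(tokens: list[str]) -> bool:
    """Return True if the prompt should be blocked."""
    return any(t in BLOCKED_V2_TOKENS for t in tokens)

prompt = "please jailbreak the assistant"
print(naive_filter(tokenize_v2(prompt)))  # True: caught under v2 tokenization
print(naive_filter(tokenize_v3(prompt)))  # False: split into "jail", "brea", "k"
```

A more robust guardrail would inspect the decoded text itself (e.g. `"jailbreak" in prompt.lower()`), which holds regardless of tokenization; in practice, defenses also have to be re-tuned for each model version, which is the Tom-and-Jerry loop James describes.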
How the model interprets English text, or any other language's text, changes version to version, and these attackers sit there all day trying new techniques at scale, and they find ways to get through. So it is this Tom and Jerry effect all the time. Yeah, those bastards, eh? Always trying to get in there. But that's a really interesting one; so the Fortnite terminology was what allowed them to trick their way through, basically? Yeah, and if you think about the good people in Meta
creating safeguards, right, in their model, and they're doing their damnedest, and they're top-flight, top-tier people, yes. And, you know, they come home that night and they go to their partner or whatever saying, somebody's using Fortnite terminology, can you believe that? And their partner's going, look, I don't really care why you're telling me this story. But it is fascinating to those of us in the industry that it could be anything, any attack from any
direction. That's right, and they're so smart these days. I mean, they've always been smart; there have always been smart, cheeky people, and there have also been people that want to see the world burn. Yeah, but the common denominator is that they're tenacious, and they just want to effectively either have fun or do it for a certain cause or a certain outcome. And if anybody's getting any source of monetary benefit out of it, or even if it's just their own sort
of inner sense of achievement, they're going to do it, right? That's the whole basis of the whole cyber threat landscape today, whether it be in AI or whether it be in general IT as well. So it's a really good place for you guys to be, to be able to help stop that. All right, let's finish off talking about the broader landscape of AI. Where do you see it generally today? We haven't got too much time left, so in a couple of minutes: where do you see this going? What's CalypsoAI going
to be dealing with in 18 to 24 months in the AI workspace? I sadly believe that money makes the world go around, so I use that as my major crystal ball. So if you look today at who's making money off this, it's Nvidia, clearly; they're making huge profits, and rightly so. But then it's the power companies; did you see that report? The energy providers are making huge amounts of money, because sadly AI requires a lot of energy to run. And then if you think about those two green bars above the line in terms of
profitability, you've got the major model creators, the foundation model creators, in the red, right? A huge, deep red bar that is all of this investment going in, and, you know, sadly, with the economy, etc., there has to be ROI. And one step beyond that is all of these applications and use cases that will use those models, and that will bring that red up into the green and then accelerate into the future. The gap between those right now is enablement: what do you need to take a
model, with all those risks, and get some profit out of it? And that's things like security, so we see businesses like ourselves being the enabler for that. But then into the future, we're seeing a really interesting trend. As I said, everybody's moving to inference. You used to have to spend a bunch of money on data science and ML and GPUs; now you can get access to the world's top models for pence on the dollar, or you can download open source. So we see there being a dual outcome. We see open source models
being hosted internally, for privacy reasons, etc., and the cost of GPU utilization coming down, with Groq, with LPUs, etc. And then we also see kind of a proliferation of these proprietary models, like OpenAI and others, where the cost gets really, really low, and it truly becomes ubiquitous technology, just like an SDK, for everyone to use. Yeah. So when you see that, though, you'll have this huge advent of AI, and then something weird will happen: AI will disappear. And why will AI disappear? It will become fully normalized;
no one speaks about, you know, Java frameworks or JREs or runtimes anymore, because they're normalized. Yeah, that's interesting, isn't it? I mean, it's already taking the first steps, like, you know, Gemini going into all the Android phones; Apple's about to announce that as well. We should probably be more scared of that, but we aren't; again, that trust thing. But yeah, it'll just be a new thing, right? It'll be a new thing.
What's really interesting was what you said there about the power usage, and the people in the green versus the red and whatnot. The parallels with blockchain technology are so interesting in that sense as well. Like, for a while there we were so enamoured with blockchain and where that was going. I still think it's got a certain place and use case; I don't think it's dead. I think, you know, it'll have its day in the sun in some form again. But obviously with AI, you know, it's funny, AI
doesn't get the same amount of, what's the word, focus on the greenery of it, you know, or the non-greenery of the power consumption. The same freaking cards; the same reason why Nvidia was doing so well is because they were using the cards. Anything, I tell my kids, anything that generates heat costs money. Yeah, yeah. If you can touch the lamp and it's hot, it costs money; that's what I tell them, you know, so turn it off. All right, the last question is on CalypsoAI: where are you guys going? Where will you guys be?
What's the next big thing for you guys in the next 12 to 18 months, or even sooner? Yeah, so I guess for us, the biggest thing we're there for is to help security teams deal with this huge new problem, not just in terms of the scale of the problem; there's also a huge resource problem in infosec at the moment. These talented people can only do so much, and what you want to do is not try to replace what they do; you want to give them the tooling to allow them to do more with the expertise
they have, in their discipline and their domain. And so our current goal is to give those people the tools they need to work with. Secondly, we're going into red teaming, which is really, really interesting. We've identified 10 or more categories of techniques to red team models, and we're using an agentic approach, so using agents with models to do that. And then lastly, it's the GRC, so it's the regulation and compliance side. Everyone's looking at the EU AI Act as if it's something that
stifles innovation. In fact, my viewpoint is that if you know where the safeguards are, you can run as fast as you want, because you know where not to trip over; it's like signposts on the road versus something slowing you down. And when you build an application, the one thing you've got to be sure of is that you're staying within the rules. And that's the last part of our pie chart: we've already done defense, offense is in progress, and then it becomes the regulation side. Awesome, great stuff. I
love all that. I think it's very well-rounded, with lots of areas of the business which lots of companies will tap into, because it is so important, and as it becomes more mainstream, the regulation I think is something that's going to be huge. It's a really interesting company, and I'm glad we had this; this was a great conversation. I really am interested in what you guys are doing, and I think people have gotten a really good, different outlook on the generative AI world and what you guys do in it. So, Jimmy,
thanks for being on episode 90 of Great Things with Great Tech. Thanks so much. My pleasure. Hey, just as a reminder, thanks for listening to this episode. Stay tuned for more episodes, where we continue to highlight companies and technologies shaping our world. Don't forget to follow us on social media at GTwGT Podcast and visit gtwgt.com for more great content and all past episodes. If you enjoyed this episode, make sure to subscribe on your favorite podcast platform and on YouTube. Please spread the word, and if you feel like it, drop a review. Thanks for joining us, and we'll see you next time on Great Things with Great Tech.