AI for Dummies | Chat with an Expert Who Sold Two Firms

Mar 25, 2024

Notes

Welcome back to our latest podcast episode, where we dive deep into the world of artificial intelligence, specifically focusing on the rapid evolution of large language models (LLMs) and their implications for the future. I'm your host, and today we had an enlightening conversation with Steve, an AI expert from Brex, who shared his insights on the development, training, and potential of these groundbreaking technologies.

🧠 Episode Highlights:

The AI Explosion: Discover how the field of AI has rapidly expanded from the early days of OpenAI's eight-year journey to the daily emergence of new, cutting-edge large language models.

Decoding AGI: We tackle the elusive concept of Artificial General Intelligence (AGI) and explore what it truly means to create a machine that thinks and acts like a human.

Behind the Scenes of AI Training: Ever wondered how AI models learn? We break down the complex process of training AI in a way that's as easy as pie.

The Future of AI Integration: From the potential of AI in our smartphones to the role of AI in enhancing business processes, we discuss the exciting possibilities that lie ahead.


📩 Feedback & Contact:
We value your feedback and questions! Leave us a comment below or reach out at: andy@distribute.so

🎙️ About the Channel:
Our channel is dedicated to uncovering the stories of entrepreneurs who are changing the game. From tech innovators to wellness pioneers, we bring you the insights and behind-the-scenes looks at the journeys of today's top founders.

🔔 Hit the bell icon to stay updated with our latest content and dive into the world of entrepreneurship with us! 🔔

Transcript

Andy Mewborn:
OpenAI has been working on their thing for, I think, like eight years or something. Now I see these large language models coming out like every freaking day, like someone has a new open source one. So my question is: have they also been working on these for eight years, or are these people using OpenAI's model to build their own models?


Steve Krenzel:
Back in 2016, that kind of era when the transformer was just coming around and people were just starting to get a sense of language modeling, everything at the time was open source. And then suddenly the language models started getting bigger. They started becoming more expensive to train.


Andy Mewborn:
What is AGI, and how would you describe that to me like I'm five? Because I think a lot of people have different interpretations of what that may be. But how would you interpret it?


Steve Krenzel:
Yeah, so I mean, the short answer is there's no well-defined definition for what AGI is. It's like asking, why do we fall in love? Everybody's going to have their own answer.


Andy Mewborn:
When Instagram came out and you were able to take pretty photos and put filters on them, I think some people's initial reaction was like, oh, photographers are going to lose their jobs, photographers are going to become a commodity. And actually what happened is, no, it just created more photographers. So let's talk through some AI stuff. You're at Brex, and for people listening, you're like the AI guy at Brex. You wrote this whole guide on, basically, AI models for dummies that blew up online on Hacker News and other places, right? And so what I want to do, for my own sake and for everyone's sake, is get into what we were describing on our last call, which is how this AI actually works. And it's funny, because I'm building my second AI company, I've used the API for GPT, but do I even understand what's really going on under the hood? Not really. It's still like magic to me. So last time, we were talking about how basically what the AI is doing is predicting the next thing. It's saying, hey, we have this input, and now we're going to give an output, and then based on that initial output, we're going to predict what the next thing is. And I probably just butchered that, so why don't you describe it to us?


Steve Krenzel:
I mean, fortunately, the cool thing with modern large language models is that the systems and abstractions we've built around them are so pleasant to use now that you don't even need to know how any of this works. But if you ever hit a roadblock with them, it's useful to know what's going on behind the scenes. It'll help you reason through how to make them perform better. So the gist of what a large language model does is it takes a sequence of words and just tries to guess what the next word is going to be. And if you're able to do that really, really well, it turns out you have an intelligent machine. And that's kind of what we saw. We used to have really, really small language models. When Google invented the transformer, which is the big primitive in these language models that unlocked all this power, they were just using it for translation: take a sequence of French words and output a sequence of English words. That's actually called sequence-to-sequence modeling: you're taking one sequence and outputting another sequence. And that motivated a lot of this. Then we continued doing variants of that, where we said, oh, give me a sequence and just predict the next word in the sequence. There's a whole bunch of variants of that. But then we took those models and kept scaling them up. For a while they were just kind of gimmicky, where it's like, oh, you can generate random-sounding text that kind of looks like English. But eventually we made the models so large that really, really interesting abilities started to emerge, and kind of unanticipated ones. In the GPT papers, specifically around GPT-3, they really start getting into how, if you just make this big enough, really cool things start happening without even trying to have the model explicitly do those things.
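For readers who want to see that "guess the next word" loop concretely, here's a minimal sketch using the fully open GPT-2 weights (which Steve mentions later in the conversation) via Hugging Face's transformers library. The prompt text is just an example.

```python
# A minimal sketch of next-token prediction with the open GPT-2 weights.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The transformer was originally invented for"
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # [batch, seq_len, vocab_size]

# The model's guess for the next word is just the highest-scoring
# vocabulary entry at the last position.
next_id = logits[0, -1].argmax()
print(tokenizer.decode(next_id))

# Generation is simply doing this over and over: append the guessed
# token, then predict again.
out = model.generate(input_ids, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(out[0]))
```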


Andy Mewborn:
Interesting. Okay, so this is crazy. So it basically started from these translators, essentially: translating and predicting, okay, here are the French words and here are the English words correlated with them. And then what I'm saying is, man, I feel like one of these new LLMs, or large language models, is coming out like every day. But the big question that I have is: OpenAI has been working on their thing for, I think, like eight years or something, or like eight years and a couple of days. I saw Greg, the CTO guy, post "eight years and one day" the other day, you know. And now I see these large language models coming out like every freaking day, like someone has a new open source one. So my question is, have they also been working on these for eight years, or are these people using OpenAI's model to build their own models? Like, what's going on here? Because I'm a little like, how the hell does everyone have an LLM now? I thought this was crazy. And now it seems like it's becoming commoditized.


Steve Krenzel:
Right, yeah. Well, it's kind of interesting, because the ecosystem has shifted a bit. Back in 2016, that era when the transformer was just coming around and people were just starting to get a sense of language modeling, everything at the time was open source, because all of this is rooted in academics, and academics love publishing. So you could always read their papers, and there was an up-and-coming open source community around language models. And then suddenly the language models started getting bigger. They started becoming more expensive to train. Instead of training it on your home PC, you now needed a cluster. And real money started getting involved, and real business value started getting involved. So then you saw this wave of proprietary models. And the biggest models, the most successful models, still are very much proprietary. You've got the Anthropic Claude models; Anthropic was founded by a bunch of folks that left OpenAI. You've got, obviously, the OpenAI GPT models. You've got Gemini over at Google. Amazon has a couple models, although Amazon has also open sourced some of their models, and Google has open sourced some of theirs, too. But for these really, really large models, the reality is that even if they were open source, most people, most entities on the planet, would not have the money to run them anyway. But one of the really cool things that happened was Facebook open sourced Llama and then Llama 2, and those were phenomenal, state-of-the-art models for their sizes. And because they're open source, people could take those and make improvements to them. Also, now that this architecture is understood, we're finding new ways to train these models more cheaply or more efficiently, or get better results from the same size model. So what you're really seeing is that there's so much vested interest in large language models right now that we're experiencing this explosion of innovation and creativity, where every week it seems like there's an announcement where somebody says, oh, we have this new small model that performs as well as a model twice as large or five times as large. We're in this wonderful era of rapid innovation.


Andy Mewborn:
Yeah. And I just saw this French company raised a crap ton of money. What was it called? It's in France, I don't know, you might know the name. Something with an M, I believe. But they just raised some crazy amount of money.


Steve Krenzel:
Yeah, the name escapes me, but I believe there's this great movement in the EU around open source models and not having these locked up in proprietary companies. It is kind of interesting: OpenAI started as a non-profit, and the whole goal of OpenAI was to make sure that advanced AI models don't get locked up inside proprietary big businesses. And then a couple of years in, they switched to a for-profit, or what they call a capped-profit company. And now it's locked up within their walls.


Andy Mewborn:
But okay, do you know? This is another question, and I'm sure you've researched this. How the hell did they go from being a nonprofit? Do you know what went on there? Or are you still kind of dumbfounded by that?


Steve Krenzel:
I mean, I have a lot of opinions and thoughts on that. From a legal perspective, it seems bizarre that a non-profit could suddenly become a for-profit entity. But it seems like the non-profit entity remained, and then there was a legal entity subsidiary that was for-profit, and so the non-profit oversaw the for-profit entity, and that gave them some legal coverage there.


Andy Mewborn:
Yeah, because I think Elon Musk invested a lot of money, but he owns no percentage of it, because he invested in the non-profit arm. And I kind of feel like, I mean, he's written some tweets that I've looked at, and he's definitely a bit bitter. He's bitter, which is why now he's doing Grok and Twitter and all this. And I would be too, dude. If I threw in a hundred million, well, I don't have a hundred million dollars, but if I did, and then it was suddenly a for-profit and I didn't have a piece of it, I would be like, what the heck's going on here? So, I mean, what are your opinions on it? Like you said, you had some.


Steve Krenzel:
I don't know, I'd say I'm of a mixed opinion. I have a ton of respect for the folks over at OpenAI. I'm pretty close with quite a few of them, and they have single-handedly moved the world forward on this front. And part of it is that the models have just become so expensive to train, especially anything of the sophistication of a GPT-4, that it's tough to make the non-profit model work. If we were still in the era of models that fit on your PC and that you train on your PC, you could stretch $100 million for a very long time. But when you're spending tens of millions of dollars per training run, $100 million doesn't get you very far. And so then it's like, well, if you have this goal of making sure AI isn't abused or only accessible to a handful of corporations, how do you make that work? There are a bunch of different paths you could take, and with the path OpenAI took, you know, there's debate there. Earlier on, they would say, oh, we can't open source GPT-3 because it could be abused for generating spam or something. But now these models are so readily available, and we've found that that abuse really hasn't happened. They need some way to make money off of their innovations so they can keep innovating, but it'd be kind of cool if they would open source the N-1 version of the model. So when they launched GPT-4, it would have been cool if they open sourced GPT-3. Because GPT-2 is completely open source: when they trained GPT-2, they uploaded it to Hugging Face. If you don't know Hugging Face, it's like the GitHub of AI. So with GPT-2, you can download those weights. It's completely open source; OpenAI never tried to constrain it. It was really only with GPT-3 where they started putting it behind the...


Andy Mewborn:
And when you say GPT, or training, the question that I'm asking, and I'm sure other people are going to ask, is: what goes into training? I'm sure this is a big freaking question, but if you were to explain it to me like I was five, what goes into training these AIs? How would you explain that?


Steve Krenzel:
Yeah, I mean, training is really the secret sauce, or it's both the secret sauce and also the expensive part. There are kind of two phases when you're talking about models: the training phase and the inference phase. The inference phase is the phase we're familiar with: when we give it a question and it gives us an answer back, we're using the model. That second stage is relatively cheap. You can download Llama 2 and run it on your Mac laptop; that's a large language model, but you can run it with a dozen or a couple dozen gigs of RAM. But the training side is really, really expensive. You need a whole compute farm, and not even just stock servers with CPUs; you really need a GPU farm, and GPUs are incredibly expensive, which is why NVIDIA's stock has been doing so well for the last decade. I've never bet against NVIDIA. Basically, you're taking this large language model with billions or hundreds of billions or trillions of parameters, and initially the parameters are all just random. Then you give it a sequence and you ask it what the next word is. And early on, it's just guessing; the word is going to be completely wrong. But once it makes that wrong guess, it goes back and tweaks all those numbers a little bit, and the next time it guesses, it'll be a little bit better. And then you just do that trillions of times, and those numbers keep getting a little bit better, and eventually it gets very, very good at taking a sequence of words and predicting the next word. It's a very expensive, very time-consuming process, but it works incredibly well. From a mechanics perspective, it's actually not that complex; it's really just about scale, and years of research that have made it not complex. So that gets you the base model that predicts the next word. But the model that we interact with, the ChatGPT model or whatever your model of choice is, they do an additional step, or an additional couple of steps. Learning how to predict the next word in a Wikipedia article or a Stack Overflow post or a Reddit thread or a tweet doesn't really train the model to answer questions. So then they do what's called instruction tuning. In that first training run, the model just learns general knowledge about the world, and there's some evidence it actually does a little bit of world modeling of sorts. But then they do this instruction tuning, where they kind of invert things: instead of saying "Christmas happens in December," you can ask "What month does Christmas happen in?" and it will answer, oh, Christmas happens in December. Basically, instruction tuning just makes it work better with instructions. And one of the cool innovations OpenAI had is called reinforcement learning from human feedback. That sounds fancy; you'll see it abbreviated as RLHF. In an ideal world, we would have a human sit down and say, here's my question and here's my perfect answer. Like Encyclopedia Britannica, which used to hire tons of editors and writers to write these perfect encyclopedia entries. It'd be so cool if we had enough humans to write trillions of those entries, every single one perfect, and then we could train the model on those.

But the reality is we don't. And even if you wanted to do that for a small subset of entries, it's very time-consuming and very expensive. So reinforcement learning from human feedback is a way where the model outputs a couple of answers, maybe three, maybe five. And rather than having the human author an answer from scratch, the human says, actually, answer number four is the best. Not a perfect answer, but better than the other answers you provided. And then they do that a whole bunch of times. So it's basically another stage of the training process, but now a human is involved in it. And some of the amazing results that you see with ChatGPT really come from that stage. Even on smaller models, we see that reinforcement learning from human feedback meaningfully improves the output. It gives it that human touch: it improves the quality of the output, but it also makes it feel like you're engaging with a human rather than a machine.
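Here's a minimal sketch of the pretraining loop Steve describes: guess the next token, measure how wrong the guess was, nudge every parameter a little, and repeat. The tiny model and random "corpus" here are stand-ins for illustration; a real GPT is a large transformer trained on real text.

```python
# A minimal sketch of next-word pretraining. The model and data are
# stand-ins; in reality this is a huge transformer with billions of
# randomly initialized parameters, fed slices of real text.
import torch
import torch.nn.functional as F

vocab_size, dim = 50_000, 512
model = torch.nn.Sequential(
    torch.nn.Embedding(vocab_size, dim),  # stand-in "language model"
    torch.nn.Linear(dim, vocab_size),
)
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

def training_step(tokens: torch.Tensor) -> float:
    """tokens: a [batch, seq_len] slice of the training corpus."""
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict each next token
    logits = model(inputs)                           # [batch, seq-1, vocab]
    loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()  # work out which direction to tweak every number
    opt.step()       # tweak them all a little bit
    return loss.item()

# "Do that trillions of times": loop over the whole corpus, repeatedly.
batch = torch.randint(0, vocab_size, (8, 128))  # fake data for the sketch
print(training_step(batch))
```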


Andy Mewborn:
Interesting. And so they're still doing that kind of human interaction in order to train these models?


Steve Krenzel:
Absolutely. And part of that also involves what's called alignment. Alignment can mean a lot of things, but in this context it's usually about making sure the model isn't doing things that you don't want it to do. So if you ask ChatGPT how to make napalm, it won't tell you. It'll say, oh, I can't help you with that question. Or if you want it to do something malicious or derogatory, ChatGPT will refuse. So there's this alignment phase. But sometimes, and this is a made-up example, suppose you wanted to build a spam detector, and you said, hey ChatGPT, I want you to generate a thousand examples of spam so I can test my spam detector. It might refuse to do that. And sometimes that's frustrating for folks, because sometimes you want an unaligned model. There are a lot of very legitimate uses for an unaligned model. But the liability seems to be too high for OpenAI, or really, I think, any other proprietary model, to just give you that unaligned model.


Andy Mewborn:
Man, I have so many questions about this. There are the scary parts of AI, which is the thing you just mentioned: the unaligned models are the ones where you let it go rogue, right? Because it's been trained on the internet, it potentially does know how to tell you how to create dangerous things. But it's been aligned not to go there, because, hey, we don't do that; it's a liability. But are there models out there that will let you do some crazy shit like that, that can be scary? And is this why there's that regulation discussion, and why people are saying we should regulate AI?


Steve Krenzel:
Yeah, I mean, this is the crux of the issue. I think a lot of the concerns are pretty overblown. We're certainly a long way away from artificial general intelligence, AGI. And really, for a malicious actor, there are a lot of ways to find out how to make napalm. If you want to make napalm, we have Google, we have the internet. There are ways.


Andy Mewborn:
Someone who wants to make it is going to figure it out anyway, right?


Steve Krenzel:
And just like so many things in life that are mostly used for good but have some bad potential, that doesn't mean you ban those things. Especially since we're in such early days of this boom of innovation, it feels premature to try to dampen it, because we're in this period of compounding growth. As you mentioned earlier, every single week, every single day, sometimes multiple times a day, there are these huge things dropping. My biggest fear would be, you know, people have drawn analogies to nuclear, where we unlocked the power of the atom and then used it for some bad things, but we also simultaneously learned how to generate really cheap, more or less unlimited power from it. But fast forward a couple of decades, and it's very difficult to use it for the good purposes, because we were so worried about the bad ones. And I realize this is a contentious statement, but just think about where we'd be on the global warming front if humanity had embraced nuclear power earlier. So I'm worried that we're so worried about the downsides of LLMs that we're going to prematurely dampen their potential.


Andy Mewborn:
Yeah, I'm with you on that. It's like, why are you trying to dampen something we don't even know yet? I was going to make a dumb analogy, but it's like if I'm on my way to running a freaking sub-three marathon and you say, oh no, we're going to prevent you from doing that, for who the hell knows why. Why would you do that, when it could progress people so much?


Steve Krenzel:
Well, it's also worth saying: if you remove machines from the conversation, we have legal frameworks for addressing these exact same things. And this is a little bit of a tangent, but adjacent: similarly, there was a lot of concern about generative art ruining copyright. Like, oh, you could take a generative model, Stable Diffusion or DALL-E or whatever, and generate Mickey Mouse or Iron Man, and what does that mean for copyright? But if you remove the machine from the conversation, humans can also draw a picture of Mickey Mouse or Iron Man, and we have a legal framework to make sure they don't monetize that; it's handled in a very specific way. It just seems like we're maybe a little quick to ban the machine, because that's a very easy but kind of blunt response. Because there is so much creative potential, both with LLMs and with generative art more broadly. It has unlocked this whole new world of art and is inspiring artists. And that divide is kind of interesting to see.


Andy Mewborn:
Yeah, man. Let's go back to this AGI thing, because you mentioned AGI, and there was this whole OpenAI drama that went on a couple of weeks ago. Some people were speculating about what the issue was: did they reach AGI? Yeah, Q*. And with Q*, no one knew, they didn't tell anyone, and there was all this stuff. What is AGI, and how would you describe that to me like I'm five? Because I think a lot of people have different interpretations of what that may be, but how would you interpret it and describe what it is? Because OpenAI has said that's their goal, to reach AGI, right? So what the heck does that mean, and how does it relate to how we use ChatGPT today?


Steve Krenzel:
Yeah, so I mean, the short answer is there's no well-defined definition for what AGI is. It's like asking, why do we fall in love? Everybody's going to have their own answer, you know? And maybe five years ago, somebody might have said, oh, the Turing test. You know, Alan Turing, the amazing computer scientist, the cryptographer, the code breaker. He posed this test, the Turing test, which, in short, is basically: you're having a conversation, communicating through a screen with another entity, and you don't know whether that entity is a human or a machine, and the conversation goes on long enough. So think about messaging something, and you don't know if you're messaging a human or a machine. If you're able to hold a conversation with that entity for, say, an hour, and you can't distinguish whether it's a human or a machine, then it passes the Turing test. So we used to say that if a human could have a conversation with a machine for an extended period of time and not be able to discern that it wasn't talking to a human, that was the closest thing we had to a test. And then we just flew past that so quickly. As soon as the GPT models started picking up speed, we got so far past the Turing test that it's kind of laughable to think we once thought that was a valid test of AGI. But three to five years ago, people would have said, yeah, that's still a valid test. We're so far past it now that nobody even talks about it. It's kind of remarkable how quickly we moved past it, and nobody even acknowledged that we moved past it. I don't think we have a particularly good test for AGI now. It's one of those "you'll know it when you see it" kind of things.


Andy Mewborn:
The way I understood it, it's kind of like what AI agents are doing, right? There's this description of AI agents where basically you could give the machine, or let's say ChatGPT, a task: hey, I need a marketing plan. And instead of just laying out the marketing plan for you, it would actually go do all those tasks for you too. And so that's what I kind of understood AGI to be. But I don't know if that's truly what it is. Maybe that's the in-between of what most people think AGI is going to be. I think of it as a super robot machine that's basically like a human doing its daily routine, but without a consciousness, you know? That's what I think.


Steve Krenzel:
That spectrum kind of keeps moving. Yeah, if one day you had a machine that you could reliably give instructions to, say, okay, come up with a marketing plan and email my customers and all of this, and it goes and executes on that, then within a month or two you'll get so accustomed to it that it'll be no different from any other machine in your life, or any other piece of software. In the same way, I mentioned the Turing test, but people thought there were other valid tests too. We once thought that if a machine could play chess better than a human, that would be a good sign of general intelligence. How could a machine think through all the complicated scenarios present in chess? But we blew past that ages ago. And we keep coming up with new tests, and then we blow past them, and we're like, oh, actually, this thing still isn't as capable as an average human. We keep making more and more capable machines, but once we invent the machine, suddenly it doesn't seem as magical as it did before it existed.


Andy Mewborn:
Yeah, it's kind of like the dopamine hit wears off quickly, right? When ChatGPT first came out, you were like, holy shit, this is crazy. Like when GPT-4 came out, or 3.5, whatever. And now it's just, oh, use ChatGPT. It's a tool that you use in your day-to-day life. So I wonder if there will ever be that moment again where it's like, holy shit, this is crazy. And for me, I think that's going to be the day when I see a robot walking around and it's pretty much acting like a human, you know? It asks, "How are you?" and it sounds like a real human, not a robot, and I'm like, "Oh, I'm good," and it's talking to me, like, "Oh yeah, we should grab lunch sometime." That's when I'm going to be like, whoa, dude, this is real. That's like Terminator shit, though, you know? And I think we'll keep getting closer and closer approximations, and at some point...


Steve Krenzel:
At some point, I guess, you stop pretending. How much of a pretender, or how good at pretending, do you have to be before you actually are the thing? Fake it till you make it. We'll certainly have machines that can have conversations with you and pretend to emote and pretend to be a friend. I suspect we'll hit that milestone before we have real AGI, but it's a very philosophical question, and you'll find people debating the nature of intelligence going back thousands of years. So it'll be interesting to see. Maybe we're coming to a head on that discussion. Maybe we're almost there.


Andy Mewborn:
Yeah. I mean, Q*, OpenAI has named it, right? So what is their interpretation of AGI, I guess, from the Q* perspective? What is Q* to them? Is that like the agent model that we talked about, or what the heck is it to them? Or are they still trying to figure that out as well?


Steve Krenzel:
Yeah, and even there, I don't necessarily know if they would claim that they've solved AGI. Maybe they made a leap closer to it. They haven't really announced anything about Q* other than the name. Some people have speculated based on that name: there's a whole field of deep learning called Q-learning, and there's also a very popular classical algorithm called A* that's for searching through many paths. So people are speculating that it's some combination of these two algorithms. And if that's an accurate take on the name, you can kind of infer what they're getting up to over there. It'll probably result in a more capable model, but I don't think 2024 is going to be the year we have AGI.
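To make the speculation concrete, here's the textbook update rule behind the "Q" half of that guessed name. This is purely illustrative of classic Q-learning and says nothing about what OpenAI has actually built; A*, the other half of the speculation, is a graph-search algorithm that uses a heuristic to find the best path.

```python
# Illustrative only: the classic tabular Q-learning update. Nothing here
# reflects anything OpenAI has disclosed about Q*. Q[state][action]
# estimates the long-run reward of taking `action` in `state`, learned
# from trial and error.
alpha = 0.1   # learning rate: how far to nudge the estimate each step
gamma = 0.99  # discount factor: how much future reward matters

def q_update(Q, state, action, reward, next_state):
    best_next = max(Q[next_state].values())  # value of the best next move
    target = reward + gamma * best_next      # what this step now looks worth
    Q[state][action] += alpha * (target - Q[state][action])

# Toy usage: two states, two actions, all estimates starting at zero.
Q = {s: {a: 0.0 for a in ("left", "right")} for s in ("s0", "s1")}
q_update(Q, "s0", "right", reward=1.0, next_state="s1")
print(Q["s0"]["right"])  # nudged toward the observed reward: 0.1
```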


Andy Mewborn:
Yeah. Okay, got it. So the next thing I've been thinking through, Steve, is that there's a crap ton of people slapping AI onto their products today, right? Or starting from scratch and building basically API wrappers: products built using OpenAI and all that. Who do you think are going to be the winners in this AI race? And the reason I ask is, who were the winners in the mobile app era? It was Instagram slash Facebook, because they bought Instagram. It was Uber. Those were a couple of big winner examples. TikTok, right? Though I guess that was later in mobile. But who do you think is going to win this race, and what is going to be the profile of those companies?


Steve Krenzel:
Yeah, well, continuing the analogy of the mobile revolution: from a certain lens, everybody was a winner, because we suddenly had these super powerful supercomputers in our pockets. And as those individual companies were succeeding, suddenly we could get around cities faster, we could check our email more often, we could always be within reach. Maybe we're actually the losers there, because we can't escape our work now.


Andy Mewborn:
I don't know. I'm trying to get that screen time down. I feel like a loser every time I see that one.


Steve Krenzel:
From a certain lens, we've kind of all won. And I think this is an opportunity where every business will benefit, especially in internal processes, in terms of employee efficiency and process efficiency. There are a lot of opportunities for companies to either amplify the output of their team, or improve margin, or improve quality or reliability. There are so many opportunities for your internal processes to improve: anytime somebody is reading or writing anything, there's potential for an LLM to help. In terms of products, that would be speculating. I will say we should be thinking of LLMs as a fundamentally new tool, much like the phone was a tool. And then it's like, oh, you can make phone calls on a phone. And then, oh, you can check email. Oh, you can browse the web now. Oh, there's an app store now. Look how quickly that escalated. Speculating where it will go with LLMs is a little dangerous, but in my mind, I draw a lot of analogies to the late-70s, early-80s personal computer revolution, where we were coming up with all these new paradigms, not only putting computers in every home but also thinking through keyboards and mice and how humans interact with computers. Much like the phone changed how humans interact with computers, I think LLMs will fundamentally and permanently change how humans and machines work together. From that lens, it's like asking, how will the mouse change computing, or how will the keyboard change computing? I think there's unbounded potential there.


Andy Mewborn:
I think what it's going to come down to, and someone said this in a great quote, is like whoever invented the refrigeration process. That was a new technology, refrigeration, and no one knows who invented it. But the people who made money off of it were Coca-Cola: the people who learned how to use refrigeration to build a business around it. It was the people who used the technology and distributed another type of product with that innovation, which I thought was interesting. So I kind of look at it like that: GPT is the refrigerator. Right now, OpenAI is creating the refrigerator. They're creating some of their own use cases, with the enterprise stuff and the GPTs, I think they just call them GPTs. So they're allowing people to create their own mini Coca-Colas, to use that analogy. But in thinking a lot about this, I think what's going to happen is that the people who win with this are the ones who learn how to take this GPT and put controls around it so it can be used in existing workflows.

So the way I think about controls: I look at it from a sales and marketing perspective, and you deal with finance stuff, so you probably see other cases. With GPT, there's the creative phase, where you can say, hey, give me a creative photo, and it's one of the most creative things you've ever seen. And you're like, holy shit, that's a great photo, made in 10 seconds. So it's this thing that's created huge creative potential. But now what we need to do is bring it into specific use cases and put controls around it that teams can use, from a B2B perspective, controls that work for what that company is trying to achieve. From a sales and marketing perspective, we're creating a product, and we're putting controls around it: okay, you can create this type of marketing content, but not that type; you can create stuff that's on brand, but you can't create stuff that's off brand. So it's almost like, how are people going to create controls around this, and what are the use cases for those controls? That's where my head's going. I could be completely wrong, but personally, I don't think the type of software we're all going to use is going to be that much different from what we use it for today. I just think it's going to be more efficient, because of the controls you can put around the existing software. That's my refrigeration and Coca-Cola take: how do you figure out how to use it in a way that lets you create a brand and a product around it, where the refrigeration is just part of the process, you know?


Steve Krenzel:
Yeah. And I suspect there will be companies that probably don't even exist yet that are able to leverage LLMs in ways we don't yet foresee. They may very well be the Coca-Cola of the refrigerator era. In terms of companies that exist today, I'm really hesitant to pick winners, just because there's so much churn, especially in the early days of innovation. If you were in the 80s, you'd say, oh, DEC or Atari, all these big companies that were dominating. Then you fast forward a decade, and the landscape is Dell and Gateway and Compaq. And then you fast forward another decade, and it's none of those companies. So who knows what the landscape will look like in 10 years.


Andy Mewborn:
Yeah. The other way I kind of think about it: when Instagram came out and you were able to take pretty photos and put filters and all that on them, I think some people's initial reaction was, oh, photographers are going to lose their jobs, photographers are going to become a commodity. And actually what happened is, no, it just created more photographers. In essence, that's what it did. It created, almost like on a bell curve, the real photographers on one end, and it fattened the middle. That's the way I see it. And I see this the same way: it's fattening the middle. It's making people more efficient at doing the thing they're already doing. Does that mean it's going to eliminate people doing specific tasks? Maybe, if those tasks are repetitive or whatever it may be. But because you can automate some of these tasks so quickly, what are the opportunities that may arise from it? That's the other interesting aspect. What are some jobs that can be created from it, versus, oh, it's going to take my job? I look at the optimist side: what can you do now because of AI? And I think it has helped a lot of people start businesses, like, well, now I can help people create more emails and sequences and all this stuff. And they're starting their own businesses around that, because they have something to help guide them. So that's also where my head's at on that.


Steve Krenzel:
Yeah. Even if it just helps your employees focus more on what's core to your business. The first big thing that we shipped at Brex was memo generation. So you swipe your card, we pull in the details from that transaction, and if you've synced your Google Calendar with us, we'll look at your calendar events and automatically generate a memo that says, oh, team dinner with Steve and Andy and three other people at such-and-such for the San Francisco offsite. It's a very lightweight, passive thing; it's just a convenience. It saves you the effort of having to write a memo, because at many companies you need to include a memo on your transactions for compliance. So let us ease that burden: why are you thinking about writing memos when you could be thinking about whatever is core to your business? And I think there are infinite opportunities to remove all those paper cuts from your life.
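As a rough illustration of the pattern Steve describes, here's a hypothetical sketch of transaction-memo generation. This is not Brex's implementation; the fields, prompt, and model choice are all assumptions, and it uses the OpenAI Python SDK simply as a familiar interface.

```python
# Hypothetical sketch of memo generation from a card swipe plus calendar
# context. Not Brex's implementation; fields and prompt are made up.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_memo(transaction: dict, calendar_events: list[str]) -> str:
    """Draft a one-line compliance memo from transaction details plus
    any calendar events overlapping the transaction's time window."""
    prompt = (
        "Write a one-sentence expense memo for this card transaction.\n"
        f"Merchant: {transaction['merchant']}\n"
        f"Amount: {transaction['amount']}\n"
        f"Time: {transaction['timestamp']}\n"
        f"Overlapping calendar events: {'; '.join(calendar_events) or 'none'}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",  # model choice is an assumption
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(draft_memo(
    {"merchant": "Such & Such SF", "amount": "$412.80",
     "timestamp": "2024-03-21 19:30"},
    ["Team dinner: Steve, Andy +3 (SF offsite)"],
))
```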


Andy Mewborn:
Yeah. And, you know, who's dropping the ball on that? You guys jumped on it quick at Brex, right? You saw this, and I think you were part of the catalyst for that, and wanted to see how you could integrate it into the product. But I'm like, what's Apple doing, man? I used Siri yesterday and I was like, dude, compared to GPT, Siri, you are dumb. And again, that's because the goalposts have moved, right? Siri five years ago, I was like, whoa, this is cool. But honestly, the only thing Siri does right for me these days is turn on my alarm. So I'm guessing Apple, they're just a big corporation, so they're a little slower to the game, but once they have something come out, it's going to be amazing. But I feel like that integration, and this could be a whole other podcast, that integration between these LLMs and hardware could be crazy, right? I think Sam Altman might be working on something like this. What do you see that being? What are devices going to be able to do when you take an LLM and integrate it with a piece of hardware like a phone? What's going to happen?


Steve Krenzel:
So that's an area that's very top of mind, and there are a couple of things that build on what you just mentioned. As far as I know, the Sam Altman hardware connection is actually a startup he was funding, or going to fund, or somehow getting involved with, to build GPUs that would compete with NVIDIA: basically hardware specifically tailored for large language models. I mean, I guess that could evolve into embedded hardware on your phone, but what I've read about was more of a direct competitor to NVIDIA, mostly from a business-risk perspective: you don't want NVIDIA to have such a stronghold over your business. OpenAI, like many, many AI-forward companies, is wholly dependent on NVIDIA right now. But on the hardware side, LLMs at the edge is a fascinating topic for me. On any given day, I'm running the Llama 2 models on my local hardware. I also use ChatGPT, but I do use the local model too. And there are so many nice little things about it. There's no network latency, so you get immediate responses, and computation is surprisingly quick if you have a decent GPU. You also saw that when Google announced Gemini, they actually announced three sizes of Gemini, and one of them, if I remember correctly, they called Gemini Nano. I believe the intent there is to have that run on Android devices. Very similarly, I think within the same week, Apple announced a really cool framework for working with LLMs. And if you look at who Apple's hiring, and at some open source projects they've been dabbling with, I suspect 2024 is going to be the year of LLMs moving to the edge, where you have an LLM on device. It might be too late in the cycle for the next generation of iPhones to be really LLM-forward, but certainly two cycles from now. I could imagine a future where Apple just throws an extra 40 gigs of RAM on the device, dedicated to a large language model. I have no doubt that both Google and Apple are full steam ahead on getting LLMs onto the device. And what does that mean for the world? That's a very exciting future.
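Running Llama 2 locally, the way Steve describes, is only a few lines with the llama-cpp-python bindings. A minimal sketch, assuming you've separately downloaded a quantized GGUF copy of the weights (the file name below is a placeholder):

```python
# A minimal sketch of running Llama 2 on local hardware via
# llama-cpp-python (pip install llama-cpp-python). The model path is a
# placeholder; download a quantized GGUF of the weights separately.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm(
    "Q: Why would anyone run a language model on-device? A:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```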


Andy Mewborn:
Yeah, that's crazy, because I'm trying to think of the possibilities there. If I had this on my phone, and it wasn't just one app like ChatGPT where I can ask it a question, what are some other use cases for the LLM? What if it was tied into every app, right?


Steve Krenzel:
In the sense that, like, Apple is so privacy-minded: Apple can't read your messages, Apple can't read a lot of the data stored on the device. So if they want to do interesting things with large language models, they kind of have to push the model to your phone, because they're not going to be reading your data on their servers.


Andy Mewborn:
Yeah, that's interesting. So it's almost like, what the heck can all these apps do once they have this LLM? I'm thinking Uber: maybe it'll sync with your calendar and then automatically order you an Uber. You can probably already do that with an LLM, but it's almost like this knowledge base, you know?


Steve Krenzel:
You can also imagine an inversion, where instead of every app having their own LLM, you have some kind of global LLM. So imagine you say, hey Siri, I want to go to a concert. Siri actually just heard me; I've got to unplug my devices. You can say, hey S, in two weeks I want to go to this concert: buy me tickets and hail an Uber for me. You give it some instructions, and then that LLM figures out how to open up Ticketmaster to order your tickets and then hail the car for you. So you can imagine, rather than every app having their own language model experience, you have one language model that knows how to combine the tools that different apps provide.
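That "one model combining app tools" pattern looks a lot like today's tool calling. Here's a hypothetical sketch using the OpenAI SDK's tool-calling interface; the ticket and ride functions are stand-ins for whatever apps would actually expose, not real APIs.

```python
# Hypothetical sketch of a "global LLM" dispatching to app tools. The
# functions below are stand-ins, not real Ticketmaster/Uber APIs.
import json
from openai import OpenAI

client = OpenAI()

tools = [
    {"type": "function", "function": {
        "name": "buy_concert_tickets",
        "description": "Buy tickets for a named concert.",
        "parameters": {"type": "object", "properties": {
            "artist": {"type": "string"},
            "date": {"type": "string"},
        }, "required": ["artist"]},
    }},
    {"type": "function", "function": {
        "name": "hail_ride",
        "description": "Hail a car to a destination at a given time.",
        "parameters": {"type": "object", "properties": {
            "destination": {"type": "string"},
            "pickup_time": {"type": "string"},
        }, "required": ["destination"]},
    }},
]

resp = client.chat.completions.create(
    model="gpt-4",  # model choice is an assumption
    messages=[{"role": "user", "content":
               "In two weeks I want to go to the Big Concert. "
               "Buy me tickets and hail a ride there."}],
    tools=tools,
)

# The model replies with structured calls an OS could route to each app.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```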


Andy Mewborn:
Yeah, that's crazy. And hopefully that's what they make Siri do, right? That's what I would imagine Siri doing, being able to work across all of that, which would be freaking amazing, man. Well, dude, Steve, it's been awesome. This has been great. It was a pleasure, man. It's been freaking awesome.