The Rise of DeepSeek: A New Contender
Dalton Anderson (00:01.71)
Welcome to the VentureStep podcast, where we discuss entrepreneurship, industry trends, and the occasional book review. What if I told you that you could get access to cutting-edge AI models for a fraction of the cost? Well, that was the story that broke a couple weeks ago with DeepSeek's R1 model. DeepSeek recently released a reasoning model, an LLM, that
was going toe to toe with the juggernauts of OpenAI, Google's Gemini, and Meta's open source model. And the weirdest thing was the claims they were making. They claimed that it was a side project, that it had limited GPU usage, and that they were using older Nvidia GPUs that aren't as good,
because of the restrictions related to Nvidia exporting certain types of GPUs used to train AI. And then on top of that, it was free and came out of nowhere. So there was just this whole wave of, my gosh, panic. The AI community was in panic. They didn't know what hit them. It was just
a complete disaster. Like, not only is this really good, but who are these people? And then they're like, well, you know, we only spent like $6 million on this. And then they're like, also, it was a side project that we worked on in our free time. And everyone else is like, wait, wait, wait, we're spending billions of dollars here. What's going on?
And so the whole reason why I wanted to wait a couple of weeks before I talked about it is because it seemed too good to be true. Someone coming out of the shadows has this amazing model, no one's heard of the shop before, and they're saying it's a side project and they used a limited number of GPUs.
Dalton Anderson (02:25.518)
So I wanted to wait before I talked about DeepSeek simply because I don't know if this stuff is true. Some of it may be true, some might not be, but what I do know is that they had backing from a quantitative trading firm that uses deep learning to trade the markets. And I was like, hmm.
It seems like a good opportunity to claim all these things. I mean, they're just claims, rumors, hearsay, and all of it might be true, but it freaked the market out enough that
NVIDIA dropped roughly 17% and other companies had similar reactions, or the market had similar reactions to other companies. And so it was a perfect opportunity to just get in if you were waiting to get in. But it was also a perfect opportunity to short those stocks, or those financial securities, and make an insane amount of money.
And you could do those things, no problem, if you're in China, because it's really hard to enforce those kinds of transactions or investigate those things when you're not on the soil where the market is. But that's just a theory. I don't want to go too far and put a tin foil hat on, but it's some food for thought. I'm not sure
how much of the stuff related to DeepSeek is true. But I will be talking about the stuff that is true, or said to be true. And I will be breaking down the difference between DeepSeek and Gemini's product that they rolled out recently, I think as of today or yesterday; I didn't see it yesterday. So Gemini has a new model that walks through its thought process. I'll be showing that. I'll be giving both the same prompt and we can see the difference.
Dalton Anderson (04:51.48)
The prompt's already ready, by the way, just in case DeepSeek gets overloaded. They've been having server issues where they're getting overloaded, so I wanted to pre-prompt, but it seems pretty good. And then we'll be talking about DeepSeek: who they are, who the founder is, and how he co-founded High-Flyer, the other firm that the founder of DeepSeek is associated with.
And then we'll be talking about NVIDIA. NVIDIA has some role in this, as people kind of freaked out saying, okay, companies are spending billions of dollars on these AI chips, using them to train their LLMs and these new AI models. What about DeepSeek? I mean, they only spent 6 million. What's going on here?
And so everyone is kind of questioning, okay, are we supposed to be spending this much money? And then further in the episode, I'll be discussing how there is speculation that they didn't use a couple GPUs; they used 50,000. So it's a bit different than a couple, and it would cost over a billion dollars. So a little different than what is being claimed. But hey, who knows? Who knows?
But that being said, let's dive in. Also, if you're watching on video, I have changed my camera angle. I had three monitors at one point and I was like, who needs three monitors? So I got rid of one of the monitors and repurposed that arm as my camera stand. So the camera angle is not
ridiculous anymore. If you haven't watched any of my videos, my camera was on top of my monitor, but the way that it was angled, it was maybe 75% forehead, and I hadn't gotten around to changing it. But I've since changed my monitor, and since then I don't have a screen to look at anymore. And it's not that I wasn't looking at the screen.
Dalton Anderson (07:07.378)
Or, sorry, it's not that I wasn't looking at the camera; it was just that occasionally I'd glance over and be like, okay, where am I at in my outline? I have since printed out my outline. So if I need to reference it and see what I want to talk about, or some bullet points that I had and things that I highlighted, then I will. And if you see me looking down or to the side, I'm looking at my outline. So that being said, let's talk about DeepSeek's origins. So DeepSeek was
created by, let me just make sure, Liang Wenfeng. And please let me know if I mispronounced that. Liang created High-Flyer, and it is said that Liang was someone who embraced AI and wants to create AGI. And the first step to that was using deep learning and machine learning applications within
algorithmic trading. And they did that starting in 2016. And then recently, in July 2023, which is very recent, he founded DeepSeek. And DeepSeek had a
pretty quick product-to-market path, from all the research that's required to make these models to getting a product to market. And I want to make sure that I emphasize the point of open source. Open source is the way to innovation. If you have an idea and you tell 10 different people about it, you write out your thesis, send it to them, ask for feedback, or
they can make their own ideas with your idea. It's like crowdsourcing innovation versus holding innovation within one entity. You're allowing innovation to spread between entities instead of staying inside a single entity. So your development is much faster than if you were holding everything closed source, and the claims that it's dangerous, or these other
Dalton Anderson (09:28.046)
nonsensical things that are brought up, they're not even related. So that being said, yes, DeepSeek was able to go from its creation in July 2023 to late January, early February of 2025 and release this model,
supposedly on a $6 million budget. I'm not sure that's true, but that's what they're saying. That wouldn't be possible without open source. So I want to make sure that's very clear in this episode: these open source models are critical to the development of new innovation and new applications of AI. Okay. So DeepSeek has a different approach to recruitment. When he was asked,
okay, are you going to recruit from OpenAI or Google's deep research teams? His reply was, well, if I have short-term goals, that's what I'll do, and I'll get people who have the right experience right away. But I'm looking long-term, and in the long-term approach, basic skills, passion, readiness to learn, and general interest
and attitude are more important than ready-made individuals. And he also emphasized creativity as a skill that he looks for. And he said those things will build
Dalton Anderson (11:02.766)
the vision that I have. And having somebody ready-made isn't my concern. It's the people that have those skills; those are the people I want. Let everyone else fight for the other individuals. And that might be a good approach. And it might be twofold. Like, hey, DeepSeek isn't a world-renowned AI company
like OpenAI. Everyone is looking to OpenAI or Gemini or Meta as the inspiration that got them into these things. What I'm saying is, five years from now, ten years from now, the people that are graduating college, the people that have been tinkering for seven years, what are they tinkering on? What's inspiring them? DeepSeek isn't in the picture for those folks, and
I'm not saying that DeepSeek isn't a good place for them to work. What I am saying is that in their minds, it's not the first choice, either from the prospective employee's perspective or the company's. So that being said, you can develop the talent that you want. And from there, you can create the things that you want, with the mission that you need
to foster, and it gives you a different opportunity. Whereas if somebody's coming to you with all the skills ready-made, they're not necessarily closed-minded, but they're not as open-minded as someone that is ready for the opportunity and just wants to make something that they're passionate about. And that's a different mindset, a mindset where you're like, I'll learn whatever I need to learn to get it done,
versus, I've already learned a lot, I don't think they could teach me too much, but I'm sure I can learn different approaches; versus somebody that wants to sponge up everything and build. So these are two different approaches. I think that it's two things. One, they're not necessarily the most reputable
Dalton Anderson (13:25.292)
compared to the other companies, so you won't get first shot at the best talent, and if you do, then you'll have to pay way more than the other folks. And then the second piece is
building long-term. That's what he said. I mean, that's his vision; it's his company. So I think those are interesting caveats, interesting because they're not necessarily looking for people that are in computer science and AI. They're hiring from different disciplines, people that are interested in building something they're passionate about versus someone who has the ready-made skills, which I think is great:
more opportunity for people to get involved and contribute and have ideas. And I would say the more diverse your viewpoints, the more robust your ideas become. So, computational resources and training time. This is something that is pretty important. So since they had not the
Dalton Anderson (14:38.542)
best, highest-tech GPUs on the market that Meta or Google or OpenAI are using, they had to take a different approach and be very careful about their usage of parameters. They had to be careful about how things are being executed on the GPUs. And one of the things that they did deploy was a mixture of models, or sorry, a mixture of experts.
My bad. Mixture of experts is an approach that I would explain in a way that works like parallelism on a complex problem, and we'll work into an analogy in a second; just let me provide the background here. So parallelism is something that was used in databases
and processing, and basically it takes a large, complex problem and breaks it up into different parts. And from there it can reduce the resource load and allocate resources. Instead of, right, I'm gonna build the whole house all at one time, first I'm gonna do the foundation, then I'm gonna work on the architectural plans for utilities, and then I'll have the utilities. So you're working on three things instead of
building the whole house all at once. If you try to build the whole house all at once, it's just way too many resources and people are bumping into each other. It's a mess. Things might make sense on paper, but when you're trying to do it all at the same time, you're waiting on other folks to finish. And so that's the issue when you're trying to work on these large problems: what you essentially get, if you don't use some kind of parallelism, is that the resources are bottlenecked because
people are waiting. I would explain it that way. Same thing with this MoE approach. Instead of, all right, now we're all building the house at one time, you've got one crew doing foundation stuff, you've got another crew doing the walls, you've got another crew working on the electrical once the walls are in place. That is
Dalton Anderson (17:05.074)
MoE. MoE is a mixture of experts, so each expert works on their subject matter or whatever they're good at. You might have a PDF expert, and the PDF expert reads all PDFs. And then you might have audio parsers: someone who's really good at listening to audio and then transferring that to a transcript.
And then you might have a summarizer, the best summarizer ever. Let me do it in order. Audio comes in and is transcribed. The transcription ends up in a PDF. Then the PDF expert
processes that PDF, and then it's pushed over to the summarizer. And the summarizer summarizes the transcript, and from there you have an output. Versus, if you didn't have that applied, you're running the whole model, which is billions of parameters, all at once to get the same thing; this way you're running certain things at certain times instead of running
everything all at once. So that saves you resources. And that's what they applied. In the paper, they're saying that their architecture only activated 37 billion parameters out of 671 billion parameters for each token. And this reduces the computational resources required without sacrificing performance, which is good. And MoEs aren't
a very unique application. I mean, they're applied in other machine learning projects, and there's reference to MoEs in Meta's paper and in Gemini's paper. So it's not something that is like, wow, this is out of the ordinary; the level of performance gain that they got from MoEs is the wow factor. They had to do something else too, which I'm not too familiar with:
Dalton Anderson (19:30.136)
They used PTX programming for better computational execution of when the GPUs are activated. I'm not necessarily too versed in that, so I am just pointing out that they also did something with the execution of how the GPUs are running. And once they are running, they used MoE to make sure that they're not running all the parameters at once. And so it becomes a very efficient
process, and they're able to train on fewer GPUs and make the GPUs more efficient. When Meta was training their models, like the 400-billion-plus parameter model, it was taking them something like six months to do a new training run.
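The routing idea described above can be sketched in a few lines of Python. This is purely illustrative, not DeepSeek's actual architecture: the two "experts" and the even/odd routing rule are invented for demonstration. The point is that a gate sends each input to one specialist instead of running every expert on everything, which is how a model like DeepSeek's can reportedly activate only 37 billion of its 671 billion parameters per token.

```python
# Toy mixture-of-experts sketch. The experts and the even/odd routing
# rule are made up for illustration; real MoE layers learn the router.

def expert_a(x):
    return x * 2          # pretend specialist for even inputs

def expert_b(x):
    return x + 100        # pretend specialist for odd inputs

def gate(x):
    """Toy router: pick ONE expert per input instead of running all of them."""
    return expert_a if x % 2 == 0 else expert_b

def moe_forward(tokens):
    # Each token only activates the expert the gate selects, so the compute
    # per token is a fraction of running every expert on every token.
    return [gate(t)(t) for t in tokens]

print(moe_forward([1, 2, 3, 4]))  # → [101, 4, 103, 8]
```

The saving comes from the gate: with many experts and only one or two active per token, most of the model's parameters sit idle on any given token, just like the house-building crews that aren't on shift.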
Dalton Anderson (20:18.642)
When they trained V3, the base model that R1 is built on, they're saying it took them two months. They say it took 2,788,000 GPU hours, and their dataset was 14.8 trillion tokens, which took them less than two months on their $5.6 million budget. I was saying 6 million, but
close enough, I mean, it's 400 grand off.
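As a sanity check, you can back out the GPU rental rate those two numbers imply. Both inputs are DeepSeek's own reported figures, not independently verified:

```python
# Back-of-envelope check on DeepSeek's reported training numbers
# (their claims, not independently verified figures).
gpu_hours = 2_788_000      # reported GPU hours for the training run
budget = 5_600_000         # reported training budget in USD

rate = budget / gpu_hours  # implied price per GPU-hour
print(round(rate, 2))      # → 2.01, about $2 per GPU-hour
```

At roughly $2 per GPU-hour, which is in the ballpark of cloud GPU rental pricing, the arithmetic is at least internally consistent; the dispute is over whether the hours and hardware spend themselves are understated.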
Dalton Anderson (20:52.204)
All of that combined, it's a compelling story.
But as far as the budget, I think the budget is inaccurate. I don't believe $6 million for that. Like, how are you getting that many GPUs? And there was a separate research firm, SemiAnalysis,
that put out a research report saying DeepSeek's total server expenditures could be significantly higher than what they're stating, potentially $1.3 billion or even exceeding $1.5 billion. So it wasn't that they had this $6 million and some person, after hours, running their spare
15 GPUs made DeepSeek; that's how it was explained and how it was marketed, but I think it's much more than that. But with that opportunity, you could reap financial benefit by flooding the market with all this news and having the market freak out.
This is a sizable opportunity for the people who knew that this stuff was going to be released. And if they had their ear to the market, they would have a wonderful opportunity to
Dalton Anderson (22:18.52)
do these things. And when I say wonderful opportunity, that's not something I would buy into. I mean, you would have an actual advantage over other people. And the whole essence of these markets is that everyone has the same information, and the delays in information and the distribution of it are priced into the stock price.
And if you know something that isn't reflected in the stock price, then you have an advantage, and that would be highly illegal. That would be insider trading. So don't do that if you're in America. I'm just stating the obvious, which is, okay,
it could be a good opportunity for other folks that are involved. But I'm not partaking in things like that. I just thought I'd point that out and clarify, if you're like, what is he talking about? He's talking about fraud. I understand, I'm talking about fraud. Okay, so that being said, people freaked out about what the future of Nvidia looks like, and what the future of
AI in general looks like. If you don't need that many GPUs, what happens?
Dalton Anderson (23:39.64)
I don't know, but I don't think that usage of GPUs decreases if things become more efficient, and things have been becoming more efficient. Like, the reduction in
resources needed to train AI has been something like 1,000x in 10 years.
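A quick way to see what that rough 1,000x-over-10-years claim implies: the compound annual rate is the tenth root of 1,000, which works out to roughly a doubling of efficiency every year, since 2 to the 10th power is 1,024.

```python
# Implied compound annual improvement from "1,000x in 10 years".
annual = 1000 ** (1 / 10)
print(round(annual, 2))  # → 2.0, roughly doubling each year
```

So the headline number, taken at face value, is another way of saying training efficiency has been doubling annually.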
And that kind of compression doesn't necessarily mean less demand. It just means that you can do more stuff with the resources you have, which increases the utility of those resources, which increases consumption, and increased consumption increases the usage of and demand for such items. So it's not all bad for Nvidia. And, disclosure, I am an investor in Nvidia; I have Nvidia stock.
One thing that I do disagree with is how Nvidia is sold to the market, and this is not financial advice, by the way, don't you dare think this is financial advice. Nvidia is sold to the market as a shovel seller: this is the gold rush, we're selling shovels, buy some shovels, get your gold.
Yes and no. I don't think the level of expenditure required to train AI models is equivalent to a shovel. Back in the day, in this gold rush analogy, a shovel might be, I don't know, a couple bucks. They weren't spending like 20 grand or 50 grand or 80 grand on a shovel,
Dalton Anderson (25:29.932)
which would have been a crazy amount of money back then, like millions of dollars. They weren't spending that kind of money on their shovels.
But that's what companies today are doing. So it's not an equivalent analogy, because it's such a large expenditure for these companies that they need to figure something out, because they see AI as a long-term
value proposition, a requirement to stay a market leader in their space. There is no other option. It's either AI or die. And so these companies are spending billions of dollars on AI, as you all know. I mean, it's not like, my gosh, wow, thanks for telling me that, Dalton, your insights are incredible. Thank you so much. I appreciate it.
No, but they're spending a crazy amount of money, billions of dollars each quarter, whatever.
But that's so much money. You've got to bring something like that in-house. This isn't some lightweight offshore thing where you're hiring a couple extra employees to do some processing or run a call center.
Dalton Anderson (26:51.158)
It's billions of dollars a quarter. If you're spending like eight billion a year on something, then you've got to bring that in-house. So that's what companies are doing. So Google has their own AI chip that they've had for a while, and it's utilized by Claude. Claude uses Nvidia in some areas, but their main chip provider is Google. And the reason is
Google invested in Claude's maker, and Claude gets the TPUs at, I don't know if it's free, I don't think it's free, but maybe a reduced rate, or maybe some of them are offered.
But yes, Claude is mainly using the TPUs from Google. And for Google to manufacture these TPUs, what I was able to find, from an
article in the New York Times, which I'll link in my show notes, is that for one chip, NVIDIA was
charging around 15 grand, and for Google, it was costing them roughly $2,500 to $3,000.
Dalton Anderson (28:23.086)
So at not quite 10%, more like 20%, of the cost, you could make a chip of your own. You're saving 80%. If I went to my manager and said, hold on, I've got an idea here, I can save us 80%, what do you think? Sure, why not? So that's kind of the gist of it. So Meta's working on their own chip. I think they're on their second generation.
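The chip-cost comparison works out like this, using the New York Times figures quoted in the episode; the exact prices are estimates, not official list prices:

```python
# Rough saving from building your own chip, per the quoted NYT estimates.
nvidia_price = 15_000   # approximate price of one NVIDIA chip
tpu_cost = 3_000        # upper end of the quoted $2,500-$3,000 TPU cost

saving = 1 - tpu_cost / nvidia_price
print(f"{saving:.0%}")  # → 80%
```

Even at the low end of the quoted range, an in-house chip comes in at a fifth of the vendor price, which is the whole pitch for Google, Meta, Amazon, and Microsoft designing their own silicon.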
Their chips aren't necessarily ready for the kind of use they're aiming for. I think they're still going to partner with Nvidia for a while, and they were, as of recently, Nvidia's largest customer.
Dalton Anderson (29:10.306)
Well, maybe they're not the largest customer anymore after the purchase Musk made for xAI, but they're a large customer; that's what I'll say. And then Amazon's working on their own chip. So Meta, Google, Amazon, they're all working on their own chips, and Microsoft too. So where does that leave NVIDIA? Well, this is
what I think NVIDIA's value proposition is, and that is being vertically integrated. They have this whole robotics ecosystem available to them. They have NVIDIA Omniverse, which is like the metaverse but for robotics, and they have a physics model built on top of Omniverse that is able to simulate, in a virtual environment, an
interaction or workspace or whatever you want with the robot and its environment. And then you can run simulations within Omniverse and these other programs and train your robots virtually. So when you take them out of the virtual world and bring them into your reality, your local reality, they are informed and they know how to deal with different scenarios.
And if they don't, then you can just hook up your VR headset, connect to the virtual optics of that robot, and train right then and there. And then they have their robot platform ecosystem on top of that. So they have all these things, from the chip,
to integrating the chip with this robotics ecosystem, to virtually training and virtually running their Nvidia chips, to also having these virtual environments where you can simulate all these different things and collaborate with engineers, designers, product, all in these virtual spaces while virtually training your robot.
Dalton Anderson (31:31.884)
That is Nvidia's value proposition, versus "they sell chips." It's much more than that. And the chips business is defensible for some time, but eventually people aren't going to pay billions of dollars for their shovels; they're going to build their own. And that's what you're seeing at the moment. And I had a piece on how much they were spending.
So Google spent two to three billion dollars on its own chips, and Microsoft and Meta made up a quarter of Nvidia's sales in the past two quarters. So 25% of their revenue in the last two quarters went to
two companies, and they're both building their own chips. And it's a very tense relationship, where Microsoft is a cloud provider, Google is a cloud provider, Amazon is a cloud provider. And now that they're building their own chips, Nvidia has invested in a separate, independent cloud provider and provided chips for them for their cloud. And so there's this jockeying of the chips and the cloud and
all these other things, but the cloud is a very profitable business. And now Nvidia is also getting in there with their own proprietary offering, with their robotics infrastructure, or ecosystem, from the chip all the way up to the virtual world, from training to production. And then from there, they are also investing in providing chips to a third party; I don't know the name off the top of my head, but
I did write down in my notes here that they have Nvidia Omniverse, which we talked about. We have the hardware, the chips. We've got the development kits that are available for developers to get started, which I like:
Dalton Anderson (33:36.232)
an easy way for developers to start building on NVIDIA. They've got the Jetson modules, a family of hardware that helps build and train robots. They've also got something related to agents, where you can not only train your robots in these virtual environments, but the robots can also
interact with other robots in virtual environments, and then you can simulate these interactions over and over again to make the robots better. So there's a lot of impact there. Long story short, NVIDIA's value proposition is that they're vertically integrated, not that they sell shovels.
And that is DeepSeek. The last thing that I wanted to show you, and we're transitioning my view over to the screen, I'll be sharing my screen. And if you are not watching on video, if you're on Spotify or Apple Podcasts, then you won't see what I'm talking about. But if you can see me, I'm going to share my screen in a second. I just want to talk about the output of DeepSeek, what I like about it, what I don't like.
There's not many things I don't like, but let me see if I can share my screen and not be a boomer. Sorry if you are a boomer, that just came out. All right, so Windows, DeepSeek, here we go. Okay, so you should see my screen, and I'm gonna be zooming in so it's pretty easy to see.
I asked it a prompt: how do you use and deploy crypto and a stablecoin mechanism for transactions to reduce your foreign currency exposure and risk? Can you please walk me through it? How do I go about doing that? What are some great platforms to use, something like Avalanche, right? And is it easier to use someone else's stablecoin, or should I make my own?
Dalton Anderson (35:42.978)
And could you do that, and is that easy? If it isn't easy, can you make it easy? So that was my ramble to the AI. I tried to make it somewhat structured but also confusing at the same time, where I'm asking many questions within a question and not necessarily providing that much background. So DeepSeek's process: it goes a little bit over how it's gonna think about the problem. It just
has a small snippet, a couple sentences, and then it maps out the problem. And it looks like it uses Markdown as its preferred output, which I think is very nice. The output of DeepSeek, I think, is one of the better outputs among all these models. The UI I would still give to Claude, simply because Claude's UI is, hmm, it's sublime. But
the output of DeepSeek is very good. It breaks out these thought processes, the steps it's explaining, and the answer, with lines and with headers. So it has the thought process and what it's trying to do, a line that goes across the whole screen to separate it like a border, and then it has a header. And from there,
it starts breaking down, like, understanding the problem of foreign exchange exposure. And then it goes through that, and then it chooses different blockchain platforms and breaks that down. So it has a very nice, sorry, I said UI, I wanted to say format. It has a very nice format that I appreciate, because it's easy to read, and you'll see when I share my screen on Gemini's
output. Gemini's output provides more information, right? But the output, I keep saying UI for some reason, I guess the UI is making me think UI. The output is not as clean and feels cluttered when you compare it to DeepSeek's. Okay, so that was the output. Let's go over to Gemini, let me zoom in. Let's see how that looks.
Dalton Anderson (38:06.126)
So I asked the same exact prompt. With Gemini's output, you have to collapse or expand: when you have an output, you have to click expand text or it'll stay collapsed. So it shows its thinking. It does a really good breakdown of the thinking, but I don't necessarily know if the thinking is really thinking, or is it just prompt engineering? Think about it: if I gave you a
question or something to solve, and you map out your thought process, then confirm it to yourself, and then solve the problem, is that thinking about the problem, or is that better prompt engineering, if you're an AI? So I think that this thinking feature kind of evolves into
Dalton Anderson (38:59.182)
prompt engineering. So if you write something to the AI, it will break it down, try to understand what you're saying, and then go solve it. And it will explain why it got to where it is,
but it also writes its own little prompt. So in this scenario, it breaks down the user's question, the goals, the mechanism, the how-to, stablecoin options, the difficulty, and then identifies the underlying need and explains it to itself, I don't know. It structures the answer, how it wants to structure the answer, breaks down all the things it needs to answer, the platforms, like the one I mentioned.
And then it talks about existing versus your own stablecoins. And then it goes through everything step by step. I mean, the actual thought process from Gemini is very detailed. It might be more information than what people want, but I appreciate it. The output by itself, before it even starts solving the problem, is like a page and a half, two pages, and then it gets to the problem.
The only thing I have an issue with, the problem, is that
it doesn't...
Dalton Anderson (40:19.726)
It doesn't feel as clean as DeepSeek's. That's the issue. If we scroll back over to DeepSeek's and I zoom out a little bit, you can see at a high level it looks very nice. There's a lot of information, but it doesn't feel cluttered at any moment. It just flows. It's like water, just going down,
moving in a steady direction. Whereas Gemini's feels like it's everywhere. I don't know where to look. There are just so many places to look, and there's not much space between the next header and the previous header. And yeah, it just feels cluttered compared to DeepSeek. So I think DeepSeek's
output is very nice compared to the others, but Gemini's is a better explanation of the thought process. And these are quite long, if you can see what I'm sharing on my screen. But that's what I wanted to share with everyone today. Let me see here, stop sharing. So that's what I wanted to share with everyone today. I wanted to break down DeepSeek, the origin story, how it affects Nvidia, and my thought process on Nvidia's value proposition and how that
is going to have to alter, or has already altered. My stance is that they never were just selling shovels; they were vertically integrated. Then I went over the differences between Gemini's thinking feature and DeepSeek's R1 thinking feature. Then I went over some of
DeepSeek's
Dalton Anderson (42:17.198)
innovations and how they went about the problem, and how they were running their model more efficiently than other models have ever run before, using the same things that other companies have; DeepSeek just seems to do it a little better. That was it. That was the episode. I hope that you really appreciated it, and if not, let me know. And let me know what you think about this new camera angle. I'm getting used to it.
It's not the most ideal thing, but I'm getting there. I did have a podcast scheduled with an AI wildfire company that fell through. I had a scheduled podcast recording Friday last week, and unfortunately we were having audio issues, so we had to reschedule. I would love to get her on the show.
She's an amazing, talented person, and I know that you'll find that episode exceptional. I have prepared so much information for that episode, so I'm excited to speak with her as well. I want to have those kinds of people on the show consistently. But that being said, we'll keep posting episodes every week and keep working on ourselves and building up,
one step at a time. Once again, wherever you are in this world, good morning, good afternoon, good evening. I hope you have a great day. Thank you for tuning in, and I hope you tune in next week. Goodbye.