AI Leaps Forward: Robot Butlers & Nvidia's Super Brain Chip


Dalton (00:01)
Welcome to the VentureStep podcast, where we discuss entrepreneurship, industry trends, and the occasional book review. If you thought AI was moving fast before, buckle up: from robots doing household chores, to a chip so powerful it's scary, to Google teaching AI to navigate the virtual world like us, this episode covers the latest breakthroughs that could change everything. Of course, right as we dive in: "Sorry, I don't understand." Talk about AI and my Google Home just

blurted out, and it knew we were talking about him, or her, or whoever. But before we dive in, my name is Dalton. I have a mix of programming experience and I also work in insurance. Offline you can find me exercising (I like to go on runs, work out, do calisthenics), building a side business, or in a good book.

If you prefer this podcast in a video format, you can watch on YouTube or Spotify. If audio is more your jam, you can listen wherever you get your podcasts: Apple Podcasts, Spotify, et cetera, and YouTube. Yes, YouTube does podcasts now. Today's agenda: we'll be talking about the announcements Nvidia made during

their big AI event, GTC. They announced the Blackwell chip, a new chip that promises quite a bit, and it's a bit crazy, so I want to talk about that. There was also a demo of a robot called Figure 01. Figure 01 is a product designed by Figure AI in partnership with

OpenAI, and the demo was quite interesting because the robot was able to communicate information to a human on the fly and carry out tasks instructed by said human, which is not something we've seen before. And Google DeepMind's SIMA agent, their generalist

agent model, made some breakthroughs. They've moved forward with their generalist AI by taking a different approach: training their AI in the virtual world. They're using games like Minecraft or No Man's Sky, training the AI on real data, maybe something like YouTube data, and then taking that information

to train the AI. We'll go into more detail in a moment.

So, the Blackwell B200 GPU. Why is it such a big deal? Well, to put it in perspective, it's really more of an architecture than a chip. People are calling it a chip on the internet, but it's an architecture built from multiple chips. Regardless, it's a big deal because Nvidia is saying it will be able to train

one-trillion-parameter AI models. For reference, GPT-3 was 175 billion parameters. Not confirmed, but one theory is that GPT-4 is eight models of roughly 250 billion parameters each running simultaneously. So that being said, one trillion is quite a bit

bigger than 250 billion.
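
To put that scale in perspective, here's a quick back-of-the-envelope sketch (my own arithmetic for illustration, not Nvidia's published numbers) of the memory needed just to hold the weights of models that size:

```python
# Back-of-the-envelope: memory to hold model weights at a given precision.
# My own arithmetic for illustration, not Nvidia's published numbers.

def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Gigabytes needed to store the raw weights."""
    return num_params * bytes_per_param / 1e9

for name, params in [("GPT-3 (175B)", 175e9), ("1T-parameter model", 1e12)]:
    fp16 = weight_memory_gb(params, 2)  # 16-bit floats, 2 bytes each
    print(f"{name}: ~{fp16:,.0f} GB at FP16")

# Prints roughly 350 GB for GPT-3 and 2,000 GB for a 1T model, which is
# part of why training at this scale is a multi-chip architecture problem.
```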

But, you know, additional parameters don't mean the model is better per se. A lot of the open-source models typically run around 60 to 70 billion parameters. Still, this Blackwell super architecture, chip, whatever you want to call it, is promised to handle

one trillion parameters. And not only is it able to accomplish this one-trillion-parameter scale, it reportedly does so with 25 times lower energy consumption while running 30 times faster. So, 30 times faster and 25 times

more efficient. And I think that's important because these chips consume a lot of energy. Let me read up on what these chips draw, because I don't have it in front of me.

Something like eight gigawatts of power for the previous one, the H100, at full training-cluster scale. And the

new chip obviously takes less. But we're not talking about household amounts

of power here; it's not like leaving a light or a fan on. A gigawatt is quite a bit of power.

Man, let me just look it up. How much is a gigawatt?

A gigawatt is one billion watts. So 1.21 gigawatts could power more than 10 million light bulbs, or...

What? Or one fictional flux capacitor in a time-traveling DeLorean. What? Anyways, basically it saves a lot of power, which saves a lot of money, because one of the key expenses is not only building the capacity to train on the data, but also paying for the electricity that training consumes.
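
Quick sanity check on that light bulb figure; this is my own arithmetic, and the old-school 100-watt incandescent bulb is an assumption:

```python
# Sanity check on the light bulb claim. My own arithmetic; the 100 W
# incandescent bulb is an assumption.
watts = 1.21e9        # 1.21 gigawatts expressed in watts
bulb_watts = 100      # classic incandescent bulb
print(f"{watts / bulb_watts:,.0f} bulbs")  # -> 12,100,000 bulbs
```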

And that's one of the key complaints with cryptocurrencies: okay, cryptocurrency is good, it's decentralized, blah, blah, blah, but mining crypto consumes power on the scale of industrialized countries. I think one of the claims was that Bitcoin and Ethereum combined were using more power than Japan, or something crazy like that.

So if you can use less power, that makes a lot of sense.

I think the new chip is obviously going to power

the next AGI, or at least it has the potential to enable things we have never seen, because of the improvement in speed, the improvement in the number of parameters it can handle, and the reduced energy consumption. All those things combined can really affect

the intensity of scientific simulations, these power-hungry AI models we talk about (the LLMs, or large language models), self-driving cars. I mean, it's endless. I talked about Nvidia's supply chain issues in the last episode. What was the issue with their supply chain? Tesla had to start building their own chips because

they couldn't get enough Nvidia chips to keep producing their cars; Tesla uses, or used to use, Nvidia GPUs for the autonomous driving in their vehicles.

Okay, so these chips could potentially be used to train this Figma, no, not Figma, Figma is a design tool, this Figure 01 robot. Figure 01 is a robot that was demoed recently. Let me see if I can share my screen here.

If you're watching the video, I'm gonna share my screen, but I'm not gonna talk over it as much as I did with the OpenAI Sora video-generation demo we covered. So, let me share. I will narrate for maybe 20 seconds about what's going on. Let me mute this. Okay, so the robot is standing at a table in front of dishware, an apple, and

some cups, some of which are put away, some out on the table. The guy asks the robot, hey, can I have something to eat? The robot hands the human the apple. Then the human throws some trash on the table and asks, can you put this away, and while you're doing that, can you explain why you gave me the apple?

So while it's putting away the trash, and I can't say "he," it's a robot, so while the robot is putting away the trash,

it says, I gave you the apple because that is the only edible thing in front of me. And the guy is like, okay, cool. Then he asks, can you put away the dishes? And the robot puts away the dishes

in their proper places. There's a slot for the cups, there's a slot for the plates, and the robot delicately picks up the plastic cups and plastic plates and slots them into the drying area. Then the human asks the robot, how do you think you did? And the robot says, I think I did pretty well:

the apple found a new home with someone who was hungry, I put away the trash, cleaned up the area, and put away the dishes. (I keep saying "he" because the person in the video talking to the robot is a he.) The robot did everything pretty much on the fly, they stated. I mean, you can't really take demos

at face value, but being able to do those things, pick up an apple, pick up the cups, know where to put them, hand the apple delicately to the human, is all pretty impressive and not something we've seen before. And this Blackwell AI chip is something that could be used to train a robot like Figure 01.
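
Based on what Figure and OpenAI have described publicly, the demo's setup is roughly a slow vision-language model that picks a spoken reply plus a high-level behavior, feeding a fast learned policy that produces motor commands. Here's a minimal runnable sketch of that shape; every class, name, and number below is a stub I invented for illustration, not Figure's or OpenAI's actual API:

```python
# Sketch of the two-level loop described publicly for Figure 01: a slow
# vision-language model chooses a reply and a high-level behavior; a fast
# learned policy turns that behavior into motor commands. All classes and
# names here are stubs invented for illustration, not Figure's real API.

class StubVLM:
    """Stand-in for the vision-language model doing the reasoning."""
    def plan(self, image, speech, history):
        if "eat" in speech:
            return "Here you go.", "hand_over_apple"
        return "On it.", "place_dishes_in_rack"

class StubPolicy:
    """Stand-in for the fast low-level manipulation policy."""
    def act(self, image, behavior):
        return [0.0] * 24  # pretend joint commands for 24 actuators

def robot_step(vlm, policy, image, speech, history):
    reply, behavior = vlm.plan(image, speech, history)  # slow reasoning
    print(f"Robot: {reply}")
    for _ in range(100):  # fast control loop executing the behavior
        joint_commands = policy.act(image, behavior)
    history.append((speech, reply, behavior))

robot_step(StubVLM(), StubPolicy(), image=None,
           speech="can I have something to eat?", history=[])
```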

Overall, very crazy. And the next crazy thing was DeepMind's reveal of their generalist agent, SIMA. A generalist agent basically doesn't have one fixed purpose; it learns as it goes. And they're doing it in virtual environments because it's less dangerous.

So they have

humans play games. They take the data from humans playing games, feed it to the AI, and from there they train it and have it learn. Then, at a certain point,

they

basically compare it against normal humans: how would a normal human execute these tasks? And the way they direct it is with text, natural language. They send text to the AI, like drive this car, or chop down this wood, or craft a pickaxe or something, and the AI does those things

by learning all the inputs in the game, knowing where stuff is, and figuring the rest out itself. They're training it in these virtual environments to learn how to live in these games and progress, basically,

in a way that isn't tied to a purpose. There is no reward for getting the right answer; there is no rhyme or reason for what it's doing besides just playing the game. They give it instructions with natural language, but at the same time, the AI agent just plays

by itself, without an exact purpose for what it's doing. So it just does whatever, except when it's given instructions. And they have a GIF of, okay, drive a car, this is a successful simulation, or

satisfactory results: okay, pick up iron ore, chop down a tree, the things I said earlier. But then they want to generalize it across more games and more tasks, and the idea is eventually to take it out of the virtual world.
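
To make that concrete, here's a minimal runnable sketch of an instruction-conditioned game agent loop in the spirit of what DeepMind describes: screen pixels plus a natural-language instruction in, keyboard and mouse actions out. Everything here is a stub I made up to show the shape of the loop, not DeepMind's API:

```python
# Minimal sketch of an instruction-conditioned game agent, in the spirit of
# what DeepMind describes for SIMA: screen pixels + a text instruction in,
# keyboard/mouse actions out. All classes here are invented stubs.
import random

class StubGame:
    """Stand-in for a real game exposing only pixels, no internal API."""
    def reset(self):
        return [[0] * 64 for _ in range(64)]   # fake 64x64 screen
    def step(self, action):
        done = random.random() < 0.01          # pretend task completion
        return [[0] * 64 for _ in range(64)], done

class StubAgent:
    """Stand-in policy mapping (pixels, instruction) -> an input action."""
    ACTIONS = ["w", "a", "s", "d", "left_click", "right_click"]
    def act(self, observation, instruction):
        return random.choice(self.ACTIONS)     # a real agent is learned

def run_episode(env, agent, instruction, max_steps=1000):
    observation = env.reset()
    for step in range(max_steps):
        action = agent.act(observation, instruction)  # text-conditioned
        observation, done = env.step(action)
        if done:
            return step
    return max_steps

run_episode(StubGame(), StubAgent(), "chop down a tree and craft a pickaxe")
```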

The hope is that the agent, if it's trained on many games, can learn how to play a game it's never seen without any instructions. Basically, that's the goal. Right now they have

a pretty good percentage of, you know,

"relative performance," which is what they're calling it, for the AI agent playing a game it has never played before, being in an environment it has no information on, while following instructions. They have maybe a 25 percent relative performance for those cases, and something that looks more like 35 percent when the AI agent isn't given any instruction.
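
As a rough illustration of what a "relative performance" number like that could mean (this is my interpretation for illustration, not DeepMind's exact definition from the SIMA report):

```python
# Hedged illustration of a "relative performance" style metric: my own
# interpretation, not DeepMind's exact definition from the SIMA report.

def relative_performance(agent_successes: int, baseline_successes: int,
                         total_tasks: int) -> float:
    """Agent's task success rate as a fraction of a baseline's rate."""
    agent_rate = agent_successes / total_tasks
    baseline_rate = baseline_successes / total_tasks
    return agent_rate / baseline_rate

# e.g. agent completes 15 of 100 held-out tasks, baseline completes 60:
print(f"{relative_performance(15, 60, 100):.0%}")  # -> 25%
```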

So this is pretty cool, because you could use it for more flexible AI assistants, and you could use it for video game development. Imagine video games that have no ending, that constantly evolve, where the characters are always dynamic and have their own personalities and quirks the gamer can't predict, because it's all original every time. So you have this endless playability, which is also kind of dangerous, because what about

things like deepfakes or these other nefarious activities? I know there are a couple of these, not exactly well-known, but basically fake influencers online that are just AI-generated personalities;

they don't do videos, just photos, and they have lots of likes and influence and they get sponsorships, but they're not a real person. It's just AI-generated images of a persona, which is really sad. So, I don't know. I think that with great change comes great responsibility.

And so I hope we're going to move in the right direction with these new powers that be, because I have my hesitations about these potential issues we say we perceive, but at the same time, I'm not sure. It just becomes difficult

to manage some of these massive AI models with trillions of parameters. At what point does AI become a person? At what point does AI have rights, or something like that? I don't know. I don't think we're there yet, 100 percent; I'm not saying AI deserves rights now. But

20 years from now, if we keep going at the rate we're going, what determines the differentiation between a human and an AI if the AI is replacing humans in general-task jobs, like,

I don't know, a lawyer or housekeeping, jobs with ongoing responsibilities that change constantly, require cognitive thought, and are multifaceted? I think at a certain point that becomes a determination of consciousness, right? And if something is conscious, then I guess it would be determined to be human, or at least not human but to have some

form of intelligence and be protected. I don't know. We're going down a rabbit hole here, so I don't want to go too deep. But it is scary to think about where things are going. I showed my Nana that AI video with Figure 01, and she was pretty excited. She was like, oh, when I was younger, I wanted, I don't know the character's name, it was

the robot housekeeper from The Jetsons? I'm not sure. But it was a robot, and she's like, oh, I always wanted one of those when I was a kid, so I wish it would visit me. And so,

there we have it. I mean, Nana's excited. Nana's ready for change; she's been ready for change for a long time. But it is seemingly getting closer to a Black Mirror episode, potentially. These stories all tie back to

AI improving the well-being of humans. And I was just questioning: at what point are we stepping on the backs of AI, and what point is too far? I don't think we're there, or ready to have that conversation, yet. What AI news has blown your mind lately? Let's

discuss that; put it in the comments. I'm more active on the YouTube channel. The YouTube channel is Dalton B Anderson, and the podcast playlist is just VentureStep. Put a comment down and let me know your thoughts about the Nvidia chip, or if you have a deeper analysis of the Figma 01. I said Figma again; Figma is a design tool. The Figure 01

robot, or any other suggestions to refine the show; we're still pretty new here. One thing I would like to do before I go is go over last week's comments. I think that's good to do because it lets people be heard and encourages people to interact with

the show. So let me see if I can figure this out on the fly; hopefully I can, because I'm not an expert. See comments. Let's see. Oh, man. All right. So we have one by Marco Coconuts:

"Love the stash, keep up the growth friend, you earned my sub, tell Nana a fan says hi." Very nice. I said, thank you so much for the kind words, happy you're enjoying the content, and I'll pass your regards to Nana; that's very sweet of you. Then this guy named gg-mm9hf said, "who asked?"

Yeah, I'm not sure who asked. I'm glad that you're here, though, and that you're trying to learn in your free time; or at least I'm trying to learn, and maybe you're learning from me. Or maybe it speaks to curiosity. But you're welcome here, and maybe next week you can ask "who said" or "who asked" again. I don't know. It's a good question.

But that is the end of today's show. I'll talk to you next week; I'm trying to be consistent. Today was a little tough. I definitely struggled with the idea of doing a podcast episode today, but we've just got to stay disciplined: once a week. Next week I'll be discussing my book review of The 48 Laws of Power, and I will share my opinions on it and whether I think it's worth a read. Okay, well,

have a great day, night, morning, or evening. Talk to you next week, and I appreciate your time. See ya, bye.
