From VR to AI: Meta's Ambitious Vision for the Future


Dalton Anderson (00:02.13)
Welcome to the VentureStep Podcast, where we discuss entrepreneurship, industry trends, and the occasional book review.

Meta recently had their 2024 Meta Connect, and they announced quite a few changes to their Quest line and their Ray-Ban Meta glasses, and added some AI features to their social media products. I find that the biggest standout was something that I talked about a couple episodes ago when I reviewed Meta's Llama herd paper (The Llama 3 Herd of Models).

I found it interesting that the majority of the audio and video content was in a one minute to a minute-and-a-half time frame. There was a subset that was longer than that, but they said that the majority of the data was one minute to a minute and a half. And I was like, they're training on Reels, or they're trying to do something with Reels, and they'll hopefully be releasing something like that soon. And they did release something quite interesting that we'll get into later in the episode.

But overall, very exciting, very exciting progress from Meta in integrating AI into their products. I think how quickly they're doing it, and the scale they're at, is incredible. They are apparently at 500 million active users for their AI platforms. And that includes the aggregation of users from,

Dalton Anderson (01:41.896)
I guess, WhatsApp, Messenger, Instagram, and Facebook.

And to go from, I would say, embarrassment, all the way to leading-class models, to now being the most popular AI offering. And they're doing it at scale, for free, integrated into all these platforms that have billions of active users. I think that's really cool, really cool.

Okay, so with that being said, Meta announced many things that we'll get into later in the episode. We'll get into all the juicy details, but I just want to give you a quick

show itinerary and make sure that if you want to listen all the way through, you can, and if not, you can drop off. So Meta made some changes to their Meta Quest lineup. Also, Meta is having a big push for AI dominance, and these latest tools and partnerships show the direction they're trying to go in. They are having some,

I guess, increased demand for their Meta Ray-Ban glasses, more than was anticipated. And so the Meta Ray-Ban glasses have been sold out for a while, or have had limited availability. They have expanded their manufacturing capacity, and they have also added new styles, which is pretty cool. And they've provided many updates to that product as well.

Dalton Anderson (03:32.38)
And then we'll be talking about Meta's long-term vision for their technology and their products. But before we dive in, I'm your host Dalton Anderson. My background is a bit of a mix between programming, data science and insurance. Offline, you can find me running, building my side business or lost in a good book. You can listen to the podcast in video or audio format on YouTube.

And if audio is more your thing, you can find the podcast on Apple Podcasts, Spotify, YouTube, or wherever else you get your podcasts.

Okay, so before we dive into the episode, just giving you a little life update. I am currently in an apartment in Seoul, South Korea. I am 15 stories up in Gangnam,

Dalton Anderson (04:33.354)
and they're working on a skyscraper right next to me. So you might hear some noise feedback occasionally. They've got two cranes going and about 20 workers. It's pouring rain. It's windy.

But overall, I have loved Seoul. Seoul is so nice. It's beautiful. Lovely, lovely atmosphere, lovely city, so clean. The food is amazing. The only time I had bad food in Asia was, I think, this morning, which is, I guess, nighttime for everyone else. I had dinner for breakfast, and my dinner was this stew-like thing that had potatoes, cow stomach, chicken, mushrooms, these white squishy things, I don't know what they were, and some other stuff. Yeah, that was the first thing I had that wasn't really my vibe, I would say. I'm sure it's pretty good to somebody. One, I didn't really know what I was ordering. Two,

I like to send it, I would say, and I like to order things that I potentially might not like. And if I don't like it, I still eat it. And this one was particularly tough. I think the one I'd put in second place would be when I had a cow stomach soup in Mexico City and I accidentally ordered a large, and it was large. It was a large bowl. I mean, it could have been for three people.

I devoured it, but it took me like an hour and a half, because it was just so hard to get through. This was a similar experience today. This morning I was really trying to get a quick bite to eat before I recorded this episode, because I have fallen behind and now I'm recording the episode the same day it's coming out. So anyways, I had a hard time working through this one. It took me like 45 minutes to get through it.

Dalton Anderson (06:48.306)
It was honestly supposed to be small, right? It said dish for one person, and the dish comes out and it's literally the size of a four-person table, just this massive bowl that you heat up on the little stove that you have, and you cook it down, and it's still so much food. And then I ordered this other thing, which I think in English is an octopus pancake,

and it's greens and octopus, and it was pretty good. But that and the curry was just too much food. And maybe it's a curry, I don't know if it's curry. They said curry, but it wasn't very curry. I'm not sure. I don't know enough to really have an opinion on what I ate, besides that I didn't like it. I don't really know all the ingredients that are in it. I don't even know what it's called. I just eat it.

And if it's good, it's good. This one, not so much. But I will say that the chicken in Korea, definitely the fried chicken, is really great. It's good stuff.


Yeah, all the meals I've had, except this one this morning, I have really loved. And a lot of times I'm eating stuff and I just have no idea what it is, but it's so good. This time, the food didn't hit the mark. So it's okay. We'll come back. I'm still emotionally processing the whole thing. That was tough. It was so much food, and it was difficult for me to eat it.

Dalton Anderson (08:31.978)
I ate the majority of it, but I couldn't eat those white squishy things, so I had to pick up a protein bar so I could work through this episode, because I needed more energy. Because you've got to think about it: I was working throughout the night, and then I'm recording this episode right after work, after I get food and stuff. And then I'm here. Okay, so that was a little personal anecdote. Let's move into Meta Connect 2024.

So just quick, high-level highlights. The Meta Quest 3S is rolling out. The 3S is going to have the same processor as the Meta Quest 3, but it's going to be cheaper, and it's going to have the same controllers too. So it's going to have some similarities; it just won't have 4K. And the Meta Quest 3 really isn't 4K either, because it's 4K shared between the two optical lenses. So the optical lenses for your eyes

are 2K each. So it's not really 4K. They say 4K because once you combine the 2K and the 2K, it's 4K, but it's really 2K per eye. This is an FYI after I did quite a bit of research; I was like, wait, it's not really 4K. If you're splitting the resolution between two different eyes, you're seeing 2K per eye. Yeah, it threw me for a loop for a second.
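To make that per-eye math concrete, here's a toy check in Python. The panel figures are the commonly cited Quest 3 numbers; treat them as approximate, not official specs:

```python
# Toy arithmetic behind the "4K" marketing: the headline figure counts
# both eye panels together, so each eye sees roughly half the pixels.
combined_w, combined_h = 4128, 2208      # commonly cited combined figure
per_eye_w = combined_w // 2              # the two panels split the width

print(f"combined: {combined_w} x {combined_h}")   # the "4K-ish" number
print(f"per eye:  {per_eye_w} x {combined_h}")    # ~2K per eye
```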

And anyways, I ordered, I ordered the Meta glasses, or not the Meta glasses, the Meta Quest. I ordered a Meta Quest while I was out of town. And so when I get back, I'll be able to try some of these features out, and I'll do an episode on it. Like, one thing I'm really interested in

learning more about is working remotely with virtual monitors. I think it would be sick. Being able to just pick up your computer, go to a library or go somewhere, and just bust out a couple hours' worth of work using virtual monitors wherever you're at, I think is so cool.

Dalton Anderson (10:38.462)
So they've enhanced the content offerings that they're, sorry, I'm

Dalton Anderson (10:46.154)
getting a phone call. They enhanced their content offerings. So they are offering, I guess these are now available: you can watch Netflix, Amazon Prime Video, YouTube, which I know has been on there for a while, and Twitch. So now Twitch is rolling out, and you can use Xbox Cloud Gaming. Not only can you do those things, but you can also do them in

their enhanced theater mode, which is supposed to be more immersive, like the screen will wrap around you kind of thing. So instead of it being like a normal TV screen, it's a curved screen that kind of wraps around you and surrounds you. That seemed pretty interesting. And they said that you can do that on many platforms. They listed off Netflix, Amazon Prime Video, YouTube, Twitch,

and Xbox Cloud Gaming. And they also said that you can watch videos together on YouTube with their partnership. I'm not sure how that all works out, but I'll test it out when I get the chance, when I get back to the US.

Dalton Anderson (12:03.23)
The remote desktop capabilities for VR are having an enhancement. And I said "we're," but it's not really me; Meta is having an enhancement. Or maybe it is "we're," because I am a shareholder of Meta and I own part of the company. A very small part, but I'm doing my part as a shareholder. Anyways, I know that was funny.

They are partnering with Microsoft to enhance the integration with their Remote Desktop app. So apparently they demoed it: you have the VR headset on, you look at the keyboard, and somehow the computer knows that you're, well, I don't really know how it all works.

Maybe you have to set it up beforehand. I just don't know how it knows to connect. So they said that you'll look at a keyboard, and then it will know that you want to connect with your Meta headset, and it will just start connecting. How it does that, I'm not sure. I don't know; they didn't really go into detail about that. It just looked cool. They're also enhancing the AI

avatars, and they're allowing you to watch YouTube in Meta Horizon with your friends. And then you can also capture photorealistic spaces using your smartphone. Okay, so that was just a quick highlight summary of what they talked about. So now let's go into a little more detail, and we're going to start with the AI enhancements.

I think that's a good place to start, because the AI enhancements kind of trickle down, waterfall down, into these other products. So one of the first things they announced is that their models are now multimodal. And if you're new here, multimodal means that the models can now accept data from different avenues, I would say. They can accept text, they can accept voice, they can accept photos, and video, in some cases,

Dalton Anderson (14:22.014)
and in some of the examples they showed, it could accept video. These things will allow you to interact with Meta's AI more naturally than you could before. Text really isn't a good way to interact with these AI agents or models, because you want to be able to have a conversation and feel like you're collaborating instead of just listing things off, I would say.
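To picture what multimodal means mechanically, here's a rough sketch of one request carrying both text and an image. The endpoint and field names are invented for illustration; this is not Meta's actual API:

```python
# Hypothetical multimodal request: one payload, several input types.
import base64
import json
import urllib.request

def ask_multimodal(endpoint: str, prompt: str, image_path: str) -> str:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    payload = {
        "prompt": prompt,        # text input
        "images": [image_b64],   # photo input, base64-encoded
    }
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]  # assumed response field
```

The point is just that a multimodal model takes all of those inputs through one interface, instead of needing a separate system per input type.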

It's so much more natural to just talk to these AI platforms and get the results you're looking for. It's super useful, super useful, to be able to use your voice. I feel like if there aren't voice capabilities on these systems, I personally don't use them. So, in my opinion,

the first one who rolled that out was Google, with Gemini. And maybe their product wasn't as good as OpenAI's at the time, but they definitely made strides. And I think it's pretty much neck and neck now; a lot of these models are just slightly better at different things than others.

They all have their pros and cons because they're different models or different products. They're better at different things. They have different specializations.

But what I'm saying is, even when OpenAI's product was definitely better, and it wasn't an opinion-based situation, it was better, the evidence showed it was better, I would still use Google Gemini, because Gemini was so much easier to interact with. I could just use my voice, and it was so natural. And I really appreciate Meta opening

Dalton Anderson (16:19.166)
that capability up to the public and launching it.

But it's not only your voice. You can also use photos: you can upload photos, or you can take photos with the app.

They also integrated

Dalton Anderson (16:48.924)
AI voices. So now you can talk to the AI with your voice, but then it can also respond with its own voice, and you can pick the voice that you want. And a different approach that Meta took was they leaned into their social status and social media network, and they used public figures as their AI voices. So they are licensing

the voices of actors and celebrities, and you can pick one of those to be the AI voice that you communicate with, which I think is really cool and a different approach. It's got a level of novelty that might convince people to try it out just for the giggles. Like, I want to see what Snoop Dogg says, or so-and-so, and just try it out for fun. And then, like, wow, this is really useful. And they get hooked.

I think it's a really good approach. And then also I was thinking about, say they have trouble penetrating a customer market in a different country, some random country, whatever. They could use celebrities in that country to get people to try out the app. So it's like a two-pronged approach. One, it's very approachable, and people might try it out because it's funny. And then the other piece is, okay,

you can actually use this for market penetration, where the AI voices you're licensing are celebrities in that country. And then hopefully the citizens in that country utilize the AI voice feature and want to use that product versus other products, because they can talk with their favorite celebrity or actor, whoever that person may be.

Dalton Anderson (18:49.436)
Okay, so transitioning over to AI Studio and AI agents. Previously, we were talking about general

Dalton Anderson (19:03.444)
AI advancements with Meta. Now we're talking about AI Studio and AI agents. So with the Studio and agents piece, public figures have enhanced capabilities. I don't have them, unfortunately. I made an AI agent for the show, of course. Anything for you guys.

I don't have the same capabilities as a public figure, and it's slowly being rolled out to other people. Unless I get a lot more than 45 subscribers, I don't think I'm going to be considered a public figure for a while. Currently I'm at like 45 subscribers, and we're growing, we're growing for sure, but I think we've got a while until I'm considered a public figure and I can try these out.

They did have a public figure on the show, and they did some live demoing and some discussion of how they're utilizing these advanced features that only they have access to. And one of the things is, the agents respond to DMs for you. And so you can train your AI agent to understand what kind of person you are by training the agent on your threads,

Meta Threads, and then you can train the AI agent on the captions that you post and your comments.

And then it will understand who you are and how you want to interact with others. And then you can train the agent to automatically intercept certain questions or messages and respond how you would respond, but without you responding, to automate that piece. And it gives better service to the people messaging you. They think it's you.
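As a rough mental model of that training step, here's a hypothetical sketch of conditioning an agent on a creator's posts. None of these names come from Meta's AI Studio; it's just one way the idea could look:

```python
# Hypothetical persona conditioning: gather a creator's writing samples
# and fold them into a system prompt so replies match their style.
from dataclasses import dataclass

@dataclass
class CreatorCorpus:
    threads_posts: list[str]
    captions: list[str]
    comments: list[str]

def build_persona_prompt(corpus: CreatorCorpus, name: str) -> str:
    samples = corpus.threads_posts + corpus.captions + corpus.comments
    joined = "\n---\n".join(samples[:50])  # cap the context size
    return (
        f"You are the AI agent of {name}. Match the tone and style of "
        f"these writing samples when replying to DMs:\n{joined}\n"
        "Always disclose that you are an AI agent, not the creator."
    )
```

I put that disclosure line in deliberately; it connects to the labeling concern coming up next.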

Dalton Anderson (21:02.748)
It wasn't necessarily clear that the AI agent says it's an AI agent and not you responding. So I think that's kind of,

I wouldn't say shady, but a little difficult. And I talked about that a couple episodes ago. I was like, okay, the whole purpose of social media is connection, or supposedly. And if you are having these AI agents respond to people and they think it's you, is that good or bad? And I was leaning more towards bad.

And maybe if you make it clear, like, hey, you triggered an automatic response, blah, blah, blah, this is how I would respond, whatever, maybe that works. But I don't know if AI agents responding for you, to free up time from responding to DMs, is good without labeling it: hey, this is AI, or this is my AI agent responding for me.

Dalton Anderson (22:12.01)
Yeah, they can be trained on your writing style and your communication by using your threads, what posts you have on Threads and how you interact with people there, plus your comments and your captions on Instagram, which is sick. And then one thing they showed, which was incredible, was that your AI agent can be a legitimate AI version of you.

And so they live demoed it with the public figure. I don't know the guy's name, but whatever, it's not that important. What is important is they did a live demo of Mark Zuckerberg FaceTiming, or I guess Instagram videoing, this AI agent and asking it questions about the book he recently published. And the AI agent was able to respond; they asked a couple of questions and it responded in real time. And the

head was moving around. It had facial expressions. You know, for certain things you asked, it would change its expression, curl up its eyebrows, smile. Everything was synced up with the lips and the eye contact, and

it was very human-like. I mean, you could tell that it was an AI agent, but it's probably 80% there. Like, it's pretty close. It's pretty close, surprisingly close.

Dalton Anderson (23:46.09)
So interesting. So interesting. I don't know what they're going to do with that. It's almost like a deepfake on the fly. So I'm not sure what they're going to do about it. I think they'll be very, very...

Dalton Anderson (24:06.378)
I think they would have tight restrictions on who they'd roll that out to, because you don't want someone snatching photos from a public figure's profile, making up these fake accounts, and then making these AI agents and impersonating these people that

have a personal brand and identity. That's not cool, because they would get upset, and then Meta would

obviously lose these users that are really important to them. Okay, so next we're transitioning over to the AI-powered features that they rolled out. And I would say the AI agent public figure personas are up there as one of the coolest features they announced. And then the next thing is

these AI features that they're talking about rolling out. I guess they're not necessarily available right now, I don't think, but they were insane. The first one was

the public figure personas I talked about. And the second one was automatic video dubbing for Reels. And they're starting with English and Spanish. But the interesting thing was, they emphasized that the dubbing tries to use your authentic voice. So, like, what would you sound like if you spoke Spanish?

Dalton Anderson (25:47.612)
Or if you spoke whatever language. I mean, in this case it's only English and Spanish, but they're going to roll it out to other languages later on. And they showed some examples with people speaking English turned into Spanish, and Spanish to English. And these people would have a voice, and you'd see this chick talk or this dude talk, and they would talk in Spanish or English, and then it would be, not translated, but auto-dubbed into the opposite language.

And you're like, wow, I could really picture her sounding like that. Or this dude, he probably sounds like that if he spoke English.

Dalton Anderson (26:28.232)
And they emphasized using your authentic voice, somehow. Your authentic voice. And so each person, I guess, will have their own voice and brand in that language. I think your voice is part of your brand, especially when people are listening to you. And so they're saying that you'll have the ability to reach more users, and you'll have an authentic voice,

and that voice will be yours. Pretty cool. Pretty cool. I mean, wild. So crazy to see in the live demo. It was nuts. It was nuts. I was like, my goodness, this is crazy. Especially the AI public figure persona thing, or, I don't even know, your virtual you that you could talk to live with a FaceTime, or not really FaceTime, an Instagram video. Crazy, crazy stuff.
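For the curious, the dubbing feature implies a pipeline something like this toy sketch: transcribe, translate, then synthesize in a voice conditioned on the original speaker. All three stages are trivial stand-ins, since Meta hasn't published these models:

```python
# Toy auto-dubbing pipeline: ASR -> MT -> voice-cloned TTS.
def transcribe(audio: bytes) -> str:                    # stand-in ASR
    return audio.decode("utf-8", errors="ignore")

def translate(text: str, src: str, dst: str) -> str:   # stand-in MT
    toy = {("en", "es"): {"hello": "hola", "friends": "amigos"}}
    table = toy.get((src, dst), {})
    return " ".join(table.get(word, word) for word in text.lower().split())

def synthesize(text: str, voice_profile: str) -> bytes:  # stand-in TTS
    # A real system would condition on the speaker's voice embedding,
    # which is what would make the dub sound like "your authentic voice."
    return f"[{voice_profile}] {text}".encode("utf-8")

def dub_reel(audio: bytes, src: str = "en", dst: str = "es",
             voice: str = "creator_123") -> bytes:
    return synthesize(translate(transcribe(audio), src, dst), voice)

print(dub_reel(b"hello friends"))  # b'[creator_123] hola amigos'
```

The interesting design choice is the last stage: conditioning the synthesis on the original speaker is what separates this from plain translation.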

Okay, so now we're moving over to the next product: the Meta AI glasses, or Ray-Ban Meta AI glasses. So they're changing their approach to summoning Meta

AI. And so instead of saying these long action, or not actually action words, hot words, like, "Hey Meta, what am I looking at?", the hot word is now just going to be "Hey Meta" once, and it's going to understand that if you ask a follow-up question, like, okay, what can I make with this, or, what is this book about, you don't have to say "Hey Meta" again.

You can just continue the conversation like you were talking to a normal person. So they emphasized conversational interactions. They're also going to be rolling out a feature where you will be able to use the Meta AI glasses to remember things for you. Like, remember flyers or billboards that you saw, or an advertisement, or what this person was wearing because you liked their outfit.

Dalton Anderson (28:46.786)
Or, my favorite, where you parked. I might seem like a smart guy, considering, whatever. Maybe you feel that way, maybe you don't. But I do have trouble finding where I parked, all the time. And it's so funny: I was at work one time and they were talking about AI things, rolling out little things regarding AI, and I asked them some pretty engaging questions on how they're going to approach it, and have you thought about this, that, whatever.

And they're like, wow, you seem like you're really into AI and you know a lot about it. Have you thought about joining the AI committee? And at the time I had so much on my plate, I just didn't join, obviously.

But I was like, man, they're saying all these things, but I have trouble finding my car. I think the day before, I had spent like 20 minutes trying to find my car in some big parking lot, because I just completely forgot where I put it. So sometimes I forget where I parked my car. So this would be a great feature for me, so I don't need help finding it anymore. I can just use Meta's glasses for it.

Pretty cool. I love that feature. And sidebar, the company I work for doesn't know that I have this podcast, and I'm not really too public about it. Not because I'm ashamed of it or anything, but I just don't want people to think I'm distracted or up to no good or whatever. I mean, it opens up more question marks than it does answers. So I just kind of keep the book closed on that.

Okay, so it has this image extraction and text processing feature. So they demoed, not live, but they talked about it, that now you can have a folder, or not a folder. Huh? Yeah, I think I'm starting to get pretty tired, but we're going to keep pushing through. So with

Dalton Anderson (31:01.392)
the image extraction feature and text processing capabilities, they are now going to allow you to look at a flyer or an advertisement or whatever, and you can ask it, like, "Hey Meta, scan this QR code," or, "Hey Meta, call this person." You could look at a flyer, and it would know, okay, this is a QR code, or this is a phone number, and it would be able to process that information and take action for you.
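As a loose illustration of the kind of processing that implies, here's a sketch that decodes a QR code with OpenCV and pulls phone numbers out of already-recognized text. The OCR step is assumed to have happened upstream; Meta's actual on-device models aren't public:

```python
# Sketch: turn a photo of a flyer into actionable items.
import re
import cv2  # pip install opencv-python

def extract_actions(image_path: str, ocr_text: str) -> dict:
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    # OpenCV ships a QR detector; it returns "" when no code is found.
    qr_payload, _, _ = cv2.QRCodeDetector().detectAndDecode(img)
    # Naive US-style phone pattern, just for illustration.
    phones = re.findall(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}", ocr_text)
    return {"qr": qr_payload or None, "phone_numbers": phones}
```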

Another thing that they said isn't out yet but will be coming is real-time video overlay feedback. So you're sharing what you're seeing in real time with the AI agent, I keep saying agent, with Meta AI, and Meta AI will be able to give you feedback during your task completion. Like, how do you put this together? Or, what should I wear today? Help me pick out an outfit, or something like that. Okay, so this is the third thing that I thought was incredibly impressive, yeah, just overall mind-blowing: real-time live translation in multiple languages. I think they said English, Spanish, French.

English, Spanish, French, and I forgot the last one. So you're able to do live translation in many languages. And what will happen is: you'll talk, Meta AI will listen to what the other person in the conversation says, and then it will translate it for you and send it to your speakers. And then you reply.

And it's pretty quick. While the person's talking, it's translating in real time, so you are maybe a quarter second behind what they're saying. And so you can almost respond in real time, and they can hear you almost in real time. So it's like a legitimate conversation; it's not like you're waiting back and forth. I think it's really cool.
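A hedged sketch of the loop such a feature implies: speech in, translated speech out, in small chunks so playback trails the speaker by a fraction of a second. The ASR, translation, and speech stages are stubbed with trivial placeholders, since the real models aren't public:

```python
# Toy streaming translation loop with latency measurement.
import time

def transcribe(chunk: bytes) -> str:       # stand-in for on-device ASR
    return chunk.decode("utf-8", errors="ignore")

def translate(text: str) -> str:           # stand-in for the MT model
    toy = {"hola": "hello", "gracias": "thank you"}
    return " ".join(toy.get(w, w) for w in text.lower().split())

def speak(text: str) -> None:              # stand-in for speaker output
    print(f"[speaker] {text}")

def live_translate(stream) -> None:
    for chunk in stream:                   # ~250 ms audio chunks
        t0 = time.monotonic()
        speak(translate(transcribe(chunk)))
        print(f"  latency: {time.monotonic() - t0:.3f}s")

live_translate([b"hola", b"gracias"])
```

The design point is the chunking: translating quarter-second pieces as they arrive is what keeps the conversation feeling live instead of turn-by-turn.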

Dalton Anderson (33:16.358)
And another thing that I thought was really cool, it's not really a feature, it's more of an integration, I would say, is they're partnering with Be My Eyes. And Be My Eyes is, I guess, a phone app for people that are blind or have a hard time seeing. They can download the app and request volunteer eyes; it's people who volunteer to be the eyes of the visually impaired.

And so these volunteers will help whoever requests assistance, and they will,

I guess, assist in providing the information that they're looking for. Like, I'm looking for this thing, where is it? Or, can you tell me what time this thing starts? I don't know, it's not in Braille, and I can't tell what it says. Something like that. And so instead of doing it on your phone, now you can do it with the Meta AI glasses. And I think that's a more natural thing, because I've had trouble with

my friends or their grandpa or grandma, helping them do different things, and I'm like, all right, hold the phone. And it's just difficult to get them to hold the phone, because they're so focused on trying to help you, they forget that they need to hold the phone, and it's a whole thing. But if it's on your face, it's just, look at what you need me to look at and I can help you. It's just really nice, and I think a little bit more smooth, I would say.

I really think it's a great partnership for the product and for Meta. And I think it's useful to society. Very useful to society. Okay, so now we're moving on, not only to the new glasses, but there's a new style that I think is really slick. It is the limited edition clear

Dalton Anderson (35:27.594)
frames that showcase all of the technology that's being put into these glasses. And it looks incredible. So interesting. It reminds me of the Nothing Phone, but with glasses, and it's super cool. And they're limited edition. I think there's only 3,500 units being created, and they're not that much more than the normal glasses. I think they're a hundred dollars more. But really cool stuff.

Really cool stuff. Okay, so another thing that they announced was Orion. Orion is their new future glasses. They are full holographic glasses.

And these glasses are potentially going to be the glasses. They're the smartest glasses in the world currently. They are completely rebuilt from the ground up:

all custom sensors, all custom silicon, custom lenses, from scratch. And they've spent close to 10 years working on these glasses. And it has things like hand tracking, voice activation where you can trigger it with your voice, and eye tracking. And then it has this

Dalton Anderson (36:59.974)
wrist tracker, where you put this tracker on your wrist and it can sense your movements by monitoring

Dalton Anderson (37:13.736)
your electrical signals. I'm not really phrasing this correctly: it can detect your intended wrist movements and mimic them in the

UI of the glasses without you actually having to move your hand, using the signals that you're sending to your hand.
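The wristband idea is usually described as surface electromyography (EMG): reading the electrical activity of forearm muscles and classifying it into intended gestures. Here's a toy illustration; the thresholds and channel layout are invented, not Meta's:

```python
# Toy EMG gesture classifier: RMS energy per channel -> gesture label.
import numpy as np

def classify_gesture(emg_window: np.ndarray) -> str:
    """emg_window: (channels, samples) array of EMG voltages."""
    energy = np.sqrt((emg_window ** 2).mean(axis=1))  # RMS per channel
    if energy.max() < 0.05:           # invented "at rest" threshold
        return "rest"
    # Pretend channel 0 sits over a finger flexor, channel 1 an extensor.
    return "pinch" if energy[0] > energy[1] else "swipe"

window = np.random.default_rng(0).normal(0.0, 0.1, size=(2, 256))
print(classify_gesture(window))
```

Because the muscle signal fires slightly before the hand visibly moves, a classifier like this can react to barely-there movements, which matches the demo of steering the UI without big hand motions.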

Dalton Anderson (37:36.97)
So you can move the UI around without really having to move your hand that much, and it will sense what you're trying to do and do it for you, which I think is really cool. It's not being launched to the public at the moment. It's going to be closed source, or not closed source, but closed to certain reviewers and partners, and then it's also going to be used for development purposes. It's currently too expensive, and the product isn't as

good as it needs to be to be launched to the public for sale. That's what they said. We'll see. And I hope that they roll it out. I'm not sure how much they would cost, but I'm guessing right now maybe like $3,000, which is too much for a pair of glasses, for them to make any money on it, at least, or to get mainstream adoption. And you've got to remember that Meta's whole goal is to get more users.

And so if you have three-thousand-dollar glasses, it kind of defeats the purpose of the whole thing. Okay, so we are going on to the next topic. This is just going to be talking about Meta's vision for the future. And so there's a big commitment to making technology more accessible and user-friendly. And they always go back to the example of

computers: the Linux operating system, Windows, Mac OS. Windows is kind of open source-ish, Linux is open source, Mac OS is not. And for computers, the closed model nearly won; without Linux, and somewhat Windows, the closed model would have won. And for mobile phones, the closed model did win, with

Apple dominating pretty much the market. And so with the closed model, there's

Dalton Anderson (39:39.144)
control on the level of innovation you can have. And it particularly bothers Mark, because there have been some hiccups with

Apple on certain things that Meta wants to launch in the App Store, and Apple said, hey, well, we won't let you do that. You're not going to be able to update your app with those features, so just work on something else. And so I get it, there's some friction there. But also, having things more approachable and open source is, I feel, an efficient way to understand your risks, your security,

and different issues like bugs, improvements, and feedback. You can get those much faster if you open source and have millions of random people interacting with your model or platform or your app, versus having a closed system where only your customers have that access.

So when your customers are interacting at a faster rate, you can run, I would say, a tighter feedback loop, which allows you to run through more iterations, which allows you to improve faster than your competition.

Dalton Anderson (41:07.69)
So there's a big emphasis on making technology more accessible and user-friendly, and especially open source. They want the open source model to win. They're open sourcing their VR platform, they're open sourcing their AI models, and they're offering them for free, giving them to billions of users for free. And so they want to build technology that emphasizes

human connection and experience, and to do that with the open source model, to allow innovation from different perspectives and companies. And that was the biggest thing with Mark and the announcement: they spent a good amount of time at the end talking about how it's important for the open source model to win. And I agree. I think I would be quite sad if

the closed source model won for these new, robust technologies that are coming about. They're not robust yet, but they will be in the future, and they will have a deep influence on our society, on where we go and how we interact with the world. And it would be weird if something that is so important to everyone were closed source and you had no idea what's going on.

We've seen that before. It doesn't work out that well. So let's try the other route.

Yeah, okay. So just a quick recap. Meta updated their Quest line. They are providing updated avatars. They're making more content platforms available to watch videos. They're making it easier to interact with your friends. They're revamping the avatars. They're offering photorealistic spaces that you can create using your phone, or you can join other people's photorealistic spaces.

Dalton Anderson (43:12.618)
For, let's see, Meta's AI: you can now use your voice and photos with the multimodal models, which are their 11 billion and 90 billion parameter models. And they also launched 1 billion and 3 billion parameter models, which I didn't get to touch on. Those are made to run on devices, like your phone or glasses or a watch, something like that.

You can now utilize public figures' voices for your AI voice communication with Meta.

Dalton Anderson (43:56.464)
If you have the capabilities, meaning you are a public figure and you have an AI agent, you can train your AI agent on your threads, your captions, and your comments to properly engage with your audience. Then you can train the AI agent on how you'd like, or when you would like, the AI agent to act on your behalf, like when you get sent DMs. And so that's pretty cool.

Dalton Anderson (44:26.612)
The next thing that was mind-blowing was the real-time video interaction with your AI agent, which is like a carbon copy version of yourself.

They didn't really explain the how behind that, but I assume it's going to be locked behind having quite a bit of a following, because it could be potentially dangerous if everyone had access to it. So I don't think everyone's going to be able to use it. Auto video dubbing, with the lip sync, in multiple languages, with everyone having their own authentic voice in each language, is pretty incredible.

Dalton Anderson (45:09.596)
Meta's AI glasses rollout: they're going to allow image extraction and text processing, and real-time video overlay feedback for tasks. They're going to allow live translation. They are now partnering with Be My Eyes for visually impaired individuals. They are rolling out a more conversational approach to accessing Meta AI with the hot word,

Hey Meta.

They're also providing Transitions lenses now. They announced Orion and its progress, and it looks insane. And then there's a big emphasis on being open source, and the hope that the open source model wins. So, what are your thoughts about what was discussed? Is the live translation or the auto video dubbing

the biggest thing for you, the most incredible and thought-provoking? What are your thoughts? Let me know.

Dalton Anderson (46:24.272)
Next week, I really want to build something with Cursor. I used Cursor a little bit over the weekend and purchased premium so I could try to build something using the AI models. It's pretty legit, and I am looking into it. I want to build something and talk about how long it would take me to build it with AI versus without AI. So, very exciting,

and hopefully I can get that done. If not, I can dive a little deeper into some of these Meta topics. There are some other things I would like to talk on, but those are the main items I have on my list. And of course, I appreciate you tuning in, and wherever you are in this world, good morning, good afternoon, good evening. Thank you for listening, and I hope that you listen in next week.

Have a great day. Goodbye.
