Beyond Reality: Unpacking Meta's latest AI and VR innovations


Dalton (00:00)
Welcome to the VentureStep podcast, where we discuss entrepreneurship, industry trends, and the occasional book review. Today we're exploring Meta's groundbreaking advancements in AI and mixed reality. From the cutting-edge Llama models to the launch of Meta's Horizon OS, we'll dive into the latest developments that are shaping the future of technology. Join us as we uncover the exciting possibilities. Before we dive in, I'm your host, Dalton.

I've got a bit of a mixed background: programming, data science, and insurance. Offline, you can find me running, building my side business, or lost in a good book. You can listen to this podcast in both video and audio format on Spotify and YouTube. If audio is more your thing, you can find the podcast on Apple Podcasts, Spotify, YouTube, or wherever else you get your podcasts. Today, we're gonna be discussing Meta's AI leap from Llama 2 to Llama 3.

We'll be discussing Llama 3's features, and we'll touch on the newly announced Meta Horizon OS, or at least what I could gather from the blog, since the news was just announced by Mark Zuckerberg and Meta on their blog as of 10 hours ago. So there isn't that much information, but we will touch on it because it's very interesting.

Okay, so before I dive in on the models, I'm gonna compare them to their respective peers. One thing that is being used pretty often is this benchmark called MMLU, which stands for Massive Multitask Language Understanding.

This benchmark is kind of like an SAT for models. There's a test bank of roughly 14,000 questions spread across 57 subjects. So there might be STEM, and within STEM there's gonna be engineering, coding, all sorts of stuff. Then they'll have humanities, history, a whole bunch of various topics. The LLM is run through that test bank, they take the average score across all the topics, and that average becomes the MMLU score, the Massive Multitask Language Understanding benchmark score.
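To make that averaging concrete, here's a minimal sketch in Python of how an MMLU-style score could be computed. The topics and answers below are made up for illustration; real MMLU harnesses grade thousands of multiple-choice questions per run.

```python
# Minimal sketch of MMLU-style scoring: grade each topic separately,
# then average the per-topic accuracies into one headline number.
# The topics and answers below are made up for illustration.
from collections import defaultdict

# Each record: (topic, model_answer, correct_answer)
results = [
    ("engineering", "B", "B"),
    ("engineering", "C", "A"),
    ("coding", "D", "D"),
    ("history", "A", "A"),
    ("history", "B", "C"),
]

per_topic = defaultdict(lambda: [0, 0])  # topic -> [correct, total]
for topic, answer, truth in results:
    per_topic[topic][0] += int(answer == truth)
    per_topic[topic][1] += 1

topic_scores = {t: c / n for t, (c, n) in per_topic.items()}
mmlu_score = 100 * sum(topic_scores.values()) / len(topic_scores)
print(topic_scores)                      # per-topic accuracy
print(f"MMLU-style score: {mmlu_score:.1f}")
```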

Okay, with that piece explained, I'm gonna talk about how Llama 3 compares to its respective peers in that area. And that's important because certain models are built for certain things. Llama 3's 8 billion parameter model, or Google's Gemma 7 billion parameter model, aren't made to do these crazy tasks, like building software for some large company, or building a policy administration system, or building a social media app. Those models aren't gonna be able to do those things.

What those models are built for is when you want something small that doesn't need that much compute, nor that much space where it's being stored. These models are smaller, they use less energy, and they run faster. Where these small models really thrive is on a device, a mobile device like your phone, or in Meta's case, a good example would be Meta's Ray-Ban glasses. I don't know the exact name, but the glasses that have cameras and can take photos and videos and post to Instagram or Facebook. Those glasses would work great with a smaller model. Eight billion parameters still might be a little too large for the glasses, but it's just an example. Okay, that being said, Llama 3's 8 billion parameter model performed significantly better than

Gemma's 7 billion and Mistral's 7 billion parameter models. Gemma scored 53.3 and Mistral scored 58.4, compared to 68.4 for Llama 3's 8 billion parameter model.

And another thing that was pretty cool: Llama 2's 70 billion parameter model is getting outperformed by Llama 3's 8 billion parameter model. So there's this significant jump in performance, not only in the iteration from Llama 2 to Llama 3, but also within the same weight class, where it's outperforming peers that used to be the leaders. It's part of Llama and Meta AI being taken a little more seriously, because previously, with Llama 1 and Llama 2, I don't think their peers were taking Meta as a serious contender for building one of the best open-source models. Now I think people are kind of on the edge of their seats, because this is their 8 billion parameter model and it's performing very well. And then their 70 billion parameter model scored slightly better than

Google's Gemini 1.5 Pro, which was recently released, maybe three weeks to a month ago. It's crazy when you're keeping up with this AI stuff; two weeks is almost like six months. But Gemini 1.5 Pro was one of the highest-performing models in that area, the 70 billion class. Above those were the larger models, like GPT-4 and Gemini Ultra, and below that were things like Gemini 1.5 Pro, or now, Llama 3 70 billion. And I said people are kind of on the edge of their seats because Meta launched these two models. This is their small model, we'll call the 70 billion their medium, and they have a 400-plus billion parameter model that is forthcoming. They didn't necessarily announce when it's coming out. They're doing some testing and still training the model, but it is gonna be coming out. They don't know if they're gonna open-source it or not, and I'll talk about why they may not, depending on how everything plays out.

Llama 3 features. Llama 3 came with image generation and text generation, and I'll go over my personal experiences when I get into more detail.

And then Meta Horizon OS. I didn't touch on it as much, but Meta Horizon OS was announced as of 10 hours ago. Mark Zuckerberg and Meta released a blog post, Mark had a reel on Instagram and Facebook, and Meta had the blog post on the internet announcing that they're opening up Meta Horizon OS, their operating system, to third-party hardware makers. This is the operating system that Meta uses for their Quest, Quest Pro, Quest 2, and Quest 3, plus their other devices, like the Ray-Ban glasses, and I think there's one more that I'm forgetting. But basically, it's gonna be an operating system for any mixed reality device. So it doesn't necessarily have to be

a VR headset. It could be glasses, or some kind of watch that has visualizations on it, I don't know. Maybe a hat, something crazy with a hat, maybe you could project things from your hat, I'm not sure. But they're opening up their operating system, and they're gonna have an open marketplace. So all the games that developers have built for

Meta devices, or for devices that aren't built on, say, a Quest, would work for Quests, or vice versa. It allows developers to develop on one platform and reach many users, which promotes growth within this space, because users get more bang for their buck and developers get more efficiencies and synergies from developing one app for many devices. Okay. So, I'm going to go ahead and start the presentation.

So now we're gonna explore Llama 3's innovations. I thought the improvement from Llama 2 to Llama 3 was pretty cool, and we went over the results, but I didn't discuss what shipped with Llama 3.

So Llama 3 came with text and image generation. Text generation was, in my experience, similar to Gemini's in tone and identity, where it says things in a certain way, kind of witty, warm, and more personable than ChatGPT 4 is. And that matters because, from my experience, I use ChatGPT 4 to develop things: I'll have it write me scripts, or improve my code, or optimize it, or point out inefficiencies in my scripts. But I don't necessarily have it write prose for me, because it's just not as personable as Gemini. So when I want help writing a support ticket,

or when I need to reformat my notes, I'll use Gemini. And that's something that, if you're not doing it, is super easy to do: you write down all your notes, kind of shorthand and messy, and you set up a chat that organizes your notes for you. So next time you're in a big meeting and you're taking down notes on your computer, and it's a mess, useful but unorganized, ask AI to organize your notes for you. Then you can send it out to the team, like, hey, I took down some notes, here you go, appreciate it. It makes your life a lot easier. That's kind of a sidebar, but anyways.
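To make that workflow concrete, here's a rough sketch of the kind of reusable formatting prompt I'm describing. The template wording is just my own example, not any official feature; you'd paste the output into whichever assistant you use, Gemini, Llama 3, or anything else.

```python
# Rough sketch: wrap messy shorthand notes in a reusable formatting
# prompt you can paste into any chat assistant. The template text is
# my own example, not an official API or product feature.

NOTE_TEMPLATE = """You are my note formatter. Rewrite the raw notes below into:
- A one-line summary
- Bold topic headers
- Nested bullet points for details
Keep my wording where possible; fix grammar and drop filler.

Raw notes:
{notes}"""

def build_formatting_prompt(raw_notes: str) -> str:
    """Return a ready-to-paste prompt asking an LLM to organize notes."""
    return NOTE_TEMPLATE.format(notes=raw_notes.strip())

messy = "met w vendor pricing tbd q3 launch risk shipping delays ask legal re contract"
print(build_formatting_prompt(messy))
```

Once you've saved a template like that in a dedicated chat, reorganizing meeting notes is one paste away.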

I use Gemini to help me write things. So I used to get help from Gemini formatting my podcast episode outlines. I don't necessarily fill out a huge amount of information, where everything's got a crazy number of bullet points, and bullet points under those bullet points, nested with additional information. I kind of have a topic, like, you know, subject or segment one, which for this example would be exploring Llama 3 or something. And then underneath that, I have these topics. So, in bold, I'll have user experience, and then I'll just have a bullet point saying text or image generation. And then from there, I'll just talk about it.

But to get there, I kind of use voice chat, and I end up with this big blob of text, no grammar, no nothing, just one huge long run-on sentence. Then I throw it into the chat, or I used to, and Gemini would be like, oh, no problem, I'll use the template we created, and it would help format my notes into this outline. Then I would reformat it to add the bold text and make things pop out, so I can just glance over it and see what's going on.

But as of today, Gemini is rejecting me. And what do I mean by that? There are two things commonly talked about with these AI large language models: rejections and hallucinations. A hallucination is when you ask for something, the model thinks it's doing what you asked, but the output is completely different and just not related to the topic. Or you might ask it to do something and it responds in French. That would be a hallucination. And then there are rejections. A rejection is when you ask the AI to do something and it says, hey, I really appreciate the ask, but I can't do that, I'm not capable.
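Just to make the two failure modes concrete, here's a toy heuristic for spotting rejections in a model's reply. Real evaluations are much more sophisticated, and this phrase list is purely an assumption for the sketch.

```python
# Toy sketch: flag a reply as a likely rejection by scanning for common
# refusal phrasing. The phrase list is an illustrative assumption, not
# how production evals actually detect refusals.

REFUSAL_MARKERS = (
    "i can't", "i cannot", "i'm not able", "i am not able",
    "i'm not capable", "as a large language model",
)

def looks_like_rejection(reply: str) -> bool:
    """Return True if the reply reads like a refusal rather than an answer."""
    text = reply.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

print(looks_like_rejection("I'm not able to generate templates."))  # True
print(looks_like_rejection("Sure, here's your outline:"))           # False
```

Rejections are the easier of the two to spot; hallucinations look confident, so you generally need ground truth to catch them.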

And I think it's super frustrating, because sometimes you can kind of force it to do what it says it can't do, and sometimes you can't. So I tried my best to convince Gemini: hey, you actually can generate a template for me, you're just choosing not to. And it kept stating, I'm a large language model, I'm only allowed to translate text, I can't help you, or something like, I can only manipulate and translate text. And that's exactly what I was wanting it to do. It just wouldn't do it. So I asked my new friend, Llama 3, to help me out, and Llama 3 was able to do it, no problem. So that was my experience: it has a warm nature similar to Gemini, but with less rejection. It's able to do a lot of things, and when it genuinely can't do something, like if I ask it, can you generate an audio clip, it's going to tell me no, but it will say, I can't do that, but go here, you can do that there, then come back and I can help you with the rest.

That was pretty good from my perspective: you're not getting rejections. Now, I've only used Llama 3 for, I don't know, four days, so I could be wrong. I mean, when I initially used Gemini, Gemini worked great. It had image generation, it had great text generation. And then Google had issues with their diversity thing, or I don't know what they called it, their image generation malfunction. I really don't wanna get that far into it; if you're really curious, you can look it up. But basically, they were forcing diversity into their image generation. So you'd ask for a historical figure, like, can you send me a photo of an African king? And it would not

show a Black person; it would show a white person or some other ethnic group, race, culture, whatever. And it did the same thing the other way: if you asked, can I have a historically accurate English king or something like that, it wouldn't give you an old white dude, it would give you some other group. So people got pretty upset about that. I don't know if it's that big of a deal, but I guess if you're asking for historically accurate information, and the model knows what it's producing is not historically accurate, then that is a form of deception, if it's actually lying to you. But I don't know where the line is between deception and hallucination, right? That's difficult to understand, because you don't know what the model knows.

But that would be a cool measurement, interesting. Okay, so that being said, my initial experience with Llama 3 has been great. I used Llama 3 to help me set up VentureStep's business page, and I used it to help me set up my new Instagram and link the two together, because Meta is always changing things and reorganizing their Business Suite and how things work. So I knew that I had to set up the business page first, but I didn't necessarily know how to go about setting it up. Whereas on Instagram, I knew you had to set up a separate account, but I didn't know if you could register multiple accounts with one number, because that used to be an issue: if you had many accounts on one number, it would flag you as spam. So Llama 3 helped me figure out how to navigate those potential issues. And then I followed Meta's advice, Llama's or Meta's. It's weird, because Meta doesn't have a really good name for their AI, so I just call it Llama.

I followed Llama's advice, and it almost sounds like I'm saying mama, but Llama. Anyways, I followed Llama's advice on how to set up the business page and Instagram, and then I also asked it, hey, is there a better way for me to upload videos, like 70 videos from my podcast, which would involve long-form videos and short videos that would be Reels on the platform? And how do I get things to cross-post? Do I have to upload to Instagram only, and would that cross-post to Facebook? It gave me guidance there and told me, oh, there's this recently launched thing called the Business Suite,

and the Business Suite allows you, once you have the two accounts linked up, to upload one video and have it post on both platforms, as long as it works for both. So here's an example: if I wanted to upload 30 images, I could do that on Facebook in one post, but on Instagram you're limited to 10. And I think Instagram's videos are limited to an hour, whereas Facebook is limited to something like two and a half or three and a half hours. It's longer than Instagram, that's all you need to know. So as long as what you're uploading meets the parameters on both Instagram and Facebook, it'll upload to both, which is pretty cool. So I uploaded some videos, and then I asked Meta AI for suggestions on how I should go about uploading, some of the things I should look for, how I should set my comments, and those kinds of things, and I followed Llama's advice. I figured Llama would know, because it was built by one of the largest, or I guess the largest, social media company in the world. So I figured the company who created the AI would know how to navigate its parent. Okay.
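To make those cross-posting limits concrete, here's a small sketch of the kind of pre-flight check you could do before posting. The numbers mirror what I mentioned above, 10 versus 30 images and roughly 60 versus 150 minutes of video; they're assumptions from memory, and Meta can change them at any time.

```python
# Small sketch: check whether one upload fits both platforms' limits
# before cross-posting. The limit values are assumptions recalled in
# the episode, not official numbers, and they change over time.
from dataclasses import dataclass

@dataclass
class PlatformLimits:
    name: str
    max_images: int
    max_video_minutes: float

INSTAGRAM = PlatformLimits("Instagram", max_images=10, max_video_minutes=60)
FACEBOOK = PlatformLimits("Facebook", max_images=30, max_video_minutes=150)

def fits_both(num_images: int, video_minutes: float) -> bool:
    """True if a single post satisfies both platforms' limits."""
    return all(
        num_images <= p.max_images and video_minutes <= p.max_video_minutes
        for p in (INSTAGRAM, FACEBOOK)
    )

print(fits_both(num_images=8, video_minutes=45))   # True: safe to cross-post
print(fits_both(num_images=25, video_minutes=45))  # False: too many images for Instagram
```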

So with that advice, I was able to get a couple thousand views in less than 24 hours, which was pretty cool. Thanks, Llama, I appreciate that. That being said, I do wanna talk about the image generation, because the image generation is so cool. I'll share my screen in a second, let me figure this out.

It's always weird sharing your screen on the fly. Presentation, your screen, maybe, I wanna share this one. Okay, so if you're watching the video, you'll see the screen. Otherwise, if you're watching with your ears, you'll just have to listen to me. So I'm on Meta AI's website, and it gives you two selections: you have this paintbrush, and then you have a pencil. I clicked on the paintbrush, which is the Imagine section. It starts off with "imagine something new," and then it has some prompt suggestions for you, like design a submarine, two sloths playing a wooden puzzle, or design a flower dress. I'm gonna click on the flower dress. And so when you have a prompt inputted, it's gonna generate a series of photos, and you'll have maybe five, or, I can't count, it's four. You have four photos, and then you have the chance to edit them or animate them.

And if you want to change your photo here, it changes on the fly. So when I have the initial prompt, it will generate a photo, and then from there, it will iterate on that photo. Right now the prompt says, design a flower dress for a runway show, and it shows a woman on the runway with this kind of beautiful flower dress, and what looks like a flower necklace, I don't know. And then from there, if I change it to, design a flower dress for a runway show in space, if I can type, oh my gosh, in space, okay. So it doesn't really recognize that I want it to be in space, so maybe, in space on Mars.

Okay. With falling rocks, falling flowers, let's do falling flowers. Let's do that.

Okay, so now these people in the background are kind of fading away, and the background is a little more bleak. It's dark, and there's this shining light on the prominent figure of the image, and there are flowers falling from the sky. It's pretty cool. So then, what if I just said, falling rocks?

Rocks. Okay, these aren't really rocks. I said falling rocks and it's still giving me flowers. So I'll say, design a

tree. Okay, so now I said, design a tree dress for a runway show in space on Mars with falling rocks. It kind of made a mud dress with falling rocks, and it looks similar to Mars. But if I get rid of the runway show part, see, and I just say, tree in space on Mars with falling rocks, make it cyberpunk.

Okay, so now we've got a cyberpunk tree. Let's do, with lights.

All right, so if you're watching the video, you get a good idea that not only can you generate images, you can generate images on the fly and see how your prompt is changing the output, which is pretty cool. I've never seen that before, and it's exceptionally fast.

DALL-E, which is ChatGPT's, or I guess not ChatGPT's, but OpenAI's image generation model, is a bit slower than Meta's. Okay, I wanna stop sharing. Okay.
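From a developer's point of view, that regenerate-as-you-type behavior roughly amounts to re-running generation whenever the user pauses editing, usually with a debounce so you don't fire on every keystroke. Here's a hedged sketch of the pattern; the generate_image function is a stand-in stub, not Meta's actual API.

```python
# Rough sketch of the "regenerate as you type" pattern: debounce prompt
# edits, then re-run generation only on the latest settled prompt.
# generate_image is a stand-in stub, not Meta's actual API.
import time

def generate_image(prompt: str) -> str:
    """Stub: pretend to render an image and return a description of it."""
    return f"<image for: {prompt!r}>"

def live_session(prompt_edits, debounce_s: float = 0.5):
    """Yield a fresh image whenever the user pauses typing long enough."""
    last_prompt = None
    for prompt, idle_s in prompt_edits:      # (prompt text, seconds idle after edit)
        time.sleep(min(idle_s, debounce_s))  # simulate waiting out the debounce
        if idle_s >= debounce_s and prompt != last_prompt:
            last_prompt = prompt
            yield generate_image(prompt)

edits = [
    ("design a flower dress", 0.1),          # still typing: no regeneration
    ("design a flower dress for a runway show", 0.8),
    ("design a flower dress for a runway show on Mars", 0.8),
]
for image in live_session(edits):
    print(image)
```

The speed Meta shows here suggests a model tuned for low-latency generation, which is what makes the interaction feel live.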

So I was honestly really impressed with the image generation. From my experience, I was pretty impressed with how it generates on the fly and follows what you're asking for, with limited hallucinations. In that demo we kind of got a hallucination, but I think it's also a factor of what I was saying. I was saying something like, design a flower dress runway in space on Mars with falling flowers and rocks, you know? It's just all over the place. You can't really blame someone for not following directions when your directions are horrid.

But overall, I'm very impressed with what they've got going on. And I really hope they launch the 400-plus billion parameter model soonish, so I can play around with that one. And I hope they'll open-source it.

One thing I didn't talk about: they have their meta.ai website, which is kind of like ChatGPT or Gemini, where you log in with your Facebook account, it keeps your chat history, and you can train your chats to do certain tasks and help you optimize your day. Then they also have things for one-off questions integrated into Facebook, WhatsApp, and Instagram. And not only are they at the forefront of those apps, they're also within your group chats. So within your group chat, you could be like, oh, let's plan a trip to so-and-so place, and then you could ask ChatGPT, ah, ChatGPT, it's so easy to just say ChatGPT. You could ask Llama, or your mama, to help plan an itinerary, or what are the best times to fly, all those kinds of questions: what time should we fly? What are some interesting things we should do? Are there things that are interesting that you need reservations a month out for?

What are great places to stay in? Not necessarily specific places, but areas, like neighborhoods. And you can ask all those questions and get answers on the fly, with sources from Google and Bing integrated in real time into your group chat with your friends. And you can also make memes, you can make fun of your friends. I had a friend who went a little too wild on the 420 weekend in California, and he wasn't feeling too good. So I asked Llama to help him feel better, with hangover remedies and some annoying lullaby songs. It's pretty funny to just be able to

communicate with your friends through this intermediary of Llama. You can also ask it questions yourself, which I do all the time. That's why I like some of these AI chats: I have an AI chat in Gemini just for random questions. I think last week I stayed up late one night asking about the most efficient architectural shapes and designs for homes. And in fact, it's the dome, kind of a circular, dome-ish thing, and it's supposed to have a natural insulation rating of something like 64 or higher.

I don't know, it's a lot more than a normal house. But it has low resale value and it looks odd, and all these other issues. Anyways, I ask random questions all the time, and I think other people might have similar questions, like, how old is Taylor Swift? Can you tell me about Taylor Swift? What's going on with the Migos? Things like that. And you can ask those questions right in WhatsApp or Instagram, in your group chats, and it will keep your chat history. One thing that annoyed me: I used to do fun facts every day on my Instagram, and with that came issues of gathering factual information from fun fact apps. I had a really good app for a while,

and then, well, there's not a lot of money in a fun fact app, so it disbanded. The other app I'm using is okay, but they spam me with ads, and the ads are in languages I don't know, so I can't tell which button is cancel or exit, and they always switch which side they're on. So I don't know how to get out of the ad, and every time I wind up clicking into the website or whatever, and it's super annoying. So I would love to just use Llama within Instagram: hey, give me today's fun fact, write the relevant captions, and generate the hashtags I need.

So that's really cool, and for me at least, I find it really interesting. I don't know about you, but if you're listening this far, I assume you find it somewhat intriguing.

Okay, Meta's strategy in moving into AI. This isn't necessarily my own analysis; it's a summarization of information I've gathered over the last four or five days. Basically, they want to open-source. They want to improve their products. They want to protect against sophisticated adversaries, like government and election meddling.

But to explain all that, we have to back up a few years to understand why they have so many H100s. And if you're not familiar with H100s, shame on you. You should listen to the NVIDIA podcast episodes, where I talked about the H100s and their importance to NVIDIA's revenue runway. It's one of their flagship products, for training AI.

Okay, so when Instagram was getting battered by TikTok, because Instagram didn't have Reels, Instagram was losing users, and so was Facebook, to TikTok, and TikTok was gaining popularity. Facebook, I guess Meta, recognized that this was an issue and something needed to be done. And the thing that was different between TikTok and Instagram was how they surfaced unknown accounts to you. Instagram would surface accounts based on

kind of a thousand people that are unknown to you but might be known to others in your social network. TikTok is a little different: it uses an algorithm built on what you like, how long you view stuff, who views what, who you follow, what they like, all these things, in kind of a nested tree. Whereas Instagram was more surface-level, you know, we'll show you these thousand unknown people, TikTok is showing you upwards of hundreds of millions of unknown accounts. That's how their model works.

Facebook wasn't like that, and Instagram wasn't like that. So they had to restructure their architecture and their database schema to be able to do these things. And they also needed to train their Reels model, their Reels algorithm, to have the capability to recommend the right content to users.

That took a lot of compute power. And one of the good things about Meta is that when they have a big issue at the company, and I think the previous biggest issue was Snapchat, which they fended off as much as they could, and this TikTok thing was more complex, but when they have these kinds of large issues that teeter the company, per se, they double down. They recognize the mistakes, but then they also say, okay, we are changing our infrastructure allotment, we're altering our schema, we're changing our architecture. We need to prepare not only for what we have to do now, but for what we need to do in the next five years. And their decision was to double whatever they thought they needed. So say they thought they needed, and I'm ballparking these numbers, 200,000 H100s or something like that to train on their data. They were like, all right, we'll just do 400,000, and then we'll figure out what to do with the other chips.

Fast forward a couple of years, and they started working on Llama: Llama 1, Llama 2, Llama 3. Those earlier decisions, made back when they were fixing Reels, helped the development of Llama, because they had the compute and resources to train the models. And they have silicon partnerships, and they're gonna be making their own silicon soon, maybe within five years. But the H100s are better at training AI than other chips you can get on the market. So they're using the H100s to train Llama, and they're using their other chips and chip architectures to train their other models, like Reels, or their video algorithm, or just general algorithms on Facebook and Instagram.

Okay, so now that we understand the historical context of how Meta got to where they're at, another interesting thing they talked about was understanding their end user. From their perspective, they didn't think users on Instagram or Facebook or WhatsApp were gonna be coding within the app, which makes sense to me, because I've never thought to myself on Instagram, oh, I can't wait to write some script in here. So originally, for Llama 2, they didn't train their model on programming data or things related to computer science. They kind of excluded that information, because, oh, it's not relevant, it's not what the end user is gonna be doing, so

they're not going to be asked those types of questions. And what they discovered was that their models performed much better at critical thinking and reasoning if they included coding in their training data. So once they did that, they were obviously able to code with Llama, but they were also able to handle more complex questions. Pardon me, I had a cough. More complex questions, and push the needle of what is possible with these low-parameter models.

That piece about the coding data was just an interesting sidebar. But they did bring up why they were getting into AI and why they thought they needed to be really serious about it. The example they gave, and this is from Mark Zuckerberg himself, is that people aren't getting any better at being racist. Like, if you're racist, the things you said in the 1930s or the 1800s are pretty similar to what you'd say in 2024, maybe with a little different vernacular, but in a roundabout way you're saying the same exact thing, you're not improving, and there aren't substantial resources backing you.

He said, but that's a different situation once you talk about election manipulation on the platform.

There have obviously been previous attacks on Facebook, and just general platform manipulation. A lot of the top Facebook pages were discovered to be run by Russian-backed entities that may or may not be associated with the government, who knows? It's difficult to say, right? I mean, it's a Russian page, in English, owned by a Russian shell company, and somehow they're promoting division while being popular at the same time. Just a tad shady. But that was the example Mark gave, okay:

hey, racist people aren't going to get any better at it, but I do know that governments are going to have substantial resources and are going to be adamant about completing the task at hand, and we need to be prepared for that. And I'm really concerned that if we don't have the best open-source AI, and AI becomes closed-source and controlled by governments, then one government might have this super AI that could take over our platform and manipulate it in nefarious ways. And he gave another example: if one group has a super AI and the other groups don't, and they're the only one who has it, then they kind of have a future outlook on

companies. So if they had this super AI, they could almost see two years into the future, or two years into the past, however you want to phrase it. The example given was: if you had a security patch, you would patch it, and everyone would get the security patch at the same time. But if you had a super AI that was more advanced than everyone else's, that super AI would be able to discover these security lapses and exploit them without anyone knowing, for years. Or attack the energy grids, or all these other crazy things. But the most plausible one would be, okay, you infiltrate a system and mess with it for years without anyone knowing. And you could do that across many, many large companies.

And no one would know. So that was Mark's, and Meta's, concern. I assume Mark speaks for Meta, right? That was their concern about AI and why they felt they needed to be really serious about AI at Meta. And where they see themselves: they wanna be the best open-source model, and really they want to be the best model, not just among open models but against the closed ones too. But he did say they may or may not open-source the 400-plus billion parameter model, because of concerns about promoting violence. These open-source models are trained on data, and that data could have bad stuff in there.

Given the data behind each of these models, each model has tendencies that they try to mitigate and suppress. You've seen it in the news a couple of times, with OpenAI or Google having issues with certain things. One of their concerns is, you know, we don't wanna promote violence. So if they can't sufficiently mitigate the generation of violent content from the 400-plus billion parameter model, they said they were not going to open-source it,

which I think is a fair statement. They said otherwise the model will be open-sourced. We're a pro-open-source company, we believe in open-sourcing, and he feels very strongly about that. They open-sourced their other models, their machine learning models. They open-sourced their database architecture, which became industry standard. They're opening up their operating system for all of their mixed reality devices, a space where they're the industry leader, and they're open-sourcing their AI models, the Llamas, which they've spent billions of dollars on. So they're pro open source. And one of the gripes Mark has is that mobile phones ended up with a closed kind of framework, whereas PC wasn't like that. PC has this level of freedom where you can have mods, you can have different operating systems, all this different stuff you can do, whereas on Apple, you only have Apple, and the only thing you can do on Apple is use their App Store.

Android's a little different, because Android is open source, and you can have many different manufacturers developing on Android. Like Meta Horizon, or Meta Horizon OS, surprisingly enough, was developed on Android. I don't know how that works, but they did it, and so they forked off of Android.

That being stated, he has some issues with Apple. He mentions it all the time, honestly, where he's like, well, you know, we believe in the open model, and we don't want mobile phones to stay closed, where Apple won. And one of the things he talked about in a podcast I recently listened to was that he launched products at Meta, product features at Meta, and Apple just said no, you're not launching these, we're not letting you launch this on iOS. And that really ticked him off. Not only is there the issue of Apple taking a cut of his revenue, but there's the piece of denying innovation, denying the next big thing. That's what really upsets him.

So Mark doesn't want those kinds of limitations on the world, and he wants things to be free, which I agree with. I think open source is definitely way better than closed source, for sure.

Speaking of open source: Meta Horizon OS, which was announced, now that I'm talking, maybe 11 hours ago as of today, April 22nd, 2024, opens up Meta's operating system to third-party hardware makers, which is something else, because they're the industry leader in their space of mixed reality, and they're also opening up their marketplace. So not only are major technology companies like Lenovo, Xbox, and ASUS going to be launching products on their platform in the next couple of years,

but a developer also only has to develop for one target, this Horizon OS, whatever language that is, I don't know if it's still Android-based or what. You develop for that one platform, and then you can ship to all these different device companies. And these device companies could be making VR headsets, glasses, hats, watches, you name it.

And this just changes the game, because not only does Meta have the industry-leading mixed reality product offering, now they're going to have some of the largest technology companies in the space developing on and making partnerships around their OS, which cements Meta as the mixed reality leader, almost like a permanent thing. It's difficult for a company, say Apple, to compete when it isn't just closed source versus closed source.

It's different with Android and Apple: Apple had dominance over Android, and Android kind of dominates in other countries besides, like, Japan, the US, and, not really England, but the UK.

But if nobody has permanent dominance at the time, and you're fighting open source versus closed source, with equal resources in this scenario, most of the time open source is gonna win, because there's more opportunity for people to enter the market, there's a lower barrier to entry, and there's a better development community. With all those things combined, it makes it super difficult to compete. So it's like a friendly gesture to other companies, hey, join the platform, join Meta Horizon OS. And at the same time, it really messes up all the closed models. Like if Apple's closed source, and they're not gonna go open source, because everything's closed source with Apple, it really jacks up their plans, I think. And this makes Meta the industry standard for devices, the app store, and the OS.

I don't know how much more you'd want in order to be the leader. So congratulations to Meta and Mark for pulling that off. And I think the bill for TikTok passed, so TikTok might be getting banned unless they divest. So things are looking up for Mark. Mark's knee is recovering and he is doing well. He's in the meme game now, and people are making fake videos of him with a beard alongside his new announcements. It was kind of a, what is it,

derivative of a previous AI photo of Mark with a beard and a chain out, where everyone was like, oh, from Mr. Steal Your Data to Mr. Steal Your Girl. It really blew up on the internet. So Mark's been teasing the community, like posting a photo of himself with a shaver, like, question mark, should I? And he was laughing about someone putting a beard filter on his recent announcement of the new Meta Horizon OS and the Llama launch. So he's really blowing up, and people seem like they're relating to him more. So it's great for Mark.

Today we spoke about Llama 3's innovations and progress. We also touched on the newly announced Meta Horizon OS.

These things will shape how we interact with technology day-to-day in the coming years, and I can't wait to see how users might interact with their new Llama friend on WhatsApp, Instagram, or Facebook. I encourage you guys to give it a try on your social media accounts, and let me know in the comments what kinds of questions you asked or what you found useful. You can reach out to me on Instagram or Facebook. I have a Facebook page now, I'm setting that up and getting started with all that. Once again, thank you for listening to VentureStep, and have a great day. See ya. Bye.
