Founder of Estimote, Inc. (YC S13) here — we do beacons.
In the Project Aria video, they mention having installed beacons at an airport to enable indoor location, only to dismiss that approach as something that "doesn't scale."
Instead, they say they "trained" an AI model using vision from glasses, allowing for vision-based localization.
So, here’s an honest question: which approach is actually easier, more cost-effective, and energy-efficient?
1) Deploying 100 or even 1,000 wireless, battery-operated beacons that last 5–7 years—something a non-tech person can set up in a day or two.
2) Training an AI model for each airport, then constantly burning compute power from camera-equipped glasses or phones that barely last a few hours.
Thoughts?
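For reference, beacon ranging on the receiver side is usually just the advertised RSSI pushed through a log-distance path-loss model; here's a minimal Python sketch (the constants are illustrative, not any particular vendor's firmware):

    def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
        """Approximate distance in metres from a received RSSI.

        tx_power_dbm is the calibrated RSSI at 1 m (advertised by iBeacon-style
        hardware); the exponent is ~2 in free space and higher indoors, where
        walls and bodies attenuate the signal.
        """
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

    for rssi in (-55, -65, -75, -85):
        print(f"RSSI {rssi} dBm -> ~{estimate_distance_m(rssi):.1f} m")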
> So, here’s an honest question: which approach is actually easier, more cost-effective, and energy-efficient?
Really it's more like three questions.
1. Easier? I guess that depends on how you define ease, but it largely comes down to what resources you have available to you. If I'm Meta and I already have a ton of compute and AI training expertise but don't have relationships with all of the airports, stadiums, etc., their approach is probably easier. You'd have to spin up new teams of people all over the world to get beacons everywhere you want them.
2. Cost-effective? I don't know enough about the costs of your solution to give an accurate answer here, but again it just seems like they're probably already spending resources training models on a huge number of images of the world, so maybe not a lot of incremental cost here.
3. Energy-efficient? I would assume your approach wins here.
In my experience with a mesh wi-fi project, physical devices come with real-world physical side effects: accidents happen; devices go offline, get stolen, or get knocked off walls and shelves; a physical location needs to be negotiated with the space owner (less of a problem if the number of venues is only in the hundreds, since business people can handle those at that scale); dust, water, heat, animals, etc.
It's not a big problem if you want to equip one venue or a couple, but scaling the operation means these side effects scale too, and we had to work on solutions to handle them rather than working on our core competency of mesh wi-fi. Unsurprisingly, the project was scrapped despite being technically feasible on a small scale - we had a couple of sites.
Virtualizing a physical space gives you more flexibility. It keeps most problems in the software engineering space and limits physical requirements (eg someone might still need to walk around an airport to update the model, but I can't think of any other major ones).
That said, AI is sexy (right now), Meta is heavy in the MR space and the tech is reusable, even if it's not the most energy-efficient solution.
(disclaimer: just my personal ramblings, I don't work on project Aria)
Getting permission to install hardware is a lot harder than not getting permission to install hardware. It isn’t the hardware that doesn’t scale, it’s the people.
I've used beacons a lot in installations. I found their reliability was a bit over-promised [1]. If you want to know whether a user is within a 4 metre sphere, in a time window of about 5 seconds, then it's fine. But don't hope for anything more precise than that; the false positives/negatives aren't great.
A large part of the variation I found was due to how individual users held their phones, and the resulting signal attenuation.
[1] https://hackaday.com/2015/12/18/immersive-theatre-via-ibeaco...
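In practice the usable signal is something like a windowed median of the per-reading distance estimates plus a bit of hysteresis, which is roughly where the "4 metre sphere over ~5 seconds" resolution comes from. A rough sketch of that kind of smoothing (the thresholds and window size are illustrative assumptions):

    from collections import deque
    from statistics import median

    class ZonePresence:
        """Windowed median + hysteresis over per-reading distance estimates."""

        def __init__(self, enter_m=4.0, exit_m=5.0, window=10):
            self.enter_m, self.exit_m = enter_m, exit_m
            self.samples = deque(maxlen=window)  # roughly 5 s of readings at ~2 Hz advertising
            self.inside = False

        def update(self, distance_m):
            self.samples.append(distance_m)
            d = median(self.samples)
            if not self.inside and d <= self.enter_m:
                self.inside = True
            elif self.inside and d > self.exit_m:
                self.inside = False
            return self.inside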
Was that Bluetooth or UWB? Because that's like comparing VHS vs. 4K.
And here's an honest answer - it is likely to be option 2.
In over a decade of indoor robotics I have _never_ seen a beacon-based solution that practically scales (even marker-based solutions are challenging). And it's not because the tech is even bad – it's just that any process that involves _installing things_ is a PITA and wildly more expensive and time-consuming than it should be.
This kind of sucks, but it is an unfortunate reality, in my experience at least.
> Training an AI model for each airport
This is where I think the gap is. We only have to train one model for all environments, not one per airport.
How do the costs compare for training one big model vs installing billions of beacons?
Also consider the pace at which model sizes, training, and operating costs are falling
With beacons, you need to install something; with glasses, you don't.
With glasses, you can map the space while identifying POIs; with beacons, you can't.
Unfortunately, no one really cares about energy use.
It depends on the scale you need to achieve. 1,000 beacons are manageable at the scale of one town, but training a model for each airfield is an Earth-scale problem (in 1990 the US alone had ~6,000 airfields; the whole of Europe has fewer).
There are also some nuances: some cities are flat while others have large hills, so you need to place a few beacons on the sides of each hill (rough terrain needs many more beacons).
In practice, I worked on a project to deploy a LoRaWAN network in the large city of Kiev; a competitor bought a study from cellular network planners, and at first pass they drew ~300 access points to get more than 99% coverage.
1) Is a $1-$20M business requiring "humans in the loop" deploying, monitoring, and maintaining beacons with a single purpose, getting past lots of "humans with opinions" on "aesthetics" and "not in my back yard".
2) Is a $1-??? business requiring a few dedicated nerds working on CV, with infinitely more applications, and it doesn't require "invading" physical buildings you don't own.
Well, judging from consumer VR, people will pick inside-out tracking over beacons most of the time.
The headset needs the inside-out tracking anyway to draw spatially locked virtual objects.
To create a fresh spatial anchor at home on mobile hardware is maybe 1 second of compute time. But that doesn't really matter because the anchors would be shared across every user and computed offline beforehand.
As far as scaling goes, the device itself can be used to crowdsource these anchors, so it's not even close: the visual solution wins out.
That said beacons are probably better for supporting handset platforms. Powering up modern cell phone cameras to use AR is pretty slow and tedious for the user.
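To make the anchor idea concrete, a hand-wavy sketch of what a shared spatial anchor could look like: a pose plus a visual descriptor, computed offline and relocalized against by nearest-descriptor matching. The record format and the matching are illustrative guesses, not Meta's actual pipeline:

    import numpy as np

    class Anchor:
        def __init__(self, anchor_id, pose_4x4, descriptor):
            self.anchor_id = anchor_id
            self.pose = np.asarray(pose_4x4)          # world-from-anchor transform
            self.descriptor = np.asarray(descriptor)  # e.g. aggregated image features

    def relocalize(query_descriptor, anchors, max_dist=0.5):
        """Return the closest shared anchor by descriptor distance, or None."""
        if not anchors:
            return None
        q = np.asarray(query_descriptor)
        best = min(anchors, key=lambda a: np.linalg.norm(a.descriptor - q))
        return best if np.linalg.norm(best.descriptor - q) <= max_dist else None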
They have already perfected the vSLAM system for their VR headsets: feature point extraction -> ego-motion derivation -> environmental mesh reconstruction.
The data will all be relative to initial positions and it will have drift, but how that affects your research goals is use-case dependent (especially since this is pitched at researchers rather than as ready-to-go entertainment).
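For the curious, the first two stages of that pipeline can be approximated with off-the-shelf OpenCV (ORB features plus essential-matrix pose recovery). This is an illustrative sketch, not Meta's vSLAM, and the translation comes out only up to scale, which is one source of the drift mentioned above:

    import cv2
    import numpy as np

    def ego_motion(prev_gray, curr_gray, K):
        """Relative camera rotation and (unit-scale) translation between two frames."""
        # Feature point extraction
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)

        # Match features between the two frames
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Ego-motion derivation; K is the 3x3 camera intrinsics matrix
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t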
I'm a little shocked by the use of beacons outside of manufacturing or logistics or robot safety contexts.
For anything you want to track in the meat realm, especially in a place like an airport, the AirTag or Google-equivalent mesh networks are going to be far denser than your beacons and last forever with no power required.
> 2) Training an AI model for each airport, then constantly burning compute power from camera-equipped glasses or phones that barely last a few hours.
It's their purpose in VR/AR to have cheap indoor location; for them it's one more step in that direction. Eventually they will manage to do it with little compute.
When Valve came out with their VR headset that had base stations, everybody thought that’d be the holy grail, that you can never achieve better localization and tracking without base stations, and a base station free method can never be better than that.
Well, Meta poured a shit ton of money into making the Quest base-station free and they got there. We used to use the Valve setup for our robotics applications, but we swapped it out for the Quest because honestly the Quest was just as good but much easier to set up and operate.
The bitter lesson is: don't bet against data or compute. Also, I don't think you'd have to train an AI model for each location forever into the future. Things get more efficient, etc.
> which approach is actually easier, more cost-effective, and energy-efficient?
I think you are asking the wrong question. The right question is: "Which approach will people use?"
It doesn't matter if it is the easiest, cheapest, most energy-efficient thing if people don't use it.
AI will get faster and more energy-efficient over time. Deploying physical hardware will never improve in any meaningful way that fixes the biggest problem: deploying X amount of things everywhere you need them. It's a non-starter.
Why not just set up QR codes that link the location to your phone, which the glasses can scan instead of a beacon? You could put up as many as you want and slap them on the wall.
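That would be trivial to prototype, e.g. with the Python qrcode package for printing codes and OpenCV's built-in detector for reading them; the payload format and coordinates below are made up for illustration:

    import json
    import cv2
    import qrcode

    def make_location_qr(venue, zone, lat, lon, out_path="gate_b12.png"):
        # Encode a small JSON location payload into a printable QR image
        payload = json.dumps({"venue": venue, "zone": zone, "lat": lat, "lon": lon})
        qrcode.make(payload).save(out_path)
        return out_path

    def read_location_qr(image_path):
        # Decode the payload back out of a photo of the code
        data, points, _ = cv2.QRCodeDetector().detectAndDecode(cv2.imread(image_path))
        return json.loads(data) if data else None

    path = make_location_qr("Example Airport", "Gate B12", 12.345, 67.890)  # placeholder coords
    print(read_location_qr(path))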
> 100 or even 1,000
There are many single airports with more than 100 points of interest. Now extend that to every US state...
My answer is why not both? Is the end goal energy efficiency or making a product that works?
3) paint some qr codes on walls/signage to help make 2) easier?
Do we really still train a model for EACH airport?
using wifi routers?
For those who won't read the article: it's only a wearable camera with an SDK for capturing data, there's no AR projection.
Reminds me of the original Oculus Rift, it was a dev device.
The specs [0] are quite interesting, in that they're nothing fancy which enables the small form factor. I'm guessing the main intent here is to do as much as possible off-device? 4GB of RAM is not nothing, but it's not much if you want to use any advanced models, never mind what that'll do to battery life.
[0] https://facebookresearch.github.io/projectaria_tools/docs/te...
It's been a few years since I worked on wearables at Meta, so things may have changed, but the goal from the beginning was to move as much compute off the device as possible to deal with the related problems of power consumption and heat generation. For the particular wearable I was focused on at the time, heat was actually the larger issue.
Then why can't my Meta Ray Bans use a non-Meta AI assistant???
Haha I understand why but my only real complaint about the glasses is that I'm stuck with Meta AI. Would be so nice if I could plug Gemini or Open AI into it.
Great product overall but suffers from not having an SDK and lock in on the model.
Unfortunately pretty much every modern product suffers from the same fate; we can't even have widely used open standards for messaging interoperability. We're lucky the open internet, email, IRC, etc. were invented before all of this enshittification; all of those would be very different if they were invented today.
Seems like they've been pushed to release this, as open-source alternatives are catching up, if not passing them by [1] ...
[1] https://augmentos.org/
The original paper abstract [1] cuts through a lot of the jargon on the website, but yeah it's just a research platform for capturing (and doing limited processing on) video and telemetry for the purposes of AR-focused ML research.
It's not a new headset or a prototype for one.
"Egocentric, multi-modal data as available on future augmented reality (AR) devices provides unique challenges and opportunities for machine perception. These future devices will need to be all-day wearable in a socially acceptable form-factor to support always available, context-aware and personalized AI applications. Our team at Meta Reality Labs Research built the Aria device, an egocentric, multi-modal data recording and streaming device with the goal to foster and accelerate research in this area. In this paper, we describe the Aria device hardware including its sensor configuration and the corresponding software tools that enable recording and processing of such data."
[1] https://arxiv.org/abs/2308.13561
I'm glad Mark managed to dig Meta out of the hole it was in a few years ago. I remember they were dumping so much money into the metaverse telling everyone it was the future. Seems now they have less of a focus on that and more of a focus on VR hardware and their AI models, which IMO is a better place for them than where they were.
I don't know if you're conflating 'Metaverse' with 'NFTs' or something, but in Meta land it's very much a VR/AR term.
If you're interested in where they're currently focusing their spending and the timelines for return on investment, the recently leaked memo isn't a bad place to start https://www.uploadvr.com/meta-cto-to-staff-leaked-memo-2025-...
I feel like the original messaging was terrible. Everyone thinks "metaverse" == Horizon Worlds, since that was the demo they showed at the time. It was actually supposed to be a blanket term for all post-phone tech that will be integrated into daily life. Including but not limited to AR/VR, all sorts of wearable tech, robotics, AI integration into everything, etc. People walked away from their original presentation assuming they were trying to clone second life in VR (they were, but that was just one random project they demoed).
I don't know why people are so focused on the amount they're spending on VR/AR. It's a tenth of their revenue, a quarter of their profit. They spent more on stock buybacks. As far as I'm concerned all the tech companies that are printing money should be spending like this on specific technologies they believe in, instead of stock buybacks. E.g. Google ought to have gone all in on Waymo instead of seeking outside investment.
Stock buybacks have no cost to investors. It's just returning cash -- not a "spend." Comparing it to actual costs is nonsensical.
It's called diversification
I do appreciate the big swing and the R&D is quite cool.
The Quest is a marvel and they seem to be making real gains towards a mythical hands-free AR glasses experience.
I was also very excited to see a big tech company move towards premium hardware and premium software.
Sadly I fear a return to freemium, now AI generated, and soon to be advertisement filled slop. Meta is still Meta but hopefully their goals keep them on a better path.
> Mark managed to dig Meta out of the hole
He himself put Meta in that hole, found out he couldn’t manifest better hardware out of the magical money hat, and finally gave up on the idea, firing a whole bunch of engineers in the process.
They're still dumping roughly the same amount of money into the Metaverse hole as they were in 2022 ($4-5B/quarter), investors have just warmed up to it (and spending growth slowed down).
Link to hardware specifications provided by the manual, has good images:
https://facebookresearch.github.io/projectaria_tools/docs/te...
One question: as a person who is farsighted and nearsighted at the same time, I basically only see sharply at a distance of around 20 cm; could I even use such glasses?
It would probably involve a small physical adjustment to the device. Whether or not Meta provides that option is a good question.
VR and AR devices so far always use a fixed focal plane for everything. Usually around 1 meter. So, if you are looking at a distant object in VR or through AR video passthrough, your eyes need to focus at around 1 meter.
I know some VR headsets offer customizable lenses. But I don't know about this device in particular.
The glasses don't have a display, they're just normal glasses with audio/video recording functionality.
Don't know how hyped people are about this after Meta left Spark users standing in the rain...
Was anyone else disappointed not to see any sort of display built into the lens? When Google Glass came out 12 years ago, I thought we would have DBZ Scouter-level tech by now.
Why glasses? Why not a headband / bandeau / frontlet / taenia?
People want smart glasses so they can see virtual overlaid on the real world.
That's not what these do though.
Obviously, they are an early development device for what will be smart glasses. Why use a different form factor for this?
Am I the only one who wants to see these things banned in most public places?
I assume you're consistent and therefore also want to ban any other photo or video device in most public places?
Right now people have to point their phone cameras at the thing they're photographing or recording; it's a very clear visual signal to others and there are cultural norms for this behavior. If a person is doing this in a common tourist destination that's more acceptable than pointing your camera at somebody else's children without asking them first. Imagine how uncomfortable it would be for somebody to hold their phone at eye level and point it at you the entire time they're having a conversation with you, even if they say they're not recording or anything like that. Having a distaste for smart glasses is pretty consistent with the status quo.
> Right now people have to point their phone cameras at the thing they're photographing
I mean, that really isn't true. There have been wearable and carryable hidden cameras for ages and we also have 360 cameras that no longer need to be pointed at what they are capturing.
This isn't changing anything about what is available to purchase, and if anything, these are relatively more obvious.
The actual change would be, that if these become widely adopted, those types of cameras would be everywhere.
You're right, it's possible and common to be recorded without somebody pointing a phone camera at you. That doesn't negate my point, right now the norm is that in many situations you _can_ record to your personal device without informed consent but you shouldn't. That will change if these types of cameras are everywhere.
Are you recording non-stop all the time? If so, yes, you should not be able to do it.
The glasses aren't recording non stop
I wish we had batteries so small and powerful but there's no way to record constantly with a tiny device like that
There's certainly a weird aspect to it.
Historically, body-worn hidden cameras have been for perverts, spies and journalists. Normal folks don't mind people knowing they're taking a photo, and want to be able to frame the photo and suchlike. GoPros would be clearly visible, front and centre on people's helmets - and only while doing sports. Guards and cops with body cameras want people to know they've got a camera, as a deterrent.
You'd occasionally see hidden camera footage used by investigative journalists - but outside of that, the market for body-worn hidden cameras was mostly weird lonely pervs who wanted to take photos at the topless beach and upskirt photos without getting into trouble.
A glasses-camera product won't succeed among us normal folk if wearing it makes you look like a weird lonely perv.
> A glasses-camera product won't succeed among us normal folk if wearing it makes you look like a weird lonely perv.
You say this, and yet as a blind user of the Meta glasses (they're actually great for accessibility!) I am not ... seeing it. They are far more ubiquitous and worn by far more people than you would expect, especially when comparing to Google Glass.
Gotta wear clothes with embedded data to poison the recording (not an original thought by me; I saw it in GITS: SAC 2045).
especially if it's being made by Facebook
This is key.
If this were pushed by Apple, people would be responding much differently, since there is an inherent level of trust in regards to Apple's privacy protections, vs Meta.
So, not so much the technology, but rather the trust (or lack thereof) behind the implementor.
Oh this one isn't Orion, Orion looks amazing
How are Meta's other smart glasses doing? The Ray-Ban one seems to be the new Instagram.
I got a pair of transition-lens Ray-Ban Meta glasses. They're pretty useful for freeing up my hands when shooting a video of my kid playing. I also like that they can replace earbuds if you don't care about premium sound quality. The only downside is they're a tad too heavy...