
Meta Segment Anything Model 3

Released last week, and it looks like all the weights are now out and published. Don't sleep on the SAM 3D series; it's seriously impressive. They have a human pose model that actually rigs and keeps multiple humans in a scene with objects, all from one 2D photo (!), and their plain object-to-3D model is by far the best I've played with: it produced a very good lamp with translucency and woven gems in usable shape in under 15 seconds.

3 hours ago | vessenes

Between this and DINOv3, Meta is doing a lot for the SOTA even if Llama 4 came up short compared to the Chinese models.

an hour ago | Qwuke

Are those the actual wireframes they're showing in the demos on that page? As in, do the produced models have "normal" topology? Or are they still just kinda blobby with a ton of polygons?

2 hours ago | Fraterkes

I haven’t tried it myself, but if you’re asking specifically about the human models, the article says they’re not generating raw meshes from scratch. They extract the skeleton, shape, and pose from the input and feed that into their HMR system [0], which is a parametric human model with clean topology.

So the human results should have a clean mesh. But that’s separate from whatever pipeline they use for non-human objects.

[0]: https://github.com/facebookresearch/MHR

8 minutes ago | seanw265

I wonder if this can be used to track an object's speed, e.g. a vehicle on a road. It would need to recognize shapes (a car model, or the average size of a bike) to infer scale and guess a speed.
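A back-of-the-envelope sketch of that idea: once a segmenter tracks an object per frame, speed falls out of centroid displacement plus an assumed meters-per-pixel scale. All numbers below are illustrative, not from SAM 3 itself.

```python
# Sketch: estimating speed from per-frame segmentation centroids.
# Assumes a known meters-per-pixel scale, e.g. inferred from a car of
# average length ~4.5 m spanning some number of pixels.

def estimate_speed(centroids_px, fps, meters_per_px):
    """Average speed in m/s from a list of (x, y) pixel centroids."""
    if len(centroids_px) < 2:
        return 0.0
    dist_px = 0.0
    for (x0, y0), (x1, y1) in zip(centroids_px, centroids_px[1:]):
        dist_px += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    seconds = (len(centroids_px) - 1) / fps
    return dist_px * meters_per_px / seconds

# A ~4.5 m car body spanning 150 px gives 0.03 m/px; moving 50 px per
# frame at 30 fps works out to 0.03 * 50 * 30 = 45 m/s.
track = [(0, 0), (50, 0), (100, 0)]
print(estimate_speed(track, fps=30, meters_per_px=0.03))  # 45.0
```

The hard part in practice is the scale estimate and camera perspective, not the arithmetic.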

7 minutes ago | maelito

Surprisingly, SAM 3 works badly on engineering drawings, while SAM 2 kind of works, and VLMs like Qwen3-VL work as well.

2 hours ago | enoch2090

I wonder how effective this is in medical scenarios, e.g. segmenting organs and tumors in CT scans or MRIs?

24 minutes ago | aliljet

Which (if any) of these models could run on a Raspberry Pi for object recognition at several FPS?
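Whatever model gets tried, the "several FPS" question itself is easy to measure on-device. A stdlib-only timing harness sketch, where `infer` is a placeholder for any model's forward pass:

```python
# Measure sustained frames-per-second of an arbitrary inference callable.
import time

def measure_fps(infer, frames, warmup=3):
    """Run `infer` over `frames` and return frames per second."""
    for f in frames[:warmup]:          # warm caches / lazy init before timing
        infer(f)
    start = time.perf_counter()
    for f in frames:
        infer(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Dummy workload standing in for a real model call:
fps = measure_fps(lambda f: sum(f), [[0] * 1000] * 30)
print(f"{fps:.0f} FPS")
```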

2 hours ago | phkahler

Side question: what are the current go-to open models for image captioning and for building image-embedding DBs, with somewhat reasonable hardware requirements?

4 hours ago | the_duke

Try any of the Qwen3-VL models. The family includes 8B, 4B, and 2B variants.

4 hours ago | NitpickLawyer

I would suggest YOLO. Depending on your domain, you might also fine-tune these models. It's relatively easy, since they are not big LLMs but small image-classification or bounding-box models.

I would recommend bounding boxes.

3 hours ago | Glemkloksdjf

What do you mean "bounding boxes"? They were talking about captions and embeddings, so a vision language model is required.

2 hours ago | jabron

Which YOLO?

3 hours ago | smallerize

Any current one. They are easy to use and you can just benchmark them yourself.

I'm using the small and medium variants.

The code for using them is also very short and easy to write. You can use ChatGPT to generate small experiments to see what fits your case better.

2 hours ago | Glemkloksdjf

There aren’t any YOLO models for captioning and the other models aren’t robust enough to make for good embedding models.
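Setting the encoder choice aside, the storage-and-search half of an image-embedding DB from the original question is simple. A toy sketch with hand-written vectors standing in for real encoder outputs (in practice any CLIP-style image encoder would produce them):

```python
# Minimal in-memory image-embedding store with cosine-similarity search.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class EmbeddingDB:
    def __init__(self):
        self.items = []  # list of (image_id, vector)

    def add(self, image_id, vector):
        self.items.append((image_id, vector))

    def search(self, query_vec, k=3):
        """Return the k image ids most similar to the query vector."""
        scored = [(cosine(query_vec, v), image_id) for image_id, v in self.items]
        scored.sort(reverse=True)
        return [image_id for _, image_id in scored[:k]]

db = EmbeddingDB()
db.add("cat.jpg", [1.0, 0.0, 0.1])   # toy vectors, not real embeddings
db.add("dog.jpg", [0.0, 1.0, 0.1])
db.add("car.jpg", [0.1, 0.1, 1.0])
print(db.search([0.9, 0.1, 0.0], k=1))  # ['cat.jpg']
```

A real setup would swap the list for a vector index (FAISS or similar), but the interface is the same.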

2 hours ago | throwaway314155

Been waiting days to get approval to download this from Hugging Face. What's up with that?

2 hours ago | colkassad

I was approved within about 10 minutes for both "Segment Anything 3" and "Depth Anything 3"

7 minutes ago | knicholes

This would be convenient for post-production and editing of video, e.g. to aid colour grading in Davinci Resolve. Currently a lot of manual labour goes into tracking and hand-masking in grading.
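The grading step a segmenter would feed is conceptually just "apply the adjustment only inside the tracked matte". A toy sketch on nested lists (a real pipeline would use numpy arrays and per-channel curves):

```python
# Apply a simple gain only where the per-frame mask is set, i.e. the
# hand-rotoscoped matte a segmentation model could produce automatically.
def grade_inside_mask(frame, mask, gain=1.5):
    return [
        [min(255, int(px * gain)) if m else px
         for px, m in zip(row, mrow)]
        for row, mrow in zip(frame, mask)
    ]

frame = [[100, 100], [100, 100]]
mask = [[1, 0], [0, 1]]
print(grade_inside_mask(frame, mask))  # [[150, 100], [100, 150]]
```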

2 hours ago | cheesecompiler

I miss the old Segment Anything page; I used it a lot. I found this new UI very complex to use.

2 hours ago | shashanoid

Same.

Check out https://github.com/MiscellaneousStuff/meta-sam-demo

It's a rip of the previous SAM playground. I use it for a bunch of things.

SAM 3 is incredible. I'm surprised it's not getting more attention.

28 minutes ago | bradyriddle


I do a test on multimodal LLMs where I show them a dog with 5 legs, and ask them to count how many legs the dog has. So far none of them can do it. They all say "4 legs".

Segment Anything, however, was able to segment all 5 dog legs when prompted to. That means Meta is doing something else under the hood here, and it may lend itself to a very powerful future LLM.

Right now some of the biggest complaints people have with LLMs stem from their incompetence at processing visual data. Maybe Meta is onto something here.
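To make the "segmentation can count" point concrete: once a segmenter returns distinct leg regions, counting is just counting connected components. A toy stdlib-only sketch on a binary mask (a real pipeline would get per-instance masks from SAM-style prompting instead):

```python
# Count 4-connected regions of 1s in a 2D 0/1 grid via flood fill.
def count_components(mask):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new region found
                stack = [(r, c)]
                while stack:                    # flood-fill the region
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

# Five separate "leg" blobs in a tiny mask:
mask = [
    [1, 0, 1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 0, 1, 0, 1, 0, 1],
]
print(count_components(mask))  # 5
```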

4 hours ago | Workaccount2

Segmentation doesn't need to count legs. I'd guess something like YOLO could segment five-legged dogs too.

4 hours ago | jampekka

YOLO is not a segmentation model.

4 hours ago | chompychop

https://docs.ultralytics.com/tasks/segment/

4 hours ago | jampekka

Thanks! TIL there's a class of segmentation models with the YOLO naming scheme.

3 hours ago | chompychop

I thought it was a joke about YAML

4 hours ago | lucasban

You don’t need segmentation to count legs. Object detection can do that. DeepLabCut from 2020 perhaps.

3 hours ago | nerdsniper

I doubt that Gemini 3 can't do it.