I am so confused by Meta's ecosystem. Perhaps others have the same issues. I have mountains of TorchScript code. It worked fine for me; I had no issues making the Python code compatible. TorchScript is now deprecated, and the ostensible replacement is torch.export plus either AOTInductor or ExecuTorch. torch.export is so limited: no control flow at runtime at all, and less support for Python than TorchScript had. It is far more work to hoist all the control flow out of the model than it ever was to make the model TorchScript compatible. It feels like Meta has moved on, but I'm still stuck in the past here.
Yeah, for a lot of users who control the exported source code, rewriting the model to use control flow ops, or simply removing the control flow, is a viable and solvable option. For other users who want to export the model as-is, the options are to keep using (deprecated) TorchScript, or to just move on, use torch.compile, and run the model in Python.
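For anyone who hasn't done that rewrite, here's a minimal sketch of what it looks like with torch.cond (assuming a recent PyTorch where branches can close over module parameters; exact constraints vary by version):

    import torch
    from torch.export import export

    # A data-dependent Python `if` (e.g. `if x.sum() > 0:`) breaks torch.export,
    # so the branch is expressed with torch.cond instead: the predicate stays a
    # tensor and both branches become callables with matching signatures/outputs.
    class Gate(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.up = torch.nn.Linear(8, 8)
            self.down = torch.nn.Linear(8, 8)

        def forward(self, x):
            return torch.cond(
                x.sum() > 0,              # boolean tensor, decided at runtime
                lambda x: self.up(x),     # true branch
                lambda x: self.down(x),   # false branch
                (x,),
            )

    ep = export(Gate(), (torch.randn(2, 8),))
    print(ep.graph_module.code)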
Those control flow ops aren't even supported on many backends. TensorRT doesn't support them, for example, at least as of today.
Removing control flow isn't as easy as you'd think for some models. It essentially means ripping large sections out of Python and reimplementing them separately in C++.
It's quite the bummer. Some models you simply can't export with Dynamo. For the time being, the JIT exporter is the only good option.
In particular, selective function scripting is essential!
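(For anyone unfamiliar, that's the old trace-plus-script mix; a rough illustration rather than code from my actual project:)

    import torch

    # Selective scripting with the JIT exporter: script only the
    # control-flow-heavy helper, trace everything else.
    @torch.jit.script
    def gated(x: torch.Tensor) -> torch.Tensor:
        # The data-dependent branch survives export because this function is
        # scripted, even though the surrounding module is traced.
        if x.sum() > 0:
            return torch.relu(x)
        return -x

    class Net(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.lin = torch.nn.Linear(8, 8)

        def forward(self, x):
            return gated(self.lin(x))

    traced = torch.jit.trace(Net(), torch.randn(2, 8))
    traced.save("net.pt")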
ExecuTorch developer here, and agreed: it's a huge pain to deal with if conditions right now. Part of the pain comes from the vast expressiveness of Python if conditions, which gives every ML compiler a headache when trying to capture a sound graph. The rest comes from the strict requirements of torch.compile itself (no mutation/aliasing behavior in the if branches), which often makes torch.cond hard to use or inefficient.
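A rough illustration of that constraint (not exact error behavior, and details vary by version): in-place updates inside a branch have to be rewritten functionally before torch.cond will accept them.

    import torch

    def bad_branch(buf, x):
        buf.add_(x)         # mutates an operand in-place: rejected inside a cond branch
        return buf

    def good_branch(buf, x):
        return buf + x      # functional equivalent: allocate and return a new tensor

    def other_branch(buf, x):
        return buf.clone()  # outputs may not alias the inputs either

    buf, x = torch.zeros(4), torch.ones(4)
    out = torch.cond(x.sum() > 0, good_branch, other_branch, (buf, x))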
I've heard from a friend who works in the embedded space that TensorFlow Lite is still the only realistic (vendor-supported) game in town for running ML models on microcontrollers such as ESP32, nRF, etc. The hardware support listed for this project seems like it's targeting much "fatter" MCUs (Android, etc.).
Yeah, that checks out, although it looks like they do have an example of running models on a Raspberry Pi Pico 2: https://docs.pytorch.org/executorch/main/pico2_tutorial.html. The list of embedded platforms this can run on is probably longer than the list of backends; it just wouldn't have acceleration.
Yeah, it's targeting "micro"-controllers, not microcontrollers. I was hoping for a PyTorch answer to TF Lite.
This is still great, though. Previously, I thought a mobile model (e.g. speech/object recognition) would require me to learn both PyTorch and something like MLC in C++, and then port models between them.
If this is as it appears, I could develop, on my laptop, a small model that could run on mobile; train it on cloud GPUs; test it locally; and use this tool to produce the mobile version (or save some steps?). That would keep us from having to learn C++ or MLC just to do mobile.
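From the docs, the laptop-side export step looks roughly like the sketch below (based on the current ExecuTorch flow; MySmallModel is a stand-in, and exact APIs may differ by version):

    import torch
    from torch.export import export
    from executorch.exir import to_edge

    model = MySmallModel().eval()             # stand-in for the trained model
    example_inputs = (torch.randn(1, 16000),)

    ep = export(model, example_inputs)        # capture the graph with torch.export
    et_program = to_edge(ep).to_executorch()  # lower to the ExecuTorch program format

    with open("model.pte", "wb") as f:        # the runtime on the phone loads this file
        f.write(et_program.buffer)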
I mean, one can still learn other tools for their advantages. However, ML students and startups might benefit greatly from this by being able to rapidly develop or port mobile apps, while people who do invest in other tools keep building things their way. The overall ecosystem gets stronger with more competition.
I'll plug: https://github.com/google-ai-edge/ai-edge-torch for torch to tflite conversion.
I was hoping something like that existed, too. Thanks for the link!
I get the impression that https://github.com/pytorch/executorch is Meta’s take on TFLite / LiteRT, which is quite interesting.
While reading the README and related documentation, I noticed that Samsung Exynos NPU acceleration was listed, which immediately caught my attention. According to https://docs.pytorch.org/executorch/main/backends/samsung/sa..., Samsung has finally built and released an NPU SDK—so I followed the link to check it out.
Unfortunately, the experience was disappointing.
The so-called “version 1.0” SDK is available only for Ubuntu 22.04 / 20.04. There is no release date information per version, nor any visible roadmap. Even worse, downloading the SDK requires logging in. The product description page itself https://soc-developer.semiconductor.samsung.com/global/devel... does contain explanations, but they are provided almost entirely as images rather than text—presented in a style more reminiscent of corporate PR material than developer-facing technical documentation.
This is, regrettably, very typical of Samsung’s software support: opaque documentation, gated access, and little consideration for external developers. At this point, it is hard not to conclude that Exynos remains a poor choice, regardless of its theoretical hardware capabilities.
For comparison, Qualcomm and MediaTek actively collaborate with existing ecosystems, and their SDKs are generally available without artificial barriers. As a concrete example, see how LiteRT distributes its artifacts and references in this commit: https://github.com/google-ai-edge/LiteRT/commit/eaf7d635e1bc...
It'd be great if it supported a wasm/web backend as well.
I bet a lot of trivial text capabilities (grammar checking, autocomplete, etc.) would benefit from this rather than sending everything to a hosted model.
It's possible right now with onnx / transformers.js / tensorflow.js, but none of them are quite there yet in terms of efficiency. Given that this targets microcontrollers, it'd be great to bring that efficiency to browsers as well.
If you need WASM, I think Candle is your current best bet: https://github.com/huggingface/candle
You can compile to wasm; I have done so via the XNNPACK backend. You might have to tweak the compilation settings, upgrade the XNNPACK submodule, or patch some code. But this only supports CPU, not a WebGPU or WebGL backend.
So the Vulkan backend for PyTorch is just in ExecuTorch?
I just want it on native desktop Python.