> The trading system reports an average latency of 10ms
Python is a bad choice for a system with such latency requirements. Isn't C++ or Rust the preferred language for algorithmic trading shops?
Depends... not all kinds of trading are THAT latency sensitive.
Why even bother with sampling profilers in Python? You can do full function traces for literally all of your code in production at ~1-10% overhead with efficient instrumentation.
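For anyone who hasn't seen how function-level tracing hooks into the interpreter, here's a minimal pure-Python sketch using the standard-library sys.setprofile hook (the helper names and the fib workload are made up for illustration). Efficient tracers like functiontrace install the equivalent callback in native code, which is how they keep overhead far below what this pure-Python version would cost:

    import sys
    import time

    # Accumulated (function_name, event, timestamp) records.
    trace_log = []

    def profile_hook(frame, event, arg):
        # sys.setprofile delivers 'call'/'return' for Python functions and
        # 'c_call'/'c_return'/'c_exception' for calls into C functions.
        if event in ("call", "return"):
            trace_log.append((frame.f_code.co_name, event, time.perf_counter()))

    def traced(func, *args, **kwargs):
        """Run func with function-level tracing enabled, then disable it."""
        sys.setprofile(profile_hook)
        try:
            return func(*args, **kwargs)
        finally:
            sys.setprofile(None)

    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    if __name__ == "__main__":
        traced(fib, 15)
        print(f"recorded {len(trace_log)} call/return events")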
That depends on the code you're profiling. Even good line profilers can add 2-5x overhead on programs not optimized for them, and you're in a bit of a pickle because the programs least suited to line profiling are precisely those that are already "optimized" (i.e., tuned for fast results on a given task when written in Python).
It does not; those are just very inefficient tracing profilers. You can literally trace C programs at 10-30% overhead. For Python you should only accept low single-digit overhead on average, with 10% overhead only in degenerate cases involving large numbers of tiny functions [1]. Anything more means your tracer is inefficient.
[1] https://functiontrace.com/
That's... intriguing. I just tested out functiontrace and saw 20-30% overhead. I didn't expect it to be anywhere near that low.
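If you want to sanity-check overhead numbers like these yourself, here's a rough stdlib-only harness (the workload and hook names are invented for illustration): it times a tight loop bare, under a function-event hook (sys.setprofile), and under a line-event hook (sys.settrace). Pure-Python callbacks are much slower than the native hooks a tracer like functiontrace installs, so read the percentages as an upper bound that mostly shows why tiny, hot functions are the degenerate case:

    import sys
    import time

    def workload():
        # A tight loop over a tiny function: the worst case for per-call
        # and per-line instrumentation overhead.
        def inc(x):
            return x + 1
        total = 0
        for _ in range(200_000):
            total = inc(total)
        return total

    def noop_profile(frame, event, arg):   # fires on every function call/return
        pass

    def noop_trace(frame, event, arg):     # fires on every executed line
        return noop_trace

    def timed(setup, teardown):
        setup()
        start = time.perf_counter()
        workload()
        elapsed = time.perf_counter() - start
        teardown()
        return elapsed

    if __name__ == "__main__":
        base = timed(lambda: None, lambda: None)
        func_traced = timed(lambda: sys.setprofile(noop_profile), lambda: sys.setprofile(None))
        line_traced = timed(lambda: sys.settrace(noop_trace), lambda: sys.settrace(None))
        print(f"baseline:        {base:.3f}s")
        print(f"function events: {func_traced:.3f}s ({(func_traced / base - 1) * 100:.0f}% overhead)")
        print(f"line events:     {line_traced:.3f}s ({(line_traced / base - 1) * 100:.0f}% overhead)")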