How fast are Linux pipes anyway?

Seared into my soul is the experience of porting a Linux pipe-based application to Windows, thinking it's all POSIX and, given it's all in memory, the performance would be more or less the same. The performance was hideous, even after we found that having pipes waiting for a connection more or less ground Windows to a halt.

Some years later this got revisited when we needed to use the same thing under C# on Win10; while it was better, the size of the performance gap was still a major embarrassment.

5 days ago | zh3

> The performance was hideous, even after we found that having pipes waiting for a connection more or less ground windows to a halt.

When you say the performance was hideous, are you referring to I/O after the pipe is already connected/open, or before? The former would be surprising, but the latter not - opening and closing a ton of pipes is not something you'd expect an OS to be optimized for - and it would be somewhat surprising if your use case requires the latter.

5 days ago | dataflow

Literally just having spare listening sockets, ready for incoming connections (and obv. not busy-waiting on them). Just reducing to the number actually in-use was the biggest speed-up - it was like Windows was busy-waiting internally for new connections (it wasn't a huge number either, something like 8 or 12).

4 days ago | zh3

By "spare listening sockets" do you mean having threads on the server calling ConnectNamedPipe? A bit confused by your terminology since these aren't called listening sockets. (You're not referring to socket() or AF_UNIX, right?)

And yeah, that seems more or less what I expected. The implementation is probably optimized for repeated I/O on established connections, not repeated unestablished ones. Which would be similar to filesystem I/O on Windows in that way - it's optimized for I/O on open files (especially larger ones), not for repeatedly opening and closing files (especially small ones). It makes me wonder what kinds of use cases require repeated connections on named pipes.

If the performance is comparable to Linux's after the connection, then I think that's important to note - since that's what matters to a lot of applications.

4 days ago | dataflow

Yes, it was indeed using ConnectNamedPipe - I just had a look at the code (which I can't share) to refresh my memory. The main problem was traced to setup delays in WaitForSingleObject()/WaitForMultipleObjects(); we fixed it as above (once all sessions were connected there were no spares left, so no problems). Actual throughput was noted as quite inferior to Linux, but more than enough for our application, so we left it there.

4 days ago | zh3

Ah interesting, thanks for checking. Not entirely sure I understand where the waits were happening, but my guess here is that the way Microsoft intended listening to work is for a new pipe listener to be spawned (if desired) once an existing one connects to a client. That way you don't spawn 8 ahead of time, you spawn 1 and then count up to 8.

I would intuitively expect throughput (once all clients have connected) to be similar to Linux's, unless the Linux side uses syscalls like vmsplice() - but I'm not sure, I've never tried benchmarking.

4 days ago | dataflow

The Windows API is built as a kludge of functionality, not performance. For example, GetPrivateProfileString [0] does exactly what you described for files: it opens the file, parses a single key's value, and closes it. So much time and so many resources are wasted with the GetPrivateProfileXXXX APIs.

[0] https://learn.microsoft.com/en-us/windows/win32/api/winbase/...

4 days ago | yndoendo

> This function is provided only for compatibility with 16-bit Windows-based applications. Applications should store initialization information in the registry.

They literally provided the registry to solve this very problem from the days of 16-bit Windows. Holding it against them in 2025 when they have given you a perfectly good alternative for decades is rather ridiculous and is evidence for the exact opposite of what you intended.

4 days ago | dataflow

This functionality is still used in software well past the 16-bit era. It was even used by Skyrim's developers after Microsoft had marked it as compatibility-only. There is a project that hacked around this bad API [0]. You are going off the documentation's wording versus real-world usage. Anyone who plays Skyrim today or in the future is still being harmed by this bad API.

INI files are 100% different from the Registry. I would rather have a configuration file over registry entries, because the registry is just another kludge of bad design. Configuration files are text files that are well defined.

An example would be the registry settings that mark a URL as needing to run in IE compatibility mode, because the source uses old IE features that are now obsolete and don't even work in Edge or a modern browser. It should have just been a simple string array.

[0] https://www.nexusmods.com/skyrimspecialedition/mods/18860

2 days ago | yndoendo

Some years back Windows added AF_UNIX sockets, I wonder how those would perform relative to Win32 pipes. My guess is better.

5 days ago | asveikau

It reportedly seems slightly faster in a few cases, but nothing particularly dramatic: https://www.yanxurui.cc/posts/server/2023-11-28-benchmark-tc...

5 days ago | manwe150

Are we reading the same tables? It seems to be about 3x faster than named pipes, and marginally faster than local TCP.

It's worth noting that in Win32, an unnamed pipe is just a named pipe with the name discarded. So this "3x faster" is, I think, the exact comparison we're interested in.
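
A rough way to run that comparison yourself on Linux is below; this is my own sketch (the helper name `time_transfer` and the byte counts are arbitrary), pushing the same bytes through an anonymous pipe and an AF_UNIX socketpair:

```python
import os
import socket
import time

def time_transfer(write_fd, read_fd, total=1 << 22, chunk=1 << 16):
    """Push `total` bytes through a descriptor pair, draining as we go."""
    payload = b"x" * chunk
    sent = received = 0
    start = time.perf_counter()
    while received < total:
        if sent < total:
            sent += os.write(write_fd, payload)  # may write partially
        received += len(os.read(read_fd, chunk))  # drain so the buffer never fills
    return time.perf_counter() - start

r, w = os.pipe()
pipe_secs = time_transfer(w, r)
os.close(r); os.close(w)

a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
sock_secs = time_transfer(b.fileno(), a.fileno())
a.close(); b.close()

print(f"pipe: {pipe_secs:.4f}s  af_unix: {sock_secs:.4f}s")
```

The numbers are only indicative (single process, no vmsplice tricks), but the shape of the gap should match the linked benchmark.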

4 days ago | asveikau

Well, POSIX only defines behavior, not performance. Every platform and OS will have its own performance idiosyncrasies.

5 days ago | SoftTalker

How on earth would POSIX define performance of something like pipes?

5 days ago | klysm

I was addressing "it's all posix and given it's all in memory the performance will be more or less the same."

Not claiming that POSIX should or could attempt to address performance.

5 days ago | SoftTalker

By using Big O notation, or deadlines as in RTOS APIs - two possible examples of how to express performance in a standard.

5 days ago | pjmlp

Some standards do define performance requirements, e.g. for operations on data structures, in Big O notation.

4 days ago | variadix

Last I checked, on Windows local TCP outperforms pipes by a large margin.

4 days ago | vardump

Did you find that you needed interprocess communication to replace the gap?

5 days ago | andrewmcwatters

pipes are a form of interprocess communication :) I guess you meant shared memory?

5 days ago | spacechild1

Yes. Yeah, you're right. Sockets could also be used, but I guess when I think of IPC, I generally think of shared memory.

5 days ago | andrewmcwatters

I remember years ago we had the opposite experience, though not necessarily with pipes. We were running a PHP app on Linux that communicated with a SOAP API on .NET, and found that the .NET implementation had better response time.

4 days ago | hk1337

FWIW there are readv() / writev(), splice(), sendfile(), funopen(), and io_buffer() as well.

splice() is great when transferring data between pipes and UNIX sockets with zero-copy, but it is Linux-only.

splice() is the fastest and most efficient way to transfer data through pipes (on Linux), especially for large volumes. It bypasses memory allocations in userspace (as opposed to read(v)/write(v)), there is no extra buffer management logic, there is no memcpy() or iovec traversal.

Sadly on BSDs, for pipes, readv() / writev() is the most performant way to achieve the same if I am not mistaken. Please correct me if I am wrong.
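
For reference, the vectored calls look like this through Python's thin wrappers (`os.writev` / `os.readv`) - one syscall submits several buffers, which is what they save over a write() per buffer:

```python
import os

r, w = os.pipe()

# One syscall gathers three buffers on the write side...
written = os.writev(w, [b"one ", b"two ", b"three"])

# ...and one syscall scatters the bytes across two buffers on the read side.
head, tail = bytearray(4), bytearray(9)
nread = os.readv(r, [head, tail])

os.close(r)
os.close(w)
print(written, nread, bytes(head), bytes(tail))
```

Both calls are plain POSIX, so this runs unchanged on the BSDs.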

At any rate, this is a great article.

5 days ago | johnisgood

> sendfile() is file-to-socket (zero-copy as well), and has very high performance as well, for both Linux and BSDs. It only supports file-to-socket, however, and well, to stay relevant, sendmsg() can't be used with pipes in the general case, it is for UNIX domain sockets, INET sockets, and other socket types.

On Linux, sendfile supports more than just file to socket, as it's implemented using splice. I've used it for file-to-block-device in the past.
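
A quick way to see this (my sketch, assuming a Linux kernel >= 2.6.33 where the destination need not be a socket) is that `os.sendfile()` happily copies file-to-file:

```python
import os
import tempfile

data = b"copied by the kernel, not by us"

with tempfile.TemporaryFile() as src, tempfile.TemporaryFile() as dst:
    src.write(data)
    src.flush()  # make sure the bytes are on the fd before the raw syscall
    # Explicit offset=0 reads from the start of src regardless of its position.
    sent = os.sendfile(dst.fileno(), src.fileno(), 0, len(data))
    dst.seek(0)
    copied = dst.read()
print(sent, copied)
```

On FreeBSD the same call would fail with a non-socket destination, matching the file-to-socket-only behavior described above.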

5 days ago | messe

On BSDs probably not, as they don't have splice, but that is good to know. I wonder if on BSDs it really is readv() and writev() that are the fastest way to achieve the same thing as has been done in the article. Maybe I am missing something. I would like to be corrected.

5 days ago | johnisgood

AFAIK, neither OpenBSD nor NetBSD has sendfile. On FreeBSD, I think you're correct regarding it being file-to-socket only.

5 days ago | messe

Indeed, if I'm not mistaken, Netflix at least used to use (and contribute kernel work to) FreeBSD on its content servers because of its superior sendfile performance.

5 days ago | zambal

> splice() is the fastest and most efficient way to transfer data through pipes (on Linux), especially for large volumes. It bypasses memory allocations in userspace (as opposed to read(v)/write(v)), there is no extra buffer management logic, there is no memcpy() or iovec traversal.

Proper use of io_uring should finally have it beat or at least matched.

5 days ago | wavesquid

Shared memory, like shm_open and fd passing, would be even faster and fully portable.
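
In Python, `multiprocessing.shared_memory` is a thin layer over `shm_open()`, and attaching by name plays the role of passing the fd; a minimal sketch:

```python
from multiprocessing import shared_memory

# Create a named POSIX shared memory segment (shm_open + mmap under the hood).
shm = shared_memory.SharedMemory(create=True, size=64)
shm.buf[:5] = b"hello"

# A second handle attaches by name, the way another process would.
peer = shared_memory.SharedMemory(name=shm.name)
seen = bytes(peer.buf[:5])

peer.close()
shm.close()
shm.unlink()  # remove the segment; only one owner should do this
print(seen)
```

Once both sides are mapped, "transferring" data is just a memory write - no syscall per message, which is where the speed comes from.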

5 days ago | tedunangst

This is such a dope article. I love that it comes from time to time.

5 days ago | gigatexal

s/comes/comes up

5 days ago | gigatexal

I feel bad that this doesn't have any comments, the article was really great.

I'd like to use splice more, but the end of the article talked about the security implications and some ABI breaking.

I'm curious to know if the long-term plan is to keep splice around.

I'd also be curious how hard it would be to patch the default pipe to always use splice for performance improvements.

5 days ago | aeonik

Does modern Linux have anything close to Doors? I’ve an embedded application where two processes exchange small amounts of data which are latency sensitive, and I’m wondering if there’s anything better than AF_UNIX.

5 days ago | lukeh

Shared memory provides the lowest latency, but you still need to deal with task wakeup, which is usually done via futexes. Google was working on a FUTEX_SWAP call for Linux which would have allowed direct handover from one task to another; not sure what happened to that.
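
A portable stand-in for the wakeup half is any blocking fd; this sketch (entirely mine, single-process for brevity) uses a one-byte pipe write as the doorbell next to a shared-memory payload, which is the role a futex wait fills in lower-latency designs:

```python
import os
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=32)
r, w = os.pipe()

# Producer side: publish the payload, then ring the doorbell.
shm.buf[:7] = b"payload"
os.write(w, b"\x01")

# Consumer side: block on the doorbell, then read straight from shared memory.
os.read(r, 1)
msg = bytes(shm.buf[:7])

os.close(r)
os.close(w)
shm.close()
shm.unlink()
print(msg)
```

A futex (or the proposed FUTEX_SWAP handover) replaces the pipe round-trip with a direct kernel wakeup, avoiding the extra fd read/write on the hot path.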

5 days ago | the8472

If you really want low latency, then you should be OK to trade power/CPU for it, and you can just spin instead of being woken up.

4 days ago | Galanwe

What are Doors? It's too common a word to Google.

5 days ago | themerone

Would be helpful to know what your problem is with AF_UNIX at the moment. Is it lacking in features you want? Is it higher latency than you'd want? Is the server/client socket API style not appropriate for your use-case?

5 days ago | mort96

Well, it's probably fine, but it's an audio application where metering data (not audio) is delivered from a control-plane process to a UI process. Lower latency is better, but I haven't measured it.

5 days ago | lukeh

(2022)