In my last year of university (5 years ago) I took a networking seminar. Each student took a look at a different technology to utilize multiple links for internet data transfers.
Initially I was amazed by MPTCP and wondered why it had so little adoption. As I looked into the papers I slowly figured out why. The real-world characteristics of different links (WLAN, LAN, LTE) are too different for efficient aggregation. It's the head-of-line blocking problem times ten.
It might be fine as a backup link, but there are other problems, like being limited to TCP and middleboxes dropping unknown packets.
The challenges outnumber the benefits for consumers, and in data centers there are other technologies for aggregating links that operate at a level below TCP.
People underestimate how often MPTCP is actually used. Apple has been using it for Siri for ages and has since expanded its MPTCP usage to many other built-in apps. At some point MPTCP became available to all apps on iOS. Any network with an iPhone connected to it is using MPTCP, whether you know it or not.
When you're using TCP, you can enable MPTCP for free and make your connections faster and more stable. If you're not using TCP, there are alternatives, but then MPTCP is completely irrelevant anyway. You can use QUIC if you want to bypass shitty middleboxes, for instance: it has similar features but smuggles itself past them by being marked as UDP (which unfortunately also makes it more likely to get dropped when the network is congested).
Can an app use MPTCP if they don't have a MPTCP-aware server component? Or is Apple proxying (or via VPN) the MPTCP connection as part of their services?
It needs server-side support, but the OS supports it out of the box. On Linux, enabling multipath is, I believe, just a configuration flag, and then it just works.
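As a concrete sketch: on recent Linux kernels, MPTCP is toggled system-wide with the `net.mptcp.enabled` sysctl, and an application opts in by creating its socket with `IPPROTO_MPTCP` (protocol number 262 in the Linux UAPI headers). A minimal Python sketch of the app-side opt-in, falling back to plain TCP where the kernel or platform doesn't support it:

```python
import socket

# Newer Pythons expose socket.IPPROTO_MPTCP; fall back to the raw
# Linux protocol number (262) on older versions.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def make_stream_socket() -> socket.socket:
    """Prefer an MPTCP socket; fall back to plain TCP when the kernel
    lacks MPTCP or it is disabled (net.mptcp.enabled=0)."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # EPROTONOSUPPORT / EINVAL: no MPTCP here, plain TCP still works.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

sock = make_stream_socket()
# Either way the application just sees an ordinary stream socket.
sock.close()
```

The fallback pattern is the point: the application code is identical whether or not multipath ends up being used, which is why the OS can enable it transparently.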
Referencing https://support.apple.com/en-sg/101905 and the other comments on the lack of MPTCP support from middleboxes: my guess is that even Apple's first-party apps, out in the wild, might not see much MPTCP use?
I imagine re-ordering is a big problem. The only application I can think of that would require a single flow is media streaming, but you only need ~4 Mbps for a decent quality stream anyway. Other applications like file transfer can be split into multiple concurrent flows, at which point you might as well just let the local router NAT each flow to each internet connection in turn.
I routinely ship 50 Mbit media streams over the internet, and sometimes streams up in the 200 Mbit range. On campus, streams are up in the 10 Gbit range.
Over bonded networks, bitrates are typically under 40 Mbit, and usually under 20.
Of course none of this uses tcp.
Instead of trying to aggregate by packet, wouldn't it be more effective to manage sessions?
Once a session establishes a route, maintain it. Add logic to prioritize routes by session importance.
Then some sessions get a shitty link and others get a better link. Your bandwidth may go up in a benchmark with a lot of concurrent sessions, but your latency will be random and all over the place.
Doing it at the packet level in theory gives you the ability to exploit the aggregate bandwidth for any session but as OP noted you still have all the latency problems and middleboxes getting in the way.
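The per-session approach described above is essentially what flow-based multi-WAN balancers do: hash each flow's 5-tuple onto one of the links, so a session never changes path (and never sees cross-link reordering), at the cost of capping any single session at one link's bandwidth. A toy sketch, with hypothetical link names and weights:

```python
import hashlib

# Hypothetical links with rough relative bandwidth weights.
LINKS = [("fttc", 3), ("lte", 1)]

def pick_link(src: str, sport: int, dst: str, dport: int,
              proto: str = "tcp") -> str:
    """Deterministically map a flow's 5-tuple onto one link, so every
    packet of the session takes the same path."""
    key = f"{src}:{sport}->{dst}:{dport}/{proto}".encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    # Expand links by weight, then index with the hash.
    weighted = [name for name, weight in LINKS for _ in range(weight)]
    return weighted[h % len(weighted)]

# The same flow always lands on the same link; different flows spread out.
assert pick_link("10.0.0.2", 50000, "203.0.113.7", 443) == \
       pick_link("10.0.0.2", 50000, "203.0.113.7", 443)
```

This gives stable per-session latency but no per-session aggregation, which is exactly the trade-off being discussed.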
QUIC, by the way, solves the middlebox problem, and you could put individual QUIC streams on separate connections to solve the head-of-line blocking that can appear, but I feel like that's closer to the TCP session thing, where you only benefit the use cases that set up multiple streams. HTTP/3, where this does happen, may not benefit though, because bandwidth tends not to be a problem if you're rich enough to afford multiple links in the first place (i.e. you're more latency-sensitive). This could be useful in places if you build a custom end-to-end solution for video streaming, where you put the time-sensitive parts of the video on the lowest-latency link and let the rest of the video buffer across all links. It's a very niche use case though, and not worth the effort I think.
That is done with Multi-WAN in OPNsense or mwan3 in OpenWrt.
I hope QUIC, with its many advanced features, gets better adoption and provides many of the same benefits, so we can just stop messing with TCP, which is completely screwed up by middleboxes.
Middle boxes simply block QUIC so that won’t help
No, they don't, unless they've been specifically updated to do so, since QUIC is just UDP. In Google's experiments, very few middleboxes had problems severe enough to make QUIC impossible. That's why Chrome has been using QUIC by default to Google services for a decade, or maybe even slightly more.
And given that it's the next evolution of TCP and a requirement for HTTP/3, the likely direction seems to be better QUIC compatibility, not worse.
If you have a middle box in the way fiddling with your traffic then bypassing it is just a policy matter. Companies don’t put these things inline for fun, they do it because they want to block traffic.
Allowing bypassing it breaks their policy.
If you own your own network then you don’t need to worry about middle boxes interfering with quic or http.
The impact of middle boxes is overhyped on HN. Games use all kinds of random ports and UDP (or TCP), and it still works. Things not working on shitty company networks has no impact on real networks.
You either have a box in the way intercepting traffic for policy reasons, in which case quic won’t help as it would be blocked for breaking policy, or you don’t have a box in the way, in which case quic has no benefits (in terms of middle boxes)
I've been using this for around 6 months now to aggregate a 70 Mbit FTTC connection and a 150-450 Mbit 5G connection; overall it works pretty well. The FTTC connection is the "master" link and seems to be preferred for the first several bytes of the connection, so the latency is better than using the 5G connection directly. This provides a nice balance for general web browsing, as loading web pages is still quite quick and overall download speed is more than either connection alone. In my setup I'd say it's about 80% efficient in terms of aggregating the total download.
There are a lot of configuration options, and their stability can be an issue. I've found using XRay VLESS for the proxy and Glorytun TCP for the VPN to offer the best overall speed and reliability. (Edit: I've disabled SQM too.) It's possible to mostly bypass the VPN by enabling proxy UDP over XRay, but I found that breaks various bypass rules, e.g. so that Netflix connections always go over 5G, as some content is restricted from being accessed via my VPS data center's IP.
Port forwarding is also a bit hit and miss; I have configured my 2 WAN routers to use the OMR router as a DMZ and then if I want to play a game, or enable remote access, I will use the bypass feature so that the device's MAC goes through my FTTC connection. UPnP works correctly in this scenario which is handy for consoles.
I've never heard of XRay or V2Ray. It seems like a niche thing?
Unfortunately I don't really understand the various protocols OMR supports, so my experience comes from measuring aggregated speed/latency and stability. XRay worked best for me and also supported QUIC if enabling proxy UDP over XRay/V2Ray. However, due to the omr-bypass issue, I've disabled that option and QUIC (which is the default setting).
They are extremely popular in the "niche" of censorship evasion especially in countries like China and Russia. There are many more such protocols as well like OBFS/Shadowsocks/Snowflake/Meek etc.
VPNs
I used OpenMPTCPRouter to aggregate 3 LTE connections (via routers connected to directional antennas, with SIM cards from different operators) when I was living in a house in the woods, before the war started and I had to leave Russia.
Worked like a charm, giving me up to 180 Mbps or so. May not be that good for aggregating different types of links together, but for using multiple cellular connections it's nearly a perfect solution.
BTW 5G 3GPP specs include MPTCP support, IIRC for aggregating connections going via different gNodeBs (base stations)
When Starlink was new I really wanted channel bonding to take advantage of its speed and work around the early beta unreliability. I ended up using Speedify, which has a really nice desktop client implementation. But it only works for one computer; it's more like a VPN client. This system was the best option for a full network behind a router.
Fortunately Starlink got more reliable, so I stopped needing it. Bonding disparate network connections has a lot of really funky behaviors. In practice the biggest problem was that the Speedify VPNs kept getting flagged as spammy; running your own OpenMPTCProuter endpoint fixes that.
Would it be more practical to consider MPTCP use in a smaller network, such as across servers and racks in a data center/room, while using QUIC (and MP-QUIC) over WAN/Internet?
Love it, but aggregating different connection types (latency, LTE vs ADSL, fiber) is hard. I tried different algorithms, but always had network hogs or even higher ping than the slowest connection alone.
What made it work best (and rock stable) was using LTE-only or ADSL-only connections with the same ping to the aggregator (VPS), and having the VPS as near as possible (latency-wise).
I did not have the time to set up multiple redundant aggregators, so my SPOF was the VPS at times. Maybe there is a solution out there.
That was my experience as of a year ago. Our provider then offered a much cheaper managed solution, which is why we stopped using it.
> If MPTCP is not supported, OpenMPTCProuter can also use Multi-link VPN (MLVPN) or Glorytun UDP with multipath support.
It's unclear to me from a brief look why they use MPTCP as a backend for bonding.
I don't doubt that it can be a problem in some cases with bad connections but personally I have never had any issues using TCP-based (TLS) VPNs in the 15 years I have been using them, even with MPTCP, which I routinely use to bond several different connections together.
I got hit by MPTCP shifting gigs of data from Hetzner Germany to Australia. The solution was to introduce a relay host on the west coast.
It's great tech which can reach out and bite you in the bum at random. In my case it was a 10x drop in throughput from out-of-order packet arrival and its effects on window and flow control.
Cool, but it needs a VPS. A simpler approach is to load balance / fail over individual TCP/UDP flows, e.g. using mwan3.
I'm hoping that with QUIC, there will be a way to use its migration feature to load balance more accurately (no need to wait for new flows to start). But right now there is no way for a middlebox to tell if the server end of an individual flow supports migration, as this is only visible to the client.
I've been using it for a few years, and it's an awesome solution if you have slow or flaky network connections. The project is great, however, it takes some time to find the best configuration. I'm not sure about the latest version, but I didn’t have a great experience with versions above 0.60 and still stick to 0.59. I also recommend saving your working configuration once you have it, as a few changes can mess up the system—probably due to a bug.
I used this product when I lived in a building where I could get a 500 Mbit and a 100 Mbit connection but not a gigabit connection. I had the server side set up on DigitalOcean, and it "worked" insofar as I could pull 600 Mbit/s, but it was impractical since 1) I would get the latency of the worse (and in my case more variable) connection and 2) since it was essentially a VPN to a cloud provider, many sites blocked me as a suspected bot.
I used this when I was in Tallinn. The place I was in didn't have gigabit fiber (probably the only place on the whole street). It worked well for me.
I was using copper internet from the local ISP paired with cellular and Starlink. Starlink went out for 30 seconds every few minutes, but when it worked, it was the fastest of the bunch.
I rented a cheap VPS in the city to use as the other end of this router. The setup worked well overall; I was getting work done along with downloading games with sizes above 100 GB without worrying too much.
TCP Over TCP Is A Bad Idea https://web.archive.org/web/20230228035749/http://sites.inka...
MPTCP isn't TCP over TCP.
It’s odd
What's a Prouter?
Rasprouter the bruter! I musta got lost
It's Open-MPTCP-router.
With support for Glorytun. Very cool!