I agree with Avery, who identifies a future necessity: replacing TCP with an encrypted, UDP-based protocol like QUIC, which would no longer identify sessions by the 4-tuple (client IP, client port, server IP, server port), but instead by a random session ID. This would allow clients to change their IP address, e.g. when moving between WiFi networks, while keeping their session state. This is not currently possible with TCP, whether over IPv4 or IPv6.
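The difference can be sketched in a few lines: a server that demultiplexes incoming packets by a random session ID keeps serving a client whose address changes, whereas a 4-tuple lookup cannot. This is only an illustration; the function names and the 8-byte ID are mine, not the QUIC wire format.

```python
import secrets

# Sessions keyed by a random ID (as in QUIC), not by
# (src IP, src port, dst IP, dst port) as in TCP.
sessions = {}

def open_session() -> bytes:
    sid = secrets.token_bytes(8)          # random ID chosen at handshake
    sessions[sid] = {"state": "established"}
    return sid

def demux(session_id: bytes, src_addr: tuple):
    # The source address is deliberately ignored: the client may have
    # moved from one network to another in mid-session.
    return sessions.get(session_id)

sid = open_session()
home = demux(sid, ("203.0.113.7", 51000))   # client on WiFi network A
cafe = demux(sid, ("198.51.100.9", 40222))  # same client, new address
assert home is cafe and home is not None    # session survives the move
```

A TCP-style lookup keyed on the 4-tuple would return nothing for the second packet, which is exactly why the connection dies when the address changes.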
The Secure Shell protocol is built on top of TCP. This creates a number of problems for SSH:
- Anyone can send a TCP RST in your name by faking your IP and port (the sequence number can be brute-forced), which breaks your connection. Routers that unilaterally decide your connection is "taking too long" are in a special position to do so.
- If there's a data transmission error (particularly common on WiFi), TCP's checksum fails to detect it in about 1 in 65,536 cases. If you're sending gigabytes of data, the session will eventually collapse: the SSH MAC will detect the error that TCP missed, and SSH will disconnect because it has no mechanism to correct it.
- TCP implementations in operating systems tend to sport bad congestion control. It's typical to see speeds of 30% or less of what a connection permits, especially if there's any loss at all. 0.1% packet loss shouldn't affect bandwidth, especially when it's unavoidable, as on WiFi; but it hurts speed, badly. Unscrupulous file transfer clients "handle" this by making it trivial for users to run 10 simultaneous transfers, which "works around" the TCP deficiency by performing 10 key exchanges, burdening the server with 10 sessions, and elbowing bandwidth away from other traffic flows. At IETF 97 in Seoul (2016), Google presented BBR, a seemingly much better congestion control algorithm than those currently used in widespread operating systems. But applications can do nothing to migrate: if we build on TCP, migration can only happen in OS kernels, which in many cases are upgraded only every 10 years.
- TCP connections aren't mobile. Switch a phone over to a neighboring WiFi, and you lose your SSH connections and their state. Terminal windows close, file transfers need to be resumed, port-forwarded connections are interrupted.
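The brute-force effort behind the first point is small. A blind attacker who knows the 4-tuple only has to land a sequence number inside the receive window for the RST to be accepted, so the expected guess count is 2^32 divided by the window size. A rough calculation, where the window size and ephemeral port range are typical assumptions, not universal values:

```python
SEQ_SPACE = 2 ** 32
window = 65536                 # a typical receive window size (assumption)
eph_ports = 60999 - 32768 + 1  # Linux's default ephemeral port range (assumption)

# RSTs needed when the attacker already knows the full 4-tuple:
rst_guesses = SEQ_SPACE // window
# ...and when the client's ephemeral port must be guessed as well:
blind_guesses = rst_guesses * eph_ports

print(rst_guesses)    # 65536 packets: a few MB of traffic, sent in moments
print(blind_guesses)
```

At roughly 40 bytes per RST segment, the 4-tuple-known case is only a few megabytes of traffic, which is why on-path and semi-blind reset attacks are practical.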
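The 1-in-65,536 figure in the second point comes from TCP's 16-bit Internet checksum (RFC 1071): a one's complement sum of 16-bit words. Any corruption that preserves that sum, such as two 16-bit words swapped in transit, slips through undetected. A small demonstration:

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    total = (total & 0xFFFF) + (total >> 16)      # final carry fold
    return ~total & 0xFFFF

good = b"\x12\x34\x56\x78"
bad = b"\x56\x78\x12\x34"  # two 16-bit words swapped in transit
assert good != bad
assert inet_checksum(good) == inet_checksum(bad)  # corruption goes undetected
```

Because the sum is commutative, reordered words (and any other change that happens to preserve the sum) produce the same checksum; only the SSH MAC catches the error, and by then the only remedy SSH has is to disconnect.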
In 2014, I wrote a draft specification for a protocol like this. This was before I even knew QUIC existed. I even wrote a reference implementation in pure C++! Then... I shelved it. I had several insecurities:
- I was unsure if the design was sound. To improve on it, I'd need feedback, but I was afraid of getting feedback of the "A je to!" ("and that's it!") variety. That would give a false sense of assurance, only for fatal problems to come up later. This would need expert reviewers, but I didn't imagine there was interest (since I didn't even know QUIC existed then).
- Standardizing new SSH transport and connection layers cannot be done properly by one person. It requires multiple years, and the amount of tedious work involved is huge. Even if others were interested, I wasn't sure I was willing. I subsequently did RFC 8308 and RFC 8332 – comparatively small things, with the help of the awesome group at Curdle. The process for those was decidedly painful. A new SSH protocol is 5x or 10x that amount of work.
- It does not seem easy to fit into an existing SSH implementation. For users, it may look like a drop-in replacement, but at the implementation level, many things change. It would take years to iron out the new issues that arise.
- The problems being solved seem to be the kind that is just small enough to ignore. SSH doesn't recover if TCP doesn't fix errors, but on a good underlying network, there might not be errors. Connections are vulnerable to rogue TCP resets, but those don't happen much, except with rude routers. TCP congestion control sucks, but OS implementations can eventually solve this. TCP doesn't provide IP address mobility, but usually we use SSH from one spot.
SSH over UDP would be the solution. But is this important enough to spend, like, 5 years of our lives on? Do people want to form an IETF working group?
If we don't, someone may do it independently, like I considered in 2014. If someone does, users might come to prefer it. In that case, the transition for existing SSH developers and users might be messier than it could have been; or we might have two solutions for a long time where one could have worked for everything.
Showing 7 out of 7 comments, oldest first:
Comment on Jun 14, 2019 at 07:36 by Cyber Fonic
Comment on Jun 14, 2019 at 07:56 by denisbider
Comment on Jul 17, 2019 at 18:01 by Jim Baird
Comment on Jul 17, 2019 at 22:32 by denisbider
SSH needs something similar to QUIC, but not QUIC. It could be adapted from QUIC.
Comment on Aug 2, 2020 at 20:20 by Hritik Vijay
I duckduckgo'ed and stumbled here. My thoughts drained in vain.
Nice article though :)
Comment on Aug 4, 2020 at 01:21 by denisbider
In the meantime, I did write a spec for SSH over QUIC, which could be implemented in the future. At this point I'm waiting:
(1) for the QUIC specifications to be finalized at the IETF;
(2) for the Quiche implementation to incorporate the final QUIC protocol version; and
(3) for any interest shown by other developers who might want to implement this.
In my experience, such initiatives are not very successful if a single person tries to do them alone! :)
The main page for the spec proposal I've written up is here:
https://datatracker.ietf.org/doc/draft-bider-ssh-quic/
The best way to read it is to click the "HTML" button under Document > Formats.
I would post a direct link, but that doesn't age so well if a new version of the document is released.
Comment on Sep 20, 2021 at 05:56 by Brenton Camac
But until QUIC is ready, I run SSH over WireGuard (VPN over UDP) as a workaround.
This gives me the reverse port-forwarding features etc. that I need from SSH, plus a nice virtual network layer from WireGuard; and as a bonus, because WireGuard runs over UDP, the SSH layer is now immune to (lower-level) network reconfigurations. It's a bit of a hack (with multiple public/private key pairs), but it works.
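For anyone wanting to try the same workaround, a minimal client-side WireGuard configuration might look like the following. All keys, addresses, and the hostname are placeholders, and the tunnel subnet is my own illustrative choice:

```ini
# /etc/wireguard/wg0.conf on the client; all values are placeholders
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = ssh-gateway.example.com:51820
AllowedIPs = 10.8.0.0/24
# Keeps NAT mappings alive across idle periods:
PersistentKeepalive = 25
```

With the tunnel up (`wg-quick up wg0`), `ssh user@10.8.0.1` targets the server's stable in-tunnel address, so the outer network can change without the SSH session noticing.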
Enjoy,
Brenton.