A pretty generic question: I'm noticing many environments are starting to use SD-WAN, sometimes running on top of a provider-managed MPLS, or in a combination of MPLS + IA (Internet access) with SD-WAN on top.
When you use an SD-WAN architecture that relies on IPsec, you introduce the following "overhead":
- MTU/MSS issues?
- encryption/decryption on both ends of the SDWAN tunnel.
Does this mean that the "performance" of traffic going through an SD-WAN IPsec tunnel will be "less" compared to sending the same traffic directly over the underlay MPLS?
Cisco SD-WAN uses path MTU discovery, so it will adjust the MTU to the maximum size it can use minus the overheads. Obviously encrypting the traffic before sending it is going to take a little more time, but in reality it won't be noticeable. Arguably the advantages of SD-WAN (application-aware routing, policy-driven routing, DIA for guest traffic, etc.) outweigh the 'needing to encrypt/decrypt' problem.
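To put a rough number on "minus the overheads", here's a minimal back-of-envelope sketch; the header sizes are generic assumptions (they vary by vendor, cipher, and any extra encapsulations), not Cisco-specific figures:

```python
# Back-of-envelope tunnel MTU / MSS math. All header sizes below are
# assumed, typical values -- adjust them for your actual vendor/cipher.

UNDERLAY_MTU = 1500   # typical Ethernet MTU on the transport circuit
OUTER_IPV4   = 20     # new outer IPv4 header added by tunnel mode
ESP_OVERHEAD = 56     # approx. ESP header + IV + padding + ICV (cipher-dependent)
EXTRA_ENCAP  = 0      # add ~4 for GRE or ~50 for VXLAN if another layer is stacked

tunnel_mtu = UNDERLAY_MTU - OUTER_IPV4 - ESP_OVERHEAD - EXTRA_ENCAP
tcp_mss    = tunnel_mtu - 20 - 20     # minus inner IPv4 and TCP headers

print(f"usable tunnel MTU ~ {tunnel_mtu} bytes, clamp TCP MSS to ~ {tcp_mss} bytes")
```

Whether the platform does this for you (path MTU discovery plus MSS clamping) or you set it by hand, the arithmetic is the same.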
You won’t notice the difference if you do SD-WAN correctly. You will have more bandwidth overall because you are distributing traffic over multiple paths. Yes, theoretically you will have slightly less bandwidth on any single path due to encryption overhead and delay, but you are talking 1-2 ms of delay and maybe 5% on bandwidth. The bandwidth question depends on your data flows and how large the packets crossing the network are.
Yes: compared with traffic that goes through one less box and doesn’t need to be encapsulated and decapsulated (the exception would be the Juniper / 128T SSR / SVR, which doesn’t encapsulate anything and doesn’t even necessarily re-encrypt), SD-WAN is going to add latency and a potential bandwidth limit.
But if your MPLS link has quality issues, SD-WAN can add FEC and packet duplication and make performance over less-than-perfect links better. And obviously setup and management ought to be better.
And yes (again with the exception of the SSR/SVR), you add overhead in packet size as well so that saps bandwidth / increases bandwidth costs, and may introduce additional fragmentation. But usually these are small effects and the pros outweigh the cons.
If the app works over MPLS and fails over SD-WAN due to latency or bandwidth, I’d argue that the solution is already borderline failed and needs to be rethought.
MTU issues can arise if you have additional headers in play, like VXLAN with SGTs or any other protocols. If automatic MTU discovery is not an option, you can adjust the MTU manually along the path (a quick way to find the right value is sketched below).
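For example, here's a minimal sketch of probing the path MTU by hand, assuming a Linux iputils ping (where `-M do` sets the don't-fragment bit and `-s` sets the ICMP payload size); the peer address is just a placeholder:

```python
#!/usr/bin/env python3
"""Rough path-MTU probe using the system ping with the don't-fragment bit set.

Assumes Linux iputils ping; other platforms use different flags.
"""
import subprocess

def fits(host: str, mtu: int) -> bool:
    """True if a packet of `mtu` bytes crosses the path without fragmentation."""
    payload = mtu - 28  # ICMP payload = MTU - IPv4 header (20) - ICMP header (8)
    result = subprocess.run(
        ["ping", "-M", "do", "-s", str(payload), "-c", "1", "-W", "1", host],
        capture_output=True,
    )
    return result.returncode == 0

def probe_path_mtu(host: str, lo: int = 576, hi: int = 1500) -> int:
    """Binary-search the largest size that still fits between lo and hi."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if fits(host, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

if __name__ == "__main__":
    # 192.0.2.1 is a documentation address -- replace with your tunnel peer.
    print("approximate path MTU:", probe_path_mtu("192.0.2.1"))
```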
Hardware encryption and decryption with symmetric keys introduces basically no latency. We get 6-8ms on our main tunnel, and that’s with two hops in between.
IPsec sort of works as an overlay transport. SD-WAN is a more comprehensive take on overlay networking: it uses application inspection to identify flows, monitors end-to-end link performance to determine the best paths, and detects brownouts and re-paths flows accordingly. The application inspection means direct internet access at the branch is practical - rules like “allow Azure” or “permit and log Salesforce” avoid the hassles around IP access lists. And then there are the visibility and monitoring tools - most SD-WAN/SASE products provide comprehensive traffic-flow monitoring and visibility that’s nearly impossible with a dumb IPsec overlay.
SD-WAN is not about forwarding packets (that’s IPsec); it’s more like application networking with dynamic adaptation to network conditions, visibility, and alerting. The solutions are not directly comparable.
PS: SDWAN is a subset of SASE functions so you can look at SASE as well.
The total throughput of a link will be reduced by using IPsec; by how much depends on the packet sizes.
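To make the packet-size dependence concrete, here's a hedged illustration; the 60-byte per-packet figure is an assumed, ballpark ESP tunnel-mode overhead, not a vendor number:

```python
# The same per-packet IPsec overhead is a big fraction of a small packet
# and a small fraction of a full-size one, so the throughput hit depends
# on your traffic mix. 60 bytes is an assumption, not a vendor figure.

IPSEC_OVERHEAD = 60  # assumed outer IP + ESP header + IV + padding + ICV, per packet

for payload in (64, 256, 512, 1400):
    on_wire = payload + IPSEC_OVERHEAD
    efficiency = payload / on_wire * 100
    print(f"{payload:5d}-byte packet -> {on_wire:5d} bytes on the wire "
          f"({efficiency:4.1f}% of the link carries user data)")
```

Small-packet traffic (VoIP, for example) sees the biggest relative hit; bulk transfers with full-size packets lose only a few percent.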
The reason to deploy SD-WAN is to get resilience and take advantage of cheaper and higher bandwidth Internet circuits. I’ve never heard of someone deploying SD-WAN and only using MPLS circuits.
SDWAN on MPLS is typically not configured to encrypt traffic over the private link except in sensitive environments. You can be selective in policy about which links traffic is encrypted over.
Generally speaking, there will be a very small increase in latency and a small loss of throughput from the encryption overhead, but most SD-WAN OEMs are not using traditional IPsec encryption, so the losses will be far less than what you may see with plain IPsec. There’s also the path MTU discovery element, which will overcome the MTU overhead issue in the overlay; that overhead varies slightly depending on whether the traffic is encrypted or not.
Unless your business is truly colossal, just do friggen IPSec tunnels. IPSec configuration is not complicated, and if you use route-based tunnels instead of security-associations/proxy-IDs, you can dynamically change the routing at will.
SD-WAN is just marketing hype on top of IPSec.
If you really want an interoffice VPN with performance guarantees, buy MPLS.
I’d love to see your PM data comparing both. You make an eyebrow-raising assertion - prove it. I’m open to the possibility that multi-network, best-effort overlay services are only 1-2 ms more latent than single-network, SLA’d IPVPN services.
It’s really not. Aside from the far more scalable and performant encryption methods most OEMs use, features like application-aware routing, forward error correction, and graceful re-route with per-packet replication are just some of the added benefits that SD-WAN brings on top of classic IPsec/DMVPN. Some vendors offer a bunch of their own special-sauce features, but you’re missing out on a ton of SD-WAN table-stakes benefits if you’re just relying on classic IPsec tunneling.
There are always a lot of “academic” debaters arguing from their textbooks or labs, but most of the time they ignore the real issues that come with a new solution.
First of all, SD-WAN’s main goal is to address WAN cost issues; that is where its advantages lie.
Does SD-WAN have better performance and a better SLA compared with an MPLS provider? No, at least not in Australia.
Is SD-WAN a complete solution for the WAN? Not at all; there are always exceptional use cases in the IT world where dark fibre is the only available choice.
Technically, there is a higher overhead on the transport, and its defenders claim the advanced features “outweigh” it - multiple paths, application-aware routing and so on - which are not features unique to SD-WAN.
SD-WAN brings complexity into the solution; ask your ops team if you are in a large international company. There is no one perfect solution in this world.
Haven’t we heard the public cloud is the ultimate solution for the DC?
In many cases the best-effort Internet path has considerably lower latency than the single-provider MPLS backbone. PoP density is much higher, and the networks are vastly greater in capacity vs. the relatively small private networks.
Then you add in how much bandwidth the business can procure for a given budget; this can result in 100x the capacity at the edge in some cases - think high-cost markets in Africa, Middle East, etc.
Finally, look where the traffic is going. If you’re accessing a SaaS application, your MPLS network with a breakout to the Internet at a DC is going to result in much higher latency and a reduced overall user experience.
I’ve seen data from my own deployments since the early 2000s that show Internet-based networks outperforming MPLS. It just wasn’t consistently that way. Multi-path SD-WAN has largely eliminated that caveat.
Look. I don’t owe you shit. If you don’t believe me, fine. I’ve been running Cisco SDWAN long enough that I lifecycled all my Vedge routers already. It works.
If your traffic is so snowflake precious that you care about IPsec overhead and reduced TCP MSS, maybe SD-WAN isn’t for you.
I will say, my SD-WAN transport mix includes Layer 3 MPLS, Layer 2 MPLS, and DIA circuits. The DIA circuits are much better bang for the buck, and we are dropping the traditional MPLS in favor of diverse-path DIA going forward.
> Aside from the far more scalable and performant encryption methods most OEMs use

The word you’re searching for is “weak”.
None of these features are impossible to implement on any enterprise-grade firewall. They’re just usually unnecessary in 99% of use cases. A corporate VPN, virtual or physical, is “packets go to the other office”, and the firewall policy determines whether a flow is permitted or not.
Pretty much. I can’t say whether SD-WAN’s advocates are academics or real, bona-fide operations staff, but in my experience there is a direct correlation between how diaphanous a product’s technological merits are and how frenetically that product is marketed, and SD-WAN is definitely far into the “tissue-thin tech / absurdly overhyped marketing” end of the product spectrum.
If you do SD-WAN correctly, complexity should be trending down, not up. As far as performance goes (even in Aus), you get far more bang for your buck with DIA compared with MPLS tails.
Most enterprises I have implemented SD-WAN in over the last few years have had close to 80% of their branch egress traffic be internet-bound, not DC-bound. Talking about point-to-point performance compared to MPLS tails literally does not matter.
Also, does it really count as a performance benefit if it’s so conditional? Like, sure, you can make some intuitive guesses that tunneling into a VPC from the internet is going to be more performant than some artificially long path through an IPVPN, but that’s a pretty specific case. I’d argue that the general case for a multi-site deployment is that a single-provider VPN with an SLA is going to outperform SD overlays, but I don’t have the data to back that up, and I’m intrigued that /u/jgiacobbe made the assertion so confidently.
I’d love to see an experiment where a budget is provided alongside a set of requirements and VPN services are directly compared with SD services.