SpeedFusion is Peplink's VPN bonding technology. It sits at the heart of every serious Peplink deployment, and it is the single feature that separates Peplink from the long list of SD-WAN vendors selling glorified load balancers. But SpeedFusion is not one thing. It is three distinct operating modes, each with fundamentally different behaviour at the network level, and picking the wrong one is a mistake we see regularly in the field.
This article explains how each mode works, what happens to your traffic at the packet level, and when each mode is the right choice. If you are configuring SpeedFusion profiles on a Balance, MAX or FusionHub and want to understand what the settings actually do rather than just clicking through a wizard, this is for you.
The Three SpeedFusion Modes
When you create a SpeedFusion profile on a Peplink router, you choose one of three connection modes for each WAN link in the tunnel:
- Bonding - combines multiple WAN connections into a single logical pipe, distributing traffic at the packet level across all active links simultaneously
- Load Balancing - distributes traffic across WAN connections at the session level, assigning entire sessions to individual links
- Hot Failover - sends all traffic through a single priority WAN link and switches to a backup link only when the primary fails
These modes sound similar in a product brief. They are not similar at all in practice. The differences come down to granularity, latency behaviour, throughput characteristics and licensing cost.
Bonding: Packet-Level Aggregation
Bonding is SpeedFusion's headline feature and the mode that requires the most understanding to configure well. When bonding is active, the Peplink router breaks outbound traffic into individual packets and distributes those packets across all bonded WAN connections simultaneously. At the remote end, a SpeedFusion peer (another Peplink router or a FusionHub virtual appliance) reassembles the packets back into the correct order before forwarding them to the destination.
How it works technically
The bonding process works roughly like this. Your router has three WAN connections: a 50 Mbps broadband line, a 30 Mbps 4G connection and a 20 Mbps 4G connection on a different carrier. With bonding active, a single TCP stream (say, a large file download) gets split across all three links. Packet 1 goes out over the broadband, packet 2 over the first 4G connection, packet 3 over the second 4G, packet 4 back to broadband, and so on. The distribution is weighted by each link's available bandwidth, so faster links carry a proportionally larger share of the traffic.
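The bandwidth-weighted distribution can be sketched as a smooth weighted round-robin scheduler. The link names and Mbps figures below come from the example above; the credit-based algorithm is a common way to illustrate weighted scheduling, not Peplink's actual (proprietary) packet scheduler:

```python
from collections import Counter

# Example links from above, weighted by bandwidth in Mbps.
LINKS = {"broadband": 50, "4g_a": 30, "4g_b": 20}

def distribute(num_packets, links):
    """Assign packets to links in proportion to bandwidth using a
    credit-based (smooth weighted round-robin) scheme."""
    credits = dict.fromkeys(links, 0)
    total = sum(links.values())
    assignment = []
    for _ in range(num_packets):
        # Every link earns credit proportional to its bandwidth...
        for name, bw in links.items():
            credits[name] += bw
        # ...and the link with the most accumulated credit sends next.
        chosen = max(credits, key=credits.get)
        credits[chosen] -= total
        assignment.append(chosen)
    return assignment

counts = Counter(distribute(100, LINKS))
print(counts)  # 100 packets split 50/30/20, tracking the bandwidth ratio
```

Over 100 packets the broadband link carries exactly 50, matching the prose example. A production scheduler would also react to measured latency and loss on each link, not just nominal bandwidth.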
The SpeedFusion peer on the remote side receives these packets from three different source paths, each with different latency and jitter characteristics. It holds packets in a resequencing buffer until it can reconstruct the original stream in the correct order, then forwards the reassembled traffic to the destination network.
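The resequencing buffer can be sketched in a few lines. Plain sequence numbers stand in for SpeedFusion's tunnel framing, which is an assumption made for illustration:

```python
import heapq

def resequence(arrivals):
    """Reorder packets that arrive out of sequence from multiple WAN
    paths. `arrivals` is an iterable of sequence numbers in arrival
    order; returns the forwarding order."""
    buffer = []      # min-heap of packets waiting for their turn
    next_seq = 0     # next sequence number we are allowed to forward
    forwarded = []
    for seq in arrivals:
        heapq.heappush(buffer, seq)
        # Drain the buffer while its head is the packet we need next.
        while buffer and buffer[0] == next_seq:
            forwarded.append(heapq.heappop(buffer))
            next_seq += 1
    return forwarded

# Packets 0-5 arrive jumbled because the three paths differ in latency.
print(resequence([0, 2, 1, 4, 5, 3]))  # -> [0, 1, 2, 3, 4, 5]
```

Nothing leaves the buffer until the packet with the next expected sequence number arrives, which is why the tunnel's effective latency is set by the slowest path. A real implementation also needs a timeout so a genuinely lost packet cannot stall the stream forever.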
This is fundamentally different from session-level load balancing. A single TCP connection, a single video stream or a single VoIP call benefits from the combined bandwidth of all three links. Aggregate throughput for a single session approaches the sum of all WAN bandwidths, minus overhead. In our example, that is roughly 95 Mbps usable from three links that individually max out at 50, 30 and 20.
The resequencing problem
Bonding introduces a challenge that does not exist with the other modes: packet reordering latency. Each WAN link has different latency. Your broadband might be 12 ms to the FusionHub, your first 4G connection 35 ms and your second 4G connection 55 ms. When packets from a single stream arrive via these three paths, the peer must wait for the slowest packet before it can forward the reassembled chunk.
This means your bonded tunnel's effective latency is governed by the highest-latency link in the bond, not the lowest. If you bond a 12 ms fibre connection with an 80 ms satellite link, your bonded tunnel latency will be approximately 80 ms for all traffic, not 12 ms. You gain the satellite's bandwidth, but you pay for it with the satellite's latency across every packet.
This is the single most important thing to understand about bonding. It is not free bandwidth. You trade latency headroom for aggregate throughput. For bulk data transfer, streaming video and large downloads, this trade-off is almost always worth it. For latency-sensitive applications like VoIP, interactive video conferencing or real-time control systems, it can make things worse.
When bonding is the right choice
Bonding makes sense when you need maximum aggregate throughput from multiple WAN connections and your applications can tolerate the latency of your slowest link. Typical use cases include:
- Live broadcast uplinks where you need to push a 15-30 Mbps video stream and no single cellular connection can sustain that reliably
- Large file transfers between sites where throughput matters more than per-packet latency
- Backup and replication traffic running over multiple commodity broadband connections
- Any scenario where a single WAN link cannot deliver enough bandwidth for the application, but multiple links together can
Bonding also makes sense when your WAN links have relatively similar latency. Three 4G connections on different carriers from the same location will typically show latency within 15-25 ms of each other. Bonding those three links costs you very little additional latency while potentially tripling your usable throughput. Three links with wildly different latency characteristics (say, fibre plus 4G plus satellite) will work, but the latency penalty is more noticeable.
Bonding licensing
Bonding requires a SpeedFusion licence on the Peplink device, and on most models, it also requires a specific throughput tier. The B One, for example, supports SpeedFusion with a 1 Gbps bonding throughput limit, while the HD4 MBX supports multi-gigabit bonding. FusionHub Solo (the free tier) supports up to 5 Mbps bonded throughput. FusionHub licences for production use start at the Essential tier with higher throughput caps. Check the specific model's datasheet for bonding throughput limits before specifying hardware for a project.
Load Balancing: Session-Level Distribution
Load balancing in SpeedFusion operates at the session level rather than the packet level. When a new TCP session or UDP flow begins, the router assigns that entire session to one WAN link based on a distribution algorithm. All packets within that session travel the same path. The next new session might be assigned to a different link.
How it works technically
Consider the same three-WAN setup. With load balancing, when a user opens a browser and loads a web page, the HTTP session to that server gets assigned to the broadband link. A colleague starting a video call has their RTP stream assigned to the first 4G connection. A third user downloading a file from a cloud service gets placed on the second 4G connection. Each of those sessions uses only one WAN link for its entire lifetime.
The distribution algorithm is typically weighted round-robin based on each link's bandwidth. Faster links get assigned more sessions. Some Peplink models also support "intelligent" session distribution that considers current link utilisation, though in practice the weighted round-robin approach works well enough for most deployments.
The fundamental limitation
The critical difference from bonding is this: no single session can exceed the throughput of the individual WAN link it is assigned to. If your user starts a 60 Mbps file download and it gets assigned to the 30 Mbps 4G link, that download will max out at 30 Mbps. The other two links might be sitting idle. The aggregate throughput across all users will approximate the sum of all WAN bandwidths, but any individual session is capped by its assigned link.
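The weighted assignment and the per-session cap can be sketched together; the credit-based scheduler below is an illustrative stand-in, not Peplink's exact algorithm:

```python
def assign_sessions(session_demands, links):
    """Pin each new session to one link by weighted round-robin, and
    cap its throughput at that link's bandwidth (illustrative stand-in
    for Peplink's algorithm, which can also weigh current utilisation)."""
    credits = dict.fromkeys(links, 0)
    total = sum(links.values())
    placements = {}
    for session, demand_mbps in session_demands.items():
        for name, bw in links.items():
            credits[name] += bw
        link = max(credits, key=credits.get)
        credits[link] -= total
        # The whole session rides this one link for its lifetime, so it
        # can never go faster than the link itself.
        placements[session] = (link, min(demand_mbps, links[link]))
    return placements

LINKS = {"broadband": 50, "4g_a": 30, "4g_b": 20}  # Mbps, from the example
demands = {"web": 5, "video_call": 4, "download": 60}
placements = assign_sessions(demands, LINKS)
# The 60 Mbps download lands on the 20 Mbps link and is capped at 20.
print(placements)
```

Note how the small sessions are unaffected by the cap while the large download is throttled hard, exactly the failure mode described above for environments with few, heavy sessions.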
For environments with many concurrent users generating many independent sessions (a typical office with 30 people browsing the web, checking email and using cloud applications), this works perfectly well. The traffic naturally distributes across links and the aggregate experience is good. For environments with a small number of high-bandwidth sessions, load balancing underperforms bonding significantly.
Advantages over bonding
Load balancing has two meaningful advantages. First, latency. Because each session stays on a single link, the latency for that session is the latency of that specific WAN connection. Your VoIP call on the 12 ms broadband link gets 12 ms latency, not the 55 ms of the slowest link in the tunnel. This matters enormously for real-time communications.
Second, processing overhead. Packet-level bonding requires the router to make a forwarding decision for every single packet, maintain a resequencing buffer at the remote end, and handle the complexity of out-of-order delivery. Session-level load balancing makes one routing decision per session and then forwards packets without additional processing. On lower-end hardware, this translates to meaningfully higher aggregate throughput before the CPU becomes a bottleneck.
When load balancing is the right choice
- Office environments with many concurrent users where individual sessions rarely need more than a single link's bandwidth
- Deployments where VoIP and video conferencing quality is the priority and you want per-session latency to reflect the best available link, not the worst
- Situations where you want to spread traffic across WAN connections for resilience without the processing overhead or latency penalty of full bonding
- Deployments using hardware with limited SpeedFusion throughput licences, where bonding throughput caps would be restrictive
Hot Failover: Priority-Based Switching
Hot failover is the simplest SpeedFusion mode and the one that organisations most commonly deploy when they need resilience without complexity. All traffic runs through a single, designated priority WAN link. The other links in the SpeedFusion tunnel are maintained in a warm standby state. If the priority link fails or degrades below a configurable threshold, traffic switches to the next priority link automatically.
How it works technically
With hot failover, your SpeedFusion tunnel is established across all configured WAN links simultaneously. The router keeps all tunnel paths alive by sending periodic keepalive packets across every link. But actual data traffic flows through only one link at a time: the highest-priority healthy link.
When the active link fails (cable unplugged, carrier outage, signal loss on a cellular connection), the router detects the failure through missed keepalives and immediately shifts all traffic to the next link in the priority list. Because the tunnel paths are pre-established and pre-authenticated, the failover happens without renegotiating the VPN. Typical failover time is under one second on most Peplink hardware, often in the 300-500 ms range.
You can configure multiple priority levels. WAN 1 as priority 1, WAN 2 as priority 2, WAN 3 as priority 3. If WAN 1 fails, traffic moves to WAN 2. If WAN 2 also fails, traffic moves to WAN 3. When a higher-priority link recovers, traffic automatically moves back to it (this behaviour is configurable; you can disable automatic fallback if you prefer manual control).
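The selection logic itself is simple: pick the highest-priority link that is currently healthy. A minimal sketch:

```python
def active_link(priorities, healthy):
    """Return the highest-priority healthy link, or None if all are down.
    `priorities` is ordered best-first; `healthy` maps link -> bool."""
    for link in priorities:
        if healthy.get(link, False):
            return link
    return None

PRIORITIES = ["wan1", "wan2", "wan3"]  # priority 1 first

# Normal operation: all traffic rides WAN 1.
print(active_link(PRIORITIES, {"wan1": True, "wan2": True, "wan3": True}))   # -> wan1
# WAN 1 down: traffic shifts to WAN 2 without renegotiating the VPN,
# because the WAN 2 tunnel path was already established and authenticated.
print(active_link(PRIORITIES, {"wan1": False, "wan2": True, "wan3": True}))  # -> wan2
```

Automatic fallback is just re-running this selection whenever link health changes; disabling fallback means ignoring a recovered higher-priority link until an operator intervenes.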
Health check configuration
The quality of your failover depends entirely on how you configure health checks. Peplink routers support several health check methods: ICMP ping to a specified host, HTTP checks to a URL, and DNS lookup checks. For SpeedFusion hot failover, the tunnel keepalive mechanism handles detection of complete link failure. But you should also configure WAN-level health checks to catch partial failures: links that are technically up but performing so poorly they are unusable.
On the router's WAN configuration page, you can set thresholds for packet loss and latency. If a WAN link exceeds these thresholds for a configurable duration, the router marks it as unhealthy and triggers failover. Setting these thresholds too aggressively causes flapping (rapid switching back and forth). Setting them too loosely means the router tolerates a degraded link for too long before switching.
A reasonable starting point for most deployments: mark a link as unhealthy after 5 consecutive failed health checks at 5-second intervals (25 seconds of failure), with recovery requiring 5 consecutive successful checks. Adjust from there based on how sensitive your applications are to brief interruptions.
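The consecutive-check logic behind those numbers can be sketched as a small state machine; the 5/5 thresholds below are the starting point suggested above, not fixed firmware defaults:

```python
class WanHealth:
    """Debounced WAN health state: N consecutive failures mark the link
    unhealthy, N consecutive successes mark it healthy again."""
    def __init__(self, fail_threshold=5, recover_threshold=5):
        self.fail_threshold = fail_threshold
        self.recover_threshold = recover_threshold
        self.healthy = True
        self.streak = 0  # consecutive checks pointing the other way

    def record(self, check_passed):
        flipping = (not check_passed) if self.healthy else check_passed
        self.streak = self.streak + 1 if flipping else 0
        threshold = self.fail_threshold if self.healthy else self.recover_threshold
        if self.streak >= threshold:
            self.healthy = not self.healthy  # fail over, or fall back
            self.streak = 0
        return self.healthy

wan = WanHealth()
for _ in range(4):
    wan.record(False)   # four failures: tolerated as normal cellular jitter
print(wan.healthy)      # -> True
wan.record(False)       # fifth consecutive failure triggers failover
print(wan.healthy)      # -> False
```

A single successful check resets the failure streak, which is exactly the debouncing that prevents a jittery cellular link from flapping the tunnel.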
When hot failover is the right choice
- Deployments where you have a clearly superior primary WAN connection (leased line, fibre) and want cellular purely as backup
- Environments where you want to minimise cellular data consumption because the backup links are on metered or capped data plans
- Simple resilience requirements where "stay connected if the primary fails" is sufficient and you do not need aggregate throughput from multiple links
- Sites with limited SpeedFusion licensing where bonding or load balancing are not available or cost-prohibitive
The hidden cost of hot failover
The downside of hot failover is that your backup links sit idle during normal operation. You are paying for WAN connections that deliver zero throughput 99% of the time. If your primary is a 100 Mbps fibre link and your backup is a 50 Mbps bonded cellular connection, that 50 Mbps contributes nothing during normal operation. With bonding, those same links would deliver 150 Mbps aggregate. With load balancing, they would share the session load and improve the experience for all users even when everything is healthy.
Whether this matters depends on your economics. If the backup is a pay-per-gigabyte cellular connection, keeping it idle saves money. If the backup is an unlimited broadband connection that you are paying a monthly fee for regardless of usage, hot failover wastes capacity you have already paid for.
Forward Error Correction (FEC)
Forward Error Correction is a SpeedFusion feature that works alongside bonding and load balancing (it is not relevant to hot failover). FEC adds redundant data to the tunnel traffic so that the remote end can reconstruct lost packets without requesting retransmission.
In practical terms, FEC transmits extra redundancy packets computed from groups of data packets. If a packet in a group is lost in transit, the receiver reconstructs it from the surviving packets and the redundancy data. The receiving end does not need to wait for a TCP retransmit, which means packet loss on individual WAN links does not translate to visible packet loss at the application layer.
FEC is configured as a percentage overhead. Setting FEC to "Low" adds roughly 17% overhead (one redundant packet for every six data packets). "Normal" adds approximately 33% overhead (one redundant packet for every three). "High" adds 50% or more overhead. The higher the FEC level, the more packet loss the tunnel can absorb without application-layer impact, but the more bandwidth you consume on overhead rather than actual data.
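The simplest possible FEC scheme, a single XOR parity packet per group, shows both the recovery mechanism and the overhead arithmetic. Peplink's actual coding scheme is proprietary; the XOR construction below is purely illustrative:

```python
from functools import reduce

def xor_parity(packets):
    """XOR equal-length packets together byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

# Sender: one parity packet per group of three data packets -- roughly
# the 33% overhead of the "Normal" level described above.
group = [b"pkt-A", b"pkt-B", b"pkt-C"]
parity = xor_parity(group)

# Receiver: pkt-B was lost on its WAN link. XOR-ing the survivors with
# the parity packet rebuilds it, with no retransmission round-trip.
rebuilt = xor_parity([group[0], group[2], parity])
print(rebuilt)  # -> b'pkt-B'
```

A single parity packet can only repair one loss per group; higher FEC levels spend more overhead to survive more losses, which is the trade-off the percentages above describe.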
When to enable FEC
FEC is most valuable for real-time traffic (VoIP, video conferencing, live video) running over unreliable WAN links. A cellular connection that regularly shows 2-5% packet loss will produce terrible voice quality on a standard VPN. With FEC enabled on a SpeedFusion tunnel, that same link delivers clean audio because the redundant packets compensate for the losses.
Do not enable FEC on tunnels carrying primarily bulk data traffic (file transfers, backups, web browsing). TCP's built-in retransmission handles packet loss adequately for non-real-time traffic, and FEC's bandwidth overhead reduces your effective throughput for no meaningful benefit. Use SpeedFusion sub-tunnels or outbound policies to route real-time traffic through an FEC-enabled tunnel and bulk traffic through a tunnel without FEC.
WAN Smoothing
WAN Smoothing is related to FEC but works differently. While FEC sends redundant data to compensate for packet loss, WAN Smoothing sends duplicate copies of every packet across multiple WAN links simultaneously and uses the first copy to arrive. This eliminates jitter almost entirely because the per-packet latency is always the latency of the fastest link at that instant, not the average or the worst.
WAN Smoothing is extraordinarily effective for voice and video quality over unreliable connections. It also burns bandwidth at an extraordinary rate. With WAN Smoothing set to "Normal" across two WAN links, you are sending every packet twice, halving your effective throughput. On three links, you might triple the bandwidth consumption. Use it only for traffic that demands rock-solid low-jitter delivery and only when you have bandwidth to spare.
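The receiver side of WAN Smoothing reduces to first-copy-wins deduplication. A sketch, with (sequence number, link) tuples standing in for real tunnel packets:

```python
def first_copy_wins(arrivals):
    """Receiver side of WAN Smoothing: every packet was transmitted on
    every link, so keep the first copy of each sequence number to
    arrive and silently drop the later duplicates."""
    seen = set()
    delivered = []
    for seq, link in arrivals:  # (sequence number, link) in arrival order
        if seq not in seen:
            seen.add(seq)
            delivered.append((seq, link))
    return delivered

# Three packets, each sent on both links; whichever path was faster at
# that instant delivers the copy that counts.
arrivals = [(1, "broadband"), (2, "broadband"), (1, "4g"),
            (3, "4g"), (2, "4g"), (3, "broadband")]
print(first_copy_wins(arrivals))  # -> [(1, 'broadband'), (2, 'broadband'), (3, '4g')]
```

Six transmissions delivered three packets: the 2x bandwidth cost described above, paid so that every packet arrives at the latency of whichever path happened to be fastest.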
The configuration in the router firmware offers three levels: Low, Normal and High. Low duplicates across the two best-performing links. Normal duplicates across all links. High duplicates across all links and adds FEC on top. In practice, we find that "Low" is sufficient for most VoIP deployments over cellular, and "Normal" is reserved for live broadcast backhaul where the video encoder's output is the only traffic on the tunnel.
Configuring SpeedFusion Profiles in Practice
On Peplink firmware 8.x and later, SpeedFusion profiles are configured under Network > SpeedFusion. You create a profile, define the remote peer (by serial number for Peplink-to-Peplink, or by IP/hostname for FusionHub), and then configure each WAN link's role in the tunnel.
Per-WAN link mode selection
A detail that many administrators miss: you can set different modes per WAN link within a single SpeedFusion profile. This is useful for mixed deployments. You might bond your two 4G connections for maximum throughput while leaving your broadband connection as a standby failover link. While the 4G links are healthy, they are bonded together and the broadband stays in reserve; if both 4G links fail, the broadband carries the traffic alone.
To configure this, edit the SpeedFusion profile and expand the WAN connection settings. Each WAN shows a dropdown for "Send" mode: Priority (for hot failover, specify priority number), Bonding or Load Balance. You set this per-WAN, per-profile.
Sub-tunnels and traffic steering
For deployments with mixed traffic types, create multiple SpeedFusion sub-tunnels within a single profile. Each sub-tunnel can have its own bonding/load-balancing/failover configuration and its own FEC and WAN Smoothing settings. Then use outbound policies (firewall rules) to steer specific traffic types into specific sub-tunnels.
A common configuration for broadcast deployments: Sub-tunnel 1 is bonded with WAN Smoothing on "Low", carrying the video encoder's output. Sub-tunnel 2 is bonded with no FEC, carrying production file transfers. Sub-tunnel 3 is load-balanced with no FEC, carrying general internet traffic for the crew. Each tunnel uses the same WAN links but applies different treatments to different traffic classes.
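The same setup can be written down as a data structure to make the steering explicit. This is purely illustrative pseudo-configuration, not Peplink's profile syntax; on the router itself the steering is done with outbound policy rules matching source, destination, port or protocol:

```python
# Illustrative pseudo-configuration of the broadcast example above:
# three sub-tunnels over the same WAN links, each with its own treatment.
SUB_TUNNELS = {
    "video":      {"mode": "bonding",      "wan_smoothing": "low",  "fec": None},
    "production": {"mode": "bonding",      "wan_smoothing": None,   "fec": None},
    "crew":       {"mode": "load_balance", "wan_smoothing": None,   "fec": None},
}

# Stand-in for outbound policy rules: which traffic class goes where.
STEERING = {"encoder": "video", "file_transfer": "production"}

def tunnel_for(traffic_class):
    """Steer a traffic class to its sub-tunnel; unmatched traffic falls
    through to the general-purpose crew tunnel."""
    return STEERING.get(traffic_class, "crew")

print(tunnel_for("encoder"))     # -> video
print(tunnel_for("guest_wifi"))  # -> crew
```

Each traffic class lands on a sub-tunnel with the treatment it needs while all three sub-tunnels share the same physical WAN links.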
InControl2 management
For multi-site deployments, SpeedFusion profiles are managed centrally through InControl2, Peplink's cloud management platform. InControl2 lets you define SpeedFusion profile templates and push them to groups of devices. This is essential for organisations managing dozens or hundreds of remote routers, where configuring SpeedFusion profiles individually on each device would be impractical.
InControl2 also provides tunnel status monitoring, showing which WAN links are active in each tunnel, current throughput per link, latency measurements and packet loss statistics. The bandwidth reporting helps you identify underperforming WAN connections and refine your bonding or load-balancing weights over time.
One InControl2 feature worth highlighting: SpeedFusion Cloud. This is Peplink's hosted FusionHub service that eliminates the need to deploy your own VPN termination point. You configure your Peplink router to connect to the nearest SpeedFusion Cloud point of presence, and Peplink handles the server-side infrastructure. This is useful for deployments that need bonded internet access (combining multiple WAN links for faster browsing and cloud application access) without the complexity of maintaining a FusionHub instance. SpeedFusion Cloud uses bonding by default and is available as a subscription add-on per device.
Choosing the Right Mode: A Decision Framework
After configuring SpeedFusion tunnels across hundreds of deployments, we have settled on a straightforward decision process.
Start with the application requirements
Ask what the traffic is doing. If a single session needs more throughput than any individual WAN link can provide (live video uplink, large file sync, database replication), bonding is the only mode that helps. Load balancing and hot failover cannot make a single session faster than the link it is assigned to.
If no single session requires more than one link's bandwidth, ask whether latency or resilience matters more. If latency sensitivity is high (VoIP-heavy office, interactive applications, trading systems), load balancing gives each session the latency of its assigned link, which is better than bonding's worst-link-latency behaviour. If resilience is the primary concern and you have a clearly superior primary connection, hot failover is simpler to configure and easier to troubleshoot.
Consider the WAN link economics
If your backup links are on metered data plans (pay-per-gigabyte cellular, satellite with data caps), hot failover keeps those links idle during normal operation and preserves your data allowance for actual outages. Bonding and load balancing will consume data on those links continuously.
If all your WAN links are on unlimited or flat-rate plans (multiple broadband connections, unlimited cellular plans), hot failover wastes capacity. Use bonding or load balancing to extract value from every link you are paying for.
Think about the link characteristics
If your WAN links have similar latency (multiple cellular connections from the same location, multiple broadband lines), bonding's latency penalty is minimal and the throughput gain is significant. Bond them.
If your WAN links have vastly different latency (fibre at 5 ms plus satellite at 600 ms), bonding drags all traffic to the satellite's latency. Use the fibre for latency-sensitive traffic and the satellite for bulk transfers, either through separate sub-tunnels or by using load balancing with outbound policies to control which sessions go where.
Factor in the hardware and licensing
Not all Peplink models support all modes at all throughput levels. Entry-level models like the B One support bonding but with a throughput cap (1 Gbps on the B One). Higher-end models like the Balance 710 and HD4 MBX support multi-gigabit bonding. Confirm that your chosen hardware and licence tier support your intended mode at the throughput you need before committing to a design.
Common Mistakes We See in the Field
Bonding everything by default
Bonding sounds like the best option, so many administrators enable it on every tunnel without considering the latency implications. We regularly encounter deployments where VoIP quality is poor because voice traffic is being bonded across links with 40+ ms latency spread. The fix is usually simple: create a separate sub-tunnel with load balancing for voice and keep bonding for data. Ten minutes of configuration, immediate improvement in call quality.
Ignoring FEC bandwidth overhead
Enabling FEC "High" on a tunnel and then wondering why throughput dropped by 50%. FEC is not free. Every redundant packet consumes bandwidth that could carry data. Use the lowest FEC level that achieves acceptable packet loss rates for your application, and only apply it to traffic that actually benefits from it.
Setting health checks too aggressively
Configuring a 1-second ping interval with failover after 2 missed pings on a cellular connection that naturally jitters. The result is constant flapping between links, which causes brief interruptions every time the router switches, which is worse than the marginal packet loss it was trying to avoid. Cellular connections are inherently variable. Give your health checks enough tolerance to ride out normal fluctuations.
Not testing with realistic traffic
Testing a bonded tunnel with a single iPerf stream and declaring it working. Then discovering in production that the tunnel performs differently under mixed traffic with hundreds of concurrent sessions. Test with traffic that resembles your actual production workload. If your deployment carries live video, test with a video encoder. If it carries VoIP, make test calls. Synthetic benchmarks tell you about raw capacity, not about real-world application performance.
Summary
SpeedFusion bonding, load balancing and hot failover are three tools for three different problems. Bonding maximises throughput for individual sessions at the cost of latency. Load balancing distributes sessions across links while preserving per-session latency. Hot failover protects against link failure while keeping backup links idle.
The right choice depends on your applications, your link characteristics, your data costs and your hardware capabilities. Most production deployments use a combination: bonding for bandwidth-hungry data traffic, load balancing or dedicated sub-tunnels for latency-sensitive real-time communications, and hot failover for metered backup links that should only carry traffic during an outage.
If you are unsure which configuration suits your deployment, or if you have existing SpeedFusion tunnels that are not performing as expected, get in touch. We configure and optimise SpeedFusion tunnels for broadcast, maritime, events and enterprise organisations every week, and we are happy to review your current setup or design a new one from scratch.