Plume made a splash in the burgeoning Wi-Fi mesh scene a couple of years ago by promising to do things differently. In a market where vendors vie with one another to put the biggest, nastiest-looking hardware with the biggest possible numbers on the box, Plume seemed to say, "That's not how you actually fix Wi-Fi."
Instead, the small, crowdfunded startup took a risk on selling tiny, low-powered devices backed by cloud-based smart management. The strategy proved successful despite the individual devices' modest power and speed. Fast-forward to today, and Plume is releasing a second generation of hardware, called "Superpod," that keeps the small form factor, nimble deployment, and overall network reliability of its first product. And after a little pre-release hands-on time, I can say Plume's newest effort also appears to add the raw speed its predecessor was missing.
Quarter-mile times aren't everything
Before talking about this particular product's performance, we need to talk about how to measure Wi-Fi performance in the first place. When I'm not busy building my own routers, I've spent the last couple of years learning about and improving methods of testing Wi-Fi systems in ways that actually matter for real-world use. Wireless AC speed ratings are complete mumbo-jumbo, and simple iPerf3 runs don't get the job done, either.
Wi-Fi reviewers typically just blast a giant TCP stream across a system from one laptop, measure the big number, and call it a day. But Plume tested its own product differently—the company set up a test environment in a real home with lots of devices, and it used a commercial testing suite called ixChariot to model the traffic that real humans and their devices actually produce. Instead of looking for one big speed-test number, Plume wanted to see relevant metrics: could the downloader get lots of throughput? Could the 4K stream continue without buffering? Could a VoIP call carry on without stuttering? Could a Web browser load pages quickly and responsively?
In these scenarios, Plume demonstrated its product kicking the crap out of a competitor or two (impressing me in the process). But the company did so on its own ground, with its own tools, and with plenty of time to set things up carefully and stack the deck in its favor. I liked what I saw, but I knew I needed to up my own testing game rather than just take Plume's word for it.
Previously, I'd used ApacheBench for its intended purpose, testing webservers and apps, in my normal sysadmin career, and I used the tool again at Ars to put "gigabit" consumer routers to a real test. I figured I could use it once more to model a traffic flow… and I was wrong.
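To make that concrete, here's roughly what such an ApacheBench run looks like when driven from a script. This is a minimal sketch, not my actual harness; the target URL, request count, and concurrency level are all placeholder values.

```python
import subprocess

# Hammer a router's forwarding path with HTTP requests via ApacheBench.
# "-n" is the total request count, "-c" is concurrency, and "-k" enables
# HTTP keep-alive. The LAN-side target URL below is a placeholder.
result = subprocess.run(
    ["ab", "-n", "10000", "-c", "8", "-k", "http://192.168.1.10/test.bin"],
    capture_output=True, text=True, check=True,
)

# ab reports requests/sec plus a table of latency percentiles on stdout.
print(result.stdout)
```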
I'm stubborn, though, and rather than give up, I doubled down and wrote my own small suite of network modeling tools. Armed with a new network scheduler that would orchestrate jobs among a group of laptops with millisecond precision and a test tool that could pull HTTP traffic from a Web server at any desired rate with extremely detailed results, I was ready to model small networks. I began doing just that, in fact, as an occasional Wi-Fi equipment reviewer at Wirecutter.
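To give a flavor of the approach, here's a minimal sketch of a fixed-rate HTTP puller. It isn't my actual suite (the URL, rate, and duration below are all hypothetical), but it shows the core idea: fire requests on a fixed schedule regardless of how long each one takes, and record per-request latency rather than just an aggregate throughput number.

```python
import statistics
import time
import urllib.request

URL = "http://192.168.1.10/128KB.bin"   # hypothetical test file on a LAN server
RATE_HZ = 10                            # desired fetches per second
DURATION_S = 30

latencies = []
interval = 1.0 / RATE_HZ
next_fire = time.monotonic()
deadline = next_fire + DURATION_S

while time.monotonic() < deadline:
    # Sleep until the next scheduled fetch so the offered load stays
    # constant no matter how long individual requests take.
    delay = next_fire - time.monotonic()
    if delay > 0:
        time.sleep(delay)
    start = time.monotonic()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    latencies.append(time.monotonic() - start)
    next_fire += interval

print(f"fetches: {len(latencies)}")
print(f"mean latency: {statistics.mean(latencies) * 1000:.1f} ms")
print(f"worst latency: {max(latencies) * 1000:.1f} ms")
```

The worst-case latency line is the interesting one: two Wi-Fi systems can post identical mean throughput while one of them stalls individual requests long enough to freeze a video call.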
From pods to Superpods
Inspired by Plume's test scenarios, I initially set up four laptops to emulate a download session, a 4K streaming session, a VoIP call, and a Web browsing session. This did a much better job differentiating between Wi-Fi products; I was now able to put real numbers behind qualitative differences I'd seen between similar systems rather than just talking about how one "felt more frustrating" than another.
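As a rough illustration, those four flows boil down to simple rate-and-size profiles, each with its own definition of "working." The numbers below are illustrative stand-ins rather than my exact test parameters: a steady ~25 Mbps pull approximates a 4K stream, and a small fetch on a tight cadence stands in for VoIP.

```python
# Illustrative traffic profiles for the four laptops; rates and object
# sizes are stand-ins, not the actual test parameters. rate_hz=None
# means "fetch as fast as possible."
PROFILES = {
    "download":  {"rate_hz": None,  "object": "1MB.bin"},    # bulk throughput
    "4k_stream": {"rate_hz": 3.125, "object": "1MB.bin"},    # ~25 Mbps steady
    "voip":      {"rate_hz": 50,    "object": "256B.bin"},   # ~100 kbps, latency-sensitive
    "browsing":  {"rate_hz": 0.5,   "object": "512KB.bin"},  # bursty page-sized loads
}

# The download flow is judged on mean throughput; the other three are
# judged on whether per-request latency stays under a ceiling (ms).
LATENCY_CEILING_MS = {"4k_stream": 1000, "voip": 150, "browsing": 500}
```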
The bad news—at least, for Plume—is that while its devices did well on the new tests, they weren't the best for long. Once competitors with more firepower learned to optimize their networks for myriad devices, too, Plume fell to the middle of the pack. I still liked these kits—and they were still the easiest thing for aesthetes or "technically challenged" folks to deploy and live with—but Plume's first-gen offerings were quickly knocked off the top of the performance heap.
In my own testing and others', top honors repeatedly went to Orbi, Netgear's "muscle car of mesh." Plume clearly noticed, because the company's brand-new Superpod design is basically a highly miniaturized Orbi RBR50. Each device is a tri-band design running on the Qualcomm IPQ4019 SoC—a quad-core ARM Cortex-A7 CPU and dual-band, dual-stream 802.11ac Wave 2 Wi-Fi radio—with a Qualcomm Atheros QCA9984 providing the second 5 GHz radio. The QCA9984 is a 4×4:4 device (four transmit chains, four receive chains, and four simultaneous MIMO streams), and it's a real cannon. In Orbi's RBR50/RBS50 chassis, with one laptop wired to the satellite and another to the router, I measured the QCA9984 providing an eye-watering 750+ Mbps of throughput across the breadth of my 3,500-square-foot house.
In Netgear's configuration, the QCA9984 is reserved for backhaul use only; Orbi uses it exclusively for communication between its satellites and router, and your own devices are only allowed to connect to the IPQ4019's lesser 2×2:2 radios. The idea is that, by reserving the more powerful radio for backhaul, client traffic never has to compete with backhaul traffic for airtime, so you can always utilize the IPQ4019's full capacity. This worked out very well in practice, and up until now, the Orbi RBK53 (one router plus two satellites) has been the undisputed throughput king.
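Some quick back-of-the-envelope arithmetic shows why those numbers line up. Assuming 80 MHz channels at MCS9 with a short guard interval, one 802.11ac spatial stream signals at roughly 433 Mbps; the 40-60% usable-throughput figure below is a rule of thumb I'm assuming for clean spectrum, not a measured constant.

```python
# Back-of-the-envelope 802.11ac link rates, assuming 80 MHz channels
# and MCS9 with a short guard interval (~433.3 Mbps per spatial stream).
STREAM_RATE_MBPS = 433.3
streams_backhaul = 4   # QCA9984: 4x4:4
streams_client = 2     # IPQ4019: 2x2:2

phy_backhaul = STREAM_RATE_MBPS * streams_backhaul   # ~1733 Mbps link rate
phy_client = STREAM_RATE_MBPS * streams_client       # ~867 Mbps link rate

# Assumed rule of thumb: real TCP throughput on clean spectrum lands at
# roughly 40-60% of the PHY rate, which is consistent with the ~750 Mbps
# I measured across the house on the 4x4:4 backhaul.
print(f"backhaul: ~{phy_backhaul:.0f} Mbps PHY, ~{phy_backhaul * 0.45:.0f} Mbps usable")
print(f"client:   ~{phy_client:.0f} Mbps PHY, ~{phy_client * 0.45:.0f} Mbps usable")
```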
True to form, Plume took a different approach—with Superpod, the company is using its Cloud Optimizer to allocate the radios dynamically. I was more than a little skeptical of how the extreme miniaturization would affect performance, not to mention the use of anything but the big 4×4:4 radio for backhaul. But at least controversial designs make for fun testing.