Welcome to the XCellAir Blog! Enjoy the reads, join in on the conversation and come back often!
Simplicity has been the name of the Wi-Fi game from the very beginning. From the outset, Wi-Fi has treated connectivity as its primary problem to solve, and has been designed for low cost, lightweight operation and plug-and-play deployment. Wi-Fi is essentially Internet-like in spirit, embracing a medium-sharing philosophy in which all manner of devices contend for the medium democratically, on equal terms. Wi-Fi's medium access scheme and a well-defined channel access etiquette ensure this.
Over the years and across its various standards iterations, Wi-Fi has largely stayed true to its roots: the standards have refrained from adding too much functionality to the core layers. We are, however, at an important inflection point in the Wi-Fi business today. Service providers are seeking to significantly expand their footprint, attract new subscribers, improve customer satisfaction and reduce churn, and they see Wi-Fi as a key lever to grow beyond basic access into revenue-generating service offerings.
Providing a best-effort Wi-Fi service is therefore no longer acceptable. For users to accept Wi-Fi as a high-quality service, and be willing to pay for premium services that run over it, Wi-Fi's quality of service has to be considerably better than it is today. Wi-Fi optimization has a significant role to play here: it provides mechanisms to combat congestion in dense deployments, sharpen coverage and enforce per-subscriber quality-of-service SLAs. It should also include self-provisioning and self-healing schemes to recover from faults.
Increased network densification brings interference, and with it a direct impact on the quality of the services being delivered to the subscriber. Radio resource management schemes will be needed to ensure that the right network resources are allocated to the right devices at the right time. Techniques such as intelligent allocation of Wi-Fi channels, steering client devices to the right Wi-Fi bands, and managing device mobility between access points in a multi-AP environment become critical to making Wi-Fi work really well.
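To make the channel-allocation idea concrete, here is a rough sketch in Python of picking the least-congested channel from neighbor scan data. The data layout, metric names and weighting are hypothetical, purely for illustration; real radio resource management would draw on many more inputs.

```python
# Illustrative sketch: choose the candidate channel with the lowest
# observed interference load. The "utilization" and "rssi_weight" fields
# are invented for this example, not from any standard Wi-Fi API.

def pick_channel(scans, candidate_channels):
    """Return the candidate channel with the lowest weighted interference."""
    load = {ch: 0.0 for ch in candidate_channels}
    for entry in scans:
        if entry["channel"] in load:
            # Weight utilization by signal strength: a loud neighbor hurts more.
            load[entry["channel"]] += entry["utilization"] * entry["rssi_weight"]
    return min(load, key=load.get)

scans = [
    {"channel": 1,  "utilization": 0.7, "rssi_weight": 1.0},
    {"channel": 6,  "utilization": 0.2, "rssi_weight": 0.5},
    {"channel": 11, "utilization": 0.9, "rssi_weight": 0.8},
]
print(pick_channel(scans, [1, 6, 11]))  # -> 6
```

In practice the same principle applies to band steering and AP-to-AP mobility: score the alternatives on a small set of radio metrics and move the device, or the channel, to the best one.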
But what does it take technically to realize all this? Our experience shows that you don't need an "AI" engine to achieve it. Radio optimization algorithms do, however, need to build in a solid appreciation of Wi-Fi behavior: sensitivity to changing environmental conditions, deployment nuances, and the impact of Wi-Fi behavior (good or bad) on the quality of services such as voice and video.
A Wi-Fi system can expose scores of performance parameters that can be configured and tweaked in an attempt to optimize quality: metrics related to network topology and neighbor-AP information, channel quality indicators, per-user quality-of-service metrics, coverage parameters and so on. Our experience is that smart optimization doesn't require hundreds of parameters to be adjusted. In fact, with radio optimization algorithms, less is typically more: we have found that working with a handful of parameters is sufficient, and far more effective. The key is focusing on the parameters that have the greatest impact on resolving congestion, interference and coverage issues.
Equally important is knowing how to leverage these parameters within the optimization schemes for maximum effect: assessing the thresholds at which performance becomes good or bad, determining how frequently to collect and process metrics, and deciding on the right actions to take to solve an issue. In essence, 10-20% of the available parameters will give you 80% of the desired effect, as long as the right ones are chosen and the algorithms leverage them correctly. We have gleaned all of this through extensive lab testing and benchmarking, and from knowledge of how Wi-Fi works.
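The "handful of parameters with the right thresholds" idea can be sketched as a simple diagnosis step. The metric names and threshold values below are invented for illustration; the point is the shape of the logic, not the numbers.

```python
# Illustrative sketch: a small set of thresholds (values are made up)
# decides whether an AP has a developing issue worth acting on.

THRESHOLDS = {
    "channel_utilization": 0.80,   # fraction of airtime busy
    "retry_rate": 0.30,            # fraction of retransmitted frames
    "noise_floor_dbm": -85,        # higher (less negative) is worse
}

def diagnose(metrics):
    """Return the list of issues indicated by breached thresholds."""
    issues = []
    if metrics["channel_utilization"] > THRESHOLDS["channel_utilization"]:
        issues.append("congestion")
    if metrics["retry_rate"] > THRESHOLDS["retry_rate"]:
        issues.append("interference")
    if metrics["noise_floor_dbm"] > THRESHOLDS["noise_floor_dbm"]:
        issues.append("noisy_channel")
    return issues

sample = {"channel_utilization": 0.91, "retry_rate": 0.12, "noise_floor_dbm": -92}
print(diagnose(sample))  # -> ['congestion']
```

Each diagnosed issue would then map to an action (change channel, steer a client, adjust power), with the thresholds and collection cadence tuned through the kind of lab benchmarking described above.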
It is also critical that such schemes work without disrupting normal Wi-Fi operation. Optimization seeks to improve the user experience, and so must not degrade data communication or the customer experience in any way. To this end, self-healing schemes, while proactive, should kick in only when they need to, i.e. when there is a developing performance issue that needs to be solved. Mechanisms must be built in to prevent cascading configuration changes and "ping-pong" effects that can degrade the network. Algorithms must be cognizant of how active users are at a given time, and adjust optimization actions to avoid disruption. Of course, while the resulting benefits are visible to the end user, the optimization actions shouldn't be!
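One common way to guard against cascading changes and ping-pong effects is a cooldown between configuration changes, combined with a check on how busy the AP currently is. The interval and active-user cap below are illustrative assumptions, not recommended values.

```python
import time

# Illustrative sketch: gate optimization actions behind a cooldown
# (to stop cascading/ping-pong changes) and an activity check
# (to avoid disrupting active sessions). Values are invented.

COOLDOWN_S = 300        # minimum seconds between config changes
MAX_ACTIVE_USERS = 2    # defer disruptive actions above this load

class ChangeGuard:
    def __init__(self):
        self.last_change = float("-inf")

    def may_act(self, active_users, now=None):
        now = time.monotonic() if now is None else now
        if active_users > MAX_ACTIVE_USERS:
            return False  # too many active sessions: defer the action
        if now - self.last_change < COOLDOWN_S:
            return False  # still in cooldown: prevent ping-pong
        return True

    def record_change(self, now=None):
        self.last_change = time.monotonic() if now is None else now

guard = ChangeGuard()
print(guard.may_act(active_users=1, now=1000.0))  # True: quiet AP, no recent change
guard.record_change(now=1000.0)
print(guard.may_act(active_users=1, now=1100.0))  # False: within cooldown window
```

Production schemes layer more on top (per-parameter dampening, revert-on-regression), but the principle is the same: act only when needed, and never twice in quick succession.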
And it goes without saying that such a solution needs to be thoroughly tested – via lab and field trials, and test deployments – well ahead of commercial deployment. Service providers will have little appetite for schemes that “learn in the field” and use knowledge gained only from a real deployment to get smarter and solve issues over time.