OmniPHY-5G

Enhancing 5G RAN Performance with Machine Learning

Learning-Enhanced 5G-NR RAN Algorithms

Communications and wireless engineering are on the cusp of a data-driven revolution. This revolution is powered by measurement, feedback, computation, and powerful AI tools such as deep learning, which will take wireless systems to unprecedented levels of adaptivity, scale, performance, and reliability. The core optimization tools behind 4G and early-5G technology were important enablers of modern communications systems, but they have struggled to keep pace with the reconfigurability, adaptation, many-dimensional RAN optimization, and computational efficiency that true 5G systems and beyond demand.

IMPROVING RAN PERFORMANCE IN THE REAL WORLD

Today’s wireless systems are optimized using statistical models that drastically simplify the real world. Because communications engineering has relied on formal probabilistic optimization methods and simplified system models, many of today’s modem processing algorithms are fixed at design time: they cannot adapt after deployment or reconfiguration, and they remain tied to the statistical formulation of the processing problem. This often results in high computational complexity. Methods such as belief propagation, iterative decoding, and high-order MIMO processing are notorious power-consumption pain points that limit the scalability and efficacy of cellular systems. By allowing a deployed system to adapt to the complexity of real-world deployment scenarios, such as dense urban environments, we can derive lower-complexity signal processing solutions that better fit the actual channel distributions.

BUILDING A NEXT GENERATION LEARNING-BASED RAN

Machine learning within the RAN drastically improves performance by: 

  • Reducing power consumption
  • Exploiting channel state information to support improved connections
  • Exploiting Massive MIMO spatial processing
  • Making better use of real-world feedback to compensate for hardware impairments, distortion, and non-linearities (see the sketch below)
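
To make the last point concrete, here is a minimal sketch of learning a distortion-compensation stage directly from observed input/output samples. The cubic impairment model and the polynomial corrector are illustrative assumptions used only to show the idea of learning from feedback; this is not the OmniPHY-5G implementation.

    # Minimal sketch (not DeepSig's implementation): learning a memoryless
    # distortion-compensation stage from observed input/output samples.
    # The cubic impairment model and polynomial corrector are illustrative
    # assumptions used only to show the idea of learning from feedback.
    import numpy as np

    rng = np.random.default_rng(0)

    # OFDM-like complex Gaussian baseband with a varying envelope
    x = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)

    # Assumed hardware impairment: memoryless cubic non-linearity plus noise
    y = x + 0.15 * x * np.abs(x) ** 2
    y += 0.01 * (rng.normal(size=x.size) + 1j * rng.normal(size=x.size))

    # Learn an odd-order polynomial corrector y -> x by least squares,
    # i.e. fit the inverse distortion directly from measured feedback
    basis = np.stack([y, y * np.abs(y) ** 2, y * np.abs(y) ** 4], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)
    x_hat = basis @ coeffs

    residual = np.mean(np.abs(x_hat - x) ** 2) / np.mean(np.abs(x) ** 2)
    print("residual distortion power:", 10 * np.log10(residual), "dB")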

DeepSig is building these capabilities within our partial-reference 5G-NR RAN L1 implementation. Its purpose is to provide case studies that allow vRAN and other 5G-NR base station integrators to understand and quantify the value of embracing the data-centric RAN algorithms provided by OmniPHY-5G.

Through time-domain, sample-level simulation and over-the-air system testing, we have transitioned these ideas into reality, demonstrating superior baseband processing performance through improved bit error rate (BER). This improvement translates into 2x to 10x better signal reception and more efficient, power-saving inference algorithms. The result is reduced computational cost and the ability to deploy twice as many radio heads per baseband unit when embracing a vRAN-centric front-haul architecture. While we have observed improved performance in the real world, where channels are more stable than random statistical models, below we illustrate the improvement on one of the harshest configurations of the 3GPP TDL-A channel model.
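
To illustrate how such comparisons are made, the sketch below is a simplified sample-level BER harness over a tapped-delay-line fading channel. The channel taps, numerology, and the plug-in point for a learned equalizer are our own illustrative assumptions; this is not the 3GPP TDL-A model or the OmniPHY-5G simulation environment.

    # Minimal sketch of a sample-level BER harness over a tapped-delay-line
    # fading channel. The taps, numerology, and the plug-in point for a
    # learned equalizer are illustrative assumptions, not the 3GPP TDL-A
    # model or the OmniPHY-5G simulation environment.
    import numpy as np

    rng = np.random.default_rng(1)
    N_FFT, CP, SNR_DB = 256, 32, 10

    def run_block(equalize):
        bits = rng.integers(0, 2, 2 * N_FFT)
        sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)  # QPSK
        tx = np.fft.ifft(sym) * np.sqrt(N_FFT)
        tx = np.concatenate([tx[-CP:], tx])              # add cyclic prefix
        taps = (rng.normal(size=6) + 1j * rng.normal(size=6)) * np.exp(-0.5 * np.arange(6))
        taps /= np.linalg.norm(taps)                     # TDL-style decaying power-delay profile
        rx = np.convolve(tx, taps)[: tx.size]
        noise_var = 10 ** (-SNR_DB / 10)
        rx = rx + np.sqrt(noise_var / 2) * (rng.normal(size=rx.size) + 1j * rng.normal(size=rx.size))
        y = np.fft.fft(rx[CP:]) / np.sqrt(N_FFT)         # back to the subcarrier domain
        h = np.fft.fft(taps, N_FFT)                      # ideal channel knowledge in this sketch
        s_hat = equalize(y, h, noise_var)
        bits_hat = np.empty_like(bits)
        bits_hat[0::2] = (s_hat.real < 0).astype(int)
        bits_hat[1::2] = (s_hat.imag < 0).astype(int)
        return np.mean(bits != bits_hat)

    def mmse_eq(y, h, noise_var):
        return np.conj(h) * y / (np.abs(h) ** 2 + noise_var)   # conventional per-tone MMSE

    # A trained neural equalizer would be dropped in with the same signature.
    print("MMSE BER:", np.mean([run_block(mmse_eq) for _ in range(200)]))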

REAL WORLD CHANNELS & DEMONSTRATIONS

We first demonstrated this capability in an over-the-air system at the Brooklyn-5G Symposium in early 2019. Using off-the-shelf embedded and laptop NVIDIA graphics processing units (GPUs), we demonstrated a 5G-NR downlink in which the receiver adapted to its environment, continually improving the performance of channel estimation and equalization in the 900 MHz test while remaining 100% standards-compliant. This was the first true ML-based 5G-RAN learning testbed of its kind, and we observed BER reductions of 10 to 100x compared with a conventional MMSE-based approach. This work has since been extended to wideband (50 to 100 MHz) channel configurations, multi-layer deployments, and the uplink reception process. The pace of development in this area is increasing, and mature learning-based baseband algorithms offer enormous advantages over the conventional approach.
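
For context, the sketch below shows the kind of conventional pilot-based channel estimation that a learned receiver augments or replaces: least-squares estimates at known reference symbols, interpolated across the band. The comb spacing and channel model here are illustrative assumptions; the over-the-air demonstration used standard 5G-NR DMRS.

    # Minimal sketch of conventional pilot-based channel estimation of the
    # kind a learned receiver augments or replaces: least-squares estimates
    # at known reference symbols, interpolated across the band. The comb
    # spacing and channel model are illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(2)
    N_SC, PILOT_STEP, SNR_DB = 240, 4, 10          # subcarriers, DMRS-like comb spacing

    # Frequency-selective channel: a few random taps mapped onto the subcarrier grid
    taps = (rng.normal(size=8) + 1j * rng.normal(size=8)) / np.sqrt(16)
    h_true = np.fft.fft(taps, N_SC)

    pilot_idx = np.arange(0, N_SC, PILOT_STEP)
    pilots = np.exp(1j * np.pi / 4 * (2 * rng.integers(0, 4, pilot_idx.size) + 1))  # known QPSK refs
    noise = np.sqrt(10 ** (-SNR_DB / 10) / 2) * (
        rng.normal(size=pilot_idx.size) + 1j * rng.normal(size=pilot_idx.size))
    rx_pilots = h_true[pilot_idx] * pilots + noise

    # Conventional baseline: LS at pilot positions, linear interpolation elsewhere
    h_ls = rx_pilots / pilots
    h_hat = (np.interp(np.arange(N_SC), pilot_idx, h_ls.real)
             + 1j * np.interp(np.arange(N_SC), pilot_idx, h_ls.imag))

    # A learned estimator maps the same pilot observations to h_hat directly,
    # trained on measured channels rather than a closed-form interpolation rule.
    nmse = np.mean(np.abs(h_hat - h_true) ** 2) / np.mean(np.abs(h_true) ** 2)
    print("LS + interpolation NMSE:", 10 * np.log10(nmse), "dB")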

MASSIVE MIMO AND SPATIAL PROCESSING

We have already demonstrated an ML-driven processing approach for Massive MIMO in simulation. We are now building a testbed to measure this data-centric approach to Massive MIMO processing in real-world UL-MIMO and DL-MIMO configurations for both TDD and FDD systems. DeepSig has developed high-performance C++ code for this application, coupling an NR implementation with deep learning-based inference and GPU offload. This offering could help save power and improve density and system performance in 5G RAN deployments as we continue to work with partners to make their systems more efficient and performant.
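
As a point of reference, the sketch below shows conventional linear MMSE uplink detection for a massive-MIMO array, the kind of spatial processing step a learned detector targets and that is typically offloaded to the GPU. The antenna counts and i.i.d. channel are illustrative assumptions, not a measured array or DeepSig's implementation.

    # Minimal sketch of conventional uplink massive-MIMO linear MMSE detection,
    # the spatial processing step a learned detector targets. The antenna
    # counts and i.i.d. channel are illustrative assumptions, not a measured
    # array or DeepSig's implementation.
    import numpy as np

    rng = np.random.default_rng(3)
    N_RX, N_UE, SNR_DB = 64, 8, 5          # base-station antennas, single-antenna users

    H = (rng.normal(size=(N_RX, N_UE)) + 1j * rng.normal(size=(N_RX, N_UE))) / np.sqrt(2)
    s = ((1 - 2 * rng.integers(0, 2, N_UE)) + 1j * (1 - 2 * rng.integers(0, 2, N_UE))) / np.sqrt(2)
    noise_var = 10 ** (-SNR_DB / 10)
    y = H @ s + np.sqrt(noise_var / 2) * (rng.normal(size=N_RX) + 1j * rng.normal(size=N_RX))

    # Conventional MMSE combining: W = (H^H H + sigma^2 I)^-1 H^H
    G = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(N_UE), H.conj().T)
    s_hat = G @ y

    # A learned detector (or a learned approximation of this matrix solve) would
    # replace the step above; on GPU hardware this is what gets offloaded.
    print("per-user symbol error magnitude:", np.round(np.abs(s_hat - s), 3))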

We continue to prove this capability in more mature 5G-NR RAN realizations over the air. We will provide updates in this area as our RAN enhancements mature. So far, we have focused on digital element processing within mid-band NR deployment scenarios, but our capabilities will continue to evolve with our partners.

INQUIRIES

To learn more about OmniPHY-5G and how you can use it in your systems and deployments, please contact us!