
At this year’s IEEE Communication Theory Workshop, DeepSig presented our latest research and ongoing deployments applying deep learning to physical layer wireless communications, highlighting how our AI-native approach is progressing toward real-world impact. The talk showcased how deep learning drives advancements in neural receivers, AI-Native air interfaces and spectrum sensing, as these technologies move from simulation and lab models to real-world capabilities, field trials and deployments.

Founded in 2016, DeepSig is rethinking how wireless systems are designed and operated by applying deep learning directly to the critical physical-layer functions at the heart of baseband processing in radio access network (RAN) edge devices. The presentation explored this journey, from neural receivers and digital twins to AI-designed waveforms and advanced RF sensing. The key message: the wireless stack is becoming smarter, faster and more adaptable thanks to AI, and accurate data and measurement-driven simulation are critical to designing and synthesizing state-of-the-art wireless functions.

Neural Receivers: Smarter Wireless Reception 

Most wireless receivers rely on model-driven algorithms, manually designed from idealized channel models, to handle tasks like channel estimation and equalization. However, DeepSig has shown that neural networks can outperform these methods under challenging conditions. By learning from real-world data and using an end-to-end approach, AI-based receivers can adapt to the complexity of modern wireless environments.  
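As a point of reference, the conventional baseline that a neural receiver is benchmarked against can be sketched in a few lines. The toy example below is our own illustration (not DeepSig or FlexRAN code): a per-subcarrier MMSE equalizer recovering QPSK symbols over a simulated Rayleigh-fading resource grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# QPSK symbols on a toy 64-subcarrier grid (illustrative dimensions)
n_sc = 64
bits = rng.integers(0, 2, (n_sc, 2))
tx = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Flat Rayleigh fade per subcarrier plus AWGN at 15 dB SNR
h = (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc)) / np.sqrt(2)
noise_var = 10 ** (-15.0 / 10)
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n_sc) + 1j * rng.standard_normal(n_sc))
rx = h * tx + noise

# Classical per-subcarrier MMSE equalizer: w = conj(h) / (|h|^2 + noise_var)
w = np.conj(h) / (np.abs(h) ** 2 + noise_var)
eq = w * rx

# Hard-decision QPSK demapping and bit error rate
bits_hat = np.stack([(eq.real > 0).astype(int), (eq.imag > 0).astype(int)], axis=1)
ber = np.mean(bits_hat != bits)
print(f"MMSE baseline BER at 15 dB SNR: {ber:.3f}")
```

A neural receiver replaces the model-driven estimator and equalizer in this chain with a learned function trained on real channel data, which is precisely where gains appear once conditions diverge from the idealized model above.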

Working with Intel’s FlexRAN platform, we integrated AI-based receiver functions into virtualized RAN (vRAN) architectures, replacing classical signal processing with AI inference algorithms heavily optimized for this specific function. In practice, this means deploying neural receiver models directly in the Distributed Unit (DU) or Radio Unit (RU), bringing low-latency AI processing into the critical high-PHY functions of the radios.

In collaboration with Viettel, HTC GREIGNS and others, DeepSig conducted over-the-air field tests using these neural receivers deployed directly within macrocell network DU hardware. The result? UE uplink throughput speed tests measured improvements of up to 2–3x in edge-of-cell and extreme-edge-of-cell conditions, significantly outperforming the widely used conventional MMSE receiver. According to DeepSig CTO Tim O’Shea, this is where AI-native systems shine – when channel conditions are poor, dynamic or non-ideal – and where leveraging more information about the channel conditions and data distribution can significantly benefit capacity and coverage.

Tuning RAN Processing to Reality with RAN Digital Twins

Ensuring the reliable performance of neural receivers in real-world deployments is a complex challenge. To meet extremely tight latency constraints and adapt to the variability of real-world channels, fine-tuning for actual channel conditions is often necessary. This is where RAN digital twins play a crucial role, accurately modeling the true channel response within a live network and enabling us to fine-tune our AI inference for specific environments.

To better tune a RAN digital twin and neural receivers for real-world environments, DeepSig developed a low-cost, passive drive-capture system that uses software-defined radios to gather channel response data from downlink reference signals across existing 4G and 5G towers. With GPS-synchronized timing and location data, the system can estimate accurate channel impulse responses, locations and flight times, building an accurate measurement model of the channel environment.
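The core estimation step can be illustrated with a simplified sketch (our own toy, not DeepSig's capture pipeline): correlate the received samples against a known downlink reference sequence to recover the channel impulse response taps and their delays.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known downlink reference (pilot) sequence, as seen by a passive SDR capture
pilot = np.exp(1j * np.pi * rng.integers(0, 4, 256) / 2)

# Toy multipath channel: 3 taps at sample delays 0, 4 and 9
cir_true = np.zeros(16, complex)
cir_true[[0, 4, 9]] = [1.0, 0.5 * np.exp(0.7j), 0.25 * np.exp(-1.1j)]

rx = np.convolve(pilot, cir_true)
rx += 0.01 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))

# Cross-correlate against the known pilot; peaks reveal the channel taps
corr = np.correlate(rx, pilot, mode="full")  # conjugates the second argument
peak = np.argmax(np.abs(corr))               # first-arrival (strongest tap) alignment
cir_est = corr[peak:peak + 16] / np.sum(np.abs(pilot) ** 2)
```

With GPS-disciplined timestamps, the index of each correlation peak additionally yields a time of flight, which is what lets the drive captures feed both the channel model and the geometry of the digital twin.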

Generative models such as variational autoencoders can be used to train statistical models that accurately emulate the channel response of a given geographic region. They can also calibrate physics-based channel emulation, such as RF ray-tracing methods. These calibrated RAN digital twins allow DeepSig to simulate, train, evaluate and deploy AI models customized to specific sites or sectors. In one case (published in our RitiRAN paper), we observed an additional 2 dB gain by fine-tuning a neural receiver on a site-specific channel model, on top of the roughly 2 dB the neural receiver already provided over a conventional MMSE-based approach.

Smarter Massive MIMO and Beamforming with AI

DeepSig also tackles critical multi-user Massive MIMO performance challenges by applying deep learning to beamforming. Instead of using zero-forcing or weighted MMSE approaches, which are widely used today for their simplicity and practicality, we train neural networks to estimate beamforming weights that maximize user throughput directly. 
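For context, the zero-forcing baseline mentioned above can be sketched in a few lines (an illustrative toy with hypothetical dimensions, not DeepSig's neural beamformer): the precoder is the right pseudoinverse of the multi-user channel, which nulls inter-user interference by construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# 32TR array serving 8 single-antenna users (illustrative sizes)
n_tx, n_ue = 32, 8
H = (rng.standard_normal((n_ue, n_tx)) + 1j * rng.standard_normal((n_ue, n_tx))) / np.sqrt(2)

# Zero-forcing precoder: W = H^H (H H^H)^{-1}, then normalize per-user columns
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0, keepdims=True)

# Effective channel H @ W is diagonal: zero inter-user leakage by construction
eff = H @ W
leakage = np.abs(eff - np.diag(np.diag(eff)))
print(f"max inter-user leakage: {leakage.max():.2e}")
```

Zero-forcing buys simplicity at the cost of noise enhancement for weak users; a learned beamformer instead optimizes the weights directly against a throughput objective rather than enforcing exact interference nulls.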

Initial simulations in scenarios involving 32TR or 64TR antenna configurations with 8 or 16 layers showed aggregate capacity improvements of around 10–20% in spectral efficiency (bits/s/Hz), especially at medium to high SNR ranges. Because the models can be run as wide, low-precision tensor operations on GPUs or AI accelerators at the edge, they are both performant and efficient on numerous current and forthcoming silicon platforms.

O’Shea emphasized the growing value of AI as we move into more complex, higher-dimensional communication systems, such as the Giga-MIMO envisioned for the FR3 band, or Distributed MIMO (D-MIMO) in future systems. Machine learning has generally shown that the more degrees of freedom it has to optimize over, the greater its performance upside over more naive, model-driven classical approaches.

Allowing AI to Redesign the 5G Waveform Itself for the Next Generation PHY

Current generations of the physical layer use manually designed modulation schemes, pilot patterns and resource grid layouts, locked down and specified based on classical model-driven methods. At DeepSig, we’re rethinking this approach by focusing on end-to-end performance rather than layered design convenience. One key direction is the AI-native autoencoder approach, which learns the entire physical layer (modulation, reference signals, equalization and symbol detection) end to end, rather than subdividing the problem analytically. By allowing AI models to learn how to embed data, reference information and structure in new ways that optimize directly for ultimate performance metrics under real-world channel conditions and assumptions, we illustrate how an AI-native air interface can replace the classical, relatively simplistic approaches used today and increase the capacity and efficiency of the fundamental air interface of our communications systems.

What’s remarkable is that these learned schemes can be seamlessly integrated into the existing 5G-NR architecture, maintaining the standard MAC and slot structure without significantly diverging from current-day standards. They can achieve 20–60% throughput gains in some instances. The performance gap between traditional and AI-learned receivers widened even further in high-mobility scenarios, such as 130 km/h maximum-Doppler environments, where the learned scheme also reduced the pilot overhead needed in conventional systems, further demonstrating the performance advantage of AI-learned receivers in challenging conditions.
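To see how reduced pilot overhead alone translates into throughput, here is a back-of-the-envelope sketch. The DM-RS symbol counts and per-RE spectral efficiency below are our own illustrative assumptions, not DeepSig's measured figures.

```python
# Throughput impact of pilot overhead on one PRB per slot (illustrative numbers)
re_per_prb = 12 * 14          # resource elements: 12 subcarriers x 14 OFDM symbols
pilots_conventional = 24      # e.g., two full DM-RS symbols (assumed)
pilots_learned = 12           # e.g., one DM-RS symbol after learning (assumed)
se = 4.0                      # bits per data RE at the assumed operating MCS

def data_bits(pilot_re):
    # Bits carried per PRB-slot once pilot REs are excluded
    return (re_per_prb - pilot_re) * se

gain = data_bits(pilots_learned) / data_bits(pilots_conventional) - 1
print(f"throughput gain from halved pilot overhead: {gain:.1%}")
```

Overhead reduction is only one contributor; the larger reported gains come from the learned modulation and detection themselves, with pilot savings compounding on top.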

This class of physical-layer design offers a path that can be interoperable and tightly compatible with current-day 4G and 5G systems, while providing a rapid, software-enabled path to higher performance for 6G, with limited modifications to how information is encoded and decoded over the link for devices supporting such a future standard extension. By reducing pilot overhead, improving spectral efficiency and enhancing resilience under harsh channel conditions through learned methods, significant performance advantages can be gained without completely diverging from the existing foundations of modern cellular systems.

OmniSIG: Enabling Efficient AI-Powered RF Sensing and Awareness

While neural receivers and AI-Native Air Interfaces help encode and decode information across the RF/wireless channel, RF sensing tells you who is transmitting, where, and what kind of signal it is – a critical function for ensuring the RAN remains free from interference, degradation or contention in shared-spectrum deployments.

DeepSig’s OmniSIG software uses deep learning for spectrum sensing, signal classification and emitter localization. It can leverage RF datasets to rapidly curate, distill and train compact neural sensing models that run on edge devices with minimal compute, providing understanding of a wide range of spectrum access technologies and phenomena.

OmniSIG rapidly detects and classifies emissions, including overlapping or bursty signals such as radar, IoT or drone links that might interfere with 4G or 5G cellular signals, scaling from microsecond-long events to much longer emissions. Rather than raw RF sample data, it generates a rich metadata representation of spectrum activity that can be fed into schedulers, automated frequency coordinators, interference mitigation engines or other analytic engines at scale. This allows edge devices to inform and enable network optimization without consuming significant bandwidth or processing resources, and enables real-time reaction to diverse and evolving spectrum anomalies and activity that was previously not feasible.
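A minimal sketch of the "metadata, not raw samples" idea is shown below. This is our own toy energy detector, not OmniSIG's deep-learning models: it reduces a capture to a few (start time, duration) tuples that a scheduler or coordinator could consume.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1_000_000                # 1 MS/s toy capture rate (hypothetical)

# Complex noise floor with two bursty emissions injected at known offsets
x = 0.05 * (rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000))
x[2000:2500] += np.exp(2j * np.pi * 0.10 * np.arange(500))        # 500 us burst
x[6000:6200] += 0.8 * np.exp(2j * np.pi * 0.25 * np.arange(200))  # 200 us burst

# Short-time energy detector: windowed power vs. a robust noise-floor estimate
win = 100
p_win = (np.abs(x.reshape(-1, win)) ** 2).mean(axis=1)
active = p_win > 10 * np.median(p_win)

# Emit compact (start_seconds, duration_seconds) metadata instead of raw IQ
edges = np.diff(np.concatenate(([0], active.astype(int), [0])))
starts, stops = np.flatnonzero(edges == 1), np.flatnonzero(edges == -1)
bursts = [(s * win / fs, (e - s) * win / fs) for s, e in zip(starts, stops)]
print(bursts)
```

Here 10,000 IQ samples collapse into two small tuples; a learned classifier would additionally attach a signal-type label and confidence to each event.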

O’Shea highlighted recent work maturing emission direction-of-arrival estimation, allowing sensors and radio units (RUs) to localize the source of intended or unintended radio emitters and to combine metadata from multiple RUs to pinpoint those sources in real time. This provides an unprecedented way to rapidly understand spectrum activity, the emitters behind it and their locations in the physical world, across a wide range of diverse emitter types. Applications span radar, IoT, UAS links, cybersecurity threats, jammers, EMI and more – all driven by pattern recognition and distributed intelligence. In the next generation of the radio access network, spectrum management and O&M for operators should be much more heavily automated by these technologies, reducing the need for manual diagnosis and truck rolls to resolve issues.
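As a simplified illustration of direction-of-arrival estimation (a two-element phase-interferometry toy of our own, not OmniSIG's production method): the phase difference between two antennas at known spacing maps directly to the arrival angle.

```python
import numpy as np

rng = np.random.default_rng(4)

# Narrowband plane wave hitting two antennas at half-wavelength spacing
theta_true = np.deg2rad(25.0)           # true direction of arrival
d_over_lambda = 0.5
n = 1024
s = np.exp(2j * np.pi * 0.05 * np.arange(n))            # emitter waveform
phase = 2 * np.pi * d_over_lambda * np.sin(theta_true)  # inter-element phase shift
x0 = s + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
x1 = s * np.exp(1j * phase) + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Phase interferometry: DoA from the phase of the inter-element cross-correlation
dphi = np.angle(np.vdot(x0, x1))        # vdot conjugates its first argument
theta_est = np.arcsin(dphi / (2 * np.pi * d_over_lambda))
print(f"estimated DoA: {np.rad2deg(theta_est):.1f} deg")
```

A single bearing like this, reported as metadata from several RUs, is what allows the positions of emitters to be triangulated in real time across a network.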

AI-Native Wireless: Closer Than You Think

The talk ended with a clear message: AI-native communications is not just a research idea – it’s practical, happening now, and moving at the speed of AI. DeepSig’s technologies are already being deployed and tested in real networks, demonstrating real-world impact. With the continued advancement of edge AI hardware, flexible RAN architectures, evolving interface standards and open component implementations (e.g., OpenRAN), the path from deep learning research to real-world wireless deployments is shorter than ever.

From boosting throughput and spectral efficiency with neural receivers, neural beamforming and AI-Native Air Interfaces to understanding spectrum activity in real time with OmniSIG, DeepSig is redefining what’s possible when AI meets the physical layer – the most fundamental performance layer in the wireless system. As the industry looks toward 6G, these breakthroughs aren’t just optional; they are foundational to how next-generation systems will deliver enhanced performance, autonomous lower-cost deployment and operation, and greater resilience and improved user experiences for operators.
