
DeepSig Demonstrates Evolved OmniPHY Axon 6G Software Illustrating AI-RAN Alliance AI-for-RAN Work Item #1 alongside NVIDIA at GTC Washington, D.C. and Brooklyn 6G Summit Events

Summary

AI-Native Air Interfaces are at the core of next-generation wireless systems; the recent acceleration of AI/ML capabilities has made it possible to learn wireless modulations, reference signals, and waveforms directly, jointly with the transmitter and receiver algorithms. This joint process can optimize more completely for a wide range of channel conditions and effects, maximizing the throughput, sensitivity, coverage, resilience, and overall performance delivered to the wireless user.

OmniPHY Axon is DeepSig’s production software for training, deploying, and operating AI-Native Air Interfaces over cellular and non-cellular wireless systems. It will be demonstrated as a candidate modulation scheme for the 6G air interface at the upcoming NVIDIA GTC DC and Brooklyn 6G (B6G) Summit conferences.

The showcase marks a significant breakthrough for AI-Native Air Interfaces, which aligns with the scope of the AI-RAN Alliance working group “AI-for-RAN.” Our demonstration highlights how adopting AI/ML-based signal processing in a future 6G standard could deliver enhanced system capacity and user experience for the next generation of wireless communications, while maintaining computational and energy efficiency in wireless devices.

AI-for-RAN: Delivering Peak Wireless Performance with AI in Critical Air Interface Functions

The AI-RAN Alliance’s AI-for-RAN working group focuses on leveraging AI/ML methods to improve RAN performance, as an enabler for greatly enhanced communications system efficiency and user experience.

Work Item #1 (WI1), spearheaded by DeepSig and NVIDIA, aims to develop a two-sided model that jointly learns modulation schemes and receiver methods to maximize link performance by enabling ML to optimize what is transmitted over-the-air and how it is received.

DeepSig already demonstrated initial steps for WI1 at Mobile World Congress Barcelona in March 2025, highlighting significant improvements in throughput in dense urban macro-cellular wireless environments and showing that link-level and system-level simulations yielded improvements in single-user single-antenna wireless systems.

The updated NVIDIA GTC DC and B6G demonstration features an evolution of the technology, with site-specific optimization for enhanced performance in specific deployment sectors, as well as MIMO modes that take advantage of modern multi-antenna systems within 5G and 6G. The demonstration aligns with our internal Alliance work-item updates highlighting the viability and performance impact the approach could have within 6G, and it provides further test and measurement details based on third-party performance validation.

Learning a Pilot-Free Modulation Scheme in the Modem

Current wireless systems rely heavily on dedicated reference signals for channel estimation and equalization to enable clean reception of both uplink and downlink signals. These pilot signals (e.g., demodulation reference signals, DMRS) add significant overhead to the communications system, especially in multi-user and massive MIMO deployments, which can cause pilot contamination and interference between adjacent sectors in some cases.
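As a rough illustration (exact figures depend on the DMRS type, the number of CDM groups, and the additional DMRS positions configured): in a 14-symbol 5G-NR slot with a front-loaded type-1 DMRS plus one additional DMRS symbol, roughly 12 of the 168 resource elements in each PRB carry pilots, around 7% overhead, and this grows considerably when more DMRS ports or positions are configured for MIMO.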

Learning a new air-interface modulation can minimize this overhead by reducing pilot density or, in some cases, removing pilots entirely, relying instead on learned transmitter and receiver functions that leverage non-symmetric, non-rectangular modulation schemes to transmit data effectively over a wide range of communication channels.

As a founding member of the AI-RAN Alliance, we proposed this approach as Work Item #1 in the “AI-for-RAN” working group (WG1), with the goal of utilizing neural modulation, receiver, and decoder networks (i.e., autoencoders) to map information bits across a standard 5G-NR slot structure. The methodology also aligns with the vision of 6G’s Multi-RAT Spectrum Sharing (MRSS), which allows for simple 5G-6G interoperability by maintaining compatibility with existing CP-OFDM and DFT-s-OFDM modulation frameworks, easing the transition and adoption for operators.

By jointly learning these two-sided models across a wide variety of channels, hardware imperfections, interference or jamming distortion, and other propagation effects, this AI-Native approach to modulation and reference design provides a more powerful and expressive platform for improving spectral efficiency and throughput while reducing overhead, and thus for improving overall system performance.
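To make the idea concrete, below is a minimal, hypothetical sketch of a two-sided model in PyTorch: a transmitter network learns a constellation “cloud” while a receiver network learns to recover the bits, and both are trained jointly end-to-end over a differentiable channel. This toy uses an AWGN channel and illustrative sizes; it is not the OmniPHY Axon implementation, which additionally handles fading channels, the OFDM slot structure, MIMO, and coding.

```python
# Minimal, hypothetical two-sided autoencoder sketch (PyTorch, AWGN only).
# The transmitter learns a constellation "cloud"; the receiver learns to
# recover bits. All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

K = 4            # bits per learned symbol (a 16-point "cloud")
SNR_DB = 10.0
BATCH = 1024

class Tx(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, 2))
    def forward(self, bits):
        x = self.net(bits)                         # (batch, 2) = I/Q per symbol
        return x / x.pow(2).sum(-1).mean().sqrt()  # normalize to unit average power

class Rx(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, K))
    def forward(self, y):
        return self.net(y)                         # per-bit logits

tx, rx = Tx(), Rx()
opt = torch.optim.Adam(list(tx.parameters()) + list(rx.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
noise_std = 10 ** (-SNR_DB / 20) / 2 ** 0.5        # per real dimension, Es = 1

for step in range(2000):                           # joint end-to-end training
    bits = torch.randint(0, 2, (BATCH, K)).float()
    y = tx(bits) + noise_std * torch.randn(BATCH, 2)  # differentiable AWGN channel
    loss = bce(rx(y), bits)                        # bit-wise cross-entropy
    opt.zero_grad(); loss.backward(); opt.step()
```

In a full system the same pattern is extended with realistic channel models, OFDM resource mapping, and MIMO dimensions, but the core principle of backpropagating a bit-level loss through the channel to the transmitter is the same.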

The figure above illustrates the convergence of the training process, in which the clean (noise-free) version of the transmitted signal achieves a low error rate. This ensures optimal bandwidth utilization and reliable transmission, even over challenging fading channels.

Uniquely, this method can learn an embedding “cloud” whose asymmetric structure takes on the role of a pilot sequence, allowing the receiver to perform channel estimation and equalization without dedicated pilot signals. This embedding enables more efficient data encoding than a traditional rectangular QAM signal (e.g., QPSK, 16-QAM, 64-QAM, 256-QAM, 1024-QAM), and can be optimized to maximize performance against non-linear distortion introduced by the power amplifier and interference from congested spectrum.

In the same way that today’s large language models (LLMs) such as ChatGPT, Claude, Grok, and others leverage representations in a high-dimensional “token” space to represent and process data, this learned “cloud”, transmitted over-the-air, represents a token of sorts for some number of raw bits of transmitted information (or for more specific content in the case of joint source-channel coding or jointly learned semantic communications). These PHY tokens are heavily conditioned within this embedding space to withstand the variety of wireless impairments they will encounter during normal usage.

Site-Specific Learning for Improved Wireless Performance & User Experience

The core of AI/ML methods is data. In wireless systems, and particularly AI-RAN systems that optimize the air-interface for performance, that data often takes the form of samples of the channel response, and samples of the propagation and impairment environment over which communications must be optimized.

Traditionally, wireless systems are designed with a one-size-fits-all approach: a constellation and pilot design is chosen once and deployed everywhere. Techniques such as self-optimizing networks (SON) have made incremental advances by tuning select high-level cell parameters, including power levels, down-tilt, and physical-layer settings such as cyclic prefix length or codebook selection. With the emergence of AI-Native RAN and an AI-Native Air Interface, however, key RAN functions such as modulation, precoding, beamforming, and scheduling can be tailored to the specific activity and real-world channel responses observed within each deployment environment, at the sector level or even at the individual-user level for stable users such as those on fixed wireless access (FWA), extracting additional performance from network data.

To this end, site-specific learning offers a path where one-sided and two-sided AI-RAN models can be optimized for specific cell measurements to increase total capacity, total throughput and multi-user sum-rate. Below we show one example from this work item, where we leverage NVIDIA Sionna RT as well as NVIDIA Aerial Omniverse Digital Twin (AODT) to simulate fine-grained geometry-aware propagation statistics for a specific sector, and highlight how this can be used to further optimize the air-interface performance.

In practice, the concept of a RAN Digital Twin (DT) or RF DT can also leverage real-world measurements, such as channel impulse response (CIR) data, to further optimize and “calibrate” beyond pure simulation, or the AI-RAN model can continually optimize itself directly on feedback information. This was a core topic of our NTIA-supported PWSCI NOFO1 program, “AirTwin”, focused on calibrating AI/ML- and RT-based RAN DTs against such measurement data across a wide range of locations for more accurate air-interface test, evaluation, and optimization.

The simulation above, running on NVIDIA Grace Hopper (GH200), sweeps thousands of possible UE positions and channel responses within the expected coverage area of the sector, capturing what normal usage would look like and then generating new test activity within that coverage area, which enables site-specific model fine-tuning and site-specific testing of models on a per-sector basis.
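For readers who want to experiment with this workflow, the sketch below shows the general shape of such a sweep using NVIDIA Sionna RT (pre-1.0 Python API; names may differ in newer releases). The scene, antenna configuration, gNB position, UE drop region, and sample count are illustrative assumptions rather than the sector or setup used in the demonstration.

```python
# Hypothetical sketch of a site-specific channel sweep with NVIDIA Sionna RT
# (pre-1.0 Python API). Scene, antennas, positions, and counts are illustrative.
import numpy as np
import sionna
from sionna.rt import load_scene, Transmitter, Receiver, PlanarArray

scene = load_scene(sionna.rt.scene.munich)        # built-in example scene
scene.frequency = 3.5e9
scene.tx_array = PlanarArray(num_rows=1, num_cols=8, vertical_spacing=0.5,
                             horizontal_spacing=0.5, pattern="tr38901",
                             polarization="V")
scene.rx_array = PlanarArray(num_rows=1, num_cols=1, vertical_spacing=0.5,
                             horizontal_spacing=0.5, pattern="iso",
                             polarization="V")
scene.add(Transmitter(name="gnb", position=[8.5, 21.0, 27.0]))

rng = np.random.default_rng(0)
cirs = []
for i in range(1000):                             # many candidate UE drops
    pos = [rng.uniform(-50, 50), rng.uniform(20, 120), 1.5]
    scene.add(Receiver(name="ue", position=pos))
    paths = scene.compute_paths(max_depth=3)      # ray-traced propagation paths
    a, tau = paths.cir()                          # path gains and delays
    cirs.append((a.numpy(), tau.numpy()))         # site-specific training data
    scene.remove("ue")
```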

This is important because one of the fundamental aspects of machine learning is that the better you match the distribution of your deployment-time data during training, the better you can prepare and tune the AI/ML models to behave in deployment. When seeking to extract every inch (or, in this case, bits/second/Hz) of spatial-spectral efficiency, throughput, coverage, and energy efficiency from 6G systems, site-specific fine-tuning and the use of RAN Digital Twins are critical enablers for unlocking next-generation performance.

Below we show a block error rate (BLER) sweep for a single modulation and coding scheme (MCS) across operating conditions within the sector. While the AI-Native Air Interface outperforms conventional rectangular QAM (as used in 5G-NR) in all cases, a site-specific fine-tuned AI-Native Air Interface widens this margin even further, delivering better performance to operators and users alike from finite and precious spectrum and base-station resources.
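To illustrate what such a sweep looks like mechanically, here is a minimal Monte Carlo BLER sweep in NumPy over an uncoded QPSK link with Rayleigh fading, standing in for a full coded 5G-NR MCS; the block length, SNR range, and trial count are illustrative assumptions.

```python
# Hypothetical sketch of a BLER sweep: Monte Carlo over an uncoded QPSK link
# with Rayleigh fading and ideal equalization. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
BLOCK_LEN = 256                              # bits per "transport block"
N_BLOCKS = 2000                              # Monte Carlo trials per SNR point
snrs_db = np.arange(-4, 21, 2)

def qpsk_mod(bits):
    # Map bit pairs to unit-energy QPSK symbols (0 -> +1, 1 -> -1 per axis)
    return ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

bler = []
for snr_db in snrs_db:
    n0 = 10.0 ** (-snr_db / 10)
    errors = 0
    for _ in range(N_BLOCKS):
        bits = rng.integers(0, 2, BLOCK_LEN)
        x = qpsk_mod(bits)
        h = (rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size)) / np.sqrt(2)
        n = np.sqrt(n0 / 2) * (rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size))
        z = (h * x + n) / h                  # ideal zero-forcing equalization
        bits_hat = np.empty(BLOCK_LEN, dtype=int)
        bits_hat[0::2] = (z.real < 0).astype(int)
        bits_hat[1::2] = (z.imag < 0).astype(int)
        errors += int(np.any(bits_hat != bits))   # block error if any bit wrong
    bler.append(errors / N_BLOCKS)

for s, b in zip(snrs_db, bler):
    print(f"SNR {s:3d} dB -> BLER {b:.3f}")
```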

Finally, when considering system simulation with link adaptation in the loop, the base station scheduler benefits from both the decreased pilot overhead and the increased sensitivity of the OmniPHY Axon scheme, delivering consistently higher throughput for a user moving through the environment with a highly dynamic channel response and signal-to-interference-plus-noise ratio (SINR).

The video below shows one such simulation, monitoring the link’s varying SINR and MCS selection while passing through a 1×8 MIMO RF ray-traced scene. Throughout the scenario, the AI-Native Two-Sided model consistently attains higher throughput than an LMMSE receiver on a 5G-NR link using rectangular QAM under the same channel conditions. This is especially pronounced in very low SINR conditions, such as the harsh cell-edge, where on multiple occasions 5G loses the link entirely while the AI approach maintains a data link, delivering from 1.2x to over 2x the throughput at low SNR.
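As a rough sketch of what “link adaptation in the loop” means here, the snippet below selects an MCS per slot from an illustrative SINR-threshold table and accrues throughput over a synthetic SINR trace; the thresholds, spectral efficiencies, and trace are assumptions for illustration, not 5G-NR table values or data from the demonstration.

```python
# Hypothetical sketch of link adaptation in the loop: each slot, pick the
# highest MCS whose SINR threshold is met and accrue throughput; otherwise
# count an outage. All values are illustrative assumptions.
import numpy as np

# (minimum SINR in dB, spectral efficiency in bit/s/Hz) per illustrative MCS
MCS_TABLE = [(-5.0, 0.23), (0.0, 0.88), (5.0, 1.91), (10.0, 3.32), (17.0, 5.55)]

def select_mcs(sinr_db: float) -> float:
    """Return the spectral efficiency of the best supportable MCS (0.0 = outage)."""
    usable = [se for thr, se in MCS_TABLE if sinr_db >= thr]
    return max(usable) if usable else 0.0

rng = np.random.default_rng(1)
slots = 600
sinr_db = 5 + 8 * np.sin(np.linspace(0, 6, slots)) + rng.normal(0, 3, slots)
bandwidth_hz = 20e6

throughput_bps = np.array([select_mcs(s) * bandwidth_hz for s in sinr_db])
print(f"mean throughput: {throughput_bps.mean() / 1e6:.1f} Mbit/s, "
      f"outage slots: {(throughput_bps == 0).sum()} / {slots}")
```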

Realizing an AI-Native Air Interface Over-the-Air 

OmniPHY Axon is a software platform enabling 6G Air Interface research, but it is also a self-contained and functional solution for AI-Native modem training and deployment. As part of this development, it has been integrated into multiple enterprise, mobile, and embedded-class GPU compute platforms and software-defined radio platforms to enable over-the-air measurement, deployment, and use on small-form-factor mobile platforms as well as data-center-class platforms.

This has been validated through a number of lab and field tests for both SISO and MIMO links, including terrestrial, aerial, and other systems, where an AI-Native Air Interface can deliver enhanced wireless performance across a wide range of applications. OmniPHY Axon focuses on the key AI-Native two-sided models for modulation and reference signals in the PHY, but it is also a comprehensive, usable modem that includes error correction, protocol integration, MAC and scheduling functions, and the other components needed to deploy these capabilities into production modem systems today and tune them in the real world. We hope to continue engaging with organizations such as 3GPP, O-RAN Alliance, and AI-RAN Alliance to adopt common interfaces and align on protocols, capabilities, and reference designs in line with future industry consensus.

Getting these AI-RAN capabilities into the field and measuring and validating their real-world performance impact is critical in a data-driven world. Evaluating on statistical channel models often misses the benefits that can be gained by introducing learning into RAN functions; the structure of real-world tests and data is critical to measure and motivate the benefits of adoption. To this end, we have focused on partnering with O-RAN system integrators and DU and RU vendors to help bring these capabilities into mature and deployable hardware and systems, including NVIDIA Aerial, Intel FlexRAN, Software Radio Systems (and the Linux Foundation OCUDU Project), HTC G-Reigns, Viettel High Tech (VHT), and others. Vetting these methods in deployment is often where the benefits of the approach show through the most, as the system model assumptions deviate from “ideal” and “whitened” distributions or effects.

As we continue to build out two-sided AI models for 6G, we seek to follow the same rapid path to field testing with vendors, operators, and partners. Above, you can read more about how we have done this with current 5G RAN vendor partners, including bringing one-sided AI/ML receiver models alongside HTC G-Reigns’ stack, as well as our work with Viettel High Tech to bring neural receivers into their macro network to drive improved edge capacity and interference rejection.

Two-Sided AI-Native Air Interface Driving Energy Efficiency and Versatility

The AI-Native Air Interface approach here also offers significant potential benefits in energy efficiency, especially with regard to power amplifier efficiency. The method can jointly learn favorable PAPR and net-gain characteristics within its constellation design, allowing for more energy-efficient operation than traditional CP-OFDM with high-order QAM, which often requires significant back-off and oversized amplifiers. It also provides a versatile platform that can be optimized for a wide variety of additional objectives and scenarios, such as enhanced resilience in harsh channels with EMI or jamming, integrated sensing and communications (ISAC) return properties, or unique wireless channels such as non-terrestrial networks (NTN), extending beyond commercial macro-cellular networks to other industrial use cases.
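One simple way to express this kind of joint objective, sketched below under the same toy assumptions as the earlier autoencoder snippet, is to add a differentiable PAPR penalty on the OFDM time-domain waveform built from the learned symbols to the end-to-end bit loss; the weighting term `lam` is an illustrative knob, not a value from the OmniPHY Axon training recipe.

```python
# Hypothetical sketch: a differentiable PAPR penalty added to the end-to-end
# training loss, so the learned constellation trades a little link margin for
# amplifier-friendly envelopes. `symbols` is a (batch, n_subcarriers) complex
# tensor of learned constellation points; `lam` is an illustrative weight.
import torch

def papr_penalty(symbols: torch.Tensor, oversample: int = 4) -> torch.Tensor:
    """Mean peak-to-average power ratio of the OFDM time waveform per block."""
    n_fft = oversample * symbols.shape[-1]
    t = torch.fft.ifft(symbols, n=n_fft, dim=-1)   # time-domain waveform
    power = t.abs() ** 2
    return (power.amax(dim=-1) / power.mean(dim=-1)).mean()

# Illustrative combined objective:
# total_loss = bit_cross_entropy + lam * papr_penalty(tx_symbols)
```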

One illustration of this versatility is the following example, where we trained a two-sided model for increased coverage and PA performance on an NTN uplink. Traditionally, DFT-s-OFDM would be used in such a scenario for its desirable low peak-to-average power ratio (PAPR), which allows handheld devices to drive average power higher. In the figure, we compare both our standard and low-PAPR two-sided pilot-less models against two 5G-NR baselines, CP-OFDM and DFT-s-OFDM. The first plot shows the PAPR CCDF, the second shows the standard BLER curve, and the third shows the BLER curve with a power back-off based on the PAPR at the 0.01% point of the CCDF.

Focusing first on the second plot, our base OmniPHY Axon ML model clearly has the best BLER performance when the effects of a nonlinear power amplifier are not considered. However, once the power amplifier is considered, that advantage is erased and performance becomes similar to DFT-s-OFDM because of its more restrictive power behavior. Next, notice that our low-PAPR model achieves the best of both worlds, with reduced PAPR (first plot) and strong BLER performance (second plot), resulting in the best overall BLER performance once back-off is applied (third plot). This illustrates the flexibility of an AI/ML-based waveform.
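For reference, the kind of PAPR CCDF curve shown in the first plot can be reproduced for a CP-OFDM baseline with a few lines of NumPy; the subcarrier count, constellation, and oversampling factor below are illustrative assumptions, and the learned-waveform curves would come from trained models instead.

```python
# Hypothetical sketch: PAPR CCDF of a random-16-QAM CP-OFDM baseline.
# Subcarrier count, constellation, and oversampling are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_sc, n_sym = 256, 5000                        # subcarriers, OFDM symbols
levels = np.array([-3.0, -1.0, 1.0, 3.0])      # per-axis 16-QAM levels

x = rng.choice(levels, (n_sym, n_sc)) + 1j * rng.choice(levels, (n_sym, n_sc))
x /= np.sqrt((np.abs(x) ** 2).mean())          # normalize to unit average power
t = np.fft.ifft(x, n=4 * n_sc, axis=1)         # 4x oversampled time waveform
power = np.abs(t) ** 2
papr_db = 10 * np.log10(power.max(axis=1) / power.mean(axis=1))

thresholds = np.arange(4.0, 13.0, 0.5)
ccdf = [(papr_db > th).mean() for th in thresholds]  # P(PAPR > threshold)
for th, p in zip(thresholds, ccdf):
    print(f"PAPR > {th:4.1f} dB: {p:.4f}")
```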

3GPP RAN1 Progress on 6G AI/ML Study Items

Current study-item discussions in the 3GPP RAN1 working group are considering a wide range of use cases for AI/ML to optimize future base stations and mobile devices, some one-sided and some two-sided. Discussions are also ongoing on waveform, modulation, and constellation-shaping topics, which align closely with MRSS and with adopting constellation shaping to modify modulations and receivers within the context of an OFDM slot structure.

Several significant agreements were made in September 2025 during the recent 3GPP RAN1 #122bis meeting in Prague, Czech Republic, summarized in the chairman’s notes (excerpted below): considering additional constellation shapes within 6GR, and discussing how these should best be validated, including BLER analysis, throughput analysis, and analysis with link adaptation in the loop, each of which we have considered above in this article. Likewise, it is important that SU-MIMO and MU-MIMO impact be thoroughly analyzed, along with how CP-OFDM, DFT-s-OFDM, and reference signals are affected, and how factors such as PAPR or “net gain” metrics are impacted.

Several key sections accepted for agreement in Prague, directly relevant to this work item and the path forward in 6GR, are shown below.
