DeepMIMO: The data foundation for wireless AI
The open-source project is advancing wireless communication by integrating ray tracing technology with 5G simulation systems

For decades, wireless communication was powered by mathematics. Engineers relied on equations to describe how signals move through the air, how antennas radiate and how noise interferes. Those mathematical blueprints powered everything from FM radio to 5G.
But modern wireless networks face a challenge that equations alone can’t solve.
Billions of devices — phones, sensors, vehicles and machines — are competing for space on the wireless spectrum, the invisible range of radio frequencies used for communication signals. That crowding makes it harder for devices to send and receive information clearly because signals can reflect, scatter and fade as they move through everyday environments. Meanwhile, emerging technologies such as autonomous vehicles and augmented reality demand near-perfect precision and low latency.
Solving this problem requires a new approach that brings together wireless engineering and artificial intelligence, or AI. Researchers at Arizona State University are using NVIDIA’s AI Aerial platform to design intelligent communication systems that can learn from their surroundings and adjust on the fly.
“Artificial intelligence was the obvious next step,” says João Morais, a doctoral student in the Ira A. Fulton Schools of Engineering at ASU. “We needed networks that could learn from data, not just follow equations.”
Morais conducts research in the Wireless Intelligence Lab with Ahmed Alkhateeb, an associate professor of electrical engineering in the School of Electrical, Computer and Energy Engineering, part of the Fulton Schools at ASU.
Led by Alkhateeb, the Wireless Intelligence Lab is advancing next-generation, AI-driven methods for wireless communication and sensing. The lab’s collaboration with NVIDIA exemplifies ASU’s commitment to advancing open, scalable and equitable innovation in engineering and technology.
Difficulty with data
AI relies on data, and in wireless research that data can be a challenge to acquire.
Capturing real-world wireless data requires synchronized antennas, precise calibration and hours of field testing. The resulting measurements are expensive, hard to reproduce and often valid only in one place at one time.
To reduce cost, researchers frequently rely on mathematical or stochastic models that can simulate wireless channels rapidly. While such models are scalable and efficient, they often sacrifice realism, failing to capture the complex nuances of physical environments. As a result, they may not effectively differentiate between scenarios such as dense urban canyons and open campus spaces, causing AI systems trained on them to perform poorly when deployed in real-world conditions.
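The limitation is easy to see in a minimal sketch. A common stochastic baseline, the i.i.d. Rayleigh fading model, draws every channel coefficient from the same complex Gaussian distribution no matter what the surroundings look like (the function name and parameters below are illustrative, not from any particular library):

```python
import random

def rayleigh_channel(num_ant, seed=None):
    """Stochastic i.i.d. Rayleigh model: each antenna coefficient is an
    independent complex Gaussian draw with unit average power.

    Fast and scalable, but it carries no information about the surrounding
    geometry: a dense urban canyon and an open field produce statistically
    identical channels, which is exactly the realism gap described above.
    """
    rng = random.Random(seed)
    scale = 1 / (2 ** 0.5)  # split unit power across real and imaginary parts
    return [complex(rng.gauss(0, scale), rng.gauss(0, scale))
            for _ in range(num_ant)]

h = rayleigh_channel(8, seed=0)  # 8-antenna channel, reproducible via the seed
```

Because the model has no notion of buildings, streets or foliage, any two environments are interchangeable to it; only a geometry-aware method such as ray tracing can tell them apart.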
“We couldn’t just measure the world,” Morais says. “Every time you move a car or experience changes in weather, the signal environment changes. We needed a way to recreate those conditions with accuracy and control.”
That challenge led the team to ray tracing, a simulation method originally developed for computer graphics that models how light, or in this research radio waves, bounces, scatters and diffracts through complex 3D environments.
Simulating the wireless world
Their solution was to develop DeepMIMO, an open-source project led by the Wireless Intelligence Lab in collaboration with NVIDIA and Remcom to make high-fidelity wireless data generation accessible to anyone in the research community.
DeepMIMO consists of two foundational, publicly available components.
The first is a database of computer-simulated, ray-traced wireless environments in a standardized format that researchers can download and use immediately.
The second is a Python-based toolkit that converts the outputs of different simulation programs into data that connects directly with AI models and wireless-network simulators.
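Conceptually, that conversion step turns per-user ray lists into machine-learning-ready records, each pairing a channel with a location. The sketch below illustrates the idea for a small uniform linear array; the record layout loosely mirrors DeepMIMO's per-user output but is an assumption for illustration, not the package's exact schema:

```python
import cmath
import math

def channel_vector(rays, num_ant, spacing_wl, freq_hz):
    """Narrowband channel vector for a uniform linear array (ULA).

    Each ray is (amplitude, phase_rad, delay_s, aoa_rad): the gain, phase,
    delay and angle of arrival a ray tracer would report for one path.
    """
    h = []
    for a in range(num_ant):
        coeff = 0 + 0j
        for amp, phase, delay, aoa in rays:
            # Per-antenna phase: the delay term plus the ULA steering term.
            steer = 2 * math.pi * spacing_wl * a * math.sin(aoa)
            coeff += amp * cmath.exp(
                1j * (phase - 2 * math.pi * freq_hz * delay + steer))
        h.append(coeff)
    return h

def build_dataset(users):
    """Pack per-user channels and positions into AI-ready records
    (an illustrative layout, not DeepMIMO's exact published schema)."""
    return [
        {"channel": channel_vector(u["rays"], num_ant=4,
                                   spacing_wl=0.5, freq_hz=3.5e9),
         "location": u["location"]}
        for u in users
    ]

# One illustrative user with two rays and a known 3D position.
users = [
    {"rays": [(1.0, 0.0, 100e-9, 0.3), (0.2, 1.1, 160e-9, -0.8)],
     "location": (10.0, 4.0, 1.5)},
]
dataset = build_dataset(users)
print(len(dataset[0]["channel"]))  # 4 antenna coefficients per user
```

Scaling this idea to thousands of users across many ray-traced environments, in one standardized format, is what the actual toolkit automates.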

A workflow diagram of DeepMIMO, which unites advanced ray-tracing engines, developed through close partnerships with NVIDIA and Remcom, with an open and reproducible framework, expanding how wireless AI research can be shared and scaled. Photo courtesy of João Morais
“You can go from 3D geometry to machine learning data in minutes,” Morais says. “This process used to take days or even weeks.”
Before DeepMIMO, wireless AI research was fragmented. Researchers used different models and datasets, which made it difficult to compare or reproduce results.
“Two research groups could train their algorithms on completely different data and report different outcomes,” Morais says. “There was no shared foundation, and that made progress slower.”
DeepMIMO changed that.
Thanks to its standardized datasets and open distribution, the platform has been cited in more than 750 research papers, creating common ground for testing and benchmarking AI algorithms in wireless systems.
Umut Demirhan, an alum and former electrical engineering doctoral student in the Wireless Intelligence Lab, helped develop and maintain the platform and witnessed this transformation firsthand.
“The real reward was seeing the community’s focus shift,” Demirhan says. “Researchers could get right to work on developing their new solutions on the same data, which was precisely the impact we wanted to have.”
Morais adds that the result is like ImageNet, the large image database designed for object recognition research, but for wireless AI, enabling everyone to start from the same foundation and build upward together.
Connectivity is key
DeepMIMO doesn’t simply provide data; it connects directly to next-generation simulation tools. Through partnerships with Remcom and NVIDIA, the project integrates with Wireless InSite, Aerial Omniverse Digital Twin, or AODT, and Sionna, enabling researchers to prototype intelligent wireless systems in virtual replicas of real-world cities.
“DeepMIMO enables researchers to spend less time generating data and more time building models, which accelerates the pace of innovation,” Alkhateeb says.
This shift makes wireless AI research more inclusive. Anyone from a graduate student with a laptop to a company with cloud graphics processing units, or GPUs, can begin from the same foundation and scale up as needed.
DeepMIMO also supports a range of applications.
Researchers can use DeepMIMO in several ways to make wireless systems faster and more dependable. For example, it helps networks find the best direction to send signals and predict how those signals will travel, which improves speed and reliability. It can also match signals to precise locations, allowing technologies like autonomous vehicles and robots to know exactly where they are within just a few centimeters. And by predicting when buildings, vehicles or people might block a signal, DeepMIMO lets networks adjust and reroute connections before problems happen.
“The future of wireless is multi-modal,” Demirhan says. “Networks won’t just transmit data; they’ll sense the world around them.”
The Wireless Intelligence Lab is also working on a multi-modal expansion of DeepMIMO called DeepVerse6G.
“DeepVerse 6G is a digital twin platform that can be used to generate co-existing multi-modal sensing and communication datasets,” Alkhateeb says.
Pioneering tomorrow’s wireless networks
DeepMIMO sits at the intersection of several major trends shaping the next decade of wireless technology. By making realistic wireless data accessible, customizable and reproducible, it provides the foundation for AI-native networks.
As part of the AI-RAN Alliance, the ASU-led project is helping define the research standards for 6G and beyond. The team plans to expand the DeepMIMO database to thousands of environments, supporting the next generation of intelligent, adaptive communication systems.
“Our mission is simple,” Alkhateeb says. “We want to make wireless AI research faster, equitable and available to everyone.”
