The intersection of AI and robotics has reached an inflection point, poised to redefine entire industries. But to push these technologies forward, the tools we use to build and test them need to be just as groundbreaking. Eight years ago, we introduced AirSim to start the journey towards an AI-centric robotics simulator. Now, we’re taking it to a whole new level.
AirGen is a pivotal step forward in the evolution of simulation technology, expanding the possibilities for AI and robotics development.
More than just a simulator, AirGen is an ecosystem—an AI-first robotics suite that combines realistic environments, versatile robot representations, complex autonomy pipelines, and advanced capabilities for data generation and evaluation. Designed as the next evolution of AirSim, AirGen equips researchers and developers to push beyond conventional limits, setting a new standard for high-fidelity, scalable simulations.
Imagine training a drone to navigate a dense urban landscape in GPS-denied scenarios, enabling a forklift to operate autonomously in a bustling warehouse, or teaching an indoor robot to act safely in a household setting. With AirGen, these scenarios come to life in a seamless, adaptable platform built for the future of AI and robotics. Let's dive into some of the key features we've architected within AirGen, building on lessons learned from AirSim.
1. High-Fidelity Simulation for Diverse Robots
AirGen redefines robot simulation by expanding beyond the limited options in AirSim. While AirSim primarily supported just one default drone and one car, AirGen offers an extensive (and continually growing) library of distinct robots in the wheeled and aerial categories, representing a wide range of robotic platforms. These include not only drones but also agricultural robots, warehouse automation systems (like forklifts), cars, offroad vehicles, space rovers, and many more.
We work with leading OEMs to ensure that the simulated models accurately reflect the behavior and specifications of real-world vehicles. Whether it's simulating a drone with precision flight capabilities or a differential-drive robot navigating uneven terrain, AirGen offers robust support for diverse locomotion systems.
This breadth of robot categories means you can simulate warehouse robots navigating shelves, farming robots in complex outdoor environments, or even rover missions on rugged planetary surfaces. The ability to test a variety of robots under different conditions makes AirGen an indispensable tool for robotics engineers and AI researchers looking to experiment with multiple robotic platforms in realistic scenarios.
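To make this concrete, here is a minimal sketch of what connecting to two different platforms might look like from Python. The `airgen` module, client classes, and vehicle names below are illustrative assumptions modeled on AirSim's Python bindings, not the documented AirGen API:

```python
# Illustrative sketch only: the `airgen` module and the class/method names
# are assumptions modeled on AirSim-style Python bindings, not the
# documented AirGen API.
import airgen  # hypothetical Python package

# An aerial platform from the robot library.
drone = airgen.MultirotorClient()
drone.confirmConnection()
drone.enableApiControl(True)

# A wheeled platform, e.g. a warehouse forklift, addressed by vehicle name.
ground = airgen.CarClient()
ground.confirmConnection()
ground.enableApiControl(True, vehicle_name="Forklift1")  # name is illustrative
```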
2. Wide Variety of Environments: Geo-specific and Geo-typical
One of the most transformative features of AirGen is its support for both geo-specific and synthetic environments. For geo-specific simulations, AirGen integrates data from sources such as Bing Maps, Google Maps, and USGS DEM data. AirGen integrates tightly with the Cesium platform for geospatial data and supports the standard 3D Tiles format, which allows users to georeference their own data.
In addition to geo-specific environments, AirGen offers an extensive library of geo-typical or synthetic scenes. Drawing from a wide variety of asset sources as well as providing support for datasets such as Matterport, AirGen provides access to a diverse and ever-growing array of virtual scenes, including urban areas, residential spaces, rugged landscapes, and industrial settings. This expansive collection not only covers a wide range of use cases but also ensures flexibility in simulation needs, allowing for scenario-based training and testing.
To accommodate those with custom environments, AirGen is also available as a plugin-only package, allowing users to integrate their own Unreal Engine environments.
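As a rough sketch of how switching between environment types might look from client code, here is a hypothetical Python snippet; the function names, parameters, and scene identifier are assumptions for illustration, not the documented AirGen API:

```python
# Hypothetical sketch: these calls are assumptions for illustration,
# not the documented AirGen API.
import airgen  # hypothetical Python package

client = airgen.VehicleClient()
client.confirmConnection()

# Geo-specific: stream a real-world location via a Cesium/3D Tiles layer.
client.simLoadGeoLocation(latitude=47.6062, longitude=-122.3321)  # assumed call

# Geo-typical: load a synthetic scene from the built-in library.
client.simLoadScene("warehouse_small")  # assumed scene name
```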
Built on Unreal Engine 5, AirGen leverages cutting-edge features like Nanite for scalable detail and Lumen for realistic lighting, while frameworks like NVIDIA DLSS on Windows enhance rendering efficiency. Compared to AirSim, AirGen also contains a completely revamped sky and weather system, introducing more realistic particle effects, full 3D volumetric clouds, geo-specific sun/moon/stars simulation, and more. This flexibility and advanced rendering lend themselves well to the creation of rich simulation scenarios and datasets in virtually any setting.
3. Built-In Autonomy Stack
Effectively training robots to learn from their environments requires the implementation of a rich autonomy stack in simulation to solve mapping, path planning, and control. Robots are expected to sense and navigate through complex terrains and cluttered spaces without colliding with obstacles or entering unsafe areas.
AirGen has the ability to automatically build and store 2D/3D occupancy map representations for entire maps, which the robots can leverage at runtime for an understanding of spatial relationships and potential obstacles. This level of detail unlocks an accurate geometric representation of the real world, which is crucial for training effective AI models. In contrast to AirSim, which often required reliance on external mapping and planning tools, AirGen integrates these capabilities directly into the simulation environment.
This seamless integration simplifies the workflow for developers, providing a unified platform to simulate safe behaviors without the need for external dependencies. We leverage Unreal Engine's navmesh technology and augment it with efficient representations such as sparse voxel octrees to achieve both 2D and 3D mapping. The navmeshes intelligently adapt to different robot sizes and parameters such as the maximum slope a robot can climb: for example, an offroad vehicle can traverse terrain that a small robot cannot, so the two have different navigable areas. For researchers focused on optimal path planning and navigation, AirGen also incorporates signed distance fields, offering a powerful tool for advanced trajectory optimization.
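As a sketch of what querying such a map representation could look like from client code (the method names here are assumptions in the style of AirSim's RPC API, not AirGen's documented interface):

```python
# Hypothetical sketch: method names are assumptions modeled on AirSim-style
# RPC bindings, not AirGen's documented interface.
import airgen  # hypothetical Python package

client = airgen.VehicleClient()
client.confirmConnection()

# Request a 2D occupancy grid of the current map at 0.5 m per cell.
grid = client.simGetOccupancyGrid(resolution=0.5)  # assumed call

# Check a candidate waypoint against the map before commanding the robot.
if grid.is_free(x=12.0, y=-4.5):  # assumed helper
    print("Waypoint lies in free space")
```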
On top of these maps, AirGen contains built-in optimal path planners such as A* as well as 3D trajectory generation techniques such as minimum snap optimization, ensuring that robots can achieve both path and trajectory planning in complex, cluttered environments. Whether you're developing a delivery drone that must navigate urban landscapes or a ground robot operating in warehouses, AirGen's trajectory planning capabilities allow all of its supported form factors to navigate efficiently and safely.
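AirGen ships these planners built in, but to make the core idea concrete, here is a small self-contained A* planner over a toy 2D occupancy grid. This is a from-scratch illustration of the technique, not AirGen's implementation:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid where grid[r][c] == 1 means occupied.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]   # (f-score, cell), kept as a min-heap
    came_from = {}
    g = {start: 0}            # cost-so-far per cell
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]      # walk back through predecessors
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    came_from[(nr, nc)] = cur
                    g[(nr, nc)] = ng
                    # Manhattan distance: admissible for 4-connected moves.
                    f = ng + abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(open_set, (f, (nr, nc)))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```

In AirGen the occupancy grid would come from the simulator itself (as in the previous sketch), and the resulting path could then be smoothed into a dynamically feasible trajectory via minimum snap optimization.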
4. Generative AI for Simulation
Generative AI techniques have found widespread usage in language and vision, and simulation is no different. They have the potential to transform how simulations are built, which is particularly interesting in the context of robotics, given the need for diversity while simulating scenarios for robotics use cases. Instead of manually creating every detail of an environment, robot characteristics, or sensors, generative models can dynamically produce diverse objects, arrangements, and even entire environments, making it easier to simulate real-world scenarios and introduce the right level of complexity.
To support such workflows, AirGen supports importing glTF meshes even at runtime, allowing users to bring real-world objects and environments directly into the sim in a dynamic fashion. Moving beyond objects, AirGen can also operate natively within scenes built with neural rendering techniques such as NeRFs or Gaussian Splatting, which can manifest as glTFs or splat PLYs, among other formats. From individual objects to large scene-scale assets, this means deploying your robots in true-to-life settings. Below, you can see a video of a drone in AirGen navigating within a world built using Gaussian Splatting.
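As an illustration of what runtime asset import might look like (the spawn call and its signature below are assumptions, loosely modeled on AirSim's `simSpawnObject`, not the documented AirGen API):

```python
# Hypothetical sketch: the spawn call and its signature are assumptions,
# loosely modeled on AirSim's simSpawnObject, not the documented AirGen API.
import airgen  # hypothetical Python package

client = airgen.VehicleClient()
client.confirmConnection()

# Import a glTF asset while the simulation is running and place it in the
# scene, e.g. a scanned real-world object to act as an obstacle.
pose = airgen.Pose(airgen.Vector3r(10.0, 2.0, 0.0))
client.simSpawnObjectFromFile(
    object_name="scanned_pallet",   # illustrative name
    file_path="assets/pallet.glb",  # illustrative path
    pose=pose,
    scale=1.0,
)
```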
Generative AI techniques are also particularly relevant for sensor simulation. For example, sensors like LiDAR or radar have a deep dependency on the physical characteristics of materials, which makes them hard to emulate; traditional simulations often rely on highly simplistic models or heuristics. Our research paper on neural sensor modeling dives deeper into this and shows how it significantly improves sensor accuracy.
5. Large Scale Data Generation
Scaling data across the axes of quality, quantity, and diversity is key for creating general-purpose robot intelligence. AirGen excels in large-scale data generation, leveraging a highly parallelizable and cloud-centric infrastructure. With the ability to simulate a vast array of environments and a rich array of robots—including drones, agricultural robots, and warehouse vehicles—AirGen ensures that users can create diverse datasets tailored to various scenarios. For intensive workloads such as data generation, ease of use is as important as the feature richness of the platform. AirGen contains an extensive and easy-to-invoke API for not only robot perception and navigation but also environment manipulation: from adjusting object positions and scale to fine-grained weather control, retexturing objects for domain randomization, and more.
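For a flavor of what such a data-generation script might look like (the weather and randomization calls are modeled on AirSim's API, e.g. `simSetWeatherParameter` and `simSwapTextures`, but their exact AirGen names are assumptions):

```python
# Hypothetical sketch: calls are modeled on AirSim-style APIs; their exact
# AirGen names are assumptions.
import random

import airgen  # hypothetical Python package

client = airgen.VehicleClient()
client.confirmConnection()

# Fine-grained weather control.
client.simEnableWeather(True)
client.simSetWeatherParameter(airgen.WeatherParameter.Rain, 0.4)  # 0..1 intensity

# Domain randomization: jitter an object's pose and swap its texture.
pose = client.simGetObjectPose("Crate_07")  # illustrative object name
pose.position.x_val += random.uniform(-1.0, 1.0)
client.simSetObjectPose("Crate_07", pose)
client.simSwapTextures("crate", tex_id=random.randint(0, 9))
```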
The parallelizable architecture allows for the simultaneous execution of multiple simulations, dramatically increasing the volume of synthetic data generated. This combination of extensive environmental diversity, multiple robotic platforms, and scalable infrastructure makes AirGen an indispensable tool for researchers aiming to train robotics foundation models.
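As a sketch of how such parallel collection could be orchestrated from Python (the client API and the 41451 base port are assumptions carried over from AirSim, where 41451 is the default RPC port):

```python
# Sketch of fanning data collection out across several simulator instances,
# one per RPC port. The airgen client API and the 41451 base port are
# assumptions carried over from AirSim.
from multiprocessing import Pool

import airgen  # hypothetical Python package

PORTS = [41451, 41452, 41453, 41454]  # one running simulator per port

def collect(port: int) -> int:
    """Connect to one simulator instance and grab a batch of camera frames."""
    client = airgen.MultirotorClient(port=port)
    client.confirmConnection()
    frames = 0
    for _ in range(100):
        client.simGetImage("front_center", airgen.ImageType.Scene)
        frames += 1
    return frames

if __name__ == "__main__":
    with Pool(len(PORTS)) as pool:
        counts = pool.map(collect, PORTS)
    print(f"Collected {sum(counts)} frames across {len(PORTS)} instances")
```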
6. Cloud-Ready with Real-Time Streaming
AirGen is designed for the cloud, making it easy to scale simulations across multiple instances and take advantage of the latest in cloud computing. Whether you're running on local hardware or deploying in the cloud, AirGen supports real-time streaming using Unreal Engine's Pixel Streaming technology, creating a low-latency pipeline between your local setup and remote cloud infrastructure. AirGen is also built to be served as a containerized workload if required, making it cloud-agnostic and allowing you to flexibly spin up simulation workloads on a wide variety of compute resources.
This feature is critical for projects that need to scale quickly or require high-fidelity simulations to be run across distributed environments. AirGen can integrate seamlessly with cloud services, allowing you to generate data, train models, and test in parallel across multiple machines.
7. Highly Accessible Robot Development Suite - No installation needed!
Finally, AirGen is not just packed with powerful features—it’s also built for maximum accessibility through the GRID platform. As part of GRID, AirGen is entirely cloud-based, meaning you can jump into high-fidelity robotics simulation from anywhere with no installations, dependencies, or complex setup required. All the power of AirGen’s cutting-edge capabilities is available instantly in the cloud, removing the traditional barriers to simulation and enabling seamless access for researchers and developers alike.
This cloud-based approach means you can witness your robots in action across any environment of interest, from realistic, geo-specific landscapes to richly diverse synthetic scenes, or even your own custom Unreal Engine environments. With GRID, AirGen becomes a complete solution for generating vast amounts of synthetic data, training or invoking AI models, and evaluating existing models or autonomy stacks—all in one unified, cloud-accessible ecosystem.
GRID’s infrastructure supports parallel simulations and real-time visualization, ensuring you get high-quality, efficient simulation without the need for specialized hardware. The entire platform is designed to be flexible, scalable, and immediately ready for designing, testing, and pushing the boundaries of autonomous robotics, all without a single installation.
Pushing the Boundaries of AI-Centric Simulation
AirGen, as a part of the GRID platform, is pushing the boundaries of simulation technology, empowering AI researchers and robotics engineers to work within environments that are as dynamic and diverse as the real world. By offering a diverse set of robots, high-fidelity data generation capabilities, and integration with cloud and generative AI, AirGen enables the creation of realistic, AI-ready scenarios that will drive the next generation of robotic innovation.