HoME is a platform for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context.
HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale that may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting.
- 3D visual renderings of 45,000 houses based on Panda3D
- 3D acoustic renderings based on EVERT, using ray-tracing for high-fidelity audio
- Semantic image segmentations and language descriptions of objects
- Physics simulation based on Bullet, handling collisions, gravity, agent-object interaction, and more
- Multi-agent support
- A Python framework integrated with OpenAI Gym
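Because HoME follows the OpenAI Gym interface, interacting with it looks like any Gym reset/step loop. The sketch below illustrates that contract with a self-contained stub environment; `StubHouseEnv`, its action names, and the observation fields are illustrative assumptions, not HoME's actual API. With HoME installed, you would obtain the environment through `gym.make` instead.

```python
import random

class StubHouseEnv:
    """Toy stand-in exposing the Gym reset/step contract (not HoME's real API)."""
    ACTIONS = ["forward", "turn_left", "turn_right"]  # hypothetical action set

    def __init__(self, max_steps=10):
        self.max_steps = max_steps
        self.steps = 0

    def reset(self):
        # Return an initial observation; HoME would provide RGB frames,
        # audio, segmentations, etc. Placeholders used here.
        self.steps = 0
        return {"rgb": None, "audio": None}

    def step(self, action):
        # Advance the simulation one tick and return (obs, reward, done, info).
        self.steps += 1
        obs = {"rgb": None, "audio": None}
        reward = 1.0 if action == "forward" else 0.0  # toy reward
        done = self.steps >= self.max_steps
        return obs, reward, done, {}

# Standard Gym-style episode loop with a random policy.
env = StubHouseEnv()
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = random.choice(StubHouseEnv.ACTIONS)
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

The same loop works unchanged against any Gym-compatible environment, which is what makes it straightforward to plug existing reinforcement-learning code into HoME.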
To get started, check out the project on GitHub.