HANGZHOU, China, Aug. 27, 2025 /PRNewswire/ — Manycore Tech, a global leader in 3D interior design technology and spatial computing, announced major advancements in its spatial AI model system with the release of SpatialLM 1.5, a next-generation spatial language model, and SpatialGen, a novel spatial generation model. The company also shared its roadmap for open-sourcing these models, marking a significant step forward in democratizing access to advanced 3D scene understanding and generation technologies.
Generation in Action: SpatialGen & SpatialLM 1.5 Showcase
As the industry’s first large model system designed specifically for the understanding and generation of 3D indoor environments, Manycore Tech’s spatial AI models demonstrate remarkable capabilities in producing photorealistic, interactive, and structurally coherent virtual spaces — addressing key challenges in robotics training, immersive media, and AI-generated content.
In addition to the open-source models, Manycore Tech's AI team unveiled an experimental AI video generation solution powered by SpatialGen, aiming to tackle the long-standing problem of maintaining spatiotemporal consistency across generated video content — a critical hurdle in the broader adoption of generative AI in visual media.
SpatialLM 1.5: One Prompt, One Structured 3D Scene — Solving Robotics Training Data Scarcity
SpatialLM 1.5 is a spatial language model built on a large language model (LLM) architecture, enabling users to interactively generate entire 3D scenes through natural language commands via the SpatialLM-Chat interface. Unlike conventional LLMs, which struggle to understand physical geometry and spatial relationships, SpatialLM 1.5 interprets textual instructions and outputs what the company calls “spatial language” — structured output that encodes spatial layouts, object relationships, and physical parameters.
For example, when given a simple text prompt such as “a cozy living room with a sofa near the window,” SpatialLM 1.5 can automatically generate a structured scene script, intelligently match appropriate furniture models, arrange them in a coherent layout, and allow further refinement or querying through conversational interactions.
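To make the idea concrete, a structured scene script of this kind might look something like the sketch below. The schema, field names, and units are hypothetical illustrations; Manycore Tech has not published the exact format SpatialLM 1.5 emits.

```python
# Hypothetical sketch of a "spatial language" scene script for the
# prompt above. The schema and field names are illustrative only;
# the actual SpatialLM 1.5 output format has not been published.
scene_script = {
    "room": {"type": "living_room", "width_m": 4.5, "depth_m": 3.8, "height_m": 2.8},
    "objects": [
        {
            "id": "window_01",
            "category": "window",
            "wall": "south",
            "position_m": [1.0, 0.9, 0.0],  # x, y (height), z in room coordinates
        },
        {
            "id": "sofa_01",
            "category": "sofa",
            "position_m": [1.2, 0.0, 0.8],
            "rotation_deg": 180,
            # Explicit spatial relation the model can reason over and verify
            "relations": [{"type": "near", "target": "window_01", "max_dist_m": 1.5}],
        },
    ],
}
```

Because output in this style is structured rather than pixel-based, downstream tools could in principle validate constraints (for example, that the sofa really sits within the stated distance of the window) or swap in matched furniture models programmatically.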
More importantly, the scenes generated by SpatialLM 1.5 are rich in physically accurate, structured information and can be rapidly produced at scale — making them highly valuable for applications in robotics, including path planning, obstacle avoidance, and task execution training. This development directly addresses the current bottleneck in robotics: the lack of diverse, high-quality training data.
During the event, Dr. Zihan Zhou, Chief Scientist at Manycore Tech, demonstrated a compelling use case in eldercare robotics. When prompted with the command “Go to the dining table in the living room and get the medicine,” the model not only correctly identified the relevant objects but also autonomously invoked the appropriate tool to plan the optimal navigation path — showcasing its potential to perform complex tasks in real-world home environments.
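A tool-calling flow of this kind could be sketched as follows. Every name here is a placeholder, and the straight-line planner is a deliberate simplification; the press release does not document SpatialLM's actual tool interface.

```python
# Minimal sketch, assuming a structured scene like the one above, of
# how a spatial language model might dispatch a navigation tool call.
# All function names and the trivial planner are hypothetical.
from typing import List, Tuple

def plan_path(scene: dict, start: Tuple[float, float],
              goal_id: str) -> List[Tuple[float, float]]:
    """Placeholder planner: head straight for the goal object.
    A real planner would route around obstacles in the scene."""
    goal = next(o for o in scene["objects"] if o["id"] == goal_id)
    x, _, z = goal["position_m"]
    return [start, (x, z)]

# Step 1: the model grounds "the dining table in the living room"
# to an object ID in its structured scene representation.
scene = {"objects": [{"id": "dining_table_01", "category": "dining_table",
                      "position_m": [2.0, 0.0, 1.5]}]}

# Step 2: it invokes the planning tool with that grounded target.
waypoints = plan_path(scene, start=(0.0, 0.0), goal_id="dining_table_01")
print(waypoints)  # [(0.0, 0.0), (2.0, 1.5)]
```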
The previous iteration, SpatialLM 1.0, which was open-sourced in March of this year, quickly climbed into the top three on Hugging Face's trending models list. Startups have already leveraged its codebase and architecture to develop their own proprietary models, highlighting the impact and scalability of Manycore Tech's open-source initiatives.
SpatialGen: Cracking the Spatiotemporal Consistency Code for Immersive 3D Worlds
While SpatialLM focuses on understanding and interaction, SpatialGen is engineered for generation and visualization.
Built on a diffusion model architecture, SpatialGen is a multi-view image generation model capable of creating spatially and temporally consistent views from text prompts, reference images, and 3D spatial layouts. These outputs can then be further processed into 3D Gaussian Splatting (3DGS) scenes and rendered into immersive, explorable videos.
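At a high level, the pipeline described here runs conditioning inputs through multi-view diffusion, reconstructs a 3DGS scene from the generated views, and renders that scene along a camera path. The stub functions below are placeholders for illustration, not SpatialGen's published API; see the repository links at the end of this release for the actual interface.

```python
# High-level sketch of the described pipeline: conditioning inputs
# -> multi-view images -> 3D Gaussian Splatting (3DGS) scene ->
# walkthrough video. All functions are illustrative stubs, not
# SpatialGen's actual API.

def generate_views(prompt: str, layout: dict, camera_poses: list) -> list:
    """Multi-view diffusion: one image per camera pose, with object
    positions and appearance kept consistent across poses."""
    return [f"view_{i}.png" for i in range(len(camera_poses))]  # stub

def reconstruct_3dgs(views: list, camera_poses: list) -> dict:
    """Fit a 3D Gaussian Splatting scene to the generated views."""
    return {"num_gaussians": 1_000_000}  # stub

def render_walkthrough(gaussian_scene: dict, trajectory: list) -> str:
    """Render the 3DGS scene along a camera trajectory into a video."""
    return "walkthrough.mp4"  # stub

poses = [{"yaw_deg": a, "position_m": [2.0, 1.6, 2.0]} for a in (0, 90, 180, 270)]
views = generate_views("a cozy living room", layout={}, camera_poses=poses)
scene = reconstruct_3dgs(views, poses)
video = render_walkthrough(scene, trajectory=poses)
```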
Leveraging Manycore Tech’s vast repository of indoor 3D scene data and advanced multi-view diffusion techniques, SpatialGen ensures that objects maintain accurate spatial attributes and physical relationships across different camera angles — a common pain point in traditional generative models.
The resulting 3D Gaussian scenes and hyper-realistic holographic walkthrough videos allow users to virtually navigate and explore the generated environments as if they were physically present — delivering a deeply immersive experience.
“Today, AI-generated video and image tools have sparked a wave of content creation among the general public. However, due to persistent challenges with spatiotemporal consistency, true commercial viability remains limited,” said Long Tianze, Director of AI Products at Manycore Tech. “We’re developing a 3D-native AI video generation product, planned for release later this year — potentially the world’s first AI video generation agent deeply integrated with 3D capabilities. By unifying 3D rendering and video enhancement into a single pipeline, we aim to significantly bridge the gap in spatial coherence that plagues current AIGC video tools.”
He added that most existing AI video generation tools suffer from issues like object misalignment, illogical spatial arrangements, and incorrect occlusions during viewpoint transitions — primarily because they rely on 2D image or video datasets that lack an inherent understanding of 3D structure and physics.
Both models announced at the event will be progressively open-sourced on platforms such as Hugging Face, GitHub, and ModelScope. The spatial generation model SpatialGen is already available for download, while the spatial language model SpatialLM 1.5 will be released later together with the SpatialLM-Chat interface.
Links:
Hugging Face: https://huggingface.co/manycore-research/SpatialGen-1.0
GitHub: https://github.com/manycore-research/SpatialGen
ModelScope: https://modelscope.cn/models/manycore-research/SpatialGen-1.0
About Manycore Tech:
Founded in 2011, Manycore Tech is committed to becoming a global provider of spatial intelligence services.
Manycore Tech has established a complete technological flywheel of “spatial data – spatial large models – spatial editing tools”, which is widely applied in scenarios such as 3D spatial design, e-commerce 3D AI design, industrial digital twins, and intelligent agent training, accelerating the integration of AI into the physical world.
Manycore Tech owns a suite of products, including the spatial design software Kujiale, its overseas version Coohom, the spatial intelligence platform SpatialVerse, and the spatial design BIM software KuSpace, with products covering over 200 countries and regions worldwide. Additionally, Manycore Tech has independently developed a multi-modal spatial large model with over 10 billion parameters and has open-sourced its spatial language model SpatialLM to the world, contributing to the intelligent upgrading of AI in the physical-space domain.
SOURCE Manycore Tech Inc.