Palo Alto, CA, February 6, 2026, Chainwire
ZenO provides access to egocentric audio, video, and image data captured from smart glasses and smartphones to power the next generation of physical AI systems.
ZenO today announced the public beta launch of its platform designed to collect, anonymize, and structure real-world first-person data to train physical AI systems such as robots, autonomous agents, and embodied models. ZenO uses Story’s Layer-1 blockchain as its core infrastructure.
This launch comes at a moment when physical AI is moving from research to production, but the data needed to power those systems is not keeping pace. Robots trained on scraped web data or simulations struggle with routine tasks that humans perform intuitively. ZenO addresses this gap by collecting first-person real-world data—what people actually see, hear, and do—to create a new foundation for training AI systems deployed at scale.
Training on real, first-person data allows physical AI models to better perceive their environment, generalize to unpredictable conditions, and be fine-tuned to perform tasks more accurately and reliably once deployed in the real world.
ZenO also recently joined NVIDIA Inception, a global program that supports startups building advanced AI technologies. Through the program, ZenO gains access to NVIDIA’s GPU ecosystem, technical expertise, cloud infrastructure benefits, and launch resources to accelerate development of its physical AI data network. This support will help ZenO scale the enterprise-grade permissioned data infrastructure needed to train robotics and embodied AI systems that operate in complex physical environments.
The beta builds on ZenO’s existing MVP, currently live at https://app.zen-o.xyz/, and focuses on validating the end-to-end product flow for real-world data collection, from capture through quality assurance and anonymization. The beta is expected to run for approximately 6-8 weeks.
Unlike synthetic data or scraped online content, physical AI systems require egocentric data—data from what humans actually see, hear, and do in their real-world environments. ZenO allows contributors to capture continuous first-person audio, video, and images using ZenO-branded smart glasses or mobile phones, following a clearly defined data collection mission.
How ZenO Beta Works
During the beta period, users can:
- Capture real-world audio, video, and images from a first-person perspective.
- Upload data via the ZenO application for automated formatting and integrity checks.
- Have submissions reviewed through a multi-step QA process, including AI-based screening and human review.
- Have sensitive information, including faces and identifiable text, automatically anonymized.
After anonymization, contributors add structured metadata that describes the actions and environments within the video. Approved datasets are securely stored and cataloged within ZenO’s data marketplace infrastructure.
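For illustration, a structured metadata record of this kind might look like the following Python sketch. The schema and field names here are assumptions made for the example, not ZenO’s published format.

```python
# Hypothetical sketch of the structured metadata a contributor might attach
# to an approved, anonymized clip. All field names are illustrative
# assumptions, not ZenO's actual schema.
from dataclasses import dataclass, field

@dataclass
class ClipMetadata:
    clip_id: str                  # identifier assigned after QA approval
    activity: str                 # action shown in the clip
    environment: str              # setting in which it was captured
    device: str                   # "smart_glasses" or "smartphone"
    duration_s: float             # clip length in seconds
    anonymized: bool = True       # faces and identifiable text redacted
    tags: list[str] = field(default_factory=list)

record = ClipMetadata(
    clip_id="clip_0001",
    activity="making coffee",
    environment="home kitchen",
    device="smart_glasses",
    duration_s=94.5,
    tags=["kitchen", "appliance-use", "hands-visible"],
)
print(record)
```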
Incentives and Contributor Rewards
ZenO uses a two-tier incentive model for contributors.
- Immediate rewards: contributors earn XP for collecting data during the beta phase.
- Profit sharing: if contributed data is sold, contributors receive a portion of the downstream sales in stablecoins.
This structure aligns contributor incentives with long-term data quality and commercial demand rather than one-off labeling efforts.
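As a minimal illustration of how the two tiers could combine, the sketch below assumes a hypothetical 30% revenue-share rate; ZenO has not published the actual split.

```python
# Minimal sketch of the two-tier payout: XP at collection time, plus a
# stablecoin share of any downstream sale. The 30% rate is a placeholder
# assumption, not a published figure.
def contributor_payout(xp_earned: int, sale_price_usd: float,
                       contributor_share: float = 0.30) -> dict:
    return {
        "xp": xp_earned,  # immediate reward during the beta phase
        "stablecoin_usd": round(sale_price_usd * contributor_share, 2),
    }

print(contributor_payout(xp_earned=150, sale_price_usd=1_000.0))
# -> {'xp': 150, 'stablecoin_usd': 300.0}
```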
Hardware and Capture Options
ZenO’s smart glasses are manufactured through OEM partnerships and launched under the ZenO brand with specifications similar to leading consumer smart glasses. These glasses support audio and video capture, hands-free operation, and all-day wearability. Contributors can also participate using their smartphones, depending on mission requirements.
On-Chain Foundation and Future Roadmap
During the beta period, ZenO will record wallet signature consent and data identifiers on-chain, creating a verifiable record of contributor approval and dataset provenance. Full IP and data rights management functionality will be available in a future release.
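A rough sketch of the kind of record this implies follows, assuming a SHA-256 content hash serves as the dataset identifier; the field names are illustrative, and the actual signing and on-chain submission would go through the contributor’s wallet and Story’s tooling.

```python
# Sketch of a consent-and-provenance record of the kind described above:
# a contributor's wallet-signed consent paired with a content hash acting
# as the dataset identifier. Values shown are placeholders.
import hashlib
import json
import time

def build_consent_record(wallet_address: str, dataset_bytes: bytes,
                         signature_hex: str) -> dict:
    dataset_id = hashlib.sha256(dataset_bytes).hexdigest()  # provenance anchor
    return {
        "contributor": wallet_address,
        "dataset_id": dataset_id,
        "consent_signature": signature_hex,  # produced off-chain by the wallet
        "timestamp": int(time.time()),
    }

record = build_consent_record(
    wallet_address="0x0000000000000000000000000000000000000000",  # placeholder
    dataset_bytes=b"<anonymized clip payload>",
    signature_hex="0x...",  # placeholder wallet signature
)
print(json.dumps(record, indent=2))
```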
ZenO’s long-term roadmap includes recording metadata and licensing information for user-generated datasets on Story, enabling programmable data permissions, transparent licensing, and automated revenue distribution for AI training data.
“The real world is not like the Internet,” said Saebyeok Kim, Co-founder of ZenO. “Physical AI systems require high-quality, permissioned, first-person data captured in real-world environments. This beta demonstrates the foundation for how that data can be collected, structured, and used to train models that actually work outside the lab.”
ZenO is currently working with initial data demand partners and will share traction metrics after the beta period.
To learn more or participate in the beta, visit: https://zen-o.xyz.
About ZenO
ZenO is a physical AI data collection platform focused on capturing real-world first-person (POV) human behavior for training robotics and embodied AI systems. ZenO lets contributors use smart glasses and smartphones to upload video and image data generated from their daily activities. The platform is designed to enable scalable, compliant real-world data collection for the next generation of physical AI systems.
About Story
Story is a Layer-1 blockchain network designed to serve as the provenance, licensing, and economic layer for AI data and models. Powered by the $IP token, Story lets creators register datasets, models, and AI-generated outputs as intellectual property, license them programmatically, and monetize them through embedded attribution.
Backed by $136 million from a16z crypto, Polychain Capital, and Samsung Ventures, Story launched its mainnet in February 2025 and is building the foundational infrastructure for the AI economy. By making IP fundamental to the data and model lifecycle, Story provides the trust and economic rails AI systems need to scale responsibly across enterprises, developers, and global markets.
Contact
Communications Director
H.V.
Story
(email protected)
