Building verifiable, decentralised AI for the world: Inside NTU and Zero Gravity’s ambitious new research partnership

When Zero Gravity (0G) and Nanyang Technological University, Singapore (NTU), announced a new joint research hub for decentralised AI, the move signalled a decisive shift in how next-generation AI systems may be built, verified, and governed.

The S$5 million initiative is 0G’s first university collaboration globally, aiming to address one of the most pressing problems in modern artificial intelligence: making AI systems more accountable, transparent, and genuinely open to public scrutiny.

The partnership arrives at a moment when Singapore is strengthening its position as a global leader in digital trust, infrastructure resilience, and responsible AI development. As AI models grow larger and more complex, both organisations believe the world can no longer afford to accept opaque, untraceable systems controlled by only a few dominant players.

Why AI needs to break away from centralised black boxes

0G co-founder and CEO Michael Heinrich

Photo: NTU

For 0G co-founder and CEO Michael Heinrich, the motivation is blunt. “AI must be a public good, not a private monopoly,” he says. He argues that the current landscape, dominated by a handful of companies with enormous control over models and data, creates systemic risk. These risks include hidden biases, unverifiable outputs, and opaque training sources, all of which erode trust in the system. 

Heinrich believes the industry is at an inflection point. “If we do not build an open, decentralised foundation now, the current system will be locked in,” he explains. For him, decentralisation is not a fashionable add-on. It is the only way to make AI safer and more widely accessible.

This is why 0G has built a modular blockchain architecture optimised specifically for AI. The system supports what Heinrich calls “verifiable AI”, where every model, dataset, and output is recorded on chain. “It creates an immutable birth certificate for every model,” he says. If a model claims to use a certain dataset or alignment policy, that claim can be checked rather than accepted on faith.

Turning academic theory into real-world infrastructure

Dr Ming Wu, co-founder and CTO of 0G

Photo: NTU

While Heinrich champions the philosophy, his co-founder and Chief Technology Officer (CTO), Dr Ming Wu, focuses on the engineering realities. “It is one thing to write a whitepaper on ethical AI or decentralised governance. It is another thing entirely to build the economically viable infrastructure that makes it possible,” Dr Wu says. 

He views academic institutions as the breeding ground for the long-horizon ideas the industry needs. Proof-of-Useful-Work, verifiable computation, secure provenance, model alignment: these are not weekend hacks but fundamental research challenges.

“With NTU, we are turning research into reference implementations,” he adds. The hub will work on verifiable inference toolchains, blockchain-integrated alignment frameworks, and Proof-of-Useful-Work protocols that can run on real nodes. “We want to bridge the gap between papers and prototypes, and from prototypes to SDKs and governance.”

NTU’s perspective aligns closely with this. According to Associate Professor Zhang Tianwei, College of Computing and Data Science, NTU Singapore, Singapore’s AI ecosystem has matured to the point where the next frontier is not simply capability, but accountability and openness. “A decentralised, blockchain-enabled foundation ensures that computation and data can be shared, verified, and rewarded transparently,” Zhang explains. 

This, in his view, is why 0G, with its focus on scalable decentralised AI infrastructure, is the right partner for this moment.

How verifiable AI works in practice

Associate Professor Zhang Tianwei, College of Computing and Data Science, Nanyang Technological University, Singapore

Photo: NTU

Dr Wu describes verifiable AI as a supply chain for data and intelligence. Every piece of training data is cryptographically hashed and stored on chain. The model parameters and weights are also registered. Each inference request and its output are then recorded in a privacy-preserving way.

“This creates an unbroken, auditable evidence trail,” he says. For users, it means confidence that an answer has not been tampered with. For regulators, it allows audits of whether a model was trained on biased data. For developers, it means an AI agent can be trusted because its history is verifiable.

This flips AI from a “trust me” paradigm to a “verify me” one.
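
To make the pattern concrete, here is a minimal Python sketch of such an evidence trail. It is illustrative only: the ledger class, field names, and hashing choices are hypothetical stand-ins rather than 0G’s actual interfaces, but the shape, anchoring dataset, model, and inference digests in an append-only log, follows what Dr Wu describes.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field

def fingerprint(payload: bytes) -> str:
    """Content hash used as an identifier (SHA-256 here, purely for illustration)."""
    return hashlib.sha256(payload).hexdigest()

@dataclass
class ProvenanceLedger:
    """Append-only log standing in for the on-chain registry."""
    records: list = field(default_factory=list)

    def register(self, kind: str, digest: str, meta: dict) -> None:
        self.records.append(
            {"kind": kind, "digest": digest, "meta": meta, "ts": time.time()}
        )

ledger = ProvenanceLedger()

# 1. Hash each piece of training data and anchor the digest.
data_digest = fingerprint(b"training-corpus-v1")
ledger.register("dataset", data_digest, {"name": "corpus-v1"})

# 2. Register the model weights against the dataset they claim to use.
weights_digest = fingerprint(b"model-weights-bytes")
ledger.register("model", weights_digest, {"trained_on": data_digest})

# 3. Record each inference: only hashes enter the log, not the raw prompt or answer.
request, output = b"user prompt", b"model answer"
ledger.register("inference", fingerprint(request + output),
                {"model": weights_digest, "request": fingerprint(request)})

# An auditor can recompute any hash from the original artefact and compare.
assert ledger.records[0]["digest"] == fingerprint(b"training-corpus-v1")
print(json.dumps(ledger.records, indent=2))
```

Because only digests are logged, an auditor can recompute a hash from the original artefact and confirm it matches, without the log itself ever exposing the raw data.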

Why blockchain still matters for AI

The official opening of the research hub

Photo: NTU

Some argue blockchain’s hype has faded. Heinrich disagrees. For him, the end of the hype cycle is an advantage. “Blockchain is not exciting because it is blockchain. It is exciting because of what you can build on top of it,” he says.

He rejects the misconception that blockchain is only for finance. “That is like saying the internet was just for email.” He sees blockchain as the natural substrate for global coordination and verification, something AI urgently needs.

Without this neutral layer, AI recentralises around whichever entity owns the most servers, which undermines accountability. A decentralised ledger ensures enforceable provenance and public verifiability, both of which Heinrich believes are essential for future AI infrastructure. 

If every AI action is recorded on a ledger, who controls it? Dr Wu says the answer lies in the network design. “The ledger is validated by a decentralised community of node operators,” he explains. There are two safeguards: permissionless participation and incentive alignment.

0G’s system uses distinct roles such as data attesters, compute providers, and verifiers. These roles cross-check each other and can be penalised for dishonesty. “No single actor has root control,” he adds. 
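How the penalties keep those roles honest can be shown with a toy model. In the Python sketch below, the stake and slashing values are invented for illustration; the article does not specify 0G’s real parameters. The point is the incentive structure: a verifier re-derives a compute provider’s claimed result, and a mismatch costs the provider its collateral.

```python
from dataclasses import dataclass

# Toy model of role separation. The stake and penalty values are invented;
# the real network's economic parameters are not given in the article.
STAKE, SLASH = 100, 40

@dataclass
class Node:
    role: str           # e.g. "data attester", "compute provider", "verifier"
    stake: int = STAKE  # collateral posted against dishonest behaviour

def cross_check(provider: Node, verifier: Node, claimed: str, recomputed: str) -> bool:
    """The verifier re-derives (or proof-checks) the provider's claimed result.
    A mismatch slashes the provider's stake, so no result is taken on faith."""
    honest = claimed == recomputed
    if not honest:
        provider.stake -= SLASH
    return honest

worker = Node("compute provider")
checker = Node("verifier")
cross_check(worker, checker, claimed="0xabc", recomputed="0xdef")
print(worker.stake)  # 60: the provider was penalised after failing the check
```

In a live network the verifiers are themselves sampled and cross-checked, which is what keeps any single role from acquiring root control.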

NTU is exploring additional mechanisms, such as multi-layer consensus and verifiable computation proofs, to ensure integrity is woven directly into the system’s fabric, Associate Professor Zhang added.

Balancing transparency with privacy

The cost argument for decentralised AI and 0G

Photo: 0G

Dr Wu stresses that transparency does not require exposing sensitive information. Instead, the hub focuses on proving computation without revealing the underlying data.

Techniques like commit-and-prove, differential privacy, secure enclaves, and cryptographic proofs allow systems to show that a model was trained responsibly, even if the dataset is private. “You verify the computation on chain while the underlying data remains protected,” he explains. 
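
The simplest building block behind these techniques is a cryptographic commitment: publish a salted hash of the private data and reveal nothing else. The Python sketch below shows that primitive; a full commit-and-prove system would replace the final reveal step with a zero-knowledge proof, so that even the auditor never sees the raw records.

```python
import hashlib
import secrets

def commit(data: bytes) -> tuple[str, bytes]:
    """Commit to private data: publish the salted hash, keep data and nonce secret."""
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + data).hexdigest(), nonce

def verify(commitment: str, nonce: bytes, data: bytes) -> bool:
    """Anyone holding the opening (nonce + data) can check it against the commitment."""
    return hashlib.sha256(nonce + data).hexdigest() == commitment

# The trainer commits to its dataset before training; only `commitment` goes on chain.
private_dataset = b"sensitive training records"
commitment, nonce = commit(private_dataset)

# Later, an authorised auditor who receives the opening can confirm the training
# run used exactly the committed data. Any substitution is detectable.
assert verify(commitment, nonce, private_dataset)
assert not verify(commitment, nonce, b"a different dataset")
```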

Blockchain has long been criticised for being slow and energy-intensive. Dr Wu acknowledges these constraints but points out that 0G’s modular design avoids them by splitting execution, consensus, and data availability into specialised layers. The heavy lifting happens off chain through specialised compute networks. Proofs then return the validated results to the ledger.

This allows the network to achieve Web2-level performance with Web3-level security, targeting more than 11,000 transactions per second per shard. “Energy goes into useful work, not pointless hashing,” Dr Wu says.

Here, Web2 refers to traditional systems built without blockchain or decentralisation, while Web3 refers to systems that use blockchain technologies to enable decentralisation. “In general, centralised systems more easily achieve high efficiency and scalability. 0G’s technology allows decentralised systems to reach performance levels on par with centralised systems,” he explained.
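
That division of labour can be caricatured in a few lines. In the sketch below, the expensive work runs off chain and only a compact receipt is checked on chain. A real deployment would rely on succinct cryptographic proofs rather than a bare hash comparison, so treat this purely as a picture of the split, not of the cryptography.

```python
import hashlib

def heavy_inference_off_chain(prompt: str) -> tuple[str, str]:
    """Expensive work runs off chain; only a compact receipt comes back."""
    output = prompt.upper()  # stand-in for an actual model call
    receipt = hashlib.sha256(output.encode()).hexdigest()
    return output, receipt

def verify_on_chain(output: str, receipt: str) -> bool:
    """The on-chain step is cheap: recompute the digest and compare."""
    return hashlib.sha256(output.encode()).hexdigest() == receipt

result, receipt = heavy_inference_off_chain("hello")
assert verify_on_chain(result, receipt)  # the ledger accepts the validated result
```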

Giving global developers access to big-tech-level AI compute

One of Heinrich’s goals is to democratise compute access. In the current landscape, start-ups are largely dependent on a handful of cloud giants. 0G’s decentralised network aims to create a competitive market where anyone with spare compute, from a data centre to an individual with a strong GPU, can provide it.

This would allow a developer anywhere in the world to access the same power as a major company, paying only for what they use. “It compresses the cost of accessing AI from ‘raise a Series B’ to ‘fund a weekend hackathon’,” Heinrich says.

A strong alignment with Singapore’s priorities

Photo: NTU

Singapore has long prioritised digital trust, data sovereignty, and safe innovation. Heinrich sees decentralised AI as a natural extension of these goals. “Decentralised AI operationalises digital trust,” he says. “You prove compliance, rather than simply assert it.”

Associate Professor Zhang agrees, noting that Singapore’s regulatory environment and research strengths position it well to set global standards for open AI governance.

The research hub is focusing on several core areas:

  • Training optimisation for decentralised models with more than 100 billion parameters
  • Model alignment methodologies that embed human values directly into decentralised systems
  • Proof-of-Useful-Work as a real consensus mechanism
  • Multi-agent blockchain systems for smart contract management and anomaly detection
  • A unified theory that maps out token utility, verifiability, and energy cost to create sustainable systems

The collaboration is not only about infrastructure but also about people. NTU expects hundreds of Master’s students to participate over the next few years. They will work directly on live research problems, open-source projects, and prototype deployments.

The goal is to nurture a generation fluent in both AI and blockchain thinking, capable of designing systems that are transparent, auditable, and secure. 

Associate Professor Zhang stressed that even with a fast-moving industry partner, academic freedom and ethical integrity remain central. All research will follow NTU’s ethics frameworks, prioritise open science, and embed fairness and accountability as core principles, he said.

Looking forward, Heinrich hopes the collaboration will help establish verifiable, decentralised AI as a global standard. “Our ultimate achievement will not be a single application,” he says. “It will be the creation of an ecosystem where the world can build and trust AI as a public good, not a private weapon.” It is an ambition NTU shares, and one Dr Wu echoes: if the hub succeeds, he says, it could help redefine how AI systems are designed, verified, and governed across the world.
