AI smart cities: Project Hafnia with NVIDIA

November 15, 2025

Use cases

AI and Video Technology: Foundations of Project Hafnia

Project Hafnia began as an open platform to speed up AI development in urban contexts. Launched by Milestone Systems with partners, it gives developers access to a compliant data library and tools for AI model training. In practice, Project Hafnia provides ethically sourced video data that teams can use to train computer vision models without compromising privacy. For example, Milestone explains how the platform accelerates model iteration by offering pre-annotated footage and modular services that remove much of the friction that slows training.

Visionplatform.ai contributes by showing how existing CCTV can act as an operational sensor. Our platform converts streams into structured events so teams can deploy AI models on site, keep data local, and meet EU AI Act standards. Therefore, organisations can use their video management software to extract value. For a practical pointer, see our people detection reference for airport deployments, which explains how camera networks become live sensors for safety and operations.
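To make "streams into structured events" concrete, here is a minimal sketch of the idea: raw per-frame detections are filtered by confidence and wrapped in a typed event schema that downstream systems can consume. All names and the schema itself (`CameraEvent`, `detections_to_events`) are illustrative assumptions, not Visionplatform.ai's actual API.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CameraEvent:
    """Illustrative structured event derived from a raw detection."""
    camera_id: str
    timestamp: float
    event_type: str    # e.g. "person_detected", "vehicle_detected"
    confidence: float

def detections_to_events(camera_id, timestamp, detections, min_conf=0.6):
    """Keep only sufficiently confident detections and wrap them as events."""
    events = []
    for det in detections:
        if det["confidence"] >= min_conf:
            events.append(CameraEvent(camera_id, timestamp,
                                      f'{det["label"]}_detected',
                                      det["confidence"]))
    return events

# Two raw detections from one frame; only the confident one becomes an event.
dets = [{"label": "person", "confidence": 0.91},
        {"label": "person", "confidence": 0.42}]
events = detections_to_events("cam-entrance-01", 1731628800.0, dets)
print(json.dumps([asdict(e) for e in events], indent=2))
```

The point of the sketch is the shape of the output: a stream of small, typed, camera-attributed events rather than raw video, which is what makes local, privacy-preserving deployments practical.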

Video technology underpins accurate computer vision training in three ways. First, consistent frame rates and image quality matter; high-quality video data leads to better models. Second, annotated sequences create labelled examples for supervised learning and for emerging vision language approaches. Third, an open platform lets teams combine synthetic and real footage so visual AI models generalise better. In short, Project Hafnia lowers the barrier to training reliable computer vision models by making compliant video and tooling available. For more on the ethics and sourcing involved, see the reporting on Project Hafnia’s privacy and compliance focus.

Smart Cities and NVIDIA: Enabling Smarter Cities with GPU Power

GPU infrastructure changes how cities process video at scale. For example, the deployment in Genoa uses NVIDIA DGX Cloud to train and run complex workloads in hours rather than weeks. The City of Genoa became an early European deployment where cloud GPUs and edge devices worked together to optimise urban traffic. That real deployment demonstrates how compute and models interact to deliver outcomes in a live city.

NVIDIA provides the compute backbone and model orchestration. Using NeMo Curator on NVIDIA DGX Cloud, teams fine-tune models quickly and iterate on scenarios. The partnership between Milestone and NVIDIA shows this in practice, and coverage highlights how Milestone and NVIDIA combine video infrastructure and AI expertise for Genoa. Consequently, cities can run visual AI without excessive overhead.

The pairing helps integrate video management software like XProtect with GPU-accelerated pipelines. For instance, XProtect integration enables real-time streaming and event extraction at scale, which helps both emergency response and operations. For teams exploring ANPR or LPR use cases, our ANPR/LPR guidance for airports offers detail on how cameras become operational sensors. Overall, combining Milestone Systems technology, NVIDIA GPUs, and specialised platforms brings next-gen AI for smart cities closer to everyday deployments. The result is smarter cities that can process, learn from, and act on video streams in near real time.

[Image: A panoramic city control room with large screens showing traffic flows, anonymised vehicle tracks, and digital twin overlays in a modern urban operations centre.]

AI vision within minutes?

With our no-code platform you can just focus on your data, we’ll do the rest

Data-Driven City AI: Traffic Management in Genoa

Genoa illustrates a data-driven approach to urban traffic. There, sensor fusion and video analytics feed predictive models that nudge signal timings and reroute flows. The project used high-quality video data and GPU-enabled training so models adjusted quickly to new conditions. Project Hafnia supported these efforts by providing annotated footage and tools that cut training time substantially, with reports noting acceleration of AI development by up to 30×.

Operators found they could measure congestion and adapt in minutes. The city used those gains to lower idling time and to prioritise public transport corridors. As a result, emissions fell and urban mobility improved. The work in Genoa also served as a proof point for rolling out similar systems across European cities.
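The "measure congestion and adapt" loop can be sketched very simply: a measured lane occupancy is compared against a baseline, and the green phase of a signal is nudged proportionally, clamped to safe limits. This is a toy proportional controller under assumed parameters, not the actual Genoa control logic.

```python
def adjust_green_time(base_green_s, occupancy, baseline=0.35,
                      gain=40.0, min_s=15.0, max_s=90.0):
    """Nudge a signal's green phase length (seconds) in proportion to how
    far measured lane occupancy deviates from a baseline, then clamp the
    result to safety limits. All parameters are illustrative assumptions."""
    delta = gain * (occupancy - baseline)
    return max(min_s, min(max_s, base_green_s + delta))

# Heavier traffic extends the green phase; light traffic shortens it.
print(adjust_green_time(30.0, 0.60))  # 30 + 40*(0.60-0.35) = 40.0
print(adjust_green_time(30.0, 0.10))  # 30 + 40*(0.10-0.35) = 20.0
```

Real deployments replace the proportional rule with predictive models, but the structure is the same: video-derived measurements in, bounded timing adjustments out.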

Compliance and public trust mattered throughout. Project Hafnia emphasises compliant video data and anonymisation as core requirements, which helped secure public buy-in. For mid-sized cities, a live testbed for AI-driven traffic management held lessons: the City of Dubuque served as such a testbed, and it demonstrated how traffic management scales to municipalities with roughly 60,000 residents. In short, data-driven traffic systems combine compliant video data, GPU compute, and careful governance to produce measurable improvements in flow and safety.

Use of Video and AI Innovation for Public Safety

Vision language models now help detect anomalies and trigger safety alerts. These systems combine frame-level detections with scene context to decide what qualifies as an alert. For example, visual AI models flag unusual motion and then rank events for operator review. This reduces false alarms and speeds up emergency response, with systems tuned to local rules and workflows.
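The "detections plus scene context, then rank for review" step can be sketched as follows. Here scene context is reduced to a per-zone weight on detection confidence; low-scoring events are suppressed and the rest are sorted for the operator. The zone names, weights, and threshold are hypothetical, and production systems use far richer context than a single multiplier.

```python
def rank_alerts(events, zone_weights, min_score=0.5):
    """Score each detection by confidence weighted by a per-zone context
    factor, drop anything below the threshold, and return the remainder
    sorted with the highest-priority alert first."""
    scored = []
    for ev in events:
        score = ev["confidence"] * zone_weights.get(ev["zone"], 1.0)
        if score >= min_score:
            scored.append({**ev, "score": round(score, 3)})
    return sorted(scored, key=lambda e: e["score"], reverse=True)

# The same confidence in a restricted zone outranks one in a public zone,
# and weak detections never reach the operator at all.
events = [
    {"id": 1, "zone": "restricted", "confidence": 0.7},
    {"id": 2, "zone": "public", "confidence": 0.7},
    {"id": 3, "zone": "restricted", "confidence": 0.3},
]
weights = {"restricted": 1.3, "public": 0.6}
for alert in rank_alerts(events, weights):
    print(alert)
```

This is the mechanism behind fewer false alarms: suppression and prioritisation happen before anything lands on an operator's screen.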

Continuous fine-tuning keeps models current. Teams use services such as NVIDIA Cosmos and NeMo Curator on NVIDIA DGX to retrain models with new footage. That microservice approach supports modular updates and lets teams deploy AI models trained on synthetic data alongside real footage. Meanwhile, Visionplatform.ai focuses on keeping training in the customer environment so data does not leave the premises, supporting EU AI Act readiness and GDPR requirements. If you need forensic search capabilities, our guide to forensic search in airports shows how archived footage becomes searchable and actionable.

Examples from deployments show clear benefits. In Dubuque, the platform improved detection quality and allowed city staff to tune alerts to reduce nuisance alarms. The system also supports PPE and ANPR workflows where needed, and it integrates with existing VMS. Furthermore, responsible-technology principles guided the tuning of motion detection to ensure alerts match the risk profile. As a result, AI-powered safety systems support both security and operations with measurable gains in responsiveness and situational awareness.

[Image: A street-level view of city traffic with anonymised pedestrian and vehicle bounding boxes overlaid, showing AI detections against a digital twin backdrop.]


NVIDIA Omniverse Blueprint for Smart Cities

The NVIDIA Omniverse ecosystem provides a shared virtual space for planning and testing city systems. Planners use the Omniverse blueprint for smart cities to build digital twins and run what-if scenarios. Digital twins and AI agents simulate traffic, events, and infrastructure stress. Thus, teams can test responses before they touch real streets.

NVIDIA’s tools also contribute an AI blueprint for video that standardises pipelines for video-centric AI. This helps model portability and reproducibility. For example, visualisations in a digital twin let stakeholders compare interventions side by side. The ability to spin up a scenario, run it with different parameters, and measure outcomes helps city planners scale solutions with confidence. In effect, the Omniverse blueprint for smart cities creates a control room for the future of smart city technology.

Integration matters. When digital twins connect to live feeds, planners get near real-time insights into urban mobility and infrastructure health. The result is better coordination between traffic control, emergency services, and maintenance crews. The blueprint for smart city AI supports simulation of complex urban dynamics, and it lets teams incorporate models and VLMs that reflect local conditions. For municipalities preparing for EU AI regulation, these simulations also provide auditable trails and validation that inform compliant deployments.

Blueprint for Smart City AI: Visionplatform.ai’s Path Forward

The Hafnia Smart City model shows what a coordinated platform can achieve. Visionplatform.ai builds on that work by offering a video-centric AI stack that keeps data and models under customer control. We help organisations deploy AI at the edge or in hybrid setups so teams can meet AI Act requirements and maintain GDPR readiness. In practice, this means you can deploy AI models on-prem, tune them with local footage, and stream events to city operations without exposing raw streams outside the environment.

Looking ahead, Project Hafnia plans expansions across European cities and emerging markets. These rollouts aim to combine high-quality video data, NVIDIA compute, and modular microservices so municipalities can scale quickly. Thomas Jensen, CEO of Milestone Systems, framed the ambition as creating “the world’s smartest, fastest, and most responsible platform for video data and AI model training”. That aspiration underpins a shared vision: responsible AI applied to urban needs.

Finally, Visionplatform.ai will continue to integrate with leading VMS products such as XProtect, and to support advanced use cases like people counting, PPE detection, and process anomaly detection. For a practical reference on how camera data becomes operational events, see our page on people counting in airports. Together with partners and frameworks like the NVIDIA Omniverse blueprint for smart cities, we aim to provide a reproducible blueprint for smart city AI that cities can adopt to make urban life safer, greener, and more efficient.

FAQ

What is Project Hafnia?

Project Hafnia is an initiative that provides annotated and compliant video data to accelerate AI model training. It is designed to help developers and cities train models faster while keeping privacy and compliance top of mind.

How does Visionplatform.ai fit into smart city projects?

Visionplatform.ai turns existing CCTV into operational sensors and streams structured events for security and operations. The platform focuses on on-prem or edge deployments so organisations can keep control of data and meet EU AI Act requirements.

What role does NVIDIA play in these deployments?

NVIDIA supplies the GPU infrastructure and tooling that speeds up training and inference. Technologies like DGX Cloud and Omniverse enable rapid iteration, simulation, and deployment of video-centric AI in cities.

Can these systems respect privacy and regulation?

Yes. Project Hafnia and partners emphasise compliant, ethically sourced video data and anonymisation. Deployments can run on-prem to support GDPR and the EU AI Act, which helps with legal and public acceptance.

What benefits did Genoa see from the deployment?

Genoa used GPU-accelerated models to optimise traffic flow, reduce congestion, and improve urban mobility. The deployment proved that high-quality video data and compute can deliver measurable operational gains.

Is there a role for digital twins in city planning?

Absolutely. Digital twins allow planners to run what-if scenarios, simulate interventions, and validate ai agents before changes hit real streets. This reduces risk and improves decision-making.

How do cities handle model updates and tuning?

Models are fine-tuned using microservice architectures and tools like NVIDIA Cosmos and NeMo Curator. Continuous retraining on local footage keeps performance high and reduces false positives.

Can smaller cities use these technologies?

Yes. The City of Dubuque demonstrated that mid-sized cities can act as a testbed for AI-driven traffic management. Scaled solutions fit a range of city sizes and budgets.

How does this impact emergency response?

AI-enhanced video can speed situational awareness and automate alerts, which supports faster emergency response. Structured event streams can integrate with dispatch and incident management systems.

Where can I learn more about operationalising camera data?

Visionplatform.ai provides practical guidance on converting camera feeds into searchable, operational events. For hands-on examples, see our resources on people detection and forensic search to understand typical workflows.

Next step? Plan a free consultation.

