Hanwha AI video analytics plugin on Wisenet Wave

December 7, 2025

Platform updates

wisenet wave vms: video management for modern surveillance systems

Wisenet WAVE VMS is a centralised platform that makes video management simpler for modern security teams. First, it provides a single dashboard for camera health, live streams, and recorded footage. Next, it lets operators configure alerts and review incidents with fewer clicks. The platform integrates with cloud services, yet it supports on-prem workflows so organisations can meet compliance demands. Hanwha Vision designed this VMS to support embedded AI on cameras and appliances, so video analytics run closer to the source.

Also, the platform reduces manual monitoring by surfacing relevant clips and events. For example, AI-assisted abnormal behaviour detection and automatic incident workflows cut the time needed to find footage. As a result, response times improve and overall operational efficiency rises. The system supports a range of device types, so integrating existing cameras is straightforward. In addition, administrators can control camera settings centrally and push profile updates without visiting each device.

Moreover, Wisenet WAVE brings device-level analytics into the central console. Edge-based processing reduces network load and lowers storage costs, because only relevant clips or metadata stream to the server. This decentralised model supports more cameras with the same infrastructure. It also enables tighter security controls, since raw video can remain on-site while metadata flows to dashboards. For teams focused on system performance, the platform delivers consistent results and a clear upgrade path.

The platform also supports plugin extensions, so tailored features can be added. Administrators can add or remove functionality without rearchitecting the VMS. For teams that need deeper analysis or custom detections, plugins allow integration with third-party services. For example, Visionplatform.ai integrates with Wisenet Wave to publish detections as MQTT events so operations can use camera-as-sensor data for dashboards and automation. If you are deploying video surveillance at scale, Wisenet WAVE offers the control and flexibility that modern security projects require.

ai analytics plugin: deployment and integration

The AI analytics plugin architecture supports both edge devices and server-hosted deployments. First, AI models can run on compatible cameras or on NVR appliances. Second, they can run on a GPU server if you prefer central processing. This hybrid approach means you can balance latency, bandwidth, and cost. The plugin communicates with the VMS through a standard API so events appear in the main console. Also, administrators can route metadata to third-party systems, which helps operational teams turn alerts into workflows.

Deployment follows clear steps. First, install the plugin package on the WAVE server or on the appliance. Then, apply the appropriate licence key and activate the features required for your site. Next, register supported cameras and confirm firmware compatibility. The plugin supports a range of Hanwha models and many third-party ONVIF cameras. Also, you can configure camera analytics profiles and tune sensitivity through camera settings. For complex sites, Visionplatform.ai offers optional model retraining and site-specific tuning so detections match real-world object types and reduce false positives.

Key plugin features include real-time alerts, reduced bandwidth and storage, and flexible rule configuration. The plugin can raise alarms when an event is detected and then optionally trigger recording or a notification. This behaviour ensures that the video recording function focuses on meaningful footage. Also, the plugin lets multiple analytics run in parallel, so you can combine loiter detection with people counting. When a trigger event fires, the VMS logs the incident and links to the relevant clip.
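To make the rule model concrete, here is a minimal sketch of what a per-camera rule set combining loiter detection and people counting could look like. The field names and values are illustrative assumptions, not the plugin's actual configuration schema.

```python
# Hypothetical rule set showing two analytics running in parallel on one camera.
# Every field name here is illustrative only.
camera_rules = {
    "camera_id": "entrance-01",
    "analytics": [
        {
            "type": "loiter_detection",
            # Normalised polygon coordinates describing the watched zone.
            "zone": [(0.10, 0.20), (0.60, 0.20), (0.60, 0.80), (0.10, 0.80)],
            "dwell_seconds": 120,          # raise an event after 2 minutes in the zone
            "on_event": ["start_recording", "notify_operator"],
        },
        {
            "type": "people_counting",
            "line": [(0.0, 0.5), (1.0, 0.5)],   # virtual counting line across the frame
            "report_interval_seconds": 300,
            "on_event": ["log_metadata"],
        },
    ],
}
```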

Image: a control room operator using a centralised video management dashboard on a large wall of monitors, showing multiple camera feeds and alert indicators.

Finally, the plugin is backed by a support portal for licence management and updates. Administrators can download compatible firmware and access configuration guides. If integration with external systems is needed, the plugin supports webhooks and MQTT so trigger events flow to ticketing or automation platforms.
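As an illustration of the webhook path, the sketch below shows a small receiver that turns incoming trigger events into tickets. The endpoint path, payload fields, and ticketing URL are assumptions; adapt them to the event schema your deployment actually emits.

```python
# Minimal sketch of a webhook receiver that forwards trigger events to a
# ticketing system. Payload fields and the ticketing URL are assumptions.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
TICKETING_URL = "https://ticketing.example.internal/api/tickets"  # hypothetical endpoint

@app.route("/wave/events", methods=["POST"])
def handle_event():
    event = request.get_json(force=True)
    # Only escalate the event types this site cares about.
    if event.get("type") in {"loiter", "line_crossing", "object_left"}:
        ticket = {
            "title": f"{event.get('type')} on camera {event.get('camera_id')}",
            "severity": "high" if event.get("type") == "object_left" else "medium",
            "clip_url": event.get("clip_url"),
        }
        requests.post(TICKETING_URL, json=ticket, timeout=5)
    return jsonify({"status": "received"}), 200

if __name__ == "__main__":
    app.run(port=8080)
```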


hanwha vision and hanwha ai analytics

Hanwha has been an active developer of AI-driven surveillance technology for years. Hanwha Vision focuses on robust hardware and firmware, while its software teams build analytics features that operate on the edge. The company publishes technical guides and performance results that show improved detection accuracy and reduced false alarms. For instance, Hanwha Vision’s documentation highlights how automatic incident detection reduces unnecessary alerts and boosts operator trust (Automatic Incident Detection – Hanwha Vision Europe Limited).

Also, Hanwha’s AI analytics roadmap emphasises edge inference, expanded object classifications, and hybrid deployment models. The roadmap supports embedded analytics features that let cameras pre-filter events before forwarding them to the VMS. Hanwha is investing in research into behaviour analytics and crowd metrics, which improves surveillance at large sites such as transport hubs. Additionally, the company tracks sustainability and performance metrics in public reports; the 2025 sustainability report details how analytics and hardware co-design reduce data centre loads and energy consumption (Sustainability Report 2025 – Hanwha Vision).

Market data shows the sector is expanding. The global AI video analytics market was valued at approximately USD 9.40 billion in 2024 and is forecast to rise, reflecting growing demand for automated detection and efficient review workflows (AI Video Analytics Market – Global Market Size, Share and Trends …). Also, Hanwha’s solutions have demonstrated measurable improvements in incident response and cost savings by reducing patrols and live monitoring. Hanwha TechWin America and related regional teams also support deployments and systems integration, ensuring local compliance and service for large customers.

analytics and ai analytics capabilities

Core analytics functions include abnormal behaviour detection, crowd counting, and object classification. The platform can detect unattended baggage, count people in a queue, and classify vehicles by type. These functions run on cameras, on NVRs, and within the WAVE client so processing is decentralised. AI analytics on cameras reduce latency and keep raw feeds local. As a consequence, networks carry less traffic and storage use drops. Also, analysts can search metadata rapidly and find the right clip without scrubbing hours of footage.
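A simple way to picture metadata search is a filter over exported detection records. The record fields below (camera_id, object_class, timestamp, clip_url) are assumptions about what edge analytics might export, not a documented format.

```python
# Sketch of searching detection metadata instead of scrubbing hours of video.
# Record fields and timestamp conventions are assumed (timezone-aware datetimes).
from datetime import datetime, timedelta, timezone

def find_clips(metadata, object_class, camera_id, since_minutes=60):
    """Return clip references for recent detections of one class on one camera."""
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=since_minutes)
    return [
        record["clip_url"]
        for record in metadata
        if record["object_class"] == object_class
        and record["camera_id"] == camera_id
        and record["timestamp"] >= cutoff
    ]

# Example: all clips with a detected vehicle on camera "gate-02" in the last 30 minutes.
# clips = find_clips(all_metadata, "vehicle", "gate-02", since_minutes=30)
```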

The system improves automatic incident detection by combining multiple sensor cues and confidence thresholds. This method lowers false positives and thus relieves operator fatigue. For example, when a detected object triggers an intrusion rule, the system can verify size, speed, and direction before escalating. The AI models are optimised to detect people and vehicles in cluttered scenes. When detections are uncertain, the VMS can flag events for human review rather than generating a full alarm.
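The triage logic can be sketched as a small function that only escalates when confidence, object size, and speed all pass their thresholds, and otherwise routes the event to human review. The threshold values are illustrative, not values used by the plugin.

```python
# Sketch of layered verification: escalate only when classification confidence,
# object size, and speed all pass; queue borderline detections for human review.
# All thresholds below are illustrative assumptions.

def triage_detection(confidence, area_px, speed_px_s,
                     min_conf=0.80, min_area=400, max_speed=500):
    if confidence < 0.50 or area_px < min_area // 4:
        return "ignore"               # too weak to act on
    if confidence >= min_conf and area_px >= min_area and speed_px_s <= max_speed:
        return "alarm"                # strong, plausible detection
    return "human_review"             # uncertain: flag instead of alarming

print(triage_detection(confidence=0.92, area_px=900, speed_px_s=40))   # alarm
print(triage_detection(confidence=0.65, area_px=500, speed_px_s=40))   # human_review
```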

Also, camera analytics can stream structured events to external systems. Visionplatform.ai uses that pattern to publish detections via MQTT so operations teams can use camera data for KPIs and process monitoring. This approach turns cameras into operational sensors. For sites that need specific detections, teams can create custom models that recognise new target object types. The result is better alignment between analytics and the real-world objects that matter to a site. For more details on site-specific detection such as loitering or crowd metrics, see resources such as the detailed page on people counting in airports.
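A minimal sketch of that camera-as-sensor pattern is an MQTT publish of a structured detection event. The topic layout and payload fields are assumptions rather than a documented schema.

```python
# Sketch of publishing a detection as a structured MQTT event for dashboards
# or automation. Topic layout, payload fields, and broker hostname are assumed.
import json
from datetime import datetime, timezone
import paho.mqtt.publish as publish

event = {
    "camera_id": "arrivals-hall-03",
    "object_class": "person",
    "event_type": "loiter",
    "dwell_seconds": 140,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

publish.single(
    topic="site/analytics/arrivals-hall-03/loiter",   # hypothetical topic layout
    payload=json.dumps(event),
    hostname="mqtt.example.internal",                  # your broker address
    port=1883,
)
```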

Image: close-up of an IP security camera mounted on a wall, showing the lens and housing against a daytime city background.


loiter, line crossing and object detection

Loiter detection identifies objects that stay in a defined zone longer than intended. You create a virtual polygon to specify the area, and then you set time thresholds. If the system detects objects that stay in that polygon longer than the specified time, an alert is raised. This logic helps teams spot unattended people or left items. The system also supports object-left-behind detection, so it can flag items abandoned in a busy terminal; review the object-left-behind detection example for airport operations (object left behind detection in airports).
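The dwell logic can be sketched as a per-track timer that starts when an object enters the polygon and raises an event once the threshold is exceeded. Track IDs and positions are assumed inputs from the analytics engine; the zone and threshold values are illustrative.

```python
# Sketch of loiter detection: a tracked object raises an event once it has
# stayed inside a polygon zone longer than the dwell threshold.
from shapely.geometry import Point, Polygon

ZONE = Polygon([(2, 2), (8, 2), (8, 6), (2, 6)])   # watched zone in scene coordinates
DWELL_LIMIT_S = 120                                 # alert after 2 minutes in the zone

entered_at = {}  # track_id -> time the object entered the zone

def update_track(track_id, x, y, now_s):
    """Return True when the tracked object has loitered past the limit."""
    inside = ZONE.contains(Point(x, y))
    if inside:
        entered_at.setdefault(track_id, now_s)      # start the timer on entry
        return now_s - entered_at[track_id] >= DWELL_LIMIT_S
    entered_at.pop(track_id, None)                  # object left the zone; reset its timer
    return False
```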

Line crossing analytics let you draw a virtual line and set direction rules. The system detects whether a person or vehicle crosses that virtual line during a predefined time window and can ignore crossings in the opposite direction. You can create a multi-segment virtual line for complex perimeters. If an object crosses the virtual line and matches the object type you defined, the VMS can mark the clip and send an alert. This is useful for slipstream control at entrances and for perimeter checks in restricted zones.
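Direction-aware crossing can be sketched with a cross-product test: the sign tells which side of the virtual line a point is on, and a sign change between consecutive track positions indicates a crossing. This is a simplified illustration, not the plugin's algorithm; it checks the infinite line only, so a full implementation would also verify the segment extent and handle multi-segment lines.

```python
# Sketch of direction-aware line crossing using the sign of a cross product.
def side_of_line(p, a, b):
    """Positive on one side of the line through a and b, negative on the other."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed(prev_pos, curr_pos, line_a, line_b, positive_to_negative=True):
    s_prev = side_of_line(prev_pos, line_a, line_b)
    s_curr = side_of_line(curr_pos, line_a, line_b)
    if s_prev == 0 or s_curr == 0 or (s_prev > 0) == (s_curr > 0):
        return False                     # no side change, so no crossing
    # Enforce the configured direction; ignore crossings the other way.
    return (s_prev > 0) if positive_to_negative else (s_prev < 0)

# A track moving downward across a horizontal line from (0, 5) to (10, 5).
print(crossed(prev_pos=(4, 6), curr_pos=(4, 4), line_a=(0, 5), line_b=(10, 5)))  # True
```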

Object detection classes include people, vehicles, and animals. Advanced object classifiers can separate bicycles from motorcycles and vans from trucks. You can combine object detection with other rules to create composite alerts. For example, you can detect and classify people who enter an area and then remain there, or you can detect vehicles that park in a loading bay for longer than allowed. To fine-tune coverage, you can draw a custom polygon to encompass the area for object checks and create custom rules that trigger only when specific object types enter that polygon. This flexibility reduces nuisance alerts and improves accuracy.
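As a compact illustration of a composite rule, the sketch below combines an object-class filter with a dwell threshold for a loading bay. The class names and allowed time are assumptions, and the dwell value would come from a timer like the one in the loiter sketch above.

```python
# Sketch of a composite rule: only vehicles that stay in the loading-bay zone
# longer than the allowed time raise an alert. Class names and threshold assumed.
ALLOWED_PARK_S = 900            # 15 minutes allowed in the loading bay

def loading_bay_alert(object_class, dwell_seconds):
    return object_class in {"van", "truck"} and dwell_seconds > ALLOWED_PARK_S

print(loading_bay_alert("truck", 1200))   # True: parked too long
print(loading_bay_alert("person", 1200))  # False: wrong object type
```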

wisenet wave and video surveillance use cases

Wisenet WAVE is widely used in retail, transport hubs, and critical infrastructure. In retail, analytics help with queue management and loss prevention. In airports, crowd detection and people counting smooth passenger flow and support operational decisions; see how people detection in airports applies in practice. Also, perimeter breach use cases combine intrusion detection with video verification to reduce false alarms and speed response. For data centres and other high-security sites, the system supports access control integrations and intrusion workflows that align with compliance requirements (Defending Your Data Centres – Hanwha Vision Europe Limited).

Performance metrics show clear benefits. Deployments report faster incident detection times, lower storage costs, and fewer false positives. AI cameras and edge-based video and audio analytics reduce the volume of recorded footage while increasing the relevance of stored clips. Consequently, operators spend less time searching and more time acting. Vendors estimate the global AI video analytics market will grow significantly over the next decade, which supports continued investment in intelligent video and camera analytics technologies (AI Video Analytics Market – Global Market Size, Share and Trends …).

Also, integrators can enable multiple analytics to run concurrently, which helps with layered security. For example, teams might run loiter detection and people counting in arrivals halls and pair them with thermal people detection in airports for after-hours monitoring. This combined approach improves situational awareness and reduces reliance on live monitoring. For organisations that need site-specific tuning, Visionplatform.ai can create custom models and stream events to operations so cameras act as sensors for business intelligence and OT systems.

FAQ

What is the Wisenet WAVE VMS?

Wisenet WAVE is a centralised video management system designed by Hanwha for modern surveillance deployments. It simplifies device management, recording, and alerting while supporting embedded analytics and plugin extensions.

How does the AI analytics plugin deploy?

The AI analytics plugin can run on edge devices like cameras and NVRs or on a GPU server. Installation requires a licence, compatible firmware, and registration of supported cameras; after setup, events integrate with the WAVE client.

Can loiter detection be customised?

Yes, you can draw a custom polygon to encompass the area and set time thresholds so the system detects objects that stay in the defined area longer than you allow. This reduces nuisance alerts.

Does the system support line crossing rules?

Yes, administrators can create a virtual line or create a multi-segment virtual line and set direction-specific rules. The system detects objects that cross a defined line during the configured period.

How are false positives reduced?

AI models combine size, speed, and classification checks to verify events before raising alarms. In addition, layered rules and confidence thresholds prevent low-quality detections from triggering an alert.

Can analytics run on cameras as well as servers?

Yes, AI analytics on cameras are supported so processing occurs at the edge. This setup reduces bandwidth and keeps raw video local while streaming metadata to the VMS for review.

How do plugins integrate with other systems?

Plugins can publish events via webhooks or MQTT so alerts flow into ticketing, BI, or OT systems. This lets teams treat cameras as operational sensors as well as security devices.

What types of objects can be detected?

Typical classes include people, vehicles, and animals. Advanced object classifiers recognise sub-types, and custom models can add a new target object specific to your site.

Is Visionplatform.ai compatible with Wisenet WAVE?

Yes, Visionplatform.ai integrates with Wisenet WAVE to stream detections and enable operational uses beyond alarms. The integration supports on-prem deployments and GDPR-compliant workflows.

Where can I find detailed airport use cases?

Visionplatform.ai publishes tailored pages that cover people detection, loitering detection, object left behind detection, and others to help aviation teams plan deployments. See the people detection in airports and loitering detection in airports resources for focused guidance.

next step? plan a free consultation

