Author: EIVA CEO Jeppe Nielsen
EIVA NaviSuite software products support a number of DVRs (digital video recorders), and NaviModel, which is dedicated to subsea data modelling and visualisation, has offered the possibility of video playback for many years.
With the introduction of the fourth generation of NaviPac (EIVA's software product for navigation and positioning), where the video functionality from NaviModel became available to the online operator as well, we received many requests to extend this with support for live video feeds in the NaviPac Helmsman's Display too. Many of our customers have built their own video handling solutions to fit their needs. This includes selecting camera technologies, encapsulating these in subsea housings, and using different video encoding products. Until now, it also meant having to use separate software for live video.
To accommodate these customers and others interested in using video data, we have now added the ability not only to show but also to record live video. Consequently, you can now connect NaviSuite (NaviModel and NaviPac) to multiple live cameras.
There are many different kinds of video technology to choose from, and it is not always easy to find one's way around the various standards and possibilities. Furthermore, as the available technologies have developed, the norm has shifted from older analogue cameras to quite different IP/web cameras with much higher resolutions.
We have tested a number of configurations, using different types of cameras and encoding/decoding boxes – both analogue and digital.
We have combined the following pieces of equipment:
The tested units support a number of camera input types:
We used Ethernet (IP) as the network carrier in all test scenarios, since this is typically available in modern subsea systems. In other words, standard Ethernet (TCP/IP) is used from the subsea MUX over a coax or fibre cable to the topside. The streaming format is typically H.264 or Motion JPEG, both of which are supported by NaviSuite.
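As a rough illustration of what connecting to such a feed involves, the sketch below opens an H.264 stream from an IP camera using OpenCV. H.264 feeds are typically carried over RTSP; the camera address, port, and stream path are illustrative placeholders, not values from our test setup.

```python
def rtsp_url(host: str, port: int = 554, path: str = "stream1",
             user: str = "", password: str = "") -> str:
    """Build an RTSP URL for an IP camera (H.264 is commonly carried over RTSP).
    Port 554 is the RTSP default; the stream path varies by camera vendor."""
    auth = f"{user}:{password}@" if user else ""
    return f"rtsp://{auth}{host}:{port}/{path}"

def show_stream(url: str) -> None:
    """Display frames from the stream until 'q' is pressed (requires opencv-python)."""
    import cv2  # deferred import so the URL helper above stays dependency-free
    cap = cv2.VideoCapture(url)
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imshow("live feed", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    show_stream(rtsp_url("192.168.1.50"))  # hypothetical camera address
```

A Motion JPEG feed would be opened the same way, typically via an HTTP URL instead of RTSP.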
In all cases, the video quality is as good as can be expected from the given camera source, and you can adjust the resolution and/or the frame rate, for example by choosing between variable and constant bit rate (VBR/CBR) H.264 encoding.
The NaviSuite products can support as many live cameras as you want, combining their feeds in a single view – the number of cameras is limited only by the power of your computer. Each camera appears in its own dockable window; that is, it can be docked as an integrated part of the NaviModel or NaviPac Helmsman's Display layout, or used as a free-floating window.
Three different camera technologies (analogue, web, HDMI) displayed in NaviPac Helmsman’s Display at the same time
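Handling several simultaneous feeds usually means reading each camera on its own thread and fanning the frames into the display layer. The sketch below shows that pattern in a generic form; it is not NaviSuite's internal design, and the frame sources here are plain iterables standing in for real capture loops.

```python
import threading
import queue

def camera_reader(cam_id, frames, out_q, stop):
    """One thread per camera: forward each frame tagged with its camera id."""
    for frame in frames:
        if stop.is_set():
            return
        out_q.put((cam_id, frame))

def run_cameras(sources):
    """Start a reader thread per camera and return the shared frame queue.
    'sources' maps camera id -> an iterable of frames (a real feed would be
    a capture loop such as the RTSP reader sketched earlier)."""
    out_q = queue.Queue()
    stop = threading.Event()
    threads = [
        threading.Thread(target=camera_reader, args=(cid, frames, out_q, stop), daemon=True)
        for cid, frames in sources.items()
    ]
    for t in threads:
        t.start()
    return out_q, stop, threads
```

The display side then drains the queue and routes each frame to the window belonging to its camera id, which is what keeps one slow camera from stalling the others.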
With the new recording feature in NaviSuite, live video data is captured to disk. NaviSuite records all live feeds in individual files of a duration you specify (for example 15 minutes per file). You can define the location of the video files, and the file names match the replay naming conventions used in NaviSuite, allowing the files to be used directly during post-processing.
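The core of such segmented recording is deciding which fixed-length segment a frame belongs to and naming the file accordingly. The helper below is a minimal sketch of that logic; the file-name pattern is illustrative only, not NaviSuite's actual replay naming convention.

```python
from datetime import datetime, timedelta

SEGMENT_LENGTH = timedelta(minutes=15)  # one file per 15 minutes, as in the example above

def segment_start(ts: datetime, recording_start: datetime) -> datetime:
    """Start time of the fixed-length segment that a frame timestamp falls into."""
    elapsed = ts - recording_start
    n = int(elapsed / SEGMENT_LENGTH)  # whole segments elapsed so far
    return recording_start + n * SEGMENT_LENGTH

def segment_filename(camera: str, start: datetime) -> str:
    """Illustrative file name: one file per camera per segment."""
    return f"{camera}_{start:%Y%m%d_%H%M%S}.mp4"
```

Whenever `segment_start` changes between two frames, the recorder closes the current file and opens a new one, so every file covers exactly one segment.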
NaviSuite does not offer the option of burning overlays into the video – simply because overlays make parts of the image unusable for photo mosaics and other advanced techniques. However, most encoders will allow you to define an overlay.
An important factor when talking about video data is latency – the delay from when a real-world motion happens until that motion is displayed on the screen. Latency is the sum of delays introduced throughout the process: the encoding latency, the latency of the transport network, the decoding latency, and the latency introduced by the software displaying the video data.
Sometimes there is no need for complex testing, because simpler methods will do. The latency times below were estimated with a simple stopwatch setup: we let the camera capture both the stopwatch and the display in the background, so that we could compare the elapsed time shown live with the time shown in the displayed image. Here, this showed a latency of roughly 300 ms.
With a latency of more than 500 ms, we believe the delay makes a solution unfit for navigation and other live purposes. The delay visible to the operator is so high that it becomes impossible, for example, to safely navigate an ROV or operate a manipulator arm. High latency is, however, acceptable for recording purposes.
The hardware encoders used in our tests offer 80 ms latency end to end, but an additional 100 ms is added by transport and display decoding. Latency levels of 200-300 ms are well suited for, for example, ROV navigation, as the visible delay is very short.
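Putting the figures above together, a simple latency budget is just the sum of the per-stage delays, checked against the 500 ms limit for live navigation mentioned earlier. The stage breakdown below uses the encoder and transport/decoding figures from our tests; the display overhead is an assumed example value.

```python
NAVIGATION_LIMIT_MS = 500  # above this, live ROV/manipulator operation is unsafe (see text)

def total_latency_ms(stages: dict) -> float:
    """Glass-to-glass latency is the sum of the per-stage delays."""
    return sum(stages.values())

def fit_for_navigation(total_ms: float) -> bool:
    """True if the total delay is low enough for live navigation."""
    return total_ms < NAVIGATION_LIMIT_MS

budget = {
    "encoder (end to end)": 80,   # hardware encoder figure from our tests
    "transport + decoding": 100,  # additional delay on top of the encoder
    "display software": 20,       # assumed example value, not a measured figure
}
```

With these numbers the budget lands at 200 ms, comfortably inside the navigation limit and consistent with the 200-300 ms we measured.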
We have obtained a 200-300 ms latency with all the technologies and encoders we have tested.
The live video streaming and video recording features are included in NaviModel 4.1.5 and NaviPac 4.1 Pro at no extra cost.