RFEL teams with Plextek for adaptive FPGA-based video processing
RFEL has teamed with Plextek and 4Sight Imaging to create a new hybrid FPGA (field programmable gate array) SoC (system on chip) based, self-optimising video enhancement capability for the UK's Defence Science and Technology Laboratory.
The UK MoD’s Defence Science and Technology Laboratory (DSTL) has contracted a team, led by Plextek Services, with RFEL and 4Sight Imaging, to perform rapid evaluations of real-time image processing functions. The project also aims to simultaneously demonstrate the latest adaptive capabilities that modern FPGA-based SoC architectures can deliver to defence and security surveillance applications, with a minimised size, weight and power footprint.
DSTL can use this platform to solve complex defence vision and surveillance problems, incorporating best-in-class video processing algorithms while bridging the gap between research prototypes and deployable equipment.
RFEL and Plextek bring expertise in sensor exploitation and real-time embedded video processing, which will combine with DSTL's and 4Sight's adaptive algorithms to create a single environment for a robust proving tool. It allows a range of enhancements to be experimentally evaluated, using an optimised, intelligent and flexible architecture that can be re-used in deployable field equipment.
Increasingly complex video processing has created a new problem: the need to optimally configure FPGA-based component functions and algorithms in real time, under rapidly varying conditions. The team will incorporate a software processing layer, previously developed for DSTL by 4Sight Imaging, which adapts the control variables to optimise the real-time video enhancement, removing the need for a man-in-the-loop.
Using video metrics benchmarked against extensive human trials, the CPU (central processing unit)-based configuration management layer can outperform a human operator. All of the processing is performed at source, in real time, thereby reducing off-board bandwidth and potentially alleviating the requirement for downstream processing. The low SWaP (size, weight and power) video enhancement platform performs irrespective of the time of day or the prevailing weather.
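The metric-driven configuration layer described above can be sketched roughly as a closed loop: score each candidate setting of a control variable with an image-quality metric and keep the best. The following Python sketch is purely illustrative; the metric, the `enhance` stage and all names are assumptions, not the actual DSTL/4Sight algorithms.

```python
def sharpness_metric(frame):
    """Crude image-quality proxy: mean absolute horizontal gradient."""
    total = count = 0
    for row in frame:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count if count else 0.0

def enhance(frame, gain):
    """Stand-in for an FPGA enhancement stage with one control variable."""
    return [[min(255, int(p * gain)) for p in row] for row in frame]

def auto_tune(frame, gains):
    """Pick the gain whose output scores best on the metric, no operator needed."""
    return max(gains, key=lambda g: sharpness_metric(enhance(frame, g)))

frame = [[10, 40, 20], [30, 80, 60], [5, 25, 15]]
best = auto_tune(frame, [0.5, 1.0, 2.0, 4.0])
```

In a deployed system the metric itself would be the one benchmarked against human trials, and the loop would run per frame against the live video stream.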
The work draws together high-performance, bespoke FPGA processing supporting the computationally intensive tasks, and the flexibility (but lower performance) of CPU-based processing. This heterogeneous, hybrid approach is made possible by contemporary SoCs, such as Xilinx's Zynq devices, which provide embedded ARM CPUs with closely coupled FPGA fabric. The use of a modular FPGA design, with generic interfaces for each module, enables FPGA functions, which are traditionally inflexible, to be dynamically re-configured under software control. Critically, to support remotely deployable, real-world applications, the system will also manage its own power budget, adapting the processing solution to maximise time delivering operational benefit.
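The idea of generic module interfaces plus software-managed power can be illustrated with a minimal sketch. All class names, power figures and the selection policy here are hypothetical stand-ins for the actual FPGA cores and control software.

```python
class Module:
    """Generic interface that every pipeline stage implements."""
    name = "module"
    power_mw = 0  # illustrative power cost of enabling this stage

    def process(self, frame):
        return frame

class Stabilise(Module):
    name, power_mw = "stabilise", 800

    def process(self, frame):
        return frame  # placeholder for an FPGA stabilisation core

class ContrastEnhance(Module):
    name, power_mw = "contrast", 500

    def process(self, frame):
        # placeholder for an FPGA contrast-enhancement core
        return [[min(255, p + 20) for p in row] for row in frame]

def build_pipeline(modules, power_budget_mw):
    """Keep stages in priority order until the power budget is exhausted."""
    selected, used = [], 0
    for m in modules:
        if used + m.power_mw <= power_budget_mw:
            selected.append(m)
            used += m.power_mw
    return selected

def run(pipeline, frame):
    for m in pipeline:
        frame = m.process(frame)
    return frame

# With a 1000 mW budget, only the highest-priority stage fits.
pipeline = build_pipeline([Stabilise(), ContrastEnhance()], power_budget_mw=1000)
```

Because every stage exposes the same `process` interface, the control software can re-order, swap or drop stages at run time without touching the FPGA fabric design itself.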
Image processing techniques can have a number of degrees of freedom, with a wide range of possible parameter settings. This can call for a high level of expertise to set up a vision system. In security or defence applications, conditions can change rapidly, meaning that a system may also require frequent tuning to re-optimise the image. Commercial camera systems deliver a point-and-shoot experience, but military users have more specialised needs and require a system that performs the right optimisation, automatically.
RFEL plans to introduce this technology as an enhancement to the powerful HALO video platform. Coupled with video processing algorithms such as Stabilisation, Distortion Correction and Non-Linear Contrast Enhancement, this will enable rapid product development for designers addressing the most demanding vision applications in the security, military and aerospace domains.