algorithms/processing. The weakness of this traditional approach, with respect to processing performance, is its long processing time, which makes it very difficult to satisfy real-time constraints. A real-time constraint requires the processing of a frame to be completed within a time window shorter than the capture time of that frame. At a rate of 60 FPS (frames per second), each frame is captured in roughly 16.7 ms, which leaves a maximum of about 15 milliseconds to finish all processing. Since one generally needs about 6 to 10 different high-definition (HD) image preprocessing modules, each taking around 5 ms per frame, the total processing time under traditional processing schemes reaches between 30 ms and 50 ms. This clearly fails to fulfill the real-time constraint of at most 15 ms of processing time per frame. It is therefore clear that traditional processing schemes are not capable of fulfilling the real-time constraints of visual sensors in ADAS. Examples of ADAS
solutions relevant to visual sensors include Lane Departure Warning (LDW), Adaptive Cruise Control (ACC), Emergency Brake Assistant (EBA), and Blind Spot Detection (BSD).
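The timing argument above can be sketched as a quick budget check. All figures (60 FPS, 6 to 10 modules, roughly 5 ms each) come from the text; the function names are purely illustrative:

```python
# Frame-time budget check for a sequential (traditional) processing scheme.
# Figures are taken from the surrounding text; function names are illustrative.

def frame_budget_ms(fps: float) -> float:
    """Maximum time available per frame at the given capture rate."""
    return 1000.0 / fps

def sequential_latency_ms(num_modules: int, ms_per_module: float = 5.0) -> float:
    """Total latency when modules run one after another."""
    return num_modules * ms_per_module

budget = frame_budget_ms(60)          # ~16.7 ms per frame at 60 FPS
for n in (6, 10):
    total = sequential_latency_ms(n)  # 30 ms and 50 ms respectively
    print(f"{n} modules: {total:.0f} ms sequential vs {budget:.1f} ms budget "
          f"-> real-time {'met' if total <= budget else 'violated'}")
```

Even the best case (6 modules, 30 ms) exceeds the per-frame budget by about a factor of two, which is why the sequential scheme cannot meet the constraint.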
The different ADAS solutions should be able to reuse the same components/modules for the image processing. Another weakness of classical algorithms is that they do not enable easy reuse of functional components. We therefore need a special architecture that is reconfigurable by software; such a concept enables easy reuse of the same platform for different functionalities and algorithms.
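As a minimal sketch of the kind of software reconfigurability meant here, a processing chain could be assembled from a shared pool of modules, so that different ADAS functions reuse the same components. All module and function names below are hypothetical, not taken from any real system:

```python
# Hypothetical sketch: assembling different ADAS functions from a shared pool
# of reusable processing modules. Reconfiguration is just a new module list.
from typing import Callable, Dict, List

Module = Callable[[dict], dict]  # one processing stage: frame data in, frame data out

# Shared pool of reusable preprocessing modules (names are illustrative;
# each stage here only tags the frame to stand in for real processing).
MODULES: Dict[str, Module] = {
    "denoise":   lambda f: {**f, "denoised": True},
    "edges":     lambda f: {**f, "edges": True},
    "lanes":     lambda f: {**f, "lanes": True},
    "obstacles": lambda f: {**f, "obstacles": True},
}

def build_pipeline(names: List[str]) -> Module:
    """Compose a pipeline from module names drawn from the shared pool."""
    def run(frame: dict) -> dict:
        for name in names:
            frame = MODULES[name](frame)
        return frame
    return run

# Two ADAS functions sharing the same component pool.
ldw = build_pipeline(["denoise", "edges", "lanes"])      # Lane Departure Warning
eba = build_pipeline(["denoise", "edges", "obstacles"])  # Emergency Brake Assistant
```

The point of the sketch is that switching from LDW to EBA changes only a configuration list, not the modules themselves, which is the reuse property the text asks of the platform.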