In many ADAS use cases the driver is warned when a potentially dangerous situation is
detected. In other cases, depending on the type of assistance, the ADAS can take over
control of the car, partially or fully, by sending appropriate commands to the actuators.
In such control-takeover cases the signal/video processing must be extremely fast in
order to meet very hard real-time requirements.
Two main issues in designing an ADAS are processing speed and robustness. A robust
system should work under dynamic environmental conditions, including changing
illumination and lighting [7]. Distortions such as the sun in the background or shadows
can disturb the vision system and produce wrong information. To ensure that the system
works under such varying conditions, a highly adaptable framework with dynamically
adjustable coefficients is needed.
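To make the notion of dynamically adjustable coefficients concrete, the following C++ sketch shows a binarization threshold that tracks the mean brightness of each frame, so that the same filter keeps working as the illumination changes. This is an illustrative assumption rather than the framework of the cited work; the names and the smoothing rule are hypothetical.

    #include <cstdint>
    #include <numeric>
    #include <vector>

    // Illustrative sketch (not from the cited work): a binarization
    // threshold that adapts to the mean brightness of each frame, so the
    // filter keeps working under changing illumination.
    struct AdaptiveThreshold {
        double threshold = 128.0;  // current coefficient
        double alpha = 0.1;        // smoothing factor (assumed value)

        void update(const std::vector<uint8_t>& gray) {
            double mean = std::accumulate(gray.begin(), gray.end(), 0.0)
                          / static_cast<double>(gray.size());
            // Exponential moving average: follow slow illumination drift
            // without overreacting to single-frame distortions such as
            // sun glare or a passing shadow.
            threshold = (1.0 - alpha) * threshold + alpha * mean;
        }

        uint8_t apply(uint8_t pixel) const {
            return pixel > threshold ? 255 : 0;
        }
    };

The update rule is deliberately cheap, since it must run once per frame within the real-time budget discussed below.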
Depending on the type of video-based ADAS application, different software/hardware
architectures as well as different combinations of image processing filters are needed.
Some filters are very complex in terms of computational effort. A good example of a
complex filter is stereo vision for depth estimation and collision avoidance. Such
complex processing tasks require specific hardware and software architectures that,
amongst others, enable and support parallel processing and task concurrency.
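For rectified stereo cameras, depth follows the standard pinhole relation Z = f * B / d, with focal length f in pixels, baseline B in meters, and disparity d in pixels; the computational burden of stereo vision comes from estimating d for every pixel by searching over many candidate disparities. The following minimal C++ sketch, with illustrative values, shows only the final depth computation:

    #include <cstdio>

    // Classic rectified-stereo relation: Z = f * B / d, where f is the
    // focal length in pixels, B the camera baseline in meters and d the
    // disparity in pixels. The numbers below are illustrative.
    double depthFromDisparity(double focalPx, double baselineM,
                              double disparityPx) {
        if (disparityPx <= 0.0) return -1.0;  // no valid match found
        return focalPx * baselineM / disparityPx;
    }

    int main() {
        // Example: f = 700 px, B = 0.12 m, d = 14 px  ->  Z = 6 m
        std::printf("depth = %.2f m\n",
                    depthFromDisparity(700.0, 0.12, 14.0));
    }

The depth formula itself is trivial; what makes stereo vision expensive is that the disparity d must first be found for each of the millions of pixels in an HD frame, which is why parallel hardware is attractive for this filter.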
Traditionally, sequential processing architectures, time sharing, and multi-threading
algorithms have been used [9, 37, 38]. The main weakness of this traditional approach
with respect to performance is that the processing time is too long for real-time ADAS
applications [9]. Moreover, the traditional approach offers only limited ways to achieve
a speed-up: extending the hardware and using more powerful processors.
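As a minimal illustration of such multi-threading (the filter names are hypothetical), two independent filters can be launched as threads. On a multi-core processor they run in parallel, but on a single sequential core the operating system merely time-shares them, so the overall frame time is not reduced, which is the weakness noted above:

    #include <cstdint>
    #include <functional>
    #include <thread>
    #include <vector>

    // Hypothetical filters; the bodies are omitted.
    void denoise(std::vector<uint8_t>& frame) { /* ... */ }
    void edgeDetect(std::vector<uint8_t>& frame) { /* ... */ }

    // Two independent filters run as concurrent threads. True speed-up
    // requires parallel hardware; a single core only time-shares them.
    void processFrame(std::vector<uint8_t>& a, std::vector<uint8_t>& b) {
        std::thread t1(denoise, std::ref(a));
        std::thread t2(edgeDetect, std::ref(b));
        t1.join();
        t2.join();
    }

    int main() {
        std::vector<uint8_t> a(640 * 480), b(640 * 480);  // frame buffers
        processFrame(a, b);
    }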
To ensure real-time processing of high-quality images, the system should be able to
complete the processing of a frame in less than the time required to capture a frame.
Consequently, at 60 FPS a new frame arrives every 1000/60 ≈ 16.7 ms, leaving a budget of
roughly 15 ms for all processing. If we have 6 to 10 different sequential high-definition
(HD) image preprocessing modules/functions, each taking around 5 ms on traditional
(embedded) processing platforms/architectures, the overall processing would take
approximately 30 ms to 50 ms. With such processing times one clearly fails to satisfy the
real-time deadline of roughly 15 milliseconds.
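The following short C++ sketch reproduces this deadline check with the figures from the text (60 FPS, 6 to 10 stages of about 5 ms each):

    #include <cstdio>

    // Worked deadline check from the text: at 60 FPS a frame arrives
    // every 1000/60 = 16.7 ms; with 6..10 sequential stages of ~5 ms
    // each, the pipeline needs 30..50 ms and misses the budget.
    int main() {
        const double fps = 60.0;
        const double frameBudgetMs = 1000.0 / fps;   // ~16.7 ms
        const int stageCounts[] = {6, 8, 10};
        for (int stages : stageCounts) {
            double totalMs = stages * 5.0;           // ~5 ms per stage
            std::printf("%2d stages: %4.0f ms (budget %.1f ms) -> %s\n",
                        stages, totalMs, frameBudgetMs,
                        totalMs <= frameBudgetMs ? "meets deadline"
                                                 : "misses deadline");
        }
    }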