WSF_IMAGE_PROCESSOR¶
- processor WSF_IMAGE_PROCESSOR¶
Derives From: WSF_MESSAGE_PROCESSOR
```
processor <name> WSF_IMAGE_PROCESSOR
   ... processor Commands ...
   ... Platform Part Commands ...
   ... External Link Commands ...
   ... WSF_MESSAGE_PROCESSOR Commands ...
   ... WSF_SCRIPT_PROCESSOR Commands ...
   coast_time ...
   filter ... end_filter
   reports_velocity
   reports_side
   reports_type
   reports_bearing_elevation
   message_length ...
   message_priority ...
   include_unstable_covariance ...
   include_unstable_residual_covariance ...
   target_recognition ...
   ... Target Recognition Commands ...

   # Script Interface
   on_initialize ... end_on_initialize
   on_initialize2 ... end_on_initialize2
   on_update ... end_on_update
   script_variables ... end_script_variables
   scripts ... end_script
   ... Other Script Commands ...
end_processor
```
Overview¶
WSF_IMAGE_PROCESSOR receives image data from an imaging sensor such as WSF_EOIR_SENSOR or WSF_SAR_SENSOR and produces tracks. This provides a simple capability to simulate an analyst looking at the image and generating actionable information. The capabilities of the WSF_MESSAGE_PROCESSOR can be used to simulate processing delays by delaying the reception of the incoming image messages.
The processor is typically used in the following construct:
```
platform_type ...
   sensor eoir WSF_EOIR_SENSOR
      ...
      # Forward the images to 'image_proc'
      internal_link image_proc
   end_sensor
   processor image_proc WSF_IMAGE_PROCESSOR
      ...
      # Forward the extracted 'tracks' to 'track_proc'
      internal_link track_proc
   end_processor
   processor track_proc WSF_TRACK_PROCESSOR
      # Implicitly takes the tracks from 'image_proc' and updates the track_manager
      ...
   end_processor
end_platform_type
```
The processor accepts the following types of messages:
- A static image from WSF_SAR_SENSOR operating in ‘spot’ mode.
- A single frame of a video stream from WSF_EOIR_SENSOR or from WSF_SAR_SENSOR operating in ‘strip’ mode.
Each of the above messages contains a WsfImage object, which reflects the objects that were visible in the image. The processor then creates or updates tracks for each object in the image as defined in the following sections.
If the target_recognition flag is enabled, the WSF_IMAGE_PROCESSOR will attempt to perform target detection, identification, and classification, based on the number of pixels in the evaluated image and associated Johnson’s criteria equations (see Target Recognition Commands).
Note
Only WSF_IMAGE_MESSAGE and WSF_VIDEO_MESSAGE message types are processed by WSF_IMAGE_PROCESSOR. Incoming messages of all other types are discarded.
Static Image Processing¶
For static images the process is simple. For each object in the image:
Create a new temporary track with a new track ID.
Set the reported location in the track to the ‘measured’ location from the image.
Set the reported velocity in the track to zero.
Set the reported type and side if requested.
Send a WSF_TRACK_MESSAGE containing a new track.
Note that every object in every static image gets a unique track ID. No memory is retained about formerly processed static images.
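As an illustrative sketch of the static-image case, a spot-mode SAR feeding this processor might be configured as follows. The names sar, sar_proc, and track_proc are hypothetical, and the elided sensor configuration is indicated by ...:

```
# Illustrative only: a WSF_SAR_SENSOR in 'spot' mode produces static images;
# each object in each image yields a new track with a unique track ID.
sensor sar WSF_SAR_SENSOR
   ...   # configure a 'spot' mode here; see WSF_SAR_SENSOR
   internal_link sar_proc
end_sensor
processor sar_proc WSF_IMAGE_PROCESSOR
   reports_type   # include the perceived type in each track
   reports_side   # include the perceived side in each track
   internal_link track_proc
end_processor
```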
Video Stream Processing¶
For video streams, the following process is repeated for each object in the image:
If there is not an existing track for the object, create the track, and if requested, initiate a filter.
If a filter is defined:
Update the filter with the ‘measured’ location from the object in the image.
Update the track with the location and velocity estimate from the filter.
Update the track with the state and residual covariance (this may be suppressed while the filter is not yet ‘mature’).
If a filter is not defined:
Update the track with the ‘measured’ location from the object in the image.
If ‘reports_velocity’ was specified, update the track with the true velocity that corresponds to the platform associated with the object.
Set the reported type and side if requested.
Send a WSF_TRACK_MESSAGE with the new or updated track.
After processing all the objects in the image, old tracks are purged. Any track whose time since its last update exceeds the coast_time is purged, and a WSF_DROP_TRACK_MESSAGE is sent.
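Putting the above together, a minimal video-stream configuration might look like the sketch below. The processor and link names are illustrative, and WSF_KALMAN_FILTER is assumed to be one of the Predefined Filter Types:

```
# Illustrative only: track objects in a video stream, smooth with a filter,
# and drop tracks not updated within the coast time.
processor video_proc WSF_IMAGE_PROCESSOR
   coast_time 10 sec          # purge a track 10 s after its last update
   filter WSF_KALMAN_FILTER
      ...                     # filter-specific parameters
   end_filter
   internal_link track_proc   # 'track_proc' is an assumed downstream processor
end_processor
```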
Commands¶
- coast_time <time-value>¶
Specifies the maximum amount of time that may elapse between updates before a track is dropped.
Tracks are evaluated for dropping only when a message containing an object is received.
Default: 0 secs (no coast time)
- filter <filter_type> <filter_parameters> end_filter¶
A filter can be used to process an incoming video stream and produce smoothed position and velocity estimates. <filter_type> can be one of the Predefined Filter Types or a filter type derived from one of those types.
Default: No filter.
Note
Filters are not applied to static images.
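For example, assuming WSF_ALPHA_BETA_FILTER is one of the available predefined filter types and that alpha and beta are its gain commands:

```
filter WSF_ALPHA_BETA_FILTER
   alpha 0.6   # position-correction gain
   beta  0.2   # velocity-correction gain
end_filter
```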
- reports_velocity¶
Indicates if velocity is reported in the produced tracks.
This command is applicable ONLY when the input is a video stream and a filter has not been defined. Velocity will ALWAYS be reported in the following cases:
If a filter is defined and enough updates have been received to generate a reliable velocity. This means that at track initiation, and perhaps for one or two updates thereafter, the track will not have a valid velocity (WsfTrack.VelocityValid).
For static images, a velocity of zero will be reported.
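On the consuming side, scripts should therefore test for a valid velocity before using it. A sketch in AFSIM script follows; the helper name CheckTrack is illustrative, and the WsfTrack methods used are assumed per the script class reference:

```
script void CheckTrack(WsfTrack aTrack)
   // Velocity is not valid until the filter has matured
   // (static-image tracks always report a zero velocity).
   if (aTrack.VelocityValid())
   {
      writeln("Track speed: ", aTrack.Speed());
   }
end_script
```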
- reports_type¶
Indicates if ‘type’ is to be reported in the produced tracks.
Default: ‘type’ will not be reported.
- reports_side¶
Indicates if ‘side’ is to be reported in the produced tracks.
Default: ‘side’ will not be reported.
- reports_bearing_elevation¶
Indicates that tracks are to be populated with bearing and elevation instead of location data.
Note
This feature should not be used with filtering.
Default: Location is reported instead of bearing and elevation
- message_length <data-size-value>¶
Specifies the logical length assigned to the track messages that are created from the image.
Default: 0 (use the value derived from the message_table)
- message_priority <integer-priority>¶
Specifies the priority assigned to the track messages that are created from the image.
Default: 0 (use the value derived from the message_table)
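For example, to override the message_table defaults for the emitted track messages (the values shown are arbitrary):

```
message_length   2048 bits   # logical length of each emitted track message
message_priority 3           # priority of each emitted track message
```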
- include_unstable_covariance <boolean-value>¶
- include_unstable_residual_covariance <boolean-value>¶
When a filter is employed, the state covariance and residual covariance are not reliable during initial creation and perhaps for one or two updates (the number depends on the filter employed). When these values are false (the default), these unreliable values are not passed to the output track.
Default: false for both commands.
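For example, to force even the early, unreliable covariance values into the output tracks (a sketch, useful mainly for debugging downstream consumers; it assumes a filter is defined elsewhere in the processor):

```
include_unstable_covariance          true
include_unstable_residual_covariance true
```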
- target_recognition <boolean-value>¶
Enable this processor’s target recognition capabilities for target detection, classification, and identification.
Script Interface¶
WSF_IMAGE_PROCESSOR utilizes the capabilities of the Common Script Interface, WSF_MESSAGE_PROCESSOR, and WSF_SCRIPT_PROCESSOR.
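As a sketch of the inherited script capability, the processor below counts its own updates. The variable name mUpdateCount and the 5-second interval are arbitrary, and the example assumes on_update fires at each update_interval:

```
processor image_proc WSF_IMAGE_PROCESSOR
   update_interval 5 sec
   script_variables
      int mUpdateCount = 0;
   end_script_variables
   on_update
      mUpdateCount = mUpdateCount + 1;
      writeln("T=", TIME_NOW, " ", PLATFORM.Name(), " update ", mUpdateCount);
   end_on_update
end_processor
```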