Image processing system and method for tracking moving objects in a sequence of images
Patent Abstract:
METHOD AND SYSTEM FOR USING FINGERPRINTING TO TRACK MOVING OBJECTS IN VIDEO. The present invention relates to a method and system for tracking moving objects (155) in a sequence of images (110). In an illustrative embodiment, a current image (128) in the image sequence (110) is segmented into a plurality of segments (138). The segments in the plurality of segments (138) belonging to the same motion profile are merged to form a set of main segments (142). A set of target segments (154) is identified from the set of main segments (142). The set of target segments (154) represents a set of moving objects (155) in the current image (128). A set of fingerprints (156) is created for use in tracking the set of moving objects (155) in a number of subsequent images (162) in the image sequence (110).

Publication number: BR102013024785B1
Application number: R102013024785-5
Filing date: 2013-09-26
Publication date: 2021-04-06
Inventors: Terell Nathan Mundhenk; Kyungnam Kim; Yuri Owechko
Applicant: The Boeing Company
Main IPC class:
Patent Description:
[0001] This application is related to the following patent application: entitled "Method and System for Processing a Sequence of Images Using Fingerprints," serial No. 13/631,705, docket No. 12-078-US-NP; filed on the same date, assigned to the same assignee, and incorporated herein by reference.

Background of the Invention

Field

[0002] The present invention generally relates to image processing and, in particular, to the detection and tracking of moving objects in images. Still more particularly, the present description relates to a system and method for detecting and tracking moving objects in images by creating fingerprints for the moving objects.

Background

[0003] Different types of techniques are currently available for the detection and tracking of moving objects in a sequence of images, such as video. However, some of these currently available techniques may be unable to detect and/or track a moving object with a desired level of accuracy. For example, some currently available techniques may be unable to detect and/or track a moving object when that moving object becomes partially obstructed in one or more images in the image sequence.

[0004] Additionally, some currently available techniques may be unable to determine the contour of a moving object with a desired level of precision. As used here, the contour of an object can be the outline or the shape of the object. This contour can be the contour of the object's outer surface.

[0005] Segmentation is an example of a process used to determine the contours of objects in images. As used here, "segmentation" is the process of dividing an image into multiple segments. Each segment includes a group of pixels that have been identified as sharing a similar visual characteristic. This visual characteristic can be, for example, without limitation, color, texture, intensity, or some other type of characteristic. In this way, segments that are adjacent to each other differ with respect to the particular visual characteristic beyond some selected threshold.

[0006] Segmentation can be used to simplify and/or change the representation of an image so that the segmented image is easier to analyze compared to the original image. For example, when an image is segmented to form a segmented image, the features within the segmented image may be more easily discernible compared to the original image. In particular, the contours of the objects and/or features captured within the original image can be more easily discernible within the segmented image.

[0007] However, some currently available segmentation techniques may be unable to segment images in a manner that clearly defines the contours of objects. For example, when an image is segmented based on color to form a segmented image, an object that is captured in the image as having two or more colors can be represented by multiple segments within the segmented image.

[0008] Consequently, the contour of the object within the segmented image may not be as easily discernible as desired. Additionally, extracting information about the features represented by these types of segments can result in less accurate information than desired. Therefore, it would be desirable to have a method and apparatus that takes into account some of the problems discussed above, in addition to other possible problems.

Summary

[0009] In an illustrative embodiment, an image processing system comprises an image segmenter, a consistency checker, and a fingerprint device.
The image segmenter is configured to segment a current image in a sequence of images into a plurality of segments to form a segmented image and to merge the segments in the plurality of segments belonging to the same motion profile to form a set of main segments. The consistency checker is configured to identify a set of target segments from the set of main segments. The set of target segments represents a set of moving objects in the current image. The fingerprint device is configured to create a set of fingerprints for use in tracking the set of moving objects in a number of subsequent images in the image sequence.

[0010] In another illustrative embodiment, a computer-implemented method for tracking moving objects in a sequence of images is provided. A current image in the image sequence is segmented into a plurality of segments. The segments in the plurality of segments belonging to the same motion profile are merged to form a set of main segments. A set of target segments is identified from the set of main segments. The set of target segments represents a set of moving objects in the current image. A set of fingerprints is created for use in tracking the set of moving objects in a number of subsequent images in the image sequence.

[0011] In another illustrative embodiment, a computer-implemented method for tracking moving objects in a sequence of images is provided. Local movement and global movement are identified in a current image. The global movement is subtracted from the local movement to form a moving image. The moving image includes a set of motion profiles. The current image in the image sequence is segmented into a plurality of segments to form a segmented image. The segments in the plurality of segments belonging to the same motion profile are merged to form a main image having a set of main segments. A set of target segments is identified from the set of main segments to form a target image. The set of target segments represents a set of moving objects in the current image. A set of fingerprints is created for use in tracking the set of moving objects in a number of subsequent images in the image sequence.

[0012] The features and functions can be achieved independently in various embodiments of the present description or can be combined in yet other embodiments in which further details can be seen with reference to the following description and drawings.

Brief Description of the Drawings

[0013] The novel features believed to be characteristic of the illustrative embodiments are set forth in the appended claims.
The illustrative embodiments, however, as well as a preferred mode of use and further objectives and features thereof, will best be understood by reference to the following detailed description of an illustrative embodiment of the present description when read in conjunction with the accompanying drawings, in which:

[0014] Figure 1 is an illustration of an image processing environment in the form of a block diagram in which an illustrative embodiment can be implemented;

[0015] Figure 2 is an illustration of a fingerprint device in the form of a block diagram according to an illustrative embodiment;

[0016] Figure 3 is an illustration of an image according to an illustrative embodiment;

[0017] Figure 4 is an illustration of a moving image according to an illustrative embodiment;

[0018] Figure 5 is an illustration of an enlarged view of part of a moving image according to an illustrative embodiment;

[0019] Figure 6 is an illustration of a segmented image according to an illustrative embodiment;

[0020] Figure 7 is an illustration of part of a segmented image according to an illustrative embodiment;

[0021] Figure 8 is an illustration of a moving segment image according to an illustrative embodiment;

[0022] Figure 9 is an illustration of a main image according to an illustrative embodiment;

[0023] Figure 10 is an illustration of an enlarged view of part of a main image according to an illustrative embodiment;

[0024] Figure 11 is an illustration of an image according to an illustrative embodiment;

[0025] Figure 12 is an illustration of a main image according to an illustrative embodiment;

[0026] Figure 13 is an illustration of a process for performing image processing in the form of a flowchart according to an illustrative embodiment;

[0027] Figure 14 is an illustration of a process for establishing a set of target segments from a set of main segments in the form of a flowchart according to an illustrative embodiment;

[0028] Figure 15 is an illustration of a process for creating a fingerprint in the form of a flowchart according to an illustrative embodiment;

[0029] Figure 16 is an illustration of a process for forming a set of main segments in the form of a flowchart according to an illustrative embodiment; and

[0030] Figure 17 is an illustration of a data processing system according to an illustrative embodiment.

Detailed Description

[0031] The different illustrative embodiments recognize and take into account different considerations. For example, the different illustrative embodiments recognize and take into account that some of the currently available systems and methods for detecting and tracking objects and, in particular, moving objects, may not perform as well as desired.

[0032] In particular, some currently available methods for detecting and tracking objects in video may be unable to track objects that are at least partially obstructed in one or more of the images that form the video with a desired level of accuracy. In addition, these currently available methods may be unable to track objects that temporarily move out of the field of view of the video camera system for some period of time during the video. Additionally, some currently available methods for object tracking may require instructions on what types of objects to search for and track.
For example, these methods may be unable to track objects and, in particular, moving objects, without knowing the type of object to be detected and tracked.

[0033] Thus, the different illustrative embodiments provide a system and method for generating a fingerprint of a moving object, which was detected in an image in a sequence of images, for use in the detection and tracking of the moving object throughout the sequence of images. In particular, the fingerprint can be used for the detection and tracking of the moving object in an image in the sequence of images even when the moving object is partially obstructed or no longer within the field of view for the image.

[0034] With reference now to the figures and, in particular, with reference to figure 1, an illustration of an image processing environment in the form of a block diagram is presented according to an illustrative embodiment. In figure 1, the image processing environment 100 includes an imaging system 102 and an image processing system 104.

[0035] In these illustrative examples, imaging system 102 can be any type of sensor system configured to generate imaging data 106 for scene 108. Imaging system 102 can be selected from, for example, without limitation, an electro-optical (EO) imaging system, an infrared (IR) imaging system, a radar imaging system, a thermal imaging system, an ultrasound imaging system, a light detection and ranging (LIDAR) system, and some other suitable type of imaging system. In this way, the imaging data 106 generated by the imaging system 102 can comprise electro-optical images, infrared images, radar images, thermal images, light detection and ranging images, or some other type of image. Electro-optical images can be, for example, visible light images.

[0036] In these illustrative examples, the imaging data 106 can take the form of a sequence of images 110. As used here, an "image" is a two-dimensional digital image comprising pixels arranged in rows and columns. Each pixel can have a value representing a color and/or brightness for that pixel. Additionally, a "sequence of images," as used here, is two or more images generated in a consecutive order with respect to time.

[0037] The sequence of images 110 generated for scene 108 can be referred to as video 112 of scene 108. When the sequence of images 110 is referred to as video 112, each image in the sequence of images 110 can be referred to as a "frame."

[0038] Scene 108 can be a physical area, such as, for example, without limitation, an area of a city, a neighborhood, an area near the ocean, an area in a forest, an area in a desert, a city, a geographic area, an area within a manufacturing facility, a floor in a building, a section of a highway, or some other suitable type of area.

[0039] Moving objects 114 can be present in scene 108. As used here, "a moving object," such as moving object 116, can be any object that is moving with respect to a field of view for the imaging system 102. Moving object 116 is an example of one of the moving objects 114 in scene 108.

[0040] In this way, moving object 116 can take the form of any object that does not remain stationary within scene 108. For example, moving object 116 can take the form of a person walking or running within scene 108, a vehicle, a moving structure, an object located in a moving vehicle, or some other type of moving object.
A vehicle in scene 108 can take the form of, for example, without limitation, a car, a truck, an aircraft, a van, a tank, an unmanned aerial vehicle, a spacecraft, a missile, a rocket, or some other suitable type of vehicle.

[0041] In some cases, the moving object 116 may be a combination of two or more objects moving together. For example, the moving object 116 may comprise two or more objects that are attached to each other and, thus, moving together with the same type of movement.

[0042] Additionally, the moving object 116 can take the form of any object that moves relative to the field of view for the imaging system 102 as the angle at which the imaging system 102 is pointed and/or the position of the imaging system 102 changes. For example, the moving object 116 may be a stationary object that appears to move within the image sequence 110 when the imaging system 102 is moved.

[0043] Imaging system 102 is configured to send imaging data 106 to image processing system 104 using a number of communication links 120. As used here, a "number of" items means one or more items. Thus, the number of communication links 120 can be one or more communication links. The number of communication links 120 can include at least one of, for example, a wired communications link, a wireless communications link, an optical communications link, and some other type of communications link.

[0044] As used here, the phrase "at least one of," when used with a list of items, means different combinations of one or more of the listed items can be used and only one of each item in the list may be needed. For example, "at least one of item A, item B, and item C" can include, for example, without limitation, item A, or item A and item B. This example can also include item A, item B, and item C, or item B and item C. In other examples, "at least one of" can be, for example, without limitation, two of item A, one of item B, and ten of item C; four of item B and seven of item C; or some other suitable combination.

[0045] The image processing system 104 can be implemented using hardware, software, or a combination of the two. In these illustrative examples, image processing system 104 can be implemented in computer system 122. Computer system 122 can comprise a number of computers. When more than one computer is present in computer system 122, those computers can be in communication with each other.

[0046] Image processing system 104 is configured to process imaging data 106 received from imaging system 102. In some illustrative examples, image processing system 104 can receive the images in the sequence of images 110 one at a time as the images are generated by the imaging system 102. For example, the image processing system 104 can receive the image sequence 110 substantially in real time as the images are generated. In other illustrative examples, the image processing system 104 can receive the entire image sequence 110 at some point in time after the image sequence 110 has been generated.

[0047] The image processing system 104 processes the image sequence 110 to detect and track the presence of moving objects in the image sequence 110. As shown, the image processing system 104 includes motion detector 124 and the object tracker 126. Motion detector 124 is configured to detect the presence of motion in the image sequence 110. Object tracker 126 is configured to track moving objects through the image sequence 110.
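As an illustrative, non-limiting sketch of this division of labor, the processing flow can be pictured as a small streaming pipeline in which each image is handed first to the motion detector and then to the object tracker. The Python outline below is not part of the present description; all class and method names, and the compute_motion helper, are hypothetical stand-ins introduced only for illustration.

```python
# Illustrative outline only; the names are hypothetical stand-ins for
# motion detector 124 and object tracker 126.

def compute_motion(previous_image, current_image):
    """Placeholder for the global/local movement separation sketched later."""
    raise NotImplementedError

class MotionDetector:
    def __init__(self):
        self.previous_image = None  # the previously processed image

    def detect(self, current_image):
        # Compare the current image with the previously processed image
        # to form a moving image containing a set of motion profiles.
        moving_image = (None if self.previous_image is None
                        else compute_motion(self.previous_image, current_image))
        self.previous_image = current_image
        return moving_image

class ObjectTracker:
    def track(self, current_image, moving_image):
        # Segment, merge, consistency-check, and fingerprint (sketched below).
        ...

class ImageProcessingSystem:
    def __init__(self):
        self.motion_detector = MotionDetector()
        self.object_tracker = ObjectTracker()

    def process(self, image_sequence):
        # Images may arrive one at a time, substantially in real time.
        for current_image in image_sequence:
            moving_image = self.motion_detector.detect(current_image)
            if moving_image is not None:
                self.object_tracker.track(current_image, moving_image)
```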
[0048] For example, motion detector 124 receives the current image 128 in the image sequence 110 for processing. Motion detector 124 is configured to detect movement within the current image 128. In an illustrative example, motion detector 124 uses the current image 128 and the previous image 134 to form the moving image 136. The previous image 134 is the image in the sequence of images 110 immediately prior to the current image 128, with no intervening images between the current image 128 and the previous image 134. Additionally, the previous image 134 is the image that was previously processed by the motion detector 124.

[0049] The motion detector 124 uses the current image 128 and the previous image 134 to identify the local movement and the global movement in the current image 128. As used here, the "global movement" in the current image 128 can be the general movement of the current image 128. The global movement may include, for example, the movement of background features in the current image 128 with respect to the background features of the previous image 134. These background features may include, for example, without limitation, trees, sky, roads, shrubs, vegetation, grass, buildings, man-made structures, and/or other types of background features. Thus, the global movement in the current image 128 is the movement of scene 108 as a whole with respect to scene 108 in the previous image 134.

[0050] As used here, "local movement" includes movement that differs from the global movement. Local movement can include, for example, the movement of foreground features, such as moving objects 114, in the current image 128 with respect to the previous image 134. Motion detector 124 can subtract the global movement identified in the current image 128 from the local movement identified in the current image 128 to form the moving image 136.

[0051] In these illustrative examples, the moving image 136 can include a set of motion profiles 135. As used here, a "set of" items can be zero or more items. In other words, a set of items can be a null or empty set. Thus, in some cases, the set of motion profiles 135 can include one, two, three, five, ten, or some other number of motion profiles. In other cases, the set of motion profiles 135 may be an empty set.

[0052] As used here, a "motion profile" is a part of the moving image 136 that represents local movement in the moving image 136. For example, a motion profile can be a part of the moving image 136 having a color other than the background of the moving image 136. This color can represent, for example, a moving object, such as the moving object 116 in scene 108.

[0053] Object tracker 126 is configured to receive the current image 128 and the moving image 136 for processing. As shown, object tracker 126 includes an image segmenter 130, a number of data structures 158, a consistency checker 132, and a fingerprint device 133.

[0054] The image segmenter 130 is configured to segment, or divide, the current image 128 into a plurality of segments 138 to form the segmented image 140. In these illustrative examples, each segment in the plurality of segments 138 includes one or more pixels. When more than one pixel is present in a segment, those pixels are contiguous pixels. In other words, each pixel in the segment is adjacent to at least one other pixel in the segment, without any pixel that does not belong to the segment located between those two pixels.
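The subtraction of global movement from local movement can be realized in many ways; the present description does not prescribe a particular algorithm. The sketch below is one plausible realization, assuming OpenCV is available: global (camera-induced) motion is estimated with a RANSAC homography over sparse feature tracks, the previous frame is warped to cancel it, and the remaining frame difference is thresholded into a moving image whose connected regions play the role of the set of motion profiles 135. All parameter values are assumed example settings.

```python
import cv2
import numpy as np

def form_moving_image(previous_image, current_image, diff_threshold=25):
    """Sketch of motion detector 124: subtract global movement from local movement."""
    prev_gray = cv2.cvtColor(previous_image, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(current_image, cv2.COLOR_BGR2GRAY)

    # Estimate global (background) movement from sparse feature tracks.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    good_prev = prev_pts[status.ravel() == 1]
    good_curr = curr_pts[status.ravel() == 1]
    H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)

    # Warp the previous frame so that the global movement is compensated;
    # what remains in the difference image is local movement.
    h, w = curr_gray.shape
    stabilized_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    local_movement = cv2.absdiff(curr_gray, stabilized_prev)

    # Threshold and clean up; each connected foreground region of the
    # result stands in for one motion profile.
    _, moving_image = cv2.threshold(local_movement, diff_threshold, 255,
                                    cv2.THRESH_BINARY)
    moving_image = cv2.morphologyEx(moving_image, cv2.MORPH_OPEN,
                                    np.ones((3, 3), np.uint8))
    return moving_image
```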
[0055] In these illustrative examples, image segmenter 130 segments the current image 128 so that all of the pixels in each segment in the plurality of segments 138 share a similar visual characteristic. The visual characteristic can be, for example, a color, an intensity value, a texture, or some other type of visual characteristic. For example, all of the pixels in a particular segment in the plurality of segments 138 can have a value within a selected range that represents a selected color.

[0056] Image segmenter 130 takes into account that different parts of a moving object, such as the moving object 116 in scene 108, may have different visual characteristics in the current image 128. For example, when the moving object 116 is a car, the body of the car may appear as one color in the current image 128, while the windows of the car may appear as another color in the current image 128.

[0057] Consequently, the moving object 116 can be represented in the segmented image 140 by multiple segments in the plurality of segments 138. Discerning which segments in the plurality of segments 138 actually represent the moving object 116 may not be an easy task.

[0058] In this way, the image segmenter 130 is configured to group segments in the plurality of segments 138 together to form a set of main segments 142 using the moving image 136. In particular, the image segmenter 130 merges the segments in the plurality of segments 138 belonging to the same motion profile to form the main image 143 having a set of main segments 142.

[0059] More specifically, the segments in the plurality of segments 138 that belong to the same motion profile in the set of motion profiles 135 in the moving image 136 are merged to form a main segment in the set of main segments 142. In these illustrative examples, a segment in the plurality of segments 138 can be considered as "belonging" to a particular motion profile in the set of motion profiles 135 when the number of pixels in the segment that overlap the particular motion profile is greater than a selected threshold. Of course, in other illustrative examples, other criteria and/or factors can be used to determine which segments in the plurality of segments 138 are to be merged to form the set of main segments 142.

[0060] In these illustrative examples, the image segmenter 130 may only merge segments that are contiguous. In other words, two segments in the plurality of segments 138 can be merged only when these two segments are adjacent to each other. Accordingly, each main segment in the set of main segments 142 comprises a number of contiguous segments.

[0061] In some illustrative examples, the image segmenter 130 integrates the moving image 136 with the segmented image 140 to form the moving segment image 145. The moving segment image 145 can be created, for example, without limitation, by overlaying the moving image 136 on the segmented image 140. The segments in the plurality of segments 138 that are overlapped by the set of motion profiles 135 can be considered "moving segments." For each motion profile, the moving segments overlapped by that motion profile are merged to form a main segment. In this way, the set of main segments 142 can be formed in a number of different ways.
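One way to make the "belonging" test of paragraph [0059] concrete is sketched below, assuming the segmented image is given as an integer label map and the moving image as a binary mask. The overlap threshold is an assumed example value, and for brevity the sketch does not re-verify that the fused member segments are contiguous, as paragraph [0060] requires.

```python
import numpy as np
from scipy import ndimage

def merge_segments_by_motion(segment_labels, moving_image, overlap_threshold=20):
    """Sketch of forming the set of main segments 142: segments whose pixel
    overlap with the same motion profile exceeds a selected threshold are fused."""
    # Label each connected region of the moving image; each labeled region
    # stands in for one motion profile in the set of motion profiles 135.
    profile_labels, num_profiles = ndimage.label(moving_image > 0)

    main_segments = {}
    for profile_id in range(1, num_profiles + 1):
        profile_mask = profile_labels == profile_id
        members = []
        for segment_id in np.unique(segment_labels[profile_mask]):
            overlap = np.count_nonzero((segment_labels == segment_id) & profile_mask)
            if overlap > overlap_threshold:  # the segment "belongs" to the profile
                members.append(segment_id)
        if members:
            # Fuse the member segments into a single main segment mask.
            main_segments[profile_id] = np.isin(segment_labels, members)
    return main_segments
```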
[0062] Thereafter, the image segmenter 130 generates main statistics 144 for the set of main segments 142. As an illustrative example, the image segmenter 130 identifies the segment data 146 for each main segment in the set of main segments 142. Segment data 146 for a particular main segment can include, for example, without limitation, color data, luminescence data, pixel location data, entropy data, and/or other types of data.

[0063] Color data can include, for example, a color value for each pixel in the main segment. The color value can be a hue value or a saturation value. Luminescence data can include, for example, a luminescence value for each pixel in the main segment. The luminescence value can be a brightness value. Pixel location data can include, for example, a location for each pixel in the main segment with respect to the rows and columns of pixels in the main image 143. Entropy data can include color data that has been filtered using an entropy filter.

[0064] In this illustrative example, the image segmenter 130 generates main statistics 144 by fitting segment data 146 to mathematical model 148. In some cases, mathematical model 148 may be a linear regression model, such as, for example, without limitation, a generalized linear model (GLM). The generalized linear model can be, for example, a Gaussian model with full covariance.

[0065] Image segmenter 130 sends the main image 143 and the main statistics 144 to consistency checker 132 for further processing. Consistency checker 132 is configured to determine whether each main segment in the set of main segments 142 does, in fact, represent a moving object. In other words, consistency checker 132 determines whether a main segment in the set of main segments 142 represents a moving object or an image anomaly.

[0066] In an illustrative example, consistency checker 132 can match main segment 152 in the set of main segments 142 with a previously identified main segment that was identified for the previous image 134. Consistency checker 132 determines whether a difference between the main statistics 144 for the main segment 152 and the main statistics identified for the previously identified main segment is greater than a selected threshold.

[0067] If the difference is not greater than the selected threshold, the main segment 152 is added to the set of target segments 154. In this way, the consistency checker 132 creates the set of target segments 154 for the current image 128. The set of target segments 154 may include some, none, or all of the set of main segments 142.

[0068] Each target segment in the set of target segments 154 represents a moving object in the current image 128. In other words, the set of target segments 154 represents the set of moving objects 155. The set of moving objects 155 can include some, none, or all of the moving objects 114 in scene 108, depending on the implementation. For example, in some cases, the set of moving objects 155 may include moving object 116.

[0069] In some illustrative examples, consistency checker 132 may be unable to match main segment 152 with a previously identified main segment. In such cases, the main segment 152 can be analyzed to determine whether the main segment 152 represents an anomaly or a new moving object that was not previously detected. When the main segment 152 is identified as representing a new moving object, the main segment 152 is added to the set of target segments 154. The consistency checker 132 sends the set of target segments 154 to the fingerprint device 133 as the target image 157.
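As a sketch of what the main statistics 144 might look like when mathematical model 148 is a Gaussian model with full covariance, each main segment can be summarized by the mean and covariance of its per-pixel feature vectors. The feature layout below (color channels plus row and column location) is an assumption consistent with the segment data described in paragraph [0063], not a prescribed format.

```python
import numpy as np

def main_statistics(image, main_segment_mask):
    """Sketch: fit per-pixel segment data to a Gaussian with full covariance."""
    rows, cols = np.nonzero(main_segment_mask)
    color = image[rows, cols].reshape(len(rows), -1)   # color data per pixel
    # Feature vectors combine color data with pixel location data.
    features = np.column_stack([color, rows, cols]).astype(float)
    mean = features.mean(axis=0)
    covariance = np.cov(features, rowvar=False)
    return mean, covariance
```

A consistency check in the spirit of paragraph [0066] can then compare these statistics between images and reject main segments whose statistics differ by more than a selected threshold.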
[0070] The fingerprint device 133 receives the target image 157 and identifies the set of fingerprints 156 for the set of target segments 154 in the target image 157. As used here, a "fingerprint" for a target segment is a description of the unique characteristics of the moving object represented by the target segment. The set of fingerprints 156 is configured for use in tracking the set of moving objects 155 in a number of subsequent images in the image sequence 110.

[0071] The fingerprint device 133 stores the set of fingerprints 156 in the number of data structures 158. A data structure in the number of data structures 158 can take the form of, for example, without limitation, a table, a report, a graph, a database, an associative memory, or some other type of data structure.

[0072] As an illustrative example, the set of fingerprints 156 can be stored in the fingerprint database 160 in the number of data structures 158 for future detection and tracking of moving objects. The fingerprint database 160 includes the fingerprints created for the portion of the moving objects 114 in scene 108 detected and tracked within the image sequence 110.

[0073] The object tracker 126 can use the set of fingerprints 156 stored in the fingerprint database 160 to increase the probability of being able to track the set of moving objects 155 in the number of subsequent images 162 in the image sequence 110. In particular, the set of fingerprints 156 can be used to track the set of moving objects 155 in the number of subsequent images 162 even after one or more of these moving objects has become partially or fully obstructed, or when one or more of these moving objects moves out of the field of view of the imaging system 102. The number of subsequent images 162 can be the images in the image sequence 110 after the current image 128.

[0074] In these illustrative examples, each fingerprint in the set of fingerprints 156 is a light fingerprint. As used here, a "light fingerprint" is a description of the characteristics of the moving object represented by the corresponding target segment that is minimized with respect to spatial and temporal complexity. In this way, the amount of storage space required to store the set of fingerprints 156 can be reduced.

[0075] In some illustrative examples, the image segmenter 130 may use fingerprints to determine which of the plurality of segments 138 are to be merged to form the set of main segments 142, in addition to or in place of the moving image 136. In an illustrative example, the image segmenter 130 sends the segmented image 140 to the fingerprint device 133. The fingerprint device 133 creates the plurality of segment fingerprints 164 for the plurality of segments 138 in the segmented image 140. Each of the plurality of segment fingerprints 164 is a fingerprint for a corresponding segment in the plurality of segments 138.

[0076] The fingerprint device 133 stores the plurality of segment fingerprints 164 in the fingerprint database 160 for use by the image segmenter 130. The image segmenter 130 retrieves the plurality of segment fingerprints 164 and the set of previous segment fingerprints 166 from the fingerprint database 160 and uses these different fingerprints to form the set of main segments 142.

[0077] The set of previous segment fingerprints 166 may include the set of fingerprints previously identified for the previous image 134 based on the target segments identified for the previous image 134.
In this illustrative example, the image segmenter 130 groups the contiguous segment fingerprints in the plurality of segment fingerprints 164 that match a particular fingerprint in the set of previous segment fingerprints 166 to form a main segment.

[0078] With reference now to figure 2, an illustration of a fingerprint device in the form of a block diagram is presented according to an illustrative embodiment. In figure 2, the fingerprint device 133 of figure 1 is shown in greater detail.

[0079] As shown, the fingerprint device 133 receives the target image 157 for processing. The fingerprint device 133 includes the feature analyzer 202 and the fingerprint manager 204. The feature analyzer 202 is configured to perform feature analysis 206 for each target segment in the set of target segments 154 in the target image 157 to form the set of fingerprints 156. In these illustrative examples, performing feature analysis 206 may include extracting characteristic data 208 for each target segment in the set of target segments 154 and fitting characteristic data 208 to the number of mathematical models 210.

[0080] The number of mathematical models 210 may include different types of models. A model in the number of mathematical models 210 can be, for example, without limitation, parametric or non-parametric. As used here, a "parametric model" is a family of distributions that can be described using a finite number of parameters. In contrast, a "non-parametric model," as used here, does not rely on the data being fitted belonging to any particular family of distributions.

[0081] Additionally, a model in the number of mathematical models 210 can be, for example, without limitation, spatially aware or spatially agnostic. A spatially aware model can take into account the locations, spatial orientation, and/or alignment of features. However, a spatially agnostic model may not take into account the locations, spatial orientation, or alignment of features.

[0082] The spatial generalized linear model 212 and the characteristic-only generalized linear model 214 are examples of parametric models 220. Spatiogram 216 and histogram 218 are examples of non-parametric models 222. Additionally, the spatial generalized linear model 212 and the spatiogram 216 are examples of spatially aware models 224. The characteristic-only generalized linear model 214 and histogram 218 are examples of spatially agnostic models 226.

[0083] Characteristic data 208 for each target segment in the set of target segments 154 can be fitted to one or more of the number of mathematical models 210 to form fitted data 228 for each target segment in the set of target segments 154. For example, the feature analyzer 202 can fit characteristic data 208 for target segment 230 in the set of target segments 154 to the spatial generalized linear model 212, the characteristic-only generalized linear model 214, the spatiogram 216, the histogram 218, or some combination of the above to form fitted data 228 for target segment 230.

[0084] When the fingerprint device 133 is configured to create the plurality of segment fingerprints 164 as described in figure 1, characteristic data 208 can be extracted for each of the plurality of segments 138 in figure 1 and fitted to the number of mathematical models 210 in a manner similar to that described above.
In particular, characteristic data 208 for the plurality of segments 138 can be fitted to the number of mathematical models 210 to form the fitted data 228 for each segment in the plurality of segments 138.

[0085] The fingerprint manager 204 is configured to receive the fitted data 228 for the set of target segments 154 and create the set of fingerprints 156. The fitted data 228 for each target segment in the set of target segments 154 is used to form a fingerprint in the set of fingerprints 156. For example, the fitted data 228 for target segment 230 is used to form fingerprint 232. In an illustrative example, target segment 230 represents the moving object 116 in figure 1. Consequently, fingerprint 232 is a fingerprint for the moving object 116.

[0086] In this way, the set of fingerprints 156 is created for the current image 128 in figure 1. The fingerprint manager 204 is configured to store the set of fingerprints 156 in the number of data structures 158 for use in processing the number of subsequent images 162 in the image sequence 110 in figure 1. For example, the set of fingerprints 156 can be stored together with other fingerprints in the fingerprint database 160.

[0087] When the fingerprint manager 204 receives the fitted data 228 for the plurality of segments 138 in figure 1 from the feature analyzer 202, the fingerprint manager 204 uses the fitted data 228 for the plurality of segments 138 to create the plurality of segment fingerprints 164. The fingerprint manager 204 can store the plurality of segment fingerprints 164 in the number of data structures 158 and/or send the plurality of segment fingerprints 164 to the image segmenter 130 in figure 1.

[0088] During the processing of the number of subsequent images 162 in figure 1, one or more of the set of moving objects 155 in figure 1 may become partially obstructed or no longer visible. For example, the moving object 116 in figure 1 may be partially obstructed in one or more of the number of subsequent images 162. Consequently, the moving object 116 may not be detectable in these subsequent images. However, the fingerprint 232 for the moving object 116 can be used to reacquire the track of the moving object 116.

[0089] For example, new fingerprints that are created for images after the current image 128 in figure 1 can be compared with the set of fingerprints 156 and any other previously created fingerprints stored in the fingerprint database 160. This comparison is used to determine whether any of the new fingerprints are for moving objects for which fingerprints were previously created.

[0090] As an illustrative example, one of the number of subsequent images 162 in figure 1 can be processed and new fingerprint 234 can be created for that subsequent image. In this illustrative example, fingerprint manager 204 compares the new fingerprint 234 with the different fingerprints stored in the fingerprint database 160 to determine whether the new fingerprint 234 is for a moving object for which a fingerprint was previously created.

[0091] For example, the fingerprint manager 204 can compare the new fingerprint 234 with the fingerprint 232. If the new fingerprint 234 matches the fingerprint 232 within selected tolerances, the fingerprint manager 204 determines that the new fingerprint 234 and fingerprint 232 are for the same moving object, which is the moving object 116.
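For Gaussian-model fingerprints such as the main-statistics sketch earlier, "matching within selected tolerances" can be made concrete with the Kullback-Leibler divergence that the description later names as a similarity measure. The sketch below assumes each fingerprint is a (mean, covariance) pair, symmetrizes the divergence, and compares it against an assumed tolerance value.

```python
import numpy as np

def gaussian_kl(mean0, cov0, mean1, cov1):
    """Kullback-Leibler divergence D(N0 || N1) between two Gaussian models."""
    k = len(mean0)
    cov1_inv = np.linalg.inv(cov1)
    diff = mean1 - mean0
    return 0.5 * (np.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def fingerprints_match(fingerprint_a, fingerprint_b, tolerance=5.0):
    """Sketch: a symmetrized divergence below a selected tolerance is a match."""
    score = (gaussian_kl(*fingerprint_a, *fingerprint_b)
             + gaussian_kl(*fingerprint_b, *fingerprint_a))
    return score < tolerance
```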
[0092] In some illustrative examples, fingerprint manager 204 averages the new fingerprint 234 and fingerprint 232 to create a modified fingerprint that replaces fingerprint 232 in the fingerprint database 160. In other illustrative examples, the fingerprint manager 204 replaces fingerprint 232 with the new fingerprint 234 in the fingerprint database 160. In this way, fingerprints can be used to track moving objects and to reacquire the tracks of moving objects in the sequence of images 110 in figure 1.

[0093] In some cases, the fingerprint manager 204 can be configured to use previously created fingerprints to track stationary objects. For example, in some cases, a moving object for which a fingerprint was previously created may become stationary during a period of time in which the sequence of images 110 is generated. The previously created fingerprint can be used to keep track of that object even when the object is not moving.

[0094] The illustrations of the image processing environment 100 in figure 1 and the fingerprint device 133 in figure 2 are not meant to imply physical or architectural limitations to the manner in which an illustrative embodiment can be implemented. Other components in addition to or in place of the ones illustrated can be used. Some components may be optional. In addition, the blocks are presented to illustrate some functional components. One or more of these blocks can be combined, divided, or combined and divided into different blocks when implemented in an illustrative embodiment.

[0095] For example, the image segmenter 130, the consistency checker 132, and the fingerprint device 133 may all be part of the same module in some cases. In some illustrative examples, other mathematical models can be used in addition to and/or in place of the models described for the number of mathematical models 210 in figure 2. In other illustrative examples, the consistency checker 132 can be configured to generate the main statistics 144 instead of the image segmenter 130.

[0096] With reference now to figure 3, an illustration of an image is presented according to an illustrative embodiment. Image 300 is an example of an image that can be generated by an imaging system, such as the imaging system 102 in figure 1.

[0097] In particular, image 300 is an example of an implementation for an image in the sequence of images 110 in figure 1. Additionally, image 300 can be an example of an implementation for the current image 128 in figure 1. As shown, image 300 includes the background 302 and the set of moving objects 304. Examples of moving objects in the set of moving objects 304 in image 300 include, but are not limited to, vehicles 306, 308, 310, and 312.

[0098] With reference now to figure 4, an illustration of a moving image is presented according to an illustrative embodiment. Moving image 400 is an example of an implementation for the moving image 136 in figure 1. Image 300 in figure 3 can be processed by a motion detector, such as motion detector 124 in figure 1, to form the moving image 400.

[0099] As shown, the moving image 400 includes the background 402 and the set of motion profiles 404. Additionally, the moving objects 304 of image 300 in figure 3 are still visible in the moving image 400. The background 402 represents the part of image 300 contributing to the global movement of image 300 in figure 3. The global movement of image 300 can be, for example, the general movement of the scene in image 300.
[0100] The set of motion profiles 404 is an example of an implementation for the set of motion profiles 135 in figure 1. Each motion profile in the set of motion profiles 404 represents local movement in image 300 of figure 3. The local movement is movement that differs from the global movement of image 300 beyond some selected threshold.

[0101] Examples of motion profiles in the set of motion profiles 404 include, but are not limited to, motion profiles 406, 408, 410, 412, 414, and 416. In this illustrative example, the motion profiles 406, 408, 410, and 412 represent the local movement that includes the movement of vehicles 306, 308, 310, and 312, respectively. These motion profiles indicate that the movement of these vehicles is different from the general movement of the scene captured in image 300 in figure 3. Part 418 of the moving image 400 is shown in greater detail in figure 5 below.

[0102] Turning now to figure 5, an illustration of an enlarged view of part 418 of the moving image 400 of figure 4 is presented according to an illustrative embodiment. As shown, motion profile 408 is superimposed on top of vehicle 308 in the moving image 400. Motion profile 408 represents the local movement that includes the movement of vehicle 308. In addition, as illustrated, motion profile 408 also represents the local movement that includes the shadow of vehicle 308.

[0103] With reference now to figure 6, an illustration of a segmented image is presented according to an illustrative embodiment. The segmented image 600 is an example of an implementation for the segmented image 140 in figure 1. Image 300 of figure 3 can be processed, for example, by the image segmenter 130 in figure 1, to form the segmented image 600.

[0104] As shown, the segmented image 600 includes the plurality of segments 602. The plurality of segments 602 is an example of an implementation for the plurality of segments 138 in figure 1. Each segment in the plurality of segments 602 comprises one or more contiguous pixels. The contiguous pixels that form a particular segment in the plurality of segments 602 correspond to contiguous pixels in image 300 in figure 3 that share a similar visual characteristic. The pixels that form a segment in the plurality of segments 602 are all given the same value representing this visual characteristic.

[0105] Examples of segments in the plurality of segments 602 include, but are not limited to, segments 604, 606, 608, 610, 612, 614, and 616. Each of these segments may represent a particular feature in image 300 in figure 3. For example, segment 604 represents the road on which vehicles 306, 308, 310, and 312 are traveling in image 300 in figure 3. Additionally, segment 606 and segment 614 represent the grass in the background 302 in image 300 in figure 3.

[0106] Segment 608 represents the hood of vehicle 306 in figure 3. Segment 610 represents the hood of vehicle 310 in figure 3, while segment 612 represents the front window of vehicle 310. Segment 616 represents the shadow cast by vehicle 312 in image 300 in figure 3. Part 618 of segmented image 600 is shown in greater detail in figure 7 below.

[0107] Turning now to figure 7, an illustration of part 618 of segmented image 600 is presented according to an illustrative embodiment. As shown, segments 702, 704, 706, and 708 in the plurality of segments 602 in segmented image 600 are more clearly seen in this view.
[0108] Segment 702 represents the upper body part of vehicle 308 in image 300 in figure 3. Segment 704 represents at least part of the hood of vehicle 308 in figure 3. Segment 706 represents the shadow cast by vehicle 308 in image 300 in figure 3. In addition, segment 708 represents the right-side doors of vehicle 308 in figure 3.

[0109] With reference now to figure 8, an illustration of a moving segment image is presented according to an illustrative embodiment. In figure 8, the moving segment image 800 is an example of an implementation for the moving segment image 145 in figure 1. The moving image 400 of figure 4 and the segmented image 600 of figure 6 have been integrated, for example, by the image segmenter 130 in figure 1, to form the moving segment image 800.

[0110] As shown, the moving segment image 800 includes the background segments 802 and the moving segments 804. The moving segments 804 are the segments of the plurality of segments 602 in the segmented image 600 in figure 6 that are overlapped by the set of motion profiles 404 in the moving image 400 of figure 4. The segments that are overlapped by the same motion profile can be merged to form a main segment.

[0111] With reference now to figure 9, an illustration of a main image is presented according to an illustrative embodiment. Main image 900 is an example of an implementation for the main image 143 in figure 1. In this illustrative example, the moving segments 804 in the moving segment image 800 in figure 8 that were overlapped by the same motion profile have been merged, for example, by the image segmenter 130 in figure 1, to form the set of main segments 901 in the main image 900.

[0112] The set of main segments 901 is an example of an implementation for the set of main segments 142 in figure 1. Examples of main segments in the set of main segments 901 include, but are not limited to, main segments 902, 904, 906, 908, and 910. Each of these main segments comprises moving segments of the moving segment image 800 in figure 8 belonging to the same motion profile in the set of motion profiles 404 in figure 4. Part 912 of main image 900, including the main segment 904, is shown in greater detail in figure 10 below.

[0113] Each main segment in the set of main segments 901 can be compared with a set of main segments previously identified for an image processed before image 300 in figure 3. This comparison can be used to determine whether the main segment actually represents a moving object, some irrelevant feature, or an anomaly.

[0114] For example, main segment 902 can be compared with the set of main segments identified for a previous image to determine whether main segment 902 represents a moving object. If the main segment 902 does not match any of the previously identified main segments, then an analysis can be performed to determine whether the main segment 902 represents a previously undetected moving object, an anomaly, or some other irrelevant feature.

[0115] Turning now to figure 10, an illustration of an enlarged view of part 912 of main image 900 of figure 9 is presented according to an illustrative embodiment. In this illustrative example, main segment 904 has been formed such that contour 1000 of main segment 904 matches the contour of vehicle 308 in image 300 in figure 3 within selected tolerances.

[0116] With reference now to figure 11, an illustration of an image is presented according to an illustrative embodiment.
In figure 11, image 1100 is an example of an image that can be generated by an imaging system, such as the imaging system 102 in figure 1.

[0117] In particular, image 1100 is an example of an implementation for an image in the sequence of images 110 in figure 1. Additionally, image 1100 can be an example of an implementation for the current image 128 in figure 1. As shown, image 1100 includes a background 1102 and the set of moving objects 1104. Examples of moving objects in the set of moving objects 1104 in image 1100 include, but are not limited to, vehicles 1106, 1108, 1110, 1112, 1114, and 1116.

[0118] With reference now to figure 12, an illustration of a main image is presented according to an illustrative embodiment. Main image 1200 is an example of an implementation for the main image 143 in figure 1. Image 1100 of figure 11 can be processed by object tracker 126 in figure 1 to form main image 1200.

[0119] As shown, the main image 1200 comprises background segments 1202 and the set of main segments 1204. The set of main segments 1204 includes main segments 1206, 1208, 1210, 1212, 1214, and 1216. In this illustrative example, the main segments 1206, 1208, 1210, 1212, 1214, and 1216 represent vehicles 1106, 1108, 1110, 1112, 1114, and 1116, respectively, of figure 11.

[0120] Each main segment in the set of main segments 1204 was formed by merging multiple segments of a segmented image together. The selection of which segments to merge to form the set of main segments 1204 was performed using previous fingerprints for an image processed before image 1100 in figure 11.

[0121] With reference now to figure 13, an illustration of a process for performing image processing in the form of a flowchart is presented according to an illustrative embodiment. The process illustrated in figure 13 can be performed using the image processing system 104 in figure 1.

[0122] The process begins by receiving a current image for processing (operation 1300). The current image can be, for example, the current image 128 in figure 1. Thereafter, the global movement in the current image and the local movement in the current image are identified (operation 1302). The global movement in the current image is then subtracted from the local movement in the current image to form a moving image, in which the moving image includes a set of motion profiles (operation 1304). Operation 1302 and operation 1304 can be performed using, for example, motion detector 124 in figure 1.

[0123] Next, the current image is segmented into a plurality of segments to form a segmented image (operation 1306). Operation 1306 can be performed using, for example, the image segmenter 130 in figure 1. The segments in the plurality of segments belonging to the same motion profile are then merged to form a set of main segments (operation 1308). Thereafter, a set of target segments to be fingerprinted is established from the set of main segments (operation 1310). In operation 1310, a target segment in the set of target segments represents a moving object.

[0124] A fingerprint is then created for each target segment in the set of target segments for use in tracking the moving object in a number of subsequent images (operation 1312), with the process terminating thereafter. Operation 1312 can be performed, for example, by the fingerprint device 133 in figures 1 and 2. The fingerprint device can perform operation 1312 by performing a feature analysis of each target segment.
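Strung together, operations 1300 through 1312 amount to the following outline. Every helper below either refers back to an earlier sketch in this description or is a hypothetical placeholder (segment_image, check_consistency, create_fingerprint); none of them is a prescribed interface.

```python
def process_current_image(current_image, previous_image, fingerprint_database):
    """Outline of operations 1300-1312 of figure 13."""
    # Operations 1302-1304: identify global and local movement, then subtract
    # the global movement to form a moving image with a set of motion profiles.
    moving_image = form_moving_image(previous_image, current_image)

    # Operation 1306: segment the current image into a plurality of segments.
    segment_labels = segment_image(current_image)              # assumed helper

    # Operation 1308: merge segments belonging to the same motion profile.
    main_segments = merge_segments_by_motion(segment_labels, moving_image)

    # Operation 1310: keep only the main segments consistent with previously
    # identified main segments; these become the target segments.
    target_segments = check_consistency(main_segments,
                                        fingerprint_database)  # assumed helper

    # Operation 1312: fingerprint each target segment for use in tracking
    # the moving objects in subsequent images.
    for target_segment in target_segments:
        fingerprint = create_fingerprint(current_image,
                                         target_segment)       # assumed helper
        fingerprint_database.store(fingerprint)
```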
[0125] With reference now to figure 14, an illustration of a process for establishing a set of target segments from a set of main segments in the form of a flowchart is presented according to an illustrative embodiment. The process illustrated in figure 14 is an example of one manner in which operation 1310 of figure 13 can be performed. This process can be performed, for example, by the image segmenter 130 and consistency checker 132 in figure 1.

[0126] The process begins by generating main statistics for each main segment in the set of main segments (operation 1400). Thereafter, a main segment is selected from the set of main segments for processing (operation 1402).

[0127] The selected main segment is connected to a closest matching main segment identified for the previously processed image (operation 1404). The closest matching main segment can be, for example, the previously identified main segment that has a location in the previous image that is closest to the location of the selected main segment within the current image. Of course, in other illustrative examples, the closest matching main segment can be based on the main statistics generated for the selected main segment and the main statistics identified for the set of main segments previously identified for the previous image.

[0128] Next, the process determines whether any additional unprocessed main segments are present in the set of main segments (operation 1406). If additional unprocessed main segments are present, the process returns to operation 1402 as described above. Otherwise, the process computes a similarity score between each pair of connected segments (operation 1408). This similarity score can be, for example, without limitation, a Kullback-Leibler (KL) divergence value.

[0129] In operation 1408, the similarity score can be computed based on the main statistics identified for the main segments for the current image and for the main segments previously identified for the previous image. In some illustrative examples, the similarity score is computed over a number of images processed prior to the current image.

[0130] Thereafter, the main segments having a similarity score within a selected threshold are added to a set of target segments (operation 1410), with the process terminating thereafter. In this way, only the main segments that are consistent with the previously identified main segments are selected as target segments for further processing.

[0131] With reference now to figure 15, an illustration of a process for creating a fingerprint in the form of a flowchart is presented according to an illustrative embodiment. The process illustrated in figure 15 is an example of one manner in which operation 1312 of figure 13 can be implemented. This process can be performed using the fingerprint device 133 in figures 1 and 2.

[0132] The process begins by identifying target pixels for each target segment in the set of target segments (operation 1500). A target pixel is a pixel that is located within a target segment. Thereafter, characteristic data is identified for each target pixel in each target segment in the set of target segments (operation 1502). The characteristic data for a target pixel can be, for example, a characteristic vector that includes color data, pixel location data, entropy data, other pixel data, or a combination of the above for that target pixel.
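As one assumed, concrete instance of turning the characteristic data of operation 1502 into a fingerprint, the spatially agnostic, non-parametric choice (histogram 218 of figure 2) reduces, for a color image, to a normalized joint histogram over the color values of the target pixels; the bin count is an assumed example value.

```python
import numpy as np

def histogram_fingerprint(image, target_segment_mask, bins=8):
    """Sketch: a non-parametric, spatially agnostic fingerprint for one
    target segment, built only from the target pixels inside the segment."""
    target_pixels = image[target_segment_mask].astype(float)  # shape (N, channels)
    # Joint histogram over the color channels, normalized so that fingerprints
    # from segments of different sizes remain comparable.
    hist, _ = np.histogramdd(target_pixels, bins=bins,
                             range=[(0, 256)] * target_pixels.shape[1])
    return hist / hist.sum()
```

A spatiogram 216 would additionally record, per histogram bin, the spatial mean and covariance of the contributing pixel locations, making the model spatially aware.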
[0133] Fitted data is then generated for each target segment in the set of target segments based on the characteristic data identified for the target pixels of each target segment (operation 1504). Next, a fingerprint is created for each target segment in the set of target segments based on the fitted data (operation 1506), with the process terminating thereafter. In operation 1506, a set of fingerprints is created for the set of target segments. These fingerprints are stored for future detection and tracking of moving objects in subsequent images.

[0134] With reference now to figure 16, an illustration of a process for forming a set of main segments in the form of a flowchart is presented according to an illustrative embodiment. The process illustrated in figure 16 is an example of one manner in which operation 1308 of figure 13 can be implemented. This process can be performed using the fingerprint device 133 in figures 1 and 2.

[0135] The process begins by creating a plurality of segment fingerprints for the plurality of segments in the segmented image (operation 1600). In operation 1600, a segment fingerprint is created for each segment in the plurality of segments.

[0136] In an illustrative example, a Gaussian covariance model can be used to create each segment fingerprint. The model used can be as follows:

[0137] In some illustrative examples, not all of the segments in the plurality of segments are fingerprinted. A set of criteria can be used to determine whether a segment in the plurality of segments is fingerprinted. These criteria may include, for example, without limitation, that the number of pixels in the segment is greater than twelve; that the characteristic data for all of the pixels in a segment are not constant; that the segment has a height and a width in pixels that are greater than one; that the distance in pixels between the edge of the image and the segment is not less than a selected threshold; that the segment is less than half the size of the entire image in pixels; and/or other types of criteria.

[0138] Thereafter, a background fingerprint is created for the background of the segmented image (operation 1602). In operation 1602, the background of the segmented image can be all parts of the image excluding the plurality of segments. The background fingerprint can also be created using a Gaussian covariance model.

[0139] Each segment fingerprint is matched against a set of previous segment fingerprints as well as the background fingerprint to form a set of matched segment fingerprints (operation 1604). In operation 1604, this matching can be performed in a number of different ways. For example, a similarity score can be used to perform the matching in operation 1604. In some cases, image registration is used to perform the matching in operation 1604.

[0140] In an illustrative example, the Kullback-Leibler divergence value between each segment fingerprint and each previously identified segment fingerprint can be computed. Each segment fingerprint that matches one of the set of previous segment fingerprints with a Kullback-Leibler divergence value below a selected threshold can be added to the set of matched segment fingerprints. Segment fingerprints that match the background fingerprint with a Kullback-Leibler divergence value below a selected threshold can be excluded from the set of matched segment fingerprints.
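For fingerprints realized as Gaussian covariance models, the Kullback-Leibler divergence value referenced above has a standard closed form (a well-known identity, not quoted from the present description): for two k-dimensional Gaussians $\mathcal{N}_0(\mu_0, \Sigma_0)$ and $\mathcal{N}_1(\mu_1, \Sigma_1)$,

$$D_{\mathrm{KL}}(\mathcal{N}_0 \,\|\, \mathcal{N}_1) = \frac{1}{2}\left[\operatorname{tr}\!\left(\Sigma_1^{-1}\Sigma_0\right) + (\mu_1-\mu_0)^{\top}\Sigma_1^{-1}(\mu_1-\mu_0) - k + \ln\frac{\det\Sigma_1}{\det\Sigma_0}\right]$$

so that a value below the selected threshold indicates closely agreeing means and covariances, and hence a likely match.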
[0141] [000141] Thereafter, the process merges together the segments that correspond to segment fingerprints in the set of matched segment fingerprints that match each other and that are adjacent to each other to form a set of main segments (operation 1606), with the process terminating thereafter. For example, in operation 1606, a first segment fingerprint and a second segment fingerprint in the set of matched segment fingerprints that correspond to a first segment and a second segment, respectively, that are adjacent to each other are identified. A determination can be made as to whether a similarity score between the first segment fingerprint and the second segment fingerprint is within a selected threshold.
[0142] [000142] The first segment and the second segment can be merged together in response to a determination that the similarity score between the first segment fingerprint and the second segment fingerprint is within the selected threshold. In operation 1606, the first segment and the second segment are at least one of merged together to form a new main segment to be added to the set of main segments and merged into an existing main segment in the set of main segments.
[0143] [000143] The flowcharts and block diagrams in the different embodiments presented illustrate the architecture, functionality, and operation of some possible implementations of apparatus and methods in an illustrative embodiment. In this regard, each block in the flowcharts or block diagrams can represent a module, a segment, a function, and/or a portion of an operation or step. For example, one or more of the blocks can be implemented as program code, in hardware, or as a combination of program code and hardware. When implemented in hardware, the hardware can, for example, take the form of integrated circuits that are manufactured or configured to perform one or more operations in the flowcharts or block diagrams.
[0144] [000144] In some alternative implementations of an illustrative embodiment, the function or functions noted in the blocks may occur out of the order noted in the figures. For example, in some cases, two blocks shown in succession can be executed substantially concurrently, or the blocks can sometimes be performed in the reverse order, depending on the functionality involved. In addition, other blocks can be added to the blocks illustrated in a flowchart or block diagram.
[0145] [000145] Turning now to figure 17, an illustration of a data processing system in the form of a block diagram is presented according to an illustrative embodiment. In this illustrative example, data processing system 1700 can be used to implement one or more computers in computer system 122 in figure 1.
[0146] [000146] In this illustrative example, data processing system 1700 includes communications framework 1702, which provides communications between processor unit 1704, memory 1706, persistent storage 1708, communications unit 1710, input/output unit 1712, and display 1714. Communications framework 1702 can be implemented as a bus system in some examples.
[0147] [000147] Processor unit 1704 serves to execute instructions for software that is loaded into memory 1706 to perform a number of operations. Processor unit 1704 can be a number of processors, a multiprocessor core, or some other type of processor, depending on the particular implementation.
In some cases, processor unit 1704 may take the form of a hardware unit, such as a circuit system, an application-specific integrated circuit (ASIC), a programmable logic device, or some other suitable type of hardware.
[0148] [000148] In some cases, motion detector 124 and/or object tracker 126 of figure 1 can be implemented as processors within processor unit 1704. Additionally, image segmentator 130, consistency checker 132, and fingerprint device 133 of figure 1 can be implemented as modules within one or more processors in processor unit 1704.
[0149] [000149] Memory 1706 and persistent storage 1708 are examples of storage devices 1716. Storage devices 1716 may be in communication with processor unit 1704 through communications framework 1702. A storage device, also referred to as a computer-readable storage device, is any piece of hardware capable of storing information, such as, for example, without limitation, data, program code in functional form, and/or other suitable information on a temporary and/or permanent basis. Memory 1706 can be, for example, a random access memory or any other suitable volatile or non-volatile storage device.
[0150] [000150] Persistent storage 1708 can take various forms and comprise any number of components or devices, depending on the particular implementation. For example, persistent storage 1708 can be a hard disk, a flash memory, a rewritable optical disc, a rewritable magnetic tape, or some combination of the above. Depending on the implementation, the media used by persistent storage 1708 may or may not be removable.
[0151] [000151] Communications unit 1710, in these examples, provides communications with other data processing systems or devices. Communications unit 1710 can provide communications through the use of either or both physical and wireless communications links.
[0152] [000152] Input/output unit 1712 allows input and output of data with other devices that can be connected to data processing system 1700. For example, input/output unit 1712 can provide a connection for user input through a keyboard, a mouse, and/or some other suitable input device and/or can send output to a printer. Display 1714 provides a mechanism for displaying information to a user.
[0153] [000153] Instructions for the operating system, applications, and/or programs can be located on storage devices 1716. The processes of the different embodiments can be performed by processor unit 1704 using computer-implemented instructions. These instructions are referred to as program code, computer-usable program code, or computer-readable program code and can be read and executed by one or more processors in processor unit 1704.
[0154] [000154] In these examples, program code 1718 is located in a functional form on computer-readable medium 1720 that is selectively removable and can be loaded onto or transferred to data processing system 1700 for execution by processor unit 1704. Program code 1718 and computer-readable medium 1720 form computer program product 1722 in these examples. In some illustrative examples, motion detector 124 and/or object tracker 126 of figure 1 can be implemented within computer program product 1722. In some cases, image segmentator 130, consistency checker 132, and fingerprint device 133 of figure 1 can be implemented as software modules in program code 1718.
[0155] [000155] Computer-readable medium 1720 may take the form of computer-readable storage medium 1724 or computer-readable signal medium 1726. Computer-readable storage medium 1724 is a physical or tangible storage device used to store program code 1718 rather than a medium that propagates or transmits program code 1718. Computer-readable storage medium 1724 can take the form of, for example, without limitation, an optical or magnetic disk or a persistent storage device that is connected to data processing system 1700.
[0156] [000156] Alternatively, program code 1718 can be transferred to data processing system 1700 using computer-readable signal medium 1726. Computer-readable signal medium 1726 can be, for example, without limitation, a propagated data signal containing program code 1718. This data signal can be an electromagnetic signal, an optical signal, and/or some other suitable type of signal that can be transmitted over communications links that are physical and/or wireless.
[0157] [000157] The different components illustrated for data processing system 1700 are not meant to provide architectural limitations on the manner in which the different embodiments can be implemented. The different illustrative embodiments can be implemented in a data processing system including components in addition to or in place of those illustrated for data processing system 1700. Other components shown in figure 17 can vary from the illustrative examples shown. The different embodiments can be implemented using any hardware device or system capable of running program code. As one example, the data processing system may include organic components integrated with inorganic components and/or may consist entirely of organic components excluding a human being. For example, a storage device may consist of an organic semiconductor.
[0158] [000158] Additionally, the invention can comprise the following embodiments: an image processing system comprising: an image segmentator configured to segment a current image in a sequence of images into a plurality of segments to form a segmented image and to merge together the segments in the plurality of segments belonging to a same movement profile to form a set of main segments; a consistency checker configured to identify a set of target segments from the set of main segments, wherein the set of target segments represents a set of moving objects in the current image; and a fingerprint device configured to create a set of fingerprints for use in tracking the set of moving objects in a number of subsequent images in the sequence of images.
[0159] [000159] Advantageously, the system further comprises a motion detector configured to form a moving image using the current image, wherein the moving image includes a set of movement profiles. Advantageously, the motion detector is further configured to identify the local movement and the global movement in the current image and subtract the global movement from the local movement in the current image to form the moving image. Advantageously, the consistency checker is further configured to determine whether a main segment in the set of main segments is to be added to the set of target segments based on main statistics generated for the main segment. Advantageously, the image segmentator is configured to generate the main statistics for the main segment by generating segment data for the main segment and fitting the segment data to a mathematical model. Advantageously, the mathematical model is a generalized linear model.
Advantageously, the fingerprint device is further configured to perform a feature analysis of a target segment in the set of target segments to form a fingerprint for the target segment to be added to the set of fingerprints. Advantageously, the fingerprint device is further configured to identify feature data for each target segment in the set of target segments, fit the feature data to a number of mathematical models to generate fitted data, and create the set of fingerprints using the fitted data. Advantageously, the number of mathematical models includes at least one of a spatial generalized linear model, a feature-only generalized linear model, a spatiogram, and a histogram.
[0160] [000160] A computer-implemented method for tracking moving objects in a sequence of images, the computer-implemented method comprising: segmenting a current image in the sequence of images into a plurality of segments; merging together segments in the plurality of segments belonging to a same movement profile to form a set of main segments; identifying a set of target segments from the set of main segments, wherein the set of target segments represents a set of moving objects in the current image; and creating a set of fingerprints for use in tracking the set of moving objects in a number of subsequent images in the sequence of images.
[0161] [000161] Advantageously, the method further comprises forming a moving image using the current image, wherein the moving image includes a set of movement profiles. Advantageously, the step of forming the moving image comprises identifying the local movement and the global movement in the current image; and subtracting the global movement from the local movement to form the moving image. Advantageously, the step of merging together the segments in the plurality of segments belonging to the same movement profile to form the set of main segments comprises merging together the segments in the plurality of segments belonging to the same movement profile in the set of movement profiles in the moving image to form the set of main segments. Advantageously, the step of identifying the set of target segments from the set of main segments comprises generating main statistics for a main segment in the set of main segments; and determining whether the main segment is to be added to the set of target segments based on the main statistics for the main segment. Advantageously, the step of generating the main statistics for the main segment in the set of main segments comprises generating segment data for the main segment, and fitting the segment data to a mathematical model to generate the main statistics for the main segment. Advantageously, fitting the segment data to the mathematical model to generate the main statistics for the main segment comprises fitting the segment data to the mathematical model to generate the main statistics for the main segment, wherein the mathematical model is a generalized linear model. Advantageously, the step of creating the set of fingerprints for use in tracking the set of moving objects in the number of subsequent images in the sequence of images comprises performing a feature analysis of a target segment in the set of target segments to form a fingerprint in the set of fingerprints for the target segment.
Advantageously, the step of creating the set of fingerprints for use in tracking the set of moving objects in the number of subsequent images in the sequence of images comprises identifying feature data for each target segment in the set of target segments; fitting the feature data to a number of mathematical models to generate fitted data; and creating the set of fingerprints using the fitted data. Advantageously, the step of fitting the feature data to the number of mathematical models to generate the fitted data comprises fitting the feature data to the number of mathematical models to generate the fitted data, wherein the number of mathematical models includes at least one of a spatial generalized linear model, a feature-only generalized linear model, a spatiogram, and a histogram.
[0162] [000162] A computer-implemented method for tracking moving objects in a sequence of images, the computer-implemented method comprising: identifying the local movement and the global movement in a current image; subtracting the global movement from the local movement to form a moving image, wherein the moving image includes a set of movement profiles; segmenting the current image in the sequence of images into a plurality of segments to form a segmented image; merging together segments in the plurality of segments belonging to a same movement profile to form a main image having a set of main segments; identifying a set of target segments from the set of main segments to form a target image, wherein the set of target segments represents a set of moving objects in the current image; and creating a set of fingerprints for use in tracking the set of moving objects in a number of subsequent images in the sequence of images.
[0163] [000163] The description of the different illustrative embodiments has been presented for purposes of illustration and description and is not intended to be exhaustive or limited to the embodiments in the form described. Many modifications and variations will be apparent to those skilled in the art.
[0164] [000164] Further, different illustrative embodiments may provide different features compared to other illustrative embodiments. The embodiment or embodiments selected are chosen and described in order to best explain the principles of the embodiments and the practical application, and to enable others skilled in the art to understand the description for various embodiments with various modifications as are suited to the particular use contemplated.
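For orientation only, the following deliberately simplified sketch ties the stages described above into a single per-frame pass. It reuses the hypothetical helpers pixel_feature_vectors, gaussian_fingerprint, and kl_divergence sketched earlier, omits segment merging and movement-profile handling, and uses an assumed threshold; it is not the claimed method:

def track_frame(image, segment_masks, prev_segment_fps, background_fp,
                prev_target_fps, kl_thresh=2.0):
    """One frame of a fingerprint-based tracking pass (simplified sketch).

    segment_masks: boolean masks from an external segmentation step.
    Returns the fingerprints of target segments found in this frame.
    """
    # Fingerprint every segment in the segmented image (operation 1600).
    segment_fps = [gaussian_fingerprint(pixel_feature_vectors(image, m))
                   for m in segment_masks]

    # Keep segments that match a previous segment fingerprint and do not
    # match the background fingerprint (operation 1604, simplified).
    main_fps = [fp for fp in segment_fps
                if kl_divergence(fp, background_fp) >= kl_thresh
                and any(kl_divergence(fp, p) < kl_thresh
                        for p in prev_segment_fps)]

    # Consistency check (operations 1400 to 1410, simplified): a main
    # segment becomes a target segment only if it stays similar to a
    # previously identified target segment.
    return [fp for fp in main_fps
            if any(kl_divergence(fp, t) < kl_thresh for t in prev_target_fps)]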
Claims (11)
[0001] Image processing system (104), characterized by the fact that it comprises:
an image segmentator (130) configured to segment a current image (128) in a sequence of images (110) into a plurality of segments (138) to form a segmented image (140) and to merge together the segments in the plurality of segments (138) belonging to a same movement profile to form a set of main segments (142), each main segment (142) representing a set of moving objects or a set of image anomalies, the image segmentator (130) being configured to generate main statistics (144) for the set of main segments (142) by generating segment data (146) for a main segment (152) and fitting the segment data (146) to a mathematical model (148); and
a consistency checker (132) configured to identify a set of target segments (154) from the set of main segments (142) by determining whether a difference between the main statistics (144) for a main segment (152) and the main statistics identified for a previously identified main segment of a previous image (134) is greater than a threshold, wherein the set of target segments (154) represents a set of moving objects (155) in the current image (128); and
a fingerprint device (133) configured to create a set of target segment fingerprints (156) for the target segments, each target segment fingerprint (156) defining a description of a unique set of features for a moving object, the set of target segment fingerprints (156) being for use in tracking the set of moving objects (155) in a number of subsequent images (162) in the sequence of images (110), wherein:
the fingerprint device (133) is further configured to create a plurality of segment fingerprints (164) for the plurality of segments (138) in the segmented image (140), each segment fingerprint in the plurality of segment fingerprints (164) being a fingerprint for a corresponding segment in the plurality of segments (138);
the fingerprint device is further configured to create a plurality of target segment fingerprints (156) for the plurality of target segments, wherein each target segment fingerprint in the plurality of target segment fingerprints (156) is a fingerprint for a corresponding target segment in the plurality of target segments (138); and
the image segmentator (130) uses the plurality of segment fingerprints (164) to determine which of the plurality of segments (138) to merge together to form the set of main segments (142).
[0002] Image processing system (104) according to claim 1, characterized by the fact that it further comprises:
a motion detector (124) configured to form a moving image (136) using the current image (128), wherein the moving image (136) includes a set of movement profiles (135).
[0003] Image processing system (104) according to claim 2, characterized by the fact that the motion detector (124) is further configured to identify the local movement and the global movement in the current image (128) and subtract the global movement from the local movement in the current image (128) to form the moving image (136).
[0004] Image processing system (104) according to any one of the preceding claims, characterized by the fact that the mathematical model (148) is a generalized linear model (212).
[0005] Image processing system (104) according to claim 1, characterized by the fact that the fingerprint device (133) is further configured to perform a feature analysis (206) of a target segment (230) in the set of target segments (154) to form a fingerprint (232) for the target segment to be added to the set of target segment fingerprints (156); to identify feature data (208) for each target segment (230) in the set of target segments (154), fit the feature data (208) to a number of mathematical models (210) to generate fitted data (228), and create the set of fingerprints (156) using the fitted data (228); and wherein the number of mathematical models (210) includes at least one of a spatial generalized linear model (212), a feature-only generalized linear model (214), a spatiogram (216), and a histogram (218).
[0006] Computer-implemented method for tracking moving objects (155) in a sequence of images (110), the computer-implemented method characterized by the fact that it comprises:
segmenting a current image (128) in the sequence of images (110) into a plurality of segments (138);
creating a plurality of segment fingerprints (164) for the plurality of segments (138), wherein each segment fingerprint in the plurality of segment fingerprints (164) is a fingerprint for a corresponding segment in the plurality of segments (138) and a fingerprint defines a description of a unique set of features of a moving object;
merging together segments in the plurality of segments (138) belonging to a same movement profile to form a set of main segments (142), each main segment (142) representing a set of moving objects or a set of image anomalies, using the plurality of segment fingerprints (164) to determine which of the plurality of segments (138) to merge together;
identifying a set of target segments (154) from the set of main segments (142) by generating main statistics (144) for a main segment (152) in the set of main segments (142), and determining whether a difference between the main statistics (144) for the main segment (152) and the main statistics identified for a previously identified main segment of a previous image (134) is greater than a threshold, wherein the set of target segments (154) represents a set of moving objects (155) in the current image (128); and
creating a set of target segment fingerprints (156) for the target segments, the set of target segment fingerprints (156) being for use in tracking the set of moving objects (155) in a number of subsequent images (162) in the sequence of images (110),
wherein the step of generating the main statistics (144) for the main segment (152) in the set of main segments (142) comprises:
generating segment data (146) for the main segment (152); and
fitting the segment data (146) to a mathematical model (148) to generate the main statistics (144) for the main segment (152).
[0007] Computer-implemented method according to claim 6, characterized by the fact that it further comprises:
forming a moving image (136) using the current image (128), wherein the moving image (136) includes a set of movement profiles (135); and wherein the step of forming the moving image (136) comprises:
identifying the local movement and the global movement in the current image (128); and
subtracting the global movement from the local movement to form the moving image (136).
[0008] Computer-implemented method according to claim 7, characterized by the fact that the step of merging together the segments in the plurality of segments (138) belonging to the same movement profile to form the set of main segments (142) comprises:
merging together segments in the plurality of segments (138) belonging to the same movement profile in the set of movement profiles (135) in the moving image (136) to form the set of main segments (142).
[0009] Computer-implemented method according to any one of claims 6 to 8, characterized by the fact that the mathematical model (148) is a generalized linear model (212).
[0010] Computer-implemented method according to claim 6, characterized by the fact that the step of creating the set of target segment fingerprints (156) for use in tracking the set of moving objects (155) in the number of subsequent images (162) in the sequence of images (110) comprises:
performing a feature analysis (206) of a target segment (230) in the set of target segments (154) to form a fingerprint (232) in the set of target segment fingerprints (156) for the target segment (230).
[0011] Computer-implemented method according to claim 6, characterized by the fact that the step of creating the set of target segment fingerprints (156) for use in tracking the set of moving objects (155) in the number of subsequent images (162) in the sequence of images (110) comprises:
identifying feature data (208) for each target segment (230) in the set of target segments (154);
fitting the feature data (208) to a number of mathematical models (210) to generate fitted data (228);
creating the set of target segment fingerprints (156) using the fitted data (228); and
wherein the step of fitting the feature data (208) to the number of mathematical models (210) to generate the fitted data (228) comprises:
fitting the feature data (208) to the number of mathematical models (210) to generate the fitted data (228), wherein the number of mathematical models (210) includes at least one of a spatial generalized linear model (212), a feature-only generalized linear model (214), a spatiogram (216), and a histogram (218).