Patent abstract:
Unsupervised parameter settings for object tracking algorithms. The present invention relates to a method for automatically optimizing a set of parameters for a tracking algorithm, comprising receiving a series of image frames and processing the image frames using a tracking algorithm with an initialized set of parameters. A set of updated parameters is then created, according to the processed image frames, using estimated tracking components. The parameters are validated using a performance measure, which can be conducted manually or automatically using a GUI. Image frames are collected from a video camera with a fixed setting at a fixed location. Image frames can include a traffic training video or a training video for tracking humans.
Publication number: BR102012021598B1
Application number: R102012021598-5
Filing date: 2012-08-28
Publication date: 2021-04-06
Inventors: Wencheng Wu; Beilei Xu
Applicant: Xerox Corporation
IPC main class:
Patent description:

[0001] The embodiments are generally related to the field of computer applications. The embodiments are also related to methods and systems for tracking objects.
BACKGROUND OF THE INVENTION
[0002] Object tracking has become increasingly prevalent in modern applications. This is particularly true in the field of vehicle tracking. Therefore, it is increasingly necessary to optimize tracking algorithms and the corresponding parameter adjustments.
[0003] For example, a rule specifying that a vehicle cannot climb a wall can be beneficial in the development of a tracking algorithm. In practice, a common solution is to have a human being manually specify the regions suitable for the detection and tracking of objects, and to ignore other regions, such as walls. However, human intervention in these algorithms is expensive, time-intensive and error-prone. Therefore, it would be beneficial to automate the parameter adjustment process for tracking algorithms using application-dependent and environment-dependent information.
BRIEF SUMMARY
[0004] The present invention relates to a method and a system for parameter optimization, which comprise receiving a series of image frames and processing the image frames using a tracking algorithm with an initialized set of parameters. An updated set of parameters is then created according to the processed image frames, and validated using a performance measure, thereby automatically optimizing the set of parameters for the tracking algorithm. Analytical tracking components can also be estimated according to the image frames. The creation of the updated set of parameters may also include deriving the updated parameter set using the estimated analytical tracking components. Validation of the updated parameter set may further include manually inspecting and/or modifying the updated parameter set using a graphical user interface associated with a computer. The approach described in this specification includes collecting image frames from a video camera with a fixed setting at a fixed location, such as near a roadway or sidewalk. Image frames can include a traffic training video or a training video for tracking humans.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] Figure 1 illustrates a block diagram of a computerized system, which is implemented according to the described embodiments.
[0006] Figure 2 illustrates a graphical representation of a network of data processing devices, in which aspects of the present invention can be implemented.
[0007] Figure 3 illustrates a high-level flowchart illustrating the logical operational steps in a method of parameter optimization, according to the described embodiments.
[0008] Figure 4 illustrates a detailed flowchart illustrating the logical operational steps for deriving an updated set of parameters, according to the described embodiments.
[0009] Figure 5 illustrates a representation of an image frame, according to the described embodiments.
[0010] Figure 6a illustrates an exemplary graphical user interface for validating and modifying parameter settings in the "Detection zone" mode.
[0011] Figure 6b illustrates an exemplary graphical user interface for validating and modifying parameter settings in the "Virtual Loops" mode.
DETAILED DESCRIPTION
[0012] A block diagram of a computerized system 100, which performs programming to execute the methods and systems described in this specification, is shown in figure 1. A generic computing device, in the form of a computer 110, can include a processing unit 102, memory 104, removable storage 112 and non-removable storage 114. Memory 104 may include volatile memory 106 and non-volatile memory 108. Computer 110 may include or have access to a computing environment, which includes various computer-readable media, such as volatile memory 106, non-volatile memory 108, removable storage 112 and non-removable storage 114. Computer storage includes, for example, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) and electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), Digital Versatile Discs (DVDs) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium capable of storing computer-readable instructions, as well as data, including video frames.
[0013] Computer 110 may include or have access to a computing environment, which includes input 116, output 118, and a communication connection 120. The computer may operate in a networked environment, using a communication connection to connect to one or more remote computers or devices. The remote computer can include a personal computer (PC), a server, a networked PC, a peer-to-peer device or other common network node, or the like. The remote device may include a camera, video camera, tracking device, or the like. The communication connection can include a Local Area Network (LAN), a Wide Area Network (WAN), or other networks. This functionality is described in more detail in figure 2.
[0014] Output 118 is most commonly provided as a computer monitor, but can include any computer output device. Output 118 may also include a data collection device associated with computer system 100. In addition, input 116, which commonly includes a computer keyboard and/or a pointing device, such as a computer mouse, allows a user to select and instruct computer system 100. A user interface can be provided using output 118 and input 116.
[0015] Output 118 can function as a screen to display data and information to a user, and to interactively display a graphical user interface (GUI) 130. An exemplary embodiment of GUI 130 is provided in figures 6a and 6b, described below.
[0016] Note that the term "GUI" refers, in general, to a type of environment that represents programs, files, options and so on, through icons, menus and dialog boxes displayed graphically on a computer monitor screen. A user can interact with the GUI to select and activate these options by directly touching the screen and/or pointing and clicking with a user input device 116, such as, for example, a pointing device, such as a mouse, and/or a keyboard. A particular item can work in the same way for the user in all applications, because the GUI provides standard software routines (for example, module 125) to handle these elements and record the actions of users. The GUI can also be used to display electronic service image frames, as discussed below.
[0017] Computer-readable instructions, for example, program module 125, are stored in a computer-readable medium and are executable by processing unit 102 of computer 110. Program module 125 may include a computer application. A hard disk drive, a CD-ROM, a RAM, a flash memory, and a USB drive are just a few examples of articles including a computer-readable medium.
[0018] Figure 2 illustrates a graphical representation of a network of data processing systems 200, in which aspects of the present invention can be implemented. The network data processing system 200 is a computer network in which the embodiments of the present invention can be implemented. Note that system 200 can be implemented in the context of a software module, such as program module 125. System 200 includes a network 202 in communication with one or more clients 210, 212 and 214. Network 202 is a medium which can be used to provide communication links between the various devices and computers connected together within a networked data processing system, such as computer system 100. Network 202 may include connections such as wired communication links, wireless communication links, or fiber optic cables. Network 202 may include communication with one or more servers 204 and 206 and a memory storage unit, such as, for example, a memory or database 208.
[0019] In the illustrated example, server 204 and server 206 are connected to network 202, together with storage unit 208. In addition, clients 210, 212 and 214 are connected to network 202. These clients 210, 212 and 214 can be, for example, personal computers or networked computers. The computer system 100, illustrated in figure 1, can be, for example, a client, such as clients 210, 212 and/or 214. Alternatively, clients 210, 212 and 214 can be, for example, a photographic camera, a video camera, a tracking device, etc.
[0020] Computational system 100 can also be implemented as a server, such as servers 204 and/or 206, depending on design considerations. In the illustrated example, server 204 provides data, such as boot files, operating system images, and applications and application updates, to clients 210, 212 and 214. Clients 210, 212 and 214 are, in this example, clients of server 204. The network data processing system 200 may include additional servers, clients and other devices not shown. Specifically, clients can connect to any element of a server network that provides equivalent content.
[0021] In the illustrated example, the network data processing system 200 is the Internet, with network 202 representing a worldwide collection of networks and gateways, which use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with each other. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, governmental, educational and other computer systems that route data and messages. Of course, the network data processing system 200 can also be implemented as several different types of networks, such as, for example, an Intranet, a local area network (LAN) or a wide area network (WAN). Figures 1 and 2 are intended as examples and not as architectural limitations for the different embodiments of the present invention.
[0022] The following description is presented in relation to the embodiments of the present invention, which can be described in the context of a data processing system, such as computer system 100, together with program 125, and the data processing system 200 and network 202, illustrated in figures 1 and 2. The present invention, however, is not limited to any particular application or any particular environment. Instead, those skilled in the art will find that the system and methods of the present invention can be applied advantageously to a variety of system software and applications, including database management systems, word processors and the like. Furthermore, the present invention can be implemented on various different platforms, including Macintosh, UNIX, LINUX and the like. Therefore, the descriptions of the exemplary embodiments presented below are for illustrative purposes and are not to be considered a limitation.
[0023] Figure 3 illustrates a high-level flowchart 300 of logical operational steps associated with a method of parameter optimization, according to the described embodiments. This method provides unsupervised optimization of parameters used in vehicle or object tracking algorithms. The method starts at block 305.
[0024] Block 310 indicates that training data is received, for example, on a computer system 100. Training data can include video data, such as data from a traffic video camera, a pedestrian video camera, or another tracking camera. In a preferred embodiment, the training video data and associated image frames are obtained from a camera with a fixed setting at a fixed location. This location can be a traffic signal, an intersection or any other point of interest.
[0025] A person skilled in the art will appreciate that, when a camera is installed, the associated camera settings, such as mounting height, tilt angle, zoom, pan, camera location, etc., will differ according to the camera's location. It may therefore be necessary to adjust the parameters used by the tracking algorithms. These parameters can include the parameters described below.
[0026] A detection zone parameter is associated with the regions in which vehicle detection is performed. A generic detection zone parameter would use the entire region. This parameter can be adjusted, manually or automatically, to limit the detection zone to a specific region of interest. This decreases false readings from areas that are not of interest and/or are typically problematic for detection and tracking algorithms (for example, occlusion due to another fixed object in the scene, branches that move or shake in the wind, etc.), and improves computational efficiency. For example, a full-intersection detection zone may include branches moving in the wind or a side lane that is not of interest. The detection zone parameter can be adjusted to ignore those regions of the detection zone.
[0027] Threshold parameters for vehicle size can be used to determine whether a moving object in a detection zone is a vehicle. A reasonable range of vehicle sizes, in real dimensions, can be established. However, camera settings can dramatically affect the sizes, in image pixel units, of objects seen by the camera. The image pixel units must therefore be converted into real-world coordinates, which requires a related set of environment-dependent parameters to be incorporated into the parameter adjustment method.
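By way of illustration only (this sketch is not part of the original specification), the pixel-to-real conversion described above can be approximated with a 3x3 homography H mapping image coordinates to road-plane coordinates in meters; the thresholds and function names below are hypothetical:

import numpy as np

def to_road_plane(H, px, py):
    # Map an image point (pixels) to road-plane coordinates (meters).
    p = H @ np.array([px, py, 1.0])
    return p[:2] / p[2]  # normalize the homogeneous coordinate

def real_length_m(H, p1, p2):
    # Real-world distance between two image points, in meters.
    return float(np.linalg.norm(to_road_plane(H, *p1) - to_road_plane(H, *p2)))

# Keep a detection only if its real-world length falls within a plausible
# vehicle-size range (the thresholds are illustrative placeholders).
MIN_LEN_M, MAX_LEN_M = 2.5, 20.0

def is_vehicle_sized(H, box):  # box = (x1, y1, x2, y2) in pixels
    length = real_length_m(H, (box[0], box[3]), (box[2], box[3]))
    return MIN_LEN_M <= length <= MAX_LEN_M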
[0028] A trigger parameter can be used to determine the timing for triggering a vehicle's image capture, license plate identification, or other identification. The trigger parameter can trigger one or more image frames for vehicle identification when a new vehicle is detected. For example, when a vehicle passes through the detection zone, the optimal time to capture the vehicle's license plate, in order to identify the vehicle, may occur as the vehicle passes through the lower right quadrant of the detection zone. The trigger parameter can also depend on the relative direction of traffic, the angle of view of the camera, etc. All of these factors can be used to find the region in the detection zone that corresponds to the largest image of the vehicle, so that the vehicle's license plate is most recognizable.
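Assuming the tracker reports a per-frame bounding box for each object, a trigger rule of this kind can be sketched as picking the frame where the tracked object (or its plate region) appears largest; the data layout is a hypothetical choice:

def best_trigger_frame(track):
    # track: list of (frame_index, (x1, y1, x2, y2)) observations for one object.
    # Returns the frame index at which the object's pixel area peaks, taken
    # here as the optimal moment to capture the license plate.
    def area(box):
        x1, y1, x2, y2 = box
        return max(0, x2 - x1) * max(0, y2 - y1)
    frame_index, _ = max(track, key=lambda obs: area(obs[1]))
    return frame_index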
[0029] Speed measurement parameters can be used to measure or derive the speed of a vehicle in the detection zone. For example, virtual loops can be used to measure speed. They generally consist of at least a pair of virtual lines, arranged orthogonally to the lanes in the detection zone, with a known distance between the lines. Speed can be derived by dividing the known distance between the lines by the time it takes for a vehicle to travel from one line to the other. Preferably, the physical distance between the virtual lines is significant, but not too far from the camera's focus. If a line is too far from the focus, tracking becomes weaker, as the moving object becomes smaller. It is also preferable to avoid areas of occlusion, as object tracking can be poor in those areas. For example, if the upper right quadrant is out of focus or occluded by trees, it is a poor choice for the placement of a virtual line, which will therefore need to be adjusted using the speed measurement parameter.
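The speed derivation itself reduces to known distance divided by travel time, as in the following sketch (the crossing times are assumed to be interpolated from the trajectory, and the line spacing is assumed known in meters):

def loop_speed_kmh(t_line1_s, t_line2_s, line_gap_m):
    # Virtual-loop speed: known distance between the lines divided by the
    # time taken to travel from one line to the other, converted to km/h.
    dt = abs(t_line2_s - t_line1_s)
    if dt == 0:
        raise ValueError("the two lines must be crossed at distinct times")
    return (line_gap_m / dt) * 3.6

# e.g. crossing line 520 at t = 12.40 s and line 521 at t = 13.15 s,
# with 10 m between the lines, gives 48 km/h:
print(loop_speed_kmh(12.40, 13.15, 10.0))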
[0030] Other parameters, such as rules for classifying lanes, the size of the license plate in image pixel coordinates, the projective distortion, models of common vehicle trajectories, the flow of traffic, the time of day, and the trajectory information threshold, can all be used in the present invention. The foregoing description of parameters is not complete or exhaustive, and is not intended to limit the scope of this patent application. Instead, these parameters are intended to illustrate the nature of the parameters that can be used in accordance with the present invention. One skilled in the art will appreciate that any number of other parameters can also be used, as necessary, in conjunction with the present invention.
[0031] The method then continues at block 320, which shows that the collected training data is processed using a selected algorithm with an initial set of parameters. The tracking algorithm can be, for example, a vehicle tracking algorithm. The initialization of the parameters is selected according to past experience with cameras located in similar situations. The initial values of the parameters need only be estimated approximately. The training data is processed to estimate analytical tracking components, such as sizes and trajectories. The result can be a list of objects, corresponding to the objects moving through the detection zone, determined from the initial set of parameters. However, this data is probably not optimal with the initialized parameters, likely resulting in significant computational waste, falsely detected objects corresponding to the movement of unimportant objects, occlusion by branches or other natural obstacles, etc. These analytical video components are illustrated by arrow 325. Thus, the parameters need to be adjusted.
[0032] Block 330 illustrates that a set of updated parameters is derived. In order to update the parameter set without supervision, it is necessary to further develop methods of parameter derivation. This is shown in figure 4, which illustrates an example of the detailed steps associated with block 330 of figure 3.
[0033] First, the method associated with step 330 starts, as indicated by block 405. Next, representative video analytical components are selected, as described by block 410. In this step, all detected objects whose trajectories are overly localized, according to a predetermined threshold, or that persist for only a very short time, are removed. This step helps to eliminate false detections associated with moving branches, or partial tracks due to occlusion.
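A hedged sketch of this selection step, assuming each trajectory is a list of timestamped (x, y) points; the two thresholds are illustrative placeholders, not values from this specification:

import math

MIN_SPAN_PX = 50.0    # reject trajectories that barely move (e.g. waving branches)
MIN_DURATION_S = 1.0  # reject trajectories that persist only briefly (occlusion fragments)

def is_representative(traj):
    # traj: list of (t, x, y) observations, in temporal order.
    (t0, x0, y0), (t1, x1, y1) = traj[0], traj[-1]
    span = math.hypot(x1 - x0, y1 - y0)
    return span >= MIN_SPAN_PX and (t1 - t0) >= MIN_DURATION_S

# usage: representative = [t for t in detected_tracks if is_representative(t)]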
[0034] In block 420, the video analytical components are grouped and/or merged. In this step, the trajectories are grouped based on their displacement directions. As an example, this step may include, for each trajectory, the application of a temporal filtering technique, the fitting of a polynomial curve, the calculation of the direction of displacement, and the classification of the trajectory using a K-means clustering approach. A person skilled in the art will, of course, appreciate that other trajectory-grouping methods can be used for more complex road configurations, for example, crossing intersections, roundabouts, highways, etc.
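One plausible reading of block 420, sketched with NumPy and scikit-learn; the polynomial order, number of groups and helper names are assumptions, not values fixed by this specification:

import numpy as np
from sklearn.cluster import KMeans

def displacement_direction(traj):
    # traj: sequence of (x, y) points. A quadratic fit per coordinate acts as
    # the temporal filtering / curve-fitting step; the overall displacement of
    # the smoothed path is returned as a unit vector.
    pts = np.asarray(traj, dtype=float)
    t = np.arange(len(pts))
    fx = np.polyfit(t, pts[:, 0], 2)
    fy = np.polyfit(t, pts[:, 1], 2)
    start = np.array([np.polyval(fx, t[0]), np.polyval(fy, t[0])])
    end = np.array([np.polyval(fx, t[-1]), np.polyval(fy, t[-1])])
    d = end - start
    return d / (np.linalg.norm(d) + 1e-9)

def group_by_direction(trajs, n_groups=2):
    # Cluster trajectories by displacement direction with K-means.
    dirs = np.array([displacement_direction(t) for t in trajs])
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(dirs)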
[0035] The next step is to select groups of interest, as illustrated by block 430. This step involves selecting which of the groups mentioned above to keep, according to some predefined parameters. This can be, for example, the average pixel size of objects in the group. This selection can also be made according to any defining factor considered necessary for the particular situation. For example, in the case where large views are required to see license plates for vehicle identification, it makes sense to mount and point the camera in the most appropriate direction for this purpose. Thus, the group selected is the one with the largest average pixel size, for the most advantageous view of the license plates.
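Continuing the sketch, block 430 can then keep the direction group whose members are largest on average in pixel units (all names here are illustrative):

from collections import defaultdict

def select_interest_group(labels, mean_pixel_areas):
    # labels: group id per trajectory (e.g. from group_by_direction above);
    # mean_pixel_areas: average bounding-box area per trajectory, in pixels.
    # Returns the group with the largest average object size, assumed to give
    # the most favorable view of license plates.
    sums, counts = defaultdict(float), defaultdict(int)
    for label, area in zip(labels, mean_pixel_areas):
        sums[label] += area
        counts[label] += 1
    return max(sums, key=lambda label: sums[label] / counts[label])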
[0036] Finally, the parameters can be derived based on the selected analytical components and the algorithms, as indicated by block 440. At this point in the method illustrated in figures 3 and 4, one or more groups of interest have been identified. Other useful parameters can then be derived and updated.
[0037] For example, at this stage, an optimal trigger rule determiner can be defined. Because the optimal trigger time for vehicle identification is when the object (such as the vehicle itself, or even the vehicle's license plate) is largest, this becomes the optimal trigger point. This timing can be determined based on the trace of the size of the object over time and on the shapes, for example, of the license plates of the vehicles previously selected to be kept in the group. The optimal trigger parameter can also correspond to other factors, such as when the object is least distorted.
[0038] Likewise, in step 440, a parameter for determining the detection zone can be defined. As described above, the bounding boxes for the detection zone are created and examined when an object is first detected. To reduce noise, additional points along the trajectory can also be examined, if necessary. The optimal detection zone is then defined as the minimum region that encompasses all of the bounding boxes. Using a minimum allows efficient computation, by reducing the probability of selecting an occluded area, or zones with displacement in the wrong direction. However, it can also result in failed detections if the training video does not include the largest vehicle of interest. This problem can be solved in several ways. The optimal detection zone can be increased manually, according to a pre-specified percentage, or by inspecting the training video to identify the largest vehicle. Alternatively, the bounding box can be adjusted according to data in a public database, using extrapolation to determine the largest vehicle of common interest. Finally, an optimal detection zone can be manipulated by applying a mask or increasing the number of corners in the zone. This operation requires a human to perform it, although minimal training of that person is necessary, as the adjustments are extremely simple.
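A minimal sketch of such a detection zone determiner, taking the boxes observed at first detection and growing the minimal enclosing rectangle by a pre-specified percentage (the 10% margin is a hypothetical choice):

def detection_zone(first_boxes, margin_frac=0.10):
    # first_boxes: (x1, y1, x2, y2) bounding box of each object when it is
    # first detected. The optimal zone is the minimum rectangle enclosing all
    # boxes, optionally inflated to allow for vehicles larger than any seen
    # in the training video.
    x1 = min(b[0] for b in first_boxes)
    y1 = min(b[1] for b in first_boxes)
    x2 = max(b[2] for b in first_boxes)
    y2 = max(b[3] for b in first_boxes)
    mx, my = margin_frac * (x2 - x1), margin_frac * (y2 - y1)
    return (x1 - mx, y1 - my, x2 + mx, y2 + my)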
[0039] A virtual loop determination parameter can be determined and used for speed measurement. An image frame 500, associated with a pair of virtual loop determination parameters, is illustrated in figure 5. Figure 5 shows three objects 540, 541 and 542 being tracked in a detection zone 510. Arrow 560 indicates the direction of traffic on the right side of the roadway 570. First, the detected trajectories can be used to estimate the projective distortion (specified by a projective matrix) of the camera's view. This provides an indication of the direction in the image that is orthogonal to the roadway, along which the virtual loop can be established. Then, the trajectories are converted into undistorted coordinates, in which the virtual loop is simply defined by the horizontal lines 520 and 521, shown in figure 5. Finally, a determination of which horizontal lines are optimal can be made. In general, the two lines 520 and 521 should be placed as far apart as possible, while still covering sections of the trajectories for some percentage of the objects being tracked. This allows interpolation, rather than extrapolation, of the exact time at which an object crosses the virtual loop. In addition, the greater distance provides a speed measure that is less sensitive to errors relative to the distance 530 between the two horizontal lines of the virtual loop.
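One plausible heuristic for this placement step, sketched under the assumption that the trajectories have already been warped into undistorted road-plane coordinates; the coverage fraction and quantile scheme are illustrative choices, not the rule claimed by this patent:

import numpy as np

def place_virtual_loop(rect_trajs, coverage=0.9):
    # rect_trajs: trajectories in undistorted coordinates, as lists of (x, y).
    # Choose two horizontal lines (y-values) as far apart as possible while a
    # `coverage` fraction of the trajectories still spans both of them.
    y_start = np.array([min(p[1] for p in t) for t in rect_trajs])
    y_end = np.array([max(p[1] for p in t) for t in rect_trajs])
    line_a = float(np.quantile(y_start, coverage))      # ~coverage of tracks begin at or before this line
    line_b = float(np.quantile(y_end, 1.0 - coverage))  # ~coverage of tracks end at or after this line
    if line_b <= line_a:
        raise ValueError("trajectories are too short for the requested coverage")
    return line_a, line_b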
[0040] Alternatively, the virtual loop can be defined by detecting lane markings and/or divider markings known in the detected region, such as, for example, lane divider 550. This can be particularly useful when there is existing infrastructure on site. It can be further supported by existing information or infrastructure provided by the Department of Transportation or another such organization. If such infrastructure exists, and the lane markings or boundaries have known distances between them, the projective distortion can be estimated using this information. The virtual loops are then placed as close as possible to the divider, without passing through an occlusion zone, or, when necessary, with minimal overlap of an occlusion zone. This method can be used to fine-tune a virtual loop parameter, and can optionally be used as a suggested optimal virtual loop, which can then be validated or adjusted by a human.
[0041] Other factors also need to be addressed with respect to virtual loops. For example, acceleration estimates may be required. This can be achieved by subdividing the region between two virtual loops evenly in undistorted space, to generate extra virtual loops for finer resolution of speed measurements.
[0042] A lane classification determiner can also be identified. This parameter is used simply to identify the lanes on which vehicles are passing at a certain time of the day, or over a certain stretch of road. This parameter can be useful in identifying aggressive drivers. The present embodiment of the invention provides lane identification training on an initial video sequence, which shows vehicles moving along the lanes of interest. A trajectory grouping method, such as a Gaussian Mixture Model (GMM), which is based on the locations (x, y) and, optionally, the size of a vehicle's detected bounding box for each point along a given trajectory, can be used to group the observed trajectories into a set of lanes in the scene. The average trajectory of each identified lane, in the set of lanes, is then calculated. The location associated with the average trajectory of each lane is stored. When a new trajectory is observed at run time, its distance from each stored average trajectory is calculated. A preset threshold is then used to automatically determine which lane the new trajectory belongs to. A human can periodically review these labels and correct any mislabeled trajectories. These trajectories can then be used to update the location of the stored average trajectory.
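A simplified stand-in for the run-time assignment step (nearest stored average trajectory under a preset distance threshold); the resampling scheme and threshold value are assumptions, and the full GMM fit is omitted:

import numpy as np

def resample(traj, n=20):
    # Resample a trajectory of (x, y) points to n evenly spaced points.
    t = np.asarray(traj, dtype=float)
    idx = np.linspace(0, len(t) - 1, n)
    cols = [np.interp(idx, np.arange(len(t)), t[:, k]) for k in range(2)]
    return np.stack(cols, axis=1)

def lane_of(new_traj, mean_trajs, max_dist_px=40.0):
    # Assign a newly observed trajectory to the stored lane whose average
    # trajectory is closest; return None if no lane is within the threshold,
    # flagging the trajectory for periodic human review.
    q = resample(new_traj)
    dists = [float(np.mean(np.linalg.norm(q - resample(m), axis=1))) for m in mean_trajs]
    best = int(np.argmin(dists))
    return best if dists[best] <= max_dist_px else None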
[0043] Several other parameter determiners can also be used. These extensions include determiners for license plate size, projective distortion matrices for license plates associated with each lane, the extraction of vehicle models, parameters associated with the time, date or weather conditions of a given scene, or other useful parameters. Some of these extensions will require larger sets of training videos, for example, samples based on varying weather conditions. Likewise, specific self-learning algorithms may be needed, which can be developed as necessary.
[0044] At this point, the method shown in figure 3 continues at block 340, in which a validation of the updated set of parameters can be done. This validation can be based on a pre-selected performance measure, or on manual inspection. For example, the training video can be re-analyzed with the updated set of parameters, to determine whether the unsupervised learning was effective according to a pre-selected performance measure. The performance measure may include a consideration of computational cost, where the updated set should use equal or less memory or time; of the detection rate, where no vehicle of interest is missed; and of the convergence of the updated parameter set.
[0045] Block 350 refers to the check for convergence of the updated parameter set. If the parameters are validated, as shown by block 351, the method ends. If the parameters are not validated, as shown by block 352, the training video is processed again with the updated parameter set, and the method returns to block 330, where the steps are repeated. In this way, an iterative loop is created to check for convergence of the set of updated parameters.
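The iterative loop of blocks 320 through 350 can be summarized as the following skeleton; process, derive_updated and validate stand in for the tracking, derivation and performance-measure steps and are hypothetical callables, not APIs defined by this patent:

def optimize_parameters(training_video, params, process, derive_updated, validate, max_iters=10):
    # Iterate the flowchart of figure 3 until the updated parameter set is
    # validated (block 351) or an iteration cap is reached.
    for _ in range(max_iters):
        components = process(training_video, params)  # block 320: run the tracker
        new_params = derive_updated(components)       # block 330: derive an update
        if validate(training_video, new_params):      # blocks 340/350: performance measure
            return new_params                         # block 351: converged
        params = new_params                           # block 352: iterate again
    return params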
[0046] The method described above may require periodic calibration and/or human intervention to obtain a desired level of accuracy. Therefore, it is another aspect of the present invention to provide a computer-aided tool to simplify parameter adjustments that are application-dependent and dependent on the physical environment. This can be achieved by using a graphical user interface (GUI), in conjunction with the computer system and the network shown in figures 1 and 2. An example of such a GUI 130, shown in figure 1, is provided to assist a human in validating or adjusting the updated parameters derived by the unsupervised learning method described above in figures 3 and 4.
[0047] An exemplary GUI for validating or modifying parameter settings is shown in figures 6a and 6b. In general, GUI 130, illustrated in figure 6a, provides several scenes from the detection zone, such as detection zone 600. This can include overlaid images of detected objects, as they are first detected, and the optimal detection zone. The user can then approve or modify the scene via the GUI. The overlaid objects and detection zones provided by the method of figures 3 and 4 help to guide the user. For example, if no large vehicle is found in the superimposed image, the user can increase the detection zone 600. Furthermore, several buttons 605 are provided by the GUI, which allow assisted manipulation of the parameter settings. GUI 130 can also provide instructions to the user on how to operate the GUI, such as instructions on the top bar 610. The GUI can also include standard drawing functions (not shown) on the top bar 610, which allow a user to manipulate the lines and polygons shown in the GUI.
[0048] For example, in figure 6b, the virtual loop button 606 is provided in GUI 130. Activation of this button will select a detected object from the video data and superimpose the two images of that object as it intercepts the optimal loop 620 (that is, at two different times). The user can investigate the data provided by the automated method and approve or modify it using the GUI. For example, the GUI can be used to guide the user in modifying the horizontal lines that define the virtual loop 620, to align them with the edges of the vehicle, or orthogonally to the direction of the road.
[0049] In this way, the GUI, for example, GUI 130, can be used to show some of the video analytical components automatically generated from a set of video frames, to help guide the operator on how best to select or modify the parameters. A person skilled in the art will appreciate that any number of parameters associated with the analytical components of the video can be adjusted or approved in this way. GUI 130 can also include sliders or manual text fields (not shown), which can be used, for example, to modify vehicle size thresholds or other numerical parameters.
[0050] Based on the foregoing, it can be appreciated that several preferred and alternative embodiments are described in this specification. For example, in one embodiment, a method can be implemented for optimizing parameters. This method can include, for example, receiving a series of image frames, and processing the series of image frames using a tracking algorithm with an initialized set of parameters. This method may also include creating a set of updated parameters according to the image frames processed from among the series of image frames, and validating the set of updated parameters using a performance measure, to automatically optimize the set of parameters for the tracking algorithm.
[0051] In another embodiment, the operation or step of receiving the series of image frames may further comprise collecting the series of image frames from at least one video camera with a fixed setting at a fixed location. In other embodiments, the operation or step of processing the series of image frames may further comprise estimating the analytical tracking components for the image frames received from among the series of image frames. In yet another embodiment, the operation or step of creating the updated parameter set, according to the image frames processed from among the series of image frames, can also comprise deriving the updated parameter set using the analytical tracking components, in response to the estimation of the analytical tracking components according to the image frames received from among the series of image frames.
[0052] In still other embodiments, the series of image frames mentioned above can be a traffic training video and/or a training video for tracking a human being. In yet other embodiments, the operation or step of validating the updated parameter set, using the performance measure to automatically optimize the parameter set for the tracking algorithm, can further comprise validating the updated parameter set by manual inspection of the updated set of parameters, using a graphical user interface associated with a computer. In another embodiment, the step of validating the updated parameter set by manual inspection using the graphical user interface associated with the computer may also include displaying a video analysis of the image frames, thereby providing a semi-automatic update of the parameter set.
[0053] In another embodiment, a method for parameter optimization can be implemented. This method may include: receiving a series of image frames collected from a video camera with a fixed setting at a fixed location; processing the series of image frames using a tracking algorithm with an initialized set of parameters; estimating the analytical tracking components for the image frames received from the series of image frames; creating a set of updated parameters using the analytical tracking components, according to the image frames processed from among the series of image frames; and validating the set of updated parameters using a performance measure, to automatically optimize the set of parameters for the tracking algorithm.
[0054] In yet another embodiment, a system for parameter optimization can be provided, which includes, for example, a processor, a data bus coupled to the processor, and a computer-usable medium embodying computer code. The computer-usable medium can be coupled (for example, connected electrically or in electrical communication) to the data bus. The computer program code can comprise instructions executable by the processor and configured to receive a series of image frames, process the series of image frames using a tracking algorithm with an initialized parameter set, create an updated parameter set according to the image frames processed from the series of image frames, and validate the updated parameter set using a performance measure, to automatically optimize the parameter set for the tracking algorithm.
[0055] In other embodiments, these instructions can also be configured to collect the series of image frames from a video camera with a fixed setting at a fixed location. In another embodiment, these instructions can be configured to estimate the analytical tracking components according to the image frames received from among the series of image frames. In yet another embodiment, these instructions can be configured to derive the updated parameter set using the estimated analytical tracking components. In another embodiment, the series of image frames mentioned above may comprise, for example, a traffic training video and/or a training video for tracking a human being.
[0056] In still other embodiments, these instructions can be further configured to validate the updated parameter set by manual inspection of the updated parameter set, using a graphical user interface associated with a computer. In another embodiment, these instructions can be configured to validate the updated parameter set by displaying a video analysis of the series of image frames, and manually inspecting the updated parameter set using the graphical user interface associated with the computer, to provide a semi-automatic update of the parameter set.
[0057] Although the embodiments have been particularly shown and described in this specification, those skilled in the art will understand that various changes in form and details can be made to them without departing from the scope of those embodiments. It will be appreciated that variations of the aspects and functions described above, and others, or alternatives to them, can be desirably combined into many other different systems or applications. Also, various alternatives, modifications, variations or improvements currently unforeseen or unanticipated may subsequently be made, and are also intended to be covered by the following claims.
Claims:
Claims (20)
[0001]
Method for parameter optimization, characterized by the fact that it comprises the steps of: receiving a series of image frames (310); processing the series of image frames (320) using a tracking algorithm with an initialized set of parameters; creating a set of updated parameters according to the image frames processed from among the series of image frames; and validating the set of updated parameters (340) using a performance metric to automatically optimize the set of parameters for the tracking algorithm, through manual inspection of the set of updated parameters using a graphical user interface associated with a computer (110).
[0002]
Method, according to claim 1, characterized by the fact that the step of receiving the series of image frames (310) further comprises: collecting the series of image frames from at least one video camera with a fixed setting at a fixed location; and the step of processing the series of image frames (320) further comprises: estimating the analytical tracking components according to the image frames received from the series of image frames.
[0003]
Method, according to claim 1, characterized by the fact that the step of validating the updated parameter set (340), using the performance metric to automatically optimize the parameter set for the tracking algorithm by manual inspection of the updated parameter set using the graphical user interface associated with the computer (110), further includes: displaying a video analysis of the image frames, thereby providing a semi-automatic update of the set of parameters.
[0004]
Method, according to claim 1, characterized by the fact that the step of receiving a series of image frames (310) further comprises: collecting the series of image frames from at least one video camera with a fixed setting at a fixed location.
[0005]
Method, according to claim 4, characterized by the fact that the step of processing the series of image frames (320) further comprises: estimating the analytical tracking components according to the image frames received from the series of image frames.
[0006]
Method, according to claim 5, characterized by the fact that creating the set of updated parameters according to the image frames processed from among the said series of image frames further comprises: deriving the updated parameter set (330) using the estimated analytical tracking components, in response to estimating the analytical tracking components according to the image frames received from the series of image frames.
[0007]
Method, according to claim 6, characterized by the fact that the series of image frames comprises a traffic training video.
[0008]
Method, according to claim 6, characterized by the fact that the series of image frames comprises a training video for tracking a human being.
[0009]
Method for parameter optimization, characterized by the fact that it comprises the steps of: receiving a series of image frames (310) collected from a video camera with a fixed setting at a fixed location; processing the series of image frames (320) using a tracking algorithm with an initialized set of parameters; estimating the analytical tracking components according to the image frames received from the series of image frames; creating a set of updated parameters using the analytical tracking components, according to the image frames processed from among the series of image frames; and validating the updated parameter set (340) using a performance metric to automatically optimize the parameter set for the tracking algorithm, where validating the updated parameter set (340) further includes validating the updated parameter set (340) by manual inspection of the updated parameter set using a graphical user interface associated with a computer (110).
[0010]
Method, according to claim 9, characterized by the fact that the series of image frames comprises at least one of: a traffic training video and a training video for tracking a human being.
[0011]
Method, according to claim 9, characterized by the fact that the step of receiving a series of image frames (310) further comprises: collecting the series of image frames from at least one video camera with a fixed setting at a fixed location; and the step of processing the series of image frames (320) further comprises: estimating the analytical tracking components according to the image frames received from the series of image frames.
[0012]
Method, according to claim 9, characterized by the fact that validating the updated parameter set (340) by manual inspection of the updated parameter set using the graphical user interface associated with the computer (110) further comprises: displaying a video analysis of the series of image frames, to provide a semi-automatic update of the parameter set.
[0013]
System for parameter optimization, characterized by the fact that it comprises: a processor; a data bus coupled to the processor; and a computer-usable medium (110) embodying computer code, the computer-usable medium being coupled to the data bus and the computer code being executable by the processor and configured to: receive a series of image frames (310); process the series of image frames (320) using a tracking algorithm with an initialized set of parameters; create a set of updated parameters according to the image frames processed from among the series of image frames; and validate the updated parameter set (340) using a performance metric to automatically optimize the parameter set for the tracking algorithm, through manual inspection of the updated parameter set using a graphical user interface associated with a computer (110).
[0014]
System, according to claim 13, characterized by the fact that it is further configured to: collect the series of image frames from a video camera with a fixed setting at a fixed location; and estimate the analytical tracking components according to the image frames received from the series of image frames.
[0015]
System, according to claim 13, characterized by the fact that it is further configured to: validate the set of updated parameters (340) by displaying a video analysis of the series of image frames; and manually inspect the updated parameter set using the graphical user interface associated with the computer (110), to provide a semi-automatic update of the parameter set.
[0016]
System, according to claim 13, characterized by the fact that it is further configured to: collect the series of image frames from a video camera with a fixed setting at a fixed location.
[0017]
System, according to claim 16, characterized by the fact that it is further configured to: estimate the analytical tracking components according to the image frames received from the series of image frames.
[0018]
System, according to claim 17, characterized by the fact that it is further configured to: derive the updated parameter set (330) using the estimated analytical tracking components.
[0019]
System, according to claim 18, characterized by the fact that the series of image frames comprises a traffic training video.
[0020]
System, according to claim 18, characterized by the fact that the series of image frames comprises a training video for tracking a human being.
Similar technologies:
Publication number | Publication date | Patent title
BR102012021598B1|2021-04-06|method and system for parameter optimization
US9870704B2|2018-01-16|Camera calibration application
Ismail et al.2013|A methodology for precise camera calibration for data collection applications in urban traffic scenes
US9754178B2|2017-09-05|Long-term static object detection
JP2021502638A|2021-01-28|Motion identification method and device based on 3D convolutional neural network
CN105809658B|2020-01-31|Method and device for setting region of interest
US10360247B2|2019-07-23|System and method for telecom inventory management
US20150023560A1|2015-01-22|Multi-cue object association
EP2858008A2|2015-04-08|Target detecting method and system
US9251416B2|2016-02-02|Time scale adaptive motion detection
KR20200040665A|2020-04-20|Systems and methods for detecting a point of interest change using a convolutional neural network
CN109154938B|2021-11-09|Classifying entities in a digital graph using discrete non-trace location data
CN108875480A|2018-11-23|A kind of method for tracing of face characteristic information, apparatus and system
CN112447060A|2021-03-05|Method and device for recognizing lane and computing equipment
US20160180201A1|2016-06-23|Image processing
KR20210058408A|2021-05-24|Acquisition method of pedestrian path data using data network and the system thereof
CN111105695A|2020-05-05|Map making method and device, electronic equipment and computer readable storage medium
KR20170090081A|2017-08-07|Method and System for Traffic Measurement using Computer Vision
JP2019154027A|2019-09-12|Method and device for setting parameter for video monitoring system, and video monitoring system
KR20210125899A|2021-10-19|Method and apparatus for generating position information, device, media and program
WO2021036243A1|2021-03-04|Method and apparatus for recognizing lane, and computing device
KR20190070235A|2019-06-20|Method for Estimating 6-DOF Relative Displacement Using Vision-based Localization and Apparatus Therefor
US20210374985A1|2021-12-02|Systems and Methods for Processing Captured Images
JP2022034034A|2022-03-02|Obstacle detection methods, electronic devices, roadside devices, and cloud control platforms
JP7029902B2|2022-03-04|Video call quality measurement method and system
Patent family:
Publication number | Publication date
MX2012009946A|2013-03-15|
US20130058523A1|2013-03-07|
BR102012021598A2|2014-12-02|
US8582811B2|2013-11-12|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title

WO2000031707A1|1998-11-23|2000-06-02|Nestor, Inc.|Non-violation event filtering for a traffic light violation detection system|
US6754663B1|1998-11-23|2004-06-22|Nestor, Inc.|Video-file based citation generation system for traffic light violations|
CN100533482C|1999-11-03|2009-08-26|特许科技有限公司|Image processing techniques for a video based traffic monitoring system and methods therefor|
US7821422B2|2003-08-18|2010-10-26|Light Vision Systems, Inc.|Traffic light signal system using radar-based target detection and tracking|
US7688222B2|2003-09-18|2010-03-30|Spot Devices, Inc.|Methods, systems and devices related to road mounted indicators for providing visual indications to approaching traffic|
US7403664B2|2004-02-26|2008-07-22|Mitsubishi Electric Research Laboratories, Inc.|Traffic event detection in compressed videos|
US20060170769A1|2005-01-31|2006-08-03|Jianpeng Zhou|Human and object recognition in digital video|
US8098889B2|2007-01-18|2012-01-17|Siemens Corporation|System and method for vehicle detection and tracking|
US7953544B2|2007-01-24|2011-05-31|International Business Machines Corporation|Method and structure for vehicular traffic prediction with link interactions|US8917910B2|2012-01-16|2014-12-23|Xerox Corporation|Image segmentation based on approximation of segmentation similarity|
US8971573B2|2012-09-12|2015-03-03|Xerox Corporation|Video-tracking for video-based speed enforcement|
US9390329B2|2014-04-25|2016-07-12|Xerox Corporation|Method and system for automatically locating static occlusions|
US9390328B2|2014-04-25|2016-07-12|Xerox Corporation|Static occlusion handling using directional pixel replication in regularized motion environments|
US9875664B2|2014-06-02|2018-01-23|Xerox Corporation|Virtual trainer optimizer method and system|
US10515285B2|2014-06-27|2019-12-24|Blinker, Inc.|Method and apparatus for blocking information from an image|
US9558419B1|2014-06-27|2017-01-31|Blinker, Inc.|Method and apparatus for receiving a location of a vehicle service center from an image|
US9589201B1|2014-06-27|2017-03-07|Blinker, Inc.|Method and apparatus for recovering a vehicle value from an image|
US9779318B1|2014-06-27|2017-10-03|Blinker, Inc.|Method and apparatus for verifying vehicle ownership from an image|
US9754171B1|2014-06-27|2017-09-05|Blinker, Inc.|Method and apparatus for receiving vehicle information from an image and posting the vehicle information to a website|
US10540564B2|2014-06-27|2020-01-21|Blinker, Inc.|Method and apparatus for identifying vehicle information from an image|
US9563814B1|2014-06-27|2017-02-07|Blinker, Inc.|Method and apparatus for recovering a vehicle identification number from an image|
US9760776B1|2014-06-27|2017-09-12|Blinker, Inc.|Method and apparatus for obtaining a vehicle history report from an image|
US10572758B1|2014-06-27|2020-02-25|Blinker, Inc.|Method and apparatus for receiving a financing offer from an image|
US9892337B1|2014-06-27|2018-02-13|Blinker, Inc.|Method and apparatus for receiving a refinancing offer from an image|
US10579892B1|2014-06-27|2020-03-03|Blinker, Inc.|Method and apparatus for recovering license plate information from an image|
US9773184B1|2014-06-27|2017-09-26|Blinker, Inc.|Method and apparatus for receiving a broadcast radio service offer from an image|
US10867327B1|2014-06-27|2020-12-15|Blinker, Inc.|System and method for electronic processing of vehicle transactions based on image detection of vehicle license plate|
US9594971B1|2014-06-27|2017-03-14|Blinker, Inc.|Method and apparatus for receiving listings of similar vehicles from an image|
US9607236B1|2014-06-27|2017-03-28|Blinker, Inc.|Method and apparatus for providing loan verification from an image|
US9600733B1|2014-06-27|2017-03-21|Blinker, Inc.|Method and apparatus for receiving car parts data from an image|
US10733471B1|2014-06-27|2020-08-04|Blinker, Inc.|Method and apparatus for receiving recall information from an image|
US9589202B1|2014-06-27|2017-03-07|Blinker, Inc.|Method and apparatus for receiving an insurance quote from an image|
US9818154B1|2014-06-27|2017-11-14|Blinker, Inc.|System and method for electronic processing of vehicle transactions based on image detection of vehicle license plate|
KR20160071242A|2014-12-11|2016-06-21|삼성전자주식회사|Apparatus and method for computer aided diagnosis based on eye movement|
CN107977596A|2016-10-25|2018-05-01|杭州海康威视数字技术股份有限公司|A kind of car plate state identification method and device|
US10309787B2|2016-11-10|2019-06-04|Sap Se|Automatic movement and activity tracking|
US10796185B2|2017-11-03|2020-10-06|Facebook, Inc.|Dynamic graceful degradation of augmented-reality effects|
CN108596955B|2018-04-25|2020-08-28|Oppo广东移动通信有限公司|Image detection method, image detection device and mobile terminal|
CN111563913B|2020-04-15|2021-12-10|上海摩象网络科技有限公司|Searching method and device based on tracking target and handheld camera thereof|
Legal status:
2013-08-27| B15U| Others concerning applications: numbering cancelled|Free format text: NUMBERING CANCELLED DUE TO NON-COMPLIANCE WITH THE REQUIREMENT PUBLISHED IN RPI 2210, OF 14/05/2013 |
2013-12-10| B150| Others concerning applications: publication cancelled [chapter 15.30 patent gazette]|Free format text: PUBLICATION CANCELLED FOR HAVING BEEN UNDUE. REFERRING TO RPI 2225, OF 27/08/2013, DISPATCH CODE 15.21. |
2014-12-02| B03A| Publication of a patent application or of a certificate of addition of invention [chapter 3.1 patent gazette]|
2018-12-11| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2019-11-19| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-05-05| B06A| Patent application procedure suspended [chapter 6.1 patent gazette]|
2020-09-15| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2020-10-20| B08F| Application dismissed because of non-payment of annual fees [chapter 8.6 patent gazette]|Free format text: REFERRING TO THE 8TH ANNUITY. |
2021-03-02| B08G| Application fees: restoration [chapter 8.7 patent gazette]|
2021-04-06| B16A| Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 28/08/2012, SUBJECT TO THE LEGAL CONDITIONS. |
Priority:
Application number | Filing date | Patent title
US13/223,420|2011-09-01|
US13/223,420|US8582811B2|2011-09-01|2011-09-01|Unsupervised parameter settings for object tracking algorithms|