METHOD AND SYSTEM FOR THE REAL-TIME DETECTION OF AN OBJECT
Patent abstract:
A method and system for real-time detection of a moving object on a route path in a geographic area is provided. The system and method can also be used to track such a moving object in real time. The system includes an image capture device for capturing successive images of a geographic area, a geographic reference map comprising contextual information of the geographic area, and a processor configured to calculate the differences between successive images to detect, in real time, a moving object on the route path. The method includes capturing successive images of the geographic area using the image capture device, geo-registering at least some of the successive images against the geographic reference map, and calculating the differences between the successive images to detect, in real time, an object.
Publication number: BR102015003962B1
Application number: R102015003962-0
Application date: 2015-02-24
Publication date: 2021-06-01
Inventors: Arturo Flores; Kyungnam Kim; Yuri Owechko
Applicant: The Boeing Company
IPC main class:
Patent description:
FIELD OF THE INVENTION [001] The present invention concerns the real-time detection of a moving object of interest on a route path, with reduction of false alarms and missed detections. The present invention can also be used to track such a moving object in real time. BACKGROUND [002] Prior art systems are known that detect and track an object of interest traveling along a path, such as a route path, employing video tracking. Many such systems, however, are limited in their accuracy when detecting an object in real time, due to environmental artifacts and to image-quality degradation resulting from lighting effects on the object of interest. As a result, current systems cannot adequately track an object of interest in certain environments. [003] Jiangjian Xiao et al.: "Vehicle Detection and Tracking in Wide Field-of-View Aerial Video", 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 13-18, 2010, San Francisco, CA, USA, IEEE, Piscataway, NJ, USA, is an example of low-frame-rate aerial video vehicle tracking that uses a three-frame subtraction approach based on a behavior model and a graph-matching approach. [004] Shih-Ming Huang et al.: "Image registration between UAV image sequence and Google satellite image under quality mismatch", ITS Telecommunications (ITST), 2012 12th International IEEE Conference, November 5, 2012, describes a normalized variant of mutual information for the quality-mismatch problem, used in registration between UAV images and Google satellite imagery. SUMMARY [005] A system and method for real-time detection of a moving object on a route path in a geographic area is provided. The system and method can also be used to track such a moving object in real time.
The system includes an image capture device for capturing successive images of a geographic area, a geographic reference map comprising contextual information of the geographic area, and a processor configured to calculate differences between successive images to detect, in real time, a moving object on the route path. The method includes capturing successive images of the geographic area using the image capture device, geo-registering at least some of the successive images against the geographic reference map, and calculating differences between the successive images to detect, in real time, an object. [006] Furthermore, the description comprises embodiments according to the following clauses: Clause 1. A method for real-time detection of an object, comprising: capturing successive images of a geographic area containing at least one route path using an image capture device; geo-registering at least some of the successive images in relation to a geographic reference map, the geographic reference map comprising contextual information of the geographic area; and calculating differences between the successive images to detect, in real time, an object. [007] Clause 2. The method of clause 1, in which a first image of the successive images is manually geo-registered. [008] Clause 3. The method of clause 1, further comprising obtaining images of the geographic area including a route path. [009] Clause 4. The method of clause 1, in which the contextual information is route path metadata. [0010] Clause 5. The method of clause 1, wherein the route path is a road. [0011] Clause 6. The method of clause 1, wherein the image capture device is positioned on an air vehicle. [0012] Clause 7. The method of clause 1, wherein the successive images are generated by capturing at least one video sequence of the geographic area, and further comprising separating the video sequence into the successive images. [0013] Clause 8.
The method of clause 1, in which planar homography is used to geo-register the successive images, in relation to one another, to the reference map. [0014] Clause 9. The method of clause 1, further comprising using the contextual information to suppress false detections of moving objects off a route path. [0015] Clause 10. The method of clause 1, further comprising using the contextual information to detect a moving object on a route path. [0016] Clause 11. The method of clause 1, further comprising reducing errors in the geo-registration by adding an additional geo-registration of one of said successive images in relation to the reference map. [0017] Clause 12. The method of clause 1, further comprising tracking the position of the moving object. [0018] Clause 13. A system for real-time detection of a moving object on a route path in a geographic area, comprising: an image capture device for capturing successive images of a geographic area; a geographic reference map comprising contextual information of the geographic area; and a processor configured to calculate differences between successive images to detect, in real time, a moving object on the route path. [0019] Clause 14. The system of clause 13, wherein said image capture device is a video camera. [0020] Clause 15. The system of clause 14, wherein said geographic reference map is a map that incorporates metadata. [0021] Clause 16. The system of clause 14, wherein said moving object is a vehicle and said route path is a road. [0022] Clause 17. The system of clause 13, wherein the image capture device is mounted on an aerial platform on an airborne vehicle. [0023] Clause 18. The system of clause 17, in which the airborne vehicle is unmanned. [0024] Clause 19. The system of clause 13, wherein the geographic reference map is hosted on a server remote from the processor. [0025] Clause 20. The system of clause 13, wherein the processor is additionally configured to track the position of the moving object.
[0026] The scope of the present invention is defined only by the appended claims and is not affected by the statements within this summary. BRIEF DESCRIPTION OF THE DRAWINGS [0027] The invention can be better understood with reference to the following drawings and description. The components in the figures are not necessarily to scale, emphasis instead being placed on illustrating the principles of the invention. FIG. 1 is a block diagram of a system incorporating features of the present invention; FIG. 2 is a block diagram of an image capture device used in the system of the invention; FIG. 3 is a block diagram of a processor used in the system of the invention; FIGS. 4A-4D are linear drawings showing a reference map and images captured by the image capture device in the system of the invention; FIG. 5 shows linear drawings illustrating how images are geo-registered to the reference map using planar homography in the system of the invention; FIGS. 6A and 6B are linear drawings with associated flowcharts showing the steps used in the present method; FIGS. 7A-7C show an example of moving object detection according to the present invention; FIG. 8A shows moving object detection without using route path metadata; and FIG. 8B shows moving object detection using route path metadata in accordance with the present invention. DETAILED DESCRIPTION [0028] Although the invention may be susceptible to embodiment in various forms, there is shown in the drawings, and will herein be described in detail, a specific embodiment for detecting a moving object of interest on a route path using an image capture device mounted on an aerial structure, such as an unmanned aerial vehicle (UAV), an aerial platform or a piloted vehicle, with the understanding that the present description is to be considered an illustration of the principles of the invention and is not intended to limit the invention to that illustrated and described herein.
Therefore, unless otherwise indicated, the features described herein may be combined together to form additional combinations that, for the sake of brevity, have not otherwise been shown. [0029] A system and method are provided for geo-registration and in-context moving object detection along a route path, using images captured from an aerial location, to improve detection of a moving object on a route path and to reduce false alarms and missed detections. The system and method can also be used to track such a moving object. The method can be performed in real time. By combining geo-registration information with route path metadata provided by a global mapping system, the accuracy of the system can be increased over the prior art. [0030] As illustrated by the block diagram in FIG. 1, the system comprises an image capture device 20, such as a camera, for generating digital images of a geographic area that may contain at least one route path; an aerial structure 22, such as an unmanned aerial vehicle (UAV) platform or a piloted vehicle, such as a helicopter, on which the image capture device 20 is mounted; a processor 24, having software thereon, in communication with the image capture device 20 and capable of processing images from the image capture device 20; a user interface 26 in communication with the processor 24; and a server 28 that hosts the route path metadata of the global mapping system and that is in communication with the processor 24. [0031] The image capture device 20 is configured to capture digital images, such as photographs or a video, of objects such as buildings, vegetation and vehicles, etc., arranged within the field of view of the image capture device 20. The image capture device 20 is communicatively connected to the processor 24. The image capture device 20 includes an optical system 30, an image capture unit 32, and a communication interface 34.
The optical system 30 comprises lenses and other optical components, and is communicatively coupled with the image capture unit 32. The image capture unit 32 transfers images to the communication interface 34, which then transfers the images to the processor 24. [0032] The processor 24 and the server 28 are coupled together to transmit information between them. Information is sent to and received from the server 28, e.g., over a communication network, such as a local area network, a wide area network, a wired network and/or a wireless network, etc. The server 28 can be on-board relative to the airframe 22, or it can be ground-based. [0033] The processor 24 may be on-board relative to the airframe 22, or may be ground-based. If the processor 24 is ground-based, the image capture device 20 can send images to the processor 24 via wireless signals. The processor 24 processes image information from the image capture device 20 and from the server 28, and includes a central processing unit (CPU) or digital signal processor (DSP) 36 and memory 38. The CPU/DSP 36 is coupled to the memory 38, which includes random access memory (RAM) and read-only memory (ROM). The memory 38 is non-transitory and stores machine-executable, machine-readable software code 40 containing instructions that are configured to, when executed, cause the CPU/DSP 36 to perform the various functions described herein. The processor 24 analyzes information from the image capture unit 32 to produce images that can be displayed on the display of the user interface 26, printed using the printer of the user interface 26, and analyzed by the user. [0034] The user interface 26 includes an input device, such as a keyboard, as well as a display and speakers to provide video and audio information to a user and to allow the user to input parameters to the processor 24. The user interface 26 may also include a printer for printing images captured by the system.
[0035] Global mapping systems, for example Google Maps, Bing Maps, or any other suitable global mapping system that utilizes geopositioning information, provide extensive route path metadata for use by third parties. Route path metadata provides, among other items, contextual information regarding the location of route paths, such as roads, waterways and/or walking trails, and of structures, such as buildings, vegetation and other potentially interfering objects, near the route paths. This route path metadata may be provided as reference maps to third parties. A linear drawing example of such a reference map 42 is shown in FIG. 4A, showing route paths, e.g., roads on this particular reference map, in a geographic area. Route paths are shown in shaded lines in FIG. 4B. Mapping information can be obtained online, e.g., via an Internet connection, and/or offline, e.g., via storage media, such as an electronic, magnetic or optical disk. This route path metadata and its associated reference maps may be hosted on the server 28. Alternatively, this route path metadata and its associated reference maps may be hosted on the processor 24. [0036] A plurality of digital images, such as photographs or a video sequence, can be taken by the image capture device 20. The images can then be transmitted to the processor 24 for processing. In an embodiment where photographs are taken, the processor 24 can be configured to separate the photographs into individual images 44a, 44b, 44c, etc. In an embodiment where video is taken, the processor 24 can be configured to separate the video sequence into individual images 44a, 44b, 44c, etc. [0037] The processor 24 can be configured to access the server 28 to obtain an appropriate reference map 42, see FIG. 4A, containing geo-registration information. The processor 24 can be configured to compare the images from the image capture device 20 to the reference map 42. The processor 24 can then geo-register the images to the reference map using planar homography and image feature matching.
[0038] During the geo-registration process, the first image 44a can be manually registered to the reference map 42 by the user inputting the registration information to the processor 24 via the user interface 26. Alternatively, the processor 24 can be configured to register the first image 44a to the reference map 42 in an automatic manner, using inputs such as GPS and/or inertial sensors incorporated in the image capture device 20. In the present system and method, it is assumed that a dominant plane exists in the images 44a, 44b, 44c, etc. and in the reference map 42; hence, the processor 24 can be configured to use planar homography to align the images 44a, 44b, 44c, etc. to the reference map 42. [0039] In the following, H0,M denotes the homography that aligns the first image 44a to the reference map 42. Given the registration of the first image 44a, subsequent images 44b, 44c, . . . are geo-registered as follows. Let It be the image captured by the image capture device at time t, and let It+1 be the subsequent image. As shown in FIG. 6A, image-to-image registration is performed in a known manner by extracting Scale Invariant Feature Transform (SIFT) descriptors, or other known descriptors, from both images (the current image being analyzed and the previous, already analyzed image). An initial set of correspondences is obtained by matching the SIFT descriptors of the two images through their nearest neighbor in Euclidean space. These putative matches contain many mismatches, which are suppressed by known methods such as random sample consensus (RANSAC, described by Fischler and Bolles, Comm. of the ACM, Vol. 24, pp. 381-395, 1981), or another method for estimating the parameters of a mathematical model from an observed data set that contains outliers; this also provides Ht+1,t, the homography that aligns It+1 with It. As a result, the subsequent image It+1 is geo-registered via the product Ht+1,t Ht,t-1 Ht-1,t-2 . . . H1,0 H0,M, written more simply as Ht+1,M.
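The homography chaining described above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the function names are hypothetical, 3x3 homographies are composed under a column-vector convention (p_map = H @ p_image, so the newest frame-to-frame estimate is multiplied on the right), and in practice each Ht+1,t would come from SIFT matching plus RANSAC (e.g., OpenCV's `cv2.findHomography`) rather than being a known translation.

```python
import numpy as np

def compose_to_map(H_prev_to_map, H_curr_to_prev):
    # H_{t+1,M} = H_{t,M} @ H_{t+1,t}: chain the new frame-to-frame
    # homography onto the running frame-to-map homography.
    return H_prev_to_map @ H_curr_to_prev

def warp_point(H, x, y):
    # Apply a 3x3 homography to a 2D point via homogeneous coordinates.
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

def translation(tx, ty):
    # Hypothetical stand-in for an estimated homography: pure translation.
    H = np.eye(3)
    H[0, 2], H[1, 2] = tx, ty
    return H
```

With H0,M registering only the first frame to the map, each new frame needs just one frame-to-frame estimate; the running product accumulates the full alignment, which is also why the patent adds a periodic image-to-map refinement to stop the drift this chaining causes.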
The processor 24 can be configured to geo-register each successive image, or it can be configured to geo-register only predetermined ones of the images (e.g., some successive images are skipped). [0040] The above method can introduce small errors in each homography computation, and these errors accumulate over time and can result in misaligned images after a while. These errors are mitigated by refining the image-to-image registration with an additional image-to-reference-map registration. At time t, it is assumed that the image to be geo-registered is within a small error bound. The geo-registration is refined by matching points of interest in the image and in the reference map 42 via the mutual information of image patches. Mutual information is a measure of the information overlap between two signals, or of how much knowledge of one signal provides knowledge of the second signal. Mutual information is robust and useful here because it is sensitive only to whether one signal changes when the other does, not to their relative values. Since the reference map 42 and the image to be analyzed were taken at different times, there may be complicating factors such as different times of day, different weather conditions, etc., along with the fact that the image to be analyzed may have been taken at a different angle from the image on the reference map 42. Mutual information helps to mitigate these complicating factors. As a result, error accumulation and geo-registration "drift" are undone. [0041] An example of a geo-registered video sequence can be seen in FIGS. 4A-4D. FIG. 4A shows the reference map 42. FIG. 4B shows the first image 44a registered to the reference map 42. FIGS. 4C and 4D show subsequent images 44b, 44c registered to the reference map 42. The route paths are shown in shaded lines using route path metadata obtained from the reference map 42. [0042] Since the images 44a, 44b, 44c, etc.
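The patch-based mutual information used in the refinement step can be estimated from a joint intensity histogram. A minimal sketch under illustrative assumptions (the bin count and function name are not from the patent; real systems often use a normalized variant, as the Huang et al. reference discussed above does):

```python
import numpy as np

def mutual_information(patch_a, patch_b, bins=16):
    # Joint histogram of co-located pixel intensities in the two patches.
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of patch_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of patch_b
    nz = pxy > 0                          # avoid log(0)
    # MI = sum over bins of p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))
```

A patch compared with itself scores high (its full entropy), while a patch compared with unrelated content scores near zero; because only the co-occurrence of intensity changes matters, not the absolute values, MI tolerates the illumination and season differences between the live image and the reference map.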
are geo-registered, the processor 24 can be configured to use the route path metadata from the reference map 42 as additional context for detecting a moving object 46 of interest, e.g., a vehicle, in the images 44a, 44b, 44c, etc. Moving object detection is performed by the processor 24 by calculating differences between consecutive images, e.g., 44a and 44b, after they have been geo-registered, using frame-to-reference subtraction. Let It be defined as the reference image; then the images It-k, It-(k-1), . . ., It-1, It+1, . . ., It+k are registered using It as the reference coordinate system. The squared pixel difference is then accumulated between the reference image and all the other images; in other words, Diff = Σi≠t (Ii − It)², where i ranges from t−k to t+k, typically with k = 2. The assumption is that, since all the images are registered to the reference image, stationary objects and the background will cancel out in the squared-difference operation, while a moving object 46 will stand out. A sensitivity threshold T is applied to the accumulated difference image to produce a binary image B, where B = 1 if Diff > T and B = 0 otherwise. [0043] As shown in FIG. 6B, once the difference image is calculated, the processor 24 is configured to threshold the accumulated difference image based on the labeled regions of the map, which results in segmented image regions representing objects moving faster than a certain speed relative to the reference. By varying the threshold T, the motion sensitivity can be made dependent on the labeled map region that contains the segmented image regions. For example, a lower sensitivity threshold is used for a candidate moving object on the route path than for a candidate moving object off the route path. The processor 24 can then be configured to detect the segmented image regions as objects, with a decision threshold that varies based on the labeled map regions.
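Once the frames are registered, the accumulated squared difference and thresholding above reduce to a few array operations. A sketch under illustrative assumptions (grayscale frames already warped into the reference frame's coordinate system; function names are not from the patent):

```python
import numpy as np

def accumulated_difference(frames, t, k=2):
    # Diff = sum over i in [t-k, t+k], i != t, of (I_i - I_t)^2 per pixel.
    ref = frames[t].astype(float)
    diff = np.zeros_like(ref)
    for i in range(max(0, t - k), min(len(frames), t + k + 1)):
        if i != t:
            diff += (frames[i].astype(float) - ref) ** 2
    return diff

def binarize(diff, T):
    # B = 1 where Diff exceeds the sensitivity threshold T, else 0.
    return (diff > T).astype(np.uint8)
```

Registered stationary background subtracts to (near) zero in every term of the sum, so only pixels covered by the moving object in the reference frame accumulate a large value across all 2k comparisons.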
Thereafter, the processor 24 can be configured to form tracks, for example using a Kalman filter in a known manner, or another suitable known tracker. As a final step, the processor 24 can be configured to output the tracks and object detections to the user. [0044] An example of the moving object detection and tracking process is provided in FIGS. 7A-7C. FIG. 7A shows the accumulated difference image. FIG. 7B shows the segmented regions after processing the accumulated difference image of FIG. 7A, for example with morphological operations and thresholding. Such morphological operations are well known in the art. FIG. 7C shows object tracking results. Each tracked object of interest 46 can be represented by a fitted ellipse of a specific color. A history of past locations of the tracked object of interest 46 can be shown as trailing dots of the same color. [0045] Furthermore, since the images 44a, 44b, 44c, etc. are geo-registered, the processor 24 can be configured to use the route path metadata to suppress false alarms for detections that are not on a route path. Once a moving object of interest 46 is detected, the method performs the steps of FIG. 6B, which use route path metadata to filter out false alarms and reduce missed detections. The false alarm rate is reduced by using the route path metadata alone (by reducing the search area), because the number of false alarms per image is approximately proportional to the search area. If the search area is reduced using route path metadata, then the false alarm rate will decrease while the detection rate for a moving object remains unchanged. If the sensitivity threshold is lowered, the detection rate for a moving object will increase, and the false alarm rate will also increase from the value to which it was lowered using route path metadata. For example, the sensitivity threshold can be lowered so that the false alarm rate is unchanged from the rate without metadata.
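The track-formation step mentioned above can be sketched with a textbook constant-velocity Kalman filter. This is a generic filter, not the patent's tracker; the state is [x, y, vx, vy], detections are (x, y) centroids from the segmented regions, and the noise magnitudes are arbitrary illustrative values:

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 2D constant-velocity Kalman filter for track formation."""

    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])      # [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                  # state covariance
        self.F = np.array([[1, 0, dt, 0],          # constant-velocity model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],           # we observe position only
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q                     # process noise
        self.R = np.eye(2) * r                     # measurement noise

    def step(self, z):
        # Predict with the motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with a measured detection z = (x, y).
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                          # filtered position
```

Feeding each frame's detection into `step` yields the smoothed track positions whose history the system draws as trailing dots behind the fitted ellipse.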
The detection rate will then be higher due to the lower sensitivity threshold value. The false alarm rate can thus be kept constant by lowering the sensitivity threshold while reducing the search area. In this method, the accumulated difference image is thresholded and converted to a binary image; if the search area is reduced, the false alarm rate is kept constant by lowering the sensitivity threshold. [0046] The user sets the sensitivity threshold T in the processor 24 for detecting an object of interest 46. The sensitivity threshold T can be set, for example, by the user defining a predetermined level of contrast in the images between the moving object and the structure the moving object is on, or by the user defining a predetermined pixel count for the moving object. The sensitivity threshold T can be set lower in regions where a route path is located, which regions are known from the reference map 42, and higher in any region other than the route path. For example, the processor 24 can be configured to recognize that a moving object is darker than the route path or the buildings in the images, and/or the processor 24 can be configured to recognize that a moving object is lighter than the route path or the buildings in the images. Or, for example, the processor 24 can be configured to recognize that a moving object has a pixel count greater than that of the route path. If, however, the user sets the sensitivity threshold T too low (requiring only a small amount of contrast, or a small difference in pixel count, between the moving object and the structure the moving object is on), this can result in an unacceptable number of false alarms, because many objects will satisfy this setting.
If, however, the user sets the sensitivity threshold T too high (requiring a large amount of contrast, or a large difference in pixel count, between the moving object and the structure the moving object is on), this can result in missed detections, because objects in the shadows of a building will not show a high level of contrast against the route path, nor a large difference in pixel count. In the present invention, since the route path metadata is used by the processor 24 as context, false alarms are suppressed in areas off the route path. At the same time, missed detections on the route path are reduced, because the lower sensitivity threshold is used in the process of detecting a moving object on the route path. A lower sensitivity threshold effectively increases the sensitivity of the moving object detection process. In the examples shown in FIGS. 7A to 8B, the assumption is that the moving object of interest 46 to be detected is a vehicle, which is typically on the route path and not typically off the route path. [0047] FIGS. 8A and 8B illustrate some of the difficulties that may be encountered when processing images 44a, 44b, 44c taken in an urban environment. Tall buildings and vegetation are not on the ground plane and therefore cannot be aligned using planar homography. As a result, the squared-difference operation can introduce false alarms and missed detections, as seen in FIG. 8A. A false alarm can result from the apparent motion, between images, of structures that are not on the route path. A missed detection can arise when a moving object of interest is in the shadow of a tall building and is not easily detected by the image capture device 20 and processor 24. When route path metadata is used as additional context, such false alarms and missed detections are mitigated by using the method of FIG. 6B.
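The context-dependent thresholding and off-road suppression discussed above can be sketched as mask operations. This sketch assumes a boolean road mask rasterized from the route path metadata; the function and parameter names are illustrative, not the patent's:

```python
import numpy as np

def detect_in_context(diff, road_mask, T_road, T_offroad, suppress_offroad=True):
    # Lower threshold on the route path (higher sensitivity, so low-contrast
    # vehicles, e.g. in building shadows, are still caught); higher threshold
    # elsewhere. Optionally drop off-road detections entirely to suppress
    # parallax-induced false alarms from tall buildings and vegetation.
    T = np.where(road_mask, T_road, T_offroad)
    detections = diff > T
    if suppress_offroad:
        detections &= road_mask
    return detections
```

The same accumulated-difference response is then kept when it lies on the road and rejected when it lies off it, which is exactly the trade the text describes: a reduced search area buys a lower threshold at a constant false alarm rate.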
[0048] As described above, the route path metadata can also be used to increase the sensitivity of the moving object detection process in areas within the route path. As seen in FIG. 8A, a moving vehicle is not detected in the shadow of the tall building. This is due to the low contrast of the image in that area. By lowering the sensitivity threshold of the detection process in regions where a route path is located, this moving vehicle is detected. [0049] Once a moving object of interest 46 is detected, the processor 24 can be configured to track the location of the moving object of interest 46 over time. Alternatively, once a moving object of interest 46 is detected, the user can input instructions to the processor 24 via the user interface 26 to track that specific moving object of interest 46. The processor 24 can be configured to detect a moving object of interest 46 in successive images 44a, 44b, 44c by appearance, location, speed, etc. [0050] Since the images 44a, 44b, 44c are geo-registered, the detected moving object of interest 46 has precise geo-coordinates. This allows a user to easily detect and track the moving object of interest 46. [0051] The present invention can be used to detect and track various moving objects of interest in the images 44a, 44b, 44c. These various moving objects of interest can be on the same route path in the images 44a, 44b, 44c. Alternatively, one or more of the various moving objects of interest may be on one route path in the images 44a, 44b, 44c, and another one or more of the various moving objects of interest may be on a different route path in the images 44a, 44b, 44c, etc.
[0052] Although particular aspects of the present invention described herein have been shown and described, it will be apparent to those skilled in the art that, based on the teachings contained herein, changes and modifications may be made without departing from the subject matter described herein and its broader aspects; therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. Furthermore, it is to be understood that the invention is defined by the appended claims. Accordingly, the invention is not to be limited except in light of the appended claims and their equivalents.
Claims:
Claims (14) [0001] 1. Method for real-time detection of an object (46), comprising: capturing successive images (44a, 44b, 44c) of a given geographic area containing at least one route path using an image capture device (20); geo-registering at least some of those successive images (44a, 44b, 44c) in relation to a geographic reference map (42), such geographic reference map (42) comprising contextual information of the geographic area provided by route path metadata; and characterized in that said method further comprises: calculating the accumulated squared pixel difference (Diff) between a reference image (It, 44a) and all other images (It-1, It-2, It+1, It+2; 44a, 44b, 44c); producing a binary image (B) by applying to the accumulated difference image a sensitivity threshold (T) that varies based on regions labeled in the reference map (42), resulting in segmented image regions representing objects moving faster than a predetermined speed relative to the reference; and using the route path metadata from the reference map (42) as additional context to suppress false detections of objects (46) moving off a route path. [0002] 2. Method according to claim 1, characterized in that a first image (44a) of the successive images (44a, 44b, 44c) is manually geo-registered. [0003] 3. Method according to claim 1 or 2, characterized in that the route path is a road. [0004] 4. Method according to any one of claims 1 to 3, characterized in that the image capture device (20) is positioned on an air vehicle. [0005] 5. Method according to any one of claims 1 to 4, characterized in that the successive images (44a, 44b, 44c) are generated by capturing at least one video sequence of the geographic area, and further comprising separating the video sequence into the successive images (44a, 44b, 44c). [0006] 6.
Method according to any one of claims 1 to 5, characterized in that planar homography is used to geo-register the successive images (44a, 44b, 44c), one in relation to the other, to the reference map (42). [0007] 7. Method according to any one of claims 1 to 6, characterized in that it further comprises using the contextual information to detect an object (46) moving on a route path. [0008] 8. Method according to any one of claims 1 to 7, characterized in that it further comprises reducing errors in the geo-registration by adding an additional geo-registration of one of said successive images (44a, 44b, 44c) in relation to the reference map (42). [0009] 9. System for real-time detection of an object moving on a route path in a geographic area, comprising: an image capture device (20) for capturing successive images (44a, 44b, 44c) of a geographic area; a geographic reference map (42) comprising contextual information of the geographic area provided by route path metadata; and a processor (24), characterized in that said processor (24) is configured to: calculate the accumulated squared pixel difference (Diff) between a reference image (It, 44a) and all other images (It-1, It-2, It+1, It+2; 44a, 44b, 44c); produce a binary image (B) by applying to the accumulated difference image a sensitivity threshold (T) that varies based on regions labeled in the reference map (42), resulting in segmented image regions representing objects moving faster than a predetermined speed relative to the reference; and use the route path metadata from the reference map (42) as additional context to suppress false detections of objects (46) moving off a route path. [0010] 10. System according to claim 9, characterized in that said image capture device (20) is a video camera. [0011] 11. System according to claim 9 or claim 10, characterized in that said moving object (46) is a vehicle and such route path is a road. [0012] 12.
System according to claim 10 or claim 11, characterized in that the image capture device (20) is mounted on an aerial platform on an aerial vehicle. [0013] 13. System according to any one of claims 10 to 12, characterized in that the aerial vehicle is unmanned. [0014] 14. System according to any one of claims 10 to 13, characterized in that the geographic reference map (42) is hosted on a server (28) remote from the processor (24).
Patent family:
Publication number | Publication date
US20150286868A1 | 2015-10-08
US9734399B2 | 2017-08-15
EP2930686B1 | 2020-09-02
BR102015003962A2 | 2016-07-26
JP6726932B2 | 2020-07-22
JP2015201183A | 2015-11-12
KR20150116777A | 2015-10-16
CN104978390B | 2020-11-03
CN104978390A | 2015-10-14
EP2930686A1 | 2015-10-14
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
US7424133B2 | 2002-11-08 | 2008-09-09 | Pictometry International Corporation | Method and apparatus for capturing, geolocating and measuring oblique images
KR100489890B1 | 2002-11-22 | 2005-05-17 | 한국전자통신연구원 | Apparatus and Method to Provide Stereo Video or/and Detailed Information of Geographic Objects
ES2554549T3 | 2008-06-26 | 2015-12-21 | Lynntech, Inc. | Search procedure for a thermal target
WO2011133845A2 | 2010-04-22 | 2011-10-27 | Bae Systems Information And Electronic Systems Integration Inc. | Situational awareness integrated network and system for tactical information and communications
US10645344B2 | 2010-09-10 | 2020-05-05 | Avigilion Analytics Corporation | Video system with intelligent visual display
US9746988B2 | 2011-05-23 | 2017-08-29 | The Boeing Company | Multi-sensor surveillance system with a common operating picture
US20130021475A1 | 2011-07-21 | 2013-01-24 | Canant Ross L | Systems and methods for sensor control
US8958945B2 | 2012-02-07 | 2015-02-17 | Ge Aviation Systems Llc | System and methods for maintaining and operating an aircraft
US9904852B2 | 2013-05-23 | 2018-02-27 | Sri International | Real-time object detection, tracking and occlusion reasoning
US11080765B2 | 2013-03-14 | 2021-08-03 | Igor Gershteyn | Method and system for data structure creation, organization and searching using basic atomic units of information
JP2013055569A | 2011-09-06 | 2013-03-21 | Sony Corp | Image capturing device, information processing device, control methods therefor, and programs therefor
US10248839B2 | 2015-11-30 | 2019-04-02 | Intel Corporation | Locating objects within depth images
WO2017096548A1 | 2015-12-09 | 2017-06-15 | SZ DJI Technology Co., Ltd. | Systems and methods for auto-return
CN105512685B | 2015-12-10 | 2019-12-03 | 小米科技有限责任公司 | Object identification method and device
US10670418B2 | 2016-05-04 | 2020-06-02 | International Business Machines Corporation | Video based route recognition
US10054445B2 | 2016-05-16 | 2018-08-21 | Northrop Grumman Systems Corporation | Vision-aided aerial navigation
IL248749A | 2016-11-03 | 2019-08-29 | Dan El Eglick | System for a route overview using several sources of data
US10560666B2 | 2017-01-21 | 2020-02-11 | Microsoft Technology Licensing, Llc | Low-cost, long-term aerial imagery
US10209089B2 | 2017-04-03 | 2019-02-19 | Robert Bosch Gmbh | Automated image labeling for vehicles based on maps
CN108176049B | 2017-12-28 | 2021-05-25 | 珠海豹好玩科技有限公司 | Information prompting method, device, terminal and computer readable storage medium
US10445913B2 | 2018-03-05 | 2019-10-15 | Faro Technologies, Inc. | System and method of scanning and editing two dimensional floorplans
US11195324B1 | 2018-08-14 | 2021-12-07 | Certainteed Llc | Systems and methods for visualization of building structures
CN110349173B | 2019-07-15 | 2020-07-24 | 长光卫星技术有限公司 | Ground feature change monitoring method based on high-resolution remote sensing image
US11200671B2 | 2019-12-31 | 2021-12-14 | International Business Machines Corporation | Reference image guided object detection in medical image processing
CN111619584B | 2020-05-27 | 2021-09-21 | 北京经纬恒润科技股份有限公司 | State supervision method and device for unmanned automobile
Legal status:
2016-07-26 | B03A | Publication of a patent application or of a certificate of addition of invention [chapter 3.1 patent gazette]
2018-10-30 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
2020-04-22 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2021-03-16 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2021-06-01 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: Term of validity: 20 (twenty) years counted from 24/02/2015, subject to the legal conditions.
Priority:
Application number | Filing date | Patent title
US14/247,398 | 2014-04-08 | Context-aware object detection in aerial photographs/videos using travel path metadata