System, method and computer for improved automated visual inspection of a physical asset.
Patent Abstract:
A computer-implemented system (100) for enhanced automated visual inspection of a physical asset includes a visual inspection device (110) capable of generating images of the physical asset (105) and a computing device (120) comprising a processor and a memory device connected to the processor. The computing device also includes a storage device connected to the memory device and to the processor. The storage device contains at least one historical image of the physical asset and at least one design model that essentially represents the physical asset. The computing device (120) is configured to receive from a current image source at least one current image (115) of the physical asset taken by the visual inspection device (110). The computing device is configured to identify at least one matching historical image that corresponds to the at least one current image, and to identify at least one matching design model corresponding to the at least one current image.
Publication number: CH709322B1
Application number: CH00931/15
Filing date: 2013-12-13
Publication date: 2017-09-15
Inventors: Nam Lim Ser; Brandon Laflen John; Guan Li; Li Shubao; Yang Xingwei; Christopher Domke Michael; Eldred Lambdin Thomas
Applicant: Gen Electric
Primary IPC:
Patent Description:
BACKGROUND [0001] The field of the invention relates generally to computer-implemented programs, and more particularly to a computer-implemented system for enhanced automated visual inspection of a physical asset. Known methods of visually inspecting physical assets include the use of optical devices inserted into such physical assets without requiring human access. Such known methods are advantageous because they enable rapid visual analysis of complex physical assets that may be inaccessible to human technicians without disassembly or other maintenance. In some such known cases, disassembling or maintaining a complex physical asset can take hours, and many such complex physical assets must be temporarily taken out of service during that work. Known methods of visual inspection therefore rapidly analyze the internal state of complex physical assets and reduce the downtime that disassembly or maintenance would otherwise cause. Known methods and systems for visually inspecting physical assets involve sending visual data to human technicians capable of diagnosing the condition of the complex physical assets. Human technicians can review such visual data using monitors, computers, or other displays. In many known methods and systems of visual inspection, human technicians superimpose the collected visual data onto a three-dimensional design model and manually match features between the collected visual data and the three-dimensional model. Such superimposing, comparing and matching can be very time consuming. Known methods and systems of visual inspection also face the complication that, by the time the visual data reaches the human technician, it is often too late to obtain new physical measurements that were not previously collected by field technicians. In such cases, new physical measurements can only be collected at the next inspection of the physical asset. 
Such prior art methods and systems of visually inspecting physical assets using optical devices do not enable efficient and effective identification of the component corresponding to the visual data transmitted from the optical device to the human technician. Many known complex physical assets comprise a variety of components and sub-components. Checking the architectural details of each component and sub-component is often a necessary but time-consuming aspect of using known visual inspection methods. Such checking is nevertheless necessary to ensure that the human technician identifies the exact component or sub-component that matches the visual data; a wrong identification can be logistically and financially expensive. Brief Description [0005] In one aspect, a computer-implemented system is provided for enhanced automated visual inspection of a physical asset. The system includes a visual inspection device that is capable of producing images of the physical asset. The system also includes a computing device. The computing device includes a processor and a memory device connected to the processor. The computing device also includes a storage device connected to the memory device and to the processor. The storage device contains at least one historical image of the physical asset and at least one design model that essentially represents the physical asset. The computing device is configured to receive from a current image source at least one current image of the physical asset that has been taken by the visual inspection device. The computing device is also configured to identify at least one matching historical image corresponding to the at least one current image. In addition, the computing device is configured to identify at least one matching design model corresponding to the at least one current image. In a further aspect, a computer-based method for enhanced automated visual inspection of a physical asset is provided. 
The computer-based method is performed by a computing device. The computing device has a processor and a memory device connected to the processor. The computing device also has a storage device which is connected to the memory device and to the processor. The storage device contains at least one historical image of the physical asset and at least one design model that essentially represents the physical asset. The computer-based method includes receiving, from a current image source, at least one current image of the physical asset taken by the visual inspection device. The computer-based method also includes identifying at least one matching historical image corresponding to the at least one current image. The computer-based method further includes identifying at least one matching design model corresponding to the at least one current image. In another aspect, a computer is provided for enhanced automated visual inspection of a physical asset. The computer contains a processor and a memory device connected to the processor. The computer also includes a storage device connected to the memory device and to the processor. The storage device contains at least one historical image of the physical asset and at least one design model that essentially represents the physical asset. The computer is adapted to receive from a current image source at least one current image of the physical asset captured by the visual inspection device. The computer is also adapted to identify at least one matching historical image corresponding to the at least one current image. The computer is further adapted to identify at least one matching design model corresponding to the at least one current image. 
Drawings These and other features, aspects and advantages will be better understood when the following detailed description is read with reference to the accompanying drawings, in which like reference characters represent like parts throughout the drawings, wherein: FIG. 1 is a schematic view of an exemplary computer-implemented system for enhanced automated visual inspection of a physical asset; FIG. 2 is a block diagram of an exemplary computing device that may be used with the computer-implemented system shown in FIG. 1; FIG. 3 is a flow chart of an exemplary method for enhanced automated visual inspection of a physical asset using the computer-implemented system shown in FIG. 1; and FIG. 4 is a simplified flow chart of the overall method for enhanced automated visual inspection of a physical asset using the computer-implemented system shown in FIG. 1, which simplifies the improved visual inspection method shown in FIG. 3. Unless otherwise indicated, the drawings provided herein are intended to illustrate key features of the invention. These key features are believed to be applicable in a wide variety of systems comprising one or more embodiments of the invention. As such, the drawings are not intended to include all conventional features known to those skilled in the art to be required for practicing the invention. Detailed Description In the following description and in the claims, reference is made to a number of terms which are defined to have the following meanings. The singular forms "a", "an" and "the" include the plural referents unless the context clearly indicates otherwise. "Optional" or "optionally" means that the event or circumstance described thereafter may or may not occur, and that the description includes instances where the event occurs and instances where it does not. 
As used herein, the term "non-transitory computer-readable medium" is intended to be representative of any physical computer-based device, implemented in any method or technology, for short-term or long-term storage of information such as computer-readable instructions, data structures, program modules and sub-modules, or other data in any device. Therefore, the methods described herein may be encoded as executable instructions embodied in a tangible, non-transitory, computer-readable medium including, without limitation, a storage device and/or a memory device. When executed by a processor, such instructions cause the processor to perform at least part of the methods described herein. In addition, the term "non-transitory computer-readable medium" as used herein includes all tangible, computer-readable media including, without limitation, non-transitory computer storage devices, including, without limitation, volatile and non-volatile media and removable and non-removable media such as firmware, physical and virtual memories, CD-ROMs, DVDs and any other digital source such as a network or the Internet, as well as digital means yet to be developed, with a transitory, propagating signal being the only exception. As used herein, the terms "software" and "firmware" are synonymous and include any computer program stored in memory for execution by devices including, without limitation, mobile devices, clusters, personal computers, workstations, clients and servers. As used herein, the term "real time" refers to the time of occurrence of an associated event and/or the time of measurement and collection of previously determined data and/or the time of processing the data and/or the time of a system response to the events and the environment. In the embodiments described herein, these activities and events occur substantially instantaneously. As used herein, the term "computer" and related terms, e.g. 
"computing device", are not limited to the integrated circuits referred to in the art as a computer, but further refer to a microcontroller, a microcomputer, a programmable logic controller (PLC), a custom integrated circuit or other programmable circuit, and these terms are used interchangeably herein. As used herein, the term "automated" and related terms, e.g. "automatic", refer to the property of performing a task without additional input. As used herein, the visual inspection of a physical asset is automated from the point of providing the system with a reference image (e.g., the current image) to the point where the reference image is superimposed on a three-dimensional model. As used herein, the term "visual inspection device" and related terms, e.g. "visual inspection devices", refer to any optical device capable of being introduced into a physical asset, moving within the physical asset, receiving visual data relating to the physical asset, and transmitting the visual data to other systems. Such visual inspection devices may include, without limitation, endoscopes, fiberscopes, video endoscopes, rigid endoscopes, or any digital camera capable of being introduced into and maneuvered within the physical asset. As used herein, visual inspection devices also enable improved inspection of a physical asset by providing up-to-date images of the physical asset. As used herein, the term "physical asset" and related terms, e.g. "asset", refer to any physical object that can be inspected using a visual inspection device. Such assets may include, without limitation, gas turbines, steam turbines, aircraft engines, diesel engines, vehicle engines, truck engines, pressure vessels, and any simple or complex machine into which a visual inspection device may be inserted. As used herein, physical assets are also the objects of an improved visual inspection. As used herein, the term "design model" and related terms, e.g. "design model data", refer to any three-dimensional graphic model that essentially represents a physical asset. 
Such design models may include, without limitation, a computer-aided design (CAD) model, a computer-aided industrial design, a photorealistic rendering, or any other model that essentially represents a physical asset and may be used to represent a typical physical asset in three dimensions. As used herein, design models are also used to describe the expected normal three-dimensional shape of the physical asset. As used herein, the term "key-point feature detection" and related terms refer to a procedure for detecting and describing local features in images. Key-point feature detection typically includes methods for identifying points of interest in the object, extracting those locations as features, and describing the features. Such methods include, without limitation, the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), maximally stable extremal regions (MSER), and affine SIFT (ASIFT). As algorithms continue to evolve, some techniques may change, be added, or be removed. As used herein, key-point feature detection enables the comparison of the current image with historical image data. As used herein, the term "key-point matching" and related terms refer to a method, performed after key-point feature detection, in which it is determined whether a first feature from a first image refers to the same physical location on the physical asset as a second feature in a second image. Key-point matching therefore enables the correlation, between two different images, of the features determined during key-point feature detection. As used herein, key-point matching also enables the comparison of current images with historical image data. As used herein, the term "point cloud" and related terms, e.g. "2D-3D point cloud", refer to a mapping of feature matches between a three-dimensional model and a two-dimensional historical image. 
As used herein, the point cloud is generated in an offline manner and used to provide a three-dimensional design model along with the two-dimensional historical image after image retrieval identifies the historical image corresponding to the current image. As used herein, the term "image retrieval" and related terms, e.g. "image retrieval system", refer to a method of finding the second image that is most relevant (i.e., closest) to a first image. There are many methods of image retrieval and, accordingly, the image retrieval device may employ any such method. The image retrieval device may, without limitation, use the k-nearest neighbors (k-NN) algorithm, methods that implement a vocabulary tree and cluster-based retrieval, or any other method capable of organizing historical images and efficiently identifying historical images that match a current image. As used herein, the image retrieval device is also used to identify a historical image and a point cloud that correspond to a current image. As used herein, the term "epipolar condition" and related terms, e.g. "epipolar line", refer to a concept of computer vision and geometry that allows the alignment of two two-dimensional images of the same three-dimensional object taken from different positions. An epipolar line is the line, in a second image, on which a point perceived in a first image must appear; the difference in perception is caused by the different reference positions of the first image and the second image. As used herein, epipolar lines are used to constrain match points between two images. The epipolar condition is the use of the epipolar line to require that, for each point observed in one image, the same point be observed in the other image on a known epipolar line. 
If an epipolar condition is not met (i.e., the first image contains a point whose proposed corresponding point in the second image does not lie on the corresponding epipolar line), the match is flagged as an error and discounted relative to the other matches. Approximating language, as used throughout the specification and claims herein, may be applied to modify any quantitative representation that could permissibly vary without resulting in a change in the basic function to which it relates. Therefore, a value modified by a term or terms such as "about" and "substantially" is not limited to the exact value specified. In at least some instances, the approximating language may correspond to the precision of an instrument for measuring the value. Here and throughout the specification and claims, range limitations may be combined and/or interchanged; such ranges are identified and include all sub-ranges contained therein unless context or language indicates otherwise. FIG. 1 is a schematic view of an exemplary computer-implemented system 100 for improved visual inspection of a physical asset 105. In the exemplary embodiment, the physical asset is a steam turbine. In alternative embodiments, the physical asset 105 may be any physical object, including, but not limited to, gas turbines, aircraft engines, diesel engines, vehicle engines, truck engines, pressure vessels, piping systems, generators, reduction gears, transformers, and any simple or complex machinery. The computer-implemented system 100 also includes a visual inspection device 110. In the exemplary embodiment, the visual inspection device 110 is an endoscope. In alternative embodiments, the visual inspection device 110 may include, without limitation, fiberscopes, video endoscopes, rigid endoscopes, or any digital camera device capable of being inserted and maneuvered within the physical asset 105. 
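The epipolar condition described above can be checked numerically: given a fundamental matrix F, a point x in the first image induces the epipolar line l' = F·x in the second image, and a candidate match must lie within a tolerance of that line. The sketch below is a minimal illustration under stated assumptions; the function names are hypothetical, and F is hand-picked for a purely horizontal camera translation, where epipolar lines are the horizontal scanlines.

```python
import math

def epipolar_residual(F, p1, p2):
    """Distance from point p2 (second image) to the epipolar line F @ p1
    induced by point p1 (first image). F is a 3x3 fundamental matrix given
    as nested lists; points are (u, v) pixel coordinates."""
    x1 = (p1[0], p1[1], 1.0)
    # Epipolar line l' = F x1, as coefficients (a, b, c) of a*u + b*v + c = 0.
    a, b, c = (sum(F[i][j] * x1[j] for j in range(3)) for i in range(3))
    return abs(a * p2[0] + b * p2[1] + c) / math.hypot(a, b)

def satisfies_epipolar(F, p1, p2, tol=1.0):
    """True when p2 lies within tol pixels of p1's epipolar line."""
    return epipolar_residual(F, p1, p2) <= tol

# Fundamental matrix of a purely horizontal camera translation:
# epipolar lines satisfy v' = v (same scanline in both images).
F = [[0, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]
print(satisfies_epipolar(F, (10, 5), (20, 5)))   # → True (same scanline)
print(satisfies_epipolar(F, (10, 5), (20, 8)))   # → False (3 px off the line)
```

Matches failing this test would be flagged and discounted exactly as described in the paragraph above.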
The computer-implemented system 100 also includes a computing device 120. The computing device 120 includes a processor 125 and a memory device 130. The processor 125 and the memory device 130 are interconnected. In the exemplary embodiment, the computing device 120 includes a single processor 125 and a single memory device 130. In alternative embodiments, the computing device 120 may include a plurality of processors 125 and/or a plurality of memory devices 130. The computing device 120 also includes a storage device 135. The storage device 135 is connected to the processor 125 and also to the memory device 130. In the exemplary embodiment, the storage device 135 is a hard disk drive. In alternative embodiments, the storage device 135 may be any storage device including, but not limited to, optical storage devices, magnetic storage devices, network storage devices, flash memory devices, mass storage devices, and any other storage device capable of storing data for the computer-implemented system 100. In at least some embodiments, the computing device 120 is in communication with an external memory 160. In such embodiments, the external memory 160 stores data that may be stored in the storage device 135 or used directly by the computing device 120. In the exemplary embodiment, the storage device 135 includes historical image data 140 and design model data 145. The historical image data 140 represents data acquired through prior use of the visual inspection device 110 to inspect the physical asset 105. In the exemplary embodiment, at least some historical image data 140 is generated immediately after the physical asset 105 is manufactured. In alternative embodiments, historical image data may be generated at any time prior to the current inspection. In other embodiments, the historical image data 140 may include image data from multiple inspections of the physical asset 105. The design model data 145 represents data describing the form of the physical asset 105 prior to fabrication. 
In example embodiments, the design model data 145 includes computer-aided design (CAD) drawings. In alternative embodiments, the design model data 145 may include computer-aided industrial design, photorealistic rendering, and any other model that essentially represents a physical asset and may be used to illustrate the typical physical asset 105. In operation, the visual inspection device 110 is inserted into the physical asset 105 and transmits at least one current image 115 to the computing device 120. The computing device 120 receives the at least one current image 115 from the visual inspection device 110. In alternative embodiments, the computing device 120 receives at least one current image 115 that was generated by the visual inspection device 110 but has been stored in the memory device 130, the storage device 135, or the external memory 160. In these embodiments, the current image data 115 may be received from alternative sources to accelerate the process. For example, the technician using the visual inspection device 110 may have performed a thorough inspection of the physical asset 105 and captured all relevant image data 115, but may no longer be available to deliver current image data as a live stream. In such cases, it may be useful to access the same current image data 115 from the data store. The computing device 120 also identifies at least one matching historical image (not shown in FIG. 1) from the historical image data 140 corresponding to the at least one current image 115. In addition, the computing device 120 identifies at least one matching design model (not shown in FIG. 1) from the design model data 145 corresponding to the at least one current image 115. 
In the exemplary embodiment, the historical image data 140 corresponding to the current image 115 is retrieved by an image retrieval process together with the design model data 145 through the use of a point cloud that matches features between the two-dimensional historical image data 140 and the three-dimensional design model data 145. The point clouds and the relationship between the two-dimensional historical image data 140 and the three-dimensional design model data 145 are generated in a process separate from what is described in this disclosure. In the exemplary embodiment, the image retrieval method involves the use of a spatial entropy feature that captures, based on entropy, the properties between regions and within a region. The spatial entropy feature represents the specific location of the visual inspection device 110 and scale similarities. The spatial entropy features are used to generate an entropy feature vector that contains information related to a given pixel and its neighbors (i.e., the pixels immediately adjacent to that pixel are used). The spatial nature of the image is represented by dividing the image into spatial cells, assigning an entropy value to each cell, and concatenating the entropy values from all cells into a feature vector. In some embodiments, the computing device may measure and compare features of the current image 115 and the design model data 145 to determine deviations between the physical asset 105 and the design model data 145. Deviations may include, e.g., a crack in the surface of the physical asset 105, a deviation in a component of the physical asset 105, or any other condition that may create a difference between the current image 115 and the design model data 145. The computing device 120 communicates any discrepancies to a service agent 155 who is capable of providing diagnosis, replacement, and/or maintenance of the physical asset 105. 
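The cell-wise entropy construction described above (divide the image into spatial cells, assign each cell an entropy value, concatenate into a feature vector) can be sketched as follows. The function names, cell size, and tiny sample image are illustrative assumptions; a real implementation would histogram quantized intensities over far larger images.

```python
import math
from collections import Counter

def cell_entropy(pixels):
    """Shannon entropy (bits) of the intensity distribution in one cell."""
    counts = Counter(pixels)
    n = len(pixels)
    ent = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return max(ent, 0.0)  # clamp -0.0 produced by single-valued cells

def spatial_entropy_vector(image, cell=2):
    """Split a 2-D grayscale image (list of rows, dimensions divisible by
    `cell`) into cell x cell blocks and concatenate per-cell entropies."""
    h, w = len(image), len(image[0])
    vec = []
    for r in range(0, h, cell):
        for c in range(0, w, cell):
            block = [image[r + dr][c + dc]
                     for dr in range(cell) for dc in range(cell)]
            vec.append(cell_entropy(block))
    return vec

img = [[10, 10, 10, 50],
       [10, 10, 10, 50],
       [ 7,  7,  3,  3],
       [ 7,  7,  3,  3]]
print(spatial_entropy_vector(img, cell=2))
# → [0.0, 1.0, 0.0, 0.0]  (only the top-right cell mixes two intensities)
```

The resulting vector is what the hypothetical `knn_retrieve`-style search would compare across current and historical images.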
In such embodiments, the service agent 155 is able to receive the information about the physical asset 105 from the computing device 120. FIG. 2 is a block diagram of an exemplary computing device 120 that may be used with the computer-implemented system 100 (shown in FIG. 1). The computing device 120 includes a memory device 130 and a processor 125 operatively connected to the memory device 130 for executing instructions. In the exemplary embodiment, the computing device 120 includes a single processor 125 and a single memory device 130. In alternative embodiments, the computing device 120 may include a plurality of processors 125 and/or a plurality of memory devices 130. In some embodiments, the executable instructions are stored in the memory device 130. The computing device 120 is configurable to perform one or more operations described herein by programming the processor 125. For example, the processor 125 may be programmed by encoding an operation as one or more executable instructions and providing the executable instructions in the memory device 130. In the exemplary embodiment, the memory device 130 is one or more devices that facilitate the storage and retrieval of information such as executable instructions and/or other data. The memory device 130 may include one or more physical non-transitory computer-readable media such as, but not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), a hard disk, a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), and/or non-volatile RAM (NVRAM). The foregoing memory types are exemplary only and are therefore not restrictive as to the types of memory usable for storage of a computer program. The memory device 130 may be configured to store operational data including, without limitation, at least one current image 115 (not shown in FIG. 
1) and/or any other type of data. In some embodiments, the processor 125 removes or "purges" data from the memory device 130 depending on the age of the data. For example, the processor 125 may overwrite previously recorded and stored data associated with a subsequent time and/or event. Additionally or alternatively, the processor 125 may remove data that exceeds a predetermined time interval. The memory device 130 also includes, without limitation, sufficient data, algorithms, and commands to facilitate the operation of the computer-implemented system 100. In some embodiments, the computing device 120 includes an operator input interface 230. In the exemplary embodiment, the operator input interface 230 is connected to the processor 125 and receives input from an operator 225. The operator input interface 230 may include, without limitation, a keyboard, a pointing device, a mouse, a stylus, a touch-sensitive panel (e.g., a touchpad or a touchscreen), and/or an audio input interface (e.g., a microphone). A single component, such as a touchscreen, may function both as a display device of a display interface 220 and as the operator input interface 230. A communication interface 235 is connected to the processor 125 and is configured to communicate with one or more other devices, such as a sensor or another computing device 120, and to perform input and output operations with respect to those devices. For example, the communication interface 235 may include, without limitation, a wired network adapter, a wireless network adapter, a mobile telecommunications adapter, a serial communication adapter, and/or a parallel communication adapter. The communication interface 235 may receive data from and/or transmit data to one or more remote devices. 
For example, a communication interface 235 of one computing device 120 may transmit an alarm to the communication interface 235 of another computing device 120. The communication interface 235 thus enables machine-to-machine communication, i.e., it functions as a machine-to-machine interface. The display interface 220 and the communication interface 235 are both capable of providing information suitable for use with the methods described herein, e.g., to the operator 225 or another device. Therefore, the display interface 220 and the communication interface 235 may be referred to as output devices. Similarly, the operator input interface 230 and the communication interface 235 are capable of receiving information suitable for use with the methods described herein and may therefore be referred to as input devices. In the exemplary embodiment, the display interface 220 is used to visualize the automatic overlay of the current images 115 onto the design model data 145. After visualization, the operator 225 may use the operator input interface 230 to perform tasks including, without limitation, measurements and determinations of deviations in the current images 115 with respect to the design model data 145. The measurement task may include the use of software that provides measurement applications to perform such measurements and determinations of deviations. In the exemplary embodiment, the operator 225 may use the computing device 120 upon receiving, via the display interface 220, information about the at least one current image 115, the at least one matching historical image (not shown), and the at least one matching design model (not shown). The operator 225 may act on the displayed information and use the computing device 120 to update the state of the physical asset 105, request maintenance (not shown) from the service agent 155, or continue the inspection of the physical asset 105. 
The operator 225 may initiate such actions via the operator input interface 230, whereupon the processor 125 processes the operator command and uses the communication interface 235 to communicate with other devices. The computing device 120 described here is one exemplary embodiment of a computing device 120. In at least some other embodiments, the computing device 120 is also exemplary of other devices (not shown) that may be used in the enhanced visual inspection of a physical asset 105. In most embodiments, the computing device 120 represents at least the essential features of such other devices. FIG. 3 is a flowchart of an exemplary method 300 for improved visual inspection of a physical asset 105 using the computer-implemented system 100 (both shown in FIG. 1). The method 300 includes receiving 305 current visual inspection device data. Receiving 305 current visual inspection device data represents receiving, at the computing device 120 (shown in FIG. 1), current images 115 (shown in FIG. 1) from the visual inspection device 110 (shown in FIG. 1), which captures images of the physical asset 105. The method 300 additionally includes using the image retrieval device to retrieve 315 a point cloud. In the exemplary embodiment, the image retrieval device uses a k-nearest-neighbor search over a k-d tree structure that contains the historical image data 140. In alternative embodiments, the image retrieval device uses other methods to search for a corresponding historical image 140. The point cloud represents a mapping of the three-dimensional design model data 145 to the two-dimensional historical image data 140. Therefore, when the image retrieval device identifies a corresponding historical image 140, a point cloud is retrieved that links the historical images 140 and the design model data 145. The method 300 also includes key-point matching 320 to compare the current images 115 to matching historical images 140. 
Key-point matching 320 involves extracting specific features associated with the images and using those features to compare multiple images. In the exemplary embodiment, the features (not shown) of the current images 115 are compared to the features (not shown) of the retrieved historical images 140, resulting in pairs of matching features that refer to the same physical location on the physical asset 105. The features of the current images 115 and the historical images 140 may be the same features used in retrieving 315 a point cloud using the image retrieval device, or other features. In the exemplary embodiment, key-point matching 320 has three steps. First, key-point matching 320 compares images in terms of local shape and local geometric patterns. The comparison of local shape represents a comparison of a plurality of image features. The comparison of local geometric patterns represents a comparison of images that have been decomposed into geometric features of their components. The comparison further includes annotating the historical images 140 with shape-constraint information and locating the same boundaries in the current images 115. This comparison also includes transforming each feature location in the historical images 140 and the current images 115. The feature location transformation may be performed, e.g., by generating a scale-invariant feature transform (SIFT) descriptor for a feature in each image. The matching method also includes comparing the transformed features. In the exemplary embodiment, comparing transformed features includes applying values to account for both the appearance and the location of each feature, thereby adding location information to the transformed features. Second, key-point matching 320 includes the use of an estimation method to find the best set of matching points between the images. Once the transformed features have been compared, matches between features are obtained. 
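Comparing transformed features to obtain putative matches can be sketched as a nearest-neighbor search over descriptors. The ratio test used below (accepting a match only when the nearest descriptor is markedly closer than the second-nearest) is a common heuristic associated with SIFT-style matching and is an assumption here, not a requirement of the method described above; the descriptor values and function names are hypothetical.

```python
import math

def descriptor_distance(d1, d2):
    """Euclidean distance between two feature descriptors (equal length)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def match_descriptors(desc1, desc2, ratio=0.8):
    """For each descriptor in image 1, accept its nearest neighbour in
    image 2 only if it is clearly better than the second-nearest
    (Lowe-style ratio test). Returns (index_in_1, index_in_2) pairs."""
    matches = []
    for i, d1 in enumerate(desc1):
        scored = sorted((descriptor_distance(d1, d2), j)
                        for j, d2 in enumerate(desc2))
        if len(scored) >= 2 and scored[0][0] < ratio * scored[1][0]:
            matches.append((i, scored[0][1]))
    return matches

# Hypothetical 2-D descriptors for a current and a historical image.
desc_current  = [[1.0, 0.0], [0.0, 1.0]]
desc_historic = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
print(match_descriptors(desc_current, desc_historic))
# → [(0, 0), (1, 1)]
```

The resulting index pairs are the putative matches that the RANSAC steps described next would filter for geometric consistency.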
The estimation method further includes applying the random sample consensus (RANSAC) approach in two steps to eliminate outliers among the matches through multiple iterations. In the first RANSAC step, matches are constrained so that matching points must satisfy the epipolar condition. For clarification, the epipolar condition requires that, for each observed point in one image, the corresponding point in the other image must lie on a known epipolar line. Therefore, an epipolar line is determined for both the current image 115 and the historical image 140 and used to generate the epipolar condition, which in turn is used to constrain matches. In at least some embodiments, the epipolar condition is determined by generating a fundamental matrix representative of the epipolar line. If there is a large number of matches, the RANSAC approach uses a subset of the original matches selected randomly from the complete set of matches. This subset is scored by an error value calculated as its degree of disagreement with the epipolar condition. Depending on the number of matches in the full set, the process is iterated until an appropriate number of subsets has been evaluated against the epipolar condition. The epipolar geometry associated with the subset having the lowest error value is then used to eliminate matches that deviate from this geometry beyond a defined threshold. In the second RANSAC step, matches are constrained to satisfy a projection matrix. The projection matrix is calculated using the retrieved point cloud. The projection matrix represents a mathematical matrix that relates the three-dimensional point cloud, transferred to the current image 115, to the two-dimensional image. As in the first step, a subset of the original matches is selected randomly from the set of matches (which is now restricted by the application of the epipolar condition). 
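The epipolar condition used in the first RANSAC step can be sketched as an algebraic residual check. The fundamental matrix below is an assumption: it is the matrix of a purely horizontally translating (rectified) camera pair, used as a stand-in so the constraint is easy to verify by eye; the actual pipeline would estimate the fundamental matrix from the match subsets inside the RANSAC loop.

```python
import numpy as np

def epipolar_errors(F, pts1, pts2):
    """Algebraic epipolar residual |x2^T F x1| for each correspondence
    (points given as Nx2 pixel coordinates)."""
    h1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    h2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    return np.abs(np.einsum('ij,jk,ik->i', h2, F, h1))

# Fundamental matrix of a purely horizontally translating camera:
# corresponding points must lie on the same image row (the epipolar line).
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])

pts1 = np.array([[10., 20.], [30., 45.], [50., 60.]])
pts2 = np.array([[14., 20.], [33., 45.], [55., 90.]])  # last pair breaks the row constraint

errors = epipolar_errors(F, pts1, pts2)
inliers = errors < 1e-6
```

Matches whose residual exceeds the threshold are discarded, which is exactly the elimination step the lowest-error RANSAC subset drives in the text above.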
The projection matrix calculated from this subset is assigned a quality value that depends on how well the matches in the subset agree with the calculated projection matrix. The process is iterated depending on the quality over the whole set of matches, and the projection matrix with the presumed least error is ultimately used. Third, key-point matching 320 enables registration of the design model data 145, which was retrieved together with the historical image data 140 corresponding to the current image data 115, by using the retrieved point cloud corresponding to the historical image data. In this step, the association, or registration, between the two-dimensional historical image data 140 and the three-dimensional design model data 145 is transferred to the current image data 115. Therefore, each point in the current image data 115 for which a match has been found is now assigned to the three-dimensional points of the design model data 145. The method 300 further includes a camera pose estimation 325. The camera pose estimation 325 is representative of determining the specific position and orientation of the visual inspection device 110 with respect to the physical asset 105. The camera pose estimation 325 includes determining projections of potential camera positions based on the design model data 145 associated with the historical image data 140 and the current images 115. In other words, the design model data 145 is now assigned, or registered, to the current images 115 based on the application of the point cloud (described above), and camera positions may be determined for the current images 115 by generating and applying a projection matrix to the mapping of the two-dimensional current image 115 to the three-dimensional design model data 145. In the exemplary embodiment, obtaining projections includes solving a system of linear equations representing a projection matrix of potential camera positions. 
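The linear system mentioned above can be sketched with the standard Direct Linear Transform (DLT), which recovers a 3x4 projection matrix from 2D-3D correspondences by SVD. The DLT is a conventional way to solve such a system, not necessarily the patent's exact formulation; the camera intrinsics and synthetic noise-free points below are assumptions for illustration.

```python
import numpy as np

def dlt_projection_matrix(X3d, x2d):
    """Estimate a 3x4 projection matrix from >= 6 3D-2D correspondences by
    solving the homogeneous linear system with SVD (Direct Linear Transform)."""
    A = []
    for Xw, (u, v) in zip(X3d, x2d):
        Xh = np.append(Xw, 1.0)
        A.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)   # null vector of A, up to scale

def project(P, X3d):
    """Project 3D points to pixel coordinates with projection matrix P."""
    Xh = np.hstack([X3d, np.ones((len(X3d), 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]

# Assumed ground-truth camera: 800 px focal length, principal point (320, 240),
# translated so all model points sit in front of the camera.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.1], [-0.2], [5.0]])])

rng = np.random.default_rng(2)
X3d = rng.uniform(-1, 1, size=(8, 3))   # non-coplanar design-model points
x2d = project(P_true, X3d)              # their observed pixel locations

P_est = dlt_projection_matrix(X3d, x2d)
reproj_err = np.max(np.abs(project(P_est, X3d) - x2d))
```

The recovered matrix encodes exactly what the text calls a function of device properties, rotation, and translation; with noisy real matches the DLT would be wrapped in the second RANSAC step rather than solved once.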
The method 300 additionally includes rendering a model at an estimated camera position 330. Rendering a model at an estimated camera position 330 represents applying the camera pose estimation 325 to the associated design model data 145 to generate a rendered model that is representative of the design model viewed from the same camera position as the visual inspection device 110. In addition, the method 300 includes a physical measurement 335. The physical measurement 335 represents the comparison of physical features of the rendered design model with the current image 115. In the exemplary embodiment, the physical measurement 335 includes comparing image data from a projection of turbine blade design model data 145 with the current image 115 of the turbine blade and measuring a deviation such as a crack. FIG. 4 is a simplified flowchart of the overall method 400 for improved visual inspection of a physical asset 105 using a computer-implemented system 100 (both shown in FIG. 1), which enables the improved visual inspection method 300 shown in FIG. 3. The computing device 120 (shown in FIG. 1) receives 415 at least one current image of the physical asset 105. The receiving 415 of at least one current image of the physical asset 105 represents the receiving of current visual inspection device data 305 (shown in FIG. 3) and, more specifically, receiving current images 115 from a current image source. In the exemplary embodiment, the current image source is the visual inspection device 110 (shown in FIG. 1). In alternative embodiments, the current image source may be the working memory device 130, the storage device 135, or an external memory 160. The computing device 120 also identifies 420 at least one matching historical image corresponding to the at least one current image. Identifying 420 at least one matching historical image corresponding to the at least one current image represents the use of the image retrieval 315 (shown in FIG. 3). 
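The physical measurement 335 described above can be sketched as projecting design-model contour points with the estimated camera matrix and converting pixel deviations to physical units. Everything specific here is hypothetical: the camera matrix, the blade-contour points, the mm-per-pixel scale, and the crack-flagging threshold are assumptions standing in for values the real system would derive from the pose estimate and device calibration.

```python
import numpy as np

def measure_deviation(P, model_pts3d, observed_px, mm_per_px):
    """Project design-model points into the current image with the estimated
    camera matrix P and return each point's deviation from the observed
    pixel location, converted to physical units."""
    Xh = np.hstack([model_pts3d, np.ones((len(model_pts3d), 1))])
    x = Xh @ P.T
    rendered_px = x[:, :2] / x[:, 2:3]
    return np.linalg.norm(rendered_px - observed_px, axis=1) * mm_per_px

# Hypothetical camera matrix and model points along a turbine-blade contour.
P = np.array([[800., 0., 320., 80.],
              [0., 800., 240., -160.],
              [0., 0., 1., 5.]])
edge3d = np.array([[0., 0., 0.], [0., 0.1, 0.], [0., 0.2, 0.]])

Xh = np.hstack([edge3d, np.ones((3, 1))])
proj = Xh @ P.T
nominal_px = proj[:, :2] / proj[:, 2:3]

observed = nominal_px.copy()
observed[1] += [3.0, 0.0]          # a 3-pixel local deviation: a possible crack

deviation_mm = measure_deviation(P, edge3d, observed, mm_per_px=0.05)
flagged = deviation_mm > 0.1       # flag deviations beyond an assumed 0.1 mm tolerance
```

Rendering the model from the estimated pose is what makes this pixel-space comparison meaningful: model and image are viewed from the same camera position, so any residual deviation is attributable to the asset itself.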
More specifically, identifying 420 at least one matching historical image corresponding to the at least one current image represents selecting historical image data 140 (shown in FIG. 1) corresponding to the current images 115. The computing device also identifies 425 at least one matching design model corresponding to the at least one current image. Identifying 425 at least one matching design model represents using the image retrieval 315 to obtain a point cloud having design model data 145 (shown in FIG. 1) associated with the historical image data 140. As discussed above, the image retrieval device uses features determined for the current image 115 and compares them to historical image data 140 to determine the historical image data 140 that most closely matches the current image 115. The computer-implemented systems and methods described herein provide an automated approach to inspecting physical assets with a visual inspection device. The embodiments described herein enable automated comparison of image data from a visual inspection device with design model data without requiring manual intervention or external input. The automated retrieval of historical images and the automated alignment of current and historical image features, which allows the transfer of a point cloud to the current image, avoid the laborious process of manually selecting feature matches between the model and the current image. In addition, the methods and systems described herein enable rapid inspection and analysis of the physical asset. Further, the methods and systems described herein reduce the cost of inspection due to the reduced time and manpower expended on the inspection. In addition, these methods and systems increase the uptime of physical assets by reducing the need to disassemble an asset for inspection. 
In addition, the methods and systems described herein reduce the operational, logistical and financial expenses associated with the inspection of physical assets through efficient analysis and model comparisons. An exemplary technical effect of the methods and computer-implemented systems described herein includes (a) an increased rate of analysis of visual data from a visual inspection device on a physical asset; and/or (b) improved monitoring of physical assets through accurate comparisons of visual data; and/or (c) a longer operating life of physical assets due to a reduced need to disassemble them. Exemplary embodiments for improved visual inspection of a physical asset are described in detail above. The computer-implemented systems and methods for operating such systems are not limited to the specific embodiments described herein; rather, components of the systems and/or steps of the methods may be used independently and separately from other components and/or steps described herein. For example, the methods may also be used in combination with other systems and methods of operation and are not limited to using only the visual inspection systems and methods
to be used as described herein. Rather, the exemplary embodiment may be implemented and used in conjunction with many other operational applications. Although specific features of various embodiments of the invention may be shown in some drawings and not in others, this is for convenience only. In accordance with the principles of the invention, each feature of a drawing may be referenced and/or claimed in combination with any feature of any other drawing. The written description uses embodiments to disclose the invention, including the preferred embodiment, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any included procedures. The patentable scope of the invention is defined by the claims and may include other examples that will become apparent to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims. 
Claims (20)
[1] A computer-implemented system for enhanced automated visual inspection of a physical asset, comprising: a visual inspection device capable of generating images of the physical asset; and a computing device including a processor, a working memory device connected to the processor, and a storage device coupled to the working memory device and the processor, wherein the storage device comprises at least one historical image of the physical asset and at least one design model that substantially represents the physical asset, the computing device being configured to: receive, from a current image source, at least one current image of the physical asset captured by the visual inspection device; identify at least one matching historical image corresponding to the at least one current image; and identify at least one matching design model corresponding to the at least one current image.
[2] The computer-implemented system of claim 1, wherein the current image source is at least one of the following: the visual inspection device; the storage device; the working memory device; or an external storage device.
[3] The computer-implemented system of claim 1, wherein the computer-implemented system configured to identify the at least one matching historical image corresponding to the at least one current image is further configured to: determine a group of historical typical features from the at least one historical image; determine a group of current typical features from the at least one current image; and determine, using at least one image retrieval method, the at least one historical image corresponding to the at least one current image.
[4] The computer-implemented system of claim 3, further configured to use a key-point match to identify the most appropriate sets of matching points between the at least one historical image and the at least one current image.
[5] 
The computer-implemented system of claim 1, wherein the computer-implemented system configured to identify the at least one matching design model corresponding to the at least one current image is further configured to determine a physical position of the visual inspection device with respect to the physical asset.
[6] The computer-implemented system of claim 5, further configured to: determine two-dimensional pixel locations in the at least one matching historical image; determine, from the two-dimensional pixel locations, three-dimensional point locations in the matching design model; generate, from the three-dimensional point locations, a mathematical model of projections of the matching design model, the mathematical model being representative of a function of properties of the visual inspection device, the rotation of the visual inspection device, and the translation of the visual inspection device; and determine a position of the visual inspection device with respect to the physical asset from the mathematical model of the projections of the matching design model and the at least one current image.
[7] The computer-implemented system of claim 6, wherein the computer-implemented system configured to identify the at least one matching design model corresponding to the at least one current image is further configured to: generate a projection of the matching design model using the position of the visual inspection device; superimpose the at least one current image on the projection of the matching design model; and measure differences between the projection and the at least one current image. 
[8] A computer-based method for enhanced automated visual inspection of a physical asset, the method being performed by a computing device, the computing device comprising a processor, a working memory device connected to the processor, and a storage device connected to the working memory device and the processor, the storage device including at least one historical image of the physical asset and at least one design model that substantially represents the physical asset, the method comprising: receiving, from a current image source, at least one current image of the physical asset captured by a visual inspection device; identifying at least one matching historical image corresponding to the at least one current image; and identifying at least one matching design model corresponding to the at least one current image.
[9] The computer-based method of claim 8, wherein the current image source is at least one of the following: the visual inspection device; the storage device; the working memory device; or an external storage device.
[10] The computer-based method of claim 8, wherein identifying the at least one matching historical image corresponding to the at least one current image comprises: determining a group of historical typical features from the at least one historical image; determining a group of current typical features from the at least one current image; and determining, using at least one image retrieval method, the at least one historical image corresponding to the at least one current image.
[11] The computer-based method of claim 10, further comprising using a key-point match to identify the most appropriate sets of matching points between the at least one historical image and the at least one current image.
[12] The computer-based method of claim 8, wherein identifying the at least one matching design model corresponding to the at least one current image further comprises determining a physical position of the visual inspection device with respect to the physical asset. 
[13] The computer-based method of claim 12, further comprising: determining two-dimensional pixel locations in the at least one matching historical image; determining, from the two-dimensional pixel locations, three-dimensional point locations in the matching design model; generating, from the three-dimensional point locations, a mathematical model of projections of the matching design model, wherein the mathematical model is representative of a function of properties of the visual inspection device, the rotation of the visual inspection device, and the translation of the visual inspection device; and determining a position of the visual inspection device with respect to the physical asset from the mathematical model of projections of the matching design model and the at least one current image.
[14] The computer-based method of claim 13, wherein identifying the at least one matching design model corresponding to the at least one current image further comprises: generating a projection of the design model using the position of the visual inspection device; superimposing the at least one current image on the design model; and measuring differences between the projection and the at least one current image.
[15] A computer for enhanced automated visual inspection of a physical asset, the computer comprising a processor, a working memory device connected to the processor, and a storage device coupled to the working memory device and the processor, the storage device including at least one historical image of the physical asset and at least one design model substantially representing the physical asset, the computer being configured to: receive, from a current image source, at least one current image of the physical asset captured by a visual inspection device; identify at least one matching historical image corresponding to the at least one current image; and identify at least one matching design model corresponding to the at least one current image. 
[16] The computer of claim 15, wherein the current image source is at least one of the following: the visual inspection device; the storage device; the working memory device; or an external storage device.
[17] The computer of claim 16, further configured to use a key-point match to identify the most appropriate sets of matching points between the at least one historical image and the at least one current image.
[18] The computer of claim 15, wherein the computer configured to identify the at least one matching design model corresponding to the at least one current image is further configured to determine a physical position of the visual inspection device with respect to the physical asset.
[19] The computer of claim 18, further configured to: determine two-dimensional pixel locations in the at least one matching historical image; determine, from the two-dimensional pixel locations, three-dimensional point locations in the matching design model; generate, from the three-dimensional point locations, a mathematical model of projections of the matching design model, the mathematical model being representative of a function of properties of the visual inspection device, the rotation of the visual inspection device, and the translation of the visual inspection device; and determine, from the mathematical model of the projections of the matching design model and the at least one current image, a position of the visual inspection device with respect to the physical asset.
[20] The computer of claim 19, wherein the computer configured to identify the at least one matching design model corresponding to the at least one current image is further configured to: generate a projection of the design model using the position of the visual inspection device; superimpose the at least one current image on the design model; and measure differences between the projection and the at least one current image.
Family patents:
Publication number | Publication date
WO2014105463A3 | 2014-10-16
JP6405320B2 | 2018-10-17
JP2016502216A | 2016-01-21
WO2014105463A2 | 2014-07-03
DE112013006312T5 | 2015-09-24
US20140185912A1 | 2014-07-03
US9251582B2 | 2016-02-02
Legal status:
2017-03-15 | NV | New agent | Representative's name: GENERAL ELECTRIC TECHNOLOGY GMBH GLOBAL PATENT, CH
2018-07-31 | PL | Patent ceased
Priority:
Application number | Publication number | Filing date | Patent title
US13/731,209 | US9251582B2 | 2012-12-31 | Methods and systems for enhanced automated visual inspection of a physical asset
PCT/US2013/075020 | WO2014105463A2 | 2013-12-13 | Methods and systems for enhanced automated visual inspection of a physical asset