WEAR-TOLERANT HYDRAULIC / PNEUMATIC PISTON POSITION DETECTION SYSTEM USING OPTICAL SENSORS
Patent abstract:
The present invention relates to measuring piston rod displacement with self-calibrating and recalibrating sensors (230, 925). Self-calibration allows on-site calibration of uncalibrated optical sensors (230, 925). During operation, recalibration makes it possible to detect and correct wear and damage of the piston rod (200) and / or of the optical sensors (230, 925). The calibration positions (210) on the surface (208) of the piston rod (200) are captured by optical sensors (230) using lasers or dark-field lenses designed for optical computer mice. Natural surface patterns (208) can be used in locations where calibration positions (210) are required, which reduces or eliminates the need for marked calibration positions (210). The marked calibration positions (210) are spatially unique coded sequences used to determine the absolute position of the piston rod. Storing only the significant characteristic aspects of the calibration positions (210) saves memory. Publication number: FR3058212A1. Application number: FR1760245. Filing date: 2017-10-30. Publication date: 2018-05-04. Inventor: Timothy David Webster. Applicant: Timothy David Webster. IPC main class:
Patent description:
Publication number: 3 058 212 (to be used only for reproduction orders). National registration number: 17 60245. FRENCH REPUBLIC - NATIONAL INSTITUTE OF INDUSTRIAL PROPERTY, COURBEVOIE. Int. Cl.8: G 01 B 11/04 (2017.01), F 15 B 15/28. A1 PATENT APPLICATION. Date of filing: 30.10.17. Priority: 31.10.16 US 62414806. Date of public availability of the application: 04.05.18, Bulletin 18/18. List of documents cited in the preliminary search report: the report had not been established on the date of publication of the application. Applicant(s) / Holder(s): WEBSTER TIMOTHY DAVID - CA. Inventor(s): WEBSTER TIMOTHY DAVID. Agent(s): LLR.
BACKGROUND OF THE INVENTION - OBJECTIVES AND BENEFITS
This invention relates to the self-calibration and recalibration of an optical sensor for measuring the displacement of a piston rod, enabling a practical initial calibration and a recalibration that corrects wear and damage. Proximity sensors, time-of-flight sensors (that is, sensors based on the measurement of a propagation time) and cumulative relative displacement are used to estimate the absolute displacement of the piston rod and to reduce the number of spatially unique calibration positions that must be compared to determine the absolute displacement of the piston rod, which reduces the memory storage and computing resources required and allows closely spaced or continuous calibration positions. The use of naturally occurring speckle patterns as calibration positions reduces the need for coded sequences marked on the surface as calibration positions.
DESCRIPTION OF THE PRIOR ART
Piston position sensors (for example, US Patent No. US 9,027,460 B2) which measure position by means of electric resonators in series are not easy to implement in practice. The conductivity of the oil varies with temperature and pressure. In addition, the relationship between oil conductivity, temperature and pressure must be re-characterized as the oil ages. Optical motion sensors (for example, US Pat. Nos. 8,525,777 B2, US 9,086,738 B2, US 9,342,164 B2, US 7,737,947 B2, US 8,692,880 B2) measure the relative movement of the surface, but are unable to determine the absolute position with respect to the surface. Optical position sensors (for example, US Patent No.
US 8,482,607 B2) are capable of determining the absolute position at calibration positions. They do not include an efficient means of storing calibration position images. Therefore, only a few calibration position images can be stored, so that the estimated absolute position error can grow significantly before it is corrected at the calibration positions. Optical position sensors (for example, European Patent No. EP 2 769 104 B1) are based on an optically detected code pattern and use light pipes. Engraving or adding these optically detectable code patterns greatly increases the manufacturing cost. Optical position sensors (for example, U.S. Patent No. US 9,134,116 B2) use multiple lasers and / or sensors to provide greater coverage of the curved surface of the piston than is possible with a single laser and sensor, but are not able to determine the absolute position relative to the surface. Optical position sensors (for example, European Patent No. EP 2 775 268 A1) use coherent or nearly coherent light to collect a speckle interference image for each position. The position between the stored speckle interference images is completely unknown. Without significant image compression and / or point identification, the memory size required to store a number of points sufficient to be useful is so large as to be impractical. Optical alignment sensors (e.g., Chinese Patent No. CN2015 / 075823) align images using significant image characteristics. However, the significant characteristics of the images and the absolute position are not stored. Consequently, it is not possible to determine the absolute position. Optical feature matching is used (for example, US Patent No. US 9,449,238 B2) to improve image correlation in adverse conditions which affect the appearance of images. However, the means of collecting images is not provided, and the method is limited to well defined regular images. Optical surface movement sensors use light limited to a short wavelength (for example, U.S. Patent No. US 8,847,888 B2) to decrease surface penetration by the light and increase surface reflection of the light. However, reduced light penetration offers no advantage for the most commonly used metal piston rods. A combination of narrow-band red and blue light is more suitable for penetrating an oil film which may be present and reaching the optically very dense metal. Optical surface movement sensors (for example, US Pat. Nos. 7,728,816 B2, US 9,052,759 B2) adjust the measurement resolution along the X and Y axes based on the estimated speed along each axis. However, a means of collecting and storing images is not provided. Hollow piston rods containing magnetic, microwave and optical sensors are well known in the art. However, it is neither practical nor cost-effective to drill or forge small-diameter piston rods. Optical and radio time-of-flight sensors require calibration and recalibration when the optical characteristics change, and low-cost external time-of-flight sensors are sensitive to objects obstructing the optical path. As a result, an optical time-of-flight sensor used alone to determine the absolute displacement of the piston rod suffers from low reliability.
SUMMARY OF THE INVENTION
The present invention uses a self-calibrating and recalibrating optical sensor to measure the displacement of a piston rod. Self-calibration allows on-site calibration of an uncalibrated optical sensor. During operation, recalibration detects and corrects wear and damage to the piston rod and / or the optical sensors.
Natural speckled patterns can be used in locations where calibration positions are required, which reduces or eliminates the need for marked calibration positions. Marked calibration positions are spatially unique coded sequences used to determine the absolute position of the piston rod. Storing only the significant characteristics of the calibration positions saves a significant amount of memory. The reduced memory requirement of each calibration position allows closely spaced or continuous calibration positions. Multiple calibration position characteristics and multiple optical sensors together provide immunity to localized damage. Proximity sensors, time-of-flight sensors and cumulative relative displacement are used to estimate the absolute displacement of the piston rod and to reduce the number of spatially unique calibration positions that must be compared in order to determine the absolute displacement of the piston rod. The optical sensors used to determine the absolute displacement of a piston rod of a hydraulic or pneumatic cylinder relative to the cylindrical body of the cylinder are calibrated at calibration positions, and the absolute displacement is estimated by means of the cumulative relative displacement measured from said absolute displacement. The steps for estimating the absolute displacement are as follows. Images of the surface of the piston rod are captured by a CMOS or CCD image sensor using a low-cost laser or a dark-field objective designed for optical computer mice. Arrangements of points which contrast with their adjacent environment are selected from the captured images. These selected arrangements of significant points are mapped to their corresponding absolute piston rod positions and stored as calibration positions. The significant points which identify an absolute piston rod calibration position form an arrangement of known calibration points. The arrangement of points selected from the current image is aligned with the arrangement of known calibration points of a calibration position.
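For illustration only, the estimation loop summarized above can be sketched as follows. This is a minimal sketch, not the patented implementation: the helper callables (point selection, calibration matching, relative displacement) and the data layout are assumptions introduced here and do not appear in the patent.

```python
from typing import Callable, Iterable, Iterator

def estimate_absolute_positions(
    frames: Iterable,                 # captured surface images (001)
    select_points: Callable,          # frame -> significant point arrangement (014)
    match_calibration: Callable,      # points -> (known_absolute_position, offset) or None
    relative_shift: Callable,         # (previous_points, points) -> relative displacement (038)
    initial_position: float,          # known absolute position at start (041)
) -> Iterator[float]:
    """Yield an estimated absolute position (033) for each captured frame."""
    position = initial_position
    previous_points = None
    for frame in frames:
        points = select_points(frame)
        match = match_calibration(points)
        if match is not None:
            # Alignment with a calibration position (210): absolute position is known.
            known_position, offset = match
            position = known_position + offset
        elif previous_points is not None:
            # Otherwise accumulate the measured relative displacement onto the
            # last known absolute position.
            position += relative_shift(previous_points, points)
        previous_points = points
        yield position
```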
An optical device according to the invention, for measuring the absolute mechanical displacement of actuators, joints or other mechanical devices which contain mechanical parts moving relative to each other, comprises: a) optical image capture means directed towards the surface of a movable part of said mechanical device under observation which moves relative to said optical image capture means, said optical image capture means being mounted on another part of said mechanical device, b) means for selecting, in said images, points which contrast with the adjacent environment, c) a plurality of calibration positions, such that at each calibration position there is an arrangement of known calibration points which contrast with the adjacent environment on said surface, d) means for detecting an alignment of said movable part, by said optical image capture means, at said calibration position with an arrangement of known calibration points which are in contrast with the adjacent environment, by matching said points which contrast with the adjacent environment with said known calibration points which are in contrast with the adjacent environment, so that the absolute position of said movable part with respect to said optical image capture means is known at said detected alignment, e) means for alternately measuring said absolute position of said movable part other than directly by said detected alignment with said calibration positions, f) means for identifying said calibration positions when said detected alignment is detected as being near said known absolute position of said calibration positions and said alternately measured absolute position of said movable part does not correspond to said known absolute position of said calibration position, so that the absolute positions of said identified calibration positions are capable of being updated, and g) means for identifying said calibration positions when said absolute position measured as an alternative to said detected alignment is not detected as being near said known absolute position of said calibration positions, so that said identified calibration positions can be deleted, the known absolute position of said movable part relative to said optical image capture means being thus corrected at said calibration positions, said calibration positions thus not requiring marking on said surface of said movable part of said mechanical device under observation, said means for alternately measuring the absolute position thus being able to improve the accuracy of said known absolute position of said calibration positions, said means for alternately measuring the absolute position thus being able to identify and delete said calibration positions which are no longer detectable, and said optical device thus being able to free up the memory used by said calibration positions which are no longer detectable.
Preferably, the optical device further comprises: a) means for creating an image composed from a plurality of said images, which combines optical characteristics originating from said plurality of said images, and b) said means for alternately measuring the absolute position by said means for detecting alignment with said image of said composite image, so that said absolute position is known at locations other than those in which there is said alignment detected with said current image, said alternative means for measuring the absolute position thus being able to determine said absolute position of said moving part on the areas covered by said plurality of said images of said composite image, and said composite image combining optical characteristics originating from said a plurality of said captured images thus having an increased likelihood of containing unique optical characteristics and therefore multiple such composite images are not necessary to uniquely define said calibration positions. Preferably, said alternative means for measuring said absolute position of said movable part is another optical device for measuring the absolute mechanical displacement of actuators, joints or other mechanical devices which contain mechanical parts movable relative to the other, said alternative absolute position measurement means being capable of determining said absolute position of said movable part at the locations in which another optical device for measuring absolute mechanical displacement provides said absolute position measured by its means for detecting said alignment at its calibration positions. Preferably, the optical device further comprises: a) one or more means for measuring the relative position of said mobile part with respect to the previous position of said mobile part with respect to said optical image capture means, and b) means for estimating the absolute position by accumulating said relative displacement measured from said known absolute positions of said calibration positions, said estimated absolute position thus being estimated between said known absolute positions of said calibration positions, and the cost of the mechanical device thus being reduced thanks to the reduced number of calibration positions required. Preferably, the optical device further comprises: a) means for alternately measuring said absolute position of said movable part other than directly by said detection alignment with said calibration positions, b) means for comparing said estimated absolute position with said alternately measured absolute position, so that when the difference between said estimated absolute position and said alternately measured absolute position is too large, there is not enough calibration positions for the alignment of said mobile part by said optical image capture means, and c) means for storing said calibration position, so that at each calibration position, said absolute position of said movable part relative to said optical image capture means is known and is such that at each calibration position, said calibration points which contrast with the adjacent environment are known, the calibration positions being thus memorized to replace all the absent or obscure calibration positions, and said calibration positions for alignment of said movable part thus being captured from said surface of said movable part. 
Preferably, the relative position of said known calibration point arrangements of said calibration positions is sufficient to define the absolute position, so that said known calibration point arrangements of said calibration positions need only be locally unique, as required to determine said relative position of said calibration positions, the relative position of said additional calibration positions allowing detection and correction of errors caused by a change in one or more of said known calibration point arrangements, and the locally unique arrangements of known calibration points therefore being less complex and requiring less non-volatile memory, which makes it possible to store more locally unique characteristics. Preferably, said arrangement of known calibration points of said calibration positions is derived from marks created on said surface of said movable part, so that said arrangement of known calibration points of said calibration positions is translated into coded sequences, so that said calibration positions with said arrangement of known calibration points translated into one-dimensional coded sequences have said calibration positions defined in one dimension, and so that said calibration positions with said arrangement of known calibration points translated into two-dimensional coded sequences have said calibration positions defined in two dimensions, the creation of marks on said surface of said movable part thus making it possible to ensure that said calibration positions, with said known calibration points which are in contrast with the adjacent environment, are represented as said coded sequences, thus requiring significantly less non-volatile memory, the creation of marks on said surface of said movable part thus making it possible to ensure that said calibration positions with said known calibration points are located sufficiently close together to ensure an acceptable measurement error, and the creation of marks on said surface of said movable part thus ensuring that said calibration positions with said known calibration points can be located sparingly, to minimize manufacturing costs. Preferably, said means for detecting the alignment of said movable part further comprises: a) one or more optical distance reflectors moving jointly with said movable part of said mechanical device under observation, and b) one or more optical distance sensors moving jointly with said optical image capture means, so that said one or more optical distance sensors measure the distance to said one or more optical distance reflectors, said optical distance sensors thus assisting said means for detecting the alignment of said points with said known calibration points and thus reducing the calculation cost.
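To illustrate how a coarse optical distance reading can assist the alignment detection described above, here is a minimal sketch. The list-of-tuples database layout and the millimetre units are assumptions made for the example and are not specified in the patent.

```python
def candidate_calibrations(calib_positions, distance_mm, tolerance_mm):
    """Return calibration entries whose stored absolute position lies within
    `tolerance_mm` of a coarse time-of-flight distance reading.

    calib_positions: sequence of (absolute_position_mm, descriptor) pairs --
    an assumed layout, not the patent's database format.
    """
    return [
        (pos, desc)
        for pos, desc in calib_positions
        if abs(pos - distance_mm) <= tolerance_mm
    ]

# Example: a 5 mm window around a 142.7 mm time-of-flight estimate.
db = [(10.0 * k, f"descriptor-{k}") for k in range(30)]
nearby = candidate_calibrations(db, distance_mm=142.7, tolerance_mm=5.0)
# -> [(140.0, 'descriptor-14')]
```

Restricting the comparison to this small candidate set is what reduces the calculation cost mentioned in the clause above.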
An optical method according to the invention, for measuring the absolute mechanical displacement of actuators, joints or other mechanical devices which contain moving mechanical parts which move relative to each other, comprises the steps consisting in: a) optically capturing images of the surface of a movable part of said mechanical device under observation which moves relative to the means for said optical image capture, b) selecting, in said images, arrangements of points which contrast with the adjacent environment, c) detecting the alignment of said movable part with calibration positions by matching said selected arrangements of points with known calibration points which are in contrast with the adjacent environment of said calibration positions, so that the absolute position of said movable part relative to the means for said optical image capture is known at said detected alignment, d) measuring the relative displacement of said movable part of said mechanical device by measuring a relative displacement between said currently optically captured images and previously optically captured images of said surface of said movable part, and e) estimating the absolute position of said movable part by adding said cumulative measured relative displacements of said movable part to said known absolute position of said calibration position, the absolute position of said movable part which moves relative to the means for optical capture of said images thus being corrected at the calibration positions, and the cost of the mechanical apparatus thus being reduced by virtue of the reduced number of calibration positions required. Preferably, the optical method further comprises the steps consisting in: a) combining said optically captured images to form groups of combined images, and b) operating said selection of point arrangements over all of said images of said group of combined images, so that selected arrangements of points having increased contrast with the adjacent environment can be found, said groups of combined images thus having an increased likelihood of containing unique optical characteristics, multiple groups of combined images thus likely not being required to uniquely define said calibration positions, and the arrangements of selected points having increased contrast with the adjacent environment thus allowing faster recognition of said arrangements of selected points, which allows faster movement of said mechanical parts which move relative to each other. Preferably, the optical method further comprises the steps consisting in: a) combining sequences of equally separated optically captured images so as to form groups of combined images, so that the separation of said equally separated optically captured images is configurable, and b) operating, over all said images of said group of combined images, said selection of said point arrangements, so that said selected point arrangements having increased contrast with the adjacent environment can be found, the surface resolution of the group of combined images and the surface movement rate thus being configurable through the separation of said equally separated optically captured images, and said groups of combined images thus having an increased probability of containing unique optical characteristics, from which it follows that several groups of combined images are not necessary to uniquely define said calibration positions.
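As an illustration of the image-combination step just described, the following sketch averages equally separated frames into a composite and then selects points whose local contrast exceeds a threshold. The mean compositing, the 4-neighbour contrast measure and the threshold value are assumptions chosen for the example, not details taken from the patent.

```python
import numpy as np

def composite_group(frames, step=2):
    """Combine every `step`-th frame into one composite image (simple mean),
    mirroring the 'equally separated' grouping described above."""
    stack = np.stack(frames[::step]).astype(np.float64)
    return stack.mean(axis=0)

def high_contrast_points(image, threshold):
    """Select pixel locations that contrast strongly with their neighbourhood
    (difference from a 4-neighbour mean), as a stand-in for the point
    selection step."""
    local_mean = (
        np.roll(image, 1, 0) + np.roll(image, -1, 0) +
        np.roll(image, 1, 1) + np.roll(image, -1, 1)
    ) / 4.0
    contrast = np.abs(image - local_mean)
    ys, xs = np.nonzero(contrast > threshold)
    return list(zip(xs.tolist(), ys.tolist()))

# Example with synthetic frames.
rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(8)]
composite = composite_group(frames, step=2)
points = high_contrast_points(composite, threshold=0.2)
```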
Preferably, said optical image capture consists in using one or more optical distance sensors, and the method for obtaining the known absolute displacement of said mobile part comprises the steps consisting in: a) calibrating one or more of said optical distance sensors at said calibration positions, and b) measuring the absolute displacement by means of one or more of said optical distance sensors, said measurement of said absolute displacement by the optical distance sensors being thus verified and systematic optical measurement errors of distance being thus able to be corrected by calibration . Preferably, said calibration positions are adjacent or overlap, so that said measured relative displacement does not accumulate during movement between said calibration positions, said absolute displacement being thus known at adjacent calibration positions or overlapping, and the errors in said measured relative displacement thus not accumulating in said estimated absolute position. Preferably, the optical method further comprises the steps consisting in: a) alternately measuring said absolute position by means other than directly by said detection alignment with said calibration positions, b) identifying said calibration positions when said alternately measured absolute position is located near said known absolute displacement of said calibration position, so that said known absolute displacement of said identified calibration position can be updated with said absolute position measured alternately, c) identifying said calibration positions when said alternately measured absolute position is not located near the known absolute displacement of said calibration position, so that said identified calibration position can be deleted, and d) identifying the locations in which said calibration positions are missing, by comparing said estimated absolute position with said alternately measured absolute position, so that a calibration position is missing when the difference between said estimated absolute position and said alternately measured absolute position is too large, so that creating calibration positions by storing said selected point arrangements as said known calibration points from said calibration positions to said alternately measured absolute positions when said calibration positions are missing, said absolute positions measured alternately being thus used to verify said known absolute position obtained from said calibration positions, said absolute positions measured alternately being thus able to be used to improve accuracy from said known absolute position of said calibration positions, said alternately measured absolute positions thus being able to be used to identify and delete said calibration positions which are no longer detectable near their original location, the memory used by said calibration positions which are erased thus being able to be freed, said memorization of arrangements of selected points at the locations of said absolute positions measured alternately to create new calibration positions continuing thus until they are sufficient for said estimated absolute position to have the desired precision on said surface of said movable part, and new calibration positions with said selected stored arrangements being thus added, replacing any lost or hidden calibration position. 
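The update / delete / create logic for calibration positions described above can be illustrated by a small sketch. The dictionary layout keyed by absolute position, the tolerance parameters and the simplified decision rules are assumptions introduced for the example; the patent's own procedure (036) is described in Figures 1 to 3.

```python
def maintain_calibrations(calibrations, detected, alt_position, tol, max_gap, new_points):
    """Update, delete or create calibration positions by comparing a detected
    alignment against an alternately measured absolute position.

    calibrations: dict {known_absolute_position: point_arrangement} -- an assumed layout.
    detected:     key of the calibration whose points matched, or None.
    alt_position: absolute position from the alternative measurement.
    """
    if detected is not None:
        if abs(detected - alt_position) <= tol:
            # Alignment near the known position: refresh the stored position.
            points = calibrations.pop(detected)
            calibrations[alt_position] = points
        else:
            # Alignment far from the alternately measured position: treat the
            # stored calibration as stale and delete it to free memory.
            del calibrations[detected]
    elif not calibrations or min(abs(p - alt_position) for p in calibrations) > max_gap:
        # No calibration close enough: store the current point arrangement as a
        # new calibration position at the alternately measured location.
        calibrations[alt_position] = new_points
    return calibrations
```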
Preferably, said alternative measurement of said absolute position of said movable part is another optical method of measuring the absolute mechanical displacement of actuators, joints or other mechanical devices which contain mechanical parts movable with respect to each other. , said absolute alternative measurement position being thus able to determine said absolute position of said movable part at locations in which said other optical method for measuring absolute mechanical displacement provides said absolute position measured by said detection alignment in its calibration positions . Preferably, said means for optically capturing said images comprises steps consisting in: a) capturing a plurality of said images of said surface of said movable part, b) combining said plurality of said images to create a composite calibration image which combines optical characteristics originating from said plurality of said images, c) performing said alternative measurement of said absolute position by determining said absolute position of said movable part from said known absolute position of one or more of said images of said composite calibration image, said alternative measurement of absolute position being thus obtained by means other than by direct detection of said alignment of said current image with said calibration positions, and said composite image which combines optical characteristics originating from said plurality of said captured images thus having an increased probability of containing unique optical characteristics and, therefore, multiple composite images are not required to uniquely define said calibration positions. Preferably, the optical method further comprises the step of selecting said dot arrangements which are globally unique, so that each globally unique dot arrangement is unique among all of said dot arrangements stored at said calibration positions, each of said calibration positions thus being identifiable by its globally unique point arrangements, memorized at said calibration positions. Preferably, the optical method further comprises the step of selecting said point arrangements which are locally unique, so that each locally unique point arrangement is unique within the estimation error of said absolute position estimated, each arrangement of points selected locally unique thus being able to be very simple and having to be unique only in the relatively small area defined by said error of estimation of said absolute displacement estimated, and each arrangement of points selected locally unique being thus simple, requiring less non-volatile memory, thus making it possible to store more of said locally unique point arrangements. Preferably, a) the fact of selecting said point arrangements implies the selection of naturally appearing points, which contrast with the adjacent natural environments appearing naturally from said images, and b) selecting said arrangements of points to be stored as said known calibration point arrangements of a calibration position involves selecting naturally occurring points which contrast with adjacent naturally occurring environments, so that a manufacturing process for creating marks on said surface of said movable part is not necessary, which reduces the manufacturing cost. DRAWINGS - FIGURES The advantages of the present invention will be better understood by reading the description which follows with reference to the appended drawings, in which the figures indicate the structural elements and the characteristics in various figures. 
The drawings are not necessarily to scale, and illustrate the principles of the invention.
Figure 1 is a block diagram of the position identification detection apparatus during operation;
Figure 2 is a block diagram of the position identification detection apparatus during calibration initialization;
Figure 3 is a block diagram of the calibration storage procedure 036 and the storage of the calibration databases 100, 102, 104;
Figure 4 is a block diagram of the correlation 026/027 of speckled characteristic patterns with the significant characteristics 014 of the current image during normal operation, using FFT-CC;
Figure 5 is a block diagram of the correlation 026/027 of speckled characteristic patterns with the significant characteristics 014 of the current image during a calibration operation, using FFT-CC and IC-GN;
Figure 6 is a block diagram of overall operation;
Figure 7 is a side view diagram of a single-laser position identification detection apparatus;
Figure 8 is a cross-section of a laser speckled pattern image detection apparatus, taken along the section plane A-A of Figure 24, illustrating the optical effects of the rounded part 200 under observation;
Figure 9 is a spatially additive black background view;
Figure 10 is an end view of a compound speckled pattern image detection apparatus;
Figure 11 is a top view of a compound speckled pattern image detection apparatus, with the surface 208 of the part 200 under observation below;
Figure 12 is an end view of a speckled pattern image detection apparatus having multiple laser LEDs, taken along the section plane B-B of Figure 14;
Figure 13 is an end view of a speckled pattern image detection apparatus having multiple speckled image collectors, taken along the section plane B-B of Figure 14;
Figure 14 is a bottom view of a speckled pattern image detection apparatus;
Figure 15 is a graph of the trend in position error versus speckle / pixel size for multiple sizes of speckled features;
Figure 16 illustrates the construction of a continuous map of calibration positions on the surface 208 of the part 200 under observation;
Figure 17 illustrates the re-selection of a calibration position on the surface 208 of the part 200 under observation;
Figure 18 illustrates the speckled pattern correlation algorithm;
Figure 19 illustrates adjacent non-unique speckled calibration images 210;
Figure 20 illustrates non-unique speckled calibration images 210 spaced apart on the surface 208 of the part 200 under observation;
Figure 21 illustrates frame captures 004 obtained during the calibration mapping of speckled calibration image frames 210;
Figure 22 illustrates the mapping of speckled calibration images 210 onto frame captures 004 obtained during normal operation;
Figure 23 is a sectional view of a hydraulic cylinder with a fixed photoelectric image sensor, taken along the section plane A-A of Figure 24;
Figure 24 is an isometric view of a hydraulic cylinder with an attached photoelectric image sensor;
Figure 25 is a side view of a part 200 under observation with one-dimensional (1D) calibration patterns 215 in three positions;
Figure 26 is a side view of a part 200 under observation with spaced-apart two-dimensional (2D) calibration patterns 216.
DRAWINGS - REFERENCE NUMBERS
001 speckled image
004 uniform scale image correction
005 wait until refresh
006 convolution with Gabor wavelets
008 Gabor group selector
009 position group selector
010 estimated absolute position used for group selection
012 SURF / SIFT descriptor selector
014 significant characteristics of the current speckled image, which can also be defined by points of interest
015 time interval until the next image capture
016 find the scale space
018 characteristic descriptor
020 candidate SIFT / SURF characteristic descriptor
024 speckled pattern selector
026 correlated displacement of the speckled characteristic pattern between the significant characteristics of the calibration position 210 and the significant characteristics 014 of the current image
027 correlation of speckled characteristic patterns between the significant characteristics 014 of the previous image and the significant characteristics 014 of the current image
030 increment the count of successful matches
031 cumulative relative speckle displacement from the most recent known position
032 test condition: is the absolute position known
033 estimated absolute position
034 new SURF / SIFT descriptor for the known location, new significant speckle points
036 calibration storage procedure
038 relative speckle displacement
039 count of failed relative displacements
040 calculated relative position of speckled calibration position 210
041 absolute position from a piston stroke limit sensor 918, 919, from one or more subcomponents of a compound position identification detection apparatus, or from the position relative to a complete or partial speckled calibration characteristic 210 within the full speckled view 209
042 absolute position from the position detection algorithm described in Figure 1
100 Gabor wavelet database for grouping SURF / SIFT characteristic descriptors
102 database of SURF / SIFT characteristic descriptors grouped by Gabor wavelets and by position
104 database of speckled patterns of significant calibration points 210, indexed by SURF / SIFT characteristic descriptor
105 calibration data for time-of-flight sensors 925
110 surface area speckled by the red laser
112 surface area speckled by the blue laser
114 surface area speckled by the green laser
116 surface area 208 with black background
120 red laser
121 red LEDs
122 blue laser
124 green laser
130 optional source lenses
135 collecting lenses
200 cylindrical or flat part under observation, such as a piston rod
202 CMOS or CCD image sensor
208 surface of the part 200 under observation
209 full speckled view
210 speckled calibration image position: the speckled characteristic area described by the Gabor wavelets, the SURF / SIFT characteristic descriptors and the corresponding significant speckle point pattern
212 groupings of pixels which are periodically combined to form speckled images 001
215 1D calibration pattern with spaced binary sequences
216 2D calibration pattern of spaced binary sequences
230 optical displacement sensor assembly
231 microprocessor, FPGA and / or ASIC
233 non-volatile memory
234 volatile memory
242 speckled image characteristics caused by wear and tear
300 flat part reference plane
400 average normalized speckled calibration image
401 average normalized current speckled image
405 Fourier transform of the average normalized speckled calibration image
406 Fourier transform of the average normalized current speckled image
410 inverse Fourier transform of the Fourier transform product
415 x and y FFT-CC displacement, found at the maximum of the inverse Fourier transform 410 of
the product 405 x 406 of the Fourier transforms, which is a low-resolution correlated displacement 026 of the speckled characteristic pattern between the significant characteristics of the calibration position 210 and the significant characteristics 014 of the current image
420 distorted arrangement of the known calibration points
421 significant pattern characteristics of the current distorted image with delta displacement
425 least-squares calculation of the delta deformation displacement
430 convergence conditions for the delta deformation displacement
440 IC-GN sub-pixel delta displacement, which is a high-resolution correlated displacement 026 of the speckled characteristic pattern between the significant characteristics of the calibration position 210 and the significant characteristics 014 of the current image
800 start state
804 initialization, self-checking and communication state
820 full calibration image, including a composite calibration image
825 not surrounded by sufficient calibration positions 210, or at end limits
832 operating state described in Figure 1, Figure 2 and Figure 3
836 simultaneous calculation of the precise location, described in Figure 5
901 hydraulic cylinder body
902 piston
903 time-of-flight piston reflector, also known as an optical distance reflector
904 base stop
905 head stop
906 seal in the cylinder
907 head chamber of the hydraulic cylinder
908 base chamber of the hydraulic cylinder
910 detection device housing
912 seals for the detection device
918 base contact pressure sensor, stroke limit sensor
919 head contact pressure sensor, stroke limit sensor
925 time-of-flight sensor, also known as an optical distance sensor
930 sensor PCB
DETAILED DESCRIPTION
In what follows, the speckled images described are either real speckled images generated by coherent laser light interference or facsimile speckled images generated by surface diffraction with a black background. Both types of speckled image can be constructed additively from overlapping spatial zones, as shown in Figures 9, 10, 11, 12 and 13. Figure 7 is a side view diagram of a single-laser position identification detection apparatus. The coherent light from the lasers 120, 122 or 124 passes through an optional source lens 130, illuminating a speckled area 110, 112 or 114 on the surface 208 of the part 200 under observation. The optional source lenses 130 refract / bend the narrow laser beam to illuminate a larger area of the part 200 under observation. According to the Huygens-Fresnel principle, any refractive surface point can be considered as the source of a new wavelet. The wavelets propagate freely, without a lens, to the observation plane where they interfere, which results in locally distributed occurrences of constructive and destructive interference. Speckles can be observed from metallic surfaces and have a high contrast, which generally makes artificial markers unnecessary. The average speckle size d in the observation plane, as seen by the CMOS or CCD image sensor 202, is given by d = λ · u / D, in which λ is the light wavelength, u is the distance between the CMOS or CCD image sensor 202 and the surface 208 of the part 200 under observation, and D is the diameter of the illumination on the surface 208 of the part 200 under observation. A uniform image scale correction step 004 can be used to compensate for the significant variation of the distance u across the CMOS or CCD image sensor 202, between the image sensor 202 and the surface 208 of the part 200 under observation.
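For orientation, a numerical example of the speckle-size relation given above (reconstructed here as the standard objective-speckle formula d = λ · u / D, consistent with the variables defined in the text). The wavelength, distance and illumination diameter below are illustrative values only, not values taken from the patent.

```python
# Numerical illustration of the mean speckle size d = lambda * u / D given above.
# All three inputs are example values, not values from the patent.

wavelength = 650e-9   # red laser wavelength, m
u = 10e-3             # sensor-to-surface distance, m
D = 2e-3              # diameter of the illuminated area, m

d = wavelength * u / D
print(f"mean speckle size: {d * 1e6:.2f} um")   # -> 3.25 um

# A speckle should span a few sensor pixels; with 5 um pixels this example
# would call for moderate optical magnification or a smaller illumination spot.
```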
FIG. 1 relates to the control logic of the position identification detection apparatus. The data flow and processing steps illustrated in Figure 1 are as follows. The speckled image 001 is captured by a CMOS or CCD image sensor 202 of a single or compound pattern image detection apparatus. The uniform image scale correction 004 can be applied to correct the non-uniform average speckle size caused by the curved surface 208 of the piston rod. In the case of an image sensor with a laser pattern, the non-uniform average speckle size d is caused by the variation of the distance u between the CMOS or CCD image sensor 202 and the surface 208 of the part 200 under observation. The uniformly scaled speckled image 001 produced by the uniform image scale correction 004 is convolved with Gabor wavelets 006 extracted from the Gabor wavelet database 100. The estimated absolute position 010 is used to provide the selection 009 of the position group of the SURF / SIFT characteristic descriptors. The Gabor group selector 008 is used in combination with the position group selector 009 as input to the SURF / SIFT descriptor selector 012. The SURF / SIFT descriptor selector 012 selects the corresponding groups of candidate SIFT / SURF characteristic descriptors. The result is a reduced number of candidate SURF / SIFT characteristic descriptors 020, coming from the database 102 of SIFT / SURF characteristic descriptors grouped by Gabor wavelet and by position, which must be taken into account. The databases of Gabor wavelets 100, of SURF / SIFT characteristic descriptors 102 and of speckled patterns of significant points 104 are stored in a read / write non-volatile memory 233. The control logic is executed on a microprocessor, FPGA and / or ASIC 231, with temporary execution results preferably stored in the volatile memory 234. Error correction coding and redundancy can be used to ensure reliable long-term storage and error-free execution of the control logic and data processing. Continuing the data flow and processing steps illustrated in FIG. 1, the SIFT / SURF algorithm is applied as follows. Significant characteristics 014 of the uniformly scaled speckled image 001 are selected in parallel with the convolution of the speckled image with the Gabor wavelets 006. The SURF algorithm uses a blob detector based on the Hessian matrix to find points of interest. The next step is to find the scale space 016 from the previously selected significant points. The resulting characteristic descriptor 018 is matched with candidate SIFT / SURF characteristic descriptors 020 extracted from the database 102. If a match occurs, the speckled pattern selector 024 retrieves a speckled calibration image pattern 210 having significant characteristics from the database 104 of speckled patterns of significant points, indexed by SURF / SIFT characteristic descriptors. The end result is that the significant characteristics of the current speckled image 001 are correlated with the speckled characteristic patterns of the calibration images 210 extracted from the database 104. The data flow and processing steps of the apparatus illustrated in FIG. 1 continue with the correlation of the significant characteristics 014 of the speckled image. The significant characteristics 014 of the current speckled image 001 and the significant characteristics 014 of the previous speckled image 001 are correlated 027 to find the relative displacement of the speckles 038.
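The grouped descriptor matching described above can be illustrated with off-the-shelf tools. The sketch below uses OpenCV's ORB detector as a stand-in for the SURF / SIFT descriptors named in the text (SURF is not available in all OpenCV builds), a crude Gabor-response bucket in place of the Gabor group selector 008, and an assumed list-of-dicts layout for the calibration database; none of these choices come from the patent.

```python
import cv2
import numpy as np

def gabor_bucket(image, n_orientations=4):
    """Crude stand-in for the Gabor group selector (008): index of the Gabor
    orientation with the strongest mean response over the image."""
    responses = []
    for k in range(n_orientations):
        theta = np.pi * k / n_orientations
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
        responses.append(cv2.filter2D(image, cv2.CV_32F, kernel).mean())
    return int(np.argmax(responses))

def match_to_calibration(image, calib_entries, position_hint, window):
    """Match ORB descriptors of the current frame (grayscale uint8) against
    calibration entries pre-filtered by Gabor bucket and estimated position.

    calib_entries: list of dicts {"position", "bucket", "descriptors"} -- an
    assumed layout; ORB descriptors replace the SURF / SIFT descriptors of the text.
    """
    orb = cv2.ORB_create()
    _, descriptors = orb.detectAndCompute(image, None)
    if descriptors is None:
        return None
    bucket = gabor_bucket(image)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    best = None
    for entry in calib_entries:
        if entry["bucket"] != bucket:
            continue                                    # Gabor group pre-filter
        if abs(entry["position"] - position_hint) > window:
            continue                                    # position group pre-filter
        matches = matcher.match(descriptors, entry["descriptors"])
        score = len(matches)
        if best is None or score > best[1]:
            best = (entry, score)
    return best
```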
The significant characteristics 014 of the previous image 001 are the significant characteristics 014 of the current speckled image 001 delayed by the time interval 015 until the next image capture. The current and previous speckled images 001 are designed to overlap and will be correlated during normal operation. When the current speckled image 001 and the previous speckled image 001 are not correlated by the correlation operation 027, the apparatus increments the count 039 of failed relative displacements. A high count 039 of failed relative displacements indicates that the CMOS or CCD image sensor 202 is unable to reliably resolve the image of the surface 208, or that the current and previous speckled images 001 do not overlap, possibly because the observed part 200 is moving at an extremely high speed. The data flow and processing steps illustrated in Figure 1 describing the detection of the calibration positions are as follows. The arrangement of significant speckled points of the previously selected calibration position 210 is correlated with the significant points of the current image to find the absolute position of the part 200 under observation with respect to the corresponding calibration position 210. The components of the calibration positions 210 are stored in the calibration point databases 100, 102, 104. The correlation 026 of the speckled characteristic patterns between the significant characteristics of the calibration positions 210 and the significant characteristics 014 of the current image is determined as follows; refer to Figure 4 for details of the correlation process. The speckled calibration image characteristic pattern 210 is correlated with the uniformly scaled speckled image 001 using FFT-CC (Fast Fourier Transform Cross-Correlation), which is faster but less accurate, as shown in Figure 4. The speckled calibration image 210 at the calibration position is normalized to create an average normalized speckled calibration image 400. The Fourier transform 405 of the average normalized speckled calibration image is then created. The significant characteristics 014 of the current speckled image are normalized to create an average normalized current speckled image 401. The Fourier transform 406 of the average normalized current speckled image is then created. The Fourier transforms 405 and 406 are combined to create the inverse Fourier transform 410 of the product 405 x 406 of the Fourier transforms. As shown in Figure 18, the displacement x, y 415 of the speckled image 001 from the speckled calibration image position 210 within the same full speckled view 209 is found at the maximum of the inverse Fourier transform 410 of the product 405 x 406 of the Fourier transforms. The data flow and processing steps illustrated in Figure 1 are described in more detail below. When there is a speckled characteristic pattern correlation 026 between the significant characteristics of the calibration position 210 and the significant characteristics 014 of the current image, the correlation is used to obtain the calculated relative position 040 of the speckled calibration position 210 in the full speckled view 209. When the FFT-CC algorithm is used to obtain the calculated relative position 040 in the full speckled view 209, it provides the displacement x, y 415 of the speckled image 001 from the speckled calibration image position 210.
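A minimal NumPy sketch of the FFT-CC step just described (normalize both patches, transform, multiply, inverse-transform, take the correlation peak). The conjugate product and the peak-wrapping convention are standard cross-correlation details assumed here; the patent itself only names the product of the two transforms.

```python
import numpy as np

def fft_cc_shift(calib_patch, current_patch):
    """Whole-pixel (x, y) shift of `current_patch` relative to `calib_patch`
    via FFT cross-correlation, for two equally sized patches."""
    a = calib_patch - calib_patch.mean()       # normalized calibration image (400)
    b = current_patch - current_patch.mean()   # normalized current image (401)
    fa = np.fft.fft2(a)                        # Fourier transform (405)
    fb = np.fft.fft2(b)                        # Fourier transform (406)
    cc = np.fft.ifft2(np.conj(fa) * fb).real   # inverse transform of the product (410)
    peak = np.array(np.unravel_index(np.argmax(cc), cc.shape), dtype=float)
    size = np.array(cc.shape, dtype=float)
    peak[peak > size / 2] -= size[peak > size / 2]   # wrap to signed displacement (415)
    return peak[1], peak[0]                          # x, y

# Example: a synthetic speckle patch circularly shifted by (x=3, y=-2) pixels.
rng = np.random.default_rng(1)
patch = rng.random((64, 64))
shifted = np.roll(patch, shift=(-2, 3), axis=(0, 1))
print(fft_cc_shift(patch, shifted))   # -> (3.0, -2.0)
```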
If the IC-GN algorithm illustrated in FIG. 5 is used to obtain the calculated relative position 040 in the full speckled view 209, it provides the high-precision IC-GN sub-pixel delta displacement 440 of the speckled image 001 from the position of the speckled calibration image 210. As a result, the speckled calibration image position 210 on the surface 208 of the part 200 under observation is known with respect to the CMOS or CCD image sensor 202. Each time a successful correlation match occurs, the count 030 of successfully matched speckled calibration images 210 is incremented. The data flow and processing steps illustrated in Figure 1 which occur when speckled calibration image positions 210 cannot be matched are as follows. When the highest-ranking candidate SIFT / SURF characteristic descriptor 020 does not match, the other candidate SIFT / SURF characteristic descriptors 020 in the group of SURF / SIFT characteristic descriptors from the database 102, selected by the Gabor group selector 008 and the position group selector 009, are considered in order of their ranking. Similarly, if the speckled calibration position characteristic pattern 210 corresponding to the candidate SIFT / SURF characteristic descriptor is not sufficiently close to the speckled image 001, another candidate SIFT / SURF characteristic descriptor is selected by the Gabor group selector 008 and the position group selector 009. If there are no more candidates, no match has been found. If the speckled calibration image 210 is sufficiently close to the speckled image 001, then the relative position 040 of the speckled calibration image 210 in the full speckled view 209 is calculated. The data flow and processing steps illustrated in Figure 1 and Figure 2 describing the procedure for updating the calibration positions are as follows. The complete procedure for updating the calibration positions is illustrated in FIGS. 1, 2 and 6. This procedure starts when no corresponding candidate SIFT / SURF characteristic descriptor is found for the current speckled image 001 in the full speckled view 209, or when no speckled calibration characteristic 210 is found for the current speckled image 001 within the full speckled view 209. The absolute position 041 can be known from one or more position identification detection devices of a compound position identification detection device as shown in FIG. 10, or the absolute position 041 is supplied by a limit or by the relative position with respect to a partial or complete speckled calibration characteristic 210 within the full speckled view 209. If the absolute position 041 is known, then the Gabor wavelets, the SURF / SIFT characteristic descriptors and the speckle characteristics of an image 001 coming from inside the full speckled view 209 can be stored in the databases 100, 102 and 104. FIG. 3 shows the processing steps contained in the calibration storage procedure 036. In the following description of the calibration storage procedure 036, reference is made to components linked to, but not contained in, the calibration storage procedure 036. As shown in FIG. 3, if there are insufficient surrounding speckled calibration image positions 210 (condition 825), or if the piston rod 200 is at an end limit of displacement, the absolute position 041 from a limit sensor or from one or more subcomponents of a compound position sensor will be used.
When a calibration position 210 is required at a location unknown from the absolute position 041, from a limit sensor or from one or more subcomponents of a compound position sensor, a precise displacement 836 from a previous absolute position 041 is determined using the IC-GN algorithm described in FIG. 5. When the optical displacement sensor 230 is at an absolute position 041, the test 032 for a known absolute position is true and the absolute displacement can be used both to obtain the significant characteristics 014 of the current speckled image and for the calibration data 105 of the time-of-flight sensors 925. When the test condition 032 indicates a known absolute position, the new calibration position data 210 is stored using the calibration storage procedure 036 illustrated in FIG. 3. If the distance to the surrounding calibration positions is too large, one or more speckled calibration images 210 should be selected from the full speckled view 209 and stored. The maximum distance between calibration positions is determined by the maximum cumulative position error. The cumulative position error is the error between the estimated absolute position 033 and the true absolute position 041. The estimated absolute position 033 is calculated by adding the cumulative relative displacement to the true absolute position 041 of the last reliable speckled calibration image 210. The selected speckled calibration image 210 is indexed by the Gabor wavelets and by its position relative to the surrounding speckled calibration images 210. The corresponding SURF / SIFT characteristic descriptor of the speckled calibration image 210 is stored in the database 102 of SURF / SIFT characteristic descriptors, and the significant point patterns of the speckled calibration image 210 are indexed by its SURF / SIFT characteristic descriptor and stored in the database 104 of significant speckled patterns. Speckled calibration patterns 210 with a high count of successful matches are speckled calibration images 210 with stable significant speckled pattern characteristics. When the characteristic speckled calibration patterns 210 change as a result of surface wear or sudden surface damage, the count 030 of successful matches is no longer incremented. Calibration position speckle pattern characteristics that no longer increment their count of correct matches and are sufficiently close to other calibration position speckle pattern characteristics can be replaced when non-volatile memory 233 is required to store new speckled calibration characteristic positions 210. Calibration may be required when a sufficient number of calibration position speckle pattern characteristics no longer increment their count 030 of successful matches and are not close enough to the other speckled calibration characteristic positions 210. FIG. 2 relates to the control logic for initializing the calibration of the position identification detection apparatus. The speckled image 001 is captured by a CMOS or CCD image sensor 202 of a single or compound pattern image detection apparatus. The uniform image scale correction 004 can be applied to correct the non-uniform average speckle size. Initially, there are no stored speckled calibration positions 210 and the calibration point databases 100, 102, 104 are empty. The first step in storing the new calibration positions 210 consists in selecting significant characteristics 014 from the uniformly scaled speckled image 001.
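To illustrate the cumulative-error rule described above (a new calibration position is needed once the estimate built from the last reliable calibration plus accumulated relative displacement can drift by more than the allowed error), here is a minimal sketch. The millimetre values and the single-axis simplification are assumptions made for the example.

```python
def needs_new_calibration(last_calib_position, accumulated_displacement,
                          true_position, max_position_error):
    """Return True when a new calibration position should be stored: the
    estimated absolute position (last reliable calibration plus accumulated
    relative displacement, 033) has drifted from the true absolute position
    (041) by more than the maximum cumulative position error."""
    estimated = last_calib_position + accumulated_displacement   # (033)
    return abs(estimated - true_position) > max_position_error

# Example: 0.8 mm of accumulated drift against a 0.5 mm error budget.
print(needs_new_calibration(120.0, 35.8, 155.0, 0.5))   # -> True
```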
The absolute position 041 can be provided by a piston stroke limit sensor 918, 919 shown in FIG. 23, by one or more subcomponents of a compound position identification detection device, or by a position relative to a partial or complete speckled calibration characteristic 210 within the full speckled view 209. During the initial calibration process, the absolute position 041 is only made available by a piston stroke limit sensor 918, 919. If the test condition 032 is false, then the absolute position is unknown, and the location is examined again as shown in Figure 6 until an absolute position 041, such as a piston stroke limit 918, 919, is provided, so that the test condition 032 becomes true and the absolute position is known. When the absence of surrounding stored speckled calibration characteristic positions 210 is detected, it is necessary to assign new stored speckled calibration characteristic positions. It is important to assign the positions of the new speckled calibration characteristics accurately. During calibration, a correlation 027 of the speckled characteristic pattern between the significant characteristics 014 of the previous image and the significant characteristics 014 of the current image is measured using the Inverse Compositional Gauss-Newton sub-pixel delta displacement calculation 440 (abbreviated IC-GN), which is extremely precise, typically ranging from 0.01 to 0.05 pixel. This is 20 to 200 times more precise than the fast FFT-CC (fast Fourier transform cross-correlation) used in normal mode to calculate the x and y displacement 415, which typically varies from 1 to 2 pixels. When the position identification detection apparatus searches for new calibration positions, the position identification detection apparatus is in calibration mode. As the CMOS or CCD image sensor 202 moves over the cylindrical or flat part 200 under observation, full speckled view windows 209 are captured periodically. In calibration mode, the speed of movement of the CMOS or CCD image sensor 202 over the cylindrical or flat part 200 is reduced so that the full speckled view windows 209 are always overlapped by at least one speckled calibration image position 210. The IC-GN algorithm also takes several times longer than the FFT-CC algorithm, and the movement speed during calibration should be reduced to account for this longer processing time. Figure 16 illustrates a full speckled view window step 209 of the calibration mode. The initial step is at the upper left of the full speckled view 209. During the initial window step, four speckled calibration image positions 210 are selected from the four corners of the full speckled view 209. If a speckled calibration image position 210 is completely included in the overlapping speckled views 209, such as the upper left of the second full speckled view window step 209, it does not need to be saved. The displacement between the full speckled views 209 is precisely measured using the Inverse Compositional Gauss-Newton sub-pixel delta displacement 440. The speckled calibration image characteristic pattern 210 is correlated with the speckled image 001 using the slower, more precise Inverse Compositional Gauss-Newton (IC-GN) algorithm, shown in Figure 5. In Figure 5, as in Figure 4, the speckled calibration image 210 at the calibration position is normalized to create an average normalized speckled calibration image 400.
The significant characteristics 014 of the current speckle image are normalized to create a mean-normalized current speckle image 401. The next step is to create the Fourier transform 405 of the mean-normalized speckled calibration image; simultaneously, the Fourier transform 406 of the mean-normalized current speckle image 001 is created. The Fourier transforms 405 and 406 are combined to create the inverse Fourier transform 410 of the product 405 x 406 of the Fourier transforms. As shown in FIG. 18, the displacement x, y 415 of the speckle image 001 from the speckled calibration image position 210 lies within the same complete speckled view 209. In FIG. 5, the FFT-CC displacement x & y 415 is located at the maximum of the inverse Fourier transform 410 of the product 405 x 406 of the Fourier transforms. The displacement x & y 415 calculated by FFT-CC is the initial expected x & y displacement used in the IC-GN algorithm 440. The current distorted image 421 is calculated as a function of the mean-normalized current speckle image 401 and the current deformation. The deformed arrangement 420 of the known calibration points is calculated as a function of the normalized speckled calibration image 400. A least-squares delta-deformation displacement calculation 425 is performed on the deformed arrangement 420 of the known calibration points combined with the mean-normalized current speckle image 401. When the least-squares delta-deformation displacement calculation 425 meets the delta-deformation displacement conditions 430, the current least-squares delta-deformation calculation 425 is the final IC-GN sub-pixel delta displacement 440. Otherwise, the current least-squares delta displacement calculation 425 is used to calculate the next distorted current image 421.
FIG. 17 illustrates the speckled calibration image position 210 at the bottom right of the previous complete speckled view 209 being removed and replaced by a speckled calibration image position 210 with a high density of significant characteristics. When the density of significant characteristics is high, as shown in FIG. 17, a single speckled calibration image 210 is sufficient to uniquely define a calibration position on the surface 208 of the part 200 under observation. When the density of significant characteristics is low, as shown in FIG. 18, two or more contiguous speckled calibration images 210 are used to uniquely define a calibration position on the surface 208 of the part 200 under observation. As calibration progresses, either a single speckled calibration image sufficient to uniquely define a calibration position, or two or more adjoining calibration images that together uniquely define a calibration position, are saved for each successive complete speckled view window step 209. In calibration mode, it is not necessary to scan the cylindrical or flat part 200 under observation systematically. The speckled calibration image 210 in each complete speckled view window step 209 is retained until there are sufficient speckled calibration image positions 210 in all directions for the cylindrical or flat part 200 under observation. These retained speckled calibration image positions 210 are used as absolute reference positions 041 to measure the displacement to speckled calibration image positions 210 along alternate paths.
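As a concrete illustration of the FFT-CC step just described, the sketch below mean-normalizes the two images, multiplies their Fourier transforms and takes the peak of the inverse transform as the integer x & y displacement handed to IC-GN. The function name and the wrap-around handling are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def fftcc_displacement(calib_img, current_img):
    """FFT cross-correlation (FFT-CC): coarse x & y shift between a stored
    speckled calibration image and the current speckle image (same shape)."""
    a = calib_img - calib_img.mean()      # mean-normalized calibration image (400)
    b = current_img - current_img.mean()  # mean-normalized current image (401)
    # Product of the transforms 405 and 406; the conjugate turns the product
    # into a cross-correlation, and 410 is its inverse transform.
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))
    peak_y, peak_x = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Map wrap-around peak indices to signed shifts (item 415).
    if peak_y > a.shape[0] // 2:
        peak_y -= a.shape[0]
    if peak_x > a.shape[1] // 2:
        peak_x -= a.shape[1]
    return peak_x, peak_y                 # seeds the IC-GN refinement 440
```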
When speckled calibration image positions 210 are surrounded by sufficient speckled calibration image positions 210 in all directions, a single speckled calibration image 210 is sufficient to provide locally unique identification of the calibration position. In addition, the calibration position 210 with the higher density of significant features is retained and the previously required adjacent speckled calibration images 210 are deleted.
FIG. 6 is a block diagram describing the overall operation of the position identification detection apparatus. When the position identification detection apparatus is powered on or reset, the start state 800 is activated. In start state 800, the start-up code is executed, which may include firmware and software. From the start state 800, the position identification detection apparatus enters the initialization, self-checking and communication state 804. In state 804, the position identification detection apparatus prepares to begin its function of identifying the position of the cylindrical or flat part 200 under observation with respect to a CMOS or CCD image sensor 202. The absolute position 042 of the position detection algorithm and the estimated absolute position 033 are the outputs used to determine the absolute position. The initial estimated absolute position 033 is obtained from the known absolute position 041 supplied by a piston extension/retraction limit sensor 918, 919 or by a calibrated time-of-flight sensor 925. If the time-of-flight sensor 925 is uncalibrated, it can be calibrated using known absolute positions 041 as shown in FIG. 3. The operating procedure described in FIG. 1 is performed on each captured speckle image frame 001. The estimated absolute position 010 used for group selection significantly reduces the number of known significant feature sets of possible calibration positions 210 that must be compared with the significant characteristics 014 of the current speckle image. Operation in calibration mode depends on absolute positions 041 known from a piston extension/retraction limit sensor 918, 919 or from a calibrated time-of-flight sensor 925 for the calibration storage procedure 036 that stores new calibration positions 210. In normal operation, the absolute position 042 is determined from the relative calibration positions 210 calculated at 040 or from the position calculated by the calibration storage procedure 036; the absolute position 042 calculated from the calibration storage procedure 036 is preferably selected as being the most reliable. The most recent absolute position 042 and the estimated time-of-flight position 925 are kept 005 in volatile memory until they are updated with more recent values. The estimated absolute position 033 is calculated by combining the most recent position 005 held in volatile memory with the cumulative relative displacement 031 of the speckle pattern. The cumulative relative displacement 031 of the speckle pattern is the accumulated relative displacement 038 of the speckle pattern. An external source of relative speckle pattern displacement 031 can improve and/or verify the estimate of the relative speckle displacement 038. The operating state 832 shown in FIG. 6 is described in FIGS. 1, 2, 3, 4 and 5.
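A minimal sketch of how the estimated absolute position 033 is formed from the quantities named above. The tolerance used to cross-check against the time-of-flight reading is an assumption; the patent only states that the time-of-flight estimate can verify or improve the relative-displacement estimate.

```python
def estimate_absolute_position(last_known_abs_mm, accumulated_relative_mm,
                               tof_estimate_mm=None, tof_tolerance_mm=2.0):
    """Estimated absolute position 033 = most recent known absolute position
    (005/041) + accumulated relative speckle displacement (031/038).
    The optional time-of-flight comparison is only a plausibility check."""
    estimate = last_known_abs_mm + accumulated_relative_mm
    consistent = (tof_estimate_mm is None
                  or abs(tof_estimate_mm - estimate) <= tof_tolerance_mm)
    return estimate, consistent

# Example: last calibration position at 120.0 mm, 3.2 mm accumulated since then.
print(estimate_absolute_position(120.0, 3.2, tof_estimate_mm=123.0))  # (123.2, True)
```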
The test condition 032 checks whether the absolute position corresponds to a known absolute position 041 coming from a limit sensor, from one or more subcomponents of a compound position identification detection apparatus, or from a position relative to a partially or completely matched speckled calibration characteristic 210 within the complete speckled view 209, as shown in FIG. 1 and FIG. 2. If the absolute position has not been identified, the detection apparatus continues to apply test condition 032 to determine whether the absolute position is known as the CMOS or CCD image sensor 202 moves over the surface 208 of the part 200 under observation. After test condition 032 determines that the absolute position is known and the CMOS or CCD image sensor 202 is at a speckled calibration image position 210, the next step is to check whether this is an incomplete composite calibration image constructed by joining speckled calibration image positions 210. If the current speckled calibration image position 210 is an incomplete part of a composite calibration image 820 constructed by joining speckled calibration image positions 210, then the known-absolute-position test condition 032 is updated with the found part of the composite calibration image 820.
FIG. 7 illustrates the variable speckle size that occurs when a round part 200 is under observation. The light from the laser 120 is left unfocused by the optional source lens 130 so that it covers the speckled area 110. The average speckle size, d ≈ 1.2 λu/D (where λ is the laser wavelength and D is the diameter of the illuminated speckled area 110), is proportionally larger when the distance u between the CMOS or CCD image sensor 202 and the surface 208 of the part 200 under observation is larger. A uniform image scale correction step 004 can be used to compensate for the non-uniform average speckle size. FIG. 8 illustrates a non-uniform speckle size resulting from a curved surface 208 of the part 200 under observation. The distance u1 between the CMOS or CCD image sensor 202 and the reflection from the laser-speckled surface 110 is greater where the curved surface 208 of the part 200 under observation is farther away than the distance u2 to the flat-part reference plane 300.
FIG. 9 is a spatially additive dark-field view. The objective is to select speckled calibration images 210 with a sufficiently high density of significant characteristics to uniquely define a calibration position on the surface 208 of the part 200 under observation; the objective is not to measure the topology of the surface 208 of the part 200 under observation. An LED light source 121 shines through an optional source lens 130 and illuminates the surface 208 of the part 200 under observation. Each collecting lens 135, or compound collecting lens 135, collects dark-field scattered light from separate or overlapping dark-field areas 116. Light tubes are capable of providing the same function as multiple/compound collecting lenses 135; in this implementation, multiple/compound collecting lenses 135 are preferred to light tubes because of their superior low-distortion optical properties. The dark-field scattered light from several dark-field areas 116 is additively collected by the CMOS or CCD image sensor 202. The magnification of the collecting lenses 135 is chosen so that the facsimile speckle size is greater than 4 pixels by 4 pixels. As illustrated in FIG. 8, a curved surface 208 of the part 200 under observation reduces the reflected speckled area 110.
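For a sense of scale, the sketch below evaluates the speckle-size relation d ≈ 1.2 λu/D for assumed optical values and checks the result against the 4 pixel x 4 pixel guideline used elsewhere in the description. Every number here is an illustrative assumption.

```python
def speckle_size_m(wavelength_m, standoff_m, spot_diameter_m):
    """Average objective speckle size d = 1.2 * wavelength * u / D."""
    return 1.2 * wavelength_m * standoff_m / spot_diameter_m

# Assumed example: 650 nm red laser, 15 mm stand-off u, 0.5 mm illuminated spot D.
d = speckle_size_m(650e-9, 15e-3, 0.5e-3)   # about 2.3e-5 m (23 micrometres)
pixel_pitch_m = 5e-6                        # assumed sensor pixel pitch
print(d, d / pixel_pitch_m >= 4)            # meets the 4-pixel-per-speckle guideline?
```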
Multiple or compound collecting lenses 135 are ideally placed along the length of the curved part 200 under observation. FIG. 10 and FIG. 11 illustrate a compound speckle pattern image detection apparatus covering separate areas of the surface 208 of the part 200 under observation. FIG. 10 is the end view of the compound speckle pattern image detection apparatus; FIG. 11 is the top view, with the surface 208 of the part 200 under observation located below. Experimental observation has shown events that damage the surface of the part 200 under observation and cause localized damage. Multiple, collectively unique speckled calibration images 210 are stored for each calibration position 210, preventing loss of the calibration position 210 for most common damage events. When some speckled calibration images 210 are damaged, the calibration position is still recognizable provided that locally unique speckled calibration images 210 remain. The position of a speckled calibration image 210 relative to the other speckled calibration image positions 210 can uniquely distinguish that speckled calibration image position 210 from other, similar speckled calibration image positions 210. When the complete speckled view 209, shown in FIGS. 16, 17 and 18, is large enough, multiple collectively unique speckled calibration images 210 can be stored for each calibration position within the complete speckled view 209. The compound speckle pattern image detection apparatus shown in FIG. 10 and FIG. 11 increases the area of the complete speckled view 209. However, when the multiple collectively unique speckled calibration images 210 lie close together within a single speckled view area 209 and the surface 208 is damaged in that local area, the speckled calibration images 210 may all be damaged together. Calibration positions 210 constructed from single speckled calibration images 210 that are collectively unique across the multiple speckle image detection apparatus are shown in FIGS. 16, 17 and 18. They are widely separated on the surface 208 of the part 200 under observation. Widely separated speckled calibration images 210 are much less affected by events that damage the surface 208 of the part 200 under observation; damage is typically limited to a single subcomponent of the compound speckle pattern image detection apparatus. Collectively unique speckled calibration images 210 allow instant or rapid detection of changes to speckled calibration images 210 and reliable updating of speckled calibration images 210.
FIGS. 12 and 13 illustrate a compound image detection apparatus. The relative positions of the CMOS or CCD image sensor 202 and of the laser 120 or LED 121 light sources are illustrative and are not an exact cross-section of the image detection apparatus shown in FIG. 14. The single CMOS or CCD image sensor 202 frame shown in FIG. 12 is highly suitable for collecting additive dark-field characteristics. The calibration positions 210 require that the observed area of the surface 208 of the part 200 be large enough to provide unique speckle patterns. The multiple CMOS or CCD image sensors 202 illustrated in FIG. 13 are suitable for laser speckle patterns or for lateral orientation.
Large areas of the surface 208 of the part 200 under observation can be built up from connected surface areas 208 through continuous observation as the laser speckle pattern image detection apparatus passes over the surface 208 of the part 200 under observation. This requires a small joining error between observations in order to form surface areas large enough to provide unique speckle patterns at the calibration locations. As illustrated in FIG. 3, increasing the size of the CMOS or CCD image sensor 202 to correspond to a larger surface 208 of the part 200 under observation considerably increases the distortion caused by the non-uniform average speckle size. In addition, large CMOS or CCD image sensors 202 have high complexity and cost. The compound speckle image sensor increases the observed surface 208 of the part by using several CMOS or CCD image sensors 202, as shown in FIG. 10. The advantage is that the surface 208 of the part 200 under observation corresponding to each CMOS or CCD image sensor 202 requires less uniform image scale correction 004. The compound laser speckle image sensor illustrated in FIG. 11 can also increase the combined observed surface 208 of the part 200 by using an RGB CMOS or CCD image sensor 202 to combine the separate laser speckled areas 110 or dark-field surface areas 116.
FIG. 14 is the bottom view of a speckle pattern image detection apparatus. The essential components of an image detection apparatus are the CMOS or CCD image sensor 202, one or more laser 120, 122, 124 or LED 121 light sources, a sensor board 230, a microprocessor 231, FPGA and/or ASIC, a non-volatile memory 233 and a volatile memory 234. The laser light sources produce laser speckle images, whereas LED light sources would typically be used to produce dark-field images. The CMOS or CCD image sensor 202, the one or more coherent laser light sources 120, 122, 124, the microprocessor 231, FPGA and/or ASIC, the non-volatile memory 233 and the volatile memory 234 are soldered to the sensor board 230 and electrically interconnected by it. The CMOS or CCD image sensor 202 and said one or more coherent laser light sources 120, 122, 124 can be built as submodules to facilitate the proper placement of any optional source lens 130 used. The microprocessor 231, FPGA and/or ASIC SOC (system on chip), comprising a non-volatile memory 233 and a volatile memory 234, is necessary for the execution of the control logic. A laser speckle image detection apparatus uses more than one coherent source (a red laser 120, a blue laser 122 and a green laser 124) and a CMOS or CCD image sensor 202 with corresponding red, blue and green pixel colors. A simple, non-compound laser speckle pattern image detection apparatus uses only one coherent light source (a red laser 120, blue laser 122 or green laser 124) and a black-and-white CMOS or CCD image sensor 202.
FIG. 15 is a graph of the trend in position error relative to the speckle/pixel size. The lines on the graph represent the pixel width divided by the feature width, which is effectively the pixel size divided by the feature size for unit aspect ratios. The graph requires that the pixel array size be greater than the feature size; as a result, the feature size determines the minimum size of the pixel array.
Calibration positions 210 large enough to ensure unique speckle patterns can span the combined surface 208 of all the subcomponents of the compound laser speckle pattern image detection apparatus, as shown in FIGS. 9, 10, 12 and 13. The graph shows that the position error trend decreases as the speckle/pixel size increases, so that the Nyquist criterion is satisfied. When the speckle/pixel size is less than 2 pixels x 2 pixels (the Nyquist criterion), noise obstructs a precise determination of the speckle pattern displacement, and this imprecise determination of the speckle pattern displacement prevents calibration positions 210 from having stable arrangements of significant points that contrast with their adjacent surroundings. When pixel imaging errors and CMOS or CCD image sensor 202 deficiencies are taken into consideration, it is recommended that the speckle size be greater than 4 pixels x 4 pixels along the horizontal and vertical axes of the CMOS or CCD image sensor 202.
FIG. 17 illustrates a natural speckled calibration position characteristic 210 on the surface 208 of the part 200 under observation. The entire surface 208 of the part 200 under observation produces natural speckle patterns. For clarity, FIG. 17 illustrates only the natural speckled features at one calibration position 210 within a complete speckled view 209. The natural speckled characteristics of a calibration position 210 are described by Gabor wavelets, SURF/SIFT feature descriptors and a corresponding significant speckle point pattern. The complete speckle pattern is not stored, which reduces the memory requirements and can allow complete coverage of the surface 208 of the part 200 under observation with calibration positions 210.
FIG. 19 illustrates adjoining calibration image positions 210. Each adjoining calibration image 210 has few points of interest and has at least one SURF characteristic. After initialization 804, the first calibration image 210 detected during operation 832 is unlikely to be uniquely mapped to an absolute position 041. The matched calibration image 210 assists in the selection of the position group 009 of the corresponding SIFT/SURF feature descriptor 020. As operation 832 continues, the relative positions of the additional matched calibration images 210 uniquely define the absolute position. Once the absolute position 041 has been determined from the relative positions of a collection of calibration images 210, each subsequently detected calibration image 210 indicates the new absolute position through the relative position of the collection of calibration images 210. No effort is made to ensure that any single calibration position 210 uniquely defines the absolute position. Each adjoining complete speckled view 209 contains at least one complete calibration image 210. When there are no gaps between the single calibration image positions 210, it is not necessary to estimate the absolute position on the basis of the relative displacement from the last known absolute position. The relative movement from the previous calibration images 210 reconfirms the absolute position of the current calibration image. The calibration image 210 only needs to be unique within the relative displacement error. Because each adjoining complete speckled view 209 contains at least one calibration image 210, relative displacement error cannot accumulate between calibration image positions 210.
The calibration image 210 requires very few points of interest and only a very simple characteristic aspect to be sufficiently unique. If the current calibration image 210 has been damaged, the current absolute position is known by calculating the precise relative displacement from the surrounding calibration images 210. Using the current known absolute position, a replacement calibration image 210 is selected and stored as described previously with reference to FIG. 3.
FIG. 20 illustrates some randomly separated calibration image positions 210, surrounded by calibration images 210 separated by a certain distance. Each calibration image 210 has few points of interest and has at least one SURF characteristic. After initialization 804, the absolute position is probably completely unknown until a calibration image 210 is matched. Even after a calibration image 210 has been matched during operation 832, it is unlikely to be uniquely mapped to an absolute position. The matched calibration image 210 and the relative displacement between the surrounding calibration images 210 facilitate the selection of the position group 009 of the corresponding SIFT/SURF feature descriptor 020. As operation 832 continues, the relative positions of the additional matched calibration images 210 uniquely define the absolute position. Once the absolute position has been determined, the absolute position is estimated from the relative displacement from the absolute positions of the surrounding calibration images 210. When a calibration image 210 is damaged, it is not detected at the expected location. Whenever a calibration image 210 is not detected near its estimated absolute position 033, the count 030 of successful matches is not incremented. When too few reliable calibration image points remain, a precise relative displacement from the surrounding calibration image positions 210 is calculated until a replacement calibration image 210 is stored. When the most recent calibration position 210 differs from the previously used calibration position 210, the new estimated absolute position 033, which is a less precise absolute position, is used to statistically verify that the absolute displacement defined by the new calibration image position 210 is correct. No effort is made to ensure that any single calibration position 210 uniquely defines the absolute position 041. The absolute location 041 of the calibration positions 210 is saved during calibration mode.
The process of mapping captured speckle image frames 001 to calibration position frames 210 is shown in FIG. 21. During calibration mode, the selection of points of interest 014 can be performed on all images, or it can be delayed until new significant speckle points are required for a new SURF/SIFT descriptor 034 at the known location. Once the absolute position 041 is known, the current captured frame can be used as the reference frame. When calibration mode is entered for the first time, no calibration position 210 has been saved and the time-of-flight sensor 925 is not calibrated. The known positions of the piston rod 200 at full extension and full retraction are used as the initial known absolute positions. These known absolute positions 041 are used to calibrate the time-of-flight sensor 925 and to select a reference frame. In calibration mode, the surface 208 of the part 200 under observation is mapped with frames adjoining the selected reference frame.
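The disambiguation step described above, where a calibration image that is not globally unique is resolved by the relative displacements to its already-matched neighbours, might look like the following. The candidate store, neighbour lists and tolerance are illustrative assumptions.

```python
def resolve_absolute_position(candidates, neighbour_offsets, known_neighbours, tol=0.5):
    """Pick the candidate calibration position whose stored neighbours sit at
    the measured relative offsets.

    candidates: {position_id: absolute_position} sharing the same descriptor.
    neighbour_offsets: measured relative displacements to recently matched images.
    known_neighbours: {position_id: [absolute positions of its neighbours]}.
    """
    for pid, abs_pos in candidates.items():
        expected = [abs_pos + off for off in neighbour_offsets]
        stored = known_neighbours.get(pid, [])
        if all(any(abs(e - s) <= tol for s in stored) for e in expected):
            return pid, abs_pos
    return None, None  # still ambiguous: keep accumulating matches (operation 832)
```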
As the optical displacement sensor 230 moves over the surface 208 of the part 200 under observation, speckle image frames 001 are captured. Many speckle image frames 001 may overlap with each adjoining frame mapping the surface 208 of the part 200 under observation. If sufficient memory is available, each adjoining frame is used to store a calibration image 210, as shown in FIG. 19. When there are no gaps between the calibration positions, the estimated absolute position 033 is the absolute position of the calibration positions 210.
Pixels 212 can be periodically connected to form speckle image comb filter groupings 001. The individual pixels or pixel groupings are connected to analog-to-digital converters (ADCs) which convert the pixel photo intensity into a binary representation used as the speckle image 001. Pixels 212 connected periodically to form speckle image comb filter groups 001 reduce the memory required to store the significant characteristics of the speckled calibration images 210 and the computational effort required to match the significant characteristics 014 of the current speckle image with the stored significant characteristics of the speckled calibration image 210. Dark-field surface diffraction is, by its nature, concentrated in features and is not uniform over the surface. A large number of the periodically connected pixels 212 in the speckle image comb filter groups 001 will therefore not receive dark-field surface diffraction illumination simultaneously. A commonly used alternative prior-art implementation of speckle image grouping 001 consists in binning neighboring pixels to form 2 x 2 speckle image frames 001. Speckle image frames 001 of 2 x 2 or 4 x 4 adjacent pixels reduce the detail of resolvable features. A comb array can alternatively be represented in the form of a discrete Fourier transform (DFT); it follows that a one-dimensional (1D) comb array is essentially a 1D correlation at a particular spatial frequency. The periodically connected speckle image comb filter 001 illustrates a 2D comb array with 4 x 4 elements per cell. In each cell, the 4 x 4 elements are illustrated by the line styles: dotted, short-dashed, long-dashed, continuous. The periodically connected speckle image comb filter 001 of 4 x 4 cell elements is repeated, and each pixel element of the 4 x 4 cell is connected to the corresponding pixel element of the adjacent cells. Alternative implementations may include 2D comb arrays with other than 4 x 4 pixels per cell, 2D comb arrays with sub-arrays, or 2D comb arrays with dynamically reconfigurable comb connections that allow the spatial frequency to be changed dynamically. The resolution of a dynamically reconfigurable 2D comb array can be changed by adjusting the resolution of the connected ADC, by reconfiguring the electrical connections of the pixels to the connected ADC, or by varying the combination of the ADC binary pixel representations used as the speckle image 001. The reconfigurable combination of the binary pixel representations produced by the ADC and used as the speckle image 001 is the preferred implementation. Equivalently, the speckle image 001 can be combined in a reconfigurable manner in the uniform image scale correction step 004 of the algorithm. The periodically connected speckle image comb filter 212, 001 has the advantage that the pixel correlation is made possible in computer hardware.
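One way to read the periodic comb grouping described above is as a tiled summation: every pixel sharing the same offset within its 4 x 4 cell is combined with the corresponding pixels of all other cells. The sketch below does this in software purely for illustration; in the apparatus the grouping is done electrically at the ADC, and the cell size and summing reduction here are assumptions.

```python
import numpy as np

def comb_filter_groups(frame, cell=4):
    """Software model of the periodically connected comb filter (items 212/001):
    returns one combined value per (row, col) offset within a cell x cell tile,
    summed across all tiles of the frame."""
    h, w = frame.shape
    h -= h % cell
    w -= w % cell
    tiles = frame[:h, :w].reshape(h // cell, cell, w // cell, cell)
    # Sum over the tile grid, keeping the cell x cell offsets: 16 comb outputs for cell=4.
    return tiles.sum(axis=(0, 2))
```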
When there are gaps between the calibration positions 210, as illustrated in FIG. 20, the estimated absolute position 033 is obtained by measuring the relative displacement from the previous calibration position 210. Small errors in the measured relative displacement accumulate and cause an error in the estimated absolute position 033. The variance and the mean are calculated from the pixels of the calibration image 210 across the speckle image frames 001 that overlap the calibration frame 210. If the variance is uniformly too large, there is a pixel misalignment error and the IC-GN sub-pixel delta displacement algorithm 440, or several iterations of it, is required. Points with high variance are not good candidates for significant characteristics 014 of the current speckle image and should not be used as significant points of the calibration image 210. The selected significant points of the speckled calibration image 210 can be naturally occurring characteristic features or characteristic features produced by laser engraving or darkening. There is no need for precise location control when engraving unique characteristic features. Good candidate characteristic features are the one-dimensional binary coded calibration patterns 215 shown in FIG. 25 and the 2D binary coded calibration patterns 216 illustrated in FIG. 26. The relative geometry of the pattern is sufficient to describe the one-dimensional binary coded calibration pattern 215 and the 2D binary coded calibration pattern 216 as a sequence, which greatly reduces the memory required to store 1D binary coded calibration patterns 215 or 2D binary coded calibration patterns 216. The precise location of each element of the binary coded patterns is determined using the method described above for storing natural characteristic features as calibration positions 210.
In FIG. 22, calibration positions 210 have previously been recorded in the calibration mode described in FIG. 21. These calibration positions 210 may be natural characteristic features, as illustrated in FIGS. 16, 17 and 18, or engraved 1D calibration patterns 215 / 2D calibration patterns 216. The estimated absolute position 033 is obtained by measuring the relative displacement from the previous calibration position 210 and/or from the time-of-flight sensor 925. The variability of the time-of-flight sensor 925 and its vulnerability to optical interference are the main limitations of time-of-flight sensors 925. A low-cost time-of-flight sensor 925 is able to estimate the absolute position, which considerably reduces the time required to locate calibration positions 210. The estimated absolute position 033 is used by the position group selector 009 as input to the selector 012 of the SURF/SIFT descriptor of the candidate calibration position 210. With knowledge of the estimated absolute position 033, a few significant characteristic points 014 in the captured speckle image frame 001 are sufficient to detect the calibration position 210 and determine the absolute position 041.
FIG. 23 is a side cross-sectional view of an embodiment of a hydraulic cylinder assembly with an optical displacement sensor assembly 230. The hydraulic cylinder assembly comprises a cylindrical body 901 and a detection apparatus housing 910. A piston 902 is disposed inside the cylindrical body 901 for reciprocating movement along an axis in response to hydraulic fluid.
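A minimal sketch of the variance screen described above, assuming the overlapping frames have already been aligned to the calibration frame. The threshold value and the decision to re-run IC-GN when the variance is uniformly high are illustrative choices, not values from the patent.

```python
import numpy as np

def stable_feature_mask(overlapping_frames, var_limit=25.0):
    """Compare the pixels of a candidate calibration image across the speckle
    frames 001 that overlap it; reject high-variance pixels as candidates for
    significant points 014."""
    stack = np.stack(overlapping_frames).astype(float)   # shape (n_frames, H, W)
    mean = stack.mean(axis=0)
    var = stack.var(axis=0)
    if var.mean() > var_limit:
        # Uniformly high variance suggests pixel misalignment: re-run the
        # IC-GN sub-pixel refinement (item 440) before selecting points.
        return None, mean
    return var <= var_limit, mean   # True where a pixel is a stable candidate
```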
The piston 902 separates the cylindrical body 901 into two chambers 907 and 908. The housing 910 is securely fixed to the cylindrical body 901. One end of a piston rod 200 is fixed to the piston 902 and extends along the axis of movement; the other end of the piston rod 200 extends outside the housing 910. The base of the cylindrical body and/or the external end of the piston rod 200 can be connected directly or indirectly to a machine component. The cylindrical body 901 has two openings for the passage of fluid, such as oil or water, into and out of the chambers 907, 908 to move the piston 902. Seals 906 inside the cylindrical body 901 are arranged flush with the surface of the piston rod 200 and thus prevent the fluid from leaving the chamber 907. The housing 910 contains an optical displacement sensor 230, which is used to determine the instantaneous position of the piston rod 200. Seals 912 inside the housing 910 are designed to wipe the surface of the piston rod 200 and thus prevent fluid or dirt from contaminating the optical displacement sensor 230. The housing 910 protects the optical displacement sensor assembly 230 from the environment and allows easy replacement of the detection unit. The optical displacement sensor 230 is mounted in the housing 910 close to the surface of the piston rod so that the movement of the piston rod 200 can be read.
The head contact pressure sensor 919 is mounted at the head stop 905 of the cylinder body 901, and the base contact pressure sensor 918 is mounted at the base stop 904 of the cylinder body 901. Together, these two contact sensors provide a two-bit digital signal indicating whether the piston 902 has reached the head stop 905, the base stop 904, or neither. Correspondingly, when the piston 902 reaches the head stop 905 or the base stop 904, the absolute displacement information in memory is adjusted and updated. The time-of-flight sensors 925 are mounted on the housing 910 and directed towards the time-of-flight reflector 903 fixed to the piston rod 200. The time-of-flight sensors 925 are calibrated at the head stop 905 and/or at the base stop 904. The time-of-flight sensors 925 estimate the extension of the piston rod 200, and this estimate is refined by means of the optical displacement sensor 230.
In operation, fluid introduced into or extracted from the chambers 907, 908 at pressures that vary over time causes the piston 902, and therefore the piston rod 200, to slide in one direction or the other with respect to the optical displacement sensor 230. The optical displacement sensor 230 reads the relative displacement of the piston rod 200 and produces a corresponding digital signal. The microprocessor 231, FPGA and/or ASIC on the sensor card 930 calculates the absolute displacement of the piston rod 200 by matching the calibration pattern and using the relative displacement. The absolute displacement obtained indicates the actual position of the piston rod 200 and of the piston 902.
FIG. 24 is an isometric view of a hydraulic cylinder with an attached photo image sensor and shows the section line A-A used to obtain the cross-section shown in FIG. 23. FIG. 25 is a schematic view of the part 200 under observation with one-dimensional (1D) binary coded calibration patterns 215. The number of calibration patterns is not limited to three; the number and location of the calibration patterns are determined by the requirements of the application.
Unique calibration patterns are used to determine the current calibration position from its calibration pattern. The absolute location of the 1D binary coded calibration patterns 215 is determined during calibration mode, which allows lower-cost laser engraving to be used for the 1D binary coded calibration patterns 215. FIG. 26 is a schematic view of the part 200 under observation with 2D binary coded calibration patterns 216. The number and location of the calibration patterns are determined by the requirements of the application. Unique calibration patterns make it possible to determine the current calibration position from its calibration pattern. The absolute location of the 2D binary coded calibration patterns 216 is determined during calibration mode, which allows lower-cost laser engraving to be used for the 2D binary coded calibration patterns 216. The reduced etching required for widely spaced 2D binary coded calibration patterns 216 can further reduce manufacturing costs.
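A minimal sketch of how an engraved binary coded calibration pattern might be looked up once its bit sequence has been read from the speckle image. The example codes and the exact-match policy are assumptions; the patent only requires the coded sequences to be spatially unique, with their absolute locations recorded during calibration mode.

```python
def identify_coded_pattern(observed_bits, stored_patterns):
    """Match an observed bit sequence (from a 1D pattern 215 or 2D pattern 216)
    against the unique codes stored for each calibration position."""
    for position_id, bits in stored_patterns.items():
        if bits == observed_bits:
            return position_id   # absolute location was recorded during calibration
    return None                  # no match: fall back to relative displacement / ToF

# Hypothetical stored codes for three 1D patterns (FIG. 25 shows three marks).
stored = {"P1": "101101", "P2": "110010", "P3": "100111"}
print(identify_coded_pattern("110010", stored))   # -> "P2"
```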
Claims (19) [1" id="c-fr-0001] 1. Optical device for measuring the absolute mechanical displacement of actuators, joints or other mechanical devices which contain mechanical parts moving relative to each other, characterized in that it comprises: a) optical means (230) for capturing images (001), directed towards the surface (208) of a mobile part (200) of said mechanical device under observation which move relative to said optical means (230) images (001), so that said optical means (230) for capturing images (001) are mounted on another part of said mechanical device, b) means for selecting, on said images (001), points (014) which contrast with the adjacent environment, c) a plurality of calibration positions (210), such that at each calibration position (210), there is an arrangement of known calibration points which contrast with the adjacent environment on said surface (208), d) means for detecting an alignment (026) of said movable part (200) by said optical image capture means (230) (001), in said calibration position (210) with an arrangement of points of known calibration which are in contrast to the adjacent environment, by matching said points (014) which contrast with the adjacent environment with said known calibration points which are in contrast with the adjacent environment, so that the position absolute of said movable part (200) relative to said optical means (230) for capturing images (001) is known to said detected alignment (026), e) means for alternately measuring said absolute position of said mobile part (200) other than by directly detecting said alignment (026) with said calibration positions (210), f) means for identifying said calibration positions (210) when said detected alignment (026) is detected as being near said known absolute position of said calibration positions (210), and said absolute position measured alternately from said movable part (200) does not correspond to said known absolute position of said calibration position (210), so that the absolute positions of said identified calibration positions (210) are capable of being updated, and g) means for identifying said calibration positions (210) when said absolute position measured as an alternative to said detected alignment (026) is not detected as being near said known absolute position of said calibration positions (210), so that said identified calibration positions (210) can be deleted, said known absolute position of said movable part (200) with respect to said optical image capture means (230) (001) being thus corrected at said calibration positions calibration (210), and said calibration positions (210) thus not requiring marking on said surface (208) of said movable part (200) of said mechanical device under observation, said means for alternating absolute position measurement being thus able to check said known absolute position of said calibration positions (210), said alternative absolute position measurement means thus being able to improve the pre cision of said known absolute position of said calibration positions (210), said alternative absolute position measurement means thus being able to identify and delete said calibration positions (210) which are no longer detectable, and said optical device thus being able to free up memory used by said calibration positions (210) which are no longer detectable. [2" id="c-fr-0002] 2. 
Optical device according to claim 1, characterized in that it further comprises: a) means for creating a composite image (820) from a plurality of said images (001), which combines optical characteristics originating from said plurality of said images (001), and b) said alternative means for measuring the absolute position by said alignment detection means (026) with said image of said composite image (820), so that said absolute position is known at locations other than those in which it there is said detected alignment (026) with said current image, said alternative means for measuring the absolute position being thus able to determine said absolute position of said movable part (200) on the areas covered by said plurality of said images (001) of said composite image (820), and said composite image (820) combining optical characteristics from said plurality of said captured images (001) thereby having an increased probability of containing unique optical characteristics, and therefore more than one such composite images (820 ) are not necessary to uniquely define said calibration positions (210). [3" id="c-fr-0003] 3. Optical device according to claim 1, characterized in that said alternative means for measuring said absolute position of said movable part (200) is another optical device for measuring the absolute mechanical displacement of actuators, joints or other mechanical devices which contain mechanical parts which are movable with respect to each other, said alternative means for measuring the absolute position being capable of determining said absolute position of said movable part (200) at the locations in which another optical device for measuring the displacement absolute mechanics supplies said absolute position measured by its means for detecting said alignment (026) at its calibration positions (210). [4" id="c-fr-0004] 4. Optical apparatus according to claim 1, characterized in that the optical apparatus further comprises: a) one or more means for measuring the relative position of said movable part (200) with respect to the previous position of said movable part (200) with respect to said optical means (230) for capturing images (001), and b) means for estimating the absolute position by accumulating said relative displacement measured from said known absolute positions of said calibration positions (210), said calibration positions (210) for aligning said movable part ( 200) thus being captured from said surface (208) of said movable part (200). [5" id="c-fr-0005] 5. 
Optical device according to claim 4, characterized in that it further comprises: a) means for alternately measuring said absolute position of said movable part (200) other than by said detection alignment (026) directly with said calibration positions (210), b) means for comparing said estimated absolute position with said alternately measured absolute position, so that when the difference between said estimated absolute position and said alternately measured absolute position is too large, there is not enough calibration positions (210) for aligning said movable part (200) by said optical image capture means (230), and c) means for storing said calibration position (210), so that at each calibration position (210), said absolute position of said movable part (200) relative to said optical capture means (230) of images (001) is known and so that at each calibration position (210), the calibration points which contrast with the surroundings are known, whereby said calibration positions (210) are stored for replacing all missing or obscure calibration positions (210), whereby said calibration positions (210) for aligning said movable portion (200) are captured from said surface (208) of said portion mobile (200). [6" id="c-fr-0006] 6. Optical apparatus according to claim 1, characterized in that the relative position of said known calibration point arrangements of said calibration positions (210) defines only the absolute position, so that said calibration point arrangements known of said calibration positions (210) are locally unique as required to determine said relative position of said calibration positions (210), said relative position of said additional calibration positions (210) allowing detection and correction of errors caused by a change in one or more of said known calibration point arrangements, and the locally unique arrangement of known calibration points is thus less complex, requiring less non-volatile memory (233), which allows more features to be stored locally unique (018). [7" id="c-fr-0007] 7. 
Optical device according to claim 1, characterized in that said arrangement of known calibration points from said calibration positions (210) is derived from marks created on said surface (208) of said movable part (200), so that said arrangement of known calibration points of said calibration positions (210) is translated into coded sequences, so that said calibration positions (210) with said arrangement of known calibration points translated into coded sequences at a dimension have said calibration positions (210) defined in one dimension, and such that said calibration positions (210) with said arrangement of known calibration points translated into two-dimensional coded sequences have said calibration positions ( 210) defined in two dimensions, the creation of marks on said surface (208) of said movable part (200) thus making it possible to be sure that said calibration positions e (210) with said known calibration points which are in contrast to the adjacent environment are represented as said coded sequence, thereby requiring significantly lower non-volatile memory, the creation of marks on said surface (208) of said movable part (200) thus making it possible to be sure that said calibration positions (210) with said known calibration points are located close enough to ensure an acceptable measurement error, and the creation of marks on said surface (208) of said movable part (200) thereby making it possible to be assured that said calibration positions (210) with said known calibration points can be sparingly located, in order to minimize manufacturing costs. [8" id="c-fr-0008] 8. Optical device according to claim 1, characterized in that said means for detecting the alignment (026) of said movable part (200) further comprise: a) one or more optical distance reflectors (903) moving together with said movable part (200) of said mechanical device under observation, and b) one or more optical distance sensors (925) moving in conjunction with said optical image capture means (230) (001), so that said one or more optical distance sensors (925) measure the distance from said one or more optical distance reflectors (903), said optical distance sensors thereby assisting said alignment detection means (026) of said points (014) with said known calibration points and thus reducing the calculation cost. [9" id="c-fr-0009] 9. 
Optical method for measuring the absolute mechanical displacement of actuators, joints or other mechanical devices which contain moving mechanical parts which move relative to each other, characterized in that it uses the optical device according to one of the preceding claims, and in that it comprises the steps consisting in: a) optically capturing (230) images (001) of the surface (208) of a movable part (200) of said mechanical device under observation which moves relative to means for said optical capturing of images (001), b) selecting, in said images, arrangement points (014) which contrast with the adjacent environment, c) detecting the alignment (026) of said movable part (200) with calibration positions (210) by matching said selected arrangement points (014) with known calibration points which are in contrast to the the adjacent environment of said calibration positions (210), so that the absolute position of said moving part (200) relative to the means for said optical image capture is known to said detected alignment (026), d) measuring the relative displacement of said movable part (200) of said mechanical apparatus by measuring a relative displacement between said optically current captured images (001) and images (001) previously optically captured from said surface (208) of said movable part ( 200), and e) estimating the absolute position of said mobile part (200) by adding said cumulative measured relative displacements of said mobile part (200) from said known absolute position of said calibration position (210), the absolute position of said part mobile (200) which moves relative to the optical capture means of said images (001) being thus corrected to the calibration positions (210), and the cost of the mechanical apparatus being thus reduced thanks to the reduced number of positions calibration (210) required. [10" id="c-fr-0010] 10. Optical method according to claim 9, characterized in that it further comprises the steps consisting in: a) combining said optically captured images (001) to form groups (212) of combined images, and b) operating said selection of stitch arrangements (014) from among all of said images (001) of said group (212) of combined images, so that said selected stitch arrangements (014) having increased contrast with the adjacent environment can be found, said groups (212) of combined images thus having an increased likelihood of containing unique optical characteristics, multiple groups (212) of combined images are therefore probably not required to uniquely define said positions of calibration (210), and the arrangements of selected points (014) having an increased contrast with the adjacent environment thus allowing faster recognition of said arrangements of selected points (014), which allows faster movement of said mechanical parts which move relative to each other. [11" id="c-fr-0011] 11. 
Optical method according to claim 9, characterized in that it further comprises the steps consisting in: a) combining sequences of optically captured images (001), separated equally, so as to form groups (212) of combined images, such that the separation of said optically captured images (001), separated equally equal, either configurable, and b) operating, in all of said images (001) of said group (212) of combined images, said selection of said point arrangements (014), so that said selected point arrangements (014) having increased contrast with the environment adjacent can be found, said surface resolution of the group (212) of combined images and the rate of surface movement being thus able to be configured by the separation of said optically captured images (001), equally separated, and said groups (212) of combined images thereby having an increased likelihood of containing unique optical characteristics, which results in multiple groups (212) of combined images not being required to uniquely define said calibration positions (210) . [12" id="c-fr-0012] 12. Optical method according to claim 9, characterized in that said optical image capture consists in using one or more optical sensors (925) of distance, and the method for obtaining the known absolute displacement of said movable part (200) includes the steps of: a) calibrating one or more of said optical distance sensors (925) at said calibration positions (210), and b) measuring the absolute displacement by means of one or more of said optical distance sensors (925), said measurement of said absolute displacement by optical distance sensors (925) being thus verified and systematic optical measurement errors of distance being thus suitable for correction by calibration. [13" id="c-fr-0013] 13. Optical method according to claim 9, characterized in that said calibration positions (210) are adjacent or overlap, so that said measured relative displacement does not accumulate during movement between said calibration positions (210 ), said absolute displacement being thus known at the level of adjacent or overlapping calibration positions (210), and the errors in said measured relative displacement thus not accumulating in said estimated absolute position. [14" id="c-fr-0014] 14. 
Optical method according to claim 9, characterized in that it further comprises the steps consisting in: a) alternately measuring said absolute position by means other than directly by said alignment (026) of detection with said calibration positions (210), b) identifying said calibration positions (210) when said alternately measured absolute position is located near said known absolute displacement of said calibration position (210), so that said known absolute displacement of said position identified calibration (210) can be updated with said absolute position measured alternately, c) identifying said calibration positions (210) when said alternately measured absolute position is not located near the known absolute displacement of said calibration position (210), so that said calibration position (210 ) identified can be deleted, and d) identifying the locations in which said calibration positions (210) are missing, by comparing said estimated absolute position with said alternately measured absolute position, so that a calibration position (210) is missing when the difference between said estimated absolute position and said alternately measured absolute position is too large, so that the creation of calibration positions (210) by storing said arrangements of selected points (014) as said calibration points known to said calibration positions (210) at said alternately measured absolute positions when said calibration positions (210) are missing, said alternately measured absolute positions being thus used to verify said known absolute position obtained from said calibration positions ( 210), said absolute positions measured alternately thus being able to be used to improve the accuracy of said known absolute position of said calibration positions (210), said alternately measured absolute positions thus being able to be used to identify and delete said calibration positions (210) which are no longer detectable at proximity to their original location, the memory used by said calibration positions (210) which are erased thus being able to be freed, said memorization of arrangements of selected points at the locations of said absolute positions measured alternately to create new positions calibration (210) thus continuing until they are sufficient for said estimated absolute position to have the desired precision on said surface (208) of said movable part (200), and new calibration positions ( 210) with said memorized selected arrangements being thus added, replacing any position lost or hidden calibration (210). [15" id="c-fr-0015] 15. Optical method according to claim 14, characterized in that said alternative measurement of said absolute position of said movable part (200) is another optical method of measuring the absolute mechanical displacement of actuators, joints or other devices mechanical parts which contain mechanical parts which are movable with respect to each other, said absolute alternative measurement position being thus able to determine said absolute position of said movable part (200) at locations in which said other optical method for measuring absolute mechanical displacement supplies said absolute position measured by said detection alignment (026) at its calibration positions (210). [16" id="c-fr-0016] 16. 
Optical method according to claim 14, characterized in that said means (260) for optically capturing said images (001) comprise steps consisting in: a) capturing a plurality of said images (001) of said surface (208) of said movable part (200), b) combining said plurality of said images (001) to create a composite calibration image (820) which combines optical characteristics originating from said plurality of said images (001), c) performing said alternative measurement of said absolute position by determining said absolute position of said movable part (200) from said known absolute position of one or more of said images (001) of said composite calibration image (820), said alternative absolute position measurement being thus obtained by means other than direct detection of said alignment (026) of said current image with said calibration positions, and said composite image, which combines optical characteristics coming from said plurality of said captured images (001), thus having an increased likelihood of containing unique optical characteristics, so that multiple composite images (820) are not required to uniquely define said calibration positions (210). [17" id="c-fr-0017] 17. Optical method according to claim 9, characterized in that it further comprises the step of selecting said point arrangements (014) which are globally unique, so that each globally unique point arrangement (014) is unique among all of said point arrangements (014) stored at said calibration positions (210), each of said calibration positions (210) thus being identifiable by its globally unique point arrangements (014) stored at said calibration positions (210). [18" id="c-fr-0018] 18. Optical method according to claim 9, characterized in that it further comprises the step of selecting said point arrangements (014) which are locally unique, so that each locally unique point arrangement (014) is unique within the estimation error of said estimated absolute position, each locally unique selected point arrangement (014) thus being able to be very simple, needing only to be unique within the relatively small area defined by said estimation error of said estimated absolute displacement, and each locally unique selected point arrangement (014) thus being simple and requiring less non-volatile memory (233), which makes it possible to store more of said locally unique point arrangements (014). [19" id="c-fr-0019] 19. Optical method according to claim 14, characterized in that: a) selecting said point arrangements (014) involves selecting naturally occurring points which contrast with their naturally occurring adjacent surroundings in said images (001), and b) selecting said point arrangements (014) to be stored as said known calibration point arrangements of a calibration position (210) involves selecting naturally occurring points (014) which contrast with their naturally occurring adjacent surroundings, so that a manufacturing process for creating marks on said surface (208) of said movable part (200) is not necessary, which reduces the manufacturing cost.
Patent family (publication number | publication date):
WO2018076120A1 | 2018-05-03
TW201818047A | 2018-05-16
EP3532804A4 | 2020-04-29
KR20190075965A | 2019-07-01
US20180120437A1 | 2018-05-03
FR3058212B1 | 2021-09-10
US10365370B2 | 2019-07-30
EP3532804A1 | 2019-09-04
CA3067971A1 | 2018-05-03
TWI642896B | 2018-12-01
CN110036261A | 2019-07-19
BR112019008756A2 | 2019-07-30
Legal status:
2018-10-26 | PLFP | Fee payment | Year of fee payment: 2
2019-10-30 | PLFP | Fee payment | Year of fee payment: 3
2020-04-24 | PLSC | Publication of the preliminary search report | Effective date: 2020-04-24
2020-10-30 | PLFP | Fee payment | Year of fee payment: 4
2021-10-28 | PLFP | Fee payment | Year of fee payment: 5
Priority (application number | filing date):
US201662414806P | 2016-10-31
US62414806 | 2016-10-31