Patent abstract:
The invention relates to a method for the automatic determination of at least one parameter associated with an ophthalmic device selected by an individual, said device comprising a frame called the selected frame, said determination being made from an acquired image of the face of the individual wearing the selected frame or a frame of a second ophthalmic device. The method comprises steps of: - detecting at least one characteristic point of at least one eye of the individual in the acquired image and estimating the three-dimensional position of the detected characteristic point(s); - detecting the worn frame and estimating the three-dimensional position of the worn frame by aligning a three-dimensional representation of the worn frame with the frame worn in the acquired image; - determining the parameter(s) from the position of the eyes relative to the three-dimensional representation of the selected frame.
Publication number: FR3069687A1
Application number: FR1757070
Filing date: 2017-07-25
Publication date: 2019-02-01
Inventors: Sylvain Le Gallou; Jerome Guenard; Ariel Choukroun; Serge COURAL
Applicants: Visionhub; FITTINGBOX
Main IPC class:
Patent description:

TECHNICAL FIELD OF THE INVENTION
The field of the invention is that of optics, and more particularly that of measurement for the adaptation of at least one lens of an ophthalmic device to the sight of an individual.
More specifically, the invention relates to a method for determining at least one parameter associated with an ophthalmic device intended to be worn by an individual. Such a device can be for example a pair of glasses or a mask.
The invention finds particular applications in the field of the sale of an ophthalmic device in stores or in stand-alone kiosks installed for example in a sales area not specialized in optics.
STATE OF THE ART
Generally, adapting lenses to the sight of an individual before mounting them in the frame of a pair of glasses requires determining optical parameters such as the lens width, the lens height, the bridge width, the effective diameter, the curve angle, the pupillary distance, the half pupillary distances, the pupil heights, the segment heights, the pantoscopic angle, or the lens-eye distance. Two main standards exist to specify the dimensions of the outline of a rim of a frame: the BOXING system, based on framing the rim with a rectangle, and the DATUM system, based on the width of the rim at half height. The BOXING system is the standard generally used by lens manufacturers and is the default on automatic edgers, whereas the DATUM system is the traditional standard. In the DATUM system, the height is generally measured with a ruler directly on the lens.
Figure 5 illustrates the difference between the BOXING and DATUM systems, in which A represents the lens width, D the bridge width and B the lens height. The main difference between the two systems relates to the definition of the bridge width D, which in the DATUM system corresponds to the distance between the lenses taken at half their height, whereas in the BOXING system the bridge width D corresponds to the minimum distance between the lenses. The centering point of the lens is defined by the interpupillary distance Pd and by the height of the pupils h. The interpupillary distance Pd is shown in Figure 5 from the median axis between the lenses. The height h depends on the system chosen. In the BOXING system, the height h is defined from the base of the lenses, that is to say from a line simultaneously tangent to the two lenses, whereas in the DATUM system the height h is defined from the lower edge of the lens vertically below the centering point.
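By way of illustration, the BOXING dimensions and the DATUM bridge width can be computed from 2D lens contours as follows (a simplified sketch assuming contours already expressed in millimeters, with x increasing to the right and y upward; all names are illustrative and not part of the invention):

```python
import numpy as np

def boxing_dimensions(left_lens, right_lens):
    """Compute A (lens width), B (lens height) and D (bridge width)
    in the BOXING system from 2D lens contours (arrays of (x, y) points)."""
    left_lens = np.asarray(left_lens, float)
    right_lens = np.asarray(right_lens, float)
    A = right_lens[:, 0].max() - right_lens[:, 0].min()  # bounding-box width
    B = right_lens[:, 1].max() - right_lens[:, 1].min()  # bounding-box height
    # BOXING bridge: minimum horizontal distance between the two lenses
    D = right_lens[:, 0].min() - left_lens[:, 0].max()
    return A, B, D

def datum_bridge(left_lens, right_lens, tol=0.5):
    """DATUM bridge width: distance between the lenses taken at half height."""
    left_lens = np.asarray(left_lens, float)
    right_lens = np.asarray(right_lens, float)
    y_mid = 0.5 * (right_lens[:, 1].max() + right_lens[:, 1].min())
    near_l = left_lens[np.abs(left_lens[:, 1] - y_mid) < tol]
    near_r = right_lens[np.abs(right_lens[:, 1] - y_mid) < tol]
    return near_r[:, 0].min() - near_l[:, 0].max()
```

For rectangular contours the two bridge definitions coincide; for real, rounded rims the DATUM bridge is generally larger than the BOXING bridge.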
FIG. 6 shows a frame seen from above, illustrating the angle W, called the curve angle or wrap angle, defined as the angle between the tangent to the bridge and the plane connecting the ends of the front face of one of the rims. This angle is measured horizontally.
The optical parameters intrinsic to the chosen frame, that is to say measurable without requiring the presence of a wearer, such as the parameters A, B, D and W, are generally measured on 2D images of the frame. These parameters are consequently often determined imprecisely, because the images used for the measurement correspond to planar projections of three-dimensional objects, which are in particular often curved or rounded.
Other eye measurements of interest for fitting lenses, which require measurements on the patient, are: the pupillary distance (PD), the half pupillary distances (monoPD), the pantoscopic angle (PA), the lens-eye distance (VD, for "Vertex Distance"), the heights between the bottom of the frame and the centers of the pupils (FH, for "Fitting Heights"), the heights between the bottom of the frame and the lower eyelid partially covering the pupil (SH, for "Segment Heights") and finally the effective diameter (ED), which corresponds to the minimum diameter of the lens blank from which the lens of the pair must be cut.
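For illustration, these parameters can be gathered in a simple record with a basic consistency check (field names and the tolerance are illustrative, not from the invention):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FittingParameters:
    """Container for the fitting parameters listed above
    (lengths in mm, angles in degrees)."""
    pd: Optional[float] = None             # pupillary distance
    mono_pd_left: Optional[float] = None   # left half pupillary distance
    mono_pd_right: Optional[float] = None  # right half pupillary distance
    pa: Optional[float] = None             # pantoscopic angle
    vd: Optional[float] = None             # vertex (lens-eye) distance
    fh: Optional[float] = None             # fitting height
    sh: Optional[float] = None             # segment height
    ed: Optional[float] = None             # effective diameter

    def monocular_sum_matches_pd(self, tol=1.0):
        """Sanity check: the two half pupillary distances should sum to PD."""
        if None in (self.pd, self.mono_pd_left, self.mono_pd_right):
            return True  # nothing to check
        return abs(self.mono_pd_left + self.mono_pd_right - self.pd) <= tol
```

Such a check is useful because the two monoPD values are measured independently from the median axis of the frame.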
These optical parameters relating to the wearing of the frame on the face of the individual are generally calculated by indirect methods using a reference device positioned on the face of the individual. The target frame used bears markers whose geometrical configuration is precisely known, in order to allow the eye and optical parameters related to the individual to be measured on an image of his face wearing the target frame.
The major drawback of these techniques is that they require both a perfect positioning of the target frame on the head of the individual and that the individual be as fronto-parallel as possible with respect to the measuring device. This double condition, when it is respected, notably makes it possible to avoid parallax errors on the measurements; in practice, however, it is rarely fully met.
Consequently, these techniques generally result in measurement errors which affect the positioning of the corrective lenses in the frame chosen by the individual, it being understood that transferring measurements between two distinct frames can also introduce an additional deviation in the positioning of the corrections on the lenses assembled in the chosen frame.
On the other hand, it should be emphasized that these techniques generally offer a target frame having a substantially flat and non-curved face, which introduces an additional difficulty when carrying out measurement on curved frames.
Finally, these target frame techniques can be considered invasive or bothersome by the individual.
It should also be noted that the target frame is generally specific to a particular measurement device.
OBJECTIVES OF THE INVENTION
None of the current systems makes it possible to meet all the requirements simultaneously, namely to offer an automatic measurement technique which is both precise and insensitive to the positioning of the face relative to the acquisition device.
Another objective of the invention is to propose a measurement technique which does not require the intervention of a person skilled in the art in the field of optical measurement, in particular an optician.
Another objective of the invention is to propose a measurement technique using any type of frame for a pair of glasses, and in particular directly on the frame selected by an individual requiring optical correction.
Another objective of the invention is to propose a technique which makes it possible to best adjust the positioning of the correction made for near and / or far vision on the lens assembled in the frame chosen by the individual.
STATEMENT OF THE INVENTION
These objectives, as well as others which will appear subsequently, are achieved using a method for automatically determining at least one parameter associated with an ophthalmic device selected by an individual, said ophthalmic device comprising a frame called the selected frame, said determination being made from an image of the face of the individual wearing a frame of a pair of glasses called the worn frame, said worn frame being the selected frame or a frame of a second ophthalmic device, said image being acquired by an image acquisition system.
A parameter determined by the method, which can be called an optical parameter, is either a parameter linked to the face of the individual, such as the interpupillary distance, a parameter linked to a frame of a pair of glasses, such as the lens width, the bridge width or the temple width, or a parameter relating to both the frame and the face of the individual, such as the lens-eye distance or the pantoscopic angle. From the parameter(s) determined by the method, it is thus possible to cut a lens, corrective or not, in order to position it correctly in the frame according to the individual who will wear it.
An ophthalmic device may for example be a pair of glasses or a mask. The ophthalmic device can also be a device for protecting the eyes of the individual wearing said device on his face.
It should be noted that the frame of an ophthalmic device generally extends laterally on either side of the individual's head.
The frame worn by the individual and used by the process which is the subject of the invention can advantageously be the final frame selected by the individual, which makes it possible to determine the parameters precisely as a function of the wearing on the face of the individual of the selected frame. Generally, the frame selected is an actual frame chosen by the individual who wishes to purchase this frame in which at least one corrective lens is assembled.
It should be emphasized that in an advantageous mode, the frame worn by the individual is a conventional frame which does not include specific markers intended for the measurement of parameters by identification of the three-dimensional position of said markers.
Furthermore, the worn frame can advantageously comprise at least one lens without optical correction, commonly called a neutral lens, or at least one corrective lens. It should be emphasized that said corrective lens comprises at least one optical correction.
According to the invention, the determination method comprises steps of:
- detection of at least one characteristic point of at least one eye of the individual on the acquired image, and estimation, in the reference frame of the image acquisition system, of the three-dimensional position of the detected characteristic point(s);
- detection of the worn frame and estimation of the three-dimensional position of the worn frame in the reference frame of the image acquisition system, by aligning a three-dimensional representation of the worn frame with the worn frame in the acquired image;
- in the case where the worn frame is the frame of the second ophthalmic device, positioning, in the reference frame of the image acquisition system, of a three-dimensional representation of the selected frame, the representation of the selected frame being positioned relative to the representation of the worn frame by means of a positioning offset between the two frames, said offset being previously established;
- expression of the position of each characteristic point with respect to the three-dimensional representation of the selected frame;
- determination of the parameter(s) from the position of the eyes relative to the three-dimensional representation of the selected frame.
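By way of illustration, the last two steps, expressing a characteristic point in the coordinate system of the selected frame and deriving a parameter such as the interpupillary distance, can be sketched as follows (a minimal sketch assuming the frame pose is given as a 4x4 rigid transform, a representation not imposed by the invention):

```python
import numpy as np

def to_frame_coordinates(point_world, frame_pose):
    """Express a 3D point, given in the reference frame of the acquisition
    system, in the coordinate system of a frame whose pose is a 4x4 rigid
    transform (frame coordinates -> acquisition-system coordinates)."""
    R, t = frame_pose[:3, :3], frame_pose[:3, 3]
    return R.T @ (np.asarray(point_world, float) - t)

def interpupillary_distance(left_pupil, right_pupil):
    """PD as the Euclidean distance between the two pupil centers (mm)."""
    return float(np.linalg.norm(np.asarray(left_pupil, float)
                                - np.asarray(right_pupil, float)))
```

Expressing the eye points in the frame's own coordinate system is what makes the derived parameters independent of the head pose in the acquired image.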
In other words, in the first case, in which the individual wears the selected frame in the acquired image, the determination method comprises steps of:
- detection of at least one characteristic point of at least one eye of the individual on the acquired image, and estimation, in the reference frame of the image acquisition system, of the three-dimensional position of the detected characteristic point(s);
- detection of the selected frame and estimation of the three-dimensional position of the selected frame in the reference frame of the image acquisition system, by aligning a three-dimensional representation of the selected frame with the selected frame in the acquired image;
- expression of the position of each characteristic point with respect to the three-dimensional representation of the selected frame;
- determination of the parameter(s) from the position of the eyes relative to the three-dimensional representation of the selected frame.
In the second case, in which the individual wears the frame of the second ophthalmic device, also called the second frame, in the acquired image, the determination method comprises steps of:
- detection of at least one characteristic point of at least one eye of the individual on the acquired image, and estimation, in the reference frame of the image acquisition system, of the three-dimensional position of the detected characteristic point(s);
- detection of the second frame and estimation of the three-dimensional position of the second frame in the reference frame of the image acquisition system, by aligning a three-dimensional representation of the second frame with the second frame in the acquired image;
- positioning of a three-dimensional representation of the selected frame, the representation of the selected frame being positioned relative to the representation of the second frame by means of a positioning offset between the two frames, said offset being previously established;
- expression of the position of each characteristic point with respect to the three-dimensional representation of the selected frame;
- determination of the parameter(s) from the position of the eyes relative to the three-dimensional representation of the selected frame.
Thus, in both cases, the measurement of one or more parameters is precise because it is performed directly on the frame selected by the individual, thanks to a faithful representation of the selected frame which is virtually superimposed on the worn frame. These measured parameters are particularly useful for adapting lenses to the frame.
In the second case, the second frame serves as a positioning reference for the selected frame. An offset can be introduced because the two frames, which are generally not identical, are positioned differently when they are worn on the face. In particular, the contact points at the nose and ears may be different depending on the shape and / or the size of each frame.
The characteristic point of the eye is preferably a characteristic point of the pupil, a characteristic point of the iris or a characteristic point of the eyeball such as for example its center of rotation.
It should be noted that the expression of the position of each characteristic point is generally carried out in a reference frame common with that of the three-dimensional representation of the selected frame. Preferably, to determine the parameters, the coordinate system used in the calculation is the coordinate system of the three-dimensional representation of the selected frame.
The representation of a frame is generally obtained by modeling techniques well known to those skilled in the art, such as that described in the international patent application published under the number WO 2013/139814.
It should be noted that the quality of the measurement depends on the quality of the three-dimensional representation of the frame in virtual space. Without the alignment of the representation of the frame actually worn with the frame worn in the acquired image, it is very difficult to obtain directly the three-dimensional position of the frame actually worn relative to the image acquisition system, and a reliable and precise measurement cannot be guaranteed.
It should also be emphasized that the determination of the three-dimensional position of the worn frame is carried out by means of the virtual frame which is aligned with the frame worn in the acquired image, and not by a direct computation of the position of the worn frame using a three-dimensional scanner or a depth camera.
In addition, the positioning can be determined on an image, on a depth map or on a point cloud from a scanner. Indeed, in each of these cases, an alignment of a three-dimensional representation of the frame is carried out either in two dimensions, by means of a projection of the three-dimensional representation of the frame onto the acquired image, or directly in three dimensions via the depth map or the point cloud of the scanner. This alignment of the three-dimensional representation of the frame makes it possible in particular to keep the semantics of the pair of glasses and to avoid having to interpret the acquired images.
By way of example, it is currently possible to obtain a three-dimensional representation of the frame in a virtual space with less than 0.2 mm of deviation from the real frame, therefore making it possible to measure the parameters with an accuracy better than 0.5 mm, advantageously better than 0.1 mm.
Furthermore, the detection of the characteristic point(s) can be carried out automatically, or manually by a person skilled in the art such as an optician.
In particular embodiments of the invention, the estimation of the three-dimensional position of the worn frame is carried out by computing the minimum of a distance function between the contour of the projection of the three-dimensional representation of the worn frame and the contour of the worn frame in the acquired image, the three-dimensional representation of the worn frame being able to be articulated and/or deformed so as to be positioned correctly, then by computing the three-dimensional position of this representation, which is estimated to correspond to the real three-dimensional position of the worn frame.
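This contour-distance minimization can be illustrated with a deliberately simplified two-dimensional stand-in, in which a model contour is aligned with an image contour by a rigid 2D motion (the real method optimizes the pose and deformation of the articulated 3D representation under projection; scipy's Nelder-Mead optimizer and all names here are for illustration only):

```python
import numpy as np
from scipy.optimize import minimize

def contour_distance(params, model_pts, image_pts):
    """Sum of squared distances between the model contour, transformed by a
    2D rigid motion (tx, ty, angle), and the nearest image contour points."""
    tx, ty, a = params
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    moved = model_pts @ R.T + np.array([tx, ty])
    d2 = ((moved[:, None, :] - image_pts[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()  # nearest image point for each model point

def align_contour(model_pts, image_pts, x0=(0.0, 0.0, 0.0)):
    """Estimate the rigid motion minimizing the contour distance."""
    res = minimize(contour_distance, x0, args=(model_pts, image_pts),
                   method="Nelder-Mead")
    return res.x
```

The minimum of the distance function gives the pose of the model; in the method described above, that pose is then read back as the three-dimensional position of the worn frame.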
In particular embodiments of the invention, the characteristic point of each pupil is the center of the pupil, which is computed as the center of a circle representing the iris, said circle being positioned and oriented in three dimensions by minimizing a distance function between the projection of the circle on the image and the contour of the pupil in the image.
In particular embodiments of the invention, the position of the center of the eyeball of an eye is calculated:
- in the case where the aiming point of said eye is known, as the point on the line defined by the center of the pupil of said eye and said aiming point, located at a distance from the pupil center equal to the average radius of an eyeball;
- in the case where the aiming point of said eye is unknown, as the center of a sphere whose radius is equal to the average radius of an eyeball and for which the iris represents a section plane.
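The first case can be sketched as follows (the average eyeball radius value used here is an assumption for illustration, not a value given by the invention):

```python
import numpy as np

# Assumed average eyeball radius in mm (illustrative value, not from the patent)
MEAN_EYEBALL_RADIUS_MM = 12.0

def eyeball_centre(pupil_centre, aiming_point, radius=MEAN_EYEBALL_RADIUS_MM):
    """Center of the eyeball when the aiming point is known: the point on the
    line through the pupil center and the aiming point, located one eyeball
    radius behind the pupil (i.e. on the side away from the aiming point)."""
    p = np.asarray(pupil_centre, float)
    a = np.asarray(aiming_point, float)
    direction = (a - p) / np.linalg.norm(a - p)  # unit gaze direction
    return p - radius * direction
```

The construction places the center of rotation of the eye on the gaze axis, which is what allows the gaze direction to be recomputed later for any aiming point.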
In particular embodiments of the invention, the method also comprises a step of determining the positioning offset between the three-dimensional representation of the selected frame and the three-dimensional representation of the second frame worn by the individual, by positioning the two three-dimensional representations on a three-dimensional model representative of a face.
In particular embodiments of the invention, the method also includes a step of developing a three-dimensional model of the face, the three-dimensional model comprising the detected characteristic point(s), and a step of superimposing the virtual frame on the three-dimensional model of the face.
In particular embodiments of the invention, the image acquisition system is stereoscopic.
In other words, in these embodiments, the image acquisition system comprises at least two cameras, offset from one another and similarly oriented, in order to be able to deduce, from two images acquired simultaneously, the distance of an object of the scene relative to the image acquisition system.
Preferably, the acquisition system comprises at least three cameras.
In particular embodiments of the invention, the image acquisition system comprises at least one infrared camera.
This makes it easier for the image acquisition system to acquire an image of the eyes behind sun lenses, which generally at least partially block visible light.
In particular embodiments of the invention, the determined parameter is included in the following list:
- interpupillary distance (PD);
- interpupillary half distance (monoPD);
- pantoscopic angle (PA);
- glass-eye distance (VD);
- height between the bottom of the frame and the center of a pupil (FH);
- height between the bottom of the frame and the lower eyelid (SH);
- effective diameter of the glasses (ED);
- the gaze path on each of the lenses.
The gaze path, also called the progression path, corresponds to the trajectory, on the lens, of the intersection of the direction of gaze of the individual with the lens, between a gaze direction corresponding to far vision and a gaze direction corresponding to near vision.
Knowing precisely the gaze path on each lens makes it possible in particular to adapt the position of the progressive corrections according to the position of the eyes during near vision and during far vision.
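One sample of the gaze path can be obtained as a ray-plane intersection, sketched as follows (the lens is approximated here by a plane given by a point and a normal; names are illustrative):

```python
import numpy as np

def gaze_lens_intersection(eye_centre, gaze_dir, plane_point, plane_normal):
    """Intersection of the gaze ray (from the eyeball center along gaze_dir)
    with the lens plane. Returns None when the ray is parallel to the plane."""
    c = np.asarray(eye_centre, float)
    d = np.asarray(gaze_dir, float)
    n = np.asarray(plane_normal, float)
    denom = d @ n
    if abs(denom) < 1e-12:
        return None  # ray parallel to the lens plane
    t = ((np.asarray(plane_point, float) - c) @ n) / denom
    return c + t * d
```

Sampling this intersection for gaze directions ranging from far vision to near vision traces the progression path on the lens.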
In particular embodiments of the invention, the pantoscopic angle is determined by:
- detecting the two corners of the individual's mouth;
- estimating the 3D position of the midpoint between the two corners of the mouth;
- determining the value of the pantoscopic angle by calculating the angle between the plane formed by the midpoint and the characteristic point of each pupil, and the plane of the lenses assembled in the selected frame.
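These three steps can be sketched as follows (the lens plane is summarized here by its normal; all names are illustrative):

```python
import numpy as np

def pantoscopic_angle(mouth_mid, pupil_left, pupil_right, lens_normal):
    """Pantoscopic angle, in degrees, as the angle between the plane through
    the mouth midpoint and the two pupil centers, and the lens plane."""
    m, pl, pr = (np.asarray(v, float) for v in (mouth_mid, pupil_left, pupil_right))
    # Normal of the plane formed by the mouth midpoint and the two pupils
    face_normal = np.cross(pl - m, pr - m)
    face_normal /= np.linalg.norm(face_normal)
    n = np.asarray(lens_normal, float)
    n = n / np.linalg.norm(n)
    # Angle between two planes = angle between their normals (taken acute)
    cosang = abs(face_normal @ n)
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

With the face plane vertical and the lenses tilted forward by a given angle, the function returns exactly that tilt.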
In particular embodiments of the invention, at least one parameter is also determined as a function of a low point of the frame, the low point lying on a line simultaneously tangent to the two lenses.
In particular embodiments of the invention, the automatic determination method comprises a prior step of modeling the frame worn by the individual.
In particular embodiments of the invention, the automatic determination method comprises a prior step of calibrating the acquisition system.
In particular embodiments of the invention, the automatic determination method also comprises a step of sending a lens order taking into account the previously determined parameter(s).
In particular embodiments of the invention, the automatic determination method also includes a step of adapting a lens of a pair of glasses on the basis of the previously determined parameter(s).
Preferably, the method also comprises a step of machining a lens of a pair of glasses from the previously determined parameter(s).
The invention also relates to a computer program product comprising a series of instructions making it possible to implement the steps of the automatic determination method according to any one of the preceding modes of implementation.
Finally, the invention also relates to a device comprising a screen, a plurality of cameras, a computer processor and a computer memory storing said computer program product.
It should be noted that the cameras of the device are generally oriented towards the individual, who naturally stands in front of the screen displaying his image in real or near-real time.
BRIEF DESCRIPTION OF THE FIGURES
Other advantages, aims and particular characteristics of the present invention will emerge from the non-limiting description which follows of at least one particular embodiment of the devices and methods which are the subject of the present invention, with reference to the appended drawings, in which:
- Figure 1 shows an example of a device for measuring an optical parameter according to the invention;
- Figure 2 is a block diagram of an example of implementation of the automatic determination method according to the invention;
- Figure 3 illustrates the definition of heights in the DATUM and BOXING systems;
- Figure 4 is a three-dimensional representation of a frame and the associated frame;
- Figure 5 illustrates the difference between the BOXING and DATUM systems;
- Figure 6 is a representation of a frame seen from above;
- Figure 7 illustrates in block diagram the main steps of the method according to two modes of implementation of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
This description is given without limitation, each characteristic of an embodiment can be combined with any other characteristic of any other embodiment in an advantageous manner.
It should be noted from the outset that the figures are not to scale.
Example of an embodiment of the invention
FIG. 1 represents a device 100 for measuring optical parameters, used by an individual 110 wearing a frame 120 of a pair of glasses which he has previously selected in the context of purchasing optical equipment.
In the present example, the device 100 comprises a display screen 130 placed substantially in a vertical plane and an acquisition system comprising at least two cameras 140. One of the two cameras 140, called the main camera 140₁, is centered on the vertical median axis 131 of the display screen, while the other camera 140₂ is in the same vertical plane as the main camera 140₁ but shifted horizontally to the left of the screen 130.
The main camera 140₁ films the individual 110 positioned facing the screen 130. The individual 110 thus sees his image displayed on the screen 130.
The second camera 140₂, which is advantageously offset horizontally, makes it possible to obtain, after processing together with the images obtained by the main camera, a three-dimensional representation of the scene, also called a stereoscopic image. Such a stereoscopic image is for example obtained when the horizontal offset is of the order of one centimeter to about ten centimeters.
This three-dimensional representation can be improved by adding other cameras, such as a third camera positioned symmetrically with respect to the median axis 131.
The cameras 140 can be advantageously sensitive to infrared in order both to see the pupils through non-polarized solar glasses and to limit the number of reflections on any type of glass, thus facilitating the identification of the pupils.
On the other hand, the cameras 140 can also include a polarizing filter positioned in front of their objective in order to see the pupils through polarized glasses, in particular solar.
It should be emphasized that the acquisition system is advantageously calibrated, that is to say that the images of the scene provided by the cameras 140 make it possible to obtain a metric representation of the scene.
The device 100 also includes a computer processor 150 processing the instructions of a computer program product implementing the method of automatic determination of at least one optical parameter associated with the frame 120 carried by the individual 110. These instructions are in particular stored in a memory 160 included in the device 100.
FIG. 2 illustrates in the form of a block diagram the method 200 for automatic determination, the steps of which are processed by the computer processor 150 of the device 100.
The method comprises a first step 210 of acquiring a plurality of images of the individual 110 wearing the frame 120. At least one image per camera 140 is acquired during this step 210.
It should be emphasized that the images acquired by each camera 140 are calibrated metrically. The calibration is thus generally carried out before the acquisition of the images, during a step 211.
Furthermore, the selected frame 120 has also been previously modeled, during a step 212, by modeling techniques well known to those skilled in the art, such as that described in the international patent application published under the number WO 2013/139814. A three-dimensional representation of the selected frame, also called a virtual frame, is thus stored in a database connected to the device 100. It should be emphasized that the dimensions of the three-dimensional representation are faithful to the real frame: they are accurate to within less than 1 mm, advantageously less than 0.5 mm, preferably less than 0.2 mm. Furthermore, the virtual representation of the selected frame is extracted from the database either automatically, by detecting and recognizing the frame 120 worn by the individual 110, or manually, by indicating the reference of the frame 120, for example by reading a barcode or by typing on a physical keyboard or on a keyboard displayed on the screen 130.
From the acquired images, the selected frame 120 is detected during a detection step 215, using for example known object detection techniques based on visual descriptors. These detection methods, based on a learning technique, give at least a position and a scale of the object. The training can be carried out on images of frames of pairs of glasses or on images of faces wearing a frame of a pair of glasses. It is also possible to obtain a position and a scale sufficiently precise to proceed to the next step of alignment of the 3D object in the 2D image using only a face detector. In addition, there are also detectors, like the one described in "Deep MANTA: A Coarse-to-fine Many-Task Network for joint 2D and 3D vehicle analysis from monocular image", F. Chabot, M. Chaouch, CVPR 2017, which can provide initial solutions for positioning the sought elements in 3D space.
In a variant of this particular embodiment of the invention, the detection of the frame 120 worn by the individual 110 is carried out by comparing the acquired image with an image of the individual 110 not wearing a frame of a pair of glasses on the face.
From the images acquired in which the selected frame 120 is detected during step 215, an estimate of the three-dimensional position of the frame 120 relative to the main camera 140₁ is made during step 220 of the method 200, using the virtual frame which faithfully represents the real frame in three dimensions. The three-dimensional position of the selected frame 120, that is to say the rotation R_M and the translation T_M of the object in the 3D scene, can be obtained either manually, by registering the three-dimensional representation of the selected frame on the frame actually worn in the acquired image of the individual 110, or automatically, thanks to an algorithm for aligning 3D objects on images, initialized at the eye level of the individual 110. For such an algorithm, a person skilled in the art can for example refer to the document by Yumi Iwashita, Ryo Kurazume and Kenji Hara entitled "Fast Alignment of 3D Geometrical Models and 2D Color Images using 2D Distance Maps". Those skilled in the art can also refer to the document by T. W. Drummond and R. Cipolla entitled "Real-time tracking of complex structures with on-line camera calibration", published in 2002 in Image and Vision Computing.
It should be emphasized that the three-dimensional representation of the frame can be brought to be articulated and / or deformed during this step in order to take account of the actual opening of the branches of the frame as carried by the individual 110.
For this purpose, a minimization equation based on the contour of the frame model is used. Let gl3D_l, l = 1..p, be the 3D points of this model, ngl3D_l their normals, and σ^i the subset of these points whose normal is perpendicular to their viewing direction from camera i. The function to minimize is the following:

min over (Rg, Tg, θ, γ) of  Σ_i Σ_{l ∈ σ^i} ‖ Proj^i( Rg · G(gl3D_l, θ, γ) + Tg ) − bestCont(gl3D_l, ngl3D_l, i, Rg, Tg, θ, γ) ‖²

where Rg and Tg are the rotation and translation matrices positioning the pair of glasses, and where G(gl3D_l, θ, γ) is a deformation function, of parameters θ and γ, controlling the opening of the temples (θ) and their twist (γ) as a function of the position of the point to be deformed. The bestCont() function returns the point in the image with the highest gradient norm along a segment centered on the projection of the 3D point of the frame model into the image, the direction of the segment being given by the projection of the normal of the 3D point. A multi-hypothesis mode, such as that described in "Combining Edge and Texture Information for Real-Time Accurate 3D Camera Tracking", L. Vacchetti, V. Lepetit, P. Fua, Third IEEE and ACM International Symposium on Mixed and Augmented Reality, 2004, makes it possible to keep the points with the strongest gradient values and to select the best one.
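A toy version of this best-contour search can be sketched as follows (the image is reduced to a precomputed array of gradient magnitudes; the pixel-stepping strategy and all names are illustrative):

```python
import numpy as np

def best_contour_point(grad_norm, centre, direction, half_length=10):
    """Walk along a segment centered on the projected model point, in the
    direction of the projected normal, and return the pixel (row, col) with
    the largest gradient norm. grad_norm is a 2D array of gradient magnitudes."""
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    best, best_val = None, -np.inf
    for s in range(-half_length, half_length + 1):
        r, c = np.round(np.asarray(centre, float) + s * direction).astype(int)
        if 0 <= r < grad_norm.shape[0] and 0 <= c < grad_norm.shape[1]:
            if grad_norm[r, c] > best_val:
                best_val, best = grad_norm[r, c], (r, c)
    return best
```

A multi-hypothesis variant would keep the few strongest responses along the segment instead of a single maximum, and defer the choice to the global minimization.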
At the same time, the method 200 comprises a step 230 of estimating the three-dimensional position of the center of the pupils by combining the images acquired simultaneously by at least two cameras 140. This stereovision makes it possible in particular to obtain the 3D position of the eyes without requiring an estimation of the lens-to-eye distance VD.
Indeed, by using the calibration of the image acquisition means of the device 100, which comprises a plurality of cameras 140, and the 2D position of the centers of the pupils in each of the acquired images, the principle of stereovision, based in particular on triangulation using epipolar geometry, makes it possible to obtain the 3D position of the centers of the pupils in the reference frame used for the calibration of the cameras.
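The triangulation principle mentioned above can be sketched with the classical linear (DLT) method; the code below is an illustrative example assuming two calibrated pinhole cameras described by 3x4 projection matrices, not the exact implementation of the patent:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.
    P1, P2: 3x4 projection matrices; x1, x2: 2D pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # homogeneous solution: last right singular vector
    X = Vt[-1]
    return X[:3] / X[3]           # de-homogenize

def project(P, X):
    """Pinhole projection of a 3D point by a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

With the pupil centers detected in each image and the calibration known, applying `triangulate` to each pair of detections yields the 3D pupil positions in the calibration reference frame.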
It should be noted that the 2D position of the center of the pupils has been previously determined either manually by an optician, or automatically using a detector of characteristic points of the face. Automatic pupil detection is for example implemented by the use of a pupil contour search method, the center of a pupil being subsequently determined as being the center of an ellipse representative of the pupil detected.
Thus, the automatic detection of the pupils in the image can for example be carried out by means of an automatic detection of the irises, which are easier to detect. Automatic iris detection can be initialized using a face detector which detects the corners of the eyes or an approximate center of each iris, but it can also be initialized using a glasses detector which determines the eye area. The position of the center of a pupil being assumed to be equal to the center of the iris, the problem is then to find the position, orientation and dimension of a circle in three-dimensional space such that the projection of the circle in each of the images coincides with the iris, which appears as an ellipse in the image. The pair of glasses having been previously positioned, the search area for the iris, surrounded by the frame of the pair of glasses, is relatively small, which limits the probability of a false positive. The parameters to be estimated are the radius of the circle, the position of its center and its orientation. Since the circle is planar and symmetric about its center, two parameters are sufficient to characterize its orientation in three-dimensional space; six parameter values are thus to be estimated. The circle is initialized with an average iris radius and such that its center projects onto the center of each lens in the image. Then, a technique based on a distance map, or on a distance between the contour model and the image contours, makes it possible to precisely find the radius, position and orientation of the circle in 3D space. A sampling of 3D points on this circle is projected into the acquired image. The circle being a planar object in 3D space, its normal at each sampled point is defined by the line passing through that point and the center of the circle.
The normals are considered to be always visible because the circle is a planar object and the irises of the user are always visible. The cost function to minimize is a sum of squared distances between the projections of the points and the result of the bestCont function.
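The sampling of 3D points on the iris circle, parameterized by its radius, its 3D center and two orientation angles (the six values mentioned above), can be sketched as follows; the angle parameterization of the normal is an illustrative choice, not one prescribed by the method:

```python
import numpy as np

def sample_circle_3d(center, radius, theta, phi, n=16):
    """Sample n points on a 3D circle. The orientation is given by two angles
    (theta, phi) defining the unit normal; two parameters suffice because the
    circle is planar and symmetric about its center."""
    normal = np.array([np.sin(theta) * np.cos(phi),
                       np.sin(theta) * np.sin(phi),
                       np.cos(theta)])
    # Build an orthonormal basis (u, v) of the circle's plane
    u = np.cross(normal, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-8:          # normal parallel to x: pick another axis
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = np.array([center + radius * (np.cos(a) * u + np.sin(a) * v)
                    for a in angles])
    # The normal at each sampled point passes through the circle center
    return pts, normal
```

Each sampled point is then projected into the acquired images and compared, via bestCont, with the iris contour.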
In order to simplify the calculations of the eye measurements, the position of each pupil is expressed during step 240 in the reference frame of the virtual frame, from the 3D position of the frame and the 3D position of the centers of the pupils.
The 3D position of the pupil centers in the frame reference is characterized for each eye by:
PG3D = R_M^T (PG3D0 - T_M) and PD3D = R_M^T (PD3D0 - T_M), where PG3D0 and PD3D0 are the 3D coordinates of the respective centers of the left and right pupils in the coordinate system previously determined.
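The change of coordinate system above is a simple rigid-transform inversion; a minimal sketch in Python (hypothetical function name), applying p_frame = R_M^T (p_world - T_M):

```python
import numpy as np

def to_frame_coords(p_world, R_M, T_M):
    """Express a 3D point given in the acquisition reference frame in the
    coordinate system of the spectacle frame: R_M^T (p_world - T_M)."""
    return R_M.T @ (p_world - T_M)
```

Applying it to PG3D0 and PD3D0 with the pose (R_M, T_M) estimated at step 220 yields PG3D and PD3D.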
The method 200 then comprises a step 250 during which the 3D position of the centers of the eyeballs is calculated and expressed in the frame of reference of the frame.
To this end, in the case where the image is acquired while the individual 110 is looking at the central camera 140i, which is considered to be in front of the individual 110, the position of the central camera 140i is then representative of the aiming point of the eyes of the individual 110. Each center of rotation of an eye (PCOG3D and PCOD3D) then lies on the line passing through the center of the corresponding pupil and the aiming point, at a distance from the center of the pupil equal to the average radius of an eyeball minus the thickness of a cornea. It should be noted that the aiming point may be slightly different for each eye of the individual 110.
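The construction of the rotation centers described above can be sketched as follows; the eyeball radius and cornea thickness are illustrative average values in millimeters (assumptions), not values prescribed by the method:

```python
import numpy as np

def eyeball_center(pupil_3d, aim_point, eye_radius=12.0, cornea_depth=3.5):
    """Rotation center of an eye: on the line (aim point, pupil center),
    behind the pupil, at eye_radius - cornea_depth from the pupil center.
    eye_radius and cornea_depth (mm) are illustrative averages."""
    d = pupil_3d - aim_point
    d = d / np.linalg.norm(d)        # unit direction from aim point to pupil
    return pupil_3d + (eye_radius - cornea_depth) * d
```

Applied to each pupil center with the camera position as aiming point, this yields PCOG3D and PCOD3D.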
The determination of the aiming point can be defined according to a protocol during which the individual 110 looks at all or part of the following points:
- the camera located in front of the individual;
- a secondary camera;
- a point displayed on the screen;
- a point on the screen located at the top right;
- a point on the screen located at the bottom left.
It should be noted that, in this protocol, the position of the screen is known with respect to the image acquisition system comprising the cameras.
In alternative embodiments of this step, the individual 110 looks at other aiming points, such as a camera which is not in front of the individual. It is then necessary to add an intermediate step involving the 3D position of the aiming point, which is used to correct:
- the position of the center of rotation of the eyeball (which is on the aiming point / pupil center axis);
- the trajectory of the gaze, by simulating a frontal gaze for the calculation of the optical center carried out subsequently during step 260 of the method 200.
In the case where the image is acquired while the aiming point of the individual is unknown, it is possible to find the position of the center of rotation of each eyeball from the circle calculated for each iris during step 230 of estimating the three-dimensional position of the centers of the pupils. By considering an average radius of the eyeball, it is then possible to determine a center of rotation for each eye by calculating the position of the center of a sphere whose radius is equal to the average radius of the eyeball and for which the iris represents a section plane. It should be noted that, for each eye, the retained center of rotation corresponds to the solution located inside the skull. It is also possible to find the position of the center of rotation of each eyeball by considering, as a simplified model of an eye, a circle representing the iris located at a given distance, such as for example the average value of the radius of a standard eyeball (or one adapted to the deformation relative to the correction), minus the thickness of a cornea (i.e. the distance between the apex of the cornea and the iris). The position of the center of rotation is then determined by solving for the position of this simplified model on the acquired images, using a method described for example in the international application published under number WO 2013/045531.
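The sphere-section construction used when the aiming point is unknown can be sketched as follows; average radii are illustrative assumptions, and the candidate located inside the skull must be selected among the two returned solutions:

```python
import numpy as np

def eyeball_center_from_iris(iris_center, iris_normal, iris_radius=6.0,
                             eye_radius=12.0):
    """The iris circle is a planar section of a sphere of radius eye_radius;
    the sphere center lies along the iris normal at sqrt(R^2 - r^2) from the
    iris center. Returns both candidates; keep the one inside the skull."""
    n = iris_normal / np.linalg.norm(iris_normal)
    d = np.sqrt(eye_radius**2 - iris_radius**2)   # center-to-plane distance
    return iris_center + d * n, iris_center - d * n
```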
During step 260, the optical centers of the lenses (PCOVG3D and PCOVD3D, expressed in the frame of reference) are given by the intersection between each of the lenses and the straight line passing through the center of rotation of the corresponding eye and its point of aim.
The effective diameter (ED) can then be obtained by multiplying by two the distance between the optical center of a lens (PCOVG3D or PCOVD3D) and the most distant 3D point belonging to the group of points making up the same lens.
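The computation of an optical center and of the effective diameter described above can be sketched as follows, approximating the lens by its mean plane (an assumption made for the sketch; the method intersects the gaze line with the actual lens):

```python
import numpy as np

def optical_center(eye_center, aim_point, lens_point, lens_normal):
    """Intersect the gaze line (eye rotation center -> aiming point) with the
    lens, approximated here by a plane given by a point and a normal."""
    d = aim_point - eye_center
    t = np.dot(lens_point - eye_center, lens_normal) / np.dot(d, lens_normal)
    return eye_center + t * d

def effective_diameter(oc, lens_points):
    """ED = twice the distance from the optical center to the most distant
    3D point of the lens group."""
    return 2.0 * max(np.linalg.norm(p - oc) for p in lens_points)
```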
At the same time, the method 200 comprises a step 270 during which the 3D position of the low points of the pupils and of each circle surrounding the lenses of the frame is calculated in the frame of reference of the frame as a function of the 3D position of the frame and the center of the pupils.
The calculation of the positions carried out during step 270 is based on a method similar to that of the calculation of the 3D position of the centers of the pupils made during step 230. It is thus possible to estimate:
- the 3D position of the lowest point on each pupil (if the pupil is covered by the lower eyelid, this will be the lowest visible point, on the pupil/eyelid border). The detection of these points in the acquired images can be automated by training a detector on the pupils or by relying on the image contours at the level of the pupil, which must form a circle.
- the 3D position of the “bottom of the frame” points at the level of each lens of the frame. These points are, for the BOXING assembly system, the lowest point of the group of points making up each 3D lens and, for the DATUM system, the lowest point of that group located vertically below each optical center. These points are used to calculate the heights PH1 in the case of the DATUM system or PH2 in the case of the BOXING system, as illustrated in Figure 3.
It should be emphasized that the detection of these points in the BOXING system does not necessarily have to be carried out in the acquired images because these points can be identified beforehand on the 3D modeling of the frame.
The method 200 finally includes a step 280 of calculating the ocular measurements. The glass-eye distances (left and right) are the Euclidean distances between the iris centers and their projections on each of the lenses:
DVOG = ‖PG3D - PCOVG3D‖₂ and DVOD = ‖PD3D - PCOVD3D‖₂
The pupillary distance (PD) is given by:
PD = ‖PCOG3D - PCOD3D‖₂
From the coordinates of the centers of the pupils of the individual 110 determined during step 230, let (xPG, yPG, zPG) be the 3D coordinates of the center of the left pupil in the frame's reference frame and (xPD, yPD, zPD) those of the center of the right pupil.
It should be emphasized, as illustrated in FIG. 4, that the center of the reference frame of the mount 410 is located at the center of the bridge 420 of the mount 410 and corresponds to the origin of the x-axis 430, defined as the axis passing through the two lenses 440.
Half pupillary deviations (monoPD) are given by:
monoPDG = -xPG and monoPDD = xPD
From the 3D coordinates of the bottom of the pupils and the bottom of the frames determined in the frame of reference during step 270, let (xPdownG, yPdownG, zPdownG) be the 3D coordinates of the bottom of the left pupil, (xPdownD, yPdownD, zPdownD) the 3D coordinates of the bottom of the right pupil, (xMG, yMG, zMG) the 3D coordinates of the bottom of the frame at the left lens and (xMD, yMD, zMD) the 3D coordinates of the bottom of the frame at the right lens.
The heights (FH) are given by:
FHG = zMG - zPG + DrageoirOffset
FHD = zMD - zPD + DrageoirOffset
The heights of the segments (SH) are given by:
SHG = zMG - zPdownG + DrageoirOffset and SHD = zMD - zPdownD + DrageoirOffset, where DrageoirOffset represents the depth of the bezel (drageoir) of the frame chosen by the patient. This value is entered according to the type of frame material or directly retrieved from the 3D modeling of the frame.
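The ocular measurements of step 280 reduce to a few norms and coordinate differences once all points are expressed in the frame's coordinate system; a minimal sketch (hypothetical function names, with the axis conventions of Figure 4: x through the two lenses, origin at the bridge center):

```python
import numpy as np

def eye_measurements(PG3D, PD3D, PCOG3D, PCOD3D, PCOVG3D, PCOVD3D):
    """Ocular measurements from points expressed in the frame's coordinate
    system (x axis through the two lenses, origin at the bridge center)."""
    return {
        "DVOG": np.linalg.norm(PG3D - PCOVG3D),  # left glass-eye distance
        "DVOD": np.linalg.norm(PD3D - PCOVD3D),  # right glass-eye distance
        "PD": np.linalg.norm(PCOG3D - PCOD3D),   # pupillary distance
        "monoPDG": -PG3D[0],                     # left half pupillary distance
        "monoPDD": PD3D[0],                      # right half pupillary distance
    }

def heights(zM, zP, zPdown, drageoir_offset=0.0):
    """FH and SH for one lens: bottom-of-frame z minus pupil (resp. pupil
    bottom) z, plus the bezel (drageoir) depth offset."""
    return {"FH": zM - zP + drageoir_offset,
            "SH": zM - zPdown + drageoir_offset}
```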
Finally, the pantoscopic angle is directly given by the angle formed by the plane of the lenses of the frame and the vertical plane in the acquired image of the individual 110, provided the individual 110 adopts a natural posture for far vision, that is to say a horizontal gaze direction, oriented towards the horizon.
In the case where the acquisition is carried out without the individual being able to adopt a natural posture for far vision (as is for example the case when the individual 110 looks at a point placed too high, which forces him to raise his head), the estimation of the pantoscopic angle can be done in two ways:
1. Detect the corners of the mouth in the images of the individual and estimate the 3D position of the midpoint of these two corners; consider the plane passing through this estimated 3D point and the circles of the pupils; estimate the pantoscopic angle by measuring the angle formed by this plane and the plane of the lenses of the selected frame.
2. Use the pantoscopic angle measurable when scanning the selected frame, by placing it on a mannequin or by positioning the three-dimensional representation of the selected frame on an average virtual avatar representing an average head of an individual.
It should be noted that the mannequin can be selected from several mannequins according to the morphology of the individual, in particular the one with the morphology closest to that of the individual.
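Under the convention that the z axis is vertical, the pantoscopic angle defined above (angle between the lens plane and the vertical plane) can be sketched from the lens-plane normal; this is an illustrative formulation, not the exact computation of the patent:

```python
import numpy as np

def pantoscopic_angle_deg(lens_normal):
    """Angle (degrees) between the lens plane and the vertical plane,
    computed from the lens-plane normal; z is assumed vertical."""
    n = lens_normal / np.linalg.norm(lens_normal)
    # A vertical lens plane has a horizontal normal (zero z component)
    return np.degrees(np.arcsin(abs(n[2])))
```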
In a variant implementation of this particular mode of implementation of the invention, the pair of glasses selected by the individual is not physically accessible when using the device for measuring optical parameters. In this case, given that a three-dimensional representation of the frame of the pair of glasses selected by the individual is stored in the database, the individual uses the measuring device while wearing a second pair of glasses, which is available to the individual and which serves as a reference for determining the parameter(s). It should be emphasized that a faithful three-dimensional representation of the frame of the second pair of glasses is also stored in the database accessible to the measuring device.
In this variant, the method for automatically determining at least one parameter, implemented by the measuring device, makes it possible to obtain the 3D position of the pair of glasses worn and of the centers of rotation of the eyes (PCOG3D and PCOD3D) in the acquired image as well as the PD and monoPD values which are not dependent on the pair of glasses worn.
So that the other parameters, in particular FH, SH, PA, VD and ED, are linked to the selected pair of glasses and not to the pair of glasses worn, the determination method also comprises a step of determining a positioning offset between the frame of the pair of glasses worn and the frame of the pair of glasses selected. The offset is determined by superimposing the three-dimensional representations of the two frames on a three-dimensional model of a face. It should be noted that the three-dimensional model of a face can be representative of the face of the individual. This three-dimensional model can be developed from images of the face of the individual or selected as a virtual avatar based on the typical morphology of the individual. This selection is made for example via the value of the PD alone, or via a configuration value depending on the PD and on the minimum distance between the branch-spacing parameters obtained on the user and the values obtained on the reference mannequins. This configuration value can for example be obtained on real mannequins by measuring the 3D alignment on images, or by physical simulation of the 3D pair on virtual mannequins (cf. for example the international patent application published under number WO 2016/135078).
Once the positioning offset along the three axes is determined, the three-dimensional representation of the selected frame is positioned in the reference frame of the image acquisition system relative to the three-dimensional representation of the frame worn by the individual, by means of the positioning offset between the two frames. Thus, the position of each pupil can be expressed in the coordinate system of the three-dimensional representation of the selected frame, which makes it possible to determine the values of the parameters FH, SH, PA, VD and ED from the relative position of the eyes with respect to the three-dimensional representation of the selected frame.
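Positioning the selected frame's representation via the offset amounts to composing rigid transforms; a minimal sketch, assuming the offset is itself stored as a rotation and a translation (an assumption on its representation):

```python
import numpy as np

def selected_frame_pose(R_worn, T_worn, R_off, T_off):
    """Pose of the selected frame in the acquisition reference frame, obtained
    by composing the worn frame's pose (R_worn, T_worn) with the precomputed
    worn-to-selected offset (R_off, T_off):
    p_world = R_worn @ (R_off @ p + T_off) + T_worn."""
    return R_worn @ R_off, R_worn @ T_off + T_worn
```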
In summary, FIG. 7 illustrates in the form of a block diagram the main steps of the method 700 of automatic determination, implemented either from an image 710 of the individual wearing the selected frame on the face, or from an image 720 of the individual wearing the second frame on the face.
In both cases, the method 700 comprises the steps of:
730, detection of at least one characteristic point of at least one eye of the individual in the acquired image and estimation, in the reference frame of the image acquisition system, of the three-dimensional position of the detected characteristic point(s);
740, detection of the worn frame and estimation of the three-dimensional position of the worn frame in the reference frame of the image acquisition system by aligning the three-dimensional representation of the worn frame with the worn frame in the acquired image.
In the case where the frame worn is the frame of the second pair of glasses, the method 700 comprises an additional step 750 of positioning the three-dimensional representation of the selected frame, the representation of the selected frame being positioned relative to the representation of the worn frame by means of the positioning offset between the two frames.
Finally, in both cases, the position of each characteristic point is expressed, during step 760, relative to the three-dimensional representation of the selected frame, preferably in the reference frame of that representation.
At least one parameter associated with the selected frame is then determined from the relative position of the eyes with respect to the three-dimensional representation of the frame selected during step 770.
Advantages of the invention
A first advantage of the method for automatically determining an optical parameter is the accuracy of the measurement. Indeed, no manual intervention is necessary to calculate the ocular measurements on a large number of images. It is thus possible to have high confidence in the measurements made by the automatic determination method.
Unlike existing systems, whose measurement accuracy depends on the positioning and orientation of the patient's head, the method which is the subject of the present invention makes it possible to reconstruct the "eyes + glasses" system in 3D. This has the advantage of retrieving the ocular measurements in the acquisition position of the patient's head, which can be his natural position; if this is not the case, a correction of the measurements can be carried out by simulating the 3D "eyes + glasses" system of the user with a different gaze and a different head orientation.
Thanks to the 3D reconstruction of the patient's "eyes + glasses" system, it is possible to:
- calculate the ocular measurements in far vision then estimate these ocular measurements in near vision by carrying out a simulation of the system with a closer point of sight;
- calculate the ocular measurements in near vision then estimate these ocular measurements in far vision by carrying out a simulation of the system with a more distant point of sight;
- calculate the gaze path on each lens by performing a simulation of the system with a set of near and far sighting points;
- measure strabismus of an eye from several images of the individual looking at a point at the reading distance (about 20 cm), a point at infinity and a point at an intermediate distance.
Other optional features of the invention
In alternative embodiments of the invention, the measurement of the interpupillary distance (PD) can be carried out by means of at least two images of the individual not wearing a pair of glasses on the face, the images being acquired simultaneously or almost simultaneously by two distinct cameras of the device. It should be remembered that the images acquired by the cameras are calibrated.
Claims (15)
1. Method for automatically determining at least one parameter associated with an ophthalmic device selected by an individual, said device comprising a frame called the selected frame, said determination being carried out from an image of the face of the individual wearing a frame of an ophthalmic device, called the worn frame, said worn frame being the selected frame or a frame of a second ophthalmic device, said image being acquired by an image acquisition system, characterized in that the method comprises steps of:
- detection of at least one characteristic point of at least one eye of the individual in the acquired image and estimation, in the reference frame of the image acquisition system, of the three-dimensional position of the detected characteristic point(s);
- detection of the worn frame and estimation of the three-dimensional position of the worn frame in the reference frame of the image acquisition system by aligning a three-dimensional representation of the worn frame with the worn frame in the acquired image;
- in the case where the worn frame is the frame of the second ophthalmic device, positioning, in the reference frame of the image acquisition system, of a three-dimensional representation of the selected frame, the representation of the selected frame being positioned relative to the representation of the worn frame by means of a positioning offset between the two frames, said offset being previously established;
- expression of the position of each characteristic point with respect to the three-dimensional representation of the selected frame;
- determination of the parameter(s) from the relative position of the eyes with respect to the three-dimensional representation of the selected frame.
2. Method according to claim 1, characterized in that the estimation of the three-dimensional position of the worn frame is carried out by calculating the minimum of a distance function between the contour of the projection of the three-dimensional representation of the worn frame and the contour of the worn frame in the acquired image, it being possible for the three-dimensional representation of the worn frame to be articulated and/or deformed.
3. Method according to any one of claims 1 to 2, characterized in that the characteristic point of each pupil is the center of the pupil which is calculated as the center of a circle representing the iris, said circle being positioned and oriented in three dimensions by calculating the minimum of a distance function between the projection of the circle on the image and the outline of the pupil on the image.
4. Method according to claim 3, characterized in that the position of the center of the eyeball of an eye is calculated:
- in the case where the aiming point of said eye is known, as the point on the line defined by the center of the pupil of said eye and said aiming point, located at a distance from the center of the pupil equal to the average radius of an eyeball;
- in the case where the aiming point of said eye is unknown, as the center of a sphere whose radius is equal to the average radius of the eyeball and for which the iris represents a section plane.
5. Method according to any one of claims 1 to 4, characterized in that it also comprises a step of determining the positioning offset between the three-dimensional representation of the selected frame and the three-dimensional representation of the second frame worn by the individual, by positioning the two three-dimensional representations on a three-dimensional model representative of a face.
6. Method according to any one of claims 1 to 5, characterized in that the image acquisition system is stereoscopic.
7. Method according to any one of claims 1 to 6, characterized in that the image acquisition system comprises at least one infrared camera.
8. Method according to any one of claims 1 to 7, characterized in that the determined parameter is included in the following list:
interpupillary distance (PD);
interpupillary half distance (monoPD);
pantoscopic angle (PA);
lens-to-eye distance (DV);
height between the bottom of the frame and the center of a pupil (FH);
height between the bottom of the frame and the lower eyelid (SH);
effective diameter of glasses (ED);
path of gaze, also called path of progression.
9. Method according to claim 8, characterized in that the determination of the pantoscopic angle is carried out by:
- detecting the two corners of the individual's mouth;
- estimating the 3D position of the midpoint between the two corners of the mouth;
- determining the value of the pantoscopic angle by calculating the angle between the plane formed by the midpoint and the characteristic point of each pupil, and the plane of the lenses assembled in the selected frame.
10. Method according to any one of claims 1 to 6, characterized in that at least one parameter is also determined as a function of a low point of the frame, the low point being included on a straight line simultaneously tangent to the two lenses.
11. Method according to any one of claims 1 to 10, characterized in that it previously comprises a step of calibrating the acquisition system.
12. Method according to any one of claims 1 to 11, characterized in that it also comprises a step of sending a lens order taking into account the previously determined parameter(s).
13. Method according to any one of claims 1 to 12, characterized in that it also comprises a step of machining a lens of a pair of glasses from the previously determined parameter(s).
14. A computer program product comprising a series of program code instructions for the execution of the steps of the automatic determination method according to any one of claims 1 to 13, when said program is executed on a computer.
15. Device comprising a screen, a plurality of cameras, a computer processor and a computer memory storing the computer program product according to claim 14.
Similar technologies:
Publication No. | Publication Date | Title
FR3069687A1|2019-02-01|METHOD FOR DETERMINING AT LEAST ONE PARAMETER ASSOCIATED WITH AN OPHTHALMIC DEVICE
FR3053509B1|2019-08-16|METHOD FOR OCCULATING AN OBJECT IN AN IMAGE OR A VIDEO AND ASSOCIATED AUGMENTED REALITY METHOD
EP2547248B1|2017-05-10|Method for measuring an interpupillary distance
CA2682239C|2017-04-25|Method for measuring the position of a remarkable point of the eye of a subject along the horizontal direction of the sagittal plane
EP2822451B1|2021-06-02|Method for determining at least one head posture characteristic of a person wearing spectacles
EP2251734B1|2014-10-15|Method and system for on-line selection of a virtual glasses frame
WO2013045531A1|2013-04-04|Method for determining ocular and optical measurements
EP2486444B1|2021-06-02|Measurement method and equipment for the customization and mounting of corrective opthalmic lenses
EP2999393B1|2021-09-08|Method for determining ocular measurements using a consumer sensor
FR2915291A1|2008-10-24|METHOD FOR MEASURING AT LEAST ONE GEOMETRIC-PHYSIOMICAL PARAMETER FOR IMPLANTATION OF A FRAME OF VISUAL CORRECTION GLASSES ON THE FACE OF A BEARER
EP2901209B1|2019-08-14|Method for helping determine the vision parameters of a subject
EP2822450A1|2015-01-14|Method for estimating a distance separating a pair of glasses and an eye of the wearer of the pair of glasses
FR2961591A1|2011-12-23|METHOD OF ESTIMATING THE POSTURE OF A SUBJECT
WO2018002533A1|2018-01-04|Method for concealing an object in an image or a video and associated augmented reality method
WO2020064763A1|2020-04-02|Automatic establishment of parameters necessary for constructing spectacles
FR3097423A1|2020-12-25|METHOD OF MEASURING THE CONVERGENCE OF THE EYES OF A PATIENT
Family patents:
Publication No. | Publication Date
US20200211218A1|2020-07-02|
EP3659109A1|2020-06-03|
CN111031893A|2020-04-17|
WO2019020521A1|2019-01-31|
US11158082B2|2021-10-26|
FR3069687B1|2021-08-06|
Cited documents:
Publication No. | Filing Date | Publication Date | Applicant | Title
US20090021693A1|2005-01-26|2009-01-22|Rodenstock Gmbh|Device and Method for Determining Optical Parameters|
US20140253875A1|2011-09-29|2014-09-11|Fittingbox|Method for determining ocular and optical measurements|
EP2772795A1|2013-02-28|2014-09-03|Shamir Optical Industry Ltd|Method, system and device for improving optical measurement of ophthalmic spectacles|
GB0713461D0|2007-07-11|2007-08-22|Ct Meter Ltd|Device and methods for obtaining measurements for spectacles fitting|
EP2637135A1|2012-03-08|2013-09-11|Essilor International |Method for ordering a spectacle lens and associated system|
EP3401879B1|2012-03-19|2021-02-17|Fittingbox|Method for modelling a three-dimensional object from two-dimensional images of the object taken from different angles|
KR102207026B1|2013-08-22|2021-01-22|비스포크, 인코포레이티드|Method and system to create custom products|
CN106030382B|2014-02-18|2021-01-19|依视路国际公司|Method for optimizing an optical lens apparatus of a wearer|
EP3218765A1|2014-11-14|2017-09-20|Essilor International |Devices and methods for determining the position of a characterizing point of an eye and for tracking the direction of the gaze of a wearer of spectacles|
CN107408315B|2015-02-23|2021-12-07|Fittingbox公司|Process and method for real-time, physically accurate and realistic eyewear try-on|
KR101697286B1|2015-11-09|2017-01-18|경북대학교 산학협력단|Apparatus and method for providing augmented reality for user styling|
JP6617662B2|2016-08-24|2019-12-11|株式会社Jvcケンウッド|Gaze detection device, gaze detection method, and computer program|
US9990780B2|2016-10-03|2018-06-05|Ditto Technologies, Inc.|Using computed facial feature points to position a product model relative to a model of a face|
US10222634B2|2017-07-07|2019-03-05|Optinno B.V.|Optical measurement aid device|CA2901477A1|2015-08-25|2017-02-25|Evolution Optiks Limited|Vision correction system, method and graphical user interface for implementation on electronic devices having a graphical display|
AU2017274570B2|2016-06-01|2022-02-03|Vidi Pty Ltd|An optical measuring and scanning system and methods of use|
EP3355214A1|2017-01-27|2018-08-01|Carl Zeiss Vision International GmbH|Method, computing device and computer program for spectacle frame design|
US10685457B2|2018-11-15|2020-06-16|Vision Service Plan|Systems and methods for visualizing eyewear on a user|
EP3944004A1|2020-07-23|2022-01-26|Carl Zeiss Vision International GmbH|Computer-implemented method for generating data for producing at least one spectacle lens and method for manufacturing spectacles|
Legal status:
2019-02-01| PLSC| Publication of the preliminary search report|Effective date: 20190201 |
2019-07-29| PLFP| Fee payment|Year of fee payment: 3 |
2020-07-31| PLFP| Fee payment|Year of fee payment: 4 |
2021-07-09| CA| Change of address|Effective date: 20210531 |
2021-07-30| PLFP| Fee payment|Year of fee payment: 5 |
2021-10-29| TQ| Partial transmission of property|Owner name: ESSILOR INTERNATIONAL, FR Effective date: 20210920 Owner name: FITTINGBOX, FR Effective date: 20210920 |
Priority applications:
Application No. | Filing Date | Title
FR1757070A|FR3069687B1|2017-07-25|2017-07-25|PROCESS FOR DETERMINING AT LEAST ONE PARAMETER ASSOCIATED WITH AN OPHTHALMIC DEVICE|
FR1757070|2017-07-25|FR1757070A| FR3069687B1|2017-07-25|2017-07-25|PROCESS FOR DETERMINING AT LEAST ONE PARAMETER ASSOCIATED WITH AN OPHTHALMIC DEVICE|
CN201880056473.XA| CN111031893A|2017-07-25|2018-07-20|Method for determining at least one parameter associated with an ophthalmic device|
EP18740255.7A| EP3659109A1|2017-07-25|2018-07-20|Method for determining at least one parameter associated with an ophthalmic device|
PCT/EP2018/069792| WO2019020521A1|2017-07-25|2018-07-20|Method for determining at least one parameter associated with an ophthalmic device|
US16/633,721| US11158082B2|2017-07-25|2018-07-20|Method for determining at least one parameter associated with an ophthalmic device|