OPTICAL IMAGING DEVICE
Patent abstract:
An optical imaging device (100) comprising: a matrix of substantially point light sources (104); a matrix of photo-detectors (106) interleaved with the matrix of light sources; an optical medium with a thickness greater than the pitch of the photo-detectors and forming capture and detection surfaces; and means for successively switching on and off each of a portion of the light sources and reading a portion of the photo-detectors receiving rays coming from the lit source and having undergone a reflection against the capture surface while forming an angle of incidence whose value is estimated as a function of the optical medium and of the element to be imaged.
Publication number: FR3064112A1
Application number: FR1752160
Filing date: 2017-03-16
Publication date: 2018-09-21
Inventors: Jean-Francois Mainguet; Francois Templier
Applicants: Commissariat a l'Energie Atomique (CEA); Safran SA; Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA)
Patent description:
Holder(s): COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES (public establishment), SAFRAN (société anonyme). Agent(s): BREVALEX (limited liability company). FR 3 064 112 - A1.
OPTICAL IMAGING DEVICE
DESCRIPTION
TECHNICAL FIELD AND PRIOR ART
The invention relates to the field of optical imaging devices, and in particular to that of fingerprint sensors, capable of capturing an image by optical means, that is to say by performing light detection. In a fingerprint sensor capturing an image by optical means, it is known to use a light source of the OLED (organic light-emitting diode) type produced on a transparent substrate, and a photo-detector such as a diode to read the signal. Given the resolution required for such a sensor, generally at least 500 ppi (pixels per inch), it is difficult to integrate into a single pixel the photo-detector (for example a PIN diode) and a light source (for example an organic light-emitting diode) together with their control transistors. There are fingerprint sensors in which the capture of a fingerprint image is carried out in a so-called "near field" configuration, in which the finger whose fingerprint is imaged must be at a short distance (of the order of the resolution sought, i.e. approximately 50 μm for a sensor with a resolution equal to 500 dpi) from the measurement means of the sensor. Document US 2015/0331508 A1 describes a device which can form such a sensor. This need to have the finger positioned at a very short distance from the measuring means of the sensor is, however, an important drawback, since it is then not possible to interpose between the sensor and the finger whose print is imaged a protective element, for example a glass plate, of sufficient thickness to ensure good protection of the sensor electronics. To avoid this proximity between the finger and the sensor, it is possible to interpose between them optical means which guide the light between the finger and the sensor, such as optical fibers or lenses as described in document US 6,259,108 B1. This is, however, very restrictive, both technologically and economically.
STATEMENT OF THE INVENTION
An object of the present invention is to propose an imaging device capable of carrying out optical detection of an element to be imaged in a configuration which is not in the near field, that is to say which is compatible with the presence of a thick protective element between the element to be imaged and the electronic elements of the imaging device (photo-detectors and light sources), and which does not require optical guidance means between the element to be imaged and the photo-detectors of the imaging device.
For this, the present invention provides an optical imaging device comprising at least: - a matrix of substantially point light sources, produced on a substrate and forming a display screen; - a matrix of photo-detectors produced on the substrate and interleaved with the matrix of light sources; - an optical medium of thickness greater than the pitch of the photo-detectors of the photo-detector matrix, covering the light sources and the photo-detectors, through which a light intended to be emitted by the light sources is capable of propagating, and forming a capture surface against which an element to be imaged is intended to be placed and a detection surface substantially parallel to the capture surface and against which the matrices of light sources and of photo-detectors are arranged; - control and reading means able to switch on and off successively each of a part of the light sources and, during the lighting of each of said part of the light sources, to read at least part of the photo-detectors intended to receive rays of light coming from the lit light source and having undergone at least one reflection against the capture surface while forming, with the capture surface, an angle of incidence whose value lies between, or is equal to, that of a first limiting angle of refraction defined by the optical index of the optical medium and the optical index of the atmosphere in which the element to be imaged is located, and that of a second limiting angle of refraction defined by the optical index of the optical medium and an additional optical index of value greater than that of the optical index of the atmosphere in which the element to be imaged is located and less than or equal to that of the optical index of the element to be imaged. The pitch of the photo-detectors corresponds to the distance separating the centers of the photosensitive portions of two neighboring photo-detectors arranged on the same row or the same column of the photo-detector matrix. This pitch is for example equal to approximately 50 μm. In this case, the optical medium can have a thickness greater than 50 μm. When the matrix of photo-detectors is not produced with a constant pitch, for example when the distance separating the centers of the photosensitive portions of two neighboring photo-detectors arranged on the same row of the matrix is different from the distance separating the centers of the photosensitive portions of two neighboring photo-detectors arranged on the same column of the matrix, or when this distance differs from one row or from one column to another, or even when this distance varies within the same row or column of the matrix, the pitch considered for defining the minimum thickness of the optical medium is the largest of the different pitches present within the photo-detector matrix. In this imaging device, the light sources emit rays of light substantially isotropically through the optical medium.
For the rays forming, with the capture surface, an angle of incidence comprised between the first and second limiting angles of refraction as defined above, part of these rays are reflected by the interface between the optical medium and the medium external to the device, that is to say at the level of the capture surface, when the external medium reached by these rays corresponds to the external environment, or the atmosphere (air), in which the element is found to be imaged, and other rays pass through this interface when the external medium reached by these other rays corresponds to the element to be imaged (phenomenon of total reflection frustrated). The reflected light rays comprising the information to be detected by the photo-detectors correspond to those delimited by the first and second limiting angles of refraction. By successively switching on, one by one, part of the light sources of the matrix, partial images of the element to be imaged are obtained which are detected by the photo-detectors located in the region illuminated by these reflected rays. A desired final image of the element to be imaged can thus be constructed from these partial images. This imaging device judiciously uses a thick optical medium which is used both for the propagation of light rays from light sources to the photo-detectors (via a reflection on the capture surface), and for the protection of electronic elements (in particular light sources and photo-detectors) of the imaging device. Advantageously, the optical medium can correspond to a glass plate of a display screen. At least a part of the light sources of the pixels of this screen can form the light sources of the imaging device (all the light sources of the screen are not necessarily used to make the different image acquisitions). With this imaging device, it is not necessary to thin the glass plate at the location of the photodetector array because the imaging device works perfectly with, as the optical medium, a glass corresponding to a glass of standard screen protection. In addition, the presence of this thick optical medium does not cause a loss of precision in the captured image because this optical medium is part of the imaging device and does not disturb its operation. In addition, the resolution of the final image obtained from the element to be imaged is greater than the resolution with which the matrix of photo-detectors is produced because an optical magnification is naturally obtained given the path of the light rays at within the optical medium (with a reflection against the capture surface), and this without having a dedicated magnification optic present in the imaging device. For example, when the light sources and the photo-detectors are arranged in the same plane, for example produced on the same substrate, an optical magnification equal to 2 is obtained (for example, with a matrix of photo-detectors of resolution equal to 500 dpi, the image obtained with this imaging device is equivalent to an image that would have been captured with an initial resolution equal to 1000 dpi). An image resolution of the order of 2000 dpi or greater than 2000 dpi can also be easily obtained. The reflection of light rays against the capture surface corresponds to total reflection. In addition, the optical paths of the light rays are quite short. 
Thus, the photo-detectors receive a lot of light, which makes it possible to produce the photo-detectors for example in the form of photodiodes having small dimensions and / or to use reading architectures of type 1T (each diode is controlled by a single transistor) thanks to the best signal provided by the photodetectors. Thus, the photo-detectors require little space for their production, which leaves room for the production of light sources on the same substrate as the photo-detectors. A light source is substantially punctual when the size of the acquisition area of a pixel is greater than at least twice, or even at least three times, and preferably greater than ten times, the size of the emitting surface of the light source. This imaging device is particularly suitable for forming a fingerprint sensor, the element to be imaged corresponding to the skin. In addition, the atmosphere in which the element to be imaged is located may correspond to air. Since the light sources are substantially point-like, the part of the element to be imaged receiving the light rays forming an angle of incidence, with the capture surface, between the first and second limiting angles of refraction has an annular shape. After the reflection against the capture surface, these light rays arrive on the detection surface by illuminating a region of the matrix of photo-detectors also of annular shape but of larger dimensions than those of the part of the element to be imaged which is lit. Such an imaging device provides an architecture allowing images of the element to be imaged of resolution greater than or equal to 500 dpi to be obtained while being compatible with an embodiment in CMOS or OLED technology. In addition, the contrast of the images obtained with this imaging device is high and comparable to that obtained with a prism sensor in total reflection whose size is much larger than that of this sensor. The matrix of light sources is interlaced with that of the photodetectors, that is to say that the light sources are arranged within the matrix of photo-detectors, and conversely the photo-detectors are arranged within the matrix of sources bright. The value of the optical index of the atmosphere in which the element to be imaged is located and that of the additional optical index can be predetermined, that is to say predefined before taking an image capture. The control and reading means can be configured to read, during the ignition of each of said part of the light sources, the photodetectors located in a region of the annular detection surface delimited by the reflected rays of light forming, against the capture surface, the first and second limiting angles of refraction. For practical reasons, the control and reading means can be configured to control, upon the ignition of each of said part of the light sources, the acquisition of said rays by the photo-detectors located in a region of the surface of square or rectangular detection including the region of the annular detection surface. Thus, the reading time, or acquisition time, of the photo-detectors located in such a square or rectangular region is reduced and the subsequent digital processing of the square or rectangular images obtained with such an acquisition is then facilitated. 
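As an illustration of this square read-out window, the following sketch assumes a glass optical medium (index 1.51), air (1.0), a skin index of 1.37, sources and photo-detectors in the same plane (magnification 2) and a 50 μm pitch; all names and numerical values are assumptions for the illustration, not taken from the patent:

```python
import math

def read_window(src_xy, h, pitch, n_medium=1.51, n_air=1.0, n_skin=1.37):
    """Return the inner/outer radii of the annular region 114 around the lit
    source and a square window of detector pixels that bounds this ring."""
    c1 = math.asin(n_air / n_medium)      # first limiting angle of refraction
    c2 = math.asin(n_skin / n_medium)     # second limiting angle of refraction
    r_in = 2 * h * math.tan(c1)           # inner radius of region 114
    r_out = 2 * h * math.tan(c2)          # outer radius of region 114
    half = math.ceil(r_out / pitch)       # half-width of the bounding square, in pixels
    x0, y0 = src_xy
    return (r_in, r_out), (x0 - half, y0 - half, x0 + half, y0 + half)

# Example: 700 μm thick glass and a 50 μm pitch give a half-width of about
# 61 pixels, i.e. a square window of roughly 120 pixels per side.
print(read_window((64, 64), h=700e-6, pitch=50e-6))
```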
The control and reading means can be configured such that the photo-detectors read during the lighting of each of the part of the light sources deliver a partial image of the element to be imaged such that each of the partial images obtained overlaps partially with at least one other partial image of the element to be imaged. In this case, the imaging device may further comprise means for calculating a final image of the element to be imaged from partial images of the element to be imaged. The imaging device may further comprise additional light sources disposed outside the matrix of light sources and substantially aligned with bisectors of the angles formed by the corners of the matrix of light sources. Such additional light sources can in particular be used to acquire the corners of the capture surface which cannot be captured using the light sources of the matrix. In a first embodiment, the imaging device can advantageously be produced in the form of a CMOS electronic chip, that is to say comprising transistors produced in CMOS technology on a semiconductor substrate such as silicon. The light sources can in this case correspond to LEDs, advantageously made from GaN formed beforehand on the semiconductor of the substrate. In a second embodiment, the imaging device can comprise TFT transistors produced for example on a transparent substrate, for example made of glass, and light sources of OLED type. Each pixel of the display screen can include one of the light sources and one of the photo-detectors. As a variant, it is however possible that certain pixels formed by the matrix of photo-detectors and the matrix of light sources do not comprise a light source (and therefore only a photo-detector). It is also possible that certain pixels of the display screen do not include photo-detectors. The light sources can be formed by LEDs and the control and reading means can comprise CMOS transistors, or the light sources can be of OLED type and the control and reading means can comprise TFT transistors arranged in an interlaced manner the matrix of photo-detectors. The realization of the imaging device with LED technology + CMOS transistors achieves very high resolutions. The production of the imaging device with OLED + TFT technology allows, thanks to the optical magnification obtained, the production of pixels of the imaging device with large dimensions at a reduced cost, for example produced with a pitch equal to approximately 100 microns, and / or increase the resolution of the image by keeping pixels of reduced dimensions. Advantageously, when the light sources are of the OLED type and the control and reading means comprise TFT transistors, said TFT transistors can be produced on the substrate which is transparent and which forms the optical medium. The thickness of the optical medium can be greater than about 50 μm, or the optical medium can comprise a glass plate of thickness greater than approximately 100 μm, or greater than or equal to approximately 200 μm, or greater than or equal to approximately 400 μm , or greater than or equal to about 700 pm. The control and reading means can be configured such that the light sources successively switched on form partial images of the element to be imaged, that is to say having dimensions smaller than the total dimensions of the final image, overlapping each partially with at least one other of the partial images of the element to be imaged, the final image then being subsequently constructed from these different partial images. 
In such a configuration, the impact of one or more faulty photo-detectors is therefore reduced, since the overlapping of the partial images makes it possible to obtain, on several of the partial images, redundant information about the element to be imaged. The invention also relates to a method of capturing images implemented with an imaging device comprising at least: - a matrix of substantially point light sources, produced on a substrate and forming a display screen; - a matrix of photo-detectors produced on the substrate and interleaved with the matrix of light sources; - an optical medium of thickness greater than the pitch of the photo-detectors of the photo-detector matrix, covering the light sources and the photo-detectors, through which a light intended to be emitted by the light sources is capable of propagating, and forming a capture surface against which an element to be imaged is intended to be placed and a detection surface substantially parallel to the capture surface and against which the matrices of light sources and of photo-detectors are arranged; the image capture method comprising the implementation of the following steps: a) switching on one of the light sources; b) reading at least part of the photo-detectors receiving rays of light coming from the lit light source and having undergone a reflection against the capture surface while forming, with the capture surface, an angle of incidence whose value lies between, or is equal to, that of a first limiting angle of refraction defined by the optical index of the optical medium and the optical index of the atmosphere in which the element to be imaged is located, and that of a second limiting angle of refraction defined by the optical index of the optical medium and an additional optical index of value greater than that of the optical index of the atmosphere in which the element to be imaged is located and less than or equal to that of the optical index of the element to be imaged; c) switching off the lit light source; steps a) to c) being repeated for each of a part of the light sources. The photo-detectors read during step b) can be those located in an annular region of the detection surface delimited by the rays of reflected light forming, against the capture surface, the first and second limiting angles of refraction. The photo-detectors read during each implementation of step b) can deliver a partial image of the element to be imaged such that each of the partial images obtained partially overlaps with at least one other of the partial images of the element to be imaged. The method may also include the implementation of a step of calculating a final image of the element to be imaged from the partial images of the element to be imaged. Steps a) to c) can also be implemented for additional light sources arranged outside the matrix of light sources and aligned with bisectors of the angles formed by the corners of the matrix of light sources.
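As an illustration of steps a) to c), a minimal Python sketch of the capture loop (hypothetical function names; the patent defines no software interface): each selected source is lit alone, only the photo-detectors expected to receive the reflected ring are read, and the source is switched off before the next one is lit.

```python
def capture_sequence(sources, detectors_for_source, light_on, light_off, read_detector):
    """Hypothetical driver loop for the control and reading means. The callables
    light_on, light_off, read_detector and detectors_for_source stand for the
    hardware access and the pre-computed ring geometry, which the patent leaves
    unspecified."""
    partial_images = []
    for src in sources:                      # only a part of the light sources is used
        light_on(src)                        # step a): switch on one source
        roi = detectors_for_source(src)      # photo-detectors expected to receive the ring
        partial_images.append({px: read_detector(px) for px in roi})   # step b)
        light_off(src)                       # step c): a single source is lit at a time
    return partial_images
```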
BRIEF DESCRIPTION OF THE DRAWINGS The present invention will be better understood on reading the description of exemplary embodiments given for purely indicative and in no way limiting, with reference to the appended drawings in which: - Figure 1 shows a schematic top view of a part of an imaging device, object of the present invention, according to a first embodiment; - Figure 2 shows the phenomenon of total frustrated reflection occurring within the imaging device, object of the present invention; - Figure 3 shows the phenomenon of total frustrated reflection occurring within the imaging device, object of the present invention, according to an alternative embodiment; - Figure 4 schematically represents detection and capture regions obtainable with an imaging device, object of the present invention; - Figures 5 to 8 schematically represent different capture phases implemented during an image acquisition by the imaging device, object of the present invention; - Figure 9 shows a schematic top view of part of another imaging device; - Figure 10 shows a schematic sectional view of an imaging device, object of the present invention, according to a second embodiment; - Figure 11 shows an exemplary embodiment of a stack comprising light sources of the OLED type and photo-detectors of the PIN diode type of an imaging device, object of the present invention; - Figure 12 shows a top view of an exemplary embodiment of an imaging device, object of the present invention. Identical, similar or equivalent parts of the different figures described below have the same reference numerals so as to facilitate the passage from one figure to another. The different parts shown in the figures are not necessarily shown on a uniform scale, to make the figures more readable. The different possibilities (variants and embodiments) must be understood as not being mutually exclusive and can be combined with one another. DETAILED PRESENTATION OF PARTICULAR EMBODIMENTS First of all, reference is made to FIG. 1 which represents a top view of a part of an imaging device 100 according to a first embodiment. In this first embodiment, the device 100 comprises, on the same semiconductor substrate 102 comprising for example silicon and gallium nitride (GaN), a plurality of substantially point light sources 104, corresponding here to a matrix of LEDs, and a matrix of photo-detectors 106, here photodiodes. The arrays of light sources 104 and of photo-detectors 106 are here interlaced, that is to say that the light sources 104 are arranged within the array of photo-detectors 106, and the photo-detectors 106 are also arranged within the matrix of light sources 104. In this first embodiment, the device 100 comprises several pixels, for example forming a matrix of 88 x 88 pixels, each comprising one of the light sources 104 and one of the photodetectors 106. The photo-detectors 106 correspond for example to PN or PIN diodes, or to phototransistors produced in the silicon of the substrate 102. The LEDs 104 are advantageously produced from the GaN transferred onto the silicon of the substrate 102, this GaN being etched with so as to form the light sources 104 that are substantially punctual. As a variant, the photo-detectors 106 can be produced on the substrate 102 after the light sources 104. In addition, the substrate 102 on which the arrays of LEDs 104 and photodiodes 106 are produced is covered with an optical medium 108 (visible in FIG. 
2) which corresponds for example to a glass plate of thickness greater than approximately 100 pm, for example of the order of 400 pm or 700 pm or even 1000 pm, and here bonded to the substrate 102. As a variant, the optical medium 108 may comprise PMMA (whose optical index is equal to approximately 1.49), or any other suitable transparent material. An adhesive, advantageously having the same optical index as that of the optical medium 108 (or very close to it), may be present between the optical medium 108 and the substrate 102 which comprises the LEDs 104 and the photodiodes 106. It is also possible that the optical medium 108 is molded directly on the substrate 102. In general, the optical medium 108 is optically coupled to the substrate 102 such that there is no air between the optical medium 108 and the photodiodes 106 thus between the optical medium 108 and the LEDs 104. The imaging device 100 is intended to be coupled to an electronic display device, for example a smartphone or a computer tablet. Thus, the matrix of LEDs 104 and of photodiodes 106 form an additional device intended to be placed next to the display screen of this device, the optical medium 108 corresponding to the protective glass of this screen. Although not visible in FIG. 1, the imaging device 100 also includes control and reading means able to control the light emission produced by each of the LEDs 104 and to control the detection of the light information received by the photodiodes 106 These control and reading means comprise electronic components produced for example on the substrate. 102. When capturing an image by the imaging device 100, for example when capturing a fingerprint of a finger placed on the optical medium 108, several LEDs 104 are lit successively, a single LED 104 being lit at that time. In FIG. 1, one of the LEDs 104 is on and is designated by the reference 110. This LED 110 illuminates the skin located above the optical medium 108, on an upper surface of the optical medium 108 called the capture surface 109 (visible in Figure 2). A region 112 of annular shape and formed around the LED 110 which corresponds to the center of this region 112, corresponds to the region where the light emitted by the LED 110 undergoes a total reflection or not depending on whether the skin is in contact or not with the capture surface 109. The region 112 is of annular shape because the LED 110 corresponds to a substantially point light source illuminating the optical medium 108 in an isotropic manner. Before reaching the region 112, the rays of light emitted by the LED 110 pass through the optical medium 108 from its rear face, called the detection surface 111, and which covers the LEDs 104 and the photodiodes 106, until reaching its face. before forming the capture surface 109. After having reached the capture surface 109, part of these rays (those which do not meet any skin present above the optical medium 108) are reflected against this surface and pass through the medium again optic 108 for illuminating part of the photodiodes 106 on the detection surface 111. In the example of FIG. 1, the rays of reflected light which will form the image intended to be detected by these photodiodes 106 are included, on the surface detection 111, in region 114 which is also of annular shape and of larger dimensions than those of region 112. FIG. 2 schematically represents what is obtained from an optical point of view within the imaging device 100 when the LED 110 is on. 
The optical characteristics of the frustrated total reflection occurring at the level of the capture surface 109 of the optical medium 108 depend on the optical indices, or refractive indices, of the two media forming the interface considered, that is to say here the optical medium 108 and the external environment. During the operation of the imaging device 100, the value of the optical index of the external environment can be considered to be equal to a fixed value, for example equal to 1. When the optical medium 108 corresponds to glass, its optical index n_108 is equal to approximately 1.51. Then, since the imaging device 100 is, in this first embodiment, intended to carry out the capture of a fingerprint, it is necessary to consider the optical index of the skin, called n_skin, when the skin is in contact with the capture surface 109 (at the level of a ridge of the print), and that of the air, called n_air (at the level of a valley, or hollow, of the print, or next to the skin). Air is considered here (of index equal to 1) but, in general, it is the index of the atmosphere in which the element to be imaged is located that is to be taken into account. The value of the critical angle C, also called the limiting angle of refraction, which is the angle corresponding to a refraction of 90° such that C = arcsin(n1/n2), with n1 < n2 when considering an interface between two optical media of indices n1 and n2, will be different depending on whether the light ray crossing the optical medium 108 arrives on a part of the capture surface 109 in contact with the skin or in contact with air. The value of n_air being equal to 1 and that of n_108 being equal to 1.51, a first limiting angle of refraction obtained between air and glass is therefore C_air/glass = arcsin(1/1.51) = 41°. The value of n_skin being between 1.37 and 1.4 and that of n_108 being equal to 1.51, a second limiting angle of refraction obtained between skin and glass, C_skin/glass, is therefore between arcsin(1.37/1.51) = 65° and arcsin(1.4/1.51) = 68°. For reasons of simplification, only the value of 65° will be considered in the following. In general, the value of the second limiting angle of refraction can be calculated by considering the optical index n_108 of the optical medium 108, for example equal to 1.51 when the optical medium 108 corresponds to glass, and an additional optical index of value greater than that of the optical index of the atmosphere in which the element to be imaged is located and less than or equal to that of the optical index n_skin of the element to be imaged. Thus, referring again to figure 2, when the rays of light emitted by the lit light source 110 form an angle of incidence, with the normal to the capture surface 109, of between 0 and 41°, no or very little light reflection occurs at the interface formed at the surface 109. The rays of light then exit the optical medium 108 and no or very little light arrives on the photodiodes 106 by reflection, whatever the element (air, or skin, which is an absorbing and diffusing medium for light) present on the part of the capture surface 109 receiving these rays. The image formed by the photodiodes 106 located in the corresponding reflection zone is therefore dark due to the very small amount of light arriving in this reflection zone. The light information arriving in this zone, which is not used for the reconstruction of the captured image, can however be used for other applications, for example when the device is able to detect the nature of the imaged element.
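The two limiting angles quoted above follow directly from C = arcsin(n1/n2); a short Python check, using the index values assumed in the text (glass 1.51, air 1.0, skin 1.37 to 1.4):

```python
import math

n_glass, n_air = 1.51, 1.0
c_air = math.degrees(math.asin(n_air / n_glass))        # first limiting angle
print(f"C_air/glass  = {c_air:.0f} deg")                # -> 41 deg
for n_skin in (1.37, 1.40):
    c_skin = math.degrees(math.asin(n_skin / n_glass))  # second limiting angle
    print(f"C_skin/glass = {c_skin:.0f} deg for n_skin = {n_skin}")
# -> 65 deg for n_skin = 1.37 and 68 deg for n_skin = 1.40
```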
When the rays of light form an angle of incidence greater than 65°, total reflection occurs, regardless of the element (air or skin) present on the part of the capture surface 109 receiving these rays. The image formed by the photodiodes 106 located in the corresponding reflection zone is therefore bright, since these photodiodes 106 receive all or almost all of these rays of light (the quantity of light decreasing, however, as the distance between the photo-detectors and the light source increases). Finally, when these rays of light form an angle of incidence between 41° and 65°, if air is present on the capture surface 109, then a phenomenon of total reflection occurs. The rays are reflected at the interface formed by the capture surface 109 and then arrive on the photodiodes 106. The image obtained of the air is therefore bright. If skin is present on the capture surface 109, the light is diffused in the skin, and no or almost no light is reflected towards the photodiodes 106 (the image of the skin given by the photodiodes 106 corresponds to a dark region). In FIG. 2, the reference 113 designates a ray of light forming, with the normal to the capture surface 109, the first limiting angle of refraction equal to 41°, and the reference 115 designates a ray of light forming, with the normal to the capture surface 109, the second limiting angle of refraction equal to 65°. For reasons of readability, the values of the angles shown in FIG. 2 do not quite correspond to these values. The part of the skin imaged when the LED 110 is lit corresponds to that located on the region 112 of the capture surface 109, of annular shape and delimited by the rays 113 and 115. The corresponding image formed on the detection surface 111 corresponds to the region 114 of this surface, which is also of annular shape and delimited by the rays 113 and 115 arriving on the detection surface 111 after a reflection against the capture surface 109. The system is therefore invariant by rotation around the LED 110. In the image obtained from the light captured by the photodiodes 106 located at least in the region 114, the ridges of the print appear dark on a light background which represents the air around the print or the valleys of the print. The size of the image formed on the photodiodes 106, that is to say the dimensions of the region 114, is larger than that of the portion of skin read, which corresponds to the region 112. A magnification factor greater than 1, here equal to 2, is therefore obtained with the imaging device 100. This magnification of the image is obtained without having to use optical means dedicated to this magnification. At the interface between the element to be imaged and the optical medium 108, that is to say at the level of the capture surface 109, the distance between the rays 113 and 115, which corresponds to the difference between the external radius and the internal radius of the ring formed by the region 112, is equal to: R_skin = h · (tan(C_skin/glass) − tan(C_air/glass)) = h · (tan(65°) − tan(41°)). At the interface between the optical medium 108 and the photodiodes 106, that is to say at the level of the detection surface 111, the distance between the first and second rays 113, 115 after the reflection on the capture surface 109, which corresponds to the difference between the external radius and the internal radius (that is to say the width) of the ring formed by the region 114, is equal to: R_image = 2h · (tan(C_skin/glass) − tan(C_air/glass)) = 2h · (tan(65°) − tan(41°)).
The magnification obtained is therefore equal to G = R_image / R_skin = 2. This magnification is equal to 2 because the rays forming the image on the photo-detectors 106 travel a distance equal to twice that traveled by the rays of light between the light source and the finger whose image is captured. The table below gives examples of the dimension values obtained for the annular regions 112 and 114 as a function of the thickness h of the optical medium 108, with the optical indices mentioned above. The width here corresponds to the difference between the external radius and the internal radius of one of the regions 112 and 114.

| | h = 1000 μm | h = 700 μm | h = 400 μm |
|---|---|---|---|
| Width of region 114 | 2550 μm | 1785 μm | 1020 μm |
| Width of region 112 | 1275 μm | 892 μm | 510 μm |
| Outer diameter of region 114 | 8578 μm | 6004 μm | 3431 μm |
| Outer diameter of region 112 | 4289 μm | 3002 μm | 1716 μm |

The size of the image of the element to be imaged obtained on the photo-detectors 106, which corresponds to the detection field obtained when one of the light sources 104 is switched on, is therefore twice as large as that of the skin region 112 corresponding to this image, and is proportional to the thickness h of the optical medium 108. In the first embodiment of the imaging device 100, the light sources 104 and the photo-detectors 106 are produced on the same substrate 102 and are arranged at the same distance h from the capture surface 109, which corresponds to the thickness of the optical medium 108. This distance h is chosen such that it is greater than the pitch of the photo-detector matrix 106, so that the imaging device 100 does not operate in the near field. It is possible that the light sources 104 and the photo-detectors 106 are not placed at the same distance from the capture surface 109. FIG. 3 schematically represents such a configuration. The matrices of light sources 104 and of photo-detectors 106 can then be seen as being interlaced but arranged on different levels. In this FIG. 3, h_pho denotes the thickness of material of the optical medium 108 between the photo-detectors 106 and the capture surface 109, and h_led denotes the thickness of material of the optical medium 108 between the light sources 104 and the capture surface 109. We then have: R_skin = h_led · tan(C_skin/glass) − h_led · tan(C_air/glass) = h_led · (tan(65°) − tan(41°)) and R_image = (h_led + h_pho) · tan(C_skin/glass) − (h_led + h_pho) · tan(C_air/glass) = (h_led + h_pho) · (tan(65°) − tan(41°)). The magnification obtained in such an imaging device 100 is equal to G = 1 + h_pho / h_led. This magnification G therefore depends only on geometric parameters of the imaging device 100 and does not depend on the quality of the material of the optical medium 108 or on the nature of the element to be imaged. This magnification G is here always greater than 1. Thus, the resolution of the image captured by the photo-detectors 106 is greater than that of the matrix of photo-detectors 106 itself, and this without using dedicated magnification optics. The value of the magnification G can be modified, for example, by varying the values of h_pho and h_led. Thus, the value of the magnification G can be brought closer to 1 by increasing, for example, the value of h_led, and the magnification G can be increased by increasing, for example, the value of h_pho. In FIG. 1, which represents a schematic top view of the imaging device 100, the regions 112 and 114 are shown such that they do not overlap.
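The ring dimensions of the table above and the generalized magnification can be checked numerically; a minimal sketch using the rounded limiting angles of 41° and 65° (any small differences with the table come only from rounding):

```python
import math

t41 = math.tan(math.radians(41))   # tan of the first limiting angle
t65 = math.tan(math.radians(65))   # tan of the second limiting angle

for h in (1000, 700, 400):         # thickness of the optical medium, in μm
    width_114 = 2 * h * (t65 - t41)    # width of region 114 (detection surface)
    width_112 = h * (t65 - t41)        # width of region 112 (capture surface)
    outer_114 = 4 * h * t65            # outer diameter of region 114
    outer_112 = 2 * h * t65            # outer diameter of region 112
    print(h, round(width_114), round(width_112), round(outer_114), round(outer_112))

def magnification(h_pho, h_led):
    """General case where sources and detectors are at different distances
    from the capture surface: G = 1 + h_pho / h_led."""
    return 1 + h_pho / h_led

print(magnification(700, 700))     # sources and detectors in the same plane -> G = 2
```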
However, in reality, considering the values of the limiting angles of refraction of 41° and 65°, these regions 112 and 114 partially overlap, that is to say that the value of the outer diameter of the annular region corresponding to the region 112 is greater than that of the inner diameter of the annular region corresponding to the region 114. By successively switching on several light sources 104, several images, each corresponding to a part of the element to be imaged present on the capture surface 109, are captured by the photodiodes 106. The number of light sources 104 to be switched on, and therefore the number of images to capture, is for example chosen according to the dimensions of the capture zone, which corresponds for example to a square whose dimensions can be chosen according to the needs and/or the design of the acquisition system, for example 5 mm x 5 mm or 12.8 mm x 12.8 mm. Because the optical parameters described above relating to the light reflections occurring in the device 100 can be determined in advance (values of R_skin and R_image, locations of the regions 112 and 114, etc.), it is possible to know, for each light source 104 which will be lit, the photo-detectors 106 which will form part of the region 114 associated with that light source 104. It is therefore not necessary, upon the switching on of each light source 104, for all the photo-detectors 106 of the imaging device 100 to carry out an acquisition. Only the photo-detectors 106 included in the region 114, or forming for example a square or a rectangle including the region 114, may carry out this acquisition. It is also possible that the photo-detectors 106 carrying out the acquisition are those included in a region smaller than the region 114; a greater number of acquisitions must in this case be carried out. When a greater number of photo-detectors 106 than those belonging to the region 114 carry out the detection of each of the images, digital processing is carried out after the capture of each image. Thus, the data provided by the photo-detectors 106 located in the center of the image, that is to say those surrounded by the region 114, as well as those located around the region 114, are eliminated. The data coming from the photo-detectors 106 which straddle the inner or outer diameter of the region 114 can also be eliminated. In addition, since the position of the light source used for capturing each of the images is offset, in the plane of the substrate 102, from one image to the next, the common pixels of the different images must also be shifted relative to each other, taking into account the magnification G, during the reconstruction of the final image of the imaged element from the different images, so that the common parts of the different images are correctly superimposed within the final image. It is for example possible to keep the information of each pixel only once, that is to say that the light information of each pixel of the final image comes from a single captured image. As a variant, it is possible to calculate the average or the median of the different captures of each pixel, or even to use more complex calculation algorithms.
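A minimal sketch of this reconstruction step, assuming partial images stored as sparse dictionaries of read pixel values, an integer magnification G = 2 and a final image sampled at G times the photo-detector resolution: a detector pixel d images the skin point s = src + (d − src)/G, which in final-image coordinates becomes the integer shift d + (G − 1)·src. The names and data layout are assumptions for the illustration.

```python
def accumulate_partial(final_sum, final_count, partial, src_px, G=2):
    """Place one partial image in the final mosaic, shifting it according to
    the position of the lit source and the magnification G, and accumulate
    overlapping captures so they can later be averaged."""
    sy, sx = src_px
    for (dy, dx), value in partial.items():          # pixels actually read for this source
        fy, fx = dy + (G - 1) * sy, dx + (G - 1) * sx
        final_sum[(fy, fx)] = final_sum.get((fy, fx), 0.0) + value
        final_count[(fy, fx)] = final_count.get((fy, fx), 0) + 1

def merge(final_sum, final_count):
    """Average the overlapping captures of each final-image pixel."""
    return {px: final_sum[px] / final_count[px] for px in final_sum}
```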
Given that the intensity of the light decreases as the distance from the LED increases, it is advantageous, before carrying out the calculation of the final image, to carry out a digital processing of the different images making it possible to compensate for these differences in light intensity within each image. This compensation can be carried out either from a predefined model, or by carrying out a blank acquisition beforehand with the device 100 (without a finger on the capture surface 109). The different light sources 104 successively lit are chosen such that the regions 112 captured together cover the entire surface of the element to be imaged. By lighting, on one line of the pixel matrix, several light sources 104 (each allowing the acquisition of an image), and doing so for several lines of the pixel matrix, it is possible to cover a major part of this surface of the element to be captured. However, given the shape of the region 112, which corresponds to a ring, it is advantageous to light certain light sources 104 located on the vertical edges of the pixel matrix of the imaging device 100 and/or in the corners of this matrix, in order to lose as little detection area as possible. As a variant, it is also possible that, during the capture carried out for each light source 104 that is turned on, only a portion of the photo-detectors 106 located in the region 114, for example groups of photo-detectors 106 together forming rectangles arranged within the region 114, carry out a detection of the light received. According to another variant, it is for example also possible to read only one or a few photo-detectors 106 located in the region 114 upon the switching on of each light source 104, the number of image acquisitions being in this case greater. The imaging device 100 described above in connection with FIG. 1 may include photodiodes 106 and LEDs 104 produced in CMOS technology. The LEDs 104 can advantageously be produced from GaN transferred onto the silicon substrate 102 comprising the photo-detectors 106 produced in CMOS (as well as the reading electronics, or even the processing electronics). The photo-detectors 106 and the light sources 104 for example form pixels (each pixel comprising a photo-detector 106, a light source 104 such as an LED of dimensions equal to 10 x 10 μm² or 3 x 3 μm², transistors, etc.) of dimensions equal to 50 x 50 μm². In this case, with a magnification G equal to 2, the resolution of the final image obtained is equivalent to 1000 dpi while the resolution of the pixels of the imaging device 100 is 500 dpi. Thus, by considering a square including a detection region 114 comprising 120 x 120 pixels (each of dimensions 50 x 50 μm²), that is 6000 x 6000 μm², and therefore including a ring of 6 mm outside diameter corresponding to the useful region for an optical medium 108 corresponding to a glass of thickness equal to 700 μm, the capture region 112 of the imaging device 100 may correspond to a square of dimensions equal to 3000 x 3000 μm². With the pixel dimensions indicated above and with an optical medium 108 corresponding to glass with a thickness equal to 700 μm, it is possible for example to light, on the same line of pixels, an LED 104 every 8 pixels (in order to have a capture offset corresponding to half the width of the ring formed by the region 112), or even every 9 or 10 pixels, so that the partial images obtained together form the part of the final image corresponding to this line of pixels.
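A minimal sketch of the intensity compensation mentioned above, assuming a previously recorded blank acquisition (no finger on the capture surface) stored pixel by pixel; dividing each reading by the blank reading removes the radial fall-off of the illumination. The names are assumptions; with a predefined model instead of a blank acquisition, blank would simply hold the modelled illumination profile.

```python
def compensate_intensity(partial, blank, eps=1e-6):
    """Normalize a partial image by the blank (finger-free) acquisition made
    with the same lit source: values near 1 then mean fully reflected light,
    values near 0 mean light absorbed or scattered by the skin."""
    return {px: value / max(blank.get(px, 0.0), eps) for px, value in partial.items()}
```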
With such a reading, only one line of pixels out of 50, or 55 or even 60, is sufficient to obtain the set of partial images allowing the reconstruction of the final image including the entire capture surface. Other steps and other sizes of capture / detection region are possible, sufficient overlapping of the images allowing the final image to be obtained. FIG. 4 represents the regions 112 and 114 obtained in the particular configuration described above (magnification G equal to 2, optical medium 108 corresponding to a glass plate of thickness equal to 700 μm). The skin region located on the capture surface 109 and which is intended to be detected here corresponds to a square of dimensions equal to 4.4 × 4.4 mm 2 referenced 116. In FIG. 4, the regions 112 and 114 partially overlap at a region referenced 117. In addition, these regions 112 and 114 shown in Figure 4 correspond to those obtained with a light source 110 on positioned in the upper right corner of the region 116 to be imaged. Finally, FIG. 4 illustrates the fact that the corners of the region 116 cannot be detected by the photo-detectors 106 located in the region 116 because even by switching on the light sources located in these corners, the latter are illuminated by rays forming, with the capture surface 109, an angle of incidence less than the first limiting angle of refraction. In this configuration, it is for example possible first of all to capture the upper and lower bands of the region 116 by successively switching on the light sources 104 located in the pixels forming the upper and lower edges of the region 116. each of these bands, a light source 104 every 8 pixels can for example be lit, or 12 light sources 104 per band. FIG. 5 schematically represents part of the regions 112 imaged during a capture of these upper and lower bands of the region 116. Advantageously, the light sources 104 which are switched on are chosen so that each of the different regions 112 captured has an overlap with one or more other regions 112. The capture of a central strip of region 116 can be carried out by switching on the light sources 104 (for example one every 8 pixels) of a line being substantially in the center of region 116. FIG. 6 represents the parts of the region 116 captured at this stage and which correspond to 36 partial images of the imprint. The references 118, 120 and 122 respectively designate the lower, upper and central bands of the region 116 (with little overlap between the parts 118 and 122, as well as between the parts 120 and 122). In addition, the reference 124 designates the corners of the regions 116 as well as the parts of the edges of the central region 122 which are not captured when the three bands 118, 120 and 122 are read. The portions 124 of the edges of the central region 122 not captured by the steps implemented previously can be captured by lighting light sources 104, for example four in number, located at the edges of the region 122. FIG. 7 represents the regions 112 and 114 obtained by lighting one of these light sources, referenced 110 in FIG. 7, and which makes it possible to partially capture one of these parts 124 not initially captured during the acquisition of the central strip 122. At this acquisition stage, 40 partial images of the imprint were obtained. 
Then, in order to read the corners of the region 116, it is possible to produce the imaging device 100 as it comprises, in addition to the matrix of light sources 104, at least four additional light sources 126 located outside of the matrix formed by the light sources 104, and each arranged along a bisector of the angle of one of the other corners of the region 116. FIG. 8 schematically represents these four LEDs 126 which make it possible to acquire the corners of region 116. With an imaging device 100 having the dimensions described above, each of the additional light sources 126 is for example placed at a distance equal to approximately 600 μm from the corner closest to the matrix. It is also possible that several additional light sources 126 are arranged along each of the bisectors of the angles of the corners of the matrix of light sources 104, at different distances from these corners. Thus, depending on the thickness of the optical medium 108, it is possible to use, for the acquisition of each corner, the additional light source 126 located at the appropriate distance from the matrix to carry out this acquisition. As a variant, in particular when the imaging device 100 is large, it is possible not to read the corners of the region 116. Other methods of reading the region 116 than that described above are possible. For example, it is possible that when each of the light sources 104 is switched on, only part of the region 114 is read by the photo-detectors 106, for example one or more square parts included in the region 114, or even only one or a few photo-detectors 106 included in the region 114. The sequence of switching on the light sources 104 is in this case adapted so as to have a certain overlap between the different captured images. In the examples given above, each of the pixels (which comprises a light source 104 and a photo-detector 106) has a dimension equal to approximately 50 μm × 50 μm 2 . These dimensions can be different: for example each pixel can have dimensions equal to approximately 25 μm x 25 μm, or even equal to 12.5 μm × 12.5 μm. Finally, if information is missing for one or more pixels of the final image, it is possible to implement an interpolation in order to calculate this information. FIG. 12 represents a top view of the imaging device 100, here according to the first embodiment, on which the control and reading means capable of switching on and off successively each of the light sources 104 and, during the switching on of each of the light sources 104, to read the photo-detectors 106 and to control the acquisition by the photo-detectors 106 of a partial image of the element to be imaged are visible. These control means include for example here circuits 136 for addressing light sources 104 and photo-detectors 106. The imaging device 100 also includes means 138 for processing the information captured by the photo-detectors 106. These means 138 are for example those constructing the different partial images obtained then calculating the final image of the captured imprint, from the different partial images. Finally, the imaging device 100 also includes a control element 140 connected to the circuits 136 and to the means 138 and controlling these elements. FIG. 9 represents an imaging device 200 according to another exemplary embodiment. 
Unlike the imaging device 100 according to the first embodiment, in which the matrices of photo-detectors 106 and light sources 104 are interlaced with one another, the matrix of photo-detectors 106 of the imaging device 200 is arranged next to the matrix of light sources 104, in a sensor part 103 of the device 200 distinct from a display part 101 of the device 200. Typically, the display part 101 can correspond to a screen of an electronic device, for example a smartphone, at the edge of which the sensor part 103 is arranged. The sensor part 103 may or may not be juxtaposed with the display part 101. In FIG. 9, the regions 112 and 114 obtained during the lighting of one of the light sources 104, referenced 110 in this figure, are shown. By switching on several light sources 104 in the same column as the light source 110, it is possible to capture a region of the element to be imaged located on the display part 101, at the edge juxtaposed with the sensor part 103. An additional capture can also be carried out by using the light sources 104 of one or more other columns of the display part 101, this or these columns remaining close enough to the sensor part 103 for the region 114 to be found at least partially on the sensor part 103. Using as optical medium 108 a glass plate with a thickness equal to approximately 700 μm, it is possible to obtain a region 114 of width equal to approximately 1725 μm (which corresponds to 34 pixels each of width equal to 50 μm) when a single column of light sources 104 is used. In this configuration, the distance between the light sources 104 of the column used and the region 114 is equal to 1217 μm. In addition, in the device 200, it is advantageous to have a thick optical medium 108, for example obtained by superimposing two (or more) glass plates, making it possible for example to obtain an optical medium 108 of thickness greater than or equal to 1 mm. In the configuration shown in FIG. 9, the sensor part 103 is arranged next to the display part 101. As a variant, if the display part 101 is produced such that it forms an at least partially transparent screen, that is to say a screen able to be traversed by a part of the received light, the sensor part 103 can be placed under this screen. In this case, the matrices of light sources and photo-detectors can be seen as being interleaved but arranged on different levels. The device 200 is preferably such that the display part 101 is produced with light sources 104 of the OLED type, and advantageously such that the emission of light is carried out through the substrate 102 (for example made of glass) on which the light sources 104 are produced ("bottom emission", or emission from the rear) and through the optical medium 108. In the rear-emission configuration, the transparent conductive tracks of the substrate 102 on which the light sources 104 are produced, comprising for example ITO, can be used to connect the sensor part 103. The upper metal levels can also be used. It is also possible to produce the device 200 such that transparent holes are present within the display part 101; the sensor part 103 can in this case be transferred onto the display part 101 by placing the photo-detectors 106 opposite the transparent holes.
In this second embodiment, the optical medium 108 corresponds to the transparent substrate used for producing TFT transistors used for addressing the pixels of the device 100, and which are part of the control means of the imaging device 100. In FIG. 10 , the reference 128 designates the assembly formed on the side of the detection surface 111 and which comprises the TFT addressing transistors, the light sources 104 of OLED type emitting their light from the side of their rear face, and the photo-detectors 106 which are for example PIN photodiodes. One or more electronic chips 130 controlling for example the lighting of light sources 104 and / or the reading by the photo-detectors 106 are also arranged on the side of the detection surface 111, next to the assembly 128. Thus, the the other face of the substrate forming the optical medium 108 remains free and forms the capture surface 109 on which the element to be imaged, typically a finger, is intended to be located. An interconnection layer 132 is present between the optical medium 108 and these elements 128, 130. In this second embodiment, the optical medium 108 may have a thickness greater than or equal to approximately 1 mm, for example between approximately 1 mm and 2 mm. Such a thick optical medium 108 is for example obtained by bonding several strips of the same optical index with an adhesive, or glue, also having this same optical index so as not to have reflection at the interface formed by the glue. In addition, it is possible for the light sources 104 of the OLED type to produce a light emission from their front face, the light emitted passing only through the optical medium 108 transferred to these light sources 104 and not the substrate 102. The device 100 according to this second embodiment can be such that it is of large dimensions and able to carry out a simultaneous capture of the fingerprints of several fingers or even of an entire hand. In this case, an optical medium 108 of significant thickness, for example greater than about 1 mm, makes it possible to ensure a certain solidity in the device 100. FIG. 11 represents the technological stack produced to form such an assembly 128. The reference 133 designates the face of this assembly 128 which is intended to be located on the side of the optical medium 108. First of all, a part 134 comprising transistors Addressing TFT 135 is made from polycrystalline silicon. These TFT 135 transistors can also be made from amorphous silicon or IGZO. Polycrystalline silicon is preferably used because it allows better electrical performance to be obtained, which makes it possible to reduce the size of the TFT transistors. This part 134 also includes the various electrical interconnection lines of the imaging device 100. The stack of layers forming the photo-detectors 106, for example PIN diodes, is then formed on the part 134. Amorphous silicon can be advantageously used for the production of these PIN diodes. The PIN diode electrodes located on the side of the TFT transistors can be formed by a conductive layer dedicated to these electrodes, or can be formed within a level of metal common to the transistors. The electrodes of the PIN diodes located on the side of the light sources 104 are made from an electrically conductive and for example opaque material. A planarizing insulating material 137 is then deposited on the assembly produced, forming a planar surface on which the light sources 104 of OLED type are then produced. 
The production of the OLED-type diodes includes in particular the deposition and lithography of a conductive material to form the anodes, then the deposition of the emissive organic layers, and the production of a cathode common to the diodes. Producing the OLED diodes after the other elements of the imaging device 100 makes it possible not to expose the materials used to form the OLED diodes to the temperatures and solvents used for the production of the previous elements. The OLED structure produced is transparent, with for example a metal layer forming a mirror placed on the cathode to reflect the light, and the light emission produced can be monochrome (any wavelength in the visible range, UV or IR), or white thanks to a vertical stack of two or three emitted colors. An encapsulation material 139 is then deposited on the assembly produced.

TFT transistors can be arranged at the periphery of the matrix of light sources 104 and of photo-detectors 106 to form the means used for control and/or reading, and thus limit the number of inputs/outputs to be connected to the external control circuits produced in silicon technology, for example in CMOS. In an imaging device 100 produced with light sources 104 of OLED type and control means comprising TFT transistors, each light source 104 of OLED type can be controlled by two TFT transistors (a first for selecting the pixel, and a second for supplying the OLED diode) in the case of an active matrix (AMOLED solution), or else be controlled by a passive matrix forming a grid of conductive rows and columns allowing the OLED diode to be switched on when the switch-on voltage is applied to the row and the column corresponding to this diode (PMOLED solution). Each photo-detector 106 of the PIN diode type can be controlled by another TFT transistor (in the case of a so-called passive reading), or by several other TFT transistors in the case of a so-called active reading.

In the imaging devices 100 previously described, the value of the second limiting angle of refraction is obtained by considering an optical index of the skin equal to 1.37. When one of the light sources 104 is on, by knowing the photo-detectors 106 intended to receive the rays forming the outer edge of the region 114, it is possible to estimate the optical index of the two media present on the surface 109, this estimate then being used to verify that these optical indices correspond well to that of air (optical index equal to 1) and to that of the skin (optical index with a value within a predetermined range). This makes it possible to detect whether the skin illuminated by this light source corresponds to real skin, or whether this skin is artificial. The imaging device 100 is therefore well suited to carrying out fraud detection when this device 100 is used as a fingerprint sensor. It is also possible to estimate the optical index of the imaged element from the dimensions of the region 114.

In all the embodiments, the optical medium 108 can correspond to several superimposed layers of transparent material. For example, if the smartphone in which the imaging device 100 is integrated initially comprises a glass whose thickness is less than that desired for the imaging device 100 to operate well, it is possible to add an additional protective glass making it possible to form, with the initial glass, an optical medium 108 having the desired thickness.
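As an illustration of the fraud-detection check described above, the reflection geometry can be inverted: the measured radius of an edge of the region 114 gives the angle of incidence of the limiting ray, and hence the optical index of the medium sitting on the capture surface at that edge. In the sketch below, the inner edge is assumed to correspond to air and the outer edge to the skin; the glass index of about 1.5, the tolerance and the predetermined skin range are assumed values, and the function names are purely illustrative.

```python
from math import atan, sin

def estimated_index(edge_radius_um, thickness_um, n_medium):
    # Invert the reflection geometry: the edge radius gives the angle of incidence
    # of the limiting ray; its sine times the medium index is the index of the
    # medium lying on the capture surface at that edge.
    theta = atan(edge_radius_um / (2.0 * thickness_um))
    return n_medium * sin(theta)

def looks_like_real_skin(inner_edge_um, outer_edge_um, thickness_um,
                         n_medium=1.5, skin_range=(1.30, 1.45), tol=0.05):
    # The inner edge should correspond to air (index ~ 1) and the outer edge to
    # skin (index within a predetermined range); otherwise the finger may be fake.
    n_air = estimated_index(inner_edge_um, thickness_um, n_medium)
    n_skin = estimated_index(outer_edge_um, thickness_um, n_medium)
    return abs(n_air - 1.0) < tol and skin_range[0] <= n_skin <= skin_range[1]

# Hypothetical measurement of the two edges on a 700 um glass plate
print(looks_like_real_skin(1250.0, 3150.0, 700.0))
```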
In this case, the materials used to form the optical medium 108, the additional protective glass and the glue between these two elements are chosen so that their optical indices are as close as possible to one another, in order to limit as far as possible the reflections at the interfaces between these different elements.

In all the embodiments, the sensor 100 may include a filtering system making it possible to limit or avoid the disturbances that would be caused by illumination of the sensor 100 by light coming from outside it. This filtering system can correspond for example to a pair of polarizing filters, or else to wavelength filters, or even to angular filters allowing only light rays having a certain angle of incidence to pass.

In addition, in all the embodiments, the sensor 100 can be used to take a static fingerprint capture, that is to say with a finger that does not move on the optical medium 108, or else to form a scanning sensor, that is to say one adapted to capture the fingerprint of a finger moving on the optical medium 108. In the case of a scanning sensor, a smaller number of light sources 104 can be used compared with a static fingerprint sensor, because the image does not need to have a large dimension in the direction of movement of the finger. In addition, during the calculation of the final image produced from the captured partial images, the displacement of the element to be imaged occurring between the captures of the partial images is calculated and taken into account. To build the final image, fewer partial image captures are sufficient, because the finger moves between two acquisitions of partial images.

In addition, even if the imaging device 100 can advantageously be used to form a fingerprint sensor, it is also possible to use the imaging device 100 as a device for detecting movement of the element to be imaged, for example to form a pointing device or touchpad.

In general, the photo-detectors 106 of the imaging device 100 used when capturing images receive a large amount of light from the light sources 104. Thus, the integration times of the imaging device 100 are short. For example, when the imaging device 100 forms a sensor having a pixel resolution of 500 dpi, the acquisition of the complete image (obtained by switching on 32 light sources) is carried out in a little less than 50 ms.

In all the embodiments, given that the same point of the skin can be read by several different photo-detector 106 / light source 104 pairs, a failure of one of these pairs can be compensated for by using another of these pairs. Thus, possible manufacturing defects of certain pixels can be compensated for, which limits the impact of the resulting errors on the reading of the print. In addition, the device 100 remains functional even if certain pixels no longer work. For example, if a failing pixel corresponds to one of those whose LED is not intended to be lit to capture part of the print, then the information not captured by this pixel can be obtained from another partial image partially overlapping the one comprising the defective pixel. If the failing pixel corresponds to one of those whose LED is intended to be lit to capture part of the print, it is possible to call on a neighboring pixel whose LED can be lit to carry out the capture of that part of the print.
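As a minimal sketch of how a final image can be assembled from overlapping partial images, whether the overlap comes from the redundancy of the photo-detector/light source pairs or from the displacement of a moving finger, the code below places each partial image at its estimated position and averages the areas covered several times. The shapes and offsets are hypothetical, and the estimation of the displacement itself is outside the scope of this sketch.

```python
import numpy as np

def assemble_final_image(partials, offsets, final_shape):
    # Accumulate each partial image at its estimated (row, col) position in the
    # final image, then average wherever several partial images overlap.
    acc = np.zeros(final_shape, dtype=float)
    weight = np.zeros(final_shape, dtype=float)
    for img, (r0, c0) in zip(partials, offsets):
        h, w = img.shape
        acc[r0:r0 + h, c0:c0 + w] += img
        weight[r0:r0 + h, c0:c0 + w] += 1.0
    return np.divide(acc, weight, out=np.zeros_like(acc), where=weight > 0)

# Hypothetical example: two 32 x 32 partial captures overlapping by 16 pixels,
# the second one shifted by the estimated displacement of the finger
p1, p2 = np.ones((32, 32)), 2.0 * np.ones((32, 32))
final = assemble_final_image([p1, p2], [(0, 0), (0, 16)], (32, 48))
print(final[0, 0], final[0, 24], final[0, 40])   # 1.0, 1.5, 2.0
```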
In the case of a large imaging device 100, for example produced with light sources 104 of the OLED type, it is possible, during image capture, to switch on several light sources 104 simultaneously, provided that these light sources 104 are sufficiently distant from one another for the light rays coming from one of the light sources 104 and arriving in a region 114 intended to be illuminated by rays coming from another light source 104 to have a negligible intensity. Advantageously, these sufficiently distant light sources 104 which are switched on simultaneously are arranged on the same line, which makes it possible to reduce the overall duration of capture of the element to be imaged when the imaging device 100 is configured to read all the pixels of the same line simultaneously (a grouping sketch is given after this passage).

In the various embodiments and examples described above, when the arrays of light sources 104 and of photo-detectors 106 are interleaved, each pixel may include one of the light sources 104 and one of the photo-detectors 106. As a variant, each pixel may include either a light source 104 or a photo-detector 106. In fact, since each pixel of the final image can be seen by several photo-detector / light source pairs, there is redundancy in the captured information and it is therefore not necessary to have a photo-detector / light source pair within each of the pixels of the device 100. Thus, it is possible to have, periodically within the matrix, pixels not comprising photodiodes in order to free up space for the production of the light sources.

In all the embodiments described above, the capture produced by the imaging device 100 makes it possible to obtain a final monochrome image, the light sources 104 being able to be produced such that they all emit light at the same wavelength (for example red or green). However, since the light acquisition is based on the phenomenon of frustrated total internal reflection, the formation of the image depends only slightly, if at all, on the wavelength of the light. It is nevertheless possible to take the wavelength into account during the reconstruction of the final image in order to limit the impact of chromatic aberrations. It is also possible to produce the imaging device 100 such that the acquired image is in color, via the use of light sources 104 emitting light of different colors. For example, it is possible to have, from one line to another of the matrix of light sources 104, light sources 104 emitting light of different wavelengths. Thus, a first line of light sources 104 can emit green light, a second line of light sources 104 can emit red light, a third line of light sources 104 can emit blue light, and so on for all the rows of the matrix of light sources 104. In addition, the light sources 104 can be shared between several photo-detectors 106 arranged side by side, for example by producing each of the light sources 104 at the center of a group of four pixels each comprising a photo-detector 106. According to another variant, it is also possible, in particular when the imaging device is produced in CMOS technology, to produce each pixel such that it comprises a photo-detector 106 and three light sources 104 each capable of emitting red, green or blue light. Thus, the sensor part of the imaging device 100 forms a color display that can blend in, for example, with the rest of the screen of the smartphone in which the imaging device 100 is integrated.
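The grouping sketch referred to above illustrates the spacing constraint on simultaneously lit sources: the sources of one line are batched so that any two sources lit in the same step are separated by at least a chosen number of pixels. The minimum separation of 128 pixels used in the example is an assumption, not a value given in the text; in practice it would be derived from the extent of the region 114 and from how quickly the stray intensity becomes negligible.

```python
def schedule_simultaneous_sources(columns, min_separation):
    # Greedily batch the sources of one line into groups such that two sources
    # of the same group are at least `min_separation` pixels apart; each group
    # can then be switched on in a single capture step.
    groups = []
    for col in sorted(columns):
        for group in groups:
            if col - group[-1] >= min_separation:
                group.append(col)
                break
        else:
            groups.append([col])
    return groups

# Hypothetical example: candidate sources every 16 pixels along a 512-pixel line,
# with an assumed minimum separation of 128 pixels between sources lit together
for step, group in enumerate(schedule_simultaneous_sources(range(0, 512, 16), 128)):
    print(f"step {step}: light sources at columns {group}")
```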
It is also possible to have the imaging device 100 serve, for example, as a "flash", that is to say be capable of emitting high-intensity white light. Such an imaging device 100 can in this case be placed on the rear side of the electronic device of the smartphone type, this flash being used for example when taking a photograph.

In the various embodiments described above, the capture surface 109 and detection surface 111 formed by the upper and lower faces of the optical medium 108 are substantially planar. It is however possible to envisage having at least one of these surfaces, in particular the capture surface 109, be concave or convex. In this case, it is necessary to take into account the radius of curvature of this face, which influences the reflection of the light rays on this face, and therefore the choice of the photo-detectors 106 to be used for the acquisition of each of the partial images of the element to be imaged, as well as the associated magnification or distortion factors. As these parameters are stable, a system calibration can advantageously be implemented in production.

In the various embodiments described, it is possible to control the angle of light emission of the LEDs, for example by depositing a layer of low optical index in front of the finger, such that the limit angle between this layer and the glass corresponds to the desired angle value. Such control can for example make it possible to switch on simultaneously two light sources which would otherwise not be far enough apart from one another to avoid interfering with each other.
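The relation behind this emission-angle control is simply the limit (critical) angle of the interface between the low-index layer and the glass. In the sketch below, the glass index of about 1.5 and the example layer index are assumed values; the second function gives, conversely, the layer index to target for a desired cut-off angle.

```python
from math import asin, sin, degrees, radians

def limit_angle_deg(n_layer, n_glass=1.5):
    # Limit (critical) angle of the low-index-layer / glass interface: rays whose
    # angle of incidence in the glass exceeds this value are totally reflected
    # and never reach the finger.
    return degrees(asin(n_layer / n_glass))

def layer_index_for_angle(theta_deg, n_glass=1.5):
    # Inverse problem: index of the layer to deposit so that the limit angle
    # equals the desired emission cut-off angle.
    return n_glass * sin(radians(theta_deg))

print(limit_angle_deg(1.38))         # ~66.9 degrees for a layer of assumed index 1.38
print(layer_index_for_angle(42.0))   # ~1.0, i.e. a very low index for a 42 degree cut-off
```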
CLAIMS
1. Optical imaging device (100) comprising at least:
- a matrix of substantially point-like light sources (104), produced on a substrate (102) and forming a display screen;
- an array of photo-detectors (106) produced on the substrate (102) and interleaved with the array of light sources (104);
- an optical medium (108) of thickness greater than the pitch of the photo-detectors (106) of the photo-detector array (106), covering the light sources (104) and the photo-detectors (106), through which a light intended to be emitted by the light sources (104) is capable of propagating, and forming a capture surface (109) against which an element to be imaged is intended to be placed and a detection surface (111) substantially parallel to the capture surface (109) and against which the arrays of light sources (104) and of photo-detectors (106) are arranged;
- control and reading means (134, 136, 138, 140) capable of successively switching on and off each of a part of the light sources (104) and, during the switching on of each of said part of the light sources (104), of reading at least part of the photo-detectors (106) intended to receive rays (113, 115) of light coming from the light source (104) which is switched on and having undergone a reflection against the capture surface (109), forming with the capture surface (109) an angle of incidence estimated as a function of the optical medium (108) and of the element to be imaged.

2. Optical imaging device (100) according to claim 1, wherein the value of the angle of incidence is between, or equal to, that of a first limiting angle of refraction defined by the optical index of the optical medium (108) and the optical index of the atmosphere in which the element to be imaged is located, and that of a second limiting angle of refraction defined by the optical index of the optical medium (108) and an additional optical index of value greater than that of the optical index of the atmosphere and less than or equal to that of the optical index of the element to be imaged.

3. Optical imaging device (100) according to one of the preceding claims, wherein the control and reading means (134, 136, 138, 140) are configured to read, during the switching on of each of said part of the light sources (104), the photo-detectors (106) located in a region (114) of the detection surface (111) of annular shape delimited by the rays (113, 115) of reflected light forming, against the capture surface (109), the first and second limiting angles of refraction.

4. Optical imaging device (100) according to one of the preceding claims, wherein the control and reading means (134, 136, 138, 140) are configured such that the photo-detectors (106) read during the switching on of each of said part of the light sources (104) deliver a partial image of the element to be imaged, such that each of the partial images obtained partially overlaps with at least one other of the partial images of the element to be imaged.

5. Optical imaging device (100) according to claim 4, further comprising means for calculating a final image of the element to be imaged from the partial images of the element to be imaged.

6. Optical imaging device (100) according to one of the preceding claims, further comprising additional light sources (126) disposed outside the matrix of light sources (104) and aligned with the bisectors of the angles formed by the corners of the matrix of light sources (104).
7. Optical imaging device (100) according to one of the preceding claims, wherein each pixel of the display screen comprises one of the light sources (104) and one of the photo-detectors (106).

8. Optical imaging device (100) according to one of the preceding claims, wherein the light sources (104) are formed by LEDs and the control and reading means (134, 136, 138, 140) comprise CMOS transistors, or wherein the light sources (104) are of the OLED type and the control and reading means (134, 136, 138, 140) comprise TFT transistors arranged in an interlaced fashion with the matrix of photo-detectors (106).

9. Optical imaging device (100) according to claim 8, wherein, when the light sources (104) are of the OLED type and the control and reading means (134, 136, 138, 140) comprise TFT transistors, said TFT transistors are produced on the substrate (102), which is transparent and which forms the optical medium (108).

10. Optical imaging device (100) according to one of the preceding claims, wherein the thickness of the optical medium (108) is greater than approximately 50 μm, or wherein the optical medium (108) comprises a glass plate of thickness greater than or equal to approximately 700 μm.

11. Image capture method implemented with an optical imaging device (100) comprising at least:
- a matrix of substantially point-like light sources (104), produced on a substrate (102) and forming a display screen;
- an array of photo-detectors (106) produced on the substrate (102) and interleaved with the array of light sources (104);
- an optical medium (108) of thickness greater than the pitch of the photo-detectors (106) of the photo-detector array (106), covering the light sources (104) and the photo-detectors (106), through which a light intended to be emitted by the light sources (104) is capable of propagating, and forming a capture surface (109) against which an element to be imaged is intended to be placed and a detection surface (111) substantially parallel to the capture surface (109) and against which the arrays of light sources (104) and of photo-detectors (106) are arranged;
the image capture method comprising the implementation of the following steps:
a) switching on one of the light sources (104);
b) reading at least part of the photo-detectors (106) receiving rays (113, 115) of light coming from the light source (104) that is switched on and having undergone a reflection against the capture surface (109), forming with the capture surface (109) an angle of incidence estimated as a function of the optical medium (108) and of the element to be imaged;
c) switching off the light source (104) that is switched on;
steps a) to c) being repeated for each of a part of the light sources (104).

12. Method according to claim 11, wherein the value of the angle of incidence is between, or equal to, that of a first limiting angle of refraction defined by the optical index of the optical medium (108) and the optical index of the atmosphere in which the element to be imaged is located, and that of a second limiting angle of refraction defined by the optical index of the optical medium (108) and an additional optical index of value greater than that of the optical index of the atmosphere and less than or equal to that of the optical index of the element to be imaged.
13. Method according to one of claims 11 or 12, wherein the photo-detectors (106) read during step b) are those located in a region (114) of the detection surface (111) of annular shape delimited by the rays (113, 115) of reflected light forming, against the capture surface (109), the first and second limiting angles of refraction.

14. Method according to one of claims 11 to 13, wherein the photo-detectors (106) read during each implementation of step b) deliver a partial image of the element to be imaged such that each of the partial images obtained partially overlaps with at least one other of the partial images of the element to be imaged.

15. Method according to claim 14, further comprising the implementation of a step of calculating a final image of the element to be imaged from the partial images of the element to be imaged.

16. Method according to one of claims 11 to 15, wherein steps a) to c) are also implemented for additional light sources (126) disposed outside the matrix of light sources (104) and aligned with the bisectors of the angles formed by the corners of the matrix of light sources (104).
Sulfonates, polymers, resist compositions and patterning process
Washing machine
Washing machine
Device for fixture finishing and tension adjusting of membrane
Structure for Equipping Band in a Plane Cathode Ray Tube
Process for preparation of 7 alpha-carboxyl 9, 11-epoxy steroids and intermediates useful therein an
国家/地区
|