Apparatus and method to manage coherent detection from multiple apertures in a LIDAR system
Abstract:
Apparatus and method for managing coherent detection from multiple apertures in a LiDAR system. An aperture array comprises apertures arranged in one or more dimensions. Each aperture is configured to receive a portion of an optical wavefront. Each aperture is coupled to an optical mixer that coherently interferes the received portion of the optical wavefront with a local oscillator optical wave. A processing module is configured to process electrical signals from the optical mixer outputs, including: for each optical mixer, determining phase or amplitude information from a detected electrical signal at an optical mixer output; determining direction-based information, associated with a subset of the field of view, based on phase or amplitude information derived from two optical mixers; and determining distance information from the direction-based information.

Publication number: ES2868574A1
Application number: ES202030329
Filing date: 2020-04-21
Publication date: 2021-10-21
Inventor: Eduardo Margallo Balbás
Applicant: Mouro Labs S.L.
Description:
[0002] APPARATUS AND METHOD TO MANAGE COHERENT DETECTION FROM MULTIPLE APERTURES IN A LIDAR SYSTEM

[0004] OBJECT OF THE INVENTION

[0006] The present disclosure relates to optical signal detection systems and methods, such as light detection and ranging (LiDAR) apparatus and methods for detection using the same, and more particularly to an apparatus and method for managing coherent detection from multiple apertures in a LiDAR system.

[0008] BACKGROUND OF THE INVENTION

[0010] A variety of types of LIDAR systems use various kinds of scene reconstruction techniques for operation. In some systems, focal plane arrays are used in an imaging configuration, in which different parts of a field of view are imaged onto different respective elements of the array. In some systems, coherent detection is used by mixing optical signals from different elements to select a given direction, adjustable through varying physical phase shifts between the elements, but the use of amplitude and phase information from such coherent detection can be limited in various ways.

[0012] DESCRIPTION OF THE INVENTION

[0014] In one aspect, in general, an apparatus comprises: a first optical source or port that provides a modulated illumination optical wave illuminating a field of view; a second optical source or port that provides a reference optical wave having a defined phase relationship with the modulated illumination optical wave; an array of apertures comprising a plurality of apertures arranged in one or more dimensions, and which is configured to receive an optical wavefront comprising contributions from at least a portion of the field of view, wherein: each of two or more of the apertures is configured to receive a respective portion of the received optical wavefront, and at least two non-adjacent apertures in the array of apertures are configured to receive a respective portion of the received optical wavefront comprising a contribution from the same portion of the field of view, and each of two or more of the apertures is coupled to a respective optical mixer that coherently interferes the respective portion of the received optical wavefront with a respective local oscillator optical wave; wherein each respective local oscillator wave is derived from the reference optical wave in such a way that, for each respective aperture, the respective differences in group delay between (i) the second optical source or port and the respective optical mixer, and (ii) the respective aperture and the respective optical mixer, are substantially the same.
[0016] The apparatus also comprises a processing module configured to process electrical signals detected from outputs of the optical mixers, the processing comprising: for each optical mixer of a plurality of the optical mixers, determining at least one of phase or amplitude information from at least one detected electrical signal from at least one output of that optical mixer; determining first direction-based information, associated with a first subset of the field of view, based on phase or amplitude information derived from at least two optical mixers of the plurality of optical mixers; determining first distance information from the first direction-based information; determining second direction-based information, associated with a second subset of the field of view, based on phase or amplitude information derived from at least two optical mixers of the plurality of optical mixers; and determining second distance information from the second direction-based information.

[0018] In another aspect, in general, a method for managing coherent detection from multiple apertures comprises: providing, from a first optical source or port, a modulated illumination optical wave illuminating a field of view; providing, from a second optical source or port, a reference optical wave having a defined phase relationship with the modulated illumination optical wave; and receiving an optical wavefront that includes contributions from at least a portion of the field of view in an array of apertures comprising a plurality of apertures arranged in one or more dimensions, wherein: each of two or more of the apertures is configured to receive a respective portion of the received optical wavefront, and at least two non-adjacent apertures in the array of apertures are configured to receive a respective portion of the received optical wavefront that includes a contribution from the same portion of the field of view, and each of two or more of the apertures is coupled to a respective optical mixer that coherently interferes the respective portion of the received optical wavefront with a respective local oscillator optical wave.

[0020] Each respective local oscillator wave is derived from the reference optical wave such that, for each respective aperture, the respective differences in group delay between (i) the second optical source or port and the respective optical mixer, and (ii) the respective aperture and the respective optical mixer, are substantially the same.

[0022] The method also comprises: processing, in a processing module, electrical signals detected from the outputs of the optical mixers, the processing including: for each optical mixer of a plurality of the optical mixers, determining at least one of phase or amplitude information from at least one electrical signal detected from at least one output of that optical mixer; determining first direction-based information, associated with a first subset of the field of view, based on phase or amplitude information derived from at least two optical mixers of the plurality of optical mixers; determining first distance information from the first direction-based information; determining second direction-based information, associated with a second subset of the field of view, based on phase or amplitude information derived from at least two optical mixers of the plurality of optical mixers; and determining second distance information from the second direction-based information.
[0024] In another aspect, in general, an apparatus comprises: a first optical source or port that provides a modulated illumination optical wave illuminating a field of view; a second optical source or port that provides a reference optical wave having a defined phase relationship with the modulated illumination optical wave; an array of apertures comprising at least 40 apertures arranged in one or more dimensions, and which is configured to receive an optical wavefront comprising contributions from at least a portion of the field of view, wherein: each of two or more of the apertures is configured to receive a respective portion of the received optical wavefront, and at least two non-adjacent apertures in the array of apertures are configured to receive a respective portion of the received optical wavefront that includes a contribution from the same portion of the field of view, and each of two or more of the apertures is coupled to a respective optical mixer that coherently interferes the respective portion of the received optical wavefront with a respective local oscillator optical wave derived from the reference optical wave; and a processing module configured to process electrical signals detected from outputs of the optical mixers, the processing comprising: for each optical mixer of a plurality of the optical mixers, determining at least one of phase or amplitude information from at least one detected electrical signal from at least one output of that optical mixer; determining first direction-based information, associated with a first subset of the field of view, based on phase or amplitude information derived from at least two optical mixers of the plurality of optical mixers; determining first distance information from the first direction-based information; determining second direction-based information, associated with a second subset of the field of view, based on phase or amplitude information derived from at least two optical mixers of the plurality of optical mixers; and determining second distance information from the second direction-based information.

[0026] Aspects may comprise one or more of the following characteristics.

[0028] The modulated illumination optical wave has a frequency spectrum that includes a peak at a frequency that is adjustable to provide a frequency modulated continuous wave (FMCW) illumination optical wave.

[0030] The modulated illumination optical wave is a pulsed signal.

[0032] The modulated illumination optical wave is formed by alternating between light of two wavelengths.

[0034] The modulated illumination optical wave has a spectrum that covers different frequency bands.

[0036] The respective differences in group delay, between (i) the second optical source or port and the respective optical mixer, and (ii) the respective aperture and the respective optical mixer, correspond to an optical path length difference of less than 10 cm, or less than 1 cm.

[0037] The first and second direction-based information is further processed to measure the first and second intensity of light from the first and second subsets of the field of view, respectively.

[0039] The first and second direction-based information is further processed to measure the relative speed of objects reflecting light from the first and second subsets of the field of view, respectively.

[0041] At least a portion of the first direction-based information and at least a portion of the second direction-based information are determined in parallel.
[0043] The illumination optical wave is provided to illuminate the entire field of view at the same time.

[0045] The illumination optical wave is provided to scan different portions of the field of view over time.

[0047] One or more of the apertures in the array of apertures is used to emit at least a portion of the illumination optical wave.

[0049] The apparatus further comprises at least one illumination aperture that is not included in the array of apertures, wherein the illumination aperture is configured to emit at least a portion of the illumination optical wave.

[0051] The array of apertures has its apertures arranged in a regularly spaced rectangular grid.

[0053] The array of apertures has its apertures arranged in a regularly spaced polar grid.

[0055] The array of apertures has its apertures arranged in a Mills cross configuration.

[0057] The array of apertures has its apertures arranged in a pseudo-random configuration.

[0058] The array of apertures is defined by the pixels of an imaging sensor.

[0060] The respective mixer is configured to provide in-phase/quadrature (I-Q) detection through the use of a 90° shifted replica of the reference optical wave.

[0062] The respective mixer is configured to provide in-phase/quadrature (I-Q) detection through the use of interference with the reference optical wave in a multi-mode interference coupler.

[0064] The mixers are implemented through at least one of: a partially transmissive layer, a directional coupler, an evanescent coupler, a multi-mode interference coupler, or a grating coupler.

[0066] The processing module is configured to compensate for errors in the relative phase between apertures in the array of apertures, estimated based at least in part on a modulation pattern of the modulated illumination optical wave.

[0068] The processing module is configured to compensate for errors in the relative phase between apertures in the array of apertures, estimated based at least in part on calibration data obtained with a predetermined wavefront.

[0070] The processing module is configured to compensate for errors in the relative phase between apertures in the array of apertures, estimated using sensors that measure temperature and/or temperature gradients in the apparatus and/or its surroundings.

[0072] The processing module comprises analog-to-digital conversion components.

[0074] The processing module comprises a parallel-to-serial data converter.

[0076] The processing module comprises an electro-optical transducer for data output through a fiber optic link.

[0078] The first optical source or port and the second optical source or port provide light from a single common light source.

[0079] The second optical source or port provides light by phase modulation of light fed to the first optical source or port.

[0081] The first optical source or port illuminates the field of view through a light diffusing element.

[0083] Aspects can have one or more of the following advantages.

[0085] Coherent detection for multiple apertures can use a single local oscillator in a way that preserves the relative phase information between the apertures, and can allow selection of any direction within a field of view through digital post-processing of a single acquisition, without the need for physical beam steering.

[0087] Reconstruction of a desired wavefront or beam direction can be done in post-processing, such as digital post-processing.
[0089] The relative amplitude and phase information from each aperture in the array can be digitally recorded and stored, and can be combined to produce a virtual beam steering and image scanning effect.

[0091] Heterodyne detection can be used to extract phase information between a number of apertures, and that information can be processed in the complex domain to separate different viewing directions. The distance and intensity of multiple contributions from different portions of the field of view can be resolved for each direction, and in this way tomographic information about a volume in the field of view can be reconstructed.

[0093] The techniques described are compatible with integrated optics implementations.

[0095] Techniques can be used to minimize and estimate phase errors, which can be compensated, to facilitate good system performance.

[0097] A full field of view can be imaged simultaneously, allowing wide-field illumination and therefore higher illumination power within permissible eye safety standards.

[0099] As a result of the higher light output that is tolerable, faster imaging, longer ranges and/or higher resolution imaging can be achieved.

[0101] DESCRIPTION OF THE DRAWINGS

[0103] To complement the description being made and to help towards a better understanding of the characteristics of the invention, according to a preferred example of practical embodiment thereof, a set of drawings is attached as an integral part of said description in which, with illustrative and non-limiting character, the following are presented:

[0105] Figures 1A and 1B.- Show schematic diagrams of example coherent detection schemes for generating interference with a local oscillator using balanced detection and unbalanced detection, respectively.

[0107] Figures 2A and 2B.- Show schematic diagrams of example coherent detection schemes in which a local oscillator is mixed with two copies of an incoming field.

[0109] Figure 3.- Shows a schematic diagram of an example reception subsystem.

[0111] Figure 4.- Shows a schematic diagram of an example reception subsystem.

[0113] Figure 5.- Shows a schematic diagram of an example reception subsystem with a distribution of a local oscillator with equalized arm lengths to each mixer.

[0115] Figure 6.- Shows an illustration of a mask design with a circular distribution of the local oscillator with equalized arm lengths to each mixer, in which the aperture distribution defines two concentric circles.

[0116] Figure 7.- Shows an illustration of two waveguide segments with the same length but different displacement in the horizontal direction.

[0118] Figure 8.- Shows a schematic diagram of the optical configuration for an example LIDAR system.

[0120] Figures 9A and 9B.- Show three-dimensional graphs of angular distributions of a radiation pattern on a sphere for planar arrangements of apertures.

[0122] PREFERRED EMBODIMENT OF THE INVENTION

[0124] Various examples of a LIDAR system (or LiDAR system) can be implemented based on a synthetic aperture formed from a spatial distribution of individual pickup apertures of a sensing array coupled to receiving waveguides of a receiving subsystem.
[0126] The field of the electromagnetic wave captured at each aperture (or "captured field"), after each electromagnetic wave has been coupled into a respective receiving waveguide, is mixed with a local oscillator (LO) field in such a way that the phase information of the field captured at the entrance of the aperture can be inferred and relative phase differences between apertures can be measured.

[0128] This can be done through the introduction of an in-phase/quadrature (IQ) optical demodulator using two 90° phase-shifted local oscillators, for example. Alternatively, the local oscillator can be shifted in frequency relative to the frequency of the fields picked up at the apertures so that relative phase differences between fields picked up at the apertures can be measured relative to a carrier frequency that results from the frequency offset.

[0130] Example detection options are illustrated in Figures 1A and 1B. In Figure 1A, a coherent detector (or "mixer") (100A) comprises a 2x2 coupler (102) (e.g., a multi-mode interference (MMI) coupler) used to produce a heterodyne mixture of a local oscillator (LO) from an LO source (104) and the field captured at an input aperture (101). Two detectors (106A) and (106B) (for example, photodetectors such as photodiodes) are used to detect 180° shifted versions of the optical interference signal; the photocurrents they produce are combined using balanced detection to produce a current that represents a differential-mode signal. This has the advantage of suppressing common-mode components in the signal that can increase noise and interference.

[0132] Alternatively, in Figure 1B, a coherent detector (100B) comprises a single detector (106C) (for example, a photodetector such as a photodiode), which could provide greater simplicity, at the cost of losing the common-mode cancellation effect. A potential disadvantage of this unbalanced detection scheme relative to the balanced detection scheme is that, if the local oscillator field and the captured field are of the same frequency, the DC component of the interference at the detector (106C), which depends on the amplitude of the signals and the phase shift between them, will mix with the non-interfering DC component of the unbalanced detection, which depends mainly on the amplitude of the local oscillator. If there is a frequency shift between the two due to frequency modulation of the LO, for example, then it would be possible to resolve both the phase and the amplitude of the captured field.

[0134] In any given implementation, the coupler (102) can be an MMI coupler, an evanescent coupler, or any other suitable form of coupler. Excess losses from these devices should be reduced to improve sensitivity and increase the effective range of the system. The reduced number of devices found between the collection aperture and the detector helps to reduce the effect of excess device loss, relative to, for example, the long binary trees commonly used in phased arrays.

[0136] The electromagnetic waves used may have a peak wavelength that is in a particular range of optical wavelengths (e.g., between about 100 nm and about 1 mm, or some sub-range thereof), also referred to herein simply as "light".

[0138] Photodetectors can be implemented via PIN photodiodes, avalanche photodiodes, photomultiplier tubes, and other light sensitive devices suitable for the application.
In particular, they are at least sensitive to the wavelength of light used for the LIDAR system and have sufficient bandwidth to allow the signals of interest to be read. The dark current and quantum efficiency of these photodetectors can be optimized to maximize system sensitivity and range.

[0139] Referring to Figure 2A, in an alternative implementation of a coherent detector (200), a 2x2 coupler (202A) receives an LO optical wave from an LO source (204) and creates two versions of the local oscillator shifted by 90° with respect to each other. These shifted LOs are mixed, respectively, with two replicas of the incoming field generated by a 1x2 splitter (202B) that divides the incoming field from an input aperture (201) into two outputs without generating phase shifts between the two outputs, to obtain IQ demodulation in respective 2x2 couplers (202C) and (202D). Couplers (202A), (202B), (202C), and (202D) can be MMI couplers, for example. Four detectors (206A), (206B), (206C) and (206D) (for example, photodetectors such as photodiodes) are used in this case for balanced detection of each of the I and Q channels.

[0141] With this construction, the phase can be recovered without the need for a frequency-shifted carrier. Unbalanced detection, as in the single mixer scheme, is also possible, with similar limitations. Figure 2B shows another alternative implementation of a coherent detector (210) in which a 2x4 coupler (212) is used in place of the two separate 2x2 couplers (202C) and (202D). In this implementation, the 2x4 coupler (212) is an MMI coupler that mixes the incoming field from the input aperture (201) with the LO from the LO source (204) with appropriate phase shifts onto all four detectors (206A), (206B), (206C) and (206D).

[0143] To explore the field of view covered by the synthetic aperture of a sensing array that includes multiple pickup apertures, and to retrieve a representation of the light-reflecting object or objects in the field of view (for example, a 3D point cloud map), digitized versions of the phasor at each pickup aperture are combined. This combination effectively defines a virtual wavefront that corresponds to a desired direction within the field of view. Since this is a numerical calculation, it can be done simultaneously for all possible reception directions within the field of view of the LIDAR system through adjustment of the phase shifts performed in the complex domain. This corresponds to a complex matrix multiplication, which can be done as a sequential calculation, for example, using a computer CPU, or as a parallel calculation, for example, using FPGA/GPU hardware. These or any of a variety of computation modules may be used in any combination of sequential and parallel calculations.
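As an informal illustration of this digital beamforming step (not part of the patent text), the following minimal Python sketch assumes per-aperture complex IQ samples have already been digitized; the aperture coordinates, wavelength, direction grid, and random data are hypothetical placeholders. It builds the per-direction phase shifts (the explicit expression appears in the following paragraphs) and applies them as a single complex matrix multiplication.

```python
import numpy as np

# Hypothetical example values (not from the patent): aperture coordinates in
# metres, operating wavelength, and a grid of candidate viewing directions.
wavelength = 1.3e-6                                       # assumed wavelength (m)
aperture_xy = np.random.uniform(-1e-3, 1e-3, size=(64, 2))  # (N apertures, x/y)
theta = np.deg2rad(np.linspace(-10, 10, 41))              # polar angles
phi = np.deg2rad(np.linspace(0, 180, 61))                 # azimuth angles

# Placeholder complex amplitudes A_i(t) at each aperture: random stand-ins for
# phasors assembled from the balanced I and Q photocurrents (A = I + 1j*Q).
n_samples = 1024
A = np.random.randn(64, n_samples) + 1j * np.random.randn(64, n_samples)

# Transformation matrix M[j, i] = exp(1j * dphi[i, j]) for every direction
# j = (theta_j, phi_j) and aperture i, with the per-aperture phase errors
# assumed to be zero in this sketch.
tt, pp = np.meshgrid(theta, phi, indexing="ij")
dirs = np.stack([np.sin(tt) * np.cos(pp), np.sin(tt) * np.sin(pp)], axis=-1)
dirs = dirs.reshape(-1, 2)                                # (n_directions, 2)
dphi = (2 * np.pi / wavelength) * dirs @ aperture_xy.T    # (n_directions, N)
M = np.exp(1j * dphi)

# One complex matrix multiplication reconstructs the beamformed time signal
# for every direction at once; an FFT of each row then exposes the beat
# frequency that an FMCW scheme would use for range extraction.
beamformed = M @ A                                        # (n_directions, n_samples)
spectrum = np.abs(np.fft.fft(beamformed, axis=-1)) ** 2
print(beamformed.shape, spectrum.shape)
```

The key point of the sketch is that no physical phase shifter is exercised: every candidate direction is evaluated from the same single acquisition by reusing the stored per-aperture phasors.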
[0145] Without being bound by theory, as an example of a formulation for some of the equations that can be used to perform some of the calculations, for a desired direction (θ_j, ϕ_j) in the field of view, the phase shift to apply at a specific aperture with coordinates (x_i, y_i, 0) in the array and with phase error ε_i, relative to the local oscillator reference, can be expressed as:

[0147] Δφ_{i,j} = (2π/λ) · [x_i sin(θ_j) cos(ϕ_j) + y_i sin(θ_j) sin(ϕ_j)] + ε_i

[0150] If A is the matrix of complex amplitudes at all apertures in the array, a computational module can reconstruct the field of view by multiplying A by a transformation matrix.

[0155] A uniform transformation matrix can be expressed elementwise as: M_{i,j} = e^{i·Δφ_{i,j}}

[0157] Other transformation matrices are possible, in which a set of amplitude factors is introduced to shape the equivalent radiation pattern of the array. In regularly spaced linear arrays, some characteristic patterns include triangular or binomial shaped field strength tapers from the center of the array. These designs suppress secondary lobes at the cost of a wider primary radiation lobe. An alternative design can be based on Chebyshev polynomials, such as Dolph-Chebyshev or Taylor distributions, which allow an upper limit to be set on the secondary lobes while minimizing the width of the main lobe.

[0159] The spatial distribution of the pickup apertures is a sampling problem analogous to that of phased antenna array designs. Different configurations are possible depending on the desired antenna pattern and lobe profile. Example arrangements of the array of pickup apertures include circular arrays, rectangular grids, etc., analogous to systems that use antenna arrays (e.g., RADAR systems). In some implementations, the number of apertures is large enough to enable high resolution imaging of non-trivial scenes, and sufficient light gathering for long-range imaging (e.g., > 300 m).

[0161] In some implementations, the apertures may be arranged in subunits of a non-planar formation that is capable of self-assembling through the application of a magnetic force, as described in more detail in US Provisional Patent Application Serial No. 62/842,924, filed May 3, 2019, incorporated herein by reference. For example, a plurality of subunits are fabricated on a planar substrate, wherein each subunit comprises: an optical sensing structure configured to receive at least a portion of an optical wavefront that impinges on one or more of the subunits, and material that forms at least a portion of a hinge in a neighborhood of an edge with at least one adjacent subunit. At least a portion of the substrate is removed at respective edges between each of at least three different pairs of subunits to enable relative movement between the subunits in each pair, limited by one of the hinges formed from the material. One or more actuators are configured to apply a force to bend a connected network of multiple subunits into a non-planar formation.

[0163] If a device containing the components of the LIDAR system is implemented using integrated optics, for example, the arrangement of waveguides and apertures can be made in the plane of a wafer surface and optical elements can be used at the ends of the waveguides to deflect optical radiation out of plane (e.g., perpendicular to the wafer surface). Such optical elements can include grating couplers, etched 45° mirrors, 3D printed micromirrors, or external micromirrors, among others.
Additionally, diffractive elements such as microlenses can be introduced into the design to adapt its field of view, as described in United States Publication No. 2017/0350965A1, incorporated herein by reference. These microlenses can be introduced using grayscale lithography, resist reflow, imprint molding, or 3D printing techniques, among other methods.

[0165] In an integrated optics implementation, photodiodes and electronic amplifiers (e.g., transimpedance amplifiers (TIAs)) can be produced on the same substrate, minimizing system cost and reducing device footprint. This can be done through the application of CMOS-compatible technologies. The electronics can be produced using a CMOS process and waveguides can be produced on top of the electronics layer using silicon, silica, silicon nitride or silicon oxynitride, for example. Photodetectors can be produced, for example, using germanium placed on the silicon wafer for longer wavelengths, or through silicon detectors available on the CMOS platform if the wavelength allows.

[0167] The electronics for some LIDAR system implementations may include one or more amplification stages configured to provide adequate transimpedance gain for each of the detectors or pairs of detectors in the device. Once amplified, the signal can be digitized and digitally processed (e.g., in accordance with the equations above) to generate independent data streams that correspond to each of the desired viewing directions in the field of view. These data streams can then be processed to extract depth information (also called range information) using depth extraction (or range extraction) algorithms used in some other LIDAR systems. In frequency modulated continuous wave (FMCW) or pulsed frequency modulated systems, depth is encoded in the instantaneous frequency difference between the local oscillator and the frequency of the received light. In other schemes, phase differences may be used when switching between two wavelengths, or time measurements for pulsed schemes with heterodyne detection.

[0169] The digital processing electronics can be made on the same substrate as the optical device or can be implemented in a separate specialized device such as an ASIC chip. Commercially available components can also be used for this purpose, such as FPGAs, DSPs, or software implementations running on CPUs or GPUs. Figure 3 shows an example of a digital signal processing (DSP) module (300) that is coupled to an array of IQ detectors (302A) ... (302B) (for respective apertures in an array of apertures) that can be integrated on the same device, or otherwise combined, in a receiving subsystem. The IQ detector (302A) includes a pair of photodiodes (304A) for an in-phase (I) component and a pair of photodiodes (304B) for a quadrature (Q) component. The signals from the photodiodes (304A) and (304B) are amplified by respective TIAs (306A) and (306B), and are converted into the digital domain by respective ADCs (308A) and (308B). Similarly, the IQ detector (302B) includes a pair of photodiodes (304C) for an I component and a pair of photodiodes (304D) for a Q component. The signals from the photodiodes (304C) and (304D) are amplified by respective TIAs (306C) and (306D), and are converted into the digital domain by respective ADCs (308C) and (308D).

[0171] The data throughput resulting from the number of channels for the multiple pickup apertures, the scanned range in the field of view for a particular scene, and/or the scene acquisition rate can be large.
In some applications, the same photonics platform that is being used to implement the device can be used to encode and optically transmit the information back to the rest of the system.

[0173] This can be done, for example, through fast modulators based on carrier injection in PIN devices or through other electro-optical effects. Figure 4 shows an example of a receiving subsystem in which data from an array of IQ detectors (402A) ... (402B) (for respective apertures in an array of apertures) is coupled to a parallel-to-serial converter (404) that converts the digital on-chip signals from parallel to serial before the converted output is amplitude-modulated by a modulator (406) onto an optical carrier from an external light source (408). A direct-detection optical-to-electrical converter (410) and a serial-to-parallel converter (412) can then provide the digital signals to a DSP module (414).

[0175] This on-chip optical communication channel can be multiplexed onto the same optical path used to provide the local oscillator to the device, using wavelength division multiplexing, isolators/couplers to separate the two directions of propagation, or time multiplexing, among other techniques. Alternatively, a separate physical path may be used for the encoded optical information, which may be of a different type than the single mode optical fiber used for the laser that provides the emitted and captured light and the local oscillator light. For example, this separate path for the encoded optical information can be a multi-mode optical fiber. This fiber optic data communication can simplify the interface with the sensing element to just a few electrical power/control signals and one or two optical fibers.

[0177] The waveguide arrangement can be configured to improve device performance. Integrated optics in a photonic integrated circuit (PIC) has the advantage of the precision that is achievable with modern lithography, which can be significantly better than 100 nm in scale. It is possible to assemble some implementations of the device using bulk optics and/or optical fibers instead of, or in addition to, optics integrated into a photonic integrated circuit. But, in some implementations that integrate all the major optical components of the LIDAR system into one PIC, the dimensions and tolerances enabled by such an implementation may facilitate a more stable system, and the information may be easier to retrieve.

[0179] Another possible implementation is one in which the distribution network is produced using a 3D printing technique of adequate resolution and satisfactory waveguide quality. Sufficient index contrast to implement bends with low losses can be a factor in implementing such a system. Couplers used to provide the mixing function could be produced using planar-type structures or true 3D components such as photonic lanterns. The 3D printed waveguides can be routed after mixing to a suitable detector array.

[0181] In the case of an FMCW system, in some implementations, the path length difference between, first, each optical path connecting each aperture and the respective mixer input and, second, the optical path connecting the common input of the local oscillator to the interference-generating coupler(s), is substantially the same for all channels in the array.
This will minimize the phase shift introduced between them during the wavenumber sweep (frequency chirp) characteristic of FMCW systems and will reduce the need for calibration and digital compensation in the transformation matrices above. Similarly, in other source modulation schemes that especially affect the wavelength of the local oscillator, the path length difference can be minimized to avoid introducing modulation-dependent phase errors in the relative phase measurements between apertures that would prevent proper recovery of direction.

[0183] Equalization of arm lengths also helps with temperature sensitivity. Some materials that could be used to implement this example system, such as monocrystalline silicon, have moderate to high thermo-optic effects. If the lengths from the local oscillator input to each mixer, or from the aperture to the mixer, are different, changes in temperature can induce uncontrolled phase shifts in the array, resulting in a loss of calibration. In the case of silicon, with a thermo-optic coefficient of 2.4×10⁻⁴ K⁻¹ at 1.3 μm, and assuming the maximum acceptable phase error in the array is λ/100, the maximum tolerable path length difference could be selected to be no more than 54 μm per 1 K of tolerated temperature variation, at a wavelength of 1.3 μm. If the chip is not thermally stabilized and must operate over a temperature range of -20 to 80 °C, the maximum tolerable path length difference could be selected to be no more than 0.54 μm.

[0185] In the event that the device is packaged or manufactured together with electronics or electro-optical components that dissipate heat, additional attention can be paid to reducing any thermal gradients in the structure. Alternatively, materials with a lower thermo-optic coefficient, such as silicon oxide, nitride or oxynitride, can be used for all waveguides or for some sections of the optical circuit. Additionally, one or more temperature sensors could be included in the substrate used to manufacture this unit to estimate phase errors and be able to compensate for them in post-processing.
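As an informal cross-check of the tolerance figures quoted above (not part of the patent text), the following short sketch reproduces the arithmetic under the stated assumptions: a thermo-optic coefficient of 2.4×10⁻⁴ K⁻¹, a wavelength of 1.3 μm, and a phase error budget of λ/100.

```python
# Worked check of the path-length tolerance quoted in the description,
# assuming dn/dT = 2.4e-4 1/K for silicon and an operating wavelength of 1.3 um.
dn_dT = 2.4e-4                       # thermo-optic coefficient (1/K), assumed
wavelength_um = 1.3                  # operating wavelength (um)
opl_budget_um = wavelength_um / 100  # allowed optical path-length error, lambda/100

dL_per_K = opl_budget_um / dn_dT     # tolerable length mismatch per kelvin (um)
dL_100K = dL_per_K / 100             # for the full -20 to 80 degC (100 K) range

print(f"{dL_per_K:.0f} um per 1 K")    # ~54 um per 1 K of temperature variation
print(f"{dL_100K:.2f} um over 100 K")  # ~0.54 um over the full range
```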
[0187] Total path length equalization for the local oscillator can be achieved, among other ways, through a binary splitter tree. An example of this scheme is shown in Figure 5 for an example of a receiving subsystem (500) that includes a small linear array of N apertures (for example, N = 8 in this example) coupled to coherent IQ detectors (for example, the coherent detector (200) described above). Since there are types of 1x2 splitters (502) that do not introduce phase shifts between their two outputs, and the tree can be configured to be symmetric at each stage, the phase and group delay for the LO between the general input (504) and the input of each coherent IQ detector can be configured to be substantially the same. The distance from the aperture to the mixer can be kept constant to ensure that the phase delay is the same for all apertures.

[0189] Instead of a linear arrangement of apertures within an array of apertures, the geometry of the aperture arrangement may be distributed in a two-dimensional or three-dimensional arrangement. For example, in the case of a circular geometry, the angular span and segment length after each splitter could be arranged to be substantially the same, as illustrated in the example mask design (600) shown in Figure 6.

[0191] In the illustrated design, the apertures are arranged along two concentric rings and the distance from the aperture to the mixer has been kept constant. This has the advantage of producing alternating positions for the mixers, which are typically wider than the individual waveguides. This interleaving allows an increase in the density of apertures in the rings. However, due to the absence of symmetry, the path lengths between the local oscillator input to the device and the different mixer inputs may need to be adjusted. To this end, it is possible to introduce compensation elements that accommodate varying physical distances on the wafer but keep the total optical delay constant, as shown in Figure 7. A waveguide segment (700A) and a waveguide segment (700B) have different distances between their end points in the horizontal direction (e.g., in the plane of the array), but the same propagation distance through the waveguide segment between those end points.

[0193] The emission subsystem of a LIDAR system that incorporates a receiving subsystem using the array detection and processing techniques described in this document can take any of a variety of forms, such that the illumination beam covers the area of the scene that is of interest. For example, a single waveguide or aperture can be used alone, or in combination with beamforming optics, to produce an illumination pattern that fully covers the field of view with the emitted beam. In this case, the receiving subsystem is responsible for resolving the field of view (FOV) with the desired resolution.

[0195] Figure 8 shows an example of the emitter and receiver optical configurations of a LIDAR system (800) that includes a transmitting subsystem (or "emitter") and a receiving subsystem (or "receiver"), and illustrates how the emitter covers the entire FOV (802) while the signal processing in the receiver selects a specific direction in the FOV (802), subject to the array resolution (804). In this example, a light source (806) provides light as a local oscillator (808) and light to a LIDAR emitter (810) that illuminates the FOV (802). A parallel coherent receiver (812) receives an optical wavefront in an array of multiple apertures. Each aperture is configured to receive a respective portion of the received optical wavefront. Also, different apertures (including different non-adjacent apertures) are configured to receive respective portions of the received optical wavefront, where each of these portions includes a contribution from the same portion of the field of view.

[0197] Alternatively, a phased array can be used to steer the excitation beam and scan the field of view. For the design of such a phased array, any of a variety of techniques and distribution schemes can be applied. Figures 9A and 9B show the spherical radiation patterns of a rectangular grid (Figure 9A) and a circular array (Figure 9B), with the maximum beam intensity defined at the top of the scale and pointing in the direction of propagation toward a target object. Figure 9A shows a spherical radiation pattern (900) that results from a 40x40 rectangular grid with a spacing of 3.8λ, and Figure 9B shows a spherical radiation pattern (902) that results from a circular array with 600 elements and a spacing of 1.6λ. Below each radiation pattern is a legend that shows the correspondence between shading intensity and relative radiation intensity (in dB). No consideration has been made of the antenna factor of the individual apertures in these examples.

[0199] As an additional option, a MEMS device or other electromechanical device can be used to provide a scanning function for the emitter.
In these cases, selection of an angular direction by the overlap between the excitation scanner and the pickup array can be the basis for an alias suppression scheme (for example, either through a Vernier arrangement of the aliased replicas of the emission and capture arrays, or through spatial filtering by using an excitation FOV that is smaller than the aliasing angular period).

[0201] When the excitation beam is oriented to scan different subsets of a larger field of view, the calculations used to determine direction-based information, such as data streams that correspond to different viewing directions in the field of view, can be used to explore each subset. Also, when collecting received data for a given subset, different parameters (e.g., integration time) may be used for different subsets of the field of view.

[0203] The techniques described in this document can address a variety of potential technical problems, some of which are relevant to achieving long-range, high-speed LIDAR detection. This can be useful for improving safety in autonomous vehicles and in other applications, such as aerospace, where extended range is beneficial.

[0205] The range limitation in existing systems is tied to the maximum power that can be used at a given wavelength and the sensitivity that can be achieved using a given detector technology. The maximum beam power used to illuminate a scene can be limited by practical considerations in instrumentation and by safety limits for ocular exposure. These limits are wavelength dependent, with shorter wavelengths having tighter restrictions due to less absorption by the eye. Longer wavelengths are inherently safer. Additionally, the physical properties of the beam are relevant for calculating eye safety. The maximum power in a collimated beam such as those used in LIDAR depends on the diameter of the beam and the possible intersection of the beam with the pupil of the eye. In any case, for a given choice of wavelength and optical design, there is a maximum power that can be used safely. The present description shows how, by moving the scene imaging functionality to an array of receivers, it is possible to use an illumination beam that is as wide as the entire field of view and that, given its greater divergence, can be either more powerful or inherently safer.

[0207] In terms of sensitivity, different problems arise in time-of-flight systems and in heterodyne or CW systems. Time-of-flight systems can suffer from inferior sensitivity relative to heterodyne systems, as electronic noise can easily overcome shot noise for very weak signals. Heterodyne systems can benefit from a first level of optical "gain", which is derived from interference between detected reflections from the field of view and a reference signal.

[0209] Although efforts to develop single photon avalanche diode (SPAD)-based detector arrays can improve sensitivity in intensity-based systems, the improvement may be limited by device non-idealities and may introduce other design compromises. Photodiode arrays are typically manufactured from silicon due to its high integration capacity and low cost. In practice, this limits the operating range of a time-of-flight LIDAR system to wavelengths below about 1 μm, where the photon energy is sufficient to generate an electron-hole pair given the silicon bandgap. In turn, this may be suboptimal from an allowable optical power perspective.
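As an informal aside (not part of the patent text), the wavelength limit implied by the silicon bandgap can be checked with a one-line calculation; the bandgap value of about 1.12 eV is an assumed textbook figure.

```python
# Cutoff wavelength beyond which silicon photodetectors stop absorbing,
# from lambda_c = h*c / E_g, assuming a silicon bandgap of ~1.12 eV.
h_c_eV_nm = 1239.84            # h*c expressed in eV*nm
E_g_silicon_eV = 1.12          # assumed indirect bandgap of silicon (eV)

lambda_cutoff_nm = h_c_eV_nm / E_g_silicon_eV
print(f"~{lambda_cutoff_nm:.0f} nm")   # ~1107 nm, i.e. roughly 1.1 um
```

Practical silicon detectors lose responsivity well before this cutoff, which is consistent with the roughly 1 μm operating limit mentioned above.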
[0211] An additional potential advantage of heterodyne systems is that they provide intrinsic protection against crosstalk between multiple devices looking at the same FOV. In time-of-flight systems, it may not be possible to discriminate between pulses from different emitters. However, since heterodyne systems use a local oscillator to produce interference with the reflected signal, independent emitters will generally be incoherent with each other.

[0213] A potential problem with FMCW systems is that they typically have a limited étendue (AΩ), since the solid angle of each beam determines the resolution of the LIDAR. This limits the ability of the system to capture reflected photons. The present description shows how this limitation can be addressed, increasing the system étendue and optical throughput by moving the imaging functionality to an array of detectors while still enabling static single-beam illumination. Replicating the number of beams in rotational scanners can increase scanning speed, but can add complexity and cost.

[0215] The characteristics described can improve the performance of LIDAR systems through different mechanisms, including the following two:

[0217] 1. Increasing the number of pickup apertures without the brightness limitation imposed by reciprocity losses in phased antenna arrays. Higher signals mean that longer ranges can be achieved and faster scanning is possible.

[0219] 2. Reconstructing the field of view through a single complex field measurement at each aperture and a mathematical transform. This eliminates the need to scan the phases of each aperture to generate a moving radiation pattern for the array.

[0221] To increase some of these benefits, an FMCW detection scheme can be used, since heterodyne gain can be used to boost the signal above the electrical noise level and good axial range and resolution can be achieved. Other heterodyne schemes, such as dual wavelength LIDAR, are also applicable to some implementations of the system.

[0222] Range and speed issues can be addressed simultaneously by increasing the emitter's optical output power. However, there are safety limits to the amount of laser power that can be put into a collimated beam. This power safety threshold can create a performance limitation for some systems.

[0224] Rather than reducing the scan to a mathematical transform on the data collected from an array, it is possible to physically steer the beam through phase shifters. However, these phase shifters may need calibration due to manufacturing tolerances. Also, lossy combining of the received signals reduces the light-gathering efficiency at reception and, depending on the drive mechanism used for the phase shifters, the resulting beam steering may be too slow for some applications.

[0226] A potential advantage of the techniques described is the increase in étendue (AΩ) obtained from the array. For a single collection aperture this étendue is minimal and is basically defined by the wavelength: AΩ ~ λ². This limits the aperture's ability to capture back-scattered light in a general lighting setup. If multiple waveguides are combined using typical phased-array constructions, essentially the same étendue and brightness result as would be obtained for a single waveguide.

[0228] One way of looking at this is through reciprocity losses in the couplers as the different aperture contributions are combined together.
However, in the techniques described, light captured at each aperture is mixed with a local oscillator and detected without intrinsic loss. Since all photons captured at all apertures contribute to the interference, the signal-to-noise ratio of the system increases by a factor of N for an evenly lit scene. This allows the system to scan at higher speeds, since less time is needed to accumulate enough photons to declare a detection.

[0230] A potential problem with some implementations is the presence of phase and group delay errors between the apertures. This can be reduced during design, using high resolution lithography capabilities to reduce geometric differences between LO paths. Additionally, external parameters that affect the group and phase refractive indices can be taken into account; for this, the corresponding waveguides can be kept relatively short, and/or can be routed close to each other and exhibit symmetry, to minimize differential errors.

[0232] It is also possible to perform a device calibration using a well known excitation (e.g., a collimated beam) and store the result as a compensation matrix to be multiplied with the geometric transformation matrix defined above.

[0234] Some implementations of the techniques described use a coherent source with sufficient coherence length to guarantee interference throughout the entire desired depth scan range.

[0236] While the disclosure has been described in connection with certain embodiments, it should be appreciated that the disclosure should not be limited to the disclosed embodiments but rather is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, the scope of which should be given the broadest interpretation so as to encompass all such modifications and equivalent structures as permitted by law.
Claims:
Claims (32)

[1] 1.- An apparatus to manage coherent detection from multiple apertures in a LiDAR system, comprising: a first optical source or port that provides a modulated illumination optical wave that illuminates a field of view; a second optical source or port that provides a reference optical wave having a defined phase relationship with the modulated illumination optical wave; an array of apertures that includes a plurality of apertures arranged in one or more dimensions, and that is configured to receive an optical wavefront that includes contributions from at least a portion of the field of view, wherein: each of two or more of the apertures is configured to receive a respective portion of the received optical wavefront, and at least two non-adjacent apertures in the array of apertures are configured to receive a respective portion of the received optical wavefront that includes a contribution from the same portion of the field of view, and each of two or more of the apertures is coupled to a respective optical mixer that coherently interferes the respective portion of the received optical wavefront with a respective local oscillator optical wave; wherein each respective local oscillator wave is derived from the reference optical wave in such a way that, for each respective aperture, the respective differences in group delay between (i) the second optical source or port and the respective optical mixer, and (ii) the respective aperture and the respective optical mixer, are substantially the same; and a processing module configured to process electrical signals detected from outputs of the optical mixers, the processing including: for each optical mixer of a plurality of the optical mixers, determining at least one of phase or amplitude information from at least one detected electrical signal from at least one output of that optical mixer; determining first direction-based information, associated with a first subset of the field of view, based on phase or amplitude information derived from at least two optical mixers of the plurality of optical mixers; determining first distance information from the first direction-based information; determining second direction-based information, associated with a second subset of the field of view, based on phase or amplitude information derived from at least two optical mixers of the plurality of optical mixers; and determining second distance information from the second direction-based information.

[2] 2.- The apparatus of claim 1, wherein the modulated illumination optical wave has a frequency spectrum that includes a peak at a frequency that is adjustable to provide a frequency modulated continuous wave (FMCW) illumination optical wave.

[3] 3.- The apparatus of claim 1, wherein the modulated illumination optical wave is a pulsed signal.

[4] 4.- The apparatus of claim 1, wherein the modulated illumination optical wave is formed by alternating between light of two wavelengths.

[5] 5.- The apparatus of claim 1, wherein the modulated illumination optical wave has a spectrum that covers different frequency bands.

[6] 6.- The apparatus of claim 1, wherein the respective differences in group delay, between (i) the second optical source or port and the respective optical mixer, and (ii) the respective aperture and the respective optical mixer, correspond to an optical path length difference of less than 10 cm.

[7] 7.
- The apparatus of claim 6, wherein the respective differences in group delay, between (i) the second optical source or port and the respective optical mixer, and (ii) the respective aperture and the respective optical mixer, correspond to an optical path length difference of less than 1 cm.

[8] 8.- The apparatus of claim 1, wherein the first and second direction-based information is further processed to measure the first and second intensity of light from the first and second subsets of the field of view, respectively.

[9] 9.- The apparatus of claim 1, wherein the first and second direction-based information is further processed to measure the relative speed of objects that reflect light from the first and second subsets of the field of view, respectively.

[10] 10.- The apparatus of claim 1, wherein at least a portion of the first direction-based information and at least a portion of the second direction-based information are determined in parallel.

[11] 11.- The apparatus of claim 1, wherein the illumination optical wave is provided to illuminate the entire field of view at the same time.

[12] 12.- The apparatus of claim 1, wherein the illumination optical wave is provided to scan different portions of the field of view over time.

[13] 13.- The apparatus of claim 1, wherein one or more of the apertures in the array of apertures are used to emit at least a portion of the illumination optical wave.

[14] 14.- The apparatus of claim 1, further comprising at least one illumination aperture that is not included in the array of apertures, wherein the illumination aperture is configured to emit at least a portion of the illumination optical wave.

[15] 15.- The apparatus of claim 1, wherein the array of apertures has its apertures arranged in a regularly spaced rectangular grid.

[16] 16.- The apparatus of claim 1, wherein the array of apertures has its apertures arranged in a regularly spaced polar grid.

[17] 17.- The apparatus of claim 1, wherein the array of apertures has its apertures arranged in a Mills cross configuration.

[18] 18.- The apparatus of claim 1, wherein the array of apertures has its apertures arranged in a pseudo-random configuration.

[19] 19.- The apparatus of claim 1, wherein the array of apertures is defined by the pixels of an imaging sensor.

[20] 20.- The apparatus of claim 1, wherein the respective mixer is configured to provide in-phase/quadrature (I-Q) detection through the use of a 90° shifted replica of the reference optical wave.

[21] 21.- The apparatus of claim 1, wherein the respective mixer is configured to provide in-phase/quadrature (I-Q) detection through the use of interference with the reference optical wave in a multi-mode interference coupler.

[22] 22.- The apparatus of claim 1, wherein the mixers are implemented through at least one of: a partially transmissive layer, a directional coupler, an evanescent coupler, a multi-mode interference coupler, or a grating coupler.

[23] 23.- The apparatus of claim 1, wherein the processing module is configured to compensate for errors in the relative phase between apertures in the array of apertures, estimated based at least in part on a modulation pattern of the modulated illumination optical wave.

[24] 24.- The apparatus of claim 1, wherein the processing module is configured to compensate for errors in the relative phase between apertures in the array of apertures, estimated based at least in part on calibration data obtained with a predetermined wavefront.

[25] 25.
- The apparatus of claim 1, wherein the processing module is configured to compensate for errors in the relative phase between apertures in the array of apertures, estimated using sensors that measure temperature and/or temperature gradients in the apparatus and/or its surroundings.

[26] 26.- The apparatus of claim 1, wherein the processing module comprises analog-to-digital conversion components.

[27] 27.- The apparatus of claim 1, wherein the processing module comprises a parallel-to-serial data converter.

[28] 28.- The apparatus of claim 1, wherein the processing module comprises an electro-optical transducer for data output through a fiber optic link.

[29] 29.- The apparatus of claim 1, wherein the first optical source or port and the second optical source or port provide light from a single common light source.

[30] 30.- The apparatus of claim 1, wherein the second optical source or port provides light by phase modulation of light fed to the first optical source or port.

[31] 31.- The apparatus of claim 1, wherein the first optical source or port illuminates the field of view through a light diffusing element.

[32] 32.- A method to manage coherent detection from multiple apertures, the method comprising the steps of: providing, from a first optical source or port, a modulated illumination optical wave illuminating a field of view; providing, from a second optical source or port, a reference optical wave having a defined phase relationship with the modulated illumination optical wave; receiving an optical wavefront that includes contributions from at least a portion of the field of view in an array of apertures that includes a plurality of apertures arranged in one or more dimensions, wherein: each of two or more of the apertures is configured to receive a respective portion of the received optical wavefront, and at least two non-adjacent apertures in the array of apertures are configured to receive a respective portion of the received optical wavefront that includes a contribution from the same portion of the field of view, and each of two or more of the apertures is coupled to a respective optical mixer that coherently interferes the respective portion of the received optical wavefront with a respective local oscillator optical wave; wherein each respective local oscillator wave is derived from the reference optical wave in such a way that, for each respective aperture, the respective differences in group delay between (i) the second optical source or port and the respective optical mixer, and (ii) the respective aperture and the respective optical mixer, are substantially the same; and processing, in a processing module, electrical signals detected from outputs of the optical mixers, the processing including: for each optical mixer of a plurality of the optical mixers, determining at least one of phase or amplitude information from at least one electrical signal detected from at least one output of that optical mixer; determining first direction-based information, associated with a first subset of the field of view, based on phase or amplitude information derived from at least two optical mixers of the plurality of optical mixers; determining first distance information from the first direction-based information; determining second direction-based information, associated with a second subset of the field of view, based on phase or amplitude information derived from at least two optical mixers of the plurality of optical mixers; and determining second distance information
from the second direction-based information.
- An apparatus comprising: a first optical source or port that provides an optical wave of modulated illumination that illuminates a field of view; a second optical source or port that provides a reference optical wave having a defined phase relationship with the optical wave of modulated illumination; an array of apertures comprising at least 40 apertures arranged in one or more dimensions and configured to receive an optical wavefront that includes contributions in at least a portion of the field of view, wherein: each of two or more of the apertures is configured to receive a respective portion of the received optical wavefront, at least two non-adjacent apertures in the array of apertures are configured to receive a respective portion of the received optical wavefront that includes a contribution from the same portion of the field of view, and each of two or more of the apertures is coupled to a respective optical mixer that coherently interferes the respective portion of the received optical wavefront with a respective local oscillator optical wave derived from the reference optical wave; and a processing module configured to process electrical signals detected from the outputs of the optical mixers, the processing comprising: for each optical mixer of a plurality of optical mixers, determining at least one of phase or amplitude information from at least one electrical signal detected from at least one output of that optical mixer, determining first direction-based information, associated with a first subset of the field of view, based on phase or amplitude information derived from at least two optical mixers of the plurality of optical mixers, determining first distance information from the first direction-based information, determining second direction-based information, associated with a second subset of the field of view, based on phase or amplitude information derived from at least two optical mixers of the plurality of optical mixers, and determining second distance information from the second direction-based information.
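As a purely illustrative aid to the processing chain recited in the method claim above (per-mixer phase or amplitude information, direction-based information from at least two mixers, and distance information from the direction-based information), the following Python sketch shows one conventional digital realization. It assumes, for illustration only, FMCW-style linear-chirp modulation, known aperture positions, and per-aperture I/Q samples; none of the names (iq_samples, positions, distance_from_beat, etc.) or numeric parameters come from the specification.

```python
# Minimal sketch, not the patented implementation: digital processing of
# per-aperture coherent-detection outputs under an assumed linear FMCW chirp.
import numpy as np

C = 3.0e8             # speed of light (m/s)
B = 1.0e9             # assumed chirp bandwidth (Hz)
T = 10e-6             # assumed chirp duration (s)
FS = 100e6            # assumed sampling rate of the detected electrical signals (Hz)
WAVELENGTH = 1.55e-6  # assumed optical wavelength (m)

def phase_amplitude(iq):
    """Phase and amplitude information of a mixer's detected electrical signal."""
    return np.angle(iq), np.abs(iq)

def steering_vector(positions, direction):
    """Per-aperture complex weights selecting one subset of the field of view.

    positions: (N, 2) aperture coordinates in metres.
    direction: (2,) transverse direction cosines of the selected look direction.
    """
    phase = 2.0 * np.pi / WAVELENGTH * (positions @ direction)
    return np.exp(-1j * phase)

def direction_based_signal(iq_samples, positions, direction):
    """Coherently combine the apertures toward one direction (digital beamforming)."""
    return steering_vector(positions, direction) @ iq_samples  # (N, M) -> (M,)

def distance_from_beat(beamformed):
    """Distance information from the beat frequency of a direction-based signal."""
    windowed = beamformed * np.hanning(len(beamformed))
    spectrum = np.abs(np.fft.fft(windowed))
    f_beat = abs(np.fft.fftfreq(len(beamformed), d=1.0 / FS)[np.argmax(spectrum)])
    return C * f_beat * T / (2.0 * B)  # linear-FMCW range equation

# Usage with synthetic data: two directions (subsets of the field of view)
# evaluated from the same captured wavefront.
rng = np.random.default_rng(0)
positions = rng.uniform(-1e-3, 1e-3, size=(40, 2))  # at least 40 apertures
iq_samples = rng.standard_normal((40, 1024)) + 1j * rng.standard_normal((40, 1024))
phases, amplitudes = phase_amplitude(iq_samples)     # per-mixer phase/amplitude step
for direction in (np.array([0.00, 0.00]), np.array([0.02, 0.01])):
    trace = direction_based_signal(iq_samples, positions, direction)
    print(direction, distance_from_beat(trace))
```

The steering vector here is an idealized plane-wave model; the group-delay matching condition recited in the claims is what makes a fixed per-aperture phase relationship of this kind meaningful in the first place.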
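Similarly, the temperature-based phase-error compensation of claim 25 could, under a simple linear thermo-optic assumption, take the form sketched below; the sensitivity coefficient, reference temperature, and sensor layout are illustrative assumptions only, not values from the specification.

```python
# Minimal sketch, assuming one temperature sensor associated with each aperture
# and a linear phase-versus-temperature model.
import numpy as np

def thermal_phase_error(temps_k, ref_temp_k, dphi_dT_rad_per_k):
    """Estimated relative phase error per aperture from temperature readings."""
    return dphi_dT_rad_per_k * (np.asarray(temps_k) - ref_temp_k)

def compensate_apertures(iq_samples, temps_k, ref_temp_k=300.0,
                         dphi_dT_rad_per_k=0.05):
    """Remove the estimated thermal phase error before beamforming.

    iq_samples: (N, M) complex samples, one row per aperture/mixer.
    temps_k:    (N,) temperature-sensor readings associated with the apertures.
    """
    err = thermal_phase_error(temps_k, ref_temp_k, dphi_dT_rad_per_k)
    return iq_samples * np.exp(-1j * err)[:, None]
```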