Patent abstract:
The present invention relates to a panoramic vision system (SVP), a motor vehicle (1) equipped with such a panoramic vision system (SVP) and an associated display method. The panoramic vision system (SVP) includes three cameras (CC, CLG, CLD) covering a panoramic view, an image processing unit (UT) and a screen (10) in portrait format adapted to display the panoramic vision captured by the three cameras (CC, CLG, CLD). For this, the captured images (S10) are cropped (S30) so that a strip (BHG, BHD, BVC) of each image is displayed on the screen (S70), thus reconstituting the panoramic vision. Advantageously, the images are corrected for the distortion of the cameras and transposed (S20) according to two methods: a method called "projection on a cylinder" and a method called "reconstitution in three parts".
Publication number: FR3079180A1
Application number: FR1852471
Filing date: 2018-03-22
Publication date: 2019-09-27
Inventor: Stephanie Ambroise
Applicant: Renault SAS
IPC main class:
Patent description:

PANORAMIC VISION SYSTEM DISPLAYED ON A PORTRAIT SCREEN
DESCRIPTION
Technical Field [01] The present invention relates to a panoramic vision system displayed on a screen in portrait format, a motor vehicle equipped with such a system, and a method for displaying a panoramic vision on a screen in portrait format.
State of the art [02] 360° vision is a sought-after feature in the automotive field. Its main purpose is to display to the driver, on a multimedia screen, a view of the environment close to the vehicle, in the form of a combination of two views chosen according to the gear engaged and the driver's selection.
[03] As illustrated in Figure 1, 360° vision requires the installation of four cameras 2: one at the front of the vehicle 1, one at the rear and one under each exterior mirror. It also requires the use of a dedicated computer and a link to the multimedia system. The cameras 2 used are of the "fisheye" or hypergon type, that is to say with a horizontal angle of view close to 180° (or even more).
[04] The available views are therefore: a front view, a rear view, a driver-side view and a passenger-side view. Combining these four shots makes it possible to provide a top view of the vehicle called "bird view" BV, as illustrated in Figure 1.
[05] On the native front and rear views, strong distortion of the image is commonly observed at the periphery. This strong deformation often leads manufacturers to crop these images on the sides to remove it. One solution for displaying the full horizontal field of view of the native images is to create a panoramic view.
[06] A second problem resides in the display of a panoramic view on a portrait screen, such as a 10-inch screen with a vertical resolution of 1024 pixels and a horizontal resolution of 800 pixels. Indeed, the width in pixels of the panoramic view is greater than the width in pixels of the screen.
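To put figures on this mismatch, a quick check in Python (the per-camera width and the side-band height below are illustrative assumptions, not values from the description; only the 800 x 1024 screen comes from the text):

```python
# Portrait screen cited in the description: 800 px wide, 1024 px tall.
SCREEN_W, SCREEN_H = 800, 1024

# Hypothetical panorama: three stitched wide-angle views, assumed 1280 px each.
PANORAMA_W = 3 * 1280

# The panorama cannot be displayed 1:1 across the screen width.
assert PANORAMA_W > SCREEN_W

# Rotating each side view by 90 degrees makes its *height* the width it
# occupies on screen; with an assumed band height of 300 px per side view:
BAND_H = 300
CENTRAL_W = SCREEN_W - 2 * BAND_H  # columns left for the central vertical band
print(CENTRAL_W)  # prints 200
```

This is the core of the invention's trick: the two side views consume screen columns proportional to their band height, not their panoramic width.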
[07] There is therefore a real need for a panoramic vision system which overcomes these faults, drawbacks and obstacles of the prior art, in particular a method of displaying such a vision that makes it possible to maximize the field of view while maintaining optimal image quality and resolution when displayed on a portrait screen.
Description of the invention [08] To resolve one or more of the drawbacks mentioned above, the invention provides a panoramic vision system comprising:
• three cameras each equipped with an optical sensor: a central camera whose field of vision is adapted to allow the acquisition of a central zone of the panoramic vision, a left side camera whose field of vision is adapted to allow the acquisition of a left lateral zone of the panoramic vision partially covering the central zone, and a right side camera whose field of vision is adapted to allow the acquisition of a right lateral zone of the panoramic vision partially covering the central zone of the panoramic vision;
• an image processing unit capable of receiving and processing the images captured by the three optical sensors; and • a screen capable of displaying the panoramic vision captured by the three optical sensors and processed by the processing unit.
[09] With this objective in view, the device according to the invention, moreover in accordance with the preamble cited above, is essentially characterized in that the screen is in portrait format and comprises three display areas, a left display area, a central display area and a right display area; and the image processing unit includes:
• means for cropping the sensor images according to a left horizontal strip for the images coming from the sensor of the left side camera, a right horizontal strip for the images coming from the optical sensor of the right side camera and a central vertical strip for the images coming from the optical sensor of the central camera, and • means for rotating the left horizontal strip by 90° counter-clockwise and the right horizontal strip by 90° clockwise, and • means for aggregating the three strips so that the reproduction of the panoramic vision on the screen is constituted, on the left display area, of the left horizontal strip rotated by −90°, on the central display area, of the central vertical strip and, on the right display area, of the right horizontal strip rotated by +90°.
[10] Particular characteristics or embodiments, which can be used alone or in combination for this panoramic vision system, are:
[11] the processing unit also comprises correction means suitable for correcting the distortion generated by the cameras and for transposing the corrected images obtained from each sensor before cropping the sensor images;
[12] the correction and transposition of the sensor images is done:
• by projection of the pixels of the sensor onto the internal surface of a half-sphere of radius R1 whose center and vertex are located on a main optical axis of the camera, the coordinates of the sensor pixels projected onto the half-sphere being calculated using the theorem of Thales and the equation of the sphere; then • by projection of the points of the half-sphere onto the internal surface of a vertical half-cylinder of radius R2 encompassing the half-sphere and whose axis of symmetry passes through the center of the half-sphere; and finally • by projection of the points of the half-cylinder onto a plane tangent to the half-cylinder, thus forming the corrected image to be cropped;
[13] the correction and transposition of the sensor images is done:
• by projection of the pixels of the sensor onto the internal surface of a half-sphere of radius R1 whose center and vertex are located on a main optical axis of the camera, the coordinates of the sensor pixels projected onto the half-sphere being calculated using the theorem of Thales and the equation of the sphere; then • the image projected on the half-sphere is projected onto three rectangles, a central rectangle, a right rectangle and a left rectangle, these three rectangles having the same orientation as the sensor and each comprising a central vertical axis and a central horizontal axis, and such that the central rectangle is parallel to the plane of the sensor and located at a distance f2 from the center of the half-sphere, the central vertical axis of the right rectangle is tangent to the right vertical edge of the central rectangle, the central vertical axis of the left rectangle is tangent to the left vertical edge of the central rectangle, the central horizontal axis of the right rectangle has an angle α with respect to the central horizontal axis of the central rectangle, and the central horizontal axis of the left rectangle has an angle −α with respect to the central horizontal axis of the central rectangle, the corrected image to be cropped being obtained by aggregating a central vertical strip of the image obtained by projection on the central rectangle, a right vertical strip of the image obtained by projection on the right rectangle and a left vertical strip of the image obtained by projection on the left rectangle.
[14] In a second aspect, the invention provides a motor vehicle comprising at least one panoramic vision system as defined above.
[15] Particular characteristics or embodiments, which can be used alone or in combination for this motor vehicle, are:
[16] the panoramic vision system is adapted to display on the portrait screen a panoramic vision of the visual field located at the front of the motor vehicle;
[17] the panoramic vision system is adapted to display on the portrait screen a panoramic vision of the visual field located behind the motor vehicle;
[18] the motor vehicle further comprises at least one corner radar adapted to detect and locate a moving object in a detection zone covering at least part of one of the zones covered by the cameras of the panoramic vision system, the processing unit being adapted to integrate at least one item of information relating to the moving object into the panoramic vision displayed on the screen.
[19] In a third aspect, the invention proposes a method of displaying a panoramic vision on a screen, the panoramic vision being covered by means of three cameras each equipped with an optical sensor: a central camera whose field of vision is adapted to allow the acquisition of a central zone of the panoramic vision, a left side camera whose field of vision is adapted to allow the acquisition of a left lateral zone of the panoramic vision partially covering the central zone, and a right side camera whose field of vision is adapted to allow the acquisition of a right lateral zone of the panoramic vision partially covering the central zone of the panoramic vision; and the screen being in portrait format and comprising three display areas, a left display area, a central display area and a right display area, the method comprising:
• a step of receiving the images captured by the three optical sensors;
• a step of cropping the sensor images according to a left horizontal strip for the images coming from the sensor of the left side camera, a right horizontal strip for the images coming from the optical sensor of the right side camera and a central vertical strip for the images coming from the optical sensor of the central camera; • a step of rotating the left horizontal strip by 90° counter-clockwise and the right horizontal strip by 90° clockwise;
• a step of aggregating the three strips so that the restitution of the panoramic vision on the screen is constituted, on the left display area, of the left horizontal strip rotated by −90°, on the central display area, of the central vertical strip and, on the right display area, of the right horizontal strip rotated by +90°; and • a step of displaying the image thus aggregated on the screen.
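The cropping, rotation and aggregation steps above can be sketched with toy data (pure Python lists stand in for pixel buffers; the strip dimensions are arbitrary illustrative values, not from the patent):

```python
def rot90_ccw(img):
    """Rotate a row-major matrix 90 degrees counter-clockwise."""
    width = len(img[0])
    return [[row[c] for row in img] for c in range(width - 1, -1, -1)]

def rot90_cw(img):
    """Rotate a row-major matrix 90 degrees clockwise."""
    return [list(reversed(col)) for col in zip(*img)]

# Toy stand-ins for the cropped strips: two 2x4 horizontal strips (left and
# right cameras) and one 4x2 vertical strip (central camera).
left   = [[f"L{r}{c}" for c in range(4)] for r in range(2)]
right  = [[f"R{r}{c}" for c in range(4)] for r in range(2)]
center = [[f"C{r}{c}" for c in range(2)] for r in range(4)]

# Aggregate row by row: rotated left strip | central strip | rotated right strip.
panel = [l_row + c_row + r_row
         for l_row, c_row, r_row in zip(rot90_ccw(left), center, rot90_cw(right))]

assert len(panel) == 4 and len(panel[0]) == 6  # portrait: taller than wide
```

After rotation, each side strip contributes as many screen columns as its original height, which is what lets the whole panorama fit on a portrait display.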
[20] Particular characteristics or embodiments, which can be used alone or in combination for this display method, are:
[21] the method further comprises, before the cropping step, a correction and transposition step in which the sensor images are corrected for the distortion generated by the cameras;
[22] the correction and transposition of the sensor images is done:
• by projection of the pixels of the sensor on the internal surface of a hemisphere of radius R1 and whose center and vertex are located on a main optical axis of the camera, the coordinates of the sensor pixels projected on the hemisphere being calculated using to the theorem of Thales and the equation of the sphere; then • by projection of the points of the half-sphere on the internal surface of a vertical half-cylinder of radius R2 encompassing the half-sphere and whose axis of symmetry passes through the center of the half-sphere; and finally • by projection of the points of the half-cylinder on a plane tangent to the semi-cylinder thus forming the corrected image to be cropped;
[23] the correction and transposition of the sensor images is done:
• by projection of the pixels of the sensor onto the internal surface of a half-sphere of radius R1 whose center and vertex are located on a main optical axis of the camera, the coordinates of the sensor pixels projected onto the half-sphere being calculated using the theorem of Thales and the equation of the sphere; then • the image projected on the half-sphere is projected onto three rectangles, a central rectangle, a right rectangle and a left rectangle, these three rectangles having the same orientation as the sensor and each comprising a central vertical axis and a central horizontal axis, and such that the central rectangle is parallel to the plane of the sensor and tangent to the top of the half-sphere, the central vertical axis of the right rectangle is tangent to the right vertical edge of the central rectangle, the central vertical axis of the left rectangle is tangent to the left vertical edge of the central rectangle, the central horizontal axis of the right rectangle has an angle α with respect to the central horizontal axis of the central rectangle, and the central horizontal axis of the left rectangle has an angle −α with respect to the central horizontal axis of the central rectangle, • the corrected image to be cropped being obtained by aggregating a central vertical strip of the image obtained by projection on the central rectangle, a right vertical strip of the image obtained by projection on the right rectangle and a left vertical strip of the image obtained by projection on the left rectangle.
[24] Finally, in a fourth aspect, the invention relates to a computer program product downloadable from a communication network and/or recorded on a computer-readable medium and/or executable by a processing unit, the computer program product comprising instructions for implementing the display method as defined above.
Brief description of the figures [25] The invention will be better understood on reading the following description, given solely by way of example, and with reference to the appended figures in which:
- Figure 1 shows a top view ("Bird View") of a motor vehicle equipped with a current vision system;
- Figures 2a and 2b show the optical diagram of a conventional camera as can be modeled; and
- Figure 3 shows a panoramic vision system according to one embodiment of the invention;
- Figures 4 to 7 show the steps of a model for correcting and transposing images according to a method called "projection on a cylinder";
- Figures 8 to 11 show the steps of a model for correcting and transposing images according to a method known as "three-part reconstruction";
- Figure 12 shows an image taken by a camera of a panoramic vision system fitted to a motor vehicle on which a target appears;
- Figure 13 shows a spatial modeling of the target of Figure 12 with respect to the vehicle equipped with the panoramic vision system;
- Figures 14 and 15 represent the stages of construction of a rectangle indicating the target in the spatial modeling of Figure 13;
- Figure 16 shows the image of Figure 12 in which has been embedded the rectangle indicating the target thanks to the spatial modeling performed in Figures 13 to 15;
- Figure 17 shows the image of Figure 12 corrected and transposed according to a method called "reconstruction in three parts" illustrated in Figures 8 to 11, and in which was embedded the rectangle indicating the target through spatial modeling performed in Figures 13 to 15;
- Figure 18 shows the display of a panoramic vision by a panoramic vision system integrated in a motor vehicle according to another embodiment of the invention;
- Figures 19a and 19b show examples of displays of a panoramic view on a screen in portrait format according to the invention;
- Figure 20 shows the block diagram of a method for displaying a panoramic vision according to an embodiment of the invention;
- Figure 21 shows the block diagram of step S20 of Figure 20 implementing the steps of the method called "projection on a cylinder"; and
- Figure 22 shows the block diagram of step S20 of Figure 20 implementing the steps of the method known as "reconstitution in three parts".
Definitions [26] Figure 2a shows the optical diagram of a conventional camera 20 such as can be found commercially. This kind of camera 20 generally comprises a sensor 30 and a lens 40 composed of several lenses L.
[27] To simplify the calculations and the geometric representation of the camera 20, it is possible to replace all of the lenses L included in the objective with a single equivalent lens Leq with focal length f and center A, as illustrated in Figure 2b, the sensor being positioned in the focal plane of this equivalent lens.
[28] The optical axis 50, also called the main axis, of the objective 40 corresponds to the axis of symmetry of rotation of the lenses L making up the objective. In the case of simplified geometric representation, this optical axis 50 can be redefined as being the axis passing through the center of the equivalent lens Leq and the center of the sensor 30.
[29] For the remainder of this description, the optical representation model of a camera will be the simplified model of Figure 2b.
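Under this simplified single-lens model, projection reduces to similar triangles through the lens center A; a minimal sketch (the numeric values are illustrative, not from the patent):

```python
def project(x, y, z, f):
    """Pinhole projection through the equivalent lens of focal length f:
    a point at distance z along the optical axis maps to sensor-plane
    coordinates scaled by f / z (similar triangles through center A)."""
    return f * x / z, f * y / z

# A point 2 m ahead of the lens and 0.5 m off-axis, with f = 4 mm,
# lands 1 mm off-center on the sensor plane.
u, v = project(0.5, 0.0, 2.0, f=0.004)
assert abs(u - 0.001) < 1e-12 and v == 0.0
```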
Embodiments of a Panoramic Vision System [30] A first embodiment according to the invention is illustrated in FIG. 3 and represents a SVP panoramic vision system.
[31] The SVP panoramic vision system includes:
• three cameras each equipped with an optical sensor: a central camera CC whose field of vision is adapted to allow the acquisition of a central zone ZC of the panoramic vision, a left side camera CLG whose field of vision is adapted to allow the acquisition of a left lateral zone ZLG of the panoramic vision partially covering the central zone ZC, and a right lateral camera CLD whose field of vision is adapted to allow the acquisition of a right lateral zone ZLD of the panoramic vision partially covering the central zone ZC of the panoramic vision;
• a UT image processing unit capable of receiving and processing the images captured by the three optical sensors; and • a screen 10 capable of displaying the panoramic vision captured by the three optical sensors and processed by the processing unit UT.
[32] The areas where the fields of vision ZLG, ZC and ZLD of the cameras CLG, CC and CLD overlap are hatched in Figure 3. Preferably, the cameras CLG, CC and CLD used are wide-angle cameras of the "fisheye" type covering a field of the order of 180° or more.
[33] Unlike the screens usually used for displaying a panoramic image, the screen 10 used in the invention is in portrait format and includes three display areas, a left display area ZAG, a central display area ZAC and a right display area ZAD.
[34] In order to process the images captured by the three optical sensors, the UT image processing unit includes:
• means for cropping the sensor images according to a left horizontal strip BHG for the images coming from the sensor of the left side camera CLG, a right horizontal strip BHD for the images coming from the optical sensor of the right side camera CLD and a central vertical strip BVC for the images coming from the optical sensor of the central camera CC, and • means for rotating the left horizontal strip BHG by 90° counter-clockwise and the right horizontal strip BHD by 90° clockwise, and • means for aggregating the three strips so that the reproduction of the panoramic vision on the screen is made up, on the left display area ZAG, of the left horizontal strip BHG rotated by −90°, on the central display area ZAC, of the central vertical strip BVC and, on the right display area ZAD, of the right horizontal strip BHD rotated by +90°.
[35] Since fisheye-type cameras present a lot of distortion, it is advantageous for the processing unit UT also to include correction means suitable for correcting the distortion generated by the cameras CC, CLG and CLD, and for transposing the corrected images obtained from each sensor before cropping the sensor images.
[36] To make this correction and transposition, two methods can be used:
• the projection method on a cylinder; or • the three-part reconstruction method.
Projection method on a cylinder:
[37] The projection method on a cylinder is illustrated in Figures 4 to 7.
[38] Firstly and as illustrated in Figure 4, the correction and transposition of the sensor images start with the projection of the sensor pixels on the internal surface of a half-sphere 70 of radius R1 and whose center C and the vertex S are located on the main optical axis 50 of the camera.
[39] The native images from the camera consist of Res_h pixels in the horizontal direction and Res_v pixels in the vertical direction. To perform the digital simulation, the plane of the sensor 30 of the camera considered is virtually brought closer to the lens equivalent to the objective of the camera, by a homothety centered on the center A of the equivalent lens. A rectangle 60 of width Cx and height Cy, at a distance d from the equivalent lens, represents this new sensor plane. We define the orthonormal coordinate system (X_pu; Y_pu) centered on the center of the rectangle 60, and therefore on the center of the sensor image thus projected.
[40] In this coordinate system (X_pu; Y_pu), an image pixel P has the coordinates (x_pu; y_pu) such that:

x_pu = (i − Res_h/2) · Cx/Res_h
y_pu = (j − Res_v/2) · Cy/Res_v   (E1)

[41] Where:
• i is an integer in the interval [0; Res_h];
• j is an integer in the interval [0; Res_v]; and
• f is the focal length of the camera.
[42] The half-sphere 70 has for orthonormal reference (X_sph; Y_sph; Z_sph) centered on C, the center of the half-sphere 70. In this reference (X_sph; Y_sph; Z_sph), the projection of the pixel P (x_pu; y_pu) has the coordinates P1 (x1; y1; z1).
[43] The coordinates of the point P1 (xi; yi; zi) can be calculated using the theorem of Thales and the equation of the sphere.
[44] Thus, the application of the theorem of Thales between these two points P and P1 gives the following equation:

z1/y_pu = y1/x_pu = (x1 + cam_sph)/d = η   (E2)

[45] Where:
[46] cam_sph represents the distance between the center A of the equivalent lens and the center C of the half-sphere 70;
[47] d is the distance between the rectangle 60 representing the projection of the sensor and the center A of the lens equivalent to the objective of the camera considered; and
[48] η is a constant (i.e. a real number).
[49] The coordinates of P1 can therefore be written:

x1 = η·d − cam_sph
y1 = η·x_pu   (E3)
z1 = η·y_pu

[50] The point P1 belonging to the half-sphere, these coordinates verify its equation, namely:

x1² + y1² + z1² = R1²   (E4)

(η·x_pu)² + (η·y_pu)² + (η·d − cam_sph)² = R1²   (E5)

[51] Solving this second-degree equation makes it possible to deduce the positive constant η:

η = (d·cam_sph + √(d²·cam_sph² + (x_pu² + y_pu² + d²)·(R1² − cam_sph²))) / (x_pu² + y_pu² + d²)   (E6)

[52] And therefore to deduce the values of x_sph, y_sph and z_sph.
[53] If we transpose into spherical coordinates, we then obtain the following angles of longitude and latitude:

θ = atan(y1/x1)
ψ = −asin(z1/R1)   (E7)

[54] The point P1 is then projected onto the internal surface of a vertical half-cylinder of radius R2 encompassing the half-sphere 70 and whose axis of symmetry passes through the center C of the half-sphere 70.
[55] The half-cylinder 80 has for orthonormal reference (X_cyl; Y_cyl; Z_cyl) centered on C, the center of the half-sphere 70. In this reference (X_cyl; Y_cyl; Z_cyl), the projection of the point P1 is a point P2 with coordinates (x2; y2; z2), such that:

x2 = R2·cos(θ)
y2 = √(R2² − x2²)   (E8)
z2 = z1

[56] Finally, the point P2 is projected onto a plane 90 tangent to the half-cylinder 80.
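The sphere-to-cylinder step (E7, E8) can be sketched as:

```python
import math

def to_cylinder(x1, y1, z1, R2):
    """Project a half-sphere point onto the vertical half-cylinder of
    radius R2: the longitude theta (E7) fixes the horizontal position on
    the cylinder (E8), while the height z is kept."""
    theta = math.atan2(y1, x1)
    x2 = R2 * math.cos(theta)
    y2 = math.sqrt(R2**2 - x2**2)
    return x2, y2, z1

x2, y2, z2 = to_cylinder(0.8, 0.6, 0.3, R2=2.0)
assert abs(x2**2 + y2**2 - 2.0**2) < 1e-9  # P2 lies on the cylinder of radius R2
```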
[57] The tangent plane 90 has for orthonormal reference (X_pp; Y_pp) centered on the point R_pp corresponding to the intersection of the optical axis 50 with the tangent plane 90. In this reference (X_pp; Y_pp), the projection of the point P2 is a point P3 with coordinates (x3; y3), such that:

x3 = y2/η2
y3 = z2/η2   (E9)

[58] Where:

η2·R2 = x2 + cam_sph   (E10)

[59] The image thus obtained by the projection of all the pixels P of the rectangle 60 is then resized to the size of the initial sensor image by homothety. A pixel P4 of this corrected and resized image therefore has the following coordinates:

x4 = x3·Res_h/Cx
y4 = y3·Res_v/Cy   (E11)

[60] The projection on the half-sphere having generated an up/down (H/B) and left/right (G/D) inversion of the image, a new H/B and G/D inversion is performed before cropping the corrected and resized image.
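The last two steps, projection onto the tangent plane (E9, E10) and resizing to the sensor resolution (E11), can be sketched as (the numeric values are illustrative assumptions):

```python
def to_tangent_plane(x2, y2, z2, R2, cam_sph):
    """Project a cylinder point onto the tangent plane (E9), with the
    scale factor eta2 defined by eta2 * R2 = x2 + cam_sph (E10)."""
    eta2 = (x2 + cam_sph) / R2
    return y2 / eta2, z2 / eta2

def resize_to_sensor(x3, y3, res_h, res_v, cx, cy):
    """Bring the tangent-plane image back to sensor resolution (E11)."""
    return x3 * res_h / cx, y3 * res_v / cy

# On the optical axis (x2 = R2) with cam_sph = 0, eta2 = 1: the tangent-plane
# projection is the identity on (y2, z2).
x3, y3 = to_tangent_plane(2.0, 1.2, 0.3, R2=2.0, cam_sph=0.0)
x4, y4 = resize_to_sensor(x3, y3, res_h=1280, res_v=800, cx=6.4, cy=4.0)
```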
[61] It should be noted that this latter inversion operation can be replaced by modifying the angle of rotation of the left and right horizontal strips as illustrated in Figure 7.
[62] The image at the top left of Figure 7 corresponds to the image of the left side area ZLG taken by the left side camera CLG. The image in the top center corresponds to the image of the central zone ZC taken by the central camera CC and the image in the top right corresponds to the image of the right lateral zone ZLD taken by the right side camera CLD.
[63] In this Figure 7, the image in the middle on the left corresponds to the image taken by the left side camera CLG, corrected and resized according to the projection on a cylinder method previously described and without the additional H/B and G/D inversion; likewise, the image in the middle on the right corresponds to the processed image from the right side camera CLD.
[64] The frames present in the middle images correspond respectively from left to right: to the left horizontal strip BHG, to the central vertical strip BVC and to the right horizontal strip BHD.
[65] Since the image of the central camera CC is only slightly distorted, it is possible not to correct and transpose it before cropping, as illustrated in the central image of Figure 7. This saves time and computing resources.
[66] Once the cropping of each image has been carried out, and in the absence of the additional H/B and G/D inversion, the rotation means rotate the left horizontal strip BHG by −90° and the right horizontal strip BHD by +90°. Then, the aggregation means bring the three strips together and display them on the screen 10, with on the left display area ZAG the left horizontal strip BHG, on the central display area ZAC the central vertical strip BVC and on the right display area ZAD the right horizontal strip BHD.
Reconstitution method in three parts:
[67] The three-part reconstruction method is illustrated in Figures 8 to 11.
[68] Firstly and as illustrated in FIG. 8, the correction and transposition of the sensor images start with the projection of the pixels of the sensor on the internal surface of a hemisphere 70 of radius R1 and whose center C and the vertex S are located on the main optical axis 50 of the camera. This step is the same as described above.
[69] Then the point P1 (xi; yi; zi) obtained after projection of a pixel P (x pu ; y pu ) of the rectangle 60 on the hemisphere 70, is projected onto three rectangles: a central rectangle RC, a right rectangle RD and a left rectangle RG.
[70] These three rectangles RC, RG and RD have the same orientation as the sensor and each include a vertical central axis and a horizontal central axis. Like rectangle 60, their dimensions are Cx in width and Cy in height.
[71] The central rectangle RC is parallel to the plane of the sensor, and therefore to the rectangle 60, and located at a distance f2 from the center C of the half-sphere 70. Figure 9 illustrates the projection of the point P1 onto the central rectangle RC. We define the orthonormal coordinate system (X_RC; Y_RC; Z_RC) associated with the central rectangle RC and centered on the point R_RC, center of the central rectangle RC, through which the optical axis 50 passes.
[72] In this coordinate system (X_RC; Y_RC), the projection of the point P1 is a point P_RC with coordinates (x_RC; y_RC), such that:

x_RC = y1·f2/x1
y_RC = z1·f2/x1   (E12)

[73] Or, after resizing to the size of the sensor image by homothety:

x_redim_RC = Res_h/2 − x_RC·Res_h/Cx
y_redim_RC = Res_v/2 − y_RC·Res_v/Cy   (E13)
[74] Concerning the left rectangle RG and the right rectangle RD: the central vertical axis of the right rectangle RD is tangent to the right vertical edge of the central rectangle RC, the central vertical axis of the left rectangle RG is tangent to the left vertical edge of the central rectangle RC, the central horizontal axis of the right rectangle RD has an angle α with respect to the central horizontal axis of the central rectangle RC, and the central horizontal axis of the left rectangle RG has an angle −α with respect to the central horizontal axis of the central rectangle RC.
[75] Figure 10 shows a top view of Figure 9, this time with the left rectangle RG and the right rectangle RD present. We can define an orthonormal coordinate system (X_RG; Y_RG; Z_RG) with center G associated with the left rectangle RG, and an orthonormal coordinate system (X_RD; Y_RD; Z_RD) with center D associated with the right rectangle RD. The point G corresponds to the center of the left rectangle RG and is located on the left edge of the central rectangle RC; the point D corresponds to the center of the right rectangle RD and is located on the right edge of the central rectangle RC.
[76] The center C of the half-sphere 70, in the coordinate system (X_RG; Y_RG; Z_RG) centered on G, has the coordinates (x_C_RG; y_C_RG; z_C_RG):

x_C_RG = (Cx/2)·sin(α) + f2·cos(α)
y_C_RG = −(Cx/2)·cos(α) + f2·sin(α)   (E14)
z_C_RG = 0

[77] The point P1, in the coordinate system (X_RC; Y_RC; Z_RC) centered on R_RC associated with the central rectangle RC, has the coordinates (x_1_RC; y_1_RC; z_1_RC) such that:

x_1_RC = f2 − x1
y_1_RC = y1   (E15)
z_1_RC = z1

[78] Therefore, in the coordinate system (X_RG; Y_RG; Z_RG) centered on G associated with the left rectangle RG, the point P1 has the coordinates (x_1_RG; y_1_RG; z_1_RG), such that:

x_1_RG = (Cx/2)·sin(α) + x_1_RC·cos(α) − y_1_RC·sin(α)
y_1_RG = −(Cx/2)·cos(α) + x_1_RC·sin(α) + y_1_RC·cos(α)   (E16)
z_1_RG = z_1_RC

[79] The point Ig is defined as the point of intersection between the line connecting the center C of the half-sphere 70 to the point P1 and the plane defined by the left rectangle RG. In the coordinate system associated with the left rectangle RG (X_RG; Y_RG; Z_RG) centered on G, the coordinates of the point Ig are (0; y_IG; z_IG).

[80] As the center C of the half-sphere 70, the point P1 and the point Ig are aligned, we can write:

0 − x_C_RG = t2·(x_C_RG − x_1_RG)
y_IG − y_C_RG = t2·(y_C_RG − y_1_RG)   (E17)
z_IG − z_C_RG = t2·(z_C_RG − z_1_RG)

hence:

t2 = x_C_RG / (x_1_RG − x_C_RG)
y_IG = (y_C_RG − y_1_RG)·t2 + y_C_RG   (E18)
z_IG = (z_C_RG − z_1_RG)·t2 + z_C_RG

[81] Where t2 is a constant (i.e. a real number).

[82] According to this equation E18 and by putting it back into an image of the size of the sensor image, the point Ig has the coordinates:

x_IG_cap = Res_h/2 − y_IG·Res_h/Cx
y_IG_cap = Res_v/2 − z_IG·Res_v/Cy   (E19)
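The line-plane intersection of equations E17 and E18 is a standard parametric computation; a sketch, with both points expressed in the side rectangle's frame (the numeric values are illustrative):

```python
def intersect_rectangle_plane(c, p1):
    """Intersection of the line (C, P1) with the plane x = 0 of a side
    rectangle, both points given in that rectangle's frame (E17, E18):
    parametrize the line as C + t2*(C - P1) and solve for x = 0."""
    t2 = c[0] / (p1[0] - c[0])
    y_i = (c[1] - p1[1]) * t2 + c[1]
    z_i = (c[2] - p1[2]) * t2 + c[2]
    return 0.0, y_i, z_i

# Example: C = (2, 1, 0) and P1 = (1, 2, 1) meet the plane x = 0 at (0, 3, 2),
# which is indeed on the line C + s*(P1 - C) for s = 2.
assert intersect_rectangle_plane((2.0, 1.0, 0.0), (1.0, 2.0, 1.0)) == (0.0, 3.0, 2.0)
```

The same routine serves for the right rectangle with its own frame and the constant t3.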
[83] Similarly, the center C of the hemisphere 70 in the coordinate system (Xrd; Yrd; Zrd) centered on D has the coordinates (x c _rd; y c _RD; z c _rd):
f ( x crd = -7 * sin (a) + f 2 * cos (a)) yc_RD = - 7 * cos (a) -f 2 * sin (a) ( E20 )
V Z C_RD = θ · θ [84] Now, the point P1, in the coordinate system (Xrc; Yrc; Zrc) centered on Rrc associated with the central rectangle RC, has the coordinates (xi_rc; yi_Rc; zi_rc) such that:
i x l_RC = Î2 ~ X 1 yi_Rc = yi (E15)
V Z 1_RC = Z 1 [85] Therefore, in the coordinate system (Xrd; Yrd; Zrd) centered on D associated with the right rectangle RD, the point P1 has the coordinates (xi_rd; yi_RD; zi_rd), such as:
( X 1_RD = - 7 * sin (a) + x 1RC * cos (a) + y 1RC * sin (a) yi_RD = “7 * cos (a) - x 1RC * sin (a) - y 1RC * sin ( a) (E16b) Z 1_RD = Z 1_RC [86] The point Id is defined as the point of intersection between the line connecting the center C of the hemisphere 70 at point P1 and the plane defined by the right rectangle RD. In the coordinate system of the right rectangle RD (Xrd; Yrd; Zrd) centered on D, the coordinates of the point Id are (0; yiD; zid).
[87] Since the center C of the half-sphere 70, the point P1 and the point Id are aligned, we can write:
0 - x_c_RD = t3*(x_c_RD - x1_RD)
y_ID - y_c_RD = t3*(y_c_RD - y1_RD)   (E17b)
z_ID - z_c_RD = t3*(z_c_RD - z1_RD)

t3 = x_c_RD / (x1_RD - x_c_RD)
y_ID = (y_c_RD - y1_RD)*t3 + y_c_RD   (E18b)
z_ID = (z_c_RD - z1_RD)*t3 + z_c_RD
[88] Where t3 is a constant (i.e. a real number).
[89] According to this equation E18b and by mapping it back to an image of the size of the sensor image, the point Id has the coordinates:
x_ID_cap = Res_h/2 - y_ID * Res_h/C_x
y_ID_cap = Res_v/2 - z_ID * Res_v/C_y   (E19b)
[90] Thus, the corrected image to be cropped is obtained by aggregating a right vertical band Im1 of the image obtained by projection on the right rectangle RD, a central vertical band Im2 of the image obtained by projection on the central rectangle RC and a left vertical band Im3 of the image obtained by projection on the left rectangle RG.
[91] That is to say that the right vertical strip Im1 is created for the pixels with coordinates (x_ID; y_ID) when x_ID is between 0 and x_epi,1.
[92] The central vertical strip Im2 is created for the pixels with coordinates (x_redim_RC; y_redim_RC) when x_redim_RC is between x_epi,1 and x_epi,1 + L.
[93] And finally the left vertical strip Im3 is created for the pixels with coordinates (x_IG; y_IG) when x_IG is between x_epi,1 + L and Res_h.
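The assembly of the three bands described in paragraphs [90] to [93] can be sketched as follows (an illustrative NumPy sketch; the array shapes and the parameter names are assumptions):

```python
import numpy as np

def aggregate_bands(im_right, im_center, im_left, x_epi1, L, res_h, res_v):
    """Assemble the corrected image from three vertical bands, following
    the column ranges of paragraphs [91]-[93]:
      columns [0, x_epi1)        <- band Im1 (right-rectangle projection)
      columns [x_epi1, x_epi1+L) <- band Im2 (central-rectangle projection)
      columns [x_epi1+L, res_h)  <- band Im3 (left-rectangle projection)"""
    out = np.zeros((res_v, res_h, 3), dtype=np.uint8)
    out[:, :x_epi1] = im_right[:, :x_epi1]
    out[:, x_epi1:x_epi1 + L] = im_center[:, x_epi1:x_epi1 + L]
    out[:, x_epi1 + L:] = im_left[:, x_epi1 + L:]
    return out
```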
[94] As before, since the projection on the hemisphere generated an up/down (H/B) and left/right (L/R) inversion of the image, a new H/B and L/R inversion is performed before proceeding to crop the corrected and resized image.
[95] It should be noted that this latter inversion operation can be replaced by modifying the angle of rotation of the left and right horizontal strips as illustrated in Figure 11.
[96] The image in the upper left of Figure 11 corresponds to the image of the left side area ZLG taken by the left side camera CLG. The image in the top center corresponds to the image of the central zone ZC taken by the central camera CC and the image in the top right corresponds to the image of the right lateral zone ZLD taken by the right side camera CLD.
[97] In this Figure 11, the image in the middle on the left corresponds to the image corrected and resized according to the three-part reconstruction method, without additional H/B and L/R inversion, of the image taken by the left side camera CLG; the same applies to the image in the middle on the right, corresponding to the processed image from the right side camera CLD.
[98] The frames in the middle images correspond respectively, from left to right, to the left horizontal banner BHG, the central vertical banner BVC and the right horizontal banner BHD.
[99] As the image of the central camera CC is only slightly distorted, it is possible not to correct and transpose it before cropping. This saves computing time and resources.
[100] Once the cropping of each image has been carried out, and in the absence of an additional H/B and L/R inversion, the rotation means rotate the left horizontal strip BHG by -90° and the right horizontal strip BHD by +90°. Then, the aggregation means bring the three strips together and display them on the screen 10, with the left horizontal strip BHG on the left display area ZAG, the central vertical strip BVC on the central display area ZAC and the right horizontal strip BHD on the right display area ZAD.
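The rotation and aggregation just described can be sketched as follows (an illustrative NumPy sketch, assuming the strips are stored as image arrays; np.rot90 with k=-1 gives the -90° rotation):

```python
import numpy as np

def compose_portrait_view(strip_left, strip_center, strip_right):
    """Rotate the left horizontal strip by -90 deg (clockwise) and the
    right one by +90 deg (anticlockwise), then place the three strips
    side by side: ZAG | ZAC | ZAD. Heights must match after rotation."""
    zag = np.rot90(strip_left, k=-1)   # -90 deg: (h, w) -> (w, h)
    zad = np.rot90(strip_right, k=1)   # +90 deg
    return np.hstack([zag, strip_center, zad])
```

For example, two 100x600 horizontal strips and a 600x75 central vertical strip compose into a 600x275 portrait image.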
[101] It should be noted that, whatever the method of correction and transposition used, each banner can be resized or readjusted, that is to say enlarged or shrunk while keeping the proportions of the image, according to the dimensions of the screen 10 on which the banners are displayed.
Embodiments of a Motor Vehicle Equipped with a Panoramic Vision System [102] In a second aspect, the invention relates to a motor vehicle.
[103] According to a general embodiment, the motor vehicle comprises at least one SVP panoramic vision system as described above.
[104] As illustrated in FIG. 1, most of the current motor vehicles are equipped with one to four cameras 20 positioned at the front, at the rear and / or on the side wings of the motor vehicle 1.
[105] Thus, in this general embodiment, the cameras of the SVP panoramic vision system can be three of the cameras 20 of the motor vehicle, each covering a part of the panoramic vision to be displayed on a screen 10 in portrait format integrated into the interior of the vehicle 1.
[106] According to a first embodiment derived from the general mode, the SVP panoramic vision system is adapted to display on the portrait screen 10 a panoramic vision of the visual field located at the front of the vehicle.
[107] In this first embodiment, the central camera CC corresponds to a camera 20 located at the front of the motor vehicle 1. This central camera CC can be integrated, for example at the dashboard, the windshield, the roof or the front bumper of the motor vehicle 1.
[108] The left side camera CLG, respectively the right side camera CLD, corresponds to a camera 20 located on the left wing, respectively on the right wing, of the motor vehicle 1. The left side camera CLG and the right side camera CLD can for example be integrated into the left and right mirrors of the motor vehicle 1.
[109] According to a second embodiment derived from the general mode, the SVP panoramic vision system is adapted to display on the portrait screen 10 a panoramic vision of the visual field located behind the motor vehicle.
[110] In this case, the central camera CC corresponds to a camera located at the rear of the motor vehicle 1. This central camera CC can be integrated, for example at the rear shelf, the rear window, the roof or the rear bumper of the motor vehicle 1.
[111] The left side camera CLG, respectively the right side camera CLD, corresponds to a camera 20 located on the left wing, respectively on the right wing, of the motor vehicle 1. The left side camera CLG and the right side camera CLD can for example be integrated into the left and right mirrors of the motor vehicle 1.
[112] Thus, if the motor vehicle comprises four cameras 20 located respectively at the front, at the rear and on the left and right wings of the vehicle, it is possible to combine the first and the second embodiments derived from the general mode. In this case, the UT image processing unit can be common to the two SVP panoramic vision systems, as can the left side camera CLG and the right side camera CLD.
[113] A panoramic vision as presented above thus makes it possible to visualize the arrival of a vehicle, a pedestrian or a cyclist in the direction perpendicular to the longitudinal axis of the vehicle, laterally over several tens of meters.
[114] However, a drawback is that the detected target is not easily visible if it is located several tens of meters away, because of too small a resolution.
[115] Figure 12 illustrates an image taken by one of the side cameras of the panoramic vision system, on which a second motor vehicle 3 appears 60 m from the longitudinal axis of the motor vehicle 1 equipped with the SVP panoramic vision system as previously described. This second motor vehicle 3 is not easy to distinguish, representing barely a few camera pixels.
[116] To overcome this drawback, the invention provides a third embodiment, compatible with the preceding ones, in which the motor vehicle 1 also comprises at least one corner radar adapted to detect and locate a moving object in a detection zone ZD covering at least part of one of the zones ZLG, ZC and ZLD covered by the cameras CLG, CC and CLD of the SVP panoramic vision system, the processing unit UT being adapted to integrate at least one item of information relating to the moving object into the panoramic vision displayed on the screen 10.
[117] In general, the corner radars used have a horizontal viewing angle of approximately 135° and a detection range of approximately 60 m.
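Membership of a target in such a detection zone can be sketched as follows (an illustrative sketch; the 135° and 60 m defaults are the approximate figures quoted above, and the frame conventions are assumptions):

```python
import math

def target_in_detection_zone(x_t, y_t, fov_deg=135.0, max_range_m=60.0,
                             boresight_deg=0.0):
    """Rough test of whether a target at (x_t, y_t), expressed in the
    radar's frame, falls inside a corner radar's detection zone
    (illustrative values: ~135 deg horizontal field of view, ~60 m range)."""
    rng = math.hypot(x_t, y_t)
    bearing = math.degrees(math.atan2(y_t, x_t)) - boresight_deg
    return rng <= max_range_m and abs(bearing) <= fov_deg / 2
```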
[118] The output data of a corner radar are:
• the longitudinal position (in X) of the target: X_target;
• the lateral position (in Y) of the target: Y_target;
• the direction of a target 4 with respect to an axis of the motor vehicle 1 equipped with the SVP panoramic vision system and with at least one corner radar, for example the lateral Y axis described in the reference frame illustrated in Figure 13, identified by the Heading angle;
• the direction of evolution of the target 4, indicating whether it is moving towards the ego vehicle 1 or if it is moving away from it; and
• the type of target 4: vehicle, pedestrian or even cyclist, for example.
[119] For each target detected, the following construction is carried out:
[120] In Figure 14, illustrating a top view of the motor vehicle 1, the width (large) and the length (long) of the target 4 are defined. These two values are selected according to the type of target 4.
[121] The coordinates of the lower red point 4a are:
X1 = X_target + 0.01m*k*cos(Heading) + 0.01m*k*sin(Heading)
Y1 = Y_target - 0.01m*k*sin(Heading) + 0.01m*k*cos(Heading)   (E21)
[122] Where k is a multiplying coefficient.
[123] The coordinates of the upper red point 4b are:
X2 = X_target2 - 0.01m*k*cos(Heading) + 0.01m*k*sin(Heading)
Y2 = Y_target2 + 0.01m*k*sin(Heading) + 0.01m*k*cos(Heading)   (E22)
[124] With:
X_target2 = X_target - large*cos(Heading)
Y_target2 = Y_target + large*sin(Heading)
[125] The red line M connecting the two points 4a and 4b is drawn.
[126] As illustrated in Figure 15, in the plane passing through M and the vertical, two other points 4c and 4d are constructed, with coordinates:
X3 = X1
Y3 = Y1   (E24)
Z3 = k*hc
[127] Where hc is the height of the target, selected according to its type; and
X4 = X2
Y4 = Y2   (E25)
Z4 = k*hc
[128] The rectangle passing through the four points 4a, 4b, 4c and 4d is thus constructed.
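The construction of the four points 4a to 4d (equations E21 to E25) can be sketched as follows (an illustrative Python sketch; the function signature and argument names are assumptions):

```python
import math

def target_rectangle(x_target, y_target, heading_deg, large, hc, k=1.0):
    """Build the four points 4a-4d of the rectangle representing a radar
    target (equations E21-E25). The 0.01 m offset and the coefficient k
    follow the text; large and hc are the target width and height chosen
    according to its type."""
    h = math.radians(heading_deg)
    off = 0.01 * k
    x1 = x_target + off * math.cos(h) + off * math.sin(h)   # E21
    y1 = y_target - off * math.sin(h) + off * math.cos(h)
    x_t2 = x_target - large * math.cos(h)                   # E23 ("With:")
    y_t2 = y_target + large * math.sin(h)
    x2 = x_t2 - off * math.cos(h) + off * math.sin(h)       # E22
    y2 = y_t2 + off * math.sin(h) + off * math.cos(h)
    z = k * hc                                              # E24/E25
    # 4a, 4b on the ground line M; 4c, 4d vertically above them
    return [(x1, y1, 0.0), (x2, y2, 0.0), (x1, y1, z), (x2, y2, z)]
```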
[129] Then it is projected into the plane of the native image. Such a method of projecting 3D points into the image of a fisheye camera is described in patent WO2017144826A1. In the present case, the guidelines constructed in WO2017144826A1 are replaced by the rectangle constructed as illustrated in Figure 16.
[130] The coefficient k varies depending on the distance Y_target. A correspondence map is built during a development phase.
[131] Figure 17 illustrates the case where the information relating to the moving object from the corner radar is integrated into the image corrected and transposed according to the three-part reconstruction method.
[132] According to another option of this third embodiment, it is possible:
[133] Either display a model of a vehicle, a pedestrian or a cyclist, enlarged k times. The red rectangle in the final image is then replaced by the symbolic image of a front or rear face of a vehicle, a pedestrian or a cyclist. The "direction of evolution of the target" information is then used.
[134] Or display a "crop" of the image, enlarged k times. The inside of the red rectangle is cropped and then enlarged. The pixel at the center of the rectangle is calculated: (i, j). The image thus formed is superimposed on the final image at (i, j).
[135] According to a fourth embodiment, compatible with the previous ones and as illustrated in Figure 18, when the "bird view" BV vision of the motor vehicle 1 is available, it is possible to replace the image of the central camera CC situated at the front (respectively rear) of the vehicle by the image of the front part (respectively of the rear part) of the motor vehicle 1 visible on the "bird view" BV.
[136] This embodiment can be advantageous because the addition of a part of the front view in the central part of the screen 10 could mislead certain drivers in the case where low obstacles are present in the near environment of the vehicle. In this case, the display of the "bird view" makes it possible to visualize this type of obstacle.
[137] In this case, the layout and shape of the central ZAC, left ZAG and right ZAD display areas should be adapted.
[138] Indeed, since the banner selected by cropping the "bird view" BV vision is not vertical, as was the case with the image from the central camera, the central display zone ZAC is no longer a vertical band but a horizontal one. Thus, depending on the part of the "bird view" BV that is integrated, the central display zone ZAC will rather be located at the top of the screen 10 (see right image in Figure 18) in the case of the display of the front part of the motor vehicle 1 from the "bird view" BV (see left image in Figure 18), or at the bottom of the screen 10 in the case of the display of the rear part of the motor vehicle 1 from the "bird view" BV.
[139] Figures 19a and 19b show examples of the arrangement of the left, right and central display areas depending on the size of the screen 10.
[140] In both figures, the screen 10 is a standard screen 800 pixels wide and 1024 pixels high.
[141] In Figure 19a, a central vertical strip of the image of the central camera CC, resized to 75 × 180 pixels, is displayed in the central display area ZAC of the screen 10, located between the left display area ZAG and the right display area ZAD.
[142] In Figure 19b, it is a horizontal banner of the "bird view" BV, resized to 600 × 100 pixels, which is displayed in the central zone ZAC located on the upper part of the screen 10, above the left display area ZAG and the right display area ZAD.
Embodiments of a method for displaying a panoramic vision on a screen [143] In a third aspect, the invention relates to a method for displaying a panoramic vision on a screen, the panoramic vision being covered by means of three cameras each equipped with an optical sensor: a central camera CC whose field of vision is adapted to allow the acquisition of a central zone ZC of the panoramic vision, a left side camera CLG whose field of vision is adapted to allow the acquisition of a left lateral zone ZLG of the panoramic vision partially covering the central zone ZC, and a right side camera CLD whose field of vision is adapted to allow the acquisition of a right lateral zone ZLD of the panoramic vision partially covering the central zone ZC of the panoramic vision; and the screen 10 being in portrait format and comprising three display zones, a left display zone ZAG, a central display zone ZAC and a right display zone ZAD.
[144] According to a general embodiment shown in Figure 20, the display method comprises:
• a step S10 of reception of the images captured by the three optical sensors;
• a step S30 of cropping the sensor images according to a left horizontal strip BHG for the images coming from the sensor of the left side camera CLG, a right horizontal strip BHD for the images coming from the optical sensor of the right side camera CLD and a central vertical strip BVC for the images coming from the optical sensor of the central camera CC;
• a step S40 of rotation of the left horizontal strip BHG by 90° anticlockwise and of the right horizontal strip BHD by 90° clockwise;
• an aggregation step S50 of the three strips so that the reproduction of the panoramic vision on the screen 10 is constituted, on the left display area ZAG, of the left horizontal strip BHG rotated by -90°, on the central display area ZAC, of the central vertical strip BVC and, on the right display area ZAD, of the right horizontal strip BHD rotated by +90°; and
• a display step S70 of the image thus aggregated on the screen 10.
[145] Advantageously, the display method further comprises, before the cropping step S30, a correction and transposition step S20 in which the sensor images are corrected for the distortion generated by the cameras.
[146] To make this correction and transposition, two methods can be used:
• the projection method on a cylinder; or • the three-part reconstruction method.
[147] These two methods, as well as the associated calculations, having already been explained previously, only their main stages are recalled here.
[148] According to the projection method on a cylinder and as illustrated in Figure 21, the step of correction and transposition S20 of the sensor images comprises three sub-steps:
• a first sub-step S21 of projection of the pixels of the sensor onto the internal surface of a hemisphere 70 of radius R1 whose center C and apex S are located on a main optical axis of the camera, the coordinates of the sensor pixels projected on the hemisphere 70 being calculated thanks to the theorem of Thales and the equation of the sphere; then
• a second sub-step S22 of projection of the points P1 of the hemisphere onto the internal surface of a vertical half-cylinder 80 of radius R2 encompassing the hemisphere 70 and whose axis of symmetry passes through the center C of the hemisphere 70; and finally
• a third sub-step S23 of projection of the points P2 of the half-cylinder 80 onto a plane 90 tangent to the half-cylinder 80, thus forming the corrected image to be cropped.
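Sub-steps S22 and S23 can be sketched for a single point as follows (an illustrative sketch under assumed projection conventions: the exact projection directions are not restated at this point in the text, so the central-projection choices below, from the center C for S22 and from the cylinder axis for S23, are assumptions):

```python
import math

def sphere_to_cylinder(p1, r2):
    """Sub-step S22 (one plausible reading): project P1 from the center C
    (taken as the origin) onto a vertical cylinder of radius R2, i.e.
    scale the point along the ray C->P1 until its horizontal distance
    from the vertical axis equals R2."""
    x, y, z = p1
    t = r2 / math.hypot(x, y)   # assumes P1 is not on the vertical axis
    return (x * t, y * t, z * t)

def cylinder_to_tangent_plane(p2, r2):
    """Sub-step S23 (same caveat): project P2 horizontally from the
    cylinder axis onto the plane x = R2 tangent to the cylinder,
    keeping the height z."""
    x, y, z = p2
    return (r2, y * r2 / x, z)  # assumes x > 0 (point faces the plane)
```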
[149] According to the three-part reconstruction method and as illustrated in FIG. 22, the step of correction and transposition S20 of the sensor images comprises five sub-steps:
• a first sub-step S21 of projection of the pixels of the sensor onto the internal surface of a hemisphere of radius R1 whose center C and apex S are located on a main optical axis of the camera, the coordinates of the sensor pixels projected on the hemisphere being calculated thanks to the theorem of Thales and the equation of the sphere; then
• three projection sub-steps S221, S222 and S223 where the image projected on the hemisphere is projected onto three rectangles, a central rectangle RC, a right rectangle RD and a left rectangle RG, these three rectangles having the same orientation as the sensor and each comprising a vertical central axis and a horizontal central axis, and such that the central rectangle RC is parallel to the plane of the sensor and tangent to the apex S of the hemisphere, the vertical central axis of the right rectangle RD is tangent to the right vertical edge of the central rectangle RC, the vertical central axis of the left rectangle RG is tangent to the left vertical edge of the central rectangle RC, the horizontal central axis of the right rectangle RD has an angle a with respect to the horizontal central axis of the central rectangle RC, and the horizontal central axis of the left rectangle RG has an angle -a with respect to the horizontal central axis of the central rectangle RC; and finally
• an aggregation sub-step S230 where the corrected image to be cropped is obtained by aggregating a central vertical band of the image obtained by projection on the central rectangle RC, a right vertical band of the image obtained by projection on the right rectangle RD and a left vertical band of the image obtained by projection on the left rectangle RG.
[150] Since the step S21 of projection on the half-sphere generates an up/down (H/B) and left/right (L/R) inversion of the image, it may be advantageous to carry out an additional step S60 to restore the original orientation by performing a H/B and L/R inversion of the image, either before the cropping step S30 or before the display step S70.
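The additional inversion step S60 amounts to flipping both image axes (an illustrative NumPy sketch):

```python
import numpy as np

def undo_inversion(img):
    """Step S60: undo the up/down and left/right inversion introduced by
    the projection on the hemisphere (S21) by flipping both axes."""
    return img[::-1, ::-1]
```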
[151] Thus, the present invention offers numerous advantages such as:
• allowing the driver, when the system is integrated into a motor vehicle, to have visibility over several tens of meters, and therefore greater than the so-called "bird view" vision currently present on Renault vehicles; or
• displaying on a screen in portrait format a panoramic vision presenting the same or even more information than that displayed on a screen in landscape format, thus avoiding blind spots.
LIST OF REFERENCES
1 Motor vehicle
2 Camera
3 Second motor vehicle
4 Target
4a, 4b, 4c, 4d Points of the rectangle representing the target
20 Camera
Sensor
Objective (lens)
Optical axis
60 Rectangle / new sensor plane
70 Hemisphere
80 Half-cylinder
90 Tangent plane
A Center of equivalent lens
BV Bird-View (top view)
BHD Right Horizontal Strip
BHG Left Horizontal Strip
BVC Central Vertical Strip
C Center of the hemisphere 70
CC Central Camera
CLD Right Side Camera
CLG Left Side Camera
Cx Width of rectangle 60
Cy Height of rectangle 60
d Distance between rectangle 60 and center A of the equivalent lens
D center of the Right Rectangle
G center of the Left Rectangle
Lens
Leq Equivalent lens
P (x_PU; y_PU) Point of the rectangle 60
P1 (x1; y1; z1) Projection of point P on the hemisphere 70
P2 (x2; y2; z2) Projection of point P1 on the half-cylinder 80
P3 (x3; y3) Projection of point P2 on the tangent plane 90
Prc (x_RC; y_RC; z_RC) Projection of point P1 on the Central Rectangle
Prd (x_RD; y_RD; z_RD) Projection of point P1 on the Right Rectangle
Prg (x_RG; y_RG; z_RG) Projection of point P1 on the Left Rectangle
RC Central Rectangle
RD Right Rectangle
RG Left Rectangle
(Rpp; Xpp; Ypp; Zpp) Orthonormal coordinate system associated with the tangent plane 90
(Rpu; Xpu; Ypu; Zpu) Orthonormal coordinate system associated with the rectangle 60
(Rrc; Xrc; Yrc; Zrc) Orthonormal coordinate system associated with the Central Rectangle
(D; Xrd; Yrd; Zrd) Orthonormal frame associated with the Right Rectangle
(G; Xrg; Yrg; Zrg) Orthonormal frame associated with the Left Rectangle
(C; Xsph; Ysph; Zsph) Frame associated with the hemisphere 70
S Apex of the hemisphere 70
SVP Panoramic Vision System
UT Image Processing Unit
ZAC Central Display Area
ZAD Right Display Area
ZAG Left Display Area
ZC Central Zone
ZLD Right Side Zone
ZLG Left Side Zone
ZDR Overlap Zone
Claims (13)
1. Panoramic vision system (SVP) including:
• three cameras each equipped with an optical sensor, a central camera (CC) whose field of vision is adapted to allow the acquisition of a central zone (ZC) of the panoramic vision, a left side camera (CLG) whose field of vision is adapted to allow the acquisition of a left lateral zone (ZLG) of the panoramic vision partially covering the central zone (ZC), and a right lateral camera (CLD) whose field of vision is adapted to allow the acquisition of a right lateral zone (ZLD) of the panoramic vision partially covering the central zone (ZC) of the panoramic vision;
• an image processing unit (UT) capable of receiving and processing the images captured by the three optical sensors; and • a screen (10) capable of displaying the panoramic vision captured by the three optical sensors and processed by the processing unit (UT), characterized in that the screen (10) is in portrait format and includes three display zones, a left display zone (ZAG), a central display zone (ZAC) and a right display zone (ZAD);
the image processing unit (UT) includes:
means for cropping the sensor images according to a left horizontal strip (BHG) for the images coming from the sensor of the left side camera (CLG), a right horizontal strip (BHD) for the images coming from the optical sensor of the right side camera (CLD) and according to a central vertical strip (BVC) for the images coming from the optical sensor of the central camera (CC), and
means of rotation of the left horizontal strip (BHG) by 90 ° counterclockwise and of the right horizontal strip (BHD) by 90 ° clockwise, and
means of aggregation of the three strips so that the reproduction of the panoramic vision on the screen (10) is constituted, on the left display area (ZAG), of the left horizontal strip (BHG) rotated by -90°, on the central display area (ZAC), of the central vertical strip (BVC) and, on the right display area (ZAD), of the right horizontal strip (BHD) rotated by +90°.
2. Panoramic vision system (SVP) according to claim 1 wherein the processing unit (UT) also comprises correction means adapted to correct the distortion generated by the cameras and to transpose the corrected images obtained from each sensor before the cropping of sensor images.
3. Panoramic vision system (SVP) according to claim 2 in which the correction and transposition of the sensor images is done:
- by projection of the pixels of the sensor on the internal surface of a hemisphere of radius R1 and whose center (C) and the vertex (S) are located on a main optical axis of the camera, the coordinates of the sensor pixels projected on the half-sphere being calculated thanks to the theorem of Thales and the equation of the sphere; then
- by projection of the points of the half-sphere on the internal surface of a vertical half-cylinder of radius R2 encompassing the half-sphere and whose axis of symmetry passes through the center (C) of the half-sphere; and finally
- by projecting the points of the half-cylinder on a plane tangent to the half-cylinder thus forming the corrected image to be cropped.
4. Panoramic vision system (SVP) according to claim 2 in which the correction and transposition of the sensor images is done:
- by projection of the pixels of the sensor on the internal surface of a hemisphere of radius R1 and whose center (C) and the vertex (S) are located on a main optical axis of the camera, the coordinates of the sensor pixels projected on the half-sphere being calculated thanks to the theorem of Thales and the equation of the sphere; then
- the image projected on the half-sphere is then projected onto three rectangles, a central rectangle, a right rectangle and a left rectangle, these three rectangles having the same orientation as the sensor and each comprising a vertical central axis and a horizontal central axis, and such that the central rectangle is parallel to the sensor plane and located at a distance f2 from the center (C) of the hemisphere, the vertical central axis of the right rectangle is tangent to the right vertical edge of the central rectangle, the vertical central axis of the left rectangle is tangent to the left vertical edge of the central rectangle, the horizontal central axis of the right rectangle has an angle a with respect to the horizontal central axis of the central rectangle, and the horizontal central axis of the left rectangle has an angle -a with respect to the horizontal central axis of the central rectangle, the corrected image to be cropped being obtained by aggregating a central vertical strip of the image obtained by projection on the central rectangle, a right vertical strip of the image obtained by projection on the right rectangle and a left vertical strip of the image obtained by projection on the left rectangle.
5. Motor vehicle comprising at least one panoramic vision system (SVP) as defined according to one of claims 1 to 4.
6. Motor vehicle according to claim 5 wherein the panoramic vision system (SVP) is adapted to display on the screen (10) portrait a panoramic vision of the visual field located at the front of the motor vehicle.
7. Motor vehicle according to claim 5 wherein the panoramic vision system (SVP) is adapted to display on the screen (10) portrait a panoramic vision of the visual field located behind the motor vehicle.
8. Motor vehicle according to one of claims 5 to 7 further comprising at least one corner radar adapted to detect and locate a moving object in a detection zone covering at least part of one of the zones covered by the cameras of the panoramic vision system (SVP), the processing unit (UT) being adapted to integrate at least one item of information relating to the moving object into the panoramic vision displayed on the screen (10).
9. Method for displaying a panoramic vision on a screen (10), the panoramic vision being covered by means of three cameras each equipped with an optical sensor, a central camera (CC) whose field of vision is adapted to allow the acquisition of a central zone (ZC) of the panoramic vision, a left lateral camera (CLG) whose field of vision is adapted to allow the acquisition of a left lateral zone (ZLG) of the panoramic vision partially covering the central zone (ZC), and a right lateral camera (CLD) whose field of vision is adapted to allow the acquisition of a right lateral zone (ZLD) of the panoramic vision partially covering the central zone (ZC) of the panoramic vision, and the screen (10) being in portrait format and comprising three display zones, a left display zone (ZAG), a central display zone (ZAC) and a right display zone (ZAD), the method comprising:
• a reception step (S10) of the images captured by the three optical sensors;
• a step of cropping (S30) the sensor images according to a left horizontal strip (BHG) for the images coming from the sensor of the left side camera (CLG), a right horizontal strip (BHD) for the images coming from the optical sensor of the right side camera (CLD) and a central vertical strip (BVC) for the images coming from the optical sensor of the central camera (CC);
• a step of rotation (S40) of the left horizontal strip (BHG) by 90° anticlockwise and of the right horizontal strip (BHD) by 90° clockwise;
• an aggregation step (S50) of the three strips so that the restitution of the panoramic vision on the screen (10) is constituted, on the left display area (ZAG), of the left horizontal strip (BHG) rotated by -90°, on the central display area (ZAC), of the central vertical strip (BVC) and, on the right display area (ZAD), of the right horizontal strip (BHD) rotated by +90°; and
• a display step (S70) of the image thus aggregated on the screen (10).
10. The display method according to claim 9 further comprising, before the cropping step, a correction and transposition step in which the sensor images are corrected for the distortion generated by the cameras.
11. Display method according to claim 10, in which the correction and transposition of the sensor images is done:
- by projection of the pixels of the sensor on the internal surface of a hemisphere of radius R1 and whose center (C) and the vertex (S) are located on a main optical axis of the camera, the coordinates of the sensor pixels projected on the half-sphere being calculated thanks to the theorem of Thales and the equation of the sphere; then
- by projection of the points of the half-sphere on the internal surface of a vertical half-cylinder of radius R2 encompassing the half-sphere and whose axis of symmetry passes through the center (C) of the half-sphere; and finally
- by projecting the points of the half-cylinder on a plane tangent to the half-cylinder thus forming the corrected image to be cropped.
12. Display method according to claim 10, in which the correction and transposition of the sensor images is done:
- by projection of the pixels of the sensor on the internal surface of a hemisphere of radius R1 and whose center (C) and the vertex (S) are located on a main optical axis of the camera, the coordinates of the sensor pixels projected on the half-sphere being calculated thanks to the theorem of Thales and the equation of the sphere; then
- the image projected on the half-sphere is then projected onto three rectangles, a central rectangle, a right rectangle and a left rectangle, these three rectangles having the same orientation as the sensor and each comprising a vertical central axis and a horizontal central axis, and such that the central rectangle is parallel to the plane of the sensor and tangent to the apex (S) of the hemisphere, the vertical central axis of the right rectangle is tangent to the right vertical edge of the central rectangle, the vertical central axis of the left rectangle is tangent to the left vertical edge of the central rectangle, the horizontal central axis of the right rectangle has an angle a with respect to the horizontal central axis of the central rectangle, and the horizontal central axis of the left rectangle has an angle -a with respect to the horizontal central axis of the central rectangle, the corrected image to be cropped being obtained by aggregating a central vertical strip of the image obtained by projection on the central rectangle, a right vertical strip of the image obtained by projection on the right rectangle and a left vertical strip of the image obtained by projection on the left rectangle.
13. Computer program product downloadable from a communication network and/or recorded on a computer-readable medium and/or executable by a processing unit (UT), characterized in that it includes instructions for implementing the display method according to at least one of claims 9 to 12.
Patent family:
- FR3079180B1 | published 2020-03-13
- EP3768555A1 | published 2021-01-27
- WO2019180001A1 | published 2019-09-26
Legal status:
- 2019-03-22 | PLFP | Fee payment | Year of fee payment: 2
- 2019-09-27 | PLSC | Search report ready | Effective date: 2019-09-27
- 2020-03-19 | PLFP | Fee payment | Year of fee payment: 3
- 2021-03-23 | PLFP | Fee payment | Year of fee payment: 4
Priority:
- FR1852471A | filed 2018-03-22 | PANORAMIC VISION SYSTEM DISPLAYED ON A PORTRAIT SCREEN
- PCT/EP2019/056810 | filed 2019-03-19 | Panoramic vision system the view of which is displayed on a portrait screen
- EP19710434.2A | filed 2019-03-19 | Panoramic vision system the view of which is displayed on a portrait screen