DEVICE FOR DETERMINING THE STATE OF ATTENTION OF A VEHICLE DRIVER, ON-BOARD SYSTEM COMPRISING SUCH A DEVICE, AND ASSOCIATED METHOD
Patent abstract:
The invention relates to a device (10) for determining a state of attention of a driver (4) of a vehicle, comprising:
- an image capture unit on board said vehicle (1), said image capture unit being adapted to capture at least one image of a detection zone (D) located in said vehicle (1), and
- an image processing unit adapted to receive said captured image and programmed to determine the state of attention of the driver (4) as a function of the detection of the presence of a distraction object in one of the hands of the driver (4) located in the detection zone (D).
An on-board system comprising such a device and an associated method are also provided.
Publication number: FR3063557A1
Application number: FR1700222
Filing date: 2017-03-03
Publication date: 2018-09-07
Inventor: Frederic Autran
Applicant: Valeo Comfort and Driving Assistance SAS
Main IPC class:
Patent description:
Holder(s): VALEO COMFORT AND DRIVING ASSISTANCE, simplified joint-stock company.
Agent(s): VALEO COMFORT AND DRIVING ASSISTANCE.

DEVICE FOR DETERMINING THE STATE OF ATTENTION OF A VEHICLE DRIVER, ON-BOARD SYSTEM COMPRISING SUCH A DEVICE, AND ASSOCIATED METHOD.

Technical field to which the invention relates

The present invention relates to a device for determining the state of attention of a vehicle driver. It also relates to an on-board system comprising such a device, and to a method associated with such a device. Finally, it finds a particularly advantageous application in the case of autonomous vehicles, in particular autonomous motor vehicles.

Technological background

It is known to monitor the driver of a motor vehicle using a monitoring device adapted to determine a state of alertness of the driver, in particular to prevent him from falling asleep at the wheel. Depending on the state of alertness determined, the monitoring device alerts the driver to prevent him from getting into a dangerous situation. Such a monitoring device deduces the state of alertness of the driver as a function of behavioral parameters associated with the driver and/or of operating parameters of the vehicle. In practice, the behavioral parameters, for example the closing rate of the eyelids or the direction of the gaze, are obtained by analyzing images of the driver's head, while the operating parameters of the vehicle, for example the steering wheel rotation angle, the vehicle speed or the driver's action on certain controls, are obtained from the vehicle's physical sensors.

However, in certain cases it may be necessary to know not only the state of alertness of the driver, but also his general state of attention, which is not always possible from the behavioral parameters associated with the driver's face alone and/or from the vehicle operating parameters.

Object of the invention

In order to remedy the aforementioned drawback of the state of the art, the present invention provides an on-board device for determining a state of attention of a vehicle driver. More particularly, according to the invention, there is proposed a device for determining a state of attention of a vehicle driver comprising:
- an image capture unit on board said vehicle, said image capture unit being adapted to capture at least one image of a detection zone located in said vehicle, and
- an image processing unit adapted to receive said captured image and programmed to determine the driver's state of attention as a function of the detection of the presence of a distraction object in one of the driver's hands located in the detection zone.
Within the meaning of the invention, a distraction object is an object other than a driving member of the vehicle. It is an object capable of distracting the driver from his driving and of occupying his hand, so that said distraction object prevents the driver from interacting safely with the driving members of the vehicle.

Thus, the device according to the invention makes it possible to determine the state of attention of the driver as a function of a state of occupation of at least one of the driver's hands, chosen from an "occupied" state in which the hand holds a distraction object, and a "free" state in which it holds no distraction object and is therefore free to interact with the driving members of the vehicle. When at least one of the hands is in the occupied state, the driver is in a state of lower attention than when both hands are in the free state, because the driver may be hindered from intervening quickly on the driving members of the vehicle if he is holding a distraction object in his hand.

Other non-limiting and advantageous characteristics of the device according to the invention are as follows:
- the image capture unit comprises at least one sensor adapted to capture a three-dimensional image of the detection zone, said three-dimensional image comprising information relating to the distance, relative to said sensor, of at least said distraction object and/or said hand located in the detection zone;
- the image capture unit comprises at least one sensor adapted to capture at least one image of a first nature comprising a first type of information relating to said distraction object and/or to said hand located in the detection zone, and a sensor adapted to capture at least one image of a second nature distinct from the first nature, comprising a second type of information relating to said distraction object and/or to said hand located in the detection zone;
- the image of the first nature is chosen from: a three-dimensional image comprising information relating to the distance, with respect to said sensor, of at least said distraction object and/or said hand located in the detection zone; a two-dimensional image comprising information relating to the luminance of at least said distraction object and/or said hand located in the detection zone; and a thermal image comprising information relating to the temperature of at least said distraction object and/or said hand located in the detection zone (the image of the second nature may also be chosen from the above-mentioned image natures, while being of a nature distinct from the first nature, as already indicated);
- the image capture unit further comprises at least one sensor adapted to capture an image of a third nature distinct from said first and second natures, comprising a third type of information relating to said object and/or to said hand located in the detection zone;
- the image of the third nature is chosen from: a three-dimensional image comprising information relating to the distance, with respect to said sensor, of at least said distraction object and/or said hand located in the detection zone; a two-dimensional image comprising information relating to the luminance of at least said distraction object and/or said hand located in the detection zone; and a thermal image comprising information relating to the temperature of at least said distraction object and/or said hand located in the detection zone;
- the image processing unit is programmed to implement the following steps: b1) locating said driver's hand in said image received
from the image capture unit, and b2) detecting the presence of said distraction object in said driver's hand in order to deduce the state of attention of the driver;
- the image processing unit is programmed to implement step b1) according to the following sub-steps: detecting at least part of an arm of the driver in the image received from the image capture unit, and deducing the position, in said image received from the image capture unit, of said hand of the driver from the place of detection of the part of the driver's arm in the image received from the image capture unit;
- the image processing unit is programmed to implement step b2) according to the following sub-steps: detecting the presence of any object in the hand of the driver, and determining the nature of said object detected in the hand of the driver, in order to deduce the state of attention of the driver;
- the image processing unit implements step b1) from the image of the first nature, and step b2) from the image of the nature distinct from the first nature;
- the image processing unit implements steps b1) and b2) on the basis of each image received from the image capture unit, and implements an additional step according to which it determines a confidence index associated with the detection of the presence of the distraction object in the hand of the driver, as a function of which it deduces the state of attention of the driver;
- the image processing unit comprises processing means using a trained neural network to determine the driver's state of attention from the image of the detection zone captured by the image capture unit.

The invention also relates to an on-board vehicle system comprising:
- a device according to the invention, adapted to determine the state of attention of the driver,
- an autonomous driving unit of said vehicle programmed to control driving members of said vehicle independently of the driver, and
- a decision unit programmed to authorize the driver to at least partially control the vehicle's driving members in the event of determination of a free state of the driver's hand (or of at least one of the two hands, or of both hands), and/or to alert the driver if an occupied state of the driver's hand is determined, and/or to switch autonomous driving to a secure mode.

Thus, when the vehicle is an autonomous motor vehicle, that is to say one whose driving members are controlled independently of the driver, the driver may not resume control, even partial, of the driving members of the vehicle if the state of occupation of his hand is determined as "occupied". If an occupied state is determined for the driver's hand, the autonomous vehicle can be brought by the decision unit of the on-board system to park on the side of the road (or on the emergency lane), in accordance with the aforementioned secure mode. In the case of a manually driven or semi-autonomous vehicle, the detection of a distraction object in at least one of the driver's hands can, for example, be followed by an alert to the driver about the risk taken.

The invention finally proposes a method for determining a state of attention of a vehicle driver, according to which:
a) an image capture unit on board said vehicle captures at least one image of a detection zone located in said vehicle, and
b) an image processing unit receives said captured image and determines the driver's state of attention, as a function of the detection of the presence of a distraction object in at least one of the driver's hands located in the detection zone.
Advantageously, when the driving members of said vehicle are controlled independently of the driver, there is further provided a step according to which the driver is authorized to at least partially control the driving members of the vehicle in the event of determination of a free state of one of the driver's hands, and/or the driver is alerted when an occupied state of one of the driver's hands is determined, and/or the autonomous driving is switched to a secure mode.

The method which has just been proposed may also optionally include steps such as those proposed above in relation to the device for determining a state of attention of the driver (in particular steps b1) and b2) and the possible sub-steps of these steps).

Detailed description of an exemplary embodiment

The description which follows, with reference to the appended drawings given by way of non-limiting example, will make it clear what the invention consists of and how it can be carried out. In the accompanying drawings:
- Figure 1 shows schematically, in front view, a motor vehicle comprising a device according to the invention;
- Figure 2 schematically shows the device of Figure 1 according to two possible embodiments: a first embodiment (in solid lines) in which the image capture unit 11 comprises a single sensor 12, and a second embodiment (in solid and dotted lines, the sensor 13 being shown in dotted lines) in which the image capture unit 11 comprises two sensors 12, 13; and
- Figure 3 shows a flow diagram of the main steps of the method according to the invention.

Device

Figure 1 shows the front of a motor vehicle 1 equipped with a device 10 for determining a state of attention of a driver of the vehicle 1. More specifically, such a device 10 is suitable for determining the state of attention of the driver 4 as a function of a state of occupation of at least one of the hands of said driver 4 of the vehicle 1. Thus, such a device 10 is adapted to detect at least one of the driver's hands in a detection zone D of the vehicle 1, and to determine the state of occupation of this (or these) hand(s) in order to deduce the driver's attention therefrom. The occupancy state is chosen from an "occupied" state in which the hand holds a distraction object, and a "free" state in which it does not hold a distraction object. In this free state, the hand can for example be engaged in driving the vehicle, that is to say act on a driving member of the vehicle, or be empty, for example at rest on an armrest.

In the sense understood here, the distraction object is an object other than a driving member of the vehicle. It is for example a mobile phone, a book, a road map, a GPS receiver, etc. The driving members accessible to the driver's hands are for example the steering wheel 3, the gear lever, the stalks (turn-signal or wiper levers), the switches (such as the hazard warning lights) or the hand brake.

As shown in Figure 2, the device according to the invention comprises:
- an image capture unit 11 on board said vehicle 1, said image capture unit 11 being adapted to capture at least one image of the detection zone D located in said vehicle 1, and
- an image processing unit 15 adapted to receive said captured image and programmed to determine the state of occupation of the hand of the driver 4 located in the detection zone D, as a function of the detection of the presence of a distraction object in said hand.
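By way of illustration only, the mapping just described from hand-occupancy states to the driver's attention state could be sketched as follows. This is a minimal Python sketch, not part of the patent; the names HandState and attention_state are hypothetical.

```python
from enum import Enum

class HandState(Enum):
    FREE = "free"          # empty, or interacting with a driving member
    OCCUPIED = "occupied"  # holding a distraction object

def attention_state(left: HandState, right: HandState) -> str:
    """Overall attention is lowered as soon as either hand holds a
    distraction object; it is high only when both hands are free."""
    if HandState.OCCUPIED in (left, right):
        return "low"
    return "high"
```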
Here, the image capture unit 11 is on board the motor vehicle, that is to say arranged inside the vehicle 1, more precisely inside the passenger compartment of the vehicle 1. The image capture unit 11 comprises at least one sensor 12, 13 adapted to capture an image of the detection zone D. As shown in Figure 1, the detection zone D is located between the gear lever 2 and the driver's front door. The detection zone D thus includes the steering wheel 3 of the vehicle 1 and contains the driver's two hands.

To capture such a detection zone D, the sensor 12, 13 of the image capture unit 11 is for example placed in a front courtesy lamp of the motor vehicle 1, so that it views the detection zone D from above. Alternatively, the sensor could be placed on the dashboard of the vehicle, in a central area of the latter, so that the detection zone would be seen from the front. Depending on the opening angle of the sensor, the detection zone might then contain only one of the driver's two hands. In another variant, the sensor could be placed behind the steering wheel 3 of the vehicle, at the dashboard. The detection zone could then easily contain the driver's right and left hands. Any other location making it possible to have the driver's hands in the field of view of the sensor 12, 13 is also possible for arranging the sensor.

According to a first embodiment of the device 10 according to the invention, shown in Figure 2 in solid lines, the image capture unit comprises a single sensor 12 adapted to capture a three-dimensional image of the detection zone D, said three-dimensional image comprising information relating to the distance, with respect to said sensor, of at least part of the elements of space contained in the detection zone D. These elements of space include in particular the hands of the driver 4 and the distraction object possibly present in the detection zone D. The elements of space can also include elements of the driver's environment, for example elements of the passenger compartment of the vehicle and of the vehicle's driving members such as the gear lever, the steering wheel, an armrest, etc.

The three-dimensional image comprises a cloud of points representing the envelope of the elements of space present in the detection zone D, including the driver's hand, the driver's forearm, and said distraction object liable to be present. The point cloud thus gives information as to the position in space of the elements of space present in the detection zone D, in particular information relating to their distance from said sensor.

Such a sensor 12 suitable for capturing three-dimensional images is known to those skilled in the art and will not be described in detail. It will simply be specified that it could be a time-of-flight (TOF) sensor, such as a time-of-flight camera, adapted to send light towards the driver 4 and to measure the time this light takes to return to said time-of-flight sensor, in order to deduce the three-dimensional image of the detection zone D. As a variant, it could be a stereoscopic sensor comprising at least two cameras, each capturing an image of the detection zone from its own point of view, the images of each camera then being combined to deduce the three-dimensional image of the detection zone.
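As a reminder of the time-of-flight principle just mentioned, the sensor-to-point distance follows from the round-trip travel time of the emitted light. The sketch below is purely illustrative; the function name and the example timing are hypothetical.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance of a point from a time-of-flight sensor: the emitted
    light travels to the point and back, hence the division by two."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# Example: a round trip of about 10 ns corresponds to roughly 1.5 m.
assert abs(tof_distance_m(10e-9) - 1.499) < 0.01
```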
It can also be a structured-light sensor adapted to project a pattern onto the detection zone and to analyze the deformation of this pattern in order to deduce the three-dimensional image of the driver 4.

According to this first embodiment, the image processing unit 15 is adapted to receive the three-dimensional image from the sensor, and programmed to determine, by means of this image, the state of occupation of at least one of the hands of the driver, as a function of the detection of the presence of the distraction object in said hand. More specifically, the image processing unit 15 is programmed to implement the following steps:
b1) detecting at least one of the driver's hands in said image received from the image capture unit 11, and
b2) detecting the presence of said distraction object in the driver's hand in order to deduce the state of occupation of the driver's hand.

The remainder of the description is based on the detection of one hand of the driver. Of course, the same principle can be applied to the other hand of the driver.

The image processing unit 15 for this purpose comprises processing means programmed to detect the image of the driver's hand in the image captured by the image capture unit 11, as well as the presence of the distraction object in this hand. More specifically, the image processing unit 15 is programmed to implement step b1) according to the following sub-steps:
- detect at least part of an arm of the driver in the image received from the image capture unit, and
- deduce the position, in said image received from the image capture unit, of the driver's hand from the place of detection of the part of the driver's arm in this same image.

In practice, the processing means of the processing unit 15 are programmed to identify, among the points of the point cloud, those associated with the image of the driver's arm or forearm. The recognition of the shape of the driver's arm or forearm (step b1)) is here based on a shape recognition algorithm. From the point cloud, the processing means are also programmed to recognize characteristic shapes of elements of space which may be present in the detection zone D, such as part of the gear lever 2, part of the steering wheel 3, part of an armrest, etc. The "shape" of an element of space here corresponds to its outer envelope. The detection of these elements of space can facilitate the detection of the driver's arm.

From the identification of the driver's arm or forearm, the processing means of the processing unit 15 identify a portion of the point cloud in which the driver's hand is likely to be found. The recognition of the shape of the hand is here based on a shape recognition algorithm applied to the determined portion of the point cloud. The processing means are also programmed to identify at least two different regions in said three-dimensional image captured by the image capture unit 11, for example a first region formed by the images of the points closest to the sensor (foreground) and a second region formed by the images of the points furthest from the sensor (background), and to recognize the shape of the arm or forearm of the driver 4 in these regions. Thus, the processing means are adapted to determine the location of the driver's arm, and then to deduce the location of the driver's hand in the real space of the detection zone D.
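A minimal sketch of the foreground/background splitting described above, assuming NumPy, a point cloud whose z axis points away from the sensor, and a hypothetical depth threshold; shape recognition of the arm or forearm would then run on each region separately.

```python
import numpy as np

def split_point_cloud(points: np.ndarray, depth_threshold_m: float = 0.8):
    """Split an (N, 3) point cloud into a near (foreground) and a far
    (background) region by thresholding the z coordinate (the distance
    from the sensor)."""
    near = points[points[:, 2] < depth_threshold_m]
    far = points[points[:, 2] >= depth_threshold_m]
    return near, far
```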
In addition, the image processing unit 15 is programmed to implement step b2) according to the following sub-steps:
- detect the presence of any object in the driver's hand, and
- determine the nature of said object detected in the driver's hand, in order to deduce the state of attention of the driver.

Thus, the processing means are also programmed to identify the nature of the objects detected, according to the shape recognized at the estimated location of the hand in the image. In practice, the processing unit 15 is programmed to deduce the state of occupation of the driver's hand as a function of the shape recognized at the level of the driver's hand in the three-dimensional image. This recognized shape can be that associated with a distraction object (a mobile phone, a glass or a cup, a book, a road map, etc.), or that of elements always present in the detection zone D. As a variant, it would also be possible to deduce the state of occupation of the driver's hand as a function of the distance, in the captured image, between the hand and the possible objects detected.

More specifically, the image processing unit 15 is programmed to determine that the driver's hand is in an occupied state when it is very likely, taking into account the shape recognized at the level of the driver's hand in the three-dimensional image, that the driver is holding a distraction object in this hand. Alternatively, the image processing unit could be programmed to determine that the hand is in an occupied state when the distance between the hand and the distraction object, in the captured three-dimensional image, is less than a predetermined threshold value. On the contrary, it is programmed to determine that the driver's hand is in a free state when it is unlikely, given the shape recognized at the driver's hand, that the driver is holding a distraction object in this hand. Alternatively, the image processing unit could be programmed to determine that the hand is in a free state when the distance between the hand and the detected distraction object, in the captured three-dimensional image, is greater than a predetermined threshold value.

When his hand is in this free state, the driver can react more quickly on the manual control elements of the vehicle, such as the steering wheel, the indicators, the horn, the hazard lights, or even the gear lever, in case of need. The device according to the invention is thus adapted to determine that the overall state of attention of the driver is lower when his hand is in an occupied state than when it is in a free state.

As a variant, still according to this first embodiment of the device according to the invention, instead of implementing steps b1) and b2), the image processing unit 15 comprises processing means including a neural network trained to directly recognize the state of occupation of the hand, or even the state of attention of the driver, from an image of the detection zone D. In practice, the neural network is trained prior to its use in the device 10 according to the invention. To do this, the neural network is supplied as input with a plurality of images of the detection zone D in which the hand is in a free state (that is to say, the hand is empty of any distraction object, or holds a driving member of the vehicle), and the neural network is told that for these images the state of the hand is free.
According to the same principle, the neural network is also supplied with a plurality of images of the detection zone D in which the hand is in an occupied state (that is to say, it is holding a distraction object), and the neural network is told that for these images the state of the hand is occupied. Once trained, the neural network, receiving as input the image of the detection zone D captured by the image capture unit 11, is programmed to output the free or occupied state of the driver's hand, or even directly the state of attention of the driver: high (if the hand is in a free state) or low (if the hand is in an occupied state).

According to a second embodiment of the device 10 according to the invention, shown in Figure 2 both in solid and dotted lines, the image capture unit 11 comprises at least a sensor 12 (in solid lines in Figure 2) adapted to capture at least one image of a first nature comprising a first type of information relating to the elements of space contained in the detection zone D, and a sensor 13 (in dotted lines in Figure 2) adapted to capture at least one image of a second nature distinct from the first nature, comprising a second type of information relating to the elements of space contained in the detection zone D.

As in the first embodiment, the elements of space contained in the detection zone D include in particular the driver's hand and/or the distraction object whose presence is to be detected. They can also include elements of the driver's environment that are naturally present in the detection zone D, such as the steering wheel, the gear lever or the armrest of one of the seats.

In practice, the image capture unit 11 comprises either a single sensor (not shown) adapted to capture both the first-nature and second-nature images, or at least two separate sensors 12, 13 adapted to capture respectively the images of the first and second natures. When the image capture unit 11 comprises two separate sensors 12, 13, it is conceivable that they are arranged in the same place in the passenger compartment of the vehicle 1, for example in the ceiling lamp, behind the steering wheel 3, or else in a central region of the dashboard as previously described. It is also conceivable that each separate sensor is placed at a different location in the passenger compartment of the vehicle, in particular among the locations previously described.

The images of the first and second natures are chosen from:
- a three-dimensional image comprising information relating to the distance, with respect to said sensor, of at least part of the elements of space contained in the detection zone D, that is to say here at least of the driver's hand and/or of the distraction object,
- a two-dimensional image comprising information relating to the luminance of at least part of the elements contained in the detection zone D, that is to say at least of the distraction object and/or of the driver's hand, and
- a thermal image comprising information relating to the temperature of at least part of the elements contained in the detection zone D, that is to say at least of the distraction object and/or of the driver's hand.

To capture a three-dimensional image, the sensor 12, 13 can be one of those described previously for the first embodiment of the device 10, namely a time-of-flight sensor, a stereoscopic sensor, or a structured-light sensor. This type of sensor can in certain cases also be suitable for capturing two-dimensional images.
A conventional photographic sensor, or a camera, is also capable of capturing two-dimensional images. Two-dimensional images are images giving information on the luminance of the elements present in the detection zone D, including the driver's hand and forearm. Conventionally, two-dimensional images are made up of pixels each representing a region of the detection zone D, that is to say that one pixel corresponds to the image of one region of the detection zone D. Each pixel is more or less luminous as a function of the luminance of the corresponding region of the detection zone D which it represents. Here, the two-dimensional images are in black and white, but one could also have two-dimensional images in color, in which case each pixel would represent the chrominance of the corresponding region of the detection zone.

Thanks to the details of the detection zone D given by the two-dimensional images, it is easier for the image processing unit 15 to recognize the nature of the objects included in the detection zone D than when only three-dimensional images are processed, for example by means of conventional shape recognition algorithms.

To capture thermal images, the sensor can be a thermal camera, for example a long-wavelength infrared camera (or LWIR camera, for "Long Wave InfraRed"). The light intensity of the pixels of the thermal images depends on the temperature of the region of the detection zone D corresponding to each pixel: the higher the temperature, the brighter the pixel; the lower the temperature, the darker the pixel. Thus, for example, the driver's forearm and hand will be represented by bright pixels, as will a mobile phone battery or a cup filled with hot liquid. On the contrary, the gear lever, a book or a road map will be represented by darker pixels. Thanks to the thermal images, it is therefore also easier for the processing unit to recognize the nature of the elements of space included in the detection zone D, and in particular to discern the driver's forearm, terminated by the driver's hand, from the rest of these elements of space.

In addition to or as a variant of the second embodiment of the device 10 according to the invention, the image capture unit 11 can also comprise at least one sensor adapted to capture an image of a third nature, distinct from said first and second natures, comprising a third type of information relating to the elements of space contained in the detection zone D. Preferably, the image capture unit 11 will in this case include at least two separate sensors, possibly arranged at different locations in the vehicle interior, to capture the images of the first, second and third natures. The image of the third nature is chosen from the images described above, namely a three-dimensional image, a two-dimensional image or a thermal image, captured for example by the sensors described above. According to this variant, the image processing unit 15 takes into account at least three images of different natures, namely a three-dimensional image, a two-dimensional image and a thermal image, to determine the state of occupation of the driver's hand.
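As an illustration of how a thermal image can help discern the driver's forearm and hand, a sketch follows; the temperature range is an assumption (typical skin temperatures), not a value given by the patent.

```python
import numpy as np

def skin_mask(thermal_image_c: np.ndarray,
              t_min_c: float = 28.0, t_max_c: float = 38.0) -> np.ndarray:
    """Boolean mask of pixels whose temperature (degrees Celsius) lies in
    a typical skin range, highlighting the driver's forearm and hand.
    Colder objects such as a book or a road map fall outside the range;
    a warm phone battery or a hot cup may still require shape analysis."""
    return (thermal_image_c >= t_min_c) & (thermal_image_c <= t_max_c)
```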
According to the second embodiment of the device according to the invention, whatever the variant envisaged, that is to say whether the image capture unit 11 captures images of two or of three different natures, the image processing unit 15 is programmed to implement the steps b1) and b2) explained above, namely:
b1) locating said hand of the driver in at least one of said images received from the image capture unit 11,
b2) detecting a distraction object in the hand of the driver 4 in order to deduce the state of occupation of the hand, then the driver's state of attention.

To this end, the image processing unit 15 comprises processing means programmed to implement steps b1) and b2) according to a so-called sequential implementation or according to a so-called concurrent implementation. It is also possible to envisage that steps b1) and b2) are carried out from one or two images of the same nature. However, it is advantageous to use for these steps images of different natures, which give access to different information on the detection zone. It is generally envisaged either to carry out one of steps b1) and b2) from one or more images of a first nature and to carry out the other step b1) or b2) from one or more images of another, different nature, or to carry out at least one of the two steps b1) and b2) from several images of different natures, or to carry out each step b1) and b2) from several images of different natures. The combination of several images of different natures to carry out steps b1) and/or b2) makes it possible to improve the accuracy of the detection of the hand and/or of the distraction object.

Example of sequential implementation

For clarity, the sequential implementation is described in the case where the image capture unit 11 captures images of two different natures. Of course, the same principle can be applied when capturing images of three different natures. According to this example of the sequential implementation, the image processing unit 15 implements step b1) from at least one first captured image of the first nature, and step b2) from at least one second captured image of a nature different from the first nature.

In practice, the first image, of the first nature, is for example one of the three-dimensional images captured by the image capture unit 11. The processing means are programmed to segment this three-dimensional image, that is to say to identify at least two different regions in said three-dimensional image, to recognize the shape of the forearm or of the arm of the driver 4 in each of these regions, as described for the first embodiment of the device 10, and to deduce the position of the driver's hand, logically at the end of the recognized arm. The two aforementioned regions correspond for example, as mentioned previously, to a region near to and a region far from the sensor. It is advantageous to recognize a part of the arm (for example the forearm) of the driver from the three-dimensional image because it is thus possible to know the region, near or far with respect to the sensor, in which the driver's arm is located.

After locating the driver's hand by means of the first, three-dimensional image, the processing means of the image processing unit 15 are programmed to detect the possible presence of an object in the driver's hand by means of the second image, of the second nature, captured by the image capture unit 11.
For this purpose, the processing means of the image processing unit 15 are programmed to recognize the characteristic shape of objects possibly present in the detection zone D, at the position of the hand as evaluated as explained above, from the two-dimensional and/or thermal images. The processing means are then programmed to determine, by combining the information from the first and second images (of the first and second natures), whether an object is present in the driver's hand or not and, if so, to determine the nature of this object in order to assess whether it is a distraction object (or a driving member).

When no object image is detected in the second image (of the second nature) captured by the image capture unit 11, that is to say when no object is present in the detection zone D, the image processing unit 15 is programmed to determine that the hand is in a free state. When at least one object is detected in the second image (of the second nature) captured by the image capture unit 11, the processing unit 15 is programmed to identify the nature of the detected object. The image processing unit 15 is then programmed to determine that the hand is in an occupied state if the detected object is a distraction object (and to determine that the hand is in a free state, in the sense of available for driving, if the detected object is a driving member).

Of course, as in the first embodiment of the device, the shape recognized at the level of the hand in the images of the first and second natures makes it possible to confirm the probability that it holds a distraction object, or to greatly reduce this probability. The shape of the outline of the hand can thus be taken into account by the image processing unit 15 for determining the state of occupation of the hand.

Advantageously, the sequential implementation makes it possible to use the most relevant information from each of the images of the first and second natures, depending on the step to be performed. Of course, in practice, it would be possible to carry out the sequential implementation using the thermal or two-dimensional images as the image of the first nature, and the three-dimensional, two-dimensional or thermal images as the image of the second nature, as long as they are of a nature distinct from the first nature. It would also be possible to use images of the third nature to facilitate the step of recognizing the shape of the driver's hand and/or forearm, and/or to facilitate the step of detecting the object in the detection zone D.

Example of concurrent implementation

According to the concurrent implementation, the image processing unit 15 implements steps b1) and b2) from each image received from the image capture unit 11, and implements an additional step according to which it determines a confidence index associated with the detection of the presence of the object in the hand of the driver, as a function of which it deduces the state of occupation of the driver's hand. More specifically, the image processing unit 15 implements steps b1) and b2) independently for each of the images of the first, second and third natures. In practice, steps b1) and b2) are implemented with the three-dimensional images as described in the case of the first embodiment of the device 10 according to the invention.
For the two-dimensional and thermal images, the processing means of the image processing unit 15 are programmed to identify, among the pixels of each image, those associated with the image of the hand and/or the image of the driver's forearm, and those associated with the image of a distraction object possibly present in the detection zone D, using shape recognition algorithms specific to said thermal and two-dimensional images. Depending on the location of the hand and the possible detection of a distraction object in the hand thus located, the image processing unit 15 is programmed to deduce the state of occupation of the driver's hand in these images.

At the end of the processing of each image of the first, second and possibly third nature, the image processing unit 15 determines the confidence index of the processing, that is to say it determines whether the processing has led to a result, and whether this result is certain or rather random. For example, modeling a hand as a palm and five fingers each composed of three phalanges, the confidence index is at its maximum if the palm and five fingers are recognized in the image; the index is lower when fewer elements are recognized, for example when only the palm and two fingers are recognized. The image processing unit 15 determines the actual occupancy state (occupied or free) of the driver's hand from the image whose processing has led to the highest confidence index, that is to say whose result is the most certain.

As a variant, still according to the second embodiment of the device according to the invention, instead of implementing steps b1) and b2) concurrently or sequentially, the image processing unit 15 comprises processing means including a neural network trained to directly recognize the state of occupation of the hand from an image of the detection zone D. In practice, the neural network is trained prior to its use in the device according to the invention, according to the same principle as that described in the case of the first embodiment, except that it is trained with images of the first, second and possibly third natures. The neural network thus trained is adapted to receive as input images of the first, second and possibly third natures, to determine the state of occupation of the driver's hand. Advantageously, the fact of using images of different natures at the input of the neural network makes it possible to increase the reliability of the result given at the output of said neural network.

The device according to the invention finally finds a particularly useful application in an on-board system 100 for a vehicle comprising:
- the device according to the invention, adapted to determine the state of occupation of at least one of the hands of the driver 4 present in the detection zone D,
- an autonomous driving unit 50 of said vehicle 1 programmed to control driving members of said vehicle independently of the driver, and
- a decision unit (not shown) programmed to authorize the driver 4 to at least partially control the driving members of the vehicle 1 if the driver's hand is determined to be in a free state, and/or to alert the driver if an occupied state of the driver's hand is determined, and/or to switch autonomous driving into a secure mode.

Here, the driving members include in particular the steering wheel 3, the accelerator and brake pedals, the gear lever, the indicators, the headlights and the wipers. Thus, the vehicle's driving members include all of the elements of the vehicle used for driving.
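A minimal sketch of the decision-unit behaviour just enumerated; the function name, inputs and returned labels are hypothetical, and this is an illustration rather than the patent's implementation.

```python
def decision_unit(hand_free: bool, takeover_requested: bool) -> str:
    """Authorize a (partial) takeover only when the monitored hand is
    free; otherwise alert the driver and, if control must change hands
    anyway, fall back to the secure mode (e.g. parking on the side of
    the road or on the emergency lane)."""
    if hand_free:
        return "authorize_takeover" if takeover_requested else "nominal"
    if takeover_requested:
        return "alert_driver_and_secure_mode"
    return "alert_driver"
```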
The autonomous driving unit 50 is adapted to control the various driving members so that said vehicle is driven without driver intervention. In this situation, the driver is allowed to be insufficiently alert, that is to say to be distracted, and he may for example read a book or look at his mobile phone without danger to driving. It sometimes happens that the driver wants to at least partially regain control of the driving members (or that the autonomous driving unit 50 requires such a takeover by the driver, for example because of traffic conditions). In this case, the decision unit is programmed to authorize the driver to at least partially control the driving members only when his overall state of attention is high, in particular when at least one of his hands is in a free state. On the contrary, when at least one of his hands is in an occupied state (or, as a variant, when both hands are in an occupied state), the decision unit does not authorize a takeover by the driver, but can for example control the display of an alert message intended for the driver to encourage him to be attentive, and/or switch autonomous driving into a secure mode in which the autonomous driving unit 50 controls, for example, the parking of the vehicle on the side of the road (or on the emergency lane).

Method

In Figure 3, the main steps of the method implemented by the device according to the invention are shown in the form of a flowchart. The method according to the invention for determining the state of occupation of at least one of the hands of the driver of the vehicle comprises steps according to which:
a) the image capture unit 11 on board said vehicle 1 captures at least one image of the detection zone D located in said vehicle 1,
b) the image processing unit 15 receives said captured image and determines the state of occupation of one hand of the driver 4 located in the detection zone D, as a function of the detection of the presence of a distraction object in said hand.

More specifically, in step a) (block E1 of Figure 3), the image capture unit 11 of the device 10 according to the invention captures at least one image of the detection zone D. Preferably, it captures two or even three images of the detection zone D, of the same nature or, preferably, of different natures. Preferably, each image of a different nature is captured at a given instant, that is to say that all of the images are captured simultaneously by the image capture unit 11 or within a relatively short interval of time, notably less than a minute. This ensures that the analyzed situation has not changed between the captures of the images of different natures.

In step b), the image processing unit 15 receives the image or images captured in step a). According to a first possible implementation of the method according to the invention, using the device 10 of the first or second embodiment according to the invention and represented by path (1) in Figure 3, the image processing unit 15 implements the steps described above, namely:
- detect at least part of the driver's arm in at least one received image (block F1);
- locate the driver's hand in said image or images received from the image capture unit 11 (block F2), here as a function of the place of detection of the aforementioned part of the arm;
- detect the presence of said distraction object in the driver's hand in order to deduce the state of occupation of the driver's hand (block F3).
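The flow of path (1) can be summarised as a small pipeline. In the sketch below, the helper callables standing in for blocks F1, F2 and F3 are hypothetical placeholders, not functions defined by the patent.

```python
from typing import Callable, Optional

def determine_occupancy(
    image,
    detect_arm_part: Callable,  # block F1: find part of the driver's arm
    locate_hand: Callable,      # block F2: deduce the hand position from the arm
    detect_object: Callable,    # block F3: look for an object in the located hand
) -> str:
    """Sequential flow of path (1) in Figure 3, with hypothetical helpers."""
    arm_region = detect_arm_part(image)
    hand_position = locate_hand(image, arm_region)
    held_object: Optional[str] = detect_object(image, hand_position)
    # A driving member (e.g. the steering wheel) is not a distraction
    # object, so the hand is still considered "free" in that case.
    if held_object is not None and held_object != "driving_member":
        return "occupied"
    return "free"
```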
When two images of different natures are captured in step a) by the image capture unit 11 according to the second embodiment of the device 10 according to the invention, the implementation of step b) can be sequential or concurrent, as described above.

According to a second possible implementation of the method according to the invention, using the variants of the first and second embodiments of the device 10 in which the image processing unit 15 comprises a trained neural network, represented by path (2) in Figure 3, said image processing unit 15 directly recognizes the state of occupation of the hand from the images received from the image capture unit 11 (block G of Figure 3).

Whatever the implementation of the method envisaged, in step b) the image processing unit 15 sends the occupancy state of the driver's hand to the devices of the vehicle requesting it (block E2 of Figure 3), in particular to the decision unit in the case of an autonomous vehicle, or to an existing driver monitoring device of the vehicle. It could also be envisaged that the device according to the invention informs the driver of the occupied state of at least one of his hands, so as to encourage the latter to refocus on his driving.

Advantageously, an additional step is also provided according to which a state of overall attention of the driver is determined, taking into account the state of occupation of the driver's hand and/or a state of alertness of the driver determined by other known means.

Advantageously, in the case of autonomous vehicles, in a situation where the driving members of said vehicle are controlled independently of the driver, there is further provided a step according to which the driver is authorized to at least partially control the driving members of the vehicle if a free state of the driver's hand is determined, and/or the driver is alerted and/or the autonomous driving mode is switched to a secure mode if an occupied state of the driver's hand is determined.

The device, system and method according to the invention are particularly advantageous in partially or completely autonomous driving situations, during which the driver is authorized to relax his attention, that is to say to present an insufficient state of alertness. The driver's position in these situations can change to the point where he is no longer in front of a possible monitoring device adapted to capture an image of his head to assess his level of alertness. It is therefore very useful to determine the state of occupation of the driver's hands to understand his overall state of attention. In addition, the invention makes it possible to provide information additional to that already provided by a possible driver monitoring device. Finally, the invention applies to any type of vehicle, including transport vehicles such as boats, trucks and trains.
Claims (14)

1. Device (10) for determining a state of attention of a driver of a vehicle (1), comprising:
- an image capture unit (11) on board said vehicle (1), said image capture unit (11) being adapted to capture at least one image of a detection zone (D) located in said vehicle (1), and
- an image processing unit (15) adapted to receive said captured image and programmed to determine the state of attention of the driver (4), as a function of the detection of the presence of a distraction object in one of the hands of the driver (4) located in the detection zone (D).

2. Device (10) according to claim 1, wherein the image capture unit (11) comprises at least one sensor (12, 13) adapted to capture a three-dimensional image of the detection zone (D), said three-dimensional image comprising information relating to the distance, with respect to said sensor (12, 13), of at least said distraction object or said hand located in the detection zone (D).

3. Device (10) according to claim 1, wherein the image capture unit (11) comprises at least:
- a sensor (12) adapted to capture at least one image of a first nature comprising a first type of information relating to said distraction object or to said hand located in the detection zone (D), and
- a sensor (13) adapted to capture at least one image of a second nature distinct from the first nature, comprising a second type of information relating to said distraction object or to said hand located in the detection zone (D).

4. Device (10) according to claim 3, wherein the image of the first nature is chosen from:
- a three-dimensional image comprising information relating to the distance, relative to said sensor (12, 13), of at least said distraction object or said hand located in the detection zone (D),
- a two-dimensional image comprising information relating to the luminance of at least said distraction object or said hand located in the detection zone (D), and
- a thermal image comprising information relating to the temperature of at least said distraction object or said hand located in the detection zone (D).

5. Device (10) according to one of claims 3 and 4, wherein the image capture unit (11) further comprises at least one sensor adapted to capture an image of a third nature distinct from said first and second natures, comprising a third type of information relating to said distraction object or to said hand located in the detection zone (D).

6. Device (10) according to claim 5, wherein the image of the third nature is chosen from:
- a three-dimensional image comprising information relating to the distance, with respect to said sensor, of at least said distraction object or said hand located in the detection zone,
- a two-dimensional image comprising information relating to the luminance of at least said distraction object or said hand located in the detection zone, and
- a thermal image comprising information relating to the temperature of at least said distraction object or said hand located in the detection zone.

7. Device (10) according to one of claims 1 to 6, wherein the image processing unit (15) is programmed to carry out the following steps:
b1) locating (F1, F2) said hand of the driver (4) in said image received from the image capture unit (11),
b2) detecting (F3) the presence of said distraction object in said hand of the driver (4) in order to deduce the state of attention of the driver (4).

8.
Device (10) according to claim 7, wherein the image processing unit (15) is programmed to implement step b1) according to the following sub-steps:
- detect (F1) at least part of an arm of the driver in the image received from the image capture unit, and
- deduce (F2) the position, in said image received from the image capture unit, of said hand of the driver from the place of detection of the part of the driver's arm in the image received from the image capture unit.

9. Device (10) according to one of claims 7 and 8, wherein the image processing unit (15) is programmed to carry out step b2) according to the following sub-steps:
- detect the presence of any object in the driver's hand, and
- determine the nature of said object detected in the driver's hand, in order to deduce the state of attention of the driver.

10. Device (10) according to one of claims 7 to 9 taken in dependence on one of claims 3 to 6, wherein the image processing unit (15) implements step b1) from the image of the first nature, and step b2) from the image of the nature distinct from the first nature.

11. Device (10) according to one of claims 7 to 9, wherein the image processing unit (15) implements steps b1) and b2) from each image received from the image capture unit (11), and implements an additional step according to which it determines a confidence index associated with the detection of the presence of the distraction object in the hand of the driver (4), as a function of which it deduces the state of attention of the driver (4).

12. Device (10) according to one of claims 1 to 6, wherein the image processing unit (15) comprises processing means using a neural network trained to determine the state of attention of the driver from the image of the detection zone (D) captured by the image capture unit (11).

13. On-board system (100) for a vehicle comprising:
- a device (10) according to one of claims 1 to 12, adapted to determine the state of attention of a driver (4),
- an autonomous driving unit (50) of said vehicle (1) programmed to control driving members of said vehicle (1) independently of the driver, and
- a decision unit programmed to authorize the driver to at least partially control the vehicle's driving members in the event of determination of a free state of the driver's hand.

14. Method for determining a state of attention of a driver (4) of a vehicle (1), according to which:
a) an image capture unit (11) on board said vehicle (1) captures at least one image of a detection zone (D) located in said vehicle,
b) an image processing unit (15) receives said captured image and determines the state of attention of the driver, based on the detection of the presence of a distraction object in one of the driver's hands located in the detection zone (D).