Method and associated device to control at least one parameter of a vehicle
Patent abstract:
Method and associated device for controlling at least one parameter (12) of a vehicle (1), comprising: i) representing a parameter (12) in a portion (53) of a graphic representation device (2); ii) detecting an orientation (4) of an eyeball (31) of an occupant (3); iii) determining a match between the orientation (4) of the eyeball (31) and the portion (53); iv) memorizing the at least one parameter (12) represented in the portion (53) if there is a coincidence between the orientation (4) of the eyeball (31) and the portion (53); v) determining an action of the occupant (3); vii) controlling the at least one parameter (12), based on a memorized parameter (12) and on the determined action of the occupant (3). This increases the efficiency in determining whether or not the occupant (3) of the vehicle (1) wishes to act on an actuator (13). (Machine-translation by Google Translate, not legally binding)
Publication number: ES2718429A1
Application number: ES201731496
Filing date: 2017-12-29
Publication date: 2019-07-01
Inventors: Pamplona Jose Antonio Garcia; Estevez Borja Regueiro; Redondo Antonio Alcocer; Martinez Daniel Molina
Applicant: SEAT SA
Main IPC class:
Patent description:
[0001] [0002] Method and associated device to control at least one parameter of a vehicle [0003] [0004] OBJECT OF THE INVENTION [0005] [0006] The object of the present patent application is a method for controlling at least one parameter of a vehicle according to claim 1, and its associated device according to claim 14, both incorporating notable innovations and advantages. [0007] [0008] BACKGROUND OF THE INVENTION [0009] [0010] The dashboard screens of current vehicles display diverse information to the driver or occupant. Current vehicles may include systems for detecting the orientation of the driver's gaze in order to determine whether said information is being perceived and, if necessary, whether the driver wishes to operate on said information. [0011] [0012] In this regard, it is known in the state of the art, as reflected in document US20140348389, a method comprising one or more processors for controlling one or more devices arranged in the vehicle based on the detection of the occupant's gaze, in which the method comprises the steps of: i) receiving one or more images from a camera attached to the car; ii) based on these images, locating one or more characteristics associated with the occupant's body; iii) determining the direction of the occupant's gaze based on one or more bodily features of the occupant; iv) controlling one or more devices based on the at least one direction of the gaze. [0013] [0014] Thus, and in view of the above, there is still a need for a method to manage the multiplicity of components on which to interact (radio, screen, HUD: Head-Up Display) and of interaction devices while driving, such as the steering wheel, rotary command and control elements, touch control elements, touch screens, etc. In this regard there is also the need to increase the efficiency in determining whether the vehicle user wishes to act on an actuator, or has merely glanced over the actuator with no desire to modify or interact with it. 
[0015] [0016] DESCRIPTION OF THE INVENTION [0017] [0018] The present invention consists of a method and an associated device for controlling at least one parameter of a vehicle. An eye sensor determines the point at which the driver or occupant is looking, making it possible to determine the component on which to interact and the element or region within that component. In this way it is possible to interact with the elements of the vehicle. [0019] [0020] Regarding the document outlined in the state of the art, it is observed that in US20140348389 the control of the device or screen is based on the direction of the gaze, which implies that the driver must focus his gaze on the device or screen while the control or actuation is being carried out. More in detail, the driver must focus his gaze on a graphic representation device and, while focusing his visual attention on the graphic representation device, interact by means of an actuator, such as a button on the instrument panel. This implies a loss of concentration on driving tasks for a long period of time. [0021] [0022] The differential technical element of the method of the present invention is the presence of the steps involved in memorizing the last segment or area of the screen on which the user has focused his gaze. Thus, the driver is spared from having to look at the icon on which he wants to act for the entire duration of the actuation: the last icon or parameter on which the coincidence has occurred is memorized, allowing the actuation without having to look continually. Coincidence means the fact that the driver or user looks at a certain area of the screen, and this area includes a parameter or function of the vehicle that can be modified by the user. 
[0023] [0024] Related to the objective of increasing accuracy, another problem detected in the state of the art is the detection of the driver's gaze at all times, without distinguishing whether the driver wants to act on the icon or area of the screen over which his gaze has passed, or has simply looked across that segment of the screen with no intention of activating it or acting on it. In a typical driving situation, the user looks at the steering wheel of the vehicle and, in the transition to refocusing his gaze on the road, that is, when looking up from the steering wheel to the windshield, his gaze coincides with some activatable area lying in the path of the change of gaze. For example, it can coincide with a graphic display device arranged in the area of the speed meter or speedometer, or with a HUD. This highlights the problem that the user may not want to interact with or activate the vehicle icon or function. [0025] [0026] Thus, and more specifically, the method of controlling at least one parameter of a vehicle comprises the steps of: [0027] i) representing at least one parameter in at least one graphic representation device, where the at least one parameter is represented in a portion of the at least one graphic representation device; [0028] ii) detecting an orientation of at least one eyeball of a vehicle occupant; [0029] iii) determining a match between the orientation of the at least one eyeball and the portion of the at least one graphic display device; [0030] iv) memorizing the at least one parameter represented in the portion if the coincidence between the orientation of the at least one eyeball and the portion is determined; [0031] v) determining an action of the occupant; and [0032] vii) controlling the at least one parameter, where controlling the at least one parameter is based on at least one memorized parameter and on the determined action of the occupant. 
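The sequence of steps above can be sketched as a small control loop. This is a minimal illustration only, not the patented implementation: all class names, the rectangle-based gaze matching, and the numeric action interface are assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Parameter:
    # A modifiable vehicle function, e.g. a cruise-control speed (hypothetical)
    name: str
    value: float
    def apply(self, delta: float) -> None:
        self.value += delta

@dataclass
class Portion:
    # Rectangular portion of a graphic representation device (x0, y0, x1, y1)
    x0: float
    y0: float
    x1: float
    y1: float
    parameter: Parameter
    def contains(self, gaze: Tuple[float, float]) -> bool:
        gx, gy = gaze
        return self.x0 <= gx <= self.x1 and self.y0 <= gy <= self.y1

class GazeParameterController:
    """Steps ii)-vii): match the gaze orientation against each portion,
    memorize the matched parameter, then let a later, separate action
    control it without requiring a simultaneous look."""
    def __init__(self, portions: List[Portion]):
        self.portions = portions
        self.memorized: Optional[Parameter] = None  # step iv) storage

    def on_gaze_sample(self, gaze: Tuple[float, float]) -> None:
        for portion in self.portions:              # iii) determine a match
            if portion.contains(gaze):
                self.memorized = portion.parameter  # iv) memorize

    def on_occupant_action(self, delta: float) -> None:
        # v) + vii): the action controls the memorized parameter even if
        # the occupant is no longer looking at it
        if self.memorized is not None:
            self.memorized.apply(delta)
```

Note that `on_occupant_action` never consults the current gaze, which captures the non-simultaneity point made later in the description: the selection survives after the gaze has moved on.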
[0033] [0034] In this way the driver can interact with any component of the vehicle, said component or function of the vehicle being selectable by gaze, and can also use different interaction paths, such as voice or gestures, on different components and graphic representation devices (radio, screen, HUD) selectable by gaze. [0035] [0036] Note that a portion of at least one graphic representation device can also be understood as the entire graphic representation device of the vehicle. An actionable element is to be understood as an element that can be modified, that is, the parameter associated with the information represented by said actionable element is modifiable, or capable of being modified, by the user, such as a song on the radio or an air-conditioning fan intensity. [0037] Advantageously, the stage of determining an action of the occupant is a separate action from the stage of determining the coincidence between the orientation of the at least one eyeball and the portion of the at least one graphic representation device. A separate action means a non-simultaneous action. In this way, the user does not have to be looking at the command to be activated simultaneously while acting on the actuator. In this regard, it should be noted that there may be an overlap between the two stages but, in general, it is not necessary for said overlap or simultaneity to occur, because the portion of the at least one graphic representation device has been memorized and determined as an actionable parameter. 
[0038] [0039] In a preferred embodiment of the invention, the method comprises an additional step of determining whether the at least one parameter represented in the portion of the at least one graphic representation device is operable by the occupant, where step iv) comprises memorizing the at least one parameter represented in the portion if the at least one parameter is determined as actionable and if the coincidence between the orientation of the at least one eyeball and the portion is determined. This improves the accuracy of the actuation system, by not performing any operation as long as the parameter is not determined as actionable. An actionable parameter is understood as a function of the vehicle that is capable of being modified by the user. On the other hand, there will be other zones or portions of the graphic representation device that will not be capable of being modified by the user, such as a representation of a fault or a speedometer. [0040] [0041] According to another aspect of the invention, step iii) comprises determining a start of the coincidence between the orientation of the at least one eyeball and the portion of the at least one graphic representation device, and determining an end of the coincidence between the orientation of the at least one eyeball and the portion of the at least one graphic display device. In this way, an improvement in the accuracy of the detection of the last image portion that the occupant has looked at can be offered, making it possible to quantify the period of time during which said coincidence has taken place. [0042] [0043] More specifically, the method comprises counting a period of time between the beginning and the end of the coincidence between the orientation of the at least one eyeball and the portion of the at least one graphic representation device, where step iv) comprises memorizing the at least one parameter represented if the period of time counted is greater than a predefined value. 
In this way, the processing resources and the memory of the device on which the method of the present invention is executed are optimized, since processing and memory resources are used only if the specific condition exists that the period of time counted is longer than a predefined value. Counting the period of time makes it possible to ensure that the memorized parameter is the one with which the user really wants to interact, discarding all those parameters at which the user only looks without wanting to modify them. [0044] [0045] According to another aspect of the invention, the predefined value is based on at least one parameter of the vehicle, on a priority of the at least one represented parameter and/or on an actuation history of the determined occupant, so that it is possible to particularize the magnitude of the predefined value depending on a series of circumstances such as the type of parameter, its importance, and the history of handling that parameter. Thus, if an actuation history indicates that the user acts on the windshield wipers when it starts to rain, the predefined time value will be lower. On the contrary, a vehicle parameter that has never been operated or modified by the user can comprise a higher predefined time value. [0046] [0047] In a preferred embodiment of the invention, the method comprises defining a first area in the at least one graphic representation device, where the first area limits an area where the at least one parameter is represented, where the first area is comprised in the portion, and defining a second area, where the second area limits an area where the at least one parameter can be operated by the occupant, where the second area is comprised in the portion. In this way, it is possible to differentiate between the area destined merely for a graphic representation, and the area in which the parameter can also be operated, that is, managed by the user driving the vehicle. 
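The dwell-time gating described above (memorize only when the counted period between start and end of the coincidence exceeds a predefined value) can be sketched as follows. The class name, the use of string identifiers for portions, and the second-based timestamps are illustrative assumptions.

```python
from typing import Optional

class DwellGate:
    """Memorize a parameter only when the coincidence between gaze and
    portion lasts longer than a predefined value (illustrative sketch;
    timestamps are in seconds)."""
    def __init__(self, threshold_s: float):
        self.threshold_s = threshold_s
        self.current: Optional[str] = None    # portion the gaze is on now
        self.start_s: Optional[float] = None  # beginning of the coincidence
        self.memorized: Optional[str] = None  # parameter memorized by step iv)

    def update(self, portion: Optional[str], now_s: float) -> None:
        if portion != self.current:
            # end of the previous coincidence / start of a new one
            self.current = portion
            self.start_s = now_s if portion is not None else None
        elif portion is not None and now_s - self.start_s > self.threshold_s:
            # counted period exceeds the predefined value: memorize
            self.memorized = portion
```

A mere glance across a portion never reaches the threshold, so nothing is memorized and no processing follows, which is the resource optimization the paragraph describes.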
[0048] [0049] Thus, two areas are defined on the screen or graphic display device. A first area where information is displayed, such as the vehicle speedometer or tank level. And a second area, predefined as the control area, where the user's gaze is to be detected and the parameter is memorized as a parameter capable of being operated, such as, for example, a cruise control speed represented within the first area, which represents the speedometer. This second area is preferably smaller than the first area in order to limit the section where the driver's gaze can be detected, thus avoiding false positives. [0050] [0051] Note that when the driver directs his gaze to this second area, and/or the detection time of his gaze is longer than a predefined time, coincidence will occur and the icon functionality will be activated. At that time, the system memorizes that function as the actionable function, its parameters subsequently being modifiable according to the occupant's needs. [0052] [0053] Advantageously, the method comprises an additional step of generating a first visual effect in the graphic representation device, where the first visual effect is based on the coincidence determined between the orientation of the at least one eyeball and the first area. In this way the driver receives a visual cue, so that he is aware that the direction of his gaze coincides with information represented in the first area of the graphic display device. [0054] [0055] More particularly, defining the second area comprises establishing a surface of the second area based on at least one parameter of the vehicle, on a priority of the at least one parameter represented and/or on an actuation history of the determined occupant. In this way the surface of the second area is determined in a way personalized to a series of variables such as the type of parameter, its importance and/or urgency, and the driving history of the driver. 
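The nesting of the two areas can be illustrated with a simple hit test: a gaze point inside the smaller, actionable second area takes precedence over one that only falls in the informational first area. Rectangle representation and effect names are assumptions made for this sketch.

```python
from typing import Optional, Tuple

Rect = Tuple[float, float, float, float]  # (x0, y0, x1, y1)

def inside(gaze: Tuple[float, float], area: Rect) -> bool:
    x0, y0, x1, y1 = area
    return x0 <= gaze[0] <= x1 and y0 <= gaze[1] <= y1

def visual_effect(gaze: Tuple[float, float],
                  first_area: Rect, second_area: Rect) -> Optional[str]:
    """Choose the visual cue for a gaze point: the second (actionable)
    area, contained in the first, takes precedence over the first
    (informational) area. Effect names are illustrative."""
    if inside(gaze, second_area):
        return "second_effect"   # the parameter may be memorized here
    if inside(gaze, first_area):
        return "first_effect"    # information only: a visual cue
    return None
```

Shrinking `second_area` relative to `first_area` directly reduces false positives, since fewer gaze points qualify for memorization.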
In this way, a coincidence between the user's gaze and a given vehicle parameter is facilitated, favoring that the gaze be comprised in the second area of the graphic representation device. [0056] [0057] In a preferred embodiment of the invention, step iii) comprises determining a coincidence between the orientation of the at least one eyeball and the second area, so as to specify the situation in which the driver is in a position to act on at least one vehicle parameter. [0058] [0059] Additionally, the method comprises an additional step of generating a second visual effect in the graphic representation device, where the second visual effect is based on the coincidence determined between the orientation of the at least one eyeball and the second area. In this way a second visual effect, different from the first visual effect, is presented in order to highlight the information and/or parameters that can be modified. [0060] In a preferred embodiment of the invention, step v) comprises determining an action of the occupant by means of an actuator, where the actuator is unique for a plurality of graphic display devices. In this way the driver always turns to the same actuator, which makes operation more comfortable and minimizes the chances of confusion over which actuator to handle. The parameter of the vehicle to be operated is thus selected by focusing the eye on said parameter, and the value of the vehicle parameter is modified by means of a single actuator of the vehicle. [0061] [0062] More particularly, the actuator is a voice control, a gesture control or at least one button, where the at least one button is arranged in an instrument cluster and/or on a steering wheel. In this way the driver has several interaction options, being able to choose the one that best suits his needs. 
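The single-actuator idea above can be sketched as a router: gaze selects a target parameter on any of several displays, and the same physical actuator then modifies whichever parameter was selected last. All identifiers and the numeric parameter store are hypothetical.

```python
from typing import Dict, Optional, Tuple

class SingleActuatorRouter:
    """One actuator (e.g. a steering-wheel button) serves a plurality of
    displays: gaze selects the target parameter, and the actuator then
    modifies the last selection (illustrative sketch)."""
    def __init__(self, parameters: Dict[Tuple[str, str], float]):
        # keys are (display_id, parameter_name) pairs
        self.parameters = parameters
        self.selected: Optional[Tuple[str, str]] = None

    def select_by_gaze(self, display_id: str, name: str) -> None:
        self.selected = (display_id, name)

    def on_actuator(self, delta: float) -> None:
        # The same button acts on whatever was selected, regardless of
        # which display shows it
        if self.selected is not None:
            self.parameters[self.selected] += delta
```

Routing by last selection is what lets one button group serve the instrument cluster, the center screen, and a HUD without per-display controls.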
[0063] [0064] On the other hand, it should be indicated that a data processing device, preferably integrated in the vehicle, comprises at least one memory and one or more programs, wherein the one or more programs are stored in said memory and configured to be executed by the data processing device. Said data processing device comprises means for executing the method to control the at least one parameter of the vehicle. [0065] [0066] The attached drawings show, by way of non-limiting example, a method and associated device for controlling at least one parameter of a vehicle, and the physical elements for its application, constituted according to the invention. Other features and advantages of said method and associated device for controlling at least one parameter of a vehicle, object of the present invention, will be apparent from the description of a preferred, but not exclusive, embodiment illustrated by way of non-limiting example in the accompanying drawings, in which: [0067] [0068] BRIEF DESCRIPTION OF THE DRAWINGS [0069] [0070] Figure 1.- A front perspective view of the driver's position of a vehicle with at least one graphic representation device, in accordance with the present invention. [0071] Figure 2.- A perspective view of the pilot and co-pilot positions in a vehicle, in accordance with the present invention. [0072] Figure 3.- A perspective view of the position of the driver of a vehicle with at least one graphic representation device, in accordance with the present invention. [0073] Figure 4A.- A representation of the speedometer of a vehicle, with a first and a second area, in accordance with the present invention. [0074] Figure 4B.- A representation of the speedometer of a vehicle, with a first and a second area and a cursor representing the gaze of the driver, in accordance with the present invention. 
[0075] Figure 5A.- A plan view of the driver's position, performing a gesture operation, in accordance with the present invention. [0076] Figure 5B.- A plan view of the driver's position, performing a voice operation, in accordance with the present invention. [0077] Figure 6.- A schematic representation of the constituent elements of the device of the present invention. [0078] [0079] DESCRIPTION OF A PREFERRED EMBODIMENT [0080] [0081] In view of the aforementioned figures and according to the numbering adopted, an example of a preferred embodiment of the invention can be observed therein, which comprises the parts and elements indicated and described in detail below. [0082] [0083] In Figure 1 a front view of the driver's position of a vehicle 1 with at least one graphic display device 2 can be seen illustratively. The instrument panel 15 comprising the steering wheel 16 can also be seen. Specifically, there are two graphic display devices 2, one arranged in the central area of the instrument panel 15 and another arranged behind the steering wheel 16. [0084] [0085] Figure 2 shows, in an illustrative way, a perspective view of the pilot and co-pilot positions in a vehicle 1. Both pilot or driver and co-pilot or passenger can be considered occupants 3. In said figure the orientation 4 of the eyeball 31 can be seen, the gaze 32 of the occupant 3 being a relevant body variable for the handling of the parameters of the vehicle 1 in the context of the present invention. Other relevant body variables for handling the parameters of the vehicle 1 may be the voice 33 and the gesture 34. [0086] [0087] Figure 3 shows, in an illustrative way, a perspective view of the position of the driver or occupant 3 of the vehicle 1 with at least one graphic display device 2. 
Likewise, at least one actuator 13 can be seen in the form of a button 14, located on the steering wheel 16, although it could be located at another accessible point of the instrument cluster 15. The present invention aims, through a single set of actuators 13, such as a group of buttons 14 arranged on the steering wheel 16, to interact with different graphic display devices 2 arranged in the vehicle 1. [0088] [0089] Thus, the method of the present invention is intended to enable control by an occupant 3 of a plurality of functions of the vehicle 1. More specifically, it is intended that the occupant 3 modify a parameter 12 of said functions of the vehicle 1 in the manner expressed below: [0090] [0091] - The function is represented in at least one graphic representation device 2, specifically in a certain area of the graphic representation device 2. There may simultaneously be more than one function represented in the same graphic representation device 2. Usually, a function of the vehicle 1 is represented by means of a parameter 12, which will be capable of being modified by the occupant 3. [0092] - In order to know which function or parameter 12 of the vehicle 1 is to be modified, an orientation 4 of at least one eyeball 31 of the occupant 3 is detected, in order to find a match between said orientation 4 of the at least one eyeball 31 and the position or portion 53 of the graphic display device 2 where the parameter 12 is represented. [0093] - When a coincidence between the orientation 4 of the at least one eyeball 31 and the portion 53 of the graphic display device 2 is determined, the parameter 12 or function of the vehicle 1 is memorized, so that said function will be the last one on which the occupant 3 of the vehicle 1 has focused attention. 
[0094] - If an action is detected on an actuator 13 of the vehicle 1, the parameter 12 or function of the vehicle 1 that has been memorized will alter its value according to the needs of the occupant 3, in response to the action on the actuator 13. [0095] [0096] It is therefore emphasized that, from a control of the position of the eyeball 31, the function or parameter 12 of the vehicle 1 to be modified is determined and, through a variation on the actuator 13, a magnitude to be modified of said parameter 12 is determined. The two actions of determining an action of the occupant 3 and determining a coincidence between the orientation 4 of the at least one eyeball 31 and the portion 53 of the graphic display device 2 are separate actions and do not have to be simultaneous, so that the occupant 3 does not have to be staring at the function or parameter 12 represented while acting on the actuator 13 to modify its magnitude. [0097] [0098] In Figure 4A an illustrative representation of the speedometer of a vehicle 1 can be seen, with a first area 51 and a second area 52, represented in a central portion 53 of the graphic display device 2. In said first area 51 and second area 52 an information 5 is represented with at least one parameter 12 of the vehicle 1. As an example, in said central position 54 an information is represented, the speedometer, together with a parameter 12 capable of being modified by the occupant 3: a cruise control speed. [0099] [0100] In Figure 4A, a second information 5, shown in a left portion 53 of the graphic display device 2, is further represented. Said second information 5 may be a temperature level of the engine of the vehicle 1. This second information 5 also comprises a first area 51, representing the temperature level of the vehicle's engine, and a second area 52, said second area 52 comprising a parameter 12 of the vehicle 1. Similarly, a further information 5 is shown in a right portion 53 of the graphic representation device 2. 
Said further information 5 may be a revolution counter of the vehicle 1. This information 5 also comprises a first area 51, which represents a magnitude of the vehicle revolutions, and a second area 52, said second area 52 comprising a parameter 12 of the vehicle 1. [0101] [0102] In the example represented in Figure 4A, three items of information 5 that can be operated by the occupant 3 are displayed on the graphic display device 2, that is to say, the occupant 3 can modify a value of a magnitude of the parameter 12. It could be that an information 5 represented in the graphic representation device 2 is not modifiable or actionable by the occupant 3, so that the information 5 comprises only a first area 51, comprising neither a second area 52 nor a representation of a parameter 12. [0103] [0104] In Figure 4B the graphic representation device 2 described above in Figure 4A can be seen illustratively, additionally representing a cursor symbolizing an evolution of the gaze 32 of the occupant 3. [0105] [0106] Thus, the gaze 32 is translated into a cursor, not shown to the occupant 3. Said cursor is represented with a spherical shape, with a size smaller than or equal to the second area 52. Thus, according to an example of operation of the present invention, an orientation 4 of the at least one eyeball 31 of the occupant 3 is detected making a movement from left to right of the graphic display device 2. The gaze 32 is initially focused outside the graphic display device 2 and evolves into the interior of the graphic display device 2. [0107] [0108] Thus, according to a first embodiment, when a coincidence between the orientation 4 of the eyeball 31 and the left portion 53 of the graphic display device, where the engine temperature is represented, is determined, said information 5 or parameter 12 is memorized as a parameter likely to be modified by the occupant 3. 
[0109] [0110] In Figure 5A a plan view of the position of the driver or occupant 3 can be observed illustratively, who operates by means of at least one gesture 34, after observing in the graphic representation device 2 an information 5 with at least one parameter 12. The data processing system or device 11 has memorized the parameter 12 of the engine temperature as the parameter 12 that can be modified by the occupant 3, so that when an action of the occupant 3 is determined, in this particular case by means of at least one gesture 34, the parameter 12 is controlled based on the at least one determined gesture 34. Thus, the orientation 4 of the at least one eyeball 31 need not be focused on the parameter 12 while it is being controlled by the occupant 3. [0111] As an example of actuation, one may look at the area of the graphic representation device 2 showing the volume of a sound system of the vehicle 1, in order to define by gaze the element or parameter 12 on which to interact, and with a hand rotation gesture the volume of said sound system is raised or lowered. [0112] [0113] In Figure 5B a plan view of the position of the driver or occupant 3 can be observed, who performs a voice operation 33 on the parameter 12 selected, by means of the orientation 4 of the eyeball 31, from among the information 5 that appears in the graphic display device 2. The data processing system or device 11 has memorized the parameter 12 of the engine temperature as the parameter 12 that can be modified by the occupant 3, so that when an action of the occupant 3 is determined, in this particular case by means of at least one voice command, the parameter 12 is controlled based on the at least one determined voice command. 
[0114] [0115] More in detail, as shown in Figure 4B, the information 5 is represented in the graphic representation device 2 as follows: a first area 51 is defined in the at least one graphic representation device 2, where the first area 51 limits an area where the at least one parameter 12 is represented, where the first area 51 is comprised in the portion 53, and a second area 52 is defined, where the second area 52 limits an area where the at least one parameter 12 can be operated by the occupant 3, where the second area 52 is comprised in the portion 53. [0116] [0117] It should be specified that the portion 53 may be of the same dimensions as the first area 51. The second area 52 may, in turn, be of the same dimensions as the first area 51, but may also be of smaller dimensions than the first area 51, in order to increase the accuracy with which the driver or occupant 3 has to look at the information 5 in order to act on it. In Figure 4A, three representations are shown where the first areas 51 are larger than the second areas 52. Note that the second areas 52 are not visually represented in the graphic display device 2. [0118] [0119] According to another aspect of the invention, as shown in Figure 4B, the method comprises an additional step of generating a first visual effect 55 in the graphic display device 2. Thus, when the gaze 32 or orientation 4 of the at least one eyeball 31 coincides with the first area 51 represented to the left of the graphic display device 2, a first visual effect 55 is generated in said first area 51. [0120] This would be the case where the eyes or eyeballs 31 only pass through the zones of the first area 51. If this situation occurs, a first visual effect 55 can occur, such as increasing the intensity of the image, changing the color of the contour, or modifying the opacity of the first area 51. 
[0121] [0122] According to a second embodiment, and with reference to Figure 4B, an onset of coincidence between the orientation 4 of the at least one eyeball 31 and the second area 52, represented by dashed lines, is determined. Specifically, the coincidence between the orientation 4 of the at least one eyeball 31 and the second area 52 represented to the left of the graphic display device 2 is determined. This coincidence comprises determining a beginning of the coincidence and an end of the coincidence, that is, knowing when the occupant 3 begins to focus his gaze 32 on the second area 52. [0123] [0124] Additionally, when an onset of coincidence occurs between the orientation 4 of the at least one eyeball 31 and the second area 52, a second visual effect 56 is generated in the graphic display device 2. Thus, the first visual effect 55 comprises modifying a visual appearance of the first area 51, in particular increasing the intensity of the image, changing the color of the contour and/or modifying the opacity of the first area 51. On the other hand, in the case that the eyes or eyeballs 31 pass through the zones of the second area 52 of the image, a second visual effect 56 is generated, different from the first visual effect 55, to emphasize not only that the occupant 3 has focused his gaze 32 on the information 5, but also that said information 5 can be controlled at the will of the occupant 3. [0125] [0126] In order to ensure that the occupant 3 wishes to modify a certain parameter 12 of the vehicle 1, it is established that the parameter 12 will be memorized only if the period between the start and end of the coincidence between the orientation 4 of the at least one eyeball 31 and the second area 52 exceeds a predefined value. 
Thus, if the gaze 32 is focused for more than a predefined time value on the second central area 52 of the graphic display device 2, the cruising speed will be memorized as a parameter 12 capable of being modified by the occupant 3, said action being represented by the second visual effect 56. Otherwise, if the occupant 3 only shifts his gaze over the first area 51 and the second area 52, but his gaze does not remain in the second area 52 for the predefined time value, only a first visual effect 55 will be generated, and the parameter 12 will not be memorized, so it cannot be modified by the occupant 3. [0127] [0128] It is thus observed that both the dimensions of the second area 52 and the predefined time value are two magnitudes that allow adjusting the ease with which the occupant 3 activates a parameter 12 of the vehicle 1. Thus, if the dimensions of the second area 52 are reduced, it will be more complicated for the gaze 32 of the occupant 3 to coincide with said second area 52. Additionally, if the predefined time value is a large value, for example 2 seconds, it will be more complicated for the gaze 32 to remain fixed for those 2 seconds in the second area 52, making it difficult for the parameter 12 represented in that second area 52 to be memorized. On the contrary, if the dimensions of the second area 52 are large, it will be easier for the gaze 32 of the occupant 3 to coincide with said second area 52. Additionally, if the predefined time value is a small value, for example 0.3 seconds, it will be easier for the gaze 32 to remain fixed for 0.3 seconds in the second area 52, making it easier for the parameter 12 represented in that second area 52 to be memorized. [0129] [0130] It should be mentioned, on the other hand, that preferably the predefined value with which the counted time is compared and/or the surface of the second area 52 are based on at least one parameter 12 of the vehicle 1, on a priority of the at least one parameter 12 represented and/or on an actuation history of the determined occupant 3. 
[0131] [0132] The coincidence, based on the size of the second area 52 and the predetermined gaze time, is variable according to the following criteria: [0133]
- Parameters 12 of the vehicle 1: speed, road conditions, etc. If, for example, the vehicle 1 is driving in rain or circulating at high speed, the coincidence is facilitated, that is, the size of the actionable area or second area 52 increases and the predetermined gaze 32 time decreases. [0134]
- Priority of the information 5: coincidence is facilitated with the information 5 that is established as a priority (emergencies, state of the vehicle 1, etc.). [0135]
- Action history: the representative icons of the parameters 12 with which the driver or occupant 3 interacts most are taken into account, facilitating the coincidence. [0136]
[0137] Figure 6 shows, in an illustrative way, a schematic representation of the elements making up the device of the present invention, among which are a data processing device 11, an actuator 13, at least one graphic representation device 2, and an eyeball follower device 41. Specifically, it is observed that, there being a plurality of graphic representation devices 2, these are connected to the data processing device 11, to which the actuator 13 and the eyeball follower device 41 are also connected. [0138] [0139] According to a preferred embodiment of the invention, the actuator 13 is unique for a plurality of graphic representation devices 2, so that by means of an orientation 4 of the at least one eyeball 31 of the occupant 3 the parameter 12 represented in any of the graphic representation devices 2 is selected to be modified, and by means of an actuation on the single actuator 13 in the vehicle 1 it is possible to alter the magnitude of the parameter 12, whatever the parameter 12 and independently of the graphic representation device 2 where the parameter 12 has been represented.
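The three adjustment criteria above (vehicle parameters such as speed or rain, priority of the information 5, and the occupant's action history) can be combined into a simple adaptive rule that enlarges the second area 52 and shortens the predetermined gaze time whenever coincidence is to be facilitated. The following sketch is illustrative only; every scaling factor, threshold and name is an assumption, not a value from the patent.

```python
def adapt_coincidence_criteria(base_area, base_dwell_s,
                               speed_kmh, raining,
                               is_priority, interaction_count):
    """Returns (adapted_area, dwell_s): a larger actionable area and a
    shorter dwell time whenever coincidence should be facilitated.
    base_area is (x, y, width, height)."""
    area_scale, dwell_s = 1.0, base_dwell_s
    # Vehicle parameters: high speed or rain facilitate coincidence.
    if speed_kmh > 100 or raining:
        area_scale *= 1.5
        dwell_s *= 0.6
    # Priority information (emergencies, vehicle state) is facilitated.
    if is_priority:
        area_scale *= 1.3
        dwell_s *= 0.7
    # Action history: frequently used parameters are facilitated.
    if interaction_count > 10:
        area_scale *= 1.2
        dwell_s *= 0.8
    x, y, w, h = base_area
    # Grow the area around its center by area_scale.
    new_w, new_h = w * area_scale, h * area_scale
    adapted_area = (x - (new_w - w) / 2, y - (new_h - h) / 2, new_w, new_h)
    return adapted_area, dwell_s
```

The design choice here is multiplicative stacking: each facilitating condition independently widens the area and shortens the dwell, so several conditions together (for example rain plus a priority message) make activation progressively easier.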
[0140] [0141] By way of example, for all the functions, and by means of the buttons 14 of the steering wheel 16, one acts on the same actuating portion 53, independently of the screen, or graphic representation device 2, in which it is represented. Said actuating portion 53 is unique for all the screens, the selection of the actuating portion 53 being made through the eyes or eyeballs 31. [0142] [0143] Additionally, the actuator 13 is a voice control 33, a gesture control 34 or at least one button 14, wherein the at least one button 14 is arranged in an instrument cluster 15 and/or a steering wheel 16. [0144] [0145] The details, shapes, dimensions and other accessory elements, as well as the components used in the implementation of the method and associated device for controlling at least one parameter of a vehicle, may be conveniently replaced by others that are technically equivalent without departing from the essence of the invention or from the scope defined by the claims that follow the list below. [0146] [0147] List of references: [0148] [0149]
1 vehicle
11 data processing device
12 parameter
13 actuator
14 button
15 instrument cluster
16 steering wheel
2 graphic representation device
3 occupant
31 eyeball
32 gaze
33 voice control
34 gesture control
4 orientation
41 eyeball follower device
5 information
51 first area
52 second area
53 actuating portion
54 position
55 first visual effect
56 second visual effect
Claims (9)

[1] 1- Method to control at least one parameter (12) of a vehicle (1), where the method comprises the steps of:
i) representing at least one parameter (12) on at least one graphic representation device (2), where the at least one parameter (12) is represented in a portion (53) of the at least one graphic representation device (2);
ii) detecting an orientation (4) of at least one eyeball (31) of an occupant (3) of the vehicle (1);
iii) determining a coincidence between the orientation (4) of the at least one eyeball (31) and the portion (53) of the at least one graphic representation device (2);
iv) memorizing the at least one parameter (12) represented in the portion (53) if the coincidence between the orientation (4) of the at least one eyeball (31) and the portion (53) is determined;
v) determining an action of the occupant (3);
vii) controlling the at least one parameter (12), where controlling the at least one parameter (12) is based on the at least one memorized parameter (12) and on the determined action of the occupant (3).

[2] 2- Method according to claim 1, wherein the step of determining an action of the occupant (3) is a step separate from the step of determining the coincidence between the orientation (4) of the at least one eyeball (31) and the portion (53) of the at least one graphic representation device (2).

[3] 3- Method according to any of the preceding claims, comprising an additional step of determining whether the at least one parameter (12) represented in the portion (53) of the at least one graphic representation device (2) is actionable by the occupant (3), where step iv) comprises memorizing the at least one parameter (12) represented in the portion (53) if the at least one parameter (12) is determined as actionable and if the coincidence between the orientation (4) of the at least one eyeball (31) and the portion (53) is determined.
[4] 4- Method according to any of the preceding claims, wherein step iii) comprises:
- determining an onset of the coincidence between the orientation (4) of the at least one eyeball (31) and the portion (53) of the at least one graphic representation device (2);
- determining an end of the coincidence between the orientation (4) of the at least one eyeball (31) and the portion (53) of the at least one graphic representation device (2).

[5] 5- Method according to claim 4, comprising counting a period of time between the beginning and the end of the coincidence between the orientation (4) of the at least one eyeball (31) and the portion (53) of the at least one graphic representation device (2), where step iv) comprises memorizing the at least one parameter (12) represented if the period of time counted is greater than a predefined value.

[6] 6- Method according to claim 5, wherein the predefined value is based on at least one parameter (12) of the vehicle (1), on a priority of the at least one parameter (12) represented and/or on an action history of the determined occupant (3).

[7] 7- Method according to any of the preceding claims, wherein step i) comprises:
a) defining a first area (51) in the at least one graphic representation device (2), where the first area (51) delimits an area where the at least one parameter (12) is represented, and where the first area (51) is comprised in the portion (53); and
b) defining a second area (52), where the second area (52) delimits an area where the at least one parameter (12) can be operated by the occupant (3), and where the second area (52) is comprised in the portion (53).

[8] 8- Method according to claim 7, comprising an additional step of generating a first visual effect (55) in the graphic representation device (2), wherein the first visual effect (55) is based on the coincidence determined between the orientation (4) of the at least one eyeball (31) and the first area (51).
[9] 9- Method according to claim 6, wherein defining the second area (52) comprises establishing a surface of the second area (52) based on at least one parameter (12) of the vehicle (1), on a priority of the at least one parameter (12) represented and/or on an action history of the determined occupant (3).

[10] 10- Method according to claim 9, wherein step iii) comprises determining a coincidence between the orientation (4) of the at least one eyeball (31) and the second area (52).

[11] 11- Method according to claim 10, comprising an additional step of generating a second visual effect (56) in the graphic representation device (2), wherein the second visual effect (56) is based on the coincidence determined between the orientation (4) of the at least one eyeball (31) and the second area (52).

[12] 12- Method according to any of the preceding claims, characterized in that step v) comprises determining an action of the occupant (3) by means of an actuator (13), wherein the actuator (13) is unique for a plurality of graphic representation devices (2).

[13] 13- Method according to claim 12, wherein the actuator (13) is a voice control (33), a gesture control (34) or at least one button (14), wherein the at least one button (14) is arranged in an instrument cluster (15) and/or a steering wheel (16).

[14] 14- Data processing device (11) comprising means for executing the method of claim 1.
Family patents:
Publication number | Publication date
ES2894368T3 | 2022-02-14
EP3663120A1 | 2020-06-10
EP3663120B1 | 2021-06-30
EP3505384A1 | 2019-07-03
EP3505384B1 | 2021-05-05
ES2718429B2 | 2019-11-18
Cited documents:
Publication number | Filing date | Publication date | Applicant | Title
US20150169055A1 | 2012-08-30 | 2015-06-18 | Bayerische Motoren Werke Aktiengesellschaft | Providing an Input for an Operating Element
EP3001289A1 | 2013-05-23 | 2016-03-30 | Pioneer Corporation | Display controller
WO2015061486A2 | 2013-10-24 | 2015-04-30 | Johnson Controls Technology Company | Systems and methods for displaying three-dimensional images on a vehicle instrument console
US20160320835A1 | 2013-12-20 | 2016-11-03 | Audi Ag | Operating device that can be operated without keys
US20170293355A1 | 2016-04-07 | 2017-10-12 | Robert Bosch Gmbh | Method and apparatus for assigning control instructions in a vehicle, and vehicle
DE10121392A1 | 2001-05-02 | Bosch Gmbh Robert | Device for controlling devices by viewing direction
US9517776B2 | 2011-12-29 | 2016-12-13 | Intel Corporation | Systems, methods, and apparatus for controlling devices based on a detected gaze
US9580081B2 | 2014-01-24 | 2017-02-28 | Tobii Ab | Gaze driven interaction for a vehicle

Citing documents:
CN113220111A | 2020-01-21 | 2021-08-06 | 厦门歌乐电子企业有限公司 | Vehicle-mounted equipment control device and method
Legal events:
2019-07-01 | BA2A | Patent application published | Ref document number: 2718429, Country: ES, Kind code: A1, Effective date: 2019-07-01
2019-11-18 | FG2A | Definitive protection | Ref document number: 2718429, Country: ES, Kind code: B2, Effective date: 2019-11-18
Priority:
Application number | Priority date | Filing date | Title
ES201731496A | 2017-12-29 | 2017-12-29 | Method and associated device to control at least one parameter of a vehicle

Applications claiming this priority:
Priority application | Publication | Priority date | Filing date | Title
ES201731496A | ES2718429B2 | 2017-12-29 | 2017-12-29 | Method and associated device to control at least one parameter of a vehicle
EP18248175.4A | EP3505384B1 | 2017-12-29 | 2018-12-28 | Method and associated device for controlling at least one parameter of a vehicle
ES20150190T | ES2894368T3 | 2017-12-29 | 2018-12-28 | Method and associated device for controlling at least one parameter of a vehicle
EP20150190.5A | EP3663120B1 | 2017-12-29 | 2018-12-28 | Method and associated device for controlling at least one parameter of a vehicle