METHOD OF RECEIVING A SELECTION OF A USER INTERFACE ELEMENT, COMPUTER-READABLE MEDIUM AND COMPUTER SYSTEM
Patent abstract:
Method of receiving a selection of a user interface element displayed in a graphical user interface, method of receiving a selection, through gaze gestures, of content displayed in a graphical user interface, and head-mounted display system. The invention relates to a method of selecting user interface elements by means of a periodically updated position signal, the method comprising displaying in a graphical user interface a representation of a user interface element and a representation of an interactive target (106). The method further comprises receiving a coordinate input of the periodically updated position signal, and determining a selection of the user interface element if a motion interaction of the periodically updated position signal with the interactive target corresponds to a predetermined movement condition.

Publication number: BR112015031695B1
Application number: R112015031695-6
Filing date: 2014-06-20
Publication date: 2022-01-11
Inventors: Morgan Kolya Venable; Bernard James Kerr; Vaibhav Thukral; David Nister
Applicant: Microsoft Technology Licensing, LLC
IPC main class:
Patent description:
Background

[001] Various types of user input devices can provide position signals to a computing device. For example, eye tracking systems can be used to track a location at which a user's gaze intersects a displayed user interface. Various mechanisms can be used to track eye movements. For example, some eye tracking systems may include one or more light sources configured to direct light (such as infrared light) toward a user's eye, and one or more image sensors configured to capture images of the user's eye. Images of the eye acquired while the light sources are emitting light can be used to detect the location of the user's pupil and of corneal (or other) reflections arising from the light sources. This information can then be used to determine the position of the user's eye. Information regarding the position of the user's eye can in turn be used in combination with information regarding the location of the eye relative to the user interface to determine the location at which the user's gaze intersects the user interface, and thus to determine the position signal.

Summary

[002] Embodiments are disclosed herein that relate to the selection of user interface elements via a periodically updated position signal. For example, one disclosed embodiment provides a method comprising displaying in a graphical user interface a representation of a user interface element and a representation of an interactive target. The method further comprises receiving a coordinate input of the periodically updated position signal, and determining a selection of the user interface element if a motion interaction of the periodically updated position signal with the interactive target corresponds to a predetermined motion condition.

[003] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted elsewhere in this disclosure.

Brief Description of the Drawings

[004] Figures 1A to 1C show an illustrative interaction of a user with a graphical user interface via a periodically updated position signal.

[005] Figure 2 shows a block diagram of an illustrative embodiment of a computing system configured to receive a selection of a graphical user interface element via a periodically updated position signal.

[006] Figure 3 shows a flowchart depicting an illustrative embodiment of a method of selecting a user interface element via a periodically updated position signal.

[007] Figures 4A to 4C show another illustrative interaction of a user with a graphical user interface via a periodically updated position signal.

[008] Figures 5A to 5D show another illustrative interaction of a user with a graphical user interface via a periodically updated position signal.

[009] Figure 6 shows another illustrative interaction of a user with a graphical user interface via a periodically updated position signal.

[0010] Figure 7 shows an illustrative embodiment of a computing system.

Detailed Description

[0011] As mentioned above, eye tracking can be used to provide input to a computing device.
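As an aside on the pupil-and-glint technique summarized in paragraph [001], one common way such trackers produce screen coordinates is to fit a calibrated mapping from the pupil-to-glint offset in the eye image to gaze points on the display. The sketch below is an illustration only and is not taken from the patent; the affine form of the mapping, the function names, and the NumPy usage are assumptions of this write-up.

```python
import numpy as np

def fit_gaze_mapping(pupil_glint_offsets, screen_points):
    """Least-squares fit of an affine map from pupil-to-glint offsets (pixels
    in the eye image) to gaze points on the display, using calibration
    samples collected while the user looked at known screen locations."""
    offsets = np.asarray(pupil_glint_offsets, dtype=float)      # shape (n, 2)
    screens = np.asarray(screen_points, dtype=float)            # shape (n, 2)
    design = np.hstack([offsets, np.ones((len(offsets), 1))])   # [dx, dy, 1]
    coeffs, *_ = np.linalg.lstsq(design, screens, rcond=None)
    return coeffs                                                # shape (3, 2)

def estimate_gaze_point(pupil_px, glint_px, coeffs):
    """Estimate the on-screen gaze point from one pupil/glint detection."""
    offset = np.asarray(pupil_px, dtype=float) - np.asarray(glint_px, dtype=float)
    return np.append(offset, 1.0) @ coeffs
```

A mapping of this kind would typically be fitted per user during a brief calibration and then evaluated on every frame to produce the gaze coordinates that serve as the position signal discussed below.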
However, determining a user's intent to select a graphical user interface element via an eye tracking system can pose challenges. For example, in the case of a user input device such as a computer mouse, track pad, or track ball, a user can manifest an intent to select a user interface object by moving a cursor over a user interface element, and then pressing a button or the like on the user input device to select the element. Likewise, in the case of a touch-screen display, a user can select a user interface item by touching the display over an area representing the user interface item.

[0012] In each of these examples, the user's intent can be clearly inferred from a user input (e.g., a mouse button press or a screen touch) performed by activating a mechanism (e.g., the mouse button or the touch screen) that allows the intent to be inferred with some certainty. However, in the case of an eye tracking system configured to provide a position signal to a computing device, it can be difficult to infer a user's intent to select a user interface element with any certainty in the absence of another type of input (for example, a voice command, a body or head gesture, etc.) to trigger the selection of a user interface object that the user is currently viewing.

[0013] One potential method of inferring a user's intent to select a user interface element might be to infer such intent if the user's gaze is determined to remain on the user interface element for a threshold duration. However, dwell-based selection mechanisms may feel slow or otherwise awkward to users.

[0014] Accordingly, embodiments are disclosed herein that relate to the selection of a user interface element via an interaction between a periodically updated position signal and an interactive target associated with the user interface element. The term "periodically updated position signal" refers to a user input mechanism in which a position signal with respect to the displayed user interface is provided and updated regularly, without any user input being used to trigger the regular update. Selection of a user interface item via a periodically updated position signal is distinguished from ordinary selection of an item via a computer mouse or track pad, which may provide a position signal at a regular frequency but uses a button press or other signal to select a user interface item. While described herein in the context of an eye tracking system for a near-eye display system, it will be understood that the disclosed embodiments may be used with any other suitable periodically updated position signal, including, but not limited to, body gesture signals derived from image data and/or motion sensor data.

[0015] Figures 1A to 1C show an illustrative embodiment of a user interaction with a graphical user interface via a periodically updated position signal. More specifically, Figures 1A to 1C depict a user 100 of a near-eye display device in the form of a head-mounted display 102 that is configured to display augmented reality images. These figures also show the user 100 interacting with a graphical user interface element in the form of a virtual television object 104 via an eye tracking system integrated into the near-eye display device, where the eye tracking system provides a periodically updated position signal. It will be understood that this embodiment is presented for the purpose of example and is not intended to be limiting, and that any computing device may receive input from any suitable input device configured to provide a periodically updated position signal.
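To make the notion of a periodically updated position signal concrete, the following sketch polls a gaze provider at a fixed rate and yields coordinate samples without any button press or other triggering input. It is an illustration only; the provider callback, the 60 Hz rate, and the sample type are assumptions, not values from the patent.

```python
import time
from dataclasses import dataclass
from typing import Callable, Iterator, Tuple

@dataclass
class PositionSample:
    """One coordinate input of the periodically updated position signal."""
    x: float
    y: float
    timestamp: float

def periodic_position_signal(read_position: Callable[[], Tuple[float, float]],
                             hz: float = 60.0) -> Iterator[PositionSample]:
    """Yield gaze (or other pointer) coordinates at a fixed rate; no user
    action is required to trigger each update."""
    period = 1.0 / hz
    while True:
        x, y = read_position()
        yield PositionSample(x, y, time.monotonic())
        time.sleep(period)
```

The gesture-recognition sketches later in this description consume a stream of samples like this one.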
[0016] When the user 100 looks at the virtual television object 104, it can be difficult to determine whether the user wishes to select playback of video content via the virtual television object 104, or is merely examining the virtual television object 104 more closely. Thus, Figure 1A also shows a user interface element in the form of an interactive target 106 that is associated with the virtual television object 104. As shown in Figure 1B, the user 100 can express an intent to activate the virtual television object 104 by moving a gaze location 108 from the virtual television object 104 to the interactive target 106. Upon detecting that a motion interaction of the periodically updated position signal with the interactive target meets a predetermined motion condition (e.g., moving from the virtual television object to the interactive target, with or without a dwell duration at either location), the virtual television object 104 can be triggered to play video content 110 for the user 100, as shown in Figure 1C.

[0017] Using an interactive target that is displayed separately from a corresponding selectable user interface element can allow the user's intent to interact with the user interface element to be determined more quickly, and can also provide a more natural user experience than using dwell-based position signal inputs to select user interface elements. For example, a gaze dwell of sufficient duration to express the user's intent to select a graphical user interface element may feel overly long and slow to a user. In contrast, interactive targets as described herein can allow more natural eye movements to be used to express that intent. Several examples of interactive targets are described in more detail below.

[0018] Figure 2 shows a block diagram of an embodiment of a computing system 200 configured to allow the selection of user interface elements via a periodically updated position signal. Computing system 200 may represent the head-mounted display 102 of Figure 1, and/or any other suitable computing system configured to use a periodically updated position signal as an input. Computing system 200 includes a display 202 on which a graphical user interface can be displayed. The display 202 can take any suitable form. For example, in some embodiments, the computing system may include a near-eye display 204. Examples of near-eye display systems may include, but are not limited to, head-mounted displays and handheld devices (e.g., smart phones having a near-eye display). In such devices, the near-eye display may comprise a see-through display 206 or an opaque display. In other embodiments, the display may be external 208 to a computing device, as in the case of a conventional monitor, television, or the like.

[0019] Computing system 200 further comprises an input subsystem 210 configured to receive a periodically updated position signal input from a suitable input device. For example, in some embodiments, input subsystem 210 may comprise an eye tracking system 212 configured to produce a periodically updated position signal based on a determined location of a user's gaze in a graphical user interface. In some embodiments, eye tracking system 212 may be head-mounted 214 (e.g., incorporated into a head-mounted near-eye display system) or otherwise incorporated into a computing device.
In other embodiments, the eye tracking system may be external 216 to a computing device, as in the case of an eye tracking system that utilizes one or more cameras external to a conventional monitor or television used to display a graphical user interface. The eye tracking system may utilize any suitable components. For example, in some embodiments, the eye tracking system may utilize one or more light sources configured to create glint reflections from the cornea of an eye, and one or more image sensors configured to acquire an image of the eye.

[0020] Additionally, input subsystem 210 may include other sources of periodically updated position signals. For example, input subsystem 210 may include one or more motion sensors 218 (e.g., incorporated into a head-mounted display or other portable device) and/or one or more image sensors 220 (e.g., one or more outward-facing image sensors configured to capture video of gestures made by a user, in addition to inward-facing image sensors used for eye tracking). It will be understood that these embodiments of user input devices that provide periodically updated position signals are presented for the purpose of illustration and are not intended to be limiting in any manner.

[0021] Computing system 200 also includes a logic subsystem 222 and a storage subsystem 224. Storage subsystem 224 may include stored instructions executable by logic subsystem 222 to perform various tasks related to presenting a graphical user interface and receiving and processing periodically updated position signals. Illustrative computing systems are described in more detail below.

[0022] Figure 3 shows a flowchart depicting an illustrative embodiment of a method 300 of selecting a graphical user interface element via a periodically updated position signal. Method 300 comprises, at 302, displaying a user interface element in the graphical user interface. The user interface element can take any suitable form and may represent any suitable selectable item.

[0023] Method 300 further comprises, at 304, displaying in the graphical user interface an interactive target associated with the user interface element. As mentioned above, the interactive target may be used to determine a user's intent to select the user interface element via movement of a gaze path (or other user input involving control of a periodically updated position signal) that corresponds to a predetermined movement condition with respect to the interactive target. For example, in some embodiments, the interactive target may represent a location to which the user moves the gaze location from the user interface element via a gaze gesture or other suitable type of gesture. Additionally, in some embodiments, multiple interactive targets may be displayed for a corresponding user interface element, such that a user interacts with a plurality of interactive targets to select the user interface element.

[0024] An interactive target may be displayed persistently along with the associated user interface element, or may be hidden until a user interacts with the user interface element. Additionally, where multiple interactive targets are used for a user interface element, in some examples each interactive target may remain hidden until a user interacts with the interactive target immediately before it in a sequence. It will be appreciated that these embodiments of interactive targets are described for the purpose of illustration and are not intended to be limiting in any manner.
Other non-limiting examples of interactive targets are described in more detail below.

[0025] Continuing, method 300 comprises, at 306, receiving a coordinate input of the periodically updated position signal. The coordinates may represent, for example, a location at which a user is gazing in a graphical user interface as determined via an eye tracking system, a location at which the user is pointing as determined from gesture data, a location to which a user has moved a cursor or other representation of the position signal via a gesture, and/or any other suitable information.

[0026] Additionally, changes in the position represented by the coordinates as a function of time can represent input gestures. As described above, movements of a periodically updated position signal that interacts with a user interface element and an associated interactive target can be used to determine a user's intent to select the user interface element. As such, method 300 comprises, at 308, determining whether the movement of the periodically updated position signal with respect to the interactive target corresponds to a predetermined movement condition. The predetermined motion condition may represent any suitable relationship between the motion of the periodically updated position signal and the interactive target. For example, in some embodiments, the predetermined movement condition may correspond to one or more sequential eye gestures (shown at 310) that move between the user interface element and/or one or more interactive targets. Figures 1A to 1C, discussed above, show an example of a sequential eye gesture. The term "sequential eye gesture" as used herein signifies that the gaze location moves from a starting location to an ending location in the user interface in a sequence of one or more movement segments. Such gestures may allow natural eye movements, such as saccades, to be used to perform the gesture.

[0027] A sequential eye gesture may be compared to any suitable predetermined movement condition. For example, in some embodiments, the movement condition may correspond to an expected sequence in which the periodically updated position signal intersects the user interface element and each interactive target. In other embodiments, the movement condition may correspond to how closely a path followed by the position signal matches a predetermined path. It will be understood that these examples of predetermined movement conditions are presented for the purpose of illustration and are not intended to be limiting in any manner.

[0028] In the example of Figures 1A to 1C, the sequential eye gesture is performed between the user interface element and a single displayed interactive target. However, as mentioned above, in some embodiments more than one interactive target may be used. Figures 4A to 4C show an illustrative embodiment in which sequential eye gestures between multiple interactive targets are used to determine a user's intent to select an associated user interface element.
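One simple way to test the "expected sequence" condition mentioned in paragraph [0027] is to walk the sample stream and advance through an ordered list of regions (the user interface element followed by its interactive targets). The sketch below is a minimal illustration under that assumption; the rectangle representation and the region list are choices of this write-up, not details taken from the patent.

```python
from typing import Iterable, Sequence, Tuple

Rect = Tuple[float, float, float, float]    # left, top, right, bottom

def contains(rect: Rect, x: float, y: float) -> bool:
    left, top, right, bottom = rect
    return left <= x <= right and top <= y <= bottom

def matches_sequence(samples: Iterable[Tuple[float, float]],
                     regions: Sequence[Rect]) -> bool:
    """Return True once the position signal has intersected each region in
    the expected order (e.g. element, first target, second target)."""
    index = 0
    for x, y in samples:
        if index < len(regions) and contains(regions[index], x, y):
            index += 1
        if index == len(regions):
            return True
    return False
```

For a two-target arrangement like the one described next, this would be called as matches_sequence(samples, [element_rect, first_target_rect, second_target_rect]), with the rectangles being hypothetical layout values.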
[0029] First, Figure 4A shows a gaze location 108 on the virtual television, and also shows a first interactive target 400 displayed to the right of the virtual television object 104. Next, Figure 4B shows the gaze location after performance of a first sequential eye gesture 402 in which the gaze is moved from the virtual television to the first interactive target 400. As shown, the interaction between the periodically updated position signal controlled by the user's gaze and the first interactive target results in the display of a second interactive target 404. Next, referring to Figure 4C, after the gaze location moves to the second interactive target, playback of video content 406 via the virtual television object 104 is selected, because the user's gaze movements correspond to a predetermined movement condition (e.g., gazing at the interactive targets in a predetermined order).

[0030] While the depicted embodiment uses two interactive targets, it will be understood that any suitable number of targets may be used. Using a larger number of targets may reduce the probability of false positive interactions between the user and the interactive targets. Additionally, while selection of the first interactive target results in the display of the second interactive target in the depicted embodiment, in other embodiments a plurality of interactive targets for a user interface item may be displayed persistently. Additionally, in some embodiments, multiple interactive targets may be displayed such that they flank the associated user interface element, so that the user's gaze path passes approximately over the center of the associated user interface element. Such an arrangement can help prevent the user from inadvertently activating neighboring user interface elements. An example of such a configuration is shown in Figure 6, where a user is shown gazing at a second interactive target 602 after having previously gazed at a first interactive target 600 located on an opposite side of a virtual television 604. The direction of movement of the gaze between the first interactive target 600 and the second interactive target 602 is indicated by the dashed line 605. Additionally, a further user interface element is shown as an icon 606 representing a tic-tac-toe game. By placing the interactive targets for the virtual television 604 on opposite sides of the virtual television 604, the gaze locations that form the eye gesture indicated by the dashed line 605 converge over the virtual television 604, and therefore provide information regarding the intended user interface element. Additionally, the placement of the interactive targets 600, 602 for the virtual television 604 can help prevent a user from inadvertently gazing at an interactive target 608 associated with the icon 606.

[0031] In some embodiments, the predetermined movement condition may include short gaze dwells on one or more interactive targets. Thus, a specific sequence of movement interactions may include one or more gaze dwells between gaze movements. Such dwells may be imperceptible to the user (i.e., the user does not feel he or she is staring at a particular object for any unnatural duration), but may be long enough to indicate that the user has sufficiently interacted with the interactive target. As a non-limiting example, dwells may be between 100 and 150 ms in duration. However, any suitable dwell duration may be used for the predetermined movement condition.
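A brief dwell of the kind described in paragraph [0031] can be checked directly on timestamped samples. The sketch below is illustrative only; the 120 ms default is simply one value within the 100 to 150 ms range mentioned above, and the tuple-based sample format is an assumption.

```python
def dwell_satisfied(samples, rect, min_dwell_s=0.12):
    """samples: iterable of (x, y, t) tuples; rect: (left, top, right, bottom).

    Returns True once the gaze stays inside `rect` continuously for at least
    `min_dwell_s` seconds; leaving the rectangle resets the dwell timer."""
    left, top, right, bottom = rect
    dwell_start = None
    for x, y, t in samples:
        if left <= x <= right and top <= y <= bottom:
            if dwell_start is None:
                dwell_start = t
            if t - dwell_start >= min_dwell_s:
                return True
        else:
            dwell_start = None
    return False
```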
[0032] In embodiments where multiple targets are displayed persistently, the targets may employ any suitable scheme to indicate the correct order in which a user must interact with the interactive targets to cause the content item to activate. For example, interactive targets may bear numbers (such as 1, 2, 3), words, or animations (such as arrows, sequential highlighting of each target in the proper order, sequential revelation of each target, etc.) indicating that order. The examples listed above are presented for illustrative purposes and are not intended to be limiting in any manner.

[0033] Additionally, where multiple interactive targets are used to select a user interface element, user interaction with a later interactive target in a sequence of interactive target interactions may cause an interactive target earlier in the sequence to disappear or to change appearance, as shown in Figure 4C. Such changes in appearance may include, but are not limited to, changes in color, shape, size, transparency, and/or location. It will be appreciated that if a user wishes not to activate the content item associated with the interactive target, the user may disengage from the saccadic gesture before the gesture is completed (that is, before the predetermined movement condition is met).

[0034] Referring again to Figure 3, in other embodiments the predetermined motion condition may correspond to the movement of a user interface element into intersection (or another spatial arrangement) with another user interface element via the periodically updated position signal. This may be referred to as a "chasing" gesture, as indicated at 312, as the user's gaze chases or drives the user interface element while it moves to intersect with the other user interface element. It will be understood that the term "chasing gesture" as used herein refers to gestures in which a user's gaze follows or drives the movement of a moving interactive target toward a target user interface element.

[0035] Figures 5A to 5D show an embodiment of a chasing motion interaction in which a gaze gesture is used to move a user interface element to a target position with respect to the associated interactive target. First, Figure 5A shows the user 100 with gaze lines 108 directed at a virtual television object 104, and also shows an interactive target 500 and a target position in the form of a receptacle 504 for the interactive target 500, where the receptacle 504 may remain at a fixed location during movement of the interactive target. In Figure 5A, the interactive target 500 is shown at a home position 502. If the user chooses to select the user interface element, the user may first direct a gaze location onto the interactive target 500, as shown in Figure 5B. The intersection of the gaze location with the interactive target (potentially for a threshold duration) may initiate movement of the interactive target smoothly away from the gaze location and toward the target position 504. The user may follow the motion of the interactive target from the home position to the target position to select the user interface element, as indicated at 5C and 5D. It will be understood that the target position indicator may be displayed persistently, or may be revealed upon eye interaction with the corresponding user interface element.
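A per-frame sketch of the chasing interaction of Figures 5A to 5D follows. The speed, follow radius, and return values are assumptions of this write-up; the patent does not specify a motion profile or thresholds.

```python
import math

class ChaseTarget:
    """Once the gaze lands on the interactive target, the target glides toward
    a fixed receptacle; the element is selected only if the gaze keeps
    following the target all the way to the target position."""

    def __init__(self, start, receptacle, speed=300.0, follow_radius=60.0):
        self.pos = list(start)          # current target position, in pixels
        self.receptacle = receptacle    # fixed target position, in pixels
        self.speed = speed              # pixels per second (assumed value)
        self.follow_radius = follow_radius
        self.engaged = False

    def update(self, gaze, dt):
        """Advance one frame; returns 'selected', 'disengaged' or 'pending'."""
        dist_to_gaze = math.dist(self.pos, gaze)
        if not self.engaged:
            if dist_to_gaze <= self.follow_radius:
                self.engaged = True     # gaze reached the target: start moving
            return "pending"
        if dist_to_gaze > self.follow_radius:
            return "disengaged"         # user looked away before completion
        dx = self.receptacle[0] - self.pos[0]
        dy = self.receptacle[1] - self.pos[1]
        remaining = math.hypot(dx, dy)
        if remaining <= self.speed * dt:
            self.pos = list(self.receptacle)
            return "selected"
        self.pos[0] += dx / remaining * self.speed * dt
        self.pos[1] += dy / remaining * self.speed * dt
        return "pending"
```

Calling update once per display frame with the latest gaze sample mirrors the behaviour of Figures 5B to 5D: the target keeps moving only while the gaze tracks it, and looking away yields "disengaged", corresponding to the disengagement described in the next paragraph.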
[0036] As discussed previously, a user may choose not to activate a content item by disengaging from a gesture. In the case of a chase-based gesture, the user may look away from the interactive target before it reaches the target position to avoid activating the content item, or may disengage in any other suitable manner (e.g., a voice command, a body gesture command, etc.). It should be noted that a gaze interaction with a user interface element, whether of the chase, sequence, or other type, can tolerate some amount of error in gaze location. For example, gaze locations within a boundary distance outside the displayed area of the interactive target may still be considered a gaze interaction with the target in some embodiments, thereby allowing a user's gaze to wander to some extent without discontinuing a gesture being performed. This tolerance may have hysteresis characteristics, in that the tolerance may be applied during the performance of a gesture, but not during the initiation of an interaction with an interactive target.

[0037] Returning briefly to Figure 3, if the movement of the periodically updated position signal with respect to the interactive target corresponds to a predetermined condition (or to one or more predetermined conditions from a set of conditions), then method 300 comprises, at 314, determining that the user has selected the user interface element. The term "selection" and the like may refer to any interaction with the user interface element and/or the program or data it represents. Examples of selection include, but are not limited to, launching a program represented by the element, directing an operating system's focus to a program or data represented by the element, displaying information regarding a program or data represented by the element, changing a state of a program or data represented by the element (e.g., controlling video playback, adjusting an audio volume level, selecting an item from a user interface menu, etc.), displaying information regarding the user interface element itself (as opposed to information regarding the program or data represented by the element), and/or any other suitable interaction with the user interface element and/or the program or data it represents. On the other hand, if the movement of the periodically updated position signal with respect to the interactive target does not correspond to the predetermined condition or conditions, then method 300 comprises, at 316, not determining that the user has selected the user interface element.

[0038] In some embodiments, multiple types of motion interactions may be used with the same interactive target to select an associated user interface element. For example, while a new user might prefer the slower pace of a chasing gesture, a more experienced user might want a faster sequential interaction, for example using a saccadic gesture. Therefore, in some embodiments, an interactive target may allow interaction via two or more different predetermined motion conditions that can be used to select the corresponding user interface element. This may allow users of varying expertise and/or preferences to interact with the graphical user interface in different ways. For example, an interactive target that is rectangular in shape may support both a chase gesture and a saccadic gesture. The chase gesture may activate the associated content item as the user gazes from one side of the target to the other (with or without following the movement of an interactive element), while the saccadic gesture may follow a gaze path between preselected locations on the target.

[0039] In this manner, the embodiments described herein may allow user interface elements to be selected via a periodically updated position signal without using additional input mechanisms (e.g., buttons, voice, etc.) to manifest an intent to select an element, and without the use of dwell gestures, although dwell may be used in combination with the gaze gestures described in some examples.
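The boundary-distance tolerance with hysteresis described in paragraph [0036] can be expressed as a hit test whose region grows only while a gesture is already underway. This is a minimal sketch; the 25-pixel margin is an assumed value, not one given in the patent.

```python
def gaze_on_target(gaze, rect, gesture_in_progress, margin=25.0):
    """Strict hit test when initiating an interaction; relaxed test (the
    target rectangle grown by `margin` pixels on every side) while a gesture
    is underway, so small gaze wander does not discontinue the gesture."""
    left, top, right, bottom = rect
    if gesture_in_progress:
        left, top = left - margin, top - margin
        right, bottom = right + margin, bottom + margin
    x, y = gaze
    return left <= x <= right and top <= y <= bottom
```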
[0040] In some embodiments, the methods and processes described herein may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer application program or service, an application programming interface (API), a library, and/or another computer program product.

[0041] Figure 7 schematically shows a non-limiting embodiment of a computing system 700 that can enact one or more of the methods and processes described above. Computing system 700 is shown in simplified form. Computing system 700 may take the form of one or more personal computers, server computers, tablet computers, home entertainment computers, network computing devices, gaming devices, mobile computing devices, mobile communication devices (e.g., smart phones), wearable devices (e.g., head-mounted displays such as those described above), and/or other computing devices.

[0042] Computing system 700 includes a logic subsystem 702 and a storage subsystem 704. Computing system 700 may additionally include a display subsystem 706, an input subsystem 708, a communication subsystem 710, and/or other components not shown in Figure 7.

[0043] Logic subsystem 702 includes one or more physical devices configured to execute instructions. For example, the logic subsystem may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.

[0044] The logic subsystem may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic subsystems configured to execute hardware or firmware instructions. The processors of the logic subsystem may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic subsystem may optionally be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic subsystem may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud computing configuration.

[0045] Storage subsystem 704 includes one or more physical devices configured to hold instructions executable by the logic subsystem to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage subsystem 704 may be transformed, for example, to hold different data.

[0046] Storage subsystem 704 may include removable and/or built-in computer-readable storage media. Storage subsystem 704 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard disk drive, floppy disk drive, tape drive, MRAM, etc.), among others. Storage subsystem 704 may include volatile, non-volatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.

[0047] It will be appreciated that storage subsystem 704 includes one or more physical devices and excludes signals per se.
However, aspects of the instructions described herein may alternatively be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.), as opposed to being stored on a physical storage device.

[0048] Aspects of logic subsystem 702 and storage subsystem 704 may be integrated together into one or more hardware logic components. Such hardware logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC) systems, and complex programmable logic devices (CPLDs), for example.

[0049] The terms "module", "program", and "engine" may be used to describe an aspect of computing system 700 implemented to perform a particular function. In some cases, a module, program, or engine may be instantiated via logic subsystem 702 executing instructions held by storage subsystem 704. It will be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms "module", "program", and "engine" may encompass individual executable files, data files, libraries, drivers, scripts, database records, etc., or groups thereof. It will further be appreciated that embodiments may be implemented as software as a service (SaaS), in which one or more programs and/or data are stored remotely from end users and accessed by end users over a network.

[0050] When included, display subsystem 706 may be used to present a visual representation of data held by storage subsystem 704. This visual representation may take the form of a graphical user interface (GUI). As the methods and processes described herein change the data held by the storage subsystem, and thus transform the state of the storage subsystem, the state of display subsystem 706 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 706 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 702 and/or storage subsystem 704 in a shared enclosure, or such display devices may be peripheral display devices.

[0051] When included, input subsystem 708 may comprise or interface with one or more user input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) components. Such components may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on-board or off-board. Illustrative NUI components may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; and electric field sensing components for assessing brain activity.

[0052] When included, communication subsystem 710 may be configured to communicatively couple computing system 700 with one or more other computing devices.
Communication subsystem 710 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or via a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow computing system 700 to send and/or receive messages to and/or from other devices via a network such as the Internet.

[0053] It will be understood that the configurations and/or approaches described herein are illustrative in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, the various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.

[0055] The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts and/or properties described herein, as well as any and all equivalents thereof.
Claims:
1. Method of receiving a selection of a user interface element (104) displayed in a graphical user interface via a user input mechanism providing a periodically updated position signal, wherein the periodically updated position signal captures user gaze gestures, the method comprising the steps of: displaying (302) in the graphical user interface the user interface element (104); displaying (304) in the graphical user interface a representation of an interactive target (500) associated with the user interface element (104); receiving (306) a coordinate input of the periodically updated position signal; and determining (314) a selection of the user interface element (104) in response to a motion interaction of the periodically updated position signal with the interactive target (500) corresponding to a predetermined motion condition, wherein the predetermined motion condition includes a chasing gaze gesture based on motion of the interactive target (500), including first determining that a user's gaze location is directed at the interactive target (500), characterized in that an intersection of the user's gaze location with the interactive target (500) initiates movement of the interactive target (500) smoothly away from the user's gaze location and toward a displayed target position (504); and determining that the user's gaze follows the movement of the interactive target (500) to the target position (504) to select the user interface element (104).

2. Method according to claim 1, characterized in that the predetermined movement condition includes a sequential gaze gesture.

3. Method according to claim 2, characterized in that the sequential gaze gesture includes a specific sequence of movement interactions between the periodically updated position signal and one or more interactive targets.

4. Method according to claim 1, characterized in that the predetermined motion condition is one of two or more different motion conditions usable for selecting the user interface element.

5. Method according to claim 1, characterized in that the periodically updated position signal captures the user's head movements.

6. Method according to claim 1, characterized in that the interactive target is displayed (304) persistently in the graphical user interface.

7. Method according to claim 1, characterized in that the interactive target is displayed (304) in response to a user interaction with one or more user interface elements.

8. Method according to claim 1, characterized in that the motion interaction between the displayed interactive target and the periodically updated position signal causes the display of a second interactive target.

9. Computer-readable medium, characterized in that it has stored thereon the method as defined in any one of claims 1 to 8.

10. Computer system, characterized in that it is configured to receive a selection of user interface elements by means of a periodically updated position signal, according to the method as defined in any one of claims 1 to 8.
Similar technology:
Publication number | Publication date | Patent title
--- | --- | ---
BR112015031695B1 | 2022-01-11 | METHOD OF RECEIVING A SELECTION OF A USER INTERFACE ELEMENT, COMPUTER READABLE MEDIUM AND COMPUTER SYSTEM
US10705602B2 | 2020-07-07 | Context-aware augmented reality object commands
US10222981B2 | 2019-03-05 | Holographic keyboard display
EP3137974B1 | 2018-05-23 | Display device viewer gaze attraction
JP6730286B2 | 2020-07-29 | Augmented Reality Object Follower
KR102358939B1 | 2022-02-04 | Non-visual feedback of visual change in a gaze tracking method and device
EP2946264B1 | 2016-11-02 | Virtual interaction with image projection
US10186086B2 | 2019-01-22 | Augmented reality control of computing device
US9977492B2 | 2018-05-22 | Mixed reality presentation
US10133345B2 | 2018-11-20 | Virtual-reality navigation
WO2015026629A1 | 2015-02-26 | Automatic customization of graphical user interface for optical see-through head mounted display with user interaction tracking
US10409444B2 | 2019-09-10 | Head-mounted display input translation
Patent family:
Publication number | Publication date
--- | ---
US10025378B2 | 2018-07-17
EP3014390B1 | 2019-07-24
RU2676244C2 | 2018-12-26
JP6387404B2 | 2018-09-05
CN105393190B | 2018-12-07
AU2014302874A1 | 2015-12-17
AU2014302874B2 | 2019-04-18
TW201506688A | 2015-02-16
EP3014390A1 | 2016-05-04
KR20160023889A | 2016-03-03
US20140380230A1 | 2014-12-25
RU2015155603A | 2017-06-30
BR112015031695A8 | 2021-05-25
MX2015017633A | 2016-04-15
MX362465B | 2019-01-18
BR112015031695A2 | 2017-07-25
CN105393190A | 2016-03-09
JP2016526721A | 2016-09-05
CA2913783C | 2021-07-27
RU2015155603A3 | 2018-05-24
KR102223696B1 | 2021-03-04
WO2014209772A1 | 2014-12-31
CA2913783A1 | 2014-12-31
Legal status:
Date | Code | Event
--- | --- | ---
2020-03-03 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2021-07-20 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
2021-11-03 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2022-01-11 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]. Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 06/20/2014, SUBJECT TO THE LEGAL CONDITIONS.
Priority:
Application number | Filing date | Patent title
--- | --- | ---
US13/926,204 | 2013-06-25 | Selecting user interface elements via position signal
Sulfonates, polymers, resist compositions and patterning process
Washing machine
Washing machine
Device for fixture finishing and tension adjusting of membrane
Structure for Equipping Band in a Plane Cathode Ray Tube
Process for preparation of 7 alpha-carboxyl 9, 11-epoxy steroids and intermediates useful therein an
国家/地区
|