Patent Abstract:
APPARATUS, METHOD, AND ONE OR MORE COMPUTER-READABLE, NON-TRANSITORY STORAGE MEDIA. In one embodiment, an apparatus includes one or more processors and memory coupled to the processors that includes instructions executable by the processors. When executing the instructions, the processors present a first screen of a graphical user interface on a device display. The first screen includes one or more first elements. The processors receive user input indicating a transition in the graphical user interface and, in response to the user input, switch from the first screen to a second screen of the graphical user interface and apply one or more visual transition effects to the transition. The second screen includes one or more second elements.
Publication Number: BR112015011598B1
Application Number: R112015011598-5
Filing Date: 2013-11-20
Publication Date: 2022-01-25
Inventors: Pranav Mistry; Sajid Sadi; Lining Yao; John Snavely; Eva-Maria Offenberg; Link Huang; Cathy Kim
Applicant: Samsung Electronics Co., Ltd.
IPC Main Classification:
Patent Description:

TECHNICAL FIELD
[0001] This application claims the benefit, pursuant to 35 U.S.C. §119(e), of United States Provisional Patent Application No. 61/728765, filed November 20, 2012; United States Provisional Patent Application No. 61/728770, filed November 20, 2012; United States Provisional Patent Application No. 61/773803, filed March 6, 2013; United States Provisional Patent Application No. 61/728773, filed November 20, 2012; United States Provisional Patent Application No. 61/773813, filed March 7, 2013; United States Provisional Patent Application No. 61/773815, filed March 7, 2013; United States Provisional Patent Application No. 61/773817, filed March 7, 2013; United States Provisional Patent Application No. 61/775688, filed March 11, 2013; United States Provisional Patent Application No. 61/775687, filed March 11, 2013; United States Provisional Patent Application No. 61/775686, filed March 11, 2013; and United States Patent Application No. 14/015,890, filed August 30, 2013, all of which are incorporated herein by reference.
[0002] This disclosure generally relates to a wearable electronic device.
BACKGROUND ART
[0003] Mobile electronic devices provide a user with access to computing capabilities even when the user moves around various locations. Examples of mobile electronic devices include mobile phones, media players, laptops, tablets, PDAs, or hybrid devices that include multi-device functionality of this type.
[0004] Mobile electronic devices can be part of a communication network such as a local area network, a wide area network, a cellular network, the Internet, or any other suitable network. A mobile electronic device may use a communication network to communicate with other electronic devices, for example, to access remotely stored data, access remote processing power, access remote displays, provide locally stored data, provide local processing power, or provide access to local displays. For example, networks can provide communication paths and links to servers, which can host applications, content, and services that can be accessed or used by users through mobile electronic devices. Content may include text, video data, audio data, user settings, or other types of data. Networks can use any communication protocol or technology suitable for facilitating communication between mobile electronic devices, such as, for example, BLUETOOTH, WI-FI (IEEE 802.11a/b/g/n/ac), or TCP/IP.
DISCLOSURE OF THE INVENTION
SOLUTION TO THE PROBLEM
[0005] In one embodiment, an apparatus includes one or more processors and memory coupled to the processors that includes instructions executable by the processors. When executing the instructions, the processors present a first screen of a graphical user interface on a device display. The first screen includes one or more first elements. The processors receive user input indicating a transition in the graphical user interface and, in response to the user input, transition from the first screen to a second screen of the graphical user interface and apply one or more visual transition effects to the transition. The second screen includes one or more second elements.
ADVANTAGEOUS EFFECTS OF THE INVENTION
[0006] In accordance with the present invention, a method and a device for GUI transitions on a wearable electronic device are provided.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Figure 1 illustrates an exemplary embodiment of a wearable electronic device.
[0008] Figure 2 illustrates an exemplary stacking of a device.
[0009] Figures 3A-3E illustrate exemplary form factors of a device.
[00010] Figure 4A illustrates an exemplary cross section of a device body.
[00011] Figures 4B-C illustrate exemplary connections between components of a device.
[00012] Figures 5A-5F illustrate exemplary displays of a device.
[00013] Figures 6A-C illustrate exemplary cross-sectional views of a device display.
[00014] Figures 7A-7D illustrate exemplary external elements in relation to a device body.
[00015] Figures 8A-8C illustrate exemplary external elements in relation to a device body.
[00016] Figure 9 illustrates an exemplary gasket of a device.
[00017] Figure 10 illustrates an exemplary retaining ring of a device.
[00018] Figure 11 illustrates several exemplary modalities for using a device.
[00019] Figures 12A-12B illustrate a clamp attached to a body of a device.
[00020] Figures 13A-13I illustrate exemplary embodiments for attaching or affixing a device clamp.
[00021] Figures 14A-D illustrate exemplary camera placements on a device.
[00022] Figure 15 illustrates an exemplary device with a clamp and optical sensor.
[00023] Figure 16 illustrates an exemplary viewing triangle including a user, a device, and an object.
[00024] Figure 17 illustrates an exemplary angle of view for an optical sensor of a device.
[00025] Figures 18A-18B illustrate exemplary optical sensors of a device.
[00026] Figure 19 illustrates an exemplary sensor detection system of a device.
[00027] Figures 20A-20C illustrate exemplary chargers operable with a device.
[00028] Figures 21A-21B illustrate exemplary chargers operable with a device.
[00029] Figures 22A-22B illustrate exemplary charging units operable with a device.
[00030] Figure 23 illustrates an exemplary charging scheme for a charging unit operable with a device.
[00031] Figure 24 illustrates an exemplary charging scheme for a charging unit operable with a device.
[00032] Figures 25A-25E illustrate exemplary embodiments of energy storage and charging in a device and charging unit.
[00033] Figure 26 illustrates an exemplary charging unit architecture.
[00034] Figures 27-92 illustrate exemplary gestures for use with a device.
[00035] Figures 93A-93B illustrate exemplary user inputs for a device.
[00036] Figures 94A-94C illustrate exemplary user inputs for a device.
[00037] Figures 95A-95D illustrate exemplary user touch input for a device.
[00038] Figures 96A-96B illustrate exemplary graphical user interface models of a device.
[00039] Figure 97 illustrates an exemplary graphical user interface model of a device.
[00040] Figures 98A-98G illustrate exemplary graphical user interface models of a device.
[00041] Figure 99 illustrates an exemplary graphical user interface model of a device.
[00042] Figures 100A-100C illustrate exemplary graphical user interface models of a device.
[00043] Figures 101A-101B illustrate exemplary screens of a graphical user interface of a device.
[00044] Figures 102A-102D illustrate exemplary screens of a graphical user interface of a device.
[00045] Figures 103A-103D illustrate exemplary screens of a graphical user interface of a device.
[00046] Figure 104 illustrates an exemplary menu of a graphical user interface of a device.
[00047] Figures 105A-105D illustrate exemplary menus of a graphical user interface of a device.
[00048] Figures 106A-106C illustrate exemplary menus of a graphical user interface of a device.
[00049] Figures 107A-107C illustrate exemplary menus of a graphical user interface of a device.
[00050] Figure 108 illustrates an exemplary menu of a graphical user interface of a device.
[00051] Figures 109A-109O illustrate exemplary menus of a graphical user interface of a device.
[00052] Figures 110A-110B illustrate examples of scrolling in a graphical user interface of a device.
[00053] Figures 111A-111C illustrate examples of scrolling in a graphical user interface of a device.
[00054] Figure 112 illustrates examples of overlay and background content in a graphical user interface of a device.
[00055] Figures 113A-C illustrate examples of overlay and background content in a graphical user interface of a device.
[00056] Figures 114A-114B illustrate exemplary visual transition effects in a graphical user interface of a device.
[00057] Figures 115A-115B illustrate exemplary visual transition effects in a graphical user interface of a device.
[00058] Figures 116A-116B illustrate exemplary visual transition effects in a graphical user interface of a device.
[00059] Figures 117A-117B illustrate exemplary visual transition effects in a graphical user interface of a device.
[00060] Figures 118A-118C illustrate exemplary visual transition effects in a graphical user interface of a device.
[00061] Figures 119A-119C illustrate exemplary visual transition effects in a graphical user interface of a device.
[00062] Figures 120A-120C illustrate exemplary visual transition effects in a graphical user interface of a device.
[00063] Figures 121A-121B illustrate exemplary visual transition effects in a graphical user interface of a device.
[00064] Figure 122 illustrates an exemplary use of a physical model in a graphical user interface of a device.
[00065] Figure 123 illustrates example screens of a graphical user interface of a device.
[00066] Figure 124 illustrates example screens of a graphical user interface of a device.
[00067] Figure 125 illustrates an exemplary method for automatic camera activation on a device.
[00068] Figure 126 illustrates an exemplary method for delegation by a device.
[00069] Figure 127 illustrates exemplary delegation models including a device.
[00070] Figure 128 illustrates an exemplary method for delegation by a device.
[00071] Figures 129A-129D illustrate exemplary modes of a device.
[00072] Figure 130 illustrates an exemplary mode of a device.
[00073] Figures 131A-131D illustrate exemplary modes of a device.
[00074] Figure 132 illustrates an exemplary method for providing augmented reality functions on a device.
[00075] Figure 133 illustrates an exemplary network environment in which a device may operate.
[00076] Figure 134 illustrates an example of pairing between a device and a target device.
[00077] Figure 135 illustrates an exemplary method for pairing a device with a target device.
[00078] Figure 136 illustrates example screens of a graphical user interface of a device.
[00079] Figure 137 illustrates an exemplary computer system comprising a device.
BEST MODE FOR CARRYING OUT THE INVENTION
[00080] In one embodiment, an apparatus includes one or more processors and a memory coupled to the processors that includes instructions executable by the processors. When executing the instructions, the processors present a first screen of a graphical user interface on a device display. The first screen includes one or more first elements. The processors receive user input indicating a transition in the graphical user interface and, in response to the user input, transition from the first screen to a second screen of the graphical user interface and apply one or more visual transition effects to the transition. The second screen includes one or more second elements.
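The screen-transition behavior summarized above can be illustrated with a short sketch. The sketch below is a minimal, hypothetical illustration and is not code from the disclosure; the Screen, Device, and fade names are assumptions made only for the example.

```python
# Minimal sketch of the screen-transition logic described above (hypothetical
# names; not taken from the disclosure). A device holds a current screen made
# of elements; user input indicating a transition switches to a second screen
# while one or more visual transition effects are applied.

class Screen:
    def __init__(self, name, elements):
        self.name = name
        self.elements = elements          # one or more first/second elements

class Device:
    def __init__(self, first_screen):
        self.current = first_screen

    def on_user_input(self, target_screen, effects):
        """Switch screens in response to user input, applying visual effects."""
        for effect in effects:
            effect(self.current, target_screen)   # e.g., slide, fade, zoom
        self.current = target_screen

def fade(old, new):
    print(f"fading out '{old.name}', fading in '{new.name}'")

home = Screen("home", ["clock", "notifications"])
menu = Screen("menu", ["contacts", "settings"])
watch = Device(home)
watch.on_user_input(menu, [fade])   # prints the applied effect and switches
```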
MODE FOR THE INVENTION
[00081] Figure 1 illustrates an exemplary embodiment of a wearable electronic device 100. Device 100 includes a body 105 that contains all or some of the circuitry, structure, and display of device 100. For example, body 105 may include all or some of the processing components, data storage components, memory, sensors, wiring, or communication components of device 100. In specific embodiments, device 100 may include a display. The display may assume any suitable shape or form, such as, for example, a circular shape, as illustrated by circular display 110. As used herein, where appropriate, "circular display" includes substantially circular displays or circular-type displays, such as, for example, elliptical displays. In specific embodiments, device 100 may include an element around the display. As used herein, an element around the display includes a rotatable element surrounding the display or the body on or in which the display is seated. As an example, an element may be an outer ring 115 around a circular display 110. In specific embodiments, the element around the display may move relative to the display or body. For example, outer ring 115 can rotate relative to the body of device 100, as described more fully below. In specific embodiments, device 100 may include a clamp 120 attached to body 105. In specific embodiments, device 100 may include a sensor module, such as, for example, camera module 125 housing a camera, affixed to body 105 or to clamp 120, as described more fully below.
[00082] Specific embodiments of a wearable electronic device include a stack that allows some or all of the processing and display system to fit within the body of the device, which may be surrounded by an element, such as an outer ring, that provides at least one way for the user to interact with the device. Additionally, or alternatively, specific embodiments may include external components incorporated into the clamp for additional functionality, as described more fully herein. Figure 2 illustrates an exemplary stack 200 of a wearable electronic device. As illustrated in Figure 2, some or all of the components of the stack 200 may adopt the shape of the device, which is circular in the example of Figure 2. The stack 200 may include a layer of protective glass (or other suitable solid, transparent material) 205. Other components may be laminated to protective glass 205, or may be attached to base 245. Additionally or alternatively, protective layer 205 may be mechanically connected to outer ring 235, or any other suitable component of the device body. Directly under the protective glass 205 may be a touch-sensitive layer 210. The touch-sensitive layer 210 may be composed of any suitable material and be of any suitable type, such as, for example, resistive, surface acoustic wave, capacitive (including mutual-capacitive or self-capacitive), infrared, optical, dispersive, or any other suitable type. The touch-sensitive layer 210 can be applied directly to the protective glass 205, laminated to it, or physically affixed to it. The touch-sensitive layer 210 can be a completely two-dimensional touch surface, or it can be composed of touch-sensitive regions, such as a number of buttons or capacitive areas. The touch-sensitive layer 210 may be connected to processor board 225 via a flexible connector on the edge of the touch surface, as described more fully herein.
[00083] Below the touch-sensitive layer 210 may be a circular display 215, which may be laminated or mechanically affixed to any of the preceding layers. In specific embodiments, lamination can reduce glare and improve display readability by reducing internal reflections. As described more fully below, display 215 may have an external dead area which may be symmetrical or asymmetrical. Display 215 may be positioned such that it is axially centered with respect to protective layer 205 for a visually symmetrical presentation. The display 215 may be of any suitable type, such as, for example, light-emitting diode (LED), organic light-emitting diode (OLED), or liquid crystal display (LCD). In specific embodiments, the display 215 may be flexible. In specific embodiments, the display 215 may be partially transparent. In specific embodiments, the display 215 may be translucent.
[00084] Below display 215 may be battery 220, which in specific embodiments can be selected so that base 245 can be reduced in diameter without affecting battery size. Battery 220 may be of any suitable type, such as, for example, lithium-ion based. Battery 220 can adopt the circular shape of the device, or it can adopt any other suitable shape, such as a rectangular shape, as illustrated. In specific embodiments, the battery 220 may "float" in the device; for example, there may be space above, below, or around the battery to accommodate thermal expansion. In specific embodiments, high-height components, such as, for example, haptic actuators or other electronics, can be positioned in the additional space beyond the edge of the battery for optimal packaging of the components. In specific embodiments, connectors from processor board 225 may be placed in this space to reduce the overall height of the device.
[00085] Below battery 220 may be processor board 225. Processor board 225 may include any suitable processing components, such as, for example, one or more processing units, drive units, detection units, caches, memory elements, or integrated circuits. Processor board 225 may include one or more heat sensors or cooling units (such as, for example, fans) to monitor and control the temperature of one or more processor board components. In specific embodiments, the device body 105 may itself act as the heat sink.
[00086] Below the processor board may be an encoder 230, surrounded by one or more outer rings 235. As described more fully below, the encoder 230 may be of any suitable type, and may be part of the outer ring 235 or may be a separate component, as illustrated in Figure 2. In specific embodiments, the outer ring 235 may provide the tactile feel of a detent of the outer ring or the position sensing of the outer ring 235. When the encoder 230 is a mechanical encoder separate from the device body, as illustrated in Figure 2, the encoder may support the outer ring 235. For example, in specific embodiments, the encoder 230 is mounted to the base 245, and the connections to the base 245 or to the clamp 240 may pass through some portion of the encoder, such as, for example, the center of the encoder. In specific embodiments, processor board 225 and one or more layers above it may be attached to a central pillar that passes through encoder 230. The pillar may transfer mechanical forces on the device's components to the pillar itself, which may allow components such as the processor board and display to be supported by the pillar rather than by the encoder, reducing strain on the encoder. In specific embodiments, the outer ring 235 is attached to the movable portion of the encoder via pins or other suitable connections.
[00087] The device body may be completed with a base 245. The base 245 may be stationary with respect to one or more rotating components of the device, such as the outer ring 235. In specific embodiments, the base 245 connects to the clamp 240, described more fully herein. The connections can be mechanical or electrical, such as, for example, part of the circuits connecting the communication components wired in the clamp 240 to the processing board 225. In specific embodiments, the connectors are positioned to avoid the encoder and the attachment points for the clamp. In specific embodiments, clamp 240 may be detachable from base 245. As described more fully herein, clamp 240 may include one or more internal connectors 250, one or more optical detection modules 255, or one or more other sensors. In specific embodiments, the interior of the device, or portions thereof, may be sealed from the external environment.
[00088] While this disclosure describes specific examples of components in the stack 200 of wearable electronic device 100 and of the shape, size, order, connections, and functionality of those components, this disclosure considers that a wearable device, such as device 100, may include any suitable components of any suitable shape, size, and order, connected or communicating in any suitable way. As an example only, the battery 220 may be placed further toward the bottom of the stack than is illustrated in Figure 2. As another example, the device body may assume any suitable form factor, such as ellipsoid or disk-like, as illustrated by the example of Figure 3A, tapered at one end as illustrated by the example of Figure 3B, or beveled or rounded at one or more edges, as illustrated by the example of Figures 3C-3D illustrating beveled edge 315. Figure 3E illustrates additional exemplary device body form factors, such as, for example, bodies 320A-E, which have a polygonal shape with a flat protective cover or display or a curved protective cover or display. As another example, bodies 325A-D have a partially curved shape with either a flat protective cover or display or a curved protective cover or display. Bodies 330A-C have a curved shape. One or more internal components of the device body may assume any form factor suitable for the body in which they are seated.
[00089] Figure 4A illustrates an exemplary cross section of a device body. As illustrated, the device body has a width of D1, such as, for example, approximately 43 millimeters. Specific embodiments may include a slight clearance D4 between the outer ring and the OLED display, such as, for example, a clearance of up to 0.3 millimeters. Similarly, there may also be a distance between the outer ring and the protective glass cover (which may have a width D3, such as, for example, approximately 42.6 millimeters), such as, for example, 0.2 millimeters. In specific embodiments, the gap between the protective glass cover and the outer ring is greater than the gap between the display and the outer ring. The outer ring (which may include a serration) may have a width D2 of, for example, 1.0 millimeters. Figures 4B-4C illustrate exemplary sets of connections between device components. Figure 4B illustrates a touch glass 405 above a display 410. The display is secured to the top of the inner body 440 with, for example, adhesive sealant 425. The flexible display circuit 430 couples the display to the electronics within the device body. Adhesive sealing membrane 445 may be used to connect the clamp 450 to the device, and one or more retaining rings 435 may be used to connect the outer ring 415 to the inner body 440. In specific embodiments, the retaining rings may inhibit twisting of the outer ring about its vertical axis and provide physical spacing between the outer ring and the glass cover. A protective glass layer can be seated on top of the inner body, providing an environmental seal. In specific embodiments, a retaining ring may also provide an environmental seal for the inner body. For example, Figure 4C illustrates an exemplary retaining ring 465 securing an outer ring to the device body and providing an environmental seal between the outer ring and the inner body. Additionally or alternatively, flake-like material, possibly coated with a hydrophobic substance such as, for example, TEFLON, can be used to prevent water and dirt from penetrating the cavity. As another example, the outer ring can be sealed to the inner body with a metal or plastic ring, preventing air (and thus water vapor or other particles) from moving through the cavity between the outer ring and the inner body. Clearance 455 allows the outer ring to move, such as by rotation, relative to the inner device body. An adhesive seal 460 secures the display to the body and provides an environmental seal between the display and internal body components.
[00090] In specific embodiments, the device display has a circular or elliptical shape and houses a circular display unit, such as, for example, an LCD display or an OLED display. The display unit can be mounted such that the visible area is centrally located within the display module. If the display unit has an offset layout, one or more masks can be used to obscure part of the display to produce a correctly placed, circular visual outline.
[00091] In specific embodiments, a display module has an outer ring that is part of the device's user interface. The outer ring can rotate while the clamp stably holds the bottom and inside of the device. Figure 5A illustrates an example of a top view of the device display in relation to other components of the device. The outer ring 510 may be attached to the front surface 512 of the device 508, or may be independent of the front surface 512. In specific embodiments, the display 506 does not rotate independently of the rotation of the outer ring 510 surrounding the display 506. This can be achieved by attaching the display 506 to the portion 504 of the display module that is attached to the clamp 502, or by programming the displayed content to remain static while the display unit rotates. In the latter case, the displayed content is rotated such that the visual vertical axis of the image displayed by the display unit remains parallel to the clamp at all times.
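One way to keep the displayed image upright while the display unit rotates with the outer ring, as described above, is to counter-rotate the rendered content by the measured ring angle. The snippet below is a sketch under that assumption; the function name and the idea that the encoder reports the ring angle in degrees are illustrative assumptions, not details from the disclosure.

```python
def upright_content_angle(ring_angle_deg):
    """Counter-rotate content by the ring rotation so the image's visual
    vertical axis stays parallel to the clamp (sketch; assumes the encoder
    reports the ring angle in degrees)."""
    return -ring_angle_deg % 360

# If the display unit has rotated 30 degrees with the ring, render the content
# rotated by -30 degrees so it appears static to the wearer.
print(upright_content_angle(30))   # 330, i.e., -30 degrees
```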
[00092] A display module may additionally incorporate one or more sensors on or near the same surface as the display. For example, the display module may include a camera or other optical sensor, microphone, or antenna. One or more sensors can be placed in an inactive area of the display. For example, Figure 5B illustrates device 522 with a camera module 516 placed coplanar with the battery below display 520, with optical aperture 514 positioned under the clear section of display 520. Camera module 516 may be placed between the grid line connectors 518 for display 520. Any camera or other suitable sensors can be placed coplanar with the display, such as antenna 524 of Figure 5C, which is placed in dead area 526. Additionally or alternatively, sensors can be placed below or above the display, may be placed in any suitable location in or on the external body of the device, may be placed in any suitable location in or on the armband of a device, or any suitable combination thereof, as described more fully herein. For example, a forward-facing camera can be placed under the display, over the display, or above the display.
[00093] In specific embodiments, the circular display packaging includes an inactive area, as illustrated in Figure 5D. In a traditional display, row drive lines powering the display are routed to the nearest side edge, then routed down along the inactive areas, or connected directly to onboard driver chips along that edge. Several approaches can be taken to reduce the amount of inactive area for the display. For example, specific embodiments reduce the size of the inactive area by re-routing the grid control lines powering the display to one edge of the display. Figure 5D illustrates grid control lines 532 routed to an edge of display 536 and connected to a connector 538 guiding the lines to the processing center of device 528. In this configuration, inactive area 530 can be minimized.
[00094] Figure 5E illustrates other exemplary embodiments for reducing the inactive area of a display 554 of device 540 by creating a polygonal-type display outline, with a circular area masked in the center by one or more masks 550. Connectors 552 are arranged in a polygonal pattern. Rows 546 and columns 542 of the grid lines are routed to the nearest connector 552. In specific embodiments, connectors 552 connect to a flexible circuit behind the display that carries the driver chip. Due to the reduced connection density, the electronics of Figure 5E may be easier to connect to a flexible printed circuit (FPC) board, thus increasing throughput. In addition, by moving the driver IC to the back of the display, one or more dead areas 548 can be further reduced while allowing the IC to remain on a flat, stable surface. This design is particularly suitable for OLED displays but can be used with LCDs, given that a backlight unit (BLU) can be laminated to the device before the FPC board is connected. Although the above example illustrates a polygonal arrangement of connectors, any suitable arrangement of connectors can be used as long as all pixels are reached by the grid lines.
[00095] Figure 5F illustrates an exemplary physical arrangement and sizing of a device display. The device has a diameter of D4, such as, for example, approximately 41.1 millimeters. The device includes one or more inactive areas having a width D3, such as, for example, approximately 1.55 millimeters. The device includes a visible area of diameter D2, such as, for example, approximately 38 millimeters. The device includes connectors 568 for column lines 564 and row lines 566. Connectors 568 may be coupled to the device via one or more FPC connections 570, which have a width of D1, such as, for example, approximately 0.2 millimeters. Connectors 568 may have a width of D5, such as, for example, approximately 6 millimeters. The display FPC 556 can be used to connect the display electronics, such as, for example, the circuits from connectors 568, to driver chip 558, which can be below the display or on the back of the device body.
[00096] Figures 6A-C illustrate exemplary cross-sectional views of a device display, including fabrication of the device. In Figure 6A, hot bar blade 605 is used to solder the flexible printed circuit(s) 610 coupling the display electronics to the device's processing electronics. A bracket 615 can be used to stabilize the FPC 610 during this process. Figure 6B illustrates the attached FPC 620, which has been folded (portion 625) and glued to the back of the display using adhesive 630. Figure 6C illustrates an exemplary finished display. FPC 645 has been laminated to the back of the protective display glass 635, is folded over the front of glass 635, and is secured to the front of glass 635 by microbonding 649. Adhesive 650 connects the FPC 645 to the device. The FPC passes over driver chip 655, which is attached to the device by adhesive 650.
[00097] In specific embodiments, all RF components and processing are located inside the device's body, which can create a challenge in allowing RF signals to pass outside the device. The FPC board can additionally be fixed to the sides of the polygon where there is no connection to the display itself to allow the mounting of stripline, stub, ceramic, or other antennas (or other suitable sensors) in the same plane as the display, as illustrated in Figure 5C. As the antenna of Figure 5C is coplanar with the display, interference from the display's dense grid wiring (e.g., as illustrated in Figure 5E) is reduced.
[00098] In specific embodiments, a display can be shielded from electromagnetic interference with the main processor board using a metal shield. In specific embodiments, the metal shield can be used as a heat sink for the battery, and thus can improve charge or discharge rates for the battery.
[00099] In specific embodiments, a wearable electronic device may include one or more external elements (which may be of any suitable shape) surrounding the device body. Figure 7A illustrates an exemplary external element, an outer ring 710, around the display 705. The outer ring may be composed of any suitable material, such as, for example, stainless steel or aluminum. In specific embodiments, the outer ring 710 can be rotatable in one direction, in both directions, or can be used in both configurations based on, for example, a switch. In specific embodiments, a first outer ring 710 may rotate in one direction while a second outer ring 710 may rotate in the opposite direction. Ring 710 may be coupled to base 720 of the device via retaining ring 715. Figure 7B illustrates outer ring 710 secured to base 720 via a Delrin ring 715A or via a sprung stainless steel retaining ring 715B. Springs or clips 725 secure the rings to the base 720. Figures 7C-D illustrate the retaining ring 715 secured to the base 720 via screws 725 screwed into corresponding pillars of the base 720. The device may include fasteners/spacers 730, as illustrated in Figure 7C.
[000100] In specific embodiments, detents or encoders (which may be used interchangeably where appropriate) of an external element may provide the user with tactile feedback (e.g., a tactile click) provided, for example, by a detent that allows the user to determine when the element has moved one "step" or "increment" (which terms may be used interchangeably here). This click can be produced directly via a mechanical linkage (e.g., a spring mechanism) or it can be produced electronically via a haptic actuator (e.g., a motor or piezo actuator). For example, a motor may provide resistance to the movement of a ring, such as, for example, being shorted to provide resistance and left open to provide less resistance, simulating the relatively high and low torque provided by a mechanical detent system. As another example, magnetic systems can be used to provide the tactile sensation of a detent. For example, a solenoid mechanism can be used to disengage the detent spring or escapement as needed. The spring or escapement provides the actual mechanical feedback. However, this arrangement allows the device to skip some of the detents when necessary, while re-engaging the detent at exact intervals to create the sensation of detents, for example of varying size. As another example, a rotating outer element (such as, for example, the outer ring) can be magnetized, such as by means of an electromagnetic element used to attract the ring into "detent" positions, increasing torque and simulating detent feedback. As another example, a rotating outer element may have alternating north-south poles, which repel and attract corresponding magnetic poles in the device body. As another example, a permanent magnet can be used to lock the ring in place when the electromagnet is not in use, preventing free rotation. As another example, instead of an electromagnet, an easily magnetizable ferromagnetic alloy can be used inside the solenoid. This allows the solenoid's electromagnetic field to "reprogram" the magnetic orientation of the core, thus maintaining the magnetic drive effect even when the solenoid itself is disengaged. While this disclosure provides specific examples of detents, detent-like systems, and encoders, this disclosure considers any suitable detents, detent-like systems, or encoders.
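As a sketch of how the electronically produced detent feel described above might be driven, the resistance of a motor or haptic actuator could be modulated as a periodic function of ring angle so that torque varies with the detent spacing, producing a click-like feel. The function below is a hypothetical control law for illustration only and is not taken from the disclosure.

```python
import math

def detent_resistance(ring_angle_deg, detent_spacing_deg=15.0,
                      base_torque=0.1, peak_torque=1.0):
    """Return a resistance level that varies periodically with detent spacing,
    simulating the high/low torque of a mechanical detent system
    (hypothetical control law; spacing and torque values are assumed)."""
    phase = (ring_angle_deg % detent_spacing_deg) / detent_spacing_deg
    # cosine bump centered on each detent position
    bump = 0.5 * (1 + math.cos(2 * math.pi * phase))
    return base_torque + (peak_torque - base_torque) * bump

for angle in (0, 5, 7.5, 15):
    print(angle, round(detent_resistance(angle), 2))
```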
[000101] Figure 8A illustrates an outer ring 805 with notches for a spring-based detent system etched into the outer surface of outer ring 805. Springs 820 are attached to spring posts 810. Retaining ring 815 may be made of Delrin, steel, or any other suitable material, and may be segmented or solid/continuous. Figure 8B illustrates an exemplary outer ring having small notches 830 that engage a spring-loaded member to provide tactile feedback from the illustrated detent. In the case of an electronic feedback system, the feedback can be produced in rapid synchronization with the movement of the ring, and should have a sufficient attack and decay rate such that successive movements of the ring can be distinguished from one another. In specific embodiments, an outer ring can be freely rotatable (e.g., continuously) without any clicking or stepping. In specific embodiments, the ring may be able to rotate continuously and also to rotate in steps/increments, based, for example, on input from a user indicating which rotational mode the outer ring should be in. The ring may also, or alternatively, rotate freely in one direction and in increments in the other direction. Different functionality may occur based on the rotational mode used. For example, rotating in continuous mode can change a continuous parameter, such as volume or zoom, while rotating in incremental mode can change a discrete parameter, such as, for example, menu items or contacts in a list, as described more fully herein. In specific embodiments, freely rotating the ring can provide tactile feedback to the wearer, for example, a force applied such that the ring appears to rotate in a viscous medium (e.g., the faster the ring rotates, the more it resists rotation). In specific embodiments, an outer ring may be pressed or lifted in the direction of the axis about which the outer ring rotates, such as, for example, as part of a gesture or to change rotation modes. In specific embodiments, an outer ring may have touch-sensitive portions.
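The two rotational modes described above (continuous rotation changing a continuous parameter, incremental rotation changing a discrete parameter) could be handled roughly as in the sketch below. The class name, step sizes, and parameter choices are assumptions made for illustration, not details from the disclosure.

```python
class RingInput:
    """Sketch of mapping outer-ring rotation to parameter changes by mode
    (hypothetical names/values; not code from the disclosure)."""

    def __init__(self):
        self.mode = "continuous"   # or "incremental"
        self.volume = 0.5          # continuous parameter, 0.0 - 1.0
        self.menu_index = 0        # discrete parameter
        self.menu_items = ["calls", "messages", "camera", "settings"]

    def on_rotation(self, delta_degrees):
        if self.mode == "continuous":
            # e.g., 180 degrees of rotation sweeps the full volume range
            self.volume = min(1.0, max(0.0, self.volume + delta_degrees / 180.0))
        else:
            # one detent step (e.g., 15 degrees) advances one menu item
            steps = int(delta_degrees // 15)
            self.menu_index = (self.menu_index + steps) % len(self.menu_items)

ring = RingInput()
ring.on_rotation(45)            # continuous: volume rises to 0.75
ring.mode = "incremental"
ring.on_rotation(30)            # incremental: advances two menu items
print(ring.volume, ring.menu_items[ring.menu_index])
```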
[000102] In specific embodiments, an encoder or detent may be used to determine the position of the outer ring relative to the device body. Specific embodiments utilize an encoder that is attached to the device body, as illustrated by the encoder 230 of Figure 2. In specific embodiments, the encoder is part of the inner surface of the outer ring itself, as illustrated by the printed optical elements 825 in Figure 8B. In these embodiments, the outer ring acts directly as the rotating part of the encoder. An optical encoder pattern is printed on the inner surface and is read by an optical module on the processing board. The encoder on the inside of the outer ring should have sufficient optical contrast for the detectors, and may be engraved on the outer ring by, for example, printing or laser engraving. The inner and outer rings can be sealed against the environment with a low-friction ring (such as, for example, ring 840 in Figure 8C) made of a material such as Teflon or Delrin that maintains a tight fit while preventing contaminants from getting inside the device. In specific embodiments, a ferrule on the inner ring may engage a similar ferrule on the outer ring, allowing the two rings to be joined while still allowing free rotation. A larger ferrule at the bottom of the inner ring provides additional sealing by deflecting environmental hazards from below. As illustrated in Figure 9, in specific embodiments, the sealing ring 915 may fit into the groove 905 of the base, which may include a gripping area 910.
[000103] In specific embodiments, a retaining ring connecting the outer ring to the device body may have strain gauges to detect pressure on the outer ring. As an example, Figure 10 illustrates a retaining ring connected to four strain gauges (which are also connected to the inner body) that are placed symmetrically around the ring. As used herein, the four strain gauges may be electronic components that sense strain. As a result of the symmetrical placement, normal motion or contact with the outer ring will place mostly asymmetrical strain on the outer ring, because the ring simply shifts relative to the device in the plane of the ring, and thus one end compresses and the opposite end elongates, as illustrated by the upper ring of Figure 10. In comparison, squeezing a larger portion of the outer ring is likely to produce symmetrical strain in opposing pairs of strain gauges (e.g., due to the ring stretching under pressure). The relative difference in strain between the two pairs of strain gauges thus differentiates intentional squeezing of the outer ring from regular movement of the outer ring or contact with the outer ring. While this disclosure describes specific examples of the number and placement of strain gauges on the retaining ring, this disclosure considers placing any suitable number of strain gauges on any suitable component of the device to detect pressure on the component. As an example, strain gauges can be placed on the device's clamp or on the outer ring.
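The distinction drawn above, between an intentional squeeze (roughly symmetric strain in opposing gauge pairs) and ordinary movement or contact (mostly asymmetric strain), can be sketched as a comparison of the common-mode and differential components of the gauge readings. The gauge ordering, units, and thresholds below are assumptions made only for illustration.

```python
def classify_ring_tension(g_top, g_right, g_bottom, g_left,
                          squeeze_threshold=0.5, asymmetry_threshold=0.5):
    """Sketch: four strain gauges placed symmetrically around the retaining
    ring. A squeeze stretches opposing pairs similarly (high common-mode,
    low differential); ordinary contact shifts the ring, compressing one side
    and stretching the other (high differential). Thresholds are hypothetical."""
    pair1_common, pair1_diff = g_top + g_bottom, abs(g_top - g_bottom)
    pair2_common, pair2_diff = g_left + g_right, abs(g_left - g_right)
    symmetric = min(pair1_common, pair2_common) > squeeze_threshold
    asymmetric = max(pair1_diff, pair2_diff) > asymmetry_threshold
    if symmetric and not asymmetric:
        return "squeeze"
    if asymmetric:
        return "movement/contact"
    return "none"

print(classify_ring_tension(0.4, 0.4, 0.4, 0.4))      # squeeze
print(classify_ring_tension(0.8, 0.0, -0.6, 0.0))     # movement/contact
```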
[000104] When strain is imposed on a component that contains strain gauges or any other suitable strain or pressure sensing system, the detected strain may result in any suitable functionality. For example, when strain is placed on the outer ring, such as, for example, by a user squeezing the outer ring, feedback can be provided to the wearer. This feedback can take any suitable form, such as tactile feedback (e.g., vibration, shaking, or heating/cooling), audible feedback such as a beep or the playing of a specific, user-defined tone, visual feedback (e.g., via the device's display), or any other suitable feedback or combination thereof. The functionality associated with squeezing a ring is described more fully herein, and this disclosure considers any suitable functionality resulting from strain or pressure imposed on suitable components and detected by any suitable components.
[000105] A wearable electronic device can be attached to a cuff to secure the device to the user. Here, reference to a "cuff" may encompass any equipment suitable for securing a device to the wearer, such as, for example, a traditional cuff 1405 that can be worn around the wearer's arm, wrist, waist, or leg, as illustrated by way of example in Figure 14A; a clip 1415 for affixing to a garment, as illustrated by way of example in Figure 14B; a necklace or bracelet configuration 1420, as illustrated by way of example in Figure 14C; a key fob 1425 or other accessory configuration for securing the device, for example, in the user's pocket, as illustrated by way of example in Figure 14D; or any other suitable configuration. Each of these embodiments may include a camera 1410 located on the device, on the cuff, or on the body. Figure 11 illustrates various modalities for wearing the device, such as, for example, around the neck, as illustrated at 1105; attached to a garment (such as, for example, on the chest, as illustrated at 1110) or on a belt, as illustrated at 1115; on an accessory (such as, for example, an arm, as illustrated at 1120); on the wrist, as illustrated at 1125; or in a pocket, as illustrated at 1130. While this disclosure describes specific examples of cuffs and ways of affixing devices to a wearer, this disclosure considers any suitable cuffs or ways of attaching a device to a wearer.
[000106] In specific embodiments, sensors and corresponding electronics may be attached to a clamp where appropriate. For example, the clamps of Figures 14A-14C may be suitable for housing an optical sensor. All of the specific embodiments illustrated may be suitable for including a touch-sensitive area. This disclosure considers any suitable clamps including any suitable sensors or electronics, such as, for example, communication components (such as antennas), environmental sensors, or inertial sensors. In specific embodiments, the clamp can be detached from the device, and can communicate remotely with the device when not attached to the device. In specific embodiments, wiring associated with electrical components in the clamp may also be housed in the clamp, for example, to minimize device bulk or to minimize electromagnetic interference with the device's internal components. For example, devices that may cause high levels of internal EMI (e.g., camera or communication systems), that may require additional volume (e.g., battery or speaker), that may require environmental sealing of the main body (e.g., data/power connector), or that may require additional contact with the wearer's skin (e.g., biometric sensors) may benefit from housing at least some of the electronics in the device's clamp. In specific embodiments, when wiring is contained in a clamp, a display module may be attached to the clamp such that electronic connections made in or through the clamp are not twisted when the outer ring is rotated. The module may use a connector that is removable by the user, such that the display module or device body can be removed and attached at the user's discretion. As an exemplary attachment of a clamp to a device, a clamp 1215 as illustrated in Figure 12A may be attached to the body by being placed over one or more pillars 1205 and then secured to those pillars using fasteners (e.g., screws) 1210. In specific embodiments, in addition to the fasteners and pillars, a retaining plate 1215 may be used to secure the clamp to the device 1225, as illustrated in Figure 12B. This disclosure considers any suitable interface between the clamp and the device. For example, a USB interface may be provided between the clamp and the body of the device, for example, to communicate data between the device and the clamp or between components of the device and components of the clamp. In specific embodiments, an interface may enable a user of the device to easily detach, attach, or change the clamp of the device.
[000107] This disclosure considers any structure suitable for connecting a clamp as illustrated in Figure 14A to itself, for example, when worn by a user. For example, Figure 13A illustrates exemplary structures for attaching clamp 1305 having a camera module 1310 to a user of device 1300. Fasteners may include one or more fasteners 1315, holes 1320 and 1335 and corresponding components, brackets 1314, or clips 1325 with fastening buttons 1330. Figure 13B illustrates an exemplary mechanism for attaching clamp 1301 to a user using clips 1311 and 1303. Components 1309 are inserted into the cavity on the other side of components 1307 to secure clamp 1301. Figure 13B further illustrates exemplary internal mechanisms for clips 1303 and 1311. Component 1317 of clip 1313 (corresponding to clip 1311) may include one or more magnetic portions, which may be attracted to magnets in cavity 1323. For example, component 1317 may include a magnetic portion on its outer edge, and a magnet of opposite polarity may be placed in front of spring 1319 to attract the magnet of component 1317. Components 1317 can then fill cavity 1323, attaching clip 1313 to clip 1303 by coupling the magnets. When inserted, components 1321 can be used to engage springs 1319, which force components 1317 out of cavity 1323. Clip 1313 can then be detached from clip 1303. In addition to the magnets in components 1317 and cavity 1323, magnets may also be placed inside clip 1313, for example, to aid in the removal of clip 1313 when springs 1319 are engaged or to prevent components 1317 from sliding in and out of clip 1313 when not attached to clip 1303. For example, one or more magnets can be placed in the center of clip 1313, equidistant from components 1317 and in the same plane as components 1317, attracting the magnets of each component (and thus the components themselves) toward the center of clip 1313.
[000108] Figure 13C illustrates an exemplary structure for attaching a clamp 1327 using fasteners 1333 and 1331, for example, through the use of cavity 1329 and components 1337 and 1341. Figure 13C also illustrates the internal structure of fasteners 1331 and 1333. Fastener 1339 (corresponding to fastener 1333) includes components 1337. When fastener 1343 (corresponding to fastener 1331) is inserted into fastener 1339, components 1341 are affixed to components 1337, and may be secured by extending over a ferrule of fastener 1339. As fastener 1339 is pulled up, the ferrule increasingly forces components 1337 outward, moving components 1341 beyond the ferrule of fastener 1339 and allowing fastener 1339 to be removed from fastener 1343. In specific embodiments, magnets may be placed within or on fasteners 1333 and 1331 to secure them together. For example, a magnet can be placed on the edge of each of the components 1341 and 1337. When fastener 1343 is brought into fastener 1337 (or vice versa), the magnets attract and secure component 1341 to component 1337. In addition, a magnet may be placed in fastener 1343, for example, to aid in the removal of component 1341 from component 1337 or to prevent components 1341 from sliding in and out of fastener 1343 when not affixed to fastener 1339. For example, one or more magnets can be placed at the center of fastener 1343, equidistant from components 1341 and in the same plane as components 1341, attracting the magnets at the end of each component (and thus the components themselves) toward the center of fastener 1343.
[000109] Figure 13D illustrates an alternative arrangement for attaching clamp 1351 using fasteners 1349 and 1353. When attached, fastener 1357 (corresponding to fastener 1353) can be twisted, disengaging components 1359 (which may be rounded) from cavities 1363 and enabling fastener 1361 (corresponding to fastener 1349) to be removed from fastener 1357, and vice versa. In specific embodiments, one or more magnets may be used to affix fasteners 1357 and 1361 together and/or to remove fasteners 1357 and 1361 from each other. For example, magnets may be placed in cavities 1363 and on the outer (convex) edge of components 1359, drawing components 1359 into cavities 1363 and securing fastener 1361 to fastener 1357. As another example, magnets may be placed on the inner edge of components 1359 (i.e., on the concave surface of components 1359), drawing components 1359 into fastener 1361, for example, to assist in removing components 1359 from cavities 1363 or to prevent components 1359 from slipping in and out of fastener 1361 when not affixed to fastener 1357. Corresponding magnets can also be placed on surfaces of fastener 1361 that are in contact with components 1359 when those components are not extended into cavities 1363. In other words, these magnets can attract (and, in specific embodiments, ultimately make direct contact with) the magnets on the concave surface of components 1359, securing components 1359 to fastener 1361.
[000110] Figures 13E-13G illustrate exemplary embodiments of attaching a clamp 1369 with camera module 1373 to itself, for example, when worn by a user of device 1367. In Figure 13E, one or more magnets 1371 on one side of the clamp 1369 may be attracted to one or more magnets 1379 on the other side of the clamp 1369. The magnets may be strips of magnetic material partially crossing a clamp, as illustrated by the magnetic strip 1307 in Figure 13H; they may be strips of magnetic material that completely cross the clamp, as illustrated by strips 1321 and 1327 in Figure 13I; or they may be areas of magnetic material 1393, as illustrated in Figure 13F. In addition to magnets 1371 and 1379, clamp 1369 may include holes 1391 and one or more posts 1377 to secure clamp 1369 to a user of device 1367. Figure 13G illustrates fasteners 1387 (e.g., screws 1396) attached to fasteners 1371 (for example, cap nut 1395) for securing clamp 1381 to a user of device 1367 using retainers 1383 (1398).
[000111] In specific embodiments, a clamp containing electrical components may also incorporate a traditional physical contact connector, as illustrated by connector 250 in Figure 2. The connector may allow connectivity to the device, e.g., for charging, system updates, troubleshooting, or data transfer. Such a connector may be of the pogo type or may consist of plated surfaces to which a charging cable can interface upon contact. Such connectors can be plated in precious metals to prevent corrosion from exposure to moisture from the environment and the human body. In specific embodiments, physical connectors can be used for power only, and data can be transferred using short-range communication modalities such as BLUETOOTH, near-field communication (NFC) technology, or WI-FI.
[000112] In specific embodiments, a clamp may be used to house flexible batteries (such as, for example, lithium-based batteries) to increase the energy storage of the device. As the energy storage capacity can be linked to the total volume, internal batteries in the armband increase the storage capacity for volume-limited wearable devices without affecting the overall size of the device body.
[000113] As described more fully below, a wearable electronic device may include one or more sensors on or within the device. For example, a wearable electronic device may include one or more optical sensors or depth sensors. Optical sensors can be placed in any suitable location, such as, for example, on the face of the device, on a clamp facing away from the user's body, on a clamp facing toward the user's body, or any suitable combination thereof. Figure 15 illustrates a device 1500 with a clamp that has an outward-facing optical sensor 1505. Placing an optical sensor on the clamp can reduce the number of high-frequency signals within the housing, allowing for lighter shielding within the device body and thus saving weight and volume. Figures 14A-14D illustrate exemplary camera placements for different embodiments of a wearable electronic device. In specific embodiments, electronics such as those for processing camera input may also be located in the clamp, for example, in a "volcano"-shaped housing for the camera, as illustrated by housing 125 in Figure 1. In specific embodiments, other sensors can be placed close to an optical sensor, such as, for example, in the same housing as the optical sensor on the device's clamp. For example, a depth sensor can be used in conjunction with an optical camera to improve the display or detection of a device's environment, or to determine which object a user is pointing at or interacting with through a gesture.
[000114] In specific embodiments, the placement of an optical sensor on the clamp may be adjustable by the user within a predetermined range. In specific embodiments, the placement of an optical sensor on the clamp can be optimized such that the sensor can be conveniently aimed by the user. For example, as illustrated in Figure 15, if the user wears the device around the wrist, the optical sensor 1505 can be placed in an outward-facing fashion such that the optical sensor points away from the user's body when the user's palm is approximately parallel to the ground.
[000115] In specific embodiments, the placement of the optical sensor may be such that the user can see the device's display while the sensor is pointing outward from the user's body. Thus, the user can see the content captured by the optical sensor and displayed by the device without blocking the user's view of the physical scene captured by the sensor, as illustrated by the viewing triangle in Figure 16. A display 1620 of a device 1600 may have an associated viewing cone, for example, the volume within which the display can reasonably be viewed. In Figure 16, user 1615 (1) views a real trophy 1610 and (2) views an image of the trophy on the display 1620 of device 1600 from within the viewing cone of display 1620 by pointing sensor 1605 at the real trophy. Sensor 1605 has an associated viewing angle corresponding to a volume within which images can reasonably be captured by sensor 1605. Note that in the example of Figure 16, sensor 1605 is placed such that the user can conveniently point sensor 1605 in the outward direction while keeping the display 1620 of device 1600 in a user-facing direction, and can do so without device 1600 blocking the user's view of trophy 1610.
[000116] Figure 17 illustrates an exemplary viewing angle for an optical sensor. When object 1725 is in the viewing angle of optical sensor 1705, a user can see object 1725 and an image 1710 or 1715 of object 1725 as displayed on device 1700. For example, when the user's hand 1720 is in the viewing angle, the user can see object 1725, hand 1720, and an image 1710 of object 1725 and hand 1720 on the display of device 1700. In comparison, when hand 1720 is not in the viewing angle of sensor 1705, hand 1720 does not appear in the image 1715 displayed on display 1700. When worn by a user, the device's sensor can capture the user's hand/arm/fingers in the sensor's viewing angle while performing a gesture to be captured by the same sensor or by other sensors (e.g., a gesture selecting an object in the device's viewing angle, such as, for example, squeezing, tapping, pulling in one direction, or pushing in the opposite direction). The sensor and display can be oriented such that, when worn by a user, an object to be displayed on the device is in the device's viewing angle while the device does not block the user's view of the object and the user's eye is inside the viewing cone of the device's display. In specific embodiments, a user can interact with the image captured by the sensor or displayed on the device, such as, for example, by lightly tapping the portion of the display at or near where the image is displayed, by performing a gesture within the sensor's viewing angle, or by any other suitable method. This interaction may provide some functionality related to the object, such as, for example, identifying the object, determining information about the object, and displaying at least some of the information on the display; capturing an image of the object; or pairing with the object or otherwise communicating with the object if the object has pairing/communication capabilities.
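Whether an object (or the user's hand) falls within the sensor's viewing angle, as discussed above, reduces to comparing the angle between the sensor's optical axis and the direction to the object against half the sensor's field of view. The sketch below is a simplified two-dimensional illustration with assumed positions and an assumed field-of-view value; it is not geometry taken from the disclosure.

```python
import math

def in_viewing_angle(sensor_pos, sensor_axis_deg, fov_deg, object_pos):
    """Simplified 2-D check of whether object_pos lies within the sensor's
    field of view (sketch with assumed geometry, not from the disclosure)."""
    dx = object_pos[0] - sensor_pos[0]
    dy = object_pos[1] - sensor_pos[1]
    angle_to_object = math.degrees(math.atan2(dy, dx))
    offset = abs((angle_to_object - sensor_axis_deg + 180) % 360 - 180)
    return offset <= fov_deg / 2

# Sensor at the wrist pointing outward (0 degrees) with a 70-degree field of view.
print(in_viewing_angle((0, 0), 0, 70, (1.0, 0.3)))   # True: object in view
print(in_viewing_angle((0, 0), 0, 70, (0.2, 1.0)))   # False: outside the angle
```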
[000117] In specific embodiments, a depth sensor or optical module (which may be used interchangeably, where appropriate) can communicate with a device via a simple extension of the bus that the optical sensor would use if it were mounted directly on the main printed circuit board (PCB), as shown in Figure 18A. In Figure 18A, optical sensor 1825 transmits data via flexible printed circuits or wiring 1820 to an integrated control circuit 1810, which in the example of Figure 18A is located within or on the device 1805, which houses the main printed circuit board. Figure 18B illustrates the optical sensor integrated circuit 1850 on or within the optical sensor module 1860, which also houses the optical sensor 1855. Communication between the main PCB of device 1830 and the electronics in the camera module 1860 occurs via flexible printed circuit 1845. The arrangement of Figure 18B may allow an integrated circuit to compress and otherwise process the data and send it through a method that requires a smaller number of signal lines, or that requires less data transfer. This can be advantageous because the clamp must flex when the user wears the device, so a smaller number of lines may be desirable. Such an approach can reduce the number of lines to one or two signal lines and two power lines, which is advantageous for packaging, modeling, and reliability. In specific embodiments, one or more of the electronics described above may be shielded to prevent electromagnetic interference from long, high-frequency cabling. The use of a parallel bus is common in such cases, and may require the use of a larger cable or FPC.
[000118] In one embodiment, the camera control integrated circuit may be mounted directly onto a small circuit board in the optical module, as illustrated in Figures 18A-B. A wearable electronic device may include any suitable sensors. In specific embodiments, one or more sensors or their corresponding electronics may be located in a cuff of the device, within or on the body of the device, or both. Sensors can communicate with each other and with processing and memory components through any wired or wireless connections, such as, for example, direct electrical connection, NFC, or BLUETOOTH. Sensors can detect the context (e.g., environment) or state of the device, the user, an application, or another device or an application running on another device. This disclosure considers a wearable electronic device containing any suitable configuration of sensors in any suitable location of the wearable electronic device. Furthermore, this disclosure considers any suitable sensor receiving any suitable input described herein, or initiating, being involved in, or otherwise associated with the provision of any suitable functionality or services described herein. For example, touch-sensitive sensors may be involved in transitioning between graphical user interfaces displayed on the device, as described more fully herein. This disclosure further considers that the functionality associated with the wearable device, activation/deactivation of sensors, sensitivity of sensors, or priority of sensor processing can be customized by the user, where appropriate.
[000119] Figure 19 illustrates an exemplary sensor detection system and illustrates exemplary sensors for a wearable electronic device. Sensors send data in a sensor-specific format to the device's sensor hub subsystem. For example, the sensors 19A illustrated in the exemplary sensor module 1924 may include one or more of: face detection cameras 1902, outward-facing cameras 1904, face proximity sensors 1906, face touch sensors 1908, clamp touch sensors 1910, skin touch acoustic sensors 1912, an inertial measurement unit (IMU) 1914, gravity vector sensors 1916, touch sensors 1918 and 1920, and any other suitable sensors 1922. Data from the sensors is sent to the sensor hub 19B illustrated in the exemplary sensor hub module 1944. The data is conditioned and cleared of noise in steps 1928 and 1930, as needed, and transferred to a locked-state detector 1942. The locked-state detector 1942 detects when the device is idle and disables sensors as needed to conserve power, while monitoring the sensor data for a gesture or other suitable input that can reactivate the device. For example, numeric gesture detectors receive sensor output and compare that output to one or more numeric thresholds to determine a result. Heuristic gesture detectors 1934 receive sensor output and make decisions based on one or more decision trees, such as, for example, ANDed rules applied to more than one threshold. Pattern-based gesture detectors 1938 evaluate sensor input against a predetermined library of gesture patterns 1940, such as, for example, patterns determined by empirically evaluating sensor output when a gesture is performed. One or more gesture priority decoders 1948 evaluate the output from the gesture detectors, the locked-state detector, or both, to determine which, if any, of the detected gestures should be used to provide functionality to a specific application or to a system-level process. More broadly, in specific embodiments, when the device is active, application-requested or system-requested sensor detectors are activated in turn and provide their data to the sensor priority decoder. In specific embodiments, the priority decoder determines which, if any, of a plurality of sensor inputs it should process, and this disclosure considers that combined input from multiple sensors may be associated with functionality different from the functionality associated with each sensor input individually. The decoder decides when a gesture has been detected with sufficient certainty and provides the sensor data to the sensor hub driver. The driver provides an application programming interface (API) to the end applications and system controllers, which in turn produce the necessary output and navigation. For example, Figure 19 illustrates the exemplary sensor hub driver 1950, application APIs 1952, system navigation controllers 1954 for, for example, determining appropriate system functionality (for example, system-level navigation 1962 through a graphical user interface of the device), and application-level gesture priority detectors 1956 for applications. Although the sensor hub 19B and the application processor 19C (illustrated in the exemplary application processor module 1964) of Figure 19 are illustrated as separate entities, they can be embodied by (and their functions performed by) at least some of the same or similar components. In specific embodiments, the boundaries delimiting the components and functions of the sensor hub and the application processor can be more or less inclusive.
The boundaries illustrated in Figure 19 are merely one exemplary embodiment. As with the sensors themselves, the functions performed by, and the components of, the sensor hub and application processor system may be located in the device body, in the armband, or in both. Specific embodiments may use more than one sensor hub or application processor, or components thereof, to receive and process sensor data.
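By way of illustration only, the following sketch suggests how the detector stages and gesture priority decoder described above for Figure 19 could be organized in software. It is not part of the disclosed embodiments; the class names, thresholds, and the simplified scalar sensor frame are assumptions made for readability.

```python
# Minimal sketch (not part of the disclosure) of the detector/priority-decoder
# flow described for Figure 19. Sensor samples are simplified to plain floats,
# and all class and threshold names are illustrative assumptions.

class NumericGestureDetector:
    """Compares a sensor reading against a numeric threshold."""
    def __init__(self, name, threshold, priority):
        self.name, self.threshold, self.priority = name, threshold, priority

    def detect(self, sample):
        return sample >= self.threshold

class HeuristicGestureDetector:
    """ANDs several threshold rules, as a stand-in for a small decision tree."""
    def __init__(self, name, rules, priority):
        self.name, self.rules, self.priority = name, rules, priority

    def detect(self, samples):
        return all(samples.get(key, 0.0) >= limit for key, limit in self.rules.items())

def priority_decode(candidates):
    """Returns the highest-priority detected gesture, or None."""
    detected = [d for d, hit in candidates if hit]
    return max(detected, key=lambda d: d.priority).name if detected else None

# Example frame of (already conditioned) sensor data.
frame = {"accel_z": 1.3, "touch_band": 0.9, "proximity": 0.2}

lift = NumericGestureDetector("lift_to_wake", threshold=1.0, priority=1)
grip = HeuristicGestureDetector(
    "grip_and_lift", rules={"accel_z": 1.0, "touch_band": 0.5}, priority=2)

winner = priority_decode([(lift, lift.detect(frame["accel_z"])),
                          (grip, grip.detect(frame))])
print(winner)  # -> "grip_and_lift": the decoder passes only the top-priority hit
```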
[000120] Sensors can internally produce sensor data, which can simply be filtered or reformatted by, for example, a detector or data conditioner. The raw data can be formatted into a uniform format by the data formatter for ingestion by the application API. Recognition means may use numerical models (such as decision trees), heuristic models, pattern recognition, or any other suitable hardware, software, and techniques to detect sensor data, such as gesture input. Recognition means can be enabled or disabled by the API. In such cases, the associated sensors may also be disabled if the recognition means would not receive data from the sensors or would be unable to recognize the sensor data.
[000121] A device may incorporate a database of sensor outputs that allows the same detector to detect many different sensor outputs. Depending on the requests produced by the API, a sensor priority decoder can suppress or pass sensor output based on given criteria. The criteria can be a function of the API model. In specific embodiments, a recognition means may ingest the output of more than one sensor to detect a sensor output.
[000122] In specific embodiments, multiple sensors can be used to detect similar information. For example, both a normal camera and a depth-sensing camera can be used to detect a finger, or both a gyroscope and a magnetometer can be used to detect orientation. Where appropriate, functionality that relies on or uses sensor information may substitute sensors or choose between them based on implementation and runtime considerations such as cost, energy usage, or frequency of use.
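As a hedged illustration of the sensor-substitution idea above (not a prescribed implementation), the sketch below chooses between two redundant sensors based on accuracy and energy cost; the sensor names and numeric figures are assumptions.

```python
# Illustrative sketch (an assumption, not the patent's implementation) of choosing
# between redundant sensors that report similar information, weighing accuracy
# against energy cost as the text suggests.

SENSORS = [
    # name, relative accuracy, relative energy cost per reading
    {"name": "depth_camera", "accuracy": 0.95, "energy": 5.0},
    {"name": "rgb_camera",   "accuracy": 0.80, "energy": 2.0},
]

def pick_sensor(battery_fraction, accuracy_floor=0.75):
    """Prefer accuracy when the battery is healthy, energy savings when it is low."""
    usable = [s for s in SENSORS if s["accuracy"] >= accuracy_floor]
    if battery_fraction > 0.5:
        return max(usable, key=lambda s: s["accuracy"])["name"]
    return min(usable, key=lambda s: s["energy"])["name"]

print(pick_sensor(0.9))   # depth_camera: accuracy preferred on a full battery
print(pick_sensor(0.2))   # rgb_camera: cheaper sensor once the battery is low
```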
[000123] The sensors may be of any suitable type and, as described herein, may be located within or on a device body, within or on a cuff, or a suitable combination thereof. In specific embodiments, the sensors may include one or more depth or proximity sensors (terms which may be used interchangeably in this document, where appropriate), such as, for example, infrared sensors, optical sensors, acoustic sensors, or any other suitable depth or proximity sensors. For example, a depth sensor can be placed on or near the display of a device to detect when, for example, the user's hand, finger, or face approaches the display. As another example, depth sensors can detect any object at which a user's finger in the depth sensor's viewing angle is pointing, as described more fully in this document. Depth sensors may, in addition or alternatively, be located on a device clamp, as described more fully herein. In specific embodiments, the sensors may include one or more touch-sensitive areas on the device body, on the cuff, or on both. Touch-sensitive areas may use any suitable touch-sensing techniques, such as, for example, resistive, surface acoustic wave, capacitive (including mutual capacitance or self-capacitance), infrared, optical, dispersive, or any other suitable techniques. Touch-sensitive areas can detect any suitable contact, such as swipes, light taps, contact at one or more specific points or with one or more specific areas, or multi-touch contact (such as, for example, pressing two or more fingers on a display or rotating two or more fingers over a display). As described more fully herein, touch-sensitive areas may comprise at least a portion of a device display, ring, or armband. As with other sensors, in specific embodiments the touch-sensitive areas can be turned on or off based, for example, on context, power considerations, or user settings. For example, a touch-sensitive portion of a ring can be activated when the ring is "locked" (e.g., does not rotate) and deactivated when the ring rotates freely. In specific embodiments, the sensors may include one or more optical sensors, such as suitable cameras or optical depth sensors.
[000124] In specific embodiments, the sensors may include one or more inertial sensors or orientation sensors, such as an accelerometer, a gyroscope, a magnetometer, a GPS chip, or a compass. In specific embodiments, the output from the inertial or orientation sensors can be used to activate or unlock a device, detect one or more gestures, interact with content on the device's display screen or on a display screen of a paired device, access specific data or activate specific functions of the device or of a paired device, initiate communications between a device body and a cuff or between a device and a paired device, or provide any other suitable functionality. In specific embodiments, the sensors may include one or more microphones to detect, for example, a user's speech or ambient sounds to determine device context. Furthermore, in specific embodiments a device may include one or more speakers in the device body or in the clamp.
[000125] In specific embodiments, sensors may include components for communicating with other devices, such as network devices (e.g., servers or routers), smartphones, computing devices, display devices (e.g., televisions or kiosks), audio systems, video systems, other wearable electronic devices, or between a cuff and a device body. Such sensors may include NFC readers/beacons, BLUETOOTH technology, or antennas for transmitting or receiving on any suitable frequency.
[000126] In specific embodiments, sensors may include sensors that receive or detect tactile input from a user of the device, such as, for example, piezoelectric means, pressure sensors, force sensors, inertial sensors (as described above), strain/stress sensors, or mechanical actuators. Such sensors may be located in any suitable location on the device. In specific embodiments, device components may also provide haptic feedback to the user. For example, one or more rings, surfaces, or clamps may vibrate, produce light, or produce audio.
[000127] In specific embodiments, a wearable electronic device may include one or more environmental sensors, such as a temperature sensor, a humidity sensor, or an altimeter. In specific embodiments, a wearable electronic device may include one or more sensors for detecting a physical attribute of the wearable device's user. Such sensors may be located in any suitable area, such as, for example, on a cuff of the device or on the base of the device contacting the wearer's skin. As an example, the sensors may include acoustic sensors that detect vibrations of a wearer's skin, such as when the wearer rubs the skin (or the clothing covering the skin) near the wearable device, taps the skin near the device, or moves the device up and down the user's arm. As further examples, a sensor may include one or more body temperature sensors, a pulse oximeter, galvanic skin response sensors, capacitive imaging sensors, electromyography sensors, biometric data readers (e.g., fingerprint or eye), and any other suitable sensors. Such sensors may provide feedback to the user about the user's state, may be used to initiate predetermined functionality (e.g., an alert to take a specific medication, such as insulin for a diabetic), or may communicate detected information to a remote device (such as, for example, a terminal in a medical center).
[000128] A wearable electronic device may include one or more charging components to charge or power the device. Charging components may use any suitable charging method, such as capacitive charging, electromagnetic charging, trickle charging, charging by direct electrical contact, solar, kinetic, inductive, or intelligent charging (e.g., charging based on a condition or state of a battery and modifying the charging actions accordingly). The charging components may be located in any suitable portion of the device, such as within or on the body of the device or within or on the cuff of a device. For example, Figure 20A illustrates a charger 2000 with a connector 2005 for connecting a charging component to the charger. For example, connector 2005 may use mechanical structures, friction (such as couplings or fasteners), magnetism, or any other suitable technique to accept and secure a pin from a charging component such that the pin and charger 2000 make direct electrical contact. Figure 20C illustrates pin 2015 on clamp 2010 using pogo-type connectors to create a circuit connection between charger 2022 and clamp 2010 through contacts 2020. In specific embodiments, pin 2015 can be on charger 2022, and connector 2005 of Figure 20A can be on the cuff or on the body of the wearable device. In specific embodiments, contacts 2020 (such as, for example, pogo-type connectors) can be on the body of the device and can be used to create a circuit with the clamp or the charger for charging the device. The charger 2000 of Figure 20A can be connected to any suitable power source (such as, for example, power from an alternating current (AC) outlet or direct current (DC) power from a USB port of a computing device) over any suitable wired or wireless connection.
[000129] The charger 2000 can be made of any suitable material, such as acrylic, and in specific embodiments can have a non-slip coating, such as, for example, rubber. In specific embodiments, the charger 2000 may be affixed or attached to a surface; for example, it may be attached to a wall as illustrated in Figure 20B. The attachment may be made by any suitable technique, such as, for example, mechanically, magnetically, or adhesively. In specific embodiments, a wearable electronic device may be fully usable while attached to the charger. For example, when a charging component is located on the device's body, the device can be seated on the charger while a user interacts with the device or while other devices communicate with the device.
[000130] As another example of a charging component in a wearable electronic device, Figures 21A-21B illustrate additional exemplary chargers utilizing, for example, inductive charging. As illustrated in Figures 21A-21B, a clamp may include one or more charging coils 2110. As described above, this disclosure considers charging coils (or any other suitable charging component) incorporated in or on the device body, alternatively or in addition to the device cuff. A magnetic field 2105 generated, for example, by charging surface 2115 or charging surface 2120 passes through charging coil 2110. Charging surface 2120 of Figure 21B can increase the density of magnetic field 2105 through charging coil 2110 relative to charging surface 2115 and allows more accurate placement than charging surface 2115, thereby improving the system's charge transfer rate. This disclosure considers that, where appropriate, charging can energize components within the device body, components in or on the cuff, or both.
[000131] In specific embodiments, the armband or the device may implement an antenna for a wireless charging solution. Since wireless charging operates optimally in the absence of ferrous metals, this allows a wider choice of materials for the device body, while allowing improved wireless charging transfer capability by allowing the coil to be held between the poles of a charging driver (as described above) rather than simply being coplanar with the driver. As described above and illustrated in Figure 2, the clamp may also incorporate a traditional internal physical contact connector 250.
[000132] In specific embodiments, a charging unit with an internal charge reservoir can be associated with a wearable electronic device. When plugged into the wall, the charging unit can charge both an attached device and the charging unit's internal reservoir. When unplugged, the charging unit can still charge an attached device from its power reservoir until the reservoir is depleted. When only the charger is plugged into a power source without a device, it still charges itself, so that it can provide additional power to the device at a later time. Thus, the charging unit described here is useful both when plugged into a power source and when not, since it can also power a partially charged device for some time when a person is not able to connect to a power source, for example, when traveling, on a plane, at a train station, outdoors, or anywhere a user might need to charge a device but does not have access to a power source. The device can be on standby or in use while the charging unit charges the device, and no modification to the target device's software or hardware is required. Additional advantages of one or more embodiments of the invention may include reducing the number of items that must be carried, providing the benefits of both a charger and a power module, making the charger useful when charging on the move, and reducing the number of cables and connectors carried to prolong the battery life of one's devices. This disclosure considers that such a charging unit can be applied to any suitable electronic device, including, but not limited to, a wearable electronic device.
[000133] Figures 22A-22B illustrate specific embodiments of an exemplary charging unit 2210 with exemplary connections 2205 to device 2200 and connections 2215 and 2220. For example, Figure 22A illustrates cable connectivity from charging unit 2210 to device 2200 and to an external power source. As another example, Figure 22B illustrates charging unit 2210 with cable connectivity to device 2200 and direct connectivity to a power source. This disclosure considers any suitable connections between a device, the charging unit, and a power source charging the charging unit. For example, connections to both the device and the power source can be direct, wired, or wireless.
[000134] As described above, a charging unit can charge a device from the charging unit's internal charge reservoir even when it is not connected to an external power source, and can charge itself, a connected device, or both when connected to an external power source. This disclosure considers any suitable scheme for allocating charge between the charging unit and the device. Such an allocation scheme may depend on the amount of charge internal to the device, the amount of charge internal to the charging unit, the amount of power being consumed by the device, the charging capabilities of an external power source, or any suitable combination thereof. Additionally or alternatively, a charge threshold can determine which allocation scheme should be used. For example, one charging scheme can be used when the device is almost fully charged and the charging unit has little charge remaining, and another scheme can be used when the device has little charge remaining. Figures 23-24 illustrate exemplary charging schemes for the charging unit and a connected device. For example, as illustrated in Figure 24, when a device is connected to a charger as in step 2400, step 2405 determines whether the device is fully charged. If so, no further charging action is taken. If not, step 2410 determines whether the charger is connected to an external power source, such as, for example, line voltage. If so, the device is charged from that external source in step 2425, so that the device is charged from line voltage rather than from the charging unit's reservoir whenever the charging unit is connected to line voltage. If not, the next step determines whether the charger has any power remaining, and if so, the device is charged from the charger's internal power source in step 2420. Figure 23 illustrates a similar decision tree. If a device is connected to a charger (step 2300) that is connected to a power source (step 2305), then step 2310 determines whether the device is fully charged, and if not, the device is charged from the power source to which the charger is connected (step 2315). Similarly, step 2320 determines whether the charger is fully charged, and if not, the charging unit is charged from the power source in step 2325. In specific embodiments, the allocation scheme used may be determined or customized by a user.
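The decision tree of Figure 24 described above can be summarized, purely for illustration, as the following sketch; the function and argument names are assumptions, and the step comments simply echo the steps named in the text.

```python
# A sketch of the Figure 24 decision tree as plain conditionals. The predicate
# and action names are assumptions used only to make the control flow explicit.

def charge_step(device_full, on_line_voltage, charger_has_reserve):
    """Returns which charging action the unit would take for one pass."""
    if device_full:                      # step 2405
        return "no action"
    if on_line_voltage:                  # step 2410
        return "charge device from external source"    # step 2425
    if charger_has_reserve:              # reserve check
        return "charge device from charger reservoir"  # step 2420
    return "no action"

print(charge_step(device_full=False, on_line_voltage=True,  charger_has_reserve=True))
print(charge_step(device_full=False, on_line_voltage=False, charger_has_reserve=True))
print(charge_step(device_full=True,  on_line_voltage=False, charger_has_reserve=False))
```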
[000135] Figures 25A-25E illustrate exemplary embodiments of energy storage and charging in a device and a charging unit. In the embodiment illustrated in Figure 25A, the charge reservoir 2500 of the device and the charge reservoir 2520 of the charging unit are both depleted. Figures 25B-25C illustrate the charging of the device's charge reservoir 2500 and of the charging unit's charge reservoir 2505 after the charging unit has been connected to external power source 2510. After a short time, the charging unit and the device are charged simultaneously, with the charge being distributed such that each is given the same percentage of its full charge capacity. Both the device's charge reservoir 2500 and the charging unit's charge reservoir 2505 are fully charged after some time, as illustrated in Figure 25C. As described herein, the amount of charge allocated to the device or to the charging unit may vary based on any suitable charge allocation scheme. For example, if the charging unit's power conversion capability is limited, if the charging unit's reservoir is almost full while the device's charge reservoir is almost empty, or if the power demand of the device is very high, the charging unit can prioritize charging the device before topping up its internal reserves. As another example, charging of the charging unit can continue until a predetermined threshold charge has been reached.
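As an illustrative sketch only, the following code mirrors the equal-percentage allocation behavior described for Figures 25B-25C; the capacities, units, and the binary-search approach are assumptions rather than details taken from the disclosure.

```python
# A sketch (not from the disclosure) of the allocation behavior of Figures 25B-25C,
# where incoming energy is split so the device and the charging unit sit at the
# same fraction of their full capacities. Units and capacities are assumptions.

def allocate(energy, reservoirs):
    """reservoirs: list of dicts with 'capacity' and 'charge'. Returns charge added to each."""
    lo, hi = 0.0, 1.0
    def cost(frac):
        return sum(max(0.0, frac * r["capacity"] - r["charge"]) for r in reservoirs)
    # Binary search for the common state-of-charge fraction the energy can reach.
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if cost(mid) > energy:
            hi = mid
        else:
            lo = mid
    return [max(0.0, lo * r["capacity"] - r["charge"]) for r in reservoirs]

device = {"capacity": 300.0, "charge": 30.0}     # mAh, nearly empty
charger = {"capacity": 1000.0, "charge": 100.0}  # mAh, nearly empty
added = allocate(520.0, [device, charger])
print([round(a, 1) for a in added])  # [120.0, 400.0]: both end up at the same ~50% level
```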
[000136] Figures 25D-25E illustrate charge transfer between the charging unit and the device when the charging unit is not connected to an external power source. As illustrated in Figure 25D, a device with little charge remaining in its reservoir 2500 is connected to a charging unit with a fully charged reservoir 2505. As discussed above, this disclosure considers any suitable charge allocation scheme between the device and the charger when the charger is not connected to an external power source. That allocation scheme can be the same as, or different from, the allocation scheme used when the charging unit is connected to an external power source. For example, Figure 25E illustrates an allocation scheme that maximizes the charge of the device's charge reservoir 2500. As long as the charging unit still has charge, it continues charging the device until the device is fully charged or until the charging unit's charge reservoir 2505 is completely empty.
[000137] Figure 26 illustrates an exemplary internal architecture of an exemplary charging unit 2600. Line voltage converter 2605 produces a lower-voltage direct current from the high-voltage line current 2610. This voltage is supplied both to battery charger/regulator 2630 and to connector 2615, to which a device can be connected via connection 2620 for charging. Battery charger 2630 uses the energy available from line voltage converter 2605 to charge the energy reservoir (battery 2635). It may draw a share of the power equal to that of the device, draw a smaller share when device demand is high (device priority), or draw a larger share when its internal energy reserves are low (charger priority). These priorities can be selected by the user.
[000138] Continuing with the example of Figure 26, when line voltage converter 2605 is not supplying power, charger/regulator 2630 produces the appropriate charging voltage from the power in battery 2635. Regulator 2630 can be always on, or it can be switched on by the connection of a device or by the press of a button indicating that the user wants to charge the device. Once activated, regulator 2630 will charge the device until the internal reserves are depleted. At that point, some charge may still remain in battery 2635 to improve battery life, but it will not be available to the user. The device may incorporate an emergency mode that allows access to some of that power to gain a minimal amount of emergency usage time, at the expense of battery life. Regulator 2630 may continue to supply power until the device is disconnected, or until the device draws only a minimal amount of power, indicating that charging is complete. Finally, charger/regulator 2630 may include an on-demand display that shows the user the amount of energy remaining in reserve. Since displays generally use power, a button or other input means can be used to activate the display for a limited time. While Figure 26 illustrates an exemplary internal architecture of an exemplary charging unit 2600, this disclosure considers any suitable internal architecture for any suitable charging unit described herein, and considers that such a charging unit may be of any suitable size and shape.
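For illustration only, a minimal sketch of the device-priority versus charger-priority behavior of charger/regulator 2630 might look like the following; the split ratios and thresholds are assumptions, not values from the disclosure.

```python
# Hedged sketch of the device-priority / charger-priority split described for the
# charger/regulator 2630. The split ratios are illustrative assumptions, not
# values from the disclosure.

def split_power(available_w, device_demand_w, internal_reserve_frac, priority="device"):
    """Divide converter output between the attached device and the internal battery."""
    if priority == "device" and device_demand_w >= available_w * 0.5:
        to_device = min(device_demand_w, available_w)        # device gets the larger share
    elif priority == "charger" and internal_reserve_frac < 0.2:
        to_device = min(device_demand_w, available_w * 0.3)  # battery refill gets priority
    else:
        to_device = min(device_demand_w, available_w * 0.5)  # otherwise split evenly
    return to_device, available_w - to_device

print(split_power(10.0, 8.0, 0.6, priority="device"))   # (8.0, 2.0)
print(split_power(10.0, 8.0, 0.1, priority="charger"))  # (3.0, 7.0)
```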
[000139] In specific embodiments, device functionality or components (such as, for example, sensors) can be activated and deactivated, for example, to conserve energy or to reduce or eliminate unwanted functionality. For example, a locked-state detector detects when the device is idle and disables sensors as needed to conserve power, while monitoring the sensor data for a gesture or other suitable input that can wake up the device. A device can have one or more power modes, such as an idle mode or a fully active mode. As an example, in specific embodiments the device is worn on the arm, and a touch surface of the device may come into contact with objects and people during regular use. To prevent accidental activation, an accelerometer or other inertial sensor in the device's body or armband can be used to measure the device's approximate position relative to Earth's gravity. If the gravity vector is detected pointing toward the sides of the device (e.g., the device is determined to be at the user's side or the device is determined not to be pointed at the user), the touch screen can be locked and the display disabled to reduce energy usage. When the gravity vector is determined to be pointing downward through the device (e.g., the device is approximately horizontal, resulting in a determination that the user is viewing or otherwise using the device), the system may power up the display and enable the touch screen for further interaction. In specific embodiments, in addition to or as an alternative to the direction of the gravity vector waking or unlocking a device, a rate of change in the direction or magnitude of the gravity vector may be used to wake or unlock a device. For example, if the rate of change of the gravity vector is zero for a predetermined period of time (in other words, the device has been held in a specific position for the predetermined period of time), the device can be activated or unlocked. As another example, one or more inertial sensors on the device can detect a specific gesture or sequence of gestures to activate a display or another suitable component or application. In specific embodiments, the device's encoder is robust against accidental activation and thus can be left active so that the user can change between selections while bringing the device into the viewing angle. In other embodiments, the encoder can be disabled based on context or user input.
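A minimal sketch of the gravity-vector check described above, under assumed axis conventions, is shown below; the axis assignments, thresholds, and function names are assumptions.

```python
# A minimal sketch, assuming specific axis conventions, of the gravity-vector check
# described above: lock the display when gravity points out the side of the device,
# wake it when the face is roughly horizontal and the reading has been steady.

import math

def gravity_direction(ax, ay, az):
    """Classify a 3-axis accelerometer sample (device frame, units of g)."""
    mag = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
    ax, ay, az = ax / mag, ay / mag, az / mag
    if abs(az) > 0.8:          # z assumed perpendicular to the display face
        return "face_horizontal"
    if abs(ax) > 0.8:          # x assumed along the wearer's arm
        return "side_facing"
    return "other"

def update_power_state(samples, steady_threshold=0.05):
    """samples: recent (ax, ay, az) readings; returns 'wake' or 'lock'."""
    last = samples[-1]
    drift = max(abs(a - b) for s1, s2 in zip(samples, samples[1:]) for a, b in zip(s1, s2))
    if gravity_direction(*last) == "face_horizontal" and drift < steady_threshold:
        return "wake"          # user is likely looking at the display
    return "lock"              # e.g., arm hanging at the user's side

print(update_power_state([(0.0, 0.1, 0.99)] * 5))   # wake
print(update_power_state([(0.98, 0.1, 0.05)] * 5))  # lock
```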
[000140] In addition to, or as an alternative to, energy conservation, specific embodiments can lock one or more specific applications to provide security to one or more users. Appropriate sensors can detect activation or unlocking of security aspects of the device or of another device paired with or communicating with the wearable device. For example, a specific gesture performed with the device or on a touch-sensitive area of the device can unlock one or more security aspects of the device. As another example, a specific rotation or sequence of rotations of a rotatable ring of the device can unlock one or more security aspects of the device, on its own or in combination with other user input. For example, a user can rotate a rotatable ring to enter a unique sequence of symbols, such as numbers or images. In response to receiving the sequence of rotational inputs used to rotate the rotatable ring, the display may show the specific symbol(s) corresponding to each rotational input, as described more fully herein. In specific embodiments, the symbols used may be user-specific (such as, for example, user images stored on the device or accessible by the device, or symbols pre-selected by the user). In specific embodiments, different symbols can be presented to the user after a predetermined number of unlocks or after a predetermined period of time. The exemplary inputs described above can also, or alternatively, be used to enable/disable aspects of the device, specific applications, or access to specific data. While this disclosure describes specific examples of user input unlocking security aspects of a device, this disclosure considers any suitable input or combination of inputs for unlocking any security aspect of the device. This disclosure considers that the inputs or other parameters suitable for unlocking security aspects of a device, or for activating/deactivating components of the device, can be customized by the user.
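As an illustration of the rotary-ring unlock sequence described above (and not the disclosed implementation), the following sketch maps detent positions to symbols and compares them with a stored, hypothetical user-chosen sequence.

```python
# Illustrative sketch (assumed data model) of unlocking via a sequence of rotary
# ring inputs. Each detent click is reduced to a symbol index; the stored
# sequence stands in for the user-selected numbers or images described above.

SYMBOLS = ["sun", "moon", "star", "cloud", "leaf", "wave"]
STORED_SEQUENCE = ["star", "sun", "wave"]  # hypothetical user-chosen unlock code

def symbol_for(position_clicks):
    """Map an absolute ring position (in detent clicks) to a displayed symbol."""
    return SYMBOLS[position_clicks % len(SYMBOLS)]

def try_unlock(entered_positions):
    entered = [symbol_for(p) for p in entered_positions]
    return entered == STORED_SEQUENCE

print(try_unlock([2, 0, 5]))   # True: star, sun, wave
print(try_unlock([1, 0, 5]))   # False: moon, sun, wave
```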
[000141] In specific embodiments, a wearable electronic device can detect one or more gestures performed with or on the device. Gestures can be of any suitable type, can be detected by any suitable sensors (e.g., inertial sensors, touch sensors, cameras, or depth sensors), and can be associated with any suitable functionality. For example, one or more depth sensors can be used in conjunction with one or more cameras to capture a gesture. In specific embodiments, several depth sensors or cameras can be used to improve the accuracy of detecting a gesture or the background associated with a gesture. Where appropriate, sensors used to detect gestures (or the processing used to initiate functionality associated with a gesture) can be turned on or off to conserve power or provide security, as described more fully above. As shown above, Figure 19 illustrates an exemplary sensor detection system and provides specific examples of gesture detection, processing, and prioritization. In specific embodiments, specific applications can subscribe to specific gestures or to all available gestures, or a user can select which gestures should be detected by which applications. In specific embodiments, the gestures may include manipulation of another device while using the wearable device. For example, a gesture may include shaking another object while aiming, moving, or otherwise using the wearable device. This disclosure considers that, where appropriate, any of the gestures described herein may involve manipulation of another device. While the examples and illustrations discussed below involve specific aspects or attributes of gestures, this disclosure considers combining any suitable aspects or attributes of the gestures and sensors described herein.
[000142] In specific embodiments, a wearable electronic device can detect one or more gestures performed with the device or on the device. Gestures can be of any suitable type, can be detected by any suitable sensors (e.g., inertial sensors, touch sensors, cameras, or depth sensors), and can be associated with any suitable functionality. For example, one or more depth sensors can be used in conjunction with one or more cameras to capture a gesture. In specific embodiments, several depth sensors or cameras can be used to optimize the accuracy of detecting a gesture or the background associated with a gesture. Where appropriate, the sensors used to detect gestures (or the processing used to initiate functionality associated with a gesture) can be turned on or off to conserve power or provide security, as described more fully above. As described more fully above, Figure 19 illustrates an exemplary sensor detection system and provides specific examples of gesture detection, processing, and prioritization. In specific embodiments, specific applications can subscribe to specific gestures or to all available gestures, or a user can select which gestures should be detected by which applications. In specific embodiments, the gestures may include manipulation of another device while using the wearable device. For example, a gesture may include shaking another device while pointing, moving, or otherwise using the wearable device. This disclosure considers that, where appropriate, any one of the gestures described herein may involve manipulating another device. While the examples and illustrations discussed below involve specific aspects or attributes of gestures, this disclosure considers combining any suitable aspects or attributes of the gestures and sensors described herein.
[000143] In specific embodiments, the gestures may include gestures that involve at least one hand of the user and the appendage on which the device is worn, such as, for example, the user's other wrist. For example, in specific embodiments, a user can use the hand/arm on which the device is worn to properly aim the device's optical sensor (e.g., a camera or depth sensor) and can move or position the other arm/hand/fingers to perform a specific gesture. As described herein and illustrated in Figures 16-17, in specific embodiments the aimed-at scene can be displayed on the device's display, such that a user can view the real scene, the scene as displayed on the device, and the user's hand/arm/fingers, if they are in the viewing angle. In specific embodiments, the displayed scene may include the hands/fingers/arm detected by the sensor and used to perform the gestures. Figures 27-28 illustrate exemplary gestures in which the user points an outward-facing sensor on the device (e.g., on the device's armband, as illustrated in the figures, facing away from the user's body) and moves or positions the other arm/hand/fingers to perform a gesture. For example, in Figure 27, an outward-facing sensor detects an object in the sensor's viewing angle in step 2705, a sensor (which may be the same sensor detecting the object) detects one or more fingers pointing at the object in step 2710, and when the pointing finger(s) is determined to be at rest in step 2715, a gesture is detected in step 2720. Referring to Figure 19, the raw gesture data captured by the outward-facing camera can be conditioned and cleared of noise, and this data can be sent to the heuristic gesture detector. The gesture priority decoder processes the gesture data and determines when the gesture has been identified with sufficient certainty. When the gesture has been identified, the gesture is sent to the sensor hub driver, which provides an API to the end applications and system controllers.
[000144] As examples of functionality associated with this gesture, a camera can focus on the object, the detected and pointed-at object can then appear on the display, information about that object can appear on the display, and the displayed content can be transferred to the display of another device (for example, when the object is another device). Figure 28 illustrates an exemplary gesture similar to the gesture of Figure 27; however, the illustrated gesture includes the outward-facing sensor detecting a "light tapping" motion of the finger(s) (e.g., the finger(s) moving away from the sensor). For example, the gesture of Figure 28 may include detecting an object in the scene from a camera (or other suitable sensor) in step 2805, detecting the finger in the scene in step 2810, detecting an absence of lateral movement of the finger in step 2815, detecting the fingertip moving away from the sensor in step 2820, and detecting a gesture in step 2825. The gesture illustrated in Figure 28 may provide any suitable functionality. For example, the "lightly tapped" object can be selected from among the objects displayed on the display screen.
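Purely as an illustrative sketch, the step sequence of Figure 28 could be checked as a small state machine over per-frame sensor abstractions; the frame fields and thresholds below are assumptions.

```python
# A sketch of the step sequence of Figure 28 as a tiny state machine. The frame
# fields (object_seen, finger_seen, lateral position, fingertip range) are assumed
# abstractions of what the outward-facing sensor would actually report.

def detect_tap_gesture(frames, lateral_tol=0.02, retreat_min=0.03):
    """frames: list of dicts with 'object_seen', 'finger_seen', 'finger_x', 'finger_range'."""
    if not all(f["object_seen"] and f["finger_seen"] for f in frames):   # steps 2805-2810
        return False
    xs = [f["finger_x"] for f in frames]
    if max(xs) - min(xs) > lateral_tol:                                  # step 2815
        return False
    ranges = [f["finger_range"] for f in frames]
    return ranges[-1] - ranges[0] > retreat_min                          # steps 2820-2825

frames = [{"object_seen": True, "finger_seen": True,
           "finger_x": 0.50, "finger_range": 0.20 + 0.01 * i} for i in range(6)]
print(detect_tap_gesture(frames))  # True: finger held laterally steady, tip moving away
```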
[000145] Figures 29-30 illustrate exemplary gestures where the object is detected with an outward-facing sensor together with movement of the user's fingers and hand. For example, Figure 29 illustrates the outward-facing sensor detecting two separated fingers 2915, the two fingers coming together (e.g., in a pinching motion) 2920, and then the pinched fingers moving toward the sensor 2925. The motion of the fingers coming together and moving toward the sensor may occur simultaneously or in sequence, and performing the steps in sequence (or the time between steps in the sequence) rather than simultaneously may each constitute a different gesture. In Figure 30, the two illustrated fingers are initially close together 3010, and the outward-facing sensor detects the fingers separating 3020 and the hand moving away 3015. The movement of the fingers and hand can be simultaneous or in any suitable sequence. Furthermore, aspects of Figures 29-30 can be combined to form a gesture. For example, pinching the fingers together and moving them away from the sensor can be a single gesture. In specific embodiments, the detected fingers or hand may be manipulating another device, and this manipulation may form part of the gesture. With respect to all exemplary gestures described herein, this disclosure considers any suitable functionality associated with the gestures illustrated in Figures 29-30.
[000146] Figures 31-32 illustrate exemplary gestures similar to those of Figures 29-30, except that here all the fingers are used to perform the gesture. In Figure 31, the fingers are detected as initially close together (e.g., in a closed hand) 3105, the closed hand is detected moving away from the sensor 3110, and the sensor detects the closed hand opening 3115. Again, the sequence of illustrated steps can occur in any suitable order. Figure 32 illustrates the reverse of Figure 31. Figures 31-32 can be associated with any suitable functionality. For example, Figure 31 illustrates an example of sending all or a portion of the content displayed on the device to another device, such as the television illustrated in Figure 31. Similarly, the gesture of Figure 32 can pull some or all of the content displayed on another device to the display of the wearable device. For example, the gestures of Figures 31-32 can be implemented when the user performs the gestures with the wearable device in close proximity to another device, such as a smartphone, tablet, personal computing device, smart appliance (e.g., refrigerator, thermostat, or washing machine), or any other suitable device. The described functionality merely represents examples of functionality that can be associated with the gestures illustrated in Figures 31-32, and this disclosure considers that other suitable gestures can perform the described functionality.
[000147] Figures 33-37 illustrate an outward-facing sensor detecting a hand or a portion of an arm being swiped in front of the sensor. In specific embodiments, a swipe with the front of the hand may be a different gesture than a swipe with the back of the hand. Figures 33-34 illustrate the hand being swiped from right to left 3310-3315 and from left to right 3410-3415 across the sensor's viewing angle, and Figures 35-37 illustrate the hand being swiped from bottom to top 3510-3515 (as well as 3735-3740) and from top to bottom 3610-3615 (as well as 3710-3715) across the sensor's viewing angle. As illustrated, the hand may initially start within the viewing angle, pass through the viewing angle, and exit the viewing angle (as illustrated in Figure 36); it may start outside the viewing angle, pass through the viewing angle, and exit the viewing angle (as illustrated in Figure 37); it may start outside the viewing angle, pass through a portion of the viewing angle, and remain in the viewing angle (as illustrated in Figures 33-35); or it may start within the viewing angle, pass through a portion of the viewing angle, and remain in the viewing angle. This disclosure considers the hand being swiped at other angles, such as, for example, entering at a 45-degree angle from below and to the right of the device and exiting at a 45-degree angle toward the top and left of the device. Additionally, this disclosure considers detecting hand swipes whose motion is not a straight line, such as curved swipes or triangular swipes. This disclosure considers any suitable functionality associated with any or all of the gestures illustrated in Figures 33-37, such as, for example, switching between user interfaces displayed on the device or between active and displayed applications on the device, opening or closing applications, or scrolling through displayed content (for example, documents, web pages, or images). As reiterated elsewhere, this disclosure considers any suitable gesture associated with the functionality described in connection with Figures 33-37.
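By way of illustration, the swipe directions of Figures 33-37 could be classified from the tracked hand position as in the sketch below; the normalized coordinates and the travel threshold are assumptions.

```python
# Hedged sketch of classifying the swipe directions of Figures 33-37 from the
# hand-centroid track reported by the outward-facing sensor. Coordinates and
# thresholds are assumptions; only the direction logic reflects the text.

def classify_swipe(track, min_travel=0.3):
    """track: list of (x, y) centroids, normalized to the sensor frame [0, 1]."""
    if len(track) < 2:
        return None
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    if max(abs(dx), abs(dy)) < min_travel:
        return None                      # not enough travel to count as a swipe
    if abs(dx) >= abs(dy):
        return "left_to_right" if dx > 0 else "right_to_left"
    return "bottom_to_top" if dy > 0 else "top_to_bottom"

print(classify_swipe([(0.9, 0.5), (0.5, 0.5), (0.1, 0.52)]))  # right_to_left (Figure 33)
print(classify_swipe([(0.5, 0.1), (0.5, 0.5), (0.48, 0.9)]))  # bottom_to_top (Figure 35)
```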
[000148] Figures 38-39 illustrate exemplary gestures where the outward-facing sensor detects the user's hand in the viewing angle 3805 and detects one or more fingers pointing in a direction (together with, in specific embodiments, a portion of the user's hand or arm) 3815. The detected gesture may depend on the fingers detected or on the direction in which the detected fingers are pointing. For example, as illustrated in Figure 38, the finger may be a thumb pointing up 3820, and in Figure 39 the finger may be a thumb pointing down 3920. Any suitable functionality can be associated with the gestures illustrated in Figures 38-39, such as saving or deleting a file locally on the device or on an associated device, or approving or disapproving changes made to settings or other content.
[000149] Figure 40 illustrates an exemplary gesture involving a shape made with multiple fingers or with a portion of the hand in the viewing angle of the outward-facing sensor. As illustrated in Figure 40, the shape may be a ring 4010, and the gesture may include fingers not involved in the shape pointing in a specific direction 4015. As illustrated in Figure 40, a gesture may include holding the shape 4020 (and possibly the other fingers) for a predetermined period of time.
[000150] Figures 41-42 illustrate exemplary gestures that include covering all or a portion of the outward-facing sensor with the user's fingers or hand. Covering the sensor from the top of the device with a thumbs-down gesture 4105 (as illustrated in Figure 41) can be a different gesture from covering the sensor from below the device 4210 (as illustrated in Figure 42) or from the sides of the device. The direction of coverage can be detected by, for example, the shape of the hand when covering the device, the orientation of the hand when covering the device, data from other sensors indicating the direction in which the outward-facing sensor is being covered (e.g., detecting that the display and the outward-facing sensor are covered), or any other suitable technique.
[000151] Figures 43-44 illustrate exemplary gestures where one or more of the user's fingers or a portion of a hand/arm is detected in the viewing angle of the outward-facing sensor 4305/4405 and is then moved within the viewing angle (or "frame") to perform a specific gesture 4310/4320/4410/4420. In specific embodiments, a gesture can be any suitable movement, or it can be movement in a specific pattern. In specific embodiments, a gesture may be associated with the fingers or with the portion of the hand/arm detected. For example, a single pointing finger may be associated with a gesture 4305 (as illustrated in Figure 43), or multiple fingers/a palm may be associated with a gesture 4405 (as illustrated in Figure 44). In specific embodiments, the direction of the palm (e.g., front, back, at an angle) can be detected and associated with a gesture.
[000152] Figure 45 illustrates an exemplary gesture including detection of a shape made with multiple fingers or with the user's hand/arm 4505, and detection of movement of the shape in the viewing angle 4510/4520. Figure 45 illustrates the shape of Figure 40 moving through the entire viewing angle of the outward-facing sensor.
[000153] Figure 46 illustrates an exemplary gesture that involves detecting one or more fingers (or some or all of a user's hand/arm) and their initial orientation, and subsequently detecting the change in orientation or the rate of change of orientation over time. For example, Figure 46 illustrates detecting two fingers in the viewing angle in step 4605, detecting the fingers and the end of the hand in the viewing angle in step 4610, detecting the fingers making a "C" shape in step 4615, decoding an initial orientation of the "C" shape in step 4620, decoding a change in orientation of the "C" shape in step 4625, determining a relative rotational value of the "C" shape in step 4630, and detecting the gesture in step 4635. This disclosure considers any suitable shape made with the user's fingers/hand/arm.
[000154] Figure 47 illustrates an exemplary gesture that involves detecting the number of fingers in a specific position in the viewing angle of the outward-facing sensor. For example, Figure 47 illustrates detecting fingertips in the viewing angle in step 4705, such as, for example, an extended thumb, an extended thumb and a finger, or an extended thumb and two fingers. The specific fingertip orientation configuration is detected in step 4710, and the mapping of the configuration to at least a numerical count of the fingers is performed in step 4715 to detect the gesture in step 4725. Each of the displayed images can be a different gesture. This disclosure considers any suitable position of the fingers comprising a gesture. As with every other exemplary gesture described herein, this disclosure considers any suitable functionality associated with the gestures. For example, each gesture of Figure 47 can be associated with a contact to call, email, or text, and the detected gesture can activate the call, email, or text message to the contact assigned to the gesture. In specific embodiments, the position of the hand/arm/fingers can indicate which contact method should be used for the contact associated with the gesture.
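As a hedged illustration of the finger-count mapping of Figure 47, the following sketch maps a counted configuration to a per-contact action; the contact names and the count-to-action table are hypothetical.

```python
# A sketch of the Figure 47 idea of mapping a counted fingertip configuration to
# a per-contact action. The contact names and the count-to-action table are
# hypothetical; only the mapping structure follows the text.

ACTIONS_BY_COUNT = {
    1: ("Alice", "call"),    # extended thumb
    2: ("Alice", "text"),    # thumb and one finger
    3: ("Alice", "email"),   # thumb and two fingers
}

def handle_finger_count(count):
    contact, method = ACTIONS_BY_COUNT.get(count, (None, None))
    if contact is None:
        return "no gesture mapped"
    return f"starting {method} to {contact}"

print(handle_finger_count(2))  # starting text to Alice
print(handle_finger_count(4))  # no gesture mapped
```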
[000155] Figures 48-49 illustrate exemplary gestures involving two sensors on the device. For example, Figure 48 illustrates a sensor on the lower clamp portion of the device. This sensor detects the position of the user's other hand relative to the device and detects separation of the hand from the sensor. In specific embodiments, the gesture may include determining that both hands are moving, such as, for example, through additional information provided by one or more inertial sensors on the device or through an inward-facing camera (for example, facing the user's body) detecting movement of the device through a change in the scenery. For example, in Figure 48 a hand is detected in the viewing angle in step 4805. A sensor detects that the hand is in a tight shape in step 4810, and the same sensor or another sensor detects that the device is in a horizontal orientation in step 4815. A sensor detects the hand moving relative to the device in step 4820 and estimates the relative position in step 4825. The gesture is detected in step 4830. Similarly, Figure 49 illustrates an exemplary gesture also involving detection of the user's hand in the viewing angle and its subsequent movement away from a device sensor. However, in Figure 49 the device sensor is positioned on top of the device (e.g., a forward-facing sensor). As an example, a hand is detected in the viewing angle of a forward-facing camera in step 4905. The hand is detected in a tight shape in step 4910, and the device is detected in a horizontal orientation in step 4915. The hand moves closer to or farther from the device in step 4920, and the relative position estimate is performed in step 4925, at which point the gesture is detected in step 4930.
[000156] Figures 50-58 illustrate exemplary gestures detected by at least one forward-facing sensor (e.g., a sensor on top of the device). Any of the gestures of Figures 50-58 can be detected by sensors in any other suitable location (e.g., facing outward, as described above), and any of the gestures detected by a sensor described elsewhere can be detected by a forward-facing sensor, where appropriate. Figure 50 illustrates an exemplary gesture involving one or more fingertips hovering over the device, with the forward-facing sensor detecting the fingertips in step 5005 and detecting the position of the fingertips or the movement (or lack of movement) of those fingertips in steps 5010 and 5015 to detect a gesture in step 5020. Figure 51 illustrates an exemplary gesture in which steps 5105 and 5110 are identical to steps 5005 and 5010, respectively. However, the detected fingertips move away from the forward-facing sensor in step 5115; in specific embodiments, a gesture may include detecting one or more of the fingertips changing position relative to each other, such as, for example, moving apart as in step 5120. Figure 52 illustrates the fingertips detected by the sensor in step 5205, the fingertips moving together in step 5210, the fingers moving toward the device in step 5215, and the detection of the duration of the movement in step 5220 to detect the gesture in step 5225. As illustrated in Figure 53, in specific embodiments a gesture may include detecting a change in the relative position of the fingertips in addition to the movement of the fingertips toward the sensor. For example, in step 5305 one or two fingers are detected on the front surface; in step 5310 the fingers are detected moving up or down; and a gesture is detected in step 5315. In specific embodiments, the duration of the gesture of Figures 50-52 can determine whether a gesture is detected, or different durations may comprise different gestures.
[000157] Figures 54-57 illustrate exemplary gestures involving movement of one or more fingers or movement of a portion of a hand/arm across the face of the device (and thus across the forward-facing sensor). As illustrated, a gesture can depend on the number of fingers used (e.g., two fingers vs. an entire palm); on the direction of movement across the device face (e.g., bottom to top or left to right); on the duration of the movement across the device face; on the proximity of the detected fingers or hand/arm to the device face; on the portion of the device face covered (e.g., all or a portion, and the relative location of that portion (e.g., lower half)); or on whether the detected portions are initially in the viewing angle of the forward-facing sensor, initially outside the viewing angle, end inside the viewing angle, or end outside the viewing angle. For example, the gesture of Figure 54 may include detecting one or two fingers on the front surface in step 5405, detecting the fingers moving left in step 5410, and detecting the gesture in step 5415. As another example, Figure 55 may include detecting one or two fingers on the front surface in step 5505, detecting the fingers moving right in step 5510, and detecting the gesture in step 5515. As another example, Figure 56 may include detecting no fingers in step 5605, detecting multiple fingers entering the viewing angle from the left, detecting the front surface being covered, detecting the fingers leaving the frame in step 5620, and detecting a gesture in step 5625. As yet another example, Figure 57 may include detecting no fingers in step 5705, detecting multiple fingers entering the viewing angle from the right in step 5710, detecting full coverage of the front surface in step 5715, detecting the fingers leaving the viewing angle in step 5720, and detecting a gesture in step 5725. As with all gestures described herein, any suitable combination of these factors (and any other suitable factors associated with the gestures) may be used to determine a gesture or the functionality corresponding to the gesture. Any suitable functionality can be associated with a gesture, such as, for example, switching between graphical user interface screens, scrolling through displayed content, or scrolling through applications or devices available for communication/pairing.
[000158] Figure 58 illustrates an exemplary gesture involving one or more fingers detected at the edge of the device, and may include movement of those fingers around all or a portion of the edge of the device. For example, as illustrated in Figure 58, a gesture may include detecting no fingers in step 5805, detecting a single finger at the edge of the front face in step 5810, detecting the finger moving along the edge in step 5815, decoding the angular movement of the finger relative to the device in step 5820, and detecting a gesture in step 5825. As an example of functionality associated with this gesture, the movement of the finger can rotate some or all of the content displayed on the device.
[000159] In specific embodiments, a gesture may include a movement of the wearable device itself, such as, for example, by the arm on which the device is worn. The movement can be detected by any suitable sensors, such as inertial sensors, orientation sensors, or any suitable combination thereof. Figures 59-66 illustrate exemplary gestures involving detection of the gravity vector relative to the device (e.g., pointing toward the face of the device or pointing down through the base) and detection of subsequent movement of the device relative to that gravity vector. For example, Figure 59 may include detecting gravity pointing downward through the face in step 5905, detecting acceleration of the device along the same axis as the gravity vector in step 5910, detecting that the acceleration of the device persists for some period of time in step 5915, and detecting a gesture in step 5920. Figure 60 is substantially similar to the gesture of Figure 59, except that the gravity vector points downward through the base (rather than through the face) in step 6005. Figure 61 illustrates a gesture that uses a gravity vector to determine the orientation/position of the device, for example, that the device is not at the user's side. Movement of the device away from the detected orientation (such as, for example, perpendicular to the gravity vector) can be detected, resulting in a gesture. For example, the detected gravity orientation may indicate that an arm is not at the side of the body in step 6105, a lateral acceleration of the device may be detected in step 6110, the acceleration may be detected for some time in step 6115, and a gesture may be detected in step 6120. As indicated in Figures 59-61, detecting an aspect of the motion (e.g., the duration of acceleration) can trigger a gesture, and ranges of an aspect (e.g., ranges of motion duration) can each correspond to a different gesture. Figures 62-63 illustrate rotational movement of a device. As in Figure 61, detection of the initial orientation or position of the device can be part of the gesture detection. For example, the gesture of Figure 62 may include detecting that the gravity vector indicates that the arm is not at the side of the body in step 6205, detecting some rotational movement in step 6210, estimating that the radius of the rotational movement is large enough for elbow movement in step 6215, estimating the relative rotation in step 6220, and detecting a gesture in step 6225. As another example, the gesture of Figure 63 may include detecting that the gravity vector indicates that the arm is not at the side of the body in step 6305, detecting some rotational movement in step 6310, estimating that the radius of the rotational movement is small enough for wrist movement in step 6315, estimating the relative rotation in step 6320, and detecting a gesture in step 6325. As illustrated in Figures 62-63, a gesture may include estimating the type of rotation of the device, such as, for example, rotation primarily from the shoulder (Figure 62), rotation primarily from the elbow (Figure 63), or any other suitable rotation. In addition to, or as an alternative to, the radius of rotation, a gesture may include detection of the degree of rotation, the duration of rotation, the radial acceleration of the rotation, any other suitable aspect of the rotation, or any suitable combination thereof.
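Purely for illustration, the radius estimate that separates wrist rotation from elbow rotation in Figures 62-63 could be approximated as linear speed divided by angular rate, as sketched below under simplifying kinematic assumptions; the thresholds are illustrative.

```python
# A sketch, under simplifying kinematic assumptions, of the radius estimate used
# in Figures 62-63 to separate wrist rotation from elbow rotation: for a roughly
# circular arc, radius ~ linear speed / angular rate. Thresholds are illustrative.

def estimate_radius(linear_speed_mps, angular_rate_rps):
    """Linear speed from integrated accelerometer data, angular rate from the gyroscope."""
    if angular_rate_rps < 1e-3:
        return float("inf")
    return linear_speed_mps / angular_rate_rps

def classify_rotation(linear_speed_mps, angular_rate_rps,
                      wrist_max_m=0.12, elbow_max_m=0.45):
    r = estimate_radius(linear_speed_mps, angular_rate_rps)
    if r <= wrist_max_m:
        return "wrist rotation (Figure 63)"
    if r <= elbow_max_m:
        return "elbow rotation (Figure 62)"
    return "larger rotation (e.g., shoulder)"

print(classify_rotation(0.3, 4.0))   # radius 0.075 m -> wrist rotation
print(classify_rotation(1.0, 3.0))   # radius ~0.33 m -> elbow rotation
```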
[000160] As with Figures 61-63, Figure 64 illustrates a gesture involving detection of the initial orientation or position of the device. For example, the gesture of Figure 64 may include detecting the gravity vector indicating that the arm is not at the side of the body in step 6405, detecting lateral acceleration of the arm along the axis of the arm in step 6410, detecting that the acceleration persists for some time in step 6415, and detecting a gesture in step 6420. Figure 65 illustrates that a gesture can include movement of the device along the axis of the accessory wearing the device, such as, for example, acceleration of the device along that axis. The gesture may include an impact along the path of movement (e.g., caused by the hand stopping or contacting an object) and a subsequent reversal of the movement. The back-and-forth movement can be repeated until the movement stops or until the hand returns to some position, such as, for example, the user's side. In specific embodiments, different gestures can be based on the number or frequency of the back-and-forth movements. For example, the gesture of Figure 65 may include detecting the gravity vector indicating that the arm is not at the side of the body in step 6505, detecting that the hand is in motion in step 6510, detecting an impulse (impact) along the path of the motion in step 6515, detecting that the hand has reversed motion along the same linear path in step 6520, repeating steps 6515 and 6520 as appropriate, detecting that the motion has stopped for some time in step 6525, and detecting a gesture in step 6530.
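As a non-limiting sketch of the back-and-forth detection of Figure 65, the following Python fragment looks for impact-like spikes along the arm axis, counts sign reversals of the motion between impacts, and stops once the signal stays quiet for a while. The thresholds and window sizes are illustrative assumptions only.

```python
def count_reversals(accel_along_arm, impact_thresh=15.0, stop_window=20):
    """Sketch of Figure 65 style detection from acceleration samples along the arm axis.

    Returns the number of reversals; different counts or frequencies could be mapped
    to different gestures. All thresholds are illustrative assumptions.
    """
    reversals = 0
    direction = 0
    quiet = 0
    for a in accel_along_arm:
        if abs(a) > impact_thresh:
            # Impact along the path (e.g., the hand stopping or striking an object).
            quiet = 0
            new_direction = 1 if a > 0 else -1
            if direction and new_direction != direction:
                reversals += 1  # motion reversed along the same linear path
            direction = new_direction
        elif abs(a) < 1.0:
            quiet += 1
            if quiet >= stop_window:
                break  # motion has stopped for some time (cf. step 6525)
        else:
            quiet = 0
    return reversals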
[000161] Figures 66-68 illustrate exemplary gestures based on detection of motion that matches a predetermined motion template, which can be customized or created by the user. In specific embodiments, customizable gestures may include an initial position or orientation of the device, motion or aspects of motion in a specific direction, stopping or starting of motion, duration of motion, or any other suitable motion parameter. Some or all of the parameters can be customized by the user, in specific embodiments. In specific embodiments, a detected gesture can be determined by matching the detected motion to the closest available motion template. For example, as illustrated in Figures 66-68, a gesture can correspond to a horizontal position or movement of the arm or fingers. For example, as illustrated in Figure 66, a gesture may include detecting a gravity vector oriented downward through the bottom of the base of the device in step 6605, detecting forward and inward motion in step 6610, matching a motion template in step 6615 (e.g., using the heuristic, numerical, or pattern-based gesture recognition modules of Figure 19), and detecting a gesture in step 6620. Figure 67 may include detecting a gravity vector oriented laterally through the bottom of the base of the device in step 6705, detecting forward and inward motion in step 6710, matching a motion template in step 6715 (e.g., using the heuristic, numerical, or pattern-based gesture recognition modules of Figure 19), and detecting a gesture in step 6720. Figure 68 may include detecting a gravity vector indicating that an arm is not at the side of the body in step 6805, detecting motion of the device in step 6810, detecting that the motion has stopped in step 6815, matching a motion template in step 6820, selecting the best motion-template match in step 6825, and detecting a gesture in step 6830. Although Figures 66-68 illustrate specific examples of customizable gestures corresponding to specific motion templates, this disclosure contemplates any suitable gestures (or any aspect thereof) detected by any suitable sensors being customized by a user of the device.
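As a non-limiting sketch of matching a detected motion to the closest available template (cf. steps 6615, 6715, and 6820-6825), the following Python fragment compares equal-length accelerometer traces with a simple Euclidean distance. The template contents, names, and distance metric are illustrative assumptions; a real recognizer might resample the traces or use dynamic time warping, as suggested by the heuristic, numerical, or pattern-based modules referenced above.

```python
import math

def match_motion_template(motion, templates):
    """Return the name of the closest template and its distance from the motion.

    `motion` and each template are equal-length lists of (ax, ay, az) samples.
    """
    def distance(a, b):
        return math.sqrt(sum((p - q) ** 2
                             for s, t in zip(a, b)
                             for p, q in zip(s, t)))

    best_name, best_d = None, float("inf")
    for name, template in templates.items():
        d = distance(motion, template)
        if d < best_d:
            best_name, best_d = name, d
    return best_name, best_d  # the caller may reject the match if best_d is too large

# Hypothetical user-customized templates keyed by gesture name.
templates = {
    "raise_forward": [(0.0, 0.2, 9.8), (0.5, 1.0, 9.0), (1.5, 2.0, 8.0)],
    "swipe_inward":  [(0.0, 0.1, 9.8), (-1.0, 0.5, 9.2), (-2.0, 1.0, 8.5)],
}
print(match_motion_template([(0.1, 0.3, 9.7), (0.6, 1.1, 8.9), (1.4, 2.1, 8.1)], templates))
```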
[000162] In specific embodiments, a gesture may optionally include detection of some non-motion or non-orientation input. For example, Figures 69-71 illustrate gestures comprising acoustic detection, although the illustrated gestures do not require such detection. Figure 69 illustrates an acoustic output (such as, for example, ringing from an incoming or outgoing telephone call) or response, followed by some movement of the device (such as the device being brought to the face of a user). For example, an audio response or output is initiated in step 6905, upward motion is detected in step 6910, interruption of the upward motion is detected in step 6915, the gravity vector is within a predetermined window in step 6920, and a gesture is detected in step 6925. In specific embodiments, a gesture may include detection of the gravity vector in a specific orientation or orientation window, as illustrated. The gesture of Figure 69 can also include detection of the position of the user's hand/fingers. As an example of functionality that can be associated with the gesture illustrated in Figure 69, if the fingers are brought close to the ear or face in the indicated position, the user can answer or place a phone call. Figure 70 and steps 7005-7025 illustrate an exemplary gesture that has attributes similar to those described for Figure 69, but involving a different orientation of the user's hand/fingers. Figure 71 illustrates an exemplary gesture including user-generated acoustics (e.g., by the user snapping their fingers together), which are detected by a microphone associated with the device. For example, Figure 71 may include detecting a gravity vector indicating that an arm is not at the side of the body in step 7105, detecting motion with relatively high acceleration in step 7110, detecting a sudden change in one or more acoustic frequencies in step 7115, and detecting a gesture in step 7120. As illustrated in Figure 71, the snap can be detected by the motion it generates alone (e.g., by the vibration of the user's hand/skin or by some degree or rate of speed change due to the snap), or it can be detected by the combination of the motion plus an audible input generated by the snap. In specific embodiments, the audible confirmation may need to be detected within a predetermined time of the movement for the gesture to be detected.
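As a non-limiting sketch of combining motion and acoustics (cf. steps 7110-7120), the following Python fragment accepts the gesture only when a high-acceleration motion event and a sudden acoustic change occur within a predetermined time window of each other. The event representation and the window size are illustrative assumptions.

```python
def detect_snap(motion_events, acoustic_events, window_s=0.3):
    """Sketch: a motion event must be accompanied by an acoustic event within
    `window_s` seconds for the gesture to be reported.

    Both arguments are lists of event timestamps (in seconds), produced by
    upstream motion and acoustic detectors; names and the window are illustrative.
    """
    for t_motion in motion_events:
        for t_sound in acoustic_events:
            if abs(t_sound - t_motion) <= window_s:
                return True  # motion and audible confirmation coincide
    return False

# Hypothetical usage with pre-detected event timestamps.
print(detect_snap(motion_events=[12.40], acoustic_events=[12.46]))   # True
print(detect_snap(motion_events=[12.40], acoustic_events=[14.00]))   # False
```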
[000163] Figures 72-73 illustrate exemplary gestures involving periodic movement of the device, such as shaking the arm on which the device is worn in the lateral or vertical direction. Figure 72 illustrates a gesture including detection of the gravity vector indicating that the arm is not at the side of the body in step 7205, detecting that the device moves laterally forward along an axis in step 7210, detecting that the device moves backward along the same axis in step 7215, repeating steps 7210 and 7215 as desired, and detecting a gesture in step 7220. Figure 73 illustrates a gesture including detection of the gravity vector indicating that the arm is not at the side of the body in step 7305, detecting that the device moves vertically forward along an axis in step 7310, detecting that the device moves backward along the same axis in step 7315, repeating steps 7310 and 7315 as desired, and detecting a gesture in step 7320. Figure 74 illustrates an exemplary gesture involving an adjustment of the position/orientation of the device relative to the user's body. For example, the gesture of Figure 74 may include detecting the gravity vector indicating that the arm is not at the side of the body in step 7405, detecting the gravity vector indicating that the arm is at the side of the body in step 7410, and detecting a gesture in step 7415. Any suitable functionality can be associated with the gestures of Figures 72-75, such as, for example, waking the device from a low-power state. Figure 75 illustrates an exemplary gesture involving the height of the device or the relative change in the height of the device from the start to the stop of the device's movement. In addition to the height of the device, a gesture can include the orientation of the device before, during, or after the gesture. For example, a gesture may include detecting the gravity vector indicating that the arm is not at the side of the body in step 7505, detecting upward movement in step 7510, detecting that the upward movement has stopped in step 7515, detecting that the gravity vector points through the base side of the device in step 7520, and detecting a gesture in step 7525. Any suitable functionality can be associated with the gesture of Figure 75, such as, for example, activating equipment paired with the device, turning on one or more lights in a room, or activating equipment near the device.
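As a non-limiting sketch of the Figure 75 style gesture, the following Python fragment watches for upward movement, waits for that movement to stop, and then checks that the gravity vector points through the base of the device before reporting the gesture. The sample format, axis convention, and thresholds are illustrative assumptions; the velocity and gravity estimates would come from upstream sensor fusion.

```python
import math

def detect_raise_gesture(samples, up_speed_thresh=0.8, base_axis=1):
    """Sketch of Figure 75 style detection.

    `samples` is an iterable of (vertical_velocity_m_s, gravity_vector) tuples,
    where gravity_vector is (gx, gy, gz) in device coordinates.
    """
    rising = False
    for v_up, gravity in samples:
        if v_up > up_speed_thresh:
            rising = True                      # upward movement (cf. step 7510)
        elif rising and abs(v_up) < 0.1:       # upward movement has stopped (cf. step 7515)
            g_norm = math.sqrt(sum(c * c for c in gravity))
            # Gravity mostly along the base axis of the device (cf. step 7520).
            if g_norm > 0 and abs(gravity[base_axis]) / g_norm > 0.9:
                return True                    # gesture detected (cf. step 7525)
            rising = False
    return False
```

A detection could then, for example, wake the device or activate nearby paired equipment, as described above.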
[000164] In specific embodiments, a gesture may include interacting directly with the body or band of a wearable device. For example, Figure 76 illustrates a gesture involving contact with a touch-sensitive area of a band worn around the user's wrist. The gesture may include detecting that the device is not in a locked state in step 7605, detecting an absence of touch on the band in step 7610, detecting a touch on the band in step 7615, decoding the position of the touch in step 7620, and detecting a gesture in step 7625. Figure 77 illustrates that taps at multiple positions can be determined to be a single gesture, such as, for example, to unlock the device or aspects of the device. The gesture may include detecting that the device is not in a locked state in step 7705, detecting an absence of touch on the band in step 7710, detecting a touch on the band in step 7715, decoding the position of the touch in step 7720, decoding an action in step 7725, and detecting a gesture in step 7730. Figure 78 illustrates that a gesture may include contacting a touch-sensitive area of the device and sliding across the touch-sensitive area while maintaining contact with the device. The gesture can include detecting that the device is not in a locked state in step 7805, detecting an absence of touch on the band in step 7810, detecting a touch on the band in step 7815, detecting movement of the touch point(s) in step 7820, decoding the relative movement in step 7825, and detecting a gesture in step 7830. In specific embodiments, a gesture may include the duration of contact, the physical area of contact (e.g., one finger or two fingers), the sequence of contact, the pressure generated by the contact, or any other suitable contact-related attribute. Although Figures 76-78 illustrate contact with a touch-sensitive area on a band, this disclosure contemplates that a gesture may involve contacting a touch-sensitive area at any suitable location on the device, such as the band, ring, or display of the device, or any combination thereof. For example, Figures 79-80 illustrate contacting touch-sensitive areas on a ring of the device, similar to the gestures in Figures 77-78. For example, a gesture may include detecting that the device is not in a locked state in step 7905, detecting an absence of touch on the ring in step 7915, detecting a touch on the ring in step 7920, and detecting a gesture in step 7925. As another example, a gesture may include detecting that the device is not in a locked state in step 8005, detecting an absence of touch on the ring in step 8010, detecting a touch on the ring in step 8015, detecting movement of the touch point in step 8020, decoding the relative movement in step 8025, and detecting a gesture in step 8030. Figure 81 illustrates a gesture involving multi-touch contact with a touch-sensitive area of the device face, and detecting subsequent movement of the points of contact, caused, for example, by movement of the fingers contacting the touch-sensitive area or by movement of the wrist/hand on which the device is worn. The gesture may include detecting that the device is not in a locked state in step 8105, detecting an absence of touch on a surface in step 8110, detecting at least two fingers touching the surface in step 8115, detecting movement of the touch points on the surface in step 8120, decoding the relative movement in step 8125, and detecting a gesture in step 8130. The movement of the wrist/hand can be detected, for example, by inertial sensors in the device, allowing the different ways of moving the touch points to constitute two distinct gestures.
Figure 82 illustrates a gesture involving initial contact with the device, which can be detected by one or more proximity sensors on or within the device, or by inertial sensors on or near the device. The gesture may involve detecting that the contact persists, indicating, for example, that the user has put the device on. For example, the gesture may include detecting no contact with the band proximity sensor or the back sensor in step 8205, detecting contact by the proximity sensor in step 8210, detecting that the contact persists in step 8215, and detecting a gesture in step 8220. The gesture of Figure 82 may unlock or turn on a sleeping device, or provide any other suitable functionality.
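As a non-limiting sketch of the Figure 77 style decoding, the following Python fragment turns tap positions reported along the band into coarse regions and compares the resulting sequence against a stored pattern, for example to unlock the device. The region boundaries, positions, and the stored sequence are hypothetical and purely illustrative.

```python
UNLOCK_SEQUENCE = ["top", "bottom", "top"]   # hypothetical stored tap pattern

def decode_band_taps(taps_mm, band_length_mm=200.0):
    """Decode tap positions (in millimetres along the band) into coarse regions."""
    regions = []
    for position_mm in taps_mm:
        regions.append("top" if position_mm < band_length_mm / 2 else "bottom")
    return regions

def is_unlock_gesture(taps_mm):
    """True when the decoded tap sequence matches the stored unlock pattern."""
    return decode_band_taps(taps_mm) == UNLOCK_SEQUENCE

print(is_unlock_gesture([30.0, 160.0, 45.0]))   # True for the hypothetical pattern
print(is_unlock_gesture([30.0, 40.0, 45.0]))    # False
```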
[000165] In specific embodiments, a gesture may include contact with the skin near the device. Figure 83 illustrates a gesture involving tapping on the skin near where the device is worn. The tapping can be detected by vibration sensors in the device. The tapping motion can be confirmed, for example, by one or more acoustic sensors detecting the sound generated by the tapping gesture. For example, the gesture may include detecting that the device is unlocked in step 8305, detecting motion with relatively high acceleration in step 8310, detecting the sound of, for example, a tap in step 8315, matching the motion or sound to a pattern in step 8320, and detecting a gesture in step 8325. Figure 84 illustrates a gesture involving swiping of the skin near the device, which can be detected and confirmed by the sensors described for Figure 83, above. For example, the gesture may include detecting that the device is unlocked in step 8405, detecting motion with relatively high acceleration in step 8410, detecting the sound of, for example, a tap in step 8415, detecting the vibrations or sound of lateral movement over the skin in step 8420, matching the motion or sound to a pattern in step 8425, and detecting a gesture in step 8430.
[000166] In specific embodiments, gestures may involve detection of metaphorical gestures made by the hand not wearing the device. For example, such a gesture can be detected by, for example, any suitable forward-facing sensor on or near the display of the device, oriented such that the hand not wearing the device is within the viewing angle of the sensor. Figure 85 illustrates an exemplary gesture involving a forward-facing sensor detecting movement of multiple fingers, such as tapping of the fingers. For example, the gesture may include determining that the device is in a predetermined orientation in step 8505, detecting a fingertip in step 8510, detecting movement of the fingertip in step 8515 or detecting a tap sound in step 8525, and detecting one or more gestures in steps 8520 and 8530. Figure 86 illustrates an exemplary gesture involving movement of a single finger. For example, the gesture may include determining that the device is in a predetermined orientation in step 8605, detecting a fingertip in step 8610, detecting movement of the fingertip in step 8615 or detecting a tap sound in step 8525, and detecting one or more gestures in step 8620. Figure 87 illustrates a gesture involving detecting the movement of a hand holding an object, detecting the movement of the object, connecting to the object, and then detecting subsequent movement of the object. As a specific example, the gesture may include detecting that the device is in a predetermined orientation in step 8705, detecting a hand in step 8710, detecting movement of the hand in step 8715, detecting an additional object moving with the hand in step 8720, connecting to the object in step 8725, detecting movement of the object in step 8730, and detecting a gesture in step 8735. For example, the object may be a pen or other stylus-like implement, and the forward-facing sensor on the device can detect the writing movements of the implement, for example, to generate/store text on the device or on another device communicating with the wearable device. The example of Figure 87 may allow a user to generate drawings, notes, or other written content without actually generating the written content on a display or other writing surface. As described more fully herein, any suitable gesture or combination of gestures can be used to affect or initiate augmented-reality ("AR") functionality, and can be used to perform tasks using AR functionality. For example, the gestures of Figures 85-87 can be used to capture a user interaction with a virtual keyboard, virtual mouse, or virtual touch screen, and these interactions can generate input on the wearable device or on any other suitable paired device. Although this disclosure describes specific examples of metaphorical gestures and object detection (and associated functionality), this disclosure contemplates any suitable metaphorical gestures, detection of any suitable objects, and such gestures associated with any suitable functionality.
[000167] In specific embodiments, a gesture may involve the entire accessory on which the device is affixed or worn. For example, Figures 88-92 illustrate exemplary gestures involving movement of the arm on which the device is worn. The gestures can include detecting the initial position of the arm (e.g., via an accelerometer detecting the direction of the gravity vector), detecting movement of the device (via the arm), detecting the corresponding change in the gravity vector, and detecting that the arm has stopped moving. Such gestures may also include detecting the duration of the movement, the amount of movement (e.g., detecting a large radius of movement, confirming that the entire arm has moved), the acceleration of the movement, or any other movement-related attributes. As illustrated by Figures 88-92, gestures may involve detecting arm movements overhead, forward, sideways, backwards, or downwards from an initially higher starting position. For example, a gesture may include detecting a gravity vector indicating that a hand is at the side of the body in step 8805, detecting upward movement of the hand in step 8810, detecting that the gravity vector indicates that the hand is overhead in step 8815, detecting that the hand has stopped moving in step 8820, and detecting a gesture in step 8825. As another example, a gesture may include detecting a gravity vector indicating that a hand is at the side of the body in step 8905, detecting upward and forward movement of the hand in step 8910, detecting that the gravity vector indicates that the hand is horizontal in step 8915, detecting that the hand has stopped moving in step 8920, and detecting a gesture in step 8925. As another example, a gesture may include detecting a gravity vector indicating that a hand is horizontal in step 9005, detecting the hand moving downward and backward in step 9010, detecting that the gravity vector indicates that the hand is at the side in step 9015, detecting that the hand has stopped moving in step 9020, and detecting a gesture in step 9025. As another example, a gesture may include detecting a gravity vector indicating that a hand is at the side of the body in step 9105, detecting the hand moving upward and backward in step 9110, detecting that the gravity vector indicates that the hand is horizontal in step 9115, detecting that the hand has stopped moving in step 9120, and detecting a gesture in step 9125. As another example, a gesture may include detecting a gravity vector indicating that a hand is at the side of the body in step 9205, detecting the hand moving upward and outward in step 9210, detecting that the gravity vector indicates that the hand is horizontal in step 9215, detecting that the hand has stopped moving in step 9220, and detecting a gesture in step 9225. In specific embodiments, gestures may involve a movement of the whole body rather than only the accessory on which the device is worn.
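As a non-limiting sketch of the Figures 88-92 style whole-arm gestures, the following Python fragment compares the gravity vector before and after the movement (and the movement duration) to decide which gesture occurred. The orientation labels, sign conventions, thresholds, and gesture names are illustrative assumptions and do not reproduce the figures exactly.

```python
def classify_arm_gesture(start_gravity, end_gravity, duration_s):
    """Classify a whole-arm gesture from start/end gravity vectors and duration.

    Gravity vectors are (gx, gy, gz) in device coordinates; all conventions here
    are assumptions for the example only.
    """
    def orientation(g):
        gx, gy, gz = g
        if abs(gz) > abs(gx) and abs(gz) > abs(gy):
            return "hand_at_side" if gz > 0 else "hand_overhead"
        return "hand_horizontal"

    if duration_s > 2.0:
        return None  # too slow to count as a deliberate gesture (illustrative cutoff)

    start, end = orientation(start_gravity), orientation(end_gravity)
    transitions = {
        ("hand_at_side", "hand_overhead"):   "raise_overhead",   # cf. Figure 88
        ("hand_at_side", "hand_horizontal"): "raise_forward",    # cf. Figure 89
        ("hand_horizontal", "hand_at_side"): "lower_to_side",    # cf. Figure 90
    }
    return transitions.get((start, end))

print(classify_arm_gesture((0.1, 0.2, 9.7), (0.2, 9.6, 0.5), duration_s=0.8))  # 'raise_forward'
```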
[000168] In specific embodiments, a user can interact with the device through a variety of input mechanisms or types including, for example, the outer ring, touch-sensitive interfaces (e.g., the touch-sensitive layer), gestures performed by the user (described herein), or a speech interface (e.g., including voice input and speech recognition for applications including text input, communication, or search). Additionally, in specific embodiments, a user may interact with a graphical user interface presented on a circular display of the device through any of these input mechanisms or types.
[000169] A user of the wearable electronic device can interact with the device (including, for example, a graphical user interface presented on the circular display) using the outer ring. In specific embodiments, the outer ring may be touch-sensitive, such that a user's touch on one or more portions of the ring may be detected as input to the device and interpreted, causing one or more actions to be performed by the device (e.g., within a graphical user interface of the device). As an example, a touch-sensitive outer ring can be a capacitive ring or an inductive ring, and a user of the device can perform any suitable touch gesture on the touch-sensitive ring to provide input to the device. The input may, for example, include swiping the ring with one finger, swiping the ring with two or more fingers, performing a rotational gesture with one or more fingers, or squeezing the ring. In specific embodiments, the outer ring may be rotatable, such that a physical rotation of the ring may serve as an input to the device. Additionally, in specific embodiments, the outer ring may be clicked (e.g., pressed down) or squeezed. Any of the embodiments of the outer ring may be combined, as appropriate, such that the ring may be one or more of touch-sensitive, rotatable, clickable (or pressable), or squeezable. Inputs from the different modalities of the outer ring (e.g., touch, rotation, clicking or pressing, or squeezing) can be interpreted differently depending on, for example, the combination of input modalities provided by a user. As an example, a rotation of the outer ring may indicate a different input than a rotation in combination with a clicking or pressing action on the ring. Additionally, feedback can be provided to the user when the user provides input via the outer ring, including haptic feedback, audio feedback, or visual feedback, described herein.
[000170] Figure 93A illustrates an example of a user clicking (e.g., pressing down on) the outer ring, indicated by arrows 9310. Figure 93B illustrates an example of a user squeezing the outer ring, indicated by arrows 9320. Figure 94A illustrates an example of a user rotating the outer ring, such that content 9410 of a graphical user interface of the device changes according to the rotation (e.g., to the right). Figure 94B illustrates an example of a user performing a rotation gesture on a touch-sensitive ring, without the ring itself rotating, such that content 9420 of a graphical user interface of the device changes according to the rotation (e.g., to the right). Figure 94C illustrates an example of a user rotating the outer ring while simultaneously pressing or clicking the ring, such that content 9430 of the graphical user interface of the device changes according to the rotation (e.g., to the right) and the pressing or clicking action.
[000171] In specific embodiments, a touch-sensitive interface of the device (e.g., the touch-sensitive layer) can accept user touch input and allow the device to determine the x-y coordinates of a user's touch, identify multiple points of touch contact (e.g., in different areas of the touch-sensitive layer), and distinguish between different temporal lengths of touch interaction (e.g., differentiate between gestures including swiping, a single tap, or a double tap). Touch gestures (described herein) can include multidirectional swiping or dragging, pinching, single or double taps, pressing or pushing on the display (which can cause the display to physically move in an upward or downward direction), long pressing, multiple taps (for example, using multiple fingers or implements to tap or gesture anywhere on the touch-sensitive interface), or rotational touch gestures. Figure 95A illustrates an example of a user tapping 9510 a touch-sensitive interface (e.g., the touch-sensitive layer) to provide input to the device. The exact x-y coordinates of the user's taps can be determined by the device through input from the touch-sensitive interface (e.g., the touch-sensitive layer). Figure 95B illustrates an example of a user performing, respectively, a clockwise rotation gesture 9515, a counterclockwise rotation gesture 9520, a vertical swipe gesture 9525, and a horizontal swipe gesture 9530. Figure 95C illustrates an example of a user touching the display (including a touch-sensitive layer with multi-touch detection capability) using, respectively, one, two, or three contact points 9535 (e.g., with one, two, or three fingers or implements) simultaneously. Figure 95D illustrates an example of a user performing touch gestures that have multiple points of contact with the touch-sensitive interface. The user can, in this example, perform an expansion gesture 9540, a pinch gesture 9545, a clockwise rotation gesture 9550, or a counterclockwise rotation gesture 9555 with two fingers.
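As a non-limiting sketch of distinguishing some of the gestures above by temporal length and displacement, the following Python fragment classifies touch-down/up events into taps, double taps, long presses, and swipes. The event format and thresholds are illustrative assumptions, and multi-touch pinch/rotation handling is omitted for brevity.

```python
def classify_touch(events, tap_max_s=0.25, double_tap_gap_s=0.35, swipe_min_px=30):
    """Classify a stream of ("down"/"up", timestamp_s, x, y) events into gestures."""
    gestures = []
    last_tap_time = None
    down = None
    for kind, ts, x, y in events:
        if kind == "down":
            down = (ts, x, y)
        elif kind == "up" and down:
            t0, x0, y0 = down
            dt, dx, dy = ts - t0, x - x0, y - y0
            if abs(dx) >= swipe_min_px or abs(dy) >= swipe_min_px:
                gestures.append("swipe_horizontal" if abs(dx) > abs(dy) else "swipe_vertical")
            elif dt <= tap_max_s:
                # Two quick taps in succession are merged into a double tap.
                if (last_tap_time is not None
                        and t0 - last_tap_time <= double_tap_gap_s
                        and gestures and gestures[-1] == "tap"):
                    gestures[-1] = "double_tap"
                else:
                    gestures.append("tap")
                last_tap_time = ts
            else:
                gestures.append("long_press")
            down = None
    return gestures

print(classify_touch([("down", 0.00, 10, 10), ("up", 0.10, 11, 10),
                      ("down", 0.30, 10, 10), ("up", 0.40, 10, 11)]))  # ['double_tap']
```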
[000172] In specific embodiments, a graphical user interface of the device may operate according to an interaction and transition model. The model can, for example, determine how modes including applications, functions, sub-modes, confirmations, content, controls, active icons, actions, or other features or elements can be organized (e.g. in a hierarchy) within a device's graphical user interface.
[000173] In one embodiment, the graphical user interface (GUI) includes multiple top-level screens, each of which corresponds to a different mode or application (or sub-mode, function, confirmation, content, or any other feature) of the device. Each of these applications can be at the same level of the hierarchy of the GUI's interaction and transition model. Figure 96A illustrates an exemplary layout of a hierarchy within the GUI in which the multiple top-level screens 9602-9606 and 9610-9614 individually correspond to different applications, and one of the top-level screens 9608 (the home screen) corresponds to a clock. State transitions within the GUI can be events triggered by input from an input source such as the user of the device. An input from a user of the device or from another input source (e.g., through any of a variety of input mechanisms or types including the outer ring, touch-sensitive interfaces, gestures, speech, or sensors) can cause a transition within the GUI (e.g., from one top-level screen to another). For example, an input can cause the GUI to transition from the home screen 9608 (e.g., the clock) to an application (e.g., application 3 or 4) or from one application to another application. If the user rotates the outer ring to the right, for example, the GUI may transition from the home screen 9608 to application 4 9610, and if the user rotates the outer ring to the left, the GUI may transition from the home screen 9608 to application 3 9606. In still other embodiments, context (e.g., as determined by sensors or other input sources on the device) can cause the GUI to transition from the home screen to an application or from one application to another application.
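As a non-limiting sketch of the Figure 96A style top level, the following Python fragment models a home (clock) screen with applications to its left and right, navigated by discrete rotations of the outer ring. The screen names are purely illustrative.

```python
class TopLevelScreens:
    """Sketch: top-level screens arranged left and right of a home clock screen."""

    def __init__(self):
        self.screens = ["app1", "app2", "app3", "home_clock", "app4", "app5", "app6"]
        self.index = self.screens.index("home_clock")

    def on_ring_rotate(self, increments):
        """Positive increments = clockwise (move right), negative = counterclockwise."""
        self.index = max(0, min(len(self.screens) - 1, self.index + increments))
        return self.screens[self.index]

gui = TopLevelScreens()
print(gui.on_ring_rotate(+1))   # 'app4' -- rotate right from the home screen
print(gui.on_ring_rotate(-2))   # 'app3' -- rotate left past the home screen
```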
[000174] In one embodiment, the model may include operability to differentiate the "left" and "right" sides of the home screen. As an example, one or more of the top-level screens may be associated with modes or applications (or other features) in the hierarchy of the GUI's interaction and transition model that are either fixed (e.g., always available to the user) or contextual or dynamic (e.g., available depending on context). Contextual screens may, for example, reflect the modes, applications, or functions most recently used by the user, the modes, applications, or functions most recently added (e.g., downloaded) by the user, ad-hoc registered devices (which may, for example, enter or exit the communication range of the device as it is used), modes, applications, or functions that are "favorites" of the user (e.g., explicitly designated by the user), or modes, applications, or functions that are suggested for the user (for example, based on previous user activity or the current context). Figure 96B illustrates an exemplary layout of a hierarchy within the GUI in which contextual or dynamic applications 9616-9620 and fixed applications 9624-9628 are grouped separately, with the left side (relative to the home clock screen 9622) including contextual applications and the right side including fixed applications. As an example, Dynamic Application 01 9620 might be the most recently used application, and Dynamic Application 02 9618 might be the second most recently used application, and so on.
[000175] In specific embodiments, the top level of the hierarchy of the GUI's interaction and transition model may include only "faces", and the next level of the hierarchy may include applications (or any other features). As an example, the top level of the hierarchy may include a home screen (e.g., the clock) and one or more faces, each face corresponding to a different type of background, mode, or activity, such as a wallpaper (for example, customizable by the user), weather information, a calendar, or daily activity information. Each of the faces can show the time in addition to any other information displayed. Additionally, the face currently displayed can be selected by the user (e.g., via any suitable mechanism or input type) or changed automatically based on context (e.g., the user's activity). The faces to the left of the home screen can be contextual, and the faces to the right of the home screen can be fixed. Figure 97 illustrates an exemplary layout of a hierarchy within the GUI in which the top level of the hierarchy includes faces 9710-9770 (including clock face 9740) and the next level of the hierarchy includes applications 9715-9775.
[000176] In specific embodiments, an input from a user of the device or an input from another input source (e.g., through any of a variety of input mechanisms or types including the outer ring, touch-sensitive interfaces, gestures, speech, or sensors), or a context of use of the device, can cause a transition within the GUI from a screen at one level of the hierarchy of the GUI's transition and interaction model to a screen at another level of the hierarchy. For example, a selection event or user input (e.g., a touch or tap on the display, voice input, eye gaze, a click or press of the outer ring, squeezing of the outer ring, any suitable gesture, internal muscle movement detected by sensors, or other sensor input) can cause a transition within the GUI from a higher-level screen to a screen nested deeper in the hierarchy. If, for example, the current screen is a top-level screen associated with an application, a selection event (e.g., pressing the ring) selects the application and causes the GUI to transition to a screen nested one layer deeper. This second screen may, for example, allow interaction with a feature of the selected application and may, in specific embodiments, correspond to a main function of the selected application. There can be multiple screens in this nested second layer, and each of these screens can correspond to different functions or features of the selected application. Similarly, a "back" selection input or user event (e.g., a double press of the outer ring or a touch gesture on a specific part of the display) can cause a transition within the GUI from one screen (for example, a feature of a specific application) to another screen that is higher in the hierarchy (for example, the top-level application screen).
[000177] Figure 98A illustrates an example of the operation of the transition and interaction model with respect to a function or mode 9805 of a specific application of the device and the use or application of the function 9810. As an example, if the application is a camera, the functions, modes, or other elements of the camera application may include image mode, video mode (for example, with a live view), and turning the flash on or off. The various functions, modes, or other elements can be accessed via transitions within a single layer of the model hierarchy. These intra-layer transitions can occur upon receipt or determination of a specific type of transition event or input from an input source such as the user of the device (for example, a clockwise or counterclockwise rotation of the outer ring), or upon determination of a specific context of use of the device. In specific embodiments, a transition-event input may also include, for example, a touch or tap on the display, voice input, eye gaze, a click or press of the outer ring, squeezing of the outer ring, any suitable gesture, internal muscle movement detected by sensors, or other sensor input. To select and use a function, mode, or other element of the application, the user can provide a specific type of selection event or input (e.g., a tap or touch on the display, a press or click of the outer ring, a specific gesture, or sensor input), causing an inter-layer transition within the GUI to a deeper layer of the hierarchy. As an example, to capture a video, the user can tap a screen associated with the video mode feature of the camera application. Once in this deeper layer of the hierarchy, capturing video, the user can cause the GUI to transition between different options in that layer, if available (e.g., options related to video mode). In specific embodiments, the user can select one of the options in that deeper layer, causing the GUI to transition to an even deeper layer. As an example, when recording video in video mode, the user can again tap the display to transition the GUI to a deeper layer, which in this case may include the option to stop the video recording. Additionally, the user can return to a higher layer of the hierarchy by providing a specific type of selection event or input (e.g., a "back" input, described herein). As an example, when recording video in video mode, the user can tap a specific "back" portion of the display, causing the video recording to be canceled and causing the GUI to transition to the screen associated with the video mode feature of the camera application (for example, in the features layer of the hierarchy). The hierarchy of the GUI's transition and interaction model can have any number of layers and any number of elements (e.g., functions or content) within a single layer. Figure 98B illustrates an example of the operation of the transition and interaction model with respect to content 9815 on the device. In this exemplary model, the content can behave similarly to an application, except that if the user selects the content 9815 (for example, a photo) and the GUI transitions to a deeper layer in the hierarchy, the first option 9820 in a menu of options related to the content may be shown (for example, options such as deleting the photo or sharing the photo). Figure 98C illustrates an example of the operation of the transition and interaction model with respect to a control 9825 on the device. A control element can function like a knob, in that it can modify a value over a range of possible values.
User input to the device (for example, rotating the outer ring to the right or left) can modify the value or state 9830 associated with the control element 9825. The value modified by a control element can be of a substantially continuous nature (for example, the zoom level of a camera or the volume level of a television) or can be of a substantially discrete nature (for example, the channel of a television). In specific embodiments, in cases where the value modified by a control is of a discrete nature, a specific user input (e.g., pressing or squeezing the outer ring) can "commit" the selection of the value. Figure 98D illustrates an example of the operation of the transition and interaction model with respect to an application 9835 on the device and a main function 9840 of the application. As an example, each mode or function of the device (e.g., augmented reality or camera functions) can be an application on the device. Transitions within a single layer (e.g., performed upon receipt of a specific user input such as a rotation of the outer ring) allow the user to change the application, mode, or function of the device. Transitions between layers (e.g., performed upon receipt of a specific user input such as a tap on the display) allow the user to enter deeper layers (or exit deeper layers) of the hierarchy associated with the selected application, mode, or function.
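As a non-limiting sketch of the layered model of Figures 98A-98D, the following Python fragment keeps a stack of layers: rotating the ring moves within the current layer, a select event descends one layer, and a "back" event ascends one layer. The example hierarchy (a camera application with modes and options) and all names are illustrative assumptions.

```python
class InteractionModel:
    """Sketch: intra-layer transitions by rotation, inter-layer by select/back."""

    def __init__(self, hierarchy):
        self.hierarchy = hierarchy             # nested dict: name -> children
        self.stack = [list(hierarchy.keys())]  # sibling lists, top layer first
        self.cursor = [0]                      # selected index per layer

    def current(self):
        return self.stack[-1][self.cursor[-1]]

    def rotate(self, increments):              # intra-layer transition (ring rotation)
        layer = self.stack[-1]
        self.cursor[-1] = (self.cursor[-1] + increments) % len(layer)
        return self.current()

    def select(self):                           # descend one layer (e.g., tap on display)
        children = self._node_for_stack().get(self.current(), {})
        if children:
            self.stack.append(list(children.keys()))
            self.cursor.append(0)
        return self.current()

    def back(self):                             # ascend one layer (e.g., "back" input)
        if len(self.stack) > 1:
            self.stack.pop()
            self.cursor.pop()
        return self.current()

    def _node_for_stack(self):
        node = self.hierarchy
        for i in range(len(self.stack) - 1):
            node = node[self.stack[i][self.cursor[i]]]
        return node

# Hypothetical camera hierarchy: modes at one layer, options one layer deeper.
camera = {"image_mode": {}, "video_mode": {"start": {}, "stop": {}}, "flash": {"on": {}, "off": {}}}
gui = InteractionModel({"camera": camera})
gui.select()           # enter the camera application
gui.rotate(1)          # move to 'video_mode' within the same layer
print(gui.select())    # 'start' -- one layer deeper
```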
[000178] Figure 98E illustrates an example of the operation of the interaction and transition model with respect to an action 9845 (e.g., within an application) on the device. As an example, within the camera application, a captured image can be selected, and one or more actions can be available for the selected image, such as deleting the image, sharing the image on FACEBOOK, sharing the image on TWITTER, or sending an e-mail with the image. In this example, GUI transitions within the "action" layer (e.g., performed upon receipt of a specific user input such as a rotation of the outer ring) allow the user to view the different actions that can be performed. Transitions between layers (for example, performed upon receipt of a specific user input such as a tap on the display) allow the user to enter deeper layers (or exit deeper layers) of the hierarchy associated with the selected action. In this example, the deeper layer entered by selecting an action 9845 shows secondary information 9850 or a confirmation (e.g., that the application is sending the image information to a selected sharing service). A confirmation 9855 (e.g., that the image has been sent) can also be shown in this deeper layer. The GUI can automatically transition back to a higher layer (e.g., the action layer). However, there may be a deeper layer of the hierarchy including the confirmation information, and this deeper layer may be entered by the GUI upon user input or automatically. Figure 98F illustrates an example of the operation of the interaction and transition model with respect to an icon (for example, an active icon 9860 including a top-level on/off option) and the toggling of the state of the icon 9865. As an example, a television communicatively paired with the device may be indicated by an active icon, for example, a television screen. In this example, GUI transitions within the top device/application layer (e.g., performed upon receipt of a specific user input such as a rotation of the outer ring) allow the user to view different applications, devices, or other features. The television may appear in a menu in the GUI of the device even when the television is turned off, but the television may need to be turned on before it can be used. If the user selects the television (e.g., by tapping the screen when the television icon is displayed by the GUI) when it is turned off 9860, the GUI may transition to a state deeper in the hierarchy of the transition and interaction model in which the television is turned on 9865. When the television is turned on, the icon associated with the television (displayed, for example, in the top layer of the model in the GUI) 9870 may change to represent directly that the television has been turned on 9875, as illustrated in Figure 98G. If the user selects the television again (now turned on), the GUI can transition to an even deeper layer of the hierarchy in which the functions or capabilities of the television (e.g., changing the channel or volume) are exposed. In specific embodiments, the option to turn the television off again can be the first menu item in this deeper layer of the hierarchy, to allow quick access to the turn-off function (for example, in case the user has accidentally turned the television on). In specific embodiments, if the user selects the television when it is turned off, the television can be turned on and the icon associated with the television can change to represent directly that the television has been turned on, without the GUI transitioning to a different layer of the hierarchy or to a different user interface.
The active television icon can therefore directly indicate within the upper level of the hierarchy (eg a main menu) the state of the paired television.
[000179] Figure 99 illustrates an example of the hierarchy of the transition and interaction model of a GUI for an image capture application. In this example, the first screen 9902, arrived at after selection of the application (at screen 9900), may correspond to a "live view" function of the application. Other fixed features of the image capture application, including video mode 9904, zoom 9906, or flash 9908, may be available to the right of the main function home screen 9902 of the selected application. Dynamically or contextually available features (e.g., captured images 9910) of the selected application may be available to the left of the main function home screen. A selection event at this function layer of the hierarchy can cause a transition within the GUI to another layer nested even deeper within the hierarchy. If, for example, the user selects the "zoom" function, the GUI may transition to a screen 9912 in which the user can control the zoom setting of a camera with any suitable input (e.g., a rotation of the outer ring to the right to zoom in or a rotation of the outer ring to the left to zoom out). Similarly, the user may be able to control the state of different features (e.g., turning a flash feature 9914 on or off, or switching from a photography mode to a video mode 9916), browse content (e.g., 9918-9922), enter a deeper layer of the hierarchy in which actions 9924-9930 can be performed, or enter yet another deeper layer of the hierarchy in which confirmations 9932-9938 are provided once an action is selected.
[000180] In specific embodiments, an interaction layout can structure a transition and interaction model of a device GUI. An interaction layout can be applied to any suitable interaction model and need not be dependent on any specific type of motion or animation within a device GUI, for example. While specific interaction layout examples are discussed below, any suitable interaction layout can be used to structure a transition and interaction model.
[000181] As an example, a linear panning interaction layout can structure the transition and interaction model of a GUI of the device. In a linear panning type GUI, elements or features within a layer can be arranged to the left and right of the currently displayed element or feature. User input such as rotating the outer ring in a clockwise or counterclockwise direction navigates within a single layer of the model hierarchy. As an example, a clockwise rotation of the outer ring by one rotational increment may display the element or feature to the right (e.g., the next element), and a counterclockwise rotation by one rotational increment may display the element or feature to the left (e.g., the previous element). In specific embodiments, a fast rotation in the clockwise or counterclockwise direction can cause the GUI to perform accelerated navigation. In such an embodiment, a single rotation can cause the GUI to pan through multiple elements or features, rather than a single element or feature, as described herein. Different user input can navigate between layers (e.g., deeper layers or higher layers) in the model hierarchy. As an example, if the user touches or taps the touch-sensitive layer of the display, the GUI can transition to a layer one level deeper in the model hierarchy (e.g., confirming the user's selection or providing options related to the selection). Any suitable input by the user can cause the GUI to transition between layers in the model hierarchy, either in place of, or in addition to, touch- or tap-based input.
[000182] As another example, if the user presses a specific region of the touch-sensitive layer of the display (for example, designated as a "back" button), or if the user double-taps the touch-sensitive layer of the display, the GUI can transition to a layer higher in the model hierarchy (for example, to the previous layer). If, for example, the user performs a long press on the display or screen, the GUI can transition back to the home screen (e.g., a clock). Without additional user input, the GUI can also transition back to the home screen after a predetermined period of time (e.g., a timeout period). As described herein, when a user starts, for example, rotating the outer ring in a clockwise or counterclockwise direction, the GUI transitions within the same layer, and the next UI element or feature (for example, a breadcrumb icon in the same layer) to the right or left, respectively, may begin to appear while the current UI element or feature may begin to disappear.
[000183] Figure 100A illustrates an example of the linear panning interaction layout. In this example, GUI elements 10001, 10002, 10003, and 10004 are in the same layer of the transition and interaction model hierarchy of the linear panning type GUI. GUI elements 10002A, 10002B, and 10002C are elements in a second, deeper layer of the hierarchy and are child elements of element 10002. As an example, the first layer may include devices paired with the device: element 10001 may represent an automobile, element 10002 may represent a television, element 10003 may represent a mobile phone, and element 10004 may represent a home thermostat. Element 10002A may be a volume control element for the television, element 10002B may be a channel control element for the television, and element 10002C may be a picture control element for the television. As yet another example, the GUI can transition to a deeper layer in the hierarchy if the user clicks the ring (for example, presses the ring once), and the child elements in the deeper layer can then be panned by rotating the ring. Alternatively, the user can pan the child elements in the deeper layer by rotating the ring while simultaneously pressing down on the ring. The device may include a switch to select how user input is used to navigate between layers.
[000184] As another example, a radial panning (or circular panning) interaction layout can structure the transition and interaction model of a GUI of the device. In a radial panning type GUI, elements or features in a layer can be arranged above and below the currently displayed element or feature. User input such as rotating the outer ring in a clockwise or counterclockwise direction navigates between layers of the model hierarchy. As an example, rotating the outer ring clockwise by one increment can cause the GUI to transition to a layer one level deeper in the model hierarchy (e.g., entering a specific application's layer or confirming the selection of the application), and a counterclockwise rotation by one increment can cause the GUI to transition to a layer one level higher in the model hierarchy (e.g., moving from an application-specific layer to the previous layer). In specific embodiments, a fast rotation in the clockwise or counterclockwise direction can cause the GUI to perform accelerated navigation, as described herein. In such an embodiment, a single rotational increment can cause the GUI to move through multiple layers of the hierarchy rather than a single layer. Different user input can navigate within a single layer of the model hierarchy. As an example, if the user touches or taps the touch-sensitive layer of the display, the GUI can transition to the next element or feature (e.g., the element below the currently displayed element). As another example, if the user presses a specific region of the touch-sensitive layer of the display (for example, designated as the "back" button), or if the user double-taps the touch-sensitive layer of the display, the GUI can transition to the previous element or feature (for example, the element above the currently displayed element). If, for example, the user performs a long press on the display or screen, the GUI can transition back to the home screen (e.g., a clock). Without additional user input, the GUI can also transition back to the home screen after a predetermined period of time (e.g., a timeout period). As described herein, when a user starts, for example, rotating the outer ring in a clockwise or counterclockwise direction, the GUI transitions to a different layer, and the next UI element or feature (for example, in a different layer) may begin to appear while the current UI element or feature may begin to disappear. Figure 100B illustrates an example of the radial panning interaction layout. In this example, GUI elements 10001, 10002, 10003, and 10004 are in the same layer of the transition and interaction model hierarchy of the radial panning type GUI. GUI elements 10002A, 10002B, and 10002C are elements in a second, deeper layer of the hierarchy and are child elements of element 10002. As before, the first layer can include devices paired with the device: element 10001 can represent an automobile, element 10002 can represent a television, element 10003 can represent a mobile phone, and element 10004 can represent a home thermostat. Element 10002A may be a volume control element for the television, element 10002B may be a channel control element for the television, and element 10002C may be a picture control element for the television.
[000185] As yet another example, an accordion-type interaction layout can structure the interaction and transition model of a GUI of the device. In an accordion-type GUI, elements or features from multiple layers can be arranged in a circular list structure. For example, rotating within the list structure (e.g., by rotating the outer ring) in a first direction past a screen associated with the last element or feature in that direction (e.g., the last fixed application of the device) can cause the GUI to transition to a screen associated with the last element or feature in a second direction (e.g., the least recently used contextual application of the device). Continuing to rotate in the first direction can cause the GUI to transition through the screens associated with the contextual applications in "reverse" order (e.g., from least recently used to most recently used). Similarly, rotating in the second direction past the screen of the least recently used contextual application can cause the GUI to transition to the screen associated with the last fixed application, and continuing to rotate in the second direction can cause the GUI to transition through the screens of the fixed applications in reverse order (e.g., from the last fixed application to the first one, adjacent to the home screen). In an accordion-type GUI, the currently displayed element or feature can be "expanded" (e.g., if selected by the user) such that its child elements or child features can become part of the single-layer list structure. In specific embodiments, an element or feature with child elements can indicate (when displayed) that it has child elements through, for example, visible edges of the child elements. User input such as rotating the outer ring in a clockwise or counterclockwise direction navigates within a single layer of the model, which may include elements or features, as well as the child elements or child features of a selected element or feature. As an example, a clockwise rotation of the outer ring by one increment may display the element or feature to the right (for example, the next element), and a counterclockwise rotation by one increment may display the element or feature to the left (for example, the previous element). In specific embodiments, a fast rotation in the clockwise or counterclockwise direction can cause the GUI to perform accelerated navigation. In such an embodiment, a single rotational increment can cause the GUI to transition through multiple elements or features rather than a single element or feature. Different user input can cause the selection and expansion of an element or feature in the model. As an example, if the user touches or taps the touch-sensitive layer of the display, the GUI can expand the displayed feature or element within the existing layer and transition to a child element or child feature. As another example, if the user presses a specific region of the touch-sensitive layer of the display (e.g., designated as the "back" button), or if the user double-taps the touch-sensitive layer of the display, the GUI can collapse the expanded child elements or child features and transition to an element or feature in the list. If, for example, the user performs a long press on the display or screen, the GUI can transition back to the home screen (e.g., a clock). Without additional user input, the GUI can also transition back to the home screen after a predetermined period of time (e.g., a timeout period).
As described herein, when a user starts, for example, rotating the outer ring in a clockwise or counterclockwise direction, the GUI transitions within the same layer, and the next UI element or feature (for example, a breadcrumb icon in the same layer) to the right or left, respectively, may begin to appear while the current UI element or feature may begin to disappear. Figure 100C illustrates an example of the accordion-type interaction layout. In this example, GUI elements 10001, 10002, 10003, and 10004 are in the same layer of the interaction and transition model of the accordion-type GUI. Because element 10002 has been selected by the user, GUI child elements 10002A, 10002B, and 10002C are expanded and also included in the list structure of the same layer of the model. Thus, the GUI can transition from child element 10002C to child element 10002B or directly to element 10003. If, however, the user wishes to collapse the child elements (for example, via a "back" input such as tapping the screen associated with element 10002 again), then the list structure will include only GUI elements 10001, 10002, 10003, and 10004 again.
[000186] In specific embodiments, the GUI can navigate to a home screen based on input received from a user of the device. The user input may include, for example, pressing and holding (e.g., a long press on) the touch-sensitive layer, pressing and holding the display, clicking (e.g., pressing) and holding the outer ring, squeezing and holding the outer ring, covering the face (e.g., the display) of the device, covering a specific sensor of the device, turning the face of the device in a downward direction, pressing a software button (discussed herein), pressing a hardware button on the device, or shaking the device (or any other suitable gesture). Any of these inputs or any variation of these inputs (including, for example, shorter durations) can be used as user input to "go back" within a transition and interaction model. Figures 101A-101B illustrate examples of a "back" software button layout in the GUI. In Figure 101A, receiving user touch input in the bottom portion 10110 of the display causes the GUI to confirm a selection or to transition to a layer one level deeper in the model hierarchy. Receiving user touch input in the top portion 10120 of the display causes the GUI to transition "back", to a layer one level higher in the model hierarchy. Figure 101B illustrates a similar layout, with the "back" region 10130 including a breadcrumb icon 10135 to indicate to the user where navigating "back" will transition to. In specific embodiments (for example, when the touch-sensitive layer is operable to determine the exact x-y coordinates of a touch), any region of the display can be designated as a "back" region, a "confirm/select" region, or any other suitable functional region.
[000187] In specific embodiments, the GUI of the device can display specific types of content including, for example, lists. Figure 102A illustrates an example of the GUI displaying a vertical list of items. Input from the user (e.g., any suitable mechanism or input type) can cause a selection frame 10210 of the GUI to scroll through the elements of the vertical list. As an example, if the user rotates the outer ring in a clockwise direction, the selection frame 10210 can move from the top of the vertical list toward the bottom of the vertical list. Each rotational increment of the outer ring (for example, if the outer ring moves in discrete increments) causes the selection frame 10210 to move one item within the list. In the example of Figure 102A, when the user rotates the ring clockwise, the displayed list items remain constant and the selection frame 10210 scrolls down through the list items. In other embodiments, the selection frame may remain constant (e.g., in the center of the display) and the list items may scroll up or down (e.g., one item at a time), depending on the direction of rotation of the ring. Figure 102B illustrates an example of the GUI displaying a horizontal list of items. Input from the user (e.g., any suitable mechanism or input type) can cause a selection frame 10210 of the GUI to scroll through the elements of the horizontal list. As an example, if the user rotates the outer ring in a clockwise direction, the selection frame 10210 can move from the left of the horizontal list toward the right of the horizontal list. Each rotational increment of the outer ring (for example, if the outer ring moves in discrete increments) causes the selection frame 10210 to move one item within the list. In the example of Figure 102B, when the user rotates the ring clockwise, the selection frame 10210 remains constant in the center of the display, and the list items shift to the left (for example, one item at a time) in response to the clockwise rotation. In other embodiments, the displayed list items remain constant, and the selection frame moves left or right through the list items, depending on the direction of rotation of the outer ring.
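As a non-limiting sketch of the Figure 102A behaviour, the following Python fragment moves a selection frame one item per discrete increment of the outer ring, clamped to the ends of the list. The list contents are illustrative only.

```python
def scroll_selection(items, frame_index, ring_increments):
    """Return the new frame index and the framed item after `ring_increments`
    discrete rotations of the outer ring (positive = clockwise)."""
    new_index = max(0, min(len(items) - 1, frame_index + ring_increments))
    return new_index, items[new_index]

items = ["Mon", "Tue", "Wed", "Thu", "Fri"]
index = 0
index, selected = scroll_selection(items, index, +2)   # two clockwise increments
print(selected)                                         # 'Wed'
```

The alternative embodiment described above (fixed frame, scrolling items) would keep the frame index constant and shift an offset into the list instead.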
[000188] In specific embodiments, the device GUI can display vertically or horizontally continuous (or substantially continuous) content including, for example, graphics or text. In specific embodiments, input from the user (e.g., any suitable input mechanism or type) can cause a selection indicator of the GUI to scroll through the continuous content. In other embodiments, input from the user can cause content to scroll into and out of the display in a horizontal direction, a vertical direction, or any other direction mapped to the user input (and the selection indicator, if present, may remain in a constant position). In the example of Figure 102C, a temperature graph is displayed. When the user rotates the outer ring in a clockwise direction, the selection indicator 10220 remains in the center of the display, and the content moves into the display from the right and out of the display toward the left. In the example of Figure 102D, a portion of a larger piece of text 10230 is displayed. When the user rotates the outer ring in a clockwise fashion, additional text enters the display from the bottom and exits the display toward the top. Figures 103A-103D illustrate an exemplary calendar application displayed in the device GUI. In Figure 103A, a user can click or press the outer ring (denoted by arrow 10305), causing the GUI to display a circular menu 10310 with the options "Up", "Weekly" (the default setting), "Monthly", and "Daily". In Figure 103C, the user can again click or press the outer ring (denoted by arrow 10305), confirming the selection of "Weekly" and causing the GUI to display the weekly view 10320 of the user's calendar.
[000189] In specific embodiments, the GUI can display content that is of a larger size than the display. In such embodiments, the GUI can scale or crop (or otherwise shrink or adjust) the content so that all of the content can be displayed within the display at once. In other embodiments, the GUI does not change the size of the content, and instead provides the ability for the user to pan through the content one portion at a time, for example using scrolling (described herein).
[000190] In specific embodiments, the device includes the circular display, and the GUI includes circular navigation and menu layouts. This disclosure contemplates any suitable shape for the display, however, and any suitable menu or navigation layout for the GUI. The menu layout can provide a user with a visual indication of where the user is located within the GUI's interaction and transition model hierarchy, for example. The menu layout can also provide visual indicators that allow the user to differentiate between different types of menu items, as well as show a global view of menu options. Additionally, the menu can be displayed over any suitable background or device content.
[000191] Figure 104 illustrates an exemplary circular menu layout in which each segment 10410 represents an item or option in the menu, and visual gaps such as 10420 separate the items from one another. The currently selected or default item 10430 is at the top of the visual display (but can be anywhere on the display), and may remain at the top of the display as the user orients the device's display in different ways during use. Figures 105A-105B illustrate an example of navigating items in a circular menu. The user may provide input such as a clockwise rotation of the outer ring, and in response to this user input, the next item in the menu 10520 (e.g., to the right of the currently selected item 10510) may be highlighted for selection. The content in the center of the display 10530 may change automatically to reflect the user's rotation input or may, in specific embodiments, change only after the user provides other input (e.g., pressing or clicking the outer ring when the desired menu item is highlighted). Figures 105C-105D illustrate an example of navigating a circular menu by rotating the outer ring, causing the next item in the menu 10550 (e.g., clockwise, or to the right, of the currently selected item 10540) to be highlighted for selection. In this example, the user input also causes a central "pointer" 10560 to rotate so that it points to the highlighted menu segment corresponding to the currently selected menu item. In this example, the content in the center of the display automatically changes to reflect the user's rotation.
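The following is a minimal geometry sketch, offered only as an illustration and not taken from the specification, of how the angular spans of such a circular menu might be computed, with the selected segment centered at the top of the display and visual gaps between segments. The function name, gap size, and angle convention are assumptions.

```python
# Compute (start, end) angles in degrees for each segment of a circular menu.
# The selected item's segment is centered at 90 degrees (top of the display).
def circular_menu_segments(n_items, selected_index, gap_degrees=4.0):
    span = 360.0 / n_items                      # total angle allotted per item
    arc = span - gap_degrees                    # visible arc after subtracting the gap
    offset = 90.0 - (selected_index * span)     # shift so the selected item sits at the top
    segments = []
    for i in range(n_items):
        center = offset + i * span
        segments.append(((center - arc / 2.0) % 360.0, (center + arc / 2.0) % 360.0))
    return segments

for start, end in circular_menu_segments(n_items=6, selected_index=2):
    print(f"segment from {start:.1f} to {end:.1f} degrees")
```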
[000192] Each of Figures 106A-106C illustrates a different alignment and arrangement of a circular menu layout for the device GUI. The circular menu can, for example, be displayed directly at the edge of the display (as shown in Figure 106A), or it can be shown further inside the display, or as an overlay over the device's background (shown in Figures 106B-106C). Figures 107A-107C illustrate other shapes and alignments of a circular menu layout for the device GUI. As examples, the menu may consist of line segments (of various possible sizes) arranged in a circle 10710, line segments arranged in a semicircle 10720, or dots arranged in a circle or semicircle, 10730 or 10740. In specific embodiments, the visual indicator of the currently selected or default menu item 10732 can remain at the top center of the display, and the visual indicators of menu items 10734 can shift left or right based on user input (Figure 107C). In other embodiments, the visual indicator of the currently selected or default item 10732 may scroll through the indicators of the menu items, which remain fixed in position (Figure 107B). In specific embodiments, instead of segments or dots, the visual indicators of menu items may be icons (e.g., breadcrumb icons) associated with the menu items. Figure 108 illustrates that the menu layout need not be circular and can be any suitable layout, including a layout in which the menu item indicators 10810 are spread across the entire display. With user input (e.g., a rotation of the outer ring), different items can be selected according to their position in the menu layout. As an example, if the user rotates the outer ring in a clockwise direction, the next menu item 10820 in a clockwise direction can be selected.
[000193] Figures 109A-109C illustrate different menu layouts with respect to the menu items "to the left" and "to the right" (e.g., in the interaction and transition model hierarchy) of the currently selected or displayed menu item 10915. In Figure 109A, all menu items 10910 are evenly distributed in the circular menu around the display. In Figure 109B, the menu includes a gap that indicates a differentiation between the items 10910 to the left and the items to the right of the currently displayed or selected menu item 10915 (e.g., in accordance with the interaction and transition model described herein). Figure 109C illustrates an example in which there are more items 10910 to the left than to the right of the currently selected or displayed item 10915, so that the segments on the left of the circular menu are adjusted in size to accommodate the number of items available for selection. In the case of a large number of menu items (for example, beyond a specific threshold such as 40 captured images), the segments of the circular menu may disappear, and the visual indicator presented to the user may be a scroll bar 11020 that allows the user to scroll through the various menu items, as illustrated in Figure 110A. In other embodiments, a similar scroll-bar-type visual indicator 11020 may allow the device user to manipulate an absolute or fixed value (e.g., a camera zoom level) over a fixed range of values 11030, as illustrated in Figure 110B. In still other embodiments, the length of a scroll-bar-type visual indicator can show the user the level of a certain value. For example, if the user is controlling the volume of a television using the device's outer ring, when the user rotates the ring (e.g., clockwise) to increase the volume level, the visual indicator 11120 will become longer, until it surrounds or nearly surrounds the entire display, as illustrated in Figures 111A-111C.
[000194] In specific embodiments, the GUI may display reference or background content as well as an indication of an action or function available to be performed with respect to the reference or background content. Figure 112 illustrates exemplary GUI layouts of reference content and contextual overlays of actions or functions. Different types of layouts (e.g., including those illustrated) can be selected based on the different types of reference or background content presented, for example, to minimize obscuring the reference or background content. For example, if the reference or background content is a picture of a person, an overlay that does not obscure the center of the photograph can be selected. In specific embodiments, the perceptual brightness of the pixels of the reference or background content (e.g., behind the overlay) can be determined on a pixel-by-pixel basis. In cases where the contrast between the contextual overlay and the reference or background content (e.g., an image) is too low (e.g., based on a predetermined threshold), a shadow that pushes the underlying colors in the opposite direction may be used. An exemplary algorithm may include determining the pixels under the overlay, reducing their saturation, taking the inverse of the visual brightness (e.g., such that the colors remain the same but the brightness is selected to produce contrast), blurring, and compositing the overlay over the underlying reference or background content. Figures 113A-113C illustrate examples 11310-11350 of contextual overlays composited with background or reference content (here, images captured by a camera of the device). As illustrated, the contextual overlay can allow the user to perform actions or functions (e.g., deleting an image 11130 or sharing an image 11325, searching for coffee shops 11330, searching for restaurants 11340, or making a location a "favorite" location 11350), provide confirmation to the user (for example, that an image has been shared 11320), or provide any other type of information to the user. In specific embodiments, contextual overlays can be used anywhere within a GUI menu layout except at the top level of the interaction and transition model hierarchy.
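The exemplary overlay algorithm above can be illustrated with a short sketch. The following is offered only as an illustration, assuming RGB pixels in the 0-255 range; the desaturation and brightness-push weights are assumptions, not values from the specification, and the blur step is omitted for brevity.

```python
# Prepare background pixels under a contextual overlay so the overlay stays legible.
import colorsys

def luma(r, g, b):
    """Approximate perceptual brightness of an RGB pixel (ITU-R BT.601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def prepare_background_pixel(r, g, b, desaturate=0.5, push=0.6):
    """Desaturate and push brightness toward its inverse to guarantee contrast."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    s *= (1.0 - desaturate)                    # reduce saturation; hue is preserved
    target_l = 1.0 - l                         # inverse of the visual brightness
    l = l + (target_l - l) * push              # move brightness toward the inverse
    nr, ng, nb = colorsys.hls_to_rgb(h, l, s)
    return int(nr * 255), int(ng * 255), int(nb * 255)

def composite(background_px, overlay_px, overlay_alpha):
    """Alpha-composite one overlay pixel over one prepared (and, in practice, blurred) pixel."""
    return tuple(int(o * overlay_alpha + b * (1.0 - overlay_alpha))
                 for o, b in zip(overlay_px, background_px))

# Example: a bright, saturated background pixel under a white overlay glyph.
bg = prepare_background_pixel(240, 200, 40)
print(composite(bg, (255, 255, 255), overlay_alpha=0.8))
```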
[000195] In specific embodiments, icons displayed in the device GUI can optimize the device's power or battery usage. As an example, an icon may include a primarily black background with the icon itself composed of thin white strokes. This can allow the amount of white color on the display screen to be very low, allowing for reduced power consumption of the display while the GUI is used. Icons displayed in the GUI can also include real-time notifications. For example, a mobile phone icon may include a notification with the number of new voicemails, an e-mail icon may include a notification with the number of new e-mails, a chat icon may include a notification with the number of new chat messages, and a phone icon may include a notification with the number of missed calls. In specific embodiments, the device GUI displays colors other than black and white only for user-generated content (e.g., images, files, contacts, notifications, or schedules). Other information, including menu items, may be displayed in black and white.
[000196] In specific embodiments, when the GUI transitions from one element (e.g., feature, content item, or icon) to another (e.g., upon receiving input from a user), the GUI may display visual transition effects. These transition effects may depend, for example, on the type of input received from a user of the device. As an example, a single touch on the display may trigger specific transition effects, while a rotation of the outer ring may trigger a different (potentially overlapping) set of transition effects.
[000197] In specific embodiments, a user's touch input on the touch-sensitive layer can trigger transition effects including center-oriented expansion, directional sliding, and scaling in or out. Figure 114A illustrates center-oriented mode or function expansion or scaling up. Figure 114B illustrates center-oriented mode or function collapsing or scaling down. Figure 115A illustrates center-oriented scaling up of an icon. Figure 115B illustrates center-oriented scaling down of an icon. Figure 116A illustrates an example of center-oriented scaling up of an icon with a twisting motion. Figure 116B illustrates an example of center-oriented scaling down of an icon with a twisting motion. Figure 117A illustrates an example of center-oriented unfolding and outward expansion of an icon. Figure 117B illustrates an example of center-oriented folding and inward collapsing of an icon. Figure 118A illustrates an example of text sliding vertically into the display, where the text is revealed by unmasking. Figure 118B illustrates an example of text sliding horizontally from the left to the right of the display. Figure 118C illustrates an example of text sliding horizontally from left to right of the display within a masked region (e.g., a contextual overlay). Figure 119A illustrates a right-to-left horizontal slide transition for content or an icon. Figure 119B illustrates a right-to-left horizontal slide transition with fading effects; the icon or content leaving the screen fades gradually as it reaches the edge of the screen, and the icon or content entering the screen fades in gradually as it crosses the edge of the screen. Figure 119C illustrates an example of a right-to-left horizontal slide transition with scaling effects; the content or icon leaving the screen is shrunk, and the content or icon entering the screen is enlarged to full size.
[000198] In specific embodiments, a user's rotation of the outer ring can trigger visual transition effects including zooming, directional scrolling, blurring, masking, page folding, rotational movement, and accelerated movement. Figure 120A illustrates an example of a transition in response to a low-acceleration rotation of the outer ring. In this example, a single rotational increment can correspond to a single item, such that one counterclockwise turn (e.g., one rotational increment) causes the next element (e.g., icon or content item) to enter the screen from the left toward the right, and no scaling of elements occurs. Figures 120B-120C together illustrate an example of a transition in response to a high-acceleration rotation of the outer ring. In this example, a single counterclockwise turn (e.g., one rotational increment) causes the GUI to pan quickly through multiple elements (which may be reduced in size, enter the screen from the left, and exit the screen from the right) until the user stops turning the ring. When the user stops turning the outer ring, the element may scale up to normal size, and a single icon or content item may fill the display. Figure 121A illustrates an example of a transition within the GUI in which content is reduced in size in response to rotation of the outer ring. Figure 121B illustrates an example of a transition within the GUI in which a first screen 1 "folds over" in an animation, resulting in a second screen 2 (e.g., for the next feature or content item) being displayed to the user.
[000199] In specific embodiments, the device GUI may include a physical model that takes the user's movement into account and produces visual feedback reflecting the user's movements. As an example, once there is activating input (e.g., in the form of a specific gesture) by the user, the user's movement can be tracked continuously through input from one or more of the device's sensors. The visual feedback can reflect the user's movement in the user interface while the underlying content remains static, so that gestures can be registered and parallax can be used to distinguish between UI features or controls and the underlying content. In specific embodiments, the physical model may include a generalized spring model with damping. In such a model, items can be arranged in layers. The deeper layer can have a "harder" spring in the physical model holding the items in place. This can cause the lower layers of the UI to shift slightly when the device is moved, while the upper layers may move more, creating a sense of parallax. Additionally, the spring model can include damping, which causes motion to lag, creating a smoother, more fluid motion. Figure 122 illustrates an example of using a physical model in the GUI. The user wears the device 100 on his arm. When the user moves the arm in a downward direction, the icon 12210 displayed on the screen (for example, a light bulb) moves in a way that reflects the user's movement. The underlying content (e.g., the background image) on the screen, however, does not move. This type of floating icon or menu item can, for example, be useful when the display is of a size that does not allow many icons or menu items to be displayed simultaneously due to visual crowding. Additionally, this kind of floating behavior can also be used with notifications to present an event to the user.
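The following sketch illustrates, purely as an example, a generalized damped-spring model of the kind described above, in which deeper layers use a stiffer spring and therefore displace less than upper layers, producing parallax. The constants, time step, and class names are assumptions.

```python
# Each UI layer is anchored by a damped spring; stiffer springs hold deeper layers in place.
class SpringLayer:
    def __init__(self, stiffness, damping):
        self.stiffness = stiffness   # "harder" springs barely move the deeper layers
        self.damping = damping
        self.offset = 0.0            # displacement of the layer on the display
        self.velocity = 0.0

    def step(self, device_motion, dt=1.0 / 60.0):
        """Advance the spring one frame given measured device motion (e.g., from an IMU)."""
        # Device motion drives the layer away from rest; the spring pulls it back and
        # the damper bleeds off velocity, lagging the motion for a smoother feel.
        force = device_motion - self.stiffness * self.offset - self.damping * self.velocity
        self.velocity += force * dt
        self.offset += self.velocity * dt
        return self.offset

# Deeper (background) layer: stiff spring, barely moves. Upper (icon) layer: softer
# spring, moves more, which produces the parallax effect between the two.
background = SpringLayer(stiffness=40.0, damping=8.0)
icon = SpringLayer(stiffness=8.0, damping=4.0)

for _ in range(30):                      # simulate half a second of a downward arm motion
    sample = 5.0                         # constant accelerometer-derived drive, for illustration
    bg_off, icon_off = background.step(sample), icon.step(sample)
print(f"background offset {bg_off:.2f}, icon offset {icon_off:.2f}")
```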
[000200] In specific embodiments, the device GUI can include faces as default screens or wallpapers for the device, and these faces can be part of the interaction and transition model hierarchy (e.g., at the top layer of the hierarchy or as a home screen). As described herein, these faces can be switchable applications or modes that can automatically respond contextually to a user's activity. As an example, faces can change depending on the user's environment, needs, tastes, location, activity, sensor data, gestures, or schedule. The availability of a face (or the transition in the GUI from one face to another) can be determined based on contextual information. As an example, if the user has an upcoming event scheduled in their calendar, the device face may change to a calendar face that displays the upcoming event information to the user. As another example, if the user is determined to be in the vicinity of their home (e.g., based on GPS data), the device face may change to a face associated with a home automation application. As yet another example, if it is determined (e.g., based on various biometric sensors such as heart rate or arousal sensors, or based on accelerometers) that the user is moving vigorously, the device face may change to a fitness mode, showing the user's measured pulse, calories burned, time elapsed since starting the activity (e.g., running), and the time. Any suitable sensor data (e.g., from sensors including biometric sensors, focus sensors, or sensors that can determine the position of a user's hands while driving a car) can be used to determine the appropriate context and face to display to the user. The user's historical use of the device (e.g., a specific time of day when the user has used a fitness application, such as at a fitness class) can also determine which face is displayed on the device. As an example, the device can anticipate the user's need for the fitness mode at the specific time of day when the user tends to exercise. Contextual faces can also be associated with the suppression of notifications (for example, if the user is determined to be driving or if the device is not being worn) or with a change in how notifications are expressed (for example, visually or audibly). In specific embodiments, the device's faces need not be associated with any application on the device and can be wallpapers or backgrounds on the device's display. Faces can be dedicated to specific information channels (e.g., calendar feeds, health or activity feeds, notifications, weather feeds, or news). As an example, a notification or alert of severe weather conditions (received, for example, through a weather feed) may cause the weather face to be displayed on the display together with the notification. Faces can display the time (e.g., in digital or analog format) regardless of the type of face. Faces can be customized by the user. The user's customizations or preferences can be entered explicitly by the user (e.g., into management software on the device or a paired device) or learned directly by the device (e.g., using usage and sensor data to create a model over time). Figure 123 illustrates exemplary faces, including an analog clock 12310, an analog clock with a circular menu layout 12320, a health-mode face 12330, and a weather face 12340. Figure 124 illustrates an exemplary set of faces 12410-12440 for the device on which appointment and calendar information is displayed.
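As an illustration only, a face-selection routine of the kind described above might combine contextual signals as in the following sketch; the rule ordering, thresholds, and face names are assumptions rather than anything specified by this disclosure.

```python
# Choose a face (default screen) from contextual signals such as sensor data,
# location, and the calendar. All signal names and cutoffs are illustrative.
from datetime import datetime, timedelta

def choose_face(context):
    """Return the name of the face to display given a dict of contextual signals."""
    now = context.get("now", datetime.now())
    next_event = context.get("next_calendar_event")          # datetime of next appointment
    if context.get("severe_weather_alert"):
        return "weather"                                      # alert surfaces the weather face
    if context.get("heart_rate", 0) > 120 and context.get("moving_vigorously"):
        return "fitness"                                      # pulse, calories, elapsed time
    if next_event is not None and next_event - now < timedelta(hours=1):
        return "calendar"                                     # upcoming event face
    if context.get("near_home") and context.get("home_automation_paired"):
        return "home_automation"
    return context.get("user_default", "analog_clock")        # user-customized fallback

print(choose_face({
    "heart_rate": 140,
    "moving_vigorously": True,
    "next_calendar_event": datetime.now() + timedelta(hours=3),
}))   # -> "fitness"
```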
[000201] In specific embodiments, the device may be worn on a user's limb (without obscuring the user's face and without requiring the user to hold the device) and may include augmented reality (AR) functionality. This AR functionality can be based on using body movement to aim a camera of the device, which can allow for more accurate aiming due to the user's sense of proprioception. This type of system can allow the device user to see an object in the real world at the same time that the user views a version of the object (e.g., captured by a device camera) on the display. An example of this AR capability is illustrated in Figure 16. Such an AR system can allow a "see-through" capability using an aligned camera and sensor on opposite sides of a user's limb. Several AR applications can be enabled by this type of arrangement, as described herein. In specific embodiments, applications can be designed specifically for the device to allow immediate, opportunistic use. Additionally, a delegation model can be provided on the device, allowing the use of external resources to improve the breadth of applications available to run on the device while incurring fewer (or no) penalties in terms of processing requirements or power usage. In specific embodiments, the device may control, or be controlled by, other devices (e.g., nearby devices discovered via a network and communicatively paired with the device). This type of control can be obtained through proximity interfaces, gestures, or traditional interfaces. Pairing can be achieved using a variety of technologies, including a camera of the device, discussed herein in further detail.
[000202] Figure 125 illustrates an example of an automatic camera activation decision flow for the device. In specific embodiments, whether the camera is enabled and whether automatic camera activation (e.g., for object recognition) is enabled may depend on the application or mode in which the device currently is. In specific embodiments, automatic camera activation can be enabled on the device 12510. If this feature is enabled (determined in step 12520) and there is sufficient CPU capacity and power available on the device (for example, to calculate features of interest from an image, determined in step 12530), then a camera of the device (e.g., an outward-facing camera) can automatically capture, process, or display 12560 one or more images if the camera is held stably by the user in a pointing position for a predetermined period of time (e.g., as detected by an inertial measurement unit on the wearable device or as calculated from the blurring of the image, determined in step 12540). In other embodiments, the camera can be activated and searching for images at all times. In still other embodiments, the camera can capture an image and perform feature recognition only if the user manually triggers image capture (e.g., by pressing or clicking the outer ring, or tapping the display, determined in step 12550). In specific embodiments, when the camera is activated (by any suitable method), the augmented reality (AR) functionality can be enabled. The AR functionality can be enabled automatically (depending, for example, on the CPU capacity and power available on the device). In other embodiments, the AR functionality can be explicitly enabled by the user through any suitable user input. The user can, for example, provide touch input on the display to enable the AR functionality. As an example, a user can capture an object such as a bird (e.g., by pointing a camera of the device at the bird), and the user can touch the image of the bird as it is displayed on the display. This action can enable the device's AR functions, causing, for example, the device to recognize the bird as an object and return information about the bird to the user. In other embodiments, as described herein, the user can perform one or more gestures to enable the AR functionality, as well as to perform tasks using the AR functionality (for example, using a "virtual" keyboard by performing typing gestures in front of a camera of the device).
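The decision flow of Figure 125 can be summarized, purely as an illustration, by the following sketch; the threshold values and helper names are assumptions.

```python
# Decide whether to automatically capture and process an image, falling back to a
# manual trigger (press/click of the ring or a tap) otherwise.
STABLE_HOLD_SECONDS = 1.0        # assumed predetermined hold time
MIN_BATTERY_FRACTION = 0.2       # assumed minimum power reserve
MAX_CPU_LOAD = 0.8               # assumed maximum CPU load for local feature extraction

def should_capture(auto_activation_enabled, cpu_load, battery_fraction,
                   stable_hold_seconds, manual_trigger):
    """Return True when the device should capture and process an image."""
    if manual_trigger:                                  # explicit user trigger always wins
        return True
    if not auto_activation_enabled:
        return False
    if cpu_load > MAX_CPU_LOAD or battery_fraction < MIN_BATTERY_FRACTION:
        return False                                    # not enough CPU or power available
    # IMU- or blur-based stability check: the camera must be held in a pointing
    # position for the predetermined period before auto-capture fires.
    return stable_hold_seconds >= STABLE_HOLD_SECONDS

print(should_capture(True, cpu_load=0.3, battery_fraction=0.6,
                     stable_hold_seconds=1.4, manual_trigger=False))   # -> True
```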
[000203] In specific embodiments, if the device does not have the capability to calculate features of interest itself, the device can capture an image and transfer the image to a communicatively coupled device (e.g., a nearby device such as a phone or personal computer) or to an Internet-based service, where the features of interest can be calculated remotely. Once the features of interest are determined, an Internet-based service or local data catalog can be consulted for additional information about a recognized object. If information is found, the relevant data can be displayed to the user on the device together with the recognized feature.
[000204] The device may, in specific embodiments, have a small form factor and may be limited in terms of available memory, processing, and power. A delegation model may allow the device to delegate portions of one or more processing tasks (e.g., tasks related to the AR functionality) to nearby devices (e.g., a phone or personal computer) or to network- or Internet-based services, for example. As an example, for tasks that can be delegated, the application requesting the task provides the system (e.g., a kernel of the device's operating system) with the characteristics or a profile of the task, including the task's latency sensitivity, processing requirements, and network payload size. This can be done for each delegable subtask of the overall delegable task. Since tasks are often pipelined, contiguous portions of the task pipeline can be delegated. The system may, in specific embodiments, take measurements of, or build a model of, one or more characteristics of the device. Characteristics of the device may include static properties of the device, for example, properties of the device's hardware components including total installed memory, maximum CPU speed, maximum battery power, or maximum bandwidth of a network interface. Characteristics of the device may also include dynamic properties of the device, for example, operating properties of the device including available memory, current CPU capacity, available power, current network connectivity, availability of network-based services, a record of average user behavior among one or more users, or a predicted or expected processing time of a task (for example, given a specific usage scenario). In specific embodiments, the device may have a model that incorporates past and current measurements of device characteristics to assist in determining future device behavior. Based on the task characteristics or profile and these measurements or models, as well as on whether the task can be performed on the device, the system may delegate (or not delegate) one or more portions of the task or task pipeline. For example, if the available memory on the device cannot support processing a task (e.g., playing a video), one or more portions of the task may be delegated. As another example, if the device's CPU capacity cannot support processing a task (for example, if the CPU is running at capacity due to its existing load), one or more portions of the task may be delegated. As another example, if the device's battery level is low and the battery is not expected to supply power to the device for as long as the task's expected processing time, one or more portions of the task may be delegated. As another example, if the device's network connectivity is low or nonexistent, one or more portions of the task may not be delegated (for example, if the device also has sufficient available memory, CPU capacity, and power). As another example, if one or more network-based services are available to the device (e.g., cloud-based services for processing) and the device has adequate network connectivity (e.g., adequate available bandwidth), one or more portions of the task may be delegated. As another example, if a user of the device typically (e.g., historically) delegates the playback of videos, one or more portions of a video playback task may be delegated. As another example, if the predicted processing time of a task (e.g., predicted based on a model incorporating past and current measurements of device characteristics) is beyond a certain threshold (e.g., several minutes), the task may be delegated.
Any suitable device characteristics (e.g., static or dynamic properties) in any suitable combination can be used to determine whether to delegate a task. In addition, any suitable characteristics of a task of the device (e.g., including a task profile or task characteristics such as latency sensitivity, processing requirements, or network payload size) can be used to determine whether to delegate a task, either individually or in conjunction with device characteristics. Additionally, any model of the device (e.g., of device behavior) can be used, alone or in conjunction with device or task characteristics, to determine whether to delegate a task. In specific embodiments, devices paired with the device may also include a delegation model, such that the paired device (e.g., a phone) performs the same tasks, delegating tasks based on its own models of power, connectivity, runtime requirements, and practicality. The delegated task can be processed or run to completion on the paired device (e.g., a phone), and the results of processing the delegated task can be returned to the device. In specific embodiments, the device can operate in a standalone mode (e.g., without delegating any processing tasks) when it has no connectivity or when no paired devices are within range of the device. When the device regains connectivity, or when a device is paired with the device, the delegation of tasks can resume.
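As an illustration of how a task profile might be weighed against measured device characteristics, the following sketch applies assumed thresholds to the factors discussed above (latency sensitivity, processing requirement, payload size, memory, CPU headroom, power, and connectivity); the field names and cutoff values are assumptions.

```python
# Delegation decision sketch: offload when the device is constrained or the task is heavy.
from dataclasses import dataclass

@dataclass
class TaskProfile:
    latency_sensitive: bool      # e.g., UI-facing work that cannot tolerate round trips
    cpu_seconds: float           # estimated local processing requirement
    payload_mb: float            # data that would have to cross the network if delegated

@dataclass
class DeviceState:
    free_memory_mb: float
    cpu_headroom: float          # 0.0 (saturated) .. 1.0 (idle)
    battery_fraction: float
    bandwidth_mbps: float        # 0.0 means no connectivity
    expected_runtime_min: float  # predicted remaining battery life

def should_delegate(task: TaskProfile, device: DeviceState) -> bool:
    if device.bandwidth_mbps <= 0.0:
        return False                               # no connectivity: must run locally
    if task.latency_sensitive and task.payload_mb / device.bandwidth_mbps > 0.5:
        return False                               # transfer time would dominate
    low_power = device.battery_fraction < 0.25 or device.expected_runtime_min < 10
    low_compute = device.cpu_headroom < 0.2 or device.free_memory_mb < 32
    heavy_task = task.cpu_seconds > 5.0
    return low_power or low_compute or heavy_task

task = TaskProfile(latency_sensitive=False, cpu_seconds=12.0, payload_mb=2.0)
device = DeviceState(free_memory_mb=64, cpu_headroom=0.5,
                     battery_fraction=0.4, bandwidth_mbps=8.0, expected_runtime_min=90)
print(should_delegate(task, device))   # -> True (the task itself is heavy)
```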
[000205] An exemplary algorithm of a delegation model of the device is illustrated in Figure 126. In this example, a delegable task process starts on the device (12610). The device's system performs a power usage analysis and prediction (12620) (based, for example, on the user's historical power usage 12630 and the expected time until a charge of the device 12640). Based on this, the system determines in step 12650 whether there is sufficient charge remaining for the required running time of the delegable task. If sufficient charge remains, the device's system can increase the power usage 12660 and process the delegable task on the device itself 12670. If, however, the device does not have sufficient charge for the required running time, the device can query a paired device (for example, a phone) 12680 to determine the power status of the paired device (12690). If, in the example of a phone, there is sufficient charge remaining on the phone for the required running time, the task can be processed on the phone 12694. If, however, there is not sufficient charge on the phone, the system can determine in step 12692 whether the device has connectivity to an Internet-based (e.g., cloud) service or another network-based service. If not, the device can delegate the process to the phone 12694. If there is connectivity, the device can delegate the process to the cloud 12696, where the task is processed and the results are later returned to the device. In specific embodiments, delegable tasks can be delegated by the device in a split fashion to one or more paired devices (e.g., mobile phones or personal computers) or to network/Internet services. That is, delegable subtasks of a delegable task or process can be delegated by the device to different locations.
[000206] This disclosure contemplates that a delegation model for a specific device (or for a family or range of devices) can be dynamic or contextual. As an example, a delegation model may consider the available memory, CPU capacity, and available power of a specific device (or a family of devices), factors that can all change over time. The delegation model can also consider the availability of network-based or cloud-based services (and the capacity of each), as well as network connectivity (e.g., bandwidth and latency), which can also change over time. For example, with reference to Figure 127, according to a first delegation model 12710 (which may, for example, be applicable to devices manufactured in the next year), most of the processing can be divided equally between the device and a paired device (e.g., a smartphone), with only a small amount of delegation to a server of a cloud-based service. According to a second delegation model 12720 (which may, for example, be applicable to devices manufactured in a three-year timeframe), most of the processing can be handled locally by the device (for example, due to anticipated advances in memory, CPU, and power capacity in a small form factor). In this second model, some processing can be delegated to a server (for example, more than in the first delegation model, due to improved network connectivity), and only a small amount of delegation may occur to the locally paired device. According to a third delegation model 12730 (which may, for example, be applicable to devices manufactured in a five-year timeframe), all or almost all processing tasks can be divided equally between the device and a server of a cloud-based service, with little or no processing being delegated to a locally paired device. Any number of delegation models can be created, since the factors considered by a delegation model are dynamic. As an example, all or almost all tasks can be performed locally on the device according to one delegation model, and all or almost all tasks can be delegated by the device in another delegation model.
[000207] The device may choose to delegate functionality to a paired processing-rich device (e.g., a phone, computer, tablet, television, set-top box, refrigerator, washer, or dryer) or to the Internet based on the power reserves or connectivity bandwidth of each of these locations. For example, a device with a powerful processor may delegate to the paired device when it is low on power, or it may choose to delegate to the Internet service when the paired device does not have sufficient power reserves. Similarly, the device's system may choose to process locally if the Internet connection is showing higher latency, in order to reduce the size of the data transfer.
[000208] In specific embodiments, an entire application or a portion of an application can be delegated by a user of the device to a paired device, or vice versa. This can occur on a per-application basis. When an application on a target device (e.g., a television) is to be delegated to the device, the target device may send a request over the paired connection (possibly via an intermediary device such as a smartphone or personal computer) to load the application on the device. The device can then act as a client for a server running on the paired device (e.g., the television). Similarly, an application running on the device can be delegated to the paired device (for example, a video playing on the device can be delegated to play on a paired television). For example, if the device is running a first application, and a user of the device wants to interact with a second application, the device can automatically delegate a task of the first application to be processed by another device (e.g., a paired television).
[000209] Figure 128 illustrates an example of a decision flow in the device operating according to a delegation model. In this example, an image capture application is running on the device. A scene is captured on the device 12810, and the device determines 12820 whether it has sufficient CPU capacity for image feature calculations. If the device has enough CPU capacity, it calculates the features of interest in the scene locally 12830. If the device does not have enough CPU capacity, it can first determine 12840 whether it is communicatively paired with another device with more processing capacity (for example, a mobile phone or a personal computer). If it is paired with such a device, the device can send the data to the paired device so that the paired device can calculate the features of interest in the image 12850. If the device is not paired with such a device, it can determine whether it is connected to an Internet-based (e.g., cloud) service 12860. If not, the device takes no further action. If it is, the device can send the data to the cloud service so that the service can calculate the features of interest in the scene 12870. The features of interest can be calculated (wherever they are calculated) using any suitable algorithm including, for example, SURF. In this example, the features of interest can be compared against a local catalog or an Internet-based service to determine whether any matches are found (and, if so, relevant information of interest) 12880. If a match is found 12890, the result can be presented to the user on the device 12895. If no match is found, no further action is taken.
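Purely as an illustration, the decision flow of Figure 128 might be expressed as in the following sketch; the helper functions stand in for whatever feature-extraction (e.g., SURF) and catalog-matching machinery is actually used, and their names are assumptions.

```python
# Compute image features locally when CPU capacity allows, otherwise fall back to a
# paired device, then to a cloud service, and otherwise take no action.
def process_scene(image, has_cpu_capacity, paired_device=None, cloud=None):
    if has_cpu_capacity:
        features = extract_features_locally(image)        # on-device SURF-style step
    elif paired_device is not None:
        features = paired_device.extract_features(image)  # delegate to phone/PC
    elif cloud is not None:
        features = cloud.extract_features(image)          # delegate to an Internet service
    else:
        return None                                       # no capacity and no delegate: stop
    return lookup_in_catalog(features)                    # None when nothing matches

def extract_features_locally(image):
    # Placeholder: a real implementation would run e.g. SURF over the image.
    return {"keypoints": len(image)}

def lookup_in_catalog(features):
    # Placeholder: compare feature vectors against a local or Internet-based catalog.
    return {"name": "example object", "features": features}

print(process_scene(image=b"\x00" * 1024, has_cpu_capacity=True))
```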
[000210] In specific embodiments, a camera or other optical sensor of the device can be used to recognize any gestures performed by the user (for example, in the space between the camera and a target in the real world). These gestures can, for example, be used to act upon the data presented (e.g., the real-world target, such as a sign including text) or can be used to point to specific items upon which augmented reality functions can be performed. For example, the user can point to a word on a sign, causing the device to translate it and display the translation to the user. Figure 17 illustrates two examples of images captured by a camera of the device. In one example, a truck 1725 and a device user's hand 1720 are both within the angle of view of a camera 1705 of the device and are displayed by the device (shown at 1710). As such, gestures performed by the user on the truck can be recognized by the device and processed by the device to provide, for example, AR functionality. In the second example, only the truck is within the camera's angle of view (shown at 1715), and as such, gestures performed by the user are not captured or recognized by the device. Gesture recognition can also be delegated by the device.
[000211] In specific embodiments, objects or images can be recognized by the device when they are within the view of a camera of the device. As described herein, there can be multiple ways for the device to recognize an object. As an example, a gesture performed by the user (e.g., a pointing gesture indicating a specific object) can enable the AR functionality on the device and cause the device to recognize the object. As another example, automatic object recognition can occur when, for example, the user positions the camera for a certain period of time on a specific object (e.g., a section of text). As a third example, object recognition or AR functionality can be explicitly enabled by the user when, for example, the user taps the display (or, for example, clicks the outer ring) while the device's camera has captured an object of interest. Global object recognition can, in some cases, be computationally intensive and error-prone. As such, in specific embodiments, a limiting set (for example, the pages of a magazine or catalog, or a catalog of a specific type of object such as plant leaves or book covers) may be applied to improve accuracy. There are a number of options for calculating feature vectors from images, which the designer of the system for the device can select from. In some cases, converting feature vectors between different approaches can be computationally expensive, so the choice of the database of possible matches is replicated on the device. The calculation of feature vectors can be delegated, as described herein.
[000212] In specific embodiments, barcodes of various types can be recognized by the device. These barcodes can be used to query Internet-based services for additional data, as well as options to purchase, review, or bookmark the barcoded item for future review. While two-dimensional barcodes can usually be read directly, the device's system may offer an additional close-focus mode for particularly small or one-dimensional barcodes to improve the recognition rate. If the system lacks the ability to decode the barcode, it can simply focus the camera, take a picture, and delegate the recognition to a remote service, as described herein. Figures 129A-129D illustrate an example of the barcode recognition mode. The device can be pointed at an item (129A), recognize the item (129B), display additional information obtained from the Internet about the item (129C), and provide the user with an interface to purchase the item (129D).
[000213] In specific embodiments, the device can perform translation. The translation functionality can be divided into two parts: optical character recognition (OCR), and translation of the recognized characters, words, or phrases. OCR can be completed on the device or delegated (e.g., to a paired processing device) to reduce the amount of data to be translated by the device. Simple word translations can be performed on the device or delegated (e.g., to a paired processing device). As with other functionality described herein, part or all of the recognition or translation process can be delegated as needed. The user can optionally use a gesture to indicate the word to be translated, as shown in Figure 130 (e.g., the word "Warning"). Since individual words can be circumscribed by white space, the system can segment the word before attempting the translation. Additionally, if the device can perform OCR with low latency, it can show the text to the user so that the user knows when the device is targeting and correctly recognizing the text. If automatic OCR is enabled, then the device can automatically identify images within the angle of view of an outward-facing camera and present information about the identified images on the device's display. If automatic translation is enabled, then the device can automatically translate text within the angle of view of the outward-facing camera and present the translated text on the device's display.
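As an illustration only, the two-part split described above (OCR followed by translation, with either part optionally delegated) might be organized as in the following sketch; the engine and translator objects are stand-ins, and all names are assumptions.

```python
# OCR a user-indicated region, segment on white space, then translate word by word.
def translate_indicated_word(image_region, ocr_engine, translator,
                             delegate_ocr=False, delegate_translation=False):
    """OCR an indicated region (e.g., a word circumscribed by white space), then translate it."""
    # Stage 1: optical character recognition, locally or on a paired/remote processor.
    text = (ocr_engine.recognize_remote(image_region) if delegate_ocr
            else ocr_engine.recognize_local(image_region))
    if not text:
        return None                       # nothing recognized; show nothing to the user
    # Stage 2: simple word translation, locally or delegated.
    words = text.split()
    if delegate_translation:
        return [translator.translate_remote(w) for w in words]
    return [translator.translate_local(w) for w in words]

class _FixedOcr:
    """Toy stand-in used only to exercise the flow; a real OCR engine may be delegated."""
    def recognize_local(self, region):  return "Warnung"
    def recognize_remote(self, region): return "Warnung"

class _EchoTranslator:
    def translate_local(self, word):  return f"<{word}>"
    def translate_remote(self, word): return f"<{word}>"

print(translate_indicated_word(b"image-bytes", _FixedOcr(), _EchoTranslator()))
```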
[000214] Figures 131A-131D illustrate examples of the device operating in various of the augmented reality modes described herein, including barcode recognition mode (131A), image recognition mode (131B), OCR and translation mode (131C), and object recognition mode (131D).
[000215] Figure 132 illustrates an example of the overall flow of actions for an augmented reality system for the device. Although this example illustrates an image capture application, any suitable task or process on the device can follow a similar flow. Additionally, any task after the device captures an image and before the device displays results to the user can (as appropriate) be delegated by the device. In this example, an image from a camera of the device is captured (in the image capture section 13210), pre-processed (in section 13220), features are extracted and recognized to produce image recognition results (in section 13230), and any objects can be recognized (in section 13240). Object data can be formatted for action by a user of the device. The user can activate an augmented reality mode of the device 13211 (for example, through a user gesture or by pointing the device's camera at an object for a predetermined period of time), and an image within the camera's field of view 13212 can be captured (for example, based on a trigger event such as user input or automatic camera activation) by the device camera 13213 to produce a camera image 13214. At this point, the pre-processing stage 13220 can be entered. Pre-processing 13220 may, for example, include contrast enhancement, grayscale conversion, sharpening, or downsampling. In specific embodiments, the camera can operate in a general augmented reality mode in which anything in front of the camera can be processed and recognized. In other embodiments, the camera may operate in specific modes (e.g., OCR, barcode, or visual marker) and only recognize specific items when in that mode. In specific embodiments, if it is determined that the image may include known shapes, symbols, or organizations of shapes or symbols (for example, if the camera or device is in OCR mode, barcode mode, or visual marker mode), AR image processing can proceed along a first path. This first path starts with preliminary processing 13221, proceeds to segmentation 13231 (which may, for example, determine boundaries of symbols or groups of symbols, such as letters or words), and continues with one or more of optical character recognition 13234 (for example, if it is determined that the image contains characters, determining what those characters are), barcode recognition 13235 (for example, if it is determined that the image contains a barcode, recognizing the barcode), or visual marker recognition 13236 (for example, recognizing all other types of visual markers). The results of this first path are sent to the object recognizer 13242. In specific embodiments, if it is determined that the image may include features that are not necessarily known, AR image processing may proceed along a second path. The second path starts with feature extraction 13222 (e.g., in which the presence of edges or lines, changes in the angles of lines, edges, points of interest, or patterns is detected in the captured image). The second path proceeds to image recognition 13232, in which the image features are compared with feature data from a recognition database 13233 (which may, for example, reside on the device, on a locally paired device, or on a remote server or computer). The results of the image recognition comparison are provided 13237 and sent to the object recognizer 13242. In the object recognition section 13240, the first and second paths converge at the object recognizer 13242.
Here, the results from a database of object data 13241 are used to recognize objects (for example, that a phone recognized using the image recognition database 13233 is of a specific make and model of phone). Object data 13243 about the object recognized by the recognizer 13242 (e.g., the price of the recognized phone model, or where the phone may be available for purchase) can be provided. For text, there may be definitions or translations that take place and are displayed to the user. For barcodes, there may be product information and links to purchase the recognized object that are displayed to the user. In specific embodiments, the data can be purely descriptive (e.g., the price of the phone) or it can be active (e.g., a link where the user can buy the phone). If the data includes action data 13244, then an action controller 13250 (which controls, formats, and outputs a GUI for the device user) can show a UI to the user 13255 including the active data (e.g., the link to buy the phone). If the user selects an action 13260 (for example, clicking the link), then the action controller shows the action UI to the user 13265 (for example, opening the link), and if the action is confirmed 13270, then the action (for example, the actual opening of the webpage associated with the link) is performed 13275.
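Purely as an illustration, the two-path dispatch of Figure 132 might be organized as in the following sketch, in which a mode-specific path (OCR, barcode, or visual marker) and a general feature-extraction path both feed an object recognizer that returns descriptive or active data; every function and class here is a placeholder whose name is an assumption.

```python
# Placeholder stages of the AR pipeline; real implementations would replace each one.
def preprocess(image): return image                        # contrast, grayscale, sharpen, downsample
def segment(image): return [image]                         # symbol/word boundary detection
def recognize_characters(segments): return "recognized text"
def recognize_barcode(segments): return "0123456789012"
def recognize_visual_marker(segments): return "marker-id"
def extract_features(image): return {"edges": 42}          # edges, lines, points of interest

class RecognitionDB:
    def match(self, features): return "phone model X"      # image-recognition comparison

class ObjectDB:
    def lookup(self, key):                                  # object data, possibly with an action
        return {"name": key, "action": "open purchase link"} if key else None

def ar_pipeline(image, mode, recognition_db, object_db):
    image = preprocess(image)
    if mode in ("ocr", "barcode", "visual_marker"):         # first path: known shapes/symbols
        segments = segment(image)
        result = {"ocr": recognize_characters,
                  "barcode": recognize_barcode,
                  "visual_marker": recognize_visual_marker}[mode](segments)
    else:                                                   # second path: general features
        result = recognition_db.match(extract_features(image))
    obj = object_db.lookup(result)                          # object recognizer
    show_action_ui = bool(obj and obj.get("action"))        # active vs. purely descriptive data
    return {"object": obj, "show_action_ui": show_action_ui}

print(ar_pipeline(b"raw-pixels", "barcode", RecognitionDB(), ObjectDB()))
```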
[000216] Figure 133 illustrates an example of a network environment. As described herein, in specific embodiments, the device 13310 may be paired with other devices (e.g., nearby devices). The device can connect directly to a personal area network 13320 (which can connect through other devices on the same local area network), or the device can connect directly to a local area network 13330. The personal area network may include, for example, non-WI-FI radio technology such as BLUETOOTH, NFC, or ZIGBEE. The personal area network may, for example, include a smart media gateway 13322 (e.g., a media server), a smart TV 13324, another processing provider 13326, or a phone 13328. The phone 13328 may allow the device to connect to a cellular network 13340, and from there to the Internet 13350. The local area network 13330 can include, for example, WI-FI with or without authentication. The local area network may, for example, include a wireless LAN router 13332, smart media devices 13334, smart appliances 13336, and home automation technology 13338. The local area network can, in turn, connect to the global Internet 13350 via, for example, the local router 13332 that connects to an Internet service (for example, a proprietary cloud service 13352 or other network service partners 13354). Some devices can be reached by the device either through direct access (for example, through the personal area network) or through the local area network. The devices reachable by the device can be paired with the device and can be controlled by the device or control the device. The device can connect to the personal area network or the local area network using, for example, any suitable RF technology. As shown in Figure 133, pairing with a target device on the periphery can occur primarily through the RF network. This lets the device know what is "nearby". This can happen over the personal area network (e.g., an ad-hoc or peer-to-peer network), or a mediated network such as 802.11 wireless networking (e.g., the local area network) can be used. Once a neighborhood is established, the device can ask nearby devices to enter pairing mode. This can be done directly or through a paired processing device with a wider range of connectivity options, such as a mobile phone. When the target devices have entered pairing mode, they can display their pairing signals. For example, devices with displays can show a visual identifier on their display, while others can enable an NFC tag allowing a scanner to identify them. Other approaches, such as selection from a list or by pin code, can also be used. Once a device is uniquely identified as a pairing target, the device can exchange a security token with the target device to complete the pairing.
[000217] Figure 134 illustrates an example of the different types of pairing technology that can be used to pair a target device with the device. The target device, which may be a smart device such as a phone, may include passive NFC tags 13402 or active NFC transmitters 13404 (which can be recognized by an NFC tag reader 13420 and an NFC decoder 13428 of the device); an NFC tag decoder 13406 (which can recognize NFC tags written by the device's NFC tag writer 13422), passive visual identifiers 13408 (e.g., labels), barcodes 13410, or other display information 13412 (which can be recognized by a camera 13424 of the device); or another pairing system 13416. An active tag generator 13414 of the target device may create the display information 13412 or provide information to the target device's other pairing system 13416 (which is recognized by a mirrored pairing system 13426 with a pairing code decoder 13438 of the device). The device may write data to NFC tags (e.g., with an NFC tag writer 13422) to transmit this data to other target devices that can then be paired with the device. Tags written by the device can be recognized by the NFC tag decoders 13406 of a target device. The device may include any of a number of decoders, including a barcode decoder 13430, a visual identifier decoder 13432, an image recognizer 13434, or another image-based decoder 13436 (e.g., a decoder for QR codes, logos, or blinking patterns of LEDs), all taking input from the device's camera 13424. After the device receives and recognizes the pairing information, it can decode (e.g., through the variety of decoders) the relevant information to proceed with pairing with the target device. In specific embodiments, pairing may be achieved using motion: a motion-sensitive target device (e.g., a mobile phone or remote control) can be paired with the device by holding and moving the target device in the same hand as the device (for example, if both devices include accelerometers, the similar motion pattern can be detected and used to pair the devices). As another example, a stationary target device can be paired with the device by, for example, tapping the stationary target device in a random pattern while holding it in the same hand as the device (for example, if both devices include touch detection, the similar tapping pattern can be detected and used to pair the devices). Additionally, pairing can be done using audio; if both the device and the target device have audio receiving capabilities, a user can make a sound (e.g., say a phrase) that both devices detect and then establish the pairing. Any suitable technology of the device (including, for example, augmented reality functions) can be used to pair with and control local devices. The device and the target device can each also connect to possible intermediary network devices 13440 and to a local area network 13450.
[000218] Figure 135 illustrates an exemplary process for pairing a target device (e.g., using any of the methods described herein) with the device. Once pairing mode is enabled 13510, the device determines whether the RF network contains pairable target devices 13512. If not, no further action is taken (e.g., the device may continue to scan periodically). If so, the device can request that the pairable devices enter pairing mode 13514. The device can then proceed (in any order, or in a parallel fashion) to scan, through the different available technologies, for available target devices. These may include scans for NFC tags 13516, scans for visual identifiers within the camera's angle of view 13518, scans for barcodes within the camera's angle of view 13520, or any other method 13522. If a target device is detected via one of these methods, the target device is paired with the device 13524. Once pairing has occurred, the device can show menu items to the user for controlling the paired device(s). The device can allow both visual gesture control and motion-based gesture control of the paired devices. For example, the user can perform a gesture (e.g., waving his hand) to change channels on a paired television, or can perform a pinch gesture to transfer video media from the device to a paired display (using, e.g., AR functionality). Device control mediated over an RF network can be both local and secured. Figure 136 illustrates exemplary controls enabled on the device for a paired and controlled television, including an active ON/OFF icon 13610, favorite channels 13620, a current channel display 13630, and volume 13640. As described herein, any suitable input from the user can be used to control the functionality of a paired device. For example, gesture input, click or press input, or touch input can be used, for example, to change channels, adjust the volume, or control other features of the paired television.
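As an illustration only, the pairing flow of Figure 135 might be expressed as in the following sketch; the scanner callables and object methods are placeholders whose names are assumptions.

```python
# Ask pairable RF-network targets to enter pairing mode, then try each discovery
# technology (NFC tag, visual identifier, barcode, other) until one succeeds.
def pair_with_target(rf_network, scanners):
    """Return a paired target, or None if nothing pairable was found."""
    targets = rf_network.discover_pairable_devices()
    if not targets:
        return None                            # keep scanning periodically elsewhere
    for target in targets:
        target.request_pairing_mode()          # ask nearby devices to enter pairing mode
    for scan in scanners:                      # e.g., NFC tag, visual identifier, barcode
        found = scan()
        if found is not None:
            found.exchange_security_token()    # completes the pairing
            return found
    return None

class _ToyTarget:
    def request_pairing_mode(self): pass
    def exchange_security_token(self): self.paired = True

class _ToyRF:
    def discover_pairable_devices(self): return [_ToyTarget()]

tv = _ToyTarget()
scanners = [lambda: None, lambda: tv]          # first scan fails, the second (camera) finds the TV
print(pair_with_target(_ToyRF(), scanners) is tv)   # -> True
```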
[000219] In specific embodiments, a pairing and control model for the device may include the following characteristics. The device may act as the host for an application that interacts with or controls one or more functions of a remote device (e.g., an application accessory such as a controllable thermostat). A smartphone (or other locally paired device), which may previously have been the host for the application, can now function merely as a local target device to which the device can delegate certain functions related to the interaction with or control of the remote device (e.g., longer-range wireless connectivity to the remote device, sending commands to the remote device, receiving data from the remote device, or processing tasks). Control of the remote application accessory device may be performed by the device using any suitable means including, for example, visual means (e.g., using the camera) or motion-based gestures. In other embodiments, the locally paired smartphone may continue to function as the host for the application that interacts with the remote application accessory, but the device may provide some or all of the user interface for data input and output to and from the application (for example, a "lightweight" version of the application hosted by the smartphone). For example, the user can control the application using the device, but the smartphone can still function as the host of the application.
[000220] In specific embodiments, the device may be operable with one or more services. These services can fall into categories including security, energy, home automation and control, content sharing, healthcare, sports and entertainment, commerce, vehicles, and social applications.
[000221] Exemplary security applications include the following. The device can authenticate a user (who is wearing the unlocked device) to another device near the user (e.g., paired with the device). The device can be unlocked with a code entered by the user using any suitable input including, for example, rotating the outer ring of the device. As an example, while a user rotates (or presses or clicks) the outer ring, the display may show alphanumeric or symbolic data corresponding to the rotation (or press or click) by the user. If, for example, the user rotates the outer ring by one rotational increment in a clockwise direction (or, for example, clicks or presses the outer ring once), the display may show the user a "1", and if the user rotates the outer ring by two rotational increments (for example, within a certain period of time, such as a millisecond) in a clockwise direction (or, for example, clicks or presses the outer ring twice), the display may show the user a "2". In specific embodiments, the display of alphanumeric or symbolic data corresponding to a rotation (or press or click) by the user can allow the user to unlock the device using the metaphor of a combination lock. The device can also be unlocked using biometric data (e.g., through the user's skin or bone signatures).
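Purely as an illustration of the combination-lock metaphor, the following sketch groups bursts of rotational increments (or clicks) into digits of an unlock code; the grouping window and the example code are assumptions.

```python
# Bursts of ring increments close together in time form one digit of the unlock code.
class RingCombinationLock:
    def __init__(self, code, window_ms=800):
        self.code = list(code)        # e.g., [1, 2, 5]
        self.window_ms = window_ms    # increments closer together than this form one digit
        self.entered = []
        self._count = 0
        self._last_ms = None

    def on_increment(self, timestamp_ms):
        """Call once per rotational increment (or click/press) of the outer ring."""
        if self._last_ms is not None and timestamp_ms - self._last_ms > self.window_ms:
            self._commit()            # the previous burst is finished; record its digit
        self._count += 1
        self._last_ms = timestamp_ms

    def _commit(self):
        if self._count:
            self.entered.append(self._count)
            self._count = 0

    def is_unlocked(self):
        self._commit()
        return self.entered == self.code

lock = RingCombinationLock(code=[1, 2])
lock.on_increment(0)            # one increment  -> digit "1"
lock.on_increment(2000)         # a new burst...
lock.on_increment(2100)         # ...of two increments -> digit "2"
print(lock.is_unlocked())       # -> True
```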
[000222] In an exemplary energy application, the device can automatically display information about the energy consumption of the environment or other location in which the user is located. The device may also be able to display energy consumption information from other paired devices and dynamically update all this information as the user changes location.
[000223] In an exemplary home control application, the user can directly select and control the paired home control devices using, for example, outer ring rotation or a gesture input.
[000224] The user can use gestures to control the sharing or transfer of content to or from the device (e.g., transferring video playing on the device to a paired television, as described herein). Additionally, auxiliary information (e.g., movie subtitles) can be provided on the device for content shown on another, larger device (e.g., a television screen playing the movie).
[000225] The device can automatically determine a healthcare context (e.g., whether the user is exercising or sleeping). When it determines that context, the device can open applications corresponding to the healthcare context (e.g., for recording heart rate during exercise, movement during exercise, exercise duration, pulse oximetry during exercise, sleep patterns, sleep duration, or galvanic skin response). The device may, for example, measure the user's health-related data (e.g., heart rate, movement, or pulse oximetry) and send some or all of this data to a paired device or a server. Although illustrated in the healthcare context, determining a relevant context (e.g., based on the user's behavior), opening corresponding applications, recording data, or transmitting that data may be applicable in any suitable context.
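The context-to-application mapping described in this paragraph can be thought of as a dispatch table: detect a context from sensor readings, then open the applications registered for that context. The sketch below uses an invented detection heuristic and invented application names purely as an illustration.

    # Hypothetical sketch of mapping a detected healthcare context to the
    # applications (recorders) that should be opened for it.
    from typing import Dict, List

    CONTEXT_APPS: Dict[str, List[str]] = {
        "exercise": ["heart_rate_recorder", "motion_recorder", "pulse_oximetry"],
        "sleep": ["sleep_pattern_recorder", "sleep_duration_timer"],
    }


    def detect_context(heart_rate: int, motion_level: float) -> str:
        # Toy heuristic standing in for real sensor fusion.
        return "exercise" if heart_rate > 100 or motion_level > 0.7 else "sleep"


    def open_apps_for(context: str) -> List[str]:
        return CONTEXT_APPS.get(context, [])


    ctx = detect_context(heart_rate=128, motion_level=0.9)
    print(ctx, open_apps_for(ctx))  # exercise ['heart_rate_recorder', ...]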
[000226] The device can assist in sports-related applications such as, for example, automatically evaluating a user's golf swing and suggesting corrections.
[000227] In a commercial scenario, the device may automatically identify a product (e.g., using RFID, NFC, barcode recognition, or object recognition) when the user picks up the product, and can provide information about the product (e.g., nutrition information, source information, or reviews) or the option to purchase the product. Payment for the product can, for example, be carried out using visual barcode technology on the device. In specific embodiments, the device can be used to pay for a product using NFC, RFID, or any other suitable form of short-distance communication. During payment, the user's information can, for example, be authenticated by the device, which can detect the user's biometric information (e.g., bone structure or skin signature). The device may also automatically provide an indication to the user (e.g., a vibration) when the user is near a product on his or her shopping list (e.g., stored on the device) or on another list (e.g., a wish list of the user's friend).
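The shopping-list indication amounts to a lookup: when a nearby product identifier (obtained via RFID, NFC, barcode recognition, or object recognition) matches an item on a stored list, the device vibrates. The sketch below is illustrative only, with made-up product identifiers and list contents.

    # Hypothetical sketch: vibrate when a recognized nearby product is on a list.
    from typing import Set

    shopping_list: Set[str] = {"coffee-500g", "olive-oil-1l"}   # stored on the device
    friend_wish_list: Set[str] = {"travel-mug"}


    def on_product_recognized(product_id: str) -> str:
        """Called when RFID/NFC/barcode/object recognition identifies a product."""
        if product_id in shopping_list:
            return "vibrate: item is on your shopping list"
        if product_id in friend_wish_list:
            return "vibrate: item is on a friend's wish list"
        return "no indication"


    print(on_product_recognized("coffee-500g"))
    print(on_product_recognized("chocolate-bar"))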
[000228] The device can function as a key to unlock or start one or more vehicles. The user can, for example, enter a code using the outer ring to unlock or start the vehicle (e.g., using NFC technology), as described above. In specific embodiments, both the user's biometric information and a user-entered code may be required to unlock the car, allowing enhanced security for a car-based application. Additionally, the device may include profiles for one or more users, each profile containing vehicle settings (e.g., temperature or seat position). As another example, biometric information of a specific user can be used not only to unlock the vehicle, but also to determine which user profile to load while the car is being operated. The device's proximity to the vehicle may automatically cause the vehicle to implement the vehicle settings of that user's profile. The device may also be operable for GPS navigation (either directly on the device or when paired with and controlling a phone, for example).
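Selecting vehicle settings from biometric identity, as described above, amounts to keying a table of per-user profiles by whichever user the biometric match identifies and applying that profile when the device is near the vehicle. The profile fields and user names in the sketch below are invented for illustration.

    # Hypothetical sketch: pick vehicle settings for the biometrically
    # identified user and apply them when the device is near the vehicle.
    from dataclasses import dataclass


    @dataclass
    class VehicleProfile:
        seat_position: int      # arbitrary units
        temperature_c: float


    PROFILES = {
        "alice": VehicleProfile(seat_position=3, temperature_c=21.0),
        "bob": VehicleProfile(seat_position=7, temperature_c=19.5),
    }


    def apply_profile(identified_user: str, near_vehicle: bool) -> str:
        if not near_vehicle or identified_user not in PROFILES:
            return "no change"
        p = PROFILES[identified_user]
        return f"seat -> {p.seat_position}, temperature -> {p.temperature_c}C"


    print(apply_profile("alice", near_vehicle=True))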
[000229] The device can access and operate in conjunction with a service that provides support for mixed-reality games or massively multiplayer reality-based games. This functionality may, for example, include registration, management of user data (e.g., user profiles and game-related data such as completed levels or inventories of supplies), and management of achievement lists. The functionality of the device and the service may also include connectivity management (e.g., hub functionality) that handles fragile wireless communication channels and provides a unified API to third-party game servers.
[000230] The device may access and operate in conjunction with a service that allows a user of the device to post locations, check-ins, or other location-based data that allow various services to access a consistent reservoir of the most current information regarding the user's position and status. As an example, the device user can find friends using similar devices. The service and the device together can handle status updates, profile management, application access permissions, blacklisting, or user-to-user access permissions. The service can be a trusted, centralized touch point for private data. By combining access to a unified location service, battery life and energy can, in specific embodiments, be conserved. In specific embodiments, certain functionality tokens can be made available based on the user's position. An application can, for example, query the device to verify that such a token is available and act accordingly. On the server side, APIs can allow developers to see token usage or allow redemption. In specific embodiments, information may be distributed by the device to other users (e.g., to a single other user, or in a broadcast mode to multiple users).
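The position-based functionality tokens can be pictured as a geofence check: a token is available when the user's reported position falls within the region attached to it. The sketch below, with invented token names, coordinates, and radii, shows how an application might query for such a token.

    # Hypothetical sketch of position-based functionality tokens (geofence check).
    import math
    from typing import Dict, Optional, Tuple

    # token name -> (latitude, longitude, radius in metres); values are invented
    TOKENS: Dict[str, Tuple[float, float, float]] = {
        "stadium-checkin": (37.7786, -122.3893, 300.0),
    }


    def distance_m(a: Tuple[float, float], b: Tuple[float, float]) -> float:
        """Equirectangular approximation; adequate for short distances."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
        y = lat2 - lat1
        return math.hypot(x, y) * 6_371_000


    def available_token(position: Tuple[float, float]) -> Optional[str]:
        for name, (lat, lon, radius) in TOKENS.items():
            if distance_m(position, (lat, lon)) <= radius:
                return name
        return None


    print(available_token((37.7785, -122.3890)))  # inside -> "stadium-checkin"
    print(available_token((37.0000, -122.0000)))  # outside -> None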
[000231] The device can access and operate in conjunction with a service that provides a unified polling interface that allows devices to receive and send polls. The device and the service together can manage distribution lists, scoring criteria, and poll availability frames (both temporal and geographic, for example). This service can be exposed on the device and on a server such that third parties can use the APIs to write applications and receive results back through online APIs.
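The unified polling interface can be pictured as a small data structure carrying a question, its distribution list, and an availability window, together with a tally of responses. The sketch below is a made-up illustration and not an API of the described service.

    # Hypothetical sketch of a poll with a distribution list, an availability
    # window, and a simple response tally.
    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class Poll:
        question: str
        options: List[str]
        distribution_list: List[str]
        open_from_hour: int = 0          # simple temporal availability window
        open_until_hour: int = 24
        votes: Dict[str, int] = field(default_factory=dict)

        def is_open(self, hour: int) -> bool:
            return self.open_from_hour <= hour < self.open_until_hour

        def vote(self, user: str, option: str, hour: int) -> bool:
            if user not in self.distribution_list or not self.is_open(hour):
                return False
            self.votes[option] = self.votes.get(option, 0) + 1
            return True


    poll = Poll("Best meeting time?", ["9:00", "14:00"], ["alice", "bob"], 8, 18)
    poll.vote("alice", "14:00", hour=10)
    print(poll.votes)  # {'14:00': 1}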
[000232] In specific embodiments, the device may access and operate in conjunction with a service that provides optimizations for the presentation of text, images, or other information on a circular display of the device. As an example, a web site can be rendered or formatted for display on a computer monitor, but a service can customize the rendering and formatting for a smaller circular display by emphasizing images and truncating text. Custom rendering and formatting can, for example, be a task that can be delegated between the device and one or more servers or locally paired devices. This service may also include news or advertising services.
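The rendering optimization described above, emphasizing images and truncating text for a small circular display, can be sketched as a simple reformatting pass over page content. The character budget and content representation below are invented for illustration.

    # Hypothetical sketch: reformat page content for a small circular display by
    # keeping images prominent and truncating text to a character budget.
    from typing import Dict, List

    CHAR_BUDGET = 80  # invented budget for the small display


    def format_for_circular_display(content: List[Dict[str, str]]) -> List[Dict[str, str]]:
        formatted = []
        for block in content:
            if block["type"] == "image":
                formatted.append({"type": "image", "src": block["src"], "emphasis": "high"})
            elif block["type"] == "text":
                text = block["text"]
                if len(text) > CHAR_BUDGET:
                    text = text[: CHAR_BUDGET - 1].rstrip() + "…"
                formatted.append({"type": "text", "text": text})
        return formatted


    page = [
        {"type": "image", "src": "headline.jpg"},
        {"type": "text", "text": "A long article body that would not fit on a small round watch face without being shortened first."},
    ]
    for block in format_for_circular_display(page):
        print(block)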
[000233] Figure 137 illustrates an exemplary computer system 13700. In specific embodiments, one or more computer systems 13700 perform one or more steps of one or more methods described or illustrated herein. In specific embodiments, one or more computer systems 13700 provide the functionality described or illustrated herein. In specific embodiments, software running on one or more computer systems 13700 performs one or more steps of one or more methods described or illustrated herein or provides the functionality described or illustrated herein. Specific embodiments include one or more portions of one or more computer systems 13700. Herein, reference to a computer system may encompass a computing device, and vice versa, where appropriate. In addition, reference to a computer system may encompass one or more computer systems, where appropriate.
[000234] This disclosure considers any suitable number of computer systems 13700. This disclosure considers computer systems 13700 taking any suitable physical form. By way of example, and not by way of limitation, the computer system 13700 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or a system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a network of computer systems, a mobile phone, a personal digital assistant (PDA), a server, a tablet computer system, or a combination of two or more of these. Where appropriate, the computer system 13700 may include one or more computer systems 13700; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components on one or more networks. Where appropriate, one or more computer systems 13700 may perform, without substantial spatial or temporal limitation, one or more steps of one or more methods described or illustrated herein. By way of example and not by way of limitation, one or more computer systems 13700 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 13700 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
[000235] In specific embodiments, the computer system 13700 includes a processor 13702, memory 13704, storage medium 13706, an input/output (I/O) interface 13708, a communication interface 13710, and a bus 13712. Although this disclosure describes and illustrates a specific computer system having a specific number of specific components in a specific arrangement, this disclosure considers any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
[000236] In specific embodiments, the processor 13702 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, the processor 13702 may retrieve (or fetch) instructions from an internal register, an internal cache, memory 13704, or storage medium 13706; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 13704, or storage medium 13706. In specific embodiments, the processor 13702 may include one or more internal caches for data, instructions, or addresses. This disclosure considers the processor 13702 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, the processor 13702 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 13704 or storage medium 13706, and the instruction caches may speed up retrieval of those instructions by the processor 13702. Data in the data caches may be copies of data in memory 13704 or storage medium 13706 on which instructions executing on the processor 13702 operate; the results of previous instructions executed on the processor 13702, for access by subsequent instructions executing on the processor 13702 or for writing to memory 13704 or storage medium 13706; or other suitable data. The data caches may speed up read or write operations by the processor 13702. The TLBs may speed up virtual-address translation for the processor 13702. In specific embodiments, the processor 13702 may include one or more internal registers for data, instructions, or addresses. This disclosure considers the processor 13702 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, the processor 13702 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 13702. While this disclosure describes and illustrates a specific processor, this disclosure considers any suitable processor.
[000237] In specific embodiments, the memory 13704 includes main memory for storing instructions for the processor 13702 to execute or data for the processor 13702 to operate on. By way of example and not by way of limitation, the computer system 13700 may load instructions from the storage medium 13706 or another source (such as, for example, another computer system 13700) to the memory 13704. The processor 13702 can then load the instructions from the memory 13704 into an internal register or internal cache. To execute the instructions, the processor 13702 can retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, the processor 13702 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. The processor 13702 may then write one or more of these results to the memory 13704. In specific embodiments, the processor 13702 executes only instructions in one or more internal registers or internal caches or in the memory 13704 (as opposed to the storage medium 13706 or elsewhere) and operates only on data in one or more internal registers or internal caches or in the memory 13704 (as opposed to the storage medium 13706 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple the processor 13702 to the memory 13704. The bus 13712 may include one or more memory buses, as described below. In specific embodiments, one or more memory management units (MMUs) reside between the processor 13702 and the memory 13704 and facilitate accesses to the memory 13704 requested by the processor 13702. In specific embodiments, the memory 13704 includes random access memory (RAM). This RAM may be volatile memory, where appropriate, and this RAM may be dynamic RAM (DRAM) or static RAM (SRAM), where appropriate. Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure considers any suitable RAM. The memory 13704 may include one or more memories 13704, where appropriate. While this disclosure describes and illustrates specific memory, this disclosure considers any suitable memory.
[000238] In specific embodiments, the storage medium 13706 includes mass storage for data or instructions. By way of example and not by way of limitation, the storage medium 13706 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. The storage medium 13706 may include removable or non-removable (or fixed) media, where appropriate. The storage medium 13706 may be internal or external to the computer system 13700, where appropriate. In specific embodiments, the storage medium 13706 is non-volatile, solid-state memory. In specific embodiments, the storage medium 13706 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these. This disclosure considers mass storage 13706 taking any suitable physical form. The storage medium 13706 may include one or more storage control units facilitating communication between the processor 13702 and the storage medium 13706, where appropriate. Where appropriate, the storage medium 13706 may include one or more storage media 13706. While this disclosure describes and illustrates specific storage media, this disclosure considers any suitable storage medium.
[000239] In specific embodiments, the I/O interface 13708 includes hardware, software, or both, providing one or more interfaces for communication between the computer system 13700 and one or more I/O devices. The computer system 13700 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and the computer system 13700. By way of example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device, or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure considers any suitable I/O devices and any suitable I/O interface 13708 for them. Where appropriate, the I/O interface 13708 may include one or more device or software drivers enabling the processor 13702 to drive one or more of these I/O devices. The I/O interface 13708 may include one or more I/O interfaces 13708, where appropriate. While this disclosure describes and illustrates a specific I/O interface, this disclosure considers any suitable I/O interface.
[000240] In specific embodiments, the communication interface 13710 includes hardware, software, or both, providing one or more interfaces for communication (such as, for example, packet-based communication) between the computer system 13700 and one or more other computer systems 13700 or one or more networks. By way of example and not by way of limitation, the communication interface 13710 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network, or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure considers any suitable network and any suitable communication interface 13710 for it. As an example and not by way of limitation, the computer system 13700 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a body area network (BAN), one or more portions of the Internet, or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, the computer system 13700 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or another suitable wireless network, or a combination of two or more of these. The computer system 13700 may include any suitable communication interface 13710 for any of these networks, where appropriate. The communication interface 13710 may include one or more communication interfaces 13710, where appropriate. While this disclosure describes and illustrates a specific communication interface, this disclosure considers any suitable communication interface.
[000241] In specific embodiments, the bus 13712 includes hardware, software, or both coupling the components of the computer system 13700 to each other. As an example and not by way of limitation, the bus 13712 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus, or a combination of two or more of these. The bus 13712 may include one or more buses 13712, where appropriate. While this disclosure describes and illustrates a specific bus, this disclosure considers any suitable bus or interconnect.
[000242] Herein, a non-transient, computer-readable storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM drives, SECURE DIGITAL cards or drives, any other suitable non-transient, computer-readable storage media, or any suitable combination of two or more of these, where appropriate. A non-transient, computer-readable storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.
[000243] Herein, "or" is inclusive and not exclusive, unless expressly stated otherwise or indicated otherwise by context. Therefore, herein, "A or B" means "A, B, or both", unless expressly stated otherwise or indicated otherwise by context. Moreover, "and" is both joint and several, unless expressly stated otherwise or indicated otherwise by context. Therefore, herein, "A and B" means "A and B, jointly or severally", unless expressly stated otherwise or indicated otherwise by context.
[000244] The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the exemplary embodiments described or illustrated herein that a person of ordinary skill in the art would understand. The scope of this disclosure is not limited to the exemplary embodiments described or illustrated herein. Furthermore, while this disclosure describes and illustrates respective embodiments as including specific components, elements, aspects, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, aspects, functions, operations, or steps described or illustrated anywhere herein that a person of ordinary skill in the art would understand. Furthermore, reference in the appended claims to an apparatus or system, or to a component of an apparatus or system, being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a specific function encompasses that apparatus, system, or component, whether or not it or that specific function is activated, turned on, or unlocked, provided that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative.
[000245] While this disclosure describes specific structures, features, instructions, and functionality in the context of a wearable device, this disclosure considers that these structures, features, instructions, or functionality can be applied to, used for, or used in any other suitable electronic device (such as, for example, a smartphone, tablet, camera, or personal computing device), where appropriate.
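The claims that follow distinguish low-acceleration from high-acceleration rotation of the rotating element when transitioning between graphic elements: low acceleration advances from one element to the next, while high acceleration sub-increments elements and distributes several of them across the display at once. The sketch below illustrates only that branch of logic; the acceleration threshold, element names, and rendering strings are assumptions, not the claimed implementation.

    # Hypothetical sketch of the acceleration-dependent transition:
    # low acceleration -> element-by-element transition;
    # high acceleration -> scale down and lay out several elements at once.
    from typing import List

    HIGH_ACCEL_THRESHOLD = 5.0   # invented units; "high" is relative to "low"


    def transition(elements: List[str], current: int, rotation_accel: float) -> str:
        if rotation_accel < HIGH_ACCEL_THRESHOLD:
            # Low acceleration: move to the next element without scaling.
            nxt = min(current + 1, len(elements) - 1)
            return f"show {elements[nxt]} full-size"
        # High acceleration: show scaled-down versions of several elements
        # distributed across the display so the user can skim quickly.
        window = elements[current: current + 4]
        return "show scaled: " + ", ".join(window)


    screens = ["clock", "messages", "music", "fitness", "settings"]
    print(transition(screens, 0, rotation_accel=1.0))   # slow turn
    print(transition(screens, 0, rotation_accel=9.0))   # fast turn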
Claims (10)
[0001]
1. DEVICE, comprising a device body (105) comprising: one or more processors; a display (110); a rotating element (115) around the display; a detector configured to detect rotation of the rotating element (115); and a memory coupled to the processors comprising instructions executable by the processors, the processors being operable when executing the instructions to: present on the display (110) a first graphic element of a graphical user interface; receive user input comprising rotation of the rotating element (115) detected by the detector; and in response to the received user input, control the transition from the presentation of the first graphic element to the presentation of a second graphic element; characterized in that, to control the transition from the presentation of the first graphic element to the presentation of the second graphic element, the processors are operable when executing the instructions to: determine whether the rotation of the rotating element (115) has a high acceleration or a low acceleration, the low acceleration being low relative to the high acceleration; if the rotation of the rotating element (115) has a low acceleration, control the transition from the presentation of the first graphic element to the presentation of the second graphic element without incrementing said first and second graphic elements; and if the rotation of the rotating element (115) has a high acceleration, control to sub-increment the first graphic element and distribute over the display a plurality of sub-incremented second graphic elements.
[0002]
Apparatus as claimed in claim 1, further comprising a band (120) coupled to the device body (105) and an optical sensor in or on the band (120).
[0003]
An apparatus according to claim 1, characterized in that the user input comprises a low-acceleration rotation of the rotating element, the low-acceleration rotation comprising a rotational increment, and wherein the processors are operable when executing the instructions to display at least a portion of the first graphic element and at least a portion of the second graphic element in response to the rotational increment.
[0004]
An apparatus according to claim 1, characterized in that the user input comprises a high-acceleration rotation of the rotating element, the high-acceleration rotation comprising a rotational increment; wherein the processors are operable when executing the instructions to: display a scaled version of at least a portion of the first graphic element; display a scaled version of at least a portion of the second graphic element; and display a scaled version of at least a portion of a third graphic element.
[0005]
5. Apparatus according to claim 1, characterized in that it comprises a physical model, which takes into account the movements of a user and produces visual responses that reflect the user's movements on the display (110).
[0006]
6. A method, comprising: presenting a first graphic element of a graphical user interface on a display of a computing device (100), the computing device (100) further having a rotating element (115) around the display, and a detector configured to detect rotation of the rotating element (115); receiving user input comprising rotation of the rotating element (115) detected by the detector; and in response to the received user input, transitioning from the presentation of the first graphic element to the presentation of a second graphic element; characterized in that the transition from the presentation of the first graphic element to the presentation of the second graphic element further comprises: determining whether the rotation of the rotating element (115) has a high acceleration or a low acceleration, the low acceleration being low relative to the high acceleration; if the rotation of the rotating element (115) has a low acceleration, transitioning from the presentation of the first graphic element to the presentation of the second graphic element without incrementing said first and second graphic elements; and if the rotation of the rotating element (115) has a high acceleration, sub-incrementing the first graphic element and distributing over the display a plurality of sub-incremented second graphic elements.
[0007]
A method as claimed in claim 6, characterized in that the computing device (100) comprises a band (120) coupled to the device body (105); and an optical sensor in or on the band (120).
[0008]
A method as claimed in claim 6, characterized in that the computing device (100) includes a physical model that takes into account a user's movements and produces visual responses that reflect the user's movements on the display (110).
[0009]
9. One or more non-transient, computer-readable storage media embodying software that is operable when executed to: present a first graphic element of a graphical user interface on a display (110) of a computing device (100), the computing device (100) further comprising a rotating element (115) around the display, and a detector configured to detect rotation of the rotating element (115); receive user input comprising rotation of the rotating element (115) detected by the detector; and in response to the received user input, transition from the presentation of the first graphic element to the presentation of a second graphic element; the software being further operable when executed to: determine whether the rotation of the rotating element (115) has a high acceleration or a low acceleration, the low acceleration being low relative to the high acceleration; if the rotation of the rotating element (115) has a low acceleration, transition from the presentation of the first graphic element to the presentation of the second graphic element without incrementing said first and second graphic elements; and if the rotation of the rotating element (115) has a high acceleration, sub-increment the first graphic element and distribute over the display a plurality of sub-incremented second graphic elements.
[0010]
10. One or more non-transient, computer-readable storage media according to claim 9, characterized in that the software is further operable when executed to take into account a user's movements and produce visual responses reflecting the user's movements on the display.
引用文献:
公开号 | 申请日 | 公开日 | 申请人 | 专利标题

US2473226A|1946-08-22|1949-06-14|Sheldon Lillian|Wrist band|
US3062369A|1960-08-30|1962-11-06|Esber E Moubayed|Leak-proof variable-volume holder for frozen confections|
US3477285A|1967-08-04|1969-11-11|Joseph M Krafft|Four-lobed diametral-strain gage|
US3915534A|1967-08-15|1975-10-28|Joslyn Mfg & Supply Co|Grounded surface distribution apparatus|
JPS4899108A|1972-04-05|1973-12-15|
USD249874S|1976-10-12|1978-10-10|Texas Instruments Incorporated|Digital wrist watch|
US4757456A|1981-05-19|1988-07-12|Ralph Benghiat|Device and method for utility meter reading|
US4427303A|1982-06-17|1984-01-24|Hermann Hirsch Leder Und Kunstoffwarenfabrik|Electronic wristwatch|
USD282914S|1983-10-13|1986-03-11|Jeffrey Maron|Indicator watch|
USD284949S|1984-04-16|1986-08-05|Kong Hung W|Watch frame|
USD300828S|1985-08-21|1989-04-25|Divetronic Ag|Wrist computer for divers|
USD297121S|1985-12-03|1988-08-09|Porsche Design Gmbh|Sport watch|
US4636047A|1986-03-21|1987-01-13|Green Ronald N|Wrist watch mirror accessory|
USD302664S|1986-09-08|1989-08-08|Dawson Jr Melvyn H|Electronic watch|
USD300678S|1987-06-11|1989-04-11|Moulinex, Societe Anonyme|Re-charging stand for a cordless electric iron|
US4906207A|1989-04-24|1990-03-06|W. L. Gore & Associates, Inc.|Dielectric restrainer|
USD362396S|1990-06-27|1995-09-19|Chester Edna L|Wristwatch|
USD335263S|1990-06-28|1993-05-04|Willis John E|Wristband|
USD351558S|1991-11-06|1994-10-18|Junho Moon|Wrist watch|
USD355132S|1992-02-28|1995-02-07|Kenneth Williams|Combined sweatband and watch|
USD347589S|1992-03-12|1994-06-07|Labate Joseph A|Wrist band and mounting for a tape measure|
US5418760A|1992-08-18|1995-05-23|Casio Computer Co., Ltd.|Electronic devices with a liquid crystal display|
USD356960S|1993-02-03|1995-04-04|Societe Nouvelle Chaumet S.A.|Watch with band portions|
US5361169A|1993-04-29|1994-11-01|Deal Jerry L|Diver's mirror|
USD365550S|1993-05-17|1995-12-26|Timex Corporation|Personal digital assistant to be worn on a wrist|
USD366036S|1993-05-28|1996-01-09|Timex Corporation|Personal digital assistant to be worn on a wrist|
US20040210479A1|1996-10-25|2004-10-21|Ipf, Inc.|Internet-based brand marketing communication instrumentation network for deploying, installing and remotely programming brand-building server-side driven multi-mode virtual kiosks on the World Wide Web , and methods of brand marketing communication between brand marketers and consumers using the same|
USD386696S|1995-02-16|1997-11-25|Christophe Walch|Multipiece plastic wristwatch|
US5832296A|1995-04-26|1998-11-03|Interval Research Corp.|Wearable context sensitive user interface for interacting with plurality of electronic devices of interest to the user|
USD372878S|1995-05-09|1996-08-20|Danny Finnegan|Workman's magnetic wrist band for holding small metal objects|
JPH0985983A|1995-09-21|1997-03-31|Canon Inc|Printer control device|
US6636635B2|1995-11-01|2003-10-21|Canon Kabushiki Kaisha|Object extraction method, and image sensing apparatus using the method|
USD383985S|1995-12-22|1997-09-23|Doris Davenport|Combination watch and pager|
US6047301A|1996-05-24|2000-04-04|International Business Machines Corporation|Wearable computer|
USD384661S|1996-08-28|1997-10-07|Garmin Corporation|Portable handheld combination GPS and communication transceiver|
JP3321053B2|1996-10-18|2002-09-03|株式会社東芝|Information input device, information input method, and correction data generation device|
US20040209657A1|1999-01-04|2004-10-21|Yoram Ghassabian|Wrist-mounted telephone device|
US6285757B1|1997-11-07|2001-09-04|Via, Inc.|Interactive devices and methods|
USD410854S|1997-12-11|1999-06-15|Imus Co., Ltd.|Wrist watch|
US5915580A|1997-12-24|1999-06-29|Outer Circle Products, Ltd.|Container covering|
USD401515S|1998-01-23|1998-11-24|Yacov Yida|Watch|
USD404317S|1998-02-05|1999-01-19|Bernice M Cameron|Bottle mounted timer|
JP2000050133A|1998-02-10|2000-02-18|Asulab Sa|Portable object for low power consumption, especially watch having camera and radio data transmitting means|
US6031525A|1998-04-01|2000-02-29|New York University|Method and apparatus for writing|
JPH11298362A|1998-04-06|1999-10-29|Sony Corp|Small portable terminal|
USD416814S|1998-05-15|1999-11-23|Nancy Lynn Welsh|Ring having a time piece mounted within its central finger opening|
US6573883B1|1998-06-24|2003-06-03|Hewlett Packard Development Company, L.P.|Method and apparatus for controlling a computing device with gestures|
US7854684B1|1998-06-24|2010-12-21|Samsung Electronics Co., Ltd.|Wearable device|
JP3627531B2|1998-09-30|2005-03-09|セイコーエプソン株式会社|Information processing device|
KR100835757B1|1998-10-28|2008-06-05|비아, 인크|Flex-to-fixed user interface devices|
JP3988102B2|1998-11-06|2007-10-10|富士フイルム株式会社|Arm-mounted camera|
USD413817S|1998-12-18|1999-09-14|Casio Keisanki Kabushiki Kaisha|Watch case with global positioning system|
US7225229B1|1998-12-18|2007-05-29|Tangis Corporation|Automated pushing of computer user's context data to clients|
USD422513S|1998-12-22|2000-04-11|Gucci Timepieces Sa|Wristwatch|
US6400996B1|1999-02-01|2002-06-04|Steven M. Hoffberg|Adaptive pattern recognition based control system and method|
JP2000267797A|1999-03-15|2000-09-29|Seiko Epson Corp|Information processor|
US7293231B1|1999-03-18|2007-11-06|British Columbia Ltd.|Data entry for personal computing devices|
GB2350523B|1999-05-26|2003-11-26|Nokia Mobile Phones Ltd|Communication device|
US6359837B1|1999-06-30|2002-03-19|Casio Computer Co., Ltd.|Camera, camera system, information recording system, timepiece, and link system for camera and timepiece|
AT322711T|1999-08-25|2006-04-15|Swatch Ag|CLOCK WITH TOUCH-FREE CONTROL DEVICE FOR A COMPUTER MOUSE|
USD433949S|1999-10-04|2000-11-21|Chanel, Inc.|Wrist watch case|
US6424743B1|1999-11-05|2002-07-23|Motorola, Inc.|Graphical handwriting recognition user interface|
JP2001167054A|1999-12-09|2001-06-22|Casio Comput Co Ltd|Portable information equipment, device and system for authentication|
USD434675S|1999-12-13|2000-12-05|Casio Keisanki Kabushiki Kaisha|Watch case with global positioning system|
US6747680B1|1999-12-13|2004-06-08|Microsoft Corporation|Speed-dependent automatic zooming interface|
JP2001251542A|1999-12-28|2001-09-14|Casio Comput Co Ltd|Portable image pickup device|
JP2005174356A|2000-02-01|2005-06-30|Toshiba Corp|Direction detection method|
GB2365676B|2000-02-18|2004-06-23|Sensei Ltd|Mobile telephone with improved man-machine interface|
GB0004688D0|2000-02-28|2000-04-19|Radley Smith Philip J|Bracelet|
US6597345B2|2000-03-03|2003-07-22|Jetway Technologies Ltd.|Multifunctional keypad on touch screen|
JP2001344039A|2000-03-31|2001-12-14|Seiko Epson Corp|Information processor and method for controlling the same and control program of the same|
USD453005S1|2000-04-26|2002-01-22|Dinh Van|Wrist-watch|
JP4042340B2|2000-05-17|2008-02-06|カシオ計算機株式会社|Information equipment|
JP2001344352A|2000-05-31|2001-12-14|Toshiba Corp|Life assisting device, life assisting method and advertisement information providing method|
USD455356S1|2000-06-19|2002-04-09|Rainer Saffer|Watch|
DE60138519D1|2000-06-21|2009-06-10|Seiko Epson Corp|MOBILE PHONE AND RADIO COMMUNICATION DEVICE FOR THE COMMON PROCESSING OF AN INCOMING CALL|
US6714233B2|2000-06-21|2004-03-30|Seiko Epson Corporation|Mobile video telephone system|
US7477890B1|2000-06-30|2009-01-13|International Business Machines Corporation|Demand pull—multichannel asynchronous data and application synchronization for pervasive devices|
US6556222B1|2000-06-30|2003-04-29|International Business Machines Corporation|Bezel based input mechanism and user interface for a smart watch|
US6477117B1|2000-06-30|2002-11-05|International Business Machines Corporation|Alarm interface for a smart watch|
US7081905B1|2000-06-30|2006-07-25|International Business Machines Corporation|Method and apparatus for dynamically controlling scroller speed employed for a user interface of a wearable appliance|
JP3785902B2|2000-07-11|2006-06-14|インターナショナル・ビジネス・マシーンズ・コーポレーション|Device, device control method, pointer movement method|
JP2002099476A|2000-09-26|2002-04-05|Sanyo Electric Co Ltd|Software distribution method, software distribution device and user terminal using them|
USD463296S1|2000-10-23|2002-09-24|Eseoghene Esther Sanomi|Wrist radius watch|
JP3948260B2|2000-11-22|2007-07-25|松下電器産業株式会社|Text input method and apparatus|
JP2002257955A|2000-12-25|2002-09-11|Seiko Epson Corp|Wristwatch device with communication function, method of displaying information, control program and recording medium|
JP3613177B2|2000-12-26|2005-01-26|インターナショナル・ビジネス・マシーンズ・コーポレーション|Input object selection apparatus and method|
USD460430S1|2000-12-28|2002-07-16|Seiko Instruments Inc.|Mobile phone|
US20020135615A1|2001-01-31|2002-09-26|Microsoft Corporation|Overlaid display for electronic devices|
US20020101457A1|2001-01-31|2002-08-01|Microsoft Corporation|Bezel interface for small computing devices|
US6744427B2|2001-03-01|2004-06-01|International Business Machines Corporation|Character input interface for compact electronic devices|
US6447117B1|2001-03-12|2002-09-10|Rene Estrada|Eyeglass lens mounting system|
US6937135B2|2001-05-30|2005-08-30|Hewlett-Packard Development Company, L.P.|Face and environment sensing watch|
JP2003018923A|2001-07-09|2003-01-21|Onga Eng:Kk|Apparatus for opening and closing greenhouse|
US6774796B2|2001-08-01|2004-08-10|Motorola, Inc.|Master authenticator|
EP1282018A1|2001-08-03|2003-02-05|Nokia Corporation|A wearable electronic device|
USD459352S1|2001-08-10|2002-06-25|Michael C. Giovanniello|Wireless mouse wristband|
US20030046228A1|2001-08-28|2003-03-06|Jean-Marc Berney|User-wearable functional jewelry with biometrics and smartcard to remotely sign and/or authenticate to e-services|
US7634403B2|2001-09-05|2009-12-15|Voice Signal Technologies, Inc.|Word recognition using word transformation commands|
JP2003131785A|2001-10-22|2003-05-09|Toshiba Corp|Interface device, operation control method and program product|
US7312785B2|2001-10-22|2007-12-25|Apple Inc.|Method and apparatus for accelerated scrolling|
US6990662B2|2001-10-31|2006-01-24|Hewlett-Packard Development Company, L.P.|Method and system for offloading execution and resources for resource-constrained networked devices|
US6744423B2|2001-11-19|2004-06-01|Nokia Corporation|Communication terminal having a predictive character editor application|
US8176432B2|2001-11-20|2012-05-08|UEI Electronics Inc.|Hand held remote control device having an improved user interface|
US7091964B2|2001-11-30|2006-08-15|Palm, Inc.|Electronic device with bezel feature for receiving input|
TWI258647B|2001-12-27|2006-07-21|Asulab Sa|Control method for executing functions in a diary watch|
GB0201074D0|2002-01-18|2002-03-06|3G Lab Ltd|Graphic user interface for data processing device|
GB2384395A|2002-01-19|2003-07-23|Hewlett Packard Co|Personal article capable of receiving specified items or data|
US7111788B2|2002-04-22|2006-09-26|Nokia Corporation|System and method for navigating applications using a graphical user interface|
AU2003239385A1|2002-05-10|2003-11-11|Richard R. Reisman|Method and apparatus for browsing using multiple coordinated device|
US8001488B1|2002-05-31|2011-08-16|Hewlett-Packard Development Company, L.P.|User interface dial with display|
JP2004021893A|2002-06-20|2004-01-22|Sony Corp|Portable information communication terminal, program and recording medium recording program|
US9886309B2|2002-06-28|2018-02-06|Microsoft Technology Licensing, Llc|Identity-based distributed computing for device resources|
US7874917B2|2003-09-15|2011-01-25|Sony Computer Entertainment Inc.|Methods and systems for enabling depth and direction detection when interfacing with a computer program|
US6968508B2|2002-07-30|2005-11-22|Motorola, Inc.|Rotating user interface|
JP2004072450A|2002-08-07|2004-03-04|Casio Comput Co Ltd|Electronic camera|
US7386855B2|2002-08-12|2008-06-10|Ntt Docomo, Inc.|Application mobility service|
USD474982S1|2002-08-15|2003-05-27|Nike, Inc.|Portion of a watch|
US7036007B2|2002-09-09|2006-04-25|Intel Corporation|Firmware architecture supporting safe updates and multiple processor types|
JP2004184396A|2002-10-09|2004-07-02|Seiko Epson Corp|Display device, clock, method for controlling display device, control program, and recording medium|
JP2004178584A|2002-11-26|2004-06-24|Asulab Sa|Input method of security code by touch screen for accessing function, device or specific place, and device for executing the method|
US7199786B2|2002-11-29|2007-04-03|Daniel Suraqui|Reduced keyboards system using unistroke input and having automatic disambiguating and a recognition method using said system|
US20040130581A1|2003-01-03|2004-07-08|Microsoft Corporation|Interaction model|
JP2004259063A|2003-02-26|2004-09-16|Sony Corp|Device and method for display processing for three dimensional object and computer program|
JP2004288172A|2003-03-04|2004-10-14|Sony Corp|Input device, information terminal device and mode switching method|
US7096048B2|2003-04-01|2006-08-22|Sanders Donald T|Portable communications device|
KR100617681B1|2003-04-15|2006-08-28|삼성전자주식회사|Method for using rotation key apparatus in wireless terminal|
US7385361B2|2003-05-14|2008-06-10|Matsushita Electric Industrial Co., Ltd.|Ballast for high-pressure discharge lamp and method of operating the same|
US20040261031A1|2003-06-23|2004-12-23|Nokia Corporation|Context dependent auxiliary menu elements|
US7567239B2|2003-06-26|2009-07-28|Motorola, Inc.|Method and system for message and note composition on small screen devices|
KR100590528B1|2003-06-28|2006-06-15|삼성전자주식회사|Device of sensing finger's motion in wearable type and method for sensing finger's motion using the same|
US20050001821A1|2003-07-02|2005-01-06|Low Tse How|Option selector and electronic device including such an option selector|
US7478390B2|2003-09-25|2009-01-13|International Business Machines Corporation|Task queue management of virtual devices using a plurality of processors|
US20050137470A1|2003-12-08|2005-06-23|Rosenthal Robert D.|Method and apparatus for low blood glucose level detection|
KR100571428B1|2003-12-24|2006-04-17|한국전자통신연구원|Wearable Interface Device|
WO2005065404A2|2003-12-31|2005-07-21|James Marin|Rotating bezel watch|
US7872669B2|2004-01-22|2011-01-18|Massachusetts Institute Of Technology|Photo-based mobile deixis system and related techniques|
USD546203S1|2004-02-13|2007-07-10|Cartier Creation Studio, Sa|Watch|
US20060081714A1|2004-08-23|2006-04-20|King Martin T|Portable scanning device|
US7398151B1|2004-02-25|2008-07-08|Garmin Ltd.|Wearable electronic device|
GB2411337B|2004-02-27|2006-12-06|Simon Richard Daniel|Modular interface strap for multi-unit wristband and necklace assemblies|
JP2005244676A|2004-02-27|2005-09-08|Dowango:Kk|Remote control system|
US7618260B2|2004-02-27|2009-11-17|Daniel Simon R|Wearable modular interface strap|
US8886298B2|2004-03-01|2014-11-11|Microsoft Corporation|Recall device|
US7173604B2|2004-03-23|2007-02-06|Fujitsu Limited|Gesture identification of controlled devices|
KR100853605B1|2004-03-23|2008-08-22|후지쯔 가부시끼가이샤|Distinguishing tilt and translation motion components in handheld devices|
US7301529B2|2004-03-23|2007-11-27|Fujitsu Limited|Context dependent gesture response|
TWI370979B|2004-05-14|2012-08-21|Ibm|Grid computing system, information processing unit, job execution request generation unit, control method, program, and recording medium|
US7404146B2|2004-05-27|2008-07-22|Agere Systems Inc.|Input device for portable handset|
JP4379214B2|2004-06-10|2009-12-09|日本電気株式会社|Mobile terminal device|
US7840681B2|2004-07-30|2010-11-23|International Business Machines Corporation|Method and apparatus for integrating wearable devices within a SIP infrastructure|
US8479122B2|2004-07-30|2013-07-02|Apple Inc.|Gestures for touch sensitive input devices|
US20060253010A1|2004-09-28|2006-11-09|Donald Brady|Monitoring device, method and system|
USD519858S1|2004-10-06|2006-05-02|Giochi Preziosi S.P.A.|Watch|
US20060092177A1|2004-10-30|2006-05-04|Gabor Blasko|Input method and apparatus using tactile guidance and bi-directional segmented stroke|
USD526973S1|2004-11-17|2006-08-22|Xirrus, Inc.|Electronic device housing|
WO2006059454A1|2004-11-30|2006-06-08|Konica Minolta Holdings, Inc.|Information processing apparatus|
TWM277958U|2004-12-31|2005-10-11|Ind Tech Res Inst|An flexible displaying device for electronic information includes a device housing|
US20060149652A1|2005-01-06|2006-07-06|Fellenstein Craig W|Receiving bid requests and pricing bid responses for potential grid job submissions within a grid environment|
US20060197835A1|2005-03-04|2006-09-07|Anderson Robert C|Wrist-attached display system for unmanned vehicle imagery and communication|
USD527286S1|2005-03-22|2006-08-29|Rolex Watch U.S.A., Inc.|Watch|
JP2006279137A|2005-03-28|2006-10-12|Sharp Corp|Remote control system|
US20060224766A1|2005-03-31|2006-10-05|Malackowski Donald W|Operating room communication bus and method|
US20060288233A1|2005-04-25|2006-12-21|Douglas Kozlay|Attachable biometric authentication apparatus for watchbands and other personal items|
ITUD20050074A1|2005-05-11|2006-11-12|Eurotech S P A|ENERGY SAVING DEVICE FOR A PORTABLE ELECTRONIC EQUIPMENT|
US8117540B2|2005-05-18|2012-02-14|Neuer Wall Treuhand Gmbh|Method and device incorporating improved text input mechanism|
US9606634B2|2005-05-18|2017-03-28|Nokia Technologies Oy|Device incorporating improved text input mechanism|
US20060271867A1|2005-05-27|2006-11-30|Wang Kong Q|Mobile communications terminal and method therefore|
EA200800069A1|2005-06-16|2008-06-30|Фируз Гассабиан|DATA INPUT SYSTEM|
US7487467B1|2005-06-23|2009-02-03|Sun Microsystems, Inc.|Visual representation and other effects for application management on a device with a small screen|
US20070004969A1|2005-06-29|2007-01-04|Microsoft Corporation|Health monitor|
JP2007014471A|2005-07-06|2007-01-25|Sumitomo Precision Prod Co Ltd|Wireless sensor terminal|
USD549602S1|2005-07-18|2007-08-28|Garmin Ltd.|GPS exercise monitoring device|
JP2007064758A|2005-08-30|2007-03-15|Sony Corp|Mobile electronic apparatus, display control program, display control method, timepiece device|
KR100814395B1|2005-08-30|2008-03-18|삼성전자주식회사|Apparatus and Method for Controlling User Interface Using Jog Shuttle and Navigation Key|
JP4676303B2|2005-10-18|2011-04-27|株式会社日立製作所|Terminal device|
US20070100244A1|2005-10-31|2007-05-03|Chin-Shu Lin|Wrist-worn monitor for heartbeat detection|
US7454841B2|2005-11-01|2008-11-25|Hunter Engineering Company|Method and apparatus for wheel alignment system target projection and illumination|
USD547212S1|2005-11-10|2007-07-24|Dias M Carlos|Wrist watch|
EP1890238B1|2005-12-28|2011-10-19|Panasonic Electric Works Co., Ltd.|Distributed functional processing system in a network|
KR100761262B1|2005-12-28|2007-09-28|엘지전자 주식회사|Character inputting apparatus for mobilecommunication terminal|
RU2319997C2|2005-12-30|2008-03-20|ЭлДжи ЭЛЕКТРОНИКС ИНК.|Electronic device for conducting programmed functions and method for inputting commands in such a device|
US7860536B2|2006-01-05|2010-12-28|Apple Inc.|Telephone interface for a portable communication device|
USD537409S1|2006-01-10|2007-02-27|Sony Corporation|Combined charger and infrared ray emitter|
JP5220278B2|2006-01-27|2013-06-26|任天堂株式会社|Game device and handwriting input program|
USD537371S1|2006-02-03|2007-02-27|Bell & Ross B.V.|Watch|
USD529402S1|2006-02-13|2006-10-03|Nike, Inc.|Portion of a watch|
USD528928S1|2006-02-14|2006-09-26|Nike, Inc.|Portion of a watch|
KR100754674B1|2006-03-10|2007-09-03|삼성전자주식회사|Method and apparatus for selecting menu in portable terminal|
USD537738S1|2006-03-17|2007-03-06|Citizen Tokei Kabushiki Kaisha|Wrist watch case with band|
EP1998666B1|2006-03-21|2014-01-01|Koninklijke Philips N.V.|Indication of the condition of a user|
USD545305S1|2006-03-22|2007-06-26|Mulcahy Michael R|Personal data assistant, particularly for the visually impaired|
JP4516042B2|2006-03-27|2010-08-04|株式会社東芝|Apparatus operating device and apparatus operating method|
USD574263S1|2006-03-28|2008-08-05|The British Masters Sa|Wristwatch|
US9395905B2|2006-04-05|2016-07-19|Synaptics Incorporated|Graphical scroll wheel|
GB2437592A|2006-04-10|2007-10-31|Skype Ltd|Indicating communication events on an alternative interface whilst running an application on another interface|
US8644396B2|2006-04-18|2014-02-04|Qualcomm Incorporated|Waveform encoding for wireless applications|
US7382691B2|2006-04-21|2008-06-03|Nixon, Inc.|Multifunction watch system and method|
USD543122S1|2006-05-10|2007-05-22|Ronald Lafever|Combined flexible band and clip-on watch|
US7539532B2|2006-05-12|2009-05-26|Bao Tran|Cuffless blood pressure monitoring appliance|
USD574262S1|2006-05-19|2008-08-05|Lorenz S.P.A.|Wrist watch|
EP1860534A1|2006-05-22|2007-11-28|LG Electronics Inc.|Mobile terminal and menu display method thereof|
US7558622B2|2006-05-24|2009-07-07|Bao Tran|Mesh network stroke monitoring appliance|
USD554636S1|2006-06-27|2007-11-06|Qualcomm Incorporated|Media display unit|
KR100782927B1|2006-06-27|2007-12-07|삼성전자주식회사|Apparatus and method for inputting character in portable terminal|
US20080018591A1|2006-07-20|2008-01-24|Arkady Pittel|User Interfacing|
USD558207S1|2006-07-28|2007-12-25|Microsoft Corporation|Docking station|
USD558208S1|2006-07-28|2007-12-25|Microsoft Corporation|Docking station|
USD558209S1|2006-07-28|2007-12-25|Microsoft Corporation|Docking station|
US8511890B2|2006-08-27|2013-08-20|Nike, Inc.|Rocking bezel control|
US20080126933A1|2006-08-28|2008-05-29|Apple Computer, Inc.|Method and apparatus for multi-mode traversal of lists|
USD602858S1|2006-09-06|2009-10-27|Amx Llc|Remote control charging base|
US8564543B2|2006-09-11|2013-10-22|Apple Inc.|Media player with imaged based browsing|
US8243017B2|2006-09-11|2012-08-14|Apple Inc.|Menu overlay including context dependent menu icon|
USD550614S1|2006-09-22|2007-09-11|10Charge, Inc.|Battery charger|
JP2008089039A|2006-09-29|2008-04-17|Jtekt Corp|Tapered roller bearing and differential device|
US7714839B2|2006-09-29|2010-05-11|Sony Ericsson Mobile Communications Ab|Jog dial for mobile terminal|
US7463557B2|2006-10-06|2008-12-09|Timex Group B.V.|Electronic device with changeable display configurations|
US7925986B2|2006-10-06|2011-04-12|Veveo, Inc.|Methods and systems for a linear character selection display interface for ambiguous text input|
US7667148B2|2006-10-13|2010-02-23|Apple Inc.|Method, device, and graphical user interface for dialing with a click wheel|
US8355709B2|2006-10-23|2013-01-15|Qualcomm Incorporated|Device that determines whether to launch an application locally or remotely as a webapp|
USD575289S1|2006-12-11|2008-08-19|Hon Hai Precision Ind. Co., Ltd.|Dock connector|
USD545697S1|2007-01-10|2007-07-03|Nike, Inc.|Portion of a watch|
KR100896055B1|2007-01-15|2009-05-07|엘지전자 주식회사|Mobile terminal having a rotating input device and display method thereof|
US8311530B2|2007-01-26|2012-11-13|Research In Motion Limited|Touch entry of password on a mobile device|
US7966039B2|2007-02-02|2011-06-21|Microsoft Corporation|Bidirectional dynamic offloading of tasks between a host and a mobile device|
US8789053B2|2007-04-05|2014-07-22|Newton Howard|Task execution and delegation by autonomous mobile agents based on intent knowledge base|
CN101675652A|2007-04-26|2010-03-17|诺基亚公司|Improved method and apparatus for switching between different modes in a mobile communication terminal|
USD590277S1|2007-04-27|2009-04-14|New Heritage Ltd.|Watch case|
USD590727S1|2007-04-27|2009-04-21|New Heritage Ltd.|Watch case|
JP5453246B2|2007-05-04|2014-03-26|クアルコム,インコーポレイテッド|Camera-based user input for compact devices|
USD604643S1|2007-05-16|2009-11-24|Suunto Oy|Multi-function wristwatch|
JP2009005320A|2007-06-21|2009-01-08|Rumiko Aso|Watch type portable cell phone|
US10178965B2|2007-06-22|2019-01-15|Ipventure, Inc.|Activity monitoring system for pregnant women|
US8442015B2|2007-07-20|2013-05-14|Broadcom Corporation|Method and system for an atomizing function of a mobile device|
KR101515089B1|2007-08-16|2015-04-24|삼성전자 주식회사|Apparatus and method of providing graphic user interface|
US8676273B1|2007-08-24|2014-03-18|Iwao Fujisaki|Communication device|
US8593404B2|2007-08-27|2013-11-26|Blackberry Limited|Reduced key arrangement for a mobile communication device|
US7778118B2|2007-08-28|2010-08-17|Garmin Ltd.|Watch device having touch-bezel user interface|
ES1066912Y|2007-09-13|2008-07-01|Fagor S Coop|REMOTE CONTROL DEVICE FOR ELECTRICAL APPLIANCES|
JP4960812B2|2007-09-14|2012-06-27|株式会社リコー|Image processing apparatus, load distribution system, and load distribution program|
JP5184018B2|2007-09-14|2013-04-17|京セラ株式会社|Electronics|
EP2203865A2|2007-09-24|2010-07-07|Apple Inc.|Embedded authentication systems in an electronic device|
KR100949581B1|2007-10-08|2010-03-25|주식회사 자코드|Apparatus and method for inputting character and numeral on communication device|
US8031172B2|2007-10-12|2011-10-04|Immersion Corporation|Method and apparatus for wearable remote interface device|
US8639214B1|2007-10-26|2014-01-28|Iwao Fujisaki|Communication device|
US20090134838A1|2007-11-27|2009-05-28|Puthalath Koroth Raghuprasad|Circular self-powered magnetic generator|
USD564367S1|2007-11-30|2008-03-18|Nike, Inc.|Watch|
US20090167486A1|2007-12-29|2009-07-02|Shah Rahul C|Secure association between devices|
US9467850B2|2007-12-31|2016-10-11|Intel Corporation|Handheld device association via shared vibration|
US7946758B2|2008-01-31|2011-05-24|WIMM Labs|Modular movement that is fully functional standalone and interchangeable in other portable devices|
RU74259U1|2008-02-19|2008-06-20|Татаренков Валерий Владимирович|MOBILE BRACELET PHONE|
US8555207B2|2008-02-27|2013-10-08|Qualcomm Incorporated|Enhanced input using recognized gestures|
US9398046B2|2008-03-06|2016-07-19|Qualcomm Incorporated|Image-based man-in-the-middle protection in numeric comparison association models|
US20090234967A1|2008-03-17|2009-09-17|Nokia Corporation|Method, system, and apparatus for transferring P2P file distribution tasks between devices|
JP2009245122A|2008-03-31|2009-10-22|Fujitsu Ltd|Authentication device, electronic apparatus, authentication method and authentication program thereof|
USD585898S1|2008-04-01|2009-02-03|Sdi Technologies, Inc.|Base unit for digital media player|
US8949743B2|2008-04-22|2015-02-03|Apple Inc.|Language input interface on a device|
EP2279504A4|2008-05-08|2013-12-18|Robert P Stratton|Remote control system and method|
US20090280861A1|2008-05-09|2009-11-12|Ashana Sabana Nisha Khan|Multifunctional all-in-one detachable wrist wireless mobile communication device|
US8638229B2|2008-05-29|2014-01-28|Garmin Switzerland Gmbh|Swim watch|
US8295879B2|2008-05-30|2012-10-23|Motorola Mobility Llc|Devices and methods for initiating functions based on movement characteristics relative to a reference|
DE102008027746A1|2008-06-11|2009-12-17|Kirner, Markus A.|Wrist-watch arrangement for use with mobile telephone, comprises clock for indicating time, bracelet, with which clock is connected, mobile telephone unit, projection unit and camera|
JP2009301485A|2008-06-17|2009-12-24|Olympus Corp|Information display system and portable information terminal therewith|
US8112526B2|2008-06-24|2012-02-07|International Business Machines Corporation|Process migration based on service availability in a multi-node environment|
US8447704B2|2008-06-26|2013-05-21|Microsoft Corporation|Recognizing gestures from forearm EMG signals|
US8078873B2|2008-06-30|2011-12-13|Intel Corporation|Two-way authentication between two communication endpoints using a one-way out-of-band channel|
US20100020033A1|2008-07-23|2010-01-28|Obinna Ihenacho Alozie Nwosu|System, method and computer program product for a virtual keyboard|
KR101546774B1|2008-07-29|2015-08-24|LG Electronics Inc.|Mobile terminal and operation control method thereof|
US20100039393A1|2008-08-15|2010-02-18|At&T Intellectual Property I, L.P.|Text entry on touch screen cellphones by different pressure levels|
JP4899108B2|2008-08-24|2012-03-21|Teruhiko Yagami|Wristwatch type electronic memo device|
US8319742B2|2008-08-26|2012-11-27|Research In Motion Limited|Portable electronic device and method of controlling same|
KR20100028465A|2008-09-04|2010-03-12|Baek Sang-ju|Character or menu input method that follows the drag direction of the pointer|
US20100070913A1|2008-09-15|2010-03-18|Apple Inc.|Selecting an item of content in a graphical user interface for a portable computing device|
JP2010073830A|2008-09-17|2010-04-02|Sumitomo Metal Mining Co Ltd|Lead frame and method of manufacturing same|
US20100082485A1|2008-09-30|2010-04-01|Apple Inc.|Portable point of purchase devices and methods|
US9026462B2|2008-09-30|2015-05-05|Apple Inc.|Portable point of purchase user interfaces|
KR101537596B1|2008-10-15|2015-07-20|LG Electronics Inc.|Mobile terminal and method for recognizing touch thereof|
US8335993B1|2008-10-24|2012-12-18|Marvell International Ltd.|Enhanced touch sensitive interface and methods and software for making and using the same|
JP4506890B2|2008-10-28|2010-07-21|Toyota Motor Corporation|Walking assist device|
USD596610S1|2008-11-07|2009-07-21|Cheng Uei Precision Industry Co., Ltd.|Wrist type mobile phone|
US9201582B2|2008-11-11|2015-12-01|Canon Kabushiki Kaisha|Display control apparatus and display control method|
US8503932B2|2008-11-14|2013-08-06|Sony Mobile Communications AB|Portable communication device and remote motion input device|
JP5343523B2|2008-11-17|2013-11-13|Fujitsu Limited|Job management apparatus, job management method, and job management program|
JP4752900B2|2008-11-19|2011-08-17|Sony Corporation|Image processing apparatus, image display method, and image display program|
JP2010122934A|2008-11-20|2010-06-03|Sony Corp|Image processing apparatus, image processing method, and program|
US8560465B2|2009-07-02|2013-10-15|Samsung Electronics Co., Ltd|Execution allocation cost assessment for computing systems and environments including elastic computing systems and environments|
US9355554B2|2008-11-21|2016-05-31|Lenovo Pte. Ltd.|System and method for identifying media and providing additional media content|
AU326155S|2008-12-22|2009-06-01|Swarovski Aktiengesellschaft|Wrist watch|
US8289162B2|2008-12-22|2012-10-16|Wimm Labs, Inc.|Gesture-based user interface for a wearable portable device|
US20100167646A1|2008-12-30|2010-07-01|Motorola, Inc.|Method and apparatus for device pairing|
US9591118B2|2009-01-01|2017-03-07|Intel Corporation|Pose to device mapping|
USD596509S1|2009-01-08|2009-07-21|Seiko Watch Kabushiki Kaisha|Watch case|
US9569001B2|2009-02-03|2017-02-14|Massachusetts Institute Of Technology|Wearable gestural interface|
US8326221B2|2009-02-09|2012-12-04|Apple Inc.|Portable electronic device with proximity-based content synchronization|
EP2220997A1|2009-02-24|2010-08-25|Koninklijke Philips Electronics N.V.|Device, system and method for monitoring motion sequences|
US8098141B2|2009-02-27|2012-01-17|Nokia Corporation|Touch sensitive wearable band apparatus and method|
USD596959S1|2009-03-05|2009-07-28|Zino Davidoff Trading Ag|Watch case|
AU327459S|2009-03-06|2009-09-03|Rado Uhren Ag Rado Watch Co Ltd Montres Rado Sa|Watch|
US20100245078A1|2009-03-26|2010-09-30|Wellcore Corporation|Wearable Motion Sensing Device|
US20100250789A1|2009-03-27|2010-09-30|Qualcomm Incorporated|System and method of managing memory at a portable computing device and a portable computing device docking station|
US9189472B2|2009-03-30|2015-11-17|Touchtype Limited|System and method for inputting text into small screen devices|
CN103752006B|2009-04-26|2017-12-26|Nike Innovate C.V.|GPS features and function in sports watch system|
US20120069027A1|2009-04-28|2012-03-22|Wataru Yamazaki|Input device|
US20100289740A1|2009-05-18|2010-11-18|Bong Soo Kim|Touchless control of an electronic device|
JP5263833B2|2009-05-18|2013-08-14|Nara Institute of Science and Technology|Ring-type interface, interface device, and interface method used for wearable computer|
US8294105B2|2009-05-22|2012-10-23|Motorola Mobility Llc|Electronic device with sensing assembly and method for interpreting offset gestures|
US8287320B2|2009-05-22|2012-10-16|John Mezzalingua Associates, Inc.|Coaxial cable connector having electrical continuity member|
USD615955S1|2009-05-26|2010-05-18|Samsung Electronics Co., Ltd.|Mobile phone|
USD626558S1|2009-05-26|2010-11-02|Implanet|PDA base station|
JP5282661B2|2009-05-26|2013-09-04|Sony Corporation|Information processing apparatus, information processing method, and program|
CN102227616B|2009-05-27|2015-04-15|Panasonic Corporation|Behavior recognition device|
US20100304673A1|2009-05-29|2010-12-02|Junichi Yoshizawa|Wireless Communication Apparatus and Wireless Communication Method|
US9239740B2|2009-06-16|2016-01-19|Microsoft Technology Licensing, Llc|Program partitioning across client and cloud|
CH701440A2|2009-07-03|2011-01-14|Comme Le Temps Sa|Wrist touch screen and method for displaying on a watch with touch screen.|
US9189156B2|2009-07-14|2015-11-17|Howard Gutowitz|Keyboard comprising swipe-switches performing keyboard actions|
US8818274B2|2009-07-17|2014-08-26|Qualcomm Incorporated|Automatic interfacing between a master device and object device|
US8245234B2|2009-08-10|2012-08-14|Avaya Inc.|Credit scheduler for ordering the execution of tasks|
USD612269S1|2009-08-13|2010-03-23|P.S.L. Limited|Slap-on watch|
US8060560B2|2009-08-27|2011-11-15|Net Power And Light, Inc.|System and method for pervasive computing|
JP5254906B2|2009-08-27|2013-08-07|Kyocera Corporation|Electronic device|
US8572375B2|2009-09-23|2013-10-29|Microsoft Corporation|Device pairing based on graphically encoded data|
US8717291B2|2009-10-07|2014-05-06|AFA Micro Co.|Motion sensitive gesture device|
GB2474536B|2009-10-13|2011-11-02|Pointgrab Ltd|Computer vision gesture based control of a device|
US8279052B2|2009-11-04|2012-10-02|Immersion Corporation|Systems and methods for haptic confirmation of commands|
US20110221666A1|2009-11-24|2011-09-15|Not Yet Assigned|Methods and Apparatus For Gesture Recognition Mode Control|
US10095276B2|2009-11-25|2018-10-09|Visa International Service Association|Information access device and data transfer|
USD627718S1|2009-12-11|2010-11-23|Black & Decker Inc.|Charger base for a hand-held vacuum cleaner|
KR20110069958A|2009-12-18|2011-06-24|Samsung Electronics Co., Ltd.|Method and apparatus for generating data in mobile terminal having projector function|
CA135976S|2009-12-23|2011-11-24|Simone Dalla Libera|Watch|
US9229779B2|2009-12-28|2016-01-05|Empire Technology Development Llc|Parallelizing heterogeneous network communications in smart devices based on selection of task allocation strategy|
US20120256959A1|2009-12-30|2012-10-11|Cywee Group Limited|Method of controlling mobile device with touch-sensitive display and motion sensor, and mobile device|
KR101626621B1|2009-12-30|2016-06-01|LG Electronics Inc.|Method for controlling data in mobile terminal having circle type display unit and mobile terminal thereof|
EP2341420A1|2010-01-04|2011-07-06|Research In Motion Limited|Portable electronic device and method of controlling same|
KR101697937B1|2010-01-08|2017-02-01|Samsung Electronics Co., Ltd.|System and method for dynamic task migration on a multiprocessor system|
CN101815326B|2010-01-11|2011-06-15|Beijing University of Posts and Telecommunications|Method for allocating tasks in wireless sensor network based on negotiation|
WO2011096702A2|2010-02-03|2011-08-11|Korea University Research and Business Foundation|Written character inputting device and method|
US8869263B2|2010-02-26|2014-10-21|Blackberry Limited|Wireless communications system providing mobile device authentication bypass based upon user-wearable security device and related methods|
US20120194551A1|2010-02-28|2012-08-02|Osterhout Group, Inc.|Ar glasses with user-action based command and control of external devices|
US20110219427A1|2010-03-04|2011-09-08|RSSBus, Inc.|Smart Device User Authentication|
US8730156B2|2010-03-05|2014-05-20|Sony Computer Entertainment America Llc|Maintaining multiple views on a shared stable virtual space|
EP2545426A4|2010-03-12|2017-05-17|Nuance Communications, Inc.|Multimodal text input system, such as for use with touch screens on mobile phones|
KR101659023B1|2010-03-15|2016-09-23|LG Electronics Inc.|Watch type mobile terminal|
US8370878B2|2010-03-17|2013-02-05|Verizon Patent And Licensing Inc.|Mobile interface for accessing interactive television applications associated with displayed content|
CH702862A1|2010-03-30|2011-09-30|Comme Le Temps Sa|Wristwatch electronic display.|
US9811385B2|2010-04-28|2017-11-07|Wind River Systems, Inc.|Optimizing task management|
JP5681865B2|2010-05-10|2015-03-11|Lenovo Innovations Limited (Hong Kong)|User interface device|
US8683377B2|2010-05-12|2014-03-25|Adobe Systems Incorporated|Method for dynamically modifying zoom level to facilitate navigation on a graphical user interface|
WO2011145700A1|2010-05-19|2011-11-24|Sharp Corporation|Reproduction device, display device, television receiver, system, recognition method, program, and recording medium|
US20110289519A1|2010-05-21|2011-11-24|Frost Gary R|Distributing workloads in a computing platform|
US8750857B2|2010-06-04|2014-06-10|Qualcomm Incorporated|Method and apparatus for wireless distributed computing|
US8935317B2|2010-06-23|2015-01-13|Microsoft Corporation|Dynamic partitioning of applications between clients and servers|
USD631373S1|2010-06-29|2011-01-25|Worldwide Watch Company Limited|Watch|
WO2012009335A1|2010-07-14|2012-01-19|Dynavox Systems Llc|A wearable speech generation device|
US9532734B2|2010-08-09|2017-01-03|Nike, Inc.|Monitoring fitness using a mobile device|
JP5234066B2|2010-08-25|2013-07-10|Yokogawa Electric Corporation|Device information display device and device information display method|
WO2012030653A2|2010-08-29|2012-03-08|Vascode Technologies Ltd.|A system and methods for multi-tasking in a clientless mobile phone|
US9007304B2|2010-09-02|2015-04-14|Qualcomm Incorporated|Methods and apparatuses for gesture-based user input detection in a mobile device|
US9489016B2|2010-09-08|2016-11-08|Live Lids, Inc.|Electronic display wearable item|
KR101219933B1|2010-09-13|2013-01-08|Hyundai Motor Company|System for controlling device in vehicle using augmented reality and method thereof|
US8195313B1|2010-11-19|2012-06-05|Nest Labs, Inc.|Thermostat user interface|
US9122318B2|2010-09-15|2015-09-01|Jeffrey R. Spetalnick|Methods of and systems for reducing keyboard data entry errors|
USD645360S1|2010-09-17|2011-09-20|Simpel, LLC|Slap band watch|
USD669369S1|2010-09-20|2012-10-23|Daniel Rasumowsky|Watch|
US8595289B2|2010-09-21|2013-11-26|Telefonaktiebolaget L M Ericsson |Cloud phone with distributed processing|
US20130165180A1|2010-09-27|2013-06-27|Adobe Systems Incorporated|Integrating Operation Of Consumer Electronic Devices|
US20120075204A1|2010-09-27|2012-03-29|Google Inc.|Using a Touch-Sensitive Display of a Mobile Device with a Host Computer|
JP2012073830A|2010-09-29|2012-04-12|Pioneer Electronic Corp|Interface device|
US8345412B2|2010-10-01|2013-01-01|Hex Holdings, Llc|Wrist band for portable electronic devices|
US9462444B1|2010-10-04|2016-10-04|Nortek Security & Control Llc|Cloud based collaborative mobile emergency call initiation and handling distribution system|
US8506158B2|2010-10-12|2013-08-13|P.S.L. Limited|Watch|
AU334766S|2010-10-13|2011-01-24|Swarovski Ag|Case for a wristwatch|
US20120096345A1|2010-10-19|2012-04-19|Google Inc.|Resizing of gesture-created markings for different display sizes|
US8635630B2|2010-10-25|2014-01-21|Microsoft Corporation|Application lifetime management|
US9086741B2|2010-10-29|2015-07-21|Microsoft Corporation|User input device|
JP2012098771A|2010-10-29|2012-05-24|Sony Corp|Image forming apparatus and image forming method, and program|
US8814754B2|2010-11-01|2014-08-26|Nike, Inc.|Wearable device having athletic functionality|
US8184983B1|2010-11-12|2012-05-22|Google Inc.|Wireless directional identification and subsequent communication between wearable electronic devices|
USD640948S1|2010-11-15|2011-07-05|Hook-Link, LLC|Magnetically hooking bracelet|
JP2012108771A|2010-11-18|2012-06-07|Panasonic Corp|Screen operation system|
JP5733656B2|2010-11-25|2015-06-10|The University of Aizu|Gesture recognition device and gesture recognition method|
US9155168B2|2010-12-03|2015-10-06|Surefire, Llc|Wearable lighting device|
CN102572565A|2010-12-15|2012-07-11|Shenzhen Tongzhou Software Co., Ltd.|Method, device and system for controlling digital TV receiving terminal by mobile terminal|
USD640936S1|2011-01-01|2011-07-05|Teresa Teixeia|Combined watch and bracelet|
USD636686S1|2011-01-04|2011-04-26|Nike, Inc.|Watch|
US9436286B2|2011-01-05|2016-09-06|Qualcomm Incorporated|Method and apparatus for tracking orientation of a user|
EP2474950B1|2011-01-05|2013-08-21|Softkinetic Software|Natural gesture based user interface methods and systems|
USD651099S1|2011-01-10|2011-12-27|Montblanc-Simplo Gmbh|Watch case|
US9778747B2|2011-01-19|2017-10-03|Hewlett-Packard Development Company, L.P.|Method and system for multimodal and gestural control|
US8699948B2|2011-01-25|2014-04-15|Sony Corporation|Connection method for near field communication|
US20120209413A1|2011-02-14|2012-08-16|Microsoft Corporation|Background Audio on Mobile Devices|
US9645986B2|2011-02-24|2017-05-09|Google Inc.|Method, medium, and system for creating an electronic book with an umbrella policy|
USD678081S1|2011-03-14|2013-03-19|Montblanc-Simplo Gmbh|Watchcase|
US8881059B2|2011-03-15|2014-11-04|Panasonic Intellectual Property Corporation Of America|Object control device, object control method, computer-readable recording medium, and integrated circuit|
US8655027B1|2011-03-25|2014-02-18|The United States of America, as represented by the Director, National Security Agency|Method of image-based user authentication|
WO2012135547A1|2011-03-29|2012-10-04|Qualcomm Incorporated|Cloud storage of geotagged maps|
EP3557855B1|2011-03-31|2020-12-23|Sony Corporation|System and method for establishing a communication session|
US10630795B2|2011-03-31|2020-04-21|Oath Inc.|Systems and methods for transferring application state between devices based on gestural input|
US10061387B2|2011-03-31|2018-08-28|Nokia Technologies Oy|Method and apparatus for providing user interfaces|
US8811964B2|2011-04-04|2014-08-19|Numera, Inc.|Single button mobile telephone using server-based call routing|
USD661206S1|2011-04-06|2012-06-05|Garmin Switzerland Gmbh|Electronic device|
USD659093S1|2011-04-08|2012-05-08|The Gillette Company|Charger|
US9152373B2|2011-04-12|2015-10-06|Apple Inc.|Gesture visualization and sharing between electronic devices and remote displays|
KR101836403B1|2011-04-12|2018-04-19|LG Electronics Inc.|Mobile terminal performing remote control function for display device|
US8928589B2|2011-04-20|2015-01-06|Qualcomm Incorporated|Virtual keyboards and methods of providing the same|
JP5824859B2|2011-05-02|2015-12-02|Funai Electric Co., Ltd.|Mobile device|
WO2012154620A2|2011-05-06|2012-11-15|Magic Leap, Inc.|Massive simultaneous remote digital presence world|
US8978120B2|2011-05-18|2015-03-10|Panasonic Intellectual Property Management Co., Ltd.|Communication control system and method, and communication device and method|
KR101844222B1|2011-05-27|2018-04-02|LG Electronics Inc.|Mobile terminal and method for controlling mode thereof|
JP2012248067A|2011-05-30|2012-12-13|Canon Inc|Information input device, control method for the same and control program|
KR20120134732A|2011-06-03|2012-12-12|Samsung Electronics Co., Ltd.|Apparatus and method for performing web browsing using remote controller|
US20120316456A1|2011-06-10|2012-12-13|Aliphcom|Sensory user interface|
US20120317024A1|2011-06-10|2012-12-13|Aliphcom|Wearable device data security|
KR101819506B1|2011-06-28|2018-01-17|LG Electronics Inc.|Mobile terminal and method for controlling display thereof|
US8194036B1|2011-06-29|2012-06-05|Google Inc.|Systems and methods for controlling a cursor on a display using a trackpad input device|
KR101251250B1|2011-07-01|2013-04-08|RSUPPORT Co., Ltd.|System for performing remote control using remote device driver and method for performing the same|
RU2455676C2|2011-07-04|2012-07-10|Limited Liability Company "TRIDIVI"|Method of controlling device using gestures and 3D sensor for realising said method|
US8184070B1|2011-07-06|2012-05-22|Google Inc.|Method and system for selecting a user interface for a wearable computing device|
US9069164B2|2011-07-12|2015-06-30|Google Inc.|Methods and systems for a virtual input device|
US8228315B1|2011-07-12|2012-07-24|Google Inc.|Methods and systems for a virtual input device|
US20130018659A1|2011-07-12|2013-01-17|Google Inc.|Systems and Methods for Speech Command Processing|
US8179604B1|2011-07-13|2012-05-15|Google Inc.|Wearable marker for passive interaction|
US20130021374A1|2011-07-20|2013-01-24|Google Inc.|Manipulating And Displaying An Image On A Wearable Computing System|
US9015708B2|2011-07-28|2015-04-21|International Business Machines Corporation|System for improving the performance of high performance computing applications on cloud using integrated load balancing|
US9256361B2|2011-08-03|2016-02-09|Ebay Inc.|Control of search results with multipoint pinch gestures|
USD650706S1|2011-08-08|2011-12-20|Blue Sky SRL|Wrist watch|
US8641306B2|2011-08-16|2014-02-04|Argotext|Wristwatch keyboard|
US9720708B2|2011-08-19|2017-08-01|Advanced Micro Devices, Inc.|Data layout transformation for workload distribution|
USD661275S1|2011-08-19|2012-06-05|Cox Communications, Inc.|Mobile communications device with wrist band|
US20130080143A1|2011-09-27|2013-03-28|Paul E. Reeves|Unified desktop docking behavior with device as master|
USD654431S1|2011-08-29|2012-02-21|Eveready Battery Company, Inc.|Charger for a lighting device|
US9529448B2|2011-08-31|2016-12-27|Farzan Fallah|Data entry systems and methods|
US9173090B2|2011-09-15|2015-10-27|Teletech Holdings, Inc.|Method for activating services associated with a product via a service center supporting a variety of products|
US8941560B2|2011-09-21|2015-01-27|Google Inc.|Wearable computer with superimposed controls and instructions for external device|
CN103034362B|2011-09-30|2017-05-17|Samsung Electronics Co., Ltd.|Method and apparatus for handling touch input in a mobile terminal|
US9336687B2|2011-10-11|2016-05-10|Teletech Holdings, Inc.|Method for providing learning courses via a service center supporting a variety of products|
US8707211B2|2011-10-21|2014-04-22|Hewlett-Packard Development Company, L.P.|Radial graphical user interface|
US8467270B2|2011-10-26|2013-06-18|Google Inc.|Smart-watch with user interface features|
US8279716B1|2011-10-26|2012-10-02|Google Inc.|Smart-watch including flip up display|
US8935629B2|2011-10-28|2015-01-13|Flipboard Inc.|Systems and methods for flipping through content|
US9936036B2|2011-10-28|2018-04-03|Sharp Laboratories Of America, Inc.|Method and system for indirectly establishing a unique communication channel between a plurality of devices|
US9164589B2|2011-11-01|2015-10-20|Intel Corporation|Dynamic gesture based short-range human-machine interaction|
WO2013067392A1|2011-11-02|2013-05-10|Hendricks Investment Holdings, Llc|Device navigation icon and system, and method of use thereof|
US9288229B2|2011-11-10|2016-03-15|Skype|Device association via video handshake|
US9310889B2|2011-11-10|2016-04-12|Blackberry Limited|Touchscreen keyboard predictive display and generation of a set of characters|
US9007302B1|2011-11-11|2015-04-14|Benjamin D. Bandt-Horn|Device and user interface for visualizing, navigating, and manipulating hierarchically structured information on host electronic devices|
US9183658B2|2011-11-14|2015-11-10|Microsoft Technology Licensing, Llc|Animation creation and management in presentation application programs|
US20130120106A1|2011-11-16|2013-05-16|Motorola Mobility, Inc.|Display device, corresponding systems, and methods therefor|
US20130132848A1|2011-11-18|2013-05-23|Apple Inc.|Application interaction via multiple user interfaces|
US8676123B1|2011-11-23|2014-03-18|Evernote Corporation|Establishing connection between mobile devices using light|
US20140330900A1|2011-11-23|2014-11-06|Evernote Corporation|Encounter-driven personal contact space|
US9162144B2|2011-12-05|2015-10-20|Microsoft Technology Licensing, Llc|Portable device pairing with a tracking system|
KR101242390B1|2011-12-29|2013-03-12|Intel Corporation|Method, apparatus and computer-readable recording medium for identifying user|
US9137627B2|2011-12-30|2015-09-15|Linkedin Corporation|Systems and methods for mobile device pairing|
USD664880S1|2012-01-18|2012-08-07|Nike, Inc.|Wristband|
USD671858S1|2012-01-18|2012-12-04|Nike, Inc.|Wristband|
USD664881S1|2012-01-18|2012-08-07|Nike, Inc.|Wristband|
EP2618239B1|2012-01-19|2016-10-05|BlackBerry Limited|Next letter prediction for virtual keyboard|
US9026461B2|2012-01-23|2015-05-05|Bank Of America Corporation|Enhanced mobile application for assisting users at a point of transaction|
US9367085B2|2012-01-26|2016-06-14|Google Technology Holdings LLC|Portable electronic device and method for controlling operation thereof taking into account which limb possesses the electronic device|
US8872849B2|2012-01-27|2014-10-28|Microsoft Corporation|Relational rendering of multi-faceted data|
US20130198056A1|2012-01-27|2013-08-01|Verizon Patent And Licensing Inc.|Near field communication transaction management and application systems and methods|
JP5193404B2|2012-02-10|2013-05-08|Olympus Imaging Corp.|Information device|
US20130211843A1|2012-02-13|2013-08-15|Qualcomm Incorporated|Engagement-dependent gesture recognition|
USD687736S1|2012-02-16|2013-08-13|Christopher D. Wycoff|Bracelet with removable chip|
US9186077B2|2012-02-16|2015-11-17|Google Technology Holdings LLC|Method and device with customizable power management|
WO2013130682A1|2012-02-27|2013-09-06|5 Examples, Inc.|Data entry system controllers for receiving user input line traces relative to user interfaces to determine ordered actions, and related systems and methods|
US8947382B2|2012-02-28|2015-02-03|Motorola Mobility Llc|Wearable display device, corresponding systems, and method for presenting output on the same|
US20130254705A1|2012-03-20|2013-09-26|Wimm Labs, Inc.|Multi-axis user interface for a touch-screen enabled wearable device|
US10332112B2|2012-03-27|2019-06-25|International Business Machines Corporation|Authentication for transactions using near field communication|
USD680541S1|2012-04-11|2013-04-23|Coby Electronics Corp.|Docking station for a portable media player|
USD672255S1|2012-04-16|2012-12-11|I'M SpA|Watch|
GB2506231A|2012-04-30|2014-03-26|Blackberry Ltd|Touchscreen keyboard with correction of previously input text|
US20130300719A1|2012-05-10|2013-11-14|Research In Motion Limited|Method and apparatus for providing stylus orientation and position input|
CN102681786B|2012-05-14|2014-12-24|Huawei Technologies Co., Ltd.|Method and device for text selection|
CN102779002B|2012-06-19|2015-05-20|Deng Chaojin|Touch screen keyboard|
US9256366B2|2012-08-14|2016-02-09|Google Technology Holdings LLC|Systems and methods for touch-based two-stage text input|
US9619282B2|2012-08-21|2017-04-11|Lenovo Pte. Ltd.|Task scheduling in big and little cores|
US9360997B2|2012-08-29|2016-06-07|Apple Inc.|Content presentation and interaction across multiple displays|
US20140078065A1|2012-09-15|2014-03-20|Ahmet Akkok|Predictive Keyboard With Suppressed Keys|
US8963806B1|2012-10-29|2015-02-24|Google Inc.|Device authentication|
US9373230B2|2012-10-31|2016-06-21|Wal-Mart Stores, Inc.|Customer reprint of a physical receipt from an electronic receipt|
US10423214B2|2012-11-20|2019-09-24|Samsung Electronics Company, Ltd|Delegating processing from wearable electronic device|
US8994827B2|2012-11-20|2015-03-31|Samsung Electronics Co., Ltd|Wearable electronic device|
US20140143737A1|2012-11-20|2014-05-22|Samsung Electronics Company, Ltd.|Transition and Interaction Model for Wearable Electronic Device|
US9477313B2|2012-11-20|2016-10-25|Samsung Electronics Co., Ltd.|User gesture input to wearable electronic device involving outward-facing sensor of device|
US11237719B2|2012-11-20|2022-02-01|Samsung Electronics Company, Ltd.|Controlling remote electronic device with wearable electronic device|
US9030446B2|2012-11-20|2015-05-12|Samsung Electronics Co., Ltd.|Placement of optical sensor on wearable electronic device|
US10185416B2|2012-11-20|2019-01-22|Samsung Electronics Co., Ltd.|User gesture input to wearable electronic device involving movement of device|
US20140181758A1|2012-12-20|2014-06-26|Research In Motion Limited|System and Method for Displaying Characters Using Gestures|
US10063598B2|2012-12-28|2018-08-28|Nokia Technologies Oy|Method and apparatus for establishing, authenticating, and accessing a content channel|
US20160132233A1|2013-02-17|2016-05-12|Keyless Systems Ltd.|Data entry systems|
US9274685B2|2013-03-15|2016-03-01|Google Technology Holdings LLC|Systems and methods for predictive text entry for small-screen devices with touch-based two-stage text input|
US20140282274A1|2013-03-15|2014-09-18|Qualcomm Incorporated|Detection of a gesture performed with at least two control objects|
US20140306898A1|2013-04-10|2014-10-16|Barnesandnoble.Com Llc|Key swipe gestures for touch sensitive ui virtual keyboard|
JP2016536595A|2013-08-23|2016-11-24|Sicpa Holding SA|Method and system for authenticating using a crystal oscillator|
CN103488420A|2013-09-02|2014-01-01|Yulong Computer Telecommunication Technologies Co., Ltd.|Virtual keyboard adjusting method and device|
US9100395B2|2013-09-24|2015-08-04|International Business Machines Corporation|Method and system for using a vibration signature as an authentication key|
US9450682B2|2013-10-07|2016-09-20|International Business Machines Corporation|Method and system using vibration signatures for pairing master and slave computing devices|
US9022291B1|2014-07-24|2015-05-05|Apple Inc.|Invisible optical label for transmitting information between computing devices|
US10326590B2|2014-11-11|2019-06-18|Intel Corporation|Technologies for trusted device on-boarding|
AU1328597A|1995-11-30|1997-06-19|Virtual Technologies, Inc.|Tactile feedback man-machine interface device|
US8645137B2|2000-03-16|2014-02-04|Apple Inc.|Fast, language-independent method for user authentication by voice|
US8677377B2|2005-09-08|2014-03-18|Apple Inc.|Method and apparatus for building an intelligent automated assistant|
US10002189B2|2007-12-20|2018-06-19|Apple Inc.|Method and apparatus for searching using an active ontology|
US9330720B2|2008-01-03|2016-05-03|Apple Inc.|Methods and apparatus for altering audio output signals|
US8996376B2|2008-04-05|2015-03-31|Apple Inc.|Intelligent text-to-speech conversion|
US8594740B2|2008-06-11|2013-11-26|Pantech Co., Ltd.|Mobile communication terminal and data input method|
US20100030549A1|2008-07-31|2010-02-04|Lee Michael M|Mobile device having human language translation capability with positional feedback|
US8676904B2|2008-10-02|2014-03-18|Apple Inc.|Electronic devices with voice command and contextual data processing capabilities|
US10241752B2|2011-09-30|2019-03-26|Apple Inc.|Interface for a virtual digital assistant|
US9431006B2|2009-07-02|2016-08-30|Apple Inc.|Methods and apparatuses for automatic speech recognition|
US9684394B2|2011-01-10|2017-06-20|Apple Inc.|Button functionality|
US9318108B2|2010-01-18|2016-04-19|Apple Inc.|Intelligent automated assistant|
US10276170B2|2010-01-18|2019-04-30|Apple Inc.|Intelligent automated assistant|
US8682667B2|2010-02-25|2014-03-25|Apple Inc.|User profiling for selecting user specific voice input processing information|
CA3009798A1|2017-07-12|2019-01-12|General Electric Company|Graphic overlay for measuring dimensions of features using a video inspection device|
US9262612B2|2011-03-21|2016-02-16|Apple Inc.|Device access using voice authentication|
US10255566B2|2011-06-03|2019-04-09|Apple Inc.|Generating and processing task items that represent tasks to perform|
US10241644B2|2011-06-03|2019-03-26|Apple Inc.|Actionable reminder entries|
US8994660B2|2011-08-29|2015-03-31|Apple Inc.|Text correction processing|
US9600169B2|2012-02-27|2017-03-21|Yahoo! Inc.|Customizable gestures for mobile devices|
US10134385B2|2012-03-02|2018-11-20|Apple Inc.|Systems and methods for name pronunciation|
US10097496B2|2012-05-09|2018-10-09|Apple Inc.|Electronic mail user interface|
US10304347B2|2012-05-09|2019-05-28|Apple Inc.|Exercised-based watch face and complications|
US10990270B2|2012-05-09|2021-04-27|Apple Inc.|Context-specific user interfaces|
US10235014B2|2012-05-09|2019-03-19|Apple Inc.|Music user interface|
US10649622B2|2012-05-09|2020-05-12|Apple Inc.|Electronic message user interface|
US9804759B2|2012-05-09|2017-10-31|Apple Inc.|Context-specific user interfaces|
US10613743B2|2012-05-09|2020-04-07|Apple Inc.|User interface for receiving user input|
US9547425B2|2012-05-09|2017-01-17|Apple Inc.|Context-specific user interfaces|
US9280610B2|2012-05-14|2016-03-08|Apple Inc.|Crowd sourcing information to fulfill user requests|
US10417037B2|2012-05-15|2019-09-17|Apple Inc.|Systems and methods for integrating third party services with a digital assistant|
US9721563B2|2012-06-08|2017-08-01|Apple Inc.|Name recognition system|
US9547647B2|2012-09-19|2017-01-17|Apple Inc.|Voice-based media searching|
US8994827B2|2012-11-20|2015-03-31|Samsung Electronics Co., Ltd|Wearable electronic device|
US11237719B2|2012-11-20|2022-02-01|Samsung Electronics Company, Ltd.|Controlling remote electronic device with wearable electronic device|
US9477313B2|2012-11-20|2016-10-25|Samsung Electronics Co., Ltd.|User gesture input to wearable electronic device involving outward-facing sensor of device|
US10185416B2|2012-11-20|2019-01-22|Samsung Electronics Co., Ltd.|User gesture input to wearable electronic device involving movement of device|
US10423214B2|2012-11-20|2019-09-24|Samsung Electronics Company, Ltd|Delegating processing from wearable electronic device|
US11157436B2|2012-11-20|2021-10-26|Samsung Electronics Company, Ltd.|Services associated with wearable electronic device|
US20140180595A1|2012-12-26|2014-06-26|Fitbit, Inc.|Device state dependent user interface management|
US10691230B2|2012-12-29|2020-06-23|Apple Inc.|Crown input for a wearable electronic device|
US10275117B2|2012-12-29|2019-04-30|Apple Inc.|User interface object manipulations in a user interface|
CN113470640A|2013-02-07|2021-10-01|Apple Inc.|Voice trigger of digital assistant|
US10314492B2|2013-05-23|2019-06-11|Medibotics Llc|Wearable spectroscopic sensor to measure food consumption based on interaction between light and the human body|
WO2014197336A1|2013-06-07|2014-12-11|Apple Inc.|System and method for detecting errors in interactions with a voice-based digital assistant|
US9582608B2|2013-06-07|2017-02-28|Apple Inc.|Unified ranking with entropy-weighted information for phrase-based semantic auto-completion|
WO2014197334A2|2013-06-07|2014-12-11|Apple Inc.|System and method for user-specified pronunciation of words for speech synthesis and recognition|
WO2014197335A1|2013-06-08|2014-12-11|Apple Inc.|Interpreting and acting upon commands that involve sharing information with remote devices|
EP3008641A1|2013-06-09|2016-04-20|Apple Inc.|Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant|
US10176167B2|2013-06-09|2019-01-08|Apple Inc.|System and method for inferring user intent from speech inputs|
US20150006385A1|2013-06-28|2015-01-01|Tejas Arvindbhai Shah|Express transactions on a mobile device|
KR102179812B1|2013-07-18|2020-11-17|LG Electronics Inc.|Watch type mobile terminal|
CN105683876B|2013-09-03|2019-11-01|Apple Inc.|Crown input for wearable electronics|
US10545657B2|2013-09-03|2020-01-28|Apple Inc.|User interface for manipulating user interface objects|
US10503388B2|2013-09-03|2019-12-10|Apple Inc.|Crown input for a wearable electronic device|
US10001817B2|2013-09-03|2018-06-19|Apple Inc.|User interface for manipulating user interface objects with magnetic properties|
US11068128B2|2013-09-03|2021-07-20|Apple Inc.|User interface object manipulations in a user interface|
US20150153928A1|2013-12-04|2015-06-04|Autodesk, Inc.|Techniques for interacting with handheld devices|
KR20150065336A|2013-12-05|2015-06-15|Samsung Electronics Co., Ltd.|Method, apparatus and computer readable recording medium for recognizing gesture through an electronic device|
US20150177947A1|2013-12-20|2015-06-25|Motorola Mobility Llc|Enhanced User Interface Systems and Methods for Electronic Devices|
US10429888B2|2014-02-25|2019-10-01|Medibotics Llc|Wearable computer display devices for the forearm, wrist, and/or hand|
US9582035B2|2014-02-25|2017-02-28|Medibotics Llc|Wearable computing devices and methods for the wrist and/or forearm|
US10691332B2|2014-02-28|2020-06-23|Samsung Electronics Company, Ltd.|Text input on an interactive display|
US20150277563A1|2014-03-28|2015-10-01|Wen-Ling M. Huang|Dynamic tactile user interface|
US9483763B2|2014-05-29|2016-11-01|Apple Inc.|User interface for payments|
US10313506B2|2014-05-30|2019-06-04|Apple Inc.|Wellness aggregator|
US11200542B2|2014-05-30|2021-12-14|Apple Inc.|Intelligent appointment suggestions|
WO2015184186A1|2014-05-30|2015-12-03|Apple Inc.|Multi-command single utterance input method|
US10170123B2|2014-05-30|2019-01-01|Apple Inc.|Intelligent assistant for home automation|
US9842101B2|2014-05-30|2017-12-12|Apple Inc.|Predictive conversion of language input|
US9715875B2|2014-05-30|2017-07-25|Apple Inc.|Reducing the need for manual start/end-pointing and trigger phrases|
US9430463B2|2014-05-30|2016-08-30|Apple Inc.|Exemplar-based natural language processing|
US9633004B2|2014-05-30|2017-04-25|Apple Inc.|Better resolution when referencing to concepts|
US9575560B2|2014-06-03|2017-02-21|Google Inc.|Radar-based gesture-recognition through a wearable device|
USD761860S1|2014-06-20|2016-07-19|Samsung Electronics Co., Ltd.|Display screen or portion thereof with icon|
USD761859S1|2014-06-20|2016-07-19|Samsung Electronics Co., Ltd.|Display screen or portion thereof with icon|
EP3584671A1|2014-06-27|2019-12-25|Apple Inc.|Manipulation of calendar application in device with touch screen|
US9338493B2|2014-06-30|2016-05-10|Apple Inc.|Intelligent automated assistant for TV user interactions|
KR20210131469A|2014-08-02|2021-11-02|Apple Inc.|Context-specific user interfaces|
US9811164B2|2014-08-07|2017-11-07|Google Inc.|Radar-based gesture sensing and data transmission|
US9921660B2|2014-08-07|2018-03-20|Google Llc|Radar-based gesture recognition|
US10339293B2|2014-08-15|2019-07-02|Apple Inc.|Authenticated device used to unlock another device|
US9588625B2|2014-08-15|2017-03-07|Google Inc.|Interactive textiles|
US10452253B2|2014-08-15|2019-10-22|Apple Inc.|Weather user interface|
US10268321B2|2014-08-15|2019-04-23|Google Llc|Interactive textiles within hard objects|
DE102014012629A1|2014-08-22|2016-02-25|Diehl Ako Stiftung & Co. Kg|Operating device and mobile operating device, in particular for an electronic household appliance|
US11169988B2|2014-08-22|2021-11-09|Google Llc|Radar recognition-aided search|
US20160054792A1|2014-08-22|2016-02-25|Google Inc.|Radar-Based Biometric Recognition|
US9778749B2|2014-08-22|2017-10-03|Google Inc.|Occluded gesture recognition|
DE202015006141U1|2014-09-02|2015-12-14|Apple Inc.|Electronic touch communication|
US10254948B2|2014-09-02|2019-04-09|Apple Inc.|Reduced-size user interfaces for dynamically updated application overviews|
US9547419B2|2014-09-02|2017-01-17|Apple Inc.|Reduced size configuration interface|
US10114521B2|2014-09-02|2018-10-30|Apple Inc.|Multi-dimensional object rearrangement|
KR101901796B1|2014-09-02|2018-09-28|Apple Inc.|Reduced-size interfaces for managing alerts|
WO2016036427A1|2014-09-02|2016-03-10|Apple Inc.|Electronic device with rotatable input mechanism|
WO2016036509A1|2014-09-02|2016-03-10|Apple Inc.|Electronic mail user interface|
CN106797493A|2014-09-02|2017-05-31|Apple Inc.|Music user interface|
WO2016036552A1|2014-09-02|2016-03-10|Apple Inc.|User interactions for a mapping application|
EP3291534A1|2014-09-02|2018-03-07|Apple Inc.|Remote camera user interface|
TW201610758A|2014-09-02|2016-03-16|Apple Inc.|Button functionality|
CN106797415A|2014-09-02|2017-05-31|Apple Inc.|Telephone user interface|
US9818400B2|2014-09-11|2017-11-14|Apple Inc.|Method and apparatus for discovering trending terms in speech requests|
US10789041B2|2014-09-12|2020-09-29|Apple Inc.|Dynamic thresholds for always listening speech trigger|
US9668121B2|2014-09-30|2017-05-30|Apple Inc.|Social reminders|
US9886432B2|2014-09-30|2018-02-06|Apple Inc.|Parsimonious handling of word inflection via categorical stem + suffix N-gram language models|
US10127911B2|2014-09-30|2018-11-13|Apple Inc.|Speaker identification and unsupervised speaker adaptation techniques|
US9646609B2|2014-09-30|2017-05-09|Apple Inc.|Caching apparatus for serving phonetic pronunciations|
US10074360B2|2014-09-30|2018-09-11|Apple Inc.|Providing an indication of the suitability of speech recognition|
US9600080B2|2014-10-02|2017-03-21|Google Inc.|Non-line-of-sight radar-based gesture recognition|
US10013025B2|2014-12-11|2018-07-03|Intel Corporation|Wearable device with power state control|
DE102014226546A1|2014-12-19|2016-06-23|Robert Bosch Gmbh|Method for operating an input device, input device, motor vehicle|
USD777186S1|2014-12-24|2017-01-24|Logitech Europe, S.A.|Display screen or portion thereof with a graphical user interface|
AU2016215440B2|2015-02-02|2019-03-14|Apple Inc.|Device, method, and graphical user interface for establishing a relationship and connection between two devices|
KR20160101295A|2015-02-16|2016-08-25|Samsung Display Co., Ltd.|Circular touch panel and manufacturing method of the same|
US10466883B2|2015-03-02|2019-11-05|Apple Inc.|Screenreader user interface|
US10365807B2|2015-03-02|2019-07-30|Apple Inc.|Control of system zoom magnification using a rotatable input mechanism|
US10152299B2|2015-03-06|2018-12-11|Apple Inc.|Reducing response latency of intelligent automated assistants|
US9865280B2|2015-03-06|2018-01-09|Apple Inc.|Structured dictation using intelligent automated assistants|
US10055121B2|2015-03-07|2018-08-21|Apple Inc.|Activity based thresholds and feedbacks|
EP3250997A1|2015-03-08|2017-12-06|Apple Inc.|User interface using a rotatable input mechanism|
US9886953B2|2015-03-08|2018-02-06|Apple Inc.|Virtual assistant activation|
US10216351B2|2015-03-08|2019-02-26|Apple Inc.|Device configuration user interface|
WO2016144385A1|2015-03-08|2016-09-15|Apple Inc.|Sharing user-configurable graphical constructs|
US10567477B2|2015-03-08|2020-02-18|Apple Inc.|Virtual assistant continuity|
US9721566B2|2015-03-08|2017-08-01|Apple Inc.|Competing devices responding to voice triggers|
US9899019B2|2015-03-18|2018-02-20|Apple Inc.|Systems and methods for structured stem and suffix language models|
US10016162B1|2015-03-23|2018-07-10|Google Llc|In-ear health monitoring|
US9983747B2|2015-03-26|2018-05-29|Google Llc|Two-layer interactive textiles|
CN106019606B|2015-03-27|2019-08-27|Lenovo Co., Ltd.|Electronic equipment|
US9848780B1|2015-04-08|2017-12-26|Google Inc.|Assessing cardiovascular function using an optical sensor|
US20180101290A1|2015-04-14|2018-04-12|Huawei Technologies Co., Ltd.|Wearable device, touchscreen thereof, touch operation method thereof, and graphical user interface thereof|
US9842105B2|2015-04-16|2017-12-12|Apple Inc.|Parsimonious continuous-space phrase representations for natural language processing|
WO2016171467A1|2015-04-23|2016-10-27|Samsung Electronics Co., Ltd.|Electronic device including rotary member and display method thereof|
EP3958557A1|2015-04-23|2022-02-23|Apple Inc.|Digital viewfinder user interface for multiple cameras|
KR20160128120A|2015-04-28|2016-11-07|LG Electronics Inc.|Watch type terminal and control method thereof|
USD760285S1|2015-04-28|2016-06-28|Include Fitness, Inc.|Display screen with an animated graphical user interface|
USD761297S1|2015-04-28|2016-07-12|Include Fitness, Inc.|Display screen with an animated graphical user interface|
EP3289433A1|2015-04-30|2018-03-07|Google LLC|Type-agnostic rf signal representations|
USD775657S1|2015-04-30|2017-01-03|Brillio LLC|Display screen with animated graphical user interface|
CN107430443B|2015-04-30|2020-07-10|Google LLC|Gesture recognition based on wide field radar|
EP3885882A1|2015-04-30|2021-09-29|Google LLC|Rf-based micro-motion tracking for gesture tracking and recognition|
KR102356449B1|2015-05-13|2022-01-27|Samsung Electronics Co., Ltd.|Apparatus and method for providing additional information according to rotary input|
USD795917S1|2015-05-17|2017-08-29|Google Inc.|Display screen with an animated graphical user interface|
US10080528B2|2015-05-19|2018-09-25|Google Llc|Optical central venous pressure measurement|
US10088908B1|2015-05-27|2018-10-02|Google Llc|Gesture detection and interactions|
US10083688B2|2015-05-27|2018-09-25|Apple Inc.|Device voice control for selecting a displayed affordance|
US10200824B2|2015-05-27|2019-02-05|Apple Inc.|Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device|
US9693592B2|2015-05-27|2017-07-04|Google Inc.|Attaching electronic components to interactive textiles|
US10127220B2|2015-06-04|2018-11-13|Apple Inc.|Language identification from short strings|
US10376195B1|2015-06-04|2019-08-13|Google Llc|Automated nursing assessment|
US9578173B2|2015-06-05|2017-02-21|Apple Inc.|Virtual assistant aided communication with 3rd party service in a communication session|
US9916075B2|2015-06-05|2018-03-13|Apple Inc.|Formatting content for a reduced-size user interface|
US10101822B2|2015-06-05|2018-10-16|Apple Inc.|Language input correction|
US10186254B2|2015-06-07|2019-01-22|Apple Inc.|Context-based endpoint detection|
US11025565B2|2015-06-07|2021-06-01|Apple Inc.|Personalized prediction of responses for instant messaging|
US10275116B2|2015-06-07|2019-04-30|Apple Inc.|Browser with docked tabs|
US10255907B2|2015-06-07|2019-04-09|Apple Inc.|Automatic accent detection using acoustic models|
US9939923B2|2015-06-19|2018-04-10|Microsoft Technology Licensing, Llc|Selecting events based on user input and current context|
US20160378747A1|2015-06-29|2016-12-29|Apple Inc.|Virtual assistant for media playback|
KR20170019040A|2015-08-11|2017-02-21|Samsung Electronics Co., Ltd.|Organism condition information providing method and electronic device supporting the same|
US10114450B2|2015-08-31|2018-10-30|Semiconductor Energy Laboratory Co., Ltd.|Information processing device|
US10747498B2|2015-09-08|2020-08-18|Apple Inc.|Zero latency digital assistant|
US10671428B2|2015-09-08|2020-06-02|Apple Inc.|Distributed personal assistant|
US9697820B2|2015-09-24|2017-07-04|Apple Inc.|Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks|
US11010550B2|2015-09-29|2021-05-18|Apple Inc.|Unified language modeling framework for word prediction, auto-completion and auto-correction|
US10366158B2|2015-09-29|2019-07-30|Apple Inc.|Efficient word encoding for recurrent neural network language models|
US10817065B1|2015-10-06|2020-10-27|Google Llc|Gesture recognition using multiple antenna|
KR20170041467A|2015-10-07|2017-04-17|Samsung Electronics Co., Ltd.|Wearable Device for Controlling Application Executed on Device and Method Thereof|
EP3371855A1|2015-11-04|2018-09-12|Google LLC|Connectors for connecting electronics embedded in garments to external devices|
US10691473B2|2015-11-06|2020-06-23|Apple Inc.|Intelligent automated assistant in a messaging environment|
US10049668B2|2015-12-02|2018-08-14|Apple Inc.|Applying neural network language models to weighted finite state transducers for automatic speech recognition|
US10162311B2|2015-12-15|2018-12-25|Qualcomm Incorporated|Display panel with minimum borders|
US10223066B2|2015-12-23|2019-03-05|Apple Inc.|Proactive assistance based on dialog communication between devices|
JP6768811B2|2016-01-14|2020-10-14|Huawei Technologies Co., Ltd.|Electronic device and method for operating such an electronic device|
WO2017134541A1|2016-02-03|2017-08-10|Semiconductor Energy Laboratory Co., Ltd.|Information processing device|
US10446143B2|2016-03-14|2019-10-15|Apple Inc.|Identification of voice inputs providing credentials|
WO2017192167A1|2016-05-03|2017-11-09|Google Llc|Connecting an electronic component to an interactive textile|
US10175781B2|2016-05-16|2019-01-08|Google Llc|Interactive object with multiple electronics modules|
USD799540S1|2016-05-23|2017-10-10|IncludeFitness, Inc.|Display screen with an animated graphical user interface|
US9934775B2|2016-05-26|2018-04-03|Apple Inc.|Unit-selection text-to-speech synthesis based on predicted concatenation parameters|
US10911255B2|2016-05-31|2021-02-02|Honeywell International Inc.|Devices, methods, and systems for hands free facility status alerts|
US9972304B2|2016-06-03|2018-05-15|Apple Inc.|Privacy preserving distributed evaluation framework for embedded personalized systems|
US11227589B2|2016-06-06|2022-01-18|Apple Inc.|Intelligent list reading|
US10249300B2|2016-06-06|2019-04-02|Apple Inc.|Intelligent list reading|
US10049663B2|2016-06-08|2018-08-14|Apple, Inc.|Intelligent automated assistant for media exploration|
DK179309B1|2016-06-09|2018-04-23|Apple Inc|Intelligent automated assistant in a home environment|
US10509862B2|2016-06-10|2019-12-17|Apple Inc.|Dynamic phrase expansion of language input|
US10192552B2|2016-06-10|2019-01-29|Apple Inc.|Digital assistant providing whispered speech|
US10586535B2|2016-06-10|2020-03-10|Apple Inc.|Intelligent digital assistant in a multi-tasking environment|
US10490187B2|2016-06-10|2019-11-26|Apple Inc.|Digital assistant providing automated status report|
US10067938B2|2016-06-10|2018-09-04|Apple Inc.|Multilingual word prediction|
DK179343B1|2016-06-11|2018-05-14|Apple Inc|Intelligent task discovery|
DK201670540A1|2016-06-11|2018-01-08|Apple Inc|Application integration with a digital assistant|
AU2017100667A4|2016-06-11|2017-07-06|Apple Inc.|Activity and workout updates|
DK179415B1|2016-06-11|2018-06-14|Apple Inc|Intelligent device arbitration and control|
DK179049B1|2016-06-11|2017-09-18|Apple Inc|Data driven natural language event detection and classification|
US11216119B2|2016-06-12|2022-01-04|Apple Inc.|Displaying a predetermined view of an application|
DK201670580A1|2016-06-12|2018-01-02|Apple Inc|Wrist-based tactile time feedback for non-sighted users|
US10873786B2|2016-06-12|2020-12-22|Apple Inc.|Recording and broadcasting application visual output|
US9716825B1|2016-06-12|2017-07-25|Apple Inc.|User interface for camera effects|
KR20180012438A|2016-07-27|2018-02-06|Samsung Electronics Co., Ltd.|Electronic device and method for operating electronic device|
USD817341S1|2016-08-26|2018-05-08|Samsung Electronics Co., Ltd.|Display screen or portion thereof with transitional graphical user interface|
KR20180024528A|2016-08-30|2018-03-08|Samsung Electronics Co., Ltd.|Method for providing visual effects according to interaction based on bezel and electronic device for the same|
US10474753B2|2016-09-07|2019-11-12|Apple Inc.|Language identification using recurrent neural networks|
US10043516B2|2016-09-23|2018-08-07|Apple Inc.|Intelligent automated assistant|
CN107037955A|2016-10-24|2017-08-11|Alibaba Group Holding Limited|Method and device for displaying image information|
US10579150B2|2016-12-05|2020-03-03|Google Llc|Concurrent detection of absolute distance and relative movement for sensing action gestures|
US10593346B2|2016-12-22|2020-03-17|Apple Inc.|Rank-reduced token representation for automatic speech recognition|
US11204787B2|2017-01-09|2021-12-21|Apple Inc.|Application integration with a digital assistant|
DK201770383A1|2017-05-09|2018-12-14|Apple Inc.|User interface for correcting recognition errors|
US10417266B2|2017-05-09|2019-09-17|Apple Inc.|Context-aware ranking of intelligent response suggestions|
US10395654B2|2017-05-11|2019-08-27|Apple Inc.|Text normalization based on a data-driven learning network|
DK201770439A1|2017-05-11|2018-12-13|Apple Inc.|Offline personal assistant|
US10726832B2|2017-05-11|2020-07-28|Apple Inc.|Maintaining privacy of personal information|
DK179496B1|2017-05-12|2019-01-15|Apple Inc.|USER-SPECIFIC Acoustic Models|
DK201770427A1|2017-05-12|2018-12-20|Apple Inc.|Low-latency intelligent automated assistant|
DK179745B1|2017-05-12|2019-05-01|Apple Inc.|SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT|
DK179412B1|2017-05-12|2018-06-06|Apple Inc|Context-Specific User Interfaces|
DK201770431A1|2017-05-15|2018-12-20|Apple Inc.|Optimizing dialogue policy decisions for digital assistants using implicit feedback|
US10845955B2|2017-05-15|2020-11-24|Apple Inc.|Displaying a scrollable list of affordances associated with physical activities|
DK201770432A1|2017-05-15|2018-12-21|Apple Inc.|Hierarchical belief states for digital assistants|
US10311144B2|2017-05-16|2019-06-04|Apple Inc.|Emoji word sense disambiguation|
US10403278B2|2017-05-16|2019-09-03|Apple Inc.|Methods and systems for phonetic matching in digital assistant services|
US20180336275A1|2017-05-16|2018-11-22|Apple Inc.|Intelligent automated assistant for media exploration|
DK179549B1|2017-05-16|2019-02-12|Apple Inc.|Far-field extension for digital assistant services|
US10657328B2|2017-06-02|2020-05-19|Apple Inc.|Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling|
US10445429B2|2017-09-21|2019-10-15|Apple Inc.|Natural language understanding using vocabularies with compressed serialized tries|
US10755051B2|2017-09-29|2020-08-25|Apple Inc.|Rule-based natural language processing|
US10636424B2|2017-11-30|2020-04-28|Apple Inc.|Multi-turn canned dialog|
US10733982B2|2018-01-08|2020-08-04|Apple Inc.|Multi-directional dialog|
US10733375B2|2018-01-31|2020-08-04|Apple Inc.|Knowledge-based framework for improving natural language understanding|
US11112964B2|2018-02-09|2021-09-07|Apple Inc.|Media capture lock affordance for graphical user interface|
US10789959B2|2018-03-02|2020-09-29|Apple Inc.|Training speaker recognition models for digital assistants|
US10592604B2|2018-03-12|2020-03-17|Apple Inc.|Inverse text normalization for automatic speech recognition|
DK201870599A1|2018-03-12|2019-10-16|Apple Inc.|User interfaces for health monitoring|
US10818288B2|2018-03-26|2020-10-27|Apple Inc.|Natural assistant interaction|
US10909331B2|2018-03-30|2021-02-02|Apple Inc.|Implicit identification of translation payload with neural machine translation|
DK201870380A1|2018-05-07|2020-01-29|Apple Inc.|Displaying user interfaces associated with physical activities|
US11145294B2|2018-05-07|2021-10-12|Apple Inc.|Intelligent automated assistant for delivering content from user experiences|
US10928918B2|2018-05-07|2021-02-23|Apple Inc.|Raise to speak|
US10984780B2|2018-05-21|2021-04-20|Apple Inc.|Global semantic word embeddings using bi-directional recurrent neural networks|
US10892996B2|2018-06-01|2021-01-12|Apple Inc.|Variable latency device coordination|
DK180639B1|2018-06-01|2021-11-04|Apple Inc|DISABLING OF ATTENTION-AWARE VIRTUAL ASSISTANT|
DK179822B1|2018-06-01|2019-07-12|Apple Inc.|Voice interaction at a primary device to access call functionality of a companion device|
US10887193B2|2018-06-03|2021-01-05|Apple Inc.|User interfaces for updating network connection settings of external devices|
US10496705B1|2018-06-03|2019-12-03|Apple Inc.|Accelerated task performance|
DK179888B1|2018-09-11|2019-08-27|Apple Inc.|CONTENT-BASED TACTILE OUTPUTS|
US11010561B2|2018-09-27|2021-05-18|Apple Inc.|Sentiment prediction from textual data|
US11128792B2|2018-09-28|2021-09-21|Apple Inc.|Capturing and displaying images with multiple focal planes|
US11170166B2|2018-09-28|2021-11-09|Apple Inc.|Neural typographical error modeling via generative adversarial networks|
US10839159B2|2018-09-28|2020-11-17|Apple Inc.|Named entity normalization in a spoken dialog system|
CN109634542B|2018-10-26|2021-09-03|Nubia Technology Co., Ltd.|Switching display control method, terminal and computer readable storage medium|
US10831346B2|2018-10-30|2020-11-10|International Business Machines Corporation|Ergonomic and sensor analysis based user experience design|
USD922404S1|2019-02-18|2021-06-15|Samsung Electronics Co., Ltd.|Display screen or portion thereof with animated graphical user interface|
USD910654S1|2019-03-05|2021-02-16|Includehealth, Inc.|Display screen with an animated graphical user interface|
SG10201903293XA|2019-04-12|2020-11-27|Kaha Pte Ltd|Enhancing security in home automation system|
DK201970509A1|2019-05-06|2021-01-15|Apple Inc|Spoken notifications|
US11131967B2|2019-05-06|2021-09-28|Apple Inc.|Clock faces for an electronic device|
US10645294B1|2019-05-06|2020-05-05|Apple Inc.|User interfaces for capturing and managing visual media|
US11140099B2|2019-05-21|2021-10-05|Apple Inc.|Providing message response suggestions|
DK180129B1|2019-05-31|2020-06-02|Apple Inc.|User activity shortcut suggestions|
DK201970533A1|2019-05-31|2021-02-15|Apple Inc|Methods and user interfaces for sharing audio|
DK201970534A1|2019-06-01|2021-02-16|Apple Inc|User interfaces for monitoring noise exposure levels|
US11209957B2|2019-06-01|2021-12-28|Apple Inc.|User interfaces for cycle tracking|
US11152100B2|2019-06-01|2021-10-19|Apple Inc.|Health application user interfaces|
US11234077B2|2019-06-01|2022-01-25|Apple Inc.|User interfaces for managing audio exposure|
US11228835B2|2019-06-01|2022-01-18|Apple Inc.|User interfaces for managing audio exposure|
US10996761B2|2019-06-01|2021-05-04|Apple Inc.|User interfaces for non-visual output of time|
USD923652S1|2019-08-23|2021-06-29|Beijing Xiaomi Mobile Software Co., Ltd.|Display screen or portion thereof with animated graphical user interface|
US10852905B1|2019-09-09|2020-12-01|Apple Inc.|Techniques for managing display usage|
WO2021051121A1|2019-09-09|2021-03-18|Apple Inc.|Research study user interfaces|
US20210349426A1|2020-05-11|2021-11-11|Apple Inc.|User interfaces with a character having a visual state based on device activity state and an indication of time|
US11054973B1|2020-06-01|2021-07-06|Apple Inc.|User interfaces for managing media|
US20210373748A1|2020-06-02|2021-12-02|Apple Inc.|User interfaces for health applications|
US11212449B1|2020-09-25|2021-12-28|Apple Inc.|User interfaces for media capture and management|
Legal status:
2018-11-21| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2020-04-07| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2021-11-09| B09A| Decision: intention to grant [chapter 9.1 patent gazette]|
2022-01-25| B16A| Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]|Free format text: TERM OF VALIDITY: 20 (TWENTY) YEARS COUNTED FROM 11/20/2013, SUBJECT TO THE APPLICABLE LEGAL CONDITIONS. |
Priority:
Application number | Filing date | Patent title
US201261728765P|2012-11-20|2012-11-20|
US201261728770P|2012-11-20|2012-11-20|
US201261728773P|2012-11-20|2012-11-20|
US201361773803P|2013-03-06|2013-03-06|
US61/773,803|2013-03-06|
US201361773813P|2013-03-07|2013-03-07|
US201361773815P|2013-03-07|2013-03-07|
US201361773817P|2013-03-07|2013-03-07|
US61/773,813|2013-03-07|
US61/773,817|2013-03-07|
US61/773,815|2013-03-07|
US201361775686P|2013-03-11|2013-03-11|
US201361775688P|2013-03-11|2013-03-11|
US201361775687P|2013-03-11|2013-03-11|
US61/775,687|2013-03-11|
US14/015,890|US10551928B2|2012-11-20|2013-08-30|GUI transitions on wearable electronic device|
PCT/KR2013/010544|WO2014081179A1|2012-11-20|2013-11-20|GUI transitions on wearable electronic device|