Patent abstract:
The presented design is a visual comprehension system for the blind. It consists of special glasses with 3D cameras and software that converts spatial and visual information into sound or tactile signals that are understandable to a blind user. The signals can be synthesized sound that allows shapes to be identified and located in space, so that volumes and spaces can be perceived with an unprecedented level of detail. The use of cochlear headphones, or of sound or touch signals, allows long sessions of use without blocking the ear. When sound is used it is preferably non-verbal, which removes the language barrier and makes learning easier. (Machine-translation by Google Translate, not legally binding)
Publication number: ES2597155A1
Application number: ES201530825
Filing date: 2015-06-12
Publication date: 2017-01-16
Inventor: Antonio QUESADA HERVÁS
Applicant: Eyesynth SL
IPC main classification:
Patent description:

Technical Field of the Invention
The invention relates to assistive devices for people with some limitation or disability. In particular, the invention relates to a support system aimed primarily at blind and visually impaired people.
Background of the invention or State of the Art
Traditionally, visually impaired users rely on basic aids, such as canes and guide dogs to move around or recognize their surroundings.
Although systems with a higher technological level have been developed, they are often invasive and complex to handle. They are also usually too expensive for their use to stop being exclusive.
Current systems usually measure the distance to a single point, for example by using a laser, and warn acoustically whether or not an object is in the way. Such systems do not provide a volumetric analysis of the scene, nor does their response convey nuances associated with changes in position, size or geometry (curves, edges, position with respect to the horizon line).
On the other hand, analysing a scene by measuring distances to many points requires a large computing capacity, which generally makes it unfeasible for use in real time and/or on a portable device.
Therefore, there is a need for effective assistance systems for visually impaired people that are capable of providing an understandable description of the environment.
Brief Description of the Invention
The invention is mainly applicable to people with vision problems. However, it could also be applied in other types of scenarios and circumstances in which the sense of sight cannot be used or an alternative guidance system is needed.
Assistance is provided in the form of a description of the environment, through an interpretation of the objects and obstacles around the user that is transmitted as a haptic signal. The haptic signal is generated from stereo image processing, which yields a representation in which darker areas correspond to farther regions while lighter areas are associated with closer regions. A haptic signal must be understood here as a sound or tactile signal (for example, vibration pulses).
The object of the present invention is a portable system for sound or tactile interpretation of the environment for a blind person, comprising:
- two cameras, separated from each other, to simultaneously capture an image of the environment,
- processing means that combine both images in real time and establish at least one vertical strip containing information about the depth of the elements of the combined image, where said processing means also divide the vertical strip into a plurality of regions; define, for each region, a sound or tactile signal based on its depth and height in the image; and define a sound or tactile output signal from the sound or tactile signals of each region of the vertical strip;
- means for reproducing the sound or tactile output signal.
Preferably, in the operation mode called tracking mode, the vertical strip is central in the combined image, and the user scans the environment by moving.
Preferably, in the so-called full landscape mode of operation, a plurality of lateral vertical strips are established in the combined image on each side of the central vertical strip, and a left and a right lateral sound or tactile signal are defined from the regions of each left lateral strip and each right lateral strip respectively, so that the user can scan the environment without moving.
Preferably, the reproduction means reproduce in stereo, combining a left lateral sound or tactile signal and a right lateral sound or tactile signal.
Preferably, the processing means define an intensity of the sound or tactile signal as a function of the depth of the region.
Preferably, the processing means define a frequency of the sound or tactile signal as a function of the height of the region in the vertical strip.
Preferably, the depth of a region is determined based on the gray level on a depth map of the image of the environment.
Preferably, the region comprises at least one pixel.
Preferably, the system comprises a support structure to be carried by the user, on which the reproduction means and the two cameras can be located.
In some embodiments, the sound or tactile signal is tactile, for example produced by an electroactive polymer (EAP), by an elastomer membrane capable of modifying its shape in response to a voltage, or mechanically, through a small vibration-generating motor.
Alternatively, in other embodiments the sound or tactile signal is audible.
Preferably, the frequency of said sound signal is chosen within the range between 100 Hz and 18000 Hz.
Preferably, the reproduction means are cochlear headphones. Advantageously, these leave the ear free, as the signal is received via bone conduction. The user can thus hold a conversation at the same time without it interfering with the generated sound signal, or vice versa.
Preferably, the support structure is chosen from at least: a pair of glasses, a headband, a neck support, a chest support or a shoulder support.
Preferably, the generated sound signal is non-verbal, to avoid saturating the user with continuous spoken messages which, after prolonged use, cause discomfort and fatigue. In contrast, a non-verbal message is faster to recognize and can be combined with other tasks. A further advantage is that the invention can be used without language barriers.
Brief description of the figures
FIG. 1 shows a simplified block diagram.
FIG. 2 shows the pixelated image of a toroid.
FIG. 3 shows the pixelated image of the processed toroid.
FIG. 4 shows a simplified flow chart.
FIG. 5 shows an embodiment of the invention in the form of glasses.
Detailed description of the invention
For clarity, an embodiment of the invention is described below with reference to the figures, without limitation, focusing on the case of sound signals.
FIG. 1 shows several blocks corresponding to the system. The images are acquired through a pair of cameras 3i, 3d in stereo. Preferably, they are placed on both sides of the face and at the eye level of the user, to facilitate aiming at the region of interest with head movements. The cameras 3i, 3d are aligned in parallel.
The circuitry of the cameras 3i, 3d preprocesses the captured image in order to deliver a stable flow of images, avoiding artifacts and geometric or chromatic aberrations. The circuitry of the sensors provides a pair of images synchronized in time.
This video stream is transmitted to a process unit 2. Process unit 2 is preferably a specific hardware design that implements the image-to-audio conversion algorithm. A cable 6 is provided to connect the cameras 3i, 3d with the process unit 2. In other, more complex embodiments, wireless transmission is contemplated.
Process unit 2 converts the stereoscopic images into a grayscale depth map. Beforehand, a disparity map (without scale information) is generated.


A depth map is understood as a grayscale image in which absolute black means maximum distance (depending on the scale used) and pure white means maximum proximity (depending on the scale used). The rest of the gray range specifies intermediate distances.
A disparity map is understood as the image that results from superposing a pair of stereo images, which are subjected to a mathematical process. The binocular disparity map expresses in one image the pixel-level differences between two stereo images. A mathematical disparity algorithm is applied. Knowing the distance between the cameras and having calibration files for them, the differences between pixels can be translated into real distances; in essence, the distance of a point is inversely proportional to its disparity. Thanks to this process, it is known how far from the camera each portion (of pixel size) of the captured image lies. A gray scale is used to express that distance.
Next comes the conversion to a depth map. After a mathematical process in which a distance / gray-level scale is applied, a depth map is obtained.
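By way of illustration only, the following is a minimal sketch of how such a disparity-to-depth-map conversion could be implemented with a standard stereo matcher; the matcher parameters, the focal length, the camera baseline and the 6 m maximum range are assumed values for the sketch, not figures taken from the patent.

```python
import cv2
import numpy as np

def depth_map_from_stereo(left_gray, right_gray, focal_px=700.0, baseline_m=0.06):
    """Build a grayscale depth map (white = near, black = far) from a rectified stereo pair."""
    # Semi-global block matching yields a disparity map (fixed point, scaled by 16).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = 0.1                 # avoid division by zero in occluded areas
    # Triangulation: distance Z = f * B / d (focal length in pixels, baseline in metres).
    depth_m = focal_px * baseline_m / disparity
    # Map distances onto the 0-255 gray scale described above: near -> white, far -> black.
    max_range_m = 6.0                               # e.g. the "far" analysis distance
    nearness = np.clip(1.0 - depth_m / max_range_m, 0.0, 1.0)
    return (nearness * 255).astype(np.uint8)
```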
From the generated depth map, a conversion algorithm developed for this purpose is applied, which allows the spatial depth data to be converted to audio.
The result is that, from a pair of initial stereo images, a non-verbal stereo sound signal is obtained that is transmitted to the user through cochlear headphones 4i, 4d. In this way it is possible to define an audiovisual language that intuitively and faithfully translates visual information into auditory information for the user.
FIG. 2 shows an example of a low-resolution depth map of a toroid. Each pixel in the depth map has an associated coordinate (X, Y) that corresponds to the pixel positions captured by the cameras. In addition, each pixel has an associated gray level (G) that provides information about the depth, that is, the distance at which the region associated with that pixel is located.
FIG. 3 illustrates, in a simplified way, a division of the central column or vertical strip into 3 zones according to their gray level. Zone "A" is black, zone "B" is white and zone "C" is gray. Accordingly, 3 different intensity values are associated with the zones (silence for zone "A", maximum volume for zone "B" and an intermediate sound intensity for zone "C"). It should be understood that, in practice, many more gray-level ranges, and therefore associated sound intensities, are usually defined. The sound signal is composed by adding the individual signals corresponding to the pixels of each zone.
With the information coming from the depth map, a matrix or table is constructed with the information about the environment at that moment. This information must be converted to audio according to the following considerations.
- With each pair of stereo frames a disparity map is produced: given the difference between pixels of the images and having camera data (FOV, interocular distance, specific calibration), triangulations can be established, thereby associating pixels with distances in the real world. With this information, the image is processed to give a depth map. This is a contour image of the objects, with a gray scale expressing their actual volumes and distances. In this way we have a single combined image containing the spatial information of the scene.
- Example of operation in tracking mode: take FIG. 3. To analyze the image, the user moves the head from left to right in the gesture of negation. In this way, the central cursor (shown in red) completely tracks the toroid. The generated sound is heard in the center of the stereo panorama (since it will always be the center of the axis of vision). With this tracking, the horizontal size of the object is obtained (the movement of the neck serves as reference) and the vertical size gives the frequency range.
- Full landscape mode: let us take FIG. 2 for analysis. In this case the neck does not have to be moved in order to interpret what lies ahead. The right side of the toroid will sound on the right of the stereo panorama, and analogously the central and left parts will sound. The degree of opening of the stereo panorama indicates the horizontal size of the toroid. The vertical size is expressed by the frequency range, as in tracking mode.
- The correspondence of the image with the stereo sound is as follows: starting from the image of a landscape, the sound signal corresponds to the zones analyzed. The left zone of the image will sound in the left stereo panorama, the right zone in the right stereo panorama, and the central zone, accordingly, in the center of the stereo panorama (or, what is the same, 50% left + 50% right).
- The frequency range that encodes the height factor goes from 100 Hz to 18000 Hz, divided into 128 equal fragments. That range has been chosen because it is wide enough to represent sound in detail and narrow enough for an average person to cover it without problems (the human hearing range goes from 20 Hz to 20000 Hz). The base frequency (100 Hz) is associated with the first, lowest row of pixels on the screen, and the highest frequency (18000 Hz) with the upper row of pixels. The other frequency fragments are assigned in between. If the image were 128 pixels high, each row would correspond to one fragment. If the resolution changes, the fragments are assigned proportionally to the height. This method serves for systems with low computing power. If raw power is available and the sound synthesis is generated in real time, the frequency range is divided by the number of pixels of height and each frequency segment is assigned to each pixel, without interpolations or averaging (a code sketch after this list illustrates this mapping).
- The spatial distance factor with respect to the user (Z axis) is associated with the volume factor generated by the algorithm, so that a black pixel has no noticeable volume (i.e., it is at infinity) and a white pixel has the maximum volume (0 dB). This scale is flexible and adapts to the use of different measuring ranges (40 cm, 2 m, 6 m).
- The sound duration per pixel is directly proportional to its "presence" on screen. If a pixel remains continuously white, the sound is repeated continuously. Note: since white noise (of a random nature) is used, no cyclic repetition patterns ("loops") or cut-and-repeat points are perceived in the sound.
- The analysis of the central column is only used in tracking mode. In principle, a central column of 1 pixel width can be used. However, in order to soften the sound and avoid artifacts, the pixel values of the 3 central columns, or even 5, are averaged, depending on the resolution of the depth map (which in turn depends on the computing power).
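To make these rules concrete, here is a minimal sketch, under the assumption of simple sine synthesis (one of the alternatives mentioned further below, alongside band-pass filtered noise or pre-calculated samples), of how one vertical column of the depth map could be turned into an audio buffer: the row selects one of 128 frequency fragments between 100 Hz and 18000 Hz, and the gray level sets the amplitude. Function and parameter names are illustrative, not taken from the patent.

```python
import numpy as np

SAMPLE_RATE = 44100
F_MIN, F_MAX, N_FRAGMENTS = 100.0, 18000.0, 128

def column_to_audio(column_gray, duration_s=0.1):
    """Synthesize the mono sound of one depth-map column.

    column_gray: 1-D array of gray levels, index 0 = bottom row; 255 = nearest, 0 = farthest.
    """
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    height = len(column_gray)
    signal = np.zeros_like(t)
    for row, gray in enumerate(column_gray):
        if gray == 0:                                   # black pixel: maximum distance, silence
            continue
        # Map the row to one of the 128 equal frequency fragments (bottom row = 100 Hz, top row = 18 kHz).
        fragment = int(row / height * N_FRAGMENTS)
        freq = F_MIN + fragment * (F_MAX - F_MIN) / (N_FRAGMENTS - 1)
        amplitude = gray / 255.0                        # white (255) = maximum volume, mid-gray = intermediate
        signal += amplitude * np.sin(2 * np.pi * freq * t)
    peak = np.max(np.abs(signal))                       # normalize the mix of "sound units" to avoid clipping
    return signal / peak if peak > 0 else signal
```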
A volume intensity (I) is associated with the gray-scale value of a pixel. Thus, a pixel with values 0,0,0 (RGB model) corresponds to a remote region and the associated intensity is silence (I = 0). A pixel with values 255,255,255 corresponds to a very close region and the signal volume is maximum (I = 0 dB). In this way, each pixel can be seen as a "sound unit" from which an audio composition is made. Preferably, the sound frequency ranges from 100 Hz to 18000 Hz.
Depending on the mode of operation, the X position of the pixel can be interpreted in two ways.
Tracking mode: only the signals corresponding to the pixels in the central column will sound (X = 1/2 of the horizontal screen resolution, Y = 0 to Y = n, where n is the vertical screen size in pixels). The scene is scanned when the user shakes his head in the gesture of negation. This is analogous to scanning with a cane.
Full landscape mode: several columns of pixels associated with the scene will sound simultaneously. With this mode it is not necessary to scan; the image is represented (or "sounds") in full. For example, the further to the right the pixels are, the more they will sound on the right of the stereo panorama, and likewise for the central and left regions.
Note: full landscape mode requires high computational power, so depending on the performance of the process unit 2, instead of sounding all the columns of the image, it can be optimized by using 5 columns: central, 45º, -45º, 80º, -80º. More columns may be used depending on the processing power.
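A possible way to realize this left/right correspondence, sketched here as an assumption rather than as the patent's actual implementation, is constant-power panning of each analysed column according to its horizontal position, reusing a column synthesizer such as the one sketched above:

```python
import numpy as np

def pan_column(mono_signal, x, width):
    """Place the mono signal of one column in the stereo panorama.

    x = 0 is the leftmost column and x = width - 1 the rightmost; the central
    column ends up at 50% left + 50% right, as described above.
    """
    pan = x / (width - 1)                        # 0.0 = full left, 1.0 = full right
    left_gain = np.cos(pan * np.pi / 2)          # constant-power panning law
    right_gain = np.sin(pan * np.pi / 2)
    return np.stack([left_gain * mono_signal, right_gain * mono_signal])

def full_landscape_mix(columns, synth, width):
    """Mix several analysed columns (e.g. central, +/-45 deg, +/-80 deg) into one stereo buffer."""
    stereo = None
    for x, column_gray in columns:               # columns: list of (x position, gray-level column)
        contribution = pan_column(synth(column_gray), x, width)
        stereo = contribution if stereo is None else stereo + contribution
    peak = np.max(np.abs(stereo))
    return stereo / peak if peak > 0 else stereo
```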
The Y position of the pixel (object height) defines how it sounds in terms of frequency: a band-pass filter is used (or a generated sine frequency, or a pre-calculated sample with a specific frequency range, the alternatives depending on the computing power of the device), so that pixels in the upper zone sound high-pitched and those in the lower zone sound low-pitched. The portion of the sound spectrum covered by each pixel is defined by the number of pixels along Y.
Simple example:
To clarify how the sound is generated from the depth image, this example is presented. Suppose tracking mode has been selected and a depth image such as FIG. 3 has been obtained, in which, as an approximation, only 3 gray levels are distinguished. In the central column there are therefore (from bottom to top): 10 black pixels, 12 white pixels, 2 black pixels, 8 gray pixels and 15 black pixels. Assume that white is assigned 0 dB, gray -30 dB and black -∞ dB.


The intensity of the signal at that moment would be the analog mix of all these signals. The user would perceive different frequencies according to the height position of the pixels: lower-pitched for the pixels of smaller height and higher-pitched for those of greater height. The sound generated by this column can thus be divided into a low-pitched component with a high sound intensity (zone B) and a component of intermediate intensity and higher pitch (zone C).
This signal would be generated for both left and right channels (and would be played respectively on headphones 4i, 4d).
When the user changes the position of the cameras by turning the head, the depth image will be modified and, with it, the associated sound signal.
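The decibel values in this example translate into linear amplitude factors through the usual 20·log10 relation; the short check below (the helper name is hypothetical) shows the relative levels of the three zones:

```python
def db_to_amplitude(level_db):
    """Convert a level in dB (0 dB = full scale) into a linear amplitude factor."""
    if level_db == float("-inf"):
        return 0.0                                   # black pixels: silence
    return 10 ** (level_db / 20)

# Zones of the central column of FIG. 3: white = 0 dB, gray = -30 dB, black = -inf dB.
for zone, level in [("B (white, near)", 0.0), ("C (gray, intermediate)", -30.0), ("A (black, far)", float("-inf"))]:
    print(f"zone {zone}: amplitude factor {db_to_amplitude(level):.3f}")
# -> 1.000 for zone B, 0.032 for zone C and 0.000 for zone A
```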
FIG. 4 shows a flow chart with some of the important steps carried out in tracking mode: a first step P1 of image capture by the cameras 3i, 3d; a processing step P2 to generate the depth map; an assignment step P3 to associate a frequency and a sound intensity to each pixel or group of pixels in the central column of the depth map; and a step P4 generating the resulting sound signal corresponding to the central column.
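Put together, the tracking-mode loop of FIG. 4 could look roughly like the sketch below; it reuses the hypothetical helpers sketched earlier (depth_map_from_stereo, column_to_audio), and the capture and playback callables stand in for camera and audio APIs that the patent does not specify.

```python
import numpy as np

def tracking_mode_loop(capture_stereo_pair, play_stereo):
    """Hypothetical main loop for tracking mode (steps P1-P4 of FIG. 4)."""
    while True:
        left, right = capture_stereo_pair()                     # P1: acquire a synchronized stereo pair
        depth = depth_map_from_stereo(left, right)              # P2: grayscale depth map
        mid = depth.shape[1] // 2
        column = depth[:, mid - 1 : mid + 2].mean(axis=1)       # average the 3 central columns
        column = column[::-1]                                   # index 0 = bottom row of the image
        mono = column_to_audio(column)                          # P3: frequency and intensity per pixel/region
        play_stereo(np.stack([mono, mono]))                     # P4: monaural signal on both earphones 4i, 4d
```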
In FIG. 5 an embodiment of the invention implemented in glasses 1 is illustrated.
However, it can also be implemented in other types of products that serve as support. For example, it can be implemented in a cap, headband, neck support, chest support or shoulder support. The advantage of glasses is that they are comfortable to wear and allow, on the one hand, the placement of the headphones 4i, 4d in the desired position and, on the other, the precise aiming of the cameras 3i, 3d at the region of interest. The process unit 2 is designed to be carried by the user in a pocket or on a belt; it is planned in the future to reduce its size so as to integrate it into the glasses 1. While they are separate, a cable 6 carries the information captured by the cameras 3i, 3d to the process unit 2. Once this information has been processed, the process unit 2 transmits the corresponding audio signals to the headphones 4i, 4d.
The amount of information and detail conveyed by the sound makes it possible to identify shapes and spaces with a precision unprecedented so far. In tests carried out with blind people, it has been found that, after a short training period, the system allows specific shapes to be recognized by what their associated sound is like. For example, bottles, glasses and plates on a table have sound characteristics that make it possible to distinguish them.
In order to transmit the sound, cochlear headphones are preferably used, leaving the ear canal free. This improves user comfort, greatly reduces hearing fatigue and is much more hygienic for extended sessions of use.
In one embodiment, an interface associated with the processing unit 2 is provided with a range selection button to determine the analysis distance, for example near, normal and far, with distances of 40 cm, 2 m and 6 m, respectively. Pressing the button selects the distances cyclically. The range selection typically serves to adapt the scope to different scenarios and circumstances, for example 40 cm to handle objects on a table, 2 m to move around the house, and 6 m to cross the street.
In one embodiment, the interface associated with the processing unit 2 is provided with an analysis mode button. The selection between modes is cyclic.
Tracking mode: analysis only in the central area of the image. The user rotates the head cyclically from left to right, scanning the scene in a similar way as with a cane. The sound is monaural.
Full landscape mode: the analysis is performed over the entire image and the sound is stereo. In this way, the user can perceive shapes and spaces throughout the entire field of vision simultaneously. For example, on the left (left stereo panorama) a column is perceived, in the center (central stereo panorama) a low table is perceived, and on the right (right stereo panorama) the way is clear. This mode is more complex in terms of sound, offering more information than tracking mode; it is easy to master but requires somewhat more training.
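One way to model these two cyclic buttons is a small state holder that advances to the next option on every press; the distances and mode names come from the description above, while the class itself is only an illustrative assumption:

```python
from itertools import cycle

class InterfaceState:
    """Cyclic selection of analysis distance and analysis mode."""

    def __init__(self):
        self._ranges = cycle([("near", 0.4), ("normal", 2.0), ("far", 6.0)])   # metres
        self._modes = cycle(["tracking", "full_landscape"])
        self.range = next(self._ranges)
        self.mode = next(self._modes)

    def press_range_button(self):
        """Range button: near (40 cm) -> normal (2 m) -> far (6 m) -> near -> ..."""
        self.range = next(self._ranges)
        return self.range

    def press_mode_button(self):
        """Mode button: tracking -> full landscape -> tracking -> ..."""
        self.mode = next(self._modes)
        return self.mode

ui = InterfaceState()
ui.press_range_button()     # switches from the initial "near" range to "normal" (2 m)
```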

Claims
[1]
1. Portable system for sound or tactile interpretation of the environment for a blind person, characterized in that it comprises:
- two cameras (3i, 3d), separated from each other, configured to simultaneously capture an image of the environment,
- processing means (2) configured to combine both images in real time and to establish at least one vertical strip containing information about the depth of the elements of the combined image, where said processing means (2) are further configured to divide the vertical strip into a plurality of regions; define, for each region, a sound or tactile signal based on the depth of the region and the height of the region; and define a sound or tactile output signal from the sound or tactile signals of each region of the vertical strip;
- generation means (4i, 4d) for the sound or tactile output signal.
[2]
2. System according to claim 1, characterized in that the vertical strip is central in the combined image.
[3]
3. System according to claim 2, characterized in that the processing means (2) also establish a plurality of lateral vertical strips in the combined image, on each side of the central vertical strip, and define a left lateral and a right lateral sound or tactile signal from the regions of each left lateral strip and each right lateral strip, respectively.
[4]
4. System according to claim 3, characterized in that the generation means (4i, 4d) operate in stereo, combining a left lateral sound or tactile signal and a right lateral sound or tactile signal.
[5]
5. System according to any one of the preceding claims, characterized in that the processing means (2) define an intensity of the sound or tactile signal as a function of the depth of the region.
[6]
6. System according to claim 5, characterized in that the processing means (2) define a frequency of the sound or tactile signal as a function of the height of the region in the vertical strip.
[7]
7. System according to claim 6, characterized in that the processing means (2) determine the depth of a region according to the grayscale color coding on a depth map of the image of the environment.
[8]
8. System according to claim 7, characterized in that the region comprises at least one pixel.
[9]
9. System according to any one of the preceding claims, characterized in that it comprises a support structure (1) to be carried by the user configured to locate the reproduction means (4i, 4d) and the two cameras (3i, 3d).
[10]
10. System according to any one of the preceding claims, characterized in that the sound or tactile signal is a tactile signal.
[11]
11. System according to any one of the preceding claims, characterized in that the sound or tactile signal is a sound signal.
[12]
12. System according to claim 11, characterized in that the frequency of the sound signal is chosen within the range between 100 Hz and 18000 Hz.
[13]
13. System according to claim 12, characterized in that the generation means (4i, 4d) are cochlear earphones.
[14]
14. System according to any one of the preceding claims, characterized in that the support structure is chosen from at least: a pair of glasses (1), a headband, a neck support, a chest support, or a shoulder support.

 Fig. 1 
 Fig. 2 

 Fig. 3 

 Fig. 4 

 Fig. 5 
Similar technologies:
Publication number | Publication date | Patent title
ES2597155B1|2017-09-18|PORTABLE SOUND OR TOUCH INTERPRETATION SYSTEM OF THE ENVIRONMENT FOR AN INVIDENT
JP2020092448A|2020-06-11|Technique for directing audio in augmented reality system
US10111013B2|2018-10-23|Devices and methods for the visualization and localization of sound
US9579236B2|2017-02-28|Representing visual images by alternative senses
CN204744865U|2015-11-11|Device for environmental information around reception and registration of visual disability personage based on sense of hearing
Hoang et al.2017|Obstacle detection and warning system for visually impaired people based on electrode matrix and mobile Kinect
Brimijoin et al.2012|Undirected head movements of listeners with asymmetrical hearing impairment during a speech-in-noise task
JP2015041936A|2015-03-02|Image display device, image processing apparatus and image processing method
González-Mora et al.2006|Seeing the world by hearing: Virtual Acoustic Space | a new space perception system for blind people.
Strumillo et al.2018|Different approaches to aiding blind persons in mobility and navigation in the “Naviton” and “Sound of Vision” projects
Dunai et al.2015|Virtual sound localization by blind people
ES2692828T3|2018-12-05|Assistance procedure in following up a conversation for a person with hearing problems
JP4955718B2|2012-06-20|Stereoscopic display control apparatus, stereoscopic display system, and stereoscopic display control method
CN110991336A|2020-04-10|Auxiliary perception method and system based on sensory substitution
JP2019125278A|2019-07-25|Information processing device, information processing method, and recording medium
ES2517765A1|2014-11-03|Device and method of spatial analysis, storage and representation by means of sounds |
Matta et al.2005|Auditory eyes: Representing visual information in sound and tactile cues
US11259134B2|2022-02-22|Systems and methods for enhancing attitude awareness in telepresence applications
KR101862381B1|2018-05-29|Mobile terminal device for providing image matching with focus of user's two eyes
Skulimowski et al.2016|Interactive sonification of the u-disparity maps of 3d scenes
Kawata et al.2017|Study on Kinect-Based Sonification System for Blind Spot Warning
Feakes2018|Auditory Displays and Assistive Technologies: the use of head movements by visually impaired individuals and their implementation in binaural interfaces
Silva et al.2011|Perceiving graphical and pictorial information via touch and hearing
CN112506336A|2021-03-16|Head mounted display with haptic output
JP2011067479A|2011-04-07|Image auralization apparatus
Patent family:
Publication number | Publication date
US20180177640A1|2018-06-28|
JP2018524135A|2018-08-30|
ES2780725T3|2020-08-26|
CN107708624A|2018-02-16|
IL255624D0|2018-01-31|
US11185445B2|2021-11-30|
EP3308759A1|2018-04-18|
EP3308759B1|2019-11-27|
JP6771548B2|2020-10-21|
EP3308759A4|2019-02-27|
AU2016275789B2|2021-03-11|
PT3308759T|2020-04-01|
KR20180018587A|2018-02-21|
RU2017144052A3|2019-10-29|
MX2017015146A|2018-03-28|
DK3308759T3|2020-03-02|
CN107708624B|2021-12-14|
ES2597155B1|2017-09-18|
HK1248093A1|2018-10-12|
IL255624A|2021-04-29|
AR104959A1|2017-08-30|
WO2016198721A1|2016-12-15|
AU2016275789A1|2018-01-25|
RU2719025C2|2020-04-16|
RU2017144052A|2019-07-12|
BR112017026545A2|2018-08-14|
CO2017012744A2|2018-02-20|
CA2986652A1|2016-12-15|
References cited:
Publication number | Filing date | Publication date | Applicant | Patent title

US3172075A|1959-11-27|1965-03-02|Nat Res Dev|Apparatus for furnishing information as to positioning of objects|
EP0008120B1|1978-08-14|1984-02-15|Leslie Kay|Method of and apparatus for providing information as to the existence and/or position of objects|
EP0410045A1|1989-07-27|1991-01-30|Koninklijke Philips Electronics N.V.|Image audio transformation system, particularly as a visual aid for the blind|
KR100586893B1|2004-06-28|2006-06-08|삼성전자주식회사|System and method for estimating speaker localization in non-stationary noise environment|
US20070016425A1|2005-07-12|2007-01-18|Koren Ward|Device for providing perception of the physical environment|
US20090122648A1|2007-11-12|2009-05-14|Trustees Of Boston University|Acoustic mobility aid for the visually impaired|
PT104120B|2008-06-30|2010-11-23|Metro Do Porto S A|GUIDANCE, NAVIGATION AND INFORMATION SYSTEM SPECIFICALLY ADAPTED TO BLIND OR AMBITIOUS PEOPLE|
US9370459B2|2009-06-19|2016-06-21|Andrew Mahoney|System and method for alerting visually impaired users of nearby objects|
WO2013018090A1|2011-08-01|2013-02-07|Abir Eliahu|System and method for non-visual sensory enhancement|
WO2018226252A1|2016-12-07|2018-12-13|Second Sight Medical Products, Inc.|Depth filter for visual prostheses|
CN107320296A|2017-06-23|2017-11-07|重庆锦上医疗器械有限公司|The space three-dimensional acoustic expression system and method for visual signal|
US10299982B2|2017-07-21|2019-05-28|David M Frankel|Systems and methods for blind and visually impaired person environment navigation assistance|
CN108245385B|2018-01-16|2019-10-29|曹醒龙|A kind of device helping visually impaired people's trip|
EP3924873A1|2019-02-12|2021-12-22|Can-U-C Ltd.|Stereophonic apparatus for blind and visually-impaired people|
KR20220008659A|2020-07-14|2022-01-21|김재현|Necklace for blinded|
Legal status:
2017-09-18| FG2A| Definitive protection|Ref document number: 2597155 Country of ref document: ES Kind code of ref document: B1 Effective date: 20170918 |
2021-12-03| FD2A| Announcement of lapse in Spain|Effective date: 20211203 |
Priority:
Application number | Filing date | Patent title
ES201530825A|ES2597155B1|2015-06-12|2015-06-12|PORTABLE SOUND OR TOUCH INTERPRETATION SYSTEM OF THE ENVIRONMENT FOR AN INVIDENT|ES201530825A| ES2597155B1|2015-06-12|2015-06-12|PORTABLE SOUND OR TOUCH INTERPRETATION SYSTEM OF THE ENVIRONMENT FOR AN INVIDENT|
ARP160101728A| AR104959A1|2015-06-12|2016-06-10|PORTABLE SOUND OR TOUCH INTERPRETATION SYSTEM OF THE ENVIRONMENT FOR INVIDENT OR VISUAL DEFICIENT PERSONS|
KR1020177037503A| KR20180018587A|2015-06-12|2016-06-10|Portable system that allows the blind or visually impaired to understand the environment by sound or touch|
EP16806940.9A| EP3308759B1|2015-06-12|2016-06-10|Portable system that allows blind or visually impaired persons to interpret the surrounding environment by sound or touch|
US15/578,636| US11185445B2|2015-06-12|2016-06-10|Portable system that allows blind or visually impaired persons to interpret the surrounding environment by sound and touch|
PT168069409T| PT3308759T|2015-06-12|2016-06-10|Portable system that allows blind or visually impaired persons to interpret the surrounding environment by sound or touch|
CN201680034434.0A| CN107708624B|2015-06-12|2016-06-10|Portable system allowing blind or visually impaired people to understand the surroundings acoustically or by touch|
MX2017015146A| MX2017015146A|2015-06-12|2016-06-10|Portable system that allows blind or visually impaired persons to interpret the surrounding environment by sound or touch.|
BR112017026545-1A| BR112017026545A2|2015-06-12|2016-06-10|portable system that allows blind or visually impaired people to interpret the surrounding environment through sound or touch|
DK16806940.9T| DK3308759T3|2015-06-12|2016-06-10|PORTABLE SYSTEM THAT MAKES IT FOR BLIND OR DISABLED PERSONS TO INTERPRET THE ENVIRONMENT THROUGH SOUND OR TOUCH|
RU2017144052A| RU2719025C2|2015-06-12|2016-06-10|Portable system that allows blind or visually impaired persons to interpret the surrounding environment by sound or touch|
PCT/ES2016/070441| WO2016198721A1|2015-06-12|2016-06-10|Portable system that allows blind or visually impaired persons to interpret the surrounding environment by sound or touch|
AU2016275789A| AU2016275789B2|2015-06-12|2016-06-10|Portable system that allows blind or visually impaired persons to interpret the surrounding environment by sound or touch|
ES16806940T| ES2780725T3|2015-06-12|2016-06-10|Portable system of sound or tactile interpretation of the environment for the blind or visually impaired|
JP2018516636A| JP6771548B2|2015-06-12|2016-06-10|A portable system that allows the blind or visually impaired to interpret the surrounding environment by voice or touch.|
CA2986652A| CA2986652A1|2015-06-12|2016-06-10|Portable system that allows blind or visually impaired persons to interpret the surrounding environment by sound or touch|
IL255624A| IL255624A|2015-06-12|2017-11-13|Portable system that allows blind or visually impaired persons to interpret the surrounding environment by sound or touch|
CONC2017/0012744A| CO2017012744A2|2015-06-12|2017-12-12|Portable sound or touch interpretation system of the environment for blind or visually impaired people|
HK18107660.4A| HK1248093A1|2015-06-12|2018-06-13|Portable system that allows blind or visually impaired persons to interpret the surrounding environment by sound or touch|