Patent abstract:
The invention belongs to the field of robots, in particular to an intelligent robot. Known intelligent robots cannot automatically adjust their height based on human height, accurately identify human expressions and gestures, or automatically match appropriate expressions and gestures for interaction. The invention provides an intelligent robot comprising a bottom base and a lower torso welded on top of the bottom base, wherein an upper torso is provided right above the lower torso; the lower torso is mounted with a human sensing unit by bolts; a first placement cavity is formed in the lower torso; and the bottom inner wall of the first placement cavity is mounted with a first push rod motor. The invention can automatically adjust its height based on human height so as to accurately identify human expressions and gestures, and can automatically match appropriate expressions and gestures for interaction. The invention offers high intelligence, a simple structure and convenient usage.
Publication No.: NL2020224A
Application No.: NL2020224
Filing date: 2018-01-02
Publication date: 2018-07-23
Inventor: Zhu Xuan
Applicant: Zhuhai Hengqin Qi Xiang Tech Co Ltd
Primary IPC class:
Patent description:

Intelligent Robot
FIELD
The present invention relates to the technical field of robots, in particular to an intelligent robot.
BACKGROUND
As technology develops, intelligent robots have attracted increasing attention and research and development effort; as they quickly become part of our work and daily life, with increasingly widespread applications, higher requirements are placed on them.
Patent 201510955745.8 discloses an intelligent robot intended to improve on the intelligence of prior-art robots. However, it is still of low intelligence, because it cannot automatically adjust its height based on human height, accurately identify human expressions and gestures, or automatically match appropriate expressions and gestures for interaction.
Patent 201510339278.6 discloses an intelligent robot capable of simulating human walking, attracting the attention of children, and freely avoiding obstacles and walking within a certain range; in addition, it can play learning files to raise children's interest in learning. However, it too is of poor intelligence, because it cannot automatically adjust its height based on human height, accurately identify human expressions and gestures, or automatically match appropriate expressions and gestures for interaction.
SUMMARY
The present invention provides an intelligent robot to solve the problem that prior-art robots are of poor intelligence because they cannot automatically adjust their height based on human height, accurately identify human expressions and gestures, or automatically match appropriate expressions and gestures for interaction.
To achieve the above object, the present invention provides the following technical scheme:
An intelligent robot comprises a bottom base and a lower torso welded on the top of the bottom base; an upper torso is provided right above the lower torso; the lower torso is mounted with a human sensing unit by bolts; a first placement cavity is formed in the lower torso; the bottom inner wall of the first placement cavity is mounted with a first push rod motor by bolts; the output shaft of the first push rod motor is welded to the bottom of the upper torso; the upper torso is mounted with a gesture identification unit by bolts; an arm is movably mounted on each side of the upper torso; a top base is provided right above the upper torso; a second placement cavity is formed in the upper torso; the bottom inner wall of the second placement cavity is mounted with a second push rod motor by bolts; the output shaft of the second push rod motor is welded to the bottom of the top base; and a head is movably arranged on the top of the top base, wherein, the head is mounted with an expression identification unit and a display unit by bolts.
The human sensing unit, the gesture identification unit and the expression identification unit form a sensing identification module; the sensing identification module is connected to a matching module and a data processing module respectively; the matching module is connected to multiple databases, a retrieving module and the data processing module respectively; the retrieving module is connected to the multiple databases, an execution module and the data processing module respectively; the data processing module is connected to a driver module and the multiple databases respectively; the driver module is connected to the first push rod motor and the second push rod motor respectively; and the execution module is connected to the arm and the display unit respectively.
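The paragraph above defines only a connection topology, not an implementation. As a reading aid, the following minimal Python sketch records that topology as an adjacency map; all identifiers are paraphrases of the patent's module names and the code is illustrative only, not part of the invention.

CONNECTIONS = {
    "sensing_identification_module": ["matching_module", "data_processing_module"],
    "matching_module": ["multiple_databases", "retrieving_module", "data_processing_module"],
    "retrieving_module": ["multiple_databases", "execution_module", "data_processing_module"],
    "data_processing_module": ["driver_module", "multiple_databases"],
    "driver_module": ["first_push_rod_motor", "second_push_rod_motor"],
    "execution_module": ["arm", "display_unit"],
}

def downstream(module: str) -> list[str]:
    """Return the components a given module is connected to."""
    return CONNECTIONS.get(module, [])

for module, targets in CONNECTIONS.items():
    print(module, "->", ", ".join(targets))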
Preferably, a first through hole connected to the first placement cavity is formed on the top of the lower torso, and the output shaft of the first push rod motor is mounted in the first through hole in a sliding manner.
Preferably, a second through hole connected to the second placement cavity is formed on the top of the upper torso, and the output shaft of the second push rod motor is mounted in the second through hole in a sliding manner.
Preferably, the human sensing unit is used for human sensing, and sends signals to the data processing module; the gesture identification unit is used for gesture identification, and transmits identification results to the matching module; and the expression identification unit is used for expression identification, and transmits identification results to the matching module.
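For illustration, this unit-to-module handoff can be sketched as a simple callback, as below; the callback interface and all names are assumptions made for readability, not something the patent specifies.

from typing import Callable

class HumanSensingUnit:
    """Sketch of a sensing unit that forwards its signal to a consumer."""
    def __init__(self, on_sense: Callable[[float], None]):
        self.on_sense = on_sense              # wired to the data processing module

    def detect(self, person_height_cm: float) -> None:
        self.on_sense(person_height_cm)       # send the sensing signal onward

# The data processing module is stood in for by a print function here.
sensor = HumanSensingUnit(on_sense=lambda h: print(f"data processing module: human sensed at {h} cm"))
sensor.detect(172.0)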
Preferably, the matching module comprises an expression matching unit and a gesture matching unit, wherein, the expression matching unit and the gesture matching unit are connected to the expression identification unit and the gesture identification unit respectively; the expression matching unit is used for matching expression data in the multiple databases based on identification results of the expression identification unit, and transmitting matching results to the retrieving module; and the gesture matching unit is used for matching gesture data in the multiple databases based on identification results of the gesture identification unit, and transmitting matching results to the retrieving module.
Preferably, the retrieving module comprises an expression retrieve unit and a gesture retrieve unit, wherein, the expression retrieve unit and the gesture retrieve unit are connected to the expression matching unit and the gesture matching unit respectively; the expression retrieve unit is used for retrieving expression data in the multiple databases based on matching results of the expression matching unit, and transmitting retrieved expression data to the execution module; the gesture retrieve unit is used for retrieving gesture data in the multiple databases based on matching results of the gesture matching unit, and transmitting the retrieved gesture data to the execution module.
Preferably, the execution module comprises an expression executing unit and a gesture executing unit, wherein, the expression executing unit and the gesture executing unit are connected to the expression retrieve unit and the gesture retrieve unit respectively, and the expression executing unit and the gesture executing unit are connected to the display unit and the arm respectively; the expression executing unit is used for controlling the display unit to simulate the corresponding expression based on expression data retrieved by the expression retrieve unit; and the gesture executing unit is used for controlling the arm to simulate the corresponding gesture based on gesture data retrieved by the gesture retrieve unit.
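Taken together, the matching, retrieving and executing units form a three-stage pipeline. The sketch below walks one expression through that pipeline; the library contents and function names are invented for illustration and are not defined by the patent.

from typing import Optional

EXPRESSION_LIBRARY = {"smile", "frown", "surprise"}     # recognizable human expressions
CORRESPONDING_EXPRESSION_LIBRARY = {                    # robot response for each expression
    "smile": "smile_back",
    "frown": "concerned_face",
    "surprise": "raised_eyebrows",
}

def match_expression(identified: str) -> Optional[str]:
    """Matching unit: check an identification result against the expression library."""
    return identified if identified in EXPRESSION_LIBRARY else None

def retrieve_expression(matched: str) -> str:
    """Retrieve unit: look up the response in the corresponding expression library."""
    return CORRESPONDING_EXPRESSION_LIBRARY[matched]

def execute_expression(expression_data: str) -> None:
    """Executing unit: drive the display unit to simulate the expression."""
    print("display unit shows:", expression_data)

matched = match_expression("smile")
if matched is not None:
    execute_expression(retrieve_expression(matched))    # prints "display unit shows: smile_back"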
Preferably, the driver module comprises a driving circuit, a first switch circuit and a second switch circuit, wherein, the driving circuit, the first switch circuit and the second switch circuit are connected to the data processing module; the first switch circuit and the second switch circuit are connected to the first push rod motor and the second push rod motor respectively; and the driving circuit is used for driving the first push rod motor and the second push rod motor.
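The patent gives no circuit details; the sketch below simply models the gating behaviour in software, where a motor runs only while its switch circuit is closed. Class and method names are assumptions for illustration.

class DriverModule:
    """Software stand-in for the driving circuit plus two switch circuits."""
    def __init__(self) -> None:
        self.switch_closed = {"first": False, "second": False}

    def set_switch(self, motor: str, closed: bool) -> None:
        # The data processing module opens or closes a switch circuit.
        self.switch_closed[motor] = closed

    def drive(self, motor: str, extend_mm: int) -> None:
        # The driving circuit only operates a motor whose switch is closed.
        if self.switch_closed[motor]:
            print(f"{motor} push rod motor extends {extend_mm} mm")
        else:
            print(f"{motor} push rod motor idle (switch open)")

driver = DriverModule()
driver.set_switch("first", True)   # close the first switch circuit
driver.drive("first", 50)          # first push rod motor extends 50 mm
driver.drive("second", 30)         # second push rod motor idle (switch open)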
Preferably, the multiple databases comprise a corresponding expression library, an expression library, a corresponding gesture library and a gesture library, wherein, the expression library and the gesture library are connected to the matching module; the corresponding expression library and the corresponding gesture library are connected to the retrieving module; expression data in the corresponding expression library correspond to expression data in the expression library; and gesture data in the corresponding gesture library correspond to gesture data in the gesture library.
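The defining property of this four-library layout is the one-to-one pairing between each library and its "corresponding" library. A minimal sketch of that pairing, with invented gesture entries:

GESTURE_LIBRARY = ["wave", "handshake", "thumbs_up"]      # gestures the robot can identify
CORRESPONDING_GESTURE_LIBRARY = {                         # response gesture for each entry
    "wave": "wave_back",
    "handshake": "extend_hand",
    "thumbs_up": "return_thumbs_up",
}

# The correspondence the patent describes: every identifiable gesture has
# exactly one response gesture in the corresponding library.
assert all(g in CORRESPONDING_GESTURE_LIBRARY for g in GESTURE_LIBRARY)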
Preferably, the data processing module is used for controlling operation of the driver module based on the sensing signals of the human sensing unit, and for driving and controlling the sensing identification module, the matching module, the retrieving module and the execution module.
Compared with the prior art, the present invention has the following advantages: 1. Through the human sensing unit, the data processing module, the driver module, the first push rod motor and the second push rod motor, the height of the gesture identification unit and the expression identification unit can be automatically adjusted so as to accurately identify human expressions and gestures. 2. Through the gesture identification unit, the expression identification unit, the matching module, the retrieving module and the execution module, appropriate expressions and gestures can be automatically matched for interaction, realizing high intelligence.
The present invention can automatically adjust its height based on human height so as to accurately identify human expressions and gestures, and can automatically match appropriate expressions and gestures for interaction. It has the advantages of high intelligence, a simple structure and convenient usage.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a structural diagram of the intelligent robot according to the present invention;
FIG. 2 is a sectional structural diagram of the intelligent robot according to the present invention;
FIG. 3 is a block diagram of the working principles of the intelligent robot according to the present invention;
FIG. 4 is a block diagram of the working principles of the sensing identification module of the intelligent robot according to the present invention;
FIG. 5 is a block diagram of the working principles of the matching module of the intelligent robot according to the present invention;
FIG. 6 is a block diagram of the working principles of the retrieving module of the intelligent robot according to the present invention;
FIG. 7 is a block diagram of the working principles of the execution module of the intelligent robot according to the present invention;
FIG. 8 is a block diagram of the working principles of the driver module of the intelligent robot according to the present invention;
FIG. 9 is a block diagram of the working principles of the multiple databases of the intelligent robot according to the present invention.
In the drawings: 1 bottom base, 2 lower torso, 3 upper torso, 4 first placement cavity, 5 first push rod motor, 6 first through hole, 7 top base, 8 second placement cavity, 9 second push rod motor, 10 second through hole, 11 gesture identification unit, 12 head, 13 expression identification unit.
EMBODIMENTS
The following clearly and comprehensively describes the technical scheme according to the embodiments of the present invention in combination with the drawings. Apparently, the embodiments in the following description are merely a part rather than all of the embodiments of the present invention.
Referring to FIGS. 1-9, an intelligent robot comprises a bottom base 1 and a lower torso 2 welded on the top of the bottom base 1, wherein, an upper torso 3 is provided right above the lower torso 2; the lower torso 2 is mounted with a human sensing unit by bolts; a first placement cavity 4 is formed in the lower torso 2; the bottom inner wall of the first placement cavity 4 is mounted with a first push rod motor 5 by bolts; the output shaft of the first push rod motor 5 is welded to the bottom of the upper torso 3; the upper torso 3 is mounted with a gesture identification unit 11 by bolts; an arm is movably mounted on each side of the upper torso 3; a top base 7 is provided right above the upper torso 3; a second placement cavity 8 is formed in the upper torso 3; the bottom inner wall of the second placement cavity 8 is mounted with a second push rod motor 9 by bolts; the output shaft of the second push rod motor 9 is welded to the bottom of the top base 7; and a head 12 is movably mounted on the top of the top base 7, wherein, the head 12 is mounted with an expression identification unit 13 and a display unit by bolts.
The human sensing unit, the gesture identification unit 11 and the expression identification unit 13 form a sensing identification module, wherein, the sensing identification module is connected to a matching module and a data processing module respectively; the matching module is connected to multiple databases, a retrieving module and the data processing module respectively; the retrieving module is connected to the multiple databases, an execution module and the data processing module respectively; the data processing module is connected to a driver module and the multiple databases respectively; the driver module is connected to the first push rod motor 5 and the second push rod motor 9 respectively; and the execution module is connected to the arm and the display unit respectively.
In this embodiment, after sensing a human body, the human sensing unit sends a signal to the data processing module; the data processing module then controls operation of the driving circuit and the opening and closing of the first switch circuit and the second switch circuit. When the first switch circuit is closed, the driving circuit operates the first push rod motor 5 and adjusts the height of the upper torso 3, thereby adjusting the height of the gesture identification unit 11 until it can accurately identify gestures; when the second switch circuit is closed, the driving circuit operates the second push rod motor 9 and adjusts the height of the head 12, thereby adjusting the height of the expression identification unit 13 until it can accurately identify expressions. The expression identification unit 13 and the gesture identification unit 11 identify the human expression and gesture, and transmit the identification results to the expression matching unit and the gesture matching unit respectively. The expression matching unit matches expression data in the expression library according to the identification results of the expression identification unit 13 and transmits the matching results to the expression retrieve unit; the gesture matching unit matches gesture data in the gesture library based on the identification results of the gesture identification unit 11 and transmits the matching results to the gesture retrieve unit. The expression retrieve unit retrieves expression data from the corresponding expression library based on the matching results of the expression matching unit and transmits the retrieved expression data to the expression executing unit; the gesture retrieve unit retrieves gesture data from the corresponding gesture library based on the matching results of the gesture matching unit and transmits the retrieved gesture data to the gesture executing unit. The expression executing unit controls the display unit to simulate the corresponding expression based on the expression data retrieved by the expression retrieve unit; and the gesture executing unit controls the arm to simulate the corresponding gesture based on the gesture data retrieved by the gesture retrieve unit, thus completing the interaction.
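As a compact summary of this embodiment's control flow, the sketch below runs one sense-adjust-identify-respond cycle. The baseline height, the height-to-stroke mapping and every function below are simplifying assumptions; the patent leaves all of these unspecified.

BASELINE_CM = 150.0   # assumed height of the identification units before adjustment

def adjust_height(motor: str, delta_cm: float) -> None:
    """Stand-in for the driver module operating one push rod motor."""
    print(f"{motor} push rod motor moves {delta_cm:+.1f} cm")

def identify_expression() -> str:
    """Stand-in for the expression identification unit."""
    return "smile"

RESPONSES = {"smile": "smile_back", "frown": "concerned_face"}   # library pairing, condensed

def interaction_cycle(person_height_cm: float) -> None:
    delta = person_height_cm - BASELINE_CM
    adjust_height("first", delta)    # raises the upper torso and the gesture unit
    adjust_height("second", delta)   # raises the head and the expression unit
    expression = identify_expression()
    response = RESPONSES.get(expression)          # matching + retrieving in one step
    if response is not None:
        print("display unit shows:", response)    # execution

interaction_cycle(person_height_cm=172.0)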
In this embodiment, a first through hole 6 connected to the first placement cavity 4 is formed on the top of the lower torso 2, and the output shaft of the first push rod motor 5 is mounted in the first through hole 6 in a sliding manner; a second through hole 10 connected to the second placement cavity 8 is formed on the top of the upper torso 3, and the output shaft of the second push rod motor 9 is mounted in the second through hole 10 in a sliding manner. The human sensing unit is used for human sensing, and sends signals to the data processing module; the gesture identification unit 11 is used for gesture identification, and transmits identification results to the matching module; the expression identification unit 13 is used for expression identification, and transmits identification results to the matching module.
The matching module comprises an expression matching unit and a gesture matching unit, wherein, the expression matching unit and the gesture matching unit are connected to the expression identification unit 13 and the gesture identification unit 11 respectively; the expression matching unit is used for matching expression data in the multiple databases based on identification results of the expression identification unit 13, and transmitting matching results to the retrieving module; the gesture matching unit is used for matching gesture data in the multiple databases based on identification results of the gesture identification unit 11, and transmitting matching results to the retrieving module. The retrieving module comprises an expression retrieve unit and a gesture retrieve unit, wherein, the expression retrieve unit and the gesture retrieve unit are connected to the expression matching unit and the gesture matching unit respectively; the expression retrieve unit is used for retrieving expression data from the multiple databases based on matching results of the expression matching unit, and transmitting the retrieved expression data to the execution module; the gesture retrieve unit is used for retrieving gesture data from the multiple databases based on matching results of the gesture matching unit, and transmitting the retrieved gesture data to the execution module. The execution module comprises an expression executing unit and a gesture executing unit, wherein, the expression executing unit and the gesture executing unit are connected to the expression retrieve unit and the gesture retrieve unit respectively, and to the display unit and the arm respectively; the expression executing unit is used for controlling the display unit to simulate the corresponding expression based on expression data retrieved by the expression retrieve unit; the gesture executing unit is used for controlling the arm to simulate the corresponding gesture based on gesture data retrieved by the gesture retrieve unit.
The driver module comprises a driving circuit, a first switch circuit and a second switch circuit, wherein, the driving circuit, the first switch circuit and the second switch circuit are connected to the data processing module; the first switch circuit and the second switch circuit are connected to the first push rod motor 5 and the second push rod motor 9 respectively; and the driving circuit is used for driving the first push rod motor 5 and the second push rod motor 9. The multiple databases comprise a corresponding expression library, an expression library, a corresponding gesture library and a gesture library, wherein, the expression library and the gesture library are connected to the matching module; the corresponding expression library and the corresponding gesture library are connected to the retrieving module; expression data in the corresponding expression library correspond to expression data in the expression library, and gesture data in the corresponding gesture library correspond to gesture data in the gesture library. The data processing module is used for controlling operation of the driver module based on sensing signals of the human sensing unit, and for driving and controlling the sensing identification module, the matching module, the retrieving module and the execution module.
Compared with the prior art, this embodiment has the advantages that: through the human sensing unit, the data processing module, the driver module, the first push rod motor 5 and the second push rod motor 9, the height of the gesture identification unit 11 and the expression identification unit 13 can be automatically adjusted so as to accurately identify human expressions and gestures; and through the gesture identification unit 11, the expression identification unit 13, the matching module, the retrieving module and the execution module, appropriate expressions and gestures can be automatically matched for interaction, thus realizing high intelligence. The present invention can automatically adjust its height based on human height so as to accurately identify human expressions and gestures, and automatically match appropriate expressions and gestures for interaction. Therefore, it has the advantages of high intelligence, a simple structure and convenient usage.
The above embodiments are merely preferred embodiments of the present invention, and should not be used to limit the present invention in any way. Equivalent substitutions or modifications made by those skilled in the art in accordance with the technical scheme and inventive concept of the present invention within the disclosed technical scope shall fall within the protection scope of the present invention.
Claims:
Claims (10)
[1]
An intelligent robot, comprising a bottom base (1) and a lower torso (2) welded to the top of the bottom base (1), wherein an upper torso (3) is provided right above the lower torso (2); the lower torso (2) is fixedly provided with a human sensing unit by bolts, and a first placement cavity (4) is formed in the lower torso (2); the bottom inner wall of the first placement cavity (4) is fixedly provided with a first push rod motor (5) by bolts, and the output shaft of the first push rod motor (5) is welded to the bottom of the upper torso (3); the upper torso (3) is fixedly provided with a gesture identification unit (11) by bolts, an arm is movably provided on each side of the upper torso (3), and a top base (7) is provided right above the upper torso (3); a second placement cavity (8) is formed in the upper torso (3), and the bottom inner wall of the second placement cavity (8) is fixedly provided with a second push rod motor (9) by bolts; the output shaft of the second push rod motor (9) is welded to the bottom of the top base (7), a head (12) is movably provided on the top of the top base (7), and the head (12) is fixedly provided with an expression identification unit (13) and a display unit by bolts; the human sensing unit, the gesture identification unit (11) and the expression identification unit (13) together form a sensing identification module, and the sensing identification module is connected to a matching module and a data processing module respectively; the matching module is connected to multiple databases, a retrieving module and the data processing module respectively; the retrieving module is connected to the multiple databases, an execution module and the data processing module respectively, and the data processing module is connected to a driver module and the multiple databases respectively; the driver module is connected to the first push rod motor (5) and the second push rod motor (9) respectively, and the execution module is connected to the arm and the display unit respectively.
[2]
The intelligent robot according to claim 1, wherein a first through hole (6) communicating with the first placement cavity (4) is formed at the top of the lower torso (2), and the output shaft of the first push rod motor (5) is mounted in the first through hole (6) in a sliding manner.
[3]
The intelligent robot according to claim 1, wherein a second through hole (10) communicating with the second placement cavity (8) is formed at the top of the upper torso (3), and the output shaft of the second push rod motor (9) is mounted in the second through hole (10) in a sliding manner.
[4]
The intelligent robot according to claim 1, wherein the human sensing unit is used to sense a human body and then send a signal to the data processing module; the gesture identification unit (11) is used to identify gestures and then transmit identification results to the matching module; and the expression identification unit (13) is used to identify expressions and then transmit identification results to the matching module.
[5]
The intelligent robot according to claim 4, wherein the matching module comprises an expression matching unit and a gesture matching unit, the expression matching unit and the gesture matching unit being connected to the expression identification unit (13) and the gesture identification unit (11) respectively; the expression matching unit is used to match expression data in the multiple databases based on the identification results of the expression identification unit (13) and then transmit the matching results to the retrieving module; and the gesture matching unit is used to match gesture data in the multiple databases based on the identification results of the gesture identification unit (11) and then transmit the matching results to the retrieving module.
[6]
The intelligent robot according to claim 5, wherein the retrieving module comprises an expression retrieve unit and a gesture retrieve unit, the expression retrieve unit and the gesture retrieve unit being connected to the expression matching unit and the gesture matching unit respectively; the expression retrieve unit is used to retrieve expression data from the multiple databases based on the matching results of the expression matching unit and then send the retrieved expression data to the execution module; and the gesture retrieve unit is used to retrieve gesture data from the multiple databases based on the matching results of the gesture matching unit and then send the retrieved gesture data to the execution module.
[7]
The intelligent robot according to any one of claims 1-6, wherein the execution module comprises an expression executing unit and a gesture executing unit, the expression executing unit and the gesture executing unit being connected to the expression retrieve unit and the gesture retrieve unit respectively, and to the display unit and the arm respectively; the expression executing unit is used to control the display unit to simulate the corresponding expression based on the expression data retrieved by the expression retrieve unit; and the gesture executing unit is used to control the arm to simulate the corresponding gesture based on the gesture data retrieved by the gesture retrieve unit.
[8]
The intelligent robot according to claim 1, wherein the driver module comprises a driving circuit, a first switch circuit and a second switch circuit, the driving circuit, the first switch circuit and the second switch circuit all being connected to the data processing module; the first switch circuit and the second switch circuit are connected to the first push rod motor (5) and the second push rod motor (9) respectively; and the driving circuit is used to drive the first push rod motor (5) and the second push rod motor (9) for operation.
[9]
The intelligent robot according to claim 1, wherein the multiple databases comprise a corresponding expression library, an expression library, a corresponding gesture library and a gesture library; the expression library and the gesture library are both connected to the matching module, and the corresponding expression library and the corresponding gesture library are both connected to the retrieving module; expression data in the corresponding expression library correspond to expression data in the expression library, and gesture data in the corresponding gesture library correspond to gesture data in the gesture library.
[10]
The intelligent robot according to claim 1, wherein the data processing module is used to control operation of the driver module based on a sensing signal from the human sensing unit, and the data processing module is used to drive and control the sensing identification module, the matching module, the retrieving module and the execution module.
Family patents:
Publication No. | Publication date
NL2020224B1 | 2018-10-10
CN106737745A | 2017-05-31
Cited documents:
Publication No. | Filing date | Publication date | Applicant | Patent title
US20020198626A1 | 2001-06-21 | 2002-12-26 | Atr Media Integration & Communications Research Laboratories | Communication robot
US20110288684A1 | 2010-05-20 | 2011-11-24 | Irobot Corporation | Mobile Robot System
EP2933067A1 | 2014-04-17 | 2015-10-21 | Aldebaran Robotics | Method of performing multi-modal dialogue between a humanoid robot and user, computer program product and humanoid robot for implementing said method
WO2015185474A1 | 2014-06-05 | 2015-12-10 | Aldebaran Robotics | Device for prepositioning and removably attaching articulated limbs of a humanoid robot
CN202315292U | 2011-11-11 | 2012-07-11 | 山东科技大学 | Comprehensive greeting robot based on smart phone interaction
CN104102346A | 2014-07-01 | 2014-10-15 | 华中科技大学 | Household information acquisition and user emotion recognition equipment and working method thereof
CN105563493A | 2016-02-01 | 2016-05-11 | 昆山市工业技术研究院有限责任公司 | Height and direction adaptive service robot and adaptive method
CN205594506U | 2016-04-12 | 2016-09-21 | 精效新软新技术(北京)有限公司 | Human-computer interaction device among intelligence work systems
CN205651333U | 2016-04-21 | 2016-10-19 | 深圳市笑泽子智能机器人有限公司 | Guest-meeting robot
CN108406782A | 2018-05-29 | 2018-08-17 | 朱晓丹 | A kind of financial counseling intelligent robot easy to use
CN109920347B | 2019-03-05 | 2020-12-04 | 重庆大学 | Motion or expression simulation device and method based on magnetic liquid
Legal status:
2021-09-08 | MM | Lapsed because of non-payment of the annual fee | Effective date: 2021-02-01
Priority:
Application No. | Filing date | Patent title
CN201710007531.7A | CN106737745A | 2017-01-05 | 2017-01-05 | Intelligent robot