Patent abstract:
A computer-implemented method for filling an electronic record may include generating first transcription data of a first audio signal of a first party during a conversation between the first party and a second party. The method may also include generating second transcription data of a second audio signal of the second party during the conversation, and identifying one or more words of the first transcription data as a value for a record field based on the identified words corresponding to the record field and on the word(s) originating from the first transcription data and not from the second transcription data. The method may further include transmitting the identified words to an electronic record database as a value for the record field of a user record of the first party.
Publication number: FR3067157A1
Application number: FR1851483
Filing date: 2018-02-21
Publication date: 2018-12-07
Inventors: Adam Montero;Scott Lorin Brooksby
Applicant: Sorenson IP Holdings LLC;
Primary IPC class:
Patent description:

Automatic filling of electronic records
The embodiments described herein relate to the automatic filling of electronic records.
Communication systems allow participants present in different locations to communicate. The communication media used by different communication systems may vary. For example, in certain circumstances, the communication media of a communication system may be written, audio, video, or certain combinations thereof. Sometimes, records can be generated based on communications established between individuals. For example, information shared between a first individual and a second individual may be stored in a record by the first individual, the second individual, or another individual who is part of the communication between the first individual and the second individual.
The subject matter claimed here is not limited to embodiments which solve any disadvantages or which operate only in environments such as those described in the background above. Rather, this background is only given to illustrate an example of a technological field in which certain embodiments described here can be put into practice.
According to embodiments of the invention, a computer-implemented method for filling in an electronic record based on communications established between two parties can comprise generating, using a processing system, first transcription data of a first audio signal of a first party during a conversation between the first party and a second party. The first transcription data may be a transcription of the first audio signal. The method may also include generating, using the processing system, second transcription data of a second audio signal of the second party during the conversation. The second transcription data may be a transcription of the second audio signal. The method may further comprise identifying, using the processing system, one or more words of the first transcription data as a value for a record field based on the identified words corresponding to the record field and on the word(s) coming from the first transcription data and not from the second transcription data. The method may further include transmitting, over a network, the identified words to an electronic record database as a value for the record field of a user record of the first party.
According to one aspect of the invention, a computer-implemented method is provided for filling in an electronic record based on communications between two parties, the method comprising:
generating, using a processing system, first transcription data of a first audio signal of a first party during a conversation between the first party and a second party, the first transcription data being a transcription of the first audio signal;
generating, using the processing system, second transcription data of a second audio signal of the second party during the conversation, the second transcription data being a transcription of the second audio signal;
obtaining a first user identifier associated with the first party;
identifying a plurality of general record fields to be filled in using information from the first audio signal and the second audio signal during the conversation, the plurality of general record fields being identified based on the first user identifier;
after identifying the plurality of general record fields, identifying, using the processing system, one or more words of the first transcription data as a value for a record field based on the identified words corresponding to the record field and on the word(s) coming from the first transcription data and not from the second transcription data, the record field being one of the plurality of identified general record fields; and transmitting, over a network, the identified words to an electronic record database as a value for the record field of a user record of the first party.
According to one embodiment, the method further comprises:
identifying one or more second words of the second transcription data as a value for a second record field based on the second identified words corresponding to the second record field and on the second word(s) coming from the second transcription data and not from the first transcription data; and transmitting the second identified words to the electronic record database as a value for the second record field of a user record.
According to one embodiment, the method further comprises establishing a communication session between a first device and a second device such that a first device audio signal is sent from the first device to the second device and a second device audio signal is sent from the second device to the first device during the communication session, the first audio signal being the first device audio signal and the second audio signal being the second device audio signal.
According to one embodiment, the record field is a first record field, and before the transmission of the identified words, the method further comprises:
selecting a first type of electronic record from a plurality of types of electronic records based on the first user identifier, the user record being of the first type of electronic record; and determining a second record field in the first type of electronic record that corresponds to the first record field.
According to another aspect of the invention, a computer-implemented method is provided for filling in an electronic record based on communications between two parties, the method comprising:
establishing a communication session between a first device and a second device such that a first device audio signal is sent from the first device to the second device and a second device audio signal is sent from the second device to the first device during the communication session;
generating first transcription data of the first device audio signal, the first transcription data being a transcription of the first device audio signal and comprising one or more first words;
generating second transcription data of the second device audio signal, the second transcription data being a transcription of the second device audio signal and comprising one or more second words;
identifying one or more of said second words as field identification data associated with an identifier for a general record field to be filled in using information coming from the first device audio signal;
selecting one or more of said first words that appear after the field identification data as selected words; and identifying one or more of the selected words as a value for the general record field based on the identified words corresponding to the general record field.
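The prompt-driven selection described above can be sketched as follows, under stated assumptions: words carry start times from their device audio signal, and the prompt phrase and field name are invented for the example.

```python
# Illustrative data: (start_time_seconds, word) pairs, one list per device
# audio signal. Times and words are assumptions, not from the patent.
second_words = [(0.0, "what"), (0.3, "is"), (0.5, "your"), (0.8, "name")]
first_words = [(2.0, "jane"), (2.4, "doe")]

# Hypothetical field identification data: a phrase in the second party's
# speech associated with an identifier for a general record field.
FIELD_PROMPTS = {("what", "is", "your", "name"): "name"}

def fill_from_prompt(second_words, first_words):
    tokens = tuple(w for _, w in second_words)
    for prompt, field_id in FIELD_PROMPTS.items():
        for i in range(len(tokens) - len(prompt) + 1):
            if tokens[i:i + len(prompt)] == prompt:
                prompt_end = second_words[i + len(prompt) - 1][0]
                # Select the first-party words that appear after the field
                # identification data as the value for the general field.
                selected = [w for t, w in first_words if t > prompt_end]
                return field_id, " ".join(selected)
    return None

result = fill_from_prompt(second_words, first_words)  # ("name", "jane doe")
```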
According to one embodiment, the method further comprises:
obtaining a first user identifier associated with the first device based on data from the first device;
selecting a first type of electronic record from a plurality of types of electronic records based on the first user identifier;
mapping the general record field to a first record field of the first type of electronic record based on the general record field and the first record field being configured for similar values;
transmitting the first user identifier to an electronic health record database that stores the first type of electronic record; and transmitting to the electronic health record database the identified words as a value for the first record field of a user record of the first type of electronic record, the user record being associated with the first user identifier in the electronic health record database.
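The mapping and transmission steps above can be sketched as follows. The field names, record type, and in-memory "database" are illustrative assumptions; a real system would transmit over a network to an electronic health record database.

```python
# Hypothetical mapping between general record fields and the field names
# used by one selected type of electronic record.
GENERAL_TO_TYPE_A = {
    "name": "patient_full_name",
    "current_medication": "active_medications",
}

def transmit_value(database: dict, user_id: str, general_field: str, value: str) -> None:
    # Map the general record field to the corresponding record field of the
    # first type of electronic record (both are configured for similar
    # values), then store the value in the user record associated with the
    # first user identifier.
    record_field = GENERAL_TO_TYPE_A[general_field]
    database.setdefault(user_id, {})[record_field] = value

ehr_database: dict = {}
transmit_value(ehr_database, "user-123", "name", "Jane Doe")
```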
According to one embodiment, the method further comprises interleaving the first transcription data and the second transcription data to generate third transcription data such that a combined transcription included in the third transcription data is substantially in chronological order.
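One way to realize the interleaving described above is a timestamp merge of the two per-party transcriptions; the data below and the tuple layout are assumptions for the sketch.

```python
from heapq import merge

# Each entry: (start_time_seconds, speaker, utterance); illustrative data.
first_transcription_data = [
    (0.0, "patient", "hello"),
    (5.2, "patient", "my hip still hurts"),
]
second_transcription_data = [
    (2.1, "nurse", "how are you feeling today"),
    (8.0, "nurse", "since when"),
]

# heapq.merge assumes each input is already sorted, which holds here because
# each transcription is generated in the order its party spoke. The combined
# transcription is therefore substantially in chronological order.
third_transcription_data = list(
    merge(first_transcription_data, second_transcription_data, key=lambda e: e[0])
)
```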
According to one embodiment, the method further comprises transmitting to the electronic health record database the third transcription data as the value of a second record field in the user record.
According to one embodiment, the method further comprises the transmission to the second device of the field identification data for the presentation of the field identification data by the second device.
According to another aspect of the invention, there is provided a computer-implemented method for filling in an electronic record based on communications, the method comprising:
generating, using a processing system, first transcription data of a first audio signal from a first party, the first transcription data being a transcription of the first audio signal;
obtaining a first user identifier associated with the first party;
identifying a plurality of general record fields to be filled in using information from the first audio signal, the plurality of general record fields being identified based on the first user identifier;
after identifying the plurality of general record fields, identifying, using the processing system, one or more words of the first transcription data as a value for a record field based on the identified words corresponding to the record field and on the word(s) coming from the first transcription data, the record field being one of the plurality of identified general record fields; and transmitting, over a network, the identified words to an electronic record database as a value for the record field of a user record of the first party.
According to one embodiment, the method further comprises generating, using the processing system, second transcription data of a second audio signal of a second party, the second transcription data being a transcription of the second audio signal.
According to one embodiment, the method further comprises:
identifying one or more second words of the second transcription data as a value for a second record field based on the second identified words corresponding to the second record field and on the second word(s) coming from the second transcription data; and transmitting the second identified words to the electronic record database as a value for the second record field of a user record.
According to one embodiment, the method further comprises establishing a communication session between a first device and a second device such that a first device audio signal is sent from the first device to the second device and a second device audio signal is sent from the second device to the first device during the communication session, the first audio signal being the first device audio signal and the second audio signal being the second device audio signal.
According to one embodiment, the second party is a healthcare professional and the first party is a patient of the healthcare professional.
According to one embodiment, the record field is a first record field, and before the transmission of the identified words, the method further comprises:
selecting a first type of electronic record from a plurality of types of electronic records based on the first user identifier, the user record being of the first type of electronic record; and determining a second record field in the first type of electronic record that corresponds to the first record field.
The objects and advantages of the embodiments will be achieved and attained at least by the elements, characteristics and combinations described particularly in the claims. The foregoing general description and the following detailed description are given by way of explanatory examples and do not limit the invention, as claimed.
Examples of embodiments will be described and explained in more detail with the aid of the appended drawings, in which:
FIG. 1 represents an example of an environment relating to the transcription and filling of electronic records based on device-to-device communication;
FIG. 2 is an example of a flowchart that can be used for transcribing and filling in electronic records;
FIG. 3 represents an example of an electronic record;
FIG. 4 represents an example of transcription;
FIG. 5 represents an example of a combination diagram of transcription data;
FIG. 6 is a flow diagram of an example of a process for transcribing and filling in electronic records;
FIG. 7 is a flow diagram of another example of a process for transcribing and filling in electronic records; and FIG. 8 represents an example of a computer system that can be used for transcribing or identifying data for filling in electronic records, all arranged according to one or more embodiments described herein.
Certain embodiments of the present invention relate to the automatic filling of electronic records based on transcriptions of communication sessions between users. In certain embodiments, the present invention relates to the automatic filling of electronic health records based on transcriptions of communication sessions between healthcare professionals and their patients. In these and other embodiments, communication sessions can occur when patients and healthcare professionals are in the same place or in different places.
In some embodiments, a system can be configured to generate a transcription of what is said, for example communicated verbally, during a communication session between multiple participants. The system can use this transcription to identify values for fields in electronic records and to fill those fields with said values. In some embodiments, the system can generate a transcription for each participant in the communication session based on a networking configuration that obtains the audio signals produced by the participants during the communication session. As a result, in these and other embodiments, the system may have certainty or a high degree of certainty regarding what has been said by each participant in the communication session. This increased certainty about what was said by each participant can help identify information from the participants' transcriptions, such as a word or group of words, that is likely to fill in a field of a record. Furthermore, in some embodiments, the system can be configured to use identified information from the transcriptions to fill in fields of general records so that specific fields among several different types of records can be filled in using the same information.
In summary, the present invention provides solutions to problems that exist in artificial intelligence, networking, telecommunications and other technologies to improve the automatic filling of files. Embodiments of the present invention are explained in more detail below with reference to the accompanying drawings.
In the figures, Figure 1 shows an example of an environment 100 relating to the transcription and filling of electronic records based on device-to-device communication. The environment 100 can be arranged according to at least one embodiment described herein. The environment 100 may include a first network 102; a second network 104; a first device 110; second devices 120, including a first second device 120a and a second second device 120b; a communications routing system 140; a transcription system 160; and a record system 170.
The first network 102 can be configured to communicatively couple the first device 110 and the communications routing system 140. The second network 104 can be configured to communicatively couple the second devices 120, the communications routing system 140, the transcription system 160 and the record system 170.
In some embodiments, the first and second networks 102 and 104 may each include any network or any configuration of networks configured to send and receive communications between the devices. In some embodiments, the first and second networks 102 and 104 may each include a conventional type network, a wired or wireless network, and they may have many different configurations. In addition, the first and second networks 102 and 104 may each include a local area network (LAN), a wide area network (WAN) (e.g. the Internet), or other interconnected data paths through which multiple devices and/or entities can communicate.
In some embodiments, the first and second networks 102 and 104 may each include a peer-to-peer network. The first and second networks 102 and 104 can also each be coupled to or can include parts of a telecommunications network for sending data in various different communication protocols. In some embodiments, the first and second networks 102 and 104 may each include Bluetooth® communication networks or cellular communication networks for sending and receiving communications and/or data. The first and second networks 102 and 104 may also each include a mobile data network which may include a third generation (3G), fourth generation (4G), long-term evolution (LTE), advanced long-term evolution (LTE-A), voice over LTE (VoLTE) or any other mobile data network or combination of mobile data networks. In addition, the first and second networks 102 and 104 can each comprise one or more wireless networks according to the IEEE 802.11 standard. In certain embodiments, the first and second networks 102 and 104 can be configured in the same way or in different ways. In some embodiments, the first and second networks 102 and 104 may share various parts of one or more networks. For example, each of the first and second networks 102 and 104 can comprise the Internet or another network.
The first device 110 can be any electronic or digital device. For example, the first device 110 may include or may be included in a desktop computer, a laptop computer, a smartphone, a mobile phone, a digital tablet, a set-top box, a connected television or any other electronic device provided with a processor. In some embodiments, the first device 110 may include computer-readable instructions stored on one or more computer-readable media that are configured to be executed by one or more processors in the first device 110 to perform the operations described herein. The first device 110 can be configured to communicate with, receive data from, and direct data to the communications routing system 140 and/or the second devices 120. During a communication session, audio, video and/or a transcription of the audio can be presented by the first device 110.
In certain embodiments, the first device 110 can be associated with a first user. The first device 110 can be associated with the first user based on the first device 110 being configured to be used by the first user. In these embodiments, as well as in others, the first user can be registered with the communications routing system 140 and the first device 110 can be listed in the registration of the first user. As a variant, or in addition, the first device 110 can be associated with the first user based on the first user being the owner of the first device 110 and/or the first device 110 being controlled by the first user.
The second devices 120 can be any electronic or digital devices. For example, the second devices 120 may include or may be included in a desktop computer, a laptop computer, a smartphone, a mobile phone, a digital tablet, a set-top box, a connected television or any other electronic device provided with a processor. In some embodiments, the second devices 120 may each include or be included in the same device, in different devices, or in combinations of electronic or digital devices. In some embodiments, the second devices 120 may each include computer-readable instructions stored on one or more computer-readable media that are configured to be executed by one or more processors in the second devices 120 to carry out the operations described herein.
The second devices 120 may each be configured to communicate with, receive data from, and direct data to the communications routing system 140. Alternatively or in addition, each of the second devices 120 may be configured to participate, individually or in groups, in a communication session with the first device 110 via the communications routing system 140. In certain embodiments, the second devices 120 may each be associated with a second user or may be configured to be used by a second user. During a communication session, audio, video and/or a transcription of the audio can be presented by the second devices 120 for the second users.
In some embodiments, the second users can be healthcare professionals. In these embodiments, as well as in others, health professionals may be individuals trained or skilled in giving advice regarding mental or physical health, including nurses, nurse practitioners, medical assistants, doctors, physician assistants, counselors, psychiatrists, psychologists and doulas, among other health professionals. In these embodiments, as well as in others, the first user can be an individual at home who is in need of care. For example, the first user may be an individual who returns home after surgery and needs care from a healthcare professional. As a variant or in addition, the first user can be an individual at home who has a disease for which home care provided by a health professional is advised. As a variant or in addition, the first user can be an individual in a health establishment or in another type of establishment.
In some embodiments, the communications routing system 140, the transcription system 160 and the record system 170 may each include any hardware configuration, such as processors, servers and databases, that is networked together and configured to perform one or more tasks. For example, the communications routing system 140, the transcription system 160 and the record system 170 may each include multiple computer systems, such as multiple servers which each include memory and at least one processor, which are networked together and configured to perform operations as described herein, among other operations. In some embodiments, the communications routing system 140, the transcription system 160 and the record system 170 may each include computer-readable instructions on one or more media that are configured to be executed by one or more processors in each of the communications routing system 140, the transcription system 160 and the record system 170 to perform the operations described herein.
Generally, the communications routing system 140 can be configured to establish and manage communication sessions between the first device 110 and one or more of the second devices 120. The transcription system 160 can be configured to generate and provide transcriptions of audio from communication sessions established by the communications routing system 140. The transcription system 160 can also be configured to identify values for filling in fields in records of the record system 170 using the transcriptions of audio from the communication sessions.
The record system 170 can be a combination of hardware, including processors, memory and other hardware, configured to store and manage data. In some embodiments, the record system 170 can be configured to store electronic records having different fields. For example, the record system 170 can be configured to store electronic health records (EHR).
An example of the interaction of the elements illustrated in the environment 100 is now provided. As described below, the elements illustrated in the environment 100 can interact to establish a communication session between the first device 110 and one or more of the second devices 120, to transcribe the communication session, and to fill in fields of an electronic record in the record system 170 based on the transcription of the communication session.
The first device 110 may send a communication session request to the communications routing system 140. The communications routing system 140 may obtain the request from the first device 110. In some embodiments, this request may include an identifier of the first device 110.
Using the identifier of the first device 110, the communications routing system 140 can obtain profile data concerning the first user associated with the first device 110. This profile data can include information about the first user, such as demographic information, e.g. name, age, gender and address, among other demographic data. The profile data may further include information relating to the health of the first user. For example, this health information may include height, weight, drug allergies and current health status, among other health information. The profile data may further include other information about the first user, such as information that identifies the first user to the record system 170, such as a first user identifier. In some embodiments, the profile data may include transcriptions of conversations between the first user and the second users.
Using the profile data and / or other information about the first user, such as medical data about the first user, the communications routing system 140 can select one or more of the second devices 120 for the communication session with the first device 110. After selecting one or more of the second devices 120, the communications routing system 140 can establish the communications session. Alternatively or in addition, the communications routing system 140 may select one or more of the second devices 120 for the communication session with the first device 110 based on the fact that one or more of the second devices 120 is identified in the request issued by the first device 110.
During a communication session, the communications routing system 140 can be configured to receive media data from the first device 110 and from the selected second device(s) 120. The communications routing system 140 can route the media data to the transcription system 160 for generation of transcription data. The transcription system 160 can generate the transcription data. The transcription system 160 can also analyze the transcription data in order to determine values for filling in the fields of a record associated with the first user of the first device 110. The transcription system 160 can send these values to the record system 170 for filling in the fields of the first user's records. The transcription data can also be transmitted to the first device 110 and to the selected second device(s) 120 for presentation by the first device 110 and the selected second device(s) 120.
A more advanced explanation of the transcription and routing process will now be given. However, for the sake of clarity, this will be done in the context of a communication session between the first device 110 and the first second device 120a.
As mentioned, the first device 110 and the first second device 120a can exchange media data during a communication session. In some embodiments, the media data may include video and audio data. For example, the first device 110 can send first audio data and first video data to the first second device 120a and the first second device 120a can send second audio data and second video data to the first device 110. As a variant or in addition, the media data may include audio data but not video data.
During the communication session, the media data exchanged between the first device 110 and the first second device 120a can be routed through the communications routing system 140. During the routing of the media data between the first device 110 and the first second device 120a, the communications routing system 140 can be configured to duplicate the audio data from the media data and to supply the duplicated audio data to the transcription system 160.
The transcription system 160 can receive the first duplicated audio data. The transcription system 160 can generate the first transcription data of the first duplicated audio data. The first transcription data may include a transcription of the first duplicated audio data.
In some embodiments, the transcription system 160 can generate the segments of the first transcription data using automatic transcription of the first duplicated audio data. In some embodiments, before an automatic transcription of the first duplicated audio data is carried out, the first duplicated audio data can be listened to and re-vocalized by another person. In these embodiments, as well as in others, the other person can make corrections to the automatic transcription.
The transcription system 160 can supply the first transcription data to the communications routing system 140. The communications routing system 140 can route the first transcription data to the first second device 120a. The first second device 120a can present the first transcription data to a user of the first second device 120a on a screen of the first second device 120a.
The communications routing system 140 and the transcription system 160 can process the second media data from the first second device 120a in a similar manner. For example, the communications routing system 140 can generate second duplicated audio data from second audio data of the second media data and the transcription system 160 can generate second transcription data from the second duplicated audio data. The second transcription data can be provided to the first device 110 for presentation to the first user of the first device 110.
In some embodiments, the generation and distribution of transcription data from the first and second media may both be substantially in real time or in real time. In these embodiments, as well as in others, the first device 110 can present the second transcription data simultaneously with the second media data, substantially in real time or in real time. Simultaneous presentation of the second transcription data and the second media data substantially in real time may indicate that when audio data is presented, a transcription which corresponds to the presented audio data is also presented, with a delay of less than 1, 2, 5, 10 or 15 seconds between the transcription and the audio data. As a variant or in addition, the generation and distribution of transcription data from one of the first and second media can be carried out substantially in real time or in real time and the generation and/or distribution of transcription data from the other of the first and second media may not be in real time.
In some embodiments, when a third device, such as the second second device 120b, participates in a communication session between the first device 110 and the first second device 120a, third transcription data may be generated for third audio data generated by the third device. In these and other embodiments, the third transcription data can be provided to the first device 110 and/or the first second device 120a, and the third device can receive the first and/or second transcription data from the first device 110 and the first second device 120a, respectively.
A more detailed explanation of the process for filling in record fields will now be given.
After generation of the first transcription data from the first audio data and of the second transcription data from the second audio data, the transcription system 160 can be configured to use data from the first transcription data and the second transcription data to fill in one or more fields of one or more records stored by the records system 170.
FIG. 3 represents an example of an electronic record 300. The record 300 comprises a first field 302a and a second field 302b, referred to collectively as the fields 302. The first field 302a comprises a first identifier 304a and a first value 306a. The second field 302b comprises a second identifier 304b and a second value 306b. Although the record 300 shows only the first and second fields 302, the record 300 may include any number of fields.
In the first field 302a, the first identifier 304a can identify or give a context for the first value 306a. In the second field 302b, the second identifier 304b can identify or give a context for the second value 306b. For example, the first identifier 304a can be "Name" and the first value 306a can be "Jane Doe". As another example, the second identifier 304b can be "Current treatment" and the second value 306b can be "Ibuprofen". As discussed herein, filling in a field may include assigning data to the value of the field. For example, one or more words, numbers, expressions, characters, or other information can be assigned to the value of a field. Assigning data to the field value can include replacing previous data, or assigning data for the first time when the value is empty. For example, the words "Jane Doe" can be assigned to the first value 306a to fill in the first field 302a.
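By way of a non-limiting illustration, the field structure described above, an identifier paired with a value that may initially be empty, may be sketched as follows; the class and function names are purely illustrative and do not correspond to any particular embodiment:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Field:
    identifier: str                # e.g. "Name" or "Current treatment"
    value: Optional[str] = None    # empty until the field is filled in

def fill_field(field: Field, data: str) -> Field:
    # Assigning data to the value replaces any previous data,
    # or fills the field for the first time when the value is empty.
    field.value = data
    return field

record = [Field("Name"), Field("Current treatment")]
fill_field(record[0], "Jane Doe")
fill_field(record[1], "Ibuprofen")
```

In this sketch, a record is simply a list of such fields, and filling a field assigns data to its value.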
Returning to the description of FIG. 1, in certain embodiments, the fields which are likely to be filled in can be identified. In certain embodiments, it is possible to identify all the possible fields of the records of the records system 170. Alternatively or in addition, it is possible to identify a subset of the fields of the records of the records system 170. The subset of fields can be identified as the subset of fields associated with the first user. For example, in some embodiments, the profile data of the first user can be accessed and used to determine fields that can be associated with the first user. For example, medical information about the first user can be used to determine the subset of fields that can be filled in. For example, if the first user has undergone hip surgery, fields relating to other areas such as chemotherapy, organ transplants and other medical matters not associated with hip surgery can be ignored. In some embodiments, a particular record in the records system 170 having fields that must be filled in can be accessed using the profile data of the first user. In these embodiments, as well as in others, the fields of the particular record can be identified.
To fill in the identified fields, data such as one or more words, numbers, expressions, characters or other information can be identified in the first and second transcription data which correspond to the identified fields. The data can be marked with its corresponding field. Marking the data may include annotating or adding metadata to the first and second transcription data concerning the correlation between the data and the identified fields. Alternatively or in addition, marking the data may include copying the data corresponding to the identified fields and storing the copied data in locations which are associated with the corresponding identified fields.
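As an illustrative sketch of the marking step, the names below and the simple keyword matcher are hypothetical; an actual embodiment could use any of the analysis techniques described herein:

```python
def mark_data(segments, field_id, matcher):
    """Annotate transcription segments: each segment whose text satisfies
    the matcher receives metadata tying it to the identified field."""
    marked = []
    for seg in segments:
        if matcher(seg["text"]):
            marked.append({**seg, "field": field_id})  # add field metadata
    return marked

segments = [
    {"text": "You may experience a burning sensation or a sharp pain",
     "speaker": "professional"},
    {"text": "My pain is a sharp pain", "speaker": "patient"},
]
# Only mark candidate values that originate from the patient's transcript.
marked = mark_data(
    [s for s in segments if s["speaker"] == "patient"],
    "patient_current_pain",
    lambda text: "pain" in text,
)
```

Restricting the input to one speaker's segments reflects the source-based identification discussed below.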
In certain embodiments, the identification of the data to fill the identified fields can be based on the source of the data, namely the first transcription data or the second transcription data. As previously described, the first transcription data is a transcription of first audio data from the first device and the second transcription data is a transcription of second audio data from the second device. Thus, the first transcription data is a transcription of words spoken by the first user during the communication session and the second transcription data is a transcription of words spoken by the second user during the communication session. There is thus little or no uncertainty as to what is said by the first user and what is said by the second user. Increased certainty about what is said by each user can increase the likelihood of filling in the fields with the correct data from the first and second transcripts. Conversely, in other systems, a conversation between two people can be recorded in a single audio stream, so that a transcription includes words spoken by both people and it can be difficult to determine which person said what word.
For example, a first identified field may be of a type such that the first identified field will be filled in, or is more likely to be filled in, with data from one of the first and second transcripts but not from the other of the first and second transcripts. In these embodiments, as well as in others, the data from the first transcript can be marked for the first identified field.
For example, for a field with an identifier of "Patient's current pain", the value for the identified field would almost certainly be provided by a transcription of words spoken by the patient. When the first user is the patient, the data for the identified field can be identified in the first transcription data because the first transcription data comes from the first audio data generated from words spoken by the first user, the patient. For example, in a discussion between a healthcare professional and a patient, the healthcare professional may say, "You may experience different types of pain, such as a burning sensation or a sharp pain." The patient may respond, "My pain is a sharp pain." Failure to distinguish between words spoken by the healthcare professional and those spoken by the patient may result in an error in marking the patient's current pain. Conversely, knowing that the words "sharp pain" were spoken by the patient can increase the likelihood that the words are marked for the correct field.
In some embodiments, a field of a record can be filled with the entirety of the first transcription data and the second transcription data. In these embodiments, as well as in others, the transcription system 160 can combine the first transcription data and the second transcription data to generate third transcription data. The third transcription data may include a transcription of the complete communication session. For their part, the first transcription data may include a transcription of the audio data generated by the first device 110, and the second transcription data may include a transcription of the audio data generated by one of the second devices 120.
In some embodiments, the first transcription data and the second transcription data can be combined by interleaving the data segments of the first transcription data and the second transcription data. In these and other embodiments, the data segments of the first transcription data and the second transcription data can be interleaved such that the data segments of the first transcription data and the second transcription data are combined in substantially chronological order. FIG. 5 shows an example relating to the combination of transcription data in chronological order.
Once the data has been marked, the transcription system 160 can be configured to transmit the marked data to the records system 170. In these embodiments, as well as in others, the transcription system 160 can transmit the marked data in such a way, for example together with other data, that the records system 170 can easily identify the record and the field associated with each item of marked data. For example, in some embodiments, the transcription system 160 may use an application programming interface (API) or another data-structuring scheme to provide the marked data and the other data to the records system 170.
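For example, the marked data could be structured for transmission roughly as follows; the payload layout and key names are purely illustrative assumptions, not a defined API:

```python
import json

def build_payload(user_id, record_type, marked_data):
    """Package marked data with identifying data so that a records system
    can associate each marked item with a record and a field."""
    return json.dumps({
        "user_id": user_id,          # lets the records system find the record
        "record_type": record_type,
        "entries": [{"field": m["field"], "value": m["text"]}
                    for m in marked_data],
    })

payload = build_payload(
    "patient-42", "health_record",
    [{"field": "patient_current_pain",
      "text": "the pain is at a level seven"}],
)
```

The receiving system can then decode the payload and use the user identifier to locate the record to be filled in.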
Once the records system 170 has received the marked data and the other data, the records system 170 can identify a record to be filled in. In these embodiments, as well as in others, the records system 170 can identify the record from the other data. The other data may include data from the profile data of the first user, such as a user identifier, which may allow the records system 170 to identify a record that corresponds to the first user. After identifying the record to be filled in, the records system 170 can fill in the fields corresponding to the marked data with the marked data.
In some embodiments, the transcription system 160 can be configured to communicate with several different types of records systems. In these and other embodiments, the transcription system 160 can be configured to communicate with each of the records systems and to provide the marked data with sufficient data to allow each of the records systems to fill in its records.
For example, the transcription system 160 can be operated by a company that does not manage the file systems but provides services to people who have files stored in various different file systems. The transcription system 160 can be configured to communicate with each of the file systems such that the files on each system are completed correctly.
A simple example of the interaction of the components in the environment 100 will now be given. A video call between a patient using the first device 110 and a nurse using the first second device 120a can take place via the communications routing system 140. The nurse may ask the patient questions concerning the patient's current state of health. The audio data coming from the first device 110 and from the first second device 120a, for example the conversation between the nurse and the patient, can be transcribed by the transcription system 160. The transcription system 160 can mark parts of the transcription of the conversation and transmit the marked portions to the records system 170. The records system 170 can access an electronic patient health record and update the record to reflect current information on the patient's condition.
Environment 100 thus represents an example in which the filling of electronic files with data, in particular data resulting from a conversation between two people, can be automated by computer systems without human assistance or with reduced human assistance.
Modifications, additions or omissions can be made to the environment 100 without departing from the scope of the present invention. For example, in some embodiments, the transcription system 160 may be part of the communications routing system 140. As an alternative, or in addition, the transcription system 160, the communications routing system 140 and the file system 170 can all be part of the same system.
As another example, in some embodiments, the transcription system 160 can receive the first audio data from the first device 110 and the second audio data from the second device 120, or the first audio data from the second device 120 and the second audio data from the first device 110.
FIG. 2 is an example of a flowchart 200 that can be used in connection with transcribing audio and filling in electronic records. The flowchart 200 can be arranged according to at least one embodiment described herein. In certain embodiments, the flowchart 200 can be configured to illustrate an embodiment intended to identify words for filling in the fields of an electronic record. In these embodiments, as well as in others, part of the flowchart 200 can be an example of the operation of the transcription system 160 in the environment 100 of FIG. 1.
The flow of the flowchart 200 can start in box 204, where a first transcript 206a can be generated and a second transcript 206b can be generated. The first transcript 206a can be generated from first audio data 202a. The second transcript 206b can be generated from second audio data 202b. The first transcript 206a and the second transcript 206b may be referred to herein collectively as "the transcripts 206".
In some embodiments, the first transcript 206a and the second transcript 206b can be generated by automatic transcription of the first audio data 202a and the second audio data 202b, respectively. In certain embodiments, before the execution of an automatic transcription, the first audio data 202a and the second audio data 202b can be listened to and re-vocalized by another person. In these embodiments, as well as in others, this other person can make corrections to the automatic transcription by correcting words and adding punctuation, among other possible corrections.
In some embodiments, the transcripts 206 may include multiple data segments. Each of these data segments may include a transcription of part of the corresponding audio data. Consequently, each of the data segments can comprise one or more characters, such as one or more words, groups of words, numbers, etc.
In some embodiments, the transcripts 206 can be separated into data segments based on an elapsed time when a voice signal is detected in the audio signal being transcribed. For example, when a voice signal is detected in the audio signal being transcribed, a data segment may start and continue until a particular time interval has elapsed or until no voice signal is detected in the audio signal for a specific period of time. After the particular time interval has elapsed, or after a voice signal is detected again, another data segment may begin.
In these embodiments, as well as in others, a voice signal can be detected as being present in an audio signal from an amplitude envelope of the audio signal. When the envelope is substantially flat over a particular period, there may be no voice signal. When the envelope varies over the particular period, there may be a voice signal. This particular period can be chosen according to average speech rates and voice inflections for an adult speaking the language being transcribed.
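A minimal sketch of this envelope-based segmentation might look as follows; the threshold and gap values are illustrative assumptions, and a practical embodiment would tune them to speech rates as noted above:

```python
def segment_by_voice(samples, rate, silence_thresh=0.01, max_gap=0.5):
    """Return (start, end) times of voiced spans in an amplitude sequence.
    A segment ends once no voice is detected for more than max_gap seconds."""
    segments, start, last_voiced = [], None, None
    for i, amp in enumerate(samples):
        t = i / rate
        if abs(amp) > silence_thresh:          # envelope varies: voice present
            if start is None:
                start = t
            last_voiced = t
        elif start is not None and t - last_voiced > max_gap:
            segments.append((start, last_voiced))  # close the segment
            start = None
    if start is not None:                      # close a segment at end of audio
        segments.append((start, last_voiced))
    return segments

# Ten samples per second: voice, one second of silence, voice again.
spans = segment_by_voice([0.5] * 5 + [0.0] * 10 + [0.5] * 5, rate=10)
```

Here the one-second silence exceeds the 0.5 second gap, so two segments result.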
In some embodiments, the transcripts 206 may be separated into data segments based on a particular number of words which each data segment may contain. For example, each data segment can have 1, 3, 5, 10, 15, 20 or more words in a data segment.
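For example, word-count segmentation may be sketched as follows (the function name is illustrative):

```python
def segment_by_word_count(words, n):
    """Group a transcript's words into consecutive segments of at most n words."""
    return [" ".join(words[i:i + n]) for i in range(0, len(words), n)]

chunks = segment_by_word_count("the pain is at a level seven".split(), 3)
```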
In some embodiments, the data segments may further include additional information. For example, the data segments may include information about the source device and / or the destination device, for example the first device 110, the second device 120, etc., for the audio data which gave rise to the data segments . Alternatively, or in addition, the data segments may have a time stamp associated with when the audio data which gave rise to the data segments was captured.
In box 220, fields which can be filled in by the first transcript 206a and the second transcript 206b can be selected. In these embodiments, as well as in others, several fields among available fields 210 can be selected to be filled. In some embodiments, the available fields 210 may be all or most of the fields of the file types that can be filled.
The fields that are selected, i.e. the selected fields 222, can be selected using information about the person who produced the first audio data 202a, which can be accessed using an identifier 212. This person information can be associated with field identifiers and/or field values. For example, if the fields relate to health care, the person information may include health information. In these embodiments, as well as in others, the selected fields 222 can be selected from a table that associates different types of fields with different data that can be included in the person information. For example, if the person information indicates that the person has cancer, the selected fields 222 may relate to cancer rather than to another medical condition. In certain embodiments, it is possible to select all the available fields 210, so that the selected fields 222 are the available fields 210.
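Such a selection table might be sketched as follows; the conditions, field names and fallback behavior are illustrative assumptions only:

```python
# Hypothetical table associating person information with candidate fields.
FIELDS_BY_CONDITION = {
    "hip surgery": ["Patient's current pain", "Mobility"],
    "cancer": ["Current treatment", "Chemotherapy cycle"],
}

def select_fields(profile, available_fields):
    """Pick the subset of available fields suggested by the person
    information; fall back to all available fields when nothing matches."""
    selected = {field
                for condition in profile.get("conditions", [])
                for field in FIELDS_BY_CONDITION.get(condition, [])
                if field in available_fields}
    return selected or set(available_fields)

selected = select_fields(
    {"conditions": ["hip surgery"]},
    ["Patient's current pain", "Mobility", "Chemotherapy cycle"],
)
```

In this sketch, a profile indicating hip surgery selects only the pain and mobility fields, ignoring the chemotherapy field.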
In box 230, data from the transcripts 206 can be marked as values for one or more of the selected fields 222. To mark data from the transcripts 206, the transcripts 206 can be analyzed to determine data, such as one or more words, numbers, groups of words, characters, or other information, which may be a value for one or more of the selected fields 222. Data which may be a value for one or more of the selected fields 222 may be marked for the selected field(s) 222 to become the marked data 232. Marking the data can associate the data with the selected field(s) 222.
Data analysis to determine which data should be marked can be done using different methods. Several concepts are presented here, which can be used separately or together. However, any other analytical technique can be used.
In some embodiments, the transcripts 206 can be parsed to determine the individual words and the sentence structure. Once the transcripts 206 have been parsed, the transcripts 206 can be analyzed using an automatic natural language processing function which is developed by machine learning. The automatic natural language processing function can be developed by training the automatic natural language processing function over time.
To train the automatic natural language processing function, several transcripts can be provided to the automatic natural language processing function, several fields can be selected, and the transcription data which should be marked for each of the different selected fields can be identified for the automatic natural language processing function. The automatic natural language processing function can determine, for example learn, trends or concepts in the transcripts that resulted in the marking of the provided data, using various classification techniques such as support vector machines, Bayesian networks and learning classifier systems, among other techniques. After training, the automatic natural language processing function can then analyze the transcripts 206 using the trends and concepts to mark the data from the transcripts 206 for the fields on which the automatic natural language processing function was trained.
In certain embodiments, it is possible to develop several automatic natural language processing functions for different combinations of fields. In these embodiments, as well as in others, based on the selected fields 222, an automatic natural language processing function which is trained for the selected fields 222 can be chosen to analyze the transcripts 206.
In certain embodiments, the automatic natural language processing function can be trained using transcripts which are grouped as forming a conversation but which each include only the words spoken by one of the individuals in the conversation, such as the first transcript 206a and the second transcript 206b. In these and other embodiments, the automatic natural language processing function can learn that data for particular fields is more likely to come from one of the types of interlocutors in the conversation, for example a healthcare professional or a patient, than from the other interlocutors. When the automatic natural language processing function receives information specifying which transcript is associated with which interlocutor in the conversation, the automatic natural language processing function can more precisely mark the data coming from the transcripts 206 for the selected fields 222. The data can therefore be marked on the basis of the data corresponding to the file field and on the basis that the data comes from one of the transcripts 206 and not from another of the transcripts 206.
In some embodiments, additional information about the selected fields 222 and the known sources of the first transcript 206a and the second transcript 206b may be used to mark data from the transcripts 206. For example, in some embodiments, the data which corresponds to identifiers of the selected fields 222 can be found in one of the transcripts 206, and values for the selected fields 222 can be identified in the other of the transcripts 206, in a part of the other transcript which follows in time the data which corresponds to the identifiers of the selected fields 222.
For example, an automatic natural language processing function or another analytical function, such as a word matching function, can be used to identify when a data segment in the second transcript 206b includes data that corresponds to an identifier of one of the selected fields 222. One or more data segments originating from the first transcript 206a which appear after, in chronological order, the identified data segment originating from the second transcript 206b may be analyzed to mark data which can be used for the value of the first of the selected fields 222. The data segment(s) originating from the first transcript 206a can be analyzed using an automatic natural language processing function trained for that analysis, or using another analytical function. Depending on the analysis, the data of the data segment(s) from the first transcript 206a can be marked for the first of the selected fields 222.
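A simplified word-matching sketch of this identifier-then-value approach is given below; the segment layout and function name are illustrative assumptions:

```python
def value_after_identifier(id_segments, value_segments, identifier_words):
    """Find the segment (e.g. from the professional's transcript) containing
    all identifier words, then return the text of the earliest segment from
    the other transcript (e.g. the patient's) spoken after it."""
    wanted = set(identifier_words)
    for seg in id_segments:
        if wanted <= set(seg["text"].lower().split()):
            later = [v for v in value_segments if v["time"] > seg["time"]]
            if later:
                return min(later, key=lambda v: v["time"])["text"]
    return None

value = value_after_identifier(
    [{"time": 1.0,
      "text": "What is your pain level today on a scale of one to ten"}],
    [{"time": 3.0, "text": "the pain is at a level seven"}],
    ["pain", "level"],
)
```

The returned text would then be the candidate value to mark for the selected field.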
An example will now be given with reference to FIGS. 2 and 4. FIG. 4 represents an example of transcription dialog 400. The example of transcription dialog 400 comprises a chronological combination of two transcriptions from different audio sources. The first transcription comprises a first part 402a and a second part 402b. The first transcript can be an example of the second transcript 206b in Figure 2 and it can be a transcription of audio data from a healthcare professional. In this example, the first part 402a and the second part 402b can be respectively a first data segment and a second data segment of the first transcription. The second transcript includes a third part 404a and a fourth part 404b. The second transcript can be an example of the first transcript 206a in Figure 2 and it can be a transcription of audio data from a patient. In this example, the third part 404a and the fourth part 404b can be respectively a first data segment and a second data segment of the second transcription.
In this example, an identifier for a selected field can be the patient's current pain level. Consequently, the first part 402a of the first transcription can be identified as the data segment of the first transcription which includes the identifier of the selected field. The third part 404a of the second transcription may be the data segment of the second transcription which appears after, in chronological order, the identified data segment of the first transcription. Thus, the third part 404a can be analyzed to determine the value of the selected field, which in this example can be "the pain is at a level seven".
In this example, there was no additional data segment from the first transcript between the identified data segment from the first transcription and the next data segment from the second transcription. The invention is not, however, limited to this embodiment, as there may be other data segments from the first transcription that may appear before the next data segment from the second transcription.
Referring again to FIG. 2, in some embodiments, the data that corresponds to identifiers of the selected fields 222 may be found in one of the transcripts 206 based on a correlation with data presented to a user who generates the audio data which gives rise to one of the transcripts 206.
For example, the second audio data 202b can be based on words spoken by a user. The user may be asked to pronounce a particular group of words by an electronic device which generates the second audio data 202b. For example, the electronic device can display the group of words for the user. The group of words can be determined by the identifier of a selected field and be configured to request a response that would include a value from the selected field. In these embodiments, as well as in others, the group of words can be transmitted to the electronic device by a system which executes the flow of the flowchart 200.
The electronic device, or the system that supplies the group of words to the electronic device, can note the time at which the group of words is displayed, or is transmitted to the electronic device for display in real time, to allow the display time of the group of words to be correlated with time markers of data segments of the second transcript 206b. In addition, the group of words can be identified in the second transcript 206b using a simple matching algorithm because the group of words is known to the system which executes the flow of the flowchart 200. Based on the temporal correlation and on the known group of words, in certain embodiments, the data segment of the second transcript 206b which corresponds to the identifier of the selected field can be identified without using an automatic natural language processing function or another similar analysis.
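This temporal correlation combined with simple matching might be sketched as follows; the field names and tolerance value are illustrative assumptions:

```python
def find_prompt_segment(segments, prompt, display_time, tolerance=10.0):
    """Locate the data segment containing a known displayed prompt whose
    time marker falls within `tolerance` seconds of the display time."""
    prompt_lower = prompt.lower()
    for seg in segments:
        if (prompt_lower in seg["text"].lower()
                and abs(seg["time"] - display_time) <= tolerance):
            return seg
    return None

segment = find_prompt_segment(
    [{"time": 12.0, "text": "Hello, how are you feeling"},
     {"time": 45.0, "text": "What is your pain level today"}],
    prompt="what is your pain level",
    display_time=42.0,
)
```

Because the prompt text is known in advance, a substring match plus a time-stamp check suffices; no natural language processing is needed.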
After the identification of the data segment of the second transcript 206b which corresponds to the identifier of the selected field, one or more consecutive data segments of the first transcript 206a which appear after the identified data segment originating from the second transcript 206b can be analyzed for the value of the selected field.
For example, with reference to FIG. 4 and following the previous example given in relation to FIG. 4, an electronic device that captures the speech of the healthcare professional could, at a particular time, display the question: "What is your pain level today, on a scale of one to ten?" This group of words can be known as an identifier for a first field. Thus, the first part 402a can be identified as comprising the identifier of the first field based on the match between the displayed group of words and the transcription, and on a correlation between the particular instant at which the group of words is displayed and the timing of the group of words in the second transcript 206b. In this example, the third part 404a, which is the data segment of the first transcript 206a which appears after the first part 402a, can be analyzed to determine the value of the first field, which in this example can be: "the pain is at a level seven".
Referring again to FIG. 2, in certain embodiments, an additional verification can be made after the marking carried out by the automatic natural language processing function. The automatic natural language processing function can, for example, suggest a mark for data from the transcripts 206. In these embodiments, as well as in others, the suggested mark can be verified by a human. In some embodiments, the human who checks the suggested mark may be a participant in the transcribed conversation, such as the healthcare professional. Alternatively, or in addition, the suggested mark can be verified by a person who has re-vocalized the conversation for automatic transcription, as mentioned above. In these embodiments, as well as in others, the automatic natural language processing function may provide one or more suggested marks for data from the transcripts 206. The human who checks the suggested marks may choose the appropriate mark from among the suggested marks for the data, or may reject all of the suggested marks. In these embodiments, as well as in others, the human may be able to choose a mark for the data from among marks that correspond to the selected fields 222.
In box 240, a record type can be selected from several record types 214 as the selected record type 242. In some embodiments, the record type can be selected using information about the person from whom the first audio data 202a originated, which can be accessed using the user identifier 212. This information can, for example, be the name of a records system which stores records for the person. Alternatively or additionally, the information may allow the name of the records system that stores records for the person to be identified. In these embodiments, as well as in others, the record type can be the type of record used by the records system that contains the record of the person from whom the first audio data 202a was obtained.
In box 250, the selected fields 222 can be correlated to the selected record type 242. In these embodiments, as well as in others, the selected fields 222 can be general fields which can correspond to several different record types. There may be several different record types because different entities can manage different records. Fields with the same or similar values in these different record types may be located in different places and/or include slightly different descriptions, for example identifiers. The selected fields 222 can be general fields which can correspond to different fields, having the same or similar values, among the different record types.
The selected fields 222 can be correlated with the specific fields of the different record types according to a conversion table or another algorithm. By correlating the selected fields 222 to the fields of the selected record type 242, the marked data can fill in the correct field in the selected record type 242. In addition, by correlating the selected fields 222 to the fields of different record types, the marked data can fill in the correct field in any type of record.
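Such a conversion table might be sketched as follows; the general field names, record-type names and specific field names are illustrative assumptions:

```python
# Hypothetical conversion table: general field -> specific field per record type.
FIELD_MAP = {
    "patient_current_pain": {"type_a": "Pain Level",
                             "type_b": "Patient-Reported Pain"},
    "medication":           {"type_a": "Medications",
                             "type_b": "Current Treatment"},
}

def correlate_fields(selected_fields, record_type):
    """Map each general selected field onto the field name used by the
    selected record type, skipping fields that type does not define."""
    return {general: FIELD_MAP[general][record_type]
            for general in selected_fields
            if record_type in FIELD_MAP.get(general, {})}

mapping = correlate_fields(["patient_current_pain", "medication"], "type_b")
```

With such a mapping, the same marked data can be routed to the correct field regardless of which record type a given records system uses.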
In box 260, the marked data can be transmitted to a file database to update a file of the person from whom the first audio data 202a originated.
Modifications, additions or omissions can be made to the flowchart 200 without departing from the scope of the present invention. For example, in some embodiments, the flowchart may not include box 220. In these embodiments, as well as in others, the available fields 210 may be used to mark the data. As another example, the flowchart 200 may not include boxes 240 and 250. In these embodiments, as well as in others, the marked data 232 can be transmitted directly to the record database.
FIG. 5 represents an example of a diagram 500 of combining transcription data. The diagram 500 can be arranged according to at least one embodiment described herein. The diagram 500 represents first transcription data 502, second transcription data 504 and third transcription data 506. In particular, the diagram 500 represents an example of combining the first transcription data 502 with the second transcription data 504 to generate the third transcription data 506.
The first transcription data 502 includes first data segments 503, which include a first data segment 503a, a second first data segment 503b, a third first data segment 503c and a fourth first data segment 503d. Each of the first data segments 503 includes one or more words, represented by A1, A2, A3 and A4. The second transcription data 504 includes second data segments 505, which include a first segment of second data 505a, a second segment of second data 505b, a third segment of second data 505c and a fourth segment of second data 505d. Each of the second data segments 505 includes one or more words, represented by B1, B2, B3 and B4.
Each of the first data segments 503 and each of the second data segments 505 further includes a time. The times are represented by Time 1 through Time 8. The times represented by Time 1 through Time 8 are arbitrary, but the numbering from 1 to 8 represents a chronological order from Time 1 to Time 8.
As shown in the diagram 500, the first transcription data 502 and the second transcription data 504 are combined by determining which data segment among the first data segments 503 and the second data segments 505 has the earliest time. The data segment having the earliest time is used to start the third transcription data 506. The data segment of the first data segments 503 and of the second data segments 505 having the next time is then added to the third transcription data 506. The remaining data segments are added in chronological order as shown.
In some embodiments, one or more of the data segments among the first data segments 503 and the second data segments 505 may have substantially similar or identical times. In these embodiments, as well as in others, the arrangement of the data segments can be chosen according to the words and/or the punctuation in the data segments and in the data segments adjacent to the data segments in question. For example, if the second first data segment 503b and the second second data segment 505b have substantially similar or identical times, the wording or the punctuation of the second first data segment 503b and/or of the second second data segment 505b and of their neighboring data segments can be examined. If either of the second first data segment 503b and the second second data segment 505b includes punctuation, then the data segment that includes the punctuation can be placed before the other data segment in the third transcription data 506. As another example, it is possible to analyze the punctuation of the data segments that surround the data segments in question in order to determine an ordering of the data segments.
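A minimal sketch of this chronological interleaving, including a punctuation tie-break for segments with identical times, is given below; the (time, text) segment representation is an illustrative assumption:

```python
def combine_transcripts(first_segments, second_segments):
    """Interleave two lists of (time, text) segments chronologically.
    When two segments share a time, the one ending in sentence-final
    punctuation is placed before the other."""
    def order_key(segment):
        time, text = segment
        ends_sentence = text.rstrip()[-1:] in ".?!"
        return (time, 0 if ends_sentence else 1)
    return sorted(first_segments + second_segments, key=order_key)

combined = combine_transcripts(
    [(1, "A1"), (3, "A2")],      # e.g. first transcription data 502
    [(2, "B1."), (3, "B2.")],    # e.g. second transcription data 504
)
```

In this sketch, the two segments at time 3 are ordered so that the punctuated segment "B2." precedes "A2" in the combined transcription.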
Modifications, additions or omissions can be made to diagram 500 and/or to the described method of combining transcription data without departing from the scope of the present invention.
FIG. 6 is a flow diagram of another example of a method 600 for transcribing and filling in electronic records. The method 600 can be arranged according to at least one embodiment described herein. The method 600 can be implemented, in whole or in part, in certain embodiments by a system or by combinations of components of a system or environment as described herein. For example, the method 600 can be implemented, in whole or in part, by the environment 100 of FIG. 1 and/or the system 800 of FIG. 8. In these embodiments, as well as in others, some or all of the operations of the method 600 can be carried out based on the execution of instructions stored on one or more non-transitory computer-readable media. Although they are represented as discrete blocks, various blocks may be divided into additional blocks, combined into fewer blocks, or eliminated, depending on the implementation envisaged.
The method 600 can begin at block 602, where first transcription data of first audio data from a first party during a conversation between the first party and a second party can be generated using a processing system. The first transcription data may be a transcription of the first audio data.
At block 604, second transcription data of second audio data of the second party during the conversation can be generated using the processing system. The second transcription data may be a transcription of the second audio data. In certain embodiments, the second party may be a healthcare professional and the first party may be a patient of the healthcare professional.
At block 606, one or more words of the first transcription data can be identified, using the processing system, as a value for a record field based on the identified words corresponding to the record field and the fact that the word or words come from the first transcription data and not from the second transcription data.
At block 608, the identified words can be transmitted over a network to an electronic records database as a value for the record field of a user record of the first party.
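Blocks 606 and 608 can be sketched as follows. This is a minimal illustration with hypothetical names (`identify_field_values`, `field_predicates`); an actual implementation would match fields with more than simple per-word predicates.

```python
def identify_field_values(first_words, second_words, field_predicates):
    """Identify words as values for record fields only when they originate
    from the first transcription data and not from the second (block 606).

    `first_words` and `second_words` are lists of words from the two
    transcriptions; `field_predicates` maps a record-field name to a
    predicate testing whether a word corresponds to that field."""
    second_set = set(second_words)
    values = {}
    for field, matches_field in field_predicates.items():
        # Keep only words that correspond to the field AND are absent
        # from the second transcription data.
        matched = [w for w in first_words if matches_field(w) and w not in second_set]
        if matched:
            values[field] = " ".join(matched)
    # The returned values would then be transmitted over a network to the
    # electronic records database (block 608).
    return values
```

The exclusion test against `second_set` is what enforces that a value comes from the first party's speech, e.g. the patient's own answer rather than the healthcare professional's question.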
A person skilled in the art will appreciate that, for these and other processes, operations and procedures, the functions and/or operations carried out may be implemented in a different order. In addition, the outlined functions and operations are given only as examples, and some of the functions and operations may be optional, combined into fewer functions and operations, or expanded into additional functions and operations without departing from the essence of the embodiments described.
For example, in certain embodiments, before identifying the word or words, the method 600 may further comprise obtaining a first user identifier associated with the first party and identifying several general record fields which must be filled in using information from the first audio data and the second audio data during the conversation. In these embodiments, as well as in others, the general record fields may be identified by the first user identifier. In these embodiments, as well as in others, the record field may be one of the several identified general record fields.
As another example, the method 600 may further include identifying one or more second words from the second transcription data as a value for a second record field based on the second identified words corresponding to the second record field and the fact that the second word or words come from the second transcription data and not from the first transcription data. In these embodiments, as well as in others, the method 600 may further comprise transmitting the second identified words to the electronic records database as a value for the second record field of a user record.
As another example, the method 600 may further include establishing a communication session between a first device and a second device such that first device audio data is sent from the first device to the second device and second device audio data is sent from the second device to the first device during the communication session. In these embodiments, as well as in others, the first audio data may be the first device audio data and the second audio data may be the second device audio data.
As another example, in certain embodiments, before the identification of the word or words, the method 600 may further comprise obtaining a first user identifier associated with the first device based on data from the first device which is used to establish the communication session, and selecting a first type of electronic record from among several types of electronic records based on the first user identifier. In these embodiments, as well as in others, the user record may be of the first type of electronic record. The method 600 can further comprise determining the record field in the first type of electronic record which corresponds to the identified words.
FIG. 7 is a flow diagram of an example of a method 700 for transcribing and filling in electronic records. The method 700 can be arranged according to at least one embodiment described herein. The method 700 can be implemented, in whole or in part, in certain embodiments by a system or by combinations of components of a system or environment as described herein. For example, the method 700 can be implemented, in whole or in part, by the environment 100 of FIG. 1 and/or the system 800 of FIG. 8. In these embodiments, as well as in others, some or all of the operations of the method 700 can be carried out based on the execution of instructions stored on one or more non-transitory computer-readable media. Although they are represented as discrete blocks, various blocks may be divided into additional blocks, combined into fewer blocks, or eliminated, depending on the implementation envisaged.
The method 700 can begin at block 702, where a communication session can be established between a first device and a second device such that first device audio data is sent from the first device to the second device and second device audio data is sent from the second device to the first device during the communication session.
At block 704, the first device audio data can be received while the first device audio data is routed to the second device. At block 706, the second device audio data can be received while the second device audio data is routed to the first device.
At block 708, first transcription data of the first device audio data can be generated. The first transcription data may be a transcription of the first device audio data and may include several first data segments. Each of these first data segments may include one or more words from the transcription of the first device audio data.
At block 710, second transcription data of the second device audio data can be generated. The second transcription data may be a transcription of the second device audio data and may include several second data segments. Each of these second data segments may include one or more words from the transcription of the second device audio data.
At block 712, one of the second data segments can be identified that includes a field identification word associated with an identifier for a general record field which must be filled in using information from the first device audio data and the second device audio data during the communication session.
At block 714, one or more first data segments among said first data segments which appear after the identified second data segment can be selected as first selected data segments.
At block 716, one or more of the words from the first selected data segments can be identified as a value for the general record field based on the identified words corresponding to the general record field.
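Blocks 712 through 716 can be sketched as follows. The `Seg` type, the function name, and the substring match for the field identification word are illustrative assumptions rather than the disclosed implementation.

```python
from collections import namedtuple

# Hypothetical segment type: a time and the words transcribed at that time.
Seg = namedtuple("Seg", ["time", "words"])

def extract_general_field_value(first_segments, second_segments, field_word):
    """Find a second data segment containing the field identification word
    (block 712), select the first data segments that appear after it
    (block 714), and return their words as the candidate field value
    (block 716)."""
    trigger_time = None
    for seg in second_segments:
        if field_word in seg.words.lower():
            trigger_time = seg.time
            break
    if trigger_time is None:
        return None  # the field identification word was never spoken
    selected = [s.words for s in first_segments if s.time > trigger_time]
    return " ".join(selected) if selected else None
```

In practice the selected words would still be filtered against the general record field, as block 716 describes; this sketch simply returns everything the first party says after the trigger.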
A person skilled in the art will appreciate that, for these and other processes, operations and procedures, the functions and/or operations carried out may be implemented in a different order. In addition, the outlined functions and operations are given only as examples, and some of the functions and operations may be optional, combined into fewer functions and operations, or expanded into additional functions and operations without departing from the essence of the embodiments described.
For example, in some embodiments, the method 700 may further comprise obtaining a first user identifier associated with the first device from data from the first device which is used to establish the communication session, and selecting a first type of electronic record from among several types of electronic records based on the first user identifier. The method 700 can further comprise matching the general record field with a first record field of the first type of electronic record based on the general record field and the first record field being configured for analogous values, and transmitting the first user identifier to an electronic health record database which stores the first type of electronic record. The method 700 may further comprise transmitting the identified words to the electronic health record database as a value for the first record field of a user record of the first type of electronic record. In these embodiments, as well as in others, the user record can be associated with the first user identifier in the electronic health record database.
As another example, in some embodiments, the method 700 may further include interleaving the first transcription data and the second transcription data to generate third transcription data such that a combined transcription included in the third transcription data is substantially in chronological order, and transmitting the third transcription data to the electronic health record database as the value of a second record field in the user record.
As another example, in certain embodiments, the method 700 can also comprise transmitting the field identification word to the second device for presentation of the field identification word by the second device. In these embodiments, as well as in others, the identified second data segment is identified based on the fact that it includes the field identification word and that it appears, in time, after the presentation of the field identification word by the second device.
FIG. 8 shows an example of a computer system 800 which can be used to transcribe or identify data for filling in electronic files. The system 800 can be arranged according to at least one embodiment described herein. The system 800 may include a processor 810, a memory 812, a communication unit 816, a display 818, a user interface unit 820 and a peripheral 822, all of which can be communicatively coupled. In some embodiments, the system 800 can be part of any system or device described herein.
For example, the system 800 may be part of the first device 110 of Figure 1. In these embodiments, as well as in others, the system 800 can be configured to perform one or more of the following tasks to participate in a communication session: capturing audio and video data of a first user, sending the captured audio and video data, receiving audio and video data of a second user, and presenting the received audio and video data, among other tasks described herein.
As another example, the system 800 may be part of the second devices 120 of Figure 1. In these embodiments, as well as in others, the system 800 may be configured to perform one or more of the following tasks to participate in a communication session: capturing audio and video data of a second user, sending the captured audio and video data, receiving audio and video data of a first user, and presenting the received audio and video data, among other tasks described herein.
As another example, the system 800 can be part of the transcription system 160 of Figure 1. In these embodiments, as well as in others, the system 800 can be configured to perform one or more of the following tasks: generating transcription data of audio signals from the first device and the second device, identifying data among the transcription data to fill in record fields, and combining the first and second transcription data, among other tasks described herein.
Generally, the processor 810 can include any suitable special-purpose or general-purpose computer, computing entity, or processing device comprising various computing hardware or software modules, and it can be configured to execute instructions stored on any applicable computer-readable storage medium. For example, the processor 810 may include a microprocessor, a microcontroller, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or any other digital or analog circuit configured to interpret and/or execute program instructions and/or to process data.
Of course, although shown as a single processor in Figure 8, the processor 810 can include any number of processors distributed across any number of networks or physical locations that are configured to perform, individually or collectively, any operation described herein. In some embodiments, the processor 810 can interpret and/or execute program instructions and/or process data stored in the memory 812. In some embodiments, the processor 810 can execute program instructions stored in the memory 812.
For example, the system 800 may be part of the systems depicted in Figure 1. In these embodiments, as well as in others, the instructions may be used to execute one or more of the methods 600 and 700 of Figures 6 and 7, respectively, and/or the flow of the flowchart 200 of FIG. 2.
The memory 812 may include one or more computer-readable storage media intended to carry or have stored thereon executable instructions or data structures. These computer-readable storage media can be any available media that can be accessed by a general-purpose or special-purpose computer, such as the processor 810. By way of example, and not limitation, these computer-readable storage media can include non-transitory computer-readable storage media such as random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, flash memory devices (for example semiconductor memory devices), or any other storage medium which can be used to carry or store particular program code in the form of computer-executable instructions or data structures and which can be accessed by a general-purpose or special-purpose computer. Combinations of the above may also be included within the scope of computer-readable storage media. The computer-executable instructions may include, for example, instructions and data configured to cause the processor 810 to perform a certain operation or group of operations described herein.
The communication unit 816 can include any component, device, system, or combination thereof that is configured to transmit or receive information over a network. In some embodiments, the communication unit 816 can communicate with other devices located in other places, in the same place, or even with other components of the same system. For example, the communication unit 816 may include a modem, a network card (wireless or wired), an infrared communication device, a wireless communication device (such as an antenna), and/or a chipset (such as a Bluetooth device, an 802.6 device (e.g. a metropolitan area network (MAN) device), a Wi-Fi device, a WiMAX device, cellular communication equipment, etc.), and/or the like. The communication unit 816 can allow the exchange of data with a network and/or any other device or system described herein. For example, if the system 800 is included in the first device 110 of FIG. 1, the communication unit 816 can allow the first device 110 to communicate with the communications routing system 140.
The display 818 can be configured as one or more displays, such as a liquid crystal display, a light-emitting diode display, or some other type of display. The display 818 can be configured to present video, text, user interfaces, and other data under the control of the processor 810. For example, when the system 800 is included in the first device 110 of Figure 1, the display 818 can be configured to present a second video signal from a second device and a transcription of a second audio signal from the second device.
The user interface unit 820 can comprise any device that allows a user to interface with the system 800. For example, the user interface unit 820 can comprise a mouse, a touchpad, a keyboard, buttons, and/or a touch screen, among others. The user interface unit 820 can receive input from a user and provide that input to the processor 810.
The peripherals 822 can include one or more devices. For example, the peripherals may include a microphone, an imager and/or a speaker, among others. In these embodiments, as well as in others, the microphone can be configured to capture audio signals. The imager can be configured to capture digital images. Digital images can be captured to produce video or image data. In some embodiments, the speaker may broadcast an audio signal received by the system 800 or otherwise generated by the system 800. Modifications, additions or omissions may be made to the system 800 without departing from the scope of the present invention. For example, the system 800 may not include one or more of the following: the display 818, the user interface unit 820, and the peripherals 822.
In some embodiments, the various components, modules, engines, and services described herein can be implemented as objects or processes that run on a computer system (for example, as separate threads). While some of the systems and methods described herein are generally described as being implemented in software (stored on and/or executed by general-purpose hardware), specific hardware implementations or a combination of software and specific hardware implementations are also possible and envisaged.
According to common practice, the various elements illustrated in the drawings may not be drawn to scale. The illustrations presented herein are not intended to be actual views of any particular apparatus (e.g. a device or a system) or process; rather, they are merely idealized representations that are used to describe various embodiments of the invention. Therefore, the dimensions of the various elements may be arbitrarily increased or reduced for the sake of clarity. In addition, some drawings may be simplified for the sake of clarity. The drawings may therefore not represent all the components of a given apparatus or all the operations of a particular process.
The terms used herein, and in particular in the appended claims (e.g. the body of the appended claims), are generally intended to be "open" terms (e.g. the term "comprising" should be interpreted as "comprising, but not limited to", the term "having" should be interpreted as "having at least", the term "includes" should be interpreted as "includes, but is not limited to", etc.).
Furthermore, if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the appended claims that follow may contain the use of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of these phrases should not be taken to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such an introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g. "a" and/or "an" should be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations.
Furthermore, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such a recitation should be interpreted to mean at least the recited number (e.g. the bare recitation of "two recitations", without other modifiers, means at least two recitations, or two or more recitations). In addition, in those instances where a convention analogous to "at least one of A, B, and C, etc." or "one or more of A, B, and C, etc." is used, in general such a construction is intended to include A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together, etc. For example, the use of the term "and/or" is intended to be construed in this manner.
Furthermore, any disjunctive word or phrase presenting two or more alternative terms, whether in the description, the claims, or the drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibilities of "A", "B", or "A and B".
In addition, the terms "first", "second", "third", etc. are not necessarily used herein to denote a specific order or a specific number of elements. Generally, the terms "first", "second", "third", etc. are used to distinguish between different elements as generic identifiers. Absent a showing that the terms "first", "second", "third", etc. denote a particular order, these terms should not be understood to denote a particular order. Furthermore, absent a showing that the terms "first", "second", "third", etc. denote a particular number of elements, these terms should not be understood to denote a particular number of elements. For example, a first widget may be described as having a first side and a second widget may be described as having a second side. The use of the term "second side" with respect to the second widget may be to distinguish that side of the second widget from the "first side" of the first widget, and not to indicate that the second widget has two sides.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Although embodiments of the present invention have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the present invention.
Claims (15)
1. A computer-implemented method for filling an electronic record based on communications established between two parties, the method comprising:
generating, using a processing system, first transcription data of a first audio signal from a first party during a conversation between the first party and a second party, the first transcription data being a transcription of the first audio signal;
generating, using the processing system, second transcription data of a second audio signal from the second party during the conversation, the second transcription data being a transcription of the second audio signal;
obtaining a first user identifier associated with the first party;
identifying a plurality of general record fields to be filled in using information from the first audio signal and the second audio signal during the conversation, the plurality of general record fields being identified by the first user identifier;
after identifying the plurality of general record fields, identifying, using the processing system, one or more words of the first transcription data as a value for a record field based on the identified words corresponding to the record field and the fact that the word or words come from the first transcription data and not from the second transcription data, wherein the record field is one of the plurality of identified general record fields; and transmitting, over a network, the identified words to an electronic records database as a value for the record field of a user record of the first party.
2. The computer-implemented method of claim 1, further comprising:
identifying one or more second words from the second transcription data as a value for a second record field based on the second identified words corresponding to the second record field and the fact that the second word or words come from the second transcription data and not from the first transcription data; and transmitting the second identified words to the electronic records database as a value for the second record field of a user record.
3. The computer-implemented method of claim 1, further comprising establishing a communication session between a first device and a second device such that a first device audio signal is sent from the first device to the second device and a second device audio signal is sent from the second device to the first device during the communication session, the first audio signal being the first device audio signal and the second audio signal being the second device audio signal.
4. The computer-implemented method of claim 1, wherein the record field is a first record field, and before the transmission of the identified words, the method further comprises:
selecting a first type of electronic record from a plurality of types of electronic records based on the first user identifier, the user record being of the first type of electronic record; and determining a second record field in the first type of electronic record that corresponds to the first record field.
5. A computer-implemented method for filling an electronic record based on communications established between two parties, the method comprising:
establishing a communication session between a first device and a second device such that a first device audio signal is sent from the first device to the second device and a second device audio signal is sent from the second device to the first device during the communication session;
generating first transcription data of the first device audio signal, the first transcription data being a transcription of the first device audio signal and comprising one or more first words;
generating second transcription data of the second device audio signal, the second transcription data being a transcription of the second device audio signal and comprising one or more second words;
identifying one or more of the second words as field identification data associated with an identifier for a general record field to be filled in using information from the first device audio signal;
selecting one or more of the first words which appear after the field identification data as selected words; and identifying one or more of the selected words as a value for the general record field based on the identified words corresponding to the general record field.
6. The computer-implemented method of claim 5, further comprising:
obtaining a first user identifier associated with the first device based on data from the first device;
selecting a first type of electronic record from a plurality of types of electronic records based on the first user identifier;
matching the general record field with a first record field of the first type of electronic record based on the general record field and the first record field being configured for analogous values;
transmitting the first user identifier to an electronic health record database which stores the first type of electronic record; and transmitting the identified words to the electronic health record database as a value for the first record field of a user record of the first type of electronic record, the user record being associated with the first user identifier in the electronic health record database.
7. The computer-implemented method of claim 6, further comprising interleaving the first transcription data and the second transcription data to generate third transcription data such that a combined transcription included in the third transcription data is substantially in chronological order.
8. The computer-implemented method of claim 7, further comprising transmitting the third transcription data to the electronic health record database as the value of a second record field in the user record.
9. The computer-implemented method of claim 5, further comprising transmitting the field identification data to the second device for presentation of the field identification data by the second device.
10. A computer-implemented method for filling an electronic record based on communications, the method comprising:
generating, using a processing system, first transcription data of a first audio signal from a first party, the first transcription data being a transcription of the first audio signal;
obtaining a first user identifier associated with the first party;
identifying a plurality of general record fields to be filled in using information from the first audio signal, the plurality of general record fields being identified by the first user identifier;
after identifying the plurality of general record fields, identifying, using the processing system, one or more words of the first transcription data as a value for a record field based on the identified words corresponding to the record field and the word or words coming from the first transcription data, the record field being one of the plurality of identified general record fields; and transmitting, over a network, the identified words to an electronic records database as a value for the record field of a user record of the first party.
11. The computer-implemented method of claim 10, further comprising generating, using the processing system, second transcription data of a second audio signal from a second party, the second transcription data being a transcription of the second audio signal.
12. The computer-implemented method of claim 11, further comprising:
identifying one or more second words from the second transcription data as a value for a second record field based on the second identified words corresponding to the second record field and the second word or words coming from the second transcription data; and transmitting the second identified words to the electronic records database as a value for the second record field of a user record.
13. The computer-implemented method of claim 11, further comprising establishing a communication session between a first device and a second device such that a first device audio signal is sent from the first device to the second device and a second device audio signal is sent from the second device to the first device during the communication session, the first audio signal being the first device audio signal and the second audio signal being the second device audio signal.
14. The computer-implemented method of claim 11, wherein the second party is a healthcare professional and the first party is a patient of the healthcare professional.
15. The computer-implemented method of claim 10, wherein the record field is a first record field and, before the transmission of the identified words, the method further comprises:
selecting a first type of electronic record from a plurality of types of electronic records based on the first user identifier, the user record being of the first type of electronic record; and determining a second record field in the first type of electronic record that corresponds to the first record field.
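The method of claims 10 through 15 can be summarized as: keep each party's transcription separate so a word's origin is known, accept a word as a field value only when it matches the field and comes from the required transcription, then map the general field onto the field name used by the record type selected for the user. The sketch below illustrates that flow in Python; the field specifications, keyword matching, and record-type mapping are hypothetical stand-ins, not the implementation described in the patent.

```python
# Illustrative sketch of the claimed method; all names and data are
# hypothetical examples, not taken from the patent.

# Transcriptions of each party's audio signal, kept separate so that the
# origin of every word (first vs. second party) is known.
first_transcription = "my date of birth is March 3 1980".split()
second_transcription = "what is your date of birth".split()

# Record fields to fill, each with keywords that identify a value and the
# transcription the value must originate from (claim 10: the value must
# come from the first transcription, not the second).
FIELD_SPECS = {
    "date_of_birth": {"keywords": {"march", "1980"}, "source": "first"},
}

def identify_field_values(first_words, second_words, specs):
    """Return {field: value}, using only words from the required source."""
    sources = {"first": first_words, "second": second_words}
    values = {}
    for field, spec in specs.items():
        words = sources[spec["source"]]
        matched = [w for w in words if w.lower() in spec["keywords"]]
        if matched:
            values[field] = " ".join(matched)
    return values

# Claim 15: select a record type based on the user identifier, then map each
# general field onto the corresponding field of that record type.
RECORD_TYPES = {"clinic-a": {"date_of_birth": "dob"}}

def transmit_to_record(user_id, values):
    """Rename general fields to the selected record type's field names."""
    mapping = RECORD_TYPES.get(user_id, {})
    return {mapping.get(field, field): value for field, value in values.items()}

values = identify_field_values(first_transcription, second_transcription, FIELD_SPECS)
record_update = transmit_to_record("clinic-a", values)
print(record_update)  # {'dob': 'March 1980'}
```

Because the second party's words never reach a field whose `source` is `"first"`, the clarifying question "what is your date of birth" cannot pollute the patient's record, which is the distinction the claims turn on.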
Similar technologies:
Publication number | Publication date | Patent title
FR3067157A1|2018-12-07|AUTOMATIC FILLING OF ELECTRONIC FILES
US9053096B2|2015-06-09|Language translation based on speaker-related information
AU2015206736B2|2019-11-21|Digital personal assistant interaction with impersonations and rich multimedia in responses
US20150186110A1|2015-07-02|Voice interface to a social networking service
US8934652B2|2015-01-13|Visual presentation of speaker-related information
CN103621119B|2019-01-15|System and method for voice message information to be presented to the user for calculating equipment
US20130144619A1|2013-06-06|Enhanced voice conferencing
US20130077835A1|2013-03-28|Searching with face recognition and social networking profiles
Kane et al.2012|What we talk about: designing a context-aware communication tool for people with aphasia
US20210117712A1|2021-04-22|Smart Cameras Enabled by Assistant Systems
US9172795B1|2015-10-27|Phone call context setting
CN105324734A|2016-02-10|Tagging using eye gaze detection
Brady et al.2015|Crowdsourcing accessibility: Human-powered access technologies
KR102249437B1|2021-05-07|Automatically augmenting message exchange threads based on message classfication
US20200320364A1|2020-10-08|Computer System and Method for Facilitating an Interactive Conversational Session with a Digital Conversational Character
US10410655B2|2019-09-10|Estimating experienced emotions
US8572497B2|2013-10-29|Method and system for exchanging contextual keys
FR3063595A1|2018-09-07|AUTOMATIC DELAY OF READING A MESSAGE ON A DEVICE
US9369566B2|2016-06-14|Providing visual cues for a user interacting with an automated telephone system
Eggeling2021|At Work with Practice Theory,‘Failed’Fieldwork, or How to See International Politics in An Empty Chair
US11189290B2|2021-11-30|Interactive selection and modification
JP2021516832A|2021-07-08|Methods and systems for processing images
US20210383288A1|2021-12-09|Neural network for increasing religious campaign activity effectiveness
US20140328472A1|2014-11-06|System for Managing Spontaneous Vocal Communication
EP3624136A1|2020-03-18|Invoking chatbot in a communication session
Patent family:
Publication number | Publication date
WO2018222228A1|2018-12-06|
US20180350368A1|2018-12-06|
US9824691B1|2017-11-21|
Cited references:
Publication number | Application date | Publication date | Applicant | Patent title

US5960449A|1994-11-21|1999-09-28|Omron Corporation|Database system shared by multiple client apparatuses, data renewal method, and application to character processors|
US6477491B1|1999-05-27|2002-11-05|Mark Chandler|System and method for providing speaker-specific records of statements of speakers|
US20050149364A1|2000-10-06|2005-07-07|Ombrellaro Mark P.|Multifunction telemedicine software with integrated electronic medical record|
US7487440B2|2000-12-04|2009-02-03|International Business Machines Corporation|Reusable voiceXML dialog components, subdialogs and beans|
KR100425831B1|2001-01-08|2004-04-03|엘지전자 주식회사|Method of stroing data in a personal information terminal|
US20080300856A1|2001-09-21|2008-12-04|Talkflow Systems, Llc|System and method for structuring information|
WO2003062958A2|2002-01-22|2003-07-31|Wesley Valdes|Communication system|
US20030144885A1|2002-01-29|2003-07-31|Exscribe, Inc.|Medical examination and transcription method, and associated apparatus|
US8738396B2|2002-04-19|2014-05-27|Greenway Medical Technologies, Inc.|Integrated medical software system with embedded transcription functionality|
US7200806B2|2002-10-25|2007-04-03|Ubs Ag|System and method for generating pre-populated forms|
US7739117B2|2004-09-20|2010-06-15|International Business Machines Corporation|Method and system for voice-enabled autofill|
US20060143157A1|2004-12-29|2006-06-29|America Online, Inc.|Updating organizational information by parsing text files|
US7808664B2|2005-06-08|2010-10-05|Ricoh Company, Ltd.|Approach for securely printing electronic documents|
US8229745B2|2005-10-21|2012-07-24|Nuance Communications, Inc.|Creating a mixed-initiative grammar from directed dialog grammars|
US9497314B2|2006-04-10|2016-11-15|Microsoft Technology Licensing, Llc|Mining data for services|
US8132104B2|2007-01-24|2012-03-06|Cerner Innovation, Inc.|Multi-modal entry for electronic clinical documentation|
US8886521B2|2007-05-17|2014-11-11|Redstart Systems, Inc.|System and method of dictation for a speech recognition command system|
US20090048903A1|2007-08-13|2009-02-19|Universal Passage, Inc.|Method and system for universal life path decision support|
US20090313076A1|2008-06-17|2009-12-17|Roy Schoenberg|Arranging remote engagements|
WO2010002376A1|2008-07-01|2010-01-07|Pro-Scribe Inc.|System and method for contextualizing patient health information in electronic health records|
EP2288130A1|2009-08-13|2011-02-23|me2me AG|Context- and user-defined tagging techniques in a personal information service with speech interaction|
US8521823B1|2009-09-04|2013-08-27|Google Inc.|System and method for targeting information based on message content in a reply|
US20110145013A1|2009-12-02|2011-06-16|Mclaughlin Mark|Integrated Electronic Health Record System with Transcription, Speech Recognition and Automated Data Extraction|
JP5652406B2|2009-12-17|2015-01-14|日本電気株式会社|Voice input system and voice input program|
US20120173281A1|2011-01-05|2012-07-05|Dilella James M|Automated data entry and transcription system, especially for generation of medical reports by an attending physician|
US20120323574A1|2011-06-17|2012-12-20|Microsoft Corporation|Speech to text medical forms|
US9865025B2|2011-11-28|2018-01-09|Peter Ragusa|Electronic health record system and method for patient encounter transcription and documentation|
US8458193B1|2012-01-31|2013-06-04|Google Inc.|System and method for determining active topics|
KR101977072B1|2012-05-07|2019-05-10|엘지전자 주식회사|Method for displaying text associated with audio file and electronic device|
US20130339030A1|2012-06-13|2013-12-19|Fluential, Llc|Interactive spoken dialogue interface for collection of structured data|
US20140222462A1|2013-02-07|2014-08-07|Ian Shakil|System and Method for Augmenting Healthcare Provider Performance|
US9342846B2|2013-04-12|2016-05-17|Ebay Inc.|Reconciling detailed transaction feedback|
US9274687B1|2013-10-11|2016-03-01|Google Inc.|Managing schedule changes for correlated calendar events|
US20150106091A1|2013-10-14|2015-04-16|Spence Wetjen|Conference transcription system and method|
US9344686B2|2014-04-29|2016-05-17|Vik Moharir|Method, system and apparatus for transcribing information using wearable technology|
US10210204B2|2014-06-16|2019-02-19|Jeffrey E. Koziol|Voice actuated data retrieval and automated retrieved data display|
US9691385B2|2014-06-19|2017-06-27|Nuance Communications, Inc.|Methods and apparatus for associating dictation with an electronic record|
Cited by:
US10957427B2|2017-08-10|2021-03-23|Nuance Communications, Inc.|Automated clinical documentation system and method|
WO2019173340A1|2018-03-05|2019-09-12|Nuance Communications, Inc.|System and method for review of automated clinical documentation|
US11250383B2|2018-03-05|2022-02-15|Nuance Communications, Inc.|Automated clinical documentation system and method|
US11120490B1|2019-06-05|2021-09-14|Amazon Technologies, Inc.|Generating video segments based on video metadata|
US10657176B1|2019-06-11|2020-05-19|Amazon Technologies, Inc.|Associating object related keywords with video metadata|
US11216480B2|2019-06-14|2022-01-04|Nuance Communications, Inc.|System and method for querying data points from graph data structures|
US11227679B2|2019-06-14|2022-01-18|Nuance Communications, Inc.|Ambient clinical intelligence system and method|
US11043207B2|2019-06-14|2021-06-22|Nuance Communications, Inc.|System and method for array data simulation and customized acoustic modeling for ambient ASR|
US11222103B1|2020-10-29|2022-01-11|Nuance Communications, Inc.|Ambient cooperative intelligence system and method|
Legal status:
2019-10-25| ST| Notification of lapse|Effective date: 20191006 |
Priority:
Application number | Application date | Patent title
US15612644|2017-06-02|
US15/612,644|US9824691B1|2017-06-02|2017-06-02|Automated population of electronic records|