Systems and Methods for Language Learning
Patent Abstract:
LANGUAGE LEARNING SYSTEMS AND METHODS. The present systems, methods and products relate to language learning software that automatically processes text from a corpus using a variety of natural language processing features, which can be combined with custom-designed learning activities to offer a needs-based, adaptive learning methodology. The system can receive a text, extract keywords that are pedagogically valuable for non-native language learning, assign a difficulty score to the text using various linguistic attributes of the text, generate a list of potential distractors for each keyword for use in the learning activities, and topically tag the text against a content-based taxonomy. This output is then used in conjunction with a series of learning activity types designed to meet the student's language practice needs, thereby creating dynamic and tailored activities for the student.
Publication number: BR112015020314B1
Application number: R112015020314-0
Filing date: 2014-02-14
Publication date: 2021-05-18
Inventors: Katharine NIELSON; Na-Im Tyson; Andrew Breen; Kasey KIRKHAM
Applicant: Voxy, Inc.
IPC main classification:
Patent Description:
Field of Invention [001] The subject matter disclosed herein relates generally to computer-aided language learning. Background of the Invention [002] Conventional language learning methodologies organize learning material into classes or lessons, which often contain metalinguistic instructional information followed by pedagogically oriented exercises. It is known to have students demonstrate knowledge of a specific subject by participating in educational activities comprising a series of questions. Educational activities in which students demonstrate knowledge by engaging in various tasks are also known. In many cases, educational software can implement educational activities in which students demonstrate knowledge through a series of questions or by engaging in various tasks. [003] Learning activities are normally prepared manually by a teacher, a textbook author or a curriculum planner. Learning activities are commonly prepared by an instructor, an education specialist or some other party involved in preparing educational content who is very familiar with aspects of language learning. Typically, once these learning activities are generated, they are then mass reproduced. Similarly, conventional distance learning programs are based on pre-built language learning software or traditional textbooks. In these cases of pre-built software and textbooks, curriculum planners create content and then try to make that content serve many people. One problem with this paradigm is that learning activities are not dynamically generated to suit a language learner's needs. [004] Conventional language learning software and tools employ a teaching methodology, curriculum and courses that remain static unless a replacement or supplemental textbook or language learning product is purchased by a language learner or developed by an instructor. Thus, not only is the curriculum unable to adapt as a whole, but there is also no adaptability of learning activities and materials at a finer grain, with each page or new computer exercise. Because the overall curriculum is static, daily activities and practice exercises cannot adapt to meet the student's needs. For example, a student's past tense conjugation may not be a problem, while the student's spelling may be poor. [005] Conventional language learning methodologies often implement distractors (incorrect answer alternatives that must nevertheless be plausible given the students' likely reasoning), such as predetermined incorrect answers to academic questions, for example, multiple choice problems. However, conventional language learning methodologies implement distractors that are entirely static and cannot be adapted to address a student's specific skill weaknesses. Likewise, familiar language learning tools are not adaptable to suit a student's goals or content preferences. Students must work with the static language learning materials provided to them, without regard to the student's personal interests or the real-life applicability of these materials in the context of the student's goals. [006] What is needed is an efficient and effective way to dynamically create content and learning materials, or learning activities, for a language learning course. What is also needed is a way to dynamically generate learning activities and language learning courses that can automatically adapt a course midstream to suit a student's needs, goals and interests in a language.
[007] As discussed above, the development of distractors for language learning purposes is known in the art. However, distractors are commonly prepared in some written or manual way so that they can be reproduced in large volumes. In many cases, when developing learning activities based on a given text, the activities must focus on some set of words in order to exercise several learning objectives. Typically, a person preparing learning activities manually identifies and selects the words on which the learning activities will be based or focused. [008] Thus, what is needed, in a language learning context, is a way to automatically identify words in a text that may be useful for language learning, and to subsequently extract those words from the text. A way to extract useful words, or keywords, from the text is therefore necessary, with subsequent storage of these keywords for use in the development of learning activities. [009] Identifying keywords in a text to generate a summary of the text's content is known. Conventional keyword extractors are typically interested in obtaining a percentage of keywords that provides the most efficient summary of the text possible. To achieve this efficiency, conventional keyword extraction tools aim to summarize the text's content using as few keywords as possible. However, these conventional keyword extraction tools may incidentally filter out keywords that would otherwise help with language learning, which makes conventional keyword extraction tools ineffective in the context of language learning. [010] What is needed for language learning is a way to obtain a number of keywords from the text, without regard to whether those keywords would appear inefficient in a summarization context. What is also needed is a way to extract and store keywords that are useful for language learning. [011] There are known means for identifying various attributes of a word in text, for example, noun, past tense, first person, etc. It is also known to filter out various inconsequential words from a line of text. For example, it is known to remove articles such as "the" or "a" from an online search query through the use of a stop word list. Conversely, words that are rare in a corpus, and therefore have a lower probability of occurring in a given document, can be weighted more heavily in an information retrieval system through known techniques, such as TF-IDF. [012] What is needed is a means of identifying and quantifying the pedagogical value of keywords extracted from text for language learning. What is needed is a way to look at a word's attributes to determine whether the word can help a student learn a language. [013] There are known methods in the art for calculating text difficulty. Conventional methods for calculating the difficulty of a corpus are intended for adult native speakers. Conventional text difficulty calculations are typically done holistically by teachers and textbook writers. However, there is no established method of calculating text difficulty that is designed for second language students. For native language readers, and for children learning to read in their first language, there are standards for assessing text difficulty. In addition, there are known systems for leveling books, such as A-to-Z leveled readers, which most school children encounter when they are in the learning-to-read phase.
[014] In addition, there are several known reading ability scales, such as Flesch-Kincaid, which measures the number of words per sentence and the total number of words. There are also known text difficulty calculation methods that are designed for students who already speak the language they are learning to read. When teaching a second language to non-native speakers, particularly young adults and other adults - many of whom already know how to read in their native language but do not know how to read in the second language - there are different challenges that make texts written in the second language difficult for them to read, when compared with students who are learning their native language. [015] Known first language text difficulty levels are not always applicable to adult second language or non-native speaker situations. First language students learn to read as a skill, and learning to read takes years, while second language students typically already know how to read in their native language. [016] What is needed is a language learning device that applies means other than the known means of learning a native language. Typically, first language students learn that the word "ball" maps to a concept they already have of the entity, that is, a ball. Adult language learners, on the other hand, already know how words map onto concepts. So, instead of having them start with "this is a ball", adult language learners can start with something inherently more complicated, such as "this is an electron" or "I'll have a large coffee with organic soy milk". Adult second language learners typically know how to read complicated items in their own language. It is desirable for the adult student to understand these items in a second language. [017] Text difficulty calculation methods can measure text difficulty with respect to native speakers over the course of their childhood development. For second language students, the scale should be different. For example, a non-native speaker might review a long block of text containing rather complicated ideas in the second language, yet have no problem understanding the concepts. This is particularly true if the text contains complicated ideas but is written in a way that exercises simplified language; as mentioned earlier, the ideas are no less complicated. On the other hand, a simple concept, such as a weather forecast, can be incomprehensible to a new language learner if written in a linguistically challenging way. This is because the outcomes for second language students are not the same as for first language students. [018] What is needed is a way of calculating the text difficulty of a corpus for non-native speakers. Thus, a way of calculating the text difficulty of a corpus that is better suited to adult language learners is needed. The text difficulty should be calculated chiefly according to the idiosyncrasies of the language that make it difficult to learn. What is also needed is a way to automatically determine the suitability of a text, based on the calculated difficulty, and then automatically prepare the language learning course. [019] The generation of distractors for educational exercises is known in the art. Distractors are typically handwritten by humans when preparing various forms of language learning materials, such as a textbook or skill-building exercises. In addition, such language learning materials must be mass reproduced.
Typically, a teacher or other curriculum specialist writes questions about the form or content of a source or resource, and the answers are then presented. This means that teaching methodology, curricula and courses remain static unless a different textbook or language learning product is purchased. This manual effort is also inefficient and costly. [020] Thus, what is needed is a way to dynamically generate distractors for language learning. What is needed is a way to adapt the generation of distractors and to select them based on the student's needs. Also needed is a way to automatically generate distractors from a source or resource. Further needed is a way to prepare distractors that can adapt to the different types of resources (e.g., text containing information, audio playback, video playback) used for preparing learning exercises. Invention Summary [021] The systems and methods described here include several natural language processing features that can be combined with custom-designed learning activities to provide a tailored, needs-based learning methodology. The system can receive a resource, extract keywords relevant to non-native language learners, assign a degree of difficulty to the resource, generate a definition as well as a list of potential distractors for each keyword, and topically label the resource against a content-based taxonomy. This output is then used in conjunction with a series of learning activity types, designed to meet the student's language practice needs, to create dynamic and tailored learning activities. The various system components and methods are described in more detail below. [022] In one embodiment, a computer-implemented language learning method comprises: selecting, by a computer, from a resource data store, a resource having content, wherein the selected resource is correlated with a content interest of a student and has a resource difficulty level based on the student's competency level; identifying, by the computer, in a user data store that stores one or more skill scores of the student, a set of specific language practices; identifying, by the computer, a specific language practice to improve, based on one or more of the student's skills; identifying, by the computer, a type of learning activity that exercises the identified language practice; identifying, by the computer, a set of one or more distractors of a type suitable for the identified type of learning activity and having a distractor difficulty level based on the student's skill in the specific language practice; generating, by the computer, a learning activity of the identified activity type, using at least one distractor from the set of one or more distractors; updating, by the computer, the skill for the specific language practice in the user data store, according to a result of the learning activity; and updating, by the computer, the student's competency level in the user data store.
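To make the flow in paragraph [022] concrete, the following is a minimal, hypothetical Python sketch of the selection-and-update loop; the store objects, field names (e.g., skills, content_interests) and helper functions are illustrative assumptions, not part of the claimed implementation.

```python
# Hypothetical sketch of the activity-generation loop described in [022].
# Data-store access is simulated with plain dictionaries and lists.

def weakest_practice(profile):
    """Return the language practice with the lowest skill score."""
    return min(profile["skills"], key=profile["skills"].get)

def generate_activity(profile, resource_store, distractor_store, activity_types):
    # 1. Select a resource matching the student's interests and competency level.
    resource = next(
        r for r in resource_store
        if r["topic"] in profile["content_interests"]
        and r["difficulty"] <= profile["competency_level"]
    )
    # 2-3. Identify the language practice most in need of improvement.
    practice = weakest_practice(profile)
    # 4. Pick an activity type that exercises that practice.
    activity_type = activity_types[practice]
    # 5. Pick distractors of a suitable type and difficulty.
    distractors = [
        d for d in distractor_store
        if d["type"] == activity_type["distractor_type"]
        and d["difficulty"] <= profile["skills"][practice]
    ][:3]
    # 6. Assemble the learning activity.
    return {"resource": resource, "practice": practice,
            "type": activity_type["name"], "distractors": distractors}

def record_result(profile, practice, score):
    # 7-8. Update the practice skill and the overall competency level.
    profile["skills"][practice] = score
    profile["competency_level"] = sum(profile["skills"].values()) / len(profile["skills"])
```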
[023] In another embodiment, a computer-implemented method of facilitating language learning comprises: identifying, by a computer, a specific language practice to exercise; selecting, by the computer, a learning activity that exercises the identified language practice; selecting, by the computer, a set of one or more distractors, each of a distractor type suitable for the selected learning activity, wherein each of the one or more distractors is associated with a resource; and generating, by the computer, the selected learning activity, wherein the selected learning activity comprises at least one distractor from the selected set of one or more distractors. [024] In another embodiment, a computer-implemented method of enabling a language learning pedagogy comprises: identifying, by a computer, a set of one or more student competences of a student, comprising a language competence level and one or more skills in specific language practices; identifying, by the computer, a set of one or more student preferences associated with the student, comprising one or more student content interests; determining, by the computer, a type of learning activity to exercise a language practice, based on the student's skill in that language practice; determining, by the computer, a set of one or more distractors associated with a resource to be implemented in the learning activity, according to the determined activity type, wherein a distractor's difficulty is comparable to one or more of the student's skills; generating, by the computer, the learning activity of the activity type, wherein the learning activity comprises the set of distractors; and generating, by the computer, a lesson comprising a set of one or more of the learning activities, according to a student goal in the student preferences. [025] In another embodiment, a computer-implemented method of producing on-demand language learning for a student comprises: determining, by a computer, a student's skill in a specific language practice, based on the result of a given practice performed by the student; determining, by the computer, the student's language competence level, based on at least one of the student's language practice skills; receiving, by the computer, a student content interest and a student goal from the student's computing device; and storing, by the computer, a student profile associated with the student in a user data store, wherein the student profile comprises one or more of the student's language practice skills, the language competence level, the student content interest and the student goal. [026] In another embodiment, a language learning system for automatically generating activities comprises: a host computer comprising a processor that runs a set of software modules for language learning; a keyword store that stores a set of keywords extracted from a resource by a keyword extraction module in the set of software modules executed by the host computer; a user data store storing data associated with a student in a student profile, wherein the student profile comprises a set of the student's language skills and a set of student preferences; and a student computer comprising a processor configured to run a user interface for interacting with a set of learning activities generated by a learning activity generator module executed by the host computer, wherein a learning activity is automatically generated using the student's skills and preferences stored in the student profile.
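As an illustration of the student profile described in paragraphs [025] and [026], below is a minimal Python sketch of the record that a user data store might hold; all field names and example values are hypothetical assumptions rather than required elements of the claims.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical shape of a student profile stored in the user data store.
@dataclass
class StudentProfile:
    student_id: str
    competency_level: int                                        # overall language competence level
    skills: Dict[str, float] = field(default_factory=dict)       # e.g. {"spelling": 0.4, "reading": 0.7}
    content_interests: List[str] = field(default_factory=list)   # e.g. ["sports", "technology"]
    goal: str = ""                                               # e.g. "TOEFL preparation"

# Example entry as it might be created after an initial competency exam.
profile = StudentProfile(
    student_id="learner-001",
    competency_level=2,
    skills={"spelling": 0.35, "reading_comprehension": 0.6, "vocabulary": 0.5},
    content_interests=["sports"],
    goal="TOEFL preparation",
)
```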
[027] In one embodiment, a computer-implemented method of extracting keywords from text comprises: parsing, by a computer, a set of one or more potential keywords from the text of a resource containing text; storing, by the computer, in a keyword store, each potential keyword in the set that corresponds to a term in a computer file containing a keyword whitelist, and each potential keyword in the set that matches a collocation in the keyword whitelist; determining, by the computer, for one or more potential keywords in the set of potential keywords, a word difficulty value associated with each of the one or more potential keywords, based on scoring rules that determine the word difficulty value; and storing, by the computer, in the keyword store, each potential keyword having a determined pedagogical value that satisfies a word difficulty threshold. [028] In another embodiment, a system comprises a processor and a non-transitory machine-readable storage medium containing a keyword extractor module that instructs the processor to perform the steps of: parsing the text of a resource into a set of one or more potential keywords; identifying one or more collocations in the set of potential keywords that match a collocation in a file containing a keyword whitelist; determining a pedagogical value for each extracted word according to one or more scoring rules; and storing a set of one or more extracted keywords in a keyword store, wherein the set of extracted keywords comprises each potential keyword having a word difficulty value that satisfies a threshold value and each identified collocation. [029] In one embodiment, a computer-implemented method of predicting a text difficulty score for a new resource comprises: extracting, by a computer, one or more linguistic features, each having a weighted value, from a plurality of training resources containing text, wherein the text is associated with a metadata tag containing a text difficulty score for said text; determining, by the computer, a vector value associated with each training resource based on the weighted values of each of the one or more extracted linguistic features; training, by the computer, a statistical model using the vector values associated with each training resource, wherein the statistical model represents a correlation between a set of features selected for extraction, a set of weighted values assigned to the set of features selected for extraction, and a set of text difficulty scores associated with the training resources; extracting, by the computer, one or more linguistic features having a weighted value from a new resource; determining, by the computer, a vector value for the new resource based on the set of extracted linguistic features; and predicting, by the computer, a text difficulty score for the new resource, based on the vector value for the new resource and the statistical model.
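A minimal sketch of the keyword-extraction steps of paragraphs [027] and [028] follows, assuming a whitelist, a stop word set and a simple length-plus-rarity scoring rule; the scoring formula, threshold and example data are illustrative assumptions only, not the claimed scoring rules.

```python
import re

def extract_keywords(text, whitelist, stop_words, corpus_freq, threshold=0.5):
    """Hypothetical keyword extraction: whitelist matches are kept outright,
    stop words are discarded, and the remaining words are scored against a threshold."""
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    extracted = set()

    # Keep any whitelisted collocation (multi-word term) found in the text.
    lowered = " ".join(tokens)
    extracted.update(c for c in whitelist if " " in c and c in lowered)

    for word in set(tokens):
        if word in stop_words:
            continue                      # never extract stop words
        if word in whitelist:
            extracted.add(word)           # whitelisted terms are stored directly
            continue
        # Toy scoring rule: longer and rarer words score higher.
        rarity = 1.0 / (1 + corpus_freq.get(word, 0))
        score = 0.5 * min(len(word) / 10, 1.0) + 0.5 * rarity
        if score >= threshold:
            extracted.add(word)
    return extracted

print(extract_keywords(
    "The defendant put out the fire before the verdict was announced.",
    whitelist={"put out", "verdict"},
    stop_words={"the", "was", "before"},
    corpus_freq={"fire": 40, "announced": 12},
))
```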
[030] In another embodiment, a computer-implemented method of determining text difficulty for a resource comprises: comparing, by a computer, at least a portion of the text of a resource against a file in a lexical feature database, which comprises a list of semantic features and a list of syntactic features; identifying, by the computer, a set of lexical features associated with the text based on the comparison, wherein the set of lexical features comprises at least one semantic feature of the text corresponding to a semantic feature in the list, and at least one syntactic feature of the text corresponding to a syntactic feature in the list; assigning, by the computer, a first value to each of the lexical features in the set of lexical features; and determining, by the computer, a text difficulty score for the text using each of the first values associated with each of the features in the set of lexical features. [031] In another embodiment, a system comprises a processor and a non-transitory machine-readable storage medium containing a text difficulty calculator module that instructs the processor to perform the steps of: comparing the text in a resource with a feature listing comprising a list of semantic features and a list of syntactic features; identifying one or more semantic features in the text that correspond to the feature listing and identifying one or more syntactic features in the text that match the feature listing; assigning a value to each of the semantic and syntactic features identified in the text, according to a statistical model, wherein the statistical model identifies the value associated with each feature in the feature listing; and determining a text difficulty score associated with the resource, using each of the values assigned to the identified semantic and syntactic features. [032] In one embodiment, a computer-implemented method of facilitating language learning comprises: generating, by a computer, a set of one or more semantic distractors comprising one or more words having a definition correlated with a target word; generating, by the computer, a set of spelling distractors comprising one or more words having an edit distance that satisfies an edit distance threshold, wherein the edit distance is the number of changes required for a word to be identical to the target word, and the edit distance threshold determines the allowed number of changes to the word; and generating, by the computer, a set of phonetic distractors comprising one or more homophones of the target word. [033] In another embodiment, a system comprises a processor and a non-transitory machine-readable storage medium containing a distractor generator module that instructs the processor to perform the steps of: receiving one or more keywords extracted from a resource containing text; identifying, in one or more dictionary sources, one or more distractors that are of one or more distractor types and are associated with a keyword; and generating a set of one or more identified spelling distractors comprising a word having a predetermined edit distance relative to the keyword.
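To illustrate the distractor types of paragraphs [032] and [033], the following is a small Python sketch that generates spelling distractors by edit distance and phonetic distractors from a homophone table; the candidate dictionary, the homophone map and the edit-distance threshold of 1 are assumptions made for the example.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def spelling_distractors(keyword, dictionary, max_distance=1):
    # Words within the edit-distance threshold, excluding the keyword itself.
    return [w for w in dictionary
            if w != keyword and edit_distance(w, keyword) <= max_distance]

def phonetic_distractors(keyword, homophones):
    # Homophones of the target word, if any are known.
    return homophones.get(keyword, [])

dictionary = ["whether", "weather", "wither", "feather", "leather"]
homophones = {"weather": ["whether"]}

print(spelling_distractors("weather", dictionary))   # ['feather', 'leather']
print(phonetic_distractors("weather", homophones))   # ['whether']
```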
[034] Additional features and advantages of an embodiment will be set forth in the description that follows and, in part, will be apparent from the description. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the exemplary embodiments of the present description, in the claims and in the attached drawings. [035] It should be understood that the foregoing general description and the following detailed description are exemplary and explanatory, and are intended to provide further explanation of the invention as claimed. Brief Description of Drawings [036] This disclosure can be better understood by referring to the attached figures. The components in the figures are not necessarily to scale; instead, emphasis is placed on illustrating the principles of the invention. In the figures, reference numerals designate corresponding parts throughout the different views. [037] Figure 1 shows an exemplary embodiment of a language learning system. [038] Figure 2 shows a flowchart of an exemplary embodiment of a method performed by the keyword extractor module. [039] Figure 2A shows a flowchart of another exemplary embodiment of a method performed by the keyword extractor module. [040] Figure 3 shows a flowchart of an exemplary embodiment of a method of determining a text difficulty score for a learning text. [041] Figure 4 shows a flowchart of an exemplary embodiment of a method executed by the distractor generator module. [042] Figure 5 shows a screen image of an exemplary embodiment of a graphical user interface presented to a student to start a language learning lesson. [043] Figure 6 shows a screen image of an exemplary embodiment of a graphical user interface presented to a student to engage in a reading comprehension activity. [044] Figure 7 shows a screen image of an exemplary embodiment of a graphical user interface presented to a student to engage in a vocabulary activity. [045] Figure 8 shows a screen image of an exemplary embodiment of a graphical user interface presented to a student to engage in a spelling activity. [046] Figure 9 shows an exemplary embodiment of a method of a language learning system implementing a learning module. Detailed Description of the Invention [047] The present invention will now be described in detail with reference to the embodiments illustrated in the drawings, which form a part hereof. Other embodiments may be used without departing from the spirit or scope of this disclosure. The illustrative embodiments described below are not intended as a limitation on the subject matter presented herein. [048] Embodiments of the system can automatically develop one or more portions of a series of learning activities for various intended learning outcomes, e.g., improved vocabulary, improved reading comprehension, or improved overall student competence. Learning activities can take as input data automatically generated by various software modules running on one or more computing devices and/or data of various types stored in a resource storage medium. [049] Data generator modules can include a keyword extractor, a distractor generator and a text difficulty calculator. The various components of the language learning system can produce outputs used by a learning module, such as inputs to portions of lessons generated by the learning module. In this way, the learning module can dynamically create learning activities suitable for language learners based on a variety of resources.
[050] In some embodiments, learning activities may use as inputs various pieces of information describing the language student, such as the student's interests, the student's overall language competence level, and/or the student's skills in various specific language practices. So, for example, a resource used for a learning activity can be selected based on the student's interest in sports. In another example, the learning activity can be generated to exercise a specific language practice on which the student needs to focus, which is determined according to a skill assessed in that specific practice (e.g., spelling, reading comprehension). In another example, a more rigorous or well-balanced set of learning activities can be generated if the student's goal is to prepare for the TOEFL (Test of English as a Foreign Language), as opposed to preparing for a vacation. [051] The resources selected for use in constructing learning activities may vary in difficulty, type and content. For example, resources can be linguistically difficult or easy, relative to each other and/or relative to the students' competence levels. In another example, the resources can include one or more text portions of a document or transcript, audio, video, audiovisual content or images. [052] Reference will now be made to the exemplary embodiments illustrated in the drawings, and specific language will be used herein to describe them. However, it should be understood that no limitation of the scope of the invention is intended. Any alterations and further modifications of the inventive features illustrated herein, and any additional applications of the principles of the invention as illustrated herein, which would occur to a person skilled in the art having possession of this disclosure, are considered to be within the scope of the invention. Components of the Language Learning System [053] Figure 1 shows the components of an exemplary embodiment of a language learning system (100). [054] The language learning system (100) shown in Figure 1 comprises a student language server (101), a distractor store (102), a resource store (103), a keyword store (104), a user data store (105), a network (106), a content management computer (107) and a language learning student computing device (108). As shown in this exemplary embodiment, a language learning student computing device (108) may be a language learning student computer (108a) or a language learning student smartphone device (108b). [055] A student language server (101) may be any computing device, such as a personal computer, or any other computing device comprising a processor capable of running one or more language learning modules. Embodiments of the student language server (101) may comprise a keyword extractor module, a text difficulty calculator module, a distractor generator module and/or a learning module. [056] In the exemplary embodiment of the language learning system (100) shown in Figure 1, the student language server (101) is shown as a single device. However, in some embodiments, the student language server (101) can comprise multiple computing devices. In such distributed computing arrangements, where a student language server (101) may comprise a plurality of computing devices, each computing device may comprise a processor. Thus, each of these processors can run language learning modules that are hosted on any computer of the plurality of computing devices.
[057] Embodiments of the language learning system (100) may comprise one or more data stores (102, 103, 104, 105), that is, a distractor store (102), a resource store (103), a keyword store (104) and a user data store (105). The data stores (102, 103, 104, 105) may be databases comprising non-transitory machine-readable storage media that store and retrieve data correlated with one or more modules executed by a processor of the student language server (101). The data stores (102, 103, 104, 105) may be hosted on a single database host device, or the data stores (102, 103, 104, 105) may be distributed among a plurality of computing devices hosting the databases. [058] In some embodiments of a language learning system (100), one or more of the distractor store (102), the resource store (103), the keyword store (104) and/or the user data store (105) may reside on the student language server (101). In some embodiments, such as the one shown in Figure 1, the student language server (101) is a single device and each of the data stores (102, 103, 104, 105) is hosted on a different device, each of which communicates with the student language server (101). A person skilled in the art will observe that the data stores (102, 103, 104, 105) can communicate with the student language server (101) through any means of communication capable of facilitating network communication between computing devices, such as LAN, WAN, InfiniBand, 3G, 4G, or any other computing communication medium. [059] A distractor store (102) may be a non-transitory machine-readable storage medium that stores one or more distractors correlated with keywords. The stored distractors can be dynamically generated by a distractor generator module run by the student language server (101). [060] A resource store (103) may be a non-transitory machine-readable storage medium that stores one or more resources. Resources may be received from a content management device (107), a student computing device (108), or another external data source such as a website, blog, or news service. A resource stored in the resource store can be one or more text portions of a document (e.g., book, article, webpage, newspaper), an audio output, a video output, an audiovisual output, an image, or a combination thereof. In some embodiments, the resource store (103) may also store metadata associated with the stored resources. Non-limiting examples of metadata may include information describing a text difficulty score of a document resource, the length of a resource and/or the content contained in a resource. [061] In some embodiments of the resource store (103), the resources may be multimedia resources (e.g., audio, video, image). In these embodiments, the resource store (103) can store metadata correlated with a stored multimedia resource. For example, the text transcript of an audio or audiovisual resource can be stored in the resource store (103) as associated metadata. In another example, the metadata may contain one or more timestamps that correspond to specific synchronization points in a video or audiovisual resource. Timestamps can be associated with a transcript or other descriptive text of the resource, for example, a keyword in view of a specific synchronization point, or a description of events. In another example, the metadata may contain point descriptions (e.g., coordinates) for bounding boxes that enclose areas of an image resource.
Some embodiments feature multimedia capabilities, where the metadata can identify one or more keywords within the multimedia resources. This metadata can match keywords in a keyword store. [062] In some embodiments of the resource store (103), metadata correlated with multimedia resources can be entered manually by a content manager using a user interface of a content management device (107). In some embodiments, resources can be automatically retrieved from a variety of external sources, or received from external sources on a regular basis. In these embodiments, metadata associated with multimedia resources can be automatically retrieved, received or updated from an external source that publishes the associated resource. In some embodiments, metadata can be retrieved from several external sources that store data correlated with multimedia resources already stored in a resource store (103). [063] A resource store (103) can store resources transmitted from a user interface of a computing device (107, 108) within the language learning system (100) or from some external data source. The resource store (103) may execute search queries against the resource store (103) so as to provide resources. Search queries can be received from language learning modules executed by the language learning server (101) or from a computing device (107, 108). [064] A keyword store (104) can store one or more keywords extracted from resources. A keyword store (104) can store metadata associated with the keywords, such as the source resource from which the words were extracted, or a word difficulty score describing the linguistic difficulty of the keyword. A keyword store (104) can receive and store keywords from a keyword extractor module running on the student language server (101). A keyword store (104) may also store keywords entered from a user interface of a computing device (107, 108). A keyword store (104) can execute search queries to look up a keyword, in accordance with queries received from the language learning modules executed by the language learning server (101). [065] A keyword store (104) is a computer-readable storage medium that can store keywords, keyword attributes, and/or various other means for retrieving data stored in the keyword store (104), such as a lookup key or a unique database record key. In embodiments of the keyword extraction module, the keyword extraction module stores certain potential keywords in the keyword store (104) when the keywords are extracted from a corpus or resource. Various algorithms and/or other operating rules that instruct the keyword extraction module can be used to determine which keywords should be stored in the keyword store (104). [066] A user data store (105) is a non-transitory machine-readable storage medium that can store student profiles containing information correlated with language learners. A user data store (105) may receive and store data from one or more language learning modules running on the student language server (101), or a user data store (105) may store input from a user interface of a computing device (107, 108). A user data store (105) can execute search queries to look up student profiles in accordance with commands received from one or more language learning modules executed by the language learning server (101). [067] A network (106) can connect each of the computing hardware devices in the language learning system (100) to each other.
A person of ordinary skill in the art will recognize that there are several possible permutations for connecting the various devices and components in embodiments of a language learning system (100). Arrangements of a network (106) can allow the hardware components of a language learning system (100) to be located close together within a building, a campus or a municipality, or spread across any geographic breadth. [068] As seen in the exemplary embodiment of a language learning system (100) shown in Figure 1, a network (106) can be a combination of public networks, such as the Internet, and private networks. However, in some embodiments, the network (106) can be only a private network, or only a public network. The network (106) shown in Figure 1 represents a combination of the Internet and an internal private network. The network (106) can be a mixture of several data communication technologies, such as, for example, LAN, WAN and 4G. [069] In some embodiments, the network (106) can connect only some of the hardware components of a language learning system (100), while other hardware components are connected with the language learning system (100) using different technologies or a different network. Thus, for example, the data stores (102, 103, 104, 105) shown in Figure 1 can communicate with the language learning server (101) using, for example, InfiniBand for devices located nearby. Also, in some embodiments, one or more of the components can be arranged in a single device. [070] A person of ordinary skill in the art will observe that a language learning system (100) may be a distributed computing system using one or more networking technologies and redundancy technologies, which may utilize any number of known techniques to facilitate communication between computing hardware components. [071] A person of ordinary skill in the art will also observe that the network architecture shown in the embodiment of the language learning system of Figure 1 in no way limits the architectural permutations that facilitate communication between the various components of other embodiments of a language learning system (100). [072] A content management computing device (107) may be a computer, a smartphone, a server, a tablet, a gaming system, or another computing device comprising a processor configured to implement a user interface for managing the various modules and components of a language learning system (100). In some embodiments, the content management computing device (107) may be capable of networked communication with various other computing devices. [073] Embodiments of a content management device may run a user interface that may allow a content manager to review results and/or manually input various pieces of information, resources and/or metadata into the modules executed by the language learning server (101), or into the data stores (102, 103, 104, 105) within the language learning system (100). [074] So, for example, in some embodiments, a content manager can store a resource in a resource store and then manually input metadata associated with the resource, such as a timestamp for a video resource or a transcript for an audio resource. The content management computing device can attach metadata to the resource, to associate the resource with a keyword, or to identify various other attributes of the resource.
In some embodiments, a content management computing device may receive information associated with a student or input data into a profile associated with a student. The information or data may be received from a content manager, who inputs it into a user interface associated with the content management computing device. [075] In some embodiments, a content management computing device (107) may allow a content manager to act as an instructor in a live conversation, phone session and/or video session with a student computing device (108), or in correspondence, for example, by e-mail. It should be noted that a content management computer (107) can be a networked computing device that communicates with the student language server (101). However, a content management computer (107) may also be the same device as the student language server (101). [076] A student computing device (108) may be a computer (108a), a smartphone (108b), a server, a tablet computing device, a gaming system, or any other computing device comprising a processor configured to implement a user interface to communicate with the various language learning server modules. It should be noted that a student computer (108a) may be networked to communicate with the student language server (101), and said student computer (108a) may also be the same device as the language server (101). [077] In some embodiments, a student computing device (108) can connect to a teaching session with a content management device (107) acting across the network (106). The teaching session can be maintained using any video calling technology, voice calling technology or instant messaging service, such as Google Hangouts®, Skype®, VoIP and/or instant messaging software. In some embodiments, this teaching session may not be the product of a third-party vendor, but rather a session native to the language learning system (100). In other embodiments, the language learning system (100) may comprise a teaching messaging service for the instructor and the student to exchange messages as another arrangement of a teaching session. [078] A student computing device (108) can send and receive electronic messages within the language learning system (100) or through conventional messaging services. A content management computing device (107) may also facilitate such instructional communication between a content manager and a student. A content management device (107) may also include an interface for generating teaching assignments, grading assignments, recording learning activities, and sending or receiving assignments for a student. These assignments can be sent through the native messaging service or through a conventional messaging service. [079] In some embodiments, students may take an initial competency exam. This exam can be a test or set of questions stored on the language learning server (101). The exam can be presented to students through the user interface of the student computing device (108). The exam can be automatically scored by the language learning server (101) or can be scored by a content manager using the content manager's computing device (107). This exam may provide an initial competency score for the student globally, and/or scores that reflect the student's skills in specific language practices, which are stored in a user data profile. The student's competency level and/or language practice skills can be automatically updated in the user data store (105).
The language learning server (101) modules can automatically update the user data profile in the user data store (105) based on the student's performance in learning activities and, in some embodiments, students may take a periodic competency exam, via the student computing device (108), to raise their competency level or a skill score. Keyword Extractor [080] When applied in a language learning context, a keyword extractor as described here can address deficiencies of conventional keyword extractors, since conventional keyword extractors are not commonly applied to language learning. The objective of conventional keyword extractors is to save a reader's time by reducing the number of words they need to read to extract meaningful information from the resource. These algorithms are optimized to find the smallest subset of words that has the highest information content, which results in filtering out common words, phrases, idioms and other vocabulary that resides in a more general topic domain. The keyword extractor embodiments described here implement keyword extraction techniques to extract keywords that are useful for language learning. Conventional keyword extraction techniques are not designed for the context of language learning, so a conventional keyword extraction technique may be ineffective there. [081] Embodiments of a language learning system may include a keyword extractor, otherwise referred to as a keyword extraction module, which is a component embodied in a computer-readable medium and executed by a processor. The keyword extractor can obtain one or more keywords from text. A keyword can be a single specific word or a set of words (e.g., a phrase or collocation) appearing in a resource. [082] Conventional keyword extraction techniques simply extract enough words to summarize the content of the text. However, the embodiments of a keyword extraction module described here can be executed so as to capture as many keywords as needed to facilitate language learning using a specific document or a set of documents. The keyword extractor can implement keyword extraction techniques adapted for language learning, which require different content identification information and extraction heuristics. For example, a word that is not closely connected to the meaning of a text could be discarded by conventional keyword extractors. However, this word may still have pedagogical value for a language learner whose competence level is below the general vocabulary being acquired. [083] As described here, a keyword extraction module can, among other features, extract keywords from a resource that are pedagogically valuable for language learning. Pedagogically valuable keywords can be words from the text of a text-containing resource, such as a document (e.g., article, newspaper, blog) or a transcript, which can aid language learning due to certain attributes or aspects associated with those words. As an example, a keyword can be pedagogically valuable when the keyword is easily defined or when the keyword is central to the context of a resource. [084] Embodiments of an extraction module can implement a keyword whitelist as a way to determine the pedagogical value of potential keywords parsed from the text. A whitelist can be any type of computer file containing predetermined words to be extracted as keywords. In these embodiments, a keyword whitelist can comprise words that are automatically stored in the keyword store as extracted keywords.
[085] In some embodiments, a stop word file can be any computer file containing words that are never keywords for extraction. The stop word file can act as a filter against words that are not particularly beneficial for language learning. [086] In some embodiments, the keyword extractor module can implement one or more scoring rules, which can include one or more algorithms to determine the pedagogical value of potential keywords. It should be noted that multiple whitelists and stop word files can be used, or the whitelist and the stop word file can be a single, identical computer file. [087] Figure 2 is a flowchart of an embodiment of a method of extracting a keyword from the text of a resource. [088] In step (S200), upon receipt of a resource containing text, a keyword extraction module can start the keyword extraction process by generating a set of potential keywords. In some embodiments, step (S200) may start with a step (S201), in which the keyword extraction module receives a resource and then parses the text of that resource into a resulting set of potential keywords. Step (S200) can optionally include a step (S202), in which the keyword extraction module generates the set of potential keywords by processing and/or normalizing the parsed text of the resource. [089] In a next step (S204), embodiments of a keyword extraction module can compare the set of potential keywords against a keyword whitelist to determine whether each potential keyword is pedagogically valuable. [090] Based on the comparison of step (S204), in a following step (S206), the keyword extraction module removes from the set of potential keywords those potential keywords that correspond to a filtered word listed in the stop word file. [091] Embodiments of a keyword extraction module can implement a stop word file, which can filter out words previously identified as lacking pedagogical value. Non-limiting examples of stop words that are filtered from the set of potential keywords may include: ordinal numbers, numbers, proper names and/or conjunctions. [092] In a next step (S208), the keyword extractor module can store potential keywords that correspond to a word in the keyword whitelist file, identified as having pedagogical value for language learning. Non-limiting examples of such words may include collocations, phrasal verbs, words or phrases identified as difficult for non-native speakers, words correlated with the resource, and/or words previously identified as critical words for the language being learned. [093] A whitelist file may identify one or more collocations as having pedagogical value, each of these collocations being kept as an extracted keyword. Collocations are words often found together in texts, such as "put out". Language students typically learn these collocations as chunks. However, conventional keyword extraction tools typically eliminate collocations that appear in a document. Keyword extractor embodiments can be tailored to specifically look for these types of collocations. [094] In a non-limiting example using the collocation "put out", there may be circumstances where the term "put" would otherwise be filtered out as lacking value, but phrasal verbs such as "put out", "put up" and "put on" can be recognized as collocations and as such should be extracted by the keyword extraction module.
Language students should learn each of these uses of the word "put", since in each of the above examples the term "put" has a different meaning, each of which also differs from the meaning of the verb "put" on its own. In this example, the keyword extraction module facilitates teaching a language student each of these collocations and the various uses of the term "put". Thus, language learners can notice the difference between "put on" and the term "put" by itself. [095] Continuing with step (S208), potential keywords identified by the keyword extraction module as having pedagogical value can be stored in a keyword store. The keyword store is a computer-readable storage medium that can store keywords, keyword attributes, and/or various other means of retrieving data stored in the keyword store, such as, for example, a lookup key or a unique database record key. [096] Embodiments of a keyword extractor module implement a keyword store that can store extracted keywords associated with n-gram keys, a set of one or more collocations, and one or more occurrences. As used here, the term keyword can refer to a word or to a phrase comprising one or more words, such as a collocation. [097] A keyword store stores keywords with n-gram keys. An n-gram can be a sequence of one or more words (n) written as a unit of meaning in a text, and can refer to the length of the keyword. In other words, a keyword store might store unigrams, bigrams and/or trigrams, or more, depending on the length of the keywords being extracted. [098] For example, a unigram can be a word of the text that has its own meaning, so "dog" is a unigram. Likewise, the expression "dog walker" is a bigram, and the expression "dog walking service" is a trigram. Within each of these three potentially extracted keyword examples, there is a grammatical connection among the constituent words. These words can be put together to form a unit of meaning, that is, to jointly express a concept. [099] A keyword store may also store extracted keywords in association with collocations and/or occurrences. Collocations in this context are similar to other database keys in that they can be stored in association with a keyword extracted into the keyword store; these collocation keys represent a subset of one or more collocations that contain the extracted keyword and appear in a resource, in a story accompanying a resource, or that have been entered manually by a content manager. The occurrences associated with a keyword in the keyword store can be one or more values indicating the start and end positions of the keyword within an associated resource. [0100] After the keyword extraction module generates the set of potential keywords in step (S200), the keyword extraction module can store all or a subset of the potential keywords in the keyword store as extracted keywords. The keyword store can store only keywords extracted from a resource. Some embodiments may allow the keyword store to build up over time, thereby adding extracted keywords; some of these embodiments can also associate the extracted keywords with the respective resources from which they were extracted.
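Paragraphs [096] to [099] describe storing keywords as unigrams, bigrams or trigrams together with collocations; the sketch below shows one way such collocations could be surfaced using the Natural Language Toolkit's co-occurrence measures (also mentioned in paragraph [0102]). It assumes NLTK is installed and uses a toy token list; it is an illustration, not the claimed implementation.

```python
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Toy token stream standing in for the parsed text of a resource.
tokens = ("the firefighters put out the fire then they put out another fire "
          "and the crew put out the last fire").split()

bigram_measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(2)          # keep bigrams seen at least twice

# Rank candidate collocations by pointwise mutual information (PMI).
for bigram in finder.nbest(bigram_measures.pmi, 5):
    print(bigram)                    # ('put', 'out') should rank highly
```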
[0101] In a next step (S210), the keyword extraction module can determine a score for each of the remaining potential keywords, that is, the potential keywords not yet discarded or stored in the keyword store, according to scoring rules that evaluate each keyword's pedagogical value. Embodiments may score potential keywords using various permutations and combinations of algorithms, software and/or other tools, known or disclosed here, for determining the pedagogical value of a word. [0102] As a non-limiting example, a keyword extractor can score bigrams and trigrams using co-occurrence statistics, implemented through the use of a natural language processing tool such as the Natural Language Toolkit for the Python programming language, to determine the probability that a word should remain a potential keyword. In this example, the keyword extractor determines how critical a word is to the text by measuring the frequency of the word. [0103] In another non-limiting example, a keyword extractor may refer to a collocation list specifically configured for English learners at a specific competence level. [0104] As discussed below, in another non-limiting example, a keyword extractor can generate an additional word difficulty score to rank words in terms of their probable difficulty for non-native speakers to learn and retain. Non-limiting examples of parameters used to calculate a word score may include the word's length and/or the frequency with which the word's part of speech is used in a wider external corpus, such as a term frequency-inverse document frequency (TF-IDF) score. [0105] As shown in Figure 2A, the keyword extractor can include an optional first review step (S200a) and an optional second review step (S218). [0106] In step (S200a), a keyword extractor implements a first review step when generating the set of potential keywords. Step (S200a) may comprise optional steps (S201a), (S202a), (S202b) and/or (S202c). [0107] In step (S201a), a keyword extractor can use common language processing tools and techniques to generate the set of potential keywords. [0108] In step (S202a), the keyword extractor can implement common language processing techniques to identify one or more attributes associated with a potential keyword. In this step, the keyword extractor can, for example, automatically identify the part of speech of a word, such as a noun, adjective, verb or other grammatical form. Other non-limiting examples of attributes that a keyword extractor can automatically identify and associate with a potential keyword include a topic and/or subtopic for the potential keyword, the number of syllables in the potential keyword, the number of times the potential keyword appears in a resource, in all resources stored in the resource store, in one or more resources stored in the resource store or in other external resources (or its TF-IDF), and/or a definition for the potential keyword. [0109] In step (S202b), a keyword extractor can calculate a word difficulty score for a potential keyword, based on factors drawn from the attributes associated with the potential keyword. These factors may include, but are not limited to, the TF-IDF of the word within the resource store, the resource, or external sources, the number of syllables in the potential keyword, and whether the potential keyword appears in an Academic Word List (AWL).
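Paragraph [0109] lists TF-IDF, syllable count and Academic Word List membership as inputs to a word difficulty score. The following is a minimal sketch of one way such a score could be combined; the weights, the naive syllable counter and the tiny AWL subset are all illustrative assumptions rather than the claimed scoring rules.

```python
import math, re

ACADEMIC_WORD_LIST = {"analyze", "hypothesis", "derive"}   # tiny illustrative AWL subset

def count_syllables(word):
    # Naive heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def tf_idf(word, document_tokens, corpus_documents):
    tf = document_tokens.count(word) / len(document_tokens)
    doc_freq = sum(1 for doc in corpus_documents if word in doc)
    idf = math.log((1 + len(corpus_documents)) / (1 + doc_freq)) + 1
    return tf * idf

def word_difficulty_score(word, document_tokens, corpus_documents):
    """Hypothetical weighted combination of the factors named in [0109]."""
    score = 0.0
    score += 0.5 * tf_idf(word, document_tokens, corpus_documents)
    score += 0.1 * count_syllables(word)
    score += 0.3 if word in ACADEMIC_WORD_LIST else 0.0
    return score

doc = "scientists analyze the data to derive a hypothesis".split()
corpus = [doc, "the weather is nice today".split(), "she walked the dog".split()]
print(round(word_difficulty_score("hypothesis", doc, corpus), 3))
```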
[0110] In a step (S202c), the attributes of the words can be identified and included in the keyword supply as metadata associated with a potential keyword. For example, metadata can include a part of speech, a source resource, a definition of the word, or a context in which an extracted keyword is used in the text. A context can be determined using metadata associated with the resource, in which case the resource content has been identified and stored in a resource supply, or it can be manually entered. The keyword supply may store a potential keyword in association with any resource content that contains the potential keyword. Some embodiments of the keyword extractor may implement a tagging system for associating or tagging metadata with keywords.

[0111] After step (S202c), a keyword extractor can perform steps (S204) through (S216) as shown in Figure 2.

[0112] In an optional step (S218), a keyword extractor can initiate an optional second review step. In this step, each potential keyword and its associated metadata can be presented on a user interface of a client computing device operated by a content manager. The content manager can then interact with the user interface to confirm that the associated keywords, metadata and/or attributes, as well as the definitions, are correct in the context of the text. A content manager can further verify that the keywords, metadata and/or associated attributes are appropriate to the context of the text.

Text Difficulty Calculator

[0113] Language learning systems and methods may include a text difficulty calculator module, which is a component of a computer program product embodied in a machine-readable medium and executed by a processor to perform specified functionality. A text difficulty calculator module can determine a language difficulty for the text in a resource.

[0114] In some embodiments, a text difficulty calculator can group resources into a plurality of skill groups or levels, according to their difficulty scores. In these embodiments, the resource groups can correspond to competency levels assigned to language learners. Thus, language learners can be assigned a competency level that determines the difficulty of the resources that are presented to them for language learning purposes; for example, a beginner or first-level language learner can interact with first-level resources.

[0115] For example, in these embodiments, a text difficulty score may identify a resource as having text of minimal difficulty relative to the rest of a linguistic corpus that comprises one or more resources stored in a resource supply. This relatively easy resource can be grouped with resources having comparable text difficulty scores. The group can then be designated as a beginner level, a first level, or some other entry-level designation.

[0116] In some embodiments, the number of groups and/or the granularity with which the resources are segmented into these groups is a decision of the content manager. In other embodiments, the granularity and/or the number of groups can be determined automatically.

[0117] A text difficulty calculator module can analyze the language difficulty of a text using a set of linguistic features. This set of features comprises one or more semantic features and one or more syntactic features. Non-limiting examples of these features may include the difficulty of the words in the text, the way in which words are used within the context of the overall text, the overall complexity of the content included in the text, and the difficulty of the syntactic constructions in the text.
Non-limiting examples of syntactic construction difficulty may include the number of relative clauses in the text, or the distance of pronouns from their correlated antecedents.

[0118] In some embodiments, a text difficulty calculator can identify syntactic and semantic features of the text in a resource. Using the identified syntactic and semantic features as inputs to one or more algorithms, the text difficulty calculator can determine the language difficulty of the text. Some embodiments may implement cluster analysis to predict the text difficulty of a particular resource for specific groups of non-native speakers.

[0119] In some embodiments, a text difficulty calculator module can be trained to identify features in text and to associate weighted values with those features. The text difficulty calculator module can receive training resources having text in which various features are found. Training resources can be tagged or labeled with metadata that indicates the weighted values associated with each of the features in the text, and each resource can be tagged or labeled with metadata that indicates the text difficulty score of that training resource.

[0120] As an example, the text difficulty calculator module can receive a set of training resources. Each of the training resources is labeled with data that indicates a difficulty score, for example, on a scale of 1-7. For each training resource in the training resource set, the text difficulty calculator module can then extract a set of linguistic features, each having a weighted value. So, for example, the linguistic features in the text of a training resource can each be associated with a floating-point decimal value. Taken together, the set of linguistic features identified in a training resource can be represented as a vector of numerical values, based on the weighted values of each of the linguistic features. The vector for each training resource in the labeled dataset of training resources is then used to train a statistical model to predict the text difficulty scores of new resources. The statistical model can represent the correlation between the features selected for extraction from resources (e.g., identified in a list of lexical features for extraction), the weighted values associated with those linguistic features, and the corresponding labels. When a new resource is received, a vector can be determined for the new resource, based on the linguistic features identified and extracted from it. The new resource's vector may allow the system to label the new resource with a predicted text difficulty score using the statistical model.

[0121] Figure 3 shows a flowchart of an exemplary embodiment of a method for determining a text difficulty score for the language of the text in a resource.

[0122] The embodiment shown in Figure 3 shows the steps (S301), (S303), (S305), (S307), (S309) and (S311). However, one of ordinary skill in the art may note that other embodiments may vary the steps performed. The embodiment of Figure 3 includes examining a text to identify syntactic and semantic features of the text and then using the identified features as inputs to one or more algorithms to determine the language difficulty of the text.

[0123] In step (S301), the text difficulty calculator receives a resource containing text, and the resource's text is then examined for various features, as discussed in the following step (S303).
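The training flow of paragraphs [0119]-[0120] could be sketched, under the assumption that feature vectors have already been extracted from each labeled training resource, roughly as follows using a scikit-learn logistic regression model; the feature values and labels below are invented for illustration.

```python
# Illustrative sketch only: train a statistical model on labeled training resources
# and use it to predict the text difficulty score of a new resource.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is the weighted-feature vector of one training resource
# (e.g., lexical frequency, sentence length, subordination, pronoun distance, ...).
X_train = np.array([
    [0.12,  8.0, 0.1, 1.2],
    [0.45, 17.5, 0.6, 3.4],
    [0.30, 12.0, 0.3, 2.1],
    [0.70, 24.0, 0.9, 4.8],
])
# Difficulty labels assigned by content managers, e.g., on a 1-7 scale.
y_train = np.array([1, 5, 3, 7])

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# A new resource is vectorized the same way and receives a predicted difficulty score.
new_resource_vector = np.array([[0.50, 19.0, 0.7, 3.9]])
print(model.predict(new_resource_vector))   # e.g., array([5])
```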
[0124] Continuing with step (S301), in some embodiments, a resource can be received or fetched from a resource supply within a language learning system. In some embodiments, resources of various types can be received from a user interface on a client computing device. This user interface can be that of a content manager or of a language learner, and the user interface can pass one or more selected resources to the language learning system, to be used by the various components described herein. A resource received from a user interface may be received by a text difficulty calculator in order to perform a text difficulty determination.

[0125] In some cases, a text difficulty calculator may examine only a portion, or several non-contiguous portions, of a resource's text. For example, the text difficulty calculator can examine only a specific entry in an encyclopedia, without examining the entire encyclopedia.

[0126] In a next step (S303), the text difficulty calculator compares the examined text with the features listed in a feature bank.

[0127] A lexical feature bank can be a computer file containing a listing of semantic features and a listing of syntactic features. The features listed in the feature bank can be used to determine the language difficulty of a resource's text. The features in the feature bank describe and correspond to various aspects of texts that make reading or understanding a text difficult for non-native speakers and second language learners.

[0128] In a next step (S305), a text difficulty calculator can identify a set of features in the text of a resource, based on the comparison with the lexical feature bank in step (S303). This set of identified features can comprise semantic features and syntactic features.

[0129] The semantic and syntactic features identified in the text can be features that make the text difficult for language learners to understand. So, for example, an adult language learner who is capable of competent reading comprehension in his native language, and who also understands how concepts map to text, may be well positioned to learn English. In this example, the student only needs to learn to read in English, and does not need to learn how concepts map to text written in English.

[0130] Thus, in this example, the semantic and syntactic features identified during the comparison step (S303) are features of a text, written in English, that focus on the linguistic aspects of the English language which make reading the English language difficult. Alternatively, the features may correlate with concepts in English that make English different from the language learner's native language. Non-limiting examples of this might include negative polarity items (that is, terms like "some" or "none") that may be unique to the English language and difficult to explain to a non-native language learner. Another example might be a large amount of wh-movement, that is, relative clauses far from the terms being modified by the relative clause.
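A greatly simplified, hypothetical version of a lexical feature bank and of the identification step (S305) might look like the following; the specific detectors and regular expressions are illustrative stand-ins for the semantic and syntactic features a real feature bank would list.

```python
# A hypothetical, simplified lexical feature bank: each entry names a semantic or
# syntactic feature and a detector that counts or measures it in the text.
import re

FEATURE_BANK = {
    # syntactic features
    "relative_clauses": lambda text: len(re.findall(r"\b(who|which|that)\b", text)),
    "negative_polarity_items": lambda text: len(re.findall(r"\b(any|ever|none)\b", text)),
    # semantic features
    "long_words": lambda text: sum(1 for w in re.findall(r"[A-Za-z]+", text) if len(w) > 8),
    "avg_sentence_length": lambda text: (
        len(re.findall(r"[A-Za-z]+", text)) / max(len(re.split(r"[.!?]+", text.strip())) - 1, 1)
    ),
}


def identify_features(text: str) -> dict:
    """Step S305: return the set of features identified in the resource text."""
    return {name: detector(text) for name, detector in FEATURE_BANK.items()}


sample = "The student who arrived late, which surprised nobody, had not read any chapter."
print(identify_features(sample))
```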
[0131] In a step (S307), after identifying a set of features in a text, each semantic feature and each syntactic feature in the feature set is assigned a value or weight, based on the relative difficulty that the feature contributes to the overall difficulty of the text.

[0132] In some embodiments of a text difficulty calculator, a lexical feature bank may result from features extracted from a labeled dataset of training resources. The lexical feature bank can be a file that lists the lexical features and that is stored on a non-transient, machine-readable storage medium accessible to the text difficulty calculator.

[0133] Thus, for example, a text difficulty calculator can employ a labeled dataset of training resources, labeled with previously identified semantic and syntactic features, with metadata identifying the features in the text. Non-limiting examples of these features might include: the lexical frequency of various words (i.e., how often words appear in the resource, in the corpus of resources in the resource supply, or in external corpora); the length of sentences in the text; the amount of coordination and subordination in sentences; and the distance between pronouns and their antecedents.

[0134] As previously mentioned, a labeled dataset can include training resources having metadata associated with the semantic and syntactic features in the training resource's text. In the labeled dataset, each of the semantic and syntactic features can be assigned a corresponding weight, reflecting the degree of difficulty that the particular feature contributes to the overall linguistic difficulty of the text. When a feature extracted from the text of a resource is identified as corresponding to a specific feature in the labeled dataset, the text difficulty calculator can assign the specific weight associated with that feature as found in the labeled dataset.

[0135] In some embodiments, a content manager can determine each of the associated weights. In some embodiments, a text difficulty calculator can accept manual input for each of the weights from a content manager user interface. In some embodiments, a content manager can review, revise and/or update the weights for each of the features. Also, in some embodiments, the content manager can use the user interface to correct one or more algorithms, such as a logistic regression algorithm, used to determine the correct weight for each of the features.

[0136] So, for example, a content manager user interface can be used to correct a calculated difficulty score for a specific word, thereby updating a weight automatically associated with a feature or word. As another example, a user-labeled weight (representing the difficulty of a feature or word) can be manually entered as a training label to improve the accuracy of the algorithms (e.g., a statistical model) used to automatically compute the text difficulty score.

[0137] In a step (S309), a text difficulty score is determined using the weights assigned to each of the features as parameters for one or more algorithms used to determine the text difficulty score. Some embodiments may assign two values as weights, that is, a first value is assigned and then a second value can be assigned, before the text difficulty score is determined.

[0138] For example, some embodiments may use an algorithm, such as a probabilistic statistical classification model (for example, a logistic regression model), to determine the correct weights for each of the features in the set of features identified in the resources within a training corpus comprising one or more training resources. A first weighted value can be assigned during lexical feature extraction in steps (S301-S309). Each of the first values assigned to the features later receives a second value, determined by analyzing the features of the text.
Therefore, in this exemplary embodiment, the final text difficulty score is determined using each of the first weighted values obtained during feature extraction and each of the second weighted values determined through text analysis.

[0139] In some embodiments, as in the present example, the textual features of a resource effectively separate the resource's text into several weighted scores, based on a determined difficulty for each identified feature. Then, an algorithm, such as a logistic regression model, converges the weights across a plurality of features in a corpus, so that the weights for the features are determined based, at least partially, on their statistical distribution across the resources in the corpus.

[0140] In a next step (S311), the resource is grouped with resources of similar difficulty, based on the text difficulty score calculated for the text. This step can rank resources, based on the text difficulty score, in a grading system corresponding to a grading system that rates a language learner's competence in the relevant language.

[0141] In some embodiments, resources are grouped by skill level, based on comparable text difficulty scores. In these embodiments, after the text difficulty score is calculated, the resources are grouped on a scale comprising several thresholds in a leveling system used to classify the resources' text difficulty. Thus, for example, a first level may comprise a set of texts having text difficulty scores in the range "A" to "C"; or an "A" level may comprise a set of texts having text difficulty scores in the range of "0.1" to "0.3", on a global scale of 0 to 1.

[0142] Some text difficulty calculator embodiments can implement cluster analysis to group the various resources by difficulty level. After weights are assigned to each textual feature in a corpus, the weights assigned to the textual features of a given resource effectively separate the resource's text into various difficulty categories, according to each identified feature, and a logistic regression algorithm then converges the weights to classify the overall resource into an appropriate difficulty category. Some embodiments may implement principal component analysis to determine an optimal number of clusters, based on variance.

[0143] In an example of the text difficulty calculator using a clustering method, the clustering method used is a k-means methodology, where the inputs include the number of groups to implement and a combination of the computed syntactic and semantic features of the resource. This methodology can also determine the optimal number of competency-level groups, or clusters, into which resources are classified in a resource supply.

[0144] In this example, the text features may include the proportion of words in the resource that can be found in the Academic Word List, as well as adjective variation (AdjV), adverb variation (AdvV), bilogarithmic type-token ratio (B_TTR), lexical word variation (LV), modifier variation (ModV), noun variation (NV), average number of characters (NumChar), average number of syllables (NumSyll), squared verb variation (SVV1), Uber Index (Uber), and verb variation-1 (VV1).
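Since the disclosure names k-means clustering and a scikit-learn derived program, the grouping of resources into competency-level clusters can be sketched as follows; the resource vectors and the choice of three levels are assumptions made only for the example.

```python
# Sketch of the cluster-analysis grouping described above, assuming resources have
# already been reduced to numeric vectors of a few of the listed metrics (AdjV, AdvV, B_TTR, ...).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per resource; columns are a handful of lexical-variation metrics.
resource_vectors = np.array([
    [0.10, 0.05, 6.2, 0.31],
    [0.22, 0.09, 7.8, 0.40],
    [0.35, 0.15, 9.1, 0.52],
    [0.41, 0.18, 9.9, 0.57],
    [0.12, 0.06, 6.5, 0.33],
    [0.33, 0.14, 8.8, 0.50],
])

n_levels = 3                      # number of competency-level groups to form
scaled = StandardScaler().fit_transform(resource_vectors)
kmeans = KMeans(n_clusters=n_levels, n_init=10, random_state=0).fit(scaled)

# Each resource is assigned to a difficulty group (cluster label 0..n_levels-1).
print(kmeans.labels_)
```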
[0145] In another example, embodiments of the text difficulty calculator module can comprise two components: (a) a program derived from scikit-learn, for cluster analysis; and (b) syntactic complexity code. The text difficulty calculator can use a known programming language, such as Python, to extract the set of semantic and syntactic features, and a natural language toolkit for part-of-speech tagging, to perform word-feature identification and the text difficulty calculation.

Distractor Generator

[0146] Some embodiments of the language learning system or method may include a distractor generator module. A distractor generator module is a component of a computer program product embodied in a machine-readable medium and executed by a processor to implement specified functionality.

[0147] In some embodiments of a language learning system, distractors may be used in customized language learning lessons for a language learner. The distractor generator module can automatically identify and generate syntactic, spelling, phonetic and semantic (synonym and antonym) distractors.

[0148] Those embodiments of a language learning system that implement a distractor generator may include a language learner computing device, which can connect to a distractor supply to request distractors when a particular word is being tested. A distractor supply may be a database comprising a non-transient, machine-readable storage medium that stores one or more distractors generated by the distractor generator.

[0149] Embodiments of a language learning system that implement a distractor generator may include a keyword supply that stores the keywords extracted from resource text by the language learning system. Also, in some cases, keywords may be entered from a user interface of a language learner's computing device, of a content manager, or of another user device, such as an administrator.

[0150] Embodiments of a distractor generator can access one or more dictionary sources to identify a rough approximation of a word's definition, for appropriate feedback and effective distractors. A dictionary source can be the language learning system's keyword supply, built from keywords extracted from resources stored in the resource supply. A dictionary source can be another database in the system that stores words in association with definitions. A dictionary source can be an externally referenced source, such as a website or a commercial service that provides search access to words and their associated definitions.

[0151] Embodiments of a distractor generator can use a number of different sources and tools as the dictionary source. So, for example, a dictionary source might be a licensed dictionary software tool having definitions and, in some cases, pronunciation data. Commercially available dictionary tools can also be used as dictionary sources, such as WordNet, which is a dictionary software tool that represents relationships between groupings of words. Also, in some cases, the keyword supply can be implemented as a dictionary source; moreover, keyword supply embodiments can present metadata attached to each entry, such as an audio pronunciation, which can be generated from associated recorded speech. In addition, metadata can correlate distractors and keywords in the keyword supply back to the resources with which they are associated.

[0152] Some embodiments of a distractor generator may use a heuristic word-sense disambiguation function, which can be aided by a natural language processing tool for a known computer programming language, such as Python or Java.
The distractor generator can then identify a similar definition in a dictionary source, such as a keyword supply, or in an external dictionary source, such as an Oxford dictionary that can be searched online.

[0153] Exemplary embodiments of a distractor generator module can include such components as a natural language processing tool for part-of-speech tagging, pyEnchant, Aspell, the Oxford ESL Dictionary, and/or WordNet software.

[0154] Figure 4 shows a flowchart of an exemplary embodiment of the method executed by the distractor generator module.

[0155] Embodiments of a distractor generator module can automatically identify and generate different types of distractors for a target word. In some embodiments of a language learning system, distractors generated by a distractor generator can be implemented for various pedagogical purposes in language learning, such as assessment and/or educational activities.

[0156] In step (S401), a distractor generator receives a target word from a module or component within a system, which can be a language learning system. A distractor generator can receive a single target word or a plurality of target words, for which distractors will be automatically generated.

[0157] In some embodiments of a language learning system, a distractor generator can receive a set of keywords extracted from a resource. The set of keywords comprises one or more target words for which distractors can be automatically generated. The distractor generator can automatically generate a set of distractors for each of the target words.

[0158] In a step (S402), the distractor generator module generates a set of semantic distractors correlated to the target word. The set of semantic distractors may comprise one or more synonyms of the target word. Additionally or alternatively, the set of semantic distractors may comprise one or more antonyms of the target word.

[0159] Embodiments of a distractor generator may use natural language processing, implemented by, for example, a natural language processing tool for a known computer programming language, to associate similar words and/or dictionary definitions with the target word, to help language learners better master the target word.

[0160] A distractor generator can search dictionary sources to identify one or more synonyms of the target word and one or more antonyms of the target word. A dictionary source can be a keyword supply, another dictionary service within the language learning system (for example, a manually updated text file), and/or a computer dictionary service external to the system (for example, the Merriam-Webster® website). The distractor generator can generate a set of semantic distractors by choosing one or more of the identified synonyms and/or one or more of the antonyms. The set of semantic distractors can be constituted of words that assist language learners in mastering the meaning of target words. Some language learning system embodiments can implement semantic distractors to distract language learners from the correct meaning of the target word. Embodiments of the distractor generator can automatically generate the set of semantic distractors to help language learners understand a definition of the target word.
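Because WordNet is named above as a possible dictionary source, step (S402) can be illustrated with the following sketch, which collects synonyms and antonyms of a target word through NLTK's WordNet interface; the scoring and filtering that a production distractor generator would add are omitted.

```python
# Hedged sketch of semantic distractor generation (step S402) using WordNet via NLTK.
# Requires: pip install nltk; then nltk.download("wordnet")
from nltk.corpus import wordnet as wn


def semantic_distractors(target: str, pos=wn.NOUN, limit: int = 5):
    """Collect synonyms and antonyms of the target word from WordNet."""
    synonyms, antonyms = set(), set()
    for synset in wn.synsets(target, pos=pos):
        for lemma in synset.lemmas():
            name = lemma.name().replace("_", " ")
            if name.lower() != target.lower():
                synonyms.add(name)
            for ant in lemma.antonyms():
                antonyms.add(ant.name().replace("_", " "))
    return list(synonyms)[:limit], list(antonyms)[:limit]


syns, ants = semantic_distractors("increase", pos=wn.VERB)
print(syns, ants)   # contents depend on the installed WordNet data
```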
[0161] In a next step (S405), a distractor generator can identify a set of spelling distractors for the target word. Spelling distractors can be words having a short edit distance to the target word. An edit distance, in this context, is the total number of letter insertions and/or letter deletions required to turn one word into a second word. So, for example, changing the word "could" to "cold" constitutes an edit distance of 1, in that deleting only one letter is required to change "could" to "cold". Similarly, in another example, the change from "could" to "would" constitutes an edit distance of 2, in that one letter is deleted and one letter is inserted. In another example, the change from "could" to "should" constitutes an edit distance of 3, in that one letter is deleted and two letters are inserted.

[0162] Using words found in one or more dictionary sources, the distractor generator in step (S405) can locate words having a minimal edit distance relative to the target word that is to be practiced. That is, the distractor generator can compute an edit distance to the target word for each of the words in a dictionary source. The distractor generator can sort the words in a dictionary by shortest edit distance. A distractor generator can then start from the word ranked with the shortest edit distance and continue through the ranked words toward the words that have the longest edit distances, until the set of spelling distractors is generated. The spelling distractor set comprises words having small or minimal edit distances, satisfying a level of difficulty correlated to the student's abilities.

[0163] The shorter the edit distance between two words, the more difficult it will be for a non-native speaker to differentiate the two words from each other. Conversely, words having a longer edit distance may be easier to differentiate. Applying this concept to the generation of spelling distractors, shorter edit distances between spelling distractors and a target word can result in comparatively more difficult distractors for language learners. The level of difficulty of the spelling distractors, therefore, can be adapted to varying degrees by selecting words from the dictionary source having shorter or longer edit distances.

[0164] In some embodiments, the selection of words having the shortest desired edit distance may be based on a predetermined number of words to be used. For example, if there are to be 10 comparatively difficult spelling distractors in the set, then the 10 words ranked closest to the target word will be selected.

[0165] As described above, in some embodiments, a minimum edit distance, a maximum edit distance, or an exact edit distance may be used to select words for the set of spelling distractors, based on a desired level of difficulty. The spelling distractor set can be determined by a difficulty setting, which can allow the distractor generator to automatically adapt the complexity of the selected spelling distractors and the number of spelling distractors to be used in the set. So, for example, an exact edit distance of 3 can be used to generate a comparatively less difficult set of spelling distractors. In this case, all words identified as having an edit distance of 3 in a dictionary can be included in the spelling distractor set. The number of spelling distractors can be limited in the various ways described above, or it can include all words that meet the edit distance criteria.
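A minimal sketch of step (S405), using the insertion/deletion distance described in paragraph [0161] to rank candidate words, is shown below; the candidate dictionary is a small illustrative list, and a production system might substitute a full Levenshtein distance.

```python
# Rank dictionary words by edit distance to the target word and keep the closest
# ones as spelling distractors. Only insertions and deletions are counted, mirroring
# the description in paragraph [0161].
def insert_delete_distance(a: str, b: str) -> int:
    """Edit distance counting only insertions and deletions (no substitutions)."""
    # Longest common subsequence via dynamic programming.
    m, n = len(a), len(b)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                lcs[i][j] = lcs[i - 1][j - 1] + 1
            else:
                lcs[i][j] = max(lcs[i - 1][j], lcs[i][j - 1])
    # Every character outside the common subsequence must be deleted or inserted.
    return (m - lcs[m][n]) + (n - lcs[m][n])


def spelling_distractors(target: str, dictionary: list, count: int = 3) -> list:
    ranked = sorted(
        (w for w in dictionary if w != target),
        key=lambda w: insert_delete_distance(target, w),
    )
    return ranked[:count]


words = ["cold", "would", "should", "called", "mould", "cloud"]
print(spelling_distractors("could", words))   # closest words first, e.g. ['cold', 'would', 'mould']
```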
[0166] It should be noted that other algorithms for selecting spelling distractors based on an edit distance to a target word can be used to generate a set of spelling distractors. One of ordinary skill in the art will appreciate that other algorithms for automatically identifying and generating spelling distractors based on an edit distance may fall within the scope of one or more portions of the invention described herein.

[0167] In a next step (S407), a distractor generator can generate a set of phonetic distractors correlated to a target word. A phonetic distractor can be a word that tends to be distracting based on phonetic similarity between words. Distractor generator embodiments can automatically identify one or more phonetic distractors in one or more dictionary sources and then generate the set of phonetic distractors from one or more of those identified distractors.

[0168] Phonetic distractors can be words that sound similar to the target word, but that are not intended to be pronounced identically when spoken. A phonetic distractor can be a word that sounds similar to the target word, but it can also have a small edit distance from the target word. Some embodiments of the language learning system may use the set of phonetic distractors in pedagogical learning activities where, for example, language learners must choose between two different items that sound similar to each other.

[0169] Some embodiments of a distractor generator may comprise a step (S408) in which words that have exactly the same sound, also called homophones, are excluded from the set of phonetic distractors. In these embodiments, a distractor generator can recognize a word as being a homophone of a target word. For example, in some cases, a homophone can be included in a set of spelling distractors because the homophone has an edit distance that meets the spelling distractor criteria, which would otherwise mean that the homophone could be included in the set of phonetic distractors as well. However, embodiments that implement step (S408), or similar steps, can identify homophones and exclude them from the set of phonetic distractors.

[0170] In a following step (S409), a distractor generator can identify a set of syntactic distractors. A syntactic distractor can be a word that is correlated to a target word but differs in some grammatical way; for example, the word is a different conjugation of a verb, the word is a verb that agrees with a different person being referenced (first person, second person, third person), the word is a noun form of a verb, or the word is a verb form of a noun.
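Steps (S407) and (S408) can be illustrated, under the assumption that pronunciations come from the CMU Pronouncing Dictionary available through NLTK, with the following sketch, which keeps phonetically close candidates while excluding exact homophones; the phoneme-difference threshold is an arbitrary choice for the example.

```python
# Hedged sketch of steps S407-S408: find words that sound similar to the target while
# excluding exact homophones, using the CMU Pronouncing Dictionary via NLTK.
# Requires: pip install nltk; then nltk.download("cmudict")
from nltk.corpus import cmudict

PRON = cmudict.dict()   # word -> list of phoneme sequences


def phonetic_distractors(target: str, candidates: list, max_diff: int = 2) -> list:
    target_prons = PRON.get(target.lower(), [])
    selected = []
    for word in candidates:
        prons = PRON.get(word.lower(), [])
        if not prons or word.lower() == target.lower():
            continue
        # Exclude exact homophones (step S408): any pronunciation identical to the target's.
        if any(p in target_prons for p in prons):
            continue
        # Keep words whose pronunciation differs from the target's by only a few phonemes.
        close = any(
            len(tp) == len(p) and sum(a != b for a, b in zip(tp, p)) <= max_diff
            for tp in target_prons for p in prons
        )
        if close:
            selected.append(word)
    return selected


print(phonetic_distractors("ship", ["sheep", "chip", "sheet", "two"]))   # e.g. ['sheep', 'chip', 'sheet']
```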
[0171] In a next step (S411), a distractor generator can store each of the distractor sets in a distractor supply. In some circumstances, step (S411) may automatically update an existing set of distractors or receive input from a content manager's computing device user interface changing one or more of the distractors.

[0172] In a following step (S413), a distractor generator can provide one or more automatically generated sets of distractors to a user interface of a computing device.

[0173] In some embodiments, the user interface that receives the output produced by the distractor generator can be that of a content manager, who can examine the automatically generated distractors for accuracy and consistency. If the content manager intends to correct a set of distractors, these embodiments can allow the content manager to make the necessary changes through the user interface.

[0174] In some embodiments, the user interface that receives the produced distractors may be that of a language student, who can examine the automatically generated distractors for accuracy and consistency, specifically when the student is the source of the resource from which the words were extracted. If the student intends to correct a set of distractors, these embodiments can allow the student to make the necessary changes through the user interface.

[0175] In some embodiments, distractors or sets of distractors can be associated with keywords through correlated metadata. That is, distractors can be stored in a distractor supply with metadata that associates keywords with specific distractors. Similarly, in some embodiments of a keyword supply, as described earlier herein, keywords may be stored with metadata that associates distractors with specific keywords.

Learning Activities and Activity Sequences

[0176] Embodiments of a language learning system may include lessons or classes generated by a learning module, where the objective is to have a language student demonstrate knowledge of a concept, or to help a student learn a concept, by carrying out specific learning activities.

[0177] A learning module can generate learning activities of various types, which are designed to promote various language practices, such as reading, writing, listening and speaking competence. The different activity types can also focus on vocabulary building, grammatical competence, and/or pronunciation, among other practices. Examples of activities might include a multiple choice question activity, a vocabulary matching activity, a speaking-focused activity, a pronunciation-focused activity, a writing-focused activity, a grammar-focused activity, a listening-focused activity, a reading-focused activity, a spelling-focused activity, a vocabulary word identification activity, an audio comprehension activity, a video comprehension activity, and a reading comprehension activity.

[0178] A learning module can dynamically generate a lesson that can comprise a number of learning activities tailored for language learners. When dynamically generating a learning activity, the learning module can use the output generated by a keyword extractor, a distractor generator, and/or a text difficulty calculator.

[0179] In some embodiments, learning activities can be customized based on user data associated with students that is stored in a user data supply. So, for example, learning activities can be varied in difficulty to match a language learner's level of competence. Other variations based on user data may include customizations based on the language learner's personal goals, needs and performance. The learning module can automatically use the results of the various modules and the data stored in the databases to automatically generate the appropriate learning activities for a language learner.

[0180] The system can establish a pedagogical route for each level of language learning, where each pedagogical route comprises a series of learning activities. Learning activities can vary based on the student's needs and level of competence, as well as on different constraints, such as the number of words used in an activity, a time constraint, or the use of an
audio or textual hint (e.g., revealing part of a word). The system can dynamically build an appropriately difficult activity based on the type of learning activity called for by the student's past performance.

[0181] An exemplary learning activity can be a vocabulary activity in which the student identifies synonyms of a particular word. The synonyms used in the learning activity can be chosen based on the student's needs and competence. Learning activity types are established within the system, but content and words are dynamically adjusted for each student, allowing for a customized activity.

[0182] Each learning activity can use distractors. The system chooses appropriate distractors, based on their type and relationship to the target word, for use in different activities, including multiple choice questions, matching activities, spelling activities, text reconstruction activities, and memory games. The learning activity module can record a student's performance and automatically assign a grade to the student for one or more practices exercised in an activity.

[0183] At a higher performance grouping level, more difficult distractors can be used. Learning activities can be automatically adapted within a class or lesson, or for a subsequent class, in response to a change in the student's skill and/or competency at a global level. In some embodiments, the learning activity module may update the student profile in the user profile supply to reflect grades and changes in the student's skills.

[0184] Thus, for example, a lower-level student might have to fill in a blank with a correct word and be presented with options such as the target word, an antonym, and other words with a similar spelling. A higher-level student performing the same activity would be presented with options of increasing difficulty, such as more distractors and/or more similarly spelled words.

[0185] A class can comprise a series of learning activities, which can be derived from combinations and permutations of the corpus, keywords and distractors. Each class is based on a resource, for example a document, and includes a specific series of learning activities that can increase in difficulty. For example, a sequence of activities focused on reading and spelling could start with an initial reading comprehension activity, then move to a vocabulary acquisition activity, and end with one or two spelling activities. As the student demonstrates competence and improved performance, the targeted keywords may present increasing difficulty. The level of difficulty of a particular learning activity can be determined by a lesson difficulty level (chosen based on the user's competence, i.e., exam performance) and a learning activity difficulty level (chosen based on the user's performance during classes/courses). The learning activity difficulty level can be correlated to the text difficulty level of the resource made available for the learning activity. The text difficulty level can be determined by algorithms using metric variables such as a readability index, average sentence length, Oxford 3000 word count, AWL word count, number of subordinate clauses, number of relative clauses, number of common conjugation tenses, number of progressive conjugation tenses, number of future conjugation tenses, or words tagged in an image.
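Two of the metric variables named above, average sentence length and a readability index, can be computed along the lines of the following sketch; the Flesch reading ease formula and the crude syllable heuristic are used here only as stand-ins for whatever readability index the system actually employs.

```python
# Illustrative computation of two text-difficulty metrics: average sentence length
# and the Flesch reading ease readability index (higher = easier to read).
import re


def count_syllables(word: str) -> int:
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(len(groups), 1)


def text_metrics(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    avg_sentence_length = len(words) / max(len(sentences), 1)
    syllables = sum(count_syllables(w) for w in words)
    flesch = 206.835 - 1.015 * avg_sentence_length - 84.6 * (syllables / max(len(words), 1))
    return {
        "avg_sentence_length": avg_sentence_length,
        "flesch_reading_ease": flesch,
    }


print(text_metrics("The cat sat on the mat. It was a sunny day, and the cat was happy."))
```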
[0186] So, for example, a relatively high text difficulty score for a resource may indicate greater language difficulty or complexity in the text, because the text contains words that appear less frequently, longer sentences, academic words, and a relatively high frequency of complex forms.

[0187] A relatively higher student competency score indicates that the student is quite competent in one or more domains of language practice, such as reading, writing, listening and speaking. In other words, a student with a high competency score is said to have high skill scores in various language-specific practices. Students who have a relatively high competency score may be able to handle the content of the most difficult language resources, indicating that these students have a robust vocabulary, a strong command of grammar, and strong performance across the domains of language practice, that is, high skill in language-specific practices. Resource difficulty levels, such as a text difficulty score, can be mapped to student competency levels, so that all resources and learning activities presented to students generally contain language at a level of difficulty that is at, or slightly above, the student's current level.

[0188] Embodiments of a learning activity module can implement various multimedia sources, including audio input or output, video input or output, text input or output, and image input or output. Student responses to a learning activity can be saved to a user data supply for review of user performance and/or for feedback from an administrator or content manager. In some learning activities, a student may receive a resource having errors in the text and be required to edit it. The student can then submit the completed learning activity for offline asynchronous feedback.

[0189] The learning activities, and each type of learning activity, can be modified to become more or less difficult while keeping the underlying resource constant. The difficulty of the learning activity can be influenced, for example, by the difficulty of the distractors that are used. So, for example, an "easy" version of a learning activity might use "easier" distractors than the "hard" version. Other non-limiting examples of how the difficulty of learning activities can be varied may include adjusting the number of items tested, the presence or absence of a timer, and/or other activity-specific variations, such as longer words in a vocabulary-focused activity type. In some embodiments, the learning activity difficulty can be manually modified or automatically adapted for students within their course goals and lessons.

[0190] The difficulty setting can also be based on word difficulty. Words may have difficulty scores associated with them. In some cases, word difficulty scores can be normalized to a common scale so that the words can be appropriately associated with learning activities. Inputs to the word difficulty score can include frequency within a variety of corpora, such as the American National Corpus, as well as metrics such as the number of syllables and the number of definitions of the word. Word difficulty levels can be used to expose the student to word sequences within a sequence of lessons/learning activities. Student word mastery records can be used to determine which words are featured in various learning activities.
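The normalization of word difficulty scores mentioned above might, for example, be carried out as in the following sketch, which rescales raw scores to a 0-1 range and filters words near a student's level; the threshold logic is purely illustrative and is not taken from the disclosure.

```python
# Rescale raw word difficulty scores to a common 0-1 scale and pick practice words
# whose difficulty sits at or just above a hypothetical student level.
def normalize(scores: dict) -> dict:
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {w: (s - lo) / span for w, s in scores.items()}


raw = {"cat": 1.2, "purchase": 3.4, "ubiquitous": 7.9}
normalized = normalize(raw)

student_level = 0.5   # hypothetical normalized competency level
practice_words = [w for w, s in normalized.items() if s <= student_level + 0.1]
print(normalized, practice_words)
```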
Examples of a Graphical User Interface for a Student

[0191] Figure 5 shows a screen of an exemplary embodiment of a graphical user interface (GUI) (500), showing a home menu before a lesson (505) for a language student, on a monitor of the student's computing device. The graphical user interface (GUI) screen (500) comprises a diagnostic score (501), the user's language competence level (502), a learning focus (503) on a specific language practice, an objective (504) for the course, a lesson (505) and a start button (506). Screens (600), (700) and (800) are examples of learning activities of different activity types, which were dynamically generated for the student and which, taken together, can implement a lesson (505).

[0192] An activity in this sample lesson (505) might be a reading comprehension activity, as shown in Figure 6. Another activity in this sample lesson (505) might be a vocabulary activity, as shown in Figure 7. Another activity in this sample lesson (505) can be an activity involving spelling with different subsets of keywords, as shown in Figure 8.

[0193] Figure 6 represents a screen image of a graphical user interface (GUI) (600) for a student to engage in a reading comprehension activity prepared by the language learning system. The example (GUI) screen image (600) for a reading comprehension activity may comprise the text (601) of a document resource that is used for the example lesson (505), a document title (602), one or more highlighted keywords (603) and a reading comprehension quiz (604) comprising a question (604a) and a set of multiple choice answers (604b). An exemplary (GUI) screen image (600) can highlight several keywords (603) and present a comprehension quiz (604) that implements the keywords (603) and a set of distractors in the form of multiple choice answers (604b).

[0194] Figure 7 is a screen image of a graphical user interface (GUI) (700) for a user to engage in a vocabulary activity prepared by the language learning system. The example (GUI) screen image (700) for a vocabulary activity may comprise a text (701) of a document resource, a document title (702), a highlighted word (703) in the text (701), and a word-matching question (704). The word-matching question presents a set of multiple choice answers (704b), where the correct answer matches the highlighted word (703).

[0195] Figure 8 is a screen image of a graphical user interface (GUI) (800) for a user to engage in a spelling activity prepared by the language learning system. This example (GUI) screen (800) might present an activity involving spelling with a different subset of keywords. The graphical user interface (GUI) screen (800) comprises a text (801) of a document resource, a document title (802), a highlighted word (803) in the text (801), and a scrambled-word quiz (804) comprising a short question (804a) and a scrambled set of letters (804b).

Example of a Learning Activity Module

[0196] Figure 9 shows a method of a language learning system implementing a learning module, according to an exemplary embodiment.

[0197] The exemplary method embodiment of Figure 9 comprises the steps (S901), (S903), (S905), (S911), (S904), (S906), (S908), (S910), (S920), (S921), (S922), (S923), (S924), (S925), and (S926), and can implement a user data supply (902), a keyword supply (907), a resource supply (909) and a distractor supply (913).

[0198] In a first step (S901), a language learning system can receive a new document resource comprising text. The new text can be received from a language student's ("student") computing device.
The new text can also be received from a computing device of a content manager. New text can be automatically downloaded or streamed from a text production source, such as, for example, a website, a blog, a news feed or a textbook publisher.

[0199] In a step (S903), a keyword extractor module implemented by the system can extract one or more keywords from the new text. Keywords can be extracted according to an algorithm adapted to identify and extract keywords that are pedagogically valuable words for language learning. The keyword extractor module can be adapted to determine a word difficulty score for a keyword. The keyword extractor module can be adapted to associate one or more other attributes with a keyword, such as word length, number of syllables and/or part of speech.

[0200] A keyword extractor module can store the extracted keywords in a keyword supply (907), which is a non-transient, machine-readable storage medium for storing keywords. A keyword supply (907) may store data associated with the keywords, such as word attributes, word difficulty scores, the source document, and/or the keys associated with the keywords stored in the keyword supply (907).

[0201] In some embodiments, the keyword extractor module can extract and store different keywords depending on the student's profile. Keywords can be assigned a higher difficulty score if they, for example, are associated with an esoteric or uncommon definition. Keywords can also be more difficult if they are longer, depending on the number of letters and syllables. Keywords can be more difficult if they contain silent letters, abnormal pronunciations, or unintuitive spellings. In these embodiments, a keyword extractor can generate an additional word score (in addition to the word score used to determine candidate keywords) to rank words in terms of their probable difficulty for non-native speakers, using the length of the word and the frequency of its part of speech in the linguistic corpus. As discussed below, a learning activity can be made more difficult by using distractors that are intended to test more difficult keywords extracted from a text.

[0202] In a next step (S905), a text difficulty calculator module can determine a text difficulty score for the text of a new resource. In the exemplary embodiment, a new document resource comprising the text is received, the text difficulty score is determined, and then the new resource can be stored in a resource supply (909). The resource supply (909) may be a machine-readable, non-transient storage medium for storing one or more resources. In some embodiments, the resource supply (909) may also store metadata associated with resources that describes various attributes of the associated resources, such as, for example, a text difficulty score. In some embodiments, a document may already be stored in the resource supply (909), in which case step (S905) would not be necessary.

[0203] In a step (S911), a distractor generator can generate a set of one or more distractors. Distractors can be generated and stored in a distractor supply (913). Distractors can be of several different types, each type being designed to exercise the language learner's skills in particular language practices, i.e., spelling, vocabulary, or phonetic distinctions.

[0204] In some embodiments, the distractor generator can generate as many distractors as needed to test each of the extracted keywords.
Distractors can be of varying difficulty, based on a number of attributes of a distractor, such as, for example, the difficulty of the underlying keyword being tested, or the closeness of the keyword and the distractor.

[0205] Some embodiments of a learning module may utilize a user data supply (902), which is a machine-readable, non-transient storage medium that stores user profiles associated with language learners. User profiles can comprise information about language learners, such as the student's goal for language learning, a subject of interest with respect to the content contained in texts, skill scores for language practices, and/or the student's level of global competence in the language.

[0206] In a step (S904), a learning module can identify a student's competence level from the user data supply (902). A competency level can be an overall assessment of a student's skill level, while a skill can relate to a specific practice. A competency level can be determined by combining the skills from individual exams, or it can be determined by known means of testing a language competency level.

[0207] In a step (S906), a learning module can identify a need for targeted language skill practice based on the student's skills recorded in the user data supply (902). An example of a language practice might be reading comprehension or spelling. A skill score can be an indicator of a student's ability to perform a specific language practice. The learning module can make use of any number of language practices when preparing the learning activities. A student profile can store a skill score for each of the language practices exercised by the learning module.

[0208] In a step (S908), a learning module can identify one or more goals that the student has for learning the language. A goal might be, for example, studying for the Test of English as a Foreign Language (TOEFL exam), preparing for a tourist vacation, preparing for a business trip, going to college, meeting a professional requirement, or fulfilling a military requirement. A student can be associated with more than one objective. An objective can be selected by the student from a predetermined list, or the objective can be entered by the student directly. An objective can also be entered by a content manager or other administrator device.

[0209] A learning module can prepare learning activities related to a student's needs, that is, targeting practices in which the student has weaker skills. In some embodiments, the learning module can prepare lessons so that students can achieve certain goals. That is, the learning module can track the student's progress, or their level of competence and skills, in specific practices related to their stated goals. A goal can be achieved when a student's level of competence and/or skills reaches a level comparable to the goal. Students who have a goal that demands a higher level of competence, such as preparing for the TOEFL exam, may receive a larger set of classes and/or more comprehensive classes, when compared to students who have a less demanding goal, such as learning basic vocabulary for casual conversation.

[0210] So, for example, two students who are at the exact same level of competence may have different goals that they intend to achieve.
The first student, for example, may only want to be able to order coffee in the new language, in which case that first student would be able to reach that goal more quickly, and the first student's content could generally correlate to ordering food and drink. If the second student, at the same level of competence, has the desire to become fluent in the new language, then the second student will have a longer course of study (for example, more classes and/or more lessons). In this example, both the first and second students can start with learning activities aimed at ordering coffee or ordering food and drink, as these are goals that both students wish to achieve. However, the second student could then interact with learning activities involving increasing difficulty in language practices. In some cases, the second student could receive longer classes, comprising a greater number of learning activities, compared to the first student.

[0211] Some learning module embodiments may allow students to take a completion test to determine whether they have achieved their intended goals. The student's competence and skill levels can be recorded and tracked through their interaction with the language learning system. In some cases, the learning module can inform users when they have achieved a competency level comparable to their stated goals. Students can then take a completion test, which examines whether they have achieved their goals.

[0212] In a next step (S910), a learning module can identify a student content interest with respect to potential resource material, according to the student interests stored in the user data supply (902). A student interest could be, for example, a sport. These interests can be entered from a student user interface, in which the student lists their interests.

[0213] In the following steps (S920-S926), a learning module selects a resource to use to generate a series of learning activities, where the series of learning activities can be a lesson. The lesson can be a set of learning activities generated using the identified student attributes obtained in steps (S904), (S906), (S908) and (S910), and also using the learning activity building blocks generated in steps (S901), (S903), (S905) and (S911).

[0214] In a following step (S920), a resource may be selected from the resource supply based on one or more student attributes. A resource can be selected according to any permutation of the student's attributes, such as the student's level of competence, goals and interests.

[0215] In some learning module embodiments, learning activities can be generated to automatically include a resource containing content that is relevant to the student's interests. For a document resource, the text's content can be identified using metadata associated with the extracted keywords. Text content may also be identified using some other known natural language processing technique that can identify and/or classify the main topic of a resource, corpus, or other collection of resources.

[0216] As an example of selecting certain resources based on their content, a student with an interest in soccer can interact with learning activities correlated to resources that describe a recent soccer game, in which the text explains the rules of soccer or the text details the history of a famous football club.
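Step (S920) can be illustrated with the following hypothetical sketch, which selects a resource whose topics overlap the student's interests and whose text difficulty score sits at or slightly above the student's competency level; the Resource class, margin value, and sample supply are assumptions made for the example.

```python
# Hypothetical sketch of step S920: choose a resource matching the student's interests
# with a difficulty at or slightly above the student's competency level.
from dataclasses import dataclass


@dataclass
class Resource:
    title: str
    topics: set
    difficulty: float   # normalized text difficulty score


def select_resource(resources, interests: set, competency: float, margin: float = 0.1):
    candidates = [
        r for r in resources
        if r.topics & interests and competency <= r.difficulty <= competency + margin
    ]
    # Fall back to the closest-difficulty resource if no topical match exists.
    pool = candidates or resources
    return min(pool, key=lambda r: abs(r.difficulty - competency))


supply = [
    Resource("Rules of soccer", {"soccer", "sports"}, 0.35),
    Resource("History of a famous club", {"soccer", "history"}, 0.55),
    Resource("Quantum field theory", {"physics"}, 0.90),
]
print(select_resource(supply, interests={"soccer"}, competency=0.3).title)   # "Rules of soccer"
```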
[0217] As previously mentioned, some embodiments of a learning module can select resources based on how complex their content is. That is, in some cases resource difficulty scores may be based, in part, on the complexity of the resource's content. So, for example, if a beginning student is a physicist with an interest in complicated scientific topics, then a text whose content discusses those complicated scientific topics in the language being learned may be too difficult to effectively teach the new language to the novice student. Therefore, some embodiments of a learning activity may use a text difficulty score to determine an appropriate resource to look up in a resource supply when constructing a learning activity.

[0218] Some learning activity module embodiments can vary the permutations of student attributes and the permutations of the learning activity building blocks that are used when generating learning activities.

[0219] In a next step (S921), a learning activity module can target a specific language practice to exercise in the learning activity being generated. The specific practice that is targeted is based on the student's skills in the individual language practices. A learning activity module can adaptively build a series of learning activities that utilize activities targeting weaknesses in the student's specific practices.

[0220] Embodiments of a learning activity module may also adaptively build learning activities based on the student's goals. For example, if the student is a tourist, then learning activities can comprise activities that are focused on simpler vocabulary with respect to attractions or directions. By contrast, a student preparing for a TOEFL exam may be given more difficult grammar activities.

[0221] Next, in a step (S922), having targeted a specific language practice in step (S921), the learning activity module can determine which activity type the learning activity should employ to appropriately address that specific language practice. So, for example, when a learning activity module determines that a student should target phonetic comprehension practice, the learning activity module can automatically generate an activity based on phonetics.

[0222] Non-limiting examples of activities may include: a reading comprehension activity (e.g., a multiple choice question about the content of a consumed corpus or resource), a word-matching activity (e.g., filling in a blank in an article by choosing from the list of all available keywords), a vocabulary challenge activity (e.g., showing a definition and multiple choice answers to identify the target keyword, in text or with audio), a sound reduction activity (e.g., a text passage having missing words, where the user listens to multiple audio files to find the matching word), a memory game activity (e.g., cards with keywords and synonyms/definitions that cover the original content and are selected, concentration-style, to match the words), and a word spelling activity (for example, seeing a deleted word/phrase in a text strip and clicking scrambled letters in the proper order to spell it).

[0223] In a following step (S923), a learning activity module can select a set of distractors, derived from the selected resource, that are suitable for the type of learning activity.
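The chain from steps (S921) through (S923) can be sketched as a pair of lookup tables mapping a targeted language practice to an activity type and then to a suitable distractor type; the mappings below are illustrative assumptions, not the system's actual configuration.

```python
# Hedged sketch of steps S921-S923: choose an activity type for the targeted practice,
# then choose the distractor type suited to that activity type.
ACTIVITY_FOR_PRACTICE = {
    "phonetics": "sound reduction activity",
    "vocabulary": "vocabulary challenge activity",
    "spelling": "spelling activity",
    "reading": "reading comprehension activity",
}

DISTRACTOR_FOR_ACTIVITY = {
    "sound reduction activity": "phonetic",
    "vocabulary challenge activity": "semantic",
    "spelling activity": "spelling",
    "reading comprehension activity": "semantic",
}


def plan_activity(weakest_practice: str) -> dict:
    activity = ACTIVITY_FOR_PRACTICE[weakest_practice]
    return {
        "practice": weakest_practice,
        "activity_type": activity,
        "distractor_type": DISTRACTOR_FOR_ACTIVITY[activity],
    }


print(plan_activity("phonetics"))
# {'practice': 'phonetics', 'activity_type': 'sound reduction activity', 'distractor_type': 'phonetic'}
```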
[0224] A learning activity module can identify and select distractors appropriate for the type of learning activity determined in step (S922). So, for example, phonetic distractors can be selected when the activity type is a sound reduction activity.
[0225] A learning activity module can identify and select distractors that present a level of difficulty comparable to the student's skill in the specific language practice that will be exercised in the learning activity.
[0226] In a next step (S924), a learning activity module can generate a learning activity using the components of a learning activity building block and based on student attributes.
[0227] The components of a learning activity building block may include a resource, a set of keywords associated with the resource, an activity that exercises a specific language practice, and a set of distractors derived from a dictionary source using keywords from the text (e.g., semantic distractors such as synonyms, or phonetic distractors). Student attributes may include information describing the student's preferences and/or information describing the student's capabilities (for example, a vocabulary skill or an overall competency level).
[0228] In a following step (S925), a learning activity module can generate a class comprising a set of learning activities. The class, as well as the individual learning activities, can be customized for a student. A class can provide students with a targeted route to achieving learning goals, and a student profile can store information correlated to that route. The learning activities in a class can be sequenced to maximize pedagogical value.
[0229] In a following step (S926), a learning activity module can generate a course unit comprising one or more lessons. A unit can be a route to achieving learning objectives. Lessons can be sequenced to maximize pedagogical value. Resource content and learning activities can be personalized and customized throughout the unit.
[0230] In some embodiments, a student may have an active primary course composed of goal-oriented segments, produced from lessons focused on language practice and using resources matched to the student's interests. Units can allow users to earn a badge by passing optional completion tests.
[0231] A learning activity module can also include synchronous or live instruction, as well as asynchronous instructional feedback. Asynchronous feedback can be incorporated as a learning activity into classes tailored to the student's needs. Live instruction can be provided in sessions scheduled in conjunction with a student's class, or in separate sessions that take place at set intervals.
[0232] Unless specifically stated otherwise, as is apparent from the following discussion, it is noted that throughout the description, discussions using terms such as "creating", "providing", "calculating", "processing", "computing", "transmitting", "receiving", "determining", "showing", "identifying", "presenting", "setting" or the like may refer to the actions and processes of a data processing system, or similar electronic device, that manipulates and transforms data represented as physical (electronic) quantities within the system's registers or memories into other data similarly represented as physical quantities within the system's memories or registers, or other such information storage, transmission or display devices.
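As a further non-limiting illustration of steps (S923) and (S924), the Python sketch below (again with hypothetical names and values) filters a distractor store by the distractor type corresponding to the chosen activity type, keeps distractors whose difficulty is comparable to the student's skill, and assembles a simple multiple-option activity from the building blocks.

    # Illustrative sketch only; field names, types and values are hypothetical.
    from dataclasses import dataclass
    from typing import List
    import random

    @dataclass
    class Distractor:
        text: str
        kind: str           # "semantic", "spelling", "phonetic" or "syntactic"
        difficulty: float

    ACTIVITY_TO_DISTRACTOR = {"sound_reduction": "phonetic",
                              "vocabulary_challenge": "semantic",
                              "word_scramble": "spelling"}

    def select_distractors(store: List[Distractor], activity_type: str,
                           student_skill: float, n: int = 3,
                           tolerance: float = 0.2) -> List[Distractor]:
        """Keep distractors of the matching type with a comparable difficulty."""
        kind = ACTIVITY_TO_DISTRACTOR.get(activity_type, "semantic")
        pool = [d for d in store
                if d.kind == kind and abs(d.difficulty - student_skill) <= tolerance]
        return random.sample(pool, min(n, len(pool)))

    def build_activity(prompt: str, answer: str, activity_type: str,
                       distractors: List[Distractor]) -> dict:
        """Assemble one activity: the correct answer shuffled among the distractors."""
        options = [answer] + [d.text for d in distractors]
        random.shuffle(options)
        return {"type": activity_type, "prompt": prompt,
                "options": options, "answer": answer}

    store = [Distractor("ship", "phonetic", 0.40), Distractor("sheet", "phonetic", 0.50),
             Distractor("shop", "phonetic", 0.45), Distractor("vessel", "semantic", 0.50)]
    chosen = select_distractors(store, "sound_reduction", student_skill=0.45)
    print(build_activity("sheep", "sheep", "sound_reduction", chosen))

The mapping from activity types to distractor types and the difficulty tolerance are illustrative assumptions only; the disclosure leaves these choices to the particular embodiment.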
The system can be installed on a mobile device.
[0233] The exemplary embodiments can relate to an apparatus for performing one or more of the functions described herein. Such an apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a machine-readable (e.g., computer-readable) storage medium, such as, but not limited to, any type of disk, including floppy disks, optical disks, CD-ROMs and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a bus.
[0234] The various illustrative logic blocks, modules, circuits and algorithmic steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits and steps have been described above generally in terms of their functionality. Whether that functionality is implemented as hardware or software depends on the specific application and the design constraints imposed on the overall system. Those skilled in the art may implement the described functionality in different ways for each specific application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
[0235] Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, hardware description languages, or any combination thereof. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures or program statements. A code segment may be coupled to another code segment or to a hardware circuit by passing and/or receiving information, data, arguments, parameters or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded or transmitted by any suitable means, including memory sharing, message passing, token passing, network transmission, etc.
[0236] The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the invention. Thus, the operation and behavior of the systems and methods were described without reference to specific software code, it being understood that software and control hardware can be designed to implement the systems and methods based on the description presented herein.
[0237] When implemented in software, the functions may be stored as one or more instructions or code on a non-transient computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a computer-readable or processor-readable storage medium. A non-transient computer-readable or processor-readable storage medium includes both computer storage media and tangible storage media that facilitate the transfer of a computer program from one place to another.
A non-transient, processor-readable storage medium can be any available medium that can be accessed by a computer. By way of example, and not limitation, such a non-transient processor-readable storage medium may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer or processor. The terms disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. In addition, the operations of a method or algorithm may be arranged as one or any combination or set of codes and/or instructions on a non-transient processor-readable storage medium and/or computer-readable medium, which may be incorporated into a computer program product.
[0238] The foregoing description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit and scope of the invention. Therefore, the present invention is not intended to be limited to the embodiments presented herein, but is to be accorded the broadest scope consistent with the following claims and the principles and novel features disclosed herein.
[0239] Although several aspects and embodiments have been disclosed, other aspects and embodiments are also contemplated. The various aspects and embodiments presented herein are for illustrative purposes only and are not intended to be limiting, the true spirit and scope of the invention being indicated by the following claims.
Claims (26)
[0001] 1. Computer-implemented method, characterized in that it comprises: - identifying, by a computer, one or more keywords in the text of a resource received from a second computer through one or more networks; - for each respective keyword parsed from the text, determining, by the computer, a word difficulty score associated with the respective keyword; - storing, by the computer, in a keyword store, the one or more keywords parsed from the text and the respective word difficulty scores; - identifying, by the computer, in a set of one or more language skills, a specific language skill to exercise, based on a skill score in a student record designated for the specific language skill to be exercised; - selecting, by the computer, a learning activity of a learning activity type that exercises the identified language skill, in response to the identification of the language skill to be exercised; - generating, by the computer, one or more distractors of one or more distractor types using the one or more keywords stored in the keyword store, and storing them, by the computer, in a distractor store configured to store a plurality of distractors, including at least one distractor of a distractor type corresponding to the type of learning activity selected to exercise the language skill; - selecting, by the computer, from the distractor store, a set of one or more distractors, each distractor in the set being of a distractor type corresponding to the type of the learning activity; and - generating, by the computer, one or more graphical user interfaces configured to display the selected learning activity on the second computer, wherein the selected learning activity comprises at least one distractor from the selected set of one or more distractors.
[0002] 2. Method according to claim 1, characterized in that it further comprises receiving, by the computer, one or more inputs associated with the learning activity through the graphical user interface displayed on the second computer.
[0003] 3. Method according to claim 1, characterized in that it further comprises generating, by the computer, a class comprising a set of one or more learning activities.
[0004] 4. Method according to claim 3, characterized in that the sets of one or more distractors in each learning activity in the class are each derived from the same resource.
[0005] 5. Method according to claim 4, characterized in that it further comprises generating, by the computer, a learning unit comprising a set of one or more classes.
[0006] 6. Method according to claim 1, characterized in that the resource is selected from the group consisting of: one or more portions of a resource with text; an audio track; a video track; an audiovisual track; and an image.
[0007] 7. Method according to claim 6, characterized in that it further comprises selecting, by the computer, the resource based on the specific language skill to be exercised.
[0008] 8. Method according to claim 1, characterized in that the learning activity is selected from the group consisting of: a multiple choice question activity, a vocabulary matching activity, an activity focused on conversation, an activity focused on pronunciation, an activity focused on writing, an activity focused on grammar, an activity focused on listening, an activity focused on reading, an activity focused on spelling, a vocabulary word identification activity, a spoken audio information comprehension activity, a video information comprehension activity, and a reading comprehension activity.
[0009] 9.
Method according to claim 1, characterized in that the distractor type is selected from the group consisting of: a semantic distractor, a spelling distractor, a phonetic distractor and a syntactic distractor.
[0010] 10. Method according to claim 1, characterized in that it further comprises determining, by the computer, one or more student skill scores for one or more language skills, wherein the specific skill to be exercised is identified by the computer as a skill in which the student is comparatively weak.
[0011] 11. Method according to claim 1, characterized in that the computer selects the set of distractors to have a level of difficulty comparable to the student's competency level score.
[0012] 12. Method according to claim 1, characterized in that it further comprises selecting, by the computer, from a resource store and based on one or more student objectives, a resource with a text difficulty score comparable to the student's proficiency level score, wherein the selected resource is the resource used for the learning activity.
[0013] 13. Method according to claim 1, characterized in that it further comprises: - receiving, by the computer, the resource from a student user interface; and - assigning, by the computer, metadata identifying a topic of the resource.
[0014] 14. Method according to claim 13, characterized in that it further comprises determining, by the computer, the topic of the content based on keywords extracted from the text, when the resource is a document.
[0015] 15. Method according to claim 13, characterized in that it further comprises receiving, by the computer, a transcription file associated with the resource, when the resource is an audio track or an audiovisual track, wherein the transcription file is a document having text correlated to the resource.
[0016] 16. Method according to claim 13, characterized in that it further comprises, when the resource is an image: - receiving, by the computer, a keyword describing a feature in the image and a set of link coordinates around the feature in the image.
[0017] 17. Method according to claim 1, characterized in that it further comprises: - presenting, by the computer, a set of one or more resource options on a student user interface; and - retrieving, by the computer, from a resource store, the resource corresponding to the option chosen by the student.
[0018] 18. Method according to claim 1, characterized in that it further comprises storing, by the computer, in a user data store, a set of one or more student content interests received from a computing device of the student.
[0019] 19. Method according to claim 1, characterized in that it further comprises selecting, by the computer, from a resource store, the resource associated with metadata identifying content in the resource that corresponds to a content interest of the student.
[0020] 20. Method according to claim 1, characterized in that it further comprises updating, by the computer, the student's ability in a specific language skill based on the student's performance in the learning activity, wherein a user data store stores one or more student skills.
[0021] 21. Method according to claim 20, characterized in that it further comprises storing, by the computer, in the user data store, a set of one or more student learning objectives received from a student computing device.
[0022] 22.
Method according to claim 21, characterized in that it further comprises: - automatically notifying the student, by the computer, when a set of one or more skills meets one or more corresponding skill thresholds associated with a goal of the student.
[0023] 23. Method according to claim 22, characterized in that it further comprises generating, by the computer, a class comprising several learning activities according to a skill threshold associated with the goal.
[0024] 24. Method according to claim 23, characterized in that it further comprises generating, by the computer, a unit comprising a series of classes according to one or more skill thresholds of the goal.
[0025] 25. Method according to claim 20, characterized in that it further comprises adapting, by the computer, the activity difficulty of a subsequent learning activity to be comparable to an updated skill of the student.
[0026] 26. Method according to claim 20, characterized in that it further comprises updating, by the computer, a student's skill level stored in the user data store, based on one or more skills of the student.