US20020128818A1 - Method and system to answer a natural-language question - Google Patents

Method and system to answer a natural-language question

Info

Publication number
US20020128818A1
US20020128818A1 (application US10/060,120)
Authority
US
United States
Prior art keywords
question
user
phrase
database
formats
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/060,120
Inventor
Chi Ho
Peter Tong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
IP LEARN Inc
LINDNER ROBERT D JR
IpLearn LLC
Mindfabric Holdings LLC
Benhov GmbH LLC
Hanger Solutions LLC
Original Assignee
Individual
Priority claimed from US08/758,896 external-priority patent/US5836771A/en
Application filed by Individual filed Critical Individual
Priority to US10/060,120 priority Critical patent/US20020128818A1/en
Publication of US20020128818A1 publication Critical patent/US20020128818A1/en
Priority to US10/727,701 priority patent/US6865370B2/en
Assigned to HASTUR LIMITED LLC reassignment HASTUR LIMITED LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MINDFABRIC HOLDINGS, LLC
Assigned to IP LEARN, INC. reassignment IP LEARN, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: IPLEARN, LLC
Assigned to LINDNER, ROBERT D., JR. reassignment LINDNER, ROBERT D., JR. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MINDFABRIC, INC.
Assigned to MINDFABRIC HOLDINGS, LLC reassignment MINDFABRIC HOLDINGS, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ROBERT D. LINDNER, JR.
Assigned to PROFESSORQ, INC. reassignment PROFESSORQ, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: IP LEARN, INC.
Assigned to IPLEARN, LLC reassignment IPLEARN, LLC CORRECTIVE ASSIGNMENT TO CORRECT THE EXECUTION DATE OF THE ASSIGNMENT AS BEING 03/05/2000 PREVIOUSLY RECORDED ON REEL 013530 FRAME 0995. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF ASSIGNORS INTEREST. Assignors: TONG, PETER P.
Assigned to MINDFABRIC, INC. reassignment MINDFABRIC, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: PROFESSORQ, INC.
Assigned to IPLEARN, LLC reassignment IPLEARN, LLC CORRECTIVE ASSIGNMENT TO CORRECT THE CLERICAL ERROR IN THE SPELLING OF THE INVENTOR'S NAME, CHI FAI HO PREVIOUSLY RECORDED ON REEL 011806 FRAME 0487. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF ASSIGNORS INTEREST. Assignors: HO, CHI FAI
Assigned to HANGER SOLUTIONS, LLC reassignment HANGER SOLUTIONS, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INTELLECTUAL VENTURES ASSETS 161 LLC
Assigned to INTELLECTUAL VENTURES ASSETS 161 LLC reassignment INTELLECTUAL VENTURES ASSETS 161 LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BENHOV GMBH, LLC

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 7/00: Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B 7/02: Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G09B 7/04: Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student, characterised by modifying the teaching programme in response to a wrong answer, e.g. repeating the question, supplying a further explanation
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 7/00: Electrically-operated teaching apparatus or devices working with questions and answers
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10: TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S: TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 706/00: Data processing: artificial intelligence
    • Y10S 706/902: Application using ai with detail of the ai system
    • Y10S 706/927: Education or instruction

Definitions

  • the present invention relates generally to methods and systems to answer a question, and more particularly to methods and systems to accurately answer a natural-language question.
  • the present invention provides methods and systems that can quickly provide a handful of accurate responses to a natural-language question.
  • the responses can depend on additional information about the user and about the subject matter of the question so as to significantly improve on the relevancy of the responses.
  • the user is allowed to pick one or more of the responses to have an answer generated.
  • the answer to the question can be in a language different from the language of the question to provide more relevant answers.
  • One embodiment of the present invention includes a system with an input device, an answer generator and an output device.
  • the answer generator, having access to a database of phrases and question formats, identifies at least one phrase in the question to generate phrased questions. This identification process uses phrases in the database and at least one grammatical rule.
  • the identified phrase can then be linked to at least one category based on, for example, one semantic rule. Then the system provides a score to the categorized phrase. This score can depend on a piece of information about the user and/or about the subject matter of the question. In one embodiment, this piece of information is different from the fact that the user has asked the question.
  • the piece of information can be related to the user's response to an inquiry from the system.
  • the system can ask the user to specify the subject matter of the question. Assume that the user asks the following question: “In the eighteenth century, what did Indians typically eat?” The system can ask the user if the subject matter of the question is related to India or the aboriginal peoples of North America. Based on the user's response, the system can provide a more relevant response to the user.
  • the piece of information is related to an interest of the user. Again, if the user is interested in traveling, and not food, certain ambiguities in his question can be resolved. Based on the user's response to certain inquiries from the system, the accuracy of the answer can be enhanced.
  • the piece of information about the user is related to a question previously asked by the user. For example, if the user has been asking questions on sports, probably the word, ball, in his question is not related to ball bearings, which are mechanical parts.
  • the score of the categorized phrase can change. In another embodiment, based on information of the subject matter the question is in, the score of the categorized phrase can change.
  • the system can identify at least two question formats in the database based on the score. These question formats can again help the system resolve ambiguities in the question. For example, the question is, “How to play bridge?” Assume that the question is in the general subject area of card games. It is not clear if the user wants to find out basic rules on the card game bridge or to learn some more advanced techniques. Then, one question format can be on basic rules on bridge, and the other format can be on bridge techniques. The user is allowed to pick at least one of the question formats to have the corresponding answer generated.
  • the answer can be in a language different from the language of the question. This improves on the accuracy of the answers to the question. For example, if the user is interested in Japan, and if the user understands Japanese, based on the question format picked, a Japanese answer is identified to his English question. Such answers can provide more relevant information to the user.
  • FIG. 1 shows one embodiment of the invention.
  • FIG. 2 shows one embodiment of an answer generator of the invention.
  • FIG. 3 shows one set of steps implemented by one embodiment of an answer generator of the invention.
  • FIGS. 4 A-B show embodiments implementing the invention.
  • FIG. 5 shows examples of ways to regularize the question in the invention.
  • FIG. 6 shows one set of steps related to identifying phrases in the question of the invention.
  • FIG. 7 shows one set of steps related to identifying question structures in the invention.
  • FIG. 8 shows examples of factors affecting scores in the invention.
  • FIG. 9 shows one set of steps related to identifying question formats in the invention.
  • FIG. 10 shows one set of steps related to identifying an answer in the invention.
  • Same numerals in FIGS. 1 - 10 are assigned to similar elements in all the figures. Embodiments of the invention are discussed below with reference to FIGS. 1 - 10 . However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes as the invention extends beyond these limited embodiments.
  • FIG. 1 shows one embodiment of a system 50 of the present invention. It includes an input device 52 coupled to an answer generator 54 , which is coupled to an output device 56 .
  • FIG. 2 shows one embodiment of the answer generator 54 implementing a set 120 of steps shown in FIG. 3.
  • a user enters a question into the input device 52 , such as a keyboard, a mouse or a voice recognition system.
  • the question or a representation of the question can be transmitted by the input device to the answer generator 54 .
  • the answer generator 54 includes a number of elements.
  • the answer generator 54 can include a question regularizer 80 , a phrase identifier 82 , a question structure identifier 84 , a question format identifier 86 and an answer identifier 88 .
  • the question regularizer 80 regularizes (step 122 ) words in the question, such as by replacing words with their roots; the phrase identifier 82 identifies (step 124 ) phrases in the regularized question to generate phrased questions; the question structure identifier 84 generates (step 126 ) question structures from the phrased question; based on the question structures, the question format identifier 86 identifies (step 128 ) and retrieves one or more question formats, which the user is allowed to pick from; and then the answer identifier 88 identifies (step 132 ) and retrieves one or more answers for the question. Note that the answer identifier 88 can access the Internet or the Web for answers.
  • the generator 54 can also include a database 90 of relevant information to be accessed by different elements of the generator 54 .
  • the database 90 can be a relational database, an object database or other forms of database.
  • the output device 56 such as a monitor, a printer or a voice synthesizer, can present the answer to the user.
  • FIG. 4A shows one physical embodiment 150 implementing one embodiment of the invention, preferably in software and hardware.
  • the embodiment 150 includes a server computer 152 and a number of client computers, such as 154 , which can be a personal computer.
  • Each client computer communicates to the server computer 152 through a dedicated communication link, or a computer network 156 .
  • the link can be the Internet, intranet or other types of private-public networks.
  • FIG. 4B shows one embodiment of a client computer 154 . It typically includes a bus 159 connecting a number of components, such as a processing unit 160 , a main memory 162 , an I/O controller 164 , a peripheral controller 166 , a graphics adapter 168 , a circuit board 180 and a network interface adapter 170 .
  • the I/O controller 164 is connected to components, such as a hard disk drive 172 and a floppy disk drive 174 .
  • the peripheral controller 166 can be connected to one or more peripheral components, such as a keyboard 176 and a mouse 182 .
  • the graphics adapter 168 can be connected to a monitor 178 .
  • the circuit board 180 can be coupled to audio signals 181 ; and the network interface adapter 170 can be connected to a network 120 , which can be the Internet, an intranet, the Web or other forms of networks.
  • the processing unit 160 can be an application specific chip.
  • the input device 52 and the output device 56 may be in a client computer; and the answer generator 54 may reside in a server computer.
  • the input device 52 , the output device 56 , the answer generator 54 other than the database 90 are in a client computer; and the database 90 is in a server computer.
  • the database 90 can reside in a storage medium in a client computer, or with part of it in the client computer and another part in the server computer.
  • the system 50 is in a client computer.
  • the input device 52 and the output device 56 are in a client computer; the answer generator 54 other than the database 90 is in a middleware apparatus, such as a Web server; and the database 90 with its management system are in a back-end server, which can be a database server. Note that different elements of the answer generator 54 can also reside in different components.
  • the question can be on a subject, which can be broad or narrow.
  • the subject can cover mathematics or history, or it can cover the JAVA programming language.
  • the subject covers information in a car, such as a Toyota Camry, and the user wants to understand this merchandise before buying it.
  • the subject covers the real estate market in a certain geographical area, and again the user wants to understand the market before buying a house.
  • a question can be defined as an inquiry demanding an answer; and an answer can be defined as a statement satisfying the inquiry.
  • the question can be a natural-language question, which is a question used in our everyday language.
  • a natural-language question can be in English or other languages, such as French. Examples of natural-language questions are:
  • one grammatical rule is that a question is made of phrases; another grammatical rule is that every phrase is made of one or more words.
  • Such rules can define a grammatical structure.
  • a question formed under such rules is grammatically context-free, and the question is in a context-free grammatical structure.
  • FIG. 5 shows examples of ways to regularize the question.
  • the question regularizer 80 regularizes words in the question, for example, by replacing certain words in the question with their roots.
  • One objective of the regularizer is to reduce the size of the database 90 and the amount of computation required to analyze the question.
  • the regularizer 80 identifies every word in the question. Then it replaces words with their roots if they are not already in their root forms. For example, the regularizer changes verbs (step 202 ) of different forms in the question into their present tense, and nouns (step 204 ) into singular.
  • Every word in the question can be hashed into a hash value.
  • each character is represented by eight bits, such as by its corresponding eight-bit ASCII codes.
  • the hashing function is performed by first pairing characters together in every word of the question. If a word has an odd number of characters, then the last character of the word is paired with zero. Each character pair becomes a sixteen-bit number. Every word could have a number of sixteen-bit numbers. The character does not have to be represented by the eight-bit ASCII codes. In another embodiment, with each character represented by its sixteen-bit unicode, the characters are not paired. Again every word could have a number of sixteen-bit numbers.
  • each hash value can be used to represent two different words.
  • One word can be in one language and the other in another language, with both languages represented by unicodes.
  • a 16 Mbit memory could be used to hold different combinations of twenty-four bit hash values to represent different words. This approach should be applicable to most natural languages.
  • commonly-used words have been previously hashed and stored in the database 90 .
  • the hash values of words in the question are compared to hash values in the tables and may be replaced by root-form hash values.
  • the hash values of verbs of different forms in the question are mapped to and replaced by the hash values of their present tenses, and similarly, the hash values of plural nouns are mapped to and replaced by their corresponding singular form hash values.
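  • As an illustration of the word-hashing scheme described above, the sketch below pairs eight-bit character codes into sixteen-bit numbers and folds them into a twenty-four-bit value (about sixteen million possible hashes, consistent with the 16 M figure mentioned above). How the sixteen-bit pairs are combined, and the contents of the root-form lookup table, are not spelled out in the text and are assumptions here.

```python
# Minimal sketch of the word-hashing and root-replacement steps.
# Assumption: the 16-bit character pairs are summed and truncated to
# 24 bits; the root-form table is a plain dict of hash values.

def hash_word(word: str) -> int:
    """Pair 8-bit character codes into 16-bit numbers, then fold them
    into a single 24-bit hash value."""
    codes = [ord(c) & 0xFF for c in word]
    if len(codes) % 2:              # odd number of characters:
        codes.append(0)             # pair the last character with zero
    value = 0
    for hi, lo in zip(codes[0::2], codes[1::2]):
        value += (hi << 8) | lo     # one 16-bit number per character pair
        value &= 0xFFFFFF           # keep 24 bits (throw away the carry)
    return value

# Hypothetical table mapping hash values of inflected forms to the hash
# values of their roots (e.g. "went" -> "go", "cities" -> "city").
ROOT_FORMS = {
    hash_word("went"): hash_word("go"),
    hash_word("cities"): hash_word("city"),
}

def regularize(question: str) -> list[int]:
    """Hash every word and replace inflected forms with their roots."""
    hashes = [hash_word(w) for w in question.lower().split()]
    return [ROOT_FORMS.get(h, h) for h in hashes]
```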
  • the phrase identifier 82 can identify phrases in the question.
  • FIG. 6 shows one set 124 of steps related to identifying phrases. Note that the process of identifying does not have to include the process of understanding, determining its presence in the database, or extracting.
  • the identifier identifies phrases from the beginning or the first word (step 252 ) of the question. It identifies the first word in the question, and then determines if the first word is in the database 90 . If it is, it will be classified as a phrase of the question. Then, the identifier identifies the first two words. If there is a corresponding term with such two words in the database 90 , then the two words are classified as another phrase of the question.
  • the phrase determination process can again be done through a hashing function.
  • One approach is to add the hash values of each of the words in a phrase. If the sum has more than 24 bits, throw away the carry. The remaining 24 bits would be the hash value of the phrase.
  • the two words in the question can be hashed into a hash value, which is compared to hash values in the database 90 . If such a hash value exists in the database 90 , then the two words are classified as a phrase. In one embodiment, this process continues on up to the first twenty words in the question.
  • if the hash value of a word combination is not in the database 90 , the identifier stops adding another word to identify phrases in the question.
  • a hash value that exists in the database 90 does not mean that its corresponding word or words can have independent meaning.
  • the existence of a hash value in the database 90 can imply that the phrase identifier 82 should continue on adding words to look for phrases.
  • the identifier 82 should continue on adding words to identify the longest matching phrase, which can be a phrase with six words.
  • the term, “with respect” may not be a phrase, or does not have independent meaning. But the hash value of such a term can be in the database 90 .
  • the identifier adds the next word in the question to determine if the three-word combination exists in the database 90 . If the third word is the word “to”, then the three-word combination is a preposition with independent meaning, and can have a hash value in the database 90 .
  • after identifying all of the phrases from the first word, the identifier starts identifying (step 254 ) phrases from the second word of the question, and performs a similar identification process as it has done from the first word. One difference is that the starting point of the analysis is the second word.
  • the question is, “Are ball bearings round?”
  • the identifier starts from the word, “are”, and stops after the word, “ball”, because there is no hash value for the term, “are ball”. Then, the identifier starts from the word, “ball”, and finds a hash value. This suggests that the identifier should continue on, and it finds the hash value for the term, “ball bearings”.
  • the identifier can continue on identifying phrases from the remaining words (step 256 ) in the question.
  • the starting point of the analysis moves from one word to the next, down the question, until all of the words in the sentence have been exhausted.
  • the identifier should have identified all of the phrases in the question with corresponding phrases in the database 90 .
  • the identifier then removes (step 258 ) words in the question that are not in any identified phrases. For example, there is a word, “xyz”, in the sentence, which is not found in any of the identified phrases. That word will not be considered in subsequent analysis, or will be ignored. In essence, that word is removed from the question.
  • the phrase identifier 82 From the identified phrases, the phrase identifier 82 generates (step 260 ) a number of phrased questions. Each phrased question is a combination of one or more identified phrases that match the question. All of the phrased questions cover different combinations of the identified phrases that match the question.
  • the question is “Cash cow?”
  • the first phrased question has two phrases, each with one word.
  • the second phrased question has only one phrase, with two words.
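  • As a rough sketch of the phrase-identification steps above (steps 252 - 260 ), the following checks candidate phrases against a plain set of known phrases rather than a table of hash values; the contents of PHRASE_DB are illustrative. It reproduces the two phrased questions for “Cash cow?”.

```python
# Minimal sketch of phrase identification and phrased-question
# generation, assuming the phrase database is a simple set of strings.

PHRASE_DB = {"cash", "cow", "cash cow", "ball", "ball bearing"}

def phrased_questions(words, start=0):
    """Return every way of segmenting the question into phrases found
    in the database; words not in any phrase are dropped (step 258)."""
    if start == len(words):
        return [[]]
    results = []
    # Try every phrase that begins at this word, longer matches included.
    for end in range(start + 1, len(words) + 1):
        candidate = " ".join(words[start:end])
        if candidate in PHRASE_DB:
            for rest in phrased_questions(words, end):
                results.append([candidate] + rest)
    if not results:
        # No phrase starts here: ignore the word and move on.
        results = phrased_questions(words, start + 1)
    return results

print(phrased_questions("cash cow".split()))
# [['cash', 'cow'], ['cash cow']]  -- the two phrased questions above
```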
  • the question structure identifier 84 identifies one or more question structures.
  • FIG. 7 shows one set 126 of steps related to identifying question structures.
  • phrases in the database 90 are categorized (step 302 ).
  • a phrase can belong to one or more categories.
  • a category can be a group of phrases, with one or more common characteristics, which can be related to a subject matter. For example, there are two categories and they are Congress and finance. Then, one semantic rule can be that the phrase “bill” belongs to both categories, while another semantic rule can be that the phrase “Capital Asset Pricing Model” belongs to the category of finance.
  • each of its phrases can be linked (step 302 ) by a linker to one or more categories.
  • for example, for the phrased question, “cash” “cow”, the phrase “cash” can be linked to the categories of finance, banking and payment; and the phrase “cow” can be linked to the categories of animals and dairy products.
  • for the phrased question “cash cow”, the phrase “cash cow” can be linked to the categories of finance and banking.
  • Each category can be given a score.
  • the score denotes the importance of the category, or the relevancy of the category to the question.
  • the score can depend on the meaning of the category. For example, the category of Congress is given 10 points, and the category of finance is given 30 points because more people ask about finance than Congress.
  • the scores can depend on the subject the user is asking. For example, if the question is about travel, the category on city can be given 20 points, and the category on animal 5 points. Scoring categories can be done dynamically. For example, after the question has been determined to be in the area of finance, the category on insect can be dynamically given 0.1 point or even 0 points, while the category on investment can be given 100 points. In this example, one semantic rule can be that in the finance area, the score of the category of investment is higher than that of the category of insect.
  • the scores given to interrogative pronouns can depend on the type of questions asked. In the travel domain or in questions on traveling, the categories for “how”, “where”, “what” and “when” can be given higher scores than the category for “who”.
  • Each phrase in a phrased question can belong to more than one category. With the categories having scores, each phrase belonging to multiple categories can have more than one score. In one embodiment, the category with the highest score is selected to be the category of that phrase, or to be the score of that categorized phrase (step 304 ).
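  • The linking and scoring just described (steps 302 - 304 ) might look like the following sketch; the category names and point values are illustrative only and are not taken from the patent.

```python
# Minimal sketch of linking phrases to categories and picking the
# highest-scoring category for each phrase.

CATEGORIES = {              # phrase -> categories it can belong to
    "cash":     ["finance", "banking", "payment"],
    "cow":      ["animals", "dairy products"],
    "cash cow": ["finance", "banking"],
}
SCORES = {"finance": 30, "banking": 25, "payment": 20,
          "animals": 5, "dairy products": 5}

def categorize(phrase):
    """Return the highest-scoring category of a phrase and its score."""
    candidates = CATEGORIES.get(phrase, [])
    best = max(candidates, key=SCORES.get, default=None)
    return best, SCORES.get(best, 0)

print(categorize("cash"))      # ('finance', 30)
print(categorize("cash cow"))  # ('finance', 30)
```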
  • the score of at least one phrase depends on information about the user. This information can be specific to that user. FIG. 8 shows examples of factors affecting the scores. In one embodiment, the information about the user is more than the fact that the user just asked the system 50 a question.
  • the information can be related to the question's subject matter 350 , identified by the user.
  • the system 50 can ask the user the subject of his question.
  • the user has to select one subject of interest. All of his questions would be considered to be related to that subject.
  • the information can be related to the user's previous question 352 .
  • the user has been asking questions related to health, and his question has the word, virus, in it.
  • the system 50 would not consider the question to be related to computer virus, but would focus on the type of virus affecting our health.
  • the input and output devices are in a client computer used by the user to ask questions.
  • the answer generator is in a server computer.
  • the Web browser in the client computer stores the question asked by the user in cookies.
  • the cookies are sent back to the server computer.
  • the responses generated by the server computer depend on information in the cookies, such as the one or more questions previously asked by the user.
  • previous questions can be stored in, for example, HTML forms, which support hidden variables, or HTML scripts written in JavaScript, which supports variables.
  • the question format identifier can generate a set of instructions to represent question formats to be sent to the user. The instructions can be written in HTML form.
  • the input device remembers the one or more questions asked by the user and the question formats selected by the user. After a user has selected a question format or asked a new question, the input device can send it to the answer generator.
  • the format sent can also include one or more previous questions asked by the user during the same session. Those questions are stored in the hidden variables of the question format.
  • all of the user's inputs can be stored. Whenever anything is sent to the answer generator, all of the user's previous inputs can be sent to the generator in the hidden variables.
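  • A sketch of carrying the user's earlier inputs back to the answer generator in hidden form variables, as described above: the snippet renders the selectable question formats together with hidden fields holding every previous question. The field names and the /answer endpoint are assumptions.

```python
# Minimal sketch of a question-format form that carries the user's
# previous questions back to the server in hidden variables.
import html

def question_form(question_formats, previous_questions):
    """Render selectable question formats plus hidden inputs holding
    every question the user has asked so far in this session."""
    hidden = "\n".join(
        f'<input type="hidden" name="prev_q" value="{html.escape(q)}">'
        for q in previous_questions)
    options = "\n".join(
        f'<button name="pick" value="{i}">{html.escape(fmt)}</button>'
        for i, fmt in enumerate(question_formats))
    return f'<form action="/answer" method="post">\n{hidden}\n{options}\n</form>'
```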
  • the information can be related to a profile of the user 354 .
  • the user is asked to fill in a questionnaire about himself before he starts asking questions.
  • His profile can include his language skill, 356 , such as the languages he prefers his answer to be in; his areas of interest, 358 , such as the types of songs he likes; and his ethnic background, 360 .
  • the user is an Egyptian.
  • categories related to Egypt can be given higher scores.
  • Such information would help the system 50 tailor more accurate and relevant responses to him.
  • a user enters his identifier when he starts to use the system. Then, next time when he uses the system 50 , based on his identification, the system 50 can retrieve his profile. His profile can be updated based on his usage of the system 50 , for example, based on questions he just asked. Again, his identification can be stored in cookies.
  • certain information 357 in the user's profile does not have to be directly entered by the user.
  • the invention can be implemented in a client-server environment, with the client having an IP address. At least a portion of the answer generator resides in a server, and at least a portion of the input device in a client. Certain information in the user's profile may depend on the IP address 359 .
  • after the client has established, for example, an HTTP session with the server, the client's IP address would be passed to the server. Based on the source IP address, the server can go to a domain name database, such as those hosted by Network Solution Incorporated, to access the domain name of the client.
  • the domain name gives a number of pieces of information, such as the point of presence of the Internet service provider used by the client.
  • the server would then be aware of, for example, the approximate ZIP code of the client, or the approximate geographical location of the client. As an example, such information can be used in the following way. If the user is approximately located in San Francisco, and he is asking for hotel information in Boston, the server can assume that the user intends to travel to Boston. Based on such an assumption, the server can send to the client answers related to car rental information and flight ticket information when responding to his question for hotel information.
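  • A sketch of this IP-address idea: the reverse-DNS lookup below is standard library functionality, but the mapping from a domain suffix to an approximate city is purely hypothetical.

```python
# Minimal sketch of inferring an approximate location from the client's
# IP address via its domain name.
import socket

CITY_BY_DOMAIN = {                       # hypothetical mapping
    "sfo.example-isp.net": "San Francisco",
    "bos.example-isp.net": "Boston",
}

def approximate_city(client_ip: str):
    try:
        domain = socket.gethostbyaddr(client_ip)[0]   # reverse DNS
    except OSError:
        return None
    for suffix, city in CITY_BY_DOMAIN.items():
        if domain.endswith(suffix):
            return city
    return None

# If the user appears to be near San Francisco but asks about Boston
# hotels, the server might also return flight and car-rental answers.
```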
  • the system 50 can ask the user to refine the question.
  • the question includes the word “current”.
  • the system 50 can ask the user if his question is related to the subject of electricity or time or other physical phenomena.
  • information related to the user's question can be acquired to improve on the responses to the question.
  • the system can ask the user more than one question so as to refine the answer. For example, after the user has responded that the word “current” is related to other physical phenomena, the system 50 can ask the user if the question is related to wind or ocean or other physical phenomena so as to better understand the question.
  • the score of at least one phrase depends on the subject matter of the question.
  • Information on the subject matter of the question, or the domain knowledge does not have to be from the user.
  • the system 50 can be tailored for a specific subject, such as information related to a specific company, including that company's products. That system can be designed to answer questions related to the company, with many phrases, question formats and answer formats focused on the company.
  • the company is a hardware company, and the user asks the question, “Do you have nails?”
  • the system 50 would not interpret the question to refer to nails as in finger nails.
  • the system 50 assumes the question to be related to nails, as in screws and nails, and responds accordingly.
  • the system is tailored to more than one subject matter, and can be switched from one subject matter to another using the same database. The switch can be done, for example, by adjusting the scores of different categories based on their relevancy to the subject matter.
  • scores on categories and, in turn, phrases can be changed.
  • the change can be dynamic. In other words, scores are modified as the system gains more information. This can be done, for example, by applying multipliers to scores of categories to be changed.
  • scores are changed according to the information related to the user and/or the subject matter of the question.
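  • Dynamically re-scoring categories once the subject matter is known could be as simple as applying multipliers, as in the sketch below; the multiplier values are chosen to reproduce the insect/investment example above.

```python
# Minimal sketch of adjusting category scores with subject-matter
# multipliers.
def apply_subject(scores, subject_multipliers):
    """Return category scores adjusted for the current subject matter."""
    return {cat: score * subject_multipliers.get(cat, 1.0)
            for cat, score in scores.items()}

base = {"investment": 20, "insect": 20}
finance = apply_subject(base, {"investment": 5.0, "insect": 0.005})
# {'investment': 100.0, 'insect': 0.1} -- matches the finance example
```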
  • due to changes in the scores of the categories, a phrase previously linked to a category is modified to be linked to another category.
  • after categorizing and scoring each phrase in a phrased question, the structure identifier generates (step 306 ) a number of question structures.
  • Each phrase in the question can be linked to a category, and each phrased question can be represented by the corresponding categories.
  • the categorized representation of each phrased question can be known as a question structure, or a question structure can be a list of categories.
  • the question structure of the phrased question, “cash” “cow”, can be “finance” “animals”.
  • the question structure of the phrased question “cash cow” can be “finance”.
  • the question, “Cash cow?” is linked to two question structures.
  • the number of question structures generated can be reduced, which could increase the speed to generate a response, and could also reduce ambiguity in the question.
  • one method is to reduce the number of categories.
  • One way is to form categories of categories, or a hierarchy of categories. Each category can be given a name. With each question structure being a list of categories, each question structure can be represented by a list of category names.
  • the phrase identifier 82 can operate on the list of category names to determine category-of-categories.
  • the category-of-category approach can be explained by the following example.
  • the original question includes the phrase, “San Jose of California”. “San Jose” is under the category of city, “of” is under the category of “preposition”, and “California” is under the category of State.
  • One semantic rule may be that there is a “City of State” category to replace the list of categories “city” “preposition” “State”.
  • This type of category-of-categories analysis can be extended into a category hierarchy.
  • the “city”, “preposition” and “State” can be considered as first level categories, while the “city of State” category can be considered as a second level category.
  • the method to identify higher level categories can be the same as the method to identify second level categories based on first level categories, as long as each category is given a name.
  • a third level category can replace a list of second level categories.
  • one semantic rule is that a higher level category is assigned a higher score than its lower level categories.
  • the “city of State” category has a higher score than the “city”, “preposition” and “State” categories.
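  • The category-of-categories replacement could be expressed as pattern rules over a list of category names, as sketched below; the rule table and the score of 50 given to the higher-level category are assumptions used for illustration.

```python
# Minimal sketch of collapsing a run of first-level categories into a
# higher-level category with a higher score.
HIGHER_LEVEL_RULES = {
    ("city", "preposition", "state"): ("city of state", 50),
}

def collapse(categories):
    """Replace any run of categories matching a rule with the
    corresponding higher-level category name."""
    cats, out, i = list(categories), [], 0
    while i < len(cats):
        for pattern, (name, _score) in HIGHER_LEVEL_RULES.items():
            if tuple(cats[i:i + len(pattern)]) == pattern:
                out.append(name)
                i += len(pattern)
                break
        else:
            out.append(cats[i])
            i += 1
    return out

print(collapse(["city", "preposition", "state"]))  # ['city of state']
```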
  • the question structure identifier 84 can select a number of the generated question structures.
  • the structure identifier provides (step 308 ) a score to each question structure by summing the scores of all of its categories.
  • the question structure identifier 84 selects (step 310 ), for example, the question structure with the highest score to be the question structure representing the question. In one example, the identifier selects the structures with the top five highest scores to be the structures representing the question.
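  • Scoring question structures by summing their category scores and keeping the top few (steps 308 - 310 ) might look like this sketch, reusing the illustrative scores from earlier.

```python
# Minimal sketch of scoring question structures and selecting the best.
def top_structures(structures, category_scores, keep=5):
    """Score each structure by summing its category scores and return
    the highest-scoring structures."""
    scored = [(sum(category_scores.get(c, 0) for c in s), s)
              for s in structures]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for _score, s in scored[:keep]]

structures = [["finance", "animals"], ["finance"]]
print(top_structures(structures, {"finance": 30, "animals": 5}, keep=1))
# [['finance', 'animals']]
```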
  • the question format identifier 86 identifies one or more question formats in the database 90 .
  • FIG. 9 shows one set of steps related to identifying question formats.
  • Each question format can be a pre-defined question with one or more phrases and one or more categories.
  • the following can be a question format:
  • Each category in a question format has a number of corresponding phrases.
  • the corresponding phrases of the category “major city” can include all of the cities in the United States with population exceeding one million people.
  • the question formats can be pre-defined and stored in the database 90 .
  • One way to generate them can be based on commonly-asked questions. For example, if more than twenty people ask the same or substantially similar question when they use the system 50 , a system administrator can generate a question format for that question structure. To illustrate, more than twenty people asked the following questions, or a variation of the following questions:
  • a question structure can have one or more categories.
  • the question format identifier identifies every question format that has all of the categories in that question structure (step 402 ). There can be situations when a question structure has a number of categories, and no question format in the database 90 has all of the categories.
  • the question format identifier identifies question formats that have at least one category in the question structure.
  • the question format identifier 86 can select (step 404 ) one or more of them.
  • One selection criterion is based on scores of the question formats.
  • Each question format includes one or more categories, and each category has a score. The sum of all of the scores of the categories in a question format gives the question format a score. In one embodiment, the question format with the highest score is selected. In another embodiment, the five question formats with the highest scores are selected.
  • a category in a question format can have a default value.
  • Each category typically has many phrases.
  • one of the phrases is selected to be the default phrase (step 406 ) of the category in the question format. That phrase can be the corresponding phrase in the original question leading to the selection of the category and the question format.
  • the question is, “What is the temperature in San Francisco?”
  • the question format selected is, “What is the temperature in ‘city’?”
  • the question format becomes, “What is the temperature in ‘San Francisco’?” In other words, San Francisco has been chosen to be the default city.
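  • A sketch of question-format selection and default-phrase filling (steps 402 - 406 ); the “{city}” placeholder convention for category slots is an assumption of this sketch, not the patent's notation.

```python
# Minimal sketch of selecting question formats by score and filling in
# the default phrase taken from the original question.
def select_formats(formats, category_scores, keep=5):
    """Keep the question formats whose categories score highest."""
    def score(fmt):
        return sum(category_scores.get(c, 0) for c in fmt["categories"])
    return sorted(formats, key=score, reverse=True)[:keep]

def fill_defaults(fmt, phrases_from_question):
    """Use the phrase from the original question as the default phrase
    for its category, e.g. 'city' -> 'San Francisco'."""
    slots = {c: phrases_from_question.get(c, c) for c in fmt["categories"]}
    return fmt["text"].format(**slots)

fmt = {"text": "What is the temperature in {city}?", "categories": ["city"]}
print(fill_defaults(fmt, {"city": "San Francisco"}))
# What is the temperature in San Francisco?
```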
  • the system 50 allows (step 130 ) the user to pick one or more of the selected question formats. This can be done by the question format identifier generating a number of instructions representing the selected question formats, and sending the instructions to the output device 56 .
  • the browser in the output device 56 , based on the instructions, displays the selected question formats to allow the user to pick.
  • the output device 56 can show the user all of the selected question formats. Next to each of the selected question formats there can be an enter icon. If the user clicks the enter icon, that question format would be picked as the selected one.
  • the user can also choose any one of the phrases within each category of a question format. For example, the question format picked is, “What is the temperature in ‘San Francisco’?” And the user decides to find out the temperature in Los Angeles.
  • the user can click the phrase “San Francisco”, then a list of cities shows up on the output device 56 . The user can scroll down the list to pick Los Angeles. Then, by clicking the enter icon next to the question, the user would have selected the question format, “What is the temperature in ‘Los Angeles’?”
  • the answer identifier 88 identifies one or more answers for the user.
  • FIG. 10 shows one set 132 of steps related to identifying an answer.
  • each question format has its corresponding answer format.
  • the answer identifier 88 retrieves (step 452 ) one or more answer formats for each question format.
  • the answer format can be an answer or can be an address of an answer. In situations where the answer format is an address of an answer, the answer identifier can also access (step 454 ) the answer based on the answer format.
  • the answer format of a question format is the URL of a Web page. If the user picks that question format, the answer identifier 88 would retrieve the corresponding answer format, and fetch the one or more Web pages with the retrieved URL.
  • a set of instructions is generated to search for information, for example, from different databases or other sources, such as the Web.
  • the instructions can be queries written, for example, in SQL, or HTTP.
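  • Retrieving an answer from an answer format (steps 452 - 454 ) might look like the sketch below, where a format is either literal answer text or the URL of a page to fetch; the table entries and the weather URL are invented for illustration.

```python
# Minimal sketch of turning an answer format into an answer: either
# return the stored text or fetch the Web page at the stored URL.
from urllib.request import urlopen

ANSWER_FORMATS = {
    "What is the temperature in {city}?":
        "https://weather.example.com/{city}",           # hypothetical URL
    "How to play bridge (basic rules)?":
        "Bridge is a trick-taking card game for four players...",
}

def answer_for(question_format, **slots):
    fmt = ANSWER_FORMATS[question_format]
    if fmt.startswith("http"):
        url = fmt.format(**slots)
        with urlopen(url) as page:                      # fetch the page
            return page.read().decode("utf-8", errors="replace")
    return fmt                                          # text is the answer
```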
  • the output device 56 also shows the answer to the question format with the highest score.
  • the phrase identifier 82 can identify a few phrases, and then the question structure identifier 84 can generate a few question structures. As long as there are, for example, ten question structures with scores more than a threshold value, the system 50 would stop looking for additional question structures. If there are only nine such structures, the phrase identifier 82 would identify some more phrases, and the structure identifier 84 would generate more question structures. In other words, the system 50 does not have to look for all phrases in the question before question structures are identified.
  • the threshold value can be set by experience. For example, from past usage, maximum scores of question structures in certain subject areas, such as the traveling domain, are typically less than eighty. Then, for those types of questions, the question structure identifier 84 could set the threshold value to be seventy five. This approach would speed up the time required to respond to the question.
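  • The early-stopping behaviour described above might be sketched as follows; the threshold of 75 and the count of 10 come directly from the examples in the text.

```python
# Minimal sketch of stopping once enough question structures beat the
# threshold, so the system need not analyse every phrase first.
def enough_structures(structure_stream, threshold=75, needed=10):
    """Consume (score, structure) pairs lazily and stop as soon as
    `needed` of them score above `threshold`."""
    kept = []
    for score, structure in structure_stream:
        if score > threshold:
            kept.append(structure)
            if len(kept) == needed:
                break
    return kept
```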
  • the question formats can be grouped together based on common characteristics. For example, all formats related to San Francisco are grouped together. Then, based on information about the subject matter and/or the user, such as the user asking questions about San Francisco, those formats would be selected to have one or more of them identified by the question format identifier 86 . In another embodiment, if the question is related to San Francisco, the scores of formats related to San Francisco would be multiplied by a factor of, for example, five. This could increase the chance of finding more relevant question formats for the user, and, in turn, a more relevant answer to the question.
  • the database can include information of different languages.
  • the user can ask the question in English.
  • the subsequent analysis can be in English, with, for example, the question formats also in English.
  • each selected question format is transformed into instructions, with phrases or categories in the format translated into their equivalent terms in other languages. At least one of those categories can be selected in view of a phrase in the question.
  • the translation can be done, for example, through unicodes. As an illustration, the English name of a person is translated into that person's name as known in his native language.
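  • A sketch of translating the phrases of a selected question format before searching, as described above; the bilingual table is illustrative, and a real system would need a far larger dictionary (Unicode strings throughout).

```python
# Minimal sketch of swapping format phrases for their equivalents in
# another language before the answer search.
PHRASE_TRANSLATIONS = {
    ("Mount Fuji", "ja"): "富士山",
    ("temperature", "ja"): "気温",
}

def translate_phrases(phrases, language):
    return [PHRASE_TRANSLATIONS.get((p, language), p) for p in phrases]

print(translate_phrases(["Mount Fuji", "temperature"], "ja"))
# ['富士山', '気温']
```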
  • the instructions can be used to search for or retrieve one or more answers to the question.
  • in a different embodiment, the system administrator can define in advance certain answer formats to be in French, or to be the URL of a French Web site for the answer. French information would then be retrieved to be the answers.
  • the regularizer 80 serves the function of a translator by translating the question into the other language. From that point onwards, the analysis will be in the other language. For example, the question is translated into German. Subsequently, phrases, structures, formats and answers would all be in German.
  • One embodiment includes a computer readable media containing computer program code.
  • the code when executed by a computer causes the computer to perform at least some of the steps of the present invention, such as some of those defined in FIG. 3.
  • a signal is sent to a computer causing the computer to perform at least some of the steps of the present invention, such as some of those defined in FIG. 3.
  • This signal can include the question, or a representation of the question, asked by the user.
  • the computer can include or can gain access to the database 90 .
  • steps shown in FIG. 3 or the answer generator shown in FIG. 2 can be implemented in hardware or in software or in firmware.
  • the implementation process should be obvious to those skilled in the art.

Abstract

Providing methods and systems to quickly and accurately respond to a natural-language question. The responses to the question can depend on additional information about the user asking the question and about the subject matter of the question. For example, if the system knows that the user understands French, it can supply French answers to the user. Such additional information can improve the relevancy of the responses to the question. More than one response can be provided to the user to allow the user to pick the more appropriate one. One embodiment uses a computer with a database having many phrases and question formats. The computer identifies phrases in the question based on at least one grammatical rule and phrases in the database. Then the computer links the phrases to categories, and scores them, based on at least one semantic rule, the subject matter of the question, and information about the user, such as previous questions asked by the user. The computer then selects at least two question formats based on at least the scores. After the question formats are selected, the system allows the user to pick at least one of the question formats so as to have an answer to the question generated.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • The present invention is a continuation-in-part of co-pending U.S. application entitled, “Learning Method and System Based on Questioning III”, filed on Jul. 2, 1999, invented by Chi Fai Ho and Peter Tong, and having a Ser. No. of 09/347,184, which is hereby incorporated by reference into this application.[0001]
  • BACKGROUND OF THE INVENTION
  • The present invention relates generally to methods and systems to answer a question, and more particularly to methods and systems to accurately answer a natural-language question. [0002]
  • Numerous search engines in the market have provided us with an unprecedented amount of freely-available information. All we have to do is to type in our questions, and we will be inundated with information. For example, there is a search engine that regularly gives us tens of thousands of Web sites for a single question. It would take days to go through every single site to find our answer, especially if our network connections are through relatively low-speed modems. We do not want thousands of answers to our questions. All we want is a handful of meaningful ones. [0003]
  • Another challenge faced by users of many search engines is having to search by key words. We have to extract key words from our questions, and then use them to ask our questions. We might also use enhanced features provided by search engines, such as + or − delimiters before the key words, to indicate our preferences. Unfortunately, this is unnatural. How often do we ask questions using key words? The better way is to ask in a natural language. [0004]
  • There are natural-language search engines. Some of them also provide a limited number of responses. However, their responses are inaccurate, and typically do not provide satisfactory answers to our questions. Their answers are not tailored to our needs. [0005]
  • Providing accurate responses to natural-language questions is a very difficult problem, especially when our questions are not definite. For example, if you ask the question, “Do you like Turkey?”, it is not clear whether your question is about the country Turkey or the animal turkey. Added to this challenge is the need to get answers quickly. Time is very valuable, and we prefer not to wait a long time to get our answers. [0006]
  • Further complicating the problem is the need to get information from documents written in different languages. For example, if we want to learn about climbing Mount Fuji in Japan, probably most of the information is in Japanese. Many search engines in the United States only search for information in English, and ignore information in all other languages. The reason may be that translation errors would lead to even less accurate answers. [0007]
  • It should be apparent from the foregoing that there is still a need for a natural-language question-answering system that can accurately and quickly answer our questions, without providing us with thousands of irrelevant choices. Furthermore, it is desirable for the system to provide us with information from different languages. [0008]
  • SUMMARY OF THE INVENTION
  • The present invention provides methods and systems that can quickly provide a handful of accurate responses to a natural-language question. The responses can depend on additional information about the user and about the subject matter of the question so as to significantly improve on the relevancy of the responses. The user is allowed to pick one or more of the responses to have an answer generated. Furthermore, the answer to the question can be in a language different from the language of the question to provide more relevant answers. [0009]
  • One embodiment of the present invention includes a system with an input device, an answer generator and an output device. The answer generator, having access to a database of phrases and question formats, identifies at least one phrase in the question to generate phrased questions. This identification process uses phrases in the database and at least one grammatical rule. [0010]
  • The identified phrase can then be linked to at least one category based on, for example, one semantic rule. Then the system provides a score to the categorized phrase. This score can depend on a piece of information about the user and/or about the subject matter of the question. In one embodiment, this piece of information is different from the fact that the user has asked the question. [0011]
  • The piece of information can be related to the user's response to an inquiry from the system. For example, the system can ask the user to specify the subject matter of the question. Assume that the user asks the following question: “In the eighteenth century, what did Indians typically eat?” The system can ask the user if the subject matter of the question is related to India or the aboriginal peoples of North America. Based on the user's response, the system can provide a more relevant response to the user. [0012]
  • In another example, the piece of information is related to an interest of the user. Again, if the user is interested in traveling, and not food, certain ambiguities in his question can be resolved. Based on the user's response to certain inquiries from the system, the accuracy of the answer can be enhanced. [0013]
  • In another embodiment, the piece of information about the user is related to a question previously asked by the user. For example, if the user has been asking questions on sports, probably the word, ball, in his question is not related to ball bearings, which are mechanical parts. [0014]
  • Typically, the more information the system has on the user and the subject matter of the question, the more accurate is the answer to the user's question. The reason is similar to the situation of our responding to a friend's question before he even asks it. Sometimes we understand what he wants to know through non-verbal communication or our previous interactions. [0015]
  • Based on information on the user, the score of the categorized phrase can change. In another embodiment, based on information of the subject matter the question is in, the score of the categorized phrase can change. [0016]
  • After providing the score to the categorized phrase, the system can identify at least two question formats in the database based on the score. These question formats can again help the system resolve ambiguities in the question. For example, the question is, “How to play bridge?” Assume that the question is in the general subject area of card games. It is not clear if the user wants to find out basic rules on the card game bridge or to learn some more advanced techniques. Then, one question format can be on basic rules on bridge, and the other format can be on bridge techniques. The user is allowed to pick at least one of the question formats to have the corresponding answer generated. [0017]
  • In another embodiment, the answer can be in a language different from the language of the question. This improves on the accuracy of the answers to the question. For example, if the user is interested in Japan, and if the user understands Japanese, based on the question format picked, a Japanese answer is identified to his English question. Such answers can provide more relevant information to the user. [0018]
  • Other aspects and advantages of the present invention will become apparent from the following detailed description, which, when taken in conjunction with the accompanying drawings, illustrates by way of example the principles of the invention. [0019]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows one embodiment of the invention. [0020]
  • FIG. 2 shows one embodiment of an answer generator of the invention. [0021]
  • FIG. 3 shows one set of steps implemented by one embodiment of an answer generator of the invention. [0022]
  • FIGS. 4A-B show embodiments implementing the invention. [0023]
  • FIG. 5 shows examples of ways to regularize the question in the invention. [0024]
  • FIG. 6 shows one set of steps related to identifying phrases in the question of the invention. [0025]
  • FIG. 7 shows one set of steps related to identifying question structures in the invention. [0026]
  • FIG. 8 shows examples of factors affecting scores in the invention. [0027]
  • FIG. 9 shows one set of steps related to identifying question formats in the invention. [0028]
  • FIG. 10 shows one set of steps related to identifying an answer in the invention. [0029]
  • Same numerals in FIGS. 1-10 are assigned to similar elements in all the figures. Embodiments of the invention are discussed below with reference to FIGS. 1-10. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes as the invention extends beyond these limited embodiments. [0030]
  • DETAILED DESCRIPTION OF THE INVENTION
  • FIG. 1 shows one embodiment of a system 50 of the present invention. It includes an input device 52 coupled to an answer generator 54, which is coupled to an output device 56. FIG. 2 shows one embodiment of the answer generator 54 implementing a set 120 of steps shown in FIG. 3. [0031]
  • A user enters a question into the input device 52, such as a keyboard, a mouse or a voice recognition system. The question or a representation of the question can be transmitted by the input device to the answer generator 54. [0032]
  • In one embodiment, the answer generator 54 includes a number of elements. The answer generator 54 can include a question regularizer 80, a phrase identifier 82, a question structure identifier 84, a question format identifier 86 and an answer identifier 88. In general terms, the question regularizer 80 regularizes (step 122) words in the question, such as by replacing words with their roots; the phrase identifier 82 identifies (step 124) phrases in the regularized question to generate phrased questions; the question structure identifier 84 generates (step 126) question structures from the phrased question; based on the question structures, the question format identifier 86 identifies (step 128) and retrieves one or more question formats, which the user is allowed to pick from; and then the answer identifier 88 identifies (step 132) and retrieves one or more answers for the question. Note that the answer identifier 88 can access the Internet or the Web for answers. [0033]
  • The generator 54 can also include a database 90 of relevant information to be accessed by different elements of the generator 54. The database 90 can be a relational database, an object database or other forms of database. [0034]
  • After the answer is generated, the output device 56, such as a monitor, a printer or a voice synthesizer, can present the answer to the user. [0035]
  • FIG. 4A shows one physical embodiment 150 implementing one embodiment of the invention, preferably in software and hardware. The embodiment 150 includes a server computer 152 and a number of client computers, such as 154, which can be a personal computer. Each client computer communicates to the server computer 152 through a dedicated communication link, or a computer network 156. In one embodiment, the link can be the Internet, intranet or other types of private-public networks. [0036]
  • FIG. 4B shows one embodiment of a client computer 154. It typically includes a bus 159 connecting a number of components, such as a processing unit 160, a main memory 162, an I/O controller 164, a peripheral controller 166, a graphics adapter 168, a circuit board 180 and a network interface adapter 170. The I/O controller 164 is connected to components, such as a hard disk drive 172 and a floppy disk drive 174. The peripheral controller 166 can be connected to one or more peripheral components, such as a keyboard 176 and a mouse 182. The graphics adapter 168 can be connected to a monitor 178. The circuit board 180 can be coupled to audio signals 181; and the network interface adapter 170 can be connected to a network 120, which can be the Internet, an intranet, the Web or other forms of networks. The processing unit 160 can be an application specific chip. [0037]
  • Different elements in the [0038] system 50 may be in different physical components. For example, the input device 52 and the output device 56 may be in a client computer; and the answer generator 54 may reside in a server computer. In another embodiment, the input device 52, the output device 56, and the answer generator 54 other than the database 90 are in a client computer; and the database 90 is in a server computer. In another situation, the database 90 can reside in a storage medium in a client computer, or partly in the client computer and partly in the server computer. In a fourth embodiment, the system 50 is in a client computer. Yet in another embodiment, the input device 52 and the output device 56 are in a client computer; the answer generator 54 other than the database 90 is in a middleware apparatus, such as a Web server; and the database 90 with its management system are in a back-end server, which can be a database server. Note that different elements of the answer generator 54 can also reside in different components.
  • In this invention, the question can be on a subject, which can be broad or narrow. In one embodiment, the subject can cover mathematics or history, or it can cover the JAVA programming language. In another embodiment, the subject covers information in a car, such as a Toyota Camry, and the user wants to understand this merchandise before buying it. In yet another embodiment, the subject covers the real estate market in a certain geographical area, and again the user wants to understand the market before buying a house. [0039]
  • In one embodiment, a question can be defined as an inquiry demanding an answer; and an answer can be defined as a statement satisfying the inquiry. [0040]
  • The question can be a natural-language question, which is a question used in our everyday language. A natural-language question can be in English or other languages, such as French. Examples of natural-language questions are: [0041]
  • Who is the President?[0042]
  • Like cream of mushroom soup?
  • A statement that is not based on a natural language can be a statement that is not commonly used in our everyday language. Examples are: [0043]
  • For Key in Key-Of(Table) do [0044]
  • Do while x>2 [0045]
  • In one embodiment, one grammatical rule is that a question is made of phrases; another grammatical rule is that every phrase is made of one or more words. Such rules can define a grammatical structure. A question formed under such rules is grammatically context-free, and the question is in a context-free grammatical structure. [0046]
  • FIG. 5 shows examples of ways to regularize the question. The [0047] question regularizer 80 regularizes words in the question, for example, by replacing certain words in the question with their roots. One objective of the regularizer is to reduce the size of the database 90 and the amount of computation required to analyze the question.
  • In one embodiment, the [0048] regularizer 80 identifies every word in the question. Then it replaces words with their roots if they are not already in their root forms. For example, the regularizer changes verbs (step 202) of different forms in the question into their present tense, and nouns (step 204) into singular.
  • One approach to implement the replacement process is based on a hashing function. Every word in the question can be hashed into a hash value. In one embodiment, each character is represented by eight bits, such as by its corresponding eight-bit ASCII code. The hashing function is performed by first pairing characters together in every word of the question. If a word has an odd number of characters, then the last character of the word is paired with zero. Each pair of characters becomes a sixteen-bit number. Every word could have a number of sixteen-bit numbers. The characters do not have to be represented by eight-bit ASCII codes. In another embodiment, with each character represented by its sixteen-bit unicode, the characters are not paired. Again every word could have a number of sixteen-bit numbers. [0049]
  • For a word, add all of its sixteen-bit numbers, and represent the sum by a thirty-two-bit number. For the thirty-two-bit number, add its first two bytes together and throw away the carry to generate a twenty-four-bit number. This number is the hash value of the word. In one embodiment, each hash value can be used to represent two different words. One word can be in one language and the other in another language, with both languages represented by unicodes. A 16 Mbit memory could be used to hold different combinations of twenty-four-bit hash values to represent different words. This approach should be applicable to most natural languages. [0050]
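  • A hedged sketch of this word-hashing scheme follows. The text leaves the folding of the thirty-two-bit sum into twenty-four bits somewhat open; one plausible reading, adding the two most significant bytes and discarding the carry, is assumed here.

```python
def word_hash(word: str) -> int:
    """Hash a word to a 24-bit value: pair 8-bit character codes into 16-bit numbers,
    sum the pairs into a 32-bit number, then fold that sum down to 24 bits."""
    codes = [ord(c) & 0xFF for c in word]          # 8-bit codes (ASCII assumed)
    if len(codes) % 2:                             # odd length: pair the last character with zero
        codes.append(0)
    total = 0
    for i in range(0, len(codes), 2):
        total += (codes[i] << 8) | codes[i + 1]    # each pair of characters is a 16-bit number
    total &= 0xFFFFFFFF                            # keep the sum as a 32-bit number
    # Fold to 24 bits: add the two most significant bytes, throwing away any carry (assumed reading).
    folded = ((total >> 24) + ((total >> 16) & 0xFF)) & 0xFF
    return (folded << 16) | (total & 0xFFFF)
```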
  • In one embodiment, commonly-used words have been previously hashed and stored in the [0051] database 90. There are also tables generated that link the hash values of those words with the hash values of their root forms. Then, the hash values of words in the question are compared to hash values in the tables and may be replaced by root-form hash values. For example, the hash values of verbs of different forms in the question are mapped to and replaced by the hash values of their present tenses, and similarly, the hash values of plural nouns are mapped to and replaced by their corresponding singular-form hash values.
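  • As a small illustration of such tables, the sketch below assumes the database holds a mapping from the hash of an inflected word to the hash of its root; the entries shown are examples only, and word_hash is the sketch given above.

```python
# Illustrative only: a table linking hash values of inflected forms to root-form hash values.
ROOT_HASH = {
    word_hash("bought"): word_hash("buy"),
    word_hash("cars"): word_hash("car"),
}

def regularize_hashes(question: str) -> list:
    """Return the question as a list of 24-bit hash values, replacing an inflected
    form's hash with its root-form hash whenever the table has an entry for it."""
    return [ROOT_HASH.get(h, h) for h in (word_hash(w) for w in question.split())]
```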
  • In one embodiment, after some of the words in the question have been regularized, the [0052] phrase identifier 82 can identify phrases in the question. FIG. 6 shows one set 124 of steps related to identifying phrases. Note that the process of identifying does not have to include the process of understanding, determining its presence in the database, or extracting.
  • In one embodiment, the identifier identifies phrases from the beginning or the first word (step [0053] 252) of the question. It identifies the first word in the question, and then determines if the first word is in the database 90. If it is, it will be classified as a phrase of the question. Then, the identifier identifies the first two words. If there is a corresponding term with such two words in the database 90, then the two words are classified as another phrase of the question.
  • The phrase determination process can again be done through a hashing function. One approach is to add the hash values of each of the words in a phrase. If the sum has more than 24 bits, throw away the carry. The remaining 24 bits would be the hash value of the phrase. For example, the two words in the question can be hashed into a hash value, which is compared to hash values in the [0054] database 90. If such a hash value exists in the database 90, then the two words are classified as a phrase. In one embodiment, this process continues on up to the first twenty words in the question.
  • In one embodiment, when a hash value for a certain number of words does not exist, the identifier stops adding another word to identify phrases in the question. However, a hash value that exists in the [0055] database 90 does not mean that its corresponding word or words can have independent meaning. The existence of a hash value in the database 90 can imply that the phrase identifier 82 should continue on adding words to look for phrases. For example, the identifier 82 should continue on adding words to identify the longest matching phrase, which can be a phrase with six words. For example, the term, “with respect”, may not be a phrase, or does not have independent meaning. But the hash value of such a term can be in the database 90. Then the identifier adds the next word in the question to determine if the three-word combination exists in the database 90. If the third word is the word “to”, then the three-word combination is a preposition with independent meaning, and can have a hash value in the database 90.
  • After identifying all of the phrases from the first word, the identifier starts from identifying (step [0056] 254) phrases from the second word of the question, and performs similar identification process as it has done from the first word. One difference is that the starting point of the analysis is the second word.
  • As an example, the question is, “Are ball bearings round?” The identifier starts from the word, “are”, and stops there because there is no hash value for the term, “are ball”. Then, the identifier starts from the word, “ball”, and finds a hash value. This suggests that the identifier should continue on, and it finds the hash value for the term, “ball bearings”. [0057]
  • The identifier can continue on identifying phrases from the remaining words (step [0058] 256) in the question. The starting point of the analysis moves from one word to the next, down the question, until all of the words in the sentence have been exhausted.
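  • A minimal sketch of this sliding phrase search is given below, assuming the database is a set of twenty-four-bit phrase hash values and a phrase hash is the twenty-four-bit sum of its word hashes; the constants and names are assumptions, and word_hash is the sketch above.

```python
MASK_24 = 0xFFFFFF
MAX_SPAN = 20          # the text extends a phrase up to roughly the first twenty words

def phrase_hash(words) -> int:
    """Sum the word hashes and keep 24 bits, throwing away any carry."""
    return sum(word_hash(w) for w in words) & MASK_24

def identify_phrases(words, phrase_hashes):
    """Scan from every starting word, extending one word at a time, and record each
    span whose hash exists in the database; stop extending once a hash is missing."""
    spans = []
    for start in range(len(words)):
        for end in range(start + 1, min(start + MAX_SPAN, len(words)) + 1):
            if phrase_hash(words[start:end]) not in phrase_hashes:
                break                              # no hash for this span: stop extending
            spans.append((start, end))             # span has a corresponding database entry
    return spans
```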
  • At the end of phrase identification, the identifier should have identified all of the phrases in the question with corresponding phrases in the [0059] database 90. In one embodiment, the identifier then removes (step 258) words in the question that are not in any identified phrases. For example, there is a word, “xyz”, in the sentence, which is not found in any of the identified phrases. That word will not be considered in subsequent analysis, or will be ignored. In essence, that word is removed from the question.
  • From the identified phrases, the [0060] phrase identifier 82 generates (step 260) a number of phrased questions. Each phrased question is a combination of one or more identified phrases that match the question. All of the phrased questions cover different combinations of the identified phrases that match the question.
  • For example, the question is “Cash cow?”[0061]
  • There can be two phrased questions, and they are: [0062]
  • 1. “Cash” “cow”?[0063]
  • 2. “Cash cow”?[0064]
  • The first phrased question has two phrases, each with one word. The second phrased question has only one phrase, with two words. [0065]
  • Many languages, such as English, favor the use of multiple words that take on a different meaning when combined together. Depending on how words are phrased together, the phrased questions can have very different meanings. As in the above example, the meaning of “cash cow” is different from the meanings of “cash” and “cow” individually. [0066]
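  • The sketch below, under the same assumptions, enumerates the phrased questions as the different ways the identified spans can cover the question; words not inside any identified span are assumed to have been removed already (step 258).

```python
def phrased_questions(words, spans):
    """Return every combination of identified phrase spans that covers the question,
    each rendered as a list of phrases, e.g. ["cash", "cow"] and ["cash cow"]."""
    ends_by_start = {}
    for start, end in spans:
        ends_by_start.setdefault(start, []).append(end)

    def cover(pos):
        if pos == len(words):
            return [[]]                            # covered the whole question
        combos = []
        for end in ends_by_start.get(pos, []):
            phrase = " ".join(words[pos:end])
            combos += [[phrase] + tail for tail in cover(end)]
        return combos

    return cover(0)

# Example: phrased_questions(["cash", "cow"], [(0, 1), (1, 2), (0, 2)])
# -> [["cash", "cow"], ["cash cow"]]
```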
  • Based on the one or more phrased questions, the [0067] question structure identifier 84 identifies one or more question structures. FIG. 7 shows one set 126 of steps related to identifying question structures.
  • In one embodiment, phrases in the [0068] database 90 are categorized (step 302). A phrase can belong to one or more categories. A category can be a group of phrases, with one or more common characteristics, which can be related to a subject matter. For example, there are two categories and they are Congress and finance. Then, one semantic rule can be that the phrase “bill” belongs to both categories, while another semantic rule can be that the phrase “Capital Asset Pricing Model” belongs to the category of finance.
  • For each phrased question, each of its phrases can be linked (step [0069] 302) by a linker to one or more categories. For example, for the phrased question, “cash” “cow”, the phrase “cash” can be linked to the categories of finance, banking and payment; and the phrase “cow” can be linked to the categories of animals and dairy products. For the phrased question, “cash cow”, the phrase “cash cow” can be linked to the categories of finance and banking.
  • Each category can be given a score. In one embodiment, the score denotes the importance of the category, or the relevancy of the category to the question. The score can depend on the meaning of the category. For example, the category of Congress is given 10 points, and the category of finance is given 30 points because more people ask about finance than Congress. The scores can depend on the subject the user is asking about. For example, if the question is about travel, the category on city can be given 20 points, and the category on animal 5 points. Scoring categories can be done dynamically. For example, after the question has been determined to be in the area of finance, the category on insect can be dynamically given 0.1 point or even 0 points, while the category on investment can be given 100 points. In this example, one semantic rule can be that in the finance area, the score of the category of investment is higher than that of the category of insect. [0070]
  • In another example, the scores given to interrogative pronouns can depend on the type of questions asked. In the travel domain or in questions on traveling, the categories for “how”, “where”, “what” and “when” can be given higher scores than the category for “who”. [0071]
  • Each phrase in a phrased question can belong to more than one category. With the categories having scores, each phrase belonging to multiple categories can have more than one score. In one embodiment, the category with the highest score is selected to be the category of that phrase, and its score becomes the score of that categorized phrase (step [0072] 304).
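  • As a minimal sketch of this scoring, assume the database links each phrase to its categories and each category to a score; the tables and values below are illustrative only, not the patent's actual data.

```python
# Illustrative category scores and phrase-to-category links.
CATEGORY_SCORE = {"finance": 30, "banking": 25, "payment": 10, "animals": 5, "dairy products": 5}
PHRASE_CATEGORIES = {
    "cash": ["finance", "banking", "payment"],
    "cow": ["animals", "dairy products"],
    "cash cow": ["finance", "banking"],
}

def categorize(phrase):
    """Return (category, score), keeping the highest-scoring category linked to the phrase."""
    categories = PHRASE_CATEGORIES.get(phrase, [])
    if not categories:
        return ("uncategorized", 0)
    best = max(categories, key=lambda c: CATEGORY_SCORE.get(c, 0))
    return (best, CATEGORY_SCORE.get(best, 0))
```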
  • In one embodiment, the score of at least one phrase depends on information about the user. This information can be specific to that user. FIG. 8 shows examples of factors affecting the scores. In one embodiment, the information about the user is more than the fact that the user just asked the system [0073] 50 a question.
  • The information can be related to the question's [0074] subject matter 350, identified by the user. For example, the system 50 can ask the user the subject of his question. In another approach, before the user asks a question, the user has to select one subject of interest. All of his questions would be considered to be related to that subject.
  • The information can be related to the user's [0075] previous question 352. For example, the user has been asking questions related to health, and his question has the word, virus, in it. The system 50 would not consider the question to be related to computer virus, but would focus on the type of virus affecting our health.
  • In one embodiment on previous questions, the input and output devices are in a client computer used by the user to ask questions, and the answer generator is in a server computer. At the request of the server computer, the Web browser in the client computer stores the question asked by the user in cookies. The next time when the user accesses the server computer to ask another question, the cookies are sent back to the server computer. The responses generated by the server computer depend on information in the cookies, such as the one or more questions previously asked by the user. [0076]
  • In another embodiment, previous questions can be stored in, for example, HTML forms, which support hidden variables, or HTML scripts written in JavaScript, which supports variables. For the HTML forms example, the question format identifier can generate a set of instructions to represent question formats to be sent to the user. The instructions can be written in HTML form. During a question/answer interactive session between the user and the system, the input device remembers the one or more questions asked by the user and the question formats selected by the user. After a user has selected a question format or asked a new question, the input device can send it to the answer generator. The format sent can also include one or more previous questions asked by the user during the same session. Those questions are stored in the hidden variables of the question format. In this example, during the interactive session, all of the user's inputs can be stored. Whenever anything is sent to the answer generator, all of the user's previous inputs can be sent to the generator in the hidden variables. [0077]
  • The information can be related to a profile of the user [0078] 354. For example, the user is asked to fill in a questionnaire about himself before he starts asking questions. His profile can include his language skill, 356, such as the languages he prefers his answers to be in; his areas of interest, 358, such as the types of songs he likes; and his ethnic background, 360. For example, the user is an Egyptian. Then, categories related to Egypt can be given higher scores. Such information would help the system 50 tailor more accurate and relevant responses to him. In one embodiment, a user enters his identifier when he starts to use the system. Then, the next time he uses the system 50, based on his identification, the system 50 can retrieve his profile. His profile can be updated based on his usage of the system 50, for example, based on questions he just asked. Again, his identification can be stored in cookies.
  • In yet another embodiment, [0079] certain information 357 in the user's profile does not have to be directly entered by the user. As an example, the invention can be implemented in a client-server environment, with the client having an IP address. At least a portion of the answer generator resides in a server, and at least a portion of the input device in a client. Certain information in the user's profile may depend on the IP address 359. After the client has established, for example, an HTTP session with the server, the client's IP address would be passed to the server. Based on the source IP address, the server can go to a domain name database, such as those hosted by Network Solution Incorporated, to access the domain name of the client. The domain name gives a number of pieces of information, such as the point of presence of the Internet service provider used by the client. The server would then be aware of, for example, the approximate ZIP code of the client, or the approximate geographical location of the client. As an example, such information can be used in the following way. If the user is approximately located in San Francisco, and he is asking for hotel information in Boston, the server can assume that the user intends to travel to Boston. Based on such an assumption, the server can send to the client answers related to car rental information and flight ticket information when responding to his question for hotel information.
  • Based on the user's question, the [0080] system 50 can ask the user to refine the question. For example, the question includes the word “current”. The system 50 can ask the user if his question is related to the subject of electricity or time or other physical phenomena. Based on the system's inquiry 362, information related to the user's question can be acquired to improve on the responses to the question. In this embodiment, there can be multiple interactive sessions between the user and the system 50. The system can ask the user more than one question so as to refine the answer. For example, after the user has responded that the word “current” is related to other physical phenomena, the system 50 can ask the user if the question is related to wind or ocean or other physical phenomena so as to better understand the question.
  • In one embodiment, the score of at least one phrase depends on the subject matter of the question. Information on the subject matter of the question, or the domain knowledge, does not have to be from the user. For example, the [0081] system 50 can be tailored for a specific subject, such as information related to a specific company, including that company's products. That system can be designed to answer questions related to the company, with many phrases, question formats and answer formats focused on the company. For example, the company is a hardware company, and the user asks the question, “Do you have nails?” The system 50 would not interpret the question to refer to nails as in finger nails. The system 50 assumes the question to be related to nails, as in screws and nails, and responds accordingly. In another embodiment, the system is tailored to more than one subject matter, and can be switched from one subject matter to another using the same database. The switch can be done, for example, by adjusting the scores of different categories based on their relevancy to the subject matter.
  • In one embodiment, based on information related to the user and/or the subject matter of the question, scores on categories and, in turn, phrases can be changed. The change can be dynamic. In other words, scores are modified as the system gains more information. This can be done, for example, by applying multipliers to scores of categories to be changed. In another embodiment, after phrases have been categorized and before the scores of the categorized phrases are determined, those scores are changed according to the information related to the user and/or the subject matter of the question. In yet another embodiment, due to changes in the scores of the categories, a phrase previously linked to a category is modified to be linked to another category. [0082]
  • In one embodiment, after categorizing and scoring each phrase in a phrased question, the structure identifier generates (step [0083] 306) a number of question structures. Each phrase in the question can be linked to a category, and each phrased question can be represented by the corresponding categories. In one embodiment, the categorized representation of each phrased question can be known as a question structure, or a question structure can be a list of categories. For example, the question structure of the phrased question, “cash” “cow” can be “finance” “animals”, and the question structure of the phrased question “cash cow” can be “finance”. In this example, the question, “Cash cow?” is linked to two question structures.
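  • Continuing the same sketch, a question structure can then be read off a phrased question as the list of its phrases' categories:

```python
def question_structure(phrased_question):
    """Map a phrased question to its question structure, i.e. the list of categories."""
    return [categorize(phrase)[0] for phrase in phrased_question]

# With the illustrative tables above:
# ["cash", "cow"] -> ["finance", "animals"];  ["cash cow"] -> ["finance"]
```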
  • In another embodiment, the number of question structures generated can be reduced, which could increase the speed of generating a response, and could also reduce ambiguity in the question. This can be done by reducing the number of categories. One way is to form categories of categories, or a hierarchy of categories. Each category can be given a name. With each question structure being a list of categories, each question structure can be represented by a list of category names. In one embodiment, the [0084] phrase identifier 82 can operate on the list of category names to determine category-of-categories.
  • The category-of-category approach can be explained by the following example. The original question includes the phrase, “San Jose of California”. “San Jose” is under the category of city, “of” is under the category of “preposition”, and “California” is under the category of State. One semantic rule may be that there is a “City of State” category to replace the list of categories “city” “preposition” “State”. [0085]
  • This type of category-of-categories analysis can be extended into a category hierarchy. The “city”, “preposition” and “State” can be considered as first level categories, while the “city of State” category can be considered as a second level category. The method to identify higher level categories can be the same as the method to identify second level categories based on first level categories, as long as each category is given a name. For example, a third level category can replace a list of second level categories. [0086]
  • At the end of the category-of-categories analysis, different-level categories can be classified simply as categories. This approach can reduce the number of categories. With fewer categories, some question structures might be identical. Thus, this approach may reduce the number of question structures also. [0087]
  • In one embodiment, one semantic rule is that a higher level category is assigned a higher score than its lower level categories. For example, the “city of State” category has a higher score than the “city”, “preposition” and “State” categories. [0088]
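  • A minimal sketch of this category-of-categories rewriting follows. The rule table is an assumed representation, with each rule mapping a sequence of lower-level category names to one higher-level name.

```python
# Illustrative rule: the list "city" "preposition" "State" becomes "City of State".
HIGHER_LEVEL_RULES = {("city", "preposition", "State"): "City of State"}

def raise_categories(structure):
    """Repeatedly replace any matching sequence of categories with its higher-level name."""
    changed = True
    while changed:
        changed = False
        for pattern, name in HIGHER_LEVEL_RULES.items():
            n = len(pattern)
            for i in range(len(structure) - n + 1):
                if tuple(structure[i:i + n]) == pattern:
                    structure = structure[:i] + [name] + structure[i + n:]
                    changed = True
                    break
            if changed:
                break
    return structure

# raise_categories(["city", "preposition", "State"]) -> ["City of State"]
```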
  • The [0089] question structure identifier 84 can select a number of the generated question structures. In one embodiment, the structure identifier provides (step 308) a score to each question structure by summing the scores of all of its categories. The question structure identifier 84 then selects (step 310), for example, the question structure with the highest score to be the question structure representing the question. In one example, the identifier selects the structures with the top five highest scores to be the structures representing the question.
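  • Under the same assumed score table, steps 308 and 310 can be sketched as summing category scores and keeping the top-scoring structures:

```python
def select_structures(structures, top_n=5):
    """Score each question structure by summing its category scores and keep the top ones."""
    def score(structure):
        return sum(CATEGORY_SCORE.get(category, 0) for category in structure)
    return sorted(structures, key=score, reverse=True)[:top_n]
```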
  • In one embodiment, after the question structures representing the question have been selected, the [0090] question format identifier 86 identifies one or more question formats in the database 90. FIG. 9 shows one set of steps related to identifying question formats.
  • Each question format can be a pre-defined question with one or more phrases and one or more categories. The following can be a question format: [0091]
  • What is “a financial term”?
  • The question, “What is preferred stock?”, falls under the above question format. [0092]
  • Each category in a question format has a number of corresponding phrases. For example, the corresponding phrases of the category “major city” can include all of the cities in the United States with population exceeding one million people. [0093]
  • The question formats can be pre-defined and stored in the [0094] database 90. One way to generate them can be based on commonly-asked questions. For example, if more than twenty people ask the same or substantially similar question when they use the system 50, a system administrator can generate a question format for that question structure. To illustrate, more than twenty people asked the following questions, or a variation of the following questions:
  • A list of restaurants in San Francisco?[0095]
  • Restaurants in San Francisco?[0096]
  • Where can I find some good restaurants in San Francisco?
  • All of these questions have a set of similar key words, which are restaurant and San Francisco. The [0097] system 50, after cataloging twenty occurrences of such questions, provides one of the twenty questions to the system administrator, who can then generate the following question format:
  • Would you recommend some good restaurants in “major city” ?[0098]
  • There are a number of ways to identify question formats. A question structure can have one or more categories. In one embodiment, for a question structure, the question format identifier identifies every question format that has all of the categories in that question structure (step [0099] 402). There can be situations when a question structure has a number of categories, and no question format in the database 90 has all of the categories. In one embodiment, for a question structure, the question format identifier identifies question formats that have at least one category in the question structure.
  • Based on the question structures, there can be a number of question formats identified. Then, the [0100] question format identifier 86 can select (step 404) one or more of them. One selection criterion is based on scores of the question formats. Each question format includes one or more categories, and each category has a score. The sum of all of the scores of the categories in a question format gives the question format a score. In one embodiment, the question format with the highest score is selected. In another embodiment, the five question formats with the highest scores are selected.
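  • A hedged sketch of steps 402 and 404 follows, modelling each question format as a template string plus the set of categories it contains; the formats shown and the fallback to a partial category match are assumptions based on the text.

```python
# Illustrative question formats; each pairs a template with the categories it contains.
QUESTION_FORMATS = [
    {"template": "What is the temperature in 'city'?", "categories": {"city"}},
    {"template": "Would you recommend some good restaurants in 'major city'?",
     "categories": {"major city"}},
]

def identify_formats(structure, top_n=5):
    """Prefer formats that contain all categories of the structure; otherwise fall back
    to formats sharing at least one category. Rank formats by summed category scores."""
    wanted = set(structure)
    full_match = [f for f in QUESTION_FORMATS if wanted <= f["categories"]]
    candidates = full_match or [f for f in QUESTION_FORMATS if wanted & f["categories"]]
    def score(fmt):
        return sum(CATEGORY_SCORE.get(category, 0) for category in fmt["categories"])
    return sorted(candidates, key=score, reverse=True)[:top_n]
```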
  • A category in a question format can have a default value. Each category typically has many phrases. In one embodiment, one of the phrases is selected to be the default phrase (step [0101] 406) of the category in the question format. That phrase can be the corresponding phrase in the original question leading to the selection of the category and the question format. For example, the question is, “What is the temperature in San Francisco?” The question format selected is, “What is the temperature in ‘city’?” Instead of the generic term “city”, the question format becomes, “What is the temperature in ‘San Francisco’?” In other words, San Francisco has been chosen to be the default city.
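  • The default-phrase step can be sketched as a simple substitution of the category slot in the picked template with the phrase taken from the original question; the function name is an assumption.

```python
def apply_default(template, category, phrase_from_question):
    """Replace a quoted category slot in a question-format template with a default phrase."""
    return template.replace(f"'{category}'", f"'{phrase_from_question}'")

# apply_default("What is the temperature in 'city'?", "city", "San Francisco")
# -> "What is the temperature in 'San Francisco'?"
```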
  • In one embodiment, the [0102] system 50 allows (step 130) the user to pick one or more of the selected question formats. This can be done by the question format identifier generating a number of instructions representing the selected question formats, and sending the instructions to the output device 56. In one example, the browser in the output device 56, based on the instructions, displays the selected question formats to allow the user to pick.
  • The [0103] output device 56 can show the user all of the selected question formats. Next to each of the selected question formats there can be an enter icon. If the user clicks the enter icon, that question format would be picked as the selected one.
  • The user can also choose any one of the phrases within each category of a question format. For example, the question format picked is, “What is the temperature in ‘San Francisco’?” And the user decides to find out the temperature in Los Angeles. In one embodiment, the user can click the phrase “San Francisco”, then a list of cities shows up on the [0104] output device 56. The user can scroll down the list to pick Los Angeles. Then, by clicking the enter icon next to the question, the user would have selected the question format, “What is the temperature in ‘Los Angeles’?”
  • After one or more question formats have been picked by the user, the [0105] answer identifier 88 identifies one or more answers for the user. FIG. 10 shows one set 132 of steps related to identifying answers.
  • In one embodiment, each question format has its corresponding answer format. The [0106] answer identifier 88 retrieves (step 452) one or more answer formats for each question format. The answer format can be an answer or can be an address of an answer. In situations where the answer format is an address of an answer, the answer identifier can also access (step 454) the answer based on the answer format.
  • As an example, the answer format of a question format is the URL of a Web page. If the user picks that question format, the [0107] answer identifier 88 would retrieve the corresponding answer format, and fetch the one or more Web pages with the retrieved URL.
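  • A minimal sketch of this retrieval step is shown below, assuming each picked question format maps to an answer format that is either literal text or the URL of a Web page; the table and URL are illustrative only.

```python
from urllib.request import urlopen

# Illustrative mapping from a picked question format to its answer format.
ANSWER_FORMATS = {
    "What is the temperature in 'San Francisco'?": "https://example.com/weather/sf",
}

def retrieve_answer(picked_format):
    """Return the answer format itself if it is literal text, or fetch the Web page
    at its address if the answer format is a URL (step 454)."""
    answer_format = ANSWER_FORMATS.get(picked_format, "No answer format on file.")
    if answer_format.startswith("http"):
        with urlopen(answer_format) as page:       # access the answer by its address
            return page.read().decode("utf-8", errors="replace")
    return answer_format
```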
  • In another embodiment, based on each of the answer formats, a set of instructions is generated to search for information, for example, from different databases or other sources, such as the Web. The instructions can be queries written, for example, in SQL or HTTP. [0108]
  • In another embodiment, as the output device shows a list of selected question formats for the user to pick, the [0109] output device 56 also shows the answer to the question format with the highest score.
  • There are different ways to implement the present invention. In one embodiment, the [0110] phrase identifier 82 can identify a few phrases, and then the question structure identifier 84 can generate a few question structures. As long as there are, for example, ten question structures with scores more than a threshold value, the system 50 would stop looking for additional question structures. If there are only nine such structures, the phrase identifier 82 would identify some more phrases, and the structure identifier 84 would generate more question structures. In other words, the system 50 does not have to look for all phrases in the question before question structures are identified. The threshold value can be set by experience. For example, from past usage, maximum scores of question structures in certain subject areas, such as the traveling domain, are typically less than eighty. Then, for those types of questions, the question structure identifier 84 could set the threshold value to be seventy-five. This approach would reduce the time required to respond to the question.
  • Another way to speed up the response time is through focusing on a set of question formats in the database. The question formats can be grouped together based on common characteristics. For example, all formats related to San Francisco are grouped together. Then, based on information about the subject matter and/or the user, such as the fact that the user is asking questions about San Francisco, those formats would be selected to have one or more of them identified by the [0111] question format identifier 86. In another embodiment, if the question is related to San Francisco, the scores of formats related to San Francisco would be multiplied by a factor of, for example, five. This could increase the chance of finding more relevant question formats for the user, and, in turn, more relevant answers to the question.
  • Note that the answer does not have to be presented by the [0112] output device 56 in the same language as the question. The database can include information in different languages. For example, the user can ask the question in English. The subsequent analysis can be in English, with, for example, the question formats also in English. Then, each selected question format is transformed into instructions, with phrases or categories in the format translated into their equivalent terms in other languages. At least one of those categories can be selected in view of a phrase in the question. The translation can be done, for example, through unicodes. As an illustration, the English name of a person is translated into that person's name as known in his native language. After the transformation, the instructions can be used to search for or retrieve one or more answers to the question.
  • In a different embodiment, the system administrator can have previously defined certain answer formats to be in French, or to be the URL of a French Web site for the answer. French information would then be retrieved as the answers. [0113]
  • In yet another embodiment to have answers in a language different from the question, the [0114] regularizer 80 serves the function of a translator by translating the question into the other language. From that point onwards, the analysis will be in the other language. For example, the question is translated into German. Subsequently, phrases, structures, formats and answers would all be in German.
  • One embodiment includes a computer readable media containing computer program code. The code when executed by a computer causes the computer to perform at least some of the steps of the present invention, such as some of those defined in FIG. 3. In another embodiment, a signal is sent to a computer causing the computer to perform at least some of the steps of the present invention, such as some of those defined in FIG. 3. This signal can include the question, or a representation of the question, asked by the user. The computer can include or can gain access to the [0115] database 90.
  • Note that different embodiments of the present invention can be implemented in software or in hardware. For example, steps shown in FIG. 3 or the answer generator shown in FIG. 2, can be implemented in hardware or in software or in firmware. The implementation process should be obvious to those skilled in the art. [0116]
  • Other embodiments of the invention will be apparent to those skilled in the art from a consideration of this specification or practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the invention being indicated by the following claims. [0117]

Claims (33)

We claim:
1. A method for answering a natural-language question from a user by a first computing engine having access to a database with a plurality of phrases and question formats, the method comprising the steps of:
identifying at least one phrase in the question, based on phrases in the database and at least one grammatical rule;
linking the identified phrase to at least a category based on at least one semantic rule, and a piece of information about the user, other than the fact that the user asked the question; and
identifying at least two question formats in the database based on at least the category;
so that the user is allowed to pick at least one of the question formats for answering the question.
2. A method as recited in claim 1 wherein the method further comprises the step of providing a score to the identified phrase, with the score related to the category.
3. A method as recited in claim 2 wherein:
the step of identifying at least one phrase includes the step of identifying at least two phrases in the question, with the two phrases having at least one common word, based on phrases in the database and at least one grammatical rule;
the step of providing includes the step of providing a score to each of the identified phrases; and
the step of identifying at least two question formats includes the step of identifying at least two question formats in the database based on at least the scores.
4. A method as recited in claim 1 wherein the step of identifying at least one phrase depends on a hashing function.
5. A method as recited in claim 1 wherein the method further comprises the step of ignoring from the question a word that does not have a corresponding phrase in the database.
6. A method as recited in claim 1 wherein:
the step of identifying at least one phrase includes the step of identifying in the question every phrase that has a corresponding phrase in the database; and
the method further comprises the step of identifying a plurality of question structures based on the identified phrases.
7. A method as recited in claim 1 further comprising the step of selecting a plurality of question formats in the database based on the information about the user for identifying the at least two question formats.
8. A method as recited in claim 1 wherein the step of linking is also based on the subject matter of the question the user is asking.
9. A method as recited in claim 1 wherein based on the question format picked, a URL is identified.
10. A method as recited in claim 1 wherein:
one phrase in the question is being translated into a language different from the language of the question; and
an answer to the question in the different language is identified.
11. A method as recited in claim 1 wherein the piece of information about the user is related to the user's response to an inquiry from the computing engine, wherein the inquiry is for improving the accuracy of the answer to the question.
12. A method as recited in claim 11 wherein the piece of information about the user is also related to the user's response to another inquiry from the computing engine, presented to the user after the user has responded to the earlier inquiry.
13. A method as recited in claim 11 wherein the inquiry is on the subject matter of the question the user is asking.
14. A method as recited in claim 11 wherein the inquiry is related to an interest of the user.
15. A method as recited in claim 1 wherein the piece of information about the user is related to a question previously asked by the user.
16. A method as recited in claim 15 wherein the question previously asked by the user is stored in cookies.
17. A method as recited in claim 15 wherein:
a question format is written in HTML, with hidden variables;
the question previously asked by the user is stored in the hidden variables.
18. A method as recited in claim 1 wherein the piece of information about the user was not directly entered by the user.
19. A method as recited in claim 18 wherein:
the question was entered into a second computing engine, which is connected to the first computing engine through a network; and
the piece of information depends on the IP address of the second computing engine.
20. A method as recited in claim 2 wherein:
the question includes a plurality of phrases;
at least two of the phrases belong to two different categories;
the two different categories can be classified under a higher-level category; and
a phrase linked to the higher-level category has a higher score than phrases linked to the two different categories.
21. A method as recited in claim 1 wherein before the step of identifying at least one phrase in the question, the method further comprises the step of regularizing at least one phrase in the question.
22. A method as recited in claim 1 wherein:
the computing engine is for answering questions on a subject; and
a plurality of phrases and a plurality of question formats are for answering questions on the subject.
23. A method as recited in claim 1 wherein the category linked to the phrase is changed to another category in view of another piece of information about the user.
24. A method for answering a natural-language question from a user by a computing engine having access to a database with a plurality of phrases and question formats, the method comprising the steps of:
identifying at least one phrase in the question, based on phrases in the database and at least one grammatical rule; and
identifying at least two question formats in the database based on the identified phrase and at least one semantic rule;
wherein:
the user is allowed to pick at least one of the question formats for answering the question;
one phrase in the question is being translated into a language different from the language of the question; and
an answer to the question in the different language is identified.
25. A method for answering a natural-language question from a user by a first computing engine, the method comprising the steps of:
transmitting, through a network, the question to a second computing engine having access to a database with a plurality of phrases and question formats, the second computing engine being configured to:
identify at least one phrase in the question, based on phrases in the database and at least one grammatical rule;
link the identified phrase to at least a category based on at least one semantic rule, and a piece of information about the user, other than the fact that the user asked the question; and
identify at least two question formats in the database based on at least the category;
so that the user is allowed to pick at least one of the question formats for answering the question.
26. A method for answering a natural-language question from a user by a first computing engine, the method comprising the steps of:
transmitting, through a network, the question to a second computing engine having access to a database with a plurality of phrases and question formats, the second computing engine being configured to:
identify at least one phrase in the question, based on phrases in the database and at least one grammatical rule; and
identify at least two question formats in the database based on the identified phrase and at least one semantic rule;
wherein:
the user is allowed to pick at least one of the question formats for answering the question;
one phrase in the question is being translated into a language different from the language of the question; and
an answer to the question in the different language is identified.
27. An apparatus for answering a natural-language question from a user, the apparatus comprising:
a phrase identifier configured to identify at least one phrase in the question, based on phrases in a database and at least one grammatical rule;
a linker configured to link the identified phrase to at least a category based on at least one semantic rule, and a piece of information about the user, other than the fact that the user asked the question; and
a format identifier configured to identify at least two question formats in the database based on at least the category;
wherein the user is allowed to pick at least one of the question formats for answering the question.
28. An apparatus for answering a natural-language question from a user, the apparatus comprising:
a phrase identifier configured to identify at least one phrase in the question, based on phrases in a database and at least one grammatical rule; and
a format identifier configured to identify at least two question formats in the database based on the identified phrase and at least one semantic rule;
wherein:
the user is allowed to pick at least one of the question formats for answering the question;
one phrase in the question is being translated into a language different from the language of the question; and
an answer to the question in the different language is identified.
29. An apparatus for answering a natural-language question from a user comprising:
a transmitter configured to transmit, through a network, the question to a computing engine having access to a database with a plurality of phrases and question formats, the computing engine being configured to:
identify at least one phrase in the question, based on phrases in the database and at least one grammatical rule; and
link the identified phrase to at least a category based on at least one semantic rule, and a piece of information about the user, other than the fact that the user asked the question; and
identify at least two question formats in the database based on at least the category;
so that the user is allowed to pick at least one of the question formats for answering the question.
30. An apparatus for answering a natural-language question from a user comprising:
a transmitter configured to transmit, through a network, the question to a computing engine having access to a database with a plurality of phrases and question formats, the computing engine being configured to:
identify at least one phrase in the question, based on phrases in the database and at least one grammatical rule; and
identify at least two question formats in the database based on the identified phrase and at least one semantic rule;
wherein:
the user is allowed to pick at least one of the question formats for answering the question;
one phrase in the question is being translated into a language different from the language of the question; and
an answer to the question in the different language is identified.
31. A computer readable media containing computer program code that is useful for answering a natural language question from a user, said code when executed by a first computer, having access to a database with a plurality of phrases and question formats, causing the first computer to perform a method comprising the steps of:
identifying at least one phrase in the question, based on phrases in the database and at least one grammatical rule;
linking the identified phrase to at least a category based on at least one semantic rule, and a piece of information about the user, other than the fact that the user asked the question; and
identifying at least two question formats in the database based on at least the category;
so that the user is allowed to pick at least one of the question formats for answering the question.
32. A computer readable media containing computer program code that is useful for answering a natural language question from a user, said code when executed by a first computer, having access to a database with a plurality of phrases and question formats, causing the first computer to perform a method comprising the steps of:
identifying at least one phrase in the question, based on phrases in the database and at least one grammatical rule; and
identifying at least two question formats in the database based on the identified phrase and at least one semantic rule;
wherein:
the user is allowed to pick at least one of the question formats for answering the question;
one phrase in the question is being translated into a language different from the language of the question; and
an answer to the question in the different language is identified.
33. A method for answering a natural-language question from a user by a first computing engine having access to a database with a plurality of phrases and question formats, the method comprising the steps of:
identifying at least one phrase in the question, based on phrases in the database and at least one grammatical rule;
linking the identified phrase to at least a category based on at least one semantic rule, and the subject matter of the question the user is asking; and
identifying at least two question formats in the database based on at least the category;
so that the user is allowed to pick at least one of the question formats for answering the question;
wherein the method is applicable to more than one subject matter based on the same database.
US10/060,120 1996-12-02 2002-01-28 Method and system to answer a natural-language question Abandoned US20020128818A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/060,120 US20020128818A1 (en) 1996-12-02 2002-01-28 Method and system to answer a natural-language question
US10/727,701 US6865370B2 (en) 1996-12-02 2003-12-03 Learning method and system based on questioning

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US08/758,896 US5836771A (en) 1996-12-02 1996-12-02 Learning method and system based on questioning
US09/139,174 US5934910A (en) 1996-12-02 1998-08-24 Learning method and system based on questioning
US09/347,184 US6501937B1 (en) 1996-12-02 1999-07-02 Learning method and system based on questioning
US09/387,932 US6498921B1 (en) 1999-09-01 1999-09-01 Method and system to answer a natural-language question
US10/060,120 US20020128818A1 (en) 1996-12-02 2002-01-28 Method and system to answer a natural-language question

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/387,932 Continuation US6498921B1 (en) 1996-12-02 1999-09-01 Method and system to answer a natural-language question

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US08/758,896 Continuation US5836771A (en) 1996-12-02 1996-12-02 Learning method and system based on questioning
US10/727,701 Continuation US6865370B2 (en) 1996-12-02 2003-12-03 Learning method and system based on questioning

Publications (1)

Publication Number Publication Date
US20020128818A1 true US20020128818A1 (en) 2002-09-12

Family

ID=23531912

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/387,932 Expired - Lifetime US6498921B1 (en) 1996-12-02 1999-09-01 Method and system to answer a natural-language question
US10/060,120 Abandoned US20020128818A1 (en) 1996-12-02 2002-01-28 Method and system to answer a natural-language question

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/387,932 Expired - Lifetime US6498921B1 (en) 1996-12-02 1999-09-01 Method and system to answer a natural-language question

Country Status (1)

Country Link
US (2) US6498921B1 (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050071216A1 (en) * 2003-09-30 2005-03-31 Microsoft Corporation Interactive network guide
US20050143999A1 (en) * 2003-12-25 2005-06-30 Yumi Ichimura Question-answering method, system, and program for answering question input by speech
US20060004724A1 (en) * 2004-06-03 2006-01-05 Oki Electric Industry Co., Ltd. Information-processing system, information-processing method and information-processing program
US20060167678A1 (en) * 2003-03-14 2006-07-27 Ford W R Surface structure generation
US20070067155A1 (en) * 2005-09-20 2007-03-22 Sonum Technologies, Inc. Surface structure generation
US20070198499A1 (en) * 2006-02-17 2007-08-23 Tom Ritchford Annotation framework
US20090162824A1 (en) * 2007-12-21 2009-06-25 Heck Larry P Automated learning from a question and answering network of humans
US7925676B2 (en) 2006-01-27 2011-04-12 Google Inc. Data object visualization using maps
US7953720B1 (en) 2005-03-31 2011-05-31 Google Inc. Selecting the best answer to a fact query from among a set of potential answers
US8065290B2 (en) 2005-03-31 2011-11-22 Google Inc. User interface for facts query engine with snippets from information sources that include query terms and answer terms
US8239394B1 (en) 2005-03-31 2012-08-07 Google Inc. Bloom filters for query simulation
US8239751B1 (en) * 2007-05-16 2012-08-07 Google Inc. Data from web documents in a spreadsheet
US20130196305A1 (en) * 2012-01-30 2013-08-01 International Business Machines Corporation Method and apparatus for generating questions
US20130246065A1 (en) * 2006-04-03 2013-09-19 Google Inc. Automatic Language Model Update
US8954426B2 (en) 2006-02-17 2015-02-10 Google Inc. Query language
US8954412B1 (en) 2006-09-28 2015-02-10 Google Inc. Corroborating facts in electronic documents
US9087059B2 (en) 2009-08-07 2015-07-21 Google Inc. User interface for presenting search results for multiple regions of a visual query
US9135277B2 (en) 2009-08-07 2015-09-15 Google Inc. Architecture for responding to a visual query
US9484034B2 (en) 2014-02-13 2016-11-01 Kabushiki Kaisha Toshiba Voice conversation support apparatus, voice conversation support method, and computer readable medium
US20160350279A1 (en) * 2015-05-27 2016-12-01 International Business Machines Corporation Utilizing a dialectical model in a question answering system
US9530229B2 (en) 2006-01-27 2016-12-27 Google Inc. Data object visualization using graphs
US9892132B2 (en) 2007-03-14 2018-02-13 Google Llc Determining geographic locations for place names in a fact repository
US10102275B2 (en) 2015-05-27 2018-10-16 International Business Machines Corporation User interface for a query answering system
US11030227B2 (en) 2015-12-11 2021-06-08 International Business Machines Corporation Discrepancy handler for document ingestion into a corpus for a cognitive computing system
US11074286B2 (en) 2016-01-12 2021-07-27 International Business Machines Corporation Automated curation of documents in a corpus for a cognitive computing system
US11308143B2 (en) 2016-01-12 2022-04-19 International Business Machines Corporation Discrepancy curator for documents in a corpus of a cognitive computing system
US11392778B2 (en) * 2014-12-29 2022-07-19 Paypal, Inc. Use of statistical flow data for machine translations between different languages

Families Citing this family (99)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6498921B1 (en) * 1999-09-01 2002-12-24 Chi Fai Ho Method and system to answer a natural-language question
US6763342B1 (en) * 1998-07-21 2004-07-13 Sentar, Inc. System and method for facilitating interaction with information stored at a web site
US9171545B2 (en) * 1999-04-19 2015-10-27 At&T Intellectual Property Ii, L.P. Browsing and retrieval of full broadcast-quality video
US7877774B1 (en) * 1999-04-19 2011-01-25 At&T Intellectual Property Ii, L.P. Browsing and retrieval of full broadcast-quality video
US9076448B2 (en) * 1999-11-12 2015-07-07 Nuance Communications, Inc. Distributed real time speech recognition system
US7725307B2 (en) * 1999-11-12 2010-05-25 Phoenix Solutions, Inc. Query engine for processing voice based queries including semantic decoding
US6898411B2 (en) * 2000-02-10 2005-05-24 Educational Testing Service Method and system for online teaching using web pages
JP3587120B2 (en) * 2000-03-15 2004-11-10 日本電気株式会社 Questionnaire response analysis system
US8510668B1 (en) 2000-04-03 2013-08-13 Google Inc. Indicating potential focus in a user interface
US7356604B1 (en) * 2000-04-18 2008-04-08 Claritech Corporation Method and apparatus for comparing scores in a vector space retrieval process
US7120627B1 (en) * 2000-04-26 2006-10-10 Global Information Research And Technologies, Llc Method for detecting and fulfilling an information need corresponding to simple queries
US6999963B1 (en) 2000-05-03 2006-02-14 Microsoft Corporation Methods, apparatus, and data structures for annotating a database design schema and/or indexing annotations
US6993475B1 (en) * 2000-05-03 2006-01-31 Microsoft Corporation Methods, apparatus, and data structures for facilitating a natural language interface to stored information
US6611837B2 (en) * 2000-06-05 2003-08-26 International Business Machines Corporation System and method for managing hierarchical objects
US6745189B2 (en) * 2000-06-05 2004-06-01 International Business Machines Corporation System and method for enabling multi-indexing of objects
US6823328B2 (en) * 2000-06-05 2004-11-23 International Business Machines Corporation System and method for enabling unified access to multiple types of data
US6963876B2 (en) * 2000-06-05 2005-11-08 International Business Machines Corporation System and method for searching extended regular expressions
US7010606B1 (en) 2000-06-05 2006-03-07 International Business Machines Corporation System and method for caching a network connection
US6931393B1 (en) 2000-06-05 2005-08-16 International Business Machines Corporation System and method for enabling statistical matching
US7016917B2 (en) * 2000-06-05 2006-03-21 International Business Machines Corporation System and method for storing conceptual information
US6675010B1 (en) * 2000-06-22 2004-01-06 Hao Ming Yeh Mobile communication system for learning foreign vocabulary
US7103173B2 (en) 2001-07-09 2006-09-05 Austin Logistics Incorporated System and method for preemptive goals based routing of contact records
US7142662B2 (en) 2000-07-11 2006-11-28 Austin Logistics Incorporated Method and system for distributing outbound telephone calls
JP4686905B2 (en) * 2000-07-21 2011-05-25 Panasonic Corporation Dialog control method and apparatus
US6413100B1 (en) * 2000-08-08 2002-07-02 Netucation, Llc System and methods for searching for and delivering solutions to specific problems and problem types
JP2002278977A (en) * 2001-03-22 2002-09-27 Fujitsu Ltd Device and method for answering question and question answer program
US20030004702A1 (en) * 2001-06-29 2003-01-02 Dan Higinbotham Partial sentence translation memory program
WO2003005166A2 (en) 2001-07-03 2003-01-16 University Of Southern California A syntax-based statistical translation model
US7715546B2 (en) 2001-07-09 2010-05-11 Austin Logistics Incorporated System and method for updating contact records
US7054434B2 (en) 2001-07-09 2006-05-30 Austin Logistics Incorporated System and method for common account based routing of contact records
AU2003210393A1 (en) * 2002-02-27 2003-09-09 Michael Rik Frans Brands A data integration and knowledge management solution
US20030182391A1 (en) * 2002-03-19 2003-09-25 Mike Leber Internet based personal information manager
US7620538B2 (en) 2002-03-26 2009-11-17 University Of Southern California Constructing a translation lexicon from comparable, non-parallel corpora
JP3698689B2 (en) * 2002-03-27 2005-09-21 Fujitsu Ltd Teaching material defect location notification method and teaching material failure location notification device
JP2005535007A (en) * 2002-05-28 2005-11-17 ナシプニイ、ウラジミル・ウラジミロビッチ Synthesizing method of self-learning system for knowledge extraction for document retrieval system
US7734699B2 (en) * 2002-09-03 2010-06-08 Himanshu Bhatnagar Interview automation system for providing technical support
JP2004118740A (en) * 2002-09-27 2004-04-15 Toshiba Corp Question answering system, question answering method and question answering program
US20040166484A1 (en) * 2002-12-20 2004-08-26 Mark Alan Budke System and method for simulating training scenarios
US20040254794A1 (en) * 2003-05-08 2004-12-16 Carl Padula Interactive eyes-free and hands-free device
US7797146B2 (en) 2003-05-13 2010-09-14 Interactive Drama, Inc. Method and system for simulated interactive conversation
US20050239035A1 (en) * 2003-05-13 2005-10-27 Harless William G Method and system for master teacher testing in a computer environment
US20050239022A1 (en) * 2003-05-13 2005-10-27 Harless William G Method and system for master teacher knowledge transfer in a computer environment
US8548794B2 (en) 2003-07-02 2013-10-01 University Of Southern California Statistical noun phrase translation
US8296127B2 (en) 2004-03-23 2012-10-23 University Of Southern California Discovery of parallel text portions in comparable collections of corpora and training using comparable texts
US8666725B2 (en) 2004-04-16 2014-03-04 University Of Southern California Selection and use of nonstatistical translation components in a statistical machine translation framework
WO2006042321A2 (en) 2004-10-12 2006-04-20 University Of Southern California Training for a text-to-text application which uses string to tree conversion for training and decoding
KR100723404B1 (en) * 2005-03-29 2007-05-30 Samsung Electronics Co., Ltd. Apparatus and method for processing speech
US8886517B2 (en) 2005-06-17 2014-11-11 Language Weaver, Inc. Trust scoring for language translation systems
US8676563B2 (en) 2009-10-01 2014-03-18 Language Weaver, Inc. Providing human-generated and machine-generated trusted translations
US8666928B2 (en) 2005-08-01 2014-03-04 Evi Technologies Limited Knowledge repository
US8548799B2 (en) * 2005-08-10 2013-10-01 Microsoft Corporation Methods and apparatus to help users of a natural language system formulate queries
US9042703B2 (en) * 2005-10-31 2015-05-26 At&T Intellectual Property Ii, L.P. System and method for content-based navigation of live and recorded TV and video programs
US9020326B2 (en) * 2005-08-23 2015-04-28 At&T Intellectual Property Ii, L.P. System and method for content-based navigation of live and recorded TV and video programs
US20070073533A1 (en) * 2005-09-23 2007-03-29 Fuji Xerox Co., Ltd. Systems and methods for structural indexing of natural language text
US8429148B1 (en) 2005-11-01 2013-04-23 At&T Intellectual Property Ii, L.P. Method and apparatus for automatically generating headlines based on data retrieved from a network and for answering questions related to a headline
US10319252B2 (en) * 2005-11-09 2019-06-11 Sdl Inc. Language capability assessment and training apparatus and techniques
WO2007099812A1 (en) * 2006-03-01 2007-09-07 Nec Corporation Question answering device, question answering method, and question answering program
US7599861B2 (en) 2006-03-02 2009-10-06 Convergys Customer Management Group, Inc. System and method for closed loop decisionmaking in an automated care system
US8943080B2 (en) 2006-04-07 2015-01-27 University Of Southern California Systems and methods for identifying parallel documents and sentence fragments in multilingual document collections
US8379830B1 (en) 2006-05-22 2013-02-19 Convergys Customer Management Delaware Llc System and method for automated customer service with contingent live interaction
US7809663B1 (en) 2006-05-22 2010-10-05 Convergys Cmg Utah, Inc. System and method for supporting the utilization of machine language
US20070288576A1 (en) * 2006-06-12 2007-12-13 Illg Jason J Disambiguating Responses to Questions Within Electronic Messaging Communications
US7860815B1 (en) * 2006-07-12 2010-12-28 Venkateswara Prasad Tangirala Computer knowledge representation format, system, methods, and applications
US20080040339A1 (en) * 2006-08-07 2008-02-14 Microsoft Corporation Learning question paraphrases from log data
US8886518B1 (en) 2006-08-07 2014-11-11 Language Weaver, Inc. System and method for capitalizing machine translated text
US7774198B2 (en) * 2006-10-06 2010-08-10 Xerox Corporation Navigation system for text
US8433556B2 (en) 2006-11-02 2013-04-30 University Of Southern California Semi-supervised training for statistical word alignment
US9122674B1 (en) 2006-12-15 2015-09-01 Language Weaver, Inc. Use of annotations in statistical machine translation
CN101563682A (en) * 2006-12-22 2009-10-21 Nec Corporation Sentence rephrasing method, program, and system
US8468149B1 (en) 2007-01-26 2013-06-18 Language Weaver, Inc. Multi-lingual online community
US8615389B1 (en) 2007-03-16 2013-12-24 Language Weaver, Inc. Generation and exploitation of an approximate language model
US8831928B2 (en) 2007-04-04 2014-09-09 Language Weaver, Inc. Customizable machine translation service
US8825466B1 (en) 2007-06-08 2014-09-02 Language Weaver, Inc. Modification of annotated bilingual segment pairs in syntax-based machine translation
US8260619B1 (en) 2008-08-22 2012-09-04 Convergys Cmg Utah, Inc. Method and system for creating natural language understanding grammars
US8838659B2 (en) 2007-10-04 2014-09-16 Amazon Technologies, Inc. Enhanced knowledge repository
US8332394B2 (en) * 2008-05-23 2012-12-11 International Business Machines Corporation System and method for providing question and answers with deferred type evaluation
US8275803B2 (en) 2008-05-14 2012-09-25 International Business Machines Corporation System and method for providing answers to questions
US9805089B2 (en) 2009-02-10 2017-10-31 Amazon Technologies, Inc. Local business and product search system and method
US8990064B2 (en) 2009-07-28 2015-03-24 Language Weaver, Inc. Translating documents based on content
US8380486B2 (en) 2009-10-01 2013-02-19 Language Weaver, Inc. Providing machine-generated translations and corresponding trust levels
US20110125734A1 (en) * 2009-11-23 2011-05-26 International Business Machines Corporation Questions and answers generation
US10417646B2 (en) 2010-03-09 2019-09-17 Sdl Inc. Predicting the cost associated with translating textual content
US9110882B2 (en) 2010-05-14 2015-08-18 Amazon Technologies, Inc. Extracting structured knowledge from unstructured text
US8892550B2 (en) 2010-09-24 2014-11-18 International Business Machines Corporation Source expansion for information retrieval and information extraction
US11003838B2 (en) 2011-04-18 2021-05-11 Sdl Inc. Systems and methods for monitoring post translation editing
US8694303B2 (en) 2011-06-15 2014-04-08 Language Weaver, Inc. Systems and methods for tuning parameters in statistical machine translation
US8886515B2 (en) 2011-10-19 2014-11-11 Language Weaver, Inc. Systems and methods for enhancing machine translation post edit review processes
US8942973B2 (en) 2012-03-09 2015-01-27 Language Weaver, Inc. Content page URL translation
US10261994B2 (en) 2012-05-25 2019-04-16 Sdl Inc. Method and system for automatic management of reputation of translators
US10614725B2 (en) 2012-09-11 2020-04-07 International Business Machines Corporation Generating secondary questions in an introspective question answering system
US9424341B2 (en) * 2012-10-23 2016-08-23 Ca, Inc. Information management systems and methods
US9152622B2 (en) 2012-11-26 2015-10-06 Language Weaver, Inc. Personalized machine translation via online adaptation
US9213694B2 (en) 2013-10-10 2015-12-15 Language Weaver, Inc. Efficient online domain adaptation
WO2015051397A1 (en) * 2013-10-10 2015-04-16 Quikser Pty Ltd A server for serving answer data and a computer readable storage medium for serving answer data
US9652451B2 (en) * 2014-05-08 2017-05-16 Marvin Elder Natural language query
US9959006B2 (en) 2014-05-12 2018-05-01 International Business Machines Corporation Generating a form response interface in an online application
GB201620714D0 (en) * 2016-12-06 2017-01-18 Microsoft Technology Licensing Llc Information retrieval system
US10740373B2 (en) 2017-02-08 2020-08-11 International Business Machines Corporation Dialog mechanism responsive to query context
US11182681B2 (en) 2017-03-15 2021-11-23 International Business Machines Corporation Generating natural language answers automatically

Citations (84)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4586160A (en) * 1982-04-07 1986-04-29 Tokyo Shibaura Denki Kabushiki Kaisha Method and apparatus for analyzing the syntactic structure of a sentence
US4594686A (en) * 1979-08-30 1986-06-10 Sharp Kabushiki Kaisha Language interpreter for inflecting words from their uninflected forms
US4597057A (en) * 1981-12-31 1986-06-24 System Development Corporation System for compressed storage of 8-bit ASCII bytes using coded strings of 4 bit nibbles
US4599691A (en) * 1982-05-20 1986-07-08 Kokusai Denshin Denwa Co., Ltd. Tree transformation system in machine translation system
US4641264A (en) * 1981-09-04 1987-02-03 Hitachi, Ltd. Method for automatic translation between natural languages
US4674065A (en) * 1982-04-30 1987-06-16 International Business Machines Corporation System for detecting and correcting contextual errors in a text processing system
US4773009A (en) * 1986-06-06 1988-09-20 Houghton Mifflin Company Method and apparatus for text analysis
US5070478A (en) * 1988-11-21 1991-12-03 Xerox Corporation Modifying text data to change features in a region of text
US5088048A (en) * 1988-06-10 1992-02-11 Xerox Corporation Massively parallel propositional reasoning
US5111398A (en) * 1988-11-21 1992-05-05 Xerox Corporation Processing natural language text using autonomous punctuational structure
US5224038A (en) * 1989-04-05 1993-06-29 Xerox Corporation Token editor architecture
US5278980A (en) * 1991-08-16 1994-01-11 Xerox Corporation Iterative technique for phrase query formation and an information retrieval system employing same
US5384703A (en) * 1993-07-02 1995-01-24 Xerox Corporation Method and apparatus for summarizing documents according to theme
US5386276A (en) * 1993-07-12 1995-01-31 Xerox Corporation Detecting and correcting for low developed mass per unit area
US5438511A (en) * 1988-10-19 1995-08-01 Xerox Corporation Disjunctive unification
US5500920A (en) * 1993-09-23 1996-03-19 Xerox Corporation Semantic co-occurrence filtering for speech recognition and signal transcription applications
US5560037A (en) * 1987-12-28 1996-09-24 Xerox Corporation Compact hyphenation point data
US5594641A (en) * 1992-07-20 1997-01-14 Xerox Corporation Finite-state transduction of related word forms for text indexing and retrieval
US5598518A (en) * 1993-03-10 1997-01-28 Fuji Xerox Co., Ltd. Text editing apparatus for rearranging sentences
US5625773A (en) * 1989-04-05 1997-04-29 Xerox Corporation Method of encoding and line breaking text
US5638543A (en) * 1993-06-03 1997-06-10 Xerox Corporation Method and apparatus for automatic document summarization
US5649218A (en) * 1994-07-19 1997-07-15 Fuji Xerox Co., Ltd. Document structure retrieval apparatus utilizing partial tag-restored structure
US5675819A (en) * 1994-06-16 1997-10-07 Xerox Corporation Document information retrieval using global word co-occurrence patterns
US5689716A (en) * 1995-04-14 1997-11-18 Xerox Corporation Automatic method of generating thematic summaries
US5696962A (en) * 1993-06-24 1997-12-09 Xerox Corporation Method for computerized information retrieval using shallow linguistic analysis
US5721939A (en) * 1995-08-03 1998-02-24 Xerox Corporation Method and apparatus for tokenizing text
US5727222A (en) * 1995-12-14 1998-03-10 Xerox Corporation Method of parsing unification based grammars using disjunctive lazy copy links
US5745602A (en) * 1995-05-01 1998-04-28 Xerox Corporation Automatic method of selecting multi-word key phrases from a document
US5752021A (en) * 1994-05-24 1998-05-12 Fuji Xerox Co., Ltd. Document database management apparatus capable of conversion between retrieval formulae for different schemata
US5778397A (en) * 1995-06-28 1998-07-07 Xerox Corporation Automatic method of generating feature probabilities for automatic extracting summarization
US5787420A (en) * 1995-12-14 1998-07-28 Xerox Corporation Method of ordering document clusters without requiring knowledge of user interests
US5819210A (en) * 1996-06-21 1998-10-06 Xerox Corporation Method of lazy contexted copying during unification
US5831853A (en) * 1995-06-07 1998-11-03 Xerox Corporation Automatic construction of digital controllers/device drivers for electro-mechanical systems using component models
US5848191A (en) * 1995-12-14 1998-12-08 Xerox Corporation Automatic method of generating thematic summaries from a document image without performing character recognition
US5850476A (en) * 1995-12-14 1998-12-15 Xerox Corporation Automatic method of identifying drop words in a document image without performing character recognition
US5862321A (en) * 1994-06-27 1999-01-19 Xerox Corporation System and method for accessing and distributing electronic documents
US5870741A (en) * 1995-10-20 1999-02-09 Fuji Xerox Co., Ltd. Information management device
US5883986A (en) * 1995-06-02 1999-03-16 Xerox Corporation Method and system for automatic transcription correction
US5892842A (en) * 1995-12-14 1999-04-06 Xerox Corporation Automatic method of identifying sentence boundaries in a document image
US5903796A (en) * 1998-03-05 1999-05-11 Xerox Corporation P/R process control patch uniformity analyzer
US5903860A (en) * 1996-06-21 1999-05-11 Xerox Corporation Method of conjoining clauses during unification using opaque clauses
US5905980A (en) * 1996-10-31 1999-05-18 Fuji Xerox Co., Ltd. Document processing apparatus, word extracting apparatus, word extracting method and storage medium for storing word extracting program
US5911140A (en) * 1995-12-14 1999-06-08 Xerox Corporation Method of ordering document clusters given some knowledge of user interests
US5918240A (en) * 1995-06-28 1999-06-29 Xerox Corporation Automatic method of extracting summarization using feature probabilities
US5937224A (en) * 1998-03-05 1999-08-10 Xerox Corporation Cleaner stress indicator
US5943669A (en) * 1996-11-25 1999-08-24 Fuji Xerox Co., Ltd. Document retrieval device
US5946521A (en) * 1998-03-05 1999-08-31 Xerox Corporation Xerographic xerciser including a hierarchy system for determining part replacement and failure
US5944530A (en) * 1996-08-13 1999-08-31 Ho; Chi Fai Learning method and system that consider a student's concentration level
US5960228A (en) * 1998-03-05 1999-09-28 Xerox Corporation Dirt level early warning system
US5995775A (en) * 1998-03-05 1999-11-30 Xerox Corporation ROS pixel size growth detector
US6006240A (en) * 1997-03-31 1999-12-21 Xerox Corporation Cell identification in table analysis
US6016204A (en) * 1998-03-05 2000-01-18 Xerox Corporation Actuator performance indicator
US6016516A (en) * 1996-08-07 2000-01-18 Fuji Xerox Co., Ltd. Remote procedure processing device used by at least two linked computer systems
US6023760A (en) * 1996-06-22 2000-02-08 Xerox Corporation Modifying an input string partitioned in accordance with directionality and length constraints
US6076086A (en) * 1997-03-17 2000-06-13 Fuji Xerox Co., Ltd. Associate document retrieving apparatus and storage medium for storing associate document retrieving program
US6081348A (en) * 1998-03-05 2000-06-27 Xerox Corporation Ros beam failure detector
US6128634A (en) * 1998-01-06 2000-10-03 Fuji Xerox Co., Ltd. Method and apparatus for facilitating skimming of text
US6167369A (en) * 1998-12-23 2000-12-26 Xerox Corporation Automatic language identification using both N-gram and word information
US6198885B1 (en) * 1998-03-05 2001-03-06 Xerox Corporation Non-uniform development indicator
US6202064B1 (en) * 1997-06-20 2001-03-13 Xerox Corporation Linguistic search system
US6269189B1 (en) * 1998-12-29 2001-07-31 Xerox Corporation Finding selected character strings in text and providing information relating to the selected character strings
US6282509B1 (en) * 1997-11-18 2001-08-28 Fuji Xerox Co., Ltd. Thesaurus retrieval and synthesis system
US6289304B1 (en) * 1998-03-23 2001-09-11 Xerox Corporation Text summarization using part-of-speech
US6308149B1 (en) * 1998-12-16 2001-10-23 Xerox Corporation Grouping words with equivalent substrings by automatic clustering based on suffix relationships
US6321372B1 (en) * 1998-12-23 2001-11-20 Xerox Corporation Executable for requesting a linguistic service
US6321189B1 (en) * 1998-07-02 2001-11-20 Fuji Xerox Co., Ltd. Cross-lingual retrieval system and method that utilizes stored pair data in a vector space model to process queries
US6321191B1 (en) * 1999-01-19 2001-11-20 Fuji Xerox Co., Ltd. Related sentence retrieval system having a plurality of cross-lingual retrieving units that pairs similar sentences based on extracted independent words
US6339783B1 (en) * 1996-12-10 2002-01-15 Fuji Xerox Co., Ltd. Procedure execution device and procedure execution method
US6366697B1 (en) * 1993-10-06 2002-04-02 Xerox Corporation Rotationally desensitized unistroke handwriting recognition
US6389435B1 (en) * 1999-02-05 2002-05-14 Fuji Xerox Co., Ltd. Method and system for copying a freeform digital ink mark on an object to a related object
US6393389B1 (en) * 1999-09-23 2002-05-21 Xerox Corporation Using ranked translation choices to obtain sequences indicating meaning of multi-token expressions
US6411962B1 (en) * 1999-11-29 2002-06-25 Xerox Corporation Systems and methods for organizing text
US6430557B1 (en) * 1998-12-16 2002-08-06 Xerox Corporation Identifying a group of words using modified query words obtained from successive suffix relationships
US6446035B1 (en) * 1999-05-05 2002-09-03 Xerox Corporation Finding groups of people based on linguistically analyzable content of resources accessed
US6466213B2 (en) * 1998-02-13 2002-10-15 Xerox Corporation Method and apparatus for creating personal autonomous avatars
US6470334B1 (en) * 1999-01-07 2002-10-22 Fuji Xerox Co., Ltd. Document retrieval apparatus
US6473729B1 (en) * 1999-12-20 2002-10-29 Xerox Corporation Word phrase translation using a phrase index
US6493663B1 (en) * 1998-12-17 2002-12-10 Fuji Xerox Co., Ltd. Document summarizing apparatus, document summarizing method and recording medium carrying a document summarizing program
US6498921B1 (en) * 1999-09-01 2002-12-24 Chi Fai Ho Method and system to answer a natural-language question
US6501937B1 (en) * 1996-12-02 2002-12-31 Chi Fai Ho Learning method and system based on questioning
US6505150B2 (en) * 1997-07-02 2003-01-07 Xerox Corporation Article and method of automatically filtering information retrieval results using test genre
US6570555B1 (en) * 1998-12-30 2003-05-27 Fuji Xerox Co., Ltd. Method and apparatus for embodied conversational characters with multimodal input/output in an interface device
US6574622B1 (en) * 1998-09-07 2003-06-03 Fuji Xerox Co., Ltd. Apparatus and method for document retrieval
US6581066B1 (en) * 1999-11-29 2003-06-17 Xerox Corporation Technique enabling end users to create secure command-language-based services dynamically

Family Cites Families (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4798543A (en) 1983-03-31 1989-01-17 Bell & Howell Company Interactive training method and system
US4816994A (en) * 1984-12-04 1989-03-28 Tektronix, Inc. Rule acquisition for expert systems
US4787035A (en) 1985-10-17 1988-11-22 Westinghouse Electric Corp. Meta-interpreter
US4847784A (en) 1987-07-13 1989-07-11 Teknowledge, Inc. Knowledge based tutor
US4867685A (en) 1987-09-24 1989-09-19 The Trustees Of The College Of Aeronautics Audio visual instructional system
US4914590A (en) 1988-05-18 1990-04-03 Emhart Industries, Inc. Natural language understanding system
SE466029B (en) 1989-03-06 1991-12-02 Ibm Svenska Ab DEVICE AND PROCEDURE FOR ANALYSIS OF NATURAL LANGUAGES IN A COMPUTER-BASED INFORMATION PROCESSING SYSTEM
US5035625A (en) 1989-07-24 1991-07-30 Munson Electronics, Inc. Computer game teaching method and system
US5239617A (en) 1990-01-05 1993-08-24 International Business Machines Corporation Method and apparatus providing an intelligent help explanation paradigm paralleling computer user activity
US5265014A (en) 1990-04-10 1993-11-23 Hewlett-Packard Company Multi-modal user interface
US5404295A (en) 1990-08-16 1995-04-04 Katz; Boris Method and apparatus for utilizing annotations to facilitate computer retrieval of database material
US5309359A (en) 1990-08-16 1994-05-03 Boris Katz Method and apparatus for generating and utilizing annotations to facilitate computer text retrieval
US5418717A (en) 1990-08-27 1995-05-23 Su; Keh-Yih Multiple score language processing system
JPH04113385A (en) 1990-09-03 1992-04-14 Fujitsu Ltd Remote lecture system
US5586218A (en) 1991-03-04 1996-12-17 Inference Corporation Autonomous learning and reasoning agent
DE69230968D1 (en) 1991-03-04 2000-05-31 Inference Corp CASE-BASED DEDUCTIVE SYSTEM
JPH04357549A (en) 1991-03-07 1992-12-10 Hitachi Ltd Education system
JP2804403B2 (en) 1991-05-16 1998-09-24 International Business Machines Corporation Question answering system
US5301314A (en) 1991-08-05 1994-04-05 Answer Computer, Inc. Computer-aided customer support system with bubble-up
US5265065A (en) 1991-10-08 1993-11-23 West Publishing Company Method and apparatus for information retrieval from a database by replacing domain specific stemmed phrases in a natural language to create a search query
US5423032A (en) 1991-10-31 1995-06-06 International Business Machines Corporation Method for extracting multi-word technical terms from text
US5259766A (en) 1991-12-13 1993-11-09 Educational Testing Service Method and system for interactive computer science testing, analysis and feedback
US5267865A (en) 1992-02-11 1993-12-07 John R. Lee Interactive computer aided natural learning method and apparatus
AU4286993A (en) 1992-04-15 1993-11-18 Inference Corporation Machine learning with a relational database
GB9209346D0 (en) * 1992-04-30 1992-06-17 Sharp Kk Machine translation system
US5999908A (en) 1992-08-06 1999-12-07 Abelow; Daniel H. Customer-based product design module
JP2973726B2 (en) * 1992-08-31 1999-11-08 Hitachi, Ltd. Information processing device
ES2143509T3 (en) 1992-09-04 2000-05-16 Caterpillar Inc INTEGRATED EDITION AND TRANSLATION SYSTEM.
US5286036A (en) 1992-09-08 1994-02-15 Abrasion Engineering Company Limited Method of playing electronic game, and electronic game
US5446883A (en) 1992-10-23 1995-08-29 Answer Systems, Inc. Method and system for distributed information management and document retrieval
CA2119397C (en) 1993-03-19 2007-10-02 Kim E.A. Silverman Improved automated voice synthesis employing enhanced prosodic treatment of text, spelling of text and rate of annunciation
US5454106A (en) 1993-05-17 1995-09-26 International Business Machines Corporation Database retrieval system using natural language for presenting understood components of an ambiguous query on a user interface
US5701399A (en) 1993-06-09 1997-12-23 Inference Corporation Integration of case-based search engine into help database
US5519608A (en) 1993-06-24 1996-05-21 Xerox Corporation Method for extracting from a text corpus answers to questions stated in natural language by using linguistic analysis and hypothesis generation
WO1995002221A1 (en) 1993-07-07 1995-01-19 Inference Corporation Case-based organizing and querying of a database
US5495604A (en) 1993-08-25 1996-02-27 Asymetrix Corporation Method and apparatus for the modeling and query of database structures using natural language-like constructs
US5597312A (en) 1994-05-04 1997-01-28 U S West Technologies, Inc. Intelligent tutoring method and system
WO1995035541A1 (en) 1994-06-22 1995-12-28 Molloy Bruce G A system and method for representing and retrieving knowledge in an adaptive cognitive network
US5758257A (en) 1994-11-29 1998-05-26 Herz; Frederick System and method for scheduling broadcast of and access to video programs and other data using customer profiles
US5794050A (en) 1995-01-04 1998-08-11 Intelligent Text Processing, Inc. Natural language understanding system
US5634121A (en) 1995-05-30 1997-05-27 Lockheed Martin Corporation System for identifying and linking domain information using a parsing process to identify keywords and phrases
US5819260A (en) 1996-01-22 1998-10-06 Lexis-Nexis Phrase recognition method and apparatus
US6076088A (en) * 1996-02-09 2000-06-13 Paik; Woojin Information extraction system and method using concept relation concept (CRC) triples
US5862325A (en) 1996-02-29 1999-01-19 Intermind Corporation Computer-based communication system and method using metadata defining a control structure
US5797135A (en) 1996-05-01 1998-08-18 Serviceware, Inc. Software structure for data delivery on multiple engines
US6101515A (en) 1996-05-31 2000-08-08 Oracle Corporation Learning system for classification of terminology
US5959543A (en) 1996-08-22 1999-09-28 Lucent Technologies Inc. Two-way wireless messaging system with flexible messaging
US5933531A (en) 1996-08-23 1999-08-03 International Business Machines Corporation Verification and correction method and system for optical character recognition
US5933816A (en) 1996-10-31 1999-08-03 Citicorp Development Center, Inc. System and method for delivering financial services
US5909679A (en) 1996-11-08 1999-06-01 At&T Corp Knowledge-based moderator for electronic mail help lists
EP0841624A1 (en) * 1996-11-08 1998-05-13 Softmark Limited Input and output communication in a data processing system
US5963948A (en) 1996-11-15 1999-10-05 Shilcrat; Esther Dina Method for generating a path in an arbitrary physical structure
US6078914A (en) 1996-12-09 2000-06-20 Open Text Corporation Natural language meta-search system and method
US5963965A (en) 1997-02-18 1999-10-05 Semio Corporation Text processing and retrieval system and method
US5819258A (en) 1997-03-07 1998-10-06 Digital Equipment Corporation Method and apparatus for automatically generating hierarchical categories from large document collections
US5933822A (en) 1997-07-22 1999-08-03 Microsoft Corporation Apparatus and methods for an information retrieval system that employs natural language processing of search results to improve overall precision
US6266664B1 (en) 1997-10-01 2001-07-24 Rulespace, Inc. Method for scanning, analyzing and rating digital information content
US6029043A (en) * 1998-01-29 2000-02-22 Ho; Chi Fai Computer-aided group-learning methods and systems
US6393428B1 (en) 1998-07-13 2002-05-21 Microsoft Corporation Natural language information retrieval system
US6349307B1 (en) 1998-12-28 2002-02-19 U.S. Philips Corporation Cooperative topical servers with automatic prefiltering and routing

Patent Citations (87)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4594686A (en) * 1979-08-30 1986-06-10 Sharp Kabushiki Kaisha Language interpreter for inflecting words from their uninflected forms
US4641264A (en) * 1981-09-04 1987-02-03 Hitachi, Ltd. Method for automatic translation between natural languages
US4597057A (en) * 1981-12-31 1986-06-24 System Development Corporation System for compressed storage of 8-bit ASCII bytes using coded strings of 4 bit nibbles
US4586160A (en) * 1982-04-07 1986-04-29 Tokyo Shibaura Denki Kabushiki Kaisha Method and apparatus for analyzing the syntactic structure of a sentence
US4674065A (en) * 1982-04-30 1987-06-16 International Business Machines Corporation System for detecting and correcting contextual errors in a text processing system
US4599691A (en) * 1982-05-20 1986-07-08 Kokusai Denshin Denwa Co., Ltd. Tree transformation system in machine translation system
US4773009A (en) * 1986-06-06 1988-09-20 Houghton Mifflin Company Method and apparatus for text analysis
US5560037A (en) * 1987-12-28 1996-09-24 Xerox Corporation Compact hyphenation point data
US5088048A (en) * 1988-06-10 1992-02-11 Xerox Corporation Massively parallel propositional reasoning
US5438511A (en) * 1988-10-19 1995-08-01 Xerox Corporation Disjunctive unification
US5070478A (en) * 1988-11-21 1991-12-03 Xerox Corporation Modifying text data to change features in a region of text
US5111398A (en) * 1988-11-21 1992-05-05 Xerox Corporation Processing natural language text using autonomous punctuational structure
US5625773A (en) * 1989-04-05 1997-04-29 Xerox Corporation Method of encoding and line breaking text
US5224038A (en) * 1989-04-05 1993-06-29 Xerox Corporation Token editor architecture
US5278980A (en) * 1991-08-16 1994-01-11 Xerox Corporation Iterative technique for phrase query formation and an information retrieval system employing same
US5625554A (en) * 1992-07-20 1997-04-29 Xerox Corporation Finite-state transduction of related word forms for text indexing and retrieval
US5594641A (en) * 1992-07-20 1997-01-14 Xerox Corporation Finite-state transduction of related word forms for text indexing and retrieval
US5598518A (en) * 1993-03-10 1997-01-28 Fuji Xerox Co., Ltd. Text editing apparatus for rearranging sentences
US5638543A (en) * 1993-06-03 1997-06-10 Xerox Corporation Method and apparatus for automatic document summarization
US5696962A (en) * 1993-06-24 1997-12-09 Xerox Corporation Method for computerized information retrieval using shallow linguistic analysis
US5384703A (en) * 1993-07-02 1995-01-24 Xerox Corporation Method and apparatus for summarizing documents according to theme
US5386276A (en) * 1993-07-12 1995-01-31 Xerox Corporation Detecting and correcting for low developed mass per unit area
US5500920A (en) * 1993-09-23 1996-03-19 Xerox Corporation Semantic co-occurrence filtering for speech recognition and signal transcription applications
US6366697B1 (en) * 1993-10-06 2002-04-02 Xerox Corporation Rotationally desensitized unistroke handwriting recognition
US5752021A (en) * 1994-05-24 1998-05-12 Fuji Xerox Co., Ltd. Document database management apparatus capable of conversion between retrieval formulae for different schemata
US5675819A (en) * 1994-06-16 1997-10-07 Xerox Corporation Document information retrieval using global word co-occurrence patterns
US5862321A (en) * 1994-06-27 1999-01-19 Xerox Corporation System and method for accessing and distributing electronic documents
US6144997A (en) * 1994-06-27 2000-11-07 Xerox Corporation System and method for accessing and distributing electronic documents
US5649218A (en) * 1994-07-19 1997-07-15 Fuji Xerox Co., Ltd. Document structure retrieval apparatus utilizing partial tag-restored structure
US5689716A (en) * 1995-04-14 1997-11-18 Xerox Corporation Automatic method of generating thematic summaries
US5745602A (en) * 1995-05-01 1998-04-28 Xerox Corporation Automatic method of selecting multi-word key phrases from a document
US5883986A (en) * 1995-06-02 1999-03-16 Xerox Corporation Method and system for automatic transcription correction
US5831853A (en) * 1995-06-07 1998-11-03 Xerox Corporation Automatic construction of digital controllers/device drivers for electro-mechanical systems using component models
US5778397A (en) * 1995-06-28 1998-07-07 Xerox Corporation Automatic method of generating feature probabilities for automatic extracting summarization
US5918240A (en) * 1995-06-28 1999-06-29 Xerox Corporation Automatic method of extracting summarization using feature probabilities
US5721939A (en) * 1995-08-03 1998-02-24 Xerox Corporation Method and apparatus for tokenizing text
US5870741A (en) * 1995-10-20 1999-02-09 Fuji Xerox Co., Ltd. Information management device
US5727222A (en) * 1995-12-14 1998-03-10 Xerox Corporation Method of parsing unification based grammars using disjunctive lazy copy links
US5850476A (en) * 1995-12-14 1998-12-15 Xerox Corporation Automatic method of identifying drop words in a document image without performing character recognition
US5892842A (en) * 1995-12-14 1999-04-06 Xerox Corporation Automatic method of identifying sentence boundaries in a document image
US5848191A (en) * 1995-12-14 1998-12-08 Xerox Corporation Automatic method of generating thematic summaries from a document image without performing character recognition
US5787420A (en) * 1995-12-14 1998-07-28 Xerox Corporation Method of ordering document clusters without requiring knowledge of user interests
US5911140A (en) * 1995-12-14 1999-06-08 Xerox Corporation Method of ordering document clusters given some knowledge of user interests
US5903860A (en) * 1996-06-21 1999-05-11 Xerox Corporation Method of conjoining clauses during unification using opaque clauses
US6064953A (en) * 1996-06-21 2000-05-16 Xerox Corporation Method for creating a disjunctive edge graph from subtrees during unification
US5819210A (en) * 1996-06-21 1998-10-06 Xerox Corporation Method of lazy contexted copying during unification
US6023760A (en) * 1996-06-22 2000-02-08 Xerox Corporation Modifying an input string partitioned in accordance with directionality and length constraints
US6016516A (en) * 1996-08-07 2000-01-18 Fuji Xerox Co., Ltd. Remote procedure processing device used by at least two linked computer systems
US5944530A (en) * 1996-08-13 1999-08-31 Ho; Chi Fai Learning method and system that consider a student's concentration level
US5905980A (en) * 1996-10-31 1999-05-18 Fuji Xerox Co., Ltd. Document processing apparatus, word extracting apparatus, word extracting method and storage medium for storing word extracting program
US5943669A (en) * 1996-11-25 1999-08-24 Fuji Xerox Co., Ltd. Document retrieval device
US6501937B1 (en) * 1996-12-02 2002-12-31 Chi Fai Ho Learning method and system based on questioning
US6339783B1 (en) * 1996-12-10 2002-01-15 Fuji Xerox Co., Ltd. Procedure execution device and procedure execution method
US6076086A (en) * 1997-03-17 2000-06-13 Fuji Xerox Co., Ltd. Associate document retrieving apparatus and storage medium for storing associate document retrieving program
US6006240A (en) * 1997-03-31 1999-12-21 Xerox Corporation Cell identification in table analysis
US6202064B1 (en) * 1997-06-20 2001-03-13 Xerox Corporation Linguistic search system
US6505150B2 (en) * 1997-07-02 2003-01-07 Xerox Corporation Article and method of automatically filtering information retrieval results using test genre
US6282509B1 (en) * 1997-11-18 2001-08-28 Fuji Xerox Co., Ltd. Thesaurus retrieval and synthesis system
US6128634A (en) * 1998-01-06 2000-10-03 Fuji Xerox Co., Ltd. Method and apparatus for facilitating skimming of text
US6466213B2 (en) * 1998-02-13 2002-10-15 Xerox Corporation Method and apparatus for creating personal autonomous avatars
US5995775A (en) * 1998-03-05 1999-11-30 Xerox Corporation ROS pixel size growth detector
US5903796A (en) * 1998-03-05 1999-05-11 Xerox Corporation P/R process control patch uniformity analyzer
US5946521A (en) * 1998-03-05 1999-08-31 Xerox Corporation Xerographic xerciser including a hierarchy system for determining part replacement and failure
US6198885B1 (en) * 1998-03-05 2001-03-06 Xerox Corporation Non-uniform development indicator
US6081348A (en) * 1998-03-05 2000-06-27 Xerox Corporation Ros beam failure detector
US6016204A (en) * 1998-03-05 2000-01-18 Xerox Corporation Actuator performance indicator
US5937224A (en) * 1998-03-05 1999-08-10 Xerox Corporation Cleaner stress indicator
US5960228A (en) * 1998-03-05 1999-09-28 Xerox Corporation Dirt level early warning system
US6289304B1 (en) * 1998-03-23 2001-09-11 Xerox Corporation Text summarization using part-of-speech
US6321189B1 (en) * 1998-07-02 2001-11-20 Fuji Xerox Co., Ltd. Cross-lingual retrieval system and method that utilizes stored pair data in a vector space model to process queries
US6574622B1 (en) * 1998-09-07 2003-06-03 Fuji Xerox Co., Ltd. Apparatus and method for document retrieval
US6430557B1 (en) * 1998-12-16 2002-08-06 Xerox Corporation Identifying a group of words using modified query words obtained from successive suffix relationships
US6308149B1 (en) * 1998-12-16 2001-10-23 Xerox Corporation Grouping words with equivalent substrings by automatic clustering based on suffix relationships
US6493663B1 (en) * 1998-12-17 2002-12-10 Fuji Xerox Co., Ltd. Document summarizing apparatus, document summarizing method and recording medium carrying a document summarizing program
US6321372B1 (en) * 1998-12-23 2001-11-20 Xerox Corporation Executable for requesting a linguistic service
US6167369A (en) * 1998-12-23 2000-12-26 Xerox Corporation Automatic language identification using both N-gram and word information
US6269189B1 (en) * 1998-12-29 2001-07-31 Xerox Corporation Finding selected character strings in text and providing information relating to the selected character strings
US6570555B1 (en) * 1998-12-30 2003-05-27 Fuji Xerox Co., Ltd. Method and apparatus for embodied conversational characters with multimodal input/output in an interface device
US6470334B1 (en) * 1999-01-07 2002-10-22 Fuji Xerox Co., Ltd. Document retrieval apparatus
US6321191B1 (en) * 1999-01-19 2001-11-20 Fuji Xerox Co., Ltd. Related sentence retrieval system having a plurality of cross-lingual retrieving units that pairs similar sentences based on extracted independent words
US6389435B1 (en) * 1999-02-05 2002-05-14 Fuji Xerox Co., Ltd. Method and system for copying a freeform digital ink mark on an object to a related object
US6446035B1 (en) * 1999-05-05 2002-09-03 Xerox Corporation Finding groups of people based on linguistically analyzable content of resources accessed
US6498921B1 (en) * 1999-09-01 2002-12-24 Chi Fai Ho Method and system to answer a natural-language question
US6393389B1 (en) * 1999-09-23 2002-05-21 Xerox Corporation Using ranked translation choices to obtain sequences indicating meaning of multi-token expressions
US6411962B1 (en) * 1999-11-29 2002-06-25 Xerox Corporation Systems and methods for organizing text
US6581066B1 (en) * 1999-11-29 2003-06-17 Xerox Corporation Technique enabling end users to create secure command-language-based services dynamically
US6473729B1 (en) * 1999-12-20 2002-10-29 Xerox Corporation Word phrase translation using a phrase index

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060167678A1 (en) * 2003-03-14 2006-07-27 Ford W R Surface structure generation
US7599831B2 (en) 2003-03-14 2009-10-06 Sonum Technologies, Inc. Multi-stage pattern reduction for natural language processing
US20050071216A1 (en) * 2003-09-30 2005-03-31 Microsoft Corporation Interactive network guide
US7580835B2 (en) * 2003-12-25 2009-08-25 Kabushiki Kaisha Toshiba Question-answering method, system, and program for answering question input by speech
US20050143999A1 (en) * 2003-12-25 2005-06-30 Yumi Ichimura Question-answering method, system, and program for answering question input by speech
US20060004724A1 (en) * 2004-06-03 2006-01-05 Oki Electric Industry Co., Ltd. Information-processing system, information-processing method and information-processing program
US8224802B2 (en) 2005-03-31 2012-07-17 Google Inc. User interface for facts query engine with snippets from information sources that include query terms and answer terms
US8650175B2 (en) 2005-03-31 2014-02-11 Google Inc. User interface for facts query engine with snippets from information sources that include query terms and answer terms
US7953720B1 (en) 2005-03-31 2011-05-31 Google Inc. Selecting the best answer to a fact query from among a set of potential answers
US8239394B1 (en) 2005-03-31 2012-08-07 Google Inc. Bloom filters for query simulation
US8065290B2 (en) 2005-03-31 2011-11-22 Google Inc. User interface for facts query engine with snippets from information sources that include query terms and answer terms
US20070067155A1 (en) * 2005-09-20 2007-03-22 Sonum Technologies, Inc. Surface structure generation
US9530229B2 (en) 2006-01-27 2016-12-27 Google Inc. Data object visualization using graphs
US7925676B2 (en) 2006-01-27 2011-04-12 Google Inc. Data object visualization using maps
US8954426B2 (en) 2006-02-17 2015-02-10 Google Inc. Query language
US8055674B2 (en) 2006-02-17 2011-11-08 Google Inc. Annotation framework
US20070198499A1 (en) * 2006-02-17 2007-08-23 Tom Ritchford Annotation framework
US9159316B2 (en) * 2006-04-03 2015-10-13 Google Inc. Automatic language model update
US20130246065A1 (en) * 2006-04-03 2013-09-19 Google Inc. Automatic Language Model Update
US10410627B2 (en) 2006-04-03 2019-09-10 Google Llc Automatic language model update
US9953636B2 (en) 2006-04-03 2018-04-24 Google Llc Automatic language model update
US8954412B1 (en) 2006-09-28 2015-02-10 Google Inc. Corroborating facts in electronic documents
US9785686B2 (en) 2006-09-28 2017-10-10 Google Inc. Corroborating facts in electronic documents
US10459955B1 (en) 2007-03-14 2019-10-29 Google Llc Determining geographic locations for place names
US9892132B2 (en) 2007-03-14 2018-02-13 Google Llc Determining geographic locations for place names in a fact repository
US8239751B1 (en) * 2007-05-16 2012-08-07 Google Inc. Data from web documents in a spreadsheet
US7809664B2 (en) 2007-12-21 2010-10-05 Yahoo! Inc. Automated learning from a question and answering network of humans
US20090162824A1 (en) * 2007-12-21 2009-06-25 Heck Larry P Automated learning from a question and answering network of humans
US9087059B2 (en) 2009-08-07 2015-07-21 Google Inc. User interface for presenting search results for multiple regions of a visual query
US9135277B2 (en) 2009-08-07 2015-09-15 Google Inc. Architecture for responding to a visual query
US10534808B2 (en) 2009-08-07 2020-01-14 Google Llc Architecture for responding to visual query
US20130196305A1 (en) * 2012-01-30 2013-08-01 International Business Machines Corporation Method and apparatus for generating questions
US9484034B2 (en) 2014-02-13 2016-11-01 Kabushiki Kaisha Toshiba Voice conversation support apparatus, voice conversation support method, and computer readable medium
US11392778B2 (en) * 2014-12-29 2022-07-19 Paypal, Inc. Use of statistical flow data for machine translations between different languages
US10102275B2 (en) 2015-05-27 2018-10-16 International Business Machines Corporation User interface for a query answering system
US10157174B2 (en) * 2015-05-27 2018-12-18 International Business Machines Corporation Utilizing a dialectical model in a question answering system
US20170255609A1 (en) * 2015-05-27 2017-09-07 International Business Machines Corporation Utilizing a dialectical model in a question answering system
US20160350279A1 (en) * 2015-05-27 2016-12-01 International Business Machines Corporation Utilizing a dialectical model in a question answering system
US9727552B2 (en) * 2015-05-27 2017-08-08 International Business Machines Corporation Utilizing a dialectical model in a question answering system
US10942958B2 (en) 2015-05-27 2021-03-09 International Business Machines Corporation User interface for a query answering system
US11030227B2 (en) 2015-12-11 2021-06-08 International Business Machines Corporation Discrepancy handler for document ingestion into a corpus for a cognitive computing system
US11074286B2 (en) 2016-01-12 2021-07-27 International Business Machines Corporation Automated curation of documents in a corpus for a cognitive computing system
US11308143B2 (en) 2016-01-12 2022-04-19 International Business Machines Corporation Discrepancy curator for documents in a corpus of a cognitive computing system

Also Published As

Publication number Publication date
US6498921B1 (en) 2002-12-24

Similar Documents

Publication Publication Date Title
US6498921B1 (en) Method and system to answer a natural-language question
JP6414956B2 (en) Question generating device and computer program
US6850934B2 (en) Adaptive search engine query
US6829603B1 (en) System, method and program product for interactive natural dialog
JP4398098B2 (en) Grammar template query system
US6999916B2 (en) Method and apparatus for integrated, user-directed web site text translation
US9323848B2 (en) Search system using search subdomain and hints to subdomains in search query statements and sponsored results on a subdomain-by-subdomain basis
US7587389B2 (en) Question answering system, data search method, and computer program
US6571240B1 (en) Information processing for searching categorizing information in a document based on a categorization hierarchy and extracted phrases
US7403938B2 (en) Natural language query processing
US7089236B1 (en) Search engine interface
US7526474B2 (en) Question answering system, data search method, and computer program
Staab et al. GETESS—searching the web exploiting German texts
US10409803B1 (en) Domain name generation and searching using unigram queries
US20020002452A1 (en) Network-based text composition, translation, and document searching
EP1812872A2 (en) Apparatus, method and system of artificial intelligence for data searching applications
US10380248B1 (en) Acronym identification in domain names
US7668859B2 (en) Method and system for enhanced web searching
US11256770B2 (en) Data-driven online business name generator
US20200351241A1 (en) Data-driven online domain name generator
JPH10207904A (en) System and method for retrieving knowledge information
JP2012243130A (en) Information retrieval device, method and program
JP2022165715A (en) Information searching program, information searching method, and information searching apparatus
US20200349209A1 (en) Data-driven online social media handle generator
US8676790B1 (en) Methods and systems for improving search rankings using advertising data

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: HASTUR LIMITED LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MINDFABRIC HOLDINGS, LLC;REEL/FRAME:018951/0824

Effective date: 20060906

AS Assignment

Owner name: PROFESSORQ, INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:IP LEARN, INC.;REEL/FRAME:019550/0255

Effective date: 20000126

Owner name: IP LEARN, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:IPLEARN, LLC;REEL/FRAME:019550/0182

Effective date: 20000306

Owner name: LINDNER, ROBERT D., JR., OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MINDFABRIC, INC.;REEL/FRAME:019553/0474

Effective date: 20061128

Owner name: MINDFABRIC HOLDINGS, LLC, OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ROBERT D. LINDNER, JR.;REEL/FRAME:019553/0525

Effective date: 20061010

AS Assignment

Owner name: IPLEARN, LLC, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE EXECUTION DATE OF THE ASSIGNMENT AS BEING 03/05/2000 PREVIOUSLY RECORDED ON REEL 013530 FRAME 0995;ASSIGNOR:TONG, PETER P.;REEL/FRAME:019559/0088

Effective date: 20061120

AS Assignment

Owner name: IPLEARN, LLC, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE CLERICAL ERROR IN THE SPELLING OF THE INVENTOR'S NAME, CHI FAI HO PREVIOUSLY RECORDED ON REEL 011806 FRAME 0487;ASSIGNOR:HO, CHI FAI;REEL/FRAME:019562/0129

Effective date: 20061117

Owner name: MINDFABRIC, INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:PROFESSORQ, INC.;REEL/FRAME:019562/0260

Effective date: 20011220

AS Assignment

Owner name: HANGER SOLUTIONS, LLC, GEORGIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTELLECTUAL VENTURES ASSETS 161 LLC;REEL/FRAME:052159/0509

Effective date: 20191206

AS Assignment

Owner name: INTELLECTUAL VENTURES ASSETS 161 LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BENHOV GMBH, LLC;REEL/FRAME:051856/0776

Effective date: 20191126