Inbenta has implemented its software based on the Meaning-Text Theory and its way of conceiving natural language from the lexicon and its semantics. This approach has led Inbenta's specialized linguistic team to create detailed, specific descriptions of lexical units in several different languages.
Meaning–Text Theory (MTT) is a theoretical linguistic framework, first put forward in Moscow by Aleksandr Žolkovskij and Igor Mel’čuk, for the construction of models of natural language. The theory provides a large and elaborate basis for linguistic description and, due to its formal character, lends itself particularly well to computer applications.
One important discovery of meaning–text linguistics was the recognition that the elements of the lexicon (lexical units) of a language can be related to one another in an abstract semantic sense. These relations are represented in MTT as lexical functions (LFs). The description of the lexicon is thus a crucial aspect of Inbenta's work.
Lexical functions are a tool designed to formally represent the relations between lexical units. They allow us to formalize and describe, in a relatively simple manner, the complex network of lexical relationships that languages present, and to assign a corresponding semantic weight to each element in a sentence. Most importantly, they allow us to relate analogous meanings no matter the form in which they are presented.
Natural languages are more restrictive than they may seem at first glance. In the majority of cases, we sooner or later encounter frozen expressions. Although these have varying degrees of rigidity, they are ultimately fixed, and must be described according to this characteristic, for example:
- Obtain a result
- Do a favor
- Ask / Pose a question
- Raise a building
All of these examples show that it is the lexicon that imposes selection restrictions, since we would hardly find "do a question" or "raise a favor" in a text. The most important factor when analyzing these phrases is that, from the point of view of meaning, the elements do not have the same semantic value: the first element hardly provides any information, while almost all of the meaning, or semantic weight, is carried by the second.
The crucial matter here is that the semantic relationship between the first and second element is exactly the same in every example. Roughly, what we are saying is "make X" (a result, a favor, a question, a building). This type of relation is represented in MTT by the "Oper" lexical function.
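As a rough sketch of how such a relation could be encoded (a toy illustration, not Inbenta's actual implementation), the Oper function can be modeled as a lookup from a semantically full noun to the support verb the lexicon selects for it:

```python
# Toy model of the Oper lexical function: each semantically full noun
# maps to the support ("light") verb that realizes "make X" with it.
# All entries are illustrative examples, not real lexicon data.
OPER = {
    "result": "obtain",
    "favor": "do",
    "question": "ask",
    "building": "raise",
}

def oper(noun: str) -> str:
    """Return the support verb used to express 'make X' for this noun."""
    return OPER[noun]
```

Here `oper("question")` yields "ask": the lexicon, not the grammar, decides that questions are asked rather than done.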
MTT describes around 60 different types of lexical functions, which allow, among other things, the description of relations such as synonymy (buying and purchasing are identical actions), hypernymy/hyponymy (a dog is a type of animal), or relations among lexical units at the sentence level, such as the Oper function mentioned above, or the function expressing "a lot": if you smoke a lot you are a heavy smoker, but if you sleep a lot, you are not a "heavy sleeper". All we can say is that you sleep like a log.
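The intensifier function described here ("a lot") is known as Magn in MTT. A minimal sketch, with invented toy entries, shows why its value must be stored per lexical unit rather than computed by rule:

```python
# Toy table for the Magn lexical function (the "a lot" intensifier).
# The intensifier is lexically selected: "heavy" works for "smoker"
# but not for "sleep". All entries are illustrative examples only.
MAGN = {
    "smoker": "heavy",        # a heavy smoker
    "rain": "heavy",          # heavy rain
    "sleep": "like a log",    # to sleep like a log
}

def magn(keyword: str) -> str:
    """Return the intensifying expression selected by this keyword."""
    return MAGN[keyword]
```

The same abstract meaning ("a lot") surfaces as a different word or phrase depending on the keyword, which is exactly what a lexical function captures.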
Linguists at Inbenta apply the principles of the Meaning-Text Theory when describing the languages supported. The objective is that user questions that are completely different on the surface, but whose underlying meaning is the same, are correctly understood by our semantic-based search systems, so that users always get the best results for their queries.
Let's take these possible user questions:
- purchasing a ticket for an overweight person
- I want to buy a ticket for someone who is obese
Even though the words are different, in both cases the meaning conveyed is the same, so both should get the same answer from a Virtual Assistant. Inbenta's semantic search engine is built upon a rich and complex network of lexical relations in order to understand what users mean with their queries, no matter the exact words they use to pose their questions.
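As a toy illustration of this idea (a sketch with an invented mini-vocabulary, not Inbenta's engine), mapping surface words to shared concept identifiers makes the two queries above normalize to the same set of concepts:

```python
# Illustrative synonym table: surface words map to concept identifiers.
# A real system would rely on a full lexical-relation network; these
# entries exist only to cover the two example queries.
SYNONYMS = {
    "buy": "PURCHASE", "purchasing": "PURCHASE", "purchase": "PURCHASE",
    "overweight": "OBESE", "obese": "OBESE",
    "ticket": "TICKET",
}

def concepts(query: str) -> frozenset:
    """Reduce a query to the set of concepts its content words denote."""
    return frozenset(SYNONYMS[w] for w in query.lower().split() if w in SYNONYMS)

q1 = concepts("purchasing a ticket for an overweight person")
q2 = concepts("I want to buy a ticket for someone who is obese")
# q1 and q2 are equal: both reduce to {PURCHASE, TICKET, OBESE}
```

Because both queries collapse to the same concept set, a system built on this principle can serve the same answer to both, regardless of the surface wording.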