Title of Invention

"SPEECH RECOGNITION DEVICE IMPLEMENTING A SYNTACTIC PERMUTATION RULE"

Abstract

The invention relates to a speech recognition device comprising an audio processor (2) for the acquisition of an audio signal and a linguistic decoder (6) for determining a sequence of words corresponding to the audio signal, the linguistic decoder having a recognition engine (9) and a language model (8) defined with the aid of a grammar comprising a syntactic rule for repetitionless permuting of symbols, the recognition engine managing information associated with each symbol of the permutation so that each symbol is parsed only once, the recognition engine (9) implementing an algorithm of the "beam search" or "n-best" type providing the n best results, the most probable solution being chosen from among these best results.
Full Text

Information systems or control systems are making ever-increasing use of a voice interface to make interaction with the user fast and intuitive. Since these systems are becoming more complex, the dialogue styles supported are becoming ever richer, and one is entering the field of very large vocabulary continuous speech recognition.
It is known that the design of a large vocabulary continuous speech recognition system requires the production of a language model which defines the probability that a given word from the vocabulary of the application follows another word or group of words, in the chronological order of the sentence.
This language model must reproduce the speaking style ordinarily employed by a user of the system.
The quality of the language model used greatly influences the reliability of the speech recognition. This quality is most often measured by an index referred to as the perplexity of the language model, and which schematically represents the number of choices which the system must make for each decoded word. The lower this perplexity, the better the quality.
The language model is necessary to translate the voice signal into a textual string of words, a step often used by dialogue systems. It is then necessary to construct a comprehension logic which makes it possible to comprehend the query so as to reply to it.
There are two standard methods for producing large vocabulary language models:
(1) the so-called N-gram statistical method, most often employing a bigram or trigram, consists in assuming that the probability of occurrence of a word in the sentence depends solely on the N words which precede it, independently of its context in the sentence.
If one takes the example of the trigram for a vocabulary of 1000 words, it would be necessary to define 1000³ probabilities to define the language model, this being rather impractical. To solve this problem, the words are grouped into sets which are either defined explicitly by the model designer, or deduced by self-organising methods.
This language model is constructed automatically from a text corpus.
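By way of illustration only (this sketch is not part of the original disclosure; the toy corpus and function names are assumptions), such a trigram model can be estimated by counting word triples and their two-word contexts in a corpus:

    from collections import defaultdict

    def train_trigram(sentences):
        # Count each trigram and its two-word context over tokenized sentences.
        trigram_counts = defaultdict(int)
        bigram_counts = defaultdict(int)
        for words in sentences:
            padded = ["<s>", "<s>"] + words + ["</s>"]
            for i in range(len(padded) - 2):
                w1, w2, w3 = padded[i], padded[i + 1], padded[i + 2]
                trigram_counts[(w1, w2, w3)] += 1
                bigram_counts[(w1, w2)] += 1
        return trigram_counts, bigram_counts

    def trigram_prob(tri, bi, w1, w2, w3):
        # Maximum-likelihood estimate of P(w3 | w1, w2); 0.0 if the context is unseen.
        context = bi.get((w1, w2), 0)
        return tri.get((w1, w2, w3), 0) / context if context else 0.0

    tri, bi = train_trigram([["play", "some", "music"], ["play", "some", "jazz"]])
    print(trigram_prob(tri, bi, "play", "some", "music"))  # 0.5

A real system would add smoothing for unseen trigrams; this sketch shows only the counting principle underlying the N-gram method.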
(2) The second method consists in describing the syntax by means of a probabilistic grammar, typically a context-free grammar defined by virtue of a set of rules described in the so-called Backus Naur Form or BNF form.
The rules describing grammars are most often handwritten, but may also be deduced automatically. In this regard, reference may be made to the following document:
"Basic methods of probabilistic context-free grammars" by F. Jelinek, J.D. Lafferty and R.L. Mercer NATO ASI Series Vol. 75 pp. 345-359, 1992.
The models described above raise specific problems when they are applied to interfaces of natural language systems:
The N-gram type language models (1) do not correctly model the dependencies between several distant grammatical substructures in the sentence. For a syntactically correct uttered sentence, there is nothing to guarantee that these substructures will be complied with in the course of recognition, and it is therefore difficult to determine whether a given sense, customarily borne by one or more specific syntactic structures, is conveyed by the sentence.
These models are suitable for continuous dictation, but their application in dialogue systems suffers from the defects mentioned.
The models based on grammars (2) make it possible to correctly model the remote dependencies in a sentence, and also to comply with specific syntactic substructures. The perplexity of the language obtained is often lower, for a given application, than that of the N-gram type models.

On the other hand, for highly inflected languages such as French or Italian, in which the position of the syntactic groups in the sentence is fairly free, the BNF type grammars raise problems in defining the permutations of the syntactic groups in question.
For less inflected languages such as English, these permutations are also necessary for describing the hesitations and the false starts of ordinary spoken language, and the difficulty of expressing them makes a language model based on BNFs rather unsuitable.
The subject of the invention is a speech recognition device including an audio processor for the acquisition of an audio signal and a linguistic decoder for determining a sequence of words corresponding to the audio signal,
characterised in that
the linguistic decoder includes a language model defined with the aid of a grammar comprising a syntactic rule for repetitionless permuting of symbols.
The language model proposed by the inventors extends the formalism of BNF grammars so as to support the syntactic permutations of ordinary language and of highly inflected languages. It makes it possible to reduce the memory required for the speech recognition processing and is particularly suitable for uses in mass-market products.
According to a preferred embodiment, the syntactic rule for permuting symbols includes a list of symbols and, as appropriate, expressions of constraints on the order of the symbols.
According to a preferred embodiment, the linguistic decoder includes a recognition engine which, upon the assigning of symbols of a permutation to a string of terms of a sentence, chooses a symbol to be assigned to a given term solely from among the symbols of the permutation which have not previously been assigned.

According to a particular embodiment, the recognition engine implements an algorithm of the "beam search" or "n-best" type.
Other algorithms may also be implemented.

Other characteristics and advantages of the invention will become apparent through the description of a particular non-limiting embodiment, explained with the aid of the appended drawings, in which:
Figure 1 is a diagram of a speech recognition system,
Figure 2 is a diagram of a prior art stack-based automaton,
Figure 3 is a diagram of a stack-based automaton according to the invention,
Figure 4 is a schematic illustrating the alternative symbols at the start of the analysis of an exemplary permutation, in accordance with the invention,
Figure 5 is a schematic illustrating the alternative symbols of the example of Figure 4 at a later step, in accordance with the invention,
Figure 6 is a schematic illustrating the alternative symbols in the case of the expression of a permutation with the aid of prior art rules,
Figure 7a is a tree illustrating the set of alternatives at the nodes resulting from the exemplary permutation, in accordance with the invention, and
Figure 7b is a tree illustrating the set of alternatives at the nodes resulting from the exemplary permutation, according to the prior art.
Figure 1 is a block diagram of an exemplary device 1 for speech recognition. This device includes a processor 2 of the audio signal carrying out the digitisation of an audio signal originating from a microphone 3 by way of a signal acquisition circuit 4. The processor also translates the digital samples into acoustic symbols chosen from a predetermined alphabet. For this purpose it includes an acoustic-phonetic decoder 5. A linguistic decoder 6 processes these symbols so as to determine, for a sequence A of symbols, the most probable sequence W of words, given the sequence A.
The linguistic decoder uses an acoustic model 7 and a language model 8 implemented by a hypothesis-based search algorithm 9. The acoustic model is for example a so-called hidden Markov model (or HMM). The language model implemented in the present exemplary embodiment is based on a grammar described with the aid of syntax rules of the Backus Naur form. The language model is used to submit hypotheses to the search algorithm. The latter, which is the recognition engine proper, is in the present example a search algorithm based on a Viterbi-type algorithm and referred to as "n-best". The n-best type algorithm determines at each step of the analysis of a sentence the n most probable sequences of words. At the end of the sentence, the most probable solution is chosen from among the n candidates.
The concepts in the above paragraph are in themselves well known to the person skilled in the art, but information relating in particular to the n-best algorithm is given in the work:
"Statistical methods for speech recognition" by F. Jelinek, MIT Press 1999 ISBN 0-262-10066-5 pp. 79-84. Other algorithms may also be implemented. In particular, other algorithms of the "Beam Search" type, of which the "n-best" algorithm is one example.
The acoustic-phonetic decoder and the linguistic decoder can be embodied by way of appropriate software executed by a microprocessor having access to a memory containing the algorithm of the recognition engine and the acoustic and language models.
The invention also relates to the language model, as well as to its use by the recognition engine.
The following four syntactic rules are customarily used to define the probabilistic grammar of a language model.
These four rules are:
(a) "Or" symbol
<A> = <B> | <C>
(b) "And" symbol (concatenation)
<A> = <B> <C>
(c) Optional element
<A> = <B>? (optional index)
(d) Lexical assignment
<A> = "lexical word"
It should be noted that only rules (a), (b) and (d) are actually obligatory. Rule (c) can be reproduced with the aid of the other three, although to the detriment of the compactness of the language model.
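By way of illustration (the symbol <EMPTY> is an invented name, not part of the original text), the optional element of rule (c) can indeed be emulated with the "or" and lexical rules alone, at the cost of an extra symbol:

    <A> = <B> | <EMPTY>
    <EMPTY> = ""

Spelling out every optional group in this manner multiplies the alternatives written explicitly in the grammar, which is the loss of compactness mentioned above.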
The language model in accordance with the present exemplary embodiment uses an additional syntactic rule to define the probabilistic grammar of the language model:
(e) "Permutation" symbol
<A> = Permut {<A1>, <A2>, ..., <An>}
(<Ai> > <Aj>, ..., <Ak> > <Al>)
This signifies that the symbol A is any one of the repetitionless permutations of the n symbols A1, ..., An, these symbols being adjoined by the "and" rule for each permutation.
Moreover, according to the present exemplary embodiment, only the permutations which satisfy the constraints expressed between brackets and which are read: "the symbol Ai appears in the permutation before the symbol Aj, the symbol Ak appears before the symbol Al", are syntactically valid.
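The semantics of rule (e) can be sketched as follows (an illustrative Python fragment, not part of the original disclosure): enumerate the repetitionless orderings of the symbols and keep only those satisfying every "appears before" constraint:

    from itertools import permutations

    def valid_permutations(symbols, constraints):
        # Yield orderings in which, for every pair (x, y) in `constraints`,
        # x appears before y.
        for order in permutations(symbols):
            if all(order.index(x) < order.index(y) for x, y in constraints):
                yield order

    # <A> = Permut {<A1>, <A2>, <A3>} (<A1> > <A3>)
    for order in valid_permutations(["<A1>", "<A2>", "<A3>"], [("<A1>", "<A3>")]):
        print(" ".join(order))
    # <A1> <A2> <A3>
    # <A1> <A3> <A2>
    # <A2> <A1> <A3>

Note that the invention does not expand the permutation in this way at run time; the enumeration above merely defines which orderings are syntactically valid.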
The optional index present in the definition of rule (c) operates as follows:
An optional index is a pair formed of an integer and of a Boolean, which can be true or false.
When a rewrite rule of the type:
<A> = <B>? (optional index)
is encountered, then:
- If the same integer as that of the present optional index has never been encountered in the optional indices of other rules which have produced the current state in the grammar of the language model, for the hypothesis currently under investigation, then the symbol A can:
  - be swapped for the symbol B, the optional index being activated; or
  - be swapped into the empty rule, the optional index not being activated.
- If the same index has already been set by applying a rule of the same type according to the protocol described above, then the only valid expression of the rule is:
  - to swap the symbol A for the symbol B if the Boolean index is true;
  - to swap the symbol A for the empty symbol if the Boolean index is false.
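A minimal sketch of this bookkeeping (the function name and data structure are assumptions made for illustration): the first encounter of an index records a Boolean choice for the current hypothesis, and every later rule bearing the same index must follow that recorded choice:

    def expand_optional(index, choose_b, seen):
        # Expand <A> = <B>? (index) along one hypothesis. `seen` maps an
        # integer index to the Boolean recorded at its first encounter.
        if index not in seen:
            seen[index] = choose_b              # first encounter: free choice
        return "<B>" if seen[index] else ""     # empty expansion if not activated

    hypothesis = {}
    print(repr(expand_optional(1, True, hypothesis)))   # '<B>'  (index 1 activated)
    print(repr(expand_optional(1, False, hypothesis)))  # '<B>'  (forced by index 1)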
The permutations could be expressed in a context-independent BNF type language, by simply extending the syntactic tree represented by the fifth rule, this extension being achieved solely by employing the first four. For combinatorial reasons, the syntactic tree obtained will be of large size, as soon as the number of permuted symbols increases.
The processing of the permutations is achieved by virtue of a stack-based automaton, hence one which is context dependent, and which marks whether, in the course of the syntactic search, an occurrence of the group participating in the permutation has already been encountered, correctly in relation to the order constraints.
The standard processing of a BNF grammar is achieved by virtue of the objects illustrated by Figure 2.
The exemplary embodiment relies on the other hand on a stack-based automaton which uses the new objects illustrated by Figure 3.

To describe the implementation of syntax rule (e), we shall take the example of a simple sentence, composed of a single permutation of three syntactic terms, with no constraints:
<sentence> = Permut {<A>, <B>, <C>}
The terms A, B and C may themselves be complex terms defined with one or more permutation symbols and/or other symbols.
A speech recognition system based on the conventional principles of description of grammars, that is to say using the simple BNF syntax, will translate this form of sentence in the following manner:
<sentence> = <A> <B> <C>
           | <A> <C> <B>
           | <B> <A> <C>
           | <B> <C> <A>
           | <C> <A> <B>
           | <C> <B> <A>.
There are 3! combinations, connected by the "or" symbol (|). The syntactic tree is completely unfurled, and the information that this tree is in fact the representation of a permutation is lost. The tree described is stored entirely in memory to represent the language model required for speech recognition.
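A short sketch of this combinatorial growth (illustrative only, using Python's standard library):

    from itertools import permutations
    from math import factorial

    terms = ["<A>", "<B>", "<C>"]
    unfurled = [" ".join(p) for p in permutations(terms)]
    print(len(unfurled))   # 6 alternatives stored explicitly
    print(unfurled[0])     # <A> <B> <C>
    print(factorial(8))    # 40320 alternatives for only eight permuted terms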
This unfurled tree structure is used to propose candidate terms to be analysed in the course of the "n-best search" algorithm of the recognition engine, which terms will be concatenated to form syntax-compliant sentences from which the engine will retain the n best, that is to say those which exhibit the highest likelihood scores given the sound signal recorded.
The "n-best search" algorithm is coupled with a strategy for pruning the branches of the syntactic tree which, in the course of the left-to-right analysis of the sentence, retains only the n best candidate segments up to the current analysis point.

It may be seen that when investigating the sentence in question, on commencing the analysis, six alternatives will be presented to the acoustic decoding engine, one for each of the combinations of the three terms <A>, <B> and <C>. The fact that it is possible to distinguish from left to right three subgroups of two combinations (one beginning with the symbol <A>, the second with the symbol <B>, and the last with the symbol <C>) is lost, and the engine will analyse each of the six structures in an undifferentiated manner. If it turns out that the syntactic structures <A>, <B> and <C> are sufficiently complex for pruning to occur in the course of the analysis of these structures, then the n best segments analysed will in fact be composed of pairs of structures which are perfectly identical, and hence only n/2 alternatives will actually have been taken into account.
The novel processing proposed by the invention does not suffer from this reduction in the search space: the information that a permutation exists in the grammar is indicated explicitly and the permutation is processed as is.
In what follows, the behaviour of the recognition engine will be described first in detail in the case of the implementation of rule (e) for describing a permutation; we shall then concentrate on describing the behaviour of the recognition engine in the case where the permutations are expressed with the aid of rules (a) to (d). The abovementioned advantages afforded by the invention will emerge from comparing the two behaviours.
Figures 4 and 5 are diagrams illustrating the behaviour of the recognition engine when it is presented with a permutation in accordance with the invention.
On commencing the analysis of the permutation, a step illustrated by Figure 4, three possibilities are presented to the recognition engine for the choice of the first term of the sentence: the symbol <A>, the symbol <B> and the symbol <C>.
An "n-best" analysis with pruning is applied to these structures. The engine firstly considers the symbol
. The path which explores route is negotiated in the left/right analysis as follows:
As it is the path starting with
which is analysed, a logic symbol in memory preserves this information by setting a variable assigned to the permutation in question and to the alternative currently being investigated. This variable, managed by the engine, specifies that this symbol is no longer active for the rest of the analysis of the present path, that is to say it will no longer be available as a candidate symbol for a term situated further away along the same path.
More precisely, the situation at the start of the analysis is that illustrated by Figure 4: the three symbols <A>, <B>, <C> are active and candidates for the n-best recognition algorithm.
In the course of the search, each of the alternatives is explored. For example, for the first, the symbol <A> is envisaged. In the course of this exploration, it will be necessary to explore the possible symbol strings beginning with <A>: from the standpoint of the analysis of the second term of the sentence, the situation illustrated by Figure 5 will obtain: the symbol <A> is no longer available for the analysis of the rest of the sentence, for the alternative currently envisaged, since it has been used up previously in the left/right analysis of the recorded signal flow.
Hence, two candidate symbols remain, <B> and <C>. In an analogous manner, the search route which will analyse for example <B> will mark this symbol as inactive, and only the symbol <C> will remain available for the rest of the decoding.
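This per-path bookkeeping can be pictured as a set or bitmask of consumed symbols, one per hypothesis (an illustrative sketch; the function and variable names are invented, not taken from the patent):

    def active_symbols(permutation, used_mask):
        # Symbols of the permutation still available on the current path;
        # bit k of `used_mask` is set once symbol k has been assigned.
        return [sym for k, sym in enumerate(permutation)
                if not used_mask & (1 << k)]

    perm = ["<A>", "<B>", "<C>"]
    mask = 0
    print(active_symbols(perm, mask))    # ['<A>', '<B>', '<C>']
    mask |= 1 << perm.index("<A>")       # this path commits to <A>
    print(active_symbols(perm, mask))    # ['<B>', '<C>']

Storing one such mask per hypothesis, rather than the unfurled tree of all orderings, is what keeps the memory footprint proportional to the number of permuted symbols.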
Stated otherwise, the recognition engine according to the invention processes a permutation as defined by rule (e) in the manner illustrated by Figure 7a. Suppose that the engine considers the term of rank i of the sentence to be analysed. The engine determines the set of possible alternative symbols: in the case of the exemplary permutation with three symbols, there are three possible alternatives at level i: <A>, <B>, <C>. At rank i+1, there are only two alternatives, the symbol chosen at rank i no longer being considered by the engine. At rank i+2, no choice remains.
From the point of view of considering the n best paths, it would appear that the reduction in the number of possible alternatives at the level of certain nodes of the tree of Figure 7a avoids the consideration of partially redundant paths.
The operation of a conventional speech recognition algorithm, which does not use the mechanism of our invention, can likewise be represented.
On commencing the decoding, the situation is that of Figure 6: the recognition engine believes that it is faced with six possibilities. The first two both begin with the symbol <A>, and their processing will be exactly identical until the appearance of the actual alternative pertaining to the second term.
Thus, up to this point, the storage space used in the n-best algorithm to preserve the most promising tracks will contain each search hypothesis twice.
If moreover the group <A> is fairly complex and pruning occurs before the appearance of the differentiating terms which follow <A>, then the "n-best search" algorithm will in fact carry out only an "n/2-best search", each route analysed being duplicated.
The example given pertains to a permutation with three terms. For a permutation with four or more terms, the same remarks apply, with even more injurious effects on the recognition algorithm. The perplexity seen by the recognition engine is much greater than the actual perplexity of the language model.
Figure 7b illustrates the prior art processing: six alternatives exist at rank i, instead of three.
This example shows that our invention affords two major advantages as compared with the traditional method, even though it does not increase the expressivity of the language model:
Instead of storing syntactic trees describing a permutation, which may use up a lot of memory, one stores only the terms appearing in the permutation, plus variables of simple type which mark the possible activation of the syntactic group in the course of the n-best analysis of the recognition engine.
The BNF grammar-based syntactic processing of the permutations is not suited to the n-best search algorithm imposed by the acoustic part of the speech recognition processing: one and the same analysis hypothesis is considered several times, and the n-best is most often merely an n/m-best, m depending on the number of terms involved in the permutation.
The novel language model presented is intended for large vocabulary man-machine voice dialogue applications, for highly inflected languages or for spontaneous speech recognition.
The language based on the rules above is not more expressive or more powerful than a BNF type language expressed with the aid of conventional rules, when the set of grammatical sentences is finite. The benefit of the invention does not therefore pertain to the expressivity of the novel language, but to the advantages at the level of the processing, by the algorithm of the speech recognition engine, of the syntactic rules. Less memory is required for the processing.
Moreover, the novel syntactic rule allows greater ease of writing the grammar.
Since the process relies on a stack-based automaton, it is particularly suitable, unlike current solutions, for low-cost embedded applications such as applications in mass-market electronic appliances.

We Claim
1. Speech recognition device comprising an audio processor (2) for the acquisition of an audio signal and a linguistic decoder (6) for determining a sequence of words corresponding to the audio signal, characterised in that the linguistic decoder has a recognition engine (9) and a language model (8) defined with the aid of a grammar comprising a syntactic rule for repetitionless permuting of symbols, the recognition engine managing information associated with each symbol of the permutation allowing each symbol to be parsed only once, the recognition engine (9) implementing an algorithm of the "beam search" or "n-best" type providing the n best results, the most probable solution being chosen from among these best results.
2. Device as claimed in Claim 1, wherein the syntactic rule for permuting symbols comprises a list of symbols and, as appropriate, expressions of constraints on the order of the symbols.
3. Device as claimed in Claim 1 or 2, wherein, upon the assigning of symbols of a permutation to a string of terms of a sentence, the recognition engine (9) chooses a symbol to be assigned to a given term solely from among the symbols of the permutation which have not previously been assigned.

4. Device as claimed in one of Claims 1 to 3, wherein each element of the sentence is associated with a probability value, the linguistic decoder (6) running with a pruning strategy which retains only the best candidate sentence.

Patent Number: 206873
Indian Patent Application Number: 645/CAL/2000
PG Journal Number: 20/2007
Publication Date: 18-May-2007
Grant Date: 15-May-2007
Date of Filing: 17-Nov-2000
Name of Patentee: THOMSON MULTIMEDIA
Applicant Address: 46, QUAI A. LE GALLO, F-92100 BOULOGNE-BILLANCOURT
Inventors:
1. DELAUNAY CHRISTOPHE, 11, RUE PIERRE VARIN DE LA BRUMELIERE, F-35700 RENNES
2. FREDERIC SOUFFLET, 18, RUE DE NORMANDIE, F-35410 CHATEAUGIRON
PCT International Classification Number: G10L 15/00
PCT International Application Number: N/A
PCT International Filing Date:
PCT Conventions:
1. Application Number: 9915083, Date of Convention: 1999-11-30, Priority Country: France