Title of Invention

PITCH DETERMINER FOR A SPEECH ANALYZER

Abstract A speech analyzer (107) compresses a voice message for transmission and includes an LPC analyzer (406) which derives spectral vectors from segments of speech; a memory (1910) which stores predetermined spectral vectors identified by indexes, the indexes also identifying predetermined voicing vectors stored within a receiver; a quantizer (422) which compares the spectral vector derived with the predetermined spectral vectors to select one of the predetermined spectral vectors; and an output buffer for storing the index identifying the predetermined spectral vector selected. The speech analyzer (107) also includes a pitch determiner (414) which includes a pitch function generator (1106) which generates a pitch function from a segment of speech. A pitch enhancer (1116) enhances the pitch function of a current segment of speech utilizing the pitch function of one or more sequential segments of speech, and a pitch detector (1118) detects the pitch of the current segment of speech.
VERY LOW BIT RATE TIME DOMAIN SPEECH ANALYZER FOR VOICE MESSAGING
Cross Reference to Related Co-pending Applications
Reference is made to a related, co-pending patent application, Attorney's Docket No. PT02122U, filed concurrently herewith by Huang, et al., entitled "MBE Synthesizer for Very Low Bit Rate Voice Messaging Systems," which is assigned to the Assignee hereof.
Field of the Invention
This invention relates generally to communication systems, and more specifically to a compressed voice digital communication system using a very low bit rate time domain speech analyzer for voice messaging.
Background of the Invention
Communications systems, such as paging systems, have in the past had to compromise the length of messages, the number of users, and convenience to the user in order to operate the systems profitably. The number of users and the length of the messages were limited to avoid overcrowding of the channel and to avoid long transmission time delays. The user's convenience is directly affected by the channel capacity, the number of users on the channel, system features and type of messaging. In a paging system, tone only pagers that simply alerted the user to call a predetermined telephone number offered the highest channel capacity but were somewhat inconvenient to the users. Conventional analog voice pagers allowed the user to receive a more detailed message, but severely limited the number of users on a given channel. Analog voice pagers, being real time devices, also had the disadvantage of not providing the user with a way of storing and repeating the message received. The introduction of digital pagers with numeric and alphanumeric displays and memories overcame many of the problems associated with the older pagers. These digital pagers improved the message handling capacity of the paging channel, and provided the user with a way of storing messages for later review.
Although the digital pagers with numeric and alphanumeric displays offered many advantages, some users still preferred pagers with voice announcements. In an attempt to provide this service over a limited capacity digital channel, various digital voice compression techniques and synthesis techniques have been tried, each with its own level of success and limitations. Voice compression methods, based on vocoder techniques, currently offer a highly promising technique for voice compression. Of the low data rate vocoders, the multi band excitation (MBE) vocoder is among the most natural sounding vocoders.
The vocoder analyzes short segments of speech, called speech frames, and characterizes the speech in terms of several parameters that are digitized and encoded for transmission. The speech characteristics that are typically analyzed include voicing characteristics, pitch, frame energy, and spectral characteristics. Vocoder synthesizers use these parameters to reconstruct the original speech by mimicking the human voice mechanism. Vocoder synthesizers model the human voice as an excitation source, controlled by the pitch and frame energy parameters, followed by spectrum shaping controlled by the spectral parameters.
The voicing characteristic describes the repetitiveness of the speech waveform. Speech consists of periods where the speech waveform has a repetitive nature and periods where no repetitive characteristics can be detected. The periods where the waveform has a periodic repetitive characteristic are said to be voiced. Periods where the waveform seems to have a totally random characteristic are said to be unvoiced. The voiced/unvoiced characteristics are used by the vocoder speech synthesizer to determine the type of excitation signal which will be used to reproduce that segment of speech. Due to the complexity and irregularities of human speech production, no single parameter can reliably determine when a speech frame is voiced or unvoiced.
Pitch defines the fundamental frequency of the repetitive portion of the voiced waveform. Pitch is typically defined in terms of a pitch period, or the time period of the repetitive segments of the voiced portion of the speech waveform. The speech waveform is a highly complex waveform and very rich in harmonics. The complexity of the speech waveform makes it very difficult to extract pitch information. Changes in pitch frequency must also be smoothly tracked for an MBE vocoder synthesizer to smoothly reconstruct the original speech. Most vocoders employ a time-domain auto-correlation function to perform pitch detection and tracking. Auto-correlation is a very computationally intensive and time consuming process. It has also been observed that conventional auto-correlation methods are unreliable when used with speech derived from a telephone network. The frequency response of the telephone network (300 Hz to 3400 Hz) causes deep attenuation to the lower harmonics of speech that has a low pitch frequency (the range of the fundamental pitch frequency of the human voice is 50 Hz to 400 Hz). Because of the deep attenuation of the fundamental frequency, pitch trackers can erroneously identify the second or third harmonic as the fundamental frequency. The human auditory process is very sensitive to changes in pitch, and the perceived quality of the reconstructed speech is strongly affected by the accuracy of the pitch derived.
Frame energy is a measure of the normalized average RMS power of the speech frame. This parameter defines the loudness of the speech during the speech frame.
The spectral characteristics define the relative amplitude of the harmonics and the fundamental pitch frequency during the voiced portions of speech and the relative spectral shape of the noise-like unvoiced speech segments. The data transmitted defines the spectral characteristics of the reconstructed speech signal. Non-optimum spectral shaping results in poor reconstruction of the voice by an MBE vocoder synthesizer and poor noise suppression.
The human voice, during a voiced period, has portions of the spectrum that are voiced and portions that are unvoiced. MBE vocoders produce natural sounding voice because the excitation source, during a voiced period, is a mixture of voiced and unvoiced frequency bands. The speech spectrum is divided into a number of frequency bands and a determination is made for each band as to the voiced/unvoiced nature of each band. The MBE speech synthesizer generates an additional set of data to control the excitation of the voiced speech frames. In conventional MBE vocoders, the band voiced/unvoiced decision metric is pitch dependent and computationally intensive. Errors in pitch will lead to errors in the band voiced/unvoiced decision that will affect the synthesized speech quality. Transmission of the band voiced/unvoiced data also substantially increases the quantity of data that must be transmitted.
Conventional MBE synthesizers require information on the phase relationship of the harmonics of the pitch signal to accurately reproduce speech. Transmission of phase information further increases the quantity of data that must be transmitted.
Conventional MBE synthesizers can generate natural sounding speech at a data rate of 2400 to 6400 bits per second. MBE synthesizers are being used in a number of commercial mobile communications systems, such as the INMARSAT (International Marine Satellite Organization) system and the ASTRO™ portable transceiver manufactured by Motorola Inc. of Schaumburg, IL. The standard MBE vocoder compression methods, currently used very successfully by two way radios, fail to provide the degree of compression required for use on a paging channel. Voice messages that are digitally encoded using the current state of the art would monopolize such a large portion of the paging channel capacity that they may render the system commercially unsuccessful.
Accordingly, what is needed for optimal utilization of a channel in a communication system, such as a paging channel in a paging system or a data channel in a non-real time one way or two way data communications system, is an apparatus that simply and accurately determines the voiced and unvoiced portions of speech, accurately determines and tracks the fundamental pitch frequency when the frequency spectrum of the fundamental pitch components is severely attenuated, and significantly reduces the amount of data necessary for the transmission of the voiced/unvoiced band information. Also needed is an apparatus that digitally encodes voice messages in such a way that the resulting data is very highly compressed while maintaining acceptable speech quality and can easily be mixed with the normal data sent over the communication channel.
Summary of the Invention
Briefly, according to a first aspect of the invention, a speech analyzer for compressing a voice message which is segmented into speech frames representing segments of speech which are digitized to generate digitized speech samples includes an LPC analyzer, a memory, a quantizer, and an output buffer. The LPC analyzer analyzes the digitized speech samples and derives a spectral vector. The memory stores a plurality of predetermined spectral vectors having a plurality of corresponding indexes. The plurality of predetermined spectral vectors are also associated with a plurality of predetermined voicing vectors. The quantizer compares the spectral vector derived with the plurality of predetermined spectral vectors, and derives a set of distance values. The quantizer then selects a distance value having a shortest distance from the set of distance values derived. The output buffer stores an index of the predetermined spectral vector having the shortest distance.
Briefly, according to a second aspect of the invention, a pitch determiner for use in a speech analyzer determines a pitch within one or more sequential segments of speech, each segment of speech being represented by a predetermined number of digitized speech samples. The pitch determiner includes a pitch function generator, a pitch enhancer, and a pitch detector. The pitch function generator generates, from the digitized speech samples, a plurality of pitch components representing a pitch function. The pitch function defines an amplitude of each of the plurality of pitch components. The pitch enhancer enhances the pitch function of a current segment of speech, utilizing the pitch function of one or more sequential segments of speech, to generate a plurality of enhanced pitch components. The pitch detector detects the pitch of the current segment of speech by determining the pitch of an enhanced pitch component having a largest amplitude of the plurality of enhanced pitch components.
The present invention relates to a pitch determiner for use with a speech analyzer for determining a pitch within one or more sequential segments of speech, each segment of speech being represented by a predetermined number of digitized speech samples, said pitch determiner comprising: a pitch function generator for generating, from the predetermined number of digitized speech samples, a plurality of pitch components representing a pitch function, wherein said pitch function defines an amplitude of each of the plurality of pitch components; a pitch enhancer for enhancing the pitch function of a current segment of speech, utilizing the pitch function of one or more sequential segments of speech, by generating a plurality of enhanced pitch components; and a pitch detector for detecting the pitch of the current segment of speech by determining the pitch of an enhanced pitch component having a largest amplitude of the plurality of enhanced pitch components.
Brief Description of the Accompanying Drawings

FIG. 1 is a block diagram of a communication system utilizing a very low bit rate time domain speech analyzer for voice messaging in accordance with the present invention.
FIG. 2 is an electrical block diagram of a paging terminal and associated paging transmitters utilizing a very low bit rate time domain speech analyzer for voice messaging in accordance with the present invention.
FIG. 3 is a flow chart showing the operation of the paging terminal of FIG. 2.
FIG. 4 is a data flow diagram showing an overview of the speech analyzer used in the paging terminal shown in FIG. 1 and of the data flow between functions.
FIG. 5 shows a flow chart describing the development of the code books used in the speech analyzer shown in FIG. 4.
FIG. 6 shows an example of a segment of an analog speech waveform that, when analyzed, would be classified as voiced.
FIG. 7 is a plot of two pitch functions developed by the communication system shown in FIG. 1 corresponding to the analog waveform shown in FIG. 6.
FIG. 8 shows an example of a portion of an analog speech waveform that, when analyzed, would be classified as unvoiced.
FIG. 9 is a plot of two pitch functions developed by the communication system shown in FIG. 1 corresponding to the analog waveform shown in FIG. 8.
FIG. 10 shows an example of a portion of an analog speech waveform that, when analyzed, would be classified as transitional from unvoiced to voiced.
FIG. 11 is a plot of two pitch functions developed by the communication system shown in FIG. 1 corresponding to the analog waveform shown in FIG. 10.

FIG. 12 is a block diagram representing an overview of the pitch determiner used in the speech analyzer shown in FIG. 4.
FIG. 13 is a flow chart showing details of the pitch function generator used in the pitch determiner shown in FIG. 12.
FIG. 14 is a block diagram detailing the operation of the pitch tracker used in the pitch determiner shown in FIG. 12.
FIG. 15 is a flow chart showing the details of the operation of the dynamic programming function used in the pitch tracker shown in FIG. 14.
FIG. 16 is a flow chart showing a first portion of the localized auto-correlation function shown in FIG. 14.
FIG. 17 is a flow chart showing a second portion of the localized auto-correlation function shown in FIG. 14.
FIG. 18 is a flow chart showing the selection logic used to determine the pitch candidate of the two pitch candidates shown in FIG. 14 that most accurately characterizes the pitch of a speech segment.
FIG. 19 is a block diagram showing the operation of the frame voicing classifier shown in FIG. 4.
FIG. 20 shows an electrical block diagram of the digital signal processor utilized in the paging terminal shown in FIG. 2.
Description of a Preferred Embodiment
FIG. 1 shows a block diagram of a communications system, such as a paging or data transmission system, utilizing a very low bit rate time domain speech analyzer for voice messaging in accordance with the present invention. As will be described in detail below, the paging terminal 106 uses a unique speech analyzer 107 to generate excitation parameters and spectral parameters representing the speech data, and a communication receiver, such as a paging receiver 114, uses a unique MBE synthesizer 116 to reproduce the original speech.
By way of example, a paging system will be utilized to describe the present invention, although it will be appreciated that any non-real time communication system will benefit from the present invention as well. A paging system is designed to provide service to a variety of users, each requiring different services. Some of the users may require numeric messaging services, other users alpha-numeric messaging services, and still other users may require voice messaging services. In a paging system, the caller originates a page by communicating with a paging terminal 106 via a telephone 102 through a public switched telephone network (PSTN) 104. The paging terminal 106 prompts the caller for the recipient's identification, and a message to be sent. Upon receiving the required information, the paging terminal 106 returns a prompt indicating that the message has been received by the paging terminal 106. The paging terminal 106 encodes the message and places the encoded message into a transmission queue. In the case of a voice message, the paging terminal 106 compresses and encodes the message using a speech analyzer 107. At an appropriate time, the message is transmitted using a radio frequency transmitter 108 and transmitting antenna 110. It will be appreciated that in a simulcast transmission system, a multiplicity of transmitters covering different geographic areas can be utilized as well.
The signal transmitted from the transmitting antenna 110 is intercepted by a receiving antenna 112 and processed by a receiver 114, shown in FIG. 1 as a paging receiver, although it will be appreciated that other communication receivers can be utilized as well. Voice messages received are decoded and reconstructed using an MBE synthesizer 116. The person being paged is alerted and the message is displayed or annunciated depending on the type of messaging being employed.
The digital voice encoding and decoding process used by the speech analyzer 107 and the MBE synthesizer 116, described herein, is readily adapted to the non-real time nature of paging and any non-real time communications system. These non-real time communication systems provide the time required to perform a highly computational compression process on the voice message. Delays of up to two minutes can be reasonably tolerated in paging systems, whereas delays of two seconds are unacceptable in real time communication systems. The asymmetric nature of the digital voice compression process described herein minimizes the processing required to be performed at the receiver 114, making the process ideal for paging applications and other similar non-real time voice communications. The highly computational portion of the digital voice compression process is performed in the fixed portion of the system, i.e. at the paging terminal 106. Such operation, together with the use of an MBE synthesizer 116 that operates almost entirely in the frequency domain, greatly reduces the computation required to be performed in the portable portion of the communication system.
The speech analyzer 107 analyzes the voice message and generates spectral parameters and excitation parameters, as will be described below. The spectral parameters generated include information describing the magnitude and phase of all harmonics of a fundamental pitch signal that fall within the communication system's pass band. Pitch changes significantly from speaker to speaker and will change to a lesser extent while a speaker is talking. A speaker having a low pitch voice, such as a man, will have more harmonics than a speaker with a higher pitch voice, such as a woman. In a conventional MBE synthesizer the speech analyzer 107 must derive the magnitude and phase information for each harmonic in order for the MBE synthesizer to accurately reproduce the voice message. The varying number of harmonics results in a variable quantity of data required to be transmitted. As will be described below, the present invention uses fixed dimension LPC analysis and a spectral code book to vector quantize the data into a fixed length index for transmission. In the present invention the speech analyzer 107 does not generate harmonic phase information as in prior art analyzers; instead the MBE synthesizer 116 uses a unique frequency domain technique to artificially regenerate phase information at the receiver 114. The frequency domain technique also reduces the quantity of computation performed by the MBE synthesizer 116.
The excitation parameters include a pitch parameter, an RMS parameter, and a frame voiced / unvoiced parameter. The frame voiced / unvoiced parameter describes the repetitive nature of the sound. Segments of speech that have a highly repetitive waveform are described as voiced, whereas segments of speech that have a random waveform are described as being unvoiced. The frame voiced / unvoiced parameter generated by the speech analyzer 107 determines whether the MBE synthesizer 116 uses a periodic signal as an excitation source or a noise like signal source as an excitation source. The present invention uses a highly accurate nonlinear classifier at the speech analyzer 107 to determine the frame voiced / unvoiced parameter.
Frames, or segments of speech, that are classified as voiced often have spectral portions that are unvoiced. The speech analyzer 107 and MBE synthesizer 116 produce excellent quality speech by dividing the voice spectrum into a number of sub-bands and including information describing the voiced / unvoiced nature of the voice signal in each sub-band. The sub-band voiced / unvoiced parameters, in conventional synthesizers, must be generated by the speech analyzer 107 and transmitted to the MBE synthesizer 116. The present invention determines a relationship between the sub-band voiced / unvoiced information and the spectral information and appends a ten band voicing code book containing voiced / unvoiced likelihood parameters to a spectral code book. The index of the ten band voicing code book is the same as the index of the spectral code book, thus only one index need be transmitted. The present invention eliminates the necessity of transmitting the ten bits used by a conventional MBE synthesizer to specify the voiced / unvoiced parameters of each of the ten sub bands, as will be described below. The MBE synthesizer 116, at the receiver 114, uses the probabilities provided in the ten band voicing code book along with the spectral parameters to determine the voiced / unvoiced parameters for each band.
The pitch parameter defines the fundamental frequency of the repetitive portion of speech. Pitch is measured in vocoders as the period of the fundamental frequency. The human auditory function is very sensitive to pitch, and errors in pitch have a major impact on the perceived quality of the speech reproduced by the MBE synthesizer 116. Communication systems, such as paging systems, that receive speech input via the telephone network have to detect pitch when the fundamental pitch frequency has been severely attenuated by the network. Conventional pitch detectors determine pitch information by use of highly computational auto-correlation calculations in the time domain, and because of the loss of the fundamental frequency components, sometimes detect the second or third harmonic as the fundamental pitch frequency. In the present invention, a method is employed to regenerate and enhance the fundamental pitch frequency. A frequency domain calculation is used to approximate the pitch frequency and limit the search range of the auto-correlation function to a predetermined range, greatly reducing the auto-correlation calculations. The present invention also utilizes a unique method of regenerating the fundamental pitch frequencies. Pitch information from past and future frames, and a limited auto-correlation search, provide a robust pitch detector and tracker capable of detecting and tracking pitch under adverse conditions.
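By way of illustration, the limited-range search idea can be sketched in Python as follows. This is a minimal sketch rather than the patented implementation: the search half-width, the minimum lag of twenty samples (400 Hz at an 8,000 sample per second rate), and the normalization are assumptions introduced for the example.

import numpy as np

def limited_autocorr_pitch(x, coarse_period, half_width=10):
    """Refine a coarse pitch period estimate by searching the
    normalized auto-correlation only near that estimate.

    x             -- one frame of speech samples
    coarse_period -- pitch period, in samples, from a frequency
                     domain approximation
    half_width    -- illustrative search half-range (an assumption)
    """
    x = np.asarray(x, float)
    lo = max(20, coarse_period - half_width)     # 20 samples = 400 Hz at 8 kHz
    hi = min(len(x) - 1, coarse_period + half_width)
    energy = np.dot(x, x) + 1e-12                # guard against silent frames
    best_lag, best_r = lo, -1.0
    for lag in range(lo, hi + 1):
        r = np.dot(x[:-lag], x[lag:]) / energy   # normalized correlation at lag
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r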
The RMS parameter is a measurement of the total energy of all the harmonics in a frame. The RMS parameter is generated by the speech analyzer 107 and is used by the MBE synthesizer 116 to establish the volume of the reproduced speech.
An electrical block diagram of the paging terminal 106 and the radio frequency transmitter 108 utilizing the digital voice compression process in accordance with the present invention is shown in FIG. 2. The paging terminal 106 shown is of a type that would be used to serve a large number of simultaneous users, such as in a commercial Radio Common Carrier (RCC) system. The paging terminal 106 utilizes a number of input devices, signal processing devices and output devices controlled by a controller 216. Communication between the controller 216 and the various devices that make up the paging terminal 106 is handled by a digital control bus 210. Distribution of digitized voice and data is handled by an input time division multiplexed highway 212 and an output time division multiplexed highway 218. It will be appreciated that the digital control bus 210, input time division multiplexed highway 212 and output time division multiplexed highway 218 can be extended to provide for expansion of the paging terminal 106.
An input speech processor section 205 provides the interface between the PSTN 104 and the paging terminal 106. The PSTN connections can be either a plurality of multi-call per line multiplexed digital connections, shown in FIG. 2 as a digital PSTN connection 202, or a plurality of single call per line analog connections, shown in FIG. 2 as an analog PSTN connection 208.
Each digital PSTN connection 202 is serviced by a digital telephone interface 204. The digital telephone interface 204 provides the necessary signal conditioning, synchronization, de-multiplexing, signaling, supervision, and regulatory protection requirements for operation of the digital voice compression process in accordance with the present invention. The digital telephone interface 204 can also provide temporary storage of the digitized voice frames to facilitate interchange of time slots and time slot alignment necessary to provide an access to the input time division multiplexed highway 212. As will be described below, requests for service and supervisory responses are controlled by the controller 216. Communication between the digital telephone interface 204 and the controller 216 passes over the digital control bus 210.
Each analog PSTN connection 208 is serviced by an analog telephone interface 206. The analog telephone interface 206 provides the necessary signal conditioning, signaling, supervision, analog to digital and digital to analog conversion, and regulatory protection requirements for operation of the digital voice compression process in accordance with the present invention. The frames, or segments of speech, digitized by the analog to digital converter 207 are temporarily stored in the analog telephone interface 206 to facilitate interchange of time slots and time slot alignment necessary to provide an access to the input time division multiplexed highway 212. As will be described below, requests for service and supervisory responses are controlled by a controller 216. Communication between the analog telephone interface 206 and the controller 216 passes over the digital control bus 210.
When an incoming call is detected, a request for service is sent from the analog telephone interface 206 or the digital telephone interface 204 to the controller 216. The controller 216 selects a digital signal processor 214 from a plurality of digital signal processors. The controller 216 couples the analog telephone interface 206 or the digital telephone interface 204 requesting service to the digital signal processor 214 selected via the input time division multiplexed highway 212.

The digital signal processor 214 can be programmed to perform all of the signal processing functions required to complete the paging process, including the function of the speech analyzer 107. Typical signal processing functions performed by the digital signal processor 214 include digital voice compression using the speech analyzer 107 in accordance with the present invention, dual tone multi frequency (DTMF) decoding and generation, modem tone generation and decoding, and pre-recorded voice prompt generation. The digital signal processor 214 can be programmed to perform one or more of the functions described above. In the case of a digital signal processor 214 that is programmed to perform more than one task, the controller 216 assigns the particular task needed to be performed at the time the digital signal processor 214 is selected, or in the case of a digital signal processor 214 that is programmed to perform only a single task, the controller 216 selects a digital signal processor 214 programmed to perform the particular function needed to complete the next step in the process. The operation of the digital signal processor 214 performing dual tone multi frequency (DTMF) decoding and generation, modem tone generation and decoding, and pre-recorded voice prompt generation is well known to one of ordinary skill in the art. The operation of the digital signal processor 214 performing the function of the speech analyzer 107 in accordance with the present invention is described in detail below.
The processing of a page request, in the case of a voice message, proceeds in the following manner. The digital signal processor 214 that is coupled to an analog telephone interface 206 or a digital telephone interface 204 prompts the originator for a voice message. The digital signal processor 214 compresses the voice message received using a process described below. The compressed digital voice message generated by the compression process is coupled to a paging protocol encoder 228, via the output time division multiplexed highway 218, under the control of the controller 216. The paging protocol encoder 228 encodes the data into a suitable paging protocol. One such encoding method is the inFLEXion™ protocol, developed by Motorola Inc. of Schaumburg, IL., although it will be appreciated that there are many other suitable encoding methods that can be utilized as well, for example the Post Office Code Standards Advisory Group (POCSAG) code. The controller 216 directs the paging protocol encoder 228 to store the encoded data in a data storage device 226 via the output time division multiplexed highway 218. At an appropriate time, the encoded data is downloaded into the transmitter control unit 220, under control of the controller 216, via the output time division multiplexed highway 218 and transmitted using the radio frequency transmitter 108 and the transmitting antenna 110.
In the case of numeric messaging, the processing of a page request proceeds in a manner similar to the voice message with the exception of the process performed by the digital signal processor 214. The digital signal processor 214 prompts the originator for a DTMF message. The digital signal processor 214 decodes the DTMF signal received and generates a digital message. The digital message generated by the digital signal processor 214 is handled in the same way as the digital voice message generated by the digital signal processor 214 in the voice messaging case.
The processing of an alpha-numeric page proceeds in a manner similar to the voice message with the exception of the process performed by the digital signal processor 214. The digital signal processor 214 is programmed to decode and generate modem tones. The digital signal processor 214 interfaces with the originator using one of the standard user interface protocols such as the Page Entry Terminal (PET™) protocol. It will be appreciated that other communications protocols can be utilized as well. The digital message generated by the digital signal processor 214 is handled in the same way as the digital voice message generated by the digital signal processor 214 in the voice messaging case.
FIG. 3 is a flow chart which describes the operation of the paging terminal 106 and the speech analyzer 107 shown in FIG. 2 when processing a voice message. There are shown two entry points into the flow chart 300. The first entry point is for a process associated with the digital PSTN connection 202 and the second entry point is for a process associated with the analog PSTN connection 208. In the case of the digital PSTN connection 202, the process starts with step 302, receiving a request over a digital PSTN line. Requests for service from the digital PSTN connection 202 are indicated by a bit pattern in the incoming data stream. The digital telephone interface 204 receives the request for service and communicates the request to the controller 216.
In step 304, information received from the digital channel requesting service is separated from the incoming data stream by digital frame de-multiplexing. The digital signal received from the digital PSTN connection 202 typically includes a plurality of digital channels multiplexed into an incoming data stream. The digital channel requesting service is de-multiplexed and the digitized speech data is then stored temporarily to facilitate time slot alignment and multiplexing of the data onto the input time division multiplexed highway 212. A time slot for the digitized speech data on the input time division multiplexed highway 212 is assigned by the controller 216. Conversely, digitized speech data generated by the digital signal processor 214 for transmission to the digital PSTN connection 202 is formatted suitably for transmission and multiplexed into the outgoing data stream.
Similarly with the analog PSTN connection 208, the process starts with step 306 when a request from the analog PSTN line is received. On the analog PSTN connection 208, incoming calls are signaled by either low frequency AC signals or by DC signaling. The analog telephone interface 206 receives the request and communicates the request to the controller 216.
In step 308, the analog voice message is converted into a digital data stream by the analog to digital converter 207, which functions as a sampler for generating voice message samples and a digitizer for digitizing the voice message samples. The analog signal received over its total duration is referred to as the analog voice message. The analog signal is sampled by the analog to digital converter 207, generating voice samples, which are then digitized, generating digitized speech samples. The samples of the analog signal are referred to as speech samples. The digitized voice samples are referred to as digital speech data. The digital speech data is multiplexed onto the input time division multiplexed highway 212 in a time slot assigned by the controller 216. Conversely, any voice data on the input time division multiplexed highway 212 that originates from the digital signal processor 214 undergoes a digital to analog conversion before transmission to the analog PSTN connection 208.
As shown in FIG. 3, the processing path for the analog PSTN connection 208 and the digital PSTN connection 202 converge in step 310, when a digital signal processor is assigned to handle the incoming call. The controller 216 selects a digital signal processor 214 programmed to perform the digital voice compression process. The digital signal processor 214 assigned reads the data on the input time division multiplexed highway 212 in the previously assigned time slot.
The data read by the digital signal processor 214 is stored as frames, or segments of speech, for processing, in step 312, as uncompressed speech data. The stored uncompressed speech data is processed by the speech analyzer 107 at step 314, which will be described in detail below. The compressed voice data derived by the speech analyzer at step 314 is encoded suitably for transmission over a paging channel, in step 316. In step 318, the encoded data is stored in a paging queue for later transmission. At the appropriate time the queued data is sent to the radio frequency transmitter 108 at step 320 and transmitted, at step 322.
FIG. 4 is a block diagram showing an overview of the data flow in the speech analyzer process at step 314. Stored digitized speech samples 402, herein called speech data, that were stored in step 312 are retrieved from the memory and coupled to a framer 404. The framer 404 segments the speech data into adjacent frames, each of which, by way of example, is two hundred digitized speech samples within a window of two hundred fifty-six digitized speech samples that is centered on the current frame and overlaps the previous and future frames. The output of the framer 404 is coupled to a pitch determiner 414. The output of the framer 404 is also coupled to a delay 405, which provides a one frame delay and which in turn is coupled to a second one frame delay 407. The one frame delay 405 and the second one frame delay 407 delay and buffer the output of the framer 404 to match the delay through the pitch determiner 414, as will be described below. The output of the second one frame delay 407 is coupled to an LPC analyzer 406, an energy calculator 410, and a frame voicing classifier 412.
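A minimal sketch of such a framer, using the two hundred sample frame and two hundred fifty-six sample window given above, follows; the zero padding at the ends of the message is an assumption made so that the first and last windows remain centered on their frames.

import numpy as np

def frame_speech(speech, frame_len=200, window_len=256):
    """Split speech data into adjacent frames of frame_len samples,
    each carried inside a window_len sample analysis window centered
    on the frame so that it overlaps the previous and future frames."""
    pad = (window_len - frame_len) // 2               # 28 samples on each side
    padded = np.pad(np.asarray(speech, float), pad)   # zero pad the message edges
    windows = [padded[start:start + window_len]
               for start in range(0, len(speech) - frame_len + 1, frame_len)]
    return np.array(windows)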
During the development of an MBE voicing code book 416, the output of the second one frame delay 407 is also coupled to a ten band voicing analyzer 408. The ten band voicing analyzer 408 is coupled to an MBE voicing code book 416. The MBE voicing code book 416 is not used by the paging terminal 106 during normal operation and it is not necessary for the MBE voicing code book 416 to be stored at the paging terminal 106. The MBE voicing code book 416 is used by the receiver 114 as is described in copending U.S. Patent Application Serial No. (Attorney's Docket No. PT02122U).
The LPC analyzer 406 is coupled to a quantizer 422. The quantizer 422 is coupled to a first spectral code book 418 and a second residue code book 420. The quantizer 422 generates a first eleven bit index 426 and a second eleven bit index 428 that are the quantization of the spectral information of the speech frame from the second one frame delay 407. The first eleven bit index 426 and the second eleven bit index 428 are stored in a thirty-six bit transmit data buffer 424 for transmission.
The output of the energy calculator 410 is six bit RMS data 430 and is a measurement of the energy of the speech frame from the second one frame delay 407. The six bit RMS data 430 is stored in the thirty-six bit transmit data buffer 424 for transmission.
The output of the frame voicing classifier 412 is a single bit per frame voiced / unvoiced data word 432 defining the voiced / unvoiced characteristics of the speech frame from the second one frame delay 407. The single bit per frame voiced / unvoiced data word 432 is stored in the thirty-six bit transmit data buffer 424 for transmission.
The output of the pitch determiner 414 is a seven bit pitch data word 434 and is a measurement of the pitch of the speech frame generated by the framer 404. The seven bit pitch data word 434 is stored in the thirty-six bit transmit data buffer 424 for transmission. The pitch determiner 414 is also coupled to the frame voicing classifier 412. Some of the intermediate results of the pitch calculations by the pitch determiner 414 are used by the frame voicing classifier 412 in the determination of the frame voiced / unvoiced characteristics.
In the preferred embodiment of the present invention the data generated from three frames of speech samples are stored in buffers. The frame of speech samples that has been delayed by the duration of two frames is referred to herein as the current frame. The speech analyzer 107 analyzes the speech data after a two frame delay to generate the speech parameters representing the current segment of speech. The three frames of speech stored in the buffers contain speech from the current frame, two future frames relative to the current frame, and previous results from two past frames relative to the current frame. The speech analyzer 107 analyzes frames of speech data in the future to establish trends such that current parameters will be consistent with future trends. The output of the framer 404, S2(i), is delayed by one frame time by the one frame delay 405 to generate S1(i). The output S1(i) of the one frame delay 405 is delayed again by the second one frame delay 407 to generate S(i). S(i) is referred to herein as the current frame. Because the frame S1(i) comes one frame after the current frame S(i), S1(i) is in the future relative to S(i) and is referred to herein as the first future frame. In the same manner, S2(i) comes two frames after the current frame S(i) and is referred to herein as the second future frame.
The LPC analyzer 406 performs a tenth order LPC analysis on the current frame of speech data to generate ten LPC spectral parameters 409. The ten LPC spectral parameters 409 are coefficients of a tenth order polynomial representing the magnitude of the harmonics contained in the speech frame. The LPC analyzer 406 arranges the ten LPC spectral parameters 409 into a spectral vector 411.
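The patent does not specify how the tenth order LPC analysis is performed; the auto-correlation method with the Levinson-Durbin recursion, sketched below, is one common way to derive ten such parameters from a frame of speech samples, offered here only as an illustrative assumption.

import numpy as np

def lpc_coefficients(frame, order=10):
    """Tenth order LPC analysis by the auto-correlation method with
    the Levinson-Durbin recursion (an assumed, conventional method)."""
    frame = np.asarray(frame, float)
    # Auto-correlation of the frame for lags 0 through order.
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:])
                  for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-12                        # prediction error energy
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i-1:0:-1])) / err   # reflection coefficient
        a_prev = a.copy()
        a[1:i+1] += k * a_prev[i-1::-1]       # update predictor coefficients
        err *= (1.0 - k * k)
    return a[1:]                              # ten LPC spectral parameters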
The quantizer 422 quantizes the spectral vector 411 generated by the LPC analyzer 406 into two eleven bit code words. The vector quantization function utilizes a plurality of predetermined spectral vectors identified by a plurality of indexes, comprising a spectral code book 418, which is stored in a memory in the digital signal processor 214. Each predetermined spectral vector 419 of the spectral code book 418 is identified by an eleven bit index and preferably contains ten spectral parameters 417. The spectral code book 418 preferably contains 2048 predetermined spectral vectors. The vector quantization function compares the spectral vector 411 with every predetermined spectral vector 419 in the spectral code book 418 and calculates a set of distance values representing distances between the spectral vector 411 and each predetermined spectral vector 419. The first distance calculated and its index are stored in a buffer. Then, as each additional distance is calculated, it is compared with the distance stored in the buffer, and when a shorter distance is found, that distance and its index replace the previous distance and index. The index of the predetermined spectral vector 419 having a shortest distance to the spectral vector 411 is selected in this manner. The quantizer 422 quantizes the spectral vector 411 in two stages. The index selected is a first stage result.
In the second stage, the difference between the predetermined spectral vector 419 selected in stage one and the spectral vector 411 is determined. The difference is referred to as the residue spectral vector. The residue spectral vector is compared with a set of predetermined residue vectors. The set of predetermined residue vectors comprises a second code book, or residue code book 420, and is also stored in the digital signal processor 214. The distance between the residue spectral vector and each predetermined residue vector of the residue code book 420 is calculated. The distance 433 and the corresponding index 429 of each distance calculation are stored in an index array 431. The index array 431 is searched and the index of the predetermined residue vector of the residue code book 420 having a shortest distance to the residue spectral vector is selected. The index selected is the second stage result.
The eleven bit first stage result becomes the first eleven bit index 426 and the eleven bit second stage result becomes the second eleven bit index 428 that are stored in the thirty-six bit transmit data buffer 424 for transmission. The transmit data buffer 424 is also referred to herein as an output buffer.
The distance between a spectral vector 411 and a predetermined spectral vector 419 is typically calculated using a weighted sum of squares method. This distance is calculated by subtracting the value of one of the ten LPC spectral parameters 409 in a spectral vector 411 from the value of the corresponding predetermined spectral parameter 417 in the predetermined spectral vector 419, squaring the result, and multiplying the squared result by a corresponding weighting value from a calculated weighting array. The value of the calculated weighting array is calculated from the spectral vector using a procedure well known to one ordinarily skilled in the art. This calculation is repeated on every parameter of the ten LPC spectral parameters 409 in the spectral vector 411 and the corresponding predetermined spectral parameter 417 in the predetermined spectral vector 419. The sum of the results of these calculations is the distance between the predetermined spectral vector 419 and the spectral vector 411. In the preferred embodiment of the present invention, the values of the parameters of the weighting array have been determined empirically by a series of listening tests.
The distance calculation described above can be shown by the following formula,

d_k = \sum_{h=1}^{10} w_h \left( a_h - b(k)_h \right)^2

where:
b is a preselected code book,
d_k equals the distance between the spectral vector and predetermined spectral vector k of the code book b,
w_h equals the weighting value of parameter h of the calculated weighting array,
a_h equals the value of the parameter h of the spectral vector,
b(k)_h equals the parameter h in predetermined spectral vector k of the code book b, and
h is an index designating parameters in the spectral vector or the corresponding parameter in the predetermined spectral vector.
As described above, a set of two eleven bit code books is utilized; however it will be appreciated that more than one code book and code books of different sizes, for example ten bit code books or twelve bit code books, can be used as well. It will also be appreciated that a single code book having a larger number of predetermined spectral vectors and a single stage quantization process can also be used, or that a split vector quantizer, which is well known to one of ordinary skill in the art, can be used to code the spectral vectors as well. It will also be appreciated that two or more sets of code books representing different dialects or languages can also be provided.
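A minimal sketch of the weighted distance and the two stage search described above follows; the contents of the code books and of the calculated weighting array are assumed to be given, and applying the same weighting array in the second stage is an assumption made for the example.

import numpy as np

def weighted_distance(v, c, w):
    """Weighted sum of squares distance between spectral vector v and
    one predetermined spectral vector c, per the formula above."""
    return float(np.sum(w * (v - c) ** 2))

def two_stage_quantize(v, spectral_book, residue_book, w):
    """Two stage vector quantization: select the nearest entry of the
    2048 entry spectral code book, then quantize the residue against
    the residue code book.  Returns the two eleven bit indexes."""
    d1 = [weighted_distance(v, c, w) for c in spectral_book]
    i1 = int(np.argmin(d1))                  # first stage result
    residue = v - spectral_book[i1]          # residue spectral vector
    d2 = [weighted_distance(residue, c, w) for c in residue_book]
    i2 = int(np.argmin(d2))                  # second stage result
    return i1, i2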

FIG. 5 shows a flow chart describing an empirical training process used in the development of the spectral code book 418, the residue code book 420, and the co-indexed MBE voicing code book 416, which has a predetermined association to the spectral code book 418. The training process analyzes a very large number of segments of speech to generate spectral vectors 411 and voicing vectors 425 representing each segment of speech. The process starts at step 452 where frames of digitized samples S(i) representing the segments of speech are high-pass filtered. Next at step 454, the filtered frames are windowed by a 256 point Kaiser window. The parameter of the Kaiser window is preferably set equal to six. The Kaiser window is well known in the art and is used to smooth the effect of the abrupt start and stop that occurs when a frame is analyzed independent of the surrounding speech segments. The windowed frames are then analyzed to determine the spectral and voicing characteristics of each segment of speech. The spectral characteristics are determined at step 462. At step 462 a tenth order LPC analysis is performed on the windowed frames to generate ten LPC spectral parameters 409 for each speech segment. The ten LPC spectral parameters 409 generated are grouped into spectral vectors 411.
The voicing characteristics are determined at steps 456 through 460. At step 456, a 512 point FFT is used to create an FFT spectrum. At step 458, the frequency spectrum is divided into a plurality of bands. In the preferred embodiment of the present invention ten bands are used. Each band of the resulting ten bands of the FFT spectrum is designated by the value of a variable j. Next at step 460, a voicing parameter 427 based on the entropy, E_j, described below, of the FFT spectrum within each band, is calculated. Then at step 464, the voicing parameters 427 for the ten bands are grouped into a voicing vector 425 and associated with the corresponding spectral vector 411 and stored.
When the spectral vectors 411 and the associated voicing vectors 425 for all of the very large number of segments of speech have been calculated, then at step 465 the distances between the spectral vectors 411 are calculated. The distances are calculated using the distance formula described above. Then at step 466, the spectral vectors 411 that are closer together than a predetermined distance are grouped into clusters. At step 468, a centroid of each cluster is calculated and the vector defining the centroid becomes a predetermined spectral vector 419.
Next at step 470, the ten band predetermined voicing vector 421 is calculated by averaging the voicing vectors 425 associated with the spectral vectors within the cluster of spectral vectors identified by the predetermined spectral vector 419. The average value is calculated by summing the voicing vectors 425 and then dividing the result by the total number of frames of speech grouped together in that cluster. The resulting ten band predetermined voicing vector 421 has ten voicing parameters 423 indicating the likelihood of each band being voiced or unvoiced. Then at step 474, the predetermined spectral vector 419 is stored at a location identified by an index. Next at step 476, the ten band predetermined voicing vector 421 is stored in the MBE voicing code book 416 at a location having the same index as the corresponding predetermined spectral vector 419. The common index identifies the ten band predetermined voicing vector 421 and the predetermined spectral vector 419 representing the spectral and voicing characteristics of the cluster. Every segment of the very large number of segments of speech is analyzed in this manner. Once the MBE voicing code book 416 is determined, it is only used by the MBE synthesizer 116 in the receiver 114 and need not be stored in the paging terminal 106. The ten band voicing analyzer 408 and the MBE voicing code book 416 are shown in FIG. 4 using dotted lines to illustrate that they are only used during development of the spectral code book 418 and the MBE voicing code book 416.
Next at step 478, the residue vectors are calculated. The residue vectors are the differences between the spectral vectors 411 and the predetermined spectral vector 419 representing the associated cluster. Then at step 480, the residue vectors are clustered in the same manner as the spectral vectors 411 in step 466. At step 482, a centroid is calculated for each cluster and the vector defining the centroid becomes a predetermined residue vector. Then at step 484, each predetermined residue vector is stored as one vector of a set of predetermined residue vectors comprising a residue code book 420. The residue code book 420 has a predetermined residue vector for each cluster derived.
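The clustering and centroid steps can be sketched as follows. The greedy, single pass assignment and the threshold value are assumptions; the flow chart states only that vectors closer than a predetermined distance are grouped, that each cluster centroid becomes a code book vector, and that the averaged voicing vectors are stored under a common index.

import numpy as np

def train_code_books(vectors, voicing, w, threshold):
    """Cluster spectral vectors 411 (an n x 10 array), take each
    cluster centroid as a predetermined spectral vector 419, and
    average the associated voicing vectors 425 to form the co-indexed
    voicing code book entries (a simplified, assumed clustering)."""
    vectors = np.asarray(vectors, float)
    voicing = np.asarray(voicing, float)
    centroids, members = [], []
    for idx, v in enumerate(vectors):
        for c_i, c in enumerate(centroids):
            if np.sum(w * (v - c) ** 2) < threshold:   # within the cluster
                members[c_i].append(idx)
                break
        else:                                          # no cluster close enough
            centroids.append(v)
            members.append([idx])
    spectral_book = np.array([vectors[m].mean(axis=0) for m in members])
    voicing_book = np.array([voicing[m].mean(axis=0) for m in members])
    return spectral_book, voicing_book                 # common index per cluster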
The following formula is used to calculate the entropy of each band in each speech frame,

E_j = -\sum_{i} p_{ij} \log p_{ij}

where:

p_{ij} = \frac{P_{ij}}{\sum_{i} P_{ij}}

P_ij equals an FFT spectral element,
j equals the harmonic band, and
i equals the harmonic within band j.
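A sketch of the band entropy calculation follows; the placement of the ten band edges over the FFT bins is an assumption, since the exact band boundaries are not given here.

import numpy as np

def band_entropies(fft_mag, band_edges):
    """Entropy E_j of the FFT spectrum within each band j, per the
    formula above.  band_edges is an assumed list of (lo, hi) FFT bin
    ranges defining the ten bands."""
    entropies = []
    for lo, hi in band_edges:
        P = np.asarray(fft_mag[lo:hi], float)      # spectral elements P_ij
        p = P / (P.sum() + 1e-12)                  # normalized p_ij
        entropies.append(-np.sum(p * np.log(p + 1e-12)))
    return np.array(entropies)                     # one voicing parameter per band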
The RMS value of the frame energy is calculated by the energy calculator 410. The RMS frame energy is calculated by the following formula,

RMS = \sqrt{\frac{1}{N} \sum_{n=1}^{N} s(n)^2}

where:
s(n) equals the magnitude of the speech sample n, and
N equals the number of speech samples in the speech frame.
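In code the calculation is direct; a minimal sketch:

import numpy as np

def frame_rms(frame):
    """RMS frame energy per the formula above."""
    frame = np.asarray(frame, float)
    return float(np.sqrt(np.mean(frame ** 2)))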
The pitch determiner 414 determines the pitch of the excitation source used by the MBE synthesizer 116 in the receiver 114. Pitch is defined herein as the number of speech samples between repetitive portions of speech. FIG. 6 shows an example of a portion of an analog speech waveform of a segment of speech 502. The portion of speech, in this example, is very repetitive and is classified as voiced. In the example, the distance between the repetitive portions is forty-three voice samples and the pitch is said to be 43. In the preferred embodiment of the present invention, the sampling rate is 8,000 samples per second, or 125 microseconds (µs) per sample. Therefore, the time between peaks is 5.375 milliseconds (ms). The fundamental frequency of the analog speech waveform of the segment of speech 502 is the reciprocal of the period, or 186 Hz.
FIG. 7 is a plot of two pitch functions, y(i) 602 and yt(i) 606, developed by the pitch determiner 414 corresponding to the analog speech waveform of the segment of speech 502 of FIG. 6. The human voice is very complex and an analysis of any portion will reveal the presence of many different frequency components. The plot of the function y(i) 602 shows the amplitude of the various components versus the pitch of those components. In this example, it is clear that there is a peak 604 at a pitch of 43. The determination and use of y(i) 602 and yt(i) 606 will be described below.
FIG. 8 shows an example of a portion of an analog waveform of a segment of speech 702. This portion of speech is very random and is classified as unvoiced. FIG. 9 is a plot of two pitch functions developed by the pitch determiner 414 corresponding to the analog waveform of the segment of speech 702 of FIG. 8. The plot of the function y(i) 802 shows the amplitude of the various components versus the pitch of those components. In this example there is no clear peak. The pitch determiner 414 examines the current frame and future frames to determine the correct pitch. The function yt(i) 804 is developed by the pitch determiner 414 by utilizing information from current and future frames as will be described below.
FIG. 10 shows an example of a portion of an analog waveform of a segment of speech 902. This portion starts very randomly and then develops a repetitive portion and is referred to as a transitional period of speech. FIG. 11 shows a plot of the function y(i) 1002 corresponding to the analog waveform of the segment of speech 902 of FIG. 10. The function y(i) 1002 does not have a clear peak. A plot of the function yt(i) 1004 shows a more defined peak. The function yt(i) is developed by the pitch determiner 414 by utilizing information from current and future frames as will be described below.
FIG. 12 is a block diagram representing an overview of the data flow for the pitch determiner 414. A frame of speech samples S2(i) 1102 from the framer 404 is passed to a digital low pass filter 1104 for limiting the spectrum of the windowed speech samples to an anticipated range of pitch components. The low pass filter 1104 preferably has a cutoff frequency of 800 Hz. Low pass filtered speech samples, x2(i), are fed to a pitch function generator 1106. The pitch function generator 1106 processes the low pass filtered speech samples to generate a pitch function y2(i) that is an approximation of the amplitude of the pitch components versus the pitch.
The pitch function y2(i) is fed to a one frame delay and buffer 1110 to generate the pitch function y1(i). The pitch function y1(i) then is fed to a one frame delay and buffer 1112 to generate the pitch function y(i). The time delays generated by the one frame delay and buffer 1110 and the one frame delay and buffer 1112 provide the pitch tracker 1114 with three frames of pitch information. The low pass filtered speech samples, x2(i), from the low pass filter 1104 are also fed to a two frame delay buffer 1108 to generate the two frame delayed low pass filtered speech samples, x(i). The pitch function y(i) and the two frame delayed low pass filtered speech samples x(i) are referred to as the current frame. It is important for the understanding of this operation to keep in mind that the current frame has been delayed by two frames and that the pitch is not being determined in real time. The pitch function y1(i), delayed one frame, is referred to as being a first future frame, and the pitch function y2(i) is referred to as being two frames in the future, or a second future frame. The definitions of the terms current frame, first future frame and second future frame correspond to the definitions of the same terms used to describe S(i), S1(i) and S2(i) above in reference to FIG. 4.
The pitch tracker 1114 uses a pitch enhancer 1116 and a pitch detector 1118 to analyze the current frame pitch detection function, y(i), the two future frames of pitch functions, y1(i) and y2(i), and the current frame of the low pass filtered speech samples, x(i), to generate a first pitch candidate based on current and future frames. The pitch tracker 1114 also generates a second pitch candidate using a magnitude summer 1122 and a pitch detector 1120 and data from the current segment of speech and data from preceding segments of speech. The selection logic 1126 acts as a candidate selector to choose the most viable pitch from the first pitch candidate and the second pitch candidate. A seven bit pitch data word 434 is generated by the pitch tracker 1114, and represents the measurement of the pitch of the current frame of speech. The seven bit pitch data word 434 is stored in the thirty-six bit transmit data buffer 424 for transmission.
FIG. 13 is a flow chart showing details of the pitch function generator 1106. The pitch function generator 1106 determines a function relating the magnitude of the spectral frequency components versus pitch for the frame of speech currently being processed. From this function an approximation of the pitch can be made. The low pass filtered speech samples, X2(i) 1202, are coupled to a squarer 1204 for generating squared digitized speech samples. The squaring is performed on a sample by sample basis. The squaring of X2(i) 1202 produces a number of new frequency components. The new frequency components contain the sums and differences of the frequencies of the various components of the low pass filtered speech samples, X2(i) 1202. The difference components of adjacent harmonics of the fundamental pitch frequency have the same frequency as the original pitch frequency. The regeneration of the fundamental pitch frequency is important because much of this portion of the speech spectrum is lost when the analog speech signal passes through the telephone network.
The squared samples are then preferably filtered using a Haar wavelet filter 1206. The Haar wavelet filter emphasizes the location of glottal events embedded in the original speech signal, increasing the accuracy of the pitch detection function. The Haar wavelet filter 1206 has a z transform transfer function as follows:
(Formula Removed)

The Fast Fourier Transform (FFT) calculator 1208 performs a 256 point FFT on the filtered signal generated by the Haar wavelet filter 1206. The discrete FFT spectrum, X2(k), generated by the FFT calculator 1208 has discrete components ranging from k equals -128 to +128. Because the Haar filtered signal is a real signal, the resulting FFT discrete spectrum is a symmetrical spectrum and all the spectral information is in either half. The pitch function generator 1106 uses only the positive components. The resulting positive components are spectrally shaped by the spectral shaper 1210 to eliminate components outside the anticipated pitch range. The spectral shaper 1210 sets the spectral components greater than k equals 47 to zero.
The absolute value of the discrete components produced by the spectral shaper 1210 is calculated by the absolute value calculator 1212. The absolute value calculator 1212 calculates the absolute value of the components of X2(k), generating a zero phase spectrum.
An Inverse Fast Fourier Transform (IFFT) calculation is performed by the IFFT calculator 1214 on the absolute value of the spectrally shaped function X2(k). The IFFT of the absolute value of the spectrally shaped function X2(k) results in a time domain function resembling the time auto-correlation of the filtered x2(i) 1202. The pitch detection function y2(i) 1218 is produced by normalizing each pitch component produced by the IFFT calculator 1214 by the normalizer 1216. The normalizer 1216 normalizes the discrete components of the function produced by the IFFT calculator 1214 by dividing those components by the first, or D.C., component of that function. A plot of y(i) 602 for a voiced portion of speech is shown in FIG. 7. In this example the peak 604 at a pitch of 43 is clearly identifiable.
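The signal flow of FIG. 13 can be summarized in the following Python sketch, offered only as an aid to the reader: the Haar wavelet filter taps are left as an input, haar_taps, because the transfer function itself is not reproduced here, and the samples are assumed to be already low pass filtered.

    import numpy as np

    def pitch_function(x2, haar_taps):
        """Sketch of FIG. 13: square, wavelet filter, 256 point FFT,
        spectral shaping to k <= 47, zero phase (absolute value)
        spectrum, IFFT, and normalization by the D.C. component."""
        s = x2.astype(float) ** 2                   # regenerate the fundamental
        s = np.convolve(s, haar_taps, mode="same")  # emphasize glottal events
        X = np.fft.fft(s, 256)                      # 256 point FFT
        X[48:] = 0.0                                # keep only components k <= 47
        y = np.real(np.fft.ifft(np.abs(X)))         # autocorrelation-like function
        return y / y[0]                             # normalize by the D.C. term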
FIG. 14 is a block diagram detailing the operation of the pitch tracker 1114. The pitch tracker 1114 produces two pitch values, P 1320 and P'. P 1320 is the pitch value determined for the current segment of speech and P' is a value used in the determination of the pitch value of future frames of speech. The pitch tracker 1114 uses the current frame pitch function y(i) 1308 and the pitch functions for the two future frames, y1(i) 1304 and y2(i) 1302, to determine and track the pitch of the speech. The pitch tracker generates two possible pitch value candidates and then determines which of the two is the most probable value. The first candidate is a function of the current frame pitch function y(i) 1308 and the two future frames y1(i) 1304 and y2(i) 1302. The second candidate is a function of past pitch values and the current pitch function y(i). The second candidate is the most probable candidate during periods of slowly changing pitch, while the first candidate is the most probable during periods of speech where there is a sharp departure from the previous pitch.

A pitch enhancer 1116 comprises two dynamic peak enhancers 1310, 1311 for generating an enhanced pitch function comprising a plurality of enhanced pitch components. The dynamic peak enhancer 1310 uses the second future frame y2(i) 1302 coupled to a first input to enhance peaks in the future frame y1(i) 1304 coupled to a second input. The function generated is coupled to the first input of the second dynamic peak enhancer 1311 where it is used to enhance any peaks in the current frame pitch function y(i) 1308 coupled to a second input. Thus, the resulting function, yt(i), is the current frame pitch function enhanced by the pitch functions of both future frames. The value of this enhancement can be seen in FIG. 11. FIG. 11 is a plot of y(i) and yt(i) during a period of transition from unvoiced to voiced speech. While it is difficult to detect a clear peak in y(i) 1002, the peak in yt(i) 1004 is clear. The operation of the dynamic peak enhancer 1310 is explained below. In the preferred embodiment of the present invention, pitch detection functions from two future frames are used to enhance the peaks in the pitch detection function, y(i). However, it will be appreciated that one or more future frames of pitch detection functions can be used as well.
A peak picking function 1314 searches the function yt(i) for an enhanced pitch component having a largest amplitude and returns the pitch value Pa and the magnitude A at pitch value Pa. A localized auto-correlation function 1316 searches a limited range about pitch value Pa for an auto-correlation peak. The auto-correlation function is a very computationally intensive process; limiting the auto-correlation search to a range of about 30 percent of the range that would have to be searched using conventional methods results in a large saving of computational time. The localized auto-correlation function 1316 returns a pitch value P'a that is the location of the point of maximum auto-correlation in the vicinity of pitch value Pa. The pitch value Pa is the first pitch value candidate of the current speech frame. The localized auto-correlation function 1316 also returns A', the auto-correlation value calculated at pitch value P'a. The operation of the localized auto-correlation function 1316 is described below.
A selection logic 1126, described below, determines a pitch value P 1320 and P'. The pitch value P' from the previous frame is used in the determination of the pitch in the next frame. The pitch value P' is buffered and saved for one frame by delay T 1322. The output of delay T 1322 becomes the pitch value P' from the previous frame. A peak picking function 1330 is coupled to y(i) and the pitch value P' from the previous frame. The peak picking function 1330 searches y(i) between i = 0.85P' and i = 1.15P', where P' is the delayed value, and returns a maximum magnitude, B'0, and the value of i, pitch value Pb, at the maximum.
A localized auto-correlation function 1332 searches a limited range about pitch value Pb for an auto-correlation peak. The localized auto-correlation function 1332 returns a pitch value P'b that is the location of the point of maximum auto-correlation in the vicinity of pitch value Pb. The pitch value P'b is the second pitch value candidate of the current speech frame. The localized auto-correlation function 1332 also returns B', the auto-correlation value calculated at pitch value P'b. The operation of the localized auto-correlation function 1332 is described below.
A function y(P) 1324 returns a magnitude B0 of the function y(i) at i equals pitch value P from the current frame. The magnitude B0 is delayed one frame by delay T 1326 to become the magnitude B1 of the previous frame. The magnitude B1 is delayed one frame by delay T 1338 to become the magnitude B2 of the second previous frame. The magnitude B1, the magnitude B2 and the magnitude B'0 are summed by the summer 1340. The summer returns the result of the summation, B.
Pitch value Pa, pitch value P'a, A and A', representing the first pitch value candidate, and pitch value Pb, pitch value P'b, B and B', representing the second pitch value candidate, are coupled to the selection logic 1126. The selection logic 1126 evaluates the inputs and determines the most likely pitch value P 1320. The selection logic 1126 then sets a selector 1346 and a selector 1348 accordingly. Since the pitch range is from 20 to 128 in the preferred embodiment of the present invention, a value of one is subtracted from the pitch value, resulting in a range of 19 to 127, so the pitch can be represented by seven bits. The seven bit pitch data word 434 from the pitch determiner 414 is a measurement of the pitch of the speech frame generated by the framer 404. The seven bit pitch data word 434 is stored in the thirty-six bit transmit data buffer 424 for transmission. The operation of the decision logic 1318 is described below.
The value A' from the localized auto-correlation function 1316 and the value B' from the localized auto-correlation function 1332 are coupled to a max function detector 1342. The max function detector 1342 compares the values of A' and B' and returns the larger of the two as Rm 1344. The variable Rm 1344 is used below in reference to the description of the frame voiced / unvoiced parameter.
FIG. 15 is a flow chart showing details of the operation of the dynamic peak enhancer 1310. The dynamic peak enhancer 1310 uses a function V(i) coupled to a second input 1404 to enhance peaks in a function U(i) coupled to a first input 1402. At step 1406 values of an output function Z(i) are set to zero from i equals 0 to i equals 19. Then at step 1408 the value of i is set to 20.
At step 1410 a first pitch component is selected and the value of the limit N is calculated. The pitch component has a magnitude of Si. N is set equal to the greater of 1 or the value of 0.85 Si rounded down to the nearest integer value. Then at step 1412 the value of the limit M is calculated. M is set equal to the lesser of 128 or the value of 1.15 Si rounded down to the nearest integer value. The values of N and M determine a range of pitch components. Next the second input 1404, V(i), is searched within the range determined for a second pitch component having a maximum amplitude.
At step 1418 the value of the output function Z(i) is calculated, each component in the output function being an enhanced pitch component formed by summing the amplitude of the first pitch component with the maximum amplitude of the second pitch component.
At step 1420 the value of i is incremented by one. Then at step 1422 a test is made to determine if the value of i is equal to or less than 128. When at step 1422 the value of i is equal to or less than the predetermined number, 128, the process returns to step 1410 and step 1410 through step 1420 are repeated. When at step 1422 the value of i is greater than 128, the process is completed and at step 1424 the function Z(i) is returned.
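The flow chart reduces to the following Python sketch, in which the search window about each index i is interpreted from the limits of steps 1410 and 1412, and the summation of step 1418, whose formula is not reproduced here, follows the summing step recited in claim 8.

    import numpy as np

    def dynamic_peak_enhance(U, V):
        """Sketch of FIG. 15: U is the pitch function being enhanced and
        V is the enhancing pitch function, both indexed from 0 to 128."""
        Z = np.zeros(129)                      # steps 1406-1408: Z(0..19) = 0
        for i in range(20, 129):
            N = max(1, int(0.85 * i))          # step 1410: lower limit
            M = min(128, int(1.15 * i))        # step 1412: upper limit
            Z[i] = U[i] + V[N:M + 1].max()     # steps 1414-1418: sum the peaks
        return Z

With two future frames, the enhancer is applied twice as described in reference to FIG. 14: first to y1(i) using y2(i), and then to y(i) using that result, yielding yt(i).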
FIG. 16 and FIG. 17 are flow charts showing the details of the localized auto-correlation function 1316 and the localized auto-correlation function 1332. FIG. 16 shows the initialization process performed before the main loop shown in FIG. 17 is performed. The correlation is a metric used to measure the similarity between two segments of speech. The correlation will be at a maximum value when the offset between the two segments is equal to the pitch. As stated above, the pitch is defined as the distance between the repetitive portions of speech, measured as the number of samples between the repetitive portions. The localized auto-correlation function 1332 reduces computation by limiting the search for the maximum auto-correlation of the speech samples, x(i), received on the second input 1504, to the vicinity of the pitch value P received on the first input 1502. The function is designed to minimize the number of calculations by observing the correlation results and intelligently determining the direction in which the maximum auto-correlation will occur. The correlation function used in the preferred embodiment of the present invention is the following normalized auto-correlation function (NACF).

(Formula Removed)

Where:
l equals the offset, and
x(n) equals the low pass filtered delayed speech samples x(i) 1306, when n = i.
Referring to FIG. 16, the pitch value, P, is received on the first input 1502 and the speech samples, x(i), are received on the second input 1504. At step 1506 the NACF is calculated for l = P - 1. The result is stored as a temporary variable, result left (Rl). Next at step 1508 the NACF is calculated for l = P + 1. The result is stored as a temporary variable, result right (Rr). Then at step 1510 the NACF is calculated for l = P. The result is stored as a temporary variable, PEAK. Then at step 1512 a copy of the temporary variable PEAK is saved in the temporary variable Re, and at step 1514 a copy of the temporary variable P is saved in the temporary variable Pe.
Next at step 1516, the left or lower limit (Pl) of the search is determined. Pl is set equal to 0.85P rounded down to the nearest integer. Then at step 1518 the right or upper limit (Pu) of the search is determined. Pu is set equal to 1.15P rounded down to the nearest integer. The initialization process is completed at point AA 1520.
FIG. 17 shows the main loop of the localized auto-correlation calculation. The process continues from point AA 1520. At step 1602 a test is made to determine if pitch value P is within the search range limits. The lower range limit is defined as the greater of the lower limit Pl and the absolute lower limit 20. The upper range limit is defined as the lesser of the upper limit Pu and the absolute upper limit of 128. When the value of pitch value P is not within this range, the localized auto-correlation calculation has been completed and the process goes to step 1614. When the value of P is within this range, the process continues at step 1604.
At step 1604 a test is made to determine whether the auto-correlation results to the right and to the left of pitch value P are less than the result at pitch value P, indicating that pitch value P is already at the peak. The test compares the correlation result PEAK with Rl and Rr. When PEAK is greater than Rr and PEAK is greater than or equal to Rl, pitch value P is determined to be at the point of maximum correlation and the process goes to step 1614. Otherwise pitch value P is not at the point of maximum correlation and the process continues at step 1606.
At step 1606 a test is made to determine if pitch value P is at the end of the search range limits. When pitch value P is equal to the lower range limit, that is, when P is equal to the greater of the lower limit Pl plus one and the absolute lower limit 20 plus one, P is at the end of the range and the process goes to step 1612. When P is not at the end of the range the process continues at step 1608.
At step 1608 a test is made to determine if the search should move to the right. When the value of Rr is greater than Rl the process goes to step 1618. When the value of Rr is not greater than Rl, then at step 1610 a test is made to determine if the search should move to the left. When the value of Rl is greater than Rr the process goes to step 1626. When the value of Rl is not greater than Rr the process continues at step 1612.
Step 1612 is performed when step 1602 through step 1610 indicate that the initial values determined at point AA 1520 represent the best correlation. At step 1612 the value of P is set to the value of Pe. Next at step 1614 Rm is set equal to PEAK. Next at step 1616 the process is completed and the values of P and Rm are returned.
At step 1618, when it is determined at step 1608 that the process should move to the right, the pitch value P is incremented by one. Next at step 1620 the value of Rl is set equal to PEAK and at step 1622 PEAK is set equal to Rr. Then at step 1624 a new value is calculated for Rr using the following formula.
Rr = NACF(P + 1)
After step 1624 the process goes to step 1602 described above.
At step 1626, when it has been determined at step 1610 that the process should move to the left, the pitch value P is decremented by one. Next at step 1628 the value of Rr is set equal to PEAK and at step 1630 PEAK is set equal to Rl. Then at step 1632 a new value is calculated for Rl using the following formula.
Rl = NACF(P - 1)
After step 1632 the process goes to step 1602 described above.
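The initialization of FIG. 16 and the main loop of FIG. 17 can be sketched in Python as follows; the NACF body shown is the conventional normalized auto-correlation and stands in for the formula omitted from this text.

    import numpy as np

    def nacf(x, lag):
        """Assumed conventional form of the NACF; the patent's exact
        formula is not reproduced in this text."""
        a, b = x[lag:], x[:len(x) - lag]
        return float(a @ b) / (np.sqrt(float(a @ a) * float(b @ b)) + 1e-12)

    def localized_autocorrelation(x, P):
        """Sketch of FIG. 16 and FIG. 17: hill-climb from candidate P
        toward increasing correlation within 0.85P..1.15P, clipped to
        the absolute pitch limits 20..128."""
        Rl, Rr, peak = nacf(x, P - 1), nacf(x, P + 1), nacf(x, P)
        Pe, Re = P, peak                      # steps 1512-1514: initial point
        lo = max(int(0.85 * P), 20)           # steps 1516 and 1602
        hi = min(int(1.15 * P), 128)          # steps 1518 and 1602
        while lo <= P <= hi:
            if peak > Rr and peak >= Rl:      # step 1604: already at the peak
                return P, peak
            if Rr > Rl:                       # steps 1618-1624: move right
                P += 1
                Rl, peak, Rr = peak, Rr, nacf(x, P + 1)
            elif Rl > Rr:                     # steps 1626-1632: move left
                P -= 1
                Rr, peak, Rl = peak, Rl, nacf(x, P - 1)
            else:                             # step 1612: keep the initial point
                return Pe, Re
        return P, peak                        # step 1614: search limit reached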

FIG. 18 is a flow chart of the selection logic 1126 used to determine whether the first pitch candidate Pa or the second pitch candidate Pb most accurately characterizes the pitch of the speech segment. The selection logic 1126 receives the following:
the pitch candidate Pa,
the magnitude A of the pitch function yt(i) at Pa,
the point of maximum correlation of the localized auto-correlation function 1316, P'a,
the correlation value A' at P'a,
the pitch candidate Pb,
the magnitude B of the pitch function y(i) 1308 at Pb,
the point of maximum correlation of the localized auto-correlation function 1332, P'b, and
the correlation value B' at P'b.
The selection logic 1126 starts at step 1714. At step 1716, the values of Pa and Pb are compared. When at step 1716 the values of Pa and Pb are equal, then at step 1744 the values of Pb and P'b are selected for P and P' respectively and the selection process is completed. When at step 1716 the values of Pa and Pb are not equal, then at step 1718 the values of A' and B' are compared. When at step 1718 the values of A' and B' are essentially equal, then at step 1744 the values of Pb and P'b are selected for P and P' respectively and the selection process is completed. When at step 1718 the values of A' and B' are not essentially equal, then at step 1720 the value of the variable C is calculated using the following formula.
(Formula Removed)
Next at step 1722 the value of a variable D is set equal to the larger of A and B. Then at step 1724 the value of the variable E is set equal to the larger of 0.12 and the quantity (0.0947 - 0.0827*D). Then at step 1726 the value of C is compared with the value of E. When at step 1726 the value of C is not greater than the value of E, the process continues at step 1728. At step 1728 the value of the variable T1 is set equal to the smaller of 1.3 and the quantity (0.6*B + 0.7). Next at step 1730 the variable T2 is set equal to the larger of 1.0 and T1. Then at step 1732 the quantity A/B is compared to the value of T2. When at step 1732 the quantity A/B is greater than the value of T2, then at step 1746 the values of Pa and P'a are selected for P and P', respectively, and the selection process is completed. When at step 1732 the quantity A/B is not greater than the value of T2, then at step 1744 the values of Pb and P'b are selected for P and P', respectively, and the selection process is completed.
When at step 1726 the value of C is greater than the value of E, the selection process continues at step 1734 where the value of a variable T3 is set equal to the smaller of A' and B'. Next at step 1736 a variable T4 is set equal to the larger of A' and B', and at step 1738 the value of a variable T5 is set equal to the larger of A and B. Then at step 1740 a test is made to determine if either of the following two conditions is true. The first condition is: T3 is equal to or less than 0.0 and T4 is greater than 0.25. The second condition is: T3 is greater than 0.0, T4 is greater than 0.92 and T5 is less than 1.0. When neither of the conditions is true at step 1740, the process continues at step 1744 where the values of Pb and P'b are selected for P and P', respectively, and the selection process is completed. When either of the conditions is true at step 1740, the process continues at step 1742 where the value of B' is compared with the value of A'. When at step 1742 the value of B' is less than the value of A', then at step 1746 the values of Pa and P'a are selected for P and P', respectively, and the selection process is completed. When at step 1742 the value of B' is not less than the value of A', then at step 1744 the values of Pb and P'b are selected for P and P', respectively, and the selection process is completed.
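The decision tree of FIG. 18 reduces to a few comparisons, sketched below in Python. Because the formula defining C is not reproduced in this text, C is taken as an input; the tolerance eps used for "essentially equal" and the names Pa1, A1, Pb1 and B1 (standing for P'a, A', P'b and B') are assumptions.

    def select_pitch(Pa, A, Pa1, A1, Pb, B, Pb1, B1, C, eps=1e-3):
        """Sketch of the selection logic 1126; returns the selected
        (P, P'). Assumes B > 0 so that the ratio A/B is defined."""
        if Pa == Pb or abs(A1 - B1) < eps:             # steps 1716 and 1718
            return Pb, Pb1
        D = max(A, B)                                  # step 1722
        E = max(0.12, 0.0947 - 0.0827 * D)             # step 1724
        if C <= E:                                     # steps 1726-1732
            T2 = max(1.0, min(1.3, 0.6 * B + 0.7))
            return (Pa, Pa1) if A / B > T2 else (Pb, Pb1)
        T3, T4 = min(A1, B1), max(A1, B1)              # steps 1734-1736
        T5 = max(A, B)                                 # step 1738
        cond = ((T3 <= 0.0 and T4 > 0.25)
                or (T3 > 0.0 and T4 > 0.92 and T5 < 1.0))  # step 1740
        if cond and B1 < A1:                           # step 1742
            return Pa, Pa1
        return Pb, Pb1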
FIG. 19 shows the frame voicing classifier 412. The frame voicing classifier 412 derives seven parameters from the current speech frame's digitized speech samples. The parameters are r1a, PDm, Rm, r1, K1, Ke, and Rrms.
The parameter r1 is the result of a normalized one sample delayed auto-correlation calculation. r1 is calculated by the following formula,
(Formula Removed)
Where:
s(n) equals S(i), when i = n, and
N equals the number of samples in the function s(n).

The parameter r1a is the result of an empirically determined formula. The calculation of the parameter is similar to r1 with the exception of the absolute value of s(n)s(n-1) being used in the numerator and the -0.5 offset.
r1a is calculated by the following formula,
(Formula Removed)
PDm is a peak value of the function y(i) within the pitch range of 20 to 128. The function y(i) is described above in reference to the description of the pitch determiner 414.
Rm 1344 is the larger of the value of the localized auto-correlation function 1316 at P'a and the value of the localized auto-correlation function 1332 at P'b. Rm 1344 is described above in reference to the description of the pitch tracker 1114.
K1 is a ratio of a low band energy to the full band energy.
K1 is calculated by the following formula,
(Formula Removed)

Where:
s1(n) equals the low pass filtered delayed speech samples, x(i) 1306, and s(n) equals the current frame speech samples S(i).
Ke is the value of the normalized energy calculated around the peak point of energy in the current speech frame.
Ke is calculated by the following formula,
(Formula Removed)
Where: d equals 4 and
nm equals the value of i at the maximum value of S(i) for the current frame.
Rrms is calculated by the following formula,
(Formula Removed)

Where:
RMSmax equals the RMS value of the largest 1024 sample segment of the speech message. The speech message is divided into 1024 sample segments and the RMS value of each is calculated using the RMS formula above. The RMS value of the segment having the largest RMS value is selected and used for RMSmax.
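Since the defining formulas are not reproduced in this text, the following Python sketch is only an assumed reading of the parameter descriptions above; in particular the placement of the -0.5 offset in r1a and the window used for Ke are interpretations.

    import numpy as np

    def voicing_parameters(s, s1, PDm, Rm, rms_max, d=4):
        """Assumed forms of the seven classifier inputs: s holds the
        current frame samples S(i) as a float array, s1 the low pass
        filtered samples, and PDm and Rm come from the pitch
        determiner described above."""
        e = float(s @ s)                            # full band energy
        r1 = float(s[1:] @ s[:-1]) / e              # one sample delayed
        r1a = float(np.abs(s[1:] * s[:-1]).sum()) / e - 0.5
        K1 = float(s1 @ s1) / e                     # low band / full band
        nm = int(np.argmax(np.abs(s)))              # peak point of energy
        w = s[max(0, nm - d):nm + d + 1]            # window of +/- d samples
        Ke = float(w @ w) / e                       # energy around the peak
        Rrms = np.sqrt(e / len(s)) / rms_max        # frame RMS over RMSmax
        return np.array([r1a, PDm, Rm, r1, K1, Ke, Rrms])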
The frame voicing classifier 412 arranges the seven input parameters into an input vector P.
(Formula Removed)
An empirically determined matrix W1 is multiplied by the input vector P using matrix multiplication. The method of determining the coefficients of the weighting matrix W1 is described below. The result of the multiplication produces an intermediate vector a1 having seven coefficients, a11 through a17.
(Formula Removed)
Matrix multiplication is a systematic procedure readily handled by a digital signal processor. The calculation 1802 of the first coefficient a11 involves calculating the summation of the following:
The product of the multiplication 1816 of the first coefficient of the first row of W1 by the first coefficient of the first column of P.
The products of the multiplications 1818-1828 of the second through seventh coefficients of the first row of W1 by the second through seventh coefficients of the first column of P, respectively.
The calculations 1804-1814 of the second through seventh coefficients, a12 through a17, are performed in a similar manner using the second through seventh rows of W1, respectively, and the first column of P.
The coefficients of the intermediate vector a1 and the coefficients of an empirically determined vector b1 1832 are processed using a tansig function 1830 to generate a second intermediate vector a2. The tansig function 1830 is a non-linear function, defined as
a2n = tansig(a1n, b1n) where,
tansig(a1n, b1n) = 2 / (1 + e^(-2(a1n + b1n))) - 1
The intermediate vector a2 is multiplied by an empirically determined matrix W2 to generate a single cell vector a3.
The matrix W2 has one row of seven empirically determined coefficients.

The vector multiplication 1834 of the intermediate vector a2 and the matrix W2 involves calculating the summation of the following:
The product of the first coefficient of the first row of W2 by the first coefficient of the first column of a2.
The products of the second through seventh coefficients of the first row of W2 by the second through seventh coefficients of the first column of a2, respectively.
The coefficient of the vector a3 and the coefficient of a second empirically determined vector b2 1836 are processed by a logsig function 1838 to generate Vf. The logsig function 1838 is a non-linear function, defined as
Vf = logsig(a31, b21) where,
(Formula Removed)

The voiced / unvoiced comparator 1840 compares the value of Vf with 0.5. When the value of Vf is greater than 0.5 the frame is classified as voiced, and when the value of Vf is less than 0.5 the frame is classified as unvoiced. When the frame is classified as voiced the V/UV bit is set to 1; otherwise it is set to 0.
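Taken together, the classifier is a small 7-7-1 network. The Python sketch below follows the matrix operations and non-linearities described above; the logsig body is the conventional form and is an assumption, since its formula is not reproduced in this text.

    import numpy as np

    def classify_frame(P, W1, b1, W2, b2):
        """Sketch of the frame voicing classifier: P is the seven
        element input vector, W1 a 7x7 matrix, b1 a seven element
        vector, W2 a seven element row vector and b2 a scalar.
        Returns the V/UV bit."""
        a1 = W1 @ P                                        # calculations 1802-1814
        a2 = 2.0 / (1.0 + np.exp(-2.0 * (a1 + b1))) - 1.0  # tansig 1830
        a3 = float(W2 @ a2)                                # multiplication 1834
        Vf = 1.0 / (1.0 + np.exp(-(a3 + b2)))              # logsig 1838 (assumed)
        return 1 if Vf > 0.5 else 0                        # comparator 1840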
The determination of the coefficients of W1, W2, b1, and b2 is an empirical training process involving several steps. A very large number of speech segments are manually analyzed by one skilled in the art, who observes their waveforms and makes a judgment as to their voicing characteristics. The voicing characteristics of the speech segments are then determined by the frame voicing classifier 412 as various coefficients for W1, W2, b1, and b2 are tried. The performance of the frame voicing classifier 412 is determined by comparing the classifier's results with the manually determined results. With the aid of a computer, the coefficients for W1, W2, b1, and b2 are varied until the desired accuracy is obtained.
FIG. 20 shows an electrical block diagram of the digital signal processor 214 utilized in the paging terminal 106 shown in FIG. 2 to perform the function of the speech analyzer 107. A processor 1904, such as one of several standard, commercially available digital signal processor ICs specifically designed to perform the computations associated with digital signal processing, is utilized. Digital signal processor ICs are available from several different manufacturers; an example is the DSP56100 manufactured by Motorola Inc. of Schaumburg, IL. The processor 1904 is coupled to a ROM 1906, a RAM 1910, a digital input port 1912, a digital output port 1914, and a control bus port 1916, via the processor address and data bus 1908. The ROM 1906 stores the instructions used by the processor 1904 to perform the signal processing functions required for the type of messaging being used and to control the interface with the controller 216. The ROM 1906 also contains the instructions used to perform the functions associated with compressed voice messaging. The RAM 1910 provides temporary storage of data and program variables, the index arrays, the input voice data buffer, and the output voice data buffer. The digital input port 1912 provides the interface between the processor 1904 and the input time division multiplexed highway 212 under control of a data input function and a data output function. The digital output port provides an interface between the processor 1904 and the output time division multiplexed highway 218 under control of the data output function. The control bus port 1916 provides an interface between the processor 1904 and the digital control bus 210. A clock 1902 generates a timing signal for the processor 1904.
The ROM 1906 contains by way of example the following: a controller interface function routine 1918, a data input function routine 1920, a gain normalization function routine 1922, a processing routine for the framer 404, a processing routine for the LPC analyzer 406, a processing routine for the ten band voicing analyzer 408, a processing routine for the energy calculator 410, a processing routine for the frame voicing classifier 412, a processing routine for the pitch determiner 414, a data output function routine 1936, one or more spectral code books 418, one or more residue code books 420, and one or more matrix weighting arrays 1942 as described above. RAM 1910 provides temporary storage for program variables 1944, index array 431, an input speech data buffer 1948 and an output speech buffer 1950. It will be appreciated that elements of the ROM 1906, such as the code book, can be stored in a separate mass storage medium, such as a hard disk drive or other similar storage devices.
In summary, speech sampled at an 8 kHz rate and encoded using conventional telephone techniques requires a data rate of 64 kilobits per second. However, speech encoded in accordance with the present invention requires a substantially slower transmission rate. For example, speech sampled at an 8 kHz rate and grouped into frames, or speech segments, representing 25 milliseconds of speech can be transmitted at an average data rate of 1,440 bits per second in accordance with the present invention (a thirty-six bit frame every 25 milliseconds corresponds to 36 / 0.025 = 1,440 bits per second). As hitherto stated, the speech analyzer of the present invention digitally encodes the voice messages in such a way that the resulting data is very highly compressed and can easily be mixed with conventional paging data sent over a paging channel. The following functions are provided that greatly improve the operation and reduce the data rate: a highly accurate FFT based pitch determination and tracking function that can determine and track pitch even when the fundamental pitch frequencies are severely attenuated, and that reduces the computational intensity of the compression process; a highly accurate non-linear frame voicing determination function; a method of providing multi-band voicing information not requiring the transmission of multi-band voicing information; and a natural sounding artificially generated excitation phase not requiring the transmission of phase information. In addition, the voice message is digitally encoded in such a way that processing within the pager, or similar portable communication device, is minimized. While specific embodiments of this invention have been shown and described, it can be appreciated that further modification and improvement will occur to those skilled in the art.


WE CLAIM:
1 A pitch determiner (414) for use with a speech analyzer for
determining a pitch within one or more sequential segments of
speech, each segment of speech being represented by a predetermined
number of digitized speech samples, said pitch determiner
comprising:
a pitch function generator (1106) for generating from the predetermined number of digitized speech samples, a plurality of pitch components representing a pitch function, wherein said pitch function defines an amplitude of each of the plurality of pitch components;
a pitch enhancer (1116), for enhancing the pitch function of a current segment of speech utilizing the pitch function of one or more sequential segments of speech, by generating a plurality of enhanced pitch components; and
a pitch detector (1118) for detecting the pitch of the current segment of speech by determining the pitch of an enhanced pitch component having a largest amplitude of the plurality of enhanced pitch components.
2 The pitch determiner (414) as claimed in claim 1, wherein it
comprises a digital filter (1104), coupled to an input of said pitch
function generator (1106), for limiting a spectrum of the segment of
speech to an anticipated range of pitch components.
3 The pitch determiner (414) as claimed in claim 1, wherein it
comprises one or more delay elements (1110, 1112) for generating the
pitch function of one or more sequential segments of speech.
4 The pitch determiner (414) as claimed in claim 1, wherein said
pitch function generator (1106) comprises:
a squarer (1204) for squaring each of the predetermined number of digitized speech samples representing a segment of speech to generate squared digitized speech samples;

an FFT calculator (1208) for deriving frequency components corresponding to the predetermined number of squared digitized speech samples representing a segment of speech;
an absolute value calculator (1212) for calculating an absolute value of the frequency components derived by the FFT calculator; and
an IFFT calculator (1214) for deriving a plurality of pitch components from the frequency components derived by the FFT calculator.
5 The pitch determiner (414) comprising the pitch function
generator (1106) as claimed in claim 4, wherein it comprises a Haar
filter (1206), coupled to said squarer (1204) and to said FFT calculator
(1208), for emphasizing glottal events embedded in the speech thereby
increasing accuracy of pitch detection.
6 The pitch determiner (414) comprising the pitch function
generator (1106) as claimed in claim 4, wherein it comprises a band
limiting filter (1210), coupled to said FFT calculator (1208) and to said
absolute value calculator (1212), for limiting the range of the
frequency components derived by the FFT.
7 The pitch determiner (414) comprising the pitch function
generator (1106) as claimed in claim 4, wherein it comprises a
normalizer (1216), coupled to said IFFT calculator (1214) for
normalizing each pitch component of said plurality of pitch
components derived therefrom.
8 The pitch determiner (414) as claimed in claim 1, wherein said
pitch enhancer (1116) comprises a dynamic peak enhancer (1310) for
generating a plurality of enhanced pitch components from a plurality
of pitch components, said dynamic peak enhancer (1310) being
programmed to perform the steps of:

a) selecting a first pitch component of a first pitch function
(1410), the first pitch component having an amplitude;
b) determining a range of pitch components (1410, 1412)
about a pitch component of a second pitch function corresponding to
the first pitch component selected;

c) selecting a second pitch component (1414) having a
maximum amplitude from within the range of pitch components;
d) summing the amplitude of the first pitch component with
the maximum amplitude of the second pitch component to generate
an enhanced pitch component (1418); and
repeating (1422) said steps of (a) through (d) for a predetermined number of pitch components of the plurality of pitch components of the first pitch function, to generate the plurality of enhanced pitch components.
9 The pitch determiner (414) as claimed in claim 8, wherein the
first pitch function represents the pitch function of the current
segment of speech and wherein the second pitch function represents
the pitch function of a succeeding segment of speech.
10 The pitch determiner (414) as claimed in claim 1, wherein the
pitch of the current segment of speech represents a first pitch
candidate and wherein a largest amplitude of the plurality of
enhanced pitch components represents a first magnitude, and wherein
said pitch determiner further comprises:
a second pitch detector (1120) for detecting a second pitch of the current segment of speech having a current magnitude, by utilizing a pitch of a preceding segment of speech and the pitch function of the current segment of speech, the second pitch detected representing a second pitch candidate;
a summer (1122) for summing the current magnitude and magnitudes of selected pitch components for one or more preceding segments of speech to generate a second magnitude, the selected pitch components for each of the one or more preceding segments of speech being determined by the pitch function and pitch of a preceding segment of speech; and
a candidate selector (1126) for selecting the first pitch candidate when a ratio of the first magnitude and the second magnitude is less than a threshold, and

selecting the second pitch candidate when a ratio of the first magnitude and second magnitude is greater than or equal to the threshold.
11 A pitch determiner for use with a speech analyzer substantially as hereinbefore described with reference to and as illustrated in the accompanying drawings.



Patent Number: 214950
Indian Patent Application Number: 130/DEL/1997
PG Journal Number: 10/2008
Publication Date: 07-Mar-2008
Grant Date: 18-Feb-2008
Date of Filing: 16-Jan-1997
Name of Patentee: MOTOROLA, INC.
Applicant Address: 1303 EAST ALGONQUIN ROAD, SCHAUMBURG, ILLINOIS 60196, UNITED STATES OF AMERICA
Inventors:
1. XIAOJUN LI, 5882 SUNPOINT CIRCLE, BOYNTON BEACH, FL 33437
2. JIAN-CHENG HUANG, 7074 CATALINA ISLE DR., LAKE WORTH, FL 33467
3. FLOYD SIMPSON, 92 SWALLOW DR., LANTANA, FL 33462
PCT International Classification Number: G10L 3/02
PCT International Application Number: N/A
PCT International Filing Date:
PCT Convention Priority: 1. Application Number 08/591,995, filed 1996-01-26, U.S.A.