Title of Invention

A METHOD AND SYSTEM FOR TISSUE DIFFERENTIATION

Abstract A system and method for tissue differentiation. In the method, M acoustic signals si(t), i = 1 to M, are obtained from M locations on a body surface. The M signals are subjected to band pass filtering using N band-pass filters, so as to generate N×M signals sij(t), i = 1 to M, j = 1 to N. K images I1 to IK, where K ≤ N, are then generated using the signals sij(t), i = 1 to M, j = 1 to N. The pixels are divided into a predetermined number L of categories Cℓ, ℓ from 1 to L, using the images I1 to IK. For each category Cℓ, ℓ from 1 to L, and for each pixel p(x,y), a probability pℓ of assigning the pixel p(x,y) to the category Cℓ is determined. An image may then be generated using the probabilities pℓ.
Full Text FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION (See section 10, rule 13)
'METHOD AND SYSTEM FOR TISSUE DIFFERENTIATION'
DEEPBREEZE LTD., 15 Bareket Street, Industrial Park, 38900 Caesarea, Israel.;
The following specification particularly describes the invention and the manner in which it is to be performed.

WO 2005/074803 PCT/IL2005/000143
METHOD AND SYSTEM FOR TISSUE DIFFERENTIATION
FIELD OF THE INVENTION
This invention relates to methods for classifying tissues.
BACKGROUND OF THE INVENTION
It is known to apply a plurality of microphones onto a body surface in order to record body sounds simultaneously at a plurality of locations on the body surface. U.S. Patent No. 6,139,505, for example, discloses a system in which microphones are placed around a patient's chest and recordings of the microphones are displayed on a screen or printed on paper. Kompis et al. (Chest 120(4):2001) discloses a system in which microphones are placed on a patient's chest to record lung sounds that are analyzed to determine the location in the lungs of the source of a sound detected in the recording.
Applicant's copending Application No. 10/338,742, filed on January 9, 2003 and having the publication number US 2003-0139679, discloses a method and system for analyzing body sounds. A plurality of microphones are affixed to an individual's chest or back. The recorded sound signals are analyzed to determine an average acoustic energy at a plurality of locations over the chest. The determined acoustic energies are then used to form an image of the respiratory tract.
A learning neural network is an algorithm used to classify elements based upon previously input information on the nature of the elements. US Patent No. 6,109,270 to Mah et al. discloses use of a learning neural network to classify brain tissue as being either normal or abnormal. US Patent No. 6,463,438 to

Veltri et al. discloses use of a neural network to distinguish between normal and cancer cells.
SUMMARY OF THE INVENTION
The present invention provides a method and system for tissue differentiation. M acoustic signals are obtained during a time interval by placing M microphones on a body surface such as an individual's back or chest. The M acoustic signals are each subjected to frequency band filtering by means of N frequency band filters. For each filter, the M outputs from the filter are input to a first image processor. The first image processor generates an image using the M outputs of the filter. The images may be obtained by any method for generating an image from acoustic signals. For example, the images may be obtained by the method of Kompis et al. (supra). In a preferred embodiment of the invention, an image is generated by the method disclosed in applicant's WO 03/057037. In the method of WO 03/057037, an image is obtained from M signals P(xi,t), i = 1 to M (where the signal P(xi,t) is indicative of pressure waves at the location xi on the body surface), by determining an average acoustic energy P(x,t1,t2) at at least one position x over a time interval from a first time t1 to a second time t2.
The N images are preferably, but not necessarily, transformed by an SVD (singular value decomposition) processor, as explained in detail below. The output of the SVD processor is input to a self-organizing map neural network and to a classifier. A self-organizing map neural network differs from a learning neural network in that it does not need a learning phase with an external teacher vector, and does not require the input of previously acquired information on the nature of the elements. The output of the neural network consists of L N-dimensional vectors, where L is a predetermined number of categories of interest. The output from the neural network is input to the classifier.
For each pixel p(x,y), the classifier is configured to calculate a probability of assigning the pixel to each of the L categories. One or more images may then

be generated by a second image processor based upon the output from the classifier.
Thus, in its first aspect, the invention provides a method for tissue differentiation comprising:
(a) obtaining M acoustic signals Si(t), i = 1 to M, from M locations on a body surface;
(b) for each of N frequency bands, and for each of the signals Si(t), i from 1 to M, subjecting the signal Si(t) to band pass filtering using N band-pass filters, so as to generate N×M signals sij(t), i = 1 to M, j = 1 to N;
(c) generating K images I1 to IK, where K ≤ N, using the signals sij(t), i = 1 to M, j = 1 to N;
(d) dividing pixels into a predetermined number L of categories Cℓ, ℓ from 1 to L, using the images I1 to IK; and
(e) for each category Cℓ, ℓ from 1 to L, and for each pixel p(x,y), calculating a probability pℓ of assigning the pixel p(x,y) to the category Cℓ.
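The filtering stage of steps (a) to (c) can be sketched in Python. This is only an illustrative sketch: the specification does not prescribe a filter design, so the ideal (brick-wall) FFT band-pass, the function name `bandpass_filter_bank`, and the sampling parameters below are our own assumptions.

```python
import numpy as np

def bandpass_filter_bank(signals, fs, bands):
    """Pass each of M signals through N band-pass filters.

    signals : (M, T) array of acoustic signals s_i(t)
    fs      : sampling rate in Hz (an assumed parameter)
    bands   : list of N (low_hz, high_hz) tuples
    returns : (N, M, T) array of filtered signals s_ij(t)
    """
    M, T = signals.shape
    freqs = np.fft.rfftfreq(T, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)          # (M, T//2 + 1)
    out = np.empty((len(bands), M, T))
    for j, (lo, hi) in enumerate(bands):
        mask = (freqs >= lo) & (freqs <= hi)        # ideal brick-wall band
        out[j] = np.fft.irfft(spectra * mask, n=T, axis=1)
    return out
```

For example, with M = 4 microphones sampled at 4 kHz and N = 3 hypothetical bands such as (100, 300), (300, 600), and (600, 1200) Hz, the call returns the N×M filtered signals sij(t) as a (3, 4, T) array.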
In its second aspect, the invention provides a system for tissue differentiation comprising:
a. M sound transducers configured to obtain M acoustic signals Si(t), i = 1 to M, from M locations on a body surface;
b. N band pass filters, each band pass filter being configured to receive each of the signals Si(t), i from 1 to M, so as to generate N×M signals sij(t), i = 1 to M, j = 1 to N;
c. a first image generator configured to generate K images I1 to IK, where K ≤ N, using the signals sij(t), i = 1 to M, j = 1 to N;
d. a neural network configured to divide pixels into a predetermined number L of categories Cℓ, ℓ from 1 to L, using the images I1 to IK; and
e. a classifier configured to calculate, for each category Cℓ, ℓ from 1 to L, and for each pixel p(x,y), a probability pℓ of assigning the pixel p(x,y) to the category Cℓ.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to understand the invention and to see how it may be carried out in practice, a preferred embodiment will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of a system for carrying out the method of the invention, in accordance with one embodiment of the invention;
Fig. 2 shows 5 images of a heart obtained in accordance with one embodiment of the invention; and
Fig. 3 shows an image of an individual's lungs obtained in accordance with one embodiment of the invention (Fig. 3a) and an image of the same lungs obtained without band pass filtering (Fig. 3b).
DETAILED DESCRIPTION OF THE INVENTION
Fig. 1 shows a schematic diagram of a system for carrying out one embodiment of the method of the invention. M acoustic signals S1(t) to SM(t) are obtained during a time interval by placing M microphones on a body surface (not shown), such as an individual's back or chest. The M acoustic signals are each subjected to frequency band filtering by means of N frequency band filters F1 to FN. For each filter Fj, j from 1 to N, the M outputs from the filter Fj, sij(t), i from 1 to M, are input to a first image processor. The first image processor generates N images Ij, j from 1 to N, where each image Ij is obtained using the M outputs of the filter Fj. The images Ij may be obtained by any method for generating an image from acoustic signals. For example, the images may be obtained by the method of Kompis et al. (supra). In a preferred embodiment of the invention, an image is generated by the method disclosed in applicant's WO 03/057037. In the method of WO 03/057037, an image is obtained from M signals P(xi,t), i = 1 to M (where the signal P(xi,t) is indicative of pressure waves at the location xi on the body surface), by determining an average acoustic energy P(x,t1,t2) at at least one position x over a time interval from a first time t1 to a second time t2.
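The energy step above can be sketched as follows. This sketch assumes the average acoustic energy at a sensor is the mean squared pressure over the window [t1, t2]; the inverse-distance spreading of the M sensor energies onto a pixel grid is our own simplistic stand-in, not the reconstruction actually taught in WO 03/057037.

```python
import numpy as np

def average_energy(P, fs, t1, t2):
    """Mean squared pressure of each signal over the window [t1, t2].

    P  : (M, T) array; P[i] is the pressure signal P(x_i, t)
    fs : sampling rate in Hz (an assumed parameter)
    returns : (M,) array of average acoustic energies
    """
    i1, i2 = int(t1 * fs), int(t2 * fs)
    return np.mean(P[:, i1:i2] ** 2, axis=1)

def energy_image(energies, xy, grid=(64, 64)):
    """Spread the M sensor energies onto a pixel grid by inverse-distance
    weighting (our own simplistic interpolation, for illustration only)."""
    h, w = grid
    ys, xs = np.mgrid[0:h, 0:w]
    img = np.zeros(grid)
    weights = np.zeros(grid)
    for (x, y), e in zip(xy, energies):
        d2 = (xs - x) ** 2 + (ys - y) ** 2 + 1.0   # +1 avoids division by zero
        img += e / d2
        weights += 1.0 / d2
    return img / weights
```

A pixel near a loud sensor then receives a high value, and a pixel near a quiet sensor a low one, yielding one image per frequency band.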

The N images Ij, j from 1 to N, are preferably, but not necessarily, transformed by an SVD (singular value decomposition) processor. The SVD processor calculates N eigen-images EIj and N corresponding eigen-values λj, j from 1 to N (not shown), where the N eigen-values λj are ordered so that λ1 ≥ λ2 ≥ ... ≥ λj ≥ ... ≥ λN. The SVD processor then determines an integer K ≤ N, where K is the smallest integer for which (λ1 + λ2 + ... + λK)/(λ1 + λ2 + ... + λN) ≥ α, where α is a predetermined threshold. The output of the SVD processor is the K eigen-images EI1 to EIK. The learning phase of the process is thus completed.
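The truncation rule above can be sketched in Python. We assume, for illustration, that the eigen-values λj are the squared singular values of the matrix whose rows are the N flattened band images; the specification does not spell out this construction, so it is our assumption.

```python
import numpy as np

def svd_eigen_images(images, alpha=0.95):
    """Reduce N band images to K eigen-images.

    images : (N, H, W) array of band-filtered images I_1..I_N
    alpha  : predetermined energy-fraction threshold
    returns: (K, H, W) eigen-images, where K is the smallest integer with
             (lam_1 + ... + lam_K) / (lam_1 + ... + lam_N) >= alpha
    """
    N, H, W = images.shape
    A = images.reshape(N, H * W)                 # each row: one flattened image
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    lam = s ** 2                                 # eigen-values, descending order
    frac = np.cumsum(lam) / np.sum(lam)
    K = int(np.searchsorted(frac, alpha) + 1)    # smallest K reaching alpha
    return Vt[:K].reshape(K, H, W)               # eigen-images EI_1..EI_K
```

For instance, if one band image is an exact repeat of another, the energy concentrates in fewer components and K comes out smaller than N.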
The output of the SVD processor is input to a self-organizing map neural network. The output of the neural network consists of L N-dimensional vectors C1, ..., CL, where L is a predetermined number of categories of interest. The classification phase is then carried out as follows. The output from the neural network is input to the classifier, together with the output of the SVD processor. The classifier thus receives as input the K eigen-images EI1 to EIK from the SVD processor (or the N images I1 to IN, if an SVD processor is not used) and the L vectors C1, ..., CL from the neural network.
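A minimal self-organizing map of the kind described can be sketched as follows. As the text notes, no teacher signal is needed: the L units organize themselves around the per-pixel feature vectors. The 1-D map topology, the decaying learning-rate and neighborhood schedules, and all hyperparameter values here are our own illustrative choices.

```python
import numpy as np

def train_som(features, L, epochs=20, lr0=0.5, sigma0=None, seed=0):
    """Minimal 1-D self-organizing map (unsupervised, no teacher vector).

    features : (P, K) array, one K-dimensional vector per pixel
    L        : number of category prototype vectors C_1..C_L
    returns  : (L, K) array of prototype vectors
    """
    rng = np.random.default_rng(seed)
    if sigma0 is None:
        sigma0 = L / 2.0
    # initialize the L units from randomly chosen data points
    W = features[rng.choice(len(features), L, replace=False)].astype(float)
    units = np.arange(L)
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1.0 - e / epochs), 0.5)  # shrinking neighborhood
        for x in features[rng.permutation(len(features))]:
            bmu = np.argmin(np.sum((W - x) ** 2, axis=1))  # best matching unit
            h = np.exp(-((units - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)             # pull BMU and neighbours
    return W
```

After training, the L rows of W play the role of the category vectors C1, ..., CL handed to the classifier.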
For each pixel p(x,y), the classifier is configured to calculate a probability pj of assigning the pixel p(x,y) to the category Cj. One or more images may then be generated by a second image processor based upon the output from the classifier. For example, for each category Cj, an image may be generated in which the pixel (x,y) has a gray level proportional to the probability that the pixel belongs to the category j. As another example, each category may be assigned a different color, and an image is generated in which each pixel is colored with the color of the category having the maximum probability for that pixel. As yet another example, an image may be generated by selecting, say, three categories and displaying the image on an RGB (red, green, blue) color display screen. In this example, for each pixel, the red, green, and blue intensity is
proportional to the probability that the pixel belongs to the first, second, or third category, respectively. The generated image may be used by a practitioner to identify different tissue types in the image. Generated images may also be used to form a database for automatic learning, by the practitioner or by the neural network, to analyze the images and identify tissue types in the images.
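The classifier step and the RGB rendering can be sketched as follows. The specification does not fix a probability model, so the softmax over negative squared distances to the category vectors used here is purely our assumption.

```python
import numpy as np

def category_probabilities(eigen_images, prototypes):
    """Per-pixel probability of each category.

    eigen_images : (K, H, W) array; each pixel gets a K-dim feature vector
    prototypes   : (L, K) category vectors C_1..C_L from the neural network
    returns      : (H, W, L) probabilities summing to 1 at each pixel

    Assumed model: softmax over negative squared distances to C_1..C_L.
    """
    K, H, W = eigen_images.shape
    feats = eigen_images.reshape(K, -1).T                        # (H*W, K)
    d2 = ((feats[:, None, :] - prototypes[None]) ** 2).sum(-1)   # (H*W, L)
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)                  # stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return p.reshape(H, W, -1)

def rgb_image(probs):
    """Render three chosen categories as red, green, blue intensities."""
    assert probs.shape[-1] >= 3
    return (255 * probs[..., :3]).astype(np.uint8)
```

A pixel whose feature vector coincides with a category vector receives a probability near 1 for that category, and the RGB rendering colors it accordingly.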
EXAMPLES
Example 1: Cardiac imaging
40 acoustic signals Si(t) arising from an individual's heart were obtained during 0.04 seconds by placing 40 microphones on the individual's back in the cardiac region. The 40 acoustic signals were each subjected to frequency band filtering by means of 3 frequency band filters. For each filter, the 40 outputs from the filter were processed into an image as disclosed in applicant's US Provisional Patent Application No. 60/474,595. The 3 images were input to a self-organizing map neural network. The output of the neural network consisted of 5 3-dimensional vectors C1, ..., C5, where 5 was the predetermined number of categories of interest. The output from the neural network was input to a classifier.
For each pixel, the classifier calculated a probability pj of assigning the pixel to the category Cj, for j from 1 to 5. An image was then generated for each of the five categories, in which the pixel (x,y) has a gray level proportional to the probability that the pixel belongs to that category. The 5 images are shown in Fig. 2.
Example 2: Pulmonary imaging
40 acoustic signals Si(t) arising from an individual's lungs were obtained during 0.1 second by placing 40 microphones on the individual's back over the lungs. The 40 acoustic signals were each subjected to frequency band filtering by means of 3 frequency band filters. For each filter, the 40 outputs from the filter were processed into an image as disclosed in applicant's US Patent Application No. 10/338,742, having the publication number US 2003-0139679. The 3 images were input to a self-organizing map neural network. The output of the neural network consisted of 3 3-dimensional vectors C1, ..., C3, where 3 was the predetermined number of categories of interest. The output from the neural network was input to the classifier.
For each pixel, the classifier calculated a probability pj of assigning the pixel to the category Cj, for j from 1 to 3. A color image was then generated as follows. A different color (red, green, or blue) was used to indicate each of the three categories. In the color image, each pixel p(x,y) has a red, green, and blue level that is proportional to the probability that the pixel belongs to the first, second, and third category, respectively. A black and white rendition of the color image is shown in Fig. 3a. Fig. 3b shows an image of the individual's lungs obtained from the original sound signals (without frequency filtering) as disclosed in applicant's US Patent Application No. 10/338,742, having the publication number US 2003-0139679.

CLAIMS:
1. A method for tissue differentiation comprising:
(a) obtaining M acoustic signals Si(t), i = 1 to M, from M locations on a body surface;
(b) for each of N frequency bands, and for each of the signals Si(t), i from 1 to M, subjecting the signal Si(t) to band pass filtering using N band-pass filters, so as to generate N×M signals sij(t), i = 1 to M, j = 1 to N;
(c) generating K images I1 to IK, where K ≤ N, using the signals sij(t), i = 1 to M, j = 1 to N;
(d) dividing pixels into a predetermined number L of categories Cℓ, ℓ from 1 to L, using the images I1 to IK; and
(e) for each category Cℓ, ℓ from 1 to L, and for each pixel p(x,y), calculating a probability pℓ of assigning the pixel p(x,y) to the category Cℓ.
2. The method according to Claim 1 comprising
(a) generating N images I'1 to I'N, wherein the image I'j is obtained using the signals sij(t), i from 1 to M, and
(b) generating K eigenimages and K eigenvalues using the N images I'1 to I'N.
3. The method according to Claim 1 or Claim 2 wherein the body surface is a chest or a back.
4. The method according to Claim 1 or 2 wherein the acoustic signals are indicative of cardiac sounds or respiratory tract sounds.
5. The method according to Claim 1 wherein an image is obtained from M signals P(xi,t), i = 1 to M, the signal P(xi,t) being indicative of pressure waves at the location xi on the body surface, by determining an average acoustic energy P(x,t1,t2) at at least one position x over a time interval from a first time t1 to a second time t2, using the signals P(xi,t), i = 1 to M.
6. The method according to Claim 1 further comprising generating one or more images using the probabilities pℓ.
7. A system for tissue differentiation comprising:

(a) M sound transducers configured to obtain M acoustic signals Si(t), i = 1 to M, from M locations on a body surface;
(b) N band pass filters, each band pass filter being configured to receive each of the signals Si(t), i from 1 to M, so as to generate N×M signals sij(t), i = 1 to M, j = 1 to N;
(c) a first image generator configured to generate K images I1 to IK, where K ≤ N, using the signals sij(t), i = 1 to M, j = 1 to N;
(d) a neural network configured to divide pixels into a predetermined number L of categories Cℓ, ℓ from 1 to L, using the images I1 to IK; and
(e) a classifier configured to calculate, for each category Cℓ, ℓ from 1 to L, and for each pixel p(x,y), a probability pℓ of assigning the pixel p(x,y) to the category Cℓ.
8. The system according to Claim 7 further comprising a singular value decomposition processor configured to:
(a) receive N images I'1 to I'N generated by the first image generator, wherein the image I'j is obtained using the signals sij(t), i from 1 to M, and
(b) generate K eigenimages and K eigenvalues using the N images I'1 to I'N.
9. The system according to Claim 7 or Claim 8 wherein the body surface is a chest or a back.
10. The system according to Claim 7 or 8 wherein the acoustic signals are indicative of cardiac sounds or respiratory tract sounds.
11. The system according to Claim 7 wherein the first image generator is configured to generate an image from M signals P(xi,t), i = 1 to M, the signal P(xi,t) being indicative of pressure waves at the location xi on the body surface, by determining an average acoustic energy P(x,t1,t2) at at least one position x over a time interval from a first time t1 to a second time t2, using the signals P(xi,t), i = 1 to M.
12. The system according to Claim 7 further comprising a second image generator configured to generate one or more images using the probabilities pℓ.

13. A method and system for tissue differentiation such as herein described with reference to the accompanying drawings and as illustrated in the foregoing examples.

Dated this 24th day of July, 2006


RAJESHWARI H.
OF K & S PARTNERS
AGENT FOR THE APPLICANT(S)




Patent Number 249599
Indian Patent Application Number 899/MUMNP/2006
PG Journal Number 44/2011
Publication Date 04-Nov-2011
Grant Date 28-Oct-2011
Date of Filing 27-Jul-2006
Name of Patentee DEEPBREEZE LTD.
Applicant Address 15 Bareket Street, Industrial Park, 38900 Caesarea
Inventors:
# Inventor's Name Inventor's Address
1 BOTBOL, Meir 7 Neve Hadarim Street, 37017 Pardes Hana,
PCT International Classification Number A61B7/02
PCT International Application Number PCT/IL2005/000143
PCT International Filing date 2005-02-06
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 10/771,150 2004-02-04 U.S.A.