Title of Invention

A METHOD FOR DETERMINING A FOCUS VALUE IN AN IMAGE

Abstract
A method for determining a focus value in an image comprising: selecting an area of interest in the image; selecting a color plane of interest; filtering the color plane of interest within the area of interest to produce a filtered region; determining the mean absolute value of the filtered region; determining if the mean absolute value for the filtered region is greater than a percentage of a largest previously calculated mean absolute value; setting an optimum focal length to be equal to a focal length at which the image is captured if the mean absolute value for the filtered region is greater than the percentage of the largest previously calculated mean absolute value; and outputting the optimum focal length if the mean absolute value for the filtered region is not greater than the percentage of the largest previously calculated mean absolute value.
FORM 2
THE PATENTS ACT 1970
[39 OF 1970]
COMPLETE SPECIFICATION
[See Section 10]
"A METHOD FOR DETERMINING A FOCUS VALUE IN
AN IMAGE"
INTEL CORPORATION, a Delaware Corporation, 2200 Mission College Boulevard, Santa Clara, California 95052, United States of America
The following specification particularly describes the nature of the invention and the manner in which it is to be performed :-

The present invention relates to a method for determining a focus value in an image.
Description of Related Art
Use of digital image capture systems such as digital cameras for video and still image capture has become very prevalent in many applications. Video capture may be used for such applications as video conferencing, video editing, and distributed video training. Still image capture with a digital camera may be used for such applications as photo albums, photo editing, and compositing.
Many digital video and still image capture systems use an image sensor constructed of an array of light sensitive elements, each commonly referred to as a "pixel" element. Each pixel element is responsible for capturing one of three color channels: red, green, or blue. Specifically, each pixel element is made sensitive to a certain color channel through the use of a color filter placed over the pixel element, such that the light energy reaching the pixel element is due only to the light energy from a particular spectrum. Each pixel element generates a signal that corresponds to the amount of light energy to which it is exposed.
Digital image capture systems are typically expected to operate under a variety of conditions. In addition, features such as auto-focusing and auto-exposure are expected to be integrated features. These are typically hardware intensive processes. In the case of auto-focusing, speed and accuracy are essential for capturing high-quality images. Thus, it would be desirable to implement efficient and hardware friendly auto-focusing processes.

SUMMARY OF THE INVENTION
A method for determining a focus value in an image, including selecting an area of interest in the image and a color plane of interest, filtering the color plane of interest within the area of interest to produce a filtered region, and determining the mean absolute value of the filtered region. A system for implementing the method is also disclosed.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
Figure 1 is a diagram illustrating an image with an area of interest.
Figure 2 is a flow diagram illustrating the determination of a focal value (V) from an image such as the image in Figure 1 in accordance with one mode of operation of the present invention.
Figure 3 contains the impulse and frequency responses (both the magnitude and phase responses) of a filter configured in accordance with one embodiment of the present invention.
Figure 4 is a flow diagram illustrating one mode of operation of an auto-focusing system configured in accordance with the present invention.
Figure 5 contains the graphs of the focal values of images from four series of images as determined using the flow diagram of Figure 2.
DETAILED DESCRIPTION OF THE INVENTION
The present invention provides an automatic focusing method and apparatus developed for digital image capture systems such as digital imaging and video cameras (to be generally referred to herein as "camera"). During the focusing phase, a sequence of images at different focal lengths is captured. After being captured, each image is filtered by a symmetric finite impulse response (FIR) filter. A focus value, which indicates the level of camera focus, is derived from the filtered image. An FIR filter is adopted by the algorithm as it may be implemented by fixed function digital signal processing (FFDSP) hardware with smaller computation costs and also avoids the error accumulation problem

normally seen in infinite impulse response (IIR) filters. Furthermore, using a symmetric filter can reduce the number of multiplications for filtering the image roughly by half, as compared to an asymmetric filter. In one embodiment, the focal distance at which the captured image has the largest focus value is considered the optimal focal distance for the scene and is output by the algorithm.
In one embodiment, a processor and a memory are used to process the images to extract focus values for determining when the image capturing system is in focus. As mentioned above, the processor may be a digital signal processor or an application specific integrated circuit (ASIC). The processor may also be a general purpose processor. The memory may be any storage device suitable for access by the processor.
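The focusing loop described above can be sketched as follows. This is a minimal sketch, not the patented implementation: the 3-tap high-pass kernel, the drop_ratio threshold, and the capture callback are illustrative stand-ins for the 20-tap symmetric FIR filter, the percentage test, and the camera hardware.

```python
import numpy as np

def focus_value(region):
    """Focus value: mean absolute value of a high-pass-filtered region.
    A simple [-1, 2, -1] kernel stands in here for the patent's
    20-tap symmetric FIR filter (coefficients are illustrative)."""
    filtered = np.convolve(region.ravel(), [-1.0, 2.0, -1.0], mode="valid")
    return float(np.mean(np.abs(filtered)))

def auto_focus(capture, focal_lengths, drop_ratio=0.8):
    """Sweep focal lengths; keep the length whose image scores highest.
    Stop once a score falls below drop_ratio times the best score seen
    so far -- the 'percentage of the largest previously calculated mean
    absolute value' test."""
    best_value, best_length = -1.0, focal_lengths[0]
    for f in focal_lengths:
        v = focus_value(capture(f))
        if v > best_value:
            best_value, best_length = v, f
        elif v < drop_ratio * best_value:
            break                      # past the focus peak: stop searching
    return best_length
```

Under these assumptions, a scene whose high-frequency content peaks at the optimal focal distance causes the sweep to stop shortly after that peak and report it as the optimum focal length.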
Figure 1 is a diagram illustrating an image 100 with a width X and a height Y. Image 100 is composed of a set of pixels, each overlaid with a color filter from a color filter array (CFA) 102. In one embodiment, CFA 102 is in a Bayer pattern, with a repeated red (R), green (G), green (G), and blue (B) filter pattern. In
addition, Figure 1 also includes an area of interest 104, which has a size of N pixel columns and M rows.
Figure 2 is a flow diagram illustrating the determination of a focal value
(V) from an image such as image 100 of Figure 1 in accordance with one mode of
operation of the present invention.
In block 200, an area of interest such as area of interest 104 is selected from
an image such as image 100. In one embodiment, only one area of interest is
selected from the image in the calculation of the focal value for the image.
However, in other embodiments, multiple areas of interest may be chosen, and
multiple focal values may be calculated. The following description is directed
towards determining one focal value per image. After the area of interest is
chosen, operation then continues with block 202.
In block 202, a color plane of interest is chosen from the area of interest. In
one embodiment, what is chosen is the green color plane, as the green spectrum

is better suited for determining luminance. In other embodiments, another color plane may be chosen, with the requirement that the color plane chosen contains most of the luminance information of the scene. For example, in a Y-CYMG (cyan, yellow, magenta, and green) image capturing system, the yellow plane, which contains most of the luminance information, would be chosen as the color plane of interest. In addition, in the description that follows, the pixel at location (0,0) of the cropped image region (e.g., area of interest 104) is set to be a green pixel. Specifically, the top-left corner of the cropped image region is placed so that the top-left pixel is a green pixel.
In block 204, every two green pixel columns in the cropped image region are merged into a single complete green pixel column to generate a green plane G' of size M x N/2 pixels by the following two steps:
1. G'(i, (j-1)/2) = G(i, j), for (i + j) mod 2 = 0 and j odd; and
2. G'(i, j/2) = G(i, j), for (i + j) mod 2 = 0 and j even,
where G(i, j) is the value of the green pixel at location (i, j) of the cropped image region, 0 <= i < M and 0 <= j < N. Before the merge, G(i, j) is well defined only at locations (m, n) where (m + n) mod 2 = 0, 0 <= m < M and 0 <= n < N.
In block 206, merged color plane G' is filtered using a low-pass filter to reduce inaccuracies caused by noise (e.g., artifact edges introduced by the use of the Bayer pattern). In one embodiment, the green plane G' is then filtered column-wise by a 3-tap low-pass filter:
Ga(i, j) = a0 G'(i-1, j) + a1 G'(i, j) + a2 G'(i+1, j), for 0 < i < M-1 and 0 <= j < N/2,
Ga(i, j) = G'(i, j), for i = 0, M-1 and 0 <= j < N/2,
where A = [a0 a1 a2] = [0.25 0.5 0.25]. The interpolated green plane Ga remains the same size of M by N/2 pixels. After the green plane G' has been modified to become the interpolated green plane Ga, operation then continues with block 208.
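Blocks 204 and 206 can be sketched as follows, assuming a Bayer region whose top-left pixel (0, 0) is green, so that green samples occupy locations where (i + j) is even; the function and variable names are illustrative, not taken from the specification.

```python
import numpy as np

def merge_green_columns(bayer, M, N):
    """Block 204 sketch: collapse the two interleaved green columns of the
    cropped Bayer region into one complete M x N/2 green plane G'.
    Greens sit where (i + j) is even, since pixel (0, 0) is green."""
    g_prime = np.empty((M, N // 2), dtype=bayer.dtype)
    for i in range(M):
        for j in range(N):
            if (i + j) % 2 == 0:          # a green site
                g_prime[i, j // 2] = bayer[i, j]
    return g_prime

def lowpass_columns(g_prime):
    """Block 206 sketch: column-wise 3-tap low-pass, A = [0.25, 0.5, 0.25].
    Interior rows are smoothed; the first and last rows pass through
    unchanged, matching the boundary handling in the text."""
    ga = g_prime.astype(float)
    ga[1:-1, :] = (0.25 * g_prime[:-2, :]
                   + 0.5 * g_prime[1:-1, :]
                   + 0.25 * g_prime[2:, :])
    return ga
```

The merge halves the width of the region while keeping every green sample, so the subsequent filtering operates on a dense M x N/2 plane rather than a sparse checkerboard.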


In block 208, the system divides the interpolated green plane Ga into three sub-regions, G1, G2 and G3, of equal size. In one embodiment, the interpolated green plane Ga is divided into three sub-regions of size M x N/6 pixels. In other embodiments, the interpolated green plane Ga may be divided into three sub-regions of size M/3 x N/2 pixels. In addition, the interpolated green plane Ga may be divided into multiple regions of any size for other embodiments.
In block 210, the rows of each sub-region G1, G2 and G3 are filtered by a p-tap FIR filter:
Gk,f(i, j) = h0 Gk(i, j) + h1 Gk(i, j+1) + ... + h(p-1) Gk(i, j+p-1), for k = 1, 2, 3,
where h0, h1, ..., h(p-1) are the coefficients of the FIR filter. The filter attempts to extract relevant edge information useful for determining whether the camera is in focus.
In one embodiment, the p-tap FIR filter used to filter the sub-regions G1, G2 and G3 of interpolated green plane Ga is a 20-tap symmetric FIR filter. As noted above, a symmetric FIR filter is used in the algorithm because it can be implemented in current existing hardware with smaller computation costs, and the number of multiplications required for filtering the image may be reduced roughly by half when compared to a non-symmetric FIR filter. In addition, as noted above, the FIR filter does not suffer from the error accumulation problems normally encountered by IIR filters. Table 1 contains one possible set of filter coefficients of the FIR filter that may be used to produce desired results.

hn         Value
h0 = h19   0.0006
h1 = h18  -0.0041
h2 = h17   0.0000
h3 = h16   0.0292
h4 = h15  -0.0350

WE CLAIM:
1. A method for determining a focus value in an image comprising:
selecting an area of interest in the image;
selecting a color plane of interest;
filtering the color plane of interest within the area of interest to produce a filtered region;
determining the mean absolute value of the filtered region;
determining if the mean absolute value for the filtered region is greater than a percentage of a largest previously calculated mean absolute value;
setting an optimum focal length to be equal to a focal length at which the image is captured if the mean absolute value for the filtered region is greater than the percentage of the largest previously calculated mean absolute value; and
outputting the optimum focal length if the mean absolute value for the filtered region is not greater than the percentage of the largest previously calculated mean absolute value.
2. The method as claimed in claim 1, where filtering the color plane
of interest comprises: dividing the color plane of interest into a set of
sub-regions; and filtering each sub-region.


3. The method as claimed in claim 1, where filtering the color plane of interest comprises: filtering the color plane of interest using a finite impulse response filter.
4. The method as claimed in claim 3, where the finite impulse response filter is a 20-tap finite impulse response filter.
5. The method as claimed in claim 1, further comprising determining if the focal length is within a range of focal values; and outputting the optimum focal length if the focal length is not within a range of focal values.
6. The method as claimed in claim 1, further comprising changing the focal length by a step size; and capturing a second image at the focal length.
7. The method as claimed in claim 6, wherein the step size is determined by the following formula:
fs = (fmax - fmin) x SFN / SN
where fs is the step size; fmax is the largest focal length of an image capturing system for capturing the image; fmin is the smallest focal length of the image capturing system; SN controls a total number of


image evaluations; and SFN is related to an F-number setting as well as the focal distance.
8. The method as claimed in claim 2, further comprising: low pass filtering a plurality of merged portions of the selected color plane of interest within the area of interest, prior to dividing the color plane of interest.
9. The method as claimed in claim 3, wherein the finite impulse response filter is symmetric.
Dated this 26th day of July, 2002.
(JAYANTA PAL)
OF REMFRY & SAGAR
ATTORNEY FOR THE APPLICANTS


Patent Number 205577
Indian Patent Application Number IN/PCT/2002/00289/MUM
PG Journal Number 26/2007
Publication Date 29-Jun-2007
Grant Date 04-Apr-2007
Date of Filing 07-Mar-2002
Name of Patentee INTEL CORPORATION
Applicant Address 2200 MISSION COLLEGE BOULEVARD, SANTA CLARA, CALIFORNIA 95052, UNITED STATES OF AMERICA.
Inventors:
# Inventor's Name Inventor's Address
1 YAP-PENG TAN 98 NANYANG CRESCENT, BLOCK M #11-01, SINGAPORE 637665.
2 TINKU ACHARYA 7292 S, ROBERTS ROAD, TEMPE, ARIZONA 85283, USA
3 BRENT THOMAS 2184 W, OLIVE WAY, CHANDLER, ARIZONA 85248, USA
PCT International Classification Number H 04 N 5/232
PCT International Application Number PCT/US00/22128
PCT International Filing date 2000-08-11
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 09 / 383,117 1999-08-25 U.S.A.