Title of Invention

" DEVICE FOR ALIGNING CONTAINERS WITH RESPECT TO AT LEAST ONE GEOMETRIC FEATURE INTO A THEORETICAL POSITION OR ORIENTATION"


FIELD OF INVENTION
The invention pertains to a device for position-precise alignment of containers.
The invention further relates to a labelling machine with such a device.
BACKGROUND OF INVENTION
Containers, and particularly bottles, often carry typical geometric container
features on their outer surface, e.g. a seal surface, an ornament, embossing,
raised lettering etc., and it is then necessary to apply the labels with high
application precision with respect to these container features. This means that
in a labelling machine to which the containers are fed in an upright position
but in a purely random alignment or orientation, the containers first have to
be aligned in such a way that they assume a pre-given orientation as precisely
as possible with respect to their container features. Only then can at least
one label be applied to the respective container and subsequently pressed
and/or brushed on.
It is known to provide, for this alignment, container holders in the form of
turntables on the rotor of a labelling machine, which can be rotated in a
controlled manner by their own setting drives around a vertical axis and hence
also around the axis of the respective container arranged on the container
support. It is also known that the container holders can be controlled for the
alignment by an image identification system or camera system, with which the
respective position or orientation of at least one of the typical geometric
container features used for the alignment is determined as an actual value and
then compared in an electronic system with stored image data or values
representing the desired value; from this the control of the setting drive of
the container holders necessary for the required position correction is derived
(EP 1 205 388). In a design version of the known device the camera system has
four cameras that are arranged successively along the movement path of the
container holders in the rotation direction of the rotor. Each camera scans a
respective portion of the circumference of the container, these portions
overlapping over 100 degrees of the circumference as the containers rotate
around their container axis. On the basis of the actual image data, a rotation
position correction of the container holders and thus an alignment of the
containers with respect to their typical geometric container features can be
carried out.
EP 1205388 (A1) discloses a device for controlling the rotational movement of
containers. According to that document, each bottle (6) rests on a platform (4)
rotatable by a stepper motor (8) and held by a transfer holder (5) on a
turntable (3). During attachment of the label the platform is locked in
position by a peg (11) driven by a cam (9) and slidable in a fixture (10) on
the turntable.
OBJECTS OF THE INVENTION
It is the object of this invention to provide a device with which containers
can be aligned with respect to at least one typical geometric container feature
with a significantly improved degree of precision and also with a high
capacity, i.e. for a large number of containers to be processed per unit of
time. This object is achieved with a device according to the features of the
invention. A labelling machine with such a device is a further object of the
invention.
SUMMARY OF THE INVENTION
In the device according to the invention, the image data of a first camera
system is used for a pre-alignment of the containers, such that after
pre-alignment they already conform at least approximately to the required
orientation, particularly with respect to the geometric container features used
for the alignment, and with a degree of precision that cannot be attained with
known devices. With this first camera system the testing range, i.e. the
circumference range of the respective container in which the at least one
geometric container feature can be found, is determined over a large area.
With the image data of at least one further camera system a more precise and
eventually also final alignment of each container takes place. The container
region scanned by the at least one camera of the at least one additional camera
system is much smaller than the area scanned by the at least one camera of the
first camera system; the at least one camera of the additional camera system,
for example, has a much smaller aperture angle than the at least one camera of
the first camera system. The alignment based on the image data supplied by the
at least one additional camera system can therefore take place very precisely
within a very short period of time.

BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The invention is described below in detail on the basis of drawings of design
examples. The following are shown:
Figure 1 - In schematic depiction a labelling machine of the circulating type;
Figures 2-7 - Different depictions explaining the algorithm for calculating
the rotation angle of the container holders required for correcting
the orientation.
DETAILED DESCRIPTION OF THE INVENTION
The labelling machine shown in figure 1 and generally designated there with the
reference sign 1 serves the purpose of labelling containers 2, e.g. bottles,
that are fed to the labelling machine at a container inlet 3 and leave the
labelling machine 1 in labelled condition at the container outlet 4. The
containers 2 are for example bottles made of translucent material, e.g. glass,
and are provided on their container outer side with at least one typical
geometric container feature, e.g. a seal surface, an ornament, embossing,
raised lettering etc. The containers 2 are to be provided with labels with a
high degree of application precision with respect to these geometric features.
The labelling machine 1 consists of, among other things, a driven turntable or
rotor 5 circulating around a vertical machine axis in the direction of arrow A,
which has at its circumference several container carriers or container holders
6, arranged distributed at uniform angular distances around the vertical
machine axis, on each of which a container 2 stands with its container axis
parallel to the vertical machine axis for applying the labels.
The containers 2 are fed to the labelling machine 1 at the container inlet 3
by a transporting mechanism (not shown) in an upright position, i.e. with
their container axis oriented in the vertical direction, but otherwise in a
purely random orientation, also with respect to their typical geometric
container features; they are then passed on in this purely random orientation
to a container holder 6 and subsequently aligned in an angular region W1 of
the rotation movement A of the rotor 5, so that each container 2 at the end of
this angular region is exactly aligned with respect to its typical geometric
container features, i.e. has a pre-given orientation. In this condition each
container 2, moving along with the rotor 5, is passed through a labelling
station 7 for applying at least one label, so that the label can be applied to
the respective container 2 with the desired high degree of application
precision with respect to the geometric container features. In the angular
region W2 of the rotation movement A of the rotor 5, following the labelling
station 7 up to the container outlet 4, the usual pressing on and/or brushing
on of the labels takes place.
For aligning the containers 2, the container holders 6 are rotatable by their
own setting drives around an axis parallel to the vertical machine axis, and
are controlled by a multi-stage image identification system having several
electronic cameras 8-11, explained in more detail below, and an evaluation and
control electronics system 12, preferably in the form of a computer.
In the shown design form the cameras 8-11, which do not move along with the
rotor 5, are arranged radially outside the movement path of the container
holders 6 in such a way that each camera scans the passing containers 2 at
least in the testing range, i.e. in the region of their container outer
surface revealing the typical geometric container features. Furthermore, all
the cameras 8-11 are situated within the angular region W1 and hence, in
rotation direction A, before the labelling station 7.
In detail, the two cameras 8 and 9, which are arranged in a part of the
angular region W1 adjoining the container inlet 3, form a first stage of the
image identification system, together with a white background or background
element 13 forming a background mirror that does not move along with the rotor
5 and which, in the depicted design form, is arranged radially inwards,
opposite the two cameras 8 and 9 with respect to the circular movement path of
the container holders 6, as well as with foreground lighting indicated by the
arrow B1. The two cameras 8 and 9 are arranged with their optical axes at an
angle to one another in such a way that they can scan a circumference region
of more than 180 degrees of the respective passing container 2. The images or
image data supplied by the two cameras 8 and 9 are combined for this purpose
into a total image or a total data set that corresponds to a picture of the
scanned container circumference region of more than 180 degrees.

The first stage of the image identification system is followed by the second
stage of this system, formed by the single camera 10. The camera 10 is again
assigned a white background or a background element 14 forming a white
background mirror corresponding to the element 13, lying, in the depicted
design form, radially inwards with respect to the movement path of the
container holders 6. This second stage also has foreground lighting, as
indicated by the arrow B2. Obviously the elements 13 and 14 of the first and
second stage can also be formed by a single, continuous element. Furthermore,
the foreground lighting for both stages can also be made up of one or more
common light sources, e.g. fluorescent screens. In general, a lighting
technology is selected for the foreground lighting, in relation to the optical
properties of the containers, that allows an optimum detection of the
container features used for the alignment of the containers. Furthermore, by
giving a special shape to the background element 13 and/or 14, e.g. by
partially blackening the white background element 13 and/or 14, an improved
optical detection of edge profiles of the container features used for the
alignment can be achieved.
Following the second stage (camera 10) in rotation direction A is the third
stage of the image detection system, consisting of the single camera 11, with
background lighting B3 provided, for example, by a fluorescent screen 15 that
does not move along with the rotor 5 and lies on the side of the movement path
of the container holders 6 opposite the camera 11. The background lighting B3
is selected, or can be adjusted, in colour and/or intensity in relation to the
optical properties of the containers 2 or the container material and/or the
optical properties of the filling substance, so as to achieve an optimum
optical detection.

In detail, the alignment of the containers 2 with the image identification
system takes place in such a way that the first stage, i.e. the two cameras 8
and 9 placed there, detects the respective random orientation of the passing
container 2 with one picture per container and camera 8 or 9. By subsequent
comparison of the images or image data supplied by the two cameras 8 and 9
with the images, image data or typical values stored in a data storage, the
actual orientation of the respective container 2 at that moment is determined
in the electronic system 12; from this the correction necessary for achieving
the required pre-alignment is determined and carried out by corresponding
control of the setting drive of the respective container holder 6.
For each individual container 2 the position correction is carried out in the
described manner by control of the container holder 6, so that each container
2 is at least aligned with a positional precision that allows the subsequent
exact determination of the position of the at least one typical container
feature used for the final alignment.
In the second stage of the image identification system, formed by the camera
10, each passing container 2 is scanned in a narrower region of its typical
geometric container feature. For this, the lens of the camera 10 is designed
in such a way that the optical aperture angle of the camera 10 is smaller than
the corresponding aperture angle of the cameras 8 and 9, and the region of the
respective container revealing the typical geometric container feature is
imaged in a format-filling manner. The image thus generated of each container
2 is again compared in the electronic system 12 with an image stored there for
the container type concerned, or with characteristic values stored for that
container type; from this the necessary position correction is determined and
carried out by corresponding control of the setting drive of the respective
container holder 6. With the image region reduced to the typical container
feature, the second stage of the image identification system already attains a
very precise alignment of each container 2 that is also vastly improved with
respect to the pre-alignment (with the first stage).
With the camera 11 forming the third stage, a fine-tuning or fine alignment of
each container 2 is then carried out before it reaches the labelling station
7. As criteria for this fine alignment at least one edge profile or at least
one typical edge point is used, belonging to at least one typical container
feature and/or lying in the region of the container feature used for the
alignment. The image data supplied by the camera 11 is again compared in the
electronic system 12 with the image data or characteristic values stored there
for the respective container type, so that from this comparison the necessary
position correction can be calculated and carried out by means of
corresponding control of the setting drive of the container holder concerned.
By means of the above-described three-stage optical detection of the
containers 2 or their typical container features, a very precise alignment of
the containers fed to the labelling machine 1 in random orientation or
positioning is achieved with only four cameras before they reach the labelling
station 7, so that the desired application precision while applying the labels
is ensured with respect to the typical geometric container features with a
high degree of reliability and also with a very high capacity of the labelling
machine, e.g. a labelling capacity of several tens of thousands of containers
per hour.
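The three-stage detection described above can be sketched as a simple control loop. The Stage and holder interfaces below (capture, correction, rotate_by) are hypothetical stand-ins for the cameras, the evaluation electronics 12 and the turntable setting drives; they are not part of the original disclosure.

```python
def align_container(stages, holder):
    """Run a container through the successive image-identification stages.

    Each stage takes one image of the container, compares it with the stored
    reference data for the container type and returns the angular deviation;
    the holder's own setting drive then corrects the orientation. Later stages
    see an ever smaller surface region and refine the earlier correction.
    """
    for stage in stages:                     # pre-alignment, finer, fine-tuning
        image = stage.capture()              # one image per container and stage
        deviation = stage.correction(image)  # deviation from theoretical position
        if deviation != 0.0:
            holder.rotate_by(-deviation)     # counter-rotate the turntable
```

The same loop structure covers two, three or more stages, which matches the remark later in the text that the number of cameras per stage can be chosen differently.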
The details of an algorithm, as used in at least one of these stages of the
image identification system for determining the necessary correction, are
explained below. In order to identify the rotation angle with a precision of
at least one degree, the cylindrical geometry of the bottle surface has to be
taken into account. For a known holder geometry (distance of the camera to the
bottle, diameter of the bottle) and a known geometry of the embossing sample,
one can calculate, from the calculation shown in the appendix, for each
rotation angle of the bottle (e.g. in 0.5 degree steps) how the embossing
sample on the bottle surface appears distorted to an observer (= the camera).
These calculated distorted embossing samples must then be compared with the
embossing sample observed on an imaged bottle. The calculated embossing sample
that matches the observed sample most closely then determines the rotation
angle of the bottle.
Figure 2 shows as an example a typical container feature, an embossing sample
16 on a bottle, in almost frontal view. The bottle edge as well as the bottle
centre is marked with a thin vertical red line. This frontal view is assigned
the rotation angle zero degrees. One naturally defines the zero point of the
rotation angle on the basis of the symmetry of the embossing sample (i.e. "in
the centre of the embossing sample"). Along the horizontal test line 17 the
points 17.1 - 17.7 at which the embossing intersects the test line 17 are
marked. These points are referred to below as embossing points.
The variable Xi denotes the seen position of an embossing point in a scanned
image, and Zi denotes its world coordinate on the bottle surface. The running
index i numbers the individual embossing points.
Figure 3 shows the same embossing sample, with the bottle twisted by 24
degrees to the left. In this picture too the embossing points are marked. On
account of the twisting of the bottle, the positions and distances of the
embossing points 17.1 - 17.7 have characteristically changed. For example, on
account of the perspective distortion on the cylindrical bottle body, the
visible distance between two adjacent embossing points that have moved closer
to the left bottle edge is reduced compared to the untwisted position. In the
case of a stronger twist, parts of the embossing sample would disappear behind
the bottle horizon.
A geometric calculation according to fig. 5 leads to the formulas (1) and (2).
With x denoting the normalized image position of a surface point and z its
arc-length position on the bottle surface, the projection geometry gives

x = R · sin(z/R) / (d - R · cos(z/R))        (1)

z = R · ( arcsin( x·d / (R·sqrt(1 + x²)) ) - arctan(x) )        (2)

Here R denotes the bottle radius and d the distance of the camera from the
bottle centre.
With these formulas it is possible to convert the seen position Xi of an
embossing point 17.1 - 17.7 into the world coordinate Zi on the bottle surface
and vice versa. In order to calculate the exact distribution of the embossing
points 17.1 - 17.7 along the horizontal test line 17 for an arbitrary rotation
angle of the bottle, the positions Zi of all embossing points 17.1 - 17.7 on
the bottle surface must be known for one known rotation angle (e.g. for zero
degrees) with respect to the symmetry axis (= zero point) of the embossing
sample. The position Zi of an embossing point is defined as its distance from
the seen bottle centre, measured along the bottle surface.
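The conversion between seen image positions and world coordinates on the bottle surface can be sketched as follows. This is a minimal implementation assuming a standard pinhole model (camera at distance d from the bottle axis, bottle radius R, x a normalized image coordinate); the exact constants of the patent's own formulas (1) and (2) may differ.

```python
import math

def surface_to_image(z, R, d):
    """Formula (1) analogue: arc-length position z on the bottle surface
    (measured from the point facing the camera) -> normalized image
    coordinate x, for a pinhole camera at distance d from the bottle axis."""
    phi = z / R                                   # surface angle at the axis
    return R * math.sin(phi) / (d - R * math.cos(phi))

def image_to_surface(x, R, d):
    """Formula (2) analogue: the inverse mapping, obtained by solving
    x * (d - R*cos(phi)) = R*sin(phi) for phi."""
    phi = math.asin(x * d / (R * math.hypot(1.0, x))) - math.atan(x)
    return R * phi
```

Within the visible part of the surface the two functions are exact inverses of one another, which is what the text needs for clicking monitor positions and converting them back and forth.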
Teaching an embossing sample to the identification algorithm is thereby made
much simpler. As can be seen clearly in figures 2 and 3 from the embossing
points 17.1 - 17.7, a user guidance can be implemented in a computer programme
in which a user marks the crossing points of the embossing sample with the
test line 17. With the help of formula (2) the clicked monitor positions Xi
can immediately be converted into world coordinates Zi on the bottle surface.
Thus the user can make the embossing sample available to the algorithm in the
form of a list of embossing points 17.1 - 17.7. Once the embossing points Zi
have been determined in world coordinates for an embossing sample, they can be
converted, for any rotation angle of the bottle, by formula (1) in the reverse
direction into the seen positions Xi. The identification algorithm can thus
calculate the seen positions Xi(ϕ) of the given embossing sample for all
possible rotation angles ϕ. In practice it has proved useful for the
identification algorithm to carry out this calculation for all rotation angles
ϕk with an angular spacing of 0.25 degrees, i.e. ϕk = 0.25 degrees · k with
k = 0, ±1, ±2, ±3, .... For each angle ϕk the algorithm can retain the
associated distribution of the seen positions Xi(ϕk) in memory and does not
have to calculate it afresh for the sample search on the next bottle with the
same embossing sample. This can save a lot of computing time.
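Precomputing the distorted patterns Xi(ϕk) once per embossing sample can be sketched like this, under the same assumed pinhole geometry (R, d) as before; the 0.25 degree step follows the text, while the 30 degree search range is an illustrative choice.

```python
import math

def seen_positions(z_points, R, d, phi):
    """Seen image positions of the embossing points (world arc coordinates
    z_points, defined for rotation angle zero) after the bottle is rotated by
    phi radians. Points behind the bottle horizon are dropped, as described."""
    horizon = math.acos(R / d)        # angular limit of the visible surface
    xs = []
    for z in z_points:
        a = z / R + phi               # surface angle after the rotation
        if abs(a) < horizon:
            xs.append(R * math.sin(a) / (d - R * math.cos(a)))
    return xs

def pattern_table(z_points, R, d, step_deg=0.25, max_deg=30.0):
    """Distribution of seen positions for every candidate angle
    phi_k = step_deg * k, kept in memory so that the next bottle with the
    same embossing sample needs no recalculation."""
    k_max = int(round(max_deg / step_deg))
    return {k: seen_positions(z_points, R, d, math.radians(step_deg * k))
            for k in range(-k_max, k_max + 1)}
```

Caching the table per container type is what saves the computing time mentioned in the text.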
As a next step the algorithm must decide which distribution Xi(ϕk) is best
suited to the situation observed in the image. For this a method is used that
allocates an evaluation number (score) to each distribution Xi(ϕk). This score
is constructed in such a way that it is the greater, the better the observed
situation fits a distribution. The maximum score Skmax that is reached for a
given image situation thus determines the rotation angle ϕkmax of the bottle.
For calculating a score, the brightness profile H(x) is determined along the
test line 17 (fig. 6). Here x denotes the pixel position along the horizontal
test profile. In such a brightness profile, embossing points become
conspicuous through clear brightness fluctuations on a length scale that
approximately corresponds to the width of an embossing point. These brightness
fluctuations are, however, superimposed by other brightness fluctuations that
take place on a clearly greater length scale and can thus be separated from
the brightness fluctuations caused by embossing points in the following way:
from the brightness profile H(x) one calculates a brightness profile HAve(x)
that is smoothed on a length scale clearly above the width of an embossing
point. This smoothed brightness profile HAve(x) is subtracted from the
original brightness profile H(x) and only the absolute values of the
differences are considered, i.e.

Hsub(x) = | H(x) - HAve(x) |

Regions in which there are no embossing points then reveal very small values
of Hsub(x), whereas high values are found at an embossing point. By selecting
a suitable threshold value, the locations bi of the embossing points in the
given image can be identified in this way.
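The separation of the two length scales and the thresholding can be sketched as follows; the window size and threshold value are illustrative choices, not taken from the patent.

```python
import numpy as np

def find_embossing_points(H, emboss_width=9, threshold=12.0):
    """Locate the embossing points b_i along the test line.

    H is the brightness profile H(x). It is smoothed on a scale clearly above
    the embossing width to obtain H_Ave(x); the absolute difference
    H_sub(x) = |H(x) - H_Ave(x)| is small away from embossing points and
    large on them, so a threshold picks them out.
    """
    win = 4 * emboss_width + 1
    kernel = np.ones(win)
    # normalise by the actual window size so the profile edges stay flat
    h_ave = np.convolve(H, kernel, mode="same") / \
        np.convolve(np.ones(len(H)), kernel, mode="same")
    h_sub = np.abs(np.asarray(H, dtype=float) - h_ave)
    above = h_sub > threshold
    points, i = [], 0
    while i < len(above):                 # one point per contiguous run
        if above[i]:
            j = i
            while j < len(above) and above[j]:
                j += 1
            points.append(i + int(np.argmax(h_sub[i:j])))
            i = j
        else:
            i += 1
    return points
```

Taking the absolute value means that both darker and brighter embossing marks are found with the same threshold.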
The score Sk for a distribution of the seen positions Xi(ϕk) is then
calculated in the following way: the point pair bi and Xi with the smallest
distance is searched for. If this distance is smaller than a pre-given
maximum distance d, the found point pair is evaluated as a match, i.e. it is
accepted that the position of the embossing point bi found in the image
corresponds to a position of the embossing sample at the bottle twist ϕk. In
this case a bonus contribution is added to the score Sk. As the embossing
samples of different bottles are never all absolutely identical, and the
bottle geometry as well as the bottle position relative to the camera is
subject to fluctuations during imaging, one can never assume an exact match
of a point pair bi and Xi. Therefore, via the maximum distance d it is only
demanded that the points lie sufficiently close together. If a point pair has
been found in this way, the points are marked in an internal list of the
algorithm as already allocated. For the remaining points the procedure is
repeated until all possible points have either been allocated or all points
have been identified as not allocatable (i.e. for a point bi no model point
Xi has been found that lies sufficiently close).
If no corresponding points bi are found for model points Xi, then malus
(penalty) contributions are deducted from the score for these.
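The bonus/malus matching just described can be sketched as a greedy pairing; the values of d_max, bonus and malus below are illustrative and would be tuned for a concrete embossing sample.

```python
def pattern_score(b_points, x_model, d_max=3.0, bonus=1.0, malus=0.5):
    """Score S_k for one candidate distribution X_i(phi_k).

    Repeatedly pair the closest remaining image point b_i and model point
    X_i; a pair closer than d_max earns a bonus and both points are marked
    as allocated. Model points left without a counterpart cost a malus.
    """
    b_free, x_free = list(b_points), list(x_model)
    score = 0.0
    while b_free and x_free:
        b, x = min(((b, x) for b in b_free for x in x_free),
                   key=lambda pair: abs(pair[0] - pair[1]))
        if abs(b - x) > d_max:
            break                          # nothing left lies close enough
        score += bonus                     # accepted pair
        b_free.remove(b)                   # mark as already allocated
        x_free.remove(x)
    return score - malus * len(x_free)     # malus for unmatched model points
```

Because the closest pair is always taken first, the loop terminates as soon as the smallest remaining distance exceeds d_max, matching the "not allocatable" condition in the text.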
Figure 7 shows, for the example shown in figure 3, the score Sk as a function
of the angle. One can see that there is a sharp maximum at approximately 24
degrees, i.e. the seen point sample bi corresponds best to the point sample
Xi for a bottle rotation angle of -24 degrees.
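Determining the bottle rotation from the score curve of Fig. 7 then reduces to an argmax over the candidate angles, as in this small sketch:

```python
def best_rotation(scores_by_angle):
    """Return the candidate rotation angle (degrees) whose score S_k is
    maximal, i.e. the position of the sharp peak in the score curve."""
    return max(scores_by_angle, key=scores_by_angle.get)
```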
The invention has been described above on the basis of a design example. It is
obvious that numerous alterations and variations are possible without
deviating from the basic concept of the invention. Thus it was mentioned above
that the first stage of the image identification system has two cameras 8 and
9 and that the second and third stage each have only one camera 10 or 11. The
number of cameras in these stages can obviously also be selected differently;
it is however necessary, or at least expedient, that the camera system of the
first stage detects as large a circumference area of the respective passing
container 2 as possible.
In the shown design form the cameras 8, 9, 10 and 11 are designed or activated
in such a way that for each passing container 2 one image or image data set is
compiled, and on the basis of this image data set the pre-alignment (in the
first stage), the further alignment (in the second stage) and the fine
adjustment (in the third stage) take place by comparison with the respective
stored image data.

List of reference signs
1 Labelling machine
2 Container or bottle
3 Container inlet
4 Container outlet
5 Rotor or turntable
6 Container support/holder
7 Labelling station
8,9,10,11 Electronic camera
12 Evaluation and control electronic system
13,14 Background element or background mirror
15 Background lighting element, e.g. fluorescent screen
16 Embossing sample or container feature
17 Test line
17.1 - 17.7 Intersection point or embossing point
A Rotation direction of the rotor 5
B1, B2, B3 Lighting
W1, W2 Angular region of the rotation movement of the rotor 5

WE CLAIM
1. Device for aligning containers (2), with respect to at least one geometric
container feature (16), into a theoretical position or orientation, with a
conveyer (5) with container receivers (6) for receiving, in each case, one
container, as well as with cameras (8, 9, 10, 11) of an image recognition
system that are arranged along a conveying route formed by the conveyer
(5), which brings about an alignment of the containers (2) through a
comparison of the actual image data supplied by the cameras (8, 9, 10, 11)
with theoretical data or image characteristic values stored in an
electronic evaluation and control unit (12), wherein a pre-alignment of the
containers (2) occurs using a first camera system forming a first stage of
the image recognition system, the at least one camera (8, 9) of this first
camera system recording, over a large area, the container or outer
peripheral surface having the typical geometric container feature (16),
wherein at least one further camera system, following in the conveying
direction, records, with the at least one camera (10, 11) thereof, the
respective guided-past container (2) for further alignment in a narrower
peripheral surface area having the at least one typical geometric container
feature (16), and wherein the electronic unit brings about a further
alignment on the basis of further stored image data or characteristic
values, in the case of existing deviations from the theoretical position,
via the actuator of the respective container receiver (6), characterized in
that the electronic unit (12) compares the distance of the at least two
reference points (17.1 - 17.7) of the typical container feature (16) of the
respective container (2) in the image data supplied by the at least one
camera (8, 9, 10, 11) with characteristic values stored for the container
type, and uses the comparison result to activate the actuator of the
respective container receiver (6).
2. Device as claimed in claim 1, wherein in the conveying direction (A) of the
conveyer (5), following the first camera system forming the first stage of
the image recognition, there is provided at least one second camera
system forming a second stage of the image recognition system as well as
a third camera system forming a third stage of the image recognition
system with, in each case, at least one camera (10,11).
3. Device as claimed in claim 1 or 2, wherein, with the first camera system or
the at least one camera (8, 9) of this system, a peripheral area of the
respective container (2) of more than 180 degrees is recorded.
4. Device as claimed in one of the preceding claims, wherein the first camera
system has at least two cameras (8, 9) which are arranged with the
camera axes thereof at an angle relative to one another.
5. Device as claimed in claim 4, wherein the images or image data supplied
by the at least two cameras (8, 9) of the first camera system are
combined in the electronic unit (12) into one overall image.
6. Device as claimed in one of the preceding claims, wherein, at least one
camera system has at least two cameras (8, 9).

7. Device as claimed in one of the preceding claims, wherein the at least one
further camera system, in particular the second and third camera system,
in each case only have one camera (10,11).
8. Device as claimed in one of the preceding claims, wherein the camera
systems or the cameras thereof are formed to produce individual images of
the past-moved containers (2).
9. Device as claimed in one of the preceding claims, wherein the cameras (8,
9, 10, 11) of the camera systems are configured and / or controlled such
that they produce only one image each of the respective guided-past
container (2).
10. Device as claimed in one of the preceding claims, wherein at least one
camera system, preferably the first camera system, is formed with
foreground lighting (B1, B2).
11. Device as claimed in one of the preceding claims, wherein, at least one
camera system, for example the at least one further camera system or the
third camera system is formed for the production of images or image data
through translucency.
12. Device as claimed in one of the preceding claims, wherein, at least one
camera system is designed with background lighting.
13. Device as claimed in claim 10 or 12, wherein the foreground or
background lighting can be adjusted in color and / or intensity.

14. Device as claimed in one of the preceding claims, wherein the conveyer is
a rotor (5) that can be driven in a rotating manner about a vertical
machine axis.
15. Device as claimed in one of the preceding claims, wherein, each container
receiver (6) has its own actuator.
16. Device as claimed in one of the preceding claims, wherein the container
receivers (6) are rotary tables.
17. Device as claimed in one of the preceding claims, wherein the electronic
unit compares, as a distance pattern, several distances between reference
points (17.1-17.7) of the typical container feature (16) of the respective
container (2) in the image data supplied by the at least one camera (8, 9,
10, 11) with at least one distance pattern stored for the container type.
18. Device as claimed in claim 17, wherein the electronic unit (12) compares
the distance or the distance pattern of the reference points (17.1-17.7) in
the image data supplied by the at least one camera (8, 9, 10, 11) with
distances or distance patterns stored for the container type, ascertains
the distance corresponding best to the distance in the image data, or the
distance pattern corresponding best to the distance pattern in the image
data, and determines therefrom the necessary correction for the alignment
of the container (2).

19. Device as claimed in one of the preceding claims, wherein it is a
component of a labeling machine (1) with a container infeed (3) for the
containers (2) to be labeled, with a container discharge (4) for the
labeled containers (2), as well as with at least one labeling station (7)
provided at a conveying route formed by the conveyer (5) between the
container infeed (3) and the container discharge (4), and wherein the first
camera system as well as the at least one further camera system are
provided at the conveying route between the container infeed (3) and the at
least one labeling station (7).
20. Device as claimed in claim 19, wherein the conveyer is a rotor (5),
revolving about a vertical machine axis, with a plurality of container
receivers (6).
21. Device as claimed in one of the preceding claims, wherein each container
receiver (6) can be rotated, for aligning the container (2) provided at
this receiver, through an actuator activated by the electronic unit (12).
22. Labelling machine with a device for aligning containers (2), with respect
to at least one geometric container feature (16), into a theoretical
position or orientation, as claimed in claim 1.


ABSTRACT
"DEVICE FOR ALIGNING CONTAINERS WITH RESPECT TO AT LEAST
ONE GEOMETRIC FEATURE INTO A THEORETICAL POSITION OR
ORIENTATION"
The invention relates to a device for aligning containers (2), with respect to
at least one geometric container feature (16), into a theoretical position or
orientation, with a conveyer (5) with container receivers (6) for receiving,
in each case, one container, as well as with cameras (8, 9, 10, 11) of an
image recognition system that are arranged along a conveying route formed by
the conveyer (5), which brings about an alignment of the containers (2)
through a comparison of the actual image data supplied by the cameras (8, 9,
10, 11) with theoretical data or image characteristic values stored in an
electronic evaluation and control unit (12). A pre-alignment of the containers
(2) occurs using a first camera system forming a first stage of the image
recognition system, the at least one camera (8, 9) of this first camera system
recording, over a large area, the container or outer peripheral surface having
the typical geometric container feature (16). At least one further camera
system, following in the conveying direction, records, with the at least one
camera (10, 11) thereof, the respective guided-past container (2) for further
alignment in a narrower peripheral surface area having the at least one
typical geometric container feature (16). The electronic unit brings about a
further alignment on the basis of further stored image data or characteristic
values, in the case of existing deviations from the theoretical position, via
the actuator of the respective container receiver (6). The electronic unit
(12) compares the distance of the at least two reference points (17.1 - 17.7)
of the typical container feature (16) with characteristic values stored for
the container type and uses the comparison result to activate the actuator of
the respective container receiver (6).

Patent Number 253314
Indian Patent Application Number 1402/KOL/2006
PG Journal Number 28/2012
Publication Date 13-Jul-2012
Grant Date 11-Jul-2012
Date of Filing 20-Oct-2006
Name of Patentee KHS GMBH
Applicant Address JUCHOSTRASSE 20, 44143 DORTMUND, GERMANY
Inventors:
# Inventor's Name Inventor's Address
1 FRANK PUTZER LUTTMOORKAMP 15, 22399 HAMBURG, GERMANY
PCT International Classification Number B65D5/00
PCT International Application Number N/A
PCT International Filing date
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 10 2005 050902.9 2005-10-21 Germany