CA2229175A1 - Automatic focus system - Google Patents


Publication number
CA2229175A1
Authority
CA
Canada
Legal status
Abandoned
Application number
CA 2229175
Other languages
French (fr)
Inventor
Ryan S. Raz
Current Assignee
Veracel Inc
Original Assignee
Morphometrix Technologies Inc
Application filed by Morphometrix Technologies Inc filed Critical Morphometrix Technologies Inc
Priority to CA 2229175 priority Critical patent/CA2229175A1/en
Publication of CA2229175A1 publication Critical patent/CA2229175A1/en
Abandoned legal-status Critical Current


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/24Base structure
    • G02B21/241Devices for focusing

Abstract

An automated focus system comprising an intelligent controlled electro-mechanical actuation system for manoeuvring a microscope lens. The focus system brings a stained biological material into optimal focus for image acquisition. The automated focus system is operable without human intervention and utilizes a merit function based on the "texture" of a dark stained biological material in the field of view of the microscope objective. The system utilizes a maximization procedure using a feedback technique related to the merit function in order to control the position of the objective lens. The merit function, in turn, is based on a series of calculations performed on a set of digitized images captured at different focal positions. The use of an intelligent control routine to issue instructions to the motion control system allows the device to avoid the usual focusing pitfalls associated with microscopic image capture. The automated focus system also includes a focus mapping method which is particularly suited to monolayer specimens. The focus mapping method generates a mapping of the surface of the microscope slide which is then used to predict the optimal focus position at any point within the mapped region.

Description

AUTOMATIC FOCUS SYSTEM
This application is a continuation-in-part of co-pending patent application filed January 19, 1998, which is a continuation of International Patent Application No.
PCT/CA96/00476 filed July 18, 1996.
Field of the Invention
The present invention relates to automatic focus systems, and more particularly to an automatic focus system for the microscopic examination of tissue or tissue components in medicine and biology.
Background of the Invention
The microscopic examination of tissue or tissue components is a common and valuable practice in both medicine and biology. In the art, microscopic examination is used to view blood smears under magnification to establish the density of certain types of blood components or the presence of disease. Such procedures typically rely on the visual appearance of the tissue, which is often enhanced by the use of specialized stains that bind to certain tissue components, foreign bodies or the products of cellular processes.
With the advent of computer technology, it has now become possible to automate many of the manual examination procedures by digitizing the images and placing them into the memory of a computer for analysis, display and storage. However, the success of known automated imaging systems critically depends on the ability of the system to focus its optics on the tissue components of interest without operator intervention.
The utilization of image processing equipment for the diagnostic analysis of cellular tissue often requires a detailed examination of the nuclei within the cells.
Specifically, it is the evaluation of the granulation or the "texture" of the nuclei together with the accurate representations of the borders of the nuclei that plays an important role in the final diagnostic decision.
One method known in the art for enhancing the "edge" information comprises applying the Laplacian operator. The Laplacian operator comprises a mathematical operation represented by the following expression:
(∂²/∂x² + ∂²/∂y²) I(x, y)    (1)

where I(x,y) is the continuous, planar distribution of light intensity that makes up an image.
Summary of the Invention
It has been found that the Laplacian operator can be adapted to enhance the texture content of digitized images if the texture is considered to be composed of many edges. In the case of cells examined in a diagnostic procedure, the texture of the nuclei is typically ascribed to the presence of many small, thin strands of genetic material. Such material can be interpreted as a system of many edges and thereby lends itself to the enhancement provided by the Laplacian operation.
In addition to the enhancement of edge/texture information, it has been found that the Laplacian operator can be modified to yield a quantitative measure of the texture content of a digitized image.
Accordingly, the present invention provides an automated focus system for adjusting the positioning of magnifying optics utilized for microscopic examination of tissue or tissue components in medical and biological applications. It is a feature of the present invention that texture characteristics of the captured images are enhanced and the texture characteristics are also utilized in the determination of optimal focussing.
According to the invention, the Laplacian operator is modified to provide enhanced edge/texture information and thereby a quantitative measure of the texture content of a digitized image. According to the invention, the Laplacian operator is modified in four respects. Firstly, the absolute value of the Laplacian operation is used rather than its calculated value.
Secondly, the local value of the Laplacian operation is calculated and added to a running total. Thirdly, the two-dimensional Laplacian operator (expression (1) above) is modified to a one-dimensional Laplacian operator as shown below:
∂²/∂x² I(x, y_k)    (2)

where I(x, y_k) represents a single line of video data generated by the digitizing means of the system. In another aspect, the one-dimensional Laplacian operation is generalized for discrete digitized images to yield a "second difference" equation as shown below:

∂²/∂x² I(x_j, y_k) = (I_{j+1,k} - I_{j,k}) - (I_{j,k} - I_{j-1,k})    (3)

In another aspect of the present invention, expression (3) allows the calculation of the texture content to be done at high computing speeds with reduced memory requirements. Fourthly, the Laplacian operation is modified according to a weighting curve. The weighting curve can be selected, for example, to give darker regions of the image greater weight in the running total than lighter regions. The weighting according to the weighting curve allows darker regions, i.e. those typically associated with nuclei, to dominate the texture content calculation so that the method preferentially favours nuclear texture content.
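For illustration only (not part of the patent text), the "second difference" of expression (3) can be sketched in Python; the function name and sample pixel values below are hypothetical:

```python
def second_difference(line, j):
    # Expression (3): (I[j+1] - I[j]) - (I[j] - I[j-1]) along one scan line.
    return (line[j + 1] - line[j]) - (line[j] - line[j - 1])

# A linear intensity ramp has zero second difference; a thin bright strand
# (an "edge") does not -- accumulating these magnitudes measures texture.
ramp = [10, 20, 30, 40, 50]
strand = [10, 10, 40, 10, 10]
on_ramp = second_difference(ramp, 2)           # 0
on_strand = abs(second_difference(strand, 2))  # 60
```

Because smooth intensity gradients cancel while edges do not, summing the absolute values of this quantity over an image rises with its texture content.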
Therefore, according to one aspect of the preferred embodiment of the present invention, there is provided an automated focus system comprising an intelligent controlled electro-mechanical actuation system for manoeuvring a microscope objective. The system brings a stained biological material into optimal focus for image acquisition. The system is operable without human intervention and utilizes a merit function based on the "texture" of a dark stained biological material in the field of view of the microscope objective. The system utilizes a maximization procedure using a feedback technique related to the merit function in order to control the position of the objective lens.
The merit function, in turn, is based on a series of calculations performed on a set of digitized images captured at different focal positions. The use of an intelligent control routine to issue instructions to the motion control system allows the device to avoid the usual focusing pitfalls associated with microscopic image capture.
In a first aspect, the present invention provides an automatic focus system for focussing a microscope objective for viewing a specimen located on a carrier, said automatic focus system comprising: (a) actuator means coupled to said microscope objective for moving said microscope objective in response to control signals; (b) image capture means for capturing images of the specimen located on said carrier; (c) means for calculating focus numbers for said images; and (d) a controller for controlling said actuator means, said controller including means for generating an optimum focus position from said focus numbers and having means for issuing control signals to said actuator means for moving said microscope objective to said optimum focus position.
In a second aspect, the present invention provides an automatic focussing method for focussing a microscopic objective for viewing a specimen located on a carrier, said method comprising the steps of: (a) capturing a plurality of images of said specimen; (b) calculating a focus number for each of said images; (c) determining an optimum focus position for said microscope objective from said focus numbers; (d) moving said microscope objective to said optimum focus position.
In another aspect, the present invention provides a focus mapping method. The focus mapping method alleviates the need to capture and analyze three or more digital images of a specimen in order to discover the correct focal position for one point on the microscope slide. The focus mapping method according to this aspect of the present invention is particularly suited to monolayer cytological specimens.
The focus mapping method for generating a focus map for predicting a focal position for a point of interest in a biological specimen carried on a slide comprises the steps of: (a) determining a first focal position for the surface of the slide carrying the specimen; (b) determining a plurality of focal positions for selected points on the surface of the slide; (c) generating a focus map from the focal positions determined in step (b), wherein said focus map provides a means for determining a focal position for a point of interest on said focus map based on the focal positions of the points neighbouring said point of interest.
Brief Description of the Drawings
Reference will now be made to the accompanying drawings which show, by way of example, preferred embodiments of the present invention, and in which:
Fig. 1 is a block diagram of an automatic focus system according to this invention;
Fig. 2 is a block diagram of a "second difference" calculation according to the invention;
Fig. 3 is a graphical representation of a second difference weighting function according to the present invention;
Fig. 4 is a flow chart illustrating a method for calculating the Focus Number according to this invention;
Fig. 5 is a flow chart illustrating a method for initially focussing a microscope objective according to the present invention;

Fig. 6 is a flow chart illustrating a method for tracking the focus of the microscope objective lens according to this invention;
Fig. 7(a) shows in diagrammatic form the focal point variation in a conventional cytological specimen;
Fig. 7(b) shows in diagrammatic form the focal point variation in a monolayer cytological specimen;
Fig. 8(a) shows in diagrammatic form the expected grid layout for a monolayer cytological specimen;
Fig. 8(b) shows in diagrammatic form the expected grid layout for a conventional cytological specimen;
Fig. 9 shows in diagrammatic form global variation in focal position as a result of slide tilt;
Fig. 10(a) depicts the first step in a focus mapping method according to another embodiment of the present invention;
Fig. 10(b) depicts a second step in the focus mapping method;
Fig. 10(c) depicts a third step in the focus mapping method;
Fig. 10(d) depicts a fourth step in the focus mapping method;

Fig. 10(e) depicts a fifth step in the focus mapping method; and
Fig. 11 is a flow chart showing the steps for the focus mapping method according to the present invention.
Detailed Description of the Preferred Embodiment
Reference is made to Fig. 1, which shows an automatic focus system according to the present invention and denoted by 10. The automatic focus system 10 according to the present invention is suitable for integration with an image acquisition module of known microscopic examination instruments comprising a digital camera and an objective lens. The details of such microscopic examination instruments are within the understanding of those skilled in the art.
As shown in Fig. 1, the automatic focus system 10 comprises a digital camera 12, a focus number calculation module 14, and a controller 16. The digital camera 12 is coupled to a mechanical actuator 18 which in turn is coupled to an objective lens 20 of a microscope (not shown).
The mechanical actuator 18 is of known design and carries the objective lens 20 and translates it vertically under instructions from the controller 16. In one embodiment, the mechanical actuator comprises a voice-coil actuator with a long range (1 mm) movement.
The voice-coil actuator 18 may also include an LVDT (Linear Variable Differential Transformer) position sensor (not shown in Fig. 1). The LVDT sensor, as will be familiar to those skilled in the art, provides precise positioning information to the controller 16, although such a position sensor is not required for operation of the system 10.
The automatic focus system 10 according to the present invention includes a texture calculation routine.
Preferably, the texture calculation routine is embedded in electronic hardware. For the system 10 shown in Fig. 1, the texture calculation routine resides in the focus number calculation module 14. As shown in Fig. 1, digitized images 22 generated by the digital camera 12 are transferred to the focus number calculation module 14 for processing. As will be described in more detail below, the texture calculation routine analyzes the digitized images produced by the digital camera 12 and generates a figure, i.e. Focus Number, that measures the texture content of the dark-stained regions of the image.
The controller 16 preferably also comprises an electronic hardware module. The primary function of the controller 16 is to receive and store the texture figures, i.e. Focus Numbers, generated by the focus number calculation module 14 for different focal positions of the objective lens 20. Using the Focus Numbers, the controller 16 determines how far and in what direction the objective lens 20 must be moved in order to maximize the texture, i.e. focus, of the dark-stained regions in the field of view. After a decision is generated, the controller 16 issues appropriate motion commands 26 to the mechanical actuator 18 for re-positioning of the objective lens 20. Preferably, the controller 16 includes logic for receiving sensor and position data 28 from the mechanical actuator 18.
The process steps for analyzing the texture content and controlling the positioning of the objective lens 20 will now be described with reference to Figs. 2 to 6. According to the invention, the automated focusing system 10 is operable in two modes and the mode of operation depends upon the anticipated distance to focus for the current position of the objective lens 20. If the distance from the current position of the objective lens 20 is determined to be large, for example, as might be the case on initial start-up or when a large lateral excursion has been executed, then the automatic focusing system 10 applies an "initial focusing protocol". On the other hand, if the distance from the current position to the optimum focal point is likely to be small, then the automatic focusing system 10 uses a "tracking focusing protocol".
The method according to the present invention operates on the digitized images 22 produced by the digital camera 12. In known manner, the digital camera 12 digitizes the image of the stained biological material (located on a specimen plate or slide) captured through the objective lens 20. Fig. 2 depicts in diagrammatic form a digitized image 50. The digitized image 50 is generated by the digital camera 12 and transferred to the focus number calculation module 14 for processing. A
typical digitized image will comprise 512 scan lines produced by known CCD-type imaging cameras. Fig. 2 depicts a partial (i.e. not all 512 lines are shown) digitized image 50 comprising a series of digital scan lines 52 shown individually as 52a, 52b, 52c and so on.
Each digital scan line 52, in turn, comprises a sequence of digital scan values 54 denoted individually as 54a, 54b....54n. Each scan value 54 corresponds to the binary value of the digitized pixel as will be understood by one skilled in the art.

The method for providing automatic focusing according to the present invention utilizes a one-dimensional Laplacian operator given by the expression:

∂²/∂x² I(x, y_k)    (1)

where I(x, y_k) represents a single line of video data 52 (Fig. 2) generated by the digitizing means, i.e. digital camera 12, of the system 10. According to this aspect of the present invention, the one-dimensional Laplacian operation given in expression (1) is generalized for discrete digitized images to yield a "second difference" equation as shown below:

∂²/∂x² I(x_j, y_k) = (I_{j+1,k} - I_{j,k}) - (I_{j,k} - I_{j-1,k})    (2)
The "second difference" equation given by expression (2) allows the calculation of the texture content to be done at high computing speeds and also reduces the memory requirements for the focus number calculation module.
In another aspect of the present invention, the absolute value of each "second difference" calculation is weighted according to a weighting function or curve. A
weighting curve denoted generally by reference 40 is shown in Fig. 3. The vertical axis of the weighting curve 40 represents a weighting factor "w" and the horizontal axis represents pixel intensity "x" for each pixel in the scan line 52. The weighting curve 40 depicted in Fig. 3 has been selected to give darker regions of the image 50 greater weight in the running total than lighter regions. In other words, the weighting according to the weighting curve 40 allows darker regions, i.e. those typically associated with nuclei, to dominate the texture content calculation so that the method preferentially favours nuclear texture content.
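A minimal Python stand-in for a weighting curve of this shape is sketched below. The actual curve 40 is defined only graphically in Fig. 3; the linear ramp used here is an assumption for illustration:

```python
def weight(x, x_max=255):
    # Hypothetical weighting curve w(x): intensity x = 0 (black) receives
    # full weight, x = x_max (white) receives none.  The patent specifies
    # the curve's shape only qualitatively; this linear ramp is assumed.
    return (x_max - x) / x_max

# Dark, nucleus-like pixels dominate the running total; bright background
# pixels contribute little.
dark, bright = weight(20), weight(230)
```

Any monotonically decreasing curve would serve the stated purpose of making nuclear (dark-stained) texture dominate the Focus Number.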
Reference is next made to Fig. 4 which shows in flow-chart form a method for calculating the Focus Number according to the present invention. The method denoted generally by 100 operates on a digitized grey-level image 50 (as depicted in Fig. 2) which is generated by the digital camera 12. The method 100 is preferably implemented as a routine embedded in electronic hardware in the focus number calculation module 14. In the preferred embodiment, a series of three digitized images are captured by the digital camera 12 with each image at a different focal position. In addition, a fourth image designated a "test image" or "confirmation image" is also utilized.
The method 100 calculates a Focus Number for each of the images, where each Focus Number provides a measure of the quantity of "texture" in the image and, utilizing the weighting function 40 of Fig. 3, particular attention is given to the dark-stained regions. The darker regions are of interest because they are generally associated with the objects of diagnostic interest such as nuclei in the biological material. However, it will be understood that the weighting of the Focus Number or texture calculation can be altered to accommodate alternative staining schemes.
Referring to Fig. 4, the first step in the focus number calculation routine 100 comprises a decision block 102 which checks if the last line 52, e.g. line 512 in a 512-line image, in the image 50 has been reached. If yes, the routine 100 stops or returns (block 104). If the last line 52 (i.e. line 512 of the image 50) has not been reached, the routine 100 determines (decision block 106) if a new line, e.g. line 52k (Fig. 2), in the image 50 is being processed. If a new line is being processed, the routine 100 inputs the first and second pixels 54a, 54b (Fig. 2) as shown in block 108. If the processing of the line 52k is already in progress, the routine 100 inputs the next pixel, e.g. pixel 54h in the image line 52 shown in Fig. 2.
Referring to Fig. 4, the routine 100 in block 112 executes the "second difference" calculation according to the "second difference" equation given above in expression (2). In block 114, the routine 100 takes the absolute value of the second difference calculation.
Next, the routine 100 performs the weighting assignment in block 116. In this step, the weighting function 40 (Fig. 3) is applied to the absolute value so that the value is weighted inversely with respect to the centre pixel's value. In block 118, the routine 100 adds the weighted value determined in step 116 to a running total.
The running total represents the Focus Number for the image under consideration.
The routine 100 then moves to step 120 where the pixel values are updated. As will be understood by one skilled in the art, the routine 100 utilizes four memory locations for processing. Three memory locations store the active picture element or pixel values, i.e. pixels I_{j-1,k}, I_{j,k}, and I_{j+1,k} in the "second difference" equation. The fourth memory location stores the running total value (updated in step 118). In step 120, pixel 2 (i.e. I_{j,k}) becomes new pixel 1 (i.e. I_{j-1,k}) and pixel 3 (i.e. I_{j+1,k}) becomes new pixel 2 (i.e. I_{j,k}). New pixel 3 is updated in step 110 as the next pixel in the line 52k (described above). The steps of the routine 100 are repeated until all of the lines, e.g. 512, in the image 50 have been processed. At completion of the processing (step 104), the running total value calculated for the image represents the Focus Number or texture quantity for the image.
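As an editorial sketch (not the patent's hardware implementation), the streaming structure of routine 100 can be written in Python. The weighting curve is an assumed linear ramp, since the patent gives curve 40 only graphically:

```python
def focus_number(image, weight):
    # Sketch of routine 100: accumulate the weighted absolute second
    # difference over every interior pixel of every scan line.  Only four
    # values are live at a time: three active pixels and the running total.
    total = 0.0
    for line in image:
        p1, p2 = line[0], line[1]          # block 108: first two pixels of a new line
        for p3 in line[2:]:                # block 110: input the next pixel
            d2 = (p3 - p2) - (p2 - p1)     # block 112: second difference
            total += weight(p2) * abs(d2)  # blocks 114-118: weight and accumulate
            p1, p2 = p2, p3                # block 120: shift the active pixels
    return total

# Assumed linear weighting (the exact curve is not given numerically).
w = lambda x: (255 - x) / 255

flat = [[10] * 5]                   # featureless field: Focus Number is 0
textured = [[10, 10, 200, 10, 10]]  # one bright strand: Focus Number > 0
```

The shift at the end of the inner loop is exactly the memory-location update of step 120: pixel 2 becomes pixel 1 and pixel 3 becomes pixel 2, so a full image buffer is never needed.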
Both the initial focusing routine and the tracking focusing routine utilize the above described focus number calculation routine.
Reference is made to Fig. 5 which shows in flow-chart form the initial focusing routine 200. In operation, the initial focusing routine 200 utilizes a series of three digitized images taken at different focal positions (i.e. vertical translation steps for the objective lens 20). The Focus Numbers are calculated for the images and the controller 16 analyzes the Focus Numbers as follows. First, the direction of motion for the next series of three images is determined from the direction of increasing Focus Number. Second, if the Focus Numbers show a middle figure bracketed by two lower values, then a test is performed to ensure that a local maximum has been found. Third, the step size of the image series is gradually decreased to efficiently determine the maximum and also avoid first-surface reflections from the cover slip for the microscope slide.
Once the maximum Focus Number has been determined, the routine 200 takes the four Focus Numbers corresponding to the three captured images and the test or confirmation image, and fits the data to a quadratic polynomial function. The precise position of the maximum is established from the maximum in the polynomial and the controller 16 moves the objective lens 20 to this position which represents the optimal focus for the image.
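The quadratic-fit step can be sketched as follows. This is an illustration, not the patent's implementation: the function names are hypothetical and NumPy's least-squares `polyfit` stands in for whatever fitting logic the controller embeds:

```python
import numpy as np

def quadratic_peak(positions, focus_numbers):
    # Fit F(z) = a*z**2 + b*z + c to the four (position, Focus Number)
    # samples; the maximum of the fitted parabola sits at z* = -b / (2*a).
    a, b, _ = np.polyfit(positions, focus_numbers, 2)
    if a >= 0:
        raise ValueError("samples do not bracket a maximum")
    return -b / (2.0 * a)

# Synthetic samples drawn from F(z) = 100 - (z - 30)**2; the recovered
# optimal focus position should be z = 30.
z = [0.0, 25.0, 50.0, 75.0]
F = [100.0 - (zi - 30.0) ** 2 for zi in z]
best_z = quadratic_peak(z, F)
```

Fitting four samples to a three-parameter parabola overdetermines the fit slightly, which is how the extra confirmation image guards against a noise-induced spurious maximum.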
Referring back to Fig. 5, at step 202 the lens 20 is advanced one step and a digitized image is captured by the digital camera 12. A suitable step size at this stage in the routine is 50 microns. The image is stored in memory and the focus number calculation routine 100 (Fig. 4) is called in step 204 to calculate the Focus Number for the image. In step 206, the Focus Number is compared to a maximum value. If the Focus Number is greater than the maximum value, the Focus Number is stored as the maximum Focus Number in block 208. The routine 200 next determines in step 210 if the range limit has been reached. The range limit 210 keeps count of the three images and additional confirmation image.
If the range limit has not been reached, the lens 20 is advanced one step (block 202) and the sequence is repeated, i.e. another image is captured at the next step and the focus number is calculated. Once the three images have been captured, a test or confirmation image is taken to verify that the previously determined maximum Focus Number is not due to noise.
After the maximum Focus Number is determined, the initial focusing routine 200 proceeds to step 212.
At step 212, the lens 20 is moved to the maximum Focus Number position. Next, the focussing routine 200 goes back one step (block 214) and then reduces the step size (e.g. to 25 microns) for the image series (block 216).
With the step size reduced, the routine 200 advances the lens 20 by one step in block 218. The Focus Number is calculated for the image at block 220 by calling the Focus Number calculation routine 100 (Fig. 4). The Focus Number calculated at step 220 is compared to the last Focus Number in decision block 222. If the Focus Number is not greater than the last Focus Number, the routine 200 proceeds to decision block 224 to check if the range limit has been reached. If the range limit is reached (i.e. three images and a test image have been captured and processed), the routine 200 aborts processing (block 226). If the range limit is not reached, the routine 200 returns to step 218 and advances the lens 20 one more step and captures the next image and calculates the Focus Number.
Referring to Fig. 5, if the Focus Number is greater than the last Focus Number at step 222, the routine 200 moves to block 228 and executes a vertical translation step for the lens 20 and an image is captured at this new position. Next at step 230, the Focus Number for the image is calculated. The routine 200 then determines at decision block 232 if the calculated Focus Number is the maximum. If yes, then the routine 200 checks if the range limit has been reached at block 224 (i.e. three images and a test image have been processed), and if the range limit has not been reached, steps 218 to 222 are repeated as described above. If the Focus Number (determined at step 230) is not greater than the last Focus Number, i.e. the maximum has not been reached, the routine 200 moves to decision block 234 to determine if the step size can still be reduced. If the minimum step size has not been reached, the routine 200 goes back one vertical step at block 214, and steps 216 through 222 are repeated as described above.
On the other hand, if the minimum step size (e.g. 10 microns) has been reached as determined in step 234, the search for maximum Focus Number, i.e. texture, is complete and the routine 200 turns to determining the precise position of the texture maximum. At step 236, the initial focusing routine 200 takes the Focus Numbers for the three images and the additional test or confirmation image and fits a quadratic polynomial to the data. The routine 200 then establishes the position of the Focus Number maximum from the maximum in the polynomial at step 238. One skilled in the art will understand the implementation of the quadratic polynomial. In response to the position determined by the focusing routine 200, the controller 16 issues commands to the actuator 18 to move the objective lens 20 to the position which is the optimal focus for the image (block 240). The routine 200 uses sensor information 28 from the actuator 18 to verify the position of the objective lens 20 (block 242). If the positioning of the objective lens 20 is correct, the initial focusing routine 200 is complete (block 244), otherwise the position of the lens 20 is adjusted through the actuator 18 (block 240).
Reference is next made to Fig. 6 which shows the tracking focusing routine 300. The tracking focusing routine 300 is appropriately used after the system 10 has executed a small lateral move in which the anticipated focal position is not very far from the current position of the lens 20. In this context, "not far" means that it is likely that a series of three of the smallest vertical steps will be sufficient to bracket the focal position and allow the use of a quadratic polynomial fit in order to find the exact focal position.
In operation, the tracking focusing routine 300 executes three small vertical translation steps and calculates a Focus Number for each position. The routine 300 then uses these points to calculate the position of the maximum in a theoretical parabola. The parabola describes the variation of the Focus Numbers with respect to vertical position. The controller 16 then instructs the mechanical actuator 18 to move the lens 20 to this position in order to complete the focus routine.
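With three equally spaced samples the parabola's vertex has a closed form, which can be sketched as follows (illustrative Python; names are hypothetical, and the fall-back behaviour mirrors the routine's reversion to initial focusing):

```python
def parabola_vertex(z, f):
    # Vertex of the parabola through three equally spaced samples
    # (z0, f0), (z1, f1), (z2, f2) with step h = z1 - z0:
    #     z* = z1 + h * (f0 - f2) / (2 * (f0 - 2*f1 + f2))
    # A maximum exists only when the curvature f0 - 2*f1 + f2 is negative;
    # otherwise return None and let the caller revert to initial focusing.
    (z0, z1, z2), (f0, f1, f2) = z, f
    h = z1 - z0
    curvature = f0 - 2 * f1 + f2
    if curvature >= 0:
        return None
    return z1 + h * (f0 - f2) / (2 * curvature)

# Focus Numbers sampled from F(z) = -(z - 5)**2 at z = 3, 4, 5:
peak = parabola_vertex((3.0, 4.0, 5.0), (-4.0, -1.0, 0.0))  # 5.0
```

Returning `None` when the three samples do not bracket a maximum corresponds to the case where the tracking routine reverts to the initial focusing routine 200.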
Referring to Fig. 6, the tracking focusing routine 300 first checks if the vertical step size (e.g. 10 microns) is at the minimum in decision block 302. If the step size is not at the minimum, it is reduced (block 304). Once the minimum step size is set, the tracking routine 300 advances the lens 20 one step at block 306 and an image is captured, and then calculates the Focus Number for the captured image at the step in block 308.
The results of the Focus Number calculation are stored at block 310 and the lens 20 is advanced one more step at block 312 and another image is captured. The Focus Number calculation is repeated for the new vertical step (block 314) and stored in memory (block 316). The lens is then advanced another vertical step (block 318) and a third image and confirmation image are captured. The Focus Number calculations are performed for the captured images taken at the step (block 320) and stored in memory (block 322).
As shown in Fig. 6, the tracking focusing routine 300 then uses a quadratic polynomial to fit the Focus Numbers to a parabola (block 324). The parabola provides a relation between the Focus Numbers and the vertical position of the objective lens 20. In step 326, the routine 300 attempts to ascertain the maximum focus position from the parabola. If the maximum position for the optimal focus cannot be ascertained, the tracking focusing routine 300 reverts to the initial focusing routine 200 (block 328). If the maximum is ascertainable, the maximum is determined in step 330 and the controller 16 issues commands to the actuator 18 to move the lens 20 to the position for optimal focus. The position of the lens 20 is checked (block 334) and adjusted if necessary (block 332), otherwise the tracking focusing routine 300 returns control (block 336) to the calling routine running for example on a central control computer (not shown).
If the operation of the tracking focusing routine 300 cannot generate a maximum with the three measurements taken at each vertical step as described above, then the automatic focus system 10 automatically reverts to the initial focusing routine 200 described above with reference to Fig. 5.
According to another embodiment of the present invention, there is provided a focus mapping method. The focus mapping method alleviates the need to capture and analyze three or more digital images of a specimen in order to discover the correct focal position for one point on the microscope slide. As will be described, the focus mapping method is particularly suited to monolayer cytological specimens.
The distribution of cellular material on a microscope slide is generally completely random. For a conventional Pap smear specimen 400 as shown in Fig.
7(a), cervical cells 401 smeared onto a microscope slide 402 occupy a random horizontal and vertical arrangement beneath the cover glass 403. Some areas 404 will be thick, while other areas 405 will be very thin. As a result, the optimal focal position for an ordinary cellular preparation 400 can vary quite considerably over the surface of the microscope slide 402, irrespective of the smoothness of the slide 402 itself.
A monolayer specimen 410, on the other hand, is prepared by a controlled deposition of cells 411 taken from a fluid suspension, and results in a different arrangement on the slide 402 as shown in Fig. 7(b). By eliminating cellular debris, mucus and contaminants, together with a dis-aggregation step, the monolayer specimen 410 will tend to comprise a mono-disperse distribution of cells 411 with little or no overlap between the microscope slide 402 and cover glass 403, as shown in Fig. 7(b). As a consequence, the cells 411 will bind to and follow the surface contour defined by the microscope slide 402 itself. For typical slides, the variation in surface smoothness may be on the order of 50 microns or less.
As will be described in more detail below, the focus mapping method according to this aspect of the invention generates a mapping of the surface contours of the slide as a guide to the optimal focal position of the specimen 410. According to the focus mapping method, a discrete mapping of the optimal focal positions for the slide surface is initially generated. The mapping is then used to predict the optimal focal position at any point covered by the mapping. Advantageously, the application of the focus mapping method to monolayer specimens results in a mapping with a relatively small number of discrete positions, because the expected fluctuation of focal positions will be roughly the same as the expected variation in the surface smoothness of the microscope slide 402 as shown in Fig. 7(b). As a result, the focus mapping method provides a prediction technique that accounts for local variations in the surface smoothness of the microscope slide 402. In addition, the focus mapping method can also account for "tilt" in a microscope slide, as shown in Fig. 9, in which a microscope slide 402' has become "tilted" in one or more directions with respect to the optical axis of the objective lens 20.
The focus mapping method according to this aspect of the present invention comprises the following principal steps: (1) finding a first focal position; (2) constructing a grid; (3) bracketing the point in question by its neighbours in the grid; (4) defining a plane for predicting the focal position; (5) defining the "best fit" plane for focal position prediction; and (6) periodically verifying the predicted focal position.
Reference is next made to Figs. 10(a) to 10(e) which show the principal steps in a focus mapping method 500 according to the present invention. The first step 510 in the focus mapping method 500 involves determining a first focal position for the microscope slide. As shown in Fig. 10(a), after a monolayer microscope slide 499 is mounted for examination, the slide 499 is brought to a predetermined starting point, e.g. by the translation stage (not shown) in an automated testing system for cytological specimens. At this point, the autofocus method described above for the automatic focus system 10 is performed to determine a first focal position for the first (i.e. top) surface 498 of the microscope slide 499. It is assumed that all the cells in the monolayer specimen 497 are on the top surface 498.
The second step 520 in the focus mapping method 500 involves generating a grid 522 (Fig. 8(a)). The grid 522 comprises a predetermined number of points 523(1), 523(2),...523(N) distributed evenly over the surface 498 of the microscope slide 499 as shown in Fig. 10(b). The density (i.e. number) of points 523 in the grid 522 is determined by the expected variation in the smoothness of the surface 498 of the slide 499. For a typical monolayer specimen, at least 40 to 50 grid points 523 are needed to adequately map a 4 square centimeter area on the typical slide 499. At each point 523 on this grid 522, the autofocus method as described above is used to determine the focal position of the first surface 498 of the microscope slide 499.
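The grid-generation step can be sketched as follows; the function name, the centred spacing, and the 7 x 7 default are illustrative choices that give 49 points, within the 40 to 50 suggested for a 4 square centimeter area:

```python
def make_grid(width_mm, height_mm, n_cols=7, n_rows=7):
    """Return evenly spaced (x, y) focus points over the mapped region.
    Points are centred in their cells so none falls on the region edge."""
    xs = [(i + 0.5) * width_mm / n_cols for i in range(n_cols)]
    ys = [(j + 0.5) * height_mm / n_rows for j in range(n_rows)]
    return [(x, y) for y in ys for x in xs]
```

Each (x, y) returned would then be visited by the translation stage and focused with the autofocus method to record its focal position.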
The third step 530 in the focus mapping method 500 involves bracketing the point 523 in question by its neighbouring points 523 in the grid 522. As the grid 522 is generated, each point 523 is compared with its known neighbours to determine whether or not the focal position is reasonable. At times, dust or debris can confuse the automatic focus method. By providing such an internal check, the effect of such anomalies can be minimized. If a point 523 is found to be anomalous, a nearby position can be substituted, or the same position can be retried. Once the mapping of the focal position for each grid point 523 is completed, the grid 522 is stored in memory for later use.
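The neighbour-bracketing check can be sketched as a comparison against the median of the known neighbouring focal positions; the 50-micron tolerance echoes the surface-smoothness figure given above, but the exact threshold and the use of the median are assumptions:

```python
def is_anomalous(focal_z, neighbour_zs, tol_microns=50.0):
    """Flag a grid point whose focal position deviates from the median of
    its already-measured neighbours by more than the tolerance; the
    caller can then retry the point or substitute a nearby position."""
    if not neighbour_zs:
        return False  # nothing to bracket against yet
    ordered = sorted(neighbour_zs)
    median = ordered[len(ordered) // 2]
    return abs(focal_z - median) > tol_microns
```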
The fourth step 540 in the focus mapping method 500 involves defining a plane 544 which will be used to predict the focal position of a point in the grid 522.
After the translation stage has moved to the correct position for imaging, the system examines the predetermined focal positions FP of the points nearest (i.e. 523(11), 523(17), 523(22) in Fig. 10(c)) to the point in question (i.e. 523(18) in Fig. 10(c)). Based on the three-dimensional coordinates of the focal positions FP of these nearest neighbours, the optimal focal position FP? for the point in question 523(18) can be predicted mathematically. The prediction can be made using a variety of mathematical procedures. For example, if the point in question lies within a region defined by three grid points, then these three grid points define a plane 544 upon which the point in question is assumed to lie, as shown in Fig. 10(d). If more than three points are to be used, then the plane defined by them might be the one that minimizes the collective distances from the grid points to the plane. On the other hand, it might be best to weight the points used to define the "best fit" plane 545 (i.e. step 550 in Fig. 10(e)) according to the distance of the neighbouring points from the point in question, on the assumption that fluctuations in surface smoothness are more "local" than "global".
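The three-nearest-neighbour plane prediction of step 540 can be sketched with a cross product; each neighbour is an (x, y, z) triple whose z is its measured focal position, and the function name is illustrative (a weighted least-squares fit would replace this when more than three neighbours are used, as described above for step 550):

```python
import numpy as np

def predict_focal_position(p1, p2, p3, x, y):
    """Predict the focal position z at (x, y) from the plane through
    three neighbouring grid points, each given as (x, y, z)."""
    a = np.asarray(p1, dtype=float)
    # Two in-plane vectors; their cross product is the plane normal
    normal = np.cross(np.asarray(p2, dtype=float) - a,
                      np.asarray(p3, dtype=float) - a)
    if normal[2] == 0:
        raise ValueError("collinear grid points do not define a plane")
    # Solve n . ((x, y, z) - a) = 0 for z
    return a[2] - (normal[0] * (x - a[0]) + normal[1] * (y - a[1])) / normal[2]
```

For example, with neighbours at (0, 0, 0), (1, 0, 1) and (0, 1, 2), the defined plane is z = x + 2y, so the predicted focal position at (0.5, 0.5) is 1.5.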
Lastly, step 560 is included to periodically verify the predicted focal position. The "true" focal position for a point on the slide is found using the automatic focus method as described above. The focal position determined by the automatic focus method is compared to the predicted focal position to gauge the accuracy and applicability of the grid mapping. If the predicted value obtained from the grid mapping fails this test, then the slide could be rejected, or the scanning might be restarted with a more densely populated grid to ensure that these local fluctuations are accounted for in the grid.
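The periodic verification of step 560 reduces to comparing the predicted and measured focal positions against a tolerance; the 10-micron value here is an illustrative assumption, not taken from the description:

```python
def prediction_holds(predicted_z, measured_z, tol_microns=10.0):
    """True when the grid mapping's prediction agrees with a full
    autofocus measurement to within the tolerance; on failure the
    caller may reject the slide or rescan with a denser grid."""
    return abs(predicted_z - measured_z) <= tol_microns
```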
The principal processing steps for the focus mapping technique are summarized in the flow chart of Fig. 11.
It will be appreciated that the generation of the grid requires processing time at the beginning of the slide's scan. However, for a typical application, the time needed to focus the 40 to 50 or so points used to generate the grid is weighed against the time needed to focus some 2000 points over the course of examining the specimen if the prediction method is not utilized. The time savings in this case might be on the order of 75%. Provided that periodic checks are made of the accuracy of the grid mapping technique, the focus mapping method provides highly reliable and effective focal position determinations well suited to high-speed cytological scanning systems.
It will be understood that the focus mapping method according to the present invention is also applicable to a conventional specimen preparation 488 (Fig. 8(b)).
However, the expected variation will be much greater than for a monolayer specimen, and as a result the grid 477 would need to have a large number of discrete points, as also shown in Fig. 8(b).
It is to be understood that the foregoing description of the preferred embodiment of this invention is not intended to be limiting or restricting, and that various rearrangements and modifications which may become apparent to those skilled in the art may be resorted to without departing from the scope of the invention as defined in the appended claims.

Claims (20)

1. An automatic focus system for focussing a microscope objective for viewing a specimen located on a carrier, said automatic focus system comprising:
(a) actuator means coupled to said microscope objective for moving said microscope objective in response to control signals;
(b) image capture means for capturing images of the specimen located on said carrier;
(c) means for calculating focus numbers for said images; and
(d) a controller for controlling said actuator means, said controller including means for generating an optimum focus position from said focus numbers and having means for issuing control signals to said actuator means for moving said microscope objective to said optimum focus position.
2. The automatic focus system as claimed in claim 1, wherein said controller includes means for determining an initial focus position for said microscope objective.
3. The automatic focus system as claimed in claim 2, wherein said controller includes means for tracking the focus position of said microscope objective.
4. The automatic focus system as claimed in claim 1, wherein said means for calculating focus numbers for said images includes weighting means for weighting selected portions of said images for calculating said focus numbers.
5. The automatic focus system as claimed in claim 1, wherein said image capture means comprises a digitizing camera having means for digitizing an image of said specimen and generating an image output comprising a plurality of pixels.
6. The automatic focus system as claimed in claim 5, wherein said digitizing camera digitizes a plurality of images of said specimen at different focal positions and said means for calculating calculates focus numbers for each of said images.
7. The automatic focus system as claimed in claim 6, wherein said means for calculating focus numbers includes weighting means for weighting selected pixels in said image output.
8. The automatic focus system as claimed in claim 7, wherein said weighting means gives greater weight to pixels in darker regions of the image.
9. The automatic focus system as claimed in claim 6, wherein said means for calculating focus numbers includes means for executing a second difference equation, (I j+1,k - I j,k) - (I j,k - I j-1,k) where I(x,y k) represents a single line of pixels x in said image output.
10. The automatic focus system as claimed in claim 9, wherein said means for generating an optimum focus position includes means for fitting said focus numbers to a quadratic polynomial and means for deriving said optimum focus position from said quadratic polynomial.
11. An automatic focussing method for focussing a microscopic objective for viewing a specimen located on a carrier, said method comprising the steps of:
(a) capturing a plurality of images of said specimen;
(b) calculating a focus number for each of said images;
(c) determining an optimum focus position for said microscope objective from said focus numbers;
(d) moving said microscope objective to said optimum focus position.
12. The automatic focussing method as claimed in claim 11, wherein said step of capturing a plurality of images comprises digitizing images of said specimen and producing a series of digital output images.
13. The automatic focussing method as claimed in claim 12, wherein said step of calculating a focus number for each of said images comprises executing a second difference equation, (I j+1,k - I j,k) - (I j,k - I j-1,k) where I(x,y k) represents a single line of pixels x in said digital output image.
14. The automatic focussing method as claimed in claim 13, wherein said step of calculating a focus number for each of said images includes applying a weighting function to selected pixels in said digital output image.
15. The automatic focussing method as claimed in claim 14, wherein said weighting function is selected to give darker regions of said digital output image greater weight than lighter regions.
16. The automatic focussing method as claimed in claim 13, wherein said step of determining an optimum focus position comprises fitting said focus numbers to a quadratic polynomial and determining a maximum from said quadratic polynomial, said maximum corresponding to the optimum focus position.
17. A focus mapping method for generating a focus map for predicting a focal position for a point of interest in a biological specimen carried on a slide, said method comprising the steps of:
(a) determining a first focal position for the surface of the slide carrying the specimen;
(b) determining a plurality of focal positions for selected points on the surface of the slide;
(c) generating a focus map from the focal positions determined in step (b), wherein said focus map provides a means for determining a focal position for a point of interest on said focus map based on the focal positions of the points neighbouring said point of interest.
18. The focus mapping method as claimed in claim 17, wherein said plurality of selected points form a defined grid on the surface of the slide.
19. The focus mapping method as claimed in claim 17, further including the step of performing a focal position determination for verifying the focal position determined from said focus map.
20. The focus mapping method as claimed in claim 19, wherein said biological specimen is prepared as a monolayer specimen.
CA 2229175 1998-02-06 1998-02-06 Automatic focus system Abandoned CA2229175A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA 2229175 CA2229175A1 (en) 1998-02-06 1998-02-06 Automatic focus system


Publications (1)

Publication Number Publication Date
CA2229175A1 true CA2229175A1 (en) 1999-08-06

Family

ID=29409256

Family Applications (1)

Application Number Title Priority Date Filing Date
CA 2229175 Abandoned CA2229175A1 (en) 1998-02-06 1998-02-06 Automatic focus system

Country Status (1)

Country Link
CA (1) CA2229175A1 (en)

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8805050B2 (en) 2000-05-03 2014-08-12 Leica Biosystems Imaging, Inc. Optimizing virtual slide image quality
US9213177B2 (en) 2000-05-03 2015-12-15 Leica Biosystems Imaging, Inc. Achieving focus in a digital pathology system
US7646495B2 (en) 2000-05-03 2010-01-12 Aperio Technologies, Inc. System and computer readable medium for pre-focus of digital slides
US7893988B2 (en) 2000-05-03 2011-02-22 Aperio Technologies, Inc. Method for pre-focus of digital slides
US7978894B2 (en) 2000-05-03 2011-07-12 Aperio Technologies, Inc. Fully automatic rapid microscope slide scanner
US8055042B2 (en) 2000-05-03 2011-11-08 Aperio Technologies, Inc. Fully automatic rapid microscope slide scanner
US8103082B2 (en) 2000-05-03 2012-01-24 Aperio Technologies, Inc. Optimizing virtual slide image quality
US9851550B2 (en) 2000-05-03 2017-12-26 Leica Biosystems Imaging, Inc. Fully automatic rapid microscope slide scanner
US8385619B2 (en) 2000-05-03 2013-02-26 Aperio Technologies, Inc. Fully automatic rapid microscope slide scanner
US8456522B2 (en) 2000-05-03 2013-06-04 Aperio Technologies, Inc. Achieving focus in a digital pathology system
US7518652B2 (en) * 2000-05-03 2009-04-14 Aperio Technologies, Inc. Method and apparatus for pre-focus in a linear array based slide scanner
US9729749B2 (en) 2000-05-03 2017-08-08 Leica Biosystems Imaging, Inc. Data management in a linear-array-based microscope slide scanner
US8582849B2 (en) 2000-05-03 2013-11-12 Leica Biosystems Imaging, Inc. Viewing digital slides
US9535243B2 (en) 2000-05-03 2017-01-03 Leica Biosystems Imaging, Inc. Optimizing virtual slide image quality
US8731260B2 (en) 2000-05-03 2014-05-20 Leica Biosystems Imaging, Inc. Data management in a linear-array-based microscope slide scanner
US9521309B2 (en) 2000-05-03 2016-12-13 Leica Biosystems Imaging, Inc. Data management in a linear-array-based microscope slide scanner
US8755579B2 (en) 2000-05-03 2014-06-17 Leica Biosystems Imaging, Inc. Fully automatic rapid microscope slide scanner
US9386211B2 (en) 2000-05-03 2016-07-05 Leica Biosystems Imaging, Inc. Fully automatic rapid microscope slide scanner
US8467083B2 (en) 2003-02-28 2013-06-18 Aperio Technologies, Inc. Framework for processing the content of a digital image of a microscope sample
US8780401B2 (en) 2003-02-28 2014-07-15 Leica Biosystems Imaging, Inc. Systems and methods for analyzing digital slide images using algorithms constrained by parameter data
US8199358B2 (en) 2003-02-28 2012-06-12 Aperio Technologies, Inc. Digital slide image analysis
US9019546B2 (en) 2003-02-28 2015-04-28 Leica Biosystems Imaging, Inc. Image processing of digital slide images based on a macro
US8565480B2 (en) 2004-05-27 2013-10-22 Leica Biosystems Imaging, Inc. Creating and viewing three dimensional virtual slides
US9069179B2 (en) 2004-05-27 2015-06-30 Leica Biosystems Imaging, Inc. Creating and viewing three dimensional virtual slides
US8923597B2 (en) 2004-05-27 2014-12-30 Leica Biosystems Imaging, Inc. Creating and viewing three dimensional virtual slides
WO2006097123A1 (en) * 2005-03-16 2006-09-21 European Molecular Biology Laboratory Autofocussing system for microscope systems
US9235041B2 (en) 2005-07-01 2016-01-12 Leica Biosystems Imaging, Inc. System and method for single optical axis multi-detector microscope slide scanner
US8743195B2 (en) 2008-10-24 2014-06-03 Leica Biosystems Imaging, Inc. Whole slide fluorescence scanner
US9523844B2 (en) 2008-10-24 2016-12-20 Leica Biosystems Imaging, Inc. Whole slide fluorescence scanner
US8705825B2 (en) 2009-12-11 2014-04-22 Leica Biosystems Imaging, Inc. Signal to noise ratio in digital pathology image analysis
US10746985B2 (en) 2014-04-07 2020-08-18 Massachusetts Institute Of Technology Use of microparticle additives to simultaneously enable artifact-free image registration, auto-focusing, and chromatic aberration correction in microscopy
US10282647B2 (en) 2015-05-05 2019-05-07 Massachusetts Institute Of Technology Substrate pre-scanning for high throughput microscopy
WO2016179286A1 (en) * 2015-05-05 2016-11-10 Massachusetts Institute Of Technology Substrate pre-scanning for high throughput microscopy
EP3614192A1 (en) * 2018-08-20 2020-02-26 Till GmbH Microscope device
WO2020038753A1 (en) * 2018-08-20 2020-02-27 Till Gmbh Microscope device
CN112771433A (en) * 2018-08-20 2021-05-07 美天施生物科技有限两合公司 Microscope device
JP2022513422A (en) * 2018-08-20 2022-02-08 ミルテニイ ビオテック ベー.ファー. ウント コー.カーゲー Microscope device
CN112771433B (en) * 2018-08-20 2023-06-30 美天施生物科技有限两合公司 Microscope apparatus
CN114667470A (en) * 2019-11-25 2022-06-24 豪洛捷公司 Digital imaging system and method


Legal Events

Date Code Title Description
FZDE Dead