WO2005011947A9 - Image processing method and system for microfluidic devices - Google Patents

Image processing method and system for microfluidic devices

Info

Publication number
WO2005011947A9
Authority
WO
WIPO (PCT)
Prior art keywords
image
instructions
information associated
determining
fiducial
Application number
PCT/US2004/024591
Other languages
French (fr)
Other versions
WO2005011947A2 (en)
WO2005011947A3 (en)
Inventor
Colin Jon Taylor
Gang Sun
Simant Dube
Original Assignee
Fluidigm Corp
Colin Jon Taylor
Gang Sun
Simant Dube
Application filed by Fluidigm Corp, Colin Jon Taylor, Gang Sun, Simant Dube
Priority to EP04757388A (EP1667829A4)
Priority to JP2006522086A (JP2007506943A)
Priority to CA002532530A (CA2532530A1)
Priority to AU2004261655A (AU2004261655A1)
Publication of WO2005011947A2
Publication of WO2005011947A9
Publication of WO2005011947A3

Classifications

    • G06T7/0008 Industrial image inspection checking presence/absence
    • G06F18/00 Pattern recognition
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06T7/0012 Biomedical image inspection
    • G06T7/254 Analysis of motion involving subtraction of images
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06V10/7715 Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/693 Acquisition
    • B01L3/5027 Containers for retaining a material to be analysed, with fluid transport by integrated microfluidic structures, i.e. lab-on-a-chip
    • G06T2207/30004 Biomedical image processing

Definitions

  • the present invention is directed to image processing technology. More particularly, the invention provides an image processing method and system for detecting changes of an imaged object. Merely by way of example, the invention has been applied to crystallization in a microfluidic device. But it would be recognized that the invention has a much broader range of applicability. Crystallization is an important technique to the biological and chemical arts. Specifically, a high-quality crystal of a target compound can be analyzed by x-ray diffraction techniques to produce an accurate three-dimensional structure of the target. This three-dimensional structure information can then be utilized to predict functionality and behavior of the target.
  • the crystallization process is simple.
  • a target compound in pure form is dissolved in solvent.
  • the chemical environment of the dissolved target material is then altered such that the target is less soluble and reverts to the solid phase in crystalline form.
  • This change in chemical environment is typically accomplished by introducing a crystallizing agent that makes the target material less soluble, although changes in temperature and pressure can also influence solubility of the target material.
  • Each metering cell comprises one or more pairs of opposing chambers, each pair being in fluid communication with the other through an interconnecting microfluidic channel, one chamber containing a protein solution, and the other, opposing chamber containing a crystallization reagent.
  • a valve is situated to keep the contents of the opposing chambers from each other until the valve is opened, thus allowing free interface diffusion to occur between the opposing chambers through the interconnecting microfluidic channel.
  • the microfluidic devices taught by Hansen et al. have arrays of metering cells containing chambers for conducting protein crystallization experiments therein. Use of such arrays in turn provides for high-throughput testing of numerous conditions for protein crystallization which require analysis.
  • the invention disclosed herein provides systems and methods for conducting such analysis to determine whether a particular set of protein crystallization conditions indeed caused crystals to form.
  • the present invention is directed to image processing technology. More particularly, the invention provides an image processing method and system for detecting changes of an imaged object. Merely by way of example, the invention has been applied to crystallization in a microfluidic device. But it would be recognized that the invention has a much broader range of applicability. According to the present invention, a number of embodiments of the image processing method and system for microfluidic devices are provided.
  • a method for processing an image of a microfluidic device includes receiving a first image of a microfluidic device. The first image corresponds to a first state.
  • the method includes receiving a second image of the microfluidic device.
  • the second image corresponds to a second state.
  • the method includes transforming the first image into a third coordinate space.
  • the transforming uses at least a first fiducial on the first image.
  • the method includes transforming the second image into the third coordinate space.
  • the transforming uses at least a second fiducial on the second image.
  • the method includes obtaining a third image based on at least information associated with the transformed first image and the transformed second image, and processing the third image to obtain information associated with the first state and the second state.
  • the third coordinate space is based on the prior known geometry of the microfluidic device.
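  • For illustration only, the sketch below shows one way the flow just described (transform both images into a common coordinate space using fiducials, then compare them) could be realized, assuming the fiducial coordinates in each image and in the ideal coordinate space are already known. It uses NumPy and scikit-image; the function names are illustrative and not taken from the patent.

```python
import numpy as np
from skimage import transform

def register_and_difference(img_t0, img_tm, fid_t0, fid_tm, fid_ideal):
    """Warp two images of the same device into an ideal coordinate space using
    their fiducials, then form a difference image re-centered at mid-gray."""
    # Affine transforms carrying each image's fiducials onto the ideal locations;
    # point arrays are (N, 2) with (x, y) = (column, row) ordering.
    t0_to_ideal = transform.estimate_transform('affine', fid_t0, fid_ideal)
    tm_to_ideal = transform.estimate_transform('affine', fid_tm, fid_ideal)

    # warp() expects a map from output (ideal) coordinates back to input
    # coordinates, hence the .inverse.
    t0_ideal = transform.warp(img_t0, t0_to_ideal.inverse, preserve_range=True)
    tm_ideal = transform.warp(img_tm, tm_to_ideal.inverse, preserve_range=True)

    # Co-registered subtraction; 128 marks "no change" in an 8-bit output.
    diff = np.clip(tm_ideal - t0_ideal + 128.0, 0, 255)
    return diff.astype(np.uint8)
```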
  • a computer-readable medium including instructions for processing an image of a microfluidic device comprises one or more instructions for receiving a first image of a microfluidic device.
  • the first image corresponds to a first state.
  • the computer-readable medium includes one or more instructions for receiving a second image of the microfluidic device.
  • the second image corresponds to a second state.
  • the computer-readable medium includes one or more instructions for transforming the first image into a third coordinate space.
  • the transforming uses at least a first fiducial on the first image.
  • the computer-readable medium includes one or more instructions for transforming the second image into the third coordinate space.
  • the transforming uses at least a second fiducial on the second image.
  • the computer-readable medium includes one or more instructions for obtaining a third image based on at least information associated with the transformed first image and the transformed second image, and one or more instructions for processing the third image to obtain information associated with the first state and the second state.
  • certain embodiments of the present invention improve the speed of imaging analysis and crystallization detection.
  • Some embodiments of the present invention simplify the image processing system for crystallization detection.
  • Certain embodiments of the present invention improve sensitivity of the image processing method and system.
  • a method for processing an image of a microfluidic device includes receiving a first image of a microfluidic device.
  • the first image includes a first fiducial marking and a first chamber region, and the first chamber region is associated with a first chamber boundary.
  • the method includes transforming the first image into a first coordinate space based on at least information associated with the first fiducial marking, removing at least a first part of the first chamber boundary from the first image, processing information associated with the first chamber region, and determining whether a first crystal is present in the first chamber region.
  • a method for processing a plurality of images of a microfluidic device includes receiving at least a first image and a second image of a microfluidic device.
  • the first image and the second image are associated with a first focal position and a second focal position respectively, and each of the first image and the second image includes a first chamber region.
  • the method includes processing information associated with the first image and the second image, generating a third image based on at least information associated with the first image and the second image, processing information associated with the third image, and determining whether a first crystal is present in the first chamber region based on at least information associated with the third image.
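  • The summary above does not say how the first and second focal positions are combined into the third image; the following sketch shows one plausible combination (per-pixel focus stacking by local variance) purely as an illustration under that assumption, not as the claimed method. The window size is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def combine_focal_planes(img_a, img_b, window=9):
    """Keep, at each pixel, the sample from whichever focal plane is locally sharper."""
    def local_variance(img):
        img = img.astype(float)
        mean = uniform_filter(img, window)
        return uniform_filter(img * img, window) - mean * mean

    sharper_a = local_variance(img_a) >= local_variance(img_b)
    return np.where(sharper_a, img_a, img_b)
```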
  • a method for adjusting a classifier and processing an image of a microfluidic device includes receiving a first image of a microfluidic device.
  • the first image is associated with at least a first predetermined characteristic.
  • the method includes generating a first plurality of features based on at least information associated with the first image, and selecting a second plurality of features from the first plurality of features based on at least information associated with the first plurality of features and the at least a first predetermined characteristic.
  • the method includes determining a third plurality of features based on at least information associated with the second plurality of features, and processing information associated with the third plurality of features.
  • the method includes determining at least a first likelihood based on at least information based on the third plurality of features and a first plurality of parameters, processing information associated with the first likelihood and the at least a first predetermined characteristic, and adjusting the first plurality of parameters based on at least information associated with the first likelihood and the at least a first predetermined characteristic.
  • a computer-readable medium includes instructions for processing an image of a microfluidic device.
  • the computer-readable medium includes one or more instructions for receiving a first image of a microfluidic device.
  • the first image includes a first fiducial marking and a first chamber region, and the first chamber region is associated with a first chamber boundary.
  • the computer-readable medium includes one or more instructions for transforming the first image into a first coordinate space based on at least information associated with the first fiducial marking, and one or more instructions for removing at least a first part of the first chamber boundary from the first image.
  • the computer- readable medium includes one or more instructions for processing information associated with the first chamber region, and one or more instructions for determining whether a first crystal is present in the first chamber region.
  • a computer-readable medium includes instructions for processing a plurality of images of a microfluidic device.
  • the computer-readable medium includes one or more instructions for receiving at least a first image and a second image of a microfluidic device.
  • the first image and the second image are associated with a first focal position and a second focal position respectively, and each of the first image and the second image includes a first chamber region.
  • the computer-readable medium includes one or more instructions for processing information associated with the first image and the second image, and one or more instructions for generating a third image based on at least information associated with the first image and the second image.
  • the computer-readable medium includes one or more instructions for processing information associated with the third image, and one or more instructions for determining whether a first crystal is present in the first chamber region based on at least information associated with the third image.
  • a computer-readable medium includes instructions for adjusting a classifier and processing an image of a microfluidic device.
  • the computer-readable medium includes one or more instructions for receiving a first image of a microfluidic device.
  • the first image is associated with at least a first predetermined characteristic.
  • the computer-readable medium includes one or more instructions for generating a first plurality of features based on at least information associated with the first image, and one or more instructions for selecting a second plurality of features from the first plurality of features based on at least information associated with the first plurality of features and the at least a first predetermined characteristic.
  • the computer-readable medium includes one or more instructions for determining a third plurality of features based on at least information associated with the second plurality of features, and one or more instructions for processing information associated with the third plurality of features. Also, the computer-readable medium includes one or more instructions for determining at least a first likelihood based on at least information based on the third plurality of features and a first plurality of parameters, one or more instructions for processing information associated with the first likelihood and the at least a first predetermined characteristic, and one or more instructions for adjusting the first plurality of parameters based on at least information associated with the first likelihood and the at least a first predetermined characteristic.
  • Figure 1 depicts an overview of an exemplary imaging system.
  • Figures 2a and 2b depict a top plan view and a cross-sectional view of an exemplary microfluidic device used in accordance with the invention.
  • Figures 3a and 3b depict how metering cell stretch and distortion may be compensated in accordance with the invention.
  • Figures 4a through 4c depict the process of masking and image subtraction employed in accordance with the invention.
  • Figure 5 is a simplified diagram for an image processing method according to an embodiment of the present invention.
  • Figure 6 is a simplified process 520 for transforming images according to one embodiment of the present invention.
  • Figure 7 shows simplified wells and channels according to one embodiment of the present invention.
  • FIGS 8-10 are simplified diagrams showing sample 1-D signals.
  • Figure 11 is a simplified diagram for masking images according to one embodiment of the present invention.
  • Figure 12 is a simplified diagram for an implosion-padding process.
  • Figure 13 is a simplified method for wall detection according to an embodiment of the present invention.
  • Figures 14(a), (b) and (c) are simplified diagrams for wall detection according to an embodiment of the present invention.
  • Figure 15 is a simplified method for implosion padding according to an embodiment of the present invention.
  • Figure 16 is a simplified diagram for wall implosion according to an embodiment of the present invention.
  • Figure 17 is a simplified diagram for wall implosion at another time according to an embodiment of the present invention.
  • Figure 18 is a simplified method for image inspection according to an embodiment of the present invention.
  • Figure 19 is a simplified training method according to an embodiment of the present invention.
  • Figure 20 is a simplified method for classification according to an embodiment of the present invention.
  • Figure 21 is a simplified method for combining images according to an embodiment of the present invention.
  • Figure 22 is a simplified diagram for a deep chamber according to an embodiment of the present invention.
  • Figure 23 is a simplified diagram for capturing multiple images according to an embodiment of the present invention.
  • FIG. 1 is a simplified diagram for an imaging system according to an embodiment of the present invention.
  • Figures 2a and 2b are simplified diagrams for a top view and cross- sectional view of a microfluidic device according to an embodiment of the present invention.
  • the microfluidic device as shown in Figures 2a and 2b can be used in conjunction with the imaging system as shown in Figure 1.
  • Imaging system (10) operates, in one embodiment, in the following manner.
  • microfluidic device (30) is securely placed on stage (20).
  • based on a fixed feature of the microfluidic device (30), for example, an edge of the base support of the microfluidic device (30), computer (110) then causes x,y drive (25) to move stage (20) to align microfluidic device (30) in a first x,y position such that a first of a plurality of fiducial markings, which are embedded within the microfluidic device at a known z-dimension distance from a chamber center point, comes into focus by imaging device (60) based on dead reckoning from the fixed feature.
  • a user of the system registers the precise coordinate of the fiducial with the imaging system. Two or more additional fiducial marks are then likewise mapped with the assistance of a user.
  • this process is automatic as the centroids of the fiducials can be calculated precisely by locating the symmetric XY fiducial object and removing any non-symmetric components.
  • Imaging device (60) under the control of computer (110) then adjusts the z dimension location of focal plane (105) to focus upon the fiducial marking (not shown in Figure 1, but shown in Figure 2). For example, once focused upon the first fiducial marking, the imaging system then obtains a first x,y coordinate image of microfluidic device (30), looking for additional fiducial markings within the field of view of imaging device (60).
  • the field of view can embrace an entire metering cell.
  • the computer then analyzes the first x,y coordinate image to determine whether the microfluidic device has skew and stretch, and if skew or stretch are determined, transforms the first x,y image to align the image and coordinate map of the microfluidic device to an idealized coordinate map.
  • the idealized coordinate map is used later during image subtraction and masking steps.
  • the system determines whether stretch, distortion, or lack of co-registration between the various microfluidic layers is present in the microfluidic device by comparing the locations of the fiducial markings in the x,y coordinate image with the fiducial marking locations in the x,y coordinate image of the ideal stored image map. If differences are present between the ideal fiducial locations and the imaged fiducial locations, a matrix transformation, preferably an affine transformation, is performed to transform the imaged shape of the metering cell into a virtual shape of the ideal metering cell shape.
  • Figure 3 depicts an ideal microfluidic device stored image (actually stored as a coordinate map), and an actual, distorted, microfluidic device image (also stored as a coordinate map determined from fiducial mapping).
  • a matrix transformation may be developed to reform the actual image into an ideal image for use in further image processing described herein.
  • defects or debris can be masked out of subsequent images to avoid false positives when applying automated crystal recognition analysis.
  • the walls of the chambers may be subtracted from subsequent images, again so as to not cause false readings with the crystal recognition analysis.
  • the discrepancy between various layers, such as between the control layer and the channel layer can also be calculated based on the position of a found object in the control layer, such as the control lines themselves. In another example, this correction is determined based on the control layer fiducials themselves. For certain embodiments, this extra transformation is important since the control layer partitions the protein chamber from the rest of the control line.
  • Figures 4a through 4c depict how the above image subtraction and masking occur at time zero prior to crystal formation.
  • Figure 4a depicts a metering cell with debris, shown as the letter "D" distributed about the metering cell chambers.
  • Figure 4b depicts an image wherein the mask has removed the foreign objects from the image so as to not provide false triggers for image analysis.
  • Figure 4c depicts how image subtraction is applied to remove the chamber edge features from the image to reduce the raw image into one of just wall-less chambers. From this final image, further masking may occur if wall implosion is detected, an event that usually occurs when the microfluidic device is dehydrating and the chamber contents are permeating outside of the chamber, causing a negative pressure therein and thus, wall collapse or implosion.
  • Such further masking for implosion employs a series of known shapes that occur when chamber implosion occurs and uses such known shapes to create additional masks to occlude from the image the now intruding imploded walls.
  • FIG. 5 is a simplified diagram for an image processing method according to an embodiment of the present invention.
  • the method includes a process 510 for locating fiducials, a process 520 for transforming image, a process 530 for masking image, a process 540 for comparing images, and a process 550 for inspecting image.
  • the process 540 for comparing images may be performed prior to the process 530 for masking image, during the process 530 for masking image, and/or after the process 530 for masking image. Further details of the present invention can be found throughout the present specification and more particularly below.
  • marking fiducials are located on an image.
  • the image may be renormalized against a reference image, which was previously taken with either a standardized slab or nothing under the microscope, for white balancing or for exposure normalization, or other desirable characteristics.
  • Marking fiducials may include cross hairs.
  • the image includes metering cells in addition to a Fluidigm logo. Each metering cell has cross-hair fiducials at known locations around the metering cell. During the image acquisition, the positions of these fiducials are determined to within +/- 100 microns through the X-Y correction process. This estimation accuracy may be achieved even under rotational orientations.
  • some sub-images are extracted around these estimated locations. Within these sub-images, the cross-hair fiducials are found, and their global positions are determined. The global positions in the T0 image are compared to the global positions in a subsequent image, such as the T1 image, the T2 image, ..., the TM image, ..., or the TN image. N is a positive integer, and M is a positive integer smaller than or equal to N.
  • the T0 image is captured at T0; while the TM image is captured at TM. For example, at T0, no crystallization of protein occurs. At TM, crystallization of protein may have occurred. If a single fiducial is missed from the T0 image or the subsequent TM image, the missed fiducial is usually not considered during the subsequent analysis of the images.
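  • As a rough sketch of how a cross-hair fiducial might be located inside one of the extracted sub-images, the code below uses normalized cross-correlation against a synthetic cross-hair template; the template shape and sizes are assumptions for illustration and not the patent's implementation.

```python
import numpy as np
from skimage.feature import match_template

def make_crosshair(size=21, thickness=3):
    """Synthetic bright cross-hair template on a dark background (assumed shape)."""
    t = np.zeros((size, size))
    c, half = size // 2, thickness // 2
    t[c - half:c + half + 1, :] = 1.0
    t[:, c - half:c + half + 1] = 1.0
    return t

def locate_fiducial(sub_image, template=None):
    """Return the (row, col) of the best template match within a sub-image."""
    template = make_crosshair() if template is None else template
    response = match_template(sub_image.astype(float), template, pad_input=True)
    return np.unravel_index(np.argmax(response), response.shape)
```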
  • FIG. 6 is a simplified process 520 for transforming images according to one embodiment of the present invention.
  • the process 520 includes a process 610 for matching fiducials, a process 620 for calculating transformation, and a process 630 for transforming image.
  • the process 620 for calculating transformation and the process 630 for transforming image may be combined.
  • Other processes may be inserted to those noted above.
  • fiducials in an image are matched with corresponding fiducials in an ideal coordinate map.
  • the image is the T0 image or the TM image.
  • the image is an x-y coordinate image
  • the ideal coordinate map is an x-y coordinate map.
  • the image is aligned against the ideal coordinate map. Locations of the fiducials in the image are compared with locations of the fiducials in the ideal coordinate map. Such comparison can reveal any distortion including a stretch of the microfluidic device when the image is captured, such as at T0 or TM.
  • a spatial transformation from an image to an ideal coordinate space is calculated.
  • the ideal coordinate space corresponds to the ideal coordinate map.
  • a matrix transformation, such as an affine transformation, is calculated. For example, two least squares transformations are calculated from the T0 image to an ideal coordinate space and from the TM image to the ideal coordinate space.
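  • A minimal sketch of such a least squares fit, assuming at least three non-collinear matched fiducials, is shown below in NumPy; it solves for a 2x3 affine matrix mapping image coordinates to ideal coordinates and is illustrative only.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine A such that dst ~= A @ [x, y, 1]^T.

    src_pts, dst_pts: (N, 2) arrays of matched (x, y) positions, e.g. fiducials
    in the captured image and their counterparts in the ideal coordinate space."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    design = np.hstack([src, np.ones((src.shape[0], 1))])    # (N, 3)
    coeffs, *_ = np.linalg.lstsq(design, dst, rcond=None)    # (3, 2)
    return coeffs.T                                          # (2, 3) affine matrix

def apply_affine(affine, pts):
    """Apply the fitted affine to an (N, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ affine[:, :2].T + affine[:, 2]
```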
  • an image is transformed into an ideal coordinate space.
  • the image may be the T0 image or the TM image.
  • a matrix transformation, such as an affine transformation, changes the shape of a metering cell in the image into an ideal shape.
  • the metering cell may be sliced into three or more diffusion experiments.
  • Figure 3a shows a simplified ideal coordinate map
  • Figure 3b shows a simplified distorted image.
  • the T0 image and the TM image are transformed into the ideal coordinate space.
  • the transformed T0 image and the transformed TM image are located in the same coordinate space, so they are co-registered and comparable to one another.
  • the transformed T0 image can be subtracted from the transformed TM image to detect crystallization in the TM image. But such subtraction does not remove all the noise sources that should be removed.
  • the four vertical lines as discussed above include the left wall of the right channel, the right wall and the left wall of the middle channel, and the right wall of the left channel.
  • the remaining two walls, e.g., the right wall of the right channel and the left wall of the left channel, are demarcated by the containment lines, which are found through thresholding a 1-D horizontal signal of a gross left and right sub-image.
  • the analysis of the one-dimensional horizontal signal can also locate an interface line in the center channel and the top and bottom walls of the horizontal channels using small windows across the x-dimension.
  • the horizontal channels can be tilted out of the horizontal due to alignment errors.
  • the interface lines and the top and bottom walls of the channels are used in the subsequent processes.
  • Figures 8-10 are simplified diagrams showing sample 1-D signals. These diagrams are merely examples, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the channel walls are not as crisp in signal as shown in Figures 8-10, as the strength of that signal depends on the z-location at the time of image acquisition.
  • Figure 9 is a simplified diagram for interface line detection.
  • Figure 10 is a simplified diagram for filtered and width matched signal.
  • the fiducials are on the same layer as the channel. The channel position can be found via the affine transformation without finding the channel walls.
  • an image is masked.
  • FIG. 11 is a simplified diagram for masking images according to one embodiment of the present invention.
  • the T0 image and the T1 image are captured and transformed to the ideal coordinate space.
  • Each rectilinear region contains four bounding walls. The region beyond the four bounding walls in the T0 image is masked out of the subsequent analysis.
  • the interface line is masked out.
  • large blob objects that appear in the region of interest and exceed a threshold in the T0 image are similarly masked as they are assumed to be pre-existing before crystallization. As shown in Figure 11, a blob object appears in the right channel in both the T0 image and the T1 image, but the blob object does not exist in the scrubbed lower-right image.
  • the cells, voids, and spaces are deformable in microfluidic devices, so they can change in size from T0 to TM.
  • Such deformation of the cell surfaces is modeled, and the mask is accordingly modified for the corresponding TM.
  • the left and right well subcomponents have their "implosion-padding" values calculated. This is necessary because the substantial pressure difference in the well between T0 and TM implodes the walls from their original position.
  • the implosion-padding process includes extracting a rectangle around a well in the T0 image, calculating an average of a succession of rectangle perimeters from the T0 image, finding a minimum value of this vector and the index, repeating the above three processes of extracting, calculating, and finding for the subsequent T1 image, the T2 image, ..., the TM image, ..., and the TN image, and calculating the difference in the indices.
  • the difference in the indices is used to estimate additional padding to the masking region for the original T0 image.
  • Figure 12 is a simplified diagram for an implosion-padding process. As discussed above and further emphasized here, this diagram is merely an example, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • images are compared to generate a comparison image.
  • a comparison image results from the subtraction of the T0 image from the TM image.
  • the scrubbing can usually remove the walls of the chambers. Such removal can reduce false readings in the crystal recognition analysis.
  • the process 540 for image comparison may be performed prior to the process 530 for masking image, during the process 530 for masking image, and/or after the process 530 for masking image.
  • the comparison image is median re-centered to push the middle to 128 instead of the arbitrary value that would otherwise result.
  • the intensity of the image can vary even with respect to the reference image as it is dependent on the hydration conditions on the chip.
  • the mask generated in the process 530 is applied to the comparison image to create an attenuating front which softens the harsh borders that the mask would introduce to an image. The closer an image pixel is to a mask pixel, the more the image pixel is attenuated. This process is one example of scrubbing.
  • the distance map describing the distance of each image pixel from a mask pixel is calculated separately from the T0 image.
  • Figures 4a through 4c are simplified diagrams for image subtraction, masking and scrubbing. These diagrams are merely examples, which should not unduly limit the scope of the claims herein.
  • a metering cell contains debris, indicated by the letter "D", distributed about the metering cell chambers.
  • the metering cell may be rotated to align with the ideal coordinate map, and is transformed to make the imaged metering cell dimensions match those of the ideal metering cell dimensions. For example, the transformation can stretch compensate the image. Subsequently, the foreign objects not present in the ideal image are masked out.
  • Figure 4b is a simplified diagram for an image with foreign objects removed.
  • Figure 4c is a simplified diagram for image subtraction. The image subtraction calculates differences between the T0 image and the TM image, and thereby removes the chamber edge features from the TM image. The TM image is converted into an image having wall-less chambers.
  • a further masking may be needed if wall implosion is detected.
  • Wall implosion usually occurs when the microfluidic device is dehydrating and the chamber contents are permeating outside of the chamber. The permeation causes a negative pressure therein and thus wall collapse or implosion.
  • Such further masking for implosion employs a series of known shapes that occur when chamber implosion occurs and uses such known shapes to create additional masks to occlude from the image the now intruding imploded walls.
  • an output scrubbed image is calculated by first renormalizing the T0 image and the TM image with respect to each other.
  • the renormalization process can reduce a DC or background signal resulting from environmental changes to the chip, such as loss of chip moisture.
  • a simple subtraction image is then calculated with a 128 offset.
  • This subtraction image is then "scrubbed" by stamping all the pixel locations in the stamp with 128 and thereby obliterating their output signal. Additionally, pixel locations are progressively attenuated based on their x-y distance to a stamped pixel in the mask. Therefore the subtraction image is scrubbed around the mask pixels to ensure a smooth transition from the stamped 128 value and the real image values.
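  • The subtraction and scrubbing just described can be sketched as follows, assuming the mask is a boolean array marking wall, debris and interface-line pixels; the renormalization and the attenuation radius are illustrative assumptions rather than the patent's exact procedure.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def scrub(img_t0, img_tm, mask, radius=10):
    """128-offset difference image, stamped to mid-gray at masked pixels and
    progressively attenuated for pixels near the mask."""
    t0 = img_t0.astype(float)
    tm = img_tm.astype(float)
    # Crude renormalization against background drift (e.g. chip dehydration).
    tm = tm * (t0.mean() / max(tm.mean(), 1e-6))

    diff = np.clip(tm - t0 + 128.0, 0, 255)

    # Distance of every pixel from the nearest mask (stamped) pixel.
    dist = distance_transform_edt(~mask)
    # Weight is 0 at stamped pixels (fully pushed to 128) and 1 beyond `radius`.
    weight = np.clip(dist / float(radius), 0.0, 1.0)

    scrubbed = 128.0 + (diff - 128.0) * weight
    return scrubbed.astype(np.uint8)
```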
  • an image is inspected for crystals.
  • the final scrubbed image is sent through a feature extractor which performs additional image processing techniques on the image.
  • Training and selection of these features is a semi-automatic process using Matlab scripts. A random combination of these features is selected. The higher-dimensional space is mapped to a lower dimensionality through Fisher linear discriminant analysis to increase the separability of crystals from other materials. Classification is performed in this lower-dimensional space using a K-nearest neighbor algorithm. A confusion matrix for the original training set is calculated by excluding the instance under test, and a cost matrix is applied to the training matrix to evaluate the "goodness" of the training run. The best training run is used to determine the number of neighbors, the features used, and two thresholds used for false positive rejection and false negative rejection.
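  • The passage above mentions Matlab scripts; as an illustration only, the same Fisher-projection-plus-K-nearest-neighbor idea can be sketched in Python with scikit-learn as follows. The class labels, feature dimensionality and neighbor count are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

CLASSES = ["clear", "phase_precipitate", "crystal"]       # assumed label order

def build_classifier(train_features, train_labels, n_neighbors=5):
    """train_features: (N, F) feature matrix; train_labels: length-N class indices."""
    clf = make_pipeline(
        LinearDiscriminantAnalysis(),                     # Fisher-style projection
        KNeighborsClassifier(n_neighbors=n_neighbors),    # classify in the low-dim space
    )
    clf.fit(train_features, train_labels)
    return clf

# Usage on synthetic data, for illustration:
rng = np.random.default_rng(0)
X, y = rng.normal(size=(60, 10)), rng.integers(0, 3, size=60)
model = build_classifier(X, y)
print([CLASSES[i] for i in model.predict(X[:3])])
```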
  • a computer medium includes instructions for processing an image of a microfluidic device.
  • the computer medium stores a computer code that directs a processor to perform the inventive processes as discussed above.
  • An exemplary computer code may use Matlab or other computer language, and may run on a Pentium PC or other computer.
  • the computer code is not intended to limit the scope of the claims herein.
  • One of ordinary skill in the art would recognize other variations, modifications, and alternatives.
  • the computer-readable medium includes one or more instructions for receiving the T0 image of a microfluidic device.
  • the T0 image is captured prior to crystallization.
  • the computer-readable medium includes one or more instructions for receiving the TM image of the microfluidic device.
  • the TM image is captured after the T0 image.
  • the computer readable medium includes one or more instructions for transforming the T0 image into an ideal coordinate space using at least a fiducial on the T0 image, one or more instructions for transforming the TM image into the ideal coordinate space using at least a fiducial on the TM image, one or more instructions for obtaining a comparison image based on at least information associated with the transformed T0 image and the transformed TM image, and one or more instructions for processing the comparison image to obtain information associated with the crystallization.
  • the computer code can perform locating fiducials, transforming image, masking image, comparing images, and inspecting image.
  • the computer code performs some or all of the processes as described in Figures 1-12.
  • Certain embodiments of the present invention improve the speed of imaging analysis and crystallization detection. Some embodiments of the present invention simplify the image processing system for crystallization detection. Certain embodiments of the present invention improve sensitivity of the image processing method and system.
  • marking fiducials are located on an image.
  • the image may be renormalized against a reference image, which was previously taken with either a standardized slab or nothing under the microscope, for white balancing or for exposure normalization, or other desirable characteristics.
  • the image may be 8-bit renormalized with high resolution, or other desirable characteristics.
  • Marking fiducials may include cross hairs. In one embodiment of the present invention, the image includes metering cells in addition to a Fluidigm logo. Each metering cell has cross-hair fiducials at known locations around the metering cell.
  • the positions of these fiducials are determined to within +/- 100 microns through the X-Y correction process. This estimation accuracy may be achieved even under rotational orientations.
  • some sub-images are extracted around these estimated locations. Within these sub-images, the cross-hair fiducials are found, and their global positions are determined.
  • the T0 image is analyzed at the process 510, and in another example, the T0 image is not analyzed at the process 520. For example, the T0 image is captured at T0. At T0, no crystallization of protein occurs. At TM, crystallization of protein may have occurred.
  • the global positions in the T0 image are compared to the global positions in a subsequent image, such as the T1 image, the T2 image, ..., the TM image, ..., or the TN image.
  • N is a positive integer
  • M is a positive integer smaller than or equal to N.
  • the TM image is captured at TM. If a single fiducial is missed from the T0 image or the subsequent TM image, the missed fiducial is usually not considered during the subsequent analysis of the images.
  • the process 520 includes a process 610 for matching fiducials, a process 620 for calculating transformation, and a process 630 for transforming image.
  • fiducials in an image are matched with corresponding fiducials in an ideal coordinate map.
  • the image is the TM image.
  • the image is an x-y coordinate image
  • the ideal coordinate map is an x-y coordinate map.
  • the image is aligned against the ideal coordinate map.
  • Locations of the fiducials in the image are compared with locations of the fiducials in the ideal coordinate map. Such comparison can reveal any distortion including a stretch of the microfluidic device when the image is captured, such as at TM.
  • the ideal coordinate map takes into account certain characteristics of the imaging system 10 and/or the microfluidic device 30. For example, the characteristics include some imperfections known or predicted at the time the ideal coordinate map was generated.
  • a spatial transformation from an image to an ideal coordinate space is calculated.
  • the ideal coordinate space corresponds to the ideal coordinate map.
  • a least squares transformation is calculated from the T0 image to the ideal coordinate space.
  • a least squares transformation is not calculated from the T0 image to the ideal coordinate space.
  • an image is transformed into an ideal coordinate space. For example, the T0 image is transformed. In another example, the T0 image is not transformed.
  • the transformed images are located in the same coordinate space, so they are co-registered and comparable to one another.
  • the transformed image includes at least part of the microfluidic device 30.
  • the microfluidic device 30 has the channel regions and well regions.
  • the channel regions and the well regions are interchangeable.
  • the channels and the wells refer to recessed regions in the microfluidic device. In other embodiments, the microfluidic device uses channel regions to function as well regions. In yet other embodiments, the microfluidic device includes chambers that can be used as fluid channels, control channels, and wells.
  • an image is masked. For example, a stamp or a mask is calculated using predetermined information about the idealized image.
  • the TM image is captured and transformed to the ideal coordinate space.
  • Each rectilinear region contains four bounding walls.
  • the region beyond the four bounding walls in the TM image is masked out of the subsequent analysis.
  • the interface line is masked out.
  • Figure 13 is a simplified method for wall detection.
  • the method 1300 includes process 1310 for receiving image, process 1320 for performing intensity analysis, process 1330 for converting intensities, process 1340 for detecting walls for first control channel, and process 1350 for detecting wall for second control channel.
  • processes 1310 and 1320 are combined.
  • processes 1340 and 1350 are combined.
  • Other processes may be inserted to those noted above.
  • Figures 14(a), (b) and (c) are simplified diagrams for wall detection according to an embodiment of the present invention. These diagrams are merely examples, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • an image is received.
  • the image is the T0 image or the TM image.
  • an image 1400 includes an interface line 1410 as a first control channel, a containment line 1420 as a second control channel, and a reaction channel 1430.
  • the interface line 1410 includes walls 1412 and 1414, and the containment line 1420 includes a wall 1422.
  • the reaction channel includes walls 1432 and 1434.
  • the interface line 1410 and the containment line 1420 are in the control layer.
  • the reaction channel 1430 is used for protein crystallization.
  • an intensity analysis is performed.
  • the image 1400 is analyzed based on intensity.
  • a curve 1440 represents image intensity along the direction of the reaction channel 1430.
  • the curve 1440 includes at least five peaks 1442, 1444, 1452, 1454, and 1456.
  • the peaks 1442 and 1444 correspond to bright regions, and the peaks 1452, 1454, and 1456 correspond to dark regions.
  • the peaks 1442 and 1452 are associated with the wall 1412
  • the peaks 1444 and 1454 are associated with the wall 1414
  • the peak 1456 is associated with the wall 1422.
  • the intensities are converted.
  • the curve 1440 is converted into a curve 1460.
  • the conversion removes polarity differences between the peaks 1442 and 1452 and between the peaks 1444 and 1454. Additionally, the conversion also provides a smoothing process. For example, the intensity values of the curve 1440 are compared against the average intensity value of the curve 1440, and the absolute values of the differences are plotted along the direction of the reaction channel 1430. As a result, the curve 1460 includes three peaks 1472, 1474, and 1476.
  • the peak 1472 corresponds to the peaks 1442 and 1452
  • the peak 1474 corresponds to the peaks 1444 and 1454
  • the peak 1476 corresponds to the peak 1456.
  • the smoothing process ensures the peaks 1442 and 1452 are converted into a single peak 1472.
  • the conversion is performed without the smoothing process.
  • the curve 1440 has a single peak with a single polarity in place of the peaks 1442 and 1452. No smoothing or fusing of the two peaks is needed.
  • As shown in Figure 14(c), walls of the first control channel are detected.
  • the peaks 1472 and 1474 are associated with the walls 1412 and 1414 of the first control channel 1410.
  • a line 1488 is drawn parallel to the x axis along the direction of the reaction channel.
  • the line 1488 intersects with the curve 1460 at four intersections 1482, 1484, 1486, and 1488.
  • the average x value of intersections 1482 and 1484 and the average x value of the intersections 1486 and 1488 are calculated.
  • the difference between the two average x values is determined as the calculated width of the interface line 1410.
  • the calculated width is compared against the predetermined width of the interface line 1410.
  • the difference between the calculated width and the predetermined width is minimized at a certain y position for the line 1488.
  • the average x value of intersections 1482 and 1484 is considered to be the position of the wall 1412
  • the average x value of the intersections 1486 and 1488 is considered to be the position of the wall 1414.
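  • A simplified sketch of this width-matched search is given below: the 1-D profile is converted to its smoothed absolute deviation from the mean, a threshold level is swept, and the level whose two super-threshold peaks are spaced closest to the predetermined interface-line width is kept. The peak pairing and sweep granularity are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def detect_interface_walls(profile, expected_width, smooth=5):
    """profile: 1-D intensity signal along the reaction channel (in pixels).
    Returns (left_wall, right_wall) positions, or None if no level works."""
    profile = np.asarray(profile, dtype=float)
    signal = uniform_filter1d(np.abs(profile - profile.mean()), smooth)

    best = None
    for level in np.linspace(0.1 * signal.max(), 0.9 * signal.max(), 50):
        above = np.flatnonzero(signal >= level)
        if above.size < 2:
            continue
        # Contiguous runs of super-threshold samples correspond to peaks; the
        # run centroid approximates the average of its two crossing points.
        runs = np.split(above, np.flatnonzero(np.diff(above) > 1) + 1)
        if len(runs) < 2:
            continue
        left_wall, right_wall = runs[0].mean(), runs[1].mean()
        error = abs((right_wall - left_wall) - expected_width)
        if best is None or error < best[0]:
            best = (error, left_wall, right_wall)
    return None if best is None else (best[1], best[2])
```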
  • a wall of the second control channel is detected.
  • the predetermined length of the reaction channel 1430 between the interface line 1410 and the containment line 1420 is used to calculate the position of the containment line 1420.
  • the calculation provides an approximate location for the wall 1422.
  • the approximate locations for the walls 1414 and 1422 are further adjusted by a fine-correction process.
  • the fine-correction process calculates the penalty functions for the wall 1414 and the wall 1422 and determines a combined penalty function as a function of wall positions.
  • the combined penalty function takes into account the signal intensities of the curve 1460.
  • the combined penalty function takes into account the distance between the fine-corrected wall positions and the approximate wall positions without fine correction.
  • by minimizing the combined penalty function, the locations of the walls 1414 and 1422 are determined. In yet another example, by smoothing the combined penalty function, the locations of the walls 1414 and 1422 are determined.
  • Figure 13 is merely an example, which should not unduly limit the scope of the claims.
  • the walls 1432 and 1434 of the reaction channel 1430 as shown in Figure 14(a) are found in a way similar to the walls 1412, 1414, and 1422. The distance between the two walls 1432 and 1434 is predetermined. Multiple regions of the reaction channel 1430 are sampled to generate a composite estimate of the locations for the walls 1432 and 1434. In another example, the fiducial markings are detected and registered on the channel layer, and the walls 1432 and 1434 are thereby determined.
  • the locations of the walls 1432, 1434, 1414 and 1422 can be determined based on at least information obtained from a bar code on the microfluidic device 30.
  • the region beyond the four bounding walls 1432, 1434, 1414 and 1422 can be masked out of the subsequent analysis.
  • a fiducial marking comprises a recessed region in a deformable layer.
  • the recessed region becomes a volume or open region surrounded by portions of the deformable layer or other layers.
  • the volume or open region is preferably filled with a fluid such as a gas including air or other non-reactive fluid.
  • the fluid also has a substantially different refractive index to light relative to the surrounding deformable layer.
  • the open region is preferably filled with air or an air-type mixture and has a low refractive index.
  • the fiducial marking in the control layer has similar characteristics according to a specific embodiment.
  • the fiducial marking has sharp edges that highlight the marking from its surroundings.
  • the fiducial markings can be any physical features associated with the microfluidic device 30.
  • the fiducial markings include a channel wall or an edge of the microfluidic device 30.
  • images are compared to generate a comparison image.
  • a comparison image results from the subtraction of the T0 image from the TM image.
  • a comparison image results from the subtraction of the TM1 image from the TM2 image.
  • M1 and M2 are each a positive integer smaller than or equal to N.
  • M1 is smaller than M2.
  • the mask generated in the process 530 is applied to the comparison image to create an attenuating front which softens the harsh borders that the mask would introduce to an image. The closer an image pixel is to a mask pixel, the more the image pixel is attenuated.
  • the mask takes into account wall implosion by an implosion-padding process. As discussed above and further emphasized here, the process 540 may be skipped in some examples.
  • FIG. 15 is a simplified method for implosion padding according to an embodiment of the present invention.
  • the method 4500 includes process 4510 for selecting image area, process 4520 for determining median intensity, process 4530 for determining need for additional image area, process 4540 for determining minimum intensity, and process 4550 for determining implosion padding.
  • an image area is selected from the T0 image or the TM image.
  • the selected image area is associated with a rectangular boundary.
  • Figure 16 is a simplified diagram for wall implosion according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • An image area along the perimeter of a rectangle 4610 is selected from an image. The rectangle 4610 is assigned with an index.
  • a median intensity is determined. As shown in Figure 16, the median intensity for the image area is calculated.
  • the median intensity is associated with an index corresponding to the rectangle 4610, and determined based on raw pixel intensities along the perimeter of the rectangle 4610. In another embodiment, the average intensity instead of the median intensity for the image area is determined.
  • At the process 4530, whether an additional image area should be selected is determined. If an additional image area needs to be selected, the process 4510 is performed. If an additional image area does not need to be selected, the process 4540 is performed. In one example, the processes 4520 and 4530 are repeated for a succession of nested rectangles and the rectangle index is plotted against the determined median intensity as shown in a curve 4620.
  • the minimum median intensity is determined. As shown in Figure 16, the median intensity is a function of the index, and may be plotted as the curve 4620. At an index equal to about 10, the median intensity approximately reaches a minimum. The rectangle associated with the minimum median intensity is related to the walls of the reaction chamber, and is used to determine the extent of implosion. In another embodiment, the minimum average intensity instead of the minimum median intensity for the image area is determined.
  • Figure 17 is a simplified diagram for wall implosion at another time according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • Figure 17 shows the processes 4510, 4520, 4530, and 4540 performed on an image taken later than the image analyzed in Figure 16.
  • Figure 16 is associated with the T0 image or the TM1 image.
  • Figure 17 is associated with the TM2 image, and M2 is larger than M1.
  • the index that corresponds to minimum median intensity has shifted from 10 to about 29. The change in index values indicates the wall implosion.
  • the additional implosion padding that should be applied for the image in Figure 17 is determined.
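  • A minimal sketch of this estimate is given below: the median intensity is computed along the perimeters of successively shrunken rectangles, the shrink index with the lowest median tracks the (possibly imploded) wall, and the shift in that index between the T0 image and the TM image is taken as the extra mask padding. The rectangle convention is an assumption for illustration.

```python
import numpy as np

def perimeter_pixels(img, top, left, bottom, right):
    """Pixel values along the perimeter of the inclusive rectangle [top:bottom, left:right]."""
    return np.concatenate([
        img[top, left:right + 1], img[bottom, left:right + 1],
        img[top + 1:bottom, left], img[top + 1:bottom, right],
    ])

def min_median_index(img, top, left, bottom, right):
    """Shrink the rectangle inward one pixel at a time and return the shrink
    index whose perimeter has the lowest median intensity."""
    medians = []
    i = 0
    while top + i < bottom - i and left + i < right - i:
        ring = perimeter_pixels(img, top + i, left + i, bottom - i, right - i)
        medians.append(np.median(ring))
        i += 1
    return int(np.argmin(medians))

def implosion_padding(img_t0, img_tm, chamber_rect):
    """chamber_rect: (top, left, bottom, right) around the well in ideal coordinates."""
    top, left, bottom, right = chamber_rect
    idx_t0 = min_median_index(img_t0, top, left, bottom, right)
    idx_tm = min_median_index(img_tm, top, left, bottom, right)
    return max(0, idx_tm - idx_t0)   # extra pixels of padding for the mask
```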
  • the mask can be designed to cover the wall implosion.
  • an image is inspected for crystals.
  • Figure 18 is a simplified method for image inspection. This diagram is merely an example, which should not unduly limit the scope of the claims.
  • the method 1500 includes process 1510 for training classifier and process 1520 for classifying image.
  • the specific sequences of processes may be interchanged with others replaced.
  • the process 1510 is skipped.
  • the process 1510 is repeated for a plurality of images. Further details of these processes are found throughout the present specification and more particularly below.
  • a classifier is trained.
  • Figure 19 is a simplified training method according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the process 1510 includes process 1610 for generating features, process 1620 for selecting features, process 1630 for projecting features, and process 1640 for adjusting classifier.
  • a number of features are generated.
  • the features are computed on the entire image.
  • the image is divided into overlapping tiles or spatial components, and the features are computed on each image tile or spatial component. These features describe certain characteristics of the image useful for the classification of the image. For example, the image can be classified into crystal, phase/precipitate and clear types.
  • some characteristics of the image are predetermined.
  • the predetermination is accomplished by manually and/or automatically inspecting the image.
  • the characteristics may describe with which of the crystal, phase/precipitate and clear classes the image is associated.
  • the predetermined characteristics can be used to assess the accuracy and adjust the various settings of the classifier.
  • the features include some or all of the following: Coarse Image Statistics: global image features;
  • Circle Counting Image Statistics: count of different kinds of circles and ellipses;
  • Sliding Threshold Features: threshold values at which objects of sufficient size are segmented;
  • Biggest Object Features: features of the biggest blob or object found in the image;
  • Form Analysis Features: shape analysis features;
  • Hough Transform Features: features computed using the Hough Transform method to detect straight lines;
  • Neighborhood Line Detector Features: features computed in local neighborhoods detecting straight-line patterns.
  • an N-by-N-pixel square neighborhood, for a fixed value of N, is centered around each pixel in the image and considered. For example, N is equal to 9.
  • the gradient of each pixel in the neighborhood is computed. Based on all the gradients of the pixels in the neighborhood, the dominant orientation angle indicative of the straight line pattern in the neighborhood is determined. Also, based on the number of pixels in the neighborhood aligned with the dominant orientation, the strength of the straight line pattern is determined. If there are a number of pixels forming a line and each of the neighborhoods centered at those pixels has strong and similarly oriented straight line patterns, the number of such pixels and the strength and similarity of orientations can be used as features for classification.
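  • A compact sketch of the neighborhood line detector described above is shown below (a minimal illustration reusing the GrayImage type from the earlier sketch; the bin count, central-difference gradients, and helper names are assumptions, not the patent's implementation).

    #include <cmath>
    #include <vector>

    // Dominant straight-line orientation and its strength within the N-by-N neighborhood
    // centered at (cx, cy).
    struct LinePattern { double angleDegrees; int alignedPixels; };

    LinePattern dominantOrientation(const GrayImage& img, int cx, int cy, int N = 9) {
        const double kPi = 3.14159265358979323846;
        const int half = N / 2;
        const int bins = 18;                          // 10-degree orientation bins over [0, 180)
        std::vector<int> histogram(bins, 0);
        for (int y = cy - half; y <= cy + half; ++y) {
            for (int x = cx - half; x <= cx + half; ++x) {
                if (x < 1 || y < 1 || x >= img.width - 1 || y >= img.height - 1) continue;
                double gx = img.at(x + 1, y) - img.at(x - 1, y);   // central-difference gradient
                double gy = img.at(x, y + 1) - img.at(x, y - 1);
                if (std::abs(gx) + std::abs(gy) < 1e-6) continue;  // skip flat pixels
                double angle = std::atan2(gy, gx) * 180.0 / kPi;   // gradient direction
                angle = std::fmod(angle + 270.0, 180.0);           // edge direction folded to [0, 180)
                ++histogram[static_cast<int>(angle / (180.0 / bins)) % bins];
            }
        }
        int best = 0;
        for (int b = 1; b < bins; ++b) if (histogram[b] > histogram[best]) best = b;
        // The most-voted bin gives the dominant orientation; the vote count serves as the
        // strength of the straight-line pattern at (cx, cy).
        return { best * (180.0 / bins), histogram[best] };
    }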
  • certain features are selected from the plurality of features generated. For example, a subset of features is selected using an automatic method in which features are added and removed iteratively and classification accuracy is improved or optimized.
  • the feature selection process is repeated for each pair of the classes, and the accuracy for distinguishing between each pair of classes is improved.
  • the accuracy may be determined by comparing the result from the classifier with the predetermined characteristic of the image. For example, the image is associated with three classes including crystal, phase/precipitate and clear.
  • certain features are selected from all the features obtained at the process 1610.
  • the selection includes computing the Fisher Discriminant between the pair of classes and evaluating its classification accuracy using the receiver operating characteristic (ROC) curve area, where the ROC curve is a plot of false negative rate against false positive rate.
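  • For reference, the two-class Fisher Discriminant mentioned above is, in its standard textbook form, a projection of a feature vector x onto the direction w = S_W^{-1} (m_1 - m_2), where m_1 and m_2 are the mean feature vectors of the two classes and S_W is the within-class scatter matrix; the resulting scalar w · x is what is evaluated with the ROC curve area. The patent does not spell out which variant of the discriminant is used, so this formula is given only as the conventional definition.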
  • the selected features are projected.
  • all of the selected features are projected onto the lower dimensional feature space. For example, from 130 original features, 5 groups of features are selected. As discussed above, 3 groups of features are selected from all features for 3 pairs of classes, and 2 groups of features are selected from only Neighborhood Line Detector Features for 2 pairs of classes. These 5 groups of selected features are used to calculate 5 Fisher features. The number of dimensions is reduced from 130 to 5.
  • the classifier is adjusted. In one embodiment, the Fisher features are input to a Feed Forward neural network.
  • This network is trained using a neural network training algorithm such as the backpropagation algorithm.
  • the neural network can have multiple outputs, each output indicating the likelihood of the image or the image tile being in one of the classes such as crystal, phase/precipitate or clear. If the image is divided into image tiles, the neural network outputs for the different image tiles are combined into a single output using a spatial fusion algorithm. Based on the comparison between the output from the neural network and the predetermined characteristics of the image, the neural network is adjusted. For example, the weights and/or biases of the neural network are changed. At the process 1520, an image is classified.
  • FIG 20 is a simplified method for classification according to an embodiment of the present invention.
  • the process 1520 includes process 1710 for generating features, process 1720 for projecting features, and process 1730 for determining image class.
  • a number of features are generated. These features include all the features selected at the process 1620. In one embodiment, the features are computed on the entire image. In another embodiment, the image is divided into overlapping tiles or spatial components, and the features are computed on each image tile or spatial component. In yet another embodiment, the scrubbing and ripping operations are performed on the image prior to the process 1710.
  • the selected features are projected. In one embodiment, all of the features selected at the process 1620 are projected onto the lower dimensional feature space. For example, from 130 original features, 5 groups of features are selected at the process 1620. These selected features are computed at the process 1710, and are used to calculate 5 Fisher features.
  • the image class is determined.
  • the Fisher features are input to a Feed Forward neural network.
  • the neural network can have multiple outputs, each output indicating the likelihood of the image or the image tile being in one of the classes such as crystal, phase/precipitate or clear. If the image is divided into image tiles, the neural network outputs for the different image tiles are combined into a single output using a spatial fusion algorithm. In another embodiment, the crystal likelihood is compared against a threshold. If the crystal likelihood is above the threshold, the image is classified as a crystal image. For example, the threshold is 50%.
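  • A minimal sketch of the final decision step is shown below. Taking the maximum crystal likelihood over the image tiles is only one plausible fusion rule; the patent refers to a spatial fusion algorithm without defining it, so both the fusion rule and the names here are assumptions.

    #include <algorithm>
    #include <vector>

    // Combine per-tile crystal likelihoods into a single image-level decision and compare
    // the result against a threshold (e.g., 50%).
    bool isCrystalImage(const std::vector<double>& tileCrystalLikelihoods, double threshold = 0.5) {
        if (tileCrystalLikelihoods.empty()) return false;
        double best = *std::max_element(tileCrystalLikelihoods.begin(), tileCrystalLikelihoods.end());
        return best > threshold;
    }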
  • Figures 1-17 represent certain embodiments of the present invention, and these embodiments include many examples.
  • the T0 image and/or the TM image associated with some or all of the processes 510, 520, 530, 540, and 550 may be directly acquired by the imaging system 10, or generated from a plurality of images acquired by the imaging system 10.
  • the imaging system 10 captures a plurality of images for the same area of the microfluidic system 30 at a plurality of z-focus positions respectively. The plurality of images at different z-planes are combined into one image used as the T0 image or TM image.
  • FIG. 21 is a simplified method for combining images according to an embodiment of the present invention.
  • the method 1800 includes process 1810 for determining image characteristics, process 1820 for performing statistical analysis, and process 1830 for generating combined image.
  • certain image characteristics are determined for the plurality of images.
  • the sharpness and colorness are determined for each pixel of each image.
  • the sharpness is determined with the Laplacian operator, and the colorness is determined with the Saturation channel of the HSV color model.
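  • The two per-pixel measures can be illustrated with the short C++ sketch below (a minimal illustration; the ColorImage layout, the 4-neighbor Laplacian, and the max-min saturation formula are standard choices assumed here, not details taken from the patent).

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Simple stand-in for an RGB image with per-pixel sharpness and colorness buffers.
    struct ColorImage {
        int width = 0, height = 0;
        std::vector<double> r, g, b;                  // row-major channels
        std::vector<double> laplacian, saturation;    // e.g., filled using the helpers below
    };

    // Sharpness: magnitude of a 4-neighbor discrete Laplacian of the gray level
    // (valid for interior pixels only).
    double laplacianAt(const ColorImage& im, int x, int y) {
        auto gray = [&](int xx, int yy) {
            std::size_t i = static_cast<std::size_t>(yy) * im.width + xx;
            return (im.r[i] + im.g[i] + im.b[i]) / 3.0;
        };
        return std::abs(gray(x - 1, y) + gray(x + 1, y) + gray(x, y - 1) + gray(x, y + 1) - 4.0 * gray(x, y));
    }

    // Colorness: the Saturation channel of the HSV color model, (max - min) / max.
    double saturationAt(const ColorImage& im, int x, int y) {
        std::size_t i = static_cast<std::size_t>(y) * im.width + x;
        double maxC = std::max({im.r[i], im.g[i], im.b[i]});
        double minC = std::min({im.r[i], im.g[i], im.b[i]});
        return (maxC <= 0.0) ? 0.0 : (maxC - minC) / maxC;
    }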
  • a statistical analysis is performed. In one embodiment, the statistics such as mean of sharpness and mean of colorness are determined for all the images.
  • a combined image is generated. For example, CombinedImage(x,y) = Σ (m = 1 to N) wt_m(x,y) × Image_m(x,y) (Equation 1), wherein N is the number of images in the plurality of images.
  • CombinedImage(x,y) is the intensity of the combined image at pixel (x,y).
  • Image_m(x,y) is the intensity of image m at pixel (x,y).
  • the image intensity has three components including red intensity, green intensity, and blue intensity.
  • the intensity of the combined image associated with a given color is dependent upon the intensity of image m associated with the same color.
  • the weight wt_m(x,y) is determined based on the sharpness and colorness at pixel (x, y) for image m. For example,
  • Laplacian_m(x,y) and Saturation_m(x,y) are the values of the Laplacian operator and the Saturation respectively for the pixel (x,y) on image m.
  • MeanLaplacian is the mean of Laplacian values for all pixels in all of the plurality of images
  • MeanSaturation is the mean of Saturation values for all pixels in all the plurality of images.
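  • The combination of Equation 1 can then be sketched as below (reusing the ColorImage type from the sketch above). The patent defines the weight only as being based on the sharpness and colorness at each pixel, so taking the weight proportional to the normalized Laplacian plus the normalized Saturation, and normalizing by the sum of weights, are assumptions made for this illustration.

    #include <cstddef>
    #include <vector>

    // Combine a z-stack of images into one image with per-pixel weights built from the
    // sharpness (Laplacian) and colorness (Saturation) measures.
    ColorImage combineStack(const std::vector<ColorImage>& stack,
                            double meanLaplacian, double meanSaturation) {
        ColorImage out = stack.front();               // copy dimensions; values overwritten below
        const std::size_t n = out.r.size();
        for (std::size_t i = 0; i < n; ++i) {
            double sumW = 0.0, r = 0.0, g = 0.0, b = 0.0;
            for (const ColorImage& im : stack) {
                double w = im.laplacian[i] / meanLaplacian + im.saturation[i] / meanSaturation;
                sumW += w;
                r += w * im.r[i];
                g += w * im.g[i];
                b += w * im.b[i];
            }
            if (sumW <= 0.0) continue;                // keep the copied pixel if no image has weight
            out.r[i] = r / sumW;                      // normalized weighted sum, per color channel
            out.g[i] = g / sumW;
            out.b[i] = b / sumW;
        }
        return out;
    }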
  • a reaction chamber such as a reaction channel or the protein well
  • the crystals can be located anywhere within the reaction chamber.
  • Figure 22 is a simplified diagram for deep chamber according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims.
  • a protein well 1900 has a depth of about 300 microns. In one example, the depth of focus of a 10X objective is less than 300 microns, and a single z-plane image capture cannot capture all the crystals 1910, 1920, and 1930. If the imaging system focuses on the middle of the protein well, the image may capture only the crystal 1920.
  • FIG 23 is a simplified diagram for capturing multiple images according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims.
  • Image #1 captures the crystal 1910
  • Image #2 captures the crystal 1920
  • Image #3 captures the crystal 1930.
  • the number of images depends on the objective and aperture settings of the imaging system. The smaller the aperture, the larger the depth of field, and the fewer images are needed. For example, 5 images with a 70-micron step size may be used with a 10X objective.
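  • A trivial helper for choosing the z-focus positions of the stack might look as follows (hypothetical helper; the 5-image count and 70-micron step simply restate the 10X-objective example above).

    #include <vector>

    // z positions for the stack, centered on the middle of the chamber.
    std::vector<double> zStackPositions(double centerZ, int count = 5, double stepMicrons = 70.0) {
        std::vector<double> z;
        for (int i = 0; i < count; ++i)
            z.push_back(centerZ + (i - (count - 1) / 2.0) * stepMicrons);
        return z;
    }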
  • the captured multiple images are combined according to the method 1800.
  • each of the three images has three components for a given (x, y) location.
  • the three components include red intensity, green intensity, and blue intensity.
  • the combined image has the same three components for a given (x, y) location.
  • Image_1(10, 10) = (200, 100, 50)
  • Image_2(10, 10) = (100, 200, 150)
  • Image_3(10, 10) = (50, 50, 50).
  • CombinedImage(10, 10) is as follows:
  • CombinedImage(10, 10) = (100, 196.12, 147.09) (Equation 3), where the combined image has a red intensity of 100, a green intensity of 196.12, and a blue intensity of 147.09 at x equal to 10 and y equal to 10.
  • Equation 3 is only an example, which should not unduly limit the scope of the claims.
  • Examples of the present invention include code that directs a processor to perform all or certain inventive processes as discussed above.
  • the computer code is implemented using C++ or other computer language.
  • the computer code is not intended to limit the scope of the claims herein.
  • One of ordinary skill in the art would recognize other variations, modifications, and alternatives.
  • a computer-readable medium includes instructions for processing an image of a microfluidic device.
  • the computer-readable medium includes one or more instructions for receiving a first image of a microfluidic device.
  • the first image includes a first fiducial marking and a first chamber region, and the first chamber region is associated with a first chamber boundary.
  • the computer-readable medium includes one or more instructions for transforming the first image into a first coordinate space based on at least information associated with the first fiducial marking, and one or more instructions for removing at least a first part of the first chamber boundary from the first image.
  • a computer-readable medium includes one or more instructions for processing information associated with the first chamber region, and one or more instructions for determining whether a first crystal is present in the first chamber region.
  • a computer-readable medium includes instructions for processing a plurality of images of a microfluidic device.
  • the computer-readable medium includes one or more instructions for receiving at least a first image and a second image of a microfluidic device.
  • the first image and the second image are associated with a first focal position and a second focal position respectively, and each of the first image and the second image includes a first chamber region.
  • the computer-readable medium includes one or more instructions for processing information associated with the first image and the second image, and one or more instructions for generating a third image based on at least information associated with the first image and the second image. Moreover, the computer-readable medium includes one or more instructions for processing information associated with the third image, and one or more instructions for determining whether a first crystal is present in the first chamber region based on at least information associated with the third image.
  • a computer-readable medium includes instructions for adjusting a classifier and processing an image of a microfluidic device.
  • the computer-readable medium includes one or more instructions for receiving a first image of a microfluidic device.
  • the first image is associated with at least a first predetermined characteristic.
  • the computer-readable medium includes one or more instructions for generating a first plurality of features based on at least information associated with the first image, and one or more instructions for selecting a second plurality of features from the first plurality of features based on at least information associated with the first plurality of features and the at least a first predetermined characteristic.
  • the computer-readable medium includes one or more instructions for determining a third plurality of features based on at least information associated with the second plurality of features, and one or more instructions for processing information associated with the third plurality of features. Also, the computer-readable medium includes one or more instructions for determining at least a first likelihood based on at least information based on the third plurality of features and a first plurality of parameters, one or more instructions for processing information associated with the first likelihood and the at least a first predetermined characteristic, and one or more instructions for adjusting the first plurality of parameters based on at least information associated with the first likelihood and the at least a first predetermined characteristic.
  • a wall of the second control channel is detected. In one embodiment, once the interface line 1410 is located, the predetermined length of the reaction channel 1430 between the interface line 1410 and the containment line 1420 is used to calculate the position of the containment line 1420. The calculation provides an approximate location for the wall 1422. Afterwards, the approximate locations for the walls 1414 and 1422 are further adjusted by a fine-correction process. An exemplary computer code for fine correction is provided as part of the present application.
  • Appendix A and Appendix B are attached as part of the present patent application. These appendices provide some examples and should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the invention provides a microfluidic structure and method of manufacture, and a system and method for imaging a microfluidic device.
  • the fiducial markings are used for processing and imaging a microfluidic chip, but it would be recognized that the invention has a much broader range of applicability.
  • microelectromechanical systems (MEMS)
  • Such MEMS structures include pumps and valves.
  • the pumps and valves are often silicon-based and are made from bulk micro-machining (which is a subtractive fabrication method whereby single crystal silicon is lithographically patterned and then etched to form three-dimensional structures).
  • the pumps and valves also use surface micro-machining (which is an additive method where layers of semiconductor-type materials such as polysilicon, silicon nitride, silicon dioxide, and various metals are sequentially added and patterned to make three-dimensional structures).
  • a limitation of silicon-based micro-machining is that the stiffness of the semiconductor materials used necessitates high actuation forces, which result in large and complex designs.
  • both bulk and surface micro-machining methods are often limited by the stiffness of the materials used.
  • adhesion between various layers of the fabricated device is also a problem.
  • wafer bonding techniques must be employed to create multilayer structures.
  • thermal stresses between the various layers of the device limit the total device thickness, often to approximately 20 microns. Using either of the above methods, clean room fabrication and careful quality control are required.
  • microfluidic systems using an elastomeric structure have been proposed.
  • these structures are often made by forming an elastomeric layer on top of a micromachined mold.
  • the micromachined mold has a raised protrusion which forms a recess extending along a bottom surface of the elastomeric layer.
  • the elastomeric layer is bonded to other elastomeric layers to form fluid and control regions.
  • the elastomeric layer has overcome certain limitations of conventional MEMS based structures. Further details of other characteristics of these elastomeric layers for microfluidic applications such as crystallization have been provided below.
  • Crystallization is an important technique to the biological and chemical arts. Specifically, a high-quality crystal of a target compound can be analyzed by x-ray diffraction techniques to produce an accurate three-dimensional structure of the target. This three-dimensional structure information can then be utilized to predict functionality and behavior of the target.
  • the crystallization process is simple.
  • a target compound in pure form is dissolved in solvent.
  • the chemical environment of the dissolved target material is then altered such that the target is less soluble and reverts to the solid phase in crystalline form.
  • This change in chemical environment is typically accomplished by introducing a crystallizing agent that makes the target material less soluble, although changes in temperature and pressure can also influence solubility of the target material.
  • forming a high quality crystal is generally difficult, often requiring much trial and error and patience on the part of the researcher.
  • the highly complex structure of even simple biological compounds means that they are usually not amenable to forming a highly ordered crystalline structure. Therefore, a researcher needs to be patient and methodical, experimenting with a large number of conditions for crystallization, altering parameters such as sample concentration, solvent type, countersolvent type, temperature, and duration in order to obtain a high quality crystal.
  • a high-throughput system for screening conditions for crystallization of target materials, for example proteins, is provided in a microfluidic device.
  • the array of metering cells is formed by a multilayer elastomeric manufacturing process.
  • Each metering cell comprises one or more pairs of opposing chambers, each chamber being in fluid communication with the other through an interconnecting microfluidic channel, one chamber containing a protein solution, and the other, opposing chamber, containing a crystallization reagent.
  • a valve is situated to keep the contents of opposing chambers from each other until the valve is opened, thus allowing free interface diffusion to occur between the opposing chambers through the interconnecting microfluidic channel.
  • the microfluidic devices taught by Hansen et al. have arrays of metering cells containing chambers for conducting protein crystallization experiments therein. Use of such arrays in turn provides for high-throughput testing of numerous conditions for protein crystallization which require analysis. See PCT publication WO 02/082047 by Hansen et al., published October 17, 2002. PCT publication WO 02/082047 is incorporated by reference herein in its entirety for all purposes. From the above, it is seen that improved techniques for elastomeric design and analysis are highly desirable.
  • the invention provides a microfluidic structure and method of manufacture, and a system and method for imaging a microfluidic device.
  • the fiducial markings are used for processing and imaging a microfluidic chip, but it would be recognized that the invention has a much broader range of applicability.
  • the invention provides a biological substrate, e.g., microfluidic chip.
  • the substrate includes a rigid substrate material, which has a surface region capable of acting as a handle substrate.
  • the substrate also has a deformable fluid layer (e.g., polymeric material, silicone, silicone rubber, rubber, plastic, PDMS) coupled to the surface region.
  • a deformable fluid layer e.g., polymeric material, silicone, silicone rubber, rubber, plastic, PDMS
  • One or more well regions are formed in a first portion of the deformable fluid layer and are capable of holding a fluid therein.
  • the one or more channel regions are formed in a second portion of the deformable fluid layer and are coupled to one or more of the well regions.
  • An active region is formed in the deformable fluid layer.
  • Such active region includes the one or more well regions, which are designed to hold fluid.
  • a non-active region is formed in the deformable fluid layer. The non-active region is formed outside of the first portion and the second portion.
  • At least three fiducial markings are formed within the non-active region and disposed in a spatial manner associated with at least one of the well regions.
  • a control layer is coupled to the fluid layer.
  • the substrate also includes another fiducial marking with a pre-designed shape and size, including at least an edge and a center region.
  • the invention provides a method of fabricating a biological substrate.
  • the method includes providing a rigid substrate material, which has a surface region and is capable of acting as a handle substrate.
  • the method includes coupling a deformable fluid layer to the surface region of the rigid substrate.
  • the deformable layer has one or more well regions formed in a first portion of the deformable fluid layer and one or more channel regions formed in a second portion of the deformable fluid layer.
  • An active region is formed in the deformable fluid layer.
  • a non-active region is formed in the deformable fluid layer and is formed outside of the first portion and the second portion.
  • the invention provides a method of manufacturing microfluidic chip structures.
  • the method includes providing a mold substrate including a plurality of well patterns. Each of the well patterns is provided within a portion of an active region of a fluidic chip.
  • the method includes forming a plurality of fiducial marking patterns around a vicinity of each of the well patterns. Each of the plurality of fiducial marking patterns is within a portion of a non-active region of a fluidic chip.
  • the plurality of fiducial marking patterns includes a set of alignment marks disposed spatially around each of the well patterns.
  • the method also includes forming a thickness of deformable material within the plurality of well patterns and within the plurality of fiducial marking patterns to fill a portion of the mold substrate.
  • the method includes coupling the thickness of deformable material including a plurality of wells formed from the well patterns and a plurality of fiducial marking patterns formed from the fiducial marking patterns to rigid substrate material.
  • the present invention provides a microfluidic system.
  • the system has a rigid substrate material, which includes a surface region that is capable of acting as a handle substrate.
  • the system has a deformable fluid layer coupled to the surface region.
  • One or more well regions is formed in a first portion of the deformable fluid layer.
  • the one or more well regions is capable of holding a fluid therein.
  • the system has one or more channel regions formed in a second portion of the deformable fluid layer.
  • the one or more channel regions is coupled to one or more of the well regions.
  • An active region is formed in the deformable fluid layer.
  • the active region includes the one or more well regions.
  • a non-active region is formed in the deformable fluid layer.
  • the non-active region is formed outside of the first portion and the second portion.
  • a first fiducial marking is formed within the non-active region and is disposed in a spatial manner associated with at least one of the channel regions.
  • a second fiducial marking is formed within the non-active region and is disposed in a spatial manner associated with at least one of the well regions.
  • a control layer is coupled to the fluid layer.
  • the control layer includes one or more control regions.
  • a third fiducial marking is formed within the control layer.
  • the present invention provides another microfluidic system.
  • the system has a substrate comprising a surface region.
  • a deformable layer is coupled to the surface of the substrate.
  • the deformable layer comprises at least a thickness of first material.
  • a control layer is coupled to the deformable layer to form a sandwich structure including at least the substrate, the deformable layer and the control layer.
  • the control layer is made of at least a thickness of second material
  • At least one fiducial marking is provided within either the control layer or the deformable layer or the substrate.
  • the fiducial marking is characterized by a visual pattern provided in a volume surrounded wholly or partially by at least the substrate, the first material, or the second material.
  • a fluid is disposed within the open volume of the one fiducial marking.
  • the fluid is characterized by a refractive index that is substantially lower than its surrounding regions, e.g., first thickness of material, second thickness of material, substrate. That is, the refractive index may be associated with air or other like fluid and the surrounding regions are characterized by a refractive index associated with a solid according to a specific embodiment.
  • the invention provides at least one way to form alignment patterns for a deformable active region for a microfluidic system according to a specific embodiment.
  • the invention can also use conventional materials, which are relatively easy to use.
  • the invention provides at least two sets of alignment marks, including one set of spatially disposed fiducial markings and a pre-designated pattern, which has an edge and center region. Depending upon the embodiment, one or more of these benefits may exist.
  • the invention provides a method for processing a microfluidic device, e.g., microfluidic chip, biological chip.
  • the method includes providing a flexible substrate including a first plurality of fiducial markings, and determining a first plurality of actual locations corresponding to the first plurality of fiducial markings respectively.
  • the first plurality of fiducial markings is associated with a first plurality of design locations respectively.
  • the method includes processing information associated with the first plurality of actual locations and the first plurality of design locations, and determining a transformation between a design space and a measurement space.
  • the design space is associated with the first plurality of design locations, and the measurement space is associated with the first plurality of actual locations.
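  • One way to realize such a transformation is sketched below: an affine model solved exactly from three design/measured point pairs. The affine form and all names are illustrative assumptions; the patent does not commit to a particular transform model.

    #include <array>

    struct Point { double x, y; };
    struct Affine { double a, b, tx, c, d, ty; };   // [x'; y'] = [a b; c d][x; y] + [tx; ty]

    // Solve for the affine map that takes each design-space fiducial location to its
    // measured location, using Cramer's rule on the 3x3 system [xd yd 1] * coeffs = measured.
    Affine solveAffine(const std::array<Point, 3>& design, const std::array<Point, 3>& measured) {
        auto det3 = [](double m00, double m01, double m02,
                       double m10, double m11, double m12,
                       double m20, double m21, double m22) {
            return m00 * (m11 * m22 - m12 * m21)
                 - m01 * (m10 * m22 - m12 * m20)
                 + m02 * (m10 * m21 - m11 * m20);
        };
        double D = det3(design[0].x, design[0].y, 1,
                        design[1].x, design[1].y, 1,
                        design[2].x, design[2].y, 1);
        auto solveRow = [&](double r0, double r1, double r2) {
            // Coefficients (p, q, s) such that r_i = p*xd_i + q*yd_i + s for i = 0, 1, 2.
            double p = det3(r0, design[0].y, 1, r1, design[1].y, 1, r2, design[2].y, 1) / D;
            double q = det3(design[0].x, r0, 1, design[1].x, r1, 1, design[2].x, r2, 1) / D;
            double s = det3(design[0].x, design[0].y, r0, design[1].x, design[1].y, r1,
                            design[2].x, design[2].y, r2) / D;
            return std::array<double, 3>{p, q, s};
        };
        auto row1 = solveRow(measured[0].x, measured[1].x, measured[2].x);
        auto row2 = solveRow(measured[0].y, measured[1].y, measured[2].y);
        return {row1[0], row1[1], row1[2], row2[0], row2[1], row2[2]};
    }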
  • a method for processing a microfluidic device includes providing a flexible substrate including at least three fiducial markings, a first additional fiducial marking, and a first chamber capable of holding a fluid therein.
  • the method includes determining a transformation between a design space and a measurement space based on at least information associated with the at least three fiducial markings, and performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space. Moreover, the method includes acquiring at least a first image of the first additional fiducial marking associated with the first chamber, performing a second alignment to the flexible substrate based on at least information associated with the first image, and acquiring a second image of the first chamber associated with the flexible substrate.
  • the invention provides a system for processing one or more microfluidic devices.
  • the system includes one or more computer- readable media and a stage for locating a flexible substrate.
  • the flexible substrate comprises at least three fiducial markings, a first additional fiducial marking, and a first chamber capable of holding a fluid therein.
  • the one or more computer-readable media include one or more instructions for providing a flexible substrate, and one or more instructions for determining a transformation between a design space and a measurement space based on at least information associated with the at least three fiducial markings.
  • the one or more computer-readable media include one or more instructions for performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space, one or more instructions for acquiring at least a first image of the first additional fiducial marking associated with the first chamber, one or more instructions for performing a second alignment to the flexible substrate based on at least information associated with the first image, and one or more instructions for acquiring a second image of the first chamber associated with the flexible substrate.
  • a method for processing a microfluidic device includes providing a flexible substrate (e.g., polymer, silicone based, rubber) comprising one or more well regions and a plurality of fiducial marks.
  • the well regions are capable of holding a fluid therein and at least three of the fiducial marks are within a vicinity of one of the well regions.
  • the flexible substrate has been provided on a rigid member.
  • the method includes locating the flexible substrate on a stage and capturing an image of at least the three fiducial marks within the vicinity of the one well region of the flexible substrate to generate a mapping from a design space to a measurement space.
  • the method also includes aligning the flexible substrate to an image acquisition location using at least the mapping from the design space and one additional fiducial mark, wherein the at least one additional fiducial mark is associated with the one well region.
  • the method also includes acquiring a high-resolution image of at least the one well region and storing the high-resolution image in a memory.
  • the invention provides a system for processing one or more microfluidic devices.
  • the system includes one or more computer memories.
  • the system also includes a stage for locating a flexible substrate, which has one or more well regions and a plurality of fiducial marks.
  • the well regions are capable of holding a fluid therein.
  • At least three of the fiducial marks are within a vicinity of one of the well regions.
  • the one or more computer memories comprise one or more computer codes.
  • the one or more computer codes include a first code directed to capturing an image of at least the three fiducial marks within the vicinity of the one well region of the flexible substrate to generate a mapping from a design space to a measurement space.
  • a second code is directed to aligning the flexible substrate to an image acquisition location using at least the mapping from the design space and one additional fiducial mark, wherein the at least one additional fiducial mark is associated with the one well region.
  • a third code is directed to acquiring a high-resolution image of at least the one well region.
  • a fourth code is directed to storing the high-resolution image in a memory.
  • the method also includes locating the deformable substrate on a stage translatable in x, y, and z directions and translating the stage to image at least four fiducial marks associated with the deformable substrate.
  • the method determines x, y, and z positions (or other like spatial positions) of the at least four fiducial marks according to a preferred embodiment.
  • the method computes a non-planar mapping between a design space and a measurement space based on the x, y, and z positions of the at least four fiducial marks and translates the stage to an image acquisition position calculated using the non-planar mapping.
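  • One simple non-planar model consistent with the description above is a bilinear surface for z fit exactly to the four measured fiducial positions, as sketched below. The bilinear form, the solver, and the names are assumptions for illustration only; the four (x, y) positions are assumed not to be degenerate.

    #include <array>
    #include <cmath>
    #include <utility>

    struct Fiducial { double x, y, z; };

    // Fit z = c0 + c1*x + c2*y + c3*x*y exactly through four measured fiducial marks by
    // Gauss-Jordan elimination with partial pivoting.
    std::array<double, 4> fitBilinearZ(const std::array<Fiducial, 4>& marks) {
        double A[4][5];
        for (int i = 0; i < 4; ++i) {
            A[i][0] = 1.0;
            A[i][1] = marks[i].x;
            A[i][2] = marks[i].y;
            A[i][3] = marks[i].x * marks[i].y;
            A[i][4] = marks[i].z;
        }
        for (int col = 0; col < 4; ++col) {
            int pivot = col;
            for (int r = col + 1; r < 4; ++r)
                if (std::abs(A[r][col]) > std::abs(A[pivot][col])) pivot = r;
            for (int k = 0; k < 5; ++k) std::swap(A[col][k], A[pivot][k]);
            for (int r = 0; r < 4; ++r) {
                if (r == col) continue;
                double f = A[r][col] / A[col][col];
                for (int k = col; k < 5; ++k) A[r][k] -= f * A[col][k];
            }
        }
        // The diagonalized system gives the coefficients directly.
        return { A[0][4] / A[0][0], A[1][4] / A[1][1], A[2][4] / A[2][2], A[3][4] / A[3][3] };
    }

    // Predicted substrate z at a stage position (x, y): c0 + c1*x + c2*y + c3*x*y.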
  • a step of capturing an image of at least one metering cell is included.
  • a method for producing an image of an object within a chamber of a microfluidic device includes providing the microfluidic device.
  • the microfluidic device has x, y, and z dimensions and a chamber depth center point located between a top wall and a bottom wall of the chamber along the z dimension.
  • the chamber depth center point is located a known z dimension distance from an optically detectable fiducial marking embedded within the microfluidic device at a z depth.
  • the method includes placing the microfluidic device within an imaging system.
  • the imaging system includes an optical device capable of detecting the fiducial marking and transmitting the image of the object.
  • the optical device defines an optical path axially aligned with the z dimension of the microfluidic device and has a focal plane perpendicular to the optical path.
  • the fiducial marking is maximally detected when the focal plane is at the z depth in comparison to when the focal plane is not substantially in-plane with the z depth.
  • the imaging system includes an image processing device in communication with the optical device. The image processing device is able to control the optical device to cause the focal plane to move along the z axis and move the focal plane to maximally detect the fiducial marking.
  • the image processing device is further able to transmit the image of the object.
  • the method includes controlling the optical device with the image processing device to cause the focal plane to move along the optical path until the optical device maximally detects the fiducial marking. Moreover, the method includes controlling the optical device with the image processing device to move the focal plane along the optical path the z dimension distance to cause the field depth center point to be located at the chamber depth center point. Moreover, the method includes imaging the object within the chamber while the focal plane is located at the chamber depth center point.
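  • A minimal sketch of the z search described above is given below: step the focal plane through a range, score how strongly the fiducial is detected at each step, and keep the best z. The scoring function is left abstract; a sharpness measure over the fiducial region is one plausible choice (an assumption, not the patent's specification).

    #include <functional>

    // Returns the z position at which the fiducial detection score is maximal. The chamber
    // depth center point is then this z plus the known z dimension distance.
    double findFiducialZ(double zStart, double zEnd, double zStep,
                         const std::function<double(double)>& detectionScoreAtZ) {
        double bestZ = zStart, bestScore = detectionScoreAtZ(zStart);
        for (double z = zStart + zStep; z <= zEnd; z += zStep) {
            double s = detectionScoreAtZ(z);
            if (s > bestScore) { bestScore = s; bestZ = z; }
        }
        return bestZ;
    }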
  • a system for producing an image of an object within a chamber of a microfluidic device includes the microfluidic device.
  • the microfluidic device has x, y, and z dimensions and a chamber depth center point located between a top wall and a bottom wall of the chamber along the z dimension.
  • the chamber depth center point is located a known z dimension distance from an optically detectable fiducial marking embedded within the microfluidic device at a z depth.
  • the system includes an imaging system for placing the microfluidic device therein.
  • the imaging system includes an optical device capable of detecting the fiducial marking and transmitting the image of the object.
  • the optical device defines an optical path axially aligned with the z dimension of the microfluidic device and having a focal plane.
  • the fiducial marking is maximally detected when the focal plane is substantially in-plane with the z depth as compared to when the field depth center point is not substantially in-plane with the z depth.
  • the imaging system includes an image processing device in communication with the optical device. The image processing device is able to control the optical device to cause the focal plane to move along the z axis and move the field depth center point to maximally detect the fiducial marking.
  • the image processing device is able to transmit the image of the object.
  • the image processing device is in operable communication with the optical device to cause the focal plane to move along the optical path until the optical device maximally detects the fiducial marking.
  • when the image processing device causes the optical device to move the focal plane along the optical path by the z dimension distance, the focal point is located at the chamber depth center point.
  • a method for producing an image of a chamber within a microfluidic device includes imaging the microfluidic device to produce an image using an imaging system having an optical path in the z plane of the microfluidic device, and mapping from the image a first set of coordinates of the microfluidic device to determine whether the microfluidic device is skewed or distorted when compared to a coordinate map of an ideal microfluidic device. Additionally, the method includes positioning the microfluidic device so as to position the chamber within the optical path based on a calculated coordinate position determined by computing a matrix transformation between the first set of coordinates of the microfluidic device and the coordinate map of the ideal microfluidic device.
  • the method includes obtaining a time zero image of the microfluidic device chamber.
  • the time zero image contains images of artifacts present in the microfluidic device.
  • the method includes obtaining a second image of the microfluidic device chamber and subtracting the first image of the microfluidic device chamber from the second image of the microfluidic chamber to produce an image of the chamber without time zero artifacts.
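  • A minimal sketch of the subtraction is given below (reusing the GrayImage type from the earlier sketch; clamping negative differences to zero is an assumption, since the patent only states that the time zero image is subtracted).

    #include <cstddef>

    // Subtract the time zero image from a later image so that artifacts already present at
    // time zero are removed from the result.
    GrayImage subtractTimeZero(const GrayImage& later, const GrayImage& timeZero) {
        GrayImage diff = later;
        for (std::size_t i = 0; i < diff.pixels.size(); ++i) {
            double d = later.pixels[i] - timeZero.pixels[i];
            diff.pixels[i] = (d > 0.0) ? d : 0.0;
        }
        return diff;
    }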
  • Some embodiments provide alignment and/or focus based on mapping between the design space and the measurement space.
  • the transformation between the design space and the measurement space uses, for example, at least three fiducial markings.
  • Certain embodiments provide accurate focusing by acquiring and analyzing a plurality of images along at least one dimension.
  • Some embodiments of the present invention perform alignment and focusing on a microfluidic device including at least one flexible substrate. The alignment and focusing take into account the deformation of the flexible substrate.
  • Certain embodiments improve throughput of the imaging system. For example, the imaging system uses a computer system to automatically perform alignment and focusing.
  • mapping from the design space to the measurement space increases the accuracy of stage positioning, and thereby, the efficiency of high-resolution image acquisition.
  • Figures 1-10 are simplified diagrams illustrating a method for fabricating a microfluidic system according to an embodiment of the present invention;
  • Figure 11 is a simplified cross-sectional view diagram of a micro fluidic system according to an embodiment of the present invention.
  • Figure 12 is a simplified top-view diagram of a microfluidic system according to an alternative embodiment of the present invention.
  • Figure 13 is a simplified top and side-view diagram of a microfluidic system according to an alternative embodiment of the present invention
  • Figure 13A is a simplified top-view diagram of a microfluidic system including carrier and identification code according to an embodiment of the present invention
  • Figure 14 is a simplified imaging system for imaging objects within a microfluidic device according to an embodiment of the present invention.
  • Figures 15A and 15B are a simplified microfluidic device according to an embodiment of the present invention.
  • Figures 16A and 16B are simplified actual image in measurement space and simplified virtual image in design space respectively according to an embodiment of the present invention;
  • Figures 17A, 17B, and 17C show a simplified method for image subtraction and masking according to an embodiment of the present invention
  • FIG. 18 is a simplified imaging method for a microfluidic device according to an embodiment of the present invention.
  • Figure 19 is a simplified method for mapping between the measurement space and the design space according to an embodiment of the present invention.
  • Figure 20 is a simplified diagram for fiducial markings according to an embodiment of the present invention.
  • Figure 21 is a simplified method for locating fiducial marking according to an embodiment of the present invention.
  • FIG. 22 is a simplified metering cell shifted from design position according to an embodiment of the present invention.
  • Figure 23 is a simplified method for aligning and focusing image system according to an embodiment of the present invention .
  • Figure 24 is a simplified method for acquiring images of fiducial marking according to an embodiment of the present invention
  • Figure 25 is a simplified method for aligning and focusing image system according to an embodiment of the present invention
  • Figure 26 is a simplified image acquired and analyzed according to an embodiment of the present invention.
  • Figure 27 shows simplified curves for focus score as a function of z position obtained at the process 804 according to an embodiment of the present invention
  • Figure 28 shows simplified curves for focus score as a function of z position according to one embodiment of the present invention.
  • Figure 29 shows simplified curves for focus score as a function of z position according to another embodiment of the present invention.
  • Figure 30 is a simplified surface map of a three dimensional flexible substrate according to an embodiment of the present invention.
  • the invention provides a microfluidic structure and method of manufacture, and a system and method for imaging a microfluidic device.
  • the fiducial markings are used for processing and imaging a microfluidic chip, but it would be recognized that the invention has a much broader range of applicability.
  • a method for manufacturing a fluidic chip according to an embodiment of the present invention may be outlined below. Certain details of the method 100 are also provided according to a flow diagram illustrated by Figure 1, which is not intended to unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • each of the molded channel, well, and control layers is deformable or elastic. That is, well regions may vary slightly from well to well throughout a single microfluidic system, which has been provided on a chip.
  • the present system includes at least one or more fiducial markings that have been placed in predetermined spatial locations to be used with image processing techniques. These fiducial markings allow for any inherent errors caused by the deformable characteristic to be compensated at least in part using the image processing techniques. Further details of methods and resulting structures of the present microfluidic system have been described throughout the present specification and more particularly below.
  • a method for manufacturing a mold for a fluid layer according to an embodiment of the present invention may be outlined below. Certain details of the method 200 are also provided according to a flow diagram illustrated by Figure 2, which is not intended to unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • Pattern, including fiducials (e.g., dots), the first layer of photoresist to form channel regions (203);
  • Form channel regions including fiducials through the patterned film on the mold substrate material (204);
  • each of the molded channel and well layers is deformable or elastic.
  • the present system includes at least one or more fiducial markings that have been placed in predetermined spatial locations to be used with image processing techniques. These fiducial markings allow for any inherent errors caused by the deformable characteristic to be compensated at least in part using the image processing techniques. Further details of methods and resulting structures of the present micro fluidic system have been described throughout the present specification and more particularly below.
  • a method for manufacturing a mold for a control layer according to an embodiment of the present invention may be outlined below. Certain details of the method 300 are also provided according to a flow diagram illustrated by Figure 3, which is not intended to unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the above sequence of steps provides a method for manufacturing a mold for a molded control layer according to a specific embodiment.
  • the control layer is deformable or elastic.
  • the present system includes at least one or more fiducial markings that have been placed in predetermined spatial locations to be used with image processing techniques. These fiducial markings allow for any inherent errors caused by the deformable characteristic to be compensated at least in part using the image processing techniques. Further details of methods and resulting structures of the present microfluidic system have been described throughout the present specification and more particularly below.
  • Figures 1-11 are simplified diagrams illustrating a method for fabricating a microfluidic system according to an embodiment of the present invention. These diagrams are merely examples, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, modifications, and alternatives. As noted above, Figures 1 through 3 have been described. Certain features with regard to illustrating features of the fluidic system have been provided by way of Figures 4 through 11. For easy viewing, the left side illustrates an overview of the entire substrate, including patterns, while the right side illustrates a portion of the pattern that is pertinent according to a feature being described. [0088] Referring to Figure 4, fluid channel layer is illustrated.
  • the fluid channel layer (or control layer) includes fluid channels 401 to deliver fluid throughout the substrate 403. Fiducial markings in the shape of circles 405 are used to locate the channels themselves. These circles are part of the fluid channel layer mask and are transferred with the channels onto the substrate. The circles are recessed regions, which do not extend all the way through the layer, in preferred embodiments.
  • a well layer 501 including well regions 501, 503 on the substrate is illustrated.
  • the well layer includes the well regions and the company logo 507 (which serves as a predetermined fiducial marking according to preferred embodiments) that enables x-y spatial location of a metering cell.
  • the logo is also used for focusing onto the wells as the logo height is the same height as the wells.
  • the well layer also includes a plurality of fiducial markings 505, e.g., crosses. Such crosses are located within a vicinity of each of the well regions. The crosses are at a finite distance and are translated from the mask to the substrate. When using image processing algorithms to locate one or more of the wells, the crosses can be used as a reference to well location.
  • each of the crosses is located in a spatial manner around a periphery of the well region. That is, each of the crosses occupies a corner region that is not active and is free from the well itself.
  • alignment occurs between the fluid channel layer and well layer according to a specific embodiment.
  • the method aligns these two layers at the substrate mold making process.
  • the well layer has a different thickness and shape than the fluid layer.
  • the well layer produces sharp edges while fluid channel layer produces round edges.
  • a goal is to have the wells overlaying the channels in order for the channels to distribute fluids into the wells.
  • the well layer mask is aligned to the fluid layer to place wells over the fluid channels, as shown. Alignment is done by matching the frame of the well layer to the frame of the fluid channel layer.
  • the method generally forms more than one design 701 on a substrate material as shown in Figure 7. Each of these designs can be separated using regions 703 according to a preferred embodiment.
  • the method performs final assembly after silicone (or other like material) has been poured separately over the fluid/well layer mold and the control layer mold. Preferably, the final assembly is made when the control layer of silicone is aligned to the fluid layer of silicone. Matching alignment marks are located on the fluid and control layer that need to overlay each other for proper alignment.
  • the method includes placing a template of the patterned substrate underneath the blank substrate, which is transparent, as illustrated by Figure 8. The template allows carrier top access to reagent inputs.
  • Figure 9 illustrates the patterned substrate, including wells and channels, overlying the transparent substrate. Details of the fiducial markings are provided throughout the present specification and more particularly below.
  • Figure 10 is a simplified top-view diagram 1000 of a completed microfluidic system including well 1001 and channel regions 1003. As shown, fiducial markings 1005 are disposed spatially around a periphery of the well region. The system also has company logo 1007, which is a predetermined fiducial marking that is larger in size than the other fiducial markings. The predetermined fiducial marking has one or more edges and a center region, among other features, as needed. Of course, one of ordinary skill in the art would recognize many other variations, modifications, and alternatives. Specific details with regard to the present system are also provided using the side-view diagram illustrated below.
  • Figure 11 is a simplified cross-sectional view diagram 1115 of a microfluidic system 1100 according to an embodiment of the present invention.
  • This diagram is merely an example, which should not unduly limit the scope of the claims herein.
  • the system includes a glass substrate 1103 or any like transparent substrate material, which can act as a handle substrate.
  • Overlying the handle substrate are fluid channel 1105 and well layer 1107.
  • the fluid channel and well layer have been provided on a single layer 1109 or can be multiple layers.
  • the fluid channel has a depth that is less than the well, which extends into the single layer.
  • the fluid channel and well layer are made using a suitable material such as silicone, silicone rubber, rubber, plastic, PDMS, or other polymeric material.
  • the material is also transparent, but may be deformable or alternatively flexible in characteristic.
  • the system also has a control layer 1111, which includes control channel
  • the control layer is made using a suitable material such as silicone, silicone rubber, rubber, plastic, PDMS, or other polymeric material. Depending upon the embodiment, there may also be other features in the system.
  • One 1102 of a plurality of fiducial markings is also shown.
  • the marking is in a vicinity of the well region and also has a height relative to the wells that is substantially similar. That is, optically the height of the marking is about the same as the well relative to a plane parallel to the substrate.
  • the marking may be formed based upon a predetermined offset relative to the plane parallel to the substrate in other embodiments. Certain dimensions are also shown, but are not intended to be limiting in any manner. Depending upon the embodiment, there can be many variations, alternatives, and modifications.
  • FIG. 12 is a simplified top-view diagram of a microfluidic system according to an alternative embodiment of the present invention.
  • the substrate includes a rigid substrate material, which has a surface region.
  • the substrate is capable of acting as a handle substrate.
  • the rigid substrate can be made of a suitable material such as a glass, a plastic, silicon, quartz, multi-layered materials, or any combination of these, and the like. Of course, the type of substrate used depends upon the application.
  • the substrate also includes a deformable fluid layer coupled to the surface region.
  • the fluid layer is attached using a glue layer or other attachment technique.
  • One or more well regions are formed in a first portion of the deformable fluid layer.
  • the one or more well regions is capable of holding a fluid therein.
  • One or more channel regions is formed in a second portion of the deformable fluid layer.
  • the one or more channel regions is coupled to one or more of the well regions.
  • the channel regions include protein channels 1201 and reagent channels 1203. Other channel regions can also be included.
  • the fluid layer includes active and non-active regions.
  • An active region is formed in the deformable fluid layer.
  • the active region includes the one or more well regions.
  • a non-active region is formed in the deformable fluid layer.
  • the non-active region is formed outside of the first portion and the second portion.
  • the terms “active” and “non-active” are merely used for illustration purposes and should not limit the scope of the claims herein.
  • the non-active region generally corresponds to regions free from use of fluids or other transport medium, and the like.
  • the substrate includes a plurality of fiducial markings. Each of the fiducial markings is selectively placed within a certain layer region.
  • a first fiducial marking 1205 is formed within the non-active region and disposed in a spatial manner associated with at least one of the channel regions. That is, the first fiducial marking is within the channel regions.
  • the first fiducial marking is a recessed region that includes a selected width and depth. The recessed region forms a pattern to be captured by an image processing technique.
  • a second fiducial marking 1213 is formed within the non-active region and disposed in a spatial manner associated with at least one of the well regions. That is, the second fiducial marking is within the well regions.
  • the second fiducial marking is a recessed region that includes a selected width and depth.
  • the recessed region forms a pattern to be captured by an image processing technique.
  • the substrate also has a control layer coupled to the fluid layer.
  • the control layer includes one or more control regions.
  • the control layer includes interface control line 1207 and containment control line 1209. Other control lines can also be included.
  • a third fiducial marking 1211 is formed within the control layer.
  • the third fiducial marking is a recessed region that includes a selected width and depth. The recessed region forms a pattern to be captured by an image processing technique. Further details of the substrate can be found throughout the present specification and more particularly below.
  • FIG. 13 is a simplified top and side-view diagram 1300 of a microfluidic system according to an alternative embodiment of the present invention.
  • This diagram is merely an example, which should not unduly limit the scope of the claims herein.
  • the diagram includes a “top view,” a “detailed top view,” and a “side view” of fluidic microstructures according to embodiments of the present invention.
  • the system also includes global fiducials 1301.
  • the global fiducials are used for rough alignment purposes, although may be used for fine alignment as well.
  • the global fiducials are characterized by a spatial dimension of greater than 100 µm and less than 250 µm.
  • the global fiducials include a length and a width of about 180 µm and 160 µm respectively.
  • the global fiducials are characterized by a depth of at least 10 µm within a thickness of the non-active region.
  • the global fiducials include a thickness of about 20 µm and are within the deformable layer 1305 as shown.
  • the side view diagram includes a substrate 1302, which is preferably rigid, with an upper surface region.
  • the rigid substrate can be made of a suitable material such as a glass, a plastic, silicon, quartz, multi-layered materials, or any combination of these, and the like. Of course, the type of substrate used depends upon the application.
  • the substrate also includes a deformable fluid layer coupled to the surface region.
  • the fluid layer is attached using a glue layer or other attachment technique.
  • One or more well regions are formed in a first portion of the deformable fluid layer.
  • the one or more well regions 1309 is capable of holding a fluid therein. As shown, the well region has a certain thickness within the deformable layer.
  • One or more channel regions 1311 is formed in a second portion of the deformable fluid layer.
  • the one or more channel regions is coupled to one or more of the well regions.
  • the channel regions include protein channels and reagent channels. Other channel regions can also be included. As shown, the channel regions are not as thick as the well regions.
  • the deformable layer includes an upper surface, which couples to control layer 1307.
  • the control layer includes a plurality of control channels 1313.
  • Fiducial markings are selectively placed in a spatial manner on the microfluidic system.
  • the global alignment fiducial marking is formed in the deformable layer within a vicinity of a well region.
  • a first fiducial marking is placed within a vicinity of the well region.
  • four wells form a metering cell.
  • the metering cell has a length and a width each about 2 ⁇ m.
  • the first fiducial marking is placed substantially at the center of the metering cell
  • a second fiducial marking may be placed within a vicinity of the channel region within the deformable layer.
  • a third fiducial marking may be placed within a vicinity of the control channel in the control layer.
  • two of the fiducial markings may be within a vicinity of the channel region and the third fiducial marking may be within a vicinity of the control channel in the control layer.
  • two of the fiducial markings may be within a vicinity of the well region and the third fiducial marking may be within a vicinity of the control channel in the control layer.
  • the fiducial markings are placed within a vicinity of the region being examined, such as well or channel regions. The fiducial marking placed within the control layer or another layer serves as an alignment point to correct for depth of field or other optical characteristics.
  • a fiducial marking comprises a recessed region in the deformable layer.
  • the recessed region becomes a volume or open region surrounded by portions of the deformable layer or other layers.
  • the volume or open region is preferably filled with a fluid such as a gas including air or other non-reactive fluid.
  • the fluid also has a substantially different refractive index to light relative to the surrounding deformable layer.
  • the open region is preferably filled with air or an air-type mixture and has a low refractive index.
  • the fiducial marking in the control layer has similar characteristics according to a specific embodiment.
  • the fiducial marking has sharp edges that highlight the marking from its surroundings. For example, the edges are preferably 90 degree corners or the like.
  • the fluid channel and well layer are made using a suitable material such as silicone, silicon rubber, rubber, plastic, PDMS, or other polymeric material in certain embodiments.
  • the control layer can be made also using a suitable material such as silicone, silicon rubber, rubber, plastic, PDMS, or other polymeric material in some embodiments.
  • the fluid channel and well layer and the control layer are made of a material whose thermal coefficient is at least 10⁻⁴.
  • the thermal coefficient ranges from 10⁻⁴ to 10⁻³.
  • the thermal coefficient equals about 3×10⁻³.
  • the fluid channel and well layer and the control layer are made of a material whose Young's modulus is at most 5×10⁶.
  • the Young's modulus ranges from 8×10⁴ to 7.5×10⁵.
  • the microfluidic device includes the channel regions and well regions. These diagrams are merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the channel regions and the well regions are interchangeable.
  • the channels and the wells refer to recessed regions in the microfluidic device.
  • the microfluidic device uses channel regions to function as well regions.
  • the microfluidic device includes chambers that can be used as fluid channels, control channels, and wells.
  • a system 1350 includes a chip 1353, which has associated carrier 1351.
  • the chip can be any one of the embodiments referred to as a microfluidic system herein as well as others.
  • the chip generally includes a substrate, deformable layer, and control layer, among other features.
  • the chip also has well regions coupled to channel regions in the deformable layer.
  • the control layer is coupled to the deformable layer.
  • the carrier includes various features such as inlets/outlets 1355 that couple to elements in the chip.
  • the carrier also includes accumulation reservoirs 1357, which couple to the inlets/outlets.
  • the carrier has an identification region 1358 that includes barcode or other identification element. Other identification features, which can be identified visually, may also be used. Further embodiments may also include other identification devices such as radio frequency identification, pattern recognition, and the like.
  • the bar code is an encoded set of lines and spaces of different widths that can be scanned and interpreted into numbers to identify certain features of the microfluidic system.
  • the barcode includes intrinsic and/or extrinsic information associated with the chip.
  • the intrinsic information may be pattern recognition information and/or alignment information associated with the fiducial markings. That is, once identification and alignment of the system has occurred using at least the fiducial markings, such alignment information can be stored in memory of a computing or processing system according to an embodiment of the present invention.
  • the alignment information can be used to more efficiently process the specific chip, including bar code, for certain applications.
  • the alignment information associated with the fiducial markings can be stored in memory that is later retrievable using processing systems according to embodiments of the present invention.
  • Figure 14 is a simplified imaging system for imaging objects within a microfluidic device according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • an imaging system 4010 includes a stage 4020.
  • the stage 4020 is movable in x, y, and z dimensions, as shown by arrows 4190.
  • the movement of the stage 4020 is caused by a stage drive 4025 under control of a computer system 4110.
  • the imaging system 4010 includes an imaging device 4060.
  • the imaging device 4060 includes a lens system 4070 with lenses 4075 therein, and a detector 4080.
  • the lens system 4070 is under control of the computer system 4110 to automatically adjust the focus of the lens system 4070 in response to image information gathered by the detector 4080.
  • the image is communicated to the computer system 4110 and stored in a database.
  • the lens system 4070 can focus on a microfluidic device 4030 by adjusting a focal plane 4100 in the z direction.
  • the focal plane is at a chamber centerline of the microfluidic device 4030.
  • the microfluidic device 4030 is situated upon the stage 4020 and can have various structures.
  • the microfluidic device has a structure and is manufactured by a method as described in Figures 1-13.
  • the microfluidic device 4030 has a chamber 4050 wherein an object, such as a protein crystal, may be formed or otherwise located.
  • the chamber 4050 is capable of holding a volume of fluid of less than 1 nanoliter.
  • a plurality of chambers can be combined to form a metering cell.
  • the chamber 4050 has a chamber centerline that is located between a top wall and a bottom wall of the chamber 4050.
  • the chamber 4050 is a well region, a channel region, or both.
  • the imaging system 4010 includes an illumination device 4170 for producing an illumination beam 4180.
  • the illumination beam 4180 illuminates objects within the microfluidic device 4030.
  • the computer system 4110 may be in communication with an input/output device 4160 and a barcode reader 4120.
  • the barcode reader 4120 can read a bar code 4130 on a microfluidic device 4140.
  • the microfluidic device 4140 is used as the microfluidic device 4030.
  • the imaging system 4010 may be integrated into a larger robotic system, such as a rotating arm or railroad track type robotic system, to increase the throughput.
  • the imaging system 4010 can communicate with the robotic system and control the flow of microfluidic devices into and out of the imaging system, acquire information about the microfluidic devices and their contents, and supply image data and results from the imaging system to the robotic system. If the robotic system includes a database, the imaging system can contribute image and results to the database. The robotic system, in-turn, may automatically design further experiments based upon the results provided by the imaging system.
  • the imaging system 4010 operates in the following manner including a plurality of processes. These processes are merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the microfluidic device 4030 is securely placed on the stage 4020. Based on a fixed feature of the microfluidic device 4030, the computer system 4110 instructs the drive 4025 to move the stage 4020 and align the microfluidic device 4030 with a first fiducial marking.
  • the fiducial marking is embedded within the microfluidic device 4030 at a known z dimension distance from the chamber centerline.
  • the first fiducial marking comes into focus by the imaging device 4060 based on dead reckoning from the fixed feature.
  • the actual coordinates of the first fiducial marking are then measured and registered with the imaging system 4010. Additionally, the actual coordinates of two or more additional fiducial markings are measured and registered.
  • the actual locations of the fiducial markings are compared with their design locations in the stored image map respectively.
  • the stored image map is associated with the design space.
  • the stored image map is an ideal image map.
  • the stored image map is associated with a mathematical grid. Based on the comparison, the imaging system 4010 determines whether stretch, distortion, or other deformation exists in the microfluidic device 4030.
  • a matrix transformation, such as an affine transformation, is then determined, as illustrated in the sketch below.
  • the transformation converts the actual shape of a metering cell into a virtual shape with respect to the design space.
  • an image subtraction and other image analysis may be performed.
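  • The following Python sketch is illustrative only and is not the patent's implementation: it estimates a 2-D affine transformation from pairs of measured and design fiducial locations by least squares and then maps a measured point into the design space. The function name fit_affine_2d and all coordinate values are hypothetical.
```python
import numpy as np

def fit_affine_2d(measured, design):
    """Least-squares fit of A (2x2) and t (2,) such that design ~= A @ measured + t,
    from three or more point correspondences."""
    measured = np.asarray(measured, dtype=float)
    design = np.asarray(design, dtype=float)
    n = len(measured)
    M = np.zeros((2 * n, 6))
    b = design.reshape(-1)
    for i, (x, y) in enumerate(measured):
        M[2 * i] = [x, y, 0.0, 0.0, 1.0, 0.0]      # row for the design x coordinate
        M[2 * i + 1] = [0.0, 0.0, x, y, 0.0, 1.0]  # row for the design y coordinate
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = np.array([p[4], p[5]])
    return A, t

# Hypothetical fiducial locations in measurement space and design space (same units).
measured = [(102.3, 48.9), (902.7, 51.2), (105.1, 748.4)]
design = [(100.0, 50.0), (900.0, 50.0), (100.0, 750.0)]
A, t = fit_affine_2d(measured, design)

# Map a measured point (e.g. a metering-cell corner) into the design space.
print(A @ np.array([500.0, 400.0]) + t)
```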
  • Figures 15A and 15B depict a simplified microfluidic device according to an embodiment of the present invention. These diagrams are merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • Figures 15A and 15B depict a top view and a cross-sectional view of a microfluidic device respectively.
  • a microfluidic device 4230 includes at least a flexible substrate with a chamber 4250 and a fiducial marking 4254.
  • the fiducial markings 4254 are used for xyz alignment and focus of an imaging system.
  • the imaging system focuses upon the fiducial markings 4254 within the microfluidic device 4230 and conducts mapping between the measurement space and the design space.
  • the imaging system then adjusts a focal plane with respect to the z dimension of the microfluidic device 4230 and places the focal plane in plane with a selected point within the chamber 4250, preferably at chamber focus position 4256.
  • the chamber focus position 4256 is a Δz distance 4252 away from a focus plane 4258 of the fiducial markings 4254.
  • the fiducial markings 4254 are optimally focused.
  • the microfluidic device 4230 may be used as the microfluidic device 4030.
  • the microfluidic device may be made by processes described in Figures 1-13A.
  • Figures 16A and 16B are simplified actual image in measurement space and simplified virtual image in design space respectively according to an embodiment of the present invention. These diagrams are merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, the design space is ideal, and the measurement space is distorted.
  • the difference between the design space and the measurement space can be calculated through fiducial mapping. Consequently, a matrix transformation is developed to convert the actual image into a virtual image in the design space. Transforming various actual images into the same design space facilitates the image subtraction and masking in order to maximize the viewable area of a metering cell chamber. Moreover, if a defect or debris is present within the chamber at time zero in a series of time based images, such defect or debris can be masked out of subsequent images to avoid false positive when applying automated crystal recognition analysis. Additionally, the walls of a chamber may be subtracted from subsequent images to reduce the likelihood of false reading in the crystal recognition analysis .
  • FIGs 17A, 17B, and 17C show a simplified method for image subtraction and masking according to an embodiment of the present invention. These diagrams are merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • Figure 17A depicts a metering cell with debris, shown as the letter "D" distributed about the metering cell chambers.
  • the metering cell is transformed into the design space. For example, the metering cell is rotated to align with the design coordinate system and stretch compensated to make the metering cell dimensions match those of the design metering cell dimensions.
  • the foreign objects not present in the design metering cell are masked out such that the regions including and immediately surrounding the foreign objects are masked. The masking can reduce the likelihood of falsely triggering the crystal detection analysis into deeming the foreign objects as crystals that were formed.
  • Figure 17B depicts a masked image where the foreign objects have been masked.
  • the chamber walls shown in Figure 17A can be removed by image subtraction.
  • Figure 17C depicts an image without chamber walls.
  • further masking may be performed if wall implosion is detected.
  • the wall implosion may occur when the microfluidic device is dehydrating and the chamber contents are permeating outside of the chamber, causing a negative pressure therein and thus wall collapse or implosion.
  • Such further masking for implosion may employ a series of known shapes that occur when chamber implosion occurs and uses such known shapes to create additional masks to occlude from the image the now intruding imploded walls.
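  • As a hedged illustration of the subtraction-and-masking idea described above (not the patent's algorithm), the sketch below thresholds the difference between a time-zero image and a later image, both assumed to be already transformed into the design space, and occludes pixels flagged as debris or chamber walls at time zero. All names and numeric thresholds are hypothetical.
```python
import numpy as np

def change_map(image_t0, image_t, mask, threshold=30):
    """Boolean map of significant changes between a time-zero image and a later
    image, ignoring pixels masked out at time zero (debris, chamber walls)."""
    diff = np.abs(image_t.astype(np.int16) - image_t0.astype(np.int16))
    changed = diff > threshold        # image subtraction followed by thresholding
    changed[mask] = False             # occlude debris and wall regions
    return changed

# Hypothetical data: a 100x100 chamber image with debris present at time zero
# and a new object (e.g. a crystal) appearing later.
rng = np.random.default_rng(0)
t0 = rng.integers(100, 120, size=(100, 100)).astype(np.uint8)
t1 = t0.copy()
t1[40:50, 40:50] += 60                # the new object
debris_mask = np.zeros((100, 100), dtype=bool)
debris_mask[10:15, 10:15] = True      # debris flagged at time zero

print(int(change_map(t0, t1, debris_mask).sum()), "changed pixels")
```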
  • the method 4400 includes process 4410 for mapping between measurement space and design space, process 4420 for alignment and focusing, and process 4430 for capturing image.
  • the method 4400 may be performed by the imaging system 4010 on the microfluidic device 4030.
  • the imaging system 4010 performs the processes 4410, 4420, and 4430 according to the instructions of the computer system 4110 or another computer system.
  • processes may be expanded and/or combined. Other processes may be inserted into the sequence noted above. For example, a process of placing a microfluidic device on the stage of an imaging system is performed prior to the process 4410. Depending upon the embodiment, the specific sequence of processes may be interchanged or certain processes replaced. For example, the process 4420 may be skipped. Further details of these processes are found throughout the present specification and more particularly below.
  • FIG. 19 is a simplified process 4410 for mapping between the measurement space and the design space according to an embodiment of the present invention.
  • the process 4410 includes process 4440 for locating fiducial marking, process 4442 for measuring actual location of fiducial marking, process 4444 for comparing actual location and design location of fiducial marking, process 4446 for determining need for additional fiducial marking, process 4448 for determining transformation between measurement space and design space, and process 4450 for coarse alignment.
  • the processes 4440, 4442, and 4444 may be performed for more than one fiducial marking before the process 4446 is performed. For example, several fiducial markings are located and measured. In another embodiment, the process 4444 may be performed after the process 4446 has determined that no additional marking needs to be located. Further details of these processes are found throughout the present specification and more particularly below. At the process 4440, a fiducial marking is located on a microfluidic device.
  • each of fiducial markings 4520, 4522, and 4524 includes three plus signs or crosses located at three corners of a square and a company logo located at the fourth corner of the square.
  • the fiducial marking 4520, 4522, or 4524 is the fiducial marking located at the process 4440.
  • the fiducial marking 4520, 4522, or 4524 is a global fiducial.
  • the fiducial marking 4520, 4522, or 4524 serves as both a global fiducial and a local fiducial. In yet another example, the fiducial marking 4520, 4522, or 4524 is located in the same plane as the well regions of the microfluidic device.
  • the located fiducial marking has a configuration different from the fiducial marking 4520, 4522, or 4524.
  • the located fiducial marking is readily recognizable by the image processing algorithm. Operation of the image processing algorithm is improved when the fiducial marking is readily visible, with minimal optical interference from the edge of the microfluidic device or other channels.
  • Locating the fiducial marking at the process 4440 can be performed manually, automatically, or both. For example, the fiducial marking is moved and identified in the field of view of the imaging system by visual inspection. In another example, the imaging system automatically places and identifies the fiducial marking in the field of view.
  • Figure 21 is a simplified process 4440 for locating fiducial marking.
  • the process 4440 includes process 4810 for acquiring image, process 4820 for segmenting image, process 4830 for performing blob analysis, process 4840 for determining whether fiducial marking is located, process 4850 for adjusting position, and process 4860 for moving fiducial marking.
  • an image of the fiducial marking is acquired.
  • the image is captured by a digital camera such as a Leica DC500.
  • the image has a low resolution.
  • the image is 640 x 480 pixels in size, and the color depth resolution is 16 bits.
  • the pixel and color depth resolutions are varied to optimize system performance.
  • the image may be adjusted to compensate for variations in lamp intensity and color. This compensation may take the form of image normalization. Additionally, the red, blue, and green components of the image can be adjusted to white balance the image. The white-balancing of the image may be accomplished by median correction or other known techniques.
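  • A minimal sketch of one possible median-based white balance, offered only as an illustration of the compensation step described above; the per-channel gain scheme, the function name, and the example image are assumptions, not the patent's procedure.
```python
import numpy as np

def white_balance_median(image):
    """White-balance an RGB image by scaling each channel so that its median
    matches the mean of the three channel medians (a simple median correction)."""
    img = image.astype(np.float64)
    channel_medians = np.median(img.reshape(-1, 3), axis=0)
    target = channel_medians.mean()
    scale = target / np.maximum(channel_medians, 1e-6)   # avoid division by zero
    return np.clip(img * scale, 0, 255).astype(np.uint8)

# Hypothetical 640x480 frame with a reddish cast from a warm lamp.
rng = np.random.default_rng(1)
frame = rng.integers(0, 200, size=(480, 640, 3)).astype(np.uint8)
frame[..., 0] = np.clip(frame[..., 0] * 1.2, 0, 255)      # exaggerate the red channel
balanced = white_balance_median(frame)
print([int(m) for m in np.median(balanced.reshape(-1, 3), axis=0)])
```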
  • the image is segmented. Segmentation of the image can separate desired images from the background signal and produce "blobs" useful in further analysis steps.
  • the blob analysis is performed. The blobs in the image are compared against a training set contained in a database.
  • the training set contains images of a fiducial marking obtained from a large number of microfluidic devices and imaging conditions.
  • the fiducial marking is the company logo. In another example, the fiducial marking is one other than the company logo.
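  • The segmentation and blob-analysis steps might look roughly like the following sketch, in which a grayscale image is thresholded, connected components are labeled, and simple shape descriptors are compared against a reference descriptor standing in for the training set. The descriptor choice (area and aspect ratio), the threshold, and the matching tolerance are illustrative assumptions.
```python
import numpy as np
from scipy import ndimage

def find_fiducial_blob(image, reference, threshold=128, max_distance=0.2):
    """Segment a grayscale image, label connected 'blobs', and compare simple
    shape descriptors (area, aspect ratio) against a reference descriptor that
    stands in for the training set. Returns the best-matching blob centroid."""
    binary = image < threshold                      # dark features on a light field
    labels, count = ndimage.label(binary)
    best = None
    for blob_id in range(1, count + 1):
        ys, xs = np.nonzero(labels == blob_id)
        area = float(len(xs))
        width = float(xs.max() - xs.min() + 1)
        height = float(ys.max() - ys.min() + 1)
        descriptor = np.array([area, width / height])
        distance = np.linalg.norm((descriptor - reference) / reference)
        if distance < max_distance and (best is None or distance < best[0]):
            best = (distance, xs.mean(), ys.mean())
    return None if best is None else best[1:]

# Hypothetical reference descriptor: area ~400 px, aspect ratio ~1 (a square mark).
reference = np.array([400.0, 1.0])
img = np.full((200, 200), 200, dtype=np.uint8)
img[90:110, 90:110] = 50                            # one 20x20 dark blob
print(find_fiducial_blob(img, reference))
```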
  • At the process 4840, whether the fiducial marking is located is determined. If the fiducial marking is located, the process 4442 is performed. In one embodiment, if the best match of the blobs to the standards is found to be within a predetermined specification, the fiducial marking is considered to be located. For example, the predetermined specification includes a proximity ranking of less than 4200. If the fiducial marking is not detected, the process 4850 is performed. At the process 4850, the position of the stage is adjusted. After the adjustment, the processes 4810, 4820, 4830 and 4840 are performed. In one embodiment, at the process 4850, the stage is moved in an x direction and/or a y direction.
  • the stage is moved in a z direction at the process 4850.
  • the stage is moved by a selected amount in a first z-direction by stepping the z-motor of the stage in a first selected direction.
  • the processes 4810, 4820, 4830 and 4840 are performed.
  • the process 4850 is repeated until the fiducial marking is determined to be located at the process 4840 or the stage reaches the end of its range of motion in the first z direction. If the stage reaches the end of its range of motion, the stage is returned to the initial position and the stage is stepped by Δz in a second selected z-direction.
  • the second z-direction is opposite to the first z-direction.
  • the step size Δz can be uniform in both directions, or vary as a function of direction or distance from the starting position. At each stepped z-height in the second direction, the processes 4810, 4820, 4830, and 4840 are performed.
  • the process 4850 is repeated until the fiducial marking is located or the stage reaches the end of its range of motion in the second z direction. If the fiducial marking cannot be located within the range of motion, an error message is generated.
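  • A minimal sketch of the bidirectional z-axis search loop just described, assuming hypothetical callables for moving the stage and for the acquire/segment/blob-analysis check; the step size and range limits are placeholders.
```python
def scan_for_fiducial(acquire_and_check, move_z, z_min, z_max, step):
    """Step the stage along z, first toward z_max and then toward z_min, until
    acquire_and_check() (standing in for acquire/segment/blob analysis) reports
    that the fiducial is located. Raises if the whole range is exhausted."""
    z = 0.0
    move_z(z)
    if acquire_and_check():
        return z
    while z + step <= z_max:          # first selected z-direction
        z += step
        move_z(z)
        if acquire_and_check():
            return z
    z = 0.0                           # return to the initial position
    move_z(z)
    while z - step >= z_min:          # second, opposite z-direction
        z -= step
        move_z(z)
        if acquire_and_check():
            return z
    raise RuntimeError("fiducial not located within the stage's range of motion")

# Hypothetical stage and detector: the fiducial only comes into focus near z = -40.
current = {"z": 0.0}
def move(z):
    current["z"] = z
def check():
    return abs(current["z"] + 40.0) < 5.0
print(scan_for_fiducial(check, move, z_min=-100.0, z_max=100.0, step=10.0))
```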
  • the stage is moved in an x direction, a y direction, and/or a z direction.
  • the stage is translated to move the fiducial marking to substantially the center of the field of view of the imaging system.
  • As shown in Figure 19, at the process 4442, the actual location of the located fiducial marking is measured. As shown in Figure 20, the measured location of the fiducial marking 4520 is represented by a measured vector with respect to the origin O 4510. The measured vector r_nM representing the actual location of a fiducial marking can also be written in terms of its x, y, and z scalar components as r_nM = x_nM x̂ + y_nM ŷ + z_nM ẑ, where
  • n is a positive integer.
  • the actual location r_nM is automatically detected by an image processing routine.
  • the design location of the fiducial marking 4520, referenced to an origin O, can be represented by a design vector.
  • the design vector r_nD representing the design location of a fiducial marking can also be written as r_nD = x_nD x̂ + y_nD ŷ + z_nD ẑ, where
  • n is a positive integer.
  • the difference between the design location r_nD and the measured location r_nM can be calculated as the difference vector Δr_n = r_nM − r_nD.
  • the processes 4440, 4442, and 4444 are only examples.
  • the imaging system uses a predetermined magnification objective.
  • a 10X magnification objective is used for the lenses 4075 of the imaging system 4010.
  • the imaging system first uses a lower power magnification objective, such as a 2.5X magnification objective, at the processes 4440, 4442, and 4446. Subsequently, for the same fiducial marking, the coarse alignment of the microfluidic device is performed. For example, the coarse alignment uses the difference vector Δr_n. The vector Δr_n represents the translation of the located fiducial marking in the x, y, and z axes from the design location. Using the x and y scalar values from Δr_n, the stage position of the imaging system can be adjusted in the x-y plane to position the located fiducial marking at a pre-determined location in the x-y plane.
  • a lower power magnification objective such as a 2.5X magnification objective
  • using the z-axis scalar value from Δr_n, the position of the stage can be adjusted in the z dimension to position the fiducial marking at a selected location in the z plane.
  • the z-axis focus adjustment may be performed before, after, and/or at the same time as the adjustment in the x-y plane.
  • the imaging system switches to a higher power magnification objective, for example, a 10X magnification objective.
  • the measurements and adjustments made with a lower power objective place the fiducial marking within the field of view of the imaging objective when the objective is switched to the higher power magnification objective.
  • the imaging system can more accurately determine the measured and difference vectors.
  • whether an additional fiducial marking should be located and measured is determined. If an additional fiducial marking does not need to be located and measured, the process 4448 is performed. If an additional fiducial marking should be located and measured, the process 4440 is performed.
  • the processes 4440, 4442, and 4444 are performed for each of the three fiducial markings 4520, 4522, and 4524 as shown in Figure 20.
  • For the fiducial marking 4520, the measured location, the design location, and the corresponding difference vector are determined.
  • For the fiducial marking 4522, the measured location, the design location, and the corresponding difference vector are determined.
  • For the fiducial marking 4524, the measured location, the design location, and the corresponding difference vector are determined.
  • more than three global fiducial markings are located and measured.
  • the transformation between measurement space and design space is determined.
  • a matrix transformation, such as an Affine transformation, is determined based on the difference vectors Δr_n.
  • non-uniform absorption of fluids, non-uniform hydration and dehydration, or other factors can result in flexing, stretching, shrinking, bowing, swelling, contracting and other distortions in the microfluidic device.
  • fabrication processes for the device, handling during packaging and testing, and other protocols can introduce deformations and distortions in the device. These deformations may be dimensionally uniform or non-uniform, including both linear and non-linear distortions.
  • the effects of these distortions may impact the magnitude and direction of the measured vectors. Accordingly, the deviation of these measured vectors from their corresponding design vectors represents the linear and non-linear distortions of the microfluidic device media.
  • a transformation can be created between the design space and the measurement space. This transformation is correlated with the flexing, stretching, bowing, and other distortions and deformations present in the microfluidic device.
  • the transformation may have linear components and/or non-linear components.
  • a transformation is determined based on three fiducial markings, such as the fiducial markings 4520, 4522, and 4524.
  • Such transformation can provide a planar mapping of the microfluidic device.
  • the plane defined by the three fiducial markings can be used to characterize the translation of the microfluidic device in the three dimensions of x, y, and z as well as stretching of the microfluidic device material in the plane of the microfluidic device.
  • the roll, pitch, and yaw of this plane can also be characterized by the plane defined by the three fiducial markings.
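  • As an illustration only, the sketch below fits the plane through three measured fiducial locations and derives an approximate roll and pitch of that plane relative to an untilted device; the small-angle roll/pitch formulas and the sample coordinates are assumptions rather than the patent's method.
```python
import numpy as np

def plane_from_fiducials(p1, p2, p3):
    """Plane through three measured fiducial locations: returns its unit normal
    plus approximate roll and pitch angles (radians) relative to an untilted
    device plane whose normal points along +z."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)
    normal /= np.linalg.norm(normal)
    if normal[2] < 0:                              # orient the normal toward +z
        normal = -normal
    roll = np.arctan2(normal[1], normal[2])        # tilt about the x axis
    pitch = np.arctan2(-normal[0], normal[2])      # tilt about the y axis
    return normal, roll, pitch

# Hypothetical measured fiducials (x, y, z): a slight tilt across the substrate.
normal, roll, pitch = plane_from_fiducials((0, 0, 0.0), (1000, 0, 5.0), (0, 1000, 2.0))
print(np.round(normal, 4), round(np.degrees(roll), 3), round(np.degrees(pitch), 3))
```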
  • the coarse alignment is performed with the transformation between the design space and the measurement space.
  • the actual position of a metering cell of the microfluidic device is determined, and the metering cell is positioned in preparation for imaging.
  • the actual location of a metering cell can be shifted from the design location due to distortions and deformations of the microfluidic device.
  • the microfluidic device can be stretched in the plane of the microfluidic device, further shifting the actual position of the metering cell.
  • the metering cell is shifted in the x dimension and/or the y dimension.
  • the metering cell is shifted in the z dimension.
  • FIG 22 is a simplified metering cell shifted from design position according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, modifications, and alternatives.
  • a metering cell 4710 in the design space is schematically illustrated with solid lines and the same metering cell 4730 in the measurement space is schematically illustrated in dashed lines.
  • the design vector points to a design location 4715 of a fiducial marking of the metering cell 4710, and the measured vector points to a measured location 4735 of the same fiducial marking of the same metering cell 4730.
  • the tip of the measured vector is offset from the design vector by an error vector.
  • This error vector can have components in all three dimensions.
  • the approximate actual location of a metering cell can be calculated by taking into account the error vector.
  • the stage of the imaging system can be moved in the x dimension, the y dimension, and/or the z dimension to position the metering cell in preparation for imaging.
  • Figure 19 is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • an algorithm can be used to register the microfluidic device with respect to the coordinates of the imaging system coupled to the camera and the stage. Based on determinations and/or evaluations of the stack-up tolerances from the integrated microfluidic device and carrier and the microscope stage, tolerance metrics can be set. In one embodiment, the tolerances are set to ensure that at least one global fiducial generally appears within the field of view available when the lenses 4075 comprise a 2.5X objective. This tolerance definition allows automation of the fiducial finding process and streamlines system operation.
  • an automated system can locate a fiducial marking outside the current field of view of the imaging system through a search routine. Additionally, the movement of the fiducial mark can be performed, for example, by moving the stage with respect to the imaging device, moving the imaging device with respect to the stage, or both.
  • the stage carries the microfluidic device to which the fiducial mark belongs.
  • the process 4420 includes process 4802 for acquiring images of a fiducial marking and process 4804 for determining alignment and focus.
  • some of the processes may be expanded and/or combined.
  • Other processes may be inserted into the sequence noted above.
  • a process substantially similar to the process 4440 as described in Figure 21 is performed on a metering cell and its associated fiducial marking, which are aligned, focused and imaged at the processes 4802 and 4804.
  • the specific sequence of processes may be interchanged or certain processes replaced. Further details of these processes are found throughout the present specification and more particularly below.
  • At the process 4802, images of a fiducial marking are acquired.
  • the fiducial marking is associated with the metering cell, which has been aligned using the mapping between the design space and the measurement space at the process 4450.
  • Figure 24 is a simplified process 4802 for acquiring images of fiducial marking according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the process 4802 includes process 4910 for moving stage in first direction, process 4920 for moving stage in second direction, process 4930 for acquiring image, and process 4940 for determining need for additional movement.
  • the fiducial marking used in the process 4802 may be a company logo having a height the same as that of the wells.
  • the fiducial marking has a height different from that of the wells. The known offset between the plane of the fiducial marking and that of the wells would enable accurate z-axis adjustments to be made. Further details of these processes are found throughout the present specification and more particularly below.
  • the stage of the imaging system is moved in a first z direction.
  • the metering cell and its associated fiducial marking can be aligned in the x dimension, the y dimension, and/or the z dimension based on the transformation between the design space and the measurement space.
  • the z position of the stage is referred to as Zf.
  • the stage is moved from Zf by a distance in a first z-direction equal to dz.
  • the stage is moved in a second z-direction by a distance equal to dz.
  • this second z direction is opposite to the first z direction.
  • the step size dz can be uniform or vary as a function of distance from Zf.
  • an image of the fiducial marking is acquired.
  • the image is captured by a digital camera such as a Leica DC500.
  • the image has a low resolution.
  • the image is 640 x 480 pixels in size, and the color depth resolution is 16 bits.
  • the pixel and color depth resolutions are varied to optimize system performance.
  • the image may be adjusted to compensate for variations in lamp intensity and color. This compensation may take the form of image normalization.
  • the red, blue, and green components of the image can be adjusted to white balance the image. The white- balancing of the image may be accomplished by median correction or other known techniques.
  • FIG 23 is a simplified process 4804 for aligning and focusing image system according to an embodiment of the present invention.
  • the process 4804 includes process 6810 for selecting image, process 6820 for segmenting image, process 6830 for performing blob analysis, process 6840 for determining whether fiducial marking is located, process 6850 for determining need for additional image, process 6860 for determining alignment, process 6870 for determining focus scores, and process 6880 for determining focus position.
  • an image is selected from the images taken in the process 4802 for further analysis.
  • the selected image is segmented. Segmentation of the image can separate the desired image from the background signal and produce "blobs" useful in further analysis steps.
  • the blob analysis is performed. The blobs in the image are compared against a training set contained in a database.
  • the training set contains images of a fiducial marking obtained from a large number of microfluidic devices and imaging conditions.
  • the fiducial marking is the company logo.
  • the fiducial marking is one other than the company logo.
  • At the process 6840, whether the fiducial marking is located is determined. If the fiducial marking is located, a region of interest (ROI) is created around the fiducial marking.
  • Figure 26 is a simplified image acquired and analyzed according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the fiducial marking may be a company logo 4770 surrounded by a region of interest 4760. In one embodiment, if the best match of the blobs to the standards is found to be within a predetermined specification, the fiducial marking is considered to be located. For example, the predetermined specification includes a proximity ranking of less than 4200.
  • At the process 6850, whether an additional image should be analyzed is determined. For example, if any of the images taken at the process 4802 has not been selected at the process 6810, the process 6810 is performed to select the image not yet selected. If all of the images taken at the process 4802 have been selected, the process 6860 is performed. At the process 6860, the alignment in the x and y dimensions is determined. In one embodiment, the alignment uses the actual location of an ROI and the design location of the ROI. For example, the alignment in the x and y dimensions is determined by the difference between the actual location and the design location. In another embodiment, the fiducial marking has a known spatial relationship with chambers within the metering cell in the x and y dimensions.
  • the alignment in the x and y dimensions of the metering cell is determined based on the alignment in the x and y dimensions of the fiducial marking.
  • the metering cell has a length and a width each about 2 ⁇ m.
  • the fiducial marking is placed substantially at the center of the metering cell.
  • the fiducial marking is in the vicinity of or within the metering cell and their actual spatial relationship in the x and y dimensions does not change significantly from the design spatial relationship.
  • a focus score is determined and stored.
  • the focus score is calculated based on the standard deviation.
  • the focus score is calculated based on the "edginess" of the image.
  • the "edginess" of the image is assessed by a sobel operator.
  • the "edginess" of the image is determined by an edge-sensitive computer program similar to a high pass filter.
  • the techniques based on the "edginess" of the image usually take into account that when the image is in sharp focus, high frequency details are visible, and when the image is out of focus, the high frequency details are blurred or smudged.
  • the focus score is calculated based on histogram. The histogram techniques use specific characteristics of the fiducial marking to improve focusing.
  • the images for the area of interest are acquired by the imaging system. For each of at least some of the acquired images, a first sobel square sum is determined. The sobel operator is applied to each data point on the acquired image. Each resultant value is squared, and all of the squared values are added together. Additionally, the acquired image is blurred. For example, the blurring may be accomplished by applying Gaussian smoothing to the acquired image. In one embodiment, the Gaussian smoothing serves as a low pass filter attenuating high frequency components of the acquired image. In another embodiment, the Gaussian smoothing can be described as a convolution of the image with the two-dimensional Gaussian kernel G(x, y) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²)), where σ is the standard deviation of the kernel.
  • a second sobel square sum is determined by applying the sobel operator to the blurred image, squaring each resultant value, and summing all the squared values. Afterwards, clipping is applied to the second sobel square sum. If the second sobel square sum is smaller than a predetermined threshold, the second sobel square sum is set to the predetermined threshold. Dividing the clipped second sobel square sum by the first sobel square sum, the resultant ratio is used as the focus score. The focus score for each of at least some of the acquired images is then stored. At the process 6880, the focus position for the metering cell is determined. As discussed above, at the process 6870, the focus scores are obtained for various z positions.
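  • A hedged sketch of a blur-and-ratio focus score, using scipy's sobel and Gaussian filters. It follows the normalization described later in this passage (the sobel square sum of the acquired image divided by the clipped sobel square sum of its blurred copy), so that an image unaffected by blurring scores about 1.0; the blur sigma, clip threshold, and synthetic z-stack are illustrative assumptions rather than the patent's parameters.
```python
import numpy as np
from scipy import ndimage

def sobel_square_sum(image):
    """Apply the sobel operator at every pixel, square each value, and sum."""
    gx = ndimage.sobel(image, axis=1)
    gy = ndimage.sobel(image, axis=0)
    return float(np.sum(gx ** 2 + gy ** 2))

def focus_score(image, sigma=2.0, clip_threshold=1e3):
    """Blur-and-ratio focus score: the sobel square sum of the acquired image,
    normalized by the clipped sobel square sum of a Gaussian-blurred copy. An
    image unaffected by blurring scores about 1.0; a sharply focused image,
    which is degraded most by blurring, scores higher."""
    original = image.astype(np.float64)
    sharp_sum = sobel_square_sum(original)
    blurred = ndimage.gaussian_filter(original, sigma=sigma)       # low-pass filter
    blurred_sum = max(sobel_square_sum(blurred), clip_threshold)   # clipping
    return sharp_sum / blurred_sum

# Hypothetical z-stack of five frames; the frame at index 2 has the finest detail.
rng = np.random.default_rng(2)
stack = [ndimage.gaussian_filter(rng.random((64, 64)) * 255.0, s) for s in (4, 2, 0.5, 2, 4)]
scores = [focus_score(frame) for frame in stack]
print(int(np.argmax(scores)))      # expected: 2 (the sharpest frame)
```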
  • the z position corresponding to a peak focus score is used as the focus position.
  • the z positions corresponding to two peak focus scores are determined and averaged. The average z position is used as the focus position.
  • the focus position is determined based on the characteristic of the entire curve representing the focus score as a function of z position.
  • the fiducial marking has a known spatial relationship with chambers within the metering cell in the z dimension. The focus position in the z dimension of the metering cell is determined based on the focus position in the z dimension of the fiducial marking. For example, the metering cell has a length and a width each about 2 ⁇ m. The fiducial marking is placed substantially at the center of the metering cell. In another example, the fiducial marking is in the vicinity of or within the metering cell and their actual spatial relationship in the z dimension does not change significantly from the design spatial relationship.
  • FIG. 27 shows simplified curves for focus score as a function of z position obtained at the process 6870 according to an embodiment of the present invention.
  • the focus score at each z value is associated with the sobel square sum for the acquired image without blurring.
  • This diagram is merely an example, which should not unduly limit the scope of the claims.
  • focus scores are calculated at z-axis positions separated by approximately 2 μm and extending for 100 μm on either side of Zf.
  • the coarse nature of the z-axis position determined by the process 4802 is evident, as the peaks of the focus score distributions are located approximately 20 μm from Zf.
  • the method by which the stage is scanned, the density of measurement points, and the range over which the measurements extend can be varied, as would be evident to those skilled in the art.
  • focus scores are collected at fewer locations separated by greater distances.
  • focus scores collected at 10 μm spacing located on alternating sides of Zf are used as inputs to the image processing software, with additional focus scores obtained to fill in the curve only if needed.
  • FIG. 27 shows two different focus score runs in which the aperture of the condenser of the imaging system is operated in either a narrow or a wide setting.
  • a curve 5030 corresponds to a narrow setting and represents a bi-modal distribution of focus scores.
  • the twin peaks are each associated with the detection of the top and bottom edges of the fiducial marking, such as a company logo.
  • This bi-modal distribution can be characterized by a full width at half maximum (FWHM) 5035. If the condenser aperture is operated at a wide setting, the bi-modal distribution merges into a single peaked distribution represented by a curve 5020.
  • the amplitude of the single peak is reduced from the amplitude characteristic of the bi-modal distribution and the FWHM is reduced as well.
  • the FWHM of the single peak distribution is represented by line 5025.
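  • For illustration, a full width at half maximum can be estimated from a sampled focus-score curve as sketched below; the interpolation scheme and the synthetic single-peaked curve are assumptions, not values from the figures.
```python
import numpy as np

def fwhm(z_positions, scores):
    """Full width at half maximum of a sampled focus-score curve, using linear
    interpolation of the half-maximum crossings on each side of the peak."""
    z = np.asarray(z_positions, dtype=float)
    s = np.asarray(scores, dtype=float)
    half = (s.max() + s.min()) / 2.0
    above = np.nonzero(s >= half)[0]
    left, right = above[0], above[-1]
    z_left = z[0] if left == 0 else np.interp(half, [s[left - 1], s[left]], [z[left - 1], z[left]])
    z_right = z[-1] if right == len(z) - 1 else np.interp(
        half, [s[right + 1], s[right]], [z[right + 1], z[right]])
    return z_right - z_left

# Hypothetical single-peaked focus-score curve sampled every 2 units of z.
z = np.arange(-50.0, 52.0, 2.0)
scores = np.exp(-(z / 15.0) ** 2)
print(round(fwhm(z, scores), 1))   # ~25.0 for this synthetic curve
```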
  • Figure 28 shows simplified curves for focus score as a function of z position obtained at the process 4804 according to one embodiment of the present invention.
  • the focus score at each z value is associated with the sobel square sum for the acquired image without blurring.
  • This diagram is merely an example, which should not unduly limit the scope of the claims.
  • the focusing scores obtained without image blurring may produce irregular focal peaks under certain conditions. Sometimes the peak is single modal, sometimes the peak is bi-modal, and usually the peak is a combination of the two.
  • FIG. 29 shows simplified curves for focus score as a function of z position obtained at the process 4804 according to another embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the focus score at each z value is associated with a ratio that is taken between the sobel square sum for the acquired image and the clipped sobel square sum for the blurred image.
  • Figures 28 and Figure 29 are produced from the same acquired images.
  • the blurring and ratio technique in effect normalizes the sobel output by the amount of that same output on a blurry version of the original.
  • the peak of the curves in Figure 29 occurs for the image which suffers the largest degradation as a result of the blurring operation.
  • An image which suffers no degradation produces a ratio value of 1.0.
  • This normalization process can remove the dependency of the sobel operation on the intensity of a particular image plane, which can fluctuate due to optical variations.
  • the number and scope of adjustments performed at the process 4420 for alignment and focusing depend on the accuracy of the mapping from the design space to the measurement space at the process 4410.
  • bending or tilting of the microfluidic device may result in additional z-axis focusing actions.
  • These additional focusing steps may result in an increase in the amount of time required to acquire a high-resolution image of the metering cell.
  • Improved mapping between the design space and measurement space would enable the imaging system to move the metering cells to position in which the acquisition of high-resolution images can be performed with increased efficiency.
  • FIG. 30 is a simplified surface map of a three dimensional flexible substrate according to an embodiment of the present invention.
  • the warping or deformation of the microfluidic device is illustrated as an increase in z-axis height at certain x-y positions across the flexible substrate.
  • the inputs for this higher order dimensional mapping could come from location information obtained using more than three fiducial markings.
  • inputs could be provided based on measurements made on the metering cell at the process 4420. Feedback from these measurements can be used to update and refine the mapping as a function of time.
  • a 12 point microfluidic device registration process can be used that fits at least four fiducial markings with a non-planar surface.
  • a three dimensional parabola could be used as the mapping surface.
  • the process of determining the coarse and fine locations of each fiducial marking can contribute information used in calculation of the parabolic fitting parameters.
  • fiducials near the edges, the center, and other locations on the microfluidic device could be utilized, in various orders, in the calculation of the parabolic fitting parameters.
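  • One way such a higher-order mapping could be expressed is a least-squares quadratic surface z(x, y) fitted to the measured fiducial heights, as sketched below with twelve hypothetical registration points; the basis functions and the sample data are assumptions, not the patent's fitting procedure.
```python
import numpy as np

def fit_quadratic_surface(points):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*y^2 + f*x*y to measured
    fiducial locations (x, y, z); returns the six surface coefficients."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    design = np.column_stack([np.ones_like(x), x, y, x ** 2, y ** 2, x * y])
    coeffs, *_ = np.linalg.lstsq(design, z, rcond=None)
    return coeffs

def predict_z(coeffs, x, y):
    """Evaluate the fitted surface at (x, y) to predict the local z offset."""
    a, b, c, d, e, f = coeffs
    return a + b * x + c * y + d * x ** 2 + e * y ** 2 + f * x * y

# Hypothetical 12-point registration: a gently bowed device, highest at the centre.
rng = np.random.default_rng(3)
xy = rng.uniform(0.0, 10_000.0, size=(12, 2))
true_z = 30.0 - 1e-6 * ((xy[:, 0] - 5_000.0) ** 2 + (xy[:, 1] - 5_000.0) ** 2)
coeffs = fit_quadratic_surface(np.column_stack([xy, true_z]))
print(round(float(predict_z(coeffs, 5_000.0, 5_000.0)), 1))   # ~30.0 at the centre
```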
  • the processes 4410 and 4420 could be combined into a single predictive focus-based algorithm that uses higher order fitting and localized corrections to improve system throughput.
  • the method 4400 uses the processes 4410 and 4420 for alignment and focusing in certain embodiments.
  • the alignment and focus of the fiducial marking associated with the metering cell are each within 100- ⁇ m accuracy.
  • the alignment of the fiducial marking is within about 1- ⁇ m accuracy.
  • the focusing of the fiducial marking is within about 1- ⁇ m accuracy.
  • the metering cell is moved to the focus position and an image of the metering cell is captured.
  • the captured image has a high resolution.
  • the image is acquired by the same camera that is used to capture the low-resolution image at the process 4810.
  • the DC500 digital camera can be used to capture a high-resolution image.
  • the high resolution image has about 3900 x 3030 pixels and covers at least one well region including the fluid and species at a color depth of 16 bits.
  • the image includes the containment lines, the wells, and the channels that connect the wells.
  • the metering cell is moved in the x dimension and/or the y dimension in order to improve alignment prior to capturing the image of the metering cell.
  • the captured image is then normalized.
  • the color and intensity of the acquired image is significantly affected by the condition and operating voltage of the illumination source of the imaging system.
  • the illumination source is a bulb.
  • the overall hue of the image changes, with the red component of the light increasing in intensity in comparison with the other colors. This increase in red intensity may result from a decrease in the bulb temperature.
  • the opacity of the microfluidic device, which can depend on hydration levels and vary with time, may result in differences in image brightness.
  • image normalization can be employed.
  • a calibration image is taken with the microfluidic device removed from the imaging system with the stage at a z calibration position.
  • the z calibration position is different from the focus position.
  • the z calibration position may take into account changes to the illumination beam as the beam passes through the microfluidic device.
  • the z calibration position is the same as the focus position.
  • the calibration image is then used to correct for the effects resulting from the condition and operating voltage of the illumination source.
  • the algorithm calculates the ratio of the intensity of the acquired image of the metering cell to the calibration image on a pixel by pixel basis.
  • where the microfluidic device includes regions that contain substantially no information, the ratio of the intensities in these regions is set equal to unity.
  • the intensity ratio is then multiplied by a scaling factor to maximize the dynamic range around unity.
  • the image normalization effectively white balances the image by adjusting the red, blue, and green components of the image. Additionally, the image normalization improves consistency between the attenuated edge pixels and the center pixels. For example, the effects of white balance and consistency improvement are significant for low illumination conditions and particular condenser and/or aperture settings in which the non-linearity is pronounced.
  • the image is median shifted to move the centroid of the image histogram, i.e., counts as a function of intensity, to a known value.
  • the image is also downgraded around that centroid to reduce the data size in the image.
  • the intensity ratio is sampled at random locations on the microfluidic device. Using these sampled intensity ratio values, the image is adjusted to shift the centroid of the image to the known value. In one embodiment, the centroid is shifted to align with an intensity level of 128, and the image is downgraded to 8 bits. This shift may be used to either darken or brighten the image.
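  • A compact sketch of the normalization chain described above (per-pixel ratio to a calibration image, scaling around unity, a median shift of the histogram centroid toward 128, and an 8-bit downgrade); the scaling factor, the criterion for "no information" regions, and the synthetic 16-bit frames are illustrative assumptions.
```python
import numpy as np

def normalize_to_calibration(image, calibration, target_centroid=128, scale=120.0):
    """Normalize an acquired 16-bit image against a calibration image taken with
    the device removed: per-pixel intensity ratio, scaling around unity, a median
    shift of the histogram centroid, and an 8-bit downgrade."""
    img = image.astype(np.float64)
    cal = calibration.astype(np.float64)
    ratio = np.ones_like(img)                      # regions without information stay at unity
    usable = cal > 0
    ratio[usable] = img[usable] / cal[usable]
    spread = (ratio - 1.0) * scale                 # widen the dynamic range around unity
    shifted = spread - np.median(spread) + target_centroid   # median shift of the centroid
    return np.clip(shifted, 0, 255).astype(np.uint8)          # downgrade to 8 bits

# Hypothetical 16-bit frames: a calibration image and a metering-cell image.
rng = np.random.default_rng(4)
calibration = rng.integers(20_000, 21_000, size=(300, 300)).astype(np.uint16)
acquired = (calibration * rng.normal(1.0, 0.05, size=(300, 300))).astype(np.uint16)
out = normalize_to_calibration(acquired, calibration)
print(out.dtype, int(np.median(out)))              # uint8, centroid near 128
```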
  • the normalized, white balanced, and downgraded image is stored in a computer memory available for further processing.
  • the three dimensional locations of the metering cell can provide information useful in determining the parabolic fitting parameters.
  • the metering cells near the center of the microfluidic device, separated from the fiducial markings near the edges of the microfluidic device may be measured earlier in time than metering cells near the fiducial markings.
  • the early measurements of centrally located metering cells may provide for faster convergence of the fitting algorithm as the measured location of these centrally located cells may differ from the planar mapping more than the measured locations of cells closer to the fiducial markings.
  • the method 4400 uses various fiducial markings in various processes.
  • the fiducial markings can be any physical features associated with the microfluidic device.
  • the fiducial markings are on the handle substrate of the microfluidic device.
  • the fiducial markings are on the flexible substrate of the microfluidic device.
  • the fiducial markings may include a channel wall or an edge of the microfluidic device.
  • the fiducial markings are selected from ones described in Figures 1-13A and 15A-15B.
  • the method 4400 aligns and focuses a metering cell and acquires an image of the metering cell.
  • the alignment and focus process may use at least one fiducial marking for the process 4420.
  • the spatial relationship between the fiducial marking and the metering cell does not change significantly.
  • the fiducial marking is in the vicinity of the metering cell.
  • the metering cell is merely an example, which should not unduly limit the scope of the claims.
  • the method 4400 is applied to any physical feature on the microfluidic device.
  • the physical feature is aligned and focused, and an image of the physical feature is taken.
  • the physical feature is a chamber.
  • the chamber may be a well, a fluid channel, a control channel, or the like.
  • a system for processing one or more microfluidic devices includes one or more computer-readable media and a stage for locating a flexible substrate.
  • the flexible substrate comprises at least three fiducial markings, a first additional fiducial marking, and a first chamber capable of holding a fluid therein.
  • a volume of the fluid is less than a nanoliter.
  • the one or more computer-readable media include one or more instructions for providing a flexible substrate, and one or more instructions for determining a transformation between a design space and a measurement space based on at least information associated with the at least three fiducial markings.
  • the one or more computer-readable media include one or more instructions for performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space, one or more instructions for acquiring at least a first image of the first additional fiducial marking associated with the first chamber, one or more instructions for performing a second alignment to the flexible substrate based on at least information associated with the first image, and one or more instructions for acquiring a second image of the first chamber associated with the flexible substrate.
  • the one or more instructions for determining a transformation between a design space and a measurement space include one or more instructions for determining at least three actual locations corresponding to the at least three fiducial markings respectively.
  • the at least three fiducial markings are associated with at least three design locations respectively.
  • the one or more instructions for determining a transformation include one or more instructions for processing information associated with the at least three actual locations and the at least three design locations.
  • the design space is associated with the at least three design locations and the measurement space is associated with the at least three actual locations.
  • the one or more instructions for acquiring at least a first image of the first additional fiducial marking include one or more instructions for acquiring a first plurality of images of the first additional fiducial marking.
  • the first plurality of images includes the first image.
  • the one or more instructions for acquiring at least a first image includes one or more instructions for processing information associated with the first plurality of images.
  • the one or more computer-readable media includes one or more instructions for storing the second image in a memory.
  • the memory is a computer memory.
  • the second image includes 3900 by 3030 pixels.
  • the second image comprises a 16 bit image.
  • the one or more instructions for performing a second alignment to the flexible substrate includes one or more instructions for translating the flexible substrate in at least one dimension to position a chamber in preparation for capturing the second image.
  • the one or more computer-readable media includes one or more instructions for normalizing the second image, one or more instructions for white balancing the second image, and one or more instructions for converting the second image from a first image depth to a second image depth.
  • the first image depth is 16 bits and the second image depth is 8 bits.
  • the first additional fiducial marking is a company logo.
  • the at least three fiducial markings include a company logo
  • the flexible substrate is deformable in three dimensions.
  • the flexible substrate is deformed by actions selected from the group consisting of fabrication, handling, and protocols. The protocols can result in the flexible substrate swelling or contracting.
  • a relationship between the design space and the measurement space is non-planar.
  • the flexible substrate is deformed such that a planar transformation can approximately determine an actual location of the first chamber.
  • the transformation between the design space and the measurement space is non-planar.
  • the non-planar transformation comprises a three dimensional parabolic mapping. The non-planar transformation is updated using information obtained by characterization of a second additional fiducial marking.
  • Some embodiments provide at least one way to form alignment patterns for a deformable active region for a microfluidic system. Certain embodiments rely on conventional materials, which are relatively easy to use. Some embodiments provide alignment and/or focus based on mapping between the design space and the measurement space. The transformation between the design space and the measurement space uses, for example, at least three fiducial markings. Certain embodiments provide accurate focusing by acquiring and analyzing a plurality of images along at least one dimension. Some embodiments of the present invention perform alignment and focusing on a microfluidic device including at least one flexible substrate. The alignment and focusing take into account the deformation of the flexible substrate. Certain embodiments improve throughput in the imaging system.
  • the imaging system uses a computer system to automatically perform alignment and focusing.
  • mapping from the design space to the measurement space increases the accuracy of stage positioning, and thereby, the efficiency of high-resolution image acquisition.
  • one or more of these benefits may exist.
  • a biological substrate comprising: a rigid substrate material, the rigid substrate material having a surface region, the surface region being capable of acting as a handle substrate; a deformable fluid layer coupled to the surface region; one or more well regions formed in a first portion of the deformable fluid layer, the one or more well regions being capable of holding a fluid therein; one or more channel regions formed in a second portion of the deformable fluid layer, the one or more channel regions being coupled to one or more of the well regions; an active region formed in the deformable fluid layer, the active region including the one or more well regions; a non-active region formed in the deformable fluid layer, the non-active region being formed outside of the first portion and the second portion; at least three fiducial markings formed within the non-active region and disposed in a spatial manner associated with at least one of the well regions; and a control layer coupled to the fluid layer, the control layer including one or more control regions.
  • each of the three fiducial markings is spatially disposed around a perimeter of the one well region.
  • the substrate of claim 1 further comprising a preselected fiducial marking, the preselected fiducial marking including at least an edge and a center region, the preselected fiducial marking being characterized by a predetermined shape.
  • the rigid substrate material is selected from glass, plastic, metal, and composite materials. The substrate of claim 1 wherein the rigid substrate material is characterized as transparent.
  • the deformable fluid layer is made of a material selected from silicone, polymer, rubber, plastic, and PDMS.
  • the substrate of claim 1 wherein the three fiducial markings include respective images that are capable of being captured with a charge-coupled camera.
  • a method of fabricating a biological substrate comprising: providing a rigid substrate material, the rigid substrate material having a surface region, the surface region being capable of acting as a handle substrate; coupling a deformable fluid layer to the surface region of the rigid substrate, the deformable fluid layer comprising: one or more well regions formed in a first portion of the deformable fluid layer, the one or more well regions being capable of holding a fluid therein; one or more channel regions formed in a second portion of the deformable fluid layer, the one or more channel regions being coupled to one or more of the well regions; an active region formed in the deformable fluid layer, the active region including the one or more well regions; a non-active region formed in the deformable fluid layer, the non-active region being formed outside of the first portion and the second portion; at least three fiducial markings formed within the non-active region and disposed in a spatial manner associated with at least one of the well regions; and coupling a control layer to the fluid layer, the control layer including one or more control regions.
  • each of the three fiducial markings is spatially disposed around a perimeter of the one well region.
  • the method of claim 12 further comprising a preselected fiducial marking, the preselected fiducial marking including at least an edge and a center region, the preselected fiducial marking being characterized by a predetermined shape.
  • the rigid substrate material is selected from glass, plastic, composite materials, and a metal.
  • the deformable fluid layer is made of a material selected from silicone, rubber, polymer, plastic, and PDMS.
  • a method of manufacturing microfluidic chip structures comprising: providing a mold substrate including a plurality of well patterns, each of the well patterns being provided within a portion of an active region of a fluidic chip; forming a plurality of fiducial marking patterns around a vicinity of each of the well patterns, each of the plurality of fiducial marking patterns being within a portion of a non-active region of a fluidic chip, the plurality of fiducial marking patterns including a set of alignment marks being disposed spatially around each of the well patterns; forming a thickness of deformable material
  • the thickness of deformable material including a plurality of wells formed from the well patterns and a plurality of fiducial marking patterns formed from the fiducial marking patterns to a rigid substrate material.
  • the rigid substrate material is selected from a glass, a silicon, a composite, and a plastic.
  • a microfluidic system comprising: a rigid substrate material, the rigid substrate material having a surface region, the surface region being capable of acting as a handle substrate; a deformable fluid layer coupled to the surface region; one or more well regions formed in a first portion of the deformable fluid layer, the one or more well regions being capable of holding a fluid therein; one or more channel regions formed in a second portion of the deformable fluid layer, the one or more channel regions being coupled to one or more of the well regions; an active region formed in the deformable fluid layer, the active region including the one or more well regions; a non-active region formed in the deformable fluid layer, the non-active region being formed outside of the first portion and the second portion; a first fiducial marking formed within the non-active region and disposed in a spatial manner associated with at least one of the channel regions; a second fiducial marking formed within the non-active region and disposed in a spatial manner associated with at least one of the well regions; a control layer coupled to the fluid layer
  • the first fiducial marking comprises a first channel fiducial marking and a second channel fiducial marking, the first channel fiducial marking being spatially disposed from the second channel fiducial marking.
  • the third fiducial marking comprises a first control fiducial marking and a second control fiducial marking, the first control fiducial marking being spatially disposed from the second control fiducial marking.
  • the substrate of claim 29 wherein the first fiducial marking is characterized by a spatial dimension of greater than 100 µm and less than 250 µm.
  • the substrate of claim 29 wherein the first fiducial marking is characterized by a depth of at least 10 µm within a thickness of the non-active region.
  • the second fiducial marking is characterized by a depth of at least 10 µm within a thickness of the non-active region.
  • the third fiducial marking is characterized by a depth of at least 10 µm within a thickness of the control layer.
  • a method of manufacturing microfluidic chip structures comprising: providing a mold substrate including a plurality of well patterns, each of the well patterns being provided within a portion of an active region of a fluidic chip; forming at least one fiducial marking pattern around a vicinity of one of the well patterns, the fiducial marking pattern is one of a set of alignment marks; forming a thickness of deformable material within the plurality of well patterns and within the fiducial marking pattern to fill a portion of the mold substrate; releasing the deformable material from the mold substrate; and coupling the thickness of deformable material including a plurality of wells formed from the well patterns and a fiducial marking pattern formed from the fiducial marking pattern to a rigid substrate material.
  • a microfluidic system comprising: a rigid substrate material, the rigid substrate material having a surface region, the surface region being capable of acting as a handle substrate; a deformable fluid layer coupled to the surface region; one or more well regions formed in a first portion of the deformable fluid layer, the one or more well regions being capable of holding a fluid therein; one or more channel regions formed in a second portion of the deformable fluid layer, the one or more channel regions being coupled to one or more of the well regions; an active region formed in the deformable fluid layer, the active region including the one or more well regions; a non-active region formed in the deformable fluid layer, the non-active region being formed outside of the first portion and the second portion; a control layer coupled to the fluid layer, the control layer including one or more control regions; and at least three fiducial markings comprising: at least a global alignment fiducial marking within a portion of the deformable layer; a first fiducial marking within the deformable layer or the control layer;
  • a method for processing a microfluidic device comprising: providing a flexible substrate including a first plurality of fiducial markings; determining a first plurality of actual locations corresponding to the first plurality of fiducial markings respectively, the first plurality of fiducial markings associated with a first plurality of design locations respectively; processing information associated with the first plurality of actual locations and the first plurality of design locations; determining a transformation between a design space and a measurement space, the design space associated with the first plurality of design locations, the measurement space associated with the first plurality of actual locations; performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space; acquiring a first plurality of images of the first fiducial marking; processing information associated with the first plurality of images; performing a second alignment to the flexible substrate based on at least information associated with the first plurality of images; acquiring a second image of the flexible substrate.
  • the flexible substrate further comprises a first chamber, the first chamber being capable of holding a fluid therein.
  • the method of claim 43, and further comprising providing the flexible substrate on a stage.
  • processing information associated with the first plurality of images comprises: determining a first plurality of focus scores associated with the first plurality of images; processing information associated with the first plurality of focus scores; determining a focus position based on at least information associated with the first plurality of focus scores.
  • determining a first plurality of focus scores associated with the first plurality of images comprises: for each of the first plurality of images, determining a first value associated with a first characteristic of the each of the first plurality of images; blurring the each of the first plurality of images; determining a second value associated with the first characteristic of the blurred each of the first plurality of images; if the second value is equal to or larger than a predetermined value, determining a focus score equal to a ratio between the first value and the second value; if the second value is smaller than the predetermined value, determining the focus score equal to a ratio between the first value and the predetermined value. (A sketch of this computation appears after this list.)
  • the method of claim 52 wherein the acquiring a first plurality of images of the first fiducial marking comprises: moving the flexible substrate to a first plurality of positions; for each of the first plurality of positions, acquiring one of the first plurality of images.
  • the method of claim 40 and further comprising: acquiring a first image of a first fiducial marking associated with the flexible substrate; performing a third alignment to the flexible substrate based on at least information associated with the first image.
  • the acquiring a first image of the first fiducial marking comprises: acquiring a second image; processing information associated with the second image; determining whether the first fiducial marking is present in the second image; if the first fiducial marking is not present in the second image, translating the flexible substrate in at least one dimension; wherein the second image is the first image if the first fiducial marking is present in the second image; wherein the processing information associated with the second image includes: segmenting the second image; performing blob analysis to the second image.
  • a method for processing a microfluidic device comprising: providing a flexible substrate including at least three fiducial markings, a first additional fiducial marking, and a first chamber capable of holding a fluid therein; determining a transformation between a design space and a measurement space based on at least information associated with the at least three fiducial markings; performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space; acquiring at least a first image of the first additional fiducial marking associated with the first chamber; performing a second alignment to the flexible substrate based on at least information associated with the first image; acquiring a second image of the first chamber associated with the flexible substrate.
  • determining a transformation between a design space and a measurement space comprises: determining at least three actual locations corresponding to the at least three fiducial markings respectively, the at least three fiducial markings being associated with at least three design locations respectively; processing information associated with the at least three actual locations and the at least three design locations.
  • the acquiring at least a first image of the first additional fiducial marking comprises: acquiring a first plurality of images of the first additional fiducial marking, the first plurality of images including the first image; processing information associated with the first plurality of images.
  • the performing a second alignment to the flexible substrate comprises: translating the flexible substrate in at least one dimension to position a chamber in preparation for capturing the second image.
  • a volume of the fluid is less than a nanoliter.
  • non-planar transformation comprises a three dimensional parabolic mapping.
  • a system for processing one or more microfluidic devices including one or more computer-readable media, the system also including a stage for locating a flexible substrate, the flexible substrate comprising at least three fiducial markings, a first additional fiducial marking, and a first chamber capable of holding a fluid therein, the one or more computer-readable media including: one or more instructions for providing a flexible substrate; one or more instructions for determining a transformation between a design space and a measurement space based on at least information associated with the at least three fiducial markings; one or more instructions for performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space; one or more instructions for acquiring at least a first image of the first additional fiducial marking associated with the first chamber; one or more instructions for performing a second alignment to the flexible substrate based on at least information associated with the first image; one or more instructions for acquiring a second image of the first chamber associated with the flexible substrate.
  • the one or more instructions for determining a transformation between a design space and a measurement space comprise: one or more instructions for determining at least three actual locations corresponding to the at least three fiducial markings respectively, the at least three fiducial markings being associated with at least three design locations respectively; one or more instructions for processing information associated with the at least three actual locations and the at least three design locations.
  • the one or more computer-readable media of claim 81 wherein the design space is associated with the at least three design locations and the measurement space is associated with the at least three actual locations.
  • the one or more computer-readable media of claim 80 and further comprising one or more instructions for storing the second image in a memory.
  • the one or more computer-readable media of claim 80 wherein the second image comprises 3900 by 3030 pixels.
  • the second image comprises a 16 bit image.
  • the one or more computer-readable media of claim 80 and further comprising: one or more instructions for normalizing the second image; one or more instructions for white balancing the second image; one or more instructions for converting the second image from a first image depth to a second image depth.
  • a method for processing a microfluidic device comprising: providing a flexible substrate comprising one or more well regions and a plurality of fiducial marks, the well regions being capable of holding a fluid therein, at least three of the fiducial marks being within a vicinity of one of the well regions; locating the flexible substrate on a stage; capturing an image of at least the three fiducial marks within the vicinity of the one well region of the flexible substrate to generate a mapping from a design space to a measurement space; aligning the flexible substrate to an image acquisition location using at least the mapping from the design space and one additional fiducial mark, wherein the at least one additional fiducial mark is associated with the one well region; acquiring a high-resolution image of at least the one well region; and storing the high-resolution image in a memory.
  • aligning the flexible substrate comprises acquiring a low-resolution image, normalizing the low-resolution image, and median correcting the low-resolution image.
  • aligning the flexible substrate further comprises segmenting the image, performing blob analysis, and translating the flexible substrate in at least one dimension.
  • translating the flexible substrate in at least one dimension comprises translating the stage to position a metering cell in preparation for capturing a high-resolution image.
  • the high-resolution image is normalized, white balanced, and converted to a reduced image depth.
  • aligning the flexible substrate in the three dimensions to an image acquisition location using at least one additional fiducial mark comprises creating a higher order mapping from a design space to a measurement space.
  • a method of processing a biological microfluidic device comprising: providing a deformable substrate comprising one or more metering cells, the metering cells being capable of containing a fluid therein; locating the deformable substrate on a stage translatable in x, y, and z directions; translating the stage to image at least four fiducial marks associated with the deformable substrate; determining x, y, and z positions of the at least four fiducial marks; computing a non-planar mapping between a design space and a measurement space based on the x, y, and z positions of the at least four fiducial marks; translating the stage to an image acquisition position calculated using the non-planar mapping; and capturing an image of at least one metering cell. (A sketch of such a mapping appears after this list.)
  • a system for processing one or more microfluidic devices including one or more computer memories, the system also including a stage for locating a flexible substrate, the flexible substrate comprising one or more well regions and a plurality of fiducial marks, the well regions being capable of holding a fluid therein, at least three of the fiducial marks being within a vicinity of one of the well regions, the one or more computer memories comprising one or more computer codes, the one or more computer codes including: a first code directed to capturing an image of at least the three fiducial marks within the vicinity of the one well region of the flexible substrate to generate a mapping from a design space to a measurement space; a second code directed to aligning the flexible substrate to an image acquisition location using at least the mapping from the design space and one additional fiducial mark, wherein the at least one additional fiducial mark is associated with the one well region; a third code directed to acquiring a high-resolution image of at least the one well region; and a fourth code directed to storing the high-resolution image in a memory.
  • a method for producing an image of an object within a chamber of a microfluidic device comprising the steps of: providing said microfluidic device, said microfluidic device having x, y, and z dimensions and further comprising a chamber depth center point located between a top wall and a bottom wall of said chamber along said z dimension, said chamber depth center point being located a known z dimension distance from an optically detectable fiducial marking embedded within said microfluidic device at a z depth; placing said microfluidic device within an imaging system comprising: an optical device capable of detecting said fiducial marking and transmitting said image of said object, said optical device defining an optical path axially aligned with said z dimension of said microfluidic device and having a focal plane perpendicular to said optical path, wherein when said focal plane is moved along said optical path in line with said fiducial marking, said fiducial marking is maximally detected when said focal plane is at said z depth in comparison to when said focal plane is not substantially in-plane with said z depth
  • microfluidic device is made wholly or partly from an elastomeric material.
  • microfluidic device is partly made from glass material.
  • optical device comprises an analog output charge-coupled device type image detector and said image processor comprises an analog to digital converter.
  • optical device comprises a digital detection device.
  • said image processing device comprises a digital computer and a data storage device.
  • a system for producing an image of an object within a chamber of a microfluidic device comprising: said microfluidic device, said microfluidic device having x, y, and z dimensions and further comprising a chamber depth center point located between a top wall and a bottom wall of said chamber along said z dimension, said chamber depth center point being located a known z dimension distance from an optically detectable fiducial marking embedded within said microfluidic device at a z depth; an imaging system for placing said microfluidic device therein comprising: an optical device capable of detecting said fiducial marking and transmitting said image of said object, said optical device defining an optical path axially aligned with said z dimension of said microfluidic device and having a focal plane, wherein when said focal plane is moved along said optical path in line with said fiducial marking, said fiducial marking is maximally detected when said focal plane is substantially in-plane with said z depth as compared to when said field depth center point is not substantially in-plane with said z depth
  • microfluidic device is made wholly or partly from an elastomeric material.
  • microfluidic device is partly made from glass material.
  • optical device comprises an analog output charge-coupled device type image detector and said image processor comprises an analog to digital converter.
  • optical device comprises a digital detection device.
  • said digital computer comprises an output display for displaying said image of said object.
  • said output display comprises a graphical user interface.
  • a method for producing an image of a chamber within a microfluidic device comprising the steps of: imaging said microfluidic device to produce an image using an imaging system having an optical path in the z plane of said microfluidic device; mapping from said image a first set of coordinates of said microfluidic device to determine whether the microfluidic device is skewed or distorted when compared to a coordinate map of an ideal microfluidic device; positioning said microfluidic device so as to position said chamber within said optical path based on a matrix transform calculated coordinate position determined by computing a matrix transformation between said first set of coordinates of said microfluidic device and said coordinate map of said ideal microfluidic device; obtaining a time zero image of said microfluidic device chamber; wherein said time zero image contains images of artifacts present in said microfluidic device; obtaining a second image of said microfluidic device chamber; and, subtracting the first image of said microfluidic device chamber from said second image of said microfluidic chamber to produce an image of
  • a microfluidic system comprising: a substrate comprising a surface region; a deformable layer coupled to the surface of the substrate, the deformable layer being made of at least a thickness of first material; a control layer coupled to the deformable layer to form a sandwich structure including at least the substrate, the deformable layer and the control layer, the control layer being made of at least a thickness of second material; at least one fiducial marking provided within either the control layer or the deformable layer or the substrate, the fiducial marking being characterized by a visual pattern provided in a volume surrounded wholly or partially by at least the substrate, the first material, or the second material; and a fluid disposed within the open volume of the one fiducial marking, the fluid being characterized by a refractive index.
  • the refractive index is associated with air.
  • the edges comprise 90 degree corners.
  • the system of claim 151 wherein the substrate is selected from silicon, quartz, glass, or rigid plastic.
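Two of the elements summarized in the list above describe small computations that can be illustrated directly. The sketches below are illustrative only and are not taken from the application; the choice of image characteristic, the Gaussian blur, the floor value, and the particular paraboloid form are all assumptions.

A minimal sketch of the blur-ratio focus score: a sharply focused image loses more of its chosen characteristic (here, gradient energy, an assumed choice) when blurred than a defocused image does, so the ratio of the value before blurring to the value after blurring peaks near best focus.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def focus_score(image, floor=1e-6, sigma=2.0):
    """Blur-ratio focus score: higher values indicate sharper focus."""
    img = image.astype(np.float64)

    def gradient_energy(a):
        gy, gx = np.gradient(a)
        return float(np.mean(gx ** 2 + gy ** 2))

    first = gradient_energy(img)                            # value before blurring
    second = gradient_energy(gaussian_filter(img, sigma))   # value after blurring
    # Per the claim language: fall back to a predetermined floor value when the
    # blurred value is too small to divide by safely.
    return first / second if second >= floor else first / floor

# Typical use: score a stack of images taken at different z positions and take
# the position with the largest score as the focus position, e.g.
# best_index = int(np.argmax([focus_score(img) for img in z_stack]))
```

A minimal sketch of a non-planar, parabolic z mapping fitted to the measured x, y, and z positions of at least four fiducial marks, assuming the form z = a + b·x + c·y + d·(x² + y²) so that four marks suffice; the example coordinates are invented.

```python
import numpy as np

def fit_parabolic_z(xyz):
    """Fit z = a + b*x + c*y + d*(x**2 + y**2) to n >= 4 fiducial positions."""
    xyz = np.asarray(xyz, dtype=float)
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    design = np.column_stack([np.ones_like(x), x, y, x ** 2 + y ** 2])
    coeffs, *_ = np.linalg.lstsq(design, z, rcond=None)
    return coeffs

def predict_z(coeffs, x, y):
    a, b, c, d = coeffs
    return a + b * x + c * y + d * (x ** 2 + y ** 2)

# Four measured fiducial positions (x, y in mm, z in focus units), assumed values.
marks = [(0.0, 0.0, 10.00), (20.0, 0.0, 10.35), (0.0, 20.0, 10.28), (20.0, 20.0, 10.90)]
coeffs = fit_parabolic_z(marks)
stage_z = predict_z(coeffs, 10.0, 10.0)   # focus height predicted at a metering cell
```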

Abstract

A method for processing an image of a microfluidic device. The method includes receiving a first image of a microfluidic device. The first image corresponds to a first state. Additionally, the method includes receiving a second image of the microfluidic device. The second image corresponds to a second state. Moreover, the method includes transforming the first image and the second image into a third coordinate space (520). Also, the method includes obtaining a third image based on at least information associated (540) with the transformed first image and the transformed second image, and processing the third image to obtain information (550) associated with the first state and the second state.

Description

IMAGE PROCESSING METHOD AND SYSTEM FOR MICROFLUIDIC
DEVICES
CROSS-REFERENCES TO RELATED APPLICATIONS [0001] This application claims priority to U.S. Provisional Application No. 60/490,712, filed July 28, 2003, which is incorporated by reference herein.
[0002] Additionally, U.S. Application Serial No. 10/851,777 filed May 20, 2004 and titled "Method and System for Microfluidic Device and Imaging Thereof" is incorporated by reference herein.
STATEMENT AS TO RIGHTS TO INVENTIONS MADE UNDER FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT [0003] NOT APPLICABLE
REFERENCE TO A "SEQUENCE LISTING," A TABLE, OR A COMPUTER
PROGRAM LISTING APPENDIX SUBMITTED ON A COMPACT DISK. [0004] NOT APPLICABLE
COPYRIGHT NOTICE [0005] A portion of this application contains computer codes, which are owned by Fluidigm Corporation. All rights have been preserved under the copyright protection, Fluidigm Corporation ©2004.
BACKGROUND OF THE INVENTION [0006] The present invention is directed to image processing technology. More particularly, the invention provides an image processing method and system for detecting changes of an imaged object. Merely by way of example, the invention has been applied to crystallization in a microfluidic device. But it would be recognized that the invention has a much broader range of applicability. [0007] Crystallization is an important technique to the biological and chemical arts. Specifically, a high-quality crystal of a target compound can be analyzed by x-ray diffraction techniques to produce an accurate three-dimensional structure of the target. This three-dimensional structure information can then be utilized to predict functionality and behavior of the target.
[0008] In theory, the crystallization process is simple. A target compound in pure form is dissolved in solvent. The chemical environment of the dissolved target material is then altered such that the target is less soluble and reverts to the solid phase in crystalline form. This change in chemical environment is typically accomplished by introducing a crystallizing agent that makes the target material less soluble, although changes in temperature and pressure can also influence solubility of the target material.
[0009] In practice, however, forming a high quality crystal is generally difficult and sometimes impossible, requiring much trial and error and patience on the part of the researcher. Specifically, the highly complex structure of even simple biological compounds means that they are not amenable to forming a highly ordered crystalline structure.
Therefore, a researcher must be patient and methodical, experimenting with a large number of conditions for crystallization, altering parameters such as sample concentration, solvent type, countersolvent type, temperature, and duration in order to obtain a high quality crystal, if in fact a crystal can be obtained at all. [0010] Hansen, et al., describe in PCT publication WO 02/082047, published October 17, 2002 and herein incorporated by reference in its entirety for all purposes and the specific purposes disclosed therein and herein, a high-throughput system for screening conditions for crystallization of target materials, for example, proteins. The system is provided in a microfluidic device wherein an array of metering cells is formed by a multilayer elastomeric manufacturing process. Each metering cell comprises one or more pairs of opposing chambers, each pair being in fluid communication with the other through an interconnecting microfluidic channel, one chamber containing a protein solution, and the other, opposing chamber, containing a crystallization reagent. Along the channel, a valve is situated to keep the contents of the opposing chambers from each other until the valve is opened, thus allowing free interface diffusion to occur between the opposing chambers through the interconnecting microfluidic channel. As the opposing chambers approach equilibrium with respect to crystallization reagent and protein concentrations as free interface diffusion progresses, it is hoped that the protein will, at some point, form a crystal. In preferred embodiments, the microfluidic devices taught by Hansen et al. have arrays of metering cells containing chambers for conducting protein crystallization experiments therein. Use of such arrays in turn provides for high-throughput testing of numerous conditions for protein crystallization which require analysis.
[0011] The invention disclosed herein provides systems and methods for conducting such analysis to determine whether a particular set of protein crystallization conditions indeed caused crystals to form.
BRIEF SUMMARY OF THE INVENTION [0012] The present invention is directed to image processing technology. More particularly, the invention provides an image processing method and system for detecting changes of an imaged object. Merely by way of example, the invention has been applied to crystallization in a microfluidic device. But it would be recognized that the invention has a much broader range of applicability. [0013] According to the present invention, a number of embodiments of the image processing method and system for microfluidic devices are provided. Merely by way of an example, a method for processing an image of a microfluidic device includes receiving a first image of a microfluidic device. The first image corresponds to a first state. Additionally, the method includes receiving a second image of the microfluidic device. The second image corresponds to a second state. Moreover, the method includes transforming the first image into a third coordinate space. The transforming uses at least a first fiducial on the first image. Also, the method includes transforming the second image into the third coordinate space. The transforming uses at least a second fiducial on the second image. Additionally, the method includes obtaining a third image based on at least information associated with the transformed first image and the transformed second image, and processing the third image to obtain information associated with the first state and the second state. In one example, the third coordinate space is based on the prior known geometry of the microfluidic device. In another example, although there are certain advantages to using the first image, the method can work adequately without the first image. The second image is transformed into the third coordinate space. [0014] According to another embodiment of the present invention, a computer-readable medium including instructions for processing an image of a microfluidic device comprises one or more instructions for receiving a first image of a microfluidic device. The first image corresponds to a first state. Additionally, the computer-readable medium includes one or more instructions for receiving a second image of the microfluidic device. The second image corresponds to a second state. Moreover, the computer-readable medium includes one or more instructions for transforming the first image into a third coordinate space. The transforming uses at least a first fiducial on the first image. Also, the computer-readable medium includes one or more instructions for transforming the second image into the third coordinate space. The transforming uses at least a second fiducial on the second image.
Additionally, the computer-readable medium includes one or more instructions for obtaining a third image based on at least information associated with the transformed first image and the transformed second image, and one or more instructions for processing the third image to obtain information associated with the first state and the second state. [0015] Numerous benefits are achieved using the invention over conventional techniques. Depending upon the embodiment, one or more of these benefits may be achieved. For example, certain embodiments of the present invention improves the speed of imaging analysis and crystallization detection. Some embodiments of the present invention simplify the image processing system for crystallization detection. Certain embodiments of the present invention improve sensitivity of the image processing method and system.
[0016] According to yet another embodiment of the present invention, a method for processing an image of a microfluidic device includes receiving a first image of a microfluidic device. The first image includes a first fiducial marking and a first chamber region, and the first chamber region is associated with a first chamber boundary. Additionally, the method includes transforming the first image into a first coordinate space based on at least information associated with the first fiducial marking, removing at least a first part of the first chamber boundary from the first image, processing information associated with the first chamber region, and determining whether a first crystal is present in the first chamber region. [0017] According to yet another embodiment of the present invention, a method for processing a plurality of images of a microfluidic device includes receiving at least a first image and a second image of a microfluidic device. The first image and the second image are associated with a first focal position and a second focal position respectively, and each of the first image and the second image includes a first chamber region. Additionally, the method includes processing information associated with the first image and the second image, generating a third image based on at least information associated with the first image and the second image, processing information associated with the third image, and determining whether a first crystal is present in the first chamber region based on at least information associated with the third image.
[0018] According to yet another embodiment of the present invention, a method for adjusting a classifier and processing an image of a microfluidic device includes receiving a first image of a microfluidic device. The first image is associated with at least a first predetermined characteristic. Additionally, the method includes generating a first plurality of features based on at least information associated with the first image, and selecting a second plurality of features from the first plurality of features based on at least information associated with the first plurality of features and the at least a first predetermined characteristic. Moreover, the method includes determining a third plurality of features based on at least information associated with the second plurality of features, and processing information associated with the third plurality of features. Also, the method includes determining at least a first likelihood based on at least information based on the third plurality of features and a first plurality of parameters, processing information associated with the first likelihood and the at least a first predetermined characteristic, and adjusting the first plurality of parameters based on at least information associated with the first likelihood and the at least a first predetermined characteristic.
[0019] According to another embodiment of the present invention, a computer-readable medium includes instructions for processing an image of a microfluidic device. The computer-readable medium includes one or more instructions for receiving a first image of a microfluidic device. The first image includes a first fiducial marking and a first chamber region, and the first chamber region is associated with a first chamber boundary. Additionally, the computer-readable medium includes one or more instructions for transforming the first image into a first coordinate space based on at least information associated with the first fiducial marking, and one or more instructions for removing at least a first part of the first chamber boundary from the first image. Moreover, the computer- readable medium includes one or more instructions for processing information associated with the first chamber region, and one or more instructions for determining whether a first crystal is present in the first chamber region.
[0020] According to yet another embodiment of the present invention, a computer-readable medium includes instructions for processing a plurality of images of a microfluidic device. The computer-readable medium includes one or more instructions for receiving at least a first image and a second image of a microfluidic device. The first image and the second image are associated with a first focal position and a second focal position respectively, and each of the first image and the second image includes a first chamber region. Additionally, the computer-readable medium includes one or more instructions for processing information associated with the first image and the second image, and one or more instructions for generating a third image based on at least information associated with the first image and the second image. Moreover, the computer-readable medium includes one or more instructions for processing information associated with the third image, and one or more instructions for determining whether a first crystal is present in the first chamber region based on at least information associated with the third image.
[0021] According to yet another embodiment of the present invention, a computer-readable medium includes instructions for adjusting a classifier and processing an image of a microfluidic device. The computer-readable medium includes one or more instructions for receiving a first image of a microfluidic device. The first image is associated with at least a first predetermined characteristic. Additionally, the computer-readable medium includes one or more instructions for generating a first plurality of features based on at least information associated with the first image, and one or more instructions for selecting a second plurality of features from the first plurality of features based on at least information associated with the first plurality of features and the at least a first predetermined characteristic. Moreover, the computer-readable medium includes one or more instructions for determining a third plurality of features based on at least information associated with the second plurality of features, and one or more instructions for processing information associated with the third plurality of features. Also, the computer-readable medium includes one or more instructions for determining at least a first likelihood based on at least information based on the third plurality of features and a first plurality of parameters, one or more instructions for processing information associated with the first likelihood and the at least a first predetermined characteristic, and one or more instructions for adjusting the first plurality of parameters based on at least information associated with the first likelihood and the at least a first predetermined characteristic.
[0022] Depending upon the embodiment under consideration, one or more of these benefits of the present invention may be achieved. These benefits and various additional objects, features and advantages of the present invention can be fully appreciated with reference to the detailed description and accompanying drawings that follow.
BRIEF DESCRIPTION OF THE DRAWINGS [0023] Figure 1 depicts an overview of an exemplary imaging system. [0024] Figures 2a and 2b depict a top plan and cross-sectional view of an exemplary microfluidic device used in accordance with the invention.
[0025] Figures 3a and 3b depict how metering cell stretch and distortion may be compensated in accordance with the invention.
[0026] Figures 4a through 4c depict the process of masking and image subtraction employed in accordance with the invention.
[0027] Figure 5 is a simplified diagram for an image processing method according to an embodiment of the present invention.
[0028] Figure 6 is a simplified process 520 for transforming images according to one embodiment of the present invention. [0029] Figure 7 shows simplified wells and channels according to one embodiment of the present invention.
[0030] Figures 8-10 are simplified diagrams showing sample 1-D signals.
[0031] Figure 11 is a simplified diagram for masking images according to one embodiment of the present invention. [0032] Figure 12 is a simplified diagram for implosion-padding process.
[0033] Figure 13 is a simplified method for wall detection according to an embodiment of the present invention.
[0034] Figures 14(a), (b) and (c) are simplified diagrams for wall detection according to an embodiment of the present invention. [0035] Figure 15 is a simplified method for implosion padding according to an embodiment of the present invention.
[0036] Figure 16 is a simplified diagram for wall implosion according to an embodiment of the present invention. [0037] Figure 17 is a simplified diagram for wall implosion at another time according to an embodiment of the present invention.
[0038] Figure 18 is a simplified method for image inspection according to an embodiment of the present invention.
[0039] Figure 19 is a simplified training method according to an embodiment of the present invention.
[0040] Figure 20 is a simplified method for classification according to an embodiment of the present invention.
[0041] Figure 21 is a simplified method for combining images according to an embodiment of the present invention. [0042] Figure 22 is a simplified diagram for deep chamber according to an embodiment of the present invention.
[0043] Figure 23 is a simplified diagram for capturing multiple images according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION [0044] The present invention is directed to image processing technology. More particularly, the invention provides an image processing method and system for detecting changes of an imaged object. Merely by way of example, the invention has been applied to crystallization in a microfluidic device. But it would be recognized that the invention has a much broader range of applicability. [0045] Figure 1 is a simplified diagram for an imaging system according to an embodiment of the present invention. Figures 2a and 2b are simplified diagrams for a top view and cross-sectional view of a microfluidic device according to an embodiment of the present invention. The microfluidic device as shown in Figures 2a and 2b can be used in conjunction with the imaging system as shown in Figure 1. These diagrams are merely examples, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
[0046] Imaging system (10) operates, in one embodiment, in the following manner. First, microfluidic device (30) is securely placed on stage (20). Based on a fixed feature of the microfluidic device (30), for example, an edge of the base support of microfluidic device
(30), computer (110) then causes x,y drive (25) to move stage (20) about to align microfluidic device (30) in a first x,y position such that a first of a plurality of fiducial markings (30), wherein the fiducial markings are embedded within the microfluidic device at a known z dimension distance from a chamber center point, comes into focus by imaging device (60) based on dead reckoning from the fixed feature. A user of the system then registers the precise coordinate of the fiducial with the imaging system. Two or more additional fiducial marks are then likewise mapped with the assistance of a user. In other embodiments, this process is automatic as the centroids of the fiducials can be calculated precisely by locating the symmetric XY fiducial object and removing any non-symmetric components. Imaging device (60), under the control of computer (110), then adjusts the z dimension location of focal plane (105) to focus upon the fiducial marking (not shown in figure 1, but shown in figure 2). For example, once focused upon the first fiducial marking, the imaging system then obtains a first x,y coordinate image of microfluidic device (30) looking for additional fiducial markings within the field of view of image device (60). In preferred embodiments, the field of view can embrace an entire metering cell. The computer then analyzes the first x,y coordinate image to determine whether the microfluidic device has skew and stretch, and if skew or stretch are determined, transforms the first x,y image to align the image and coordinate map of the microfluidic device to an idealized coordinate map. The idealized coordinate map is used later during image subtraction and masking steps.

[0047] In preferred embodiments, with the microfluidic device x,y coordinate image aligned against the ideal coordinate map, the system then determines whether stretch, distortion, or lack of co-registration between the various microfluidic layers is present in the microfluidic device by comparing the location of the fiducial markings in the x,y coordinate image with the fiducial marking locations in the x,y coordinate image of the ideal stored image map. If differences are present between the actual fiducial locations and the imaged fiducial locations, a matrix transformation, preferably an Affine transformation, is performed to transform the imaged shape of the metering cell into a virtual shape of the ideal metering cell shape. By converting the actual image to a known and fixed ideal image using the matrix transformation computed from the differences between the measured actual fiducial locations and the stored ideal fiducial locations, image subtraction and other image analysis are made possible. For instance, Figure 3 depicts an ideal microfluidic device stored image (actually stored as a coordinate map), and an actual, distorted, microfluidic device image (also stored as a coordinate map determined from fiducial mapping). By computing the differences between the coordinate maps through matrix analysis, a matrix transformation may be developed to reform the actual image into an ideal image for use in further image processing described herein. By causing the imaged microfluidic device to conform to a standard shape, image subtraction and masking are possible to maximize the viewable area of a metering cell chamber.
Moreover, if defects or debris are present within the chamber at time zero in a series of time based images, such defects or debris can be masked out of subsequent images to avoid false positives when applying automated crystal recognition analysis. In addition to masking off areas of the chambers which contain defects or debris, the walls of the chambers may be subtracted from subsequent images, again so as not to cause false readings with the crystal recognition analysis. The discrepancy between various layers, such as between the control layer and the channel layer, can also be calculated based on the position of a found object in the control layer, such as the control lines themselves. In another example, this correction is determined based on the control layer fiducials themselves. For certain embodiments, this extra transformation is important since the control layer partitions the protein chamber from the rest of the control line.
[0048] Figures 4a through 4c depict how the above image subtraction and masking occur at time zero prior to crystal formation. Figure 4a depicts a metering cell with debris, shown as the letter "D" distributed about the metering cell chambers. Using the technique described above, after the metering cell has been rotated, if needed, to align with the ideal metering coordinate system, and after the metering cell has been stretch compensated to make the imaged metering cell dimensions match those of the ideal metering cell dimensions, then foreign objects not present in the ideal image are masked out, meaning that those regions including, and immediately surrounding the foreign objects are masked so as to avoid falsely triggering the crystal detection analysis into deeming the foreign object as a crystal that was formed. Figure 4b depicts an image wherein the mask has removed the foreign objects from the image so as to not provide false triggers for image analysis. Figure 4c depicts how image subtraction is applied to remove the chamber edge features from the image to reduce the raw image into one of just wall-less chambers. From this final image, further masking may occur if wall implosion is detected, an event that usually occurs when the microfluidic device is dehydrating and the chamber contents are permeating outside of the chamber, causing a negative pressure therein and thus, wall collapse or implosion. Such further masking for implosion employs a series of known shapes that occur when chamber implosion occurs and uses such known shapes to create additional masks to occlude from the image the now intruding imploded walls.
[0049] Figure 5 is a simplified diagram for an image processing method according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The method includes a process 510 for locating fiducials, a process 520 for transforming the image, a process 530 for masking the image, a process 540 for comparing images, and a process 550 for inspecting the image. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Other processes may be inserted into the sequence noted above. Depending upon the embodiment, the specific sequence of processes may be interchanged, and some processes may be replaced by others. The process 540 for comparing images may be performed prior to, during, and/or after the process 530 for masking the image. Further details of the present invention can be found throughout the present specification and more particularly below.
[0050] At the process 510, marking fiducials are located on an image. The image may be renormalized against a reference image, which was previously taken with either a standardized slab or nothing under the microscope, for white balancing or for exposure normalization, or other desirable characteristics. Marking fiducials may include cross hairs. In one embodiment of the present invention, the image includes metering cells in addition to a Fluidigm logo. Each metering cell has cross-hair fiducials at known locations around the metering cell. During the image acquisition, the positions of these fiducials are determined to within +/- 100 microns through the X-Y correction process. This estimation accuracy may be achieved even under rotational orientations. During the process 510, some sub-images are extracted around these estimated locations. Within these sub-images, the cross-hair fiducials are found, and their global positions are determined. The global positions in the T0 image are compared to the global positions in a subsequent image, such as the T1 image, the T2 image, ..., the TM image, ..., or the TN image. N is a positive integer, and M is a positive integer smaller than or equal to N. The T0 image is captured at T0, while the TM image is captured at TM. For example, at T0, no crystallization of protein occurs. At TM, crystallization of protein may have occurred. If a single fiducial is missed from the T0 image or the subsequent TM image, the missed fiducial is usually not considered during the subsequent analysis of the images.
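A minimal sketch of how a cross-hair fiducial might be located inside one of the sub-images extracted around its estimated position, assuming the cross-hair images brighter than its surroundings (invert the sub-image first if it images darker). The template size and the use of FFT-based correlation are illustrative choices, not details taken from the application.

```python
import numpy as np
from scipy.signal import fftconvolve

def crosshair_template(size=31, arm=2):
    """Synthetic cross-hair, zero-mean so a flat background correlates to ~0."""
    t = np.zeros((size, size))
    c = size // 2
    t[c - arm:c + arm + 1, :] = 1.0   # horizontal arm
    t[:, c - arm:c + arm + 1] = 1.0   # vertical arm
    return t - t.mean()

def locate_crosshair(sub_image, template=None):
    """Return (col, row) of the best cross-hair match within the sub-image."""
    if template is None:
        template = crosshair_template()
    img = sub_image.astype(np.float64)
    img -= img.mean()
    # Cross-correlation implemented as convolution with the flipped template.
    score = fftconvolve(img, template[::-1, ::-1], mode='same')
    row, col = np.unravel_index(int(np.argmax(score)), score.shape)
    return col, row

# The offset found in the sub-image, added to the sub-image's position in the
# full frame, gives the fiducial's global position for the T0/TM comparison.
```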
[0051] Figure 6 is a simplified process 520 for transforming images according to one embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The process 520 includes a process 610 for matching fiducials, a process 620 for calculating the transformation, and a process 630 for transforming the image. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. The process 620 for calculating the transformation and the process 630 for transforming the image may be combined. Other processes may be inserted into the sequence noted above. Depending upon the embodiment, the specific sequence of processes may be interchanged, and some processes may be replaced by others. Further details of the present invention can be found throughout the present specification and more particularly below.
[0052] At the process 610, fiducials in an image are matched with corresponding fiducials in an ideal coordinate map. For example, the image is the T0 image or the TM image. In one embodiment, the image is an x-y coordinate image, and the ideal coordinate map is an x-y coordinate map. The image is aligned against the ideal coordinate map. Locations of the fiducials in the image are compared with locations of the fiducials in the ideal coordinate map. Such comparison can reveal any distortion, including a stretch of the microfluidic device, present when the image is captured, such as at T0 or TM.
[0053] At the process 620, a spatial transformation from an image to an ideal coordinate space is calculated. The ideal coordinate space corresponds to the ideal coordinate map. In one embodiment, a matrix transformation, such as an affine transformation, is calculated. For example, two least squares transformations are calculated, one from the T0 image to the ideal coordinate space and one from the TM image to the ideal coordinate space.
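The specification does not prescribe a particular algorithm for this calculation. Merely as an example, the following C++ sketch estimates a six-parameter affine transformation from matched fiducial locations by solving the least squares normal equations for the x and y components separately; the names Point, Affine, and fitAffine are hypothetical.

#include <cmath>
#include <cstddef>
#include <stdexcept>
#include <vector>

struct Point { double x, y; };

// 2x3 affine transform: x' = a*x + b*y + c and y' = d*x + e*y + f.
struct Affine { double a, b, c, d, e, f; };

static double det3(const double m[3][3]) {
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

// Solve the 3x3 system M * v = r by Cramer's rule.
static void solve3(const double m[3][3], const double r[3], double v[3]) {
    double d = det3(m);
    if (std::fabs(d) < 1e-12) throw std::runtime_error("degenerate fiducial layout");
    for (int k = 0; k < 3; ++k) {
        double mk[3][3];
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                mk[i][j] = (j == k) ? r[i] : m[i][j];
        v[k] = det3(mk) / d;
    }
}

// Least squares affine transform mapping measured fiducial locations onto the
// corresponding locations in the ideal coordinate map (at least three pairs).
Affine fitAffine(const std::vector<Point>& measured, const std::vector<Point>& ideal) {
    double m[3][3] = {{0}}, rx[3] = {0}, ry[3] = {0};
    for (std::size_t i = 0; i < measured.size(); ++i) {
        const double row[3] = {measured[i].x, measured[i].y, 1.0};
        for (int p = 0; p < 3; ++p) {
            for (int q = 0; q < 3; ++q) m[p][q] += row[p] * row[q]; // accumulate A^T A
            rx[p] += row[p] * ideal[i].x;                           // accumulate A^T bx
            ry[p] += row[p] * ideal[i].y;                           // accumulate A^T by
        }
    }
    double vx[3], vy[3];
    solve3(m, rx, vx);
    solve3(m, ry, vy);
    return Affine{vx[0], vx[1], vx[2], vy[0], vy[1], vy[2]};
}

Applying the resulting transform to the T0 image and to the TM image places both in the ideal coordinate space, as described for the process 630 below.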
[0054] At the process 630, an image is transformed into an ideal coordinate space. The image may be the T0 image or the TM image. For example, a matrix transformation, such as an affine transformation, changes the shape of a metering cell in the image into an ideal shape. The metering cell may be sliced into three or more diffusion experiments. In one embodiment, Figure 3a shows a simplified ideal coordinate map, and Figure 3b shows a simplified distorted image. By computing the differences between the fiducial locations in the coordinate map and the corresponding fiducial locations in the distorted image, a matrix transformation may be performed to convert the distorted image into an ideal image.
[0055] At the process 630, the T0 image and the TM image are transformed into the ideal coordinate space. The transformed T0 image and the transformed TM image are located in the same coordinate space, so they are co-registered and comparable to one another. The transformed T0 image can be subtracted from the TM image to detect crystallization in the TM image. But such subtraction does not remove all the noise sources that should be removed.
[0056] In theory, the locations of the wells in the ideal coordinate space are known since the cross-hair fiducials are on the same layer as the wells, but in practice each metering cell is unique. Dead-reckoning the well locations, including the well walls, usually does not provide accurate information. Instead, a sub-rectangle is usually extracted around each well location, and the T0 image is used to look for the well walls. For example, four straight lines are fitted to the four walls of the well. In addition, four vertical lines are usually used to determine four of the six walls for the three channel segments.

[0057] Figure 7 shows simplified wells and channels according to one embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The four vertical lines as discussed above include the left wall of the right channel, the right wall and the left wall of the middle channel, and the right wall of the left channel. The remaining two walls, e.g., the right wall of the right channel and the left wall of the left channel, are demarcated by the containment lines, which are found through thresholding a 1-D horizontal signal of a gross left and right sub-image. The analysis of the one-dimensional horizontal signal can also locate an interface line in the center channel and the top and bottom walls of the horizontal channels using small windows across the x-dimension. The horizontal channels can be tilted out of the horizontal due to alignment errors. The interface lines and the top and bottom walls of the channels are used in the subsequent processes.

[0058] Figures 8-10 are simplified diagrams showing sample 1-D signals. These diagrams are merely examples, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In certain embodiments, the channel walls are not as crisp in signal as shown in Figures 8-10, as the strength of that signal depends on the z-location at the time of image acquisition. Specifically, Figure 9 is a simplified diagram for interface line detection. Figure 10 is a simplified diagram for a filtered and width-matched signal. In some embodiments, the fiducials are on the same layer as the channel. The channel position can then be found via the affine transformation without finding the channel walls.

[0059] At the process 530, an image is masked. The masking increases the viewable area of a metering cell chamber. If defects or debris are present within a chamber in the T0 image, these defects or debris can be masked out of the T0 image and the subsequent TM image. The removal of defects or debris can reduce the number of false positives in automated crystal recognition analysis.

[0060] For example, a stamp or a mask is calculated from the T0 image in order to mask out regions of the T0 image that contain signals not of interest to the crystal recognition analysis. Figure 11 is a simplified diagram for masking images according to one embodiment of the present invention. The T0 image and the T1 image are captured and transformed to the ideal coordinate space. Each rectilinear region contains four bounding walls. The region beyond the four bounding walls in the T0 image is masked out of the subsequent analysis.
Similarly, the interface line is masked out. Additionally, large blob objects that appear in the region of interest and exceed a threshold in the T0 image are similarly masked, as they are assumed to pre-exist crystallization. As shown in Figure 11, a blob object appears in the right channel in both the T0 image and the T1 image, but the blob object does not exist in the scrubbed lower-right image.
[0061] The cells, voids, and spaces are deformable in microfluidic devices, so they can change in size from T0 to TM. Such deformation of the cell surfaces is modeled, and the mask is accordingly modified for the corresponding TM. For example, as shown in Figure 11, the left and right well subcomponents have their "implosion-padding" values calculated. This is necessary because the substantial pressure difference in the well between T0 and TM implodes the walls from their original position.

[0062] According to one embodiment of the present invention, the implosion-padding process includes extracting a rectangle around a well in the T0 image, calculating an average of a succession of rectangle perimeters from the T0 image, finding a minimum value of this vector and its index, repeating the above three processes of extracting, calculating, and finding for the subsequent T1 image, the T2 image, ..., the TM image, ..., and the TN image, and calculating the difference in the indices. The difference in the indices is used to estimate additional padding to the masking region for the original T0 image. Figure 12 is a simplified diagram for the implosion-padding process. As discussed above and further emphasized here, this diagram is merely an example, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
[0063] At the process 540, images are compared to generate a comparison image. For example, a comparison image results from the subtraction of the T0 image from the TM image. The scrubbing can usually remove the walls of the chambers. Such removal can reduce false readings in the crystal recognition analysis. As discussed above and further emphasized here, the process 540 for image comparison may be performed prior to the process 530 for masking image, during the process 530 for masking image, and/or after the process 530 for masking image.
[0064] In one embodiment, the comparison image is median re-centered to push the middle to 128 instead of the arbitrary value that would otherwise result. The intensity of the image can vary even with respect to the reference image, as it is dependent on the hydration conditions on the chip. The mask generated in the process 530 is applied to the comparison image to create an attenuating front which softens the harsh borders that the mask would introduce to an image. The closer an image pixel is to a mask pixel, the more the image pixel is attenuated. This process is one example of scrubbing. The distance map describing the distance of each image pixel from a mask pixel is calculated separately from the T0 image.
[0065] Figures 4a through 4c are simplified diagrams for image subtraction, masking and scrubbing. These diagrams are merely examples, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. As shown in Figure 4a, a metering cell contains debris, indicated by the letter "D", distributed about the metering cell chambers. With the processes described above, the metering cell may be rotated to align with the ideal coordinate map, and is transformed to make the imaged metering cell dimensions match those of the ideal metering cell dimensions. For example, the transformation can stretch-compensate the image. Subsequently, the foreign objects not present in the ideal image are masked out. The masking process removes signals from the regions including and immediately surrounding the foreign objects. The removal can reduce false identification of the foreign objects as crystals. Figure 4b is a simplified diagram for an image with foreign objects removed. Figure 4c is a simplified diagram for image subtraction. The image subtraction calculates differences between the T0 image and the TM image, and thereby removes the chamber edge features from the TM image. The TM image is converted into an image having wall-less chambers.
[0066] For this converted image, a further masking may be needed if wall implosion is detected. Wall implosion usually occurs when the microfluidic device is dehydrating and the chamber contents are permeating outside of the chamber. The permeation causes a negative pressure therein and thus wall collapse or implosion. Such further masking for implosion employs a series of known shapes that arise when chamber implosion occurs and uses these shapes to create additional masks that occlude the intruding imploded walls from the image.
[0067] According to one embodiment of the present invention, an output scrubbed image is calculated by first renormalizing the T0 image and the TM image with respect to each other. The renormalization process can reduce a DC or background signal resulting from environmental changes to the chip, such as loss of chip moisture. A simple subtraction image is then calculated with a 128 offset. This subtraction image is then "scrubbed" by stamping all the pixel locations in the stamp with 128 and thereby obliterating their output signal. Additionally, pixel locations are progressively attenuated based on their x-y distance to a stamped pixel in the mask. Therefore the subtraction image is scrubbed around the mask pixels to ensure a smooth transition from the stamped 128 value to the real image values.
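Merely by way of illustration, the following C++ sketch combines the subtraction with a 128 offset, the stamping of masked pixels, and the distance-based attenuation described above. It is a sketch under assumptions rather than the implementation of the specification: the Gray8 type, the city-block distance transform, and the linear attenuation profile over a falloff distance are illustrative choices, and the renormalization of the two input images is assumed to have been performed already.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical row-major 8-bit grayscale image.
struct Gray8 {
    int width, height;
    std::vector<std::uint8_t> px;
};

// Subtract t0 from tm with a 128 offset, stamp masked pixels to 128, and
// attenuate pixels near the mask based on their city-block distance to it.
// falloff is the distance in pixels over which the full signal is restored.
Gray8 scrub(const Gray8& t0, const Gray8& tm, const std::vector<bool>& mask, int falloff) {
    const int w = t0.width, h = t0.height, n = w * h;

    // Two-pass city-block distance transform to the nearest mask pixel.
    const int INF = w + h + 1;
    std::vector<int> dist(n, INF);
    for (int i = 0; i < n; ++i) if (mask[i]) dist[i] = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int i = y * w + x;
            if (x > 0) dist[i] = std::min(dist[i], dist[i - 1] + 1);
            if (y > 0) dist[i] = std::min(dist[i], dist[i - w] + 1);
        }
    for (int y = h - 1; y >= 0; --y)
        for (int x = w - 1; x >= 0; --x) {
            int i = y * w + x;
            if (x < w - 1) dist[i] = std::min(dist[i], dist[i + 1] + 1);
            if (y < h - 1) dist[i] = std::min(dist[i], dist[i + w] + 1);
        }

    Gray8 out{w, h, std::vector<std::uint8_t>(static_cast<std::size_t>(n))};
    for (int i = 0; i < n; ++i) {
        int diff = int(tm.px[i]) - int(t0.px[i]) + 128;                       // offset difference
        diff = std::max(0, std::min(255, diff));
        double atten = std::min(1.0, double(dist[i]) / std::max(1, falloff)); // 0 on the mask
        out.px[i] = static_cast<std::uint8_t>(128 + atten * (diff - 128));    // pull toward 128
    }
    return out;
}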
[0068] At a process 550, an image is inspected for crystals. For example, the final scrubbed image is sent through a feature extractor which performs additional image processing techniques on the image.

[0069] Training and selection of these features is a semi-automatic process using Matlab scripts. A random combination of these features is selected. The higher dimensional space is mapped to a lower dimensionality through Fisher linear discriminant analysis to increase the separability of crystals from other materials. Classification is performed in this lower dimensional space using a K-nearest neighbor algorithm. A confusion matrix for the original training set is calculated by excluding the instance under test, and a cost matrix is applied to the training matrix to evaluate the "goodness" of the training run. The best training run is used to determine the number of neighbors, the features used, and two thresholds used for false positive rejection and false negative rejection.
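As a simple illustration of the K-nearest neighbor classification mentioned above, the following C++ sketch votes among the k closest training examples in the reduced feature space. The Example structure and the knnClassify function are hypothetical names, and the confusion matrix, cost matrix, and rejection thresholds described above are not shown.

#include <algorithm>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// One training example in the reduced (e.g., Fisher) feature space.
struct Example {
    std::vector<double> features;
    int label; // e.g., 0 = clear, 1 = phase/precipitate, 2 = crystal
};

// Classify a query vector by majority vote among its k nearest neighbors.
int knnClassify(const std::vector<Example>& training, const std::vector<double>& query, int k) {
    std::vector<std::pair<double, int> > dists; // (squared distance, label)
    for (std::size_t t = 0; t < training.size(); ++t) {
        double d2 = 0.0;
        for (std::size_t j = 0; j < query.size(); ++j) {
            double d = training[t].features[j] - query[j];
            d2 += d * d;
        }
        dists.push_back(std::make_pair(d2, training[t].label));
    }
    std::size_t kk = std::min<std::size_t>(k, dists.size());
    std::partial_sort(dists.begin(), dists.begin() + kk, dists.end());
    std::map<int, int> votes;
    for (std::size_t i = 0; i < kk; ++i) ++votes[dists[i].second];
    int best = -1, bestCount = -1;
    for (std::map<int, int>::const_iterator it = votes.begin(); it != votes.end(); ++it)
        if (it->second > bestCount) { best = it->first; bestCount = it->second; }
    return best;
}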
[0070] According to yet another embodiment of the present invention, a computer medium includes instructions for processing an image of a microfluidic device. The computer medium stores a computer code that directs a processor to perform the inventive processes as discussed above. An exemplary computer code may use Matlab or another computer language, and may run on a Pentium PC or another computer. The computer code is not intended to limit the scope of the claims herein. One of ordinary skill in the art would recognize other variations, modifications, and alternatives.
[0071] For example, the computer-readable medium includes one or more instructions for receiving the T0 image of a microfluidic device. The T0 image is captured prior to crystallization. Additionally, the computer-readable medium includes one or more instructions for receiving the TM image of the microfluidic device. The TM image is captured after the T0 image. Moreover, the computer-readable medium includes one or more instructions for transforming the T0 image into an ideal coordinate space using at least a fiducial on the T0 image, one or more instructions for transforming the TM image into the ideal coordinate space using at least a fiducial on the TM image, one or more instructions for obtaining a comparison image based on at least information associated with the transformed T0 image and the transformed TM image, and one or more instructions for processing the comparison image to obtain information associated with the crystallization.

[0072] As another example, the computer code can perform locating fiducials, transforming image, masking image, comparing images, and inspecting image. As yet another example, the computer code performs some or all of the processes as described in Figures 1-12.
[0073] As discussed above and further emphasized here, the above examples of computer-readable medium and computer code are merely examples, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, some processes may be achieved with hardware while other processes may be achieved with software. Some processes may be achieved with a combination of hardware and software. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Depending upon the embodiment, the specific sequence of processes may be interchanged, and some processes may be replaced.
[0074] Numerous benefits are achieved using the invention over conventional techniques. Depending upon the embodiment, one or more of these benefits may be achieved. For example, certain embodiments of the present invention improve the speed of imaging analysis and crystallization detection. Some embodiments of the present invention simplify the image processing system for crystallization detection. Certain embodiments of the present invention improve the sensitivity of the image processing method and system.
[0075] As discussed above and further emphasized here, Figures 1-12 represent certain embodiments of the present invention, and these embodiments include many examples. In one example, at the process 510, marking fiducials are located on an image. The image may be renormalized against a reference image, which was previously taken with either a standardized slab or nothing under the microscope, for white balancing, for exposure normalization, or for other desirable characteristics. The image may be 8-bit renormalized with high resolution or other desirable characteristics. Marking fiducials may include cross hairs. In one embodiment of the present invention, the image includes metering cells in addition to a Fluidigm logo. Each metering cell has cross-hair fiducials at known locations around the metering cell. During the image acquisition, the positions of these fiducials are determined to within +/- 100 microns through the X-Y correction process. This estimation accuracy may be achieved even under rotational orientations. During the process 510, some sub-images are extracted around these estimated locations. Within these sub-images, the cross-hair fiducials are found, and their global positions are determined. In one example, the T0 image is analyzed at the process 510, and in another example, the T0 image is not analyzed at the process 520. For example, the T0 image is captured at T0. At T0, no crystallization of protein occurs. At TM, crystallization of protein may have occurred.
[0076] If the T0 image is analyzed at the process 520, the global positions in the T0 image are compared to the global positions in a subsequent image, such as the T1 image, the T2 image, ..., the TM image, ..., or the TN image. N is a positive integer, and M is a positive integer smaller than or equal to N. The TM image is captured at TM. If a single fiducial is missed from the T0 image or the subsequent TM image, the missed fiducial is usually not considered during the subsequent analysis of the images.
[0077] In another example, the process 520 includes a process 610 for matching fiducials, a process 620 for calculating transformation, and a process 630 for transforming image. At the process 610, fiducials in an image are matched with corresponding fiducials in an ideal coordinate map. For example, the image is the TM image. In one embodiment, the image is an x-y coordinate image, and the ideal coordinate map is an x-y coordinate map. The image is aligned against the ideal coordinate map. Locations of the fiducials in the image are compared with locations of the fiducials in the ideal coordinate map. Such comparison can reveal any distortion, including a stretch of the microfluidic device, present when the image is captured, such as at TM. In one embodiment, the ideal coordinate map takes into account certain characteristics of the imaging system 10 and/or the microfluidic device 30. For example, the characteristics include some imperfections known or predicted at the time the ideal coordinate map was generated.

[0078] At the process 620, a spatial transformation from an image to an ideal coordinate space is calculated. The ideal coordinate space corresponds to the ideal coordinate map. In one example, a least squares transformation is calculated from the T0 image to the ideal coordinate space. In another example, a least squares transformation is not calculated from the T0 image to the ideal coordinate space.

[0079] At the process 630, an image is transformed into an ideal coordinate space. For example, the T0 image is transformed. In another example, the T0 image is not transformed. In one embodiment, the transformed images are located in the same coordinate space, so they are co-registered and comparable to one another. In another embodiment, the transformed image includes at least part of the microfluidic device 30. For example, the microfluidic device 30 has the channel regions and well regions. In certain embodiments, the channel regions and the well regions are interchangeable. The channels and the wells refer to recessed regions in the microfluidic device. In other embodiments, the microfluidic device uses channel regions to function as well regions. In yet other embodiments, the microfluidic device includes chambers that can be used as fluid channels, control channels, and wells.

[0080] At the process 530, an image is masked. For example, a stamp or a mask is calculated using predetermined information about the idealized image. As shown in Figure 11, the TM image is captured and transformed to the ideal coordinate space. Each rectilinear region contains four bounding walls. The region beyond the four bounding walls in the TM image is masked out of the subsequent analysis. Similarly, the interface line is masked out.
[0081] In another example, Figure 13 is a simplified method for wall detection. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The method 1300 includes process 1310 for receiving image, process 1320 for performing intensity analysis, process 1330 for converting intensities, process 1340 for detecting walls for first control channel, and process 1350 for detecting wall for second control channel. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, the processes 1310 and 1320 are combined. In another example, the processes 1340 and 1350 are combined. Other processes may be inserted among those noted above. Depending upon the embodiment, the specific sequences of processes may be interchanged, and some processes may be replaced. Further details of these processes are found throughout the present specification and more particularly below.

[0082] Figures 14(a), (b) and (c) are simplified diagrams for wall detection according to an embodiment of the present invention. These diagrams are only illustrative, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
[0083] At the process 1310, an image is received. For example, the image is the T0 image or the TM image. In one embodiment, as shown in Figure 14(a), an image 1400 includes an interface line 1410 as a first control channel, a containment line 1420 as a second control channel, and a reaction channel 1430. The interface line 1410 includes walls 1412 and 1414, and the containment line 1420 includes a wall 1422. The reaction channel includes walls 1432 and 1434. For example, the interface line 1410 and the containment line 1420 are in the control layer. In another example, the reaction channel 1430 is used for protein crystallization.
[0084] At the process 1320, an intensity analysis is performed. In one embodiment, as shown in Figure 14(b), the image 1400 is analyzed based on intensity. A curve 1440 represents image intensity along the direction of the reaction channel 1430. The curve 1440 includes at least five peaks 1442, 1444, 1452, 1454, and 1456. The peaks 1442 and 1444 correspond to bright regions, and the peaks 1452, 1454, and 1456 correspond to dark regions. The peaks 1442 and 1452 are associated with the wall 1412, the peaks 1444 and 1454 are associated with the wall 1414, and the peak 1456 is associated with the wall 1422.
[0085] At the process 1330, the intensities are converted. In one embodiment, as shown in Figure 14(c), the curve 1440 is converted into a curve 1460. The conversion removes polarity differences between the peaks 1442 and 1452 and between the peaks 1444 and 1454. Additionally, the conversion also provides a smoothing process. For example, the intensity values of the curve 1440 are compared against the average intensity value of the curve 1440, and the absolute values of the differences are plotted along the direction of the reaction channel 1430. As a result, the curve 1460 includes three peaks 1472, 1474, and 1476. The peak 1472 corresponds to the peaks 1442 and 1452, the peak 1474 corresponds to the peaks 1444 and 1454, and the peak 1476 corresponds to the peak 1456. In one embodiment, the smoothing process ensures the peaks 1442 and 1452 are converted into a single peak 1472. In another embodiment of the present invention, the conversion is performed without the smoothing process. For example, the curve 1440 has a single peak with a single polarity in place of the peaks 1442 and 1452. No smoothing or fusing of the two peaks is needed.
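Merely as an example, the following C++ sketch performs the conversion described above: the absolute deviation from the mean intensity removes the polarity differences between bright and dark peaks, and a box average over a small window provides the smoothing that fuses adjacent peaks such as 1442 and 1452 into a single peak. The function name convertIntensity and the choice of a box window are illustrative assumptions.

#include <algorithm>
#include <cmath>
#include <vector>

// Convert a 1-D intensity profile into a polarity-free, smoothed signal.
std::vector<double> convertIntensity(const std::vector<double>& signal, int halfWindow) {
    const int n = static_cast<int>(signal.size());
    double mean = 0.0;
    for (int i = 0; i < n; ++i) mean += signal[i];
    mean /= std::max(1, n);

    std::vector<double> dev(n), smoothed(n);
    for (int i = 0; i < n; ++i) dev[i] = std::fabs(signal[i] - mean); // remove polarity

    for (int i = 0; i < n; ++i) {
        int lo = std::max(0, i - halfWindow);
        int hi = std::min(n - 1, i + halfWindow);
        double sum = 0.0;
        for (int j = lo; j <= hi; ++j) sum += dev[j];
        smoothed[i] = sum / (hi - lo + 1); // box average fuses adjacent peaks
    }
    return smoothed;
}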
[0086] At the process 1340, walls of the first control channel are detected. In one embodiment, as shown in Figures 14(c), the peaks 1472 and 1474 are associated with the walls 1412 and 1414 of the first control channel 1410. A line 1488 is drawn parallel to the x axis along the direction of the reaction channel. The line 1488 intersects with the curve 1460 at four intersections 1482, 1484, 1486, and 1488. The average x value of intersections 1482 and 1484 and the average x value of the intersections 1486 and 1488 are calculated. The difference between the two average x values is determined as the calculated width of the interface line 1410. The calculated width is compared against the predetermined width of the interface line 1410. By moving the line 1488 up and down along the y direction, the difference between the calculated width and the predetermined width is minimized at a certain y position for the line 1488. At this y position, the average x value of intersections 1482 and 1484 is considered to be the position of the wall 1412, and the average x value of the intersections 1486 and 1488 is considered to be the position of the wall 1414.
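The following C++ sketch illustrates, under stated assumptions, the width-matching search described above: a horizontal level is swept through the converted signal, the four crossing positions are located, and the level whose peak spacing best matches the predetermined width of the interface line is retained. The function name locateWallsByWidth, the number of sweep steps, and the linear interpolation of crossings are illustrative choices, and a practical embodiment may further adjust the result with the fine-correction process described below.

#include <algorithm>
#include <cfloat>
#include <cmath>
#include <cstddef>
#include <vector>

// Estimate the two wall positions of a control line from a converted 1-D
// signal by sweeping a horizontal level and matching the spacing of the two
// peaks against the line's expected width (all values in pixels).
bool locateWallsByWidth(const std::vector<double>& curve, double expectedWidth,
                        double& wallLeft, double& wallRight) {
    double lo = DBL_MAX, hi = -DBL_MAX;
    for (std::size_t i = 0; i < curve.size(); ++i) {
        lo = std::min(lo, curve[i]);
        hi = std::max(hi, curve[i]);
    }

    double bestError = DBL_MAX;
    bool found = false;
    for (int step = 1; step < 100; ++step) {                // move the level up and down
        double level = lo + (hi - lo) * step / 100.0;
        std::vector<double> crossings;
        for (std::size_t i = 1; i < curve.size(); ++i) {
            bool wasBelow = curve[i - 1] < level;
            bool isBelow = curve[i] < level;
            if (wasBelow != isBelow) {
                double t = (level - curve[i - 1]) / (curve[i] - curve[i - 1]);
                crossings.push_back((i - 1) + t);           // interpolated crossing position
            }
        }
        if (crossings.size() == 4) {                        // two peaks give four crossings
            double left = 0.5 * (crossings[0] + crossings[1]);  // center of the first peak
            double right = 0.5 * (crossings[2] + crossings[3]); // center of the second peak
            double error = std::fabs((right - left) - expectedWidth);
            if (error < bestError) {
                bestError = error;
                wallLeft = left;
                wallRight = right;
                found = true;
            }
        }
    }
    return found;
}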
[0087] At the process 1350, a wall of the second control channel is detected. In one embodiment, once the interface line 1410 is located, the predetermined length of the reaction channel 1430 between the interface line 1410 and the containment line 1420 is used to calculate the position of the containment line 1420. The calculation provides an approximate location for the wall 1422. Afterwards, the approximate locations for the walls 1414 and 1422 are further adjusted by a fine-correction process. The fine-correction process calculates the penalty functions for the wall 1414 and the wall 1422 and determines a combined penalty function as a function of wall positions. In one example, the combined penalty function takes into account the signal intensities of the curve 1460. In another example, the combined penalty function takes into account the distance between the fine-corrected wall positions and the approximate wall positions without fine correction. In yet another example, by minimizing the combined penalty function, the locations of the walls 1414 and 1422 are determined. In yet another example, by smoothing the combined penalty function, the locations of the walls 1414 and 1422 are determined.
[0088] As discussed above and further emphasized here, Figure 13 is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, the walls 1432 and 1434 of the reaction channel 1430 as shown in Figure 14(a) are found in a way similar to the walls 1412, 1414, and 1422. The distance between the two walls 1432 and 1434 is predetermined. Multiple regions of the reaction channel 1430 are sampled to generate a composite estimate of the locations for the walls 1432 and 1434. In another example, the fiducial markings are detected and registered on the channel layer, and the walls 1432 and 1434 are thereby determined. In yet another example, the locations of the walls 1432, 1434, 1414 and 1422 can be determined based on at least information obtained from a bar code on the microfluidic device 30. In yet another example, as shown in Figure 14(a), the region beyond the four bounding walls 1432, 1434, 1414 and 1422 can be masked out of the subsequent analysis.
[0089] Also, various fiducial markings can be included in the microfluidic system 30. In one embodiment, a fiducial marking comprises a recessed region in a deformable layer. The recessed region becomes a volume or open region surrounded by portions of the deformable layer or other layers. The volume or open region is preferably filled with a fluid such as a gas, including air, or another non-reactive fluid. The fluid also has a substantially different refractive index to light relative to the surrounding deformable layer. The open region is preferably filled with air or an air-type mixture and has a low refractive index. Similarly, the fiducial marking in the control layer has similar characteristics according to a specific embodiment. In another embodiment, the fiducial marking has sharp edges that highlight the marking from its surroundings. In yet another embodiment, the fiducial markings can be any physical features associated with the microfluidic device 30. For example, the fiducial markings include a channel wall or an edge of the microfluidic device 30.
[0090] At the process 540, images are compared to generate a comparison image. For example, a comparison image results from the subtraction of the T0 image from the TM image. In another example, a comparison image results from the subtraction of the TM1 image from the TM2 image. Each of M1 and M2 is a positive integer smaller than or equal to N. For example, M1 is smaller than M2. Such removal can reduce false readings in the crystal recognition analysis. In another example, the mask generated in the process 530 is applied to the comparison image to create an attenuating front which softens the harsh borders that the mask would introduce to an image. The closer an image pixel is to a mask pixel, the more the image pixel is attenuated. In yet another example, the mask takes into account wall implosion by an implosion-padding process. As discussed above and further emphasized here, the process 540 may be skipped in some examples.
[0091] Figure 15 is a simplified method for implosion padding according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The method 4500 includes process 4510 for selecting image area, process 4520 for determining median intensity, process 4530 for determining need for additional image area, process 4540 for determining minimum intensity, and process 4550 for determining implosion padding. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some processes are combined or expanded. Other processes may be inserted among those noted above. Depending upon the embodiment, the specific sequences of processes may be interchanged, and some processes may be replaced. Further details of these processes are found throughout the present specification and more particularly below.
[0092] At the process 4510, an image area is selected from the T0 image or the TM image. For example, the selected image area is associated with a rectangular boundary. Figure 16 is a simplified diagram for wall implosion according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. An image area along the perimeter of a rectangle 4610 is selected from an image. The rectangle 4610 is assigned an index.

[0093] At the process 4520, a median intensity is determined. As shown in Figure 16, the median intensity for the image area is calculated. The median intensity is associated with an index corresponding to the rectangle 4610, and determined based on raw pixel intensities along the perimeter of the rectangle 4610. In another embodiment, the average intensity instead of the median intensity for the image area is determined. At the process 4530, whether an additional image area should be selected is determined. If an additional image area needs to be selected, the process 4510 is performed. If an additional image area does not need to be selected, the process 4540 is performed. In one example, the processes 4520 and 4530 are repeated for a succession of nested rectangles, and the rectangle index is plotted against the determined median intensity as shown in a curve 4620.
[0094] At the process 4540, the minimum median intensity is determined. As shown in Figure 16, the median intensity is a function of the index, and may be plotted as the curve 4620. At an index equal to about 10, the median intensity approximately reaches a minimum. The rectangle associated with the minimum median intensity is related to the walls of the reaction chamber, and is used to determine the extent of implosion. In another embodiment, the minimum average intensity instead of the minimum median intensity for the image area is determined.
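Merely by way of illustration, the following C++ sketch computes the median intensity along the perimeters of a succession of nested rectangles and returns the index at which that median is minimum, following the implosion-padding discussion above. The helper names perimeterMedian and implosionIndex are hypothetical.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Median intensity along the perimeter of the rectangle [x0, x1] x [y0, y1]
// of a row-major 8-bit image of the given width (illustrative helper).
static double perimeterMedian(const std::vector<std::uint8_t>& px, int width,
                              int x0, int y0, int x1, int y1) {
    std::vector<std::uint8_t> vals;
    for (int x = x0; x <= x1; ++x) {                              // top and bottom edges
        vals.push_back(px[static_cast<std::size_t>(y0) * width + x]);
        vals.push_back(px[static_cast<std::size_t>(y1) * width + x]);
    }
    for (int y = y0 + 1; y < y1; ++y) {                           // left and right edges
        vals.push_back(px[static_cast<std::size_t>(y) * width + x0]);
        vals.push_back(px[static_cast<std::size_t>(y) * width + x1]);
    }
    std::nth_element(vals.begin(), vals.begin() + vals.size() / 2, vals.end());
    return vals[vals.size() / 2];
}

// Shrink the well rectangle one pixel per index and return the index whose
// perimeter has the minimum median intensity (nominally the dark chamber wall).
int implosionIndex(const std::vector<std::uint8_t>& px, int width,
                   int x0, int y0, int x1, int y1, int maxIndex) {
    int bestIndex = 0;
    double bestMedian = 1e99;
    for (int k = 0; k < maxIndex && x0 + k < x1 - k && y0 + k < y1 - k; ++k) {
        double m = perimeterMedian(px, width, x0 + k, y0 + k, x1 - k, y1 - k);
        if (m < bestMedian) { bestMedian = m; bestIndex = k; }
    }
    return bestIndex;
}

The additional padding applied to the mask would then be estimated from the difference between the index obtained for the later image and the index obtained for the T0 image, as described at the process 4550 below.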
[0095] At the process 4550, the implosion padding is determined. Figure 17 is a simplified diagram for wall implosion at another time according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. Figure 17 shows the processes 4510, 4520, 4530, and 4540 performed on an image taken later than the image analyzed in Figure 16. For example, Figure 16 is associated with the T0 image or the TM1 image. Figure 17 is associated with the TM2 image, and M2 is larger than M1. In Figure 17, the index that corresponds to the minimum median intensity has shifted from 10 to about 29. The change in index values indicates the wall implosion. Based on the locations of the rectangles corresponding to the two index values, the additional implosion padding that should be applied for the image in Figure 17 is determined. The mask can be designed to cover the wall implosion.

[0096] At a process 550, an image is inspected for crystals. For example, Figure 18 is a simplified method for image inspection. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The method 1500 includes process 1510 for training classifier and process 1520 for classifying image. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some processes are combined or expanded. Other processes may be inserted among those noted above. Depending upon the embodiment, the specific sequences of processes may be interchanged, and some processes may be replaced. For example, the process 1510 is skipped. In another example, the process 1510 is repeated for a plurality of images. Further details of these processes are found throughout the present specification and more particularly below.

[0097] At the process 1510, a classifier is trained. Figure 19 is a simplified training method according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The process 1510 includes process 1610 for generating features, process 1620 for selecting features, process 1630 for projecting features, and process 1640 for adjusting classifier. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some processes are combined or expanded. Other processes may be inserted among those noted above. Depending upon the embodiment, the specific sequences of processes may be interchanged, and some processes may be replaced. Further details of these processes are found throughout the present specification and more particularly below.
[0098] At the process 1610, a number of features are generated. In one embodiment, the features are computed on the entire image. In another embodiment, the image is divided into overlapping tiles or spatial components, and the features are computed on each image tile or spatial component. These features describe certain characteristics of the image useful for the classification of the image. For example, the image can be classified into crystal, phase/precipitate and clear types.
[0099] In one embodiment, some characteristics of the image are predetermined. The predetermination is accomplished by manually and/or automatically inspecting the image. The characteristics may describe with which of the crystal, phase/precipitate and clear classes the image is associated. The predetermined characteristics can be used to assess the accuracy and adjust the various settings of the classifier.
[0100] In one embodiment, the features include some or all of the following:

[0101] Coarse Image Statistics: global image features;
[0102] Circle Counting Image Statistics: counts of different kinds of circles and ellipses;
[0103] Sliding Threshold Features: threshold values at which objects of sufficient size are segmented; [0104] Biggest Object Features: features of the biggest blob or object found in the image;
[0105] Discrete Fourier Transform Features: frequency analysis features;
[0106] Form Analysis Features: shape analysis features;
[0107] X-axis Symmetry Features: features describing the symmetry around X-axis;
[0108] Canny Image Sign Flipping Features: features describing the flipping of sign using the Canny edge detector;
[0109] Hough Transform Features: features computed using the Hough Transform method to detect straight lines; and
[0110] Neighborhood Line Detector Features: features computed in local neighborhoods detecting straight line patterns.

[0111] The above list of features is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In one embodiment, for neighborhood line detector features, an N-by-N-pixel square neighborhood is centered around each pixel in the image and considered for a fixed value of N. For example, N is equal to 9. The gradient of each pixel in the neighborhood is computed. Based on all the gradients of the pixels in the neighborhood, the dominant orientation angle indicative of the straight line pattern in the neighborhood is determined. Also, based on the number of pixels in the neighborhood aligned with the dominant orientation, the strength of the straight line pattern is determined. If there are a number of pixels forming a line and each of the neighborhoods centered at those pixels has strong and similarly oriented straight line patterns, the number of such pixels and the strength and similarity of orientations can be used as features for classification.
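Merely as an illustration of the Neighborhood Line Detector Features described above, the following C++ sketch estimates the dominant gradient orientation and its strength within an N-by-N neighborhood using central-difference gradients and an orientation histogram. The function name and the use of 10-degree orientation bins are illustrative assumptions rather than requirements of the specification.

#include <algorithm>
#include <cmath>
#include <vector>

// Dominant line orientation and its strength inside the N x N neighborhood
// centered at (cx, cy) of a row-major grayscale image.
void neighborhoodLineFeature(const std::vector<float>& img, int width, int height,
                             int cx, int cy, int N,
                             double& dominantAngle, double& strength) {
    const double PI = 3.14159265358979323846;
    const int half = N / 2, bins = 18; // 10-degree orientation bins
    std::vector<int> hist(bins, 0);
    int total = 0;
    for (int y = cy - half; y <= cy + half; ++y) {
        for (int x = cx - half; x <= cx + half; ++x) {
            if (x < 1 || y < 1 || x >= width - 1 || y >= height - 1) continue;
            double gx = img[y * width + x + 1] - img[y * width + x - 1];   // central differences
            double gy = img[(y + 1) * width + x] - img[(y - 1) * width + x];
            if (gx * gx + gy * gy < 1e-6) continue;                        // ignore flat pixels
            double angle = std::atan2(gy, gx);                             // gradient direction
            if (angle < 0) angle += PI;                                    // fold to [0, pi)
            int bin = std::min(bins - 1, static_cast<int>(angle / PI * bins));
            ++hist[bin];
            ++total;
        }
    }
    int best = 0;
    for (int b = 1; b < bins; ++b) if (hist[b] > hist[best]) best = b;
    dominantAngle = (best + 0.5) * PI / bins;                 // dominant gradient direction
    strength = total > 0 ? double(hist[best]) / total : 0.0;  // fraction of aligned pixels
}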
[0112] At the process 1620, certain features are selected from the plurality of features generated. For example, a subset of features is selected using an automatic method in which features are added and removed iteratively and classification accuracy is improved or optimized. In one embodiment, the feature selection process is repeated for each pair of the classes, and the accuracy for distinguishing between each pair of classes is improved. The accuracy may be determined between the result from the classifier and the predetermined characteristic of the image. For example, the image is associated with three classes including crystal, phase/precipitate and clear. In another example, for each pair of classes, certain features are selected from all the features obtained at the process 1610. The selection includes computing the Fisher Discriminant between the pair and evaluating its classification accuracy using the receiver operating characteristic (ROC) curve area, which is a plot between false negative rate and false positive rate. For three pairs of classes, three groups of selected features are determined. Each group corresponds to a pair of classes, and may be different from or the same as another group. Additionally, only for the Neighborhood Line Detector Features obtained at the process 1610, the feature selection process is performed. For example, the selection is related to two out of three pairs of classes, and two groups of selected Neighborhood Line Detector Features are determined. In yet another embodiment, the three classes can be subdivided using a clustering algorithm in order to use pairs of subclasses for the feature selection process.
[0113] At the process 1630, the selected features are projected. In one embodiment, all of the selected features are projected onto the lower dimensional feature space. For example, from 130 original features, 5 groups of features are selected. As discussed above, 3 groups of features are selected from all features for 3 pairs of classes, and 2 groups of features are selected from only Neighborhood Line Detector Features for 2 pairs of classes. These 5 groups of selected features are used to calculate 5 Fisher features. The number of dimensions is reduced from 130 to 5.
[0114] At the process 1640, the classifier is adjusted. In one embodiment, the Fisher features are input to a Feed Forward neural network. This network is trained using a neural network training algorithm such as the backpropagation algorithm. The neural network can have multiple outputs, each output indicating the likelihood of the image or the image tile being in one of the classes such as crystal, phase/precipitate or clear. If the image is divided into image tiles, the neural network outputs for the different image tiles are combined into a single output using a spatial fusion algorithm. Based on the comparison between the output from the neural network and the predetermined characteristics of the image, the neural network is adjusted. For example, the weights and/or biases of the neural network are changed.

[0115] At the process 1520, an image is classified. Figure 20 is a simplified method for classification according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The process 1520 includes process 1710 for generating features, process 1720 for projecting features, and process 1730 for determining image class. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. Other processes may be inserted among those noted above. Depending upon the embodiment, the specific sequences of processes may be interchanged, and some processes may be replaced. Further details of these processes are found throughout the present specification and more particularly below.
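Before turning to the classification processes of Figure 20, the following C++ sketch illustrates, merely as an example, the kind of feed-forward evaluation described at the process 1640 and used again at the process 1730 below: the Fisher features are passed through a hidden layer and an output layer with sigmoid activations to produce one likelihood per class. The Layer structure and the function names are hypothetical, and the backpropagation training and the spatial fusion of tile outputs are not shown.

#include <cmath>
#include <cstddef>
#include <vector>

// One fully connected layer with a sigmoid activation; weights come from training.
struct Layer {
    std::vector<std::vector<double> > weights; // [output][input]
    std::vector<double> biases;                // [output]
};

static std::vector<double> forward(const Layer& layer, const std::vector<double>& in) {
    std::vector<double> out(layer.biases.size());
    for (std::size_t o = 0; o < out.size(); ++o) {
        double sum = layer.biases[o];
        for (std::size_t i = 0; i < in.size(); ++i) sum += layer.weights[o][i] * in[i];
        out[o] = 1.0 / (1.0 + std::exp(-sum)); // sigmoid activation
    }
    return out;
}

// Returns one likelihood per class (e.g., crystal, phase/precipitate, clear)
// for a single image or image tile described by its Fisher features.
std::vector<double> classifyFisherFeatures(const Layer& hidden, const Layer& output,
                                           const std::vector<double>& fisherFeatures) {
    return forward(output, forward(hidden, fisherFeatures));
}

The crystal likelihood from such an evaluation could then be fused across tiles and compared against a threshold such as 50%, as described at the process 1730.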
[0116] At the process 1710, a number of features are generated. These features include all the features selected at the process 1620. In one embodiment, the features are computed on the entire image. In another embodiment, the image is divided into overlapping tiles or spatial components, and the features are computed on each image tile or spatial component. In yet another embodiment, the scrubbing and ripping operations are performed on the image prior to the process 1710.
[0117] At the process 1720, the selected features are projected. In one embodiment, all of the features selected at the process 1620 are projected onto the lower dimensional feature space. For example, from 130 original features, 5 groups of features are selected at the process 1620. These selected features are computed at the process 1710, and are used to calculate 5 Fisher features.
[0118] At the process 1730, the image class is determined. In one embodiment, the Fisher features are input to a Feed Forward neural network. The neural network can have multiple outputs, each output indicating the likelihood of the image or the image tile being in one of the classes such as crystal, phase/precipitate or clear. If the image is divided into image tiles, the neural network outputs for the different image tiles are combined into a single output using a spatial fusion algorithm. In another embodiment, the crystal likelihood is compared against a threshold. If the crystal likelihood is above the threshold, the image is classified as a crystal image. For example, the threshold is 50%.

[0119] As discussed above and further emphasized here, Figures 1-17 represent certain embodiments of the present invention, and these embodiments include many examples. For example, the T0 image and/or the TM image associated with some or all of the processes 510, 520, 530, 540, and 550 may be directly acquired by the imaging system 10, or generated from a plurality of images acquired by the imaging system 10. In one embodiment of the present invention, the imaging system 10 captures a plurality of images for the same area of the microfluidic system 30 at a plurality of z-focus positions respectively. The plurality of images at different z-planes are combined into one image used as the T0 image or the TM image.
[0120] Figure 21 is a simplified method for combining images according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The method 1800 includes process 1810 for determining image characteristics, process 1820 for performing statistical analysis, and process 1830 for generating combined image. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some processes are combined or expanded. Other processes may be inserted among those noted above. Depending upon the embodiment, the specific sequences of processes may be interchanged, and some processes may be replaced. Further details of these processes are found throughout the present specification and more particularly below.
[0121] At the process 1810, certain image characteristics are determined for the plurality of images. In one embodiment, for each pixel of each image, the sharpness and colorness are determined. For example, the sharpness is determined with the Laplacian operator, and the colorness is determined with the saturation of the HSV color model. At the process 1820, a statistical analysis is performed. In one embodiment, statistics such as the mean of sharpness and the mean of colorness are determined for all the images.
[0122] At the process 1830, a combined image is generated. For example,
[0123] CombinedImage(x, y) = [ Σ_m wt_m(x, y) × Image_m(x, y) ] / [ Σ_m wt_m(x, y) ], where each sum runs over m = 1, 2, ..., N (Equation 1)

[0124] wherein N is the number of images in the plurality of images, CombinedImage(x, y) is the intensity of the combined image at pixel (x, y), and Image_m(x, y) is the intensity of image m at pixel (x, y). For example, the image intensity has three components including red intensity, green intensity, and blue intensity. The intensity of the combined image associated with a given color is dependent upon the intensity of image m associated with the same color. The weight wt_m is determined based on the sharpness and colorness at pixel (x, y) for image m. For example,

[0125] wt_m(x, y) = 0.7 × Laplacian_m(x, y) / MeanLaplacian + 0.3 × Saturation_m(x, y) / MeanSaturation (Equation 2)

[0126] wherein Laplacian_m(x, y) and Saturation_m(x, y) are the values of the Laplacian operator and the saturation respectively for the pixel (x, y) on image m. MeanLaplacian is the mean of the Laplacian values for all pixels in all of the plurality of images, and MeanSaturation is the mean of the saturation values for all pixels in all of the plurality of images.
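Merely by way of illustration, the following C++ sketch applies Equations 1 and 2 to combine a plurality of z-plane images. It assumes that the per-pixel Laplacian and saturation values have already been computed for each image, and the ZPlaneImage structure and the combineImages function are hypothetical names.

#include <cstddef>
#include <vector>

// One z-plane image: red, green, and blue intensities plus per-pixel Laplacian
// (sharpness) and HSV saturation (colorness) values, each of length numPixels.
struct ZPlaneImage {
    std::vector<double> r, g, b;
    std::vector<double> laplacian;
    std::vector<double> saturation;
};

// Combine N z-plane images into one image per Equations 1 and 2.
void combineImages(const std::vector<ZPlaneImage>& images, int numPixels,
                   std::vector<double>& outR, std::vector<double>& outG,
                   std::vector<double>& outB) {
    // MeanLaplacian and MeanSaturation over all pixels of all images.
    double meanLap = 0.0, meanSat = 0.0;
    std::size_t count = 0;
    for (std::size_t m = 0; m < images.size(); ++m)
        for (int p = 0; p < numPixels; ++p) {
            meanLap += images[m].laplacian[p];
            meanSat += images[m].saturation[p];
            ++count;
        }
    if (count == 0) return;
    meanLap /= count;
    meanSat /= count;
    if (meanLap <= 0.0) meanLap = 1.0; // guard against degenerate inputs
    if (meanSat <= 0.0) meanSat = 1.0;

    outR.assign(numPixels, 0.0);
    outG.assign(numPixels, 0.0);
    outB.assign(numPixels, 0.0);
    for (int p = 0; p < numPixels; ++p) {
        double weightSum = 0.0, r = 0.0, g = 0.0, b = 0.0;
        for (std::size_t m = 0; m < images.size(); ++m) {
            // Equation 2: weight favors sharp (0.7) and colorful (0.3) pixels.
            double wt = 0.7 * images[m].laplacian[p] / meanLap
                      + 0.3 * images[m].saturation[p] / meanSat;
            weightSum += wt;
            r += wt * images[m].r[p];
            g += wt * images[m].g[p];
            b += wt * images[m].b[p];
        }
        // Equation 1: weighted average normalized by the sum of the weights.
        if (weightSum > 0.0) {
            outR[p] = r / weightSum;
            outG[p] = g / weightSum;
            outB[p] = b / weightSum;
        }
    }
}

The numerical example given below for pixel (10, 10) corresponds to one iteration of the combining loop.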
[0127] The method for combining images has various applications. For example, in certain microfluidic devices, a reaction chamber, such as a reaction channel or the protein well, has a large depth. The crystals can be located anywhere within the reaction chamber. Figure 22 is a simplified diagram for a deep chamber according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. A protein well 1900 has a depth of about 300 microns. In one example, the depth of focus of a 10X objective is less than 300 microns, and the single z-plane image capture cannot capture all the crystals 1910, 1920, and 1930. If the imaging system focuses on the middle of the protein well, the image may capture only the crystal 1920.
[0128] Figure 23 is a simplified diagram for capturing multiple images according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In one example, three images are acquired. Image #1 captures the crystal 1910, Image #2 captures the crystal 1920, and Image #3 captures the crystal 1930. The number of images depends on the objective and aperture settings of the imaging system. The smaller the aperture, the larger the depth of field, and the fewer images needed. For example, 5 images with a 70-micron step size may be used with a 10X objective. The captured multiple images are combined according to the method 1800.
[0129] In one embodiment, each of the three images has three components for a given (x, y) location. The three components include red intensity, green intensity, and blue intensity. Similarly, the combined image has the same three components for a given (x, y) location. For example, at the pixel location (10, 10), Image_1(10, 10) = (200, 100, 50), Image_2(10, 10) = (100, 200, 150) and Image_3(10, 10) = (50, 50, 50). The corresponding weights are wt_1(10, 10) = 0.1, wt_2(10, 10) = 10 and wt_3(10, 10) = 0.2. According to Equation 1, CombinedImage(10, 10) is as follows:
[0130] CombinedImage(10, 10) = [wt_1(10, 10) × Image_1(10, 10) + wt_2(10, 10) × Image_2(10, 10) + wt_3(10, 10) × Image_3(10, 10)] / [wt_1(10, 10) + wt_2(10, 10) + wt_3(10, 10)]
= [0.1 × (200, 100, 50) + 10 × (100, 200, 150) + 0.2 × (50, 50, 50)] / (0.1 + 10.0 + 0.2)
= ((0.1×200 + 10×100 + 0.2×50)/10.3, (0.1×100 + 10×200 + 0.2×50)/10.3, (0.1×50 + 10×150 + 0.2×50)/10.3)
= (100, 196.12, 147.09) (Equation 3)

[0131] where the combined image has a red intensity of 100, a green intensity of 196.12, and a blue intensity of 147.09 at x equal to 10 and y equal to 10. As discussed above and further emphasized here, Equation 3 is only an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.

[0132] Examples of the present invention include code that directs a processor to perform all or certain inventive processes as discussed above. The computer code is implemented using C++ or another computer language. The computer code is not intended to limit the scope of the claims herein. One of ordinary skill in the art would recognize other variations, modifications, and alternatives.

[0133] According to one embodiment of the present invention, a computer-readable medium includes instructions for processing an image of a microfluidic device. The computer-readable medium includes one or more instructions for receiving a first image of a microfluidic device. The first image includes a first fiducial marking and a first chamber region, and the first chamber region is associated with a first chamber boundary. Additionally, the computer-readable medium includes one or more instructions for transforming the first image into a first coordinate space based on at least information associated with the first fiducial marking, and one or more instructions for removing at least a first part of the first chamber boundary from the first image. Moreover, the computer-readable medium includes one or more instructions for processing information associated with the first chamber region, and one or more instructions for determining whether a first crystal is present in the first chamber region.

[0134] According to another embodiment of the present invention, a computer-readable medium includes instructions for processing a plurality of images of a microfluidic device. The computer-readable medium includes one or more instructions for receiving at least a first image and a second image of a microfluidic device. The first image and the second image are associated with a first focal position and a second focal position respectively, and each of the first image and the second image includes a first chamber region. Additionally, the computer-readable medium includes one or more instructions for processing information associated with the first image and the second image, and one or more instructions for generating a third image based on at least information associated with the first image and the second image. Moreover, the computer-readable medium includes one or more instructions for processing information associated with the third image, and one or more instructions for determining whether a first crystal is present in the first chamber region based on at least information associated with the third image.
[0135] According to yet another embodiment of the present invention, a computer-readable medium includes instructions for adjusting a classifier and processing an image of a microfluidic device. The computer-readable medium includes one or more instructions for receiving a first image of a microfluidic device. The first image is associated with at least a first predetermined characteristic. Additionally, the computer-readable medium includes one or more instructions for generating a first plurality of features based on at least information associated with the first image, and one or more instructions for selecting a second plurality of features from the first plurality of features based on at least information associated with the first plurality of features and the at least a first predetermined characteristic. Moreover, the computer-readable medium includes one or more instructions for determining a third plurality of features based on at least information associated with the second plurality of features, and one or more instructions for processing information associated with the third plurality of features. Also, the computer-readable medium includes one or more instructions for determining at least a first likelihood based on at least information associated with the third plurality of features and a first plurality of parameters, one or more instructions for processing information associated with the first likelihood and the at least a first predetermined characteristic, and one or more instructions for adjusting the first plurality of parameters based on at least information associated with the first likelihood and the at least a first predetermined characteristic.

[0136] In yet another embodiment, at the process 1350, a wall of the second control channel is detected. In one embodiment, once the interface line 1410 is located, the predetermined length of the reaction channel 1430 between the interface line 1410 and the containment line 1420 is used to calculate the position of the containment line 1420. The calculation provides an approximate location for the wall 1422. Afterwards, the approximate locations for the walls 1414 and 1422 are further adjusted by a fine-correction process. An exemplary computer code for fine correction is shown below.

int DiffusionCellImageTemplate::fineCorrectProteinChannelLocation(IplImage* t0Image, int proteinChannelBeginningInPixels, int totalProteinChannelLengthInPixels)
{
    int fineTuneDistance = CONTROL_LAYER_FINE_TUNE_DISTANCE_IN_MICRONS / this->m_engineConfiguration->getXMicronsPerPixel();
    this->StartImageTimer("fine correction start");

    RECT leftRect;
    RECT rightRect;
    leftRect.top = 0;
    leftRect.bottom = t0Image->height - 1;
    leftRect.left = proteinChannelBeginningInPixels - fineTuneDistance/2;
    leftRect.right = proteinChannelBeginningInPixels + fineTuneDistance/2;
    rightRect.top = 0;
    rightRect.bottom = t0Image->height - 1;
    rightRect.left = proteinChannelBeginningInPixels + totalProteinChannelLengthInPixels - fineTuneDistance/2;
    rightRect.right = proteinChannelBeginningInPixels + totalProteinChannelLengthInPixels + fineTuneDistance/2;

    IplImage* leftSide = ImageProcessor::extractImageRect(t0Image, &leftRect);
    IplImage* rightSide = ImageProcessor::extractImageRect(t0Image, &rightRect);
    int returnValue = proteinChannelBeginningInPixels;
    if ((leftSide == NULL) || (rightSide == NULL))
    {
        // no additional calculation done here - simply return the base protein channel beginning
    }
    else
    {
        this->PollImageTimer("both images non-null");
        // PERFORM THE FINE CORRECTION CALCULATION HERE
        int *leftSignal = ImageProcessor::calculateHorizontalDerivativeAmplitude(leftSide);
        int *rightSignal = ImageProcessor::calculateHorizontalDerivativeAmplitude(rightSide);
        // this->PollImageTimer("calculated derivative signals");
        if ((leftSignal != NULL) && (rightSignal != NULL))
        {
            this->PollImageTimer("both are non-null");
            int signalWidth = leftSide->width;
            int minLeftSignal = INT_MAX;
            int minRightSignal = INT_MAX;
            // determine the min of each signal
            for (int i = 0; i < signalWidth - 1; i++) // skip the last value as it is always zero
            {
                if (leftSignal[i] < minLeftSignal)
                    minLeftSignal = leftSignal[i];
                if (rightSignal[i] < minRightSignal)
                    minRightSignal = rightSignal[i];
            }
            // now subtract the min value
            for (int i = 0; i < signalWidth - 1; i++) // skip the last value as it is always zero
            {
                leftSignal[i] -= minLeftSignal;
                rightSignal[i] -= minRightSignal;
            }
            // now interrogate the possible benefit from each of the possible fine tuning values
            this->PollImageTimer("calculating penality function for each side");
            int *leftPenality = new int[signalWidth];
            int *rightPenality = new int[signalWidth];
            int leftSum = 0;
            int rightSum = 0;
            for (int i = 0; i < signalWidth; i++)
            {
                // calculate the sum used to normalize the left and right sides
                leftSum += leftSignal[i];
                rightSum += rightSignal[i];
                // now calculate the penality for each side
                leftPenality[i] = 0;
                rightPenality[i] = 0;
                // accumulate the signal contributions to the left of this perturbation
                for (int j = 0; j < i; j++)
                {
                    rightPenality[i] += rightSignal[j];
                }
                // accumulate the signal contributions to the right of this perturbation
                for (int j = signalWidth - 1; j >= i; j--)
                {
                    leftPenality[i] += leftSignal[j];
                }
            }
            // calculate the combined penality as a sum of the normalized penality contributions from
            // each side of the signal
            this->PollImageTimer("calculating combined penality function");
            double *combinedPenality = new double[signalWidth];
            double *combinedPenalityRaw = new double[signalWidth];
            for (int i = 0; i < signalWidth; i++)
            {
                double leftValue = ((double)leftPenality[i]) / (leftSum);
                double rightValue = ((double)rightPenality[i]) / (rightSum);
                // unless we're in the area in which we can average...
                combinedPenalityRaw[i] = rightValue + leftValue;
            }
// smooth the penality function to force the minimum peak to the center of the acceptable band // and calculate the minimum index double minPenality=le99; int minPenalityIndex=O; int smoothingWJiidow=SMOOTHING_WINDOW_FOR_CONTROL_LINE_DETERMINATIO N/this->m_engineConfiguration->getXMicronsP6rPixel() ; for (i=O;i<signalWidth;i++)
{ int left=i-smoothingWindow; int right=i+smoothingWindow; if(left<0) left=0; if (right>signalWidth-l)
Figure imgf000039_0001
combinedPenality[i]=0; for (int j=left; j<=right;j++)
{ combinedPenality[i] +=combinedPenalityRaw[j ] ;
} combinedPenality[i]/=(right-left); // normalize based on how much we were able to integrate if (combinedPenality[i]<miiiPenality) { minPenality=combinedPenality[i] ; minPenalityIndex=i;
} } ttøs->Poll!mageTimer("calculatmg offset"); // apply the fine correction to our return value returnValue+=minPenalityIndex-signalWidth/2; // subtract half the signal width since this was zero centered
//#defme DEBUG_FINE_CORRECT_CHANNEL #ifdefDEBUG_FINE_CORRECT_CHANKEL double *xValues=new double[leftSide->width]; double *yValuesl=new double[leftSide->width]; double *yValues2=new double[leftSide->width]; double *yValues3=new double[leftSide->width]; double *yValues4=new double[leftSide->width]; for (int ii=0;ii<signalWidth;ii++)
{ xValues[ii]=ii; yValues 1 [ii]=leftSignal[ii] ; yValues2[ii]=rightSignal[ii]; yValues3[ii]=((double)leftPenality[ii])/leftSum*2; yValues4[ii]=((double)rightPenality[ii])/rightSum*2;
}
CVGraphUtility newGraph; newGraph.plotDoubleXYData(xValues,yValues 1 ,signalWidth,xValues,yValues2,signalWidt h, "HorizontalDerivativeSignals ") ;
CVGraphUtility newGraph2; newGraph2.plotTripleXYData(xValues,yValues3,signalWidth,xValues,yValues4,signalWidt h, xValues,combinedPenality,signalWidth,"Penality Function"); delete[] xValues; delete[] yValuesl; delete[] yValues2; delete[] yValues3; delete[] yValues4; #endif
// free up values if (combinedPenality I=NULL)
{ deletef] combinedPenality; combinedPenality=0;
} if (combinedPenalityRaw ! =NULL)
{ delete[] combinedPenalityRaw; combinedPenalityRaw=0;
} if (leftPenality! =NULL)
{ delete[] leftPenality; leftPenality=O;
} if (rightPenality!=NULL)
{ delete[] rightPenality; rightPenality=O;
} } if(leftSignal!=MJLL)
{ deletef] leftSignal; leftSignal=O;
} if(rightSignal!=NULL)
{ delete[] rightSignal; rightSignal=0;
} } if (leftSide!=NULL) cvReleaseImage(&leftSide); if(rightSide!=NULL) cvReleaseImage(&rightSide); this->StopImageTimer(); return returnValue;
}
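Merely by way of illustration, the fine-correction routine above can be combined with the predetermined reaction-channel length described in paragraph [0136] to estimate the containment-line wall from a located interface line. The sketch below is not part of the exemplary listing; the function name locateContainmentWall, its parameters, and the assumption that the corrected channel beginning plus the channel length in pixels approximates the containment-line position are illustrative only and should not unduly limit the scope of the claims herein.

    // Illustrative sketch only (not from the exemplary listing above).
    // Estimates the containment-line wall from the located interface line and the
    // designed reaction-channel length, then refines the channel beginning with the
    // fine-correction routine. All names and the output pairing are assumptions.
    int locateContainmentWall(IplImage* t0Image,
                              int interfaceLineInPixels,
                              double reactionChannelLengthInMicrons,
                              double micronsPerPixel,
                              DiffusionCellImageTemplate* cellTemplate)
    {
        // convert the designed channel length to pixels at the current magnification
        int channelLengthInPixels = (int)(reactionChannelLengthInMicrons / micronsPerPixel);
        // refine the channel beginning (interface side) using edge content on both
        // sides of the channel
        int correctedBeginning = cellTemplate->fineCorrectProteinChannelLocation(
            t0Image, interfaceLineInPixels, channelLengthInPixels);
        // approximate containment-line wall: corrected beginning plus channel length
        return correctedBeginning + channelLengthInPixels;
    }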
[0137] As discussed above and further emphasized here, the above examples of computer-readable medium and computer code are merely examples, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, some processes may be achieved with hardware while other processes may be achieved with software. Some processes may be achieved with a combination of hardware and software. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Depending upon the embodiment, the specific sequence of processes may be interchanged, or some processes may be replaced with others.
[0138] Appendix A and Appendix B are attached as part of the present patent application. These appendices provide some examples and should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
[0139] It is understood that the examples and embodiments described herein are for illustrative purposes only and that various modifications or changes in light thereof will be suggested to persons skilled in the art and are to be included within the spirit and purview of this application and scope of the appended claims.
Cell-Signal Detection

Assumed: 1) Z-registered images; 2) no crystal growth at time t = t0. (Cross-polarization figures showing no activity are omitted.)

Detection of Interface Line

No Fly Zone on Interface Line: T0 image; scrubbed T1 without interface No Fly stamp; scrubbed T1 WITH interface No Fly stamp (figures omitted).

Aw020-C1-DE1, day 5.

No Fly Zone around Containment Line: T0 image and scrubbed T1 image (5 days later) (figures omitted).

Compensation for Collapsing Wells

Algorithm:
1. Extract a rectangle around the T0 well as usual.
2. Take the average of a succession of rectangle perimeters from the image.
3. Find the minimum value of this vector and its index.
4. Repeat the process for T1 (T2, T3, ...).
5. Calculate the difference in the indices. Use this value for additional padding to the original T0 no fly zone.

Steps 1-3 and the corresponding T1 image are shown in figures omitted here; the illustrated index shift from pixel index t0 = 10 corresponds to a shift of -8 microns, and the image is taken from a water chip after 1 day.
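Merely as an illustration of the compensation algorithm outlined above, the following C++ sketch averages successive rectangle perimeters working inward from the extracted well rectangle, locates the index of the minimum average for the T0 image and a later image, and uses the index difference as extra padding for the T0 no fly zone. The image layout, the helper names, and the simplification of not storing the perimeter averages as a vector are assumptions and should not unduly limit the scope of the claims herein.

    // Mean intensity of the rectangle perimeter inset by `inset` pixels from the
    // border of an 8-bit grayscale rectangle stored row-major.
    double perimeterMean(const unsigned char* img, int width, int height, int inset)
    {
        long sum = 0; long count = 0;
        int left = inset, right = width - 1 - inset;
        int top = inset, bottom = height - 1 - inset;
        for (int x = left; x <= right; ++x) {
            sum += img[top * width + x]; sum += img[bottom * width + x]; count += 2;
        }
        for (int y = top + 1; y < bottom; ++y) {
            sum += img[y * width + left]; sum += img[y * width + right]; count += 2;
        }
        return count ? double(sum) / count : 0.0;
    }

    // Steps 2 and 3: index (inset) of the minimum perimeter average.
    int minPerimeterIndex(const unsigned char* img, int width, int height)
    {
        int maxInset = (width < height ? width : height) / 2 - 1;
        int best = 0; double bestVal = 1e99;
        for (int inset = 0; inset <= maxInset; ++inset) {
            double v = perimeterMean(img, width, height, inset);
            if (v < bestVal) { bestVal = v; best = inset; }
        }
        return best;
    }

    // Step 5: extra no-fly padding is the shift of the minimum between T0 and Tn.
    int extraNoFlyPadding(const unsigned char* t0Rect, const unsigned char* tnRect,
                          int width, int height)
    {
        return minPerimeterIndex(tnRect, width, height) - minPerimeterIndex(t0Rect, width, height);
    }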
METHOD AND SYSTEM FOR MICROFLUIDIC DEVICE AND IMAGING THEREOF
Inventors: Emerson Quan
A citizen of the United States
2415 Ardee Lane
South San Francisco, CA 94080
Colin Jon Taylor, a citizen of the United States 506-1/2 Aimer Road Burlingame, CA 94010
Michael Lee Portland, OR
Christopher Ceasar Sunnyvale, CA
Greg Harris Longmont, CO
Assignee: Fluidigm Corporation 7100 Shoreline Court South San Francisco, CA, 94080
Entity: Small
TOWNSEND and TOWNSEND and CREW LLP Two Embarcadero Center, 8th Floor San Francisco, California 94111-3834 Tel: 650-326-2400

METHOD AND SYSTEM FOR MICROFLUIDIC DEVICE AND IMAGING THEREOF
CROSS-REFERENCES TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application Nos. 60/472,226 filed May 20, 2003, 60/490,666 filed July 28, 2003, and 60/490,584 filed July 28, 2003, all of which are commonly assigned and incorporated by reference herein for all purposes.
STATEMENT AS TO RIGHTS TO INVENTIONS MADE UNDER FEDERALLY SPONSORED RESEARCH AND DEVELOPMENT
[0002] NOT APPLICABLE
REFERENCE TO A "SEQUENCE LISTING," A TABLE, OR A COMPUTER
PROGRAM LISTING APPENDIX SUBMITTED ON A COMPACT DISK. [0003] NOT APPLICABLE
BACKGROUND OF THE INVENTION
[0004] According to the present invention, techniques for microfluidic systems, including a microfluidic chip or circuit, are provided. More particularly, the invention provides a microfluidic structure and method of manufacture, and a system and method for imaging a microfluidic device. Merely by way of example, the fiducial markings are used for processing and imaging a microfluidic chip, but it would be recognized that the invention has a much broader range of applicability.
[0005] Microfluidic techniques have progressed over time. Certain techniques of producing microelectromechanical (MEMS) structures have been proposed. Such MEMS structures include pumps and valves. The pumps and valves are often silicon-based and are made from bulk micro-machining (which is a subtractive fabrication method whereby single crystal silicon is lithographically patterned and then etched to form three-dimensional structures). The pumps and valves also use surface micro-machining (which is an additive method where layers of semiconductor-type materials such as polysilicon, silicon nitride, silicon dioxide, and various metals are sequentially added and patterned to make three-dimensional structures). Unfortunately, certain limitations exist with these conventional MEMS structures and techniques for making them.
[0006] As merely an example, a limitation of silicon-based micro-machining is that the stiffness of the semiconductor materials used necessitates high actuation forces, which result in large and complex designs. In fact, both bulk and surface micro-machining methods are often limited by the stiffness of the materials used. Additionally, adhesion between various layers of the fabricated device is also a problem. For example, in bulk micro-machining, wafer bonding techniques must be employed to create multilayer structures. On the other hand, when surface micro-machining, thermal stresses between the various layers of the device limit the total device thickness, often to approximately 20 microns. Using either of the above methods, clean room fabrication and careful quality control are required.
[0007] Accordingly, techniques for manufacturing microfluidic systems using an elastomeric structure have been proposed. As merely an example, these structures are often made by forming an elastomeric layer on top of a micromachined mold. The micromachined mold has a raised protrusion which forms a recess extending along a bottom surface of the elastomeric layer. The elastomeric layer is bonded to other elastomeric layers to form fluid and control regions. The elastomeric layer has overcome certain limitations of conventional MEMS based structures. Further details of other characteristics of these elastomeric layers for microfluidic applications such as crystallization have been provided below.
[0008] Crystallization is an important technique in the biological and chemical arts. Specifically, a high-quality crystal of a target compound can be analyzed by x-ray diffraction techniques to produce an accurate three-dimensional structure of the target. This three-dimensional structure information can then be utilized to predict functionality and behavior of the target.
[0009] In theory, the crystallization process is simple. A target compound in pure form is dissolved in solvent. The chemical environment of the dissolved target material is then altered such that the target is less soluble and reverts to the solid phase in crystalline form. This change in chemical environment is typically accomplished by introducing a crystallizing agent that makes the target material less soluble, although changes in temperature and pressure can also influence solubility of the target material. [0010] In practice however, forming a high quality crystal is generally difficult, often requiring much trial and error and patience on the part of the researcher. Specifically, the highly complex structure of even simple biological compounds means that they are usually not amenable to forming a highly ordered crystalline structure. Therefore, a researcher needs to be patient and methodical, experimenting with a large number of conditions for crystallization, altering parameters such as sample concentration, solvent type, countersolvent type, temperature, and duration in order to obtain a high quality crystal.
[0011] A high-throughput system for screening conditions for crystallization of target materials, for example proteins, is provided in a microfluidic device. The array of metering cells is formed by a multilayer elastomeric manufacturing process. Each metering cell comprises one or more pairs of opposing chambers, each chamber being in fluid communication with the other through an interconnecting microfluidic channel, one chamber containing a protein solution, and the other, opposing chamber, containing a crystallization reagent. Along the channel, a valve is situated to keep the contents of opposing chambers from each other until the valve is opened, thus allowing free interface diffusion to occur between the opposing chambers through the interconnecting microfluidic channel. As the opposing chambers approach equilibrium with respect to crystallization reagent and protein concentrations as free interface diffusion progresses, the protein would at some point form a crystal under certain conditions. In some embodiments, the microfluidic devices taught by Hansen et al. have arrays of metering cells containing chambers for conducting protein crystallization experiments therein. Use of such arrays in turn provides for high-throughput testing of numerous conditions for protein crystallization which require analysis. See PCT publication WO 02/082047 by Hansen et al., published October 17, 2002. PCT publication WO 02/082047 is incorporated by reference herein in its entirety for all purposes. [0012] From the above, it is seen that improved techniques for elastomeric design and analysis are highly desirable.
BRIEF SUMMARY OF THE INVENTION
[0013] According to the present invention, techniques for microfluidic systems, including a microfluidic chip or circuit, are provided. More particularly, the invention provides a microfluidic structure and method of manufacture, and a system and method for imaging a microfluidic device. Merely by way of example, the fiducial markings are used for processing and imaging a microfluidic chip, but it would be recognized that the invention has a much broader range of applicability. [0014] In a specific embodiment, the invention provides a biological substrate, e.g., microfluidic chip. The substrate includes a rigid substrate material, which has a surface region capable of acting as a handle substrate. The substrate also has a deformable fluid layer (e.g., polymeric material, silicone, silicone rubber, rubber, plastic, PDMS) coupled to the surface region. One or more well regions are formed in a first portion of the deformable fluid layer and are capable of holding a fluid therein. The one or more channel regions are formed in a second portion of the deformable fluid layer and are coupled to one or more of the well regions. An active region is formed in the deformable fluid layer. Such active region includes the one or more well regions, which are designed to hold fluid. A non-active region is formed in the deformable fluid layer. The non-active region is formed outside of the first portion and the second portion. Preferably, at least three fiducial markings are formed within the non-active region and disposed in a spatial manner associated with at least one of the well regions. A control layer is coupled to the fluid layer. Preferably, the substrate also includes another fiducial marking with a pre-designed shape and size, including at least an edge and center region.
[0015] In an alternative specific embodiment, the invention provides a method of fabricating a biological substrate. The method includes providing a rigid substrate material, which has a surface region and is capable of acting as a handle substrate. The method includes coupling a deformable fluid layer to the surface region of the rigid substrate. The deformable layer has one or more well regions formed in a first portion of the deformable fluid layer and one or more channel regions formed in a second portion of the deformable fluid layer. An active region is formed in the deformable fluid layer. A non-active region is formed in the deformable fluid layer and is formed outside of the first portion and the second portion. Preferably, at least three fiducial markings are formed within the non-active region and are disposed in a spatial manner associated with at least one of the well regions. The method also includes coupling a control layer to the fluid layer. [0016] In yet an alternative embodiment, the invention provides a method of manufacturing microfluidic chip structures. The method includes providing a mold substrate including a plurality of well patterns. Each of the well patterns is provided within a portion of an active region of a fluidic chip. The method includes forming a plurality of fiducial marking patterns around a vicinity of each of the well patterns. Each of the plurality of fiducial marking patterns is within a portion of a non-active region of a fluidic chip. The plurality of fiducial marking patterns includes a set of alignment marks disposed spatially around each of the well patterns. The method also includes forming a thickness of deformable material within the plurality of well patterns and within the plurality of fiducial marking patterns to fill a portion of the mold substrate. The method includes coupling the thickness of deformable material, including a plurality of wells formed from the well patterns and a plurality of fiducial markings formed from the fiducial marking patterns, to a rigid substrate material. [0017] In yet another alternative embodiment, the present invention provides a microfluidic system. The system has a rigid substrate material, which includes a surface region that is capable of acting as a handle substrate. The system has a deformable fluid layer coupled to the surface region. One or more well regions is formed in a first portion of the deformable fluid layer. The one or more well regions is capable of holding a fluid therein. The system has one or more channel regions formed in a second portion of the deformable fluid layer. The one or more channel regions is coupled to one or more of the well regions. An active region is formed in the deformable fluid layer. The active region includes the one or more well regions. A non-active region is formed in the deformable fluid layer. The non-active region is formed outside of the first portion and the second portion. A first fiducial marking is formed within the non-active region and is disposed in a spatial manner associated with at least one of the channel regions. A second fiducial marking is formed within the non-active region and is disposed in a spatial manner associated with at least one of the well regions. A control layer is coupled to the fluid layer. The control layer includes one or more control regions. A third fiducial marking is formed within the control layer. [0018] In yet an alternative specific embodiment, the present invention provides another microfluidic system.
The system has a substrate comprising a surface region. A deformable layer is coupled to the surface of the substrate. The deformable layer comprises at least a thickness of first material. A control layer is coupled to the deformable layer to form a sandwich structure including at least the substrate, the deformable layer, and the control layer. The control layer is made of at least a thickness of second material. At least one fiducial marking is provided within either the control layer or the deformable layer or the substrate. The fiducial marking is characterized by a visual pattern provided in a volume surrounded wholly or partially by at least the substrate, the first material, or the second material. Preferably, a fluid is disposed within the open volume of the one fiducial marking. The fluid is characterized by a refractive index that is substantially lower than its surrounding regions, e.g., first thickness of material, second thickness of material, substrate. That is, the refractive index may be associated with air or other like fluid and the surrounding regions are characterized by a refractive index associated with a solid according to a specific embodiment.
[0019] Numerous benefits are achieved using the present invention over conventional techniques. The invention provides at least one way to form alignment patterns for a deformable active region for a microfluidic system according to a specific embodiment. The invention can also use conventional materials, which are relatively easy to use. Preferably, the invention provides at least two sets of alignment marks, including one set of spatially disposed fiducial markings and a pre-designated pattern, which has an edge and center region. Depending upon the embodiment, one or more of these benefits may exist. These and other benefits have been described throughout the present specification and more particularly below.
[0020] In yet another specific embodiment, the invention provides a method for processing a microfluidic device, e.g., microfluidic chip, biological chip. The method includes providing a flexible substrate including a first plurality of fiducial markings, and determining a first plurality of actual locations corresponding to the first plurality of fiducial markings respectively. The first plurality of fiducial markings is associated with a first plurality of design locations respectively. Additionally, the method includes processing information associated with the first plurality of actual locations and the first plurality of design locations, and determining a transformation between a design space and a measurement space. The design space is associated with the first plurality of design locations, and the measurement space is associated with the first plurality of actual locations. Moreover, the method includes performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space. Also, the method includes acquiring a first plurality of images of the first fiducial marking, processing information associated with the first plurality of images, performing a second alignment to the flexible substrate based on at least information associated with the first plurality of images, and acquiring a second image of the flexible substrate. [0021] According to yet another embodiment, a method for processing a microfluidic device includes providing a flexible substrate including at least three fiducial markings, a first additional fiducial marking, and a first chamber capable of holding a fluid therein. Additionally, the method includes determining a transformation between a design space and a measurement space based on at least information associated with the at least three fiducial markings, and performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space. Moreover, the method includes acquiring at least a first image of the first additional fiducial marking associated with the first chamber, performing a second alignment to the flexible substrate based on at least information associated with the first image, and acquiring a second image of the first chamber associated with the flexible substrate.
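The transformation between the design space and the measurement space can, merely by way of example, be modeled as a planar affine map computed from three fiducial correspondences. The C++ sketch below is illustrative only: the structure names, the use of exactly three fiducials, and the Cramer's rule solution are assumptions about one possible implementation and are not taken from the present application.

    #include <array>
    #include <stdexcept>

    struct Point2 { double x, y; };

    // Maps (x, y) to (a*x + b*y + tx, c*x + d*y + ty).
    struct Affine2D { double a, b, tx, c, d, ty; };

    static double det3(double m00, double m01, double m02,
                       double m10, double m11, double m12,
                       double m20, double m21, double m22)
    {
        return m00 * (m11 * m22 - m12 * m21)
             - m01 * (m10 * m22 - m12 * m20)
             + m02 * (m10 * m21 - m11 * m20);
    }

    // Fit the affine transform taking three design-space fiducial locations to
    // their three measured locations (exactly determined system, Cramer's rule).
    Affine2D fitDesignToMeasurement(const std::array<Point2, 3>& design,
                                    const std::array<Point2, 3>& measured)
    {
        const Point2& p0 = design[0]; const Point2& p1 = design[1]; const Point2& p2 = design[2];
        double D = det3(p0.x, p0.y, 1.0, p1.x, p1.y, 1.0, p2.x, p2.y, 1.0);
        if (D == 0.0) throw std::runtime_error("fiducials are collinear");
        Affine2D t;
        // Cramer's rule for x' = a*x + b*y + tx
        t.a  = det3(measured[0].x, p0.y, 1.0, measured[1].x, p1.y, 1.0, measured[2].x, p2.y, 1.0) / D;
        t.b  = det3(p0.x, measured[0].x, 1.0, p1.x, measured[1].x, 1.0, p2.x, measured[2].x, 1.0) / D;
        t.tx = det3(p0.x, p0.y, measured[0].x, p1.x, p1.y, measured[1].x, p2.x, p2.y, measured[2].x) / D;
        // Cramer's rule for y' = c*x + d*y + ty
        t.c  = det3(measured[0].y, p0.y, 1.0, measured[1].y, p1.y, 1.0, measured[2].y, p2.y, 1.0) / D;
        t.d  = det3(p0.x, measured[0].y, 1.0, p1.x, measured[1].y, 1.0, p2.x, measured[2].y, 1.0) / D;
        t.ty = det3(p0.x, p0.y, measured[0].y, p1.x, p1.y, measured[1].y, p2.x, p2.y, measured[2].y) / D;
        return t;
    }

    // Map a design-space target (e.g., a well center) into measurement/stage coordinates.
    Point2 applyAffine(const Affine2D& t, const Point2& p)
    {
        Point2 out;
        out.x = t.a * p.x + t.b * p.y + t.tx;
        out.y = t.c * p.x + t.d * p.y + t.ty;
        return out;
    }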
[0022] According to yet another embodiment, the invention provides a system for processing one or more microfluidic devices. The system includes one or more computer-readable media and a stage for locating a flexible substrate. The flexible substrate comprises at least three fiducial markings, a first additional fiducial marking, and a first chamber capable of holding a fluid therein. The one or more computer-readable media include one or more instructions for providing a flexible substrate, and one or more instructions for determining a transformation between a design space and a measurement space based on at least information associated with the at least three fiducial markings. Additionally, the one or more computer-readable media include one or more instructions for performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space, one or more instructions for acquiring at least a first image of the first additional fiducial marking associated with the first chamber, one or more instructions for performing a second alignment to the flexible substrate based on at least information associated with the first image, and one or more instructions for acquiring a second image of the first chamber associated with the flexible substrate.
[0023] According to yet another embodiment of the present invention, a method for processing a microfluidic device includes providing a flexible substrate (e.g., polymer, silicone based, rubber) comprising one or more well regions and a plurality of fiducial marks. The well regions are capable of holding a fluid therein and at least three of the fiducial marks are within a vicinity of one of the well regions. Preferably, the flexible substrate has been provided on a rigid member. The method includes locating the flexible substrate on a stage and capturing an image of at least the three fiducial marks within the vicinity of the one well region of the flexible substrate to generate a mapping from a design space to a measurement space. The method also includes aligning the flexible substrate to an image acquisition location using at least the mapping from the design space and one additional fiducial mark, wherein the at least one additional fiducial mark is associated with the one well region. The method also includes acquiring a high-resolution image of at least the one well region and storing the high-resolution image in a memory.
[0024] In yet another alternative specific embodiment, the invention provides a system for processing one or more microfluidic devices. The system includes one or more computer memories. The system also includes a stage for locating a flexible substrate, which has one or more well regions and a plurality of fiducial marks. The well regions are capable of holding a fluid therein. At least three of the fiducial marks are within a vicinity of one of the well regions. The one or more computer memories comprise one or more computer codes. The one or more computer codes include a first code directed to capturing an image of at least the three fiducial marks within the vicinity of the one well region of the flexible substrate to generate a mapping from a design space to a measurement space. A second code is directed to aligning the flexible substrate to an image acquisition location using at least the mapping from the design space and one additional fiducial mark, wherein the at least one additional fiducial mark is associated with the one well region. A third code is directed to acquiring a high-resolution image of at least the one well region. A fourth code is directed to storing the high-resolution image in a memory. Depending upon the embodiment, there may also be other computer codes to implement the functionality described herein as well as outside of the specification. [0025] In yet another alternative specific embodiment, the invention provides a method of processing a biological microfluidic device. The method includes providing a deformable substrate comprising one or more metering cells, which are capable of containing a fluid therein. The method also includes locating the deformable substrate on a stage translatable in x, y, and z directions and translating the stage to image at least four fiducial marks associated with the deformable substrate. The method determines x, y, and z positions (or other like spatial positions) of the at least four fiducial marks according to a preferred embodiment. The method computes a non-planar mapping between a design space and a measurement space based on the x, y, and z positions of the at least four fiducial marks and translates the stage to an image acquisition position calculated using the non-planar mapping. A step of capturing an image of at least one metering cell is included.
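Paragraph [0025] does not prescribe a particular non-planar surface model. Purely as an illustration, the z heights measured at four fiducial marks could be interpolated with a bilinear surface z = c0 + c1*x + c2*y + c3*x*y, as sketched below in C++; the function names and the choice of a bilinear patch are assumptions, and a least-squares fit over more fiducials would serve equally well.

    #include <array>
    #include <cmath>
    #include <stdexcept>
    #include <utility>

    // z = c[0] + c[1]*x + c[2]*y + c[3]*x*y fitted exactly through four fiducials.
    struct BilinearZ { std::array<double, 4> c; };

    BilinearZ fitZSurface(const std::array<double, 4>& x,
                          const std::array<double, 4>& y,
                          const std::array<double, 4>& z)
    {
        // Build the 4x4 system A * c = z (augmented matrix) and solve it by
        // Gauss-Jordan elimination with partial pivoting.
        double A[4][5];
        for (int i = 0; i < 4; ++i) {
            A[i][0] = 1.0; A[i][1] = x[i]; A[i][2] = y[i]; A[i][3] = x[i] * y[i];
            A[i][4] = z[i];
        }
        for (int col = 0; col < 4; ++col) {
            int pivot = col;
            for (int r = col + 1; r < 4; ++r)
                if (std::fabs(A[r][col]) > std::fabs(A[pivot][col])) pivot = r;
            for (int k = 0; k < 5; ++k) std::swap(A[col][k], A[pivot][k]);
            if (std::fabs(A[col][col]) < 1e-12)
                throw std::runtime_error("degenerate fiducial layout");
            for (int r = 0; r < 4; ++r) {
                if (r == col) continue;
                double f = A[r][col] / A[col][col];
                for (int k = col; k < 5; ++k) A[r][k] -= f * A[col][k];
            }
        }
        BilinearZ s;
        for (int i = 0; i < 4; ++i) s.c[i] = A[i][4] / A[i][i];
        return s;
    }

    // Predicted focus height at a design-space position (xPos, yPos).
    double zAt(const BilinearZ& s, double xPos, double yPos)
    {
        return s.c[0] + s.c[1] * xPos + s.c[2] * yPos + s.c[3] * xPos * yPos;
    }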
[0026] According to yet another embodiment of the present invention, a method for producing an image of an object within a chamber of a microfluidic device includes providing the microfluidic device. The microfluidic device has x, y, and z dimensions and a chamber depth center point located between a top wall and a bottom wall of the chamber along the z dimension. The chamber depth center point is located a known z dimension distance from an optically detectable fiducial marking embedded within the microfluidic device at a z depth. Additionally, the method includes placing the microfluidic device within an imaging system. The imaging system includes an optical device capable of detecting the fiducial marking and transmitting the image of the object. The optical device defines an optical path axially aligned with the z dimension of the microfluidic device and has a focal plane perpendicular to the optical path. When the focal plane is moved along the optical path in line with the fiducial marking, the fiducial marking is maximally detected when the focal plane is at the z depth in comparison to when the focal plane is not substantially in-plane with the z depth. Additionally, the imaging system includes an image processing device in communication with the optical device. The image processing device is able to control the optical device to cause the focal plane to move along the z axis and move the focal plane to maximally detect the fiducial marking. The image processing device is further able to transmit the image of the object. Additionally, the method includes controlling the optical device with the image processing device to cause the focal plane to move along the optical path until the optical device maximally detects the fiducial marking. Moreover, the method includes controlling the optical device with the image processing device to move the focal plane along the optical path the z dimension distance to cause the field depth center point to be located at the chamber depth center point. Moreover, the method includes imaging the object within the chamber while the focal plane is located at the chamber depth center point.
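One way to realize the condition that the fiducial marking is "maximally detected," as recited in paragraph [0026], is to sweep the focal plane through candidate z positions, score each captured frame for sharpness, and keep the z position with the best score before applying the known z dimension distance to reach the chamber depth center point. The sketch below is illustrative only; the gradient-based focus metric, the captureAt callback standing in for the stage and camera interface, and all names are assumptions rather than features of the present application.

    #include <vector>
    #include <cstddef>

    // Focus score for an 8-bit grayscale image stored row-major: sum of squared
    // horizontal and vertical intensity differences. Sharper fiducial edges give
    // a larger score.
    double focusScore(const unsigned char* img, int width, int height)
    {
        double score = 0.0;
        for (int y = 0; y + 1 < height; ++y) {
            for (int x = 0; x + 1 < width; ++x) {
                int p  = img[y * width + x];
                int dx = img[y * width + x + 1] - p;
                int dy = img[(y + 1) * width + x] - p;
                score += double(dx) * dx + double(dy) * dy;
            }
        }
        return score;
    }

    // Sweep the focal plane through candidate z positions (assumed non-empty),
    // score the fiducial image captured at each one, and return the best z plus
    // the known offset to the chamber depth center. captureAt is assumed to move
    // the focal plane to z and return a grayscale frame as a byte vector.
    template <typename CaptureFn>
    double focusOnChamber(const std::vector<double>& zCandidates,
                          double zOffsetToChamberCenter,
                          int width, int height,
                          CaptureFn captureAt)
    {
        double bestZ = zCandidates.front();
        double bestScore = -1.0;
        for (std::size_t i = 0; i < zCandidates.size(); ++i) {
            std::vector<unsigned char> frame = captureAt(zCandidates[i]);
            double s = focusScore(frame.data(), width, height);
            if (s > bestScore) { bestScore = s; bestZ = zCandidates[i]; }
        }
        return bestZ + zOffsetToChamberCenter;
    }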
[0027] According to yet another embodiment of the present invention, a system for producing an image of an object within a chamber of a microfluidic device includes the microfluidic device. The microfluidic device has x, y, and z dimensions and a chamber depth center point located between a top wall and a bottom wall of the chamber along the z dimension. The chamber depth center point is located a known z dimension distance from an optically detectable fiducial marking embedded within the microfluidic device at a z depth. Additionally, the system includes an imaging system for placing the microfluidic device therein. The imaging system includes an optical device capable of detecting the fiducial marking and transmitting the image of the object. The optical device defines an optical path axially aligned with the z dimension of the microfluidic device and having a focal plane. When the focal plane is moved along the optical path in line with the fiducial marking, the fiducial marking is maximally detected when the focal plane is substantially in-plane with the z depth as compared to when the field depth center point is not substantially in-plane with the z depth. Additionally, the imaging system includes an image processing device in communication with the optical device. The image processing device is able to control the optical device to cause the focal plane to move along the z axis and move the field depth center point to maximally detect the fiducial marking. The image processing device is able to transmit the image of the object. The image processing device is in operable communication with the optical device to cause the focal plane to move along the optical path until the optical device maximally detects the fiducial marking. When the image processing device causes the optical device to move the focal plane along the optical path the z dimension distance, the focal point is located at said chamber depth center point.
[0028] According to yet another embodiment of the present invention, a method for producing an image of a chamber within a microfluidic device includes imaging the microfluidic device to produce an image using an imaging system having an optical path in the z plane of the microfluidic device, and mapping from the image a first set of coordinates of the microfluidic device to determine whether the microfluidic device is skewed or distorted when compared to a coordinate map of an ideal microfluidic device. Additionally, the method includes positioning the microfluidic device so as to position the chamber within the optical path based on a calculated coordinate position determined by computing a matrix transformation between the first set of coordinates of the microfluidic device and the coordinate map of the ideal microfluidic device. Moreover, the method includes obtaining a time zero image of the microfluidic device chamber. The time zero image contains images of artifacts present in the microfluidic device. Also, the method includes obtaining a second image of the microfluidic device chamber and subtracting the first image of the microfluidic device chamber from the second image of the microfluidic chamber to produce an image of the chamber without time zero artifacts.
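The time zero subtraction of paragraph [0028] removes artifacts that are already present at t = t0 so that later changes, such as crystal growth, stand out. A minimal sketch is shown below, assuming 8-bit grayscale images of equal size stored row-major; the use of an absolute difference is merely one possible choice and is not required by the present application.

    #include <vector>
    #include <cstdlib>
    #include <cstddef>

    // Subtract the time-zero image from a later image of the same chamber so that
    // static artifacts present at t = t0 (channel walls, debris, fiducials) drop
    // out and only changes such as crystal growth remain.
    std::vector<unsigned char> subtractTimeZero(const std::vector<unsigned char>& t0,
                                                const std::vector<unsigned char>& tn)
    {
        std::vector<unsigned char> diff(tn.size(), 0);
        std::size_t n = tn.size() < t0.size() ? tn.size() : t0.size();
        for (std::size_t i = 0; i < n; ++i)
            diff[i] = static_cast<unsigned char>(std::abs(int(tn[i]) - int(t0[i])));
        return diff;
    }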
[0029] Numerous benefits are achieved using the present invention over conventional techniques. Some embodiments provide alignment and/or focus based on mapping between the design space and the measurement space. The transformation between the design space and the measurement space uses, for example, at least three fiducial markings. Certain embodiments provide accurate focusing by acquiring and analyzing a plurality of images along at least one dimension. Some embodiments of the present invention perform alignment and focusing on a microfluidic device including at least one flexible substrate. The alignment and focusing take into account the deformation of the flexible substrate. Certain embodiments improve throughput in the imaging system. For example, the imaging system uses a computer system to automatically perform alignment and focusing. In another example, mapping from the design space to the measurement space increases the accuracy of stage positioning, and thereby, the efficiency of high-resolution image acquisition. Depending upon the embodiment, one or more of these benefits may exist. These and other benefits have been described throughout the present specification and more particularly below.
[0030] Various additional objects, features and advantages of the present invention can be more fully appreciated with reference to the detailed description and accompanying drawings that follow.
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] Figures 1-10 are simplified diagrams illustrating a method for fabricating a microfluidic system according to an embodiment of the present invention;
[0032] Figure 11 is a simplified cross-sectional view diagram of a microfluidic system according to an embodiment of the present invention;
[0033] Figure 12 is a simplified top-view diagram of a microfluidic system according to an alternative embodiment of the present invention;
[0034] Figure 13 is a simplified top and side-view diagram of a microfluidic system according to an alternative embodiment of the present invention; [0035] Figure 13A is a simplified top-view diagram of a microfluidic system including carrier and identification code according to an embodiment of the present invention;
[0036] Figure 14 is a simplified imaging system for imaging objects within a microfluidic device according to an embodiment of the present invention;
[0037] Figures 15A and 15B are a simplified microfluidic device according to an embodiment of the present invention; [0038] Figures 16A and 16B are simplified actual image in measurement space and simplified virtual image in design space respectively according to an embodiment of the present invention;
[0039] Figures 17A, 17B, and 17C show a simplified method for image subtraction and masking according to an embodiment of the present invention;
[0040] Figure 18 is a simplified imaging method for microfluidic device according to an embodiment of the present invention;
[0041] Figure 19 is a simplified method for mapping between the measurement space and the design space according to an embodiment of the present invention; [0042] Figure 20 is a simplified diagram for fiducial markings according to an embodiment of the present invention;
[0043] Figure 21 is a simplified method for locating fiducial marking according to an embodiment of the present invention;
[0044] Figure 22 is a simplified metering cell shifted from design position according to an embodiment of the present invention;
[0045] Figure 23 is a simplified method for aligning and focusing image system according to an embodiment of the present invention;
[0046] Figure 24 is a simplified method for acquiring images of fiducial marking according to an embodiment of the present invention; [0047] Figure 25 is a simplified method for aligning and focusing image system according to an embodiment of the present invention;
[0048] Figure 26 is a simplified image acquired and analyzed according to an embodiment of the present invention;
[0049] Figure 27 shows simplified curves for focus score as a function of z position obtained at the process 804 according to an embodiment of the present invention;
[0050] Figure 28 shows simplified curves for focus score as a function of z position according to one embodiment of the present invention;
[0051] Figure 29 shows simplified curves for focus score as a function of z position according to another embodiment of the present invention; and [0052] Figure 30 is a simplified surface map of a three dimensional flexible substrate according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0053] According to the present invention, techniques for microfluidic systems, including a microfluidic chip or circuit, are provided. More particularly, the invention provides a microfluidic structure and method of manufacture, and a system and method for imaging a microfluidic device. Merely by way of example, the fiducial markings are used for processing and imaging a microfluidic chip, but it would be recognized that the invention has a much broader range of applicability.
METHOD FOR MANUFACTURING FLUIDIC CHIP
[0054] A method for manufacturing a fluidic chip according to an embodiment of the present invention may be outlined below. Certain details of the method 100 are also provided according to a flow diagram illustrated by Figure 1, which is not intended to unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
[0055] 1. Form a mold substrate for a moldable channel and well layer 101, including fiducial markings;
[0056] 2. Form molded channel and well layers 102, including fiducial markings, overlying the mold substrate via spinning of silicone material; [0057] 3. Form a mold substrate for a moldable control layer 103;
[0058] 4. Form molded control layer overlying the mold substrate via spinning of silicone material 104;
[0059] 5. Align molded channel and well layers overlying the molded control layer 105;
[0060] 6. Remove molded channel and well layers from the mold substrate for the molded channel and well layers to form a sandwiched structure including the channel and well layers and control layer 106;
[0061] 7. Align the channel and well layers to a transparent substrate surface 107;
[0062] 8. Bond the sandwiched structure including the aligned channel and well layers to the transparent substrate 108; [0063] 9. Provide the sandwiched structure for use in a fluidic processing system 109; and
[0064] 10. Perform other steps 110, as desired.
[0065] The above sequence of steps provides a method for manufacturing a microfluidic system having molded channel, well, and control layers. In a specific embodiment, each of the molded channel, well, and control layers is deformable or elastic. That is, well regions may vary slightly from well to well throughout a single microfluidic system, which has been provided on a chip. To compensate for such deformable characteristic, the present system includes at least one or more fiducial markings that have been placed in predetermined spatial locations to be used with image processing techniques. These fiducial markings allow for any inherent errors caused by the deformable characteristic to be compensated at least in part using the image processing techniques. Further details of methods and resulting structures of the present microfluidic system have been described throughout the present specification and more particularly below.

METHOD FOR MANUFACTURING MOLD FOR FLUID LAYER
[0066] A method for manufacturing a mold for a fluid layer according to an embodiment of the present invention may be outlined below. Certain details of the method 200 are also provided according to a flow diagram illustrated by Figure 2, which is not intended to unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
[0067] 1. Provide mold substrate material 201;
[0068] 2. Apply first layer of photoresist onto mold substrate 202;
[0069] 3. Pattern including fiducials (e.g., dots) the first layer of photoresist to form channel regions 203; [0070] 4. Form channel regions including fiducials through the patterned film on the mold substrate material 204;
[0071] 5. Strip first layer of photoresist 205; [0072] 6. Apply second layer of photoresist 206;
[0073] 7. Align pattern onto the second layer of photoresist based upon one or more of the channel regions 207; [0074] 8. Pattern including wells, x-marks, and company logo aligned to channels (where alignment is provided by matching brackets) the second layer of photoresist 208;
[0075] 9. Form channels, x-marks, and company logo through the patterned second film on the mold substrate material 209; [0076] 10. Strip second layer of photoresist to form completed mold substrate material including channel and well structures 210; and
[0077] 11. Perform other steps, as desired.
[0078] The above sequence of steps provides a method for manufacturing a mold for a molded channel and well layers according to a specific embodiment. In a specific embodiment, each of the molded channel and well layers is deformable or elastic. To compensate for such deformable characteristic, the present system includes at least one or more fiducial markings that have been placed in predetermined spatial locations to be used with image processing techniques. These fiducial markings allow for any inherent errors caused by the deformable characteristic to be compensated at least in part using the image processing techniques. Further details of methods and resulting structures of the present microfluidic system have been described throughout the present specification and more particularly below.
METHOD FOR MANUFACTURING CONTROL LAYER
[0079] A method for manufacturing a mold for a control layer according to an embodiment of the present invention may be outlined below. Certain details of the method 300 are also provided according to a flow diagram illustrated by Figure 3, which is not intended to unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
[0080] 1. Provide mold substrate material 301; [0081] 2. Apply first layer of photoresist onto mold substrate 302;
[0082] 3. Pattern the first layer of photoresist to form control fluid regions 303;
[0083] 4. Form control fluid regions through the patterned film on the mold substrate material 304;
[0084] 5. Strip first layer of photoresist to form completed mold substrate material including control fluid regions 305; and [0085] 6. Perform other steps, as desired.
[0086] The above sequence of steps provides a method for manufacturing a mold for a molded control layer according to a specific embodiment. In a specific embodiment, the control layer is deformable or elastic. To compensate for such deformable characteristic, the present system includes at least one or more fiducial markings that have been placed in predetermined spatial locations to be used with image processing techniques. These fiducial markings allow for any inherent errors caused by the deformable characteristic to be compensated at least in part using the image processing techniques. Further details of methods and resulting structures of the present microfluidic system have been described throughout the present specification and more particularly below.
[0087] Figures 1-11 are simplified diagrams illustrating a method for fabricating a microfluidic system according to an embodiment of the present invention. These diagrams are merely examples, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, modifications, and alternatives. As noted above, Figures 1 through 3 have been described. Certain features with regard to illustrating features of the fluidic system have been provided by way of Figures 4 through 11. For easy viewing, the left side illustrates an overview of the entire substrate, including patterns, while the right side illustrates a portion of the pattern that is pertinent according to a feature being described. [0088] Referring to Figure 4, the fluid channel layer is illustrated. The fluid channel layer (or control layer) includes fluid channels 401 to deliver fluid throughout the substrate 403. Fiducial markings in the shape of circles 405 are used to locate the channels themselves. These circles are part of the fluid channel layer mask and are transferred with the channels onto the substrate. The circles are recessed regions, which do not extend all the way through the layer, in preferred embodiments.
[0089] Referring to Figure 5, a well layer 501 including well regions 501, 503 on the substrate is illustrated. The well layer includes the well regions and the company logo 507 (which serves as a predetermined fiducial marking according to preferred embodiments) that enables x-y spatial location of a metering cell. In addition, the logo is also used for focusing onto the wells as the logo height is the same height as the wells. The well layer also includes a plurality of fiducial markings 505, e.g., crosses. Such crosses are located within a vicinity of each of the well regions. The crosses are at a finite distance and are translated from the mask to the substrate. When using image processing algorithms to locate one or more of the wells, the crosses can be used as a reference to well location. As shown, each of the crosses is located in a spatial manner around a periphery of the well region. That is, each of the crosses occupies a corner region that is not active and is free from the well itself. [0090] Referring to Figure 6, alignment occurs between the fluid channel layer and well layer according to a specific embodiment. Here, the method aligns these two layers at the substrate mold making process. The well layer has a different thickness and shape than the fluid layer. The well layer produces sharp edges while the fluid channel layer produces round edges. Preferably, a goal is to have the wells overlaying the channels in order for the channels to distribute fluids into the wells. The well layer mask is aligned to the fluid layer to place wells over the fluid channels, as shown. Alignment is done by matching the frame of the well layer to the frame of the fluid channel layer.
[0091] The method generally forms more than one design 701 on a substrate material as shown in Figure 7. Each of these designs can be separated using regions 703 according to a preferred embodiment. The method performs final assembly after silicone (or other like material) has been poured separately over the fluid/well layer mold and the control layer mold. Preferably, the final assembly is made when the control layer of silicone is aligned to the fluid layer of silicone. Matching alignment marks are located on the fluid and control layers that need to overlay each other for proper alignment. [0092] To align the patterned substrate to the blank substrate, the method includes placing a template of the patterned substrate underneath the blank substrate, which is transparent, as illustrated by Figure 8. The template allows carrier top access to reagent inputs. In addition, proper alignment of the patterned substrate onto the blank transparent substrate enables the imaging station to view the global fiducials on the chip through the carrier bottom. As shown, Figure 9 illustrates the patterned substrate, including wells and channels, overlying the transparent substrate. Details of the fiducial markings are provided throughout the present specification and more particularly below.
[0093] Figure 10 is a simplified top-view diagram 1000 of a completed microfluidic system including well 1001 and channel regions 1003. As shown, fiducial markings 1005 are disposed spatially around a periphery of the well region. The system also has company logo 1007, which is a predetermined fiducial marking that is larger in size than the other fiducial markings. The predetermined fiducial marking has one or more edges and a center region, among other features, as needed. Of course, one of ordinary skill in the art would recognize many other variations, modifications, and alternatives. Specific details with regard to the present system are also provided using the side-view diagram illustrated below.
[0094] Figure 11 is a simplified cross-sectional view diagram 1115 of a microfluidic system 1100 according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, modifications, and alternatives. As shown, the system includes a glass substrate 1103 or any like transparent substrate material, which can act as a handle substrate. Overlying the handle substrate is fluid channel 1105 and well layer 1107. The fluid channel and well layer have been provided on a single layer 1109 or can be multiple layers. The fluid channel has a depth that is less than the well, which extends into the single layer. Preferably, the fluid channel and well layer are made using a suitable material such as silicone, silicon rubber, rubber, plastic, PDMS, or other polymeric material. Preferably, the material is also transparent, but may be deformable or alternatively flexible in characteristic. The system also has a control layer 1111, which includes control channel
1113. Preferably, the control layer is made using a suitable material such as silicone, silicon rubber, rubber, plastic, PDMS, or other polymeric material. Depending upon the embodiment, there may also be other features in the system.
[0095] One 1102 of a plurality of fiducial markings is also shown. The marking is at a vicinity of the well region and also has a height relative to the wells that is substantially similar. That is, optically the height of the marking is about the same as the well relative to a plane parallel to the substrate. Alternatively, the marking may be formed based upon a predetermined off-set relative to the plane parallel to the substrate in other embodiments. Certain dimensions are also shown, but are not intended to be limiting in any manner. Depending upon the embodiment, there can be many variations, alternatives, and modifications.
[0096] Other embodiments of the present invention are provided below.
[0097] Figure 12 is a simplified top-view diagram of a microfluidic system according to an alternative embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, modifications, and alternatives. As shown, the system comprises a biological substrate 1200. The substrate includes a rigid substrate material, which has a surface region. The substrate is capable of acting as a handle substrate. The rigid substrate can be made of a suitable material such as a glass, a plastic, silicon, quartz, multi-layered materials, or any combination of these, and the like. Of course, the type of substrate used depends upon the application. [0098] The substrate also includes a deformable fluid layer coupled to the surface region. Preferably, the fluid layer is attached using a glue layer or other attachment technique. One or more well regions are formed in a first portion of the deformable fluid layer. The one or more well regions is capable of holding a fluid therein. One or more channel regions is formed in a second portion of the deformable fluid layer. The one or more channel regions is coupled to one or more of the well regions. The channel regions include protein channels 1201 and reagent channels 1203. Other channel regions can also be included.
[0099] The fluid layer includes active and non-active regions. An active region is formed in the deformable fluid layer. The active region includes the one or more well regions. A non-active region is formed in the deformable fluid layer. The non-active region is formed outside of the first portion and the second portion. The terms "active" and "non-active" are merely used for illustration purposes and should not limit the scope of the claims herein. The non-active region generally corresponds to regions free from use of fluids or other transport medium, and the like.
[0100] The substrate includes a plurality of fiducial markings. Each of the fiducial markings is selectively placed within a certain layer region. In a specific embodiment, a first fiducial marking 1205 is formed within the non-active region and disposed in a spatial manner associated with at least one of the channel regions. That is, the first fiducial marking is within the channel regions. Preferably, the first fiducial marking is a recessed region that includes a selected width and depth. The recessed region forms a pattern to be captured by an image processing technique. In a specific embodiment, a second fiducial marking 1213 is formed within the non-active region and disposed in a spatial manner associated with at least one of the well regions. That is, the second fiducial marking is within the well regions. Preferably, the second fiducial marking is a recessed region that includes a selected width and depth. The recessed region forms a pattern to be captured by an image processing technique. [0101] The substrate also has a control layer coupled to the fluid layer. The control layer includes one or more control regions. The control layer includes interface control line 1207 and containment control line 1209. Other control lines can also be included. Preferably, a third fiducial marking 1211 is formed within the control layer. Preferably, the third fiducial marking is a recessed region that includes a selected width and depth. The recessed region forms a pattern to be captured by an image processing technique. Further details of the substrate can be found throughout the present specification and more particularly below. [0102] Figure 13 is a simplified top and side-view diagram 1300 of a microfluidic system according to an alternative embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, modifications, and alternatives. As shown, the diagram includes a "top-view," a "detailed top view" and "side view" of fluidic microstructures according to embodiments of the present invention. As shown, the system also includes global fiducials 1301. The global fiducials are used for rough alignment purposes, although they may be used for fine alignment as well. In one embodiment, the global fiducials are characterized by a spatial dimension of greater than 100 μm and less than 250 μm. For example, the global fiducials include a length and a width of about 180 μm and 160 μm respectively. In another embodiment, the global fiducials are characterized by a depth of at least 10 μm within a thickness of the non-active region. For example, the global fiducials include a thickness of about 20 μm and are within the deformable layer 1305 as shown. The side view diagram includes a substrate 1302, which is preferably rigid, with an upper surface region. The rigid substrate can be made of a suitable material such as a glass, a plastic, silicon, quartz, multi-layered materials, or any combination of these, and the like. Of course, the type of substrate used depends upon the application.
[0103] The substrate also includes a deformable fluid layer coupled to the surface region. Preferably, the fluid layer is attached using a glue layer or other attachment technique. One or more well regions are formed in a first portion of the deformable fluid layer. The one or more well regions 1309 are capable of holding a fluid therein. As shown, the well region has a certain thickness within the deformable layer. One or more channel regions 1311 are formed in a second portion of the deformable fluid layer. The one or more channel regions are coupled to one or more of the well regions. The channel regions include protein channels and reagent channels. Other channel regions can also be included. As shown, the channel regions are not as thick as the well regions. The deformable layer includes an upper surface, which couples to control layer 1307. As shown, the control layer includes a plurality of control channels 1313.
[0104] Fiducial markings are selectively placed in a spatial manner on the microfluidic system. In a specific embodiment, the global alignment fiducial marking is formed in the deformable layer within a vicinity of a well region. A first fiducial marking is placed within a vicinity of the well region. In one embodiment, four wells form a metering cell. The metering cell has a length and a width each about 2 μm. The first fiducial marking is placed substantially at the center of the metering cell. A second fiducial marking may be placed within a vicinity of the channel region within the deformable layer. A third fiducial marking may be placed within a vicinity of the control channel in the control layer. Depending upon the application, there may be variations, alternatives, and modifications. That is, two of the fiducial markings may be within a vicinity of the channel region and the third fiducial marking may be within a vicinity of the control channel in the control layer. Alternatively, two of the fiducial markings may be within a vicinity of the well region and the third fiducial marking may be within a vicinity of the control channel in the control layer. Preferably, the fiducial markings are placed within a vicinity of the region being examined, such as well or channel regions. The fiducial marking placed within the control layer or another layer serves as an alignment point to correct for depth of field or other optical characteristics.
[0105] As shown in Figures 1-13, various fiducial markings can be included in microfluidic systems. In one embodiment, preferably a fiducial marking comprises a recessed region in the deformable layer. The recessed region becomes a volume or open region surrounded by portions of the deformable layer or other layers. The volume or open region is preferably filled with a fluid such as a gas, including air, or another non-reactive fluid. The fluid also has a substantially different refractive index relative to the surrounding deformable layer. The open region is preferably filled with air or an air-type mixture and has a low refractive index. Similarly, the fiducial marking in the control layer has similar characteristics according to a specific embodiment. In certain embodiments, the fiducial marking has sharp edges that highlight the marking from its surroundings. For example, the edges are preferably 90 degree corners or the like. Of course, one of ordinary skill in the art would recognize other variations, modifications, and alternatives.
[0106] Additionally, as shown in Figures 1-13, the fluid channel and well layer are made using a suitable material such as silicone, silicon rubber, rubber, plastic, PDMS, or other polymeric material in certain embodiments. The control layer can also be made using a suitable material such as silicone, silicon rubber, rubber, plastic, PDMS, or other polymeric material in some embodiments. In other embodiments, the fluid channel and well layer and the control layer are made of a material whose thermal coefficient is at least 10⁻⁴. For example, the thermal coefficient ranges from 10⁻⁴ to 10⁻³. In yet another example, the thermal coefficient equals about 3×10⁻³. In yet other embodiments, the fluid channel and well layer and the control layer are made of a material whose Young's modulus is at most 5×10⁶. For example, the Young's modulus ranges from 8×10⁴ to 7.5×10⁵.
[0107] Also, as shown in Figures 1-13, the microfluidic device includes the channel regions and well regions. These diagrams are merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In certain embodiments, the channel regions and the well regions are interchangeable. The channels and the wells refer to recessed regions in the microfluidic device. In other embodiments, the microfluidic device uses channel regions to function as well regions. In yet other embodiments, the microfluidic device includes chambers that can be used as fluid channels, control channels, and wells.
[0108] Figure 13A is a simplified top-view diagram of a microfluidic system including a carrier and an identification code according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize other variations, modifications, and alternatives. As shown, a system 1350 includes a chip 1353, which has an associated carrier 1351. The chip can be any one of the embodiments referred to as a microfluidic system herein as well as others. The chip generally includes a substrate, deformable layer, and control layer, among other features. The chip also has well regions coupled to channel regions in the deformable layer. The control layer is coupled to the deformable layer. The carrier includes various features such as inlets/outlets 1355 that couple to elements in the chip. The carrier also includes accumulation reservoirs 1357, which couple to the inlets/outlets. The carrier has an identification region 1358 that includes a barcode or other identification element. Other identification features, which can be identified visually, may also be used. Further embodiments may also include other identification devices such as radio frequency identification, pattern recognition, and the like.
[0109] Preferably, the barcode is an encoded set of lines and spaces of different widths that can be scanned and interpreted into numbers to identify certain features of the microfluidic system. The barcode includes intrinsic and/or extrinsic information associated with the chip. The intrinsic information may be pattern recognition information and/or alignment information associated with the fiducial markings. That is, once identification and alignment of the system has occurred using at least the fiducial markings, such alignment information can be stored in memory of a computing or processing system according to an embodiment of the present invention. The alignment information can be used to more efficiently process the specific chip, including the barcode, for certain applications. The alignment information associated with the fiducial markings can be stored in memory that is later retrievable using processing systems according to embodiments of the present invention. Further details of these processing systems can be found throughout the present specification and more particularly below.
[0110] Figure 14 is a simplified imaging system for imaging objects within a microfluidic device according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
[0111] As shown in Figure 14, an imaging system 4010 includes a stage 4020. The stage 4020 is movable in x, y, and z dimensions, as shown by arrows 4190. The movement of the stage 4020 is caused by a stage drive 4025 under control of a computer system 4110.
Additionally, the imaging system 4010 includes an imaging device 4060. The imaging device 4060 includes a lens system 4070 with lenses 4075 therein, and a detector 4080. The lens system 4070 is under control of the computer system 4110 to automatically adjust the focus of the lens system 4070 in response to image information gathered by the detector 4080. The image is communicated to the computer system 4110 and stored in a database 4115.
[0112] The lens system 4070 can focus on a microfluidic device 4030 by adjusting a focal plane 4100 in the z direction. For example, the focal plane is at a chamber centerline of the microfluidic device 4030. The microfluidic device 4030 is situated upon the stage 4020 and can have various structures. For example, the microfluidic device has a structure and is manufactured by a method as described in Figures 1-13. In another example, the microfluidic device 4030 has a chamber 4050 wherein an object, such as a protein crystal, may be formed or otherwise located. For example, the chamber 4050 is capable of holding a volume of fluid of less than 1 nanoliter. A plurality of chambers can be combined to form a metering cell. The chamber 4050 has a chamber centerline that is located between a top wall and a bottom wall of the chamber 4050. For example, the chamber 4050 is a well region, a channel region, or both.
[0113] Moreover, the imaging system 4010 includes an illumination device 4170 for producing an illumination beam 4180. For example, the illumination beam 4180 illuminates objects within the microfluidic device 4030. Additionally, the computer system 4110 may be in communication with an input/output device 4160 and a barcode reader 4120. The barcode reader 4120 can read a barcode 4130 on a microfluidic device 4140. For example, the microfluidic device 4140 is used as the microfluidic device 4030.
[0114] Although the above has been shown using a selected group of apparatuses for the system 4010, there can be many alternatives, modifications, and variations. For example, some of the apparatuses may be expanded and/or combined. Other apparatuses may be inserted to those noted above. Depending upon the embodiment, the arrangement of apparatuses may be interchanged with others replaced. Further details of these apparatuses are found throughout the present specification.
[0115] For example, the imaging system 4010 may be integrated into a larger robotic system, such as a rotating arm or railroad track type robotic system, to increase throughput. The imaging system 4010 can communicate with the robotic system and control the flow of microfluidic devices into and out of the imaging system, acquire information about the microfluidic devices and their contents, and supply image data and results from the imaging system to the robotic system. If the robotic system includes a database, the imaging system can contribute images and results to the database. The robotic system, in turn, may automatically design further experiments based upon the results provided by the imaging system.
[0116] According to an embodiment of the present invention, the imaging system 4010 operates in the following manner including a plurality of processes. These processes are merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The microfluidic device 4030 is securely placed on the stage 4020. Based on a fixed feature of the microfluidic device 4030, the computer system 4110 instructs the drive 4025 to move the stage 4020 and align the microfluidic device 4030 with a first fiducial marking. For example, the fiducial marking is embedded within the microfluidic device 4030 at a known z dimension distance from the chamber centerline. In another example, the first fiducial marking comes into focus by the imaging device 4060 based on dead reckoning from the fixed feature. The actual coordinates of the first fiducial marking are then measured and registered with the imaging system 4010. Additionally, the actual coordinates of two or more additional fiducial markings are measured and registered.
[0117] The actual locations of the fiducial markings are compared with their design locations in the stored image map, respectively. For example, the stored image map is associated with the design space. In another example, the stored image map is an ideal image map. In yet another example, the stored image map is associated with a mathematical grid. Based on the comparison, the imaging system 4010 determines whether stretch, distortion, or other deformation exists in the microfluidic device 4030. If differences are present between the actual fiducial locations and the design fiducial locations, a matrix transformation, such as an Affine transformation, is performed. The transformation converts the actual shape of a metering cell into a virtual shape with respect to the design space. By converting the actual image to the virtual image, an image subtraction and other image analysis may be performed.
[0118] Although the above has been shown using a selected sequence of processes for operating the imaging system 4010, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Other processes may be inserted to those noted above. Depending upon the embodiment, the specific sequences of steps may be interchanged with others replaced. Further details of these processes are found throughout the present specification.
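As a point of reference for the comparison and transformation described in paragraph [0117], the following is a minimal sketch, in Python, of fitting an affine mapping between design-space and measurement-space fiducial coordinates. It is not the patented implementation; the function names and the least-squares formulation are illustrative assumptions, and at least four non-coplanar fiducial points are needed to fully determine a three-dimensional affine transform.

```python
import numpy as np

def fit_affine_3d(design_pts, measured_pts):
    """Least-squares affine fit: measured ≈ A @ design + t.

    design_pts, measured_pts: (N, 3) arrays of fiducial coordinates. With
    only three fiducials the fit constrains a planar mapping; four or more
    non-coplanar points determine the full 3-D affine transform.
    """
    design = np.asarray(design_pts, dtype=float)
    measured = np.asarray(measured_pts, dtype=float)
    ones = np.ones((design.shape[0], 1))
    X = np.hstack([design, ones])                # augment with 1 for translation
    M, *_ = np.linalg.lstsq(X, measured, rcond=None)
    A, t = M[:3].T, M[3]                         # linear part and translation
    return A, t

def design_to_measured(pt, A, t):
    """Map a design-space point (e.g. a metering cell center) into measurement space."""
    return A @ np.asarray(pt, dtype=float) + t
```

Once fitted, this mapping (or its inverse) can be used either to predict where a metering cell actually sits on the stage or to warp an acquired image back into the design space before image subtraction.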
[0119] Figures 15A and 15B show a simplified microfluidic device according to an embodiment of the present invention. These diagrams are merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. Figures 15A and 15B depict a top view and a cross-sectional view of a microfluidic device, respectively. A microfluidic device 4230 includes at least a flexible substrate with a chamber 4250 and a fiducial marking 4254. For example, the fiducial markings 4254 are used for xyz alignment and focus of an imaging system. In one embodiment, the imaging system focuses upon the fiducial markings 4254 within the microfluidic device 4230 and conducts mapping between the measurement space and the design space. The imaging system then adjusts a focal plane with respect to the z dimension of the microfluidic device 4230 and places the focal plane in plane with a selected point within the chamber 4250, preferably at chamber focus position 4256. The chamber focus position 4256 is a Δz distance 4252 away from a focus plane 4258 of the fiducial markings 4254. For example, at the focus plane 4258, the fiducial markings 4254 are optimally focused. In one embodiment, the microfluidic device 4230 may be used as the microfluidic device 4030. In another embodiment, the microfluidic device may be made by processes described in Figures 1-13A.
[0120] Figures 16A and 16B are a simplified actual image in measurement space and a simplified virtual image in design space, respectively, according to an embodiment of the present invention. These diagrams are merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, the design space is ideal, and the measurement space is distorted.
[0121] The difference between the design space and the measurement space can be calculated through fiducial mapping. Consequently, a matrix transformation is developed to convert the actual image into a virtual image in the design space. Transforming various actual images into the same design space facilitates the image subtraction and masking in order to maximize the viewable area of a metering cell chamber. Moreover, if a defect or debris is present within the chamber at time zero in a series of time-based images, such defect or debris can be masked out of subsequent images to avoid false positives when applying automated crystal recognition analysis. Additionally, the walls of a chamber may be subtracted from subsequent images to reduce the likelihood of false readings in the crystal recognition analysis.
[0122] Figures 17A, 17B, and 17C show a simplified method for image subtraction and masking according to an embodiment of the present invention. These diagrams are merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
[0123] Figure 17A depicts a metering cell with debris, shown as the letter "D," distributed about the metering cell chambers. The metering cell is transformed into the design space. For example, the metering cell is rotated to align with the design coordinate system and stretch-compensated so that the metering cell dimensions match the design metering cell dimensions. The foreign objects not present in the design metering cell are masked out such that the regions including and immediately surrounding the foreign objects are masked. The masking can reduce the likelihood of falsely triggering the crystal detection analysis into deeming the foreign objects as crystals that were formed. Figure 17B depicts a masked image where the foreign objects have been masked.
[0124] Additionally, the walls in Figure 17A can be removed by image subtraction. Figure 17C depicts an image without chamber walls. From Figure 17C, further masking may be performed if wall implosion is detected. The wall implosion may occur when the microfluidic device is dehydrating and the chamber contents are permeating outside of the chamber, causing a negative pressure therein and thus wall collapse or implosion. Such further masking for implosion may employ a series of known shapes that occur when chamber implosion occurs and uses such known shapes to create additional masks to occlude from the image the now intruding imploded walls.
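The masking and subtraction steps of Figures 17A-17C can be sketched as simple array operations once the images have been transformed into the design space. The snippet below is only an illustration of the idea, not the patented algorithm; the threshold, padding amount, and function names are assumptions.

```python
import numpy as np

def build_debris_mask(time_zero_img, design_img, threshold=30, pad=2):
    """Mask pixels where the time-zero image differs from the (empty) design
    image, then grow the mask by `pad` pixels so regions immediately
    surrounding the debris are also excluded. Both inputs are grayscale
    arrays already warped into the design space."""
    diff = np.abs(time_zero_img.astype(int) - design_img.astype(int))
    mask = diff > threshold
    for _ in range(pad):                       # crude 4-neighborhood dilation
        grown = mask.copy()
        grown[1:, :] |= mask[:-1, :]
        grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]
        grown[:, :-1] |= mask[:, 1:]
        mask = grown
    return mask

def subtract_walls(image, wall_image, mask):
    """Remove chamber walls by subtraction and occlude masked (debris) pixels."""
    cleaned = np.clip(image.astype(int) - wall_image.astype(int), 0, 255)
    cleaned = cleaned.astype(np.uint8)
    cleaned[mask] = 0
    return cleaned
```

A debris mask built from the time-zero image can then be applied to every later image in the series so that the same foreign objects never trigger the crystal recognition analysis.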
[0125] Figure 18 is a simplified imaging method for a microfluidic device according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The method 4400 includes process 4410 for mapping between measurement space and design space, process 4420 for alignment and focusing, and process 4430 for capturing an image. In one embodiment, the method 4400 may be performed by the imaging system 4010 on the microfluidic device 4030. For example, the imaging system 4010 performs the processes 4410, 4420, and 4430 according to the instructions of the computer system 4110 or another computer system. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Other processes may be inserted to those noted above. For example, a process of placing a microfluidic device on the stage of an imaging system is performed prior to the process 4410. Depending upon the embodiment, the specific sequences of processes may be interchanged with others replaced. For example, the process 4420 may be skipped. Further details of these processes are found throughout the present specification and more particularly below.
[0126] At the process 4410, the measurement space and the design space are mapped. Figure 19 is a simplified process 4410 for mapping between the measurement space and the design space according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The process 4410 includes process 4440 for locating a fiducial marking, process 4442 for measuring the actual location of the fiducial marking, process 4444 for comparing the actual location and the design location of the fiducial marking, process 4446 for determining the need for an additional fiducial marking, process 4448 for determining the transformation between measurement space and design space, and process 4450 for coarse alignment. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Other processes may be inserted to those noted above. Depending upon the embodiment, the specific sequences of processes may be interchanged with others replaced. In one embodiment, the processes 4440, 4442, and 4444 may be performed for more than one fiducial marking before the process 4446 is performed. For example, several fiducial markings are located and measured. In another embodiment, the process 4444 may be performed after the process 4446 has determined that no additional marking needs to be located. Further details of these processes are found throughout the present specification and more particularly below.
[0127] At the process 4440, a fiducial marking is located on a microfluidic device. For example, the microfluidic device is the microfluidic device 4030. Figure 20 is a simplified diagram for fiducial markings according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. As shown in Figure 20, each of fiducial markings 4520, 4522, and 4524 includes three plus signs or crosses located at three corners of a square and a company logo located at the fourth corner of the square. The fiducial marking 4520, 4522, or 4524 is the fiducial marking located at the process 4440. In one example, the fiducial marking 4520, 4522, or 4524 is a global fiducial. In another example, the fiducial marking 4520, 4522, or 4524 serves as both a global fiducial and a local fiducial. In yet another example, the fiducial marking 4520, 4522, or 4524 is located in the same plane as the well regions of the microfluidic device.
[0128] In another embodiment of the present invention, the located fiducial marking has a configuration different from the fiducial marking 4520, 4522, or 4524. In another embodiment, the located fiducial marking is readily recognizable by the image processing algorithm. Operation of the image processing algorithm is improved when the fiducial marking is readily visible, with minimal optical interference from the edge of the microfluidic device or other channels.
[0129] Locating the fiducial marking at the process 4440 can be performed manually, automatically, or both. For example, the fiducial marking is moved and identified in the field of view of the imaging system by visual inspection. In another example, the imaging system automatically places and identifies the fiducial marking in the field of view.
[0130] According to an embodiment of the present invention, Figure 21 is a simplified process 4440 for locating a fiducial marking. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The process 4440 includes process 4810 for acquiring an image, process 4820 for segmenting the image, process 4830 for performing blob analysis, process 4840 for determining whether the fiducial marking is located, process 4850 for adjusting position, and process 4860 for moving the fiducial marking. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Other processes may be inserted to those noted above. Depending upon the embodiment, the specific sequences of processes may be interchanged with others replaced. For example, the process 4860 may be skipped. Further details of these processes are found throughout the present specification and more particularly below.
[0131] At the process 4810, an image of the fiducial marking is acquired. Prior to the process 4810, the stage is positioned at an initial position defined as r₀ = x₀ x + y₀ y + z₀ z, where x, y, and z denote unit vectors along the respective axes. At the process 4810, an image of the fiducial marking is captured. In one embodiment, the image is captured by a digital camera such as a Leica DC500. In another embodiment, the image has a low resolution. For example, the image is 640 x 480 pixels in size, and the color depth resolution is 16 bits. In another example, the pixel and color depth resolutions are varied to optimize system performance. After the image is acquired, the image may be adjusted to compensate for variations in lamp intensity and color. This compensation may take the form of image normalization. Additionally, the red, blue, and green components of the image can be adjusted to white balance the image. The white-balancing of the image may be accomplished by median correction or other known techniques.
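A minimal sketch of the white balancing mentioned above, assuming that "median correction" means scaling each color channel so that its median matches a common target; the function name and the target choice are illustrative rather than taken from the specification.

```python
import numpy as np

def white_balance_median(rgb):
    """Median-correct an RGB image: scale each channel so its median matches
    the mean of the three channel medians, compensating for lamp color drift
    (for example, the growing red component of an aging bulb)."""
    img = rgb.astype(float)
    channel_medians = np.median(img.reshape(-1, 3), axis=0)
    target = channel_medians.mean()
    scale = target / np.clip(channel_medians, 1e-6, None)
    balanced = img * scale
    # Clip back into the range of the original 16-bit color depth.
    return np.clip(balanced, 0, 65535).astype(rgb.dtype)
```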
[0132] At the process 4820, the image is segmented. Segmentation of the image can separate desired images from the background signal and produce "blobs" useful in further analysis steps. At the process 4830, the blob analysis is performed. The blobs in the image are compared against a training set contained in a database. The training set contains images of a fiducial marking obtained from a large number of microfluidic devices and imaging conditions. For example, the fiducial marking is the company logo. In another example, the fiducial marking is one other than the company logo.
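A sketch of segmentation followed by a simple blob comparison. The thresholding rule, the choice of features (area and aspect ratio), and the use of a Euclidean distance as the match score are assumptions made for illustration; they stand in for whatever proximity ranking the system actually computes at the process 4840 described below.

```python
import numpy as np
from scipy import ndimage

def segment_blobs(gray, threshold=None):
    """Threshold the grayscale image and label connected components ("blobs")."""
    if threshold is None:
        threshold = gray.mean()            # crude global threshold
    binary = gray < threshold              # assume fiducial recesses appear dark
    labels, n = ndimage.label(binary)
    return labels, n

def blob_features(labels, n):
    """Area and bounding-box aspect ratio for each blob."""
    feats = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        h, w = ys.ptp() + 1, xs.ptp() + 1
        feats.append((ys.size, w / h))
    return np.array(feats, dtype=float).reshape(-1, 2)

def best_match_score(feats, training_feats):
    """Smallest distance between observed blob features and training-set
    features; a small score suggests the fiducial marking is present."""
    if feats.size == 0 or training_feats.size == 0:
        return np.inf
    d = np.linalg.norm(feats[:, None, :] - training_feats[None, :, :], axis=2)
    return float(d.min())
```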
[0133] At the process 4840, whether the fiducial marking is located is determined. If the fiducial marking is located, the process 4442 is performed. In one embodiment, if the best match of the blobs to the standards is found to be within a predetermined specification, the fiducial marking is considered to be located. For example, the predetermined specification includes a proximity ranking of less than 4200. If the fiducial marking is not detected, the process 4850 is performed.
[0134] At the process 4850, the position of the stage is adjusted. After the adjustment, the processes 4810, 4820, 4830, and 4840 are performed. In one embodiment, at the process 4850, the stage is moved in an x direction and/or a y direction. In another embodiment, the stage is moved in a z direction at the process 4850. For example, the stage is moved by a selected amount Δz in a first z-direction by stepping the z-motor of the stage in a first selected direction. At each stepped z-height, the processes 4810, 4820, 4830, and 4840 are performed. The process 4850 is repeated until the fiducial marking is determined to be located at the process 4840 or the stage reaches the end of its range of motion in the first z direction. If the stage reaches the end of its range of motion, the stage is returned to the initial position r₀ and the stage is stepped by Δz in a second selected z-direction. For example, the second z-direction is opposite to the first z-direction. The step size Δz can be uniform in both directions, or vary as a function of direction or distance from the initial position. At each stepped z-height in the second direction, the processes 4810, 4820, 4830, and 4840 are performed. The process 4850 is repeated until the fiducial marking is located or the stage reaches the end of its range of motion in the second z direction. If the fiducial marking cannot be located within the range of motion, an error message is generated. In yet another embodiment, at the process 4850, the stage is moved in an x direction, a y direction, and/or a z direction.
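The bidirectional z-scan of paragraph [0134] can be summarized as a short search loop. The stage and imaging hooks below (move_stage_z, acquire_image, fiducial_located) are hypothetical placeholders for whatever hardware interface the system provides, and the loop structure is only a sketch of the described behavior.

```python
def z_scan_for_fiducial(z0, dz, z_range, move_stage_z, acquire_image, fiducial_located):
    """Step away from z0 in one z-direction, then the other, until the fiducial
    is located or both ends of the allowed travel range are reached."""
    for direction in (+1, -1):
        z = z0
        while abs(z - z0) <= z_range:
            move_stage_z(z)
            if fiducial_located(acquire_image()):
                return z                      # fiducial found at this z-height
            z += direction * dz               # step to the next z-height
        move_stage_z(z0)                      # return to the start before reversing
    raise RuntimeError("fiducial marking not located within the z range of motion")
```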
[0135] At the process 4860, the stage is translated to move the fiducial marking to substantially the center of the field of view of the imaging system.
[0136] As shown in Figure 19, at the process 4442, the actual location of the located fiducial marking is measured. As shown in Figure 20, the measured location of the fiducial marking 4520 is represented by a vector r₁M with respect to the origin O 4510. The measured vector rₙM representing the actual location of a fiducial marking can also be written as:
[0137] rₙM = xₙM x + yₙM y + zₙM z (Equation 1)
[0138] where n is a positive integer. For example, the actual location rₙM is automatically detected by an image processing routine.
[0139] At the process 4444, the actual location and the design location of the fiducial marking are compared. The design location of the fiducial marking 4520, referenced to the origin O, can be represented by a design vector r₁D. The design vector rₙD representing the design location of a fiducial marking can also be written as:
[0140] rₙD = xₙD x + yₙD y + zₙD z (Equation 2)
[0141] where n is a positive integer. The difference between the design location rₙD and the measured location rₙM can be calculated as Δrₙ = rₙM − rₙD.
[0142] As discussed above and further emphasized here, the processes 4440, 4442, and 4444 are only examples. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In one embodiment, at the processes 4440, 4442, and 4444, the imaging system uses a predetermined magnification objective. For example, a 10X magnification objective is used for the lenses 4075 of the imaging system 4010.
[0143] In another embodiment, the imaging system first uses a lower power magnification objective, such as a 2.5X magnification objective, at the processes 4440, 4442, and 4444. Subsequently, for the same fiducial marking, the coarse alignment of the microfluidic device is performed. For example, the coarse alignment uses the difference vector Δrₙ. The vector Δrₙ represents the translation of the located fiducial marking in the x, y, and z axes from the design location. Using the x and y scalar values from Δrₙ, the stage position of the imaging system can be adjusted in the x-y plane to position the located fiducial marking at a predetermined location in the x-y plane. Additionally, using the z-axis scalar value from Δrₙ, the position of the stage can be adjusted in the z plane to position the fiducial marking at a selected location in the z plane. The z-axis focus adjustment may be performed before, after, and/or at the same time as the adjustment in the x-y plane.
[0144] Afterwards, the imaging system switches to a higher power magnification objective, for example, a 10X magnification objective. For example, the measurements and adjustments made with a lower power objective place the fiducial marking within the field of view of the imaging objective when the objective is switched to the higher power magnification objective. With the higher power magnification objective, the imaging system can more accurately determine the vectors rₙM and Δrₙ.
[0145] At the process 4446, whether an additional fiducial marking should be located and measured is determined. If an additional fiducial marking does not need to be located and measured, the process 4448 is performed. If an additional fiducial marking should be located and measured, the process 4440 is performed.
[0146] For example, the processes 4440, 4442, and 4444 are performed for each of the three fiducial markings 4520, 4522, and 4524 as shown in Figure 20. For the fiducial marking 4520, r₁M, r₁D, and Δr₁ are determined. For the fiducial marking 4522, r₂M, r₂D, and Δr₂ are determined. For the fiducial marking 4524, r₃M, r₃D, and Δr₃ are determined. In other embodiments, more than three global fiducial markings are located and measured.
[0147] At the process 4448, the transformation between measurement space and design space is determined. For example, a matrix transformation, such as an Affine transformation, is determined based on the difference vectors Δr₁, Δr₂, and Δr₃.
[0148] In one embodiment of the present invention, using a flexible microfluidic device, non-uniform absorption of fluids, non-uniform hydration and dehydration, or other factors can result in flexing, stretching, shrinking, bowing, swelling, contracting, and other distortions in the microfluidic device. In addition, fabrication processes for the device, handling during packaging and testing, and other protocols can introduce deformations and distortions in the device. These deformations may be dimensionally uniform or non-uniform, including both linear and non-linear distortions. The effects of these distortions may impact the magnitude and direction of the measured vectors rₙM. Accordingly, the deviation of these measured vectors from their corresponding design vectors represents the linear and non-linear distortions of the microfluidic device media. Using the difference vectors Δrₙ, a transformation can be created between the design space and the measurement space. This transformation is correlated with the flexing, stretching, bowing, and other distortions and deformations present in the microfluidic device. The transformation may have linear components and/or non-linear components.
[0149] For example, a transformation is determined based on three fiducial markings, such as the fiducial markings 4520, 4522, and 4524. Such a transformation can provide a planar mapping of the microfluidic device. The plane defined by the three fiducial markings can be used to characterize the translation of the microfluidic device in the three dimensions of x, y, and z as well as stretching of the microfluidic device material in the plane of the microfluidic device. The roll, pitch, and yaw of this plane can also be characterized by the plane defined by the three fiducial markings.
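As a concrete illustration of the planar characterization described in paragraph [0149], the plane through the three measured fiducial locations can be recovered and its tilt expressed as roll and pitch angles. This is only a sketch; the function names and the angle conventions are assumptions rather than the specification's definitions.

```python
import numpy as np

def plane_from_fiducials(p1, p2, p3):
    """Plane through three measured fiducial locations: returns the unit normal
    and the centroid. The normal's deviation from the z axis reflects the
    device tilt; the centroid offset from the design centroid reflects its
    translation."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)
    normal /= np.linalg.norm(normal)
    centroid = (p1 + p2 + p3) / 3.0
    return normal, centroid

def tilt_angles(normal):
    """Approximate roll (about x) and pitch (about y) of the plane, in radians."""
    nx, ny, nz = normal
    roll = np.arctan2(ny, nz)
    pitch = np.arctan2(-nx, nz)
    return roll, pitch
```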
[0150] At the process 4450, the coarse alignment is performed with the transformation between the design space and the measurement space. For example, the actual position of a metering cell of the microfluidic device is determined, and the metering cell is positioned in preparation for imaging. For example, the actual location of a metering cell can be shifted from the design location due to distortions and deformations of the microfluidic device. Not only can the plane of the microfluidic device be translated and tilted, the microfluidic device can be stretched in the plane of the microfluidic device, further shifting the actual position of the metering cell. In one embodiment, the metering cell is shifted in the x dimension and/or the y dimension. In another embodiment, the metering cell is shifted in the z dimension.
[0151] Figure 22 is a simplified metering cell shifted from its design position according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims herein. One of ordinary skill in the art would recognize many variations, modifications, and alternatives. In Figure 22, a metering cell 4710 in the design space is schematically illustrated with solid lines and the same metering cell 4730 in the measurement space is schematically illustrated in dashed lines. The design vector r₁D points to a design location 4715 of a fiducial marking of the metering cell 4710, and the measured vector r₁M points to a measured location 4735 of the same fiducial marking of the same metering cell 4730. The tip of the measured vector r₁M is offset from the design vector r₁D by an error vector Δr₁. This error vector can have components in all three dimensions. Using the transformation from design space to measurement space, the approximate actual location of a metering cell can be calculated by taking into account the error vector. The stage of the imaging system can be moved in the x dimension, the y dimension, and/or the z dimension to position the metering cell in preparation for imaging.
[0152] As discussed above, Figure 19 is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, to initiate the microfluidic device registration process, an algorithm can be used to register the microfluidic device with respect to the coordinates of the imaging system coupled to the camera and the stage. Based on determinations and/or evaluations of the stack-up tolerances from the integrated microfluidic device and carrier and the microscope stage, tolerance metrics can be set. In one embodiment, the tolerances are set to ensure that at least one global fiducial generally appears within the field of view available when the lenses 4075 comprise a 2.5X objective. This tolerance definition allows automation of the fiducial finding process and streamlines system operation. In another embodiment, an automated system can locate a fiducial marking outside the current field of view of the imaging system through a search routine. Additionally, the movement of the fiducial mark can be performed, for example, by moving the stage with respect to the imaging device, moving the imaging device with respect to the stage, or both. The stage carries the microfluidic device to which the fiducial mark belongs.
[0153] As shown in Figure 18, at the process 4420, the alignment and focusing are performed. Figure 23 is a simplified process 4420 for aligning and focusing the imaging system according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The process 4420 includes process 4802 for acquiring images and process 4804 for determining alignment and focus. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Other processes may be inserted to those noted above. For example, a process substantially similar to the process 4440 as described in Figure 21 is performed on a metering cell and its associated fiducial marking, which are aligned, focused, and imaged at the processes 4802 and 4804. Depending upon the embodiment, the specific sequences of processes may be interchanged with others replaced. Further details of these processes are found throughout the present specification and more particularly below.
[0154] At the process 4802, images of a fiducial marking are acquired. For example, the fiducial marking is associated with the metering cell, which has been aligned using the mapping between the design space and the measurement space at the process 4450. Figure 24 is a simplified process 4802 for acquiring images of a fiducial marking according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
The process 4802 includes process 4910 for moving the stage in a first direction, process 4920 for moving the stage in a second direction, process 4930 for acquiring an image, and process 4940 for determining the need for additional movement. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Other processes may be inserted to those noted above. Depending upon the embodiment, the specific sequences of processes may be interchanged with others replaced. For example, the fiducial marking used in the process 4802 may be a company logo having a height the same as that of the wells. In another example, the fiducial marking has a height different from that of the wells. The known offset between the plane of the fiducial marking and that of the wells would enable accurate z-axis adjustments to be made. Further details of these processes are found throughout the present specification and more particularly below.
[0155] At the process 4910, the stage of the imaging system is moved in a first z direction. As discussed above, at the process 4450, the metering cell and its associated fiducial marking can be aligned in the x dimension, the y dimension, and/or the z dimension based on the transformation between the design space and the measurement space. At the end of the process 4450, the z position of the stage is referred to as Zf. At the process 4910, the stage is moved from Zf by a selected distance in a first z-direction.
[0156] At the process 4920, the stage is moved in a second z-direction by a distance equal to δz. For example, this second z direction is opposite to the first z direction. The step size δz can be uniform or vary as a function of distance from Zf.
[0157] At the process 4930, an image of the fiducial marking is acquired. In one embodiment, the image is captured by a digital camera such as a Leica DC500. In another embodiment, the image has a low resolution. For example, the image is 640 x 480 pixels in size, and the color depth resolution is 16 bits. In another example, the pixel and color depth resolutions are varied to optimize system performance. After the image is acquired, the image may be adjusted to compensate for variations in lamp intensity and color. This compensation may take the form of image normalization. Additionally, the red, blue, and green components of the image can be adjusted to white balance the image. The white-balancing of the image may be accomplished by median correction or other known techniques.
[0158] At the process 4940, whether additional stage movement should be performed is determined. If the stage has been moved in the second direction through a distance equal to or larger than the selected scan range, no additional stage movement is needed, and the process 4804 should be performed. If the stage has been moved in the second direction through a distance smaller than the selected scan range, additional stage movement is needed, and the process 4920 is performed.
[0159] As shown in Figure 23, at the process 4804, the alignment and focus are determined. Figure 25 is a simplified process 4804 for aligning and focusing the imaging system according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The process 4804 includes process 6810 for selecting an image, process 6820 for segmenting the image, process 6830 for performing blob analysis, process 6840 for determining whether the fiducial marking is located, process 6850 for determining the need for an additional image, process 6860 for determining alignment, process 6870 for determining focus scores, and process 6880 for determining the focus position. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Other processes may be inserted to those noted above. Depending upon the embodiment, the specific sequences of processes may be interchanged with others replaced. For example, the process 6860 is skipped. In another example, the process 6860 is performed after the process 6880. Further details of these processes are found throughout the present specification and more particularly below.
[0160] At the process 6810, an image is selected from the images taken in the process 4802 for further analysis. At the process 6820, the selected image is segmented. Segmentation of the image can separate the desired image from the background signal and produce "blobs" useful in further analysis steps.
[0161] At the process 6830, the blob analysis is performed. The blobs in the image are compared against a training set contained in a database. The training set contains images of a fiducial marking obtained from a large number of microfluidic devices and imaging conditions. For example, the fiducial marking is the company logo. In another example, the fiducial marking is one other than the company logo.
[0162] At the process 6840, whether the fiducial marking is located is determined. If the fiducial marking is located, a region of interest (ROI) is created around the fiducial marking. Figure 26 is a simplified image acquired and analyzed according to an embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The fiducial marking may be a company logo 4770 surrounded by a region of interest 4760. In one embodiment, if the best match of the blobs to the standards is found to be within a predetermined specification, the fiducial marking is considered to be located. For example, the predetermined specification includes a proximity ranking of less than 4200.
[0163] At the process 6850, whether an additional image should be analyzed is determined. For example, if any of the images taken at the process 4802 has not been selected at the process 6810, the process 6810 is performed to select the image not yet selected. If all of the images taken at the process 4802 have been selected, the process 6860 is performed.
[0164] At the process 6860, the alignment in the x and y dimensions is determined. In one embodiment, the alignment uses the actual location of an ROI and the design location of the ROI. For example, the alignment in the x and y dimensions is determined by the difference between the actual location and the design location. In another embodiment, the fiducial marking has a known spatial relationship with chambers within the metering cell in the x and y dimensions. The alignment in the x and y dimensions of the metering cell is determined based on the alignment in the x and y dimensions of the fiducial marking. For example, the metering cell has a length and a width each about 2 μm. The fiducial marking is placed substantially at the center of the metering cell. In another example, the fiducial marking is in the vicinity of or within the metering cell and their actual spatial relationship in the x and y dimensions does not change significantly from the design spatial relationship.
[0165] At the process 6870, a focus score is determined and stored. In one embodiment, the focus score is calculated based on the standard deviation. In another embodiment, the focus score is calculated based on the "edginess" of the image. For example, the "edginess" of the image is assessed by a sobel operator. In another example, the "edginess" of the image is determined by an edge-sensitive computer program similar to a high pass filter. The techniques based on the "edginess" of the image usually take into account that when the image is in sharp focus, high frequency details are visible, and when the image is out of focus, the high frequency details are blurred or smudged. In yet another embodiment, the focus score is calculated based on a histogram. The histogram techniques use specific characteristics of the fiducial marking to improve focusing.
[0166] In yet another embodiment of the present invention, the images for the area of interest are acquired by the imaging system. For each of at least some of the acquired images, a first sobel square sum is determined. The sobel operator is applied to each data point on the acquired image. Each resultant value is squared, and all of the squared values are added together. Additionally, the acquired image is blurred. For example, the blurring may be accomplished by applying Gaussian smoothing to the acquired image. In one embodiment, the Gaussian smoothing serves as a low pass filter attenuating high frequency components of the acquired image. In another embodiment, the Gaussian smoothing can be described as a convolution of the acquired image with a Gaussian kernel.
[0167] For the blurred image, a second sobel square sum is determined by applying the sobel operator to the blurred image, squaring each resultant value, and summing all the squared values. Afterwards, clipping is applied to the second sobel square sum. If the second sobel square sum is smaller than a predetermined threshold, the second sobel square sum is set to the predetermined threshold. Dividing the first sobel square sum by the clipped second sobel square sum, the resultant ratio is used as the focus score. The focus score for each of at least some of the acquired images is then stored.
[0168] At the process 6880, the focus position for the metering cell is determined. As discussed above, at the process 6870, the focus scores are obtained for various z positions. At the process 6880, in one embodiment, the z position corresponding to a peak focus score is used as the focus position. In another embodiment, the z positions corresponding to two peak focus scores are determined and averaged. The average z position is used as the focus position. In yet another embodiment, the focus position is determined based on the characteristic of the entire curve representing the focus score as a function of z position.
[0169] In another embodiment, the fiducial marking has a known spatial relationship with chambers within the metering cell in the z dimension. The focus position in the z dimension of the metering cell is determined based on the focus position in the z dimension of the fiducial marking. For example, the metering cell has a length and a width each about 2 μm. The fiducial marking is placed substantially at the center of the metering cell. In another example, the fiducial marking is in the vicinity of or within the metering cell and their actual spatial relationship in the z dimension does not change significantly from the design spatial relationship.
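Paragraphs [0166] and [0167] can be summarized as a ratio-based focus metric. The sketch below assumes an isotropic Gaussian filter for the blurring and uses placeholder values for the filter width and the clipping threshold; neither value is taken from the specification.

```python
import numpy as np
from scipy import ndimage

def sobel_square_sum(img):
    """Sum of squared sobel gradient values over the whole image ("edginess")."""
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    return float(np.sum(gx**2 + gy**2))

def focus_score(img, sigma=3.0, clip_threshold=1e3):
    """Ratio of the image's sobel square sum to the clipped sobel square sum of
    a Gaussian-blurred copy. A sharp image is degraded most by blurring, so it
    yields the highest ratio; an image unaffected by blurring yields about 1.0."""
    first_sum = sobel_square_sum(img)
    blurred = ndimage.gaussian_filter(img.astype(float), sigma=sigma)
    second_sum = max(sobel_square_sum(blurred), clip_threshold)  # clipping step
    return first_sum / second_sum
```

Evaluating focus_score at each z position visited by the process 4802 and taking the z of the peak (or the average of two peaks) gives the focus position described in paragraph [0168].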
[0170] Figure 27 shows simplified curves for focus score as a function of z position obtained at the process 6870 according to an embodiment of the present invention. The focus score at each z value is associated with the sobel square sum for the acquired image without blurring. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. As shown in Figure 27, focus scores are calculated at z-axis positions separated by approximately 2 μm and extending for 100 μm on either side of Zf. The coarse nature of the z-axis position determined by the process 4802 is evident, as the peaks of the focus score distributions are located approximately 20 μm from Zf.
[0171] In another embodiment, the method by which the stage is scanned, the density of measurement points, and the range over which the measurements extend can be varied, as would be evident to those skilled in the art. For example, focus scores are collected at fewer locations separated by greater distances. In another example, focus scores collected at 10 μm spacing located on alternating sides of Zf are used as inputs to the image processing software, only obtaining additional focus scores and filling in the curve if needed.
[0172] Figure 27 shows two different focus score runs in which the aperture of the condenser of the imaging system is operated in either a narrow or a wide setting. A curve 5030 corresponds to a narrow setting and represents a bi-modal distribution of focus scores. The twin peaks are each associated with the detection of the top and bottom edges of the fiducial marking, such as a company logo. This bi-modal distribution can be characterized by a full width at half maximum (FWHM) 5035. If the condenser aperture is operated at a wide setting, the bi-modal distribution merges into a single peaked distribution represented by a curve 5020. The amplitude of the single peak is reduced from the amplitude characteristic of the bi-modal distribution and the FWHM is reduced as well. The FWHM of the single peak distribution is represented by line 5025.
[0173] Additionally, Figure 28 shows simplified curves for focus score as a function of z position obtained at the process 4804 according to one embodiment of the present invention. The focus score at each z value is associated with the sobel square sum for the acquired image without blurring. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. As shown in Figures 27 and 28, the focus scores obtained without image blurring may produce irregular focal peaks under certain conditions. Sometimes the peak is single modal, sometimes the peak is bi-modal, and usually the peak is a combination of the two. Neither peak is guaranteed to be the top peak, and thus grabbing one peak over the other may result in a focus plane error on the order of tens of microns. If the depth of field of the imaging system is less than 10 microns, grabbing the wrong peak can produce significantly out of focus images.
[0174] The disadvantage of obtaining focus scores without image blurring can be mitigated by blurring the image and calculating ratios as discussed above. Figure 29 shows simplified curves for focus score as a function of z position obtained at the process 4804 according to another embodiment of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. The focus score at each z value is associated with a ratio that is taken between the sobel square sum for the acquired image and the clipped sobel square sum for the blurred image. Figures 28 and 29 are produced from the same acquired images.
[0175] As shown in Figure 29, the blurring and ratio technique in effect normalizes the sobel output by the amount of that same output on a blurry version of the original. The peak of the curves in Figure 29 occurs for the image which suffers the largest degradation as a result of the blurring operation. An image which suffers no degradation produces a ratio value of 1.0. This normalization process can remove the dependency of the sobel operation on the intensity of a particular image plane, which can fluctuate due to optical variations.
[0176] In certain embodiments, the number and scope of adjustments performed at the process 4420 for alignment and focusing depend on the accuracy of the mapping from the design space to the measurement space at the process 4410. For example, bending or tilting of the microfluidic device, thereby shifting the metering cell out of the original plane of the microfluidic device, may result in additional z-axis focusing actions. These additional focusing steps may result in an increase in the amount of time desired to acquire a high-resolution image of the metering cell. Improved mapping between the design space and measurement space would enable the imaging system to move the metering cells to positions in which the acquisition of high-resolution images can be performed with increased efficiency.
[0177] To further improve the mapping accuracy, in some embodiments, more than three fiducial markings may be used at the process 4410 to provide a non-planar transformation between the design space and the measurement space. Figure 30 is a simplified surface map of a three dimensional flexible substrate according to an embodiment of the present invention. The warping or deformation of the microfluidic device is illustrated as an increase in z-axis height at certain x-y positions across the flexible substrate. In one embodiment, the inputs for this higher order dimensional mapping could come from location information obtained using more than three fiducial markings. In another embodiment, inputs could be provided based on measurements made on the metering cell at the process 4420. Feedback from these measurements can be used to update and refine the mapping as a function of time. Consequently, for another metering cell, placement of the microfluidic device in preparation for the process 4420 would improve in accuracy as more data is obtained. The generation of such a higher order dimensional mapping can substantially increase the system throughput by reducing or even eliminating the need for the process 4420 for some or all metering cells.
[0178] In one embodiment of the present invention, a 12 point microfluidic device registration process can be used that fits at least four fiducial markings with a non-planar surface. For example, a three dimensional parabola could be used as the mapping surface. For example, the process of determining the coarse and fine locations of each fiducial marking can contribute information used in the calculation of the parabolic fitting parameters. In one embodiment, fiducials near the edges, the center, and other locations on the microfluidic device could be utilized, in various orders, in the calculation of the parabolic fitting parameters. In another embodiment, the processes 4410 and 4420 could be combined into a single predictive focus-based algorithm that uses higher order fitting and localized corrections to improve system throughput.
[0179] As discussed above, the method 4400 uses the processes 4410 and 4420 for alignment and focusing in certain embodiments. For example, at the process 4410, the alignment and focus of the fiducial marking associated with the metering cell are each within 100-μm accuracy. In another example, at the process 4420, the alignment of the fiducial marking is within about 1-μm accuracy. In yet another example, at the process 4420, the focusing of the fiducial marking is within about 1-μm accuracy.
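One plausible reading of the parabolic mapping surface in paragraph [0178] is a least-squares quadratic fit of device height z against the in-plane coordinates of the measured fiducials; the sketch below assumes that form. A full quadratic surface has six coefficients, so at least six fiducial points are needed, and a 12-point registration would over-determine the fit.

```python
import numpy as np

def fit_parabolic_surface(xy, z):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to the
    measured fiducial heights; xy is an (N, 2) array and z an (N,) array."""
    x, y = np.asarray(xy, dtype=float).T
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(z, dtype=float), rcond=None)
    return coeffs

def predict_z(coeffs, x, y):
    """Predict the z height of the device surface at an (x, y) design location."""
    a, b, c, d, e, f = coeffs
    return a + b * x + c * y + d * x**2 + e * x * y + f * y**2
```

Feeding each newly measured metering cell focus position back into the fit, as suggested above, refines the surface over time and can reduce the number of per-cell focusing steps.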
[0180] As shown in Figure 18, at the process 4430, the metering cell is moved to the focus position and an image of the metering cell is captured. For example, the captured image has a high resolution. In one embodiment, the image is acquired by the same camera that is used to capture the low-resolution image at the process 4810. In another embodiment, a Leica
DC500 digital camera can be used to capture a high-resolution image. For example, the high resolution image has about 3900 x 3030 pixels and covers at least one well region including the fluid and species at a color depth of 16 bits. In another example, the image includes the containment lines, the wells, and the channels that connect the wells. In yet another example, the metering cell is moved in the x dimension and/or the y dimension in order to improve alignment prior to capturing the image of the metering cell.
[0181] The captured image is then normalized. In one embodiment, the color and intensity of the acquired image are significantly affected by the condition and operating voltage of the illumination source of the imaging system. For example, the illumination source is a bulb. As a bulb ages, the overall hue of the image changes, with the red component of the light increasing in intensity in comparison with the other colors. This increase in red intensity may result from a decrease in the bulb temperature. Additionally, even with a constant illumination source, the opacity of the microfluidic device, which can depend on hydration levels and vary with time, may result in differences in image brightness. To correct for these artifacts and any radial vignetting introduced by the microscope, a technique called image normalization can be employed.
[0182] For image normalization, a calibration image is taken with the microfluidic device removed from the imaging system with the stage at a z calibration position. In one example, the z calibration position is different from the focus position. The z calibration position may take into account changes to the illumination beam as the beam passes through the microfluidic device. In another example, the z calibration position is the same as the focus position. The calibration image is then used to correct for the effects resulting from the condition and operating voltage of the illumination source. In one embodiment, the algorithm calculates the ratio of the intensity of the acquired image of the metering cell to the calibration image on a pixel by pixel basis. Because the microfluidic device includes regions that contain substantially no information, the ratio of the intensities in these regions is set equal to unity. The intensity ratio is then multiplied by a scaling factor to maximize the dynamic range around unity.
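By way of a non-limiting illustrative sketch, the pixel-by-pixel ratio described above could be computed, for example, as follows. The mask identifying regions containing substantially no information, the scaling factor, and the small epsilon guarding the division are assumptions introduced only for illustration.

import numpy as np

def normalize_to_calibration(image, calibration, no_info_mask, scale=128.0, eps=1e-6):
    # Pixel-by-pixel ratio of the acquired image to the calibration image.
    ratio = image.astype(float) / np.maximum(calibration.astype(float), eps)
    # Regions containing substantially no information are forced to unity.
    ratio[no_info_mask] = 1.0
    # Multiply by a scaling factor to spread the dynamic range around unity.
    return ratio * scale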
[0183] Although the mapping from this calibration image to the actual image may not be linear due to the bending of light rays as they pass through the microfluidic device and/or glass slab, the image normalization effectively white balances the image by adjusting the red, blue, and green components of the image. Additionally, the image normalization improves consistency between the attenuated edge pixels and the center pixels. For example, the effects of white balance and consistency improvement are significant for low illumination conditions and particular condenser and/or aperture settings in which the non-linearity is pronounced. [0184] Moreover, the image is median shifted to move the centroid of the image histogram, i.e., counts as a function of intensity, to a known value. The image is also downgraded around that centroid to reduce the data size in the image. For example, the intensity ratio is sampled at random locations on the microfluidic device. Using these sampled intensity ratio values, the image is adjusted to shift the centroid of the image to the known value. In one embodiment, the centroid is shifted to align with an intensity level of 128, and the image is downgraded to 8 bits. This shift may be used to either darken or brighten the image. In one embodiment, the normalized, white balanced, and downgraded image is stored in a computer memory available for further processing.
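By way of a non-limiting illustrative sketch, the median shift and downgrading described above could be realized, for example, as follows. The number of random samples, the use of a simple mean of the sampled values as the histogram centroid, and the function names are assumptions for illustration only.

import numpy as np

def shift_and_downgrade(normalized, target=128.0, n_samples=10000, seed=0):
    # Sample the normalized image at random locations and estimate the
    # centroid of its intensity histogram with a simple mean (assumption).
    rng = np.random.default_rng(seed)
    flat = normalized.ravel()
    sample = rng.choice(flat, size=min(n_samples, flat.size), replace=False)
    shift = target - float(np.mean(sample))
    # Shift (darken or brighten) so the centroid lands at the target level,
    # then downgrade to an 8 bit image around that centroid.
    return np.clip(normalized + shift, 0, 255).astype(np.uint8)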
[0185] As discussed above and further emphasized here, the above description of the process 4430 includes merely examples, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. Although the above has been shown using a selected sequence of processes, there can be many alternatives, modifications, and variations. For example, some of the processes may be expanded and/or combined. Other processes may be added to those noted above. Depending upon the embodiment, the specific sequences of processes may be interchanged and others replaced. Further details of these processes are found throughout the present specification. [0186] For example, in another embodiment of the present invention, information obtained at the process 4430 could be used as data inputs for the parabolic fitting at the process 4410 for another metering cell. In this embodiment, the three dimensional locations of the metering cell, as determined from the high-resolution image, can provide information useful in determining the parabolic fitting parameters. For example, the metering cells near the center of the microfluidic device, separated from the fiducial markings near the edges of the microfluidic device, may be measured earlier in time than metering cells near the fiducial markings. The early measurements of centrally located metering cells may provide for faster convergence of the fitting algorithm as the measured location of these centrally located cells may differ from the planar mapping more than the measured locations of cells closer to the fiducial markings.
[0187] As discussed above, the method 4400 uses various fiducial markings in various processes. In one embodiment, the fiducial markings can be any physical features associated with the microfluidic device. For example, the fiducial markings are on the handle substrate of the microfluidic device. In another example, the fiducial markings are on the flexible substrate of the microfluidic device. The fiducial markings may include a channel wall or an edge of the microfluidic device. In yet another example, the fiducial markings are selected from ones described in Figures 1-13A and 15A-15B.
[0188] Additionally, the method 4400 aligns and focuses a metering cell and acquires an image of the metering cell. The alignment and focus process may use at least one fiducial marking for the process 4420. The spatial relationship between the fiducial marking and the metering cell does not change significantly. For example, the fiducial marking is in the vicinity of the metering cell. The metering cell is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. In one embodiment, the method 4400 is applied to any physical feature on the microfluidic device. The physical feature is aligned and focused, and an image of the physical feature is taken. For example, the physical feature is a chamber. The chamber may be a well, a fluid channel, a control channel, or the like.
[0189] Moreover, the method 4400 may be performed by the imaging system 4010 or another imaging system according to the instructions of the computer system 4110 or another computer system. For example, a system for processing one or more microfluidic devices includes one or more computer-readable media and a stage for locating a flexible substrate. The flexible substrate comprises at least three fiducial markings, a first additional fiducial marking, and a first chamber capable of holding a fluid therein. For example, a volume of the fluid is less than a nanoliter. The one or more computer-readable media include one or more instructions for providing a flexible substrate, and one or more instructions for determining a transformation between a design space and a measurement space based on at least information associated with the at least three fiducial markings. Additionally, the one or more computer-readable media include one or more instructions for performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space, one or more instructions for acquiring at least a first image of the first additional fiducial marking associated with the first chamber, one or more instructions for performing a second alignment to the flexible substrate based on at least information associated with the first image, and one or more instructions for acquiring a second image of the first chamber associated with the flexible substrate. [0190] The one or more instructions for determining a transformation between a design space and a measurement space include one or more instructions for determining at least three actual locations corresponding to the at least three fiducial markings respectively. The at least three fiducial markings are associated with at least three design locations respectively. Additionally, the one or more instructions for determining a transformation include one or more instructions for processing information associated with the at least three actual locations and the at least three design locations. The design space is associated with the at least three design locations and the measurement space is associated with the at least three actual locations. The one or more instructions for acquiring at least a first image of the first additional fiducial marking include one or more instructions for acquiring a first plurality of images of the first additional fiducial marking. The first plurality of images includes the first image. Additionally, the one or more instructions for acquiring at least a first image include one or more instructions for processing information associated with the first plurality of images.
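By way of a non-limiting illustrative sketch, a transformation between the design space and the measurement space of the kind described above could be determined from three or more matched fiducial locations by a least-squares planar (affine) fit, for example as follows. The representation of the transformation as a 3 x 2 matrix and the function names are assumptions for illustration only; three matched locations determine the affine transformation exactly, and additional locations yield a least-squares fit.

import numpy as np

def fit_affine(design_points, actual_points):
    # design_points, actual_points: (N, 2) arrays of matched locations, N >= 3.
    ones = np.ones((design_points.shape[0], 1))
    basis = np.hstack([design_points, ones])
    # Least-squares solve for a 3 x 2 matrix mapping design space to measurement space.
    transform, _, _, _ = np.linalg.lstsq(basis, actual_points, rcond=None)
    return transform

def to_measurement_space(transform, design_xy):
    # Map a design-space (x, y) position to the measurement (stage) space.
    x, y = design_xy
    return np.array([x, y, 1.0]) @ transform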
[0191] Moreover, the one or more computer-readable media include one or more instructions for storing the second image in a memory. The memory is a computer memory. The second image includes 3900 by 3030 pixels. The second image comprises a 16 bit image. The one or more instructions for performing a second alignment to the flexible substrate include one or more instructions for translating the flexible substrate in at least one dimension to position a chamber in preparation for capturing the second image. Also, the one or more computer-readable media include one or more instructions for normalizing the second image, one or more instructions for white balancing the second image, and one or more instructions for converting the second image from a first image depth to a second image depth. For example, the first image depth is 16 bits and the second image depth is 8 bits.
[0192] In one embodiment, the first additional fiducial marking is a company logo. The at least three fiducial markings include a company logo. In another embodiment, the flexible substrate is deformable in three dimensions. For example, the flexible substrate is deformed by actions selected from the group consisting of fabrication, handling, and protocols. The protocols can result in the flexible substrate swelling or contracting. In yet another embodiment, a relationship between the design space and the measurement space is non-planar. The flexible substrate is deformed such that a planar transformation is capable of approximately determining an actual location of the first chamber. In yet another embodiment, the transformation between the design space and the measurement space is non-planar. For example, the non-planar transformation comprises a three dimensional parabolic mapping. The non-planar transformation is updated using information obtained by characterization of a second additional fiducial marking.
[0193] Numerous benefits are achieved using the present invention over conventional techniques. Some embodiments provide at least one way to form alignment patterns for a deformable active region for a microfluidic system. Certain embodiments rely on conventional materials, which are relatively easy to use. Some embodiments provide alignment and/or focus based on mapping between the design space and the measurement space. The transformation between the design space and the measurement space uses, for example, at least three fiducial markings. Certain embodiments provide accurate focusing by acquiring and analyzing a plurality of images along at least one dimension. Some embodiments of the present invention perform alignment and focusing on a microfluidic device including at least one flexible substrate. The alignment and focusing take into account the deformation of the flexible substrate. Certain embodiments improve throughput in an imaging system. For example, the imaging system uses a computer system to automatically perform alignment and focusing. In another example, mapping from the design space to the measurement space increases the accuracy of stage positioning, and thereby, the efficiency of high-resolution image acquisition. Depending upon the embodiment, one or more of these benefits may exist. These and other benefits have been described throughout the present specification.
[0194] It is also understood that the examples and embodiments described herein are for illustrative purposes only and that various modifications or changes in light thereof will be suggested to persons skilled in the art and are to be included within the spirit and purview of this application and scope of the appended claims.
WHAT IS CLAIMED IS:
1. A biological substrate, the substrate comprising: a rigid substrate material, the rigid substrate material having a surface region, the surface region being capable of acting as a handle substrate; a deformable fluid layer coupled to the surface region; one or more well regions formed in a first portion of the deformable fluid layer, the one or more well regions being capable of holding a fluid therein; one or more channel regions formed in a second portion of the deformable fluid layer, the one or more channel regions being coupled to one or more of the well regions; an active region formed in the deformable fluid layer, the active region including the one or more well regions; a non-active region formed in the deformable fluid layer, the non-active region being formed outside of the first portion and the second portion; at least three fiducial markings formed within the non-active region and disposed in a spatial manner associated with at least one of the well regions; and a control layer coupled to the fluid layer, the control layer including one or more control regions.
2. The substrate of claim 1 wherein the three fiducial markings are spatially disposed around at least the one well region.
3. The substrate of claim 1 wherein the three fiducial markings are outside of the one or more well regions.
4. The substrate of claim 1 wherein each of the three fiducial markings are spatially disposed around a perimeter of the one well region.
5. The substrate of claim 1 further comprising a preselected fiducial marking, the preselected fiducial marking including at least an edge and a center region, the preselected fiducial marking being characterized by a predetermined shape.
6. The substrate of claim 1 wherein the rigid substrate material is selected from glass, plastic, metal, and composite materials. 7. The substrate of claim 1 wherein the rigid substrate material is characterized as transparent.
8. The substrate of claim 1 wherein the deformable fluid layer is made of a material selected from silicone, polymer, rubber, plastic, and PDMS.
9. The substrate of claim 1 wherein the deformable fluid layer is optically transparent.
10. The substrate of claim 1 wherein the three fiducial markings are each characterized as a predetermined recessed region.
11. The substrate of claim 1 wherein the three fiducial markings include respective images that are capable of being captured with a charge coupled camera.
12. A method of fabricating a biological substrate, the method comprising: providing a rigid substrate material, the rigid substrate material having a surface region, the surface region being capable of acting as a handle substrate; coupling a deformable fluid layer to the surface region of the rigid substrate, the deformable fluid layer comprising: one or more well regions formed in a first portion of the deformable fluid layer, the one or more well regions being capable of holding a fluid therein; one or more channel regions formed in a second portion of the deformable fluid layer, the one or more channel regions being coupled to one or more of the well regions; an active region formed in the deformable fluid layer, the active region including the one or more well regions; a non-active region formed in the deformable fluid layer, the non-active region being formed outside of the first portion and the second portion; at least three fiducial markings formed within the non-active region and disposed in a spatial manner associated with at least one of the well regions; and coupling a control layer to the fluid layer, the control layer including one or more control regions. 13. The method of claim 12 wherein the three fiducial markings are spatially disposed around at least the one well region.
14. The method of claim 12 wherein the three fiducial markings are outside of the one or more well regions.
15. The method of claim 12 wherein each of the three fiducial markings are spatially disposed around a perimeter of the one well region.
16. The method of claim 12 further comprising a preselected fiducial marking, the preselected fiducial marking including at least an edge and a center region, the preselected fiducial marking being characterized by a predetermined shape.
17. The method of claim 12 wherein the rigid substrate material is selected from glass, plastic, composite materials, and a metal.
18. The method of claim 12 wherein the rigid substrate material is characterized as transparent.
19. The method of claim 12 wherein the deformable fluid layer is made of a material selected from silicone, rubber, polymer, plastic, and PDMS.
20. The method of claim 12 wherein the deformable fluid layer is optically transparent.
21. The method of claim 12 wherein the three fiducial markings are each characterized as a predetermined recessed region.
22. The method of claim 12 wherein the three fiducial markings include respective images that are capable of being captured with a charge coupled camera.
23. A method of manufacturing microfluidic chip structures, the method comprising: providing a mold substrate including a plurality of well patterns, each of the well patterns being provided within a portion of an active region of a fluidic chip; forming a plurality of fiducial marking patterns around a vicinity of each of the well patterns, each of the plurality of fiducial marking patterns being within a portion of a non-active region of a fluidic chip, the plurality of fiducial marking patterns including a set of alignment marks being disposed spatially around each of the well patterns; forming a thickness of deformable material within the plurality of well patterns and within the plurality of fiducial marking patterns to fill a portion of the mold substrate; and coupling the thickness of deformable material including a plurality of wells formed from the well patterns and a plurality of fiducial marking patterns formed from the fiducial marking patterns to rigid substrate material.
24. The method of claim 23 wherein the rigid substrate material is selected from a glass, a silicon, a composite, and a plastic.
25. The method of claim 23 wherein the thickness of deformable material is silicone, silicone rubber, rubber, or other polymeric material.
26. The method of claim 23 wherein the set of alignment marks among the fiducial marking patterns spatially disposed around each of the well patterns comprises at least three recessed regions, each of the recessed regions being spatially disposed around a periphery of the well pattern.
27. The method of claim 26 wherein the coupling comprises bonding the thickness of deformable material overlying the rigid substrate material.
28. The method of claim 26 wherein the set of alignment marks comprises at least three recessed regions being spatially disposed around a periphery of the well patterns.
29. A microfluidic system, the system comprising: a rigid substrate material, the rigid substrate material having a surface region, the surface region being capable of acting as a handle substrate; a deformable fluid layer coupled to the surface region; one or more well regions formed in a first portion of the deformable fluid layer, the one or more well regions being capable of holding a fluid therein; one or more channel regions formed in a second portion of the deformable fluid layer, the one or more channel regions being coupled to one or more of the well regions; an active region formed in the deformable fluid layer, the active region including the one or more well regions; a non-active region formed in the deformable fluid layer, the non-active region being formed outside of the first portion and the second portion; a first fiducial marking formed within the non-active region and disposed in a spatial manner associated with at least one of the channel regions; a second fiducial marking formed within the non-active region and disposed in a spatial manner associated with at least one of the well regions; a control layer coupled to the fluid layer, the control layer including one or more control regions; and a third fiducial marking formed within the control layer.
30. The substrate of claim 29 wherein the second fiducial marking is a company logo and is larger than either the first fiducial marking or the second fiducial marking.
31. The substrate of claim 29 wherein the first fiducial marking comprises a first channel fiducial marking and a second channel fiducial marking, the first channel fiducial marking being spatially disposed from the second channel fiducial marking.
32. The substrate of claim 29 wherein the third fiducial marking comprises a first control fiducial marking and a second control fiducial marking, the first control fiducial marking being spatially disposed from the second control fiducial marking.
33. The substrate of claim 29 wherein the first fiducial marking, the second fiducial marking, and the third fiducial marking are spatially disposed from each other along the surface region.
34. The substrate of claim 29 wherein the first fiducial marking is characterized by a spatial dimension of greater than 100 μm and less than 250 μm.
35. The substrate of claim 29 wherein the first fiducial marking is characterized by a depth of at least 10 μm within a thickness of the non-active region.
36. The substrate of claim 29 wherein the second fiducial marking is characterized by a depth of at least 10 μm within a thickness of the non-active region. 37. The substrate of claim 29 wherein the third fiducial marking is characterized by a depth of at least 10 μm within a thickness of the control layer.
38. A method of manufacturing microfluidic chip structures, the method comprising: providing a mold substrate including a plurality of well patterns, each of the well patterns being provided within a portion of an active region of a fluidic chip; forming at least one fiducial marking pattern around a vicinity of one of the well patterns, the fiducial marking pattern is one of a set of alignment marks; forming a thickness of deformable material within the plurality of well patterns and within the fiducial marking pattern to fill a portion of the mold substrate; releasing the deformable material from the mold substrate; and coupling the thickness of deformable material including a plurality of wells formed from the well patterns and a fiducial marking pattern formed from the fiducial marking pattern to a rigid substrate material.
39. A microfluidic system, the system comprising: a rigid substrate material, the rigid substrate material having a surface region, the surface region being capable of acting as a handle substrate; a deformable fluid layer coupled to the surface region; one or more well regions formed in a first portion of the deformable fluid layer, the one or more well regions being capable of holding a fluid therein; one or more channel regions formed in a second portion of the deformable fluid layer, the one or more channel regions being coupled to one or more of the well regions; an active region formed in the deformable fluid layer, the active region including the one or more well regions; a non-active region formed in the deformable fluid layer, the non-active region being formed outside of the first portion and the second portion; a control layer coupled to the fluid layer, the control layer including one or more control regions; and at least three fiducial markings comprising: at least a global alignment fiducial marking within a portion of the deformable layer; a first fiducial marking within the deformable layer or the control layer;
and a second fiducial marking within the deformable layer or the control layer.
40. A method for processing a microfluidic device, the method comprising: providing a flexible substrate including a first plurality of fiducial markings; determining a first plurality of actual locations corresponding to the first plurality of fiducial markings respectively, the first plurality of fiducial markings associated with a first plurality of design locations respectively; processing information associated with the first plurality of actual locations and the first plurality of design locations; determining a transformation between a design space and a measurement space, the design space associated with the first plurality of design locations, the measurement space associated with the first plurality of actual locations; performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space; acquiring a first plurality of images of the first fiducial marking; processing information associated with the first plurality of images; performing a second alignment to the flexible substrate based on at least information associated with the first plurality of images; acquiring a second image of the flexible substrate.
41. The method of claim 40 wherein the flexible substrate further comprises a first chamber, the first chamber being capable of holding a fluid therein.
42. The method of claim 41 wherein the first fiducial marking is within a vicinity of the first chamber.
43. The method of claim 42 wherein the second image of the flexible substrate is associated with the first chamber.
44. The method of claim 43, and further comprising providing the flexible substrate on a stage.
45. The method of claim 43, and further comprising storing the second image in a memory.
46. The method of claim 45 wherein the second image comprises 3900 by 3030 pixels.
47. The method of claim 46 wherein the second image comprises a 16 bit image.
48. The method of claim 47 wherein the memory is a computer memory.
49. The method of claim 40 wherein the first plurality of fiducial markings comprises three fiducial markings.
50. The method of claim 49 wherein the first plurality of fiducial markings is free from the first fiducial marking.
51. The method of claim 50 wherein the processing information associated with the first plurality of images comprises: determining a first plurality of focus scores associated with the first plurality of images; processing information associated with the first plurality of focus scores; determining a focus position based on at least information associated with the first plurality of focus scores.
52. The method of claim 51 wherein the determining a first plurality of focus scores associated with the first plurality of images comprises: for each of the first plurality of images, determining a first value associated with a first characteristic of the each of the first plurality of images; blurring the each of the first plurality of images; determining a second value associated with the first characteristic of the blurred each of the first plurality of images; if the second value is equal to or larger than a predetermined value, determining a focus score equal to a ratio between the first value and the second value; if the second value is smaller than the predetermined value, determining the focus score equal to a ratio between the first value and the predetermined value.
53. The method of claim 52 wherein the acquiring a first plurality of images of the first fiducial marking comprises: moving the flexible substrate to a first plurality of positions; for each of the first plurality of positions, acquiring one of the first plurality of images.
54. The method of claim 50 wherein the performing a second alignment to the flexible substrate comprises moving the flexible substrate to the focus position.
55. The method of claim 40, and further comprising: acquiring a first image of a first fiducial marking associated with the flexible substrate; performing a third alignment to the flexible substrate based on at least information associated with the first image.
56. The method of claim 55 wherein the acquiring a first image of the first fiducial marking comprises: acquiring a second image; processing information associated with the second image; determining whether the first fiducial marking is present in the second image; if the first fiducial marking is not present in the second image, translating the flexible substrate in at least one dimension; wherein the second image is the first image if the first fiducial marking is present in the second image; wherein the processing information associated with the second image includes: segmenting the second image; performing blob analysis to the second image.
57. A method for processing a microfluidic device, the method comprising: providing a flexible substrate including at least three fiducial markings, a first additional fiducial marking, and a first chamber capable of holding a fluid therein; determining a transformation between a design space and a measurement space based on at least information associated with the at least three fiducial markings; performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space; acquiring at least a first image of the first additional fiducial marking associated with the first chamber; performing a second alignment to the flexible substrate based on at least information associated with the first image; acquiring a second image of the first chamber associated with the flexible substrate.
58. The method of claim 57 wherein the determining a transformation between a design space and a measurement space comprises: determining at least three actual locations corresponding to the at least three fiducial markings respectively, the at least three fiducial markings being associated with at least three design locations respectively; processing information associated with the at least three actual locations and the at least three design locations.
59. The method of claim 58 wherein the design space is associated with the at least three design locations and the measurement space is associated with the at least three actual locations.
60. The method of claim 57 wherein the acquiring at least a first image of the first additional fiducial marking comprises: acquiring a first plurality of images of the first additional fiducial marking, the first plurality of images including the first image; processing information associated with the first plurality of images.
61. The method of claim 57, and further comprising locating the flexible substrate on a stage. 62. The method of claim 61, and further comprising storing the second image in a memory.
63. The method of claim 62 wherein the memory is a computer memory.
64. The method of claim 51 wherein the second image comprises 3900 by 3030 pixels.
65. The method of claim 64 wherein the second image comprises a 16 bit image.
66. The method of claim 57 wherein the performing a second alignment to the flexible substrate comprises: translating the flexible substrate in at least one dimension to position a chamber in preparation for capturing the second image.
67. The method of claim 57 wherein a volume of the fluid is less than a nanoliter.
68. The method of claim 57 wherein the first additional fiducial marking is a company logo.
69. The method of claim 57 wherein the at least three fiducial markings comprise a company logo.
70. The method of claim 57 wherein the flexible substrate is deformable in three dimensions.
71. The method of claim 70 wherein the flexible substrate is deformed by actions selected from the group consisting of fabrication, handling, and protocols.
72. The method of claim 71 wherein the protocols can result in the flexible substrate swelling or contracting.
73. The method of claim 70 wherein a relationship between the design space and the measurement space is non-planar. 74. The method of claim 73 wherein the flexible substrate is deformed such that a planar transformation is capable of approximately determining an actual location of the first chamber.
75. The method of claim 57, and further comprising: normalizing the second image; white balancing the second image; converting the second image from a first image depth to a second image depth.
76. The method of claim 75 wherein the first image depth is 16 bits and the second image depth is 8 bits.
77. The method of claim 57 wherein the transformation between the design space and the measurement space is non-planar.
78. The method of claim 77 wherein the non-planar transformation comprises a three dimensional parabolic mapping.
79. The method of claim 77 wherein the non-planar transformation is updated using information obtained by characterization of a second additional fiducial marking.
80. A system for processing one or more microfluidic devices, the system including one or more computer-readable media, the system also including a stage for locating a flexible substrate, the flexible substrate comprising at least three fiducial markings, a first additional fiducial marking, and a first chamber capable of holding a fluid therein, the one or more computer-readable media including: one or more instructions for providing a flexible substrate; one or more instructions for determining a transformation between a design space and a measurement space based on at least information associated with the at least three fiducial markings; one or more instructions for performing a first alignment to the flexible substrate based on at least information associated with the transformation between the design space and the measurement space; one or more instructions for acquiring at least a first image of the first additional fiducial marking associated with the first chamber; one or more instructions for performing a second alignment to the flexible substrate based on at least information associated with the first image; one or more instructions for acquiring a second image of the first chamber associated with the flexible substrate.
81. The one or more computer-readable media of claim 80 wherein the one or more instructions for determining a transformation between a design space and a measurement space comprise: one or more instructions for determining at least three actual locations corresponding to the at least three fiducial markings respectively, the at least three fiducial markings being associated with at least three design locations respectively; one or more instructions for processing information associated with the at least three actual locations and the at least three design locations.
82. The one or more computer-readable media of claim 81 wherein the design space is associated with the at least three design locations and the measurement space is associated with the at least three actual locations.
83. The one or more computer-readable media of claim 80 wherein the one or more instructions for acquiring at least a first image of the first additional fiducial marking comprise: one or more instructions for acquiring a first plurality of images of the first additional fiducial marking, the first plurality of images including the first image; one or more instructions for processing information associated with the first plurality of images.
84. The one or more computer-readable media of claim 80, and further comprising one or more instructions for storing the second image in a memory.
85. The one or more computer-readable media of claim 84 wherein the memory is a computer memory.
86. The one or more computer-readable media of claim 80 wherein the second image comprises 3900 by 3030 pixels. 87. The one or more computer-readable media of claim 86 wherein the second image comprises a 16 bit image.
88. The one or more computer-readable media of claim 80 wherein the one or more instructions for performing a second alignment to the flexible substrate comprise: one or more instructions for translating the flexible substrate in at least one dimension to position a chamber in preparation for capturing the second image.
89. The one or more computer-readable media of claim 80 wherein a volume of the fluid is less than a nanoliter.
90. The one or more computer-readable media of claim 80 wherein the first additional fiducial marking is a company logo.
91. The one or more computer-readable media of claim 80 wherein the at least three fiducial markings comprise a company logo.
92. The one or more computer-readable media of claim 80 wherein the flexible substrate is deformable in three dimensions.
93. The one or more computer-readable media of claim 92 wherein the flexible substrate is deformed by actions selected from the group consisting of fabrication, handling, and protocols.
94. The one or more computer-readable media of claim 93 wherein the protocols can result in the flexible substrate swelling or contracting.
95. The one or more computer-readable media of claim 92 wherein a relationship between the design space and the measurement space is non-planar.
96. The one or more computer-readable media of claim 95 wherein the flexible substrate is deformed such that a planar transformation is capable of approximately determining an actual location of the first chamber.
97. The one or more computer-readable media of claim 80, and further comprising: one or more instructions for normalizing the second image; one or more instructions for white balancing the second image; one or more instructions for converting the second image from a first image depth to a second image depth.
98. The one or more computer-readable media of claim 97 wherein the first image depth is 16 bits and the second image depth is 8 bits.
99. The one or more computer-readable media of claim 80 wherein the transformation between the design space and the measurement space is non-planar.
100. The one or more computer-readable media of claim 99 wherein the non-planar transformation comprises a three dimensional parabolic mapping.
101. The one or more computer-readable media of claim 99 wherein the non-planar transformation is updated using information obtained by characterization of a second additional fiducial marking.
102. A method for processing a microfluidic device, the method comprising: providing a flexible substrate comprising one or more well regions and a plurality of fiducial marks, the well regions being capable of holding a fluid therein, at least three of the fiducial marks being within a vicinity of one of the well regions; locating the flexible substrate on a stage; capturing an image of at least the three fiducial marks within the vicinity of the one well region of the flexible substrate to generate a mapping from a design space to a measurement space; aligning the flexible substrate to an image acquisition location using at least the mapping from the design space and one additional fiducial mark, wherein the at least one additional fiducial mark is associated with the one well region; acquiring a high-resolution image of at least the one well region; and storing the high-resolution image in a memory.
103. The method of claim 102 wherein a high-resolution image of at least the one well region is acquired. 104. The method of claim 103 wherein the high-resolution image comprises 3900 by 3030 pixels.
105. The method of claim 104 wherein the high-resolution image comprises a 16 bit image.
106. The method of claim 102 wherein the memory is a computer memory.
107. The method of claim 102 wherein aligning the flexible substrate comprises acquiring a low-resolution image, normalizing the low-resolution image, and median correcting the low-resolution image.
108. The method of claim 107 wherein aligning the flexible substrate further comprises segmenting the image, performing blob analysis, and translating the flexible substrate in at least one dimension.
109. The method of claim 108 wherein translating the flexible substrate in at least one dimension comprises translating the stage to position a metering cell in preparation for capturing a high-resolution image.
110. The method of claim 102 wherein a volume of the fluid is less than a nanoliter.
111. The method of claim 102 wherein the at least one additional fiducial mark is a company logo.
112. The method of claim 102 wherein the at least three fiducial marks are alignment marks.
113. The method of claim 112 wherein the alignment marks are portions of the wells.
114. The method of claim 102 wherein the flexible substrate is deformable in three dimensions.
115. The method of claim 114 wherein the flexible substrate is deformed by actions selected from the group consisting of fabrication, handling, and protocols. 116. The method of claim 115 wherein the protocols can result in the flexible substrate swelling or contracting.
117. The method of claim 116 wherein the flexible substrate is deformed such that there is no linear mapping between the design space and the measurement space.
118. The method of claim 117 wherein the flexible substrate is deformed such that a planar mapping is insufficient to accurately predict the image acquisition location.
119. The method of claim 102 wherein prior to storing the high-resolution image in a memory, the high-resolution image is normalized, white balanced, and converted to a reduced image depth.
120. The method of claim 119 wherein the reduced image depth is reduced from a 16 bit image to an 8 bit image.
121. The method of claim 102 wherein aligning the flexible substrate in the three dimensions to an image acquisition location using at least one additional fiducial mark comprises creating a higher order mapping from a design space to a measurement space.
122. The method of claim 121 wherein the higher order mapping comprises a three dimensional parabolic mapping.
123. The method of claim 121 wherein the higher order mapping is updated using information obtained by characterization of a second additional fiducial mark.
124. A method of processing a biological microfluidic device comprising: providing a deformable substrate comprising one or more metering cells, the metering cells being capable of containing a fluid therein; locating the deformable substrate on a stage translatable in x, y, and z directions; translating the stage to image at least four fiducial marks associated with the deformable substrate; determining x, y, and z positions of the at least four fiducial marks; computing a non-planar mapping between a design space and a measurement space based on the x, y, and z positions of the at least four fiducial marks; translating the stage to an image acquisition position calculated using the non-planar mapping; and capturing an image of at least one metering cell.
125. The method of claim 124 wherein prior to capturing the image of at least one metering cell, scanning the stage in the z direction; acquiring an image of an alignment mark; calculating a focus metric; and repeating the scanning, acquiring, and calculating steps.
126. A system for processing one or more microfluidic devices, the system including one or more computer memories, the system also including a stage for locating a flexible substrate, the flexible substrate comprising one or more well regions and a plurality of fiducial marks, the well regions being capable of holding a fluid therein, at least three of the fiducial marks being within a vicinity of one of the well regions, the one or more computer memories comprising one or more computer codes, the one or more computer codes including: a first code directed to capturing an image of at least the three fiducial marks within the vicinity of the one well region of the flexible substrate to generate a mapping from a design space to a measurement space; a second code directed to aligning the flexible substrate to an image acquisition location using at least the mapping from the design space and one additional fiducial mark, wherein the at least one additional fiducial mark is associated with the one well region; a third code directed to acquiring a high-resolution image of at least the one well region; and a fourth code directed to storing the high-resolution image in a memory.
127. A method for producing an image of an object within a chamber of a microfluidic device comprising the steps of: providing said microfluidic device, said microfluidic device having x, y, and z dimensions and further comprising a chamber depth center point located between a top wall and a bottom wall of said chamber along said z dimension, said chamber depth center point being located a known z dimension distance from an optically detectable fiducial marking embedded within said microfluidic device at a z depth; placing said microfluidic device within an imaging system comprising: an optical device capable of detecting said fiducial marking and transmitting said image of said object, said optical device defining an optical path axially aligned with said z dimension of said microfluidic device and having a focal plane perpendicular to said optical path, wherein when said focal plane is moved along said optical path in line with said fiducial marking, said fiducial marking is maximally detected when said focal plane is at said z depth in comparison to when said focal plane is not substantially in-plane with said z depth, an image processing device in communication with said optical device, said image processing device being able to control said optical device to cause said focal plane to move along said z axis and move said focal plane to maximally detect said fiducial marking, said image processing device being further able to transmit said image of said object; controlling said optical device with said image processing device to cause said focal plane to move along said optical path until said optical device maximally detects said fiducial marking; further controlling said optical device with said image processing device to move said focal plane along said optical path said z dimension distance to cause said field depth center point to be located at said chamber depth center point; and, imaging said object within said chamber while said focal plane is located at said chamber depth center point.
128. The method of claim 127 wherein said microfluidic device is made wholly or partly from an elastomeric material.
129. The method of claim 128 wherein said elastomeric material is polydimethylsiloxane.
130. The method of claim 127 wherein said microfluidic device is partly made from glass material.
131. The method of claim 130 wherein said chamber is formed wholly or partly in said glass material.
132. The method of claim 128 wherein said chamber is wholly or partly within said elastomeric material. 133. The method of claim 127 wherein said depth of field is greater than, equal to, or less than the z dimension of said chamber.
134. The method of claim 127 wherein said optical device comprises an analog output charge coupled device type image detector and said image processor comprises an analog to digital converter.
135. The method of claim 127 wherein said optical device comprises a digital detection device.
136. The method of claim 127 wherein said image processing device comprises a digital computer and a data storage device.
137. The method of claim 136 wherein said digital computer comprises an output display for displaying said image of said object.
138. A system for producing an image of an object within a chamber of a microfluidic device comprising: said microfluidic device, said microfluidic device having x, y, and z dimensions and further comprising a chamber depth center point located between a top wall and a bottom wall of said chamber along said z dimension, said chamber depth center point being located a known z dimension distance from an optically detectable fiducial marking embedded within said microfluidic device at a z depth; an imaging system for placing said microfluidic device therein comprising: an optical device capable of detecting said fiducial marking and transmitting said image of said object, said optical device defining an optical path axially aligned with said z dimension of said microfluidic device and having a focal plane, wherein when said focal plane is moved along said optical path in line with said fiducial marking, said fiducial marking is maximally detected when said focal plane is substantially in-plane with said z depth as compared to when said field depth center point is not substantially in-plane with said z depth, an image processing device in communication with said optical device, said image processing device being able to control said optical device to cause said focal plane to move along said z axis and move said field depth center point to maximally detect said fiducial marking, said image processing device being further able to transmit said image of said object, said image processing device being in operable communication with said optical device to cause said focal plane to move along said optical path until said optical device maximally detects said fiducial marking, wherein when said image processing device causes said optical device to move said focal plane along said optical path said z dimension distance, said focal point is located at said chamber depth center point.
139. The system of claim 138 wherein said microfluidic device is made wholly or partly from an elastomeric material.
140. The system of claim 139 wherein said elastomeric material is polydimethylsiloxane.
141. The system of claim 138 wherein said microfluidic device is partly made from glass material.
142. The system of claim 138 wherein said chamber is formed wholly or partly in said glass material.
143. The system of claim 139 wherein said chamber is wholly or partly within said elastomeric material.
144. The system of claim 138 wherein said depth of field is greater than, equal to, or less than the z dimension of said chamber.
145. The system of claim 138 wherein said optical device comprises an analog output charge coupled device type image detector and said image processor comprises an analog to digital converter.
146. The system of claim 138 wherein said optical device comprises a digital detection device.
147. The system of claim 138 wherein said image processing device comprises a digital computer and a data storage device.
148. The system of claim 147 wherein said digital computer comprises an output display for displaying said image of said object. 149. The system of claim 148 wherein said output display comprises a graphical user interface.
150. A method for producing an image of a chamber within a microfluidic device comprising the steps of: imaging said microfluidic device to produce an image using an imaging system having an optical path in the z plane of said microfluidic device; mapping from said image a first set of coordinates of said microfluidic device to determine whether the microfluidic device is skewed or distorted when compared to a coordinate map of an ideal microfluidic device; positioning said microfluidic device so as to position said chamber within said optical path based on a matrix transform calculated coordinate position determined by computing a matrix transformation between said first set of coordinates of said microfluidic device and said coordinate map of said ideal microfluidic device; obtaining a time zero image of said microfluidic device chamber; wherein said time zero image contains images of artifacts present in said microfluidic device; obtaining a second image of said microfluidic device chamber; and, subtracting the first image of said microfluidic device chamber from said second image of said microfluidic chamber to produce an image of said chamber without time zero artifacts.
151. A microfluidic system, the system comprising: a substrate comprising a surface region; a deformable layer coupled to the surface of the substrate, the deformable layer being made of at least a thickness of first material; a control layer coupled to the deformable layer to form a sandwich structure including at least the substrate, the deformable layer and the control layer, the control layer being made of at least a thickness of second material; at least one fiducial marking provided within either the control layer or the deformable layer or the substrate, the fiducial marking being characterized by a visual pattern provided in a volume surrounded wholly or partially by at least the substrate, the first material, or the second material; and a fluid disposed within the open volume of the one fiducial marking, the fluid being characterized by a refractive index. 152. The system of claim 151 wherein the refractive index is associated with air.
153. The system of claim 151 wherein the fluid comprises an air mixture.
154. The system of claim 151 wherein the fluid comprises air.
155. The system of claim 151 wherein the open volume is characterized by a recessed region.
156. The system of claim 151 wherein the refractive index is substantially different from a surrounding material, the surrounding material being either the first material or the second material or the substrate.
157. The system of claim 151 wherein the fiducial marking is characterized by edges to form the pattern.
158. The system of claim 157 wherein the edges comprise 90 degree corners.
159. The system of claim 151 wherein the fluid is non-reactive.
160. The system of claim 151 wherein the substrate is selected from silicon, quartz, glass, or rigid plastic.
(Figure pages imgf000130_0001 through imgf000157_0001 are reproduced as drawings. Legible text within the drawings includes: a flow diagram 300 reading Start; 301 Provide mold substrate material; 302 Apply first layer of photoresist onto mold substrate; 303 Pattern the first layer of photoresist to form control fluid regions; 304 Form control fluid regions through the patterned film on the mold substrate material; 305 Strip first layer of photoresist to form completed mold substrate material including control fluid regions; Stop. Also legible: Figure 6; Figure 9; FIG. 18 showing method 4400 with 4410 Map Between Measurement Space and Design Space, 4420 Align and Focus, and 4430 Capture Image; FIG. 19 showing process 4410; and Fig. 25 showing 4804.)
Claims

WHAT IS CLAIMED IS:
1. A method for processing an image of a microfluidic device, the method comprising: receiving a first image of a microfluidic device, the first image corresponding to a first state; receiving a second image of the microfluidic device, the second image corresponding to a second state; transforming the first image into a third coordinate space, the transforming using at least a first fiducial on the first image; transforming the second image into the third coordinate space, the transforming using at least a second fiducial on the second image; obtaining a third image based on at least information associated with the transformed first image and the transformed second image; processing the third image to obtain information associated with the first state and the second state.
2. The method of claim 1, the method further comprising: locating the at least a first fiducial on the first image; locating the at least a second fiducial on the second image.
3. The method of claim 1 wherein the transforming the first image into a third coordinate space comprises: associating the at least a first fiducial to at least a third fiducial in the third coordinate space; performing a first transformation to the first image based on at least information associated with the at least a first fiducial and the at least a third fiducial.
4. The method of claim 3 wherein the performing a first transformation comprises: estimating the first transformation based on at least information associated with the at least a first fiducial and the at least a third fiducial; converting the first image into the third coordinate space, the converting using the first transformation.
5. The method of claim 4 wherein the transforming the second image into the third coordinate space comprises: associating the at least a second fiducial to the at least a third fiducial in the third coordinate space; performing a second transformation to the second image based on at least information associated with the at least a second fiducial and the at least a third fiducial.
6. The method of claim 5 wherein the performing a second transformation comprises: estimating the second transformation based on at least information associated with the at least a second fiducial and the at least a third fiducial; converting the second image into the third coordinate space, the converting using the second transformation.
7. The method of claim 1 wherein the obtaining a third image comprises: obtaining a difference between the first image and the second image.
8. The method of claim 7 wherein the obtaining a third image further comprises: masking at least a first part of the first image, the at least a first part free from information associated with the first state; masking at least a second part of the second image, the at least a second part free from information associated with the second state.
9. The method of claim 8 wherein the at least a second part corresponds to the at least a first part, the at least a second part based on at least information associated with a change of a feature from the first image to the second image.
10. The method of claim 7 wherein the obtaining a third image further comprises masking at least a third part of the third image, the at least a third part free from information associated with the first state and the second state.
11. The method of claim 10 wherein the at least a third part is based on at least information associated with a change of a feature from the first image to the second image.
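Illustrative sketch (not part of the claims): the registration-and-subtraction flow of claims 1 through 11 can be outlined in Python with NumPy. The fiducial centroids are assumed to have been located already, and the function names, the least-squares affine estimate, and the nearest-neighbour warp are choices of this sketch rather than the specification's method.

    import numpy as np

    def estimate_affine(src_pts, dst_pts):
        # Least-squares 2x3 affine mapping source fiducial centroids onto target ones.
        src = np.asarray(src_pts, dtype=float)
        dst = np.asarray(dst_pts, dtype=float)
        A = np.hstack([src, np.ones((len(src), 1))])       # rows: [x, y, 1]
        params, *_ = np.linalg.lstsq(A, dst, rcond=None)   # solves A @ params = dst
        return params.T                                    # 2x3: [[a, b, tx], [c, d, ty]]

    def warp_nearest(image, affine, out_shape):
        # Map every output pixel back into the source image (nearest-neighbour sampling).
        H, W = out_shape
        ys, xs = np.mgrid[0:H, 0:W]
        full = np.vstack([affine, [0.0, 0.0, 1.0]])
        inv = np.linalg.inv(full)                          # output -> source coordinates
        coords = inv @ np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
        sx = np.clip(np.round(coords[0]).astype(int), 0, image.shape[1] - 1)
        sy = np.clip(np.round(coords[1]).astype(int), 0, image.shape[0] - 1)
        return image[sy, sx].reshape(H, W)

    def register_and_subtract(img1, fid1, img2, fid2, ref_fid, out_shape, mask=None):
        # Warp both states into the common (third) coordinate space, then difference them.
        w1 = warp_nearest(img1, estimate_affine(fid1, ref_fid), out_shape)
        w2 = warp_nearest(img2, estimate_affine(fid2, ref_fid), out_shape)
        diff = w2.astype(float) - w1.astype(float)
        if mask is not None:
            diff[~mask] = 0.0                              # suppress regions carrying no state information
        return diff

In this sketch a non-zero region of the difference image indicates a change between the first and second states, for example the appearance of a crystal in a chamber.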
12. A computer-readable medium including instructions for processing an image of a microfluidic device, the computer-readable medium comprising: one or more instructions for receiving a first image of a microfluidic device, the first image corresponding to a first state; one or more instructions for receiving a second image of the microfluidic device, the second image corresponding to a second state; one or more instructions for transforming the first image into a third coordinate space, the transforming using at least a first fiducial on the first image; one or more instructions for transforming the second image into the third coordinate space, the transforming using at least a second fiducial on the second image; one or more instructions for obtaining a third image based on at least information associated with the transformed first image and the transformed second image; one or more instructions for processing the third image to obtain information associated with the first state and the second state.
13. The computer-readable medium of claim 12, the computer-readable medium further comprising: one or more instructions for locating the at least a first fiducial on the first image; one or more instructions for locating the at least a second fiducial on the second image.
14. The computer-readable medium of claim 12 wherein the one or more instructions for transforming the first image into a third coordinate space comprises: one or more instructions for associating the at least a first fiducial to at least a third fiducial in the third coordinate space; one or more instructions for performing a first transformation to the first image based on at least information associated with the at least a first fiducial and the at least a third fiducial.
15. The computer-readable medium of claim 14 wherein the one or more instructions for performing a first transformation comprises: one or more instructions for estimating the first transformation based on at least information associated with the at least a first fiducial and the at least a third fiducial; one or more instructions for converting the first image into the third coordinate space, the converting using the first transformation.
16. The computer-readable medium of claim 15 wherein the one or more instructions for transforming the second image into the third coordinate space comprises: one or more instructions for associating the at least a second fiducial to the at least a third fiducial in the third coordinate space; one or more instructions for performing a second transformation to the second image based on at least information associated with the at least a second fiducial and the at least a third fiducial.
17. The computer-readable medium of claim 16 wherein the one or more instructions for performing a second transformation comprises: one or more instructions for estimating the second transformation based on at least information associated with the at least a second fiducial and the at least a third fiducial; one or more instructions for converting the second image into the third coordinate space, the converting using the second transformation.
18. The computer-readable medium of claim 12 wherein the one or more instructions for obtaining a third image comprises: one or more instructions for obtaining a difference between the first image and the second image.
19. The computer-readable medium of claim 18 wherein the one or more instructions for obtaining a third image further comprises: one or more instructions for masking at least a first part of the first image, the at least a first part free from information associated with the first state; one or more instructions for masking at least a second part of the second image, the at least a second part free from information associated with the second state.
20. The computer-readable medium of claim 19 wherein the at least a second part corresponds to the at least a first part, the at least a second part based on at least information associated with a change of a feature from the first image to the second image.
21. The computer-readable medium of claim 18 wherein the one or more instructions for obtaining a third image further comprises one or more instructions for masking at least a third part of the third image, the at least a third part free from information associated with the first state and the second state.
22. The computer-readable medium of claim 21 wherein the at least a third part is based on at least information associated with a change of a feature from the first image to the second image.
23. The method of claim 1 wherein the first state is different from the second state.
24. The method of claim 1 wherein the first state is the same as the second state.
25. The method of claim 1 wherein the first state is associated with absence of crystallization.
26. The method of claim 25 wherein the second state is associated with presence of crystallization.
27. The method of claim 25 wherein the second state is associated with absence of crystallization.
28. The computer-readable medium of claim 12 wherein the first state is different from the second state.
29. The computer-readable medium of claim 12 wherein the first state is the same as the second state.
30. The computer-readable medium of claim 12 wherein the first state is associated with absence of crystallization.
31. The computer-readable medium of claim 30 wherein the second state is associated with presence of crystallization.
32. The computer-readable medium of claim 30 wherein the second state is associated with absence of crystallization.
33. A method for processing an image of a microfluidic device, the method comprising: receiving a first image of a microfluidic device, the first image including a first fiducial marking and a first chamber region, the first chamber region being associated with a first chamber boundary; transforming the first image into a first coordinate space based on at least information associated with the first fiducial marking; removing at least a first part of the first chamber boundary from the first image; processing information associated with the first chamber region; determining whether a first crystal is present in the first chamber region.
34. The method of claim 33 wherein the determining whether a first crystal is present in the first chamber region comprises: generating a first plurality of features based on at least information associated with the first chamber region; processing information associated with the first plurality of features; determining a second plurality of features based on at least information associated with the first plurality of features; processing information associated with the second plurality of features; determining whether the first crystal is present or absent in the first chamber region.
35. The method of claim 34 wherein the determining whether the first crystal is present or absent in the first chamber region comprises: determining a first likelihood that the first crystal is present in the chamber region based on at least information associated with the second plurality of features; processing information associated with the first likelihood and a first threshold; determining that the first crystal is present if the first likelihood exceeds a first threshold and the first crystal is absent if the first likelihood does not exceed the first threshold.
36. The method of claim 34 wherein the first plurality of features comprises at least a neighborhood line detector feature, the neighborhood line detector feature being associated with detecting at least a straight line pattern.
37. The method of claim 34 wherein the second plurality of features comprises a first Fisher feature.
38. The method of claim 37 wherein the first Fisher feature is associated with a first image state and a second image state, each of the first image state and the second image state being selected from a group consisting of a crystal state, a phase/precipitate state, and a clear state.
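One possible reading of the neighborhood line detector feature recited in claim 36, sketched purely for illustration, correlates each pixel neighborhood with a small bank of oriented line kernels and keeps the strongest response. The kernel construction, sizes, and SciPy calls below are assumptions of this sketch, not the specification's definition.

    import numpy as np
    from scipy.ndimage import convolve

    def line_kernels(size=7):
        # Zero-mean line kernels at 0, 45, 90 and 135 degrees.
        k = size // 2
        kernels = []
        for dy, dx in [(0, 1), (1, 1), (1, 0), (1, -1)]:
            ker = np.zeros((size, size))
            for t in range(-k, k + 1):
                ker[k + t * dy, k + t * dx] = 1.0
            kernels.append(ker - ker.mean())
        return kernels

    def neighborhood_line_feature(chamber, size=7):
        # Per-pixel evidence for a straight-line pattern (e.g. a crystal facet edge).
        img = chamber.astype(float)
        responses = [convolve(img, ker, mode="nearest") for ker in line_kernels(size)]
        return np.max(np.abs(np.stack(responses)), axis=0)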
39. The method of claim 33 wherein: the first chamber boundary comprises a first section and a second section, the first section being substantially parallel with the second section; the removing at least a first part of the first chamber boundary from the first image comprises: determining a first plurality of intensities associated with a first plurality of pixels along a first direction, the first direction intersecting both the first section and the second section; processing information associated with the first plurality of intensities; determining a first location associated with the first section and a second location associated with the second section based on at least information related to the first plurality of intensities.
40. The method of claim 39 wherein the determining a first location associated with the first section and a second location associated with the second section comprises: determining a third location associated with the first section based on at least information related to the first plurality of intensities; determining a fourth location associated with the second section based on at least information related to the third location and a predetermined distance between the first section and the second section; processing at least information associated with the third location, the fourth location, and the first plurality of intensities; determining the first location and the second location based on at least information associated with the third location, the fourth location, and the first plurality of intensities.
41. The method of claim 40 wherein the determining the first location and the second location based on at least information associated with the third location, the fourth location, and the first plurality of intensities comprises: determining a first penalty function associated with the third location; determining a second penalty function associated with the fourth location; processing information associated with the first penalty function and the second penalty function; determining a third penalty function, the third penalty function being associated with the first penalty function and the second penalty function; processing information associated with the third penalty function; determining the first location and the second location based on at least information associated with the third penalty function.
42. The method of claim 41 wherein the determining the first location and the second location based on at least information associated with the third penalty function comprises minimizing the third penalty function.
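A minimal sketch of the wall-location step of claims 39 through 42, assuming the chamber walls image darker than their surroundings and that the nominal wall spacing is known in advance. The particular penalty terms and the exhaustive search are illustrative choices, not the specification's construction.

    import numpy as np

    def locate_walls(profile, nominal_gap, spacing_weight=5.0):
        # profile: 1-D intensities along a scan line crossing both chamber walls
        # (walls assumed darker than the chamber interior).
        # nominal_gap: predetermined distance between the two wall sections, in pixels.
        profile = np.asarray(profile, dtype=float)
        n = len(profile)
        best_pair, best_cost = None, np.inf
        for i in range(n):
            for j in range(i + 1, n):
                p1 = profile[i]                                              # penalty for the first wall
                p2 = profile[j]                                              # penalty for the second wall
                p3 = p1 + p2 + spacing_weight * abs((j - i) - nominal_gap)   # combined penalty
                if p3 < best_cost:
                    best_pair, best_cost = (i, j), p3
        return best_pair   # locations of the first and second wall along the scan line

Minimizing the combined penalty couples the two candidate wall positions through the predetermined spacing, in the spirit of claims 41 and 42.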
43. A method for processing a plurality of images of a microfluidic device, the method comprising: receiving at least a first image and a second image of a microfluidic device, the first image and the second image being associated with a first focal position and a second focal position respectively, each of the first image and the second image including a first chamber region; processing information associated with the first image and the second image; generating a third image based on at least information associated with the first image and the second image; processing information associated with the third image; determining whether a first crystal is present in the first chamber region based on at least information associated with the third image.
44. The method of claim 43 wherein: the third image comprises a first fiducial marking; the first chamber region is associated with a first chamber boundary; the determining whether a first crystal is present in the first chamber region comprises: transforming the third image into a first coordinate space based on at least information associated with the first fiducial marking; removing at least a first part of the first chamber boundary from the third image; processing information associated with the first chamber region; determining whether a first crystal is present or absent in the first chamber region.
45. The method of claim 44 wherein the determining whether a first crystal is present or absent in the chamber region comprises: generating a first plurality of features based on at least information associated with the first chamber region; processing information associated with the first plurality of features; determining a second plurality of features based on at least information associated with the first plurality of features; processing information associated with the second plurality of features; determining whether the first crystal is present or absent in the first chamber region based on at least information associated with the second plurality of features.
46. The method of claim 43 wherein the generating a third image comprises: determining a first plurality of sharpness values and a first plurality of colorness values associated with the first image; determining a second plurality of sharpness values and a second plurality of colorness values associated with the second image; processing information associated with the first plurality of sharpness values, the first plurality of colorness values, the second plurality of sharpness values, the second plurality of colorness values; determining a first plurality of intensities associated with the third image based on at least information associated with the first plurality of sharpness values, the first plurality of colorness values, the second plurality of sharpness values, the second plurality of colorness values.
47. The method of claim 46 wherein the first plurality of intensities comprises a plurality of red intensities, a plurality of green intensities, and a plurality of blue intensities.
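The focal-stack combination of claims 43 through 47 can be sketched, again only as an illustration, by computing per-pixel sharpness and colorness measures for each focal position and keeping the better-scoring pixel. The Laplacian-based sharpness, channel-spread colorness, and weighting used here are assumptions of the sketch.

    import numpy as np
    from scipy.ndimage import laplace

    def sharpness(img_rgb):
        # Per-pixel sharpness: magnitude of the Laplacian of the grey-level image.
        grey = img_rgb.astype(float).mean(axis=2)
        return np.abs(laplace(grey))

    def colorness(img_rgb):
        # Per-pixel colourfulness: spread between the strongest and weakest channel.
        img = img_rgb.astype(float)
        return img.max(axis=2) - img.min(axis=2)

    def fuse_focal_stack(img_a, img_b, color_weight=0.5):
        # Per pixel, keep the focal plane with the better combined sharpness/colorness score.
        score_a = sharpness(img_a) + color_weight * colorness(img_a)
        score_b = sharpness(img_b) + color_weight * colorness(img_b)
        take_b = (score_b > score_a)[..., None]   # broadcast the choice over R, G, B
        return np.where(take_b, img_b, img_a)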
48. A method for adjusting a classifier and processing an image of a microfluidic device, the method comprising: receiving a first image of a microfluidic device, the first image being associated with at least a first predetermined characteristic; generating a first plurality of features based on at least information associated with the first image; selecting a second plurality of features from the first plurality of features based on at least information associated with the first plurality of features and the at least a first predetermined characteristic; determining a third plurality of features based on at least information associated with the second plurality of features; processing information associated with the third plurality of features; determining at least a first likelihood based on at least information based on the third plurality of features and a first plurality of parameters; processing information associated with the first likelihood and the at least a first predetermined characteristic; adjusting the first plurality of parameters based on at least information associated with the first likelihood and the at least a first predetermined characteristic.
49. The method of claim 48, and further comprising: receiving a second image of a microfluidic device; generating the second plurality of features based on at least information associated with the second image; processing information associated with the second plurality of features; determining the third plurality of features based on at least information associated with the second plurality of features; processing information associated with the third plurality of features and the first plurality of adjusted parameters; determining whether a first crystal is present or absent in the second image based on at least information associated with the third plurality of features and the first plurality of adjusted parameters.
50. The method of claim 49 wherein the determining whether the first crystal is present or absent in the second image comprises: determining a second likelihood that the first crystal is present in the second image based on at least information associated with the third plurality of features; processing information associated with the second likelihood and a first threshold; determining that the first crystal is present if the second likelihood exceeds a first threshold and the first crystal is absent if the second likelihood does not exceed the first threshold.
51. The method of claim 48 wherein the first plurality of features comprises at least a neighborhood line detector feature, the neighborhood line detector feature being associated with detecting at least a straight line pattern.
52. The method of claim 48 wherein the third plurality of features comprises a first Fisher feature.
53. The method of claim 48 wherein the first Fisher feature is associated with a first image state and a second image state, each of the first image state and the second image state being selected from a group consisting of a crystal state, a phase/precipitate state, and a clear state.
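For the classifier adjustment of claims 48 through 53, a hedged sketch: a Fisher discriminant direction supplies a Fisher-type feature, a logistic model with adjustable parameters supplies the likelihood, and a threshold turns the likelihood into a present/absent decision. The logistic form, the regularisation constant, and the function names are choices of this sketch, not the specification's classifier.

    import numpy as np

    def fisher_feature(X, y):
        # Fisher linear discriminant direction separating two labelled image states
        # (e.g. "clear" vs "crystal"). X: (n_samples, n_features); y: 0/1 labels.
        X0, X1 = X[y == 0], X[y == 1]
        Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)   # within-class scatter
        w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]),
                            X1.mean(axis=0) - X0.mean(axis=0))
        return w / np.linalg.norm(w)

    def crystal_likelihood(features, params):
        # Logistic mapping from features to a likelihood; params are the adjustable
        # parameters (weights plus bias) tuned against the predetermined labels.
        return 1.0 / (1.0 + np.exp(-(features @ params[:-1] + params[-1])))

    def crystal_present(features, params, threshold=0.5):
        # Crystal declared present when the likelihood exceeds the decision threshold.
        return crystal_likelihood(features, params) > threshold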
54. A computer-readable medium including instructions for processing an image of a microfluidic device, the computer-readable medium comprising: one or more instructions for receiving a first image of a microfluidic device, the first image including a first fiducial marking and a first chamber region, the first chamber region being associated with a first chamber boundary; one or more instructions for transforming the first image into a first coordinate space based on at least information associated with the first fiducial marking; one or more instructions for removing at least a first part of the first chamber boundary from the first image; one or more instructions for processing information associated with the first chamber region; one or more instructions for determining whether a first crystal is present in the first chamber region.
55. The computer-readable medium of claim 54 wherein the one or more instructions for determining whether a first crystal is present in the first chamber region comprises: one or more instructions for generating a first plurality of features based on at least information associated with the first chamber region; one or more instructions for processing information associated with the first plurality of features; one or more instructions for determining a second plurality of features based on at least information associated with the first plurality of features; one or more instructions for processing information associated with the second plurality of features; one or more instructions for determining whether the first crystal is present or absent in the first chamber region.
56. The computer-readable medium of claim 55 wherein the one or more instructions for determining whether the first crystal is present or absent in the first chamber region comprises: one or more instructions for determining a first likelihood that the first crystal is present in the first chamber region based on at least information associated with the second plurality of features; one or more instructions for processing information associated with the first likelihood and a first threshold; one or more instructions for determining that the first crystal is present if the first likelihood exceeds a first threshold and the first crystal is absent if the first likelihood does not exceed the first threshold.
57. The computer-readable medium of claim 55 wherein the first plurality of features comprises at least a neighborhood line detector feature, the neighborhood line detector feature being associated with detecting at least a straight line pattern.
58. The computer-readable medium of claim 55 wherein the second plurality of features comprises a first Fisher feature.
59. The computer-readable medium of claim 58 wherein the first Fisher feature is associated with a first image state and a second image state, each of the first image state and the second image state being selected from a group consisting of a crystal state, a phase/precipitate state, and a clear state.
60. The computer-readable medium of claim 54 wherein: the first chamber boundary comprises a first section and a second section, the first section being substantially parallel with the second section; the one or more instructions for removing at least a first part of the first chamber boundary from the first image comprises: one or more instructions for determining a first plurality of intensities associated with a first plurality of pixels along a first direction, the first direction intersecting both the first section and the second section; one or more instructions for processing information associated with the first plurality of intensities; one or more instructions for determining a first location associated with the first section and a second location associated with the second section based on at least information related to the first plurality of intensities.
61. The computer-readable medium of claim 60 wherein the one or more instructions for determining a first location associated with the first section and a second location associated with the second section comprises: one or more instructions for determining a third location associated with the first section based on at least information related to the first plurality of intensities; one or more instructions for determining a fourth location associated with the second section based on at least information related to the third location and a predetermined distance between the first section and the second section; one or more instructions for processing at least information associated with the third location, the fourth location, and the first plurality of intensities; one or more instructions for determining the first location and the second location based on at least information associated with the third location, the fourth location, and the first plurality of intensities.
62. The computer-readable medium of claim 61 wherein the one or more instructions for determining the first location and the second location based on at least information associated with the third location, the fourth location, and the first plurality of intensities comprises: one or more instructions for determining a first penalty function associated with the third location; one or more instructions for determining a second penalty function associated with the fourth location; one or more instructions for processing information associated with the first penalty function and the second penalty function; one or more instructions for determining a third penalty function, the third penalty function being associated with the first penalty function and the second penalty function; one or more instructions for processing information associated with the third penalty function; one or more instructions for determining the first location and the second location based on at least information associated with the third penalty function.
63. The computer-readable medium of claim 62 wherein the one or more instructions for determining the first location and the second location based on at least information associated with the third penalty function comprises one or more instructions for minimizing the third penalty function.
64. A computer-readable medium including instructions for processing a plurality of images of a microfluidic device, the computer-readable medium comprising: one or more instructions for receiving at least a first image and a second image of a microfluidic device, the first image and the second image being associated with a first focal position and a second focal position respectively, each of the first image and the second image including a first chamber region; one or more instructions for processing information associated with the first image and the second image; one or more instructions for generating a third image based on at least information associated with the first image and the second image; one or more instructions for processing information associated with the third image; one or more instructions for determining whether a first crystal is present in the first chamber region based on at least information associated with the third image.
65. The computer-readable medium of claim 64 wherein: the third image comprises a first fiducial marking; the first chamber region is associated with a first chamber boundary; the one or more instructions for determining whether a first crystal is present in the first chamber region comprises: one or more instructions for transforming the third image into a first coordinate space based on at least information associated with the first fiducial marking; one or more instructions for removing at least a first part of the first chamber boundary from the third image; one or more instructions for processing information associated with the first chamber region; one or more instructions for determining whether a first crystal is present or absent in the first chamber region.
66. The computer-readable medium of claim 65 wherein the one or more instructions for determining whether a first crystal is present or absent in the chamber region comprises: one or more instructions for generating a first plurality of features based on at least information associated with the first chamber region; one or more instructions for processing information associated with the first plurality of features; one or more instructions for determining a second plurality of features based on at least information associated with the first plurality of features; one or more instructions for processing information associated with the second plurality of features; one or more instructions for determining whether the first crystal is present or absent in the first chamber region based on at least information associated with the second plurality of features.
67. The computer-readable medium of claim 64 wherein the one or more instructions for generating a third image comprises: one or more instructions for determining a first plurality of sharpness values and a first plurality of colorness values associated with the first image; one or more instructions for determining a second plurality of sharpness values and a second plurality of colorness values associated with the second image; one or more instructions for processing information associated with the first plurality of sharpness values, the first plurality of colorness values, the second plurality of sharpness values, the second plurality of colorness values; one or more instructions for determining a first plurality of intensities associated with the third image based on at least information associated with the first plurality of sharpness values, the first plurality of colorness values, the second plurality of sharpness values, the second plurality of colorness values.
68. The computer-readable medium of claim 67 wherein the first plurality of intensities comprises a plurality of red intensities, a plurality of green intensities, and a plurality of blue intensities.
69. A computer-readable medium including instructions for adjusting a classifier and processing an image of a microfluidic device, the computer-readable medium comprising: one or more instructions for receiving a first image of a microfluidic device, the first image being associated with at least a first predetermined characteristic; one or more instructions for generating a first plurality of features based on at least information associated with the first image; one or more instructions for selecting a second plurality of features from the first plurality of features based on at least information associated with the first plurality of features and the at least a first predetermined characteristic; one or more instructions for determining a third plurality of features based on at least information associated with the second plurality of features; one or more instructions for processing information associated with the third plurality of features; one or more instructions for determining at least a first likelihood based on at least information based on the third plurality of features and a first plurality of parameters; one or more instructions for processing information associated with the first likelihood and the at least a first predetermined characteristic; one or more instructions for adjusting the first plurality of parameters based on at least information associated with the first likelihood and the at least a first predetermined characteristic.
70. The computer-readable medium of claim 69, and further comprising: one or more instructions for receiving a second image of a microfluidic device; one or more instructions for generating the second plurality of features based on at least information associated with the second image; one or more instructions for processing information associated with the second plurality of features; one or more instructions for determining the third plurality of features based on at least information associated with the second plurality of features; one or more instructions for processing information associated with the third plurality of features and the first plurality of adjusted parameters; one or more instructions for determining whether a first crystal is present or absent in the second image based on at least information associated with the third plurality of features and the first plurality of adjusted parameters.
71. The computer-readable medium of claim 70 wherein the one or more instructions for determining whether the first crystal is present or absent in the second image comprises: one or more instructions for determining a second likelihood that the first crystal is present in the second image based on at least information associated with the third plurality of features; one or more instructions for processing information associated with the second likelihood and a first threshold; one or more instructions for determining that the first crystal is present if the second likelihood exceeds a first threshold and the first crystal is absent if the second likelihood does not exceed the first threshold.
72. The computer-readable medium of claim 69 wherein the first plurality of features comprises at least a neighborhood line detector feature, the neighborhood line detector feature being associated with detecting at least a straight line pattern.
73. The computer-readable medium of claim 69 wherein the third plurality of features comprises a first Fisher feature.
74. The computer-readable medium of claim 69 wherein the first Fisher feature is associated with a first image state and a second image state, each of the first image state and the second image state being selected from a group consisting of a crystal state, a phase/precipitate state, and a clear state.
75. The method of claim 1 wherein: the first image comprises a first chamber region associated with a first chamber boundary; the second image comprises a second chamber region associated with a second chamber boundary; the obtaining a third image comprises determining an implosion padding based on information associated with the first image and the second image.
76. The method of claim 75 wherein the determining an implosion padding comprises: processing information associated with the first image; determining a first index related to a first implosion associated with the first chamber boundary based on at least information associated with the first image; processing information associated with the second image; determining a second index related to a second implosion associated with the second chamber boundary based on at least information associated with the second image; processing information associated with the first index and the second index; determining the implosion padding based on at least information associated with the first index and the second index.
77. The method of claim 76 wherein the determining a first index related to a first implosion comprises: selecting a plurality of image areas, the plurality of image areas associated with a plurality of boundaries respectively; determining a plurality of median intensities associated with the plurality of boundaries respectively; processing information associated with the plurality of median intensities; determining the first index based on at least information associated with the plurality of median intensities.
78. The method of claim 77 wherein the determining the first index based on at least information associated with the plurality of median intensities comprises: determining a minimum intensity from the plurality of median intensities, the minimum intensity being associated with one of the plurality of boundaries; determining the first index based on at least information associated with the one of the plurality of boundaries.
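The implosion-padding determination of claims 75 through 78 can be sketched by scoring a family of nested candidate chamber boundaries by their median intensity and taking, in each image, the darkest one as the imploded boundary position. The rectangular-ring construction and the assumption that the wall images darker than the chamber interior are illustrative choices of this sketch.

    import numpy as np

    def boundary_ring(h, w, inset):
        # Boolean mask of a one-pixel rectangular ring, inset pixels in from the image edge.
        ring = np.zeros((h, w), dtype=bool)
        ring[inset, inset:w - inset] = True
        ring[h - 1 - inset, inset:w - inset] = True
        ring[inset:h - inset, inset] = True
        ring[inset:h - inset, w - 1 - inset] = True
        return ring

    def implosion_index(chamber, max_inset=10):
        # Index (inset) of the candidate boundary whose median intensity is darkest,
        # taken here as the position to which the chamber wall has imploded.
        h, w = chamber.shape
        medians = [np.median(chamber[boundary_ring(h, w, i)]) for i in range(max_inset)]
        return int(np.argmin(medians))

    def implosion_padding(img1, img2, max_inset=10):
        # Padding large enough to cover the implosion observed in either image.
        return max(implosion_index(img1, max_inset), implosion_index(img2, max_inset))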
79. The computer-readable medium of claim 12 wherein: the first image comprises a first chamber region associated with a first chamber boundary; the second image comprises a second chamber region associated with a second chamber boundary; the one or more instructions for obtaining a third image comprises one or more instructions for determining an implosion padding based on information associated with the first image and the second image.
80. The computer-readable medium of claim 79 wherein the one or more instructions for determining an implosion padding comprises: one or more instructions for processing information associated with the first image; one or more instructions for determining a first index related to a first implosion associated with the first chamber boundary based on at least information associated with the first image; one or more instructions for processing information associated with the second image; one or more instructions for determining a second index related to a second implosion associated with the second chamber boundary based on at least information associated with the second image; one or more instructions for processing information associated with the first index and the second index; one or more instructions for determining the implosion padding based on at least information associated with the first index and the second index.
81. The computer-readable medium of claim 80 wherein the one or more instructions for determining a first index related to a first implosion comprises: one or more instructions for selecting a plurality of image areas, the plurality of image areas associated with a plurality of boundaries respectively; one or more instructions for determining a plurality of median intensities associated with the plurality of boundaries respectively; one or more instructions for processing information associated with the plurality of median intensities; one or more instructions for determining the first index based on at least information associated with the plurality of median intensities.
82. The computer-readable medium of claim 81 wherein the one or more instructions for determining the first index based on at least information associated with the plurality of median intensities comprises: one or more instructions for determining a minimum intensity from the plurality of median intensities, the minimum intensity being associated with one of the plurality of boundaries; one or more instructions for determining the first index based on at least information associated with the one of the plurality of boundaries.
PCT/US2004/024591 2003-07-28 2004-07-28 Image processing method and system for microfluidic devices WO2005011947A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP04757388A EP1667829A4 (en) 2003-07-28 2004-07-28 Image processing method and system for microfluidic devices
JP2006522086A JP2007506943A (en) 2003-07-28 2004-07-28 Image processing method and system for microfluidic devices
CA002532530A CA2532530A1 (en) 2003-07-28 2004-07-28 Image processing method and system for microfluidic devices
AU2004261655A AU2004261655A1 (en) 2003-07-28 2004-07-28 Image processing method and system for microfluidic devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US49071203P 2003-07-28 2003-07-28
US60/490,712 2003-07-28

Publications (3)

Publication Number Publication Date
WO2005011947A2 WO2005011947A2 (en) 2005-02-10
WO2005011947A9 true WO2005011947A9 (en) 2006-10-26
WO2005011947A3 WO2005011947A3 (en) 2007-12-13

Family

ID=34115428

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/024591 WO2005011947A2 (en) 2003-07-28 2004-07-28 Image processing method and system for microfluidic devices

Country Status (7)

Country Link
US (2) US7583853B2 (en)
EP (1) EP1667829A4 (en)
JP (1) JP2007506943A (en)
AU (1) AU2004261655A1 (en)
CA (1) CA2532530A1 (en)
SG (1) SG145697A1 (en)
WO (1) WO2005011947A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10832404B2 (en) 2014-12-09 2020-11-10 Berkeley Lights, Inc. Automated detection and repositioning of micro-objects in microfluidic devices

Families Citing this family (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6221654B1 (en) 1996-09-25 2001-04-24 California Institute Of Technology Method and apparatus for analysis and sorting of polynucleotides based on size
GB2352283A (en) 1999-06-28 2001-01-24 California Inst Of Techn Microfabricated valves, pumps, mirror array and refracting structure
US8709153B2 (en) 1999-06-28 2014-04-29 California Institute Of Technology Microfludic protein crystallography techniques
US7306672B2 (en) 2001-04-06 2007-12-11 California Institute Of Technology Microfluidic free interface diffusion techniques
US7459022B2 (en) * 2001-04-06 2008-12-02 California Institute Of Technology Microfluidic protein crystallography
US7195670B2 (en) 2000-06-27 2007-03-27 California Institute Of Technology High throughput screening of crystallization of materials
US7144616B1 (en) 1999-06-28 2006-12-05 California Institute Of Technology Microfabricated elastomeric valve and pump systems
US7867763B2 (en) 2004-01-25 2011-01-11 Fluidigm Corporation Integrated chip carriers with thermocycler interfaces and methods of using the same
US8105553B2 (en) 2004-01-25 2012-01-31 Fluidigm Corporation Crystal forming devices and systems and methods for using the same
US20050118073A1 (en) * 2003-11-26 2005-06-02 Fluidigm Corporation Devices and methods for holding microfluidic devices
US7351376B1 (en) 2000-06-05 2008-04-01 California Institute Of Technology Integrated active flux microfluidic devices and methods
EP2299256A3 (en) 2000-09-15 2012-10-10 California Institute Of Technology Microfabricated crossflow devices and methods
US7378280B2 (en) 2000-11-16 2008-05-27 California Institute Of Technology Apparatus and methods for conducting assays and high throughput screening
US7691333B2 (en) 2001-11-30 2010-04-06 Fluidigm Corporation Microfluidic device and methods of using same
US7118910B2 (en) 2001-11-30 2006-10-10 Fluidigm Corporation Microfluidic device and methods of using same
JP2005521425A (en) 2002-04-01 2005-07-21 フルイディグム コーポレイション Microfluidic particle analysis system
US8220494B2 (en) 2002-09-25 2012-07-17 California Institute Of Technology Microfluidic large scale integration
US7143785B2 (en) 2002-09-25 2006-12-05 California Institute Of Technology Microfluidic large scale integration
EP1546412B1 (en) 2002-10-02 2014-05-21 California Institute Of Technology Microfluidic nucleic acid analysis
US8828663B2 (en) 2005-03-18 2014-09-09 Fluidigm Corporation Thermal reaction device and method for using the same
US7604965B2 (en) 2003-04-03 2009-10-20 Fluidigm Corporation Thermal reaction device and method for using the same
EP1636017A2 (en) 2003-05-20 2006-03-22 Fluidigm Corporation Method and system for microfluidic device and imaging thereof
US20050171428A1 (en) * 2003-07-21 2005-08-04 Gabor Fichtinger Registration of ultrasound to fluoroscopy for real time optimization of radiation implant procedures
AU2004261655A1 (en) * 2003-07-28 2005-02-10 Fluidigm Corporation Image processing method and system for microfluidic devices
US7407799B2 (en) 2004-01-16 2008-08-05 California Institute Of Technology Microfluidic chemostat
US7307802B2 (en) 2004-06-07 2007-12-11 Fluidigm Corporation Optical lens system and method for microfluidic devices
WO2006060748A2 (en) 2004-12-03 2006-06-08 California Institute Of Technology Microfluidic sieve valves
JP2008522795A (en) * 2004-12-03 2008-07-03 カリフォルニア インスティチュート オブ テクノロジー Microfluidic device with chemical reaction circuit
EP1882189A2 (en) 2005-04-20 2008-01-30 Fluidigm Corporation Analysis engine and database for manipulating parameters for fluidic systems on a chip
US20070054293A1 (en) * 2005-08-30 2007-03-08 California Institute Of Technology Microfluidic chaotic mixing systems and methods
EP1938101A2 (en) * 2005-09-13 2008-07-02 Fluidigm Corporation Microfluidic assay devices and methods
FR2897703B1 (en) * 2006-02-20 2008-04-25 Univ Grenoble 1 AUTOMATIC DETECTION OF A SURGICAL TOOL ON AN IMAGE PROVIDED BY A MEDICAL IMAGING SYSTEM
US7815868B1 (en) 2006-02-28 2010-10-19 Fluidigm Corporation Microfluidic reaction apparatus for high throughput screening
US8828661B2 (en) * 2006-04-24 2014-09-09 Fluidigm Corporation Methods for detection and quantification of nucleic acid or protein targets in a sample
US8050516B2 (en) * 2006-09-13 2011-11-01 Fluidigm Corporation Methods and systems for determining a baseline during image processing
US8055034B2 (en) * 2006-09-13 2011-11-08 Fluidigm Corporation Methods and systems for image processing of microfluidic devices
EP1901235B1 (en) * 2006-09-15 2019-08-14 Honda Research Institute Europe GmbH Free style deformation (FSD)
JP5100757B2 (en) * 2006-11-30 2012-12-19 キヤノン ユー.エス. ライフ サイエンシズ, インコーポレイテッド System and method for monitoring amplification and dissociation reactions of DNA molecules
US8473216B2 (en) * 2006-11-30 2013-06-25 Fluidigm Corporation Method and program for performing baseline correction of amplification curves in a PCR experiment
EP2125219B1 (en) * 2007-01-19 2016-08-10 Fluidigm Corporation High precision microfluidic devices and methods
US8016260B2 (en) 2007-07-19 2011-09-13 Formulatrix, Inc. Metering assembly and method of dispensing fluid
WO2009033178A1 (en) 2007-09-07 2009-03-12 Fluidigm Corporation Copy number variation determination, methods and systems
EP2050395A1 (en) * 2007-10-18 2009-04-22 Paracelsus Medizinische Privatuniversität Methods for improving image quality of image detectors, and systems therefor
WO2009100449A1 (en) 2008-02-08 2009-08-13 Fluidigm Corporation Dynamic array assay methods
US8125640B2 (en) * 2008-03-03 2012-02-28 Wisconsin Alumni Research Foundation Automated analysis system for detection and quantification of biomolecules by measurement of changes in liquid crystal orientation
CN102056838B (en) 2008-04-11 2013-07-03 弗卢丁公司 Microfluidic device and methods
US9579830B2 (en) 2008-07-25 2017-02-28 Fluidigm Corporation Method and system for manufacturing integrated fluidic chips
US8617488B2 (en) 2008-08-07 2013-12-31 Fluidigm Corporation Microfluidic mixing and reaction systems for high efficiency screening
FR2935802B1 (en) * 2008-09-05 2012-12-28 Horiba Abx Sas METHOD AND DEVICE FOR CLASSIFYING, VISUALIZING AND EXPLORING BIOLOGICAL DATA
JP5287178B2 (en) * 2008-11-27 2013-09-11 富士通セミコンダクター株式会社 Defect review device
US8058630B2 (en) 2009-01-16 2011-11-15 Fluidigm Corporation Microfluidic devices and methods
US8100293B2 (en) 2009-01-23 2012-01-24 Formulatrix, Inc. Microfluidic dispensing assembly
US8797336B2 (en) * 2009-06-30 2014-08-05 Apple Inc. Multi-platform image processing framework
US8551787B2 (en) * 2009-07-23 2013-10-08 Fluidigm Corporation Microfluidic devices and methods for binary mixing
TW201110955A (en) * 2009-09-25 2011-04-01 Univ Nat Taiwan Gait training device
SG169918A1 (en) 2009-10-02 2011-04-29 Fluidigm Corp Microfluidic devices with removable cover and methods of fabrication and application
KR101532314B1 (en) * 2009-10-27 2015-06-29 삼성전자주식회사 Quality control method and apparatus of microfluidic device
US9205468B2 (en) 2009-11-30 2015-12-08 Fluidigm Corporation Microfluidic device regeneration
US8693762B2 (en) * 2010-09-14 2014-04-08 The Regents Of The University Of California Inertial particle focusing flow cytometer
WO2012054933A2 (en) 2010-10-22 2012-04-26 Fluidigm Corporation Universal probe assay methods
US9168531B2 (en) 2011-03-24 2015-10-27 Fluidigm Corporation Method for thermal cycling of microfluidic samples
US9644231B2 (en) 2011-05-09 2017-05-09 Fluidigm Corporation Nucleic acid detection using probes
EP2707507B1 (en) 2011-05-09 2017-11-01 Fluidigm Corporation Probe based nucleic acid detection
US8717673B2 (en) * 2011-05-28 2014-05-06 Board Of Trustees Of The University Of Illinois Simple ultra-stable stage with built-in fiduciary markers for fluorescence nanoscopy
WO2013016356A1 (en) * 2011-07-25 2013-01-31 Mad City Labs, Inc. Active-feedback positional drift correction in a microscope image using a fiduciary element held on a nanopositioning stage
US9558268B2 (en) * 2014-08-20 2017-01-31 Mitsubishi Electric Research Laboratories, Inc. Method for semantically labeling an image of a scene using recursive context propagation
JP2019537157A (en) 2016-12-01 2019-12-19 バークレー ライツ,インコーポレイテッド Automatic detection and relocation of minute objects by microfluidic devices
US10408852B2 (en) * 2017-04-26 2019-09-10 Lawrence Livermore National Security, Llc Automated control of microfluidic devices based on machine learning
CN110376171B (en) * 2019-07-15 2021-11-19 上海理工大学 Transmission type fluorescence detection imaging system applied to dPCR detector
US20220276235A1 (en) * 2019-07-18 2022-09-01 Essenlix Corporation Imaging based homogeneous assay
DE102020202528A1 (en) 2020-02-27 2021-09-02 Robert Bosch Gesellschaft mit beschränkter Haftung Method for analyzing a structure within a fluidic system
WO2021178889A1 (en) * 2020-03-05 2021-09-10 The Trustees Of Columbia University In The City Of New York Three-dimensional dosimetry procedures, methods and devices, and optical ct scanner apparatus which utilizes fiber optic taper for collimated images
US20220101183A1 (en) * 2020-09-29 2022-03-31 International Business Machines Corporation Feature processing for machine learning
EP4306213A1 (en) * 2022-07-14 2024-01-17 Sartorius Stedim Biotech GmbH Method for producing a roll of membrane units

Family Cites Families (129)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3570515A (en) * 1969-06-19 1971-03-16 Foxboro Co Laminar stream cross-flow fluid diffusion logic gate
NL7102074A (en) * 1971-02-17 1972-08-21
FR2287606A1 (en) * 1974-10-08 1976-05-07 Pegourie Jean Pierre PNEUMATIC LOGIC CIRCUITS AND THEIR INTEGRATED CIRCUITS
JPS5941169B2 (en) 1975-12-25 1984-10-05 シチズン時計株式会社 Elastomer
US4119388A (en) * 1977-06-23 1978-10-10 Armitage Harry J Machine tool
US4153855A (en) * 1977-12-16 1979-05-08 The United States Of America As Represented By The Secretary Of The Army Method of making a plate having a pattern of microchannels
US4245673A (en) * 1978-03-01 1981-01-20 La Telemechanique Electrique Pneumatic logic circuit
US4373527B1 (en) * 1979-04-27 1995-06-27 Univ Johns Hopkins Implantable programmable medication infusion system
US4434704A (en) * 1980-04-14 1984-03-06 Halliburton Company Hydraulic digital stepper actuator
US4575681A (en) * 1982-11-12 1986-03-11 Teleco Oilfield Services Inc. Insulating and electrode structure for a drill string
US4662710A (en) * 1982-12-03 1987-05-05 Amp Incorporated Method and apparatus for breaking an optical fiber
US4581624A (en) 1984-03-01 1986-04-08 Allied Corporation Microminiature semiconductor valve
JPH07104855B2 (en) * 1985-03-28 1995-11-13 インターナショナル・ビジネス・マシーンズ・コーポレーション Numerical simulation device
US5088515A (en) * 1989-05-01 1992-02-18 Kamen Dean L Valve system with removable fluid interface
US4948564A (en) * 1986-10-28 1990-08-14 Costar Corporation Multi-well filter strip and composite assemblies
EP0314469B1 (en) * 1987-10-27 1993-06-23 Fujitsu Limited Process and apparatus for preparation of single crystal of biopolymer
US5354695A (en) * 1992-04-08 1994-10-11 Leedy Glenn J Membrane dielectric isolation IC fabrication
US4898582A (en) * 1988-08-09 1990-02-06 Pharmetrix Corporation Portable infusion device assembly
US4992312A (en) * 1989-03-13 1991-02-12 Dow Corning Wright Corporation Methods of forming permeation-resistant, silicone elastomer-containing composite laminates and devices produced thereby
CH679555A5 (en) * 1989-04-11 1992-03-13 Westonbridge Int Ltd
JPH04501449A (en) * 1989-06-14 1992-03-12 ウエストンブリッジ インターナショナル リミティド micro pump
KR910012538A (en) * 1989-12-27 1991-08-08 야마무라 가쯔미 Micro pump and its manufacturing method
DE4006152A1 (en) * 1990-02-27 1991-08-29 Fraunhofer Ges Forschung MICROMINIATURIZED PUMP
US5096388A (en) * 1990-03-22 1992-03-17 The Charles Stark Draper Laboratory, Inc. Microfabricated pump
SE470347B (en) * 1990-05-10 1994-01-31 Pharmacia Lkb Biotech Microstructure for fluid flow systems and process for manufacturing such a system
US5259737A (en) * 1990-07-02 1993-11-09 Seiko Epson Corporation Micropump with valve structure
US5164558A (en) * 1991-07-05 1992-11-17 Massachusetts Institute Of Technology Micromachined threshold pressure switch and method of manufacture
JP3328300B2 (en) * 1991-07-18 2002-09-24 アイシン精機株式会社 Fluid control device
DE4143343C2 (en) * 1991-09-11 1994-09-22 Fraunhofer Ges Forschung Microminiaturized, electrostatically operated micromembrane pump
US5265327A (en) * 1991-09-13 1993-11-30 Faris Sadeg M Microchannel plate technology
US5637469A (en) * 1992-05-01 1997-06-10 Trustees Of The University Of Pennsylvania Methods and apparatus for the detection of an analyte utilizing mesoscale flow systems
DE4220077A1 (en) * 1992-06-19 1993-12-23 Bosch Gmbh Robert Micro-pump for delivery of gases - uses working chamber warmed by heating element and controlled by silicon wafer valves.
US5364742A (en) 1992-09-21 1994-11-15 International Business Machines Corporation Micro-miniature structures and method of fabrication thereof
US5477474A (en) * 1992-10-29 1995-12-19 Altera Corporation Computer logic simulation with dynamic modeling
JP2812629B2 (en) * 1992-11-25 1998-10-22 宇宙開発事業団 Crystal growth cell
US5290240A (en) * 1993-02-03 1994-03-01 Pharmetrix Corporation Electrochemical controlled dispensing assembly and method for selective and controlled delivery of a dispensing fluid
US5400741A (en) * 1993-05-21 1995-03-28 Medical Foundation Of Buffalo, Inc. Device for growing crystals
ATE156895T1 (en) * 1993-05-27 1997-08-15 Fraunhofer Ges Forschung MICRO VALVE
SE501713C2 (en) * 1993-09-06 1995-05-02 Pharmacia Biosensor Ab Diaphragm-type valve, especially for liquid handling blocks with micro-flow channels
US5642015A (en) * 1993-07-14 1997-06-24 The University Of British Columbia Elastomeric micro electro mechanical systems
US5659171A (en) * 1993-09-22 1997-08-19 Northrop Grumman Corporation Micro-miniature diaphragm pump for the low pressure pumping of gases
CH689836A5 (en) * 1994-01-14 1999-12-15 Westonbridge Int Ltd Micropump.
JPH07311834A (en) * 1994-05-19 1995-11-28 Toshiba Medical Eng Co Ltd Image processor and its aid
DE4433894A1 (en) 1994-09-22 1996-03-28 Fraunhofer Ges Forschung Method and device for controlling a micropump
DE69531430T2 (en) 1994-10-07 2004-07-01 Bayer Corp. relief valve
US5571410A (en) * 1994-10-19 1996-11-05 Hewlett Packard Company Fully integrated miniaturized planar liquid sample handling and analysis device
US5500071A (en) * 1994-10-19 1996-03-19 Hewlett-Packard Company Miniaturized planar columns in novel support media for liquid phase analysis
US5788468A (en) * 1994-11-03 1998-08-04 Memstek Products, Llc Microfabricated fluidic devices
US5665070A (en) * 1995-01-19 1997-09-09 I-Flow Corporation Infusion pump with magnetic bag compression
US5588430A (en) 1995-02-14 1996-12-31 University Of Florida Research Foundation, Inc. Repeat fixation for frameless stereotactic procedure
JP3094880B2 (en) * 1995-03-01 2000-10-03 住友金属工業株式会社 Method for controlling crystallization of organic compound and solid state element for controlling crystallization used therein
US5775371A (en) * 1995-03-08 1998-07-07 Abbott Laboratories Valve control
US5876187A (en) * 1995-03-09 1999-03-02 University Of Washington Micropumps with fixed valves
EP0839318B1 (en) * 1995-06-16 2003-05-07 University of Washington Microfabricated differential extraction device and method
US5856174A (en) * 1995-06-29 1999-01-05 Affymetrix, Inc. Integrated nucleic acid diagnostic device
CA2183478C (en) * 1995-08-17 2004-02-24 Stephen A. Carter Digital gas metering system using tri-stable and bi-stable solenoids
US6130098A (en) 1995-09-15 2000-10-10 The Regents Of The University Of Michigan Moving microdroplets
JPH09153131A (en) * 1995-11-30 1997-06-10 Hitachi Ltd Method and device for processing picture information and picture information integrating system
US5705018A (en) * 1995-12-13 1998-01-06 Hartley; Frank T. Micromachined peristaltic pump
KR100207410B1 (en) 1995-12-19 1999-07-15 전주범 Fabrication method for lightpath modulation device
US5660370A (en) * 1996-03-07 1997-08-26 Integrated Fludics, Inc. Valve with flexible sheet member and two port non-flexing backer member
US5885470A (en) * 1997-04-14 1999-03-23 Caliper Technologies Corporation Controlled fluid transport in microfabricated polymeric substrates
US5942443A (en) * 1996-06-28 1999-08-24 Caliper Technologies Corporation High throughput screening assay systems in microscale fluidic devices
WO1998002601A1 (en) * 1996-07-15 1998-01-22 Sumitomo Metal Industries, Ltd. Equipment for crystal growth and crystal-growing method using the same
US6136212A (en) 1996-08-12 2000-10-24 The Regents Of The University Of Michigan Polymer-based micromachining for microfluidic devices
US5738799A (en) 1996-09-12 1998-04-14 Xerox Corporation Method and materials for fabricating an ink-jet printhead
US5854684A (en) * 1996-09-26 1998-12-29 Sarnoff Corporation Massively parallel detection
US5971355A (en) 1996-11-27 1999-10-26 Xerox Corporation Microdevice valve structures to fluid control
US5912984A (en) * 1996-12-19 1999-06-15 Cognex Corporation Method and apparatus for in-line solder paste inspection
US6376971B1 (en) * 1997-02-07 2002-04-23 Sri International Electroactive polymer electrodes
EP0991928A2 (en) 1997-06-27 2000-04-12 Immunetics Rapid flow-through binding assay apparatus and method
US6529612B1 (en) 1997-07-16 2003-03-04 Diversified Scientific, Inc. Method for acquiring, storing and analyzing crystal images
US6073482A (en) 1997-07-21 2000-06-13 Ysi Incorporated Fluid flow module
US5932799A (en) * 1997-07-21 1999-08-03 Ysi Incorporated Microfluidic analyzer module
US5876675A (en) * 1997-08-05 1999-03-02 Caliper Technologies Corp. Microfluidic devices and systems
JPH1185997A (en) * 1997-09-03 1999-03-30 Canon Inc Image processor
TW352471B (en) * 1997-09-20 1999-02-11 United Microelectronics Corp Method for preventing B-P-Si glass from subsiding
US5842787A (en) * 1997-10-09 1998-12-01 Caliper Technologies Corporation Microfluidic systems incorporating varied channel dimensions
US5836750A (en) * 1997-10-09 1998-11-17 Honeywell Inc. Electrostatically actuated mesopump having a plurality of elementary cells
WO1999024744A1 (en) * 1997-11-12 1999-05-20 California Institute Of Technology Micromachined parylene membrane valve and pump
US6174675B1 (en) * 1997-11-25 2001-01-16 Caliper Technologies Corp. Electrical current for controlling fluid parameters in microchannels
AU2459799A (en) * 1998-01-20 1999-08-02 Triconex, Incorporated Two out of three voting solenoid arrangement
AU3491299A (en) 1998-04-14 1999-11-01 Lumenal Technologies, L.P. Test cartridge with a single inlet port
US6246330B1 (en) * 1998-05-29 2001-06-12 Wyn Y. Nielsen Elimination-absorber monitoring system
AU4719399A (en) 1998-06-26 2000-01-17 University Of Washington Crystallization media
US6081577A (en) * 1998-07-24 2000-06-27 Wake Forest University Method and system for creating task-dependent three-dimensional images
RU2143343C1 (en) 1998-11-03 1999-12-27 Samsung Electronics Co., Ltd. Microinjector and microinjector manufacture method
MXPA01009999A (en) 1999-04-06 2003-07-14 Uab Research Foundation Method for screening crystallization conditions in solution crystal growth.
EP1183059A1 (en) * 1999-06-08 2002-03-06 Medical Research Group, Inc. Method and apparatus for infusing liquids using a chemical reaction in an implanted infusion device
US6296673B1 (en) * 1999-06-18 2001-10-02 The Regents Of The University Of California Methods and apparatus for performing array microcrystallizations
US7052545B2 (en) * 2001-04-06 2006-05-30 California Institute Of Technology High throughput screening of crystallization of materials
US7195670B2 (en) * 2000-06-27 2007-03-27 California Institute Of Technology High throughput screening of crystallization of materials
US7244402B2 (en) * 2001-04-06 2007-07-17 California Institute Of Technology Microfluidic protein crystallography
US7459022B2 (en) * 2001-04-06 2008-12-02 California Institute Of Technology Microfluidic protein crystallography
US7217321B2 (en) * 2001-04-06 2007-05-15 California Institute Of Technology Microfluidic protein crystallography techniques
US7306672B2 (en) * 2001-04-06 2007-12-11 California Institute Of Technology Microfluidic free interface diffusion techniques
DE19933614C1 (en) 1999-07-17 2000-11-30 Moeller Gmbh Contact system for current-limiting load switch has 2-armed contact arm carrying contact pieces cooperating with contact pieces of fixed contact rails fitted to pivot axis via elongate slot
ATE381116T1 (en) 1999-07-20 2007-12-15 Stanford Res Inst Int ELECTROACTIVE POLYMER GENERATORS
US6977145B2 (en) * 1999-07-28 2005-12-20 Serono Genetics Institute S.A. Method for carrying out a biochemical protocol in continuous flow in a microreactor
AU6396500A (en) 1999-08-02 2001-02-19 Emerald Biostructures, Inc. Method and system for creating a crystallization results database
US6409832B2 (en) * 2000-03-31 2002-06-25 Micronics, Inc. Protein crystallization in microfluidic structures
US7279146B2 (en) * 2003-04-17 2007-10-09 Fluidigm Corporation Crystal growth devices and systems, and methods for using same
US8105553B2 (en) * 2004-01-25 2012-01-31 Fluidigm Corporation Crystal forming devices and systems and methods for using the same
US7867763B2 (en) * 2004-01-25 2011-01-11 Fluidigm Corporation Integrated chip carriers with thermocycler interfaces and methods of using the same
US7351376B1 (en) 2000-06-05 2008-04-01 California Institute Of Technology Integrated active flux microfluidic devices and methods
WO2002000343A2 (en) * 2000-06-27 2002-01-03 Fluidigm Corporation A microfluidic design automation method and system
US6885982B2 (en) * 2000-06-27 2005-04-26 Fluidigm Corporation Object oriented microfluidic design method and system
JP3542550B2 (en) 2000-07-19 2004-07-14 本田技研工業株式会社 Method of forming fuel cell seal
US6863791B1 (en) * 2000-09-11 2005-03-08 Spectrumedix Llc Method for in-situ calibration of electrophoretic analysis systems
US6728424B1 (en) 2000-09-15 2004-04-27 Koninklijke Philips Electronics, N.V. Imaging registration system and method using likelihood maximization
US6508988B1 (en) * 2000-10-03 2003-01-21 California Institute Of Technology Combinatorial synthesis system
WO2002045153A1 (en) * 2000-12-01 2002-06-06 Ebara Corporation Inspection method and apparatus using electron beam, and device production method using it
JP2002228606A (en) * 2001-01-31 2002-08-14 Hitachi Ltd Electron beam circuit pattern inspecting method and apparatus therefor
JP4455813B2 (en) 2001-04-06 2010-04-21 カリフォルニア インスティテュート オブ テクノロジー Method for promoting interaction between two solutions
US6960437B2 (en) 2001-04-06 2005-11-01 California Institute Of Technology Nucleic acid amplification utilizing microfluidic devices
US20030005308A1 (en) * 2001-05-30 2003-01-02 Rathbun Paul L. Method and system for globally restricting client access to a secured web site
US6797056B2 (en) * 2001-06-08 2004-09-28 Syrrx, Inc. Microfluidic method employing delivery of plural different fluids to same lumen
US6847153B1 (en) * 2001-06-13 2005-01-25 The United States Of America As Represented By The Secretary Of The Navy Polyurethane electrostriction
US20030027225A1 (en) * 2001-07-13 2003-02-06 Caliper Technologies Corp. Microfluidic devices and systems for separating components of a mixture
US7075162B2 (en) * 2001-08-30 2006-07-11 Fluidigm Corporation Electrostatic/electrostrictive actuation of elastomer structures using compliant electrodes
US7123569B2 (en) * 2001-09-04 2006-10-17 Imation Corp. Optical data storage medium
WO2003031066A1 (en) * 2001-10-11 2003-04-17 California Institute Of Technology Devices utilizing self-assembled gel and method of manufacture
US7312085B2 (en) * 2002-04-01 2007-12-25 Fluidigm Corporation Microfluidic particle-analysis systems
US7059348B2 (en) * 2002-05-13 2006-06-13 Fluidigm Corporation Drug delivery system
CA2521171C (en) * 2003-04-03 2013-05-28 Fluidigm Corp. Microfluidic devices and methods of using same
US7476363B2 (en) * 2003-04-03 2009-01-13 Fluidigm Corporation Microfluidic devices and methods of using same
EP1636017A2 (en) * 2003-05-20 2006-03-22 Fluidigm Corporation Method and system for microfluidic device and imaging thereof
AU2004261655A1 (en) * 2003-07-28 2005-02-10 Fluidigm Corporation Image processing method and system for microfluidic devices
US7413712B2 (en) * 2003-08-11 2008-08-19 California Institute Of Technology Microfluidic rotary flow reactor matrix

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10832404B2 (en) 2014-12-09 2020-11-10 Berkeley Lights, Inc. Automated detection and repositioning of micro-objects in microfluidic devices

Also Published As

Publication number Publication date
US20100119154A1 (en) 2010-05-13
CA2532530A1 (en) 2005-02-10
EP1667829A4 (en) 2008-12-10
US20050282175A1 (en) 2005-12-22
WO2005011947A2 (en) 2005-02-10
SG145697A1 (en) 2008-09-29
AU2004261655A1 (en) 2005-02-10
US7583853B2 (en) 2009-09-01
WO2005011947A3 (en) 2007-12-13
JP2007506943A (en) 2007-03-22
US7792345B2 (en) 2010-09-07
EP1667829A2 (en) 2006-06-14

Similar Documents

Publication Publication Date Title
WO2005011947A9 (en) Image processing method and system for microfluidic devices
US8808640B2 (en) Method and system for microfluidic device and imaging thereof
KR100924985B1 (en) Defect detecting apparatus, defect detecting method, information processing apparatus, information processing method, and program therefor
CN102053356B (en) System and method for imaging with enhanced depth of field
US10852290B2 (en) Analysis accuracy improvement in automated testing apparatus
KR20090077980A (en) Defect detecting apparatus, defect detecting method, information processing apparatus, information processing method, and program therefor
JP2009103508A (en) Defect classification method and device thereof
CN102053355A (en) System and method for imaging with enhanced depth of field
JP5439543B2 (en) Defect classification method and apparatus
JP2008139201A (en) Apparatus and method for detecting defect, apparatus and method for processing information, and its program
CN114341619A (en) Measurement accuracy and reliability improvement
JP2006005242A (en) Apparatus and method for image processing, exposure device, and device manufacturing method
TWI755755B (en) Equipment for testing biological specimens
JP4791998B2 (en) Pattern shape evaluation method and program
WO2020257809A1 (en) Improved optical transmission sample holder and analysis at multiple wavelengths
EP1210634B1 (en) Methods and devices in an optical system
KR102523770B1 (en) Machine vision-based quality management system and method for electric parts using deep learning
CN114326078A (en) Microscope system and method for calibration checking
CN117393483A (en) Wafer alignment method and wafer alignment equipment
Hsiao et al. Automatic Detection and Location for The Fiducial Marks and Reference Fiducial Marks

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: The EPO has been informed by WIPO that EP was designated in this application
ENP Entry into the national phase

Ref document number: 2532530

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2006522086

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2004261655

Country of ref document: AU

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2004757388

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2004261655

Country of ref document: AU

Date of ref document: 20040728

Kind code of ref document: A

WWP Wipo information: published in national office

Ref document number: 2004261655

Country of ref document: AU

WWP Wipo information: published in national office

Ref document number: 2004757388

Country of ref document: EP