WO2009137866A1 - Method and system for automated cell function classification - Google Patents

Method and system for automated cell function classification

Info

Publication number
WO2009137866A1
Authority
WO
WIPO (PCT)
Prior art keywords
cell
image
images
objects
cells
Prior art date
Application number
PCT/AU2009/000571
Other languages
French (fr)
Inventor
Ze'ev Wayne Bomzon
Sarah May Russell
Min Gu
Alan Gerald Herschtal
Original Assignee
Swinburne University Of Technology
Peter Maccallum Cancer Institute
Technion Research & Development Foundation Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2008902434A external-priority patent/AU2008902434A0/en
Application filed by Swinburne University Of Technology, Peter Maccallum Cancer Institute, Technion Research & Development Foundation Ltd filed Critical Swinburne University Of Technology
Publication of WO2009137866A1 publication Critical patent/WO2009137866A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/40 - Analysis of texture
    • G06T7/41 - Analysis of texture based on statistical description of texture
    • G06T7/44 - Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/69 - Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698 - Matching; Classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10056 - Microscopic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20068 - Projection on vertical or horizontal image axis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30024 - Cell structures in vitro; Tissue sections in vitro

Definitions

  • Described embodiments relate to methods and systems for automated cell function classification.
  • described embodiments relate to cell function classification based on extracting information regarding cells in a sample area from images of the sample area taken over time.
  • Imaging has become a key tool in biological studies, and current studies on cells rely heavily on microscopy. Cell microscopy also has many practical applications. For instance, drug screening, IVF and diagnostic tools may all use microscopy to some extent.
  • cells can be imaged for tens of hours at a time, thereby generating a large number of images that must be perused in order to obtain any relevant information regarding cell function or cell behaviour. This can be particularly problematic and error-prone when non-adherent cells are being studied, because of the highly dynamic nature and relatively rapid movement of the cells.
  • Certain embodiments relate to a method of automatic cell function classification, comprising: obtaining a sequence of images of a sample area in which at least one cell is located; processing the sequence of images to determine objects in each of the images; classifying the objects in each of the images to identify cell objects based on which of the objects resemble cells; determining properties of the identified cell objects across the sequence of images to determine behaviour of each cell object; and classifying one or more cell functions of each identified cell object based on the determined properties.
  • the method may further comprise determining whether object classification training is required in order to classify the objects and, if so, training an object classifier using object classification training data.
  • the method may further comprise determining whether cell function classification training is required in order to classify the cell functions and, if so, training a cell function classifier using cell function training data.
  • the processing may comprise applying filtering to each of the images to generate an enhanced version of each image.
  • the filtering may comprise noise reduction filtering and may comprise background removal filtering.
  • the processing may comprise applying an image intensity threshold to generate a binary image for each of the images in the sequence.
  • the processing may comprise identifying objects in each image based on the binary image.
  • the objects may be identified by determining groups of neighbouring or adjacent pixels in each image for which an intensity of the pixel in the image is above the image intensity threshold.
  • the image intensity threshold may be an adaptive threshold or it may be a fixed threshold, for example.
  • the method may further comprise modifying each image of the sequence to remove objects that are not determined to be cells.
  • the method may further comprise masking each image of the sequence using the binary image generated from that image to remove background image data from each image of the sequence.
  • Classifying the objects may comprise classifying each object as resembling one of: no cells; one cell; two cells; three cells; four cells; and more than four cells.
  • the method may further comprise measuring one or more properties of each cell object.
  • the one or more properties may include a velocity of the cell object over the sequence of images and may include shape properties and/or a centroid of each cell object.
  • the one or more properties may include a path of the cell within the sample area over the sequence of images.
  • Objects identified as resembling more than one cell are treated initially as a single object, but the object is tracked so that, if the cells separate from each other, each cell is then tracked as a separate entity. To do this, the object is treated as being equivalent to the same number of cell objects as the number of cells it resembles, with all cell objects sharing the same centroid, location and shape, until the cells separate from each other.
  • Embodiments of the described methods may also be used to track and classify functions of cells of more than one type and/or expressing different image characteristics, such as different fluorescence or non-fluorescence.
  • the method may further comprise storing a data file for each or all identified cell objects, the data file comprising classified cell functions of the cell objects for each image, measured properties of the cell objects for each image and a cell label for each cell object.
  • the data file may also comprise information on how the images were captured, such as the microscope used, the time intervals between images, the microscope magnification, the type of camera and the modes of imaging used.
  • the data file may also contain information regarding the cell types used and cell culture conditions.
  • a cell classification system comprising at least one processing device and data storage accessible to the at least one processing device.
  • the data storage comprises executable program code which, when executed by the at least one processing device, causes the system to perform methods described above.
  • the system may comprise an image acquisition system in communication with the data storage for acquiring the sequence of images of the sample area and storing the acquired sequence of images in the data storage for later processing according to methods described above.
  • the methods, systems and program code may be used to study non-adherent cells in the sample area, including live cells.
  • the cells may include T-cells and/or dendritic cells, for example.
  • Figure 1 is a block diagram of a system for automated cell function classification
  • FIG. 2 is a block diagram showing an analysis module of the system in further detail
  • Figure 3 is a flowchart of a method of automated cell function classification
  • Figure 4 is a flowchart of a method of enhancing images
  • Figure 5 is a flowchart of a method of identifying and classifying objects in the enhanced images
  • Figure 6 is a flowchart of a method of classifying objects
  • Figure 7 is a flowchart of a method of tracking cells through sequences of images
  • Figure 8 is a flowchart of a method of measuring cell properties
  • Figure 9 is a flowchart of a method of classifying cell function to identify events of interest
  • Figure 10 is a flowchart of a method of training a cell shape classifier
  • Figure 11 is a flowchart of a method of training a cell function classifier
  • Figure 12 is a flowchart of a method for segmenting fluorescent images
  • Figure 13 is a flowchart of a method for segmenting non-fluorescent images
  • Figure 14 is a flowchart of a method for creating two binary images, each showing a different cell type from one fluorescent image and one non-fluorescent image.
  • the described embodiments relate generally to methods and systems for automated cell function classification based on captured image data.
  • the image data may be captured and processed in real time or may be stored for processing at any time after acquisition of the images.
  • the obtained images of the sample area have at least one, and probably many, live cells shown in those images.
  • live cells tend to move, change and interact with other cells
  • a sequence of images of such cells can be used to extract a substantial amount of information about the functions of the observed cells.
  • the described embodiments generally involve processing sequences of images to automatically identify, classify and track cells through the sequences of images.
  • the described embodiments generally provide an efficient means for analyzing images generated in high-throughput microscopy measurements performed on non-adherent cells. These embodiments relate to a method for automated analysis of the content in sequences of images showing non-adherent cells under the microscope.
  • An example of the types of biological systems that the analysis can be applied to is studying T-cells and their interactions with antigen-presenting cells using microscopy.
  • the described embodiments may employ machine learning for automated classification of cells.
  • Work in this field has focused mainly on phenotyping adherent cell types from microscope images.
  • non-adherent cells are highly dynamic and therefore different approaches are sought.
  • The purpose of the described embodiments is not to phenotype cells from static images. Rather, the invention is devised to identify changes in the function of specific cells, and to determine interactions between cell types, such as T-cell and dendritic cell interactions, over time. Changes in function are manifested not only in shape changes and changes in fluorescent intensity, but also in changes of velocity and cell-cell proximity. Some described embodiments utilize these parameters for automatically interpreting the content of sequences of images showing cells.
  • cells are cultured and placed in a transparent container suitable for cell culture and for imaging with a microscope.
  • the types of containers might be chamber slides, Petri dishes, multi-well plates or a microscope slide.
  • the cells are then placed on a microscope and images of the cells are captured over a period of time which can range from minutes to days.
  • There are many different modes of imaging that can be used for viewing the cells. For convenience, we will classify the various modes into two classes: fluorescence-based imaging modes and non-fluorescence-based imaging modes.
  • When fluorescence imaging is used, specific compounds within the sample are excited with a certain band of wavelengths of light, and the image is collected based on the emission of these compounds at a different band of wavelengths.
  • This type of imaging could include one-photon fluorescence, two-photon fluorescence, multiphoton fluorescence, second harmonic generation-based imaging and imaging based on higher harmonics.
  • In non-fluorescence imaging, the sample is viewed based on its optical properties, such as reflection, transmission and refractive index.
  • the image formation is performed using the same spectra as the excitation source.
  • Non-fluorescence imaging includes methods such as transmission microscopy, reflection microscopy, phase contrast microscopy and Differential Interference Contrast microscopy (DIC).
  • fluorescence imaging will be performed at one or several wavelengths, and this can be supplemented by images acquired using a non-fluorescence technique, such as DIC.
  • There is no limitation on the type of (light) microscope that can be used for these experiments, as long as the microscope can generate images of live cells. Suitable microscopes could include widefield microscopes or confocal microscopes or any other type of device that can be used to create images of cells in fluorescent and non-fluorescent modes.
  • cells can be imaged using either fluorescent or non-fluorescent techniques.
  • image processing and identification is generally easier if fluorescent imaging is acquired.
  • these modes of imaging provide a wealth of information regarding protein interactions and motion, which are of interest to scientists. Therefore, some mode of fluorescence will generally be used.
  • a single type of cell might be labeled with two different fluorescent labels. Each label will generally mark a different compound of interest within the cell. These compounds could be specific proteins, ions, lipids or anything else found within the cell that can be labeled.
  • autofluorescence might be used. Autofluorescence is the fluorescent signal emitted by the cell when no labeling is performed. In all cases, a separate image is captured for each wavelength/mode of imaging at each time point.
  • Fluorescence labeling of cells can be performed in two ways: the cell can be exposed to a fluorescent compound, which enters the cell in some manner and labels compounds within the cell; or the genetic material in the cell can be manipulated so that the cell expresses a fluorescent version of certain proteins.
  • the different cells will generally be distinguishable through the different combination of fluorescent wavelengths at which they will be imaged.
  • An example of an experiment involving two cell types is an experiment involving dendritic cells and T-cells.
  • the system needs to be trained to distinguish between the two cell types.
  • One way to do this is to label the T-cells so they fluoresce red and label the dendritic cells so they fluoresce in green. In this manner, it is possible to distinguish the two cells simply by determining that if a pixel is positive in the green image then it belongs to a dendritic cell, and if it fluoresces red it belongs to a T-cell.
  • Alternatively, it is possible to label the T-cells with a fluorescent marker, and not label the dendritic cells with a fluorescent compound.
  • segmentation (described further below) of images captured in the non-fluorescence mode identifies the location of T-cells and dendritic cells but does not provide an immediate means for distinguishing between the two cell types, whereas segmentation of the red fluorescent image will reveal the pixels belonging only to the T-cells.
  • a set of logical operations (logical XOR followed by logical AND) can then be used to generate two segmented images: one image in which only the dendritic cells are visible, and one image in which only the T-cells are visible. These images can then be used for training a classifier.
  • the measurement module may perform measurements of relationships regarding the objects in the two image sequences, such as proximity of cells of one type to cells of the second type. For instance, in order to determine whether a T-cell is interacting with a dendritic cell, it may be desirable to determine that the T-cell is in contact with the dendritic cell.
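  • By way of illustration, the minimum-distance measurement mentioned above could be computed directly from the two binary cell-type masks. The sketch below is a minimal Python example assuming NumPy and SciPy (libraries not named in the description); the contact threshold in the usage comment is purely illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def min_distance(mask_a, mask_b, pixel_size_um=1.0):
    """Minimum distance between any pixel of cell type A and any pixel of cell type B."""
    pts_a = np.argwhere(mask_a)          # (row, col) coordinates of "on" pixels, type A
    pts_b = np.argwhere(mask_b)          # (row, col) coordinates of "on" pixels, type B
    if len(pts_a) == 0 or len(pts_b) == 0:
        return np.inf                    # one of the cell types is absent from this frame
    return cdist(pts_a, pts_b).min() * pixel_size_um

# Cells closer than roughly one pixel could, for example, be treated as in contact:
# interacting = min_distance(t_cell_mask, dendritic_mask) <= 1.5
```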
  • System 100 comprises an image acquisition unit 110, an analysis unit 120, a sample 130 within a controlled environment 135 and a data storage unit 140.
  • Image acquisition unit 110 captures images of one or more sample areas in the controlled environment 135 and stores these captured images as image sequences in data storage 140.
  • Analysis unit 120 accesses the sequences of images in data storage 140 and processes these sequences in order to obtain information about cells in the sample area from which the images were captured.
  • Analysis unit 120 may comprise a computer system, either as part of image acquisition unit 110 or as a separate system, and has a processor 122, memory 126 and user interface 124.
  • Processor 122 may comprise more than one processing component and may include, for example, field programmable gate arrays (FPGAs), parallel or distributed processing components, application specific integrated circuits (ASICs) or digital signal processors (DSPs).
  • User interface 124 may comprise standard computer peripheral devices, such as a keyboard, mouse and display.
  • Memory 126 may comprise one or more forms of volatile and non-volatile memory accessible to processor 122.
  • Memory 126 comprises computer readable program code which, when executed by processor 122, performs functions as described herein. For convenience, some of the program code stored in memory 126 is referred to as a configuration module 127 and an analysis module 128. When executed, configuration module 127 allows a user to configure analysis unit 120 in relation to its handling of sequences of images. Functions of analysis module 128, when executed, are described in further detail below, with reference to Figure 2.
  • a sequence of time-lapse images showing the behaviour of the cells is acquired with the image acquisition unit 110. These images are then stored in digital form in the data storage unit 140. The stored data is retrieved from data storage 140 by the image analysis unit 120, and analysed to identify and characterize cellular function in the image sequence. The results can be presented to a user using a user interface 124 of the analysis unit 120 and saved to memory using the data storage unit 140.
  • the image acquisition system 110 comprises a microscope 112, an image acquisition control module 114, an input interface 116 and a display 118.
  • a sample 130 containing the cells is placed on a microscope stage (not shown), and a time sequence of images of the sample is captured.
  • the parameters used for image acquisition are configurable using input interface 116.
  • the parameters are fed to the image acquisition control 114, which controls the elements of the microscope 112.
  • a user can refer to the display 118, which shows the user the images that are being obtained by the microscope 112.
  • Microscope 112 can be any type of microscope suitable for imaging live cells. This includes transmitted light microscopes, epi-fluorescence microscopes, confocal microscopes, two-photon microscopes and any modification of these instruments that can yield images of cells. There is no fundamental limitation on the mode of imaging as long as it yields images of the cells.
  • Microscope 112 is equipped with a detector that enables the capture of a digitized image.
  • the detector may be a CCD or EMCCD camera, in the case of a transmitted light or epifluorescence microscope, or a photomultiplier or avalanche detector, in the case of scanning systems such as a confocal microscope or a two-photon microscope. Any detector capable of producing a digital image can be used.
  • Microscope 112 may include a motorized stage for the sample, motorized focus and a motorized mechanism for selecting the imaging mode. This provides the user with more scope for application, but is not essential to the system. Suitable microscopes are made by companies such as Olympus, Zeiss, Leica or Nikon.
  • Microscope 112 may be equipped with environmental control units 135 so that the cells can be kept in a constant environment. Such units may be obtained from Solent, for example.
  • Image acquisition control 114 can be configured to fix the number of images to be captured, the locations on the sample at which to capture the images, the modes and wavelengths at which to capture images, the frequency at which images are captured, the exposure times and any other settings relevant to the image acquisition (e.g. camera gain).
  • the parameters used by image acquisition control 114 can be configured through the input interface 116.
  • a user can refer to the display 118, which can show the current images being formed by the microscope, in order to determine the appropriate configuration settings.
  • the input interface 116, image acquisition control 114 and display 118 may come as a single package, and may be available from either microscope manufacturers or from other vendors. For instance, MetaMorph and Image-Pro Plus are both software packages that include these three units.
  • Images acquired with the image acquisition unit 110 are saved on the data storage unit 140.
  • Acquired images can be saved using any suitable format, e.g. TIFF, bitmap, JPEG, etc.
  • the stored images can be retrieved by the analysis unit 120 for further manipulation as described herein.
  • Files and data generated by the analysis unit 120 are also written to the data storage unit 140.
  • Analysis module 128 is where all image analysis is performed. The various components of this module are described below.
  • the analysis module 128 can run in three different modes: cell shape training mode, cell function training mode and analysis mode.
  • In cell shape training mode, the input images are used to train a cell shape classifier to identify cell-like objects in images based on cell shape input such as may be input by a user.
  • In cell function training mode, sequences of images are used to train a cell function classifier to identify changes in cell functionality based on cell function input such as may be input by a user.
  • In analysis mode, the analysis module 128 is used to automatically interpret image sequences, and identify when and where cells are performing certain functions that may be of interest to the user.
  • Analysis module 128 comprises an image reading module 205, an image enhancement module 210, a segmentation module 215, an image modification module 220, a cell shape classifier 225, a measurement module 230, a tracking module 235, a cell function classifier 240, a user interface module 245, an XML scheme module 250 and a file writing module 255.
  • Image reading module 205 reads sequences of images stored in data storage 140 for processing by analysis module 128. Reading of the sequences of images by image reading module 205 from data storage 140 may be automatically performed, for example where image reading module 205 detects that a new or unread sequence of images is stored in data storage 140. Alternatively, image reading module 205 can be caused by a user to read a selected sequence of images from data storage 140. Image reading module 205 may effectively buffer the images in the sequence when providing the images to image enhancement module 210 for further processing.
  • Image enhancement module 210 receives images from image reading module 205 and performs certain enhancement functions in relation to each image.
  • image enhancement module 210 may apply a noise reduction filter and/or a background removal filter.
  • the noise reduction filter may be a Wiener adaptive filter, for example, having a predetermined kernel size selected according to the size of the cells under study. This kernel size can be configured by a user via user interface 124 using configuration module 127 prior to the analysis being performed in relation to the sequence of images using analysis module 128.
  • the background removal filter may be a morphological filter, such as a Top Hat filter, based on a predetermined structure element corresponding approximately to a shape of the cell.
  • the structure element may be a circle, ellipse, square, diamond or other discrete shape, as long as the structure element is chosen to be slightly bigger than the cell type under study.
  • Following application of the filters, image enhancement module 210 generates for each original image an enhanced image with reduced noise and removed background. The sequence of enhanced images is then passed to segmentation module 215.
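  • A minimal sketch of this enhancement stage, assuming a Python environment with SciPy and scikit-image (the description does not name any particular library); the kernel size and structure-element radius are illustrative placeholders that would be chosen to suit the cells under study.

```python
import numpy as np
from scipy.signal import wiener
from skimage.morphology import white_tophat, disk

def enhance_image(image, noise_kernel=5, cell_radius_px=15):
    """Noise reduction followed by background removal, as in module 210."""
    image = image.astype(float)
    # Adaptive (Wiener) noise reduction with a kernel sized to the cells under study.
    denoised = wiener(image, mysize=noise_kernel)
    # Top-hat background removal; the structure element is slightly larger than a cell.
    enhanced = white_tophat(denoised, disk(cell_radius_px))
    return enhanced
```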
  • Segmentation module 215 distinguishes between pixels belonging to objects and pixels belonging to the background in the enhanced image to identify objects within the sequence of images.
  • the segmentation module 215 first applies an image intensity threshold to all pixels in the image so that pixels below the intensity threshold are considered to be "off" or 0 or "black", and pixels having an intensity above the threshold are considered to be "on" or 1 or "white". In this way, segmentation module 215 generates a binary image from each enhanced image, containing only "black" and "white". From this binary image, objects within the image are identified as groups of two or more adjoining or adjacent pixels with the same value. All objects thus identified are sequentially labelled.
  • Segmentation module 215 thus generates for each enhanced image a binary mask in which the background is 0 and the foreground is 1, and a list of labelled objects together with the pixels that belong to each object in the enhanced image. Generally, objects that are considered background will not be labelled. In some embodiments, all pixels belonging to a specific object have the value of that object label.
  • the intensity threshold to be used in generating the binary masking image can be a fixed threshold or an adaptive or dynamically determined threshold.
  • an adaptive threshold may be used where reflected light intensities are uneven across the sample area.
  • the adaptive threshold may be used to compensate for areas having low reflected light intensity and areas having relatively higher reflected light intensity.
  • Thresholding may be followed by other operations that help to separate background from foreground. For example, morphological opening and closing might be used to correct pixels that were incorrectly classified as background or foreground by application of the threshold.
  • Other algorithms that could be used for segmentation include voting schemes, region growing schemes and watershed segmentation.
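  • As an illustration, the thresholding, morphological clean-up and labelling described above could be sketched as follows with scikit-image; Otsu's criterion stands in for the configurable global threshold, and the opening/closing radius is an assumption.

```python
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import binary_opening, binary_closing, disk

def segment_image(enhanced):
    """Threshold an enhanced image, clean it up and label connected objects (module 215)."""
    binary = enhanced > threshold_otsu(enhanced)   # global threshold; an adaptive one could be used
    binary = binary_opening(binary, disk(1))       # drop isolated foreground pixels
    binary = binary_closing(binary, disk(1))       # bridge small gaps inside objects
    labelled = label(binary)                       # sequentially labelled objects
    return binary, labelled
```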
  • Cell shape classifier 225 receives the binary image generated by segmentation module 215 for each enhanced image in the sequence and is used to classify the objects identified in the binary image.
  • Cell shape classifier 225 can be used in either a training mode or an analysis mode.
  • the classifier can be constructed using tools such as Bayesian classification, decision trees, logistic regression and Support Vector Machines.
  • Image modification module 220 is used to eliminate non-cell like objects from the images.
  • Image modification module 220 receives a sequence of images showing labelled objects, along with a list containing object classifications assigned by cell shape classifier 225.
  • Image modification module 220 scans the list for labels of non-cell like objects in each image and identifies the pixels belonging to those objects in each image. The image modification module 220 then sets the value of the non-cell like objects in each image to 0. This is also done for each of the binary images. In this way, all non-cell like objects are removed from the images.
  • Image modification module 220 uses the binary images as masks and multiplies each enhanced image with the corresponding binary mask image to generate a new sequence of enhanced images in which the background is completely suppressed. Image modification module 220 then re-labels all of the objects in each image so that the remaining cell-like objects are indexed sequentially.
  • Measurement module 230 receives images and measures the properties of the objects in the images. Measurement module 230 generates a list of the properties of the objects, which can include measurements such as the object centroids, areas, shape moments and other characteristic parameters such as the circumference of the object, the circularity of the object, the Euler number of the object and the extent of the object, the major and minor axes of the objects and the ratio between the two.
  • the list of measurements can also include a list of properties relating to the intensity distribution and texture of the pixels belonging to each object. For example, measurement module 230 can be used to measure the maximum intensity, minimum intensity, median intensity and ratios between these parameters. Measurement module 230 can be used to calculate the centre of mass of the intensity of the object and higher moments of the intensity as well. Various ratios between the intensity parameters and the shape parameters that could be useful for training and classification might also be calculated by measurement module 230.
  • measurement module 230 receives both binary and intensity images from the enhancement module 210, segmentation module 215 and image modification module 220. Measurement module 230 also receives the list of pixels belonging to each object from the segmentation module 215 or image modification module 220. Measurement module 230 can then isolate the pixels belonging to each object in both the binary and intensity images and perform the measurements described above. The measurement module 230 may also receive a list of trajectories from the tracking module 235 and calculate the velocity of each tracked cell from this information. Other parameters characterising the motion of the cell objects, such as mean squared displacement, directionality, average velocity etc., can also be calculated from the trajectories.
  • Tracking module 235 receives the list of measurements from the measurement module 230, and matches the centroids in sequential frames to form a list containing an estimate of the trajectories of the cells imaged in each experiment.
  • the list generated by the tracking module includes an identifier for each tracked cell, the numbers of the images in which it appears, the centroids of the shape in each instance that it appeared and all other measured properties of the cell at each instance it appeared. Tracking can be performed using any multiple particle tracking algorithm, for example the algorithm suggested by Crocker and Grier (Journal of Colloid and Interface Science).
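  • The sketch below illustrates one simple way trajectories and velocities could be built from the per-frame centroid lists, using optimal nearest-neighbour assignment rather than the cited Crocker and Grier algorithm; the maximum allowed displacement is an assumed parameter, and lost tracks are not re-linked.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def link_centroids(frames, max_disp=20.0):
    """frames: list of (N_i, 2) arrays of centroids, one per image.
    Returns a list of trajectories, each a list of (frame_index, centroid)."""
    tracks = [[(0, c)] for c in frames[0]]
    open_tracks = list(range(len(tracks)))
    for t in range(1, len(frames)):
        prev = np.array([tracks[k][-1][1] for k in open_tracks])
        curr = frames[t]
        if len(prev) and len(curr):
            cost = cdist(prev, curr)
            rows, cols = linear_sum_assignment(cost)
        else:
            rows, cols = np.array([], int), np.array([], int)
        matched, next_open = set(), []
        for r, c in zip(rows, cols):
            if cost[r, c] <= max_disp:               # accept only plausible moves
                tracks[open_tracks[r]].append((t, curr[c]))
                next_open.append(open_tracks[r])
                matched.add(c)
        for c in range(len(curr)):                   # unmatched centroids start new tracks
            if c not in matched:
                tracks.append([(t, curr[c])])
                next_open.append(len(tracks) - 1)
        open_tracks = next_open
    return tracks

def velocities(track, dt=1.0):
    """Per-step speed of one trajectory, in pixels per frame interval dt."""
    pts = np.array([p for _, p in track])
    return np.linalg.norm(np.diff(pts, axis=0), axis=1) / dt
```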
  • Cell function classifier 240 receives the list of trajectories, measurements on shapes, velocities and other derived parameters from the measurement module 230. It uses this information to classify the function of each cell in each image.
  • the cell function classifier 240 can be used in either training mode or classification mode.
  • User interface module 245 receives input from users (via user interface 124) and provides visual feedback from the analysis module 128 to the user.
  • User input received through user interface 124 might include the names of files to be read by the image reading module 205, or parameters required for the configuration of the system, such as global thresholds for segmentation, various parameters required for tracking and the types of measurements that should be performed by the measurement module 230.
  • the user interface is also the module through which user-based classification of objects and cell function is received in the training modes for cell shape classifier 225 and cell function classifier 240.
  • the user interface module 245 may be used to examine the images in the sequence visually, assess the suitability of the configuration parameters through visual examination of the results and display measurement results and cell trajectories.
  • XML scheme module 250 is used to form a searchable XML scheme containing information about the experiment and analysis.
  • the XML scheme contains information about the experimental setup such as types of microscopes used, types of cells, exposure times etc.
  • the XML scheme also contains a list of all the cells that were tracked and classified during the experiment and contains fields that characterise the function of each cell in each frame.
  • the XML scheme is based on known keywords and fields and provides users with a convenient method for gaining information about the content of a complex experiment at a glance.
  • the XML scheme module 250 also provides a mechanism whereby users can search for experiments containing specific types of data.
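  • The description does not prescribe a concrete schema, but a searchable record of this kind could be assembled with Python's standard xml.etree.ElementTree; the tag and field names below are illustrative only.

```python
import xml.etree.ElementTree as ET

def build_experiment_record(setup, cells):
    """setup: dict of acquisition metadata; cells: list of dicts with per-frame functions."""
    root = ET.Element("experiment")
    setup_el = ET.SubElement(root, "setup")
    for key, value in setup.items():                 # e.g. microscope, exposure_ms, cell_type
        ET.SubElement(setup_el, key).text = str(value)
    cells_el = ET.SubElement(root, "cells")
    for cell in cells:
        cell_el = ET.SubElement(cells_el, "cell", id=str(cell["id"]))
        for frame, function in cell["functions"].items():
            ET.SubElement(cell_el, "frame", number=str(frame), function=function)
    return ET.ElementTree(root)

# Example (illustrative field names):
# build_experiment_record({"microscope": "confocal", "exposure_ms": 200},
#     [{"id": 1, "functions": {1: "migrating", 2: "interacting"}}]).write("experiment.xml")
```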
  • File writing module 255 writes all information generated during the operation of all other modules to memory 126 and/or data storage 140. All information is saved in a systematic manner that enables the system or users to identify all files belonging to a single experiment. This could be by creating batches of files, or by naming the files using prefixes that clearly notate the experiment to which the files belong, and the types of data stored in each file.
  • a step 305 of configuring the system for image capture may be performed. This configuration can be done according to stored configuration settings or may be performed by a user via user interface 124 or input interface 116, or both.
  • Method 300 comprises obtaining a sequence of images at step 310 using image reading module 205 to access the sequence as stored in data storage 140.
  • the sequence of images is enhanced by applying filtering to each image. Step 315 is described in further detail below, with reference to Figure 4.
  • segmentation module 215 is used to identify objects in each of the enhanced images.
  • the identified objects are then classified at step 325 in order to identify cell-like objects within each enhanced image. Steps 320 and 325 are described in further detail below, with reference to Figures 5 and 6. If a cell shape classifier needs to be trained in order to perform the object classification, this can be done according to method 1000 described below in relation to Figure 10.
  • Step 330 the properties of the cell-like objects are measured and movements of the cells are tracked through the sequences of images. Step 330 is described in further detail below, with reference to Figures 7 and 8.
  • Step 335 the cell function of each cell like object is classified in order to identify properties, events and/or behaviour of interest to the cell study. Step 335 is described in further detail below, with reference to Figure 9. If a cell function classifier needs to be trained in order to perform the cell function classification, this can be done according to method 1100 described below in relation to Figure 11.
  • step 405 a set of images is acquired from image reading module 205.
  • step 410 the index i, which is used to denote the number of the image in the sequence, is set to 1.
  • step 415 the ith image is read into the image enhancement module 210.
  • a noise reduction filter is applied to the image. This filter could comprise a low pass Gaussian filter, a moving average filter or an adaptive noise reduction filter such as a Wiener filter.
  • step 425 a background removal filter is applied to the ith image.
  • This may be done by estimating the background of the image generated in step 420 using a moving average filter with a large kernel, and then subtracting the result from the image generated in step 420. Alternatively it may be done by performing a morphological tophat on the image generated in step 420, where the structure element is chosen to be much larger than a typical cell.
  • step 430 the image enhancement module 210 checks if the ith image is the last image in the sequence. If it is the last image, then the filtered and enhanced images are stored to memory 126 in step 440. If it is not the last image, then the generated filtered image is stored to memory 126 in step 435, and the value of i increases by 1. Image enhancement module 210 then returns to step 415 to read in the next image.
  • a sequence of enhanced images is acquired by segmentation module 215 from the image enhancement module 210.
  • the threshold to be applied to the images is determined.
  • the threshold may be determined by the user during configuration of the system, or automatically determined using criteria such as Otsu's criteria.
  • the threshold may be global or adaptive, for example.
  • step 515 the index i, which is used to denote the number of the image in the sequence, is set to 1.
  • step 520 a binary image of the ith image is generated by segmentation module 215 using a combination of thresholding and filtering. In the binary image, all pixels belonging to objects are equal to one, and all background pixels are equal to zero.
  • step 520 all groups of non-zero pixels in the binary image are grouped. All connected pixels are considered to belong to the same group. All groups are then provided with an identifier or label. A new image in which all pixels belonging to a certain group have the same identifier is then generated by segmentation module 215.
  • step 530 an index j for all objects in the image is set to 1.
  • step 540 the cell shape classifier 225 is used to determine whether the jth object is cell-like or not. If it is not cell-like then in step 555, all pixels belonging to this object are set to zero.
  • the cell shape classifier 225 determines the number of cells, N, in the jth object. In step 550, the measurement module measures the centroid of the object and allocates the centroid to N objects. In step 560, cell shape classifier 225 examines if there are any more objects in the image. If there are more objects, then index j is incremented and the cell shape classifier 225 proceeds to examine the next object in the image at step 535.
  • the cell shape classifier 225 examines whether there are any more images in the sequence at step 565. If there are no more images in the sequence then cell shape classifier generates a list of coordinates and labelled and classified objects in step 570. If there are more images in the sequence, cell shape classifier 225 increments index i and proceeds to examine the next image at step 520.
  • cell shape classifier 225 receives from segmentation module 215 the list of pixels belonging to an object. It then calculates the principal axes of the shape, a rectangular bounding box of the shape and the area of the shape.
  • the image from which the pixels were derived is cropped around the bounding box. This creates a new image in which only the object of interest is visible.
  • the image generated in step 610 is rotated so that the principal axis of the shape is parallel to the y-axis.
  • the rotated image is resized so that the box is 16 pixels wide along the x-axis.
  • the projections of the resized images along the x and y axes are calculated.
  • the projections and the area of the shape are provided to the trained shape classifier.
  • the classifier determines what class the object belongs to.
  • the object classification is saved. This can be done by generating a vector containing the projections, area and a serial number or string notating the classification.
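  • As a hedged illustration of the feature extraction in steps 605 to 650, the sketch below derives comparable projection-plus-area features directly from the object's pixel coordinates and principal axes, rather than reproducing the crop, rotate and resize steps literally; the number of bins along the major axis is an assumption (only the 16-pixel width is fixed by the text above).

```python
import numpy as np
from skimage.measure import label, regionprops

def shape_features(binary_object_mask, minor_bins=16, major_bins=48):
    """Projection-plus-area feature vector for one object (cf. steps 605-650)."""
    props = regionprops(label(binary_object_mask.astype(int)))[0]
    coords = props.coords.astype(float)            # (N, 2) pixel coordinates of the object
    coords -= coords.mean(axis=0)                  # centre on the centroid
    # Principal axes from the coordinate covariance (major axis first).
    evals, evecs = np.linalg.eigh(np.cov(coords.T))
    order = np.argsort(evals)[::-1]
    aligned = coords @ evecs[:, order]             # column 0: major axis, column 1: minor axis
    # Normalise each axis to [0, 1] and histogram the pixel positions to obtain
    # fixed-length projections; the 16 minor-axis bins stand in for the
    # 16-pixel-wide resize described in the text.
    span = aligned.max(axis=0) - aligned.min(axis=0) + 1e-9
    aligned = (aligned - aligned.min(axis=0)) / span
    proj_major, _ = np.histogram(aligned[:, 0], bins=major_bins, range=(0.0, 1.0))
    proj_minor, _ = np.histogram(aligned[:, 1], bins=minor_bins, range=(0.0, 1.0))
    return np.concatenate([proj_minor, proj_major, [props.area]])
```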
  • step 710 the list of centroids for all objects and images is read by the tracking module 235.
  • step 720 the centroids in successive image frames are matched up using a suitable multiple particle tracking algorithm.
  • the tracking algorithm is then used at step 730 to track movement of the centroid of each cell across successive images.
  • tracking module 235 generates a list of all tracked cell objects and their centroids.
  • measurement module 230 is used to take measurements of cell movements and properties as described below in relation to Figure 8.
  • step 805 binary images are read by measurement module 230 from segmentation module 215, and a new set of images is generated in which all pixels belonging to the same object are assigned a value equal to the identifier of the object.
  • an index i which determines the number of the cell being examined is set to 1.
  • the track of the ith cell is extracted from the list of tracked objects from tracking module 235. This list will comprise a series of coordinates and frame numbers showing the cell trajectory.
  • an index j relating to the trajectory of the cell is set to 1.
  • the number of the frame in which the cell appeared for the jth time is determined from the list generated at 815 (let this index be N).
  • the Nth image is read by measurement module 230, and the pixels belonging to the ith object in this image are determined from the tracked objects list obtained at step 815 and the list of labelled objects in this image.
  • the shape and intensity properties of the object are measured by the measurement module 230 using the inputs and the information generated in step 825.
  • Cell velocity is calculated from the trajectory generated at step 825 by determining the movement of each cell object between images.
  • the determined measurements are added to a list containing information about the trajectories and properties of previous cells examined or of this cell examined at previous points.
  • cell function classification step 335 is described in further detail.
  • the cell function classifier 240 gets a list of cell tracks and measurements generated using the process described in Figure 8.
  • the list is provided to a trained cell function classifier.
  • the cell function classifier 240 determines the function being performed by the cell at each point using the input data.
  • the system outputs a new list containing the classification of functions of each cell at each time point (i.e. for each image).
  • a method 1000 of training a cell shape classifier is described in further detail.
  • a sequence of binary images obtained by segmenting microscope images of cells is obtained.
  • an index describing the number of the image in the sequence is set to 1.
  • the ith image is shown to a user using the user interface module 245 and user interface 124.
  • the user is asked to classify all objects in the image and corresponding cell shape classification data is received in response. Steps 1025 to 1055 are repeated for all objects in the image.
  • the bounding box, principal axes and area of the object are calculated.
  • the image is cropped to generate a new image containing only the object of interest. This is done by cropping around the bounding box.
  • the cropped image is rotated so the major principal axis is parallel to the y-axis.
  • the image is resized so that it is 16 pixels wide along the x-axis.
  • the projections of the shape onto x and y axes are calculated.
  • the area, projections and user classification for each shape are stored in a list.
  • the cell shape classifier 225 checks whether there are any more images in the stack. If there are more images, the cell shape classifier 225 moves on to the next image and returns to step 1015. If there are no more images, the list generated in step 1055 is fed into the cell shape classifier 225 to train it to identify cell-like objects. Training of the classifier may be performed using a support vector machine, a decision tree, a neural network, logistic regression or other suitable machine learning tool.
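  • Training such a classifier could look like the following sketch, using scikit-learn's support vector machine as one of the tools named above; the function and variable names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def train_cell_shape_classifier(feature_vectors, user_labels):
    """feature_vectors: (n_objects, n_features) projection + area features.
    user_labels: classes entered by the user in training mode (0, 1, 2, ... cells)."""
    X = np.asarray(feature_vectors, dtype=float)
    y = np.asarray(user_labels)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # one of the tools named above
    clf.fit(X, y)
    return clf

# In analysis mode, the same features would be computed for each segmented object
# and classified with: predicted_class = clf.predict([features])[0]
```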
  • step 1110 a list of cell properties and sequences of images showing cells is obtained.
  • step 1120 the cell function classifier 240 sets an index for the serial numbers of the tracked cells to 1.
  • step 1130 the system gets the track and list of cell properties of the ith cell from the list read in at step 1110.
  • the images of the cell are displayed to a user using the user interface module 245 and user interface 124, and cell function classifier 240 receives cell function classification data as input from the user.
  • the images of the cell will generally be shown to the user sequentially.
  • the user interface module 245 allows the user to scroll backwards and forwards through the images to better examine the cell and its function.
  • the classification of the function of the ith cell at each time point is recorded.
  • the cell function classifier 240 examines if there are any more cells that require classification. If there are more cells that require classification, the cell function classifier 240 will return to examine the next cell at step 1130.
  • the user input and list properties will be used to train the cell function classifier 240.
  • Training of the classifier may be performed using a support vector machine, neural network, decision tree, logistic regression algorithm or other suitable machine learning tool.
  • an experiment might involve two or more cell types.
  • the cell function sought to be classified might involve interactions between the two cell types
  • cell type A might be labelled to fluoresce at a wavelength of about 500 nm
  • cell type B might be labelled to fluoresce at around 550 nm. Since the images can be acquired at different wavelengths, the image acquisition unit 110 can be used to acquire one set of images in which only cells of type A are visible, and one set of images in which only cells of type B are visible. Each of these sets of images can then be fed into the analysis module 128 in parallel, and shape classification can be performed on each image using different parameters.
  • If function classification requires information about the interactions between the two cell types, then the segmented and enhanced images of both cell types are fed simultaneously into the measurement module and parameters such as the minimum distance from a cell of type A to a cell of type B can be measured. These parameters are then used in either function training mode, or in function classification mode.
  • both cell types will appear in one set of images, but only one cell type will be visible in a second mode of imaging. For instance, if non-fluorescent and fluorescent images are used, and if only cell type A is fluorescently labelled, then both cell types will appear in the non-fluorescent images, and only cell type A will appear in the fluorescent images. In this case, both sets of images are enhanced and segmented. A set of logical operators are then applied to the segmented images to determine which pixels belong to cell type A and which pixels belong to cell type B. This information is used to generate two new binary images: one in which only the pixels belonging to cell type A are "on", and one in which only the pixels belonging to cell type B are "on". These images are then fed into the measurement module 230 and used as explained above.
  • the shape training mode can be used to train the shape classifier to distinguish between different cell types based on shape. This information is then used to determine which pixels in the segmented images belong to which cell type. This information is then used to generate binary images in which the "on" pixels belong only to cells of a specific class. These binary images are then fed into the measurement module 230 and used as explained above.
  • the fluorescent images carry information about the distribution of proteins within the cell. As the cell alters its function, these distributions might change. Therefore, parameters calculated from the fluorescence images can be used to classify cell function. Some of the measured parameters that could be used include: the ratio between the maximum and minimum intensity; the ratio between the maximum and median intensity; the ratio between the maximum and average intensity; the difference between the maximum and minimum intensity; the ratio between the second moments of the intensity image and the second moments of the binary image of a single cell; the distance between the centroid of the intensity image and the centroid of the binary image of the cell; and higher order moments of the intensity distribution.
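  • These parameters are straightforward to compute once the pixels of a single cell have been isolated; a minimal sketch assuming NumPy, with the centroids taken, for example, from regionprops' weighted and unweighted centroids (the second-moment ratios would follow the same pattern).

```python
import numpy as np

def intensity_features(pixel_values, intensity_centroid, binary_centroid):
    """Fluorescence-distribution parameters for one cell.
    pixel_values: 1-D array of the cell's pixel intensities.
    intensity_centroid / binary_centroid: (row, col) centroids of the intensity-weighted
    and binary images of the cell."""
    vmax, vmin = pixel_values.max(), pixel_values.min()
    vmed, vavg = np.median(pixel_values), pixel_values.mean()
    return {
        "max_over_min": vmax / max(vmin, 1e-9),
        "max_over_median": vmax / max(vmed, 1e-9),
        "max_over_mean": vmax / max(vavg, 1e-9),
        "max_minus_min": vmax - vmin,
        "centroid_shift": float(np.linalg.norm(
            np.subtract(intensity_centroid, binary_centroid))),
    }
```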
  • a method 1200 for performing step 520 of generating a binary image from a fluorescent image is described in further detail. Given a sequence of images, method 1200 is repeated iteratively for all images in the sequence.
  • the image is read in.
  • the image is thresholded using a global or adaptive threshold to generate a binary image.
  • a morphological close is applied to the image, with a structure element that is much smaller than the cells of interest. The objective of this close is to join groups of pixels belonging to the same object in cases where the thresholding resulted in gaps.
  • a filter for closing holes in objects is applied.
  • the binary image is saved to memory 126.
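  • A compact rendering of method 1200, again assuming scikit-image and SciPy; the closing radius and the use of Otsu's threshold are assumptions.

```python
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, disk

def segment_fluorescent(image, close_radius=2):
    """Steps 1210-1240: threshold, close small gaps, fill holes (fluorescent channel)."""
    binary = image > threshold_otsu(image)          # global threshold; adaptive also possible
    binary = binary_closing(binary, disk(close_radius))
    binary = ndi.binary_fill_holes(binary)
    return binary
```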
  • a method 1300 is described for performing step 520 to generate a binary image from a non-fluorescent image.
  • the non- fluorescent image sequence is read in to the system using image reading module 205.
  • a tophat filter is applied to the image.
  • the structure element for the tophat filter will be small, such as a disk with a radius of two pixels. The objective of this filter is to enhance edges and make the cells more visible.
  • a Sobel filter is applied to enhance horizontal edges of the tophatted image.
  • a Sobel filter is applied to enhance vertical edges in the tophatted image.
  • the Sobel filtered images are added. In some cases it might be useful to add the squares of the images.
  • the resulting image is thresholded using an adaptive or global threshold to generate a binary image.
  • a hole filling algorithm is used to eliminate holes in objects, for example by smoothing.
  • a morphological close with a small structure element is applied to the binary image.
  • the binary image is saved to memory 126.
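  • Method 1300 maps onto standard filters in a similar way; the two-pixel top-hat disk follows the text above, while the squared-Sobel combination, the final closing radius and the use of Otsu's threshold are assumptions.

```python
from scipy import ndimage as ndi
from skimage.filters import sobel_h, sobel_v, threshold_otsu
from skimage.morphology import white_tophat, binary_closing, disk

def segment_non_fluorescent(image):
    """Edge-based segmentation of a transmission/DIC image (method 1300)."""
    tophatted = white_tophat(image.astype(float), disk(2))      # enhance edges, make cells visible
    grad = sobel_h(tophatted) ** 2 + sobel_v(tophatted) ** 2    # combine horizontal + vertical edges
    binary = grad > threshold_otsu(grad)
    binary = ndi.binary_fill_holes(binary)                      # eliminate holes inside objects
    binary = binary_closing(binary, disk(2))                    # final morphological close
    return binary
```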
  • a method 1400 is described for creating two binary images, each showing a different cell type from one fluorescent image and one non-fluorescent image. Given two sequences of images, method 1400 is repeated iteratively for all images in the sequence.
  • the segmented fluorescent and segmented non- fluorescent images are read in. These images can be generated using methods 1200 and 1300, for example.
  • a morphological dilation is applied to the fluorescent image. The dilation will generally be performed with a small structure element such as a disk with a radius of 2 pixels.
  • a logical XOR is applied to the dilated image and the segmented non-fluorescent image.
  • a logical AND is applied to the segmented non-fluorescent image and the image produced in step 1420 to generate a further image. This further image shows only pixels belonging to the cell type not present in the fluorescent image.
  • the image produced in 1430 and the segmented fluorescent image are saved to memory. These separate images can then be used to perform automatic cell function classification of more than one type of cell observed in the same experiment.
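  • Method 1400 reduces to a dilation and two logical operations on the binary masks; a minimal sketch with the same assumptions (the two-pixel dilation disk is taken from the text above).

```python
import numpy as np
from skimage.morphology import binary_dilation, disk

def split_cell_types(seg_fluorescent, seg_non_fluorescent, dilate_radius=2):
    """Return (mask of the labelled cell type, mask of the unlabelled cell type), as in method 1400."""
    dilated = binary_dilation(seg_fluorescent, disk(dilate_radius))   # step 1410
    xor = np.logical_xor(dilated, seg_non_fluorescent)                # step 1420
    other_type = np.logical_and(seg_non_fluorescent, xor)             # step 1430: unlabelled cells
    labelled_type = seg_fluorescent                                   # e.g. the T-cell channel
    return labelled_type, other_type
```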

Abstract

A method of automatic cell function classification, comprising obtaining a sequence of images of a sample area in which at least one cell is located, processing the sequence of images to determine objects in each of the images, classifying the objects in each of the images to identify cell objects based on which of the objects resemble cells, determining properties of the identified cell objects across the sequence of images to determine behaviour of each cell object, and classifying one or more cell functions of each identified cell object based on the determined properties.

Description

METHOD AND SYSTEM FOR AUTOMATED CELL FUNCTION
CLASSIFICATION
TECHNICAL FIELD
Described embodiments relate to methods and systems for automated cell function classification. In particular, described embodiments relate to cell function classification based on extracting information regarding cells in a sample area from images of the sample area taken over time.
BACKGROUND
Imaging has become a key tool in biological studies, and current studies on cells rely heavily on microscopy. Cell microscopy also has many practical applications. For instance, drug screening, IVF and diagnostic tools may all use microscopy to some extent.
There are two standard modes by which cell microscopy can work: transmission/phase microscopy and fluorescence microscopy. Both these modes can be applied to either living or fixed cells, and they yield complementary information. In recent years, fusion proteins that enable fluorescent labelling of proteins within live cells have become extremely common. The fusion proteins enable studies on protein motion/interaction within living cells, thereby providing a lot of insight into cell physiology. Fusion proteins in conjunction with other fluorescent markers that can be used in live cells have made live cell microscopy an extremely powerful tool for cell biology studies. In many cases optimal utilization of this power requires high-throughput experiments in which many cells are subjected to different conditions and imaged simultaneously. Such high-throughput experiments are becoming more common in the research environment. One of the problems associated with microscopy is that imaging cellular activity over extended periods of time generates immense amounts of data that need to be screened and analysed.
In some experiments, cells can be imaged for tens of hours at a time, thereby generating a large number of images that must be perused in order to obtain any relevant information regarding cell function or cell behaviour. This can be particularly problematic and error-prone when non-adherent cells are being studied, because of the highly dynamic nature and relatively rapid movement of the cells.
It is desired to address or ameliorate one or more disadvantages or shortcomings associated with existing techniques for studying cells, or to at least provide a useful alternative thereto.
SUMMARY
Certain embodiments relate to a method of automatic cell function classification, comprising: obtaining a sequence of images of a sample area in which at least one cell is located; processing the sequence of images to determine objects in each of the images; classifying the objects in each of the images to identify cell objects based on which of the objects resemble cells; determining properties of the identified cell objects across the sequence of images to determine behaviour of each cell object; and classifying one or more cell functions of each identified cell object based on the determined properties.
The method may further comprise determining whether object classification training is required in order to classify the objects and, if so, training an object classifier using object classification training data. The method may further comprise determining whether cell function classification training is required in order to classify the cell functions and, if so, training a cell function classifier using cell function training data.
The processing may comprise applying filtering to each of the images to generate an enhanced version of each image. The filtering may comprise noise reduction filtering and may comprise background removal filtering.
The processing may comprise applying an image intensity threshold to generate a binary image for each of the images in the sequence. The processing may comprise identifying objects in each image based on the binary image. The objects may be identified by determining groups of neighbouring or adjacent pixels in each image for which an intensity of the pixel in the image is above the image intensity threshold. The image intensity threshold may be an adaptive threshold or it may be a fixed threshold, for example. The method may further comprise modifying each image of the sequence to remove objects that are not determined to be cells. The method may further comprise masking each image of the sequence using the binary image generated from that image to remove background image data from each image of the sequence.
Classifying the objects may comprise classifying each object as resembling one of: no cells; one cell; two cells; three cells; four cells; and more than four cells. The method may further comprise measuring one or more properties of each cell object. The one or more properties may include a velocity of the cell object over the sequence of images and may include shape properties and/or a centroid of each cell object. The one or more properties may include a path of the cell within the sample area over the sequence of images.
Objects identified as resembling more than one cell are treated initially as a single object, but the object is tracked so that, if the cells separate from each other, each cell is then tracked as a separate entity. To do this, the object is treated as being equivalent to the same number of cell objects as the number of cells it resembles, with all cell objects sharing the same centroid, location and shape, until the cells separate from each other. Embodiments of the described methods may also be used to track and classify functions of cells of more than one type and/or expressing different image characteristics, such as different fluorescence or non-fluorescence.
The method may further comprise storing a data file for each or all identified cell object, the data file comprising classified cell functions of the cell objects for each image, measured properties of the cell objects for each image and a cell label for each cell object. The data file may also comprise information on how the images were captured, such as the microscope used, the time intervals between images, the microscope magnification, the type of camera and the modes of imaging used. The data file may also contain information regarding the cell types used and cell culture conditions.
Further embodiments relate to computer readable storage storing executable program code which, when executed by at least one processing device, causes the at least one processing device to perform methods described above.
Other embodiments relate to a cell classification system comprising at least one processing device and data storage accessible to the at least one processing device. The data storage comprises executable program code which, when executed by the at least one processing device, causes the system to perform methods described above. The system may comprise an image acquisition system in communication with the data storage for acquiring the sequence of images of the sample area and storing the acquired sequence of images in the data storage for later processing according to methods described above.
The methods, systems and program code may be used to study non-adherent cells in the sample area, including live cells. The cells may include T-cells and/or dendritic cells, for example.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments are described in further detail below, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a block diagram of a system for automated cell function classification;
Figure 2 is a block diagram showing an analysis module of the system in further detail;
Figure 3 is a flowchart of a method of automated cell function classification;
Figure 4 is a flowchart of a method of enhancing images;
Figure 5 is a flowchart of a method of identifying and classifying objects in the enhanced images;
Figure 6 is a flowchart of a method of classifying objects;
Figure 7 is a flowchart of a method of tracking cells through sequences of images;
Figure 8 is a flowchart of a method of measuring cell properties;
Figure 9 is a flowchart of a method of classifying cell function to identify events of interest;
Figure 10 is a flowchart of a method of training a cell shape classifier;
Figure 11 is a flowchart of a method of training a cell function classifier;
Figure 12 is a flowchart of a method for segmenting fluorescent images;
Figure 13 is a flowchart of a method for segmenting non-fluorescent images; and
Figure 14 is a flowchart of a method for creating two binary images, each showing a different cell type from one fluorescent image and one non-fluorescent image.
DETAILED DESCRIPTION
The described embodiments relate generally to methods and systems for automated cell function classification based on captured image data. The image data may be captured and processed in real time or may be stored for processing at any time after acquisition of the images. In the described embodiments, the obtained images of the sample area have at least one, and probably many, live cells shown in those images. As live cells tend to move, change and interact with other cells, a sequence of images of such cells can be used to extract a substantial amount of information about the functions of the observed cells. Thus, the described embodiments generally involve processing sequences of images to automatically identify, classify and track cells through the sequences of images.
The described embodiments generally provide an efficient means for analyzing images generated in high-throughput microscopy measurements performed on non-adherent cells. These embodiments relate to a method for automated analysis of the content in sequences of images showing non-adherent cells under the microscope. An example of the types of biological systems that the analysis can be applied to is studying T-cells and their interactions with antigen-presenting cells using microscopy.
The described embodiments may employ machine learning for automated classification of cells based on image-derived features. Work in this field has focused mainly on phenotyping adherent cell types from microscope images. However, non-adherent cells are highly dynamic and therefore different approaches are sought.
Furthermore, the purpose of the described embodiments is not to phenotype cells from static images. Rather, the invention is devised to identify changes in the function of specific cells, and determine interactions between cell types, such as T-cell and dendritic cell interactions, over time. Changes in function are manifested not only in shape changes and changes in fluorescent intensity, but also in changes of velocity and cell-cell proximity. Some described embodiments utilize these parameters for automatically interpreting the content of sequences of images showing cells.
In a typical experiment, cells are cultured and placed in a transparent container suitable for cell culture and for imaging with a microscope. The types of containers might be chamber slides, Petri dishes, multi-well plates or a microscope slide. The cells are then placed on a microscope and images of the cells are captured over a period of time which can range from minutes to days.
There are many different modes of imaging that can be used for viewing the cells. For convenience, we will classify the various modes into two classes: fluorescence-based imaging modes and non-fluorescence-based imaging modes. When fluorescence imaging is used, specific compounds within the sample are excited with a certain band of wavelengths of light, and the image is collected based on the emission of these compounds at a different band of wavelengths. This type of imaging could include one-photon fluorescence, two-photon fluorescence, multiphoton fluorescence, second harmonic generation based imaging and imaging based on higher harmonics.
In non-fluorescence imaging, the sample is viewed based on its optical properties, such as reflection, transmission and refractive index. The image formation is performed using the same spectra as the excitation source. Non-fluorescence imaging includes methods such as transmission microscopy, reflection microscopy, phase contrast microscopy and Differential Interference Contrast microscopy (DIC).
A combination of both fluorescence and non-fluorescence microscopy techniques may be used. Generally, fluorescence imaging will be performed at one or several wavelengths, and this can be supplemented by images acquired using a non-fluorescence technique, such as DIC.
There is no limitation on the type of (light) microscope that can be used for these experiments, as long as the microscope can generate images of live cells. Suitable microscopes could include widefield microscopes or confocal microscopes or any other type of device that can be used to create images of cells in fluorescent and non-fluorescent modes.
As mentioned above, cells can be imaged using either fluorescent or non-fluorescent techniques. However, image processing and identification is generally easier when fluorescence images are acquired. Furthermore, since it is possible to label specific proteins within the cells with fluorescent labels, these modes of imaging provide a wealth of information regarding protein interactions and motion, which are of interest to scientists. Therefore, some mode of fluorescence will generally be used.
In some instances, a single type of cell might be labeled with two different fluorescent labels. Each label will generally mark a different compound of interest within the cell. These compounds could be specific proteins, ions, lipids or anything else found within the cell that can be labeled. In some cases, autofluorescence might be used. Autofluorescence is the fluorescent signal emitted by the cell when no labeling is performed. In all cases, a separate image is captured for each wavelength/mode of imaging at each time point.
Fluorescence labeling of cells can be performed in two ways: the cell can be exposed to a fluorescent compound, which enters the cell in some manner and labels compounds within the cell; or the genetic material in the cell can be manipulated so that the cell expresses a fluorescent version of certain proteins.
In many cases, it may be desirable to image at least two different types of cells. The different cells will generally be distinguishable through the different combination of fluorescent wavelengths at which they will be imaged. An example of an experiment involving two cell types is an experiment involving dendritic cells and T-cells. In order to interpret the results, the system needs to be trained to distinguish between the two cell types. One way to do this is to label the T-cells so they fluoresce red and label the dendritic cells so they fluoresce in green. In this manner, it is possible to distinguish the two cells simply by determining that if a pixel is positive in the green image then it belongs to a dendritic cell, and if it fluoresces red it belongs to a T-cell.
Alternatively, it is possible to label the T-cells with a fluorescent marker, and not label the dendritic cells with a fluorescent compound. In this case, segmentation (described further below) of images captured in the non-fluorescence mode identifies the location of
T-cells and dendritic cells, but does not provide an immediate means for distinguishing between the two cell types, whereas segmentation of the red fluorescent image will reveal the pixels belonging only to the T-cells. A set of logical operations (logical XOR followed by logical AND) can then be used to generate two segmented images: one image in which only the dendritic cells are visible, and one image in which only the T-cells are visible. These images can then be used for training a classifier.
Once two sets of binary images have been generated with each binary image representing a different cell type, it is possible to perform function classification and function analysis for each set of images independently. In some cases, it might be desirable for the measurement module to perform measurements of relationships regarding the objects in the two image sequences, such as proximity of cells of one type to cells of the second type. For instance, in order to determine whether a T-cell is interacting with a dendritic cell, it may be desirable to determine that the T-cell is in contact with the dendritic cell.
Referring in particular to Figure 1, there is shown a block diagram of a system 100 for automated cell function classification. System 100 comprises an image acquisition unit 110, an analysis unit 120, a sample 130 within a controlled environment 135 and a data storage unit 140. Image acquisition unit 110 captures images of one or more sample areas in the controlled environment 135 and stores these captured images as image sequences in data storage 140. Analysis unit 120 accesses the sequences of images in data storage 140 and processes these sequences in order to obtain information about cells in the sample area from which the images were captured. Analysis unit 120 may comprise a computer system, either as part of image acquisition unit 110 or as a separate system, and has a processor 122, memory 126 and user interface 124. Processor 122 may comprise more than one processing component and may include, for example, field programmable gate arrays (FPGAs), parallel or distributed processing components, application specific integrated circuits (ASICs) or digital signal processors (DSPs). User interface 124 may comprise standard computer peripheral devices, such as a keyboard, mouse and display. Memory 126 may comprise one or more forms of volatile and non-volatile memory accessible to processor 122.
Memory 126 comprises computer readable program code which, when executed by processor 122, performs functions as described herein. For convenience, some of the program code stored in memory 126 is referred to as a configuration module 127 and an analysis module 128. When executed, configuration module 127 allows a user to configure analysis unit 120 in relation to its handling of sequences of images. Functions of analysis module 128, when executed, are described in further detail below, with reference to Figure 2.
A sequence of time-lapse images showing the behaviour of the cells is acquired with the image acquisition unit 110. These images are then stored in digital form in the data storage unit 140. The stored data is retrieved from data storage 140 by the image analysis unit 120, and analysed to identify and characterize cellular function in the image sequence. The results can be presented to a user using a user interface 124 of the analysis unit 120 and saved to memory using the data storage unit 140.
The image acquisition system 110 comprises a microscope 112, an image acquisition control module 114, an input interface 116 and a display 118. A sample 130 containing the cells is placed on a microscope stage (not shown), and a time sequence of images of the sample is captured. The parameters used for image acquisition are configurable using input interface 116. The parameters are fed to the image acquisition control 114, which controls the elements of the microscope 112. In order to optimize the image acquisition settings, a user can refer to the display 118, which shows the user the images that are being obtained by the microscope 112.
Microscope 112 can be any type of microscope suitable for imaging live cells. This includes transmitted light microscopes, epi-fluorescence microscopes, confocal microscopes, two-photon microscopes and any modification of these instruments that can yield images of cells. There is no fundamental limitation on the mode of imaging as long as it yields images of the cells.
Microscope 112 is equipped with a detector that enables the capture of a digitized image. The detector may be a CCD or EMCCD camera, in the case of a transmitted light or epifluorescence microscope, or a photomultiplier or avalanche detector, in the case of scanning systems such as a confocal microscope or a two-photon microscope. Any detector capable of producing a digital image can be used. Microscope 112 may include a motorized stage for the sample, motorized focus and a motorized mechanism for selecting the imaging mode. This provides the user with more scope for application, but is not essential to the system. Suitable microscopes are made by companies such as Olympus, Zeiss, Leica or Nikon. Microscope 112 may be equipped with environmental control units 135 so that the cells can be kept in a constant environment. Such units may be obtained from Solent, for example.
Image acquisition control 114 can be configured to fix the number of images to be captured, the locations on the sample at which to capture the images, the modes and wavelengths at which to capture images, the frequency at which images are captured, the exposure times and any other settings relevant to the image acquisition (e.g. camera gain). The parameters used by image acquisition control 114 can be configured through the input interface 116. A user can refer to the display 118, which can show the current images being formed by the microscope, in order to determine the appropriate configuration settings. The input interface 116, image acquisition control 114 and display 118 may come as a single package, and may be available from either microscope manufacturers or from other vendors. For instance, Metamorph and Image Pro Plus are both software packages that include these three units.
Images acquired with the image acquisition unit 110 are saved on the data storage unit 140. Acquired images can be saved using any suitable format, e.g. TIFF, bitmap, JPEG, etc. The stored images can be retrieved by the analysis unit 120 for further manipulation as described herein. Files and data generated by the analysis unit 120 are also written to the data storage unit 140.
Analysis module 128 is where all image analysis is performed. The various components of this module are described below. The analysis module 128 can run in three different modes: cell shape training mode, cell function training mode and analysis mode. In shape training mode, the input images are used to train a cell shape classifier to identify cell-like objects in images based on cell shape input such as may be input by a user. In function training mode, sequences of images are used to train a cell function classifier to identify changes in cell functionality based on cell function input such as may be input by a user. In analysis mode, the analysis module 128 is used to automatically interpret image sequences, and identify when and where cells are performing certain functions that may be of interest to the user.
Referring also to Figure 2, analysis module 128 is shown and described in further detail. Analysis module 128 comprises an image reading module 205, an image enhancement module 210, a segmentation module 215, an image modification module 220, a cell shape classifier 225, a measurement module 230, a tracking module 235, a cell function classifier 240, a user interface module 245, an XML scheme module 250 and a file writing module 255. These modules and classifiers are described in further detail below.
Image reading module 205 reads sequences of images stored in data storage 140 for processing by analysis module 128. Reading of the sequences of images by image reading module 205 from data storage 140 may be automatically performed, for example where image reading module 205 detects that a new or unread sequence of images is stored in data storage 140. Alternatively, image reading module 205 can be caused by a user to read a selected sequence of images from data storage 140. Image reading module 205 may effectively buffer the images in the sequence when providing the images to image enhancement module 210 for further processing.
Image enhancement module 210 receives images from image reading module 205 and performs certain enhancement functions in relation to each image. For example, image enhancement module 210 may apply a noise reduction filter and/or a background removal filter. The noise reduction filter may be a Wiener adaptive filter, for example, having a predetermined kernel size selected according to the size of the cells under study. This kernel size can be configured by a user via user interface 124 using configuration module 127 prior to the analysis being performed in relation to the sequence of images using analysis module 128. The background removal filter may be a morphological filter, such as a Top Hat filter, based on a predetermined structure element corresponding approximately to a shape of the cell. The structure element may be a circle, ellipse, square, diamond or other discrete shape, as long as the structure element is chosen to be slightly bigger than the cell type under study. Following application of the filters, image enhancement module 210 generates for each original image an enhanced image with reduced noise and removed background. The sequence of enhanced images is then passed to segmentation module 215.
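By way of illustration only, the filtering described above could be realised as follows. The described embodiments do not prescribe any particular programming language or library; the sketch below assumes Python with scipy and scikit-image, and the kernel size and structure element radius are placeholder values that would in practice be chosen according to the cell type under study.

```python
from scipy.signal import wiener
from skimage.morphology import white_tophat, disk

def enhance_image(image, kernel_size=5, cell_radius_px=15):
    """Noise reduction followed by background removal for one image."""
    image = image.astype(float)
    # Adaptive (Wiener) noise reduction; kernel size chosen relative to cell size.
    denoised = wiener(image, mysize=kernel_size)
    # White top-hat background removal; the structure element is chosen to be
    # slightly bigger than a typical cell so that cells are retained.
    enhanced = white_tophat(denoised, disk(cell_radius_px))
    return enhanced
```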
Segmentation module 215 distinguishes between pixels belonging to objects and pixels belonging to the background in the enhanced image to identify objects within the sequence of images. The segmentation module 215 first applies an image intensity threshold to all pixels in the image so that pixels below the intensity threshold are considered to be "off" or 0 or "black", and pixels having an intensity above the threshold are considered to be "on" or 1 or "white". In this way, segmentation module 215 generates a binary image from each enhanced image, containing only "black" and "white". From this binary image, objects within the image are identified as groups of two or more adjoining or adjacent pixels with the same value. All objects thus identified are sequentially labelled. Segmentation module 215 thus generates for each enhanced image a binary mask in which the background is 0 and the foreground is 1, and a list of labelled objects together with the pixels that belong to each object in the enhanced image. Generally, objects that are considered background will not be labelled. In some embodiments, all pixels belonging to a specific object have the value of that object label.
The intensity threshold to be used in generating the binary masking image can be a fixed threshold or an adaptive or dynamically determined threshold. For example, an adaptive threshold may be used where reflected light intensities are uneven across the sample area. The adaptive threshold may be used to compensate for areas having low reflected light intensity and areas having relatively higher reflected light intensity. Thresholding may be followed by other operations that help to separate background from foreground. For example, morphological opening and closing might be used to correct pixels that were incorrectly classified as background or foreground by application of the threshold. Other algorithms that could be used for segmentation include voting schemes, region growing schemes and watershed segmentation.
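A minimal sketch of such a segmentation step, again assuming Python with scikit-image (an assumption of the sketch, not a requirement of the described embodiments), is given below; it combines a global Otsu threshold with morphological opening and closing and connected-component labelling. An adaptive threshold could be substituted where illumination is uneven.

```python
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, binary_closing, disk
from skimage.measure import label

def segment_image(enhanced):
    """Threshold, clean up and label one enhanced image."""
    # Global threshold by Otsu's criterion.
    binary = enhanced > threshold_otsu(enhanced)
    # Opening and closing correct pixels misclassified by the threshold.
    binary = binary_closing(binary_opening(binary, disk(1)), disk(1))
    # Groups of connected foreground pixels become labelled objects (background = 0).
    labelled = label(binary)
    return binary, labelled
```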
Cell shape classifier 225 receives the binary image generated by segmentation module 215 for each enhanced image in the sequence and is used to classify the objects identified in the binary image. Cell shape classifier 225 can be used in either a training mode or an analysis mode. The classifier can be constructed using tools such as Bayesian classification, decision trees, logistic regression and support vector machines.
Image modification module 220 is used to eliminate non-cell-like objects from the images. Image modification module 220 receives a sequence of images showing labelled objects, along with a list containing object classifications assigned by cell shape classifier 225. Image modification module 220 scans the list for labels of non-cell-like objects in each image and identifies the pixels belonging to those objects in each image. The image modification module 220 then sets the value of the non-cell-like objects in each image to 0. This is also done for each of the binary images. In this way, all non-cell-like objects are removed from the images. Image modification module 220 uses the binary images as masks and multiplies each enhanced image with the corresponding binary mask image to generate a new sequence of enhanced images in which the background is completely suppressed. Image modification module 220 then re-labels all of the objects in each image so that the remaining cell-like objects are indexed sequentially.
Measurement module 230 receives images and measures the properties of the objects in the images. Measurement module 230 generates a list of the properties of the objects, which can include measurements such as the object centroids, areas, shape moments and other characteristic parameters such as the circumference of the object, the circularity of the object, the Euler number of the object and the extent of the object, the major and minor axes of the objects and the ratio between the two. The list of measurements can also include a list of properties relating to the intensity distribution and texture of the pixels belonging to each object. For example, measurement module 230 can be used to measure the maximum intensity, minimum intensity, median intensity and ratios between these parameters. Measurement module 230 can be used to calculate the centre of mass of the intensity of the object and higher moments of the intensity as well. Various ratios between the intensity parameters and the shape parameters that could be useful for training and classification might also be calculated by measurement module 230.
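By way of illustration, many of the shape and intensity measurements listed above map onto the region properties provided by scikit-image. The sketch below is one possible realisation and assumes that library; the property names used are those of the library rather than terms prescribed by the described embodiments.

```python
from skimage.measure import regionprops

def measure_objects(labelled, intensity_image):
    """Shape and intensity measurements for each labelled object."""
    measurements = []
    for region in regionprops(labelled, intensity_image=intensity_image):
        measurements.append({
            "label": region.label,
            "centroid": region.centroid,
            "area": region.area,
            "euler_number": region.euler_number,
            "extent": region.extent,
            "major_axis": region.major_axis_length,
            "minor_axis": region.minor_axis_length,
            "max_intensity": region.max_intensity,
            "min_intensity": region.min_intensity,
            "mean_intensity": region.mean_intensity,
            # Intensity-weighted centre of mass of the object.
            "weighted_centroid": region.weighted_centroid,
        })
    return measurements
```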
In order to perform measurements on objects, measurement module 230 receives both binary and intensity images from the enhancement module 210, segmentation module 215 and image modification module 220. Measurement module 230 also receives the list of pixels belonging to each object from the segmentation module 215 or image modification module 220. Measurement module 230 can then isolate the pixels belonging to each object in both the binary and intensity images and perform the measurements described above. The measurement module 230 may also receive a list of trajectories from the tracking module 235 and calculate the velocity of each tracked cell from this information. Other parameters characterising the motion of the cell objects, such as mean squared displacement, directionality, average velocity etc., can also be calculated from the trajectories.
Tracking module 235 receives the list of measurements from the measurement module 230, and matches the centroids in sequential frames to form a list containing an estimate of the trajectories of the cells imaged in each experiment. The list generated by the tracking module includes an identifier for each tracked cell, the numbers of the images in which it appears, the centroids of the shape in each instance that it appeared and all other measured properties of the cell at each instance it appeared. Tracking can be performed using any multiple particle tracking algorithm, for example the algorithm suggested by J. C. Crocker and D. G. Grier (Journal of Colloid and Interface Science 179, 298-310 (1996), Article No. 0217).
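The sketch below illustrates the idea of matching centroids between successive frames with a simple greedy nearest-neighbour linker. It is not the Crocker and Grier algorithm, only a simplified stand-in for illustration (Python assumed); the maximum step size is an assumed parameter.

```python
import numpy as np

def link_centroids(frames, max_step=20.0):
    """frames: one list of (row, col) centroids per image, in time order."""
    tracks = [[(0, c)] for c in frames[0]]        # each entry: (frame index, centroid)
    for t, centroids in enumerate(frames[1:], start=1):
        unused = list(centroids)
        for track in tracks:
            last_frame, last_c = track[-1]
            if last_frame != t - 1 or not unused:
                continue
            dists = [np.hypot(c[0] - last_c[0], c[1] - last_c[1]) for c in unused]
            j = int(np.argmin(dists))
            if dists[j] <= max_step:
                track.append((t, unused.pop(j)))
        # Centroids not matched to an existing track start new tracks.
        tracks.extend([[(t, c)] for c in unused])
    return tracks
```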
Cell function classifier 240 receives the list of trajectories, measurements on shapes, velocities and other derived parameters from the measurement module 230. It uses this information to classify the function of each cell in each image. The cell function classifier 240 can be used in either training mode or classification mode.
User interface module 245 receives input from users (via user interface 124) and provides visual feedback from the analysis module 128 to the user. User input received through user interface 124 might include the names of files to be read by the image reading module 205, or parameters required for the configuration of the system, such as global thresholds for segmentation, various parameters required for tracking and the types of measurements that should be performed by the measurement module 230. The user interface is also the module through which user-based classification of objects and cell function is received in the training modes for cell shape classifier 225 and cell function classifier 240. The user interface module 245 may be used to examine the images in the sequence visually, assess the suitability of the configuration parameters through visual examination of the results and display measurement results and cell trajectories.
XML scheme module 250 is used to form a searchable XML scheme containing information about the experiment and analysis. In particular, the XML scheme contains information about the experimental setup such as types of microscopes used, types of cells, exposure times etc. The XML scheme also contains a list of all the cells that were tracked and classified during the experiment and contains fields that characterise the function of each cell in each frame. The XML scheme is based on known keywords and fields and provides users with a convenient method for gaining information about the content of a complex experiment at a glance. The XML scheme module 250 also provides a mechanism whereby users can search for experiments containing specific types of data.
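By way of illustration, such an XML scheme could be written with a standard XML library. The sketch below assumes Python and its standard xml.etree module; all element and attribute names shown are placeholders rather than a prescribed schema.

```python
import xml.etree.ElementTree as ET

def write_experiment_xml(path, setup, cells):
    """setup: dict of acquisition settings; cells: list of per-cell records."""
    root = ET.Element("experiment")
    ET.SubElement(root, "setup",
                  microscope=setup["microscope"],
                  cell_type=setup["cell_type"],
                  exposure_ms=str(setup["exposure_ms"]))
    for cell in cells:
        cell_element = ET.SubElement(root, "cell", id=str(cell["id"]))
        # One entry per frame, recording the classified function of the cell.
        for frame_number, function in cell["functions"]:
            ET.SubElement(cell_element, "frame",
                          number=str(frame_number), function=function)
    ET.ElementTree(root).write(path)
```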
File writing module 255 writes all information generated during the operation of all other modules to memory 126 and/or data storage 140. All information is saved in a systematic manner that enables the system or users to identify all files belonging to a single experiment. This could be by creating batches of files, or by naming the files using prefixes that clearly notate the experiment to which the files belong, and the types of data stored in each file.
Referring now to Figure 3, a general method 300 of automated cell function classification is described in further detail. As a precursor to the automated cell function classification, a step 305 of configuring the system for image capture may be performed. This configuration can be done according to stored configuration settings or may be performed by a user via user interface 124 or input interface 116, or both.
Method 300 comprises obtaining a sequence of images at step 310 using image reading module 205 to access the sequence as stored in data storage 140. At step 315, the sequence of images is enhanced by applying filtering to each image. Step 315 is described in further detail below, with reference to Figure 4.
At step 320, segmentation module 215 is used to identify objects in each of the enhanced images. The identified objects are then classified at step 325 in order to identify cell-like objects within each enhanced image. Steps 320 and 325 are described in further detail below, with reference to Figures 5 and 6. If a cell shape classifier needs to be trained in order to perform the object classification, this can be done according to method 1000 described below in relation to Figure 10.
At step 330, the properties of the cell-like objects are measured and movements of the cells are tracked through the sequences of images. Step 330 is described in further detail below, with reference to Figures 7 and 8.
At step 335, the cell function of each cell like object is classified in order to identify properties, events and/or behaviour of interest to the cell study. Step 335 is described in further detail below, with reference to Figure 9. If a cell function classifier needs to be trained in order to perform the cell function classification, this can be done according to method 1100 described below in relation to Figure 11.
Referring now to Figure 4, a method for performing image enhancement according to step 315 is described in further detail. In step 405, a set of images is acquired from image reading module 205. In step 410, the index i, which is used to denote the number of the image in the sequence, is set to 1. In step 415, the ith image is read into the image enhancement module 210. In step 420, a noise reduction filter is applied to the image. This filter could comprise a low pass Gaussian filter, a moving average filter or an adaptive noise reduction filter such as a Wiener filter. In step 425, a background removal filter is applied to the ith image. This may be done by estimating the background of the image generated in step 420 using a moving average filter with a large kernel, and then subtracting the result from the image generated in step 420. Alternatively it may be done by performing a morphological tophat on the image generated in step 420, where the structure element is chosen to be much larger than a typical cell.
In step 430, the image enhancement module 210 checks if the ith image is the last image in the sequence. If it is the last image, then the filtered and enhanced images are stored to memory 126 in step 440. If it is not the last image, then the generated filtered image is stored to memory 126 in step 435, and the value of i is incremented by 1. Image enhancement module 210 then returns to step 415 to read in the next image.
Referring now to Figure 5, object identification and classification according to steps 320 and 325 are described in further detail. In step 505, a sequence of enhanced images is acquired by segmentation module 215 from the image enhancement module 210. In step 510, the threshold to be applied to the images is determined. The threshold may be determined by the user during configuration of the system, or automatically determined using a criterion such as Otsu's criterion. The threshold may be global or adaptive, for example.
In step 515, the index i, which is used to denote the number of the image in the sequence, is set to 1. In step 520, a binary image of the ith image is generated by segmentation module 215 using a combination of thresholding and filtering. In the binary image, all pixels belonging to objects are equal to one, and all background pixels are equal to zero.
In step 525, all groups of non-zero pixels in the binary image are grouped. All connected pixels are considered to belong to the same group. All groups are then provided with an identifier or label. A new image in which all pixels belonging to a certain group have the same identifier is then generated by segmentation module 215.
In step 530, an index j for all objects in the image is set to 1. In step 540, the cell shape classifier 225 is used to determine whether the jth object is cell-like or not. If it is not cell-like, then in step 555, all pixels belonging to this object are set to zero.
If the object is cell-like, then in step 545, the cell shape classifier 225 determines the number of cells, N, in the jth object. In step 550, the measurement module measures the centroid of the object and allocates the centroid to N objects. In step 560, cell shape classifier 225 examines if there are any more objects in the image. If there are more objects, then index j is incremented and the cell shape classifier 225 proceeds to examine the next object in the image at step 535.
If there are no more objects in the image, the cell shape classifier 225 examines whether there are any more images in the sequence at step 565. If there are no more images in the sequence, then the cell shape classifier 225 generates a list of coordinates and labelled and classified objects in step 570. If there are more images in the sequence, cell shape classifier 225 increments index i and proceeds to examine the next image at step 520.
Referring now to Figure 6, object classification according to step 535 is described in further detail. At step 605, cell shape classifier 225 receives from segmentation module 215 the list of pixels belonging to an object. It then calculates the principal axes of the shape, a rectangular bounding box of the shape and the area of the shape. At step 610, the image from which the pixels were derived is cropped around the bounding box. This creates a new image in which only the object of interest is visible. At step 615, the image generated in step 610 is rotated so that the principal axis of the shape is parallel to the y-axis. At step 620, the rotated image is resized so that the box is 16 pixels wide along the x-axis. At step 625, the projections of the resized image along the x and y axes are calculated. At step 630, the projections and the area of the shape are provided to the trained shape classifier. At step 635, the classifier determines what class the object belongs to. At step 640, the object classification is saved. This can be done by generating a vector containing the projections, area and a serial number or string notating the classification.
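The feature extraction of steps 605 to 630 could be sketched as follows, assuming Python with scikit-image. The specific rotation and resizing calls, and the resampling of the y projection to a fixed length so that all feature vectors are comparable, are assumptions of this sketch rather than requirements of the method.

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.transform import rotate, resize

def shape_features(binary_object):
    """binary_object: binary image containing a single object of interest."""
    region = regionprops(label(binary_object.astype(int)))[0]
    minr, minc, maxr, maxc = region.bbox
    cropped = binary_object[minr:maxr, minc:maxc].astype(float)
    # Rotate so that the major principal axis is roughly parallel to the y-axis
    # (the sign convention of 'orientation' varies between library versions).
    rotated = rotate(cropped, -np.degrees(region.orientation), resize=True)
    # Resize so that the image is 16 pixels wide along the x-axis.
    height = max(1, int(round(rotated.shape[0] * 16.0 / rotated.shape[1])))
    resized = resize(rotated, (height, 16))
    proj_x = resized.sum(axis=0)                  # projection onto the x-axis
    proj_y = resized.sum(axis=1)                  # projection onto the y-axis
    # Resample the y projection to a fixed length so that all feature vectors
    # have the same size (an assumption of this sketch, not of the method).
    proj_y = np.interp(np.linspace(0, len(proj_y) - 1, 32),
                       np.arange(len(proj_y)), proj_y)
    return np.concatenate([proj_x, proj_y, [region.area]])
```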
Referring now to Figure 7, cell tracking according to step 330 is described in further detail. At step 710, the list of centroids for all objects and images is read by the tracking module 235. At step 720, the centroids in successive image frames are matched up using a suitable multiple particle tracking algorithm. The tracking algorithm is then used at step 730 to track movement of the centroid of each cell across successive images. At step 740, tracking module 235 generates a list of all tracked cell objects and their centroids. At step 750, measurement module 230 is used to take measurements of cell movements and properties as described below in relation to Figure 8.
Referring now to Figure 8, obtaining measurements of cell movement and properties according to step 750 is described in further detail. At step 805, binary images are read by measurement module 230 from segmentation module 215, and a new set of images is generated in which all pixels belonging to the same object are assigned a value equal to the identifier of the object. At step 810, an index i, which determines the number of the cell being examined, is set to 1. At step 815, the track of the ith cell is extracted from the list of tracked objects from tracking module 235. This list will comprise a series of coordinates and frame numbers showing the cell trajectory.
At step 820, an index j relating to the trajectory of the cell is set to 1. At step 823, the number of the frame in which the cell appeared for the jth time is determined from the list generated at step 815 (let this index be N). At step 825, the Nth image is read by measurement module 230, and the pixels belonging to the ith object in this image are determined from the tracked objects list obtained at step 815 and the list of labelled objects in this image. In step 830, the shape and intensity properties of the object are measured by the measurement module 230 using the inputs and the information generated in step 825. Cell velocity is calculated from the trajectory by determining the movement of each cell object between images.
At step 835, the determined measurements are added to a list containing information about the trajectories and properties of previous cells examined or of this cell examined at previous points. At step 840, the measurement module 230 determines whether the cell position in the image is the last point in the trajectory of the ith cell. If it is not the last point, then the measurement module 230 returns to examine the (j+1)th point at step 823. If it is the last point, the measurement module 230 examines if there are any more objects to track at step 845. If there are, measurement module 230 sets i=i+1 and returns to examine the next object at step 815. If there are no more cells to consider, measurement module 230 stores the list of objects and properties at step 850 to memory 126 for use in cell function classification and other analysis functions.
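By way of illustration, velocity and a simple displacement measure can be derived from a trajectory as sketched below (Python assumed); the frame interval is an assumed input, and the squared displacement from the starting point is used here only as a simple proxy for mean squared displacement.

```python
import numpy as np

def motion_parameters(trajectory, frame_interval=1.0):
    """trajectory: (n_points, 2) array of centroids in frame order."""
    traj = np.asarray(trajectory, dtype=float)
    steps = np.diff(traj, axis=0)
    # Speed between consecutive frames, in pixels per unit time.
    velocities = np.hypot(steps[:, 0], steps[:, 1]) / frame_interval
    # Squared displacement from the starting point (a simple proxy only).
    squared_displacement = ((traj - traj[0]) ** 2).sum(axis=1)
    average_velocity = float(velocities.mean()) if velocities.size else 0.0
    return velocities, average_velocity, squared_displacement
```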
Referring now to Figure 9, cell function classification step 335 is described in further detail. At step 910, the cell function classifier 240 gets a list of cell tracks and measurements generated using the process described in Figure 8. At step 920, the list is provided to a trained cell function classifier. At step 930, the cell function classifier 240 determines the function being performed by the cell at each point using the input data.
At step 940, the system outputs a new list containing the classification of functions of each cell at each time point (i.e. for each image).
Referring now to Figure 10, a method 1000 of training a cell shape classifier is described in further detail. At step 1005, a sequence of binary images obtained by segmenting microscope images of cells is obtained. At step 1010, an index describing the number of the image in the sequence is set to 1. At step 1015, the ith image is shown to a user using the user interface module 245 and user interface 124. At step 1020, the user is asked to classify all objects in the image and corresponding cell shape classification data is received in response. Steps 1025 to 1055 are repeated for all objects in the image.
At step 1030, the bounding box, principal axes and area of the object are calculated. At step 1035, the image is cropped to generate a new image containing only the object of interest. This is done by cropping around the bounding box. At step 1040, the cropped image is rotated so the major principal axis is parallel to the y-axis.
At step 1045, the image is resized so that it is 16 pixels wide along the x-axis. At step 1050, the projections of the shape onto the x and y axes are calculated. At step 1055, the area, projections and user classification for each shape are stored in a list. At step 1060, the cell shape classifier 225 checks whether there are any more images in the stack. If there are more images, the cell shape classifier 225 moves on to the next image and returns to step 1015. If there are no more images, the list generated in step 1055 is fed into the cell shape classifier 225 to train it to identify cell-like objects. Training of the classifier may be performed using a support vector machine, a decision tree, a neural network, logistic regression or other suitable machine learning tool.
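By way of illustration, training of the cell shape classifier from the list generated in step 1055 could be sketched as follows using a support vector machine from scikit-learn; any of the other machine learning tools named above could be substituted, and the class labels shown are placeholders.

```python
from sklearn.svm import SVC

def train_shape_classifier(feature_vectors, user_labels):
    """feature_vectors: projection/area vectors from the list of step 1055;
    user_labels: e.g. 0 = not cell-like, 1 = one cell, 2 = two cells, ..."""
    classifier = SVC(kernel="rbf")
    classifier.fit(feature_vectors, user_labels)
    return classifier

# In analysis mode the trained classifier is then applied to new objects, e.g.:
# predicted_class = classifier.predict([shape_features(binary_object)])[0]
```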
Referring now to Figure 11, a method 1100 of training a cell function classifier 240 is described in further detail. In step 1110, a list of cell properties and sequences of images showing cells is obtained. At step 1120, the cell function classifier 240 sets an index for the serial numbers of the tracked cells to 1. At step 1130, the system gets the track and list of cell properties of the ith cell from the list read in at step 1110.
At step 1140, the images of the cell are displayed to a user using the user interface module 245 and user interface 124, and cell function classifier 240 receives cell function classification data as input from the user. The images of the cell will generally be shown to the user sequentially. The user interface module 245 allows the user to scroll backwards and forwards through the images to better examine the cell and its function.
At step 1150, the classification of the function of the ith cell at each time point (i.e. each image) is recorded. At step 1160, the cell function classifier 240 examines if there are any more cells that require classification. If there are more cells that require classification, the cell function classifier 240 will return to examine the next cell at step 1130.
If there are no more cells, the user input and list properties will be used to train the cell function classifier 240. Training of the classifier may be performed using a support vector machine, neural network, decision tree, logistic regression algorithm or other suitable machine learning tool.
In some instances, an experiment might involve two or more cell types. In fact, the cell function sought to be classified might involve interactions between the two cell types
(for instance T-cell and dendritic cell interactions). Therefore, it can be useful to be able to classify different cell types from the image automatically, and to measure parameters such as the distance between cells of different classes in order to perform automated classification of certain cell functions.
In many cases, the different cell types will be labelled differently. For instance, cell type A might be labelled to fluoresce at a wavelength of about 500nm, whereas cell type B might be labelled to fluoresce at around 550nm. Since the images can be acquired at different wavelengths, the image acquisition unit 110 can be used to acquire one set of images in which only cell type A are visible, and one set of images in which only cell type B are visible. Each of these sets of images can then be fed into the analysis module 128 in parallel, and shape classification can be performed on each image using different parameters. If function classification requires information about the interactions between the two cell types, then the segmented and enhanced images of both cell types are fed simultaneously into the measurement module and parameters such as the minimum distance from a cell of type A to a cell of type B can be measured. These parameters are then used in either function training mode, or in function classification mode.
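A minimal sketch of one such inter-type measurement, the minimum centroid-to-centroid distance from each cell of type A to the nearest cell of type B, is given below (Python with scipy assumed); a contact measurement would instead compare boundary pixels, which is not shown here.

```python
import numpy as np
from scipy.spatial.distance import cdist

def min_distances_between_types(centroids_a, centroids_b):
    """Each argument is an (n, 2) array of centroids for one cell type."""
    distances = cdist(np.asarray(centroids_a), np.asarray(centroids_b))
    # One value per type-A cell: distance to the nearest type-B cell.
    return distances.min(axis=1)
```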
In some cases, both cell types will appear in one set of images, but only one cell type will be visible in a second mode of imaging. For instance, if non-fluorescent and fluorescent images are used, and if only cell type A is fluorescently labelled, then both cell types will appear in the non-fluorescent images, and only cell type A will appear in the fluorescent images. In this case, both sets of images are enhanced and segmented. A set of logical operators is then applied to the segmented images to determine which pixels belong to cell type A and which pixels belong to cell type B. This information is used to generate two new binary images: one in which only the pixels belonging to cell type A are "on", and one in which only the pixels belonging to cell type B are "on". These images are then fed into the measurement module 230 and used as explained above.
In some cases, all cell types will appear in all modes of imaging. When this occurs, the shape training mode can be used to train the shape classifier to distinguish between different cell types based on shape. This information is then used to determine which pixels in the segmented images belong to which cell type. This information is then used to generate binary images in which the "on" pixels belong only to cells of a specific class. These binary images are then fed into the measurement module 230 and used as explained above.
The fluorescent images carry information about the distribution of proteins within the cell. As the cell alters its function, these distributions might change. Therefore, parameters calculated from the fluorescence images can be used to classify cell function. Some of the measured parameters that could be used include: the ratio between the maximum and minimum intensity; the ratio between the maximum and median intensity; the ratio between the maximum and average intensity; the difference between the maximum and minimum intensity; the ratio between the second moments of the intensity image and the second moments of the binary image of a single cell; the distance between the centroid of the intensity image and the centroid of the binary image of the cell; and higher order moments of the intensity distribution.
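By way of illustration, some of these intensity-distribution parameters could be computed for a single cell as sketched below (Python assumed); the small constant guarding against division by zero is an assumption of the sketch.

```python
import numpy as np

def intensity_features(intensity_image, cell_mask):
    """Intensity-distribution parameters for the pixels of a single cell."""
    values = intensity_image[cell_mask > 0].astype(float)
    i_max, i_min = values.max(), values.min()
    i_median, i_mean = np.median(values), values.mean()
    # Distance between the intensity-weighted centroid and the binary centroid.
    rows, cols = np.nonzero(cell_mask)
    weights = intensity_image[rows, cols].astype(float)
    binary_centroid = np.array([rows.mean(), cols.mean()])
    weighted_centroid = np.array([np.average(rows, weights=weights),
                                  np.average(cols, weights=weights)])
    eps = 1e-9  # guards against division by zero (assumption of this sketch)
    return {
        "max_over_min": i_max / (i_min + eps),
        "max_over_median": i_max / (i_median + eps),
        "max_over_mean": i_max / (i_mean + eps),
        "max_minus_min": i_max - i_min,
        "centroid_shift": float(np.linalg.norm(weighted_centroid - binary_centroid)),
    }
```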
Referring now to Figure 12, a method 1200 for performing step 520 of generating a binary image from a fluorescent image is described in further detail. Given a sequence of images, method 1200 is repeated iteratively for all images in the sequence. At step 1210, the image is read in. At step 1220, the image is thresholded using a global or adaptive threshold to generate a binary image. At step 1230, a morphological close is applied to the binary image, with a structure element that is much smaller than the cells of interest. The objective of this close is to join groups of pixels belonging to the same object in cases where the thresholding resulted in gaps. At step 1240, a filter for closing holes in objects is applied. At step 1250, the binary image is saved to memory 126.
Referring now to Figure 13, a method 1300 is described for performing step 520 to generate a binary image from a non-fluorescent image. At step 1310, the non-fluorescent image sequence is read into the system using image reading module 205. At step 1320, a tophat filter is applied to the image. The structure element for the tophat filter will be small, such as a disk with a radius of two pixels. The objective of this filter is to enhance edges and make the cells more visible. At step 1330, a Sobel filter is applied to enhance horizontal edges of the tophatted image. At step 1340, a Sobel filter is applied to enhance vertical edges in the tophatted image. At step 1350, the Sobel filtered images are added. In some cases, it might be useful to add the squares of the images. At step 1360, the resulting image is thresholded using an adaptive or global threshold to generate a binary image.
At step 1370, a hole filling algorithm is used to eliminate holes in objects, for example by smoothing. At step 1380, a morphological close with a small structure element is applied to the binary image. At step 1390, the binary image is saved to memory 126.
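A compact sketch of this non-fluorescent segmentation, assuming Python with scipy and scikit-image (an assumption of the sketch, not a requirement of method 1300), is given below; the structure element sizes are the small placeholder values mentioned above.

```python
from scipy import ndimage
from skimage.filters import sobel_h, sobel_v, threshold_otsu
from skimage.morphology import white_tophat, binary_closing, disk

def segment_non_fluorescent(image):
    """Edge-based segmentation of a non-fluorescent (e.g. DIC) image."""
    # Small top-hat to enhance edges and make the cells more visible.
    tophatted = white_tophat(image.astype(float), disk(2))
    # Horizontal and vertical edge enhancement, combined as a sum of squares.
    edges = sobel_h(tophatted) ** 2 + sobel_v(tophatted) ** 2
    binary = edges > threshold_otsu(edges)
    # Fill holes in objects and close small gaps.
    binary = ndimage.binary_fill_holes(binary)
    binary = binary_closing(binary, disk(2))
    return binary
```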
Referring now to Figure 14, a method 1400 is described for creating two binary images, each showing a different cell type, from one fluorescent image and one non-fluorescent image. Given two sequences of images, method 1400 is repeated iteratively for all images in the sequence. At step 1405, the segmented fluorescent and segmented non-fluorescent images are read in. These images can be generated using methods 1200 and 1300, for example. At step 1410, a morphological dilation is applied to the fluorescent image. The dilation will generally be performed with a small structure element such as a disk with a radius of 2 pixels. At step 1420, a logical XOR is applied to the dilated image and the segmented non-fluorescent image. At step 1430, a logical AND is applied to the segmented non-fluorescent image and the image produced in step 1420 to generate a further image. This further image shows only pixels belonging to the cell type not present in the fluorescent image.
At step 1440 the image produced in 1430 and the segmented fluorescent image are saved to memory. These separate images can then be used to perform automatic cell function classification of more than one type of cell observed in the same experiment.
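By way of illustration, the dilation, XOR and AND operations of method 1400 could be applied to a single pair of segmented images as sketched below (Python with scikit-image assumed); the structure element radius of two pixels follows the example given above.

```python
import numpy as np
from skimage.morphology import binary_dilation, disk

def split_cell_types(fluorescent_binary, non_fluorescent_binary):
    """Returns one binary image per cell type from a pair of segmented images."""
    dilated_a = binary_dilation(fluorescent_binary, disk(2))
    # Pixels present in exactly one of the two images ...
    xor_image = np.logical_xor(dilated_a, non_fluorescent_binary)
    # ... restricted to the non-fluorescent foreground: cell type B only.
    type_b = np.logical_and(non_fluorescent_binary, xor_image)
    type_a = np.asarray(fluorescent_binary, dtype=bool)
    return type_a, type_b
```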
Throughout this specification and the claims which follow, unless the context requires otherwise, the word "comprise", and variations such as "comprises" and "comprising", will be understood to imply the inclusion of a stated integer or step or group of integers or steps but not the exclusion of any other integer or step or group of integers or steps.
The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgment or admission or any form of suggestion that that prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavour to which this specification relates.

Claims

CLAIMS:
1. A method of automatic cell function classification, comprising: obtaining a sequence of images of a sample area in which at least one cell is located; processing the sequence of images to determine objects in each of the images; classifying the objects in each of the images to identify cell objects based on which of the objects resemble cells; determining properties of the identified cell objects across the sequence of images to determine behaviour of each cell object; and classifying one or more cell functions of each identified cell object based on the determined properties.
2. The method of claim 1, further comprising: determining whether object classification training is required in order to classify the objects; and training an object classifier using object classification training data if object classification training is required.
3. The method of claim 1 or 2, further comprising: determining whether cell function classification training is required in order to classify the cell functions; and training a cell function classifier using cell function training data if cell function classification training is required.
4. The method of any one of claims 1 to 3, wherein the processing comprises applying filtering to each of the images.
5. The method of claim 4, wherein the filtering comprises noise reduction filtering.
6. The method of claim 4 or claim 5, wherein the filtering comprises background removal filtering.
7. The method of any one of claims 4 to 6, wherein the processing comprises applying an image intensity threshold to generate a binary image for each of the images in the sequence.
8. The method of claim 7, wherein the processing comprises identifying objects in each image based on the binary image.
9. The method of claim 8, wherein the objects are identified by determining groups of neighbouring or adjacent pixels in each image for which an intensity of the pixel in the image is above the image intensity threshold.
10. The method of any one of claims 7 to 9, wherein the image intensity threshold is an adaptive threshold.
11. The method of any one of claims 7 to 9, wherein the image intensity threshold is a fixed threshold.
12. The method of any one of claims 7 to 11, further comprising modifying each image of the sequence to remove objects that are determined not to resemble cells.
13. The method of claim 12, further comprising masking each image of the sequence using the binary image generated from that image to remove background image data from each image of the sequence.
14. The method of any one of claims 1 to 13, wherein classifying the objects comprises classifying each object as resembling one of: no cells; one cell; two cells; three cells; four cells; and more than four cells.
15. The method of any one of claims 1 to 14, further comprising measuring one or more properties of each cell object.
16. The method of claim 15, wherein the one or more properties include a velocity of the cell object over the sequence of images.
17. The method of claim 15 or claim 16, wherein the one or more properties include shape properties and a centroid of each cell.
18. The method of any one of claims 15 to 17, wherein the one or more properties include a path of the cell within the sample area over the sequence of images.
19. The method of any one of claims 15 to 18, wherein the determining comprises measuring the one or more properties.
20. The method of any one of claims 1 to 19, further comprising storing a data file for each identified cell object, the data file comprising classified cell functions of the cell object, measured properties of the cell object and a cell label.
21. Computer readable storage storing executable program code which, when executed by at least one processing device, causes the at least one processing device to perform the method of any one of claims 1 to 20.
22. A cell classification system comprising at least one processing device and data storage accessible to the at least one processing device, the data storage comprising executable program code which, when executed by the at least one processing device, causes the system to perform the method of any one of claims 1 to 20.
23. The system of claim 22, further comprising an image acquisition system in communication with the data storage for acquiring the sequence of images of the sample area and storing the acquired sequence of images in the data storage for later processing.
PCT/AU2009/000571 2008-05-16 2009-05-07 Method and system for automated cell function classification WO2009137866A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2008902434 2008-05-16
AU2008902434A AU2008902434A0 (en) 2008-05-16 Method and system for automated cell function classification

Publications (1)

Publication Number Publication Date
WO2009137866A1 (en) 2009-11-19

Family

ID=41318272

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2009/000571 WO2009137866A1 (en) 2008-05-16 2009-05-07 Method and system for automated cell function classification

Country Status (1)

Country Link
WO (1) WO2009137866A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4724543A (en) * 1985-09-10 1988-02-09 Beckman Research Institute, City Of Hope Method and apparatus for automatic digital image analysis
US20060127881A1 (en) * 2004-10-25 2006-06-15 Brigham And Women's Hospital Automated segmentation, classification, and tracking of cell nuclei in time-lapse microscopy

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9607202B2 (en) 2009-12-17 2017-03-28 University of Pittsburgh—of the Commonwealth System of Higher Education Methods of generating trophectoderm and neurectoderm from human embryonic stem cells
ES2392292A1 (en) * 2010-09-07 2012-12-07 Telefónica, S.A. Method for classification of images
US20130183707A1 (en) * 2012-01-13 2013-07-18 University Of Pittsburgh - Of The Commonwealth System Of Higher Education Stem cell bioinformatics
JP2016509845A (en) * 2013-02-28 2016-04-04 Progyny, Inc. Apparatus, method and system for image-based human germ cell classification
US9542591B2 (en) 2013-02-28 2017-01-10 Progyny, Inc. Apparatus, method, and system for automated, non-invasive cell activity tracking
WO2014134550A1 (en) * 2013-02-28 2014-09-04 Auxogyn, Inc. Apparatus, method, and system for image-based human embryo cell classification
US9710696B2 (en) 2013-02-28 2017-07-18 Progyny, Inc. Apparatus, method, and system for image-based human embryo cell classification
US10942170B2 (en) 2014-03-20 2021-03-09 Ares Trading S.A. Quantitative measurement of human blastocyst and morula morphology developmental kinetics
WO2016134211A1 (en) * 2015-02-20 2016-08-25 President And Fellows Of Harvard College Structural phenotyping of myocytes
CN111492368A (en) * 2017-12-22 2020-08-04 Ventana Medical Systems, Inc. System and method for classifying cells in tissue images based on membrane characteristics
CN111492368B (en) * 2017-12-22 2024-03-05 Ventana Medical Systems, Inc. Systems and methods for classifying cells in tissue images based on membrane characteristics
CN113474813A (en) * 2019-02-01 2021-10-01 Essen Instruments, Inc. DBA Essen BioScience, Inc. Label-free cell segmentation using phase contrast and bright field imaging
WO2021067797A1 (en) * 2019-10-04 2021-04-08 New York Stem Cell Foundation, Inc. Imaging system and method of use thereof
EP4038177A4 (en) * 2019-10-04 2024-01-24 New York Stem Cell Found Inc Imaging system and method of use thereof

Similar Documents

Publication Publication Date Title
WO2009137866A1 (en) Method and system for automated cell function classification
JP4387201B2 (en) System and method for automated color segmentation and minimal significant response for measurement of fractional localized intensity of intracellular compartments
US9057701B2 (en) System and methods for rapid and automated screening of cells
Schulze et al. PlanktoVision-an automated analysis system for the identification of phytoplankton
US8064678B2 (en) Automated detection of cell colonies and coverslip detection using hough transforms
US20170052106A1 (en) Method for label-free image cytometry
Colin et al. Quantitative 3D-imaging for cell biology and ecology of environmental microbial eukaryotes
US20100135566A1 (en) Analysis and classification, in particular of biological or biochemical objects, on the basis of time-lapse images, applicable in cytometric time-lapse cell analysis in image-based cytometry
WO2017150194A1 (en) Image processing device, image processing method, and program
Amat et al. Towards comprehensive cell lineage reconstructions in complex organisms using light‐sheet microscopy
Marée The need for careful data collection for pattern recognition in digital pathology
Delpiano et al. Automated detection of fluorescent cells in in‐resin fluorescence sections for integrated light and electron microscopy
CN112996900A (en) Cell sorting device and method
Viana et al. Robust integrated intracellular organization of the human iPS cell: where, how much, and how variable
US20240095910A1 (en) Plaque detection method for imaging of cells
Rychtáriková et al. Super-resolved 3-D imaging of live cells’ organelles from bright-field photon transmission micrographs
Niederlein et al. Image analysis in high content screening
Caldas et al. iSBatch: a batch-processing platform for data analysis and exploration of live-cell single-molecule microscopy images and other hierarchical datasets
Bearer Overview of image analysis, image importing, and image processing using freeware
US20240118527A1 (en) Fluorescence microscopy for a plurality of samples
Krappe et al. Automated classification of bone marrow cells in microscopic images for diagnosis of leukemia: a comparison of two classification schemes with respect to the segmentation quality
Matula et al. Acquiarium: free software for the acquisition and analysis of 3D images of cells in fluorescence microscopy
Culley et al. Made to measure: An introduction to quantifying microscopy data in the life sciences
Culley et al. Made to measure: an introduction to quantification in microscopy data
Chou et al. Fast and Accurate Cell Tracking: a real-time cell segmentation and tracking algorithm to instantly export quantifiable cellular characteristics from large scale image data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 09745288
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 09745288
Country of ref document: EP
Kind code of ref document: A1