WO2016207092A1 - System and method for generating an ultrasonic image - Google Patents

System and method for generating an ultrasonic image

Info

Publication number
WO2016207092A1
WO2016207092A1 (PCT application PCT/EP2016/064125)
Authority
WO
WIPO (PCT)
Prior art keywords
data
acoustic
image
interest
acoustic data
Application number
PCT/EP2016/064125
Other languages
French (fr)
Inventor
Julian Charles Nolan
Matthew John LAWRENSON
Original Assignee
Koninklijke Philips N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Koninklijke Philips N.V. filed Critical Koninklijke Philips N.V.
Publication of WO2016207092A1 publication Critical patent/WO2016207092A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/44 Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
    • A61B8/4477 Constructional features of the ultrasonic, sonic or infrasonic diagnostic device using several separate ultrasound transducers or probes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B7/00 Instruments for auscultation
    • A61B7/02 Stethoscopes
    • A61B7/026 Stethoscopes comprising more than one sound collector
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B8/5238 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5292 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves using additional data, e.g. patient information, image labeling, acquisition parameters
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B8/5238 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
    • A61B8/5246 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from the same or different imaging techniques, e.g. color Doppler and B-mode

Definitions

  • the invention relates to an ultrasonic imaging system comprising a first receiving unit for receiving image data from a region of interest.
  • the invention further relates to an ultrasound probe for use in an ultrasonic imaging system according to the present invention, the ultrasound probe comprising an ultrasonic transducer for receiving image data of a region of interest.
  • the invention also relates to a method comprising the steps of receiving image data from the region of interest.
  • Ultrasonography is commonly used for generating an image so as to visualize internal structures of the human body, such as tendons, muscles, joints, vessels and internal organs.
  • This imaging technique has a number of known advantages, such as providing a real-time image enabling a healthcare professional, or any other person, to see a representation of the internal structure as it is scanned.
  • Ultrasonic imaging is commonly generated in three steps, which may be summarized as i) producing a sound wave (for instance via a piezoelectric transducer), ii) receiving the echoes of the produced sound wave, and iii) forming the image based on each received echo.
  • the image is based on the time elapsed between the generated sound wave and the received echo and on the strength of said received echo. From a plurality of received echoes, the location of the pixel(s) (or picture elements) of the image can be determined.
  • a system for use in imaging a subject and determining a position of the imaging probe relative to a body of said subject is known from US 6,678,545 B2.
  • Said system comprises an imaging probe configured to scan the subject and provide scan images of the subject.
  • An array of receivers is in communication with the base and the imaging probe.
  • a first plurality of reference points is fixed in relation to the base and in communication with the array.
  • a second plurality of reference points is fixed in relation to the imaging probe and in communication with the array.
  • a processor in communication with the imaging probe and the array calculates the position in the scan images corresponding to the position of the imaging probe relative to the subject.
  • US 8372006 B discloses a method for detecting and locating a target using phase information obtained from an array of microphones or other sensors.
  • the invention therein disclosed is based on the introduction of a device that includes a transmitting and stimulating ultrasound transducer (probe) and a multiplicity of sensors at given locations around or adjacent to a human breast.
  • An ultrasound transducer generates certain stimulating signals which are transmitted to the breast and which, in the presence of a microcalcification or other target, will result in reflected, demodulated, reradiated and scattered signals. These signals will travel away from the microcalcification and toward the location or locations where the various sensors are located.
  • this object is realized by an ultrasonic imaging system as defined in the opening paragraph characterized in that the ultrasonic imaging system further comprises a second receiving unit for receiving an acoustic signal of a frequency within the human hearing range from an acoustic source and generating acoustic data from said acoustic signal, wherein said acoustic source is located within a region of interest, a processing unit configured to determine the location of the acoustic source of the acoustic data relative to the region of interest, thereby generating located acoustic data, and wherein the processing unit is further configured to register the image data and the located acoustic data so as to generate the enriched ultrasound data based on the image data and the located acoustic data.
  • the invention is advantageous in that it enables generation of enriched ultrasound data based on located acoustic data and image data; an association between the received acoustic data and the received image data is registered in such enriched ultrasound data, therefore enabling a user (such as a healthcare professional, a physician, a caregiver) to perceive an internal structure (for instance an organ, a blood vessel, or a tissue) by at least two senses (sight and hearing).
  • a superposition of image data and acoustic data enables a further assessment of the internal structure of a subject (for instance a patient), such that more information of potential relevance is provided to the healthcare user so as to strengthen his/her assessment of the condition of such internal structure (for instance for diagnostic purposes).
  • the invention is further advantageous in that it enables a faster scanning time of the region of interest by an ultrasonic imaging system.
  • the invention speeds the analysis process of an ultrasonic imaging system as a region of interest or a feature of interest may be quickly identified based on its acoustic emissions, which are captured by the second receiving unit.
  • region of interest or a feature of interest may be automatically identified by a processing unit. Localization of a region of interest or area of interest within the ultrasound image is therefore eased, and the image interpretation by the healthcare user is improved.
  • the invention is further advantageous in that it enables an ultrasonic imaging system which is easier to use, allowing lay users, users undergoing training, users with limited skills, less experienced healthcare professionals, or any person or robot who would benefit from ease of use, to operate an ultrasonic imaging system so that conclusion(s), assessment(s) or other information are generated.
  • the invention is further advantageous in that it enables the ultrasonic imaging system parameters (for instance the frequency, the mode) to be modified, changed, or optimized based on the received acoustic data, so as to ensure that the scanned image represents data relevant to the issue highlighted by acoustic analysis. Consequently, the present invention enables optimization of the tradeoff between spatial resolution of the image and imaging depth into the body.
  • the processing unit is further configured to generate an ultrasonic image based on the enriched ultrasound data.
  • This embodiment is advantageous in that it enables visual representation, when presented on a display means, of the image data captured by the first receiving unit.
  • the second receiving unit of the ultrasonic imaging system comprises an array of microphones.
  • an array of microphones (or microphone array) comprises a plurality of microphones (i.e. two or more) that are distributed over a certain space and are operating in tandem.
  • Said embodiment is advantageous in that it enables adequate location of the acoustic source of the acoustic signal (therefore the acoustic data) following the processing of the received acoustic data. For instance, based on a fixed physical relationship in space between the different individual microphone transducer array elements, simultaneous digital signal processing of the received signals from each of the individual microphone array elements by a suitable algorithm can create one or more virtual microphones so that further processing is made possible.
  • the first receiving unit of the ultrasonic imaging system comprises an ultrasound probe.
  • Said embodiment is advantageous in that it enables digital capture of image data, therefore obviating the need of any converting means, such as an Analogue-to-Digital converter (ADC).
  • This embodiment is further advantageous in that it enables the ultrasonic imaging system according to the present invention to use elements, and/or features, and/or aspect of known ultrasonic imaging systems, thereby limiting modifications for upgrading towards an ultrasonic imaging system according to the present invention.
  • the processing unit of the ultrasonic imaging system further comprises a beamforming (or spatial filtering) algorithm configured to process the acoustic data so as to determine the location of the acoustic source relative to the region of interest.
  • beamforming is namely achieved by combining elements in a phased array (for instance received from the array of microphones) in such a way that signals at particular angles experience constructive interference while others experience destructive interference.
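  • The phased-array combination described above can be sketched as a simple delay-and-sum beamformer. The following is an illustrative sketch only; the array geometry (four microphones, 5 cm spacing), sampling rate, source angle and signal are assumptions for demonstration, not taken from the present disclosure.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, angle, c=343.0, fs=48000):
    """Steer a linear microphone array toward `angle` (radians):
    delay each channel so a plane wave from that direction adds
    constructively, then average the channels."""
    delays = mic_positions * np.sin(angle) / c      # per-microphone delay, s
    shifts = np.round(delays * fs).astype(int)      # delay in whole samples
    out = np.zeros(signals.shape[1])
    for channel, s in zip(signals, shifts):
        out += np.roll(channel, -s)                 # advance channel by its delay
    return out / len(signals)

# Illustrative 4-microphone linear array, 5 cm spacing, 1 kHz tone from 30 deg
fs, f0 = 48000, 1000.0
mics = np.arange(4) * 0.05
t = np.arange(1024) / fs
true_delays = mics * np.sin(np.radians(30)) / 343.0
signals = np.array([np.sin(2 * np.pi * f0 * (t - d)) for d in true_delays])

# Output power is high when the steering angle matches the source direction
# (constructive interference) and low elsewhere (destructive interference).
on_target = np.mean(delay_and_sum(signals, mics, np.radians(30)) ** 2)
off_target = np.mean(delay_and_sum(signals, mics, np.radians(-60)) ** 2)
```

  • Scanning the steering angle and selecting the maximum-power direction is one simple way such an algorithm can point at the acoustic source.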
  • Acoustic data received by a microphone array may be digitalized (for instance by an Analogue-to-Digital converter (ADC) or any other acoustic processing technique), enabling conception of digital virtual microphone(s).
  • From such digital virtual microphone(s), 2D or 3D space coordinates can be derived (for instance Cartesian coordinates) such that location of the acoustic source is made possible.
  • the processing unit of the ultrasonic imaging system further comprises an image segmentation algorithm such that one or more features of interest in the enriched ultrasound data are identified.
  • Image segmentation algorithms suitable for the present invention are, for instance, a shape-constrained deformable model and/or an active shape model. This embodiment is advantageous in that it enables a region of interest from image data to be identified; in particular, a contour of the region of interest is enforced (for instance highlighted, or otherwise made prominent when displayed), thereby allowing easier identification of an arrangement or an element of the imaged structure (for instance an organ, a blood vessel, or a tissue) when visualized.
  • the ultrasonic imaging system further comprises a display configured to display an ultrasonic image based on enriched ultrasound data, and an audio generator (such as a loudspeaker) configured to generate an audible signal from the acoustic data and/or the located acoustic data registered with the ultrasonic image and/or the one or more feature of interest when one or more picture elements of the ultrasonic image are selected.
  • Said embodiment is advantageous in that it enables the healthcare user, or any person, to hear the sound (or a frequency associated with this sound) as captured by the second receiving unit (alternatively by an array of microphones) following selection or identification of one or more picture elements (a pixel, a group of pixels, or a frame of the image) within the ultrasonic image.
  • the sound generated by said internal structure may be made audible (for instance played by a loudspeaker). Consequently, said embodiment enables, for instance, a better assessment of said internal structure.
  • the ultrasonic imaging system further comprises a first memory for storing at least one of the acoustic data and/or the located acoustic data so that the audible signal can be replayed at a later moment, or the located acoustic data re-registered with further image data generated at another later moment.
  • Said embodiment is advantageous in that the captured acoustic data (or located acoustic data) may be replayed at a future consultation such as, for instance, to be compared with the newly gathered acoustic data (or located acoustic data).
  • a modification, a change, or a difference of the acoustic data generated by the internal structure may be assessed, such as to help, support or otherwise provide valuable information to the user in his or her assessment, for instance an improvement following an intervention, or degradation of an existing condition monitored over time.
  • the ultrasonic imaging system further comprises a second memory for storing a reference acoustic signal for the region of interest, or the feature of interest, wherein the processing unit is further configured to compare the acoustic data and/or the located acoustic data with the reference acoustic signal.
  • a reference acoustic signal may be, for instance, an acoustic signal which has been scientifically demonstrated and validated in a so- called normal range, (for instance ground truth, or golden standard).
  • Such embodiment enables quicker assessment, diagnostic or conclusion of the situation of the internal structure (for instance an organ, or a blood vessel, or a tissue) from which the acoustic data and image data are captured.
  • the display is further configured to display the one or more picture elements and/or the ultrasonic image at an improved resolution when the located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of the stored reference acoustic signal; and/or display the one or more picture elements and/or the ultrasonic image at an increased size when located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of the stored reference acoustic signal.
  • Said embodiment is advantageous in that it enables emphasis of the representation of the image data associated with the acoustic data.
  • this embodiment may provide information on whether an internal structure that appears abnormal is in fact functioning normally, thereby alleviating the need for further examination by the healthcare professional, such as minimally invasive surgery, or invasive surgery.
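  • The comparison against a stored reference acoustic signal described in this embodiment can be sketched as follows. This is a minimal illustration under assumptions of our own: the acoustic data registered with each picture element is reduced to a single normalized level, and the reference range is an arbitrary placeholder.

```python
# Hypothetical reference range for "normal" acoustic levels (placeholder values)
REFERENCE_RANGE = (0.2, 0.8)

def flag_abnormal(pixels):
    """pixels: iterable of (x, y, acoustic_level) tuples from the enriched
    ultrasound data. Returns the coordinates whose registered acoustic
    level falls outside the stored reference range, i.e. the picture
    elements the display could render at improved resolution or size."""
    lo, hi = REFERENCE_RANGE
    return [(x, y) for x, y, level in pixels if not (lo <= level <= hi)]

enriched = [(10, 12, 0.5), (10, 13, 0.95), (11, 12, 0.1)]
abnormal = flag_abnormal(enriched)   # → [(10, 13), (11, 12)]
```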
  • an ultrasound probe as defined in the opening paragraph, configured for receiving acoustic data of a frequency within the human hearing range and ultrasound data, characterized in that the ultrasound probe further comprises an array of microphones for receiving acoustic data from an acoustic source, wherein said acoustic source is located within a region of interest.
  • Said embodiment is advantageous in that it enables reception of the acoustic data and the image data from one device, or apparatus.
  • the use of one device, compared to a plurality of devices, enables easier manipulation by a user (such as a healthcare professional, a physician, a caregiver), which consequently provides for a more robust system for generating an ultrasound image registered with acoustic data.
  • this embodiment enables diminution of the processing needs of data, thereby enabling faster processing time.
  • manipulation of the system according to the present invention by unskilled users, or any user who has not received extensive training, is made possible.
  • the foregoing object is realized by a method characterized in receiving acoustic data of a frequency within human hearing range from an acoustic source, wherein said acoustic source is located within a region of interest, processing the acoustic data to determine the location of the acoustic source of the acoustic data relative to the region of interest, thereby generating located acoustic data, registering the image data and the located acoustic data, and generating the enriched ultrasound data based on the registered image data and the located acoustic data.
  • the method according to the present invention is advantageous for analogous reasons as the corresponding features of the system according to present invention.
  • the method is advantageous in that it enables superposition of acoustic data and captured image data from a source, said source being within a region of interest, which is for instance an internal structure (for instance an organ, a blood vessel, or a tissue).
  • the latter advantage enables a user (such as a healthcare professional, a physician, a caregiver) to see a representation of the region of interest and hear the sound (acoustic signal) generated by the structure, or otherwise a source within said region of interest.
  • the method further comprises displaying an ultrasonic image based on the enriched ultrasound data, and generating an audible signal from the acoustic data and/or the located acoustic data registered with the ultrasonic image and/or the one or more feature of interest when one or more picture elements of the ultrasonic image are selected.
  • Said embodiment is advantageous for analogous reasons as the corresponding embodiment of the system according to the present invention.
  • the method further comprises displaying the one or more picture elements and/or the ultrasonic image at an improved resolution when the located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of the stored reference acoustic signal, and/or displaying the one or more picture elements and/or the ultrasonic image at an increased size when located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of the stored reference acoustic signal.
  • Said embodiment is advantageous for analogous reasons as the corresponding embodiment of the system according to the present invention.
  • Fig.1 schematically represents an embodiment of the ultrasonic system according to the present invention.
  • Fig.2 schematically represents another embodiment of the ultrasonic system according to the present invention.
  • Fig.3a schematically represents an embodiment of the device for use in a system according to the present invention.
  • Fig.3b schematically represents an alternative embodiment of the device for use in a system according to the present invention.
  • Fig.4a graphically represents the image data received according to the present invention.
  • Fig.4b graphically represents the located acoustic data generated according to the present invention.
  • Fig.4c graphically represents the registration of the image data and the located acoustic data according to the present invention.
  • Fig.5 represents an embodiment of the ultrasonic image displayed by an ultrasonic system according to the present invention.
  • Fig.6 schematically represents an embodiment of the method according to the present invention.
  • a region of interest may be an internal structure (for instance an organ, or a blood vessel, or a tissue), or part of such internal structure. Additionally or alternatively, a region of interest may comprise a plurality of internal structures, or a plurality of parts of such plurality of internal structures.
  • a feature of interest corresponds to an image of a part of an internal structure (or a plurality of internal structures), which may have been processed by an imaging segmentation algorithm for example.
  • a region of interest may be the aortic valve in the heart of a subject, where the feature of interest is the representation of the posterior cusp.
  • a microphone is a device that converts an acoustic signal into an electrical signal.
  • a microphone array comprises solely omnidirectional (or non-directional) microphones, alternatively it comprises solely unidirectional (or directional) microphones, alternatively it comprises a mixture of omnidirectional and unidirectional microphones distributed over a perimeter of a given space.
  • a microphone array may consist of two (2) or more microphones, where examples of the number of microphones could be four (4) microphones, alternatively six (6) microphones, alternatively nine (9) microphones.
  • the microphone array comprises, as an example, eight (8) microphones (further detailed hereunder under Figure 5).
  • a planar microphone array comprises all microphones in one single plane.
  • the person skilled in the art will know that a planar array provides a large aperture and may be used for directional beam control by varying the relative phase of each element.
  • a 3D microphone array comprises all microphones in a three-dimensional space. Such microphone arrays are usually arranged in, but not limited to, a spherical configuration relative to each other.
  • An advantage of a 3D microphone array resides in an increased accuracy of sound pressure levels and a more accurate (virtual) positioning of sound sources (for instance the region of interest).
  • a 3D microphone array enables capture of an increased amount of the sound emitted by the region of interest, increasing the sensitivity relative to a planar microphone array. Additionally or alternatively, a 3D array enables limitation of the air gap (or air pocket) between the body surface and the microphones, thereby increasing the quality of the sound capture.
  • an ultrasonic transducer is a device capable of converting energy into an ultrasound wave.
  • An ultrasonic transducer may comprise a piezoelectric element (such as a crystal, alternatively a plurality of crystals) capable of changing dimension when a voltage is applied.
  • alternatively, it may comprise a magnetostrictive material, as such a material changes size in response to a magnetic field.
  • a capacitor microphone using a plate which moves in response to the ultrasound wave may be used.
  • the system may comprise a Capacitive Micromachined Ultrasonic Transducer (CMUT) transducer cell.
  • When receiving ultrasound waves, the ultrasound waves cause the membrane to move or vibrate and change the capacitance between the electrodes, which can be detected. Thereby the ultrasound waves are transformed into a corresponding electrical signal. Conversely, an electrical signal applied to the electrodes causes the membrane to move or vibrate, thereby transmitting ultrasound waves.
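  • The CMUT detection principle can be illustrated with the parallel-plate capacitor approximation: membrane motion changes the electrode gap and hence the capacitance. The cell dimensions below are illustrative assumptions, not values from the present disclosure.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m):
    """Parallel-plate approximation C = eps0 * A / d for a CMUT cell."""
    return EPS0 * area_m2 / gap_m

# Illustrative cell: 50 um x 50 um membrane, 100 nm nominal gap
area = 50e-6 * 50e-6
c_rest = plate_capacitance(area, 100e-9)       # capacitance at rest, ~0.22 pF
c_deflected = plate_capacitance(area, 90e-9)   # membrane pushed 10 nm closer
delta = c_deflected - c_rest                   # change the electronics detect
```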
  • Fig.1 schematically represents an ultrasonic system 100 according to the present invention.
  • Said ultrasonic system 100 comprises a second receiving unit 110 configured to receive an acoustic signal (for instance sound waves) from an acoustic source within a region of interest 150 and to generate acoustic data from said acoustic signal.
  • Said region of interest 150 may be an organ, alternatively may be a tissue, alternatively may be a blood vessel, alternatively may be a region of an organ, alternatively may be a region of a tissue, alternatively may be a region of a blood vessel, or alternatively any structure that emits an acoustic wave.
  • heart murmurs could be captured by the second receiving unit.
  • numerous other structures of the human or animal body emit an acoustic signal (e.g. a noise, a sub audible signal, an ultrasonic signal), such as for example the kidneys, the liver, or tissues such as the aorta, or a tissue such as a ventricular valve.
  • said region of interest 150 may correspond to one or more internal structures, within the human or animal body.
  • a region of interest 150 may be, as mentioned above, a region of a structure, for instance an organ, but could also represent a plurality of structures, for instance the origin area of the aorta at the periphery of the left ventricle of the heart. Additionally or alternatively, said region of interest could represent the origin of the urethra at the periphery of the urinary bladder. Additionally or alternatively, the region of interest may represent a region covering the liver, the gallbladder, the pancreas and the stomach.
  • the ultrasonic system of Fig.1 further comprises a first receiving unit 120 (for instance an ultrasonic transducer) for receiving image data from the region of interest 150.
  • image data may be generated by any modality configured to obtain an image of an internal structure (for instance an organ, a blood vessel, or a tissue) of a subject 160 (for instance a patient).
  • image data may be gathered by alternative modalities, such as for example computed tomography (CT), magnetic resonance imaging (MRI), or positron emission tomography (PET).
  • the image data are digitalized, such that a map within space coordinates (for instance a 2D representation, or a 3D representation) of the imaged region of interest 150 is generated following treatment via one or more adequate image processing techniques, for instance a shape-constrained deformable model or an active shape model.
  • the ultrasonic system 100 further comprises a processing unit 130.
  • Said processing unit 130 may be, as illustrated in Fig. 1, a single unit, but may alternatively be a plurality of units. Additionally or alternatively, the processing unit 130 may be at the same location as the second receiving unit 110 and the first receiving unit 120, but may alternatively be at a remote location, within a server room for instance, or alternatively in the cloud.
  • the processing unit 130 may host means, such as ultrasound image processing means.
  • Such processed image data capable of being represented on a display means are also called ultrasonic images, which may be generated, for instance, from A-Mode data, additionally or alternatively from B-Mode data.
  • the processing unit 130 is configured to host at least one signal processing algorithm configured to process the acoustic data.
  • Said signal processing algorithm may receive digitalized audio data.
  • the audio data may need to be pre-processed by an Analogue-to- Digital converter (ADC) (not shown).
  • the microphone array may output a digital acoustic signal, such that such signal is ready to be processed by the processing unit 130.
  • the signal processing algorithm may be a beamforming algorithm.
  • a beamforming algorithm comprises computer-readable code that, when executed on a suitable computer (for instance a PC, a tablet, or a mobile phone), enables such computer to process acoustic data (for instance audio data) to determine (or assess, or locate) the source of such acoustic data. This is possible when acoustic data are received from a plurality of microphones (for instance microphone arrays) at known positions relative to each other (a fixed physical distance between each of the individual microphones) such that a virtual digital map of the received sound is created.
  • location of the source of the acoustic signal may be determined with accuracy following digitalization of the acoustic signal (further detailed hereunder under Fig.4).
  • the task of locating a sound source may be seen as the process of acoustic source localization.
  • Different methods are known to achieve such a task, for instance the so-called "particle velocity or intensity vector", "time difference of arrival" and "triangulation" methods.
  • the signal processing algorithm is based on the steered beamformer approach so as to identify the location of the source of the acoustic data.
  • Said method makes use of an array of microphones combined with a steered beamformer which is enhanced by the Reliability Weighted Phase Transform (RWPHAT).
  • the signal processing algorithm is based on a collocated microphone array approach so as to identify the location of the source of the acoustic data.
  • Said approach enables real-time sound localization using a collocated array named an Acoustic Vector Sensor (AVS) array.
  • the output is represented by a horizontal angle and a vertical angle of the sound sources, which are found by the peaks in the combined 3D spatial spectrum.
  • the signal processing algorithm is based on time delay estimation (TDE) methods. Said methods use the fact that the sound reaches the microphones at slightly different times. The delays are computed using a cross-correlation function between the signals from different microphones.
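The TDE approach described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes two microphone signals at a known sample rate and estimates the inter-microphone delay from the peak of their cross-correlation.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Estimate the time delay (in seconds) of sig_b relative to sig_a
    via the peak of their cross-correlation (a basic TDE method)."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    # Shift the peak index so that lag 0 sits at the centre of the output.
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs

# Toy check: a Gaussian pulse delayed by 25 samples at fs = 1000 Hz -> 0.025 s.
fs = 1000
t = np.arange(256)
pulse = np.exp(-0.5 * ((t - 50) / 5.0) ** 2)    # pulse centred at sample 50
delayed = np.exp(-0.5 * ((t - 75) / 5.0) ** 2)  # same pulse, 25 samples later
print(estimate_delay(pulse, delayed, fs))       # -> 0.025
```

In a full localization system, delays estimated pairwise in this way would feed a geometric solver (e.g. triangulation) to recover the source position.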
  • any signal processing algorithm suitable for the present invention may be based on any other techniques that would be known to the skilled person such that the location of the acoustic source of the acoustic data is identified.
  • located acoustic data are generated by combining the acoustic data and the acoustic source location in a 2D Cartesian plane (for instance X, Y), or alternatively in a 3D Cartesian space (for instance X, Y, Z), so that the located acoustic data may be further processed, as will be further detailed hereunder.
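One way to picture "located acoustic data" is as acoustic samples tagged with the Cartesian coordinates of their estimated source. The sketch below is purely illustrative; the class and field names are assumptions, not taken from the patent text.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LocatedAcousticData:
    """Acoustic data combined with the estimated source position.

    `position` holds a 2D (x, y) or 3D (x, y, z) Cartesian coordinate.
    """
    samples: tuple        # digitized acoustic samples
    sample_rate_hz: int
    position: Tuple[float, ...]

    @property
    def is_3d(self):
        # True when the source position carries a Z coordinate.
        return len(self.position) == 3

d = LocatedAcousticData(samples=(0.0, 0.1, -0.1), sample_rate_hz=8000,
                        position=(1.2, 3.4, 0.5))
print(d.is_3d)   # -> True
```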
  • the acoustic data can be filtered and localized following creation of an acoustic intensity map highlighting different sound signals, which are segregated based on their source locations.
  • a sound propagation model can be generated. Accordingly, an adaptive beamforming algorithm can be selected to compensate for the sound speed delay through a biological structure (for instance an organ, or a blood vessel, or a tissue).
  • the processing unit 130 is further configured to register the located acoustic data and the image data (both data sets being in a digital format) so as to generate enriched ultrasound data from which an ultrasonic image can be displayed on a display 140.
  • This registration may be achieved via an image registration process such that the different sets of data (i.e. located acoustic data and image data) are transformed into one coordinate system (either 2D or 3D).
  • the registration and/or the spatial correspondence of such acoustic data and such image data could, for instance, be established via a phantom (including structures visible in ultrasound and something generating noise, possibly at different positions) with state of the art geometry and a calibration process.
  • a registration algorithm based on known registration methods and transformation models (for instance a linear transformation, or a non-rigid transformation) would enable the registration of the located acoustic data and the image data so that the enriched ultrasound data is generated.
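For the linear (rigid) case, registration amounts to applying a known rotation and translation that maps located-acoustic-data coordinates into the image coordinate system. The sketch below assumes that transform is already known, e.g. from a phantom-based calibration as mentioned above; it is an illustration, not the patented method.

```python
import numpy as np

def register_points(points, rotation, translation):
    """Map located-acoustic-data coordinates into the image coordinate
    system with a known rigid transform (rotation matrix + translation).
    In practice the transform would come from a calibration step."""
    points = np.asarray(points, dtype=float)
    # Row-vector convention: p' = p @ R^T + t  is equivalent to  R p + t.
    return points @ np.asarray(rotation).T + np.asarray(translation)

# A 90-degree rotation about Z plus a shift, applied to one 3D point.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([10.0, 0.0, 0.0])
print(register_points([[1.0, 0.0, 0.0]], R, t))   # a single point mapped to (10, 1, 0)
```

A non-rigid transformation would replace the matrix product with a spatially varying deformation field, but the principle of bringing both data sets into one coordinate system is the same.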
  • the processing unit 130 may also host an image segmentation algorithm configured to process the image data.
  • Said image segmentation algorithm may comprise one or more mathematical models, for instance a shape-constrained deformable model, or an active shape model, such that a feature of interest (not shown) is identified in the enriched ultrasound data, or alternatively in the ultrasonic image (alternatively in the image data); more particularly, a contour of the feature of interest is enforced, said contour being the schematic visual representation of the contour of the region of interest 150.
  • the ultrasonic system 100 may further comprise a display 140 for representing the region of interest 150 in a manner that is intelligible to a person.
  • the ultrasonic system 100 may further comprise an acoustic generator 170 (for instance a loudspeaker) configured to generate an audible signal. Additionally, the ultrasonic system 100 may comprise more than one acoustic generator 170 (for instance a loudspeaker).
  • said audible signal is generated from the acoustic data (additionally or alternatively the located acoustic data) registered with the ultrasonic image and/or the one or more features of interest when one or more picture elements of the ultrasonic image are selected, as will be further detailed in Fig.5.
  • Fig.2 schematically represents another embodiment of the ultrasonic system 200 according to the present invention.
  • the first receiving unit 220 and the second receiving unit 210 are embedded within a single unit 290 (for instance an ultrasound probe).
  • Embodiments of such single unit 290 will be further detailed in Fig.5.
  • the person skilled in the art will understand that such an embodiment limits the number of devices in contact with the subject (for instance the patient).
  • such embodiment has the benefit that the relative position of microphone array and the ultrasound transducer is constant.
  • One of the advantages of such constant relative position is the limitation of potential mistakes in determining the position of the second receiving unit relative to the first receiving unit, which permits registration as detailed above.
  • Exemplary embodiments of the single unit 290 (for instance an ultrasound probe) will be further detailed with the help of Fig.3.
  • Fig.3a schematically represents an exemplary embodiment of an ultrasound probe 370 for use in an ultrasonic imaging system, for instance according to the present invention.
  • Said ultrasound probe 370 comprises a plurality of microphones arranged so as to create a microphone array 310, and an ultrasound transducer 320 (for instance a piezoelectric element, or a CMUT transducer, as defined above).
  • the ultrasound probe further comprises a means 373 for connection with an ultrasonic imaging system, said means 373 being either a wire (so as to transmit an electric or optical signal), but may also be a wireless means, such as Bluetooth, Wi-Fi or otherwise any wireless element arranged to transmit a signal (for instance data) from the probe to an ultrasonic imaging system, or alternatively to a server or a cloud, so that such wirelessly transmitted data can be used by an ultrasonic imaging system.
  • the emitting frequency of the ultrasound transducer 320 is chosen depending on the required image resolution and required wave penetration. For instance, a low resolution and good penetration transducer is usually within the frequency range of 1 MHz to 10 MHz (megahertz). For a high resolution and low penetration transducer, the frequency range may be between 10 MHz and 20 MHz.
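The resolution-versus-penetration tradeoff above can be written down as a simple selection rule. The function below is only an illustrative sketch; it reuses the frequency bands quoted in the text but is not part of the patent.

```python
def choose_transducer_band(need_high_resolution: bool) -> tuple:
    """Illustrative frequency-band choice following the tradeoff above:
    lower frequencies penetrate deeper, higher frequencies resolve finer
    detail. Bands (in Hz) mirror the ranges quoted in the text."""
    if need_high_resolution:
        return (10e6, 20e6)   # high resolution, low penetration
    return (1e6, 10e6)        # low resolution, good penetration

print(choose_transducer_band(False))  # -> (1000000.0, 10000000.0)
```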
  • Audio data (or sound) generated by a structure varies in frequency.
  • For instance, heart sound S2 (i.e. closure of the semilunar valves) has a component within 10 Hz to 400 Hz.
  • each microphone constituting the microphone array 310 may be arranged so as to form a circle, or alternatively a square, or otherwise any polygonal form such that it surrounds the ultrasound transducer 320.
  • the ultrasound transducer 320 is positioned at the center, or near the center of the circle (the center otherwise named origin), or alternatively the square, or alternatively any polygonal form formed by each of the microphones constituting the microphone array 310.
  • each microphone constituting the microphone array 310 is arranged so as to form a circle (for instance where the microphones are equidistant from each other, or alternatively where the distance between the microphones is not uniform) around the ultrasound transducer 320, but the person skilled in the art will find numerous embodiments wherein the ultrasound transducer 320 is surrounded by such a microphone array 310.
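The equidistant circular layout described above can be computed directly. This is a geometric sketch only, with the transducer assumed at the origin; the function name is illustrative.

```python
import math

def circular_array_positions(n_mics, radius):
    """(x, y) positions of n_mics microphones equally spaced on a circle
    centred on the ultrasound transducer at the origin (one of the
    layouts described above)."""
    return [(radius * math.cos(2 * math.pi * k / n_mics),
             radius * math.sin(2 * math.pi * k / n_mics))
            for k in range(n_mics)]

pts = circular_array_positions(8, 5.0)
# Every microphone sits at the same distance from the transducer:
print(all(abs(math.hypot(x, y) - 5.0) < 1e-9 for x, y in pts))  # -> True
```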
  • Fig.3b schematically represents a further exemplary embodiment of an ultrasound probe 370 for use in an ultrasonic imaging system according to the present invention.
  • the exemplary embodiment of Fig.3b differs from the one of Fig.3a in the positioning of the microphone array 310 relative to the ultrasound transducer 320.
  • each microphone constituting the microphone array 310 is positioned in a plane, said plane being adjacent to a further plane formed by the ultrasound transducer 320.
  • a microphone array 310 for use in the present invention may consist of 3 microphones, or alternatively 5, 10, 20, or 50 microphones.
  • each microphone constituting the microphone array 310 must be separated from each other by a minimal distance (for instance 0.5 mm, 1 mm, or 1.4 mm) and positioned such that they can receive an acoustic signal from an internal structure (for instance an organ, or a blood vessel, or a tissue) of the subject (for instance the patient), where the maximal number of microphones will be limited by the shape and the surface of the ultrasound probe 370.
  • Fig.4a represents the 3D outcome of the image data from the region of interest as captured by the first receiving unit, for instance an ultrasound transducer.
  • the image data are in a digital format and, following capture, may be represented, for instance, in a 2D Cartesian plane (within the X and Y axes) or a 3D Cartesian space (within the X, Y and Z axes).
  • a graph represents the image data, where each point of said region of interest is associated with coordinate data having a source in the region of interest.
  • Fig.4b represents the 3D outcome of the located acoustic data from the region of interest as captured by the second receiving unit, for instance an array of microphones.
  • the processing unit is configured to digitalize the analogue acoustic data captured by the second receiving unit and to process such digitalized acoustic data so as to generate located acoustic data.
  • Such located acoustic data may be represented, for instance, in a 2D Cartesian plane (within the X and Y axes) or a 3D Cartesian space (within the X, Y and Z axes).
  • a graph represents the located acoustic data, where each point of said region of interest is associated with coordinate data having a source in the region of interest.
  • Fig.4c graphically represents the 3D outcome of the registration of the image data and located acoustic data according to the present invention, thereby graphically representing the enriched ultrasound data as an ultrasonic image.
  • each coordinate of the ultrasonic image represents a point of the region of interest together with the acoustic data from said point of the region of interest.
  • a representation of the image data and the acoustic data of said point within the region of interest may be identified and represented to the user (visually and audibly).
  • each coordinate of the ultrasonic image may correspond to a picture element (for instance a pixel) of said ultrasonic image.
  • the selection of a picture element on an image may enable acoustic data associated with the image data selected to become audible to the user (such as a healthcare professional, a physician, a caregiver, a patient).
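A minimal way to picture this pixel-to-sound association is a lookup from picture-element coordinates to the registered acoustic samples. The data layout and names below are assumptions for illustration, not the patented structure.

```python
# Hypothetical enriched-ultrasound-data store: pixel coordinate ->
# (image intensity, registered acoustic samples).
enriched = {
    (12, 40): {"intensity": 0.83, "audio": [0.0, 0.2, -0.1]},
    (13, 40): {"intensity": 0.79, "audio": [0.1, 0.0, -0.2]},
}

def audio_for_pixel(enriched_data, pixel):
    """Return the acoustic samples registered with a selected pixel,
    or None when no acoustic data was registered at that coordinate."""
    entry = enriched_data.get(pixel)
    return entry["audio"] if entry else None

print(audio_for_pixel(enriched, (12, 40)))  # -> [0.0, 0.2, -0.1]
print(audio_for_pixel(enriched, (0, 0)))    # -> None
```

In the system described, the returned samples would then be sent to the loudspeaker so the user hears the sound associated with the selected picture element.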
  • Fig. 5 represents an ultrasonic image displayed on a display 440 (for instance a computer screen, for instance a user interface of a tablet or a mobile phone), where one or more picture elements and/or the ultrasonic image are displayed at an increased size 405.
  • an audio generator means 470, such as a loudspeaker, to make the acoustic data (or the located acoustic data) associated with one or more of the enhanced picture elements audible.
  • the image enhancement 405 of the region of interest, via one or more picture elements and/or the ultrasonic image, may be manually requested by the user (such as a healthcare professional, a physician, a caregiver). Additionally, or alternatively, such image enhancement 405 may be automatically displayed to the extent that the located acoustic data (or the acoustic data) from an area, or a point, of the region of interest is outside the range (below or above a given threshold) of a reference acoustic signal as found in a memory (for instance a database, a look-up table, or otherwise stored).
  • the ultrasonic image could have an image portion corresponding to one or more picture elements that is at an improved resolution when the located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of the reference acoustic signal.
  • said reference acoustic signal can be found in a memory (for instance a database, for instance a look-up table, or otherwise stored).
  • the reference acoustic signal accordingly can be taken from general guidelines, or alternatively from the subject's health record data following a scan at a moment where the subject (for instance the patient) was in a healthy condition, or alternatively from an average of a number of similar subjects (same age, same sex).
  • Such reference acoustic signal can be a precise signal, but should preferably comprise a range bordered by a minimum threshold and maximum threshold.
  • when the acoustic data, or located acoustic data, fall within such range, the system (for instance the processor) will assess them as being normal, or acceptable, or healthy.
  • when the acoustic data, or located acoustic data, fall outside such range, the system (for instance the processor) will assess them as being abnormal, or unacceptable, or unhealthy.
  • the one or more picture elements and/or the ultrasonic image will be displayed at either an improved resolution or at an increased size.
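The range-based assessment above reduces to a simple comparison against the minimum and maximum thresholds. The sketch below is illustrative only; the return values and the coupling to the display decision are assumptions.

```python
def assess_acoustic_level(value, ref_min, ref_max):
    """Compare a measured acoustic level against a reference range
    bordered by a minimum and a maximum threshold, as described above.
    Returns the assessment plus whether the display should emphasise
    (enlarge / sharpen) the corresponding picture elements."""
    normal = ref_min <= value <= ref_max
    return ("normal" if normal else "abnormal", not normal)

print(assess_acoustic_level(0.5, 0.2, 0.8))  # -> ('normal', False)
print(assess_acoustic_level(0.9, 0.2, 0.8))  # -> ('abnormal', True)
```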
  • Fig. 6 schematically represents a method according to the present invention.
  • step S1 corresponds to receiving image data from the region of interest. Said step is achieved, for instance, via a first receiving unit (for instance an ultrasound transducer).
  • Step S2 corresponds to receiving acoustic data from an acoustic source, wherein said acoustic source is located within a region of interest. Said step is achieved, for instance, via a second receiving unit (for instance an array of microphones).
  • Step S3 corresponds to processing the acoustic data to determine the location of the acoustic source of the acoustic data relative to the region of interest, thereby generating located acoustic data.
  • Said step is achieved, for instance, via a processor comprising one or more algorithms so that the location of the acoustic data is determined.
  • a beamforming algorithm, for example, can generate such located acoustic data when run on a suitable computer.
  • Step S4 corresponds to registering the image data and the located acoustic data. This step is achieved, for instance, via the processor, which further comprises one or more further algorithms configured to register the image data and the located acoustic data such that step S5 is made available.
  • Step S5 corresponds to generating enriched ultrasound data based on the registered image data and the located acoustic data.
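The method steps S1 to S5 can be sketched as a small pipeline. All callables below are placeholders standing in for the localization (S3) and registration (S4) algorithms; the names and data shapes are assumptions, not the patented implementation.

```python
def generate_enriched_ultrasound(image_data, acoustic_data, locate, register):
    """Sketch of steps S1-S5: the receiving steps (S1, S2) are assumed to
    have produced `image_data` and `acoustic_data`; `locate` stands in for
    the source-localization algorithm (S3) and `register` for the
    registration algorithm (S4); the return value is the enriched
    ultrasound data (S5)."""
    located = locate(acoustic_data)       # S3: generate located acoustic data
    return register(image_data, located)  # S4 + S5: register and enrich

# Toy stand-ins for the algorithms, just to exercise the flow:
enriched = generate_enriched_ultrasound(
    image_data={"pixels": [1, 2, 3]},
    acoustic_data=[0.1, 0.2],
    locate=lambda a: {"samples": a, "position": (0.0, 0.0)},
    register=lambda img, loc: {**img, "acoustic": loc},
)
print(sorted(enriched))  # -> ['acoustic', 'pixels']
```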
  • a scan of an internal organ is performed and image data are obtained.
  • acoustic data from the region are captured using a microphone array.
  • the location where the acoustic signals originated from is determined.
  • an acoustic map of the scanned region is created such that signals are isolated based on their source location.
  • Captured acoustic signals are analyzed and compared with a pre-identified sound (for example, a heart murmur) to detect a sound of interest.
  • the area which emits the "sound-of-interest" is classified as an "area-of-interest" for the "image-scan".
  • One or more image segmentation algorithms are used to detect an anatomical object or feature of interest (for example, an aortic valve) from the region of interest identified in the image data.


Abstract

An ultrasonic imaging system comprising: a first receiving unit for receiving image data from a region of interest; a second receiving unit for receiving an acoustic signal from an acoustic source and generating acoustic data from said acoustic signal, wherein said acoustic source is located within the region of interest; and a processing unit configured to determine the location of the acoustic source of the acoustic data relative to the region of interest, thereby generating located acoustic data; wherein the processing unit is further configured to register the image data and the located acoustic data so as to generate enriched ultrasound data based on the image data and the located acoustic data.

Description

System and method for generating an ultrasonic image
FIELD OF THE INVENTION
The invention relates to an ultrasonic imaging system comprising a first receiving unit for receiving image data from a region of interest.
The invention further relates to an ultrasound probe for use in an ultrasonic imaging system according to the present invention, the ultrasound probe comprising an ultrasonic transducer for receiving image data of a region of interest.
The invention also relates to a method comprising the steps of receiving image data from the region of interest.
BACKGROUND OF THE INVENTION
Ultrasonography is commonly used for generating an image so as to visualize internal structures of the human body, such as tendons, muscles, joints, vessels and internal organs. This imaging technique has a number of known advantages, such as providing a real-time image enabling a healthcare professional, or otherwise any person, to see a representation of the imaged internal structure without any lengthy processing delay.
Ultrasonic imaging is commonly generated in three steps which may be summarized as i) producing a sound wave (for instance via a piezoelectric transducer), ii) receiving the echoes of the produced sound wave, and iii) forming the image based on each received echo. In more detail, the image is based on the time elapsed between the generated sound wave and the received echo and on the strength of said received echo. From a plurality of received echoes, the location of the pixel(s) (or image element(s)) is determined, thereby allowing generation of an ultrasonic image. Following generation of the image (either a 2D image or a 3D image), said image is displayed for visualization, using for instance the DICOM standard.
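The elapsed-time relationship above can be made concrete: the depth of a reflector follows from the round-trip echo time and the speed of sound. The conventional soft-tissue value of 1540 m/s used below is a standard assumption, not a figure taken from this text.

```python
def echo_depth_m(round_trip_s, speed_of_sound=1540.0):
    """Depth of a reflector from the round-trip echo time: the wave
    travels to the reflector and back, hence the division by two.
    1540 m/s is the conventional soft-tissue value (an assumption,
    not taken from the text)."""
    return speed_of_sound * round_trip_s / 2.0

# An echo received 65 microseconds after emission:
print(echo_depth_m(65e-6))  # about 0.05 m (5 cm)
```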
A system for use in imaging a subject and determining a position of the imaging probe relative to a body of said subject is known from US 6,678,545 B2. Said system comprises an imaging probe configured to scan the subject and provide scan images of the subject. An array of receivers is in communication with the base and the imaging probe. A first plurality of reference points is fixed in relation to the base and in communication with the array. A second plurality of reference points is fixed in relation to the imaging probe and in communication with the array. A processor in communication with the imaging probe and the array calculates the position in the scan images corresponding to the position of the imaging probe relative to the subject.
A method for detecting and locating a target using phase information obtained from an array of microphones or other sensors is known from US 8,372,006 B. The invention therein disclosed is based on the introduction of a device that includes a transmitting and stimulating ultrasound transducer (probe) and a multiplicity of sensors at given locations around or adjacent to a human breast. An ultrasound transducer generates certain stimulating signals which are transmitted to the breast and which, in the presence of a microcalcification or other target, will result in reflected, demodulated, reradiated and scattered signals. These signals will travel away from the microcalcification and toward the location or locations where the various sensors are located.
It is a drawback of known ultrasonic imaging apparatuses that further relevant information associated with the region undergoing an ultrasound scan (for instance a region of interest) is not adequately assessed relative to the image information of said region.
Consequently, the healthcare professional lacks valuable information from which he/she could benefit to take a decision, or to make a diagnosis, or to prepare a therapy plan.
SUMMARY OF THE INVENTION
It is an object of the invention to provide an ultrasonic imaging system of the kind set forth in the opening paragraphs which enables an association between acoustic information associated with the region undergoing an ultrasound scan and the image information of said region.
According to a first aspect of the invention, this object is realized by an ultrasonic imaging system as defined in the opening paragraph, characterized in that the ultrasonic imaging system further comprises a second receiving unit for receiving an acoustic signal of a frequency within the human hearing range from an acoustic source and generating acoustic data from said acoustic signal, wherein said acoustic source is located within a region of interest, a processing unit configured to determine the location of the acoustic source of the acoustic data relative to the region of interest, thereby generating located acoustic data, and wherein the processing unit is further configured to register the image data and the located acoustic data so as to generate the enriched ultrasound data based on the image data and the located acoustic data. The invention is advantageous in that it enables generation of enriched ultrasound data based on located acoustic data and image data; an association between the received acoustic data and the received image data is registered in such enriched ultrasound data, therefore enabling a user (such as a healthcare professional, a physician, a caregiver) to perceive an internal structure (for instance an organ, or a blood vessel, or a tissue) by at least two senses (sight and hearing). In addition to the foregoing, a superposition of image data and acoustic data enables a further assessment of the internal structure of a subject (for instance a patient) such that more information of potential relevance is provided to the healthcare user so as to strengthen his/her assessment of the condition of such internal structure (for instance for diagnostic purposes).
The invention is further advantageous in that it enables a faster scanning time of the region of interest by an ultrasonic imaging system. For instance, the invention speeds the analysis process of an ultrasonic imaging system as a region of interest or a feature of interest may be quickly identified based on its acoustic emissions, which are captured by the second receiving unit. Alternatively, such region of interest or a feature of interest may be automatically identified by a processing unit. Localization of a region of interest or area of interest within the ultrasound image is therefore eased, and the image interpretation by the healthcare user is improved.
The invention is further advantageous in that it enables an ultrasonic imaging system which is easier to use, allowing lay users, additionally and/or alternatively users undergoing training, additionally and/or alternatively users with limited skills, additionally and/or alternatively less experienced healthcare professionals, additionally and/or alternatively any person and/or robot who would benefit from ease of use, to operate an ultrasonic imaging system so that conclusion(s), assessment(s) or otherwise information are engendered.
The invention is further advantageous in that it enables the ultrasonic imaging system parameters (for instance the frequency, the mode) to be modified, or changed, or optimized based on the received acoustic data, so as to ensure that the scanned image represents data relevant to the issue highlighted by acoustic analysis. Consequently, the present invention enables for an optimization of the tradeoff between spatial resolution of the image and imaging depth into the body.
In another embodiment, the processing unit is further configured to generate an ultrasonic image based on the enriched ultrasound data. This embodiment is advantageous in that it enables visual representation, when presented on a display means, of the image data captured by the first receiving unit.
In another embodiment, the second receiving unit of the ultrasonic imaging system comprises an array of microphones. According to the present invention, an array of microphones (or microphone array) comprises a plurality of microphones (i.e. two or more) that are distributed over a certain space and are operating in tandem. Said embodiment is advantageous in that it enables adequate location of the acoustic source of the acoustic signal (therefore the acoustic data) following the processing of the received acoustic data. For instance, based on a fixed physical relationship in space between different individual microphone transducer array elements, simultaneous digital signal processing of the received signals from each of the individual microphone array elements by a suitable algorithm is configured to create one or more virtual microphones so that further processing is made possible.
In another embodiment, the first receiving unit of the ultrasonic imaging system comprises an ultrasound probe. Said embodiment is advantageous in that it enables digital capture of image data, therefore obviating the need of any converting means, such as an Analogue-to-Digital converter (ADC). This embodiment is further advantageous in that it enables the ultrasonic imaging system according to the present invention to use elements, and/or features, and/or aspect of known ultrasonic imaging systems, thereby limiting modifications for upgrading towards an ultrasonic imaging system according to the present invention.
In another embodiment, the processing unit of the ultrasonic imaging system further comprises a beamforming (or spatial filtering) algorithm configured to process the acoustic data so as to determine the location of the acoustic source relative to the region of interest. Said embodiment is advantageous in that, together with the acoustic data received by a second receiving unit, such as a microphone array, a robust processing technique to locate the acoustic source (of the acoustic data) is provided. As will be further elucidated hereunder, beamforming is namely achieved by combining elements in a phased array (for instance received from the array of microphones) in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Such processing of acoustic data received by a microphone array follows digitalization of the acoustic data (for instance by an Analogue-to-Digital converter (ADC)), or any other acoustic processing technique, enabling conception of digital virtual microphone(s). From such digital virtual microphone(s), 2D or 3D space coordinates can be derived (for instance Cartesian coordinates) such that locating the acoustic source is made possible.
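The constructive/destructive interference just described can be illustrated with a minimal delay-and-sum beamformer over integer-sample delays. This is a textbook-style sketch under simplifying assumptions (noiseless signals, integer delays, circular shifts), not the algorithm claimed in the patent.

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Minimal delay-and-sum beamformer: advance each microphone signal
    by its (integer-sample) steering delay and average. Signals steered
    with the correct delays add constructively; others tend to cancel."""
    out = np.zeros(len(signals[0]))
    for sig, d in zip(signals, delays):
        out += np.roll(sig, -d)
    return out / len(signals)

# Two microphones hear the same pulse, the second 5 samples later.
t = np.arange(64)
pulse = np.exp(-0.5 * ((t - 20) / 2.0) ** 2)
mics = [pulse, np.roll(pulse, 5)]

aligned = delay_and_sum(mics, [0, 5])      # steered at the true source
misaligned = delay_and_sum(mics, [0, 0])   # steered elsewhere
print(aligned.max() > misaligned.max())    # -> True
```

Scanning the steering delays over candidate source positions and picking the direction of maximum output power is the essence of the steered-beamformer localization mentioned earlier in the text.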
In another embodiment, the processing unit of the ultrasonic imaging system further comprises an image segmentation algorithm such that one or more features of interest in the enriched ultrasound data are identified. Non limiting examples of image segmentation algorithm suitable for the present invention are, for instance a shape-constrained deformable model, and/or an active shape model. This embodiment is advantageous in that it enables a region of interest from image data to be identified, particularly a contour of the region of interest is enforced (for instance highlighted, or otherwise made prominent when displayed), thereby allowing an easier identification of an arrangement, or an element of the imaged structure (for instance an organ, or a blood vessel, or a tissue) when visualized.
In another embodiment, the ultrasonic imaging system further comprises a display configured to display an ultrasonic image based on enriched ultrasound data, and an audio generator (such as a loudspeaker) configured to generate an audible signal from the acoustic data and/or the located acoustic data registered with the ultrasonic image and/or the one or more feature of interest when one or more picture elements of the ultrasonic image are selected. Said embodiment is advantageous in that it enables the healthcare user, or any person to hear the sound (or a frequency associated with this sound) as captured by the second receiving unit (alternatively by an array of microphones) following selection or identification of a picture element (or a pixel, or a group of pixels, or a frame of the image) or more than one picture elements, or a group of picture elements within the ultrasonic image. By said selection or identification of such picture element(s), or more than one picture elements, or a group of picture elements, in addition of viewing the image generated based on the internal structure (for instance an organ, or a blood vessel, or a tissue), the sound generated by said internal structure, or more precisely by a specific area within said internal structure may be made audible (for instance played by a loudspeaker). Consequently, said embodiment enables, for instance, a better assessment of said internal structure.
In another embodiment, the ultrasonic imaging system further comprises a first memory for storing at least one of the acoustic data and/or the located acoustic data so that the audible signal can be replayed at a further moment, or the located acoustic data be re-registered with further image data generated at another further moment. Said embodiment is advantageous in that the captured acoustic data (or located acoustic data) may be replayed at a future consultation such as, for instance, to be compared with the newly gathered acoustic data (or located acoustic data). As a result, a modification, or a change, or a difference of the acoustic data generated by the internal structure may be assessed, such as to help, support or otherwise provide valuable information to the user in his/her assessment, for instance an improvement following an intervention, or degradation of an existing condition monitored over time.
In another embodiment, the ultrasonic imaging system further comprises a second memory for storing a reference acoustic signal for the region of interest, or the feature of interest, wherein the processing unit is further configured to compare the acoustic data and/or the located acoustic data with the reference acoustic signal. Said embodiment is advantageous in that abnormal acoustic data (for instance an acoustic signal) from the region of interest (or the feature of interest) can be easily assessed by the healthcare user (such as a healthcare professional, a physician, a caregiver). A reference acoustic signal may be, for instance, an acoustic signal which has been scientifically demonstrated and validated as being in a so-called normal range (for instance a ground truth, or a gold standard). Such an embodiment enables quicker assessment, diagnosis or conclusion of the situation of the internal structure (for instance an organ, or a blood vessel, or a tissue) from which the acoustic data and image data are captured.
In another embodiment, the display is further configured to display the one or more picture elements and/or the ultrasonic image at an improved resolution when the located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of, the stored reference acoustic signal; and/or to display the one or more picture elements and/or the ultrasonic image at an increased size when the located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of, the stored reference acoustic signal. Said embodiment is advantageous in that it enables emphasis of the representation of the image data associated with the acoustic data. Such emphasis may enable a diagnosis by a healthcare professional, or an assessment by a user, which could have been missed without the superposition of the acoustic data and the image data. Alternatively or additionally, this embodiment may provide information indicating that an internal structure that appears abnormal in fact functions normally, thereby alleviating the need for further examination by the healthcare professional, such as a minimally invasive surgery, or an invasive surgery.
According to a second aspect of the invention, the foregoing object is realized by an ultrasound probe as defined in the opening paragraphs, configured for receiving acoustic data of a frequency within the human hearing range and ultrasound data, characterized in that the ultrasound probe further comprises an array of microphones for receiving acoustic data from an acoustic source, wherein said acoustic source is located within a region of interest.
Said embodiment is advantageous in that it enables reception of the acoustic data and the image data from one device, or apparatus. The use of one device, compared to a plurality of devices, enables easier manipulation by a user (such as a healthcare professional, a physician, a caregiver), which consequently provides for a more robust system for generating an ultrasound image registered with acoustic data. In addition to the ease of use, this embodiment reduces the data processing needs, thereby enabling faster processing times. Moreover, via said embodiment, as the position of the array of microphones relative to the ultrasound probe remains constant, manipulation of the system according to the present invention by unskilled users, or otherwise any users that have not received extensive training, is made possible.
According to a third aspect of the invention, the foregoing object is realized by a method characterized by receiving image data from a region of interest, receiving acoustic data of a frequency within the human hearing range from an acoustic source, wherein said acoustic source is located within the region of interest, processing the acoustic data to determine the location of the acoustic source of the acoustic data relative to the region of interest, thereby generating located acoustic data, registering the image data and the located acoustic data, and generating enriched ultrasound data based on the registered image data and the located acoustic data.
The method according to the present invention is advantageous for reasons analogous to those of the corresponding features of the system according to the present invention. For instance, the method is advantageous in that it enables superposition of acoustic data and captured image data from a source, said source being within a region of interest, which is for instance an internal structure (for instance an organ, or a blood vessel, or a tissue). Such superposition of acoustic data and image data provides for the possibility to generate an audible signal indicative of the acoustic data emitted from the source together with a visual signal (e.g. display of an ultrasonic image) indicative of the image data. The latter advantage enables a user (such as a healthcare professional, a physician, a caregiver) to see a representation of the region of interest and hear the sound (acoustic signal) generated by the structure, or otherwise a source, within said region of interest.
In another embodiment, the method further comprises displaying an ultrasonic image based on the enriched ultrasound data, and generating an audible signal from the acoustic data and/or the located acoustic data registered with the ultrasonic image and/or the one or more features of interest when one or more picture elements of the ultrasonic image are selected. Said embodiment is advantageous for reasons analogous to those of the corresponding embodiment of the system according to the present invention.
In another embodiment, the method further comprises displaying the one or more picture elements and/or the ultrasonic image at an improved resolution when the located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of, the stored reference acoustic signal, and/or displaying the one or more picture elements and/or the ultrasonic image at an increased size when the located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of, the stored reference acoustic signal. Said embodiment is advantageous for reasons analogous to those of the corresponding embodiment of the system according to the present invention.
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter.
It will be appreciated by those skilled in the art that two or more of the above-mentioned options, implementations, and/or aspects of the invention may be combined in any way deemed useful.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the applicator device, the system and the method according to the invention will be further elucidated and described with reference to the drawing, in which:
Fig.1 schematically represents an embodiment of the ultrasonic system according to the present invention.
Fig.2 schematically represents another embodiment of the ultrasonic system according to the present invention.
Fig.3a schematically represents an embodiment of the device for use in a system according to the present invention.
Fig.3b schematically represents an alternative embodiment of the device for use in a system according to the present invention.
Fig.4a graphically represents the image data received according to the present invention.
Fig.4b graphically represents the located acoustic data generated according to the present invention. Fig.4c graphically represents the registration of the image data and the located acoustic data according to the present invention.
Fig.5 represents an embodiment of the ultrasonic image displayed by an ultrasonic system according to the present invention.
Fig. 6 schematically represents an embodiment of the method according to the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
Certain embodiments will now be described in greater detail with reference to the accompanying drawings. In the following description, like drawing reference numerals are used for like elements, even in different drawings. The matters defined in the description, such as detailed construction and elements, are provided to assist in a comprehensive understanding of the exemplary embodiments. Also, well known functions or constructions are not described in detail since they would obscure the embodiments with unnecessary detail. Moreover, expressions such as "at least one of", when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list.
Within the scope of the present invention, a region of interest may be an internal structure (for instance an organ, or a blood vessel, or a tissue), or part of such internal structure. Additionally or alternatively, a region of interest may comprise a plurality of internal structures, or a plurality of parts of such plurality of internal structures. A feature of interest corresponds to an image of a part of an internal structure (or a plurality of internal structures), which may have been processed by an image segmentation algorithm for example. For illustration purposes, a region of interest may be the aortic valve in the heart of a subject, where the feature of interest is the representation of the posterior cusp.
Within the scope of the present invention, a microphone is a device that converts an acoustic signal into an electric signal. A microphone array (or array of microphones) comprises a plurality of microphones arranged to operate in tandem. Generally, a microphone array comprises solely omnidirectional (or non-directional) microphones; alternatively it comprises solely unidirectional (or directional) microphones; alternatively it comprises a mixture of omnidirectional and unidirectional microphones distributed over a perimeter of a given space. A microphone array may consist of two (2) or more microphones, where examples of numbers of microphones could be four (4) microphones, alternatively six (6) microphones, alternatively nine (9) microphones. In an embodiment according to the present invention, the microphone array comprises, as an example, eight (8) microphones (further detailed hereunder under Figure 5).
A planar microphone array comprises all microphones in one single plane. The person skilled in the art will know that a planar array provides a large aperture and may be used for directional beam control by varying the relative phase of each element. A 3D microphone array comprises all microphones in a three-dimensional space. Such microphone arrays usually, but not exclusively, have a spherical configuration of the microphones relative to each other. An advantage of a 3D microphone array resides in an increased accuracy of sound pressure levels and a more accurate (virtual) positioning of sound sources (for instance the region of interest).
Additionally or alternatively, a 3D microphone array enables capture of an increased amount of the sound emitted by the region of interest, increasing the sensitivity relative to a planar microphone array. Additionally or alternatively, a 3D array enables limitation of the air gap (or air pocket) between the body surface and the microphones, thereby increasing the quality of the sound capture.
Within the scope of the present invention, an ultrasonic transducer is a device capable of converting energy into an ultrasound wave. An ultrasonic transducer may comprise a piezoelectric element (such as a crystal, alternatively a plurality of crystals) capable of changing dimension when a voltage is applied. Alternatively, a magnetostrictive material may be used, as such material changes size in response to a magnetic field. Alternatively, a capacitor microphone using a plate which moves in response to the ultrasound wave may be used. Alternatively, the system may comprise a Capacitive Micromachined Ultrasonic Transducer (CMUT) cell. Such a cell comprises a cavity with a movable mechanical part, also called a membrane, and a pair of electrodes separated by the cavity. Received ultrasound waves cause the membrane to move or vibrate and change the capacitance between the electrodes, which can be detected. Thereby the ultrasound waves are transformed into a corresponding electrical signal. Conversely, an electrical signal applied to the electrodes causes the membrane to move or vibrate, thereby transmitting ultrasound waves.
Fig.1 schematically represents an ultrasonic system 100 according to the present invention. Said ultrasonic system 100 comprises a second receiving unit 110 configured to receive an acoustic signal (for instance sound waves) from an acoustic source and to generate acoustic data from said acoustic signal from a region of interest 150. Said region of interest 150 may be an organ, alternatively a tissue, alternatively a blood vessel, alternatively a region of an organ, alternatively a region of a tissue, alternatively a region of a blood vessel, or alternatively any structure that emits an acoustic wave. The skilled person will easily understand that the heart would be a suitable structure so as to carry out the present invention, where heart murmurs could be captured by the second receiving unit. However, numerous other structures of the human or animal body emit an acoustic signal (e.g. a noise, a sub-audible signal, an ultrasonic signal), such as for example the kidneys, the liver, or tissues such as the aorta or a ventricular valve.
The skilled person will understand that in a preferred embodiment, said region of interest 150 may correspond to one or more internal structures, within the human or animal body. A region of interest 150 may be, as mentioned above, a region of a structure, for instance an organ, but could also represent a plurality of structures, for instance the origin area of the aorta at the periphery of the left ventricle of the heart. Additionally or
alternatively, said region of interest could represent the origin of the urethra at the periphery of the urinary bladder. Additionally or alternatively, the region of interest may represent a region covering the liver, the gallbladder, the pancreas and the stomach.
The ultrasonic system of Fig.1 further comprises a first receiving unit 120 (for instance an ultrasonic transducer) for receiving image data from the region of interest 150. Such image data may be generated by any modality configured to acquire an image of an internal structure (for instance an organ, or a blood vessel, or a tissue) of a subject 160 (for instance a patient). Even if the present invention mainly discloses ultrasound imaging (US), image data may be gathered by alternative modalities, such as for example computed tomography (CT), magnetic resonance imaging (MRI), or positron emission tomography (PET). With all the foregoing modalities, the image data are digitalized, such that a map within space coordinates (for instance a 2D representation, or a 3D representation) of the imaged region of interest 150 is generated following treatment via one or more adequate image processing techniques, for instance a shape-constrained deformable model, for instance an active shape model.
The ultrasonic system 100 further comprises a processing unit 130. Said processing unit 130 may be, as illustrated in Fig. 1, a single unit, but may alternatively be a plurality of units. Additionally or alternatively, the processing unit 130 may be at the same location as the second receiving unit 110 and the first receiving unit 120, but may alternatively be at a remote location, within a server room for instance, or alternatively in the cloud.
The processing unit 130 may host means, such as ultrasound image computation processes, that will be known to the person skilled in the art, to transpose image data captured from a transducer array (or alternatively any other means capable of receiving an echo signal) into data capable of being displayed for visualization. Such processed image data capable of being represented on a display means are also called ultrasonic images, which may be generated, for instance, from A-Mode data, additionally or alternatively from B-Mode data.
The processing unit 130 is configured to host at least one signal processing algorithm configured to process the acoustic data. Said signal processing algorithm may receive digitalized audio data. As most microphone arrays are configured to generate an analogue acoustic signal, the audio data may need to be pre-processed by an Analogue-to-Digital Converter (ADC) (not shown). Alternatively, the microphone array may output a digital acoustic signal, such that such signal is ready to be processed by the processing unit 130.
The signal processing algorithm may be a beamforming algorithm. A beamforming algorithm comprises computer readable code that, when executed on a suitable computer (for instance a PC, for instance a tablet, for instance a mobile phone), enables such suitable computer to process acoustic data (for instance audio data) to determine (or assess, or locate) the source of such acoustic data. This is possible when acoustic data are received from a plurality of microphones (for instance microphone arrays) at known positions relative to each other (fixed physical distance between each of the individual microphones) such that a virtual digital map of the received sound is created. Based on the beamforming signal processing techniques applied on the reception of the acoustic signal by the microphone array, the location of the source of the acoustic signal may be determined with accuracy following digitalization of the acoustic signal (further detailed hereunder under Fig.4).
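By way of illustration only, the principle of such a beamforming algorithm may be sketched as follows. This is a minimal delay-and-sum sketch in Python, not the claimed implementation; the sample rate, the speed of sound and the microphone geometry are assumed values, and the function names are purely illustrative:

```python
import numpy as np

SPEED_OF_SOUND = 1540.0  # m/s, an assumed soft-tissue value
SAMPLE_RATE = 48000      # Hz, an assumed audio sample rate


def delay_and_sum(signals, mic_positions, candidate, fs=SAMPLE_RATE, c=SPEED_OF_SOUND):
    """Steer the array toward `candidate` by delaying each channel and summing.

    signals:       (n_mics, n_samples) array of digitalized microphone data
    mic_positions: (n_mics, 3) known microphone coordinates (fixed geometry)
    candidate:     (3,) hypothesized source location
    Returns the mean power of the steered output; a correct hypothesis
    aligns the channels and therefore yields a power peak.
    """
    dists = np.linalg.norm(mic_positions - candidate, axis=1)
    # Integer-sample delays relative to the closest microphone.
    delays = np.round((dists - dists.min()) / c * fs).astype(int)
    n = signals.shape[1] - int(delays.max())
    steered = sum(sig[d:d + n] for sig, d in zip(signals, delays))
    return float(np.mean(steered ** 2))


def locate_source(signals, mic_positions, grid, fs=SAMPLE_RATE, c=SPEED_OF_SOUND):
    """Return the candidate point of `grid` with maximal steered power."""
    powers = [delay_and_sum(signals, mic_positions, g, fs, c) for g in grid]
    return grid[int(np.argmax(powers))]
```

In this sketch the known, fixed relative microphone positions are exactly what makes localization possible, mirroring the "virtual digital map" described above.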
The skilled person will understand that the task of locating a sound source (for instance an acoustic source) may be seen as the process of acoustic source localization. Different methods are known to achieve such a task, for instance the so-called "particle velocity or intensity vector", "time difference of arrival" and "triangulation" methods.
In an embodiment, the signal processing algorithm is based on the steered beamformer approach so as to identify the location of the source of the acoustic data. Said method makes use of an array of microphones combined with a steered beamformer which is enhanced by the Reliability Weighted Phase Transform (RWPHAT).
Additionally or alternatively, the signal processing algorithm is based on a collocated microphone array approach so as to identify the location of the source of the acoustic data. Said approach enables real-time sound localization using a collocated array named an Acoustic Vector Sensor (AVS) array. By such approach, the output is represented by a horizontal angle and a vertical angle of the sound sources, which is found by the peak in the combined 3D spatial spectrum.
Additionally or alternatively, the signal processing algorithm is based on time delay estimation (TDE) methods. Said methods use the fact that the sound reaches the microphones at slightly different times. The delays are computed using a cross-correlation function between the signals from different microphones.
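Purely as a non-limiting sketch, such a cross-correlation based time delay estimate between two microphone channels may be computed as follows (Python; the function names and the assumed soft-tissue speed of sound of 1540 m/s are illustrative only):

```python
import numpy as np


def estimate_delay(sig_a, sig_b, fs):
    """Estimate, in seconds, how much later the sound arrives at microphone B
    than at microphone A, via the peak of the full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    # In "full" mode, index len(sig_a) - 1 corresponds to zero lag.
    lag_samples = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag_samples / fs


def path_length_difference(delay_s, c=1540.0):
    """Convert a time delay into a path-length difference (metres); c is an
    assumed speed of sound in soft tissue."""
    return delay_s * c
```

Pairwise delays obtained this way can then feed a triangulation step of the kind mentioned above.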
Additionally, or alternatively, any signal processing algorithm suitable for the present invention may be based on any other techniques that would be known to the skilled person such that the location of the acoustic source of the acoustic data is identified.
Based on the output of the signal processing algorithm, located acoustic data are generated, thereby combining the acoustic data and the acoustic source in a 2D Cartesian plane (for instance X, Y), alternatively in a 3D Cartesian space (for instance X, Y, Z), so that the located acoustic data may be further processed, as will be further detailed hereunder.
Additionally or alternatively, the acoustic data can be filtered and localized following creation of an acoustic intensity map highlighting different sound signals, which are segregated based on their source locations. In order to account for the errors introduced due to scattering or reflection of acoustic waves propagating through the subject (for instance the patient), a sound propagation model can be generated. Accordingly, an adaptive
beamforming algorithm can be selected to compensate for the sound speed delay through biological structures (for instance an organ, or a blood vessel, or a tissue).
The processing unit 130 is further configured to register the located acoustic data and the image data (both data being in a digital format) so as to generate enriched ultrasound data from which an ultrasonic image can be displayed on a display 140. This registration may be achieved following an image registration process such that the different sets of data (i.e. located acoustic data and image data) are transformed into one coordinate system (either 2D or 3D). In an exemplary embodiment, based on the difference between the resolution of the located acoustic data and the resolution of the image data, the registration and/or the spatial correspondence of such acoustic data and such image data could, for instance, be established via a phantom (including structures visible in ultrasound and something generating noise, possibly at different positions) with state of the art geometry and a calibration process. Such registration enables integration of those different sets of data into an integrated data set which corresponds to the enriched ultrasound data. Numerous means are available to the skilled person so as to make the registration effective. For instance, a registration algorithm based on a known registration method would enable the registration of the located acoustic data and the image data so that the enriched ultrasound data are generated. For instance, transformation models (for instance linear transformation, non-rigid transformation) are capable of generating the enriched ultrasound data based on the located acoustic data and the image data.
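For illustration only, the mapping of located acoustic data into the image coordinate system may be sketched as below (Python). The 4x4 homogeneous transform is assumed to come from a calibration process of the kind described above, and the nearest-voxel assignment is merely one possible way of integrating the two data sets; neither is a limitation of the invention:

```python
import numpy as np


def register_points(points, transform):
    """Map (N, 3) located-acoustic coordinates into the image coordinate
    system using a 4x4 homogeneous transform obtained from calibration."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ transform.T)[:, :3]


def enrich(image_voxels, acoustic_points, acoustic_samples, transform):
    """Attach each acoustic sample to its nearest image voxel, yielding a
    dict voxel_index -> acoustic sample (the 'enriched ultrasound data')."""
    mapped = register_points(acoustic_points, transform)
    enriched = {}
    for point, sample in zip(mapped, acoustic_samples):
        idx = int(np.argmin(np.linalg.norm(image_voxels - point, axis=1)))
        enriched[idx] = sample
    return enriched
```

A non-rigid transformation, as mentioned in the text, would replace the single matrix by a spatially varying deformation field, but the integration step would be analogous.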
The processing unit 130 may also host an image segmentation algorithm configured to process the image data. Said image segmentation algorithm may comprise one or more mathematical models, for instance a shape-constrained deformable model, for instance an active shape model, such that a feature of interest (not shown) is identified in the enriched ultrasound data or alternatively in the ultrasonic image (alternatively in the image data); more particularly, a contour of the feature of interest is enforced, said contour of the feature of interest being the schematic visual representation of the contour of the region of interest 150.
The ultrasonic system 100 may further comprise a display 140 for representing the region of interest 150 in a manner that is intelligible to a person.
The ultrasonic system 100 may further comprise an acoustic generator 170 (for instance a loudspeaker) configured to generate an audible signal. Additionally, the ultrasonic system 100 may comprise more than one acoustic generator 170 (for instance a loudspeaker). In an embodiment, said audible signal is generated from the acoustic data (additionally or alternatively the located acoustic data) registered with the ultrasonic image and/or the one or more features of interest when one or more picture elements of the ultrasonic image are selected, as will be further detailed in Fig.5.
Fig.2 schematically represents another embodiment of the ultrasonic system 200 according to the present invention. Within this alternative system, the first receiving unit 220 and the second receiving unit 210 are embedded within a single unit 290 (for instance an ultrasound probe). Embodiments of such single unit 290 will be further detailed in Fig.5. However, the person skilled in the art will understand that such an embodiment limits the number of devices in contact with the subject (for instance the patient). Additionally, such embodiment has the benefit that the relative position of the microphone array and the ultrasound transducer is constant. One of the advantages of such constant relative position is the limitation of potential mistakes in the determination of the position of the second receiving unit relative to the first receiving unit, which permits registration as detailed above. Exemplary embodiments of the single unit 290 (for instance an ultrasound probe) will be further detailed with the help of Fig.3.
All the other features, or elements, or functions, or advantages discussed with reference to Fig.1 apply mutatis mutandis to Fig.2.
Fig.3a schematically represents an exemplary embodiment of an ultrasound probe 370 for use in an ultrasonic imaging system, for instance according to the present invention. Said ultrasound probe 370 comprises a plurality of microphones 310 arranged so as to create a microphone array, and an ultrasound transducer 320 (for instance a piezoelectric element, for instance a CMUT transducer, as defined above). The ultrasound probe further comprises a means 373 for connection with an ultrasonic imaging system, said means 373 being either a wire (so as to transmit an electric or optical signal), but may also be a wireless means, such as Bluetooth, Wi-Fi or otherwise any wireless element arranged to transmit a signal (for instance data) from the probe to an ultrasonic imaging system, or alternatively to a server, or alternatively to a cloud, so that such wirelessly transmitted data can be used by an ultrasonic imaging system.
The emitting frequency of the ultrasound transducer 320 is chosen in dependence on the required image resolution and the required wave penetration. For instance, a low resolution and good penetration transducer usually operates within the frequency range of 1 MHz to 10 MHz (megahertz). For a high resolution and low penetration transducer, the frequency range may be between 10 MHz and 20 MHz.
Audio data (or sound) generated by a structure vary in frequency.
Numerous internal structures (for instance the heart) emit sounds at a low frequency, below 12 Hz (hertz), which is often regarded as the lowest frequency audible to the human ear in ideal laboratory conditions. The range of human hearing has commonly been assessed as between 20 Hz and 20 kHz (kilohertz), where humans are most sensitive to (i.e. able to discern at the lowest intensity) frequencies between 2,000 and 5,000 Hz. For instance, heart sound S2 (i.e. closure of the semilunar valves) has a component within 10 Hz to 400 Hz.
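By way of example only, isolating such a band of interest (for instance the 10 Hz to 400 Hz component of heart sound S2 mentioned above) from digitalized audio data may be sketched as follows (Python; a brick-wall FFT filter is used purely for brevity of illustration, whereas a practical system would employ a proper FIR/IIR filter design):

```python
import numpy as np


def bandpass(signal, fs, low_hz, high_hz):
    """Keep only the spectral components of `signal` (sampled at `fs` Hz)
    lying within [low_hz, high_hz], zeroing everything else."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < low_hz) | (freqs > high_hz)] = 0.0
    return np.fft.irfft(spec, n=len(signal))
```

Such a filter could, for instance, suppress content outside the stated 10-400 Hz band before the localization step.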
In an exemplary embodiment, the microphones constituting the microphone array 310 may be arranged so as to form a circle, or alternatively a square, or otherwise any polygonal form, such that it surrounds the ultrasound transducer 320. Within this embodiment, the ultrasound transducer 320 is positioned at the center, or near the center (the center otherwise named the origin), of the circle, or alternatively the square, or alternatively any polygonal form formed by the microphones constituting the microphone array 310. For clarity, with the exemplary embodiment depicted in Fig.3a, each microphone constituting the microphone array 310 is arranged so as to form a circle (for instance where each microphone is equidistant from the others, alternatively where the distance between the microphones is not consistent) around the ultrasound transducer 320, but the person skilled in the art will find numerous embodiments wherein the ultrasound transducer 320 is surrounded by such a microphone array 310.
Fig.3b schematically represents a further exemplary embodiment of an ultrasound probe 370 for use in an ultrasonic imaging system according to the present invention. The exemplary embodiment of Fig.3b differs from the one of Fig.3a in the positioning of the microphone array 310 relative to the ultrasound transducer 320. In this further embodiment, each microphone constituting the microphone array 310 is positioned in a plane, said plane being adjacent to a further plane formed by the ultrasound transducer 320.
The person skilled in the art will understand that the number of microphones constituting the microphone array 310 depicted in Fig.3a and 3b should not in any event be seen as a limiting number. A microphone array 310 for use in the present invention may consist of 3 microphones, or alternatively 5 microphones, or alternatively 10 microphones, or alternatively 20 microphones, or alternatively 50 microphones. However, each microphone constituting the microphone array 310 must be separated from the others by a minimal distance (for instance 0.5 mm, for instance 1 mm, for instance 1.4 mm) and positioned such that it can receive an acoustic signal from an internal structure (for instance an organ, or a blood vessel, or a tissue) of the subject (for instance the patient), where the maximal number of microphones will be limited by the shape and the surface of the ultrasound probe 370.
Fig.4a represents the 3D outcome of the image data from the region of interest as captured by the first receiving unit, for instance an ultrasound transducer. As mentioned above, the image data are in a digital format and, following capture, may be represented, for instance, in a Cartesian coordinate system (for instance within the X and Y axes, for instance within the X, Y and Z axes). Following generation of such a coordinate system, a graphical representation of the image data is obtained, where each point of said region of interest is associated with a coordinate which has its source in the region of interest.
Fig.4b represents the 3D outcome of the located acoustic data from the region of interest as captured by the second receiving unit, for instance an array of microphones. As detailed above, the processing unit according to the present invention is configured to digitalize the analogue acoustic data captured by the second receiving unit and to process such digitalized acoustic data so as to generate located acoustic data. Such located acoustic data may be represented, for instance, in a Cartesian coordinate system (for instance within the X and Y axes, for instance within the X, Y and Z axes). Following generation of such a coordinate system, a graphical representation of the located acoustic data is obtained, where each point of said region of interest is associated with a coordinate which has its source in the region of interest.
Fig.4c graphically represents the 3D outcome of the registration of the image data and the located acoustic data according to the present invention, thereby graphically representing the enriched ultrasound data as an ultrasonic image. Within said ultrasonic image, each coordinate (for instance in X and Y, or alternatively X, Y and Z) of the image data is associated with a coordinate of the located acoustic data, whereby each coordinate of the ultrasonic image represents a point of the region of interest together with the acoustic data from said point of the region of interest. Consequently, to the extent a single point of said ultrasonic image is selected, for instance by a user (such as a healthcare professional, a physician, a caregiver), a representation of the image data and the acoustic data of said point within the region of interest may be identified and presented to the user (visually and audibly).
Additionally or alternatively, each coordinate of the ultrasonic image, either within the 2D plane (X and Y), or the 3D representation (X, Y and Z) may correspond to a picture element (for instance a pixel) of said ultrasonic image. As each picture element is associated with located acoustic data, as explained above, the selection of a picture element on an image may enable acoustic data associated with the image data selected to become audible to the user (such as a healthcare professional, a physician, a caregiver, a patient).
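Purely as an illustrative sketch, the association between picture elements and registered acoustic data may be held in a simple lookup structure (Python). The class and method names below are hypothetical, and the waveform returned by select() would in practice be routed to the loudspeaker rather than merely returned:

```python
import numpy as np


class EnrichedImage:
    """Minimal sketch of enriched ultrasound data: an image whose picture
    elements may each carry a registered acoustic waveform."""

    def __init__(self, pixels, acoustic_map):
        self.pixels = pixels              # 2D array of image intensities
        self.acoustic_map = acoustic_map  # dict: (row, col) -> waveform

    def select(self, row, col):
        """Return the acoustic waveform registered with the selected picture
        element (to be made audible), or None if no sound is registered."""
        return self.acoustic_map.get((row, col))
```

Selecting a group of picture elements, as described above, would amount to collecting and mixing the waveforms returned for each element of the group.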
Fig. 5 represents an ultrasonic image displayed on a display 440 (for instance a computer screen, for instance a user interface of a tablet or a mobile phone), where one or more picture elements and/or the ultrasonic image are displayed at an increased size 405. Moreover, such embodiment may comprise an audio generator means 470, such as a loudspeaker, to make the acoustic data (or the located acoustic data) associated with one or more of the enhanced picture element audible.
The image enhancement 405 of the region of interest, via one or more picture elements and/or the ultrasonic image, may be manually requested by the user (such as a healthcare professional, a physician, or a caregiver). Additionally or alternatively, such image enhancement 405 may be displayed automatically when the located acoustic data (or the acoustic data) from an area or a point of the region of interest are outside the range (below or above a given threshold) of a reference acoustic signal found in a memory (for instance a database, a look-up table, or otherwise stored). Additionally or alternatively, the ultrasonic image may have an image portion corresponding to one or more picture elements displayed at an improved resolution when the located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of, the reference acoustic signal. As in the foregoing embodiment, said reference acoustic signal may be found in a memory (for instance a database, a look-up table, or otherwise stored).
The reference acoustic signal can accordingly be taken from general guidelines, or alternatively from the subject's health record data following a scan performed at a moment when the subject (for instance the patient) was in a healthy condition, or alternatively from an average over a number of similar subjects (same age, same sex). Such a reference acoustic signal can be a precise signal, but preferably comprises a range bordered by a minimum threshold and a maximum threshold.
When the acoustic data or located acoustic data registered with the one or more picture elements of the ultrasound image are within the range between the minimum threshold and the maximum threshold, or alternatively correspond to the reference acoustic signal, the system (for instance the processor) will assess the acoustic data or located acoustic data as being normal, acceptable, or healthy. Conversely, when the acoustic data or located acoustic data registered with the one or more picture elements of the ultrasound image are outside that range, or alternatively do not correspond to the reference acoustic signal, the system (for instance the processor) will assess the acoustic data or located acoustic data as being abnormal, unacceptable, or unhealthy. In an embodiment according to the present invention, in the latter case the one or more picture elements and/or the ultrasonic image will be displayed at either an improved resolution or an increased size.
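The threshold-based assessment described above reduces to a simple range check. The function name, labels, and numeric values below are illustrative assumptions, not part of the disclosure:

```python
def assess_acoustic_data(signal_level, min_threshold, max_threshold):
    """Classify a measured acoustic level against a stored reference range.

    Returns "normal" when the level lies within [min_threshold,
    max_threshold] and "abnormal" otherwise, mirroring the assessment
    described above. Units and labels are illustrative assumptions.
    """
    if min_threshold <= signal_level <= max_threshold:
        return "normal"
    return "abnormal"

# An "abnormal" result would trigger display of the affected picture
# elements at an improved resolution or an increased size.
print(assess_acoustic_data(0.5, 0.2, 0.8))   # normal
print(assess_acoustic_data(0.9, 0.2, 0.8))   # abnormal
```

In practice the thresholds would be retrieved from the memory holding the reference acoustic signal, for instance a look-up table keyed by the region or feature of interest.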
Fig. 6 schematically represents a method according to the present invention.
According to this method, step S1 corresponds to receiving image data from the region of interest. Said step is achieved, for instance, via a first receiving unit (for instance an ultrasound transducer).
Step S2 corresponds to receiving acoustic data from an acoustic source, wherein said acoustic source is located within a region of interest. Said step is achieved, for instance, via a second receiving unit (for instance an array of microphones).
Step S3 corresponds to processing the acoustic data to determine the location of the acoustic source of the acoustic data relative to the region of interest, thereby generating located acoustic data. Said step is achieved, for instance, via a processor comprising one or more algorithms so that the location of the acoustic data is determined. A beamforming algorithm, for example, can generate such located acoustic data when run on a suitable computer.
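A minimal delay-and-sum beamformer illustrates how step S3 could locate a source. The array geometry, sample rate, and candidate grid below are assumptions chosen so the example is self-contained; they are not values from the disclosure:

```python
import numpy as np

# Delay-and-sum beamforming sketch. FS, C, and MICS are illustrative
# assumptions, not values taken from the patent.
FS = 192_000        # sample rate (Hz); high enough that small path
                    # differences span whole samples
C = 1540.0          # typical speed of sound in soft tissue (m/s)
# Four microphones along the x axis (coordinates in metres).
MICS = np.array([[0.00, 0.0, 0.0],
                 [0.02, 0.0, 0.0],
                 [0.04, 0.0, 0.0],
                 [0.06, 0.0, 0.0]])

def steering_delays(point):
    """Per-microphone delays (in samples) toward a candidate source point."""
    dists = np.linalg.norm(MICS - point, axis=1)
    return np.round((dists - dists.min()) / C * FS).astype(int)

def steered_power(signals, point):
    """Align the channels toward `point`, average them, and return the power."""
    delays = steering_delays(point)
    n = signals.shape[1] - delays.max()
    aligned = np.stack([s[d:d + n] for s, d in zip(signals, delays)])
    return float(np.mean(aligned.mean(axis=0) ** 2))

def locate_source(signals, candidates):
    """Located acoustic data: the candidate point with maximal steered power."""
    powers = [steered_power(signals, c) for c in candidates]
    return candidates[int(np.argmax(powers))]
```

Steering toward the true source aligns the channels so they sum coherently; steering elsewhere leaves them misaligned, so the output power drops. Scanning a grid of candidate points therefore yields the acoustic map of the scanned region.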
Step S4 corresponds to registering the image data and the located acoustic data. This step is achieved, for instance, via the processor, which further comprises one or more further algorithms configured to register the image data and the located acoustic data such that step S5 can be performed.
Step S5 corresponds to generating enriched ultrasound data based on the registered image data and the located acoustic data.
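Steps S1 through S5 can be summarised as a processing skeleton. The function bodies below are placeholders (assumptions), since the method does not prescribe concrete implementations for each step:

```python
import numpy as np

def receive_image_data():
    """S1: image data from the region of interest (e.g. via an ultrasound transducer)."""
    return np.zeros((64, 64))                     # placeholder B-mode frame

def receive_acoustic_data():
    """S2: acoustic data from an acoustic source (e.g. via a microphone array)."""
    return np.zeros((4, 1024))                    # 4 channels, assumed

def locate_acoustic_source(acoustic):
    """S3: determine the source location (e.g. by beamforming)."""
    return {"position": (32, 32), "signal": acoustic.mean(axis=0)}

def register(image, located):
    """S4: associate image coordinates with located acoustic coordinates."""
    return {"image": image,
            "acoustic_at": {located["position"]: located["signal"]}}

def generate_enriched_ultrasound(registered):
    """S5: enriched ultrasound data based on the registered data."""
    return registered

enriched = generate_enriched_ultrasound(
    register(receive_image_data(),
             locate_acoustic_source(receive_acoustic_data())))
```

The skeleton makes the data flow explicit: the outputs of S1 and S3 feed the registration of S4, whose result is the enriched ultrasound data of S5.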
In order to provide an exemplary use case of the present invention, the inventors propose the following use case, where the following steps are subsequently carried out:
A scan of an internal organ is performed and image data are obtained.
Simultaneously, acoustic data from the region are captured using a microphone array.
Via beamforming techniques, the location from which the acoustic signals originated is determined.
Subsequently, an acoustic map of the scanned region is created such that signals are isolated based on their source location.
Captured acoustic signals are analyzed and compared with a pre-identified sound (for example, a heart murmur) to detect a sound of interest.
The area which emits the "sound-of-interest" is classified as an "area-of-interest" for the "image-scan".
One or more image segmentation algorithms are used to detect an anatomical object or feature of interest (for example, an aortic valve) from the region of interest identified in the image data.
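The comparison of captured signals with a pre-identified sound can be sketched as a peak normalized cross-correlation against a stored template. The synthetic "murmur" waveform and the detection threshold are illustrative assumptions, not clinical data:

```python
import numpy as np

def matches_template(signal, template, threshold=0.8):
    """Return True when the peak normalized cross-correlation between the
    captured signal and a stored template (e.g. a recorded heart murmur)
    exceeds the threshold. The threshold value is an assumption."""
    s = (signal - signal.mean()) / (signal.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    corr = np.correlate(s, t, mode="valid") / len(t)
    return float(corr.max()) >= threshold

fs = 4000
time = np.arange(fs) / fs
murmur = np.sin(2 * np.pi * 150 * time) * np.exp(-5 * time)   # toy template
captured = np.concatenate([np.zeros(500), murmur, np.zeros(500)])
print(matches_template(captured, murmur))   # True
```

A positive match flags the emitting area as the "area-of-interest", after which the segmentation step below can isolate the corresponding anatomical feature.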
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.

CLAIMS:
1. An ultrasonic imaging system (100, 200) comprising:
a first receiving unit (120, 220) for receiving image data from a region of interest, wherein the image data are based on ultrasound data;
characterized in that, the ultrasonic imaging system further comprises:
a second receiving unit (110, 210) for receiving an acoustic signal of a frequency within human hearing range from an acoustic source and generating acoustic data from said acoustic signal, wherein said acoustic source is located within the region of interest;
a processing unit (130, 230) configured to determine the location of the acoustic source of the acoustic data relative to the region of interest, thereby generating located acoustic data; and
wherein the processing unit (130, 230) is further configured to register the image data and the located acoustic data so as to generate enriched ultrasound data based on the image data and the located acoustic data.
2. The ultrasonic imaging system (100, 200) according to claim 1, wherein the processing unit is further configured to generate an ultrasonic image based on the enriched ultrasound data.
3. The ultrasonic imaging system (100, 200) according to claim 1 or 2, wherein the second receiving unit (110, 210) comprises an array of microphones.
4. The ultrasonic imaging system (100, 200) according to any of the preceding claims, wherein the first receiving unit (120, 220) comprises an ultrasound probe.
5. The ultrasonic imaging system (100, 200) according to any of the preceding claims, wherein the processing unit (130, 230) further comprises a beamforming algorithm configured to process the acoustic data so as to determine the location of the acoustic source relative to the region of interest.
6. The ultrasonic imaging system (100, 200) according to any of the preceding claims, wherein the processing unit (130, 230) further comprises an image segmentation algorithm such that one or more features of interest in the ultrasonic image are identified.
7. The ultrasonic imaging system (100, 200) according to any of the preceding claims, wherein said ultrasonic imaging system further comprises:
a display (140, 240) configured to display the ultrasonic image, and an audio generator (170, 270) configured to generate an audible signal from the acoustic data and/or the located acoustic data registered with the ultrasonic image and/or the one or more features of interest when one or more picture elements of the ultrasonic image are selected.
8. The ultrasonic imaging system (100, 200) according to claim 7, wherein said ultrasonic imaging system further comprises a first memory for storing at least one of the acoustic data and the located acoustic data, so that the audible signal can be replayed at a later moment, or the located acoustic data can be re-registered with further image data generated at a later moment.
9. The ultrasonic imaging system (100, 200) according to claim 7, wherein said ultrasonic imaging system further comprises a second memory for storing a reference acoustic signal for the region of interest, or the feature of interest,
wherein the processing unit (130, 230) is further configured to compare the acoustic data and/or the located acoustic data with the reference acoustic signal.
10. The ultrasonic imaging system (100, 200) according to any of the preceding claims, wherein said ultrasonic imaging system further comprises an alarm generator to generate an alarm signal when the acoustic data and/or the located acoustic data differ from, or are outside the range of the stored reference acoustic signal.
11. The ultrasonic imaging system (100, 200) according to any of claims 6 to 9, wherein the display (140, 240) is further configured to:
display the one or more picture elements and/or the ultrasonic image at an improved resolution when the located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of the stored reference acoustic signal; and/or
display the one or more picture elements and/or the ultrasonic image at an increased size when located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of the stored reference acoustic signal.
12. An ultrasound probe (370) for use in an ultrasonic imaging system according to any of the preceding claims, the ultrasound probe (370) configured for receiving acoustic data of a frequency within human hearing range and ultrasound data, the ultrasound probe (370) comprising:
an ultrasonic transducer (320) for receiving the image data of a region of interest, and an array of microphones (310) for receiving the acoustic data from an acoustic source, wherein said acoustic source is located within a region of interest.
13. The ultrasound probe (370) according to claim 12, wherein the array of microphones (310) is a planar microphone array or a 3D microphone array.
14. A method comprising the steps of:
receiving image data from the region of interest, wherein the image data comprise ultrasound data;
the method being characterized in:
receiving acoustic data of a frequency within human hearing range from an acoustic source, wherein said acoustic source is located within a region of interest;
processing the acoustic data to determine the location of the acoustic source of the acoustic data relative to the region of interest, thereby generating located acoustic data;
registering the image data and the located acoustic data; and
generating the enriched ultrasound data based on the registered image data and the located acoustic data.
15. The method according to claim 14, the method further comprising:
generating an ultrasonic image based on the enriched ultrasound data.
16. The method according to claim 15, the method further comprising:
displaying an ultrasonic image based on the enriched ultrasound data; and generating an audible signal from the acoustic data and/or the located acoustic data registered with the ultrasonic image and/or the one or more features of interest when one or more picture elements of the ultrasonic image are selected.
17. The method according to claim 15 or 16, the method further comprising:
displaying the one or more picture elements and/or the ultrasonic image at an improved resolution when the located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of the stored reference acoustic signal; and/or
displaying the one or more picture elements and/or the ultrasonic image at an increased size when located acoustic data registered with the one or more picture elements of the ultrasound image differ from, or are outside the range of the stored reference acoustic signal.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP15173960.4 2015-06-26
EP15173960 2015-06-26

Publications (1)

Publication Number Publication Date
WO2016207092A1 true WO2016207092A1 (en) 2016-12-29

Family

ID=53496488


Country Status (1)

Country Link
WO (1) WO2016207092A1 (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ZA965568B (en) * 1995-06-29 1997-01-29 Teratech Corp A Legal Body Org Portable ultrasound imaging system
US5844997A (en) * 1996-10-10 1998-12-01 Murphy, Jr.; Raymond L. H. Method and apparatus for locating the origin of intrathoracic sounds
WO1999023940A1 (en) * 1997-11-10 1999-05-20 Medacoustics, Inc. Non-invasive turbulent blood flow imaging system
US6678545B2 (en) 1990-10-19 2004-01-13 Saint Louis University System for determining the position in a scan image corresponding to the position of an imaging probe
EP1502549A1 (en) * 2003-07-23 2005-02-02 Konica Minolta Medical & Graphic, Inc. Medical image displaying method
US20100286527A1 (en) * 2009-05-08 2010-11-11 Penrith Corporation Ultrasound system with multi-head wireless probe
US8372006B1 (en) 2010-06-16 2013-02-12 Quantason, LLC Method for detecting and locating a target using phase information
WO2014163443A1 (en) * 2013-04-05 2014-10-09 Samsung Electronics Co., Ltd. Electronic stethoscope apparatus, automatic diagnostic apparatus and method
US20140323865A1 (en) * 2013-04-26 2014-10-30 Richard A. Hoppmann Enhanced ultrasound device and methods of using same
JP2015130904A (en) * 2014-01-09 2015-07-23 株式会社日立メディコ Medical examination support system and medical examination support method
US20160066797A1 (en) * 2013-05-22 2016-03-10 Snu R&Db Foundation Compound medical device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10507009B2 (en) 2017-10-05 2019-12-17 EchoNous, Inc. System and method for fusing ultrasound with additional signals
US10874377B2 (en) 2017-10-05 2020-12-29 EchoNous, Inc. System and method for fusing ultrasound with additional signals
US11647992B2 (en) 2017-10-05 2023-05-16 EchoNous, Inc. System and method for fusing ultrasound with additional signals
US11647977B2 (en) 2018-10-08 2023-05-16 EchoNous, Inc. Device including ultrasound, auscultation, and ambient noise sensors


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16730834; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16730834; Country of ref document: EP; Kind code of ref document: A1)