WO2001015078A2 - Verfahren zum trainieren eines neuronalen netzes - Google Patents
Verfahren zum trainieren eines neuronalen netzes
- Publication number
- WO2001015078A2 (PCT/EP2000/008280)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neural network
- neurons
- training
- synapses
- neuron
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/41—Detecting, measuring or recording for evaluating the immune or lymphatic systems
- A61B5/414—Evaluating particular organs or parts of the immune or lymphatic systems
- A61B5/415—Evaluating particular organs or parts of the immune or lymphatic systems the glands, e.g. tonsils, adenoids or thymus
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/41—Detecting, measuring or recording for evaluating the immune or lymphatic systems
- A61B5/414—Evaluating particular organs or parts of the immune or lymphatic systems
- A61B5/418—Evaluating particular organs or parts of the immune or lymphatic systems lymph vessels, ducts or nodes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
Definitions
- the invention relates to a method for training a neural network to determine risk functions for patients following an initial illness with a predetermined disease, on the basis of predetermined training data records that contain objectifiable and, to a considerable extent, measurement-derived data on the patient's clinical picture. The neural network comprises an input layer with a plurality of input neurons, at least one intermediate layer with a plurality of intermediate neurons, an output layer with a plurality of output neurons, and a plurality of synapses, each connecting two neurons of different layers.
- neural networks for evaluating large amounts of data have added to, or have replaced, the previously usual evaluation methods. It has been shown that neural networks are better able than the conventional methods to detect and reveal dependencies between the individual input data which are hidden in the data sets and are not readily recognizable. Therefore, neural networks, which have been trained on the basis of a known amount of data, provide more reliable statements for new input data of the same data type than the previous evaluation methods.
- this survival function indicates the probability that a predetermined event will occur in a patient under consideration.
- This predetermined event does not necessarily have to be the death of the patient, as the term “survival function” suggests, but can be any event, for example a new illness from cancer.
- the data records comprise a whole series of objectifiable information, i.e. data whose values the person operating the neural network cannot influence and which can, if desired, be recorded automatically.
- this is information about the patient's person, for example age, gender and the like; information about the clinical picture, for example the number of cancerous lymph nodes and tumor-biological factors such as uPA (urokinase plasminogen activator), its inhibitor PAI-1 and similar factors; as well as information about the treatment method, for example the type, duration and intensity of chemotherapy or radiation therapy.
- uPA: urokinase plasminogen activator
- PAI-1: plasminogen activator inhibitor type 1
- a whole series of the above-mentioned information, particularly the information on the clinical picture, can only be determined using suitable measuring apparatus.
- the personal data can also be automatically read from suitable data carriers, for example machine-readable ID cards or the like.
- since the objectifiable data are not all available at the same time, as is often the case in particular with laboratory measured values, they can be temporarily stored in a database on a suitable storage medium before being fed to the neural network as input data.
- the neural network as a signal filter
- a neural network can thus be understood as a kind of "signal filter” that filters out a meaningful output signal from a noisy and therefore not yet meaningful input signal.
- whether or how well the filter can perform its function depends on whether the intensity of the filter's self-noise can be kept so low that the signal to be filtered out is not lost in that self-noise.
- the intensity of the "self-noise" of a neural network is lower, the more data records are available for training it on the one hand, and the simpler its structure on the other.
- the generalization ability of the network increases with increasing simplification of the structure of the neural network.
- part of the training of neural networks is therefore concerned with finding and eliminating structural parts which are unnecessary for obtaining a meaningful output signal.
- this "thinning out” also referred to in the technical language as "pruning” it should be noted as a further boundary condition that the structure of the neural network must not be “thinned out” arbitrarily, since with decreasing complexity of the neural network also its ability to complex To reproduce correlations, and thus its informative value decreases.
- This object is achieved according to the invention by a method for training a neural network for determining risk functions for patients, the neural network comprising: an input layer having a plurality of input neurons, at least one intermediate layer having a plurality of intermediate neurons, an output layer having a plurality of output neurons, and a plurality of synapses, each connecting two neurons of different layers. The training of the neural network comprises a structure-simplification procedure, i.e. the finding and elimination of synapses which have no significant influence on the course of the risk function, by either a1) selecting two transmitting neurons connected to one and the same receiving neuron, a2) assuming that the signals passed from the transmitting neurons to the receiving neuron show qualitatively essentially the same behavior, i.e. are correlated with one another, a3) interrupting the synapse of one of the two transmitting neurons to the receiving neuron and adapting the weight of the synapse of the other transmitting neuron to the receiving neuron accordingly, a4) comparing the reaction of the neural network modified according to step a3) with the reaction of the unchanged neural network, and a5) deciding, if the deviation of the reaction does not exceed a predetermined amount, to maintain the change made in step a3); or by b1) selecting a synapse, b2) assuming that this synapse has no significant influence on the course of the risk function, b3) interrupting this synapse, b4) comparing the reaction of the modified neural network with that of the unchanged network, and b5) deciding, if the deviation does not exceed a predetermined amount, to maintain the interruption.
- a neural network trained in the manner described above supports the attending physician, for example, in deciding which follow-up treatment to use for a particular freshly operated patient.
- the doctor can enter the patient data and the laboratory-recorded data on the clinical picture from the initial treatment, and receives information from the neural network as to which type of after-treatment results in the survival function most favorable for the patient under consideration.
- the aggressiveness of the individual types of after-treatment can also be taken into account, so that among after-treatments yielding the same or approximately the same survival function, the most gentle type can be selected for the patient.
- Fig. 1 shows the structure of a neural network, which is constructed in the manner of a multi-layer perceptron.
- the neural network comprises: an input layer with a plurality of input neurons N_i (i for "input neuron"), at least one intermediate layer with a plurality of intermediate neurons N_h (h for "hidden neuron"), and an output layer with a plurality of output neurons N_o (o for "output neuron").
- the number of input neurons is usually chosen depending on the amount of objectifiable information available. If this would increase the time required to determine the reaction of the neural network to an unacceptable extent, an estimate of the importance of the individual pieces of objectifiable information for the significance of the overall system can be made in advance, for example with the help of a greatly simplified neural network structure. It should be emphasized that this preceding estimation is also carried out automatically and without intervention by the operator. Furthermore, the number of output neurons is chosen large enough that, in the sense of a series expansion of the survival function, a sufficient number of expansion terms is available to achieve a meaningful approximation to the actual survival function. Finally, the number of intermediate neurons is chosen, on the one hand, large enough that the results of the trained neural network are meaningful and, on the other hand, small enough that the time required to determine the result remains acceptable.
- the stimulation signal S_y acting on a monitored neuron N_y is usually formed by summing the response signals A_x of the neurons N_x arranged in the layers above this neuron N_y, where the contribution of each neuron N_x enters the sum with a weighting factor w_xy that indicates the strength of the synapse connecting the two neurons: S_y = Σ_x w_xy · A_x.
- the stimulation signals S_i of the input neurons N_i are formed from the input data x_ij relating to a specific patient j.
- Stimulation signal: S_i = x_ij
- the mean value S_i,mean over the patients j belonging to the training data set is formed.
- if the value of an input variable x_ij lies above the mean value S_i,mean, scaling is performed with respect to the 75% quartile; if it lies below the mean, scaling is performed with respect to the 25% quartile.
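The quartile scaling rule just described can be sketched as follows; this is a minimal illustration, and the exact scaling formula used in the patent may differ in detail:

```python
import numpy as np

def scale_input(x, x_train):
    """Scale the input values of one input neuron relative to the training
    population: deviations above the mean are divided by the distance from
    the mean to the 75% quartile, deviations below it by the distance from
    the mean to the 25% quartile (a sketch of the rule described above)."""
    mean = np.mean(x_train)
    q25, q75 = np.percentile(x_train, [25, 75])
    x = np.asarray(x, dtype=float)
    return np.where(
        x >= mean,
        (x - mean) / (q75 - mean),   # above the mean: scale by 75% quartile
        (x - mean) / (mean - q25),   # below the mean: scale by 25% quartile
    )
```

With this scaling, a value at the 75% quartile maps to +1 and a value at the 25% quartile maps to -1, which matches the standardized range produced by the tanh activation discussed next.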
- by using the hyperbolic tangent as activation function F_i, standardized response signals are obtained in a simple manner, whose values lie in the range from -1 to +1.
- This stimulation signal S_h is converted by the neurons N_h into a response signal A_h according to a predetermined activation function F_h, which in turn can be, for example, the hyperbolic tangent function: A_h = F_h(S_h - b_h)
- the parameters b_h are referred to in the field of neural networks as the bias of the neuron in question. Like the values of the synapse weights w_xy, the values of these bias parameters b_h are determined in the course of training the neural network.
- the stimulation signal S_o and the response signal A_o for a neuron N_o of the output layer are determined in an analogous manner:
- A_o = F_o(S_o - b_o)
- the parameters b_o in turn denote the bias of the neurons N_o of the output layer, while the parameters c serve to adjust the stimulation contributions of the neurons N_i of the input layer and N_h of the intermediate layer.
- the values of both the parameters b_o and the parameters c are determined in the training phase of the neural network. With regard to the biases b_o, it can be advantageous to require as a boundary condition that the response of all output neurons N_o, averaged over the entire set of training data, is zero.
- the response signals A_o of the output neurons N_o indicate the respective coefficients of the associated terms of the series expansion of the survival function sought.
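A minimal sketch of the forward pass described by the formulas above, assuming tanh activations throughout and one scalar adjustment parameter c per contributing layer (the patent's parameters c may be resolved more finely):

```python
import numpy as np

def forward(x, w_ih, b_h, w_ho, w_io, b_o, c_h=1.0, c_i=1.0):
    """Forward pass sketch: each stimulation signal S_y is the weighted sum
    of the response signals of the neurons above, and each response signal
    is A = tanh(S - b), as in the formulas above. w_io models the direct
    input-to-output synapses whose contributions, together with those of
    the hidden layer, are adjusted by the parameters c (scalars here)."""
    a_i = np.tanh(x)                      # response signals of input neurons
    s_h = w_ih.T @ a_i                    # stimulation of the hidden neurons
    a_h = np.tanh(s_h - b_h)              # response of the hidden neurons
    s_o = c_h * (w_ho.T @ a_h) + c_i * (w_io.T @ a_i)
    a_o = np.tanh(s_o - b_o)              # response of the output neurons
    return a_o
```

The returned values A_o are the coefficients of the series expansion of the survival function, as stated above.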
- a specific after-treatment which can include, for example, chemotherapy and / or radiation therapy.
- the survival function S (t) now indicates the probability that a particular event has not yet occurred in a patient under consideration at time t.
- This particular event can be, for example, a new cancer, or in the worst case, the death of the patient.
- S(0) = 1.
- S(∞) = 0 is usually assumed.
- an event density f(t) and a risk function λ(t) can be defined on the basis of the survival function S(t):
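These quantities are linked to the survival function by the standard relations of survival analysis:

```latex
f(t) = -\frac{\mathrm{d}S(t)}{\mathrm{d}t},
\qquad
\lambda(t) = \frac{f(t)}{S(t)},
\qquad
S(t) = \exp\!\left(-\int_0^t \lambda(\tau)\,\mathrm{d}\tau\right)
```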
- λ(t) = λ_0 · exp[Σ_o B_o(t) · A_o]
- the parameters A_o denote the response signals of the neurons N_o of the output layer of the neural network.
- λ_0 is a parameter which is independent of t and serves as a normalization factor.
- B_o(t) denotes a set of functions which, as basis functions of the series expansion, enable a good approximation to the actual course of the risk function.
- fractional polynomials, i.e. functions such as t^p (where p is not necessarily an integer), can be used as the function set B_o(t).
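A sketch of evaluating the risk function λ(t) = λ_0 · exp[Σ_o B_o(t) · A_o] with fractional-polynomial basis functions t^p; the exponents p chosen here are illustrative assumptions, not values from the patent:

```python
import numpy as np

def hazard(t, a_out, lam0=1.0, powers=(0.5, 1.0, 2.0)):
    """Risk function lambda(t) = lam0 * exp(sum_o B_o(t) * A_o), where the
    basis functions B_o(t) = t**p are fractional polynomials and a_out are
    the response signals A_o of the output neurons."""
    t = np.asarray(t, dtype=float)
    basis = np.stack([t**p for p in powers])   # B_o(t), one row per term
    return lam0 * np.exp(np.tensordot(a_out, basis, axes=1))

def survival(t_grid, a_out, lam0=1.0, powers=(0.5, 1.0, 2.0)):
    """S(t) = exp(-integral of lambda), approximated on a time grid with
    the trapezoidal rule."""
    lam = hazard(t_grid, a_out, lam0, powers)
    cum = np.concatenate(
        ([0.0], np.cumsum(0.5 * (lam[1:] + lam[:-1]) * np.diff(t_grid)))
    )
    return np.exp(-cum)
```

For a_out = 0 the hazard reduces to the constant lam0, giving the exponential survival curve S(t) = exp(-lam0 · t), a convenient sanity check.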
- the weights w_xy of the synapses and the other optimization parameters mentioned above under point 5.2 are set such that the survival function supplied by the neural network corresponds as closely as possible to the "actual survival function".
- a simplex in n-dimensional space is understood to mean a structure with (n + 1) corner points that surrounds the current base point in that space: a triangle in 2-dimensional space, a triangular pyramid in 3-dimensional space, and so on.
- where the (n + 1) corner points of the next cycle are arranged is determined on the basis of the properties of the optimization function at the corner points of the previous cycle.
- the main advantage of the simplex method is that it can be carried out using only the values of the optimization function, and that step size and step direction are determined automatically.
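The downhill-simplex idea can be sketched as follows; this is a compact textbook-style (Nelder-Mead) variant with standard reflection, expansion, contraction and shrink coefficients, not the patent's exact procedure:

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, iters=200):
    """Minimal downhill-simplex sketch: only the values of the optimization
    function f are used (no derivatives); the simplex of n+1 corner points
    is moved by reflection, expansion, contraction and shrinking, so that
    step size and step direction adjust automatically."""
    n = len(x0)
    # initial simplex: base point plus one offset corner per dimension
    pts = [np.array(x0, float)] + [
        np.array(x0, float) + step * np.eye(n)[i] for i in range(n)
    ]
    for _ in range(iters):
        pts.sort(key=f)                     # order corners by function value
        best, worst = pts[0], pts[-1]
        centroid = np.mean(pts[:-1], axis=0)
        refl = centroid + (centroid - worst)          # reflect worst corner
        if f(refl) < f(best):
            expd = centroid + 2.0 * (centroid - worst)   # try expansion
            pts[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(pts[-2]):
            pts[-1] = refl
        else:
            contr = centroid + 0.5 * (worst - centroid)  # contraction
            if f(contr) < f(worst):
                pts[-1] = contr
            else:                                        # shrink toward best
                pts = [best] + [best + 0.5 * (p - best) for p in pts[1:]]
    return min(pts, key=f)
```

In the training described here, f would be the (negative log-)likelihood of the network as a function of the synapse weights and bias parameters.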
- in a next training step, it is examined whether the structure of the neural network can be simplified on the basis of the knowledge gained so far.
- This "pruning" is therefore about examining the question of which of the synapses have so little influence on the overall function of the neural network that they can be dispensed with. In the simplest case, this can be done, for example, by setting the weight assigned to them once and for all to zero. In principle, however, it is also conceivable to "freeze" the weight of the synapse in question to a fixed value.
- simplex optimization steps and structure simplification steps should alternate in an iterative process.
- the value of the likelihood function is first calculated as a reference value on the basis of the entire structure of the neural network in its current training state, i.e. using the current values of the weights of all synapses. Then the influence of a given synapse is suppressed, i.e. the value of the weight of this synapse is set to zero. The value of the likelihood function is then calculated for the network structure thus simplified, and the ratio of this value to the reference value is formed.
- the chi-squared test, known per se (cf. point 6, "Bibliography"), can be used, for example, as a significance test. Alternatively, this significance test could also be carried out using the likewise known bootstrapping method (cf. point 6, "Bibliography").
- the chi-squared test is particularly suitable when the response of the neural network is determined on the basis of a likelihood function.
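A sketch of the likelihood-based pruning decision: assuming that suppressing one synapse removes one free parameter, the likelihood ratio is compared against a chi-squared quantile with one degree of freedom (the significance level used here is an illustrative choice):

```python
def prune_decision(loglik_full, loglik_pruned, alpha=0.05):
    """Likelihood-ratio test sketch: setting one synapse weight to zero
    removes one free parameter, so 2*(logL_full - logL_pruned) is compared
    against the chi-squared critical value with 1 degree of freedom.
    Returns True if the synapse may be pruned (no significant loss)."""
    stat = 2.0 * (loglik_full - loglik_pruned)
    # chi-squared(1) critical value for alpha = 0.05 is about 3.841
    if alpha != 0.05:
        raise ValueError("only alpha = 0.05 is tabulated in this sketch")
    return stat < 3.841
```

A synapse whose suppression barely changes the log-likelihood passes the test and can be eliminated; a synapse whose suppression degrades the likelihood significantly is kept.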
- the bootstrapping method is also suitable for other types of functions representing the response of the neural network.
- the exclusion or suppression of synapses by the correlation method is based on the consideration that it could be possible that two neurons arranged in one and the same layer have the same qualitative influence on a neuron in a layer below.
- the response of the neural network, more precisely the response signal of this last-mentioned neuron, should then essentially not change if this neuron is stimulated by only one of the two neurons arranged above it and the influence of the second neuron is taken into account by strengthening the remaining synapse.
- the synapse leading from the second neuron to the neuron under consideration could then be dispensed with.
- assuming that the response signals A_1 and A_2 of the two transmitting neurons are at least approximately linearly correlated with one another, A_2 ≈ m · A_1 + n, and that the weight w_1o is greater than the weight w_2o, the following applies to the stimulation signal S_o: w_1o · A_1 + w_2o · A_2 ≈ (w_1o + m · w_2o) · A_1 + n · w_2o.
- likewise, assuming that the response signals A_1 and A_2 are at least approximately correlated and that the weight w_1h is greater than the weight w_2h, the following applies to the stimulation signal S_h: w_1h · A_1 + w_2h · A_2 ≈ (w_1h + m · w_2h) · A_1 + n · w_2h.
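The correlation-based weight adaptation above can be sketched as follows, assuming the linear relationship A_2 ≈ m · A_1 + n is estimated by least squares (the patent does not prescribe a particular estimation method):

```python
import numpy as np

def merge_correlated_synapses(w1, w2, a1, a2):
    """Correlation-based pruning sketch: fit A2 ~ m*A1 + n by least squares;
    the synapse carrying A2 (weight w2) is interrupted and its influence is
    folded into the remaining synapse: w1 -> w1 + m*w2. The constant part
    n*w2 is returned separately, to be absorbed into the bias of the
    receiving neuron."""
    m, n = np.polyfit(a1, a2, 1)        # slope and intercept of A2 vs A1
    w1_new = w1 + m * w2                # strengthened remaining synapse
    bias_shift = n * w2                 # constant term for the bias
    return w1_new, bias_shift
```

If the reaction of the network with the merged synapse stays within the predetermined tolerance, the interruption is kept, as in steps a4) and a5) of the claimed method.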
- the thinning out of the structure of the neural network described above can result in individual neurons no longer being connected to any other neuron. This is the case, for example, if an input neuron is not connected to any intermediate neuron or output neuron, or if an output neuron is not connected to any intermediate neuron or input neuron. It is therefore only logical if these neurons, which no longer have any influence on the functioning of the neural network, are completely deactivated.
- a special case is formed by intermediate neurons, which are still connected to neurons of the input layer, but not to neurons of the output layer. These intermediate neurons can no longer exert any influence on the function of the neural network. Therefore, the synapses leading from the input layer to these intermediate neurons can also be suppressed, i.e. the weights of these synapses are set to zero.
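The deactivation rules above can be sketched on a weight-matrix representation; the matrix layout is an illustrative assumption, and direct input-to-output synapses are ignored for brevity:

```python
import numpy as np

def deactivate_dead_neurons(w_ih, w_ho):
    """Sketch: w_ih[i, h] is the synapse weight from input neuron i to
    intermediate neuron h, w_ho[h, o] from intermediate neuron h to output
    neuron o (a zero weight means a suppressed synapse). An intermediate
    neuron that no longer reaches the output layer cannot influence the
    network, so its incoming synapses from the input layer are suppressed
    as well; input neurons left without any synapse are reported as dead."""
    w_ih, w_ho = w_ih.copy(), w_ho.copy()
    dead_hidden = ~np.any(w_ho != 0, axis=1)   # no synapse to any output
    w_ih[:, dead_hidden] = 0.0                 # cut their incoming synapses
    dead_inputs = ~np.any(w_ih != 0, axis=1)   # input neurons now isolated
    return w_ih, w_ho, dead_hidden, dead_inputs
```

Applying this after each pruning pass implements the rule that neurons without remaining connections are completely deactivated.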
- the above-mentioned generalization data set is used, which has no influence on the training of the neural network and thus enables an objective statement.
- Bostwick DG (1998), 'Practical clinical application of predictive factors in prostate cancer: A review with an emphasis on quantitative methods in tissue specimens', Anal Quant Cytol Histol, Oct;20(5):323-42. Review.
Abstract
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP00962377A EP1232478B1 (de) | 1999-08-26 | 2000-08-24 | Verfahren zum trainieren eines neuronalen netzes |
AT00962377T ATE516558T1 (de) | 1999-08-26 | 2000-08-24 | Verfahren zum trainieren eines neuronalen netzes |
US10/049,650 US6968327B1 (en) | 1999-08-26 | 2000-08-24 | Method for training a neural network |
AU74130/00A AU7413000A (en) | 1999-08-26 | 2000-08-24 | Method for training a neural network |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE19940577.8 | 1999-08-26 | ||
DE19940577A DE19940577A1 (de) | 1999-08-26 | 1999-08-26 | Verfahren zum Trainieren eines neuronalen Netzes |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2001015078A2 true WO2001015078A2 (de) | 2001-03-01 |
WO2001015078A3 WO2001015078A3 (de) | 2002-06-27 |
Family
ID=7919734
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2000/008280 WO2001015078A2 (de) | 1999-08-26 | 2000-08-24 | Verfahren zum trainieren eines neuronalen netzes |
Country Status (6)
Country | Link |
---|---|
US (1) | US6968327B1 (de) |
EP (1) | EP1232478B1 (de) |
AT (1) | ATE516558T1 (de) |
AU (1) | AU7413000A (de) |
DE (1) | DE19940577A1 (de) |
WO (1) | WO2001015078A2 (de) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004061548A2 (en) * | 2003-01-07 | 2004-07-22 | Ramot At Tel Aviv University Ltd. | Identification of effective elements in complex systems |
DE102004033614A1 (de) * | 2004-07-12 | 2006-02-09 | Emedics Gmbh | Einrichtung und Verfahren zum Abschätzen einer Auftretenswahrscheinlichkeit einer Gesundheitsstörung |
DE102007008514A1 (de) * | 2007-02-21 | 2008-09-04 | Siemens Ag | Verfahren und Vorrichtung zur neuronalen Steuerung und/oder Regelung |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7395248B2 (en) * | 2000-12-07 | 2008-07-01 | Kates Ronald E | Method for determining competing risks |
US6917926B2 (en) * | 2001-06-15 | 2005-07-12 | Medical Scientists, Inc. | Machine learning method |
DE10345440A1 (de) * | 2003-09-30 | 2005-05-12 | Siemens Ag | Verfahren, Computerprogramm mit Programmcode-Mitteln und Computerprogramm-Produkt zur Analyse von Einflussgrößen auf einen Brennvorgang in einer Brennkammer unter Verwendung eines trainierbaren, statistischen Modells |
US7814038B1 (en) | 2007-12-06 | 2010-10-12 | Dominic John Repici | Feedback-tolerant method and device producing weight-adjustment factors for pre-synaptic neurons in artificial neural networks |
US20090276385A1 (en) * | 2008-04-30 | 2009-11-05 | Stanley Hill | Artificial-Neural-Networks Training Artificial-Neural-Networks |
US9015096B2 (en) | 2012-05-30 | 2015-04-21 | Qualcomm Incorporated | Continuous time spiking neural network event-based simulation that schedules co-pending events using an indexable list of nodes |
CN106339755B (zh) * | 2016-08-29 | 2018-09-21 | 深圳市计量质量检测研究院 | 基于神经网络与周期核函数gpr的锂电池健康状态预测方法 |
US20210174154A1 (en) * | 2018-08-07 | 2021-06-10 | Yale University | Interpretable deep machine learning for clinical radiology |
EP3977402A1 (de) | 2019-05-28 | 2022-04-06 | PAIGE.AI, Inc. | Systeme und verfahren zur bildverarbeitung zur vorbereitung von objektträgern für verarbeitete bilder für die digitale pathologie |
CN113884903B (zh) * | 2021-10-19 | 2023-08-18 | 中国计量大学 | 一种基于多层感知器神经网络的电池故障诊断方法 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5734797A (en) | 1996-08-23 | 1998-03-31 | The United States Of America As Represented By The Secretary Of The Navy | System and method for determining class discrimination features |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03288285A (ja) * | 1990-04-04 | 1991-12-18 | Takayama:Kk | データ処理装置の学習方法 |
AU5547794A (en) * | 1992-11-02 | 1994-05-24 | Boston University | Neural networks with subdivision |
US6601051B1 (en) * | 1993-08-09 | 2003-07-29 | Maryland Technology Corporation | Neural systems with range reducers and/or extenders |
US5812992A (en) * | 1995-05-24 | 1998-09-22 | David Sarnoff Research Center Inc. | Method and system for training a neural network with adaptive weight updating and adaptive pruning in principal component space |
US6594629B1 (en) * | 1999-08-06 | 2003-07-15 | International Business Machines Corporation | Methods and apparatus for audio-visual speech detection and recognition |
-
1999
- 1999-08-26 DE DE19940577A patent/DE19940577A1/de not_active Withdrawn
-
2000
- 2000-08-24 US US10/049,650 patent/US6968327B1/en not_active Expired - Lifetime
- 2000-08-24 EP EP00962377A patent/EP1232478B1/de not_active Expired - Lifetime
- 2000-08-24 WO PCT/EP2000/008280 patent/WO2001015078A2/de active Application Filing
- 2000-08-24 AT AT00962377T patent/ATE516558T1/de active
- 2000-08-24 AU AU74130/00A patent/AU7413000A/en not_active Abandoned
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5734797A (en) | 1996-08-23 | 1998-03-31 | The United States Of America As Represented By The Secretary Of The Navy | System and method for determining class discrimination features |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004061548A2 (en) * | 2003-01-07 | 2004-07-22 | Ramot At Tel Aviv University Ltd. | Identification of effective elements in complex systems |
WO2004061548A3 (en) * | 2003-01-07 | 2005-10-13 | Univ Ramot | Identification of effective elements in complex systems |
DE102004033614A1 (de) * | 2004-07-12 | 2006-02-09 | Emedics Gmbh | Einrichtung und Verfahren zum Abschätzen einer Auftretenswahrscheinlichkeit einer Gesundheitsstörung |
DE102007008514A1 (de) * | 2007-02-21 | 2008-09-04 | Siemens Ag | Verfahren und Vorrichtung zur neuronalen Steuerung und/oder Regelung |
Also Published As
Publication number | Publication date |
---|---|
EP1232478A2 (de) | 2002-08-21 |
ATE516558T1 (de) | 2011-07-15 |
EP1232478B1 (de) | 2011-07-13 |
AU7413000A (en) | 2001-03-19 |
WO2001015078A3 (de) | 2002-06-27 |
US6968327B1 (en) | 2005-11-22 |
DE19940577A1 (de) | 2001-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1232478B1 (de) | Verfahren zum trainieren eines neuronalen netzes | |
EP0538739B1 (de) | Vorrichtung zur Bestimmung des Gesundheitszustandes eines Lebewesens | |
DE60015074T2 (de) | Verfahren und vorrichtung zur beobachtung der therapieeffektivität | |
EP1589438A2 (de) | Verfahren und Vorrichtung zum Überwachen einer Konzentration eines Analyten im lebenden Körper eines Menschen oder Tieres | |
DE102016215109A1 (de) | Verfahren und Datenverarbeitungseinheit zum Optimieren eines Bildrekonstruktionsalgorithmus | |
EP1384199A2 (de) | Verfahren zur ermittlung konkurrierender risiken | |
EP3619631A1 (de) | Verbesserungen bei der radiologischen erkennung chronisch thromboembolischer pulmonaler hypertonie | |
WO2005081161A2 (de) | Verfahren zur qualitätskontrolle von je an unterschiedlichen, aber vergleichbaren patientenkollektiven im rahmen eines medizinischen vorhabens erhobenen medizinischen datensätzen | |
DE112019005902T5 (de) | Ähnlichkeitsbestimmungsvorrichtung, ähnlichkeitsbestimmungsverfahren und ähnlichkeitsbestimmungsprogramm | |
WO2003054794A2 (de) | Auswerten von mittels funktionaler magnet-resonanz-tomographie gewonnenen bildern des gehirns | |
WO2022096297A1 (de) | Computerimplementiertes verfahren und vorrichtung zum durchführen einer medizinischen laborwertanalyse | |
DE102020206059A1 (de) | Computerimplementiertes Verfahren und System zum Trainieren eines Auswertungsalgorithmus, Computerprogramm und elektronisch lesbarer Datenträger | |
Wei et al. | Applying dimensional psychopathology: transdiagnostic prediction of executive cognition using brain connectivity and inflammatory biomarkers | |
DE69633681T2 (de) | Verfahren zur feststellung von reperfusion nach thrombolytischer therapie | |
Xu et al. | Non‐parametric estimation of the post‐lead‐time survival distribution of screen‐detected cancer cases | |
DE102017127857B4 (de) | Verwendung des ADIPOQ Genexpressionslevels zur Einordnung eines Probanden in Risikogruppen bei der Prognose oder Diagnose einer Diabetes mellitus Typ II Erkrankung | |
EP3454341A1 (de) | Automatisiertes verarbeiten von patientendaten zur gesundheitsbetreuung | |
DE102005058332A1 (de) | Verfahren zur Ermittlung der Regulationsfähigkeit biologischer Systeme | |
Stram et al. | Recent uses of biological data for the evaluation of A-bomb radiation dosimetry | |
EP1014849A1 (de) | Verfahren zur bewertung einer infolge einer lokalen durchstrahlung eines lebewesens erhaltenen streulichtverteilung durch kennwert-ermittlung | |
EP0646261B1 (de) | Verfahren und einrichtung zur analyse von hochdynamischen sekretionsphänomenen von hormonen in biologischen dynamischen systemen mittels biosensoren | |
EP1519303A2 (de) | Vorrichtung zur Klassifikation physiologischer Ereignisse | |
DE112022001973T5 (de) | Vorhersage von medizinischen ereignissen mit hilfe eines personalisierten zweikanal-kombinator-netzwerks | |
CN115546150A (zh) | 一种基于去噪卷积神经网络的蒙特卡罗剂量计算去噪方法、系统及设备 | |
Fröhlich | Same, same but different: Cognitive and neural mechanisms underlying basic subprocesses of reading in younger and older adults |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2000962377 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10049650 Country of ref document: US |
|
AK | Designated states |
Kind code of ref document: A3 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
WWP | Wipo information: published in national office |
Ref document number: 2000962377 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: JP |