US20080163035A1 - Method for Data Distribution and Data Distribution Unit in a Multiprocessor System - Google Patents


Info

Publication number
US20080163035A1
US20080163035A1 (application US 11/666,406)
Authority
US
United States
Prior art keywords: data, recited, mode, processing units, switchover
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/666,406
Inventor
Thomas Kottke
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Priority claimed from DE200410051964 external-priority patent/DE102004051964A1/en
Priority claimed from DE200410051950 external-priority patent/DE102004051950A1/en
Priority claimed from DE200410051952 external-priority patent/DE102004051952A1/en
Priority claimed from DE200410051937 external-priority patent/DE102004051937A1/en
Priority claimed from DE200410051992 external-priority patent/DE102004051992A1/en
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Assigned to ROBERT BOSCH GMBH (assignment of assignors interest; see document for details). Assignors: KOTTKE, THOMAS
Publication of US20080163035A1 publication Critical patent/US20080163035A1/en

Classifications

    • G06F9/30181 Instruction operation extension or modification
    • G06F9/30189 Instruction operation extension or modification according to execution mode, e.g. mode flag
    • G06F9/46 Multiprogramming arrangements
    • G06F11/16 Error detection or correction of the data by redundancy in hardware
    • G06F11/1641 Error detection by comparing the output of redundant processing systems where the comparison is not performed by the redundant processing components
    • G06F11/1695 Error detection or correction of the data by redundancy in hardware, operating with time diversity
    • G06F11/30 Monitoring
    • G06F12/16 Protection against loss of memory contents
    • G06F9/3851 Instruction issuing from multiple instruction streams, e.g. multistreaming
    • G06F9/3885 Concurrent instruction execution using a plurality of independent parallel functional units
    • G06F2201/845 Systems in which the redundancy can be transformed in increased performance

Definitions

  • In error detection mode, the unit receives the data/addresses of one processor (here referred to as the “master”) and relays them to the components such as memories, bus, etc.
  • The second processor (here the “slave”) intends to access the same device.
  • The data distribution unit receives this request at a second port, but does not relay it to the other components.
  • The data distribution unit transmits the same data to both slave and master and compares the data of the two processors. If they are different, the data distribution unit (here DDU) indicates this via an error signal. Therefore, only the master operates the bus/memory and the slave receives the same data (operating mode as in the case of a dual-core system).
  • In performance mode, both processors process different program portions.
  • The memory accesses are therefore also different.
  • The DDU therefore receives the requests of the processors and returns the results/requested data to the processor that requested them. If both processors intend to access the same component at the same time, one processor is set to a wait state until the other one has been served.
  • Switchover between the two modes and thus between the different types of operation of the data distribution unit takes place via a control signal, which may be generated by one of the two processors or externally.
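As a rough illustration of these two behaviors, the following Python sketch models the DDU's data distribution. All class and member names are illustrative assumptions, not from the patent: in lock (F) mode the master's request drives the memory while the slave's mirrored request is only compared; in split (P) mode both requests are served independently.

```python
# Hypothetical sketch of the DDU behaviors described above.
# Names are illustrative assumptions, not from the patent.

class DataDistributionUnit:
    def __init__(self, memory):
        self.memory = memory        # single shared memory/peripheral
        self.mode = "lock"          # "lock" = F mode, "split" = P mode
        self.error = False          # error signal output

    def access(self, master_addr, slave_addr):
        if self.mode == "lock":
            # Only the master drives the bus/memory; the slave's mirrored
            # request arrives at a second port and is merely compared.
            if master_addr != slave_addr:
                self.error = True   # divergence -> raise the error signal
                return None, None
            value = self.memory[master_addr]
            return value, value     # same data to master and slave
        # Split mode: both requests are served independently.
        return self.memory[master_addr], self.memory[slave_addr]
```

Arbitration of simultaneous split-mode accesses to the same component (the wait state mentioned above) is omitted here for brevity.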
  • Switchover is advantageously triggered and/or indicated by a control signal, a mode signal in particular, which refers to the operating mode of at least one processing unit, the control signal being generated externally in particular with reference to the processing units.
  • Switchover is triggered and/or indicated by an instruction, e.g., an instruction which describes an illegal operation (IllOp), the instruction being generated by the switchover means, the mode switch unit in particular.
  • Input data of both processing units are advantageously compared for agreement in an operating mode which corresponds to a safety mode (F mode) and/or also output data of both processing units are compared for agreement in an operating mode which corresponds to a safety mode (F mode).
  • the data to be distributed are advantageously relayed to at least one additional component, a processing unit for example, the data to be distributed being extended by an error detection code prior to relaying.
  • the input data may also be relayed to at least one additional component, a processing unit in particular, the input data being extended by an error detection code prior to relaying.
  • the output data may also be relayed to at least one additional component, the output data being extended by an error detection code prior to relaying.
  • an error signal is advantageously output if an error is detected thanks to the error detection code.
  • an error signal is output only in the safety mode (F mode).
  • According to the present invention, a delay component may be included which delays the leading data by the clock pulse offset present between the two processing units in the particular operating mode.
  • FIG. 1 shows a schematic illustration of a dual-core computer system.
  • FIG. 2 shows an example embodiment of the data distribution unit according to the present invention.
  • the data to be distributed are advantageously read from a memory and then distributed to the processing units.
  • Data distribution is advantageously controlled by state machines, two state machines being provided for each processing unit. They are advantageously configured as one synchronous state machine and one asynchronous state machine.
  • a system is provided with such a unit according to the present invention, a monitoring circuit external to the unit being also provided, which detects an error if an intended switchover between the operating modes does not take place.
  • the DDU unit delays the data for the slave as needed, i.e., it stores the master's output data until they may be compared to the slave's output data for error detection.
  • the clock pulse offset is elucidated in more detail with reference to FIG. 1 .
  • FIG. 1 shows a dual-core system having a first computer 100 , in particular a master computer and a second computer 101 , in particular a slave computer.
  • the entire system is operated at a predefinable clock pulse, i.e., in predefinable clock cycles CLK.
  • the clock pulse is supplied to the computers via clock input CLK 1 of computer 100 and clock input CLK 2 of computer 101 .
  • first computer 100 and second computer 101 operate at a predefinable time offset or a predefinable clock pulse offset. Any desired time period may be defined for a time offset, and also any desired clock pulse regarding an offset of the clock pulses.
  • This may be an offset by an integral number of clock pulses, but also, as shown in this example, an offset by 1.5 clock pulses, first computer 100 working, i.e., being operated here 1.5 clock pulses ahead of second computer 101 .
  • This offset may prevent common mode failures from interfering with the computers or processors, i.e., the cores of the dual-core system, in the same way and thus from remaining undetected.
  • due to the offset such common mode failures affect the computers at different points in time during the program run and thus have different effects for the two computers, which makes errors detectable. Under certain circumstances, effects of errors of the same type would not be detectable in a comparison without a clock pulse offset; this is avoided by the method according to the present invention.
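The benefit of the offset can be shown with a small simulation. This is an assumption-laden model, not the patent's circuit: the leading core's outputs are buffered for `offset` cycles and compared against the trailing core's outputs, and a transient glitch injected at a single absolute point in time is detected only when an offset is present.

```python
from collections import deque

# Illustrative model of time-diverse redundancy: the "master" runs
# `offset` cycles ahead of the "slave", so its outputs are buffered and
# compared later. A glitch at one absolute time hits different program
# points in the two cores and therefore produces differing outputs.

def run_with_offset(program, offset, glitch_at=None):
    buffer = deque()
    for t, value in enumerate(program):
        master_out = value + (1 if glitch_at == t else 0)
        buffer.append(master_out)           # delay the master's output
        slave_t = t - offset                # slave lags by `offset`
        if slave_t >= 0:
            slave_out = program[slave_t] + (1 if glitch_at == t else 0)
            if buffer.popleft() != slave_out:
                return "error"              # comparator fires
    return "ok"
```

With `offset=0` the same glitch corrupts both cores identically and goes undetected, which is exactly the common mode failure scenario described above.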
  • To implement this offset of 1.5 clock pulses in this particular case of a dual-core system, offset modules 112 through 115 are provided.
  • this system is designed to operate at a predefined time offset or clock pulse offset, here of 1.5 clock pulses, i.e., while one of the computers, e.g., computer 100 , is directly addressing external components 103 and 104 in particular, second computer 101 is running with a delay of exactly 1.5 clock pulses.
  • computer 101 is supplied with the inverted clock signal at clock input CLK 2 .
  • the above-mentioned terminals of the computer i.e., its data and/or instructions, must therefore also be delayed by the above-mentioned clock pulses, here 1.5 clock pulses in particular; as mentioned previously offset or delay modules 112 through 115 are provided for this purpose.
  • components 103 and 104 are provided, which are connected to the two computers 100 and 101 via bus 116 , having bus lines 116 A, 116 B, and 116 C, and bus 117 , having bus lines 117 A and 117 B.
  • Bus 117 is an instruction bus, 117 A being an instruction address bus and 117 B being the partial instruction (data) bus.
  • Address bus 117 A is connected to computer 100 via an instruction address terminal IA 1 (instruction address 1 ) and to computer 101 via an instruction address terminal IA 2 (instruction address 2 ).
  • the instructions proper are transmitted via partial instruction bus 117 B, which is connected to computer 100 via an instruction terminal I 1 (instruction 1 ) and to computer 101 via an instruction terminal I 2 (instruction 2 ).
  • Into this instruction bus 117, having lines 117A and 117B, one component 103, an instruction memory, for example, a safe instruction memory in particular or the like, is connected. This component, in particular as an instruction memory, is also operated at clock rate CLK in this example.
  • a data bus 116 has a data address bus or data address line 116 A and a data bus or data line 116 B.
  • Data address bus or data address line 116 A is connected to computer 100 via a data address terminal DA 1 (data address 1 ) and to computer 101 via a data address terminal DA 2 (data address 2 ).
  • data bus or data line 116 B is connected to computer 100 via a data terminal DO 1 (data out 1 ) and to computer 101 via a data terminal DO 2 (data out 2 ).
  • data bus 116 has data bus line 116 C, which is connected to computer 100 via a data terminal DI 1 (data in 1 ) and to computer 101 via a data terminal DI 2 (data in 2 ).
  • Into this data bus 116, a component 104, a data memory for example, a safe data memory in particular or the like, is connected.
  • This component 104 is also supplied with clock cycle CLK.
  • Components 103 and 104 represent any components that are connected to the computers of the dual-core system via a data bus and/or instruction bus and that may receive or output erroneous data and/or instructions in response to read and/or write accesses by the dual-core system.
  • Error identifier generators 105 , 106 , and 107 which generate an error identifier such as a parity bit, or another error code such as an error correction code (ECC), or the like, are provided for error prevention.
  • appropriate error identifier checking devices 108 and 109 are also provided for checking the particular error identifier, i.e., the parity bit or another error code such as ECC, for example.
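A minimal sketch of such an error identifier, assuming a simple even-parity scheme (the text equally allows other codes such as ECC; the function names are illustrative):

```python
# Even parity over a data word: the generator (units 105-107) appends
# the parity bit, the checker (units 108, 109) recomputes and compares.

def parity_bit(word: int) -> int:
    return bin(word).count("1") % 2

def encode(word: int):
    return word, parity_bit(word)

def check(word: int, p: int) -> bool:
    return parity_bit(word) == p
```

A single flipped bit in the word changes its parity and is therefore caught by `check`; multi-bit errors would require a stronger code such as ECC.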
  • During this time or clock pulse offset, a computer, in this case in particular computer 100, may write erroneous data and/or instructions into components, in particular external components such as memories 103 or 104, or read them from these components, and the same applies with regard to other users or actuators or sensors.
  • a delay unit 102 is connected into the lines of the data bus and/or into the instruction bus. For the sake of clarity, only connection into the data bus is depicted. Of course, connection into the instruction bus is also possible and conceivable.
  • This delay unit 102 delays the accesses, the memory accesses in particular in this case, so that a possible time offset or clock pulse offset is compensated, in particular in the case of an error detection, for example, via comparators 110 and 111 , at least until the error signal is generated in the dual-core system, i.e., the error is detected in the dual-core system.
  • Different variants may be implemented:
  • a delayed write operation may then be converted into a read operation via a change signal, e.g., the error signal, in order to avoid erroneous writing.
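This variant can be sketched as follows. The pending-operation record and the `commit` helper are assumptions for illustration: a delayed write that coincides with an active error signal is downgraded to a read so that no erroneous data are committed.

```python
# Sketch: a delayed bus operation is held as (address, kind, value);
# an active error (change) signal converts a pending write into a read.

def commit(pending_op, error_signal, memory):
    addr, kind, value = pending_op
    if error_signal and kind == "write":
        kind = "read"           # avoid writing possibly erroneous data
    if kind == "write":
        memory[addr] = value
        return None
    return memory.get(addr)     # harmless read instead
```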
  • Switch detect: Switchover between the two modes is detected by the “switch detect” units.
  • the unit is situated between the cache and the processor on the instruction bus and shows whether the IllOp instruction is loaded into the processor. If the instruction is detected, this event is communicated to the mode switch unit.
  • the switch detect unit is provided separately for each processor. The switch detect unit does not have to have an error-tolerant design, since it is present in duplicate, i.e., redundantly. It is also conceivable to design this unit to be error-tolerant and thus without redundancy.
  • ModeSwitch: Switchover between the two modes is triggered by the “switch detect” unit. If a switchover is to be performed from lock mode to split mode, both switch detect units detect the switchover, since both processors are processing the same program code in the lock mode.
  • Due to the clock pulse offset, the switch detect unit of processor 1 detects the switchover 1.5 clock pulses before the switch detect unit of processor 2.
  • the mode switch unit stops processor 1 for two pulses with the aid of the wait signal.
  • Processor 2 is also stopped 1.5 clock pulses later, but only for one-half of a clock pulse, thus being synchronized to the system clock.
  • the status signal is subsequently switched to split for the other components, and the two processors continue to operate. For the two processors to execute different tasks, they must diverge in the program code.
  • The processor ID read is different for each of the two processors. If a comparison is to be made with a reference processor ID, the corresponding processor may be brought to another program point using a conditional jump instruction. When switching over from split mode to lock mode, one processor notices this before the other. This processor will execute program code containing the switchover instruction. This is now registered by the switch detect unit, which informs the mode switch unit accordingly. The mode switch unit stops the corresponding processor and informs the second one of the synchronization intent via an interrupt. The second processor receives an interrupt and may now execute a software routine to terminate its task. It then jumps to the program point where the switchover instruction is located.
  • Its switch detect unit now also signals the intent to change modes to the mode switch unit.
  • the wait signal is deactivated for processor 1 and, 1.5 clock pulses later, for processor 2 .
  • both processors work synchronously with a clock pulse offset of 1.5 clock pulses.
  • For a switchover, both switch detect units must inform the mode switch unit that they intend to switch to the split mode. If the switchover intent is communicated by only one unit, the error is detected by the comparator units, since these continue to receive data from one of the two processors, and these data differ from those of the stopped processor.
  • If both processors are in the split mode and one does not switch back to the lock mode, this may be detected by an external watchdog.
  • the watchdog notices that the waiting processor is no longer sending messages. If there is only one watchdog signal for the processor system, the watchdog may only be triggered in the lock mode. The watchdog would thus detect that no mode switchover has taken place.
  • The mode signal is in the form of a dual-rail signal, where 10 stands for the lock mode and 01 for the split mode; 00 and 11 indicate errors.
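A decoder for this dual-rail encoding might look as follows (a trivial sketch; the function and parameter names are illustrative):

```python
# Dual-rail mode signal: "10" = lock mode, "01" = split mode.
# Equal rails ("00"/"11") indicate a fault, e.g. a stuck-at error.

def decode_mode(rail_a: int, rail_b: int) -> str:
    if (rail_a, rail_b) == (1, 0):
        return "lock"
    if (rail_a, rail_b) == (0, 1):
        return "split"
    return "error"
```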
  • IramControl: Access to the instruction memory of both processors is controlled via the IRAM control, which must have a reliable design, since it is a single point of failure. It has two state machines for each processor: a synchronous state machine iram1clkreset and an asynchronous state machine readiram1. In the safety-critical mode, the state machines of the two processors monitor one another, and in the performance mode they operate separately.
  • Reloading of the two caches of the processors is controlled by two state machines, one synchronous state machine iramclkreset and an asynchronous state machine readiram. These two state machines divide the memory accesses in the split mode. Processor 1 has the higher priority. After an access to the main memory by processor 1 , if both processors now intend to access the main memory, processor 2 receives the memory access permission. These two state machines are implemented for each processor. In the lock mode, the output signals of the state machines are compared in order to detect the occurrence of any error.
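The split-mode arbitration described above can be sketched as follows. This is an illustrative model, not the patent's state machines: processor 1 wins a simultaneous request by default, but immediately after it has been served, a new simultaneous request is granted to processor 2.

```python
# Sketch of the memory-access arbitration: processor 1 has priority,
# with a hand-over to processor 2 right after processor 1 was served.

class Arbiter:
    def __init__(self):
        self.last = None            # which processor was served last

    def grant(self, req1: bool, req2: bool):
        if req1 and req2:
            winner = 2 if self.last == 1 else 1
        elif req1:
            winner = 1
        elif req2:
            winner = 2
        else:
            return None             # no request pending
        self.last = winner
        return winner
```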
  • the data for updating cache 2 in the lock mode are delayed by 1.5 clock pulses in the IRAM control unit.
  • In bit 5 of register 0 of SysControl, the identity of the core is encoded. In the case of core 1 the bit is 0, and in the case of core 2 it is high. This register is mirrored in the memory area having the address 65528.
  • the program counter of processor 1 is delayed by 3.5 clock pulses to enable a comparison with the program counter of processor 2 in the lock mode.
  • the caches of both processors may be reloaded separately. If a switchover into the lock mode is performed, the two caches are not coherent with respect to one another. This may cause the two processors to diverge and the comparators to thus signal an error.
  • a flag table is constructed in the IRAM control, where it is noted whether a cache line has been written in the lock mode or in the split mode. When the cache is reloaded in the lock mode, the entry corresponding to the cache line is set at 0, and when it is reloaded in the split mode or when the cache line of a single cache is updated, it is set at 1.
  • The entry thus indicates whether the cache line has been updated in the lock mode, i.e., whether it is identical in the two caches.
  • the processor may always access the cache line, regardless of the status of the Flag Vector. This table must be present only once, since in the event of an error, the two processors diverge and thus this error is reliably detected by the comparators. Since the access times to the central table are relatively long, this table may also be copied to each cache.
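The flag table can be sketched as follows. The interface and the conservative reset value are assumptions; only the convention of 0 for a lock-mode fill and 1 for a split-mode fill comes from the text above.

```python
# One flag bit per cache line: 0 = line was last filled in lock mode
# (identical in both caches), 1 = filled in split mode (may diverge).

class FlagVector:
    def __init__(self, n_lines: int):
        self.flags = [1] * n_lines      # assume "split" (unsafe) at reset

    def on_refill(self, line: int, mode: str):
        self.flags[line] = 0 if mode == "lock" else 1

    def coherent(self, line: int) -> bool:
        # True if the line is guaranteed identical in the two caches.
        return self.flags[line] == 0
```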
  • DramControl: The parity is formed in this component for the address, data, and memory control signals of each processor.
  • the split mode state is in turn subdivided into seven states which resolve the access conflicts and are able to lock the data memory for the other processor.
  • The order of execution simultaneously represents the priorities.
  • The DDU comprises the switchover intent detector (IllOPDetect), the mode switch unit, and the IRAM and DRAM control.
  • The core of the invention is the general mode of operation of the data distribution unit DDU (different data assignment, and thus also operating mode selection, depending on the mode).

Abstract

A unit and method for distributing data from at least one data source in a system provided with at least two computer units, containing switching means which are used to switch between at least two operating modes of the system, wherein data distribution and/or selection of a data source is dependent upon the operating mode.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to a device and a method for data distribution from at least one data source in a multiprocessor system.
  • 2. Description of Related Art
  • In technical applications such as in the automobile industry or in the industrial goods industry in particular, i.e., in mechanical engineering and automation, more and more microprocessor-based or computer-based control and regulating systems are being used for applications critical with regard to safety. Dual computer systems or dual processor systems (dual cores) are nowadays widely used computer systems for applications critical with regard to safety, in particular in vehicles, such as antilock systems, electronic stability programs (ESP), X-by-wire systems such as drive-by-wire or steer-by-wire or brake-by-wire, etc. or also in other networked systems. To satisfy these high safety requirements in future applications, powerful error detection mechanisms and error handling mechanisms are needed, in particular to counteract transient errors arising, for example, when the size of semiconductor structures of computer systems is reduced. It is relatively difficult to protect the core itself, i.e., the processor. One approach, as mentioned above, is the use of a dual-core system for error detection.
  • Such processor units having at least two integrated execution units are known as dual core or multicore architectures. Such dual core or multicore architectures are currently proposed mainly for two reasons:
  • First, they may contribute to an enhanced performance in that the two execution units or cores are considered and treated as two processing units on a single semiconductor module. In this configuration, the two execution units or cores process different programs or tasks. This allows enhanced performance; for this reason, this configuration is referred to as performance mode.
  • The second reason for using a dual-core or multicore architecture is enhanced reliability in that the two execution units redundantly process the same program. The results of the two execution units or CPUs, i.e., cores, are compared, and an error may be detected from the comparison for agreement. In the following, this configuration is referred to as safety mode or error detection mode.
  • Thus, currently there are both dual processor and multiprocessor systems that work redundantly to recognize hardware errors (see dual core or master checker systems), and dual processor and multiprocessor systems that process different data on their processors. If these two operating modes are combined according to an embodiment of the present invention in a dual processor or multiprocessor system (for the sake of simplicity we shall only refer to dual processor systems; however, the present invention is also applicable to multiprocessor systems), both processors must contain different data in performance mode and the same data in error detection mode.
  • An object of the present invention is to provide a unit and a method which deliver the instructions/data to the at least two processors redundantly or differently, depending on the mode, and divide up the memory access rights, in particular in the performance mode.
  • BRIEF SUMMARY OF THE INVENTION
  • Such a unit makes it possible to operate a dual processor system effectively in such a way that switchover between safety mode and performance mode is possible during operation. In the following we shall refer to processors; this, however, also includes the concept of cores or execution units.
  • The present invention provides a unit for data distribution from at least one data source in a system having at least two execution units and contains switchover means (ModeSwitch) which make switchover between at least two operating modes of the system possible, the unit being designed in such a way that the data distribution and/or the data source depends on the operating mode. A system having such a unit is also presented.
  • The present invention also presents a corresponding data distribution method from at least one data source in a system having at least two execution units, which has switchover means which make switchover between at least two operating modes of the system possible, the data distribution and/or a selection of a data source (instruction memory, data memory, cache in particular) depending on the operating mode.
  • The first operating mode corresponds to a safety mode in which the two processing units process the same programs and/or data, and comparison means are provided, which compare the states resulting from the processing of the same programs for agreement.
  • The unit according to the present invention and the method according to the present invention make optimized implementation of both modes possible in a dual-processor system.
  • If the two processors operate in error detection mode (F mode), the two processors receive the same data/instructions; if they operate in performance mode (P mode), each processor may access the memory independently. In that case, this unit manages the accesses to the single memory or peripheral present.
  • In the F mode, the unit receives the data/addresses of one processor (here referred to as the “master”) and relays them to the components such as memories, bus, etc. The second processor (here the “slave”) attempts to access the same device. The data distribution unit receives this request at a second port, but does not relay it to the other components. Instead, the data distribution unit transmits the same data to both slave and master and compares the data of the two processors. If they differ, the data distribution unit (here DDU) indicates this via an error signal. Therefore, only the master operates the bus/memory, and the slave receives the same data (operating mode as in the case of a dual-core system).
  • In the P mode both processors process different program portions. The memory accesses are therefore also different. The DDU therefore receives the request of the processors and returns the results/requested data to the processor that requested them. If both processors intend to access the same component at the same time, one processor is set to a wait state until the other one has been served.
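The mode-dependent distribution described in the two preceding paragraphs may be illustrated by the following behavioral sketch. All names are invented for illustration; the actual DDU is a hardware unit, and in the P mode a real implementation would stall one processor with a wait state rather than serving both requests in the same step.

```python
class DataDistributionUnit:
    """Illustrative behavioral model of the DDU's mode-dependent relaying."""

    def __init__(self, mode="F"):
        self.mode = mode      # "F" = error detection mode, "P" = performance mode
        self.error = False    # error signal, raised on a master/slave mismatch

    def access(self, master_addr, slave_addr, memory):
        """Serve one read cycle; 'memory' stands in for the shared memory."""
        if self.mode == "F":
            # Only the master's request is relayed to the memory; the slave's
            # request arrives at a second port and is compared, not relayed.
            if master_addr != slave_addr:
                self.error = True          # mismatch -> error signal
                return None, None
            data = memory.get(master_addr)
            return data, data              # both processors receive identical data
        # P mode: both requests are served with their own results.
        return memory.get(master_addr), memory.get(slave_addr)
```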
  • Switchover between the two modes and thus between the different types of operation of the data distribution unit takes place via a control signal, which may be generated by one of the two processors or externally.
  • Switchover is advantageously triggered and/or indicated by a control signal, a mode signal in particular, which refers to the operating mode of at least one processing unit, the control signal being generated externally in particular with reference to the processing units.
  • It is furthermore advantageous if switchover is triggered and/or indicated by an instruction, e.g., an instruction which describes an illegal operation (illOp), the instruction being generated by the switchover means, the mode switch unit in particular.
  • Input data of both processing units are advantageously compared for agreement in an operating mode which corresponds to a safety mode (F mode) and/or also output data of both processing units are compared for agreement in an operating mode which corresponds to a safety mode (F mode).
  • The data to be distributed are advantageously relayed to at least one additional component, a processing unit for example, the data to be distributed being extended by an error detection code prior to relaying. The input data may also be relayed to at least one additional component, a processing unit in particular, the input data being extended by an error detection code prior to relaying. The output data may also be relayed to at least one additional component, the output data being extended by an error detection code prior to relaying. For all these cases, an error signal is advantageously output if an error is detected on the basis of the error detection code. In one embodiment, an error signal is output only in the safety mode (F mode).
  • Basically, a distinction may be made between a performance mode and a safety mode, and in the performance mode the data of both processing units are prioritized, and this data may be received and/or relayed sequentially as a function of the prioritization.
  • A delay component, which delays the preceding data by a clock pulse offset as a function of the clock pulse offset between the two processing units in the particular operating mode, may be included according to the present invention.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
  • FIG. 1 shows a schematic illustration of a dual-core computer system.
  • FIG. 2 shows an example embodiment of the data distribution unit according to the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The data to be distributed are advantageously read from a memory and then distributed to the processing units.
  • Data distribution is advantageously controlled by state machines, two state machines being provided for each processing unit. They are advantageously configured as one synchronous state machine and one asynchronous state machine.
  • A system is provided with such a unit according to the present invention, a monitoring circuit external to the unit being also provided, which detects an error if an intended switchover between the operating modes does not take place.
  • If the dual-processor system is operated with a clock pulse offset in the F mode, but not in the P mode, the DDU unit delays the data for the slave as needed, i.e., it stores the master's output data until they may be compared to the slave's output data for error detection.
  • The clock pulse offset is elucidated in more detail with reference to FIG. 1.
  • FIG. 1 shows a dual-core system having a first computer 100, in particular a master computer and a second computer 101, in particular a slave computer. The entire system is operated at a predefinable clock pulse, i.e., in predefinable clock cycles CLK. The clock pulse is supplied to the computers via clock input CLK1 of computer 100 and clock input CLK2 of computer 101. In this dual-core system, there is also a special feature for error detection in that first computer 100 and second computer 101 operate at a predefinable time offset or a predefinable clock pulse offset. Any desired time period may be defined for a time offset, and also any desired clock pulse regarding an offset of the clock pulses. This may be an offset by an integral number of clock pulses, but also, as shown in this example, an offset by 1.5 clock pulses, first computer 100 working, i.e., being operated here 1.5 clock pulses ahead of second computer 101. This offset may prevent common mode failures from interfering with the computers or processors, i.e., the cores of the dual-core system, in the same way and thus from remaining undetected. In other words, due to the offset, such common mode failures affect the computers at different points in time during the program run and thus have different effects for the two computers, which makes errors detectable. Under certain circumstances, effects of errors of the same type would not be detectable in a comparison without a clock pulse offset; this is avoided by the method according to the present invention. To implement this time or clock pulse offset, 1.5 clock pulses in this particular case of a dual-core system, offset modules 112 through 115 are provided.
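The 1.5-clock offset may be modeled, purely for illustration, as a delay line operating in half-clock steps: three half-clock steps correspond to 1.5 clock pulses, which is the role of offset modules 112 through 115. The function name is invented; this is a software model, not the hardware.

```python
def make_offset_delay(half_clocks):
    """Build a delay line with half-clock granularity (illustrative model).

    A value fed in at step t reappears at step t + half_clocks; with
    half_clocks = 3 this models the 1.5-clock-pulse offset of FIG. 1."""
    buf = [None] * half_clocks

    def step(value):
        out = buf.pop(0)   # the value that entered half_clocks steps ago
        buf.append(value)
        return out

    return step
```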
  • To detect the above-mentioned common mode errors, this system is designed to operate at a predefined time offset or clock pulse offset, here of 1.5 clock pulses, i.e., while one of the computers, e.g., computer 100, is directly addressing external components 103 and 104 in particular, second computer 101 is running with a delay of exactly 1.5 clock pulses. To generate the desired 1.5-pulse delay in this case, computer 101 is supplied with the inverted clock signal at clock input CLK2. However, the above-mentioned terminals of the computer, i.e., its data and/or instructions, must then also be delayed by the above-mentioned clock pulses, here 1.5 clock pulses in particular; as mentioned previously, offset or delay modules 112 through 115 are provided for this purpose. In addition to the two computers or processors 100 and 101, components 103 and 104 are provided, which are connected to the two computers 100 and 101 via bus 116, having bus lines 116A, 116B, and 116C, and bus 117, having bus lines 117A and 117B. Bus 117 is an instruction bus, 117A being the instruction address bus and 117B being the partial instruction (data) bus. Address bus 117A is connected to computer 100 via an instruction address terminal IA1 (instruction address 1) and to computer 101 via an instruction address terminal IA2 (instruction address 2). The instructions proper are transmitted via partial instruction bus 117B, which is connected to computer 100 via an instruction terminal I1 (instruction 1) and to computer 101 via an instruction terminal I2 (instruction 2). Into this instruction bus 117 having 117A and 117B, one component 103, an instruction memory for example, a safe instruction memory in particular, or the like, is connected. This component, in particular as an instruction memory, is also operated at clock rate CLK in this example. In addition, data bus 116 has a data address bus or data address line 116A and a data bus or data line 116B. 
Data address bus or data address line 116A is connected to computer 100 via a data address terminal DA1 (data address 1) and to computer 101 via a data address terminal DA2 (data address 2). Also data bus or data line 116B is connected to computer 100 via a data terminal DO1 (data out 1) and to computer 101 via a data terminal DO2 (data out 2). Furthermore, data bus 116 has data bus line 116C, which is connected to computer 100 via a data terminal DI1 (data in 1) and to computer 101 via a data terminal DI2 (data in 2). In this data bus 116 having lines 116A, 116B, and 116C, a component 104, a data memory for example, a safe data memory in particular or the like, is connected in between. This component 104 is also supplied with clock cycle CLK.
  • Components 103 and 104 represent any components that are connected to the computers of the dual-core system via a data bus and/or instruction bus and are able to receive or output erroneous data and/or instructions corresponding to accesses via data and/or instructions of the dual-core system for read and/or write operations. Error identifier generators 105, 106, and 107, which generate an error identifier such as a parity bit, or another error code such as an error correction code (ECC), or the like, are provided for error prevention. For this purpose, appropriate error identifier checking devices 108 and 109 are also provided for checking the particular error identifier, i.e., the parity bit or another error code such as ECC, for example.
  • In the redundant design in the dual-core system, the data and/or instructions are compared in comparators 110 and 111 as depicted in FIG. 1. However, there may be a time offset between computers 100 and 101, e.g., a clock pulse offset, caused either by a non-synchronous dual-core system, by synchronization errors in a synchronous dual-core system, or, as in this special example, by the time or clock pulse offset, here of 1.5 clock pulses, provided for error detection. During this offset, a computer, in particular computer 100 in this case, may write erroneous data and/or instructions into components, in particular external components such as memories 103 or 104, but also into other users, actuators, or sensors, or read erroneous data from them. It may also erroneously perform a write access instead of an intended read access due to this clock pulse offset. These scenarios naturally result in errors in the entire system, in particular without a clear possibility of determining exactly which data and/or instructions have been erroneously changed, which also causes recovery problems.
  • In order to eliminate this problem, a delay unit 102, as shown, is connected into the lines of the data bus and/or into the instruction bus. For the sake of clarity, only connection into the data bus is depicted. Of course, connection into the instruction bus is also possible and conceivable. This delay unit 102 delays the accesses, the memory accesses in particular in this case, so that a possible time offset or clock pulse offset is compensated, in particular in the case of an error detection, for example, via comparators 110 and 111, at least until the error signal is generated in the dual-core system, i.e., the error is detected in the dual-core system. Different variants may be implemented:
  • Delay of the write and read operations, delay of the write operations only, or, although not preferably, delay of the read operations. A delayed write operation may then be converted into a read operation via a change signal, e.g., the error signal, in order to avoid erroneous writing.
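The last variant, converting a delayed write into a read when the error signal fires, may be sketched as follows (an illustrative model; the function name is invented):

```python
def resolve_delayed_access(op, error_signal):
    """If the error signal fires while a write is still held in the delay
    unit, the write is demoted to a harmless read so that no erroneous data
    reach the memory (sketch of the change-signal variant described above)."""
    if op == "write" and error_signal:
        return "read"
    return op
```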
  • An exemplary implementation of the data distribution unit (DDU) is explained with reference to FIG. 2. The DDU has a device for detecting the switchover intent (IllOpDetect), since the IllOp instruction (IllOp = illegal operation) is used for switchover in this example, as well as the mode switch unit and the iram and dram control modules.
  • IllOpDetect: Switchover between the two modes is detected by the “switch detect” units. Such a unit is situated between the cache and the processor on the instruction bus and detects whether the IllOp instruction is loaded into the processor. If the instruction is detected, this event is communicated to the mode switch unit. The switch detect unit is provided separately for each processor. It does not have to have an error-tolerant design, since it is present in duplicate, i.e., redundantly. It is also conceivable to design this unit to be error-tolerant and thus without redundancy.
  • ModeSwitch: Switchover between the two modes is triggered by the “switch detect” units. If a switchover is to be performed from lock mode to split mode, both switch detect units detect the switchover, since both processors are processing the same program code in the lock mode. The switch detect unit of processor 1 detects it 1.5 clock pulses before the switch detect unit of processor 2. The mode switch unit stops processor 1 for two pulses with the aid of the wait signal. Processor 2 is also stopped 1.5 clock pulses later, but only for one-half of a clock pulse, thus being synchronized to the system clock. The status signal is subsequently switched to split for the other components, and the two processors continue to operate. For the two processors to execute different tasks, they must diverge in the program code. This takes place via a read access to the processor ID directly after switching over into the split mode. The processor ID read is different for each of the two processors. By comparing it with a reference processor ID, the corresponding processor may be brought to another program point using a conditional jump instruction. When switching over from split mode to lock mode, one processor notices the switchover intent before the other: it executes the program code containing the switchover instruction first. This is registered by its switch detect unit, which informs the mode switch unit accordingly. The mode switch unit stops the corresponding processor and informs the second one of the synchronization intent via an interrupt. The second processor receives the interrupt and may now execute a software routine to terminate its task. It then jumps to the program point where the switchover instruction is located. Its switch detect unit now also signals the intent to change modes to the mode switch unit. At the next rising system clock edge, the wait signal is deactivated for processor 1 and, 1.5 clock pulses later, for processor 2. Now both processors work synchronously with a clock pulse offset of 1.5 clock pulses.
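The handshake described above may be sketched, under strong simplifications, as follows. Names are invented; the 1.5-clock offset, the wait-signal timing, and the interrupt delivery are abstracted away, so a mode change simply completes once both processors have signaled the intent.

```python
class ModeSwitch:
    """Simplified sketch of the mode switch handshake (illustrative only)."""

    def __init__(self):
        self.mode = "lock"
        self.pending = set()   # processors that have signaled the switchover intent

    def switch_request(self, proc_id):
        self.pending.add(proc_id)
        if self.pending == {1, 2}:
            # Both switch detect units agree: perform the switchover.
            self.mode = "split" if self.mode == "lock" else "lock"
            self.pending.clear()
            return "resume both"
        if self.mode == "split":
            # First requester is stopped; the other core gets an interrupt.
            return "stop %d, interrupt other" % proc_id
        return "wait"
```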
  • If the system is in lock mode, both switch detect units must inform the mode switch unit that they intend to switch to the split mode. If the switchover intent is communicated by only one unit, the error is detected by the comparator units, since these continue to receive data from one of the two processors, and these data differ from those of the stopped processor.
  • If both processors are in the split mode and one does not switch back to the lock mode, this may be detected by an external watchdog. If there is a separate trigger signal for each processor, the watchdog notices that the waiting processor is no longer sending messages. If there is only one watchdog signal for the processor system, the watchdog may only be triggered in the lock mode; the watchdog would thus detect that no mode switchover has taken place. The mode signal is in the form of a dual-rail signal, where 10 stands for the lock mode and 01 for the split mode; 00 and 11 indicate errors.
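The dual-rail encoding may be illustrated by a small decoding sketch (the function name is invented; in hardware this is combinational logic, not software):

```python
def decode_mode(rail_a, rail_b):
    """Dual-rail mode signal: 10 = lock mode, 01 = split mode,
    00 and 11 indicate errors (e.g., a stuck-at fault on one rail)."""
    if (rail_a, rail_b) == (1, 0):
        return "lock"
    if (rail_a, rail_b) == (0, 1):
        return "split"
    return "error"
```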
  • IramControl: Access to the instruction memory of both processors is controlled via the IRAM control, which must have a reliable design, since it is a single point of failure. It has two state machines for each processor: a synchronous state machine iram1clkreset and an asynchronous state machine readiram1. In the safety-critical mode, the state machines of the two processors monitor one another, and in the performance mode they operate separately.
  • Reloading of the two caches of the processors is controlled by two state machines, a synchronous state machine iramclkreset and an asynchronous state machine readiram. These two state machines divide up the memory accesses in the split mode. Processor 1 has the higher priority; however, after an access to the main memory by processor 1, if both processors intend to access the main memory at the same time, processor 2 receives the memory access permission. These two state machines are implemented for each processor. In the lock mode, the output signals of the state machines are compared in order to detect the occurrence of any error.
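The split-mode arbitration described above may be sketched as follows: processor 1 wins by default, but after it has been served, a simultaneous request pair goes to processor 2. This is an illustrative model with invented names, not the state machine pair itself.

```python
class IramArbiter:
    """Sketch of the split-mode instruction memory arbitration."""

    def __init__(self):
        self.last = None   # which processor was served most recently

    def grant(self, req1, req2):
        """Return the processor granted access this cycle (None if idle)."""
        if req1 and req2:
            # Processor 1 has priority, but after it was served a
            # simultaneous request pair is granted to processor 2.
            winner = 2 if self.last == 1 else 1
        elif req1:
            winner = 1
        elif req2:
            winner = 2
        else:
            return None
        self.last = winner
        return winner
```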
  • The data for updating cache 2 in the lock mode are delayed by 1.5 clock pulses in the IRAM control unit.
  • The identity of the core is encoded in bit 5 of register 0 of SysControl. For core 1 this bit is low (0); for core 2 it is high (1). This register is mirrored in the memory area having the address 65528.
  • In the event of a memory access by core 2, a check is first made to determine in which mode the core is operating. If it is in the lock mode, its memory access is suppressed. This signal is in the form of a dual-rail signal, since it is critical with regard to safety.
  • The program counter of processor 1 is delayed by 3.5 clock pulses to enable a comparison with the program counter of processor 2 in the lock mode.
  • In the split mode, the caches of both processors may be reloaded separately. If a switchover into the lock mode is then performed, the two caches are not coherent with respect to one another. This may cause the two processors to diverge and the comparators to thus signal an error. To avoid this, a flag table is constructed in the IRAM control, in which it is noted whether a cache line has been written in the lock mode or in the split mode. When a cache line is reloaded in the lock mode, the entry corresponding to the cache line is set to 0; when it is reloaded in the split mode, or when the cache line of a single cache is updated, it is set to 1. If a processor now accesses the memory in the lock mode, a check is performed of whether this cache line has been updated in the lock mode, i.e., whether it is identical in the two caches. In the split mode, the processor may always access the cache line, regardless of the status of the flag vector. This table must be present only once, since in the event of an error the two processors diverge, and this error is thus reliably detected by the comparators. Since the access times to the central table are relatively long, this table may also be copied to each cache.
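The flag table may be illustrated by the following sketch, with one flag bit per cache line (names are invented; lines are conservatively assumed incoherent until first filled in lock mode):

```python
class FlagVector:
    """Sketch of the per-cache-line flag table: 0 = line was filled in lock
    mode (identical in both caches), 1 = filled or updated in split mode."""

    def __init__(self, num_lines):
        # Assume lines are incoherent until they are filled in lock mode.
        self.flag = [1] * num_lines

    def on_refill(self, line, mode):
        self.flag[line] = 0 if mode == "lock" else 1

    def lock_mode_hit_allowed(self, line):
        # In lock mode a cached line may only be used if it is coherent;
        # otherwise it must be reloaded. Split mode ignores the flag.
        return self.flag[line] == 0
```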
  • DramControl: The parity is formed in this component for the address, data, and memory control signals of each processor.
  • For each processor there is a process for locking the memory. This process does not have to have a fail-safe design, since in the lock mode erroneous memory accesses are detected by the comparators and in the split mode no safety-relevant applications are executed. A check is performed here of whether one processor intends to lock the memory for the other processor. The data memory is locked via an access to the memory address 0xFBFF = 64511. This signal must be applied for one cycle even if a wait instruction is being applied to the processor at the time of the call. The state machine for managing the data memory access has two main states:
      • processor status lock: Both processors operate in the lock mode. This means that the data memory locking function is not needed. Processor 1 coordinates the memory accesses.
      • processor status split: A data memory access conflict resolution is now necessary, and memory lock must be able to occur.
  • The split mode state is in turn subdivided into seven states which resolve the access conflicts and are able to lock the data memory for the other processor. When both processors intend to access the memory at the same time, the order of execution represents the priorities at the same time.
      • Core1\_Lock: Processor 1 has locked the data memory. If processor 2 intends to access the memory in this state, it is stopped by a wait signal until processor 1 releases the data memory again.
      • Core2\_Lock: This is the same state as the previous one, except that now processor 2 has locked the data memory and processor 1 is stopped for data memory operations.
      • lock1_wait: The data memory was locked by processor 2 as processor 1 also intended to reserve it for itself. Processor 1 is thus pre-marked for the next memory lock.
      • nex: The same for processor 2. The data memory was locked by processor 1 during the locking attempt. The memory is pre-reserved for processor 2. In the event of normal memory access without locking, processor 2 may have access before processor 1 if processor 1 had access previously.
      • Memory access by processor 1: The memory is not locked in this case. Processor 1 is allowed to access the data memory. If it intends to lock it, it may do so in this state.
      • Memory access by processor 2: Processor 1 did not intend to access the memory in the same clock pulse; therefore, the memory is free for processor 2.
      • No processor intends to access the data memory.
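The locking behavior with pre-reservation may be sketched, in strongly simplified form, as follows. Names are invented, and the seven split-mode states are collapsed into owner/reservation bookkeeping, so this is only an illustration of the pre-marking idea, not the actual state machine.

```python
class DramLock:
    """Sketch of the split-mode data memory lock with pre-reservation:
    the core that loses a simultaneous lock attempt is pre-marked and
    wins the next lock (cf. the Core1_Lock / Core2_Lock / wait states)."""

    LOCK_ADDR = 0xFBFF         # access to this address locks the data memory

    def __init__(self):
        self.owner = None      # core currently holding the lock
        self.reserved = None   # core pre-marked for the next lock

    def request_lock(self, core):
        if self.owner is None and self.reserved in (None, core):
            self.owner, self.reserved = core, None
            return True
        if self.owner is not None and self.owner != core:
            self.reserved = core   # pre-mark the losing core
        return False

    def release(self, core):
        if self.owner == core:
            self.owner = None
```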
  • As mentioned previously, the DDU has the switchover intent detector (IllOpDetect), the mode switch unit, and the Iram and Dram control.
  • As explained previously, the core of the invention is the general mode of operation of the data distribution unit DDU (different data assignment and thus also operating mode selection, depending on the mode).

Claims (28)

1-27. (canceled)
28. A method for data distribution from at least one data source in a system having at least two processing units, the method comprising:
providing a switchover unit which implements a switchover between at least two operating modes of the system; and
performing the data distribution, wherein at least one of the data distribution and a selection of a data source depends on an operating mode of the system.
29. The method as recited in claim 28, wherein the switchover is at least one of triggered and indicated by a control signal that refers to the operating mode of at least one processing unit.
30. The method as recited in claim 29, wherein the control signal is generated externally with respect to the processing units.
31. The method as recited in claim 28, wherein the switchover is at least one of triggered and indicated by an instruction describing an illegal action.
32. The method as recited in claim 31, wherein the instruction is generated by the switchover unit.
33. The method as recited in claim 28, further comprising:
comparing, in an operating mode corresponding to a safety mode, input data of the at least two processing units with respect to one another for agreement.
34. The method as recited in claim 28, further comprising:
comparing, in an operating mode corresponding to a safety mode, output data of the at least two processing units with respect to one another for agreement.
35. The method as recited in claim 28, further comprising:
relaying the data to be distributed, wherein the data to be distributed are relayed to a processing unit, and wherein the data to be distributed are extended by an error detection code prior to the relaying.
36. The method as recited in claim 33, further comprising:
relaying the input data to a processing unit, wherein the input data are extended by an error detection code prior to the relaying.
37. The method as recited in claim 34, further comprising:
relaying the output data to at least one additional component, wherein the output data are extended by an error detection code prior to the relaying.
38. The method as recited in claim 33, further comprising:
outputting an error signal in the event of non-agreement.
39. The method as recited in claim 35, further comprising:
outputting an error signal if an error is detected on the basis of the error detection code.
40. The method as recited in claim 38, wherein the error signal is output only in the safety mode.
41. The method as recited in claim 28, wherein the at least two operating modes of the system include a performance mode and a safety mode, and wherein in the performance mode the data of both processing units are prioritized and at least one of sequentially received and relayed as a function of assigned priorities.
42. A device for performing data distribution from at least one data source in a system having at least two processing units, comprising:
a switchover unit which implements a switchover between at least two operating modes of the system;
wherein at least one of the data distribution and a selection of a data source depends on an operating mode of the system.
43. The device as recited in claim 42, further comprising:
a comparison unit;
wherein, in a first operating mode corresponding to a safety mode in which the at least two processing units process the same program, the comparison unit compares resulting states from the processing of the same program by the two processing units for agreement.
44. The device as recited in claim 42, wherein, in an operating mode corresponding to a safety mode, input data of the at least two processing units are compared with respect to one another for agreement.
45. The device as recited in claim 42, wherein, in an operating mode corresponding to a safety mode, output data of the at least two processing units are compared with respect to one another for agreement.
46. The device as recited in claim 42, wherein data to be distributed are relayed to a processing unit, and wherein the data to be distributed are extended by an error detection code prior to the relaying.
47. The device as recited in claim 42, wherein one operating mode of the system is a performance mode, and wherein in the performance mode the data of both processing units are prioritized and at least one of sequentially received and relayed as a function of assigned priorities.
48. The device as recited in claim 42, further comprising:
a delay component which delays preceding data by a clock pulse offset as a function of a clock pulse offset of the at least two processing units.
49. The device as recited in claim 42, wherein data to be distributed are read from a memory and then distributed to the at least two processing units.
50. The device as recited in claim 42, further comprising:
at least one state machine controlling the distribution of the data.
51. The device as recited in claim 50, wherein two state machines are provided for each processing unit.
52. The device as recited in claim 50, wherein a synchronous state machine and an asynchronous state machine are provided.
53. A multi-processor system, comprising:
at least two processing units; and
a device for performing data distribution from at least one data source, wherein the device includes a switchover unit which implements a switchover between at least two operating modes of the system;
wherein at least one of the data distribution and a selection of a data source depends on an operating mode of the system.
54. The system as recited in claim 53, further comprising:
a monitoring circuit external to the device for performing data distribution, wherein the monitoring circuit detects an error if an intended switchover between the at least two operating modes does not take place.
US11/666,406 2004-10-25 2005-10-25 Method for Data Distribution and Data Distribution Unit in a Multiprocessor System Abandoned US20080163035A1 (en)

Applications Claiming Priority (11)

Application Number Priority Date Filing Date Title
DE102004051964.1 2004-10-25
DE200410051964 DE102004051964A1 (en) 2004-10-25 2004-10-25 Memory unit monitoring device for use in multiprocessor system, has switching unit, though which system is switched between two operating modes such that device is arranged in such a manner that contents of unit are simultaneously logged
DE200410051950 DE102004051950A1 (en) 2004-10-25 2004-10-25 Clock switching unit for microprocessor system, has switching unit by which switching can be done between two operating modes, where unit is formed so that clock switching takes place with one processor during switching of modes
DE102004051992.7 2004-10-25
DE200410051952 DE102004051952A1 (en) 2004-10-25 2004-10-25 Data allocation method for multiprocessor system involves performing data allocation according to operating mode to which mode switch is shifted
DE102004051937.4 2004-10-25
DE102004051952.8 2004-10-25
DE102004051950.1 2004-10-25
DE200410051937 DE102004051937A1 (en) 2004-10-25 2004-10-25 Data distributing method for multiprocessor system, involves switching between operating modes e.g. safety and performance modes, of computer units, where data distribution and/or selection of data source is dependent upon one mode
DE200410051992 DE102004051992A1 (en) 2004-10-25 2004-10-25 Access delay method for multiprocessor system involves clocking processors differently to enable both processors to access memory at different times
PCT/EP2005/055532 WO2006045798A1 (en) 2004-10-25 2005-10-25 Method and device for distributing data from at least one data source in a multiprocessor system

Publications (1)

Publication Number Publication Date
US20080163035A1 true US20080163035A1 (en) 2008-07-03

Family

ID=35677569

Family Applications (4)

Application Number Title Priority Date Filing Date
US11/666,405 Active 2027-04-27 US7853819B2 (en) 2004-10-25 2005-10-25 Method and device for clock changeover in a multi-processor system
US11/666,407 Abandoned US20080126718A1 (en) 2004-10-25 2005-10-25 Method And Device For Monitoring A Memory Unit In A Multiprocessor System
US11/666,406 Abandoned US20080163035A1 (en) 2004-10-25 2005-10-25 Method for Data Distribution and Data Distribution Unit in a Multiprocessor System
US11/666,413 Abandoned US20090164826A1 (en) 2004-10-25 2005-10-25 Method and device for synchronizing in a multiprocessor system

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US11/666,405 Active 2027-04-27 US7853819B2 (en) 2004-10-25 2005-10-25 Method and device for clock changeover in a multi-processor system
US11/666,407 Abandoned US20080126718A1 (en) 2004-10-25 2005-10-25 Method And Device For Monitoring A Memory Unit In A Multiprocessor System

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/666,413 Abandoned US20090164826A1 (en) 2004-10-25 2005-10-25 Method and device for synchronizing in a multiprocessor system

Country Status (8)

Country Link
US (4) US7853819B2 (en)
EP (5) EP1812861A1 (en)
JP (5) JP4532561B2 (en)
KR (4) KR20070083772A (en)
AT (2) ATE407398T1 (en)
DE (2) DE502005005284D1 (en)
RU (1) RU2007119316A (en)
WO (5) WO2006045802A2 (en)


Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7882379B2 (en) * 2006-09-22 2011-02-01 Sony Computer Entertainment Inc. Power consumption reduction in a multiprocessor system
US20080244305A1 (en) * 2007-03-30 2008-10-02 Texas Instruments Deutschland, Gmbh Delayed lock-step cpu compare
DE102007063291A1 (en) * 2007-12-27 2009-07-02 Robert Bosch Gmbh safety control
US7941698B1 (en) * 2008-04-30 2011-05-10 Hewlett-Packard Development Company, L.P. Selective availability in processor systems
JP2010198131A (en) * 2009-02-23 2010-09-09 Renesas Electronics Corp Processor system and operation mode switching method for processor system
US8295287B2 (en) * 2010-01-27 2012-10-23 National Instruments Corporation Network traffic shaping for reducing bus jitter on a real time controller
US8954714B2 (en) * 2010-02-01 2015-02-10 Altera Corporation Processor with cycle offsets and delay lines to allow scheduling of instructions through time
WO2011101707A1 (en) * 2010-02-16 2011-08-25 Freescale Semiconductor, Inc. Data processing method, data processor and apparatus including a data processor
KR101664108B1 (en) 2010-04-13 2016-10-11 삼성전자주식회사 Apparatus and method of hardware acceleration for processing synchronization of multi core
JP5718600B2 (en) * 2010-09-10 2015-05-13 日本電気通信システム株式会社 Information processing system and information processing method
US8683251B2 (en) 2010-10-15 2014-03-25 International Business Machines Corporation Determining redundancy of power feeds connecting a server to a power supply
EP2701073B1 (en) 2011-04-18 2018-01-24 Fujitsu Limited Thread processing method and thread processing system
US9086977B2 (en) * 2011-04-19 2015-07-21 Freescale Semiconductor, Inc. Cache memory with dynamic lockstep support
WO2014080245A1 (en) 2012-11-22 2014-05-30 Freescale Semiconductor, Inc. Data processing device, method of execution error detection and integrated circuit
US9429981B2 (en) * 2013-03-05 2016-08-30 St-Ericsson Sa CPU current ripple and OCV effect mitigation
US9823983B2 (en) 2014-09-25 2017-11-21 Nxp Usa, Inc. Electronic fault detection unit
WO2016087175A1 (en) * 2014-12-01 2016-06-09 Continental Teves Ag & Co. Ohg Processing system for a motor vehicle system
JP6516097B2 (en) * 2015-06-11 2019-05-22 大日本印刷株式会社 Arithmetic device, IC card, arithmetic method, and arithmetic processing program
JP2019061392A (en) 2017-09-26 2019-04-18 ルネサスエレクトロニクス株式会社 Microcontroller and control method of microcontroller
US20230259433A1 (en) * 2022-02-11 2023-08-17 Stmicroelectronics S.R.L. Systems and methods to test an asychronous finite machine

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3783250A (en) * 1972-02-25 1974-01-01 Nasa Adaptive voting computer system
US5732209A (en) * 1995-11-29 1998-03-24 Exponential Technology, Inc. Self-testing multi-processor die with internal compare points
US6038584A (en) * 1989-11-17 2000-03-14 Texas Instruments Incorporated Synchronized MIMD multi-processing system and method of operation
US20020073357A1 (en) * 2000-12-11 2002-06-13 International Business Machines Corporation Multiprocessor with pair-wise high reliability mode, and method therefore
US6615366B1 (en) * 1999-12-21 2003-09-02 Intel Corporation Microprocessor with dual execution core operable in high reliability mode
US7055060B2 (en) * 2002-12-19 2006-05-30 Intel Corporation On-die mechanism for high-reliability processor

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE1269827B (en) * 1965-09-09 1968-06-06 Siemens Ag Method and additional device for the synchronization of data processing systems working in parallel
US4823256A (en) * 1984-06-22 1989-04-18 American Telephone And Telegraph Company, At&T Bell Laboratories Reconfigurable dual processor system
AU616213B2 (en) * 1987-11-09 1991-10-24 Tandem Computers Incorporated Method and apparatus for synchronizing a plurality of processors
US5226152A (en) * 1990-12-07 1993-07-06 Motorola, Inc. Functional lockstep arrangement for redundant processors
DE4104114C2 (en) * 1991-02-11 2000-06-08 Siemens Ag Redundant data processing system
JPH05128080A (en) * 1991-10-14 1993-05-25 Mitsubishi Electric Corp Information processor
US5751932A (en) 1992-12-17 1998-05-12 Tandem Computers Incorporated Fail-fast, fail-functional, fault-tolerant multiprocessor system
JPH07121483A (en) * 1993-10-28 1995-05-12 Nec Eng Ltd Shared memory access control circuit
US5758132A (en) * 1995-03-29 1998-05-26 Telefonaktiebolaget Lm Ericsson Clock control system and method using circuitry operating at lower clock frequency for selecting and synchronizing the switching of higher frequency clock signals
CA2178440A1 (en) * 1995-06-07 1996-12-08 Robert W. Horst Fail-fast, fail-functional, fault-tolerant multiprocessor system
JPH096733A (en) * 1995-06-14 1997-01-10 Toshiba Corp Parallel signal processor
JPH0973436A (en) * 1995-09-05 1997-03-18 Mitsubishi Electric Corp Operation mode switching system of multiplied computers
US5809522A (en) * 1995-12-18 1998-09-15 Advanced Micro Devices, Inc. Microprocessor system with process identification tag entries to reduce cache flushing after a context switch
FR2748136B1 (en) * 1996-04-30 1998-07-31 Sextant Avionique ELECTRONIC MODULE WITH REDUNDANT ARCHITECTURE FOR FUNCTIONALITY INTEGRITY CONTROL
GB2317032A (en) * 1996-09-07 1998-03-11 Motorola Gmbh Microprocessor fail-safe system
GB9704542D0 (en) * 1997-03-05 1997-04-23 Sgs Thomson Microelectronics A cache coherency mechanism
EP0978784A1 (en) * 1998-08-04 2000-02-09 Motorola, Inc. Method for coding computer programs and method for debugging coded computer programs
GB2340627B (en) * 1998-08-13 2000-10-04 Plessey Telecomm Data processing system
JP2000200255A (en) * 1999-01-07 2000-07-18 Hitachi Ltd Method and circuit for synchronization between processors
WO2000079405A1 (en) * 1999-06-21 2000-12-28 Hitachi, Ltd. Data processor
US6640313B1 (en) * 1999-12-21 2003-10-28 Intel Corporation Microprocessor with high-reliability operating mode
DE10136335B4 (en) 2001-07-26 2007-03-22 Infineon Technologies Ag Processor with several arithmetic units
US6947047B1 (en) * 2001-09-20 2005-09-20 Nvidia Corporation Method and system for programmable pipelined graphics processing with branching instructions
US20040076189A1 (en) * 2002-10-17 2004-04-22 International Business Machines Corporation Multiphase clocking method and apparatus
JP2004234144A (en) * 2003-01-29 2004-08-19 Hitachi Ltd Operation comparison device and operation comparison method for processor
KR20060026884A (en) * 2003-06-24 2006-03-24 로베르트 보쉬 게엠베하 Method for switching between at least two operating modes of a processor unit and corresponding processor unit
US7134031B2 (en) * 2003-08-04 2006-11-07 Arm Limited Performance control within a multi-processor system
DE10349581A1 (en) * 2003-10-24 2005-05-25 Robert Bosch Gmbh Method and device for switching between at least two operating modes of a processor unit


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090249271A1 (en) * 2008-03-27 2009-10-01 Hiromichi Yamada Microcontroller, control system and design method of microcontroller
US7890233B2 (en) 2008-03-27 2011-02-15 Renesas Electronics Corporation Microcontroller, control system and design method of microcontroller
US20110106335A1 (en) * 2008-03-27 2011-05-05 Renesas Electronics Corporation Microcontroller, control system and design method of microcontroller
US8046137B2 (en) 2008-03-27 2011-10-25 Renesas Electronics Corporation Microcontroller, control system and design method of microcontroller
US20100262811A1 (en) * 2009-04-08 2010-10-14 Moyer William C Debug signaling in a multiple processor data processing system
US8275977B2 (en) 2009-04-08 2012-09-25 Freescale Semiconductor, Inc. Debug signaling in a multiple processor data processing system
US10025281B2 (en) 2011-03-15 2018-07-17 Omron Corporation Control device and system program, and recording medium
US11880356B1 (en) 2018-08-30 2024-01-23 Gravic, Inc. Multi-processor transaction-based validation architecture that compares indicia associated with matching transaction tags
US11269799B2 (en) * 2019-05-03 2022-03-08 Arm Limited Cluster of processing elements having split mode and lock mode
US20230168978A1 (en) * 2021-11-30 2023-06-01 Mellanox Technologies, Ltd. Transaction based fault tolerant computing system
US11899547B2 (en) * 2021-11-30 2024-02-13 Mellanox Technologies, Ltd. Transaction based fault tolerant computing system

Also Published As

Publication number Publication date
JP2008518309A (en) 2008-05-29
ATE409327T1 (en) 2008-10-15
EP1810145A1 (en) 2007-07-25
ATE407398T1 (en) 2008-09-15
DE502005005284D1 (en) 2008-10-16
RU2007119316A (en) 2008-12-10
WO2006045802A3 (en) 2007-01-04
KR20070062579A (en) 2007-06-15
JP2008518310A (en) 2008-05-29
WO2006045800A1 (en) 2006-05-04
KR20070067168A (en) 2007-06-27
EP1812861A1 (en) 2007-08-01
KR20070083772A (en) 2007-08-24
DE502005005490D1 (en) 2008-11-06
EP1810145B1 (en) 2008-09-03
WO2006045804A1 (en) 2006-05-04
JP4532561B2 (en) 2010-08-25
US7853819B2 (en) 2010-12-14
US20090164826A1 (en) 2009-06-25
JP2008518311A (en) 2008-05-29
KR20070083771A (en) 2007-08-24
WO2006045798A1 (en) 2006-05-04
US20080209251A1 (en) 2008-08-28
EP1807761A1 (en) 2007-07-18
WO2006045802A2 (en) 2006-05-04
EP1807763B1 (en) 2008-09-24
WO2006045801A3 (en) 2006-07-06
JP2008518312A (en) 2008-05-29
JP2008518308A (en) 2008-05-29
WO2006045801A2 (en) 2006-05-04
US20080126718A1 (en) 2008-05-29 Method And Device For Monitoring A Memory Unit In A Multiprocessor System
EP1807763A2 (en) 2007-07-18
EP1820102A2 (en) 2007-08-22

Similar Documents

Publication Publication Date Title
US20080163035A1 (en) Method for Data Distribution and Data Distribution Unit in a Multiprocessor System
KR20130119452A (en) Microprocessor system having fault-tolerant architecture
US9417946B2 (en) Method and system for fault containment
JP5199088B2 (en) Method and apparatus for controlling a computer system comprising at least two instruction execution units and one comparison unit
US20070277023A1 (en) 2007-11-29 Method For Switching Over Between At Least Two Operating Modes Of A Processor Unit, As Well As Corresponding Processor Unit
US20090044044A1 (en) Device and method for correcting errors in a system having at least two execution units having registers
Kohn et al. Architectural concepts for fail-operational automotive systems
US20070283061A1 (en) 2007-12-06 Method for Delaying Accesses to Data and/or Instructions of a Two-Computer System, and Corresponding Delay Unit
US20070294559A1 (en) Method and Device for Delaying Access to Data and/or Instructions of a Multiprocessor System
JP4182948B2 (en) Fault tolerant computer system and interrupt control method therefor
KR20080067663A (en) Program-controlled unit and method for the operation thereof
US20040193735A1 (en) Method and circuit arrangement for synchronization of synchronously or asynchronously clocked processor units
US20090024908A1 (en) Method for error registration and corresponding register
US20150039944A1 (en) System and Method of High Integrity DMA Operation
JP4829821B2 (en) Multiprocessor system and recovery method in multiprocessor system
US10540222B2 (en) Data access device and access error notification method
JP3746957B2 (en) Control method of logical partitioning system
DE102004051937A1 (en) Data distributing method for multiprocessor system, involves switching between operating modes e.g. safety and performance modes, of computer units, where data distribution and/or selection of data source is dependent upon one mode
JP5287198B2 (en) Information processing device
JP4117685B2 (en) Fault-tolerant computer and its bus selection control method
JPH0498326A (en) Microprocessor
JPH03228189A (en) Microprocessor

Legal Events

Date Code Title Description
AS Assignment

Owner name: ROBERT BOSCH GMBH, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KOTTKE, THOMAS;REEL/FRAME:020624/0133

Effective date: 20070618

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION