WO1992003779A1 - Method of efficient communication between coprocessors - Google Patents

Method of efficient communication between coprocessors Download PDF

Info

Publication number
WO1992003779A1
WO1992003779A1
Authority
WO
WIPO (PCT)
Prior art keywords
queue
wake
processors
data storage
accordance
Prior art date
Application number
PCT/US1991/004070
Other languages
French (fr)
Inventor
George A. Spix
Gregory G. Gaertner
Diane M. Wengelski
Keith J. Thompson
Original Assignee
Supercomputer Systems Limited Partnership
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US07/571,951 external-priority patent/US5202988A/en
Application filed by Supercomputer Systems Limited Partnership filed Critical Supercomputer Systems Limited Partnership
Publication of WO1992003779A1 publication Critical patent/WO1992003779A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The wake queue invention (70) provides a method whereby processors of differing capability can efficiently communicate. This allows improved efficiency in distributing the operating system. Slower coprocessors can queue entries (80) in areas referred to as wake queues. Access to these wake queues is governed by a queueing mechanism (90) which utilizes the global registers to synchronize multiple concurrent producers (80) and multiple concurrent consumers (30). When a fast processor runs the kernel process assigned to handle the wake queue (70), entries (80) are pulled off the wake queue and the information in that entry is processed. Implementations can vary widely depending upon the information and processing desired.

Description

METHOD OF EFFICIENT COMMUNICATION BETWEEN COPROCESSORS
RELATED APPLICATIONS
This application is a continuation-in-part of an application filed in the United States Patent and Trademark Office on June 11, 1990, entitled INTEGRATED SOFTWARE ARCHITECTURE FOR A HIGHLY PARALLEL MULTIPROCESSOR SYSTEM, Serial No. 07/537,466, and assigned to the assignee of the present invention, the disclosure of which is hereby incorporated by reference in the present application. This application is also related to the copending application entitled GLOBAL REGISTERS FOR A MULTIPROCESSOR SYSTEM, Serial No. 07/536,198. The application is also related to the copending applications filed concurrently herewith, entitled SCHEDULING METHOD FOR A MULTIPROCESSING OPERATING SYSTEM, and METHOD OF IMPLEMENTING KERNEL FUNCTIONS USING MINIMAL-CONTEXT PROCESSES, all of which are assigned to the assignee of the present invention, the disclosures of which are hereby incorporated by reference in the present application.
TECHNICAL FIELD
The present invention relates generally to multiprocessor computer systems and specifically to allocating processors in a tightly-coupled configuration to execute the threads of one or more multithreaded programs that are running on the system simultaneously.
BACKGROUND ART
The present invention involves the way the wake queue in the preferred embodiment of the computer system operates for coprocessors of unequal speed versus how an interrupt handler operates. The wake queue consists of a data structure used as a repository between communicating processors and the locking scheme implemented in the global registers which coordinates access to that data structure. Instead of interrupts, the computer system can use the wake queue to alert the consumer (processor) of a request from a producer (e.g., a disk drive). Prior art has typically involved interrupt handlers in computer systems, such as is done in conjunction with the UNIX operating system.
In the prior art, the kernel is responsible for handling interrupts, whether they result from hardware (such as from the clock or from peripheral devices), from a programmed interrupt (execution of instructions designed to cause "software interrupts"), or from exceptions (such as page faults). If the CPU is executing at a lower processor execution level than the level of the interrupt, it accepts the interrupt before decoding the next instruction and raises the processor execution level, so that no other interrupts of that level (or lower) can happen while it handles the current interrupt, preserving the integrity of kernel data structures. The kernel handles the interrupt with the following sequence of operations:
1. It saves the current register context of the executing process and creates (pushes) a new context layer.
2. It determines the "source" or cause of the interrupt, identifying the type of interrupt (such as clock or disk) and the unit number of the interrupt, if applicable (such as which disk drive caused the interrupt). When the system receives an interrupt, it gets a number from the machine that it uses as an offset into a table, commonly called an interrupt vector. The contents of interrupt vectors vary from machine to machine, but they usually contain the address of the interrupt handler for the corresponding interrupt source and a way of finding a parameter for the interrupt handler. For example, consider the table of interrupt handlers in Figure 1. If a terminal interrupts the system, the kernel gets interrupt number 2 from the hardware and invokes the terminal interrupt handler ttyintr.
3. The kernel invokes the interrupt handler. The kernel stack for the new context layer is logically distinct from the kernel stack of the previous context layer. Some implementations use the kernel stack of the executing process to store the interrupt handler stack frames, and other implementations use a global interrupt stack to store the frames for interrupt handlers that are guaranteed to return without switching context.
(UNIX is a trademark of AT&T Bell Laboratories.)
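The vector-table dispatch described in the steps above can be sketched as follows. The handler names mirror the Figure 1 example (ttyintr for a terminal), but the other entries and the table layout are illustrative assumptions, since real interrupt vectors are machine dependent:

```python
# Sketch of prior-art interrupt dispatch via an interrupt vector.
# Handler names other than ttyintr are hypothetical; real vector
# contents vary from machine to machine.

log = []

def clockintr(unit):
    log.append(("clock", unit))

def diskintr(unit):
    log.append(("disk", unit))

def ttyintr(unit):
    log.append(("tty", unit))

# The interrupt number obtained from the hardware indexes this table.
interrupt_vector = {
    0: clockintr,
    1: diskintr,
    2: ttyintr,   # e.g. a terminal interrupt delivers number 2
}

def handle_interrupt(number, unit=0):
    # Kernel: push a new context layer, look up the handler by the
    # interrupt number, invoke it, then pop the layer and resume the
    # interrupted process (context save/restore elided here).
    handler = interrupt_vector[number]
    handler(unit)

handle_interrupt(2, unit=1)   # terminal interrupt -> ttyintr
```

The table-driven lookup is what lets the kernel stay generic: adding a device means adding a vector entry, not changing the dispatch path.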
4. The interrupt handler completes its work and returns. The kernel executes a machine-specific sequence of instructions that restores the register context and kernel stack of the previous context layer as they existed at the time of the interrupt and then resumes execution of the restored context layer. The behavior of the process may be affected by the interrupt handler, since the interrupt handler may have altered global kernel data structures and awakened sleeping processes. Usually, however, the process continues execution as if the interrupt had never happened. (Refer to "The Design of the UNIX Operating System" by Maurice J. Bach, pages 162-163, Prentice-Hall, 1986.)
SUMMARY OF THE INVENTION
In the present invention, items are placed on a queue, referred to as the wake queue. An objective of the invention is to provide a method of placing the items on the wake queue by a slower processor and removing them by a faster processor without having the faster processor wait for the slower processor. The wake queue invention also provides a method whereby processors of differing capability can efficiently communicate. This allows realization of the objective for the present invention of distributing the operating system efficiently. Slower coprocessors can queue entries in wake queues. Access to these wake queues is governed by a queuing mechanism which utilizes the global registers to synchronize multiple concurrent producers and multiple concurrent consumers. (Refer to the related patent application GLOBAL REGISTERS FOR A MULTIPROCESSOR SYSTEM.)
When a fast processor runs the kernel procedure assigned to handle the wake queue, entries are pulled off the wake queue and the information in each entry is processed. Implementations can vary widely depending upon the information and processing desired. One such implementation involves having the slower coprocessor queue a semaphore address onto a wake queue and having the kernel procedure then wake a process waiting on that semaphore. In this way, the objective of allowing coprocessors to schedule tasks to be done by the fast processors without delaying the fast processors is achieved.
Counting semaphores, as mentioned above, are software mechanisms for synchronization. The semaphore consists of a count of the available resources to be managed and a list of entities waiting for that resource. To implement a lock, this count is set to one so that only one resource, the lock, exists. If the semaphore is going to govern multiple resources, it is set to the number of resources available. This count is decremented as a resource is taken. When the semaphore count goes to zero, no more resources are available, so the requester is put to sleep to wait for one to become available. As a process frees a resource, it increments the semaphore counter and wakes up a waiting process.
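The counting-semaphore behavior just described, a count of available resources plus a list of waiters, can be sketched as follows. This is a single-threaded simulation with illustrative names; a real kernel would block the caller and reschedule rather than append to a list:

```python
from collections import deque

class CountingSemaphore:
    # count: available resources; waiters: entities sleeping on the semaphore.
    def __init__(self, count):
        self.count = count
        self.waiters = deque()

    def p(self, who):                  # take a resource
        if self.count > 0:
            self.count -= 1            # count decremented as resource is taken
            return True
        self.waiters.append(who)       # count is zero: requester sleeps
        return False

    def v(self):                       # free a resource
        if self.waiters:
            return self.waiters.popleft()  # wake one waiting process
        self.count += 1                # nobody waiting: count goes back up
        return None

# A lock is the count-of-one special case described in the text.
lock = CountingSemaphore(1)
assert lock.p("A")          # A takes the lock
assert not lock.p("B")      # B must sleep
assert lock.v() == "B"      # freeing the lock wakes B
```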
Another implementation has the slower coprocessor queuing command blocks onto the wake queue which a kernel procedure in a fast processor dequeues and queues onto a work queue before waking the appropriate kernel daemon to process that work queue. When this daemon runs, it consults the work queue to obtain and process the information communicated from the slower processor. In this way, the objective of allowing fast processors to do post interrupt processing without being interrupted is achieved.
The objective of the present invention for the locking necessary to efficiently synchronize the different speed processors without blocking the faster processor is best described by the following analogy: To visualize this model, imagine a warehouse with a door for requestors and a door for workers. Inside the warehouse is a long table with chairs on one side for requestors to write a work request, and opposite chairs for workers to pick up the work. The requestor looks at the current requestor seats available before he enters the warehouse and, if zero, waits. Otherwise, he enters and decrements the requestor seat count.
To find a chair, he grabs the next request number (modulo number of chairs, initialized to zero), and goes to the location, checks to see if the previous requestor has left, sits down, and writes his workorder. He then increments the current workorder outstanding count by the worker door, and may or may not ring the service bell(s) to get help. A worker, either on hearing the bell or on glancing at the workorder outstanding count and seeing it nonzero, decrements the workorder outstanding count and, if nonzero, enters the warehouse. To find a chair, he grabs the next service number (modulo number of chairs, initialized to zero), and goes to the location, checks to see if the previous worker has left, sits down, picks up the workorder, and as he leaves, increments the requestor seats available.
There is still individual locking required around the seat position. This circular queue is basically a clock distribution mechanism that avoids convoying around a master queue lock and is still FIFO in response. An individual requestor/worker can still find themselves waiting on each other if one or the other is slow (or not operating).
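The warehouse analogy maps onto a ticket-numbered circular buffer. The sketch below is a single-threaded rendering with names of our own choosing (seats, tickets); it shows how requestors and workers chase each other through the slots in FIFO order, with the seat counters playing the role of the two doors:

```python
# Single-threaded sketch of the warehouse analogy: a circular table of
# chairs, a requestor-seat count, a workorder-outstanding count, and
# two ticket counters taken modulo the number of chairs.

NCHAIRS = 4

table = [None] * NCHAIRS      # one workorder slot per chair
requestor_seats = NCHAIRS     # seats available to requestors
outstanding = 0               # workorders awaiting a worker
next_request = 0              # requestor ticket counter
next_service = 0              # worker ticket counter

def submit(workorder):
    global requestor_seats, outstanding, next_request
    if requestor_seats == 0:
        return False                   # wait outside the warehouse
    requestor_seats -= 1               # enter, taking a seat
    seat = next_request % NCHAIRS      # grab the next request number
    next_request += 1
    table[seat] = workorder            # sit down and write the order
    outstanding += 1                   # bump the count by the worker door
    return True

def collect():
    global requestor_seats, outstanding, next_service
    if outstanding == 0:
        return None                    # nothing to pick up
    outstanding -= 1
    seat = next_service % NCHAIRS      # grab the next service number
    next_service += 1
    order, table[seat] = table[seat], None
    requestor_seats += 1               # leaving frees a requestor seat
    return order

for job in ["a", "b", "c"]:
    submit(job)
assert [collect(), collect(), collect()] == ["a", "b", "c"]  # FIFO
```

As the text notes, a requestor can still wait at an individual seat if the previous occupant is slow; the tickets only remove contention on a single master lock.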
Although the present invention of the wake queue, with the associated integrated dispatcher and the ability to create alternative processing entities, is implemented in a tightly coupled, multithreaded, multiprocessing environment, it is not limited to this implementation. The wake queue method of the present invention can be implemented in other functions and utilities that require efficient processor scheduling.
Those having normal skill in the art will recognize the foregoing and other objects, features, advantages and applications of the present invention from the following more detailed description of the preferred embodiments as illustrated in the accompanying drawings.
DESCRIPTION OF THE DRAWINGS
Figure 1 is a table showing prior art interrupt handlers.
Figure 2 is a schematic diagram showing the wake queue in the integrated dispatcher environment.
Figure 3 is a pictorial representation of an excerpt showing the format of wake queue global registers.
Figure 4 is an illustration of the interaction between global registers and certain portions of memory.
Figure 5 shows an excerpt from the header file that describes the format of the wake queue-related global registers.
DESCRIPTION OF THE PREFERRED EMBODIMENT
The wake queue mechanism is a way for coprocessors of unequal speed to communicate efficiently in a multiprocessor operating system. Slow speed producers can produce entries for high speed consumers without interrupting the consumer. The consumers in the present invention are typically the processors of the preferred embodiment. Typical producers are anything attached to the input/output controller (IOC), e.g., a disk drive, a frame buffer, etc., although the computer system processors can also be producers.
Figure 2 shows the functions and data structures in the preferred embodiment of the wake queue. In the present invention, the producer communicates with the consumer by queuing an entry via the queue entry function 80 on a wake queue data structure 70. The entry is ultimately consumed by the consuming processor when it runs the integrated dispatcher function 1112, or any other kernel procedure assigned to process wake queue entries, which dequeues the entry via the consume entry function 30. The related patent application entitled SCHEDULING MECHANISM FOR A MULTIPROCESSING OPERATING SYSTEM contains information on the integrated dispatcher. Using the integrated dispatcher to process wake queue entries is in agreement with the preferred embodiment of the present invention; however, other means are available. That is, any kernel procedure can be enhanced with the intelligence needed to process wake queue entries.
The processing done on the entry can vary widely, depending upon the intelligence added in the consume entry function 30. For example, if the wake queue's entries consist of semaphore addresses, the consume entry function can wake a process waiting on the semaphore address in that entry, thereby queuing that process onto the run queue data structure 10. This process is then eligible to be run. This allows coprocessors to schedule tasks they want fast processors to execute. If the wake queue's entries consist of command blocks, the consume entry function 30 could queue the command block onto a work queue data structure 90 and wake a waiting iproc, queuing it on the run queue data structure 10. An iproc is a minimal context process entity capable of performing all system-side tasks associated with kernel functions of a kernel based operating system. It is described in greater detail in the commonly-assigned and copending patent application entitled METHOD OF IMPLEMENTING KERNEL FUNCTIONS USING MINIMAL-CONTEXT PROCESSES.
This iproc is now eligible to run. When it does, it consults the appropriate work queue to obtain the information communicated from the coprocessor and processes it accordingly. This allows fast processors to do what is referred to in standard UNIX as "post interrupt processing" without having to interrupt the fast processor. It is important to note here that once the iproc runs, it will process all of its work queue entries, not just one. That is, multiple entries may have been queued onto the wake queue by coprocessors and then transferred to the work queue before this iproc was chosen to run. This allows the consumer (the iproc) to process the output of multiple producers [the coprocessor(s)] much more efficiently. It should be noted that what is placed on the wake queue and what the consumer does with it upon removal is irrelevant to the present invention. The invention is an apparatus for placing and removing items only.
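The drain-everything behavior of the iproc described above can be sketched as follows. The wake-queue locking is elided and the names (dispatcher_transfer, iproc_run) are illustrative, not the patent's; the point is only that one activation of the iproc consumes every queued command block:

```python
from collections import deque

work_queue = deque()
processed = []

def dispatcher_transfer(wake_entries):
    # Integrated dispatcher role: move command blocks dequeued from the
    # wake queue onto the work queue, then wake the iproc (simulated
    # here as a direct call).
    work_queue.extend(wake_entries)
    iproc_run()

def iproc_run():
    # The iproc drains *all* of its work queue entries, not just one,
    # so the output of several slow producers is handled in a single
    # activation of the fast processor.
    while work_queue:
        processed.append(work_queue.popleft())

# Three coprocessors queued entries before the iproc was chosen to run.
dispatcher_transfer(["blk0", "blk1", "blk2"])
assert processed == ["blk0", "blk1", "blk2"]
```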
The wake queue can be used to implement a coprocessor scheduling mechanism with limited interrupts. That is, interrupts are now limited to handling the case when the wake queue fills up. The wake queue is circular and of fixed size, so when it fills, an interrupt is generated such that the integrated dispatcher can run and empty out the wake queue. This situation where the wake queue fills up could occur when the processors would otherwise be flooded by requests, as in the case of multiple striped disk requests occurring at the same time.
The overall idea behind the wake queue in accordance with this invention is to avoid having slower coprocessors (e.g., a disk controller) interrupt fast processors, because doing so slows down the fast processors. The wake queue allows this by letting coprocessors queue entries in a certain section of memory, called the wake queue. Access to this section of memory is governed by a locking mechanism implemented in a set of global registers. The interface functions between memory areas and these global registers are illustrated in Figure 4. Both the processors and the coprocessors must adhere to this locking mechanism when queuing or dequeuing 'entries' from the wake queue.
Figure 5 shows an excerpt from the header file that describes the format of the wake queue-related global registers, with their format described pictorially in Figure 3.
These global registers are initialized as follows:
G_Stats = 0
    full = 0, no occurrences of queue full
    wait = 0, no occurrences of waiting for entry
G_Avail
    available = number of entries in queue (a power of 2)
    queue_mask = available - 1 (mod max entries)
    queue_fwa = base address of queue entries (in this cluster)
G_Reqs = 0
    requests = 0, number of requests outstanding
    idle = number of idle processors
    dont_signal = bit map of processors never to signal (must always be at least one eligible)
G_Offs = 0
    pad = 0, used to insure no f&a overflow of in and out ordinals
    in_ordinal = 0, producer increments to get slot to store entry
    out_ordinal = 0, consumer increments to get slot to process
    size = entry size in words (address = ordinal*size + queue_fwa)
    locked = 0, table unlocked
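A sketch of initializing these registers as packed words follows. The field widths here are our assumption for illustration only; the actual layout is the one given by the header file excerpt of Figure 5:

```python
# Hypothetical packing of the wake-queue global registers into 64-bit
# words. Field widths and sample values are illustrative, not the
# patent's actual layout.

def pack(fields):
    # fields: list of (value, width) pairs, most significant field first.
    word = 0
    for value, width in fields:
        assert 0 <= value < (1 << width)
        word = (word << width) | value
    return word

N_ENTRIES = 256                       # number of queue entries, a power of 2
QUEUE_FWA = 0x1000                    # base address of queue entries

g_stats = pack([(0, 32), (0, 32)])    # full = 0, wait = 0
g_avail = pack([(N_ENTRIES, 16),      # available slots
                (N_ENTRIES - 1, 16),  # queue_mask = available - 1
                (QUEUE_FWA, 32)])     # queue_fwa
g_reqs  = pack([(0, 32),              # requests outstanding = 0
                (4, 16),              # idle processors (sample value)
                (0b0001, 16)])        # dont_signal bit map
g_offs  = pack([(0, 8), (0, 24),      # pad = 0, in_ordinal = 0
                (0, 24), (8, 8)])     # out_ordinal = 0, size = 8 words

assert g_stats == 0 and g_reqs != 0
```

Packing all of a register's fields into one word is what makes the single atomic fetch-and-add on that register update the bookkeeping in one step.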
The locking mechanism implemented in these global registers is used to govern access to the memory being used as a wake queue. This mechanism consists of having 'producers' gain access to queue an entry on the wake queue by decrementing the "available" field and incrementing the "requests" field, and conversely, having 'consumers' gain access to dequeue an entry from the wake queue by decrementing the "requests" field and incrementing the "available" field. In this way, the in and out pointers safely chase each other through the queue entries. That is, to queue an entry, a producer calls queue_entry which performs the following:
1. Uses an atomic operation to obtain and update the G_Avail register value.
If the avail field of this register indicates that there are no available slots in the wake queue, generate an interrupt to allow the fast processors to clean out the wake queue.
Note: In our embodiment, this operation is the FCA (Fetch and Conditionally Add) instruction. FCA adds a value to the contents of the register and returns the result to that register if that result is greater than or equal to zero.
2. Uses an atomic operation to obtain and update the G_Offs register value.
Note: In our embodiment, this operation is the FAA (Fetch and Add) instruction. FAA adds a value to the contents of the register and returns the result to that register.
3. Calculates the address in the wake queue at which to deposit the entry using the in_ord and size fields of the G_Offs register and the q_fwa and q_mask fields of the G_Avail register.
4. Updates the G_Stats register.
5. Deposits 'entry' information at 'address'.
6. Uses an atomic operation to increment the requests field of the G_Reqs register.
To dequeue an entry, a consumer calls consume_entry which performs the following:
1. Uses an atomic operation to obtain the G_Avail register value.
2. Uses an atomic operation to obtain and update the G_Reqs register value.
Note: In our embodiment, the FCA is used to decrement the outstanding requests counter.
3. Uses an atomic operation to obtain and update the G_Offs register value.
Note: In our embodiment, the FAA is used to increment the out_ord field.
4. Calculates the address in the wake queue from which to dequeue the entry using the out_ord and size fields of the G_Offs register and the q_fwa and q_mask fields of the G_Avail register.
5. Updates the G_Stats register.
6. Dequeues 'entry' information at 'address'.
7. Uses an atomic operation to increment the avail field of the G_Avail register.
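The two procedures above can be sketched together as follows. FCA and FAA are modeled as plain helper functions on one-cell lists (the real instructions are atomic hardware operations), the statistics update is omitted, and the interrupt-on-full path is reduced to an exception; this is a single-threaded model of the bookkeeping, not the patent's implementation:

```python
# Sketch of queue_entry / consume_entry over the wake queue, modeling
# FAA (Fetch and Add) and FCA (Fetch and Conditionally Add) as helpers.

N = 8                  # queue entries, a power of 2
queue = [None] * N

g_avail = [N]          # available slots (FCA-guarded)
g_reqs = [0]           # requests outstanding (FCA-guarded)
g_in = [0]             # producer in_ordinal (FAA-bumped)
g_out = [0]            # consumer out_ordinal (FAA-bumped)

def faa(reg, v):
    # Fetch-and-add: unconditionally add, return the prior value.
    old = reg[0]
    reg[0] += v
    return old

def fca(reg, v):
    # Fetch-and-conditionally-add: commit the add only if the
    # result is greater than or equal to zero.
    if reg[0] + v >= 0:
        reg[0] += v
        return True
    return False

def queue_entry(entry):
    if not fca(g_avail, -1):           # step 1: no free slot
        raise RuntimeError("wake queue full: interrupt fast processors")
    slot = faa(g_in, 1) % N            # step 2: in_ordinal picks the slot
    queue[slot] = entry                # step 5: deposit entry at address
    faa(g_reqs, 1)                     # step 6: publish the request

def consume_entry():
    if not fca(g_reqs, -1):            # steps 1-2: nothing outstanding
        return None
    slot = faa(g_out, 1) % N           # step 3: out_ordinal picks the slot
    entry, queue[slot] = queue[slot], None   # step 6: dequeue the entry
    faa(g_avail, 1)                    # step 7: slot reusable by producers
    return entry

queue_entry("sem@0x40")
queue_entry("sem@0x48")
assert consume_entry() == "sem@0x40"   # FIFO: in and out chase each other
```

Because available and requests are only ever moved by the conditional adds, a producer can never claim a slot the consumers have not freed, and a consumer can never claim an entry the producers have not published.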
This scheme allows coprocessors to communicate with faster processors without interrupting the faster processors. Some kernel routine must be embellished with the intelligence to handle each wake queue. In our embodiment, this routine is the integrated dispatcher. When a fast processor executes whatever routine has been specified to handle the wake queue, it will dequeue entries and do whatever processing is appropriate for each entry.
The information communicated between the coprocessor and the processor is completely implementation dependent, as long as the coprocessor routine which is queuing entries and the kernel routine which is dequeuing entries mutually understand the contents of the entry and the processing that is appropriate for that entry.
Multiple wake queues can exist, and each different wake queue could have entirely unique entry formats. Each wake queue would depend upon setting up a set of global registers to govern access to it and enhancing some kernel routine with the intelligence to process its entries. Each entry on a specific wake queue will be of the same format/size, but entries from different wake queues need have nothing in common. For example, one coprocessor may queue eight byte entries on a certain wake queue, while another coprocessor could be queuing twenty-four byte entries onto a different wake queue.
A command block is set up to detail specific processing. It is often associated with I/O. For example, the operating system can pass command blocks to a disk controller to inform the controller where data should be read/written from/to, the amount of data to transfer, etc.
Command blocks can be queued on a wake queue to detail peripheral activity. That is, a fast processor could be the producer queuing an entry on a wake queue, and the controller would then be a consumer dequeuing entries to decipher details of disk transfers. Note that this implies that a coprocessor routine rather than a kernel routine would be enhanced with the intelligence to process queue entries. Upon completion of the disk activity, the roles could be reversed. The controller would then be the producer queuing an entry on a different wake queue and the fast processor would then be a consumer dequeuing the entry to determine which process should be woken up as a result of the completed disk activity. (This is the example mentioned above of the coprocessor queuing an address on the wake queue and the integrated dispatcher waking up the process sleeping on that address.)
Regarding the queue entry function 80, a processor or coprocessor calls the queue_entry routine whenever it has information it wants communicated via the wake queue mechanism. Although not limited to this use, it was designed with the intent of allowing slower coprocessors to communicate with fast processors without interrupts.
Note that for the consume entry function 30, whatever routine has been enhanced with the intelligence to process this wake queue's entries calls consume_entry to dequeue an entry and then performs the appropriate processing on that information. In the example cited herein, the coprocessor queues an address on the wake queue which the integrated dispatcher dequeues. The integrated dispatcher knows that this address is the address of a semaphore upon which a process has been sleeping while waiting for the coprocessor to finish this task. The integrated dispatcher performs a wake operation on that semaphore which queues the process on the run queue so that it can be chosen for execution.
Although the description of the preferred embodiment has been presented, it is contemplated that various changes could be made without deviating from the spirit of the present invention. Accordingly, it is intended that the scope of the present invention be dictated by the appended claims rather than by the description of the preferred embodiment. While the exemplary preferred embodiments of the present invention are described herein with particularity, those having normal skill in the art will recognize various changes, modifications, additions and applications other than those specifically mentioned herein without departing from the spirit of this invention.

Claims

1. In a system having a multiplicity of sources of service requests and one or more processors any of which can perform tasks correlated to said service requests with the system including a plurality of shared data storage resources, the method of interfacing between the service requests and the processors comprising the steps of:
designating at least one of said shared data storage resources as a wake queue;
introducing information to said wake queue relating to a data handling activity necessary to respond to a service request in response to the occurrence of said service request;
monitoring for availability of a processor; and
loading a processor as it becomes available with the said data handling activity information introduced to said wake queue.
2. The method in accordance with claim 1 wherein said designating step includes the step of establishing a plurality of data storage areas within said shared data storage resource; and
said introducing step includes the step of sequentially storing data handling activity information in said data storage areas in response to respectively occurring said service requests.
3. The method in accordance with claim 2 which includes the step of periodically causing at least one of said processors to inspect said wake queue for determining whether a service request requires handling.
4. The method in accordance with claim 3 which includes the step of producing an interrupt signal for demanding attention by one of said processors in response to storage of information in the last of said data storage areas thereby reflecting that the wake queue is at capacity.
5. The method in accordance with claim 4 wherein said data storage areas are each a global register of a multiprocessor environment, said introducing step including the step of sequentially placing information in said global registers.
6. The method in accordance with claim 5 wherein said introducing step further includes the step of responding to introduction of semaphore address entries in said global register by waking a waiting process.
7. The method in accordance with claim 5 wherein said introducing step includes the step of responding to the placing of command blocks in at least one of said global registers by queuing said command block in a work queue and waking a waiting iproc for further queuing in a priority level of a run queue data structure.
PCT/US1991/004070 1990-08-23 1991-06-10 Method of efficient communication between coprocessors WO1992003779A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/571,951 US5202988A (en) 1990-06-11 1990-08-23 System for communicating among processors having different speeds
US571,951 1990-08-23

Publications (1)

Publication Number Publication Date
WO1992003779A1 true WO1992003779A1 (en) 1992-03-05

Family

ID=24285730

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1991/004070 WO1992003779A1 (en) 1990-08-23 1991-06-10 Method of efficient communication between coprocessors

Country Status (1)

Country Link
WO (1) WO1992003779A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4644461A (en) * 1983-04-29 1987-02-17 The Regents Of The University Of California Dynamic activity-creating data-driven computer architecture
US4807111A (en) * 1987-06-19 1989-02-21 International Business Machines Corporation Dynamic queueing method
US4943913A (en) * 1988-02-10 1990-07-24 International Business Machines Corporation Operating system accessing control blocks by using home address space segment table to control instruction and operand fetch and store operations
US5031089A (en) * 1988-12-30 1991-07-09 United States Of America As Represented By The Administrator, National Aeronautics And Space Administration Dynamic resource allocation scheme for distributed heterogeneous computer systems

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0954796A1 (en) * 1996-03-19 1999-11-10 Bull HN Information Systems Inc. Procedure to detect and disperse convoys for queued gate servers
EP0954796A4 (en) * 1996-03-19 2006-10-04 Bull Hn Information Syst Procedure to detect and disperse convoys for queued gate servers
US11748457B2 (en) 2018-02-02 2023-09-05 Dover Microsystems, Inc. Systems and methods for policy linking and/or loading for secure initialization
US11797398B2 (en) 2018-04-30 2023-10-24 Dover Microsystems, Inc. Systems and methods for checking safety properties
WO2020097179A1 (en) * 2018-11-06 2020-05-14 Dover Microsystems, Inc. Systems and methods for stalling host processor
WO2020097177A1 (en) * 2018-11-06 2020-05-14 Dover Microsystems, Inc. Systems and methods for stalling host processor
US11875180B2 (en) 2018-11-06 2024-01-16 Dover Microsystems, Inc. Systems and methods for stalling host processor
US11841956B2 (en) 2018-12-18 2023-12-12 Dover Microsystems, Inc. Systems and methods for data lifecycle protection

Similar Documents

Publication Publication Date Title
US5202988A (en) System for communicating among processors having different speeds
US5452452A (en) System having integrated dispatcher for self scheduling processors to execute multiple types of processes
US5390329A (en) Responding to service requests using minimal system-side context in a multiprocessor environment
US6006247A (en) Method and system for scheduling threads and handling exceptions within a multiprocessor data processing system
US7137116B2 (en) Method and system for performing a task on a computer
US11550627B2 (en) Hardware accelerated dynamic work creation on a graphics processing unit
EP1839146B1 (en) Mechanism to schedule threads on os-sequestered without operating system intervention
Anderson et al. Scheduler activations: Effective kernel support for the user-level management of parallelism
US6732138B1 (en) Method and system for accessing system resources of a data processing system utilizing a kernel-only thread within a user process
US7373640B1 (en) Technique for dynamically restricting thread concurrency without rewriting thread code
US4387427A (en) Hardware scheduler/dispatcher for data processing system
JP2514299B2 (en) Serialization method of interrupt handling for process level programming
US6983462B2 (en) Method and apparatus for serving a request queue
JPH03126158A (en) Method and apparatus for scheduling
US20040078799A1 (en) Interpartition communication system and method
WO1992003779A1 (en) Method of efficient communication between coprocessors
JPS6334490B2 (en)
JP7346649B2 (en) Synchronous control system and method
Wani Operating System
Papadimitriou et al. Mac OS versus FreeBSD: A comparative evaluation
Labrosse Operating systems
Rothberg Interrupt handling in Linux
WO1992003783A1 (en) Method of implementing kernel functions
WO1992003784A1 (en) Scheduling method for a multiprocessing operating system
Seo et al. Supporting preemptive user-level threads for embedded real-time systems

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP KR

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE