US20120079486A1 - Integration of dissimilar job types into an earliest deadline first (EDF) schedule - Google Patents

Integration of dissimilar job types into an earliest deadline first (EDF) schedule

Info

Publication number
US20120079486A1
Authority
US
United States
Prior art keywords
job
static
schedule
processor
edf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/889,263
Inventor
Scott A. Brandt
Richard A. Golding
Theodore M. Wong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp
Priority to US12/889,263
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. Assignment of assignors interest (see document for details). Assignors: BRANDT, SCOTT A.; GOLDING, RICHARD A.; WONG, THEODORE M.
Publication of US20120079486A1
Priority to US13/594,468 (US8607240B2)
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/4887 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues involving deadlines, e.g. rate based, periodic


Abstract

A system for inserting jobs into a scheduler of a processor includes the processor and the scheduler. The processor executes instructions related to a plurality of jobs. The scheduler implements an earliest deadline first (EDF) scheduling model. The scheduler also receives a plurality of jobs from an EDF schedule. The scheduler also receives a separate job from a source other than the EDF schedule. The separate job has a fixed scheduling requirement. The separate job also may be a short duration sporadic job. The scheduler also inserts the separate job into an execution plan of the processor in response to a determination that an available utilization capacity of the processor is sufficient to execute the separate job according to the fixed scheduling requirement associated with the separate job.

Description

    STATEMENT OF FEDERALLY SPONSORED RESEARCH
  • This invention was made with Government support under Contract No. HR0011-08-C-0031 awarded by Defense Advanced Research Projects Agency (DARPA). The Government has certain rights in the invention.
  • BACKGROUND
  • Many systems, including embedded systems, have real-time timeliness constraints on their processing. Such systems often interact with the real world and/or users and have specific constraints on when they must respond to events, produce results, etc. Some examples of such systems include safety-critical systems such as anti-lock brakes, control systems such as flight control systems, multimedia systems such as video or audio players, and so forth. Systems intended to address such real-time situations must be carefully designed to ensure that the timing requirements of the applications they are built to execute are met.
  • The CPU scheduler is a core operating system component responsible for managing the execution scheduling of different programs. The CPU scheduler is responsible for determining which program to run at any given time. Where programs have implicit or explicit timeliness constraints or deadlines, the CPU scheduler is primarily responsible for ensuring that the applications execute in time to satisfy the timeliness constraints or deadlines. Many different operating systems and CPU schedulers have been developed to address various sorts of timing requirements.
  • One type of scheduler that is frequently used in legacy systems (e.g., military and aerospace systems) is based on static schedules. Static schedules are very basic table-driven schedules that include an ordered list of tasks along with a set of times at which those tasks should occur. Upon completion of all of the tasks in the list, the list typically repeats with a fixed frequency, or period. Static schedules are common in legacy systems due in part to their extreme simplicity. However, static schedules are limited in their use because each static schedule is essentially hand-designed to ensure that all work can be completed in the time available. This is feasible only for small task sets.
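  • For illustration only (not part of the patent text), a table-driven static schedule of the kind described above might be represented as a fixed list of jobs and completion times that repeats each period; all names and numbers below are hypothetical.

```python
# Hypothetical sketch of a table-driven (static) schedule: an ordered list of
# jobs with the times, within one period, by which each must complete. The
# table repeats with a fixed period once the last entry finishes.
STATIC_PERIOD_MS = 100.0  # assumed period for the example

STATIC_TABLE = [
    ("sensor_read", 10.0),  # must finish by t = 10 ms within each period
    ("control_law", 45.0),
    ("actuate",     60.0),
    ("telemetry",   95.0),
]

def completion_times(cycle):
    """Absolute completion times for one repetition of the table."""
    offset = cycle * STATIC_PERIOD_MS
    return [(job, offset + t) for job, t in STATIC_TABLE]
```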
  • In contrast to static schedules, other schedulers use dynamic schedules. One type of dynamic schedule is a dynamic priority-based schedule. Priority scheduling (e.g., rate monotonic (RM)) assigns a static priority to each concurrently executing task. Tasks are executed according to their priority, with the highest priority task that is currently executable always selected for execution. A limitation of static priority-based schedules is that in general only a relatively low fraction of the CPU utilization may be guaranteed. Another type of dynamic schedule is a dynamic deadline-based schedule. Deadline scheduling assigns a set of deadlines to each task. In one deadline-based scheduling algorithm known as earliest deadline first (EDF), the system always runs the task with the earliest deadline. An advantage of EDF is that 100% of the CPU utilization may be guaranteed. However, EDF is less frequently used due to concerns about its overhead. Recent research indicates that deadline based schemes are effective in supporting a broad range of types of processes ranging from best-effort to hard real-time.
  • An advantage of RM and EDF dynamic scheduling schemes is that they allow the concurrent execution of different applications with different timeliness requirements with less hand-integration than a static schedule. In this way, each application can be admitted to the system based on knowledge of the resource availability, without detailed knowledge of the other applications. A potential disadvantage of RM and EDF dynamic scheduling is that they are incapable of supporting legacy processes designed to operate in isolation with static schedulers. This prevents the migration of legacy applications and systems to modern multi-programmed processing platforms. Since legacy applications that use static scheduling are common in aerospace, automotive, and other industries, there are many legacy applications that currently cannot be integrated into modern multi-programmed processing platforms that use dynamic scheduling.
  • SUMMARY
  • Embodiments of a computer program product are described. The computer program product includes a computer readable storage medium to store a computer readable program that, when executed on a computer, causes the computer to perform operations for scheduling jobs on a processor. The scheduler implements an EDF scheduling model. In one embodiment, the operations include receiving a plurality of jobs for an earliest deadline first (EDF) schedule. The operations also include receiving one or more separate jobs with a fixed scheduling requirement. In some embodiments, the separate job additionally or alternatively may be a short duration sporadic job. The operations also include determining an amount of available utilization capacity of the processor. The operations also include inserting the separate job into an execution plan of the processor in response to a determination that the available utilization capacity of the processor is sufficient to execute the separate job according to the fixed scheduling requirement associated with the separate job. Other embodiments of the computer program product and corresponding operations are also described.
  • Embodiments of a system are also described. In one embodiment, the system includes a processor and a scheduler. The processor executes instructions related to a plurality of jobs. The scheduler implements an earliest deadline first (EDF) scheduling model. The scheduler also receives a plurality of jobs for an EDF schedule. The scheduler also receives a separate job with a fixed scheduling requirement. The scheduler also inserts the separate job into an execution plan of the processor in response to a determination that an available utilization capacity of the processor is sufficient to execute the separate job according to the fixed scheduling requirement associated with the separate job. Other embodiments of the system are also described.
  • Embodiments of a method are also described. In one embodiment, the method is implemented within a scheduler for a processor. The scheduler implements an earliest deadline first (EDF) scheduling model. An embodiment of the method includes receiving a plurality of jobs for an EDF schedule. The method also includes receiving a separate job with a fixed scheduling requirement with a specific execution time. The method also includes determining an amount of available utilization capacity of the processor. The method also includes inserting the separate job into an execution plan of the processor with the plurality of jobs from the EDF schedule in response to a determination that the available utilization capacity of the processor is sufficient to execute the separate job according to the fixed scheduling requirement associated with the separate job. Other embodiments of the method are also described.
  • Other aspects and advantages of embodiments of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrated by way of example of the principles of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 depicts a schematic block diagram of one embodiment of a computer system for integrating dissimilar job types into an integrated schedule for dynamic scheduling.
  • FIG. 2 depicts a schematic diagram of one embodiment of a process for integrating static jobs into an EDF schedule.
  • FIG. 3 depicts a schematic diagram of one embodiment of a process for inserting idle jobs into the integrated schedule of FIG. 2.
  • FIG. 4 depicts a schematic diagram of one embodiment of a process for integrating a sporadic job into an EDF schedule.
  • FIG. 5 depicts a flow chart diagram of one embodiment of a method for operating the EDF scheduler for the processor within the computer system of FIG. 1.
  • Throughout the description, similar reference numbers may be used to identify similar elements.
  • DETAILED DESCRIPTION
  • In the following description, specific details of various embodiments are provided. However, some embodiments may be practiced with less than all of these specific details. In other instances, certain methods, procedures, components, structures, and/or functions are described in no more detail than necessary to enable the various embodiments of the invention, for the sake of brevity and clarity.
  • It will be readily understood that the components of the embodiments as generally described herein and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
  • The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
  • Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment of the invention. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, discussions of the features and advantages, and similar language, throughout this specification may, but do not necessarily, refer to the same embodiment.
  • Furthermore, the described features, advantages, and characteristics of the invention may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention.
  • Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present invention. Thus, the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
  • While many embodiments are described herein, at least some of the described embodiments facilitate integrating non-EDF jobs with EDF jobs within an EDF scheduling model. In some embodiments, statically scheduled tasks are executed concurrently with other tasks in a multiprogrammed real-time environment. In other embodiments, short-duration sporadic jobs (which may or may not be part of a static job schedule) are safely executed in a multiprogrammed real-time environment. More specifically, short-duration periodic or aperiodic pieces of code may be executed in a system scheduled with EDF.
  • As a result of the embodiments described herein, multiprogrammed real-time systems may safely execute legacy code, or other high-priority code such as interrupt service routines, which are designed to execute at specific times. Additionally, the execution of such legacy code can be implemented without causing other real-time and non-real-time processes to violate their timing requirements. This includes jobs implemented via legacy code designed for execution on isolated processors via static schedules. This enables legacy code to execute in modern real-time environments and facilitate graceful transitions from legacy code to modern real-time code designed for multiprogrammed environments. In other embodiments, the executed code is not necessarily legacy code.
  • As a matter of convenience, the terms “task” and “job” are used interchangeably within this description. These terms are not intended to have specific meanings herein, other than to reference a general amount of executable instructions or code. There is not necessarily any correlation between a task or job and a specific amount of code or typical types of scheduled units such as processes or threads (although some examples may use implementations with processes and/or threads). Additionally, there is no restriction on the type of multiprogramming environment in which tasks or jobs may be executed.
  • FIG. 1 depicts a schematic block diagram of one embodiment of a computer system 100 for integrating dissimilar job types into an integrated schedule for dynamic scheduling. Although the computer system 100 is illustrated with specific component parts which are described with certain functionality, other embodiments of the computer system 100 may include fewer or more component parts capable of implementing more or less functionality.
  • In the illustrated embodiment, the computer system 100 includes a processor 102 and a memory 104. The processor 102 is representative of any type of processor which executes instructions or program code. Some examples of such processors include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor, a coprocessor, or another similar type of processor. In some embodiments, the illustrated processor 102 is representative of a multi-processor environment in which multiple processors cooperatively execute tasks or jobs in a multi-threaded or other parallel processing approach.
  • The memory 104 is representative of any type of memory that can store instructions or program code for execution by the processor 102. In some embodiments, the memory 104 is separate from the processor 102, as in the case of main memory or lower levels of cache. In other embodiments, the memory 104 is integrated with the processor 102, as in the case of higher levels of cache that are implemented on the same die as the processor 102.
  • The depicted memory 104 stores a program application 106 that is designed for execution on the processor 102. The application 106 may be designed to be implemented according to an EDF schedule 108. In this way, the application 106 is designed for implementation so that the task with the earliest deadline is implemented first, at any given time during the execution of the application 106.
  • The memory 104 also stores another program application 110 that is not natively designed for execution according to an EDF schedule. Rather, this application 110 is designed for execution according to a static schedule 112. In some embodiments, this type of situation occurs when a legacy application 110 from a legacy system (not shown) is loaded into the memory 104 of the computer system 100. The legacy system may have been specially designed to implement the static schedule 112 of the legacy application 110 and, thus, is not inherently compatible with the computer system 100 onto which it is loaded.
  • The processor 102 includes an EDF scheduler 114 that schedules tasks for processing by the processor 102. Specifically, the EDF scheduler 114 schedules the tasks according to an EDF scheduling model. In a specific embodiment, the EDF scheduler 114 implements a rate-based earliest deadline (RBED) scheduling model. For implementation of the EDF scheduling model, each task has an associated deadline, and the scheduler 114 schedules the tasks so that the tasks with the earliest deadlines are processed before the tasks with later deadlines. Assuming the processor 102 has sufficient resources, the scheduler 114 can satisfy all of the associated deadlines by allocating processor time for execution of specific tasks prior to their corresponding deadlines.
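  • As a minimal sketch of the selection rule just described (the executable task with the earliest deadline runs first), and not the patent's actual scheduler code, consider the following; the Task fields are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float  # absolute deadline; EDF always selects the smallest value

def edf_pick(ready_tasks):
    """Return the executable task with the earliest deadline, or None if idle."""
    return min(ready_tasks, key=lambda t: t.deadline, default=None)

# Example: with deadlines 30, 12, and 50, the task due at t = 12 runs first.
ready = [Task("A", 30.0), Task("B", 12.0), Task("C", 50.0)]
assert edf_pick(ready).name == "B"
```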
  • In the illustrated embodiment, the EDF scheduler 114 schedules tasks for both of the applications 106, 110 stored in the memory. Hence, the scheduler 114 schedules EDF processes 116 for the application 106 designed for implementation with the EDF schedule 108. Also, the scheduler 114 schedules static/sporadic processes 118 for the application 110 designed for implementation with the static schedule 112.
  • In one embodiment, the scheduler 114 takes the tasks (e.g., processes 116, 118) for the applications 106, 110 and creates an execution plan 120 for use by the processor 102. In one embodiment, the scheduler 114 may create and store an actual execution plan 120 prior to execution of the tasks by the processor 102. If the execution plan 120 is stored for later execution, the execution plan 120 may be stored in the memory 104 or on another memory device (e.g., on-board cache) coupled to the scheduler 114. In another embodiment, the scheduler 114 dynamically creates the execution plan 120 so that the execution plan is created by the scheduler 114 at approximately the same time that the tasks are executed by the processor 102.
  • FIG. 2 depicts a schematic diagram of one embodiment of a process 130 for integrating static jobs into an EDF schedule. For reference, certain component parts of the computer system 100 are shown in FIG. 2. However, the flow of data and processing operations depicted in FIG. 2 are not necessarily indicative of a specific physical layout or data coupling of such component parts within the computer system 100.
  • A periodic real-time task commonly includes a unit of work called a job that is repeatedly executed with a frequency equal to 1/p, where p is the period of the task. Thus, a task with a period of 1/10 of a second would execute the job as many as ten times per second. Jobs become ready to run at their release time and must complete by their deadline. A common practice is to make the deadline of the previous job the release time of the next job, and to set the deadline of each job equal to the period of the task. In that scenario, the execution of the task includes a sequence of jobs—one job per period, and each job is ready to run as soon as the previous job's deadline has been reached. Other embodiments may be implemented with other possible configurations that use different release times and/or deadlines.
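  • As a short worked illustration of the convention described above (job k released at k*p and due at (k+1)*p), with hypothetical numbers:

```python
def job_window(period, k):
    """Release time and deadline of the k-th job (k = 0, 1, 2, ...) of a
    periodic task, under the common convention that each job is released at
    the previous job's deadline and is due one period later."""
    release = k * period
    deadline = (k + 1) * period
    return release, deadline

# A task with a period of 100 ms (1/10 of a second) executes its job ten
# times per second:
for k in range(3):
    print(job_window(100, k))  # (0, 100), (100, 200), (200, 300)
```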
  • In the illustrated embodiment, the scheduler 114 includes a static integration engine 132 which uses the EDF schedule 108 and the static schedule 112 to generate an integrated schedule 134. An example of the EDF schedule 108 is shown in table format, including a list of jobs (designated as J_1 through J_N) with associated deadlines (D_1 through D_N). The deadline indicates the time at which the job must be completed.
  • In one embodiment, tasks also may have associated rates (R_1 through R_N) that indicate the fraction of the total processor time they will receive between each of their deadlines. To enforce these rates, a timer may be set to interrupt the processing of any job to prevent it from using more than its assigned rate by its assigned deadline. Each task initially has a budget. In one embodiment, the budget is calculated as B_i=R_i*P_i, where R_i is the task's rate, and P_i is the time between consecutive deadlines of the same task. Other embodiments may use other mathematical operations or algorithms to calculate the budget for a specific task. For example, in some embodiments, that budget may be calculated according to B_i=R_i/P_i. Each time a task executes, B_i is reduced by the amount of time the task executed. When the task is executed, the timer is set to go off B_i time units in the future. Upon receipt of such an interrupt, the scheduler may set the corresponding task to a state that is not currently executable and proceed to execute the executable task with the highest priority (earliest deadline). If the task will execute again with a new deadline, B_i may be replenished (e.g., again set to R_i*P_i) when the previous deadline has been reached.
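  • A hedged sketch of the budget accounting just described (B_i = R_i*P_i, decremented as the task runs, enforced by a timer, and replenished at the previous deadline); this is an illustration under assumed interfaces, not the patent's implementation:

```python
class BudgetedTask:
    """Minimal sketch of per-task budget enforcement as described above."""

    def __init__(self, name, rate, period):
        self.name = name
        self.rate = rate             # fraction of the processor (0..1]
        self.period = period         # time between consecutive deadlines
        self.budget = rate * period  # B_i = R_i * P_i
        self.executable = True

    def start_running(self, set_timer):
        # Arm an enforcement timer to fire after B_i time units of execution.
        set_timer(self.budget)

    def account(self, ran_for):
        # Charge the time the task actually executed against its budget.
        self.budget -= ran_for
        if self.budget <= 0:
            # Budget exhausted: not executable until the budget is replenished.
            self.executable = False

    def replenish(self):
        # At the previous deadline, restore the budget for the next period.
        self.budget = self.rate * self.period
        self.executable = True
```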
  • An example of the static schedule 112 is also shown. The depicted static schedule 112 includes a list of jobs (designated as j_1 through j_n) and corresponding instants (i_1 through i_n), which are also referred to as critical instants. In a legacy system, the static schedule 112 would be executed on a dedicated processor that is not running any other jobs or tasks. The critical instants indicate the times by which the corresponding jobs must be completed and, in some embodiments, at which the next job must be started.
  • Typically, this type of static schedule would be designed for a specific processing environment to ensure that the tasks are executed exactly as indicated in the schedule to satisfy the critical instants. However, in embodiments described herein, the EDF scheduler 114 is capable of integrating the static jobs into a dynamic schedule for implementation on a processor other than the specific processor for which the static schedule was designed.
  • One example of an integrated schedule 134 is shown in FIG. 2. The integrated schedule 134 includes the jobs from both the EDF schedule 108 and the static schedule 112. In one embodiment, the entire static schedule 112 is treated as a non-periodic real-time task, and the critical instants of the static jobs are used as the deadlines of the jobs in the task in the integrated schedule 134. When all of the jobs in the static schedule 112 have completed, the entire set of jobs may be repeated, similar to the way that individual jobs in a periodic task are repeated.
  • In one embodiment, the scheduler 114 also generates a new rate (designated as r_S) corresponding to the task containing the static jobs. In one embodiment, some or all of the static jobs are combined into a single task with a rate r_S that is a summation of individual rates (r_1 through r_n) that would otherwise be associated with the individual static jobs. An example of how to generate this rate is described in more detail below. However, embodiments of the scheduler 114 may use various methods for generating and assigning rates to the static jobs within the integrated schedule 134.
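  • One way to picture the integration described above (critical instants reused as deadlines, and the individual rates summed into a single task rate r_S) is sketched below; the tuple layout is an assumption chosen for illustration.

```python
def integrate_static_schedule(static_jobs):
    """static_jobs: list of (job, critical_instant, rate) tuples, in order.

    Returns an EDF-ready task: each static job keeps its critical instant as
    its deadline, and the containing task gets rate r_S equal to the sum of
    the individual rates that would otherwise be assigned to the jobs."""
    deadlines = [(job, instant) for job, instant, _rate in static_jobs]
    r_S = sum(rate for _job, _instant, rate in static_jobs)
    return {"jobs_with_deadlines": deadlines, "rate": r_S}

# Example with hypothetical numbers:
static = [("j_1", 10.0, 0.10), ("j_2", 25.0, 0.15), ("j_3", 40.0, 0.05)]
task = integrate_static_schedule(static)  # rate r_S = 0.30
```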
  • In one embodiment, the scheduler 114 is an RBED integrated real-time scheduler which supports a variety of different types of processes. The scheduler 114 supports different processes by reserving a rate for each task. The rate may be expressed as any type of resource allocation for the processor 102. In one embodiment, the rate is expressed as a percentage of CPU processing cycles. In some embodiments, the deadlines, or times at which the indicated resource allocation is expected to equal the corresponding reservation, are also represented. By implementing the scheduler 114 in this manner, all tasks and jobs, including static tasks and jobs, are scheduled according to EDF, or more specifically according to RBED.
  • In one embodiment, the scheduler 114 implements RBED to support statically scheduled tasks in the following manner. In this example, a static schedule (S) is designed to operate on a CPU (C_1) with a known frequency (F_1). The scheduler (S) schedules a set of jobs (J={j_1, j_2, . . . , j_n}) that have critical instants (I={i_1, i_2, . . . , i_n}) corresponding to the start times and finish times of the tasks in the static schedule. In other words, job j_1 must finish before time i_1, and job j_2 must start at time i_1 and finish before time i_2.
  • In order to run the jobs (J) that are originally in the static schedule (S) on a second CPU (C_2) with a frequency (F_2, where F_2 is greater than F_1) that also may be running other tasks scheduled with EDF, having assigned rates and deadlines, additional parameters are generated for the jobs (J). In particular, the jobs (J) can run on the second CPU (C_2) concurrently with other tasks by assigning the task containing jobs (J) the following processing parameters:
  • Rate = F_1/F_2; and Deadlines = {i_1, i_2, . . . , i_n}.
  • In one embodiment, the RBED implementation ensures that each task starts and finishes at the appropriate times. This is accomplished by ensuring that the expected resources, exactly equivalent to the resources that would be provided by the first CPU (C_1), are available between any two critical instants i_a and i_b. In other words, the scheduler 114 makes sure that the same or similar processing resources are available for the static jobs as the processing resources that would be available on another processor for which the static schedule is designed.
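  • As a numeric illustration of the parameters above (the frequencies are made up): if the static schedule was designed for a 100 MHz CPU (C_1) and the host CPU (C_2) runs at 400 MHz, the static task is reserved Rate = F_1/F_2 = 0.25 of C_2, and its deadlines are the original critical instants.

```python
def static_task_parameters(f1_hz, f2_hz, critical_instants):
    """Rate and deadlines for running a static schedule designed for a CPU of
    frequency f1 on a faster CPU of frequency f2, following the parameters
    given above (illustrative helper, not from the patent)."""
    assert f2_hz > f1_hz
    return {"rate": f1_hz / f2_hz, "deadlines": list(critical_instants)}

params = static_task_parameters(100e6, 400e6, [10.0, 25.0, 40.0])
# params["rate"] == 0.25: the share of C_2 that reproduces, between any two
# critical instants i_a and i_b, the processing C_1 would have provided.
```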
  • It should also be noted that although embodiments are described herein with specific reference to RBED, other embodiments may be implemented with another scheduling model based upon EDF, for example cluster based scheduling (CBS) or variable rate execution (VRE).
  • FIG. 3 depicts a schematic diagram of one embodiment of a process 140 for inserting idle jobs into the integrated schedule of FIG. 2. In some embodiments, it may be useful to obscure the actual execution times of the jobs (J) from the static schedule (S) within the integrated schedule 134. For example, there may be security reasons to prevent or impair detection of the execution times via covert channels based on CPU usage. In one embodiment, idle times within the static schedule (S) can be allocated to idle tasks that consume any unused CPU time.
  • In the illustrated embodiment, the scheduler 114 includes a utilization engine 142 to identify idle times within the static schedule (S) and to generate idle jobs (designated as j_iA and j_iB) that consume some or all of the unused CPU time. Corresponding deadlines (i_iA and i_iB) (and optionally rates (r_iA and r_iB)) are also generated. Alternatively, some or all of the idle time may be available for allocation to other processes.
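  • A sketch of the idle-filling step described above; the (name, start, finish) representation of the static jobs is assumed for illustration only.

```python
def generate_idle_jobs(static_jobs, schedule_period):
    """Fill gaps between statically scheduled jobs with idle jobs so that the
    CPU-usage pattern does not reveal the real jobs' execution times."""
    idle_jobs, cursor = [], 0.0
    for name, start, finish in sorted(static_jobs, key=lambda j: j[1]):
        if start > cursor:
            # Gap before this job: emit an idle job ending when the job starts.
            idle_jobs.append((f"idle_before_{name}", cursor, start))
        cursor = max(cursor, finish)
    if cursor < schedule_period:
        idle_jobs.append(("idle_tail", cursor, schedule_period))
    return idle_jobs

# Example: jobs occupying [0, 10] and [25, 40] in a 50-unit period leave idle
# jobs covering [10, 25] and [40, 50].
print(generate_idle_jobs([("j_1", 0.0, 10.0), ("j_2", 25.0, 40.0)], 50.0))
```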
  • FIG. 4 depicts a schematic diagram of one embodiment of a process 150 for integrating a sporadic job 152 into an EDF schedule. For reference, certain component parts of the computer system 100 are shown in FIG. 4. However, the flow of data and processing operations depicted in FIG. 4 are not necessarily indicative of a specific physical layout or data coupling of such component parts within the computer system 100.
  • In some embodiments the sporadic jobs include interrupt service routines or jobs related to other externally generated events.
  • In the illustrated embodiment, the scheduler 114 includes a sporadic integration engine 154 which integrates the sporadic job 152 with the EDF schedule 108 to generate the integrated schedule 134.
  • In an embodiment using the RBED scheduler, sporadic jobs may be handled by assigning to the job (designated s_1) a corresponding rate (r_1) and deadline (d_1). The job then may be scheduled along with all other jobs handled by the RBED scheduler.
  • In the illustrated example of the integrated schedule 134, the sporadic job (designated as j_S) is scheduled between dynamic jobs J_2 and J_3. Also, the scheduler 114 generates a corresponding rate (r_S) and deadline (d_S) for the sporadic job (j_S).
  • In an RBED embodiment, the rate assigned to a sporadic job must be chosen so that the sum of the assigned rates is less than or equal to 100% of the CPU. In particular, if R is the sum of all of the assigned rates in the system prior to the arrival of the sporadic job, then the sporadic job may be assigned any rate r_S up to r′=100%−R. If the sporadic job should complete as quickly as possible, then it should be assigned the rate r_S=r′.
  • If a sporadic job has a worst case execution time e_S, then its deadline d_S may be set to its arrival time + e_S/r_S. The rate r_S is considered to be allocated and unavailable until either the deadline or until the arrival time + e'_S/r_S, whichever is later, where e'_S is the actual execution time of the job.
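  • The following sketch illustrates this rate and deadline assignment under the stated assumptions; admit_sporadic is a hypothetical helper, not an interface of any particular scheduler:

      def admit_sporadic(arrival, wcet, allocated_rate_sum, requested_rate=None):
          # Assign a rate and deadline to a sporadic job. allocated_rate_sum is R,
          # the sum of rates already assigned; the spare capacity is r' = 1 - R.
          # If no rate is requested, the job receives all spare capacity so that
          # it completes as quickly as possible.
          spare = 1.0 - allocated_rate_sum
          if spare <= 0.0:
              return None                              # insufficient utilization capacity
          r_s = spare if requested_rate is None else min(requested_rate, spare)
          d_s = arrival + wcet / r_s                   # deadline = arrival time + e_S / r_S
          return r_s, d_s

      # Example: a job with a 2 ms worst case execution time arrives at t = 1.0 s
      # while 60% of the CPU is already allocated -> rate 0.4, deadline 1.005 s
      print(admit_sporadic(arrival=1.0, wcet=0.002, allocated_rate_sum=0.60))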
  • If the resulting deadline of the sporadic job is the earliest deadline in the system, the job may begin execution immediately and, as long as no other job enters the system with an earlier deadline, it may safely execute to completion without interfering with the processing of any other job or task. This is useful in the case of interrupt service routines and other urgent but not necessarily compute-intensive sporadic processing.
  • This functionality can be used in at least two ways. First, this type of functionality can be used for determining the feasibility of specific jobs or tasks with fixed scheduling requirements in a system of tasks scheduled with EDF. Second, this type of functionality can be used for determining at run-time when it is safe to allow interrupt service routines (or other asynchronous events) to be executed. As described above, the scheduler 114 may implement RBED or another type of EDF scheduling model.
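  • By way of example only, the run-time check described above reduces to a deadline comparison. The sketch below assumes the ready jobs are kept in a deadline-ordered min-heap; can_run_immediately is a hypothetical name:

      import heapq

      def can_run_immediately(sporadic_deadline, ready_queue):
          # Under EDF, a newly admitted sporadic job (e.g., an interrupt service
          # routine) may begin execution at once and run to completion if its
          # deadline is the earliest in the system, provided no job with an even
          # earlier deadline arrives in the meantime.
          if not ready_queue:
              return True
          earliest_existing_deadline, _job = ready_queue[0]
          return sporadic_deadline <= earliest_existing_deadline

      # Example: an interrupt service routine with deadline 1.005 s versus
      # pending jobs with deadlines 1.020 s and 1.050 s
      pending = [(1.020, "J2"), (1.050, "J3")]
      heapq.heapify(pending)
      print(can_run_immediately(1.005, pending))   # -> True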
  • FIG. 5 depicts a flow chart diagram of one embodiment of a method 200 for operating the EDF scheduler 114 for the processor 102 within the computer system 100 of FIG. 1. Although the method 200 for determining integration parameters is described in conjunction with the computer system 100 of FIG. 1, other embodiments of the method 200 may be implemented with other types of computer systems.
  • In the illustrated embodiment, the depicted method 200 includes receiving 202 a plurality of jobs for an earliest deadline first (EDF) schedule. In one embodiment, the EDF scheduler 114 implements an EDF scheduling model using the jobs. The EDF scheduler 114 also receives 204 a separate job that is, in some embodiments, from a source other than the EDF schedule. The separate job has a fixed scheduling requirement with a specific execution time. In some embodiments, the separate job is a short duration sporadic job, and the fixed scheduling requirement is associated with the sporadic job. The EDF scheduler 114 then determines 206 an amount of available utilization capacity of the processor 102. If the EDF scheduler 114 determines 208 that there is insufficient utilization capacity, then the EDF scheduler 114 may generate 210 a notification of the insufficient utilization capacity. Alternatively, the EDF scheduler inserts 212 the separate job into an execution plan of the processor 102 with the plurality of jobs from the EDF schedule 108 in response to a determination 208 that the available utilization capacity of the processor 102 is sufficient to execute the separate job according to the fixed scheduling requirement associated with the separate job. The depicted method 200 then ends.
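  • A compact, non-limiting sketch of this admission flow is given below, assuming rates are used as the measure of utilization capacity; the class EdfScheduler and its method names are illustrative only and do not correspond to the reference numerals of method 200:

      class EdfScheduler:
          # Illustrative admission flow: receive EDF jobs, receive a separate job
          # with a fixed scheduling requirement, check the available utilization
          # capacity, and either insert the job or report the shortfall.

          def __init__(self):
              self.execution_plan = []     # (deadline, rate, job) entries
              self.allocated_rate = 0.0    # sum of rates already reserved

          def receive_edf_jobs(self, jobs):
              for deadline, rate, job in jobs:
                  self.execution_plan.append((deadline, rate, job))
                  self.allocated_rate += rate
              self.execution_plan.sort()   # earliest deadline first

          def insert_separate_job(self, job, required_rate, deadline):
              available = 1.0 - self.allocated_rate
              if required_rate > available:
                  return ("insufficient utilization capacity: "
                          f"need {required_rate:.2f}, have {available:.2f}")
              self.execution_plan.append((deadline, required_rate, job))
              self.allocated_rate += required_rate
              self.execution_plan.sort()
              return "inserted"

      # Example: two EDF jobs consume 60% of the CPU; a static job needing 25%
      # with a 10 ms deadline is admitted and ordered first in the plan.
      sched = EdfScheduler()
      sched.receive_edf_jobs([(0.020, 0.30, "J1"), (0.040, 0.30, "J2")])
      print(sched.insert_separate_job("J_static", required_rate=0.25, deadline=0.010))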
  • It should also be noted that at least some of the operations for the methods may be implemented using software instructions stored on a computer useable storage medium for execution by a computer. As an example, an embodiment of a computer program product for inserting jobs into a scheduler, which implements an EDF scheduling protocol, of a processor includes a computer useable storage medium to store a computer readable program that, when executed on a computer, causes the computer to perform operations. In one embodiment, the operations include receiving a plurality of jobs from an earliest deadline first (EDF) schedule. The operations also include receiving a separate job from a source other than the EDF schedule. The separate job has a fixed scheduling requirement. The operations also include determining an amount of available utilization capacity of the processor. The operations also include inserting the separate job into an execution plan of the processor in response to a determination that the available utilization capacity of the processor is sufficient to execute the separate job according to the fixed scheduling requirement associated with the separate job.
  • Embodiments of the invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment containing both hardware and software elements. In one embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • Furthermore, embodiments of the invention can take the form of a computer program product accessible from a computer-usable or computer-readable storage medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable storage medium can be any apparatus that can store the program for use by or in connection with the instruction execution system, apparatus, or device.
  • The computer-useable or computer-readable storage medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device), or a propagation medium. Examples of a computer-readable storage medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk. Current examples of optical disks include a compact disk with read only memory (CD-ROM), a compact disk with read/write (CD-R/W), and a digital video disk (DVD).
  • An embodiment of a data processing system suitable for storing and/or executing program code includes at least one processor coupled directly or indirectly to memory elements through a system bus such as a data, address, and/or control bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Additionally, network adapters also may be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
  • Although the operations of the method(s) herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In another embodiment, instructions or sub-operations of distinct operations may be implemented in an intermittent and/or alternating manner.
  • Although specific embodiments of the invention have been described and illustrated, the invention is not to be limited to the specific forms or arrangements of parts so described and illustrated. The scope of the invention is to be defined by the claims appended hereto and their equivalents.

Claims (20)

1. A computer program product comprising:
a computer readable storage medium to store a computer readable program, wherein the computer readable program, when executed on a computer, causes the computer to perform operations for inserting jobs into a scheduler of a processor, the operations comprising:
receiving a plurality of jobs from an earliest deadline first (EDF) schedule, wherein the scheduler implements an EDF scheduling model;
receiving a separate job from a source other than the EDF schedule, wherein the separate job has a fixed scheduling requirement;
determining an amount of available utilization capacity of the processor; and
inserting the separate job into an execution plan of the processor in response to a determination that the available utilization capacity of the processor is sufficient to execute the separate job according to the fixed scheduling requirement associated with the separate job.
2. The computer program product of claim 1, wherein the fixed scheduling requirement comprises a specific execution time for the separate job.
3. The computer program product of claim 1, wherein the separate job comprises a static job from a static schedule, wherein the static job is natively scheduled for implementation according to a static scheduling model.
4. The computer program product of claim 3, wherein the operations further comprise:
obtaining a critical instant for the static job from the static schedule, wherein the critical instant indicates a specific execution time requirement of the static job; and
assigning a deadline to the static job within the execution plan, wherein the deadline is based on the critical instant of the static job.
5. The computer program product of claim 4, wherein the operations further comprise reserving a rate for the static job, wherein the rate comprises a utilization factor indicative of an amount of the utilization capacity of the processor that is allocated to the static job.
6. The computer program product of claim 5, wherein the operations further comprise calculating the rate for the static job according to the following ratio:
Rate = F1/F2,
in which F2 represents an operating frequency of the processor on which the static job executes, and F1 represents an operating frequency of another processor on which the static job is designed to execute.
7. The computer program product of claim 6, wherein the operations further comprise reserving resources on the processor that are substantially equivalent to resources designed to be available for execution of the static job on the other processor.
8. The computer program product of claim 5, wherein the operations further comprise scheduling the static job for execution on the processor concurrently with other jobs from the EDF schedule.
9. The computer program product of claim 3, wherein the operations further comprise scheduling an idle task at a time corresponding to an unscheduled time within the static schedule, wherein the idle task is configured to consume an unused processor resource.
10. The computer program product of claim 1, wherein the separate job comprises a short duration sporadic job, wherein the sporadic job is assigned an earliest deadline, is allowed to execute to completion, and affects the EDF schedule.
11. The computer program product of claim 1, wherein the sporadic job comprises an interrupt service routine.
12. The computer program product of claim 1, wherein the sporadic job comprises a specific execution time requirement.
13. A system comprising:
a processor configured to execute instructions related to a plurality of jobs; and
a scheduler coupled to the processor, wherein the scheduler is configured to implement an earliest deadline first (EDF) scheduling model, and the scheduler is further configured to:
receive a plurality of jobs from an EDF schedule;
receive a separate job from a source other than the EDF schedule, wherein the separate job has a fixed scheduling requirement; and
insert the separate job into an execution plan of the processor in response to a determination that an available utilization capacity of the processor is sufficient to execute the separate job according to the fixed scheduling requirement associated with the separate job.
14. The system of claim 13, wherein the separate job comprises a static job from a static schedule, wherein the static job is natively scheduled for implementation at a specific execution time according to a static scheduling model, and the scheduler further comprises a static integration engine to integrate the static job into the execution plan with the plurality of jobs from the EDF schedule.
15. The system of claim 14, wherein the scheduler is further configured to:
obtain a critical instant for the static job from the static schedule, wherein the critical instant indicates the specific execution time requirement of the static job;
assign a deadline to the static job within the execution plan, wherein the deadline is based on the critical instant of the static job; and
reserve a rate for the static job, wherein the rate comprises a utilization factor indicative of an amount of the utilization capacity of the processor that is allocated to the static job.
16. The system of claim 14, wherein the scheduler further comprises a utilization engine to schedule an idle task at a time corresponding to an unscheduled time within the static schedule, wherein the idle task is configured to consume an unused processor resource.
17. The system of claim 13, wherein the separate job comprises a short duration sporadic job, wherein the sporadic job is assigned an earliest deadline, is allowed to execute to completion, and affects the EDF schedule, and the scheduler further comprises a sporadic integration engine to integrate the sporadic job into the execution plan with the plurality of jobs from the EDF schedule.
18. A method for implementation within a scheduler for a processor, the method comprising:
receiving a plurality of jobs from an earliest deadline first (EDF) schedule, wherein the scheduler implements an EDF scheduling model;
receiving a separate job from a source other than the EDF schedule, wherein the separate job has a fixed scheduling requirement with a specific execution time;
determining an amount of available utilization capacity of the processor; and
inserting the separate job into an execution plan of the processor with the plurality of jobs from the EDF schedule in response to a determination that the available utilization capacity of the processor is sufficient to execute the separate job according to the fixed scheduling requirement associated with the separate job.
19. The method of claim 18, wherein the separate job comprises a static job from a static schedule, and the static job is natively scheduled for implementation according to a static scheduling model, and the method further comprises:
obtaining a critical instant for the static job from the static schedule, wherein the critical instant indicates the specific execution time requirement of the static job;
assigning a deadline to the static job within the execution plan, wherein the deadline is based on the critical instant of the static job; and
reserving a rate for the static job, wherein the rate comprises a utilization factor indicative of an amount of the utilization capacity of the processor that is allocated to the static job, and the rate is calculated according to the following ratio:
Rate = F1/F2,
in which F2 represents an operating frequency of the processor on which the static job executes, and F1 represents an operating frequency of another processor on which the static job is designed to execute.
20. The method of claim 18, wherein the separate job comprises a short duration sporadic job, wherein the sporadic job is assigned an earliest deadline, is allowed to execute to completion, and affects the EDF schedule, and the method further comprises integrating the sporadic job into the execution plan with the plurality of jobs from the EDF schedule.
US12/889,263 2010-09-23 2010-09-23 Integration of dissimilar job types into an earliest deadline first (edf) schedule Abandoned US20120079486A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/889,263 US20120079486A1 (en) 2010-09-23 2010-09-23 Integration of dissimilar job types into an earliest deadline first (edf) schedule
US13/594,468 US8607240B2 (en) 2010-09-23 2012-08-24 Integration of dissimilar job types into an earliest deadline first (EDF) schedule

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/889,263 US20120079486A1 (en) 2010-09-23 2010-09-23 Integration of dissimilar job types into an earliest deadline first (edf) schedule

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/594,468 Continuation US8607240B2 (en) 2010-09-23 2012-08-24 Integration of dissimilar job types into an earliest deadline first (EDF) schedule

Publications (1)

Publication Number Publication Date
US20120079486A1 true US20120079486A1 (en) 2012-03-29

Family

ID=45872027

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/889,263 Abandoned US20120079486A1 (en) 2010-09-23 2010-09-23 Integration of dissimilar job types into an earliest deadline first (edf) schedule
US13/594,468 Expired - Fee Related US8607240B2 (en) 2010-09-23 2012-08-24 Integration of dissimilar job types into an earliest deadline first (EDF) schedule

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/594,468 Expired - Fee Related US8607240B2 (en) 2010-09-23 2012-08-24 Integration of dissimilar job types into an earliest deadline first (EDF) schedule

Country Status (1)

Country Link
US (2) US20120079486A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140080058A (en) * 2012-12-20 2014-06-30 삼성전자주식회사 Load balancing method for multicore and mobile terminal
US10289448B2 (en) 2016-09-06 2019-05-14 At&T Intellectual Property I, L.P. Background traffic management
CN109144682A (en) 2017-06-27 2019-01-04 阿里巴巴集团控股有限公司 The priority processing method and processing unit of task
CN109558227B (en) * 2018-11-12 2023-03-31 中国航空工业集团公司西安飞行自动控制研究所 Monotonic rate task scheduling method based on task execution budget

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2104580T3 (en) 1989-02-24 1997-10-16 At & T Corp ADAPTIVE PLANNING OF TASKS FOR MULTIPROCESS SYSTEMS.
US7370326B2 (en) 2004-04-02 2008-05-06 Emulex Design & Manufacturing Corporation Prerequisite-based scheduler
US7631130B2 (en) 2005-02-04 2009-12-08 Mips Technologies, Inc Barrel-incrementer-based round-robin apparatus and instruction dispatch scheduler employing same for use in multithreading microprocessor
US7788673B2 (en) 2005-09-12 2010-08-31 Honeywell International Inc. Method and system for generating a static partition scheduling timeline with reduced context switching overhead
US20090217282A1 (en) 2008-02-26 2009-08-27 Vikram Rai Predicting cpu availability for short to medium time frames on time shared systems

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5278976A (en) * 1990-04-16 1994-01-11 Rolm Company Method for detecting infinite loops by setting a flag indicating execution of an idle task having lower priority than executing application tasks
US5210872A (en) * 1991-06-28 1993-05-11 Texas Instruments Inc. Critical task scheduling for real-time systems
US6189022B1 (en) * 1997-08-20 2001-02-13 Honeywell International Inc. Slack scheduling for improved response times of period transformed processes
US7165252B1 (en) * 1999-06-21 2007-01-16 Jia Xu Method of scheduling executions of processes with various types of timing properties and constraints
US6721789B1 (en) * 1999-10-06 2004-04-13 Sun Microsystems, Inc. Scheduling storage accesses for rate-guaranteed and non-rate-guaranteed requests
US7302685B2 (en) * 2000-06-02 2007-11-27 Honeywell International Inc. Methods and apparatus for sharing slack in a time-partitioned system
US20020120661A1 (en) * 2000-06-02 2002-08-29 Binns Pamela A. Methods and apparatus for sharing slack in a time-partitioned system
US20020120663A1 (en) * 2000-06-02 2002-08-29 Binns Pamela A. Method and apparatus for slack stealing with dynamic threads
US7140022B2 (en) * 2000-06-02 2006-11-21 Honeywell International Inc. Method and apparatus for slack stealing with dynamic threads
US20080028415A1 (en) * 2000-06-02 2008-01-31 Honeywell International Inc. Methods and Apparatus for Sharing Slack in a Time-Partitioned System
US7093256B2 (en) * 2002-12-13 2006-08-15 Equator Technologies, Inc. Method and apparatus for scheduling real-time and non-real-time access to a shared resource
US7502912B2 (en) * 2003-12-30 2009-03-10 Intel Corporation Method and apparatus for rescheduling operations in a processor
US20070074216A1 (en) * 2004-06-01 2007-03-29 Sony Computer Entertainment Inc. Task management method, task management device, semiconductor integrated circuit, electronic apparatus, and task management system
US7954101B2 (en) * 2004-06-01 2011-05-31 Sony Computer Entertainment Inc. Skipping non-time-critical task according to control table when operating frequency falls
US20080235695A1 (en) * 2007-03-23 2008-09-25 Nec Corporation Resource allocation system for jobs, resource allocation method and resource allocation program for jobs

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017503251A (en) * 2013-12-18 2017-01-26 クロノ−セイフ Method for configuring and executing a real-time task sequence plan
EP2945062A1 (en) * 2014-05-16 2015-11-18 Thales Method for performing services in real time, in particular flight management and real-time system implementing such a method
FR3021108A1 (en) * 2014-05-16 2015-11-20 Thales Sa METHOD FOR REAL-TIME SERVICE EXECUTION, IN PARTICULAR FLIGHT MANAGEMENT, AND REAL-TIME SYSTEM USING SUCH A METHOD
US9967322B2 (en) * 2014-05-16 2018-05-08 Thales Method for the execution of services in real time flight management system
KR102166361B1 (en) * 2014-07-21 2020-10-15 삼성전자주식회사 Server for performing low power communication and operation method thereof and scheduling map generating method for performing low power communication
KR20160010997A (en) * 2014-07-21 2016-01-29 삼성전자주식회사 Server for performing low power communication and operation method thereof and scheduling map generating method for performing low power communication
US20160021223A1 (en) * 2014-07-21 2016-01-21 Samsung Electronics Co., Ltd. Server for performing low power communication and operation method thereof, and scheduling map generation method for performing low power communication
US10567296B2 (en) * 2014-07-21 2020-02-18 Samsung Electronics Co., Ltd. Server for performing low power communication and operation method thereof, and scheduling map generation method for performing low power communication
CN104182180A (en) * 2014-07-30 2014-12-03 山东大学 Low-energy EDF (earliest deadline first) real-time task scheduling method for mixed main memory embedded system
WO2016064312A1 (en) * 2014-10-22 2016-04-28 Telefonaktiebolaget L M Ericsson (Publ) Coordinated scheduling between real-time processes
US10725823B2 (en) 2014-10-22 2020-07-28 Telefonaktiebolaget Lm Ericsson (Publ) Coordinated scheduling between real-time processes
US10970114B2 (en) * 2015-05-14 2021-04-06 Atlassian Pty Ltd. Systems and methods for task scheduling
CN109992382A (en) * 2017-12-29 2019-07-09 浙江宇视科技有限公司 Method for scheduling task, device and task dispatcher
EP3702952A1 (en) * 2019-03-01 2020-09-02 Worldline Scheduling method ensuring security service for real-time system
US11106496B2 (en) * 2019-05-28 2021-08-31 Microsoft Technology Licensing, Llc. Memory-efficient dynamic deferral of scheduled tasks
WO2022102883A1 (en) 2020-11-12 2022-05-19 Samsung Electronics Co., Ltd. A method and apparatus for real-time task scheduling for a non-preemptive system
EP4204961A4 (en) * 2020-11-12 2023-10-04 Samsung Electronics Co., Ltd. A method and apparatus for real-time task scheduling for a non-preemptive system

Also Published As

Publication number Publication date
US8607240B2 (en) 2013-12-10
US20130283279A1 (en) 2013-10-24

Similar Documents

Publication Publication Date Title
US8607240B2 (en) Integration of dissimilar job types into an earliest deadline first (EDF) schedule
US11507420B2 (en) Systems and methods for scheduling tasks using sliding time windows
US9207977B2 (en) Systems and methods for task grouping on multi-processors
EP2728473B1 (en) Systems and methods for instruction entity allocation and scheduling on multi-processors
Monot et al. Multisource software on multicore automotive ECUs—Combining runnable sequencing with task scheduling
Nelissen et al. U-EDF: An unfair but optimal multiprocessor scheduling algorithm for sporadic tasks
US8875146B2 (en) Systems and methods for bounding processing times on multiple processing units
KR20140113310A (en) Task scheduling with precedence relationships in multicore systems
KR101644800B1 (en) Computing system and method
Liu et al. Bursty-interference analysis techniques for analyzing complex real-time task models
Isovic et al. Handling mixed sets of tasks in combined offline and online scheduled real-time systems
Afshar et al. Resource sharing among prioritized real-time applications on multiprocessors
Pathan Unifying fixed-and dynamic-priority scheduling based on priority promotion and an improved ready queue management technique
Thekkilakattil et al. Multiprocessor fixed priority scheduling with limited preemptions
CN116244073A (en) Resource-aware task allocation method for hybrid key partition real-time operating system
Capota et al. P_FENP: a multiprocessor real-time scheduling algorithm
Doan et al. A novel task-to-processor assignment approach for optimal multiprocessor real-time scheduling
Audsley et al. Integrating best effort and fixed priority scheduling
Afshar et al. Resource sharing in a hybrid partitioned/global scheduling framework for multiprocessors
Lindh et al. Scheduling algorithms for real-time systems
Bommert Schedule-aware Distribution of Parallel Load in a Mixed Criticality Environment
Dobrin et al. Handling Non-periodic Events Together with Complex Constrained Tasks in Distributed Real-Time Systems
Ali et al. Mixed Criticality Reward-Based Systems using Resource Reservation
Huang et al. A new scheduling approach for mix-criticality real-time system
CN115543554A (en) Method and device for scheduling calculation jobs and computer readable storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BRANDT, SCOTT A;GOLDING, RICHARD A;WONG, THEODORE M;SIGNING DATES FROM 20100824 TO 20100910;REEL/FRAME:025081/0605

STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING PUBLICATION PROCESS