WO1997024671A1 - Event filtering feature for a computer operating system in a home communications terminal - Google Patents

Event filtering feature for a computer operating system in a home communications terminal

Info

Publication number
WO1997024671A1
Authority
WO
WIPO (PCT)
Prior art keywords
event
thread
kernel
queue
descriptor
Prior art date
Application number
PCT/US1996/020126
Other languages
French (fr)
Inventor
James A. Houha
Original Assignee
Powertv, Inc.
Priority date
Filing date
Publication date
Application filed by Powertv, Inc. filed Critical Powertv, Inc.
Priority to EP96944439A priority Critical patent/EP0880745A4/en
Priority to KR1019980704949A priority patent/KR19990076823A/en
Priority to AU14246/97A priority patent/AU1424697A/en
Publication of WO1997024671A1 publication Critical patent/WO1997024671A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/542 Event management; Broadcasting; Multicasting; Notifications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores

Definitions

  • an on-line TV guide may cause tuning tables to be downloaded at unknown times. When a new tuning table is downloaded, this could constitute an event.
  • more than one thread in the HCT may need a copy of this tuning table.
  • This can be accomplished by creating a filter which creates a private copy of the tuning table for each thread that registers an interest in the tuning table event. A rough sketch of such a filter is shown below.
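  • A rough sketch of such a filter, assuming the boolean filter signature described in Appendix 1. The tuning-table type, the global variables, and the way the new table is located are illustrative assumptions, not taken from the source:

    /* queue, event, and the pk_ prototypes are assumed to come from the
     * kernel headers described in Appendix 1.                            */
    typedef struct event event;
    typedef int boolean;
    typedef struct { int entries[64]; } TuningTable;   /* placeholder layout */

    static TuningTable gLatestDownload;   /* written by the download driver (assumed) */
    static TuningTable gMyCopy;           /* the registering thread's private snapshot */

    /* Runs in the kernel's context when a "new tuning table" event is posted.
     * It snapshots the table for the registering thread, then lets the
     * notification through to that thread's queue.                           */
    static boolean CopyTuningTableFilter (event *e)
    {
        (void) e;                         /* the event only signals "table arrived" */
        gMyCopy = gLatestDownload;        /* make the private copy                  */
        return 1;                         /* deliver the event to the queue         */
    }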
  • a single event can trigger more than one filter, and can also trigger more than one thread.
  • one thread can be an event recorder (debugger); it would want to obtain a copy of every event in the system. To accomplish this, the thread would create a mask which clears all the bits, indicating interest in any event.
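  • A minimal sketch of such a catch-all registration, assuming the pk_RegisterInterest parameter order (code, mask, filter, queue) shown in the example call later in this document; the wrapper function and queue name are added for illustration:

    /* queue and the pk_ prototypes are assumed to come from the kernel headers. */
    typedef struct queue queue;

    static void RegisterRecorderInterest (queue *recorderQueue)
    {
        /* A mask of zero marks every descriptor field as "don't care", so any
         * posted event matches regardless of the code; no filter is needed.   */
        pk_RegisterInterest (0x00000000, 0x00000000, NULL, recorderQueue);
    }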
  • thread synchronization functions such as semaphores, timers, media events, exceptions, messages, and so forth, can be replaced with event objects and integrated onto an event queue to minimize memory requirements and application development effort, in accordance with another aspect of the invention.
  • every event in the system can include a time stamp (comprising, e.g., 64 bits), comprising a "snapshot" of the system clock.
  • a future time can be inserted into the event object (instead of the current time).
  • the kernel can hold onto the event and not post it until the designated time arrives. Therefore, every event has the capability of being an alarm. For example, every keypress generates an event at a particular time.
  • FIG. 5 shows one possible configuration for an event object 501 in a kernel, including a pointer to the next event object, a code (corresponding to an event descriptor), time, X, Y, Z fields, and a "where" pointer.
  • event object 501 may comprise 28 bytes including both "public” portions accessible by threads and “private” portions hidden from threads. Events may be strung together into a list in an "intrusive” form (i.e. , the objects in the list have in their data structure the fields strung together), as compared to an "extrusive” form in which the list component is built separately.
  • the "where" field may comprise a queue pointer which is used for scheduled delivery: deliver event to queue at a future time.
  • a thread may set the code field (device type, instance, etc.) using pk_DeliverEvent where an event is created by a thread; alternatively, a device driver may set the code field using pk_PostEvent as described previously.
  • the X, Y, Z fields comprise "payload"; any data (including pointers to data structures) can be included therein.
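  • A sketch of this layout in C is shown below. The field widths, their ordering, and the public/private split are not spelled out in the text, so the exact packing here is illustrative only:

    typedef unsigned long u32;

    typedef struct event event;
    struct event {
        event        *next;     /* intrusive link to the next event in a list    */
        u32           code;     /* event descriptor: device type, instance, etc. */
        u32           timeHi;   /* 64-bit time stamp, high word                  */
        u32           timeLo;   /* 64-bit time stamp, low word                   */
        u32           x, y, z;  /* payload: data or pointers to data structures  */
        struct queue *where;    /* queue pointer used for scheduled delivery     */
    };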
  • an event structure can be used to set up an alarm.
  • a game might have a 3-minute round. At the end of the round, a bell will ring.
  • a thread can set an alarm by calling pk_ScheduledDelivery (see Appendix 1), setting the time to 3 minutes from now, and specifying the event code, X, Y, Z, and the thread's own queue as the "where" pointer. The thread can then use pk_NextEvent to get the next event off its queue.
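  • A sketch of the three-minute bell alarm. Only the roles of the parameters are described above, so the parameter order, the TimeValue arithmetic, the kEvBell code, and the kPtvForever wait constant (mentioned for pk_NextEvent in Appendix 1) are assumptions:

    /* queue, event, TimeValue, and the pk_ prototypes are assumed to come
     * from the kernel headers described in Appendix 1.                     */
    static void StartRoundTimer (queue *myQueue, TimeValue bellTime)
    {
        /* bellTime is the system time three minutes from now, computed by
         * the application.  The kernel holds the event until that time.    */
        pk_ScheduledDelivery (kEvBell, 0, 0, 0, bellTime, myQueue);

        /* Sleep until the alarm (or any other event) lands on the queue. */
        event *e = pk_NextEvent (myQueue, kPtvForever);
        pk_FreeEvent (e);
    }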
  • a thread can "pre-timestamp” the object, then the kernel can reuse that same space when the actual time is stamped. Consolidating alarm functions into an event delivery function thus saves memory, processing time, and programmer effort.
  • the kernel need not determine whether the thread is waiting for a semaphore, or for an alarm, or any other type of synchronization event. This results in faster thread switches. Whenever a thread is sleeping, the operating system is always waiting for an event on its queue. This also makes the kernel smaller. In contrast, conventional kernels need to execute instructions to determine whether a thread is waiting for an alarm, semaphore, queue wait, message wait, etc.
  • a second example of consolidation involves semaphores (FIG. 6). Assume that two threads 602 and 603 need to use a single printer. A conventional approach is to provide a semaphore to which both threads are responsive (i.e., they are in a "semaphore wait” state, and the kernel puts a thread on a "semaphore wait” queue with a semaphore wait data structure).
  • the present invention contemplates using an event object.
  • To implement a semaphore using event queues, a queue 604 is created (typically by printer driver 601) and a single event 604a is placed on the queue.
  • the first thread 602 which needs the resource makes a call to pk_NextEvent (see Appendix 1), specifying the printer semaphore queue 604.
  • This function causes a wait in the thread if there is no event on the queue, but otherwise extracts the event if there is one on the queue.
  • because event 604a is initially on the queue, the kernel 605 removes it from queue 604 and delivers it to thread 602. If another thread 603 attempts to use the resource (by calling pk_NextEvent), that thread will wait until the event 604a is returned to queue 604 by first thread 602.
  • when first thread 602 is finished with the resource, it calls pk_ReturnEvent (see Appendix 1), which returns the event to queue 604, allowing second thread 603 to finally execute its pk_NextEvent function.
  • notably, kernel 605 does not place second thread 603 on a separate "semaphore wait" queue; instead, the common event paradigm is used. This increases the efficiency of the kernel because the kernel need not maintain separate queues for all types of different activities, and when examining a thread's status, the kernel already knows that the thread can be in only one state. Furthermore, the same common event object can be used to implement semaphores in the system; no special semaphore data object needs to be defined.
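  • The printer example of FIG. 6 might be coded roughly as follows. Only the roles of these calls are described in the text, so pk_CreateQueue, the token event code, and the exact pk_DeliverEvent and pk_ReturnEvent signatures are assumptions:

    /* queue, event, and the pk_ prototypes are assumed to come from the
     * kernel headers described in Appendix 1.                            */
    static queue *printerSem;            /* the printer semaphore queue 604 */

    static void InitPrinterSemaphore (void)      /* typically printer driver 601 */
    {
        printerSem = pk_CreateQueue ();                          /* name assumed   */
        pk_DeliverEvent (kEvPrinterToken, 0, 0, 0, printerSem);  /* seed event 604a */
    }

    static void PrintSomething (void)            /* called by thread 602 or 603 */
    {
        /* Acquire: blocks this thread if another thread holds the token. */
        event *token = pk_NextEvent (printerSem, kPtvForever);

        /* ... use the printer ... */

        /* Release: return the same event (not a copy), waking any waiter. */
        pk_ReturnEvent (token, printerSem);
    }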
  • thread synchronization, including semaphores, messages, and queue messages, all of which are passed between threads as synchronization objects.
  • demand scheduling, which signals that a demand was made for the upgrade of a thread's priority.
  • timer events and delayed actions: an event can be posted immediately or at a specified future time.
  • inter-thread exception handling: an event occurs whenever a thread raises an exception.
  • user interface actions, including pressing a button on a remote or game controller, changing the volume, moving a pointer device, etc.
  • media events, including starting the playback of audio, reaching the end of a movie, inserting media into or ejecting media from a device, etc.
  • This function not only deletes the specified queue, but frees all the events in the queue as well.
  • This is a macro.
  • the outer try block can belong to an application, a device, or the operating system itself.
  • This is a very simple example of an exception handler. The try block surrounds the PlayHighChimeSound routine, and a single catch block catches all exceptions.
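  • A reconstruction in that spirit is sketched below; only pk_Throw, pk_Rethrow, and pk_CurrentException are named in this text, so the try/catch macro names, the handler body, and the wrapper function are assumptions:

    void PlayHighChimeSound (void);          /* the routine being guarded */

    static void PlayChimeSafely (void)
    {
        pk_Try {                             /* macro name assumed */
            PlayHighChimeSound ();
        }
        pk_Catch (anyException) {            /* macro name assumed */
            /* A single catch block handles every exception raised inside
             * the try block; it could log the problem or pk_Rethrow it.  */
        }
        pk_EndTry;                           /* macro name assumed */
    }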
  • This example waits for an event to be received, and then frees the event when the application is done with it:

    const TimeValue kForAlmostForever = { … };

    event *anEvent;       // Where gamepad and system events will be posted
    u32    eventDevice;   // The device that posted the event
    u32    eventData;     // Data posted in the event
    queue *gAppQueue;     // Input event queue

    anEvent     = pk_NextEvent (gAppQueue, kForAlmostForever);
    eventDevice = anEvent->code & kDt_Mask;
    eventData   = anEvent->code & (u32) kEd_Mask;
    pk_FreeEvent (anEvent);
  • A pointer to a private data area.
  • The thread priority. The lower 5 bits represent the priority, which can range from 0 to 31.
  • A pointer to the code to be executed by the new thread.
  • A pointer to a parameter for code.
  • The size, in bytes, of the stack to allocate for the new thread.
  • The application execution priority. The lower 3 bits represent the priority, which can range from 1 (highest) to 7 (lowest).
  • The thread identifier, a number from 1 to 31.
  • This function disables interrupts, thereby preventing context switches.
  • The queue from which to retrieve the next event. timeout: The time at which to return if an event has still not been delivered to the specified queue.
  • An application can also specify kPtvForever to wait an indefinite period of time and return only when an event is delivered to the queue.
  • a pointer to the next event or NULL if a timeout occurred (signalling that the queue is still empty).
  • the event type (in conjunction with the mask) is checked against each event interest in the order in which the event interests were registered. When a match is found, the event interest is triggered.
  • the filter procedure is called and the return code determines whether a copy of the event is sent to the queue. If no filter was specified, the event is delivered directly to the queue.
  • Posting an event may trigger multiple event interests and result in multiple copies of the event being delivered to multiple queues.
  • This procedure takes a pointer to the event as an argument and returns a boolean value.
  • An event interest specifies a type and mask which determine the class of events to watch for.
  • An event interest may have only a filter or only a queue, or it may have both.
  • This is a macro.
  • pk_Rethrow is equivalent to calling pk_Throw (pk_CurrentException).
  • This macro can be thought of as an alternative return mechanism. It passes control to the outer try block so that the exception can be processed. Control does not return to the location where the exception occurred.
  • pk_ReturnEvent is intended primarily for implementing semaphores. Certain events, such as semaphore events, have limited interest and can be routed to, at most, one queue. When retrieving such events for processing, we recommend using pk_ReturnEvent, which routes the actual event rather than a copy of the event.
  • This is a macro.
  • This function implements a timeout with an accuracy of +/- 5 milliseconds.
  • This macro can be thought of as an alternative return mechanism. It passes control to the outer try block so that the exception can be processed. Control does not return to the location where the exception occurred.
  • This function delimits a block of code known as a try block.
  • One or more catch blocks must follow the try block and define the actual processing functionality for the exception.
  • the scope of the try block determines its priority.
  • inner try blocks take precedence over outer ones.
  • This function re-enables interrupts, thereby enabling context switches.

Abstract

An improved operating system kernel for a home communication terminal (HCT) includes an event filtering feature (204) which allows threads (A, B) running in the HCT to register interest in events of a particular type, from a particular source, or other desirable criteria. Events occurring in the system (205, 206) are prequalified by the kernel before providing them to individual threads which have registered interest in only certain types of events. By executing a filter in the kernel's context, a thread context switch can be avoided. Events occurring in the system can be matched with events of interest (200a, 200b, 200c) registered by various threads by an efficient comparison operation including a mask field and a code field. Additionally, various thread synchronization mechanisms such as alarms and semaphores can be implemented using a common event object which is integrated onto event queues.

Description

EVENT FILTERING FEATURE
FOR A COMPUTER OPERATING SYSTEM
IN A HOME COMMUNICATIONS TERMINAL
BACKGROUND OF THE INVENTION
1. Technical Field
This invention relates generally to real-time operating systems adapted for high performance applications, such as those executing in a home communications terminal (HCT) to provide cable television or other audiovisual capabilities. More particularly, the invention provides a feature which improves the performance of operating systems installed in devices having limited computing resources.
2. Related Information
Conventional operating systems for HCTs, such as those in cable television systems, have typically provided limited capabilities tailored to controlling hardware devices and allowing a user to step through limited menus and displays. As growth in the cable television industry has fostered new capabilities including interactive video games, video-on-demand, downloadable applications, higher performance graphics, multimedia applications and the like, there has evolved a need to provide operating systems for HCTs which can support these new capabilities. Additionally, newer generations of fiber-based networks have vastly increased the data bandwidths which can be transferred to and from individual homes, allowing entirely new uses to be developed for the HCTs. Changing federal and state regulations also portend new uses such as telephony to be available through existing cable networks. As a result, conventional HCTs and their operating systems are quickly becoming obsolete. In short, HCTs need to evolve to transform today's limited capability television sets into interactive multimedia entertainment and communication systems.

One possible approach for increasing the capabilities of HCTs is to port existing operating systems, such as UNIX or the like, to PC-compatible microprocessors in the HCT. However, the huge memory requirements needed to support such existing operating systems render such an approach prohibitively expensive. Because memory is a primary cost component of HCTs, competitive price pressures mean that the added functions must be provided in a manner which minimizes memory use and maximizes processor performance. Consequently, it has been determined that new operating system features must be developed which provide media-centric high performance features while minimizing memory requirements.

One conventional operating system design paradigm which has been determined to generally consume a large amount of memory is the partitioning of thread coordination mechanisms, such as semaphores, timers, exceptions, messages, and so forth, into separate subsystems in the operating system. Each such subsystem conventionally includes different application programming interface (API) conventions, different data structures, and different memory areas used by the kernel to keep track of and to check on them. Another conventional operating system design paradigm which has been determined to cause operating system inefficiency in a real-time operating system is the manner in which events are transferred to threads executing in the system.
Conventional approaches for delivering an event to a thread involve scheduling the thread and providing the event to the thread, even though the event may not be of interest to the thread (i.e. , after receiving the event, the thread immediately determines that it is not of interest and discards it). Such a scheme wastes processing time performing a context switch, and also wastes memory space.
As one example, if a user presses a key on an HCT keypad, a conventional kernel would transfer that event to a thread which handles the event, even though the event may be of no interest unless a cursor on the television screen is within a certain window. This results in inefficiency, since executing the thread involves a context switch, followed by the thread quickly determining that the event is of no interest because of the cursor location on the screen. Many other examples of such inefficiency stem from the fact that threads in a system cannot "prequalify" events which they are to receive from the kernel.
In summary, conventional operating systems for use in HCT applications suffer from performance and memory disadvantages which hinder their utility when used for newer, high performance graphics-intensive applications. Accordingly, there is a need to provide operating system features which can reduce memory requirements and simultaneously increase the overall performance of applications executing in conjunction with the operating system.
SUMMARY OF THE INVENTION
The present invention solves the aforementioned problems by providing an efficient real-time kernel having features tailored to the needs of HCT applications. Whereas conventional kernels typically provide separate event subsystems, semaphore subsystems and queue subsystems, one aspect of the present invention contemplates replacing such subsystems with a single, integrated event subsystem which provides the functionality of semaphores and other synchronization mechanisms through events on event queues. Instead of using different data structures to provide these different services, a single event data structure can be used. This single data structure can be optimized to speed up the kernel. Because the kernel needs to be aware of only two states for each thread (executing the thread, or waiting for an event to deliver to the thread), kernel efficiency can be increased. In contrast, conventional kernels typically require that the kernel distinguish between various other states, such as waiting for a semaphore, an event, message stream, or an I/O operation, thus resulting in increased complexity and increasing memory requirements in the kernel.
Another aspect of the present invention contemplates providing means for each thread to "register" with the kernel to indicate what kind of events (or classes of events) the thread would like to receive. Each thread can also specify a "filter" procedure which, when an event is posted to the system (but before it is delivered to the thread), decides whether the posted event is appropriate for that context or not. This filter may be an interrupt service routine which runs at interrupt time instead of invoking the destination thread, which would require a context switch.
As used herein, the term "home communication terminal" (HCT) will be understood to refer to terminals that can be used in telephone networks, cable TV or other audiovisual programming networks, satellite networks, or combinations of these.
Various other objects and advantages of the present invention will become apparent through the following detailed description, figures, and the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows one possible configuration for a home communication terminal (HCT) on which an operating system employing the principles of the present invention can be installed. FIG. 2 shows schematically how a kernel event handler 204 constructed in accordance with the present invention can efficiently handle and filter incoming events.
FIG. 3 shows steps which may be executed by a kernel event handler to efficiently handle events in an HCT. FIG. 4 shows an example of different threads having registered interest in different types of events. FIG. 5 shows one possible format for an event object in a system employing the principles of the present invention.
FIG. 6 shows how two threads can implement a semaphore by using a queue wherein a kernel implements a NextEvent function.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
FIG. 1 shows a block diagram of a home communication terminal (HCT) in which various principles of the present invention may be practiced. The HCT may include a CPU card 100, graphics card 101, decoder card 102, display panel and key pad 103, main processing board 104, front end 105, tuning section 106, and audio section 107. It is contemplated that the inventive principles may be practiced using any suitable CPU 100a such as a PowerPC or Motorola 68000 series, with suitable EPROM 100c and RAM 100b. It is also contemplated that application programs executing on CPU 100a can interact with various peripherals such as a mouse, game controllers, keypads, network interfaces, and the like, as is well known in the art.
FIG.2 shows schematically how a kernel event handler employing various inventive principles can efficiently handle incoming events and prequalify the events to certain threads executing in the HCT. Generally speaking, different threads in the system may be interested only in certain types of events, and may wish to ignore other types of events. Examples of events include: a keypress on a keypad attached to the HCT; a mouse movement indication received from a mouse attached to the HCT; a button press on a game controller connected to the HCT; a message received from a headend coupled to the HCT; or a signal indicating that a movie has started. Examples of different threads (which may be concurrently executing in a multi-threaded kernel) include a copy of a video game operated by a first player; a copy of a video game operated by a second player; an on-screen programming guide; a movie player; a channel tuning indicator; or a user interface for a customer billing application. In general, one application may correspond to a single thread, or a single application may be partitioned into multiple threads which may be concurrently executed to optimize performance. One of ordinary skill in the art will recognize how various applications may be constructed out of a single or multiple threads in the system.
A customer billing application may only be interested in events occurring from the HCT keypad, and only if the customer has first entered a special code.
Two video game threads may be interested in all key press events occurring from either of two game controllers coupled to the HCT (thus requiring copies of the same event to be posted to both threads). An on-screen programming guide thread may only be interested in keypress events which occur when the mouse cursor is positioned within a certain predetermined area on the screen, and may wish to ignore all other events (including keypress events when the mouse cursor is not positioned within the area). Many other examples are of course possible. Each thread may thus wish to register interest in a plurality of different types of events, and may wish to change the registered interests at a later time.

FIG. 2 shows an exemplary configuration including a kernel event handler 204 which can accomplish the above objectives. As shown in FIG. 2, two threads A and B can each register interest with the kernel in two different events using a function pk_RegisterInterest (see Appendix 1), a kernel-provided programming interface. A corresponding function pk_RemoveInterest allows a thread to remove an interest in an event (see Appendix 1). In response to invocations of pk_RegisterInterest, the kernel constructs an event interest list 200 which may comprise a linked list of "event interest" objects 200a, 200b, and 200c. For example, thread A may register interest in an event corresponding to event interest 200b, and thread B may register interest in an event corresponding to event interest 200c, each of which is specified by parameters in corresponding function calls to pk_RegisterInterest. It will be assumed for the example in FIG. 2 that thread A comprises a video game application which is only interested in key press events from game controllers, while thread B comprises a billing application which is only interested in button presses from a mouse.
In various embodiments, the kernel manipulates event interest list 200 in response to calls to pk_RegisterInterest and pk_RemoveInterest. When kernel event handler 204 receives or generates an event, it traverses event interest list 200 to determine the conditions under which various threads in the system should be invoked. In general, by comparing an event descriptor (which describes the event) with parameters included in each event interest object, kernel event handler 204 can efficiently determine whether and how to invoke any thread which has expressed interest in an event.
Each event interest object 200a, 200b, and 200c may comprise a code field, a mask field, a filter procedure field, and a queue field in accordance with the parameters set forth for pk_RegisterInterest. For example, when thread A registers interest in certain game controller events, it specifies code2, mask2, filter2, and queue2 as parameters in function pk_RegisterInterest.
Reference will be made briefly to Appendix 1, which includes descriptions for a plurality of functions which may be used to carry out various principles of the invention. For each function in Appendix 1, a syntax including a list of parameter types and examples of use is provided. Referring to function pk_RegisterInterest, for example, thread A would invoke this function and supply parameter values for code, mask, filter, and queue (the filter and queue parameters are optional) in order to direct the kernel to prequalify and direct events to thread A.
The pk_RegisterInterest function (see Appendix 1) creates and registers an event interest with the kernel. Its code parameter specifies a description of the desired event, its mask parameter specifies an event mask which further clarifies events of interest, its filter parameter specifies an interrupt service routine (ISR) for the kernel to call when the event occurs, and its queue parameter specifies a pointer to the queue to which to route events. Each event interest object, such as element 200b in FIG. 2, holds a mask and a code specifying a "desirable" event descriptor for an incoming event. The mask specifies which fields of the event descriptor are germane to the specified interest, and the code specifies the values that those fields must have in order for the posted event to trigger the event interest.
Bits in the mask field can be used to indicate which fields of the event descriptor are germane to a particular interest. One possible example is to allocate 32 bits to the mask field. Of the 32 bits, 8 bits can be allocated to indicate the device type (i.e. , the device type which generated the event), another 8 bits for the device instance (i.e., the instance of that device type which generated the event), another 8 bits for the event type (i.e., what type of event the device generated, such as a key press), and another 8 bits for event data (i.e. , a small amount of data which can contain the event information, such as the specific key which was pressed). Such an allocation is by way of example only, and is not intended to be limiting.
Corresponding fields may be included in each event descriptor, as indicated by event descriptor 201 in FIG. 2, for example. Thus, each event descriptor can include an indication of the device type, device instance, event type, and event data corresponding to the event. To register interest in events from any device type, a thread would set the device type parameter to all ones. The following (hex) masks could thus be defined:

    kDt_Any  00FFFFFF  any device type (bits 24 to 31)
    kDi_Any  FF00FFFF  any device instance (bits 16 to 23)
    kEt_Any  FFFF00FF  any event type (bits 8 to 15)
    kEd_Any  FFFFFF00  any data values (bits 0 to 7)

For example, to register an interest in all game controller events, an application would call:

    pk_RegisterInterest (kDt_Controller, kDi_Any & kEt_Any & kEd_Any, NULL, my_Q)

where kDt_Controller is a code corresponding to the game controller, and the event mask multiplies to the value FF000000 (no filter procedure was specified).
As can be seen, the mask FF000000 would cause all the device type bits to be set, forcing a comparison of device type with that specified in the code (i.e., Controller type), but leaving zero ("not caring about") the device instance (the next 8 bits), event type (the following 8 bits), or the event data (the last 8 bits). It will be appreciated that an event descriptor could be created with subsets of the above fields, or with entirely different fields which qualify a particular type of event.
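Expressed in code, the bit allocation and the mask-and-compare test might look like the following sketch; the shift positions follow the allocation described above, while the helper names are illustrative only:

    typedef unsigned long u32;

    /* Bit allocation from the text: device type in bits 24-31, device instance
     * in bits 16-23, event type in bits 8-15, and event data in bits 0-7.      */
    #define kDt_Any 0x00FFFFFFUL           /* don't care about device type     */
    #define kDi_Any 0xFF00FFFFUL           /* don't care about device instance */
    #define kEt_Any 0xFFFF00FFUL           /* don't care about event type      */
    #define kEd_Any 0xFFFFFF00UL           /* don't care about event data      */

    /* Illustrative helper: pack the four 8-bit fields into one descriptor. */
    static u32 MakeDescriptor (u32 devType, u32 devInst, u32 evType, u32 evData)
    {
        return (devType << 24) | (devInst << 16) | (evType << 8) | evData;
    }

    /* The kernel's pre-qualification test: AND the incoming descriptor with
     * the interest's mask and compare the result against the interest's code. */
    static int Matches (u32 descriptor, u32 mask, u32 code)
    {
        return (descriptor & mask) == code;
    }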
Returning to the example in FIG. 2, suppose that thread A wishes to register interest in all game controller key press events which occur when the cursor is within a predetermined screen area, and to ignore all other types of events. Thread A creates an event interest 200b by specifying mask2 which specifies "device type" as a qualifier, leaves the device instance open, specifies "event type" as a qualifier, and leaves the "event data" open. Additionally, thread A specifies code2 which identifies "game controller" as the device type and "key press" as the desired event type. Finally, thread A specifies a filter procedure 203 which is to be executed to further qualify the event, and a queue onto which the event will be placed (queue2). For the example shown in FIG. 2, filter procedure 203 checks the cursor location to ensure that it is within a qualified area before passing on the event. It is assumed that filter procedure 203 executes in kernel mode, thus avoiding a context switch. In other words, thread A will not be scheduled by the kernel unless the event meets all of thread A's qualifications.
When an event from game controller 205 is generated, an event descriptor 201 is created which corresponds to the particulars of the event. A device driver, which detects a hardware change, can post such an event using pk_PostEvent (see Appendix 1). For the example in FIG. 2, the "A" button on game controller 205 has been pressed, and event descriptor 201 thus indicates the device type as game controller ("1"), the device instance as "1", the event type as "key", and the event data as "A", corresponding to the "A" button. Kernel event handler 204 receives the incoming event (from pk_PostEvent), and traverses event interest list 200 to match the incoming event to events of interest. In a preferred embodiment, the event matching step may be performed very efficiently by multiplying the mask of each event interest object with the incoming event descriptor, then comparing the result with the code of the event interest object. If there is a match, then the event has been pre-qualified. In FIG. 2, for example, kernel event handler 204 first examines event interest 200a by multiplying event descriptor 201 with mask1 and comparing the result to code1, and quickly determines that there is no match. This "AND" then "COMPARE" operation can be done very efficiently on a CPU (typically, only two assembly language instructions are needed), thus adding to the performance increase which results from using the principles of the invention.
After determining that event descriptor 201 does not match event interest object 200a, kernel event handler 204 next examines event interest object 200b and multiplies mask2 by event descriptor 201, then compares the result with code2. For the example in FIG. 2, assume that the result is a successful match. However, because thread A specified a filter procedure 203, kernel event handler 204 does not pass the event to thread A but instead executes filter procedure 203, which checks the cursor location to see if it is in a valid area. If it is, kernel event handler 204 delivers the event to queue2 (also specified in event interest object 200b), and thread A can extract the event from queue2 using pk_NextEvent (see Appendix 1).
It should be noted that had the event not successfully passed through filter procedure 203, thread A would not have been scheduled, thus avoiding a context switch and increasing the efficiency of the kernel. In other words, in a preferred embodiment, filter routine 203 is executed while the kernel is executing, and need not schedule thread A. If only 20% of events are actually of interest to a particular thread, then it is much faster to make the "interest" determination in kernel code than in thread code, which requires context switches.
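A sketch of such a filtered registration is shown below. The boolean filter signature follows Appendix 1 and the kDt_Controller code and kXX_Any masks follow the definitions above, but the key-press event code, the cursor helper, the window rectangle, and the wrapper function are assumptions added for illustration:

    typedef struct event event;
    typedef struct queue queue;
    typedef int boolean;
    typedef struct { int x, y; } Point;
    typedef struct { int left, top, right, bottom; } Rect;

    static Rect  gWindow;                  /* the qualified screen area  */
    extern Point GetCursorPosition (void); /* hypothetical cursor query  */

    /* Filter procedure 203: runs in the kernel's context and approves the
     * event only when the cursor lies inside thread A's window.           */
    static boolean CursorInWindowFilter (event *e)
    {
        Point p = GetCursorPosition ();
        (void) e;
        return p.x >= gWindow.left && p.x < gWindow.right &&
               p.y >= gWindow.top  && p.y < gWindow.bottom;
    }

    /* Thread A's registration: compare device type and event type only
     * (mask FF00FF00), filter further in the kernel, deliver to queue2.  */
    static void RegisterThreadAInterest (queue *queue2)
    {
        pk_RegisterInterest (kDt_Controller | kEt_KeyPress, /* code2; kEt_KeyPress assumed */
                             kDi_Any & kEd_Any,             /* mask2 = 0xFF00FF00          */
                             CursorInWindowFilter,          /* filter2                     */
                             queue2);                       /* queue2                      */
    }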
Continuing with the example in FIG. 2, suppose that thread B has registered interest in any mouse movement from mouse 206. It would have set up event interest object 200c by way of pk_RegisterInterest, specifying device type as a qualifier, but leaving the other mask fields open. For device type in code3, thread B would have specified "mouse". When a user presses a mouse button, event descriptor 202 would be generated, indicating device type as mouse ("2"), device instance "1", event type as "key", and event data as "mouse press". Kernel event handler 204 would traverse event interest list 200, preferably multiplying each mask with the event descriptor and comparing the result with the code. When it reached event interest object 200c, it would find a match, and immediately place the event on queue3, which was specified by thread B as the desired queue (no filter procedure was specified).

FIG. 3 shows steps which may be executed by kernel event handler 204 to handle incoming events in the system. It is assumed that an event interest list has already been created through the use of pk_RegisterInterest calls made by threads in the system. Beginning in step 301, the next event interest object from the event interest list is retrieved. In step 302, the mask from the event interest object is multiplied with the event descriptor corresponding to the event. In step 303, the result is compared with the code from the event interest object. In step 304, if there is no match, processing resumes at step 301 with the next event interest object.
If, in step 304, there is a match, then in step 305 a check is made to determine whether a filter was specified for the event interest object. If so, then in step 306 the specified filter is called, preferably directly by the kernel and without a context switch. If the event passes the filter in step 307, processing advances to step 308; otherwise, processing resumes at step 301 with the next event interest object. In step 308, if a queue was specified with the event interest, the event is placed on the specified queue in step 309; otherwise, processing resumes at step 301 until the end of the event interest list is reached. Note that if no queue was specified, the event can be discarded. Such a situation may be desirable, for example, where all of the processing is done in the filter itself. For example, if the only thing to be done is to update a memory area or other type of short-lived operation, then it may be more efficient to do this in the filter itself (in the kernel), and no queue need be provided (i.e., no thread to wake up).
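The FIG. 3 flow could be rendered roughly as the loop below; the event-interest node layout and the delivery helper are assumptions consistent with the description, and a real kernel would deliver a copy of the event rather than the same pointer when several interests match:

    typedef unsigned long u32;
    typedef struct event event;
    typedef int boolean;
    typedef boolean (*FilterProc) (event *e);

    typedef struct EventInterest EventInterest;
    struct EventInterest {
        EventInterest *next;     /* next node in event interest list 200        */
        u32            code;     /* values the masked fields must match         */
        u32            mask;     /* which descriptor fields are compared        */
        FilterProc     filter;   /* optional kernel-mode filter (steps 305-307) */
        struct queue  *q;        /* optional destination queue (steps 308-309)  */
    };

    extern void DeliverToQueue (struct queue *q, event *e);   /* assumed helper */

    static void HandleEvent (EventInterest *list, event *e, u32 descriptor)
    {
        EventInterest *i;
        for (i = list; i != NULL; i = i->next) {          /* step 301      */
            if ((descriptor & i->mask) != i->code)        /* steps 302-304 */
                continue;
            if (i->filter != NULL && !i->filter (e))      /* steps 305-307 */
                continue;
            if (i->q != NULL)                             /* step 308      */
                DeliverToQueue (i->q, e);                 /* step 309      */
            /* With no queue the event is simply dropped after the filter runs. */
        }
    }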
Additionally, filters can be reusable. For example, one filter might have a function of determining whether a cursor is in a particular location on the screen; different threads could specify and use this same filter. If, for example, three different applications are executing in the HCT, each having a separate window on a television screen, and the user moves a mouse over the screen, a single filter can be devised which determines whether the cursor is over a window boundary for the specified thread. Furthermore, when both a filter and a queue are specified, the filter can modify the event itself. For example, a filter could change the time of the event before putting it on a queue. This could be used, for example, by a "replay" filter which changes the time stamp on an event to the current time (e.g., translate to current time). Another example involves checking a signature on an event before waking up a thread.
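By way of illustration only, such a reusable filter might look like the following sketch; the rectangle type, the use of the event's X and Y payload fields as the cursor position, and the constant names in the registration call are all assumptions rather than part of the kernel interface.

typedef struct { i32 left, top, right, bottom; } Rect;

static Rect gWindowBounds;    /* the registering thread's window (one window shown for brevity) */

Boolean CursorInWindow(event *e)
{
    /* Runs in the kernel; the owning thread is scheduled only when this returns TRUE. */
    return (e->x >= gWindowBounds.left && e->x <= gWindowBounds.right &&
            e->y >= gWindowBounds.top  && e->y <= gWindowBounds.bottom);
}

/* Any number of threads can name the same filter when registering interest,
   for example (kDt_Mouse and kDt_Mask are assumed constants):
   pk_RegisterInterest(kDt_Mouse, kDt_Mask, CursorInWindow, myQueue);           */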
FIG. 4 shows another example which applies the principles of the present invention. Suppose a video game to be executed on an HCT provides two characters who fight one another, with two users each interacting with a separate game controller 401 and 402. The video game may comprise a main video game thread 403 which controls overall scoring and game operation, and two player threads 404 and 405. In the example of FIG. 4, player 1 thread 404 controls the actions of a first character on the video screen, while player 2 thread 405 controls the actions of a second character on the video screen. Thus, thread 404 must manipulate the first video character based on actions taken by player 1, and thread 405 must manipulate the second video character based on actions taken by player 2. Consequently, player 1 thread 404 registers interest in all events from game controller #1 (by specifying the appropriate code, mask, filter and queue parameters), and player 2 thread 405 registers interest in all events from game controller #2. In the design shown in FIG. 4, suppose that player 1 thread 404 wants to know when player 2 has pressed a "pause" key on game controller #2, and that player 2 thread 405 wants to know when player 1 has pressed a "pause" key on game controller #1 (in other words, either player, including the one operating the opposite controller, can pause the game). In order to accomplish this, player 1 thread 404 can selectively register interest only in "pause" events from game controller #2, and player 2 thread 405 can selectively register interest only in
"pause" events from game controller #1. This avoids having the operating system send all events from the opposing game controller to the thread, thus greatly increasing efficiency. If each queue (and each corresponding thread) in
FIG. 4 were provided with a copy of every event generated by each game controller, considerable CPU time and memory would be wasted processing unwanted events. By allowing each thread to separately register interest only in certain types of events and causing the kernel to send only those types of events to each thread, significant performance increases can be achieved.
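For instance, the registrations described for FIG. 4 might be written as shown below. pk_RegisterInterest is the Appendix 1 call; the kDt_, kDi_, and kEd_ constants and their *_Mask companions are illustrative assumptions about how device type, device instance, and event data are encoded in the descriptor.

void Player1RegisterInterests(queue *player1Queue)
{
    /* All events from game controller #1: qualify device type and instance only. */
    pk_RegisterInterest(kDt_Controller | kDi_Player1,
                        kDt_Mask | kDi_Mask,
                        0, player1Queue);

    /* Only "pause" presses from game controller #2: event data is also qualified,
       so no other controller #2 traffic ever reaches this thread.                 */
    pk_RegisterInterest(kDt_Controller | kDi_Player2 | kEd_Pause,
                        kDt_Mask | kDi_Mask | kEd_Mask,
                        0, player1Queue);
}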
Similarly, suppose that main video game thread 403 is interested in receiving "game events" from each player thread, and registers accordingly. Each player thread receives events from main video game thread 403, such as a command to cause the character to die because of too many blows by the other player. The aforementioned registered interests are indicated by arrows in FIG.
4, annotated with the type of events registered. Additionally, suppose that instant replay thread 406 provides a capability to review all previous actions over a time window. Therefore, it registers interest in all events generated by either game controller, and thus gets its own copy of each event generated by the game controllers. This saves individual threads from having to send "copies" of events among themselves.
As yet another example, an on-line TV guide may cause tuning tables to be downloaded at unknown times. When a new tuning table is downloaded, this could constitute an event. Typically, more than one thread in the HCT may need a copy of the tuning table, which raises the problem of determining when to free up the memory used to store it. This can be handled by creating a filter which creates a private copy of the tuning table for each thread that registers an interest in the tuning-table event. Each thread then simply destroys its own copy when it is finished, avoiding the problem of determining when the shared memory area(s) can be freed.

Note that a single event can trigger more than one filter, and can also trigger more than one thread. For example, one thread can be an event recorder (debugger); it would want to obtain a copy of every event in the system. To accomplish this, the thread would create a mask which clears all the bits, indicating interest in any event.

The following describes in more detail how thread synchronization functions, such as semaphores, timers, media events, exceptions, messages, and so forth, can be replaced with event objects and integrated onto an event queue to minimize memory requirements and application development effort in accordance with another aspect of the invention.
It is contemplated that the kernel deals with time in a very accurate sense. A clock generates time ticks at a very high rate of speed (e.g., 25 MHz), and this clock can be used for timing. In accordance with various aspects of the invention, every event in the system can include a time stamp (comprising, e.g., 64 bits), comprising a "snapshot" of the system clock. When an event is posted or delivered, a future time can be inserted into the event object (instead of the current time). Rather than immediately delivering the event to a destination thread, the kernel can hold onto the event and not post it until the designated time arrives. Therefore, every event has the capability of being an alarm. For example, every keypress generates an event at a particular time.
Instead of writing an alarm procedure which detects that a certain time has arrived (to schedule an event), a future time can be inserted into an event object, such that when the event is posted, it will not actually be delivered until the future time in the event object. This avoids the need for kernel code to implement alarms, including the attendant data structures, storage areas, and status checks indicating which of several states a thread is in (e.g., "waiting for alarm").

FIG. 5 shows one possible configuration for an event object 501 in a kernel, including a pointer to the next event object, a code (corresponding to an event descriptor), time, X, Y, Z fields, and a "where" pointer. In various embodiments, event object 501 may comprise 28 bytes including both "public" portions accessible by threads and "private" portions hidden from threads. Events may be strung together into a list in an "intrusive" form (i.e., the objects in the list themselves contain the linking fields), as compared to an "extrusive" form in which the list component is built separately.
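Rendered as a C structure, the event object of FIG. 5 might look as follows; the field widths, the exact public/private split, and the type names are assumptions for illustration (the kernel's own header would supply the authoritative definition).

typedef struct event {
    struct event *next;    /* intrusive link: the list lives inside the object (private)       */
    ui32         code;     /* event descriptor: device type, instance, event type, data        */
    TimeValue    time;     /* 64-bit snapshot of the system clock, or a future delivery time   */
    i32          x;        /* payload: any data, including pointers to data structures         */
    i32          y;
    i32          z;
    queue        *where;   /* destination queue used for scheduled delivery (private)          */
} event;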
The "where" field may comprise a queue pointer which is used for scheduled delivery: deliver event to queue at a future time. A thread may set the code field (device type, instance, etc.) using pk DeliverEvent where an event is created by a thread; alternatively, a device driver may set the code field using pk PostEvent as described previously. The X, Y, Z fields comprise "payload"; any data (including pointers to data structures) can be included therein.
As one example, an event structure can be used to set up an alarm. A game might have a 3 minute round. At the end of the round, a bell will ring. A thread can set an alarm by calling pk_ScheduleDelivery (see Appendix 1), setting the time to 3 minutes from now and specifying the event code, X, Y, Z, and the thread's own queue as the "where" pointer. The thread can then use pk_NextEvent to get the next event off its queue. Even though different API calls are used, a single small data structure can be used, removing the need for a separate alarm system and eliminating the need for the kernel to provide special functions for alarms. Note that a thread can "pre-timestamp" the object, then the kernel can reuse that same space when the actual time is stamped. Consolidating alarm functions into an event delivery function thus saves memory, processing time, and programmer effort. When a thread is "sleeping", the kernel need not determine whether the thread is waiting for a semaphore, or for an alarm, or any other type of synchronization event. This results in faster thread switches. Whenever a thread is sleeping, the operating system is always waiting for an event on its queue. This also makes the kernel smaller. In contrast, conventional kernels need to execute instructions to determine whether a thread is waiting for an alarm, semaphore, queue wait, message wait, etc.
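A sketch of the three-minute bell follows, built from pk_ScheduleDelivery, pk_NextEvent, and pk_FreeEvent of Appendix 1. CurrentTimePlus, kTicksPerSecond, kEt_RoundOver, kEt_Mask, RingBell, and HandleOtherEvent are hypothetical names introduced only for this illustration.

void RunRound(queue *gameQueue)
{
    /* Ask the kernel to deliver the "bell" event three minutes from now;
       no separate alarm object or alarm state is created anywhere.        */
    TimeValue bellTime = CurrentTimePlus(3 * 60 * kTicksPerSecond);
    pk_ScheduleDelivery(gameQueue, bellTime, kEt_RoundOver, 0, 0, 0);

    for (;;) {
        /* The thread just sleeps on its one queue, exactly as for any other event. */
        event *e = pk_NextEvent(gameQueue, kPtv_Forever);
        if ((e->code & kEt_Mask) == kEt_RoundOver) {
            RingBell();
            pk_FreeEvent(e);
            break;
        }
        HandleOtherEvent(e);
        pk_FreeEvent(e);
    }
}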
A second example of consolidation involves semaphores (FIG. 6). Assume that two threads 602 and 603 need to use a single printer. A conventional approach is to provide a semaphore to which both threads are responsive (i.e., they are in a "semaphore wait" state, and the kernel puts a thread on a "semaphore wait" queue with a semaphore wait data structure).
In contrast to conventional methods, the present invention contemplates using an event object. To implement a semaphore using event queues, a queue
604 is created (typically by printer driver 601) and a single event 604a is placed on the queue. The first thread 602 which needs the resource (such as a printer) makes a call to pk_NextEvent (see Appendix 1), specifying the printer semaphore queue 604. This function causes a wait in the thread if there is no event on the queue, but otherwise extracts the event if there is one on the queue. Thus, assuming the object is still on the queue, the kernel 605 removes it from queue 604 and delivers it to thread 602. If another thread 603 attempts to use the resource (by calling pk_NextEvent), it will be made to wait until the event 604a is returned to queue 604 by first thread 602. When first thread 602 is finished using the resource, it calls pk_ReturnEvent (see Appendix 1) which returns the event to queue 604, allowing second thread 603 to finally execute its pk_NextEvent function.
Thus, in a preferred embodiment, kernel 605 does not place second thread
603 in a "waiting for semaphore" or "waiting for alarm" or any other type of mode; instead, the common event paradigm is used. This increases the efficiency of the kernel because the kernel need not maintain separate queues for all types of different activities, and when examining a thread's status, the kernel already knows that the thread can be in only one state. Furthermore, the same common event object can be used to implement semaphores in the system; no special semaphore data object needs to be defined.
The above example can be extended to handle multiple resources. For example, in FIG. 6, if there is a second printer 606 (but potentially more than two threads which would need to use the two printers), then each printer could post an event onto queue 604; the first two threads which called pk_NextEvent would obtain use of the printers, and the third caller would be put in a waiting-for-event state by the kernel (this two-printer case is sketched in code following the list below). This has a further advantage in that no pre-set limit on the number of semaphores needs to be specified; the queue can grow as long as needed. An event can thus be used for anything; the kernel need not even be cognizant of a "semaphore" synchronization mechanism. To eliminate redundancy and simplify the event system, all of the following types of coordination mechanisms can be classified as events and implemented using functions such as those shown in Appendix 1:
— thread synchronization, including semaphores, messages, and queue messages, all of which are passed between threads as synchronization objects.
— demand scheduling, which signals that a demand was made for the upgrade of a thread's priority.
— timer events and delayed actions (an event can be posted immediately or at a specified future time)
— inter-thread exception handling (an event occurs whenever a thread raises an exception)
— user interface actions, including pressing a button on a remote or game controller, changing the volume, moving a pointer device, etc.
— media events, including starting the playback of audio, reaching the end of a movie, inserting media into or ejecting media from a device, etc.
— application-specific events (user-defined events)
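Returning to the multiple-resource case mentioned before the list, the two-printer extension is sketched below; it differs from the single-printer sketch only in placing two token events on the same queue (again, the event code and the use of the x payload to identify the printer are assumptions).

void TwoPrinterSemInit(void)
{
    gPrinterSem = pk_NewQueue();
    /* One token per printer; the x payload identifies the printer instance.  */
    pk_DeliverEvent(gPrinterSem, kEt_PrinterToken, 1, 0, 0);
    pk_DeliverEvent(gPrinterSem, kEt_PrinterToken, 2, 0, 0);
    /* The first two callers of pk_NextEvent obtain printers; a third caller
       simply waits for an event, with no semaphore count kept anywhere.      */
}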
It is apparent that many modifications and variations of the present invention are possible, and references to specific values are by example only. For example, although references herein are made to "threads", such a term should be understood to also include processes, tasks, or other entities which can be scheduled by an operating system. As another example, although application programming interfaces are described for performing various functions such as registering interest in an event, equivalent mechanisms are of course possible to implement the same function. Moreover, specific function names are by way of example only. It is, therefore, to be understood that within the scope of the appended claims the invention may be practiced otherwise than as specifically described.
pk_DeleteQueue
Deletes an event queue. Syntax void pk_DeleteQueue (queue *q);
Parameters
→ q A pointer to the queue to delete.
Returns
None
Comments
This function not only deletes the specified queue, but frees all the events in the queue as well.
Exceptions
None Example
None See Also pk_FlushQueue, pk_NewQueue
pk_DeliverEvent
Places an event directly into a queue without processing the event interest list
Syntax
▼old pk__,D«liv«sBve&t( queue »q, u!33 code, 133 x, 133 133 s) ι
pk_SleepFor (1000 * 25000); // Sleep for one second
// Broadcast an event including some arbitrary data
pk_PostEvent (dtCustomEvent, 0x8, 0x29, 0x85);
pk_SleepFor (1000 * 25000); // Sleep for one second
// Post an event to a specified queue
pk_DeliverEvent (myQueue, dtCustomEvent, 0x12, 0x14, 0x92);
} while (1); }
See Also pk_ScheduleEvent
pk_EndCatch
Marks the end of a try-and-catch block series. Syntax pk_EndCatch Comments
This is a macro.
If an exception has not been caught when program control reaches this macro, the exception is thrown to an outer try block. The outer try block can belong to an application, a device, or the operating system itself.
Example
This is a very simple example of an exception handler. The try block surrounds the PlayHighChimeSound routine, and a single catch block catches all exceptions.
void PlayHighChimeSound (void)
{
ui32 playerPtr;
pk_Try (
// Create an audio player that deletes the resources upon
// completion of playing the sound
playerPtr = ap_NewFromAIFF ((ui8 *)High_Chime, High_Chime_size, TRUE);
// Start playing the sound
ap_Play (playerPtr);
)
// Catch all audio exceptions here.
pk_Catch (kEt_ExceptionAll)
{
printf ("Audio error caught in PlayHighChimeSound\n");
}
pk_EndCatch
}
See Also pk_AlsoCatch, pk_Catch, pk_CatchCleanup
pk_FlushQueue
Flushes an event queue of all events. Syntax void pk_FlushQueue (queue *q); Parameters
→ q A pointer to the queue to flush.
Returns
None Comments
This function frees all the events in the specified queue.
Exceptions
None Example
None See Also pk_DeleteQueue, pk_NewQueue
pk_FreeEvent
Frees an event Syntax
void pk_FreeEvent (event *e);
Parameters
→ e A pointer to the event to free.
Returns
None
Comments
Because multiple threads might be interested in any one event, the kernel creates a new instance of every event that it routes. To prevent memory leaks, we recommend explicitly disposing of processed events using pk_FreeEvent.
None
Example
This example waits for an event to be received, and then frees the event when the application is done with it.
const TimeValue ForAllEternity = {0x7FFFFFFF, 0x7FFFFFFF};
event *anEvent;      // Where gamepad and system events will be posted
ui32 eventDevice;    // The device that posted the event
ui32 eventData;      // Data posted in the event
queue *gAppQueue;    // Input event queue
anEvent = pk_NextEvent (gAppQueue, ForAllEternity);
eventData = anEvent->code & kEd_Mask;
eventDevice = anEvent->code & (ui32) kDt_Mask;
pk_FreeEvent (anEvent);
None
pk_Galloc
Allocates memory from the general heap. Syntax
void * pk_Galloc (size_t size); Parameters
→ size The size, in bytes, of the block to allocate. Returns
A pointer to the allocated block, or NULL if there is insufficient memory available.
Comments
None Exceptions
None Example
None See Also pk_Gfree
pk_Gfree
Frees a memory block in the general heap. Syntax
▼old pk_β£ree<void *p) ι Parameters
→p A pointer to the memory block to free.
Returns
None Comments
None
None Example
None See Also
pk_Launch
Creates a new thread. Syntax
Boolean pk_Launch (void (*code)(void *d), size_t stk_size, void *data, i32 priority);
Parameters
→code A pointer to the code to be executed by the new thread.
→ d A pointer to a parameter for code. → stk_size The size, in bytes, of the stack to allocate for this thread.
→ data A pointer to a private data area. → priority The thread priority. The lower 5 bits represent the priority, which can range from 0 to 31.
Returns
TRUE if the thread was launched successfully, and FALSE if the specified priority is already in use by another thread.
Comments
No function exists for deleting the thread because the thread is automatically deleted when the function specified by code returns.
kPtv_MemoryFullErr kPtv_ParamErr
This example launches a thread at the next available number (priority). void LaunchOnNext (void)
{ ui32 priority = 1; while (!pk_Launch (functionPtr, 0x4000, "PTV Application", priority)) priority++;
}
See Also pk_LaunchNotify, pk_LaunchThread
pk_LaunchNotify
Creates a new thread. When the thread is deleted, a notification event is sent to the specified queue.
Syntax
Boolean pk_LaunchNotify (void (*code)(void *d), size_t stk_size, void *data, ui32 priority, queue *notify);
Parameters
→ code A pointer to the code to be executed by the new thread.
→ d A pointer to a parameter for code.
→ stk_size The size, in bytes, of the stack to allocate for this thread.
→ data A pointer to a private data area.
→ priority The thread priority. The lower 5 bits represent the priority, which can range from 0 to 31.
→ notify Optionally, a pointer to a queue to receive a kEt_PkThreadExit event on exit.
Returns
TRUE if the thread was launched successfully, and FALSE if the specified priority is already in use by another thread.
Comments
No function exists for deleting the thread because the thread is automatically deleted when the function specified by code returns.
kPtv_MemoryFullErr kPtv_ParamErr
None See Also pk_Launch, pk_LaunchThread
pk_LaunchThread
Creates a new thread and returns the thread identifier. When the thread is deleted, a notification event is sent to the specified queue.
Syntax ui32 pk_LaunchThread (void (*code)(void *d), size_t stk_size, void *data, ui32 app_priority, queue *notify);
Parameters →code A pointer to the code to be executed by the new thread.
→ d A pointer to a parameter for code. → stk_size The size, in bytes, of the stack to allocate for the new thread.
→ data A pointer to a private data area.
→ app_priority The application execution priority. The lower 3 bits represent the priority, which can range from 1 (highest) to 7 (lowest).
→ notify An optional pointer specifying a queue to receive the kEt_PkThreadExit event when the thread has run to completion.
Returns
The thread identifier, a number from 1 to 31.
Comments
No function exists for deleting the thread because the thread is automatically deleted when the function specified by code returns.
kPtv_BusyErr kPtv_MemoryFullErr kPtv_ParamErr
None
See Also pk_Launch, pk_LaunchNotify
pk_Lock
Locks the kernel. Syntax void pk_Lock(); Parameters
None Returns
None
Comments
This function disables interrupts, thereby preventing context switches.
Exceptions
None Example
None See Also pk_Unlock
pk_MakeEvent
Creates an event but does not deliver it. Syntax
event * pk_MakeEvent (ui32 code, i32 x, i32 y, i32 z);
Parameters
→ code The event type code.
→ x An optional event data field.
→y An optional event data field.
→z An optional event data field.
Returns
A pointer to the new event. Comments
None
Exceptions kPtv_MemoryFullErr Example None
pk__NewQueue
Creates a new event queue. Syntax queue * pk_NewQueue();
Parameters
None Returns
A pointer to the new queue, or NULL if there was insufficient memory from which to allocate a new queue. Comments
None Exceptions kPtv_MemoryFullErr Example
None See Also pk_DeleteQueue, pk_FlushQueue
pk__NextEvent
Gets the next event from the specified queue. Syntax event * pk_NextEvent (queue *q, const TimeValue timeout);
Parameters
→ q The queue from which to retrieve the next event. → timeout The time at which to return if an event has still not been delivered to the queue specified by q.
An application can also specify kPtv_Forever to wait an indefinite period of time and return only when an event is delivered to the queue.
Returns
A pointer to the next event or NULL if a timeout occurred (signalling that the queue is still empty).
Comments
None
None
This example waits for an event.
testEvent = pk_NextEvent (myQueue, kPtv_Forever);
See Also pk_NextEventCritical, pk_NextEventImmediate, pk_ReturnEvent
pk_NextEventCritical
Gets the next event from the specified queue, and upgrades the calling thread's condition to critical at some point
Syntax
event * pk_NextEventCritical (queue *q, const TimeValue timeout, const TimeValue deadline);
Parameters
→ q The queue from which to retrieve the next event.
→ timeout The amount of time to wait for an event before returning.
→ deadline When to upgrade the calling thread's condition to critical.
Returns
A pointer to the next event or NULL if a timeout occurred (signalling that the queue is still empty).
Comments
None
None
Example
None See Also pk_NextEvent, pk_ReturnEvent
pk_NextEventImmediate
Gets the next event from the specified queue, returning immediately if the queue is empty.
Syntax
event * pk_NextEventImmediate (queue *q); Parameters
→ q The queue from which to retrieve the next event.
Returns
A pointer to the next event or NULL if the queue is empty. Comments
None
None
Example
None
See Also
pk_PeekEvent
Returns a pointer to the specified event in a queue. Syntax
event * pk_PeekEvent (queue *q, ui32 n);
Parameters
→ q A pointer to the queue in which the event resides.
→ n The event number to which to return a pointer (1 for the first, 2 for the second, and so on).
Returns
A pointer to the specified event, or NULL if the event does not exist.
Comments
Handling a pointer to an event can be tricky if other threads are servicing the queue. We recommend that you lock threads as
Exceptions None
Example None
See Also
pk_PostEvent
Posts an event. Syntax void pk_PostEvent (ui32 code, i32 x, i32 y, i32 z);
Parameters
→ code The event type code.
→ x An optional event data field.
→ y An optional event data field.
→ z An optional event data field.
Returns
None.
Comments
The event type (in conjunction with the mask) is checked against each event interest in the order in which the event interests were registered. When a match is found, the event interest is triggered.
If a filter was specified, the filter procedure is called and the return code determines whether a copy of the event is sent to the queue. If no filter was specified, the event is delivered directly to the queue.
Posting an event may trigger multiple event interests and result in multiple copies of the event being delivered to multiple queues.
kPtv_MemoryFullErr
void EventPoster (void *data)
{ ui32 dtCustomEvent = 0x29000000;
do
{
// Every 5 seconds post an event
// Alternate between 'posting' and 'delivering'
pk_SleepFor (1000 * 25000); // Sleep for one second
// Broadcast an event including some arbitrary data
pk_PostEvent (dtCustomEvent, 0x8, 0x29, 0x85);
pk_SleepFor (1000 * 25000); // Sleep for one second
// Post an event to a specified queue
pk_DeliverEvent (myQueue, dtCustomEvent, 0x12, 0x14, 0x92);
} while (1); }
See Also pk_DeliverEvent, pk_ScheduleEvent
pk_RegisterInterest
Registers an interest in a particular class of events.
Syntax
ei * pk_RegisterInterest (ui32 code, ui32 mask, Boolean (*filter)(event *e), queue *q);
Parameters
→ code The event type in which you are interested.
→ mask An event mask which further clarifies events of interest.
→ filter The interrupt service routine (ISR) to be called when the event is created.
This procedure takes a pointer to the event as an argument and returns a boolean value.
→ q A pointer to the queue that receives events.
Returns
A pointer to the event interest
Comments
An event interest specifies a type and mask which determine the class of events to watch for.
You can supply a filter procedure that is called immediately when the event is posted and that operates in the context of the event creator (which may be an ISR).
An event interest may have only a filter or only a queue, or it may have both.
kPtv_MemoryFullErr
Example
interest1 = pk_RegisterInterest (kDt_Controller | kDi_Player1, kEt_Any & kEd_Any, 0, myQueue);
pk_RegisterInterest (kDt_Remote | kDi_Player1, kEt_Any & kEd_Any, 0, myQueue);
pk_RegisterInterest (kDt_Console | kDi_Player1, kEt_Any & kEd_Any, 0, myQueue);
pk_RegisterInterest (kDt_System | kEt_PkThreadExit, kEd_Any, 0, myQueue);
pk_RegisterInterest (dtCustomEvent, kEt_Any & kEd_Any, 0, myQueue);
See Also
pk_RemoveInterest
Removes an event interest. Syntax void pk_RemoveInterest (ei *interest);
Parameters
→ interest A pointer to an event interest
Returns
None
Comments
If the value of interest is not valid, it is assumed to have been already removed. Exceptions
None
None
See Also
pk_RegisterInterest
pk_Rethrow
Passes the exception raised by pk_Throw from inside a catch block to an outer try block for continued processing.
Syntax pk_Rethrow(); Parameters
None
Comments
This is a macro. pk_Rethrow is equivalent to calling pk_Throw (pk_CurrentException).
This macro can be thought of as an alternative return mechanism. It passes control to the outer try block so that the exception can be processed. Control does not return to the location where the exception occurred.
Example
None See Also pk_Throw, pk_ThrowIfNull
pk_ReturnEvent
Returns an event to a queue. Syntax
void pk_ReturnEvent (queue *q, event *e);
Parameters
→ q A pointer to an event queue.
→ e A pointer to the event to return.
Returns
None
Comments
Use pk_ReturnEvent primarily for implementing semaphores. Certain events, such as semaphore events, have limited interest and can be routed to, at most, one queue. When retrieving such events for processing, we recommend using pk_ReturnEvent, which routes the actual event rather than a copy of the event.
None
Example
None
See Also pk_NextEvent
pk_ReturnInTry
Executes a C return call from within a try block after popping the try block off the exception handling stack.
Syntax pk_ReturnInTry (ret); Parameters
→ ret The return value.
Comments
This is a macro.
Attempting to use a standard return call from within a try block will result in an invalid exception stack, and any subsequent exception will cause control to be passed incorrectly to the catch blocks associated with the try block.
Example
None
See Also
pk_Try
pk_ScheduleDelivery
Arranges for the future delivery of an event. Syntax void pk_ScheduleDelivery (queue *where, TimeValue t, ui32 code, i32 x, i32 y, i32 z);
Parameters → where The queue to which to deliver the event. → t The delivery time. → code The event type code. → x An optional event data field.
→ y An optional event data field.
→ z An optional event data field.
Returns
None Comments
None
kPtv_MemoryFullErr
None
pk_ScheduleEvent
Posts an event at a future time.
Syntax void pk_ScheduleEvent (TimeValue t, ui32 code, i32 x, i32 y, i32 z);
Parameters
→ t The delivery time.
→ code The event type code.
→x An optional event data field.
→y An optional event data field.
→z An optional event data field.
Returns
None
Comments
If the specified delivery time has already passed, the event is delivered immediately.
kPtv_MemoryFullErr Example
None See Also
pk_SleepFor
Sleeps (blocks) for a specified number of clock ticks. Syntax void pk_SleepFor (ui32 clockticks); Parameters
→ clockticks The number of clock ticks to block for.
Returns
None
Comments
This function implements a timeout with an accuracy of +/- 5 milliseconds.
Exceptions
None Example
void EventPoster (void *data)
{ ui32 dtCustomEvent = 0x29000000; do
{
// Every 5 seconds post an event
// Alternate between 'posting' and 'delivering'
pk_SleepFor (1000 * 25000); // Sleep for one second
// Broadcast an event including some arbitrary data
pk_PostEvent (dtCustomEvent, 0x8, 0x29, 0x85);
pk_SleepFor (1000 * 25000); // Sleep for one second
// Post an event to a specified queue
pk_DeliverEvent (myQueue, dtCustomEvent, 0x12, 0x14, 0x92);
} while (1); }
See Also pk_CpuTime
pk_StackAvail
Determines how much of a thread's allocated stack is left. Syntax
pk_StackAvail (threadID);
Parameters
→ threadID A thread identifier, returned by one of the pk_Launch* functions. Comments
This is a macro. Example
None See Also pk_Launch, pk_LaunchNotify, pk_LaunchThread, pk_MyStackAvail, pk_StackSize
pk_StackProbe
Raises an exception if the stack allocated to a thread has been exceeded.
Syntax pk_StackProbe (threadID);
Parameters
→ threadID A thread identifier, returned by one of the pk_Launch* functions.
Comments
This is a macro.
Use this function during development to check whether or not a thread's allocated stack has been exceeded.
kPtv_StackOverflow Example
None See Also pk_Launch, pk_LaunchNotify, pk_LaunchThread, pk_StackAvail, pk_StackSize
pk_StackSize
Determines how much stack was allocated for a thread. Syntax pk_StackSize (threadID);
Parameters
→ threadID A thread identifier, returned by one of the pk_Launch* functions.
Comments
This is a macro. Example
None
See Also pk_Launch, pk_LaunchNotify, pk_LaunchThread, pk_MyStackSize, pk_StackAvail
pk_TakeEvent
Takes an event from a queue. Syntax
event * pk_TakeEvent (queue *q, ui32 n);
Parameters
→ q A pointer to the queue from which to take the event.
→ n The event number to take (1 for the first, 2 for the second, and so on).
Returns
A pointer to the event, or NULL if the event does not exist. Comments
None Exceptions
None Example
None See Also pk_CopyEvent, pk_PeekEvent
pk_Throw
Raises an exception by passing the specified error to the system's exception handler.
Syntax pk_Throw (type); Parameters
→ type The exception type.
Comments
This is a macro.
This macro can be thought of as an alternative return mechanism. It passes control to the outer try block so that the exception can be processed. Control does not return to the location where the exception occurred.
Example
None
See Also pk_Rethrow, pk_ThrowIfNull
pk_ThrowIfNull
Raises a kPtv_MemoryFullErr exception to the system's exception handler if the pointer is null.
Syntax pk_ThrowIfNull (void *myPointer); Parameters
→ myPointer A null pointer.
Comments This is a macro.
This macro can be thought of as an alternative return mechanism. If myPointer is NULL, it passes control to the outer try block so that the exception can be processed. Control does not return to the location where the exception occurred. pk_ThrowIfNull is useful after any call that creates a new object.
Example
pk_Try (
// Create a new screen
gScreenID = scr_NewSharedScreen (appID, kScr_ScreenModeLowResolutionNTSC);
pk_ThrowIfNull (gScreenID);
)
pk_Catch (kEt_ExceptionAll)
{
}
pk_EndCatch
pk__Try
Delimits a block of code in which to catch exceptions. Syntax pk_Try Comments
This is a macro.
This function delimits a block of code known as a try block. One or more catch blocks must follow the try block and define the actual processing functionality for the exception.
The scope of the try block determines its priority. During exception processing, inner try blocks take precedence over outer ones.
A try block is pushed onto the exception stack. All exceptions are thrown to this try block's catch routines or to that of later try blocks until execution of this try block completes.
Example
This is a very simple example of an exception handler. The try block surrounds the PlayHighChimeSound routine, and a single catch block catches all exceptions.
void PlayHighChimeSound (void)
{
ui32 playerPtr;
pk_Try (
// Create an audio player that deletes the resources upon
// completion of playing the sound
playerPtr = ap_NewFromAIFF ((ui8 *)High_Chime, High_Chime_size, TRUE);
// Start playing the sound
ap_Play (playerPtr);
)
// Catch all audio exceptions here.
pk_Catch (kEt_ExceptionAll)
{
printf ("Audio error caught in PlayHighChimeSound\n");
}
pk_EndCatch
}
See Also pk_ReturnInTry
pk_Unlock
Unlocks the kernel. Syntax void pk_Unlock(); Parameters
None Returns
None Comments
This function re-enables interrupts, thereby enabling context switches.
Exceptions
None Example
None See Also pk_Lock

Claims

1. An operating system kernel for execution on a home communication
terminal (HCT) on which a plurality of different events can occur and a plurality of different threads can be executed, the operating system kernel comprising: an event registering function which allows each of the plurality of different threads to register interest in one of the plurality of different events by specifying parameters which indicate qualifications for the one event of interest; a data structure comprising a list of event objects each comprising one or more parameters specified using the event registering function; and an event handler which, responsive to receiving an event descriptor corresponding to one of the plurality of different events, traverses the data structure, compares the event descriptor to one or more of the parameters in each of the event objects and, responsive to a determination that the descriptor matches the one or more parameters in a particular one of the event objects, provides at least part of the event descriptor to a thread having an interest registered in the one event.
2. The kernel according to claim 1, wherein each event object comprises at least two parameters, a first parameter comprising a code field and a second parameter comprising a mask field, wherein the mask field indicates which of a plurality of fields in the event descriptor are germane, and wherein the code field indicates a value that each germane field in the event descriptor must match.
3. The kernel according to claim 2, wherein the event handler multiplies the mask field with at least part of the event descriptor and compares the result with the code field in order to perform the comparison.
4. The kernel according to claim 3, wherein the event handler places the at least portion of the event descriptor on a queue designated in the event object.
5. The kernel according to claim 1, wherein the one or more parameters indicates a device type and an event type to be qualified.
6. The kernel according to claim 1, wherein each of the one or more parameters indicates a data value representing a key press from a device coupled to the HCT.
7. The kernel according to claim 1, wherein the one or more parameters comprises a filter to be executed by the event handler to further qualify the event descriptor.
8. The kernel according to claim 7, wherein the filter is executed in the context of the kernel rather than in the context of a thread.
9. The kernel according to claim 7, wherein the filter checks a position of a cursor with reference to a window in order to further qualify the event.
10. The kernel according to claim 7, wherein the event handler compares the event descriptor with the one or more parameters in each of the event objects and, responsive to a determination that the descriptor matches the one or more parameters in a particular one of the event objects, executes the filter prior to providing the at least part of the event descriptor to the thread having an interest registered in the one event.
11. The kernel according to claim 1, further comprising an event deregistration function which allows each of the plurality of different threads to remove a previously registered event interest.
12. A method of filtering events in an operating system kernel executing on a home communication teπninal (HCT) having a plurality of threads, comprising the steps of:
(1) registering interest in one or more events from one of the plurality of threads by creating a data structure comprising one or more parameters which limit the type of events which are to be provided to the one thread;
(2) detecting that an event has occurred in the system and, responsive thereto, creating an event descriptor which describes details of the event;
(3) in the operating system kernel, comparing the event descriptor created in step (2) with the one or more parameters in the data structure created in step (1) and, responsive to a determination that the event descriptor matches the event, providing at least part of the event descriptor to a thread corresponding to the data structure.
13. The method of claim 12, wherein step (1) comprises the step of creating a data structure comprising a code field and a mask field, wherein the mask field indicates which portions of the event descriptor are germane to the comparison in step (3), and wherein the code field indicates a value that each germane portion in the event descriptor must match.
14. The method of claim 12, wherein step (1) comprises the step of using parameters which indicate a device type and an event type to be qualified.
15. The method of claim 12, wherein step (1) comprises the step of specifying a filter to be executed by the kernel to further qualify the event descriptor, and wherein step (3) comprises the step of executing the specified filter prior to providing at least part of the event descriptor to the thread.
16. The method of claim 15, wherein step (3) comprises the step of
executing a filter which modifies the at least part of the event descriptor prior to providing it to the thread.
17. A method of implementing on a home communication terminal
(HCT) a semaphore function using an operating system kernel which lacks semaphore functions, comprising the steps of:
(1) creating a queue for a shared resource using a kernel-supplied queue creation function which is accessible to threads executing on the HCT; (2) storing an event object on the queue, using a kernel-supplied object delivery function which is accessible to threads executing on the HCT, to indicate the availability of the shared resource;
(3) from a first thread, executing a kernel-supplied object retrieval function which retrieves the event object from the queue, wherein if the event object is not on the queue the first thread is suspended by the kernel;
(4) from a second thread, executing the kernel-supplied object retrieval function to attempt retrieval of the event object from the queue, and, upon failure
to retrieve the event object from the queue, suspending thereby the second thread; and
(5) from the first thread, executing a kernel-supplied object release function which returns the event object back to the queue, and thereafter resuming operation of the object retrieval function started in step (4).
18. The method of claim 17, wherein step (2) comprises the step of storing a plurality of event objects on the queue, each corresponding to one of a plurality of shared resources.
PCT/US1996/020126 1995-12-29 1996-12-23 Event filtering feature for a computer operating system in a home communications terminal WO1997024671A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP96944439A EP0880745A4 (en) 1995-12-29 1996-12-23 Event filtering feature for a computer operating system in a home communications terminal
KR1019980704949A KR19990076823A (en) 1995-12-29 1996-12-23 Operating system kernel and how to filter events in it and how to implement semaphores
AU14246/97A AU1424697A (en) 1995-12-29 1996-12-23 Event filtering feature for a computer operating system in home communications terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US57820395A 1995-12-29 1995-12-29
US08/578,203 1995-12-29

Publications (1)

Publication Number Publication Date
WO1997024671A1 true WO1997024671A1 (en) 1997-07-10

Family

ID=24311856

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/020126 WO1997024671A1 (en) 1995-12-29 1996-12-23 Event filtering feature for a computer operating system in a home communications terminal

Country Status (4)

Country Link
EP (1) EP0880745A4 (en)
KR (1) KR19990076823A (en)
AU (1) AU1424697A (en)
WO (1) WO1997024671A1 (en)



Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5339418A (en) * 1989-06-29 1994-08-16 Digital Equipment Corporation Message passing method
US5625821A (en) * 1991-08-12 1997-04-29 International Business Machines Corporation Asynchronous or synchronous operation of event signaller by event management services in a computer system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5428781A (en) * 1989-10-10 1995-06-27 International Business Machines Corp. Distributed mechanism for the fast scheduling of shared objects and apparatus
US5325536A (en) * 1989-12-07 1994-06-28 Motorola, Inc. Linking microprocessor interrupts arranged by processing requirements into separate queues into one interrupt processing routine for execution as one routine
US5301270A (en) * 1989-12-18 1994-04-05 Anderson Consulting Computer-assisted software engineering system for cooperative processing environments
US5321837A (en) * 1991-10-11 1994-06-14 International Business Machines Corporation Event handling mechanism having a process and an action association process
US5465335A (en) * 1991-10-15 1995-11-07 Hewlett-Packard Company Hardware-configured operating system kernel having a parallel-searchable event queue for a multitasking processor
US5430875A (en) * 1993-03-31 1995-07-04 Kaleida Labs, Inc. Program notification after event qualification via logical operators
US5566337A (en) * 1994-05-13 1996-10-15 Apple Computer, Inc. Method and apparatus for distributing events in an operating system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0880745A4 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2053507A3 (en) * 1999-04-14 2009-05-27 Panasonic Corporation Event control device and digital broadcasting system
US7962568B2 (en) 1999-04-14 2011-06-14 Panasonic Corporation Event control device and digital broadcasting system
WO2001050241A3 (en) * 1999-12-30 2002-02-07 Koninkl Philips Electronics Nv Multi-tasking software architecture
US6877157B2 (en) 1999-12-30 2005-04-05 Koninklijke Philips Electronics N.V. Multi-tasking software architecture
WO2001050241A2 (en) * 1999-12-30 2001-07-12 Koninklijke Philips Electronics N.V. Multi-tasking software architecture
WO2005109185A1 (en) * 2004-05-09 2005-11-17 St Microelectronics Nv A method for improving efficiency of events transmission and processing in digital television receiving device
US9524163B2 (en) 2013-10-15 2016-12-20 Mill Computing, Inc. Computer processor employing hardware-based pointer processing
US11093251B2 (en) 2017-10-31 2021-08-17 Micron Technology, Inc. System having a hybrid threading processor, a hybrid threading fabric having configurable computing elements, and a hybrid interconnection network
US11880687B2 (en) 2017-10-31 2024-01-23 Micron Technology, Inc. System having a hybrid threading processor, a hybrid threading fabric having configurable computing elements, and a hybrid interconnection network
US11579887B2 (en) 2017-10-31 2023-02-14 Micron Technology, Inc. System having a hybrid threading processor, a hybrid threading fabric having configurable computing elements, and a hybrid interconnection network
US11119782B2 (en) 2018-05-07 2021-09-14 Micron Technology, Inc. Thread commencement using a work descriptor packet in a self-scheduling processor
US11513838B2 (en) 2018-05-07 2022-11-29 Micron Technology, Inc. Thread state monitoring in a system having a multi-threaded, self-scheduling processor
US11074078B2 (en) 2018-05-07 2021-07-27 Micron Technology, Inc. Adjustment of load access size by a multi-threaded, self-scheduling processor to manage network congestion
US11126587B2 (en) 2018-05-07 2021-09-21 Micron Technology, Inc. Event messaging in a system having a self-scheduling processor and a hybrid threading fabric
US11132233B2 (en) 2018-05-07 2021-09-28 Micron Technology, Inc. Thread priority management in a multi-threaded, self-scheduling processor
US11157286B2 (en) 2018-05-07 2021-10-26 Micron Technology, Inc. Non-cached loads and stores in a system having a multi-threaded, self-scheduling processor
US11513837B2 (en) 2018-05-07 2022-11-29 Micron Technology, Inc. Thread commencement and completion using work descriptor packets in a system having a self-scheduling processor and a hybrid threading fabric
US11119972B2 (en) 2018-05-07 2021-09-14 Micron Technology, Inc. Multi-threaded, self-scheduling processor
US11513840B2 (en) 2018-05-07 2022-11-29 Micron Technology, Inc. Thread creation on local or remote compute elements by a multi-threaded, self-scheduling processor
US11513839B2 (en) 2018-05-07 2022-11-29 Micron Technology, Inc. Memory request size management in a multi-threaded, self-scheduling processor
US11068305B2 (en) 2018-05-07 2021-07-20 Micron Technology, Inc. System call management in a user-mode, multi-threaded, self-scheduling processor
US11579888B2 (en) 2018-05-07 2023-02-14 Micron Technology, Inc. Non-cached loads and stores in a system having a multi-threaded, self-scheduling processor
US11809369B2 (en) 2018-05-07 2023-11-07 Micron Technology, Inc. Event messaging in a system having a self-scheduling processor and a hybrid threading fabric
US11809368B2 (en) 2018-05-07 2023-11-07 Micron Technology, Inc. Multi-threaded, self-scheduling processor
US11809872B2 (en) 2018-05-07 2023-11-07 Micron Technology, Inc. Thread commencement using a work descriptor packet in a self-scheduling processor
WO2019217295A1 (en) * 2018-05-07 2019-11-14 Micron Technology, Inc. Event messaging in a system having a self-scheduling processor and a hybrid threading fabric

Also Published As

Publication number Publication date
EP0880745A1 (en) 1998-12-02
EP0880745A4 (en) 1999-04-21
AU1424697A (en) 1997-07-28
KR19990076823A (en) 1999-10-25

Similar Documents

Publication Publication Date Title
EP0880745A1 (en) Event filtering feature for a computer operating system in a home communications terminal
US7107497B2 (en) Method and system for event publication and subscription with an event channel from user level and kernel level
US5563648A (en) Method for controlling execution of an audio video interactive program
US5539920A (en) Method and apparatus for processing an audio video interactive signal
CN105740326B (en) Thread state monitoring method and device for browser
US6560626B1 (en) Thread interruption with minimal resource usage using an asynchronous procedure call
EP0767938B1 (en) Method for enforcing a hierarchical invocation structure in real time asynchronous software applications
US6823518B1 (en) Threading and communication architecture for a graphical user interface
WO1995031781A1 (en) Method and apparatus for distributing events in an operating system
CN110351574A (en) Information rendering method, device, electronic equipment and the storage medium of direct broadcasting room
US20150326897A1 (en) Event booking mechanism
EP1008934A2 (en) Method and apparatus for user level monitor implementation
US7032211B1 (en) Method for managing user scripts utilizing a component object model object (COM)
CN112691365B (en) Cloud game loading method, system, device, storage medium and cloud game system
US7486276B2 (en) Key event controlling apparatus
US6877157B2 (en) Multi-tasking software architecture
EP1657640A1 (en) Method and computer system for queue processing
AU2001252975A1 (en) Operating system abstraction interface for broadband terminal platform firmware
CN100538622C (en) A kind of method that improves incident transmission and processing in the apparatus of digital television receiving
US7010781B1 (en) Methods and apparatus for managing debugging I/O
CN112578984B (en) Processing method and system for man-machine interaction event of synthetic vision system
KR20010103720A (en) Method and apparatus for operating system kernel operations
WO2000039677A1 (en) Method and apparatus for providing operating system scheduling operations
KR100324264B1 (en) Method for interrupt masking of real time operating system
Kim An Effective Processing Server for Various Database Operations of Large-scale On-line Games

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1019980704949

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 1996944439

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 97524380

Format of ref document f/p: F

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1996944439

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1019980704949

Country of ref document: KR

WWW Wipo information: withdrawn in national office

Ref document number: 1996944439

Country of ref document: EP

WWR Wipo information: refused in national office

Ref document number: 1019980704949

Country of ref document: KR