CA2032320A1 - Subroutine return prediction mechanism - Google Patents

Subroutine return prediction mechanism

Info

Publication number
CA2032320A1
CA2032320A1
Authority
CA
Canada
Prior art keywords
instruction
ring
subroutine
return
buffer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002032320A
Other languages
French (fr)
Inventor
Simon C. Steely, Jr.
David J. Sager
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Digital Equipment Corp
Original Assignee
Digital Equipment Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Digital Equipment Corp filed Critical Digital Equipment Corp
Publication of CA2032320A1 publication Critical patent/CA2032320A1/en
Abandoned legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3802Instruction prefetching
    • G06F9/3804Instruction prefetching for branches, e.g. hedging, branch folding
    • G06F9/3806Instruction prefetching for branches, e.g. hedging, branch folding using address prediction, e.g. return stack, branch history buffer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/3005Arrangements for executing specific machine instructions to perform operations for flow control
    • G06F9/30054Unconditional branch instructions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/32Address formation of the next instruction, e.g. by incrementing the instruction counter
    • G06F9/322Address formation of the next instruction, e.g. by incrementing the instruction counter for non-sequential address
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/448Execution paradigms, e.g. implementations of programming paradigms
    • G06F9/4482Procedural
    • G06F9/4484Executing subprograms
    • G06F9/4486Formation of subprogram jump address

Abstract

ABSTRACT OF THE DISCLOSURE

A method and arrangement for producing a predicted subroutine return address in response to entry of a subroutine return instruction in a computer pipeline that has a ring pointer counter and a ring buffer coupled to the ring pointer counter. The ring pointer counter contains a ring pointer that is changed when either a subroutine call instruction or return instruction enters the computer pipeline. The ring buffer has buffer locations which store a value present at its input into the buffer location pointed to by the ring pointer when a subroutine call instruction enters the pipeline. The ring buffer provides a value from the buffer location pointed to by the ring pointer when a subroutine return instruction enters the computer pipeline, this provided value being the predicted subroutine return address.

Description

Field of the Invention

The present invention relates to the field of changes in the flow of control in pipelined computers. More specifically, the invention relates to the processing of instructions associated with subroutines in a highly-pipelined computer.

Background of the Invention

The concept of pipelining of instructions in a computer is well known. The processing of a single instruction is performed in a number of different stages, such as fetching, decoding and execution. In pipelined computers, each of the various stages works on a different instruction at the same time. For example, if the pipeline were only three stages long, the first instruction that has passed through the pipeline is being operated on by the third stage, while the second instruction to enter the pipeline is being operated on by the second stage, and the third instruction to enter the pipeline is being operated on by the first stage. Pipelining is a much more efficient method of processing instructions than waiting for a single instruction to be completely processed before beginning the processing of a second instruction. In a normal flow of a computer program, it is easy to know which instruction is to enter the pipeline next. In most instances, it is the sequentially next instruction that enters the pipeline so that, for example, instruction 101 will enter the pipeline after instruction 100.
One exception to this normal flow of control is known as a subroutine. A subroutine is a program or a sequence of instructions that can be "called" to perform the same tasks at different points in a program, or even in different programs. For example, instruction 101 may call a subroutine which begins executing at instruction 200. The subroutine may execute instructions 200-202 and then "return" to the main flow at instruction 102. Further, the same subroutine, comprising instructions 200-202, may be called from a number of different locations in the main flow and return to different locations in the main flow.
Subroutines pose a problem for heavily pipelined computers (those with many stages in the pipeline). Although the instruction which calls a subroutine will contain enough information to determine which is the next instruction to enter the pipeline (i.e., the first instruction in the called subroutine), the return instruction in the subroutine will not contain such information. Instead, a return instruction must pass through all of the stages of the pipeline before the return address will be known from the return instruction. If the computer waited for the return instruction to pass through the pipeline before entering another instruction, there would then be a "bubble" in the pipeline behind the return instruction in which there would be no instructions, thereby lowering the performance of the computer.
To avoid bubbles, a mechanism known as a stack has been used. Basically, a stack will store the return address at the time a subroutine is called, and when the subroutine is completed and control is returned to the main flow by a return instruction, this return address is located in the stack and provided to the pipeline.


The pipeline will then be able to return control to the main flow by entering the proper instruction into the pipeline. By keeping a stack of the return addresses, and using these return addresses to locate the next instruction when there is a return from the subroutine, bubbles in the pipeline are eliminated.
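The stack mechanism described above can be sketched in software (a hypothetical illustration only; the names and the fixed depth are assumptions, not the patent's apparatus):

```python
class ReturnAddressStack:
    """Sketch of the stack mechanism: push the return address on a
    subroutine call, pop it on the matching return."""

    def __init__(self, depth=12):
        self.depth = depth   # fixed size, e.g. twelve locations as above
        self.entries = []

    def on_call(self, call_pc):
        if len(self.entries) == self.depth:
            # a real machine needs the complicated overrun handling here
            raise OverflowError("stack overrun")
        self.entries.append(call_pc + 1)   # address just after the call

    def on_return(self):
        if not self.entries:
            raise IndexError("stack underrun")
        return self.entries.pop()          # last-in, first-out

ras = ReturnAddressStack()
ras.on_call(101)               # call at instruction 101
ras.on_call(300)               # a nested call at instruction 300
print(ras.on_return())         # -> 301 (return from the inner subroutine)
print(ras.on_return())         # -> 102 (return to the main flow)
```

The overrun branch is exactly the limitation the next paragraph describes: once the fixed-size stack is full, a further call cannot be handled without extra machinery.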
A problem with the stack mechanism is the limited size of the stack and the complicated procedures needed to deal with stack overruns and underruns when a large number of subroutines have been called. In other words, if the stack contains twelve locations, only twelve subroutines can be called at one time without resorting to the complicated procedures for stack overruns. There is a need for a mechanism that provides return addresses and eliminates bubbles in the pipeline, but without requiring the complicated procedures necessary when a stack is used.
Summary of the Invention

This and other needs are met by the present invention, which provides a ring buffer that simulates the characteristics of a stack mechanism to predict a subroutine return address. The ring buffer stores a predicted return address in one of its ring buffer locations every time a subroutine is called (i.e., enters the pipeline). Whenever a subroutine return instruction enters the pipeline, the predicted return address is provided to the pipeline from the ring buffer so that the appropriate instruction from the main flow can enter the pipeline. In this way, bubbles in the pipeline are eliminated.
The ring buffer used in the present invention can be of limited size and have, for example, eight locations for storing eight different return addresses. If more than eight subroutines are called without any returns, then, in ring buffer fashion, the earliest

stored return addresses will be overwritten by the more recent return addresses that have been stored in the ring buffer. Eventually, when a subroutine associated with a return address that has been overwritten has completed processing through the pipeline, and the flow of control is changed to the main flow, the predicted return address in the ring buffer will be incorrect.
The actual return address for the return instruction will be known at the end of the pipeline, when the return instruction has passed all the way through the pipeline.
This actual return address will be compared at this time with the predicted return address in the ring buffer.
When the predicted return address is wrong, as shown by the comparison, the pipeline is flushed and started again from the instruction at the actual return address.
For well-behaved programs, return addresses are predicted correctly over 90% of the time.
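The behavior summarized above can be sketched as a small model (hypothetical names; a sketch of the idea, not the patented hardware): the ring pointer advances and a predicted return address is stored on each call, and the pointed-to value is read back and the pointer backed up on each return.

```python
class RingBufferPredictor:
    def __init__(self, size=8):
        self.size = size
        self.buf = [0] * size
        self.rp = 0                          # ring pointer (RP)

    def on_call(self, call_pc):
        # on a subroutine call: advance RP, then store PC + 1 there
        self.rp = (self.rp + 1) % self.size
        self.buf[self.rp] = call_pc + 1

    def on_return(self):
        # on a subroutine return: provide the predicted return
        # address (PRA), then back RP up by one
        pra = self.buf[self.rp]
        self.rp = (self.rp - 1) % self.size
        return pra

p = RingBufferPredictor()
p.on_call(101)                 # call at 101: slot now holds 102
p.on_call(300)                 # nested call at 300: slot holds 301
print(p.on_return())           # -> 301
print(p.on_return())           # -> 102
```

As long as the nesting depth never exceeds the buffer size, this behaves exactly like the stack, which is why predictions are correct for well-behaved programs.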

Brief Description of the Drawings

FIG. 1 shows a block diagram of a program having a main flow and a subroutine.
FIG. 2 shows a block diagram of a pipeline using an embodiment of the present invention.
FIG. 3 shows a block diagram of a pipeline using another embodiment of the present invention.

Detailed Description

FIG. 1 is a block illustration of a flow of control of a computer program. Instructions 100-107 are the instructions that make up a main flow of instructions 10. A secondary flow of instructions 200-202 comprises a subroutine 12. In the example of FIG. 1, the subroutine 12 is called from one of two instructions, 101 and 104. When the subroutine 12 is called, from instruction 101 for example, the computer executes instructions 200, 201 and returns to the main flow 10 with return instruction 202. Execution of the main flow 10 begins again at instruction 102. However, if the subroutine 12 was called from instruction 104, the subroutine 12 must return the flow of execution to the main flow 10 at instruction 105.
As can be seen from the above flow description, the main flow 10 can be returned to from the subroutine 12 at one of two places. In a larger program, it is possible for the return to the main flow 10 to be made to any number of places.
The performance of an instruction in a pipelined computer involves a number of operations performed in sequential stages in the pipeline. Pipelining in computers is well known, and involves operating on separate instructions in the separate stages of the pipeline simultaneously. For example, if there are five stages in the pipeline, a different instruction will be in each of the five stages at the same time, with different operations being performed on the individual instructions at each stage. The pipelining of instructions is a much more efficient method of processing instructions than waiting for each instruction to complete before beginning the processing of the next instruction.
A pipeline constructed in accordance with the present invention is illustrated in FIG. 2 and will be referred to with reference numeral 20. The pipeline 20 has a program counter buffer 21 that buffers a program count, or "PC", to an instruction cache 22. A number of instructions are stored in the instruction cache 22 at any time. When the instruction cache 22 receives the PC from the program counter buffer 21, a coded instruction is sent to an instruction fetching decoder 24. As its name implies, the decoder 24 decodes the instruction from the instruction cache 22 that has been pointed to by the program counter buffer 21. In accordance with the pipeline technique, the decoder 24 will be decoding a first instruction in the pipeline while a second instruction is being looked up in the instruction cache 22.
The decoded instruction is sent from decoder 24 to an instruction buffer 26, which merely buffers the decoded instruction. Following the instruction buffer 26 is issue logic 28, which performs a scheduling type of function. The remaining stage or stages in the pipeline have reference numeral 30, and can be any number of stages. The final stage in the pipeline 20 is the execution stage 32, in which the instruction is executed. Since there are six stages in the pipeline 20 of FIG. 2, up to six instructions can be processed simultaneously.
Normally, after one instruction has been called from the instruction cache 22, such as instruction 100, the next instruction will be called from the cache 22 and will be the sequentially next instruction, such as instruction 101. If it is a call instruction, such as instruction 101, the decoded instruction itself will provide the PC for the next instruction that is to be executed. In this case, the next instruction to be executed after the call instruction will be the first instruction of the subroutine, instruction 200. In this manner, by using either the next sequential instruction (PC + 1) or the instruction pointed to by a decoded call instruction, the pipeline 20 will be kept full of instructions. The pipeline 20 is being used inefficiently if there are bubbles (no instructions) in one or more of the stages in the pipeline. Such a bubble could potentially occur with a subroutine return instruction, such as instruction 202, as described below.
A bubble in the pipeline 20 can occur with a subroutine return instruction since the actual place of return to the main flow (the actual return address) will not be known until the subroutine return instruction has been executed in the execution stage 32. If no further measures are taken, stages 22-30 will be empty behind a subroutine return instruction since the address of the next instruction will not be known. As stated earlier, this bubble represents inefficient use of the pipeline 20.
In order to prevent bubbles in the pipeline 20, there must be a mechanism for predicting the next instruction to be processed after the return instruction in the pipeline 20. Some computer architectures allow the use of stacks, which store each return address every time a subroutine is called. The stack stores these return addresses in a last-in, first-out manner, so that the last return address stored in the stack will be the first return address issued from the stack when the next return instruction is decoded in the pipeline. However, some computer architectures do not provide or allow for the use of stacks.
The present invention provides the basic functionality of a stack, without actually using a stack apparatus. (Although the invention is useful in architectures without stacks, the invention is also useful in architectures that do use stacks.) Instead, a ring buffer 34 is used, as seen in FIG. 2. The ring buffer 34 holds a relatively small number of predicted return addresses. As will be explained in more detail below, the predicted return address in the ring buffer 34 is used to fetch the next instruction from the instruction cache 22 when the instruction fetching decoder 24 has decoded a return instruction. The pipeline 20 will then continue operating on subsequent instructions without forming a bubble in the pipeline. If the predicted return address eventually turns out to be incorrect, the sequence of instructions will be aborted and execution will continue using the correct sequence of instructions.
The ring buffer 34 can be an eight-deep buffer, for example, with each location storing a predicted return address. As stated earlier, the instruction cache 22 receives the program count (PC) from the program counter buffer 21 to locate an instruction. The PC is also sent to an adder 38, which adds the value 1 to the PC. This value, PC + 1, is sent to the ring buffer 34 and to a multiplexer 36. During a normal sequence of instructions, without a subroutine call or a branch, one instruction will follow another sequentially such that PC = PC + 1. If the program count for the first instruction is PC, the program count for the next instruction will be PC + 1, and for a third instruction PC + 2. In other implementations, the incremental value will be different than 1, as where the address of the instruction following instruction 300 is 304, and the next 308. In that case, the value of the PC will change by 4, so that PC = PC + 4.
The multiplexer 36 also receives inputs from the instruction fetching decoder 24. One input is a called subroutine address (CSA) contained within a call instruction that is received from the instruction cache 22 and decoded in decoder 24. The called subroutine address is selected from the multiplexer 36 to be the PC for the instruction cache 22 whenever the decoded instruction is a call instruction. If the instruction that is decoded in the instruction fetching decoder 24 is a return instruction, and not a call instruction, a signal is sent to the ring buffer 34 through an RP counter 40. The RP counter 40 contains a ring pointer (RP) that indexes into the ring buffer 34. The ring buffer 34 sends a predicted return address (PRA) pointed to by the RP to the multiplexer 36. Under control of a signal from the instruction fetching decoder 24, the multiplexer 36 will select the PRA to be the PC for the instruction cache 22. The operation of the ring buffer 34 will be described later.
So far, three inputs to the multiplexer 36 for selecting the PC have been described. The first of these is PC + 1, which is selected by the multiplexer when the instruction decoded by the instruction fetching decoder 24 is neither a call nor a return instruction.
The second input to the multiplexer 36 is the called subroutine address (CSA), contained in the decoded instruction and sent by the instruction fetching decoder 24 to the multiplexer 36. The CSA is used whenever the decoder 24 has decoded a call instruction.
The third input to the multiplexer 36 that has been described thus far is the predicted return address (PRA) that is sent by the ring buffer 34 to the multiplexer 36 whenever a return instruction has been decoded by the decoder 24. The operation of the ring buffer 34 will now be described.
As mentioned earlier, the ring buffer 34 comprises a finite number of buffer locations that store predicted return addresses. The buffer locations in the ring buffer 34 are indexed by the ring pointer (RP) kept in the RP counter 40. The operation of the RP counter is simple. Whenever the RP counter 40 receives the signal from the decoder 24 that the decoded instruction is a call instruction, the RP counter 40 is incremented by one so that it points to the next higher buffer location. In equation form, upon a subroutine call, RP = RP + 1. When the instruction decoded by decoder 24 is a return instruction, RP is decremented by one. This gives the equation RP = RP - 1 for a subroutine return instruction.
The value placed into the ring buffer 34 at the location pointed to by RP upon the decoding of a subroutine call instruction is the value of PC + 1: the sequentially next address after the address of the call instruction is loaded into the location in ring buffer 34 pointed to by RP. This value, PC + 1, then becomes the PRA. The PRA will be supplied by the ring buffer 34 to the multiplexer 36 when a subroutine return instruction has been decoded by decoder 24. The loading of PC + 1 and the return of the PRA into and out of the location pointed to by RP is performed according to a ring buffer control signal issued by the decoder 24.
An example of the operation of the ring buffer 34 now follows:
The first instruction to enter the pipeline 20 is instruction 100. This is looked up in the instruction cache 22 using PC = 100. The instruction is decoded in the decoder 24, and since it is neither a subroutine call nor a return, the control signal from the decoder 24 selects the next PC to be PC + 1. In this case, the value of PC + 1 = 101, so that instruction 101 is sent from the instruction cache 22 to the decoder 24.
Instruction 101 is a subroutine call (see FIG. 1), so the decoder 24 will send a signal to the RP counter 40. The RP in RP counter 40 is incremented from, for example, RP = 3 to RP = 4, so that it points to a new buffer location RP(4) in the ring buffer 34. The value of PC + 1, in this instance 101 + 1 = 102, is stored in the ring buffer 34 at ring buffer location RP(4).
The decoder 24, upon decoding the call instruction 101, has also sent a control signal to the multiplexer 36, and sent the called subroutine address (CSA) to be the next PC. The CSA is able to be sent by the decoder 24 to the multiplexer 36 since it is contained in the subroutine call instruction 101 that has been decoded.
The subroutine 12 is executed such that instruction 200 is retrieved from the instruction cache 22 and decoded. Instruction 201 will then be fetched from the instruction cache 22 (PC = PC + 1 = 201) since instruction 200 is neither a call nor a return instruction. Similarly, instruction 202 will follow instruction 201 in the pipeline. However, instruction 202 is a subroutine return instruction.
When subroutine return instruction 202 is decoded by the decoder 24, potentially the next instruction could be either instruction 102 or instruction 105 (see FIG. 1). Using a ring buffer 34, the correct return address can usually be provided. Upon decoding a return instruction, the decoder 24 sends a signal to the RP counter 40. The PRA pointed to by the RP contained in RP counter 40 will be provided to the multiplexer 36. In this example, the PRA associated with instruction 102, which has been stored at RP(4), is provided to the multiplexer 36. The decoder 24 sends a control signal to the multiplexer 36 to cause it to select PRA to be the next PC. Thus, using the supplied PRA, instruction 102 will be sent from instruction cache 22 to be decoded by the decoder 24. Also, the RP in RP counter 40 is decremented so that it now points to RP(3).
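The example above can be traced in miniature. The sketch below (hypothetical names; the multiplexer selection is collapsed into one function, under the assumption of a unit PC increment) follows instructions 100, 101 (call), 200-202, and the predicted return to 102, with RP starting at 3:

```python
from types import SimpleNamespace

def next_pc(pc, kind, ring, csa=None):
    """Select the next PC as the multiplexer 36 would."""
    if kind == "call":
        ring.rp = (ring.rp + 1) % ring.size   # RP = RP + 1
        ring.buf[ring.rp] = pc + 1            # store PC + 1 at RP
        return csa                            # fetch the subroutine
    if kind == "return":
        pra = ring.buf[ring.rp]               # predicted return address
        ring.rp = (ring.rp - 1) % ring.size   # RP = RP - 1
        return pra
    return pc + 1                             # ordinary instruction

ring = SimpleNamespace(size=8, buf=[0] * 8, rp=3)   # RP starts at 3

pc = next_pc(100, "plain", ring)            # instruction 100 -> 101
pc = next_pc(pc, "call", ring, csa=200)     # call at 101 -> subroutine at 200
assert ring.rp == 4 and ring.buf[4] == 102  # 102 stored at RP(4)
pc = next_pc(pc, "plain", ring)             # 200 -> 201
pc = next_pc(pc, "plain", ring)             # 201 -> 202
pc = next_pc(pc, "return", ring)            # return at 202 -> PRA
print(pc, ring.rp)                          # -> 102 3
```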
The above is a description of the operation of the pipeline 20 when the PRA is correct. However, in some circumstances, the PRA will not be correct. This can happen, for example, if the subroutine causes a return to a different location in the main flow 10 than the instruction immediately following the call instruction. The fact that the PRA is incorrect will not be known until the return instruction has completely gone through the pipeline 20 and been executed. It is at that point that the actual return address for the return instruction becomes known. In the meantime, a number of instructions following the return instruction have entered and are in the various stages of the pipeline 20. The pipeline 20 must recognize that the actual return address is different from the predicted return address and take corrective measures.
The actual return address (ARA) is sent from the end of the pipeline to a compare unit 42. At the same time that the compare unit 42 receives the ARA, it also receives the PRA. The PRA has been sent via a series of delay latches 44 to the compare unit 42. The PRA and the ARA are compared. If there is a mis-comparison, a mis-comparison signal is sent to the decoder 24 and to the issue logic 28. The mis-comparison signal causes a flushing from the pipeline 20 of all the instructions which entered the pipeline 20 after the return instruction. The multiplexer 36 receives at its fourth input the ARA, so that the PC to go to the instruction cache 22 will now become this ARA after a mis-comparison has been recognized. A new sequence of instructions, beginning with the instruction at the actual return address (ARA), is then processed by the pipeline 20.
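The compare unit's decision reduces to one check, sketched here with hypothetical names (the flush of the shadowed instructions and the refetch are represented by the returned tuple):

```python
def check_return(pra, ara):
    """Compare unit 42 in miniature: on a mis-comparison the pipeline
    is flushed and fetch restarts at the actual return address."""
    if pra != ara:
        return ("flush", ara)    # mis-comparison signal; refetch at ARA
    return ("ok", pra)           # prediction confirmed, keep going

print(check_return(102, 102))    # -> ('ok', 102)
print(check_return(102, 105))    # -> ('flush', 105)
```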
The ring buffer 34 has a limited number of buffer locations. This can lead to a mis-predicted return address being provided by the ring buffer 34. As an example, assume that the ring buffer 34 has eight locations, RP(0)-RP(7). If eight subroutines have been called, with no returns having been made, the ring buffer 34 will be full. When a ninth subroutine call has been made, the first ring buffer location, RP(0), will be overwritten in standard ring buffer manner. The PRA that was previously in RP(0) is essentially lost by the overwriting of a new PRA into that location. Thus, if nine subroutine returns are issued, the correct subroutine return address for the first subroutine that was called will no longer be found in RP(0). Instead, the new value of the PRA that was stored in RP(0) will be issued as the PRA. This PRA will be incorrect, as will eventually be determined by a comparison of the PRA with the ARA. The appropriate corrective measures as described earlier will then be taken upon recognition of the mis-comparison.
Events can occur during the execution of a program by a pipelined computer that require the pipeline to abort all instructions in all the stages of the pipeline. This is sometimes called "flushing" the pipeline. All the work that was going on in the pipeline 20 at this time will be discarded. The instructions that are discarded are called the "shadow". Typically the front end of the pipeline will then begin executing instructions at some error-handling address.
For the purpose of this description, the events that flush the pipeline will be referred to as "traps". Examples of traps are branch mis-prediction, virtual memory page fault, resource busy, and parity error in some hardware.
To enhance the performance of the subroutine return prediction mechanism, the ring buffer 34 needs to be backed up to the point it was at when the instruction that caused the trap passed through the ring buffer stage of the pipeline 20. Keeping copies of every change that occurs to the ring buffer 34 until it is known that no trap will occur is too expensive a solution.
The embodiment of FIG. 3 solves this problem by providing another ring pointer, the "confirmed ring pointer" or CRP, kept in a CRP counter 45. The CRP is changed similarly to the RP, such that it is incremented when a subroutine call instruction is seen and decremented when a subroutine return instruction is seen. The difference is that the CRP counter 45 watches the instructions that reach the last stage of the pipeline 20. Changes are made to the CRP based on the instructions that are at the last stage. When a trap occurs, the RP in the RP counter 40 will be set to the value of the CRP in the CRP counter 45.
This preserves the RP in correct sync when new instructions begin to be executed after a trap. There may have been some subroutine call instructions in the shadow of the trap which wrote over entries in the ring buffer 34, and subsequent predictions for subroutine return instructions will be incorrect. However, the RP will be in sync, and as soon as the RP passes over the incorrect entries, the ring buffer 34 will again be in a good state.
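The confirmed-ring-pointer recovery can be sketched as follows (hypothetical names; the speculative RP is updated at decode, the CRP only as instructions reach the last stage, and a trap copies CRP back into RP):

```python
class ShadowedPredictor:
    SIZE = 8

    def __init__(self):
        self.rp = 0    # speculative ring pointer, updated at decode
        self.crp = 0   # confirmed ring pointer, updated at the last stage

    def decode(self, kind):                       # front of the pipeline
        if kind == "call":
            self.rp = (self.rp + 1) % self.SIZE
        elif kind == "return":
            self.rp = (self.rp - 1) % self.SIZE

    def retire(self, kind):                       # last stage
        if kind == "call":
            self.crp = (self.crp + 1) % self.SIZE
        elif kind == "return":
            self.crp = (self.crp - 1) % self.SIZE

    def trap(self):
        self.rp = self.crp   # resynchronize RP after the flush

p = ShadowedPredictor()
p.decode("call"); p.retire("call")   # one call has reached the last stage
p.decode("call"); p.decode("call")   # two more calls, still in the shadow
p.trap()                             # trap: shadow is flushed
print(p.rp, p.crp)                   # -> 1 1
```

Only the two pointers are saved, not a copy of every buffer change, which is the cheap recovery the text argues for.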
The present invention is not limited to the embodiment shown, but finds application with pipelines of various sizes and with ring buffers of various sizes.

Claims (7)

1. An arrangement for producing a predicted subroutine return address in response to entry of a subroutine return instruction in a computer pipeline, comprising:
a ring pointer counter that contains a ring pointer that is incremented when a subroutine call instruction enters the computer pipeline, and is decremented when a subroutine return instruction enters the computer pipeline; and a ring buffer coupled to said ring pointer counter and having buffer locations, an input and an output, said ring buffer storing a value present at said input into the buffer location pointed to by said ring pointer when a subroutine call instruction enters the computer pipeline and providing a value at said output from the buffer location pointed to by the ring pointer when a subroutine return instruction enters the computer pipeline, said value at said output being the predicted subroutine return address.
2. The arrangement of claim 1, further comprising a comparison unit coupled to said ring buffer that compares an actual return address produced by the computer pipeline in response to the processing of a return instruction with the predicted return address for that return instruction, and having an output at which is provided a mis-comparison signal when the actual return address is not the same as the predicted return address.
3. A computer pipeline comprising:
an instruction cache which stores coded instructions and has an input that receives a program count which indexes the coded instructions, and an output at which the indexed coded instructions are provided;

an instruction fetching decoder having an input coupled to the instruction cache output and which decodes the coded instructions, and having as outputs: a subroutine call address when the coded instruction is a subroutine call instruction, a multiplexer control signal, a ring pointer counter control signal, and a decoded instruction;

an execution stage coupled to the instruction fetching decoder which executes the decoded instruction;

a program counter coupled to the input of the instruction cache and having an output at which is provided the program count to the instruction cache input, and having an input;

a multiplexer having a plurality of inputs, a control input coupled to the multiplexer control signal output of the instruction fetching decoder, and an output coupled to the program counter input;

an adder having an input coupled to the output of the program counter and an output coupled to one of said multiplexer inputs at which is provided a value equal to the program count plus one;

a ring pointer counter having an input coupled to the instruction fetching decoder to receive the ring pointer counter control signal, and containing a ring pointer which points to buffer locations in response to the ring pointer counter control signal, said ring pointer being incremented when the instruction fetching decoder decodes a subroutine call instruction and being decremented when the instruction fetching decoder decodes a subroutine return instruction; and

a ring buffer having an input coupled to the adder output, a plurality of buffer locations, and an output coupled to one of said multiplexer inputs, said ring buffer storing said value into the buffer location pointed to by said ring pointer when a subroutine call instruction is decoded and providing said value from the buffer location pointed to by the ring pointer at the ring buffer output when a subroutine return instruction is decoded, said value at said ring buffer output being the predicted subroutine return address.
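The ring buffer and ring pointer elements above can be illustrated with a minimal software sketch. This is a hypothetical model for clarity only, not the patented hardware; the class and method names (RingBufferPredictor, on_call, on_return) are invented for illustration. Per the claim, the pointer is incremented and program count + 1 is stored on a call, and the pointed-to value is provided and the pointer decremented on a return.

```python
class RingBufferPredictor:
    """Illustrative model of the claimed ring-buffer return-address predictor.

    On a subroutine call, the ring pointer is incremented and the value
    (program count + 1, the address after the call) is stored at the
    pointed-to buffer location. On a subroutine return, the pointed-to
    value is provided as the predicted return address and the pointer is
    decremented to the next most recently stored value.
    """

    def __init__(self, size=8):
        self.buffer = [0] * size   # the plurality of buffer locations
        self.pointer = 0           # the ring pointer

    def on_call(self, program_count):
        # Increment the ring pointer, then store PC + 1 at the pointed-to slot.
        self.pointer = (self.pointer + 1) % len(self.buffer)
        self.buffer[self.pointer] = program_count + 1

    def on_return(self):
        # Provide the pointed-to value as the prediction, then decrement.
        predicted = self.buffer[self.pointer]
        self.pointer = (self.pointer - 1) % len(self.buffer)
        return predicted
```

Because the buffer wraps around, deeply nested calls silently overwrite the oldest entries rather than overflowing, trading occasional mispredictions for a fixed-size structure.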
4. The pipeline of claim 3, further comprising a comparison unit coupled to said ring buffer that compares an actual return address produced by the computer pipeline in response to the processing of a return instruction with the predicted return address for that return instruction, and having an output at which is provided a mis-comparison signal when the actual return address is not the same as the predicted return address.
5. The pipeline of claim 4, further comprising means for processing a correct sequence of instructions beginning with the instruction indicated by the actual return address when the mis-comparison signal indicates that the predicted return address and the actual return address do not match.
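Claims 4 and 5 can be summarized by a small comparison sketch, again purely illustrative (the function name and return convention are invented): the actual return address produced at execution is compared with the prediction, and a mis-comparison signal directs the pipeline to resume at the actual address.

```python
def check_return_prediction(predicted_address, actual_address):
    """Model of the claimed comparison unit.

    Returns (mis_compare, restart_address): mis_compare is True when the
    prediction was wrong, in which case the pipeline would discard the
    speculatively fetched instructions and resume fetching the correct
    sequence at restart_address (the actual return address).
    """
    mis_compare = predicted_address != actual_address
    restart_address = actual_address if mis_compare else None
    return mis_compare, restart_address
```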
6. The pipeline of claim 3, further comprising a confirmed ring pointer counter coupled to the execution stage and the ring pointer counter, and containing a confirmed ring pointer that is incremented when the execution stage receives a subroutine call instruction and is decremented when the execution stage receives a subroutine return instruction, and which provides the confirmed ring pointer to the ring pointer counter to replace the ring pointer when a trap occurs.
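The confirmed ring pointer of claim 6 can be modeled as a second pointer advanced only when calls and returns reach the execution stage; on a trap, it replaces the speculative decode-stage pointer, undoing pointer movement caused by instructions that never completed. A hypothetical sketch (all names invented):

```python
class TwoPointerPredictor:
    """Model of the speculative ring pointer plus the confirmed ring pointer."""

    def __init__(self, size=8):
        self.buffer = [0] * size
        self.spec_pointer = 0       # moved at decode (speculative)
        self.confirmed_pointer = 0  # moved at execution (confirmed)

    def decode_call(self, program_count):
        # Decode stage: increment speculative pointer, store PC + 1.
        self.spec_pointer = (self.spec_pointer + 1) % len(self.buffer)
        self.buffer[self.spec_pointer] = program_count + 1

    def decode_return(self):
        # Decode stage: predict from the pointed-to slot, then decrement.
        predicted = self.buffer[self.spec_pointer]
        self.spec_pointer = (self.spec_pointer - 1) % len(self.buffer)
        return predicted

    def execute_call(self):
        # Execution stage confirms a call actually completed.
        self.confirmed_pointer = (self.confirmed_pointer + 1) % len(self.buffer)

    def execute_return(self):
        # Execution stage confirms a return actually completed.
        self.confirmed_pointer = (self.confirmed_pointer - 1) % len(self.buffer)

    def trap(self):
        # On a trap, the confirmed pointer replaces the speculative pointer.
        self.spec_pointer = self.confirmed_pointer
```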
7. A method of predicting subroutine return addresses in a pipelined computer comprising:
storing in one of a plurality of buffer locations in a ring buffer a value equal to one plus an address of a call instruction in response to that call instruction;
pointing to the buffer location containing the most recently stored value;
providing the pointed-to most recently stored value as an output in response to a return instruction, said output being the predicted subroutine return address;
pointing to the buffer location containing the next most recently stored value.
CA002032320A 1989-12-18 1990-12-14 Subroutine return prediction mechanism Abandoned CA2032320A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/451,943 US5179673A (en) 1989-12-18 1989-12-18 Subroutine return prediction mechanism using ring buffer and comparing predicted address with actual address to validate or flush the pipeline
US451,943 1989-12-18

Publications (1)

Publication Number Publication Date
CA2032320A1 true CA2032320A1 (en) 1991-06-19

Family

ID=23794364

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002032320A Abandoned CA2032320A1 (en) 1989-12-18 1990-12-14 Subroutine return prediction mechanism

Country Status (6)

Country Link
US (1) US5179673A (en)
EP (1) EP0433709B1 (en)
JP (1) JPH06161748A (en)
KR (1) KR940009378B1 (en)
CA (1) CA2032320A1 (en)
DE (1) DE69033339T2 (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5193205A (en) 1988-03-01 1993-03-09 Mitsubishi Denki Kabushiki Kaisha Pipeline processor, with return address stack storing only pre-return processed address for judging validity and correction of unprocessed address
US5539911A (en) * 1991-07-08 1996-07-23 Seiko Epson Corporation High-performance, superscalar-based computer system with out-of-order instruction execution
US5493687A (en) 1991-07-08 1996-02-20 Seiko Epson Corporation RISC microprocessor architecture implementing multiple typed register sets
JP2773471B2 (en) * 1991-07-24 1998-07-09 日本電気株式会社 Information processing device
JP3730252B2 (en) * 1992-03-31 2005-12-21 トランスメタ コーポレイション Register name changing method and name changing system
WO1993022722A1 (en) * 1992-05-01 1993-11-11 Seiko Epson Corporation A system and method for retiring instructions in a superscalar microprocessor
US5628021A (en) * 1992-12-31 1997-05-06 Seiko Epson Corporation System and method for assigning tags to control instruction processing in a superscalar processor
EP0682789B1 (en) * 1992-12-31 1998-09-09 Seiko Epson Corporation System and method for register renaming
US5604877A (en) * 1994-01-04 1997-02-18 Intel Corporation Method and apparatus for resolving return from subroutine instructions in a computer processor
US5758142A (en) * 1994-05-31 1998-05-26 Digital Equipment Corporation Trainable apparatus for predicting instruction outcomes in pipelined processors
US5896528A (en) * 1995-03-03 1999-04-20 Fujitsu Limited Superscalar processor with multiple register windows and speculative return address generation
US5968169A (en) * 1995-06-07 1999-10-19 Advanced Micro Devices, Inc. Superscalar microprocessor stack structure for judging validity of predicted subroutine return addresses
US5854921A (en) * 1995-08-31 1998-12-29 Advanced Micro Devices, Inc. Stride-based data address prediction structure
US5881278A (en) * 1995-10-30 1999-03-09 Advanced Micro Devices, Inc. Return address prediction system which adjusts the contents of return stack storage to enable continued prediction after a mispredicted branch
US5864707A (en) * 1995-12-11 1999-01-26 Advanced Micro Devices, Inc. Superscalar microprocessor configured to predict return addresses from a return stack storage
US6088793A (en) * 1996-12-30 2000-07-11 Intel Corporation Method and apparatus for branch execution on a multiple-instruction-set-architecture microprocessor
US5958039A (en) * 1997-10-28 1999-09-28 Microchip Technology Incorporated Master-slave latches and post increment/decrement operations
US6044456A (en) * 1998-01-05 2000-03-28 Intel Corporation Electronic system and method for maintaining synchronization of multiple front-end pipelines
US20050149694A1 (en) * 1998-12-08 2005-07-07 Mukesh Patel Java hardware accelerator using microcode engine
US6289444B1 (en) 1999-06-02 2001-09-11 International Business Machines Corporation Method and apparatus for subroutine call-return prediction
US6304962B1 (en) 1999-06-02 2001-10-16 International Business Machines Corporation Method and apparatus for prefetching superblocks in a computer processing system
KR20020028814A (en) * 2000-10-10 2002-04-17 나조미 커뮤니케이션즈, 인코포레이티드 Java hardware accelerator using microcode engine
US20020080388A1 (en) * 2000-12-27 2002-06-27 Sharp Laboratories Of America, Inc. Dynamic method for determining performance of network connected printing devices in a tandem configuration
US20040049666A1 (en) * 2002-09-11 2004-03-11 Annavaram Murali M. Method and apparatus for variable pop hardware return address stack
JP2006040173A (en) * 2004-07-29 2006-02-09 Fujitsu Ltd Branch prediction device and method
US8423751B2 (en) * 2009-03-04 2013-04-16 Via Technologies, Inc. Microprocessor with fast execution of call and return instructions
US9317293B2 (en) * 2012-11-28 2016-04-19 Qualcomm Incorporated Establishing a branch target instruction cache (BTIC) entry for subroutine returns to reduce execution pipeline bubbles, and related systems, methods, and computer-readable media
US9411590B2 (en) * 2013-03-15 2016-08-09 Qualcomm Incorporated Method to improve speed of executing return branch instructions in a processor
US20240036864A1 (en) * 2022-08-01 2024-02-01 Qualcomm Incorporated Apparatus employing wrap tracking for addressing data overflow

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3794980A (en) * 1971-04-21 1974-02-26 Cogar Corp Apparatus and method for controlling sequential execution of instructions and nesting of subroutines in a data processor
US4649472A (en) * 1981-02-04 1987-03-10 Burroughs Corporation Multi-phase subroutine control circuitry
US4399507A (en) * 1981-06-30 1983-08-16 Ibm Corporation Instruction address stack in the data memory of an instruction-pipelined processor
US4594659A (en) * 1982-10-13 1986-06-10 Honeywell Information Systems Inc. Method and apparatus for prefetching instructions for a central execution pipeline unit
US4586127A (en) * 1982-11-03 1986-04-29 Burroughs Corp. Multiple control stores for a pipelined microcontroller
US4611278A (en) * 1983-04-01 1986-09-09 Honeywell Information Systems Inc. Wraparound buffer for repetitive decimal numeric operations
JPS6054049A (en) * 1983-09-02 1985-03-28 Hitachi Ltd Subroutine link control system of data processing device
US5008807A (en) * 1984-07-05 1991-04-16 Texas Instruments Incorporated Data processing apparatus with abbreviated jump field
JPS62152043A (en) * 1985-12-26 1987-07-07 Nec Corp Control system for instruction code access
US4773041A (en) * 1986-06-02 1988-09-20 Unisys Corporation System for executing a sequence of operation codes with some codes being executed out of order in a pipeline parallel processor
JPS62285140A (en) * 1986-06-04 1987-12-11 Hitachi Ltd Information processor
US4814978A (en) * 1986-07-15 1989-03-21 Dataflow Computer Corporation Dataflow processing element, multiprocessor, and processes
JPH06100967B2 (en) * 1987-06-10 1994-12-12 三菱電機株式会社 Data processing device having pipeline processing mechanism and processing method
JPH0820952B2 (en) * 1988-03-01 1996-03-04 三菱電機株式会社 Data processing device with pipeline processing mechanism
US4890221A (en) * 1988-04-01 1989-12-26 Digital Equipment Corporation Apparatus and method for reconstructing a microstack
US5040137A (en) * 1989-12-13 1991-08-13 Audio Animation Random access FIR filtering

Also Published As

Publication number Publication date
EP0433709A3 (en) 1993-05-12
DE69033339T2 (en) 2000-08-03
JPH06161748A (en) 1994-06-10
KR910012914A (en) 1991-08-08
EP0433709B1 (en) 1999-11-03
DE69033339D1 (en) 1999-12-09
EP0433709A2 (en) 1991-06-26
US5179673A (en) 1993-01-12
KR940009378B1 (en) 1994-10-07

Similar Documents

Publication Publication Date Title
CA2032320A1 (en) Subroutine return prediction mechanism
US5889982A (en) Method and apparatus for generating event handler vectors based on both operating mode and event type
US5675759A (en) Method and apparatus for register management using issue sequence prior physical register and register association validity information
EP0365188B1 (en) Central processor condition code method and apparatus
US6694427B1 (en) Method system and apparatus for instruction tracing with out of order processors
EP0381471B1 (en) Method and apparatus for preprocessing multiple instructions in a pipeline processor
US5276882A (en) Subroutine return through branch history table
US5197138A (en) Reporting delayed coprocessor exceptions to code threads having caused the exceptions by saving and restoring exception state during code thread switching
US5003462A (en) Apparatus and method for implementing precise interrupts on a pipelined processor with multiple functional units with separate address translation interrupt means
US4734852A (en) Mechanism for performing data references to storage in parallel with instruction execution on a reduced instruction-set processor
EP0071028B1 (en) Instructionshandling unit in a data processing system with instruction substitution and method of operation
US4775927A (en) Processor including fetch operation for branch instruction with control tag
AU598969B2 (en) Method and apparatus for executing instructions for a vector processing system
EP0463628A2 (en) One cycle register mapping
JPH02234229A (en) Source list,pointer queue and result queue
EP0423906A2 (en) Method of and apparatus for nullifying an instruction
JPH03116235A (en) Data processor branch processing method
KR20060002031A (en) Method of scheduling and executing instructions
JPH0731603B2 (en) FORTH specific language microprocessor
JP3154660B2 (en) Method and system for temporarily buffering condition register data
JPH09120359A (en) Method and system for tracking of resource allocation at inside of processor
US5146570A (en) System executing branch-with-execute instruction resulting in next successive instruction being execute while specified target instruction is prefetched for following execution
US6681321B1 (en) Method system and apparatus for instruction execution tracing with out of order processors
US4736289A (en) Microprogram controlled data processing apparatus
US6295601B1 (en) System and method using partial trap barrier instruction to provide trap barrier class-based selective stall of instruction processing pipeline

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued