Title of Invention

A TRANSACTION QUEUE FOR AN AGENT, A MANAGEMENT METHOD FOR EXTERNAL TRANSACTIONS, AND A BUS SEQUENCING UNIT OF AN AGENT.

Abstract
Embodiments of the present invention provide a multi-mode transaction queue 300 for a computer processing agent that provides a measured response to congestion events. The transaction queue may initially operate according to a default priority scheme. When a congestion event is detected, the transaction queue may engage a second priority scheme to selectively invalidate stored transactions in the queue that are pending, that is, transactions that have not been posted to the external bus. In one embodiment, the transaction queue may invalidate blind prefetch requests first. The transaction queue may also invalidate non-posted prefetch requests that are stored with an associated posted prefetch request. Finally, in an extreme congestion case, as when there is no available room for new requests from the processing agent, the transaction queue may invalidate a pair of non-posted patterned prefetch requests.
FIELD OF THE INVENTION
The present invention relates to a transaction queue for an agent, a
management method for external transactions, and a bus sequencing unit of an
agent.
BACKGROUND
As is known, many modern computing systems employ a multi-agent
architecture. A typical system is shown in FIG. 1. There, a plurality of agents 110-160
communicates over an external bus 170 according to a predetermined bus protocol.
"Agents" may include general-purpose processors 110-140, memory controllers 150,
interface chipsets 160, input/output devices and/or other integrated circuits that process
data requests (not shown). The bus 170 may permit several external bus transactions to
be in progress at once.
An agent (e.g., 110) typically includes a transaction management system that
receives requests from other components of the agent and processes external bus
transactions to implement the requests. A bus sequencing unit 200 ("BSU"), shown in
FIG. 2, is an example of one such transaction management system. The BSU 200 may
include an arbiter 210, an internal cache 220, an internal transaction queue 230, an
external transaction queue 240, an external bus controller 250 and a prefetch queue 260.
The BSU 200 manages transactions on the external bus 170 in response to data requests
issued by, for example, an agent core (not shown in FIG. 2).
The arbiter 210 may receive data requests not only from the core but also from
a variety of other sources such as the prefetch queue 260. Of the possibly several data
requests received simultaneously by the arbiter 210, the arbiter 210 may select and
output one of them to the remainder of the BSU 200.
The internal cache 220 may store data in several cache entries. It may possess
logic responsive to a data request to determine whether the cache 220 stores a valid
copy of requested data. "Data," as used herein, may refer to instruction data and
variable data that may be used by the agent. The internal cache 220 may furnish
requested data in response to data requests.
The internal transaction queue 230 also may receive and store data requests
issued by the arbiter 210. For read requests, it coordinates with the internal cache 220
to determine if the requested data "hits" (may be furnished by) the internal cache 220.
If not, if a data request "misses" the internal cache 220, the internal transaction queue
230 forwards the data request to the external transaction queue 240.
The external transaction queue 240 may interpret data requests and generate
external bus transactions to fulfill them. The external transaction queue 240 may be
populated by several queue registers. It manages the agent's transactions as they
progress on the external bus 170. For example, when data is available in response to
a transaction, the external transaction queue 240 retrieves the data and forwards it to a
requestor within the agent (for example, the core).
The prefetch queue 260 may identify predetermined patterns in read requests
issued by the core (not shown). For example, if the core issues read requests directed
to sequentially advancing memory locations (addresses A, A+1, A+2, A+3, ...) the
prefetch queue 260 may issue a prefetch request to read data from a next address in the
sequence (A+4) before the core actually requests the data itself. By anticipating a need
for data, the prefetch queue 260 may cause the data to be available in the internal cache
220 when the core issues a request for the data. The data would be furnished to the core
from the internal cache 220 rather than from external memory - a much faster
operation. Herein, this type of prefetch request is called a "patterned prefetch."
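By way of illustration only, the patterned-prefetch behavior described above may be sketched in software as follows; the class name, the stride-detection threshold of two consecutive sequential reads, and the unit-stride addressing are assumptions made for the example, not limitations of the embodiments.

```python
# Illustrative sketch of a patterned-prefetch detector. The threshold of
# two consecutive sequential reads before prefetching is an assumption.
class PrefetchQueue:
    """Watches core read addresses and predicts the next one in a run."""

    def __init__(self):
        self.last_addr = None
        self.run_length = 0  # consecutive sequential reads seen so far

    def observe_read(self, addr):
        """Return an address to prefetch, or None if no pattern is seen."""
        if self.last_addr is not None and addr == self.last_addr + 1:
            self.run_length += 1
        else:
            self.run_length = 0
        self.last_addr = addr
        # Assumed policy: after an established run, prefetch the next line
        # (address A+4 in the example above) before the core requests it.
        if self.run_length >= 2:
            return addr + 1
        return None
```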
A BSU 200 may implement a second type of prefetch, herein called a "blind
prefetch." When a core issues a read request to data at an address (say, address B) that
will be fulfilled by an external bus transaction, a blind prefetch mechanism may cause
a second external bus transaction to retrieve data at a second memory address (B+1).
A blind prefetch may cause every read request from a core that cannot be fulfilled
internally to spawn a pair of external bus transactions. Blind prefetches may improve
processor performance by retrieving twice as many cache lines (or cache sectors) as are
necessary to satisfy the core read request. Again, if the core eventually requires data
from the data prefetched from the other address (B+1), the data may be available in the
internal cache 220 when the core issues a read request for the data. A blind prefetch
request also may be generated from a patterned prefetch request. Using the example
above, a patterned prefetch request to address A+4 may be augmented by a blind
prefetch to address A+5.
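Purely as a non-limiting sketch, the pairing behavior described above may be expressed as a function that expands one externally fulfilled read into its demand/blind-prefetch pair; the +1 companion address follows the B, B+1 example above.

```python
def spawn_blind_prefetch(read_addr):
    """Given a read that cannot be fulfilled internally, return the pair of
    external transactions it spawns: the demand read and its blind-prefetch
    companion at the next address (illustrative model only)."""
    return [("read", read_addr), ("blind_prefetch", read_addr + 1)]
```

The same expansion would apply to a patterned prefetch request, so that a patterned prefetch to A+4 is augmented by a blind prefetch to A+5.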
Returning to FIG. 1, it is well known that, particularly in multiprocessor
computer systems, the external bus 170 can limit system performance. The external bus
170 often operates at clock speeds that are much slower than the internal clock speeds
of the agents. A core often may issue several requests for data in the time that the
external bus 170 can complete a single external bus transaction. Thus, a single agent
can consume much of the bandwidth of an external bus 170. When a plural number of
agents must share the external bus 170, each agent is allocated only a fraction of the
bandwidth available on the bus 170. In multiple agent systems, agents very often must
wait idle while an external bus retrieves data that they need to make forward progress.
An external transaction queue 240 (FIG. 2) may include control logic that
prioritizes pending requests for posting to the external bus. Generally, core reads
should be prioritized over prefetch reads and prefetch reads should be prioritized over
writes. Core read requests identify data for which the core has an immediate need.
Prefetch read requests identify data that the core is likely to need at some point in the
future. Write requests identify data that the agent is returning to system storage.
Accordingly, the external transaction queue 240 may include control logic that posts
requests on the external bus according to this priority.
The predetermined priority scheme has its disadvantages. A request typically
is stored in the transaction queue 240 until it is completed on the external bus. During
periods of high congestion, when the transaction queue 240 is entirely or nearly full,
prefetch and write requests may prevent new core requests from being stored in the
queue 240. These lower priority requests would remain stored in the queue until an
external bus transaction for the request completes. Thus, the lower priority requests
may prevent higher priority requests from being implemented. This would limit system
performance.
Accordingly, there is a need in the art for a congestion management system for
an external transaction queue in an agent. There is a need in the art for such a system
that provides a dynamic priority system - maintaining a first priority scheme in the
absence of system congestion but implementing a second priority scheme when congestion
events occur.
SUMMARY
Embodiments of the present invention provide a multi-mode transaction queue
for an agent. The transaction queue may operate according to a default priority scheme.
When a congestion event is detected, the transaction queue may engage a second
priority scheme.
Accordingly, the present invention provides a transaction queue for an
agent that operates according to a dynamic priority scheme, the transaction
queue operating according to a default priority scheme and engaging a second
priority scheme when a congestion event is detected.
Accordingly, there is provided in an agent, a management method for
external transactions, comprising : queuing data of a plurality of read requests, for
each queued request, storing data of a blind prefetch transaction associated with
the respective request, when a transaction congestion event occurs, disabling
selected stored prefetch requests.
There is also provided in an agent, a management method for external
transactions, comprising : queuing data of a plurality of external bus transactions,
for at least one queued transaction, storing data of a blind prefetch transaction in
association with the respective transaction, when a transaction congestion event
occurs, disabling the blind prefetch transaction.
There is also provided in an agent, a management method for external
transactions, comprising : queuing data of a plurality of read requests, certain
read requests related to executions being performed by an agent core, certain
other read requests related to data being prefetched, when a transaction
congestion event occurs, disabling the prefetch requests.
There is further provided in an agent, a multi-mode management method
for external transactions, comprising : queuing data of a plurality of core read
requests, for each core read request, storing data of a blind prefetch transaction
associated with the respective core read request, queuing data of prefetch
requests related to patterns of core read requests, in a first mode, when a
transaction congestion event occurs, disabling the blind prefetch transactions, in
a second mode, when a transaction congestion event occurs, disabling the
prefetch requests.
The present invention also provides a transaction queue comprising : a
controller, a plurality of queue registers, each having an address field and status
fields associated with a pair of transactions related to the address, wherein, in
response to a congestion event, the controller modifies one of the status fields in
a register to invalidate the respective transaction.
The present invention further provides a bus sequencing unit of an agent,
comprising : an arbiter, an internal cache coupled to the arbiter, a transaction
queue coupled to the arbiter and storing data of external transactions to be
performed by the agent, the transactions selected from the group of core read
requests, blind prefetch requests and patterned prefetch requests, and an
external bus controller coupled to the transaction queue, wherein, in response to
a congestion event in the bus sequencing unit, the transaction queue invalidates
selected transactions.
The present invention further provides a congestion management
method for external transactions, the external transactions stored in the
transaction queue in pairs and taken from the set of a core read request-blind
prefetch request pair and a patterned prefetch request pair, comprising : receiving
a new request at a transaction queue, determining whether the transaction queue
has space to store the new request, if not, removing a pair of patterned prefetch
requests from the transaction queue and storing the new request in the transaction
queue.
The present invention further provides a method comprising : operating a
transaction queue for an agent according to a default priority scheme ; and
operating the transaction queue according to a second priority scheme after a
congestion event is detected.
The present invention further provides a management method comprising :
queuing data of a plurality of read requests in an agent, for each queued request,
storing data of a blind prefetch transaction associated with the respective request,
if a transaction congestion event occurs, disabling selected stored prefetch
requests.
The present invention further provides a management method for external
transactions, comprising : queuing data of a plurality of external bus transactions
in an agent, for at least one queued transaction, storing data of a blind prefetch
transaction in association with the respective transaction, and if a transaction
congestion event occurs, disabling the blind prefetch transaction.
The present invention further provides a management method for external
transactions, comprising : queuing data of a plurality of read requests in an agent,
certain read requests related to executions to be performed by a core of the
agent, certain other read requests related to data to be prefetched, if a
transaction congestion event occurs, disabling the prefetch requests.
The present invention further provides a multi-mode management method
for external transactions, comprising : queuing data of a plurality of core read
requests in an agent, for each core read request, storing data of a blind prefetch
transaction associated with the respective core read request, queuing data of
prefetch requests related to patterns of core read requests, in a first mode, if a
transaction congestion event occurs, disabling the blind prefetch transactions, in
a second mode, if a transaction congestion event occurs, disabling the prefetch
requests.
The present invention further provides a transaction queue of an agent,
comprising : a plurality of queue registers, each comprising an address field and
status fields for a pair of transactions related to the address, and a controller to
respond to a congestion event by modifying a status field in a register to
invalidate the respective transaction.
The present invention further provides a bus sequencing unit of an agent,
comprising : an arbiter, an internal cache coupled to the arbiter, a transaction
queue, coupled to the arbiter, to store data of external transactions to be
performed by the agent, the transactions selected from the group of core read
requests, blind prefetch requests and patterned prefetch requests, the transaction
queue to invalidate a selected transaction in response to a congestion event in
the bus sequencing unit, and an external bus controller coupled to the transaction
queue.
The present invention also provides a congestion management method for
external transactions, comprising : receiving a new request at a transaction
queue, determining whether the transaction queue has space to store the new
request, if not, removing a pair of patterned prefetch requests from the transaction
queue, and storing the new request in the transaction queue, wherein the
external transactions are stored in the transaction queue in pairs and taken from
a set of a core read request-blind prefetch request pair and a patterned prefetch
request pair.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
FIG. 1 is a block diagram of a multi-agent computer system.
FIG. 2 is a block diagram of an exemplary bus sequencing unit of an agent.
FIG. 3 is a block diagram of an external transaction queue of an agent
according to an embodiment of the present invention.
FIG. 4 is a flow diagram of a congestion management method according to an
embodiment of the present invention.
DETAILED DESCRIPTION
Embodiments of the present invention provide a transaction queue that provides
a measured response to congestion events. The transaction queue selectively invalidates
stored transactions in the queue that are pending - they are not currently posted to the
external bus. In one embodiment, the transaction queue invalidates blind prefetch
requests first. The transaction queue may also invalidate non-posted prefetch requests
that are stored with an associated posted prefetch request. Finally, in an extreme
congestion case, as when there is no available room for new requests, the transaction
queue may invalidate a pair of non-posted patterned prefetch requests.
These embodiments advantageously provide a transaction queue having a
dynamic priority scheme. In the absence of congestion, the transaction queue may
operate in accordance with a first priority scheme. For example, the transaction queue
may prioritize core read requests over prefetch requests and may prioritize prefetch
requests over write requests as is discussed above. When congestion events occur,
however, the transaction queue may engage a second priority scheme. For example, the
transaction queue may maintain core read requests as highest priority requests and
reprioritize write requests as the next-highest priority requests. The transaction queue
may invalidate prefetch requests that are stored in the transaction queue.
FIG. 3 is a block diagram of an external transaction queue 300 of an agent
according to an embodiment of the present invention. The external transaction queue
300 may include a controller 310 and a plurality of queue registers 320-1 through 320-N
(labeled 320 collectively). Each queue register may be populated by several fields
including an address field 330, a first status field 340 and a second status field 350.
The external transaction queue 300 may be appropriate for use in agents that
perform blind prefetches. The status fields 340, 350 each may store information about
a respective one of the external bus transactions that will be performed according to the
blind prefetch pair. The address field 330 may store a base address to which the
transactions will be directed. Typically there will be a predetermined relationship
between the address field 330 and the status fields 340, 350. For example, if an address
D is stored in the address field 330 of register 320-1, status field 340 may maintain status
information about a transaction directed to address D and status field 350 may maintain
status information about a second transaction directed to address D+1.
The status fields 340, 350 may store administrative information regarding the
respective transactions. Such information may include a request type, information
regarding the respective transaction's status on the external bus (i.e., whether it has been
posted, which transaction stage the request may be in, whether the transaction is
completed, etc.) and information regarding a destination of data that may be received
pursuant to the transaction. Typically, a transaction is cleared from a register 320 when
the status fields 340, 350 both indicate that their respective transactions have
completed.
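By way of a non-limiting software sketch, a queue register 320 and its de-allocation condition may be modeled as follows; the field names and status values are illustrative assumptions, not part of the disclosed embodiments.

```python
from dataclasses import dataclass

# Illustrative model of one queue register 320: a base address (field 330)
# and two status fields (340, 350) for the transaction pair to D and D+1.
@dataclass
class QueueRegister:
    base_addr: int     # field 330: base address D of the pair
    status_even: str   # field 340: "pending", "posted", or "completed"
    status_odd: str    # field 350: status of the transaction to D+1

    def may_deallocate(self):
        """A register is cleared once both transactions have completed."""
        return self.status_even == "completed" and self.status_odd == "completed"
```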
According to an embodiment of the present invention, the status fields 340, 350
each may include a sub-field that identifies whether the corresponding transaction is
generated pursuant to a core request ("C") or pursuant to a prefetch request ("P"). FIG.
3 illustrates an example where seven requests are core requests and the remainder are
prefetch requests. In this example, registers 320-1, 320-4, 320-5, 320-6, 320-8,
320-11 and 320-N store transactions that were initiated by core requests. One of the
status fields 340 or 350 of those registers identifies the transaction as originating
from a core request; the other status field indicates a blind prefetch request.
The other registers 320-2, 320-3, 320-7, 320-9 and 320-10 identify patterned
prefetch requests augmented by blind prefetches. Both of the status fields 340, 350
indicate that the requests are prefetch requests.
The controller 310 interfaces the external transaction queue 300 to other
elements within the agent (See, for example, FIG. 2). The controller 310 may cause
transactions to be entered or removed from the queue registers 320 and may write data
into the address field 330 and to the status fields 340, 350. The controller 310 also may
schedule an order for transactions to be posted on the external bus 170 (FIG. 1). In one
embodiment, the controller 310 may be a state machine.
According to an embodiment of the present invention, the controller 310 may
selectively disable prefetch requests during congestion events within the BSU 200. In
a first embodiment, when the transaction queue 300 experiences congestion, the
transaction queue may disable any blind prefetch transactions that have not been posted
on the external bus. This may be accomplished, for example, by marking the status
field of the blind prefetch transaction as completed even though the transaction was
never posted. In this embodiment, when the core read request is completed on the
external bus, the transaction may be evicted from the transaction queue 300.
In another embodiment, when the transaction queue experiences congestion, the
transaction queue 300 may evict any patterned prefetch request stored in the queue that
has not been posted on the external bus. The transaction queue 300 may evict non-
started prefetch requests simply by de-allocating the associated queue register.
In a further embodiment, when the transaction queue experiences congestion and
the transaction queue 300 stores patterned prefetch transactions that have been started,
the transaction queue 300 may disable any non-posted prefetch transaction in the
prefetch pair. Consider the patterned prefetch request illustrated in register 320-2 of
FIG. 3. As shown, the status field 350 indicates that the first prefetch transaction is
pending but has not been posted on the external bus. By contrast, the status field 340
indicates that the second prefetch transaction has been posted on the external bus. In
this embodiment, the transaction queue 300 may mark the first transaction as completed
in response to a congestion event. In this case, the second prefetch request would be
permitted to continue to completion. When it completes, the transaction queue 300
could de-allocate register 320-2 because both status fields 340, 350 identify completed
transactions.
FIG. 4 is a flow diagram of a method 1000 that may be performed by the
transaction queue 300 (FIG. 3) according to an embodiment of the present invention.
Upon a congestion event, the transaction queue may determine whether a new request
is input to the transaction queue (Step 1010). Upon receipt of a new request, the
transaction queue may determine whether a register is available for the new request
(Step 1020). If so, it stores the request in an available register (Step 1030). Storage of
requests may be performed according to conventional methods in the art. The
transaction queue 300 may determine a base address of the request and enter appropriate
information in the various fields 330-350 of the allocated register.
If at step 1020 there was no register available, then the transaction queue 300
may de-allocate a register associated with a pair of non-posted patterned prefetch
requests (Step 1040). In performance of this step, the transaction queue 300 may de-
allocate a patterned prefetch request for which both status fields 340, 350 indicate that
the respective transactions have not been posted to the external bus. If none of the
registers 320 identify a pair of prefetch requests that are not started, then the newly
received request may be stalled (step not shown). The request is prevented from
entering the transaction queue.
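Steps 1010 through 1040 may be sketched, purely by way of illustration, under an assumed software model in which each register is a dictionary holding the pair of transaction kinds and statuses it stores; the key names and return labels are assumptions for the example.

```python
# Illustrative sketch of Steps 1010-1040 of FIG. 4. Each register is modeled
# as a dict with assumed "kind" and "status" entries for its transaction pair.
def handle_new_request(registers, capacity, new_reg):
    if len(registers) < capacity:           # Step 1020: register available?
        registers.append(new_reg)           # Step 1030: store the request
        return "stored"
    for i, reg in enumerate(registers):     # Step 1040: find an evictable pair
        if (reg["kind"] == ("prefetch", "prefetch")
                and all(s == "pending" for s in reg["status"])):
            registers.pop(i)                # de-allocate the un-posted pair
            registers.append(new_reg)
            return "stored_after_eviction"
    return "stalled"                        # no evictable pair; request waits
```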
At the conclusion of step 1030 or if there was no received request at step 1010,
the transaction queue determines whether it is operating in a congested mode (Step
1050). If not, the transaction queue 300 may cease this iteration of the method 1000.
If the transaction queue 300 is operating in a congested mode, the transaction
queue determines whether it stores any pending blind prefetch transactions (Step 1060).
If so, the transaction queue 300 may disable one of the blind prefetch transactions (Step
1070). Step 1070 may apply to blind prefetches associated with a core request or a
patterned prefetch request. If not, or at the conclusion of Step 1070, the method may
conclude.
The method 1000 advantageously provides a measured response to congestion.
As a first response, the transaction queue invalidates blind prefetch requests from the
transaction queue. As discussed above, prefetch requests as a class are subordinated to
core requests. Experience also teaches that it is appropriate to subordinate blind
prefetches to patterned prefetches. Patterned prefetches are likely to be more efficient
than blind prefetches. Patterned prefetches are issued in response to an established
pattern of core reads from memory. Blind prefetches are not tied to any kind of
measurable indicia. Thus, patterned prefetches may be more likely to retrieve data that
the core eventually will request and should be retained in favor of blind prefetches.
When a blind prefetch is invalidated, it increases the rate at which registers 320
will be made available for use by newly received requests. As noted, blind prefetches
are associated with core read requests. Core read requests are the highest priority
request that is handled by the transaction queue - they are posted on the external bus at
the highest priority.
At a second level of priority, if the congestion continues even after all blind
prefetches have been invalidated, the transaction queue may invalidate pending
patterned prefetch requests that are associated with in-progress prefetch requests (Step
1080). Because one of the prefetch requests has already been posted to the external bus,
it is likely to conclude in a predetermined amount of time. However, even if it
concluded, the status of the second pending prefetch request (the one that is invalidated
in step 1080) would prevent the associated register from being de-allocated. Step 1080,
by marking the pending prefetch request as completed, ensures that a register will be
de-allocated when the posted prefetch request concludes.
At a third level of priority, the transaction queue de-allocates a register that
stores a pair of pending prefetch requests in favor of a newly received request. This
occurs only when there are no registers available to the newly received request.
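The first two response levels (Steps 1060-1080) may likewise be sketched under an assumed software model in which each register holds a pair of transaction kinds and statuses; in this simplified sketch only the companions of core reads are labeled "blind," and the returned labels are illustrative, not specification terms.

```python
# Illustrative sketch of the congested-mode sweep (Steps 1060-1080 of FIG. 4).
def congested_mode_sweep(registers):
    # First response (Step 1070): mark a pending blind prefetch as completed
    # even though it was never posted, so its register frees up when the
    # paired core read concludes.
    for reg in registers:
        for i, kind in enumerate(reg["kind"]):
            if kind == "blind" and reg["status"][i] == "pending":
                reg["status"][i] = "completed"
                return "disabled_blind_prefetch"
    # Second response (Step 1080): in a patterned prefetch pair with one
    # posted half, mark the still-pending half completed so the register
    # de-allocates when the posted half concludes.
    for reg in registers:
        if reg["kind"] == ("prefetch", "prefetch"):
            s = reg["status"]
            for i in (0, 1):
                if s[i] == "pending" and s[1 - i] == "posted":
                    s[i] = "completed"
                    return "disabled_paired_prefetch"
    return "nothing_to_disable"
```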
The principles of the present invention permit several different triggering events
to cause the transaction queue 300 to decide that it is operating in a congested mode.
In a first embodiment, the transaction queue 300 may determine that it is congested
based on a number of allocated or unallocated registers 320 in the queue. For example,
if the transaction queue determines that the registers are 90% or 100% full, it may
decide that it is operating in a congested mode.
In a second example, the transaction queue may determine that a congestion
event has occurred based on measured latency of the external bus. As is known, agents
typically operate according to a predetermined bus protocol. The bus protocol may
establish rules governing when new requests may be posted on the external bus and
which of possibly many agents may post a new request on the bus for each request
"slot," each opportunity to post a new request on the bus. In such an embodiment, the
transaction queue 300 may measure a number of request slots that pass before the
transaction queue 300 acquires ownership of the bus. If the measured number of slots
exceeds some predetermined threshold, the transaction queue 300 may determine that
a congestion event has occurred.
According to another embodiment, the transaction queue 300 may respond to
a congestion event differently depending upon a type of congestion that is detected.
Consider an example where the transaction queue can detect the two types of triggering
events described above: 1) that the number of available registers drops below some
threshold number (say, the transaction queue is entirely full), and 2) that measured
latency on the external bus exceeds a threshold amount. According to an embodiment,
the transaction queue 300 may invalidate all prefetch requests when the transaction
queue 300 is entirely full but it may invalidate only the blind prefetch requests when the
measured latency on the external bus exceeds the threshold. This embodiment may be
advantageous because it provides for a simple implementation and distinguishes
between congestion events of low and high severity.
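The differentiated response of this embodiment may be sketched as a simple decision function; the threshold parameters and return labels are illustrative assumptions for the example only.

```python
# Illustrative sketch of the differentiated congestion response: a full
# queue triggers the severe response, excess bus latency the mild one.
def congestion_response(occupied, capacity, slots_waited, latency_threshold):
    if occupied >= capacity:                 # high severity: queue entirely full
        return "invalidate_all_prefetches"
    if slots_waited > latency_threshold:     # low severity: bus latency excessive
        return "invalidate_blind_prefetches"
    return "no_congestion"
```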
The preceding discussion has distinguished among pending and posted requests.
Herein, a posted request is one that has begun on the external bus. Typically, an
external bus is defined by a predetermined bus protocol, one that specifies incremental
stages that a transaction undergoes toward completion. The congestion management
methods described in the foregoing embodiments do not disturb transactions that have
been posted. By contrast, a pending request is one that is stored within the BSU but has
not begun on the external bus. The congestion management methods of the present
invention may invalidate pending requests according to those techniques described in
the foregoing embodiments.
As shown above, embodiments of the present invention provide a transaction
queue 300 that may operate according to a dynamic priority scheme. A first priority
scheme may be defined for the transaction queue in the absence of congestion. But
when congestion is detected, the transaction queue may implement a second priority
scheme. In the embodiments described above, the transaction queue may invalidate
prefetch requests.
The congestion management techniques described in the foregoing embodiments
are directed to read requests that are processed by transaction management systems. As
is known, a BSU may process other types of requests, such as write requests, that are
not intended to cause data to be read into an agent. The congestion management
techniques described in the foregoing embodiments are not intended to disturb the
methods by which a transaction management system processes these other types of
requests.
Several embodiments of the present invention are specifically illustrated and
described herein. However, it will be appreciated that modifications and variations of
the present invention are covered by the above teachings and within the purview of the
appended claims without departing from the spirit and intended scope of the invention.
WE CLAIM :
1. A transaction queue for an agent that operates according to a dynamic
priority scheme, the transaction queue operating according to a default priority scheme
engaging a second priority scheme when a congestion event is detected.
2. In an agent, a management method for external transactions, comprising :
queuing data of a plurality of read requests,
for each queued request, storing data of a blind prefetch transaction
associated with the respective request,
when a transaction congestion event occurs, disabling selected stored
prefetch requests.
3. The management method as claimed in claim 2, wherein the transaction
congestion event occurs when a number of queued requests exceeds a
predetermined threshold.
4. The management method as claimed in claim 2, wherein the transaction
congestion event occurs when a queue that stores the queued request becomes
full.
5. The management method as claimed in claim 2, wherein the transaction
congestion event occurs when a measured latency of posted transactions
exceeds a predetermined threshold.
6. In an agent, a management method for external transactions, comprising :
queuing data of a plurality of external bus transactions,
for at least one queued transaction, storing data of a blind prefetch
transaction in association with the respective transaction,
when a transaction congestion event occurs, disabling the blind prefetch
transaction.
7. The management method as claimed in claim 6, wherein the transaction
congestion event occurs when a number of queued requests exceeds a
predetermined threshold.
8. The management method as claimed in claim 6, wherein the transaction
congestion event occurs when a queue that stores the queued request becomes
full.
9. The management method as claimed in claim 6, wherein the transaction
congestion event occurs when a measured latency of posted transactions
exceeds a predetermined threshold.
10. In an agent, a management method for external transactions, comprising :
queuing data of a plurality of read requests, certain read requests related
to executions being performed by an agent core, certain other read requests
related to data being prefetched,
when a transaction congestion event occurs, disabling the prefetch
requests.
11. The management method as claimed in claim 10, wherein the transaction
congestion event occurs when a number of queued requests exceeds a
predetermined threshold.
12. The management method as claimed in claim 10, wherein the transaction
congestion event occurs when a queue that stores the queued request becomes
full.
13. The management method as claimed in claim 10, wherein the transaction
congestion event occurs when a measured latency of posted transactions
exceeds a predetermined threshold.
14. In an agent, a multi-mode management method for external transactions,
comprising :
queuing data of a plurality of core read requests,
for each core read request, storing data of a blind prefetch transaction
associated with the respective core read request,
queuing data of prefetch requests related to patterns of core read
requests,
in a first mode, when a transaction congestion event occurs, disabling the
blind prefetch transactions,
in a second mode, when a transaction congestion event occurs, disabling
the prefetch requests.
15. The management method as claimed in claim 14, wherein the transaction
congestion event occurs when a number of queued requests exceeds a
predetermined threshold.
16. The management method as claimed in claim 14, wherein the transaction
congestion event occurs when a queue that stores the queued request becomes
full.
17. The management method as claimed in claim 14, wherein the transaction
congestion event occurs when a measured latency of posted transactions
exceeds a predetermined threshold.
18. A transaction queue comprising :
a controller,
a plurality of queue registers, each having an address field and status
fields associated with a pair of transactions related to the address,
wherein, in response to a congestion event, the controller modifies one of
the status fields in a register to invalidate the respective transaction.
19. The transaction queue as claimed in claim 18, wherein
the transaction queue stores core read requests, blind prefetch requests
and patterned prefetch requests, and
the invalidated transaction is a blind prefetch request.
20. The transaction queue as claimed in claim 19, wherein, when there are no
valid blind prefetch requests, the controller invalidates a patterned prefetch
request.
21. A bus sequencing unit of an agent, comprising :
an arbiter,
an internal cache coupled to the arbiter,
a transaction queue coupled to the arbiter and storing data of external
transactions to be performed by the agent, the transactions selected from the
group of core read requests, blind prefetch requests and patterned prefetch
requests, and
an external bus controller coupled to the transaction queue,
wherein, in response to a congestion event in the bus sequencing unit, the
transaction queue invalidates selected transactions.
22. The bus sequencing unit as claimed in claim 21, wherein the selected
transaction is a blind prefetch request.
23. The bus sequencing unit as claimed in claim 21, wherein when there are
no blind prefetch requests in the transaction queue, the selected transaction is a
patterned prefetch request.
24. A congestion management method for external transactions, the external
transactions stored in a transaction queue in pairs and taken from the set of a
core read request-blind prefetch request pair and a patterned prefetch request
pair, comprising :
receiving a new request at a transaction queue,
determining whether the transaction queue has space to store the new
request,
if not,
removing a pair of patterned prefetch requests from the transaction
queue and
storing the new request in the transaction queue.
25. The congestion management method as claimed in claim 24, comprising
invalidating a blind prefetch request.
26. The congestion management method as claimed in claim 24, comprising
invalidating a first patterned prefetch request when a second patterned prefetch
request in the pair has been posted.
27. The transaction queue as claimed in claim 1, wherein the default priority
scheme prioritizes core read requests over prefetch requests and the prefetch
requests over write requests, and
wherein the second priority scheme prioritizes the core read requests over
the write requests and the write requests over the prefetch requests.
28. A method comprising :
operating a transaction queue for an agent according to a default priority
scheme; and
operating the transaction queue according to a second priority scheme
after a congestion event is detected.
29. A management method comprising :
queuing data of a plurality of read requests in an agent,
for each queued request, storing data of a blind prefetch transaction
associated with the respective request,
if a transaction congestion event occurs, disabling selected stored
prefetch requests.
30. The management method as claimed in claim 29, wherein the transaction
congestion event occurs if a number of queued requests exceeds a
predetermined threshold.
31. The management method as claimed in claim 29, wherein the transaction
congestion event occurs if a queue that stores the queued request becomes full.
32. The management method as claimed in claim 29, wherein the transaction
congestion event occurs if a measured latency of posted transactions exceeds a
predetermined threshold.
33. A management method for external transactions, comprising :
queuing data of a plurality of external bus transactions in an agent,
for at least one queued transaction, storing data of a blind prefetch
transaction in association with the respective transaction, and
if a transaction congestion event occurs, disabling the blind prefetch
transaction.
34. The management method as claimed in claim 33, wherein the transaction
congestion event occurs if a number of queued requests exceeds a
predetermined threshold.
35. The management method as claimed in claim 33, wherein the transaction
congestion event occurs if a queue that stores the queued request becomes full.
36. The management method as claimed in claim 33, wherein the transaction
congestion event occurs if a measured latency of posted transactions exceeds a
predetermined threshold.
37. A management method for external transactions, comprising :
queuing data of a plurality of read requests in an agent, certain read
requests related to executions to be performed by a core of the agent, certain
other read requests related to data to be prefetched,
if a transaction congestion event occurs, disabling the prefetch requests.
38. The management method as claimed in claim 37, wherein the transaction
congestion event occurs if a number of queued requests exceeds a
predetermined threshold.
39. The management method as claimed in claim 37, wherein the transaction
congestion event occurs if a queue that stores the queued request becomes full.
40. The management method as claimed in claim 37, wherein the transaction
congestion event occurs if a measured latency of posted transactions exceeds a
predetermined threshold.
41. A multi-mode management method for external transactions, comprising :
queuing data of a plurality of core read requests in an agent,
for each core read request, storing data of a blind prefetch transaction
associated with the respective core read request,
queuing data of prefetch requests related to patterns of core read
requests,
in a first mode, if a transaction congestion event occurs, disabling the blind
prefetch transactions,
in a second mode, if a transaction congestion event occurs, disabling the
prefetch requests.
42. The management method as claimed in claim 41, wherein the transaction
congestion event occurs if a number of queued requests exceeds a
predetermined threshold.
43. The management method as claimed in claim 41, wherein the transaction
congestion event occurs if a queue that stores the queued request becomes full.
44. The management method as claimed in claim 41, wherein the transaction
congestion event occurs if a measured latency of posted transactions exceeds a
predetermined threshold.
45. A transaction queue of an agent, comprising :
a plurality of queue registers, each comprising an address field and status
fields for a pair of transactions related to the address, and
a controller to respond to a congestion event by modifying a status field in
a register to invalidate the respective transaction.
46. The transaction queue as claimed in claim 45, wherein :
the status field indicates a type of request including core read requests,
blind prefetch requests and patterned prefetch requests, and
the invalidated transaction is a blind prefetch request.
47. The transaction queue as claimed in claim 46, wherein, if there are no
valid blind prefetch requests in the transaction queue, the controller is to modify a
status field in a register to invalidate a patterned prefetch request.
48. A bus sequencing unit of an agent, comprising :
an arbiter,
an internal cache coupled to the arbiter,
a transaction queue, coupled to the arbiter, to store data of external
transactions to be performed by the agent, the transactions selected from the
group of core read requests, blind prefetch requests and patterned prefetch
requests, the transaction queue to invalidate a selected transaction in response
to a congestion event in the bus sequencing unit, and
an external bus controller coupled to the transaction queue.
49. The bus sequencing unit as claimed in claim 48, wherein the selected
transaction is a blind prefetch request.
50. The bus sequencing unit as claimed in claim 48, wherein, if there are no
blind prefetch requests in the transaction queue, the selected transaction is a
patterned prefetch request.
51. A congestion management method for external transactions, comprising :
receiving a new request at a transaction queue,
determining whether the transaction queue has space to store the new
request,
if not,
removing a pair of patterned prefetch requests from the transaction
queue, and
storing the new request in the transaction queue,
wherein the external transactions are stored in the transaction
queue in pairs and taken from a set of a core read request-blind prefetch request
pair and a patterned prefetch request pair.
52. The congestion management method as claimed in claim 51, comprising
invalidating a blind prefetch request.
53. The congestion management method as claimed in claim 51, comprising
invalidating a first patterned prefetch request if a second patterned prefetch
request in a pair has been posted.
54. The method as claimed in claim 28,
wherein the default priority scheme prioritizes core read requests over
prefetch requests and the prefetch requests over write requests, and
wherein the second priority scheme prioritizes the core read requests over
the write requests and the write requests over the prefetch requests.
55. A transaction queue for an agent, substantially as herein described,
particularly with reference to and as illustrated in the accompanying drawings.
56. A management method for external transactions, substantially as herein
described, particularly with reference to and as illustrated in the accompanying
drawings.
57. A bus sequencing unit of an agent, substantially as herein described,
particularly with reference to and as illustrated in the accompanying drawings.
58. A congestion management method, substantially as herein described,
particularly with reference to and as illustrated in the accompanying drawings.
Embodiments of the present invention provide a multi-mode transaction
queue 300 for a computer processing agent that provides a measured
response to congestion events. The transaction queue may initially operate
according to a default priority scheme. When a congestion event is detected,
the transaction queue may engage a second priority scheme to selectively
invalidate stored transactions in the queue that are pending, that is,
transactions that have not been posted to the external bus. In one
embodiment, the transaction queue may invalidate blind prefetch requests first.
The transaction queue may also invalidate non-posted prefetch requests that
are stored with an associated posted prefetch request. Finally, in an extreme
congestion case, as when there is no available room for new requests from the
processing agent, the transaction queue may invalidate a pair of non-posted
patterned prefetch requests.
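The escalating invalidation policy described in the abstract above can be sketched in software as follows. This is an illustrative model only, not the claimed hardware implementation; all class and field names (`Txn`, `Register`, `TransactionQueue`, and so on) are hypothetical, and the fixed-capacity list stands in for the queue registers of the claims.

```python
# Illustrative model of the multi-mode congestion response: each queue
# register holds an address and a pair of transactions, either a core
# read with its blind prefetch or a pair of patterned prefetches.
from dataclasses import dataclass

@dataclass
class Txn:
    kind: str          # "core", "blind", or "patterned"
    valid: bool = True
    posted: bool = False  # True once posted to the external bus

@dataclass
class Register:
    address: int
    pair: list         # exactly two Txn entries

class TransactionQueue:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.registers: list[Register] = []

    def congested(self) -> bool:
        # Simplest congestion event: the queue is full.
        return len(self.registers) >= self.capacity

    def invalidate_for_congestion(self) -> bool:
        # First mode: invalidate a non-posted blind prefetch.
        for reg in self.registers:
            for txn in reg.pair:
                if txn.kind == "blind" and txn.valid and not txn.posted:
                    txn.valid = False
                    return True
        # Second mode: invalidate a non-posted patterned prefetch whose
        # partner in the pair has already been posted.
        for reg in self.registers:
            a, b = reg.pair
            if a.kind == b.kind == "patterned":
                if a.posted and b.valid and not b.posted:
                    b.valid = False
                    return True
                if b.posted and a.valid and not a.posted:
                    a.valid = False
                    return True
        return False

    def accept(self, reg: Register) -> bool:
        # Extreme case: when there is no room for a new request,
        # evict an entire non-posted patterned-prefetch pair.
        if self.congested():
            for r in self.registers:
                if all(t.kind == "patterned" and not t.posted
                       for t in r.pair):
                    self.registers.remove(r)
                    break
        if self.congested():
            return False
        self.registers.append(reg)
        return True
```

Under these assumptions, blind prefetches (speculative by definition) are sacrificed first, posted transactions are never disturbed, and a whole patterned pair is evicted only when a new core request would otherwise be refused.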
Patent Number 222748
Indian Patent Application Number IN/PCT/2002/00767/KOL
PG Journal Number 34/2008
Publication Date 22-Aug-2008
Grant Date 21-Aug-2008
Date of Filing 10-Jun-2002
Name of Patentee INTEL CORPORATION
Applicant Address 2200 MISSION COLLEGE BOULEVARD, SANTA CLARA, CALIFORNIA 95052-8119 U.S.A
Inventors:
# Inventor's Name Inventor's Address
1 HILL DAVID L 37000 S.W. GODDARD ROAD CORNELIUS, OREGON 97113 U.S.A
2 BACHAND DEREK T 821 N.W. 11TH AVENUE # 411 PORTLAND, OREGON 97209 U.S.A
3 PRUDVI CHINNA B 17924 NW DEERFIELD DRIVE PORTLAND,OREGON 97229
4 MARR DEBORAH T 2564 NW PETTYGROVE STREET PORTLAND, OREGON 97210
PCT International Classification Number G06F 13/368,13/16
PCT International Application Number PCT/US00/32154
PCT International Filing date 2000-11-28
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 NA