US20060020769A1 - Allocating resources to partitions in a partitionable computer - Google Patents

Allocating resources to partitions in a partitionable computer

Info

Publication number
US20060020769A1
US20060020769A1 (application US10/898,590)
Authority
US
United States
Prior art keywords
partition
partitions
identification value
address
identifying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US10/898,590
Other versions
US7606995B2
Inventor
Russ Herrell
Gerald Kaufman
John Morrison
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Enterprise Development LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP
Priority to US10/898,590
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. (assignors: KAUFMAN, GERALD J., JR.; MORRISON, JOHN A.; HERRELL, RUSS)
Priority to JP2005202511A
Priority to CN200510087531.XA
Publication of US20060020769A1
Priority to US12/510,184
Application granted
Publication of US7606995B2
Assigned to HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP (assignor: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.)
Legal status: Expired - Fee Related
Adjusted expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources

Definitions

  • the present invention relates to partitionable computers and, more particularly, to techniques for allocating resources to partitions in partitionable computers.
  • servers of various kinds, such as database servers, web servers, email servers, and file servers, have proliferated within enterprises in recent years.
  • a single enterprise may own or otherwise employ the services of large numbers of each of these kinds of servers.
  • the cost of purchasing (or leasing) and maintaining such servers can be substantial. It would be advantageous, therefore, to reduce the number of servers that must be used by an enterprise without decreasing system performance.
  • a consolidation server typically is a powerful computer system having significant computing resources (such as multiple processors and large amounts of memory).
  • the consolidation server may be logically subdivided into multiple “partitions,” each of which is allocated a portion of the server's resources.
  • a multi-partition consolidation server is an example of a “partitionable computer.” Each partition may execute its own operating system and software applications, and otherwise act similarly to an independent physical computer.
  • as partitionable computers become more powerful, the trend is for them to include a greater and greater number of processors.
  • a single partitionable computer typically includes several (e.g., 4) “cell boards,” each of which includes several (e.g., 2, 4, 8, or 16) processors.
  • the cell boards are interconnected through a switching-fabric and collectively provide an effective processing power that approaches the aggregate processing power of the individual processors they contain.
  • Each successive generation of cell boards tends to include a greater number of processors than the previous generation.
  • a “multi-core” processor may include one or more processor cores on a single chip.
  • a multi-core processor behaves as if it were multiple processors.
  • Each of the multiple processor cores may essentially operate independently, while sharing certain common resources, such as a cache. Multi-core processors therefore provide additional opportunities for increased processing efficiency.
  • FIG. 1 is a block diagram of a multiprocessor computer system according to one embodiment of the present invention
  • FIG. 2 is a block diagram of one of the CPUs of the computer system of FIG. 1 according to one embodiment of the present invention
  • FIG. 3 is a flowchart of a method that is performed by a bit substitution circuit of FIG. 2 according to one embodiment of the present invention
  • FIG. 4A is a flowchart of a method that is performed by the cache of FIG. 2 according to one embodiment of the present invention
  • FIG. 4B is a flowchart of a method that is performed by the address mapper of FIG. 2 according to one embodiment of the present invention
  • FIG. 5 is a diagram of a mapping between processor cores and hardware partitions in a partitionable computer system according to one embodiment of the present invention
  • FIGS. 6A-6B illustrate an I/O controller according to one embodiment of the present invention
  • FIG. 7 is a diagram of a mapping between I/O ports and partitions in a partitionable computer system according to one embodiment of the present invention.
  • FIG. 8 is a flowchart of a method performed by the destination decoder of FIGS. 6A-6B to decode a physical address in an incoming transaction according to one embodiment of the present invention
  • FIG. 9 is a flowchart of a method that is performed by the bit substitution circuit of FIGS. 6A-6B according to one embodiment of the present invention.
  • FIG. 10 is a flowchart of a method that is performed by the cache of FIGS. 6A-6B according to one embodiment of the present invention.
  • FIG. 11 is a flowchart of a method that is performed by the address mapper of FIGS. 6A-6B according to one embodiment of the present invention.
  • FIG. 12A is a diagram of a partition-identifying address according to one embodiment of the present invention.
  • FIG. 12B is a diagram of a partition-identifying address according to another embodiment of the present invention.
  • the computer system 100 includes a plurality of cell boards 102 a - d interconnected using a switching fabric 116 , also referred to as a “system fabric” or simply a “fabric.” Each of the cell boards 102 a - d includes a plurality of CPUs, a system bus, and main memory.
  • the cell board 102 a is shown in more detail in FIG. 1 and will now be described.
  • the other cell boards 102 b - d may include components and a structure that are the same as or similar to that of cell board 102 a .
  • the cell board 102 a includes a plurality of CPUs 104 a - n , where n is a number such as 2, 4, 8, or 16.
  • the CPUs 104 a - n include on-board caches 106 a - n , respectively.
  • the cell board 102 a also includes a system bus 108 , main memory 112 a , and memory controller 110 a .
  • the CPUs 104 a - n are coupled directly to the system bus 108 , while main memory 112 a is coupled to the system bus 108 through memory controller 110 a .
  • CPUs 104 a - n may communicate with each other over the system bus 108 and may access the memory 112 a over the system bus 108 through the memory controller 110 a , as is well-known to those of ordinary skill in the art.
  • cell boards 102 a - d include their own local system memories 112 a - d coupled to corresponding memory controllers 110 a - d
  • the memories 112 a - d may be addressed by the CPUs in the cell boards 102 a - d using a single combined physical address space.
  • the fabric 116 provides a mechanism for communication among the cell boards 102 a - d to perform such shared memory access and other inter-cell board communication.
  • the fabric 116 may, for example, include one or more crossbar switches.
  • a crossbar switch is a device that has a number of input/output ports to which devices may be connected.
  • a pair of devices connected to a pair of input/output ports of a crossbar switch may communicate with each other over a path formed within the switch connecting the pair of input/output ports.
  • the paths set up between devices can be fixed for some duration or changed when desired. Multiple paths may be active simultaneously within the crossbar switch, thereby allowing multiple pairs of devices to communicate with each other through the crossbar switch simultaneously and without interfering with each other.
  • the fabric 116 may be implemented using components other than crossbar switches.
  • the fabric 116 may be implemented using one or more buses.
  • Cell board 102 a also includes a fabric agent chip 114 a that is coupled to the fabric 116 and which acts as an interface between the cell board 102 a and the other cell boards 102 b - d in the system 100 .
  • the other cell boards 102 b - d similarly include their own fabric agent chips 114 b - d , respectively.
  • although the fabric agent chips 114 a - d are illustrated as distinct components in FIG. 1 , they may be considered to be part of the system fabric 116 .
  • the local memories 112 a - d in the cell boards 102 a - d may be accessed using a single physical address space.
  • this is made possible by the fabric agent chips 114 a - d
  • For example, consider a case in which CPU 104 a issues a memory access request to memory controller 110 a that addresses a memory location (or range of memory locations) in the shared physical address space. If the memory controller 110 a cannot satisfy the memory access request from the local memory 112 a , the memory controller 110 a forwards the request to the fabric agent chip 114 a .
  • the fabric agent chip 114 a translates the physical address in the request into a new memory address (referred to as a “fabric address”) that specifies the location of the requested memory, and transmits a new memory access request using the new fabric address to the fabric 116 .
  • the fabric 116 forwards the memory access request to the fabric agent chip in the appropriate cell board.
  • the requested memory access is performed using the local memory of the receiving cell board, if possible, and the results are transmitted back over the fabric 116 to the fabric agent chip 114 a and back through the memory controller 110 a to the CPU 104 a .
  • the CPUs in cell boards 102 a - d may thereby access the main memory in any of the other cell boards 102 a - d over the fabric 116 using the fabric agent chips 114 a - d in the cell boards 102 a - d .
  • One goal of such a system is to make the implementation of memory access transparent to the CPUs 104 a - n , in the sense that the CPUs 104 a - n may transmit and receive responses to memory access requests in the same way regardless of whether such requests are satisfied from onboard memory or offboard memory.
  • techniques are provided for allocating multiple physical resources on a single chip to a plurality of partitions in a partitionable computer system.
  • a partition identification value (identifying the partition to which the resource is allocated) is stored in the physical address to create a partition-identifying address.
  • the transaction including the partition-identifying address, is transmitted over the fabric 116 and thereby routed to the appropriate destination.
  • referring to FIG. 2 , a functional block diagram is shown of the CPU 104 a according to one embodiment of the present invention.
  • the CPU 104 a is a multi-core processor.
  • the CPU 104 a includes a plurality of processor cores 204 a - n on a single chip, where n may be any number, such as 2, 4, 8, or 16.
  • the cores 204 a - n may, for example, be conventional processor cores such as those found in conventional multi-core processors.
  • all of the cores 204 a - n share a single cache 208 .
  • the cores 204 a - n need not, however, share a single cache. Rather, for example, each core may have its own cache, or groups of cores may share different caches.
  • in conventional systems, all of the cores in a multi-core processor are required to be allocated to a single partition.
  • in such a system, the cores 204 a - n would communicate directly with the cache 208 .
  • the core 204 a would transmit a memory write request, including the address of the memory location to be written, directly to the cache 208 , which would satisfy the request locally if possible or by performing an off-board write to main memory otherwise.
  • the multi-core processor 104 a illustrated in FIG. 2 enables the cores 204 a - n to be allocated to a plurality of partitions.
  • referring to FIG. 5 , a diagram is shown of a mapping 502 between processor cores 506 a - h and partitions 504 a - d in the partitionable computer system 100 according to one embodiment of the present invention.
  • core 506 a represents core 204 a
  • each of the partitions 504 a - d is not itself a physical component of the computer system 100 . Rather, each of the partitions 504 a - d is a logical construct that is defined by the resources (e.g., processor cores) that are allocated to it. The resources allocated to a particular partition may change over time.
  • core 506 b is allocated to partition 504 a (indicated by mapping 502 b ), cores 506 a and 506 d are allocated to partition 504 b (indicated by mappings 502 a and 502 d , respectively), cores 506 c , 506 e , and 506 f are allocated to partition 504 c (indicated by mappings 502 c , 502 e , and 502 f , respectively), and cores 506 g - h are allocated to partition 504 d (indicated by mappings 502 g - h , respectively).
  • mapping 502 illustrated in FIG. 5 is shown merely for purposes of example and does not constitute a limitation of the present invention. There may be any number of partitions, and cores may be allocated to partitions in any arrangement.
  • the CPU 104 a includes a plurality of partition ID registers 210 a - n associated with the plurality of cores 204 a - n respectively.
  • partition ID register 210 a is associated with core 204 a and stores a value that represents mapping 502 a ( FIG. 5 ).
  • partition ID register 210 n is associated with core 204 n and stores a value that represents mapping 502 h .
  • Each of the partition ID registers 210 a - n includes at least enough bits to represent the number of partitions in the computer system 100 .
  • if there are P partitions in the computer system 100 , each of the partition ID registers 210 a - n includes at least ⌈log₂ P⌉ bits. For example, if there are four partitions (as in the example illustrated in FIG. 5 ), each of the partition ID registers 210 a - n includes at least 2 (log₂ 4) bits.
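  • as a concrete illustration of this width requirement, the C sketch below (ours, not the patent's) computes ⌈log₂ P⌉ for a given partition count:

```c
/* Minimum partition ID register width for P partitions,
 * i.e. ceil(log2(P)). Illustrative helper only; the patent
 * states the bound but not an implementation. */
static unsigned partition_id_bits(unsigned num_partitions)
{
    unsigned bits = 0;
    while ((1u << bits) < num_partitions)
        bits++;
    return bits;
}

/* partition_id_bits(4) == 2, matching the FIG. 5 example. */
```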
  • Each of the partition ID registers 210 a - n stores a unique partition ID value that uniquely identifies the partition to which the corresponding one of the cores 204 a - n is allocated. For example, let PIR i be the partition ID register at index i, and let C i be the corresponding processor core at index i, where i ranges from 0 to n − 1. If core C i is allocated to partition j, then the value j may be stored as the partition ID value in partition ID register PIR i . In this way, a unique value identifies each of the partitions in the system 100 .
  • the values stored in the partition ID registers 210 a - n may, for example, be set by configuration software executing in the computer system 100 .
  • the value 1 (binary 01) may be stored in partition ID register 210 a , thereby indicating that core 204 a (represented by core 506 a in FIG. 5 ) is allocated to partition 1 ( 504 b ).
  • the value 3 (binary 11) may be stored in partition ID register 210 n , thereby indicating that core 204 n (represented by core 506 h in FIG. 5 ) is allocated to partition 3 ( 504 d ).
  • the CPU 104 a may be configured so that the partition ID values stored in the partition ID registers 210 a - n cannot be changed by the operating system executing on the computer system 100 . This fixedness of the partition ID values may be enforced, for example, by any of a variety of hardware security mechanisms, or simply by agreement between the configuration software and the operating system.
  • the main memory 112 a - d of the computer system 100 is allocated among the partitions 504 a - d , so that each partition is allocated a portion of the main memory 112 a - d .
  • the main memory 112 a - d may be allocated to the partitions 504 a - d in blocks of any size.
  • the main memory 112 a - d may be allocated to partitions 504 a - d on a per-address, per-page, or per-controller basis.
  • a core that transmits a memory access request need not specify the partition to which the requested memory addresses are allocated. Rather, the core need only specify the requested memory address using a memory address (referred to as a “physical address”) within an address space (referred to as a “physical address space”) associated with the partition to which the core is allocated.
  • a memory address referred to as a “physical address”
  • a physical address space referred to as a “physical address space”
  • the main memory 112 a - d is logically divided into a plurality of physical address spaces. Each of the physical address spaces typically is zero-based, which means that the addresses in each physical address space typically are numbered beginning with address zero.
  • the CPU 104 a includes bit substitution circuits 212 a - n , which are coupled between cores 204 a - n and partition ID registers 210 a - n , respectively.
  • to appreciate the function performed by the bit substitution circuits 212 a - n , consider a case in which core 204 a transmits a write command 230 a on lines 214 a to bit substitution circuit 212 a .
  • the write command 230 a includes a physical address of the memory location to be written and a value to write into that location.
  • the physical address is illustrated in FIG. 2 as “a[54:0]” to indicate that bits 0 - 54 of the address contain useful (address-identifying) information.
  • the term “system address space” refers herein to an address space that contains unique addresses for each memory location in the entire main memory 112 a - d .
  • assume, for example, that the system address space is 4 GB (0x100000000) and that there are four equally-sized (1 GB) partitions.
  • the physical memory space of each of the partitions in such a case would have an address range of 0-1 GB (0x00000000-0x40000000).
  • the first partition might be allocated (mapped) to the first gigabyte of the system address space, the second partition might be allocated to the second gigabyte of the system address space, and so on.
  • assume that the physical address in the write command 230 a transmitted on lines 214 a is a 64-bit value but that only the 55 least significant bits are needed to fully address the physical address space allocated to a single partition. In such a case, the 9 uppermost address bits are not needed to specify physical addresses.
  • the operating system executing in each partition is informed of the size of the physical address space that is allocated to it. As a result, a well-behaved operating system will not generate addresses that use more bits than necessary (e.g., 55) to address its allocated memory partition.
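  • to make the 4 GB example above concrete, the sketch below (illustrative only; the shift amount follows from the 1 GB partition size, and the function name is ours) maps a partition's zero-based physical address into the system address space:

```c
#include <stdint.h>

/* Illustrative mapping for the example above: a 4 GB system
 * address space divided into four 1 GB (1 << 30) partitions,
 * each with a zero-based 0x00000000-0x40000000 physical range. */
static uint64_t to_system_address(unsigned partition_id,
                                  uint64_t physical_address)
{
    return ((uint64_t)partition_id << 30) | physical_address;
}

/* Example: partition 2, physical address 0x1000
 * -> system address 0x80001000. */
```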
  • a flowchart is shown of a method 300 that is performed by the bit substitution circuit 212 a according to one embodiment of the present invention when write command 230 a is transmitted by core 204 a on lines 214 a .
  • the bit substitution circuit 212 a receives the write command 230 a (or other memory access request, such as a read command) (step 302 ).
  • the bit substitution circuit 212 a reads the partition ID value from the partition ID register 210 a on lines 216 a (step 304 ).
  • the bit substitution circuit 212 a writes the partition ID value into the physical address, thereby producing a partition-identifying address that includes both the original physical address and the partition ID value (step 306 ).
  • referring to FIG. 12A , a diagram is shown of an example of a partition-identifying address 1200 produced in step 306 according to one embodiment of the present invention.
  • the example partition-identifying address 1200 illustrated in FIG. 12A is 64 bits wide.
  • Portion 1202 (bits 0 - 52 ) of partition-identifying address 1200 contains bits 0 - 52 of the physical address contained in the original write command 230 a .
  • bit substitution circuit 212 a writes the partition ID value read from partition ID register 210 a into portion 1204 (bits 53 - 54 ) of the partition-identifying address 1200 (step 306 ), thereby overwriting the original values stored in portion 1204 .
  • Portion 1208 which includes both portions 1202 and 1204 , therefore unambiguously identifies the system memory address indicated by the original write command 230 a .
  • Portion 1206 (bits 55 - 63 ) of the partition-identifying address 1200 is unused.
  • Portion 1208 therefore represents the “used portion” of address 1200 because the combination of the partition ID portion 1204 and the physical address portion 1202 is used to specify a unique address in the system 100 .
  • bit substitution circuit 212 a may further be configured to overwrite portion 1206 with zeros or some other value. The bit substitution circuit 212 a may thereby prevent the operating system from accessing addresses outside of its partition and thereby enforce inter-partition security.
  • partition-identifying address 1200 in FIG. 12A is shown merely for purposes of example and does not constitute a limitation of the present invention. Rather, partition-identifying addresses of any size and having any layout may be used in conjunction with embodiments of the present invention.
  • the layout of partition-identifying addresses may vary from partition to partition. For example, one partition may be allocated twice as much address space as another, in which case addresses in the larger partition will include one less bit of partition ID (portion 1204 ) and one more bit of physical address (portion 1202 ) than addresses in the smaller partition.
  • the bit substitution circuits 212 a - n therefore, may be individually programmable to insert partition IDs of varying sizes into the addresses generated by the cores 204 a - n.
  • the bit substitution circuit 212 a generates a first modified write command 232 a (or other memory access request) containing the partition-identifying address generated in step 306 (step 308 ).
  • the bit substitution circuit 212 a transmits the first modified write command 232 a (or other memory access request) to the cache 208 on lines 218 a (step 310 ).
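  • a minimal C sketch of the bit substitution performed by method 300 for the FIG. 12A layout (bits 0 - 52 physical address, bits 53 - 54 partition ID, bits 55 - 63 forced to zero); the function name and masks are ours, derived from that layout:

```c
#include <stdint.h>

#define PHYS_MASK  ((UINT64_C(1) << 53) - 1)  /* bits 0-52: physical address */
#define PID_SHIFT  53                         /* bits 53-54: partition ID    */

/* Overwrite the upper address bits with the partition ID
 * (step 306) and force bits 55-63 to zero, which also enforces
 * the inter-partition security described above. */
static uint64_t make_partition_identifying_address(uint64_t phys,
                                                   uint64_t partition_id)
{
    return (phys & PHYS_MASK) | ((partition_id & 0x3) << PID_SHIFT);
}
```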
  • CPU 104 a includes extended cores 206 a - n .
  • Extended core 206 a includes core 204 a , partition ID register 210 a , and bit substitution circuit 212 a
  • extended core 206 n includes core 204 n , partition ID register 210 n , and bit substitution circuit 212 n .
  • although core 204 a , bit substitution circuit 212 a , and partition ID register 210 a are illustrated as distinct components in FIG. 2 , the functions performed by the bit substitution circuit 212 a and/or partition ID register 210 a may be integrated into the core 204 a , so that the core 204 a may communicate directly with the cache 208 .
  • the cache 208 receives the first modified write command 232 a from the bit substitution circuit 212 a (step 402 ).
  • the cache 208 determines whether the write request can be satisfied locally, i.e., whether there is a cache hit in cache lines 234 based on the partition-identifying address contained in the first modified write command 232 a (step 404 ).
  • the cache 208 determines whether the value of the memory location addressed by the partition-identifying address contained in the first modified write command 232 a is stored in cache lines 234 .
  • the cache 208 may perform step 404 by using the partition-identifying address contained in the first modified write command 232 a as an index and tag and then using any of a variety of well-known techniques to determine whether there is a cache hit based on that index and tag.
  • the address bits in which the partition ID value is stored may occupy either the index or tag field of the cache 208 . If the partition ID value is stored in the index field of the cache 208 , then the partitions 504 a - d are allocated fixed and distinct (non-overlapping) portions of the cache 208 . If, however, the partition ID value is stored in the tag field of the cache 208 , then the entire cache 208 is shared by the partitions 504 a - d , and the particular cache locations used by any partition are dynamic and depend on the workload of the cores 204 a - n at any particular point in time.
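  • the two placements can be illustrated with a small C sketch (the cache geometry and field widths are assumptions, not taken from the patent): folding the partition ID bits into the set index statically slices the cache, while leaving them in the tag lets partitions share all sets:

```c
#include <stdint.h>

#define LINE_BITS  6    /* assumed 64-byte cache lines */
#define SET_BITS   12   /* assumed 4096 sets           */

/* Partition ID kept in the tag: every partition may use any
 * set; the tag comparison keeps identical physical addresses
 * in different partitions distinct. */
static uint64_t set_index_shared(uint64_t pia)
{
    return (pia >> LINE_BITS) & ((1u << SET_BITS) - 1);
}

/* Partition ID folded into the index: the two ID bits select
 * a fixed quarter of the sets, giving each partition a
 * distinct, non-overlapping slice of the cache. */
static uint64_t set_index_partitioned(uint64_t pia)
{
    uint64_t pid = (pia >> 53) & 0x3;
    return (pid << (SET_BITS - 2)) |
           ((pia >> LINE_BITS) & ((1u << (SET_BITS - 2)) - 1));
}
```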
  • the cache 208 performs the write locally (i.e., within the cache lines 234 ) (step 406 ) and the method 400 terminates.
  • the cache 208 may transmit an acknowledgment to the core 204 a on lines 224 a . If the core 204 a transmits a read command to the cache 208 , the cache 208 may transmit the read values to the core 204 a on lines 224 a.
  • if there is a cache miss, the cache 208 transmits a second modified write command 236 to an address mapper 222 (step 408 ).
  • the second modified write command 236 contains: (1) a source terminus ID (e.g., the terminus ID of the memory controller 110 a that services the CPU 104 a ), labeled “S” in FIG. 2 ; (2) a transaction ID (a unique transaction identifier), labeled “I” in FIG. 2 ; (3) a request type (e.g., memory read or write), labeled “R” in FIG. 2 ; and (4) the partition-identifying address 1200 extracted from the first modified write command 232 a , labeled “p 1 , a[n:0]” in FIG. 2 .
  • core 204 n may generate a write command 230 n on lines 214 n , in response to which bit substitution circuit 212 n may read the value of partition ID register 210 n on lines 216 n .
  • the bit substitution circuit 212 n may transmit a first modified write command 232 n on lines 218 n , which may be processed by the cache 208 in the manner described above.
  • the cache 208 may communicate with the core 204 n directly over lines 224 n.
  • the partition-identifying address contained in the second modified write command 236 is translated into a system address.
  • referring to FIG. 4B , a flowchart is shown of a method 420 that is performed in one embodiment of the invention to perform such a translation.
  • the method 420 may, for example, be performed after step 408 of method 400 ( FIG. 4A ).
  • the CPU 104 a includes an address mapper 222 , which is coupled to the cache 208 over lines 220 and which therefore receives the second modified write command 236 (step 422 ).
  • the address mapper 222 maps the partition-identifying address 1200 contained in the second modified write command 236 to: (1) a destination terminus ID (e.g., a terminus ID of the memory controller that controls access to the requested memory addresses), and (2) a transaction type (step 424 ).
  • the transaction type serves a purpose similar to that of the original request type (e.g., memory read or write), except that the transaction type is used for transactions over the fabric 116 .
  • Techniques for translating request types into transaction types are well-known to those of ordinary skill in the art.
  • each of the CPUs in the system 100 (e.g., CPUs 104 a - n ) and each of the memory controllers 110 a - d in the system 100 has a unique terminus identifier (terminus ID).
  • a particular physical address in a particular partition may be uniquely addressed by a combination of the physical address, the partition ID of the partition, and the terminus ID of the memory controller that controls the memory in which that physical address is stored.
  • the address transmitted over the fabric 116 is a partition-identifying address (i.e., an address which includes both a physical address and a partition ID)
  • the target memory controller may distinguish among the same physical address in different partitions. In the embodiment illustrated in FIG. 2 , therefore, a single memory controller may control memory allocated to any number of partitions.
  • the address mapper 222 may, for example, maintain an address mapping 238 that maps partition-identifying addresses to destination terminus IDs and transaction types.
  • the address mapper 222 may use the mapping 238 (which may, for example, be implemented as a lookup table) to perform the translation in step 424 .
  • the address mapping 238 need not contain an entry for every partition-identifying address. Rather, the address mapping 238 may, for example, map ranges of partition-identifying addresses (identified by their most significant bits) to pages of memory or to memory controllers.
  • the address mapper 222 may ensure that a processor core allocated to one partition cannot access memory locations in another partition by mapping such requests to a null entry, thereby causing the address mapper 222 to generate a mapping fault.
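  • a hedged sketch of such a range-based lookup (the table layout and names are ours; the patent requires only that address ranges map to terminus IDs and that unmatched or null entries produce a mapping fault):

```c
#include <stdint.h>

#define NO_TERMINUS 0xFFFF  /* null entry: access faults */

struct map_entry {
    uint64_t base, limit;    /* partition-identifying address range */
    uint16_t dest_terminus;  /* memory controller owning the range  */
};

/* Stand-in for mapping 238. Returns 0 and sets *dest on a hit;
 * returns -1 (a mapping fault) for unmatched or null entries,
 * which is how cross-partition accesses are rejected. */
static int lookup_destination(const struct map_entry *map, int n,
                              uint64_t pia, uint16_t *dest)
{
    for (int i = 0; i < n; i++) {
        if (pia >= map[i].base && pia < map[i].limit) {
            if (map[i].dest_terminus == NO_TERMINUS)
                return -1;
            *dest = map[i].dest_terminus;
            return 0;
        }
    }
    return -1;
}
```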
  • the address mapper 222 generates and transmits a third modified write command 240 to the system fabric 116 (step 426 ).
  • the third modified write command 240 includes: (1) the source terminus ID (S), transaction ID (I), request type (R), and partition-identifying address (p 1 , a[n:0]) from the second modified write command 236 ; and (2) the destination terminus ID (D) and transaction type (T) identified in step 424 .
  • the system fabric 116 includes a router 228 that uses techniques that are well-known to those of ordinary skill in the art to transmit the third modified write command 240 to the memory controller having the specified destination terminus ID.
  • the router 228 may, for example, maintain a mapping 244 that maps pairs of input ports and destination terminus IDs to output ports.
  • when the router 228 receives the third modified write command 240 on a particular input port, the router uses the identity of the input port and the destination terminus ID contained in the third modified write command 240 to identify the output port that is coupled to the memory controller that controls access to the requested memory address(es). The router 228 transmits the third modified write command 240 (or a variation thereof) to the identified memory controller on lines 242 . The third modified write command 240 may then be satisfied by the destination memory controller using techniques that are well-known to those of ordinary skill in the art.
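  • the routing decision itself reduces to a table lookup keyed on the arrival port and the destination terminus ID; a minimal sketch (dimensions and names are illustrative, not the patent's):

```c
#include <stdint.h>

#define NUM_PORTS    16  /* assumed fabric port count */
#define NUM_TERMINI  64  /* assumed terminus ID space */

/* Stand-in for mapping 244: output port selected by the pair
 * (input port, destination terminus ID), as described above.
 * Entries would be populated by configuration software. */
static uint8_t route_table[NUM_PORTS][NUM_TERMINI];

static uint8_t route_output_port(unsigned input_port,
                                 unsigned dest_terminus)
{
    /* Assumes input_port < NUM_PORTS and dest_terminus < NUM_TERMINI. */
    return route_table[input_port][dest_terminus];
}
```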
  • the router 228 may route the transaction to the cache on lines 226 using techniques that are well-known to those of ordinary skill in the art.
  • the incoming transaction may then be processed by the cache 208 and, if necessary, by one or more of the cores 206 a - n , using conventional techniques.
  • examples of techniques will now be described in which multiple I/O ports of a single I/O controller are allocated to a plurality of partitions.
  • referring to FIGS. 6A-6B , a functional block diagram is shown of an I/O controller 602 according to one embodiment of the present invention.
  • the I/O controller 602 serves two I/O devices 604 a - b coupled to the I/O controller 602 through I/O ports 628 a - b , respectively.
  • referring to FIG. 7 , a diagram is shown of a mapping 702 between I/O ports 628 a - b and hardware partitions 704 a - d in the partitionable computer system 100 according to one embodiment of the present invention.
  • the mapping 702 includes mappings 702 a - b between I/O ports 628 a - b and partitions 704 a - b , respectively. Note that there are two partitions 704 c - d to which neither of the I/O ports 628 a - b is mapped. Other I/O ports in other I/O controllers (not shown), however, may be mapped to partitions 704 c - d . Although in the particular example illustrated in FIG. 7 there are two I/O ports 628 a - b allocated to two partitions 704 a - b , there may be any number of I/O ports and any number of partitions mapped to each other in any arrangement.
  • the I/O controller 602 includes a destination decoder 608 , which verifies that incoming transactions (on lines 610 ) are addressed to one of the I/O devices 604 a - b controlled by the I/O controller 602 . If an incoming transaction is not addressed to one of the I/O devices 604 a - b , the destination decoder 608 does not transmit the transaction further within the I/O controller 602 .
  • the destination decoder 608 receives the incoming transaction 612 (step 802 ).
  • the transaction 612 includes (1) a source terminus identifier (e.g., the terminus ID of the device that originated the transaction 612 ), represented as “S” in FIG. 6A ; (2) the physical address to access, represented as “a” in FIG. 6A ; (3) the transaction type (e.g., read or write), represented as “T” in FIG. 6A ; and (4) data associated with the transaction (e.g., data to write if the transaction 612 is a write command), represented as “d” in FIG. 6A .
  • the destination decoder 608 examines the source terminus ID in transaction 612 to determine whether the device that transmitted the transaction 612 is allocated to any of the partitions to which the I/O ports 628 a - b are allocated (step 804 ). If the transaction 612 was not transmitted by such a device, the transaction is not authorized to access the devices 604 a - b , and the destination decoder 608 does not transmit the transaction 612 to the I/O devices 604 a - b (step 806 ).
  • the destination decoder 608 may maintain a list 614 of valid source terminus IDs.
  • the list 614 may contain the source terminus IDs of those devices in the system 100 that are allocated to any of the partitions 704 a - b to which the I/O ports 628 a - b are allocated.
  • the destination decoder 608 may perform step 804 by determining whether the source terminus ID in transaction 612 is in the list 614 and by then determining that the transaction 612 is not from an appropriate partition if the source terminus ID is not in the list 614 .
  • if the destination decoder 608 determines in step 804 that the transaction 612 is from an appropriate device, the destination decoder 608 maps the source terminus ID to the partition ID value of the one of the I/O ports 628 a - b that is in the same partition as the device that transmitted the transaction 612 (step 808 ).
  • the destination decoder 608 may maintain a table 616 or other mapping of source terminus identifiers to partition ID register values. The destination decoder 608 may therefore perform step 808 by using the source terminus ID in transaction 612 as an index into the table 616 and thereby identifying the corresponding partition ID register value.
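  • steps 804 - 808 amount to a membership test against list 614 followed by a lookup in table 616 ; the C sketch below (structure and names are ours) folds both into one pass:

```c
#include <stdint.h>
#include <stdbool.h>

struct src_map {
    uint16_t src_terminus;  /* authorized source device           */
    uint8_t  partition_id;  /* partition of its matching I/O port */
};

/* Combined stand-in for list 614 and table 616: a hit both
 * authorizes the source (step 804) and yields the partition ID
 * (step 808); a miss means the transaction is dropped (step 806). */
static bool decode_source(const struct src_map *tbl, int n,
                          uint16_t src_terminus, uint8_t *partition_id)
{
    for (int i = 0; i < n; i++) {
        if (tbl[i].src_terminus == src_terminus) {
            *partition_id = tbl[i].partition_id;
            return true;
        }
    }
    return false;
}
```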
  • the destination decoder 608 generates a first modified transaction 620 that contains: (1) the partition ID register value (p) identified in step 808 ; (2) the physical address (a) contained in the transaction 612 ; and (3) the data (d) contained in the transaction 612 .
  • the destination decoder 608 transmits the first modified transaction 620 to a transaction router 622 on lines 618 (step 810 ).
  • the transaction router 622 routes the transaction 620 to the one of the I/O ports 628 a - b that is allocated to the partition identified in the first modified transaction 620 (step 812 ). More specifically, the transaction router 622 identifies the one of the I/O ports 628 a - b that is allocated to the partition ID contained in the first modified transaction 620 (step 814 ).
  • the transaction router 622 may, for example, contain a lookup table that maps partition IDs to I/O ports 628 a - b , and may use that lookup table to perform step 814 .
  • the transaction router 622 may generate a second modified transaction by stripping the partition ID from the first modified transaction 620 and then transmit the second modified transaction to the device identified in step 814 (step 816 ).
  • I/O ports 628 a - b may either: (1) both be allocated to partition 704 a ; or (2) be separately allocated to partitions 704 a - b in the manner illustrated in FIG. 7 .
  • I/O controller 602 includes switch 632 .
  • I/O device 604 a is coupled to switch 632 over lines 630 a and I/O device 604 b is coupled to switch 632 over lines 630 b .
  • Switch 632 is in turn coupled to I/O port 628 a over lines 630 c .
  • switch 632 creates a permanent pass-through connection between I/O device 604 a and I/O port 628 a .
  • I/O device 604 a communicates with I/O controller 602 through I/O port 628 a .
  • Transaction router 622 may be configured to route transactions associated with partition 704 a to I/O port 628 a and thereby to implement the allocation of I/O device 604 a to partition 704 a.
  • if I/O ports 628 a - b are both allocated to partition 704 a , then I/O port 628 b may be disabled and the switch 632 may be set to a first setting which routes all communications to and from I/O device 604 b through I/O port 628 a . If I/O port 628 a is allocated to partition 704 a and I/O port 628 b is allocated to partition 704 b (as shown in FIG. 7 ), then I/O port 628 b may be enabled and the switch 632 may be set to a second setting which routes all communications to and from I/O device 604 b through I/O port 628 b . Note that use of the switch 632 in the manner described above is merely one example of a way in which a transaction may be decoded and routed to a specific port, and does not constitute a limitation of the present invention.
  • the transaction router 622 may maintain a mapping of partition ID values and associated I/O ports. For example, consider the case in which I/O device 604 a is mapped to partition 704 a and in which I/O device 604 b is mapped to partition 704 b (as shown in FIG. 7 ).
  • the transaction router 622 may generate and transmit a second modified transaction 626 a to I/O port 628 a on lines 624 a , through which the second modified transaction 626 a may be forwarded to I/O device 604 a on lines 630 c , through switch 632 , and then on lines 630 a .
  • the transaction router 622 may generate and transmit a second modified transaction 626 b to I/O port 628 b on lines 624 b , through which the second modified transaction 626 b may be forwarded to I/O device 604 b on lines 630 b .
  • the mapping 702 illustrated in FIG. 7 , in which there is a one-to-one mapping between ports 628 a - b and partitions 704 a - b , is provided merely as an example and does not constitute a limitation of the present invention. Techniques disclosed herein may, for example, be used in conjunction with mappings of multiple ports to a single partition, as may be accomplished by using additional bits of the physical address as part of the partition ID.
  • examples of techniques will now be described for enabling the I/O devices 604 a - b to perform outgoing communications through the I/O controller 602 when the I/O devices 604 a - b are allocated to different partitions.
  • assume that I/O port 628 a (and therefore I/O device 604 a ) is mapped to partition 704 a and that I/O port 628 b (and therefore I/O device 604 b ) is mapped to partition 704 b (as shown in FIG. 7 ).
  • an outgoing transaction 636 a is generated by I/O device 604 a on lines 634 a (through I/O port 628 a ).
  • Transaction 636 a includes a physical address (a) and data (d).
  • I/O controller 602 includes a plurality of partition ID registers 606 a - b associated with the I/O ports 628 a - b , respectively.
  • partition ID register 606 a is associated with I/O port 628 a and represents mapping 702 a ( FIG. 7 ).
  • partition ID register 606 b is associated with I/O port 628 b and represents mapping 702 b .
  • Each of the partition ID registers 606 a - b includes at least enough bits to distinguish among the partitions to which I/O ports 628 a - b are allocated.
  • Each of the partition ID registers 606 a - b stores a unique partition ID value that uniquely identifies the partition to which the corresponding one of the I/O ports 628 a - b is allocated.
  • the value 0 (binary 00) may be stored in partition ID register 606 a , thereby indicating that I/O port 628 a is allocated to partition 0 ( 704 a ).
  • the value 1 (binary 01) may be stored in partition ID register 606 b , thereby indicating that I/O port 628 b is allocated to partition 1 ( 704 b ).
  • the I/O controller 602 may be configured so that the partition ID values stored in the partition ID registers 606 a - b cannot be changed by the operating system executing on the computer system 100 .
  • bit substitution circuit 638 a receives the outgoing transaction 636 a (step 902 ).
  • the bit substitution circuit 638 a reads the partition ID value from partition ID register 606 a on lines 640 a (step 904 ).
  • the bit substitution circuit 638 a writes the partition ID value into the physical address, thereby producing a partition-identifying address (step 906 ).
  • the partition-identifying address produced in step 906 may, for example, have the layout illustrated in FIG. 12B .
  • the example partition-identifying address 1210 illustrated in FIG. 12B is 64 bits wide.
  • Portion 1212 (bits 0 - 54 ) of partition-identifying address 1210 contains the physical address contained in the original transaction 636 a .
  • bit substitution circuit 638 a writes the partition ID value read from partition ID register 606 a into portion 1214 (bit 55 ) of the partition-identifying address 1210 (step 906 ).
  • bit substitution circuit 638 a appends the partition ID value to the original physical address.
  • Portion 1218 which includes both portions 1212 and 1214 , therefore unambiguously identifies the system memory address indicated by the original transaction 636 a .
  • Portion 1216 (bits 56 - 63 ) of the partition-identifying address 1210 is unused.
  • Portion 1218 therefore represents the “used portion” of address 1210 because the combination of the partition ID portion 1214 and the physical address portion 1212 is used to specify a unique address in the system 100 .
  • partition ID field 1214 of address 1210 is only one bit wide, in contrast to the partition ID field 1204 of address 1200 ( FIG. 12A ), which is two bits wide.
  • the partition ID field 1214 of address 1210 need only be wide enough to distinguish among the partitions to which I/O ports 628 a - b are allocated. Because the I/O ports 628 a - b are allocated between only two partitions in the example illustrated in FIGS. 6A-6B , partition ID field 1214 need only be one bit wide.
  • Partition ID field 1204 of address 1200 ( FIG. 12A ) in contrast, is two bits wide because it must be capable of distinguishing among all partitions 504 a - d in the system.
  • the required minimum width of the partition ID fields 1204 and 1214 may, of course, vary depending on the number of unique partitions they are required to represent.
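  • the insertion step can therefore be parameterized by field position and width; the sketch below (ours, not the patent's) reproduces the FIG. 12A layout with pid_shift = 53, id_bits = 2 and the FIG. 12B layout with pid_shift = 55, id_bits = 1:

```c
#include <stdint.h>

/* Insert an id_bits-wide partition ID at bit pid_shift and zero
 * everything above the physical-address field. Generalizes the
 * fixed layouts of FIGS. 12A-12B; illustrative only. */
static uint64_t insert_partition_id(uint64_t phys, uint64_t pid,
                                    unsigned pid_shift, unsigned id_bits)
{
    uint64_t phys_mask = (UINT64_C(1) << pid_shift) - 1;
    uint64_t pid_mask  = (UINT64_C(1) << id_bits) - 1;
    return (phys & phys_mask) | ((pid & pid_mask) << pid_shift);
}
```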
  • the particular layout of the partition-identifying address 1210 in FIG. 12B is shown merely for purposes of example and does not constitute a limitation of the present invention. Rather, partition-identifying addresses of any size and having any layout may be used in conjunction with embodiments of the present invention.
  • the bit substitution circuit 638 a generates a first modified transaction 642 a containing the partition-identifying address generated in step 906 (step 908 ).
  • the bit substitution circuit 638 a transmits the first modified transaction 642 a to cache 646 on lines 644 a (step 910 ).
  • a flowchart is shown of a method 1000 that is performed by the cache 646 in response to receipt of the first modified transaction 642 a according to one embodiment of the present invention.
  • the cache 646 receives the first modified transaction 642 a from the bit substitution circuit 638 a (step 1002 ).
  • the cache 646 determines whether the first modified transaction 642 a can be satisfied using cache data stored locally in cache lines 648 (step 1004 ). If there is a cache hit, the cache 646 performs the transaction locally (i.e., within the cache lines 648 ) (step 1006 ) and the method 1000 terminates. Data is written into the cache from the I/O card via lines 650 . If the transaction 636 a is a read command, the cache 646 may transmit the read values to the device 604 a on lines 650 .
  • if the transaction cannot be satisfied locally, the cache 646 transmits a second modified transaction 652 to an address mapper 656 on lines 654 (step 1008 ).
  • the second modified transaction 652 contains the partition ID value and physical address from the first modified transaction 642 a.
  • a flowchart is shown of a method 1100 that is performed by the address mapper 656 when it receives the second modified transaction 652 in one embodiment of the invention.
  • the address mapper 656 receives the second modified transaction 652 on lines 654 (step 1102 ).
  • the address mapper 656 maintains a mapping 658 of address-partition ID pairs to destination terminus IDs.
  • the address mapper 656 uses the mapping 658 to map the partition ID and address in the second modified transaction 652 into a destination terminus ID (step 1104 ).
  • the address mapper 656 generates and transmits a third modified transaction 670 to the system fabric 116 on lines 672 (step 1106 ).
  • the third modified transaction 670 includes: (1) the destination terminus ID identified in step 1104 ; (2) the physical address from the second modified transaction 652 ; and (3) the data from the second modified transaction 652 (if any).
  • the third modified transaction 670 does not include the partition ID identified in step 904 ( FIG. 9 ), because in the embodiment illustrated in FIGS. 6A-6B the partition ID is only used to distinguish internally (i.e., within the I/O controller 602 ) among different partitions.
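  • a minimal sketch of method 1100 's lookup (mapping 658 is modeled as a simple table; names are ours): the (partition ID, address) pair selects the destination terminus, after which the partition ID is dropped from the outbound transaction:

```c
#include <stdint.h>

struct io_map_entry {
    uint8_t  partition_id;
    uint64_t base, limit;    /* physical address range */
    uint16_t dest_terminus;
};

/* Stand-in for mapping 658 (step 1104). The partition ID is
 * consumed here and does not appear in the third modified
 * transaction sent to the fabric (step 1106). */
static int map_outgoing(const struct io_map_entry *m, int n,
                        uint8_t pid, uint64_t addr, uint16_t *dest)
{
    for (int i = 0; i < n; i++) {
        if (m[i].partition_id == pid &&
            addr >= m[i].base && addr < m[i].limit) {
            *dest = m[i].dest_terminus;
            return 0;
        }
    }
    return -1;  /* no mapping: reject the transaction */
}
```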
  • router 228 routes the third modified transaction 670 to the memory controller or other device having the destination terminus ID contained in the third modified transaction 670 using the techniques described above with respect to FIG. 2 .
  • bit substitution circuit 638 b may receive outgoing transaction 636 b from device 604 b on lines 634 b and substitute therein the value of partition ID register 606 b , thereby generating and transmitting a first modified transaction 642 b on lines 644 b .
  • the first modified transaction 642 b may then be processed in the manner described above.
  • the techniques disclosed herein address the coarseness of chip-level allocation by providing the ability to allocate resources on a sub-chip basis.
  • the ability to allocate multiple resources on a single chip to multiple partitions increases the degree to which such resources may be allocated optimally in response to changing conditions.
  • Sub-chip partitioning allows partitionable computer systems to take full advantage of the cost and size reductions made possible by the current trend in computer chip design of providing an increasing number of functions on a single chip, while still providing the fine-grained resource allocation demanded by users.
  • embodiments of the present invention enable sub-chip partitioning to be accomplished using relatively localized modifications to existing circuitry, thereby enabling a substantial portion of existing circuitry to be used without modification in conjunction with embodiments of the present invention.
  • the cores 204 a - n , cache 208 , and fabric 116 may be prior art components.
  • embodiments of the present invention may be implemented relatively easily, quickly, and inexpensively.
  • bit substitution circuits 212 a - n and 638 a - b may enforce inter-partition security by preventing the operating system in the corresponding partition from accessing addresses in other partitions. As described above, such security may be provided by overwriting any values the operating system writes into the upper bits of addresses it generates (e.g., bits in portions 1204 or 1206 of address 1200 ( FIG. 12A ) and bits in portions 1214 or 1216 of address 1210 ( FIG. 12B )).
  • the techniques disclosed herein thereby provide a degree of hardware-enforced inter-partition security that cannot be circumvented by malicious or improperly-designed software.
  • the term “resources” refers herein to hardware resources in a computer system, such as processor cores ( FIG. 2 ) and I/O ports ( FIGS. 6A-6B ).
  • a chip may contain one or more hardware resources.
  • processor cores and I/O ports are provided herein as examples of hardware resources that may individually be allocated to partitions in embodiments of the present invention, embodiments of the present invention may be used to allocate other kinds of hardware resources to partitions on a sub-chip basis.
  • although the techniques illustrated by the example in FIG. 2 are applied to a plurality of CPU cores 206 a - n , such techniques may be applied to I/O ports or to any other kind of resource.
  • although the techniques illustrated by the example in FIGS. 6A-6B are applied to a plurality of I/O ports 628 a - b , such techniques may be applied to CPU cores or to any other kind of resource.
  • techniques disclosed herein may be used in a system including a cache to allocate the cache among multiple partitions. Furthermore, any resource which is accessed using memory-mapped transactions may be allocated to a particular partition in a partitionable computer system using techniques disclosed herein.
  • one example of such a memory-mapped resource is the set of general purpose event registers (GPEs) associated with each partition.
  • a particular GPE therefore, typically is addressable within the address space of the partition to which it is allocated.
  • Techniques disclosed herein may be employed to make the GPEs of each partition accessible over the system fabric 116 at unique system (fabric) addresses.
  • a cell board may contain multiple memory controllers, each of which may have its own terminus ID.
  • Those of ordinary skill in the art will appreciate how to implement embodiments of the present invention in systems including multiple memory controllers on a single cell board.
  • in the examples above, the core 204 a issues memory write command 230 a .
  • the memory write command 230 a is just one example of a memory access request, which is in turn merely one example of a transaction to which the techniques disclosed herein may apply.
  • partition ID values are stored in partition ID registers 210 a - n in FIG. 2
  • partition ID values may be represented and stored in any manner.
  • partition ID values need not each be stored in a distinct register and need not be represented using the particular numbering scheme described herein.
  • although the techniques disclosed herein have been described in conjunction with symmetric multiprocessor (SMP) computer architectures, embodiments of the present invention are not limited to use in conjunction with SMPs.
  • Embodiments of the present invention may, for example, be used in conjunction with NUMA (non-uniform memory access) multiprocessor computer architectures.
  • each cell board in the system may have any number of processors (including one).
  • the term “cell board” as used herein is not limited to any particular kind of cell board, but rather refers generally to any set of electrical and/or mechanical components that allow a set of one or more processors to communicate over a system fabric through an interface such as an agent chip.
  • although the fabric agent chip 114 a and memory controller 110 a are illustrated as separate and distinct components in FIG. 1 , this is not a requirement of the present invention. Rather, the fabric agent chip 114 a and memory controller 110 a may be integrated into a single chip package.

Abstract

Techniques are provided for allocating a plurality of resources on a chip to a plurality of partitions in a partitionable computer system. In one embodiment, a resource allocated to a first partition generates a first physical address in an address space allocated to the first partition. A first partition identification value identifies the first partition. The first partition identification value is stored in the first physical address to produce a partition-identifying address, which may be transmitted to a system fabric. In another embodiment, a transaction is received which includes a source terminus identifier identifying a source device which transmitted the transaction. It is determined, based on the source terminus identifier, whether the source device is allocated to the same partition as any of the plurality of resources. If the source device is so allocated, the transaction is transmitted to a resource that is allocated to the same partition as the source device.

Description

    BACKGROUND
  • 1. Field of the Invention
  • The present invention relates to partitionable computers and, more particularly, to techniques for allocating resources to partitions in partitionable computers.
  • 2. Related Art
  • Computer system owners and operators are continually seeking to improve computer operating efficiencies and hence to reduce the cost of providing computing services. For example, servers of various kinds—such as database servers, web servers, email servers, and file servers—have proliferated within enterprises in recent years. A single enterprise may own or otherwise employ the services of large numbers of each of these kinds of servers. The cost of purchasing (or leasing) and maintaining such servers can be substantial. It would be advantageous, therefore, to reduce the number of servers that must be used by an enterprise without decreasing system performance.
  • One way to reduce the number of servers is through the process of “server consolidation,” in which multiple independent servers are replaced by a single server, referred to herein as a “consolidation server.” A consolidation server typically is a powerful computer system having significant computing resources (such as multiple processors and large amounts of memory). The consolidation server may be logically subdivided into multiple “partitions,” each of which is allocated a portion of the server's resources. A multi-partition consolidation server is an example of a “partitionable computer.” Each partition may execute its own operating system and software applications, and otherwise act similarly to an independent physical computer.
  • Unlike a collection of independent servers, typically it is possible to dynamically adjust the resources available to each partition/application in a consolidation server. Many applications experience variation in workload demand, which is frequently dependent on time of day, day of month, etc. Periods of high workload demand are frequently not coincident. Applying available resources to current high-demand workloads achieves improved resource utilization, decreased overall resource requirements, and therefore reduced overall cost.
  • As partitionable computers become more powerful, the trend is for them to include a greater and greater number of processors. In particular, a single partitionable computer typically includes several (e.g., 4) “cell boards,” each of which includes several (e.g., 2, 4, 8, or 16) processors. The cell boards are interconnected through a switching-fabric and collectively provide an effective processing power that approaches the aggregate processing power of the individual processors they contain. Each successive generation of cell boards tends to include a greater number of processors than the previous generation.
  • Early processors, like many existing processors, included only a single processor core. A “multi-core” processor, in contrast, may include one or more processor cores on a single chip. A multi-core processor behaves as if it were multiple processors. Each of the multiple processor cores may essentially operate independently, while sharing certain common resources, such as a cache. Multi-core processors therefore provide additional opportunities for increased processing efficiency.
  • As the size, power, and complexity of partitionable computer hardware continues to increase, it is becoming increasingly desirable to provide flexibility in the allocation of computer resources (such as processors and I/O devices) among partitions. Insufficient flexibility in resource allocation may, for example, lead to underutilization of resources allocated to a first partition, while a second partition lacking sufficient resources operates at maximum utilization. What is needed, therefore, are improved techniques for allocating computer resources to partitions in partitionable computer systems.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a multiprocessor computer system according to one embodiment of the present invention;
  • FIG. 2 is a block diagram of one of the CPUs of the computer system of FIG. 1 according to one embodiment of the present invention;
  • FIG. 3 is a flowchart of a method that is performed by a bit substitution circuit of FIG. 2 according to one embodiment of the present invention;
  • FIG. 4A is a flowchart of a method that is performed by the cache of FIG. 2 according to one embodiment of the present invention;
  • FIG. 4B is a flowchart of a method that is performed by the address mapper of FIG. 2 according to one embodiment of the present invention;
  • FIG. 5 is a diagram of a mapping between processor cores and hardware partitions in a partitionable computer system according to one embodiment of the present invention;
  • FIGS. 6A-6B illustrate an I/O controller according to one embodiment of the present invention;
  • FIG. 7 is a diagram of a mapping between I/O ports and partitions in a partitionable computer system according to one embodiment of the present invention;
  • FIG. 8 is a flowchart of a method performed by the destination decoder of FIGS. 6A-6B to decode a physical address in an incoming transaction according to one embodiment of the present invention;
  • FIG. 9 is a flowchart of a method that is performed by the bit substitution circuit of FIGS. 6A-6B according to one embodiment of the present invention;
  • FIG. 10 is a flowchart of a method that is performed by the cache of FIGS. 6A-6B according to one embodiment of the present invention;
  • FIG. 11 is a flowchart of a method that is performed by the address mapper of FIGS. 6A-6B according to one embodiment of the present invention;
  • FIG. 12A is a diagram of a partition-identifying address according to one embodiment of the present invention; and
  • FIG. 12B is a diagram of a partition-identifying address according to another embodiment of the present invention.
  • DETAILED DESCRIPTION
  • Before describing embodiments of the present invention, general features of multiprocessor computer architectures will be described. Although there are a variety of multiprocessor computer architectures, the symmetric multiprocessing (SMP) architecture is one of the most widely used architectures. Referring to FIG. 1, a computer system 100 having an SMP architecture is shown in block diagram form. The computer system 100 includes a plurality of cell boards 102 a-d interconnected using a switching fabric 116, also referred to as a “system fabric” or simply a “fabric.” Each of the cell boards 102 a-d includes a plurality of CPUs, a system bus, and main memory.
  • For ease of illustration and explanation, the cell board 102 a is shown in more detail in FIG. 1 and will now be described in more detail. The other cell boards 102 b-d, however, may include components and a structure that are the same as or similar to that of cell board 102 a. The cell board 102 a includes a plurality of CPUs 104 a-n, where n is a number such as 2, 4, 8, or 16. The CPUs 104 a-n include on-board caches 106 a-n, respectively. The cell board 102 a also includes a system bus 108, main memory 112 a, and memory controller 110 a. The CPUs 104 a-n are coupled directly to the system bus 108, while main memory 112 a is coupled to the system bus 108 through memory controller 110 a. CPUs 104 a-n may communicate with each other over the system bus 108 and may access the memory 112 a over the system bus 108 through the memory controller 110 a, as is well-known to those of ordinary skill in the art.
  • Although cell boards 102 a-d include their own local system memories 112 a-d coupled to corresponding memory controllers 110 a-d, the memories 112 a-d may be addressed by the CPUs in the cell boards 102 a-d using a single combined physical address space. The fabric 116 provides a mechanism for communication among the cell boards 102 a-d to perform such shared memory access and other inter-cell board communication.
  • The fabric 116 may, for example, include one or more crossbar switches. A crossbar switch is a device that has a number of input/output ports to which devices may be connected. A pair of devices connected to a pair of input/output ports of a crossbar switch may communicate with each other over a path formed within the switch connecting the pair of input/output ports. The paths set up between devices can be fixed for some duration or changed when desired. Multiple paths may be active simultaneously within the crossbar switch, thereby allowing multiple pairs of devices to communicate with each other through the crossbar switch simultaneously and without interfering with each other.
  • The fabric 116 may be implemented using components other than crossbar switches. For example, the fabric 116 may be implemented using one or more buses.
  • Cell board 102 a also includes a fabric agent chip 114 a that is coupled to the fabric 116 and which acts as an interface between the cell board 102 a and the other cell boards 102 b-d in the system 100. The other cell boards 102 b-d similarly include their own fabric agent chips 114 b-d, respectively. Although the fabric agent chips 114 a-d are illustrated as distinct components in FIG. 1, fabric agent chips 114 a-d may be considered to be part of the system fabric 116.
  • As described above, the local memories 112 a-d in the cell boards 102 a-d may be accessed using a single physical address space. In an SMP such as the system 100 shown in FIG. 1, this is made possible by the fabric agent chips 114 a-d. For example, consider a case in which CPU 104 a issues a memory access request to memory controller 110 a that addresses a memory location (or range of memory locations) in the shared physical address space. If the memory controller 110 a cannot satisfy the memory access request from the local memory 112 a, the memory controller 110 a forwards the request to the fabric agent chip 114 a. The fabric agent chip 114 a translates the physical address in the request into a new memory address (referred to as a “fabric address”) that specifies the location of the requested memory, and transmits a new memory access request using the new fabric address to the fabric 116. The fabric 116 forwards the memory access request to the fabric agent chip in the appropriate cell board.
  • The requested memory access is performed using the local memory of the receiving cell board, if possible, and the results are transmitted back over the fabric 116 to the fabric agent chip 114 a and back through the memory controller 110 a to the CPU 104 a. The CPUs in cell boards 102 a-d may thereby access the main memory in any of the other cell boards 102 a-d over the fabric 116 using the fabric agent chips 114 a-d in the cell boards 102 a-d. One goal of such a system is to make the implementation of memory access transparent to the CPUs, in the sense that the CPUs may transmit and receive responses to memory access requests in the same way regardless of whether such requests are satisfied from onboard memory or offboard memory.
  • In one embodiment of the present invention, techniques are provided for allocating multiple physical resources on a single chip to a plurality of partitions in a partitionable computer system. In this embodiment, when one of the resources generates a transaction containing a physical address, a partition identification value (identifying the partition to which the resource is allocated) is stored in the physical address to create a partition-identifying address. The transaction, including the partition-identifying address, is transmitted over the fabric 116 and thereby routed to the appropriate destination.
  • This embodiment will be explained using an example in which multiple microprocessor cores in a single microprocessor are allocated to a plurality of partitions. For example, referring to FIG. 2, a functional block diagram is shown of the CPU 104 a according to one embodiment of the present invention. In the embodiment illustrated in FIG. 2, the CPU 104 a is a multi-core processor. In particular, the CPU 104 a includes a plurality of processor cores 204 a-n on a single chip, where n may be any number, such as 2, 4, 8, or 16. The cores 204 a-n may, for example, be conventional processor cores such as those found in conventional multi-core processors. In the embodiment illustrated in FIG. 2, all of the cores 204 a-n share a single cache 208. The cores 204 a-n need not, however, share a single cache. Rather, for example, each core may have its own cache, or groups of cores may share different caches.
  • In a conventional partitionable computer system, all of the cores in a multi-core processor are required to be allocated to a single partition. Furthermore, if the CPU 104 a were a conventional multi-core processor, the cores 204 a-n would communicate directly with the cache 208. For example, the core 204 a would transmit a memory write request, including the address of the memory location to be written, directly to the cache 208, which would satisfy the request locally if possible or by performing an off-board write to main memory otherwise.
  • The multi-core processor 104 a illustrated in FIG. 2, in contrast, enables the cores 204 a-n to be allocated to a plurality of partitions. For example, referring to FIG. 5, a diagram is shown of a mapping 502 between processor cores 506 a-h and partitions 504 a-d in the partitionable computer system 100 according to one embodiment of the present invention. Cores 506 a-h in FIG. 5 represent cores 204 a-n in FIG. 2 in the case where n=8. For example, core 506 a represents core 204 a and core 506 h represents core 204 n when n=8.
  • Note that each of the partitions 504 a-d is not itself a physical component of the computer system 100. Rather, each of the partitions 504 a-d is a logical construct that is defined by the resources (e.g., processor cores) that are allocated to it. The resources allocated to a particular partition may change over time.
  • In the example shown in FIG. 5, core 506 b is allocated to partition 504 a (indicated by mapping 502 b), cores 506 a and 506 d are allocated to partition 504 b (indicated by mappings 502 a and 502 d, respectively), cores 506 c, 506 e, and 506 f are allocated to partition 504 c (indicated by mappings 502 c, 502 e, and 502 f, respectively), and cores 506 g-h are allocated to partition 504 d (indicated by mappings 502 g-h, respectively).
  • The particular mapping 502 illustrated in FIG. 5 is shown merely for purposes of example and does not constitute a limitation of the present invention. There may be any number of partitions, and cores may be allocated to partitions in any arrangement.
  • To enable the cores 204 a-n to be allocated to multiple partitions, the CPU 104 a includes a plurality of partition ID registers 210 a-n associated with the plurality of cores 204 a-n, respectively. For example, partition ID register 210 a is associated with core 204 a and stores a value that represents mapping 502 a (FIG. 5). Similarly, partition ID register 210 n is associated with core 204 n and stores a value that represents mapping 502 h. Each of the partition ID registers 210 a-n includes at least enough bits to represent the number of partitions in the computer system 100. In particular, if P is the number of partitions in the computer system 100, each of the partition ID registers 210 a-n includes at least log2 P bits. For example, if there are four partitions (as in the example illustrated in FIG. 5), each of the partition ID registers 210 a-n includes at least 2 (log2 4) bits.
  • Each of the partition ID registers 210 a-n stores a partition ID value that uniquely identifies the partition to which the corresponding one of the cores 204 a-n is allocated. For example, let PIRi be the partition ID register at index i, and let Ci be the corresponding processor core at index i, where i ranges from 0 to n−1. If core Ci is allocated to partition j, then the value j may be stored as the partition ID value in partition ID register PIRi. In this way, a unique value identifies each of the partitions in the system 100. The values stored in the partition ID registers 210 a-n may, for example, be set by configuration software executing in the computer system 100.
  • For example, referring again to the example illustrated in FIG. 5, the value 1 (binary 01) may be stored in partition ID register 210 a, thereby indicating that core 204 a (represented by core 506 a in FIG. 5) is allocated to partition 1 (504 b). Similarly, the value 3 (binary 11) may be stored in partition ID register 210 n, thereby indicating that core 204 n (represented by core 506 h in FIG. 5) is allocated to partition 3 (504 d).
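  • For illustration only, the following C sketch (not part of any embodiment described above) computes the minimum partition ID register width for P partitions and shows one hypothetical set of register values consistent with the FIG. 5 example; the name pir and all contents are assumptions made for this sketch.

      #include <stdio.h>

      /* Minimum partition ID register width: the smallest number of
       * bits that can name P partitions, i.e. ceil(log2(P)). */
      static unsigned pir_width(unsigned num_partitions)
      {
          unsigned bits = 0;
          while ((1u << bits) < num_partitions)
              bits++;
          return bits;
      }

      int main(void)
      {
          /* Hypothetical register contents mirroring FIG. 5: core C0 in
           * partition 1, C1 in partition 0, C6 and C7 in partition 3. */
          unsigned pir[8] = { 1, 0, 2, 1, 2, 2, 3, 3 };
          printf("4 partitions need %u-bit registers\n", pir_width(4)); /* 2 */
          for (unsigned i = 0; i < 8; i++)
              printf("core C%u -> partition %u\n", i, pir[i]);
          return 0;
      }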
  • The CPU 104 a may be configured so that the partition ID values stored in the partition ID registers 210 a-n cannot be changed by the operating system executing on the computer system 100. This fixedness of the partition ID values may be enforced, for example, by any of a variety of hardware security mechanisms, or simply by agreement between the configuration software and the operating system.
  • To implement the allocation of the cores 204 a-n to the multiple partitions 504 a-d, the main memory 112 a-d of the computer system 100 is allocated among the partitions 504 a-d, so that each partition is allocated a portion of the main memory 112 a-d. The main memory 112 a-d may be allocated to the partitions 504 a-d in blocks of any size. For example, the main memory 112 a-d may be allocated to partitions 504 a-d on a per-address, per-page, or per-controller basis.
  • In one embodiment of the present invention, a core that transmits a memory access request need not specify the partition to which the requested memory addresses are allocated. Rather, the core need only specify the requested memory address using a memory address (referred to as a “physical address”) within an address space (referred to as a “physical address space”) associated with the partition to which the core is allocated. Typically the main memory 112 a-d is logically divided into a plurality of physical address spaces. Each of the physical address spaces typically is zero-based, which means that the addresses in each physical address space typically are numbered beginning with address zero.
  • To accomplish this result, mechanisms are provided for distinguishing a particular address in one partition from the same address in other partitions. In particular, the CPU 104 a includes bit substitution circuits 212 a-n, which are coupled between cores 204 a-n and partition ID registers 210 a-n, respectively.
  • To appreciate the function performed by the bit substitution circuits 212 a-n, consider a case in which core 204 a transmits a write command 230 a on lines 214 a to bit substitution circuit 212 a. The write command 230 a includes a physical address of the memory location to be written and a value to write into that location. The physical address is illustrated in FIG. 2 as “a[54:0]” to indicate that bits 0-54 of the address contain useful (address-identifying) information.
  • The term “system space” refers herein to an address space that contains unique addresses for each memory location in the entire main memory 112 a-d. Assume, for purposes of example, that the system address space is 4 GB (0x100000000) and that there are four equally-sized (1 GB) partitions. The physical memory space of each of the partitions in such a case would have an address range of 0-1 GB (0x00000000-0x40000000). The first partition might be allocated (mapped) to the first gigabyte of the system address space, the second partition might be allocated to the second gigabyte of the system address space, and so on. When a core allocated to a particular partition generates a physical memory address as part of a memory access request, it is necessary to translate the physical memory address into a system memory address. Examples of techniques for performing this translation according to one embodiment of the present invention will now be described.
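  • Before turning to those techniques, the following illustration-only sketch shows the address arithmetic implied by this contiguous example mapping (the bit-substitution hardware described next realizes an equivalent mapping); the 4 GB system space and 1 GB partition size are the hypothetical figures above, and the function name phys_to_system is an invention of this sketch.

      #include <stdint.h>
      #include <stdio.h>

      #define PARTITION_SIZE 0x40000000ULL  /* 1 GB per partition (example) */

      /* Contiguous-case translation only: partition n occupies the n-th
       * gigabyte of the system address space. */
      static uint64_t phys_to_system(uint64_t phys, unsigned partition_id)
      {
          return (uint64_t)partition_id * PARTITION_SIZE
                 + (phys % PARTITION_SIZE);
      }

      int main(void)
      {
          /* Physical address 0x1000 in partition 2 lands in the third
           * gigabyte of the system space: 0x80001000. */
          printf("0x%llx\n", (unsigned long long)phys_to_system(0x1000, 2));
          return 0;
      }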
  • For purposes of example, assume that the physical address in the write command 230 a transmitted on lines 214 a is a 64-bit value but that only the 55 least significant bits are needed to fully address the physical address space allocated to a single partition. In such a case, the 9 uppermost address bits are not needed to specify physical addresses. Upon startup of a multi-partition computer system, the operating system executing in each partition is informed of the size of the physical address space that is allocated to it. As a result, a well-behaved operating system will not generate addresses that use more bits than necessary (e.g., 55) to address its allocated memory partition. As described in more detail below, however, even if the operating system in a particular partition is not well-behaved and generates addresses outside of its allocated address range, the techniques disclosed herein prevent such an operating system from accessing such prohibited addresses, thereby enforcing inter-partition security.
  • Referring to FIG. 3, a flowchart is shown of a method 300 that is performed by the bit substitution circuit 212 a according to one embodiment of the present invention when write command 230 a is transmitted by core 204 a on lines 214 a. The bit substitution circuit 212 a receives the write command 230 a (or other memory access request, such as a read command) (step 302). In response to receiving write command 230 a, the bit substitution circuit 212 a reads the partition ID value from the partition ID register 210 a on lines 216 a (step 304). The bit substitution circuit 212 a writes the partition ID value into the physical address, thereby producing a partition-identifying address that includes both the original physical address and the partition ID value (step 306).
  • Referring to FIG. 12A, a diagram is shown of an example of a partition-identifying address 1200 produced in step 306 according to one embodiment of the present invention. The example partition-identifying address 1200 illustrated in FIG. 12A is 64 bits wide. Portion 1202 (bits 0-52) of partition-identifying address 1200 contains bits 0-52 of the physical address contained in the original write command 230 a. In one embodiment of the present invention, bit substitution circuit 212 a writes the partition ID value obtained from partition ID register 210 a into portion 1204 (bits 53-54) of the partition-identifying address 1200 (step 306), thereby overwriting the original values stored in portion 1204. Portion 1208, which includes both portions 1202 and 1204, therefore unambiguously identifies the system memory address indicated by the original write command 230 a. Portion 1206 (bits 55-63) of the partition-identifying address 1200 is unused. Portion 1208 therefore represents the “used portion” of address 1200 because the combination of the partition ID portion 1204 and the physical address portion 1202 is used to specify a unique address in the system 100.
  • Recall that a well-behaved operating system will not attempt to access memory locations having addresses outside of the address space that has been allocated to it, and will therefore not set any of the bits in portions 1204 or 1206. If, however, an operating system does set any bits in portion 1204, such bits will be overwritten by the bit substitution circuit 212 a in step 306. The bit substitution circuit 212 a may further be configured to overwrite portion 1206 with zeros or some other value. The bit substitution circuit 212 a may thereby prevent the operating system from accessing addresses outside of its partition and thereby enforce inter-partition security.
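  • A minimal C sketch of the bit substitution just described, assuming the FIG. 12A layout (bits 0-52 physical address, bits 53-54 partition ID, bits 55-63 zeroed); the function and macro names are hypothetical, and no actual hardware interface is implied.

      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      #define PHYS_BITS 53                        /* bits 0-52: physical address */
      #define PID_BITS  2                         /* bits 53-54: partition ID    */
      #define PHYS_MASK ((1ULL << PHYS_BITS) - 1)

      static uint64_t make_partition_identifying_address(uint64_t phys,
                                                         uint64_t pid)
      {
          assert(pid < (1ULL << PID_BITS));
          /* Discard anything a misbehaving OS wrote above bit 52, then
           * overwrite bits 53-54 with the partition ID; bits 55-63 are
           * left zero, enforcing inter-partition security. */
          return (phys & PHYS_MASK) | (pid << PHYS_BITS);
      }

      int main(void)
      {
          printf("0x%016llx\n", (unsigned long long)
                 make_partition_identifying_address(0x1234, 1));
          /* prints 0x0020000000001234 */
          return 0;
      }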
  • The particular layout of the partition-identifying address 1200 in FIG. 12A is shown merely for purposes of example and does not constitute a limitation of the present invention. Rather, partition-identifying addresses of any size and having any layout may be used in conjunction with embodiments of the present invention. For example, the layout of partition-identifying addresses may vary from partition to partition. For example, one partition may be allocated twice as much address space as another, in which case addresses in the larger partition will include one less bit of partition ID (portion 1204) and one more bit of physical address (portion 1202) than addresses in the smaller partition. The bit substitution circuits 212 a-n, therefore, may be individually programmable to insert partition IDs of varying sizes into the addresses generated by the cores 204 a-n.
  • The bit substitution circuit 212 a generates a first modified write command 232 a (or other memory access request) containing the partition-identifying address generated in step 306 (step 308). The bit substitution circuit 212 a transmits the first modified write command 232 a (or other memory access request) to the cache 208 on lines 218 a (step 310).
  • The combination of a core, partition ID register, and bit substitution circuit in the manner described and illustrated above with respect to FIG. 2 is referred to herein as an “extended core.” For example, CPU 104 a includes extended cores 206 a-n. Extended core 206 a includes core 204 a, partition ID register 210 a, and bit substitution circuit 212 a, while extended core 206 n includes core 204 n, partition ID register 210 n, and bit substitution circuit 212 n. Although core 204 a, bit substitution circuit 212 a, and partition ID register 210 a are illustrated as distinct components in FIG. 2, the functions performed by the bit substitution circuit 212 a and/or partition ID register 210 a may be integrated into the core 204 a, so that the core 204 a may communicate directly with the cache 208.
  • Referring to FIG. 4A, a flowchart is shown of a method 400 that is performed by the cache 208 in response to receipt of the first modified write command 232 a according to one embodiment of the present invention. The cache 208 receives the first modified write command 232 a from the bit substitution circuit 212 a (step 402). The cache 208 determines whether the write request can be satisfied locally, i.e., whether there is a cache hit in cache lines 234 based on the partition-identifying address contained in the first modified write command 232 a (step 404). In other words, the cache 208 determines whether the value of the memory location addressed by the partition-identifying address contained in the first modified write command 232 a is stored in cache lines 234. The cache 208 may perform step 404 by using the partition-identifying address contained in the first modified write command 232 a as an index and tag and then using any of a variety of well-known techniques to determine whether there is a cache hit based on that index and tag.
  • Note that the address bits in which the partition ID value is stored may occupy either the index or tag field of the cache 208. If the partition ID value is stored in the index field of the cache 208, then the partitions 504 a-d are allocated fixed and distinct (non-overlapping) portions of the cache 208. If, however, the partition ID value is stored in the tag field of the cache 208, then the entire cache 208 is shared by the partitions 504 a-d, and the particular cache locations used by any partition are dynamic and depend on the workload of the cores 204 a-n at any particular point in time.
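  • The following illustration-only sketch shows why the placement matters, assuming invented cache parameters (64-byte lines, 4096 sets); with those parameters the FIG. 12A partition ID bits (53-54) fall in the tag, so two partitions' copies of the same physical address compete for the same set.

      #include <stdint.h>
      #include <stdio.h>

      #define LINE_BITS  6    /* 64-byte cache lines (assumed) */
      #define INDEX_BITS 12   /* 4096 sets (assumed)           */

      static void decompose(uint64_t addr)
      {
          uint64_t index = (addr >> LINE_BITS) & ((1ULL << INDEX_BITS) - 1);
          uint64_t tag   = addr >> (LINE_BITS + INDEX_BITS);
          printf("addr=0x%016llx index=0x%03llx tag=0x%llx\n",
                 (unsigned long long)addr, (unsigned long long)index,
                 (unsigned long long)tag);
      }

      int main(void)
      {
          /* Same physical address in partitions 0 and 1 (bit 53 differs):
           * identical index, different tag -> the whole cache is shared. */
          decompose(0x0000000000041000ULL);
          decompose(0x0020000000041000ULL);
          return 0;
      }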
  • If there is a cache hit, the cache 208 performs the write locally (i.e., within the cache lines 234) (step 406) and the method 400 terminates. The cache 208 may transmit an acknowledgment to the core 204 a on lines 224 a. If the core 204 a transmits a read command to the cache 208, the cache 208 may transmit the read values to the core 204 a on lines 224 a.
  • If there is a cache miss, the cache 208 transmits a second modified write command 236 to an address mapper 222 (step 408). In one embodiment of the present invention, the second modified write command 236 contains: (1) a source terminus ID (e.g., the terminus ID of the memory controller 110 a that services the CPU 104 a), labeled “S” in FIG. 2; (2) a transaction ID (a unique transaction identifier), labeled “I” in FIG. 2; (3) a request type (e.g., memory read or write), labeled “R” in FIG. 2; and (4) the partition-identifying address 1200 extracted from the first modified write command 232 a, labeled “p1, a[n:0]” in FIG. 2.
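  • A hypothetical C structure collecting the four fields just listed; the field widths are invented, and no on-wire encoding is specified above.

      #include <stdint.h>
      #include <stdio.h>

      struct fabric_request {
          uint16_t src_terminus;    /* S: source terminus ID            */
          uint32_t transaction_id;  /* I: unique transaction identifier */
          uint8_t  request_type;    /* R: e.g., memory read or write    */
          uint64_t pia;             /* partition-identifying address    */
      };

      int main(void)
      {
          struct fabric_request req = { 7, 42, 1, 0x0020000000001234ULL };
          printf("S=%u I=%u R=%u addr=0x%016llx\n",
                 (unsigned)req.src_terminus, (unsigned)req.transaction_id,
                 (unsigned)req.request_type, (unsigned long long)req.pia);
          return 0;
      }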
  • Although particular transactions are described above with respect to core 206 a for purposes of example, the other cores 206 b-n may perform transactions in the same manner. For example, core 204 n may generate a write command 230 n on lines 214 n, in response to which bit substitution circuit 212 n may read the value of partition ID register 210 n on lines 216 n. The bit substitution circuit 212 n may transmit a first modified write command 232 n on lines 218 n, which may be processed by the cache 208 in the manner described above. The cache 208 may communicate with the core 204 n directly over lines 224 n.
  • In one embodiment of the present invention, the partition-identifying address contained in the second modified write command 236 is translated into a system address. Referring to FIG. 4B, a flowchart is shown of a method 420 that is performed in one embodiment of the invention to perform such a translation. The method 420 may, for example, be performed after step 408 of method 400 (FIG. 4A).
  • The CPU 104 a includes an address mapper 222, which is coupled to the cache 208 over lines 220 and which therefore receives the second modified write command 236 (step 422). The address mapper 222 maps the partition-identifying address 1200 contained in the second modified write command 236 to: (1) a destination terminus ID (e.g., a terminus ID of the memory controller that controls access to the requested memory addresses), and (2) a transaction type (step 424). The transaction type serves a purpose similar to that of the original request type (e.g., memory read or write), except that the transaction type is used for transactions over the fabric 116. Techniques for translating request types into transaction types are well-known to those of ordinary skill in the art.
  • In one embodiment of the present invention, each of the CPUs in the system 100 (e.g., CPUs 104 a-n) and each of the memory controllers 110 a-d in the system 100 has a unique terminus identifier (terminus ID). In such an embodiment, a particular physical address in a particular partition may be uniquely addressed by a combination of the physical address, the partition ID of the partition, and the terminus ID of the memory controller that controls the memory in which that physical address is stored. Note further that because the address transmitted over the fabric 116 is a partition-identifying address (i.e., an address which includes both a physical address and a partition ID), the target memory controller may distinguish among the same physical address in different partitions. In the embodiment illustrated in FIG. 2, therefore, a single memory controller may control memory allocated to any number of partitions.
  • It should be appreciated, however, that this particular scheme is merely an example and does not constitute a limitation of the present invention. Other addressing schemes may be used in conjunction with the techniques disclosed herein, in which case different combinations of terminus identifiers, physical addresses, system addresses, partition identifiers, or other data may be required to uniquely address particular memory locations.
  • The address mapper 222 may, for example, maintain an address mapping 238 that maps partition-identifying addresses to destination terminus IDs and transaction types. The address mapper 222 may use the mapping 238 (which may, for example, be implemented as a lookup table) to perform the translation in step 424. The address mapping 238 need not contain an entry for every partition-identifying address. Rather, the address mapping 238 may, for example, map ranges of partition-identifying addresses (identified by their most significant bits) to pages of memory or to memory controllers. The address mapper 222 may ensure that a processor core allocated to one partition cannot access memory locations in another partition by mapping such requests to a null entry, thereby causing the address mapper 222 to generate a mapping fault.
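  • A minimal sketch of such a range-based lookup, with invented ranges and terminus IDs; a miss returns a sentinel standing in for the null entry / mapping fault described above.

      #include <stdint.h>
      #include <stdio.h>

      struct map_entry {
          uint64_t base, limit;   /* partition-identifying address range */
          int      dest_terminus; /* owning memory controller (assumed)  */
      };

      static const struct map_entry mapping238[] = {
          { 0x0000000000000000ULL, 0x001FFFFFFFFFFFFFULL, 7 }, /* partition 0 */
          { 0x0020000000000000ULL, 0x003FFFFFFFFFFFFFULL, 9 }, /* partition 1 */
      };

      static int lookup_dest(uint64_t pia)
      {
          for (size_t i = 0; i < sizeof mapping238 / sizeof mapping238[0]; i++)
              if (pia >= mapping238[i].base && pia <= mapping238[i].limit)
                  return mapping238[i].dest_terminus;
          return -1;  /* null entry: mapping fault blocks the access */
      }

      int main(void)
      {
          printf("%d\n", lookup_dest(0x0020000000001234ULL)); /* 9  */
          printf("%d\n", lookup_dest(0x0040000000000000ULL)); /* -1 */
          return 0;
      }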
  • The address mapper 222 generates and transmits a third modified write command 240 to the system fabric 116 (step 426). The third modified write command 240 includes: (1) the source terminus ID (S), transaction ID (I), request type (R), and partition-identifying address (p1, a[n:0]) from the second modified write command 236; and (2) the destination terminus ID (D) and transaction type (T) identified in step 424. The system fabric 116 includes a router 228 that uses techniques that are well-known to those of ordinary skill in the art to transmit the third modified write command 240 to the memory controller having the specified destination terminus ID. The router 228 may, for example, maintain a mapping 244 that maps pairs of input ports and destination terminus IDs to output ports.
  • When the router 228 receives the third modified write command 240 on a particular input port, the router uses the identity of the input port and the destination terminus ID contained in the third modified write command 240 to identify the output port that is coupled to the memory controller that controls access to the requested memory address(es). The router 228 transmits the third modified write command 240 (or a variation thereof) to the identified memory controller on lines 242. The third modified write command 240 may then be satisfied by the destination memory controller using techniques that are well-known to those of ordinary skill in the art.
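  • The router's forwarding decision can be modeled as a two-key table lookup, as in the sketch below; the port counts and table contents are invented for illustration.

      #include <stdio.h>

      #define NUM_IN_PORTS 4
      #define NUM_TERMINI  16

      /* mapping 244, modeled as a table: (input port, destination
       * terminus ID) -> output port; -1 means no route. */
      static int route_table[NUM_IN_PORTS][NUM_TERMINI];

      int main(void)
      {
          for (int p = 0; p < NUM_IN_PORTS; p++)
              for (int t = 0; t < NUM_TERMINI; t++)
                  route_table[p][t] = -1;
          route_table[0][9] = 3;  /* terminus 9 reached via output port 3 */
          printf("output port: %d\n", route_table[0][9]);
          return 0;
      }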
  • When the router 228 receives an inbound transaction on lines 246, the router 228 may route the transaction to the cache on lines 226 using techniques that are well-known to those of ordinary skill in the art. The incoming transaction may then be processed by the cache 208 and, if necessary, by one or more of the cores 206 a-n, using conventional techniques.
  • In another embodiment of the present invention, techniques are provided for allocating a plurality of hardware resources to a plurality of partitions in a partitionable computer system. This embodiment will be explained using an example in which a plurality of resources in a single I/O controller are allocated to a plurality of partitions. For example, referring to FIGS. 6A-6B, a functional block diagram is shown of an I/O controller 602 according to one embodiment of the present invention. The I/O controller 602 serves two I/O devices 604 a-b coupled to the I/O controller 602 through I/O ports 628 a-b, respectively. Examples of techniques will now be described for allocating the first I/O port 628 a to a first partition and the second I/O port 628 b to a second partition, and thereby for allocating the first I/O device 604 a to the first partition and the second I/O device 604 b to the second partition.
  • For example, referring to FIG. 7, a diagram is shown of a mapping 702 between I/O ports 628 a-b and hardware partitions 704 a-d in the partitionable computer system 100 according to one embodiment of the present invention. The mapping 702 includes mappings 702 a-b between I/O ports 628 a-b and partitions 704 a-b, respectively. Note that there are two partitions 704 c-d to which neither of the I/O ports 628 a-b is mapped. Other I/O ports in other I/O controllers (not shown), however, may be mapped to partitions 704 c-d. Although in the particular example illustrated in FIG. 7 there are two I/O ports 628 a-b allocated to two partitions 704 a-b, there may be any number of I/O ports and any number of partitions mapped to each other in any arrangement.
  • The I/O controller 602 includes a destination decoder 608, which verifies that incoming transactions (on lines 610) are addressed to one of the I/O devices 604 a-b controlled by the I/O controller 602. If an incoming transaction is not addressed to one of the I/O devices 604 a-b, the destination decoder 608 does not transmit the transaction further within the I/O controller 602.
  • Referring to FIG. 8, a flowchart is shown of a method 800 that is performed by the destination decoder 608 when an incoming transaction 612 is received on lines 610 in one embodiment of the present invention. The destination decoder 608 receives the incoming transaction 612 (step 802). In one embodiment of the present invention, the transaction 612 includes (1) a source terminus identifier (e.g., the terminus ID of the device that originated the transaction 612), represented as “S” in FIG. 6A; (2) the physical address to access, represented as “a” in FIG. 6A; (3) the transaction type (e.g., read or write), represented as “T” in FIG. 6A; and (4) data associated with the transaction (e.g., data to write if the transaction 612 is a write command), represented as “d” in FIG. 6A.
  • The destination decoder 608 examines the source terminus ID in transaction 612 to determine whether the device that transmitted the transaction 612 is allocated to any of the partitions to which the I/O ports 628 a-b are allocated (step 804). If the transaction 612 was not transmitted by such a device, the transaction is not authorized to access the devices 604 a-b, and the destination decoder 608 does not transmit the transaction 612 to the I/O devices 604 a-b (step 806).
  • More specifically, the destination decoder 608 may maintain a list 614 of valid source terminus IDs. The list 614 may contain the source terminus IDs of those devices in the system 100 that are allocated to any of the partitions 704 a-b to which the I/O ports 628 a-b are allocated. The destination decoder 608 may perform step 804 by determining whether the source terminus ID in transaction 612 is in the list 614 and by then determining that the transaction 612 is not from an appropriate partition if the source terminus ID is not in the list 614.
  • If the destination decoder 608 determines in step 804 that the transaction 612 is from an appropriate device, the destination decoder 608 maps the source terminus ID to the partition ID value of the one of the I/O ports 628 a-b that is in the same partition as the device that transmitted the transaction 612 (step 808). The destination decoder 608 may maintain a table 616 or other mapping of source terminus identifiers to partition ID register values. The destination decoder 608 may therefore perform step 808 by using the source terminus ID in transaction 612 as an index into the table 616 and thereby identifying the corresponding partition ID register value.
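  • An illustration-only model of the two structures just described (list 614 and table 616); all terminus IDs and partition values are invented, and an unauthorized source yields a sentinel indicating that the transaction is dropped.

      #include <stdio.h>

      struct source_map { int src_terminus; int partition_id; };

      /* table 616: each authorized source terminus (which also serves as
       * the content of list 614 here) maps to the partition ID of the
       * I/O port in the same partition. */
      static const struct source_map table616[] = {
          { 7, 0 },  /* source 7 shares partition 0 with port 628a */
          { 9, 1 },  /* source 9 shares partition 1 with port 628b */
      };

      static int decode_source(int src_terminus)
      {
          for (size_t i = 0; i < sizeof table616 / sizeof table616[0]; i++)
              if (table616[i].src_terminus == src_terminus)
                  return table616[i].partition_id;
          return -1;  /* not in list 614: do not forward (step 806) */
      }

      int main(void)
      {
          printf("%d\n", decode_source(9)); /* 1  */
          printf("%d\n", decode_source(5)); /* -1 */
          return 0;
      }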
  • The destination decoder 608 generates a first modified transaction 620 that contains: (1) the partition ID register value (p) identified in step 808; (2) the physical address (a) contained in the transaction 612; and (3) the data (d) contained in the transaction 612. The destination decoder 608 transmits the first modified transaction 620 to a transaction router 622 on lines 618 (step 810).
  • The transaction router 622 routes the transaction 620 to the one of the I/O ports 628 a-b that is allocated to the partition identified in the first modified transaction 620 (step 812). More specifically, the transaction router 622 identifies the one of the I/O ports 628 a-b that is allocated to the partition ID contained in the first modified transaction 620 (step 814). The transaction router 622 may, for example, contain a lookup table that maps partition IDs to I/O ports 628 a-b, and may use that lookup table to perform step 814. The transaction router 622 may generate a second modified transaction by stripping the partition ID from the first modified transaction 620 and then transmit the second modified transaction to the device identified in step 814 (step 816).
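  • A minimal sketch of the router's per-partition port selection (steps 814-816); the enum names are invented, and the stripping of the partition ID before forwarding is represented only by a comment.

      #include <stdio.h>

      enum io_port { PORT_628A = 0, PORT_628B = 1, PORT_NONE = -1 };

      /* step 814: the partition ID in the first modified transaction
       * selects the I/O port; the partition ID is then stripped and the
       * remainder forwarded (step 816). */
      static enum io_port port_for_partition(int partition_id)
      {
          switch (partition_id) {
          case 0:  return PORT_628A;  /* partition 704a -> port 628a */
          case 1:  return PORT_628B;  /* partition 704b -> port 628b */
          default: return PORT_NONE;  /* no port in this controller  */
          }
      }

      int main(void)
      {
          printf("%d\n", port_for_partition(1)); /* 1, i.e., port 628b */
          return 0;
      }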
  • In one embodiment of the present invention, I/O ports 628 a-b may either: (1) both be allocated to partition 704 a; or (2) be separately allocated to partitions 704 a-b in the manner illustrated in FIG. 7. To enable the I/O controller 602 to implement either such partitioning of the I/O ports 628 a-b, I/O controller 602 includes switch 632. I/O device 604 a is coupled to switch 632 over lines 630 a and I/O device 604 b is coupled to switch 632 over lines 630 b. Switch 632 is in turn coupled to I/O port 628 a over lines 630 c. In one embodiment of the present invention, switch 632 creates a permanent pass-through connection between I/O device 604 a and I/O port 628 a. As a result, I/O device 604 a communicates with I/O controller 602 through I/O port 628 a. Transaction router 622 may be configured to route transactions associated with partition 704 a to I/O port 628 a and thereby to implement the allocation of I/O device 604 a to partition 704 a.
  • If both I/O ports 628 a-b are allocated to partition 704 a, I/O port 628 b may be disabled and the switch 632 may be set to a first setting which routes all communications to and from I/O device 604 b through I/O port 628 a. If I/O port 628 a is allocated to partition 704 a and I/O port 628 b is allocated to partition 704 b (as shown in FIG. 7), then I/O port 628 b may be enabled and the switch 632 may be set to a second setting which routes all communications to and from I/O device 604 b through I/O port 628 b. Note that use of the switch 632 in the manner described above is merely one example of a way in which a transaction may be decoded and routed to a specific port, and does not constitute a limitation of the present invention.
  • Returning to step 812 of method 800, the transaction router 622 may maintain a mapping of partition ID values and associated I/O ports. For example, consider the case in which I/O device 604 a is mapped to partition 704 a and in which I/O device 604 b is mapped to partition 704 b (as shown in FIG. 7). In such a case, if the partition ID in the first modified transaction 620 identifies partition 704 a, the transaction router 622 may generate and transmit a second modified transaction 626 a to I/O port 628 a on lines 624 a, through which the second modified transaction 626 a may be forwarded to I/O device 604 a on lines 630 c, through switch 632, and then on lines 630 a. Similarly, if the partition ID in the first modified transaction 620 identifies partition 704 b, the transaction router 622 may generate and transmit a second modified transaction 626 b to I/O port 628 b on lines 624 b, through which the second modified transaction 626 b may be forwarded to I/O device 604 b on lines 630 b. Note that the mapping 702 illustrated in FIG. 7, in which there is a one-to-one mapping between ports 628 a-b and partitions 704 a-b, is provided merely as an example and does not constitute a limitation of the present invention. Techniques disclosed herein may, for example, be used in conjunction with mappings of multiple ports to a single partition, as may be accomplished by using additional bits of the physical address as part of the partition ID.
  • Examples of techniques will now be described for enabling the I/O devices 604 a-b to perform outgoing communications through the I/O controller 602 when the I/O devices 604 a-b are allocated to different partitions. Assume once again that I/O port 628 a (and therefore I/O device 604 a) is mapped to partition 704 a and that I/O port 628 b (and therefore I/O device 604 b) is mapped to partition 704 b (as shown in FIG. 7). Now consider an example in which an outgoing transaction 636 a is generated by I/O device 604 a on lines 634 a (through I/O port 628 a). Transaction 636 a includes a physical address (a) and data (d).
  • I/O controller 602 includes a plurality of partition ID registers 606 a-b associated with the I/O ports 628 a-b, respectively. In particular, partition ID register 606 a is associated with I/O port 628 a and represents mapping 702 a (FIG. 7). Similarly, partition ID register 606 b is associated with I/O port 628 b and represents mapping 702 b. Each of the partition ID registers 606 a-b includes at least enough bits to distinguish among the partitions to which I/O ports 628 a-b are allocated.
  • Each of the partition ID registers 606 a-b stores a unique partition ID value that uniquely identifies the partition to which the corresponding one of the I/O ports 628 a-b is allocated. For example, referring again to the example illustrated in FIG. 7, the value 0 (binary 00) may be stored in partition ID register 606 a, thereby indicating that I/O port 628 a is allocated to partition 0 (704 a). Similarly, the value 1 (binary 01) may be stored in partition ID register 606 b, thereby indicating that I/O port 628 b is allocated to partition 1 (704 b). The I/O controller 602 may be configured so that the partition ID values stored in the partition ID registers 606 a-b cannot be changed by the operating system executing on the computer system 100.
  • Referring to FIG. 9, a flowchart is shown of a method 900 that is performed by bit substitution circuit 638 a according to one embodiment of the present invention when outgoing transaction 636 a is transmitted on lines 634 a by device 604 a. The bit substitution circuit 638 a receives the outgoing transaction 636 a (step 902). In response to receiving the transaction 636 a, the bit substitution circuit 638 a reads the partition ID value from partition ID register 606 a on lines 640 a (step 904). The bit substitution circuit 638 a writes the partition ID value into the physical address, thereby producing a partition-identifying address (step 906).
  • The partition-identifying address produced in step 906 may, for example, have the layout illustrated in FIG. 12B. The example partition-identifying address 1210 illustrated in FIG. 12B is 64 bits wide. Portion 1212 (bits 0-54) of partition-identifying address 1210 contains the physical address contained in the original transaction 636 a. In one embodiment of the present invention, bit substitution circuit 638 a writes the partition ID value obtained from partition ID register 606 a into portion 1214 (bit 55) of the partition-identifying address 1210 (step 906). In other words, bit substitution circuit 638 a appends the partition ID value to the original physical address. Portion 1218, which includes both portions 1212 and 1214, therefore unambiguously identifies the system memory address indicated by the original transaction 636 a. Portion 1216 (bits 56-63) of the partition-identifying address 1210 is unused. Portion 1218 therefore represents the “used portion” of address 1210 because the combination of the partition ID portion 1214 and the physical address portion 1212 is used to specify a unique address in the system 100.
  • Note that the partition ID field 1214 of address 1210 is only one bit wide, in contrast to the partition ID field 1204 of address 1200 (FIG. 12A), which is two bits wide. The partition ID field 1214 of address 1210 need only be wide enough to distinguish among the partitions to which I/O ports 628 a-b are allocated. Because I/O ports 628 a-b are allocated to two partitions in the example illustrated in FIGS. 6A-6B, partition ID field 1214 need only be one bit wide. Partition ID field 1204 of address 1200 (FIG. 12A), in contrast, is two bits wide because it must be capable of distinguishing among all partitions 504 a-d in the system. The required minimum width of the partition ID fields 1204 and 1214 may, of course, vary depending on the number of unique partitions they are required to represent.
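  • A sketch of the FIG. 12B construction, under the same caveats as the earlier FIG. 12A sketch: bits 0-54 carry the physical address, bit 55 carries the one-bit partition ID, and bits 56-63 remain zero; the names are hypothetical.

      #include <stdint.h>
      #include <stdio.h>

      #define IO_PHYS_BITS 55
      #define IO_PHYS_MASK ((1ULL << IO_PHYS_BITS) - 1)

      /* Append a one-bit partition ID at bit 55 of the physical address. */
      static uint64_t io_pia(uint64_t phys, unsigned pid)
      {
          return (phys & IO_PHYS_MASK) | ((uint64_t)(pid & 1u) << IO_PHYS_BITS);
      }

      int main(void)
      {
          printf("0x%016llx\n", (unsigned long long)io_pia(0x2000, 1));
          /* prints 0x0080000000002000: bit 55 set for partition 1 */
          return 0;
      }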
  • The particular layout of the partition-identifying address 1210 in FIG. 12B is shown merely for purposes of example and does not constitute a limitation of the present invention. Rather, partition-identifying addresses of any size and having any layout may be used in conjunction with embodiments of the present invention. The bit substitution circuit 638 a generates a first modified transaction 642 a containing the partition-identifying address generated in step 906 (step 908). The bit substitution circuit 638 a transmits the first modified transaction 642 a to cache 646 on lines 644 a (step 910).
  • Referring to FIG. 10, a flowchart is shown of a method 1000 that is performed by the cache 646 in response to receipt of the first modified transaction 642 a according to one embodiment of the present invention. The cache 646 receives the first modified transaction 642 a from the bit substitution circuit 638 a (step 1002). The cache 646 determines whether the first modified transaction 642 a can be satisfied using cache data stored locally in cache lines 648 (step 1004). If there is a cache hit, the cache 646 performs the transaction locally (i.e., within the cache lines 648) (step 1006) and the method 1000 terminates. Data is written into the cache from the I/O card via lines 650. If the transaction 636 a is a read command, the cache 646 may transmit the read values to the device 604 a on lines 650.
  • If there is a cache miss, the cache 646 transmits a second modified transaction 652 to an address mapper 656 on lines 654 (step 1008). In one embodiment of the present invention, the second modified transaction 652 contains the partition ID value and physical address from the first modified transaction 642 a.
  • Referring to FIG. 11, a flowchart is shown of a method 1100 that is performed by the address mapper 656 when it receives the second modified transaction 652 in one embodiment of the invention. The address mapper 656 receives the second modified transaction 652 on lines 654 (step 1102). The address mapper 656 maintains a mapping 658 of address-partition ID pairs to destination terminus IDs. The address mapper 656 uses the mapping 658 to map the partition ID and address in the second modified transaction 652 into a destination terminus ID (step 1104).
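  • A sketch of mapping 658 as a (partition ID, address range) lookup; the ranges and terminus IDs are invented, and the point is only that the same physical address can map to different destinations in different partitions.

      #include <stdint.h>
      #include <stdio.h>

      struct entry658 {
          unsigned partition_id;
          uint64_t base, limit;
          int      dest_terminus;
      };

      static const struct entry658 map658[] = {
          { 0, 0x00000000ULL, 0x3FFFFFFFULL, 4 }, /* partition 0 memory */
          { 1, 0x00000000ULL, 0x3FFFFFFFULL, 6 }, /* same range, partition 1 */
      };

      static int lookup658(unsigned pid, uint64_t addr)
      {
          for (size_t i = 0; i < sizeof map658 / sizeof map658[0]; i++)
              if (map658[i].partition_id == pid &&
                  addr >= map658[i].base && addr <= map658[i].limit)
                  return map658[i].dest_terminus;
          return -1;  /* unmapped */
      }

      int main(void)
      {
          printf("%d %d\n", lookup658(0, 0x1000), lookup658(1, 0x1000));
          /* prints: 4 6 */
          return 0;
      }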
  • The address mapper 656 generates and transmits a third modified transaction 670 to the system fabric 116 on lines 672 (step 1106). The third modified transaction 670 includes: (1) the destination terminus ID identified in step 1104; (2) the physical address from the second modified transaction 652; and (3) the data from the second modified transaction 652 (if any). Note that the third modified transaction 670 does not include the partition ID identified in step 904 (FIG. 9), because in the embodiment illustrated in FIGS. 6A-6B the partition ID is only used to distinguish internally (i.e., within the I/O controller 602) among different partitions.
  • As described above, router 228 routes the third modified transaction 670 to the memory controller or other device having the destination terminus ID contained in the third modified transaction 670 using the techniques described above with respect to FIG. 2.
  • Although the examples described above relate to partition 704 a and corresponding I/O port 628 a, the same or similar techniques may be used in conjunction with partition 704 b and corresponding I/O port 628 b. For example, bit substitution circuit 638 b may receive outgoing transaction 636 b from device 604 b on lines 634 b and substitute therein the value of partition ID register 606 b, thereby generating and transmitting a first modified transaction 642 b on lines 644 b. The first modified transaction 642 b may then be processed in the manner described above.
  • Among the advantages of the invention are one or more of the following.
  • Existing partitionable computer architectures typically allocate resources to partitions on a per-chip basis. In other words, in a conventional partitionable computer, all of the resources (such as processor cores) in a single chip must be allocated to at most one partition. As the number and power of resources in a single chip increases, such per-chip resource allocation imposes limitations on the degree of granularity with which resources may be allocated to partitions in a partitionable computer system. These limitations restrict the extent to which resources may be dynamically allocated to partitions in a manner that makes optimal use of such resources.
  • The techniques disclosed herein address this problem by providing the ability to allocate resources on a sub-chip basis. The ability to allocate multiple resources on a single chip to multiple partitions increases the degree to which such resources may be allocated optimally in response to changing conditions. Sub-chip partitioning allows partitionable computer systems to take full advantage of the cost and size reductions made possible by the current trend in computer chip design of providing an increasing number of functions on a single chip, while still providing the fine-grained resource allocation demanded by users.
  • Furthermore, embodiments of the present invention enable sub-chip partitioning to be accomplished using relatively localized modifications to existing circuitry, thereby enabling a substantial portion of existing circuitry to be used without modification in conjunction with embodiments of the present invention. For example, in the system illustrated in FIG. 2, the cores 204 a-n, cache 208, and fabric 116 may be prior art components. As a result, embodiments of the present invention may be implemented relatively easily, quickly, and inexpensively.
  • A further advantage of techniques disclosed herein is that the bit substitution circuits 212 a-n and 638 a-b may enforce inter-partition security by preventing the operating system in the corresponding partition from accessing addresses in other partitions. As described above, such security may be provided by overwriting any values the operating system writes into the upper bits of addresses it generates (e.g., bits in portions 1204 or 1206 of address 1200 (FIG. 12A) and bits in portions 1214 or 1216 of address 1210 (FIG. 12B)). The techniques disclosed herein thereby provide a degree of hardware-enforced inter-partition security that cannot be circumvented by malicious or improperly-designed software.
  • It is to be understood that although the invention has been described above in terms of particular embodiments, the foregoing embodiments are provided as illustrative only, and do not limit or define the scope of the invention. Various other embodiments, including but not limited to the following, are also within the scope of the claims. For example, elements and components described herein may be further divided into additional components or joined together to form fewer components for performing the same functions.
  • The term “resources” refers herein to hardware resources in a computer system, such as processor cores (FIG. 2) and I/O ports (FIGS. 6A-6B). A chip may contain one or more hardware resources. Although processor cores and I/O ports are provided herein as examples of hardware resources that may individually be allocated to partitions in embodiments of the present invention, embodiments of the present invention may be used to allocate other kinds of hardware resources to partitions on a sub-chip basis. Furthermore, although the techniques illustrated by the example in FIG. 2 are applied to a plurality of CPU cores 204 a-n, such techniques may be applied to I/O ports or to any other kind of resource. Similarly, although the techniques illustrated by the example in FIGS. 6A-6B are applied to a plurality of I/O ports 628 a-b, such techniques may be applied to CPU cores or to any other kind of resource.
  • In general, techniques disclosed herein may be used in a system including a cache to allocate the cache among multiple partitions. Furthermore, any resource which is accessed using memory-mapped transactions may be allocated to a particular partition in a partitionable computer system using techniques disclosed herein.
  • For example, general purpose event registers (GPEs) typically are allocated to particular partitions. A particular GPE, therefore, typically is addressable within the address space of the partition to which it is allocated. Techniques disclosed herein may be employed to make the GPEs of each partition accessible over the system fabric 116 at unique system (fabric) addresses.
  • Although certain examples provided above involve allocating a plurality of resources on a single chip (integrated circuit) to a plurality of partitions, the techniques disclosed herein are not limited to use in conjunction with resources on a single chip. Rather, more generally, techniques disclosed herein may be used to allocate a plurality of resources in a computer system to a plurality of partitions in the computer system.
  • Although only a single memory controller is shown in each of the cell boards 102 a-d in FIG. 1, this is not a requirement of the present invention. Rather, a cell board may contain multiple memory controllers, each of which may have its own terminus ID. Those of ordinary skill in the art will appreciate how to implement embodiments of the present invention in systems including multiple memory controllers on a single cell board.
  • Although in the example illustrated in FIG. 2 the core 204 a issues memory write command 230 a, the memory write command 230 a is just one example of a memory access request, which is in turn merely one example of a transaction to which the techniques disclosed herein may apply.
  • Although partition ID values are stored in partition ID registers 210 a-n in FIG. 2, partition ID values may be represented and stored in any manner. For example, partition ID values need not each be stored in a distinct register and need not be represented using the particular numbering scheme described herein.
  • Although various embodiments of the present invention are described herein in conjunction with symmetric multiprocessor computer architectures (SMPs), embodiments of the present invention are not limited to use in conjunction with SMPs. Embodiments of the present invention may, for example, be used in conjunction with NUMA (non-uniform memory access) multiprocessor computer architectures.
  • Although four cell boards 102 a-d are shown in FIG. 1, this is not a requirement of the present invention. Rather, the techniques disclosed herein may be used in conjunction with multiprocessor computer systems having any number of cell boards. Furthermore, each cell board in the system may have any number of processors (including one). The term “cell board” as used herein is not limited to any particular kind of cell board, but rather refers generally to any set of electrical and/or mechanical components that allow a set of one or more processors to communicate over a system fabric through an interface such as an agent chip.
  • Although the fabric agent chip 114 a and memory controller 110 a are illustrated as separate and distinct components in FIG. 1, this is not a requirement of the present invention. Rather, the fabric agent chip 114 a and memory controller 110 a may be integrated into a single chip package.

Claims (44)

1. A partitionable computer system partitioned into a plurality of partitions, the computer system comprising an integrated circuit, the integrated circuit comprising a first hardware resource allocated to a first one of the plurality of partitions and a second hardware resource allocated to a second one of the plurality of partitions that differs from the first one of the plurality of partitions.
2. The computer system of claim 1, wherein the integrated circuit comprises a processor and wherein the first and second hardware resources respectively comprise a first and second processor core.
3. The computer system of claim 1, wherein the integrated circuit comprises an input/output controller and wherein the first and second hardware resources respectively comprise a first and second input/output port.
4. The computer system of claim 1, wherein the first hardware resource comprises means for outputting a first physical address in a first address space allocated to the first one of the plurality of partitions, and wherein the computer system further comprises:
first partition identification means for storing a first partition identification value identifying the first one of the plurality of partitions; and
first bit substitution means, coupled to the first hardware resource and the first partition identification means, for producing a first partition-identifying address by storing the first partition identification value in at least part of the first physical address.
5. The computer system of claim 4, further comprising:
a system fabric; and
means for transmitting the first partition-identifying address over the system fabric.
6. The computer system of claim 4, wherein the second hardware resource comprises means for outputting a second physical address in a second address space allocated to the second one of the plurality of partitions, and wherein the computer system further comprises:
second partition identification means for storing a second partition identification value identifying the second one of the plurality of partitions; and
second bit substitution means, coupled to the second hardware resource and the second partition identification means, for producing a second partition-identifying address by storing the second partition identification value in at least part of the second physical address.
7. The computer system of claim 6, further comprising:
a system fabric; and
means for transmitting the second partition-identifying address over the system fabric.
8. The computer system of claim 6, wherein the first partition identification value has a first size and wherein the second partition identification value has a second size which differs from the first size.
9. The computer system of claim 4, further comprising:
a cache; and
means for transmitting the first partition-identifying address to the cache.
10. The computer system of claim 9, further comprising:
an address mapper coupled to the cache to map the first physical address into a first system address, the first system address including the first partition identification value.
11. The computer system of claim 10, further comprising:
a system fabric; and
means for transmitting the first system address over the system fabric.
12. The computer system of claim 4, wherein the first bit substitution means comprises means for storing a programmable size of the first partition identification value, and wherein the first partition identification value has the programmable size.
13. The computer system of claim 1, wherein the first one of the plurality of partitions is associated with a first partition identification value, wherein the second one of the plurality of partitions is associated with a second partition identification value, and wherein the computer system further comprises:
means for receiving a first transaction including a source terminus identifier, the source terminus identifier identifying a source device from which the first transaction was received;
means for identifying, based on the source terminus identifier, a third partition identification value identifying one of the first and second of the plurality of partitions;
means for transmitting at least some of the first transaction to the first hardware resource if the third partition identification value is equal to the first partition identification value; and
means for transmitting at least some of the first transaction to the second hardware resource if the third partition identification value is equal to the second partition identification value.
14. The computer system of claim 1, wherein the first and second hardware resources comprise microprocessor cores.
15. The computer system of claim 1, wherein the first and second hardware resources comprise input/output ports of an input/output controller.
16. A computer-implemented method for use in a computer system partitioned into a plurality of partitions, the computer system comprising an integrated circuit, the integrated circuit comprising a first hardware resource and a second hardware resource, the method comprising steps of:
(A) allocating the first hardware resource to a first one of the plurality of partitions; and
(B) allocating the second hardware resource to a second one of the plurality of partitions.
17. The method of claim 16, wherein the integrated circuit comprises a processor and wherein the first and second hardware resources respectively comprise a first and second processor core.
18. The method of claim 16, wherein the integrated circuit comprises an input/output controller and wherein the first and second hardware resources respectively comprise a first and second input/output port of the input/output controller.
19. The method of claim 16, wherein step (A) comprises steps of:
(A)(1) receiving from the first hardware resource a first physical address in a first address space allocated to the first one of the plurality of partitions;
(A)(2) producing a first partition-identifying address by storing, in at least part of the first physical address, a first partition identification value identifying the first one of the plurality of partitions.
20. The method of claim 19, further comprising a step of:
(A)(3) transmitting the first partition-identifying address over a system fabric.
21. The method of claim 19, wherein step (B) comprises steps of:
(B)(1) receiving from the second hardware resource a second physical address in a second address space allocated to the second one of the plurality of partitions;
(B)(2) producing a second partition-identifying address by storing, in at least part of the second physical address, a second partition identification value identifying the second one of the plurality of partitions.
22. The method of claim 21, further comprising a step of:
(B)(3) transmitting the second partition-identifying address over a system fabric.
23. The method of claim 21, wherein the first partition identification value has a first size and wherein the second partition identification value has a second size which differs from the first size.
24. The method of claim 19, wherein the step (A) further comprises a step of:
(A)(3) transmitting the first partition-identifying address to a cache.
25. The method of claim 24, wherein the step (A) further comprises a step of:
(A)(4) mapping the first physical address into a first system address, the first system address including the first partition identification value.
26. The method of claim 25, wherein the step (A) further comprises a step of:
(A)(5) transmitting the first system address to a system fabric.
27. The method of claim 19, wherein the first physical address comprises a used portion and an unused portion, and wherein step (A)(2) comprises a step of storing the first partition identification value in the unused portion of the first physical address.
28. The method of claim 16, wherein the first one of the plurality of partitions is associated with a first partition identification value, wherein the second one of the plurality of partitions is associated with a second partition identification value, and wherein step (A) comprises steps of:
(A)(1) receiving a first transaction including a source terminus identifier, the source terminus identifier identifying a source device from which the first transaction was received;
(A)(2) identifying, based on the source terminus identifier, a third partition identification value identifying one of the first and second of the plurality of partitions;
(A)(3) transmitting at least some of the first transaction to the first hardware resource if the third partition identification value is equal to the first partition identification value; and
(A)(4) transmitting at least some of the first transaction to the second hardware resource if the third partition identification value is equal to the second partition identification value.
29. A device for use in a partitionable computer system partitioned into a plurality of partitions, the device comprising:
partition identification means for storing a partition identification value identifying a select one of the plurality of partitions;
means for receiving a first transaction including a physical address in an address space allocated to the select one of the plurality of partitions; and
means for producing a partition-identifying address by storing the partition identification value in at least part of the physical address.
30. The device of claim 29, wherein the physical address comprises a used portion and an unused portion, and wherein the means for producing comprises means for storing the partition identification value in the unused portion of the physical address.
31. The device of claim 29, further comprising:
means for generating a modified transaction including the partition-identifying address; and
means for transmitting the modified transaction to a system fabric.
32. The device of claim 29, further comprising:
means for generating a first modified transaction including the partition-identifying address;
means for generating, based on the first modified transaction, a second modified transaction not including the partition-identifying address; and
means for transmitting the second modified transaction to a system fabric.
33. The device of claim 29, wherein the means for producing the partition-identifying address comprises means for storing a programmable size of the partition identification value, and wherein the partition identification value has the programmable size.
34. A method for use in a partitionable computer system partitioned into a plurality of partitions, the method comprising steps of:
(A) receiving a first transaction including a physical address in an address space allocated to a select one of the plurality of partitions; and
(B) producing a partition-identifying address by storing, in at least part of the physical address, a partition identification value identifying the select one of the plurality of partitions.
35. The method of claim 34, wherein the physical address comprises a used portion and an unused portion, and wherein step (B) comprises a step of storing the partition identification value in the unused portion of the physical address.
36. The method of claim 34, further comprising steps of:
(C) generating a modified transaction including the partition-identifying address; and
(D) transmitting the modified transaction to a system fabric.
37. The method of claim 34, further comprising steps of:
(C) generating a first modified transaction including the partition-identifying address;
(D) generating, based on the first modified transaction, a second modified transaction not including the partition-identifying address; and
(E) transmitting the second modified transaction to a system fabric.
38. A device for use in a partitionable computer system partitioned into a plurality of partitions, the device comprising:
means for receiving a first transaction including a physical address in an address space allocated to a select one of the plurality of partitions; and
means for producing a partition-identifying address by storing, in at least part of the physical address, a partition identification value identifying the select one of the plurality of partitions.
39. The device of claim 38, wherein the physical address comprises a used portion and an unused portion, and wherein the means for producing comprises means for storing the partition identification value in the unused portion of the physical address.
40. The device of claim 38, further comprising:
means for generating a modified transaction including the partition-identifying address; and
means for transmitting the modified transaction to a system fabric.
41. The device of claim 38, further comprising:
means for generating a first modified transaction including the partition-identifying address;
means for generating, based on the first modified transaction, a second modified transaction not including the partition-identifying address; and
means for transmitting the second modified transaction to a system fabric.
42. A device for use in a partitionable computer system partitioned into a plurality of partitions, wherein a first one of the plurality of partitions is associated with a first partition identification value, wherein a second one of the plurality of partitions is associated with a second partition identification value, the device comprising:
means for receiving a transaction including a source terminus identifier, the source terminus identifier identifying a source device from which the transaction was received;
means for identifying, based on the source terminus identifier, a third partition identification value identifying one of the first and second of the plurality of partitions;
means for transmitting at least some of the transaction to a first hardware resource allocated to the first one of the plurality of partitions if the third partition identification value is equal to the first partition identification value; and
means for transmitting at least some of the transaction to a second hardware resource allocated to the second one of the plurality of partitions if the third partition identification value is equal to the second partition identification value.
43. A method for use in a partitionable computer system partitioned into a plurality of partitions, wherein a first one of the plurality of partitions is associated with a first partition identification value, wherein a second one of the plurality of partitions is associated with a second partition identification value, the method comprising steps of:
(A) receiving a transaction including a source terminus identifier, the source terminus identifier identifying a source device from which the transaction was received;
(B) identifying, based on the source terminus identifier, a third partition identification value identifying one of the first and second of the plurality of partitions;
(C) transmitting at least some of the transaction to a first hardware resource allocated to the first one of the plurality of partitions if the third partition identification value is equal to the first partition identification value; and
(D) transmitting at least some of the transaction to a second hardware resource allocated to the second one of the plurality of partitions if the third partition identification value is equal to the second partition identification value.
44. A device for use in a partitionable computer system partitioned into a plurality of partitions, wherein a first one of the plurality of partitions is associated with a first partition identification value, wherein a second one of the plurality of partitions is associated with a second partition identification value, the device comprising:
means for receiving a transaction including a source terminus identifier, the source terminus identifier identifying a source device from which the transaction was received;
means for identifying, based on the source terminus identifier, a third partition identification value identifying one of the first and second of the plurality of partitions;
means for transmitting at least some of the transaction to a first hardware resource allocated to the first one of the plurality of partitions if the third partition identification value is equal to the first partition identification value; and
means for transmitting at least some of the transaction to a second hardware resource allocated to the second one of the plurality of partitions if the third partition identification value is equal to the second partition identification value.
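
Claims 13, 28, and 42-44 above recite routing an incoming transaction by resolving its source terminus identifier to a partition identification value and forwarding the transaction to the hardware resource allocated to the matching partition. The following C sketch illustrates that lookup-and-dispatch flow under stated assumptions: a fixed-size routing table, exactly two partitions, and names invented for illustration. It is a conceptual model, not the patent's circuitry.

#include <stdint.h>
#include <stdio.h>

#define MAX_TERMINI 16u

struct transaction {
    uint8_t  src_terminus; /* identifies the device that sent the transaction */
    uint64_t payload;
};

/* Assumed routing table, terminus ID -> partition ID, configured when
 * resources are allocated to partitions. */
static uint8_t terminus_to_partition[MAX_TERMINI];

static void deliver(const struct transaction *t, const char *resource)
{
    printf("terminus %u -> %s (payload 0x%llx)\n",
           t->src_terminus, resource, (unsigned long long)t->payload);
}

/* Forward at least some of the transaction to the hardware resource
 * whose partition ID matches the one looked up for the source terminus. */
static void route(const struct transaction *t,
                  uint8_t first_partition_id, uint8_t second_partition_id)
{
    uint8_t pid = terminus_to_partition[t->src_terminus % MAX_TERMINI];

    if (pid == first_partition_id)
        deliver(t, "first hardware resource");
    else if (pid == second_partition_id)
        deliver(t, "second hardware resource");
    else
        printf("terminus %u: no matching partition, transaction dropped\n",
               t->src_terminus);
}

int main(void)
{
    terminus_to_partition[2] = 0; /* terminus 2 belongs to partition 0 */
    terminus_to_partition[5] = 1; /* terminus 5 belongs to partition 1 */

    struct transaction t1 = { .src_terminus = 2, .payload = 0xabc };
    struct transaction t2 = { .src_terminus = 5, .payload = 0xdef };
    route(&t1, 0, 1);
    route(&t2, 0, 1);
    return 0;
}

The point of the lookup is isolation: a transaction is only ever delivered to a resource in the partition its source terminus was assigned to, so resources sharing one chip remain invisible to each other's partitions.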
US10/898,590 2004-07-23 2004-07-23 Allocating resources to partitions in a partitionable computer Expired - Fee Related US7606995B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US10/898,590 US7606995B2 (en) 2004-07-23 2004-07-23 Allocating resources to partitions in a partitionable computer
JP2005202511A JP2006040275A (en) 2004-07-23 2005-07-12 Allocating resource to partition in partitionable computer
CN200510087531.XA CN1725183A (en) 2004-07-23 2005-07-22 Allocating resources to partitions in a partitionable computer
US12/510,184 US8112611B2 (en) 2004-07-23 2009-07-27 Allocating resources to partitions in a partitionable computer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/898,590 US7606995B2 (en) 2004-07-23 2004-07-23 Allocating resources to partitions in a partitionable computer

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/510,184 Continuation US8112611B2 (en) 2004-07-23 2009-07-27 Allocating resources to partitions in a partitionable computer

Publications (2)

Publication Number Publication Date
US20060020769A1 true US20060020769A1 (en) 2006-01-26
US7606995B2 US7606995B2 (en) 2009-10-20

Family

ID=35658611

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/898,590 Expired - Fee Related US7606995B2 (en) 2004-07-23 2004-07-23 Allocating resources to partitions in a partitionable computer
US12/510,184 Active 2025-02-04 US8112611B2 (en) 2004-07-23 2009-07-27 Allocating resources to partitions in a partitionable computer

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/510,184 Active 2025-02-04 US8112611B2 (en) 2004-07-23 2009-07-27 Allocating resources to partitions in a partitionable computer

Country Status (3)

Country Link
US (2) US7606995B2 (en)
JP (1) JP2006040275A (en)
CN (1) CN1725183A (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160098279A1 (en) * 2005-08-29 2016-04-07 Searete Llc Method and apparatus for segmented sequential storage
JP2009252202A (en) * 2008-04-11 2009-10-29 Hitachi Ltd Computer system
JP2010122805A (en) * 2008-11-18 2010-06-03 Hitachi Ltd Virtual server system, physical cpu and method for allocating physical memory
US20120159124A1 (en) * 2010-12-15 2012-06-21 Chevron U.S.A. Inc. Method and system for computational acceleration of seismic data processing
US9229884B2 (en) * 2012-04-30 2016-01-05 Freescale Semiconductor, Inc. Virtualized instruction extensions for system partitioning
US9152587B2 (en) 2012-05-31 2015-10-06 Freescale Semiconductor, Inc. Virtualized interrupt delay mechanism
US9442870B2 (en) 2012-08-09 2016-09-13 Freescale Semiconductor, Inc. Interrupt priority management using partition-based priority blocking processor registers
US9436626B2 (en) 2012-08-09 2016-09-06 Freescale Semiconductor, Inc. Processor interrupt interface with interrupt partitioning and virtualization enhancements

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5819061A (en) * 1994-07-25 1998-10-06 International Business Machines Corporation Method and apparatus for dynamic storage reconfiguration in a partitioned environment

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4300192A (en) * 1974-04-18 1981-11-10 Honeywell Information Systems Inc. Method and means for storing and accessing information in a shared access multiprogrammed data processing system
US5129077A (en) * 1983-05-31 1992-07-07 Thinking Machines Corporation System for partitioning a massively parallel computer
US4925311A (en) * 1986-02-10 1990-05-15 Teradata Corporation Dynamically partitionable parallel processors
US4843541A (en) * 1987-07-29 1989-06-27 International Business Machines Corporation Logical resource partitioning of a data processing system
US5210844A (en) * 1988-09-29 1993-05-11 Hitachi, Ltd. System using selected logical processor identification based upon a select address for accessing corresponding partition blocks of the main memory
US5117350A (en) * 1988-12-15 1992-05-26 Flashpoint Computer Corporation Memory address mechanism in a distributed memory architecture
US5193202A (en) * 1990-05-29 1993-03-09 Wavetracer, Inc. Processor array with relocated operand physical address generator capable of data transfer to distant physical processor for each virtual processor while simulating dimensionally larger array processor
US5428758A (en) * 1991-05-10 1995-06-27 Unisys Corporation Method and system for remapping memory from one physical configuration to another physical configuration
US5319760A (en) * 1991-06-28 1994-06-07 Digital Equipment Corporation Translation buffer for virtual machines with address space match
US5522075A (en) * 1991-06-28 1996-05-28 Digital Equipment Corporation Protection ring extension for computers having distinct virtual machine monitor and virtual machine address spaces
US5561768A (en) * 1992-03-17 1996-10-01 Thinking Machines Corporation System and method for partitioning a massively parallel computer system
US5710938A (en) * 1995-07-19 1998-01-20 Unisys Corporation Data processing array in which sub-arrays are established and run independently
US5765198A (en) * 1996-02-01 1998-06-09 Cray Research, Inc. Transparent relocation of real memory addresses in the main memory of a data processor
US5761516A (en) * 1996-05-03 1998-06-02 Lsi Logic Corporation Single chip multiprocessor architecture with internal task switching synchronization bus
US6226671B1 (en) * 1996-07-02 2001-05-01 Sun Microsystems, Inc. Shared memory system for symmetric multiprocessor systems
US6295584B1 (en) * 1997-08-29 2001-09-25 International Business Machines Corporation Multiprocessor computer system with memory map translation
US6356991B1 (en) * 1997-12-31 2002-03-12 Unisys Corporation Programmable address translation system
US6163834A (en) * 1998-01-07 2000-12-19 Tandem Computers Incorporated Two level address translation and memory registration system and method
US6260155B1 (en) * 1998-05-01 2001-07-10 Quad Research Network information server
US6446182B1 (en) * 1998-12-28 2002-09-03 Bull Sa Method for a memory organization by physical zones in a computerized or data processing machine or arrangement and the computerized or data processing machine or arrangement for using the method
US6598130B2 (en) * 2000-07-31 2003-07-22 Hewlett-Packard Development Company, L.P. Technique for referencing distributed shared memory locally rather than remotely
US20030110205A1 (en) * 2001-12-07 2003-06-12 Leith Johnson Virtualized resources in a partitionable server
US6910108B2 (en) * 2002-01-09 2005-06-21 International Business Machines Corporation Hardware support for partitioning a multiprocessor system to allow distinct operating systems

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060212677A1 (en) * 2005-03-15 2006-09-21 Intel Corporation Multicore processor having active and inactive execution cores
US20070050767A1 (en) * 2005-08-31 2007-03-01 Grobman Steven L Method, apparatus and system for a virtual diskless client architecture
US20070136508A1 (en) * 2005-12-13 2007-06-14 Reiner Rieke System Support Storage and Computer System
US8275949B2 (en) * 2005-12-13 2012-09-25 International Business Machines Corporation System support storage and computer system
US7752378B2 (en) 2006-02-28 2010-07-06 Fujitsu Limited Partition priority controlling system and method
US20080320272A1 (en) * 2006-02-28 2008-12-25 Fujitsu Limited Partition priority controlling system and method
US20070226456A1 (en) * 2006-03-21 2007-09-27 Mark Shaw System and method for employing multiple processors in a computer system
US20070226735A1 (en) * 2006-03-22 2007-09-27 Anthony Nguyen Virtual vector processing
US10768989B2 (en) * 2006-03-22 2020-09-08 Intel Corporation Virtual vector processing
US9870267B2 (en) * 2006-03-22 2018-01-16 Intel Corporation Virtual vector processing
US20070239965A1 (en) * 2006-03-31 2007-10-11 Saul Lewites Inter-partition communication
WO2007117541A3 (en) * 2006-04-03 2008-04-24 Secure64 Software Method and system for managing computational resources
US20070230477A1 (en) * 2006-04-03 2007-10-04 Worley John S Method and system for managing computational resources
US8464265B2 (en) 2006-04-03 2013-06-11 Secure64 Software Method and system for reallocating computational resources using resource reallocation enabling information
US20070261059A1 (en) * 2006-04-25 2007-11-08 Orth Joseph F Array-based memory abstraction
GB2437624A (en) * 2006-04-25 2007-10-31 Hewlett Packard Development Co Array-Based Memory Abstraction for Translating a System Address to a Fabric Address
GB2437624B (en) * 2006-04-25 2011-08-24 Hewlett Packard Development Co Array-based memory abstraction
US20080126652A1 (en) * 2006-09-27 2008-05-29 Intel Corporation Managing Interrupts in a Partitioned Platform
US20080162734A1 (en) * 2006-12-28 2008-07-03 Keitaro Uehara Computer system and a chipset
US20090013160A1 (en) * 2007-07-05 2009-01-08 Board Of Regents, The University Of Texas System Dynamically composing processor cores to form logical processors
US8180997B2 (en) * 2007-07-05 2012-05-15 Board Of Regents, University Of Texas System Dynamically composing processor cores to form logical processors
US20120278597A1 (en) * 2007-08-31 2012-11-01 Dallas Blake De Atley Compatible trust in a computing device
US8789037B2 (en) * 2007-08-31 2014-07-22 Apple Inc. Compatible trust in a computing device
US8948267B1 (en) * 2007-11-21 2015-02-03 Marvell International Ltd. System and method of video coding using adaptive macroblock processing
US8108736B2 (en) * 2008-12-05 2012-01-31 Nec Computertechno Ltd. Multi-partition computer system, failure handling method and program therefor
US20100146344A1 (en) * 2008-12-05 2010-06-10 Shusaku Uchibori Multi-partition computer system, failure handling method and program therefor
US9621437B2 (en) 2009-06-22 2017-04-11 Citrix Systems, Inc. Systems and methods for distributed hash table in a multi-core system
EP2446359A1 (en) * 2009-06-22 2012-05-02 Citrix Systems, Inc. Systems and methods for a distributed hash table in a multi-core system
US20140269751A1 (en) * 2013-03-15 2014-09-18 Oracle International Corporation Prediction-based switch allocator
US8976802B2 (en) * 2013-03-15 2015-03-10 Oracle International Corporation Prediction-based switch allocator
US20180024939A1 (en) * 2015-02-09 2018-01-25 Commissariat A L'energie Atomique Et Aux Energies Alternatives Method for executing a request to exchange data between first and second disjoint physical addressing spaces of chip or card circuit
US20160335184A1 (en) * 2015-05-15 2016-11-17 Oracle International Corporation Method and Apparatus for History-Based Snooping of Last Level Caches
US9734071B2 (en) * 2015-05-15 2017-08-15 Oracle International Corporation Method and apparatus for history-based snooping of last level caches
US9952867B2 (en) 2015-06-26 2018-04-24 Microsoft Technology Licensing, Llc Mapping instruction blocks based on block size
US10169044B2 (en) 2015-06-26 2019-01-01 Microsoft Technology Licensing, Llc Processing an encoding format field to interpret header information regarding a group of instructions
US10175988B2 (en) 2015-06-26 2019-01-08 Microsoft Technology Licensing, Llc Explicit instruction scheduler state information for a processor
US10191747B2 (en) 2015-06-26 2019-01-29 Microsoft Technology Licensing, Llc Locking operand values for groups of instructions executed atomically
US11755484B2 (en) 2015-06-26 2023-09-12 Microsoft Technology Licensing, Llc Instruction block allocation
US10346168B2 (en) 2015-06-26 2019-07-09 Microsoft Technology Licensing, Llc Decoupled processor instruction window and operand buffer
US10409606B2 (en) 2015-06-26 2019-09-10 Microsoft Technology Licensing, Llc Verifying branch targets
US10409599B2 (en) 2015-06-26 2019-09-10 Microsoft Technology Licensing, Llc Decoding information about a group of instructions including a size of the group of instructions
US9946548B2 (en) 2015-06-26 2018-04-17 Microsoft Technology Licensing, Llc Age-based management of instruction blocks in a processor instruction window
US11016770B2 (en) 2015-09-19 2021-05-25 Microsoft Technology Licensing, Llc Distinct system registers for logical processors
US10768936B2 (en) 2015-09-19 2020-09-08 Microsoft Technology Licensing, Llc Block-based processor including topology and control registers to indicate resource sharing and size of logical processor
US11126433B2 (en) 2015-09-19 2021-09-21 Microsoft Technology Licensing, Llc Block-based processor core composition register
US11531552B2 (en) 2017-02-06 2022-12-20 Microsoft Technology Licensing, Llc Executing multiple programs simultaneously on a processor core
US20190069658A1 * 2017-08-17 2019-03-07 Kathleen Militello Cover for an advertisement structure

Also Published As

Publication number Publication date
CN1725183A (en) 2006-01-25
US7606995B2 (en) 2009-10-20
US20090287906A1 (en) 2009-11-19
US8112611B2 (en) 2012-02-07
JP2006040275A (en) 2006-02-09

Similar Documents

Publication Publication Date Title
US7606995B2 (en) Allocating resources to partitions in a partitionable computer
US11567803B2 (en) Inter-server memory pooling
US9665724B2 (en) Logging in secure enclaves
JP5735070B2 (en) Guest address to host address translation for devices to access memory in partitioned systems
US7617376B2 (en) Method and apparatus for accessing a memory
US8893267B1 (en) System and method for partitioning resources in a system-on-chip (SoC)
US20080162865A1 (en) Partitioning memory mapped device configuration space
US8185602B2 (en) Transaction processing using multiple protocol engines in systems having multiple multi-processor clusters
US20120284437A1 (en) Pci express sr-iov/mr-iov virtual function clusters
US10713081B2 (en) Secure and efficient memory sharing for guests
CN114328295A (en) Storage management apparatus, processor, related apparatus and related method
TWI785320B (en) Intra-device notational data movement system, information handling system and method for providing intra-device notational data movement
CN109857517B (en) Virtualization system and data exchange method thereof
US7793051B1 (en) Global shared memory subsystem
CN116010296A (en) Method, device and system for processing request
US6094710A (en) Method and system for increasing system memory bandwidth within a symmetric multiprocessor data-processing system
US11494092B2 (en) Address space access control
US10481951B2 (en) Multi-queue device assignment for application groups
CN110447019B (en) Memory allocation manager and method for managing memory allocation performed thereby
US10936219B2 (en) Controller-based inter-device notational data movement system
CN113849262A (en) Techniques for moving data between virtual machines without replication
US11281612B2 (en) Switch-based inter-device notational data movement system
CN117827449A (en) Physical memory expansion architecture of server, method, equipment and medium
CN111666579A (en) Computer device, access control method thereof, and computer-readable medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HERRELL, RUSS;KAUFMAN, GERALD J., JR.;MORRISON, JOHN A.;REEL/FRAME:015617/0322;SIGNING DATES FROM 20040714 TO 20040720

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001

Effective date: 20151027

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.)

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20171020