CA2166252C - Global variable coalescing - Google Patents
Global variable coalescing
- Publication number
- CA2166252C
- Authority
- CA
- Canada
- Prior art keywords
- variables
- global data
- program
- selected global
- data variables
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/40—Transformation of program code
- G06F8/41—Compilation
- G06F8/44—Encoding
- G06F8/443—Optimisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/40—Transformation of program code
- G06F8/41—Compilation
- G06F8/43—Checking; Contextual analysis
- G06F8/433—Dependency analysis; Data or control flow analysis
Abstract
An interprocedural compilation method for aggregating global data variables in external storage to maximize data locality. Using the information displayed in a weighted interference graph in which node weights represent the size of data stored in each global variable and edges between variables represent access relationships between the globals, the global variables can be mapped into aggregates based on this frequency of access, while preventing the cumulative data size in any aggregate from exceeding a memory size restriction.
Description
- GLOBAL VARIABLE COALESCING
This invention is directed to a method for efficient interprocedural handling of global data variables.
It is related to our concurrently filed applications titled "Improving Memory Layout Based on Connectivity Considerations" and "Connectivity Based Program Partitioning" (IBM docket nos. CA9-95-016 and CA9-95-021), but addresses a specific aspect of the problems of computer program compilation having to do with data storage.
Compilation is the process that translates a source program (written in a high-level, human-readable programming language) into an equivalent program in machine language or object code, so that it can be executed by a computer. For each of the main source program and a number of additional source programs or subroutines, the compiler translates each statement in the program into machine language equivalents.
The output is a number of object programs corresponding to the input source programs. A linker program then combines the object programs created by the compiler (that is, supplies the interconnecting links between the program components) to create a single machine-executable program.
Central processing units utilized in general purpose programmable computers make extensive use of hardware registers to hold data items utilized during program execution, thus avoiding the overhead associated with memory references, and effective management of such registers forms a major aspect of compiler optimization, although the number, size and restrictions of use of such registers may vary widely from processor to processor. Some general principles relating to such management are discussed in "Effective Register Management During Code Generation", K.M. Gilbert, IBM Technical Disclosure Bulletin, January 1973, pages 2640 to 2645. Optimization techniques in compilers for reduced instruction set (RISC) computers are discussed in an article entitled "Advanced Compiler Technology for the RISC System/6000 Architecture", by O'Brien et al, at pages 154-161 of IBM RISC System/6000 Technology, published 1990 by IBM Corporation. Both of these documents emphasize the importance of efficient hardware register usage. The linker provides the mapping pattern for storage of data variables.
In many programs, data is stored in a proliferation of small global variables. ("Global" indicates that a variable is accessible to all elements of the computer program, rather than just the elements in which it is defined.) Groups of these global variables are typically accessed together in a particular section of the program. However, the linker does not take access patterns into account when mapping the storage for global variables, so an opportunity to exploit locality is lost.
In addition, on some systems, extensive use of global variables (especially small scalars) implies a relatively high cost of access for global data.
Modern day compiler programs include optimizing techniques directed at maximizing the efficient use of the hardware resources of the computer, and the present invention is aimed at an improvement in this area. Many compilers are designed to take multiple passes of the source program input in order to collect information that can be used for optimally restructuring the program.
For example, interprocedural analysis (IPA) is a 2-pass compilation procedure developed by IBM and used in its XL compilers. IPA is described in detail in Canadian Patent Application No. 2,102,089, commonly assigned. The first IPA pass is performed at compile time and collects summary information that is written to the object file for each procedure compiled. This summary information includes a list of all callsites in each procedure and alias information for each variable and procedure. The second IPA pass is an information dissemination pass performed at link time when all files in the application have been compiled. The IPA driver reads the summary information from the object files and computes the application's "call-weighted multigraph" or "callgraph", a graph illustrating all of the procedures in the program and the possible calls between them. A callgraph consists of a number of nodes, each of which represents a procedure in the program. The nodes are interconnected by edges, and each of the edges has a direction. The edges represent possible procedure or method calls between the nodes or procedures. The information collected from the IPA, or from any of the multiple compilation passes, is used for improving the structure of the code to be produced in the program executables.
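The callgraph described above can be modelled minimally as an adjacency structure built from the callsite lists collected in the first pass. The following Python sketch is illustrative only; the patent does not specify this representation, and `build_callgraph` and its input format are hypothetical:

```python
from collections import defaultdict

def build_callgraph(callsites):
    # callsites: iterable of (caller, callee) pairs, as might be
    # gathered from per-procedure summary information.
    # The result is a directed multigraph: repeated calls from the
    # same caller to the same callee are kept as separate edges.
    graph = defaultdict(list)
    for caller, callee in callsites:
        graph[caller].append(callee)
    return dict(graph)
```

A usage example: `build_callgraph([('main', 'f'), ('main', 'f'), ('f', 'g')])` yields one node list per caller, preserving the duplicate `main -> f` edge that a call-weighted multigraph requires.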
The information collected using the information gathering pass includes data dependencies, and these can be analyzed for use by the compiler during code optimization. U.S. Patent No. 5,107,418, titled "Method for Representing Scalar Data Dependencies for an Optimizing Compiler" of Supercomputer Systems Limited, discusses a method for constructing a scalar dependence graph of a program that represents all of the local and global scalar data dependencies. The information in the constructed graph can then be used by the compiler for implementing all types of optimizations, including scheduling, register allocation, loop invariant expression identification, array dependence graph construction, etc.
U.S. Patent No. 5,367,683 titled "Smart Recompilation of Performing Matchup/Difference After Code Generation" of Digital Equipment Corporation discusses a "smart recompilation" method that uses a fragmented global context table.
The information contained in the table permits a closer examination of the dependencies between separate program modules to reduce the number of modules that must be recompiled due to dependency when one module is altered and recompiled.
However, neither of these references addresses the issue of the actual optimal mapping of global data variables, which is traditionally done manually by the programmer.
Given an interprocedural view of a computer program, an optimizing compiler, according to the present invention, can analyze the usage patterns of global variables and then remap some of the globals as members of global aggregates. This allows the compiler to explicitly place the globals in memory independent of the linker mapping.
It also allows the compiler to generate more efficient code to access the globals on some systems.
The present invention therefore provides a method for remapping global data variables during program compilation, comprising:
i) selecting global data variables referenced in the program;
ii) assigning a weight value to each selected global data variable representing byte size of data in that variable;
iii) dividing the selected global data variables into pairs of variables that are accessed together and re-ordering the pairs from highest to lowest affinity; and
iv) beginning from a pair of variables of highest affinity, dividing the selected global data variables into aggregate groupings, each grouping having an aggregate weight value not exceeding a preset limit.
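Steps i) to iv) can be sketched as a greedy grouping pass over edges sorted by affinity. This Python sketch is an illustration under assumptions, not the patented implementation; the function name, variable names, and byte sizes are hypothetical, chosen so that two aggregates form under a limit of 404 in the spirit of the Figure 2 discussion:

```python
def coalesce(sizes, edges, limit):
    # sizes: {variable: size in bytes}; edges: {(u, v): affinity weight}.
    # Process edges from highest to lowest affinity, merging the two
    # endpoints' groups when the combined byte size stays within the
    # limit; this node-weight criterion replaces the cycle check of an
    # ordinary maximal weight spanning forest.
    group = {v: {v} for v in sizes}            # each variable starts alone
    for (u, v), _w in sorted(edges.items(), key=lambda e: -e[1]):
        gu, gv = group[u], group[v]
        if gu is gv:
            continue                            # already in one aggregate
        if sum(sizes[m] for m in gu | gv) <= limit:
            merged = gu | gv
            for m in merged:
                group[m] = merged
    # Collect the distinct aggregates that resulted.
    aggregates, seen = [], set()
    for g in group.values():
        key = frozenset(g)
        if key not in seen:
            seen.add(key)
            aggregates.append(sorted(g))
    return aggregates
```

With hypothetical sizes y=4, a=132, x=4, z=400 and a limit of 404, the heaviest edge joins y with a (136 bytes), x joins z (404 bytes exactly), and every remaining edge is rejected because it would merge the two groups past the limit, yielding two aggregates.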
The invention is also directed to a program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing the above-described method.
Figure 1 is a flow diagram showing the steps for aggregating global variables according to the invention; and Figure 2 is a weighted interference graph of global variables according to the aspect of the invention illustrated in Figure 1.
Detailed Description of the Preferred Embodiments
The method of the present invention is used to rearrange external storage in order to maximize data locality. The first advantage of this optimization is that a number of separate small data objects are brought together. Secondly, the data objects are ordered so that those used together most frequently are grouped together. This aspect of the invention is particularly useful in optimizing programs in languages such as C and C++ that have a proliferation of small data objects.
The method used is set out in Figure 1. A number of global variables that are eligible for remapping are selected. Global variables that are eligible for remapping are those variables whose uses are known to the compiler. The type of members selected includes members of global structures, and these are taken as separate entities. Only those variables that are referenced are selected; unreferenced variables are discarded (block 30).
A weighted interference graph is constructed on the selected variables (block 32). From the following listing of data objects, a weighted interference graph as illustrated in Figure 2 can be constructed:
int z[100];
struct a {
    dbl s;
    dbl t;
};
int x;
int y;
Certain variables are excluded from the weighted interference graph. Variables that may be referenced by the invisible portion of the program cannot be included because references to such variables are based on their original names and, following variable aggregation, these names will no longer access them.
In the weighted interference graph of Figure 2, the node weights represent the size in bytes of the data stored in the global. The edges between variables represent access relationships between the globals represented by the incident nodes, that is, the fact that the variables are used together in the same procedure or in the same control region. The weightings on the edges represent, in general terms, a measure of affinity, indicating how often the two variables are accessed together across the whole program and in what context. For example, two variables accessed together in a loop get a higher incident edge weight than a simple access. Also, if the two variables are accessed together in a single nesting of a loop, then the edge between them is weighted 10. If they are inside two nestings of a loop, the weighting is 100. If these doubly-nested accesses occurred inside two procedures, then the weighting on the edge between them would be 200.
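One plausible reading of the 1/10/100/200 scale above is a weight of 10 per loop-nesting level at each program point where the pair is accessed together, summed over all such points. This Python sketch encodes that reading; the function name and input format are assumptions, not taken from the patent:

```python
def edge_weight(co_access_depths):
    # co_access_depths: one loop-nesting depth per program point at
    # which the two variables are accessed together. Depth 0
    # (straight-line code) contributes 1, depth 1 contributes 10,
    # depth 2 contributes 100, matching the scale in the text.
    return sum(10 ** depth for depth in co_access_depths)
```

Under this model, a doubly-nested co-access in each of two procedures gives 100 + 100 = 200, as in the example above.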
The problem of mapping the globals into one or more aggregates reduces to the problem of finding maximal weight subgraphs of the weighted interference graph subject to the restriction that the sum of the node weights in any subgraph is less than a selected system dependent parameter, usually related to the size of the displacement field in a base-displacement form load instruction.
The method for determining the maximal weight subgraphs is an adaptation of the maximal weight spanning forest algorithm.
To compute a maximal spanning forest, the edges from a weighted graph are sorted from highest to lowest weight. Spanning trees are created by adding edges to groups, except where the addition of an edge would create a cycle. In a maximal spanning forest, the spanning trees cannot exceed a preset size.
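A maximal weight spanning forest of this kind can be computed with a Kruskal-style pass using union-find for the cycle check. The following Python sketch is a standard textbook rendering for illustration, not the patent's code, and it omits the preset size cap mentioned above for brevity:

```python
def maximal_spanning_forest(nodes, edges):
    # Kruskal's algorithm run on edges sorted heaviest-first: an edge
    # is accepted unless it would close a cycle, i.e. unless its two
    # endpoints already belong to the same tree.
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]   # path halving
            n = parent[n]
        return n
    forest = []
    for (u, v), w in sorted(edges.items(), key=lambda e: -e[1]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                 # union the two trees
            forest.append((u, v, w))
    return forest
```

On a triangle with edge weights 3, 2, 1, the two heaviest edges are accepted and the lightest is rejected because it would close the cycle.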
However, in the method of the present invention, the criterion for inserting an edge into a subgraph is that the node weight restriction is not violated, instead of the cycle restriction in creating a spanning forest.
The edges between the global variables are sorted by weight from heaviest to lightest (block 34 of Figure 1). This results in an ordering of the global variables by frequency of access, with y being accessed together with a.t (from Figure 2). According to this embodiment of the invention, structures are not broken up. Consequently, a.t actually brings in all of a.
The whole order of the global variables based on the weighted interference graph of Figure 2 is:
y -> a.t (1000)
x -> z (100)
y -> a.s (100)
x -> y (10)
z -> a.t (10)
y -> z (1)
a.t -> a (1)
As the global variables are ordered, variable aggregates are built up. The first aggregate has y and a in it, and the ordering of the aggregate is significant because members which are most often accessed together can then share cache lines/pages within the aggregate.
The maximum permissible weight for any global aggregate corresponds to the type of addressing used to access members of structures on the particular target machine. For example, if the target machine uses a relative base displacement load, then the maximum aggregate size is limited by the displacement field on the machine.
In respect of the example illustrated in Figure 2, assume that the limit is 404. The first aggregate, which includes y and a, has a size of 136. The next edge selected (from the hierarchy) would bring the aggregates containing x and y together. However, the new total would exceed the size limitation. Therefore, just two aggregates are produced, and edges continue to be added to them.
In the preferred embodiment of the invention, some trade-offs were made between storage usage and locality. A distinction was made between array and non-array program objects that minimized the padding space required for proper alignment of aggregate members, but that could result in suboptimal data locality.
The embodiment was designed so that aggregates were created starting with the largest non-array objects (i.e., integers and structures) and proceeding to the smallest objects. The arrays were added at the end. The last array is stored as a single value rather than as the entire node size of each array being added to the aggregate. The value is all that is required to enable the caller to access the base of the array.
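The member-ordering policy just described (largest non-array objects first, arrays appended at the end) can be sketched as follows. This is a hedged illustration; the function name and input format are hypothetical, and the special single-value treatment of the last array is noted but not modelled:

```python
def member_order(sizes, is_array):
    # sizes: {member: size in bytes}; is_array: {member: bool}.
    # Non-array members come first, largest to smallest, which tends
    # to minimize alignment padding; arrays are appended afterwards.
    # (In the embodiment the last array contributes only a single
    # base value to the aggregate, a detail omitted here.)
    non_arrays = sorted((m for m in sizes if not is_array[m]),
                        key=lambda m: -sizes[m])
    arrays = [m for m in sizes if is_array[m]]
    return non_arrays + arrays
```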
A division was also made between initialized and uninitialized external data because of the number of zeroes in the middle of the data.
Use of the method of the present invention can result in aggregate members having disjoint live ranges because global variables that are unconnected are aggregated. A further optimization on storage can be made by overlapping some symbols for variables with disjoint live ranges. If two members of an aggregate never hold a useful value at the same program point, then they may use overlapping storage.
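The overlap condition above reduces to a disjointness test on live ranges. A minimal sketch, assuming live ranges are represented as sets of program points (a hypothetical representation; real compilers typically use bit vectors or interval lists):

```python
def can_overlap(live_a, live_b):
    # live_a, live_b: sets of program points at which each aggregate
    # member holds a useful value. Storage may be shared only if the
    # two members are never simultaneously live.
    return not (live_a & live_b)
```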
Once some of the global variables have been remapped as members of global aggregates, these global variables can be explicitly placed in memory by the compiler through code optimization, independently of the linker mapping (block 38).
Claims (6)
1. A method for remapping global data variables during program compilation, comprising:
i) selecting global data variables referenced in the program;
ii) assigning a weight value to each selected global data variable representing byte size of data in that variable;
iii) dividing the selected global data variables into pairs of variables that are accessed together and re-ordering the pairs from highest to lowest affinity; and iv) beginning from a pair of variables of highest affinity, dividing the selected global data variables into aggregate groupings, each grouping having an aggregate weight value not exceeding a preset limit.
2. A method, according to claim 1, wherein step ii) comprises initially differentiating arrays from non-array global data variables.
3. A method, according to claim 2, wherein the weight value assigned to a final array is one.
4. A method, according to claim ?, wherein steps ii) and iii) comprise building a weighted interference graph of the selected global data variables having:
nodes representing the selected global data variables;
node weights representing the assigned weight value of each selected global data variable;
edges connecting nodes representing access relationships between pairs of selected global data variables; and edge weights representing degree of affinity between pairs of selected global data variables.
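The weighted interference graph of claim 4 can be represented with two maps: node weights keyed by variable, and edge weights keyed by variable pair. A minimal sketch, assuming affinity is counted from hypothetical access traces (groups of globals referenced close together):

```python
# A minimal weighted interference graph in the shape of claim 4, built
# from a hypothetical access trace; the trace format is an assumption,
# not something specified by the patent.
from collections import defaultdict
from itertools import combinations

def build_graph(sizes, access_sets):
    """Return (node weights, edge weights) for the interference graph.

    sizes       -- dict: variable name -> byte size (node weight)
    access_sets -- iterable of groups of globals accessed together
    """
    node_weight = dict(sizes)                     # node -> byte size
    edge_weight = defaultdict(int)                # {u, v} -> affinity
    for accessed in access_sets:
        # Every pair of globals accessed together gains one unit of affinity.
        for u, v in combinations(sorted(set(accessed)), 2):
            edge_weight[frozenset((u, v))] += 1
    return node_weight, dict(edge_weight)
```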
5. A method, according to claim 1, wherein the preset limit in step iv) is related to a displacement field size value.
6. A program storage device readable by a machine in a data processing system, tangibly embodying a program of instructions executable by the machine to perform method steps for remapping global data variables during program compilation, said method steps comprising:
i) selecting global data variables referenced in the program;
ii) assigning a weight value to each selected global data variable representing byte size of data in that variable;
iii) dividing the selected global data variables into pairs of variables that are accessed together and re-ordering the pairs from highest to lowest affinity; and
iv) beginning from a pair of variables of highest affinity, dividing the selected global data variables into aggregate groupings, each grouping having an aggregate weight value not exceeding a preset limit.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002166252A CA2166252C (en) | 1995-12-28 | 1995-12-28 | Global variable coalescing |
US08/726,039 US5850549A (en) | 1995-12-28 | 1996-10-07 | Global variable coalescing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002166252A CA2166252C (en) | 1995-12-28 | 1995-12-28 | Global variable coalescing |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2166252A1 CA2166252A1 (en) | 1997-06-29 |
CA2166252C true CA2166252C (en) | 1999-08-24 |
Family
ID=4157249
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002166252A Expired - Fee Related CA2166252C (en) | 1995-12-28 | 1995-12-28 | Global variable coalescing |
Country Status (2)
Country | Link |
---|---|
US (1) | US5850549A (en) |
CA (1) | CA2166252C (en) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7337173B1 (en) * | 1997-12-04 | 2008-02-26 | Netscape Communications Corporation | Compiler having global element optimization |
US6072952A (en) * | 1998-04-22 | 2000-06-06 | Hewlett-Packard Co. | Method and apparatus for coalescing variables |
CA2288614C (en) * | 1999-11-08 | 2004-05-11 | Robert J. Blainey | Loop allocation for optimizing compilers |
CA2321016A1 (en) * | 2000-09-27 | 2002-03-27 | Ibm Canada Limited-Ibm Canada Limitee | Interprocedural dead store elimination |
GB2377038A (en) * | 2001-04-10 | 2002-12-31 | I2 Ltd | Method for identifying patterns in sequential event streams |
US7185328B2 (en) * | 2002-05-30 | 2007-02-27 | Microsoft Corporation | System and method for improving a working set |
US7069548B2 (en) * | 2002-06-28 | 2006-06-27 | Intel Corporation | Inter-procedure global register allocation method |
US7275242B2 (en) * | 2002-10-04 | 2007-09-25 | Hewlett-Packard Development Company, L.P. | System and method for optimizing a program |
US7310799B2 (en) * | 2002-12-31 | 2007-12-18 | International Business Machines Corporation | Reducing load instructions via global data reordering |
JP3877753B2 (en) * | 2003-10-31 | 2007-02-07 | 富士通株式会社 | Design support apparatus, design support method, and design support program |
US7765534B2 (en) * | 2004-04-30 | 2010-07-27 | International Business Machines Corporation | Compiler with cache utilization optimizations |
US7472382B2 (en) * | 2004-08-30 | 2008-12-30 | International Business Machines Corporation | Method for optimizing software program using inter-procedural strength reduction |
US7555748B2 (en) * | 2004-08-30 | 2009-06-30 | International Business Machines Corporation | Method and apparatus for improving data cache performance using inter-procedural strength reduction of global objects |
JP4559937B2 (en) * | 2005-09-01 | 2010-10-13 | 株式会社東芝 | Program generator |
US7590977B2 (en) * | 2005-10-13 | 2009-09-15 | International Business Machines Corporation | Method and system for reducing memory reference overhead associated with threadprivate variables in parallel programs |
US20090055628A1 (en) * | 2007-08-21 | 2009-02-26 | International Business Machine Corporation | Methods and computer program products for reducing load-hit-store delays by assigning memory fetch units to candidate variables |
US8392878B2 (en) * | 2007-10-31 | 2013-03-05 | National Instruments Corporation | In-place structure in a graphical program |
US11321236B2 (en) * | 2018-01-08 | 2022-05-03 | Microsoft Technology Licensing, Llc. | Reduced instructions to generate global variable addresses |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5428793A (en) * | 1989-11-13 | 1995-06-27 | Hewlett-Packard Company | Method and apparatus for compiling computer programs with interprocedural register allocation |
US5107418A (en) * | 1990-06-11 | 1992-04-21 | Supercomputer Systems Limited Partnership | Method for representing scalar data dependences for an optimizing compiler |
IL100989A (en) * | 1991-02-27 | 1995-10-31 | Digital Equipment Corp | Analyzing inductive expressions in a multilanguage optimizing compiler |
US5535392A (en) * | 1992-06-26 | 1996-07-09 | Digital Equipment Corporation | Using hint generation to cause portions of object files to remain the same |
US5446899A (en) * | 1992-06-26 | 1995-08-29 | Digital Equipment Corporation | Hint generation in smart recompilation |
US5367683A (en) * | 1992-06-26 | 1994-11-22 | Digital Equipment Corporation | Smart recompilation of performing matchup/difference after code generation |
US5625822A (en) * | 1992-06-26 | 1997-04-29 | Digital Equipment Corporation | Using sorting to do matchup in smart recompilation |
CA2102089C (en) * | 1993-10-29 | 1999-05-25 | David M. Gillies | Recompilation of computer programs for enhanced optimization |
- 1995
  - 1995-12-28 CA CA002166252A patent/CA2166252C/en not_active Expired - Fee Related
- 1996
  - 1996-10-07 US US08/726,039 patent/US5850549A/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
US5850549A (en) | 1998-12-15 |
CA2166252A1 (en) | 1997-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2166252C (en) | Global variable coalescing | |
US5787284A (en) | Improving memory layout based on connectivity considerations | |
CN1306399C (en) | Virtual machine for network processor | |
Sarkar | Automatic selection of high-order transformations in the IBM XL FORTRAN compilers | |
Dave et al. | Cetus: A source-to-source compiler infrastructure for multicores | |
US5428793A (en) | Method and apparatus for compiling computer programs with interprocedural register allocation | |
US7086038B2 (en) | System and method for creating systolic solvers | |
US5920723A (en) | Compiler with inter-modular procedure optimization | |
US9600305B2 (en) | Code partitioning for enhanced performance | |
US8032873B2 (en) | Computer program code size partitioning system for multiple memory multi-processing systems | |
Tu et al. | Efficient building and placing of gating functions | |
Leupers et al. | Variable partitioning for dual memory bank dsps | |
US5797012A (en) | Connectivity based program partitioning | |
US8037463B2 (en) | Computer program functional partitioning system for heterogeneous multi-processing systems | |
EP1700209B1 (en) | Data layout mechanism to reduce hardware resource conflicts | |
Lorenz et al. | Energy aware compilation for DSPs with SIMD instructions | |
Carr et al. | Compiler blockability of dense matrix factorizations | |
Zhuang et al. | Resolving register bank conflicts for a network processor | |
Quinton et al. | On deriving data parallel code from a functional program | |
Ottoni et al. | Improving offset assignment through simultaneous variable coalescing | |
Park et al. | Treegraph-based instruction scheduling for stack-based virtual machines | |
Dominguez et al. | Unstructured control flow in gpgpu | |
Kunchithapadam et al. | Using lightweight procedures to improve instruction cache performance | |
US20030233638A1 (en) | Memory allocation system for compiler | |
Anantharaman et al. | Compiler optimizations for real time execution of loops on limited memory embedded systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| EEER | Examination request | |
| MKLA | Lapsed | |