US20070260843A1 - Memory tuning for garbage collection and central processing unit (cpu) utilization optimization - Google Patents


Info

Publication number: US20070260843A1
Application number: US11/382,161
Authority: US (United States)
Inventors: Thomas Creamer, Curtis Hrischuk
Original and current assignee: International Business Machines Corp.
Application filed by International Business Machines Corp.; granted as US7467278B2
Legal status: Expired - Fee Related
Continuation: US12/259,275, granted as US7716451B2

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0253 Garbage collection, i.e. reclamation of unreferenced memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409 Recording or statistical evaluation of computer activity for performance assessment
    • G06F 11/3428 Benchmarking


Abstract

Embodiments of the present invention address deficiencies of the art in respect to load balancing in an enterprise environment and provide a method, system and computer program product for garbage collection sensitive load balancing. In an embodiment of the invention, a method for memory tuning for garbage collection and CPU utilization optimization can be provided. The method can include benchmarking an application across multiple different heap sizes to accumulate garbage collection metrics and utilizing the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes. One of the candidate heap sizes can be matched to a desired CPU utilization and garbage collection time. As such, the matched one of the candidate heap sizes can be applied to a host environment.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to the field of memory management and more particularly to the field of garbage collection for memory management.
  • 2. Description of the Related Art
  • Memory leakage has confounded software developers for decades, resulting in the sometimes global distribution of bug-ridden, crash-prone software applications. Particularly in respect to those programming languages which permitted the manual allocation of memory, but also required the manual de-allocation of allocated memory, memory leakage has proven to be the principal run-time bug most addressed during the software development cycle. So prevalent a problem has memory leakage become that entire software development tools have been developed and marketed solely to address it.
  • Memory leakage, broadly defined, is the gradual loss of allocable memory due to the failure to de-allocate previously allocated, but no longer utilized memory. Typically, memory can be reserved for data having a brief lifespan. Once the lifespan has completed, the reserved memory ought to be returned to the pool of allocable memory so that the reserved memory can be used at a subsequent time as necessary. Importantly, where memory leakage persists without remediation, ultimately not enough memory will remain to accommodate the needs of other processes.
  • Recognizing the importance of addressing the memory leakage problem, computer programming language theorists have developed the notion of garbage collection. Garbage collection refers to the automated analysis of allocated memory to identify regions of allocated memory containing data which no longer are required for the operation of associated processes. In the context of object oriented programming languages such as the Java™ programming language, when objects residing in memory are no longer accessible within a corresponding application, the memory allocated to the “dead” object can be returned to the pool of allocable memory.
  • The process of garbage collection can be time consuming and can result in a degradation of performance for a hosted application. A primary factor affecting the time consumption of a garbage collection operation can include heap size. Generally, the larger the heap size, the more time consuming a garbage collection operation can be. Heap size, however, can be limited for a virtual machine for a number of reasons unrelated to garbage collection. To circumvent the limitation on heap size, it is common to utilize multiple virtual machines for a single central processing unit (CPU) in order to support the execution of a hosted application. Notwithstanding, the typical garbage collection operation can fully utilize a supporting CPU such that a garbage collection operation in one virtual machine can degrade the performance of another virtual machine supported by the same CPU.
  • In most cases, the degradation of performance will have little impact on the performance of a hosted application, as most hosted applications are not time sensitive. However, some classes of hosted applications, including soft real-time systems, depend upon consistent performance at a guaranteed level of Quality of Service (QoS). Generally, soft real-time systems include speech recognition and text-to-speech systems. As will be well understood in the art, soft real-time systems prefer to avoid the degradation in performance caused by garbage collection.
  • BRIEF SUMMARY OF THE INVENTION
  • Embodiments of the present invention address deficiencies of the art in respect to load balancing in an enterprise environment and provide a novel and non-obvious method, system and apparatus for garbage collection sensitive load balancing. In a first embodiment of the invention, a method for memory tuning for garbage collection and CPU utilization optimization can be provided. The method can include benchmarking an application across multiple different heap sizes to accumulate garbage collection metrics and utilizing the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes. One of the candidate heap sizes can be matched to a desired CPU utilization and garbage collection time. As such, the matched one of the candidate heap sizes can be applied to a host environment.
  • In a particular aspect of the embodiment, a maximum CPU utilization can be determined that is acceptable for a QoS goal. In another aspect of the embodiment, a desired garbage collection time can be determined as a maximum garbage collection time consumed that is acceptable for a QoS goal. In both circumstances, multiple virtual machines can share a host platform without allowing the garbage collection process of each virtual machine to invalidate the QoS requirements for each other virtual machine.
  • In another embodiment of the invention, a garbage collection data processing system can be provided. The system can include a host environment, such as a virtual machine, configured for garbage collection, a heap of particular heap size coupled to the host environment and configured for use by applications executing in the host environment, and a host environment tuner coupled to the host environment. The tuner can include program code enabled to benchmark an application across multiple different heap sizes of the heap to accumulate garbage collection metrics, utilize the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes, match one of the candidate heap sizes to a desired CPU utilization and garbage collection time, and apply the matched one of the candidate heap sizes to the host environment.
  • Additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The aspects of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention. The embodiments illustrated herein are presently preferred, it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown, wherein:
  • FIG. 1 is a schematic illustration of a memory tuning data processing system enabled for garbage collection and CPU utilization optimization; and,
  • FIG. 2 is a flow chart illustrating a process for memory tuning for garbage collection and CPU utilization optimization.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Embodiments of the present invention provide a method, system and computer program product for memory tuning for garbage collection and CPU utilization optimization. In accordance with an embodiment of the present invention, an application can be benchmarked across multiple different heap sizes to determine the rate of garbage collections, the amount of memory collected for each garbage collection activity, and the average duration of each garbage collection. Utilizing metrics accumulated during benchmarking, both CPU utilization and garbage collection time can be computed for each of a selection of candidate heap sizes. Subsequently, a candidate heap size can be matched to a desired CPU utilization and garbage collection time and applied to the host environment.
  • In illustration, FIG. 1 is a schematic illustration of a memory tuning data processing system enabled for garbage collection time and CPU utilization optimization. The data processing system can include a host computing platform 100. The host computing platform 100 can include a CPU 110, current memory 120 and fixed storage 130. An operating system 140 can moderate the utilization of the CPU 110, current memory 120 and fixed storage 130 for one or more host virtual machines 150. It is to be noted, however, that the virtual machines 150 can directly moderate access to the CPU 110, current memory 120 and fixed storage 130 in the absence of an intermediate operating system 140. In any case, each virtual machine 150 can host the operation of one or more applications 160.
  • Each virtual machine 150 can be configured to maximally consume a set amount of memory according to a pre-configured heap size. In further illustration, a heap 170 can be allocated for use by the virtual machine 150. The selection of a heap size for the heap 170 can be applied by the virtual machine tuner 200. In this regard, the virtual machine tuner 200 can include program code enabled to benchmark an application 160 operating in the virtual machine 150 across multiple different heap sizes for the heap 170 utilizing timer/clock 190B. The benchmarking can produce CPU utilization and garbage collection time metrics 190A for each heap size. Consequently, utilizing the metrics 190A, a target CPU utilization and garbage collection time 180 can be matched to a particular heap size in order to select an optimal heap size for the heap 170. Further, each virtual machine 150 can be configured to limit the number of threads permitted to engage in garbage collection activities.
  • In more particular illustration of the operation of the virtual machine tuner 200, FIG. 2 is a flow chart illustrating a process for memory tuning for garbage collection and CPU utilization optimization. Beginning in block 210, an application can be benchmarked across a number of heap sizes for the virtual machine. In particular, the benchmarking process can include measuring a rate of garbage collections, an amount of memory collected in each garbage collection activity, and the average duration of each garbage collection. In block 215, a first candidate heap size can be selected for determining optimization. Additionally, QoS input parameters can be provided, including maximum garbage collection delay, maximum number of threads to be allocated for garbage collection and a maximum CPU utilization permitted.
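The benchmarking metrics of block 210 and the QoS inputs of block 215 can be grouped as simple records. The sketch below is illustrative only; every name (`GcBenchmark`, `QosGoals`, and all fields) is an assumption rather than terminology from the patent:

```python
from dataclasses import dataclass

@dataclass
class GcBenchmark:
    """Metrics accumulated for one candidate heap size during benchmarking
    (block 210). All field names are illustrative."""
    heap_size_mb: int
    gc_rate_per_sec: float      # rate of garbage collections observed
    mb_collected_per_gc: float  # amount of memory reclaimed per collection
    avg_gc_duration_sec: float  # average duration of each collection

@dataclass
class QosGoals:
    """QoS input parameters supplied in block 215. Field names illustrative."""
    max_gc_delay_sec: float     # maximum tolerable garbage collection delay
    max_gc_threads: int         # maximum threads allocatable to collection
    max_cpu_utilization: float  # maximum permitted CPU utilization (0.0-1.0)
```

One such `GcBenchmark` record per candidate heap size, together with a single `QosGoals` record, would supply everything the later computations need.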
  • In block 220, a number of CPU seconds used for each garbage collection activity for the candidate heap size can be computed by measuring CPU utilization for the garbage collection activity and multiplying the CPU utilization by the time consumed by the CPU in total during that period. Concurrently, in block 225, an amount of time consumed by a single thread performing the garbage collection activity can be computed. As well, a base garbage collection time can be computed in block 230. The base garbage collection time can include the minimal amount of time required to mark and sweep threads during mark and sweep style garbage collection.
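As a worked sketch of block 220: the CPU seconds attributable to one collection follow from multiplying the measured utilization by the total CPU time consumed over that period. The function and parameter names below are illustrative assumptions:

```python
def cpu_seconds_per_gc(measured_cpu_utilization, period_cpu_time_sec):
    """Block 220, as described: CPU seconds consumed by one garbage collection
    activity, i.e. the CPU utilization measured during the collection
    multiplied by the total CPU time consumed during that period."""
    return measured_cpu_utilization * period_cpu_time_sec

# For example, a collection running at 40% utilization over a period in which
# 0.5 CPU-seconds were consumed in total accounts for 0.2 CPU-seconds.
```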
  • In block 235, the CPU utilization for the candidate heap size can be computed as the number of CPU seconds used for each garbage collection divided by a number of threads involved in the garbage collection. Likewise, in block 240, a total garbage collection time can be computed for the candidate heap size as the base garbage collection time divided by the number of threads involved in the garbage collection combined with the average sweep time for the mark and sweep operation. Thereafter, in block 245 the resulting CPU utilization and total garbage collection time can be compared to pre-determined performance objectives.
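Read literally, the two per-candidate formulas of blocks 235 and 240 might be sketched as follows; all function and parameter names are illustrative assumptions, not the patent's own interface:

```python
def candidate_cpu_utilization(cpu_seconds_per_gc, num_gc_threads):
    """Block 235, as described: the number of CPU seconds used for each
    garbage collection divided by the number of threads involved."""
    return cpu_seconds_per_gc / num_gc_threads

def candidate_total_gc_time(base_gc_time_sec, num_gc_threads, avg_sweep_time_sec):
    """Block 240, as described: the base garbage collection time divided by the
    number of collecting threads, combined with the average sweep time for the
    mark and sweep operation."""
    return base_gc_time_sec / num_gc_threads + avg_sweep_time_sec
```

The two resulting values are what block 245 compares against the pre-determined performance objectives.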
  • If a match is found in decision block 250, in block 255 the candidate heap size can be established for the virtual machine. Otherwise, in decision block 260, if additional candidate heap sizes remain to be evaluated, a next candidate heap size can be selected for analysis. Subsequently, the matching process can repeat through blocks 220, 225 and 230. When no further candidate heap sizes remain to be analyzed, and if no match has been found for the pre-determined performance objectives, in block 270 a default heap size can be established for the virtual machine irrespective of the pre-determined performance objectives. In this circumstance, it can be recommended that additional processors be added to the machine to achieve optimization. Also, the number of CPUs required to meet the pre-determined performance objectives, and a default number of recommended threads for garbage collection to meet the performance objectives, can be recommended.
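The overall matching process of blocks 215 through 270 amounts to a linear scan over candidate heap sizes with a default fallback. The sketch below assumes a particular data layout, and every name in it is an illustrative assumption:

```python
def select_heap_size(candidates, max_cpu_util, max_gc_time, default_heap_mb=256):
    """Sketch of the FIG. 2 matching loop: walk candidate heap sizes in order
    and return the first whose computed CPU utilization and total garbage
    collection time both meet the performance objectives; otherwise fall back
    to a default heap size (block 270). `candidates` maps heap size in MB to a
    (cpu_utilization, total_gc_time_sec) pair."""
    for heap_mb, (cpu_util, gc_time) in candidates.items():
        if cpu_util <= max_cpu_util and gc_time <= max_gc_time:
            return heap_mb      # match found: establish this heap size
    return default_heap_mb      # no candidate met the objectives
```

When the fallback is taken, the patent additionally suggests recommending more processors, a CPU count, and a garbage collection thread count sufficient to meet the objectives; that reporting is omitted from this sketch.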
  • Embodiments of the invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, and the like. Furthermore, the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
  • For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
  • A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems and Ethernet cards are just a few of the currently available types of network adapters.

Claims (16)

1. A method for memory tuning for garbage collection and central processing unit (CPU) utilization optimization, the method comprising:
benchmarking an application across multiple different heap sizes to accumulate garbage collection metrics;
utilizing the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes;
matching one of the candidate heap sizes to a desired CPU utilization and garbage collection time; and,
applying the matched one of the candidate heap sizes to a host environment.
2. The method of claim 1, wherein benchmarking an application across multiple different heap sizes, comprises benchmarking an application across multiple different heap sizes to determine a rate of garbage collections, an amount of memory collected for each garbage collection activity, and an average duration of each garbage collection.
3. The method of claim 2, wherein utilizing the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes, comprises:
computing CPU utilization for a candidate heap size as the number of CPU seconds used for a garbage collection divided by a number of threads involved in the garbage collection; and,
further computing a total garbage collection time for the candidate heap size as a base garbage collection time divided by a number of threads involved in the garbage collection combined with an average sweep time for the garbage collection.
4. The method of claim 3, wherein further computing a total garbage collection time for the candidate heap size, comprises computing a base garbage collection time as a minimal amount of time required to mark and sweep threads during mark and sweep style garbage collection for the garbage collection.
5. The method of claim 1, wherein utilizing the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes, further comprises calculating a number of threads for use in garbage collection to meet stated quality of service (QoS) goals.
6. The method of claim 1, wherein utilizing the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes, further comprises calculating a number of CPUs to be provisioned to meet stated quality of service (QoS) goals.
7. The method of claim 1, further comprising adding additional processors when a match of the candidate heap sizes to a desired CPU utilization and garbage collection time cannot be found.
8. A garbage collection data processing system comprising:
a host environment configured for garbage collection;
a heap of particular heap size coupled to the host environment and configured for use by applications executing in the host environment; and,
a host environment tuner coupled to the host environment, the tuner comprising program code enabled to benchmark an application across multiple different heap sizes of the heap to accumulate garbage collection metrics, utilize the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes, match one of the candidate heap sizes to a desired CPU utilization and garbage collection time, and apply the matched one of the candidate heap sizes to the host environment.
9. The data processing system of claim 8, wherein the host environment is a virtual machine.
10. A computer program product comprising a computer usable medium having computer usable program code for memory tuning for garbage collection and central processing unit (CPU) utilization optimization, the computer program product including:
computer usable code for benchmarking an application across multiple different heap sizes to accumulate garbage collection metrics;
computer usable code for utilizing the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes;
computer usable code for matching one of the candidate heap sizes to a desired CPU utilization and garbage collection time; and,
computer usable code for applying the matched one of the candidate heap sizes to a host environment.
11. The computer program product of claim 10, wherein the computer usable code for benchmarking an application across multiple different heap sizes, comprises computer usable code for benchmarking an application across multiple different heap sizes to determine a rate of garbage collections, an amount of memory collected for each garbage collection activity, and an average duration of each garbage collection.
12. The computer program product of claim 11, wherein the computer usable code for utilizing the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes, comprises:
computer usable code for computing CPU utilization for a candidate heap size as the number of CPU seconds used for a garbage collection divided by a number of threads involved in the garbage collection; and,
computer usable code for further computing a total garbage collection time for the candidate heap size as a base garbage collection time divided by a number of threads involved in the garbage collection combined with an average sweep time for the garbage collection.
13. The computer program product of claim 12, wherein the computer usable code for further computing a total garbage collection time for the candidate heap size, comprises computer usable code for computing a base garbage collection time as a minimal amount of time required to mark and sweep threads during mark and sweep style garbage collection for the garbage collection.
14. The computer program product of claim 10, wherein the computer usable program code for utilizing the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes, further comprises computer usable program code for calculating a number of threads for use in garbage collection to meet stated quality of service (QoS) goals.
15. The computer program product of claim 10, wherein the computer usable program code for utilizing the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes, further comprises computer usable program code for calculating a number of CPUs to be provisioned to meet stated quality of service (QoS) goals.
16. The computer program product of claim 10, further comprising computer usable code for adding additional processors when a match of the candidate heap sizes to a desired CPU utilization and garbage collection time cannot be found.
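The tuning procedure recited in claims 1, 3, 4, and 7 reduces to a small amount of arithmetic and a matching loop. The following is a minimal, hypothetical sketch of that procedure; all function names, the metrics-dictionary keys, and the callback signatures are illustrative assumptions, not the patented implementation.

```python
# Hypothetical sketch of the tuning procedure in claims 1, 3, 4, and 7.
# Names and data shapes are illustrative assumptions only.

def cpu_utilization(cpu_seconds_per_gc, gc_threads):
    """Claim 3: CPU seconds used for a garbage collection divided by
    the number of threads involved in the collection."""
    return cpu_seconds_per_gc / gc_threads

def total_gc_time(base_gc_time, gc_threads, avg_sweep_time):
    """Claim 3: base GC time (per claim 4, the minimal time to mark and
    sweep) divided by the thread count, plus the average sweep time."""
    return base_gc_time / gc_threads + avg_sweep_time

def tune_heap(candidates, benchmark, desired_util, desired_time,
              apply_heap_size, add_processor, max_extra_cpus=4):
    """Claims 1 and 7: benchmark each candidate heap size, match one to
    the desired CPU utilization and GC time, and apply it to the host
    environment; if no candidate matches, add a processor and retry."""
    for extra in range(max_extra_cpus + 1):
        for size in sorted(candidates):
            m = benchmark(size, extra)   # accumulated GC metrics (claim 2)
            util = cpu_utilization(m["cpu_seconds"], m["threads"])
            gc_time = total_gc_time(m["base_time"], m["threads"],
                                    m["sweep_time"])
            if util <= desired_util and gc_time <= desired_time:
                apply_heap_size(size)    # apply the matched size (claim 1)
                return size, extra
        add_processor()                  # claim 7 fallback: add a CPU
    return None, max_extra_cpus
```

In this sketch the benchmark callback stands in for the instrumented runs that accumulate the rate, volume, and duration metrics of claim 2; the outer loop is the claim-7 fallback of provisioning additional processors when no candidate heap size can meet the targets.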
US11/382,161 2006-05-08 2006-05-08 Memory tuning for garbage collection and central processing unit (CPU) utilization optimization Expired - Fee Related US7467278B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/382,161 US7467278B2 (en) 2006-05-08 2006-05-08 Memory tuning for garbage collection and central processing unit (CPU) utilization optimization
US12/259,275 US7716451B2 (en) 2006-05-08 2008-10-27 Memory tuning for garbage collection and central processing unit (CPU) utilization optimization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/382,161 US7467278B2 (en) 2006-05-08 2006-05-08 Memory tuning for garbage collection and central processing unit (CPU) utilization optimization

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/259,275 Division US7716451B2 (en) 2006-05-08 2008-10-27 Memory tuning for garbage collection and central processing unit (CPU) utilization optimization

Publications (2)

Publication Number Publication Date
US20070260843A1 true US20070260843A1 (en) 2007-11-08
US7467278B2 US7467278B2 (en) 2008-12-16

Family

ID=38662474

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/382,161 Expired - Fee Related US7467278B2 (en) 2006-05-08 2006-05-08 Memory tuning for garbage collection and central processing unit (CPU) utilization optimization
US12/259,275 Expired - Fee Related US7716451B2 (en) 2006-05-08 2008-10-27 Memory tuning for garbage collection and central processing unit (CPU) utilization optimization

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/259,275 Expired - Fee Related US7716451B2 (en) 2006-05-08 2008-10-27 Memory tuning for garbage collection and central processing unit (CPU) utilization optimization

Country Status (1)

Country Link
US (2) US7467278B2 (en)


Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8145456B2 (en) * 2008-09-30 2012-03-27 Hewlett-Packard Development Company, L.P. Optimizing a prediction of resource usage of an application in a virtual environment
US8260603B2 (en) * 2008-09-30 2012-09-04 Hewlett-Packard Development Company, L.P. Scaling a prediction model of resource usage of an application in a virtual environment
US9183134B2 (en) 2010-04-22 2015-11-10 Seagate Technology Llc Data segregation in a storage device
US8499010B2 (en) * 2010-12-22 2013-07-30 International Business Machines Corporation Garbage collection in a multiple virtual machine environment
JP6045567B2 (en) 2011-04-26 2016-12-14 シーゲイト テクノロジー エルエルシーSeagate Technology LLC Variable over-provisioning for non-volatile storage
CN102708476A (en) * 2012-05-09 2012-10-03 上海中海龙高新技术研究院 IoT (Internet of Things)-technology-based refuse collection management system
US9256469B2 (en) 2013-01-10 2016-02-09 International Business Machines Corporation System and method for improving memory usage in virtual machines
JP2015022504A (en) * 2013-07-18 2015-02-02 富士通株式会社 Information processing device, method, and program
US9710379B2 (en) * 2015-02-27 2017-07-18 International Business Machines Corporation Tuning utilization and heap memory size for real-time garbage collection

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6081665A (en) * 1997-12-19 2000-06-27 Newmonics Inc. Method for efficient soft real-time execution of portable byte code computer programs
US20030140071A1 (en) * 2001-12-14 2003-07-24 Takuji Kawamoto Apparatus, method, and program for implementing garbage collection suitable for real-time processing
US6611858B1 (en) * 1999-11-05 2003-08-26 Lucent Technologies Inc. Garbage collection method for time-constrained distributed applications
US20030182597A1 (en) * 2002-03-21 2003-09-25 Coha Joseph A. Method for optimization of memory usage for a computer program
US20040072764A1 (en) * 2000-12-15 2004-04-15 Lionel Breton Composition, in particular cosmetic, containing 7-hydroxy dhea and/or 7-keto dhea and at least an isoflavonoid
US20050149589A1 (en) * 2004-01-05 2005-07-07 International Business Machines Corporation Garbage collector with eager read barrier

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030196061A1 (en) * 2002-04-16 2003-10-16 Hideya Kawahara System and method for secure execution of multiple applications using a single GC heap
US7389506B1 (en) * 2002-07-30 2008-06-17 Unisys Corporation Selecting processor configuration based on thread usage in a multiprocessor system
JP4116877B2 (en) * 2002-12-26 2008-07-09 富士通株式会社 Heap size automatic optimization processing method, heap size automatic optimization device and program thereof
US7519639B2 (en) * 2004-01-05 2009-04-14 International Business Machines Corporation Method and apparatus for dynamic incremental defragmentation of memory
US7376684B2 (en) * 2004-06-04 2008-05-20 International Business Machines Corporation Efficient parallel bitwise sweep during garbage collection
US20060230242A1 (en) * 2005-04-12 2006-10-12 Mehta Virendra K Memory for multi-threaded applications on architectures with multiple locality domains
US7672983B2 (en) * 2005-12-19 2010-03-02 Sun Microsystems, Inc. Method and apparatus for tracking activity of a garbage collector with a plurality of threads that operate concurrently with an application program


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090319720A1 (en) * 2008-06-20 2009-12-24 Seagate Technology Llc System and method of garbage collection in a memory device
US8145455B2 (en) * 2008-09-30 2012-03-27 Hewlett-Packard Development Company, L.P. Predicting resource usage of an application in a virtual environment
US8180604B2 (en) * 2008-09-30 2012-05-15 Hewlett-Packard Development Company, L.P. Optimizing a prediction of resource usage of multiple applications in a virtual environment
US20100082320A1 (en) * 2008-09-30 2010-04-01 Wood Timothy W Accuracy in a prediction of resource usage of an application in a virtual environment
US20100082319A1 (en) * 2008-09-30 2010-04-01 Ludmila Cherkasova Predicting resource usage of an application in a virtual environment
US8131519B2 (en) * 2008-09-30 2012-03-06 Hewlett-Packard Development Company, L.P. Accuracy in a prediction of resource usage of an application in a virtual environment
US20100083248A1 (en) * 2008-09-30 2010-04-01 Wood Timothy W Optimizing a prediction of resource usage of multiple applications in a virtual environment
US8458417B2 (en) * 2010-03-10 2013-06-04 Seagate Technology Llc Garbage collection in a storage device
US20110225346A1 (en) * 2010-03-10 2011-09-15 Seagate Technology Llc Garbage collection in a storage device
US10430332B2 (en) * 2013-03-25 2019-10-01 Salesforce.Com, Inc. System and method for performance tuning of garbage collection algorithms
US20170109276A1 (en) * 2015-10-15 2017-04-20 SK Hynix Inc. Memory system and operation method thereof
US20170279697A1 (en) * 2016-03-22 2017-09-28 Intel Corporation Control device for estimation of power consumption and energy efficiency of application containers
US10432491B2 (en) * 2016-03-22 2019-10-01 Intel Corporation Control device for estimation of power consumption and energy efficiency of application containers
CN110716881A (en) * 2018-07-11 2020-01-21 爱思开海力士有限公司 Memory system and operating method thereof
CN110858180A (en) * 2018-08-22 2020-03-03 爱思开海力士有限公司 Data processing system and method of operation thereof

Also Published As

Publication number Publication date
US7716451B2 (en) 2010-05-11
US20090055615A1 (en) 2009-02-26
US7467278B2 (en) 2008-12-16

Similar Documents

Publication Publication Date Title
US7467278B2 (en) Memory tuning for garbage collection and central processing unit (CPU) utilization optimization
US10067680B2 (en) Methods and apparatus to manage workload memory allocation
US7730464B2 (en) Code compilation management service
US7992151B2 (en) Methods and apparatuses for core allocations
US7203941B2 (en) Associating a native resource with an application
US20070294693A1 (en) Scheduling thread execution among a plurality of processors based on evaluation of memory access data
JP5147728B2 (en) Qualitatively annotated code
US8453132B2 (en) System and method for recompiling code based on locality domain and thread affinity in NUMA computer systems
US20100005222A1 (en) Optimizing virtual memory allocation in a virtual machine based upon a previous usage of the virtual memory blocks
US10884468B2 (en) Power allocation among computing devices
US7237064B2 (en) Method and apparatus for feedback-based management of combined heap and compiled code caches
US20070011658A1 (en) Memory management configuration
US7418568B2 (en) Memory management technique
US8701095B2 (en) Add/remove memory pressure per object
US7761852B2 (en) Fast detection of the origins of memory leaks when using pooled resources
EP2341441B1 (en) Methods and apparatus to perform adaptive pre-fetch operations in managed runtime environments
CN113157379A (en) Cluster node resource scheduling method and device
US8296552B2 (en) Dynamically migrating channels
US8533712B2 (en) Virtual machine stage detection
US8001341B2 (en) Managing dynamically allocated memory in a computer system
CN102105847B (en) Method and system for power management using tracing data
CN111782466A (en) Big data task resource utilization detection method and device
Fang et al. An auto-tuning solution to data streams clustering in opencl
US20240036845A1 (en) Runtime environment optimizer for jvm-style languages
US20230418688A1 (en) Energy efficient computing workload placement

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: PATENT APPLICATION;ASSIGNORS:CREAMER, THOMAS E;HRISCHUK, CURTIS E;REEL/FRAME:017588/0287;SIGNING DATES FROM 20060427 TO 20060501

AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: DOCUMENT RE-RECORDED TO CORRECT AN ERROR CONTAINED IN PROPERTY NUMBER 10/381,161 IN THE DOCUMENT PREVIOUSLY RECORDED ON REEL 017588, FRAME 0287. ASSINGOR HEREBY CONFIRMS THE ASSINGMENT OF THE ENTIRE INTEREST.;ASSIGNORS:CREAMER, THOMAS E;HRISCHUK, CURTIS E.;REEL/FRAME:018162/0320;SIGNING DATES FROM 20060427 TO 20060501

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20201216