CN103488577A - Method and device of memory allocation and batch recovery for user applications based on use numbering - Google Patents
- Publication number
- CN103488577A CN103488577A CN201310430994.6A CN201310430994A CN103488577A CN 103488577 A CN103488577 A CN 103488577A CN 201310430994 A CN201310430994 A CN 201310430994A CN 103488577 A CN103488577 A CN 103488577A
- Authority
- CN
- China
- Prior art keywords
- memory
- purposes
- sub
- numbering
- pool
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention provides a method and a device for memory allocation and batch recovery for user applications based on purpose numbering. For functional requirements in which memory is allocated incrementally but all of the pre-allocated space must be released at the same moment, a more efficient allocation and recovery method is provided: no matter how many allocations take place, all of them can be released in a single operation. This greatly reduces the applications' consumption of system resources and improves the efficiency of memory allocation.
Description
Technical field
The present invention relates to the field of memory management in computer operating systems, and in particular to a method and a device for performing discrete memory allocation and batch memory recovery for fine-grained tasks in distributed computing.
Background art
In a computer system, memory allocation and recovery are core functions that allow the operating system and application programs to dynamically acquire and release system memory resources. The memory allocation strategy in an operating system kernel serves all kernel functions and is therefore a general-purpose allocation strategy.
Common memory allocation algorithms, with their respective merits and drawbacks, are as follows:
(1) First-fit algorithm. When allocating memory with this algorithm, the search starts from the head of the free-partition chain and continues until a free partition large enough to satisfy the request is found. A block of the requested size is then carved out of that partition and handed to the requester, and the remainder stays in the free-partition chain.
This algorithm tends to use the free partitions in the low-address part of memory, so the partitions in the high-address part are rarely used and large free areas are preserved at high addresses, which clearly creates favorable conditions for allocating large memory blocks to big jobs arriving later. Its drawbacks are that the low-address part is split again and again, leaving many small free areas that are hard to reuse, and that every search starts from the low-address end, which undoubtedly increases the search overhead.
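The first-fit search above can be sketched in a few lines (a minimal illustration assuming the free-partition chain is a Python list of (base, size) pairs; the function name and list layout are hypothetical, not part of the invention):

```python
# First-fit sketch: scan the free-partition chain from its head and carve
# the request out of the first partition that is large enough.

def first_fit(free_list, request):
    for i, (base, size) in enumerate(free_list):
        if size >= request:
            if size == request:
                free_list.pop(i)          # exact fit: remove the partition
            else:
                # remainder stays in the chain at the same position
                free_list[i] = (base + request, size - request)
            return base                    # address handed to the requester
    return None                            # no partition satisfies the request

free = [(0, 100), (200, 50), (400, 300)]
a = first_fit(free, 80)    # carved from the low-address partition at 0
b = first_fit(free, 60)    # first partition that fits is the one at 400
```

Note how both allocations nibble at the lowest addresses that fit, which is exactly the low-address fragmentation behavior described above.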
(2) Circular first-fit (next-fit) algorithm. This algorithm evolved from first-fit. When allocating memory space to a process, the search no longer starts from the chain head each time; instead it resumes from the free partition found last time and continues until a partition that satisfies the request is found, from which a block is carved for the job. This spreads allocations more evenly across the free partitions, but large free partitions become scarce.
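Next-fit differs from first-fit only in where the search starts; a minimal sketch under the same assumed (base, size) representation (the `NextFit` class and `cursor` field are illustrative names):

```python
# Circular first-fit (next-fit) sketch: the search resumes where the
# previous one left off instead of restarting at the chain head.

class NextFit:
    def __init__(self, free_list):
        self.free = free_list
        self.cursor = 0                    # partition where the last search ended

    def alloc(self, request):
        n = len(self.free)
        for step in range(n):              # at most one full cycle of the chain
            i = (self.cursor + step) % n
            base, size = self.free[i]
            if size >= request:
                self.free[i] = (base + request, size - request)
                self.cursor = i            # next search starts here
                return base
        return None

nf = NextFit([(0, 100), (200, 100), (400, 100)])
first = nf.alloc(50)    # carved at the chain head
second = nf.alloc(80)   # search resumes at the cursor, not the head
third = nf.alloc(50)    # allocations spread across the partitions
```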
(3) Best-fit algorithm. This algorithm always allocates the smallest free partition that satisfies the request.
To speed up the search, the algorithm keeps all free areas sorted by size in increasing order, forming a free-area chain, so that the first area found that satisfies the request is guaranteed to be the best fit. At first glance this algorithm seems optimal, but in practice it is not necessarily so: because the remainder left after every allocation is as small as possible, memory accumulates many tiny free areas that are hard to reuse. Moreover, the chain must be re-sorted after every allocation, which also brings a certain overhead.
(4) Worst-fit algorithm. The worst-fit algorithm keeps the free-area chain sorted in decreasing order of size, and allocation is taken directly from the first free partition of the chain (if that partition cannot satisfy the request, no partition can). At first sight this method looks unreasonable, but it has strong intuitive appeal: after a program is placed in a large free area, the remaining free area is usually still large, so it can also accommodate the next fairly large new program.
Worst-fit orders the chain in exactly the opposite way from best-fit: its queue pointer always points at the largest free area, and every allocation search starts there.
This algorithm overcomes best-fit's weakness of leaving many small fragments, but it reduces the chance of preserving a large free area, and reclaiming free areas is as complicated as in best-fit.
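The two sorted-chain strategies can be contrasted in one short sketch (same assumed (base, size) representation as above; `best_fit` and `worst_fit` are illustrative names, and re-sorting on every call stands in for maintaining the sorted chain):

```python
# Best-fit keeps the chain sorted ascending by size and takes the first
# partition that fits; worst-fit keeps it sorted descending and always
# tries the largest partition first.

def best_fit(free_list, request):
    free_list.sort(key=lambda p: p[1])            # ascending by size
    for i, (base, size) in enumerate(free_list):
        if size >= request:                        # first fit is the smallest fit
            free_list[i] = (base + request, size - request)
            return base
    return None

def worst_fit(free_list, request):
    free_list.sort(key=lambda p: p[1], reverse=True)  # descending by size
    base, size = free_list[0]                      # the largest partition
    if size < request:                             # if it cannot satisfy the
        return None                                # request, nothing can
    free_list[0] = (base + request, size - request)
    return base

free = [(0, 60), (200, 300), (600, 100)]
bf = best_fit(list(free), 50)     # the smallest adequate partition wins
wf = worst_fit(list(free), 50)    # the largest partition is tried first
```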
Moreover, different functions in the kernel have different memory demands and follow different patterns in their use of memory space; using a single general-purpose memory management method can gradually reduce memory utilization and make fragmentation increasingly serious.
Some modules in the kernel acquire memory incrementally, steadily increasing the memory resources they hold while performing a certain function, and then wish to release all of the memory they hold at a single moment when the function completes and the resources are to be released. Existing memory allocation methods require the position and size recorded at allocation time to be supplied when memory is released, so a large memory space obtained through repeated allocations must also be released piece by piece. This produces a burst of concentrated operations at release time and incurs considerable overhead.
Common memory recovery algorithms, with their respective merits and drawbacks, are as follows:
The advantages and defects of the reference-counting algorithm are equally obvious. The algorithm is fast when performing the garbage collection task itself, but it imposes an extra requirement on every memory allocation and pointer operation in the program (incrementing or decrementing the reference count of a memory block). More importantly, the reference-counting algorithm cannot correctly release memory blocks that are referenced cyclically.
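The cycle defect can be demonstrated in a few lines (a toy sketch assuming explicit incref/decref calls on every pointer operation; `Block`, `incref`, and `decref` are illustrative names):

```python
# Reference-counting sketch. Every pointer operation adjusts a count;
# a block is freed the moment its count reaches zero. The cycle at the
# end shows the algorithm's blind spot: mutually referencing blocks
# never reach count zero, so they are never freed.

class Block:
    def __init__(self, name):
        self.name = name
        self.refcount = 0
        self.refs = []          # outgoing pointers held by this block
        self.freed = False

def incref(b):
    b.refcount += 1

def decref(b):
    b.refcount -= 1
    if b.refcount == 0:         # last reference gone: free immediately
        b.freed = True
        for child in b.refs:    # and drop the references this block held
            decref(child)

a, b = Block("a"), Block("b")
incref(a)                       # a root pointer refers to a
a.refs.append(b); incref(b)     # a -> b
b.refs.append(a); incref(a)     # b -> a  (a reference cycle)
decref(a)                       # the root drops a: both counts stay at 1,
                                # so neither block is ever freed
```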
The mark-sweep algorithm executes in two major phases, "mark" and "sweep". This phased approach laid the conceptual foundation of modern garbage collection algorithms. Unlike reference counting, mark-sweep does not require the runtime environment to monitor every memory allocation and pointer operation; it only needs to trace the targets of the pointer variables during the "mark" phase. Collectors built on this idea are therefore often called tracing collectors by later generations. With the success of the Lisp language, mark-sweep performed brilliantly in most early Lisp runtime environments, although the original version had, and still has today, defects such as poor efficiency (both marking and sweeping are quite time-consuming processes).
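The two phases can be sketched over a toy object graph (a minimal illustration assuming objects carry a `marked` flag and a list of outgoing references; all names are illustrative):

```python
# Mark-sweep sketch. The 'mark' phase traces pointers from the roots;
# the 'sweep' phase discards everything that was not marked.

class Obj:
    def __init__(self):
        self.refs = []
        self.marked = False

def mark(obj):
    """Mark phase: follow every pointer reachable from this root."""
    if obj.marked:
        return
    obj.marked = True
    for child in obj.refs:
        mark(child)

def sweep(heap):
    """Sweep phase: keep marked objects, reset their flags, drop the rest."""
    live = [o for o in heap if o.marked]
    for o in live:
        o.marked = False        # reset for the next collection cycle
    return live                 # unmarked objects are discarded (freed)

root, reachable, garbage = Obj(), Obj(), Obj()
root.refs.append(reachable)     # root -> reachable; garbage is unreferenced
heap = [root, reachable, garbage]
mark(root)
heap = sweep(heap)              # garbage is collected; the other two survive
```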
To address mark-sweep's defects in collection efficiency, M.L. Minsky published the famous 1963 paper "A LISP Garbage Collector Algorithm Using Serial Secondary Storage". The algorithm Minsky describes in that paper came to be called the copying algorithm, and Minsky also successfully incorporated it into an implementation of the Lisp language.
The ideas of partitioning and copying not only significantly improved the efficiency of garbage collection, but also made the originally tangled and complex memory allocation algorithm unprecedentedly simple and clear: since every collection reclaims an entire half-space, allocation need not consider complications such as memory fragmentation; memory is allocated sequentially simply by moving the heap-top pointer. Among garbage collection techniques, the price the copying algorithm pays for this efficiency is that usable memory is artificially halved.
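The halved-memory trade-off and the bump-pointer allocation it buys can be seen in a small sketch (assuming a heap split into two halves, with live-object sizes supplied to the collector; `SemiSpaceHeap` and its methods are illustrative names):

```python
# Semi-space copying sketch: bump-pointer allocation in the active half;
# collection copies live objects compactly into the idle half and swaps
# the roles of the two halves.

class SemiSpaceHeap:
    def __init__(self, half_size):
        self.half_size = half_size
        self.top = 0                      # heap-top pointer in the active half

    def alloc(self, size):
        if self.top + size > self.half_size:
            return None                   # active half exhausted: collect first
        addr = self.top
        self.top += size                  # sequential allocation: just move the pointer
        return addr

    def collect(self, live_sizes):
        # Copy the live objects compactly into the other half, then swap
        # halves; everything not copied is reclaimed wholesale.
        self.top = 0
        return [self.alloc(s) for s in live_sizes]  # new contiguous addresses

heap = SemiSpaceHeap(100)
a0 = heap.alloc(40)
a1 = heap.alloc(40)
a2 = heap.alloc(40)                # fails: only half the memory is usable
new_addrs = heap.collect([40])     # one survivor, recopied from offset 0
```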
The mark-compact algorithm is a combination of mark-sweep and the copying algorithm. Its overall execution efficiency is higher than mark-sweep's, yet it does not sacrifice half of the storage space as the copying algorithm does, making it a fairly ideal method. Many modern garbage collectors use mark-compact or an improved variant of it.
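The compaction step that distinguishes mark-compact can be sketched briefly (assuming the mark phase has already flagged the live objects; the tuple layout and `compact` name are illustrative):

```python
# Mark-compact sketch (compaction step only): live objects slide toward
# low addresses inside the single heap, so no second half-space is needed.

def compact(objects):
    """objects: list of (addr, size, marked) tuples. Returns the relocated
    live objects packed from address 0, plus the new heap-top pointer."""
    free_ptr = 0
    relocated = []
    for addr, size, marked in sorted(objects):   # walk the heap in address order
        if marked:
            relocated.append((free_ptr, size))   # slide the live object down
            free_ptr += size
    return relocated, free_ptr                   # all space above free_ptr is free

heap = [(0, 10, True), (10, 20, False), (30, 5, True)]
live, top = compact(heap)          # the dead middle object's space is squeezed out
```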
Summary of the invention
The purpose of the present invention is to provide a method and a device for batch recovery of discretely allocated memory space based on purpose numbering, improving the efficiency of batch memory release and thereby eliminating the large overhead incurred by concentrated memory release.
To solve the above technical problem, the invention provides a method and a device for batch recovery of discretely allocated memory space based on purpose numbering. As shown in Figure 1, the method comprises:
A memory allocation interface. The caller of this interface must provide two parameters. The first parameter is the size of the memory space to allocate, in bytes. The second parameter is an arbitrary numerical value used to distinguish the purpose of the memory: memory allocated with the same value serves the same purpose and will be reclaimed together after use. This value is called the "purpose number".
A memory recovery interface. The caller of this interface must provide one parameter, corresponding to the purpose number supplied at allocation time. Calling the memory recovery interface reclaims all memory space associated with that value, making it available for subsequent allocations.
A memory management module. A contiguous memory space is reserved for each purpose number that appears, and different purpose numbers correspond to different memory spaces. Allocation requests within a purpose are served from the space belonging to that purpose. The reserved memory space corresponding to one purpose number is called a sub memory pool; accordingly, one sub memory pool serves one purpose. The memory management structure is shown in Figure 2.
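The three components above can be sketched together (a minimal illustration in which a Python bytearray stands in for the reserved contiguous memory space; the class and method names `MemoryManager`, `SubPool`, `mem_alloc`, and `mem_recycle` are hypothetical, not taken from the patent):

```python
# Sketch of the purpose-numbered interfaces: a dict maps each purpose
# number to its sub memory pool, and each pool hands out space with a
# sequential free-space pointer.

class SubPool:
    def __init__(self, reserved=1024):
        self.space = bytearray(reserved)   # reserved contiguous space
        self.free_ptr = 0                  # start of the unallocated region

    def alloc(self, size):
        if self.free_ptr + size > len(self.space):
            self.space.extend(bytearray(size))   # expand; space stays contiguous
        addr = self.free_ptr
        self.free_ptr += size              # sequential (bump-pointer) allocation
        return addr

class MemoryManager:
    def __init__(self):
        self.pools = {}                    # purpose number -> sub memory pool

    def mem_alloc(self, size, purpose_id):
        """Allocation interface: a size in bytes plus an arbitrary purpose number."""
        pool = self.pools.setdefault(purpose_id, SubPool())
        return pool.alloc(size)

    def mem_recycle(self, purpose_id):
        """Recovery interface: one call reclaims every allocation made under
        this purpose number, however many allocations there were."""
        self.pools[purpose_id].free_ptr = 0    # reset; the pool keeps serving

mm = MemoryManager()
for _ in range(1000):
    mm.mem_alloc(16, purpose_id=7)         # a thousand discrete allocations...
mm.mem_recycle(7)                          # ...released in a single call
```

This illustrates the central claim: the cost of release is constant regardless of how many allocations were made under the purpose number.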
The accompanying drawing explanation
Fig. 1 is a structural diagram of the memory allocation device.
Fig. 2 is a structural diagram of the memory management module.
Embodiment
Within a sub memory pool, memory is allocated sequentially. A free-space pointer points to the beginning of the unallocated region. An allocation operation uses the computer's atomic instructions to operate on the free-space pointer: it returns the address of the current free space and advances the pointer backward by the requested size, so that it points to the start address of the new free space.
When the space in a sub memory pool is insufficient to allocate a new element, the management module calls the operating system's underlying memory management functions to expand the sub pool and request more space. The newly requested space is contiguous with the sub pool, guaranteeing that the address space within the pool is always contiguous.
Closing a sub memory pool means that the memory resources corresponding to a purpose number will no longer be used. When a pool is closed, all of the memory space it occupies is released and returned to operating-system management.
Storage units within a sub memory pool cannot be reclaimed individually.
Recycling a sub memory pool means resetting all of the storage space in the pool and returning it to the memory management module; the pool continues to serve subsequent requests, and the memory resources it contains are not returned to the operating system.
When system memory resources are insufficient, the memory management module can initiate a forced memory recovery operation. The steps of forced memory recovery are as follows:
Randomly select a sub memory pool. Check the memory space reserved for the pool, and check the memory space the pool has actually allocated. If the allocated space is smaller than the reserved space, the pool is not fully used and a recovery operation can be performed on it.
The recovery operation calls the operating system's memory management functions to shrink the boundary of the memory actually occupied by the sub pool down to the size actually allocated, thereby reducing the memory actually consumed.
If the system's free memory now satisfies the demand, the forced memory recovery operation completes. If enough memory space still cannot be obtained, another sub memory pool is chosen at random and the above operation is repeated, iterating until enough memory space has been reclaimed. If all sub memory pools have been traversed and enough memory space has still not been obtained, an out-of-memory condition is reported.
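The forced-recovery steps above can be sketched as follows (assuming each sub memory pool records its reserved and allocated sizes; shrinking `reserved` down to `allocated` stands in for the operating-system call that trims the pool boundary, and all names are illustrative):

```python
# Forced-recovery sketch: shrink under-used sub memory pools, chosen at
# random, until enough memory has been reclaimed; report failure only
# after every pool has been tried.

import random

class Pool:
    def __init__(self, reserved, allocated):
        self.reserved = reserved      # space currently occupied by this pool
        self.allocated = allocated    # space actually handed out

def forced_recovery(pools, needed):
    reclaimed = 0
    candidates = list(pools)
    random.shuffle(candidates)        # random selection order
    for pool in candidates:
        if pool.allocated < pool.reserved:        # pool is not fully used
            reclaimed += pool.reserved - pool.allocated
            pool.reserved = pool.allocated        # shrink to the actual allocation
        if reclaimed >= needed:
            return reclaimed                      # demand satisfied: stop early
    raise MemoryError("insufficient memory after traversing all sub pools")

pools = [Pool(4096, 1024), Pool(2048, 2048), Pool(8192, 512)]
got = forced_recovery(pools, needed=3000)    # stops as soon as 3000 bytes free up
```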
Claims (5)
1. A memory allocation interface, wherein the caller of the interface provides two parameters: the first parameter is the size of the memory space to allocate, in bytes; the second parameter is an arbitrary numerical value used to distinguish the purpose of the memory, wherein memory allocated with the same value serves the same purpose and is reclaimed together after use; this value is called the "purpose number".
2. A memory recovery interface, wherein the caller of the interface provides one parameter corresponding to the purpose number supplied at allocation time; calling the memory recovery interface reclaims all memory space associated with that value, making it available for subsequent allocations.
3. A memory management module, wherein a contiguous memory space is reserved for each purpose number that appears and different purpose numbers correspond to different memory spaces; allocation requests within a purpose are served from the space belonging to that purpose; the reserved memory space corresponding to one purpose number is called a sub memory pool, and accordingly one sub memory pool serves one purpose.
4. Within a sub memory pool, memory is allocated sequentially: a free-space pointer points to the beginning of the unallocated region, and an allocation operation uses the computer's atomic instructions to operate on the free-space pointer, returning the address of the current free space and advancing the pointer by the requested size so that it points to the start address of the new free space.
5. Recycling a sub memory pool means resetting all of the storage space in the pool and returning it to the memory management module; the pool continues to serve subsequent requests, and the memory resources it contains are not returned to the operating system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310430994.6A CN103488577A (en) | 2013-09-22 | 2013-09-22 | Method and device of memory allocation and batch recovery for user applications based on use numbering |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310430994.6A CN103488577A (en) | 2013-09-22 | 2013-09-22 | Method and device of memory allocation and batch recovery for user applications based on use numbering |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103488577A true CN103488577A (en) | 2014-01-01 |
Family
ID=49828826
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310430994.6A Pending CN103488577A (en) | 2013-09-22 | 2013-09-22 | Method and device of memory allocation and batch recovery for user applications based on use numbering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103488577A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103914337A (en) * | 2014-03-24 | 2014-07-09 | 小米科技有限责任公司 | Service calling method, device and terminal |
CN105930217A (en) * | 2016-04-15 | 2016-09-07 | 上海斐讯数据通信技术有限公司 | Thread optimization system and method |
CN111177019A (en) * | 2019-08-05 | 2020-05-19 | 腾讯科技(深圳)有限公司 | Memory allocation management method, device, equipment and storage medium |
CN111538584A (en) * | 2019-07-19 | 2020-08-14 | 新华三技术有限公司 | Memory resource allocation method, device, equipment and machine readable storage medium |
CN115617531A (en) * | 2022-11-16 | 2023-01-17 | 沐曦集成电路(上海)有限公司 | Method, device, storage medium and equipment for rapidly detecting discrete resources |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080243969A1 (en) * | 2007-03-30 | 2008-10-02 | Sap Ag | Method and system for customizing allocation statistics |
CN101984417A (en) * | 2010-11-01 | 2011-03-09 | 中兴通讯股份有限公司 | Memory management method and device |
CN102063385A (en) * | 2010-12-23 | 2011-05-18 | 深圳市金宏威实业发展有限公司 | Memory management method and system |
CN102103541A (en) * | 2011-02-28 | 2011-06-22 | 中国人民解放军国防科学技术大学 | Kernel-module memory management method for preventing memory leaks and multiple memory releases |
CN103218360A (en) * | 2012-01-18 | 2013-07-24 | 中国石油天然气集团公司 | Method of industrial real-time database for realizing dynamic memory management by adopting memory pool technology |
-
2013
- 2013-09-22 CN CN201310430994.6A patent/CN103488577A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080243969A1 (en) * | 2007-03-30 | 2008-10-02 | Sap Ag | Method and system for customizing allocation statistics |
CN101984417A (en) * | 2010-11-01 | 2011-03-09 | 中兴通讯股份有限公司 | Memory management method and device |
CN102063385A (en) * | 2010-12-23 | 2011-05-18 | 深圳市金宏威实业发展有限公司 | Memory management method and system |
CN102103541A (en) * | 2011-02-28 | 2011-06-22 | 中国人民解放军国防科学技术大学 | Kernel-module memory management method for preventing memory leaks and multiple memory releases |
CN103218360A (en) * | 2012-01-18 | 2013-07-24 | 中国石油天然气集团公司 | Method of industrial real-time database for realizing dynamic memory management by adopting memory pool technology |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103914337A (en) * | 2014-03-24 | 2014-07-09 | 小米科技有限责任公司 | Service calling method, device and terminal |
CN103914337B (en) * | 2014-03-24 | 2016-04-13 | 小米科技有限责任公司 | Service calling method, device and terminal |
CN105930217A (en) * | 2016-04-15 | 2016-09-07 | 上海斐讯数据通信技术有限公司 | Thread optimization system and method |
CN111538584A (en) * | 2019-07-19 | 2020-08-14 | 新华三技术有限公司 | Memory resource allocation method, device, equipment and machine readable storage medium |
CN111177019A (en) * | 2019-08-05 | 2020-05-19 | 腾讯科技(深圳)有限公司 | Memory allocation management method, device, equipment and storage medium |
CN115617531A (en) * | 2022-11-16 | 2023-01-17 | 沐曦集成电路(上海)有限公司 | Method, device, storage medium and equipment for rapidly detecting discrete resources |
CN115617531B (en) * | 2022-11-16 | 2023-04-28 | 沐曦集成电路(上海)有限公司 | Method, device, storage medium and equipment for rapidly detecting discrete resources |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103488577A (en) | Method and device of memory allocation and batch recovery for user applications based on use numbering | |
CN103514102B (en) | A kind of Java Virtual Machine realizes the method and device of internal memory garbage reclamation | |
TWI771332B (en) | Resource recovery method and device | |
CN110362407A (en) | Computing resource dispatching method and device | |
CN101859279B (en) | Memory allocation and release method and device | |
CN103336744A (en) | Garbage recovery method for solid-state storage device and system for garbage recovery method | |
CN105843748B (en) | The processing method and processing device of page in a kind of pair of memory | |
CN103455433A (en) | Memory management method and system | |
CN105302739A (en) | Memory management method and device | |
CN104079503A (en) | Method and device of distributing resources | |
CN101984417A (en) | Memory management method and device | |
CN104090848A (en) | Memory management method and device for periodic large big data processing | |
CN103365784B (en) | The method of Memory recycle and distribution and device | |
CN103336722A (en) | Virtual machine CPU source monitoring and dynamic distributing method | |
CN103902384A (en) | Method and device for allocating physical machines for virtual machines | |
CN1532708A (en) | Static internal storage management method | |
CN102053916B (en) | Method for distributing large continuous memory of kernel | |
CN105975049A (en) | Task synchronization-based low-power dispatching method for sporadic tasks | |
CN103106147A (en) | Memory allocation method and system | |
CN104850505A (en) | Memory management method and system based on chain type stacking | |
CN104536773B (en) | Embedded software dynamic memory recovery method based on internal memory scanning | |
CN105469173A (en) | Method of optimal management on static memory | |
CN110580195A (en) | Memory allocation method and device based on memory hot plug | |
CN106598697A (en) | Virtual memory dynamic allocation method of virtual machine | |
CN102497410B (en) | Method for dynamically partitioning computing resources of cloud computing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20140101 |