CN101561783B - Method and device for Cache asynchronous elimination - Google Patents

Method and device for Cache asynchronous elimination Download PDF

Info

Publication number
CN101561783B
CN101561783B CN2008100899801A CN200810089980A
Authority
CN
China
Prior art keywords
cache
elimination
level cache
thread
recovery area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2008100899801A
Other languages
Chinese (zh)
Other versions
CN101561783A (en)
Inventor
杨含飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taobao China Software Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN2008100899801A priority Critical patent/CN101561783B/en
Publication of CN101561783A publication Critical patent/CN101561783A/en
Priority to HK10103891.2A priority patent/HK1135782A1/en
Application granted granted Critical
Publication of CN101561783B publication Critical patent/CN101561783B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention discloses a method and a device for asynchronous Cache elimination. The method comprises the following steps: obtaining the space occupancy of eliminated objects in a recovery area; comparing the space occupancy of the eliminated objects with a predetermined threshold; and closing the eliminated objects in the recovery area when the space occupancy is greater than the predetermined threshold. By obtaining the space occupancy of the eliminated objects in the recovery area and comparing it with the predetermined threshold, the invention closes the eliminated objects in the recovery area asynchronously with respect to the calling thread whenever the occupancy exceeds the threshold, which keeps the calling thread running efficiently and improves the execution efficiency of the system as a whole.

Description

Method and apparatus for asynchronous Cache elimination
Technical Field
The present invention relates to the field of communication technologies, and in particular to a method and apparatus for asynchronous Cache elimination.
Background Art
A Cache (buffer memory) is a mechanism that keeps a number of objects available for reuse in order to improve program performance. It has the following characteristics: it is shared by multiple threads, it has a limited capacity, and the capacity limit forces Cache data to be eliminated.
Being shared by multiple threads means that the Cache must be thread-safe and can serve only one thread at a time. The capacity limit means that the Cache must eliminate some lower-weight objects according to a certain policy. In many application scenarios, eliminating an object and releasing its resources consumes a considerable amount of computing resources.
Objects in a Cache are closed in two situations. The first is when an object in the Cache is eliminated: the eliminated object must be closed so that its resources are released. Under the existing technical solution, to keep the Cache working correctly and avoid concurrency problems, the Cache can be released to other threads only after the current calling thread has finished eliminating the object and releasing its resources. Therefore, if closing an object and releasing its resources takes a large amount of CPU time during elimination, other threads are forced to wait for a long time, CPU resources cannot be fully used, the concurrency of the whole program drops, and the purpose of using a Cache to improve execution performance cannot be achieved.
The second situation is when the Cache is refreshed or closed. In the prior art, closing the objects in the Cache and releasing their resources is done by a single thread. In scenarios that require a fast Cache refresh or shutdown, this single-threaded, serial approach cannot close the objects quickly.
In the course of making the present invention, the inventor found at least the following problem in the prior art:
In the prior art, the thread that eliminates objects is usually the calling thread that adds objects to the Cache: when it finds that an object needs to be eliminated, the calling thread itself performs the elimination and releases the resources. Because of the thread-synchronization property of the Cache, while one thread is eliminating an object, other threads can only wait, which severely degrades the performance of the Cache and prevents it from providing service effectively.
Summary of the Invention
The present invention provides a method and apparatus for asynchronous Cache elimination, so as to effectively solve the performance loss that a Cache incurs when closing the objects in it and releasing the associated resources.
To solve the above problem, the present invention provides a method for asynchronous Cache elimination, comprising the following steps:
obtaining the space occupancy of eliminated objects in a recovery area;
comparing the space occupancy of the eliminated objects with a predefined threshold;
closing the eliminated objects in the recovery area when the space occupancy of the eliminated objects is greater than the threshold.
The recovery area is specifically a second-level Cache, and an eliminated object placed in the recovery area waits either to be closed or to be reclaimed for reuse.
Before the step of obtaining the space occupancy of eliminated objects in the recovery area, the method further comprises: when the capacity of the first-level Cache is full, placing the object to be eliminated into the recovery area.
The step of closing the eliminated objects in the recovery area is specifically: adjusting the number of threads that close eliminated objects, removing the eliminated objects from the recovery area, and releasing the resources occupied by the eliminated objects. These threads close the eliminated objects in parallel.
The step of closing the eliminated objects in the recovery area is specifically: closing the eliminated objects in the order in which they were placed into the recovery area, earliest first.
Adjusting the threads that close eliminated objects is specifically: adjusting the number of threads, within a preset thread-count range, according to the number of objects to be eliminated.
When the recovery area is not full, the threads close the eliminated objects in the background in an asynchronous, parallel manner, and the calling thread does not participate in closing the eliminated objects. When the recovery area is full, the calling thread directly closes the object currently being eliminated from the first-level Cache.
To achieve the above purpose, an apparatus for asynchronous Cache elimination of the present invention comprises:
an acquisition module, configured to obtain the space occupancy of eliminated objects in the recovery area;
a comparison module, configured to compare the space occupancy obtained by the acquisition module with a predefined threshold;
an object closing module, configured to close the eliminated objects in the recovery area when the space occupancy obtained by the acquisition module is greater than the predefined threshold.
The object closing module further comprises:
an object-order detection submodule, configured to detect the order in which eliminated objects were placed into the recovery area;
a thread control submodule, configured to adjust, within a predefined range, the number of threads that close eliminated objects;
a closing submodule, configured to use the threads adjusted by the thread control submodule to close, asynchronously and in parallel, the eliminated objects in the order detected by the object-order detection submodule; when the thread control submodule has adjusted the thread count to its maximum and the acquisition module finds that the recovery area is full, the calling thread directly closes the object currently being eliminated from the first-level Cache.
The apparatus for asynchronous Cache elimination further comprises: the recovery area, which is specifically a second-level Cache; an eliminated object placed in the recovery area waits either to be closed or to be reclaimed for reuse.
Compared with the prior art, the present invention has the following advantages:
The present invention obtains the space occupancy of the eliminated objects in the recovery area and compares it with a predefined threshold; when the space occupancy of the eliminated objects is greater than the threshold, the eliminated objects in the recovery area can be closed asynchronously with respect to the calling thread. This keeps the calling thread working concurrently and efficiently, and thereby improves the execution efficiency of the system as a whole.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the asynchronous elimination structure of the Cache in the present invention;
Fig. 2 is a flowchart of asynchronous Cache elimination in the present invention;
Fig. 3 is a flowchart of placing an object into the Cache in the present invention;
Fig. 4 is a flowchart of taking an object out of the Cache in the present invention;
Fig. 5 is a flowchart of closing the Cache in the present invention;
Fig. 6 is a diagram of the apparatus for asynchronous Cache elimination in the present invention.
Detailed Description of the Embodiments
The present invention provides a method and apparatus for asynchronous Cache elimination, so as to effectively solve the performance loss that a Cache incurs when closing objects and releasing the associated resources.
Although raising the CPU clock frequency improves system performance, system performance does not depend on the CPU alone; it is also related to the system architecture, the instruction set, the information transfer rate between components and, in particular, the access speed of the memory components, above all the access speed between the CPU and main memory. If the CPU runs fast but memory access is relatively slow, the CPU has to wait, processing speed drops, and CPU capacity is wasted.
To improve CPU processing speed, computers today are equipped with a cache memory (Cache), also called a buffer memory, which is in fact a special kind of high-speed memory. The Cache can be accessed faster than main memory and therefore improves the effective processing speed of the CPU.
A Cache is a special memory made up of Cache storage components and Cache control components. The Cache storage components generally use semiconductor memory of the same type as the CPU, several times or even tens of times faster to access than main memory. The Cache control components include a main-memory address register, a Cache address register, main-memory-to-Cache address mapping components, a replacement control component, and so on.
However, merely increasing the capacity of the first-level Cache does not improve CPU performance proportionally; a second-level Cache is also needed. The first-level Cache, also called the primary or internal cache, is built directly into the CPU chip; its capacity is very small, usually between 8 KB and 64 KB. The second-level Cache, also called the external cache, is located outside the CPU core on an independent SRAM chip; it is slightly slower than the first-level Cache but larger, mostly between 64 KB and 2 MB.
The main advantage of the Cache hierarchy is that, in a typical system with a first-level Cache, 80% of memory accesses are served inside the CPU and only 20% involve external memory, and of that 20%, 80% are served by the second-level Cache. As a result, only about 4% of memory accesses actually reach DRAM.
The drawback of the Cache hierarchy is that the number of cache sets is limited, and it occupies board space and requires some supporting logic, which raises cost.
The object-closing threads provided by the embodiments of the present invention improve the processing speed of the calling thread without increasing the number of cache sets.
As shown in Fig. 1, when an object is eliminated, it is simply placed into the recovery area, i.e. the second-level Cache; the object's resources are actually released asynchronously by the object-closing thread pool. The range of the number of threads in the object-closing thread pool can be set in advance and adjusted dynamically according to the actual closing workload. In this way, the thread performing the elimination does not have to wait for the close: it simply puts the eliminated object into the recovery area (the second-level Cache), the object is not really closed at that point, and the close operation is handed to the threads of the object-closing thread pool. The close operations therefore do not consume the calling thread's CPU time, which effectively improves concurrency. If another thread needs to reuse an object that has been placed in the recovery area (the second-level Cache), the object can be reclaimed, which avoids errors caused by operating repeatedly on the same resource. Likewise, when the Cache must be closed or refreshed, the calling thread simply places the objects into the recovery area (the second-level Cache), and the multiple threads of the object-closing thread pool close them concurrently and efficiently, improving the efficiency of closing or refreshing the Cache.
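To make this structure concrete, the following is a minimal Java sketch of Fig. 1, assuming a keyed cache of closeable objects; the class and member names (AsyncEvictionCache, recoveryArea, closerPool, and so on) are hypothetical illustrations and are not taken from the patent or any particular library.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/** Sketch of Fig. 1: a first-level Cache, a recovery area (second-level Cache)
 *  holding eliminated objects, and an object-closing thread pool that releases
 *  their resources asynchronously to the calling thread. */
public class AsyncEvictionCache<K, V extends AutoCloseable> {

    private final int l1Capacity;        // capacity of the first-level Cache
    private final int recoveryCapacity;  // capacity of the recovery area
    private final int closeThreshold;    // predefined threshold for the recovery area

    // First-level Cache, access-ordered so the eldest entry is the one to eliminate.
    private final LinkedHashMap<K, V> l1 = new LinkedHashMap<>(16, 0.75f, true);

    // Recovery area: eliminated objects wait here, in elimination order, either to be
    // closed by the pool or to be reclaimed by a later lookup.
    private final LinkedHashMap<K, V> recoveryArea = new LinkedHashMap<>();

    // Object-closing thread pool. Its size grows from a minimum toward a maximum as the
    // bounded work queue fills, and when the queue is full, CallerRunsPolicy makes the
    // calling thread perform the close itself - mirroring the "recovery area full" fallback.
    private final ThreadPoolExecutor closerPool;

    public AsyncEvictionCache(int l1Capacity, int recoveryCapacity, int closeThreshold,
                              int minCloserThreads, int maxCloserThreads) {
        this.l1Capacity = l1Capacity;
        this.recoveryCapacity = recoveryCapacity;
        this.closeThreshold = closeThreshold;
        this.closerPool = new ThreadPoolExecutor(
                minCloserThreads, maxCloserThreads, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(recoveryCapacity),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    /** Close one eliminated object and release the resources it occupies. */
    private void closeQuietly(V obj) {
        try {
            obj.close();
        } catch (Exception e) {
            // Resource release failed; a real implementation would log this.
        }
    }
}
```

The methods sketched in the following sections (threshold check, placement, lookup, shutdown) would be added to this class.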
The method and apparatus for asynchronous Cache elimination provided by the present invention are described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 2, the flow of asynchronous Cache elimination in the present invention comprises the following steps:
Step S201: obtain the space occupancy of the eliminated objects in the recovery area.
Specifically, the recovery area is a second-level Cache, and an eliminated object placed in the recovery area waits either to be closed or to be reclaimed for reuse.
Before this step, the method further comprises:
when the capacity of the first-level Cache is full, placing the object to be eliminated into the recovery area.
Step S202: compare the space occupancy of the eliminated objects in the recovery area with the predefined threshold.
If the comparison shows that the space occupancy of the eliminated objects in the recovery area is less than or equal to the threshold, the eliminated objects in the recovery area are not closed.
If the comparison shows that the space occupancy of the eliminated objects is greater than the threshold, step S203 is executed.
Specifically, when the threshold is 0, an object can be closed by the object-closing thread pool as soon as it is put into the recovery area. In general, the first-level Cache puts eliminated objects into the recovery area faster than the object-closing thread pool closes them, so the number of eliminated objects in the recovery area keeps growing. An eliminated object in the recovery area that has not yet been closed by the thread pool can still be reactivated by a calling thread.
Step S203: when the space occupancy of the eliminated objects is greater than the threshold, close the eliminated objects in the recovery area.
Specifically, the number of threads that close eliminated objects is adjusted, the eliminated objects are removed from the recovery area, and the resources occupied by the eliminated objects are released.
These threads close the eliminated objects in parallel.
When removing eliminated objects, the objects that were placed into the recovery area first are closed first, i.e. in elimination order.
Adjusting the number of threads that close eliminated objects is specifically: adjusting the number of threads, within a preset thread-count range, according to the number of objects to be eliminated.
When the recovery area is not full, the threads close the eliminated objects in the background in an asynchronous, parallel manner, and the calling thread does not participate in closing them. When the recovery area is full, the calling thread directly closes the object currently being eliminated from the first-level Cache.
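Continuing the hypothetical AsyncEvictionCache sketch above (not the patent's own code), steps S201–S203 could look like the following two methods. Occupancy is measured here simply as a count of parked objects; a byte-size measure compared against a byte-size threshold would follow the same shape.

```java
// Step S201/S202: obtain the recovery-area occupancy and compare it with the threshold.
private synchronized void checkRecoveryAreaAndClose() {
    int occupied = recoveryArea.size();
    if (occupied <= closeThreshold) {
        return;                               // at or below the threshold: do nothing
    }
    // Step S203: one close task per excess object. The pool threads run in parallel,
    // asynchronously to the calling thread, and take objects oldest-first.
    for (int i = occupied - closeThreshold; i > 0; i--) {
        closerPool.execute(this::closeOldestFromRecoveryArea);
    }
}

/** Pool task: remove the oldest eliminated object, if any, and release its resources. */
private void closeOldestFromRecoveryArea() {
    V victim = null;
    synchronized (this) {
        Iterator<Map.Entry<K, V>> it = recoveryArea.entrySet().iterator();
        if (it.hasNext()) {                   // may be empty if a lookup reclaimed the object
            victim = it.next().getValue();
            it.remove();
        }
    }
    if (victim != null) {
        closeQuietly(victim);                 // done off the calling thread, outside the lock
    }
}
```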
The method provided by the above embodiment addresses the performance loss that a thread-synchronized Cache suffers in a multi-threaded environment when eliminating objects. Because the data in the Cache are eliminated asynchronously, the CPU resources consumed by elimination are kept separate from the CPU resources of the calling thread, which keeps the calling thread working concurrently and efficiently and improves the execution efficiency of the system as a whole.
A further embodiment describes the concrete operation of the present invention in detail.
Fig. 3 is a flowchart of placing an object into the Cache. As shown in the figure, it comprises the following steps:
Step S301: when a calling thread places an object into the Cache, the Cache first checks the capacity of the first-level Cache. If the first-level Cache is not full, step S302 is executed; if the first-level Cache is full, the object that should be eliminated from the first-level Cache is determined and removed from the first-level Cache, the new object is put into the first-level Cache, and execution continues at step S303.
Step S302: the first-level Cache is not full, so the object is put directly into the first-level Cache, the call to the Cache returns, the thread resources occupied by the caller are released, and the flow ends.
Step S303: the first-level Cache is full, so the capacity of the second-level Cache is checked. If the second-level Cache is not full, step S304 is executed; if the second-level Cache is full, step S305 is executed.
Step S304: if the second-level Cache is not full, the object selected by the first-level elimination algorithm is eliminated into the second-level Cache and the newly inserted object is placed into the first-level Cache; the eliminated object waits in the second-level Cache to be closed by the object-closing thread pool, the calling thread returns, the thread resources occupied by the caller are released, and the flow ends.
The object-closing thread pool monitors the space occupied by the objects in the second-level Cache. If the occupied space is greater than the predefined threshold, it takes objects out of the second-level Cache, removes them from the second-level Cache, and then releases the resources they occupy. The object-closing thread pool can dynamically adjust the number of threads, within the configured thread-count range, according to how busy object recovery is, and it closes objects in the order in which they were placed into the second-level Cache. Because the object-closing thread pool closes eliminated objects asynchronously, concurrently with the calling thread, the resource release work is performed efficiently.
At this point the eliminated Cache object has not yet had its resources released during the call itself: the calling thread returns as soon as the eliminated object has been placed into the second-level Cache and the new object has been placed into the first-level Cache. The call therefore does not spend time recovering the object's resources. Having the closing thread pool close the objects lets the calling thread release its resources as soon as possible and move on to other work, which keeps the calling thread working concurrently and efficiently and improves the execution efficiency of the system as a whole.
Step S305: when an object is being put into the second-level Cache and the second-level Cache is not full, the threads close the eliminated objects in the background in an asynchronous, parallel manner, and the calling thread does not participate in closing them; when the second-level Cache is full, the calling thread directly closes the object currently being eliminated from the first-level Cache.
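A minimal sketch of this placement flow, again continuing the hypothetical AsyncEvictionCache class above (S302 fast path, S303/S304 parking in the recovery area, S305 direct close by the calling thread):

```java
// Placement flow of Fig. 3, added to the AsyncEvictionCache sketch.
public synchronized void put(K key, V value) {
    // S302: the first-level Cache still has room - store the object and return at once.
    if (l1.size() < l1Capacity) {
        l1.put(key, value);
        return;
    }
    // S301, full branch: remove the object to be eliminated and insert the new one.
    Iterator<Map.Entry<K, V>> it = l1.entrySet().iterator();
    Map.Entry<K, V> eliminated = it.next();   // eldest entry under access order
    it.remove();
    l1.put(key, value);

    if (recoveryArea.size() < recoveryCapacity) {
        // S303/S304: park the eliminated object in the recovery area; the object-closing
        // thread pool releases it later, asynchronously to this calling thread.
        recoveryArea.put(eliminated.getKey(), eliminated.getValue());
        checkRecoveryAreaAndClose();
    } else {
        // S305: the recovery area is full - the calling thread closes the object directly.
        closeQuietly(eliminated.getValue());
    }
}
```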
Fig. 4 is a flowchart of taking an object out of the Cache. As shown in the figure, it comprises the following steps:
Step S401: when an object is to be taken out of the Cache, first check whether the requested object is in the first-level Cache; if it is, return it directly; if not, continue with the following steps.
Step S402: if the corresponding object is not found in the first-level Cache, check whether it exists in the second-level Cache. If it is not in the second-level Cache, return that the Cache does not contain the object and end. If the requested object is in the second-level Cache, continue with the following steps.
Step S403: if the corresponding object is found in the second-level Cache, check whether the object-closing thread pool is currently closing it. If the object is not being closed, execute step S404; if it is being closed, execute step S405.
Step S404: if the object is not being closed, remove it from the second-level Cache and notify the object-closing thread pool that this object will be reused and must not be closed. After the object has been put into the first-level Cache, return it to the calling thread and end.
Step S405: if the object being reclaimed is already being closed by an object-closing thread, the calling thread waits until the close has completed before returning and ending, so as to guarantee that the Cache works correctly.
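Continuing the same hypothetical sketch, the lookup flow could be written as follows; the wait for an in-flight close (step S405) is simplified to a plain miss here, as noted in the comments.

```java
// Lookup flow of Fig. 4, added to the AsyncEvictionCache sketch.
public synchronized V get(K key) {
    // S401: check the first-level Cache first.
    V hit = l1.get(key);
    if (hit != null) {
        return hit;
    }
    // S402-S404: look in the recovery area. An object found there has not been closed
    // yet; removing it from the recovery area means the closing pool will not see it.
    V parked = recoveryArea.remove(key);
    if (parked != null) {
        l1.put(key, parked);                  // re-admitted (capacity check omitted here)
        return parked;
    }
    // S405, simplified: the object is absent or already being closed. The patent has the
    // calling thread wait for the close to finish; this sketch just reports a miss.
    return null;
}
```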
After the Cache has finished its duty, it needs to be closed so that the resources it occupies are released. Fig. 5 is the flowchart for closing the Cache; as shown in the figure, it specifically comprises the following steps:
Step S501: when a request to close the Cache is made, the close operation of the first-level Cache is performed first; while the first-level Cache is being closed, all of the objects in the first-level Cache are placed into the second-level Cache.
Step S502: the free space of the second-level Cache is checked; while it is not full, the object-closing thread pool starts multiple threads that concurrently take the corresponding objects out of the second-level Cache and release their resources.
Because the resources are recovered concurrently, the speed of resource recovery is effectively increased, thereby reducing the time needed to close the Cache.
Step S503: when objects are being put into the second-level Cache and the second-level Cache is full, the Cache has the calling thread and the object-closing thread pool perform the close operations on the objects asynchronously at the same time.
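As an illustration, this shutdown flow could be sketched as follows on top of the same hypothetical class; the S503 fallback is provided by the pool's CallerRunsPolicy, which makes the calling thread close objects itself once the pool's work queue is full.

```java
// Shutdown flow of Fig. 5, added to the AsyncEvictionCache sketch.
public void close() throws InterruptedException {
    synchronized (this) {
        // S501: close the first-level Cache by moving every object into the recovery area.
        recoveryArea.putAll(l1);
        l1.clear();
        // S502: several pool threads drain the recovery area concurrently and release
        // the resources; the S503 fallback (calling thread closes directly) is handled by
        // CallerRunsPolicy when the pool's work queue is full.
        int parked = recoveryArea.size();
        for (int i = 0; i < parked; i++) {
            closerPool.execute(this::closeOldestFromRecoveryArea);
        }
    }
    closerPool.shutdown();
    closerPool.awaitTermination(1, TimeUnit.MINUTES);
}
```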
The above embodiments describe the invention in principle. Taking the closing of a Cache as an example, the method for asynchronous Cache elimination provided by the present invention is described below through a specific embodiment.
(1) When a calling thread requests that all objects in the Cache memory be closed, the close operation is first carried out on the objects in the first-level Cache memory located inside the CPU chip.
When the objects in the first-level Cache memory are closed, the Cache controller places all of them into the second-level Cache memory on the separate chip.
(2) The free space of the second-level Cache is checked; while it is not full, the object-closing thread pool located on the separate chip starts multiple threads that concurrently take the corresponding objects out of the second-level Cache memory and release their resources.
Because the resources are recovered concurrently, the speed of resource recovery is effectively increased, thereby reducing the time needed to close the Cache.
(3) When the Cache controller is placing the objects of the first-level Cache memory into the second-level Cache memory and the second-level Cache memory is full, the Cache controller has the calling thread and the object-closing thread pool perform the close operations on the objects asynchronously at the same time.
The method provided by the above embodiment addresses the performance loss that a thread-synchronized Cache suffers in a multi-threaded environment when eliminating objects. Because the data in the Cache are eliminated asynchronously, the CPU resources consumed by elimination are kept separate from the CPU resources of the calling thread, which keeps the calling thread working concurrently and efficiently and improves the execution efficiency of the system as a whole.
Meanwhile, the present invention also provides an apparatus for asynchronous Cache elimination, used to implement the method provided by the present invention. As shown in Fig. 6, it specifically comprises:
an acquisition module 10, configured to obtain the space occupancy of eliminated objects in the recovery area;
a comparison module 20, configured to compare the space occupancy obtained by the acquisition module with a predefined threshold;
an object closing module 30, configured to close the eliminated objects in the recovery area when the space occupancy obtained by the acquisition module is greater than the predefined threshold.
The object closing module 30 further comprises:
an object-order detection submodule 31, configured to detect the order in which eliminated objects were placed into the recovery area;
a thread control submodule 32, configured to adjust, within a predefined range, the number of threads that close eliminated objects;
a closing submodule 33, configured to use the threads adjusted by the thread control submodule to close, in parallel, the eliminated objects in the order detected by the object-order detection submodule; when the thread control submodule has adjusted the thread count to its maximum and the acquisition module finds that the recovery area is full, the calling thread directly closes the object currently being eliminated from the first-level Cache.
The apparatus for asynchronous Cache elimination further comprises: the recovery area, which is specifically a second-level Cache; an eliminated object placed in the recovery area waits either to be closed or to be reclaimed for reuse.
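To show one possible decomposition, here is a hypothetical mapping of these modules onto plain Java interfaces; the names mirror the module names in the text and do not come from any real library.

```java
/** Acquisition module 10: obtains how much space the eliminated objects occupy. */
interface AcquisitionModule {
    long occupiedSpaceInRecoveryArea();
}

/** Comparison module 20: compares the obtained occupancy with the predefined threshold. */
interface ComparisonModule {
    boolean exceedsThreshold(long occupiedSpace);
}

/** Object closing module 30: closes the eliminated objects in the recovery area. */
interface ObjectClosingModule {
    void closeEliminatedObjects();

    /** Submodule 31: detects the order in which objects entered the recovery area. */
    interface OrderDetectionSubmodule {
        Iterable<AutoCloseable> inEliminationOrder();
    }

    /** Submodule 32: adjusts the closing-thread count within a predefined range. */
    interface ThreadControlSubmodule {
        void adjustThreadCount(int pendingEliminatedObjects);
    }

    /** Submodule 33: closes objects in parallel, oldest eliminated object first. */
    interface ClosingSubmodule {
        void closeInParallel(Iterable<AutoCloseable> objectsInEliminationOrder);
    }
}
```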
The present invention obtains the space occupancy of the eliminated objects in the recovery area, compares it with a predefined threshold, and closes the eliminated objects in the recovery area when the space occupancy of the eliminated objects is greater than the threshold. This keeps the calling thread working concurrently and efficiently, and thereby improves the execution efficiency of the system as a whole.
From the description of the above embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware alone, but in most cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, can be embodied in the form of a software product stored on a storage medium, which includes a number of instructions that cause a terminal device to execute the method described in each embodiment of the present invention.
What is disclosed above is merely several specific embodiments of the present invention; however, the present invention is not limited thereto, and any variation that those skilled in the art can conceive of shall fall within the protection scope of the present invention.

Claims (10)

1. A method for asynchronous Cache elimination, characterized by comprising the following steps:
when a calling thread places an object into the Cache, the Cache first checking the capacity of a first-level Cache;
when the capacity of the first-level Cache is not full, the first-level Cache putting the object directly into the first-level Cache;
when the capacity of the first-level Cache is full, determining the object to be eliminated from the first-level Cache, removing it from the first-level Cache, putting the new object into the first-level Cache, and obtaining the space occupancy of eliminated objects in a recovery area;
comparing the space occupancy of the eliminated objects with a predefined threshold;
closing the eliminated objects in the recovery area when the space occupancy of the eliminated objects is greater than the threshold;
when the recovery area is not full, the eliminated objects being closed in the background by threads in an asynchronous, parallel manner, the calling thread not participating in closing the eliminated objects.
2. The method for asynchronous Cache elimination according to claim 1, characterized in that the recovery area is specifically a second-level Cache, and an eliminated object placed in the recovery area waits either to be closed or to be reclaimed for reuse.
3. The method for asynchronous Cache elimination according to claim 1, characterized in that, before the step of obtaining the space occupancy of eliminated objects in the recovery area, the method further comprises:
when the capacity of the first-level Cache is full, placing the object to be eliminated into the recovery area.
4. The method for asynchronous Cache elimination according to claim 1, characterized in that the step of closing the eliminated objects in the recovery area is specifically:
adjusting the number of threads that close eliminated objects, removing the eliminated objects from the recovery area, and releasing the resources occupied by the eliminated objects;
the threads closing the eliminated objects in parallel.
5. The method for asynchronous Cache elimination according to claim 1 or 4, characterized in that the step of closing the eliminated objects in the recovery area is specifically:
closing the eliminated objects in the order in which they were placed into the recovery area, earliest first.
6. The method for asynchronous Cache elimination according to claim 4, characterized in that adjusting the threads that close eliminated objects is specifically:
adjusting the number of threads, within a preset thread-count range, according to the number of objects to be eliminated.
7. The method for asynchronous Cache elimination according to claim 1, characterized in that, when the recovery area is full, the calling thread directly closes the object currently being eliminated from the first-level Cache.
8. An apparatus for asynchronous Cache elimination, characterized by comprising:
an acquisition module, configured to obtain the space occupancy of eliminated objects in a recovery area, wherein, when a calling thread places an object into the Cache, the Cache first checks the capacity of a first-level Cache; when the capacity of the first-level Cache is not full, the first-level Cache puts the object directly into the first-level Cache; when the capacity of the first-level Cache is full, the object to be eliminated is determined and removed from the first-level Cache, and the new object is put into the first-level Cache;
a comparison module, configured to compare the space occupancy obtained by the acquisition module with a predefined threshold;
an object closing module, configured to close the eliminated objects in the recovery area when the space occupancy obtained by the acquisition module is greater than the predefined threshold, wherein, when the recovery area is not full, the eliminated objects are closed in the background by threads in an asynchronous, parallel manner, and the calling thread does not participate in closing the eliminated objects.
9. The apparatus for asynchronous Cache elimination according to claim 8, characterized in that the object closing module further comprises:
an object-order detection submodule, configured to detect the order in which eliminated objects were placed into the recovery area;
a thread control submodule, configured to adjust, within a predefined range, the number of threads that close eliminated objects;
a closing submodule, configured to use the threads adjusted by the thread control submodule to close, in parallel, the eliminated objects in the order detected by the object-order detection submodule; when the recovery area is full, the calling thread directly closes the object currently being eliminated from the first-level Cache.
10. The apparatus for asynchronous Cache elimination according to claim 8 or 9, characterized by further comprising:
the recovery area, which is specifically a second-level Cache, wherein an eliminated object placed in the recovery area waits either to be closed or to be reclaimed for reuse.
CN2008100899801A 2008-04-14 2008-04-14 Method and device for Cache asynchronous elimination Active CN101561783B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN2008100899801A CN101561783B (en) 2008-04-14 2008-04-14 Method and device for Cache asynchronous elimination
HK10103891.2A HK1135782A1 (en) 2008-04-14 2010-04-21 Method for asynchronous elimination in cache and apparatus thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008100899801A CN101561783B (en) 2008-04-14 2008-04-14 Method and device for Cache asynchronous elimination

Publications (2)

Publication Number Publication Date
CN101561783A CN101561783A (en) 2009-10-21
CN101561783B true CN101561783B (en) 2012-05-30

Family

ID=41220595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008100899801A Active CN101561783B (en) 2008-04-14 2008-04-14 Method and device for Cache asynchronous elimination

Country Status (2)

Country Link
CN (1) CN101561783B (en)
HK (1) HK1135782A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104243425B (en) 2013-06-19 2018-09-04 深圳市腾讯计算机系统有限公司 A kind of method, apparatus and system carrying out Content Management in content distributing network
CN103761052B (en) * 2013-12-28 2016-12-07 华为技术有限公司 A kind of method managing cache and storage device
CN106649139B (en) * 2016-12-29 2020-01-10 北京奇虎科技有限公司 Data elimination method and device based on multiple caches
CN110309079B (en) * 2018-03-27 2023-06-02 阿里巴巴集团控股有限公司 Data caching method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7209437B1 (en) * 1998-10-15 2007-04-24 British Telecommunications Public Limited Company Computer communication providing quality of service
CN1713612A (en) * 2004-06-25 2005-12-28 中兴通讯股份有限公司 Data packet storing method by pointer technology
CN1967507A (en) * 2005-11-18 2007-05-23 国际商业机器公司 Decoupling storage controller cache read replacement from write retirement

Also Published As

Publication number Publication date
HK1135782A1 (en) 2010-06-11
CN101561783A (en) 2009-10-21

Similar Documents

Publication Publication Date Title
US11531625B2 (en) Memory management method and apparatus
JP4481999B2 (en) Method and apparatus for reducing page replacement execution time in a system to which demand paging technique is applied
CN101561783B (en) Method and device for Cache asynchronous elimination
CN1534478A (en) Equipment and method of relocating shared computer data in multiline procedure computer
US20180322041A1 (en) Data storage device and method for operating data storage device
EP3198361B1 (en) Hardware controlled power domains with automatic power on request
CN101458668A (en) Caching data block processing method and hard disk
US9632958B2 (en) System for migrating stash transactions
US8914592B2 (en) Data storage apparatus with nonvolatile memories and method for controlling nonvolatile memories
CN107209716A (en) Memory management apparatus and method
CN1556474A (en) On line upgrading method of software and its device
CN103744736A (en) Method for memory management and Linux terminal
US10198180B2 (en) Method and apparatus for managing storage device
US10261918B2 (en) Process running method and apparatus
CN102369511B (en) Resource removing method, device and system
US20130086352A1 (en) Dynamically configurable storage device
CN114356248B (en) Data processing method and device
US11635904B2 (en) Matrix storage method, matrix access method, apparatus and electronic device
CN101316240A (en) Data reading and writing method and device
CN104932876A (en) Semiconductor device and control method for reading instructions
CN111338981B (en) Memory fragmentation prevention method and system and storage medium
US20130305007A1 (en) Memory management method, memory management device, memory management circuit
CN104750425A (en) Storage system and control method for nonvolatile memory of storage system
CN102158416A (en) Method and equipment for processing messages based on memory allocation
CN112346556A (en) Method, device, computer equipment and medium for improving low power consumption efficiency of chip

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1135782

Country of ref document: HK

C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1135782

Country of ref document: HK

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20211105

Address after: Room 554, floor 5, building 3, No. 969, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Patentee after: TAOBAO (CHINA) SOFTWARE CO.,LTD.

Address before: Fourth Floor, Capital Building, P.O. Box 847, Grand Cayman, Cayman Islands

Patentee before: ALIBABA GROUP HOLDING Ltd.