US20090055587A1 - Adaptive Caching of Input / Output Data - Google Patents

Adaptive Caching of Input / Output Data

Info

Publication number
US20090055587A1
US20090055587A1 (U.S. application Ser. No. 12/206,051)
Authority
US
United States
Prior art keywords
entropy
data
data block
cache
selecting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/206,051
Inventor
John E. Kellar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intellectual Ventures I LLC
Original Assignee
Mossman Holdings LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mossman Holdings LLC filed Critical Mossman Holdings LLC
Priority to US12/206,051
Assigned to MOSSMAN HOLDINGS LLC reassignment MOSSMAN HOLDINGS LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: QUICKSHIFT, INC.
Assigned to QUICKSHIFT, INC. reassignment QUICKSHIFT, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KELLAR, JOHN E.
Publication of US20090055587A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0877 - Cache access modes
    • G06F 12/0886 - Variable-length word access
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/40 - Specific encoding of data in memory or cache
    • G06F 2212/401 - Compressed data
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 - TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S - TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 707/00 - Data processing: database and file management or data structures
    • Y10S 707/99941 - Database schema or data structure
    • Y10S 707/99942 - Manipulating data structure, e.g. compression, compaction, compilation

Definitions

  • FIG. 2C illustrates a logical view of an adaptive compressed caching architecture 100 in accordance with the present inventive principles.
  • Modern data processing systems may be viewed from a logical perspective as a layered structure 102 in which a software application 104 occupies the top level, with the operating system (OS) application program interfaces (APIs) 106 between the application and the OS 108 .
  • OS APIs 106 expose system services to the application 104 . These may include, for example, file input/output (I/O), network I/O, etc.
  • Hardware devices are abstracted at the lowest level 110 . Hardware devices (see FIGS. 2A and 2B ) may include the central processing unit (CPU) 112 , memory, persistent storage (e.g., disk controller 114 ), and other peripheral devices 116 .
  • In FIG. 2C , these are handled on an equal footing. That is, each device “looks” the same to the OS.
  • filter driver 118 intercepts the operating system file access and performs caching operations, described further herein below, transparently. That is, the caching, file tracking and, in particular, the compression associated therewith, is transparent to the application 104 .
  • Data selected for caching is stored in a (compressed) cache (denoted as ZCache 120 ).
  • ZCache notation is used as a mnemonic device to call attention to the fact that the cache in accordance with the present invention is distinct from the instruction/data caches commonly employed in modern microprocessor systems, and typically denoted by the nomenclature “L1”, “L2” etc. cache.
  • ZCache 120 may be physically implemented as a region in main memory.
  • Filter 118 maintains a file tracking database (DB) 122 which contains information regarding which files are to be cached or not cached, and other information useful to the management of file I/O operations, as described further herein below.
  • A few notes on FIG. 2C :
  • the ZCache module exists as a stand-alone device driver adjunct to the file system filter and disk filter device drivers.
  • a TDI Filter Driver is inserted between box (TDI) 8 , with connection tracking for network connections that operates the same as the file tracking modules in the compressed data cache, and the peer group of modules that consist of (AFD) 3 , (SRV) 4 , (RDR) 5 , (NPFS) 6 , and (MSFS) 7 .
  • A complete reference on TDI is available on the Microsoft MSDN website.
  • An NDIS intermediate cache driver is inserted between the bottom edge of the transport drivers and the upper edge of the NDIS components.
  • FIG. 3 illustrates a cache protocol, in a state diagram format view and in accordance with the present state of the art principles.
  • This state diagram describes the Modified-Shared-Invalid (MSI) Cache protocol.
  • This cache protocol is one used on processor caches, and is closest to what is needed for a block-based cache.
  • Other possible cache protocols which are not precluded by this preferred embodiment include MESI, MOESI, Dragon and others.
  • In the Modified state, the cache line contains the most recent data, and is different from the data contained in the backing store.
  • FIG. 4A shows the modified MSI cache protocol.
  • the MSI protocol must be modified, as in FIG. 4A , to accomplish the present inventive design goals.
  • Many factors are considered in development of caching protocols, and most of the above-mentioned cache protocols are of a general purpose only, or are designed for a specific target implementation, such as a processor (CPU) cache.
  • Other cache protocol factors, beyond only those embodied by the MSI protocol, must be considered.
  • A cache line is allocated, but the disk I/O may not complete for hundreds if not thousands of microseconds. During this time, additional I/O requests could be made against the same allocated cache line. (A sketch of an MSI-style state set extended for this case appears after this list.)
  • FIG. 4B shows state transitions for write-invalidation in accord with the present inventive design principles.
  • Methodology 200 may be performed primarily by filter driver 118 or, alternatively, may sit logically between filter driver 118 and ZCache driver 120 .
  • Methodology 200 watches for I/O operations involving data block moves, in step 502 . See FIG. 5 .
  • a data block move may be detected by “peeking” at, or disassembling, the I/O request packets that control the handling of I/O operations. If an I/O operation involving data block moves is detected, methodology 200 performs operations to determine if the subject data is to be cached. This is described in conjunction with step 204 of FIG. 6 and steps 204 - 214 of FIG. 7A .
  • caching decisions are based on user-selectable caching policies in combination with caching “instructions” that may be set by the application making the data transfer request.
  • Step 204 instructs the operating system how an I/O operation should be handled.
  • Each I/O packet includes descriptive data that may include information (i.e., “flags”) for controlling the caching of the data transported in the packet.
  • a file-tracking database 122 (equivalently, a file tracking “registry”) may be maintained in accordance with caching architecture 100 .
  • This registry may include a set of file tracking flags 20 , FIG. 2E .
  • each entry may be a hexadecimal (hex) digit.
  • GlobalPolicy flag 21 , which may be set by the user, determines the global policy, that is, the most aggressive caching policy permitted for any file. In other words, as described further below, other parameters may override the global policy to reduce the aggressiveness for a particular file.
  • GlobalPolicy flag 21 may take predetermined values (e.g., a predetermined hex digit) representing respective ones of a writeback policy, a writethrough policy, and no caching.
  • Writeback caching means that a given I/O write request may be inserted in the ZCache instead of immediately writing the data to the persistent store.
  • Writethrough caching means that the data is also immediately written to the persistent store.
  • If, in step 206 , caching is turned off, such as when GlobalPolicy flag 21 is set to a predetermined hex value representing “no cache,” process 200 passes the I/O request to the operating system (OS) for handling, step 208 . Otherwise, process 200 proceeds to step 210 .
  • In decision block 210 , it is determined whether dynamic, manual or, alternatively, non-conservative tracking is set. This may be responsive to a value of Dynamic flag 28 , FIG. 2E .
  • If the value of the flag is “writethrough,” dynamic tracking is enabled, and if the value of the flag is “no cache,” manual tracking is enabled. (Manual tracking allows the user to explicitly list in the file tracking database which files are to be cached.)
  • In dynamic mode, if, in step 212 , the subject file is a tracked file, it is cached in the ZCache in accordance with the cache policy (either writethrough or writeback). File flags associated with the subject file are ignored in manual mode and honored in dynamic mode.
  • For example, an FO_NO_INTERMEDIATE_BUFFERING flag is ignored in manual mode (and honored in dynamic mode), as is any analogous flag in other OS environments. If the subject file is an untracked file, process 200 proceeds to step 214 .
  • Untracked files include metadata and files that may have been opened before the caching process started.
  • Metadata files are files that contain descriptions of data such as information concerning the location of files and directories; log files to recover corrupt volumes and flags which indicate bad clusters on a physical disk. Metadata can represent a significant portion of the I/O to a physical persistent store because the contents of small files (e.g., smaller than 4,096 bytes) may be completely stored in metadata files.
  • In step 214 , it is determined if non-conservative caching is enabled. In an embodiment of the present invention using file tracking flags 21 , FIG. 2E , step 214 may be performed by examining Default flag 24 , FIG. 2D .
  • If the value of Default flag 24 is the hex digit representing “writeback,” then non-conservative caching is enabled, and decision block 214 proceeds by the “Y” branch. Conversely, if the value of Default flag 24 is the hex digit representing “no cache,” then non-conservative caching is disabled, decision block 214 proceeds by the “N” branch, and the respective file operation is passed to the OS for handling (step 208 ).
  • It is also determined, in step 214 , if the subject file is a pagefile and, if so, whether caching of pagefiles is enabled. (A sketch of this overall policy decision appears after this list.)
  • the flag 28 ( FIG. 2E ) has the value representing page file I/O. Pagefile I/O is passed to the OS for handling.
  • In step 220 , file object information is extracted from the I/O request packet and stored in the file tracking DB, step 222 ( FIG. 6 ).
  • Such data may include any policy flags set by the application issuing the subject I/O request. If, for example, in a Windows NT environment, the FO_WRITE_THROUGH flag is set in the packet descriptor, the WRITE_THROUGH flag 28 , FIG. 2E , may be set in step 222 . Similarly, if FO_NO_INTERMEDIATE_BUFFERING is set in the I/O request packet, then the NO_BUFF flag 28 may be set in step 222 . Additionally, sequential file access flags, for example, may also be stored.
  • If the request is a write request, decision block 224 proceeds by the “Y” branch to step 226 . If the request is not a write request, decision block 224 proceeds by the “No” branch to decision block 228 , to determine if the request is a read.
  • In step 226 , storage space in the ZCache is reserved, and in step 230 , a miss counter associated with the subject data block to be cached is cleared.
  • Each such block may have a corresponding tag that represents a fixed-size block of data.
  • A block size of 4,096 bytes, normally equivalent to the PAGE SIZE of a computer processor that would execute the instructions for carrying out the method of the present invention, may be used in an embodiment of the present invention; however, other block sizes may be used in accordance with the present inventive principles. As shown in FIG. 7E , block tag 300 may be viewed as a data structure having a plurality of members, including a counter member 302 that includes miss counter 304 .
  • Counter member 302 may, in an embodiment of the present invention, be one byte wide, and miss counter 304 may be one bit wide (“true/false”). The operation of the miss counter will be discussed further herein below.
  • In step 232 , a compression estimation is made.
  • The amount of compression that may be achieved on any particular block is determined by the degree of redundancy in the data block, in accordance with the classic information theory of Shannon.
  • a block of data that is perfectly random has a maximum entropy in this picture and does not compress.
  • An estimation of the entropy of the subject block may be used as a measure of the maximum compression that may be achieved for that block.
  • Different data compression techniques are known in the art, and the “better” the compressor, the closer the compression ratio achieved will be to the entropy-theoretic value. However, the greater compression comes at the price of computational complexity, or, equivalently, CPU cycles.
  • an entropy estimate may be made using a frequency table for the data representation used.
  • frequency tables are used in the cryptographic arts and represent the statistical properties of the data. For example, for ASCII data, a 256-entry relative frequency table may be used.
  • Frequency tables are often used in cryptography and compression; they are pre-built tables used for predicting the probability frequency of presumed alphabetic token occurrences in a data stream. In this embodiment, the token stream is presumed to be ASCII-encoded tokens, but is not restricted to this.
  • The entropy may be returned as a signed integer value in the range of ±50.
  • a maximal entropy block would return the value 50.
  • The entropy estimate may also be stored in the block tag (tag member 310 , FIG. 7E ). (Sketches of such a block tag and of the frequency-table entropy estimate appear after this list.)
  • The value of the entropy estimate may be used to select a compressor (step 234 ); it may also be used to provide a bias to pre-fetching for previously seen read data blocks.
  • In step 234 , which may be viewed as a three-way decision block if three levels of compression are provided, the subject data block is compressed using an entropy-estimate-based compressor selection (sketched after this list).
  • FIG. 7F schematically depicts a set of entropy bands about the maximum-entropy watermark (which may correspond to a value of zero for a random block) which have pre-selected relative widths about the maximum entropy watermark.
  • Bands 402 a and 402 b are shown with a width of 6%, and represent a block that deviates by a relatively small amount from a random block and would be expected to benefit little from compression. Therefore, in step 234 , zero compression 236 may be selected. In other words, such a block may be cached without compression.
  • If the entropy estimate returns a value in bands 404 a, 404 b , a zero-bit compressor 238 ( FIG. 2C ) may be selected.
  • a zero-bit compressor counts the number of zeros occurring before a one occurs in the word. The zeros are replaced by the value representing the number of zeros. If the entropy estimate returns a value in bands 406 a, 406 b , having an illustrated width of 25%, a more sophisticated compression may be used, as the degree of compression expected may warrant the additional CPU cycles that such a compressor would consume.
  • a compressor of the Lempel-Ziv (LZ) type 240 may be selected.
  • LZ-type compressors are based on the concept, described by J. Ziv and A. Lempel in 1977, of parsing strings from a finite alphabet into substrings of different lengths (not greater than a predetermined maximum) and a coding scheme that maps the substrings into code words of fixed, also predetermined, length. The substrings are selected so they have about equal probability of occurrence.
  • Algorithms for implementing LZ-type compression are known in the art, for example, the LZW algorithm described in U.S. Pat. No. 4,558,302 , issued Dec. 10, 1985 to Welch, and the LZO compressors of Markus F. X. J. Oberhumer.
  • the bands may be adaptively adjusted. If, for example, the CPU is being underutilized, it may be advantageous to use a more aggressive compressor, even if the additional compression might not otherwise be worth the tradeoff. In this circumstance, the width of bands 404 a, b and 406 a, b may be expanded. Conversely, if CPU cycles are at a premium relative to memory, it may be advantageous to increase the width of bands 402 a, b , and shrink the width of bands 406 a, b . A methodology for adapting the compressor selection is described in conjunction with FIG. 7F .
  • The data is cached, and any unused reserved space is freed. It is determined if the cached data block previously existed on the persistent store (e.g., disk). If not, an I/O packet of equal size to the uncompressed data block is issued to the persistent store. In this way, the persistent store reserves the space for a subsequent flush, which may also occur if the OS crashes. Additionally, if a read comes in, the block will be returned without waiting for the I/O packet request to complete, in accordance with the writeback mechanism. If the block previously existed on the persistent store, or if the cache policy for the block is writethrough (overriding the writeback default), the block is written to the persistent store. Otherwise, the block is scheduled for a flush. Additionally, “write squashing” may be implemented, whereby flushes coming through from the OS are suppressed. In this way, process 200 may lay down contiguous blocks at one time, to avoid fragmenting the persistent store. Process 200 then returns to step 208 . (A sketch of this write-back path appears after this list.)
  • In step 288 in FIG. 7B , if the request is a read request, the prefetch and miss counters of the subject block ( FIG. 7E ) are reset, and reference counters for all blocks are updated. A methodology for updating the reference counter for a block will be described in conjunction with FIG. 7D , below.
  • In step 258 , it is determined if the block has been previously read. This may be determined by a non-zero access count in number-of-accesses member 316 , FIG. 7E .
  • Gap prediction is accomplished by testing the distance, in logical block numbers (LBNs), from one read request on a file to a subsequent read request on the same file. It applies when the LBNs are not adjacent (i.e., each read does not take place at the next higher or lower LBN associated with this file) but a regular skip pattern has been detected from at least two previous reads of this file (a read is done, some regular number of LBNs is skipped, either positively or negatively, and a subsequent read is issued at this skipped distance). (A sketch of gap prediction with the entropy bias appears after this list.)
  • In step 260 , it is determined if the reference counter of the next block in the sequence is smaller than two. If a block that has been prefetched is not hit in the next two references, then it will not be prefetched again (process 200 bypasses step 264 ), unless its entropy estimation is approximately equal, plus or minus 2% (a value arrived at empirically that may differ for different operating systems or platforms), to the entropy of the previously fetched block.
  • counter member 302 may, in an embodiment of the present invention, be one-byte wide, and may contain a prefetch counter 306 which may be one bit wide (“true/false”).
  • an entropy estimate is made for the block (using the same technique as in step 232 ) that is stored in the file tracking database (e.g., in compression estimate member 310 , FIG. 7E ).
  • A next block is then selected for prefetching based on entropy and distance ( FIG. 7E ). That is, of the blocks nearest in entropy (once again within 2%), the closest block in distance to the subject block of the read request is prefetched. (Recall that a block has a unique entropy value, but a given entropy value may map into a multiplicity of blocks.) If, however, the miss counter for the selected block is set, prefetching of that block is bypassed (“Y” branch of the decision block). Otherwise, in step 274 , the block is prefetched, and the miss counter (e.g., miss counter 304 , FIG. 7E ) for the prefetched block is set (or logically “True”). The prefetch counter is set in step 266 , as before.
  • In step 204 , the read is returned.
  • FIG. 9 illustrates an exemplary hardware configuration of data processing system 700 in accordance with the subject invention.
  • Data processing system 700 includes central processing unit (CPU) 710 , such as a conventional microprocessor, and a number of other units interconnected via system bus 712 .
  • Data processing system 700 may also include random access memory (RAM) 714 , read only memory (ROM) (not shown) and input/output (I/O) adapter 722 for connecting peripheral devices such as disk units 720 to bus 712 .
  • System 700 may also include a communication adapter for connecting data processing system 700 to a data processing network, enabling the system to communicate with other systems.
  • CPU 710 may include other circuitry not shown herein, which will include circuitry commonly found within a microprocessor, e.g., execution units, bus interface units, arithmetic logic units, etc.
  • CPU 710 may also reside on a single integrated circuit.
  • Preferred implementations of the invention include implementations as a computer system programmed to execute the method or methods described herein, and as a computer program product.
  • sets of instructions for executing the method or methods are resident in the random access memory 714 of one or more computer systems configured generally as described above. These sets of instructions, in conjunction with system components that execute them may perform operations in conjunction with data block caching as described hereinabove.
  • the set of instructions may be stored as a computer program product in another computer memory, for example, in disk drive 720 (which may include a removable memory such as an optical disk or floppy disk for eventual use in the disk drive 720 ).
  • the computer program product can also be stored at another computer and transmitted to the user's workstation by a network or by an external network such as the Internet.
  • The physical storage of the sets of instructions physically changes the medium upon which it is stored so that the medium carries computer-readable information.
  • The change may be electrical, magnetic, chemical, biological, or some other physical change. While it is convenient to describe the invention in terms of instructions, symbols, characters, or the like, the reader should remember that all of these and similar terms should be associated with the appropriate physical elements.
  • the invention may describe terms such as comparing, validating, selecting, identifying, or other terms that could be associated with a human operator.
  • However, no action by a human operator is desirable; the operations described are, in large part, machine operations processing electrical signals to generate other electrical signals.
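The MSI discussion above (FIGS. 3 and 4A) notes that the standard protocol must be extended because a newly allocated cache line's disk I/O may remain in flight for hundreds or thousands of microseconds. The patent text does not spell out the modified state set, so the following is only an illustrative Python sketch that adds a hypothetical PENDING state and a queue of deferred requests; the class names, state names and transitions are assumptions, not the protocol of FIG. 4A.

```python
from enum import Enum, auto

class LineState(Enum):
    """MSI-style block-cache states plus an assumed PENDING state."""
    INVALID = auto()
    SHARED = auto()    # clean copy, matches the backing store
    MODIFIED = auto()  # most recent data, differs from the backing store
    PENDING = auto()   # line allocated, backing-store I/O still in flight

class CacheLine:
    def __init__(self):
        self.state = LineState.INVALID
        self.waiters = []            # requests queued while the fill is pending

    def begin_fill(self):
        # A miss allocates the line and starts a disk read that may take
        # hundreds or thousands of microseconds to complete.
        self.state = LineState.PENDING

    def complete_fill(self):
        self.state = LineState.SHARED
        pending, self.waiters = self.waiters, []
        return pending               # replay requests queued during the fill

    def write(self):
        if self.state is LineState.PENDING:
            self.waiters.append("write")   # defer until the fill completes
        else:
            self.state = LineState.MODIFIED

    def invalidate(self):
        self.state = LineState.INVALID
```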
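The caching-policy bullets above (steps 204 through 218: GlobalPolicy flag 21, dynamic versus manual tracking, non-conservative caching via Default flag 24, and pagefile handling) can be summarized as a single decision function. The sketch below is a simplified reading of that flow; the flag encoding, argument names, and the "least aggressive value wins" override rule are illustrative assumptions.

```python
from enum import Enum

class Policy(Enum):
    NO_CACHE = 0
    WRITETHROUGH = 1
    WRITEBACK = 2

def select_policy(global_policy, dynamic_mode, file_entry,
                  default_flag, is_pagefile, cache_pagefiles=False):
    """Decide how one I/O request should be cached (hypothetical flag layout)."""
    # A global "no cache" setting turns caching off entirely (steps 206/208).
    if global_policy is Policy.NO_CACHE:
        return Policy.NO_CACHE

    # Tracked files (steps 210/212): honor file flags in dynamic mode only.
    tracked = file_entry is not None and file_entry.get("tracked", False)
    if tracked:
        if dynamic_mode:
            per_file = file_entry.get("policy", global_policy)
            # Other parameters may only reduce aggressiveness below the global policy.
            return min(per_file, global_policy, key=lambda p: p.value)
        return global_policy            # manual mode: file flags ignored

    # Untracked files (metadata, files opened before caching started) are cached
    # only when non-conservative caching is enabled (step 214).
    if default_flag is not Policy.WRITEBACK:
        return Policy.NO_CACHE
    if is_pagefile and not cache_pagefiles:
        return Policy.NO_CACHE          # pagefile I/O passed to the OS
    return global_policy

# Example: a per-file writethrough flag reduces a global writeback policy.
entry = {"tracked": True, "policy": Policy.WRITETHROUGH}
assert select_policy(Policy.WRITEBACK, True, entry,
                     Policy.NO_CACHE, False) is Policy.WRITETHROUGH
```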
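Several bullets above describe the per-block tag of FIG. 7E: a one-byte counter member 302 holding a one-bit miss counter 304 and a one-bit prefetch counter 306, an entropy/compression-estimate member 310, and a number-of-accesses member 316. A minimal sketch of such a tag, with assumed field names and layout:

```python
from dataclasses import dataclass

BLOCK_SIZE = 4096  # normally the processor PAGE SIZE

@dataclass
class BlockTag:
    """Per-block cache tag loosely modeled on the FIG. 7E description."""
    lbn: int                  # logical block number of the cached block
    miss: bool = False        # miss counter 304 (one bit, true/false)
    prefetched: bool = False  # prefetch counter 306 (one bit, true/false)
    entropy: int = 0          # entropy estimate member 310, roughly -50..+50
    accesses: int = 0         # number-of-accesses member 316

    def counter_byte(self) -> int:
        """Pack the two one-bit counters into the low bits of a one-byte member."""
        return (int(self.miss) & 1) | ((int(self.prefetched) & 1) << 1)
```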
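The entropy estimate described above uses a 256-entry frequency table and returns a signed value in the range of roughly ±50, with a maximal-entropy (random) block returning about 50. The exact scaling is not given in the text, so the sketch below assumes a linear mapping of the Shannon estimate (0 to 8 bits per byte) onto that range.

```python
import math
from collections import Counter

def estimate_entropy(block: bytes) -> int:
    """Frequency-table entropy estimate scaled to a signed value near -50..+50."""
    if not block:
        return -50
    freq = Counter(block)                 # 256-entry byte-frequency table
    n = len(block)
    bits = 0.0
    for count in freq.values():
        p = count / n
        bits -= p * math.log2(p)          # Shannon entropy in bits per byte, 0..8
    return round(bits / 8.0 * 100.0) - 50 # assumed linear mapping onto -50..+50

# A random-looking block scores near +50; a constant block scores -50.
if __name__ == "__main__":
    import os
    print(estimate_entropy(os.urandom(4096)))   # close to +50
    print(estimate_entropy(b"\x00" * 4096))     # -50
```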
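Compressor selection (step 234, FIG. 7F) picks no compression for blocks within the 6% bands 402a/402b of the maximum-entropy watermark, a zero-bit compressor 238 for the intermediate bands, and an LZ-type compressor 240 for the 25% bands 406a/406b, with band widths adapted to CPU load. The sketch below follows that outline; the intermediate band width, the byte-level zero-run encoding (the text describes a bit-level scheme), the use of zlib as a stand-in LZ compressor, and the adjustment step sizes are all assumptions.

```python
import zlib

# Band widths as percent deviation below the maximum-entropy watermark.  The 6%
# (bands 402a/b) and 25% (bands 406a/b) figures come from the FIG. 7F
# description; the middle width for bands 404a/b is not stated and is assumed.
BAND_WIDTHS = {"402": 6.0, "404": 9.0, "406": 25.0}

def zero_bit_encode(data: bytes) -> bytes:
    """Tiny zero-run encoder in the spirit of the described zero-bit compressor:
    runs of zero bytes are replaced by a marker byte and a run length."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0:
            run = 0
            while i < len(data) and data[i] == 0 and run < 255:
                run += 1
                i += 1
            out += bytes((0, run))
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

def select_and_compress(block: bytes, entropy: int):
    """Map the entropy estimate (-50..+50, +50 at the watermark) onto the bands
    and return (compressor name, stored bytes)."""
    deviation = 50 - entropy                       # 0 == random block
    edge_402 = BAND_WIDTHS["402"]
    edge_404 = edge_402 + BAND_WIDTHS["404"]
    edge_406 = edge_404 + BAND_WIDTHS["406"]
    if deviation <= edge_402:
        return "none", block                       # near-random: store uncompressed
    if deviation <= edge_404:
        return "zero-bit", zero_bit_encode(block)  # compressor 238
    if deviation <= edge_406:
        return "lz", zlib.compress(block, 1)       # cheap LZ-style pass (240)
    return "lz", zlib.compress(block, 6)           # very redundant data

def adapt_bands(cpu_idle_fraction: float, step: float = 1.0) -> None:
    """Widen bands 404/406 when the CPU is underutilized; widen 402 and shrink
    406 when CPU cycles are at a premium."""
    if cpu_idle_fraction > 0.5:
        BAND_WIDTHS["404"] += step
        BAND_WIDTHS["406"] += step
    else:
        BAND_WIDTHS["402"] += step
        BAND_WIDTHS["406"] = max(BAND_WIDTHS["406"] - step, 0.0)
```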
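The write path described above caches the block, schedules a flush rather than writing immediately (unless the block already existed on the persistent store or the policy is writethrough), suppresses OS-initiated flushes ("write squashing"), and lays down contiguous blocks together. A minimal sketch, with a hypothetical store interface:

```python
from collections import OrderedDict

class WritebackScheduler:
    """Sketch of the write-back path; `store.write_block` is a hypothetical
    interface to the persistent store, not an API named in the patent."""

    def __init__(self, store):
        self.store = store
        self.dirty = OrderedDict()      # lbn -> data awaiting a scheduled flush

    def cache_write(self, lbn, data, existed_on_disk, writethrough):
        if existed_on_disk or writethrough:
            self.store.write_block(lbn, data)   # write immediately
        else:
            self.dirty[lbn] = data              # schedule for a later flush

    def os_flush_requested(self, lbn):
        # Write squashing: suppress the OS flush so the block can later be
        # written as part of a contiguous run.
        return lbn in self.dirty

    def flush(self):
        # Lay down contiguous blocks together to avoid fragmenting the store.
        for lbn in sorted(self.dirty):
            self.store.write_block(lbn, self.dirty[lbn])
        self.dirty.clear()
```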
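Prefetching combines gap prediction over logical block numbers with the entropy-signature bias: a regular skip detected from at least two prior reads is followed; otherwise, among previously seen blocks whose entropy is within about 2%, the one nearest in distance is chosen, and a block whose miss counter shows an unused earlier prefetch is bypassed. The sketch below assumes the tag fields from the earlier block-tag sketch; the thresholds and helper names are illustrative.

```python
def predict_gap(read_lbns):
    """Return the regular LBN skip seen in the last reads of a file, or None."""
    if len(read_lbns) < 3:
        return None
    g1 = read_lbns[-1] - read_lbns[-2]
    g2 = read_lbns[-2] - read_lbns[-3]
    if g1 == g2 and abs(g1) > 1:        # regular, non-adjacent skip pattern
        return g1
    return None

def entropy_close(a, b, tolerance=0.02):
    """True when two entropy signatures agree within ~2% of the -50..+50 scale."""
    return abs(a - b) <= tolerance * 100

def choose_prefetch(tags, current, gap):
    """Pick the next block to prefetch, biased by entropy signature.

    `tags` maps lbn -> tag objects with lbn, entropy and miss fields."""
    if gap is not None:
        return current.lbn + gap
    candidates = [t for lbn, t in tags.items()
                  if lbn != current.lbn
                  and entropy_close(t.entropy, current.entropy)]
    if not candidates:
        return None
    best = min(candidates, key=lambda t: abs(t.lbn - current.lbn))
    if best.miss:        # an earlier prefetch of this block was never hit: bypass
        return None
    best.miss = True     # mark the new prefetch; cleared when the block is hit
    return best.lbn

if __name__ == "__main__":
    from types import SimpleNamespace as Tag
    tags = {n: Tag(lbn=n, entropy=30, miss=False) for n in (10, 18, 42)}
    cur = Tag(lbn=12, entropy=31, miss=False)
    print(predict_gap([4, 8, 12]))           # -> 4 (regular skip of four blocks)
    print(choose_prefetch(tags, cur, None))  # -> 10 (nearest block within ~2% entropy)
```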

Abstract

To improve caching techniques, so as to realize greater hit rates within available memory, the present invention utilizes an entropy signature from the compressed data blocks to supply a bias to pre-fetching operations. The method of the present invention for caching data involves detecting a data I/O request, relative to a data object, and then selecting appropriate I/O to cache, wherein said selecting can occur with or without user input, or with or without application or operating system preknowledge. Such selecting may occur dynamically or manually. The method further involves estimating an entropy of a first data block to be cached in response to the data I/O request; selecting a compressor using a value of the entropy of the data block from the estimating step, wherein each compressor corresponds to one of a plurality of ranges of entropy values relative to an entropy watermark; and storing the data block in a cache in compressed form from the selected compressor, or in uncompressed form if the value of the entropy of the data block from the estimating step falls in a first range of entropy values relative to the entropy watermark. The method can also include the step of prefetching a data block using gap prediction with an applied entropy bias, wherein the data block is the same as the first data block to be cached or is a separate second data block. The method can also involve the following additional steps: adaptively adjusting the plurality of ranges of entropy values; scheduling a flush of the data block from the cache; and suppressing operating system flushes in conjunction with the foregoing scheduling step.

Description

    RELATED APPLICATION DATA
  • This application is a continuation of U.S. patent application Ser. No. 11/152,363, filed on Jun. 14, 2005, entitled “Adaptive Input/Output Compressed System and Data Cache and System Using Same”, invented by John E. Kellar, which claims benefit of priority of U.S. provisional application Ser. No. 60/579,344 titled “Adaptive Input/Output Cache and System Using Same,” filed Jun. 14, 2004, and which are all hereby incorporated by reference in their entirety as though fully and completely set forth herein.
  • FIELD OF THE INVENTION
  • The present invention relates, in general, to data processing systems and more particularly to adaptive data caching in data processing systems to reduce transfer latency or increase transfer bandwidth of data movement within these systems.
  • DESCRIPTION OF THE RELATED ART
  • In modern data processing systems, the continual increase in processor speeds has outpaced the rate of increase of data transfer rates from peripheral persistent data storage devices and sub-systems. In systems such as enterprise scale server systems in which substantial volumes of volatile, or persistent data are manipulated, the speed at which data can be transferred may be the limiting factor in system efficiency. Commercial client/server database environments are emblematic of such systems. These environments are usually constructed to accommodate a large number of users performing a large number of sophisticated database queries and operations to a large distributed database. These compute, memory and I/O intensive environments put great demands on database servers. If a database client or server is not properly balanced, then the number of database transactions per second that it can process can drop dramatically. A system is considered balanced for a particular application when the CPU(s) tends to saturate about the same time as the I/O subsystem.
  • Continual improvements in processor technology have been able to keep pace with ever-increasing performance demands, but the physical limitations imposed on retrieving data from disk have caused I/O transfer rates to become an inevitable bottleneck. Bypassing these physical limitations has been an obstacle to overcome in the quest for better overall system performance.
  • In the computer industry, this bottleneck, known as a latency gap because of the speed differential, has been addressed in several ways. Caching the data in memory is known to be an effective way to diminish the time taken to access the data from a rotating disk. Unfortunately, memory resources are in high demand on many systems, and traditional cache designs have not made the best use of memory devoted to them. For instance, many conventional caches simply cache data existing ahead of the last host request. Implementations such as these, known as Read Ahead caching, can work in unique situations, but for non-sequential read requests, data is fruitlessly brought into the cache memory. This blunt approach to caching however has become quite common due to simplicity of the design. In fact, this approach has been put in use as read buffers within the persistent data storage systems such as disks and disk controllers.
  • Encoding or compressing cached data in operating system caches increases the logical effective cache size and cache hit rate, and thus improves system response time. On the other hand, compressed data requires variable-length record management, free-space search and garbage collection. This overhead may negate the performance improvements achieved by increasing effective cache size. Thus, there is a need for a new operating system file, data and buffer cache management method with low overhead that is transparent to the operating system and to conventional data managing methods. With such an improved method, it is expected that the effective, logically accessible memory available for the file and data buffer cache will increase by 30% to 400%, effectively improving system cost-performance.
  • Ideally, a client should not notice any substantial degradation in response time for a given transaction even as the number of transactions requested per second by other clients to the database server increases. The availability of main memory plays a critical role in a database server's ability to scale for this application. In general, a database server will continue to scale up until the point that the application data no longer fits in main memory. Beyond this point, the buffer manager resorts to swapping pages between main memory and storage sub-systems. The amount of this paging increases exponentially as a function of the fraction of main memory available, causing application performance and response time to degrade exponentially as well. At this point, the application is said to be I/O bound.
  • When a user performs a sophisticated data query, thousands of pages may be needed from the database, which is typically distributed across many storage devices, and possibly distributed across many systems. To minimize the overall response time of the query, access times must be as small as possible to any database pages that are referenced more than once. Access time is also negatively impacted by the enormous amount of temporary data that is generated by the database server, which normally cannot fit into main memory, such as the temporary files generated for sorting. If the buffer cache is not large enough, then many of those pages will have to be repeatedly fetched to and from the storage sub-system.
  • Independent studies have shown that when 70% to 90% of the working data fits in main memory, most applications will run several times slower. When only 50% fits, most run 5 to 20 times slower. Typical relational database operations run 4 to 8 times slower when only 66% of the working data fits in main memory. The need to reduce or eliminate application page faults, data or file system I/O is compelling. Unfortunately for system designers, the demand for more main memory by database applications will continue to far exceed the rate of advances in memory density. Coupled with this demand from the application area comes competing demands from the operating system, as well as associated I/O controllers and peripheral devices. Cost-effective methods are needed to increase the, apparent, effective size of system memory.
  • It is difficult for I/O bound applications to take advantage of recent advances in CPU, processor cache, Front Side Bus (FSB) speeds, >100 Mbit network controllers, and system memory performance improvements (e.g., DDR2) since they are constrained by the high latency and low bandwidth of volatile or persistent data storage subsystems. The most common way to reduce data transfer latency is to add memory. Adding memory to database servers may be expensive since these applications demand a lot of memory, or may even be impossible, due to physical system constraints such as slot limitations. Alternatively, adding more disks and disk caches with associated controllers, or Network Attached Storage (NAS) and network controllers or even Storage Area Network (SAN) devices with Host Bus Adapters (HBA's) can increase storage sub-system request and data bandwidth. It may even be necessary to move to a larger server with multiple, higher performance I/O buses. Memory and disks are added until the database server becomes balanced.
  • First, the memory data encoding/compression increases the effective size of system wide file and/or buffer cache by encoding and storing a large block of data into a smaller space. The effective available reach of these caches is typically doubled, where reach is defined as the total immediately accessible data requested by the system, without recourse to out-of-core (not in main memory) storage. This allows client/server applications, which typically work on data sets much larger than main memory, to execute more efficiently due to the decreased number of volatile, or persistent, storage data requests. The numbers of data requests to the storage sub-systems are reduced because pages or disk blocks that have been accessed before are statistically more likely to still be in main memory when accessed again due to the increased capacity of cache memory. A secondary effect of such compression or encoding is reduced latency in data movement due to the reduced size of the data. Basically, the average compression ratio tradeoff against the original data block size as well as the internal cache hash bucket size must be balanced in order to reap the greatest benefit from this tradeoff. The Applicant of the present invention believes that an original uncompressed block size of 4096 bytes with an average compression ratio of 2:1 stored internally in the cache, in a data structure known as an open hash, in blocks of 256 bytes results in the greatest benefit towards reducing data transfer latency for data movement across the north and south bridge devices as well as to and from the processors across the Front-Side-Bus. The cache must be able to modify these values in order to reap the greatest benefits from this second order effect.
  • There is a need to improve caching techniques, so as to realize greater hit rates within the available memory of modern systems. Current hit rates, from methods such as LRU (Least Recently Used), LFU (Least Frequently Used), GCLOCK and others, have increased very slowly in the past decade and many of these techniques do not scale well with the availability of the large amounts of memory that modern computer systems have available today. To help meet this need, the present invention utilizes an entropy signature from the compressed data blocks to supply a bias to pre-fetching operations. This signature is produced from the entropy estimation function described herein, and stored in the tag structure of the cache. This signature provides a unique way to group previously seen data; this grouping is then used to bias or alter the pre-fetching gaps produced by the prefetching function described below. Empirical evidence shows that this entropy signature improves pre-fetching operations over large data sets (greater than 4 GBytes of addressable space) by approximately 11% over current techniques that do not have this feature available.
  • There is also a need for user applications to be able to access the capabilities for reducing transfer latency or increasing transfer bandwidth of data movement within these systems. There is a further need to supply these capabilities to these applications in a transparent way, allowing an end-user application to access these capabilities without requiring any recoding or alteration of the application. The Applicant of the present invention believes this may be accomplished through an in-core file-tracking database maintained by the invention. Such an in-core file-tracking database would offer seamless access to the capabilities of the invention by monitoring file open and close requests from the user-application/operating system interface, decoding the file access flags, while maintaining an internal list of the original file object name and flags, and offering the capabilities of the invention to appropriate file access. The in-core file-tracking database would also allow the end-user to over-ride an application's caching request and either allow or deny write-through or write-back or non-conservative or no-caching to an application on a file by file basis, through the use of manual file tracking or, on a system wide basis, through the use of dynamic file tracking. This capability could also be offered in a more global, system-wide way by allowing caching of file system metadata; this caching technique (the caching of file system metadata specifically) is referred to throughout this document as “non-conservative caching.”
  • There is a further need to allow an end-user application to seamlessly access PAE (Physical Address Extension) memory for use in file caching/data buffering, without the need to re-code or modify the application in any way. The PAE memory addressing mode is limited to the Intel, Inc. ×86 architecture. There is a need for replacement of the underlying memory allocator to allow a PAE memory addressing mode to function on other processor architectures. This would allow end-user applications to utilize the modern memory addressing capabilities without the need to re-code or modify the end-user application in any way. This allows transparent seamless access to PAE memory, for use by the buffer and data cache, without user intervention or system modification.
  • Today, large numbers of storage sub-systems are added to a server system to satisfy the high I/O request rates generated by client/server applications. As a result, it is common that only a fraction of the storage space on each storage device is utilized. By effectively reducing the I/O request rate, fewer storage sub-system caches and disk spindles are needed to queue the requests, and fewer disk drives are needed to serve these requests. The reason that the storage sub-system space is not efficiently utilized is that, on today's hard-disk storage systems, access latency increases as the data written to the storage sub-system moves further inward from the edge of the magnetic platter; to keep access latency at a minimum, system designers over-design storage sub-systems to take advantage of this phenomenon. This results in under-utilization of available storage. There is a need to reduce average latency to the point that this trade-off is not needed, resulting in storage space associated with each disk that can be more fully utilized at an equivalent or reduced latency penalty.
  • In addition, by reducing the size of data to be transferred between local and remote persistent storage and system memory, the I/O and Front Side Buses (FSB) are utilized less. This reduced bandwidth requirement can be used to scale system performance beyond its original capabilities, or allow the I/O subsystem to be cost reduced due to reduced component requirements based on the increased effective bandwidth available.
  • Thus, there is a need in the art for mechanisms to balance the increases in clock cycles of the CPU and data movement latency gap without the need for adding additional volatile or persistent storage and memory sub-systems or increasing the clock cycle frequency of internal system and I/O buses. Furthermore, there is a need to supply this capability transparently to end user applications so that they can take advantage of this capability in both a dynamic and a directed way.
  • SUMMARY OF THE INVENTION
  • There is a need to improve caching techniques, so as to realize greater hit rates within the available memory of modern systems. Current hit rates, from methods such as LRU (Least Recently Used), LFU (Least Frequently Used), GCLOCK and others, have increased very slowly in the past decade and many of these techniques do not scale well with the availability of the large amounts of memory that modern computer systems have available today. To help meet this need, the present invention utilizes an entropy signature from the compressed data blocks to supply a bias to pre-fetching operations. This signature is produced from the entropy estimation function described herein, and stored in the tag structure of the cache. This signature provides a unique way to group previously seen data; this grouping is then used to bias or alter the pre-fetching gaps produced by the prefetching function described below. Empirical evidence shows that this entropy signature improves pre-fetching operations over large data sets (greater than 4 GBytes of addressable space) by approximately 11% over current techniques that do not have this feature available.
  • The method for caching data in accordance with the present invention involves detecting a data input/output request, relative to a data object, and then selecting appropriate I/O to cache, wherein said selecting can occur with or without user input, or with or without application or operating system preknowledge. Such selecting may occur dynamically or manually. The method of the present invention further involves estimating an entropy of a data block to be cached in response to the data input/output request; selecting a compressor using a value of the entropy of the data block from the estimating step, wherein each compressor corresponds to one of a plurality of ranges of entropy values relative to an entropy watermark; and storing the data block in a cache in compressed form from the selected compressor, or in uncompressed form if the value of the entropy of the data block from the estimating step falls in a first range of entropy values relative to the entropy watermark. The method for caching data in accordance with the present invention can also include the step of prefetching a data block using gap prediction with an applied entropy bias, wherein the data block is the data block to be cached, as referenced above, or is a separate second data block. The method of the present invention can also involve the following additional steps: adaptively adjusting the plurality of ranges of entropy values; scheduling a flush of the data block from the cache; and suppressing operating system flushes in conjunction with the foregoing scheduling step.
  • The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter, which form the subject of the claims of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention may be better understood, and its numerous objects, features, and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference number throughout the several figures designates a like or similar element.
  • FIG. 1A (prior art) depicts a generalized system architecture of a modern data processing system;
  • FIG. 1B (prior art) depicts generalized software architecture for the I/O subsystem of Windows 2000, XP, and beyond;
  • FIG. 2A illustrates a high-level logical view of an adaptive compressed cache architecture in accordance with the present inventive principles;
  • FIG. 2B illustrates, in more detail, a high-level logical view of an adaptive compressed cache;
  • FIG. 2C illustrates a logical view of an adaptive compressed caching architecture in accordance with the present inventive principles;
  • FIG. 2D is a table showing opened file policy for cache in accordance with an embodiment of the present invention;
  • FIG. 2E illustrates the flags used for file tracking specifications in accordance with an embodiment of the present invention;
  • FIG. 3 illustrates a cache protocol in a state diagram format view in accordance with present state-of-the-art principles;
  • FIG. 4A shows a modified MSI cache protocol, wherein the MSI protocol is modified in accordance with the present inventive design principles;
  • FIG. 4B shows state transitions for write-invalidation in accord with the present inventive design principles;
  • FIGS. 5 and 6 are flow diagrams illustrating implementation details in accordance with an embodiment of the present invention;
  • FIG. 7A-7D are further flow diagrams illustrating implementation details in accordance with an embodiment of the present invention;
  • FIG. 7E is a schematic representation of a data structure in accordance with an embodiment of the present invention;
  • FIG. 7F schematically depicts a set of entropy bands about the maximum-entropy watermark, each band having a pre-selected relative width;
  • FIGS. 7G and 8A-8J are flow diagrams illustrating implementation details in accordance with an embodiment of the present invention; and
  • FIG. 9 illustrates an exemplary hardware configuration of a data processing system in accordance with the present invention.
  • DESCRIPTION OF THE PREFERRED EMBODIMENT(S)
  • In the following description, numerous specific details are set forth such as specific word or byte lengths, etc. to provide a thorough understanding of the present invention. However, it will be obvious to those skilled in the art that the present invention may be practiced without such specific details. In other instances, well-known circuits have been shown in block diagram form in order not to obscure the present invention in unnecessary detail. For the most part, details concerning timing considerations and the like have been omitted inasmuch as such details are not necessary to obtain a complete understanding of the present invention and are within the skills of persons of ordinary skill in the relevant art.
  • Refer now to the drawings wherein depicted elements are not necessarily shown to scale and wherein like or similar elements are designated by the same reference numeral through the several views.
  • FIG. 1A (prior art) depicts a generalized system architecture of a modern data processing system.
  • FIG. 1B (prior art) depicts generalized software architecture for the I/O subsystem of Windows 2000, XP, and beyond. This diagram is not intended to be literally accurate, but a generalized view of the software components, and how they exist within the system from a hierarchical point of view. This diagram utilizes the Windows operating system only for illustrative purposes, as the present inventive embodiment may be implemented in any modern operating system in fundamentally the same way. Note that this figure illustrates both a file and data cache, as well as a network controller device cache. The present invention may be adapted to either a network controller device or a disk controller device using the same inventive design principles discussed below.
  • FIG. 2A illustrates a high-level logical view of an adaptive compressed cache architecture in accordance with the present inventive principles.
  • FIG. 2B illustrates, in more detail, a high-level logical view of an adaptive compressed cache.
  • FIG. 2C illustrates a logical view of an adaptive compressed caching architecture 100 in accordance with the present inventive principles. Modern data processing systems may be viewed from a logical perspective as a layered structure 102 in which a software application 104 occupies the top level, with the operating system (OS) application program interfaces (APIs) 106 between the application and the OS 108. OS APIs 106 expose system services to the application 104. These may include, for example, file input/output (I/O), network I/O, etc. Hardware devices are abstracted at the lowest level 110. Hardware devices (see FIGS. 2A and 2B) may include the central processing unit (CPU) 112, memory, persistent storage (e.g., disk controller 114), and other peripheral devices 116. In the logical view represented in FIG. 2C, these are handled on an equal footing. That is, each device “looks” the same to the OS.
  • In accordance with the present inventive principles, filter driver 118 intercepts the operating system file access and performs caching operations, described further herein below, transparently. That is, the caching, file tracking and, in particular, the compression associated therewith, is transparent to the application 104. Data selected for caching is stored in a (compressed) cache (denoted as ZCache 120). (The “ZCache” notation is used as a mnemonic device to call attention to the fact that the cache in accordance with the present invention is distinct from the instruction/data caches commonly employed in modern microprocessor systems, and typically denoted by the nomenclature “L1”, “L2” etc. cache. Furthermore the Z is a common mnemonic used to indicate compression or encoding activity.) In an embodiment of the present invention, ZCache 120 may be physically implemented as a region in main memory. Filter 118 maintains a file tracking database (DB) 122 which contains information regarding which files are to be cached or not cached, and other information useful to the management of file I/O operations, as described further herein below. Although logically part of filter driver 118, physically, file tracking DB 122 may be included in ZCache 120.
  • A few notes on FIG. 2C:
  • 1) The preferred embodiment of the File system driver layers itself between boxes #2 (I/O Manager Library) and #18 (FS Driver).
  • 2) The disk filter layers itself between boxes #18 (FS Driver) and the boxes in the peer group depicted by #19 (Disk Class), #20 (CD-ROM Class), and #21 (Class).
  • 3) The ZCache module exists as a stand-alone device driver adjunct to the file system filter and disk filter device drivers.
  • 4) A TDI Filter Driver is inserted between box (TDI) 8, with connection tracking for network connections that operates the same as the file tracking modules in the compressed data cache, and the peer group of modules that consist of (AFD) 3, (SRV) 4, (RDR) 5, (NPFS) 6, and (MSFS) 7. A complete reference on TDI is available on the Microsoft MSDN website at
  • http://msdn.microsoft.com/library/default.asp?url=/library/en-us/network/hh/network/303tdi_519j.asp, which is incorporated herein by reference.
  • 5) A NDIS intermediate cache driver is inserted between the bottom edge of the transport drivers and the upper edge of the NDIS components.
  • FIG. 3 illustrates a cache protocol, in a state diagram format view, in accordance with present state-of-the-art principles. This state diagram describes the Modified-Shared-Invalid (MSI) cache protocol. This cache protocol is one used on processor caches and is closest to what is needed for a block-based cache. Other possible cache protocols, which are not precluded by this preferred embodiment, include MESI, MOESI, Dragon and others.
  • The definitions of the states shown in FIG. 3 are:
  • 1) Invalid: The cache line does not contain valid data.
  • 2) Shared: The cache line contains data, which is consistent with the backing store in the next level of the memory hierarchy.
  • 3) Modified: The cache line contains the most recent data, which differs from the data contained in the backing store.
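  • For reference, a minimal Python sketch of these prior-art MSI states follows; the enumeration simply restates the definitions above and does not reproduce the transition logic of FIG. 3:

        from enum import Enum, auto

        class MsiState(Enum):
            """Prior-art MSI cache-line states as defined above."""
            INVALID = auto()    # the cache line does not contain valid data
            SHARED = auto()     # data is consistent with the backing store
            MODIFIED = auto()   # most recent data; differs from the backing store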
  • FIG. 4A shows the modified MSI cache protocol. In accordance with the present inventive design principles, the MSI protocol must be modified, as in FIG. 4A, to accomplish the present inventive design goals. Many factors are considered in the development of caching protocols, and most of the above-mentioned cache protocols are general purpose only or are designed for a specific target implementation, such as a processor (CPU) cache. In order to meet the design goals of the present inventive principles, other cache protocol factors, rather than only those embodied by the MSI protocol, must be considered.
  • Other caching protocol factors to consider are:
  • 1) Read/Write ordering consistency
  • 2) Allocate on Write Policy
  • 3) Write-through, Write-Back, and Non-cacheable attributes
  • 4) Blocking vs. a Non-Blocking design
  • 5) Support for hardware codecs
  • 6) Squashing support to save I/O Requests
  • Another important item to consider when applying this concept to the invention's cache protocol is the high latency associated with issuing and completing disk I/Os. It is necessary to break apart the MSI Shared and Modified states to take into consideration the following cases:
  • 1) A cache line is allocated, but the disk I/O may not complete for hundreds, if not thousands, of microseconds. During this time, additional I/O requests could be made against the same allocated cache line.
  • 2) Dynamically changing cache policies based on file-stream attributes, in different process contexts.
  • 3) Taking maximum advantage of the asynchronous I/O model.
  • Application of these considerations is shown in the state diagram FIG. 4B, which shows state transitions for write-invalidation in accord with the present inventive design principles.
  • Many operating systems have features that can be exploited for maximum performance benefit. As previously mentioned, some of these features are asynchronous I/O models; I/O Request Packets, or IRPs, that can be pended, managed and queued by intermediate drivers; and internal list manipulation techniques such as look-aside lists or buddy lists. These features may vary slightly from operating system to operating system; none of them is precluded or required by the present inventive design principles.
  • Refer now to FIG. 5, which illustrates, in flow chart form, an adaptive, transparent compression caching methodology 200 in accordance with the present inventive principles. In the logical view of FIG. 2C, methodology 200 may be performed primarily by filter driver 118, or alternatively, may sit logically between filter driver 118 and ZCache driver 120.
  • Methodology 200 watches for I/O operations involving data block moves, in step 502 (see FIG. 5). As illustrated in FIG. 6, a data block move may be detected by “peeking” at, or disassembling, the I/O request packets that control the handling of I/O operations. If an I/O operation involving a data block move is detected, methodology 200 performs operations to determine if the subject data is to be cached. This is described in conjunction with step 204 of FIG. 6 and steps 204-214 of FIG. 7A. In general, caching decisions are based on user-selectable caching policies in combination with caching “instructions” that may be set by the application making the data transfer request. Step 204 determines how an I/O operation should be handled. In particular, each I/O packet includes descriptive data that may include information (i.e., “flags”) for controlling the caching of the data transported in the packet.
  • Firstly, the user may specify a list of files to be ignored. If, in step 204, the subject file of the data move is in the “ignored” list, process 200 returns to step 208 to continue to watch for data block moves. Otherwise, in step 206, it is determined whether caching is turned off in accordance with a global caching policy. As discussed in conjunction with FIG. 2C, a file-tracking database 122 (equivalently, a file tracking “registry”) may be maintained in accordance with caching architecture 100. This registry may include a set of file tracking flags 20, FIG. 2E. In an embodiment of file tracking flags 20, each entry may be a hexadecimal (hex) digit. GlobalPolicy flag 21, which may be set by the user, determines the global policy, that is, the most aggressive policy permitted for any file. In other words, as described further below, other parameters may override the global policy to reduce the aggressiveness for a particular file. GlobalPolicy flag 21 may take predetermined values (e.g., predetermined hex digits) representing, respectively, a writeback policy, a writethrough policy, and no caching. Writeback caching means that a given I/O write request may be inserted in the ZCache instead of immediately writing the data to the persistent store. Writethrough caching means that the data is also immediately written to the persistent store. If, in step 206, caching is turned off, such as when GlobalPolicy flag 21 is set to the predetermined hex value representing “no cache,” process 200 passes the I/O request to the operating system (OS) for handling, step 208. Otherwise, process 200 proceeds to step 210.
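  • The global-policy check lends itself to a small illustration. The following Python sketch assumes hypothetical hex encodings for the GlobalPolicy flag; the specification states only that predetermined hex digits represent writeback, writethrough, and no caching, so the concrete values are assumptions:

        # Hypothetical encodings; the concrete hex values are assumptions.
        POLICY_NO_CACHE = 0x0
        POLICY_WRITETHROUGH = 0x1
        POLICY_WRITEBACK = 0x2

        def check_global_policy(global_policy: int) -> str:
            """Step 206 in miniature: if caching is globally disabled, the I/O
            request is passed straight to the OS (step 208); otherwise
            processing continues (step 210)."""
            if global_policy == POLICY_NO_CACHE:
                return "pass-to-os"
            return "continue"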
  • In decision block 210, it is determined whether dynamic, manual, or, alternatively, non-conservative tracking is set. This may be responsive to a value of Dynamic flag 28, FIG. 2E. In an embodiment of the present invention, if the value of the flag is “writethrough,” dynamic tracking is enabled, and if the value of the flag is “no cache,” manual tracking is enabled. (Manual tracking allows the user to explicitly list in the file tracking database which files are to be cached.) In dynamic mode, if, in step 212, the subject file is a tracked file, it is cached in the ZCache in accordance with the cache policy (either writethrough or writeback). File flags associated with the subject file are ignored in manual mode and honored in dynamic mode. In particular, in a Windows NT environment, a FO_NO_INTERMEDIATE_BUFFERING flag is ignored in manual mode (and honored in dynamic mode), and likewise for an analogous flag in other OS environments. If the subject file is an untracked file, process 200 proceeds to step 214.
  • Untracked files include metadata and files that may have been opened before the caching process started. Metadata files are files that contain descriptions of data, such as information concerning the location of files and directories, log files used to recover corrupt volumes, and flags which indicate bad clusters on a physical disk. Metadata can represent a significant portion of the I/O to a physical persistent store because the contents of small files (e.g., <4,096 bytes) may be stored entirely in metadata files. In step 214 it is determined whether non-conservative caching is enabled. In an embodiment of the present invention using file tracking flags 20, FIG. 2E, step 214 may be performed by examining Default flag 24, FIG. 2D. If the value of Default flag 24 is the hex digit representing “writeback,” then non-conservative caching is enabled, and decision block 214 proceeds by the “Y” branch. Conversely, if the value of Default flag 24 is the hex digit representing “no cache,” then non-conservative caching is disabled, decision block 214 proceeds by the “N” branch, and the respective file operation is passed to the OS for handling (step 208).
  • It is also determined whether the subject file is a pagefile and, if so, whether caching of pagefiles is enabled. Flag 28 (FIG. 2E) has the value representing pagefile I/O. Pagefile I/O that is not to be cached is passed to the OS for handling.
  • Having determined that the subject data is to be cached, process 200 extracts file object information from the I/O request packet in step 220 and stores it in the file tracking DB in step 222 (FIG. 6). Such data may include any policy flags set by the application issuing the subject I/O request. If, for example, in a Windows NT environment, the FO_WRITE_THROUGH flag is set in the packet descriptor, the WRITE_THROUGH flag 28, FIG. 2E, may be set in step 222. Similarly, if the FO_NO_INTERMEDIATE_BUFFERING flag is set in the I/O request packet, then the NO_BUFF flag 28 may be set in step 222. Additionally, sequential file access flags, for example, may also be stored.
  • In FIG. 7B, if the I/O request is a write, process 200 proceeds by the “Y” branch of step 224 to step 226. If the request is not a write request, decision block 224 proceeds by the “N” branch to decision block 228, to determine whether the request is a read.
  • In step 226 (FIG. 7C), storage space in the ZCache is reserved, and in step 230, a miss counter associated with the subject data block to be cached is cleared. Each such block may have a corresponding tag that represents a fixed-size block of data. For example, a block size of 4,096 bytes, which is normally equivalent to the PAGE SIZE of the computer processor that would execute the instructions for carrying out the method of the present invention, may be used in an embodiment of the present invention; however, other block sizes may be used in accordance with the present inventive principles. FIG. 7E schematically illustrates a block tag 300, which may be stored in the file tracking database. Block tag 300 may be viewed as a data structure having a plurality of members, including counter member 302, which includes miss counter 304. In an embodiment of the present invention, counter member 302 may be one byte wide, and miss counter 304 may be one bit wide (“true/false”). The operation of the miss counter will be discussed further herein below.
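  • A minimal Python sketch of such a block tag follows, loosely modelled on FIG. 7E as elaborated in the steps below; the field names, types, and defaults are illustrative assumptions (the actual tag packs the miss and prefetch counters into single bits of the one-byte counter member):

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class BlockTag:
            """Illustrative per-block tag; not the patent's exact layout."""
            lbn: int                   # logical block number of the 4,096-byte block
            miss: bool = False         # miss counter 304 (one bit, "true/false")
            prefetch: bool = False     # prefetch counter 306 (one bit, "true/false")
            entropy: int = 0           # entropy estimate 310, signed value in +/-50
            compressor: str = "none"   # compressor type 312 used for this block
            ratio: float = 1.0         # compression ratio 314 attained
            accesses: int = 0          # number of accesses 316 (non-zero: previously read)
            gap: Optional[int] = None  # gap prediction 318, in logical blocks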
  • In step 232 (FIG. 7C), a compression estimation is made. The amount of compression that may be achieved on any particular block is determined by the degree of redundancy in the data block, in accordance with the classic information theory of Shannon. A block of data that is perfectly random has maximum entropy in this picture and does not compress. An estimation of the entropy of the subject block may therefore be used as a measure of the maximum compression that may be achieved for that block. Different data compression techniques are known in the art, and the “better” the compressor, the closer the compression ratio achieved will be to the entropy-theoretic value. However, greater compression comes at the price of computational complexity or, equivalently, CPU cycles. Thus, although memory may be saved by higher compression ratios, the savings may come at the price of reduced responsiveness because of the added CPU burden. In other words, different compression schemes may be employed to trade off space and time. In an embodiment of the present invention, an entropy estimate may be made using a frequency table for the data representation used. Such frequency tables are often used in the cryptographic and compression arts and represent the statistical properties of the data: they are pre-built tables used for predicting the probable frequency of occurrence of presumed alphabetic tokens in a data stream. For example, for ASCII data, a 256-entry relative frequency table may be used; in this embodiment, the token stream is presumed to consist of ASCII-encoded tokens, but is not restricted to this. For computational convenience, the entropy may be returned as a signed integer value in the range ±50; a maximal-entropy block would return the value 50. The entropy estimate may also be stored in the block tag (tag member 310, FIG. 7E). The value of the entropy estimate may be used to select a compressor, step 234, or to provide a bias to pre-fetching for previously seen read data blocks.
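  • The following Python sketch illustrates one way such an estimate could be computed, using a 256-entry byte-frequency table built from the block itself and a linear mapping of 0-8 bits per byte onto the signed ±50 scale; the specification states only that the estimate is returned in the range ±50, so the mapping is an assumption:

        import math
        from collections import Counter

        def estimate_entropy(block: bytes) -> int:
            """Estimate block entropy from byte frequencies and map it onto a
            signed +/-50 scale (+50 ~ random/incompressible, -50 ~ highly
            redundant).  Illustrative sketch only."""
            if not block:
                return -50
            n = len(block)
            freq = Counter(block)                      # 256-entry frequency table
            bits_per_byte = -sum((c / n) * math.log2(c / n) for c in freq.values())
            return round(bits_per_byte / 8.0 * 100) - 50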
  • In step 234, which may be viewed as a three-way decision block if three levels of compression are provided, the subject data block is compressed using an entropy-estimate-based compressor selection. This may be further understood by referring to FIG. 7F, which schematically depicts a set of entropy bands about the maximum-entropy watermark (which may correspond to a value of zero for a random block), the bands having pre-selected relative widths about the watermark. Thus, bands 402a and 402b are shown with a width of 6% and represent a block that deviates by a relatively small amount from a random block and would be expected to benefit little from compression. Therefore, in step 234 (FIG. 7G), zero compression 236 may be selected; in other words, such a block may be cached without compression. If the entropy estimate returns a value in bands 404a, 404b, shown with a width of 19%, a zero-bit compressor 238 (FIG. 2C) may be selected. A zero-bit compressor counts the number of zeros occurring before a one occurs in the word; the zeros are replaced by the value representing the number of zeros. If the entropy estimate returns a value in bands 406a, 406b, having an illustrated width of 25%, a more sophisticated compressor may be used, as the degree of compression expected may warrant the additional CPU cycles that such a compressor would consume. In step 234, a compressor of the Lempel-Ziv (LZ) type 240 may be selected. LZ-type compressors are based on the concept, described by J. Ziv and A. Lempel in 1977, of parsing strings from a finite alphabet into substrings of different lengths (not greater than a predetermined maximum) and a coding scheme that maps the substrings into code words of a fixed, also predetermined, length. The substrings are selected so that they have about equal probability of occurrence. Algorithms for implementing LZ-type compression are known in the art, for example, the LZW algorithm described in U.S. Pat. No. 4,558,302 issued Dec. 10, 1985 to Welch and the LZO compressors of Markus F. X. J. Oberhumer, http://www.oberhumer.com/, which are incorporated herein by reference. The type of compressor used and the compression ratio attained may be stored in the block tag (members 312 and 314, respectively, FIG. 7E). Bands may be added for other compressor types known in the art, such as Burrows-Wheeler (BWT) or PPM (Prediction by Partial Matching).
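  • As an illustration, the band-based selection of step 234 might look like the following Python sketch; it assumes the +50 end of the signed scale is the maximum-entropy watermark and treats the 6%, 19%, and 25% band widths of FIG. 7F as cumulative distances from that watermark, both of which are interpretive assumptions:

        def select_compressor(entropy: int, bands=(6, 19, 25)) -> str:
            """Pick a compressor from the entropy estimate's distance to the
            maximum-entropy watermark.  Sketch only; band semantics assumed."""
            deviation = abs(50 - entropy)              # distance from "random"
            if deviation <= bands[0]:
                return "none"                          # bands 402a/b: cache uncompressed
            if deviation <= bands[0] + bands[1]:
                return "zero-bit"                      # bands 404a/b: cheap zero-run coder
            return "lz"                                # bands 406a/b: LZ-type (e.g., LZW/LZO)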
  • Moreover, the bands may be adaptively adjusted. If, for example, the CPU is being underutilized, it may be advantageous to use a more aggressive compressor, even if the additional compression might not otherwise be worth the tradeoff; in this circumstance, the widths of bands 404a, 404b and 406a, 406b may be expanded. Conversely, if CPU cycles are at a premium relative to memory, it may be advantageous to increase the width of bands 402a, 402b and shrink the width of bands 406a, 406b. A methodology for adapting the compressor selection is described in conjunction with FIG. 7F.
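  • A possible sketch of such an adjustment follows; the CPU-utilization thresholds and the one-point step size are assumptions made for the example rather than values from the specification:

        def adapt_bands(bands, cpu_utilization, low=0.30, high=0.85, step=1):
            """Widen the compressing bands (404, 406) when the CPU is underused;
            widen the no-compression band (402) and shrink the LZ band (406)
            when CPU cycles are scarce."""
            none_w, zerobit_w, lz_w = bands
            if cpu_utilization < low:      # spare cycles: compress more aggressively
                return (none_w, zerobit_w + step, lz_w + step)
            if cpu_utilization > high:     # CPU-bound: favor uncompressed caching
                return (none_w + step, zerobit_w, max(lz_w - step, 0))
            return bands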
  • In FIGS. 8A-8J, the data is cached, and any unused reserved space is freed. It is determined whether the cached data block previously existed on the persistent store (e.g., disk). If not, an I/O packet of equal size to the uncompressed data block is issued to the persistent store. In this way, the persistent store reserves the space for a subsequent flush, which may also occur if the OS crashes. Additionally, if a read comes in, the block will be returned without waiting for the I/O packet request to complete, in accordance with the writeback mechanism. If the block previously existed on the persistent store, or if the cache policy for the block is writethrough (overriding the writeback default), the block is written to the persistent store. Otherwise, the block is scheduled for a flush. Additionally, “write squashing” may be implemented, whereby flushes coming through from the OS are suppressed. In this way, process 200 may lay down contiguous blocks at one time, to avoid fragmenting the persistent store. Process 200 then returns to step 208.
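  • A compact Python sketch of the flush scheduling and write squashing described here, reusing the illustrative BlockTag above; the flush delay and the LBN-sorted flush order are assumptions made for the example:

        import time
        from collections import deque

        class FlushScheduler:
            """Queue dirty blocks for a deferred flush, suppress OS-initiated
            flushes for blocks already queued, and release due blocks sorted by
            LBN so contiguous blocks can be laid down together."""
            def __init__(self, delay_s: float = 1.0):
                self.delay_s = delay_s
                self.pending = deque()                 # (due_time, tag) entries

            def schedule(self, tag: "BlockTag") -> None:
                self.pending.append((time.monotonic() + self.delay_s, tag))

            def squash_os_flush(self, tag: "BlockTag") -> bool:
                # True: drop the OS flush; the scheduled flush will handle it.
                return any(t is tag for _, t in self.pending)

            def due(self):
                now = time.monotonic()
                ready = [t for d, t in self.pending if d <= now]
                self.pending = deque((d, t) for d, t in self.pending if d > now)
                return sorted(ready, key=lambda t: t.lbn)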
  • Returning to step 228 in FIG. 7B, if the request is a read request, then, in FIG. 7E, the prefetch and miss counters of the subject block are reset and the reference counters for all blocks are updated. A methodology for updating the reference counter for a block will be described in conjunction with FIG. 7D, below. In step 258 (FIG. 7E), it is determined whether the block has been previously read. This may be determined by a non-zero access count in number of accesses member 316, FIG. 7E.
  • If the block has been previously read, in step 260 it is determined whether a gap prediction is stored in the tag (e.g., gap prediction member 318, FIG. 7E). Gap prediction is accomplished by testing the distance, in logical block numbers (LBNs), from one read request on a file to a subsequent read request on the same file: the LBNs are not adjacent (i.e., each read does not take place at the next higher or lower LBN associated with the file), but a regular skip pattern (a read is done, some regular number of LBNs is skipped, either positively or negatively, and a subsequent read is issued at this skipped distance) has been detected from at least two previous reads of the file. If a gap has been detected, then prefetching will continue, to the length of the gap, as if normal sequential access had been detected. If so, it is determined whether a reference counter in the next block in the sequence is smaller than two. If a block that has been prefetched is not hit in the next two references, then it will not be prefetched again, unless its entropy estimate is approximately equal (plus or minus 2%, a value arrived at empirically that may differ for different operating systems or platforms) to the entropy of the previously fetched block, and process 200 bypasses step 264.
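  • A minimal Python sketch of this gap test, assuming the tracked read history is simply a list of LBNs for the file; requiring two matching strides follows the description above, and excluding strides of ±1 (ordinary sequential access) is made explicit in the code:

        from typing import List, Optional

        def predict_gap(read_lbns: List[int]) -> Optional[int]:
            """Return the regular skip distance (in LBNs) if the last two gaps
            between reads of the same file match and are non-adjacent; else None."""
            if len(read_lbns) < 3:
                return None
            gaps = [b - a for a, b in zip(read_lbns, read_lbns[1:])]
            stride = gaps[-1]
            if abs(stride) > 1 and gaps[-2] == stride:
                return stride                    # prefetch continues at this gap
            return None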
  • Otherwise, in step 264 the next sequential block is prefetched and a prefetch counter is set for the block. Referring to FIG. 7E, counter member 302 may, in an embodiment of the present invention, be one-byte wide, and may contain a prefetch counter 306 which may be one bit wide (“true/false”).
  • Returning to step 258, if the block has not been previously read, in FIG. 7F an entropy estimate is made for the block (using the same technique as in step 232) and is stored in the file tracking database (e.g., in compression estimate member 310, FIG. 7E). A next block is then selected for prefetching based on entropy and distance (FIG. 7E). That is, of the blocks nearest in entropy (once again, within 2%), the block closest in distance to the subject block of the read request is prefetched. (Recall that a block has a unique entropy value, but a given entropy value may map to a multiplicity of blocks.) If, however, in FIG. 7E the miss counter for the selected block is set, prefetching of that block is bypassed (“Y” branch of the decision block). Otherwise, in step 274, the block is prefetched, and the miss counter (e.g., miss counter 304, FIG. 7E) for the prefetched block is set (logically “True”). The prefetch counter is set in step 266, as before.
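  • The following Python sketch illustrates this entropy-biased selection, reusing the illustrative BlockTag above and reading the “within 2%” criterion as a tolerance of two points on the signed ±50 scale, which is an assumption:

        from typing import Iterable, Optional

        def select_entropy_prefetch(subject: "BlockTag",
                                    candidates: Iterable["BlockTag"],
                                    tolerance: int = 2) -> Optional["BlockTag"]:
            """Among previously seen blocks whose entropy estimate is within the
            tolerance of the subject block's, choose the block nearest in LBN
            distance, skipping any block whose miss counter is set."""
            best = None
            for tag in candidates:
                if tag.miss or abs(tag.entropy - subject.entropy) > tolerance:
                    continue
                if best is None or abs(tag.lbn - subject.lbn) < abs(best.lbn - subject.lbn):
                    best = tag
            return best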
  • Similarly, if there is no gap prediction, a prefetch based solely on entropy is performed via the “No” branch of decision block 260.
  • In step 204 the read is returned.
  • FIG. 9 illustrates an exemplary hardware configuration of data processing system 700 in accordance with the subject invention. The system, in conjunction with the methodologies illustrated in FIG. 5 and architecture 100 of FIG. 2C, may be used for data caching in accordance with the present inventive principles. Data processing system 700 includes central processing unit (CPU) 710, such as a conventional microprocessor, and a number of other units interconnected via system bus 712. Data processing system 700 may also include random access memory (RAM) 714, read-only memory (ROM) (not shown) and input/output (I/O) adapter 722 for connecting peripheral devices such as disk units 720 to bus 712. System 700 may also include a communication adapter for connecting data processing system 700 to a data processing network, enabling the system to communicate with other systems. CPU 710 may include other circuitry not shown herein, including circuitry commonly found within a microprocessor, e.g., execution units, bus interface units, arithmetic logic units, etc. CPU 710 may also reside on a single integrated circuit.
  • Preferred implementations of the invention include implementations as a computer system programmed to execute the method or methods described herein, and as a computer program product. According to the computer system implementation, sets of instructions for executing the method or methods are resident in the random access memory 714 of one or more computer systems configured generally as described above. These sets of instructions, in conjunction with the system components that execute them, may perform operations in conjunction with data block caching as described hereinabove. Until required by the computer system, the set of instructions may be stored as a computer program product in another computer memory, for example, in disk drive 720 (which may include a removable memory such as an optical disk or floppy disk for eventual use in the disk drive 720). Further, the computer program product can also be stored at another computer and transmitted to the user's workstation by a network or by an external network such as the Internet. One skilled in the art would appreciate that the physical storage of the sets of instructions physically changes the medium upon which it is stored so that the medium carries computer-readable information. The change may be electrical, magnetic, chemical, biological, or some other physical change. While it is convenient to describe the invention in terms of instructions, symbols, characters, or the like, the reader should remember that all of these and similar terms should be associated with the appropriate physical elements.
  • Note that the invention may describe terms such as comparing, validating, selecting, identifying, or other terms that could be associated with a human operator. However, for at least a number of the operations described herein which form part of at least one of the embodiments, no action by a human operator is desirable. The operations described are, in large part, machine operations processing electrical signals to generate other electrical signals.

Claims (14)

1. A method for caching data comprising:
detecting a data input/output (I/O) request, relative to a data object;
selecting appropriate I/O to cache, wherein said selecting can occur with or without user input, or with or without application or operating system preknowledge;
estimating an entropy of a data block to be cached in response to the data input/output request;
selecting a compressor using a value of the entropy of the data block from the estimating step, wherein each compressor corresponds to one of a plurality of ranges of entropy values relative to an entropy watermark;
storing the data block in a cache in compressed form from the selected compressor, or in uncompressed form if the value of the entropy of the data block from the estimating step falls in a first range of entropy values relative to the entropy watermark; and
prefetching the data block using gap prediction with an applied entropy bias.
2. The method of claim 1 further comprising adaptively adjusting the plurality of ranges of entropy values.
3. The method of claim 1 further comprising scheduling a flush of the data block from the cache.
4. The method of claim 3 further comprising suppressing operating system flushes in conjunction with the scheduling step.
5. The method of claim 1, wherein said selecting occurs dynamically.
6. The method of claim 1, wherein said selecting occurs manually.
7. A method for caching data comprising:
detecting a data input/output (I/O) request, relative to a data object;
selecting appropriate I/O to cache, wherein said selecting can occur with or without user input, or with or without application or operating system preknowledge;
estimating an entropy of a first data block to be cached in response to the data input/output request;
selecting a compressor using a value of the entropy of the first data block from the estimating step, wherein each compressor corresponds to one of a plurality of ranges of entropy values relative to an entropy watermark;
storing the first data block in a cache in compressed form from the selected compressor, or in uncompressed form if the value of the entropy of the first data block from the estimating step falls in a first range of entropy values relative to the entropy watermark; and
prefetching a second data block using gap prediction with an applied entropy bias.
8. The method of claim 7 further comprising adaptively adjusting the plurality of ranges of entropy values.
9. The method of claim 7 further comprising scheduling a flush of the data block from the cache.
10. The method of claim 9 further comprising suppressing operating system flushes in conjunction with the scheduling step.
11. The method of claim 7, wherein said selecting occurs dynamically.
12. The method of claim 7, wherein said selecting occurs manually.
13. One or more computer program products readable by a machine and containing instructions for performing the method contained in claim 1.
14. One or more computer program products readable by a machine and containing instructions for performing the method contained in claim 7.
US12/206,051 2004-06-14 2008-09-08 Adaptive Caching of Input / Output Data Abandoned US20090055587A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/206,051 US20090055587A1 (en) 2004-06-14 2008-09-08 Adaptive Caching of Input / Output Data

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US57934404P 2004-06-14 2004-06-14
US11/152,363 US7430638B2 (en) 2004-06-14 2005-06-14 Adaptive input / output compressed system and data cache and system using same
US12/206,051 US20090055587A1 (en) 2004-06-14 2008-09-08 Adaptive Caching of Input / Output Data

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/152,363 Continuation US7430638B2 (en) 2004-06-14 2005-06-14 Adaptive input / output compressed system and data cache and system using same

Publications (1)

Publication Number Publication Date
US20090055587A1 true US20090055587A1 (en) 2009-02-26

Family

ID=37591174

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/152,363 Active 2026-06-21 US7430638B2 (en) 2004-06-14 2005-06-14 Adaptive input / output compressed system and data cache and system using same
US12/206,051 Abandoned US20090055587A1 (en) 2004-06-14 2008-09-08 Adaptive Caching of Input / Output Data

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/152,363 Active 2026-06-21 US7430638B2 (en) 2004-06-14 2005-06-14 Adaptive input / output compressed system and data cache and system using same

Country Status (1)

Country Link
US (2) US7430638B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090006741A1 (en) * 2007-06-29 2009-01-01 Seagate Technology Llc Preferred zone scheduling
US20120198171A1 (en) * 2010-09-28 2012-08-02 Texas Instruments Incorporated Cache Pre-Allocation of Ways for Pipelined Allocate Requests
US9697129B2 (en) * 2015-06-29 2017-07-04 International Business Machines Corporation Multiple window based segment prefetching
US10169360B2 (en) 2015-11-11 2019-01-01 International Business Machines Corporation Mixing software based compression requests with hardware accelerated requests

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7424482B2 (en) * 2004-04-26 2008-09-09 Storwize Inc. Method and system for compression of data for block mode access storage
US7430638B2 (en) * 2004-06-14 2008-09-30 Mossman Holdings Llc Adaptive input / output compressed system and data cache and system using same
US7437510B2 (en) * 2005-09-30 2008-10-14 Intel Corporation Instruction-assisted cache management for efficient use of cache and memory
US8862813B2 (en) 2005-12-29 2014-10-14 Datacore Software Corporation Method, computer program product and appartus for accelerating responses to requests for transactions involving data operations
US7752386B1 (en) * 2005-12-29 2010-07-06 Datacore Software Corporation Application performance acceleration
US7937758B2 (en) * 2006-01-25 2011-05-03 Symantec Corporation File origin determination
US8213607B2 (en) * 2006-10-18 2012-07-03 Qualcomm Incorporated Method for securely extending key stream to encrypt high-entropy data
US7529867B2 (en) * 2006-11-01 2009-05-05 Inovawave, Inc. Adaptive, scalable I/O request handling architecture in virtualized computer systems and networks
US20080104589A1 (en) * 2006-11-01 2008-05-01 Mccrory Dave Dennis Adaptive, Scalable I/O Request Handling Architecture in Virtualized Computer Systems and Networks
JP2009217135A (en) * 2008-03-12 2009-09-24 Panasonic Corp Display data outputting device
US8214343B2 (en) * 2008-03-19 2012-07-03 Microsoft Corporation Purposing persistent data through hardware metadata tagging
US8055849B2 (en) * 2008-04-04 2011-11-08 International Business Machines Corporation Reducing cache pollution of a software controlled cache
US8146064B2 (en) * 2008-04-04 2012-03-27 International Business Machines Corporation Dynamically controlling a prefetching range of a software controlled cache
US8239841B2 (en) 2008-04-04 2012-08-07 International Business Machines Corporation Prefetching irregular data references for software controlled caches
US8554745B2 (en) * 2009-04-27 2013-10-08 Netapp, Inc. Nearstore compression of data in a storage system
US9063663B2 (en) * 2010-09-21 2015-06-23 Hitachi, Ltd. Semiconductor storage device and data control method thereof
US9665630B1 (en) * 2012-06-18 2017-05-30 EMC IP Holding Company LLC Techniques for providing storage hints for use in connection with data movement optimizations
US11468218B2 (en) * 2012-08-28 2022-10-11 Synopsys, Inc. Information theoretic subgraph caching
US9170944B2 (en) 2013-06-25 2015-10-27 International Business Machines Corporation Two handed insertion and deletion algorithm for circular buffer
US9582426B2 (en) * 2013-08-20 2017-02-28 International Business Machines Corporation Hardware managed compressed cache
US10725922B2 (en) 2015-06-25 2020-07-28 Intel Corporation Technologies for predictive file caching and synchronization
US10804930B2 (en) 2015-12-16 2020-10-13 International Business Machines Corporation Compressed data layout with variable group size
US10803018B2 (en) 2015-12-16 2020-10-13 International Business Machines Corporation Compressed data rearrangement to optimize file compression
US10311026B2 (en) 2016-05-27 2019-06-04 International Business Machines Corporation Compressed data layout for optimizing data transactions
US10389837B2 (en) * 2016-06-17 2019-08-20 International Business Machines Corporation Multi-tier dynamic data caching
US10983911B2 (en) * 2017-09-01 2021-04-20 Seagate Technology Llc Capacity swapping based on compression
KR102347871B1 (en) * 2017-11-14 2022-01-06 삼성전자주식회사 Computing system with cache management mechanism and method of operation thereof
US11507511B2 (en) * 2020-04-09 2022-11-22 EMC IP Holding Company LLC Method, electronic device and computer program product for storing data
US11500540B2 (en) * 2020-10-28 2022-11-15 EMC IP Holding Company LLC Adaptive inline compression
US11726699B2 (en) * 2021-03-30 2023-08-15 Alibaba Singapore Holding Private Limited Method and system for facilitating multi-stream sequential read performance improvement with reduced read amplification

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4325085A (en) * 1980-06-09 1982-04-13 Digital Communications Corporation Method and apparatus for adaptive facsimile compression using a two dimensional maximum likelihood predictor
US5357618A (en) * 1991-04-15 1994-10-18 International Business Machines Corporation Cache prefetch and bypass using stride registers
US5794228A (en) * 1993-04-16 1998-08-11 Sybase, Inc. Database system with buffer manager providing per page native data compression and decompression
US5805932A (en) * 1994-04-22 1998-09-08 Sony Corporation System for transmitting compressed data if compression ratio is at least preset ratio and pre-compressed data if compression ratio is less than preset ratio
US6324621B2 (en) * 1998-06-10 2001-11-27 International Business Machines Corporation Data caching with a partially compressed cache
US6360300B1 (en) * 1999-08-31 2002-03-19 International Business Machines Corporation System and method for storing compressed and uncompressed data on a hard disk drive
US20020033762A1 (en) * 2000-01-05 2002-03-21 Sabin Belu Systems and methods for multiple-file data compression
US20030161541A1 (en) * 2002-02-28 2003-08-28 Nokia Corporation Method and device for reducing image by palette modification
US6624761B2 (en) * 1998-12-11 2003-09-23 Realtime Data, Llc Content independent data compression method and system
US6671772B1 (en) * 2000-09-20 2003-12-30 Robert E. Cousins Hierarchical file system structure for enhancing disk transfer efficiency
US6968424B1 (en) * 2002-08-07 2005-11-22 Nvidia Corporation Method and system for transparent compressed memory paging in a computer system
US7051126B1 (en) * 2003-08-19 2006-05-23 F5 Networks, Inc. Hardware accelerated compression
US7098822B2 (en) * 2003-12-29 2006-08-29 International Business Machines Corporation Method for handling data
US7188227B2 (en) * 2003-09-30 2007-03-06 International Business Machines Corporation Adaptive memory compression
US7430638B2 (en) * 2004-06-14 2008-09-30 Mossman Holdings Llc Adaptive input / output compressed system and data cache and system using same


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090006741A1 (en) * 2007-06-29 2009-01-01 Seagate Technology Llc Preferred zone scheduling
US9329800B2 (en) * 2007-06-29 2016-05-03 Seagate Technology Llc Preferred zone scheduling
US10082968B2 (en) 2007-06-29 2018-09-25 Seagate Technology Llc Preferred zone scheduling
US20120198171A1 (en) * 2010-09-28 2012-08-02 Texas Instruments Incorporated Cache Pre-Allocation of Ways for Pipelined Allocate Requests
US8683137B2 (en) * 2010-09-28 2014-03-25 Texas Instruments Incorporated Cache pre-allocation of ways for pipelined allocate requests
US9697129B2 (en) * 2015-06-29 2017-07-04 International Business Machines Corporation Multiple window based segment prefetching
US10169360B2 (en) 2015-11-11 2019-01-01 International Business Machines Corporation Mixing software based compression requests with hardware accelerated requests
US10452615B2 (en) 2015-11-11 2019-10-22 International Business Machines Corporation Mixing software based compression requests with hardware accelerated requests

Also Published As

Publication number Publication date
US7430638B2 (en) 2008-09-30
US20070005901A1 (en) 2007-01-04

Similar Documents

Publication Publication Date Title
US7430638B2 (en) Adaptive input / output compressed system and data cache and system using same
US7124152B2 (en) Data storage device with deterministic caching and retention capabilities to effect file level data transfers over a network
US8176251B2 (en) Dynamic optimization of cache memory
US8255630B1 (en) Optimization of cascaded virtual cache memory
US8943272B2 (en) Variable cache line size management
US6457104B1 (en) System and method for recycling stale memory content in compressed memory systems
Makatos et al. Using transparent compression to improve SSD-based I/O caches
US7725661B2 (en) Data-aware cache state machine
US7047382B2 (en) System and method for managing compression and decompression and decompression of system memory in a computer system
US8230179B2 (en) Administering non-cacheable memory load instructions
Abali et al. Performance of hardware compressed main memory
JP5511965B2 (en) Method, apparatus, program and cache controller for controlling read and write aware cache
US20030105926A1 (en) Variable size prefetch cache
US20080104591A1 (en) Adaptive, Scalable I/O Request Handling Architecture in Virtualized Computer Systems and Networks
US20080104589A1 (en) Adaptive, Scalable I/O Request Handling Architecture in Virtualized Computer Systems and Networks
US20120210066A1 (en) Systems and methods for a file-level cache
US20120210068A1 (en) Systems and methods for a multi-level cache
Franaszek et al. Algorithms and data structures for compressed-memory machines
US7752395B1 (en) Intelligent caching of data in a storage server victim cache
JP2013511081A (en) Method, system, and computer program for destaging data from a cache to each of a plurality of storage devices via a device adapter
JP2000090009A (en) Method and device for replacing cache line of cache memory
Klonatos et al. Azor: Using two-level block selection to improve SSD-based I/O caches
US5809526A (en) Data processing system and method for selective invalidation of outdated lines in a second level memory in response to a memory request initiated by a store operation
US20080104590A1 (en) Adaptive, Scalable I/O Request Handling Architecture in Virtualized Computer Systems and Networks
US8478755B2 (en) Sorting large data sets

Legal Events

Date Code Title Description
AS Assignment

Owner name: MOSSMAN HOLDINGS LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:QUICKSHIFT, INC.;REEL/FRAME:021528/0065

Effective date: 20070320

AS Assignment

Owner name: QUICKSHIFT, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KELLAR, JOHN E.;REEL/FRAME:021662/0649

Effective date: 20050614

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION