US20070186001A1 - Data replication method and apparatus - Google Patents

Data replication method and apparatus

Info

Publication number
US20070186001A1
US20070186001A1
Authority
US
United States
Prior art keywords
storage system
snapshot
data storage
data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/561,512
Other versions
US20110087792A2 (en)
Inventor
James George Wayda
Kent Lee
Elizabeth G. Rodriguez
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seagate Cloud Systems Corp
Original Assignee
Dot Hill Systems Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dot Hill Systems Corp
Priority to US11/561,680 (US8990153B2)
Priority to US11/561,512 (US20110087792A2)
Assigned to DOT HILL SYSTEMS CORP. Assignors: RODRIGUEZ, ELIZABETH G.; WAYDA, JAMES GEORGE; LEE, KENT (assignment of assignors interest; see document for details)
Publication of US20070186001A1
Priority to US12/555,454 (US20090327568A1)
Publication of US20110087792A2
Status: Abandoned


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604 - Improving or facilitating administration, e.g. storage management
    • G06F3/0607 - Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646 - Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/065 - Replication mechanisms
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 - Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659 - Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 - Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/24 - Negotiation of communication capabilities
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 - Error detection or correction of the data by redundancy in hardware
    • G06F11/1658 - Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
    • G06F11/1662 - Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit the resynchronized component or unit being a persistent storage device

Definitions

  • the in-band or storage area network 120 generally functions to transport data between data storage systems 104 and/or 108 and host devices 112, and can be any data pipe capable of supporting multiple initiators and targets. Accordingly, examples of in-band networks 120 include Fibre Channel (FC), iSCSI, parallel SCSI, Ethernet, ESCON, or FICON connections or networks, which may typically be characterized by an ability to transfer relatively large amounts of data at medium to high bandwidths.
  • the out-of-band network 124 generally functions to support the transfer of communications and/or commands between various network nodes, such as data storage resource systems 104, 108, host devices 112, and/or administrative computers 116, although such data may also be transferred over the in-band communication network 120.
  • Examples of an out-of-band communication network 124 include a local area network (LAN) or other transmission control protocol/Internet protocol (TCP/IP) network.
  • the out-of-band communication network 124 is characterized by an ability to interconnect disparate nodes or other devices through uniform user interfaces, such as a web browser.
  • the out-of-band communication network may provide the potential for global or otherwise widely distributed management and remote replication between data storage systems 104, 108 via TCP/IP.
  • Every electronic data system node or computer 104, 108, 112 and 116 need not be interconnected to every other node or device through both the in-band network 120 and the out-of-band network 124.
  • no host device 112 needs to be interconnected to any other host device 112, data storage system 104, 108, or administrative computer 116 through the out-of-band communication network 124, although interconnections between a host device 112 and other devices 104, 108, 116 through the out-of-band communication network 124 are not prohibited.
  • an administrative computer 116 may be interconnected to at least one storage resource device 104 or 108 through the out-of-band communication network 124.
  • An administrative computer 116 may also be interconnected to the in-band network 120 directly, although such an interconnection is not required. For example, instead of a direct connection, an administrative computer 116 may communicate with a controller of a data storage system 104, 108 using the in-band network 120.
  • a host computer 112 exchanges data with one or more of the data storage systems 104, 108 in connection with the execution of application programming, whether that application programming concerns data management or otherwise.
  • an electronic data system 100 may include multiple host computers 112.
  • An administrative computer 116 may provide a user interface for controlling aspects of the operation of the storage systems 104, 108.
  • the administrative computer 116 may be interconnected to the storage system 104 directly, and/or through a bus or network 120 and/or 124.
  • an administrative computer 116 may be integrated with a host computer 112.
  • multiple administrative computers 116 may be provided as part of the electronic data system 100.
  • an electronic data system 100 may include more than two data storage systems.
  • FIG. 2A illustrates components that may be included in a data storage system 104, 108 in accordance with embodiments of the present invention.
  • the data storage system 104, 108 includes a number of storage devices 204.
  • Examples of storage devices 204 include hard disk drives, such as serial advanced technology attachment (SATA), small computer system interface (SCSI), serial attached SCSI (SAS), Fibre Channel (FC), or parallel advanced technology attachment (PATA) hard disk drives.
  • Other examples of storage devices 204 include magnetic tape storage devices, optical storage devices, or solid state disk devices.
  • an array is understood to refer to a RAID set and the array partition. It can be appreciated that some subset of the array partition can be represented as a single unit to a host.
  • a LUN may be implemented in accordance with any one of the various array levels or other arrangements for storing data on one or more storage devices 204.
  • the storage devices 204 may contain data comprising a master storage volume, which may correspond to a LUN, in addition to one or more snapshots of the master storage volume taken at different times. In one embodiment, the snapshots may be mapped to LUNs.
  • snapshots may comprise metadata and data stored in a backing store on the storage devices 204.
  • a data storage system 104, 108 in accordance with embodiments of the present invention may be provided with a first controller slot 208a.
  • other embodiments may include additional controller slots, such as a second controller slot 208b.
  • a controller slot 208 may comprise a connection or set of connections to enable a controller 212 to be operably interconnected to other components of the data storage system 104, 108.
  • a data storage system 104, 108 in accordance with embodiments of the present invention includes at least one controller 212a.
  • the data storage system 104, 108 may include exactly one controller 212.
  • a data storage system 104, 108 in accordance with other embodiments of the present invention may be operated in a dual redundant active-active controller mode by providing a second controller 212b.
  • the second controller slot 208b receives the second controller.
  • the provision of two controllers 212a, 212b permits data to be mirrored between the controllers 212a-212b, providing redundant active-active controller operation.
  • One or more busses or channels 216 are generally provided to interconnect a controller or controllers 212 through the associated controller slot or slots 208 to the storage devices 204. Furthermore, while illustrated as a single shared bus or channel 216, it can be appreciated that a number of dedicated and/or shared buses or channels may be provided. Additional components that may be included in a data storage system 104 include one or more power supplies 128 and one or more cooling units 132. In addition, a bus or network interface 136 may be provided to interconnect the data storage system 104, 108 to the bus or network 120 and/or 124, and/or to a host computer 112 or administrative computer 116.
  • the data storage system 104 can comprise one or more storage volumes implemented in various other ways.
  • the data storage system 104 may comprise a hard disk drive or other storage device 204 connected or associated with a server or a general purpose computer.
  • the storage system 104 may comprise a Just a Bunch of Disks (JBOD) system or a Switched Bunch of Disks (SBOD) system.
  • the data storage system 104, 108 includes a storage appliance 220 interconnecting one or more storage devices 204 to a bus or network. Furthermore, the storage appliance 220 may be inserted between a host 112 or other device and one or more storage devices 204.
  • a storage device 204 may itself comprise a collection of hard disk drives or other storage devices, for example provided as part of a RAID or SBOD system.
  • FIG. 3A illustrates a storage appliance 220 in connection with an embodiment of the present invention in which at least some data replication establishment and management functions are provided by software running on the storage appliance 220, which interconnects one or more storage devices 204 to a host device 112 or to other data storage systems 104, 108.
  • the components may include a processor 304a capable of executing program instructions.
  • the processor 304a may include any general purpose programmable processor or controller for executing application programming.
  • the processor 304a may comprise a specially configured application specific integrated circuit (ASIC).
  • the processor 304a generally functions to run programming code including a data replication application 328a.
  • the host 112 or the administrative 116 computer may additionally include memory 308a for use in connection with data replication applications.
  • the memory 308a may store a copy of the data replication application 328a and configuration instructions.
  • the memory 308a may comprise solid state memory that is resident, removable, or remote in nature, such as FLASH, DRAM and/or SDRAM.
  • the host 112 or the administrative 116 computer may also include data storage 314a for the storage of application programming and/or data.
  • operating system software 318 may be stored in the data storage 314a.
  • the data storage 314a may be used to store a data replication application 328a comprising instructions for pulling data from a source data storage system 104 and providing the data to a destination data storage system 108 as described herein.
  • the data replication application 328a may itself include a number of modules or components, such as a main input/output (IO) module and a restore thread or module.
  • the host 112 or the administrative 116 computer may also include one or more network interfaces 332a.
  • a first network interface 332a1 may interconnect the storage appliance to a host device 112 through a first network, and a second network interface 332a2 may interconnect the storage appliance to the storage device or devices 204 that, together with the storage appliance 220, comprise a data storage system 104, 108.
  • the first and second networks may be of the same type or different types, and the storage appliance 220 may be “inline” between a host device 112 and a storage device 204.
  • Examples of a network interface 332a include a Fibre Channel (FC) interface, Ethernet, or any other type of communication interface.
  • a network interface 332a may be provided in the form of a network interface card or other adapter.
  • FIG. 3B illustrates aspects of a storage controller 212 in accordance with embodiments of the present invention.
  • a storage controller 212 includes a processor subsystem 304 capable of executing instructions for performing, implementing, and/or controlling various controller 212 functions. Such instructions may include instructions for implementing aspects of a snapshot restore method and apparatus. Furthermore, such instructions may be stored as software and/or firmware.
  • operations concerning the generation of parity data or other operations may be performed using one or more hardwired and/or programmable logic circuits provided as part of the processor subsystem 304.
  • the processor subsystem 304 may be implemented as a number of discrete components, such as one or more programmable processors in combination with one or more logic circuits.
  • Processor subsystem 304 may also include or be implemented as one or more integrated devices or processors.
  • a processor subsystem may comprise a complex programmable logic device (CPLD).
  • a controller 212 also generally includes memory 306.
  • the memory 306 is not specifically limited to memory of any particular type.
  • the memory 306 may comprise a solid state memory device, or a number of solid state memory devices.
  • the memory 306 may include separate volatile memory 308 and non-volatile memory 310 portions.
  • the memory 306 typically includes a write cache 312 and a read cache 316 that are provided as part of the volatile memory 308 portion of the memory 306, although other arrangements are possible.
  • the write cache 312 and read cache 316 allow a storage controller 212 to improve the speed of input/output (IO) operations between a host 112 and the data storage devices 204 comprising an array or array partition.
  • Examples of volatile memory 308 include DRAM and SDRAM.
  • the non-volatile memory 310 may be used to store data that was written to the write cache 312 of memory 306 in the event of a power outage affecting the data storage system 104.
  • the non-volatile memory portion 310 of the storage controller memory 306 may include any type of data memory device that is capable of retaining data without requiring power from an external source. Examples of non-volatile memory 310 include, but are not limited to, compact flash or other standardized non-volatile memory devices.
  • the memory 306 also includes a region 324 that provides storage for controller code 326.
  • the controller code 326 may comprise a number of components, including a data replication process or application 328 comprising instructions for pulling data from a source data storage system as described herein.
  • the data replication application 328 may itself include a number of modules or components, such as a main input/output (IO) module and a restore thread or module.
  • the controller code region 324 may be established in a volatile memory 308 portion of the storage controller memory 306.
  • controller code 326 may be stored in non-volatile memory 310.
  • a storage controller 212 may additionally include other components.
  • a bus and/or network interface 332 may be provided for operably interconnecting the storage controller 212 to the remainder of the data storage system 104, for example through a controller slot 208 and a bus or channel 216.
  • the interface 332 may be configured to facilitate removal or replacement of the storage controller 212 in a controller slot 208 as a field replaceable unit (FRU).
  • integral signal and power channels may be provided for interconnecting the various components of the storage controller 212 to one another.
  • a process for replicating data stored on a source data storage system 104 to a destination data storage system 108, in which the source data storage system 104 has the attributes of a target device, is illustrated in FIG. 4.
  • the destination data storage system 108 or a storage appliance (hereinafter referred to as the initiator) commands the source data storage system 104 to take a snapshot of the master storage volume maintained by the source data storage system 104 (or of some specified storage volume maintained by the source data storage system 104).
  • a snapshot is a virtual volume that represents the data that existed on the master storage volume at the point in time that the snapshot was taken.
  • the master storage volume is the current set of data maintained on the source data storage system 104.
  • the master storage volume may correspond to a standard RAID volume or LUN.
  • the initiator then commands the source data storage system 104 to map the snapshot to a LUN and to make the LUN visible to the initiator (step 408). All of the blocks of data in the snapshot are then read by the destination data storage system 108 (or by another device acting as the initiator), and the data is copied to the destination data storage system (e.g., the second data storage system 108) (step 412).
  • copying the data to the destination storage system 108 generally comprises storing the data in data storage devices 204 that are local to or associated with the destination data storage system 108 .
  • Rather than transferring all data blocks from the source data storage system 104 to the destination data storage system 108, it may be possible to perform a hash operation on a range of blocks on the source data storage system 104 and the destination data storage system 108 in order to create a hash list.
  • the hash list can then be transferred from the source data storage system 104 to the destination data storage system 108. If the hash key for a range of disk blocks on the source data storage system 104 is identical to the hash key for the same range of blocks on the destination data storage system 108, then it is not necessary to transfer the data blocks.
  • the hash algorithm facilitates effective use of the bandwidth of the asynchronous link between the local and remote systems, as illustrated in the sketch below.
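As a rough illustration of this hash-list optimization (an editor's sketch, not text from the patent: the chunk size, block size, and choice of SHA-256 are all assumptions), the following Python fragment computes one hash key per range of blocks on each side and reports only the ranges that actually need to cross the link:

    import hashlib

    CHUNK_BLOCKS = 128   # assumed chunk size, in logical blocks (LBAs)
    BLOCK_SIZE = 512     # assumed bytes per logical block

    def hash_list(volume: bytes) -> list:
        """Compute one hash key per chunk-sized range of blocks."""
        chunk_bytes = CHUNK_BLOCKS * BLOCK_SIZE
        return [hashlib.sha256(volume[off:off + chunk_bytes]).digest()
                for off in range(0, len(volume), chunk_bytes)]

    def ranges_to_transfer(source_hashes, dest_hashes):
        """Only ranges whose hash keys differ must be transferred."""
        return [i for i, (s, d) in enumerate(zip(source_hashes, dest_hashes))
                if s != d]

    # Two nearly identical volumes: only the one changed range is reported.
    src = bytearray(4 * CHUNK_BLOCKS * BLOCK_SIZE)
    dst = bytearray(src)
    src[0] = 0xFF
    print(ranges_to_transfer(hash_list(bytes(src)), hash_list(bytes(dst))))  # [0]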
  • the destination data storage system 108 takes a snapshot of the data copied from the source data storage system 104. Both data storage systems therefore have identical copies of data and a snapshot of the data from the same point in time after this step is completed. It should be noted that the same algorithm can be used to copy any volume, even if not snapped, so long as the source volume is not being written to during the copy. This results in a destination-initiated volume copy.
  • the initiator commands the source data storage system 104 to take another snapshot of the master storage volume.
  • the initiator then commands the source data storage system 104 to map this snapshot to a LUN and to make the LUN visible to the destination data storage system 108 (step 424).
  • the initiator next commands the source data storage system 104 to provide a list of the changes (i.e. the delta data) between the current snapshot and the previous snapshot (step 428). Using the provided list, the initiator then reads the blocks in the list from the source system, and copies the data to the local data storage (step 432). The destination data storage system 108 then takes a snapshot of the data now stored in the local data storage (step 436).
  • the source 104 and destination 108 data storage systems thus have identical snapshots or copies of the master data storage volume on the source data storage system 104, at the time the first snapshot was taken, and at the time the second snapshot was taken. If desired, earlier snapshots can be deleted from one or both of the data storage systems 104, 108, for example if it is considered unimportant to maintain a number of restore points for the master storage volume. The full-copy and delta-update cycles are sketched in code below.
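The initial full copy (steps 404-416) and the delta update (steps 420-436) can be summarized in code. The sketch below is an editor's illustration of the pull model under an assumed command interface on the source system; the method names are invented for clarity and are not defined by the patent:

    def initial_replication(source, destination, volume):
        """Full copy: first snapshot, map to LUN, read every block."""
        snap = source.take_snapshot(volume)     # commanded by the destination
        lun = source.map_snapshot_to_lun(snap)  # snapshot made visible as a LUN
        for block_number, block in source.read_all_blocks(lun):
            destination.write_block(volume, block_number, block)
        destination.take_snapshot(volume)       # both sides now hold identical data
        return snap

    def delta_replication(source, destination, volume, previous_snap):
        """Update: new snapshot, then copy only the changed block numbers."""
        snap = source.take_snapshot(volume)
        lun = source.map_snapshot_to_lun(snap)
        for block_number in source.changed_block_list(previous_snap, snap):
            destination.write_block(volume, block_number,
                                    source.read_block(lun, block_number))
        destination.take_snapshot(volume)
        return snap

    # A backup copy is then maintained by running delta_replication on a
    # schedule (e.g., every 10 minutes) without re-copying the whole volume.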
  • copies of the master storage volume are made periodically. For example, an updated copy of the master storage volume can be taken every 10 minutes.
  • data can be replicated from a source data storage system 104, even if the source data storage system 104 does not provide native support for data replication to another data storage system 108.
  • the source data storage system 104 does not need to act as an initiator in connection with replicating stored data. Instead, the source data storage system 104 can operate as a target device.
  • the source data storage system 104 should be able to respond to commands to take snapshots, to map a snapshot to a LUN, and to provide lists of LBAs that contain data (or changed data for subsequent copies); a minimal sketch of such a command surface follows.
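A hedged sketch of that minimal command surface is given below; the command names, the dictionary-based dispatch, and the use of in-memory byte buffers as stand-ins for volumes are all editorial assumptions:

    class MinimalSourceTarget:
        """The small set of behaviors a source needs so it can be replicated from."""

        def __init__(self, volumes):
            self.volumes = volumes      # volume name -> bytearray of block data
            self.snapshots = {}         # snapshot id -> frozen point-in-time copy

        def handle(self, command, **args):
            """Respond to initiator commands while remaining a pure target."""
            return {
                "take_snapshot": self.take_snapshot,
                "map_snapshot_to_lun": self.map_snapshot_to_lun,
                "list_changed_blocks": self.list_changed_blocks,
            }[command](**args)

        def take_snapshot(self, volume):
            snap_id = "%s-%d" % (volume, len(self.snapshots))
            self.snapshots[snap_id] = bytes(self.volumes[volume])
            return snap_id

        def map_snapshot_to_lun(self, snapshot):
            return self.snapshots[snapshot]   # stand-in for LUN mapping/visibility

        def list_changed_blocks(self, old, new, block_size=512):
            a, b = self.snapshots[old], self.snapshots[new]
            return [off for off in range(0, len(a), block_size)
                    if a[off:off + block_size] != b[off:off + block_size]]

    # Example: take two snapshots around a write; only the changed block is listed.
    source = MinimalSourceTarget({"master": bytearray(2048)})
    s1 = source.handle("take_snapshot", volume="master")
    source.volumes["master"][0] = 1
    s2 = source.handle("take_snapshot", volume="master")
    print(source.handle("list_changed_blocks", old=s1, new=s2))   # [0]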
  • after the initial snapshot data comprising a complete copy of the data in the source's master storage volume has been transferred, only the changed data needs to be transferred in order to establish an updated copy of the master storage volume on the destination storage device.
  • a simple bridging communication protocol may be provided that enables these and other functions to be performed between data storage systems 104, 108 from different vendors (i.e. between heterogeneous data storage systems).
  • the simple bridging protocol may comprise a set of commands recognized by data storage systems 104, 108 sourced from different vendors.
  • the simple bridging protocol may be provided to different vendors, in order to facilitate the communications necessary to replicate data or to perform other functions or services, for example between heterogeneous data storage systems.
  • a destination data storage system 108 may communicate with a source data storage system 104 that natively supports a different communication protocol, in order to perform data replication as described herein. Therefore, a source data storage system 104 from a different vendor than the destination data storage system 108 need not be provided with any extra intelligence in order to provide data replication in cooperation with the destination data storage system 108, other than an ability to take snapshots and respond to commands expressed in the simple bridging protocol.
  • the simple bridging protocol is not limited to use in connection with pulling data from a source data storage system 104 operating as a target.
  • the simple bridging protocol can be used to allow a source data storage system 104 to act as an initiator and the destination data storage system 108 to act as a target in connection with data replication operations.
  • the source data storage system 104 (operating as an initiator) replicates data to a volume on the destination storage system 108 and commands the destination storage system 108 (operating as a target) to take a snapshot of that volume.
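For completeness, the reverse (push) direction under the same protocol might look like the short sketch below; again the method names are assumptions introduced for illustration, not part of the patent:

    def push_replication(source, destination, volume):
        """Source acts as initiator; destination acts as target."""
        for block_number, block in source.iterate_blocks(volume):
            destination.write_block(volume, block_number, block)  # replicate the data
        destination.take_snapshot(volume)   # then command the target to snapshot it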
  • a process for providing and using a simple bridging protocol comprises loading the protocol into a source data storage system 104 and into a destination storage system 108 (step 504).
  • the protocol may be loaded onto the storage systems 104, 108 as a software plug-in or may already be resident within firmware of the data storage systems 104, 108.
  • a query for replication capabilities is then issued by the data storage system 104 or 108 operating as an initiator to the data storage system 104 or 108 operating as a target (step 508).
  • the query may be issued by another initiator in the electronic data system 100 . This query may be issued using the simple bridging protocol.
  • the data storage system 104 or 108 operating as a target device provides capability information (step 512).
  • the capability information may comprise data storage system 104 or 108 data management services capabilities, such as snapshot capabilities, the ability to act as an initiator, remote data replication capabilities, storage volume naming capabilities, snapshot volume naming capabilities, storage volume mapping capabilities, snapshot volume mapping capabilities, etc.
  • the data storage system 104 or 108 acting as the initiator, or some other initiator in the electronic data system 100, then uses the capability information to determine how to effect data replication between the source data storage system 104 and the destination data storage system 108 (step 516).
  • a system using the simple bridging protocol is not limited to any fixed set of features. Accordingly, as features are added to versions of a data storage system 104 or 108 , those added features may be reported to the data storage systems 104 or 108 or other components of an electronic data storage system 100 and can be made available to those other components.
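An editor's sketch of the FIG. 5 exchange (steps 508-516) follows. The capability field names and the decision rule are assumptions; the point is only that the initiator picks a replication method from whatever the target advertises:

    def choose_replication_method(target):
        """Query a target's capabilities, then select a workable method (step 516)."""
        caps = target.query_capabilities()   # steps 508/512, via the bridging protocol
        if caps.get("snapshot") and caps.get("map_snapshot_to_lun"):
            return "pull"    # destination-driven, snapshot-based batch replication
        if caps.get("initiator_mode"):
            return "push"    # let the other system drive the transfer instead
        raise RuntimeError("no mutually supported replication method")

    class StubTarget:
        def query_capabilities(self):
            return {"snapshot": True, "map_snapshot_to_lun": True}

    print(choose_replication_method(StubTarget()))   # pull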
  • FIG. 6 depicts an exemplary data structure of a response to a destination storage system's 108 query command in accordance with embodiments of the present invention.
  • the response to a query command generally comprises, without limitation, snapshot capability information 604 , remote data replication capability information 608 , storage volume naming capability information 612 , snapshot volume naming capability information 616 , and other volume, snapshot and/or storage system characteristic information 620 .
  • the destination data storage system 108 can determine what types and methods of remote data replication can be employed to replicate data to/from the source data storage system 104 .
  • the snapshot capability information 604 may include the number of supported snapshots for the entire storage system and/or on a per volume basis, and how those snapshots are characterized. For example, some snapshots may be fully allocated whereas other snapshots may be sparse snapshots, and a given system may support both. Additionally, the snapshot capability information 604 may include snapshot naming formats for snapshots on the system.
  • Information included as a part of the remote data replication capabilities 608 may include, for example, whether remote replication is supported on the source data storage system 104 , and if remote replication is supported what types of remote replication are supported (e.g., transactional asynchronous, batch asynchronous, synchronous, CDP, or any other type of data replication protocol known in the art).
  • the remote data replication capability information 608 may further indicate whether the replying system is capable of acting as a remote replication target and/or as an initiator. This essentially indicates whether the replying system is enabled to operate in a target mode and/or an initiator mode. Additionally, the remote data replication capability information 608 may indicate what type of support the replying system has for a pull data replication model (if any). Also, the number of remote targets supported per storage volume and whether remote replication chaining is supported may be included in the remote data replication capability information 608.
  • Storage volume naming capabilities 612 may include the protocol used to support volume naming, the maximum number of characters allowed in a storage volume name, the minimum number of characters allowed in a storage volume name, and so on. By providing storage volume naming capabilities 612 , the destination data storage system 108 can name storage volumes as they would be named in the source data storage system 104 , or at least be aware of the naming schemes in the source data storage system 104 .
  • the replying system may identify how snapshots are named by providing snapshot volume naming capabilities 616 .
  • the snapshot volume naming capabilities 616 may include, for instance, the maximum number of characters in a snapshot volume name, the minimum number of characters in a snapshot volume name, and any off limits characters in a snapshot volume name.
  • additional information may be provided by the replying system in the other volume, snapshot, and/or storage system characteristics information field 620 .
  • Data stored in this particular field may include, without limitation, storage system configuration information, the number of storage volumes in use, and any state information related to the storage system, volume, and/or snapshots.
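The FIG. 6 response can be pictured as a record carrying fields 604-620. The sketch below models it as a Python dataclass; the member names paraphrase the description above, and the types and defaults are editorial assumptions rather than a wire format defined by the patent:

    from dataclasses import dataclass, field

    @dataclass
    class CapabilityResponse:
        # 604: snapshot capability information
        max_snapshots_system: int = 0
        max_snapshots_per_volume: int = 0
        snapshot_kinds: list = field(default_factory=list)    # "fully allocated", "sparse"
        snapshot_name_format: str = ""
        # 608: remote data replication capability information
        replication_supported: bool = False
        replication_types: list = field(default_factory=list)  # e.g. "batch asynchronous"
        target_mode: bool = False
        initiator_mode: bool = False
        pull_model_support: str = ""
        remote_targets_per_volume: int = 0
        replication_chaining: bool = False
        # 612: storage volume naming capabilities
        volume_name_min_chars: int = 0
        volume_name_max_chars: int = 0
        # 616: snapshot volume naming capabilities
        snapshot_name_min_chars: int = 0
        snapshot_name_max_chars: int = 0
        snapshot_name_forbidden_chars: str = ""
        # 620: other volume, snapshot, and/or storage system characteristics
        other: dict = field(default_factory=dict)

    # Example: a reply advertising batch asynchronous replication in target mode.
    reply = CapabilityResponse(replication_supported=True,
                               replication_types=["batch asynchronous"],
                               target_mode=True)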
  • any reliable communication protocol may be used as the bridging protocol to provide communication capabilities between storage systems 104, 108.
  • the bridge communication protocol used between the storage systems 104, 108 in some embodiments may include Fibre Channel and/or iSCSI.
  • the use of iSCSI as the bridge protocol may afford the use of SCSI Command Descriptor Blocks (CDBs) to transfer commands, data, and responses between storage systems 104, 108.
  • the communication protocol, in one embodiment, is vendor specific and uses SCSI Send/Receive Diagnostics, Read/Write Buffer, and/or vendor-specific SCSI operation codes.
  • the destination data storage system 108 issues commands to a source data storage system 104 requesting the source data storage system 104 perform certain tasks (e.g., take snapshots, map snapshots to host LUNs, etc.).
  • the source data storage system 104 responds to received commands, performs the requested tasks, and returns the appropriate response.
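As an illustration of carrying such a command in a SCSI CDB, the sketch below packs a hypothetical "take snapshot" request into a 10-byte vendor-specific CDB. SCSI reserves operation codes 0xC0-0xFF for vendor-specific use, but the opcode value, field layout, and meaning below are invented for this example and are not defined by the patent:

    import struct

    TAKE_SNAPSHOT_OPCODE = 0xC1   # hypothetical opcode in the SCSI vendor range

    def build_snapshot_cdb(volume_id, allocation_length):
        """Pack a hypothetical 10-byte vendor-specific CDB."""
        return struct.pack(
            ">BBIHBB",
            TAKE_SNAPSHOT_OPCODE,   # operation code
            0,                      # reserved/flags
            volume_id,              # 4-byte volume identifier (assumed field)
            allocation_length,      # bytes reserved for the response
            0,                      # reserved
            0,                      # control byte
        )

    cdb = build_snapshot_cdb(volume_id=7, allocation_length=512)
    assert len(cdb) == 10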
  • the bridging protocol affords for communications between different types of storage systems.
  • Command sets that may be included as a part of the bridging protocol include, but are not limited to, creating and naming of snapshots, deletion of snapshots, establishment of remote data replication characteristics/parameters, initiation and termination of remote data replication, naming of remote storage volumes, naming of remote snapshot volumes, mapping of remote storage volumes to a LUN, and mapping of remote snapshot volumes to a LUN.
  • the create and name a snapshot command is generally used by the destination data storage system 108 to request the source data storage system 104 to take a snapshot of the volume that is targeted during the replication process.
  • the snapshot delete command is used to delete snapshots on the source data storage system 104 in order to free up additional storage resources.
  • When the destination data storage system 108 issues a remote data replication characteristic/parameter command, it is essentially asking the source data storage system 104 to provide its replication characteristics, which will ultimately determine how the replication process will proceed between the storage systems 104, 108.
  • the initiation and termination commands are generally used by the destination data storage system 108 to specify that replication is starting or terminating at a particular source data storage system 104 .
  • the initiation and termination commands may be used to specify when the replication process should start/end.
  • Storage volume naming commands may be used by the destination data storage system 108 to name or change the name of a source data storage volume.
  • snapshot naming commands may be used by the destination data storage system 108 to name or change the name of a snapshot volume and may be included as a part of the storage volume naming command.
  • the command to map a storage volume to a LUN is generally used by the destination data storage system 108 to have the source data storage system 104 map a storage volume to a LUN.
  • the destination data storage system 108 may use a map snapshot volume to LUN command to have the source data storage system 104 map one or more snapshots to a LUN.
  • the map snapshot volume to LUN command may be included in the map storage volume to LUN command.
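Taken together, the command set can be modeled as a small enumeration plus a driver routine on the destination side. This is an editor's sketch: the enumeration members simply mirror the list above, and the send() transport is an assumed stand-in for whatever carrier (e.g., vendor-specific CDBs) is used:

    from enum import Enum, auto

    class BridgingCommand(Enum):
        CREATE_SNAPSHOT = auto()            # create and name a snapshot
        DELETE_SNAPSHOT = auto()            # free snapshot resources on the source
        QUERY_REPLICATION_PARAMETERS = auto()
        START_REPLICATION = auto()
        STOP_REPLICATION = auto()
        NAME_STORAGE_VOLUME = auto()
        NAME_SNAPSHOT_VOLUME = auto()
        MAP_VOLUME_TO_LUN = auto()
        MAP_SNAPSHOT_TO_LUN = auto()

    def replicate_once(destination, source, volume):
        """One snapshot-based cycle expressed in bridging-protocol commands."""
        source.send(BridgingCommand.START_REPLICATION, volume=volume)
        snap = source.send(BridgingCommand.CREATE_SNAPSHOT, volume=volume)
        source.send(BridgingCommand.MAP_SNAPSHOT_TO_LUN, snapshot=snap)
        destination.copy_changed_blocks(source, snap)   # pull the delta data
        source.send(BridgingCommand.STOP_REPLICATION, volume=volume)
        return snap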

Abstract

A data storage system, device, and method are provided for replicating data between different data storage systems or appliances. More specifically, the present invention affords communications between heterogeneous data storage systems that potentially employ different communication protocols. A bridging communication protocol is utilized by one or both storage systems in order to accommodate different communication protocols. Alternatively, a storage appliance connecting the data storage systems may employ the bridging communication protocol.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This Application claims the benefit of U.S. Provisional Application No. 60/771,384, filed Feb. 7, 2006, the entire disclosure of which is hereby incorporated herein by reference. Also, this application is related to U.S. patent application Ser. No. 11/561,680, filed Nov. 20, 2006, entitled Pull Data Replication Model.
  • FIELD
  • The present invention is directed to data replication. In particular, the present invention is directed to methods and apparatuses for replicating data between different data storage systems or appliances.
  • BACKGROUND
  • The need to store digital files, documents, pictures, images and other data continues to increase rapidly. In connection with the electronic storage of data, various data storage systems have been devised for the rapid and secure storage of large amounts of data. Such systems may include one or a plurality of storage devices that are used in a coordinated fashion. Systems in which data can be distributed across multiple storage devices such that data will not be irretrievably lost if one of the storage devices (or in some cases, more than one storage device) fails are also available. Systems that coordinate operation of a number of individual storage devices can also provide improved data access and/or storage times. Examples of systems that can provide such advantages can be found in the various RAID (redundant array of independent disks) levels that have been developed. Whether implemented using one or a plurality of storage devices, the storage provided by a data storage system can be treated as one or more storage volumes.
  • In order to facilitate the availability of desired data, it is often desirable to maintain different versions of a data storage volume. Indeed, data storage systems are available that can provide at least limited data archiving through backup facilities and/or snapshot facilities. These facilities can comprise automated or semi-automated batch replication facilities. By maintaining different versions, disaster recovery is facilitated. For example, if a virus causes a current storage volume version to be lost or otherwise unusable, the system can be rolled back to an earlier version that does not include the file that introduced the virus. In order to further facilitate the use and security of data, it is often desirable to create a copy of a data storage volume that is originally maintained by a first data storage system on a second data storage system. However, creating backup copies or snapshots of a first data storage system on a second data storage system, commonly referred to as remote replication, introduces complications. In particular, a data storage system that is the source of the data that is to be replicated must act as an initiator device in order to move the data to the recipient data storage system that is acting as the target. However, data storage systems typically are targets, not initiators. Furthermore, an initiator is a more complex and difficult function to implement, requiring more intelligence and processing power than a typical target device. Accordingly, in order for a source data storage system to perform data replication in cooperation with a remote data storage system, the source data storage system must typically be enhanced to include initiator functions, as well as remote replication functions.
  • Data replication between different storage systems is further complicated where the initiator and the target data storage systems are from different data storage system vendors. This is because different vendors typically use different protocols and instructions to control operation of their data storage systems. As a result, the automated or semi-automated batch replication of data between data storage systems from different vendors is at best difficult, and is often impossible.
  • SUMMARY
  • The present invention is directed to solving these and other problems and disadvantages of the prior art. In accordance with embodiments of the present invention, a remote batch data replication service is provided that consists of asynchronous replication of data between a local and a remote system through the use of snapshots. In one embodiment, the remote batch data replication service features asynchronous replication of block-level data from a volume at a local data storage system to a remote data storage system. More particularly, in a pull data replication model, the local or source data storage system is able to operate as a target device. In accordance with still other embodiments of the present invention, the source and remote data storage systems can exchange information regarding their respective capabilities, and can use a common protocol to enable data replication even where the data storage systems are from different vendors (i.e. they are heterogeneous systems).
  • In order to allow a source data storage system to operate as a target device while replicating data to a remote or destination data storage system, the destination data storage system operates to pull data from the source data storage system. Pulling data can include making use of data snapshot capabilities native to the source data storage system. In accordance with embodiments of the present invention, the destination data storage system commands the source data storage system to take a first snapshot of the storage volume that is to be replicated. The source data storage system is also instructed to make the first snapshot visible to the destination data storage system, and the destination data storage system makes a copy of the data contained in the first snapshot. That is, the destination data storage system copies the complete data storage volume. The destination data storage system copies all of the blocks from the first snapshot and then the destination takes a snapshot. The destination data storage system thereafter commands the source storage system to take another snapshot, and requests a list of the data block numbers that have changed. A copy of the changed data blocks is then requested from the source data storage system by the destination data storage system. By copying only the changed data blocks, a complete backup copy of the replicated storage volume can be maintained on the destination data storage system, without requiring copying the complete storage volume at each update of the backup copy. Furthermore, embodiments of the present invention do not require that the source data storage system have the additional intelligence and features required to operate as an initiator. Instead, the source data storage system can operate as a target by responding to commands initiated by the destination data storage system, or by some other initiator.
  • In accordance with further embodiments of the present invention, data replication and other features can be performed between heterogeneous data storage systems using a heterogeneous communication protocol. The heterogeneous communication protocol provides a bridge between data storage systems from different vendors, and allows heterogeneous data storage systems to advertise and discover their respective data replication capabilities. In order to implement the heterogeneous communication or bridging protocol, the vendors of different data storage systems each use a common replication protocol. In accordance with embodiments of the present invention, in response to a query by a remote data storage system using the heterogeneous communication protocol, a source data storage system can be instructed to advertise or otherwise provide information regarding its capabilities to the remote data storage system. It should be noted that the remote data storage system could also be commanded to provide information regarding its capabilities. Furthermore, data replication between heterogeneous data storage systems can be performed with the source data storage system operating as a target. Although embodiments implementing a heterogeneous communication protocol allow replication of data from a source data storage system operating as a target device to a remote data storage system operating as an initiator, it should be appreciated that the heterogeneous communication protocol can support other types of communications between heterogeneous data storage systems. For example, the heterogeneous replication protocol provides for data replication in either direction between two storage systems from the same or different vendors.
  • In one embodiment, a simple heterogeneous replication protocol is a simple remote data replication protocol that provides the capability for systems that do not inherently implement remote data replication to perform remote data replication with another system as both a source and target of replication data. Simple heterogeneous replication protocol may be implemented as a cross-vendor/cross platform protocol. The system that does not inherently implement remote data replication implements snapshot capability and provides the ability to map a newly created snapshot to a Logical Unit Number (LUN) or other type of mechanism defined in the protocol capable of getting the data. The system also implements simple heterogeneous replication protocol such that a partner system can query data management services capabilities (e.g., SHRP, snapshot, split mirror, replication, etc.) and issue commands to control the snapshot services in order to facilitate asynchronous batch (snapshot based) replication.
  • Additional features and advantages of embodiments of the present invention will become more readily apparent from the following description, particularly when taken together with the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a functional block diagram depicting components of an electronic data system incorporating a source data storage system and a remote data storage system in accordance with embodiments of the present invention;
  • FIG. 2A is a block diagram depicting components of a data storage system in accordance with embodiments of the present invention;
  • FIG. 2B is a block diagram depicting components of a data storage system in accordance with other embodiments of the present invention;
  • FIG. 3A is a block diagram depicting components of a storage appliance in accordance with embodiments of the present invention;
  • FIG. 3B is a block diagram depicting components of a storage controller in accordance with embodiments of the present invention;
  • FIG. 4 is a flow chart depicting aspects of a data replication process in accordance with embodiments of the present invention;
  • FIG. 5 is a flow chart depicting aspects of a process for exchanging information between heterogeneous data storage systems in accordance with embodiments of the present invention; and
  • FIG. 6 depicts an exemplary data structure of a response to a destination storage system's query command in accordance with embodiments of the present invention.
  • DETAILED DESCRIPTION
  • In accordance with embodiments of the present invention, a snapshot is a block level point-in-time representation of data on a storage volume. The data is essentially frozen in time at the instant that the snapshot is taken. Although data on the storage volume may change as a result of write operations, the data represented by the snapshot will remain constant and frozen in time at the instant that the snapshot was taken. In order to preserve snapshot data, a repository is used to store data that is not otherwise represented in the storage volume, together with snapshot metadata. All data and metadata associated with the snapshot may be stored in the repository, although such storage on the repository is not required. In accordance with embodiments of the present invention, data stored within the snapshot is stored in “chunks.” A chunk is equivalent to a number of logical data blocks (LBAs). As a further optimization, data may also be stored at a subchunk level. A subchunk is a fixed size subset of a chunk. Accordingly, data can be moved between data storage systems 104, 108 in units of chunks, subchunks, or any multiple thereof. The units that are used for data replication operations can be selected to optimize the performance of the network link between the data storage systems.
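  • For concreteness, the following sketch illustrates chunk and subchunk addressing. The particular chunk and subchunk sizes used here are illustrative assumptions, not values required by the invention.

```python
# Illustrative chunk/subchunk addressing. The sizes are assumptions for the
# example; a real system would choose units that optimize the performance
# of the network link between the data storage systems.

CHUNK_BLOCKS = 64      # logical blocks (LBAs) per chunk (assumed)
SUBCHUNK_BLOCKS = 8    # logical blocks per subchunk (assumed)


def chunk_of(lba: int) -> int:
    """Chunk index containing a given logical block address."""
    return lba // CHUNK_BLOCKS


def subchunk_of(lba: int) -> tuple[int, int]:
    """(chunk index, subchunk index within that chunk) for a given LBA."""
    return lba // CHUNK_BLOCKS, (lba % CHUNK_BLOCKS) // SUBCHUNK_BLOCKS


# Example: LBA 1000 falls in chunk 15, subchunk 5 of that chunk.
print(chunk_of(1000), subchunk_of(1000))
```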
  • FIG. 1 is a block diagram depicting an electronic data system 100 in accordance with embodiments of the present invention incorporating a first data storage system 104 and a second data storage system 108. The electronic data system 100 may also include one or more host processors, computers or computer systems 112. In addition, the electronic data system 100 may include or may be interconnected to an administrative computer 116. As will be appreciated by one of skill in the art after consideration of the present disclosure, embodiments of the present invention have applications in association with single or multiple hosts 112 in storage area network (SAN) or direct connect environments.
  • The data storage systems 104, 108 are typically interconnected to one another through an in-band network 120. The in-band network 120 may also interconnect the data storage systems 104, 108 to a host computer 112 and/or an administrative computer 116. The electronic data system 100 may also include an out-of-band network 124 interconnecting some or all of the electronic data system 100 nodes 104, 108, 112 and/or 116. For example, in a typical remote replication environment, one or more host computers 112 are connected to each data storage system 104, 108. For instance, a first data storage system 104 is connected to a second data storage system 108 across some distance by a Fibre Channel or a TCP/IP network 120, and each of these data storage systems 104, 108 is connected to a host computer 112 through an in-band 120 and/or an out-of-band 124 network.
  • The in-band or storage area network 120 generally functions to transport data between data storage systems 104 and/or 108 and host devices 112, and can be any data pipe capable of supporting multiple initiators and targets. Accordingly, examples of in-band networks 120 include Fibre Channel (FC), iSCSI, parallel SCSI, Ethernet, ESCON, or FICON connections or networks, which may typically be characterized by an ability to transfer relatively large amounts of data at medium to high bandwidths. The out-of-band network 124 generally functions to support the transfer of communications and/or commands between various network nodes, such as data storage resource systems 104, 108, host devices 112, and/or administrative computers 116, although such data may also be transferred over the in-band communication network 120. Examples of an out-of-band communication network 124 include a local area network (LAN) or other transmission control protocol/Internet protocol (TCP/IP) network. In general, the out-of-band communication network 124 is characterized by an ability to interconnect disparate nodes or other devices through uniform user interfaces, such as a web browser. Furthermore, the out-of-band communication network may provide the potential for globally or other widely distributed management or globally distributed remote replication between data storage systems 104, 108 via TCP/IP.
  • Not every electronic data system node or computer 104, 108, 112 and 116 need be interconnected to every other node or device through both the in-band network 120 and the out-of-band network 124. For example, no host device 112 needs to be interconnected to any other host device 112, data storage system 104, 108, or administrative computer 116 through the out-of-band communication network 124, although interconnections between a host device 112 and other devices 104, 108, 116 through the out-of-band communication network 124 are not prohibited. As another example, an administrative computer 116 may be interconnected to at least one storage resource device 104 or 108 through the out-of-band communication network 124. An administrative computer 116 may also be interconnected to the in-band network 120 directly, although such an interconnection is not required. For example, instead of a direct connection, an administrative computer 116 may communicate using the in-band network 120 with a controller of a data storage system 104, 108.
  • In general, a host computer 112 exchanges data with one or more of the data storage systems 104, 108 in connection with the execution of application programming, whether that application programming concerns data management or otherwise. Furthermore, an electronic data system 100 may include multiple host computers 112. An administrative computer 116 may provide a user interface for controlling aspects of the operation of the storage systems 104, 108. The administrative computer 116 may be interconnected to the storage system 104 directly, and/or through a bus or network 120 and/or 124. In accordance with still other embodiments of the present invention, an administrative computer 116 may be integrated with a host computer 112. In addition, multiple administrative computers 116 may be provided as part of the electronic data system 100. Furthermore, although only two data storage systems 104, 108 are shown in FIG. 1, an electronic data system 100 may include more than two data storage systems.
  • FIG. 2A illustrates components that may be included in a data storage system 104, 108 in accordance with embodiments of the present invention. In general, the data storage system 104, 108 includes a number of storage devices 204. Examples of storage devices 204 include hard disk drives, such as serial advanced technology attachment (SATA), small computer system interface (SCSI), serial attached SCSI (SAS), Fibre Channel (FC), or parallel advanced technology attached (PATA) hard disk drives. Other examples of storage devices 204 include magnetic tape storage devices, optical storage devices, and solid state disk devices. Furthermore, although a number of storage devices 204 are illustrated, it should be appreciated that embodiments of the present invention are not limited to any particular number of storage devices, and that a lesser or greater number of storage devices 204 may be provided as part of a data storage system 104. As can be appreciated by one of skill in the art, one or more arrays and/or array partitions, hereinafter referred to as logical unit numbers (LUNs) comprising a storage volume, may be established on the data storage devices 204. As used herein, an array is understood to refer to a RAID set and its array partitions, and some subset of an array partition can be represented as a single unit to a host. As can be further appreciated by one of skill in the art, a LUN may be implemented in accordance with any one of the various array levels or other arrangements for storing data on one or more storage devices 204. As can also be appreciated by one of skill in the art, the storage devices 204 may contain data comprising a master storage volume, which may correspond to a LUN, in addition to one or more snapshots of the master storage volume taken at different times. Such snapshots may comprise metadata and data stored in a backing store on the storage devices 204. In one embodiment, the snapshots may be mapped to LUNs.
  • A data storage system 104, 108 in accordance with embodiments of the present invention may be provided with a first controller slot 208 a. In addition, other embodiments may include additional controller slots, such as a second controller slot 208 b. As can be appreciated by one of skill in the art, a controller slot 208 may comprise a connection or set of connections to enable a controller 212 to be operably interconnected to other components of the data storage system 104, 108. Furthermore, a data storage system 104, 108 in accordance with embodiments of the present invention includes at least one controller 212 a. For example, while the data storage system 104, 108 is operated in a single-controller, non-failover mode, the data storage system 104, 108 may include exactly one controller 212. A data storage system 104, 108 in accordance with other embodiments of the present invention may be operated in a dual redundant active-active controller mode by providing a second controller 212 b. When a second controller 212 b is used in addition to a first controller 212 a, the second controller 212 b is received by the second controller slot 208 b. As can be appreciated by one of skill in the art, the provision of two controllers 212 a and 212 b permits data to be mirrored between the controllers 212 a-212 b, providing redundant active-active controller operation.
  • One or more busses or channels 216 are generally provided to interconnect a controller or controllers 212 through the associated controller slot or slots 208 to the storage devices 204. Furthermore, while illustrated as a single shared bus or channel 216, it can be appreciated that a number of dedicated and/or shared buses or channels may be provided. Additional components that may be included in a data storage system 104 include one or more power supplies 128 and one or more cooling units 132. In addition, a bus or network interface 136 may be provided to interconnect the data storage system 104, 108 to a bus or network 120, 124, and/or to a host computer 112 or administrative computer 116.
  • Although illustrated as a complete RAID system in FIG. 2A, it should be appreciated that the data storage system 104 can comprise one or more storage volumes implemented in various other ways. For example, the data storage system 104 may comprise a hard disk drive or other storage device 204 connected to or associated with a server or a general purpose computer. As further examples, the storage system 104 may comprise a Just a Bunch of Disks (JBOD) system or a Switched Bunch of Disks (SBOD) system.
  • With reference to FIG. 2B, components that may be included in a data storage system 104, 108 in accordance with other embodiments of the present invention are illustrated. In general, the data storage system 104, 108 according to such embodiments includes a storage appliance 220 interconnecting one or more storage devices 204 to a bus or network. Furthermore, the storage appliance 220 may be inserted between a host 112 or other device and one or more storage devices 204. In accordance with still other embodiments of the present invention, a storage device 204 may itself comprise a collection of hard disk drives or other storage devices, for example provided as part of a RAID or SBOD system.
  • With reference to FIG. 3A, a storage appliance 220 is illustrated in connection with an embodiment of the present invention in which at least some data replication establishment and management functions are provided by software running on the storage appliance 220, which interconnects one or more storage devices 204 to a host device 112 or to other data storage systems 104, 108. The components of the storage appliance 220 may include a processor 304 a capable of executing program instructions. Accordingly, the processor 304 a may include any general purpose programmable processor or controller for executing application programming. Alternatively, the processor 304 a may comprise a specially configured application specific integrated circuit (ASIC). The processor 304 a generally functions to run programming code including a data replication application 328 a.
  • The storage appliance 220 may additionally include memory 308 a for use in connection with data replication operations. For example, the memory 308 a may store a copy of the data replication application 328 a and its configuration instructions. The memory 308 a may comprise solid state memory that is resident, removable or remote in nature, such as FLASH, DRAM and/or SDRAM.
  • The storage appliance 220 may also include data storage 314 a for the storage of application programming and/or data. For example, operating system software 318 may be stored in the data storage 314 a. In addition, the data storage 314 a may be used to store a data replication application 328 a comprising instructions for pulling data from a source data storage system 104 and providing the data to a destination data storage system 108 as described herein. The data replication application 328 a may itself include a number of modules or components, such as a main input/output (IO) module and a restore thread or module.
  • The storage appliance 220 may also include one or more network interfaces 332 a. For example, a first network interface 332 a 1 may interconnect the storage appliance 220 to a host device 112 through a first network, and a second network interface 332 a 2 may interconnect the storage appliance 220 to the storage device or devices 204 that, together with the storage appliance 220, comprise a data storage system 104, 108. Furthermore, the first and second networks may be of the same type or of different types, and the storage appliance 220 may be placed “inline” between a host device 112 and a storage device 204. Examples of a network interface 332 a include a Fibre Channel (FC) interface, an Ethernet interface, or any other type of communication interface. As can be appreciated by one of skill in the art, a network interface 332 a may be provided in the form of a network interface card or other adapter.
  • FIG. 3B illustrates aspects of a storage controller 212 in accordance with embodiments of the present invention. In general, a storage controller 212 includes a processor subsystem 304 capable of executing instructions for performing, implementing and/or controlling various controller 212 functions. Such instructions may include instructions for implementing aspects of a snapshot restore method and apparatus. Furthermore, such instructions may be stored as software and/or firmware. As can be appreciated by one of skill in the art, operations concerning the generation of parity data or other operations may be performed using one or more hardwired and/or programmable logic circuits provided as part of the processor subsystem 304. Accordingly, the processor subsystem 304 may be implemented as a number of discrete components, such as one or more programmable processors in combination with one or more logic circuits. Processor subsystem 304 may also include or be implemented as one or more integrated devices or processors. For example, a processor subsystem may comprise a complex programmable logic device (CPLD).
  • A controller 212 also generally includes memory 306. The memory 306 is not specifically limited to memory of any particular type. For example, the memory 306 may comprise a solid state memory device, or a number of solid state memory devices. In addition, the memory 306 may include separate volatile memory 308 and non-volatile memory 310 portions. As can be appreciated by one of skill in the art, the memory 306 typically includes a write cache 312 and a read cache 316 that are provided as part of the volatile memory 308 portion of the memory 306, although other arrangements are possible. By providing caches 312, 316, a storage controller 212 can improve the speed of input/output (IO) operations between a host 112 and the data storage devices 204 comprising an array or array partition. Examples of volatile memory 308 include DRAM and SDRAM.
  • The non-volatile memory 310 may be used to store data that was written to the write cache 312 of memory 306 in the event of a power outage affecting the data storage system 104. The non-volatile memory portion 310 of the storage controller memory 306 may include any type of data memory device that is capable of retaining data without requiring power from an external source. Examples of non-volatile memory 310 include, but are not limited to, compact flash or other standardized non-volatile memory devices.
  • The memory 306 also includes a region 324 that provides storage for controller code 326. The controller code 326 may comprise a number of components, including a data replication process or application 328 comprising instructions for pulling data from a source data storage system as described herein. The data replication application 328 may itself include a number of modules or components, such as a main input/output (IO) module and a restore thread or module. As shown in FIG. 3B, the controller code region 324 may be established in a volatile memory 308 portion of the storage controller memory 306. Alternatively or in addition, controller code 326 may be stored in non-volatile memory 310.
  • A storage controller 212 may additionally include other components. For example, a bus and/or network interface 332 may be provided for operably interconnecting the storage controller 212 to the remainder of the data storage system 104, for example through a controller slot 208 and a bus or channel 216. Furthermore, the interface 332 may be configured to facilitate removal or replacement of the storage controller 212 in a controller slot 208 as a field replaceable unit (FRU). In addition, integral signal and power channels may be provided for interconnecting the various components of the storage controller 212 to one another.
  • With reference to FIG. 4, a process for replicating data stored on a source data storage system 104 to a destination data storage system 108, in which the source data storage system 104 has the attributes of a target device, is illustrated. Initially, at step 404, the destination data storage system 108 or a storage appliance (hereinafter referred to as the initiator) commands the source data storage system 104 to take a snapshot of the master storage volume maintained by the source data storage system 104 (or of some specified storage volume maintained by the source data storage system 104). As used herein, a snapshot is a virtual volume that represents the data that existed on the master storage volume at the point in time that the snapshot was taken. The master storage volume is the current set of data maintained on the source data storage system 104. The master storage volume may correspond to a standard RAID volume or LUN. The initiator then commands the source data storage system 104 to map the snapshot to a LUN and to make the LUN visible to the initiator (step 408). All of the blocks of data in the snapshot are then read by the destination data storage system 108 (or by another device acting as the initiator), and the data is copied to the destination data storage system (e.g., the second data storage system 108) (step 412). As can be appreciated by one of skill in the art after consideration of the present disclosure, copying the data to the destination storage system 108 generally comprises storing the data in data storage devices 204 that are local to or associated with the destination data storage system 108.
  • In accordance with at least one embodiment of the present invention, rather than transferring all data blocks from the source data storage system 104 to the destination data storage system 108, it may be possible to perform a hash operation on a range of blocks on the source data storage system 104 and the destination data storage system 108 in order to create a hash list. The hash list can then be transferred from the source data storage system 104 to the destination data storage system 108. If the hash key for a range of disk blocks on the source data storage system 104 is identical to the hash key for the same range of blocks on the destination data storage system 108, then it is not necessary to transfer those data blocks. The hash algorithm facilitates effective use of the bandwidth of the asynchronous link between the local and remote systems.
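  • A minimal sketch of this hash-list optimization follows. The choice of SHA-256 and the particular range and block sizes are assumptions made for the example; the description does not fix a hash algorithm or a range size.

```python
# Sketch of the hash-list optimization: hash fixed ranges of blocks on each
# side and transfer only the ranges whose hash keys differ. SHA-256 and the
# sizes below are illustrative choices, not requirements.

import hashlib
from typing import List

RANGE_BLOCKS = 128          # blocks hashed together per list entry (assumed)
BLOCK_SIZE = 512            # bytes per logical block (assumed)


def hash_list(volume: bytes) -> List[bytes]:
    """Compute one hash key per fixed-size range of blocks."""
    step = RANGE_BLOCKS * BLOCK_SIZE
    return [hashlib.sha256(volume[i:i + step]).digest()
            for i in range(0, len(volume), step)]


def ranges_to_transfer(source_hashes: List[bytes],
                       dest_hashes: List[bytes]) -> List[int]:
    """Indices of block ranges whose hash keys differ between the systems."""
    return [i for i, (s, d) in enumerate(zip(source_hashes, dest_hashes))
            if s != d]
```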
  • At step 416, the destination data storage system 108 takes a snapshot of the data copied from the source data storage system 104. After this step is completed, both data storage systems have identical copies of the data and a snapshot of that data from the same point in time. It should be noted that the same algorithm can be used to copy any volume, even one that has not been snapped, so long as the source volume is not being written to during the copy. This results in a destination-initiated volume copy.
  • At step 420, the initiator commands the source data storage system 104 to take another snapshot of the master storage volume. The initiator then commands the source data storage system 104 to map this snapshot to a LUN and to make the LUN visible to the destination data storage system 108 (step 424).
  • The initiator next commands the source data storage system 104 to provide a list of the changes (i.e. the delta data) between the current snapshot and the previous snapshot (step 428). Using the provided list, the initiator then reads the blocks in the list from the source system, and copies the data to the local data storage (step 432). The destination data storage system 108 then takes a snapshot of the data now stored in the local data storage (step 436). The source 104 and destination 108 data storage systems thus have identical snapshots or copies of the master data storage volume on the source data storage system 104, at the time the first snapshot was taken, and at the time the second snapshot was taken. If desired, earlier snapshots can be deleted from one or both of the data storage systems 104, 108, for example if it is considered unimportant to maintain a number of restore points for the master storage volume.
  • At step 440, a determination can be made as to whether the copy of the master storage volume on the destination storage system 108 should be updated. If it is determined that an update should be performed, the process may return to step 420. If no further updates are required, the process may end. In accordance with embodiments of the present invention, copies of the master storage volume are made periodically. For example, an updated copy of the master storage volume can be taken every 10 minutes. The complete cycle is summarized in the sketch below.
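  • The sketch below summarizes the cycle of FIG. 4 in code form. The source and destination interfaces are hypothetical stand-ins; only the ordering of the operations follows the process described above.

```python
# Sketch of the destination-driven ("pull") replication cycle of FIG. 4.
# The source/destination object interfaces are hypothetical; the ordering
# of operations follows steps 404-440 of the described process.

import time


def replicate(source, destination, volume: str, interval_s: float = 600.0):
    # Steps 404-416: initial full copy of the master storage volume.
    snap = source.take_snapshot(volume)                   # step 404
    lun = source.map_snapshot_to_lun(snap)                # step 408
    destination.copy_blocks(lun, source.all_blocks(snap))  # step 412
    destination.take_snapshot(volume)                     # step 416
    prev = snap

    # Steps 420-440: periodic incremental (delta) updates.
    while destination.update_wanted(volume):              # step 440
        time.sleep(interval_s)                            # e.g., every 10 minutes
        snap = source.take_snapshot(volume)               # step 420
        lun = source.map_snapshot_to_lun(snap)            # step 424
        changed = source.changed_blocks(prev, snap)       # step 428 (delta list)
        destination.copy_blocks(lun, changed)             # step 432
        destination.take_snapshot(volume)                 # step 436
        prev = snap
```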
  • As can be appreciated by one of skill in the art after consideration of the present disclosure, data can be replicated from a source data storage system 104 even if the source data storage system 104 does not provide native support for data replication to another data storage system 108. In addition, the source data storage system 104 does not need to act as an initiator in connection with replicating stored data. Instead, the source data storage system 104 can operate as a target device. In accordance with embodiments of the present invention where the source data storage system 104 is not an initiator, in order to provide data replication as described, the source data storage system 104 should be able to respond to commands to take snapshots, to map a snapshot to a LUN, and to provide lists of LBAs that contain data (or changed data for subsequent copies). Furthermore, it can be appreciated that, following the transfer of the initial snapshot data comprising a complete copy of the data in the source's master storage volume, only the changed data needs to be transferred in order to establish an updated copy of the master storage volume on the destination storage system.
  • The commands necessary to perform data replication may be communicated using a simple communication protocol. In accordance with still other embodiments of the present invention, a simple bridging communication protocol may be provided that enables these and other functions to be performed between data storage systems 104, 108 from different vendors (i.e., between heterogeneous data storage systems). Accordingly, the simple bridging protocol may comprise a set of commands recognized by data storage systems 104, 108 sourced from different vendors. The simple bridging protocol may be provided to different vendors in order to facilitate the communications necessary to replicate data or to perform other functions or services, for example between heterogeneous data storage systems. As a further example, a destination data storage system 108 may communicate with a source data storage system 104 that natively supports a different communication protocol, in order to perform data replication as described herein. Therefore, a source data storage system 104 from a different vendor than the destination data storage system 108 need not be provided with any extra intelligence in order to provide data replication in cooperation with the destination data storage system 108, other than an ability to take snapshots and respond to commands expressed in the simple bridging protocol.
  • As can be appreciated by one of skill in the art after consideration of the present disclosure, the simple bridging protocol is not limited to use in connection with pulling data from a source data storage system 104 operating as a target. For example, the simple bridging protocol can be used to allow a source data storage system 104 to act as an initiator and the destination data storage system 108 to act as a target in connection with data replication operations. For instance, using the simple bridging protocol, the source data storage system 104 (operating as an initiator) replicates data to a volume on the destination storage system 108 and commands the destination storage system 108 (operating as a target) to take a snapshot of that volume.
  • In accordance with embodiments of the present invention, and with reference to FIG. 5, a process for providing and using a simple bridging protocol comprises loading the protocol into a source data storage system 104 and into a destination storage system 108 (step 504). The protocol may be loaded onto the storage systems 104, 108 as a software plug-in or may already be resident within firmware of the data storage systems 104, 108. A query for replication capabilities is then issued by the data storage system 104 or 108 operating as an initiator to the data storage system 104 or 108 operating as a target (step 508). Alternatively, the query may be issued by another initiator in the electronic data system 100. This query may be issued using the simple bridging protocol.
  • In response to the query, the data storage system 104 or 108 operating as a target device provides capability information (step 512). The capability information may comprise data storage system 104 or 108 data management services capabilities, such as snapshot capabilities, the ability to act as an initiator, remote data replication capabilities, storage volume naming capabilities, snapshot volume naming capabilities, storage volume mapping capabilities, snapshot volume mapping capabilities, etc. The data storage system 104 or 108 acting as the initiator, or some other initiator in the electronic data system 100, then uses the capability information to determine how to effect data replication between the source data storage system 104 and the destination data storage system 108 (step 516).
  • By providing or advertising capability information in response to a query, a system using the simple bridging protocol is not limited to any fixed set of features. Accordingly, as features are added to versions of a data storage system 104 or 108, those added features may be reported to the data storage systems 104 or 108 or other components of an electronic data storage system 100 and can be made available to those other components.
  • FIG. 6 depicts an exemplary data structure of a response to a destination storage system's 108 query command in accordance with embodiments of the present invention. The response to a query command generally comprises, without limitation, snapshot capability information 604, remote data replication capability information 608, storage volume naming capability information 612, snapshot volume naming capability information 616, and other volume, snapshot and/or storage system characteristic information 620. When the destination data storage system 108 receives the information describing the source data storage system's 104 capabilities, the destination data storage system 108 can determine what types and methods of remote data replication can be employed to replicate data to/from the source data storage system 104.
  • The snapshot capability information 604 may include the number of supported snapshots for the entire storage system and/or on a per volume basis, and how those snapshots are characterized. For example, some snapshots may be fully allocated whereas other snapshots may be sparse snapshots, and a given system may support both types. Additionally, the snapshot capability information 604 may include snapshot naming formats for snapshots on the system.
  • Information included as a part of the remote data replication capabilities 608 may include, for example, whether remote replication is supported on the source data storage system 104, and if remote replication is supported, what types of remote replication are supported (e.g., transactional asynchronous, batch asynchronous, synchronous, CDP, or any other type of data replication protocol known in the art). The remote data replication capability information 608 may further indicate whether the replying system is capable of acting as a remote replication target and/or as an initiator. This essentially indicates whether the replying system is enabled to operate in a target mode and/or an initiator mode. Additionally, the remote data replication capability information 608 may indicate what type of support the replying system has for a pull data replication model (if any). Also, the number of remote targets supported per storage volume and whether remote replication chaining is supported may be included in the remote data replication capability information 608.
  • Storage volume naming capabilities 612 may include the protocol used to support volume naming, the maximum number of characters allowed in a storage volume name, the minimum number of characters allowed in a storage volume name, and so on. By providing storage volume naming capabilities 612, the destination data storage system 108 can name storage volumes as they would be named in the source data storage system 104, or at least be aware of the naming schemes in the source data storage system 104.
  • Similar to storage volume naming capabilities 612, the replying system may identify how snapshots are named by providing snapshot volume naming capabilities 616. The snapshot volume naming capabilities 616 may include, for instance, the maximum number of characters in a snapshot volume name, the minimum number of characters in a snapshot volume name, and any off-limits characters in a snapshot volume name.
  • As can be appreciated by one of skill in the art, additional information may be provided by the replying system in the other volume, snapshot, and/or storage system characteristics information field 620. Data stored in this particular field may include, without limitation, storage system configuration information, the number of storage volumes in use, and any state information related to the storage system, volume, and/or snapshots.
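  • One way to picture the response of FIG. 6 is as a simple record whose fields mirror the information categories 604-620. The concrete field names and types below are assumptions, since the figure only names the categories of information carried in the response.

```python
# Sketch of the query-response fields of FIG. 6. Field names mirror the
# information categories 604-620; their concrete types are assumptions.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CapabilityResponse:
    # 604: snapshot capability information
    max_snapshots_system: int
    max_snapshots_per_volume: int
    snapshot_types: List[str]            # e.g. ["fully-allocated", "sparse"]
    # 608: remote data replication capability information
    replication_types: List[str]         # e.g. ["batch-async", "sync"]
    can_act_as_target: bool
    can_act_as_initiator: bool
    supports_pull_model: bool
    remote_targets_per_volume: int
    supports_replication_chaining: bool
    # 612/616: storage volume and snapshot volume naming capabilities
    volume_name_max_chars: int
    volume_name_min_chars: int
    snapshot_name_max_chars: int
    snapshot_name_min_chars: int
    snapshot_name_forbidden_chars: str
    # 620: other volume, snapshot, and/or storage system characteristics
    other: Dict[str, str] = field(default_factory=dict)
```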
  • As can be appreciated, any reliable communication protocol may be used as the bridging protocol to provide communication capabilities between storage systems 104, 108. The bridge communication protocol used between the storage systems 104, 108 in some embodiments may include Fibre Channel and/or iSCSI. The use of iSCSI as the bridge protocol may afford the use of SCSI Command Descriptor Blocks (CDBs) to transfer commands, data, and responses between storage systems 104, 108. The communication protocol, in one embodiment, is vendor specific and uses SCSI Send/Receive Diagnostics, Read/Write Buffer, and/or vendor specific SCSI operation codes. Of course, it is important to note that even though the protocol can be based upon SCSI, reliance on SCSI as a data transport mechanism is not a requirement of the present invention.
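  • As a rough illustration of the SCSI-based variant, a bridging-protocol command might be framed as a vendor-specific Command Descriptor Block. SCSI reserves operation codes C0h through FFh for vendor-specific use; the particular opcode, sub-command codes, and field layout below are purely hypothetical.

```python
# Hypothetical framing of a bridging-protocol command as a vendor-specific
# SCSI CDB. SCSI reserves opcodes 0xC0-0xFF for vendor use; the opcode,
# sub-command codes, and layout here are illustrative assumptions only.

import struct

VENDOR_OPCODE = 0xC1        # assumed vendor-specific operation code
CMD_TAKE_SNAPSHOT = 0x01    # assumed sub-command: take a snapshot
CMD_MAP_TO_LUN = 0x02       # assumed sub-command: map snapshot to a LUN


def build_cdb(sub_command: int, volume_id: int, transfer_len: int) -> bytes:
    """Pack a 10-byte CDB: opcode, sub-command, volume id, length, two pad bytes."""
    return struct.pack(">BBIHBB", VENDOR_OPCODE, sub_command,
                       volume_id, transfer_len, 0, 0)


cdb = build_cdb(CMD_TAKE_SNAPSHOT, volume_id=0x2A, transfer_len=0)
assert len(cdb) == 10
```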
  • To perform remote replication in accordance with embodiments of the present invention, the destination data storage system 108 issues commands to a source data storage system 104 requesting that the source data storage system 104 perform certain tasks (e.g., take snapshots, map snapshots to host LUNs, etc.). The source data storage system 104 responds to received commands, performs the requested tasks, and returns the appropriate response. The bridging protocol provides for communications between different types of storage systems. Command sets that may be included as part of the bridging protocol include, but are not limited to, creating and naming of snapshots, deletion of snapshots, establishment of remote data replication characteristics/parameters, initiation and termination of remote data replication, naming of remote storage volumes, naming of remote snapshot volumes, mapping of remote storage volumes to a LUN, and mapping of remote snapshot volumes to a LUN.
  • The create and name a snapshot command is generally used by the destination data storage system 108 to request the source data storage system 104 to take a snapshot of the volume that is targeted during the replication process. The snapshot delete command is used to delete snapshots on the source data storage system 104 in order to free up additional storage resources.
  • When the destination data storage system 108 issues a remote data replication characteristic/parameter command, it is essentially asking the source data storage system 104 to provide its replication characteristics, which will ultimately determine how the replication process will proceed between the storage systems 104, 108.
  • The initiation and termination commands are generally used by the destination data storage system 108 to specify that replication is starting or terminating at a particular source data storage system 104. Alternatively, the initiation and termination commands may be used to specify when the replication process should start/end.
  • Storage volume naming commands may be used by the destination data storage system 108 to name or change the name of a source data storage volume. Likewise, snapshot naming commands may be used by the destination data storage system 108 to name or change the name of a snapshot volume and may be included as a part of the storage volume naming command.
  • The command to map a storage volume to a LUN is generally used by the destination data storage system 108 to have the source data storage system 104 map a storage volume to a LUN. Similarly, the destination data storage system 108 may use a map snapshot volume to LUN command to have the source data storage system 104 map one or more snapshots to a LUN. Also, the map snapshot volume to LUN command may be included in the map storage volume to LUN command.
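  • Collecting the command set described in the preceding paragraphs, an implementation might enumerate the bridging-protocol commands as follows. The names and encoding are assumptions; only the set of operations comes from the description above.

```python
# Sketch of the bridging-protocol command set. The enumeration names and
# values are assumptions; the set of operations follows the description.

from enum import Enum, auto


class BridgeCommand(Enum):
    CREATE_AND_NAME_SNAPSHOT = auto()
    DELETE_SNAPSHOT = auto()
    GET_REPLICATION_CHARACTERISTICS = auto()
    START_REPLICATION = auto()
    STOP_REPLICATION = auto()
    NAME_STORAGE_VOLUME = auto()
    NAME_SNAPSHOT_VOLUME = auto()
    MAP_STORAGE_VOLUME_TO_LUN = auto()
    MAP_SNAPSHOT_VOLUME_TO_LUN = auto()
```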
  • The foregoing discussion of the invention has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit the invention to the form disclosed herein. Consequently, variations and modifications commensurate with the above teachings, within the skill and knowledge of the relevant art, are within the scope of the present invention. The embodiments described hereinabove are further intended to explain the best modes presently known of practicing the invention and to enable others skilled in the art to utilize the invention in such, or in other embodiments, and with the various modifications required by their particular application or use of the invention. It is intended that the appended claims be construed to include alternative embodiments to the extent permitted by the prior art.

Claims (20)

1. A method of reproducing data, comprising:
sending a first command from an initiator storage system to a target storage system requesting the target storage system to take a snapshot of data stored on the target storage system;
sending a second command from the initiator storage system to the target storage system requesting the target storage system to make the snapshot viewable to the initiator storage system; and
copying at least a portion of data from the viewable snapshot.
2. The method of claim 1, further comprising:
sending a third command from the initiator storage system to the target storage system requesting the target storage system to take a second snapshot of data stored on the target storage system;
sending a fourth command from the initiator storage system to the target storage system requesting the target storage system to make the second snapshot viewable to the initiator storage system; and
copying at least a portion of data from the second viewable snapshot.
3. The method of claim 2, further comprising:
identifying at least one difference between the viewable snapshot and the second viewable snapshot;
creating a list that outlines the identified at least one difference; and
updating the copy of the snapshot from the viewable snapshot to the second viewable snapshot by referencing the differences outlined in the list.
4. The method of claim 2, wherein the third and fourth commands are sent a predetermined time after the first and second commands are sent.
5. The method of claim 1, further comprising the initiator storage system taking a snapshot of the copied data.
6. The method of claim 1, further comprising identifying when the snapshot was taken and time stamping the snapshot with the identified time.
7. The method of claim 1, wherein at least one of the first and second commands comprises a Small Computer Systems Interface (SCSI) command.
8. An electronic data system, comprising:
a target data storage system; and
an initiator data storage system operable to send a first command to the target data storage system requesting the target data storage system to take a snapshot of data stored on the target data storage system, send a second command to the target data storage system requesting the target data storage system to make the snapshot viewable to the initiator data storage system, and copy at least a portion of data from the viewable snapshot.
9. The system of claim 8, wherein the initiator data storage system is further operable to send a third command to the target data storage system requesting the target data storage system to take a second snapshot of data stored on the target data storage system, send a fourth command to the target data storage system requesting the target data storage system to make the second snapshot viewable to the initiator data storage system, and copy at least a portion of data from the second viewable snapshot.
10. The system of claim 9, wherein the target data storage system is operable to identify at least one difference between the viewable snapshot and the second viewable snapshot, create a list that outlines the identified at least one difference, and present the list to the initiator data storage system, and wherein the initiator data storage system is operable to update the copy of the snapshot from the viewable snapshot to the second viewable snapshot.
11. The system of claim 9, wherein the third and fourth commands are sent a predetermined time after the first and second commands are sent.
12. The system of claim 8, wherein the target data storage system is further operable to identify when the snapshot was taken and time stamp the snapshot with the identified time.
13. The system of claim 8, wherein the second command comprises a command to map the snapshot to a Logical Unit Number (LUN) and to make the LUN visible to the initiator data storage system.
14. The system of claim 8, wherein the initiator data storage system is further operable to send a command to the target data storage system requesting the target data storage system to determine changes that have occurred since the previous snapshot was taken, send a command to the target data storage system requesting the target data storage system to make the changes visible to the initiator data storage system, and copy the changes that have occurred since the previous snapshot was taken.
15. A device for use in conjunction with data storage, comprising:
an interface for communicating with a target storage system; and
a processor operable to generate a first command for transmission to the target storage system requesting the target storage system to take a snapshot of data stored on the target storage system, generate a second command for transmission to the target storage system requesting the target storage system to make the snapshot viewable, and copy at least a portion of data from the viewable snapshot.
16. The device of claim 15, wherein the processor is further operable to generate a third command for transmission to the target storage system requesting the target storage system to take a second snapshot of data stored on the target storage system, generate a fourth command for transmission to the target storage system requesting the target storage system to make the second snapshot viewable, and copy at least a portion of data from the second viewable snapshot.
17. The device of claim 16, wherein the processor is operable to use a list generated by the target storage system to update the copy of the snapshot, wherein the list outlines at least one difference between the snapshot and the second snapshot.
18. The device of claim 16, wherein the third and fourth commands are sent a predetermined time after the first and second commands are sent.
19. The device of claim 15, wherein the processor is further operable to take a snapshot of the copied data.
20. The device of claim 15, wherein the processor is further operable to generate a command for transmission to the target storage system requesting the target storage system to determine changes that have occurred since the previous snapshot was taken, generate a command for transmission to the target storage system requesting the target storage system to make the changes visible, and update the copied data with the visible changes.
US11/561,512 2006-02-07 2006-11-20 Data replication method and apparatus Abandoned US20110087792A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US11/561,680 US8990153B2 (en) 2006-02-07 2006-11-20 Pull data replication model
US11/561,512 US20110087792A2 (en) 2006-02-07 2006-11-20 Data replication method and apparatus
US12/555,454 US20090327568A1 (en) 2006-02-07 2009-09-08 Data Replication method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US77138406P 2006-02-07 2006-02-07
US11/561,512 US20110087792A2 (en) 2006-02-07 2006-11-20 Data replication method and apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/555,454 Division US20090327568A1 (en) 2006-02-07 2009-09-08 Data Replication method and apparatus

Publications (2)

Publication Number Publication Date
US20070186001A1 true US20070186001A1 (en) 2007-08-09
US20110087792A2 US20110087792A2 (en) 2011-04-14

Family

ID=38353658

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/561,512 Abandoned US20110087792A2 (en) 2006-02-07 2006-11-20 Data replication method and apparatus
US11/561,680 Active 2034-01-22 US8990153B2 (en) 2006-02-07 2006-11-20 Pull data replication model
US12/555,454 Abandoned US20090327568A1 (en) 2006-02-07 2009-09-08 Data Replication method and apparatus

Family Applications After (2)

Application Number Title Priority Date Filing Date
US11/561,680 Active 2034-01-22 US8990153B2 (en) 2006-02-07 2006-11-20 Pull data replication model
US12/555,454 Abandoned US20090327568A1 (en) 2006-02-07 2009-09-08 Data Replication method and apparatus

Country Status (1)

Country Link
US (3) US20110087792A2 (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070185973A1 (en) * 2006-02-07 2007-08-09 Dot Hill Systems, Corp. Pull data replication model
US20080072003A1 (en) * 2006-03-28 2008-03-20 Dot Hill Systems Corp. Method and apparatus for master volume access during volume copy
US20080153488A1 (en) * 2006-12-21 2008-06-26 Nokia Corporation Managing subscriber information
US20080256141A1 (en) * 2007-04-11 2008-10-16 Dot Hill Systems Corp. Method and apparatus for separating snapshot preserved and write data
US20080256311A1 (en) * 2007-04-11 2008-10-16 Dot Hill Systems Corp. Snapshot preserved data cloning
US20080281877A1 (en) * 2007-05-10 2008-11-13 Dot Hill Systems Corp. Backing store re-initialization method and apparatus
US20080281875A1 (en) * 2007-05-10 2008-11-13 Dot Hill Systems Corp. Automatic triggering of backing store re-initialization
US20080320258A1 (en) * 2007-06-25 2008-12-25 Dot Hill Systems Corp. Snapshot reset method and apparatus
US20110093855A1 (en) * 2006-12-21 2011-04-21 Emc Corporation Multi-thread replication across a network
US8364920B1 (en) * 2009-04-06 2013-01-29 Network Appliance, Inc. System and method for transferring and backing up luns and lun clones on primary and secondary servers
US20130036212A1 (en) * 2011-08-02 2013-02-07 Jibbe Mahmoud K Backup, restore, and/or replication of configuration settings in a storage area network environment using a management interface
US8538919B1 (en) * 2009-05-16 2013-09-17 Eric H. Nielsen System, method, and computer program for real time remote recovery of virtual computing machines
US9069709B1 (en) * 2013-06-24 2015-06-30 Emc International Company Dynamic granularity in data replication
US9940378B1 (en) * 2014-09-30 2018-04-10 EMC IP Holding Company LLC Optimizing replication of similar backup datasets
CN108874593A (en) * 2018-06-21 2018-11-23 郑州云海信息技术有限公司 A kind of three center disaster recovery method, apparatus of two places, equipment and system
CN108984346A (en) * 2018-07-18 2018-12-11 郑州云海信息技术有限公司 A kind of method, system and the storage medium of creation data disaster tolerance

Families Citing this family (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6307487B1 (en) 1998-09-23 2001-10-23 Digital Fountain, Inc. Information additive code generator and decoder for communication systems
US7068729B2 (en) 2001-12-21 2006-06-27 Digital Fountain, Inc. Multi-stage code generator and decoder for communication systems
US9240810B2 (en) 2002-06-11 2016-01-19 Digital Fountain, Inc. Systems and processes for decoding chain reaction codes through inactivation
CN100539439C (en) 2002-10-05 2009-09-09 数字方敦股份有限公司 The system coding of chain reaction sign indicating number and decode system and method
KR101170629B1 (en) * 2003-10-06 2012-08-02 디지털 파운튼, 인크. Error-correcting multi-stage code generator and decoder for communication systems having single transmitters or multiple transmitters
KR101205758B1 (en) 2004-05-07 2012-12-03 디지털 파운튼, 인크. File download and streaming system
WO2006020826A2 (en) * 2004-08-11 2006-02-23 Digital Fountain, Inc. Method and apparatus for fast encoding of data symbols according to half-weight codes
CN101686107B (en) 2006-02-13 2014-08-13 数字方敦股份有限公司 Streaming and buffering using variable FEC overhead and protection periods
US9270414B2 (en) 2006-02-21 2016-02-23 Digital Fountain, Inc. Multiple-field based code generator and decoder for communications systems
US7971129B2 (en) 2006-05-10 2011-06-28 Digital Fountain, Inc. Code generator and decoder for communications systems operating using hybrid codes to allow for multiple efficient users of the communications systems
US9432433B2 (en) 2006-06-09 2016-08-30 Qualcomm Incorporated Enhanced block-request streaming system using signaling or block creation
US9386064B2 (en) 2006-06-09 2016-07-05 Qualcomm Incorporated Enhanced block-request streaming using URL templates and construction rules
US9380096B2 (en) 2006-06-09 2016-06-28 Qualcomm Incorporated Enhanced block-request streaming system for handling low-latency streaming
US9419749B2 (en) 2009-08-19 2016-08-16 Qualcomm Incorporated Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes
US9178535B2 (en) 2006-06-09 2015-11-03 Digital Fountain, Inc. Dynamic stream interleaving and sub-stream based delivery
US9209934B2 (en) 2006-06-09 2015-12-08 Qualcomm Incorporated Enhanced block-request streaming using cooperative parallel HTTP and forward error correction
KR101129260B1 (en) 2007-09-12 2012-03-27 디지털 파운튼, 인크. Generating and communicating source identification information to enable reliable communications
US8019727B2 (en) * 2007-09-26 2011-09-13 Symantec Corporation Pull model for file replication at multiple data centers
US8626936B2 (en) * 2008-01-23 2014-01-07 International Business Machines Corporation Protocol independent server replacement and replication in a storage area network
US7856419B2 (en) * 2008-04-04 2010-12-21 Vmware, Inc Method and system for storage replication
US8015343B2 (en) * 2008-08-08 2011-09-06 Amazon Technologies, Inc. Providing executing programs with reliable access to non-local block data storage
KR101522443B1 (en) 2008-10-30 2015-05-21 인터내셔널 비지네스 머신즈 코포레이션 Flashcopy handling
US8286030B1 (en) 2009-02-09 2012-10-09 American Megatrends, Inc. Information lifecycle management assisted asynchronous replication
US9281847B2 (en) 2009-02-27 2016-03-08 Qualcomm Incorporated Mobile reception of digital video broadcasting—terrestrial services
US9288010B2 (en) 2009-08-19 2016-03-15 Qualcomm Incorporated Universal file delivery methods for providing unequal error protection and bundled file delivery services
EP2306319B1 (en) * 2009-09-14 2012-06-06 Software AG Database server, replication server and method for replicating data of a database server by at least one replication server
US9917874B2 (en) * 2009-09-22 2018-03-13 Qualcomm Incorporated Enhanced block-request streaming using block partitioning or request controls for improved client-side handling
US9176853B2 (en) 2010-01-29 2015-11-03 Symantec Corporation Managing copy-on-writes to snapshots
US8745002B2 (en) * 2010-02-04 2014-06-03 Symantec Corporation Mounting applications on a partially replicated snapshot volume
US9485546B2 (en) 2010-06-29 2016-11-01 Qualcomm Incorporated Signaling video samples for trick mode video representations
US8918533B2 (en) 2010-07-13 2014-12-23 Qualcomm Incorporated Video switching for streaming video data
US9185439B2 (en) 2010-07-15 2015-11-10 Qualcomm Incorporated Signaling data for multiplexing video components
US9596447B2 (en) 2010-07-21 2017-03-14 Qualcomm Incorporated Providing frame packing type information for video coding
US8806050B2 (en) 2010-08-10 2014-08-12 Qualcomm Incorporated Manifest file updates for network streaming of coded multimedia data
US8725692B1 (en) * 2010-12-16 2014-05-13 Emc Corporation Replication of xcopy command
US8799222B2 (en) * 2010-12-17 2014-08-05 Symantec Corporation Host based software block level replication using metadata indicating changed data objects at source and secondary nodes
JP5541149B2 (en) * 2010-12-27 2014-07-09 富士通株式会社 Snapshot collection program, server, and snapshot collection method
US8958375B2 (en) 2011-02-11 2015-02-17 Qualcomm Incorporated Framing for an improved radio link protocol including FEC
US9270299B2 (en) 2011-02-11 2016-02-23 Qualcomm Incorporated Encoding and decoding using elastic codes with flexible source block mapping
US8244831B1 (en) 2011-05-23 2012-08-14 Ilesfay Technology Group, LLC Method for the preemptive creation of binary delta information within a computer network
US9253233B2 (en) 2011-08-31 2016-02-02 Qualcomm Incorporated Switch signaling methods providing improved switching between representations for adaptive HTTP streaming
US9843844B2 (en) 2011-10-05 2017-12-12 Qualcomm Incorporated Network streaming of media data
US9043283B2 (en) 2011-11-01 2015-05-26 International Business Machines Corporation Opportunistic database duplex operations
US9294226B2 (en) 2012-03-26 2016-03-22 Qualcomm Incorporated Universal object delivery and template-based file delivery
US9696939B1 (en) * 2013-03-14 2017-07-04 EMC IP Holding Company LLC Replicating data using deduplication-based arrays using network-based replication
EP2979431B1 (en) * 2013-03-28 2017-09-06 Oracle International Corporation Methods, systems, and computer readable media for performing stateful diameter routing with diameter routing agents that use different mechanisms to achieve stateful routing
US9734230B2 (en) * 2013-09-12 2017-08-15 Sap Se Cross system analytics for in memory data warehouse
US9734221B2 (en) 2013-09-12 2017-08-15 Sap Se In memory database warehouse
US20150212913A1 (en) * 2014-01-28 2015-07-30 International Business Machines Corporation Performance mitigation of logical unit numbers (luns) using small computer system interface (scsi) inband management
US9317380B2 (en) 2014-05-02 2016-04-19 International Business Machines Corporation Preserving management services with self-contained metadata through the disaster recovery life cycle
US9218407B1 (en) 2014-06-25 2015-12-22 Pure Storage, Inc. Replication and intermediate read-write state for mediums
US9881014B1 (en) * 2014-06-30 2018-01-30 EMC IP Holding Company LLC Snap and replicate for unified datapath architecture
US9195402B1 (en) 2014-07-15 2015-11-24 Oracle International Corporation Target and initiator mode configuration of tape drives for data transfer between source and destination tape drives
US10289690B1 (en) 2014-09-22 2019-05-14 EMC IP Holding Company LLC Accessing file system replica during ongoing replication operations
US10185637B2 (en) * 2015-02-16 2019-01-22 International Business Machines Corporation Preserving management services with distributed metadata through the disaster recovery life cycle
US9983942B1 (en) 2015-03-11 2018-05-29 EMC IP Holding Company LLC Creating consistent user snaps at destination during replication
US9953070B1 (en) 2015-04-05 2018-04-24 Simply Data Now Inc. Enterprise resource planning (ERP) system data extraction, loading, and directing
US9830105B1 (en) 2015-12-21 2017-11-28 EMC IP Holding Company LLC Migrating data objects together with their snaps
US10733161B1 (en) 2015-12-30 2020-08-04 EMC IP Holding Company LLC Atomically managing data objects and assigned attributes
US11057263B2 (en) * 2016-09-27 2021-07-06 Vmware, Inc. Methods and subsystems that efficiently distribute VM images in distributed computing systems
US10353605B2 (en) 2017-01-30 2019-07-16 International Business Machines Corporation Optimizing off-loaded input/output (I/O) requests
US10318207B1 (en) * 2017-04-27 2019-06-11 EMC IP Holding Company LLC Apparatus and methods for inter-version replication checking
US10620843B2 (en) * 2017-07-26 2020-04-14 Netapp, Inc. Methods for managing distributed snapshot for low latency storage and devices thereof
US11016694B1 (en) * 2017-10-30 2021-05-25 EMC IP Holding Company LLC Storage drivers for remote replication management
US10884820B1 (en) 2018-08-31 2021-01-05 Veritas Technologies Llc Intelligent and automatic replication load score based load balancing and resiliency of replication appliances

Citations (86)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5551046A (en) * 1991-06-14 1996-08-27 International Business Machines Corporation Method for non-hierarchical lock management in a multi-system shared data environment
US5778189A (en) * 1996-05-29 1998-07-07 Fujitsu Limited System and method for converting communication protocols
US5812843A (en) * 1994-05-31 1998-09-22 Fujitsu Limited System and method for executing job between different operating systems
US5963962A (en) * 1995-05-31 1999-10-05 Network Appliance, Inc. Write anywhere file-system layout
US6073209A (en) * 1997-03-31 2000-06-06 Ark Research Corporation Data storage controller providing multiple hosts with access to multiple storage subsystems
US6076148A (en) * 1997-12-26 2000-06-13 Emc Corporation Mass storage subsystem and backup arrangement for digital data processing system which permits information to be backed up while host computer(s) continue(s) operating in connection with information stored on mass storage subsystem
US6292808B1 (en) * 1996-12-17 2001-09-18 Oracle Corporation Method and apparatus for reapplying changes to a database
US20010039629A1 (en) * 1999-03-03 2001-11-08 Feague Roy W. Synchronization process negotiation for computing devices
US6341341B1 (en) * 1999-12-16 2002-01-22 Adaptec, Inc. System and method for disk control with snapshot feature including read-write snapshot half
US20020083037A1 (en) * 2000-08-18 2002-06-27 Network Appliance, Inc. Instant snapshot
US20020099907A1 (en) * 2001-01-19 2002-07-25 Vittorio Castelli System and method for storing data sectors with header and trailer information in a disk cache supporting memory compression
US20020112084A1 (en) * 2000-12-29 2002-08-15 Deen Gary D. Methods, systems, and computer program products for controlling devices through a network via a network translation device
US6548634B1 (en) * 1998-09-30 2003-04-15 Chiron Corporation Synthetic peptides having FGF receptor affinity
US6557079B1 (en) * 1999-12-20 2003-04-29 Emc Corporation Remote data facility prefetch
US6594744B1 (en) * 2000-12-11 2003-07-15 Lsi Logic Corporation Managing a snapshot volume or one or more checkpoint volumes with multiple point-in-time images in a single repository
US20030154314A1 (en) * 2002-02-08 2003-08-14 I/O Integrity, Inc. Redirecting local disk traffic to network attached storage
US20030158863A1 (en) * 2002-02-15 2003-08-21 International Business Machines Corporation File system snapshot with ditto address feature
US6615223B1 (en) * 2000-02-29 2003-09-02 Oracle International Corporation Method and system for data replication
US20030167380A1 (en) * 2002-01-22 2003-09-04 Green Robbie A. Persistent snapshot management system
US20030188223A1 (en) * 2002-03-27 2003-10-02 Alexis Alan Previn BIOS shadowed small-print hard disk drive as robust, always on, backup for hard disk image & software failure
US20030191745A1 (en) * 2002-04-04 2003-10-09 Xiaoye Jiang Delegation of metadata management in a storage system by leasing of free file system blocks and i-nodes from a file system owner
US20030229764A1 (en) * 2002-06-05 2003-12-11 Hitachi, Ltd. Data storage subsystem
US20040030727A1 (en) * 2002-08-06 2004-02-12 Philippe Armangau Organization of multiple snapshot copies in a data storage system
US20040030951A1 (en) * 2002-08-06 2004-02-12 Philippe Armangau Instantaneous restoration of a production copy from a snapshot copy in a data storage system
US20040030846A1 (en) * 2002-08-06 2004-02-12 Philippe Armangau Data storage system having meta bit maps for indicating whether data blocks are invalid in snapshot copies
US20040034647A1 (en) * 2002-05-08 2004-02-19 Aksa-Sds, Inc. Archiving method and apparatus for digital information from web pages
US6711409B1 (en) * 1999-12-15 2004-03-23 Bbnt Solutions Llc Node belonging to multiple clusters in an ad hoc wireless network
US20040093555A1 (en) * 2002-09-10 2004-05-13 Therrien David G. Method and apparatus for managing data integrity of backup and disaster recovery data
US20040117567A1 (en) * 2002-12-13 2004-06-17 Lee Whay Sing System and method for efficient write operations for repeated snapshots by copying-on-write to most recent snapshot
US20040133718A1 (en) * 2001-04-09 2004-07-08 Hitachi America, Ltd. Direct access storage system with combined block interface and file interface access
US6771843B1 (en) * 2001-05-11 2004-08-03 Lsi Logic Corporation Data timeline management using snapshot volumes
US20040172509A1 (en) * 2003-02-27 2004-09-02 Hitachi, Ltd. Data processing system including storage systems
US20040204071A1 (en) * 2002-05-01 2004-10-14 Microsoft Corporation Method for wireless capability discovery and protocol negotiation, and wireless device including same
US20040267836A1 (en) * 2003-06-25 2004-12-30 Philippe Armangau Replication of snapshot using a file system copy differential
US20050004979A1 (en) * 2002-02-07 2005-01-06 Microsoft Corporation Method and system for transporting data content on a storage area network
US20050044088A1 (en) * 2003-08-21 2005-02-24 Lindsay Bruce G. System and method for asynchronous data replication without persistence for distributed computing
US20050065985A1 (en) * 2003-09-23 2005-03-24 Himabindu Tummala Organization of read-write snapshot copies in a data storage system
US20050066095A1 (en) * 2003-09-23 2005-03-24 Sachin Mullick Multi-threaded write interface and methods for increasing the single file read and write throughput of a file server
US20050066128A1 (en) * 2003-02-17 2005-03-24 Ikuya Yagisawa Storage system
US20050122791A1 (en) * 2003-04-03 2005-06-09 Hajeck Michael J. Storage subsystem with embedded circuit for protecting against anomalies in power signal from host
US6907512B2 (en) * 2002-05-21 2005-06-14 Microsoft Corporation System and method for filtering write operations to a storage medium containing an operating system image
US20050166022A1 (en) * 2004-01-28 2005-07-28 Hitachi, Ltd. Method and apparatus for copying and backup in storage systems
US20050182910A1 (en) * 2004-02-04 2005-08-18 Alacritus, Inc. Method and system for adding redundancy to a continuous data protection system
US20050193180A1 (en) * 2004-03-01 2005-09-01 Akira Fujibayashi Method and apparatus for data migration with the efficient use of old assets
US20050198452A1 (en) * 2004-03-02 2005-09-08 Naoki Watanabe Method and apparatus of remote copy for multiple storage subsystems
US20050240635A1 (en) * 2003-07-08 2005-10-27 Vikram Kapoor Snapshots of file systems in data storage systems
US20050246397A1 (en) * 2004-04-30 2005-11-03 Edwards John K Cloning technique for efficiently creating a copy of a volume in a storage system
US20050246503A1 (en) * 2004-04-30 2005-11-03 Fair Robert L Online clone volume splitting technique
US20060020762A1 (en) * 2004-07-23 2006-01-26 Emc Corporation Storing data replicas remotely
US20060053139A1 (en) * 2004-09-03 2006-03-09 Red Hat, Inc. Methods, systems, and computer program products for implementing single-node and cluster snapshots
US20060064541A1 (en) * 2004-09-17 2006-03-23 Hitachi Ltd. Method of and system for controlling attributes of a plurality of storage devices
US7047380B2 (en) * 2003-07-22 2006-05-16 Acronis Inc. System and method for using file system snapshots for online data backup
US7050457B2 (en) * 2000-06-08 2006-05-23 Siemens Aktiengesellschaft Method of communication between communications networks
US20060155946A1 (en) * 2005-01-10 2006-07-13 Minwen Ji Method for taking snapshots of data
US7100089B1 (en) * 2002-09-06 2006-08-29 3Pardata, Inc. Determining differences between snapshots
US20060212481A1 (en) * 2005-03-21 2006-09-21 Stacey Christopher H Distributed open writable snapshot copy facility using file migration policies
US20060271604A1 (en) * 2003-07-08 2006-11-30 Shoens Kurt A Management of file system snapshots
US20070011137A1 (en) * 2005-07-11 2007-01-11 Shoji Kodama Method and system for creating snapshots by condition
US7165156B1 (en) * 2002-09-06 2007-01-16 3Pardata, Inc. Read-write snapshots
US20070038703A1 (en) * 2005-07-14 2007-02-15 Yahoo! Inc. Content router gateway
US20070055710A1 (en) * 2005-09-06 2007-03-08 Reldata, Inc. BLOCK SNAPSHOTS OVER iSCSI
US7191304B1 (en) * 2002-09-06 2007-03-13 3Pardata, Inc. Efficient and reliable virtual volume mapping
US7194550B1 (en) * 2001-08-30 2007-03-20 Sanera Systems, Inc. Providing a single hop communication path between a storage device and a network switch
US7206961B1 (en) * 2002-09-30 2007-04-17 Emc Corporation Preserving snapshots during disk-based restore
US20070094466A1 (en) * 2001-12-26 2007-04-26 Cisco Technology, Inc., A Corporation Of California Techniques for improving mirroring operations implemented in storage area networks and network based virtualization
US20070100808A1 (en) * 2001-11-01 2007-05-03 Verisign, Inc. High speed non-concurrency controlled database
US20070143563A1 (en) * 2005-12-16 2007-06-21 Microsoft Corporation Online storage volume shrink
US7243157B2 (en) * 2004-02-20 2007-07-10 Microsoft Corporation Dynamic protocol construction
US20070185973A1 (en) * 2006-02-07 2007-08-09 Dot Hill Systems, Corp. Pull data replication model
US20070198605A1 (en) * 2006-02-14 2007-08-23 Nobuyuki Saika Snapshot management device and snapshot management method
US20070276885A1 (en) * 2006-05-29 2007-11-29 Microsoft Corporation Creating frequent application-consistent backups efficiently
US7313581B1 (en) * 1999-04-29 2007-12-25 International Business Machines Corporation Method for deferred deletion of entries for a directory service backing store
US20080072003A1 (en) * 2006-03-28 2008-03-20 Dot Hill Systems Corp. Method and apparatus for master volume access during volume copy
US20080082593A1 (en) * 2006-09-28 2008-04-03 Konstantin Komarov Using shrinkable read-once snapshots for online data backup
US7363366B2 (en) * 2004-07-13 2008-04-22 Teneros Inc. Network traffic routing
US7373366B1 (en) * 2005-06-10 2008-05-13 American Megatrends, Inc. Method, system, apparatus, and computer-readable medium for taking and managing snapshots of a storage volume
US20080114951A1 (en) * 2006-11-15 2008-05-15 Dot Hill Systems Corp. Method and apparatus for transferring snapshot data
US20080177954A1 (en) * 2007-01-18 2008-07-24 Dot Hill Systems Corp. Method and apparatus for quickly accessing backing store metadata
US20080177957A1 (en) * 2007-01-18 2008-07-24 Dot Hill Systems Corp. Deletion of rollback snapshot partition
US7426618B2 (en) * 2005-09-06 2008-09-16 Dot Hill Systems Corp. Snapshot restore method and apparatus
US20080256141A1 (en) * 2007-04-11 2008-10-16 Dot Hill Systems Corp. Method and apparatus for separating snapshot preserved and write data
US20080256311A1 (en) * 2007-04-11 2008-10-16 Dot Hill Systems Corp. Snapshot preserved data cloning
US20080281877A1 (en) * 2007-05-10 2008-11-13 Dot Hill Systems Corp. Backing store re-initialization method and apparatus
US20080281875A1 (en) * 2007-05-10 2008-11-13 Dot Hill Systems Corp. Automatic triggering of backing store re-initialization
US20080320258A1 (en) * 2007-06-25 2008-12-25 Dot Hill Systems Corp. Snapshot reset method and apparatus
US7526640B2 (en) * 2003-06-30 2009-04-28 Microsoft Corporation System and method for automatic negotiation of a security protocol

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US710089A (en) * 1900-07-11 1902-09-30 Stenotype Company Type-writing machine.
JPH05173988A (en) * 1991-12-26 1993-07-13 Toshiba Corp Decentralized processing system and transaction processing system applied to the same
ATE409907T1 (en) 1993-06-03 2008-10-15 Network Appliance Inc METHOD AND DEVICE FOR DESCRIBING ANY AREAS OF A FILE SYSTEM
US5436203A (en) * 1994-07-05 1995-07-25 Motorola, Inc. Shielded liquid encapsulated semiconductor device and method for making the same
CA2165912C (en) 1995-12-21 2004-05-25 David Hitz Write anywhere file-system layout
US6941490B2 (en) 2000-12-21 2005-09-06 Emc Corporation Dual channel restoration of data between primary and backup servers
JP2004318940A (en) * 2003-04-14 2004-11-11 Renesas Technology Corp Storage device
JP4581518B2 (en) * 2003-12-19 2010-11-17 株式会社日立製作所 How to get a snapshot
US20080112151A1 (en) * 2004-03-04 2008-05-15 Skyworks Solutions, Inc. Overmolded electronic module with an integrated electromagnetic shield using SMT shield wall components

Patent Citations (95)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5551046A (en) * 1991-06-14 1996-08-27 International Business Machines Corporation Method for non-hierarchical lock management in a multi-system shared data environment
US6289356B1 (en) * 1993-06-03 2001-09-11 Network Appliance, Inc. Write anywhere file-system layout
US20020091670A1 (en) * 1993-06-03 2002-07-11 David Hitz Write anywhere file-system layout
US20040260673A1 (en) * 1993-06-03 2004-12-23 David Hitz Copy on write file system consistency and block usage
US5812843A (en) * 1994-05-31 1998-09-22 Fujitsu Limited System and method for executing job between different operating systems
US5963962A (en) * 1995-05-31 1999-10-05 Network Appliance, Inc. Write anywhere file-system layout
US5778189A (en) * 1996-05-29 1998-07-07 Fujitsu Limited System and method for converting communication protocols
US6292808B1 (en) * 1996-12-17 2001-09-18 Oracle Corporation Method and apparatus for reapplying changes to a database
US6073209A (en) * 1997-03-31 2000-06-06 Ark Research Corporation Data storage controller providing multiple hosts with access to multiple storage subsystems
US6076148A (en) * 1997-12-26 2000-06-13 Emc Corporation Mass storage subsystem and backup arrangement for digital data processing system which permits information to be backed up while host computer(s) continue(s) operating in connection with information stored on mass storage subsystem
US6548634B1 (en) * 1998-09-30 2003-04-15 Chiron Corporation Synthetic peptides having FGF receptor affinity
US20040054131A1 (en) * 1998-09-30 2004-03-18 Marcus Ballinger Synthetic peptides having FGF receptor affinity
US20010039629A1 (en) * 1999-03-03 2001-11-08 Feague Roy W. Synchronization process negotiation for computing devices
US7313581B1 (en) * 1999-04-29 2007-12-25 International Business Machines Corporation Method for deferred deletion of entries for a directory service backing store
US6711409B1 (en) * 1999-12-15 2004-03-23 Bbnt Solutions Llc Node belonging to multiple clusters in an ad hoc wireless network
US6341341B1 (en) * 1999-12-16 2002-01-22 Adaptec, Inc. System and method for disk control with snapshot feature including read-write snapshot half
US6557079B1 (en) * 1999-12-20 2003-04-29 Emc Corporation Remote data facility prefetch
US6615223B1 (en) * 2000-02-29 2003-09-02 Oracle International Corporation Method and system for data replication
US7050457B2 (en) * 2000-06-08 2006-05-23 Siemens Aktiengesellschaft Method of communication between communications networks
US20020083037A1 (en) * 2000-08-18 2002-06-27 Network Appliance, Inc. Instant snapshot
US6594744B1 (en) * 2000-12-11 2003-07-15 Lsi Logic Corporation Managing a snapshot volume or one or more checkpoint volumes with multiple point-in-time images in a single repository
US20020112084A1 (en) * 2000-12-29 2002-08-15 Deen Gary D. Methods, systems, and computer program products for controlling devices through a network via a network translation device
US20020099907A1 (en) * 2001-01-19 2002-07-25 Vittorio Castelli System and method for storing data sectors with header and trailer information in a disk cache supporting memory compression
US20040133718A1 (en) * 2001-04-09 2004-07-08 Hitachi America, Ltd. Direct access storage system with combined block interface and file interface access
US6771843B1 (en) * 2001-05-11 2004-08-03 Lsi Logic Corporation Data timeline management using snapshot volumes
US7194550B1 (en) * 2001-08-30 2007-03-20 Sanera Systems, Inc. Providing a single hop communication path between a storage device and a network switch
US20070100808A1 (en) * 2001-11-01 2007-05-03 Verisign, Inc. High speed non-concurrency controlled database
US20070094466A1 (en) * 2001-12-26 2007-04-26 Cisco Technology, Inc., A Corporation Of California Techniques for improving mirroring operations implemented in storage area networks and network based virtualization
US20060107006A1 (en) * 2002-01-22 2006-05-18 Green Robbie A Persistent snapshot management system
US20030167380A1 (en) * 2002-01-22 2003-09-04 Green Robbie A. Persistent Snapshot Management System
US20050004979A1 (en) * 2002-02-07 2005-01-06 Microsoft Corporation Method and system for transporting data content on a storage area network
US20030154314A1 (en) * 2002-02-08 2003-08-14 I/O Integrity, Inc. Redirecting local disk traffic to network attached storage
US20030158863A1 (en) * 2002-02-15 2003-08-21 International Business Machines Corporation File system snapshot with ditto address feature
US20030188223A1 (en) * 2002-03-27 2003-10-02 Alexis Alan Previn BIOS shadowed small-print hard disk drive as robust, always on, backup for hard disk image & software failure
US20030191745A1 (en) * 2002-04-04 2003-10-09 Xiaoye Jiang Delegation of metadata management in a storage system by leasing of free file system blocks and i-nodes from a file system owner
US20040204071A1 (en) * 2002-05-01 2004-10-14 Microsoft Corporation Method for wireless capability discovery and protocol negotiation, and wireless device including same
US20040034647A1 (en) * 2002-05-08 2004-02-19 Aksa-Sds, Inc. Archiving method and apparatus for digital information from web pages
US6907512B2 (en) * 2002-05-21 2005-06-14 Microsoft Corporation System and method for filtering write operations to a storage medium containing an operating system image
US20030229764A1 (en) * 2002-06-05 2003-12-11 Hitachi, Ltd. Data storage subsystem
US20050071393A1 (en) * 2002-06-05 2005-03-31 Hitachi, Ltd. Data storage subsystem
US6792518B2 (en) * 2002-08-06 2004-09-14 Emc Corporation Data storage system having meta bit maps for indicating whether data blocks are invalid in snapshot copies
US20040030951A1 (en) * 2002-08-06 2004-02-12 Philippe Armangau Instantaneous restoration of a production copy from a snapshot copy in a data storage system
US20040030727A1 (en) * 2002-08-06 2004-02-12 Philippe Armangau Organization of multiple snapshot copies in a data storage system
US20040030846A1 (en) * 2002-08-06 2004-02-12 Philippe Armangau Data storage system having meta bit maps for indicating whether data blocks are invalid in snapshot copies
US7191304B1 (en) * 2002-09-06 2007-03-13 3Pardata, Inc. Efficient and reliable virtual volume mapping
US7165156B1 (en) * 2002-09-06 2007-01-16 3Pardata, Inc. Read-write snapshots
US7100089B1 (en) * 2002-09-06 2006-08-29 3Pardata, Inc. Determining differences between snapshots
US20040093555A1 (en) * 2002-09-10 2004-05-13 Therrien David G. Method and apparatus for managing data integrity of backup and disaster recovery data
US7206961B1 (en) * 2002-09-30 2007-04-17 Emc Corporation Preserving snapshots during disk-based restore
US20040117567A1 (en) * 2002-12-13 2004-06-17 Lee Whay Sing System and method for efficient write operations for repeated snapshots by copying-on-write to most recent snapshot
US20050066128A1 (en) * 2003-02-17 2005-03-24 Ikuya Yagisawa Storage system
US20040172509A1 (en) * 2003-02-27 2004-09-02 Hitachi, Ltd. Data processing system including storage systems
US20050122791A1 (en) * 2003-04-03 2005-06-09 Hajeck Michael J. Storage subsystem with embedded circuit for protecting against anomalies in power signal from host
US20040267836A1 (en) * 2003-06-25 2004-12-30 Philippe Armangau Replication of snapshot using a file system copy differential
US7526640B2 (en) * 2003-06-30 2009-04-28 Microsoft Corporation System and method for automatic negotiation of a security protocol
US20050240635A1 (en) * 2003-07-08 2005-10-27 Vikram Kapoor Snapshots of file systems in data storage systems
US20070266066A1 (en) * 2003-07-08 2007-11-15 Vikram Kapoor Snapshots of file systems in data storage systems
US20060271604A1 (en) * 2003-07-08 2006-11-30 Shoens Kurt A Management of file system snapshots
US7047380B2 (en) * 2003-07-22 2006-05-16 Acronis Inc. System and method for using file system snapshots for online data backup
US20050044088A1 (en) * 2003-08-21 2005-02-24 Lindsay Bruce G. System and method for asynchronous data replication without persistence for distributed computing
US20050066095A1 (en) * 2003-09-23 2005-03-24 Sachin Mullick Multi-threaded write interface and methods for increasing the single file read and write throughput of a file server
US20050065985A1 (en) * 2003-09-23 2005-03-24 Himabindu Tummala Organization of read-write snapshot copies in a data storage system
US20050166022A1 (en) * 2004-01-28 2005-07-28 Hitachi, Ltd. Method and apparatus for copying and backup in storage systems
US20050182910A1 (en) * 2004-02-04 2005-08-18 Alacritus, Inc. Method and system for adding redundancy to a continuous data protection system
US7243157B2 (en) * 2004-02-20 2007-07-10 Microsoft Corporation Dynamic protocol construction
US20050193180A1 (en) * 2004-03-01 2005-09-01 Akira Fujibayashi Method and apparatus for data migration with the efficient use of old assets
US20050198452A1 (en) * 2004-03-02 2005-09-08 Naoki Watanabe Method and apparatus of remote copy for multiple storage subsystems
US20050246503A1 (en) * 2004-04-30 2005-11-03 Fair Robert L Online clone volume splitting technique
US20050246397A1 (en) * 2004-04-30 2005-11-03 Edwards John K Cloning technique for efficiently creating a copy of a volume in a storage system
US7363366B2 (en) * 2004-07-13 2008-04-22 Teneros Inc. Network traffic routing
US20060020762A1 (en) * 2004-07-23 2006-01-26 Emc Corporation Storing data replicas remotely
US20060053139A1 (en) * 2004-09-03 2006-03-09 Red Hat, Inc. Methods, systems, and computer program products for implementing single-node and cluster snapshots
US20060064541A1 (en) * 2004-09-17 2006-03-23 Hitachi Ltd. Method of and system for controlling attributes of a plurality of storage devices
US7363444B2 (en) * 2005-01-10 2008-04-22 Hewlett-Packard Development Company, L.P. Method for taking snapshots of data
US20060155946A1 (en) * 2005-01-10 2006-07-13 Minwen Ji Method for taking snapshots of data
US20060212481A1 (en) * 2005-03-21 2006-09-21 Stacey Christopher H Distributed open writable snapshot copy facility using file migration policies
US7373366B1 (en) * 2005-06-10 2008-05-13 American Megatrends, Inc. Method, system, apparatus, and computer-readable medium for taking and managing snapshots of a storage volume
US20070011137A1 (en) * 2005-07-11 2007-01-11 Shoji Kodama Method and system for creating snapshots by condition
US20070038703A1 (en) * 2005-07-14 2007-02-15 Yahoo! Inc. Content router gateway
US7426618B2 (en) * 2005-09-06 2008-09-16 Dot Hill Systems Corp. Snapshot restore method and apparatus
US20070055710A1 (en) * 2005-09-06 2007-03-08 Reldata, Inc. BLOCK SNAPSHOTS OVER iSCSI
US20070143563A1 (en) * 2005-12-16 2007-06-21 Microsoft Corporation Online storage volume shrink
US20070185973A1 (en) * 2006-02-07 2007-08-09 Dot Hill Systems, Corp. Pull data replication model
US20070198605A1 (en) * 2006-02-14 2007-08-23 Nobuyuki Saika Snapshot management device and snapshot management method
US20080072003A1 (en) * 2006-03-28 2008-03-20 Dot Hill Systems Corp. Method and apparatus for master volume access during volume copy
US20070276885A1 (en) * 2006-05-29 2007-11-29 Microsoft Corporation Creating frequent application-consistent backups efficiently
US20080082593A1 (en) * 2006-09-28 2008-04-03 Konstantin Komarov Using shrinkable read-once snapshots for online data backup
US20080114951A1 (en) * 2006-11-15 2008-05-15 Dot Hill Systems Corp. Method and apparatus for transferring snapshot data
US20080177957A1 (en) * 2007-01-18 2008-07-24 Dot Hill Systems Corp. Deletion of rollback snapshot partition
US20080177954A1 (en) * 2007-01-18 2008-07-24 Dot Hill Systems Corp. Method and apparatus for quickly accessing backing store metadata
US20080256141A1 (en) * 2007-04-11 2008-10-16 Dot Hill Systems Corp. Method and apparatus for separating snapshot preserved and write data
US20080256311A1 (en) * 2007-04-11 2008-10-16 Dot Hill Systems Corp. Snapshot preserved data cloning
US20080281877A1 (en) * 2007-05-10 2008-11-13 Dot Hill Systems Corp. Backing store re-initialization method and apparatus
US20080281875A1 (en) * 2007-05-10 2008-11-13 Dot Hill Systems Corp. Automatic triggering of backing store re-initialization
US20080320258A1 (en) * 2007-06-25 2008-12-25 Dot Hill Systems Corp. Snapshot reset method and apparatus

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110072104A2 (en) * 2006-02-07 2011-03-24 Dot Hill Systems Corporation Pull data replication model
US8990153B2 (en) 2006-02-07 2015-03-24 Dot Hill Systems Corporation Pull data replication model
US20070185973A1 (en) * 2006-02-07 2007-08-09 Dot Hill Systems, Corp. Pull data replication model
US20090327568A1 (en) * 2006-02-07 2009-12-31 Dot Hill Systems Corporation Data Replication method and apparatus
US20080072003A1 (en) * 2006-03-28 2008-03-20 Dot Hill Systems Corp. Method and apparatus for master volume access during volume copy
US7783850B2 (en) 2006-03-28 2010-08-24 Dot Hill Systems Corporation Method and apparatus for master volume access during volume copy
US20080153488A1 (en) * 2006-12-21 2008-06-26 Nokia Corporation Managing subscriber information
US8750867B2 (en) 2006-12-21 2014-06-10 Nokia Corporation Managing subscriber information
US8428583B2 (en) * 2006-12-21 2013-04-23 Nokia Corporation Managing subscriber information
US8126844B2 (en) * 2006-12-21 2012-02-28 Emc Corporation Multi-thread replication across a network
US20110093855A1 (en) * 2006-12-21 2011-04-21 Emc Corporation Multi-thread replication across a network
US7975115B2 (en) 2007-04-11 2011-07-05 Dot Hill Systems Corporation Method and apparatus for separating snapshot preserved and write data
US8656123B2 (en) 2007-04-11 2014-02-18 Dot Hill Systems Corporation Snapshot preserved data cloning
US7716183B2 (en) 2007-04-11 2010-05-11 Dot Hill Systems Corporation Snapshot preserved data cloning
US20080256141A1 (en) * 2007-04-11 2008-10-16 Dot Hill Systems Corp. Method and apparatus for separating snapshot preserved and write data
US20080256311A1 (en) * 2007-04-11 2008-10-16 Dot Hill Systems Corp. Snapshot preserved data cloning
US20080281875A1 (en) * 2007-05-10 2008-11-13 Dot Hill Systems Corp. Automatic triggering of backing store re-initialization
US20080281877A1 (en) * 2007-05-10 2008-11-13 Dot Hill Systems Corp. Backing store re-initialization method and apparatus
US7783603B2 (en) 2007-05-10 2010-08-24 Dot Hill Systems Corporation Backing store re-initialization method and apparatus
US8001345B2 (en) 2007-05-10 2011-08-16 Dot Hill Systems Corporation Automatic triggering of backing store re-initialization
US20080320258A1 (en) * 2007-06-25 2008-12-25 Dot Hill Systems Corp. Snapshot reset method and apparatus
US8204858B2 (en) 2007-06-25 2012-06-19 Dot Hill Systems Corporation Snapshot reset method and apparatus
US8200631B2 (en) 2007-06-25 2012-06-12 Dot Hill Systems Corporation Snapshot reset method and apparatus
US8364920B1 (en) * 2009-04-06 2013-01-29 Network Appliance, Inc. System and method for transferring and backing up luns and lun clones on primary and secondary servers
US8538919B1 (en) * 2009-05-16 2013-09-17 Eric H. Nielsen System, method, and computer program for real time remote recovery of virtual computing machines
US20130036212A1 (en) * 2011-08-02 2013-02-07 Jibbe Mahmoud K Backup, restore, and/or replication of configuration settings in a storage area network environment using a management interface
US9069709B1 (en) * 2013-06-24 2015-06-30 Emc International Company Dynamic granularity in data replication
US9940378B1 (en) * 2014-09-30 2018-04-10 EMC IP Holding Company LLC Optimizing replication of similar backup datasets
CN108874593A (en) * 2018-06-21 2018-11-23 郑州云海信息技术有限公司 Method, apparatus, device, and system for two-site, three-center disaster recovery
CN108984346A (en) * 2018-07-18 2018-12-11 郑州云海信息技术有限公司 Method, system, and storage medium for creating data disaster tolerance

Also Published As

Publication number Publication date
US8990153B2 (en) 2015-03-24
US20110087792A2 (en) 2011-04-14
US20090327568A1 (en) 2009-12-31
US20070185973A1 (en) 2007-08-09
US20110072104A2 (en) 2011-03-24

Similar Documents

Publication Publication Date Title
US8990153B2 (en) Pull data replication model
US10467246B2 (en) Content-based replication of data in scale out system
JP4252301B2 (en) Storage system and data backup method thereof
US8204858B2 (en) Snapshot reset method and apparatus
US10146436B1 (en) Efficiently storing low priority data in high priority storage devices
US7975115B2 (en) Method and apparatus for separating snapshot preserved and write data
US7783850B2 (en) Method and apparatus for master volume access during volume copy
JP4515132B2 (en) Storage system, storage device, and remote copy method
US7296126B2 (en) Storage system and data processing system
US7831565B2 (en) Deletion of rollback snapshot partition
US20060230243A1 (en) Cascaded snapshots
US10216450B2 (en) Mirror vote synchronization
CN109313595B (en) Cross-platform replication
JP2008108145A (en) Computer system, and management method of data using the same
US10620843B2 (en) Methods for managing distributed snapshot for low latency storage and devices thereof
US9830237B2 (en) Resynchronization with compliance data preservation
US11768624B2 (en) Resilient implementation of client file operations and replication
EP4139802A1 (en) Methods for managing input-output operations in zone translation layer architecture and devices thereof
US8977827B1 (en) System, method and computer program product for recovering stub files
US8117493B1 (en) Fast recovery in data mirroring techniques
US10324652B2 (en) Methods for copy-free data migration across filesystems and devices thereof
US10976937B1 (en) Disparate local and remote replication technologies configured for the same device
US20210026780A1 (en) Methods for using extended physical region page lists to improve performance for solid-state drives and devices thereof
US11513900B2 (en) Remote replication of snapshots taken while replication was inactive
US11221928B2 (en) Methods for cache rewarming in a failover domain and devices thereof

Legal Events

Date Code Title Description
AS Assignment

Owner name: DOT HILL SYSTEMS CORP., COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WAYDA, JAMES GEORGE;LEE, KENT;RODRIGUEZ, ELIZABETH G.;REEL/FRAME:018572/0007;SIGNING DATES FROM 20061113 TO 20061116

Owner name: DOT HILL SYSTEMS CORP., COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WAYDA, JAMES GEORGE;LEE, KENT;RODRIGUEZ, ELIZABETH G.;SIGNING DATES FROM 20061113 TO 20061116;REEL/FRAME:018572/0007

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION