US20130166670A1 - Networked storage system and method including private data network - Google Patents
- Publication number
- US20130166670A1 (application US 13/333,992)
- Authority
- US
- United States
- Prior art keywords
- source
- mass storage
- storage device
- data
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/0605—Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Definitions
- Embodiments of the present invention relate generally to network-based storage of digital data.
- Network-based storage of digital data provides a number of advantages over local storage such as, for example, improved availability and accessibility of the data by multiple users, centralization of storage administration, and increased storage capacity.
- Industries such as cinema, television, and film advertising, which generate, manipulate, and share large amounts of data, often take advantage of network-based storage systems for video production, post-production, delivery, and consumption.
- FIG. 1 is a block diagram of a network-based storage system that is typically employed for video post-production.
- the system includes one or more storage clients 1000 a - 1000 d coupled to one or more storage nodes 2000 a - 2000 d over one or more storage area networks (SANs) 3200 a - 3200 d.
- the one or more SANs are interfaces between the storage clients and the storage nodes.
- the different SANs are coupled together over a data communications network 3400 such as, for example, an Ethernet network.
- Data migration from one SAN to another typically involves reading files from a source storage node, such as an ingest storage node 2000 a, and writing them to a target storage node, such as a production storage node 2000 b.
- a copy of data stored in the ingest storage node 2000 a is transferred to its client 1000 a over the ingest SAN 3200 a, and then over the data communications network 3400 to a production storage client 1000 b.
- the production storage client 1000 b in turn writes the received data over the production SAN 3200 b to the production storage node 2000 b.
- This type of transfer may create a heavy traffic load on the data communications network 3400 and additional traffic load on the ingest and production SANs 3200 a, 3200 b, impacting normal data operations.
- Although the overhead of transferring the data over the data communications network 3400 is eliminated where the storage client 1000 a has access to both the source and destination SANs, this does not eliminate the traffic on the SANs.
- the conventional mechanism of effectuating this transfer requires data to be read and written across the SAN(s) in two different operations.
- For example, when a file is copied from one folder to another folder within the same file system, the file is first retrieved from the storage node and transferred to the storage client over the SAN, and the read data is then transferred from the client back to the storage node over the same SAN.
- Such traffic on the SAN can have a negative impact on normal operations that also utilize the SAN for operations other than data transfer.
- Although such traffic may be alleviated by a secondary private interconnect between the storage clients, this solution increases complexity for the users by requiring the users to reconfigure their storage clients with additional hardware and software. Furthermore, a secondary private interconnect between the storage clients does not circumvent the SAN congestion as data migrates within the same SAN or between two different SANs.
- Embodiments of the present invention are directed to a system and method for transferring data between data storage units along a separate data transfer network to reduce the performance impact of transferring large amounts of data between different network storage devices over an IP-based network and/or storage area network.
- Apart from initiating the data transfer, a client device is not involved in the actual movement of the data from a source to a destination. Instead, data is transferred directly from one storage unit to another (or within a single storage unit), without having the data traverse the client device. The client device, as well as the storage area network coupled to the client device, is therefore bypassed in this data transfer. This helps minimize traffic on the IP-based network and/or storage area network.
- Embodiments of the present invention are directed to a networked data storage system which includes a source mass storage device storing source data, and a target mass storage device coupled to the source mass storage device via a private data network.
- the source mass storage device is in turn coupled to a source client via a storage area network (SAN).
- In response to a first type of request from the source client, the source mass storage device is configured to provide the source data to the source client via the SAN.
- In response to a second type of request from the source client, the source mass storage device is configured to provide the source data to the target mass storage device via the private data network.
- the source mass storage device is configured to receive an identifier identifying the target mass storage device.
- the first type of request is a request to read the source data and the second type of request is a request to copy or move the source data to the target mass storage device.
- the source mass storage device is configured to determine the identifier identifying the target mass storage device based on information provided in the second type of request.
- the information may be an address of a target SAN, where the target mass storage device is coupled to a target client via the target SAN.
- the target SAN may be the same as the SAN coupling the source client to the source mass storage device.
- the source storage device includes a processor and a memory storing computer program instructions.
- the processor is configured to execute the program instructions, where the program instructions include receiving a list of source blocks; retrieving the source data based on the source blocks; writing the retrieved source data in the memory; reading the source data from the memory; generating a request packet including the read source data; establishing connection with the target mass storage device; and transmitting the request packet to the target mass storage device via the established connection.
- the connection may be a point-to-point connection.
- the source blocks are identified by a file system controller based on file system metadata.
- the target mass storage device is configured to pre-allocate storage space for storing the source data.
- the target mass storage device may be configured to receive the source data from the source mass storage device and store the received source data in the pre-allocated storage space.
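- The send/receive sequence described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the class names, the in-memory dicts standing in for disk blocks, and the JSON packet format are all assumptions made for the sake of the example.

```python
import json

class TargetDevice:
    """Hypothetical target mass storage device: pre-allocates storage
    space, then stores source data received over the private data
    network in the pre-allocated blocks."""
    def __init__(self):
        self.storage = {}   # logical block number -> data chunk

    def preallocate(self, n_blocks):
        blocks = list(range(len(self.storage), len(self.storage) + n_blocks))
        for b in blocks:
            self.storage[b] = None   # reserved, not yet written
        return blocks

    def receive(self, packet):
        request = json.loads(packet)
        for block, chunk in zip(request["target_blocks"], request["data"]):
            self.storage[block] = chunk

class SourceDevice:
    """Hypothetical source mass storage device: retrieves the listed
    source blocks, buffers them in memory, generates a request packet,
    and transmits it over the (simulated) point-to-point connection."""
    def __init__(self, disk):
        self.disk = disk    # logical block number -> data chunk

    def transfer(self, source_blocks, target):
        buffered = [self.disk[b] for b in source_blocks]     # retrieve and buffer
        target_blocks = target.preallocate(len(buffered))    # target reserves space
        packet = json.dumps({"target_blocks": target_blocks,
                             "data": buffered})              # request packet
        target.receive(packet)                               # transmit

source = SourceDevice({0: "video-frame-0", 1: "video-frame-1"})
target = TargetDevice()
source.transfer([0, 1], target)
```

Note that the source client never touches the buffered data; only the two storage devices participate in the transfer.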
- the source data is stored in a plurality of source mass storage devices, where each of the plurality of source mass storage devices is configured to concurrently retrieve a portion of the source data and provide the retrieved portion to one or more target mass storage devices for storing therein.
- In response to a plurality of second type requests from one or more source clients, the source mass storage device is configured to concurrently provide the data requested by each of the plurality of second type requests to one or more target mass storage devices, for concurrently storing the data in the one or more target mass storage devices.
- the private data network provides a point-to-point communication channel between the source mass storage device and the target mass storage device for transferring the source data from the source mass storage device to the target mass storage device.
- the source data from the source mass storage device is transferred to the target mass storage device independent of involvement by the source client in moving the source data.
- In response to a third type of request from the source client, the source mass storage device is configured to transfer the source data from a first location in the source mass storage device to a second location in the source mass storage device, where the transfer bypasses the SAN.
- the present invention is also directed to a method for transferring source data stored in the source mass storage device.
- the source data is provided to the source client by the source mass storage device via the SAN.
- an identifier identifying the target mass storage device is provided to the source mass storage device, and the source data is provided, based on the identifier, to the target mass storage device via the private data network.
- FIG. 1 is a block diagram of a network-based storage system that is typically employed for video post-production;
- FIG. 2 is a block diagram of a networked storage system according to one embodiment of the invention.
- FIG. 3 is a flow diagram of a process executed by various modules of the networked storage system of FIG. 2 for transferring a data file from a source location to a target location over a private network, according to one embodiment of the invention.
- FIG. 4 is a flow diagram of a process for intra-storage transfer according to one embodiment of the invention.
- FIG. 5 is a flow diagram of a process for registering a supervisor module with a parent command and control module according to one embodiment of the invention.
- FIG. 6 is a block diagram illustrating an exemplary data transfer of files stored in specific LUNs according to one embodiment of the invention.
- embodiments of the present invention are directed to a networked data storage system with intelligence and infrastructure to directly transfer large amounts of data from a source location to a target location, within a file system of a single storage device, between different file systems within the same SAN, and/or between different SANs.
- the transfer is aimed to occur in a manner that is transparent to users, and with minimal impact on I/O performance.
- the term transfer is used to broadly encompass the transfer of data copied from a source location without deleting the data from the source, as well as the migration of data from the source to a target location that deletes the data at the source.
- FIG. 2 is a block diagram of a networked storage system according to one embodiment of the invention.
- the system includes storage area networks (SANs) 32 a, 32 b (collectively 32 ) coupling one or more clients 10 a ′, 10 a ′′, 10 b (collectively 10 ) to one or more storage nodes 20 a ′, 20 a ′′, 20 b ′, 20 b ′′ (collectively 20 ).
- SAN 32 a provides storage clients 10 a ′, 10 a ′′ (collectively 10 a ) access to data stored in storage nodes 20 a ′, 20 a ′′ (collectively 20 a ), and SAN 32 b provides storage client 10 b access to data stored in storage nodes 20 b ′, 20 b ′′ (collectively 20 b ).
- Each SAN 32 may be implemented using a high-speed network technology such as, for example, Fibre Channel, Ethernet (e.g., Gigabit Ethernet or 10 Gigabit Ethernet), InfiniBand®, PCIe (Peripheral Component Interconnect Express), or any other network technology conventional in the art.
- the SANs 32 are coupled to one another over a data communications network 34 .
- the data communications network may be an IP-based network, such as, for example, a local area network (LAN), wide area network (WAN), or any other IP or non-IP based data communications network conventional in the art.
- Although FIG. 2 depicts two storage nodes 20 coupled to a particular SAN 32 , the number of storage nodes may be greater or less than two.
- Although one or two clients are depicted as being coupled to a particular SAN, a person of skill in the art should recognize that the number of clients may also vary.
- Embodiments of the present invention are not limited to any particular number of clients, storage nodes, or SANs.
- Each storage node 20 includes a mass storage device 22 a ′, 22 a ′′, 22 b ′, 22 b ′′ (collectively 22 ) such as, for example, an array of physical disk drives in a RAID (redundant array of independent disks).
- the various disks 22 in each storage node 20 are assembled using a disk array controller (e.g. RAID controller) 28 a ′, 28 a ′′, 28 b ′, 28 b ′′ (collectively 28 ) to form one or more logical/virtual drives.
- Each logical drive may be identified by a logical unit number (LUN) or any other identification mechanism conventional in the art.
- All or a portion of the physical hard disks 22 in the storage node 20 may be mapped to a logical drive identified by a LUN.
- the disk array controller 28 interfaces with the physical hard disks 22 via API function calls and direct memory interface. Any type of mass storage data may be stored in the physical hard disks, including, without limitation, video, still images, text, audio, animation, and/or other multimedia and non-multimedia data.
- the storage devices 22 are described as disks, a person of ordinary skill in the art should recognize that any block structured storage media may be used in addition or in lieu of disks.
- Each storage client 10 is a desktop, laptop, tablet computer, or any other wired or wireless computer device conventional in the art.
- Each storage client includes a processor, memory, input unit (e.g. keyboard, keypad, mouse-type controller, touch screen display, and the like), and output unit (e.g. display screen).
- Each storage client 10 is coupled to a file system controller 12 , also referred to as a metadata controller (MDC), via a private IP network (not shown) or the data communications network 34 .
- the file system controller 12 stores and manages file system metadata for a clustered file system hosted by the clients 10 .
- An exemplary file system may be a Quantum® StorNext® File System (SNFS), NTFS, ext3, XFS, Lustre, GFS, GPFS, or any other cluster file system conventional in the art.
- the file system controller 12 may be implemented via software, firmware (e.g. ASIC), hardware, or any combination of software, firmware, and/or hardware.
- the file system controller 12 is installed in a dedicated server.
- An exemplary file system controller hosted by a dedicated server is StorNext® MDC.
- the file system controller 12 may be installed in a storage node 20 as part of, or separate from, the disk array controller 28 .
- a person of skill in the art should recognize that other locations are also possible for hosting the file system controller 12 , such as, for example, the storage client 10 , and embodiments of the present invention are not limited to the expressly described locations.
- the networked data storage system includes a private data mover network 36 for directly transferring data from one storage node 20 coupled to one SAN 32 to another storage node coupled to the same or different SAN.
- the private network 36 may be arranged in a switch fabric or other network topology conventional in the art.
- the fabric may be formed using Ethernet (e.g., Gigabit Ethernet or 10 Gigabit Ethernet), Fibre Channel, InfiniBand®, PCIe, or any other high bandwidth, low latency physical transport conventional in the art.
- the software architecture for the private data mover network 36 is independent of the particular physical transport that is used, and, as such, may be implemented over a variety of different physical transports.
- Any of a variety of data transfer protocols (e.g. TCP/IP, UDP/IP, Small Computer System Interface (SCSI), memory mapping, and other remote DMA (RDMA) protocols) may be used to transfer data between the storage nodes 20 .
- the networked data storage system of FIG. 2 also includes one or more objects or modules for initiating and controlling the direct transfer of stored data.
- the modules include, without limitation, a user interface module 15 a ′, 15 a ′′, 15 b (collectively 15 ), a command and control (CC) module 14 a, 14 b (collectively 14 ), and a supervisor module 24 a ′, 24 a ′′, 24 b ′, 24 b ′′ (collectively 24 ).
- each of the modules is a software module implemented via computer program instructions stored in memory and executed by a processor.
- the computer program instructions may also be stored in non-transient computer readable media such as, for example, a CD-ROM, flash drive, or the like.
- the modules may be implemented via hardware, firmware (e.g. via an ASIC), or in any combination of software, firmware, and/or hardware.
- each user interface module 15 is hosted by a storage client 10 for providing an interface for users or other computing devices to initiate a data transfer.
- the interface might be, for example, a command line interface (CLI), application programming interface (API), web based graphical user interface (WebGUI), or any other user interface conventional in the art.
- the user interface may allow a user to select one or more files to be transferred, and a target location for the transfer.
- the target location may be the same storage node as the source of the files, or a different storage node within the same SAN or on a different SAN.
- each command and control (CC) module 14 is hosted on the same device that hosts the file system controller 12 , and is configured to receive a transfer command from the user interface module 15 and manage the data transfer process.
- the CC module 14 determines which storage nodes 20 , and which LUNs owned by those storage nodes, need to be involved in the data transfer. Because data transfer requests are generally file based, the CC module needs knowledge of which logical data blocks belong to a particular file, and on which storage node(s) 20 those blocks reside, in order to translate a filename into a list of logical blocks and a list of storage nodes which own those blocks.
- a particular CC module 14 may also receive requests from a source CC module 14 to create a target file, allocate storage for it, and translate target file pathnames to lists of newly allocated extents/blocks that will receive the source data.
- each supervisor module 24 runs on the disk array controller 28 of a particular storage node 20 , and spawns worker threads 26 a ′, 26 a ′′, 26 b ′, 26 b ′′ (collectively 26 ) as needed to service the transfer requests.
- the supervisor takes the role of a source or target supervisor.
- the supervisor module 24 takes data transfer requests from its parent CC module 14 and stores the request in a source queue.
- the supervisor module 24 receives requests from a remote source supervisor module and stores the request in a target queue.
- the supervisor module 24 may take the role of a source for one data transfer request, while concurrently taking the role of a target for a different data transfer request.
- the supervisor module 24 invoked for a particular data transfer request spawns one or more worker threads 26 to handle the actual data transfer. If multiple data transfer requests are made to the supervisor module 24 , the module spawns at least one separate worker thread for each data transfer request for concurrently transferring the files indicated in the multiple requests.
- Each source worker thread 26 is configured to retrieve data from a LUN to which it interfaces, and directly transfer the data to a target worker thread for storing in the target storage node.
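- The supervisor/worker arrangement described above can be sketched roughly as follows. The queue handling and thread-per-request policy follow the description; the class layout, field names, and the trivial worker body are illustrative assumptions, not the patent's implementation.

```python
import queue
import threading

class Supervisor:
    """Hypothetical supervisor module: queues incoming transfer
    requests and spawns at least one worker thread per request so
    that multiple transfers proceed concurrently."""
    def __init__(self):
        self.requests = queue.Queue()   # source (or target) queue
        self.completed = []
        self._lock = threading.Lock()

    def submit(self, request):
        # Requests arrive from the parent CC module (source role) or
        # from a remote source supervisor (target role).
        self.requests.put(request)

    def _worker(self, request):
        # A real worker would read the source blocks via the disk
        # array controller and stream them to its peer target worker
        # over the private network; here we just record completion.
        with self._lock:
            self.completed.append(request["file"])

    def run(self):
        threads = []
        while not self.requests.empty():
            req = self.requests.get()
            t = threading.Thread(target=self._worker, args=(req,))
            t.start()                    # one worker per request
            threads.append(t)
        for t in threads:
            t.join()

sup = Supervisor()
sup.submit({"file": "clip1.mov"})
sup.submit({"file": "clip2.mov"})
sup.run()
```

Spawning a separate thread per request is what lets the files named in multiple requests be transferred concurrently, as the description notes.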
- FIG. 3 is a flow diagram of a process executed by the various modules of the networked storage system of FIG. 2 for transferring a data file from a source location (e.g. SAN 32 a ) to a target location (e.g. SAN 32 b ) over the private network 36 .
- the sequence of steps of the process is not fixed, but can be altered into any desired sequence as recognized by a person of skill in the art. Also, the person of skill in the art will recognize that one or more of the steps of the process may be executed concurrently with one or more other steps.
- a user command is received via the user interface module 15 of the initiating client 10 a .
- the command may be a request to transfer data from a source location to a target location, or a simple retrieval/reading of the data without copying or moving the data to another location.
- A request to transfer data includes sufficient information to identify the file to be transferred and the target location of the transfer.
- a request to transfer data may also be initiated by a computing device in communication with the source client 10 a.
- the user identifies the file to be transferred by selecting the file from a list of files, entering a specific file path of the file to be transferred, or providing other identification for the file as is conventional in the art. Although a single file is described as being identified by the user, a person of skill in the art should recognize that the user may also select multiple files for being concurrently transferred in the same data transfer session.
- the user may further identify the target location to which the file is to be transferred.
- the target location may be identified by an address (e.g. an IP address) of the target SAN 32 b. Alternatively, the user may identify the address, name, or another identifier of the target client 10 b or target file system controller 12 b .
- embodiments of the present invention may utilize other conventional mechanisms to identify the source and target of the transfer without being limited to the particular mechanisms described herein.
- the user may further indicate whether the file is to be moved (deleting the file from the source), or replicated and then moved.
- the user interface module 15 a bundles the user command into a request packet, and in step 304 , transmits the request packet to the source CC module 14 a over the data communications network 34 .
- Alternatively, the request packet may be transmitted to the source CC module 14 a over a private management network.
- the request packet may include, without limitation, the file path of the file to be transferred, and the address of the target location.
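- A request packet of this kind might be bundled as follows. The patent does not specify a wire format, so the JSON encoding and field names here are purely illustrative.

```python
import json

def bundle_request(file_path, target_address, operation="copy"):
    """Bundle a user transfer command into a request packet, as the
    user interface module 15 a does before sending it to the source
    CC module 14 a. Field names are assumptions for this sketch."""
    return json.dumps({
        "file_path": file_path,     # file to be transferred
        "target": target_address,   # e.g. address of the target SAN 32 b
        "operation": operation,     # "copy" keeps the source; "move" deletes it
    })

packet = bundle_request("/san_a/clip.mov", "10.0.0.2", operation="move")
```

The `operation` field reflects the user's choice, noted below, of whether the file is moved or replicated.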
- the source CC module determines whether the received request is a request to copy or move the data from a source to a target. If the answer is YES, the source CC module 14 a queries, in step 306 , the file system managed by the file system controller 12 a for obtaining one or more source LUNs and file extents/blocks within those LUNs for the file specified in the request packet. Any conventional mechanism for obtaining logical file block information for the specified file may be employed by the CC module 14 a.
- the source CC module 14 a uses the StorNext API library (SNAPI) to interrogate the file system controller about the file to be transferred, and obtain the LUN and extent/block information associated with the file.
- the CC module or the file system API interrogates the striping driver to determine the mapping of file system extent information to the logical block numbers on the LUNs that are exported by the RAID systems 20 .
- a goal of this operation is to obtain the logical block numbers on every LUN of each RAID system 20 that is involved in the file transfer operation.
- the CC module identifies the source LUN and logical block numbers/file extents on each storage node 20 a ′, 20 a ′′ that store chunks of the file. If the source CC module 14 does not have direct access to the file system to process the request, it communicates with other modules that do have access to such file system data.
- embodiments of the present invention identify storage areas of the storage nodes via LUNs and file extents, a person of skill in the art should recognize that other forms of identifying the storage areas may also be used.
- the direct access by the CC module 14 a to the file system metadata allows the CC module to create a Scatter/Gather map of the source LUNs and associated logical data blocks that contain the data to be transferred to the target location.
- the source CC module 14 a may further identify the storage node(s) 20 a ′, 20 a ′′ associated with the identified source LUNs.
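- The Scatter/Gather mapping step can be sketched as follows. The metadata layout (a filename mapped to (node, LUN, block) extents) is a hypothetical stand-in for what the file system controller actually stores; only the grouping logic reflects the description above.

```python
# Hypothetical file-system metadata: filename -> list of
# (storage_node, lun, logical_block) tuples, one per extent.
FILE_METADATA = {
    "/san_a/clip.mov": [
        ("node_20a1", "lun0", 100),
        ("node_20a2", "lun1", 101),
        ("node_20a1", "lun0", 102),
    ],
}

def scatter_gather_map(path):
    """Translate a filename into a per-storage-node map of
    (lun, block) pairs, as the source CC module 14 a does using the
    file system metadata."""
    per_node = {}
    for node, lun, block in FILE_METADATA[path]:
        per_node.setdefault(node, []).append((lun, block))
    return per_node
```

The resulting map tells the CC module which storage nodes must each be sent a transfer request for their share of the file's blocks.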
- In step 308 , the source CC module 14 a generates a second request packet including, without limitation, the filename and size of the file to be transferred, and transmits the second packet to the target CC module 14 b over the data communications network 34 .
- the source CC module 14 a identifies the address of the target file system controller 12 b based on the address of the target SAN 32 b provided by the user. The identification of the target file system controller 12 b automatically identifies the target CC module 14 b that is to receive the second request packet.
- the target CC module 14 b receives the second request packet from the source CC module and pre-allocates space in the target file system for the file to be transferred according to any pre-allocation mechanism conventional in the art.
- the target CC module 14 b may make a request to the target storage nodes 20 b ′, 20 b ′′ for allocation of storage space that corresponds to the size of the data that is to be received, and the target storage nodes may respond with available block numbers that it will reserve for the later data transfer.
- For purposes of this example, it is assumed that the file is to be striped across two target storage nodes 20 b ′, 20 b ′′.
- the target CC module identifies one or more target LUNs, file extents/blocks within the target LUNs, and one or more storage nodes 20 b ′, 20 b ′′ associated with the identified target LUNs, that are to store the transferred file.
- Where the file system/operating system includes a striping driver, the CC module or the file system API interrogates the striping driver to determine the mapping of file system extent information to the logical block numbers on the LUNs that are exported by the RAID systems 20 .
- a goal of this operation is to obtain the logical block numbers on every LUN of each RAID system that is involved in the file transfer operation.
- the identified target LUNs and file extents and associated physical blocks are reserved for the file to be received from the source and not used to store other data received by the target storage nodes 20 b.
- the target CC module 14 b returns the allocated target logical blocks to the source CC module 14 a over the data communications network 34 .
- the target CC module 14 b may return the target LUNs and file extents.
- the target CC module 14 b may further return the addresses of the target storage nodes 20 b ′, 20 b ′′ associated with allocated target logical blocks.
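- The target-side pre-allocation can be sketched as below. The round-robin striping policy and block numbering are assumptions for illustration; the patent only requires that enough blocks be reserved and returned to the source CC module.

```python
def preallocate(file_size, block_size, nodes):
    """Hypothetical target-side pre-allocation: reserve enough logical
    blocks for the incoming file, striped round-robin across the
    target storage nodes, and return the allocation so it can be sent
    back to the source CC module 14 a."""
    n_blocks = -(-file_size // block_size)      # ceiling division
    allocation = {node: [] for node in nodes}
    for i in range(n_blocks):
        node = nodes[i % len(nodes)]
        allocation[node].append(i)              # reserved logical block number
    return allocation

# A 10-unit file with 4-unit blocks needs 3 blocks across two nodes.
plan = preallocate(10, 4, ["node_20b1", "node_20b2"])
```

Because the blocks are reserved before any data moves, the target storage nodes will not hand them out to other writers in the meantime, matching the reservation behavior described above.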
- the source CC module 14 a makes a request to the source supervisor 24 a ′, 24 a ′′ in each storage node 20 a ′, 20 a ′′ over the data communications network 34 .
- the information passed to the source supervisors 24 a ′, 24 a ′′ with the request may include, for example, a list of source logical blocks, a list of target logical blocks, and identification of the target storage nodes 20 b ′, 20 b ′′ corresponding to the target logical blocks.
- Each source supervisor 24 a ′, 24 a ′′ stores the request and associated information from the parent CC module 14 a in a source queue (not shown) for handling.
- each source supervisor 24 a ′, 24 a ′′ communicates, over the private network 36 , with one of the target supervisors 24 b ′, 24 b ′′ in the target storage node 20 b ′, 20 b ′′ where the transferred file is to be stored.
- each source supervisor 24 a ′, 24 a ′′ passes to one of the target supervisors 24 b ′, 24 b ′′ the target logical blocks that have been pre-allocated for the file to be transferred.
- each target supervisor 24 b ′, 24 b ′′ stores the incoming request and associated information from a peer source supervisor 24 a ′, 24 a ′′, in a target queue (not shown) for handling.
- the source and target supervisors 24 each spawn one or more worker threads 26 for processing the data transfer.
- the supervisor threads communicate with the spawned worker threads over a private path including, for example, sockets, pipes, shared memory, and the like.
- each source supervisor 24 a ′, 24 a ′′ spawns at least one source worker thread 26 a ′, 26 a ′′ for interacting with the disk array controller 28 a ′, 28 a ′′ of the corresponding storage node 20 a ′, 20 a ′′.
- Each source worker thread retrieves the chunks of the file that are physically present on its storage node.
- the access is based on the source logical blocks provided to the worker threads 26 a ′, 26 a ′′.
- the actual data retrieval from the physical blocks is based on RAID controller API function calls and direct memory interface, although embodiments of the present invention contemplate other conventional mechanisms of accessing the data stored in the identified source blocks.
- each source worker thread 26 a ′, 26 a ′′ interacts with the disk array controller 28 a ′, 28 a ′′ to read the data from the source blocks into a memory of the controller.
- the determination of which physical blocks of the mass storage devices 22 a ′, 22 a ′′ correspond to the source logical blocks is done by the corresponding controllers 28 a ′, 28 a ′′ according to conventional mechanisms.
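A hedged sketch of this read path in Python: a worker thread asks a hypothetical disk array controller to resolve source logical blocks to physical blocks and stage the data in controller memory. The API shown is an assumption for illustration and does not correspond to any actual RAID controller interface.

```python
class DiskArrayController:
    # Hypothetical RAID controller API for illustration only; not an
    # actual vendor interface. Callers deal only in logical blocks.

    def __init__(self, physical_blocks):
        self._physical = physical_blocks          # physical block -> bytes
        # Trivial 1:1 logical-to-physical map; a real controller derives
        # this from the RAID geometry (stripe size, parity layout, ...).
        self._logical_map = {n: n for n in physical_blocks}
        self.memory = {}                          # controller memory

    def read_blocks(self, logical_blocks):
        # Resolve logical blocks to physical blocks and stage the data
        # in controller memory for the worker thread to ship out.
        for lb in logical_blocks:
            pb = self._logical_map[lb]
            self.memory[lb] = self._physical[pb]
        return [self.memory[lb] for lb in logical_blocks]

controller = DiskArrayController({25: b"chunk-25", 26: b"chunk-26"})
staged = controller.read_blocks([25, 26])
```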
- Each target supervisor 24 b ′, 24 b ′′ also spawns at least one target worker thread 26 b ′, 26 b ′′ for interfacing with the disk array controller 28 b ′, 28 b ′′ to store the received data based on the target logical blocks.
- Each target supervisor may send to the source supervisors 24 a ′, 24 a ′′, the identifier of the worker thread spawned for handling the transfer.
- Any type of identifier that identifies the target storage nodes 20 b ′, 20 b ′′ and/or target worker threads 26 b ′, 26 b ′′ that are to ultimately receive chunks of data that are to be transferred is contemplated, including without limitation, full or partial IP addresses, names, ID numbers, and the like.
- each spawned source worker thread 26 a ′, 26 a ′′ directly transfers the data that was written into the memory of the source controller 28 a ′, 28 a ′′, to a particular target worker thread 26 b ′, 26 b ′′, in a point-to-point dynamic interconnect using the private data network 36 .
- Each spawned source worker thread 26 a ′, 26 a ′′ is configured to transfer the data to the corresponding target worker thread 26 b ′, 26 b ′′ based on the identifier to such target worker thread.
- each spawned source worker thread 26 a ′, 26 a ′′ reads the data written into the memory of the corresponding controller 28 a ′, 28 a ′′ and generates a request packet including the read data.
- the request packet may further include, without limitation, the identifier of the target worker thread and/or target storage node 20 b ′, 20 b ′′ to receive the request packet, and a list of the target LUNs and associated file extents in which to store the transferred data.
- the source worker threads 26 a ′, 26 a ′′ may further be configured to establish the direct point-to-point connection with the target storage nodes 20 b ′, 20 b ′′ over a point-to-point communication channel provided by the private data network.
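The request packet described above might be framed as in the following Python sketch, with a length-prefixed header carrying the target worker/node identifiers and the target LUN/extent list, followed by the raw block data. The field names and JSON encoding are illustrative assumptions, not a wire format defined here.

```python
import json

def build_request_packet(target_worker_id, target_node, extents, payload):
    # Header carries the target identifiers and the list of target LUNs
    # and file extents; the payload is the raw block data read from the
    # source controller's memory. All field names are assumptions.
    header = json.dumps({
        "target_worker_id": target_worker_id,
        "target_node": target_node,
        "extents": extents,            # e.g. [{"lun": 3, "blocks": [3, 53]}]
    }).encode("utf-8")
    # Length-prefix the header so the receiver can split header from data.
    return len(header).to_bytes(4, "big") + header + payload

def parse_request_packet(packet):
    hlen = int.from_bytes(packet[:4], "big")
    header = json.loads(packet[4:4 + hlen].decode("utf-8"))
    return header, packet[4 + hlen:]

pkt = build_request_packet("worker-26b1", "node-20b1",
                           [{"lun": 3, "blocks": [3, 53]}], b"blockdata")
header, payload = parse_request_packet(pkt)
```

The length prefix lets the target worker thread split the routing metadata from the block data before handing the payload to its disk array controller.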
- each target worker thread 26 b ′, 26 b ′′ receives the request packet from the corresponding source worker thread 26 a ′, 26 a ′′ and interacts with its disk array controller 28 b ′, 28 b ′′ to write the received data into the physical blocks of the target mass storage devices 22 b ′, 22 b ′′ that correspond to the target LUNs and associated file extents.
- the storage nodes 20 may drive data at up to line speeds without interfacing with the SANs 32 or data communications network 34 .
- a client may use the SAN to retrieve data and make edits to it without encountering traffic that carries other data that is intended to be copied or moved to another location.
- the worker threads 26 exit after the data transfer completes (normally or abnormally), and status information is returned to the parent supervisor modules 24 .
- the supervisor modules 24 collect the status information and report the information back to the corresponding parent CC modules 14 . Errors encountered during the transfer may be handled according to any conventional error handling policy. For example, the supervisor modules 24 may attempt to retry the job or simply return a failed status to the parent CC module 14 , allowing the CC module to determine if and how to restart the data transfer job based on its internal policy.
- in step 305 , if the request provided by the user is not a request to copy or move the data from a source to a target, the request is handled as it conventionally would be, in step 322 . That is, the requested data is provided to the initiating client 10 a via the SAN 32 a.
- the list of the target logical blocks is passed to the target supervisors 24 b ′, 24 b ′′ by the source supervisors 24 a ′, 24 a ′′.
- the block list may alternatively be passed to the target supervisors 24 b ′, 24 b ′′ by the parent target CC module 14 b or any other module residing at the source or target location.
- embodiments of the present invention are not limited to the particular manner in which source and target logical block lists are passed to the corresponding supervisors and/or worker threads. All conventional mechanisms for conveying this information are contemplated.
- a person of skill in the art should recognize that the process of transferring data between two different SANs as described with respect to FIG. 3 is also applicable to a process of transferring data between storage nodes within the same SAN, such as, for example, transfers between storage nodes 20 a ′ and 20 a ′′.
- embodiments of the present invention provide improved network performance even where the source and target locations for a transfer are within the same file system within the same storage node 20 (intra-storage transfer).
- FIG. 4 is a flow diagram of a process for intra-storage transfer according to one embodiment of the invention.
- the sequence of steps of the process is not fixed, but can be altered into any desired sequence as recognized by a person of skill in the art. Also, the person of skill in the art will recognize that one or more of the steps of the process may be executed concurrently with one or more other steps.
- for this example, it is assumed that the file to be transferred is striped across source storage nodes 20 a ′, 20 a ′′, and that after its transfer, the file is striped across target storage nodes 20 a ′, 20 a ′′.
- the process starts, and in step 400 , the user interface module 15 a at the source client 10 a is invoked for initiating the intra-storage transfer.
- the client indicates an intra-storage transfer by copying a file from a source folder and identifying a target folder in which to store the copied file as the target location.
- the target folder may be the same or different than the source folder.
- the user interface module 15 a bundles the file path information and target location provided by the user into a request packet, and in step 404 , transmits the request packet to the CC module 14 a in a manner similar to step 304 of FIG. 3 .
- the request packet may include, without limitation, the file path of the file to be transferred, and the address of the target location.
- in step 406 , the CC module 14 a queries the file system managed by the file system controller 12 a for obtaining one or more source logical blocks for the file specified in the request packet, in a manner similar to step 306 of FIG. 3 .
- the CC module 14 a identifies the target location as being within the same SAN as the source location by comparing, for example, the address of the source SAN with the address of the target SAN, and pre-allocates space in the file system in a manner similar to step 310 of FIG. 3 . In pre-allocating the space, the CC module 14 a identifies one or more target logical blocks, and storage nodes 20 a ′, 20 a ′′ associated with the identified target logical blocks, that are to store the transferred file.
- the CC module 14 a makes a request to the supervisor 24 a ′, 24 a ′′ in each storage node 20 a ′, 20 a ′′ storing the desired file, over the data communications network 34 .
- the information passed to the supervisors with the request may include, for example, a list of source logical blocks from which to retrieve the file, and a list of target logical blocks in which to store a new copy of the file.
- each supervisor 24 a ′, 24 a ′′ takes the role of a source and stores at least the source logical blocks in a source queue (not shown) for handling.
- each supervisor 24 a ′, 24 a ′′ also takes the role of a target and stores at least the target logical blocks in a target queue (not shown) for handling.
- each supervisor 24 a ′, 24 a ′′ spawns a source or target worker thread in step 416 , for respectively retrieving data from, or storing data into, the corresponding blocks of the mass storage devices 22 a ′, 22 a ′′.
- each source worker thread 26 a ′, 26 a ′′ interacts with the disk array controller 28 a ′, 28 a ′′ to copy the data in the source logical blocks into a memory of the controller.
- Each target worker thread (not shown) also interacts with the disk array controller 28 a ′, 28 a ′′ to take the data copied into the memory, and write it into the physical blocks of the mass storage devices 22 a ′, 22 a ′′ that correspond to the target logical blocks.
- the intra-storage data transfer mechanism of FIG. 4 bypasses the SAN 32 a and thus, avoids creating traffic on the SAN 32 a that would generally occur during a conventional intra-storage data transfer event. Instead of traversing through the SAN (e.g. once for reading the data and once for writing the data), data is simply read into the memory of the disk array controller 28 a ′, 28 a ′′, and the read data is written to physical blocks of the mass storage devices 22 a ′, 22 a ′′ that correspond to the target logical blocks without traversing the SAN.
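The intra-storage copy path may be sketched as follows in Python: data is read from the physical blocks behind the source logical blocks into local memory and written to the blocks behind the target logical blocks, with no network traversal. A temporary file stands in for the mass storage device, POSIX-style pread/pwrite calls stand in for the controller's direct memory interface, and the fixed block size is an assumption for illustration.

```python
import os
import tempfile

BLOCK_SIZE = 16  # illustrative; real devices use e.g. 512 B or 4 KiB blocks

def intra_storage_copy(device_fd, source_blocks, target_blocks):
    # Copy block-by-block within one device: pread from the physical
    # blocks behind the source logical blocks, pwrite into the blocks
    # behind the target logical blocks. No SAN traffic is generated.
    for src, dst in zip(source_blocks, target_blocks):
        chunk = os.pread(device_fd, BLOCK_SIZE, src * BLOCK_SIZE)
        os.pwrite(device_fd, chunk, dst * BLOCK_SIZE)

# A temporary file stands in for the mass storage device.
with tempfile.TemporaryFile() as dev:
    dev.write(b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE + b"\0" * 2 * BLOCK_SIZE)
    dev.flush()
    intra_storage_copy(dev.fileno(), source_blocks=[0, 1], target_blocks=[2, 3])
    copied = os.pread(dev.fileno(), 2 * BLOCK_SIZE, 2 * BLOCK_SIZE)
```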
- each CC module 14 maintains enough information about the supervisors 24 , storage nodes 20 , and LUNs within the storage nodes, to understand which resources are controlled by each supervisor.
- Each supervisor module registers with the parent CC module 14 in order to provide this understanding to the CC module.
- FIG. 5 is a flow diagram of a process for registering a supervisor module 24 with a parent CC module 14 according to one embodiment of the invention.
- the sequence of steps of the process is not fixed, but can be altered into any desired sequence as recognized by a person of skill in the art. Also, the person of skill in the art will recognize that one or more of the steps of the process may be executed concurrently with one or more other steps.
- the registration process may be invoked, for instance, during power-up of the storage node 20 hosting the supervisor module 24 .
- the supervisor module 24 discovers available LUNs in the corresponding storage node 20 .
- the discovery is performed according to any conventional mechanism known in the art.
- the supervisor module 24 resides on the disk array controller 28 and has access via an internal controller API to the list of all LUNs and the information that describes those LUNs such as, for example, LUN Inquiry Data (e.g. identifiers, make, model, serial number, manufacturer, and the like).
- the supervisor module issues commands via the controller API to obtain information about all LUNs that the controller is presenting.
- the LUN information is collected into an internal data structure and subsequently sent to the CC module.
- the supervisor module sends the list of LUNs to the parent CC module 14 .
- the supervisor module transmits the list of LUNs in a broadcast message along with a request for registration.
- Each LUN is identified by a serial number which is unique for each LUN within a SAN.
- the request for registration may also contain a supervisor ID for the registering supervisor module 24 , and storage node ID of the storage node 20 hosting the supervisor.
- although a broadcast protocol is anticipated, other conventional mechanisms of transmitting the request may also be employed.
- the parent CC module 14 receives the request for registration and maps the list of LUNs to the corresponding supervisor ID and/or the storage node ID.
- the mapping information may be stored, for example, in a mapping table stored in a memory accessible to the CC module 14 .
- the mapping table may further contain the LUNs' pathnames and major/minor numbers on the file system controller 12 , and/or other information on how data is stored in the file system.
- addition or removal of LUNs causes the corresponding supervisors to send updates to the parent CC modules 14 .
- the CC modules are thereby kept up to date on the available resources controlled by the supervisor modules 24 .
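The registration and mapping described above can be sketched as follows in Python, where a hypothetical CC module records which supervisor and storage node own each LUN serial number. All identifiers are illustrative assumptions.

```python
class CommandControlModule:
    """Hypothetical parent CC module tracking supervisor registrations."""

    def __init__(self):
        # Maps LUN serial number -> (supervisor_id, storage_node_id).
        self.lun_map = {}

    def register(self, supervisor_id, storage_node_id, luns):
        # A registration request carries the supervisor ID, the hosting
        # storage node ID, and the LUNs discovered on that node.
        for lun in luns:
            self.lun_map[lun["serial"]] = (supervisor_id, storage_node_id)

    def supervisor_for(self, lun_serial):
        # Used later to route transfer requests to the right supervisor.
        return self.lun_map[lun_serial]

cc = CommandControlModule()
cc.register("sup-24a1", "node-20a1",
            [{"serial": "LUN3-SN-0001"}, {"serial": "LUN5-SN-0002"}])
owner = cc.supervisor_for("LUN3-SN-0001")
```

A re-registration after LUN addition or removal would simply replay `register` with the new list, keeping the map current.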
- FIG. 6 is a block diagram illustrating an exemplary data transfer of files stored in specific LUNs according to one embodiment of the invention.
- the CC module at a source location receives a request to transfer two files to a target location.
- the source CC module 14 a communicates with the file system controller 12 a and determines that one file is located in blocks 25-75 of LUN 3, and a second file is located in blocks 50-75 of LUN 5, both of which are controlled by a first controller 28 a ′ (e.g. a RAID-1 controller).
- the source CC module 14 a sends a request with the names of the files and the corresponding file sizes to the CC module at the target location (e.g. CC module 14 b ).
- the target CC module 14 b allocates space for the first file in blocks 3-53 of LUN 3 under a second controller 28 b ′ (e.g. RAID-3 controller), and space for the second file in blocks 100-125 of LUN 6 under a third controller 28 b ′′ (e.g. RAID-4 controller).
- the allocation information is sent back to the source CC module 14 a.
- the target CC module 14 b requests the target supervisor modules 24 b ′ and 24 b ′′ to set up the target worker threads 26 b ′, 26 b ′′ to receive the data transfer.
- the request may be transmitted by the source supervisor module 24 a ′ in a peer-to-peer communication.
- the source CC module 14 a also requests the source supervisor modules 24 a ′, 24 a ′′ to set up the source worker threads 26 a ′, 26 a ′′ for concurrently transferring the data in blocks 25-75 of LUN 3, and blocks 50-75 of LUN 5, over the private data mover network 36 , to target worker threads 26 b ′ and 26 b ′′, respectively.
- each source worker thread 26 a ′, 26 a ′′ receives the ID of the target worker thread 26 b ′, 26 b ′′ to which it is to transmit its retrieved data.
- the target worker threads 26 b ′, 26 b ′′ concurrently write the received data to blocks 3-53 of LUN 3, and blocks 100-125 of LUN 6, respectively.
- the source and target worker threads report their status to their corresponding supervisor modules, and terminate when the data transfer is complete, or when they encounter an unrecoverable error.
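The FIG. 6 example can be restated as data: the block ranges below come directly from the description, while the dictionary layout is an illustrative assumption. A quick check confirms that each source extent and its pre-allocated target extent cover the same number of blocks (the ranges being inclusive).

```python
# Transfer plan for the FIG. 6 example, expressed as data. The block
# ranges come from the description; the dict layout is an assumption.
transfer_plan = [
    {   # first file, read under the RAID-1 source controller
        "source": {"controller": "28a'", "lun": 3, "blocks": (25, 75)},
        "target": {"controller": "28b'", "lun": 3, "blocks": (3, 53)},
        "worker_pair": ("26a'", "26b'"),
    },
    {   # second file, written under the RAID-4 target controller
        "source": {"controller": "28a'", "lun": 5, "blocks": (50, 75)},
        "target": {"controller": "28b''", "lun": 6, "blocks": (100, 125)},
        "worker_pair": ("26a''", "26b''"),
    },
]

def block_count(extent):
    lo, hi = extent["blocks"]
    return hi - lo + 1   # block ranges are inclusive

# Each source extent and its pre-allocated target extent match in size.
sizes = [(block_count(t["source"]), block_count(t["target"]))
         for t in transfer_plan]
```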
- embodiments of the present invention have minimal impact on the I/O performance of the data communications network 34 and/or SAN 32 since the storage nodes 20 have direct local physical access to the disk drives 22 .
- the data communications network 34 and/or SAN 32 may be fully utilized for normal production data traffic.
- the private network 36 allows the multiple storage nodes 20 to transfer data in parallel in a point-to-point dynamic interconnect, making the data transfer faster than in conventional systems that utilize the SAN 32 and/or data communications network 34 for the data transfer.
- the storage nodes 20 can drive data at up to line speeds without inter-node interference.
- a single master worker thread (MWT) is spawned at each storage node 20 .
- the source MWT receives instructions from the CC module 14 for the multiple data transfers and does all the work of retrieving all of the file's extents/blocks and shipping them to the target MWT on another SAN.
- This embodiment may be simpler than the above described embodiment and may reduce setup costs due to the fact that only one worker thread is spawned. However, it may result in reduced performance because it uses a single network connection from a single storage node to send all of the data to other storage nodes and does not perform simultaneous transfers as is the case with multiple worker threads.
- Another embodiment is structurally similar to the embodiments described above, with the difference being that a copy of the file system client (e.g., the SNFS client) runs on each storage node 20 in a SAN.
- the source worker threads use standard file system calls to retrieve the data to be transferred to a target SAN. Because each storage node runs its own copy of the file system client, the source worker thread on the particular node retrieves only chunks of the file that are physically present on that node.
- although such an embodiment may reduce system complexity by allowing the worker threads direct access to file system information, it may result in increased costs due to licensing fees that may be required to install the file system client on each storage node.
- the overhead of the file system client in the I/O path may also reduce performance.
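As a hedged sketch of this variant in Python, a source worker thread uses ordinary seek/read calls, standing in for the standard file system calls referred to above, to retrieve only the extents that the local file system client reports as physically resident on its node. The extent list and striping layout are assumptions for illustration.

```python
import io

def read_local_extents(file_obj, local_extents):
    # Retrieve only the chunks of a striped file physically present on
    # this node; the other nodes' worker threads handle the rest. Plain
    # seek/read stands in for the standard file system calls.
    chunks = []
    for offset, length in local_extents:
        file_obj.seek(offset)
        chunks.append((offset, file_obj.read(length)))
    return chunks

# Assumed layout: this node holds bytes 0-3 and 8-11 of the striped file.
stripe = io.BytesIO(b"AAAABBBBCCCCDDDD")
local = read_local_extents(stripe, [(0, 4), (8, 4)])
```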
- Another embodiment of the present invention is similar to the embodiment with a single master worker thread, except that it differs by including a single copy of the file system client per SAN that runs on one of the storage nodes. This local file system client knows how to access all of the source file data regardless of how it is distributed over multiple storage nodes 20 .
Abstract
A networked storage system includes a source mass storage device coupled to a client via a storage area network (SAN). A target mass storage device is coupled to the source mass storage device via a private data network. The source mass storage device stores source data which is provided to the client via the SAN in response to a request by the client to read the data. If the request is to copy or move the source data, however, the source mass storage device determines an identifier for the target mass storage device and directly provides, based on the identifier, the source data to the target mass storage device via the private data network. The transfer via the private data network bypasses the client and the SAN.
Description
- Embodiments of the present invention relate generally to network-based storage of digital data.
- Network-based storage of digital data provides a number of advantages over local storage such as, for example, improved availability and accessibility of the data by multiple users, centralization of storage administration, and increased storage capacity. As such, industries such as cinema, television, and film advertising that generate, manipulate, and share large amounts of data, often take advantage of network-based storage systems for video production, post-production, delivery, and consumption.
-
FIG. 1 is a block diagram of a network-based storage system that is typically employed for video post-production. The system includes one or more storage clients 1000 a-1000 d coupled to one or more storage nodes 2000 a-2000 d over one or more storage area networks (SANs) 3200 a-3200 d. The one or more SANs are interfaces between the storage clients and the storage nodes. The different SANs are coupled together over a data communications network 3400 such as, for example, an Ethernet network. - Data migration from one SAN to another typically involves reading files from a source storage node, such as an
ingest storage node 2000 a, and writing it to a target storage node, such as a production storage node 2000 b. In this regard, a copy of data stored in the ingest storage node 2000 a is transferred to its client 1000 a over the ingest SAN 3200 a, and then over the data communications network 3400 to a production storage client 1000 b. The production storage client 1000 b in turn writes the received data over the production SAN 3200 b to the production storage node 2000 b. This type of transfer may create a heavy traffic load on the data communications network 3400 and additional traffic load on the ingest and production SANs 3200 a, 3200 b. Although the traffic on the data communications network 3400 is eliminated where the storage client 1000 a has access to both the source and destination SANs, it does not eliminate the traffic on the SANs. Even when data is transferred within a single file system or between two file systems within the same SAN, the conventional mechanism of effectuating this transfer requires data to be read and written across the SAN(s) in two different operations. For example, when a file is copied from one folder to another folder within the same file system, the file is first retrieved from the storage node and transferred to the storage client over the SAN, and the read data is then transferred from the client back to the storage node over the same SAN. Such traffic on the SAN can have a negative impact on normal operations that also utilize the SAN for operations other than data transfer. - Although the network congestion created via typical data transfers over the
data communications network 3400 may be alleviated by a secondary private interconnect between the storage clients, this solution increases complexity for users by requiring them to reconfigure their storage clients with additional hardware and software. Furthermore, a secondary private interconnect between the storage clients does not circumvent the SAN congestion as data migrates within the same SAN or between two different SANs. - The above problem is not unique to the field of video post-production. Similar issues of reduced network performance may also arise in fields that utilize SANs to transfer large amounts of data, such as, for example, scientific research, music production, data forensics, satellite imaging, and the like.
- Accordingly, what is desired is a network-based storage system and method for transferring data within or between SANs with improved performance for the data transfers.
- Embodiments of the present invention are directed to a system and method for transferring data between data storage units along a separate data transfer network to reduce the performance impact of transferring large amounts of data between different network storage devices over an IP-based network and/or storage area network. Apart from initiating the data transfer, a client device is not involved in the actual movement of the data from a source to a destination. Instead, data is transferred directly from one storage unit to another (or within a single storage unit), without having the data traverse to the client device. The client device, as well as the storage area network coupled to the client device, is therefore bypassed in this data transfer. This helps minimize traffic on the IP-based network and/or storage area network.
- Accordingly, embodiments of the present invention are directed to a networked data storage system which includes a source mass storage device storing source data, and a target mass storage device coupled to the source mass storage device via a private data network. The source mass storage device is in turn coupled to a source client via a storage area network (SAN). In response to a first type of request from the source client, the source mass storage device is configured to provide the source data to the source client via the SAN. However, in response to a second type of request from the source client, the source mass storage device is configured to provide the source data to the target mass storage device via the private data network. In providing the source data to the target mass storage device, the source mass storage device is configured to receive an identifier identifying the target mass storage device.
- According to one embodiment of the invention, the first type of request is a request to read the source data and the second type of request is a request to copy or move the source data to the target mass storage device.
- According to one embodiment of the invention, the source mass storage device is configured to determine the identifier identifying the target mass storage device based on information provided in the second type of request. The information may be an address of a target SAN, where the target mass storage device is coupled to a target client via the target SAN. The target SAN may be the same as the SAN coupling the source client to the source mass storage device.
- According to one embodiment of the invention, the source storage device includes a processor and a memory storing computer program instructions. The processor is configured to execute the program instructions, where the program instructions include receiving a list of source blocks; retrieving the source data based on the source blocks; writing the retrieved source data in the memory; reading the source data from the memory; generating a request packet including the read source data; establishing connection with the target mass storage device; and transmitting the request packet to the target mass storage device via the established connection. The connection may be a point-to-point connection.
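The enumerated program instructions may be sketched as follows in Python: the source side retrieves the source data into memory, generates a length-prefixed request packet, establishes a point-to-point connection, and transmits; the target side receives and reassembles the data. The framing and addressing are illustrative assumptions, and a loopback socket stands in for the private data network.

```python
import socket
import threading

def source_device(source_blocks, block_store, target_addr):
    # Retrieve the source data based on the source blocks and write it
    # into memory (a bytes buffer standing in for controller memory).
    memory = b"".join(block_store[b] for b in source_blocks)
    # Generate a request packet: assumed 4-byte length prefix + payload.
    packet = len(memory).to_bytes(4, "big") + memory
    # Establish a point-to-point connection and transmit the packet.
    with socket.create_connection(target_addr) as conn:
        conn.sendall(packet)

def target_device(server_sock, received):
    # Accept one connection, read the framed packet, store the payload.
    conn, _ = server_sock.accept()
    with conn:
        size = int.from_bytes(conn.recv(4), "big")
        data = b""
        while len(data) < size:
            data += conn.recv(size - len(data))
        received.append(data)

server = socket.socket()
server.bind(("127.0.0.1", 0))   # loopback stands in for the private network
server.listen(1)
received = []
listener = threading.Thread(target=target_device, args=(server, received))
listener.start()
source_device([0, 1], {0: b"hello-", 1: b"world"}, server.getsockname())
listener.join()
server.close()
```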
- According to one embodiment, the source blocks are identified by a file system controller based on file system metadata.
- According to one embodiment of the invention, the target mass storage device is configured to pre-allocate storage space for storing the source data. The target mass storage device may be configured to receive the source data from the source mass storage device and store the received source data in the pre-allocated storage space.
- According to one embodiment of the invention, the source data is stored in a plurality of source mass storage devices, where each of the plurality of source mass storage devices is configured to concurrently retrieve a portion of the source data and provide the retrieved portion to one or more target mass storage devices for storing therein.
- According to one embodiment of the invention, in response to a plurality of second type requests from one or more source clients, the source mass storage device is configured to concurrently provide data requested by each of the plurality of second type requests, to one or more target mass storage devices, for concurrently storing the data in the one or more target mass storage devices.
- According to one embodiment of the invention, the private data network provides a point-to-point communication channel between the source mass storage device and the target mass storage device for transferring the source data from the source mass storage device to the target mass storage device.
- According to one embodiment of the invention, the source data from the source mass storage device is transferred to the target mass storage device independent of involvement by the source client in moving the source data.
- According to one embodiment of the invention, in response to a third type of request from the source client, the source mass storage device is configured to transfer the source data from a first location in the source mass storage device to a second location in the source mass storage device, where the transfer bypasses the SAN.
- The present invention is also directed to a method for transferring source data stored in the source mass storage device. In response to a first type of request, the source data is provided to the source client by the source mass storage device via the SAN. In response to a second type of request, an identifier identifying the target mass storage device is provided to the source mass storage device, and the source data is provided, based on the identifier, to the target mass storage device via the private data network.
- These and other features, aspects and advantages of the present invention will be more fully understood when considered with respect to the following detailed description, appended claims, and accompanying drawings. Of course, the actual scope of the invention is defined by the appended claims.
- The accompanying drawings, together with the specification, illustrate exemplary embodiments of the present invention, and, together with the description, serve to explain the principles of the present invention.
-
FIG. 1 is a block diagram of a network-based storage system that is typically employed for video post-production; -
FIG. 2 is a block diagram of a networked storage system according to one embodiment of the invention; -
FIG. 3 is a flow diagram of a process executed by various modules of the networked storage system ofFIG. 2 for transferring a data file from a source location to a target location over a private network, according to one embodiment of the invention; -
FIG. 4 is a flow diagram of a process for intra-storage transfer according to one embodiment of the invention; -
FIG. 5 is a flow diagram of a process for registering a supervisor module with a parent command and control module according to one embodiment of the invention; and -
FIG. 6 is a block diagram illustrating an exemplary data transfer of files stored in specific LUNs according to one embodiment of the invention. - In the following detailed description, only certain exemplary embodiments of the present invention are shown and described, by way of illustration. As those skilled in the art would recognize, the invention may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Like reference numerals designate like elements throughout the specification.
- In general terms, embodiments of the present invention are directed to a networked data storage system with intelligence and infrastructure to directly transfer large amounts of data from a source location to a target location, within a file system of a single storage device, between different file systems within the same SAN, and/or between different SANs. The transfer is aimed to occur in a manner that is transparent to users, and with minimal impact on I/O performance. Hereinafter, the term transfer is used to broadly encompass the transfer of data copied from a source location without deleting the data from the source, as well as the migration of data from the source to a target location that deletes the data at the source.
-
FIG. 2 is a block diagram of a networked storage system according to one embodiment of the invention. The system includes storage area networks (SANs) 32 a, 32 b (collectively 32) coupling one or more clients 10 a′, 10 a″, 10 b (collectively 10) to one or more storage nodes 20 a′, 20 a″, 20 b′, 20 b″ (collectively 20). For example, in the illustrated embodiment, SAN 32 a provides storage clients 10 a′, 10 a″ (collectively 10 a) access to data stored in storage nodes 20 a′, 20 a″ (collectively 20 a), and SAN 32 b provides storage client 10 b access to data stored in storage nodes 20 b′, 20 b″ (collectively 20 b). Each SAN 32 may be implemented using a high-speed network technology such as, for example, Fibre Channel, Ethernet (e.g., Gigabit Ethernet or 10 Gigabit Ethernet), InfiniBand®, PCIe (Peripheral Component Interconnect Express), or any other network technology conventional in the art. The SANs 32 are coupled to one another over a data communications network 34. The data communications network may be an IP-based network, such as, for example, a local area network (LAN), wide area network (WAN), or any other IP or non-IP based data communications network conventional in the art. - Although the storage system of
FIG. 2 depicts two storage nodes 20 coupled to a particular SAN 32, a person of skill in the art should recognize that the number of storage nodes may be greater or less than two. In addition, although the storage system depicts one or two clients coupled to a particular SAN, a person of skill in the art should recognize that the number of clients may also vary. Thus, embodiments of the present invention are not limited to any particular number of clients, storage nodes, or SANs. - Each storage node 20 includes a
mass storage device 22 a′, 22 a″, 22 b′, 22 b″ (collectively 22) such as, for example, an array of physical disk drives in a RAID (redundant array of independent disks). The various disks 22 in each storage node 20 are assembled using a disk array controller (e.g. RAID controller) 28 a′, 28 a″, 28 b′, 28 b″ (collectively 28) to form one or more logical/virtual drives. Each logical drive may be identified by a logical unit number (LUN) or any other identification mechanism conventional in the art. All or a portion of the physical hard disks 22 in the storage node 20 may be mapped to a logical drive identified by a LUN. The disk array controller 28 interfaces with the physical hard disks 22 via API function calls and a direct memory interface. Any type of mass storage data may be stored in the physical hard disks, including, without limitation, video, still images, text, audio, animation, and/or other multimedia and non-multimedia data. Although the storage devices 22 are described as disks, a person of ordinary skill in the art should recognize that any block-structured storage media may be used in addition to or in lieu of disks. - Each storage client 10 is a desktop, laptop, tablet computer, or any other wired or wireless computer device conventional in the art. Each storage client includes a processor, memory, input unit (e.g. keyboard, keypad, mouse-type controller, touch screen display, and the like), and output unit (e.g. display screen). Each storage client 10 is coupled to a file system controller 12, also referred to as a metadata controller (MDC), via a private IP network (not shown) or the
data communications network 34. The file system controller 12 stores and manages file system metadata for a clustered file system hosted by the clients 10. An exemplary file system may be a Quantum® StorNext® File System (SNFS), NTFS, ext3, XFS, Lustre, GFS, GPFS, or any other cluster file system conventional in the art. The file system controller 12 may be implemented via software, firmware (e.g. ASIC), hardware, or any combination of software, firmware, and/or hardware. According to one embodiment, the file system controller 12 is installed in a dedicated server. An exemplary file system controller hosted by a dedicated server is StorNext® MDC. Alternatively, the file system controller 12 may be installed in a storage node 20 as part of, or separate from, the disk array controller 28. A person of skill in the art should recognize that other locations are also possible for hosting the file system controller 12, such as, for example, the storage client 10, and embodiments of the present invention are not limited to the expressly described locations. - According to one embodiment of the invention, the networked data storage system includes a private
data mover network 36 for directly transferring data from one storage node 20 coupled to one SAN 32 to another storage node coupled to the same or a different SAN. The private network 36 may be arranged in a switch fabric or other network topology conventional in the art. The fabric may be formed using Ethernet (e.g., Gigabit Ethernet or 10 Gigabit Ethernet), Fibre Channel, InfiniBand®, PCIe, or any other high bandwidth, low latency physical transport conventional in the art. The software architecture for the private data mover network 36 is independent of the particular physical transport that is used, and, as such, may be implemented over a variety of different physical transports. In addition, any of a variety of data transfer protocols (e.g., TCP/IP, UDP/IP, Small Computer System Interface (SCSI), Memory Mapping, and other remote DMA (RDMA) protocols) may be used to transfer data between the storage nodes 20. - The networked data storage system of
FIG. 2 also includes one or more objects or modules for initiating and controlling the direct transfer of stored data. The modules include, without limitation, a user interface module 15 a′, 15 a″, 15 b (collectively 15), a command and control (CC) module 14 a, 14 b (collectively 14), and a supervisor module 24 a′, 24 a″, 24 b′, 24 b″ (collectively 24). According to one embodiment, each of the modules is a software module implemented via computer program instructions stored in memory and executed by a processor. The computer program instructions may also be stored in non-transient computer readable media such as, for example, a CD-ROM, flash drive, or the like. In other embodiments, the modules may be implemented via hardware, firmware (e.g. via an ASIC), or in any combination of software, firmware, and/or hardware. - According to one exemplary embodiment, each user interface module 15 is hosted by a storage client 10 for providing an interface for users or other computing devices to initiate a data transfer. The interface might be, for example, a command line interface (CLI), application programming interface (API), web based graphical user interface (WebGUI), or any other user interface conventional in the art. The user interface may allow a user to select one or more files to be transferred, and a target location for the transfer. The target location may be the same storage node as the source of the files, or a different storage node within the same SAN or on a different SAN.
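The request-initiation step performed by the user interface module 15 can be pictured as a small bundling helper. The following sketch is illustrative only; the function name, field names, and addresses are assumptions for exposition and not part of the disclosed system:

```python
# Hypothetical sketch: a user interface module bundles the user's file
# selection and target location into a request packet for the CC module.

def build_request(files, target, move=False):
    """Bundle file paths and a target location into a transfer request."""
    if not files:
        raise ValueError("at least one file must be selected")
    return {
        "op": "move" if move else "copy",   # move deletes the source copy
        "files": list(files),               # file paths of the data to transfer
        "target": target,                   # e.g. address of the target SAN or MDC
    }

packet = build_request(["/stornext/snfs1/video.mov"], target="10.0.0.2")
```

The same structure would carry either a copy or a move request, distinguished by the operation field.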
- According to one exemplary embodiment, each command and control (CC) module 14 is hosted on the same device that hosts the file system controller 12, and is configured to receive a transfer command from the user interface module 15 and manage the data transfer process. In this regard, the CC module 14 determines which storage nodes 20, and which LUNs owned by those storage nodes, need to be involved in the data transfer. Because data transfer requests are generally file based, the CC module generally needs knowledge of which logical data blocks belong to a particular file and on which storage node(s) 20 those blocks reside, so that it can translate a filename into a list of logical blocks and a list of storage nodes that own those blocks. A particular CC module 14 may also receive requests from a source CC module 14 to create a target file, allocate storage for it, and translate target file pathnames to lists of newly allocated extents/blocks that will receive the source data.
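The filename-to-blocks translation described above can be sketched as follows. The metadata table below stands in for a real cluster file system query (e.g. via SNAPI); the node names, LUNs, and block ranges are invented for illustration:

```python
# Hypothetical sketch of the CC module's translation of a filename into a
# scatter/gather map of storage nodes, source LUNs, and file extents.

FILE_METADATA = {
    # filename -> list of (storage_node, lun, first_block, last_block)
    "video.mov": [("20a'", 3, 25, 75), ("20a''", 5, 50, 75)],
}

def translate(filename):
    """Return a scatter/gather map: storage node -> list of (lun, extent)."""
    sg_map = {}
    for node, lun, first, last in FILE_METADATA[filename]:
        sg_map.setdefault(node, []).append((lun, (first, last)))
    return sg_map

sg = translate("video.mov")
```

The resulting map tells the CC module which supervisors to contact and which block ranges each one owns.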
- According to one exemplary embodiment, each supervisor module 24 runs on the disk array controller 28 of a particular storage node 20, and spawns
worker threads 26 a′, 26 a″, 26 b′, 26 b″ (collectively 26) as needed to service the transfer requests. Depending on whether the storage node that it supervises is the source or the target of a particular data transfer, the supervisor takes the role of a source or target supervisor. When taking the role of a source, the supervisor module 24 takes data transfer requests from its parent CC module 14 and stores each request in a source queue. When taking the role of a target, the supervisor module 24 receives requests from a remote source supervisor module and stores each request in a target queue. As a person of skill in the art will appreciate, the supervisor module 24 may take the role of a source for one data transfer request, while concurrently taking the role of a target for a different data transfer request. - According to one embodiment, the supervisor module 24 invoked for a particular data transfer request spawns one or more worker threads 26 to handle the actual data transfer. If multiple data transfer requests are made to the supervisor module 24, the module spawns at least one separate worker thread for each data transfer request for concurrently transferring the files indicated in the multiple requests. Each source worker thread 26 is configured to retrieve data from a LUN to which it interfaces, and directly transfer the data to a target worker thread for storing in the target storage node.
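The queue-and-spawn behavior of the supervisor module 24 can be sketched as a minimal threading example. This is not the patented implementation; the class layout and the one-thread-per-request policy are assumptions drawn from the description above:

```python
# Illustrative sketch: a supervisor keeps separate source and target queues
# and spawns one worker thread per queued transfer request.

import queue
import threading

class Supervisor:
    def __init__(self):
        self.source_queue = queue.Queue()   # requests from the parent CC module
        self.target_queue = queue.Queue()   # requests from remote source supervisors
        self.workers = []

    def submit(self, request, role="source"):
        q = self.source_queue if role == "source" else self.target_queue
        q.put(request)
        worker = threading.Thread(target=self._serve, args=(q,))
        self.workers.append(worker)
        worker.start()

    def _serve(self, q):
        request = q.get()           # each worker handles exactly one queued request
        request["status"] = "done"  # stand-in for the actual block transfer
        q.task_done()

sup = Supervisor()
req = {"file": "video.mov"}
sup.submit(req, role="source")
for w in sup.workers:
    w.join()
```

Because the queues are independent, the same supervisor can act as a source for one request while acting as a target for another, as the text notes.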
-
FIG. 3 is a flow diagram of a process executed by the various modules of the networked storage system of FIG. 2 for transferring a data file from a source location (e.g. SAN 32 a) to a target location (e.g. SAN 32 b) over the private network 36. The sequence of steps of the process is not fixed, but can be altered into any desired sequence as recognized by a person of skill in the art. Also, the person of skill in the art will recognize that one or more of the steps of the process may be executed concurrently with one or more other steps. - The process starts, and in
step 300, a user command is received via the user interface module 15 of the initiating client 10 a. The command may be a request to transfer data from a source location to a target location, or a simple retrieval/reading of the data without copying or moving the data to another location. According to one embodiment, a request to transfer data includes sufficient information to identify the file to be transferred and the target location of the transfer. A request to transfer data may also be initiated by a computing device in communication with the source client 10 a. - Where the data transfer is initiated by a user, the user identifies the file to be transferred by selecting the file from a list of files, entering a specific file path of the file to be transferred, or providing other identification for the file as is conventional in the art. Although a single file is described as being identified by the user, a person of skill in the art should recognize that the user may also select multiple files for being concurrently transferred in the same data transfer session. The user may further identify the target location to which the file is to be transferred. The target location may be identified by an address (e.g. an IP address) of the
target SAN 32 b. Alternatively, the user may identify the address, name, or another identifier of the target client 10 b or target file system controller 12 b. A person of skill in the art will recognize that embodiments of the present invention may utilize other conventional mechanisms to identify the source and target of the transfer without being limited to the particular mechanisms described herein. The user may further indicate whether the file is to be moved (deleting the file from the source), or replicated and then moved. - In
step 302, the user interface module 15 a bundles the user command into a request packet, and in step 304, transmits the request packet to the source CC module 14 a over the data communications network 34. Alternatively, if the source client 10 a is coupled to the source file system controller 12 a over a private management network, the request packet may be transmitted to the source CC module 14 a over the private management network. If the user command is a request to transfer data from a source to a target, the request packet may include, without limitation, the file path of the file to be transferred, and the address of the target location. - In
step 305, the source CC module determines whether the received request is a request to copy or move the data from a source to a target. If the answer is YES, the source CC module 14 a queries, in step 306, the file system managed by the file system controller 12 a for obtaining one or more source LUNs and file extents/blocks within those LUNs for the file specified in the request packet. Any conventional mechanism for obtaining logical file block information for the specified file may be employed by the CC module 14 a. For example, if the file system controller 12 a is implemented as a StorNext File System, the source CC module 14 a uses the StorNext API library (SNAPI) to interrogate the file system controller about the file to be transferred, and obtain the LUN and extent/block information associated with the file. In addition, if the file system/operating system includes a striping driver, the CC module or the file system API interrogates the striping driver to determine the mapping of file system extent information to the logical block numbers on the LUNs that are exported by the RAID systems 20. A goal of this operation is to obtain the logical block numbers on every LUN of each RAID system 20 that is involved in the file transfer operation. For example, if a particular file that is to be transferred is striped across two storage nodes 20 a′, 20 a″, the CC module identifies the source LUN and logical block numbers/file extents on each storage node 20 a′, 20 a″ that store chunks of the file. If the source CC module 14 does not have direct access to the file system to process the request, it communicates with other modules that do have access to such file system data. Although embodiments of the present invention identify storage areas of the storage nodes via LUNs and file extents, a person of skill in the art should recognize that other forms of identifying the storage areas may also be used. - The direct access by the
CC module 14 a to the file system metadata allows the CC module to create a Scatter/Gather map of the source LUNs and associated logical data blocks that contain the data to be transferred to the target location. The source CC module 14 a may further identify the storage node(s) 20 a′, 20 a″ associated with the identified source LUNs. - In
step 308, the source CC module 14 a generates a second request packet including, without limitation, the filename and size of the file to be transferred, and transmits the second packet to the target CC module 14 b over the data communications network 34. According to one embodiment, the source CC module 14 a identifies the address of the target file system controller 12 b based on the address of the target SAN 32 b provided by the user. The identification of the target file system controller 12 b automatically identifies the target CC module 14 b that is to receive the second request packet. - In
step 310, the target CC module 14 b receives the second request packet from the source CC module and pre-allocates space in the target file system for the file to be transferred according to any pre-allocation mechanism conventional in the art. For example, the target CC module 14 b may make a request to the target storage nodes 20 b′, 20 b″ for allocation of storage space that corresponds to the size of the data that is to be received, and the target storage nodes may respond with available block numbers that they will reserve for the later data transfer. For purposes of this example, it is assumed that the file is to be striped across two target storage nodes 20 b′, 20 b″. In this regard, the target CC module identifies one or more target LUNs, file extents/blocks within the target LUNs, and one or more storage nodes 20 b′, 20 b″ associated with the identified target LUNs, that are to store the transferred file. In addition, if the file system/operating system includes a striping driver, the CC module or the file system API interrogates the striping driver to determine the mapping of file system extent information to the logical block numbers on the LUNs that are exported by the RAID systems 20. A goal of this operation is to obtain the logical block numbers on every LUN of each RAID system that is involved in the file transfer operation. The identified target LUNs and file extents and associated physical blocks are reserved for the file to be received from the source and not used to store other data received by the target storage nodes 20 b. - In
step 312, the target CC module 14 b returns the allocated target logical blocks to the source CC module 14 a over the data communications network 34. According to another embodiment, the target CC module 14 b may return the target LUNs and file extents. The target CC module 14 b may further return the addresses of the target storage nodes 20 b′, 20 b″ associated with the allocated target logical blocks. - In
step 314, the source CC module 14 a makes a request to the source supervisor 24 a′, 24 a″ in each storage node 20 a′, 20 a″ over the data communications network 34. The information passed to the source supervisors 24 a′, 24 a″ with the request may include, for example, a list of source logical blocks, a list of target logical blocks, and identification of the target storage nodes 20 b′, 20 b″ corresponding to the target logical blocks. Each source supervisor 24 a′, 24 a″ stores the request and associated information from the parent CC module 14 a in a source queue (not shown) for handling. - In
step 316, each source supervisor 24 a′, 24 a″ communicates, over the private network 36, with one of the target supervisors 24 b′, 24 b″ in the target storage node 20 b′, 20 b″ where the transferred file is to be stored. According to one embodiment, each source supervisor 24 a′, 24 a″ passes to one of the target supervisors 24 b′, 24 b″ the target logical blocks that have been pre-allocated for the file to be transferred. According to one embodiment, each target supervisor 24 b′, 24 b″ stores the incoming request and associated information from a peer source supervisor 24 a′, 24 a″, in a target queue (not shown) for handling. - In
step 318, the source and target supervisors 24 each spawn one or more worker threads 26 for processing the data transfer. According to one embodiment, the supervisor threads communicate with the spawned worker threads over a private path including, for example, sockets, pipes, shared memory, and the like. For example, each source supervisor 24 a′, 24 a″ spawns at least one source worker thread 26 a′, 26 a″ for interacting with the disk array controller 28 a′, 28 a″ of the corresponding storage node 20 a′, 20 a″. Each source worker thread retrieves the chunks of the file that are physically present on its storage node. The access is based on the source logical blocks provided to the worker threads 26 a′, 26 a″. The actual data retrieval from the physical blocks is based on RAID controller API function calls and a direct memory interface, although embodiments of the present invention contemplate other conventional mechanisms of accessing the data stored in the identified source blocks. In performing the data retrieval, each source worker thread 26 a′, 26 a″ interacts with the disk array controller 28 a′, 28 a″ to read the data from the source blocks into a memory of the controller. The determination of which physical blocks of the mass storage devices 22 a′, 22 a″ correspond to the source logical blocks is done by the corresponding controllers 28 a′, 28 a″ according to conventional mechanisms. - Each
target supervisor 24 b′, 24 b″ also spawns at least one target worker thread 26 b′, 26 b″ for interfacing with the disk array controller 28 b′, 28 b″ to store the received data based on the target logical blocks. Each target supervisor may send to the source supervisors 24 a′, 24 a″ the identifier of the worker thread spawned for handling the transfer. Any type of identifier that identifies the target storage nodes 20 b′, 20 b″ and/or target worker threads 26 b′, 26 b″ that are to ultimately receive chunks of data that are to be transferred is contemplated, including without limitation, full or partial IP addresses, names, ID numbers, and the like. - In
step 320, each spawned source worker thread 26 a′, 26 a″ directly transfers the data that was written into the memory of the source controller 28 a′, 28 a″, to a particular target worker thread 26 b′, 26 b″, in a point-to-point dynamic interconnect using the private data network 36. Each spawned source worker thread 26 a′, 26 a″ is configured to transfer the data to the corresponding target worker thread 26 b′, 26 b″ based on the identifier of such target worker thread. In this regard, each spawned source worker thread 26 a′, 26 a″ reads the data written into the memory of the corresponding controller 28 a′, 28 a″ and generates a request packet including the read data. The request packet may further include, without limitation, the identifier of the target worker thread and/or target storage node 20 b′, 20 b″ to receive the request packet, and a list of the target LUNs and associated file extents in which to store the transferred data. The source worker threads 26 a′, 26 a″ may further be configured to establish the direct point-to-point connection with the target storage nodes 20 b′, 20 b″ over a point-to-point communication channel provided by the private data network. - In
step 321, each target worker thread 26 b′, 26 b″ receives the request packet from the corresponding source worker thread 26 a′, 26 a″ and interacts with its disk array controller 28 b′, 28 b″ to write the received data into the physical blocks of the target mass storage devices 22 b′, 22 b″ that correspond to the target LUNs and associated file extents. - In this manner, neither the SAN nor the
data communications network 34 is used for the transfer of the actual data stored in the storage node(s). Furthermore, aside from the initial command that initiates the data transfer, no storage clients are involved in the actual movement of the data from the source storage node to the target storage node. As such, the data is transferred independent of client involvement at both the source and target locations, as well as without involving the SANs coupled to the storage clients. Depending on the fabric that is used for the private data network, the storage nodes 20 may drive data up to line speeds without interfacing with the SANs 32 or data communications network 34. Accordingly, normal operations which continue to use the SAN to simply retrieve data from the one or more storage node(s) without copying or moving the data to another location, are not impacted by the data transfer operations. For example, with the embodiments of the present invention, a client may use the SAN to retrieve data to make edits to the data without encountering traffic that carries other data that is intended to be copied or moved to another location. - The worker threads 26 exit after the data transfer completes (normally or abnormally), and status information is returned to the parent supervisor modules 24. The supervisor modules 24 collect the status information and report the information back to the corresponding parent CC modules 14. Errors encountered during the transfer may be handled according to any conventional error handling policy. For example, the supervisor modules 24 may attempt to retry the job or simply return a failed status to the parent CC module 14, allowing the CC module to determine if and how to restart the data transfer job based on its internal policy.
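Steps 320-321, in which a source worker packages blocks read into controller memory and a target worker writes them into pre-allocated blocks, can be sketched end to end as follows. Dictionaries stand in for controller memory and physical block storage, and all identifiers are illustrative assumptions:

```python
# Toy sketch of the step 320-321 handoff: a source worker reads a source
# extent into memory, builds a request packet addressed to a target worker,
# and the target worker writes the payload into the pre-allocated extent.

def source_worker(source_store, src_extent, target_id, tgt_extent):
    lun, first, last = src_extent
    data = [source_store[(lun, b)] for b in range(first, last + 1)]  # read into memory
    return {"target": target_id, "extent": tgt_extent, "data": data}  # request packet

def target_worker(packet, target_store):
    lun, first, last = packet["extent"]
    for offset, block in enumerate(packet["data"]):
        target_store[(lun, first + offset)] = block  # write into allocated blocks

src = {(3, b): f"chunk{b}" for b in range(25, 76)}  # source LUN 3, blocks 25-75
dst = {}
pkt = source_worker(src, (3, 25, 75), "26b'", (3, 3, 53))
target_worker(pkt, dst)
```

Note how the source and target extents may start at different block numbers; only their lengths must agree, which is why the target returns its allocation in step 312.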
- Referring again to step 305, if the request provided by the user is not a request to copy or move the data from a source to a target, the request is handled conventionally in
step 322. That is, the requested data is provided to the initiating client 10 a via the SAN 32 a. - In the process described with respect to
FIG. 3, the list of the target logical blocks is passed to the target supervisors 24 b′, 24 b″ by the source supervisors 24 a′, 24 a″. A person of skill in the art should recognize, however, that the block list may alternatively be passed to the target supervisors 24 b′, 24 b″ by the parent target CC module 14 b or any other module residing at the source or target location. Thus, embodiments of the present invention are not limited to the particular manner in which source and target logical block lists are passed to the corresponding supervisors and/or worker threads. All conventional mechanisms for conveying this information are contemplated. - A person of skill in the art should recognize that the process of transferring data between two different SANs as described with respect to
FIG. 3 is also applicable to a process of transferring data between storage nodes within the same SAN, such as, for example, transfers between storage nodes 20 a′ and 20 a″. In addition, embodiments of the present invention provide improved network performance even where the source and target locations for a transfer are within the same file system within the same storage node 20 (intra-storage transfer). -
FIG. 4 is a flow diagram of a process for intra-storage transfer according to one embodiment of the invention. The sequence of steps of the process is not fixed, but can be altered into any desired sequence as recognized by a person of skill in the art. Also, the person of skill in the art will recognize that one or more of the steps of the process may be executed concurrently with one or more other steps. For purposes of this example, it is assumed that the file to be transferred is striped across source storage nodes 20 a′, 20 a″, and that after its transfer, the file is striped across target storage nodes 20 a′, 20 a″. - The process starts, and in
step 400, the user interface module 15 a at the source client 10 a is invoked for initiating the intra-storage transfer. According to one embodiment, the client indicates an intra-storage transfer by copying a file from a source folder and identifying a target folder in which to store the copied file as the target location. The target folder may be the same as or different from the source folder. - In
step 402, the user interface module 15 a bundles the file path information and target location provided by the user into a request packet, and in step 404, transmits the request packet to the CC module 14 a in a manner similar to step 304 of FIG. 3. The request packet may include, without limitation, the file path of the file to be transferred, and the address of the target location. - In
step 406, the CC module 14 a queries the file system managed by the file system controller 12 a for obtaining one or more source logical blocks for the file specified in the request packet, in a manner similar to step 306 of FIG. 3. - In
step 408, the CC module 14 a identifies the target location as being within the same SAN as the source location by comparing, for example, the address of the source SAN with the address of the target SAN, and pre-allocates space in the file system in a manner similar to step 310 of FIG. 3. In pre-allocating the space, the CC module 14 a identifies one or more target logical blocks, and storage nodes 20 a′, 20 a″ associated with the identified target logical blocks, that are to store the transferred file. - In
step 410, the CC module 14 a makes a request to the supervisor 24 a′, 24 a″ in each storage node 20 a′, 20 a″ storing the desired file, over the data communications network 34. The information passed to the supervisors with the request may include, for example, a list of source logical blocks from which to retrieve the file, and a list of target logical blocks in which to store a new copy of the file. - In
step 412, each supervisor 24 a′, 24 a″ takes the role of a source and stores at least the source logical blocks in a source queue (not shown) for handling. - In
step 414, each supervisor 24 a′, 24 a″ also takes the role of a target and stores at least the target logical blocks in a target queue (not shown) for handling. - Depending on the role, each
supervisor 24 a′, 24 a″ spawns a source or target worker thread in step 416, for respectively retrieving data from, or storing data into, the corresponding blocks of the mass storage devices 22 a′, 22 a″. Specifically, each source worker thread 26 a′, 26 a″ interacts with the disk array controller 28 a′, 28 a″ to copy the data in the source logical blocks into a memory of the controller. Each target worker thread (not shown) also interacts with the disk array controller 28 a′, 28 a″ to take the data copied into the memory, and write it into the physical blocks of the mass storage devices 22 a′, 22 a″ that correspond to the target logical blocks. - As a person of skill in the art should recognize, the intra-storage data transfer mechanism of
FIG. 4 bypasses the SAN 32 a and thus avoids creating traffic on the SAN 32 a that would generally occur during a conventional intra-storage data transfer event. Instead of traversing through the SAN (e.g. once for reading the data and once for writing the data), data is simply read into the memory of the disk array controller 28 a′, 28 a″, and the read data is written to physical blocks of the mass storage devices 22 a′, 22 a″ that correspond to the target logical blocks without traversing the SAN. - According to one embodiment of the invention, each CC module 14 maintains enough information about the supervisors 24, storage nodes 20, and LUNs within the storage nodes, to understand which resources are controlled by each supervisor. Each supervisor module registers with the parent CC module 14 in order to provide this understanding to the CC module.
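The intra-storage path of FIG. 4, in which data moves from source blocks to target blocks through controller memory without touching the SAN, can be sketched in a few lines. A single dictionary stands in for the node's block storage, and the block addressing is an illustrative assumption:

```python
# Sketch of the FIG. 4 intra-storage copy: read source blocks into memory,
# then write them to the target blocks on the same storage node.

def intra_storage_copy(store, src_blocks, tgt_blocks):
    memory = [store[b] for b in src_blocks]   # source worker: read into controller memory
    for b, chunk in zip(tgt_blocks, memory):  # target worker: write from memory
        store[b] = chunk

disk = {(3, b): f"blk{b}" for b in range(10, 20)}  # LUN 3, blocks 10-19
intra_storage_copy(
    disk,
    [(3, b) for b in range(10, 15)],   # source: LUN 3, blocks 10-14
    [(5, b) for b in range(0, 5)],     # target: LUN 5, blocks 0-4
)
```

The two loops correspond to the source and target worker threads of step 416; neither touches anything outside the node's own controller.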
-
FIG. 5 is a flow diagram of a process for registering a supervisor module 24 with a parent CC module 14 according to one embodiment of the invention. The sequence of steps of the process is not fixed, but can be altered into any desired sequence as recognized by a person of skill in the art. Also, the person of skill in the art will recognize that one or more of the steps of the process may be executed concurrently with one or more other steps. - The registration process may be invoked, for instance, during power-up of the storage node 20 hosting the supervisor module 24. In this regard, in
step 500, the supervisor module 24 discovers available LUNs in the corresponding storage node 20. The discovery is performed according to any conventional mechanism known in the art. According to one embodiment, the supervisor module 24 resides on the disk array controller 28 and has access via an internal controller API to the list of all LUNs and the information that describes those LUNs such as, for example, LUN Inquiry Data (e.g. identifiers, make, model, serial number, manufacturer, and the like). The supervisor module issues commands via the controller API to obtain information about all LUNs that the controller is presenting. The LUN information is collected into an internal data structure and subsequently sent to the CC module. - In
step 502, the supervisor module sends the list of LUNs to the parent CC module 14. According to one embodiment, the supervisor module transmits the list of LUNs in a broadcast message along with a request for registration. Each LUN is identified by a serial number which is unique for each LUN within a SAN. The request for registration may also contain a supervisor ID for the registering supervisor module 24, and storage node ID of the storage node 20 hosting the supervisor. Although a broadcast protocol is anticipated, other conventional mechanisms of transmitting the request may also be employed. - In
step 504, the parent CC module 14 receives the request for registration and maps the list of LUNs to the corresponding supervisor ID and/or the storage node ID. The mapping information may be stored, for example, in a mapping table stored in a memory accessible to the CC module 14. According to one embodiment of the invention, the mapping table may further contain the LUNs' pathnames and major/minor numbers on the file system controller 12, and/or other information on how data is stored in the file system. - According to one embodiment, addition or removal of LUNs causes the corresponding supervisors to send updates to the parent CC modules 14. In this manner, the CC modules are kept up to date on the available resources controlled by the supervisor modules 24.
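The registration handshake of FIG. 5 (steps 500-504) can be sketched as a discovery step followed by a mapping-table update. The controller inventory, serial numbers, and IDs below are invented for illustration; a real implementation would obtain LUN inquiry data through the controller API:

```python
# Hypothetical sketch of supervisor registration: discover LUNs (step 500),
# send them with the supervisor/node IDs (step 502), and have the CC module
# map each LUN serial number to its owner (step 504).

def discover_luns(controller_inventory):
    """Step 500: collect LUN identifiers from the controller API (stubbed)."""
    return [{"serial": serial, "lun": lun} for lun, serial in controller_inventory.items()]

def register(cc_table, supervisor_id, node_id, luns):
    """Steps 502-504: the CC module maps each LUN to its supervisor and node."""
    for info in luns:
        cc_table[info["serial"]] = {
            "supervisor": supervisor_id,
            "node": node_id,
            "lun": info["lun"],
        }

cc_mapping = {}
luns = discover_luns({3: "SN-0003", 5: "SN-0005"})
register(cc_mapping, supervisor_id="24a'", node_id="20a'", luns=luns)
```

Keying the table by LUN serial number mirrors the text's note that serial numbers are unique within a SAN; LUN addition or removal would simply re-run the registration against the same table.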
-
FIG. 6 is a block diagram illustrating an exemplary data transfer of files stored in specific LUNs according to one embodiment of the invention. According to the illustrated example, the CC module at a source location (e.g. CC module 14 a) receives a request to transfer two files to a target location. The source CC module 14 a communicates with the file system controller 12 a and determines that one file is located in blocks 25-75 of LUN 3, and a second file is located in blocks 50-75 of LUN 5, both of which are controlled by a first controller 28 a′ (e.g. a RAID-1 controller). - The
source CC module 14 a sends a request with the names of the files and the corresponding file sizes to the CC module at the target location (e.g. CC module 14 b). The target CC module 14 b allocates space for the first file in blocks 3-53 of LUN 3 under a second controller 28 b′ (e.g. RAID-3 controller), and space for the second file in blocks 100-125 of LUN 6 under a third controller 28 b″ (e.g. RAID-4 controller). According to one embodiment, the allocation information is sent back to the source CC module 14 a. - The
target CC module 14 b requests the target supervisor modules 24 b′ and 24 b″ to set up the target worker threads 26 b′, 26 b″ to receive the data transfer. Alternatively, the request may be transmitted by the source supervisor module 24 a′ in a peer-to-peer communication. - The
source CC module 14 a also requests the source supervisor module 24 a′ to set up the source worker threads 26 a′, 26 a″ for concurrently transferring the data in blocks 25-75 of LUN 3, and blocks 50-75 of LUN 5, over the private data mover network 36, to target worker threads 26 b′ and 26 b″, respectively. In this regard, each source worker thread 26 a′, 26 a″ receives the ID of the target worker thread 26 b′, 26 b″ to which it is to transmit its retrieved data. - Once the data is received, the
target worker threads 26 b′, 26 b″ concurrently write the received data to blocks 3-53 of LUN 3, and blocks 100-125 of LUN 6, respectively. The source and target worker threads report their status to their corresponding supervisor modules, and terminate when the data transfer is complete, or when they encounter an unrecoverable error. - As a person of skill in the art will appreciate, embodiments of the present invention have minimal impact on the I/O performance of the
data communications network 34 and/or SAN 32 since the storage nodes 20 have direct local physical access to the disk drives 22. Thus, the data communications network 34 and/or SAN 32 may be fully utilized for normal production data traffic. Furthermore, the private network 36 allows the multiple storage nodes 20 to transfer data in parallel in a point-to-point dynamic interconnect, making the data transfer faster than in conventional systems that utilize the SAN 32 and/or data communications network 34 for the data transfer. Using an intelligent, fast, and fabric-appropriate switch, the storage nodes 20 can drive data at up to line speeds without inter-node interference. - According to another embodiment of the invention, a single master worker thread (MWT) is spawned at each storage node 20. Instead of spawning a separate worker thread for each data transfer request, the source MWT receives instructions from the CC module 14 for the multiple data transfers and does all the work of retrieving all of the file's extents/blocks and shipping them to the target MWT on another SAN. This embodiment may be simpler than the above-described embodiment and may reduce setup costs because only one worker thread is spawned. However, it may result in reduced performance because it uses a single network connection from a single storage node to send all of the data to other storage nodes and does not perform simultaneous transfers as is the case with multiple worker threads.
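The FIG. 6 walkthrough above, target-side allocation followed by concurrent point-to-point transfers, can be sketched as follows. The function names, data shapes, and the use of in-process queues as stand-ins for connections over the private data mover network 36 are illustrative assumptions, not the disclosed implementation.

```python
import queue
import threading

def allocate(free_extents, files):
    # Target CC module: carve space for each (name, nblocks) request out of
    # per-LUN free extents and return the allocation map that is sent back
    # to the source CC module.
    alloc = {}
    for name, nblocks in files:
        for lun, (start, end) in free_extents.items():
            if end - start + 1 >= nblocks:
                alloc[name] = (lun, start, start + nblocks - 1)
                free_extents[lun] = (start + nblocks, end)
                break
    return alloc

def source_worker(blocks, channel):
    # Source worker thread: retrieve its assigned extent and ship the
    # blocks to its paired target worker over a point-to-point channel.
    for block in blocks:
        channel.put(block)
    channel.put(None)  # end-of-transfer marker

def target_worker(channel, dest):
    # Target worker thread: write received blocks into the pre-allocated
    # extent, then terminate.
    while (block := channel.get()) is not None:
        dest.append(block)

# Target-side allocation for the two files of FIG. 6: file1 is 51 blocks
# (source blocks 25-75 of LUN 3), file2 is 26 blocks (50-75 of LUN 5).
free = {3: (3, 53), 6: (100, 199)}
allocation = allocate(free, [("file1", 51), ("file2", 26)])

# One source/target worker pair per extent, transferring concurrently.
extents = {"file1": range(25, 76), "file2": range(50, 76)}
received = {name: [] for name in extents}
threads = []
for name, blocks in extents.items():
    ch = queue.Queue()
    threads.append(threading.Thread(target=source_worker, args=(list(blocks), ch)))
    threads.append(threading.Thread(target=target_worker, args=(ch, received[name])))
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A single master worker thread, as in the alternative embodiment just described, would instead loop over both extents in one thread over one connection, trading this per-extent concurrency for simpler setup.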
- Another embodiment is structurally similar to the embodiments described above, with the difference being that a copy of the file system client (e.g., the SNFS client) runs on each storage node 20 in a SAN. According to this embodiment, the source worker threads use standard file system calls to retrieve the data to be transferred to a target SAN. Because each storage node runs its own copy of the file system client, the source worker thread on the particular node retrieves only chunks of the file that are physically present on that node. Although such an embodiment may reduce system complexity by allowing the worker threads direct access to file system information, it may result in increased costs due to licensing fees that may be required to install the file system client on each storage node. In addition, the overhead of the file system client in the I/O path may also reduce performance.
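With a file system client on every storage node, as in this embodiment, each node's source worker thread can filter a file's chunks down to those physically present on its own node. A minimal sketch, with hypothetical chunk metadata and node names:

```python
def node_local_chunks(chunk_map, node_id):
    # Each node's source worker thread retrieves, via standard file system
    # calls, only the chunks physically present on that node.
    return [chunk for chunk, node in chunk_map if node == node_id]

# Hypothetical layout of one file distributed over two storage nodes.
chunk_map = [("chunk0", "node-A"), ("chunk1", "node-B"), ("chunk2", "node-A")]
local = node_local_chunks(chunk_map, "node-A")
```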
- Another embodiment of the present invention is similar to the embodiment with a single master worker thread, except that it includes a single copy of the file system client per SAN, running on one of the storage nodes. This local file system client knows how to access all of the source file data regardless of how it is distributed over the multiple storage nodes 20.
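The single per-SAN file system client described above can be sketched as one catalog that resolves every extent of a source file, regardless of which storage node holds it. The catalog contents and node names are illustrative assumptions.

```python
# Hypothetical extent catalog held by the one file system client per SAN:
# file name -> list of (storage node, LUN, first block, last block).
catalog = {
    "file1": [("node-A", 3, 25, 75)],
    "file2": [("node-A", 5, 50, 60), ("node-B", 5, 61, 75)],
}

def resolve_extents(name):
    # The master worker thread uses the local client to locate all source
    # data, however it is distributed over the storage nodes.
    return catalog[name]

extents = resolve_extents("file2")
```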
- While the present invention has been described in connection with certain exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims, and equivalents thereof.
Claims (30)
1. A networked data storage system comprising:
a source mass storage device storing source data, wherein the source mass storage device is coupled to a source client via a storage area network (SAN);
a target mass storage device coupled to the source mass storage device via a private data network;
wherein, in response to a first type of request from the source client, the source mass storage device is configured to provide the source data to the source client via the SAN, and in response to a second type of request from the source client, the source mass storage device is configured to provide the source data to the target mass storage device via the private data network, wherein in providing the source data to the target mass storage device, the source mass storage device is configured to receive an identifier identifying the target mass storage device.
2. The networked data storage system of claim 1, wherein the first type of request is a request to read the source data and the second type of request is a request to copy or move the source data to the target mass storage device.
3. The networked data storage system of claim 1, wherein the source mass storage device is configured to determine the identifier identifying the target mass storage device based on information provided in the second type of request.
4. The networked data storage system of claim 3, wherein the information provided in the second type of request is an address of a target SAN, wherein the target mass storage device is coupled to a target client via the target SAN.
5. The networked data storage system of claim 4, wherein the target SAN is the same as the SAN coupling the source client to the source mass storage device.
6. The networked data storage system of claim 1, wherein the source mass storage device includes a processor and a memory storing computer program instructions, the processor being configured to execute the program instructions, the program instructions including:
receiving a list of source blocks;
retrieving the source data based on the source blocks;
writing the retrieved source data in the memory;
reading the source data from the memory;
generating a request packet including the read source data;
establishing connection with the target mass storage device; and
transmitting the request packet to the target mass storage device via the established connection.
7. The networked data storage system of claim 6, wherein the connection is a point-to-point connection.
8. The networked data storage system of claim 6, wherein the source blocks are identified by a file system controller based on file system metadata.
9. The networked data storage system of claim 1, wherein the target mass storage device is configured to pre-allocate storage space for storing the source data.
10. The networked data storage system of claim 9, wherein the target mass storage device is configured to receive the source data from the source mass storage device and store the received source data in the pre-allocated storage space.
11. The networked data storage system of claim 1, wherein the source data is stored in a plurality of source mass storage devices, wherein each of the plurality of source mass storage devices is configured to concurrently retrieve a portion of the source data and provide the retrieved portion to one or more target mass storage devices for storing therein.
12. The networked data storage system of claim 1, wherein in response to a plurality of second type requests from one or more source clients, the source mass storage device is configured to concurrently provide data requested by each of the plurality of second type requests, to one or more target mass storage devices, for concurrently storing the data in the one or more target mass storage devices.
13. The networked data storage system of claim 1, wherein the private data network provides a point-to-point communication channel between the source mass storage device and the target mass storage device for transferring the source data from the source mass storage device to the target mass storage device.
14. The networked data storage system of claim 1, wherein the source data from the source mass storage device is transferred to the target mass storage device independent of involvement by the source client in moving the source data.
15. The networked data storage system of claim 1, wherein in response to a third type of request from the source client, the source mass storage device is configured to transfer the source data from a first location in the source mass storage device to a second location in the source mass storage device, wherein the transfer bypasses the SAN.
16. In a networked data storage system including a source mass storage device coupled to a source client via a storage area network (SAN), and a target mass storage device coupled to the source mass storage device via a private data network, a method for transferring source data stored in the source mass storage device comprising:
in response to a first type of request, providing the source data to the source client by the source mass storage device via the SAN; and
in response to a second type of request, providing to the source mass storage device an identifier identifying the target mass storage device, and further providing, based on the identifier, the source data to the target mass storage device via the private data network.
17. The method of claim 16, wherein the first type of request is a request to read the source data and the second type of request is a request to copy or move the source data to the target mass storage device.
18. The method of claim 16, wherein the determining of the identifier identifying the target mass storage device is based on information provided in the second type of request.
19. The method of claim 18, wherein the information provided in the second type of request is an address of a target SAN, wherein the target mass storage device is coupled to a target client via the target SAN.
20. The method of claim 19, wherein the target SAN is the same as the SAN coupling the source client to the source mass storage device.
21. The method of claim 16 further comprising:
receiving by the source mass storage device a list of source blocks;
retrieving by the source mass storage device the source data based on the source blocks;
writing by the source mass storage device the retrieved source data in the memory;
reading by the source mass storage device the source data from the memory;
generating by the source mass storage device a request packet including the read source data;
establishing by the source mass storage device connection with the target mass storage device; and
transmitting by the source mass storage device the request packet to the target mass storage device via the established connection.
22. The method of claim 21, wherein the connection is a point-to-point connection.
23. The method of claim 21, wherein the source blocks are identified by a file system controller based on file system metadata.
24. The method of claim 16 further comprising:
pre-allocating by the target mass storage device storage space for storing the source data.
25. The method of claim 24 further comprising:
receiving by the target mass storage device the source data from the source mass storage device; and
storing the received source data in the pre-allocated storage space.
26. The method of claim 16, wherein the source data is stored in a plurality of source mass storage devices, the method comprising:
concurrently retrieving by each of the plurality of source mass storage devices a portion of the source data; and
providing the retrieved portion to one or more target mass storage devices for storing therein.
27. The method of claim 16 further comprising:
in response to a plurality of second type requests from one or more source clients, concurrently providing by the source mass storage device data requested by each of the plurality of second type requests, to one or more target mass storage devices, for concurrently storing the data in the one or more target mass storage devices.
28. The method of claim 16, wherein the private data network provides a point-to-point communication channel between the source mass storage device and the target mass storage device for transferring the source data from the source mass storage device to the target mass storage device.
29. The method of claim 16, wherein the source data from the source mass storage device is transferred to the target mass storage device independent of involvement by the source client in moving the source data.
30. The method of claim 16 further comprising:
in response to a third type of request from the source client, transferring by the source mass storage device the source data from a first location in the source mass storage device to a second location in the source mass storage device, wherein the transfer bypasses the SAN.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/333,992 US20130166670A1 (en) | 2011-12-21 | 2011-12-21 | Networked storage system and method including private data network |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130166670A1 true US20130166670A1 (en) | 2013-06-27 |
Family
ID=48655635
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/333,992 Abandoned US20130166670A1 (en) | 2011-12-21 | 2011-12-21 | Networked storage system and method including private data network |
Country Status (1)
Country | Link |
---|---|
US (1) | US20130166670A1 (en) |
Patent Citations (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6654830B1 (en) * | 1999-03-25 | 2003-11-25 | Dell Products L.P. | Method and system for managing data migration for a storage system |
US6466978B1 (en) * | 1999-07-28 | 2002-10-15 | Matsushita Electric Industrial Co., Ltd. | Multimedia file systems using file managers located on clients for managing network attached storage devices |
US7089293B2 (en) * | 2000-11-02 | 2006-08-08 | Sun Microsystems, Inc. | Switching system method for discovering and accessing SCSI devices in response to query |
US7165096B2 (en) * | 2000-12-22 | 2007-01-16 | Data Plow, Inc. | Storage area network file system |
US20120278450A1 (en) * | 2000-12-22 | 2012-11-01 | Dataplow, Inc. | Storage area network file system |
US20040267902A1 (en) * | 2001-08-15 | 2004-12-30 | Qing Yang | SCSI-to-IP cache storage device and method |
US8327004B2 (en) * | 2001-10-05 | 2012-12-04 | International Business Machines Corporation | Storage area network methods and apparatus with centralized management |
US7457846B2 (en) * | 2001-10-05 | 2008-11-25 | International Business Machines Corporation | Storage area network methods and apparatus for communication and interfacing with multiple platforms |
US20030105830A1 (en) * | 2001-12-03 | 2003-06-05 | Duc Pham | Scalable network media access controller and methods |
US7606167B1 (en) * | 2002-04-05 | 2009-10-20 | Cisco Technology, Inc. | Apparatus and method for defining a static fibre channel fabric |
US7765329B2 (en) * | 2002-06-05 | 2010-07-27 | Silicon Graphics International | Messaging between heterogeneous clients of a storage area network |
US20040143642A1 (en) * | 2002-06-28 | 2004-07-22 | Beckmann Curt E. | Apparatus and method for fibre channel data processing in a storage process device |
US7103638B1 (en) * | 2002-09-04 | 2006-09-05 | Veritas Operating Corporation | Mechanism to re-export NFS client mount points from nodes in a cluster |
US7080223B2 (en) * | 2002-10-15 | 2006-07-18 | International Business Machines Corporation | Apparatus and method to manage and copy computer files |
US7191225B1 (en) * | 2002-11-27 | 2007-03-13 | Veritas Operating Corporation | Mechanism to provide direct multi-node file system access to files on a single-node storage stack |
US7275103B1 (en) * | 2002-12-18 | 2007-09-25 | Veritas Operating Corporation | Storage path optimization for SANs |
US8099474B2 (en) * | 2003-02-14 | 2012-01-17 | Promise Technology, Inc. | Hardware-accelerated high availability integrated networked storage system |
US20040236868A1 (en) * | 2003-05-22 | 2004-11-25 | International Business Machines Corporation | Method, system, and program for performing a data transfer operation with respect to source and target storage devices in a network |
US7849274B2 (en) * | 2003-12-29 | 2010-12-07 | Netapp, Inc. | System and method for zero copy block protocol write operations |
US20130151646A1 (en) * | 2004-02-13 | 2013-06-13 | Sriram Chidambaram | Storage traffic communication via a switch fabric in accordance with a vlan |
US20060112243A1 (en) * | 2004-11-19 | 2006-05-25 | Mcbride Gregory E | Application transparent autonomic availability on a storage area network aware file system |
US7913056B2 (en) * | 2004-12-20 | 2011-03-22 | Emc Corporation | Method to perform parallel data migration in a clustered storage environment |
US20060277383A1 (en) * | 2005-06-03 | 2006-12-07 | Lefthand Networks, Inc. | System for Providing Multi-path Input/Output in a Clustered Data Storage Network |
US20070011424A1 (en) * | 2005-07-08 | 2007-01-11 | Cisco Technology, Inc. | Apparatus and methods for facilitating data tapping with host clustering in a storage area network |
US8082362B1 (en) * | 2006-04-27 | 2011-12-20 | Netapp, Inc. | System and method for selection of data paths in a clustered storage system |
US20080010409A1 (en) * | 2006-07-05 | 2008-01-10 | Cisco Technology, Inc. | Dynamic, on-demand storage area network (SAN) cache |
US8019915B2 (en) * | 2007-04-16 | 2011-09-13 | Tixel Gmbh | Method and device for controlling access to multiple applications |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150281361A1 (en) * | 2013-02-20 | 2015-10-01 | Panasonic Intellectual Property Management Co., Ltd. | Wireless access device and wireless access system |
US20150278246A1 (en) * | 2013-02-20 | 2015-10-01 | Panasonic Intellectual Property Management Co., Ltd. | Wireless access device and wireless access system |
US9762671B2 (en) * | 2013-02-20 | 2017-09-12 | Panasonic Intellectual Property Management Co., Ltd. | Wireless access device and wireless access system |
US10162833B2 (en) * | 2013-02-20 | 2018-12-25 | Panasonic Intellectual Property Management Co., Ltd. | Wireless access device and wireless access system |
US20140317367A1 (en) * | 2013-04-22 | 2014-10-23 | Hitachi, Ltd. | Storage apparatus and data copy control method |
US9170750B2 (en) * | 2013-04-22 | 2015-10-27 | Hitachi, Ltd. | Storage apparatus and data copy control method |
US11171930B2 (en) * | 2016-01-08 | 2021-11-09 | Capital One Services, Llc | Methods and systems for securing data in the public cloud |
US11843584B2 (en) | 2016-01-08 | 2023-12-12 | Capital One Services, Llc | Methods and systems for securing data in the public cloud |
US20190171828A1 (en) * | 2017-12-01 | 2019-06-06 | Bank Of America Corporation | Digital Data Processing System For Efficiently Storing, Moving, And/Or Processing Data Across A Plurality Of Computing Clusters |
US10678936B2 (en) * | 2017-12-01 | 2020-06-09 | Bank Of America Corporation | Digital data processing system for efficiently storing, moving, and/or processing data across a plurality of computing clusters |
US10839090B2 (en) | 2017-12-01 | 2020-11-17 | Bank Of America Corporation | Digital data processing system for efficiently storing, moving, and/or processing data across a plurality of computing clusters |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10782880B2 (en) | Apparatus and method for providing storage for providing cloud services | |
US9639277B2 (en) | Storage system with virtual volume having data arranged astride storage devices, and volume management method | |
US20230333768A1 (en) | Object store mirroring based on checkpoint | |
US20210303165A1 (en) | Dynamic recycling algorithm to handle overlapping writes during synchronous replication of application workloads with large number of files | |
US9578101B2 (en) | System and method for sharing san storage | |
US11797491B2 (en) | Inofile management and access control list file handle parity | |
US11797213B2 (en) | Freeing and utilizing unused inodes | |
US11449260B2 (en) | Persistent hole reservation | |
US11907261B2 (en) | Timestamp consistency for synchronous replication | |
US10503693B1 (en) | Method and system for parallel file operation in distributed data storage system with mixed types of storage media | |
US20130166670A1 (en) | Networked storage system and method including private data network | |
US10872036B1 (en) | Methods for facilitating efficient storage operations using host-managed solid-state disks and devices thereof | |
US10031682B1 (en) | Methods for improved data store migrations and devices thereof | |
US20170318093A1 (en) | Method and System for Focused Storage Access Notifications from a Network Storage System | |
WO2015127647A1 (en) | Storage virtualization manager and system of ceph-based distributed mechanism | |
US20210334025A1 (en) | Methods for handling storage devices with different zone sizes and devices thereof | |
EP3788501B1 (en) | Data partitioning in a distributed storage system | |
KR101470857B1 (en) | Network distributed file system and method using iSCSI storage system | |
US10768834B2 (en) | Methods for managing group objects with different service level objectives for an application and devices thereof | |
WO2017180143A1 (en) | Distributed lock management enabling scalability | |
KR20140060959A (en) | System and method for load balancing of network distributed file system using iscsi storage system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ACTIVE STORAGE, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WAYDA, JAMES GEORGE;BERTAGNOLLI, JOHN L.;LONSDALE, MARK;AND OTHERS;SIGNING DATES FROM 20120501 TO 20120703;REEL/FRAME:028504/0452 |
|
AS | Assignment |
Owner name: SILICON VALLEY BANK, CALIFORNIA Free format text: SECURITY AGREEMENT;ASSIGNOR:ACTIVE STORAGE, INC.;REEL/FRAME:028808/0869 Effective date: 20120817 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |