i in the Sky: the iSCSI Plan versus Fibre-Channel SAN

David Doering

August 2001 | We measure technology change today in Internet "years." These "years" last six months at best or as little as three months at worst, rather than a traditional twelve. In the past, you could depend on a given approach lasting years before being replaced. Now, we can watch the birth and maturation of a new standard almost overnight.

Few technologies have demonstrated the fluidity of the technology frontier as clearly as Adaptec's brilliant initiative in Ethernet-based storage. Just one year ago, at Networld+Interop 2000, Adaptec debuted its EtherStor concept. EtherStor looked to replace standard SCSI or Fibre-Channel cables with ubiquitous Cat5 cable running Gigabit Ethernet to connect storage devices to servers and clients. This startling innovation galvanized the industry almost overnight. Many other vendors (of both adapters and devices) latched onto the idea of Ethernet connectivity to turn EtherStor into a standard. Thus, the up-and-coming iSCSI standard was born.

This year's N+I had iSCSI concepts in droves, which, of course, brought us heaps of hype: "Fibre-Channel's dead!" cried one vendor. "iSCSI is the future of all storage!" proclaimed another. On the opposite side of the aisle, though, sat the FC-AL vendors, who decried iSCSI as an "underperforming solution" with "no standards now." Clearly, it is time for EMedia to step in and bring some daylight to the discussion.

TOMORROW'S STORAGE

A few years ago, we looked at how network storage was developing. We could see that the demand for multimedia, both from the end-users who consume it and from the artists at production facilities who create it, required a new way of handling storage. We saw a quick migration from a multiple-server, direct-attached storage model to a more flexible architecture built on NAS and SAN.

There are several drawbacks to direct-attached storage. First, the server is a single bottleneck for access. Second, it is hardly scalable: if additional storage is needed, you'll need to add a new server. Sadly, each new server requires additional licenses (with substantial fees), which often makes the purchase cycle for such additions lengthy. (While it is always possible to add storage to an existing server, doing so is no trivial task; adding another server is often easier, despite those difficulties.)

The state of the art in the year 2000 was a different approach from the familiar server-attached model. As users required more network storage, administrators added network-attached storage (NAS) appliances to hold it. Because these boxes natively supported NDS or MS Domain Services, they could be configured and secured using standard Windows tools. In effect, the once-mighty file server was relegated to the smaller, but critical, role of security server (in a typical deployment, a Novell NDS server). High-performance storage requirements demanded a different solution: an FC-AL SAN environment supporting the databases behind the mail, Web, and business SQL servers. SANs provide an easily expanded storage pool with redundant links and high-speed connections.

2001–THE REALITY

It's late 2001 now, and we've seen limitations in the current storage landscape that temper the dreams the 1990s and 2000 held for storage access. First, Fibre-Channel products from various vendors have proven quite resistant to peaceful co-existence; as late as June 2001, FC vendors were still announcing further "commitments" to ensuring compatibility between their products. Second, the number of Fibre-Channel-trained administrators remains small. Since 40-50% of sites do their own storage deployments, this shortage of skills has discouraged FC SAN growth. Third, as a result of modest demand, commodity pricing has never arrived for SANs; to put SANs at the commodity level reached by popular NAS implementations (adjusting for scale), we'd have to see pricing drop at least as low as $100,000–if not $50,000–and we aren't even close.

On the NAS side, the proliferation of network-attached appliances has proceeded apace. But the lack of heterogeneous tools to integrate these devices into one manageable whole has created an upper limit on that growth. Also, the existing infrastructure of 10/100BaseT Ethernet simply hasn't allowed substantial numbers of NAS boxes to shuttle data back and forth across the network.

Finally, protocol support in NAS units was and has remained spotty. While some used IP from the beginning (as a sort of cheap intranet Web server), others solely supported prominent network protocols from Microsoft, Apple, or Novell.

INTRODUCING–GIGABIT ETHERNET

The surprising twist in the storyline came when many companies began actively moving towards Gigabit Ethernet as an in-house backbone and/or a high-speed server/storage network. The price of Gig Ether plummeted, which has made it much simpler to upgrade than we might have thought in 1998 or 1999. (Prices for Gig Ether installations are now one-fifth those of comparable Fibre Channel systems.)

Yet, even with this wonderful new level of bandwidth, the initial problem of NAS boxes not talking to one another necessitated one further step.

ENTER iSCSI

NAS clearly demonstrated the value of using the existing Category 5 network cable to do double duty as a storage bus. The Internet created a vast pool of engineers experienced with TCP/IP, the "lingua franca" of the network cable. And the tidal wave of data being generated simply demanded an improved architecture.

So the Adaptec engineers and the other iSCSI players agreed to create a kind of hybrid from existing hardware. Using TCP/IP as the carrier, they would take Fibre Channel's protocol as a model and create a new protocol for carrying SCSI over Gigabit Ethernet–iSCSI.
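The core idea is easy to sketch: take an ordinary SCSI command descriptor block (CDB), wrap it in a framing header, and send it down an ordinary TCP connection. The fragment below is a deliberately simplified illustration of that encapsulation, not the real iSCSI PDU format (which the IETF is still finalizing); the header layout is invented for illustration, and port 3260 is merely the number proposed in the drafts.

```python
# Illustrative sketch only -- not the actual iSCSI PDU layout defined in
# the IETF drafts. It simply shows the encapsulation concept: a SCSI
# command descriptor block (CDB) carried as the payload of a TCP stream.
import socket
import struct

def build_scsi_read10(lba, num_blocks):
    """Build a standard 10-byte SCSI READ(10) CDB."""
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, num_blocks, 0)

def send_command(sock, cdb, task_tag):
    """Wrap the CDB in a minimal, made-up framing header and send it."""
    header = struct.pack(">II", task_tag, len(cdb))  # hypothetical framing
    sock.sendall(header + cdb)

# The target hostname is a placeholder; 3260 is the TCP port proposed
# for iSCSI, but every deployment detail here is an assumption.
target = socket.create_connection(("storage.example.com", 3260))
send_command(target, build_scsi_read10(lba=2048, num_blocks=8), task_tag=1)
```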

The iSCSI-equipped network allows storage devices from RAID systems to optical jukeboxes to tape autoloaders to connect directly to a network cable. A discovery protocol links the new devices up with a sort of storage "DNS" server (called, appropriately enough, the Internet Storage Name Service, or iSNS). The iSNS tracks the available devices and presents this information to a single management console. Operating systems like Windows or NetWare would need to support iSCSI in order to present an integrated picture of storage resources on the network–much like SAN solutions need to do today.
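A rough mental model of that name service (not the actual iSNS wire protocol, whose messages are still being defined in the IETF drafts) is a small registry that targets announce themselves to and that a management console or initiator then queries:

```python
# Toy model of a storage name service: devices register themselves and a
# management console asks what is available. This mimics the *role* iSNS
# plays on an iSCSI network; it is not the iSNS protocol itself.

class StorageNameService:
    def __init__(self):
        self._devices = {}  # name -> (address, device_type)

    def register(self, name, address, device_type):
        """A newly attached target announces itself."""
        self._devices[name] = (address, device_type)

    def query(self, device_type=None):
        """Return all known targets, optionally filtered by type."""
        return {name: info for name, info in self._devices.items()
                if device_type is None or info[1] == device_type}

# The names and addresses below are placeholders for illustration.
isns = StorageNameService()
isns.register("iqn.2001-08.com.example:raid1", "10.0.0.5:3260", "disk")
isns.register("iqn.2001-08.com.example:tape1", "10.0.0.6:3260", "tape")
print(isns.query(device_type="disk"))
```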

In effect, iSCSI cards will be a combination of Gigabit Ethernet NIC and storage bus. On the device side, various vendors will provide gateway "NICs" that connect existing SCSI devices to the network. Because iSCSI rides on TCP/IP, existing Internet connections let storage devices connect over far greater distances than is possible even with Fibre-Channel. Except for latency issues, there's no reason an integrated iSCSI solution couldn't span continental distances, which would make disaster recovery and business continuity solutions much easier to implement.

What's more, the obvious value of using existing TCP/IP connections over Gigabit Ethernet networks makes iSCSI especially compelling. As more sophisticated tools become available, aggregating volumes from independent network-attached iSCSI devices will make them act in many ways like FC SANs.

The defenders of FC-AL cite a few perceived drawbacks to iSCSI. First, iSCSI isn't clearly a block-level or a file-level protocol; FC-AL performs well because it offers database users fast block-level transfers. Second, there's no native redundancy in Gigabit Ethernet the way there is in Fibre-Channel Arbitrated Loop. Third, existing Gig Ether networks are already clogged with client traffic, and adding storage demands to such systems inevitably bogs the whole thing down. Finally, iSCSI must use TCP/IP, and as everyone who's tried it knows, TCP/IP is an incredible CPU hog, often grabbing 50-60% utilization or more just to process packets, let alone do any useful work.

FC-AL SANs are indeed an ideal solution for sites whose needs are architecturally similar to, but no longer adequately met by, traditional direct server-attached storage. In support of multiple database-type servers, the investment in FC-AL is well rewarded with a robust, reliable platform.

However, the majority of network users need a less expensive, yet similarly flexible, architecture. Various iSCSI players are already discussing incorporating both file- and block-level data transfers in the spec. While a single Gig Ether connection is not redundant, it can be made so with multiple cards and a dedicated link between them for data storage devices. Also, real-world experience with Gig Ether suggests significant uptime (easily "five nines" or better), so for many sites the need for redundancy is lower.

The dual-network/backbone approach also moves iSCSI traffic onto a dedicated line, which answers the congestion objection. (Real-world experience also suggests that the Gig Ether connection isn't as clogged as might be supposed.) Finally, the problem of TCP/IP overhead should largely be resolved this year with the introduction of NICs that process TCP/IP packets on-board, eliminating the CPU burden.

THE FINAL HURDLE

The challengers to iSCSI do make one valid point–as of now, there is no standard for it. Although the iSCSI group is working almost daily with the IETF to create the standard, there are still a variety of architectural approaches to decide on. This means the mainstream implementation of iSCSI remains at least a year away.

However, with the promise of 10Gb Ethernet in the wings, the universal understanding of TCP/IP, and the ever-growing need for storage, iSCSI represents a compelling solution and one worth planning for when it does arrive.


Block Versus File–What's the Difference?

One of the primary discussion points about iSCSI versus FC-AL is block-level data transfer versus file-level transfers. Block transfers are faster than file transfers, so databases like block transfers. Users, on the other hand, see data as files. So they prefer to think in terms of file transfers.

The easiest way to describe this distinction is to ask the question, "Where does the file system reside?"

In a NAS environment, each NAS box maintains its own file system. So requests from clients to the NAS box are file requests. In a SAN environment, the file system resides with the SAN management software. So while requests from clients to the SAN manager will be file requests, those sent by the manager to the storage devices in the SAN are not file requests, they are block requests.
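One way to see the difference is in code: a file-level request names a path and lets the remote file system find the bytes, while a block-level request names raw block addresses and leaves the file system to the requester. Both snippets below are schematic; the mount point and device path are placeholders.

```python
# File-level access (the NAS model): the client names a file, and the NAS
# box's own file system resolves that name to blocks on its disks.
with open("/mnt/nas_share/projects/video.mov", "rb") as f:
    f.seek(1_048_576)           # offset within the *file*
    chunk = f.read(65_536)

# Block-level access (the SAN model): the client-side file system or
# database addresses the device directly by logical block number.
BLOCK_SIZE = 512
with open("/dev/sdb", "rb") as disk:        # raw device path, illustrative only
    disk.seek(2048 * BLOCK_SIZE)            # go to logical block 2048
    raw = disk.read(8 * BLOCK_SIZE)         # read 8 blocks; no file names involved
```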

On the plus side for NAS is the fact that the file system is maintained on every box. The downside is that you cannot easily make all those boxes act as one. On the plus side for SANs, it is easy to make all the SAN-attached devices appear as one. The drawback is that all file traffic must pass through the SAN manager on its way to or from the SAN.


Storage Clusters Over Ethernet

While iSCSI provides a simple way to attach and expand network storage, it doesn't answer the needs of database users effectively. Databases routinely use block data transfers to maintain throughput, so FC-AL SANs would hold strong sway over that part of the storage market.

Or do they? Already one iSCSI supporter, LeftHand Networks, has looked into this problem and developed a proprietary solution. They call it "Network Unified Storage" (NUS), which describes both their hardware and a lightweight protocol. In its simplest form, NUS is a NAS box using iSCSI connections to clients. The power of NUS, though, emerges as you add modules to the system. Rather than simply becoming a set of independent NAS boxes, NUS modules appear to clients as one virtual volume. In effect, NUS allows end-users to create actual storage clusters in an IP SAN configuration.
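LeftHand hasn't published the internals of NUS, so the sketch below is only a generic illustration of the underlying idea, presenting several independent modules as one virtual volume by striping logical blocks across them; it does not reflect LeftHand's actual design, and the class and module names are invented for the example.

```python
# Generic illustration of a "virtual volume" spanning several storage
# modules, with logical blocks striped round-robin across them. This is
# NOT LeftHand's NUS design -- just the general aggregation concept.

class VirtualVolume:
    def __init__(self, modules):
        self.modules = modules              # objects exposing read_block(n)

    def read_block(self, logical_block):
        """Map a logical block to (module, local block) and read it there."""
        module_index = logical_block % len(self.modules)
        local_block = logical_block // len(self.modules)
        return self.modules[module_index].read_block(local_block)

class FakeModule:
    """Stand-in for one network-attached storage module."""
    def __init__(self, name):
        self.name = name
    def read_block(self, n):
        return f"<{self.name}: block {n}>"

volume = VirtualVolume([FakeModule("module-a"), FakeModule("module-b")])
print(volume.read_block(0))   # served by module-a
print(volume.read_block(1))   # served by module-b
```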

This means that should a single storage module fail, NUS would continue to perform. If multiple requests come in, NUS responds to them as a system, not as individual NAS components. No one processor answers all the incoming requests, as would be the case with a simple NAS solution.

The drawback is that only LeftHand's NUS units work together to create these clusters. Adopting it would therefore mean a commitment to this product–but one that could be compelling to many, if not most, database users.


Infiniband–iSCSI Trumped?

In the PC world, we deal with at least three separate, distinct hardware connections: internal I/O buses (most often PCI), external storage (often SCSI), and network connections (mostly Ethernet). Each has its own associated protocols, configuration, and limitations. So is it any wonder that setup and diagnostics are problematic?

Each connection has its performance maximums. The most obvious is the PCI bus. At 533MB/sec, it simply isn't going to support multiple connections using advanced technologies. If Ultra-3 SCSI needs 320MB/sec and Gig Ether 125MB/sec, then the server won't handle the load once we add redundant connectors. So the PCI bus becomes a major bottleneck.
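The arithmetic is easy to check. Using the per-device figures cited above, a pair of redundant SCSI channels plus a pair of Gigabit Ethernet links already asks for more sustained bandwidth than the shared PCI bus can deliver:

```python
# Back-of-the-envelope check of the PCI bottleneck, using the figures
# cited in the text above.
PCI_BUS_MB_S   = 533   # shared PCI bus throughput
SCSI_MB_S      = 320   # one Ultra-3 SCSI channel
GIG_ETHER_MB_S = 125   # one Gigabit Ethernet link

# With redundant (dual) connectors for both storage and network:
demand = 2 * SCSI_MB_S + 2 * GIG_ETHER_MB_S
print(f"aggregate demand: {demand} MB/sec vs. PCI bus: {PCI_BUS_MB_S} MB/sec")
# -> aggregate demand: 890 MB/sec vs. PCI bus: 533 MB/sec
```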

The other connections will suffer in turn. First, with advancing CPU speeds, the frontside bus must run faster than 1GB/sec. Memory is speeding up as well; the proposed nDRAM spec calls for an awesome (by today's standards) 12.6GB/sec of throughput!

To answer this demand for better throughput, and to reduce confusion and difficulties in setup and repair, Intel launched a new approach–Infiniband–to replace not just one of these connections (like PCI replaced ISA), but rather to replace all three. Infiniband can do this because it supports multiple channels with each channel handling up to 2.5Gbps (i.e.: 250MB/sec). A 24-channel bi-directional Infiniband link then could support about 6GB/sec.
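The per-link arithmetic, using the figures above (2.5Gbps of signaling per channel, or roughly 250MB/sec of payload), works out as follows:

```python
# Rough Infiniband link arithmetic based on the numbers in the text.
CHANNEL_MB_S = 250      # approx. payload per 2.5Gbps channel
CHANNELS     = 24       # 24-channel bi-directional link

aggregate = CHANNELS * CHANNEL_MB_S
print(f"{aggregate} MB/sec, or about {aggregate / 1000:.0f} GB/sec")
# -> 6000 MB/sec, or about 6 GB/sec
```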

With Infiniband, memory, storage devices, and HBAs would all support a single protocol/connector. Since Infiniband uses optical fiber as well as motherboard traces, it can also connect to any of a variety of storage subsystems like iSCSI. Unlike iSCSI, it uses a fabric approach (more like FC-AL) to interconnect devices. This creates a redundant path and improves performance by avoiding bottlenecks. It also reduces management hassles as all network devices reside on a single managed infrastructure.

The drawback to Infiniband, network nirvana though it seems, is that it is four or five years away.


Companies Mentioned in This Article

Adaptec, Inc.
801 South Milpitas Boulevard, Milpitas, CA 95035; 800/442-7274, 408/945-8600; Fax 408/262-2553; http://www.adaptec.com


David Doering (dave@techvoice.com), an EMedia Magazine contributing editor, is also the Network Observer columnist and a senior analyst with TechVoice, Inc., an Orem, Utah-based consultancy.

Comments? Email us at letters@onlineinc.com.


Copyright 2000-2001 Online, Inc.