November, 2000 | Data is every company's lifeblood.
Lose access to the accounts receivable database and there
goes your access to revenues right along with it. Lose the
parts inventory and there goes your production schedule.
If EMedia Magazine loses its subscriber list, we
lose our readers. Are you there? Hello? Hello?
As we have discussed in past articles, RAID and DVD provide
superior ways to store, archive, and protect our data and
forestall whatever catastrophes and logical conundrums may
befall us if we lose it. Storage Area Networks (SANs) provide
a superior way to protect access to that data. SANs today
have a reputation for being difficult to construct and maintain;
there's lots of talk (most of it true) about incompatibilities
between SAN products from different vendors. Many of these
concerns have not gone away, even with multiple laboratories
claiming to test and ensure compatibility. Still, despite
this dubious reputation, it is possible to design and deploy
an effective SAN today.
WHY SAN IS SANE
Many techie types are drawn to SAN technology simply because
it is the hottest thing going in network storage. Senior management,
however, might need to consider some more realistic goals
to justify SAN deployment. SANs deliver at least three major
benefits to networks:
- Relatively easy scaling of network storage
- Significantly enhanced reliability of data access
- Ease of management for company data
As networked computing use has evolved in the last few years
with enormous increases in networkable storage capacity, throughput,
and performance reliability (and the concomitant expectation
of being able to store and deliver more demanding data types
like high-quality audio and video), file sizes have grown
and file retention has also escalated exponentially. More
data types, such as MPEG video and MP3, are now commonly stored
and served in networked environments, which adds up to ever-increasing
storage demands.
In the old model, to accommodate these demands, we would
have added storage directly to the network server. This
meant taking that server (and its data) offline until we
had fully tested the new configuration. The downtime this
approach imposes is no longer considered a necessary or
acceptable interruption of workflow. SANs provide a way
to continue to add storage devices and capacity in those
devices without pulling a server offline.
We say "relatively easy scaling" since SAN deployment
in existing heterogeneous environments is still problematic.
Also, although in theory you can set up more than 120 devices
on a SAN, in practice one-third to one-half that number
is the limit. However, once you establish a working SAN,
adding large amounts of storage involves far fewer considerations
than adding a similar amount to a traditional network server
using SCSI, for example.
In addition to adding ease and volume to a network's scalability,
SANs also enhance network reliability by providing fewer
points of failure in accessing data than directly host-attached
storage. SANs can be configured to have redundant paths
to the stored data, so cutting or disconnecting a cable
won't sever your connection. You can also configure SANs
so the loss of any one or even several servers still does
not prevent access to your stored data. Clustering (shared
work environment) and fail-over (backup) server strategies
can protect against the loss of the servers themselves.
By securing data onto a SAN, administrators no longer
have widely disparate locations to track down and protect
data. Backup and archiving are simplified (using serverless
backup procedures, for example). You can configure a SAN
so multiple OS clients can access the same data.
THE LAY OF THE SAN
The design of most SANs consists of these components:
- Fibre Channel host-bus adapter (HBA) in each server
accessing the SAN
- Fibre Channel Switch connecting the storage devices
to the HBAs (small SANs could use a lower-cost hub instead)
- Fibre Channel-attached storage devices
- SAN management software on each server (or in a special-purpose
SAN appliance) (see NETWORK OBSERVER, EMedia Magazine,
August 2000, p. 60)
Although we highlight Fibre Channel here (SAN and Fibre Channel
are often, however erroneously, treated as synonymous), there
is another emerging SAN model (still a few months shy of spec
finalization and product availability) that uses Gigabit
Ethernet. The choice of hub or switch
is primarily one of capacity; any SAN capable of supporting
terabytes must be switch-based for performance.
The most common way to create a SAN is to use Fibre Channel
Arbitrated Loop, colloquially known as FC-AL. Fibre Channel
itself is a high-speed serial network standard supporting 1Gbps
throughput over either fiber-optic cable or CAT5/CAT6 copper cable.
(2- and 4-Gbps versions of Fibre Channel are also in the
works.) The Arbitrated Loop component provides for a dual
cable running from each server's HBA to each storage device.
This allows for a redundant connection in the event one
cable or connector is broken.
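The dual-cable loop amounts to simple path failover. Here is a minimal sketch of that idea; the Path class and send() function are illustrative names, not a real FC-AL API:

```python
# A toy model of dual-loop redundancy: each server keeps two FC-AL
# paths to storage and falls back when one breaks.

class Path:
    def __init__(self, name):
        self.name = name
        self.up = True  # goes False when a cable or connector fails

def send(paths, data):
    """Try each redundant path in order; report which one carried the data."""
    for path in paths:
        if path.up:
            return f"sent {len(data)} bytes via {path.name}"
    raise ConnectionError("all paths down")

loop_a, loop_b = Path("loop-A"), Path("loop-B")
print(send([loop_a, loop_b], b"payload"))  # goes out over loop-A
loop_a.up = False                          # simulate a cut cable
print(send([loop_a, loop_b], b"payload"))  # fails over to loop-B
```

Real FC-AL failover happens in the HBA driver and is invisible to applications; the sketch only shows the ordering logic.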
In deploying an FC-AL SAN, there are a number of considerations.
These include HBA choice, cable choice, volume versus file
abstraction schemas, number of devices to support, and finally
operating system choice.
Your Fibre Channel HBA has to be certified for SANs, and not
just a standard FC card; HBAs from QLogic, JNI, and Emulex
can fit this bill. You'll also need to choose between a single-port
and a dual-port HBA. For the loop to work in the SAN, you'll
need two Fibre Channel connections at the server, and you
can accomplish this by using a single dual-port card or two
single-port cards.
The advantage to the dual-port HBA is lower cost. As the
cost per card runs from $1,495 up to $3,995, buying two
cards for each server can run into some money. The advantage
to a single-port card is that you avoid a single point-of-failure
scenario that is a consequence of opting for the dual-port
card. While the dual-port card protects you from cable failure,
it doesn't protect you from the card itself failing. With
single-port cards, if one goes bad, you still have SAN access
through the other.
The choice between copper and fiber-optic cable is primarily one
of distance. Copper supports Fibre Channel connections from 10
to 25 meters (33 to 82.5 feet). Fiber-optic cable can support
distances of 2 to 10 km (1.2 to 6.2 miles). Optical cable
costs more, often much more, than copper. Also, far fewer
engineers understand fiber cable than know how to work with
CAT5 or CAT6.
Some sites already have existing CAT5 cable; however,
you'll need to test any such cable before certifying it
for FC-AL. Older CAT5 simply doesn't support the 1Gbps throughput
of Fibre Channel, hence the creation of the newer CAT6 cable
to ensure the throughput required.
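The distance thresholds above lend themselves to a simple decision helper. This sketch hard-codes the figures quoted here and is purely illustrative:

```python
# Illustrative cable-choice helper: copper handles roughly up to 25 m,
# single-mode fiber up to 10 km. The thresholds come from the article;
# the function itself is a sketch, not a site-survey substitute.

def choose_cable(distance_m):
    if distance_m <= 25:
        return "copper (CAT5/CAT6, tested for 1Gbps)"
    if distance_m <= 10_000:
        return "fiber-optic"
    return "beyond FC-AL range; consider an Ethernet/ATM link"

print(choose_cable(15))     # copper run within a machine room
print(choose_cable(3_000))  # campus-distance fiber run
```

In practice the decision also weighs cost and available expertise, as noted above.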
For what it is worth, Fibre Channel can support all of
its protocols simultaneously. This means you can run IP
traffic, SCSI, and Ethernet 802.2 packets all over the SAN
if you so desire. This eclectic bent may prove useful in
the future, when longer distances of optic cable are available
and users may find IP traffic preferable to simply relying
on SCSI transfers as in traditional FC network use.
IN THE ABSTRACT
No network operating system today natively supports a heterogeneous
SAN. Storing and retrieving data from these OS clients requires
add-on support in one of two forms:
- Volume abstraction
- File system abstraction
Choosing one or the other is dependent on your site demands
rather than any inherent advantage of one over the other.
Volume abstraction, also called "LUN masking," is far
more common and involves the use of volume managers on each
of the servers accessing the SAN. The volume managers talk
with each other and agree on the layout of virtual volumes
on the SAN. (These virtual volumes may consist of a subsection
of disk space on a RAID system, or may combine two or more
devices into a single volume.)
The trick here is that each server only sees the volumes
assigned to it. All other storage devices are "masked" or
hidden from it and therefore the server cannot access them.
(Hence "LUN masking.") This is a simple approach, but it
does have various drawbacks.
First, every server accessing the SAN must have appropriate
volume management software on it. Maintaining a Solaris-NetWare-NT
system using volume managers can be complex if you're trying
to keep all systems upgraded to the same version. Worse,
if a server can access the SAN, but lacks the volume manager,
it can access or corrupt data it shouldn't have access to.
Volume abstraction also limits each server and each server's
users to a limited subset of the SAN's available resources.
Rather than make all storage services available to users,
the administrator must decide ahead of time how to carve
up the storage and allocate it to users statically, rather
than dynamically in real time.
Performance-wise, the volume manager approach has a plus
and a minus. The minus is the overhead in working through
the volume manager software. The plus is that data can be
accessed on a block level, rather than on a file-by-file
basis. For databases, block-level access constitutes a distinct
improvement. For video archives, file-level access is preferable.
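The visibility rule behind LUN masking can be sketched in a few lines. This toy model shows only the lookup, not the cooperative volume managers that enforce it; all names are hypothetical:

```python
# A toy model of LUN masking: each server sees only the virtual
# volumes assigned to it; everything else is hidden.

masking_table = {
    "nt-server-1":      {"lun-0", "lun-1"},
    "solaris-server-1": {"lun-2"},
}

def visible_luns(server, all_luns):
    """Return the subset of LUNs this server is allowed to see."""
    allowed = masking_table.get(server, set())
    return sorted(lun for lun in all_luns if lun in allowed)

all_luns = ["lun-0", "lun-1", "lun-2", "lun-3"]
print(visible_luns("nt-server-1", all_luns))       # ['lun-0', 'lun-1']
print(visible_luns("solaris-server-1", all_luns))  # ['lun-2']
# Note: this model hides everything from an unknown server, but as the
# article warns, a real server that lacks the volume manager bypasses
# the masking entirely and can reach (and corrupt) the storage.
print(visible_luns("rogue-server", all_luns))      # []
```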
File system abstraction, on the other hand, creates a
new file system addressing all storage on the SAN. To do
this, you load a package such as Tivoli's SANergy or HP's
SAN Manager on each server. All requests for data on the
SAN get processed through this software. The advantage here
is similar to volume abstraction: the user's workstation
needn't know about the SAN. Instead, the client workstation
simply sees the files as part of its OS directory access.
The added advantage of file system abstraction is that
the entire SAN can be accessed from multiple servers and
clients without having to designate specific virtual volumes
to each server. (This isn't to say that such a restriction
couldn't be implemented using SANergy or SAN Manager, only
that it doesn't have to be.) This file system approach allows
all network users to access all SAN resources with appropriate
file system controls.
The drawback to this is the need for a separate file
system, which may introduce a learning curve for network
support. Also, file system access may have more overhead
than a pure volume abstraction play. As we mentioned, however,
for video archives and other large media files, the file
system approach can perform as well as or better than the
block-level access of the volume manager.
SATE AND SWITCH
The number of storage devices you currently have or will want
to support in the near term will dictate what size FC-AL switch
you need to buy. These come in 8, 16, 32, and 64 port versions.
SANs support up to 127 devices, so two 64-port switches would
be the maximum.
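Sizing the switch is mostly arithmetic. A rough sketch, assuming you leave some headroom for growth and that only the standard 8/16/32/64-port sizes are available:

```python
# Rough switch-sizing sketch: count the ports needed (one per server
# connection plus one per storage device), pad for growth, and pick the
# smallest standard switch that fits. The 25% headroom is an assumption.

SWITCH_SIZES = (8, 16, 32, 64)

def pick_switch(server_ports, storage_devices, headroom=1.25):
    needed = int((server_ports + storage_devices) * headroom)
    for size in SWITCH_SIZES:
        if size >= needed:
            return size
    raise ValueError("more than one 64-port switch required")

# Dell's example layout below: six servers plus two tape drives
# and two RAID arrays.
print(pick_switch(6, 4))  # a 16-port switch covers it with room to grow
```

Remember the practical ceiling mentioned above: although 127 devices are theoretically addressable, one-third to one-half that number is the real-world limit.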
The other consideration about devices is SCSI versus Fibre
Channel. While SCSI bridges can handle most SCSI devices
in optimum (i.e., standalone and small network) settings,
in the field they often don't, as they lack a breadth of
drivers for those devices. SANs created with all native Fibre Channel
storage devices have fewer problems. Fortunately, SAN tape
drives support the same spec and so are also easily incorporated
into the SAN.
CHOOSING AN OS
The primary and most problematic area of working with SANs
in today's network environments has to do with Microsoft Windows
NT. NT does not allow a change of physical location of data
storage. That means that when you add storage to a SAN connected
to NT servers, you will need to down those servers to reinitialize
the storage locations.
In essence, this eliminates one of the significant benefits
of SANs: dynamic scalability. Solaris and NetWare do not
suffer from this drawback.
DELL'S SAN PLAN
With all this theory, we wanted to know just how much all
this would actually cost to set up. For our discussion, we
asked Dell SAN product manager Eric Bergner to lay out a typical
solution that his company might offer to a client looking
to deploy a SAN. He explained that costs will vary widely
depending on what existing equipment a client may have in
the way of servers at the time of purchase and implementation
(are more required for the SAN, or is it just the RAID arrays
and the Fibre Channel switch that are needed?).
We assumed a need for both additional servers and storage
subsystems, so Bergner laid out a SAN with the following components:
- Six 4-Way Dell Windows NT servers
- Two tape drives
- Two RAID arrays with 4TB of storage
Combining these hardware components with appropriate volume
management software, the switch, and connectivity cabling yielded
a total of $383,218 in product costs. Consulting fees (site
survey, customization, installation, testing, training) came
to $25,000 ($15,000 for the base SAN and $10,000 for the
tape backup). Bergner pointed out that a single-vendor system
like this one costs about 30% less than a comparably equipped
one created by an integrator using components from various
vendors. Setup could take from two to five days, depending
on the site.
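The arithmetic behind the quote is straightforward. In this sketch, only the totals and the roughly 30% single-vendor savings come from Bergner; the implied integrator figure is our extrapolation:

```python
# Back-of-the-envelope version of Dell's quote. The consulting split
# comes from the article; the integrator estimate is extrapolated from
# the "about 30% less" claim and is not a quoted figure.

product_cost = 383_218
consulting = {"base SAN": 15_000, "tape backup": 10_000}

total = product_cost + sum(consulting.values())
print(f"single-vendor total: ${total:,}")

# If single-vendor is ~30% less than an integrator build, the
# integrator build costs roughly total / 0.70.
integrator_estimate = round(total / 0.70)
print(f"implied integrator estimate: ${integrator_estimate:,}")
```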
SAN SANS FC: THE SCSI-OVER-IP ALTERNATIVE
Fibre Channel is the way to go when what you need is high
performance implemented immediately, and any other concerns
are secondary. Many sites, if not most, will find
a budget of even $150,000 (just over a third of Dell's estimate)
difficult to fund. Yet the needs for scalability, reliability,
and ease of management are all storage enhancements eagerly
sought after by every network administrator. Video production
houses, music recording studios, and other DVD and CD sites
often have enormous (and rapidly expanding) storage demands,
but with staffs of 20 or fewer, they can't cost-justify
an FC-AL SAN.
However, by mid-year, an alternative to the pricier FC
will become available. This approach, called "SCSI-Over-IP,"
uses off-the-shelf Gigabit Ethernet hardware to create SANs.
The chief proponent of this wave is SCSI master Adaptec.
It calls its version "EtherStorage" and claims similar performance
to FC when using its system. Architecturally, EtherStorage
is similar to an FC-AL SAN.
Essentially, an EtherStorage SAN uses an Adaptec Gigabit
Ethernet host adapter in the server, which then routes traffic
through a typical GigEther switch using the SCSI-Over-IP
protocol. Data requests then go to an upgraded SCSI-to-Ethernet
bridge (upgraded to support the newer protocol). The bridge
connects directly to the storage device. Although Adaptec
recommends a dedicated Gigabit network for the SAN in this
initial release, the intent is to have both users and data
share the same network cable in the future.
At first glance this may sound like network-attached storage
(NAS). The difference is that the NAS device maintains a separate
file system for its storage. With Adaptec's EtherStorage,
the network server maintains the file system. All client
requests go through the network server rather than through
the NAS device to retrieve data.
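The contrast in who owns the file system can be sketched as two request paths; both functions are illustrative stand-ins, not real APIs:

```python
# Toy contrast between NAS and EtherStorage request handling, as
# described above: a NAS appliance resolves file requests with its own
# file system, while EtherStorage leaves the file system on the network
# server, which turns the request into block I/O over SCSI-Over-IP.

def nas_request(path):
    # the NAS appliance keeps its own file system and answers directly
    return f"NAS appliance resolves '{path}' and returns the file"

def etherstorage_request(path):
    # the server's file system maps the path to blocks, fetched through
    # a Gigabit Ethernet switch and a SCSI-to-Ethernet bridge
    return f"server resolves '{path}' and fetches its blocks via the GigE bridge"

print(nas_request("/video/clip.mpg"))
print(etherstorage_request("/video/clip.mpg"))
```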
Using SCSI-Over-IP to deploy SANs creates a rather neat
package for most administrators. Both clients and storage
are now linked by the same protocol using similar hardware
and cabling, which should reduce management and costs.
Granted, Ethernet isn't tailored for block data seeks
or for redundant data paths, and there remains the question
of latency in data transfer. Finally, Gigabit Ethernet hardware
isn't much less expensive than FC-AL components, at least
right now. However, the cost is expected to plummet as demand
increases for faster networks as well as for SCSI-Over-IP products.
The trick is to have plenty of bridge and HBA vendors
support EtherStorage, and most importantly, to get the storage
management software at the server to support EtherStorage-attached
devices. Adaptec is highly confident that it will have these
partners by the second or third quarter of 2001.
And they'll have help. No less a hardware player than
IBM has proposed its own SCSI-Over-IP initiative, with a
slight variation in the method for handling SCSI in the
protocol. Adaptec isn't concerned about which initiative
wins, only that an integrated protocol appears quickly from
the standards committee; then the focus turns to products
and services, which of course will benefit its core card
business mightily as it increasingly shifts its business
model from the broad but cheap standalone connector market
to emphasizing SCSI products for the multilevel network market.
The primary savings engendered by SCSI-Over-IP will come
with the reduced costs in engineering and management. There
are thousands of network engineers who know how to plan
out and implement an IP network using Gigabit Ethernet,
and many sites have existing cable suitable for EtherStorage
use. So even with current Gigabit Ethernet SAN hardware
costing about as much as Fibre Channel, the overall cost of building
and maintaining the system is less. This should make the
system extremely appealing even when FC finally overcomes
its interoperability hurdles.
NOT ALWAYS AN ETHER OR....
While a full SCSI-Over-IP spec awaits ratification, various
SAN deployments today already incorporate an Ethernet segment.
This occurs for several reasons, disaster recovery and data
management being the primary ones. FC-AL supports 10 km distances
using single-mode optical fiber, which in comparison to direct-attached
SCSI seems ideal for local disaster recovery. While adequate
for structural disasters such as a fire, flood, or impact
from low-flying aircraft, this isn't far enough for strategies
involving natural disasters, earthquakes in particular, which
require hundreds of miles between sites.
By introducing an Ethernet connection to a leased line/ATM
backbone, even continental distances can be spanned between sites.
By tying together SANs using an Ethernet/Internet/ATM
link, companies can provide for remote warm sites for disaster
recovery (equipped to come online within a few minutes of
the primary system going offline). They can also centralize
data on a single SAN from remote sites, thereby reducing
some of the management of data (including the backup function).
Designing and deploying a Fibre Channel SAN presents challenges
most often resolved through single-sourcing the technology.
The interoperability problem is dissipating, but remains
a significant-enough deterrent to a do-it-yourself solution,
even for sophisticated administrators, primarily because
of the relative unfamiliarity with FC hardware.
EtherStorage today holds the promise of a lower-cost,
highly effective SAN solution for the masses. The challenge
to EtherStorage and SCSI-Over-IP in general is the same
as for FC-AL: Will product interoperability exist and if
so, when? The advantage to EtherStorage and other SCSI-Over-IP
solutions is that Ethernet is far better understood, so
resolving or troubleshooting incompatibilities should be
significantly easier than it has been for FC-AL. This may
make EtherStorage systems well worth waiting for, particularly
if the budget demands it.
Where's the Jukebox Support?
Enterprise storage integrators we've talked to highlight
the merits of a possible DVD-RAM/DVD-R jukebox solution for
effective network SAN-based backup. Their customers are intrigued
by the dual nature of archiving to DVD: effective backup while
keeping the data in a near-online state. Unfortunately, although
they feel they could sell this type of solution, they don't
have the hardware support they need. Currently, not a single
vendor provides native FC-AL support in its jukebox. Worse,
by and large the SCSI-to-FC-AL bridge vendors don't support
jukeboxes either. So although other types of SCSI devices
can bridge to the SAN, SCSI jukeboxes can't. This means that
crafting an effective DVD SAN archive today is difficult.
Dell's Bergner hopes this will change within the next few
months as jukebox vendors bring FC boxes to the market.
Companies Mentioned in this Article
Adaptec, Inc.
691 South Milpitas Boulevard, Milpitas, CA 95035; 800/442-7274,
408/945-8600; Fax 408/262-2533; http://www.adaptec.com
Dell Computer Corporation
One Dell Way, Round Rock, TX 78682; 800/274-3355; Fax
Emulex Corporation
3535 Harbor Boulevard, Costa Mesa, CA 92626; 800/854-7112,
714/662-5600; Fax 714/513-8267; http://www.emulex.com
Hewlett-Packard Company
800 South Taft Avenue, Loveland, CO 80537; 800/752-0900;
Fax 970/635-1610; http://www.hp.com/storage
JNI Corporation
9775 Towne Centre Drive, San Diego, CA 92121; 800/452-9267,
858/535-3121; Fax 858/552-1428; http://www.jni.com
QLogic Corporation
3545 Harbor Boulevard, Costa Mesa, CA 92626; 800/662-4471,
714/438-2200; Fax 714/668-5008; http://www.qlc.com
Tivoli Systems Inc.
9442 Capital of Texas Highway North, Arboretum Plaza One,
Austin, TX 78759; 800/284-8654, 512/436-8000; Fax 512/794-0623;