MySpace adds music features in bid to reinvent itself

MySpace CEO Owen Van Natta took the main stage Wednesday at the Web 2.0 Summit in San Francisco to talk about the lagging social network's business strategy and its position behind rival Facebook. But Van Natta strove to keep the packed session on the topic of new music services being dished up on the site. As part of its attempt to reinvent itself, MySpace unveiled a slew of new music products, including a massive collection of music videos. Separately, reports circulated Wednesday that Google was also planning a music service.

The company announced MySpace Music Videos, which is set up to be one of the biggest collections of online videos. Van Natta explained that MySpace worked with the company's music label partners to gather fully licensed music videos. To give users better access to the video library, MySpace also unveiled a new Video Search Tab, designed to help users search for videos, songs and artist profiles. MySpace's roster of new music products also includes an Artist Dashboard, designed to give bands and singers with MySpace profiles analytics on who is listening to their music and how they're interacting with it.

"We think MySpace has the opportunity to be the next generation digital distributor of content," said Van Natta, who was an early executive at Facebook before leaving to join MySpace. "MySpace is positioned uniquely to be the place where the socialization of content occurs."

MySpace has been slipping in popularity as rival Facebook moved to the top of the social networking pile. In June, Facebook surpassed MySpace in the U.S., which had been MySpace's stronghold. Last December, Facebook drew almost twice as many worldwide visitors as MySpace.

At the beginning of Van Natta's presentation, the moderator polled the audience about which social networking site they used. A smattering of hands went up to show people who used MySpace. When asked who used Facebook, a sea of hands shot up, along with a ripple of laughter from the audience. "Thanks for framing that up for me," Van Natta said.

Later in his presentation, the MySpace CEO said he's optimistic about the company's ability to get back on its feet. "We believe that we have all of the building blocks and we need to focus on execution," he said. "If we do a great job at executing and building a great user experience... then we will realize this vision to be the place where you discover a huge amount of content through other people. If that is happening in music or other areas, like games, TV and films, it'll be easy to recognize success because you'll just know this is where a huge amount of that socialization is happening."

New Banking Trojan Horses Gain Polish

Criminals today can hijack active online banking sessions, and new Trojan horses can fake the account balance to prevent victims from seeing that they're being defrauded. Traditionally, such malware stole usernames and passwords for specific banks, but the criminal had to access the compromised account manually to withdraw funds. To stop those attacks, financial services firms developed authentication methods such as device ID, geolocation, and challenge questions. Unfortunately, criminals facing those obstacles have gotten smarter, too.

Greater Sophistication

Banking attacks today are much stealthier and occur in real time. One Trojan horse, URLzone, is so advanced that security vendor Finjan sees it as a next-generation program. Unlike keyloggers, which merely record your keystrokes, URLzone lets crooks log in, supply the required authentication, and hijack the session by spoofing the bank pages. The assaults are known as man-in-the-middle attacks because the victim and the attacker access the account at the same time, and a victim may not even notice anything out of the ordinary with the account. According to Finjan, a sophisticated URLzone process lets criminals preset the percentage to take from a victim's bank account; that way, the activity won't trip a financial institution's built-in fraud alerts.

Last August, Finjan documented a URLzone-based theft of $17,500 per day over 22 days from several German bank account holders, many of whom had no idea it was happening. Criminals using bank Trojan horses typically grab the money and transfer it from a victim's account to various "mules," people who take a cut for themselves and transfer the rest of the money overseas, often in the form of goods shipped to foreign addresses. But URLzone goes a step further than most bank botnets or Trojan horses, the RSA antifraud team says. URLzone also seems to detect when it is being watched: when researchers at RSA tried to document how URLzone works, the malware transferred money to fake mules (often legitimate parties), thus thwarting the investigation.

Silentbanker and Zeus

Silentbanker, which appeared three years ago, was one of the first malware programs to employ a phishing site. When victims visited the crooks' fake banking site, Silentbanker installed malware on their PCs without triggering any alarm.

Silentbanker also took screenshots of bank accounts, redirected users from legitimate sites, and altered HTML pages. Zeus (also known as Prg Banking Trojan and Zbot) is a banking botnet that targets commercial banking accounts. According to security vendor SecureWorks, Zeus often focuses on a specific bank. It was one of the first banking Trojan horses to defeat authentication processes by waiting until after a victim had logged in to an account successfully. It then impersonates the bank and unobtrusively injects a request for a Social Security number or other personal information. Zeus uses traditional e-mail phishing methods to infect PCs whether or not the person enters banking credentials.

One recent Zeus-related attack posed as e-mail from the IRS. Unlike previous banking Trojan horses, however, the Zeus infection is very hard to detect because each victim receives a slightly different version of it.

Clampi

Clampi, a bank botnet similar to Zeus, lay dormant for years but recently became quite active. According to Joe Stewart, director of malware research for SecureWorks, Clampi captures username and password information for about 4,500 financial sites. It relays this information to its command and control servers; criminals can use the data immediately to steal funds or purchase goods, or save it for later use. Clampi defeats user authentication by waiting for the victim to log in to a bank account.

It then displays a screen stating that the bank server is temporarily down for maintenance. When the victim moves on, the crooks surreptitiously hijack the still-active bank session and transfer money out of the account. The Washington Post has collected stories from several victims of the Clampi botnet.

Defending Your Data

Since most of these malware infections occur when victims respond to a phishing e-mail or surf to a compromised site, SecureWorks' Stewart recommends confining your banking activities to one dedicated machine that you use only to check your balances or pay bills. Alternatively, you can use a free OS, such as Ubuntu Linux, that boots from a CD or a thumb drive.

Before doing any online banking, boot Ubuntu and use the included Firefox browser to access your bank site. Most banking Trojan horses run on Windows, so temporarily using a non-Windows OS defeats them, as does banking via mobile phone. The key step, however, is to keep your antivirus software current; most security programs will detect the new banking Trojan horses. Older antivirus signature files can be slow to defend PCs against the latest attacks, but the 2010 editions have cloud-based signature protection to nullify threats instantly.

Seagate Goes Solid State with Pulsar Drive

Seagate tosses its hat into the solid state drive (SSD) market today with the unveiling of its Pulsar drive, a unit aimed at enterprise-level blade and server applications. With the Pulsar drive, Seagate lays claim to being "the first enterprise HDD vendor to deliver an enterprise-class SSD solution." The Pulsar drive is built with single-level-cell (SLC) technology, which Seagate says enhances the reliability and durability of the SSD. Solid state drives offer much faster data access speeds than the rotating media in conventional hard disk drives (HDDs) since there are no moving parts. The new drive stores up to 200GB of data in a 2.5-inch form factor with a SATA interface. According to Seagate, the Pulsar drive achieves a peak performance of 30,000 read IOPS (input/output operations per second) and 25,000 write IOPS, a measure of how quickly a drive processes small, random blocks of data.

The drive comes with a five-year warranty and has an annualized failure rate (AFR) of 0.44 percent, according to Seagate. The drive is rated at up to 240 megabytes per second for sequential reads and 200MB/s for sequential writes, a measure of how it accesses large chunks of contiguous data. Solid state drives built with single-level-cell technology can offer faster read/write speeds than those built with multi-level-cell (MLC) technology, but MLC drives can offer more storage. "Seagate is optimistic about the enterprise SSD opportunity and views the product category as enabling expansion of the overall storage market for both SSDs and HDDs," said Dave Mosley, Seagate's executive vice president for sales, marketing, and product line management, in a press release. The Pulsar drive, which was made available to select OEM (original equipment manufacturer) customers in September, is now available to all OEMs.
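To put the random-I/O and sequential figures on a common footing, the sketch below converts the quoted IOPS numbers into an approximate data rate. The 4KB transfer size per random operation is an assumption for illustration, not a figure from Seagate.

```python
# Rough comparison of the Pulsar's quoted random IOPS against its sequential
# throughput. The 4KB-per-operation transfer size is an assumption; the IOPS
# and MB/s figures come from the article.
BLOCK_KB = 4
read_iops, write_iops = 30_000, 25_000
seq_read_mb_s, seq_write_mb_s = 240, 200

def random_mb_s(iops, block_kb=BLOCK_KB):
    """Data rate implied by small random operations of block_kb each."""
    return iops * block_kb / 1024

print(f"random reads : ~{random_mb_s(read_iops):.0f} MB/s vs {seq_read_mb_s} MB/s sequential")
print(f"random writes: ~{random_mb_s(write_iops):.0f} MB/s vs {seq_write_mb_s} MB/s sequential")
```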

IPv6: Not a Security Panacea

With only 10% of reserved IPv4 address blocks remaining, the time to migrate to IPv6 will soon be upon us, yet the majority of stakeholders have yet to grasp the true security implications of this next-generation protocol. Many have simply deemed it an IP security savior without due consideration of its shortcomings. While IPv6 provides enhancements like encryption, it was never designed to natively replace security at the IP layer. The old notion that anything encrypted is secure doesn't hold much ground in today's Internet, considering the pace and sophistication with which encryption is cracked.

For example, at the last Black Hat conference, hacker Moxie Marlinspike revealed vulnerabilities that break SSL encryption and allow one to intercept traffic with a null-termination certificate. Unfortunately, IPsec, the IPv6 encryption standard, is nonetheless viewed as the answer for all things encryption. But it should be noted that IPsec "support" is mandatory in IPv6 while its usage is optional (reference RFC 4301), and that IPsec's ability to support multiple encryption algorithms greatly increases the complexity of deploying it, a fact that is often overlooked. There is a tremendous lack of IPsec traffic in the current IPv4 space due to scalability, interoperability, and transport issues; this will carry into the IPv6 space, and the adoption of IPsec will be minimal. Many organizations also believe that not deploying IPv6 shields them from IPv6 security vulnerabilities.

This is far from the truth and a major misconception. For starters, most new operating systems ship with IPv6 enabled by default (a simple TCP/IP configuration check should reveal this), while IPv4-based security appliances and network monitoring tools are not able to inspect or block IPv6-based traffic. The likelihood that rogue IPv6 traffic is running on your network, from the desktop to the core, is increasingly high. The ability to tunnel IPv6 traffic over an IPv4 network using brokers, without natively migrating to IPv6, is a great feature.

However, this same feature allows hackers to set up rogue IPv6 tunnels on non-IPv6-aware networks and carry out malicious attacks at will. By enabling the tunneling feature on the client (e.g., 6to4 on the Mac, Teredo on Windows), you are exposing your network to open, non-authenticated, unencrypted, non-registered and remote worldwide IPv6 gateways. The rate at which users are experimenting with this feature and consequently exposing their networks to malicious gateways is alarming, which begs the question: why are so many users routing data across unknown and non-trusted IPv6 tunnel brokers? IPv6 tunneling should never be used for any sensitive traffic. Whether it's patient data that traverses a healthcare WAN or government connectivity to an IPv6 Internet, tunneling should be avoided at all costs.
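As a quick way to see whether a host is already speaking IPv6, and whether any of its addresses carry the well-known 6to4 or Teredo tunnel prefixes, the sketch below uses only the Python standard library. It is a minimal illustration rather than a full audit: it inspects only the addresses the local hostname resolves to, and a production check would enumerate every interface and the routing table.

```python
# Minimal sketch: flag IPv6 support and well-known tunnel prefixes on this host.
# Assumptions: Python 3 standard library only; the hostname resolves to the
# addresses of interest (a real audit would enumerate interfaces directly).
import socket

def local_ipv6_addresses():
    """Return IPv6 addresses the local hostname resolves to, if any."""
    if not socket.has_ipv6:
        return []
    try:
        infos = socket.getaddrinfo(socket.gethostname(), None, socket.AF_INET6)
    except socket.gaierror:
        return []
    return sorted({info[4][0] for info in infos})

def classify(addr):
    """Label addresses that suggest tunneled rather than native IPv6."""
    if addr.startswith("2002:"):
        return "6to4 tunnel prefix"
    if addr.startswith("2001:0:") or addr.startswith("2001::"):
        return "Teredo tunnel prefix"
    if addr.startswith("fe80:"):
        return "link-local (auto-configured)"
    return "global or other"

if __name__ == "__main__":
    addrs = local_ipv6_addresses()
    if not addrs:
        print("No IPv6 addresses found for this host (or IPv6 is disabled).")
    for a in addrs:
        print(f"{a:40s} {classify(a)}")
```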

The advanced network discovery feature of IPv6 allows network administrators to select the paths they can use to route packets. In theory, this is a great enhancement; from a security perspective, however, it becomes a problem. In the event that a local IPv6 network is compromised, this feature will allow the attacker to trace and reach remote networks with little to no effort. Is your security-conscious head spinning yet? So where are the vendors that are supposed to protect us against these types of security flaws? The answer is: not very far along.

Like most of the industry, the vendors are still playing catch-up. Since there are no urgent mandates to migrate to IPv6, most are developing interoperability and compliance at the industry's pace. So the question becomes: will the delay in IPv6 adoption give the hacker community a major advantage over industry? Absolutely. As we gradually migrate to IPv6, the lack of interoperability and support at the application and appliance levels will expose loopholes, creating a chaotic and reactive cycle of patching, on-the-go updates and application revamps to combat attacks.

There is more to IPv6 than just larger IP address blocks. Regardless of your expertise in IPv4, treat your migration to IPv6 with the utmost sensitivity; the learning curve is extensive. Many fundamental network principles, such as routing, DNS, QoS, multicast and IP addressing, will have to be revisited. People can't be patched as easily as Windows applications, so staff training should start very early.

Reliance on existing IPv4 security features like spam control and DoS (denial-of-service) protection will be minimal in the IPv6 space as the Internet 'learns' and 'adjusts' to the newly allocated IP structure. It's essential that your network security posture remain the utmost priority in the migration to IPv6. Stakeholders should take into account the many security challenges associated with IPv6 before deeming it a cure-all security solution.

Jaghori is the chief network and security architect at L-3 Communications EITS. He is a Cisco Internetwork Expert, adjunct professor and industry SME in IPv6, ethical hacking, cloud security and Linux. He is presently authoring an IPv6 textbook and is actively involved with next-generation initiatives at the IEEE, IETF, and NIST. Contact him at ciscoworkz@gmail.com.

New gadgets, prototypes to debut next week in Japan

Japan's biggest electronics and gadgets show, Ceatec, runs all of next week, and many new technologies and prototype gadgets are expected to be on show. The first big news is expected on Monday afternoon, when Toshiba unveils its first commercial LCD TV to include the Cell multimedia processor, after showing a prototype of the television last year. Originally developed by Toshiba, IBM and Sony for use in the PlayStation 3 games console, the Cell is expected to bring functions like real-time upscaling and processing of recorded videos.

Panasonic will also focus on TV technology, showing a 50-inch plasma TV that can display images in 3D. At the IFA electronics show in September the company said it planned to launch such a set next year, so Ceatec will provide more insight into what consumers can expect. Sony is also pushing 3D and will use Ceatec to show a new video camera that can record 3D images through a single lens. The camera is aimed at content producers, not consumers, but the technology could eventually scale down into more compact cameras. In the cell phone arena, NTT DoCoMo is planning to show a cell phone with a wooden rather than plastic case. The phone uses surplus cypress wood from trees culled during thinning operations to maintain healthy forests.

The prototype phone was made in conjunction with Olympus, which has developed a method for producing the wooden casing, and Sharp. DoCoMo and its partners are also expected to show their progress in developing a cell-phone platform for future LTE (Long Term Evolution) wireless services. The company is working with Panasonic, NEC and Fujitsu on development of a phone that can download data at up to 100Mbps and upload at half that speed. Meanwhile, Fujitsu will show a new cell phone with a built-in golf-swing analyzer. The phone's sensors feed motion data to a 3D sensing program that analyzes the swing and then provides advice. Each swing can also be compared against past swings.

One of the hits from last year's Ceatec, Murata's unicycling robot, is due to make an appearance and show off a new trick: the latest version of the robot is capable of cycling at about three times the speed of last year's model. Nissan will also be at Ceatec, showing off some of its latest research into advanced automotive IT systems. Specifically, the company plans to show off a technology that allows several cars to automatically follow a lead car.

The futuristic system, which will be demonstrated in robot cars, could one day be used to allow cars to automatically move along roads in "trains" of vehicles with little input from the driver. The exhibition, which is now in its tenth year, attracted just under 200,000 visitors last year. Ceatec runs at Makuhari Messe in Chiba, just outside of Tokyo, from Tuesday until Saturday.

iSuppli now ranks Acer ahead of Dell in PC market

Lifted by fast-growing notebook shipments, Taiwan's Acer Inc. grabbed the No. 2 spot in the global PC market for the first time, over Dell Inc., according to iSuppli Corp. Boosted by 17% year-over-year growth in notebook (including netbook) shipments, Acer had 13.4% of the 79.9 million PCs shipped globally in the third quarter, said iSuppli. That helped it leap ahead of Dell, which, hurt by sluggish corporate IT spending, saw its sales fall 5.9% and recorded a 12.9% share. The market researcher also confirmed that the PC market is starting to rebound, and now expects this year's sales to be almost flat compared with the prior year's.

"Acer's rise to the No. 2 rank in the global PC business reflects not only its strong performance in the notebook segment, but also the historic rise of Asia as a primary force in the computer industry," said iSuppli analyst Matthew Wilkins in a statement. Another Asian manufacturer, Lenovo Corp., also had a standout quarter: on the rebound, Lenovo's shipments grew 17.2% year-over-year, giving it fourth place. Acer and Lenovo were ranked just No. 6 and No. 8, respectively, in 2003, Wilkins said. "The Asian manufacturers are a growing force in the global PC business due to their aggressive pricing along with their ability to quickly react and embrace new developments, such as the netbook PC," Wilkins said. iSuppli is the third market tracker to note Acer's rise to No. 2; both IDC Corp. and Gartner Inc. had already ranked Acer ahead of Dell. HP remained atop the heap for the 13th straight quarter, with 19.9% of the market, while Toshiba is No. 5 globally, with a 5.0% share, iSuppli said.

iSuppli also said that third-quarter shipments overall grew 1.1% year-over-year, the first annual increase in a year, while growing 19% from the second quarter. "The sequential and year-over-year shipment increases show that the PC industry emerged from the downturn and began to grow again in the third quarter," Wilkins said. Notebook shipments were "critical in driving growth," as they never wavered into the negative even during the worst quarters, he added. As a result, the PC market is now expected to decline just 0.9% this year, rather than the 4% decline iSuppli earlier predicted. Christmas and Windows 7 will conspire to "bring more good news for PC makers," said Wilkins.

Remaking the data center

A major transformation is sweeping over data center switching. Three factors are driving it: server virtualization, direct connection of Fibre Channel storage to the IP switching fabric, and enterprise cloud computing. All three need speed and higher throughput to succeed, but unlike in the past, it will take more than just a faster interface. Over the next few years the old switching equipment will need to be replaced with faster and more flexible switches.

This time, speed needs to be coupled with lower latency, the abandonment of spanning tree and support for new storage protocols. Networking in the data center must evolve into a unified switching fabric. Without these changes, the dream of a more flexible and lower-cost data center will remain just a dream. Times are hard and money is tight, so can a new unified fabric really be justified? The answer is yes. The cost savings from supporting server virtualization and merging the separate IP and storage networks are just too great.

Supporting these changes is impossible without the next evolution in switching. The good news is that the switching transformation will take years, not months, so there is still time to plan for the change.

The Drivers

The story of how server virtualization saves money is well known. Running a single application on a server commonly results in utilization in the 10% to 30% range. Virtualization allows multiple applications to run on the same server within their own images, allowing utilization to climb into the 70% to 90% range. This cuts the number of physical servers required, saves on power and cooling, and increases operational flexibility.
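As a back-of-the-envelope illustration of that consolidation math (the workload count below is hypothetical; the utilization figures are drawn from the ranges above):

```python
# Hypothetical consolidation estimate using the utilization ranges cited above.
# The 100-workload starting point and the exact percentages are assumptions.
import math

workloads = 100        # single-application servers today, one app per box
avg_util = 0.15        # typical utilization in the 10%-30% range
target_util = 0.80     # virtualized host driven into the 70%-90% range

hosts_needed = math.ceil(workloads * avg_util / target_util)
print(f"{workloads} lightly loaded servers -> roughly {hosts_needed} virtualized hosts")
```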

The storage story is not as well known, but the savings are just as compelling. Storage has been moving to IP for years, with a significant amount of storage already attached via NAS or iSCSI devices; the cost savings and flexibility gains are well known. The move now is to directly connect Fibre Channel storage to the IP switches, eliminating the separate Fibre Channel storage-area network. Moving Fibre Channel onto the IP infrastructure is a cost saver, primarily by reducing the number of adapters on a server.

Currently servers need an Ethernet adapter for IP traffic and a separate storage adapter for Fibre Channel traffic. Guaranteeing high availability means each adapter needs to be duplicated, resulting in four adapters per server. A unified fabric reduces the number to two, since the IP and Fibre Channel or iSCSI traffic share the same adapters. The savings grow because halving the number of adapters also reduces the number of switch ports and the amount of cabling, and operational costs fall since there is only one network to maintain.
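A quick tally shows where the savings come from; the server count below is hypothetical, while the four-to-two adapter reduction is the one described above.

```python
# Hypothetical count of adapters, edge switch ports and cables saved by a
# unified fabric. The 500-server data center is an assumption for illustration.
servers = 500
adapters_per_server_today = 4     # 2 Ethernet + 2 Fibre Channel, duplicated for HA
adapters_per_server_unified = 2   # 2 converged adapters carrying both kinds of traffic

saved = servers * (adapters_per_server_today - adapters_per_server_unified)
print(f"adapters eliminated:         {saved}")
print(f"edge switch ports freed up:  {saved}")   # one edge port per adapter
print(f"server-to-switch cables cut: {saved}")   # one cable per adapter
```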

The third driver is internal, or enterprise, cloud computing. Over the years, the way applications are designed and implemented has changed. In the past, when a request reached an application, the work stayed within that server and application. Increasingly, when a request arrives at a server, the application may do only a small part of the work, distributing the rest to other applications in the data center and making the data center one big internal cloud. It becomes critical that the cloud provide very low latency with no dropped packets.

Attaching storage directly to this IP cloud only increases the number of critical flows that pass over the switching cloud. A simple example shows why low latency is a must. If the action took place within the server, each storage get would take from a few nanoseconds to a few microseconds to perform. With most of the switches installed in enterprises today, that get can take 50 to 100 microseconds to cross the cloud, which, depending on the number of calls, adds significant delay to processing. If a switch discards the packet, the response takes even longer.

The only way internal cloud computing works is with a very-low-latency, non-discarding cloud. So why change the switches? What is the problem for the network? Why can't the current switching infrastructure handle virtualization, storage and cloud computing? Compared with the rest of the network, current data center switches provide very low latency, discard very few packets and support 10 Gigabit Ethernet interconnects. The problem is that these new challenges need even lower latency, better reliability, higher throughput and support for the Fibre Channel over Ethernet (FCoE) protocol.

The first challenge is latency. The problem with current switches is that they are based on a store-and-forward architecture. Store-and-forward is generally associated with applications such as e-mail, where the mail server receives the mail, stores it on disk and later forwards it to where it needs to go, and it is considered very slow. So how can layer 2 switches, which are very fast, be store-and-forward devices? The answer is that switches have large queues.

When a switch receives a packet, it puts it in a queue, and when the packet reaches the front of the queue, it is sent; putting the packet in a queue is a form of store-and-forward. A large queue has been sold as an advantage, since it means the switch can handle large bursts of data without discards. The result of all the queues is that it can take 80 microseconds or more for a large packet to cross a three-tier data center. The math works as follows.

For example, assume two servers are at the "far" ends of the data center. A packet leaving the requesting server travels to the top-of-rack switch, then to the end-of-row switch and onward to the core switch; the hops are then repeated down to the destination server. It can take 10 microseconds to go from the server to the switch, and each switch-to-switch hop adds 15 microseconds at minimum and can add as much as 40 microseconds. That is four switch-to-switch hops, for a minimum of 60 microseconds.

Add in the 10 microseconds to reach each server and the total is 80 microseconds. Latency of 80 microseconds each way was acceptable in the past, when response time was measured in seconds, but with the goal of providing sub-second response time, the microseconds add up. The delay can increase to well over 100 microseconds, and it becomes a disaster if a switch has to discard the packet, requiring the TCP stack on the sending server to time out and retransmit it. An application that requires a large chunk of data can take a long time to get it when each get can retrieve only 1,564 bytes at a time.
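The arithmetic can be laid out explicitly. The per-hop figures below are the ones cited above, while the number of round trips in the last step is a hypothetical "few hundred" to show how quickly the microseconds accumulate.

```python
# Re-creation of the 80-microsecond example using the figures cited above:
# 10 microseconds from each server to its top-of-rack switch and a minimum of
# 15 microseconds per switch-to-switch hop across the three-tier fabric.
SERVER_TO_TOR_US = 10   # server <-> top-of-rack switch, each end
SWITCH_HOP_US = 15      # minimum per switch-to-switch hop
SWITCH_HOPS = 4         # ToR -> EoR -> core -> EoR -> ToR

one_way_us = 2 * SERVER_TO_TOR_US + SWITCH_HOPS * SWITCH_HOP_US
print(f"one-way trip across the data center: {one_way_us} microseconds")   # 80

# Each small 'get' pays the trip in both directions, so a transfer fetched a
# packet at a time pays it on every round trip. 300 round trips is a
# hypothetical count for illustration.
round_trips = 300
total_ms = round_trips * 2 * one_way_us / 1000
print(f"{round_trips} round trips add roughly {total_ms:.0f} milliseconds")
```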

A few hundred round trips add up, and the impact is not only on response time. The application has to wait for the data, increasing the elapsed time it takes to process the transaction. That means that while a server is doing the same amount of work, the number of concurrent tasks grows, lowering the server's overall throughput. The new generation of switches overcomes the large latency of the past by eliminating or significantly reducing queues and speeding up their own processing. The words used to describe them are lossless transport, non-blocking, low latency, guaranteed delivery, multipath and congestion management. Non-blocking means they either don't queue the packet or have a queue length of one or two.

Lossless transport and guaranteed delivery mean they don't discard packets. The first big change in the new switches is the way they forward packets. Instead of a store-and-forward design, a cut-through design is generally used, which significantly reduces or eliminates queuing inside the switch. A cut-through design can reduce the time spent in a switch from 15 to 50 microseconds down to 2 to 4 microseconds. Cut-through is not new, but it has always been more complex and expensive to implement. It is only now, with the very-low-latency requirement, that switch manufacturers can justify spending the money to implement it.
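The sketch below shows why cut-through forwarding saves time at each hop: a store-and-forward switch must receive an entire frame before sending it on, while a cut-through switch starts transmitting once it has read the header. The frame size, header size and 10Gbps link speed are my assumptions for illustration, and only serialization delay is modeled; real switches add processing and queuing time on top.

```python
# Serialization delay per hop: store-and-forward vs. cut-through.
# Assumed figures: a 1,518-byte Ethernet frame, a 64-byte header read before a
# cut-through switch starts forwarding, and 10Gbps links.
FRAME_BYTES = 1518
HEADER_BYTES = 64
LINK_BPS = 10e9

def store_and_forward_us():
    """The whole frame must arrive before it can be sent on."""
    return FRAME_BYTES * 8 / LINK_BPS * 1e6

def cut_through_us():
    """Forwarding starts as soon as the header has been read."""
    return HEADER_BYTES * 8 / LINK_BPS * 1e6

print(f"store-and-forward: {store_and_forward_us():.2f} microseconds per hop")
print(f"cut-through:       {cut_through_us():.3f} microseconds per hop")
```

Most of the 15-to-50-microsecond per-switch figure quoted above comes from queuing and processing rather than serialization alone, which is why the new designs attack the queues as well as the forwarding method.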

The second big change is abandoning spanning tree within the data center switching fabric. Currently all layer 2 switches determine the "best" path from one endpoint to another using the spanning tree algorithm. Only one path is active; the other paths through the fabric to the destination are used only if the "best" path fails. Spanning tree has worked well since the beginning of layer 2 networking, but "only one path" is not good enough in a non-queuing and non-discarding world. The new generation of switches uses multiple paths through the switching fabric to the destination, constantly monitoring potential congestion points, or queuing points, and picking the fastest and best path at the time the packet is being sent. A current problem with the multipath approach is that there is no standard for how to do it.

Work is under way within standards groups to correct this problem, but in the early versions each vendor has its own solution. A significant amount of the work falls under a standard referred to as Data Center Bridging (DCB). Even when DCB and the other standards are finished, there will be many interoperability problems to work out, so a single-vendor solution may be the best strategy. The reality is that for the immediate future, mixing and matching different vendors' switches within the data center is not possible. Speed is still part of the solution.

The new switches are built for very dense deployments of 10 Gigabit Ethernet and are prepared for 40/100 Gigabit. The result of all these changes is to reduce the trip time mentioned above from 80 microseconds to less than 10 microseconds, providing the latency and throughput needed to make Fibre Channel and cloud computing practical.

Virtualization curve ball

Server virtualization creates additional problems for the current data center switching environment. The first problem is that each physical server has multiple virtual images, each with its own media access control (MAC) address. This causes operational complications and is a real problem if two virtual servers communicate with each other. The easiest answer is to put a soft switch in the VM, which all the VM vendors provide.

The soft switch allows the server to present a single MAC address to the network switch and performs the functions of a switch for the VMs in the server. There are several problems with this approach, however. The soft switch needs to enforce policy and access control lists (ACLs), make sure VLANs are followed and implement security. For example, if one image is compromised, it should not be able to freely communicate with the other images on the server if policy says they should not be talking to each other. If the images were on different physical servers, the network would make sure policy and security procedures were followed. The simple answer is that the group that maintains the server and the soft switch needs to make sure all the network controls are followed and in place.

The practical problem with this approach is the coordination required between the server and network groups and the level of networking knowledge required of the server group. Having the network group maintain the soft switch in the server creates the same set of problems. Today, the answer is to learn to deal with the confusion, develop procedures to make the best of the situation and hope for the best. A variation is to use a soft switch from the same vendor as the switches in the network; Cisco is offering this approach with VMware. The idea is that coordination will be easier since the switch vendor built the soft switch and has, one hopes, made the coordination easier.

The third solution is to have all the communications from the virtual servers sent to the network switch. The network switch would perform all these functions as if the virtual servers were directly connected to it and this were the first hop into the network. This would simplify the switch in the VM, since it would not have to enforce policy, tag packets or worry about security. The approach has appeal since it keeps all the well-developed processes in place and restores clear accountability for who does what. The problem is that spanning tree does not allow a port to receive a packet and send it back out on the same port.

The answer is to eliminate the spanning tree restriction against sending a message back over the port it came from.

Spanning tree and virtualization

The second curve ball from virtualization is ensuring that there is enough throughput to and from the server and that packets take the best path through the data center. As the number of processors on the physical server keeps increasing, the number of images increases, with the result that increasingly large amounts of data need to be moved in and out of the server. The first answer is to use 10 Gigabit Ethernet, and eventually 40 or 100 Gigabit. This is a good answer but may not be enough, since the data center needs a very-low-latency, non-blocking fabric with multiple paths. Using both adapters, attached to different switches, allows multiple paths along the entire route, helping to ensure low latency.

Once again, spanning tree is the problem. The solution is to eliminate spanning tree, allowing both adapters to be used. The reality is that the new generation of layer 2 switches in the data center will act more like routers, implementing their own version of OSPF at layer 2.

Storage

The last reason new switches are needed is Fibre Channel storage. Switches need to support the ability to run storage traffic over Ethernet/IP, whether NAS, iSCSI or FCoE. For example, Fibre Channel requires that both adapters to the server be active and carrying traffic, something the switches' spanning tree algorithm doesn't support. Besides adding support for the FCoE protocol, the switches will also be required to abandon spanning tree and provide greater cross-sectional bandwidth. Currently the FCoE protocol is not finished and vendors are implementing a draft version. The good news is that it is getting close to finalization.

Current state of the market

How should the coming changes in the data center affect your plan? The first step is to determine how much of your traffic needs very low latency right now. If cloud computing, migrating critical storage or a new low-latency application such as algorithmic stock trading is on the drawing board, then it is best to start the move to the new architecture now.

Most enterprises don't fall into that group yet, but they will in 2010 or 2011, and thus have time to plan an orderly transformation. The transformation can also be taken in steps. For example, one first step would be to migrate Fibre Channel storage onto the IP fabric and immediately reduce the number of adapters on each server. This can be accomplished by replacing just the top-of-rack switch; the core and end-of-row switches do not have to be replaced. The storage traffic flows over the server's IP adapters to the top-of-rack switch, which sends the Fibre Channel traffic directly to the SAN.

The top-of-rack switch supports having both IP adapters active for the storage traffic only, with spanning tree's requirement of a single active adapter applying just to the data traffic. Brocade and Cisco currently offer this option. If low latency is needed throughout, then all the data center switches need to be replaced.

Most vendors have not yet implemented the full range of features needed to support the switching environment described here. To understand where a vendor stands, it is best to break the question down into two parts. The first part is whether the switch can provide very low latency. Many vendors, such as Arista Networks, Brocade, Cisco, Extreme, Force 10 and Voltaire, have switches that can. The second part is whether the vendor can overcome the spanning tree problem, along with support for dual adapters and multiple paths with congestion monitoring. As is normally the case, vendors are split on whether to wait until standards are finished before providing a solution or to provide an implementation based on their best guess of what the standards will look like. Cisco and Arista Networks have jumped in early and provide the most complete solutions; other vendors are waiting for the standards to be completed in the next year before releasing products.