MySpace adds music features in bid to reinvent itself

As part of its attempt to reinvent itself, MySpace unveiled a slew of new music products, including a massive collection of music videos, at the Web 2.0 Summit in San Francisco. MySpace CEO Owen Van Natta took the main stage Wednesday to talk about the lagging social network's business strategy and its position behind rival Facebook, but Van Natta strove to keep the packed session on the topic of the new music services being dished up on the site.

The company announced MySpace Music Videos, which is set up to be one of the biggest collections of online videos; Van Natta explained that MySpace worked with the company's music label partners to gather fully licensed music videos. To give users better access to the video library, MySpace also unveiled a new Video Search Tab, designed to help users search for videos, songs and artist profiles. "We think MySpace has the opportunity to be the next generation digital distributor of content," said Van Natta, who was an early executive at Facebook before leaving to join MySpace. "MySpace is positioned uniquely to be the place where the socialization of content occurs." MySpace has been slipping in popularity as rival Facebook moved to the top of the social networking pile. Separately, reports circulated Wednesday that Google was also planning a music service.

MySpace's roster of new music products also includes an Artist Dashboard, designed to give bands and singers with MySpace profiles analytics on who is listening to their music and how they're interacting with it. Facebook's lead is considerable: last December, Facebook drew almost twice as many worldwide visitors as MySpace, and in June Facebook surpassed MySpace in the U.S., which had been MySpace's stronghold.

At the beginning of Van Natta's presentation, the moderator polled the audience about which social networking site they used. A smattering of hands went up to show people who used MySpace. When asked who used Facebook, a sea of hands shot up, along with a ripple of laughter from the audience. "Thanks for framing that up for me," Van Natta said. Later in his presentation, the MySpace CEO said he's optimistic about the company's ability to get back on its feet. "We believe that we have all of the building blocks and we need to focus on execution," he said. "If we do a great job at executing and building a great user experience... then we will realize this vision to be the place where you discover a huge amount of content through other people. If that is happening in music or other areas, like games, TV and films, it'll be easy to recognize success because you'll just know this is where a huge amount of that socialization is happening."

New Banking Trojan Horses Gain Polish

Criminals today can hijack active online banking sessions, and new Trojan horses can fake the account balance to prevent victims from seeing that they're being defrauded. Traditionally, such malware stole usernames and passwords for specific banks, but the criminal had to access the compromised account manually to withdraw funds. To stop those attacks, financial services firms developed authentication methods such as device ID, geolocation, and challenge questions.

Unfortunately, criminals facing those obstacles have gotten smarter, too.

Greater Sophistication

Banking attacks today are much stealthier and occur in real time. One Trojan horse, URLzone, is so advanced that security vendor Finjan sees it as a next-generation program. Unlike keyloggers, which merely record your keystrokes, URLzone lets crooks log in, supply the required authentication, and hijack the session by spoofing the bank pages. The assaults are known as man-in-the-middle attacks because the victim and the attacker access the account at the same time, and a victim may not even notice anything out of the ordinary with the account. According to Finjan, a sophisticated URLzone process lets criminals preset the percentage to take from a victim's bank account; that way, the activity won't trip a financial institution's built-in fraud alerts.

Last August, Finjan documented a URLzone-based theft of $17,500 per day over 22 days from several German bank account holders, many of whom had no idea it was happening. Criminals using bank Trojan horses typically grab the money and transfer it from a victim's account to various "mules"-people who take a cut for themselves and transfer the rest of the money overseas, often in the form of goods shipped to foreign addresses. But URLzone goes a step further than most bank botnets or Trojan horses, the RSA antifraud team says. URLzone also seems to detect when it is being watched: When the researchers at RSA tried to document how URLzone works, the malware transferred money to fake mules (often legitimate parties), thus thwarting the investigation.

Silentbanker and Zeus

Silentbanker, which appeared three years ago, was one of the first malware programs to employ a phishing site. When victims visited the crooks' fake banking site, Silentbanker installed malware on their PCs without triggering any alarm.

Silentbanker also took screenshots of bank accounts, redirected users from legitimate sites, and altered HTML pages. Zeus (also known as Prg Banking Trojan and Zbot) is a banking botnet that targets commercial banking accounts. According to security vendor SecureWorks, Zeus often focuses on a specific bank. It was one of the first banking Trojan horses to defeat authentication processes by waiting until after a victim had logged in to an account successfully. Zeus uses traditional e-mail phishing methods to infect PCs whether or not the person enters banking credentials. It then impersonates the bank and unobtrusively injects a request for a Social Security number or other personal information.

One recent Zeus-related attack posed as e-mail from the IRS. Unlike previous banking Trojan horses, however, the Zeus infection is very hard to detect because each victim receives a slightly different version of it.

Clampi

Clampi, a bank botnet similar to Zeus, lay dormant for years but recently became quite active. According to Joe Stewart, director of malware research for SecureWorks, Clampi captures username and password information for about 4500 financial sites. It relays this information to its command and control servers; criminals can use the data immediately to steal funds or purchase goods, or save it for later use. Clampi defeats user authentication by waiting for the victim to log in to a bank account.

It then displays a screen stating that the bank server is temporarily down for maintenance. When the victim moves on, the crooks surreptitiously hijack the still-active bank session and transfer money out of the account. The Washington Post has collected stories from several victims of the Clampi botnet.

Defending Your Data

Since most of these malware infections occur when victims respond to a phishing e-mail or surf to a compromised site, SecureWorks' Stewart recommends confining your banking activities to one dedicated machine that you use only to check your balances or pay bills. Alternatively, you can use a free OS, such as Ubuntu Linux, that boots from a CD or a thumb drive.

Before doing any online banking, boot Ubuntu and use the included Firefox browser to access your bank site. Most banking Trojan horses run on Windows, so temporarily using a non-Windows OS defeats them, as does banking via mobile phone. The key step, however, is to keep your antivirus software current; most security programs will detect the new banking Trojan horses. Older antivirus signature files can be slow to defend PCs against the latest attacks, but the 2010 editions have cloud-based signature protection to nullify threats instantly.

Seagate Goes Solid State with Pulsar Drive

Seagate tosses its hat into the solid-state drive (SSD) market today with the unveiling of its Pulsar drive, a unit aimed at enterprise-level blade and server applications. With the Pulsar drive, Seagate lays claim to being "the first enterprise HDD vendor to deliver an enterprise-class SSD solution." Solid-state drives offer much faster data access than the rotating media in conventional hard disk drives (HDDs) since there are no moving parts. The new drive stores up to 200GB of data in a 2.5-inch form factor with a SATA interface and is built with single-level-cell (SLC) technology, which Seagate says enhances the reliability and durability of the SSD. According to Seagate, the Pulsar drive achieves a peak performance of 30,000 read IOPS (input/output operations per second) and 25,000 write IOPS, a measure of how a drive processes small, random blocks of information.
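
To put those IOPS figures in context, random-I/O throughput depends on the transfer size, which Seagate's announcement doesn't specify. Here is a minimal back-of-the-envelope sketch in Python, assuming the common 4KB benchmark block size (an assumption, not a vendor figure):

```python
# Rough IOPS-to-throughput conversion for the Pulsar's rated figures.
# The 4KB block size is an assumption (a common benchmark size); the
# announcement doesn't say what transfer size the IOPS numbers use.
READ_IOPS = 30_000
WRITE_IOPS = 25_000
BLOCK_BYTES = 4 * 1024

read_mb_s = READ_IOPS * BLOCK_BYTES / 1_000_000
write_mb_s = WRITE_IOPS * BLOCK_BYTES / 1_000_000
print(f"Random read: ~{read_mb_s:.0f} MB/s")    # ~123 MB/s
print(f"Random write: ~{write_mb_s:.0f} MB/s")  # ~102 MB/s
```

Under that assumption, the drive would sustain roughly half its rated sequential throughput even on small random blocks, the workload where rotating disks slow to a crawl.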

The drive is rated at up to 240 megabytes per second for sequential reads and 200 MBps for sequential writes, a measure of how it accesses large chunks of contiguous data. Solid-state drives built with single-level-cell technology can offer faster read/write speeds than those built with multi-level-cell (MLC) technology, but MLC drives can offer more storage. The drive comes with a five-year warranty and has an annualized failure rate (AFR) of 0.44 percent, according to Seagate. "Seagate is optimistic about the enterprise SSD opportunity and views the product category as enabling expansion of the overall storage market for both SSDs and HDDs," said Dave Mosley, Seagate's executive vice president for sales, marketing, and product line management, in a press release. The Pulsar drive, which was made available to select OEM (original equipment manufacturer) customers in September, is now available to all OEMs.

IPv6: Not a Security Panacea

With only 10% of reserved IPv4 blocks remaining, the time to migrate to IPv6 will soon be upon us, yet the majority of stakeholders have yet to grasp the true security implications of this next-generation protocol. Many have simply deemed it an IP security savior without due consideration for its shortcomings. While IPv6 provides enhancements like encryption, it was never designed to natively replace security at the IP layer. The old notion that anything encrypted is secure doesn't stand much ground on today's Internet, considering the pace and sophistication with which encryption is cracked. For example, at the last Black Hat conference, hacker Moxie Marlinspike revealed vulnerabilities that break SSL encryption and allow an attacker to intercept traffic with a null-termination certificate.

Unfortunately, IPsec, the IPv6 encryption standard, is viewed as the answer for all things encryption. But it should be noted that IPsec "support" is mandatory in IPv6 while its usage is optional (reference RFC 4301), and that IPsec's ability to support multiple encryption algorithms greatly enhances the complexity of deploying it, a fact that is often overlooked. There is a tremendous lack of IPsec traffic in the current IPv4 space due to scalability, interoperability, and transport issues; this will carry into the IPv6 space, and the adoption of IPsec will be minimal. Many organizations also believe that not deploying IPv6 shields them from IPv6 security vulnerabilities.

This is far from the truth and a major misconception. For starters, most new operating systems are being shipped with IPv6 enabled by default (a simple TCP/IP configuration check should reveal this). The likelihood that rogue IPv6 traffic is running on your network, from the desktop to the core, is increasingly high, and IPv4-based security appliances and network monitoring tools are not able to inspect or block IPv6-based traffic.
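
As a quick illustration of that configuration check, a few lines of Python will reveal whether a host's stack already has IPv6 turned on and which IPv6 addresses it answers to. This is a minimal sketch using only the portable socket module; full interface enumeration varies by OS:

```python
import socket

# Does this host's TCP/IP stack have IPv6 support at all?
print("IPv6 support present:", socket.has_ipv6)

# List any IPv6 addresses the host resolves for its own name. Many
# systems will show link-local or global addresses here even though
# no one ever deliberately "turned on" IPv6.
try:
    for info in socket.getaddrinfo(socket.gethostname(), None, socket.AF_INET6):
        print("IPv6 address:", info[4][0])
except socket.gaierror:
    print("No IPv6 addresses resolved for this host")
```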

The ability to tunnel IPv6 traffic over an IPv4 network using brokers, without natively migrating to IPv6, is a great feature. However, this same feature allows hackers to set up rogue IPv6 tunnels on non-IPv6-aware networks and carry out malicious attacks at will. By enabling the tunneling feature on the client (e.g., 6to4 on the Mac, Teredo on Windows), you are exposing your network to open, non-authenticated, unencrypted, non-registered and remote worldwide IPv6 gateways. Which begs the question: why are so many users routing data across unknown and untrusted IPv6 tunnel brokers? The rate at which users are experimenting with this feature and consequently exposing their networks to malicious gateways is alarming. IPv6 tunneling should never be used for any sensitive traffic. Whether it's patient data that traverses a healthcare WAN or government connectivity to an IPv6 Internet, tunneling should be avoided at all costs.
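
Spotting that tunneled traffic on the wire is straightforward in principle: 6to4 encapsulates IPv6 directly in IPv4 as protocol 41, and Teredo tunnels IPv6 inside UDP, by default on port 3544. Here is a minimal monitoring sketch in Python using the Scapy packet library (assumptions: Scapy is installed and the script runs with capture privileges):

```python
from scapy.all import sniff  # requires Scapy and capture privileges

def flag_tunnel(pkt):
    # Anything matching the filter below is candidate tunnel traffic
    # that an IPv4-only monitoring tool may be waving through.
    print("Possible IPv6 tunnel traffic:", pkt.summary())

# BPF filter: protocol-41 encapsulation (6to4-style) or Teredo's
# default UDP port.
sniff(filter="ip proto 41 or udp port 3544", prn=flag_tunnel, store=False)
```

A production deployment would feed alerts to an IDS rather than print them, but even this sketch can show how much unnoticed tunnel traffic crosses a supposedly IPv4-only network.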

The advanced network discovery feature of IPv6 allows network administrators to select the paths they can use to route packets. In theory, this is a great enhancement; from a security perspective, however, it becomes a problem. In the event that a local IPv6 network is compromised, this feature will allow the attacker to trace and reach remote networks with little to no effort. Is your security-conscious head spinning yet? So where are the vendors that are supposed to protect us against these types of security flaws? The answer is: not very far along.

Like most of the industry, the vendors are still playing catch-up. Since there are no urgent mandates to migrate to IPv6, most are developing interoperability and compliance at the industry's pace. So the question becomes: will the delay in IPv6 adoption give the hacker community a major advantage over industry? Absolutely! As we gradually migrate to IPv6, the lack of interoperability and support at the application and appliance levels will expose loopholes, creating a chaotic and reactive circle of patching, on-the-go updates and application revamps to combat attacks.

There is more to IPv6 than just larger IP blocks, so regardless of your expertise in IPv4, treat your migration to IPv6 with the utmost sensitivity. The learning curve for IPv6 is extensive: many fundamental network principles, like routing, DNS, QoS, multicast and IP addressing, will have to be revisited. People can't be patched as easily as Windows applications, so staff training should start very early.

Reliance on familiar IPv4 security features like spam control and DoS (denial-of-service) protection will be minimal in the IPv6 space as the Internet "learns" and "adjusts" to the newly allocated IP structure. It's essential that network security be of the utmost priority in the migration to IPv6. Stakeholders should take into account the many security challenges associated with IPv6 before deeming it a cure-all security solution.

Jaghori is the Chief Network & Security Architect at L-3 Communications EITS. He is a Cisco Internetwork Expert, adjunct professor and industry SME in IPv6, ethical hacking, cloud security and Linux. Jaghori is presently authoring an IPv6 textbook and is actively involved with next-generation initiatives at the IEEE, IETF, and NIST. Contact him at ciscoworkz@gmail.com.

New gadgets, prototypes to debut next week in Japan

Japan's biggest electronics and gadgets show, Ceatec, runs all of next week, and many new technologies and prototype gadgets are expected to be on show. The first big news is expected on Monday afternoon, when Toshiba unveils its first commercial LCD TV that includes the Cell multimedia processor, after showing a prototype of the television last year. Originally developed by Toshiba, IBM and Sony for use in the PlayStation 3 games console, the Cell is expected to bring functions like real-time upscaling and processing of recorded videos.

Panasonic will also focus on TV technology, showing a 50-inch plasma TV that can display images in 3D. At the IFA electronics show in September the company said it planned to launch such a set next year, so Ceatec will provide more insight into what consumers can expect. Sony is also pushing 3D and will use Ceatec to show a new video camera that can record 3D images through a single lens. The camera is aimed at content producers, not consumers, but the technology could eventually scale down into more compact cameras. In the cell phone arena, NTT DoCoMo is planning to show a cell phone with a wooden rather than plastic case. The phone uses surplus cypress wood from trees culled during thinning operations to maintain healthy forests.

The prototype phone was made in conjunction with Olympus, which has developed a method for making the wooden casing, and Sharp. DoCoMo and its partners are also expected to show their progress in developing a cell-phone platform for future LTE (Long Term Evolution) wireless services. The company is working with Panasonic, NEC and Fujitsu on development of a phone that can download data at up to 100Mbps and upload at half that speed. Meanwhile, Fujitsu will show a new cell phone with a built-in golf-swing analyzer. The phone's sensors feed motion data to a 3D sensing program that analyzes the swing and then provides advice.

Each swing can also be compared against past swings. One of the hits from last year's Ceatec, Murata's unicycling robot, is due to make an appearance and show off a new trick. The latest version of the robot is capable of cycling at about three times the speed of last year's model. Nissan will also be at Ceatec showing off some of its latest research into advanced automotive IT systems. Specifically, the company plans to show off a technology that allows several cars to automatically follow a lead car.

The futuristic system, which will be demonstrated in robot cars, could one day be used to allow cars to automatically move along roads in "trains" of vehicles with little input from the driver. Ceatec runs at Makuhari Messe in Chiba, just outside of Tokyo, from Tuesday until Saturday. The exhibition, now in its tenth year, attracted just under 200,000 visitors last year.

iSuppli now ranks Acer ahead of Dell in PC market

Lifted by fast-growing notebook shipments, Taiwan's Acer Inc. grabbed the No. 2 spot in the global PC market from Dell Inc. for the first time, according to iSuppli Corp. Boosted by 17% year-over-year growth in notebook (including netbook) shipments, Acer had 13.4% of the 79.9 million PCs shipped globally in the third quarter, said iSuppli; that helped it leap ahead of Dell, whose sales, hurt by sluggish corporate IT spending, fell 5.9% for a 12.9% share. The market researcher also confirmed that the PC market is starting to rebound, and now expects this year's sales to be almost flat compared with the prior year's.

"Acer's rise to the No. 2 rank in the global PC business reflects not only its strong performance in the notebook segment, but also the historic rise of Asia as a primary force in the computer industry," said iSuppli analyst Matthew Wilkins in a statement. Another Asian manufacturer, Lenovo, also had a standout quarter: its shipments grew 17.2% year-over-year, giving it fourth place. Acer and Lenovo were ranked just No. 6 and No. 8, respectively, in 2003, Wilkins said. "The Asian manufacturers are a growing force in the global PC business due to their aggressive pricing along with their ability to quickly react and embrace new developments, such as the netbook PC," Wilkins said. iSuppli is the third market tracker to note Acer's rise to number two; both IDC and Gartner Inc. had already ranked Acer ahead of Dell. HP remained atop the heap for the 13th straight quarter, with 19.9% of the market.

iSuppli also said that Q3 shipments overall grew year-over-year (1.1%) for the first time in a year, while growing 19% from the second quarter. "The sequential and year-over-year shipment increases show that the PC industry emerged from the downturn and began to grow again in the third quarter," Wilkins said. Notebook shipments were "critical in driving growth," as they never wavered into the negative even during the worst quarters, he added. Toshiba is No. 5 globally, with a 5.0% share, iSuppli said. As a result, the PC market is now expected to decline just 0.9% this year, rather than iSuppli's earlier prediction of a 4% decline. Christmas and Windows 7 will conspire to "bring more good news for PC makers," said Wilkins.

Remaking the data center

A major transformation is sweeping over data center switching. Three factors are driving the transformation: server virtualization, direct connection of Fibre Channel storage to the IP switching fabric, and enterprise cloud computing. Over the next few years the old switching equipment will need to be replaced with faster and more flexible switches. All three drivers need speed and higher throughput to succeed, but unlike in the past, it will take more than just a faster interface.

This time speed needs to be coupled with lower latency, abandoning spanning tree and supporting new storage protocols. Networking in the data center must evolve to a unified switching fabric. Times are hard and money is tight; can a new unified fabric really be justified? The answer is yes: the cost savings from supporting server virtualization and from merging the separate IP and storage networks are just too great. Without these changes, the dream of a more flexible and lower-cost data center will remain just a dream.

The good news is that the switching transformation will take years, not months, so there is still time to plan for the change. Supporting these changes, however, is impossible without the next evolution in switching.

The Drivers

The story of how server virtualization can save money is well known. Running a single application on a server commonly results in utilization in the 10% to 30% range. Virtualization allows multiple applications to run on the same server within their own images, allowing utilization to climb into the 70% to 90% range. This cuts the number of physical servers required, saves on power and cooling, and increases operational flexibility.

The storage story is not as well known, but the savings are as compelling as the virtualization story. Storage has been moving to IP for years, with a significant amount of storage already attached via NAS or iSCSI devices. The move now is to directly connect Fibre Channel storage to the IP switches, eliminating the separate Fibre Channel storage-area network. Moving Fibre Channel to the IP infrastructure is a cost saver, and the primary way it saves is by reducing the number of adapters on a server.

Currently servers need an Ethernet adapter for IP traffic and a separate storage adapter for Fibre Channel traffic. Guaranteeing high availability means that each adapter needs to be duplicated, resulting in four adapters per server. A unified fabric reduces the number to two, since the IP and Fibre Channel or iSCSI traffic share the same adapter. The savings grow, since halving the number of adapters reduces the number of switch ports and the amount of cabling. It also reduces operational costs, since there is only one network to maintain.
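
The arithmetic is easy to sketch. For a hypothetical 500-server data center (the counts below are illustrative, not from the article):

```python
# Illustrative adapter math for a unified fabric (hypothetical counts).
servers = 500
adapters_today = 4     # 2 Ethernet + 2 Fibre Channel, duplicated for HA
adapters_unified = 2   # 2 converged adapters, still fully redundant

adapters_saved = servers * (adapters_today - adapters_unified)
print(f"Adapters eliminated: {adapters_saved}")      # 1000
print(f"Switch ports eliminated: {adapters_saved}")  # one port per adapter
print(f"Cables eliminated: {adapters_saved}")        # one cable per adapter
```

Each eliminated adapter takes a switch port, a cable and the associated power and management burden with it.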

The third driver is internal, or enterprise, cloud computing. Over the years, the way applications are designed and implemented has changed. In the past, when a request reached an application, the work stayed within the server/application. Increasingly, when a request arrives at a server, the application may only do a small part of the work; it distributes the work to other applications in the data center, making the data center one big internal cloud. It becomes critical that the cloud provide very low latency with no dropped packets.

Attaching storage directly to this IP cloud only increases the number of critical flows that pass over the switching cloud. A simple example shows why low latency is a must. If the action took place within the server, each storage get would take only nanoseconds to a few microseconds to perform. With most of the switches installed in enterprises today, the same get can take 50 to 100 microseconds to cross the cloud, which, depending on the number of calls, adds significant delay to processing. If a switch discards the packet, the response takes even longer.

What is the problem for the network? The only way internal cloud computing works is with a very-low-latency, non-discarding cloud. So why change the switches? Compared with the rest of the network, current data center switches already provide very low latency, discard very few packets and support 10 Gigabit Ethernet interconnects. Why can't the current switching infrastructure handle virtualization, storage and cloud computing? The problem is that these new challenges need even lower latency, better reliability, higher throughput and support for the Fibre Channel over Ethernet (FCoE) protocol.

The first challenge is latency. The problem with the current switches is that they are based on a store-and-forward architecture. Store-and-forward is generally associated with applications such as e-mail, where the mail server receives the mail, stores it on a disk and then later forwards it to where it needs to go. Store-and-forward is considered very slow. How, then, are layer 2 switches, which are very fast, store-and-forward devices? Switches have large queues.

When a switch receives a packet, it puts it in a queue, and when the packet reaches the front of the queue, it is sent. Putting the packet in a queue is a form of store-and-forward. A large queue has been sold as an advantage, since it means the switch can handle large bursts of data without discards. The result of all the queues is that it can take 80 microseconds or more for a large packet to cross a three-tier data center. The math works as follows.

For example, assume two servers are at the "far" end of the data center. It can take 10 microseconds for a packet to go from the server to the switch. Each switch-to-switch hop adds 15 microseconds, and can add as much as 40 microseconds. A packet leaving the requesting server travels to the top-of-rack switch, then the end-of-row switch and onward to the core switch. The hops are then repeated down to the destination server. That is four switch-to-switch hops for a minimum of 60 microseconds.

Add in the 10 microseconds to reach each server and the total is 80 microseconds. Latency of 80 microseconds each way was acceptable in the past, when response time was measured in seconds, but with the goal to provide sub-second response time, the microseconds add up. The delay can increase to well over 100 microseconds, and becomes a disaster if a switch has to discard the packet, requiring the TCP stack on the sending server to time out and retransmit it. The impact is not only on response time. An application that requires a large chunk of data can take a long time to get it when each get can only retrieve 1,564 bytes at a time.
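
The arithmetic above is simple enough to capture in a few lines of Python. The figures are the article's illustrative per-hop numbers, not measurements:

```python
# One-way latency across a three-tier data center, per the article's math.
server_to_switch_us = 10  # server <-> top-of-rack switch, each end
per_hop_us = 15           # minimum per switch-to-switch hop (can reach 40)
switch_hops = 4           # ToR -> end-of-row -> core, then back down

one_way_us = 2 * server_to_switch_us + switch_hops * per_hop_us
print(f"One-way: {one_way_us} us")  # 80 us

# A transaction that needs a few hundred small "gets" pays the round
# trip every time, so the microseconds become milliseconds.
round_trips = 300
total_ms = round_trips * 2 * one_way_us / 1000
print(f"{round_trips} round trips: {total_ms:.0f} ms")  # 48 ms
```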

A few hundred round trips add up. The application has to wait for the data, resulting in an increase in the elapsed time it takes to process the transaction. That means that while a server is doing the same amount of work, there is an increase in the number of concurrent tasks, lowering the server's overall throughput. The new generation of switches overcomes the large latency of the past by eliminating or significantly reducing queues and speeding up their own processing. The words used to describe the new switches are: lossless transport; non-blocking; low latency; guaranteed delivery; multipath; and congestion management. Non-blocking means they either don't queue the packet or have a queue length of one or two.

Lossless transport and guaranteed delivery mean they don't discard packets. The first big change is in the way the switch forwards packets. Instead of a store-and-forward design, a cut-through design is generally used, which significantly reduces or eliminates queuing inside the switch. A cut-through design can reduce switch transit time from 15-to-50 microseconds to 2-to-4 microseconds. Cut-through is not new, but it has always been more complex and expensive to implement; it is only now, with the very low latency requirement, that switch manufacturers can justify spending the money to implement it.
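
The difference between the two designs is easy to see in the serialization delay alone: a store-and-forward switch must receive an entire frame before forwarding it, while a cut-through switch can begin forwarding once it has read enough of the header to pick an egress port. A minimal sketch, assuming a 1,500-byte frame on a 10Gbps link (illustrative numbers, not vendor specs):

```python
# Per-hop serialization delay: store-and-forward vs. cut-through.
FRAME_BITS = 1500 * 8   # full Ethernet payload, assumed frame size
HEADER_BITS = 14 * 8    # enough of the Ethernet header to pick a port
LINK_BPS = 10e9         # 10 Gigabit Ethernet

sf_us = FRAME_BITS / LINK_BPS * 1e6   # wait for the whole frame
ct_us = HEADER_BITS / LINK_BPS * 1e6  # forward as soon as header is read

print(f"Store-and-forward: {sf_us:.2f} us per hop")  # 1.20 us
print(f"Cut-through:       {ct_us:.3f} us per hop")  # 0.011 us
```

Serialization is only part of the 15-to-50-microsecond per-hop figure; queuing dominates, which is why the new designs attack buffering as well.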

The second big change is abandoning spanning tree within the data center switching fabric. Currently all layer 2 switches determine the "best" path from one end-point to another using the spanning tree algorithm. Only one path is active; the other paths through the fabric to the destination are used only if the "best" path fails. The new generation of switches uses multiple paths through the switching fabric to the destination, constantly monitoring potential congestion points, or queuing points, and picking the fastest and best path at the time the packet is sent. A current problem with the multipath approach is that there is no standard for how switches do it.

Spanning tree has worked well since the beginning of layer 2 networking, but "only one path" is not good enough in a non-queuing and non-discarding world. Work is under way within standards groups to correct this problem, but for the early versions each vendor has its own solution. A significant amount of the work falls under a standard referred to as Data Center Bridging (DCB). Even when DCB and the other standards are finished, there will be many interoperability problems to work out, so a single-vendor solution may be the best strategy. The reality is that, for the immediate future, mixing and matching different vendors' switches within the data center is not possible.

Speed is still part of the solution. The new switches are built for very dense deployment of 10 Gigabit Ethernet and are prepared for 40/100 Gigabit. The result of all these changes reduces the trip time mentioned above from 80 microseconds to less than 10 microseconds, providing the latency and throughput needed to make Fibre Channel over Ethernet and cloud computing practical.

Virtualization curve ball

Server virtualization creates additional problems for the current data center switching environment. The first problem is that each physical server has multiple virtual images, each with its own media access control (MAC) address. This causes operational complications and is a real problem if two virtual servers on the same machine communicate with each other. The easiest answer is to put a soft switch in the VM, which all the VM vendors provide.

This allows the server to present a single MAC address to the network switch and perform the functions of a switch for the VMs in the server. There are several problems with this approach, however. The soft switch needs to enforce policy and access control lists (ACLs), make sure VLANs are followed, and implement security. If the virtual machines were on different physical servers, the network would make sure policy and security procedures were followed. For example, if one image is compromised, it should not be able to freely communicate with the other images on the server if policy says they should not be talking to each other. The simple answer is that the group that maintains the server and the soft switch needs to make sure all the network controls are followed and in place.

The practical problem with this approach is the coordination required between the server and network groups, and the level of networking knowledge required of the server group. Having the network group maintain the soft switch in the server creates the same set of problems. A variation is to use a soft switch from the same vendor as the switches in the network; Cisco is offering this approach with VMware. The idea is that coordination will be easier, since the switch vendor built the soft switch and has, hopefully, made the coordination easier. Today, the answer is to learn to deal with the confusion, develop procedures to make the best of the situation and hope for the best.

The third solution is to have all the communications from the virtual servers sent to the network switch. The network switch would perform all these functions as if the virtual servers were directly connected to it and this were the first hop into the network. This would simplify the switch in the VM, since it would not have to enforce policy, tag packets or worry about security. This approach has appeal, since it keeps all the well-developed processes in place and restores clear accountability for who does what. The problem is that spanning tree does not allow a port to receive a packet and send it back out on the same port.

The answer is to eliminate the spanning tree restriction against sending a message back over the port it came from.

Spanning tree and virtualization

The second curve ball from virtualization is ensuring that there is enough throughput to and from the server and that each packet takes the best path through the data center. As the number of processors on the physical server keeps increasing, the number of images increases, with the result that increasingly large amounts of data need to be moved in and out of the server. The first answer is to use 10 Gigabit and eventually 40 or 100 Gigabit Ethernet. Using both adapters, attached to different switches, allows multiple paths along the entire route, helping to ensure low latency. This is a good answer, but it may not be enough, since the data center needs a very-low-latency, non-blocking fabric with multiple paths.

Once again spanning tree is the problem, and the solution is to eliminate it, allowing both adapters to be used. The reality is that the new generation of layer 2 switches in the data center will act more like routers, implementing their own version of OSPF at layer 2.

Storage

The last reason new switches are needed is Fibre Channel storage. Switches need to support the ability to run storage traffic over Ethernet/IP, whether NAS, iSCSI or FCoE. Besides adding support for the FCoE protocol, they will also be required to abandon spanning tree and enable greater cross-sectional bandwidth. For example, Fibre Channel requires that both adapters to the server be active and carrying traffic, something the switch's spanning tree algorithm doesn't support. Currently the FCoE protocol is not finished, and vendors are implementing a draft version.

The good news is that the standard is getting close to finalization.

Current state of the market

How should the coming changes in the data center affect your plan? The first step is to determine how much of your traffic needs very low latency right now. If cloud computing, migrating critical storage or a new low-latency application such as algorithmic stock trading is on the drawing board, then it is best to start the move to the new architecture now.

Most enterprises don't fall into that group yet, but they will in 2010 or 2011, and thus have time to plan an orderly transformation. The transformation can also be taken in steps. For example, one first step would be to migrate Fibre Channel storage onto the IP fabric and immediately reduce the number of adapters on each server. This can be accomplished by replacing just the top-of-rack switch; the core and end-of-row switches do not have to be replaced. The storage traffic flows over the server's IP adapters to the top-of-rack switch, which sends the Fibre Channel traffic directly to the SAN.

The top-of-rack switch supports having both IP adapters active for the storage traffic only, with spanning tree's requirement of only one active adapter applying just to the data traffic. Brocade and Cisco currently offer this option. If low latency is needed, however, then all the data center switches need to be replaced. Most vendors have not yet implemented the full range of features needed to support the switching environment described here.

To understand where a vendor is, it is best to break the question into two parts. The first part is whether the switch can provide very low latency; many vendors, such as Arista Networks, Brocade, Cisco, Extreme, Force10 and Voltaire, have switches that can. The second part is whether the vendor can overcome the spanning tree problem, along with supporting dual adapters and multiple pathing with congestion monitoring. As is normally the case, vendors are split on whether to wait until standards are finished before providing a solution or to provide an implementation based on their best guess of what the standards will look like. Cisco and Arista Networks have jumped in early and provide the most complete solutions; other vendors are waiting for the standards to be completed in the next year before releasing products.

Wipro sets up global services delivery from China

Indian outsourcer Wipro has set up a global services delivery center in Chengdu in southwest China, targeting customers in the U.S., Europe, and other markets outside the country. The company already runs a services center in Shanghai with about 300 to 400 staff. That center, set up in 2004, is focused on local customers and on the Chinese operations of multinational companies, Suchira Iyer, general manager at Wipro Chengdu, said Thursday.

The move by Wipro to open a global services facility in Chengdu reflects a growing trend for Indian outsourcers to set up global delivery facilities outside India. "The center is part of our strategy to have development centers worldwide, and to use local talent that is available across the world," Iyer said. Setting up operations outside India also helps outsourcers offer their customers assurances about business continuity and disaster recovery, analysts said. Indian outsourcing companies have to become global, with the flexibility to offer services from a large number of countries, said Siddharth Pai, a partner at outsourcing consultancy Technology Partners International (TPI) in Houston. The center at Chengdu has 100 staff, with plans to increase the number to about 1,000 in a few years, Iyer said. The Chengdu center, though predominantly focused on foreign customers, will also address the local market, Iyer said. Chengdu offers skilled staff at costs similar to those in India, she added.

Chengdu has a large number of universities, and there is a large pool of skilled staff that Wipro hopes to hire, she said. The local government in Chengdu is also actively promoting outsourcing, she added. The Chengdu center will provide IT and business process outsourcing (BPO) services, Wipro said, with an initial focus on testing and enterprise application services for the manufacturing, banking, financial services, and insurance industries. It will provide multilingual services in English, Chinese and Japanese, Wipro added.

Mac News Briefs

Apple has released Logic Pro 9.0.2, a minor update to its professional music recording, editing, and mixing software. According to the release notes, the 9.0.2 update allows Flex Markers to align and snap to MIDI notes, makes performing a punch-in recording with Replace Mode behave correctly, adds an option for latency measurement to the I/O plug-in, and causes TDM plug-ins to behave as expected (previously an issue only for users with Pro Tools HD audio hardware). The update is available via Software Update or from Apple's support Web site. Logic Pro 9 is part of the Logic Studio suite of music applications. Apple had yet to update its Logic Pro 9: Release Notes Web page with additional details when this story was posted.-Jonathan Seff

Prosoft updates Data Rescue recovery utility

Prosoft Engineering updated Data Rescue, introducing a new interface and a number of speed and performance improvements to its data-recovery utility.

Data Rescue 3 features animated visual effects in its redesigned interface to help guide users through recovering files from corrupted hard drives or accidental deletions. The new FileIQ feature lets the software learn about new file types from user-supplied samples, extending the number of potential Reconstructed file types supported by Data Rescue. Prosoft also added more than 100 new Reconstructed file types for Deleted and Deep scans. Prosoft improved support for scanning Apple software RAID drives and drives of 1TB or greater, as well as support for recovering large sparse disk image files, pkzip files, and hard-linked files. Other enhancements include the ability to suspend and resume scans and to manage the results from multiple scans. Data Rescue 3 runs on Mac OS X 10.4.11 or later, including Snow Leopard. The software costs $99 for a personal-use license; licenses for IT pros cost $249.-Philip Michaels

File Stitcher 2.1 features redesigned merging engine

File Stitcher, the MP3 merging tool from Pariahware, has been updated to version 2.1. The latest update features redesigns to both File Stitcher's interface and merging engine. In addition, File Stitcher 2.1 features a pre-stitch validation checklist, expanded bitrate support, and Snow Leopard compatibility. Version 2.1 is a free upgrade for all File Stitcher 2.0 license holders. Available for $15, the program also offers a demo in which you're limited to stitching two files together at a time.-PM

Juniper's enterprise business hums in Q3

Juniper Networks (JNPR) can thank its enterprise business for third-quarter results that exceeded expectations. For the period ended Sept. 30, Juniper recorded revenue of $823.9 million, an increase of 5% sequentially but a decrease of 13% from the same period a year ago. Profits came in at $122.5 million, or 23 cents per diluted share, an increase of 21% quarter-over-quarter but a decrease of 28% from the third quarter of 2008. Still, the results were better than the $800 million in revenue and 21 cents per share in earnings Wall Street was expecting.

And that's due to 10% sequential growth in Juniper's enterprise business, which was "better than expected," according to Juniper CFO Robyn Denholm. CEO Kevin Johnson said those results represent "a starting point for a level of momentum" Juniper believes it can achieve in the enterprise market. "Our vision of the data center architecture of the future is resonating," Johnson said in a conference call with analysts. Juniper's EX LAN switching line, which debuted in the first half of last year, accounted for $50 million in sales in the quarter and is on a $200 million annual run rate. The MX series Ethernet router, deployed mostly in carrier networks but also in some enterprise data centers, is on a $400 million annual run rate; the MX debuted in 2006. The SRX firewall, which was unveiled a year ago, is on a $100 million annual run rate. Together, the EX, MX and SRX product lines accounted for $180 million of Juniper's $634 million in product revenue in the quarter. Johnson added that the IBM-branded Juniper products offered under a recent OEM arrangement are now available.

"We are executing better, and that's coming mainly from the US," Johnson said of the enterprise results in the quarter. Sales were particularly strong in the US federal government marketplace. "We will continue to throttle up execution globally. We're share takers in the enterprise market, we've got a lot of upside. The level of buzz with customers in the enterprise continues to grow," Johnson added. "It's indicative of our opportunity."

Juniper also experienced increased sales of its Service Layer Technology products – traditionally enterprise security and WAN acceleration gear – to service providers in Q3. SLT revenue was a record for the quarter at $229 million, Denholm said, an increase of $11 million from 2008's Q3.

In general, Juniper sees the economy and its business improving. "Our visibility has improved in key areas of our business," Johnson said. "We're in an economic recovery. But we've got to execute and engage with customers. The pace varies across geographies," with improvements domestically, stabilization in Asia and a slower uptick in Europe. As the economy improves, enterprise investments will improve, but at a slower rate than service providers, Johnson said. For the fourth quarter, Juniper expects revenue of $860 million to $895 million, and earnings per share in the 23-cent to 26-cent range.

Google, Facebook to offer music sales

Facebook plans to let users buy music and other virtual products on its Web site, the company said Wednesday, expanding its sources of revenue as it seeks to turn its huge popularity into profit. Separately, Google will let users sample and buy songs directly from its search results page with a service it plans to announce next week, according to reports. The Google move comes as the company looks to hold its dominance against Bing, which has taken around 9 percent of the U.S. online search market since its launch earlier this year, according to Internet monitoring companies.

Songs and official sports icons are among the new virtual gifts Facebook will add to its store, the company said on its blog. The service, powered by music streaming site Lala.com, will be available by the end of this week, a Lala representative said in an e-mail. Users in the U.S. will be able to pay US$0.10 to send friends a song that can only be listened to online, or $0.90 to send a copy that can be downloaded and transferred, the company said. Google, for its part, will let users stream songs from Lala and iLike.com, which is owned by MySpace, according to a report in The Wall Street Journal.

A Lala link will let users stream a full song once for free and pay about $1 to download a copy, the report said. Google already has an ad-supported music search service, offered only in China, that lets users stream and download songs for free. A Google executive earlier this year said the company had started work on applying the model in other countries. Lala declined to comment on any deal with Google, and Google did not immediately reply to a request for comment. Google's rivalry with Bing was visible Wednesday as both companies announced search deals with Twitter.

Google said it would launch a search service for Twitter messages just hours after Microsoft announced a similar deal for Bing.

Wall Street Beat: Red Hat, 3Com, PC sector boosts tech

Macroeconomic concerns put pressure on stocks in all sectors this week, but acquisitions and financial news continued to stoke investor hopes for an imminent IT recovery from the recession. Though not all the news was positive, revenue numbers from Red Hat, 3Com and Palm, Dell's acquisition of services company Perot Systems, and continued improvement in hardware-sector surveys fed confidence in the tech sector. Stocks in major indices fell Thursday as market watchers absorbed news from the National Association of Realtors, which said home sales fell 2.7 percent in August, compared with a rise of 7.2 percent in July. A drop in oil prices also raised concerns about economic activity and demand for energy. Meanwhile tech vendors, while feeling the effects of the recession, have been doing better than expected.

Networking vendor 3Com reported Thursday morning that net income for the quarter ending Aug. 28 fell to US$7.5 million, or $0.02 per share, from $79.8 million a year earlier. Excluding one-time items, earnings were $0.08 per share. Revenue declined 15 percent to $290.5 million. Though the numbers sound bad, $70 million in earnings from the prior-year period came from a one-time occurrence: the resolution of a patent dispute. And excluding exceptional items, 3Com actually beat analyst expectations of $0.05-per-share earnings on revenue of $278.2 million, according to a Thomson Reuters poll. For the current quarter, 3Com's forecasts also trump analyst expectations.

The company expects earnings of $0.06 to $0.07 a share on revenue of $295 million to $305 million; analysts were forecasting $0.06 a share on revenue of $286.9 million. 3Com was trading at $5.05 after the announcement, up $0.26 from the day earlier. For its part, Research In Motion had mixed financial news late Thursday. The company reported that earnings declined by 4 percent for its second fiscal quarter as a legal charge offset sales of BlackBerry devices. Excluding the charge, however, RIM would have earned $588.4 million, or $1.03 per share, on revenue of $3.53 billion, up 37 percent from a year earlier. Analysts had forecast earnings of $1.00 per share on revenue of $3.62 billion.

The real tech-stock success story this week was Linux software and services vendor Red Hat, which said Wednesday that for the quarter ending Aug. 30, revenue was $183.6 million, an increase of 12 percent from the year-earlier period. "IT organizations continue to move ahead with purchases of high value solutions, and Red Hat is capitalizing on this demand as a result of our strong customer relationships and proven value proposition," said CEO Jim Whitehurst in the company's earnings statement. Bank of America-Merrill Lynch raised its rating on the company's stock to buy, noting the strong sales during a decline in corporate spending on IT: "We continue to be optimistic about Red Hat's future and believe the company is well positioned when the economic and IT spending environment improves." Red Hat shares were trading at $27.96 Thursday afternoon, up $3.08. M&A activity has also stirred excitement in tech lately, as companies jostle to ramp up in hot areas. Dell announced Monday that it would pay $3.9 billion to acquire IT services provider Perot Systems. The move was widely seen as a way for Dell, the number-two PC company behind Hewlett-Packard, to match HP's and IBM's services offerings; HP last year bought services company EDS, and IBM has long been able to offer services to support a wide product portfolio.

The move takes place as analysts revise estimates for PC sales upward. Gartner said Wednesday that the worst may be over for the PC sector. Its latest PC report said that current data show worldwide shipments could hit 285 million units in 2009, a 2 percent decline from 2008 shipments of 291 million, but well above its June forecast, which projected a 6 percent unit decline in 2009. "PC demand appears to be running much stronger than we expected back in June, especially in the U.S. and China," said George Shiffler, research director at Gartner. "We think shipments are likely to be growing again in the fourth quarter of 2009 compared to the fourth quarter of 2008."

NASA: Orbiter spots ice in Martian meteor craters

A NASA spacecraft orbiting Mars has spotted exposed ice in five different spots on the Red Planet. NASA scientists said they found the exposed ice inside craters caused by meteors slamming into the planet last year. After years of speculation and last year's intensive hunt for water and other elements that could support life, NASA scientists reported today that they've found frozen water just a few feet below the planet's surface. "This ice is a relic of a more humid climate from perhaps just several thousand years ago," said Shane Byrne of the University of Arizona, Tucson, during a press conference today.

Scientific instruments onboard the Mars Reconnaissance Orbiter found that the icy craters range from 1 1/2 to 8 feet deep. In an average week, the orbiter's high-resolution camera captures more than 200 images of Mars, covering an area greater than the size of California. The images are sent back to Earth, where scientists pore over them, comparing any new spots, or possible craters, to photos taken earlier. The exposed ice first appeared as bright patches and then darkened over a matter of weeks as the ice vaporized in the Martian atmosphere. "Craters tell us a lot about the object on which they occur," said Ken Edgett, a senior staff scientist at Malin Space Science Systems. "They're great probes of what lies beneath the surface." Because of the area where the ice was discovered, scientists said today that if NASA's Viking Lander 2, which worked on the surface of Mars in 1976, had dug just four inches deeper than it did at the time, it would have struck ice. Before NASA's Phoenix Mars Lander froze to death in the long, cold Martian winter last year, the robotic vehicle dug up and analyzed soil samples and verified the existence of ice on Mars. That find proved that water - a key element to support life - exists there.

The Net's Most Heinous Hoaxes

Most online hoaxes are mildly annoying, and a few are hilarious. But plastering an epilepsy forum with flashing images? Propagating a false AMBER Alert over Twitter? Not cool. We'll take a look at some of the Web's most heinous hoaxes over the years, and sprinkle in a handful of amusing ones.

Twitter/Facebook AMBER Alert

The AMBER Alert system-a child abduction alert system broadcast over radio, TV, satellite radio, and other media whenever a child is abducted-was created after nine-year-old Amber Hagerman was abducted and murdered in Arlington, Texas, in 1996. Recently, some users have also broadcast alerts over text messages and Twitter. Last July, someone tweeted an AMBER Alert for a three-year-old girl. People responded by spreading the alert as fast and as far as they could. It turned out to be a false alarm.

A similar sequence of panicked, rapid-fire tweeting followed another false AMBER Alert in September. Though we're glad that no abduction occurred in either case, there's a disturbing "cry wolf" aspect to the story-what happens the next time a real AMBER Alert goes out? How heinous is this? For eroding the value of a potentially vital line of defense against child abduction, this hoax sets the platinum standard for repugnance.

Bonsai Kitten

Paging PETA: In 2001, a group of enterprising MIT grad students put together a little Web site called Bonsai Kitten, which detailed how to grow a kitten in a jar for aesthetic purposes. The site included tips on how to insert a feeding tube and a waste-removal tube, and where to drill air holes "prior to kitten insertion." It also included a gallery of pictures of "Bonsai Kittens" and a guestbook filled with love (and hate) mail.

The site was so realistic that it caused an uproar among kitty enthusiasts and animal rights activists (including the Humane Society), and it eventually gained enough notoriety that the FBI investigated the site's authenticity (or lack thereof). But since no kittens were actually harmed in the perpetration of this hoax, we think it tends more toward the hilarious than the heinous.

Epilepsy Forum Raid

Anonymous, a group of online pranksters, has been blamed for an array of notorious acts of Internet grief-from uploading porn to YouTube to launching denial-of-service attacks on Scientology sites. Some of the pranks they allegedly pulled are a bit more serious, however, such as the Epilepsy Forum Raid. In March of 2008, an epilepsy support forum run by the Epilepsy Foundation of America was attacked with uploads of flashing animations. The animations-which were clearly intended to induce seizures and/or migraines in epileptics-can be very dangerous for epilepsy sufferers. The National Society for Epilepsy, based in the UK, fell prey to a similar attack.

The attack was investigated by the FBI, which found no connections to the group Anonymous. Internet speculation has attributed the attack variously to The Internet Hate Machine, to 7chan.org, or to eBaum's World.

Bigfoot's Body: Bigfoot is alive - okay, actually he's dead, and he's in a freezer in Georgia. At least, that's what The New York Times and other major news outlets reported on August 14, 2008. In the finest "made you look" tradition, two men from Georgia announced that they had found the body of Bigfoot and would present definitive proof (in the form of photographs and DNA) that Bigfoot existed. Quasi-expert Tom Biscardi, an inveterate promoter of all things Bigfoot (and perpetrator of his own Bigfoot hoax just three years prior), vouched for the men. In fact, they revealed, they saw three other Bigfoots in the woods as they were dragging the dead beast's body back to their car - possible evidence that these creatures had mastered the intricacies of contract bridge but had not yet learned to control their tempers over botched bidding.

Not surprisingly, the body turned out to be a costume stuffed in a freezer. How bad is this? The most heinous part of this hoax is the fact that someone actually fell for it: an Indiana man fronted $50,000 on behalf of Biscardi for the "body," and is now suing the pair of hoaxers to get his money back.

Changing the Value of Pi: On April Fool's Day 1998, Mark Boslough wrote a fictional piece about Alabama legislators calling on the state government to pass a law that would change the value of pi from 3.14159... to the "Biblical value" of 3. Boslough titled his article "Alabama Legislature Lays Siege to Pi." Though the piece was originally posted to a newsgroup, it ended up being forwarded...and forwarded...and forwarded... Alabama legislators began receiving letters from outraged scientists and civilians, but that's about as dangerous as the situation got. The funniest part of the hoax? It echoes an actual event: In 1897, the Indiana House of Representatives passed a resolution to change the value of pi to 3 - luckily, irrationality prevailed and the bill died in the State Senate.

Save Toby: Taking a cue from Bonsai Kitten, a site called Save Toby used a creepy premise to throw animal rights activists into a tizzy. The Save Toby saga began in the early days of 2005, when the site announced that its owners had found a wounded rabbit (which they named Toby) and nursed it back to health - but then declared that if they did not receive $50,000 in donations for the care of Toby by July 30, 2005, they would be forced to cook and eat the rabbit.

Animal rights activists cried "animal cruelty," to which the owners responded that they were doing nothing cruel to Toby - in fact, they were trying to save him. The owners asserted that the site was not a hoax: They would, indeed, cook and eat Toby if they did not receive the money. Supposedly, the site collected more than $24,000 before Bored.com bought it, and Toby was saved. (By the way, possible inspirations from pre-Internet days for the Save Toby hoaxers aren't hard to find.) But holding a bunny hostage for ransom? Real classy, fellas.

MySpace Suicide: This hoax may have been the most senselessly cruel of any listed here. In 2007, a 13-year-old girl committed suicide after being dumped by her MySpace "boyfriend." The girl's family later learned that the MySpace "boyfriend" - a cute boy named Josh - never existed.

He was a fictional character made up by the mother of another girl. The Josh character had gained the girl's confidence before sending her a message saying that he didn't want to be friends anymore because he'd heard she was a mean person. The girl, who was on medication for depression and attention deficit disorder, took her own life the next day. Our take: Unforgivable.

419 Nigerian Money Scams: Nigerian money scams are so overexposed in the media these days that it's hard to believe people still fall for them. Then again, the scammers send out thousands of e-mail appeals every day in the hope of getting just one gullible person to reply.

The scam itself is pretty simple: The grifter promises the randomly chosen e-mail recipient an absurd amount of money to help the crook "transfer funds" from one bank to another (or some variation thereof). To help the con artist, all the victim has to do is provide his/her personal information, bank information, and, oh yeah, a small fee (around $200 - a small price to pay, considering the impending payoff) to help transfer the money. If the scammee goes along, bam! The scammer obtains all of the scammee's personal info, and a tidy little sum besides. Not bad for one e-mail. These scams can be life-threatening as well as costly: In some cases, the scammers invite the victims to travel to Nigeria or a bordering country to complete the transaction.

In 1995, an American was killed in Lagos, Nigeria, while pursuing such a scam. Truly horrific.

Work-At-Home Scams: Like the Nigerian money scams, work-at-home come-ons are heavily reported in the media. Yet people still fall for them. Most people know that if it sounds too good to be true, it probably is. But desperation or greed makes some people forget.

Work-at-home scams promise you the opportunity to make quick, easy money from the comfort of your house; all you need is a computer - which, of course, you have. Any number of activities may be your ticket to riches - stuffing envelopes, transcribing, medical billing - but first you need to send the scammer some money for preliminary materials. Except, of course, that the materials will never come, you'll have lost your money, and you still won't have a job. Heinous? Such scams aren't life-threatening, but they can certainly put a dent in your savings - especially if you fall for them more than once. And the fact that they prey primarily on unemployed or underemployed people who aren't exactly swimming in discretionary income (it's hard to imagine Warren Buffett jumping at the chance to make money by stuffing envelopes) increases their vileness quotient at least a little.

Remember, if prospective employers ask you to send money before you start working for them...it's probably a scam.

Facebook Hoax on TechCrunch: Guess you should stay on the good side of people who run your primary social networking site. In September 2009, Facebook's PR went rogue and punk'd TechCrunch with a "Fax This Photo" option. TechCrunch reporter Jason Kincaid opened his Facebook on September 10, 2009, and discovered that under every photo there was a new option: "Fax This Photo." It seemed ridiculous - but everyone in the TechCrunch network saw it, so he sent an e-mail to Facebook. They didn't respond, so he posted a skeptical note. He then called Facebook PR...and discovered that it was all a big prank, and that Facebook staffers were placing bets on how long it would be before TechCrunch posted it.

Heinous? Not at all. TechCrunch got PWN'd.

Of Related Interest: For two discussions - one old and one fairly new - of online scams, check out these stories:

• "Top Five Online Scams" (2005)
• "5 Facebook Schemes That Threaten Your Privacy" (2009)

For a look at some relatively benign online hoaxes (mixed in with some evil ones), read this:

• "The Top 25 Web Hoaxes and Pranks" (2007)

And from deep in the vaults of PCWorld.com come these chestnuts:

• "Devious Internet Hoaxes" (2002)
• "The Worst Internet Hoaxes" (2001)

Realmac Software acquires social app EventBox

EventBox, the one-stop shop for many of your social media needs, is taking a big step up in the world of Mac software. Realmac Software, makers of RapidWeaver and LittleSnapper, announced Tuesday morning that it has acquired EventBox from its developers, The Cosmic Machine. Last July, Macworld's James Dempsey dove into EventBox and all the socializing it has to offer, and I picked it as part of my $300 Student Challenge last month.

Instead of visiting separate websites to get your daily dose of Facebook, Digg, Twitter, Google Reader, Reddit, Flickr, Identi.ca, and even plain ol' RSS feeds, EventBox wraps them all into one polished, centralized application. You can create smart folders to organize your friends and information for the way you, erm, "work," upload photos to compatible services, and even send links to Instapaper for reading later. It even makes a few services work together in useful ways, such as letting you post Google Reader headlines to Facebook or Twitter right from inside the app.

Realmac Software's acquisition means that EventBox will have more resources and room to grow, as Realmac is no stranger to bringing solid products to market. RapidWeaver has long been known as a sort of "iWeb Pro" upgrade, and LittleSnapper quickly gained traction as a powerful "iPhoto for designers and web developers." EventBox doesn't currently have much in the way of competition as far as tackling such a broad sample of the social media space, so The Cosmic Machine and Realmac are already a step or three ahead of the game. EventBox is now huddling into a cocoon, undergoing a transformative process that should finish in November. When it reemerges, it will be renamed Socialite. EventBox owners who purchased a license in the past will get a free Socialite 1.0 license; for everyone else, the app will cost $20. Customers who scored licenses through MacHeist will receive an email with the option of purchasing a license upgrade at a discounted price. Realmac is soliciting feedback in its forums about what users want out of version 1.0 and beyond.

Large online payroll service hacked

In a somewhat unusual data breach, hackers recently stole the login credentials of an unknown number of customers of payroll processing company PayChoice Inc., and then attempted to use the data to steal additional information directly from the customers themselves. The breach, first reported by the Washington Post this week, took place on Sept. 23 and involved PayChoice's onlineemployer.com portal site. Hackers broke into the site and managed to access the real legal names, usernames and partially masked passwords used by customers to log in. They then used the information to send very realistic-looking phishing e-mails to PayChoice's customers, directing them to download a Web browser plug-in supposedly needed to continue using the onlineemployer.com service.

Users who clicked on the link to download the plug-in instead were infected with a username- and password-stealing Trojan. Each of the messages addressed recipients by their real names and contained their real usernames and (partially masked) passwords, which had been harvested earlier from PayChoice. It is not immediately clear how many customers might have actually clicked on the malicious link. PayChoice, based in Moorestown, N.J., provides payroll processing services and technology. The company bills itself as the "national leader" in the payroll services and software industry and claims over 125,000 business customers.
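The phishing messages were convincing precisely because they contained real customer data, so the usual "Dear valued customer" red flag was missing. One defense that still works is checking where a link actually points before clicking. The following is a minimal sketch of that idea in Python; the trusted-domain list is an illustrative assumption for this story, not PayChoice's actual infrastructure.

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    # Domains legitimate mail would be expected to link to (illustrative placeholder).
    TRUSTED_DOMAINS = {"onlineemployer.com", "paychoice.com"}

    class LinkExtractor(HTMLParser):
        """Collect the href of every <a> tag in an HTML e-mail body."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def suspicious_links(html_body: str) -> list:
        """Return links whose host is not a trusted domain (or a subdomain of one)."""
        parser = LinkExtractor()
        parser.feed(html_body)
        flagged = []
        for link in parser.links:
            host = (urlparse(link).hostname or "").lower()
            if not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
                flagged.append(link)
        return flagged

    if __name__ == "__main__":
        body = '<p>Install the new plug-in:</p><a href="http://example-free-host.test/plugin.exe">here</a>'
        print(suspicious_links(body))  # ['http://example-free-host.test/plugin.exe']

A real mail filter would do far more (reputation lookups, attachment scanning), but even this simple host check would have flagged the PayChoice messages, whose links pointed at free hosting rather than the payroll portal.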

In an e-mail statement to Computerworld, PayChoice said today it discovered the security breach in its online system last Wednesday. "We are handling this incident with the highest level of attention as well as concern for our clients, software customers and the employees they serve," CEO Robert Digby said in the statement. Once the company discovered the breach, it immediately shut down the online system and instituted "fresh measures" to protect client information, the statement said. The company has also engaged two outside forensic experts to help figure out the full scope of the intrusion. "PayChoice is determined to find the cause and extent of the breach and to take further measures to prevent a future occurrence," Digby said. Steve Friedl, an independent security consultant who consults for a rival payroll services firm, said he first heard of the breach last Thursday when a PayChoice customer informed him. At this point, it is not clear what other information the hackers might have gotten access to, said Friedl, who wrote about the incident on his blog. But it appears very likely that the only data the hackers accessed was the information they included in the fake e-mails that PayChoice's customers received, he said. If the hackers had in fact accessed more data, it is highly unlikely that they would have resorted to sending out those additional e-mails to PayChoice's customers, thereby running the risk of being exposed.

Friedl said the links in the phishing e-mails led to Web sites hosted at Yahoo. The relatively poor English in the e-mails appears to indicate that those behind the attack were from outside the country, he said. The malware itself was a password-stealing Trojan designed to send the stolen information to a Web server in Sweden. Chris Wysopal, chief technology officer at application security vendor Veracode Inc., said the breach is interesting because it shows that hackers are looking for targets other than credit card numbers and Social Security numbers. "The market is saturated with [stolen] credit card data," Wysopal said. As a result, cybercrooks looking to monetize what they are doing are moving up to higher-value attacks where possible, he said.

A credit card record that was worth $10 in the underground in 2007 can be had today for about 50 cents, he said. In this case, the hackers appear to have been trying to install keystroke loggers to get information that would have allowed them to access the online banking accounts of PayChoice's customers, he said. "That is where they would have got tens of thousands of dollars," had they been able to pull it off. An online payroll service company such as PayChoice presents a "huge attack surface" to those looking for ways to compromise it, Wysopal said. "An application like that, which is exposed to the Internet, is susceptible to SQL injection, cross-site scripting," and numerous other Web application attacks, he said.
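Wysopal's mention of SQL injection is easy to make concrete: any login or lookup form that splices user input directly into a query string lets an attacker rewrite the query itself. Here is a minimal sketch of the vulnerable and the parameterized versions side by side, using Python's built-in sqlite3 purely as a stand-in for whatever database an online payroll portal actually runs.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'x1')")

    user_input = "' OR '1'='1"  # classic injection payload

    # VULNERABLE: user input is spliced into the SQL text, so the payload
    # rewrites the query's logic and matches every row.
    query = f"SELECT * FROM users WHERE username = '{user_input}'"
    print(conn.execute(query).fetchall())  # returns all rows

    # SAFER: a parameterized query treats the input as a value, never as SQL.
    print(conn.execute(
        "SELECT * FROM users WHERE username = ?", (user_input,)
    ).fetchall())  # returns nothing

The fix costs one character of syntax per parameter; the vulnerable version hands control of the query, and potentially the payroll data behind it, to whoever fills in the form.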

Want to make BI pervasive? It's the culture, stupid

Business intelligence software may have been around for several decades, but it remains an esoteric niche in most companies, according to an analyst. Unfriendly corporate cultures, not the BI tools or apps themselves, are preventing BI from becoming pervasive. "The technology has been around for a long time. It's the people that often get in the way," said Dan Vesset, an analyst with IDC.

IDC recently conducted a study of 1,100 organizations in 11 countries measuring how pervasive BI is in companies, what factors helped make it more pervasive, and what "triggers" data warehousing architects and IT managers can use to further the spread of BI in their companies. In a speech Tuesday at Computerworld's Business Intelligence Perspectives conference in Chicago, Vesset said IDC measured BI's pervasiveness via six factors:

• Degree of internal use. According to IDC, that was between 48% and 50% at surveyed companies.
• Degree of external use, or how much a company shared data with vendors or customers. Sharing BI data keeps customers loyal, Vesset said, and canny BI users in industries such as retail can sell that data to generate non-trivial revenue.
• Percentage of power users in a company. The mean was 20% in surveyed companies.
• Number of domains, or subject areas, inside the data warehouse. Over five years, the average at surveyed companies grew to 28 from 11.
• Data update frequency. While real-time updates can indicate heavy dependence on BI, "right-time" data updates matter more. "Daily, weekly or monthly could be sufficient," he said.
• Analytical orientation, or how much the BI crunching helped large groups or the entire organization make decisions, rather than isolated individuals. "The fact is that most individuals and companies are not data driven. They still rely more on experience rather than analytics," Vesset said.
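IDC did not publish a scoring formula in the talk, but the six factors lend themselves to a simple composite index. The following is a purely hypothetical illustration of how an IT shop might track such a score over time; every weight, scale and value below is an assumption for demonstration, not IDC's methodology.

    # A hypothetical composite "pervasiveness" index built from IDC's six factors;
    # the weights, scales, and sample values are illustrative assumptions only.
    FACTORS = {
        "internal_use":         0.49,    # share of employees using BI (IDC: 48%-50%)
        "external_use":         0.40,    # degree of data sharing with vendors/customers
        "power_users":          0.20,    # IDC's surveyed mean was 20%
        "domains":              28 / 50, # warehouse subject areas vs. an assumed cap of 50
        "update_timeliness":    0.70,    # how close updates are to "right time" for the business
        "analytical_decisions": 0.30,    # share of decisions made on data rather than gut feel
    }

    # Equal weights, absent published guidance; a real shop would tune these.
    WEIGHTS = {name: 1.0 for name in FACTORS}

    def pervasiveness(factors: dict, weights: dict) -> float:
        """Weighted average of factor scores, each normalized to the 0..1 range."""
        return sum(factors[k] * weights[k] for k in factors) / sum(weights.values())

    print(f"BI pervasiveness index: {pervasiveness(FACTORS, WEIGHTS):.2f}")

The point of such an index is not the number itself but the trend: re-scoring quarterly would show whether training, dashboard redesigns or governance changes are actually moving the needle.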

According to Vesset, these factors, in descending order, had the most impact on BI pervasiveness:

• Degree of training - not in the BI tools ("the vendors do a pretty good job") but in the meaning of the data, what the key performance indicators (KPIs) mean, and so on.
• Design quality, or the extent to which IT-deployed performance dashboards are able to satisfy user needs. Satisfied users will talk up the BI software, creating "BI envy" in other employees and helping spread the software's use; unsatisfied users will go around IT and use Excel or some SaaS application.
• Involvement of non-executive employees.
• Prominence of the data governance group.
• Prominence of a performance management methodology.

Vesset also listed a number of potential "triggers" for BI projects that IT should take advantage of.

Report: New net neutrality rule coming next week

Federal Communications Commission chairman Julius Genachowski will propose a new network neutrality rule during a speech at the Brookings Institution on Monday, the Washington Post reports. Broadly speaking, net neutrality is the principle that ISPs should not be allowed to block or degrade Internet traffic from their competitors in order to speed up their own. Anonymous sources have told the Post that Genachowski won't offer too many details about the proposed rule and will likely only propose "an additional guideline for networks to be clear that they can't discriminate, or act as gatekeepers, of Web content." The Post speculates that the rule will essentially be an add-on to the FCC's existing policy statement that networks must allow users to access any lawful Internet content of their choice, to run any legal Web applications of their choice, and to connect to the network using any device that does not harm the network. Additionally, the principles state that consumers are "entitled to competition among network providers, application and service providers and content providers."

The debate over net neutrality has heated up over the past few years, especially after the Associated Press first reported back in 2007 that Comcast was throttling peer-to-peer applications such as BitTorrent during peak hours. Essentially, the AP reported that Comcast had been employing technology that is activated when a user attempts to share a complete file with another user through such P2P technologies. As the user uploads the file, Comcast sends a message to both the uploader and the downloader telling them there has been an error within the network and that a new connection must be established. The FCC explicitly prohibited Comcast from engaging in this type of traffic shaping last year. Several consumer rights groups, as well as large Internet companies such as Google and eBay, have led the charge to get Congress to pass laws restricting ISPs from blocking or slowing Internet traffic, so far with little success. The major telcos have uniformly opposed net neutrality, arguing that such government intervention would take away ISPs' incentives to upgrade their networks, thus stalling the widespread deployment of broadband Internet.
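The "error" messages the AP described were widely reported to be forged TCP reset (RST) packets injected into the peer-to-peer connection. One heuristic researchers have used to detect such injection is that a forged RST tends to arrive with an IP time-to-live unlike the rest of the flow, because it originates from a middlebox at a different hop distance. Below is a rough sketch of that heuristic, assuming the third-party scapy library and a machine with packet-capture privileges; the TTL tolerance of 3 is an arbitrary assumption.

    from scapy.all import sniff, IP, TCP  # third-party: pip install scapy

    # Remember the TTL last seen on normal traffic for each flow.
    flow_ttl = {}

    def check(pkt):
        if IP not in pkt or TCP not in pkt:
            return
        key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].sport, pkt[TCP].dport)
        if pkt[TCP].flags & 0x04:  # RST bit set
            seen = flow_ttl.get(key)
            # A forged RST typically arrives with a TTL unlike the flow's usual
            # value, since it was injected at a different hop distance.
            if seen is not None and abs(seen - pkt[IP].ttl) > 3:
                print(f"Suspicious RST on {key}: flow TTL ~{seen}, RST TTL {pkt[IP].ttl}")
        else:
            flow_ttl[key] = pkt[IP].ttl

    sniff(filter="tcp", prn=check, store=False)  # requires root/administrator

This is only a sketch; production tools also compare TCP timestamps and sequence numbers, since a sophisticated injector could try to match TTLs.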

Both friends and foes of net neutrality have been waiting anxiously to see how Genachowski would deal with the issue ever since his confirmation as FCC chairman earlier this year. Net neutrality advocates cheered when Genachowski took over the FCC, as many speculated that he would be far more sympathetic to net neutrality than his predecessor, Kevin Martin. Tim Karr, the campaign director for media advocacy group Free Press, said at the time of Genachowski's nomination that he was instrumental in getting then-presidential candidate Barack Obama to endorse net neutrality during his presidential campaign.

The Net at 40: What's Next?

When the Internet hit 40 years old - which, by many accounts, it did earlier this month - listing the epochal changes it has brought to the world was an easy task. It delivers e-mail, instant messaging, e-commerce and entertainment applications to billions of people. Businesses stay in touch with customers using the Twitter and Facebook online social networks. CEOs of major corporations blog about their companies and their activities. Astronauts have even used Twitter during space shuttle missions.

On Sept. 2, 1969, a team of computer scientists created the first network connection, a link between two computers at the University of California, Los Angeles. But according to team member Leonard Kleinrock, although the Internet is turning 40, it's still far from its middle age. "The Internet has just reached its teenage years," said Kleinrock, now a distinguished professor of computer science at UCLA. "It's just beginning to flex its muscles. The fact that it's just gotten into its dark side - with spam and viruses and fraud - means it's like an [unruly] teenager. That will pass as it matures." The next phase of the Internet will likely bring more significant changes to daily life - though it's still unclear exactly what those may be. "We're clearly not through the evolutionary stage," said Rob Enderle, president and principal analyst at Enderle Group. "It's going to be taking the world and the human race in a quite different direction.

"We just don't know what the direction is yet. It may doom us. It may save us. But it's certainly going to change us." Marc Weber, founding curator of the Internet History Program at the Computer History Museum in Mountain View, Calif., suggested that the Internet's increasing mobility will drive its growth in the coming decades. The mobile Internet "will show you things about where you are," he said. "Point your mobile phone at a billboard, and you'll see more information." Consumers will increasingly use the Internet to immediately pay for goods, he added. Sean Koehl, technology evangelist in Intel Corp.'s Intel Labs research unit, expects that the Internet will someday take on a much more three-dimensional look. "[The Internet] really has been mostly text-based since its inception," he said. "There's been some graphics on Web pages and animation, but bringing lifelike 3-D environments onto the Web really is only beginning. Some of it is already happening ... though the technical capabilities are a little bit basic right now," Koehl added.

The beginnings of the Internet aroused much apprehension among the developers who gathered to watch the test of the first network - which included a new, state-of-the-art Honeywell DDP 516 computer about the size of a telephone booth, a Scientific Data Systems computer and a 50-foot cable connecting the two. The team on hand included engineers from UCLA, top technology companies like GTE, Honeywell and Scientific Data Systems, and government agencies like the Defense Advanced Research Projects Agency. "Everybody was ready to point the finger at the other guy if it didn't work," Kleinrock joked. "We were worried that the [Honeywell] machine, which had just been sent across the country, might not operate properly when we threw the switch. We were confident the technology was secure. I had simulated the concept of a large data network many, many times - all the connections, hop-by-hop transmissions, breaking messages into pieces. The mathematics proved it all, and then I simulated it. It was thousands of hours of simulation." As with many complex and historically significant inventions, there's some debate over the true date of the Internet's birth.

Some say it was that September day in '69. Others peg it at Oct. 29 of the same year, when Kleinrock sent a message from UCLA to a node at the Stanford Research Institute in Palo Alto, Calif. Still others argue that the Internet was born when other key events took place. Kleinrock, who received a 2007 National Medal of Science, said both 1969 dates are significant. "If Sept. 2 was the day the Internet took its first breath," he said, "we like to say Oct. 29 was the day the infant Internet said its first words." This version of this story originally appeared in Computerworld's print edition. It's an edited version of an article that first appeared on Computerworld.com.

Microsoft issues XP, Vista anti-worm updates

Four months after it modified Windows 7 to stop the Conficker worm from spreading through infected flash drives, Microsoft has ported the changes to older operating systems, including Windows XP and Vista, the company announced on Friday. The Conficker worm, which exploded onto the PC scene in January, snatching control of millions of machines, used several methods to jump from PC to PC, including USB flash drives. Conficker copied a malicious "autorun.inf" file to any USB storage device that was connected to an already-infected machine, then spread to any other PC if the user connected the device to that second computer and picked the "Open folder to view files" option under "Install or run program" in the AutoPlay dialog. In April, Microsoft altered AutoRun and AutoPlay, a pair of technologies originally designed for CD-ROM content, to keep malware from silently installing on a victim's PC. It changed Windows 7 so that the AutoPlay dialog no longer let users run programs, except when the device was a nonremovable optical drive, like a CD or DVD drive. After the change, a flash drive connected to a Windows 7 system only let users open a folder to browse a list of files.
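An autorun.inf is just a short INI-style text file; the dangerous part is an open= or shellexecute= line pointing at an executable on the drive. The sketch below shows the kind of check a cautious user or IT script could run against a flash drive before trusting it; the default drive letter and the list of risky keys are illustrative assumptions, not an exhaustive malware test.

    import sys
    from pathlib import Path

    # autorun.inf directives that can launch a program (or dress one up in AutoPlay).
    RISKY_KEYS = ("open", "shellexecute")

    def scan_autorun(drive_root: str) -> None:
        """Flag autorun.inf directives on a removable drive that point at executables."""
        inf = Path(drive_root) / "autorun.inf"
        if not inf.exists():
            print(f"{drive_root}: no autorun.inf found")
            return
        for raw in inf.read_text(errors="ignore").splitlines():
            key, _, value = raw.strip().partition("=")
            if key.strip().lower() in RISKY_KEYS and value:
                print(f"{drive_root}: autorun.inf wants to run {value.strip()!r} - do not trust blindly")

    if __name__ == "__main__":
        # Drive letter is an assumed example; pass your own as the first argument.
        scan_autorun(sys.argv[1] if len(sys.argv) > 1 else "E:\\")

Conficker's trick was exactly this: an open= style entry disguised so the AutoPlay dialog's "Open folder to view files" option actually ran the worm, which is what Microsoft's update removes.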

Four months ago, Microsoft promised to make similar changes in other operating systems - Windows XP, Vista, Server 2003 and Server 2008 - but declined to set a timeline. On Friday, Microsoft used its Security Research & Defense blog to announce the availability of the updates for XP, Vista and the two Server editions. Links to the downloads are included in a document posted on the company's support site. Microsoft actually issued the updates almost three weeks ago, on Aug. 25, but did not push them to users automatically via Windows Update or the corporate patch service Windows Server Update Services (WSUS). Instead, users must steer to Microsoft's download site, then download and install the appropriate update manually.

The Windows XP update weighs in at 3MB, while the one for Vista is about 7MB. The AutoRun and AutoPlay changes debuted in the Windows 7 Release Candidate (RC), which was available for public downloading from May 4 to Aug. 20. Windows 7 is set to go on sale Oct. 22.

How to get fired

IT professionals can do a lot to avoid layoffs, but they may be unwittingly doing even more to make themselves a target for downsizing.

How to make yourself layoff-proof

"No one can get too comfortable in their position right now. If you get complacent and have no intentions of improving upon yourself, you will lose your job to that person – and there is always at least one – who is constantly looking for ways to better himself and add more value to the business," says Colt Mercer, a network engineer at Citigroup in Dallas and a Network World Google Subnet blogger.

Here IT professionals and career experts point out five ways high-tech workers could earn themselves a spot in the unemployment ranks.

1. Be invisible

Now is not the time to go unnoticed.

"It's not the time to shrivel and try to be invisible to management. Many people tend to default to hide-and-retreat mode when layoffs come up, but that could call more attention to you and make it appear you aren't contributing enough to be kept around," says Adam Lawrence, vice president of service delivery at talent and outsourcing service provider Yoh.

Even those working hard could unknowingly be at risk because of too little in-office time. Some IT workers who operate from a home office might need to make a few extra trips into work to remind managers, in person, of all that they do.

"Being visible during downtime is a big deal. If you are always remote and people at the office don't see you as part of the team, that could cause problems," says Bryan Sullins, principal tech trainer at New Horizons in Hartford, Conn., and a Network World blogger covering Microsoft certifications and training. "Often it can be a case of out of sight, out of mind, and remote workers could unwittingly become a target to be cut."

2. Let skills stagnate

There may be no training dollars, but that doesn't mean managers won't consider IT pros' lack of updated skills when making layoff decisions. Regardless of the current economic trouble, high-tech workers should always be looking for ways to advance their knowledge.

"IT staffers that don't maintain their certifications and stay trained show poor strategic thinking and will very quickly find themselves behind the curve," says Chris Silva, senior analyst at Forrester Research. 'Turning a blind eye to new technology and thinking it can wait will wear thin in a down economy. Managers don't want staff that add to the 'can't do' list in times like these."

And the employee who uses the lack of training dollars as an excuse won't score points when it comes time to cut staff.

"A pet peeve of mine is people asking companies for more than they are willing to give," says Rich Milgram, CEO of Beyond.com, an online job board. "There has to be some level of mutual understanding about what contributions can feasibly be made on both the employer and employee's side. There are low- and no-cost training options if the employee is willing to make the effort."

3. Snoop in systems

It goes without saying that IT workers shouldn't abuse their access to company confidential systems, but industry watchers warn that if layoffs are going to happen, those high-tech pros with questionable practices will be the first to go.

"It is really easy for an IT person to see what others are doing and to look at confidential data, without being caught," says Beth Carvin, CEO of Nobscot Corp., a maker of employee retention and other HR-related software based on Kailua, Hawaii. "But if you are suspected of some shady stuff, that would be reason enough to bring your name to the top of the layoff list."

And even if the practices aren't breaking corporate policies, IT professionals need to be on their best behavior. Try to avoid abusing a flexible schedule with long lunches and don't use your high-tech position as a reason to spend too much time on the Internet for non-work-related activities.

"If you are the person viewed as someone just logging their hours to collect a paycheck and don't plan to contribute more than the minimum, management will see that and you will become vulnerable," says John Reed, district president with Robert Half Technology.

4. Make demands

Pay cuts, hiring freezes, layoffs – none of these factors suggest it's an appropriate time to ask for a raise. Yet experts say some will use their ongoing service to a company during a recession as a reason to demand more money and other benefits.

"Now is not the time to ask for a raise; now is not the time to complain about needing more time off," Sullins says. "In these cases, the squeaky wheel will get the shaft."

While it may seem to IT pros that they are going above and beyond and deserve compensation for their efforts, those in the position to fire staff might not want to hear it.

"Right now, employees should be nodding their heads a lot, not being surly or pushing back on responsibility," says Sean Ebner, regional managing director for IT staffing and recruiting firm Technisource

5. Spew negativity

Employers now more than ever want positive attitudes on staff, and those spewing negativity will be weeded out.

"The truth is that everybody from a technical standpoint is replaceable. I notice more than anything the negativity an employee displays. Negativity is contagious, and once an employee goes that route, it is nearly impossible to turn them back," says Michael Kirven, principal and co-founder of IT resourcing firm Bluewolf.

Do you Tweet? Follow Denise Dubie on Twitter

Windows president tries to calm fears of Win 7 critical bug

Microsoft's Windows Division president Steven Sinofsky tried Wednesday to tamp down a growing roar that Windows 7 RTM has a critical flaw that can shut down the OS by running a simple command.

"Sorry to get dragged into this," wrote Sinofsky, taking the unusual step of responding via the comments section of an industry blog called Chris123NT's blog.  Monday, the blog posted a recipe to execute the crash and included a picture of the results. 

Other testers also reported errors.

 "Of course [we] always want to investigate each and every report of any unexpected behavior," he wrote [Microsoft confirmed it was indeed him]. But Sinofsky, who is leading Windows 7 development, said Microsoft has not reproduced the crash, which is triggered by the Windows "CHKDSK /r" command.

"We are certainly going to continue to look for, monitor, and address issues as they arise if required. So far this is not one of those issues," he wrote. "While we appreciate the drama of 'critical bug' and then the pickup of 'showstopper' that I've seen, we might take a step back and realize that this might not have that defcon level," Sinofsky wrote.

"Bugs that are so severe as to require immediate patches and attention would have to have no workarounds and would generally be such that a large set of people would run across them in the normal course of using their PC."

Reports of a potential critical bug come a day before Microsoft is set to make Windows 7 available to MSDN subscribers. General availability is slated for Oct. 22.

Testers report that the bug only works on PCs that have a second hard disk or multiple hard disks. The bug, which gobbles up memory and leads to a "blue screen" crash, does not affect the main drive where the OS is installed.

"We're not seeing any crashes with CHKDSK on the stack reported in any measurable number that we could find," Sinofsky wrote. "We had one beta report on the memory usage, but that was resolved by design since we actually did design it to use more memory."

The memory usage is intended to speed up checking the disk for damage and errors, but, Sinofsky said, memory usage was not intended to be "unbounded."

He said the command is intended to leave at least "50M of physical memory. Our assumption was that using /r means your disk is such that you would prefer to get the repair done and over with rather than keep working."
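That 50MB floor is easy to watch for yourself. The following is a rough sketch of the kind of monitoring testers were doing while CHKDSK ran, assuming the third-party psutil package on a Windows machine; it is an illustration of the check, not Microsoft's diagnostic method.

    import time
    import psutil  # third-party: pip install psutil

    # The ~50MB of physical memory Sinofsky said chkdsk is meant to leave free.
    FLOOR = 50 * 1024 * 1024

    # Sample every 2 seconds while "chkdsk /r" runs in another console; Ctrl+C to stop.
    while True:
        chkdsk = [p for p in psutil.process_iter(["name", "memory_info"])
                  if (p.info["name"] or "").lower() == "chkdsk.exe"]
        avail = psutil.virtual_memory().available
        for p in chkdsk:
            print(f"chkdsk pid {p.pid}: working set {p.info['memory_info'].rss / 2**20:.0f} MB")
        if chkdsk and avail < FLOOR:
            print(f"WARNING: available physical memory {avail / 2**20:.0f} MB is below the 50MB floor")
        time.sleep(2)

A steadily climbing working set is expected behavior under the by-design caching; what would corroborate the bug reports is available memory dropping below the stated floor, followed by a crash.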

Users on blogs and discussion sites are reporting consistently that they see a jump in memory usage, but reports of outright crashes of the OS have been spotty.

One post on the Windows SevenForum from a user named "Everlong18" said, "It's not *that* much of a concern for me. It's not like I'm going to be running chkdsk on my D drive every day, but it would be nice if it got sorted."

On the Chris123NT blog, a user named FireRX, who appears to be a Microsoft MVP, said, "the chkdsk /r tool is not at fault here. It was simply a chipset controller issue. Please update [your] chipset drivers to the current driver from your motherboard manufacturer. I did mine, and this fixed the issue. Yes, it still uses a lot of physical memory, because [you're] checking for physical damage and errors on the Harddrive [you're] testing… Again, there is no Bug." FireRX also said he was sure a hotfix would be issued today.

The Microsoft official acknowledged FireRX's post in his comments, and said "some have reported that this specific issue [reproduces] and then goes away with updated drivers. We haven't yet confirmed that either but continue to try."

Sinofsky did not say anything about a hotfix.

Sinofsky, who posted his response at 7 p.m. Tuesday night, said Microsoft had started overnight stress testing of 40 machines in configurations "as reported by FireRx." Microsoft has not made public the results of those tests.

The Microsoft official ended his post saying: "Let's see if we can work on this one and future issues together. Deep breath –Steven."

Follow John on Twitter