Tuesday, March 30, 2010

Desktop PC Buying Guide: Details of Specs

Once you've determined the type of desktop system you want--a compact PC, a budget system, a mainstream all-purpose model, or a performance crackerjack--you need to know what components to look for. The processor and graphics chip you choose will determine many of your machine's capabilities, as will the system's memory and hard drive. Understanding those components will help you get the performance you need, without paying for things you don't.
You'll also want to consider details like the layout of the case, which can also make the difference between a pleasant workstation and a nightmare PC.
Processor

The CPU is one of your PC's most important components. The processor you choose is likely to determine your PC's shape and size, and will definitely determine its price. Generally, the higher the CPU clock speed, the faster the performance you may see--and the higher the price. A 3.46GHz Core i5-670 PC will trounce a 2.93GHz Core i3-530 system, but you'll pay nearly twice as much for the faster CPU. Another spec to watch is cache size: more is better here. Core i3 and Core i5 parts have 4MB caches, while performance-geared Core i7 chips have 6MB or 8MB caches.
Compact PCs and some all-in-ones use relatively puny netbook or notebook processors. Though these CPUs deliver weaker performance than desktop processors, they're also smaller and generate less heat, which makes them ideal for small machines. A PC packing an Atom processor should be fine for basic word processing, Web surfing, and limited media playback--but little more.
Intel's new Clarkdale line of Core i3 and Core i5 desktop processors tends to appear in systems in the budget desktop and mainstream desktop PC categories. Most users will find something they like in the Core i3 and Core i5 lines, as these CPUs offer dual-core performance at palatable price points. Core i3 chips are the cheaper, lower-powered models, so you'll generally find them in cheaper machines. The quad-core Core i7 targets users who need a real workhorse processor; if you play high-end games or edit hours of audio or video, you'll benefit from the Core i7. But even the lowliest Core i3 CPU can easily handle basic computing tasks, so stay within a reasonable price range when possible. At the lowest end are dual-core Pentium and Celeron processors. These chips appear in budget PCs, where price tags starting at $400 compensate for weaker performance.
Desktop PCs use either Intel or AMD processors. Intel currently holds the performance crown, but AMD has priced its dual- and quad-core chips aggressively. If you're looking for quad-core performance on a budget, AMD-based offerings are certainly worth a look.
Memory

If you use your computer for little more than light Web browsing and e-mail, 2GB of RAM will be enough, whether the system has Windows XP or Windows 7. More RAM will allow you to run more programs simultaneously, and generally improve the speed and performance of your machine. Systems today typically come with at least 4GB of RAM, though some small PCs and budget systems may be limited to 2GB or 3GB.
If you tend to multitask or play games, you'll want at least 4GB of RAM. If you play graphics-intensive games or do serious video or image editing, you might want to spring for even more RAM-some performance systems include 8GB or even 16GB of memory.
When you shop for RAM, you'll notice two types: DDR2 and DDR3. Of the two, DDR3 is faster and thus more expensive. You'll also notice a clock speed, much like processors have, presented in MHz. Again, higher numbers are better. That said, quantifying the differences isn't easy, and if you're on a budget, 4GB of DDR2 RAM won't leave you at much of a disadvantage versus DDR3. In general, buy as much memory as you can afford. If you have to choose between getting more RAM at a slower clock speed and getting less RAM at a faster clock speed, you'll see more tangible results
with the greater amount of RAM. If you want to buy more than 4GB of RAM, make sure that the system ships with Microsoft's 64-bit Windows 7 operating system; a 32-bit OS will recognize only a little more than 3GB of whatever RAM your system has. If you purchase a new machine, it will probably come bundled with a 64-bit OS, as more retailers move toward including 4GB of RAM. Budget systems are the most likely to lean toward a 32-bit OS--but even there we've seen a shift to 64-bit, so if you decide to upgrade your system memory later, the operating system will be able to handle it. If you intend to upgrade your PC yourself, make sure that your system's motherboard can support additional RAM modules. Check your computer's specs to see how many user-accessible DIMM connectors are available; this information can be found on a system's technical specifications page. Our how-to guide on installing more memory will help you along the way.
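The 3GB-and-change figure isn't arbitrary; it falls out of 32-bit address-space arithmetic, since a portion of the 4GB address space must map device memory rather than RAM. A minimal Python sketch (the device-reserved figure here is a rough typical value, not a spec, and varies by system):

```python
# Why a 32-bit OS sees only ~3GB of RAM: 32-bit addresses reach 4 GiB,
# and part of that space is claimed by devices (video RAM, PCI, firmware).
ADDRESS_SPACE_GB = 2**32 / 2**30   # 4.0 GiB of addressable space
DEVICE_RESERVED_GB = 0.75          # illustrative MMIO reservation

def visible_ram_gb(installed_gb):
    """RAM a 32-bit OS can actually use, given installed RAM in GiB."""
    return min(installed_gb, ADDRESS_SPACE_GB - DEVICE_RESERVED_GB)

print(visible_ram_gb(4))  # 3.25 -- about a gigabyte of a 4GB kit goes unseen
print(visible_ram_gb(2))  # 2.0  -- under the ceiling, so all of it is visible
```

A 64-bit OS sidesteps the problem entirely by raising the addressable ceiling far beyond any RAM you can install.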
Desktop Case
A good case can make your everyday work easier and can simplify such tasks as upgrading and servicing components. A well-designed case provides tool-less access to the interior, hard drives mounted on easy-to-slide-out trays, readily accessible USB ports and memory card slots, and color-coded cables for internal and external parts.
The most common cases are minitower and tower designs that use ATX. The ATX specification dictates where the connectors on the back of the motherboard should be (to line up with the holes in the case), and encompasses details such as the power-supply connector.
Slimline systems and other smaller PCs may use Micro-ATX, which follows the basic ATX specification but includes fewer expansion slots. Mini-ITX is smaller still; Mini-ITX motherboards often appear in small PCs, where they offer quiet, low-power performance (making these systems a great choice for home-theater PCs).
If you're purchasing a minitower or tower system, you may have more flexibility in configuring it, whether you want to specify optional components to fill the slots or leave room for future expansion. You should reserve at least a couple of open hard drive bays and a free PCI slot, too. And motherboards come in different shapes and sizes, so case designs do as well.
If you're buying an all-in-one or small PC or ordering a traditional tower from a major vendor such as HP or Dell, you rarely have much of a say in your machine's chassis. If the case's size and weight are important to you, try to inspect the machine in a store, or take note of its dimensions when shopping online.
Operating System
It may be a decade old, but Windows XP remains a stalwart--even on some new systems. Nevertheless, most systems on the market today run Windows 7. Microsoft's latest operating system has received generally positive reviews, fixing many of Windows Vista's foibles.
Microsoft sells six different versions of Windows 7, but only three--Windows 7 Home Premium, Windows 7 Professional, and Windows 7 Ultimate--are available to most desktop buyers. Windows 7 Home Premium, the standard offering, includes the visually appealing Aero Glass UI, plus enhancements to Windows Media Center. Advanced users should consider Windows 7 Professional, typically a $75 to $100 step up; it offers location-aware printing and improved security features that many business users like. Windows 7 Ultimate--which costs about $150 more--is a good choice for power users and business users, thanks to its wealth of networking and encryption tools. Consult a full list of OS features before settling on a particular version.
Once again, if you're running a 32-bit operating system, your computer can use only slightly more than 3GB of RAM, regardless of how much your system carries. So be sure to pick a 64-bit OS; you'll be glad you did when you're ready to upgrade.
Graphics Cards
The GPU (graphics processing unit) is responsible for everything you see on your display, whether you play games, watch videos, or just stare at the Aero desktop baked into Windows 7.
If you aren't interested in gaming on your PC, integrated graphics built onto the motherboard--or in the CPU itself with Intel's new Core i3 and Core i5 Clarkdale chips--is the way to go. Integrated graphics help keep a system's cost low, and they deliver enough power to run simple games or high-definition Flash video. Intel's integrated graphics chips are widely used, but some PCs include an nVidia Ion graphics chip, which offers superior integrated video performance.
If you plan to render your own high-definition content or play BioShock 2, you'll need a discrete graphics card. Such cards come installed in a PCIe x16 slot on your motherboard and deliver significantly more power than integrated graphics do. Both ATI and nVidia offer plenty of options to choose from. The naming conventions can be a bit overwhelming, but the rule of thumb is that higher numbers indicate better performance--and higher prices.
Variables such as power consumption, size, and the brand of your motherboard (which may limit which cards you can use) help determine which GPU is right for you.
Gamers with deep pockets can opt for a multiple-graphics-card setup using either nVidia's SLI or ATI's CrossFire technology, either of which sets multiple cards to work in tandem for vastly improved performance. That performance will cost you, however: Prices for higher-end graphics cards generally range from $200 to $400 apiece.
Hard Drive
Even a basic full-size PC should offer at least 320GB of hard drive space. Small PCs, however, tend to start around 160GB. At the upper end of the performance spectrum, power PCs may offer space for 2TB of storage--or more--along with choices of RAID for data redundancy (RAID 1) or speed optimization (RAID 0), or an option to combine a solid-state drive with a hard drive.
When shopping for a PC, check the specifications to see how many internal hard drive bays are available.
Many all-in-one and small PCs limit you to just one. But with additional internal hard drives, you can store more data and create RAID arrays to safeguard your data from hardware failure, deliver faster performance, or do both. Most drives today are Serial ATA-300 models, which spin at 7200 rpm. When shopping, pay close attention to the speed of the PC's hard drive: Small PCs may use 2.5-inch hard drives that spin at 5400 rpm, and the potential money savings may not justify the performance hit if you plan to do a lot of disk-intensive tasks. For people who care more about speed than about capacity, Western Digital's VelociRaptor line offers 10,000-rpm drives, though these max out at 300GB.
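The RAID trade-off mentioned above is mostly capacity arithmetic. A minimal sketch in Python (illustrative only; real RAID setups involve controllers, rebuild times, and more levels than shown here):

```python
def raid_usable_gb(drive_gb, count, level):
    """Usable capacity for the two simple RAID levels discussed above.
    RAID 0 stripes across drives for speed but has no redundancy;
    RAID 1 mirrors, so you get one drive's capacity and a safety copy."""
    if count < 2:
        raise ValueError("RAID needs at least two drives")
    if level == 0:
        return drive_gb * count  # full capacity, but one failure loses all data
    if level == 1:
        return drive_gb          # one drive's worth; the rest hold mirrors
    raise ValueError("only RAID 0 and RAID 1 sketched here")

print(raid_usable_gb(500, 2, 0))  # 1000 -- two 500GB drives striped for speed
print(raid_usable_gb(500, 2, 1))  # 500  -- mirrored pair, survives one failure
```

In other words, a mirrored pair halves your usable space in exchange for surviving a single drive failure; striping gives you everything in exchange for nothing.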
Another option for speed-conscious buyers is a solid-state drive. The cost per gigabyte is still far greater for SSDs than for traditional hard drives, but prices have come down, and performance has improved. Some PC makers offer an SSD in tandem with a hard-disk drive--a low-capacity SSD to store applications and the OS, and a high-capacity HDD for data storage duties.
Networking

The days of dial-up are done. Broadband speed and performance vary by service provider and location, but you can maximize your PC's connectivity by choosing the right networking options. Fortunately, the options are clear-cut: wired or wireless.
Every system comes with a wired ethernet connection--at least 10/100 ethernet, and more often gigabit ethernet. Wireless connectivity is an attractive option for small PCs and all-in-ones, but also for some tower and minitower systems (though you'll need it only if your system will be nowhere near your router). If you'd rather not tie down your otherwise svelte machine with an ethernet cable, go wireless and opt for 802.11n; this wireless standard offers better performance than the older 802.11b/g standards.
Wireless performance still has limitations. If you plan to use your PC to stream high-definition Internet content from sites like Hulu and Netflix, consider using a wired connection for best performance. You'll get measurably superior performance, and you'll future-proof your machine in case you later upgrade to network-attached machines with faster transfer speeds.
Keyboard and Mouse
Your keyboard and mouse are crucial devices, so get a set that works for you. But if you're buying a PC online, don't pay the upgrade price that the vendor offers: You can usually get a better deal by shopping around. If you aren't sure what your keyboard and mouse options are, visit your local PC or electronics retailer and try out a few of the display models.
Most physical attributes of keyboards and mice vary from user to user, so keep in mind where and how you'll be using your machine. Every system comes with at least a basic mouse and keyboard. At build-to-order PC sites, though, you usually have relatively few options.

If you plan to use a small PC to stream media, consider a small, lightweight wireless keyboard and mouse combo--or a wireless keyboard with built-in pointing device--so you can operate it from the comfort of a couch. Wireless keyboards and mice use either radio-frequency (RF) or Bluetooth technology, and require you to plug a USB receiver into a USB port on your machine. When shopping for a keyboard, watch for handy media keys. These put media playback buttons and volume controls on your keyboard, heightening the couch-based computing experience.

If you plan to buy a tower PC, you'll likely have space on your desk for a full-size keyboard with number pad. If comfort is an issue or you struggle with wrist pain, look for ergonomic keyboards and mice that conform to the shape of your hands and workspace. If you're an avid gaming fan, consider keyboards and mice from brands like Razer and Logitech that offer backlit keys, programmable macro buttons, and other features that may give you a competitive edge.
Removable Storage
Your operating system and system restore discs (if any) will still ship on DVD; consequently, all but a handful of small PCs ship with a dual-layer multiformat DVD burner. If you're a fan of high-definition media, consider adding a Blu-ray reader/DVD burner combo drive, which lets you store data on your own CDs and DVDs and watch media stored on Blu-ray discs. To take advantage of the massive storage opportunities offered by Blu-ray discs, you'll need a Blu-ray writer, an add-on that lets you read and write in every disc-based media format.

HP and other companies market portable media drives, ranging from less than $100 to as high as $250. These hard-drive models connect with a USB cable, but are designed to slide into a media drive bay included on select desktop models. Portable hard drives are crucial to anyone who wants to protect data from hard-drive failure or to transport lots of content. Consult our "Top 10 Portable Hard Drives" chart for further options.
Audio

The integrated sound provided on a typical PC's motherboard today supports 5.1-channel audio. This should suffice for users who don't want to spend a lot of money on their PC's audio system. But a dedicated sound card will improve the dynamic range of compressed audio, add rich environmental effects to games, and improve system performance when you record or mix audio.
On most PCs above the budget level, motherboards come with 7.1-channel audio. If you're shopping for a PC with integrated graphics, look for models sporting the nVidia Ion graphics processor, which also offers 7.1-channel HD audio.
A sound card can increase your PC's initial cost by $40 to $80, depending on the technology that the card uses. Higher-end cards can cost more than $200, but these generally target creative professionals or gamers who require 3D environmental audio effects for competitive play.
If you do opt for a sound card, make sure that your motherboard has a spare PCI or PCI-Express slot, depending on the requirements of the card that you've chosen. The manufacturer's specifications for the machine or motherboard you're buying will list the slots it has available.
As with all upgrade options, comparison-shop before you settle on a particular sound card or set of speakers. You may find a better deal elsewhere. On the other hand, if you buy the card yourself, you'll have to crack open your system to install the card.
Speaker preferences are personal, and the physical dimensions of the room your computer is in may limit your options. PCs of all shapes and sizes have analog audio outputs, and some models include a digital optical connection, which reduces the number of cables you need.
Many all-in-one PCs include a speaker bar attached to the screen. Audio from these sound bars varies from model to model, but in general the quality will be akin to that of laptop speakers, with deeper, richer sound from more-expensive models. If sound quality isn't a high priority, the included speaker bar will perform adequately, just as built-in speakers usually suffice for an HDTV. But if you plan to use your all-in-one as a primary media machine, we recommend choosing dedicated speakers with a subwoofer.

Monday, March 29, 2010

Keep Your Laptop Battery Working

Laptop batteries are like people--eventually and inevitably, they die. And like people, they don't obey Moore's Law--you can't expect next year's batteries to last twice as long as this year's. Battery technology may improve a bit over time (after all, there's plenty of financial incentive for better batteries), but, while interesting possibilities may pop up, don't expect major battery breakthroughs in the near future.
Although your battery will eventually die, proper care can put off the inevitable. Here's how to keep your laptop battery working for as long as possible. With luck, it could last until you need to replace that aging notebook (perhaps with a laptop having a longer battery life).
I've also included a few tips on keeping the battery going longer between charges, so you can work longer without AC power.

Don't Run It Down to Empty
Squeezing every drop of juice out of a lithium ion battery (the type used in today's laptops) strains and weakens it. Doing this once or twice won't kill the battery, but the cumulative effect of frequently emptying your battery will shorten its lifespan.
(There's actually an exception to this rule--a circumstance where you should run down the battery all the way. I'll get to that later.)
The good news: You probably can't run down the battery, anyway--at least not without going to a lot of trouble to do so. Most modern laptops are designed to shut down before the battery is empty.
In fact, Vista and Windows 7 come with a setting for just this purpose. To see it, click Start, type power, and select Power Options. Click any one of the Change plan settings links, then the Change advanced power settings link. In the resulting dialog box, scroll down to and expand the Battery option. Then expand Critical battery level. The setting will probably be about 5 percent, which is a good place to leave it.
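If you prefer the command line, the same setting can be read and changed with the powercfg utility that ships with Vista and Windows 7. A config sketch, assuming the standard SUB_BATTERY and BATLEVELCRIT aliases (verify them with powercfg -aliases on your machine, and run from an elevated Command Prompt):

```shell
rem View the battery settings of the currently active power plan
powercfg -query SCHEME_CURRENT SUB_BATTERY

rem Set the critical battery level to 5 percent for on-battery (DC) use
powercfg -setdcvalueindex SCHEME_CURRENT SUB_BATTERY BATLEVELCRIT 5

rem Reapply the plan so the change takes effect
powercfg -setactive SCHEME_CURRENT
```

Either route ends up in the same place; the GUI steps above are easier if you only need to check the value once.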
XP has no such native setting, although your laptop may have a vendor-supplied tool that does the same job.
Myth: You should never recharge your battery all the way.
There's considerable controversy on this point, and in researching this article I interviewed experts both for and against. But I've come down on the side of recharging all the way. The advantages of leaving home with a fully-charged battery--you can use your PC longer without AC power--are worth the slight risk of doing damage.
Keep It Cool
Heat breaks down the battery, and reduces its overall life.
When you use your laptop, make sure the vents are unblocked. Never work with the laptop on pillows or cushions. If possible, put it on a raised stand that allows for plenty of airflow.
Also, clean the vents every so often with a can of compressed air. You can buy this for a few dollars at any computer store. Be sure to follow the directions on the can, and do this only when the notebook is off.
Give It a Rest
If you're going to be working exclusively on AC power for a week or more, remove the battery first.
Otherwise, you'll be wearing out the battery--constantly charging and discharging it--at a time when you don't need to use it at all. You're also heating it up (see "Keep It Cool," above).
You don't want it too empty when you take it out. An unused battery loses power over time, and you don't want all the power to drain away, so remove it when it's at least half-charged.
Never remove the battery while the computer is on, or even in standby or sleep mode; doing so will crash your system and possibly damage your hardware. Even inserting a battery into a running laptop can damage the system. So only remove or reinsert the battery when the laptop is completely off or hibernating.
If you've never removed your laptop's battery and don't know how, check your documentation. (If you don't have it, you can probably find it online.) The instructions generally involve turning the laptop upside-down and holding down a button while you slide out the battery.
Myth: Refrigerate your battery.
Some people recommend you store it in the refrigerator, inside a plastic bag. While you should keep a battery cool, the last thing you want is a wet battery, and condensation is a real danger in the fridge. Instead, store it in a dry place at room temperature. A filing cabinet works fine.
You also don't want the battery to go too long without exercise, or to empty out entirely. If you go without the battery for more than two months, put it in the PC and use it for a few hours, then remove it again.
Also, before you take the laptop on the road, reinsert the battery and let it charge for a few hours before unplugging the machine. Allow the battery time to get a full charge before you remove the AC power.
Heal a Sick Battery

Myth: You can rejuvenate a worn-out battery.
This isn't, strictly speaking, the case. You can't make old lithium hold more electrons than it can currently manage. But if the battery is running out unexpectedly fast, or if your laptop is having trouble figuring out how much power it has left, you might be able to fix the battery's "gas gauge," so it at least gives a more accurate reading.
If you suspect the battery can't tell if it's charged or not, run it through a couple of cycles. Drain it of all its power (yes, this is the exception to the "don't drain the battery" rule mentioned above), recharge it to 100 percent, and then repeat.
But how do you drain the battery when Windows won't let you do just that? Don't bother with the settings described above. They're not safe (you might forget to change them back), they may not be getting an accurate reading, and they quite possibly won't let you set the critical battery level to 0 percent. (If they did, it would crash Windows.)
Instead, unplug your AC power and keep your laptop running (you can work on it if you like) until it automatically hibernates. Then reboot your PC and go directly to the system setup program.
I can't tell you exactly how to get there; each computer is different. Turn on your PC and look for an onscreen message (one of the first you'll see) that says something like "Press the X key for setup." Immediately press the designated key.
It may take a couple of tries to get the timing right. If there isn't enough power to let it boot, plug in AC until you're at the setup program, then unplug it.
Leave the notebook on until it shuts off. This can take some time (45 minutes on my laptop); setup uses a lot less power than Windows.
Once the PC is off, plug in the AC power, then wait a few hours before rebooting to Windows and making sure you've got a full recharge.
Repeat the process once or twice. With luck and proper care, your battery will still be useful when you're looking for a new laptop.
Longer Life Between Charges
The tips above should lengthen the time before you need to replace your laptop's battery. But on a daily basis, we're far more concerned with another type of battery life: how long we can keep our laptop running without AC power. You may know most of the following tips already, but it never hurts to refresh (or recharge) your memory.
Dim your screen
Your laptop's backlight requires a lot of juice. Keep the screen as dim as you can while still reading it comfortably.
Shut off unneeded hardware
Turn off your Bluetooth, and if you're not using the Internet, turn off your Wi-Fi receiver, as well. Don't use an external mouse or other device. And muting the PC's sound system not only saves power, it avoids annoying everyone else in the café.
Avoid multitasking
Run as few programs as you can get away with. If possible, stick to the one application (word processor, browser, or whatever) you're currently using, plus your antivirus and firewall in the background. And if you're not on the Internet, you don't need those two.
Avoid multimedia
Save chores like photo editing and watching old Daily Show videos for when you have AC power. And if you must listen to music, use your iPod (or similar device).
Know when to sleep and when to hibernate
You need to think about when you want to save power by sending your laptop into Standby or Sleep mode, and when you want to hibernate it.
There's a difference. XP's Standby and Vista and Windows 7's Sleep modes keep your PC on, using some power, but less of it than in normal use. Hibernate saves the PC's state to the hard drive, then shuts it off entirely, so that no power is used.
On the other hand, Windows takes much longer--sometimes minutes--to go into and come out of hibernation. And those are minutes that the battery is draining heavily and you can't work.
XP's Standby mode isn't really all that efficient. If your laptop will be inactive for more than about half an hour, hibernate it. Otherwise, use Standby.
But Vista and Windows 7 do a much better job with their Sleep mode. Don't bother hibernating your PC unless you think you're going to go more than two or three hours without using it.
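The rules of thumb in the last few paragraphs boil down to a simple threshold check. A sketch in Python (the 30-minute and 2.5-hour cutoffs encode this article's rough guidelines, not fixed rules):

```python
def power_mode(os_name, idle_minutes):
    """Suggest standby/sleep vs. hibernate for an expected idle period.
    XP's Standby is inefficient, so its hibernate threshold is low;
    Vista and Windows 7 Sleep holds up for a couple of hours."""
    if os_name == "xp":
        return "hibernate" if idle_minutes > 30 else "standby"
    # Vista / Windows 7
    return "hibernate" if idle_minutes > 150 else "sleep"

print(power_mode("xp", 45))        # hibernate -- XP Standby wastes power
print(power_mode("windows7", 45))  # sleep -- not worth the hibernate delay
print(power_mode("windows7", 200)) # hibernate -- long enough to justify it
```

The point of the asymmetry: hibernation costs you minutes of heavy battery drain going in and out, so it only pays off when the idle stretch is long enough to amortize that cost.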
Myth: Adding RAM saves battery life.
True, more RAM means less hard drive access, and the hard drive uses a lot of electricity. But RAM uses electricity as well, and unless you're doing a lot of multitasking (not a good idea when you're on battery power), more RAM won't reduce hard drive use.
Juiced for more battery life tips? Check out our other battery life tips or post your favorites in the comments!

Monday, March 22, 2010

Grid Computing

Grid computing is increasingly being viewed as the next phase of distributed computing. Built on pervasive Internet standards, grid computing enables organizations to share computing and information resources across department and organizational boundaries in a secure, highly efficient manner. Over the last few years, grid computing has dramatically evolved from its roots in science and academia, and is currently at the onset of mainstream commercial adoption. But with the recent explosion of commercial interest in grid, we're seeing some industry confusion about what the term means. This is partially because the true definition has been somewhat convoluted by the onslaught of marketing hype around the category.
So what is grid? Here's a three-part checklist. According to this checklist, a grid:

1. Coordinates resources that are not subject to centralized control. A grid integrates and coordinates resources and users that live within different control domains--for example, different administrative units of the same company, or even different companies--and addresses the issues of security, policy, payment, membership, and so forth that arise in these settings.

2. Uses standard, open, general-purpose protocols and interfaces. A grid is built from multi-purpose protocols and interfaces that address such fundamental issues as authentication, authorization, resource discovery, and resource access. It is important that these protocols and interfaces be standard and open; otherwise, we are dealing with application-, hardware-, or OS-specific systems.

3. Delivers nontrivial qualities of service. A grid should be transparent to the end user, addressing issues of response time, throughput, availability, security, and/or co-allocation of multiple resource types to meet complex user demands. The goal is that the utility of the combined system is significantly greater than the sum of its parts.

In even simpler terms, grid is at the foundation level of the trends that are driving better synchronization between IT and the underlying hardware and software resources. In this new wave of innovation, we are starting to see IT more effectively manage its own resources--and in the process, lead business to this world of "adaptive enterprise."
So why should the enterprise professional care about grid?
The primary reason is that grid will ultimately usher the enterprise into a new era of efficiency in managing its resources. Historically, IT organizations have had to overbuy resources--planning for peak requirements and worst-case scenarios. In the past there was no ability to turn the dial up and down on resources as users required them. Nor was there a means of transitioning resources as they dynamically changed state.
Grid is at the foundation of important trends like utility computing, IT automation, and virtualization. (Virtualization is the creation of a virtual, rather than actual, version of something, such as an operating system, a server, a storage device, or network resources; it is making inroads in three areas of IT: network virtualization, storage virtualization, and server virtualization.) There are a number of things that your organization should be doing to prepare for the arrival of grid in the enterprise. These steps range from how you plan your SOA and utility computing strategy, to evaluating commodity hardware purchasing options, to effecting cultural change in your organization (that is, a departure from siloed resources)--there are a whole host of new challenges and skill sets related to this innovation wave.

Saturday, March 20, 2010

Server Virtualization

What is server virtualization?
Server virtualization is a technology for partitioning one physical server into multiple virtual servers. Each of these virtual servers can run its own operating system and applications, and perform as if it is an individual server. This makes it possible, for example, to complete development using various operating systems on one physical server or to consolidate servers used by multiple business divisions.
Among the various virtualization methods available, NEC primarily focuses on virtualization software solutions. Because the virtualization software, or hypervisor, used by NEC runs directly on bare hardware (physical servers), our virtualized environments have little overhead. NEC’s proven, reliable solutions are built upon years of experience with virtualization.
Server virtualization features
Primary advantages of server virtualization

Reduce the number of servers

Partitioning and isolation, the characteristics of server virtualization, enable simple and safe server consolidation.
Through consolidation, the number of physical servers can be greatly reduced. This alone brings benefits such as reduced floor space, power consumption, and air conditioning costs. However, it is essential to note that even though the number of physical servers is greatly reduced, the number of virtual servers to be managed does not change. Therefore, when virtualizing servers, installation of operation management tools for efficient server management is recommended.
Reduce TCO
Server consolidation with virtualization reduces costs of hardware, maintenance, power, and air conditioning. In addition, it lowers the Total Cost of Ownership (TCO) through more efficient use of server resources, simpler operational changes, and virtualization-specific features. As a result of today's improved server CPU performance, a few servers have high resource-usage rates but most are often underutilized. Virtualization can eliminate such ineffective use of CPU resources, plus optimize resources throughout the server environment. Furthermore, because servers managed by each business division's staff can be centrally managed by a single administrator, operation management costs can be greatly reduced.
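As a back-of-the-envelope illustration of why consolidation pays off, the capacity math can be sketched like this (hypothetical utilization figures; real sizing must also account for memory, I/O, and failover headroom):

```python
import math

def hosts_needed(cpu_loads, target_util=0.7):
    """Minimum physical hosts to absorb the given per-server CPU loads,
    keeping each host at or below target_util. CPU-capacity estimate only."""
    return max(1, math.ceil(sum(cpu_loads) / target_util))

# Ten legacy servers, each mostly idle at 10-15% CPU:
loads = [0.10, 0.15, 0.10, 0.12, 0.10, 0.15, 0.10, 0.10, 0.12, 0.10]
print(hosts_needed(loads))  # 2 -- ten underutilized boxes fit on two hosts
```

Even this crude model shows the pattern NEC describes: when most machines idle at 10 to 15 percent CPU, a small number of well-utilized hosts can replace racks of dedicated ones.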

Improve availability and business continuity
One beneficial feature of virtualized servers not available in physical server environments is live migration. With live migration, virtual servers can be migrated to another physical server for tasks such as performing maintenance on the physical servers without shutting them down. Thus there is no impact on the end user. Another great advantage of virtualization technology is that its encapsulation and hardware-independence features enhance availability and business continuity.

Increase efficiency for development and test environments
At system development sites, servers are often used inefficiently. When different physical servers are used by each business division's development team, the number of servers can easily grow. Conversely, when physical servers are shared by teams, reconfiguring development and test environments can be time-consuming and labor-intensive.
Such issues can be resolved by using server virtualization to simultaneously run various operating system environments on one physical server, thereby enabling concurrent development and test of multiple environments. In addition, because development and test environments can be encapsulated and saved, reconfiguration is extremely simple.

Friday, March 19, 2010

Computer Network Maintenance

To keep your network running at its optimum performance, you need to perform computer network maintenance on a regular basis. Network maintenance includes monitoring your bandwidth and checking packets. There are many good free open-source utilities, such as Wireshark and Nmap, that can capture packets for analysis and provide bandwidth stress testing to help find any faulty component or possible noise source.

Wireshark is a great utility for capturing and analyzing packets. It has a nice, easy-to-use graphical user interface. You can use it to quickly diagnose possible bandwidth problems concerning protocol or packet issues. You can capture as much data as you need and save it offline for later analysis.

Nmap is another nice utility that can be used to monitor your network, quickly build a network inventory, and detect possible unauthorized access.
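As a rough illustration of what one of Nmap's simplest techniques (a TCP "connect" scan) does under the hood, the sketch below tries to complete a TCP handshake on each port and reports which ones accept. This is only a teaching toy; use Nmap itself for real inventory work, and only scan hosts you are authorized to test.

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of ports on host that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Example: check a few well-known ports on your own machine.
print(open_ports("127.0.0.1", [22, 80, 443]))
```

Nmap layers many refinements on this idea (SYN scans, service fingerprinting, OS detection), which is why it is the right tool for real audits.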
Cabling
Cabling is still a big source of network problems. CAT6 cabling is a must for newer networks. The older CAT5 cabling allows too much noise insertion through the cable and connectors. CAT6 is also better suited for 1Gbps networks and above.
Managed Switches
Managed switches are a must for network security and bandwidth optimization. They give you managed access to every port so you can limit or control which MAC addresses may connect. You can be immediately notified of unauthorized access attempts, and you have the option of locking out the port.
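The port-security behavior described above can be sketched in a few lines. This is a hypothetical model, not a real switch API: each port carries an allow-list of MAC addresses, an unknown MAC is recorded as a violation (where a real switch would raise an SNMP trap), and the port optionally locks until an administrator clears it.

```python
class ManagedPort:
    def __init__(self, allowed_macs, lock_on_violation=True):
        self.allowed = {m.lower() for m in allowed_macs}
        self.lock_on_violation = lock_on_violation
        self.locked = False
        self.violations = []

    def admit(self, mac):
        """Decide whether a frame from this MAC may enter on this port."""
        mac = mac.lower()
        if self.locked:
            return False
        if mac in self.allowed:
            return True
        self.violations.append(mac)   # a real switch would alert the admin here
        if self.lock_on_violation:
            self.locked = True        # shut the port until an admin clears it
        return False

port = ManagedPort(allowed_macs=["00:1A:2B:3C:4D:5E"])
print(port.admit("00:1a:2b:3c:4d:5e"))  # True: known device
print(port.admit("de:ad:be:ef:00:01"))  # False: unknown device, port locks
print(port.admit("00:1a:2b:3c:4d:5e"))  # False: port is now locked
```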

Website Blocking
Managed routers and proxy servers provide blocking by website name or IP address. You can block all Internet access except for a few allowed sites, or use a subscription list to block just known questionable sites.

Virtual Servers
Virtual servers are the latest tool for administrators to consolidate servers and allow for easy backup and restore. Now when a server gets corrupted or infected, it is a simple matter of rebooting the virtual machine from a fresh, clean image.
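The snapshot-and-rollback workflow behind that "fresh, clean image" can be modeled in a few lines. This toy is not tied to any hypervisor; the dictionary simply stands in for a virtual disk.

```python
import copy

class VirtualServer:
    def __init__(self, image):
        self.image = image       # stands in for the virtual disk contents
        self.snapshot = None

    def take_snapshot(self):
        """Capture a known-good copy of the disk image."""
        self.snapshot = copy.deepcopy(self.image)

    def restore(self):
        """Roll the disk back to the last known-good snapshot."""
        if self.snapshot is None:
            raise RuntimeError("no snapshot to restore from")
        self.image = copy.deepcopy(self.snapshot)

vm = VirtualServer(image={"os": "patched", "malware": False})
vm.take_snapshot()
vm.image["malware"] = True       # the server gets infected...
vm.restore()                     # ...so roll back to the clean image
print(vm.image["malware"])       # False
```

Real hypervisors do this with copy-on-write disk snapshots rather than full copies, but the administrative model is the same.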

Thursday, March 18, 2010

Next Generation Computing Devices - Laptops

There have been a lot of changes in the two decades or so since the first laptop computers appeared. Laptop manufacturing has gone through a revolution, with many changes in hardware components and software; processor speeds are now measured in gigahertz. Physically, however, laptops have changed far less: the machines of the past and the ones we see now share many of the same features. That is now starting to change, from the traditional design of a hinged display lid and mechanical keyboard to more exciting forms. According to Mike Trainor of Intel Corp., laptops of this kind are likely to reach the market before 2015.

In the early 1990s, the integrated pointing device, webcam, and speakers were mere concepts; today they are standard parts of the device. Today's concept laptops are really amazing. Several rely on touch-sensitive screens that act as the system's keyboard and mouse and go beyond today's multitouch technology. Imagine being able to slide your finger across the screen to immediately shut off the display and keep what you were working on confidential, and you get an idea of the potential.
The display and the keyboard are similar to the slider phones available now. A pair of super-bright organic LED panels slide into place next to each other, with the lower panel acting as a keyboard or scribble pad. The whole thing is only three-quarters of an inch thick. Together, the 11-in. screens will yield about 16 inches of usable workspace, so the system has the dimensions and weight of a thin-and-light system but the screen of a larger one.

In another type, instead of a display and a mechanical keyboard, the device has two touch-sensitive displays. The upper screen is primarily for viewing applications, and the lower screen is for activities like typing, drawing, and jotting notes. It can also lie flat for a large expanse of working space. According to its designer, this notebook changes personality depending on how it's held. Opened all the way, it's a sketch pad. Fold it half open and rotate it 90 degrees, and it's an eBook. By emulating a musical keyboard on the lower half when it's flat on a table, it can be a go-anywhere piano. Beyond all these features, this type of laptop saves power because it uses OLED panels, which consume much less power than the LCD displays used today.
These laptops may also use alternative power sources: fuel cells running on methanol or natural gas would let their batteries run longer and make them more independent of the electrical grid.

Tuesday, March 16, 2010

IT Management - ITIL v3

The current version of ITIL (v3) was published at the end of April 2007. This re-development provided greater focus upon the alignment of IT and business/management, through the whole planning to production lifecycle.
It introduced five new 'core' texts, as follows:
1. ITIL Service Strategy
2. ITIL Service Design
3. ITIL Service Transition
4. ITIL Service Operation
5. ITIL Continual Service Improvement

ITIL Service Strategy (SS)
This volume is actually the hub of the ITIL v3 core itself, and is a view of ITIL which aligns information technology and the business. From a triangular viewpoint of the ITIL core sets, it would be at the apex.
The following topics are covered by this volume:
— Strategy and value planning
— Roles / responsibilities
— Planning and implementing service strategies
— Business planning and IT strategy linkage
— Challenges, risks and critical success factors.
It helps focus upon understanding, and upon translating business strategy into IT strategy, as well as selection of the best practices for the particular industry in question.

ITIL Service Design (SD)
This volume provides guidance on the creation and maintenance of IT policies and architectures for the design of IT service solutions.
Included are the following topics:
— The service lifecycle
— Roles and responsibilities
— Service design objectives and elements
— Selecting the appropriate model
— Cost model
— Benefit and risk analysis
— Implementation
— Measurement / control
— CSFs and risks
This also embraces outsourcing, insourcing and co-sourcing.

ITIL Service Transition (ST)
This volume covers the longer term change management and release practices. It provides guidance for the transition of IT services into the business environment.
It includes the following topics:
— Managing change (organizational and cultural)
— Knowledge management
— Risk analysis
— The principles of service transition
— Lifecycle stages
— Methods, practices and tools
— Measurement and control
— Other best practices
Fundamentally, it covers how to create a transition strategy from service design and transfer it to the production (business) environment.

ITIL Service Operation (SO)
This volume covers delivery and control processes with a view to ensuring service stability.
The following topics are included:
— Principles and lifecycle stages
— Process fundamentals
— Application management
— Infrastructure management
— Operations management
— CSFs and risks
— Control processes and functions
It embraces the familiar basics of how to manage services in the production environment, including day to day issues and fire fighting.

ITIL Continual Service Improvement (CSI)
This volume covers the processes involved in improving service management within the business, in addition to issues related to service closure or retirement.
It includes the following topics:
— The drivers for improvement
— The principles of CSI
— Roles and responsibilities
— The benefits
— Implementation
— Methods, practices and tools
— Other best practices
It basically describes how to improve a service after it is deployed.

The content of the version 2 sets remained largely in place, with the core processes carried forward. However, there was certainly a change of emphasis and significant new content.

Monday, March 15, 2010

System Changes to Foil Hackers

1. Clearing the Page File at Shutdown
The Windows 2000/XP paging file (sometimes called the swap file) can contain sensitive information such as plaintext passwords. Someone capable of accessing your system could scan that file and find its information. You can force Windows to clear out this file at shutdown.
In the registry, navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management and add or edit the DWORD value ClearPageFileAtShutdown. Set it to 1.
Note that when you do this, the system will take much longer to shut down: a system with a really big page file (1GB or more) may take a minute or two longer.
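If you'd rather not edit by hand, the same change can be applied by merging a .reg file with this content (back up the registry before merging):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"ClearPageFileAtShutdown"=dword:00000001
```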

2. Disable the POSIX and OS/2 Subsystem.
Windows 2000 and XP come with little-documented subsystems that allow compatibility with UNIX and OS/2 systems. These subsystems are enabled by default but so rarely used that they are best disabled completely to prevent possible service hijackings.
To disable these subsystems, open the registry and navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems. Delete the subkeys Os2 and Posix, then reboot.

3. Never leave default passwords blank.
On installation, Windows 2000 sets up an Administrator account with total system access and prompts for a password. Guess what: by default, it allows that password to be blank. If a user doesn't want to type a password, he can simply click Next and the system will be an open door for anyone who wants to log on. Always opt for a password of some kind when setting up the default account on a machine.

4. Install Windows In a different directory.
Windows usually installs itself in the WINDOWS directory; Windows NT 4.0 and 2000 will opt for WINNT. Many worms and other rogue programs assume this to be the case and attempt to exploit those folders' files. To defeat this, install Windows to another directory when you're setting it up--you can specify the name of the directory during setup. WINDIR is okay, and some people use WNDWS. A few (not that many) programs may not install properly if you install Windows to another folder, but they are few and far between.

5. Fake out hackers with a dummy Administrator account
Since the default account in Windows 2000 is always named Administrator, an enterprising hacker can try to break into your system by attempting to guess the password on that account. If you never bothered to put a password on that account, say your prayers.
Rather than be a sucker to a hacker, put a password on the Administrator account if you haven't done so already. Then change the name of the Administrator account. You'll still be able to use the account under its new name, since Windows identifies user accounts by a back-end ID number rather than the name. Finally, create a new account named Administrator and disable it. This should frustrate any would-be break-ins.
You can add new accounts and change the names of existing accounts in Windows 2000 through the Local Users and Groups snap-in. Right-click on My Computer, select Manage, open the Local Users and Groups subtree, look in the Users folder, and right-click on any name to rename it. To add a new user, right-click on the containing folder and select New User. Finally, to disable an account, double-click it, check the Account is disabled box, and click OK.
Don't ever delete the original Administrator account. Some programs refuse to install without it and you might have to log in under that account at some point to setup such software. The original Administrator account is configured with a security ID that must continue to be present in the system.

6. Disable the Guest account
Windows XP comes with a Guest account that's used for limited access, but it's still possible to do some damage with it. Disable it completely if you are not using it. Under Control Panel, select User Accounts, click on Guest Account and then select Turn Off the Guest Account.

7. Set the Hosts file to read-only to prevent name hijacking.
This one's from (and to a degree, for) the experts. The HOSTS file is a text file that all flavors of Windows use to hold certain network addresses that never change. When a network name and address is placed in HOSTS, the computer uses the address listed there for that network name rather than performing a lookup (which can take time). Experts edit this file to place their most commonly-visited sites into it, speeding things up considerably.
Unfortunately, hijackers and hackers also love to put their own information into it, redirecting people from their favorite sites to places they don't want to go. One of the most common entries in HOSTS is localhost, which is set to 127.0.0.1. This refers to the local machine, and if this entry is damaged the computer can behave very unpredictably.
To prevent HOSTS from being hijacked, set it to read-only. Go to the folder %SystemRoot%\system32\drivers\etc, right-click on HOSTS, select Properties, check the Read-Only box, and click OK. If you want to add your own entries to HOSTS, you can unprotect it before doing so, but always remember to set it back to read-only after you're done.
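If you want to audit a HOSTS file for hijacked entries, the parsing is simple enough to sketch. This illustrative snippet reads "address name" lines (ignoring comments) and lets you spot a well-known name pointed somewhere unexpected; the sample data and the bank hostname are made up.

```python
def parse_hosts(text):
    """Map each hostname in HOSTS-style text to the address assigned to it."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blank lines
        if not line:
            continue
        addr, *names = line.split()
        for name in names:
            mapping[name.lower()] = addr
    return mapping

sample = """
127.0.0.1  localhost
# a hijacker-style entry redirecting a real site:
10.9.8.7   www.example-bank.com
"""
hosts = parse_hosts(sample)
print(hosts["localhost"])              # 127.0.0.1
print(hosts["www.example-bank.com"])   # 10.9.8.7 -- worth investigating
```

Anything in the file other than localhost and entries you added yourself deserves scrutiny.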

8. Disallow changes to IE settings through IE
This is another anti-hijacker tip. IE can be set so that any changes to its settings must be performed through the Internet icon in the Control Panel, rather than through IE's own interface. Some particularly unscrupulous programs or sites try to tamper with settings by accessing the Tools, Options menu in IE. You can disable this and still make changes to IE's settings through the Control Panel.
Open the registry and browse to HKEY_CURRENT_USER\Software\Policies\Microsoft\Internet Explorer\Restrictions. Create or edit a DWORD value named NoBrowserOptions and set it to 1 (this is a per-user setting). Some third-party programs such as Spybot Search & Destroy allow you to toggle this setting.
You can also keep IE from having other programs rename its default startup page, another particularly annoying form of hijacking. Browse to HKEY_CURRENT_USER\Software\Policies\Microsoft\Internet Explorer\Control Panel and add or edit a DWORD value named HomePage, and set it to 1.

9. Turn off unneeded Services
Windows 2000 and XP both come with many background services that don't need to be running most of the time: Alerter, Messenger, Server (if you're running a standalone machine with no file or printer shares), NetMeeting Remote Desktop Sharing, Remote Desktop Help Session Manager (the last two if you're not using Remote Desktop or NetMeeting), Remote Registry, Routing and Remote Access (if you're not using Remote Access), SSDP Discovery Service, Telnet, and Universal Plug and Play Device Host.

10. Disable simple File Shares.
In Windows XP Professional, the Simple File Sharing mode is easily exploited, since it's a little too easy to share out a file across your LAN (or the Net at large). To turn it off, go to My Computer, click Tools, Folder Options, and the View tab, and uncheck Use Simple File Sharing (Recommended). Click OK. When you do this you can access the Security tab in the Properties window for all folders, set permissions for folders, and take ownership of objects (but not in XP Home).

Saturday, March 13, 2010

IT Management - ITIL v2

What is ITIL?
Primarily, ITIL provides international best-practice guidance in IT Service Management. However, ITIL offers more than just guidance; it underpins the foundations of ISO/IEC 20000 (the Service Management standard, previously BS15000). It also provides a framework for IT Service Management practitioners to demonstrate their knowledge and understanding of ITIL and to develop their professional expertise through training and qualifications.

The Evolution of ITIL
ITIL V1 was initially developed in the 1980s by the forerunner to the OGC (the CCTA) and was used mainly by government agencies. From 1999 to 2001, ITIL became the cornerstone for Service Management by introducing the Service Support and Service Delivery disciplines, establishing Version 2. There then followed a natural progression of alignment between ITIL and BS15000, culminating in today's ISO/IEC 20000 standard and, more importantly, ITIL V3 - The Service Lifecycle.

ITIL V2
ITIL V2 has proved to be highly significant in the history of IT Service Management. Service Support focuses on the processes required to keep operations running on a day-to-day basis. It explains how the Service Desk owns and supports Incident Management and provides a foundation for supporting users' issues and requests. Equally, understanding how Problem Management needs to be proactive as well as reactive, and the significant benefits to be gained from effective root cause analysis, gives great insight into reducing the impact of service outages for the user.

Change Management provides a structured and controlled process to ensure effective impact assessment and scheduling for the introduction of change. Through consideration of both business and technical criteria, the Change Management process can significantly reduce risks and minimise the impact of change.

Release Management provides a framework for coordination, control and physical introduction of a change into the development and production environments. Elements include technical and non-technical aspects e.g. user training, to ensure that all the required support is in place.

Configuration Management provides the foundation for all the Service Support and Delivery processes. By ensuring a comprehensive and accurate database of software, infrastructure, and documentation, Configuration Management provides the collateral for all processes, e.g. providing component relationship information for impact assessment and incident/problem resolution, and maintaining release baselines for both development and back-out purposes.

Service Delivery, by contrast, concentrates on the underpinning processes required to ensure that the services are maintained and provisioned to meet the current and future needs of the business. Service Level Management explains the relationship required with customers to ensure that business needs are understood and delivered, as well as acting as the overarching process for reporting on SLAs and managing internal and external provider relationships.

Availability Management is concerned with understanding each component part of the service, how reliable it is and putting in place alternative provisions to ensure the service continues should a component fail.

Capacity Management ensures that there is sufficient resource, infrastructure and overall service capacity to meet the current and future business requirements. It demonstrates how important it is to have a clear understanding of where the business is going and how it is planning to achieve its objectives.

Financial Management for IT supports the company’s overall financial objectives by being able to demonstrate control and understanding of how much services, infrastructure and support costs, and enabling effective budgeting and accounting of the whole IT division. It also explains how you can introduce charging into your organization, along with some of the inherent issues.

Finally, IT Service Continuity explains how to assess business risk, the different types of continuity provision that can be considered, and the importance of aligning IT service continuity to the business continuity requirements.

V2 identified the key processes of its time, but the shape of Service Management now has many facets. It is therefore becoming more and more important to recognise that Service Management is not just about supporting the end product. Hence, ITIL V3 - The Service Lifecycle was born.
The Service Lifecycle

The Service Lifecycle bridges the four key stages of a service: Strategy, Design, Transition, and Operation. Enabling improvement and providing a vehicle for recognising change, Continual Service Improvement surrounds the core processes. Each Service Lifecycle core process recognises the strengths of V2 and uses it as a platform to consolidate the whole lifecycle. New processes are introduced to give a more robust service profile, with models to support any organization size.

Service Strategy concentrates on ensuring that the Service Strategy is defined, maintained and implemented. It introduces new concepts such as value creation, market definition and solution space. It focuses on enabling practical decision making, based on understanding service assets, structures and service economics with the ultimate aim of increasing the economic life of the services.

Service Design focuses on setting pragmatic service blueprints which convert strategy into reality. Harnessing Availability, Capacity, Continuity, and Service Level Management, Service Design also focuses on the new process of Supplier Management and the concepts of service warranty and utility, which customers consider to be fundamental.

Service Transition aims to bridge the gap between projects and operations more effectively. It provides clear accountabilities and responsibilities for more of the V2 key processes, e.g. Change, Configuration, and Release, but extends them into Service Asset and Configuration Management, and Build and Test with Release and Deployment Management.

Service Transition is concerned with the quality and control of the delivery to operations and provides example organization models to support transition, and guidance on how to reduce variation of delivery.

Service Operation ensures that there are robust end-to-end practices which support responsive and stable services. It influences Strategy, Design, Transition, and Continual Service Improvement through its knowledge of actual service delivery.

Friday, March 12, 2010

Database - Data Center Basics: Choosing the Location

Assessing Viable Locations for Your Data Center
When the time comes for your business to build a server environment, it is essential that the people responsible for the Data Center's design have an opportunity to provide input into where it is constructed. Traditionally, upper management decides what property to purchase, based upon a variety of a company's wants, needs, and business drivers. Other purchase considerations might include a parcel's price tag, its proximity to a talented labor pool, advantageous tax rates, or the desire to have a corporate presence in a particular geographic area. Whatever the drivers are, a property's suitability to house a Data Center must be among them. Purchasing or leasing a site without considering this greatly hampers the Data Center's capability to protect company servers and networking devices. Not making this a consideration also invariably leads to additional expense, either to retrofit the land's undesirable characteristics or to add more infrastructure to compensate for them.

An ideal Data Center location is one that offers many of the same qualities that a Data Center itself provides a company:

* Protection from hazards
* Easy accessibility
* Features that accommodate future growth and change

These qualities are fairly obvious, like saying that it is easier for an ice chest to keep drinks chilled when it is also cold outside. Less apparent are what specific characteristics improve or hamper a property's usability as a Data Center location and why.

Building Codes and the Data Center Site
The first step when evaluating an undeveloped property's suitability as a Data Center site is a determination of how the property is zoned. Zoning controls whether a server environment is allowed to be built there at all. Zoning is done in a majority of countries and reflects how the local government expects a parcel of land to be used. Some classifications prohibit a Data Center.

Site Risk Factors
Every parcel of land comes with unique hazards. Knowing the hazards associated with any property upon which you consider placing a Data Center is very useful and should be a serious consideration.

Below is a list of hazards that you should be careful about when choosing your site.

* Natural Disasters
* Seismic Activity
* Ice Storms
* Hurricanes
* Tornadoes
* Flooding
* Landslides
* Fire
* Pollution
* Electromagnetic Interference
* Vibration
* Political Climates
* Flight Paths

Evaluating Physical Attributes of the Data Center Site
Once you are aware of the risk factors facing a potential Data Center site, it is time to assess the physical features of the property by answering the following questions:

Where is the site?
Is it easy to reach?
Does it have existing structures?
If so, how suited are they to housing a server environment?
Specifically, how well does the site support the key design strategies for constructing a productive Data Center?

Remember the prerequisites for your Data Center: it should be robust, modular, flexible, and standardized, and it should intuitively promote good practices by users. The site considerations below bear on each of these.

* Relative Location
* Disaster Recovery Options
* Pre-Existing Infrastructure
* Power Analysis
* Cooling Capabilities
* Structured Cabling
* Amenities and Obstacles

* Clearances
* Weight issues
* Loading dock placement
* Freight elevator specifications
* Miscellaneous problem areas
* Distribution of key systems

Is there enough contiguous floor space to house your Data Center?

How tall are the doorways?
How wide are the halls?
What's the distance from floor to ceiling?

* Weight Issues
* Loading Dock
* Freight Elevators
* Problem Areas
* Distribution of Key Systems

These tips will help you and your start-up organization identify a proper place from which to manage your services. You might also be off-shoring, but a well-built Data Center will serve a greater purpose.

Wednesday, March 10, 2010

Networking - Network Admission Control

One of the latest additions to the area of security is called Network Admission Control (NAC).
NAC is built around the notion that by asking specific questions of an organization's end hosts you can improve the overall security of a network by improving the compliance of end systems to a given admission policy.
But first, some history. People who managed networks back in the days of IPX and Novell NetWare remember a little thing called the "login script." It was useful for mapping drives in DOS, setting various host parameters, and forcing a user to patch his system when something new came out, such as an antivirus update. At the time, users almost always logged in to the network, because file and print services were nearly all you could do with the network. Specialized applications were still run through terminal emulation to mainframes or minis, and nobody could figure out a good reason to connect to the Internet yet. So if you didn't log in, you didn't use the network, plain and simple.

Today, people have been using computers at employers with Windows-based networking for file and print services for close to 10 years, yet can't remember actually logging in at start-up, nor can they remember running a login script for probably five years. Even if they had, the Windows file servers are only one of many things they can reach using IP within these organizations. Whether accessing internal websites, applications, or the public Internet, none of these functions requires an overall network login. Even if they did, login scripts typically run only at start-up. With many offices using laptops and wireless access, sleeping or suspending a system is more common than fully shutting it down. The laptop I'm writing this article on hasn't been "turned off" for well over a week.

This is challenging because compliance matters more today than it did in the old days. Virus and worm attacks—and increasingly spyware—are more common now than ever, and with the mobility of devices, the chance of infection is high as devices move to public networks and then back onto the corporate network. And because these attacks propagate in an automated fashion, the legitimate user represents as big a threat to the network as the bad guy who initiated the attack in the first place: without legitimate systems to aid in the propagation of the attack, the impact on the overall network is significantly reduced.
Beyond compliance, there is the more fundamental question today of basic network admission. The advent of laptops means outside users are increasingly attempting to connect to an organization's resources. Be they contractors or vendors, the ability to differentiate between insiders and outsiders is key to enforcing basic security policies within your organization. Relying on application authentication as the only means of this differentiation is problematic because not all important systems have such controls.

The principal cause of all these problems, and what NAC is attempting to solve, is that tying endpoint compliance or admission to one of many types of network use is challenging because you can't force the user to initiate the compliance process before potentially spreading malware. NAC solves this by moving the compliance event to initial network access. Before the device can do anything else on the network, it must be authenticated and its compliance level, or posture, must be validated.
NAC works like this: endpoints are configured to run either with an agent or without one, with some corresponding loss of compliance accuracy in the agentless case. These agents respond to queries initiated by the network to identify various attributes of their posture. This can include the OS and hot-fix version number, the presence and configuration of a personal firewall, and the .DAT file version running on a system's AV software, as well as when it last ran a full scan. When an endpoint runs without an agent, the system is audited by the network to determine the relative risk of allowing it on the network.

In either approach, once the network determines the system is authorized and clean, it can be allowed on the network as usual. Systems that fail some element of the authorization or compliance checks can be relegated to quarantined access temporarily, or to permanently restricted access (as in the case of a contractor machine). From quarantined access, the entity's network requests can be redirected to a remediation server, where the end station receives instructions to bring itself into compliance. Once brought into compliance, the device can be given proper access.
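The admission flow just described can be sketched as a small decision function. This is an illustrative model, not Cisco's implementation: the posture attribute names and the policy thresholds are invented for the example.

```python
# Hypothetical compliance policy: minimum OS patch level, minimum AV
# .DAT version, and a personal firewall requirement.
POLICY = {"min_os_patch": 8, "av_dat_min": 4500, "firewall_required": True}

def admission_decision(posture, authenticated, insider=True):
    """Return the network's verdict for an endpoint requesting access."""
    if not authenticated:
        return "DENY"
    if not insider:
        return "RESTRICT"          # e.g. a contractor machine: limited access
    compliant = (
        posture.get("os_patch", 0) >= POLICY["min_os_patch"]
        and posture.get("av_dat", 0) >= POLICY["av_dat_min"]
        and (posture.get("firewall_on", False) or not POLICY["firewall_required"])
    )
    # Non-compliant insiders go to quarantine and a remediation server.
    return "ALLOW" if compliant else "QUARANTINE"

print(admission_decision({"os_patch": 9, "av_dat": 4700, "firewall_on": True}, True))   # ALLOW
print(admission_decision({"os_patch": 9, "av_dat": 4100, "firewall_on": True}, True))   # QUARANTINE
print(admission_decision({}, True, insider=False))                                      # RESTRICT
```

In a real deployment the policy server makes this decision and the enforcement device (router, switch, or appliance) carries it out.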
Choosing how to deploy NAC first depends on the overall architecture you wish to use. Some organizations are more interested in a turnkey approach that allows NAC to be overlaid onto an existing network infrastructure. Others want NAC more tightly integrated into the network fabric itself. In either approach, there are several components to any NAC solution.
Network Enforcement Device—The network device has the responsibility to initiate the queries to the end station and then relay the responses to the policy server. Today NAC is supported in Cisco routers running 12.3(8)T or later, as well as the VPN 3000 concentrator running software 4.7 or later. In late summer 2005, this capability will be extended to LAN switches and, following that, WLAN access points. For the turnkey approach, Cisco Clean Access (CCA) appliances can be added to your existing network in either inline or out-of-band mode to perform the end-system checks.

Agent—For both the turnkey and fabric-integrated approach, agent software is available to respond to the queries from the enforcement device. In the turnkey model this agent collects all the information and provides it to the network. In the fabric-integrated model, the agent communicates with Cisco and third-party software to learn posture information by acting like a broker on the host. Once collated, this information is then presented to the network.
Policy Server—The policy server acts as the repository of the posture information required for clean connectivity into the organization. Depending on which deployment model you choose, the policy server either defines compliance policy locally or communicates with third-party policy servers to assess specific credentials. In either case, the policy server makes the final admission decision, which it sends to the network device.

For organizations considering NAC deployment, an important point to remember is that this is relatively new technology. As with any new technology, early adopters will suffer bumps and bruises during deployment. The size of your network can be a big factor here. NAC is already successfully deployed in the small and medium-sized business markets, as well as in specific pockets of larger enterprises. In these environments the variables introduced by NAC can be contained and a smoother rollout can be achieved. Larger organizations will want to take more care in evaluating how NAC fits in with their overall network authentication and endpoint security strategy.
NAC won't stop a determined user bent on introducing viruses and worms into your network. But even if NAC succeeds only in keeping unauthorized systems off the network and in preventing attacks from systems that aren't trying to be active conduits of malicious traffic, it will have performed a valuable service for an organization's IT infrastructure. NAC surpasses most other recent security innovations in importance and utility. Organizations hoping to enforce endpoint admission and compliance policies should give NAC careful attention and begin considering how such a technology would work in their environment.

Web Threats

What Are Web Threats?
A Web threat is any threat that uses the Internet to perform malicious activities. Web threats arrive, spread, deliver additional exploits, and entrench themselves via the Internet; they include Trojan horse programs, spyware, adware, pharming, and other malware. They may also be triggered by a hyperlink or an executable file attachment in a spam email.

Why Should I Care About Web Threats?
Web threats are the fastest-growing threat vector and more pervasive today than ever. These evolving, carefully targeted threats are technologically sophisticated, composed of multiple components, and spawn numerous variants. Because they often require no user intervention at all to infect a computer, the very act of browsing may put your users at risk. Web threats can enter your network in real time, posing immediate danger to your company's data, productivity, reputation, and revenue.

How Should I Protect Against Web Threats?
Today’s URL filtering and content inspection solutions are reactive, protect against known threats, and require static updates. They are not adequately equipped to address a threat landscape that is constantly evolving to evade detection. To combat today’s Web threats, you need a new solution—one that is dynamic, provides continuous updates, and is able to work in combination with URL filtering.
One example, along with other ways of getting more information, follows:

Trend Micro Total Web Threat Protection
Trend Micro provides total Web threat protection with a multi-layered, multi-threat solution that includes innovative technologies deployed at the gateway, in the network, on application servers, and on the client, all working seamlessly together to proactively respond to new and emerging Web threats. New Web reputation technology protects against a variety of Web-based threats, including zero-day attacks, before they even enter your network. It assesses the trustworthiness of a Web site based on an analysis of the domain by:

1. Tracking the lifecycle of hundreds of millions of Web domains
2. Providing continuous updates and live reputation feeds for Web domains
3. Extending market-leading Network Reputation Services anti-spam protection to the Web
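Domain reputation of the kind described above can be sketched as a simple scoring lookup. This is a toy illustration under stated assumptions: the feed contents, field names, scoring weights, and threshold are all hypothetical, and real services track hundreds of millions of domains with continuous updates.

```python
from datetime import date

# Hypothetical local snapshot of a reputation feed (illustrative data only).
REPUTATION_FEED = {
    "example-shop.com": {"first_seen": date(2002, 5, 1), "spam_reports": 0},
    "cheap-pills.example": {"first_seen": date(2010, 3, 25), "spam_reports": 42},
}

def domain_score(domain: str, today: date = date(2010, 3, 30)) -> int:
    """Return a 0-100 trust score; unknown domains score lowest."""
    entry = REPUTATION_FEED.get(domain)
    if entry is None:
        return 0                              # never-seen domain: no trust
    age_days = (today - entry["first_seen"]).days
    score = min(age_days // 30, 60)           # up to 60 points for longevity
    score += 40 if entry["spam_reports"] == 0 else 0
    return min(score, 100)

def allow_request(domain: str, threshold: int = 50) -> bool:
    # Block low-reputation domains before content ever enters the network.
    return domain_score(domain) >= threshold
```

The key idea is that a newly registered domain with spam history is blocked at request time, before any payload reaches the client, which is how reputation filtering can stop zero-day campaigns that signature scanners have not yet seen.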

Learn More About Total Web Threat Protection

1. Read our Web Threat Brochure
2. Listen to our Web Threat Podcast
3. Download our Web Threat White Paper

"Web-traffic-borne menaces like spyware are creating an all-time high concern among organizations…Web-security solutions, such as those from Trend Micro, that enhance and are integrated with traditional antivirus deployments at the gateway, client, server and desktop will become invaluable against web-based malware attacks. "

Tuesday, March 9, 2010

Networking - Ten Tips for IP Telephony Implementation

Often when an organization considers a change that will impact every employee, such as an enterprise-wide IP telephony deployment, the process tends to focus on hardware, software, and getting the technology up to speed as quickly as possible. However, a company's infrastructure is composed not just of hardware and software, but also of people. A successful conversion to IP telephony does not rest solely on viability or reliability. It requires a careful combination of the right products, people, processes, tools, services, best practices, and methodologies, all working in concert.

While the needs of every enterprise are different, some things are universal. Planning, communication, teamwork, and understanding your users' requirements are as important as technical expertise.

Tip 1. Build a Cross-Functional Team
The greatest up-front contributor to a successful, large technology migration is building a cross-functional team that not only has the requisite skills and technical expertise but represents users in every area in the organization impacted by the implementation. This team is responsible for ensuring rapid delivery of the migration that optimizes company investments.
Key members of the team include an executive program sponsor and a steering committee composed of organizational stakeholders; a project team lead; technology experts; security specialists; and subject matter experts in the areas of design and engineering, support, finance, and project management. When global or multinational theaters are involved, include team leads for each theater who will represent the needs of that location and user community.
After skill sets are identified and all representatives chosen, this well-represented team should start off the implementation by clearly defining the objectives and overall goals of the project, and identifying the tasks necessary to achieve those goals. Also begin defining the change management process, at-risk factors, and problem escalation challenges, which will minimize the risks of integrating an enterprise-wide IP telephony solution.

Tip 2. Get Your Users On Board
Resistance to change is normal and should always be anticipated. Managing user expectations will be paramount to making the process run as smoothly as possible. One key way to achieve this is to take away the mystery and uncertainty among the individuals affected through education and open, honest, frequent communication with the stakeholders. Create a plan that gives you the ability to be flexible and proactive. Anticipate the glitches and constantly improve the process along the way, tailoring it to the specific needs of the stakeholders and the users they represent.
In addition to managing users' expectations, an IP telephony implementation typically will require significant business adjustments, staff training and education, and some redesigned business processes and fundamental shifts within the organization. All of these changes must be identified early and continually managed, and change initiatives coordinated and integrated in a timely fashion.

Tip 3. Do Your Homework
Corporate culture is often defined as "the way we do things around here." Culture builds a common language and brings people together, enabling them to work toward a shared goal. Understanding and working with your organization's culture is critical to successfully implementing new technology on a large scale. Does your company encourage risk taking? Is change incorporated often, and does the company embrace it? How has change been introduced and institutionalized in the past? Was the process successful or fraught with problems? Is new technology welcomed or resisted? Do employees solve problems in a team environment? Is communication a top priority? Is yours a virtual company with telecommuters or employees scattered across the globe? What have previous technology deployments taught you about how users prefer to be trained? All of these factors are part of your organizational culture and can influence your ability to integrate a new solution. Take the time to know your users. Do your homework, capitalize on what has worked in the past, and learn from the mistakes of others.
Equally important, it's essential that you have the participation and cooperation of all team members from the outset. A planning workshop will help you to educate and rally cooperation among the team, as well as ensure that the initiative stays true to the business requirements of your organization and meets implementation objectives. The team should work together to plan project deliverables, address solution capabilities, define hardware, software, and security requirements, assign third-party implementation services, identify the project critical path and milestones, and outline the migration strategy.

Tip 4. Ensure That User Requirements Drive Design Requirements
Consider developing a "Voice of the Client" program that consists of client-targeted surveys and focus groups to benchmark and track user-preferred services, products, solutions, and features. Use the survey as a tool to identify critical phone features, validate key business needs, gauge risk tolerance and user discomfort, and identify key functionalities that are paramount to your business. You can also use the survey as an opportunity to incorporate features of the new IP telephony system and to help determine the priority of which features should be enabled.
Survey results provide the design and engineering team with a "report card" that validates their concept of the new design. Missing key design elements is a critical mistake that can be avoided by listening to your users, conducting traffic analysis, performing a network audit and readiness assessment, understanding how the technology will impact your current infrastructure, and familiarizing yourself with the new technology.

Tip 5. Crawl First, Walk Proudly, and Run Aggressively
Your implementation strategy should allow you to go progressively faster as your team gains experience. You don't want to go too fast or, conversely, too slow. The number of employees, the complexity of user requirements, the size of the campus, and how widely all are dispersed will, of course, affect your migration strategy. Like most organizations, you are not dealing with a static environment. There will always be employees changing locations, getting hired or leaving, or exercising their mobility by working on the road, at home, in the field, and in places other than their office desks. To accommodate this ever-changing environment, develop a migration strategy that takes into account all of the variables that can change, alter, or otherwise affect implementation of your new converged voice and data network.
Make sure no one falls through the cracks by dividing your migration into categories. Your categories might be, for example: new employees; existing employees who are moving to a new location; buildings coming online; retrofit of existing buildings; merger- and acquisition-related facilities; or buildings with upcoming PBX lease renewals.
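Dividing the migration can be sketched as a simple bucketing pass over an inventory of users and sites. The record fields and category names here are hypothetical, loosely following the example categories above.

```python
# Hypothetical categorization of users/sites into migration waves so that
# no one falls through the cracks. Field names are illustrative.
def migration_category(record: dict) -> str:
    if record.get("new_hire"):
        return "new employees"
    if record.get("moving"):
        return "employees moving to a new location"
    if record.get("building_status") == "coming online":
        return "buildings coming online"
    if record.get("pbx_lease_renewal_due"):
        return "buildings with upcoming PBX lease renewals"
    return "retrofit of existing buildings"

def plan_waves(records: list) -> dict:
    """Group each record's name under its migration category."""
    waves = {}
    for r in records:
        waves.setdefault(migration_category(r), []).append(r["name"])
    return waves
```

Keeping the categories exhaustive (note the catch-all final branch) is what guarantees every user or building lands in exactly one wave.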

Tip 6. Follow the 80/20 Rule for Implementation
When it comes to actual implementation, the success of your IP telephony migration will depend on several considerations: proper planning, creating consistent standards, identifying at-risk factors, having a ready backup/backout plan, customer service, doing the prep work up front, applying best practices, paying attention to detail, and automating as much of the process as possible. Of all these important factors, planning weighs most heavily. In fact, a winning formula for migration success consists of 80 percent preparation and 20 percent installation. Quite simply, if you focus on your plan first, the implementation will go a lot smoother.

Tip 7. Ensure a Successful Day 2 Handoff
A successful Day 2 handoff requires a well thought out support plan (Day 2 is defined as the time period immediately following cutover of your new IP telephony solution). Four critical components are required to enable efficient operation and responsive support of your converged network: the support team, support processes, support services, and support tools.

Tip 8. Keep Your New Network Clean
Most large enterprises have hundreds of lines and circuits that, through the years, have been forgotten or have simply gone unused. This tip isn't meant to cover all the technical considerations involved in "cleaning out" your network; rather, it's an important reminder to treat your IP telephony implementation as an opportunity to start anew, as well as to clean, groom, and prepare the IP infrastructure. When the implementation team begins the conversion to IP telephony, remove as many unused lines from the PBX as possible, and convert only those lines proven valid. Conduct a final cleanup at the end of the conversion so the implementation team has ample time to carefully review and trace all unidentified analog lines and circuits. Take steps to verify that business-critical lines aren't removed, and make it a point to migrate only what you use, not what you have, to help keep the network clean.
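The "migrate only what you use" rule can be sketched as a filter over the line inventory against call-log activity. This is an illustrative sketch: the data shapes, the 90-day activity window, and the dates are all assumptions, not a real PBX audit tool.

```python
from datetime import date, timedelta

# Hypothetical inventory check: migrate only lines proven active in recent
# call logs; flag everything else for manual tracing before removal.
def lines_to_migrate(inventory, call_logs, today=date(2010, 3, 30),
                     active_within_days=90):
    cutoff = today - timedelta(days=active_within_days)
    migrate, investigate = [], []
    for line in inventory:
        last_used = call_logs.get(line)      # None = never seen in the logs
        if last_used and last_used >= cutoff:
            migrate.append(line)             # proven valid: convert it
        else:
            investigate.append(line)         # trace before removing
    return migrate, investigate
```

Note that inactive lines go to an "investigate" list rather than being dropped outright, matching the advice to trace unidentified lines so business-critical circuits aren't accidentally removed.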

Tip 9. Plan for PBX Lease Returns
At the time of implementation, you might have equipment that is leased, which means your IP telephony implementation schedule may be largely dictated by PBX lease return dates. To keep the massive effort of returning large quantities of leased equipment organized and on schedule, the team leader responsible for the retrofit cleanup should enter all PBX leases into a spreadsheet and develop a project plan to keep the returns on track. Carefully match the equipment list on the original lease agreement against the inventory being returned, create a box-level inventory list, and get a signed receiving list from the vendor.
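The lease reconciliation step above amounts to comparing two multisets: what the lease says you owe versus what is boxed for return. A minimal sketch, with hypothetical item names:

```python
from collections import Counter

# Hypothetical reconciliation of a PBX lease return: compare the equipment
# list on the original lease against the box-level inventory being returned.
def reconcile_lease(leased, returning):
    leased_c, returning_c = Counter(leased), Counter(returning)
    missing = leased_c - returning_c       # still owed to the lessor
    extra = returning_c - leased_c         # boxed but not on the lease
    return dict(missing), dict(extra)
```

Counter subtraction keeps only positive counts, so duplicate handsets and cards are reconciled per unit rather than per model, which is exactly what a signed receiving list needs to reflect.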
In addition to managing the return of all leased equipment, there is also the process of removing all ancillary solutions and systems that are tied to the main PBX. The process of completely decommissioning your main PBX will take longer than you expect; therefore, assemble a project team to address the removal of all applications still running on it.

Tip 10. Look Back, Move Forward, and Prepare for the Future
Whether an IP telephony implementation involves 200 phones or 20,000 phones, careful and comprehensive planning, communication, teamwork, and knowing where the "gotchas" are hiding will avert problems before they arise.
You can see your destination: a fully converged voice and data network with all users migrated to IP telephony. Before celebrating, however, a few important items still require your attention, among them preparing your network for the future.

Change management will be the toughest process to maintain once your new network is in place, but not because of routine changes or software upgrades. Maintaining a strict, yet manageable and scalable, process will be key to your success. Not only will your methods and procedures require a solid execution plan, but so will the standards by which you communicate the plan. Eliminate as many unknowns as possible by documenting your procedures, capturing and incorporating lessons learned, and optimizing your change management process. Make the commitment to continually support your new, dynamic network by reevaluating contingency plans often, conducting ongoing audits of network performance, incorporating new features through software upgrades, and reexamining the contract services that protect, monitor, and support your network.
To prepare for the future, you must be ready for new IP telephony applications. As applications become available, have a system in place to analyze each technology for applicability, test it for feasibility, provide an adoption position, and ensure that all teams are involved, in agreement, and ready to reap the benefits of rolling out another new IP communications application.

Networking - Multiservice Networks

Multiservice networks provide more than one distinct communications service type over the same physical infrastructure. Multiservice implies not only the existence of multiple traffic types within the network, but also the ability of a single network to support all of these applications without compromising quality of service (QoS) for any of them.
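One common way a single network avoids compromising QoS across service types is class-based scheduling, where latency-sensitive traffic drains first. The sketch below shows strict-priority queueing among three illustrative classes; the class names, priority values, and API are all hypothetical, and real multiservice gear typically combines this with weighted fair queueing to avoid starving lower classes.

```python
import heapq

# Hypothetical strict-priority scheduler: voice drains before video,
# and video before best-effort data. Lower number = served first.
PRIORITY = {"voice": 0, "video": 1, "data": 2}

class MultiserviceQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0                        # preserves FIFO within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap,
                       (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        # Always returns the oldest packet of the highest-priority class.
        return heapq.heappop(self._heap)[2]
```

The monotonically increasing sequence number is what keeps the heap stable: two voice packets leave in arrival order, while any waiting voice packet preempts queued video and data.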

You find multiservice networks primarily in the domain of established service providers that are in the long-term business of providing wireline or wireless communication-networking solutions year after year. Characteristically, multiservice networks have a large local or long-distance voice constituency and are traditionally Asynchronous Transfer Mode (ATM) Layer 2-switched in the core with overlays of Layer 2 data and video solutions, such as circuit emulation, Frame Relay, Ethernet, Virtual Private Network (VPN), and other billed services. The initial definition for multiservice networks was a converged ATM and Frame Relay network supporting data in addition to circuit-based voice communications. Recently, next-generation multiservice networks have emerged, adding Ethernet, Layer 3 Internet Protocol (IP), VPNs, and Multiprotocol Label Switching (MPLS) services to the mix. IP and, perhaps more specifically, IP/MPLS core networks are taking center stage as multiservice networks are converging on Layer 2, Layer 3, and higher-layer services.

Many provider networks were built piecemeal: a voice network here, a Frame Relay network there, and an ATM network everywhere as a next-generation voice transporter and converged platform for multiple services. The demand explosion of Internet access in the 1990s sent many providers and operators scrambling to overlay IP capabilities, often creating yet another distinct infrastructure to operate and manage. Neither approach used the existing investment to its best advantage.

This type of response to customer requirements perpetuates purpose-built networks. Purpose-built networks are not an entirely negative venture; they do serve their purpose. However, their architectures often overserve their intended market, lack sufficient modularity and extensibility, and thus become too costly to operate in parallel over the long term. Multiple parallel networks can spawn duplicate and triplicate resources to provision, manage, and maintain. Examples include resource expansion through additional parts sparing, inimitable provisioning and management interfaces, and bandages to the billing systems. Often a new network infrastructure produces an entirely new division of the company, replicating several operational and business functions in its wake.

In the early 1980s, the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) and other standards organizations, such as the ATM Forum, established a series of recommendations for the networking techniques required to implement an intelligent fiber-based network, to solve public switched telephone network (PSTN) limitations in interoperability and internetwork timing, and to carry new services such as digital voice and data.

The new era of networking is based on increasing opportunity through service pull, rather than through a particular technology push that requires its own purpose-built network infrastructure. Positioning networks to support the service pull of IP, while operationally converging multiple streams of voice, video, and IP-integrated data, is the new direction of multiservice network architecture. In the face of competitive pressures and service substitution, next-generation multiservice networks are not only a fresh direction but also an imperative path toward optimizing investment and expense.