Thursday, March 10, 2011

Synchronize time throughout your entire Windows network

It is important to synchronize the time on your network and devices. We've previously reviewed the different types of timing sources and looked at methods you can use to coordinate the time on your network security devices.

However, it's not enough to simply synchronize the time on your network devices—this effort should extend all the way to the desktop. Applying a single, consistent time source throughout your network can boost both network efficiency and security.

Synchronizing time on your Windows domain requires following the Active Directory domain hierarchy to find a reliable time source for your entire domain. In a Windows Server 2003 Active Directory forest, the server that holds the primary domain controller (PDC) emulator role acts as the default time source for your entire network.

Each workstation and server in this network will try to locate a time source for synchronization. Using an internal algorithm designed to reduce network traffic, systems will make up to six attempts to find a time source. Here's a look at the order of these attempts:

1. Parent domain controller (on-site)
2. Local domain controller (on-site)
3. Local PDC emulator (on-site)
4. Parent domain controller (off-site)
5. Local domain controller (off-site)
6. Local PDC emulator (off-site)

To ensure that your servers are finding the proper time, you must configure your PDC emulator to receive the time from a valid and accurate time source. To configure this role, follow these steps:

Log on to the domain controller.
Enter the following at the command line:

W32tm /config /manualpeerlist:<peers> /syncfromflags:manual

where <peers> is a space-delimited list of DNS names and/or IP addresses. When specifying multiple time servers, enclose the list in quotation marks.

Update the Windows Time Service configuration. At the command line, you can either enter W32tm /config /update, or you can enter the following:

Net stop w32time
Net start w32time
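
For example, to point the PDC emulator at two external NTP servers and then force an immediate synchronization, the full sequence might look like the following at an elevated command prompt (the pool.ntp.org names are only placeholders; substitute your organization's approved time source):

W32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual
W32tm /config /update
W32tm /resync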


If a system isn't a member of a domain, you must manually configure it to synchronize with a specified time source. Follow these steps:

Go to Start | Control Panel, and double-click Date And Time.

On the Internet Time tab, select a time server from the drop-down list, or enter the DNS name of your network's internal time source.
Click Update Now, click Apply, and click OK.
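
If you'd rather script this (for example, across several standalone machines), a roughly equivalent configuration can be applied from an elevated command prompt with w32tm; the time server shown below is just an example:

w32tm /config /manualpeerlist:time.windows.com /syncfromflags:manual /update
w32tm /resync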

Note: Make sure that any access control lists on your network allow UDP port 123 traffic between your systems and the selected time source. For more information, see Microsoft's Windows Time Service Tools and Settings documentation.
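
One quick way to confirm that UDP port 123 traffic is actually getting through is to have the Windows Time service poll the source directly and report the offsets it sees; substitute your own time source for the example server below:

w32tm /stripchart /computer:time.windows.com /samples:3 /dataonly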

Five tips for deciding whether to virtualize a server

Not all servers are suited for virtualization. Be sure you consider these possible deal-killers before you try to virtualize a particular server.

Even though server virtualization is all the rage these days, some servers simply aren’t good candidates for virtualization. Before you virtualize a server, you need to think about several things. Here are a few tips that will help you determine whether it makes sense to virtualize a physical server.


1. Take a hardware inventory
If you’re thinking about virtualizing one of your physical servers, I recommend that you begin by performing a hardware inventory of the server. You need to find out up front whether the server has any specialized hardware that can’t be replicated in the virtual world.

Here’s a classic example: many years ago, some software publishers used hardware dongles as copy-protection devices. In most cases, these dongles plugged into parallel ports, which don’t even exist on modern servers. If you have a server running a legacy application that depends on such a copy-protection device, you probably won’t be able to virtualize that server.
The same thing goes for servers that are running applications that require USB devices. Most virtualization platforms will not allow virtual machines to utilize USB devices, which would be a big problem for an application that depends on one.
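
As a rough starting point for that inventory, you can list the devices Windows knows about with a quick WMI query from PowerShell. This is only a sketch (the server name is made up) and it won't catch everything, but it will surface parallel-port and USB devices worth investigating:

# List plug-and-play devices on the candidate server (hypothetical name)
Get-WmiObject -Class Win32_PnPEntity -ComputerName OldServer01 |
    Sort-Object Name |
    Select-Object Name, PNPDeviceID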
2. Take a software inventory
You should also take a full software inventory of the server before attempting to virtualize it. In a virtualized environment, all the virtual servers run on a host server. This host server has a finite pool of hardware resources that must be shared among all the virtual machines that are running on the server as well as by the host operating system.
That being the case, you need to know what software is present on the server so that you can determine what system resources that software requires. Remember, an application’s minimum system requirements do not change just because the application is suddenly running on virtual hardware. You still have to provide the server with the same hardware resources it would require if it were running on a physical box.
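
One rough way to capture that software list is to read the uninstall keys from the registry on the candidate server, which is quicker and less invasive than querying Win32_Product; this is only a sketch run locally in PowerShell (on 64-bit systems, 32-bit applications also appear under the Wow6432Node branch):

# Enumerate installed applications from the uninstall registry keys
Get-ItemProperty 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName } |
    Sort-Object DisplayName |
    Select-Object DisplayName, DisplayVersion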


3. Benchmark the system’s performance
If you are reasonably sure that you’re going to be able to virtualize the server in question, you need to benchmark the system’s performance. After it has been virtualized, the users will be expecting the server to perform at least as well as it does now. The only way you can objectively compare the server’s post-virtualization performance against the performance that was being delivered when the server was running on a dedicated physical box is to use the Performance Monitor to benchmark the system’s performance both before and after the server has been virtualized. It’s also a good idea to avoid over-allocating resources on the host server so that you can allocate more resources to a virtual server if its performance comes up short.
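
For example, you could log a handful of core counters to a CSV file for an hour during a typical workday, then repeat the same capture after the migration for an apples-to-apples comparison; the counters, interval, and sample count below are only suggestions:

typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 15 -sc 240 -f CSV -o baseline.csv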


4. Check the support policy
Before you virtualize a server, check the support policy for all the software that is running on the server. Some software publishers do not support running certain applications on virtual hardware.
Microsoft Exchange is one example of this. Microsoft does not support running the Unified Messaging role in Exchange Server 2007 or Exchange Server 2010 on a virtual server. It doesn’t support running Exchange Server 2003 on virtual hardware, either. I have to admit that I have run Exchange Server 2003 and the Exchange Server 2007 Unified Messaging role on a virtual server in a lab environment, and that seems to work fine. Even so, I would never do this in a production environment because you never want to run a configuration on a production server that puts the server into an unsupported state.


5. Perform a trial virtualization
Finally, I recommend performing a trial virtualization. Make a full backup of the server you’re planning to virtualize and restore the backup to a host server that’s running in an isolated lab environment. That way, you can get a feel for any issues you may encounter when you virtualize the server for real.
Although setting up such a lab environment sounds simple, you may also have to perform a trial virtualization of some of your other servers. For example, you might need a domain controller and a DNS server in your lab environment before you can even test whether the server you’re thinking about virtualizing functions properly in a virtual server environment.
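
On Windows Server 2008 and later, the in-box Windows Server Backup tool is one way to take that full image for the trial restore; the backup target drive shown here is only an example:

wbadmin start backup -backupTarget:E: -allCritical -vssFull -quiet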

Saturday, June 5, 2010

Cloud Computing

Cloud computing is Internet-based computing, whereby shared resources, software and information are provided to computers and other devices on-demand, like the electricity grid.
It is a paradigm shift following the shift from mainframe to client–server computing that preceded it in the early 1980s. Details are abstracted from users, who no longer need expertise in, or control over, the technology infrastructure "in the cloud" that supports them.[1] Cloud computing describes a new supplement, consumption, and delivery model for IT services based on the Internet, and it typically involves the provision of dynamically scalable and often virtualized resources as a service over the Internet.[2][3] It is a byproduct and consequence of the ease of access to remote computing sites provided by the Internet.[4]
The term "cloud" is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network,[5] and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents.[6] Typical cloud computing providers deliver common business applications online which are accessed from another web service or software like a web browser, while the software and data are stored on servers.


Most cloud computing infrastructure consists of reliable services delivered through data centers and built on servers. Clouds often appear as single points of access for all consumers' computing needs. Commercial offerings are generally expected to meet quality of service (QoS) requirements of customers and typically offer SLAs.[7] The major cloud service providers include Savvis, Amazon, Google and Microsoft.[8]

Virtualization - A Sneak Peek

Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device or network resources.
You probably know a little about virtualization if you have ever divided your hard drive into different partitions. A partition is the logical division of a hard disk drive to create, in effect, two separate hard drives.
Operating system virtualization is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago, allowing administrators to avoid wasting expensive processing power.
In 2005, virtualization software was adopted faster than anyone imagined, including the experts. Virtualization is making inroads in three areas of IT: network virtualization, storage virtualization, and server virtualization.
  • Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which can be assigned (or reassigned) to a particular server or device in real time. The idea is that virtualization disguises the true complexity of the network by separating it into manageable parts, much like your partitioned hard drive makes it easier to manage your files.
  • Storage virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in storage area networks (SANs).
  • Server virtualization is the masking of server resources (including the number and identity of individual physical servers, processors, and operating systems) from server users. The intention is to spare the user from having to understand and manage complicated details of server resources while increasing resource sharing and utilization and maintaining the capacity to expand later.

Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware utilization.

10 things you should know about virtualization

Virtualization has been a major buzzword in the IT world for a few years. Now the buzz is getting bigger, as we draw close to the release of Windows Server 2008 on March 1. Microsoft has promised that the Hyper-V virtualization component (formerly called Viridian) will follow within 180 days. Of course, Microsoft already has Virtual Server and Virtual PC, as well as stiff competition on the virtualization front from VMWare and Citrix/XenSource.
With all these options, taking the plunge into virtualization can be a big and confusing step.

Here are a few things you should know about virtualization and virtualization software before you start to plan a deployment.


#1: Virtualization is a broad term with many meanings


Virtualization software can be used for a number of purposes. Server consolidation (running multiple logical servers on a single physical machine) is a popular way to save money on hardware costs and make backup and administration easier, and that’s what we’re primarily focused on in this article. However, other uses include:

  • Desktop virtualization, for running client operating systems in a VM for training purposes or for support of legacy software or hardware.

  • Virtual testing environments, which provide a cost-effective way to test new software, patches, etc., before rolling them out on your production network.

  • Presentation virtualization, by which you can run an application in one location and control it from another, with processing being done on a server and only graphics and end-user I/O handled at the client end.

  • Application virtualization, which separates the application configuration layer from the operating system so that applications can be run on client machines without being installed.

  • Storage virtualization, whereby a SAN solution is used to provide storage for virtual servers, rather than depending on the hard disks in the physical server.



#2: Not all VM software is created equal


An array of virtualization programs is available, and the one(s) you need depend on exactly what you need to do. You might want to run a virtual machine with a different OS on top of your desktop operating system, either to try out a new OS or because you have some applications that won’t run in your primary OS.
For example, if you’re using Windows XP as your desktop OS, you could install Vista in a VM to get to know its features. Or if you’re running Vista but you have an application you occasionally need to use that isn’t compatible with it, you could run XP in a VM with that application installed. For simple uses like this, a low-cost or free VM program, such as VMWare Workstation or Microsoft’s Virtual PC, will work fine.
On the other hand, if you need to consolidate several servers and thus need maximum scalability and security, along with sophisticated management features, you should use a more robust VM solution, such as VMWare’s ESX Server, Microsoft’s Virtual Server or (when it’s available) the Hyper-V role in Windows Server 2008. For relatively simple server virtualization scenarios, you can use the free VMWare Server.


#3: Check licensing requirements first!


As far as licensing is concerned, most software vendors consider a VM to be no different from a physical computer. In other words, you’ll still need a software license for every instance of the operating system or application you install, whether on a separate physical machine or in a VM on the same machine.
There may also be restrictions in the EULA of either the guest or host OS regarding virtualization. For example, when Windows Vista was released, the licensing agreements for the Home Basic and Home Premium versions prohibited running those operating systems in VMs, but Microsoft has since changed those licensing terms in response to customer input.
Windows Server 2008’s EULA provides for a certain number of virtual images that can be run on the OS, depending on the edition. This ranges from none on Web edition to one on Standard, four on Enterprise, and an unlimited number on Datacenter and Itanium editions.


#4: Be sure your applications are supported


Another issue that needs to be addressed up front is whether the application vendor will support running its software in a virtual machine. Because VMs use emulated generic hardware and don’t provide access to the real hardware, applications running in VMs may not be able to utilize the full power of the installed video card, for example, or may not be able to connect to some of the peripherals connected to the host OS.


#5: Virtualization goes beyond Windows


There are many virtualization technologies and some of them run on operating systems other than Windows. You can also run non-Windows guest operating systems in a VM on a Windows host machine. VMWare can run on Linux, and Microsoft previously made a version of Virtual PC for Macintosh (but did not port it to the Intel-based Macs). Parallels Desktop provides support for running Windows VMs on Mac OS X. Parallels Workstation supports many versions of Windows and Linux as both host and guest. Parallels Virtuozzo is a server virtualization option available in both Linux and Windows versions. Other virtualization solutions include:

  • Xen (now owned by Citrix), which is one of the most popular hypervisor solutions for Linux.

  • Q, an open source program based on the QEMU open source emulation software, for running Windows or Linux on a Mac.

  • OpenVZ, for creating virtual servers in the Linux environment.



#6: Virtualization can increase security


Isolating server roles in separate virtual machines instead of running many server applications on the same operating system instance can provide added security. You can also set up a VM to create an isolated environment (a “sandbox”), where you can run applications that might pose a security risk.
Virtual machines are also commonly used for creating “honeypots” or “honeynets.” These are systems or entire networks set up to emulate a production environment with the intention of attracting attackers (and at the same time, diverting them away from the real production resources).


#7: Virtualization can increase availability and aid in disaster recovery


Backing up virtual machine images and restoring them is much easier and faster than traditional disaster recovery methods that require reinstalling the operating system and applications and then restoring data. The VM can be restored to the same physical machine or to a different one in case of hardware failure. Less downtime means higher availability and greater worker productivity.


#8: VMs need more resources


It may seem obvious, but the more virtual machines you want to run simultaneously, the more hardware resources you’ll need on that machine. Each running VM and its guest OS and applications will use RAM and processor cycles, so you’ll need large amounts of memory and one or more fast processors to be able to allocate the proper resources to each VM.
To run multiple resource-hungry servers on one machine, you’ll need a machine with hardware that’s capable of supporting multiple processors and large amounts of RAM and you must be running a host OS that can handle these.


#9: 64 bits are better than 32


For server virtualization, consider deploying a 64-bit host operating system. 64-bit processors support a larger memory address space, and Windows 64-bit operating systems support much larger amounts of RAM (and in some cases, more processors) than their 32-bit counterparts. If you plan to use Windows Server 2008’s Hyper-V role for virtualization, you have no choice. It will be available only in the x64 versions of the OS.


#10: Many resources are available for planning your virtualization deployment


Virtualization is a huge topic, and this article is only meant to provide an overview of your options. Luckily, there are many resources on the Web that can help you understand virtualization concepts and provide more information about specific virtualization products.

Top 10 Reasons to Upgrade to Windows Server 2008 R2

Windows Server® 2008 R2 is the newest Windows Server operating system from Microsoft. Designed to help organizations reduce operating costs and increase efficiencies, Windows Server 2008 R2 provides enhanced management control over resources across the enterprise. It is designed to provide better energy efficiency and performance by reducing power consumption and lowering overhead costs. It also helps provide improved branch office capabilities, exciting new remote access experiences, and streamlined server management, and it expands the Microsoft virtualization strategy for both client and server computers.
#1. Powerful Hardware and Scaling Features
Windows Server 2008 R2 was designed to perform as well as or better than Windows Server 2008 on the same hardware. In addition, R2 is the first Windows Server operating system to move solely to a 64-bit architecture.
Windows Server 2008 R2 also has several CPU-specific enhancements. First, this version expands CPU support to enable customers to run with up to 256 logical processors. R2 also supports Second Level Address Translation (SLAT), which enables R2 to take advantage of the Extended Page Tables feature found in Intel’s latest CPUs as well as the similar Nested Page Tables feature found in the latest AMD processors. The combination enables R2 servers to run with much improved memory management.
Components of Windows Server 2008 R2 have received hardware boosts as well. Hyper-V in Windows Server 2008 R2 can now access up to 64 logical CPUs on host computers. This capability not only takes advantage of new multicore systems, it also means greater virtual machine consolidation ratios per physical host.
#2. Reduced Power Consumption
Windows Server 2008 introduced a 'balanced' power policy, which monitors the utilization level of the processors on the server and dynamically adjusts the processor performance states to limit power to the needs of the workload. Windows Server 2008 R2 enhances this power saving feature by adding more granular abilities to manage and monitor server and server CPU power consumption, as well as extending this ability to the desktop via new power-oriented Group Policy settings.
Active Directory® Domain Services Group Policy in Windows Server 2008 already gave administrators a certain amount of control over power management on client PCs. These capabilities are enhanced in Windows Server 2008 R2 and Windows® 7 to provide even more precise control in more deployment scenarios for even greater potential savings.
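
Even without Group Policy, you can check and change the active power plan on an individual machine with the built-in powercfg utility; the first command below lists the available plans and marks the active one, and the second switches to the balanced plan:

powercfg -list
powercfg -setactive SCHEME_BALANCED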
#3. Hyper-V™ in Windows Server 2008 R2
Windows Server 2008 R2 also holds the much-anticipated update to Microsoft’s virtualization technology, Hyper-V™. The new Hyper-V™ was designed both to augment existing virtual machine management and to address specific IT challenges, especially around server migration.
Hyper-V™ is an enabling technology for one of Windows Server 2008 R2’s marquee features, Live Migration. With Hyper-V version 1.0, Windows Server 2008 was capable of Quick Migration, which could move VMs between physical hosts with only a few seconds of downtime. Still, those few seconds were enough to cause difficulties in certain scenarios, especially those including client connections to VM-hosted servers. With Live Migration, moves between physical targets happen in milliseconds, which means migration operations become invisible to connected users. Making this even easier is a new feature called processor compatibility mode, which allows administrators to migrate machines between different generations of same-brand CPUs.
Customers employing System Center Virtual Machine Manager for Hyper-V will also enjoy additional management and orchestration scenarios, including a new VM-oriented Performance and Resource Optimization feature and updated support for managing failover clusters.
The new Hyper-V™ also has core performance enhancements, including the previously mentioned ability to take advantage of up to 64 logical processors and host support for Second Level Address Translation (SLAT). Finally, VMs can add and remove storage without requiring a reboot, and Windows Server 2008 R2 can boot directly from a VHD.
#4. Reduced Desktop Costs with VDI
Much of the interest in virtualization solutions is in the server world. However, equally exciting advances are being made in presentation virtualization, where processing happens on a server optimized for capacity and availability while graphics, keyboard, mouse, and other user I/O functions are handled at the user’s desktop.
Windows Server 2008 R2 contains enhanced Virtual Desktop Infrastructure (VDI) technology, which extends the functionality of Terminal Services (now Remote Desktop Services) to deliver business programs to employees’ remote desktops. With VDI, programs that Remote Desktop Services sends to a computer are now available on the Start menu right alongside programs that are locally installed. This approach provides improved desktop virtualization and better application virtualization.
Desktop virtualization will benefit from features including improved personalization management, a near-invisible integration of virtualized desktops and applications in Windows 7, better audio and graphics performance, a seriously cool Web access update and more. VDI provides more efficient use of virtualized resources and better integration with local peripheral hardware as well as powerful new virtual management features.
#5. Easier and More Efficient Server Management
Although increasing the capabilities of your server operating system is always a good thing, the perceived downside has always been additional complexity and workload for day-to-day server managers. Windows Server 2008 R2 specifically addresses this problem with lots of work evident across all of its management-oriented consoles. Features in these tools include:
• Improved data center power management and monitoring, as described earlier
• Improved remote administration, including a remotely-installable Server Manager
• Improved identity management features via the updated and simplified Active Directory Domain Services and Active Directory Federated Services
Windows Server 2008 R2 also improves on the popular PowerShell feature introduced in Windows Server 2008. PowerShell 2.0 significantly enhances the earlier version with the inclusion of more than 240 new pre-built cmdlets, as well as a new graphical user interface (GUI) that adds professional-level development features for creating new cmdlets. The new GUI includes syntax coloring, new production script debugging capabilities, and new testing tools.
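
As a small illustration of what PowerShell 2.0 adds for day-to-day administration, the sketch below uses the new remoting feature to list the installed roles and features on a remote R2 server; the server name is made up, and it assumes remoting has already been enabled on the target (Enable-PSRemoting):

Invoke-Command -ComputerName CONTOSO-SRV01 -ScriptBlock {
    Import-Module ServerManager          # module included with Windows Server 2008 R2
    Get-WindowsFeature | Where-Object { $_.Installed }
}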
#6. Ubiquitous Remote Access
Today’s mobile workforce is increasing the demand on IT to provide remote access to corporate resources. However, managing remote computers is an ongoing challenge, with low wide area network (WAN) bandwidth and sporadic connection and re-connection processes interfering with lengthier desktop management tasks such as Group Policy changes and up-to-date patching.
Windows Server 2008 R2 introduces a new type of connectivity called DirectAccess, a powerful way for remote users to seamlessly access corporate resources without requiring a traditional VPN connection and client software. Using technologies that shipped in Windows Server 2008, Microsoft has added simple management wizards that enable administrators to configure IPsec and IPv6 across both R2 and Windows 7 clients to enable the basic DirectAccess connection, and then augment that connection with additional R2 management and security tools, including management policies and NAP.
With DirectAccess, every user is considered remote all of the time. Users are no longer required to distinguish between local and remote connections. DirectAccess handles all of these distinctions in the background. IT professionals retain precise access control and full perimeter security, helping to ease both desktop security and management headaches on both sides of the connection.
#7. Improved Branch Office Performance and Management
Many branch office IT architectures have relatively low bandwidth. Slow WAN links impact the productivity of branch office employees waiting to access content from the main office, and costs for branch office bandwidth allocation can amount to as much as 33% of overall corporate IT spending. To address this challenge, Windows Server 2008 R2 introduces a feature called BranchCache™, which reduces WAN utilization and improves the responsiveness of network applications.
With BranchCache™, clients who request access to data on the organization's network are sent directions to the file on the local (branch office) network if the file has ever been requested there before. If the file is stored locally, those clients get immediate high-speed access. Such files can be stored either on a local BranchCache™ server for larger branch offices or simply on local Windows 7 PCs.
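
On the Windows 7 client side, BranchCache can be switched on with a single netsh command; for a small branch with no dedicated cache server, distributed mode is the simpler option. A minimal sketch of that setup, followed by a status check:

netsh branchcache set service mode=DISTRIBUTED
netsh branchcache show status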
#8. Simplified Management for SMBs
With Windows Server 2008 R2, Microsoft is focusing more attention on the SMB and mid-market customer. This new focus provides these customers with a rich landscape of Microsoft product offerings, from Small Business Server up to Windows Essential Business Server and now Windows Server 2008 Standard. All SKUs are being outfitted with new management tools to make SMB IT Pro life easier.
Active Directory’s new Active Directory Administrative Center is one example: all those disparate management GUIs are now hosted in a single interface, all based on PowerShell. Additionally, there are the Best Practices Analyzers, which Microsoft has extended to every server role to keep all your server configurations in sync with the latest know-how.
And last but not least, there’s the new Windows Server Backup utility. Long a second-class citizen, this updated, in-the-box backup app has been significantly upgraded to include more granular support for designing backup jobs, including support for system state operations, and it has been optimized to run faster and use less disk space.
#9. The Strongest Web and Application Server To Date
Windows Server 2008 R2 includes many updates that make it the best Windows Server application platform yet, but one of the most important is the new Internet Information Services 7.5 (IIS 7.5).
The updated Web server includes features that streamline management by extending IIS Manager, implementing the IIS PowerShell provider, and taking advantage of .NET on Server Core. IIS 7.5 also integrates new support and troubleshooting features, including configuration logging and a dedicated Best Practices Analyzer. Last, Microsoft has integrated several popular optional extensions associated with Windows Server 2008, including URLScan 3.0 (now known as the Request Filtering module).
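
As a quick taste of the IIS PowerShell provider mentioned above, the sketch below creates a new site and then lists all sites through the IIS: drive; the site name, port, and physical path are placeholders (the folder must already exist):

Import-Module WebAdministration        # IIS 7.5 PowerShell provider and cmdlets
New-Website -Name "DemoSite" -Port 8080 -PhysicalPath "C:\inetpub\demosite"
Get-ChildItem IIS:\Sites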

#10. Managing Data, Not Just Managing Storage

Managing storage isn’t just about managing disks. Storage volume is increasing at a 51% compounded annual growth rate between 2008 and 2012, according to IDC. To keep pace and stay competitive, organizations must begin managing data, not just disks. Windows Server 2008 R2 gives IT administrators the tools for precisely this kind of initiative with the new File Classification Infrastructure (FCI). This new feature builds an extensible and automated classification mechanism on top of existing shared file architectures; this enables IT administrators to direct specific actions for specific files based on entirely customizable classifications. FCI is also extensible to partners, which means Windows Server 2008 R2 users can expect to see additional capabilities around FCI being delivered by ISVs in the near future.