When building a PC image for an enterprise roll-out, your goal should be a consistent experience across all PC variants, with as few discrepancies as possible. The image should allow for quick recovery or deployment even when experienced technical staff are unavailable, and, where possible, even remotely.
The goal is not only to minimize the time it takes to build a PC from new, “out of the box,” but also to provide a tool for recovering a system in the event of a failure or infection, minimizing downtime for the end-user. All of these mandates should, of course, translate into a reduced cost of performing these actions.
Some key points to remember when building an image: build it correctly, using SysPrep and the necessary precautions, because you may be distributing it on a DVD (or more than one), or (preferably) on a USB key. Ideally your image will be small enough to fit on a reasonably-sized (affordable) USB key, but as much as you want to distribute the keys quickly, there are considerations to keep in mind. Once a PC is imaged, how do you get it on the domain? What access does a newly imaged PC have on the network, and is the image itself a security risk (does it contain a password or network access that might put your company at risk if found)?
Licensing is also a concern. Symantec Ghost, Acronis TrueImage, and other such tools are licensed on a per-PC basis. There are alternatives to licensed products, but they may require some futzing around. Clonezilla and the FOG Project (Free Open-Source Ghost) may be your best solution, but the key is reliability and efficiency. The less reliable a solution is, the less efficient it is. You want a solution you can trust, and one that even an end-user can walk through without significant risk to data, usability, or extended downtime.
If you’re in a large organisation you may not be able to afford the liabilities of an open source solution; the simplicity and support of an enterprise product would be best. If your organisation is smaller (fewer than 200 PCs), you may have the time to reap the cost benefits of the no-cost option. Choose carefully, but make sure that you are licensed for whatever you finally select.
As I said before, use SysPrep. It ensures that your machines do not become replicas of each other, causing nightmares of PCs with identity crises. The SID is configured at the initial install and is unique to each PC; simply cloning a PC and duplicating it is asking for trouble down the road. Beyond this level of duplication, you’ll want to consider which software is pre-installed in the image versus installed after a newly imaged PC is configured. Some applications may base their interaction with the server on the SID or another identifying electronic stamp. While pre-configuring the PC with Microsoft Office may work, pre-installing your anti-virus solution may be ill-advised, especially if it is managed from an enterprise server.
One image for many machine types is preferred, but this is a compromise. While Windows 7 accommodates this much more easily than Windows XP, the reality is that you may end up spending more time in the testing phase if you make one image that works for all of your systems. If you refresh your PCs and laptops on a yearly basis (i.e. 1/3 of all systems annually), you could end up with 6 or 8 different PC models to keep in sync, ensuring the right drivers are installed for each. Yes, you could write a scripted installer that installs the correct drivers and applications based on the machine type/model, and this can work very well, but you’ll spend a significant amount of time testing and tweaking these install steps, and even driver versions, because subtle differences creep into the mix over 4 years. Also, new systems now come with Windows 7, yet many companies are still deploying Windows XP. Which OS you load is only part of the question: which applications (e.g. CD-burning software) do you pre-load, and which do you load once the PC type has been determined? You can only load what was included on (licensed for) that PC originally. Let’s just hope you’ve stuck with one brand of hardware.
The installer script, a helper of sorts, can be developed using many possible tools, but you will need information found using WMI, otherwise known as Windows Management Instrumentation. This API will provide a multitude of information, including the critical manufacturer and model, which will assist in determining which drivers and applications to install and in preparing the machine for first use. The more automation you build in, the simpler the user experience, and this is key in large organisations. Remember to keep the size of your organisation in mind when setting the scope of automation. If you have a few hundred PCs in one or two locations, you may not need as much automation because your Desktop Support staff will be on hand to assist; but if you have thousands of PCs you will want to leverage automation to reduce the cost of hand-holding when a new or re-imaged PC is deployed.
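As a rough illustration of pulling the manufacturer and model from WMI, a script might shell out to the `wmic` command-line tool (or PowerShell's `Get-CimInstance`) and parse its CSV output. The sketch below is an assumption about how you might structure this in Python; the `wmic` call itself only works on Windows, so the parsing is separated out and demonstrated on a sample string:

```python
import csv
import io
import subprocess

def query_wmi_csv(wmi_class, properties):
    """Query a WMI class via wmic and return rows as dicts (Windows only)."""
    cmd = ["wmic", wmi_class, "get", ",".join(properties), "/format:csv"]
    output = subprocess.check_output(cmd, text=True)
    return parse_wmic_csv(output)

def parse_wmic_csv(text):
    """Parse wmic /format:csv output: a Node column plus the requested properties."""
    # wmic pads its CSV output with blank lines; strip them before parsing.
    lines = [ln for ln in text.splitlines() if ln.strip()]
    reader = csv.DictReader(io.StringIO("\n".join(lines)))
    return list(reader)

# Sample output for Win32_ComputerSystem (manufacturer/model are illustrative).
sample = "Node,Manufacturer,Model\nPC01,Dell Inc.,Latitude E6410\n"
rows = parse_wmic_csv(sample)
model = rows[0]["Model"]  # e.g. used to pick the right driver pack
```

On a live system you would call `query_wmi_csv("computersystem", ["Manufacturer", "Model"])` and branch your driver/application selection on the result.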
What can I Script?
Drivers and applications can be installed based on the WMI results, but sometimes the order matters. Silent installs, and monitoring the progress of each install, will be critical. This can be a time-consuming part of the process.
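One hedged sketch of an ordered, monitored install sequence: run each installer in turn, check its exit code, and stop at the first failure (since later steps may depend on earlier ones). The step names and commands below are placeholders; real entries would be vendor installers with their own silent switches (e.g. `/S` or `/quiet`, which vary per vendor):

```python
import subprocess
import sys

def run_install_sequence(steps):
    """Run each installer in order; stop at the first non-zero exit code.

    steps: list of (name, argv) tuples.
    Returns (completed_names, failed_name_or_None).
    """
    completed = []
    for name, argv in steps:
        result = subprocess.run(argv)
        if result.returncode != 0:
            # Order matters: abort so dependent steps don't run on a bad base.
            return completed, name
        completed.append(name)
    return completed, None

# Illustrative stand-ins; a real sequence would be installer paths + switches.
steps = [
    ("chipset driver", [sys.executable, "-c", "pass"]),
    ("video driver",   [sys.executable, "-c", "pass"]),
]
done, failed = run_install_sequence(steps)
```

Logging each step's name and exit code to a file alongside this loop gives Desktop Support something to look at when an install stalls.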
Naming the PC can be automated if your company’s PC naming convention can be derived on the fly. For example, a PC’s serial number, perhaps hashed to a unique ID when it’s too long, can be combined with a short prefix. You can also ask the user for identifying information, even requesting a Windows AD (via LDAP) login and location information. If the user cannot log in, they cannot proceed, but this is a double-edged sword: what if someone is off-site?
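The serial-plus-prefix idea might look like the sketch below. The `CORP` prefix and the 15-character cap (the NetBIOS name limit) are assumptions; when the serial fits, it is used directly, and when it is too long, it is hashed down to a short, stable ID:

```python
import hashlib

def derive_pc_name(serial, prefix="CORP", max_len=15):
    """Derive a NetBIOS-safe PC name (<= max_len chars) from a serial number.

    The prefix and hash length are illustrative; adjust to your convention.
    """
    serial = serial.strip().upper()
    name = prefix + "-" + serial
    if len(name) <= max_len:
        return name
    # Too long: hash the serial to a short hex ID. The hash is deterministic,
    # so re-imaging the same PC always yields the same name.
    digest = hashlib.sha1(serial.encode()).hexdigest().upper()
    short = digest[: max_len - len(prefix) - 1]
    return prefix + "-" + short
```

Determinism is the point of hashing rather than generating a random suffix: the name survives a re-image without manual bookkeeping.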
Location, location, location: if your network is well organised you can determine location based upon the IP address; you can equally conclude that you are not on your network at all and respond accordingly. If you have many mobile workers, you may look to a public-facing web service that can help with identification and authorisation; this can also accommodate automating the joining of the PC to the domain. This you will want to guard carefully: not just any PC should be allowed on your network, or domain.
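Mapping an IP address to a site is straightforward if your subnets are allocated per location. A minimal sketch, assuming a hypothetical subnet-to-site table (real values would come from your network documentation or IPAM):

```python
import ipaddress

# Illustrative site map: subnet -> site code. Real values come from your IPAM.
SITE_SUBNETS = {
    "10.1.0.0/16": "NYC",
    "10.2.0.0/16": "LON",
}

def site_for_ip(ip):
    """Return the site code for an address, or None if off-network."""
    addr = ipaddress.ip_address(ip)
    for subnet, site in SITE_SUBNETS.items():
        if addr in ipaddress.ip_network(subnet):
            return site
    # Not on a known corporate subnet: treat as off-site and fall back to
    # asking the user, or to a public-facing service.
    return None
```

The `None` branch is the interesting one: it is how the script concludes "we are not on our network" and switches to the off-site path.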
Asset management: you have access to the make, model, and the user who is setting up the PC, and if you have access to a web or SQL server during this process you can capture these details while online. If you’re not online, you can store the information locally (a good plan anyway) in a file or the registry. You can poll this information later, and, since PCs move around without your knowledge, you may wish to have a prompt pop up occasionally to update the ownership details.
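A sketch of the local-capture fallback, assuming a simple JSON file as the store (the registry would work equally well on Windows; the field names here are illustrative):

```python
import json
import os
from datetime import datetime, timezone

def save_asset_record(path, make, model, serial, user):
    """Write the capture locally; a later poll can sync it to the server."""
    record = {
        "make": make,
        "model": model,
        "serial": serial,
        "user": user,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "w") as fh:
        json.dump(record, fh, indent=2)
    return record

def load_asset_record(path):
    """Return the stored record, or None if nothing was captured yet."""
    if not os.path.exists(path):
        return None
    with open(path) as fh:
        return json.load(fh)
```

A scheduled task could call `load_asset_record` periodically and push the record to your asset database once the PC is back online.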
Encryption tools, such as McAfee’s Endpoint Encryption product, can pose challenges to this process; you can certainly deploy the base image without encryption. While this gets the PC up and running and the user productive again, the user becomes a risk if any sensitive data is left unencrypted on the PC. There may be options to cover this threat.
Multi-boot and the use of Linux may provide a quick recovery through either a secondary boot partition or a Linux distribution on a USB key that accommodates connection to your network via your VPN solution and a Terminal Services connection. If your network is protected by a physical VPN solution using RSA keys, this may be a reasonable workaround for a dead hard drive while out of the office. The user can connect to a desktop PC, a virtual machine, a Terminal Server, or a virtual desktop solution (VDI). Frankly, there’s real merit, and security, in solutions that can be run from a ChromeBook-style laptop: your infrastructure is safe in a datacentre, the computing happens there, and the users have, in effect, nothing more than a terminal.