Category Archives: IT Management

Focus on Existing Clients

I’ve been working as a part-time consultant for small and start-up companies in Cambridge, MA. These clients ask me to design and build their IT infrastructure. Most of the time, the infrastructure is built in-house, and sometimes it is put in the cloud. It largely depends on which architecture makes sense for the client. For instance, some clients generate huge amounts of data in-house, so it makes sense to build the storage infrastructure on their premises.

Once the infrastructure is built, though, most of the work shifts to operations mode. This mode does not require a huge amount of time, especially in small companies; you only get called when there are problems. Should you then look for new clients so you can generate more revenue? I believe it is easier to focus on existing customers and generate more work (and revenue) from them. In fact, if you focus more on looking for new clients, your relationships with existing ones erode, your service becomes stagnant, and in some cases you end up losing their business.

If you choose to focus on existing clients, here are three proven methods to generate more revenue from them:

1. Provide timely responses. When something breaks, fix it right away. If you cannot do it within the next hour, tell the client when you can work on it and give an estimated completion time. Improve your customer service skills and communicate often.

2. Address unmet needs. There will always be unmet needs in the Information Technology space. For instance, the client may not know that, due to regulations, data containing personal information of employees and customers, such as credit card numbers and Social Security numbers, must be encrypted. Offer to create a project for this unmet need (a minimal sketch of the encryption piece follows this list).

3. Offer value-added services. For instance, offer a comprehensive Disaster Recovery Plan. Tell the client that a simple backup infrastructure is not enough for the business to continue operating after a major disaster.
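
To make item 2 concrete, here is a minimal sketch of field-level encryption at rest using Python’s cryptography package. The field value and key handling are purely illustrative assumptions, not a production design.

```python
# Minimal sketch: encrypting a sensitive field at rest with the
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Assumption: in production the key would come from a key management
# system or vault, never be generated and kept in application code.
key = Fernet.generate_key()
cipher = Fernet(key)

ssn = "123-45-6789"                    # hypothetical sensitive value
token = cipher.encrypt(ssn.encode())   # store this ciphertext, not the SSN

print(cipher.decrypt(token).decode())  # recover the value when authorized
```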

It’s hard and expensive to find new clients. Your existing clients will be happier (and will pay you more money) if you focus on them.

Security Strategy

Amidst the highly publicized security breaches, such as the LinkedIn password leak, hacktivists defacing high-profile websites, or online thieves stealing credit card information, one of the under-reported kinds of breach is nation states or unknown groups stealing intellectual property from companies: building designs, secret manufacturing formulas, business processes, financial information, and so on. This could be the most damaging kind of security breach in terms of its effect on the economy.

Companies often do not even know they are being hacked, or are reluctant to report such breaches. And the sad truth is that many companies do not bother beefing up their security until they become victims.

In this day and age, every company should have a comprehensive security program to protect its assets. It starts with an excellent security strategy, a user awareness program (a lot of security breaches are accomplished via social engineering), and a sound technical solution. Multi-layered security is always the best defense: a firewall that monitors traffic, blocks IP addresses that launch attacks, and limits the network points of entry; an IDS/IPS that identifies attacks and raises alerts; a good Security Information and Event Management (SIEM) system; and a good patch management system to patch servers and applications as soon as vulnerabilities are identified, to name a few.
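
To make the first layer concrete, here is a hedged sketch of the idea behind log-driven IP blocking, which tools like fail2ban automate: count repeated authentication failures per source address and emit firewall block rules. The log path, log format, and threshold are assumptions for illustration.

```python
# Sketch of log-driven IP blocking; the path, pattern, and threshold are
# assumptions, and a real deployment would use fail2ban or similar.
import re
from collections import Counter

THRESHOLD = 5  # failed attempts before an IP is treated as hostile
failed = Counter()

with open("/var/log/auth.log") as log:
    for line in log:
        m = re.search(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)", line)
        if m:
            failed[m.group(1)] += 1

for ip, count in failed.items():
    if count >= THRESHOLD:
        # A real script would invoke the firewall instead of printing.
        print(f"iptables -A INPUT -s {ip} -j DROP  # {count} failures")
```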

Cost is always the deciding factor in implementing technologies, so due diligence is needed in creating a cost analysis and a threat model. As with any security implementation, you do not buy a security solution that costs more than the system you are protecting.
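
One common way to do that due diligence is an annualized loss expectancy (ALE) calculation. The sketch below uses made-up figures purely for illustration:

```python
# Hypothetical annualized loss expectancy (ALE) check.
# ALE = single loss expectancy (SLE) x annual rate of occurrence (ARO).
asset_value = 200_000    # value of the system being protected ($)
exposure_factor = 0.4    # fraction of that value lost per incident
sle = asset_value * exposure_factor   # $80,000 per incident
aro = 0.5                # expected incidents per year
ale = sle * aro          # $40,000 expected loss per year

solution_cost = 60_000   # annual cost of a proposed security solution
# A control that costs more per year than the loss it prevents
# fails the cost test described above.
if solution_cost > ale:
    print(f"Solution (${solution_cost:,}) costs more than ALE (${ale:,.0f})")
```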

Disaster Recovery using NetApp Protection Manager

In our effort to reduce tape media for backup, we have relied on disks for our backup and disaster recovery solution. Disks are getting cheaper and de-duplication technology keeps on improving. We still use tapes for archiving purposes.

One very useful tool for managing our backup and disaster recovery infrastructure is NetApp Protection Manager. It has replaced our hands-on management of local snapshots, snapmirror to the Disaster Recovery (DR) site, and snapvault. In fact, it doesn’t use these terms anymore. Instead of “snapshot,” it uses “backup.” Instead of “snapmirror,” it uses the phrase “backup to disaster recovery secondary.” Instead of “snapvault,” it uses “DR backup” or “secondary backup.”

NetApp Protection Manager is policy-based (e.g., back up primary data every day @ 6pm and retain backups for 12 weeks; back up primary data to the DR site every day @ 12am; back up the secondary data every day @ 8am and retain it for 1 year). As an administrator, you do not have to deal with the nitty-gritty technical details of snapshots, snapmirror, and snapvault; a sketch of what such a policy looks like as data follows below.
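
To show what “policy-based” means, here is the example policy above expressed as plain data. This is not Protection Manager’s actual API; the class and field names are assumptions for illustration only.

```python
# Illustrative only: the example protection policy expressed as data.
# This is NOT Protection Manager's API; it only shows the policy shape.
from dataclasses import dataclass

@dataclass
class BackupRule:
    target: str      # which copy of the data the rule produces
    schedule: str    # when the rule runs
    retention: str   # how long the resulting backups are kept

policy = [
    BackupRule("local backup of primary", "daily @ 6pm",  "12 weeks"),
    BackupRule("backup to DR secondary",  "daily @ 12am", "until superseded"),
    BackupRule("backup of secondary",     "daily @ 8am",  "1 year"),
]

for rule in policy:
    print(f"{rule.target}: {rule.schedule}, retain {rule.retention}")
```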

There is a learning curve in understanding and using Protection Manager. I have been managing NetApp storage for several years, and I am more familiar with snapshots, snapmirror, and snapvault. But as soon as I understood the philosophy behind the tool, it got easier to use. NetApp is positioning it for the cloud, and the tool also has dashboards intended for managers and executives.

Backup Infrastructure

I have been designing, installing, and operating backup systems for the past several years. I have mostly implemented and managed Symantec NetBackup (formerly Veritas NetBackup) for larger infrastructures and Symantec Backup Exec for smaller ones.

These packages work very well, although some features are not very robust. I’m very impressed, for instance, with the NDMP implementation in NetBackup: backing up terabytes of NetApp data via NDMP works very well. However, I do not like the NetBackup admin user interface, since it’s not very intuitive. Their bare metal restore (BMR) implementation is also a pain; some of the bugs took years to fix, maybe because not too many companies use BMR.

Backup Exec works very well for small to medium systems. It has a very intuitive interface, it is relatively easy to set up, and it has very good troubleshooting tools. Lately, though, Symantec has been playing catch-up in its support for newer technologies such as VMware; it is so much easier to use Veeam to manage backup and restore of virtual machines. In addition, Backup Exec has been breaking lately: recent Microsoft patches have caused backups of the System_State to hang.

But I think the biggest threat to these backup packages is online backup providers. CrashPlan, for instance, was initially developed for desktop backup, but it will not take long before companies use it to back up their servers. Once these providers properly address security concerns, companies will be even more compelled to back up their data online. It’s just cheaper and easier to back up online.

NetApp Storage Migration Lessons

One of my latest projects is to consolidate six old NetApp filers and migrate a total of 30 TB of data to a new NetApp filer cluster, a FAS 3240C. The project started several months ago and is almost complete; only one of the six filers is left to migrate.

I have done several storage migrations in the past, and there are always new lessons to learn in terms of the technology, migration strategy and processes, and the people involved in the project. Here are some of the lessons that I learned:

  1. As expected, innovation in computer technology moves fast, and storage is no exception. IT professionals need to keep pace or our skills become irrelevant. On this project I learned storage virtualization, NetApp Flash Cache, and snapmirror using smtape, among many other new features.
  2. Migration strategy, planning, and preparation take more time than the actual migration itself. For instance, one filer took only an hour and a half to migrate, but the preparations, such as snapmirroring, re-creating NFS and CIFS shares, changing users’ login scripts, changing several applications, and other pre-work, were done several days before the actual migration. The cutover itself just catches up with the latest changes to the files (i.e., a snapmirror update) and flips the switch; see the sketch after this list.
  3. As with many other big IT projects, people are always the challenging part. The key is to engage the stakeholders (business users, application owners, technical folks) early in the project. Communicate the changes that are happening and how their applications and access to their data will be affected. Give them time to understand the changes and adapt their applications. Tell them the benefits of the new technology and communicate the status of the project often.
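
For item 2, here is a hedged sketch of the cutover sequence. The commands are Data ONTAP 7-Mode style, the filer and volume names are hypothetical, and a real migration would add error handling and verification around every step.

```python
# Hedged sketch of a snapmirror-based cutover (Data ONTAP 7-Mode style
# commands; filer and volume names are hypothetical).
import subprocess

def filer_cmd(filer: str, command: str) -> None:
    """Run a command on a filer over ssh (assumes key-based auth)."""
    print(f"[{filer}] {command}")
    subprocess.run(["ssh", filer, command], check=True)

SRC, DST = "oldfiler:vol1", "newfiler:vol1"

# Days before cutover: the baseline transfer runs while users keep working.
filer_cmd("newfiler", f"snapmirror initialize -S {SRC} {DST}")

# Cutover window: catch up with the latest file changes, then break the
# mirror so the destination volume becomes writable ("flip the switch").
filer_cmd("newfiler", f"snapmirror update {DST}")
filer_cmd("newfiler", f"snapmirror quiesce {DST}")
filer_cmd("newfiler", f"snapmirror break {DST}")

# After this, re-create CIFS shares / NFS exports on the new filer and
# update login scripts and applications (handled outside this sketch).
```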