Category Archives: IT Strategy

The Evolving Role of IT Professionals

I started my career in software development. I wrote code, performed systems analysis, and did software quality assurance. Then I switched to system and network administration, and infrastructure architecture. While the role of software developers may not change that much (software still needs to be created), the roles of IT administrators, architects, analysts, and IT departments in general are changing. This is due to cheap hardware, smarter software and appliances, and the availability of the cloud.

I still remember when I would spend a lot of time troubleshooting a single system. Today, thanks to redundant systems, off-the-shelf and online applications, and the use of appliances, troubleshooting time has been reduced to a minimum. When a component breaks, it's easy to replace.

IT companies are now selling converged network, server, and storage systems in a box, which eliminates the need for elaborate architecture and implementation and simplifies IT operations.

With virtualization and the “cloud,” more and more applications and IT services (infrastructure as a service, software as a service, etc.) are becoming available online.

When it comes to IT, companies now have various choices – host their IT services externally via the public cloud, build IT systems in house, or use a combination of the two.

Thus, the future role of IT professionals will be that of a broker. When the business comes to them with a need, they should be able to deliver quickly and provide the best IT solution. They should be able to determine when to use the public cloud and when to use internal IT systems. The key is to understand the business. For instance, it may not make sense to put data in the cloud if you are concerned about security or if your company is regulated by the government. If your company is small, it may not make sense to build a costly IT infrastructure in house.

Successful IT professionals are not only technically savvy but also business savvy.

The Value of IT Certifications

I recently passed the VMware Certified Professional 5 – Data Center Virtualization exam. The last VMware certification I took was in 2007, when I passed the VMware Certified Professional 3 exam. It's nice to have the latest VMware certification under my belt.

VMware certification is a little unique in that it requires a week of training plus hands-on experience. You will find it difficult to pass the test without hands-on experience: most of the questions are real-life scenarios, and you can only understand them if you have encountered them in real life.

Some people question the value of certifications. They say that certifications are useless because many of those who hold them are inexperienced. I agree that experience is the best way to learn in the IT field – I can attest to this after almost 20 years in it. But IT certifications are valuable for the following reasons:

1. Not all IT certifications are created equal. While some can be passed just by reading books, most IT certifications – such as VCP (VMware Certified Professional), CISSP (Certified Information Systems Security Professional), and RHCE (Red Hat Certified Engineer) – require a high degree of experience to pass.

2. Not everyone is lucky enough to have access to expensive hardware to gain hands-on experience, or to be assigned to IT projects that provide maximum exposure. Some people take the certification route to gain knowledge and experience.

3. Not all IT knowledge is learned via experience, since not all scenarios can be encountered in real life. Some of it is learned by reading books and magazines, taking training, and passing certification tests. For instance, if your company's standard is Fibre Channel for VMware datastores, the only way to learn about iSCSI datastores is to read about them or get trained on them.

4. IT certifications are solid evidence of your career accomplishments. They can be very useful, for instance, when looking for a job. Prospective employers do not have concrete evidence of your accomplishments, but a solid, trusted IT certification can prove your worth.

5. And finally, seasoned IT professionals like me take certification tests to validate our knowledge.

The Importance of Disaster Recovery (DR) Testing

Recently, we conducted disaster recovery (DR) testing on one of our crucial applications. The server was running Windows 2008 on an HP physical box. We performed a bare metal restore (BMR) using Symantec NetBackup 7.1. However, after the BMR completed, the server would not boot up. We troubleshot the problem and tried several configurations, and it took a couple of days before we figured out the issue: the boot sector had become misaligned during the restore, and we had to use a Windows installation disk to repair it.

What if it had been a real server disaster? The business cannot wait a couple of days for a server to be restored. We had defined an RTO (Recovery Time Objective) of 8 hours for that server, and we did not meet it during our testing. This is why DR testing is so important.

During DR testing, we have to test both the restore technology and the restore procedures. In addition, we need to verify that we can restore the system on time (RTO) and that we can restore the data to a given point in time (RPO, or Recovery Point Objective) – e.g. from a day before, or from a week ago.
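As a toy illustration of those two checks (the timestamps and objectives below are made up for the example, not taken from our actual test), a DR test result can be evaluated against its RTO and RPO like this:

```python
from datetime import datetime, timedelta

def check_dr_test(disaster_declared, restore_completed, last_good_backup,
                  rto: timedelta, rpo: timedelta):
    """Return (rto_met, rpo_met) for a single DR test run."""
    restore_duration = restore_completed - disaster_declared   # how long recovery took
    data_loss_window = disaster_declared - last_good_backup    # how much data would be lost
    return restore_duration <= rto, data_loss_window <= rpo

# Hypothetical run: the restore took about two days against an 8-hour RTO,
# much like our boot-sector incident.
rto_met, rpo_met = check_dr_test(
    disaster_declared=datetime(2012, 6, 1, 9, 0),
    restore_completed=datetime(2012, 6, 3, 11, 0),
    last_good_backup=datetime(2012, 5, 31, 23, 0),
    rto=timedelta(hours=8),
    rpo=timedelta(hours=24),
)
print(rto_met, rpo_met)  # False True
```

The point is that a DR test only "passes" when both values come back true; a restore that technically works but blows the RTO is still a failed test.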

With a lot of companies outsourcing their DR to third parties or to the cloud, DR testing becomes even more important. How do you know the restore works? How do you know their DR solution meets your RPO and RTO? Companies assume that because backups are being done, restores will automatically work.

We perform DR testing once a year, but for crucial applications and data, I recommend DR testing twice a year. Also, perform a test every time you make significant changes to your backup infrastructure, such as software updates.

Security Done Right

During my job-related trip to Israel a couple of months ago, I was subjected to a thorough security check at the airport. I learned later on that everybody goes through the same process. It was a little inconvenient, but in the end, I felt safe.

With all the advanced technologies in security, nothing beats the old way of conducting security – thorough checks on individuals. I also noticed the defense-in-depth strategy at the Israeli airport – the several layers of security people have to pass through to get to their destinations. No wonder some of the greatest IT security companies come from Israel (e.g. Check Point).

As an IT security professional (I'm CISSP certified), I can totally relate to the security measures Israel has to implement, and companies need to learn from them. Not a day goes by without news of companies being hacked, shamed, and extorted by hackers around the world.

Sadly, some companies only take security seriously when it's too late – when their data has been stolen, their systems have been compromised, or their Twitter account has been taken over. It will be a never-ending battle with hackers, but it's a great idea to start securing your systems now.

Getting Promoted in IT

One of the perks of serving in a Harvard alumni club (I am currently the Secretary of the Harvard-Radcliffe Club of Worcester) was attending a 2-day Alumni Leadership Conference in Cambridge, MA. It was a nice break from work. I met alumni leaders from all over the world, talked to accomplished people (including the writer of one of my daughter's favorite movies – Kung Fu Panda), learned what's new in the Harvard world, and picked up leadership skills from great speakers.

One of those speakers was David Ager, a faculty member at Harvard Business School. He totally engaged the audience while delivering his opening address, “Leadership of High Performing Talent: A Case Study.” We discussed a case study about Rob Parson, a superstar performer in the financial industry. In a nutshell, Rob Parson delivered significant revenue to his company, but his abrasive character and lack of teamwork didn't fit the company's culture. He was due for a performance review, and the question was: should Rob be promoted?

The case study was set in the financial industry, but the lesson holds true in the Information Technology (IT) industry as well. There are a lot of Rob Parsons in IT – software developers, architects, analysts, programmers – who are high performers but rub other people the wrong way. They are intelligent and smart, and they develop very sophisticated software – the bread and butter of IT companies. Some of these IT superstars aspire to be promoted into managerial roles. Should they be promoted? Too often we hear stories about a great software architect who moved into managing people but faltered as a result.

IT professionals who would really like to manage people should be carefully evaluated for their potential. They need to learn people skills and business skills in order to succeed. Before being given any managerial position, they should undergo a development program and be under the guidance of a mentor (or a coach) for at least a year. Most IT professionals, though, should not take on a managerial role. They should remain in their technical roles to stay productive, but they should be given other incentives that motivate them and make them happy – such as complete authority over their work, flex time, an environment that fosters creativity, and so on.

BYOD

Recently, I attended a security seminar on the newest buzzword in the IT industry – BYOD, or Bring Your Own Device – to complete my CISSP CPE (Continuing Professional Education) requirement for the year. The seminar was sponsored by ISC2, and the speaker, Brandon Dunlap, was seasoned, insightful, and very entertaining. I highly recommend the seminar.

BYOD came about because of the popularity of mobile devices (iPhone, iPad, Android, BlackBerry, etc.), the consumerization of IT, and employees getting more flexible schedules. Companies are starting to allow their employees to use their own devices – to improve productivity and mobility, and supposedly to save the company money. Millennials, in particular, are more apt to use their own devices; for them, owning these devices is a status symbol or a fashion statement.

However, does it make sense to allow these devices onto the company's network? What are the security implications of the BYOD phenomenon?

From a technology standpoint, there is a lot of innovation in securing both the mobile devices and the company's applications and data – for instance, using containers to separate personal apps from company apps. Security companies are creating products and services that will improve the security of BYOD. But from a policy and legal standpoint, very little is being done. Companies that jumped into the BYOD buzz are getting stung by its pitfalls, as exemplified by one of the greatest IT companies in the world – IBM. In addition, recent studies show that BYOD does not really save companies money.

Companies need to thoroughly understand BYOD before adopting it.  It is a totally new way of working.

The seminar highlighted the many problems of BYOD, and the immense work that needs to be done to make it successful.  No wonder the organizer entitled it “Bring Your Own Disaster” instead of “Bring Your Own Device.”

 

Internal Web Analytics

There are a lot of tools out there that can analyze web traffic for your site. Leading the pack is Google Analytics. But what if you want statistics for an internal website, and you don't necessarily want to send this information to an external provider such as Google? Enter Piwik. Piwik is very much like Google Analytics, but it can be installed on your internal network. The best part is that it's free.

Since Piwik is a downloadable tool, you need a machine running a web server and MySQL. You can install it on your existing web server or on a separate one. I installed it on a separate CentOS machine and found the installation very easy. In fact, you just unzip a file and put the files in a web directory; the rest of the installation is done via the browser. If a required component is missing on your server (in my case, I needed the PDO extension), it will tell you how to install it. Pretty neat.

After installing the server, you just need to put a small JavaScript snippet on the pages you want to track. That's it. Piwik will start gathering statistics for your site.
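Piwik can also accept hits server-side through its HTTP tracking API (the piwik.php endpoint), which is handy for pages where you can't embed JavaScript. A minimal sketch, assuming a hypothetical internal Piwik instance and site ID (the hostnames below are made up):

```python
from urllib.parse import urlencode

def piwik_tracking_url(piwik_base, site_id, page_url, page_title):
    """Build a GET URL for Piwik's HTTP tracking API.
    idsite and rec=1 are the required parameters; url and
    action_name describe the page view being recorded."""
    params = {
        "idsite": site_id,
        "rec": 1,
        "url": page_url,
        "action_name": page_title,
    }
    return f"{piwik_base}/piwik.php?{urlencode(params)}"

# Hypothetical internal hosts, for illustration only.
url = piwik_tracking_url("http://piwik.example.local", 1,
                         "http://intranet.example.local/wiki", "Wiki Home")
print(url)
```

Fetching that URL (e.g. from a cron job or application code) registers the page view just as the JavaScript snippet would.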

I also evaluated Splunk and its companion app – Splunk App for Web Intelligence – but I found that it is not ready for prime time. There are still bugs; no wonder it is still in beta. When I was evaluating it, it wasn't even able to extract usable information from Apache logs.

I’ve been using AWStats to extract statistics for internal websites for years. It has been very reliable, but it sometimes provides inaccurate results. The open-source Piwik web analytics tool provides accurate statistics and is the best tool I’ve used so far.

Security Strategy

Amidst the highly publicized security breaches – the LinkedIn password leak, hacktivists defacing high-profile websites, online thieves stealing credit card information – one of the most under-reported types of breach is nation states or unknown groups stealing intellectual property from companies: building designs, secret manufacturing formulas, business processes, financial information, etc. This could be the most damaging kind of security breach in terms of its effect on the economy.

Companies do not even know they are being hacked, or are reluctant to report such breaches. And the sad truth is that companies do not even bother beefing up their security until they become victims.

In this day and age, all companies should have a comprehensive security program to protect their assets. It starts with an excellent security strategy, a user awareness program (a lot of breaches are accomplished via social engineering), and a sound technical solution. A multi-layered defense is always best: a firewall that monitors traffic, blocks IP addresses that launch attacks, and limits the network points of entry; an IDS/IPS that identifies attacks and raises alerts; a good Security Information and Event Management (SIEM) system; and a good patch management system that patches servers and applications as soon as vulnerabilities are identified, to name a few.
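As a toy illustration of one of those layers, identifying attacking IP addresses can start with something as simple as scanning logs for repeated failed logins. This is a simplified sketch (the log format, threshold, and addresses are made up; real deployments would use a tool like fail2ban or a SIEM):

```python
import re
from collections import Counter

# Matches sshd-style failed-login lines; a hypothetical, simplified format.
FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def ips_to_block(log_lines, threshold=5):
    """Count failed-login attempts per source IP and return the IPs
    at or above the threshold, as candidates for a firewall block."""
    failures = Counter()
    for line in log_lines:
        m = FAILED_LOGIN.search(line)
        if m:
            failures[m.group(1)] += 1
    return {ip for ip, n in failures.items() if n >= threshold}

# Six failed attempts from the same (made-up) address trip the threshold.
sample = ["sshd[123]: Failed password for root from 10.0.0.9 port 22"] * 6
print(ips_to_block(sample))  # {'10.0.0.9'}
```

The output of a check like this would feed the firewall layer; the IDS/IPS and SIEM layers then catch what slips past it.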

Cost is always a deciding factor in implementing technologies. Due diligence is needed in creating a cost analysis and a threat model. As with any security implementation, you should not buy a security solution that costs more than the system you are protecting.

Disaster Recovery using NetApp Protection Manager

In our effort to reduce tape media for backup, we have relied on disks for our backup and disaster recovery solution. Disks are getting cheaper, and de-duplication technology keeps improving. We still use tapes for archiving purposes.

One very useful tool for managing our backup and disaster recovery infrastructure is NetApp Protection Manager. It has replaced our hands-on management of local snapshots, SnapMirror to the Disaster Recovery (DR) site, and SnapVault. In fact, it doesn’t use these terms anymore. Instead of “snapshot,” it uses “backup.” Instead of “snapmirror,” it uses the phrase “backup to disaster recovery secondary.” Instead of “snapvault,” it uses “DR backup or secondary backup.”

NetApp Protection Manager is policy-based (e.g. back up primary data every day at 6pm and retain backups for 12 weeks; back up primary data to the DR site every day at 12am; back up the secondary data every day at 8am and retain it for 1 year). As an administrator, you do not have to deal with the nitty-gritty technical details of snapshots, SnapMirror, and SnapVault.
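To see what "policy-based" buys you, here is a rough sketch of the idea behind such a policy, not Protection Manager's actual implementation or API: once a retention period is declared, figuring out which backups have aged out becomes a mechanical calculation the tool does for you.

```python
from datetime import date, timedelta

def expired_backups(backup_dates, retention: timedelta, today: date):
    """Given the dates of existing backups and a retention period,
    return the backups that have aged out and can be deleted."""
    cutoff = today - retention
    return [d for d in backup_dates if d < cutoff]

# Hypothetical schedule: daily backups retained for 12 weeks,
# like the primary-data policy described above.
backups = [date(2012, 1, 1) + timedelta(days=i) for i in range(120)]
old = expired_backups(backups, retention=timedelta(weeks=12), today=date(2012, 4, 30))
print(len(old))  # 36 backups older than the 12-week cutoff
```

The administrator only states the policy (frequency, destination, retention); the expiry bookkeeping above is the kind of detail the tool hides.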

There is a learning curve in understanding and using Protection Manager. I have been managing NetApp storage for several years, and I am more familiar with snapshots, SnapMirror, and SnapVault. But as soon as I understood the philosophy behind the tool, it got easier to use. NetApp is positioning it for the cloud. The tool also has dashboards intended for managers and executives.

VMware Datastore via NFS

One of the objectives of our recently concluded massive storage upgrade project was to move our VMware datastores from iSCSI to NFS. I have been hearing about the advantages of using NFS versus block-level storage (i.e., iSCSI or Fibre Channel), and true enough, NFS did not disappoint.

We had been using iSCSI on NetApp as a VMware datastore for a long time, and it ran pretty well. But when we performed maintenance on the NetApp storage, the virtual machines were oftentimes affected. In addition, restore procedures could be a pain.

While Fibre Channel (FC) is still the standard storage for most VMware implementations because it is a proven technology, in my experience these are the advantages of using NFS over iSCSI or FC:

1. It is robust, as long as you follow best-practice guidelines – for instance, separating the NFS network from the general-use network. VMware and NetApp have released white papers discussing NFS datastore best practices. In our environment, we did several failovers on the NetApp storage to upgrade the Data ONTAP version, and the virtual machines were never affected.

2. It is easier to configure, on both the VMware side and the NetApp side.

3. It is easier to back up, via NDMP on the NetApp side.

4. It is easier to restore VMDK files using snapshots on the NetApp side, since there is no need to mount LUNs.

5. VMware and NetApp have built great tools for seamless maintenance and operations.