Restoring NetApp LUNs

The other day, I was tasked with restoring a NetApp LUN from several years ago. Since the backup was so old, I had to restore it from an archived tape. After restoring the file, it did not show up as a LUN. It turned out that there are a few rules to follow when restoring a LUN from tape so that it shows up as a LUN.

There are also a couple of requirements when backing up LUNs to tape. Thankfully, when we backed up the LUNs, we had followed these rules:

1. The data residing on the LUN should be in a quiesced state prior to backup so the file system caches are committed to disk.

2. The LUN must be backed up using an NDMP-compliant backup application in order for it to retain the properties of a LUN. Symantec NetBackup is an NDMP-compliant backup application.

When restoring, I learned that:

1. It must be restored to the root of a volume or qtree. If it is restored anywhere else, it will simply show up as a file and lose the metadata that allows it to be recognized as a LUN.

2. The backup software should not add an additional directory above the LUN when it is restored.

For instance, in the Symantec NetBackup application, when restoring the LUN, you should select the option to “Restore everything to its original location.” If this is not possible, you can select the second option, “Restore everything to a different location, maintaining existing structure.” This means that you can restore it to a different volume.

For example, if the LUN resides in /vol/vol1/qtree1/lun1 and we choose to restore to /vol/vol2, the LUN will be restored to /vol/vol2/qtree1/lun1 because the existing structure is maintained.

Do not select the third option, “Restore individual directories and files to different locations,” because the NetBackup software will add an extra directory beyond the qtree and the LUN will not show up as a LUN.

When the restore is complete, a “lun show -v” on the NetApp CLI will show the restored LUN on the /vol/vol2 volume.
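
As a rough sketch (assuming a controller running Data ONTAP 7-Mode; the igroup name “esx_hosts” and LUN ID 0 are made-up examples), verifying the restored LUN, bringing it online if it came back offline, and re-mapping it to an initiator group if the original mapping did not carry over would look something like this on the NetApp CLI:

filer> lun show -v /vol/vol2/qtree1/lun1
filer> lun online /vol/vol2/qtree1/lun1
filer> lun map /vol/vol2/qtree1/lun1 esx_hosts 0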

Deduplication and Disk-based Backup

Disk-based backup has been around for a long time now. I remember when I first deployed a Data Domain DD560 appliance eight years ago; I was impressed by its deduplication technology. Data Domain appliances are disk storage arrays used primarily for backup. We used ours as a disk backup destination for Symantec NetBackup via NFS. Data Domain sets itself apart because of its deduplication technology. For instance, in our environment we are getting a total compression (reduction) of 20x, which means that our 124TB of data uses only 6TB of space – a very large space saving indeed.
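
For the curious, the 20x figure is simply the ratio of logical data to physical space consumed; a quick back-of-the-envelope check using the numbers quoted above:

echo "scale=1; 124/6" | bc     # prints 20.6, i.e., roughly a 20x reduction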

In addition, disk-based backup makes it very fast and easy to restore data. The need to find the tapes, retrieve them, and mount them has been eliminated.

In fact, the need for tapes can be totally eliminated. A lot of companies still use tapes and store them off site for disaster recovery. If you have a remote office site, a Data Domain appliance can be replicated to your remote site (or disaster recovery site). What is nice about Data Domain replication is that only the deduplicated data is sent over the wire. This means less bandwidth consumption on your WAN (Wide Area Network).

Deduplication technology keeps getting better, and storage vendors and backup software vendors are offering it in different flavors. With the cost of disk going down, there is really no need for tapes. Even data kept for long-term retention and archiving can now be stored on a low-cost, deduplicated disk storage array.

The Evolving Role of IT Professionals

I started my career in software development. I wrote code and performed systems analysis and software quality assurance. Then I switched to system and network administration, and infrastructure architecture. While the role of software developers may not change that much (software programs still need to be created), the roles of IT administrators, architects, analysts, and IT departments in general are changing. This is due to cheap hardware, smarter software and appliances, and the availability of the cloud.

I still remember when I used to spend a lot of time troubleshooting a system. Today, due to redundant systems, off-the-shelf and online applications, and the use of appliances, troubleshooting time has been reduced to a minimum. When a component breaks, it’s easy to replace it.

IT companies are now selling converged network, server, and storage in a box, which eliminates the need for elaborate architecture and implementation and simplifies IT operations.

With virtualization and the “cloud,” more and more applications and IT services (infrastructure as a service, software as a service, etc.) are becoming available online.

When it comes to IT, companies now have various choices – host their IT services externally via the public cloud, build IT systems in house, or use a combination of the two.

Thus, the future role of IT professionals will be more like that of brokers. When the business comes to them with a need, they should be able to deliver quickly and provide the best IT solution. They should be able to determine when to use the public cloud and when to use internal IT systems. The key is to understand the business. For instance, it may not make sense to put data in the cloud if you are concerned about security or if your company is regulated by the government. If your company is small, it may not make sense to build a costly IT infrastructure in house.

Successful IT professionals are not only technically savvy but also business savvy.

Best Practices for Using NFS Datastore on VMware

More companies are now deploying VMware with IP based shared storage (NAS). NAS storage is cheaper than Fiber Channel (SAN) storage because there is no separate Fiber Channel (FC) based network to maintain. More importantly, IP based storage performance and stability are now comparable with FC based storage.

Other advantages of using IP based storage, specifically NFS, are thin provisioning, deduplication, and the ease of backing up and restoring virtual machines and files on a virtual disk via array-based snapshots. In addition, IP based storage is easier to maintain.

VMware published a whitepaper on the best practices for running VMware vSphere on Network Attached Storage (NAS) using NFS. Following the best practices in deploying an NFS based storage is very important to obtain a stable and optimized VMware environment. Here are the important things to consider:

On the network side, the local area network (LAN) on which the NFS traffic will run needs to be designed with availability, downtime avoidance, isolation, and failover in mind (a sample host-side network configuration follows the list):

1. NFS traffic should be on a separate physical LAN, or at least on a separate VLAN.
2. Use private (non-routable) IP addresses. This will also address a security concern since NFS traffic is not encrypted and NFS is mounted with root privileges on the VMware host.
3. Build in redundancy by teaming the NICs on the VMware host, configuring LACP, and using two LAN switches.
4. Use jumbo frames.
5. Use 10Gb Ethernet.
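
As a rough sketch of items 1 to 5 on a single host (assuming ESXi 5.x esxcli; the vSwitch name, port group name, VLAN ID, uplink, and IP address are made-up examples, and the actual teaming/LACP policy depends on whether you use a standard or distributed vSwitch and on your physical switches), the host-side network setup could look like this:

# Dedicated port group for NFS traffic on its own VLAN
esxcli network vswitch standard portgroup add --portgroup-name=NFS --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=NFS --vlan-id=100

# Second uplink for redundancy (set the teaming policy to match your switch configuration)
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1

# Jumbo frames on the vSwitch and the NFS VMkernel interface, with a private (non-routable) address
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.11 --netmask=255.255.255.0 --type=static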

On the storage array side, the storage controllers must be redundant in case the primary one fails. In addition:

1. Configure the NFS exports to be persistent (e.g., exportfs -p); see the example after this list.
2. Install the VAAI and other plug-ins from the storage vendor. For instance, NetApp has the Virtual Storage Console (VSC) plug-in that can be installed on vCenter.
3. Refer to the storage vendor's best practices guide. For instance, NetApp and EMC have published their own best practice whitepapers for using NFS with VMware.
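
For item 1, on a NetApp controller running Data ONTAP 7-Mode a persistent export could look like the sketch below (the volume name and subnet are made-up examples; root access is included because ESXi mounts NFS datastores as root, and the -p flag writes the rule to /etc/exports so it survives a reboot):

filer> exportfs -p rw=192.168.100.0/24,root=192.168.100.0/24 /vol/nfs_ds1
filer> exportfs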

On the VMware hosts, the following configuration should be implemented:

1. Use the same datastore name across all hosts.
2. Select “No” for the NIC teaming failback option. If there is intermittent behavior in the network, this prevents the NICs in use from flip-flopping.
3. If you increase the maximum number of concurrent NFS mount points (the NFS.MaxVolumes setting, default 8), also increase Net.TcpipHeapSize. For instance, if 32 mount points are used, increase Net.TcpipHeapSize to 30MB.
4. Set the following VMware High Availability-related NFS options (NFS heartbeats are used to determine whether an NFS volume is still available); these can also be set from the command line, as sketched after this list:
NFS.HeartbeatFrequency=12
NFS.HeartbeatTimeout=5
NFS.HeartbeatMaxFailures=10
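
As a minimal sketch of items 1 to 4 (assuming ESXi 5.x esxcli; the NFS server address and datastore name are made-up examples, and the heap-size change may require a host reboot to take effect), the settings and a consistently named datastore mount could be applied like this:

# NFS heartbeat settings recommended above
esxcli system settings advanced set -o /NFS/HeartbeatFrequency -i 12
esxcli system settings advanced set -o /NFS/HeartbeatTimeout -i 5
esxcli system settings advanced set -o /NFS/HeartbeatMaxFailures -i 10

# Allow more NFS mounts than the default 8 and raise the TCP/IP heap size accordingly
esxcli system settings advanced set -o /NFS/MaxVolumes -i 32
esxcli system settings advanced set -o /Net/TcpipHeapSize -i 30

# Mount the datastore using the same name on every host
esxcli storage nfs add --host=192.168.100.50 --share=/vol/nfs_ds1 --volume-name=nfs_ds1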

When configured properly, IP based storage, specifically NFS, provides a very solid storage platform for VMware.

NetApp Virtual Storage Console for VMware vSphere

One of the best tools for managing NetApp storage and VMware is a plug-in called NetApp Virtual Storage Console (VSC) for VMware vSphere. VSC gives administrators the ability to manage NetApp storage from the vCenter client. It can configure, monitor, provision, and migrate NetApp datastores with fewer clicks. In addition, it can perform backup and recovery of LUNs and volumes from the vCenter client.

VSC can automatically discover your NetApp storage controllers and ESXi hosts, a task that can take a lot of time without VSC. VSC can also automatically apply “best practices” settings on the ESXi hosts to optimize their configuration. It can rapidly provision datastores without going through the NetApp management interface. You can take backups (snapshots) of a datastore in a consistent state and perform recovery in minutes.

NetApp's implementation of the vStorage APIs for Array Integration (VAAI) offloads significant processing tasks to the storage array, freeing ESXi resources for other tasks. If you are using NFS, though, you still need to download and install the NetApp NFS Plug-in for VMware VAAI (a sample installation is sketched below).
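
For example (a sketch only; the bundle file name and datastore path are placeholders, and the actual package comes from the NetApp support site), installing the NFS VAAI plug-in from the ESXi command line and checking the result might look like this:

# Install the plug-in offline bundle that was copied to a datastore, then reboot the host
esxcli software vib install -d /vmfs/volumes/datastore1/NetAppNasPlugin.zip

# After the reboot, NFS datastores should report hardware acceleration as supported
esxcli storage nfs list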

For now, the VSC plug-in is only available for the traditional vCenter client. VMware is moving towards replacing the traditional vCenter client with the vSphere Web Client. I hope that NetApp releases the plug-in version for the web client pretty soon.

The Important Things In Life

I delivered this speech for my “Advanced Communication” Toastmasters project, speech #2 of the Specialty Speeches manual, the “Uplift the Spirit” project. Here it goes:

I can still remember everything, as if it happened only yesterday. I was 12 years old, in 6th grade. My friend and I skipped school one sunny day and went to a pond. I was so naive that I jumped into the pond without checking it first. It was 8 feet deep, and I barely knew how to swim. Then my foot got stuck in the mud at the bottom. I panicked! I was drowning! I thought I was going to die that day. Then suddenly I felt my friend pull me out to safety.

Fellow Toastmasters and most welcome guests, have you also had a near-death experience? Did it change your perspective on life?

Recently, I was reminded again that life can be short. I was driving late one evening after attending a conference in Cambridge. I was so tired that night. The road was not my normal route, and at an intersection in Leicester on my way home, I did not stop at a stop sign. I only realized it after I crossed the intersection, and I exclaimed, “What just happened?” Then I checked my rear-view mirror and saw a large trailer truck cross the intersection. What if I had been 3 seconds late? I could have been hit by that truck.

Life can be short. We should make the most of it.

So how do we live life to the fullest? Well, let’s learn from people on their deathbeds. Research shows that there are 3 common deathbed regrets, and I’ll talk about each of them. My nurse friend who worked at a hospice confirmed these to me.

The first one is, I wish I had pursued my dreams.

I wish I had the courage to live a life true to myself, not the life others expected of me.

We should pursue our own dreams — not what our parents, friends, or other people told us to pursue. If you want to be a musician – go for it while there is still time. Climb Mt. Everest. Take that job at a non-profit charitable organization. Don’t mind what others think about it.

When I was a kid growing up in the Philippines, I dreamed of coming to America. America, as I had read in books and seen in movies, was nice and prosperous. So I pursued this dream. I took a computer engineering course, which I thought would be in demand in America. After college, I worked in the Philippines for 3 years to gain experience, and I relentlessly pursued technology companies in the US. With perseverance and a little bit of luck, I was hired by a telecommunications company in New Jersey, and I realized my dream of coming to America.

I have other dreams such as to travel around the world and experience other cultures, and I’m working towards these dreams.

What about you? What are your dreams? Are you pursuing them?

The second common deathbed regret is, I wish I hadn’t worked so hard.

I wish I had spent more time with family and friends than at work. People on their deathbeds do not cherish the long hours in the office. Instead, they remember the time they spent with their families and friends.

The other day, after getting home very tired from work, my daughter Justine approached me, “Dad, can you help me with my homework?” I snapped at her, “Can’t you see I’m so tired from work, and I still have to mow the lawn?” She meekly said, “Ok Dad, maybe later.” But then I realized, I just told my daughter that she is not my priority. I thought hard about it, and I realized that I had my priorities wrong – my work is my first priority, then household chores, then my daughter/family. It should be the other way around!

People on their deathbeds also wish they had stayed in touch with their friends.

I always planned to call my friends, but because of work and other responsibilities, I kept putting it on the back burner and never really got the chance to call them. I would often say, “I’ll call them tomorrow; they will still be there anyway.” Well, I think I should be getting in touch with them before it’s too late.

The lesson here is to reach out to friends. Have a long chat over a glass of wine (or beer). Call them if they live very far away.

Fortunately, it’s now easier than ever to stay in touch with friends via Facebook. So if you haven’t been using Facebook yet, sign up and reconnect with your old friends.

The third common deathbed regret is, I wish that I had let myself be happier.

Too often we postpone happiness. I remember when I was in high school, I would often say, “I’ll be happy when I get to college.” When I was in college, I would often say, “I’ll be happy when I get a job.” Now that I have a job, I often say, “I’ll be happy when I retire,” or “I’ll be happy if I win the lottery.” When my day gets challenging, I often say, “I’ll be happy when this day is over.”

Then I realized that I will never be happy if I keep on postponing it. The present is all we’ve got. Today! Now! Be happy now, because tomorrow may never come.

So, open that expensive wine. Eat that delicious chocolate cake. Play ball with your son, or play pretend tea party with your daughter. Book that European vacation you’ve been longing for. Move to Florida if you love warm weather (although we’ll really miss you if you move out of Massachusetts). Buy yourself a Kindle and read; whatever it is, be good to yourself. Also, be good to others, because making others happy makes us happy too.

It is hard to prioritize and do the important things in life. We always get caught up with daily activities – work, household chores, interruptions.

But I think, for me, that near-death drowning experience when I was a kid taught me a lesson – never skip school to go to the pond. But seriously, recalling that story reminds me every time to put my life in perspective and do the important things in life. Life is short; make the most of it.

The Value of IT Certifications

I recently passed the VMware Certified Professional 5 – Data Center Virtualization exam. The last VMware certification I took was in 2007 when I passed the VMware Certified Professional 3 exam. It’s nice to have the latest VMware certification under my belt.

VMware certification is a little bit unique because it requires a one-week training class and hands-on experience. You will find it difficult to pass the test without hands-on experience. Most of the questions in the test are real-life scenarios, and you can only understand them if you have encountered them in real life.

Some people question the value of certifications. They say that certifications are useless because most of those who have them are inexperienced. I agree that experience is the best way to learn in the IT field; I can attest to this after almost 20 years in the field. But IT certifications are valuable for the following reasons:

1. Not all IT certifications are created equal. While some certifications are easier to pass just by reading books, most IT certifications, such as the VCP (VMware Certified Professional), CISSP (Certified Information Systems Security Professional), and RHCE (Red Hat Certified Engineer), require a high degree of experience to pass.

2. Not all people are lucky enough to have access to expensive hardware to gain hands-on experience nor lucky enough to be assigned to IT projects to get the maximum exposure. Some people take the certification route to get knowledge and experience.

3. Not all IT knowledge is learned via experience, since not all scenarios can be encountered in real life. Some knowledge is gained by reading books and magazines, taking training, and passing certification tests. For instance, if your company’s standard is Fiber Channel for VMware datastores, the only way to learn about iSCSI datastores is to read about them or get trained on them.

4. IT certifications are solid evidence of your career accomplishments. They will be very useful, for instance, when looking for a job. Prospective employers do not otherwise have concrete evidence of your accomplishments, but a solid and trusted IT certification can prove your worth.

5. And finally, seasoned IT professionals like me take certification tests to validate our knowledge.

Important Features and Capabilities in the New vSphere 5.1

vSphere 5.1 was released several months ago, and among its new features and capabilities, I think the most important are the following:

1. Larger virtual machines. Virtual machines can now have up to 64 virtual CPUs (vCPUs) and 1TB of virtual RAM (vRAM). This means that enterprise apps such as SAP, large databases, email, and other high-demand apps can now be virtualized without worrying about CPU and memory resources. The goal of attaining 100% virtualization in the data center can now be realized.

2. vSphere Data Protection and vSphere Replication. vSphere Data Protection is used to back up and recover virtual machines. vSphere Replication is used to replicate virtual machines to a remote data center for disaster recovery. There is no need to integrate third-party tools such as Veeam for backup or Zerto for replication to a remote DR site.

3. vSphere 5.1 eliminates the need to reboot virtual machines for subsequent VMware Tools upgrades on Windows.

For a complete list of the new features and capabilities, go to this website:

http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-vsphere51.pdf

However, the vSphere Web Client is now the core administrative interface for vSphere. The vSphere Client is still available, but I’m afraid it will not be supported in future releases. I still like the vSphere Client because it’s more robust, stable, and faster. In addition, there are a lot of plug-ins that are still not available in the Web Client, such as the NetApp Virtual Storage Console for VMware vSphere. We use NetApp for our datastores, and this plug-in is very important to us.

Accomplishments as a Toastmaster Club President

As the Toastmaster year draws to a close, I am proud to say that our AbbVie Bioresearch Toastmaster club, where I am the President, achieved the President’s Distinguished Club award, the highest award a club can get. We obtained this award because:

1. two of our members completed the ten Competent Communicator speech projects;
2. two of our members completed the ten Advanced Communicator speech projects;
3. two of our members completed the ten Competent Leadership projects;
4. two of our members completed the Advanced Leadership projects;
5. we signed up 13 new members;
6. all of our officers were trained in both the summer and winter Toastmaster Leadership Institute (TLI) trainings;
7. we submitted our membership dues on time;
8. we submitted our officer and member lists on time;
9. and we maintained a membership base of 38.

In addition, our club sponsored a Youth Leadership Program, an eight-session, workshop-style program, designed to enable the youth to develop communication and leadership skills through practical experience. Our club also premièred the movie “Speak,” a powerful and inspiring documentary about conquering life’s hurdles and finding your voice.

We also held several Open Houses to attract new members and Speech Contests to enhance our members’ educational experience.

All of these accomplishments were made possible because of the untiring and enthusiastic efforts of our officers and members.

Being a Toastmasters officer is challenging. I have to constantly motivate people to attend the meetings, volunteer for roles, and finish their communication and leadership projects. But it is a very rewarding experience. I learned practical skills in leadership, management, and organization. I learned “people skills” such as interpersonal communication, conflict resolution, and patience. But there is no better reward than knowing that our club members are getting better at their speaking and leadership skills.

I signed up for an Area Governor position for the next Toastmaster year and I am looking forward to bigger challenges.

The Importance of Disaster Recovery (DR) Testing

Recently, we conducted disaster recovery (DR) testing on one of our crucial applications. The server was running Windows 2008 on a physical HP box. We performed a bare metal restore (BMR) using Symantec NetBackup 7.1. However, after Symantec BMR completed the restore, the server would not boot up. We troubleshot the problem and tried several configurations, and it took a couple of days before we figured out the issue. The issue, by the way, was that the boot sector got misaligned during the restore, and we had to use the Windows installation disk to repair it.
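
For reference, the repair was done from the recovery environment on the Windows installation disk; a typical boot-record repair sequence (a rough sketch, not necessarily the exact commands we ran) looks like this:

rem From the installation disk, choose "Repair your computer" and open a Command Prompt
rem Rewrite the master boot record
bootrec /fixmbr
rem Write a new boot sector to the system partition
bootrec /fixboot
rem Rescan the disks for Windows installations and rebuild the BCD store
bootrec /rebuildbcd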

What if it had been a real server disaster? The business cannot wait a couple of days to restore the server. We defined an RTO (Recovery Time Objective) of 8 hours for that server, and we did not meet it during our testing. This is why DR testing is very important.

During DR testing, we have to test both the restore technology and the restore procedures. In addition, we need to verify that we can restore within the required time (RTO) and that we can restore data to a specific point in time (RPO, or Recovery Point Objective), e.g., from a day before or from a week ago.

With a lot of companies outsourcing their DR to third parties or to the cloud, DR testing becomes even more important. How do you know if the restore works? How do you know if their DR solution meets your RPO and RTO? Companies assume that because backups are being done, restores will automatically work.

We perform DR testing once a year, but for crucial applications and data, I recommend DR testing twice a year. Also, perform a test every time you make significant changes to your backup infrastructure, such as software updates.