
End User Experience on Enterprise IT

Enterprise IT departments have put a lot of focus on adopting BYOD (Bring Your Own Device), driven by the popularity of mobile phones and tablets and the cost savings they bring to companies. However, I believe equal focus should be given to enterprise applications to enhance the end user experience. Numerous enterprise applications are still antiquated, difficult to use, and not even suitable for mobile devices.

One of the goals of enterprise IT is to provide an excellent user experience and thus increase end user productivity. If the hardware devices are state-of-the-art mobile phones and tablets but the apps are very hard to use, the purpose is defeated.

For instance, searching for information inside the enterprise is still very difficult. Information is scattered across different file servers and applications. Very few companies have Google-like enterprise search capability. People are frustrated because it’s easier to search just about anything on the Internet, but it’s very difficult to find simple information inside the enterprise.

Enterprise applications should be like consumer IT applications, such as those provided by innovative companies like Amazon, Google, and Facebook. These web-based or mobile-based enterprise apps should be user friendly and intuitive. In addition, training should not be required to use them. Google does not require us to take training whenever it deploys a new consumer app.

Enterprise apps should also be secure, just like those provided by online banking sites. Data should be encrypted and users properly authenticated.

End users should have the same experience at work using enterprise applications as they have at home shopping, banking, and searching online.

Migrating Files to EMC VNX

There are several methodologies for migrating files to EMC VNX. One method I used recently to migrate Windows files (CIFS) was to copy the files from the source CIFS server to the target VNX CIFS server using the emcopy suite of migration tools from EMC. EMC provides these free tools, including emcopy, lgdup, and sharedup, to accomplish the migration task. There are several steps you need to follow for a successful migration. In general, I used the following procedure:

1. Create the necessary VDM (Virtual Data Mover), File System, and CIFS server on the target VNX machine.
2. Create the CIFS share. Copy the share permissions (or ACLs) and the NTFS root folder ACLs from the old share to the new share. You can use the sharedup.exe utility for this.
3. Use the lgdup.exe utility to copy local groups from the source CIFS server to the target CIFS server.
4. Run emcopy.exe to perform the baseline copy (see the sample command after this list).
5. Create an emcopy script to sync the files every night. This will tremendously cut the time needed to update the files on the final day of migration.
6. Analyze the emcopy logs to make sure files are being copied successfully. You may also spot-check the ACLs and/or run tools to compare files and directories between the source and target.
7. On the day of the cutover:
a. Disconnect users from the source CIFS server and make the file system read-only.
b. Run the final emcopy script.
c. Follow EMC156835 to rename the CIFS server so that the new CIFS server takes the name of the old one. This procedure entails unjoining the source and target CIFS servers from Active Directory (AD), renaming the NetBIOS name on the new CIFS server, rejoining the CIFS server to AD, and so on. Update the DNS records as well if necessary.
8. Check the new CIFS shares and make sure users are able to read and write on the shares.
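
For illustration, the baseline copy (step 4) used an emcopy command along the lines of the one below. The server and share names are placeholders, and the switches shown are from memory, so verify them against the emcopy documentation for your version before running anything like this. The intent is to copy subdirectories, carry over security information, tolerate errors, and write a log:

c:\> emcopy.exe \\oldcifs\share1 \\newcifs1\share1 /o /s /d /c /r:1 /w:1 /log:c:\migration\share1.log

The same command can then be scheduled nightly for the incremental syncs in step 5.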

To migrate UNIX/Linux files, use the rsync utility to copy files from the source server to the target VNX.
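
A minimal sketch of the rsync copy, assuming the source file system and the target VNX NFS export are mounted on a Linux host at the placeholder paths /mnt/old_source and /mnt/vnx_target:

# rsync -avH --numeric-ids --delete /mnt/old_source/ /mnt/vnx_target/

The -a flag preserves permissions, ownership, and timestamps, -H preserves hard links, and --delete removes files on the target that no longer exist on the source; re-running the command before cutover copies only the changes.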

Moving a Qtree Snapmirror Source in NetApp Protection Manager

A couple of weeks ago, one of the volumes in our NetApp filer storage almost ran out of space. I could not expand the volume since its aggregate was also low on space, so I had to move the data to a different volume in an aggregate with plenty of space on the same filer. The problem was that this volume contained qtrees that are snapmirrored to our Disaster Recovery (DR) site, and the relationships are managed by NetApp Protection Manager. How do you move the qtree SnapMirror sources without re-baselining the SnapMirror relationships in Protection Manager? Unfortunately, there is no way to do this using Protection Manager, and re-baselining was not an option: the volume holds terabytes of data that could take a couple of weeks to transfer.

Like any sane IT professional, I googled how to do this. I did not find a straightforward solution, but I found bits and pieces of information, which I consolidated into the steps below. Generally, they are a combination of SnapMirror CLI commands and Protection Manager configuration tasks.

1. On the CLI of the original source filer, copy the original qtree to a new qtree on a new volume by using the following command:

sourcefiler> snapmirror initialize -S sourcefiler:/vol/oldsourcevol/qtree sourcefiler:/vol/newsourcevol/qtree

This initial transfer took some time, and I also updated /etc/snapmirror.conf so that SnapMirror updates the new qtree daily.
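
For reference, a daily qtree SnapMirror schedule in /etc/snapmirror.conf looks something like the entry below (the last four fields are minute, hour, day-of-month, and day-of-week, so this example updates at 1:00 AM every day):

sourcefiler:/vol/oldsourcevol/qtree sourcefiler:/vol/newsourcevol/qtree - 0 1 * *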

2. On the day of the cutover, perform a final SnapMirror update to the new volume. Before doing this, make sure that nobody is accessing the data by removing the share.

sourcefiler> snapmirror update sourcefiler:/vol/newsourcevol/qtree

3. Log in to the Operations Manager server and run the following at the command prompt:

c:\> dfm option set dpReaperCleanupMode=Never

This prevents Protection Manager's reaper from cleaning up any relationships.

4. Issue the following command to relinquish the primary and secondary member:

c:\> dfpm dataset relinquish destination-qtree-name

This marks the SnapMirror relationship as external, and Protection Manager will no longer manage it.

5. Using the NetApp Management Console GUI, remove the primary member from the dataset, then remove the secondary member.

6. On the CLI of the source filer, create a manual snapshot copy by using the following command:

sourcefiler> snap create oldsourcevol common_Snapshot

7. Update the destinations by using the following commands:

sourcefiler> snapmirror update -c common_Snapshot -s common_Snapshot -S sourcefiler:/vol/oldsourcevol/qtree sourcefiler:/vol/newsourcevol/qtree

destinationfiler> snapmirror update -c common_Snapshot -s common_Snapshot -S sourcefiler:/vol/oldsourcevol/qtree destinationfiler:/vol/destinationvol/qtree

8. Quiesce and break the SnapMirror relationships between the source and destination filers and between the old source and new source volumes, using the following commands:

destinationfiler> snapmirror quiesce /vol/destinationvol/qtree
destinationfiler> snapmirror break /vol/destinationvol/qtree
sourcefiler> snapmirror quiesce /vol/newsourcevol/qtree
sourcefiler> snapmirror break /vol/newsourcevol/qtree

9. Establish the new SnapMirror relationship by using the following command on the destination system:

destinationfiler> snapmirror resync -S sourcefiler:/vol/newsourcevol/qtree destinationfiler:/vol/destinationvol/qtree

The new SnapMirror relationship automatically picks the newest common Snapshot copy for replication; in this case, that is the common_Snapshot copy created earlier.

10. Verify that the SnapMirror relationship is resynchronizing by using the following command:

destinationfiler> snapmirror status

11. Recreate the shares on the new source volume.
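
For a CIFS share on a 7-mode filer, this can be done from the CLI; the share name below is a placeholder:

sourcefiler> cifs shares -add share1 /vol/newsourcevol/qtree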

12. At this point, the SnapMirror relationship appears under the External Relationship tab of the Protection Manager GUI console.

13. Create a new dataset with the required policy and schedule, then use the import wizard to import the SnapMirror relationship into the new dataset.

14. At the Operations Manager server command prompt, set the reaper cleanup mode back to orphans:

c:\> dfm option set dpReaperCleanupMode=orphans

Please send me a note if you need more information.

Sources

https://library.netapp.com/ecmdocs/ECMP1196991/html/GUID-301E424F-62C5-4C89-9435-F13202A1E4B6.html
https://communities.netapp.com/message/44365

IT Converged Infrastructure

Is converged infrastructure the future? Major technology companies are now offering integrated compute, storage, and network in a box. Leading the pack is the Vblock system by VCE. Vblock consists of hardware and software from Cisco, EMC, and VMware.

Similarly, servers, storage, and network vendors are also offering their own integrated system. NetApp, a storage vendor, is selling FlexPod. A FlexPod combines NetApp storage systems, Cisco Unified Computing System servers, and Cisco Nexus fabric into a single, flexible architecture.

Cisco, a networking company, has been selling its x86-based Unified Computing System servers for years and recently bought Whiptail, a high-performance storage company, to enhance its unified infrastructure offering. HP, a server company, is offering the POD solution.

These converged infrastructure solutions are not only suited for small or medium-sized data centers; they are also engineered for large-scale, high-performance, and highly reliable data centers. In addition, security, automation, and monitoring are built into the package.

With these solutions, companies do not need to spend time and money architecting and integrating servers, storage, and networks. Most importantly, operations and vendor support will be simplified. There will only be one point of contact for vendor support and finger pointing between vendors will be minimized.

Information Security Conference

I recently attended the 2013 (ISC)2 Annual Security Congress, held in Chicago, IL on September 23-27. The conference was held in conjunction with the ASIS International security conference. It was one of the premier conferences attended by security professionals from all over the world, and it was a huge success.

I attended the conference primarily to obtain CPE (Continuing Professional Education) points for my CISSP (Certified Information Systems Security Professional) certification, to learn from experts about the latest technologies and trends in information security, and to network with information security professionals.

The keynote speeches were informative, entertaining, and inspirational. Steve Wozniak (co-founder of Apple) talked about how he got into the world of computing and how hacking – for the sake of learning, inventing, and developing programs – should be fun. Former Prime Minister of Australia, the Hon. John Howard, talked about the qualities of a great leader and the state of the world economy. Mike Ditka (an NFL legend) delivered an inspirational speech on attitude and success.

The sessions on information security varied widely, from governance to technical deep-dives on security tools. Hot topics included cloud security, mobile security, hackers, privacy, and end user awareness. What struck me most was the reason why there are still so many security breaches despite the advances in technology: security is often an afterthought for most companies. Defense-in-depth is not properly implemented, programmers write insecure code (for instance, code that does not guard against SQL injection), and users are not properly trained on security (such as how to choose good passwords and not to click links to phishing sites sent via email).

The world of information security is expanding. As more and more people are using the Internet and more companies are doing business online, the need for security becomes even more important.

Restoring NetApp LUNs

The other day, I was tasked with restoring a NetApp LUN from several years ago. Since the backup was so old, I had to restore it from an archived tape. After I restored the file, it did not show up as a LUN. It turned out that there are a few rules to follow so that a LUN restored from tape shows up as a LUN.

There are also a couple of requirements when backing up LUNs to tape. Thankfully, when we backed up the LUNs, we followed these rules:

1. The data residing on the LUN should be in a quiesced state prior to backup so the file system caches are committed to disk.

2. The LUN must be backed up using an NDMP-compliant backup application in order for it to retain the properties of a LUN. Symantec NetBackup is an NDMP-compliant backup application.

When restoring, I learned that:

1. It must be restored to the root of a volume or qtree. If it is restored anywhere else, it will simply show up as a file and lose the metadata allowing it to be recognized as a LUN.

2. The backup software should not add an additional directory above the LUN when it is restored.

For instance, in the Symantec NetBackup application, when restoring the LUN you should select the option to “Restore everything to its original location.” If this is not possible, you can select the second option, “Restore everything to a different location, maintaining existing structure.” This means that you can restore it to a different volume.

For example, if the LUN resides in /vol/vol1/qtree1/lun1 and we chose to restore to /vol/vol2, the location where the LUN would be restored is /vol/vol2/qtree1/lun1 because it maintains the existing structure.

Do not select the third option, “Restore individual directories and files to different locations,” because the NetBackup software will add an extra directory beyond the qtree and the LUN will not show up as a LUN.

When the restore is complete, the output of “lun show -v” on the NetApp CLI will show the restored LUN in the /vol/vol2 volume.

Deduplication and Disk-based Backup

Disk-based backup has been around for a long time now. I remember when I first deployed a Data Domain DD560 appliance eight years ago; I was impressed by its deduplication technology. Data Domain appliances are disk storage arrays used primarily for backup, and we used ours as a disk backup destination for Symantec NetBackup via NFS. Data Domain sets itself apart because of its deduplication technology. For instance, in our environment we are getting a total compression (reduction) of 20x, which means that our 124TB of data uses only 6TB of space – a very large space savings indeed.

In addition, disk-based backup makes it very fast and easy to restore data. The need to find tapes, retrieve them, and mount them has been eliminated.

In fact, the need for tapes can be eliminated entirely. A lot of companies still use tapes so that backups can be stored off site for disaster recovery. If you have a remote office site, a Data Domain appliance can be replicated to your remote site (or disaster recovery site). What is nice about Data Domain replication is that only the deduped data is sent over the wire, which means less bandwidth consumption on your WAN (Wide Area Network).

Deduplication technology is getting better. Storage vendors and backup software vendors are offering it in different flavors.  With the cost of disk going down, there is really no need for tapes. Even long term retention and archiving of data can now be stored on a low cost, deduped disk storage array.

The Evolving Role of IT Professionals

I started my career in software development. I wrote code and performed systems analysis and software quality assurance. Then I switched to system and network administration and infrastructure architecture. While the role of software developers may not change that much (software programs still need to be created), the roles of IT administrators, architects, analysts, and IT departments in general are changing. This is due to cheap hardware, smarter software and appliances, and the availability of the cloud.

I still remember when I would spend a lot of time troubleshooting a system. Today, thanks to redundant systems, off-the-shelf and online applications, and the use of appliances, troubleshooting time has been reduced to a minimum. When a component breaks, it is easy to replace.

IT companies are now selling converged network, server, and storage in a box, which eliminates the need for elaborate architecture and implementation and simplifies IT operations.

With virtualization and the “cloud,” more and more applications and IT services (infrastructure as a service, software as a service, etc.) are becoming available online.

When it comes to IT, companies now have various choices – host their IT services externally in a public cloud, build IT systems in house, or use a combination of the two.

Thus, the future role of IT professionals will be more like that of brokers. When the business comes to them with a need, they should be able to deliver quickly and provide the best IT solution. They should be able to determine when to use the public cloud and when to use internal IT systems. The key is to understand the business. For instance, it may not make sense to put data in the cloud if you are concerned about security or if your company is regulated by the government. If your company is small, it may not make sense to build a costly IT infrastructure in house.

Successful IT professionals are not only technically savvy but also business savvy.

Best Practices for Using NFS Datastore on VMware

More companies are now deploying VMware with IP-based shared storage (NAS). NAS storage is cheaper than Fibre Channel (SAN) storage because there is no separate Fibre Channel (FC) network to maintain. More importantly, the performance and stability of IP-based storage are now comparable with those of FC-based storage.

Other advantages of using IP-based storage, specifically NFS, are thin provisioning, deduplication, and the ease of backing up and restoring virtual machines and files on a virtual disk via array-based snapshots. In addition, IP-based storage is easier to maintain.

VMware published a whitepaper on the best practices for running VMware vSphere on Network Attached Storage (NAS) using NFS. Following the best practices in deploying an NFS based storage is very important to obtain a stable and optimized VMware environment. Here are the important things to consider:

On the network side, the local area network (LAN) on which the NFS traffic will run needs to be designed for availability, downtime avoidance, isolation, and failover:

1. NFS traffic should be on a separate physical LAN, or at least on a separate VLAN.
2. Use private (non-routable) IP addresses. This will also address a security concern since NFS traffic is not encrypted and NFS is mounted with root privileges on the VMware host.
3. Use redundancy by teaming the NICs on the VMware host, configuring LACP protocol, and using two LAN switches.
4. Use jumbo frames (an example of enabling them on the host follows this list).
5. Use 10Gb Ethernet.
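
As an illustration of item 4, on an ESXi 5.x host jumbo frames can be enabled on the vSwitch and VMkernel port carrying NFS traffic with commands like the ones below (vSwitch1 and vmk1 are placeholder names, and the physical switch ports must also be configured for an MTU of 9000):

~ # esxcli network vswitch standard set -v vSwitch1 -m 9000
~ # esxcli network ip interface set -i vmk1 -m 9000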

On the storage array side, the storage controller must be redundant, in case the primary one fails. In addition,

1. Configure the NFS exports to be persistent, e.g. with exportfs -p (an example follows this list).
2. Install the VAAI and other plug-in tools from the storage vendor. For instance, NetApp has the Virtual Storage Console (VSC) plug-in that can be installed on the vCenter.
3. Refer to the storage vendor best practices guide. For instance, NetApp and EMC published their own best practice whitepapers for using NFS on VMware.
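
As an example of item 1, a persistent export on a NetApp 7-mode array that grants the ESXi hosts read-write and root access might look like the following (the host names and volume name are made up for illustration):

filer> exportfs -p rw=esx01:esx02,root=esx01:esx02 /vol/nfs_datastore01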

On the VMware hosts, the following configuration should be implemented:

1. Use the same datastore name across all hosts.
2. Set the NIC teaming failback option to “No”. If there is intermittent behavior in the network, this prevents the NICs in use from flip-flopping.
3. If you increase the maximum number of concurrent NFS mount points (from the default of 8), increase Net.TcpipHeapSize as well. For instance, if 32 mount points are used, increase Net.TcpipHeapSize to 30MB.
4. Set the following NFS heartbeat advanced settings (NFS heartbeats are used to determine whether an NFS volume is still available); an esxcli example follows this list:
NFS.HeartbeatFrequency=12
NFS.HeartbeatTimeout=5
NFS.HeartbeatMaxFailures=10
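
On an ESXi 5.x host, these settings can be applied from the command line as sketched below. The values match the ones listed above, but check your storage vendor's current best practices guide since the recommended values change between releases:

~ # esxcli system settings advanced set -o /NFS/HeartbeatFrequency -i 12
~ # esxcli system settings advanced set -o /NFS/HeartbeatTimeout -i 5
~ # esxcli system settings advanced set -o /NFS/HeartbeatMaxFailures -i 10
~ # esxcli system settings advanced set -o /Net/TcpipHeapSize -i 30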

When configured properly, IP based storage, specifically NFS, provides a very solid storage platform for VMware.

NetApp Virtual Storage Console for VMware vSphere

One of the best tools for managing NetApp storage and VMware is a plug-in called NetApp Virtual Storage Console (VSC) for VMware vSphere. VSC gives administrators the ability to manage NetApp storage from the vCenter client. It can configure, monitor, provision, and migrate NetApp datastores with fewer clicks. In addition, it can perform backup and recovery of LUNs and volumes from the vCenter client.

VSC can automatically discover your NetApp storage controllers and ESXi hosts, a task that can take a lot of time without it. VSC can also automatically apply “best practices” settings on the ESXi hosts to optimize their configuration. It can rapidly provision datastores without going through the NetApp management interface. You can take backups (snapshots) of a datastore in a consistent state and perform recovery in minutes.

NetApp's implementation of the vStorage APIs for Array Integration (VAAI) offloads significant processing tasks to the storage array, freeing ESXi resources for other tasks. If you are using NFS, though, you still need to download and install the NetApp NFS Plug-in for VMware VAAI.
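
Installing the NFS VAAI plug-in amounts to a VIB install on each host. A sketch of the command, assuming the offline bundle has been copied to a datastore (the file name and path are placeholders for whatever NetApp currently ships):

~ # esxcli software vib install -d /vmfs/volumes/datastore1/NetAppNasPlugin.zip

The host may need to be placed in maintenance mode and rebooted for the install to take effect.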

For now, the VSC plug-in is only available for the traditional vCenter client. VMware is moving towards replacing the traditional vCenter client with the vSphere Web Client. I hope that NetApp releases the plug-in version for the web client pretty soon.