Category Archives: IT Strategy

Enterprise Search Using Google Search Appliance

One of the pain points for companies these days is how difficult it is to find relevant information inside their corporate networks. I often hear people complain that it is easier to find information on the public Internet using Google or Bing than inside their own enterprise.

Well, Google has been selling its Google Search Appliance (GSA) for many years. The GSA brings Google's superior search technology to a corporate network. It even has the familiar look and feel that people are accustomed to when searching the Internet.

The GSA can index and serve content located on internal websites, documents on file servers, and Microsoft SharePoint repositories.

I recently replaced an old GSA, and quickly remembered how easy and fast it is to deploy. The hardware is a souped-up Dell server with a bright yellow casing. Racking it is a snap, and it comes with instructions on where to plug in the network interfaces. The initial setup is done via a back-to-back network connection to a laptop, where network settings such as the IP address, netmask, gateway, time server, and mail server are configured.

Once the GSA is accessible on the network, the only other thing to do is to configure the initial crawl of the web servers and/or file systems, which may take a couple of hours. Once the documents are indexed, the appliance is ready to answer user search requests.
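Once the index is built, users (and scripts) reach the appliance through plain HTTP GET requests. As a minimal sketch, here is how a query URL could be assembled; the hostname is a placeholder, and the parameter names follow the GSA's published search protocol (`q`, `site`, `client`, `output`, `num`).

```python
from urllib.parse import urlencode

def build_gsa_query(appliance_host, terms, collection="default_collection",
                    frontend="default_frontend", max_results=10):
    """Build a search URL for a Google Search Appliance.

    The appliance answers standard HTTP GET requests: `site` selects the
    collection to search, `client` selects the front end, and
    `output=xml_no_dtd` asks for machine-readable XML results.
    """
    params = {
        "q": terms,                 # the user's search terms
        "site": collection,         # which indexed collection to search
        "client": frontend,         # which front end (look and feel) to use
        "output": "xml_no_dtd",     # XML results, suitable for scripting
        "num": max_results,         # how many results to return
    }
    return "http://%s/search?%s" % (appliance_host, urlencode(params))

# Example: search the intranet index for an HR policy document.
# "gsa.example.com" is a hypothetical appliance hostname.
url = build_gsa_query("gsa.example.com", "vacation policy")
print(url)
```

The same URL scheme is what the appliance's own search page submits behind the scenes, which is why integrating GSA results into other internal applications is straightforward.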

The search appliance has many advanced features and can be customized to your needs. For instance, you can customize the behavior and appearance of the search page, turn the auto-completion feature on or off, and configure security settings so that content is available only to users who are properly authenticated, among many other features.

Internal search engines such as the Google Search Appliance will increase the productivity of corporate employees by helping them save time looking for information.

Redefining Data Center In A Box

Data center in a box is traditionally defined as a “type of data center in which portable, mobile, and modular information nodes are self-contained within a cargo container. It is designed and packaged for quick deployment and acquisition of data center solutions in organizations or facilities, including remote off-site locations.” Data center in a box usually contains equipment from large storage, compute, and network vendors such as EMC, NetApp, Dell, and Cisco, pieced together to form the IT infrastructure. VCE (the Virtual Computing Environment company), for instance, offers Vblock, a bundled product containing EMC storage, Cisco servers, and VMware software. NetApp has a similar offering called FlexPod.

But innovative new companies such as SimpliVity, Nutanix, and Scale Computing are changing the definition of data center in a box. They are building purpose-built products from the ground up that incorporate not just compute, storage, and network, but additional services such as data deduplication, WAN optimization, and backup.

For instance, SimpliVity’s product, OmniCube, is “a powerful data center building block that assimilates the core functions of server, storage and networking in addition to a wide range of advanced functionality including: native VM-level backup, WAN optimization, bandwidth efficient replication for DR, cache accelerated performance, and cloud integration.”

These products will further simplify the design, implementation, and operation of IT infrastructure. With these boxes, there is no storage area network (SAN) to manage, and no additional appliances such as WAN accelerators to deploy. A few virtual machine (VM) administrators can manage all the boxes in a cluster from the VMware server virtualization management interface.

Data center in a box will continue to evolve and will change how we view and manage IT infrastructure for years to come.

IT Infrastructure Qualification and Compliance

One of the requirements of building and operating an IT infrastructure in a highly regulated industry (such as the pharmaceutical industry, which is regulated by the FDA) is to qualify, or validate, the servers, network, and storage as they are built. Once built, any change to the infrastructure should go through a change control procedure.

Building the infrastructure and making changes to it should undergo verification, and both should be documented so they can be easily managed and traced. These activities are really not that different from best practices for operating an IT infrastructure, or even from the ITIL processes.

The FDA does not dictate how to perform IT infrastructure qualification or validation, as long as you have documented, reasonable procedures.

The problem is that some companies overdo validation and change control. The common problems I’ve seen are: (1) too many signatures required to make a change; (2) no automated procedure for the documentation – many still use paper to route documents; and (3) the people who perform the checks and balances sometimes do not understand the technology.

The result is that IT personnel get overwhelmed with paperwork and bureaucracy. This discourages them from making critical changes to the infrastructure, such as applying security patches on time. It also makes them reluctant to bring newer, leading-edge technologies into their infrastructure.

Fortunately, the International Society for Pharmaceutical Engineering (ISPE) has published a Good Automated Manufacturing Practice (GAMP) guide on IT Infrastructure Control and Compliance. Companies can create their own IT infrastructure qualification programs and procedures based on the GAMP guidance document. The procedures should be simple yet comprehensive enough to cover all the bases. It is also important that they be periodically reviewed and streamlined.

IT Infrastructure for Remote Offices

When designing the IT infrastructure (servers, storage, and network) of small remote offices, infrastructure architects at large enterprises often face the question: what is the best IT infrastructure solution for remote sites? A low-cost, simple, secure, and easy-to-support solution always comes to mind, but a positive end-user experience (in terms of network and application performance, and user friendliness) should also be among the top priorities when building the infrastructure.

Most small sites just need access to enterprise applications and to file and print services. Network infrastructure definitely needs to be built: the site’s local area network (LAN), wireless access points, a wide area network (WAN) connection to the enterprise data center, and access to the Internet. The bigger question, though, is: should servers and storage be installed on site?

There are technologies, such as WAN accelerators and “sync and share” applications, that make it possible to forgo installing servers and storage at remote sites without sacrificing end-user experience. For instance, Riverbed WAN accelerator products tremendously improve access to files and applications from remote sites to the enterprise data center. These products can even serve up remote datastores for VMware farms. “Sync and share” applications are Dropbox-like applications (such as EMC Syncplicity). Enterprises can build a storage-as-a-service solution on their internal infrastructure, eliminating the need to install file servers or storage appliances at remote sites.

The decision to “install servers” or “go serverless” at remote sites still depends on many factors. It should be made on a case-by-case basis, not with a cookie-cutter solution. Some of the criteria to consider are: the number of people at the site and its growth projection; the storage size requirement; the available WAN bandwidth; the presence or absence of local IT support; office politics; and country- or region-specific regulations requiring data to remain local. If these issues are factored in, a better solution can be designed for remote offices.
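The criteria above can be sketched as a simple decision aid. This is illustrative only: the thresholds are assumptions I picked for the example, not recommendations, and softer factors like office politics and growth projections still need human judgment.

```python
def recommend_remote_site_design(headcount, storage_tb, wan_mbps,
                                 has_local_it, data_must_stay_local):
    """Rough decision aid: 'install servers' vs 'go serverless'.

    All thresholds are illustrative assumptions; a real assessment
    would be made case by case, weighing growth, politics, and cost.
    """
    # Regulation trumps everything: if data must stay in-country,
    # local servers and storage are required.
    if data_must_stay_local:
        return "install servers"
    # Small office, decent WAN, modest storage: serve it all centrally
    # via WAN acceleration and sync-and-share.
    if headcount < 50 and wan_mbps >= 10 and storage_tb < 5:
        return "go serverless"
    # Bigger sites are a judgment call; default to local servers only
    # when someone is on site to run them.
    return "install servers" if has_local_it else "go serverless"

# A 20-person office with 1 TB of data and a 50 Mbps WAN link:
print(recommend_remote_site_design(20, 1, 50, False, False))  # go serverless
```

Even as a toy, the ordering of the checks captures the point of the paragraph: regulatory constraints come first, and a cookie-cutter answer applies to nobody.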

Big Data

There is a lot of hype around big data these days, promising the next big revolution in information technology that will change the way we do business. It purports to have a big impact on the economy, science, and society at large. In fact, big data is currently at the “peak of inflated expectations” on the Gartner technology hype cycle.

Big data “refers to our burgeoning ability to crunch vast collections of information, analyze it instantly, and draw sometimes profoundly surprising conclusions from it.” It answers questions that are sometimes not so obvious.

Big data definitely has tremendous potential. After the hype has subsided, entities that do not take advantage of its power will be left behind. In fact, big data is already being used by technology companies such as Google, Amazon, and Facebook, and IT vendors such as Oracle, EMC, and IBM have started offering big data solutions for enterprises.

Three drivers are making big data possible:

First, a robust and cheap IT infrastructure: powerful server platforms that crunch data, advanced storage systems that store huge amounts of data, and ubiquitous networks (Wi-Fi, 4G, fiber, etc.).

Second, the explosion of data from mobile devices, social networks, web searches, sensors, and many other sources.

Lastly, the proliferation of powerful analytics and data mining tools suited to big data, such as Hadoop, MapReduce, and NoSQL databases, with many more yet to be created. These tools will only get better.
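To make the MapReduce idea behind Hadoop concrete, here is the classic word-count example collapsed into plain Python. The three functions mirror the three phases a framework like Hadoop runs at scale across a cluster; everything here (the function names, the sample documents) is just illustration.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in one document."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as the framework would."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

# In Hadoop, map tasks run in parallel on chunks of the data set;
# here we just loop over two tiny "documents".
documents = ["big data big impact", "data drives decisions"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["data"])  # 2
```

The power of the model is that map and reduce are independent per key, so the same logic scales from two sentences to petabytes by adding machines, not by changing the code.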

I recently read the book entitled “Big Data: A Revolution That Will Transform How We Live, Work, and Think” by Viktor Mayer-Schönberger and Kenneth Cukier.

The book is spot-on in its predictions. With big data, there will be yet another paradigm shift in how we understand the world. With big data, “what” becomes more important than “why.” Big data also means processing complete data sets, not just samples, and accepting less-than-perfect accuracy in the results.

The book also discusses the dark side of big data, such as the loss of privacy. It examines how big data predictions can be used to police and punish individuals, and how organizations may blindly defer to what the data says without understanding its limitations.

I highly recommend the book to those who would like to fully understand big data and its implications.

Network and Server Monitoring Using Open Source Tools

I am a fan of open source tools. The Internet as we know it today would not exist if not for the open source movement. We owe this to the countless architects and developers who dedicated their time and effort to writing open source software.

Enterprise IT departments can also take advantage of open source software. Numerous companies have been using them for years. One particular area where they can be used is network and server monitoring.

There are a lot of open source network monitoring tools out there. Leading the pack are Nagios, Zabbix, and Cacti. My favorite tool, though, is OpenNMS. I particularly like it because it is very easy to set up and administer. It can automatically discover the nodes on your network, and very few tweaks were needed when I first set it up. It provides simple event management and notification via email or pager. In addition, its web-based management interface is very easy to use.
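At their core, what these tools automate is the humble service check, run on a schedule across hundreds of nodes, with events raised when a check starts failing. A minimal sketch of one such check in Python (the hostnames below are placeholders, and real pollers like OpenNMS do far more: discovery, scheduling, escalation, graphing):

```python
import socket

def check_tcp_service(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    This is the essence of a service poller: connect, and treat any
    refusal, timeout, or DNS failure as the service being down.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Poll a few (placeholder) services and report their status; a real
# monitoring tool would do this continuously and notify on changes.
services = [("intranet.example.com", 80), ("mail.example.com", 25)]
for host, port in services:
    status = "UP" if check_tcp_service(host, port) else "DOWN"
    print("%s:%d %s" % (host, port, status))
```

Knowing that the underlying checks are this simple makes it easier to trust, and extend, whichever open source monitoring tool you settle on.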

I have been using OpenNMS for several years now and it has been rock solid. I definitely recommend it for IT departments that do not want to pay a hefty price to monitor their networks and servers.

End User Experience on Enterprise IT

Enterprise IT departments have put a lot of focus on adopting BYOD (Bring Your Own Device), due to the popularity of mobile phones and tablets and the cost savings to companies. However, I believe equal focus should be given to enterprise applications to enhance the end-user experience. Numerous enterprise applications are still antiquated, difficult to use, and not even suitable for mobile devices.

One of the goals of enterprise IT is to provide an excellent user experience, thus increasing end-user productivity. If the hardware devices are state-of-the-art mobile phones and tablets but the apps are very hard to use, then the purpose is defeated.

For instance, searching for information inside the enterprise is still very difficult. Information is scattered across different file servers and applications. Very few companies have Google-like enterprise search capability. People are frustrated because it’s easier to search just about anything on the Internet, but it’s very difficult to find simple information inside the enterprise.

Enterprise applications should be like consumer IT applications, such as those provided by innovative companies like Amazon, Google, and Facebook. These web-based or mobile-based enterprise apps should be user friendly and intuitive, and training should not be required to use them. Google does not require us to take training whenever it deploys a new consumer app.

Enterprise apps should also be secure, just like those provided by online banking sites. Data should be encrypted and users properly authenticated.

End users should have the same experience at work using enterprise applications as they do at home shopping, banking, and searching online.

IT Converged Infrastructure

Is converged infrastructure the future? Major technology companies are now offering integrated compute, storage, and network in a box. Leading the pack is the Vblock system by VCE. Vblock consists of hardware and software from Cisco, EMC, and VMware.

Similarly, servers, storage, and network vendors are also offering their own integrated system. NetApp, a storage vendor, is selling FlexPod. A FlexPod combines NetApp storage systems, Cisco Unified Computing System servers, and Cisco Nexus fabric into a single, flexible architecture.

Cisco, a networking company, has been selling x86 Unified Computing System for years and recently bought Whiptail, a high performance storage company, to enhance their unified infrastructure offering. HP, a server company, is offering the POD solution.

These converged infrastructure solutions are not only suited to small or medium-sized data centers; they are engineered for large-scale, high-performance, highly reliable data centers. In addition, security, automation, and monitoring are built into the package.

With these solutions, companies do not need to spend time and money architecting and integrating servers, storage, and networks. Most importantly, operations and vendor support are simplified: there is only one point of contact for support, and finger pointing between vendors is minimized.

The Evolving Role of IT Professionals

I started my career in software development, where I wrote code and performed systems analysis and software quality assurance. Then I switched to system and network administration, and infrastructure architecture. While the role of software developers may not change much (software still needs to be written), the roles of IT administrators, architects, analysts, and IT departments in general are changing. This is due to cheap hardware, smarter software and appliances, and the availability of the cloud.

I still remember when I would spend a lot of time troubleshooting a system. Today, thanks to redundant systems, off-the-shelf and online applications, and the use of appliances, troubleshooting time has been reduced to a minimum. When a component breaks, it is easy to replace.

IT companies are now selling converged network, server, and storage systems in a box, which eliminates the need for elaborate architecture and implementation and simplifies IT operations.

With virtualization and the “cloud”, more and more applications and IT services (infrastructure as a service, software as a service, etc.) are becoming available online.

When it comes to IT, companies now have various choices: host their IT services externally in the public cloud, build IT systems in house, or use a combination of the two.

Thus, the future role of IT professionals will be that of brokers. When the business comes to them with a need, they should be able to deliver quickly and provide the best IT solution. They should be able to determine when to use the public cloud and when to use internal IT systems. The key is to understand the business. For instance, it may not make sense to put data in the cloud if you are concerned about security or if your company is regulated by the government. If your company is small, it may not make sense to build a costly IT infrastructure in house.

Successful IT professionals are not only technically savvy but also business savvy.

The Value of IT Certifications

I recently passed the VMware Certified Professional 5 – Data Center Virtualization exam. The last VMware certification I took was in 2007 when I passed the VMware Certified Professional 3 exam. It’s nice to have the latest VMware certification under my belt.

VMware certification is a little unique because it requires a week of training as well as hands-on experience. You will find it difficult to pass the test without that experience: most of the questions are real-life scenarios, and you can only understand them if you have encountered them in practice.

Some people question the value of certifications. They say that certifications are useless because many of those who hold them are inexperienced. I agree that experience is the best way to learn in the IT field; I can attest to this after almost 20 years in it. But IT certifications are valuable for the following reasons:

1. Not all IT certifications are created equal. While some can be passed just by reading books, most IT certifications, such as VCP (VMware Certified Professional), CISSP (Certified Information Systems Security Professional), and RHCE (Red Hat Certified Engineer), require a high degree of experience to pass.

2. Not all people are lucky enough to have access to expensive hardware to gain hands-on experience, nor to be assigned to IT projects with maximum exposure. Some take the certification route to gain knowledge and experience.

3. Not all IT knowledge is learned through experience, since not every scenario comes up in real life. Some of it is learned by reading books and magazines, taking training, and passing certification tests. For instance, if your company’s standard is Fibre Channel for VMware datastores, the only way to learn about iSCSI datastores is to read up or get trained on them.

4. IT certifications are solid evidence of your career accomplishments. They are very useful, for instance, when looking for a job: prospective employers have no concrete evidence of what you have done, but a solid, trusted IT certification can prove your worth.

5. And finally, seasoned IT professionals like me take certification tests to validate our knowledge.