
Data-centric Security

Data is one of an organization’s most important assets; hence, it must be secured and protected. Data constantly moves in and out of an organization’s internal network in the course of doing business. These days, data resides in the cloud and travels to employees’ mobile devices and to business partners’ networks. Laptops and USB drives containing sensitive information sometimes get lost or stolen.

In order to protect the data, security must travel with the data. For a long time, the focus of security has been on the network and on the devices where the data resides, but infrastructure security such as firewalls and intrusion prevention systems is no longer enough. The focus should now shift to protecting the data itself.

Data-centric security is very useful in dealing with data breaches, especially when the data contains sensitive information such as personally identifiable information, financial information and credit card numbers, health information, and intellectual property.

The key to data-centric security is strong encryption: if hackers or the public get hold of sensitive data, it shows up as garbled information that is pretty much useless to them. To implement robust data-centric security, the following should be considered (a brief encryption sketch follows the list):

1. Strong data at rest encryption on the server/storage side, applications and databases.
2. Strong in-transit encryption using public key infrastructure (PKI).
3. Effective management of encryption keys.
4. Centralized control of security policy that enforces standards and protection on data stored on endpoint devices or on central servers and storage.
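
As a concrete illustration of items 1 and 3, file-level encryption at rest can be scripted with standard tools. Below is a minimal sketch using OpenSSL symmetric encryption; the file names and key file location are placeholders, the -pbkdf2 option requires a reasonably recent OpenSSL release, and a real deployment would pair this with proper key management rather than a key file sitting on the same host.

$ # Encrypt a sensitive file at rest with AES-256, reading key material from a protected key file
$ openssl enc -aes-256-cbc -salt -pbkdf2 -in customers.csv -out customers.csv.enc -pass file:/etc/keys/data.key
$ # Decrypt the file when the data is needed again
$ openssl enc -d -aes-256-cbc -pbkdf2 -in customers.csv.enc -out customers.csv -pass file:/etc/keys/data.key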

Cybersecurity Insurance

I recently attended the SC Security Congress in NY. One of the hot topics was cybersecurity insurance. As we’ve seen in the news, many companies are suffering from cyber attacks, and one of the mitigating solutions for these companies is to transfer the financial risk of a security breach to insurers.

There is a growing number of insurance companies offering this financial service. But is there really a need for it? I believe there is. Being hacked is no longer a matter of “if” but “when”; every company will suffer a security breach in some form or another. Cybersecurity insurance also gives a company an incentive to tighten up or improve its security measures. While it cannot reduce the damage to a company’s reputation, nor cover intellectual property theft and the business downturn caused by an attack, it will lessen the financial damage to a company when hackers attack its site.

Delivering Centralized Data with Local Performance to Remote Offices

One of the challenges large companies face is how to build and support IT infrastructure (network, server, and storage) for remote offices. Remote offices rarely have full-time IT employees, because employing dedicated staff to support a small IT footprint is usually cost prohibitive. In addition, large companies are very protective of their data and want it centralized at their data centers, sitting on a reliable and well-protected infrastructure.

However, centralizing the infrastructure and the data may lead to poor performance at the local site, especially if WAN bandwidth is limited and latency is high.

Enter Riverbed SteelFusion. SteelFusion is a branch converged infrastructure solution that centralizes data in the data center while delivering local performance and nearly instant recovery at the branch. It does this by consolidating branch servers, storage, networking, and virtualization infrastructure into a single solution.

With SteelFusion, a virtual machine that will act as a branch file or application server is provisioned at the data center, where a SteelFusion Core is located, and is projected to the branch via the SteelFusion Edge appliance located at the branch office.

SteelFusion has the following advantages:

1. No more costly installation and maintenance of servers and storage at the branch office.
2. LAN performance in the branch, which will make end users happy.
3. Centralized management of storage, servers, and data at the data center.
4. No more costly branch backup (such as backup hardware and software, tape media, backup management, off-site handling, etc.).
5. Improved recovery of servers and applications.
6. Quick provisioning of servers.
7. Data remains secure in the data center in case the branch office suffers a disaster or theft.

Riverbed SteelFusion makes it possible to deliver data and applications located at the data center to branch and remote offices while maintaining local-area performance.

The Importance of Threat Intelligence to Your Company’s Information Security

One of the tools that helps identify and combat information security threats to your company is “threat intelligence.” Some companies are building their own threat intelligence programs, and some are buying services from providers that offer them. Threat intelligence is information that has been analyzed to produce actionable insights: high-quality information that helps your company make decisions. It acts like an early warning system that helps your company prioritize vulnerabilities, predict threats, and prevent the next attack on your systems.

Threat information can come from different sources:

1. Internal sources such as information coming from internal employees, organizational behaviors and activities
2. External sources such as government agencies, websites, blogs, tweets, and news feeds
3. Logs from network equipment, whether from your own network, from Internet Service Providers, or from telecom carriers
4. Logs from security equipment (firewalls, IPS, etc.), servers, and applications
5. Managed security providers that aggregate data and crowd-source information

The challenge of threat intelligence is putting together the pieces gathered from these different sources. A tool that can digest all of this data (Hadoop and MapReduce tools for big data come to mind) is necessary to produce meaningful information. Security data analysts are also key to producing actionable threat intelligence from this wide variety of data.
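
Even without a full big data stack, a simple cross-source correlation can be scripted. The following is a minimal sketch that counts how often known-bad IP addresses from an external feed show up in a firewall log; the file names and the assumption that the source IP is the first field of each log line are hypothetical.

$ # badips.txt: one known-bad IP per line, pulled from an external threat feed
$ awk '{print $1}' firewall.log | grep -Fxf badips.txt | sort | uniq -c | sort -rn | head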

D2D2T vs. D2D2C

Disk-to-disk-to-tape (D2D2T) is a type of computer storage backup in which data is copied first to backup storage on disk and then copied again to tape media. This two-tiered approach provides a quick short-term recovery option, since backup data can easily be retrieved from disk, as well as more reliable long-term archiving and disaster recovery on tape, since tape media can be stored off-site.

But using tape has its drawbacks, and tape handling is one of them. Since handling is usually a manual process, human error is always a possibility: tapes get misplaced, lost, or damaged in transit to the off-site location; personnel forget to run the backup to tape; and backups fail because of tape device errors or insufficient space on the tape.

One alternative to D2D2T that is gaining popularity these days is disk-to-disk-to-cloud (D2D2C). With D2D2C, the tape tier is simply replaced with cloud storage. A D2D2C backup involves backing up server drives to disk-based storage and then running scheduled backups that archive the backup data off to a cloud-based location. For short-term recovery operations, the backups on disk are used for restoration.
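
A bare-bones version of this workflow can be scripted. The sketch below assumes the data lives under /srv/data, the local backup disk is mounted at /backup, and the cloud tier is an S3 bucket reachable through the AWS CLI; the paths and bucket name are placeholders.

$ # Tier 1: back up the data directory to local disk (short-term restores come from here)
$ tar -czf /backup/files-$(date +%F).tar.gz /srv/data
$ # Tier 2: copy the archive to cloud storage during off hours (long-term retention and DR)
$ aws s3 cp /backup/files-$(date +%F).tar.gz s3://example-backup-bucket/daily/ --storage-class STANDARD_IA

In practice the second step would run from cron during off hours, and most cloud backup tools send only incremental changes rather than full archives.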

D2D2C offers several advantages:

1. No more manual handling of tape media to send off-site, which eliminates tape-handling errors.
2. Easier and faster restores (a tape restore can be a step-by-step manual process: retrieve the tape from the off-site location, mount the tape, search the backup catalogue, restore the data).
3. Data can be restored anywhere.
4. Data transfer to the cloud can occur during off hours, so it does not impact the business.
5. Cloud backups are usually incremental in nature, which reduces the amount of data sent over the network.

However, there are also some disadvantages of using D2D2C. For small offices especially, sometimes the WAN or Internet bandwidth can be a limiting factor. Also, backup to the cloud is still relatively expensive.

Bare Metal Restore

One important capability of a disaster recovery plan is the ability to do a bare metal restore (BMR). A BMR is a restore of your entire server to new or original hardware after a catastrophic failure. A BMR can be done manually, by reformatting the computer from scratch, reinstalling the operating system, reinstalling software applications, and restoring data and settings; or automatically, by using BMR tools that facilitate the process. The manual process, however, takes time and is error prone, while BMR tools make recovery fast and easy.

With the majority of servers now virtualized, what is the use of BMR? With virtualization, especially when using image-level backups, there is no need for specialized BMR tools. However, there are still servers that cannot be virtualized (applications requiring a hardware dongle, systems requiring extreme performance, applications or databases with license agreements that do not permit virtualization, etc.). For these systems, which run on physical servers, BMR is critical to recovery.

Backup vendors usually have a bare metal restore capability integrated into their packages, but it is often not enough. There are also software vendors that specialize in bare metal recovery.

Typically, a bare metal restore process involves:
1. Generating an ISO recovery image.
2. Using the ISO image to boot the system to be recovered.
3. Once in the restore environment, setting up the network connection (IP address, netmask, etc.) so the system can connect to the backup server that holds the image (see the sketch after this list).
4. Verifying the disk partitions and mapping.
5. Stepping through the restore wizard, such as choosing the image file (point in time) you want to restore and the partition (or unallocated space) to which you want to restore it.
6. Performing any post-recovery tasks, such as checking the original IP address and verifying that application services are running.
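
To make step 3 concrete, here is a minimal sketch of bringing up the network inside a typical Linux-based recovery environment; the interface name, addresses, and backup server name are examples only, and the actual restore command depends on the BMR tool in use.

$ # Bring up the interface and assign a temporary address in the restore environment
$ ip link set eth0 up
$ ip addr add 192.168.10.50/24 dev eth0
$ ip route add default via 192.168.10.1
$ # Verify that the backup server is reachable before launching the restore wizard
$ ping -c 3 backupserver.example.com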

Bare metal restore is essential to a server disaster recovery plan.

Enterprise Search Using Google Search Appliance

One of the pain points for companies these days is how difficult it is to find relevant information inside the corporate network. I often hear people complain that it is easier to find information on the Internet using Google or Bing than to find it inside the enterprise.

Well, Google has been selling its Google Search Appliance (GSA) for many years. The GSA brings Google’s superior search technology to a corporate network. It even has the familiar look and feel people have become accustomed to when searching the Internet.

GSA can index and serve content located on internal websites, documents located on file servers, and Microsoft SharePoint repositories.

I recently replaced an old GSA and quickly remembered how easy and fast it is to deploy. The GSA hardware is a souped-up Dell server in a bright yellow casing. Racking the hardware is a snap, and it comes with instructions on where to plug in the network interfaces. The initial setup is done via a back-to-back network connection to a laptop, where network settings such as the IP address, netmask, gateway, time server, and mail server are configured.

Once the GSA is accessible on the network, the only other thing to do is to configure the initial crawl of the web servers and/or file systems, which may take a couple of hours. Once the documents are indexed, the appliance is ready to answer user search requests.

The search appliance has many advanced features and can be customized to your needs. For instance, you can customize the behavior and appearance of the search page, turn the auto-completion feature on or off, and configure security settings so that content is available only to properly authenticated users, among many other options.

Internal search engines such as the Google Search Appliance will increase the productivity of corporate employees by helping them save time looking for information.

Leading Several Toastmasters Clubs as Area Governor

There are many leadership opportunities in Toastmasters. One of them is serving as Area Governor, where one leads and oversees several clubs in a geographical area. I have been honored and privileged to serve as Area Governor of Area 53, District 31 (Eastern Massachusetts and Rhode Island) for the past Toastmasters year (July 2013 to June 2014). I am proud that during my tenure, my area earned the President’s Distinguished Area award.

Area Governors help clubs succeed. They visit the clubs several times a year, assess each club’s strengths and weaknesses, create club success plans with club officers, encourage members to finish their speech and leadership projects, and help the clubs attract new members. In addition, Area Governors facilitate area speech contests, one of the most important traditions in Toastmasters.

As Area Governor, I encountered many challenges, but the time and effort I spent were well worth it. I met and worked with a variety of people, learned to work with different personalities, and nurtured relationships. I guided a struggling club into becoming a great club. I strengthened my leadership skills in the process and learned how to truly motivate and inspire.

There are so many opportunities in Toastmasters to learn and lead. You just need to step up.

Installing High Performance Computing Cluster

A high performance computing (HPC) cluster is usually needed to analyze data from scientific instruments. For instance, I recently set up an HPC cluster running Red Hat Enterprise Linux 6.5, consisting of several nodes, which will be used to analyze data generated by a gene sequencer.

Basically, to build the cluster, you need several machines with high-speed, multi-core processors, lots of memory, a high-speed network to connect the nodes, and large, fast data storage. You also need to install an operating system, such as Red Hat or CentOS Linux, and configure tools and utilities such as kickstart, ssh, NFS, and NIS. Finally, cluster software or a queueing system is needed to manage jobs and fully utilize the compute resources. One of the most commonly used open source options is Son of Grid Engine (SGE), an offshoot of the popular Sun Grid Engine.
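
For example, home directories (and often the grid engine installation) are typically shared from the head node over NFS so that every compute node sees the same files. A minimal sketch, assuming a head node named headnode and compute nodes named node01, node02, and so on:

# /etc/exports on the head node: export /home to all compute nodes
/home    node*(rw,sync,no_root_squash)

$ # Apply the export on the head node, then mount the share on each compute node
$ exportfs -ra
$ mount -t nfs headnode:/home /home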

An excellent write-up on setting up an HPC cluster can be found in this Admin article.

The latest Son of Grid Engine version (as of this writing) is 8.1.7 and can be downloaded from the Son of Grid Engine Project Site.

Since the environment I set up runs Red Hat Enterprise Linux 6.5, I downloaded and installed the following rpms:

gridengine-8.1.7-1.el6.x86_64.rpm
gridengine-execd-8.1.7-1.el6.x86_64.rpm
gridengine-qmaster-8.1.7-1.el6.x86_64.rpm
gridengine-qmon-8.1.7-1.el6.x86_64.rpm

After installing the rpms, I installed and configured the qmaster, then installed the SGE execution daemon (execd) on all the nodes. I also ran a simple test to verify that the cluster was working by issuing the following commands:

$ qsub /opt/sge/examples/jobs/simple.sh
$ qstat
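
Beyond the bundled example, a typical job is submitted as a small shell script with SGE directives embedded as #$ comments. Here is a minimal sketch; the job name, runtime limit, and especially the parallel environment name (smp) are site-specific placeholders.

#!/bin/bash
#$ -N demo_job              # job name shown in qstat
#$ -cwd                     # run the job from the submission directory
#$ -j y                     # merge stderr into stdout
#$ -l h_rt=00:10:00         # hard wall-clock limit of 10 minutes
#$ -pe smp 4                # request 4 slots from a site-defined parallel environment

echo "Running on $(hostname) with $NSLOTS slots"

The script would then be submitted with qsub and monitored with qstat, just like the bundled example above.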

MIT Sloan CIO Symposium

I recently attended the MIT Sloan CIO Symposium held in Cambridge, MA on May 21st, 2014. The event was well attended by Chief Information Officers (CIOs), VPs, business executives, academics, entrepreneurs, and professionals from companies all over the world. The speaker lineup was superb, the symposium was content rich, and the #mitcio twitter hashtag was trending during the event.

I enjoyed the symposium because of the combination of academic and business perspectives coming from a diverse set of speakers. Hot topics included security, big data, robotics and self-driving cars and their implications for society, and the evolving role of the CIO.

The key takeaways for me are the following:

1. The future role of CIOs, and of IT professionals in general, will be that of service brokers. They will increasingly serve as in-house IT service providers and as brokers between business managers and external cloud service providers.

2. On the issue of “build vs. buy, and when does it make sense to build your own system,” the answer is: when it is a source of competitive advantage, or when what you build will differentiate your business from everyone else.

3. CIOs increasingly have to work closely with the business to deliver on technology promises rather than focusing on the technology alone. They should have a seat at the executive table, stay in front of their organizations, and talk to their boards regularly. They should communicate the risks of IT investments and demonstrate their benefits to the business.

4. To maximize and communicate the business value of IT, use the following sentence when explaining the benefits of IT to business: “We are going to do ___, to make ___ better, as measured by ___, and it is worth ____.” Also, consider “you” and “your” as the most important words when translating the business value of IT.

5. In terms of the future of technology, everything is becoming data-fied. Erik Brynjolfsson, co-author of the book “The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies,” said that “We are at the cusp of a 10 year period where we go from machines not really understanding us to being able to.” Thus we are seeing tremendous advances in robotics and self-driving cars. With all this technological progress, we also have to think about how our culture, laws, ethics, and economics will be affected. For instance, how will employment be affected by robots that can do most repetitive tasks? The advice from the panel: “creative lifelong learners will always be in demand.”