
Cloud Security vs On-Prem Security

One of the big differences between cloud security and on-prem security is that the former is built in from the ground up while the latter is often bolted on after the fact. AWS, for instance, has built security into its infrastructure from the start. They realized early on that companies would not put their data in the cloud if it were not inherently secure.

However, security is still a shared responsibility between the cloud provider and the consumer. By now, everybody should be aware of the AWS Shared Responsibility Model. Companies that are used to the traditional security model will find that cloud security entails a different mindset. In the cloud, the focus shifts from network, operating system, and perimeter security to security governance, access control, and secure development. Since the underlying infrastructure of the cloud is secured by the provider, companies using it can focus on the information security that truly matters to the business, such as data, user, and workflow security.

Security governance is important in the cloud. Security folks should spend more time planning and less time fighting fires. They should be crafting and implementing policies that truly secure the company's assets, such as data-centric security policies and secure software development. Access control should also be solid: for example, users should be granted access only to the resources they actually need (the principle of least privilege).
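
As a minimal sketch of what least-privilege access control can look like in AWS, the following Python/boto3 snippet creates an IAM policy that grants read-only access to a single S3 prefix and nothing else. The bucket name, prefix, and policy name are hypothetical placeholders, not anything prescribed by AWS.

    import json
    import boto3

    iam = boto3.client("iam")

    # Hypothetical example: read-only access to one S3 prefix, nothing else.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::example-reports-bucket",
                    "arn:aws:s3:::example-reports-bucket/reports/*",
                ],
            }
        ],
    }

    iam.create_policy(
        PolicyName="ReportsReadOnly",  # hypothetical policy name
        PolicyDocument=json.dumps(policy_document),
    )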

There are a couple of challenges with cloud security. The first is the obvious disconnect between the shared responsibility model and the traditional security model: companies used to on-prem security will still want to spend resources on perimeter security. The second is compliance. For instance, how can traditional auditors learn to audit new cloud technologies like Lambda, where there is no server to verify?

Companies using the cloud should realize that security is still their responsibility, but that their focus should shift toward data and application security.

Cloud Security Challenges and Opportunities

I recently attended the ISC2 Security Congress, held October 8 to 10, 2018 at the Marriott Hotel in New Orleans, Louisiana. Based on the keynotes, workshops, and sessions at the conference, these are the challenges and opportunities facing cloud security:

  1. Container and serverless (e.g. AWS Lambda) security.  For instance, how do you ensure isolation between the various applications?
  2. Internet of Things (IoT) and endpoint security.  As more and more sensors, smart appliances, and devices with powerful CPUs and bigger memories are connected to the cloud, more computation will happen on the edge, increasing security risks.
  3. Machine learning and artificial intelligence (AI).  How can AI help guard against cyber-attacks, predict impending security breaches, or improve investigations and forensics?
  4. Blockchain technology.  Blockchain is poised to transform how audits are performed.
  5. Quantum computing, if and when it comes to fruition, will break today's cryptography.  Cryptography is the reason commerce happens on the Internet.  New encryption algorithms will be needed when quantum computing becomes a reality.
  6. How will the implementation of the GDPR (General Data Protection Regulation) in the European Union affect data sovereignty (“a concept that information which is stored in digital form is subject to the laws of the country in which it is located”), data privacy, and the alignment of privacy and security?
  7. DevSecOps (having a mindset about application and infrastructure security from the start) will continue to gain momentum.

We are likely to see continuing innovation in these areas within the next few years.

Defining the Different Types of Cloud Services

There are several kinds of cloud services, depending on which tier of the technology stack the service resides:

Software-as-a-Service (SaaS) delivers entire functioning applications through the cloud. SaaS frees companies from building their own data centers, buying hardware and software licenses, and developing their own programs. Salesforce is an example of a SaaS provider.

Infrastructure-as-a-Service (IaaS) delivers the underlying resources – compute, storage and networking – in a virtual fashion to organizations that purchase service “instances” of varying sizes. In addition, IaaS vendors provide security, monitoring, load balancing, log access, redundancy, backup and replication. Amazon Web Services, Microsoft Azure and Google Cloud Platform are all examples of IaaS providers.
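
To make the “instances” idea concrete, here is a minimal Python/boto3 sketch that launches a single small virtual server on AWS as one IaaS example; the AMI ID and key pair name are hypothetical placeholders you would replace with your own.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one small virtual server ("instance"); IDs below are placeholders.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
        InstanceType="t2.micro",           # the "size" of the instance
        KeyName="example-keypair",         # hypothetical key pair
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])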

Platform-as-a-Service (PaaS) sits between SaaS and IaaS. It delivers hardware, software tools, and middleware – usually for application development – to users over the Internet. Google App Engine, Red Hat OpenShift, and Microsoft Azure are examples of PaaS providers.

Containers-as-a-Service (CaaS) is the newest cloud service category and focuses on managing container-based workloads. A CaaS offers a framework for deploying and managing applications and container clusters by delivering container engines, orchestration, and the underlying resources to users. Google Container Engine, Amazon EC2 Container Service, and Azure Container Service are among the leading CaaS offerings.

Upgrading Avamar Proxies from Version 7.2 to 7.4

Avamar proxies can no longer be upgraded from version 7.2 to 7.4 using the old method (i.e. mounting the ISO file and rebooting the proxy), due to incompatibility with the new version.

In general, you have to delete the old proxies and deploy new proxies using the new Proxy Deployment Manager tool. To preserve the settings of the old proxies, perform the following steps when no backup jobs are running:

  1. Collect all the details from the old proxies including:
    • Hostname, Domain Name, IP address, Netmask, Gateway, DNS Server
    • VM host, VM datastore, VM network
  2. Delete proxies on the Avamar Console:
    • First, on the POLICY window, edit all the backup policies that are using the proxies, and uncheck them.
    • Once removed from policy, go to ADMINISTRATION, and delete the proxies.
  3. Go to vCenter to power down the proxies, then “Delete from Disk”
  4. Once all the proxies are gone, you are now ready to deploy the new proxies. Go to Avamar Console, click VMware > Proxy Deployment Manager.
  5. Click “Create Recommendation” button.
  6. Once you see the proxy recommendation, enter the proxy details one by one for all proxies (including hostname, IP, gateway, VM network, etc.) on their respective VMware hosts.
  7. Remove all other “New Proxies” and hit “Apply”
  8. Once the proxies are deployed, they need to be registered to the Avamar server, one by one.
  9. Using the VMware console or SSH, connect to the proxy and log on as root.
  10. Enter the command: /usr/local/avamarclient/etc/initproxyappliance.sh start
  11. Register the proxy to the appropriate Avamar server (use the Avamar server FQDN).
  12. Once registered, go to the Avamar Console and configure the proxies:
    • On ADMINISTRATION window, edit the proxy, then select the appropriate “Datastores” and “Groups”
    • On POLICY window, edit the image-level backup policies, then add back (or check) the Proxies
  13. Perform a test backup.

Data Protection in AWS

Data protection, along with security, used to be an afterthought in many in-house IT projects. In the cloud, data protection has come to the forefront of many IT implementations. Business users spinning up servers or EC2 instances in AWS clamor for the best protection for their servers and data.

Luckily, AWS provides a highly effective snapshot mechanism for EBS volumes, with the snapshots stored on highly durable S3 storage. Snapshots are storage efficient and use copy-on-write and restore-before-read, which allow for both consistency and immediate recovery. Storing snapshots in S3, which is a separate infrastructure from EBS, has the added benefit of data resiliency: a failure in the production data will not affect the snapshot data.
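
As a minimal sketch of this mechanism, the Python/boto3 snippet below creates a snapshot of an EBS volume and later restores it into a new volume; the volume ID, availability zone, and description are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Take a point-in-time snapshot of an EBS volume (volume ID is a placeholder).
    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Nightly snapshot of the app data volume",
    )

    # Later, restore the snapshot into a brand-new volume in the desired AZ.
    ec2.create_volume(
        SnapshotId=snapshot["SnapshotId"],
        AvailabilityZone="us-east-1a",
    )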

However, this backup and restore mechanism provided by AWS lacks many of the features found in traditional backup solutions, such as cataloging, ease of management, automation, and replication. In response, third-party vendors are now offering products and services that make backup and recovery easy and efficient in AWS. Some vendors provide services to manage and automate this. Other vendors provide products that mimic the ease of management of traditional backup. For instance, Dell EMC provides Avamar and Data Domain virtual editions that you can use on AWS.

Selecting the Right HCI Solution

The popularity of hyper-converged infrastructure (HCI) systems is fueled not only by better, faster, and cheaper CPUs and flash storage, but also by better orchestration of compute and storage resources, horizontal scaling, and the elasticity to adjust to changing workloads.

Hyper-converged infrastructures are scale-out systems with nodes that are added and aggregated into a single platform. Each node performs compute, storage and networking functions, and they run virtualization software. HCI enables the software-defined data center.

But what are the considerations in buying the right solution for your use case? Here are some guidelines:

1. Closely inspect how the system implements reliability and resiliency. How does it protect system configuration and data? Implementations include replication, erasure coding, distribution of state information across multiple nodes to enable automatic failover, etc.

2. Does it have self-healing capabilities?

3. Can it perform non-disruptive upgrades?

4. Does it support VMware vSphere, as well as Microsoft Hyper-V and open source hypervisors like KVM?

5. Does the storage support auto-tiering?

6. Since migrations affect virtual machine performance, how does the system maintain data locality as virtual machines move from one host to another?

7. What are the network configuration options? How is the network managed? Are there self-optimizing network capabilities?

8. How is performance affected when backups and restores are performed?

9. What is the performance impact if they are deployed in multiple geographical regions?

10. What are the data protection and recovery capabilities, such as snapshots and replication of workloads locally, to remote data centers and in the cloud?

11. Does it deduplicate the data, which minimizes the amount of data stored?

12. Does it have the option to extend to public clouds?

13. What are its management capabilities? Does it provide a single intuitive console for managing the HCI, or does it include a plug-in to hypervisor management tool such as vCenter to perform the management tasks?

14. Does it have APIs that allow third-party tools and custom scripts to interface with it for automation?

15. Does it have monitoring, alerting, and reporting capabilities that analyze performance, errors, and capacity?

Finally, you should look at the vendor itself: its future in the HCI space, its product roadmap, support policies, and cost model (lease, outright purchase, pay as you go, etc.).

Optimizing AWS Cost

One of the advantages of using the cloud is cost savings, since you only pay for what you use. However, many companies still waste resources in the cloud and end up paying for services they don't use. A lot of people are stuck in the old ways of implementing IT infrastructure, such as overprovisioning and keeping servers on 24×7 even when they are idle most of the time.

There are several ways you can optimize AWS in order to save money.

1. Right sizing

With AWS you can right size your services to meet exactly the capacity requirements you need without having to overprovision. On the compute side, select the EC2 instance type appropriate for the application and provision only as many instances as needed. When the demand for compute increases, you can scale up or scale out. For instance, during periods of low demand, run only a couple of EC2 instances; during high demand, automatically provision additional EC2 instances to meet the load, as in the sketch below.
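
One hedged way to implement this is with an Auto Scaling group plus a target-tracking policy that keeps average CPU around 60 percent, as in the Python/boto3 sketch below. The group name, launch configuration, subnet IDs, and target value are hypothetical placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # A group that can shrink to 2 instances off-peak and grow to 10 under load.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="example-web-asg",               # hypothetical name
        LaunchConfigurationName="example-web-launch-config",  # assumed to exist already
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-0123abcd,subnet-0456efgh",  # placeholder subnets
    )

    # Track average CPU utilization; AWS adds or removes instances to stay near the target.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="example-web-asg",
        PolicyName="keep-cpu-near-60-percent",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 60.0,
        },
    )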

On the storage side, AWS offers multiple tiers to fit your storage needs. For instance, you can store frequently used files/objects in the S3 Standard tier, store less frequently used files/objects in the S3 Infrequent Access (IA) tier, and store archive data in Glacier. Finally, delete data that you no longer need.
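
One way to automate this tiering is with an S3 lifecycle rule. The Python/boto3 sketch below, using a hypothetical bucket name and prefix, transitions objects to Infrequent Access after 30 days, to Glacier after 90 days, and deletes them after a year; adjust the thresholds to your own retention policy.

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and prefix; the day thresholds are examples only.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-data-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-and-expire-logs",
                    "Filter": {"Prefix": "logs/"},
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"},
                        {"Days": 90, "StorageClass": "GLACIER"},
                    ],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )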

2. Reserve capacity

If you know that you will be using AWS for a long period of time, you can commit to reserved capacity and save a lot of money compared with the equivalent on-demand capacity.

Reserved Instances are available in three options: all up-front (AURI), partial up-front (PURI), or no up-front payment (NURI). When you buy Reserved Instances, the larger the up-front payment, the greater the discount. To maximize your savings, you can pay all up-front and receive the largest discount. Partial up-front RIs offer lower discounts but let you spend less up front. Lastly, you can choose to spend nothing up-front and receive a smaller discount, but this option frees up capital to spend on other projects.
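
As a rough, hypothetical illustration of the math (the hourly rate and discount below are made-up assumptions, not AWS pricing), this short Python sketch compares a year of on-demand usage with an all up-front reservation at an assumed 40 percent discount.

    # Hypothetical numbers only; look up real pricing for your instance type and region.
    on_demand_hourly = 0.10          # assumed on-demand rate, $/hour
    hours_per_year = 24 * 365
    assumed_ri_discount = 0.40       # assumed all up-front discount vs on-demand

    on_demand_annual = on_demand_hourly * hours_per_year
    all_upfront_annual = on_demand_annual * (1 - assumed_ri_discount)

    print(f"On-demand for a year:  ${on_demand_annual:,.2f}")
    print(f"All up-front reserved: ${all_upfront_annual:,.2f}")
    print(f"Estimated savings:     ${on_demand_annual - all_upfront_annual:,.2f}")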

3. Use spot market

If you have applications that are not time-sensitive, such as non-critical batch workloads, you may be able to save a lot of money by leveraging Amazon EC2 Spot Instances. This works like an auction where you bid on spare Amazon EC2 computing capacity.

Since Spot Instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications.
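
Here is a minimal Python/boto3 sketch of a Spot request; the bid price, AMI ID, and instance type are hypothetical placeholders, and the Spot market has evolved since this was written, so treat it as an illustration rather than a recipe.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Bid for two spare instances for a non-time-sensitive batch job (values are placeholders).
    response = ec2.request_spot_instances(
        SpotPrice="0.05",                 # maximum price you are willing to pay per hour
        InstanceCount=2,
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",   # hypothetical AMI
            "InstanceType": "m4.large",
        },
    )
    for req in response["SpotInstanceRequests"]:
        print(req["SpotInstanceRequestId"], req["State"])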

4. Cleanup unused services

One of the best ways to save money is to turn off unused and idle resources. These include EC2 instances with no network or CPU activity for the past few days, load balancers with no traffic, unattached block storage (EBS) volumes, piles of old snapshots, and detached Elastic IPs. For instance, one company analyzed its usage pattern and found that during certain periods it could power off a number of EC2 instances, thereby minimizing its costs.
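
A hedged starting point for finding this kind of waste is a small inventory script. The Python/boto3 sketch below lists unattached EBS volumes and Elastic IPs that are not associated with anything; it only reports findings and does not delete anything.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Unattached EBS volumes ("available" means not attached to any instance).
    volumes = ec2.describe_volumes(Filters=[{"Name": "status", "Values": ["available"]}])
    for vol in volumes["Volumes"]:
        print(f"Unattached volume: {vol['VolumeId']} ({vol['Size']} GiB)")

    # Elastic IPs that are allocated but not associated with an instance or interface.
    addresses = ec2.describe_addresses()
    for addr in addresses["Addresses"]:
        if "AssociationId" not in addr:
            print(f"Detached Elastic IP: {addr['PublicIp']}")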

One thing you really need to do on a regular basis is monitor and analyze your usage. AWS provides several tools to track your costs, such as Amazon CloudWatch (which collects and tracks metrics, monitors log files, and sets alarms), AWS Trusted Advisor (which looks for opportunities to save you money, such as turning off non-production instances), and AWS Cost Explorer (which gives you the ability to analyze your costs and usage).
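
As a small example of putting CloudWatch data to work for cost analysis, the Python/boto3 sketch below pulls the average CPU utilization of one instance over the past week and flags it as a power-off candidate if it stays under an assumed 5 percent threshold; the instance ID and threshold are placeholders.

    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    instance_id = "i-0123456789abcdef0"   # hypothetical instance
    now = datetime.utcnow()

    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(days=7),
        EndTime=now,
        Period=3600,                       # one datapoint per hour
        Statistics=["Average"],
    )

    datapoints = stats["Datapoints"]
    if datapoints:
        avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
        if avg_cpu < 5.0:                  # assumed "idle" threshold
            print(f"{instance_id} averaged {avg_cpu:.1f}% CPU; consider stopping it.")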

Reference: https://aws.amazon.com/pricing/cost-optimization/

Hyper Converged Infrastructure (HCI)

Companies that want to retain control of their infrastructure and data (for regulatory, security, application-requirement, and other reasons), but still want the benefits of the public cloud – such as unlimited scalability, efficient resource utilization, the cost-effectiveness of pooling compute and storage resources, and easy provisioning of resources on demand – would benefit tremendously from using hyper-converged infrastructure (HCI) on their premises.

Hyper-converged infrastructure consolidates compute, storage, and networking in a box. It creates a modular system which can be scaled linearly. HCI takes advantage of commodity hardware (i.e. x86 systems) and advances in storage and networking technologies (i.e. flash storage providing high IOPS, and 10Gb/40Gb high-speed Ethernet).

HCI uses virtualization technology (such as VMware) to aggregate compute, storage and network. It eliminates the need for dedicated SAN and storage arrays by pooling the storage of each node and defining it via software. In addition, HCI usually offers unified management which eliminates the different management silos between compute, storage and network.

There are a variety of HCI solutions to choose from. You can build it yourself using commodity hardware, virtualization software (e.g. VMware vSphere), and software-defined storage (e.g. VMware vSAN). You can also buy hyper-converged appliances from vendors such as Nutanix and Dell EMC (VxRail). Hyper-converged rack-scale systems for large enterprises, such as Dell EMC VxRack, are available as well.

There are numerous advantages for using HCI:

1. Faster time to deploy – you can easily add compute, storage and network, and scale it up and out to meet business demands. This in turn reduces development cycles for new apps and services.

2. Simplified management and operations – compute, storage and network provisioning can be done by a unified team eliminating the network, compute or storage silos. Many provisioning and configuration tasks can now be scripted and automated.

3. Cost savings – the initial investment is usually lower. Your company can start small and scale incrementally as you grow, adding smaller amounts of compute or storage capacity as required versus buying larger bundles of software and storage arrays. Operational expenses are also much lower, since there is no longer a SAN to manage.

4. Reduced data center footprint – which means lower power and cooling requirements. An HCI deployment can often consolidate more than a dozen data center racks into one.

HCI is the ideal infrastructure solution for on-premise data centers.

Migrating IT Workloads to the Cloud

As companies realize the benefits of the cloud, they are not only deploying new applications in the cloud but also migrating existing on-premise applications to it.

However, migration can be a daunting task, and if not planned and executed properly, it may end up in a catastrophe.

When migrating to the cloud, the first thing companies have to do is to define a strategy. There are several common migration strategies.

The first one is “lift and shift”. In this method, applications are re-hosted with the cloud provider (such as AWS or Azure). Re-hosting can be done by syncing the data and failing over, using tools available from the cloud provider or third-party vendors.

The second strategy is to re-platform. In this method, the core architecture of the application is unchanged, but some optimizations are done to take advantage of the cloud architecture.

The third strategy is to repurchase. In this method, the existing application is totally dropped and you buy a new one that runs on the cloud.

The fourth strategy is to re-architect the application by using cloud-native features. Usually you re-architect the application to take advantage of the scalability and higher performance offered by the cloud.

The last strategy is to retain the applications on-premise. Some applications (especially legacy ones) are very complicated to migrate, and keeping them on-premise may be the best option.

One important task to perform after migration is to validate and test the applications. Once they are running smoothly, look for opportunities for application optimization, standardization, and future-proofing.

Common Pitfalls of Deploying Applications in the Cloud

Due to the relatively painless way of spinning up servers in the cloud, business units of large companies are flocking to AWS and other cloud providers to deploy their applications instead of using internal IT. This is expected and even advantageous because the speed of deployment in the cloud is usually unmatched by internal IT. However, there are many things to consider and pitfalls to avoid in order to establish a robust and secure application.

I recently performed an assessment of an application in the cloud implemented by a business unit with limited IT knowledge. Here are some of my findings:

  1. Business units have the impression that AWS takes care of the security of the application. While AWS takes care of security of the cloud (which means security from the physical level up to the hypervisor level), the customer is still responsible for security in the cloud (including OS security, encryption, customer data protection, etc.). For instance, the customer is still responsible for OS hardening (implementing a secure password policy, turning off unneeded services, locking down SSH root access, enabling SELinux, etc.) and monthly security patching.
  2. These servers in the cloud also lack integration with the enterprise's internal tools for properly monitoring and administering servers. Enterprises have usually developed mature tools for these purposes; without integrating with them, business units are largely blind to what is going on with their servers, especially for the very important task of monitoring their security.
  3. These servers do not undergo periodic auditing. For instance, although Security Groups may have been set up properly in the beginning, they have to be audited and revisited every so often so that ports that are no longer needed can be disabled or removed from the Security Groups (see the sketch after this list).
  4. There is no central allocation of IP addresses. IP addresses may overlap once their own VPC is connected to other VPCs and the enterprise internal network.
  5. One of the most commonly neglected tasks after spinning up servers is configuring their backup and retention. For companies that are regulated, it is extremely important to adhere to the backup and retention policy.
  6. Because of the business unit’s limited knowledge of IT infrastructure, fault tolerance and reliability may not be properly set up. For instance, they may only use one availability zone instead of using two or more.
  7. Business units may not be optimizing the cost of their deployment in the cloud. There are many ways to accomplish this, such as using tiered storage (for instance, using Glacier to archive data instead of S3), powering down servers when not in use, bidding for Spot capacity for less time-sensitive tasks, etc.
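
Following up on the auditing point in item 3, here is a minimal Python/boto3 sketch that reports Security Group rules open to the entire Internet (0.0.0.0/0) so they can be reviewed; it only prints findings and does not change anything.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Flag any inbound rule that is open to the whole Internet for manual review.
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg["IpPermissions"]:
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    port = rule.get("FromPort", "all")
                    print(f"{sg['GroupId']} ({sg['GroupName']}): port {port} open to 0.0.0.0/0")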

Business units should be cautious and should consider consulting internal IT before deploying in the cloud, to ensure reliable, secure, and cost-effective applications.