
Azes Publishing Corporation

ANNOUNCEMENT

https://azespublishing.com/

We recently launched Azes Publishing Corporation – a publisher and distributor of Philippine textbooks.

Our mission is to provide the Philippine market with quality and affordable learning materials
from pre-school to post-graduate studies.

Our vision is to become an industry leader in the production and distribution of quality textbooks and other learning materials in the Philippines.

For more information, visit our website at https://azespublishing.com/

Using the Cloud for Disaster Recovery

One common use case for the cloud, especially for companies with large on-prem data centers, is disaster recovery (DR). Instead of building or continuing to maintain an expensive on-prem DR site, the cloud can provide a cheaper alternative for replicating and protecting your data.

There are many products and services for DR in the cloud. If your company uses EMC devices – specifically Avamar and Data Domain (DD) – for data protection, you can replicate your virtual machine (VM) backups to AWS and perform disaster recovery of your servers there. This solution is called Data Domain Cloud DR (DDCDR), and it enables DD to back up to AWS S3 object storage. Data is sent securely and efficiently, requiring minimal compute cycles and footprint within AWS. In the event of a disaster, VM images can be restored and run from within AWS. Since neither Data Protection Suite nor DD is required in the cloud, compute cycles are needed only when performing a restore.

Backup Process

  • DDCDR requires that a customer with Avamar backup and Data Domain (DD) storage install an OVA, which deploys an “add-on” to their on-prem Avamar/DD system, and deploy a lightweight VM (the Cloud DR server) in their AWS account.
  • Once the OVA is installed, it reads the changed data, then segments, encrypts, and compresses the backup data and sends it, along with the backup metadata, to AWS S3 object storage (a minimal sketch of this flow appears after the list).
  • Avamar/DD policies can be established to control how many daily backup copies are to be saved to S3 object storage. There’s no need for Data Domain or Avamar to run in AWS.
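The following is a minimal sketch of the general idea behind this flow: segment, compress, and encrypt the changed data, then send it with its metadata to S3. It is only an illustration, not the DDCDR implementation; the bucket name, key layout, metadata fields, and key handling are assumptions.

```python
# Illustrative sketch only -- not the actual DDCDR implementation.
# Compress and encrypt one backup segment, then upload it and its metadata to S3.
# Assumes a bucket named "dr-backups", configured AWS credentials, and the
# "boto3" and "cryptography" packages.
import json
import zlib

import boto3
from cryptography.fernet import Fernet


def send_segment_to_s3(segment: bytes, key_name: str, metadata: dict) -> None:
    """Compress, encrypt, and upload one backup segment plus its metadata."""
    fernet_key = Fernet.generate_key()      # a real product would use managed keys
    ciphertext = Fernet(fernet_key).encrypt(zlib.compress(segment))

    s3 = boto3.client("s3")
    s3.put_object(Bucket="dr-backups", Key=key_name, Body=ciphertext)
    s3.put_object(Bucket="dr-backups", Key=key_name + ".meta",
                  Body=json.dumps(metadata).encode())


send_segment_to_s3(b"changed blocks...", "vm-042/segment-0001",
                   {"vm": "vm-042", "vcpus": 4, "ram_gb": 16})
```

In the real product these steps, including key management, are handled by the Avamar/DD add-on itself.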

Restore Process

  • When there’s a problem at the primary data center, an admin can click a button in the Avamar GUI and have the Cloud DR server uncompress, decrypt, rehydrate, and restore the backup data into EBS volumes, translate the VMware VM image into an AMI, and then restart the AMI on an AWS virtual server (EC2) with its data on EBS volume storage.
  • The Cloud DR server uses the backup metadata to select an AWS EC2 instance type with the CPU and RAM needed to run the application. Once this completes, the VM runs standalone in an AWS EC2 instance. Presumably, you must have EC2 and EBS resources available in your AWS account to be able to run the application and restore its data (a simple instance-selection sketch appears after the list).
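To illustrate the kind of decision the Cloud DR server makes from backup metadata, here is a minimal sketch that picks the smallest instance type satisfying a VM's CPU and RAM requirements. The instance table, metadata format, and selection rule are assumptions, not DDCDR's actual logic.

```python
# Illustrative sketch: choose an EC2 instance type from backup metadata.
# The table and the metadata format are assumptions for the example.
INSTANCE_TYPES = [            # (name, vCPUs, RAM in GB), smallest first
    ("t3.medium", 2, 4),
    ("m5.xlarge", 4, 16),
    ("m5.2xlarge", 8, 32),
]


def pick_instance_type(vcpus_needed: int, ram_gb_needed: int) -> str:
    """Return the smallest listed instance type that satisfies the requirements."""
    for name, vcpus, ram_gb in INSTANCE_TYPES:
        if vcpus >= vcpus_needed and ram_gb >= ram_gb_needed:
            return name
    raise ValueError("no instance type in the table is large enough")


metadata = {"vm": "vm-042", "vcpus": 4, "ram_gb": 16}
print(pick_instance_type(metadata["vcpus"], metadata["ram_gb"]))  # m5.xlarge
```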

Source: https://www.dellemc.com/

Guiding Principles for Cloud Security

To create solid security for your servers, data, and applications hosted in the cloud, adhere to the following guiding principles:

Perimeter Security

The first line of defense against attacks is perimeter security. Creating private networks to restrict visibility into the computing environment is one technique. Micro-segmentation, which isolates applications and data with a hardened configuration, is another. Creating a strong abstraction layer over the hardware and virtualization environment will also strengthen perimeter security.
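As a concrete example of micro-segmentation, the sketch below uses AWS security groups so that a database tier can be reached only from an application tier, and only on one port. The VPC ID, group names, and port are assumptions.

```python
# Illustrative micro-segmentation sketch using AWS security groups (boto3).
# The VPC ID, group names, and port number are assumptions for the example.
import boto3

ec2 = boto3.client("ec2")

app_sg = ec2.create_security_group(
    GroupName="app-tier", Description="application servers", VpcId="vpc-0abc12345")
db_sg = ec2.create_security_group(
    GroupName="db-tier", Description="database servers", VpcId="vpc-0abc12345")

# Allow the database tier to be reached only from the app tier, only on port 5432.
ec2.authorize_security_group_ingress(
    GroupId=db_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": app_sg["GroupId"]}],
    }])
```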

Continuous Encryption

There is no longer any reason why data traversing the network (public or private) or data stored on storage arrays should remain unencrypted. Even the popular Google Chrome browser now flags unencrypted websites to alert users. Leverage cheap computing power, secure key management, and the Public Key Infrastructure to achieve both data-in-transit and data-at-rest encryption.
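A minimal sketch of both halves on AWS follows: boto3 talks to S3 over TLS, which covers data in transit, and the ServerSideEncryption parameter asks S3 to encrypt the object at rest with a KMS-managed key. The bucket and object names are assumptions.

```python
# Illustrative sketch of data-in-transit and data-at-rest encryption on AWS.
# The bucket name and file name are assumptions for the example.
import boto3

s3 = boto3.client("s3")   # boto3 uses HTTPS (TLS) endpoints by default,
                          # covering data in transit

with open("report.pdf", "rb") as f:
    s3.put_object(
        Bucket="example-secure-bucket",
        Key="reports/report.pdf",
        Body=f,
        ServerSideEncryption="aws:kms",   # encrypt the object at rest with KMS
    )
```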

Effective Incident Response

Attacks on your servers, data, and applications in the cloud will definitely occur; it is just a question of when. An effective incident response program – using both automated and manual responses – that is ready to be invoked once an attack occurs will lessen the pain of a breach.

Continuous Monitoring

Continuous and robust monitoring of your data, applications, and security tools, with timely alerting when a security breach happens, is a must. In addition, easy integration of third-party monitoring capabilities will help in achieving a sound monitoring system.
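As one concrete example of automated alerting, the sketch below creates a CloudWatch alarm that notifies an SNS topic when a security-relevant metric crosses a threshold. The metric (which assumes a CloudTrail metric filter is already publishing it) and the SNS topic ARN are assumptions.

```python
# Illustrative sketch: alert when unauthorized API calls are detected.
# Assumes a CloudTrail metric filter already publishes this metric and that
# the SNS topic exists; both names/ARNs are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="unauthorized-api-calls",
    Namespace="CloudTrailMetrics",
    MetricName="UnauthorizedAttemptCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],
)
```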

Resilient Operations

The infrastructure should be capable of withstanding attacks. For instance, you should maintain data and application availability by mitigating DDoS attacks, and applications should continue to function in the presence of an ongoing attack. In addition, there should be minimal performance degradation as a result of environmental failures. Employing high availability, redundancy, and a disaster recovery strategy will help achieve resilient operations.

Highly Granular Access Control

Organizations need to make sure that their employees and customers can access the resources and data they need, at the right time, from wherever they are. Conversely, they need to make sure that bad actors are denied access. They should have strong, cryptography-backed Identity and Access Management (IAM) and should leverage a managed Public Key Infrastructure service to authenticate users, restrict access to confidential information, and verify the ownership of sensitive documents.
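A minimal sketch of granular access control on AWS follows: a least-privilege IAM policy that lets one user read a single S3 prefix and nothing else. The user name, bucket, and prefix are assumptions.

```python
# Illustrative least-privilege IAM sketch (boto3). The user, bucket, and
# prefix are assumptions for the example.
import json

import boto3

iam = boto3.client("iam")

policy = iam.create_policy(
    PolicyName="read-finance-reports-only",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/finance/reports/*",
        }],
    }),
)

# Grant exactly this capability to one user and nothing more.
iam.attach_user_policy(UserName="jdoe", PolicyArn=policy["Policy"]["Arn"])
```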

Secure Applications Development

Integrate security automation into DevOps practices (or DevSecOps), ensuring security is baked in, not bolted on.

Governance, Risk Management, Compliance

Finally, a great cloud security program should be properly governed, for instance by maintaining visibility into configurations. Risks should be managed by readily identifying gaps and other weaknesses. Lastly, your security program should carry broad regulatory and compliance certifications.

Cloud Security vs On-Prem Security

One of the big differences between cloud security and on-prem security is that the former is built in from the ground up while the latter is often bolted on after the fact. AWS, for instance, has made its infrastructure secure since it was first built. They realized early on that companies would not put their data in the cloud if it were not inherently secure.

However, security is still a shared responsibility between the cloud provider and the consumer. By now, everybody should be aware of the AWS Shared Responsibility Model. Companies that are used to the traditional security model will find that cloud security entails a different mindset. In the cloud, the focus shifts from network, operating system, and perimeter security to security governance, access control, and secure development. Since the underlying infrastructure of the cloud is secured by the provider, companies using it can focus on the information security that really matters to them, such as data, user, and workflow security.

Security governance is important in the cloud. Security folks should spend more time planning and less time firefighting. They should be crafting and implementing policies that truly secure the company’s assets, such as data-centric security policies and secure software development. There should also be solid access control; for example, users are granted access only if they really need it.

There are a couple of challenges with cloud security. First is the obvious disconnect between the shared responsibility model and the traditional security model: companies used to on-prem security will still want to spend resources on perimeter security. Second is compliance. For instance, how can traditional auditors learn to audit new cloud technologies like Lambda, where there is no server to verify?

Companies using the cloud should realize that security is still their responsibility, but they should focus more on data and application security.

Cloud Security Challenges and Opportunities

I recently attended the ISC2 Security Congress held October 8–10, 2018, at the Marriott Hotel in New Orleans, Louisiana. Based on the keynotes, workshops, and sessions at the conference, these are the challenges and opportunities facing cloud security:

  1. Container and serverless (e.g. AWS Lambda) security.  For instance, how will you ensure isolation of various applications?
  2. Internet of Things (IoT) and endpoint security. As more sensors, smart appliances, and devices with powerful CPUs and more memory are connected to the cloud, more computation will happen at the edge, increasing security risks.
  3. Machine learning and artificial intelligence (AI). How can AI help guard against cyber-attacks, predict an impending security breach, or improve investigation and forensics?
  4. Blockchain technology. Blockchain will be transforming how audits will be performed in the future.
  5. Quantum computing, if and when it comes to fruition, will break today’s cryptography. Cryptography is the reason commerce happens on the Internet, so new encryption algorithms will be needed when quantum computing becomes a reality.
  6. How will the implementation of GDPR (General Data Protection Regulation) in the European Union affect data sovereignty (“a concept that information which is stored in digital form is subject to the laws of the country in which it is located”), data privacy, and the alignment of privacy and security?
  7. DevSecOps (having a mindset about application and infrastructure security from the start) will continue to gain momentum.

We are likely to see continuing innovation in these areas over the next few years.

Defining the Different Types of Cloud Services

There are several kinds of cloud services, depending on which tier of the technology stack the service resides:

Software-as-a-Service (SaaS) delivers entire functioning applications through the cloud. SaaS frees companies from building their own data centers, buying hardware and software licenses, and developing their own programs. Salesforce is an example of a SaaS provider.

Infrastructure-as-a-Service (IaaS) delivers the underlying resources – compute, storage, and networking – in a virtual fashion to organizations who purchase service “instances” of varying sizes. In addition, IaaS vendors provide security, monitoring, load balancing, log access, redundancy, backup, and replication. Amazon Web Services, Microsoft Azure, and Google Cloud Platform are all examples of IaaS providers.

Platform-as-a-Service (PaaS) sits between SaaS and IaaS. It delivers hardware, software tools, and middleware – usually for application development – to users over the Internet. Google App Engine, Red Hat OpenShift, and Microsoft Azure are examples of PaaS providers.

Containers-as-a-Service (CaaS) is the newest cloud service and focuses on managing container-based workloads. A CaaS offers a framework for deploying and managing applications and container clusters by delivering container engines, orchestration, and the underlying resources to users. Google Container Engine, Amazon EC2 Container Service, and Azure Container Service are the leading CaaS providers.

Upgrading Avamar Proxies from Version 7.2 to 7.4

Avamar proxies can no longer be upgraded from version 7.2 to 7.4 using the old method (i.e., mounting the ISO file and rebooting the proxy) because of incompatibility with the new version.

In general, you have to delete the old proxies and deploy new ones using the new Proxy Deployment Manager tool. To preserve the settings of the old proxies, perform the following steps when no backup jobs are running:

  1. Collect all the details from the old proxies (a small validation sketch follows this list), including:
    • Hostname, Domain Name, IP address, Netmask, Gateway, DNS Server
    • VM host, VM datastore, VM network
  2. Delete proxies on the Avamar Console:
    • First, on the POLICY window, edit all the backup policies that are using the proxies, and uncheck them.
    • Once removed from policy, go to ADMINISTRATION, and delete the proxies.
  3. Go to vCenter to power down the proxies, then “Delete from Disk”
  4. Once all the proxies are gone, you are now ready to deploy the new proxies. Go to Avamar Console, click VMware > Proxy Deployment Manager.
  5. Click “Create Recommendation” button.
  6. Once you see the proxy recommendation, enter the proxy details one by one for all proxies (including hostname, IP, gateway, VM network, etc.) on their respective VMware hosts.
  7. Remove all other “New Proxies” and click “Apply”.
  8. Once the proxies are deployed, they need to be registered to the Avamar server, one by one.
  9. Using the VMware console or SSH, connect to the proxy and log on as root.
  10. Enter the command: /usr/local/avamarclient/etc/initproxyappliance.sh start
  11. Register the proxy to the appropriate Avamar server (use the Avamar server FQDN).
  12. Once registered, go to the Avamar Console and configure the proxies:
    • On ADMINISTRATION window, edit the proxy, then select the appropriate “Datastores” and “Groups”
    • On POLICY window, edit the image-level backup policies, then add back (or check) the Proxies
  13. Perform a test backup.
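The small helper below is not part of the Avamar tooling; it simply double-checks the details collected in step 1 by confirming that each new proxy hostname resolves to the IP address you expect before registration. The hostnames and addresses are placeholders.

```python
# Illustrative helper: verify that each proxy hostname resolves to the
# intended IP before registering the proxies. Names and IPs are placeholders.
import socket

proxies = {
    "proxy01.example.com": "10.0.1.21",
    "proxy02.example.com": "10.0.1.22",
}

for hostname, expected_ip in proxies.items():
    try:
        resolved = socket.gethostbyname(hostname)
    except socket.gaierror:
        print(f"{hostname}: DNS lookup failed")
        continue
    status = "OK" if resolved == expected_ip else f"MISMATCH (got {resolved})"
    print(f"{hostname}: {status}")
```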

Data Protection in AWS

Data protection, along with security, used to be an afterthought in many in-house IT projects. In the cloud, data protection has moved to the forefront of many IT implementations. Business users spinning up servers or EC2 instances in AWS clamor for the best protection for their servers and data.

Luckily, AWS provides a highly effective snapshot mechanism for EBS volumes, with snapshots stored on highly durable S3 storage. Snapshots are storage efficient and use copy-on-write and restore-before-read, which allow for both consistency and immediate recovery. Storing snapshots in S3, which is separate infrastructure from EBS, has the added benefit of data resiliency – a failure in the production data will not affect the snapshot data.
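For reference, here is a minimal sketch of that native mechanism with boto3: snapshot an EBS volume and wait for the snapshot to complete. The volume ID is an assumption.

```python
# Illustrative sketch: snapshot an EBS volume with boto3 and wait for it to
# complete. The volume ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0abc1234567890def",
    Description="nightly backup of the data volume",
)

# The snapshot is stored in S3-backed infrastructure, separate from EBS itself.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
print("snapshot ready:", snapshot["SnapshotId"])
```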

However, this backup and restore mechanism provided by AWS lacks many of the features found in traditional backup solutions, such as cataloging, ease of management, automation, and replication. In response, third-party vendors now offer products and services that make backup and recovery easy and efficient in AWS. Some vendors provide services to manage and automate this. Other vendors provide products that mimic the ease of management of traditional backup. For instance, Dell EMC provides Avamar and Data Domain virtual editions that you can use on AWS.

Selecting the Right HCI Solution

The popularity of hyper-converged infrastructure (HCI) systems is fueled not only by better, faster, and cheaper CPUs and flash storage, but also by better orchestration of compute and storage resources, horizontal scaling, and elasticity to adjust to changing workloads.

Hyper-converged infrastructures are scale-out systems with nodes that are added and aggregated into a single platform. Each node performs compute, storage and networking functions, and they run virtualization software. HCI enables the software-defined data center.

But what are the considerations in buying the right solution for your use case? Here are some guidelines:

1. Closely inspect how the system implements reliability and resiliency. How does it protect system configuration and data? Implementations include replication, erasure coding, distribution of state information across multiple nodes to enable automatic failover, etc.

2. Does it have self-healing capabilities?

3. Can it perform non-disruptive upgrades?

4. Does it support VMware vSphere, as well as Microsoft Hyper-V and open source hypervisors like KVM?

5. Does the storage support auto-tiering?

6. Since migrations affect virtual machine performance, how does the system maintain data locality as virtual machines move from one host to another?

7. What are the network configuration options? How is the network managed? Are there self-optimizing network capabilities?

8. How is performance affected when backups and restores are performed?

9. What is the performance impact if they are deployed in multiple geographical regions?

10. What are the data protection and recovery capabilities, such as snapshots and replication of workloads locally, to remote data centers and in the cloud?

11. Does it deduplicate the data, which minimizes the amount of data stored?

12. Does it have the option to extend to public clouds?

13. What are its management capabilities? Does it provide a single intuitive console for managing the HCI, or does it include a plug-in to a hypervisor management tool such as vCenter to perform the management tasks?

14. Does it have APIs that third-party tools and custom scripts can interface with to enable automation?

15. Does it have a monitoring, alerting, and reporting system that analyzes performance, errors, and capacity?

Finally, you should look at the vendor itself: its future in the HCI space, its product roadmap, support policies, and cost model (lease, outright purchase, pay as you go, etc.).

Optimizing AWS Cost

One of the advantages of using the cloud is cost savings, since you only pay for what you use. However, many companies still waste resources in the cloud and end up paying for services they don’t use. A lot of people are stuck in the old ways of implementing IT infrastructure, such as overprovisioning and keeping servers on 24×7 even when they are idle most of the time.

There are several ways you can optimize AWS in order to save money.

1. Right sizing

With AWS you can right size your services to meet exactly the capacity you need without overprovisioning. On the compute side, select the EC2 instance type appropriate for the application, and provision only enough instances to meet the need. When the need for more compute increases, you can scale up or scale out. For instance, during low demand use only a couple of EC2 instances, and during high demand automatically provision additional EC2 instances to meet the load.
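As a simple illustration of matching compute to demand, the sketch below resizes an existing Auto Scaling group between a low-demand baseline and a high-demand peak. The group name and sizes are assumptions; in practice you would usually attach scaling policies or schedules instead of calling this by hand.

```python
# Illustrative sketch: resize an existing Auto Scaling group (boto3).
# The group name and capacities are assumptions for the example.
import boto3

autoscaling = boto3.client("autoscaling")


def set_capacity(group_name: str, minimum: int, maximum: int, desired: int) -> None:
    """Adjust the size limits and desired capacity of an Auto Scaling group."""
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=group_name,
        MinSize=minimum,
        MaxSize=maximum,
        DesiredCapacity=desired,
    )


set_capacity("web-tier", minimum=2, maximum=2, desired=2)    # low demand
set_capacity("web-tier", minimum=2, maximum=10, desired=6)   # high demand
```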

On the storage side, AWS offers multiple tiers to fit your storage needs. For instance, you can store frequently used files/objects on the S3 Standard tier, store less frequently used files/objects on the S3 Infrequent Access (IA) tier, and store archive data on Glacier. Finally, delete data that you no longer need.
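A minimal sketch of this tiering with an S3 lifecycle rule follows; the bucket name, prefix, and day thresholds are assumptions.

```python
# Illustrative sketch: move objects to cheaper tiers over time, then delete.
# The bucket, prefix, and thresholds are assumptions for the example.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-and-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},   # Infrequent Access
                {"Days": 90, "StorageClass": "GLACIER"},       # archive
            ],
            "Expiration": {"Days": 365},                       # delete old data
        }]
    },
)
```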

2. Reserve capacity

If you know that you will be using AWS for a long period of time, you can commit to reserved capacity and save a lot of money compared with the equivalent on-demand capacity.

Reserved Instances are available in three options – all up-front (AURI), partial up-front (PURI), or no up-front payment (NURI). When you buy Reserved Instances, the larger the up-front payment, the greater the discount. To maximize your savings, pay all up-front and receive the largest discount. Partial up-front RIs offer lower discounts but let you spend less up front. Lastly, you can choose to spend nothing up front and receive a smaller discount, which frees up capital to spend on other projects.

3. Use spot market

If you have applications that are not time sensitive, such as non-critical batch workloads, you may be able to save a lot of money by leveraging Amazon EC2 Spot Instances. This works like an auction in which you bid on spare Amazon EC2 computing capacity.

Since Spot Instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications.
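Here is a minimal sketch of requesting Spot capacity for such a workload with boto3. The AMI ID, instance type, instance count, and maximum price are assumptions.

```python
# Illustrative sketch: request Spot capacity for a non-critical batch job.
# The AMI ID, instance type, count, and maximum price are placeholders.
import boto3

ec2 = boto3.client("ec2")

response = ec2.request_spot_instances(
    SpotPrice="0.03",            # the most you are willing to pay per hour
    InstanceCount=2,
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "m5.large",
    },
)

for request in response["SpotInstanceRequests"]:
    print(request["SpotInstanceRequestId"], request["State"])
```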

4. Cleanup unused services

One of the best ways to save money is to turn off unused and idle resources. These include EC2 instances with no network or CPU activity over the past few days, load balancers with no traffic, unattached block storage (EBS) volumes, piles of snapshots, and detached Elastic IPs. For instance, one company analyzed its usage pattern and found that during certain periods it could power off a number of EC2 instances, thereby reducing its costs.
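The sketch below shows how two common kinds of waste can be found with boto3: EBS volumes that are not attached to any instance and Elastic IPs that are not associated with anything. Deciding what to delete is left to you.

```python
# Illustrative sketch: report unattached EBS volumes and idle Elastic IPs.
import boto3

ec2 = boto3.client("ec2")

# Volumes in the "available" state are not attached to any instance.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}])["Volumes"]
for volume in volumes:
    print("unattached volume:", volume["VolumeId"], volume["Size"], "GiB")

# Elastic IPs with no association still accrue charges.
for address in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in address:
        print("idle Elastic IP:", address["PublicIp"])
```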

One thing you really need to do on a regular basis is monitor and analyze your usage. AWS provides several tools to track your costs, such as Amazon CloudWatch (which collects and tracks metrics, monitors log files, and sets alarms), AWS Trusted Advisor (which looks for opportunities to save you money, such as turning off non-production instances), and AWS Cost Explorer (which gives you the ability to analyze your costs and usage).

Reference: https://aws.amazon.com/pricing/cost-optimization/