Selecting the Right HCI Solution

The popularity of hyper-converged infrastructure (HCI) systems is fueled not only by better, faster, and cheaper CPUs and flash storage, but also by better orchestration of compute and storage resources, horizontal scaling, and the elasticity to adjust to changing workloads.

Hyper-converged infrastructures are scale-out systems in which nodes are added and aggregated into a single platform. Each node performs compute, storage, and networking functions and runs virtualization software. HCI enables the software-defined data center.

But what are the considerations in buying the right solution for your use case? Here are some guidelines:

1. Closely inspect how the system implements reliability and resiliency. How does it protect system configuration and data? Common implementations include replication, erasure coding, and distributing state information across multiple nodes to enable automatic failover.

2. Does it have self-healing capabilities?

3. Can it perform non-disruptive upgrades?

4. Does it support VMware vSphere, as well as Microsoft Hyper-V and open source hypervisors like KVM?

5. Does the storage support auto-tiering?

6. Since migrations affect virtual machine performance, how does the system maintain data locality as virtual machines move from one host to another?

7. What are the network configuration options? How is the network managed? Are there self-optimizing network capabilities?

8. How is performance affected when backups and restores are performed?

9. What is the performance impact if nodes are deployed across multiple geographic regions?

10. What are the data protection and recovery capabilities, such as snapshots and replication of workloads locally, to remote data centers, and to the cloud?

11. Does it deduplicate data to minimize the amount of data stored?

12. Does it have the option to extend to public clouds?

13. What are its management capabilities? Does it provide a single intuitive console for managing the HCI, or does it include a plug-in to a hypervisor management tool such as vCenter to perform the management tasks?

14. Does it have APIs that third-party tools and custom scripts can interface with to enable automation? (See the sketch after this list.)

15. Does it have a monitoring, alerting, and reporting system that covers performance, errors, and capacity planning?
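
Most HCI platforms expose a REST API for this kind of automation. As a purely hypothetical sketch (the endpoint, credential handling, and JSON field names below are placeholders; consult your vendor's API documentation, e.g., Nutanix Prism or VMware vSAN, for the real schema), a capacity-reporting script in Python might look like this:

```python
# Hypothetical sketch: querying an HCI management API to automate capacity reporting.
# The endpoint, credentials, and JSON fields are placeholders, not a real vendor API.
import requests

HCI_API = "https://hci-mgmt.example.com/api/v1"   # placeholder management endpoint
AUTH = ("admin", "password")                      # use a real secrets store in practice

def cluster_capacity_report():
    """Pull per-node capacity numbers and flag nodes above 80% storage utilization."""
    resp = requests.get(f"{HCI_API}/nodes", auth=AUTH, verify=True, timeout=30)
    resp.raise_for_status()
    for node in resp.json().get("nodes", []):
        used = node["storage_used_bytes"]
        total = node["storage_total_bytes"]
        pct = 100.0 * used / total
        status = "WARN" if pct > 80 else "ok"
        print(f'{node["name"]}: {pct:.1f}% storage used [{status}]')

if __name__ == "__main__":
    cluster_capacity_report()
```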

Finally, evaluate the vendor itself: its future in the HCI space, its product roadmap, its support policies, and its cost model (lease, outright purchase, pay as you go, etc.).

Optimizing AWS Cost

One of the advantages of using the cloud is cost savings, since you only pay for what you use. However, many companies still waste resources in the cloud and end up paying for services they don’t use. Many teams are stuck in the old ways of implementing IT infrastructure, such as overprovisioning and keeping servers on 24×7 even when they are idle most of the time.

There are several ways you can optimize AWS in order to save money.

1. Right sizing

With AWS you can right size your services to meet exactly the capacity requirements you need without having to overprovision. On the compute side, select the EC2 instance type appropriate for the application, and provision only enough instances to meet the need. When the need for compute increases, you can scale up or scale out. For instance, during low demand, run only a couple of EC2 instances; during high demand, automatically provision additional EC2 instances to meet the load.

On the storage side, AWS offers multiple tiers to fit your storage needs. For instance, you can store frequently used files/objects in the S3 Standard tier, store less frequently used files/objects in the S3 Infrequent Access (IA) tier, and store archive data in Glacier. Finally, delete data that you no longer need.
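
One way to automate this tiering is with an S3 lifecycle policy. Below is a minimal boto3 sketch, assuming a hypothetical bucket named my-app-logs and credentials already configured; the prefix and day counts are placeholders you would adjust to your own retention policy:

```python
# Minimal sketch: transition objects to Standard-IA after 30 days, to Glacier after
# 90 days, and delete them after 365 days. Bucket name and prefix are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-app-logs",                       # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire-logs",
                "Filter": {"Prefix": "logs/"},  # apply only to objects under logs/
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},    # delete data you no longer need
            }
        ]
    },
)
```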

2. Reserve capacity

If you know that you will be using AWS for a long period of time, you can commit to reserved capacity and save significantly compared to equivalent on-demand capacity.

Reserved Instances are available in three payment options: all upfront (AURI), partial upfront (PURI), or no upfront (NURI). When you buy Reserved Instances, the larger the upfront payment, the greater the discount. To maximize your savings, pay all upfront and receive the largest discount. Partial upfront RIs offer a lower discount but let you spend less up front. Lastly, you can choose to pay nothing up front and receive a smaller discount, freeing up capital to spend on other projects.
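
As a purely illustrative calculation (the hourly rate and upfront price below are made-up placeholders, not real AWS prices), the savings comparison works like this:

```python
# Illustrative arithmetic only: compare the effective cost of an all-upfront Reserved
# Instance against On-Demand over a one-year term. All prices are placeholders.
HOURS_PER_YEAR = 8760
TERM_YEARS = 1

on_demand_rate = 0.10          # hypothetical $/hour for an On-Demand instance
ri_upfront = 500.00            # hypothetical all-upfront payment for the same instance
ri_hourly = 0.00               # all-upfront RIs have no recurring hourly charge

total_hours = HOURS_PER_YEAR * TERM_YEARS
on_demand_cost = on_demand_rate * total_hours
ri_cost = ri_upfront + ri_hourly * total_hours

savings_pct = 100 * (on_demand_cost - ri_cost) / on_demand_cost
print(f"On-Demand: ${on_demand_cost:.2f}  RI: ${ri_cost:.2f}  savings: {savings_pct:.0f}%")
```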

3. Use spot market

If you have applications that are not time-sensitive, such as non-critical batch workloads, you may be able to save a lot of money by leveraging Amazon EC2 Spot Instances. This works like an auction where you bid on spare Amazon EC2 computing capacity.

Since Spot Instances are often available at a significant discount compared to On-Demand pricing, you can substantially reduce the cost of running these workloads.
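
A minimal boto3 sketch of requesting Spot capacity for a batch job is shown below; the AMI ID, instance type, key pair, and maximum price are placeholders. (Newer workflows can also use EC2 Fleet or Auto Scaling groups with a Spot purchase option instead of direct Spot requests.)

```python
# Sketch: request two Spot Instances for a one-time batch run. All identifiers are
# placeholders; swap in your own AMI, instance type, and key pair.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",                       # max price you are willing to pay ($/hour)
    InstanceCount=2,
    Type="one-time",                        # batch workload: no need to persist the request
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0", # hypothetical AMI
        "InstanceType": "m5.large",
        "KeyName": "batch-worker-key",      # hypothetical key pair
    },
)

for req in response["SpotInstanceRequests"]:
    print("Spot request submitted:", req["SpotInstanceRequestId"])
```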

4. Cleanup unused services

One of the best ways to save money is to turn off unused and idle resources. These include EC2 instances with no network or CPU activity over the past few days, load balancers with no traffic, unattached block storage (EBS) volumes, accumulated snapshots, and detached Elastic IPs. For instance, one company analyzed its usage pattern and found that during certain periods it could power off a number of EC2 instances, thereby reducing costs.
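
As a rough sketch of how you might hunt for idle instances with boto3 and CloudWatch (the 7-day window and 5% CPU threshold are arbitrary assumptions; network activity and application context should be checked before stopping anything):

```python
# Sketch: list running EC2 instances and flag those whose average CPU over the last
# 7 days is below 5% as candidates for shutdown.
import boto3
from datetime import datetime, timedelta

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")

end = datetime.utcnow()
start = end - timedelta(days=7)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for res in reservations:
    for inst in res["Instances"]:
        iid = inst["InstanceId"]
        stats = cw.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": iid}],
            StartTime=start,
            EndTime=end,
            Period=86400,                   # one datapoint per day
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        avg = sum(p["Average"] for p in points) / len(points) if points else 0.0
        if avg < 5.0:
            print(f"{iid}: avg CPU {avg:.1f}% over 7 days -- candidate for shutdown")
```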

One thing you really need to do on a regular basis is monitor and analyze your usage. AWS provides several tools to track your costs, such as Amazon CloudWatch (which collects and tracks metrics, monitors log files, and sets alarms), AWS Trusted Advisor (which looks for opportunities to save you money, such as turning off non-production instances), and AWS Cost Explorer (which lets you analyze your costs and usage).

Reference: https://aws.amazon.com/pricing/cost-optimization/

Hyper Converged Infrastructure (HCI)

Companies that want to retain control of their infrastructure and data (for regulatory, security, application-requirement, or other reasons), but still want the benefits of the public cloud – such as near-unlimited scalability, efficient resource utilization, the cost-effectiveness of pooling compute and storage resources, and easy provisioning of resources on demand – can benefit tremendously from using hyper-converged infrastructure (HCI) on their premises.

Hyper-converged infrastructure consolidates compute, storage, and networking in a box, creating a modular system that can be scaled linearly. HCI takes advantage of commodity hardware (e.g., x86 systems) and advances in storage and networking technologies (e.g., flash storage providing high IOPS, and 10Gb/40Gb high-speed Ethernet).

HCI uses virtualization technology (such as VMware) to aggregate compute, storage and network. It eliminates the need for dedicated SAN and storage arrays by pooling the storage of each node and defining it via software. In addition, HCI usually offers unified management which eliminates the different management silos between compute, storage and network.

There are a variety of HCI solutions to choose from. You can build it yourself using commodity hardware, virtualization software (e.g., VMware vSphere), and software-defined storage (e.g., VMware vSAN). You can also buy hyper-converged appliances from vendors such as Nutanix and Dell EMC (VxRail). Hyper-converged rack-scale systems for large enterprises, such as Dell EMC VxRack, are available as well.

There are numerous advantages to using HCI:

1. Faster time to deploy – you can easily add compute, storage, and network, and scale it up and out to meet business demands. This in turn shortens development cycles for new apps and services.

2. Simplified management and operations – compute, storage, and network provisioning can be done by a unified team, eliminating the separate network, compute, and storage silos. Many provisioning and configuration tasks can now be scripted and automated.

3. Cost savings – the initial investment is usually lower. Your company can start small and scale incrementally as you grow, adding smaller amounts of compute or storage capacity as required versus buying large bundles of software and storage arrays. Operational expenses are also much lower, since there is no longer a SAN to manage.

4. Smaller data center footprint – which means lower power and cooling requirements. HCI can often consolidate as many as 16 data center racks into one.

HCI is the ideal infrastructure solution for on-premises data centers.

Migrating IT Workloads to the Cloud

As companies realize the benefits of the cloud, they are not only deploying new applications in the cloud but also migrating existing on-premises applications to it.

However, migration can be a daunting task, and if not planned and executed properly, it can end in disaster.

When migrating to the cloud, the first thing companies have to do is to define a strategy. There are several common migration strategies.

The first is “lift and shift.” In this method, applications are re-hosted with a cloud provider (such as AWS or Azure) without changing their architecture. Re-hosting can be done by syncing the workload and failing over, using tools available from the cloud provider or third-party vendors.

The second strategy is to re-platform. In this method, the core architecture of the application is unchanged, but some optimizations are done to take advantage of the cloud architecture.

The third strategy is to repurchase. In this method, the existing application is totally dropped and you buy a new one that runs on the cloud.

The fourth strategy is to re-architect the application by using cloud-native features. Usually you re-architect the application to take advantage of the scalability and higher performance offered by the cloud.

The last strategy is to retain applications on-premises. Some applications (especially legacy ones) are very complicated to migrate, and keeping them on-premises may be the best option.

One important task after migration is to validate and test the applications. Once they are running smoothly, look for opportunities for optimization, standardization, and future-proofing.

Common Pitfalls of Deploying Applications in the Cloud

Due to the relatively painless way of spinning up servers in the cloud, business units of large companies are flocking to AWS and other cloud providers to deploy their applications instead of using internal IT. This is expected and even advantageous because the speed of deployment in the cloud is usually unmatched by internal IT. However, there are many things to consider and pitfalls to avoid in order to establish a robust and secure application.

I recently performed an assessment of an application in the cloud implemented by a business unit with limited IT knowledge. Here are some of my findings:

  1. Business units have the impression that AWS takes care of the security of the application. While AWS takes care of security of the cloud (from the physical level up to the hypervisor level), the customer is still responsible for security in the cloud (including OS security, encryption, customer data protection, etc.). For instance, the customer is still responsible for OS hardening (implementing a secure password policy, turning off unneeded services, locking down SSH root access, enabling SELinux, etc.) and monthly security patching.
  2. Servers in the cloud often lack integration with internal enterprise tools for monitoring and administering servers. Enterprises have usually developed mature tools for these purposes; without integrating with them, business units are largely blind to what is going on with their servers, especially on the critical task of security monitoring.
  3. These servers are not audited periodically. For instance, although Security Groups may have been set up properly in the beginning, they have to be audited and revisited every so often so that ports that are no longer needed can be removed from the Security Groups (see the sketch after this list).
  4. There is no central allocation of IP addresses. IP addresses may overlap once their own VPC is connected to other VPCs and the enterprise internal network.
  5. One of the most commonly neglected tasks after spinning up servers is configuring their backup and retention. For regulated companies, it is extremely important to adhere to the backup and retention policy.
  6. Because of the business unit’s limited knowledge of IT infrastructure, fault tolerance and reliability may not be properly set up. For instance, they may only use one availability zone instead of using two or more.
  7. Business units may not be optimizing the cost of their deployment in the cloud. There are many ways to do this, such as using tiered storage (for instance, archiving data to Glacier instead of keeping it in S3), powering down servers when not in use, bidding for Spot capacity for less time-sensitive tasks, etc.
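
For pitfall 3, a minimal boto3 audit sketch is shown below; it only flags Security Group rules open to 0.0.0.0/0 for review, since whether such a rule is acceptable (for example, port 443 on a public load balancer) is a judgment call:

```python
# Sketch: list Security Group rules that allow traffic from the whole internet so they
# can be reviewed and, if no longer needed, removed.
import boto3

ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg.get("IpPermissions", []):
        for ip_range in perm.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                port = perm.get("FromPort", "all")
                proto = perm.get("IpProtocol", "-1")
                print(f'{sg["GroupId"]} ({sg["GroupName"]}): '
                      f"{proto}/{port} open to 0.0.0.0/0 -- review whether still needed")
```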

Business units should be cautious and should consider consulting internal IT before deploying in the cloud, to ensure reliable, secure, and cost-effective applications.

Understanding “Serverless” Applications

“Serverless” architecture is a hot topic in the IT world, especially since AWS released its Lambda service. But many people are confused by the term “serverless.” The term is paradoxical because applications running on a “serverless” architecture still use services that are powered by servers on the back end. The term is meant to be catchy and provocative, but what it really means is that, as a developer or as a company providing an app or service, you don’t need to provision, monitor, patch, or manage any server. You can focus on building and maintaining your application and leave the server administration tasks to the cloud provider. This also means that you don’t have to worry about capacity planning: with the vast resources available in the cloud, your application scales with usage. In addition, availability, fault tolerance, and infrastructure security are built in.

Whereas a traditional “always on” server sitting behind an application must be paid for 24/7, on a serverless architecture you only pay for the time your application is actually running and never pay for server idle time.

In this model, your application is only invoked when there is a trigger, such as a change in data state, a request from an end user, or a change in resource state. Your function (written in Node.js, Python, Java, or C#) then performs the task and returns its output or provides its service.
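
As a minimal illustration, here is a sketch of a Python Lambda function triggered by an S3 event (a change in data state). The bucket contents and the processing step are hypothetical; the (event, context) handler signature is the standard one Lambda invokes:

```python
# Sketch: a Lambda handler that reacts to S3 object-created events.
import json

def lambda_handler(event, context):
    """Record the objects that triggered this invocation and return a simple result."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # ... perform the actual task here (resize an image, parse a log file, etc.) ...
        processed.append(f"{bucket}/{key}")

    return {
        "statusCode": 200,
        "body": json.dumps({"processed": processed}),
    }
```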

The obvious benefits of this model are reduced cost and reduced complexity. However, there are also drawbacks, such as loss of control to the vendor, vendor lock-in, multi-tenancy concerns, and security issues.

As the “serverless” architecture becomes mainstream, more and more companies and developers will use this model to provide services.

AWS Certified Solutions Architect

I recently passed the AWS Certified Solutions Architect – Associate exam. This certification validates my knowledge of the cloud and enables me to help companies that are using or will be using AWS as their cloud provider. For more information about AWS Certification, please visit the official site here: https://aws.amazon.com/certification/

Initially used by developers and start-up companies, AWS has grown into a solid and robust cloud services provider. Big enterprises are realizing the value of AWS, and more and more companies are extending their data centers to the cloud. For most companies, traditional on-premises infrastructure may no longer be sufficient as business users demand more from IT, including faster and more scalable IT services.

Getting AWS certified requires hard work. You need to read the books, enroll in a training class (if possible), take practice tests, and get hands-on experience with AWS. In addition, you should have a working knowledge of networking, virtualization, databases, storage, servers, scripting/programming, and software applications. IT professionals should invest in cloud skills or run the risk of becoming obsolete.

AWS Cloud Architecture Best Practices

AWS services have many capabilities.  When migrating existing applications to the cloud or creating new applications for the cloud, it is important to know these AWS capabilities in order to architect the most resilient, efficient, and scalable solution for your applications.

Cloud architecture and on-premises architecture differ in many ways. In the cloud, you treat the infrastructure as configurable and flexible software rather than hardware. You need a different mindset when architecting in the cloud because the cloud has different ways of solving problems.

Consider the following design principles in the AWS cloud:

  1. Design for failure by implementing redundancy everywhere. Components fail all the time; even whole sites fail sometimes. For example, if you implement redundancy of your web/application servers across different Availability Zones, your application will be more resilient when one Availability Zone fails.
  2. Implement scalability. One of the advantages of the cloud over on-premises is the ability to grow and shrink the resources you need depending on demand. AWS supports scaling your resources vertically and horizontally, and can even automate it with Auto Scaling (a sketch follows this list).
  3. Use the AWS storage service that fits your use case. AWS has several storage services with different properties, costs, and functionality. Amazon S3 is used for web applications that need large-scale storage capacity and performance; it is also used for backup and disaster recovery. Amazon Glacier is used for data archiving and long-term backup. Amazon EBS provides block storage for mission-critical applications. Amazon EFS (Elastic File System) provides NFS file shares.
  4. Choose the right database solution. Match technology to the workload: Amazon RDS is for relational databases. Amazon DynamoDB is for NoSQL databases and Amazon Redshift is for data warehousing.
  5. Use caching to improve the end-user experience. Caching minimizes redundant data retrieval operations, making future requests faster. Amazon CloudFront is a content delivery network that caches your website at edge locations around the world. Amazon ElastiCache provides in-memory caching for data used by mission-critical database applications.
  6. Implement defense-in-depth security. This means building security at every layer. Under the AWS shared responsibility model, AWS is in charge of securing the cloud infrastructure (including the physical layer and the hypervisor layer) while the customer is in charge of most of the layers from the operating system up to the application layer. This means the customer is still responsible for patching the OS and making the application as secure as possible. AWS provides security tools that help you secure your application, such as IAM, security groups, network ACLs, CloudTrail, etc.
  7. Utilize parallel processing. For instance, issue requests from concurrent threads instead of sequentially. Another example is to deploy multiple web or application servers behind load balancers so that requests can be processed by multiple servers at once.
  8. Decouple your applications. IT systems should be designed to reduce interdependencies, so that a change or failure in one component does not cascade to other components. Let components interact with each other only through standard APIs.
  9. Automate your environment. Remove manual processes to improve your system’s stability and consistency. AWS offers many automation tools to ensure that your infrastructure can respond quickly to changes.
  10. Optimize for cost. Ensure that your resources are sized appropriately (they can scale in and out based on need) and that you are taking advantage of the different pricing options.
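
As a sketch of principles 1 and 2, the boto3 snippet below creates an Auto Scaling group spread across two Availability Zones with a target-tracking policy that holds average CPU near 50%. The launch template name and subnet IDs are placeholders:

```python
# Sketch: an Auto Scaling group across two AZs plus a target-tracking scaling policy.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-server-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    # Subnets in two different Availability Zones for resilience (hypothetical IDs).
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
)

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,    # add/remove instances to hold average CPU near 50%
    },
)
```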

Sources: AWS Certified Solutions Architect Official Study Guide; Global Knowledge Architecting on AWS 5.1 Student Guide

New Book – Practical Research 1: Basics of Qualitative Research

Grade Level: Grade 11
Semester: 2nd Semester
Track: Applied Track
Authors: Garcia, et al.
ISBN: 978-6218070127
Year Published: 2017
Language: English
No. of pages: 400
Size: 7 x 10 inches
Publisher: Azes Publishing

About the book:

This book aims to develop critical thinking and problem-solving skills through qualitative research.

Contents:

 Chapter 1 – Nature of Inquiry and Research

  1. What Is Research?
  2. The Importance of Research in Daily Life
  3. The Characteristics, Processes, and Ethics of Research
  4. Quantitative and Qualitative Research
  5. The Kinds of Research Across Fields

Chapter 2 – Qualitative Research and Its Importance in Daily Life

  1. What is Qualitative Research?
  2. Characteristics of Qualitative Research
  3. Approaches in Qualitative Research
  4. Methods in Qualitative Research
  5. Strengths and Weaknesses of Qualitative Research
  6. Importance of Qualitative Research Across Fields
  7. Generic Outline of a Written Qualitative Research Paper

Chapter 3 – Identifying the Inquiry and Stating the Problem

  1. Range of Research Topics in the Area of Inquiry
  2. How to Design a Research that is Useful in Daily Life
  3. The Research Title
  4. The Background of Research
  5. The Research Questions
  6. The Scope and Delimitation of Study
  7. Benefit and Beneficiaries/ Significance of Study
  8. The Statement of the Problem

Chapter 4 – Learning from Others and Reviewing the Literature

  1. Criteria in Selecting, Citing, and Synthesizing Related Literature
  2. Ethical Standards in Writing Related Literature
  3. The Definition of Terms as Used in the Study

Chapter 5 – Understanding Data and Ways to Systematically Collect Data

  1. What are the Qualitative Research Designs?
  2. Description of Sample
  3. Instrument Development
  4. Data Collection and Analysis Procedures
  5. Guidelines in Writing Research Methodology

Chapter 6 – Finding Answers Through Data Collection

  1. Data Collection Methods
  2. Examples of Data Collection Methods

Chapter 7 – Analyzing the Meaning of the Data and Drawing Conclusions

  1. What is Qualitative Data Analysis?
  2. Ethnographic Data Analysis
  3. Grounded Theory Data Analysis
  4. Phenomenological Data Analysis
  5. Constant Comparative Method Analysis
  6. Language-Based Data Analysis
  7. Coding
  8. Computer-Aided Analysis
  9. How to Analyze Qualitative Data
  10. Summary of Analyzing Qualitative Data
  11. Examples of Data Analysis in Qualitative Research

 Chapter 8 – Reporting and Sharing the Findings

  1. Summary of Findings, Conclusions & Recommendations
  2. Techniques in Listing References
  3. Process of Report Writing
  4. Selection Criteria and Process of Best Design

Online Textbook Support is available.


For orders, please contact: 

Azes Publishing

New Book: Organization and Management

Organization and Management

DepEd K-12 Curriculum Compliant
Outcomes Based Education (OBE) Designed
Grade Level: Grade 11
Semester: 1st Semester
Strands: ABM, GAS
Authors: Palencia J, Palencia F, Palencia S.
ISBN: 978-621-436-005-5
Edition: First Edition
Year Published: 2019
Language: English
No. of pages: 
Size: 7 x 10 inches

About the book:
This book deals with the basic concepts, principles, and processes related to business organization, and the functional areas of management. Emphasis is given to the study of management functions like planning, organizing, staffing, leading, controlling, and the roles of these functions in entrepreneurship.

Contents:
Chapter 1 – Nature and Concept of Management
Chapter 2 – The Firm and Its Environment
Chapter 3 – Planning
Chapter 4 – Organizing
Chapter 5 – Staffing
Chapter 6 – Leading
Chapter 7 – Controlling
Chapter 8 – Introduction to the Different Functional Areas of Management
Chapter 9 – Special Topics in Management

Please contact me if your school is interested in reviewing this textbook for possible adoption.