Hyperconverged Infrastructure (HCI)

Companies that want to retain control of their infrastructure and data (for regulatory, security, application-requirement, or other reasons), but still want the benefits of the public cloud – such as near-unlimited scalability, efficient resource utilization, the cost-effectiveness of pooled compute and storage, and easy on-demand provisioning – can benefit tremendously from hyperconverged infrastructure (HCI) on their own premises.

Hyperconverged infrastructure consolidates compute, storage, and networking in a single box, creating a modular system that can be scaled linearly. HCI takes advantage of commodity hardware (e.g., x86 systems) and advances in storage and networking technologies (e.g., flash storage providing high IOPS and 10/40 Gigabit Ethernet).

HCI uses virtualization technology (such as VMware vSphere) to aggregate compute, storage, and networking. It eliminates the need for dedicated SANs and storage arrays by pooling the storage of each node and defining it in software. In addition, HCI usually offers unified management, which eliminates the separate management silos for compute, storage, and network.

There are a variety of HCI solutions to choose from. You can build one yourself using commodity hardware, virtualization software (e.g., VMware vSphere), and software-defined storage (e.g., VMware vSAN). You can also buy hyperconverged appliances from vendors such as Nutanix and Dell EMC (VxRail). Hyperconverged rack-scale systems for large enterprises, such as Dell EMC VxRack, are available as well.

There are numerous advantages to using HCI:

1. Faster time to deploy – you can easily add compute, storage, and network capacity, and scale up and out to meet business demands. This in turn shortens development cycles for new apps and services.

2. Simplified management and operations – compute, storage, and network provisioning can be done by a single unified team, eliminating the network, compute, and storage silos. Many provisioning and configuration tasks can now be scripted and automated.

3. Cost savings – the initial investment is usually lower. Your company can start small and scale incrementally as you grow, adding smaller amounts of compute or storage capacity as required instead of buying large bundles of software and storage arrays. Operational expenses are also much lower, since there is no longer a SAN to manage.

4. Reduced data center footprint – which means lower power and cooling requirements. HCI can often consolidate as many as 16 data center racks into one.

HCI is an ideal infrastructure solution for on-premises data centers.

Migrating IT Workloads to the Cloud

As companies realize the benefits of the cloud, they are not only deploying new applications there but also migrating existing on-premises applications to the cloud.

However, migration can be a daunting task, and if not planned and executed properly, it can end in catastrophe.

When migrating to the cloud, the first thing companies have to do is to define a strategy. There are several common migration strategies.

The first is “lift and shift”. In this method, applications are re-hosted with a cloud provider (such as AWS or Azure) largely as-is. Re-hosting is typically done by replicating the servers and data, keeping them in sync, and then failing over – using tools available from the cloud provider or third-party vendors.

The second strategy is to re-platform. In this method, the core architecture of the application is unchanged, but some optimizations are done to take advantage of the cloud architecture.

The third strategy is to repurchase. In this method, the existing application is dropped entirely and replaced with a new one that runs in the cloud.

The fourth strategy is to re-architect the application by using cloud-native features. Usually you re-architect the application to take advantage of the scalability and higher performance offered by the cloud.

The last strategy is to retain the applications on-premises. Some applications (especially legacy ones) are very complicated to migrate, and keeping them on-premises may be the best option.

One important task to perform after migration is to validate and test the applications, as in the smoke-test sketch below. Once they are running smoothly, look for opportunities for application optimization, standardization, and future-proofing.
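To make that validation repeatable, a simple smoke test can hit the application's endpoints after each cutover. Below is a minimal sketch in Python; the endpoint URLs and expected status codes are hypothetical placeholders, not part of any particular migration tool.

# Minimal post-migration smoke test (illustrative sketch).
# The endpoint list is hypothetical -- replace it with your application's URLs.
import sys
import urllib.request

ENDPOINTS = [
    ("https://app.example.com/health", 200),
    ("https://app.example.com/login", 200),
]

def check(url, expected_status):
    # Return True if the endpoint responds with the expected HTTP status code.
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == expected_status
    except Exception as exc:
        print(f"FAIL {url}: {exc}")
        return False

if __name__ == "__main__":
    results = [check(url, status) for url, status in ENDPOINTS]
    print(f"{sum(results)}/{len(results)} checks passed")
    sys.exit(0 if all(results) else 1)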

Common Pitfalls of Deploying Applications in the Cloud

Because spinning up servers in the cloud is relatively painless, business units of large companies are flocking to AWS and other cloud providers to deploy their applications instead of going through internal IT. This is expected and even advantageous, because the speed of deployment in the cloud is usually unmatched by internal IT. However, there are many things to consider and pitfalls to avoid in order to end up with a robust and secure application.

I recently performed an assessment of an application in the cloud implemented by a business unit with limited IT knowledge. Here are some of my findings:

  1. Business units have the impression that AWS takes care of the security of the application. While AWS takes care of security of the cloud (that is, security from the physical level up to the hypervisor level), the customer is still responsible for security in the cloud (including OS security, encryption, customer data protection, etc.). For instance, the customer is still responsible for OS hardening (implementing a secure password policy, turning off unneeded services, locking down SSH root access, enabling SELinux, etc.) and monthly security patching.
  2. These servers in the cloud also lack integration with internal enterprise tools for monitoring and administering servers. Enterprises have usually developed mature tools for these purposes; without that integration, teams are blind to what is going on with their servers, especially for the very important task of security monitoring.
  3. These servers are not periodically audited. For instance, although Security Groups may have been set up properly in the beginning, they have to be audited and revisited regularly so that ports that are no longer needed can be removed from the Security Groups (a small audit sketch follows this list).
  4. There is no central allocation of IP addresses. IP addresses may overlap once their own VPC is connected to other VPCs and the enterprise internal network.
  5. One of the most commonly neglected tasks after spinning up servers is configuring backup and retention. For companies that are regulated, it is extremely important to adhere to their backup and retention policies.
  6. Because of the business unit’s limited knowledge of IT infrastructure, fault tolerance and reliability may not be properly set up. For instance, they may use only one availability zone instead of two or more.
  7. Business units may not be optimizing the cost of their cloud deployment. There are many ways to do this, such as using tiered storage (for instance, archiving data to Glacier instead of keeping it in S3), powering down servers when not in use, bidding for spare capacity (Spot Instances) for less time-sensitive tasks, etc.
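To illustrate the periodic Security Group audit mentioned in item 3, here is a minimal sketch in Python using boto3. It only flags ingress rules that are open to the entire internet (0.0.0.0/0); the region is an example value, and a real audit would also cover IPv6 ranges and compare rules against an approved baseline.

# Sketch: flag security group ingress rules open to 0.0.0.0/0 (illustrative only).
# Assumes boto3 is installed and AWS credentials are configured; the region is an example.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def open_to_world(rule):
    # True if the ingress rule allows traffic from any IPv4 address.
    return any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        if open_to_world(rule):
            ports = f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')}"
            print(f"{group['GroupId']} ({group['GroupName']}): ports {ports} open to the world")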

Business units should be cautious, and should consider consulting internal IT before deploying in the cloud, to ensure reliable, secure, and cost-effective applications.

Understanding “Serverless” Applications

“Serverless” architecture is a hot topic in the IT world, especially after AWS released its Lambda service. But many people are confused by the term “serverless.” The term is paradoxical because applications running on a “serverless” architecture still use services that are powered by servers on the back end. The term is meant to be catchy and provocative, but what it really means is that, as a developer or as a company providing an app or service, you don’t need to provision, monitor, patch, or manage any servers. You can focus on building and maintaining your application and leave the server administration tasks to the cloud provider. This also means that you don’t have to worry about capacity planning: with the vast resources available in the cloud, your application scales with usage. In addition, availability, fault tolerance, and infrastructure security are already built in.

Whereas a traditional “always on” server sitting behind an application has to be paid for 24/7, in a serverless architecture you pay only for the time your code is actually running and never for server idle time.

In this model, your application is invoked only when there is a trigger, such as a change in data state, a request from an end user, or a change in resource state. Your function (written in Node.js, Python, Java, or C#) then performs the task and returns the output or service. A minimal handler is sketched below.
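As a concrete illustration, here is a minimal AWS Lambda handler in Python that reacts to an S3 “object created” trigger. The bucket and key fields follow the standard S3 event structure; the processing step itself is just a placeholder.

# Minimal AWS Lambda handler triggered by an S3 "object created" event (sketch).
# There are no servers to provision: AWS runs this function only when the trigger fires.
def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder for the real work (e.g., resize an image, transform a file).
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": "processed"}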

The obvious benefits of this model are reduced cost and reduced complexity. However, there are also drawbacks, such as ceding control to the vendor, vendor lock-in, multi-tenancy concerns, and security issues.

As the “serverless” architecture becomes mainstream, more and more companies and developers will use this model to provide services.

AWS Certified Solutions Architect

I recently passed the AWS Certified Solutions Architect – Associate exam. This certification validates my knowledge of the cloud and helps me assist companies that are using, or will be using, AWS as their cloud provider. For more information about AWS certification, please visit the official site here: https://aws.amazon.com/certification/

Initially used mainly by developers and start-up companies, AWS has grown into a solid and robust cloud services provider. Big enterprises are realizing the value of AWS, and more and more companies are extending their data centers to the cloud. For most companies, traditional on-premises infrastructure may no longer be sufficient as business users demand more from IT, including faster and more scalable IT services.

Getting AWS-certified requires hard work. You need to read the study guide, enroll in a training class (if possible), take practice tests, and get hands-on experience with AWS. In addition, you should have a working knowledge of networking, virtualization, databases, storage, servers, scripting/programming, and software applications. IT professionals should invest in cloud skills or run the risk of becoming obsolete.

AWS Cloud Architecture Best Practices

AWS services have many capabilities.  When migrating existing applications to the cloud or creating new applications for the cloud, it is important to know these AWS capabilities in order to architect the most resilient, efficient, and scalable solution for your applications.

Cloud architecture and on-premises architecture differ in many ways. In the cloud, you treat the infrastructure as configurable, flexible software rather than hardware. You need a different mindset when architecting in the cloud, because the cloud has different ways of solving problems.

You have to consider the following design principles in the AWS cloud:

  1. Design for failure by implementing redundancy everywhere. Components fail all the time; even whole sites fail sometimes. For example, if you implement redundancy for your web/application servers across different availability zones, your application will remain resilient when one availability zone fails.
  2. Implement scalability. One of the advantages of the cloud over on-premises infrastructure is the ability to grow and shrink the resources you need depending on demand. AWS supports scaling your resources vertically and horizontally, and can even automate it with auto-scaling (see the sketch after this list).
  3. Use the AWS storage service that fits your use case. AWS has several storage services with different properties, costs, and functionality. Amazon S3 is used for web applications that need large-scale storage capacity and performance; it is also used for backup and disaster recovery. Amazon Glacier is used for data archiving and long-term backup. Amazon EBS provides block storage for mission-critical applications. Amazon EFS (Elastic File System) provides shared file storage over NFS.
  4. Choose the right database solution. Match the technology to the workload: Amazon RDS for relational databases, Amazon DynamoDB for NoSQL databases, and Amazon Redshift for data warehousing.
  5. Use caching to improve the end-user experience. Caching minimizes redundant data retrieval operations, making future requests faster. Amazon CloudFront is a content delivery network that caches your website at edge locations around the world. Amazon ElastiCache caches data for mission-critical database applications.
  6. Implement defense-in-depth security. This means building security at every layer. Under the AWS shared responsibility model, AWS is in charge of securing the cloud infrastructure (including the physical and hypervisor layers), while the customer is in charge of most of the layers from the operating system up to the application. This means the customer is still responsible for patching the OS and making the application as secure as possible. AWS provides security tools that help, such as IAM, security groups, network ACLs, and CloudTrail.
  7. Utilize parallel processing. For instance, issue requests on concurrent threads instead of sequentially. Another example is to deploy multiple web or application servers behind load balancers so that requests can be processed by multiple servers at once.
  8. Decouple your applications. IT systems should be designed to reduce inter-dependencies, so that a change or failure in one component does not cascade to other components. Let the components interact with each other only through standard APIs.
  9. Automate your environment. Remove manual processes to improve your system’s stability and consistency. AWS offers many automation tools to ensure that your infrastructure can respond quickly to changes.
  10. Optimize for cost. Ensure that your resources are sized appropriately (they can scale in and out based on need) and that you are taking advantage of the different pricing options.
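To make principle 2 concrete, the sketch below attaches a target-tracking auto-scaling policy to an existing Auto Scaling group using boto3. The group name, region, and target CPU value are example values, not recommendations.

# Sketch: target-tracking auto-scaling policy (names and values are examples).
# Assumes an Auto Scaling group named "web-tier-asg" already exists.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep the group's average CPU utilization around 50% by adding or removing instances.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",
    PolicyName="keep-cpu-at-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)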

Sources: AWS Certified Solutions Architect Official Study Guide; Global Knowledge Architecting on AWS 5.1 Student Guide

New Book – Practical Research 1: Basics of Qualitative Research

Grade Level: Grade 11
Semester: 2nd Semester
Track: Applied Track
Authors: Garcia, et al.
ISBN: 978-6218070127
Year Published: 2017
Language: English
No. of pages: 400
Size: 7 x 10 inches
Publisher: Azes Publishing

About the book:

This book aims to develop critical thinking and problem-solving skills through qualitative research.

Contents:

Chapter 1 – Nature of Inquiry and Research

  1. What Is Research?
  2. The Importance of Research in Daily Life
  3. The Characteristics, Processes, and Ethics of Research
  4. Quantitative and Qualitative Research
  5. The Kinds of Research Across Fields

Chapter 2 – Qualitative Research and Its Importance in Daily Life

  1. What is Qualitative Research?
  2. Characteristics of Qualitative Research
  3. Approaches in Qualitative Research
  4. Methods in Qualitative Research
  5. Strengths and Weaknesses of Qualitative Research
  6. Importance of Qualitative Research Across Fields
  7. Generic Outline of a Written Qualitative Research Paper

Chapter 3 – Identifying the Inquiry and Stating the Problem

  1. Range of Research Topics in the Area of Inquiry
  2. How to Design a Research that is Useful in Daily Life
  3. The Research Title
  4. The Background of Research
  5. The Research Questions
  6. The Scope and Delimitation of Study
  7. Benefit and Beneficiaries/ Significance of Study
  8. The Statement of the Problem

Chapter 4 – Learning from Others and Reviewing the Literature

  1. Criteria in Selecting, Citing, and Synthesizing Related Literature
  2. Ethical Standards in Writing Related Literature
  3. The Definition of Terms as Used in the Study

Chapter 5 – Understanding Data and Ways to Systematically Collect Data

  1. What are the Qualitative Research Designs?
  2. Description of Sample
  3. Instrument Development
  4. Data Collection and Analysis Procedures
  5. Guidelines in Writing Research Methodology

Chapter 6 – Finding Answers Through Data Collection

  1. Data Collection Methods
  2. Examples of Data Collection Methods

Chapter 7 – Analyzing the Meaning of the Data and Drawing Conclusions

  1. What is Qualitative Data Analysis?
  2. Ethnographic Data Analysis
  3. Grounded Theory Data Analysis
  4. Phenomenological Data Analysis
  5. Constant Comparative Method Analysis
  6. Language-Based Data Analysis
  7. Coding
  8. Computer-Aided Analysis
  9. How to Analyze Qualitative Data
  10. Summary of Analyzing Qualitative Data
  11. Examples of Data Analysis in Qualitative Research

Chapter 8 – Reporting and Sharing the Findings

  1. Summary of Findings, Conclusions & Recommendations
  2. Techniques in Listing References
  3. Process of Report Writing
  4. Selection Criteria and Process of Best Design

Online Textbook Support is available.


For orders, please contact: 

Azes Publishing

New Book: Organization and Management


DepEd K-12 Curriculum Compliant
Outcomes Based Education (OBE) Designed
Grade Level: Grade 11
Semester: 1st Semester
Strands: ABM, GAS
Authors: Palencia J, Palencia F, Palencia S.
ISBN: 978-621-436-005-5
Edition: First Edition
Year Published: 2019
Language: English
No. of pages: 
Size: 7 x 10 inches

About the book:
This book deals with the basic concepts, principles, and processes related to business organization, and the functional areas of management. Emphasis is given to the study of management functions like planning, organizing, staffing, leading, controlling, and the roles of these functions in entrepreneurship.

Contents:
Chapter 1 – Nature and Concept of Management
Chapter 2 – The Firm and Its Environment
Chapter 3 – Planning
Chapter 4 – Organizing
Chapter 5 – Staffing
Chapter 6 – Leading
Chapter 7 – Controlling
Chapter 8 – Introduction to the Different Functional Areas of Management
Chapter 9 – Special Topics in Management

Please contact me if your school is interested in reviewing this textbook for possible adoption.

Protecting Your Company Against Ransomware Attacks

Ransomware attacks are the latest security breach incidents grabbing the headlines these days. Last month, major organizations including Britain’s National Health Service, Spain’s Telefónica, and FedEx were victims of the WannaCry ransomware attack. Ransomware infects your computer and encrypts your important documents; the attackers then demand a ransom to decrypt your data so it becomes usable again.

Ransomware operations have become more sophisticated, in some cases even offering full helpdesk support.

While the latest operating system patches and anti-malware programs can defend against these attacks to a point, they are reactive and not sufficient on their own. For instance, the WannaCry malware spread by exploiting systems that had not yet been patched, and many ransomware campaigns rely heavily on social engineering (phishing), counting on end users to open malicious email attachments or click on malicious websites.

The best defense against ransomware is a good data protection strategy built around backup and disaster recovery. When ransomware hits, you can simply remove the infected, encrypted files and restore known-good copies. It is surprising how many companies and end users do not properly back up their data, even though there are plenty of backup tools and cloud backup services available. Periodic disaster recovery tests are also necessary to make sure you can actually restore data when needed. A small backup-automation sketch follows.
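As one way to automate such backups on AWS (assuming EC2 instances with EBS volumes; the tag convention and region below are example choices, not a standard), a small boto3 script can snapshot volumes on a schedule:

# Sketch: snapshot all EBS volumes tagged Backup=true (illustrative only).
# Assumes AWS credentials are configured; the tag and region are example values.
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")

volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
)["Volumes"]

for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"Scheduled backup {datetime.now(timezone.utc):%Y-%m-%d}",
    )
    print(f"Started snapshot {snap['SnapshotId']} for volume {vol['VolumeId']}")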

A sound backup and disaster recovery plan will help mitigate ransomware attacks.

Ensuring Reliability of Your Apps on the Amazon Cloud

On February 28, 2017, the Amazon Simple Storage Service (S3) located in the Northern Virginia (US-EAST-1) Region went down due to an incorrect command issued by a technician. A lot of websites and applications that rely on the S3 service went down with it. The full information about the outage can be found here: https://aws.amazon.com/message/41926/

While Amazon Web Services (AWS) could have prevented this outage, a well-architected site should not have been affected by it. Amazon allows subscribers to use multiple availability zones (and even redundancy across multiple regions), so that when one goes down, the applications can continue running on the others.

It is very important to have a well-architected framework when using the cloud. AWS provides one that is based on five pillars:

  • Security – The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
  • Reliability – The ability of a system to recover from infrastructure or service failures, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
  • Performance Efficiency – The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
  • Cost Optimization – The ability to avoid or eliminate unneeded cost or suboptimal resources.
  • Operational Excellence – The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

For the companies affected by the outage, applying the “reliability” pillar – by utilizing multiple availability zones or replicating to a different region – could have shielded them from it. A minimal sketch of cross-region replication for S3 is shown below.
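As a sketch of the replication option, the snippet below enables S3 cross-region replication using boto3. The bucket names, region, and IAM role ARN are placeholders, and both buckets must already exist with versioning enabled.

# Sketch: enable S3 cross-region replication (bucket names and role ARN are placeholders).
# Both the source and destination buckets must already exist with versioning enabled.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # placeholder role
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = replicate all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    # Destination bucket in another region (placeholder name)
                    "Bucket": "arn:aws:s3:::my-replica-bucket"
                },
            }
        ],
    },
)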