Understanding “Serverless” Applications

“Serverless” architecture is a hot topic in the IT world, especially since AWS released its Lambda service. But many people are confused by the term “serverless.” The term is paradoxical because applications running on a “serverless” architecture still use services that are powered by servers on the back end. The term is meant to be catchy and provocative, but what it really means is that, as a developer or as a company providing an app or service, you don’t need to provision, monitor, patch, or manage any servers. You can focus on building and maintaining your application and leave the server administration tasks to the cloud provider. This also means that you don’t have to worry about capacity planning: with the vast resources available in the cloud, your application scales with usage. In addition, availability, fault tolerance, and infrastructure security are built in.

With a traditional “always on” server sitting behind an application, you pay for its 24/7 operation. On a serverless architecture, you pay only for the time your application is running; you never pay for server idle time.

In this model, your application is invoked only when there is a trigger, such as a change in data state, a request from an end user, or a change in resource state. Your program (a function written in Node.js, Python, Java, or C#) then performs the task and provides the output or service.
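To make this concrete, here is a minimal sketch of such a function in Python, written as an AWS Lambda handler triggered when an object is uploaded to S3. The event fields follow the standard S3 notification format; the processing step is just a placeholder:

```python
# Minimal AWS Lambda handler (Python) for an S3 "object created" trigger.
def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Placeholder: perform the real task here, e.g., process the file.
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"status": "processed"}
```

Lambda provisions, runs, and scales this function for you; no server is visible at any point.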

The obvious benefits of this model are reduced cost and reduced complexity. However, there are also drawbacks, such as vendor control, vendor lock-in, multi-tenancy concerns, and security issues.

As the “serverless” architecture becomes mainstream, more and more companies and developers will use this model to provide services.

AWS Certified Solutions Architect

I recently passed the AWS Certified Solutions Architect – Associate exam. This certification validates my knowledge of the cloud and helps me assist companies that are using, or will be using, AWS as their cloud provider. For more information about AWS Certification, please visit the official site: https://aws.amazon.com/certification/

Initially used by developers and start-up companies, AWS has grown into a solid and robust cloud services provider. Big enterprises are realizing the value of AWS, and more and more companies are extending their data centers to the cloud. For most companies, traditional on-premises infrastructure may no longer be sufficient as business users demand more from IT, including faster and more scalable services.

Getting AWS-certified requires hard work. You need to read the books, enroll in a training class (if possible), take practice tests, and get hands-on experience with AWS. In addition, you should have a working knowledge of networking, virtualization, databases, storage, servers, scripting/programming, and software applications. IT professionals should invest in cloud skills or run the risk of becoming obsolete.

AWS Cloud Architecture Best Practices

AWS services have many capabilities. When migrating existing applications to the cloud or creating new applications for the cloud, it is important to know these capabilities in order to architect the most resilient, efficient, and scalable solution for your applications.

Cloud architecture and on-premises architecture differ in many ways. In the cloud, you treat the infrastructure as configurable, flexible software rather than as hardware. You need a different mindset when architecting in the cloud because the cloud has a different way of solving problems.

Consider the following design principles in the AWS cloud:

  1. Design for failure by implementing redundancy everywhere. Components fail all the time; even whole sites fail sometimes. For example, if you implement redundancy for your web/application servers across different availability zones, your application will be more resilient when one availability zone fails.
  2. Implement scalability. One advantage of the cloud over on-premises infrastructure is the ability to grow and shrink the resources you need depending on demand. AWS supports scaling your resources vertically and horizontally, and can even automate it with auto scaling (see the first sketch after this list).
  3. Use the AWS storage service that fits your use case. AWS has several storage services with different properties, costs, and functionality. Amazon S3 is used for web applications that need large-scale storage capacity and performance; it is also used for backup and disaster recovery. Amazon Glacier is used for data archiving and long-term backup. Amazon EBS is block storage used for mission-critical applications. Amazon EFS (Elastic File System) provides NFS file shares.
  4. Choose the right database solution, matching the technology to the workload: Amazon RDS for relational databases, Amazon DynamoDB for NoSQL databases, and Amazon Redshift for data warehousing.
  5. Use caching to improve the end-user experience. Caching minimizes redundant data retrieval operations, making future requests faster. Amazon CloudFront is a content delivery network that caches your website at edge locations around the world. Amazon ElastiCache caches data for mission-critical database applications.
  6. Implement defense-in-depth security by building security at every layer. In the AWS shared responsibility model, AWS is in charge of securing the cloud infrastructure (including the physical and hypervisor layers), while the customer is in charge of most of the layers from the operating system up to the application. This means the customer is still responsible for patching the OS and making the application as secure as possible. AWS provides security tools that help, such as IAM, security groups, network ACLs, and CloudTrail.
  7. Utilize parallel processing. For instance, issue requests from concurrent threads instead of sequentially. Another example is deploying multiple web or application servers behind load balancers so that requests can be processed by multiple servers at once.
  8. Decouple your applications. Design IT systems to reduce inter-dependencies, so that a change or failure in one component does not cascade to other components. Let the components interact with each other only through standard APIs, for example via a message queue (see the second sketch after this list).
  9. Automate your environment. Remove manual processes to improve your system’s stability and consistency. AWS offers many automation tools to ensure that your infrastructure can respond quickly to changes.
  10. Optimize for cost. Ensure that your resources are sized appropriately (they can scale in and out based on need) and that you are taking advantage of the different pricing options.
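To illustrate principles 1 and 2, here is a sketch using boto3, the AWS SDK for Python, that creates an Auto Scaling group spanning two availability zones and attaches a target-tracking scaling policy. The group name, launch configuration name, and zone names are hypothetical placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Spread instances across two availability zones (design for failure)
# and let the group grow and shrink with demand (scalability).
# "my-asg", "my-launch-config", and the zones are hypothetical.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    LaunchConfigurationName="my-launch-config",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Target-tracking policy: add or remove instances to keep the group's
# average CPU utilization near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```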
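And for principle 8, a minimal sketch of decoupling two components with Amazon SQS, again using boto3. The queue name and message body are hypothetical:

```python
import boto3

sqs = boto3.client("sqs")

# Producer: the front end drops work onto a queue instead of calling the
# back end directly, so a slow or failed consumer does not cascade.
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 123}')

# Consumer: a separate worker polls the queue at its own pace and deletes
# each message only after it has been processed successfully.
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for msg in response.get("Messages", []):
    print("Processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```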

Sources: AWS Certified Solutions Architect Official Study Guide; Global Knowledge Architecting on AWS 5.1 Student Guide

New Book – Practical Research 1: Basics of Qualitative Research

Grade Level: Grade 11
Semester: 2nd Semester
Track: Applied Track
Authors: Garcia, et al.
ISBN: 978-6218070127
Year Published: 2017
Language: English
No. of pages: 400
Size: 7 x 10 inches
Publisher: Azes Publishing

About the book:

This book aims to develop critical thinking and problem-solving skills through qualitative research.

Contents:

 Chapter 1 – Nature of Inquiry and Research

  1. What Is Research?
  2. The Importance of Research in Daily Life
  3. The Characteristics, Processes, and Ethics of Research
  4. Quantitative and Qualitative Research
  5. The Kinds of Research Across Fields

Chapter 2 – Qualitative Research and Its Importance in Daily Life

  1. What is Qualitative Research?
  2. Characteristics of Qualitative Research
  3. Approaches in Qualitative Research
  4. Methods in Qualitative Research
  5. Strengths and Weaknesses of Qualitative Research
  6. Importance of Qualitative Research Across Fields
  7. Generic Outline of a Written Qualitative Research Paper

Chapter 3 – Identifying the Inquiry and Stating the Problem

  1. Range of Research Topics in the Area of Inquiry
  2. How to Design a Research that is Useful in Daily Life
  3. The Research Title
  4. The Background of Research
  5. The Research Questions
  6. The Scope and Delimitation of Study
  7. Benefit and Beneficiaries/Significance of Study
  8. The Statement of the Problem

Chapter 4 – Learning from Others and Reviewing the Literature

  1. Criteria in Selecting, Citing, and Synthesizing Related Literature
  2. Ethical Standards in Writing Related Literature
  3. The Definition of Terms as Used in the Study

Chapter 5 – Understanding Data and Ways to Systematically Collect Data

  1. What are the Qualitative Research Designs?
  2. Description of Sample
  3. Instrument Development
  4. Data Collection and Analysis Procedures
  5. Guidelines in Writing Research Methodology

Chapter 6 – Finding Answers Through Data Collection

  1. Data Collection Methods
  2. Examples of Data Collection Methods

Chapter 7 – Analyzing the Meaning of the Data and Drawing Conclusions

  1. What is Qualitative Data Analysis?
  2. Ethnographic Data Analysis
  3. Grounded Theory Data Analysis
  4. Phenomenological Data Analysis
  5. Constant Comparative Method Analysis
  6. Language-Based Data Analysis
  7. Coding
  8. Computer-Aided Analysis
  9. How to Analyze Qualitative Data
  10. Summary of Analyzing Qualitative Data
  11. Examples of Data Analysis in Qualitative Research

 Chapter 8 – Reporting and Sharing the Findings

  1. Summary of Findings, Conclusions & Recommendations
  2. Techniques in Listing References
  3. Process of Report Writing
  4. Selection Criteria and Process of Best Design

Online Textbook Support is available.


For orders, please contact: 

Azes Publishing

New Book: Organization and Management

DepEd K-12 Curriculum Compliant
Outcomes Based Education (OBE) Designed
Grade Level: Grade 11
Semester: 1st Semester
Strands: ABM, GAS
Authors: Palencia J, Palencia F, Palencia S.
ISBN: 978-621-436-005-5
Edition: First Edition
Year Published: 2019
Language: English
No. of pages: 
Size: 7 x 10 inches

About the book:
This book deals with the basic concepts, principles, and processes related to business organization, and the functional areas of management. Emphasis is given to the study of management functions like planning, organizing, staffing, leading, controlling, and the roles of these functions in entrepreneurship.

Contents:
Chapter 1 – Nature and Concept of Management
Chapter 2 – The Firm and Its Environment
Chapter 3 – Planning
Chapter 4 – Organizing
Chapter 5 – Staffing
Chapter 6 – Leading
Chapter 7 – Controlling
Chapter 8 – Introduction to the Different Functional Areas of Management
Chapter 9 – Special Topics in Management

Please contact me if your school is interested in reviewing this textbook for possible adoption.

Protecting Your Company Against Ransomware Attacks

Ransomware attacks are the latest security breach incidents grabbing the headlines these days. Last month, major organizations including Britain’s National Health Service, Spain’s Telefónica, and FedEx were victims of the WannaCry ransomware attack. Ransomware infects your computer by encrypting your important documents; the attackers then demand a ransom to decrypt your data so that it becomes usable again.

Ransomware operations have become more sophisticated, in some cases even offering full helpdesk support.

While the latest operating system patches and anti-malware programs can defend against these attacks to a point, they are usually reactive and ineffective. For instance, the WannaCry malware relied heavily on social engineering (phishing) to spread, counting on end users to open malicious email or to click on malicious websites.

The best defense against ransomware is a good data protection strategy in the area of backup and disaster recovery. When ransomware hits, you can simply remove the infected, encrypted files and restore the good copies. Surprisingly, a lot of companies and end users do not properly back up their data, even though there is plenty of backup software and many cloud backup services available. A periodic disaster recovery test is also necessary to make sure you can restore data when needed.
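As one concrete example, if backups are kept in Amazon S3, enabling bucket versioning preserves prior versions of objects even if ransomware overwrites the current ones. A minimal boto3 sketch, with a hypothetical bucket name and object key:

```python
import boto3

s3 = boto3.client("s3")

# Keep every prior version of each object so that an encrypted or
# overwritten file can be rolled back to its last good version.
# "my-backup-bucket" is a hypothetical bucket name.
s3.put_bucket_versioning(
    Bucket="my-backup-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# List the versions of a file to find a restore point.
versions = s3.list_object_versions(
    Bucket="my-backup-bucket", Prefix="documents/report.docx"
)
for v in versions.get("Versions", []):
    print(v["VersionId"], v["LastModified"], v["IsLatest"])
```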

A sound backup and disaster recovery plan will help mitigate ransomware attacks.

Ensuring Reliability of Your Apps on the Amazon Cloud

On February 28, 2017, the Amazon Simple Storage Service (S3) located in the Northern Virginia (US-EAST-1) Region went down due to an incorrect command issued by a technician. A lot of websites and applications that rely on the S3 service went down with it. The full information about the outage can be found here: https://aws.amazon.com/message/41926/

While Amazon Web Services (AWS) could have prevented this outage, a well-architected site should not have been affected by it. Amazon allows subscribers to use multiple availability zones (and even redundancy across multiple regions), so that when one goes down, the applications can still run on the others.

It is very important to have a well-architected framework when using the cloud. AWS provides one that is based on five pillars:

  • Security – The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
  • Reliability – The ability of a system to recover from infrastructure or service failures, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
  • Performance Efficiency – The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
  • Cost Optimization – The ability to avoid or eliminate unneeded cost or suboptimal resources.
  • Operational Excellence – The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.

For the companies affected by the outage, applying the “reliability” pillar (by utilizing multiple availability zones, or by replicating to different regions) could have shielded them from it.
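For S3 specifically, one way to apply that pillar is cross-region replication, which asynchronously copies objects to a bucket in another region. Here is a hedged boto3 sketch; the bucket names and IAM role ARN are hypothetical, and both buckets must already have versioning enabled:

```python
import boto3

s3 = boto3.client("s3")

# Replicate new objects from the primary bucket to a bucket in another
# region. All names and the role ARN below are hypothetical; the role
# must grant S3 permission to replicate on your behalf.
s3.put_bucket_replication(
    Bucket="my-primary-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-all",
                "Status": "Enabled",
                "Prefix": "",
                "Destination": {"Bucket": "arn:aws:s3:::my-dr-bucket"},
            }
        ],
    },
)
```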

Securing Your Apps on Amazon AWS

One thing to keep in mind when putting your company’s applications in the cloud, specifically on Amazon AWS, is that you are still largely responsible for securing them. Amazon AWS has solid security in place, but you should not entrust the entire security aspect to Amazon, thinking that your applications are totally secure just because they are hosted there. In fact, Amazon AWS has a shared security responsibility model, depicted in this diagram:

[Diagram: AWS shared security responsibility model. Source: Amazon AWS]

Amazon AWS is responsible for physical and infrastructure security, including hypervisor, compute, storage, and network security; the customer is responsible for application security, data security, operating system (OS) patching and hardening, network and firewall configuration, identity and access management, and client- and server-side data encryption.

However, Amazon AWS provides a slew of security services to make your applications more secure: AWS IAM for identity and access management, security groups to shield EC2 instances (servers), network ACLs that act as firewalls for your subnets, SSL encryption for data in transit, and user activity logging for auditing. As a customer, you need to understand, design, and configure these security settings to make your applications secure.
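For instance, here is a sketch using boto3 that creates a security group allowing inbound HTTPS only. The group name, description, and VPC ID are hypothetical placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group that permits inbound HTTPS (port 443) only;
# all other inbound traffic is denied by default.
# "web-sg" and the VPC ID are hypothetical.
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTPS only",
    VpcId="vpc-0abc1234",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```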

In addition, Amazon AWS provides advanced security services so that you don’t have to build them yourself, including AWS Directory Service for authentication, AWS KMS for key management, AWS WAF (Web Application Firewall) for deep packet inspection, and DDoS mitigation.

There is really no perfect security, but securing your infrastructure at every layer tremendously improves the security of your data and applications in the cloud.

Annual New England VTUG Winter Conference

I have been attending the annual New England Virtualization Technology Users Group (VTUG) Winter Warmer Conference for the past couple of years. This year, it was held on January 19, 2017 at Gillette Stadium.

Gillette Stadium is where the New England Patriots football team plays. The stadium has nice conference areas, and the event usually features meeting and getting autographs from famous Patriots alumni. This year we got the chance to meet running backs Kevin Faulk and Patrick Pass.

Although the event is sponsored by technology vendors, most of the keynotes and breakout sessions are not sales pitches. They are usually very informative sessions delivered by excellent speakers.

The key takeaways for me from the conference are the following:

  1. Cloud adoption remains a hot topic, but containerization of applications, led by Docker, enables companies to build and deliver microservices applications at lightning speed. Coupled with DevOps practices and support from major software vendors and providers (Windows, Red Hat, Azure, AWS, etc.), containers will be the next big thing in virtualization.
  2. VMware is getting serious about infrastructure security. Security is front and center in the release of vSphere 6.5, and the objective is to make it easy to manage. Significant security features include VM encryption at scale, enhanced logging from vCenter, secure boot support for VMs, and secure boot support for ESXi.
  3. As more and more companies move to a hybrid cloud model (a combination of private and public cloud), vendors are getting more innovative in creating products and services that help companies easily manage and fully secure the hybrid cloud.
  4. Hyper-converged infrastructure is now being broadly adopted, with EMC VxRail and Nutanix leading the pack. The quest for consolidation, simplification, and software-defined infrastructure is in full swing.
  5. New, innovative companies were present at the event as well. One company in particular, Igneous, offers “true cloud for local data.”

Building an Enterprise Private Cloud

Businesses use public clouds such as Amazon AWS, VMware vCloud, or Microsoft Azure because they are relatively easy to use, fast to deploy, let businesses buy resources on demand, and, most importantly, are relatively cheap (there is no operational overhead in building, managing, and refreshing an on-premises infrastructure). But there are downsides to the public cloud, such as security and compliance concerns, diminished control of data, data locality issues, and network latency and bandwidth. On-premises infrastructure is still the most cost-effective option for regulated data and for applications with predictable workloads (such as ERP, local databases, and end-user productivity tools).

However, businesses and end users expect and demand cloud-like services from their IT departments, even for these applications that are best suited on-premises. IT departments should therefore build and deliver an infrastructure that has the characteristics of a public cloud (fast, easy, on-demand, elastic) and the reliability and security of on-premises infrastructure: an enterprise private cloud.

Building an enterprise cloud is now possible because of the following technology advancements:

  1. Hyper-converged solutions
  2. Orchestration tools
  3. Flash storage

When building an enterprise cloud, keep in mind the following:

  1. It should be 100% virtualized.
  2. There should be a mechanism for self-service provisioning, monitoring, billing, and chargeback.
  3. Many operational functions should be automated.
  4. Compute and storage should scale out.
  5. It should be resilient, with no single point of failure.
  6. Security should be integrated into the infrastructure.
  7. There should be a single management platform.
  8. Data protection and disaster recovery should be integrated into the infrastructure.
  9. It should be application-centric instead of infrastructure-centric.
  10. Finally, it should be able to support legacy applications as well as modern apps.