Network Policies are an application-centric construct that allows you to specify how a pod is allowed to communicate with various network “entities”.
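For illustration, a minimal NetworkPolicy (all names here are hypothetical, not from the original) that only admits traffic to nginx pods from pods labelled role: frontend could look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend    # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx               # the pods this policy applies to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend   # only pods with this label may connect
      ports:
        - protocol: TCP
          port: 80
```

Note that NetworkPolicies only take effect when the cluster's CNI plugin supports them.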

Kubernetes Network Policies implementation diagram.

Prerequisite

This tutorial is a continuation of my previous articles:

These articles aimed to help you use Ansible¹ to create a Kubernetes cluster on Google Cloud Platform (GCP)² and deploy an Nginx³ pod. If you haven’t done so already, review those articles before proceeding.

The code used to create this tutorial is available in this repository.

From this…


Understand what is Helm, Helm Charts and how to configure GitHub pages to store and share your Charts.

What are Helm and Helm Charts?

According to the Helm official website¹:

Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

A Helm Chart² is a collection of YAML files that describe the Kubernetes resources used for your application. One can use a single file or multiple files to deploy the resources. …
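As a sketch of that idea, a minimal chart directory (names hypothetical) usually contains:

```
mychart/
  Chart.yaml          # chart metadata: name, version, appVersion
  values.yaml         # default configuration values
  templates/
    deployment.yaml   # Kubernetes manifests, templated with Go templates
    service.yaml
```

Helm renders the files under templates/ with the values from values.yaml (or overrides supplied at install time) to produce the final manifests.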


Ansible infrastructure-as-code to automate Nginx deployment in Google Kubernetes Cluster (GKE) on Google Cloud Platform (GCP).

Automate application deployments in Kubernetes using Ansible — Image from author

Prerequisite

This tutorial is a continuation of my previous article, How to automate the setup of a Kubernetes cluster on GCP, which aimed to help you use Ansible¹ to create a Kubernetes cluster on Google Cloud Platform (GCP)². If you haven’t done so already, review that article before proceeding.

The code used to create this tutorial is available in this repository.

From this point on, I assume you already have an up-and-running GKE cluster.

Manage Kubernetes Objects with Ansible

We will use the kubernetes.core.k8s³…
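As a hedged sketch (the Deployment details are illustrative assumptions, not from the original), a playbook task using kubernetes.core.k8s to deploy Nginx might look like:

```yaml
# Hypothetical playbook task: deploy an Nginx Deployment with kubernetes.core.k8s
- name: Create an Nginx deployment
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
        namespace: default
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
              - name: nginx
                image: nginx:1.25   # illustrative image tag
                ports:
                  - containerPort: 80
```

The module applies the embedded manifest against whatever cluster the current kubeconfig points to, so idempotent re-runs simply converge the resource.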


Install and configure a Docker environment to run Apache Kafka locally

Docker Environment Topology with Apache Zookeeper and Apache Kafka — from author

Introducing the Apache Kafka ecosystem

Apache Kafka¹ is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.

Apache Zookeeper² is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. It is an essential part of Kafka.

This tutorial provides the means to execute Kafka in a distributed architecture with a 3-node Broker³ cluster. Zookeeper is also configured in replicated mode⁴, called an ensemble, to take advantage of the distributed architecture.
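A minimal docker-compose sketch of that topology, assuming the Confluent community images (service names, ports, and image tags are illustrative; the full 3-node setup repeats each service with unique ids and ports):

```yaml
# Hypothetical sketch: one Zookeeper ensemble member and one Kafka broker.
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:7.4.0
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      # Replicated mode: every ensemble member is listed on every node
      ZOOKEEPER_SERVERS: zookeeper-1:2888:3888;zookeeper-2:2888:3888;zookeeper-3:2888:3888
  kafka-1:
    image: confluentinc/cp-kafka:7.4.0
    depends_on:
      - zookeeper-1
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:9092
```

With three Zookeeper members the ensemble tolerates the loss of one node while keeping a quorum, which is the point of replicated mode.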

The code used to create…


Using Ansible to install, setup, and configure a Google Kubernetes Cluster (GKE) on Google Cloud Platform (GCP).

Infrastructure as Code — Ansible — GCP — Kubernetes

Automating the setup of a GKE cluster

As I briefly described in this article, Infrastructure as Code (IaC)¹ is paramount to maintain consistency across different environments. IaC resolves the environment drift issue where each environment has unique configurations that are not reproducible automatically.

The code used to create this tutorial is available in this repository.

Ansible² is the tool of choice for this tutorial. It helps us write the code needed to automate the provisioning of a basic Kubernetes cluster on GCP (GKE)³.
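As an illustrative sketch, assuming the google.cloud Ansible collection (the project id, cluster name, and key path are placeholders), the provisioning task could look like:

```yaml
# Hypothetical task: provision a GKE cluster with the google.cloud collection
- name: Create a GKE cluster
  google.cloud.gcp_container_cluster:
    name: my-gke-cluster               # placeholder cluster name
    location: us-central1-a
    initial_node_count: 3
    project: my-gcp-project            # placeholder project id
    auth_kind: serviceaccount
    service_account_file: /path/to/sa-key.json
    state: present
```

Running the same task again is a no-op if the cluster already matches the declared state, which is exactly the reproducibility IaC is after.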

Ansible Directory Layout

Following is the directory structure we will…


What is a Service Mesh?

A service mesh is an infrastructure layer implemented to control the communication among containers (or services) in a Microservices architecture¹. The goal is to provide an improved approach for Traffic Management², Security³, and Observability⁴ across a network of services.

According to redhat.com⁵:

A service mesh doesn’t introduce new functionality to an app’s runtime environment — apps in any architecture have always needed rules to specify how requests get from point A to point B. …


IT automation, also known as Infrastructure as Code (IaC), is an intrinsic part of the DevOps culture and best practices. The goal is to guarantee the same environment is created every time the code is executed. This is pivotal to the implementation of Continuous Integration / Continuous Deployment (CI/CD). The purpose of the CI/CD pipeline is to enable teams to release a constant flow of software updates into production to quicken release cycles, lower costs, and reduce the risks associated with development.

This tutorial walks you through the creation of a serverless CI/CD pipeline using AWS CodePipeline and AWS Fargate.


All the flexibility and resources that cloud computing provides are excellent. We can spin up servers, databases, and load balancers in minutes. However, it’s easy to lose track of how much you are spending, and at the end of the month your bill can bring an unpleasant surprise.

Why wait until the end of the month to verify your spending?

Let’s use Lambda, Python, CloudWatch, and Simple Notification Service (SNS) to alert us daily regarding our billing in AWS.

Firstly, let’s create a Lambda function with permission (role) to access CostExplorer and to publish to SNS.
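A minimal sketch of such a function, assuming boto3’s Cost Explorer (ce) and SNS clients; the topic ARN is a placeholder, and the date handling assumes the function does not run on the first day of the month:

```python
from datetime import date

# Placeholder ARN -- replace with your own SNS topic
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:billing-alerts"


def format_billing_message(amount, unit):
    """Build the human-readable alert text from a cost amount."""
    return f"AWS billing update: {float(amount):.2f} {unit} month-to-date."


def lambda_handler(event, context):
    # boto3 is imported lazily so the formatting helper above can be
    # unit-tested without the AWS SDK installed.
    import boto3

    today = date.today()
    ce = boto3.client("ce")
    # Month-to-date unblended cost from Cost Explorer
    result = ce.get_cost_and_usage(
        TimePeriod={
            "Start": today.replace(day=1).isoformat(),
            "End": today.isoformat(),
        },
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
    )
    total = result["ResultsByTime"][0]["Total"]["UnblendedCost"]
    message = format_billing_message(total["Amount"], total["Unit"])

    # Publish the daily alert to SNS
    boto3.client("sns").publish(TopicArn=SNS_TOPIC_ARN, Message=message)
    return {"statusCode": 200, "body": message}
```

The Lambda’s execution role must allow ce:GetCostAndUsage and sns:Publish; a daily EventBridge (CloudWatch Events) rule then triggers the function.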


CloudFormation Stack

A stack¹ is a collection of AWS resources that you can manage as a single unit. In other words, you can create, update, or delete a collection of resources by creating, updating, or deleting stacks. AWS CloudFormation² uses stacks to provision resources.

During the execution of the stack, when a resource is successfully provisioned its status is updated to CREATE_COMPLETE as we can see in the image below.

CloudFormation stack complete — From the author

In most circumstances this behaviour is excellent. But for EC2 instances there is a particularity: AWS CloudFormation will mark the EC2 instance as ready as soon as the status checks⁴ for the…
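The paragraph above is cut off, but it points at a well-known gap: the instance can pass its status checks before its bootstrap actually finishes. A common remedy, sketched here as an assumption (the AMI id and resource names are placeholders), is a CreationPolicy paired with cfn-signal:

```yaml
# Hypothetical snippet: make the stack wait for an explicit signal from
# the instance instead of relying on the EC2 status checks alone.
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: PT15M                 # fail the stack if no signal in 15 min
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI id
      InstanceType: t3.micro
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # ... application bootstrap here ...
          # Signal success (or failure) back to CloudFormation
          /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} \
            --resource WebServer --region ${AWS::Region}
```

With this in place, the resource only reaches CREATE_COMPLETE after the bootstrap script reports in, not merely when the instance boots.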


How to create a Python Lambda function to connect to an AWS Aurora Serverless database using the ‘Data API’?

First of all, it’s important to know that the ‘Data API’ that enables the connection with Aurora Serverless is still in beta and only available in N. Virginia (us-east-1). Thus, all your resources must reside in this region.

Firstly, you create your Aurora Serverless database:

Select “Serverless” and specify the username and password to connect:
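A minimal sketch of calling the Data API from Python, assuming boto3’s rds-data client (the ARNs, database name, and helper names are placeholders of my own, not from the original):

```python
def to_sql_params(params: dict) -> list:
    """Convert a plain dict into the Data API's typed parameter structure."""
    out = []
    for name, value in params.items():
        if isinstance(value, bool):          # check bool before int: bool is an int subclass
            typed = {"booleanValue": value}
        elif isinstance(value, int):
            typed = {"longValue": value}
        elif isinstance(value, str):
            typed = {"stringValue": value}
        else:
            typed = {"doubleValue": float(value)}
        out.append({"name": name, "value": typed})
    return out


def run_query(sql, cluster_arn, secret_arn, database, params=None):
    """Run a SQL statement through the Aurora Serverless Data API."""
    # Imported lazily so to_sql_params stays testable without the AWS SDK.
    import boto3

    client = boto3.client("rds-data", region_name="us-east-1")  # Data API region
    return client.execute_statement(
        resourceArn=cluster_arn,          # the Aurora cluster's ARN
        secretArn=secret_arn,             # Secrets Manager secret with the credentials
        database=database,
        sql=sql,
        parameters=to_sql_params(params or {}),
    )
```

The Lambda role needs rds-data permissions plus secretsmanager:GetSecretValue on the secret holding the username and password created above.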

Rafael Natali

AWS | GCP | Terraform | Kubernetes — linkedin.com/in/rafaelnatali
