Securing the Connection from NodeJS App on EKS to S3
You have your app deployed on EC2 worker nodes in an EKS cluster, and this app needs to access and interact with files stored in an Amazon S3 bucket.
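To make the scenario concrete, here is a minimal sketch of that interaction using the AWS SDK for JavaScript v3. It assumes credentials are resolved from the pod's environment (for example via an IAM role associated with the service account) rather than hardcoded keys; the bucket name, key, and region below are placeholders.

```typescript
// Minimal sketch: reading an object from S3 with the AWS SDK for JavaScript v3.
// Credentials come from the environment (e.g. the IAM role the pod assumes),
// so nothing sensitive is hardcoded. Bucket, key, and region are placeholders.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

async function readReport(): Promise<string> {
  const response = await s3.send(
    new GetObjectCommand({ Bucket: "my-app-bucket", Key: "reports/latest.json" })
  );
  // In SDK v3 the response body is a stream that can be converted to a string.
  return await response.Body!.transformToString();
}

readReport().then(console.log).catch(console.error);
```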
You've probably gotten to a point where you need to manage multiple clusters using GitOps, and knowing that managing the Argo CD instance itself can be tedious or painful, you certainly do not want to install a new Argo CD instance on every new Kubernetes cluster.
So you've deployed a few resources on AWS, including an EC2 instance and a Redis instance, exposed port 6379, and made sure the other resources in the VPC can reach the Redis instance.
You've tried hardening your resources by default, and that's good, but by mistake your Redis instance was deployed into a public subnet, which makes the service reachable by any internet user.
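One way to catch part of this exposure is to check whether the instance's security group also opens the Redis port to the internet. Here is a small sketch of such a check with the AWS SDK for JavaScript v3; the security group ID and region are placeholders, and subnet placement itself would still need to be reviewed separately.

```typescript
// Sketch: flag security group rules that open Redis (6379) to 0.0.0.0/0.
// Assumes the AWS SDK for JavaScript v3; the group ID below is a placeholder.
import { EC2Client, DescribeSecurityGroupsCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

async function auditRedisExposure(groupId: string): Promise<void> {
  const { SecurityGroups } = await ec2.send(
    new DescribeSecurityGroupsCommand({ GroupIds: [groupId] })
  );
  for (const group of SecurityGroups ?? []) {
    for (const rule of group.IpPermissions ?? []) {
      const coversRedis =
        rule.FromPort !== undefined &&
        rule.ToPort !== undefined &&
        rule.FromPort <= 6379 &&
        rule.ToPort >= 6379;
      const openToWorld = (rule.IpRanges ?? []).some((r) => r.CidrIp === "0.0.0.0/0");
      if (coversRedis && openToWorld) {
        console.warn(`${group.GroupId}: port 6379 is open to the internet`);
      }
    }
  }
}

auditRedisExposure("sg-0123456789abcdef0").catch(console.error);
```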
I have been following the tech communities in Ekiti since 100 Level, passionate about every bit of it; the way I would swiftly leave classes for the Tech Hub even made my colleagues nickname me "Techub".
But then there was no clear path, no focus. Looking around, there were no cybersecurity communities, so what was I doing? I joined the dev communities, going to every event just to take the swag, and yes, learning too.
In my past article about signing container images, I got some comments that led me to dig into keyless signing of container images.
Okay, you've moved your infrastructure provisioning away from clicking through the console and adopted IaC (Infrastructure as Code), provisioning your infrastructure with Terraform.
So along the way, you discovered that you need some sensitive credentials, like a GitHub token to use with AWS Amplify, or a Datadog API key, for your deployments?
So you've moved your organization's secret management process to HashiCorp Vault on Kubernetes? Everything is working well, but you are about to promote to production, and this brings up a lot of questions about stability, recovery, and keeping a fully operational Vault servicing your deployments.
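For a sense of what "servicing your deployments" looks like from an application's side, here is a minimal sketch of reading a secret over Vault's HTTP API. It assumes a KV v2 secrets engine mounted at `secret/` and a token available in the environment; the address and secret path are placeholders, not anything specific to this setup.

```typescript
// Sketch: an application reading a secret from Vault's KV v2 HTTP API.
// Assumes a KV v2 engine mounted at "secret/"; address, token, and path are placeholders.
const VAULT_ADDR = process.env.VAULT_ADDR ?? "http://127.0.0.1:8200";
const VAULT_TOKEN = process.env.VAULT_TOKEN ?? "";

async function readSecret(path: string): Promise<Record<string, string>> {
  const res = await fetch(`${VAULT_ADDR}/v1/secret/data/${path}`, {
    headers: { "X-Vault-Token": VAULT_TOKEN },
  });
  if (!res.ok) {
    throw new Error(`Vault returned ${res.status} for ${path}`);
  }
  const body = await res.json();
  // KV v2 nests the key/value pairs under data.data.
  return body.data.data;
}

readSecret("myapp/config")
  .then((secret) => console.log(Object.keys(secret)))
  .catch(console.error);
```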
Struggling to pick the right autoscaler for your Kubernetes cluster? Trust me, I get it. With all the options out there, choosing between Cluster Autoscaler, Karpenter, and others can be overwhelming.
Here's the deal - while both Cluster Autoscaler and Karpenter are well supported on AWS, I've found Karpenter to be consistently faster at both scaling up and scaling down. Let me show you how to set it up.
There are many tools for handling the complex architecture of deploying changes to your applications from the build stage to your cluster. Most times, the process of achieving this is called GitOps, but only when Git is used as the single source of truth.
When it comes to graceful shutdown, process management, and reducing the attack surface in containerized environments, I believe we can't leave dumb-init and tini out of it.
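To make the graceful-shutdown part concrete, here is a minimal Node sketch; none of the names are specific to any real service. The SIGTERM handler only matters if the signal actually reaches the process, which is exactly what an init like tini or dumb-init takes care of when it runs as PID 1, forwarding signals to your app and reaping zombie processes.

```typescript
// Sketch: graceful shutdown in a Node service. The handler below only fires if
// SIGTERM reaches the process; an init like tini or dumb-init running as PID 1
// forwards the signal to the app and reaps orphaned child processes.
import http from "node:http";

const server = http.createServer((_req, res) => {
  res.end("ok\n");
});

server.listen(8080, () => console.log("listening on 8080"));

process.on("SIGTERM", () => {
  console.log("SIGTERM received, draining connections...");
  // Stop accepting new connections, let in-flight requests finish, then exit.
  server.close(() => {
    console.log("shutdown complete");
    process.exit(0);
  });
});
```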