This one will be relatively short; I figured I’d post it for anyone else who was struggling with this use case.

Your goal: your application needs to use a Python module that is available in a private Azure Artifacts feed, and you want to pip install this module in an Alpine-based Docker build.

I was recently working on a project where I had this exact use case. From experience, things can always get slightly more complicated when trying to build Docker images that depend on things behind private artifact repositories; the security of your repository secrets is the biggest of these issues, and…
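To sketch what this can look like in practice (the organization name, feed name, package name, and secret id below are all placeholders, not values from the post), a BuildKit secret mount lets pip authenticate against the Azure Artifacts feed without the personal access token ever being written into an image layer:

# syntax=docker/dockerfile:1
FROM python:3.9-alpine

# Native-extension builds on Alpine typically need a compiler toolchain
RUN apk add --no-cache --virtual .build-deps gcc musl-dev libffi-dev

# The PAT is mounted at /run/secrets/azure_pat only for this RUN step;
# "myorg", "myfeed" and "my-private-module" are hypothetical names.
RUN --mount=type=secret,id=azure_pat \
    PIP_EXTRA_INDEX_URL="https://build:$(cat /run/secrets/azure_pat)@pkgs.dev.azure.com/myorg/_packaging/myfeed/pypi/simple/" \
    pip install my-private-module \
    && apk del .build-deps

The build would then be invoked with something like DOCKER_BUILDKIT=1 docker build --secret id=azure_pat,src=pat.txt -t myapp . so the token lives only on the build host, never in the image.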


This will be a quick post, but I could not find much on this error, so I figured I’d post it for others.

{"service":"AWSGlue","statusCode":400,"errorCode":"EntityNotFoundException","requestId":"xxxxx","errorMessage":"Continuation for job JobBookmark for accountId=xxxxx, jobName=myjob, runId=jr_xxxxx does not exist. not found","type":"AwsServiceError"}

I was recently working on a PySpark job in AWS Glue and was attempting to use the Job Bookmarks feature, which lets your Spark jobs bookmark the last set of data read from S3 so that on the next run you don’t process it again. You can read more about this here.

--job-bookmark-option job-bookmark-enable

In my case my job had the bookmark option enabled, and I was…
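While the root cause in the post is truncated here, it’s worth noting what the bookmark machinery expects from the script itself. A minimal sketch of a bookmark-aware Glue PySpark job, with hypothetical bucket and prefix names, looks roughly like this:

import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
glue_context = GlueContext(SparkContext.getOrCreate())

# job.init()/job.commit() are what load and persist bookmark state;
# enabling --job-bookmark-option alone has no effect without them.
job = Job(glue_context)
job.init(args['JOB_NAME'], args)

# A transformation_ctx is required for this source to participate in bookmarking
frame = glue_context.create_dynamic_frame.from_options(
    connection_type='s3',
    connection_options={'paths': ['s3://my-bucket/my-prefix/']},  # placeholder path
    format='json',
    transformation_ctx='read_source')

# ... transforms and writes ...

job.commit()  # persist the bookmark for the next run

If the stored bookmark state itself is suspect, it can also be cleared with the AWS CLI via aws glue reset-job-bookmark --job-name myjob.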


If you’ve ever had to monitor an application, endpoint, or website, you’ve likely come across the literally hundreds of monitoring services that can execute simple HTTP-based checks from N global endpoints and then notify an operator when certain thresholds are met. One of the more widely known services that can do this is Pingdom.

On a past project, the team was tasked with monitoring an application composed of several underlying components, all manifested behind a single endpoint FQDN where various paths were actually serviced by N underlying applications, each of which either exposed its own health check or simply needed…
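For a concrete picture of the checks involved (the FQDN and paths below are entirely invented for illustration), each component’s health endpoint behind the shared FQDN has to be probed individually:

import requests

BASE = 'https://app.example.com'  # the single shared endpoint FQDN (placeholder)
HEALTH_PATHS = ['/orders/health', '/users/health', '/search/health']  # hypothetical paths

def check_all(timeout=5):
    # Probe each underlying application's health check behind the one FQDN
    results = {}
    for path in HEALTH_PATHS:
        try:
            resp = requests.get(BASE + path, timeout=timeout)
            results[path] = (resp.status_code == 200)
        except requests.RequestException:
            results[path] = False
    return results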


Earlier this year I re-entered the rabbit hole that is the dizzying world of CI/CD platforms and solutions. Today’s marketplace presents so many choices that I can only imagine how daunting it is for a newcomer to the space to decide which solution to go with.

Thankfully, the industry is starting to invest in defining some standardization and conventions for CI/CD systems and the concepts around “pipelines,” a space so ripe with patterns frequently repeated across vendors that some baseline level of standardization is clearly needed. One of the organizations involved in this effort is the Continuous Delivery Foundation (CDF).


This post is a continuation of my look into the world of locally executed CI/CD for developers, my prior post being about Skaffold. In this post I’ll look at another one of these tools, called Tilt.

Background

The world of software development and how apps are run in production environments has come a long way over the years. Starting with bare metal physical servers, we evolved to virtual machines, onward to LXC, Docker daemons, and now our current state of container orchestration via things like Kubernetes.

The other side of the world… that which defines how software developers locally develop, test, iterate, package…
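To give a flavor of what’s ahead (the image ref, manifest path, and resource name below are placeholders), a minimal Tiltfile, written in Starlark, wires an image build to the Kubernetes manifests that consume it:

# Tiltfile
docker_build('example-registry/my-app', '.')  # rebuild this image when sources change
k8s_yaml('deploy/app.yaml')                   # hypothetical manifest path
k8s_resource('my-app', port_forwards=8080)    # forward the app's port locally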


The acronym “CI/CD” and its constituent phrases (continuous integration & continuous [delivery|deployment]) are sometimes munged together, yet there are clear definitions and lines of delineation for each, despite the many CI/CD offerings out there that let you implement both sides of the CI/CD equation with a single framework and the same tooling.

The intent of continuous delivery is pretty simple: take a built, tested, validated artifact and “deliver or deploy” it to one or more target execution environments. What exactly an “artifact” is within a CI/CD system depends entirely on each application and its target execution environment. …


This is a long-overdue follow-up to my prior article, “Reactive re-engineering with Akka,” in which I described an approach to processing large amounts of data more efficiently using reactive actors, and how they were used to massively scale up an existing data synchronization platform that was starting to bottleneck.

In this article I want to expand on that by discussing the topic of retrying failed synchronizations of data. The concept of retrying failed operations can be pretty complicated, especially in event-driven systems where manipulations of the source data result in near-immediate synchronizations of that data…
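The article discusses this in terms of Akka actors; as a generic illustration of the underlying pattern rather than the article’s implementation, a capped exponential backoff with jitter looks roughly like:

import random
import time

def retry_with_backoff(op, max_attempts=5, base_delay=0.5, max_delay=30.0):
    # Retry op(), sleeping base_delay * 2^attempt (plus jitter) between failures
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay / 2))  # jitter avoids retry stampedes

The jitter matters in event-driven systems: without it, a burst of failures tends to retry in lockstep and hammer the downstream target all over again.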


The world of software development and how apps are run in production environments has come a long way over the years. Starting with bare metal physical servers, we evolved to virtual machines, onward to LXC, Docker daemons, and now our current state of container orchestration via things like Kubernetes.

The other side of the world… that which defines how software developers locally develop, test, iterate, package, build and deploy those apps to their final execution environments has likewise varied wildly. Much of this is due to obvious things like the choice of language and frameworks, but another factor in it…


If you’re like many others out there, you’ve been holding off on migrating to Helm 3 until at least version 3.1 was out. Well, as of early February, it finally was released. Since then I’ve gone through some analysis of migrating Helm 2 releases (with the Tillerless plugin) to Helm 3.x and figured I’d share some of my findings.

The first thing you will want to do is test that all of your charts will actually work as expected with Helm 3. For the most part, I’ve seen no issues in this particular area, but this completely depends…
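A low-risk way to do that smoke test (the release and chart names here are placeholders, and this assumes the Helm 3 binary is installed as helm3 alongside Helm 2) is to render each chart with Helm 3 and inspect the output, then let the official helm-2to3 plugin convert the releases themselves:

# Render the chart with Helm 3 without touching the cluster
helm3 template my-release ./charts/my-chart

# Install the 2to3 plugin and convert an existing Helm 2 release
helm3 plugin install https://github.com/helm/helm-2to3
helm3 2to3 convert my-release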

bitsofinfo

stream of engineering: https://github.com/bitsofinfo
