Category Archives: DevOps

Unveiling the Truth: Kubernetes as a Panacea or a Myth?

Kubernetes as a Panacea: Myth or Reality?

In the rapidly evolving world of technology, few tools have garnered as much attention as Kubernetes. Often hailed as a silver bullet for a multitude of IT challenges, it is touted as a panacea for managing containerized applications. But is this perception grounded in reality, or is it merely a myth? In this blog post, we delve into the capabilities and limitations of Kubernetes and explore whether it truly lives up to the hype.

What is Kubernetes?

Kubernetes, often abbreviated as K8s, is an open-source platform designed to automate the deployment, scaling, and operation of application containers across clusters of hosts. Originally developed by Google, it is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes provides a framework to run distributed systems resiliently, handling scaling and failover for applications, and providing deployment patterns and management tools.

The Promises of Kubernetes

  1. Scalability: One of the most significant promises of Kubernetes is its ability to scale applications seamlessly. It can automatically adjust the number of running containers based on the current load, ensuring that applications remain responsive and efficient under varying demands (see the sketch just after this list).
  2. Resilience: Kubernetes offers robust mechanisms for managing the lifecycle of applications. It ensures that your applications are always running in the desired state, automatically restarting containers that fail or are unresponsive.
  3. Portability: Kubernetes abstracts away the underlying infrastructure, making it possible to run your applications on any cloud provider or on-premises environment. This portability is a key advantage for businesses looking to avoid vendor lock-in.
  4. Efficiency: By optimizing the use of resources through its scheduling capabilities, Kubernetes can lead to more efficient utilization of hardware, reducing costs and improving performance.
  5. Automation: Kubernetes automates many aspects of application management, including deployment, scaling, and operations, freeing up developers and IT staff to focus on more strategic tasks.
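
To make the scalability promise in point 1 concrete, here is a minimal sketch. The Deployment name backend is a hypothetical placeholder; the kubectl commands and flags themselves are standard:

# Attach a horizontal pod autoscaler that targets ~80% average CPU,
# scaling the (hypothetical) "backend" Deployment between 2 and 10 replicas.
kubectl autoscale deployment backend --cpu-percent=80 --min=2 --max=10

# Watch the autoscaler adjust the replica count under load.
kubectl get hpa backend --watch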

The Reality Check

While the promises of Kubernetes are compelling, it’s essential to recognize that it is not a magic solution that will solve all problems effortlessly. There are several considerations and challenges that organizations must be aware of:

  1. Complexity: Kubernetes has a steep learning curve. Its powerful features come with a complexity that can be overwhelming for teams new to container orchestration. Proper training and expertise are required to harness its full potential.
  2. Resource Intensive: Running a Kubernetes cluster can be resource-intensive. The control plane components and various add-ons needed for a production environment can consume significant CPU and memory, which may not be ideal for small-scale applications.
  3. Operational Overhead: Despite its automation capabilities, Kubernetes still requires significant operational oversight. Maintaining, updating, and securing a Kubernetes cluster involves ongoing effort and vigilance.
  4. Security: Kubernetes security is complex and multi-faceted. Misconfigurations can lead to vulnerabilities, and securing a cluster requires a thorough understanding of Kubernetes security best practices and continuous monitoring.
  5. Integration and Compatibility: Integrating Kubernetes with existing systems and workflows can be challenging. Not all applications are designed to run in a containerized environment, and some may require substantial refactoring to be compatible with Kubernetes.
  6. Database Migrations: One of the notable limitations of Kubernetes is its inability to manage complex database migrations effectively. While Kubernetes excels at managing stateless applications, handling stateful components like databases and their intricate migration processes can be cumbersome. Database migrations often require precise sequencing, careful coordination, and rollback mechanisms that go beyond the orchestration capabilities of Kubernetes. These tasks often necessitate external tools and manual oversight, underscoring that Kubernetes is not a comprehensive solution for every aspect of application deployment and maintenance.

Use Istio

At a recent candidate interview I attended, the candidate was asked to “provide a system solution that would solve the most common issue found in a live-update system: uptime”. The requirements were simple. You have a system that is broken into two parts:

  • Backend.
  • Database.

Using k8s, what would be the ideal setup so that you can update your backend code while also running a database migration?

For example, say the backend code is compatible with the database schema at major version X. What happens if you upgrade to major version Y, and you need all your users to be serviced with zero (literally zero) downtime?

After thinking a bit, the candidate responded that you need to deploy an intermediate, backwards-compatible version in between, so that the migration gets a chance to take place (the classic expand/contract approach).

This was, of course, unacceptable according to the interviewer, whose expected answer was: use Istio.

So what does this Istio thing do?

  1. Traffic Management: Imagine you have a bunch of tiny services (like small programs) that need to talk to each other to make a larger application work, much like how team members need to communicate to complete a project. Istio acts as the “traffic controller,” making sure these messages go to the right place, in the right order, and without getting lost. It can also control how much traffic goes to each service, balance the load, and handle failures gracefully.
  2. Security: Istio makes sure that communication between services is secure. It’s like locking all the doors and windows in a building to ensure that only authorized people can enter. Istio automatically encrypts the data being sent between services and checks that each service is who it claims to be (authentication). It also enforces rules about who can talk to whom (authorization).
  3. Observability: Istio helps you keep an eye on what’s happening inside your application. It’s like having a surveillance system that monitors traffic, performance, errors, and other important metrics. Istio provides insights into how services are interacting, which can help you quickly identify and fix issues.
  4. Policy Enforcement: It ensures that certain rules are followed. For example, it can enforce limits on how much data one service can send to another or ensure that certain services are only accessible under specific conditions. It’s like having company policies to ensure everything runs smoothly and safely.

Ok, so my guess is that the interviewer had read the above bullet points and thought:

Hmmm, smart traffic management looks like it can be used to route traffic around the database while a migration is running, in a way that the system never stops servicing requests at all!

Well, that was correct, except that the database is a central place that gathers all application traffic and requires a central semaphore to serialize write operations. That semaphore's job is to decide which agent is allowed to write at any given moment. So when your migration is changing the columns of a table, the RDBMS needs exclusive access to that table in order to recreate it (think of the lock an ALTER TABLE takes), and nobody else is allowed to write to it in the meantime. No matter how smartly you route the pods and the traffic, the bottleneck in the end is the database.

Kubernetes: Panacea or Powerful Tool?

Kubernetes is undoubtedly a powerful tool that can transform how organizations deploy and manage applications. It offers unparalleled capabilities for scalability, resilience, and efficiency. However, it’s not a one-size-fits-all solution. The notion of Kubernetes as a panacea is a myth if taken to mean that it can solve all IT problems without effort or expertise.

The reality is that Kubernetes can provide immense benefits when used appropriately and with the right level of investment in knowledge and resources. Organizations should approach Kubernetes with a clear understanding of its strengths and limitations, ensuring they have the necessary skills and infrastructure in place to leverage its full potential.

In conclusion, Kubernetes is not a magical cure-all, but it is a powerful tool that, when used correctly, can significantly enhance the agility, reliability, and scalability of modern applications. By setting realistic expectations and preparing adequately, organizations can avoid the pitfalls and maximize the advantages of adopting Kubernetes.


Coldfusion Admin API

For the past 4 years or so, I have been more active on the DevOps side of things, so I was lucky enough to no longer work so closely with the business side. That perk, though, came with a caveat: I was made responsible for providing optimizations and performance gains for the company's main service, which unfortunately is largely built in a language that is hanging, quite literally, by a thread.

Yes, that language’s name is "Coldfusion". No, you have surely not heard of it before, and yes, luckily, it has nothing to do with physical cold fusion.

What is this Coldfusion?

Adobe's brainchild, which came out alongside Dreamweaver…

Coldfusion is a closed-source language that is now owned by Adobe. It was conceived back in 1995 (!) by the Allaire brothers (and reached Adobe by way of the Macromedia acquisition), and its purpose was to help people break out of the compile cycle that ruled the internet world back then.

The original intent was to create a framework that would connect HTML pages with database engines, thus providing an API that was very easy to change while coding websites.

The first implementation of Coldfusion was coded in Visual C++ (god help us), and its runtime was strictly Windows, since back then the popular runtimes and tools were provided by the Gates family. There were some ports to Sun's Solaris, but they were limited.

With the debut of ColdFusion MX (version 6), everything moved to Java, where it has stayed to this day. You can see my repo, which is a port of the popular SOLID principles. Since I was hired as a software engineer, I had to deal with code quality. The syntax is quite similar to JavaScript, but you can easily load Java JARs and run them directly (which gives the language some actual leeway).

Ok, but what's this API you're talking about?

If for some weird reason you have ended up in my position and Coldfusion is "paying your bills", you will end up reading articles about how to do stuff.

The most helpful is Ben Nadel's blog; he has been with Coldfusion since its first steps and has helped a whole lot of people with his posts. Ben will answer a lot of the questions you will have when writing Coldfusion code. He has done a lot of good work, and he gets a lot of props for publishing his problems and solutions. There are more resources where you can address your questions; I'll just mention some here: the official Adobe Coldfusion community at community.adobe.com, the Adobe CF portal at coldfusion.adobe.com, the CFML Slack, and more.

But there were times when we had to ask for professional help. Unfortunately, Coldfusion is a closed-source project. There is an open-source implementation (called Lucee), but unfortunately (and this was explored when I first joined) it wasn't 100% compatible with the company's projects. So we were stuck with the closed-source one, and even though its official documentation is good, Adobe, who holds the reins of the whole language, at times doesn't really care what's going on in the community and only answers the community's questions when under pressure.

The guy applying that pressure is Charlie Arehart. He has liaised with Adobe numerous times over popular questions (especially on the administration side of CF), and he does a really good job.

Managing CF service

My troubles started when I was asked to manage a Coldfusion service programmatically. CF comes as a service/server package; once it is running, you have the option of "visiting" a specially crafted server URL where, after authenticating, you can point and click through administrative options. Options like, for example, changing the code mappings (where the Coldfusion code resides on your server), or refreshing something Coldfusion calls the "query cache".

Long story short, I had to find a way to make all those changes programmatically; in any serious enterprise you just can't deal with point-and-click changes, iterating over every single server.

Coldfusion Admin API

Luckily, Coldfusion exposes those administrative functions in the form of an API. Charlie's Admin API blog post is descriptive enough to guide you through the process. If, for example, you want to programmatically create some database connections (in the CF world they are called "data sources"), you can do so like this:

<cfscript>
// Login is always required. This example uses two lines of code.
adminObj = createObject("component", "cfide.adminapi.administrator");
adminObj.login("admin");

// Instantiate the data source object.
myObj = createObject("component", "cfide.adminapi.datasource");
// Create a DSN.
myObj.setMSSQL(
driver="MSSQLServer",
name="northwind_MSSQL",
host = "xx.x.xxx.xx",
port = "1433",
database = "northwind",
username = "sa",
login_timeout = "29",
timeout = "23",
interval = 6,
buffer = "64000",
blob_buffer = "64000",
setStringParameterAsUnicode = "false",
description = "Northwind SQL Server",
pooling = true,
maxpooledstatements = 999,
enableMaxConnections = "true",
maxConnections = "299",
disable_clob = true,
disable_blob = true,
disable = false,
storedProc = true,
alter = false,
grant = true,
select = true,
update = true,
create = true,
delete = true,
drop = false,
revoke = false);
</cfscript>
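
The same login-then-call pattern covers the other administrative tasks mentioned earlier, such as code mappings. Below is a minimal sketch: the mapping name and path are hypothetical, and I am assuming the setMapping method on extensions.cfc, so introspect that CFC (as described further down) to confirm the exact signature:

<cfscript>
// Log in first, as always.
adminObj = createObject("component", "cfide.adminapi.administrator");
adminObj.login("admin");

// Assumption: code mappings are managed through extensions.cfc.
extObj = createObject("component", "cfide.adminapi.extensions");

// Hypothetical mapping name and path, for illustration only.
extObj.setMapping("/myapp", "C:\inetpub\wwwroot\myapp");
</cfscript>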

The API cfc files that are offered are the following:

CFCs that can be included to administer a Coldfusion Server installation

In his blog, Charlie says he has asked the Adobe team to document the functions that each cfc exposes, but, Adobe being Adobe, they didn't. They have documented merely 7 of the 18 files; the rest are left as they were.

If you wish to introspect the other files, you can do so by logging into http://localhost:8500/CFIDE/administrator/index.cfm on a running CF server installation and then heading to Security -> RDS.

Change or set up an RDS password.

There you can either disable RDS (not recommended for long-term setups) or change the password.

After that you can simply follow the virtual path; i.e., if you wish to introspect runtime.cfc you can simply go to http://localhost:8500/CFIDE/adminapi/runtime.cfc, where you will be met with the following page:

Or if you prefer a link, here

Just as you’ve guessed, this is all the CF API

So I went the extra mile and copied all the CF 2018 introspection output that Adobe generates when you visit each of the administrative modules listed on the server, creating a complete "Coldfusion 2018 Admin API Documentation".

You can just click the links below to get the HTML exactly as it is generated from the original Coldfusion Administration URL.

Base

Runtime

Access Manager

Collections

Datasource

Debugging

Event Gateway (take care when you use this one, it's severely outdated)

Extensions

Flex

Mail

Office

Runtime

Scheduler

Security

Server Instance

Websocket

I hope this simplifies the administration

My attempt is purely meant to help people, so that they don't have to dig around locally or on a server to find the tools to administer their installation.

Since Adobe stopped documenting these, I felt this had to live somewhere, so I took the initiative of putting it here.

Stay tuned, I will come back with some more posts about crypto — my new hobby!

EDIT: I will create another post documenting the CF2021 ones, as we will soon be migrating there as well.

Powershell Shenanigans

Lately I have been working in a job position oriented mostly towards the system administration side. As a result, I am creating some tools that make the everyday life of a developer easier.

Unfortunately, because the company has a legacy product (they all do, even startups!), I had to provide some tooling for that too. As you may guess, that product was running on Windows servers. And here's where the story starts getting interesting.

Powershell was very popular in the past… Yet now it's becoming a nuisance…

It is a Microsoft product!

From: The dev community

Yes, yes! I know. Half of the people you ask will come back at you with that phrase. It isn't open source, and it is a Microsoft product. And when they utter it you can see it in their facial expression, said with such aversion, as if Microsoft were the devil himself and they the twelve apostles!

Sure, the product has its issues, but it also has some good (if not very good, in my humble opinion) documentation online: https://docs.microsoft.com/en-us/powershell/

Really powerful stuff, coming from Microsoft and the chaos that is the Windows OS… (let's not forget Vista, Windows Millennium, Internet Explorer, and all those "successful products" we were forced to use…).

To cut to the chase

My main point is that Powershell strives to offer the tools system administrators need to administer their Windows installations. And unfortunately, it fails. As a product it is chaotic and big, with so many different pathways you can get caught in, especially if you compare it with the simplicity of its Unix counterparts. To be fair, they have tried to be more effective and direct: in every modern installation of Windows 10, all you have to do is press WinKey, type "Power", and press Enter, and you are in a CLI where you can start executing commands. Quite fast and user-friendly.

The problems start when you try to consolidate stuff: when you want to write different scripts that perform different tasks, or when you try to include that awesome script you wrote that is essential to the grand scheme of your process. That's when things start to get interesting, and frankly, I don't think Microsoft really put things into perspective when they started implementing the product.

For example:

I was asked by the security team to lock down user permissions on a given server. The best way to do that (since we want our users to have only the permissions they actually need) is to create another role (or user) and assume that role to run stuff. Since the setup was old, the only option I had was to use a user for this, which led me to the following hidden default decisions made by Powershell.

I had to use this:

$username = "domainuser_name"
$securePassword = "secure_hash" | ConvertTo-SecureString
$credential = New-Object System.Management.Automation.PSCredential $username, $securePassword

This let me assume the user and run the commands I wanted. The only problem was that I had to somehow produce that secure_hash using this function:

ConvertFrom-SecureString

If you visit the documentation and, without reading the description carefully (especially its last part), jump straight to the usage, you will try to call it something like this:

$SecureString = Read-Host -AsSecureString
$StandardString = ConvertFrom-SecureString $SecureString

The above will echo something like this:

Write-Host $StandardString
70006f007700650072007300680065006c006c0072006f0063006b0073003f00

for the password: powershellrocks?.

Now, if you take that $StandardString and pass it to the ConvertTo-SecureString function, it will create a System.Security.SecureString object (whatever that is; I couldn't properly inspect it…), which can be passed along in a credential to log in to Windows computers.

This works just fine as long as you run all those commands on the server you want to work with. The problems start later, when you re-provision that server (and of course you have saved that $StandardString, since the user hasn't changed credentials and you need it to log them in), if you hadn't paid attention to the last sentence of the description:

If no key is specified, the Windows Data Protection API (DPAPI) is used to encrypt the standard string representation.

Surprise!

A quick Google search for Windows Data Protection (DPAPI) will show you that it's nothing more than a key storage engine that keeps a bunch of keys for the user. So when you call the function without the -Key argument, a different key, coming from DPAPI, is used. And of course, the error you get back when you call the reverse function on another machine isn't that descriptive either:

ConvertTo-SecureString : Input string was not in a correct format.

Was it too hard to produce a message like "key is invalid" or "decryption failed"? Especially since, by default, they are using the hidden Windows key?
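
For what it's worth, the fix is to pass an explicit key to both functions, so the encrypted string no longer depends on the machine-bound DPAPI key. A minimal sketch follows; the hard-coded key is for illustration only (-Key accepts 16, 24, or 32 bytes), and $username is the account from the earlier snippet:

# Use an explicit 256-bit (32-byte) key instead of the implicit DPAPI key.
# WARNING: a hard-coded key is for illustration only; generate one randomly
# and keep it somewhere safe (a vault, not the script itself).
$key = [byte[]](1..32)

# Encrypt once, e.g. when capturing the password interactively.
$secureString   = Read-Host -AsSecureString
$standardString = ConvertFrom-SecureString -SecureString $secureString -Key $key

# Decrypt later, even on a re-provisioned server, as long as $key is known.
$restored   = ConvertTo-SecureString -String $standardString -Key $key
$credential = New-Object System.Management.Automation.PSCredential $username, $restored
# e.g. Invoke-Command -ComputerName $server -Credential $credential -ScriptBlock { whoami }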

Unfortunately, this goes across all of PS

The guys who originally wrote Powershell didn't want to adhere to "explicit is better than implicit", a principle used quite often in software development (see this). Being primarily a Linux user, I have always loved the tools that MS provides to Windows users, and frankly, they were amazing in the past. But unfortunately, as time goes by, I am realising that the decisions they took while implementing those tools weren't as sound as those behind the respective open-source ones.

And even when the open-source guys didn't do such a good job and ended up creating non-useful tools, those tools became deprecated quite fast. That cycle didn't happen with Microsoft. A product had to go live, and whether that product covered the needs of its users was in fact irrelevant to whether it went live or not… (sounds familiar?)