
SEB Cloud Platform – Immutable infra with Packer


When we started our cloud journey, we first used Google's pre-packaged Compute Engine images. This worked fine to begin with, but one big drawback we found was that they did not ship with a cloud-logging agent.

And since we had a requirement that certain logs (e.g. syslog and authlog) should always be sent to Cloud Logging, we had to install the cloud-logging agent manually on each machine. We quickly realised that this would not scale, and that we instead needed to build our own images with the agent pre-installed and configured out of the box.
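For reference, the manual step looked roughly like this. The add-logging-agent-repo.sh script is Google's documented installer for the legacy Cloud Logging agent (google-fluentd); this is exactly the step we wanted baked into the image instead:

```shell
# Download Google's repository/install script for the legacy logging
# agent (google-fluentd) and install the agent in one step.
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install
```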

We were already using Terraform from HashiCorp to deploy our infrastructure in Google Cloud (GCP) and wanted something that tied in well with that. So we started to look into Packer, a tool that works with a number of different cloud providers, including GCP. Packer works similarly to Docker: you point it at a compute image that you want to modify, then run tasks for whatever you want included in your custom, company-branded image, such as installing logging agents. After it has run its tasks, you can push your fresh custom image to any GCP project of your choosing.

Over to execution! 

The first thing we did was to set up a pipeline to execute and build our images. We also opted to have a dedicated GCP project to store our finished images. We then started to work with Packer, and since it uses the same HCL2-style language as Terraform, it was quite easy to adopt. Here is an example:

 source "googlecompute" "seb_custom" {
  disk_size               = "${var.disk_size}"
  image_description       = "Image with stackdriver logging and monitoring agents preinstalled."
  image_family            = "${var.source_image_family}"
  image_name              = "${var.destination_image}"
  network_project_id      = "gcp-project-id"
  omit_external_ip        = true
  project_id              = "${var.destination_image_project_id}"
  scopes                  = ["https://www.googleapis.com/auth/monitoring"]
  source_image            = "${var.source_image}"
  source_image_project_id = ["${var.source_image_project_id}"]
  ssh_username            = "packer"
  subnetwork              = "subnet-name"
  tags                    = ["os-image-builder"]
  use_internal_ip         = true
  zone                    = "zone-in-europe"
 }
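The ${var.…} references are ordinary HCL2 input variables, declared separately. A minimal sketch of a few of them (names match the source block above; types and defaults here are illustrative):

```hcl
variable "disk_size" {
  type    = string
  default = "20"
}

variable "source_image_family" {
  type    = string
  default = "debian-11"
}

variable "destination_image" {
  type = string
}

variable "destination_image_project_id" {
  type = string
}
```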

The next step was a build block, which invokes sources and runs provisioning steps on them. Build blocks are described in Packer's documentation.

build {
  sources = ["source.googlecompute.seb_custom"]

  provisioner "shell" {
    script = "${var.setup_script}"
  }

  provisioner "file" {
    source      = "custom-collectd-confs/audit.conf"
    destination = "audit.conf"
  }

  provisioner "shell" {
    inline = [
      "sudo cp audit.conf /etc/google-fluentd/config.d/audit.conf",
      "sudo chown root:root /etc/google-fluentd/config.d/audit.conf",
      "sudo chmod 644 /etc/google-fluentd/config.d/audit.conf",
      "sudo service google-fluentd reload",
    ]
  }
}
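With the source and build blocks in place, a pipeline run is just the standard Packer CLI sequence (shown for illustration; working directory and flags depend on your setup):

```shell
packer init .       # install the googlecompute plugin
packer validate .   # check the template for syntax errors
packer build .      # build and push the image to the destination project
```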


Now, having these images is no fun if no one is using them. So we built images for a selected set of operating systems that we wanted to support, and then implemented an organization policy stating that users could only fetch images from our own custom image project.
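The mechanism behind such a policy is GCP's compute.trustedImageProjects constraint. Since we already manage infrastructure in Terraform, it could be expressed roughly like this (the org ID and project ID are placeholders):

```hcl
resource "google_organization_policy" "trusted_images" {
  org_id     = "123456789012"                # placeholder organization ID
  constraint = "compute.trustedImageProjects"

  list_policy {
    allow {
      # Only allow boot images from our custom image project
      values = ["projects/our-custom-image-project"]
    }
  }
}
```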


Now we can be sure that all critical logs are sent to Cloud Logging, where they can be picked up by a log sink and forwarded to secure storage.
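Such a log sink can itself be managed in Terraform. A sketch, where the bucket name and filter are our assumptions:

```hcl
resource "google_logging_project_sink" "secure_storage" {
  name        = "os-logs-to-secure-storage"
  destination = "storage.googleapis.com/secure-log-bucket"  # placeholder bucket
  filter      = "logName:\"syslog\" OR logName:\"authlog\""

  # Give the sink its own service account, which is then granted
  # write access on the destination bucket.
  unique_writer_identity = true
}
```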


This panned out very well for us, and we can easily add more mandatory software to our compute images if needed. As an example, there was a lot of demand for the Cloud SQL proxy, so that is now also packaged into our image. I would recommend that any large organization start looking into this way of working, so you can be confident that you are compliant with your regulations.
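Adding the Cloud SQL proxy to the image is just another provisioner in the build block. A sketch, assuming the legacy proxy's documented download URL and an install path of our choosing:

```hcl
provisioner "shell" {
  inline = [
    # Download the Cloud SQL proxy binary and make it executable
    "sudo curl -o /usr/local/bin/cloud_sql_proxy https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64",
    "sudo chmod +x /usr/local/bin/cloud_sql_proxy",
  ]
}
```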

Jens Hörnström
Cloud Enabler

Håkan Edman
Cyber Security Expert