Can Terraform watch a directory for changes?

Date: 2018-07-02 14:39:58

Tags: terraform

I want to watch a directory of files, and if one of them changes, re-upload the files and run some other tasks. My previous solution involved watching individual files, but that is error-prone because files can be forgotten:

resource "null_resource" "deploy_files" {    
  triggers = {
    file1 = "${sha1(file("my-dir/file1"))}"
    file2 = "${sha1(file("my-dir/file2"))}"
    file3 = "${sha1(file("my-dir/file3"))}"
    # have I forgotten one?
  }

  # Copy files then run a remote script.
  provisioner "file" { ... }
  provisioner "remote-exec: { ... }
}

My next solution was to compute a hash of the directory structure in one resource and use that hash as a trigger in a second resource:

resource "null_resource" "watch_dir" {
  triggers = {
    always = "${uuid()}"
  }

  provisioner "local-exec" {
    command = "find my-dir  -type f -print0 | xargs -0 sha1sum | sha1sum > mydir-checksum"
  }
}


resource "null_resource" "deploy_files" {    
  triggers = {
    file1 = "${sha1(file("mydir-checksum"))}"
  }

  # Copy files then run a remote script.
  provisioner "file" { ... }
  provisioner "remote-exec: { ... }
}

This works, except that changes to mydir-checksum are only picked up after the first apply. So I need to apply twice, which is not great. It is a bit of a kludge.

I can't find a more obvious way to watch an entire directory for changes to its contents. Is there a standard way to do this?

5 Answers:

Answer 0 (score: 9)

In Terraform 0.12 and later, you can use a for expression together with the fileset function and one of the hashing functions to compute a combined checksum of the files in a directory:

> sha1(join("", [for f in fileset(path.cwd, "*"): filesha1(f)]))
"77e0b2785eb7405ea5b3b610c33c3aa2dccb90ea"

The expression above computes a sha1 checksum for every file in the current directory whose name matches the pattern, joins the checksums into a single string, and finally computes a checksum of that string. So the null_resource example would look like this, with the expression above as the trigger:

resource "null_resource" "deploy_files" {    
  triggers = {
    dir_sha1 = sha1(join("", [for f in fileset("my-dir", "*"): filesha1("my-dir/${f}")]))
  }

  provisioner "file" { ... }
  provisioner "remote-exec: { ... }
}

Note that fileset("my-dir", "*") does not consider files in subdirectories of my-dir. If you want those included in the checksum, use the name pattern ** instead of *.
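
For example, a recursive variant of the trigger could look like the sketch below (it assumes the same my-dir layout as above; each path returned by fileset is prefixed with the directory so that filesha1 can resolve it):

resource "null_resource" "deploy_files" {
  triggers = {
    # "**" also matches files in subdirectories of my-dir
    dir_sha1 = sha1(join("", [for f in fileset("my-dir", "**") : filesha1("my-dir/${f}")]))
  }

  # file and remote-exec provisioners as above
}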

Answer 1 (score: 4)

I had the same requirement and implemented it in the following way, using a data.external resource:

  • Wrote a script that computes a checksum of the directory:
    #!/bin/bash
    #
    # This script calculates a SHA-256 checksum over the contents of a directory
    #
    
    # Exit if any of the intermediate steps fail
    set -e
    
    # Extract "DIRECTORY" argument from the input into
    # DIRECTORY shell variables.
    # jq will ensure that the values are properly quoted
    # and escaped for consumption by the shell.
    eval "$(jq -r '@sh "DIRECTORY=\(.directory)"')"
    
    # Compute the checksum: hash every file under DIRECTORY (listed in a
    # stable, locale-independent order), keep only each file's basename next
    # to its hash so absolute paths do not affect the result, then hash the
    # combined listing and print the final digest.
    CHECKSUM=`find ${DIRECTORY} -type f | LC_ALL=C sort | xargs shasum -a 256 | awk '{ n=split ($2, tokens, /\//); print $1 " " tokens[n]} ' |  shasum -a 256 | awk '{ print $1 }'`
    
    # Safely produce a JSON object containing the result value.
    # jq will ensure that the value is properly quoted
    # and escaped to produce a valid JSON string.
    jq -n --arg checksum "$CHECKSUM" '{"checksum":$checksum}'
    
  • Created the data.external data source as follows:

    data "external" "trigger" {
      program = ["${path.module}/dirhash.sh"]
    
      query = {
        directory = "${path.module}/<YOUR_DIR_PATH_TO_WATCH>"
      }
    }
    
  • Used the result output of the data source above as the trigger for the null_resource:

    resource "null_resource" "deploy_files" {
      # Changes to any configuration file require re-provisioning
      triggers = {
        md5 = "${data.external.trigger.result["checksum"]}"
      }
      ...
    }
    

PS: The script depends on jq.

Update: Updated the checksum calculation logic to offset the effects of find behaving differently on different platforms.

Answer 2 (score: 3)

You can use the "archive_file" data source:

data "archive_file" "init" {
  type        = "zip"
  source_dir  = "data/"
  output_path = "data.zip"
}

resource "null_resource" "provision-builder" {
  triggers = {
    src_hash = "${data.archive_file.init.output_sha}"
  }

  provisioner "local-exec" {
    command = "echo Touché"
  }
}

The null resource will be re-provisioned only when the hash of the archive has changed. The archive is rebuilt during refresh whenever the contents of source_dir (data/ in this example) change.
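
Adapted to the question's layout, a rough sketch of the same pattern (assuming the files to deploy live in my-dir; the my-dir.zip output path is only an illustration) could be:

data "archive_file" "my_dir" {
  type        = "zip"
  source_dir  = "my-dir"
  output_path = "${path.module}/my-dir.zip"
}

resource "null_resource" "deploy_files" {
  triggers = {
    # Re-run the provisioners whenever the zipped contents of my-dir change
    dir_hash = "${data.archive_file.my_dir.output_sha}"
  }

  # Copy files then run a remote script, as in the question:
  # provisioner "file" { ... }
  # provisioner "remote-exec" { ... }
}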

Answer 3 (score: 0)

Terraform does not seem to provide any directory tree traversal functions, so the only solution I can think of is to use some kind of external tool to do it, such as Make:

all: tf.plan

tf.plan: hash *.tf
        terraform plan -out=$@

hash: some/dir
        find $^ -type f -exec sha1sum {} + > $@

.PHONY: all hash

and then, in the Terraform file:

resource "null_resource" "deploy_files" {    
  triggers = {
    file1 = "${file("hash")}"
  }

  # Copy files then run a remote script.
  provisioner "file" { ... }
  provisioner "remote-exec: { ... }
}

Answer 4 (score: 0)

I use this for Cloud Functions in GCP, where the Cloud Function depends only on the file name, so I need to force the file name to change whenever any of the files change; otherwise every deployment would dirty the Cloud Function. I solved it using locals, so no extra file is created the way it is in the archive_file solution. The catch is that you have to hard-code the file names, which may be acceptable in some cases.

locals {
  # Hard code a list of files in the dir
  cfn_files = [
    "cfn/requirements.txt",
    "cfn/main.py",
  ]

  # Get the MD5 of each file in the directory
  cfn_md5sums = [for f in local.cfn_files : filemd5(f)]

  # Join the MD5 sums together and take the MD5 of all of them
  # Effectively checksumming the pieces of the dir you care about
  cfn_dirchecksum = md5(join("-", local.cfn_md5sums))
}
...

data "archive_file" "cfn" {
  type        = "zip"
  source_dir = "cfn"
  output_path = "cfn/build/${local.cfn_dirchecksum}.zip"
}
resource "google_storage_bucket_object" "archive" {
  name   = data.archive_file.cfn.output_path
  bucket = google_storage_bucket.cfn_bucket.name
  source = data.archive_file.cfn.output_path
}
...
resource "google_cloudfunctions_function" "function" {
  project     = var.project_id
  region      = var.region
  runtime     = "python37"

  source_archive_bucket = google_storage_bucket.cfn_bucket.name
  source_archive_object = google_storage_bucket_object.archive.name
...
}