I currently have two (possibly conflicting) S3 bucket policies, which show a permanent difference on Terraform. Before I show parts of the code, I will try to give an overview of the structure.
I am using a module which, for an existing S3 bucket passed in as input, attaches an S3 access policy to an IAM role and a VPC-restricted policy to the bucket itself.
I have created some code (a snippet, not the full code) to illustrate what this looks like in the module.
# S3 policy to be attached to the ROLE
data "aws_iam_policy_document" "foo_iam_s3_policy" {
  # Object-level read access
  statement {
    effect    = "Allow"
    resources = ["${data.aws_s3_bucket.s3_bucket.arn}/*"]
    actions   = ["s3:GetObject", "s3:GetObjectVersion"]
  }
  # Bucket-level access
  statement {
    effect    = "Allow"
    resources = [data.aws_s3_bucket.s3_bucket.arn]
    actions   = ["s3:*"]
  }
}
# VPC policy to be attached to the BUCKET
data "aws_iam_policy_document" "foo_vpc_policy" {
  statement {
    sid       = "VPCAllow"
    effect    = "Allow"
    resources = [data.aws_s3_bucket.s3_bucket.arn, "${data.aws_s3_bucket.s3_bucket.arn}/*"]
    actions   = ["s3:GetObject", "s3:GetObjectVersion"]
    # Restrict access to requests coming from the given VPC
    condition {
      test     = "StringEquals"
      variable = "aws:SourceVpc"
      values   = [var.foo_vpc]
    }
    principals {
      type        = "*"
      identifiers = ["*"]
    }
  }
}
# Turn the policy document into a managed policy so its ARN can be used
resource "aws_iam_policy" "foo_iam_policy_s3" {
  name        = "foo-s3-${var.s3_bucket_name}"
  description = "IAM policy for foo on s3"
  policy      = data.aws_iam_policy_document.foo_iam_s3_policy.json
}

# Attaches the S3 policy to the IAM role
resource "aws_iam_role_policy_attachment" "foo_attach_s3_policy" {
  role       = data.aws_iam_role.foo_role.name
  policy_arn = aws_iam_policy.foo_iam_policy_s3.arn
}

# Attach the foo VPC policy to the bucket
resource "aws_s3_bucket_policy" "foo_vpc_policy" {
  bucket = data.aws_s3_bucket.s3_bucket.id
  policy = data.aws_iam_policy_document.foo_vpc_policy.json
}
Now let's step outside of the module, to where the S3 bucket that is passed into the module is created, and where another policy needs to be attached to it (the S3 bucket). So, outside of the module, we:
# Create a policy that allows bar to put objects in the bucket
data "aws_iam_policy_document" "bucket_policy_bar" {
  statement {
    # Sid must be alphanumeric in S3 bucket policies; "Bar IAM access" would be rejected
    sid       = "BarIamAccess"
    effect    = "Allow"
    resources = [module.s3_bucket.bucket_arn, "${module.s3_bucket.bucket_arn}/*"]
    actions   = ["s3:PutObject", "s3:GetObject", "s3:ListBucket"]
    principals {
      type        = "AWS"
      identifiers = [var.bar_iam]
    }
  }
}

# Attach the bar bucket policy
resource "aws_s3_bucket_policy" "attach_s3_bucket_bar_policy" {
  bucket = module.s3_bucket.bucket_name
  policy = data.aws_iam_policy_document.bucket_policy_bar.json
}
(For more context: basically, foo is a database that needs the VPC condition and the S3 policy attached to its role in order to operate on the bucket, and bar is an external service that needs to write data to the bucket.)
When I try to plan/apply, Terraform shows that there is always a change, and shows an overwrite between the S3 bucket policy of bar (bucket_policy_bar) and the VPC policy attached inside the module (foo_vpc_policy).
In fact, the error I am getting sounds like what is described here:
The usage of this resource conflicts with the aws_iam_policy_attachment resource and will permanently show a difference if both are defined.
But I am attaching policies to the S3 bucket and not to a role, so I am not sure whether this warning applies to my case.
Why are my policies conflicting? And how can I avoid this conflict?
EDIT: For clarification, I have a single S3 bucket to which I need to attach two policies: one that allows VPC access (foo_vpc_policy, which gets created inside the module) and another (bucket_policy_bar) that allows an IAM role to put objects in the bucket.
"there is always change"

That is correct. aws_s3_bucket_policy sets a new policy on the bucket; it does not add new statements to the existing one. Since you are invoking aws_s3_bucket_policy twice for the same bucket, first inside the module.s3_bucket module and then in the parent module (I guess), the parent module simply attempts to set a new policy on the bucket. When you run terraform plan/apply again, Terraform detects that the policy defined in module.s3_bucket is different and tries to update it. So you basically end up in a cycle, where each apply changes the bucket policy to a new one.
I'm not aware of a Terraform resource that would allow you to update (i.e. add new statements to) an existing bucket policy. Thus I would try to refactor your design so that you execute aws_s3_bucket_policy only once, with all the statements that you require.
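One way to do that (just a sketch, assuming you can modify the module; the vpc_policy_json output name is mine) is to have the module stop attaching the policy itself and instead export its policy document, then merge the documents in the parent with the source_policy_documents argument of aws_iam_policy_document (available in recent AWS provider versions; older versions only offered the single source_json):

# Inside the module: remove the aws_s3_bucket_policy resource and
# expose the VPC policy document so the caller can merge it
output "vpc_policy_json" {
  value = data.aws_iam_policy_document.foo_vpc_policy.json
}

# In the parent: combine the module's VPC statements with the bar statements
data "aws_iam_policy_document" "combined" {
  source_policy_documents = [
    module.s3_bucket.vpc_policy_json,
    data.aws_iam_policy_document.bucket_policy_bar.json,
  ]
}

# A single aws_s3_bucket_policy now owns the whole bucket policy
resource "aws_s3_bucket_policy" "combined" {
  bucket = module.s3_bucket.bucket_name
  policy = data.aws_iam_policy_document.combined.json
}

With only one aws_s3_bucket_policy per bucket there is nothing left to overwrite, and the plan converges. Note that statements merged this way must have unique Sids (blank Sids are fine).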