Currently we are running 4 commands.

The first two AWS CLI commands run in the Jenkins Docker container:

sh 'aws cloudformation package ...'
s3Upload()

The other two AWS CLI commands run in a Docker container:

aws s3 cp source dest
aws cloudformation deploy

When these 4 commands run in a Docker container, the AWS CLI derives its access permissions from the Docker host (an EC2 instance), which assumes a role whose policy grants the needed permissions (to access S3 and to create/update the CloudFormation stack).
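This derivation can be observed on the instance itself: the CLI fetches temporary credentials for the attached role from the EC2 instance metadata service. A sketch (it only works on an EC2 instance that actually has xrole attached, so it is not runnable elsewhere):

```shell
# List the role attached to this instance's profile, then fetch its
# temporary credentials from the EC2 instance metadata service (IMDS).
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/xrole
# The second response is JSON containing AccessKeyId, SecretAccessKey,
# Token and Expiration; the AWS CLI picks these up and refreshes them
# automatically, which is why no keys are configured on the host.
```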
The problem with this solution is that we have to assign this role (say xrole) to every EC2 instance running in each test environment, and there are 3-4 test environments.

Internally, AWS creates an ad-hoc identity such as arn:aws:sts::{account Id}:assumed-role/xrole/i-112223344, and the 4 commands above run on behalf of this identity.

A better solution would be to create a user, assign the same role (xrole) to it, and run the 4 commands above as this user.
But:

1) What is the process to create such a user? Because it has to assume xrole ...

2) How do we run the 4 commands above with this user?
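On both questions, the usual pattern is: create a user whose only permission is sts:AssumeRole on xrole (and list that user in xrole's trust policy), then call aws sts assume-role and export the temporary credentials as environment variables that the AWS CLI reads. A sketch of the flow, hedged: all names (deploy-user, xrole, the account id 123456789012) are placeholders, the live AWS calls are shown commented out, and the assume-role response is a fabricated sample so the mapping to environment variables can be shown end to end:

```shell
#!/bin/sh
set -e

# --- 1) Create a user that is allowed only to assume xrole -----------------
# The user needs a policy permitting sts:AssumeRole on xrole, and xrole's
# trust policy must list the user as a trusted principal. Live calls
# (commented out; names are placeholders):
#
#   aws iam create-user --user-name deploy-user
#   aws iam put-user-policy --user-name deploy-user \
#       --policy-name allow-assume-xrole \
#       --policy-document file://assume-xrole.json
#
# where assume-xrole.json is:
cat > assume-xrole.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123456789012:role/xrole"
    }
  ]
}
EOF
python3 -m json.tool assume-xrole.json > /dev/null   # local sanity check

# --- 2) Run the 4 commands as that user ------------------------------------
# With deploy-user's long-term keys configured, obtain temporary
# credentials for xrole:
#
#   aws sts assume-role \
#       --role-arn arn:aws:iam::123456789012:role/xrole \
#       --role-session-name jenkins-deploy > creds.json
#
# Fabricated sample of what creds.json looks like:
cat > creds.json <<'EOF'
{
  "Credentials": {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "secretExample",
    "SessionToken": "tokenExample"
  }
}
EOF

# Export the temporary credentials; the AWS CLI reads these variables,
# so the 4 commands then run on behalf of xrole.
export AWS_ACCESS_KEY_ID=$(python3 -c "import json;print(json.load(open('creds.json'))['Credentials']['AccessKeyId'])")
export AWS_SECRET_ACCESS_KEY=$(python3 -c "import json;print(json.load(open('creds.json'))['Credentials']['SecretAccessKey'])")
export AWS_SESSION_TOKEN=$(python3 -c "import json;print(json.load(open('creds.json'))['Credentials']['SessionToken'])")
echo "assumed-role credentials exported"
```

Note that assume-role credentials expire (one hour by default), so the assume step has to be repeated, or configured once via a profile with role_arn and source_profile in ~/.aws/config.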
Best practice is to use roles, not users, when working with EC2 instances. Users are necessary only when you need to grant permissions to applications running on computers outside of the AWS environment (on premises). And even then, it is still best practice to grant such a user permission only to assume a role that grants the necessary permissions.
If you are running all your commands from within containers and you want to grant permissions to the containers instead of the whole EC2 instance, you can use the ECS service instead of plain EC2 instances.
When using the EC2 launch type with ECS, you have the same control over the EC2 instance, but the difference is that you can attach a role to a particular task (container) instead of the whole EC2 instance. By doing this, you can have several different tasks (containers) running on the same EC2 instance while each of them has only the permissions it needs. So if one of your containers needs to upload data to S3, you can create the necessary role, specify that role in the task definition, and only that particular task will have those permissions. Neither the other tasks nor the EC2 instance itself will be able to upload objects to S3.
Moreover, if you specify the awsvpc networking mode for your tasks, each task gets its own ENI, which means you can specify a Security Group for each task separately, even when they are running on the same EC2 instance.

Here is an example of a task definition using a Docker image stored in ECR and a role called AmazonECSTaskS3BucketRole.
{
  "containerDefinitions": [
    {
      "name": "sample-app",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/aws-nodejs-sample:v1",
      "memory": 200,
      "cpu": 10,
      "essential": true
    }
  ],
  "family": "example_task_3",
  "taskRoleArn": "arn:aws:iam::123456789012:role/AmazonECSTaskS3BucketRole"
}
Here is the documentation for task definitions.
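Assuming the task definition above is saved as example_task_3.json (the file and cluster names below are assumptions, not from the original answer), it can be registered and run with the ECS CLI. The live calls are shown commented out so the sketch stays self-contained:

```shell
# Save the task definition from the answer above.
cat > example_task_3.json <<'EOF'
{
  "containerDefinitions": [
    {
      "name": "sample-app",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/aws-nodejs-sample:v1",
      "memory": 200,
      "cpu": 10,
      "essential": true
    }
  ],
  "family": "example_task_3",
  "taskRoleArn": "arn:aws:iam::123456789012:role/AmazonECSTaskS3BucketRole"
}
EOF
python3 -m json.tool example_task_3.json > /dev/null && echo "task definition is valid JSON"

# Register it and start a task on an EC2-backed cluster (requires AWS
# credentials; "test-cluster" is a placeholder):
#
#   aws ecs register-task-definition --cli-input-json file://example_task_3.json
#   aws ecs run-task --cluster test-cluster \
#       --task-definition example_task_3 --launch-type EC2
```

Once the task is running with taskRoleArn set, the AWS CLI inside that container picks up the task role's credentials automatically, with no keys baked into the image.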