Wednesday, June 15, 2022

Docker / docker-compose / Kubernetes - add volume

You can think of docker-compose as a lightweight container orchestrator. Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.

Here, I just created a simple docker-compose file to add a mounted volume.

services:
  myapp:
    image: image-name:0.0
    build:
      context: .
    volumes:
      - ./:/app

And when you debug or run your Docker image locally, you can simply run:

docker-compose build

docker run -it image-name:0.0

Remember to run docker-compose down afterwards to stop the containers and remove the network that Compose created.

When you orchestrate the container through Argo Workflows, the Docker executor can by default get output artifacts/parameters from the base layer (e.g. /tmp), but other executors cannot, so we mount a volume onto the pod. The easiest way to do this is to use an emptyDir volume. (This is only needed for output artifacts/parameters; input artifacts/parameters are automatically mounted to an emptyDir if needed.)

(Ref: https://argoproj.github.io/argo-workflows/empty-dir/)

- name: xxx-template
  metadata:
    annotations:
      sidecar.istio.io/inject: 'false'
  container:
    image: image-name:latest
    imagePullPolicy: Always
    command: ["python", "scraper/__main__.py"]
    volumeMounts:
      - name: downloads
        mountPath: /output
  volumes:
    - name: downloads
      emptyDir: { }

Tuesday, June 14, 2022

Python - super()

The Python super() function lets you access methods from a parent class from within a child class. This helps reduce repetition in your code.

One core feature of object-oriented programming languages like Python is inheritance. Inheritance is when a new class reuses the code of an existing class.

When you’re inheriting classes, you may want to gain access to methods from a parent class. That’s where the super() function comes in.

Here is the syntax for super():

class Food():
	def __init__(self, name):
		self.name = name

class Cheese(Food):
	def __init__(self, name, brand):
		super().__init__(name)
		self.brand = brand

In Python 3, super() itself can be called without any arguments inside a class; any arguments the parent's __init__ needs (like name above) are passed to the __init__ call.
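For reference, the zero-argument form above is shorthand for passing the class and the instance explicitly, which does the same thing:

class Cheese(Food):
	def __init__(self, name, brand):
		# Equivalent to super().__init__(name) in Python 3
		super(Cheese, self).__init__(name)
		self.brand = brand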

With multiple inheritance, e.g. class Third(First, Second), Python follows the method resolution order (MRO) Third -> First -> Second. If each __init__ calls super().__init__() before its own code, then calling Third() completes Second's __init__ first, then First's, and finally Third's, as shown below.
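A minimal sketch to make that order visible (the class bodies here are my own illustration, not from the original post):

class First:
	def __init__(self):
		super().__init__()
		print("First")

class Second:
	def __init__(self):
		super().__init__()
		print("Second")

class Third(First, Second):
	def __init__(self):
		super().__init__()
		print("Third")

print([c.__name__ for c in Third.__mro__])  # ['Third', 'First', 'Second', 'object']
Third()  # prints Second, then First, then Third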






Python - Upload file to S3 using pre-signed URL

 

import requests

# API endpoint
api_url = "xxxxxx"

reports = "test/"
file_name = "test.csv"
OBJECT_NAME_TO_UPLOAD = "test.csv"

params = {
    "action": "upload",
    "file_key": reports + file_name,
    "bucket_name": "xxxx"
}

headers = {"x-api-key": "xxxx"}

session = requests.Session()
# Generate pre-signed-url
response = session.get(api_url, headers=headers, json=params)
res = response.json()

print(res)


# Upload file to S3 using the pre-signed URL
with open(OBJECT_NAME_TO_UPLOAD, "rb") as f:
    upload_response = session.put(res['url'], data=f)

print(f"Upload response: {upload_response.status_code}")

AWS - pre-signed url as inbound/outbound

 

Basic Architecture





Prerequisites/ Resources

  • S3 Bucket

  • IAM role/policy

  • Lambda Function (Python)

  • API Gateway

  • Postman (for testing)

The Lambda function acts as the backend endpoint for API Gateway:

import boto3
from botocore.client import Config

def lambda_handler(event, context):
    print("event: ", event)
    # step 1: connect to s3 using boto3
    try:
        s3Client = boto3.client("s3", config=Config(signature_version='s3v4'))

    except Exception as e:
        return {
            "status_code": 400,
            "error": 0
        }

    # step 2: prepare params
    bucket_name = event.get('bucket_name')
    file_key = event.get('file_key')
    action = event.get('action')

    # step 3: generate presigned url
    try:
        URL = s3Client.generate_presigned_url(
            "put_object" if action == "upload" else "get_object",
            Params={"Bucket": bucket_name, "Key": file_key},
            ExpiresIn=180)

        return {
            "status_code": 200,
            "url": URL,
            "event": event
        }

    except Exception as e:
        return {
            "status_code": 400,
            "error": 0
        }
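To sanity-check the handler locally (assuming AWS credentials are configured; the bucket and key names below are placeholders), you can invoke it directly with an event shaped like the one the client sends:

# Hypothetical local invocation of lambda_handler defined above
event = {
    "action": "upload",
    "bucket_name": "xxxx",
    "file_key": "test/test.csv"
}

result = lambda_handler(event, None)
print(result["status_code"])
print(result.get("url"))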

Create a lambdaAccessS3Bucket policy with the statement below, create a role for the Lambda, and attach this policy to it:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::bucket_name/*"
        }
    ]
}


Docker - common commands to install Terraform, AWS-Vault, Terragrunt and grant GitLab access

A Docker image that installs Terraform, AWS-Vault and Terragrunt, and sets up GitLab access.

When building the image, you can run:

docker build -t docker_name:tag . --force-rm --build-arg SSH_PRIVATE_KEY=""

To use your local AWS credentials, you can run the Docker image like this:

docker run -ti --env-file <(aws-vault exec your_role -- env | grep -e ^AWS_) docker_name:tag 

In that case, you don't need to prefix commands with aws-vault inside the container when you run the .sh command; instead, you can just execute:

############################################
#### Run through Terraform ####
############################################
terraform init
terraform validate
terraform plan
terraform apply
terraform show
terraform destroy

Dockerfile example:

FROM basic-ubuntu:1

LABEL Maintainer="xxxxx"

# Set the working directory in the container
WORKDIR /root

# copy all sub-directories and files into working directory in the container
COPY commands.sh .

##### Install software needed in order to run command ######
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y sudo
RUN sudo apt-get update && sudo apt-get install \
-y gnupg software-properties-common curl
RUN apt-get update
RUN sudo apt-get install -y git
RUN apt-get update

###### Install Terraform ######
RUN curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
RUN sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com \
$(lsb_release -cs) main"

RUN sudo apt-get install -y apt-transport-https
RUN apt-get update

RUN sudo apt-get install -y terraform

###### Install AWS-Vault ######
RUN sudo curl -L -o /usr/local/bin/aws-vault \
    https://github.com/99designs/aws-vault/releases/download/v4.2.0/aws-vault-linux-amd64
RUN sudo chmod 755 /usr/local/bin/aws-vault
###### Install Terragrunt ######
RUN sudo curl -L -o /usr/local/bin/terragrunt \
    https://github.com/gruntwork-io/terragrunt/releases/download/v0.36.6/terragrunt_linux_amd64
RUN sudo chmod 755 /usr/local/bin/terragrunt

###### Set up Gitlab access ######
ARG SSH_PRIVATE_KEY

RUN apt-get update
RUN apt-get install -y openssh-client

# Pass the content of the private key into the container
RUN mkdir /root/.ssh/
RUN echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa
# Gitlab requires a private key with strict permission settings
RUN chmod 600 /root/.ssh/id_rsa
# Add Gitlab to known hosts
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan gitlab.com >> /root/.ssh/known_hosts

###### Clone Terragrunt repo ######
RUN git clone git@gitlab.com......

ENTRYPOINT ["/bin/bash", "./commands.sh"]