Hosting and CI/CD Pipeline for Website with S3

Hi! Today we will walk through questions like:
-How to host a website with an S3 bucket?
-How to prepare a CI/CD pipeline for an S3 bucket?
-How to store AWS credentials in Jenkins?
-How to point a GoDaddy domain name to AWS Route53?

How to host a website with S3 bucket?

So let’s get started by creating the S3 bucket. If we want to host our website, the S3 bucket name has to match the website’s domain name exactly, so I will use my domain name as the bucket name.

And if we want everybody to be able to see our website, we need to grant public access to the bucket. As we can see, AWS wants us to confirm that we really do want to turn off “Block public access” for this bucket.

On the page below there are some options like bucket versioning, tags, default encryption and advanced settings. We don’t need default encryption or the advanced settings for this bucket, but if in the future you update the HTML files in your bucket and don’t want to lose the old versions of your files, you can enable bucket versioning. I won’t enable it in this project. You can also add tags to identify the bucket easily later; I will add just one tag.

After we have created our bucket, we should enable the “static website hosting” option. For that step, click your bucket name > Properties > Edit Static website hosting (at the bottom) > Enable > Index document = index.html and Error document = error.html. (Users will see error.html when something is wrong with your website.)
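If you prefer the command line, the same static website configuration can be applied with the `aws s3 website` command (a sketch; YOUR-BUCKET-NAME is a placeholder for your own bucket):

```shell
# Enable static website hosting on the bucket,
# with the same index and error documents as in the console
aws s3 website s3://YOUR-BUCKET-NAME/ \
    --index-document index.html \
    --error-document error.html
```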

Let’s add our files to the S3 bucket with the aws s3 cp command; you can download the AWS CLI from here.
With the command aws s3 cp . s3://YOUR-BUCKET-NAME --recursive I am saying
“copy all files from my current directory to the bucket recursively”.
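In full, the upload step might look like this (assuming your site files sit in the current directory and YOUR-BUCKET-NAME is a placeholder for your bucket):

```shell
# Copy every file in the current directory to the bucket,
# recursing into subdirectories
aws s3 cp . s3://YOUR-BUCKET-NAME/ --recursive

# List the bucket contents to confirm the upload
aws s3 ls s3://YOUR-BUCKET-NAME/
```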

If we try to look at how our S3 website looks now, we will get a surprise :) You can find the S3 website endpoint URL at the bottom of the Properties section.

As you can see, we are getting a 403 error because of permissions; in the next step we should attach a policy that allows our bucket to be viewed by users. Click your bucket name > Permissions > Bucket policy > Edit.
We can use the policy below to grant access to users.

"Version": "2012-10-17",
"Statement": [
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"

After you attach the policy to the bucket, feel free to visit your website via the S3 endpoint :) I know that URL doesn’t look nice; we will fix that in the Route53 section.

Our website is working stably now, but you will sometimes want to make changes to your code. With one click we can push the new code to our website; we will be using Jenkins for that stage.

How to prepare CI/CD pipeline for S3 bucket?
In this section we will launch an EC2 virtual server and configure it for the automation steps. Now let’s go to the EC2 service > Launch Instances > Choose the Ubuntu Server 20.04 LTS AMI (I prefer that distro) > t2.micro > Select any public subnet > Add storage (min 8 GB) > Add any tags > Create a new security group.
In this demo I will open ports 80 and 443 for serving our content; port 443 may not be necessary for now, but I am keeping it open for the future.
I will also keep port 22 open for SSH, because we need to configure our virtual server, and to reach Jenkins we will need port 8080. For security, I will restrict ports 22 and 8080 to my IP only.
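The same ingress rules can be sketched with the AWS CLI (sg-xxxxxxxx and 1.2.3.4/32 are placeholders for your security group ID and your own IP):

```shell
# HTTP and HTTPS open to everyone
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80  --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 443 --cidr 0.0.0.0/0

# SSH and Jenkins restricted to my IP only
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22   --cidr 1.2.3.4/32
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8080 --cidr 1.2.3.4/32
```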

After these steps our server is ready to use! It’s time to run the script that installs aws-cli, Jenkins and its dependencies. You can find the script on my GitHub. Once we run the script, our services are ready!
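The script itself is on GitHub, but a minimal sketch of what such a script does on Ubuntu 20.04 might look like this (the Jenkins repository steps follow the official Jenkins apt instructions; treat it as an outline, not the exact script):

```shell
# Update packages and install Java (required by Jenkins) plus the AWS CLI
sudo apt-get update
sudo apt-get install -y openjdk-11-jdk awscli

# Add the Jenkins apt repository key and source, then install Jenkins
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | \
    sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | \
    sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install -y jenkins

# Jenkins listens on port 8080 by default
sudo systemctl status jenkins
```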

If we type aws s3 ls right away, we won’t be able to see our bucket. We need to configure our credentials first; after that we can connect to our AWS resources like this:
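Configuring the credentials on the server is just:

```shell
# Interactively store the access key, secret key, default region
# and output format under ~/.aws
aws configure

# Now the CLI can reach our account, e.g. list our buckets
aws s3 ls
```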

Time to use Jenkins now! Let’s click New Item > Freestyle project > Give it a name. Now let’s test whether our Jenkins is able to see AWS resources or not.

How to store AWS credentials in Jenkins?
We got an error: as we can see, the jenkins user on the Ubuntu server does not have access to AWS resources. Now we should add our secret keys as variables in Jenkins, but these variables are private, so we should keep them hidden. We can use Job > Configure > Build Environment > Use secret text(s) or file(s) > Add secret text. Write the variable names for your credentials and add their values as Specific credentials. You can click the Add button and then add your secret keys like below.

Now we can use our variables to configure the Jenkins environment; we should add these commands for that:
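The exact commands are in the screenshot, but assuming the secret texts were bound to variables named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (those names are an assumption), a build step like this would write them into the jenkins user’s AWS config:

```shell
# Persist the injected secrets so later aws commands can authenticate
aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"

# Default region for this project (Ohio)
aws configure set default.region us-east-2
```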

After these steps there is just one stage left; in this section we will fetch our code and configure our commands one last time, and after that we are good to go.
-We will add our GitHub project with these steps: Job > Configure > Source Code Management > Select Git > Write the Repository URL (mine is
-Now we will enable the Poll SCM option; with it, Jenkins will keep checking our GitHub repository. When we commit new changes to the repository, Jenkins will be triggered and will run the commands to move the new code to production. * * * * * is a cron expression meaning it will check the repository every minute.
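The five fields of that Poll SCM schedule are standard cron syntax:

```shell
# ┌───────── minute (0-59)
# │ ┌───────── hour (0-23)
# │ │ ┌───────── day of month (1-31)
# │ │ │ ┌───────── month (1-12)
# │ │ │ │ ┌───────── day of week (0-7)
# * * * * *    -> poll the repository every minute
```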

After all these steps, Jenkins is able to:
-Check for code changes
-Reach AWS resources
-Pull the repository

Now we will add new commands for our little CI/CD stage :) Assume new code is pushed to the repository: Jenkins gets triggered, downloads the new code into its workspace, and with the aws s3 sync command we are saying “send the new files to the S3 bucket”. After that step our index.html is replaced and the fresh code is in production :)
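Assuming the build step runs from the Jenkins workspace, the sync stage might be as small as this (YOUR-BUCKET-NAME is a placeholder):

```shell
# $WORKSPACE is set by Jenkins to the job's checkout directory
cd "$WORKSPACE"

# Upload only the files that changed, skipping the .git metadata
aws s3 sync . s3://YOUR-BUCKET-NAME/ --exclude ".git/*"
```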

It is time to try our Jenkins now. If you take a look at the top, my index.html title includes “Cloud-Native Engineer”. I will change the title from Cloud-Native to DevOps in index.html and push it to the repository. After 1 minute Jenkins will run our CI/CD commands and change the title in production.

push the code
jenkins output after 1 minute

As we can see, our change was detected by Jenkins, and the aws commands ran successfully.

How to host GoDaddy domain name with AWS Route53?
I have a domain name at GoDaddy with my first and last name; in this section we will connect that domain to the Route53 service.

Let’s begin with these steps: Click the Route53 service > Create a hosted zone > Type your domain name (mine is > Keep it as a public hosted zone.
After those steps we will have NS and SOA records, but we also need an A record to attach our S3 bucket endpoint to the domain name. Click the Create record button, type www for the record name, select Record type A - Routes traffic to an IPv4 address…, and enable the Alias option, because we will use our S3 bucket endpoint as an alias. After enabling the Alias option we need to choose the region we selected for the S3 bucket; I used the Ohio (us-east-2) region, and we can see our S3 bucket endpoint below. If you can’t see it, the reason may be your bucket name: it needs to start with www.
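The same alias record can be sketched with the CLI. ZONE_ID is a placeholder for your hosted zone’s ID, and the alias HostedZoneId must be the fixed S3 website hosted zone ID for your bucket’s region, listed in the AWS endpoints documentation (check it for your own region before running this):

```shell
# change-batch.json: alias www.YOUR-DOMAIN.com to the regional S3 website endpoint
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.YOUR-DOMAIN.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "S3-WEBSITE-HOSTED-ZONE-ID",
        "DNSName": "s3-website.us-east-2.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

# Apply the record to the hosted zone
aws route53 change-resource-record-sets \
    --hosted-zone-id ZONE_ID \
    --change-batch file://change-batch.json
```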

We have just created the resources we need to use in GoDaddy. I am sorry, but my GoDaddy interface is in Turkish; I couldn’t switch it to English, so my section names may not match exactly. After we log into GoDaddy, click DNS > Manage Your DNS > Choose “I will use my own nameservers”.
In this part we will enter two of our Route53 NS values; I used these values:
I used the first and second values in GoDaddy because my SOA value ends the same way my first NS value does. For the second nameserver I used the second Route53 value, which ends with .net.

And here is the result: the domain name is working with my website’s S3 bucket, without SSL. In the next post we will talk about that :)

I hope you were able to build your own website with this Medium post. Feel free to message me with any questions.
GitHub Repository:

curious about cloud & automation