Create an IPsec VPN tunnel between a GCE network and an AWS network

Recently I got a task to link an AWS network with a GCE network so that the traffic between the two networks would travel through an encrypted, secured tunnel instead of over the public network. For this you will need a GCE account. If you don't have one, Google actually offers a $300 free trial account which can be used over one year. The AWS side can be done using free-tier EC2 instances, so you can test all of this without having to pay anything.

Assuming that you already have both a GCE and an AWS account, I will jump right into it and start with creating the AWS and GCE networks.

To create the GCE network you can either use the command line or click your way around the dashboard. Via the command line, you run the following command in the Google Cloud console:

#gcloud compute networks create z0z0-network --range 10.1.2.0/24 --mode=legacy

If you choose to do it via the command line, just replace z0z0-network with your own name. To do it from the GCE dashboard instead: I first removed the default network, and after doing that clicked on Create VPC network:

Create-VPC-network

VPC-Settings
After doing that, the next step is to create firewall rules so that you can access the instances you will be creating. Like before, this can be done either from the command line or by clicking around the dashboard.

#gcloud compute firewall-rules create gce-network-allow-ssh --allow tcp:22 --network z0z0-network

Via the dashboard, you first have to click your newly created VPC and then click the Add firewall rule button at the bottom:

firewall_start

Then you will have to add a name and a source range you want to access this rule from (I used 0.0.0.0/0, which means from everywhere), and then add the port on which you want to access the instances; here I added tcp:22 for the SSH port:

firewall setup

Also create a rule to allow all ICMP, TCP and UDP traffic within the network you just created. For this I will only show the command line, since the dashboard steps would be almost the same as above, except the source would be 10.1.2.0/24 instead of 0.0.0.0/0.

#gcloud compute firewall-rules create allow-all-in-network --allow tcp:1-65535,udp:1-65535,icmp --source-ranges 10.1.2.0/24 --network z0z0-network

show-firewall

Since we want to have control over the gateway that the VPN traffic will pass through, we will create a special instance that acts as a NAT gateway. It will be the only server on the network with internet access, and it will also be used as a bastion.

#gcloud beta compute --project "z0z0-xxxxx" instances create "bastion" --zone "europe-west3-a" --machine-type "custom-1-2048" --network "z0z0-network" --can-ip-forward --maintenance-policy "MIGRATE" --service-account "<service_account>" --min-cpu-platform "Automatic" --tags "nat" --image "debian-9-stretch-v20180105" --image-project "debian-cloud" --boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "bastion"

We will also create another machine which will act as a host inside the network. The command is the same as before, except that we drop the --can-ip-forward option, use the tag no-net instead of nat, and add the --no-address option so that no public IP address is assigned. This time I will create the instance from the GCE dashboard, but a command-line sketch follows below for reference.
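
A rough gcloud equivalent, derived from the bastion command above (the instance name host is my own placeholder):

#gcloud beta compute --project "z0z0-xxxxx" instances create "host" --zone "europe-west3-a" --machine-type "custom-1-2048" --network "z0z0-network" --no-address --maintenance-policy "MIGRATE" --service-account "<service_account>" --min-cpu-platform "Automatic" --tags "no-net" --image "debian-9-stretch-v20180105" --image-project "debian-cloud" --boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "host"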

create-vps
Click on the networking link at the bottom of the popup to get to the networking settings.
network-tags
After setting the Network tags click on the edit button on the right of z0z0-network.
network-settings
And change the External IP to None. After that click Done and then Save. After a couple of minutes you will see the new VPS on your network. Note that the SSH link will be disabled: since this VPS has no public IP address, you cannot access it directly from the browser and will have to go through the bastion server instead.
vps-list
On the GCE side, all we have to do now is create a route that sends the host's internet traffic via the bastion server, and enable IP forwarding on the bastion server.

For creating the route we can run the following command:

#gcloud compute routes create no-ip-internet-route --network z0z0-network --destination-range 0.0.0.0/0 --next-hop-instance bastion --next-hop-instance-zone europe-west3-a --tags no-net --priority 800

or by clicking on Dashboard -> VPC networks -> Routes
VPC-menu
Then Create New Route
no-ip-route
When done click Create button.

All we need to do now is enable IP forwarding on the bastion and add a masquerade firewall rule for the NAT translation. For this, click on the SSH button of the bastion server, change to root by running sudo su -, and run the following commands:

#echo "net.ipv4.ip_forward=1" | tee /etc/sysctl.conf
#sysctl -p
#iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
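
Note that the iptables rule above will not survive a reboot. One optional way to persist it on Debian (an extra step, not strictly required for this walkthrough) is the iptables-persistent package:

#apt-get install -y iptables-persistent
#netfilter-persistent save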

Now your host will be able to access the internet, and for now we are done with the GCE side of the configuration. We will come back to it when we have to install and configure the VPN tunnel.
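
To verify, you can hop through the bastion and check outbound connectivity from the host. A minimal sketch, assuming the host's internal IP is 10.1.2.3 and that your SSH key is forwarded with -A:

#ssh -A <bastion public IP>
#ssh 10.1.2.3
#ping -c 3 8.8.8.8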

Let's do the same steps on the AWS side. This part is done from the dashboard, although I will include a few AWS CLI sketches for reference.
Assuming you have removed any default VPC from your AWS account, you can click the Create VPC button.

create AWS VPC

A very important thing when you assign the AWS and GCE IPv4 blocks is to make sure that they do not overlap each other, because otherwise it will be impossible to route packets from one network to the other (in this tutorial I use 10.1.2.0/24 for GCE and 10.0.1.0/24 for AWS). If you are not very comfortable with subnetting, I suggest using an online netmask calculator, which will show you the first and last IP address of a network. After creating the VPC we need to create a subnet for our AWS network. Click on Subnets -> Create Subnet.
create subnet
Give it a name and make sure that you select the correct VPC from the VPC list (in case you have multiple VPCs), then click on the "Yes, Create" button.
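
If you prefer the CLI, a rough AWS CLI equivalent would be the following (vpc-xxxxxxxx is a placeholder for the VPC ID returned by the first command):

#aws ec2 create-vpc --cidr-block 10.0.1.0/24
#aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24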

The next step is to attach an internet gateway, since these instances will have to be able to reach out to the internet. For that click on Internet Gateways -> Create Internet Gateway, add a name and click on the "Yes, Create" button.
Internet Gateway
As you can see, the state will show as detached, in red. Select the internet gateway you just created and click on the "Attach to VPC" button.
Attach to VPC
Make sure that the correct VPC is selected in the drop-down menu and click the "Yes, Attach" button.
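
The CLI version of these two steps would look roughly like this (igw-xxxxxxxx again being a placeholder ID):

#aws ec2 create-internet-gateway
#aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx
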
The next step is to create a public route for this VPC. For that click on Route Tables; you will see that an unnamed default routing table has been created. We will leave the default table alone and create a new routing table by clicking on the Create Route Table button.
Create Route Table
Give it a name, make sure the correct VPC is selected and then click the button at the bottom. When done, keep it selected, click on the Routes tab at the bottom of the screen and click the Edit button to add more routes.
add routes
Add a route with destination 0.0.0.0/0, and in the Target field select the internet gateway we created before. Then save the routing table and click on the Subnet Associations tab.
assign subnet
Edit the table, tick the checkbox to associate our subnet with this routing table, and then save the page. Next, click on Security Groups and select the default security group that was created. Edit the Inbound Rules, change the source to 0.0.0.0/0 and save it.
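
For reference, the corresponding CLI calls would be roughly the following (all the IDs are placeholders):

#aws ec2 create-route-table --vpc-id vpc-xxxxxxxx
#aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx
#aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-xxxxxxxx
#aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol -1 --cidr 0.0.0.0/0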

At this moment our network is ready to have instances deployed on it. Deploy two free-tier EC2 instances: in the EC2 service, click on Launch Instance.
launch instance
Tick the Free tier only checkbox and click on one of the instance types (I will be using Ubuntu). On the next page go with the default settings and click "Next: Configure Instance Details" at the bottom.
select free tier
On the next page make sure you have the correct VPC and subnet selected, go with the default settings for the rest, then click on "Next: Add Storage".
configure instance
Go with the default settings until you reach the security groups. Click on "Select an existing security group", select the default SG and click on the blue button at the bottom.
select security group
Review your instance configuration and click the Launch button. In the popup that follows, if you don't have a key pair, choose to create one (this will be used to authenticate into your boxes); if you already have a key pair, select it and click on Launch Instances.
key pairs

After a while your instances will be deployed and running.
running instances

Select the first instance; in the lower part of the page click on Network interface: eth0 and copy the Interface ID.
interface_id
In the side menu, click on Elastic IPs. If you don't have any unassigned IP addresses, click on Allocate new address, then on Allocate, and close. A new public IP address will show up in the list. Make sure you have it selected, then click on the Actions button and select Associate Address.
Associate IP address
Click on the Network Interface radio button, paste the copied interface ID into the Network Interface field and click Associate. When it succeeds, close the dialog. Now one of your instances has a public IP address associated with it, so you will be able to access it over SSH using the key pair you downloaded earlier.
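
The CLI equivalent would be roughly (the eipalloc- and eni- IDs are placeholders):

#aws ec2 allocate-address --domain vpc
#aws ec2 associate-address --allocation-id eipalloc-xxxxxxxx --network-interface-id eni-xxxxxxxx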

With this we have created two boxes on each of the GCE and AWS networks; on each network one of the boxes has a public IP address and can be accessed from the internet, while the other is hiding in a private network. The next challenge will be to make the two hosts on the private networks talk to each other.
To be able to do that, we need to install and configure VPN software on the gateway boxes (the ones with public IPs). So SSH into each of the gateways and install strongSwan on them. If you chose Debian or Ubuntu like me, the installation is very straightforward:

 #sudo apt-get update
 #sudo apt-get -y install strongswan 

Now, to configure the VPN, you need to edit the /etc/ipsec.conf file and add the following configuration on the AWS gateway:

config setup
 strictcrlpolicy=no
 charondebug=all

# Add connections here.

# Sample VPN connections

conn %default
	ikelifetime=60m
	lifetime=20m
	rekeymargin=3m
	keyingtries=1
	keyexchange=ikev2

conn gce
	authby=secret
	auto=route
	type=tunnel
	left=%defaultroute
	leftid=<AWS PUBLIC IP>
	leftsubnet=<AWS PRIVATE NETWORK>
	leftauth=psk
	right=<GCE PUBLIC IP>
	rightid=<GCE PUBLIC IP>
	rightsubnet=<GCE PRIVATE NETWORK>
	rightauth=psk
	ike=aes128-sha1-modp1024
	esp=aes128-sha1-modp1024

And the following on the GCE gateway:

config setup
 strictcrlpolicy=no
 charondebug=all

# Add connections here.

# Sample VPN connections

conn %default
	ikelifetime=60m
	lifetime=20m
	rekeymargin=3m
	keyingtries=1
	keyexchange=ikev2

conn aws
	authby=secret
	auto=route
	type=tunnel
	left=%defaultroute
	leftid=<GCE PUBLIC IP>
	leftsubnet=<GCE PRIVATE NETWORK>
	leftauth=psk
	right=<AWS PUBLIC IP>
	rightid=<AWS PUBLIC IP>
	rightsubnet=<AWS PRIVATE NETWORK>
	rightauth=psk
	ike=aes128-sha1-modp1024
	esp=aes128-sha1-modp1024

Then you need to edit /etc/ipsec.secrets and add the following line on both servers:

<AWS PUBLIC IP> <GCE PUBLIC IP> : PSK "<SOME RANDOM STRING>"

Note: the random string needs to be the same on both of the servers, otherwise the authentication will fail.
<AWS PUBLIC IP> = found on the AWS instance to which you assigned the public (Elastic) IP
<GCE PUBLIC IP> = the bastion server's public IP; it can be taken from the GCE dashboard
<AWS PRIVATE NETWORK> = the private network you have configured on the AWS VPC; if you have followed this tutorial it is 10.0.1.0/24
<GCE PRIVATE NETWORK> = the private network you have configured on the GCE VPC; if you have followed this tutorial it is 10.1.2.0/24
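
As a concrete example, with the addresses that show up later in this walkthrough (35.169.71.49 on the AWS side and 35.198.124.56 on the GCE side), the line would look like:

35.169.71.49 35.198.124.56 : PSK "some-long-random-string"

A quick way to generate a random string is:

#openssl rand -base64 32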

Once you have configured both gateways, restart the strongSwan service to activate the new config files. You can do this by running:

#systemctl restart strongswan

Now we need to open the firewall on the GCE side. For this we need to allow UDP ports 500 and 4500 and the ESP protocol from the <AWS PUBLIC IP>.
add vpn ports
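
From the command line, the rule would look roughly like this (the rule name is my own placeholder; substitute the real AWS public IP):

#gcloud compute firewall-rules create gce-network-allow-vpn --allow udp:500,udp:4500,esp --source-ranges <AWS PUBLIC IP>/32 --network z0z0-network
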
To test whether the ports are open, we need to install netcat on both gateways by running the following command:

#apt-get install -y netcat

To test it, run the following command on one of the servers:

#nc -ulp 500

and on the other one the following:

#nc -uvz <first server ip> 500

If you see output like the following, your firewall is configured correctly:

ec2-35-169-71-49.compute-1.amazonaws.com [35.169.71.49] 500 (isakmp) open

If you wish, you can do the same with port 4500 as well. Now it is time to start up your VPN. To do so, run the following commands on one of the nodes.

#ipsec status
Routed Connections:
     gce{1}:  ROUTED, TUNNEL, reqid 1
     gce{1}:   10.0.1.0/24 === 10.1.2.0/24
Security Associations (0 up, 0 connecting):
  none

From here we take the name of the VPN tunnel, which in our case is gce, and run:

#ipsec up gce
initiating IKE_SA gce[1] to 35.198.124.56
generating IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(HASH_ALG) ]
sending packet: from 10.0.1.71[500] to 35.198.124.56[500] (956 bytes)
received packet: from 35.198.124.56[500] to 10.0.1.71[500] (328 bytes)
parsed IKE_SA_INIT response 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP) N(HASH_ALG) N(MULT_AUTH) ]
local host is behind NAT, sending keep alives
remote host is behind NAT
authentication of '35.169.71.49' (myself) with pre-shared key
establishing CHILD_SA gce
generating IKE_AUTH request 1 [ IDi N(INIT_CONTACT) IDr AUTH SA TSi TSr N(MOBIKE_SUP) N(NO_ADD_ADDR) N(MULT_AUTH) N(EAP_ONLY) ]
sending packet: from 10.0.1.71[4500] to 35.198.124.56[4500] (348 bytes)
received packet: from 35.198.124.56[4500] to 10.0.1.71[4500] (236 bytes)
parsed IKE_AUTH response 1 [ IDr AUTH SA TSi TSr N(AUTH_LFT) N(MOBIKE_SUP) N(NO_ADD_ADDR) ]
authentication of '35.198.124.56' with pre-shared key successful
IKE_SA gce[1] established between 10.0.1.71[35.169.71.49]...35.198.124.56[35.198.124.56]
scheduling reauthentication in 10566s
maximum IKE_SA lifetime 10746s
connection 'gce' established successfully

Now that our VPN tunnel is up and running, there are a few things left to be done. First of all we need to disable the source/destination check on the AWS gateway. Right-click on the AWS gateway instance and select Networking -> Change Source/Dest. Check.
change source/dest check
If the source/dest check is already disabled, cancel the popup; otherwise click on Yes, Disable.
disable source/dest check
The next thing we need to do is enable IP forwarding on the AWS gateway as well, exactly as we did on the bastion server:

#echo "net.ipv4.ip_forward=1" | tee /etc/sysctl.conf
#sysctl -p

And lastly, add a route to the other network on each side:

GCE route for VPN
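
On the GCE side, the command-line version would be something like the following, using the same pattern as the earlier no-ip-internet-route (the route name is my own placeholder):

#gcloud compute routes create aws-vpn-route --network z0z0-network --destination-range 10.0.1.0/24 --next-hop-instance bastion --next-hop-instance-zone europe-west3-a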

For AWS it is a bit trickier. First go to Instances, click on the gateway instance and copy its instance ID.
Instance ID
Then head over to VPC -> Route Tables. Click on the route table we created and edit it. For the destination enter the GCE network range (10.1.2.0/24), for the target paste the copied instance ID, and save it.
AWS route via VPN

Now you should be able to ping the AWS host from any GCE host.
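
For example, from the GCE host (10.0.1.71 is the AWS gateway's private address as seen in the ipsec output above; substitute the private address of your own AWS host):

#ping -c 3 10.0.1.71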