
Crossing the Amazon VPC boundary

Cross-VPC access is one of the difficult problems you face when using Virtual Private Clouds to segregate and separate systems.  Separation is a good thing, but there is often a need to cross these boundaries for control traffic, monitoring, and user convenience.  In my case, the systems I work with are primarily cloud-based, and the traditional option of adding a Hardware VPN Gateway was sub-optimal.  Last month, Amazon announced VPC Peering as a way to break down the VPC boundary within a single region.  This is great news for single-region deployments, but it still does not address the cross-region access needed for a high-availability solution.

One solution to the lack of inter-region VPC peering is to use an in-cloud VPN hub, connecting the segregated application VPCs via a NAT+VPN gateway within each VPC.  In the example below, the private network 203.0.113.0/24 is subdivided between two VPCs, each with a public and a private subnet, and the private network is re-routed by the VPC routing tables to either the VPN hub or the NAT+VPN client gateway.

[Diagram: two VPCs subdividing 203.0.113.0/24, each with public and private subnets, joined through the VPN hub]
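
For concreteness, here is one purely hypothetical way the /24 could be carved up; the only address the rest of the post references explicitly is the hub's public subnet, 203.0.113.0/28:

# illustrative addressing plan only – adjust to your own layout
# hub VPC:    203.0.113.0/26   (public 203.0.113.0/28,  private 203.0.113.16/28)
# client VPC: 203.0.113.64/26  (public 203.0.113.64/28, private 203.0.113.80/28)
# each route table sends 203.0.113.0/24 to its local gateway instance;
# traffic within the VPC's own /26 stays local via the more-specific local route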

Here is the configuration for the vpn-hub, which creates a VPC with a VPN-HUB IPsec gateway to pull together the client VPCs: example-hub.json

Here is the client configuration, which routes all private network traffic back to the VPN gateway: example-client.json

When using the cloudcaster tool, the routing tables of the VPC are modified to direct the private network to the NAT+VPN gateway.  For the vpn-hub, one additional change is needed: the private network needs to be re-routed from the NAT+VPN gateway to the VPN-HUB gateway:

# instance-id is the ID of the VPN-HUB instance
# route-table-id is the ID of the public subnet 203.0.113.0/28

aws ec2 replace-route --region us-west-2 --destination-cidr-block 203.0.113.0/24 --route-table-id rtb-XXXXXXXX --instance-id i-XXXXXX
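
You can verify the change by dumping the route table; the 203.0.113.0/24 entry should now point at the VPN-HUB instance:

aws ec2 describe-route-tables --region us-west-2 --route-table-ids rtb-XXXXXXXX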

For this to work, you will need to build 3 AMI types based on the Amazon NAT/PAT instance:

  • vpn-hub – the VPN concentrator
  • nat-hub – a NAT/PAT gateway with an exclusion from NAT/PAT for the private network (see the iptables sketch after this list)
  • nat-vpn – a NAT/PAT gateway with IPsec that tunnels traffic destined to the private network via the VPN-HUB
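
The nat-hub exclusion boils down to a couple of iptables rules.  A minimal sketch, assuming the stock Amazon NAT AMI masquerading out eth0:

# exempt traffic destined for the private network from NAT, then NAT everything else
# (order matters: the ACCEPT rule must come before the MASQUERADE rule)
iptables -t nat -I POSTROUTING -o eth0 -d 203.0.113.0/24 -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE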

Instructions for building the AMI types are located in the README.  An Elastic IP is required for the VPN-HUB, and it needs to be baked into the AMI image for the NAT-VPN.  At boot, and each hour thereafter, the VPN-HUB polls the EC2 API and constructs a list of tunnels to build, allowing the VPN to extend to future VPCs and to clean up after VPCs are deleted.
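
The hourly poll is, at its core, a tag-based instance lookup.  A rough approximation with the AWS CLI – the tag name and value here are guesses, and the real logic lives in the repository:

# list the public IPs of running NAT+VPN client gateways, one tunnel per address
aws ec2 describe-instances --region us-west-2 \
    --filters "Name=tag:service,Values=nat-vpn" "Name=instance-state-name,Values=running" \
    --query 'Reservations[*].Instances[*].PublicIpAddress' --output text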

Some caveats: this is not a high-availability or high-traffic solution as presented.  Each vpn-hub/nat-hub/nat-vpn instance is a single point of failure, and t1.micro instances are not recommended for high-throughput networking.  High availability is not currently practical, because VPC route tables do not support multipath routing to instances at this time.

This solution does perform admirably for command & control and monitoring traffic, especially when combined with either ssh bounce boxes or a client-vpn host to enable access to all hosts within your infrastructure.

 


CloudCaster – casting clouds into existence

CloudCaster is my tool to cast clouds into existence in many regions, yet still maintain source-controlled infrastructure specifications.  A single JSON document is used to specify your cloud architecture.  Currently it only supports EC2/VPC/Route53.

https://github.com/WrathOfChris/ops/tree/master/cloudcaster

This tool is my attempt to capture all the manual steps I was using to create Virtual Private Cloud infrastructure: subnets, routing tables, internet gateways, VPNs, NAT instances, AutoScale groups, launch configs, and Load Balancers.

An example specification is here: https://github.com/WrathOfChris/ops/blob/master/cloudcaster/examples/example.json

In each Availability Zone, it creates a Public subnet and a Private subnet.  The public subnet will contain any ELBs created, apps specified with the “public” flag, and the NAT instance that private instances use to reach the world.
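
After a run, the per-AZ subnet layout is easy to inspect – for example (the vpc-id being whichever VPC CloudCaster created):

$ aws ec2 describe-subnets --region us-west-2 --filters "Name=vpc-id,Values=vpc-XXXXXXXX" \
    --query 'Subnets[*].[SubnetId,CidrBlock,AvailabilityZone]' --output text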

I wrote this tool for a number of reasons.  I needed a way to specify the state my cloud infrastructure should be in, and to be able to re-create the infrastructure setup in case of a catastrophic failure.  Eventually I will need to transition to multi-cloud, and specifying the infrastructure will allow me to adapt to other cloud provider APIs when I need them without being locked into a single vendor.  I also wanted to codify many of the best practices I’ve learned into the automation, so new services are created default-best.

[Diagram: CloudCaster VPC layout]

Documentation is located here: https://github.com/WrathOfChris/ops/blob/master/cloudcaster/README.md

Naming is partially enforced.  Load balancers and AutoScale groups have the environment name appended to their names; security groups do not (least surprise!).  The concept of a “continent” is just a DNS grouping that allows delegation to a Global Traffic Manager or a second DNS provider.

A sample run consisting of a single app, a single ELB, and the NAT instance would create resources similar to the following:

Auto Scaling Groups:

$ as-describe-auto-scaling-groups --region us-west-2
AUTO-SCALING-GROUP exampleapp-prod exampleapp-prod-20140106002258 us-west-2c,us-west-2b,us-west-2a example-prod 0 1 1 Default
INSTANCE i-17c3b121 us-west-2b InService Healthy exampleapp-prod-20140106002258
TAG exampleapp-prod auto-scaling-group Name exampleapp-prod true
TAG exampleapp-prod auto-scaling-group cluster blue true
TAG exampleapp-prod auto-scaling-group env prod true
TAG exampleapp-prod auto-scaling-group service example true

Launch Configs:

$ as-describe-launch-configs --region us-west-2
LAUNCH-CONFIG exampleapp-prod-20140106002258 ami-ccf297fc t1.micro discovery

Note the date encoded in the LaunchConfig name; this allows CloudCaster to update in place by swapping launch configs.  The next time an instance is terminated, its replacement will be launched from the new Launch Config.
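
If you ever need to do the swap by hand, the equivalent with the legacy Auto Scaling tools looks roughly like this (the new launch config name here is illustrative):

$ as-update-auto-scaling-group exampleapp-prod --region us-west-2 \
    --launch-configuration exampleapp-prod-20140107120000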

Load Balancers:

$ elb-describe-lbs --region us-west-2
LOAD_BALANCER example-prod example-prod-1891025847.us-west-2.elb.amazonaws.com 2014-01-06T00:22:51.910Z internet-facing

Warnings apply – CloudCaster will create instances and load balancers, and that will cost you money.  There is no delete option; you will have to manually delete all resources created.  It is not designed to be a general-purpose tool for all your needs – it does exactly what I need, and a little less.

In example.json, you may notice mention of a “psk” – this is for a future post, where I will talk about creating automatic VPNs between VPCs using a VPN concentrator instance and the NAT instances.  For now, you will see that CloudCaster sets a route in the public subnets for “privnet” – the overarching private network for all your worldwide VPCs.

That’s all for now.  I hope you enjoy it.

finding ec2 nodes

Each and every day I find myself needing lists of host groups within EC2.  Lately it has been for building clusters of distributed Erlang and Riak, but also for adding dynamic or periodically updated lists for monitoring.

Normally I would just pipeline some shell together, but that is sub-optimal:

$ ec2-describe-instances -F tag:service=nat | grep ^INSTANCE | awk '{print $4;}'
ec2-1-2-3-4.compute-1.amazonaws.com
ec2-2-3-4-5.compute-1.amazonaws.com

Along the way I realized that I was rewriting similar fragments all too often, and though I usually wanted the private hostname, sometimes I needed the IP address (Riak – I’m looking at you!) or the public hostname.  Time to build a tool:

$ ./ec2nodefind -e test -s benchmark -i
10.1.2.3
10.1.2.4
$ ./ec2nodefind -e test -s benchmark -pF
ec2-54-209-1-2.compute-1.amazonaws.com
ec2-54-209-1-3.compute-1.amazonaws.com

Great!  So much easier, but I’m already on a host that is tagged, so why does my config management system have to inject that info?  Let’s make it autodiscover based on the instance metadata.  This requires an instance-profile role with permissions for “ec2:Describe*”.  Here we can be verbose and see the discovery values.

$ ./ec2nodefind -va
Autodiscovery: cluster benchmark
Autodiscovery: env test
Autodiscovery: service benchmark
ip-10-1-2-3
ip-10-1-2-4

Perfect!  Now we have automatically discovered peers within our (env, service, cluster) group.
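
The only AWS-side setup the autodiscovery needs is that instance-profile role.  A minimal sketch of granting the permission, assuming the role already exists and is attached to the instance profile (role and policy names are placeholders):

$ aws iam put-role-policy --role-name ec2nodefind --policy-name ec2-describe \
    --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"ec2:Describe*","Resource":"*"}]}'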

Here’s the code: https://github.com/WrathOfChris/ops/tree/master/ec2nodefind

Enjoy!