Private docker registry on AWS with S3

Creating a private Docker registry is pretty trivial and well documented. If you are just playing with it, Docker Hub might be a good start. A few things to figure out before starting with a private registry:

  • Storage. There are numerous options
    • File system
    • Azure
    • Google cloud (GCS)
    • AWS S3
    • Swift
    • OSS
    • In memory (not a good option unless you are testing)
  • Authentication
    • silly (as the name implies, it is really silly and not suitable for real deployments)
    • htpasswd (Apache htpasswd-style authentication; credentials are predefined in a file, and this is only suitable when used with TLS)
    • token (OAuth 2.0-style authentication using a Bearer token; this could be tricky if you have Jenkins or other CI systems building and pushing docker images)
  • Transport security
    • Use of TLS is strongly advised. If you don't have an X509 cert/key, use the free Let's Encrypt service
  • Storage security
    • Ideally image data should also be secured at rest. See below for S3 storage security
  • Regions
    • If accessing data from multiple regions is required, docker registry provides the ability to use CloudFront

Here is a quick and easy setup on AWS using S3 as storage:

  • Create an S3 bucket in the region where you want to store the images (my-docker-registry)
  • If you got burned by the AWS S3 outage a few months back, you may also want to replicate your bucket to another region 🙂 It is pretty simple to set up
  • I also recommend encrypting the data in the S3 bucket. You can do this using AWS Key Management Service (KMS) or Server Side Encryption (SSE) with AES-256. If you are replicating the bucket data to other region(s), you cannot use KMS
  • For the buckets, set a bucket policy (under bucket permissions) to enforce encrypted data. Here is a sample bucket policy for enforcing SSE AES-256:
{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "DenyIncorrectEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-docker-registry/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        },
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-docker-registry/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        }
    ]
}
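If you manage the bucket with the AWS CLI, the policy above can be applied with something along these lines (assuming it is saved locally as bucket-policy.json):
$ aws s3api put-bucket-policy --bucket my-docker-registry --policy file://bucket-policy.json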
  • Figure out where you are going to run the registry. The Docker registry itself runs as a docker image. It is better to have this EC2 instance in the same region as the S3 bucket. Ideally it should be in a VPC with an S3 endpoint configured. Whether the instance should have a public IP or not depends on where you are going to push/pull the images from!
  • Ideally the instance hosting the docker registry should be launched with an IAM role, so there is no need to provision access/secret keys. Here is a sample IAM policy for that role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::my-docker-registry"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::my-docker-docker/*"
        }
    ]
}
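To wire this up as an instance role with the AWS CLI, a rough sketch (the role/profile names are made up for illustration; ec2-trust-policy.json would contain the standard EC2 assume-role trust policy, and registry-s3-policy.json the policy above):
$ aws iam create-role --role-name docker-registry-role --assume-role-policy-document file://ec2-trust-policy.json
$ aws iam put-role-policy --role-name docker-registry-role --policy-name registry-s3-access --policy-document file://registry-s3-policy.json
$ aws iam create-instance-profile --instance-profile-name docker-registry-profile
$ aws iam add-role-to-instance-profile --instance-profile-name docker-registry-profile --role-name docker-registry-role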
  • Configure the Security Group for the instance appropriately. Ideally, disable all incoming ports except 22 and 443 from your specific IP address
  • Follow the installation instructions to install the latest docker on the instance
  • The user who is going to bring up the docker registry container needs access to the docker daemon. You can either do this as the root user 😦 or add a regular user to the docker group (usermod -a -G docker userid). See the example below
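As a sketch, on many distributions Docker's convenience script handles the install (review the script before piping it to a shell; ec2-user below is just an example, and you need to log out/in after the group change):
$ curl -fsSL https://get.docker.com | sh
$ sudo usermod -a -G docker ec2-user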
  • Create a docker-compose.yml file. Here is a sample; in this case I used an X509 cert/key issued by a CA:
registry:
  restart: always
  image: registry:2
  ports:
   - 443:5000
  volumes:
   - /host/path/to/certs:/certs
   - /host/path/to/config.yml:/etc/docker/registry/config.yml
  • Create the /host/path/to/config.yml registry configuration file. Here is a sample template with S3 storage and TLS configuration:
version: 0.1
storage:
  s3:
    region: us-east-1
    bucket: my-docker-registry
    encrypt: true
    secure: true
    v4auth: true
    chunksize: 5242880
    multipartcopychunksize: 33554432
    multipartcopymaxconcurrency: 100
    multipartcopythresholdsize: 33554432
  cache:
    blobdescriptor: inmemory
http:
  addr: 0.0.0.0:5000
  net: tcp
  prefix: /
  host: https://<registry hostname>
  tls:
    certificate: /certs/hostname.crt
    key: /certs/hostname.key
  headers:
    X-Content-Type-Options: [nosniff]
  http2:
    disabled: false
  • Replace <registry hostname> with the appropriate value. In this case, I used a real X509 certificate and key that are copied to the host and made available to the docker registry image. Another option is to use the letsencrypt configuration
  • Bring up the docker registry:
$ docker-compose up -d
# Check logs
$ docker-compose logs registry
  • Now it should be possible to tag and push any image to your registry. For example:
$ docker pull ubuntu
$ docker tag ubuntu <registry hostname>/ubuntu
$ docker push <registry hostname>/ubuntu

At this point the registry should be working and usable, but because authentication is not yet set up, you should make sure it is only accessible from trusted hosts.


PostgreSQL to Hadoop/Hive

Ever tried to get data from PostgreSQL to Hive? I came across the CSV SerDe that is bundled with the latest version of Apache Hive, but for all practical purposes it is useless: it treats every column as a string. So I wrote my own SerDe. You can find the source on GitHub. Dump your PostgreSQL table data using pg_dump or psql with COPY in plain text format.
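
For example, either of the following produces a plain-text dump (database and table names here are placeholders):

$ pg_dump --data-only --table=my_table my_database > my_table.dump
$ psql -d my_database -c "\copy my_table TO 'my_table.dump'"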

Download the pgdump-serde jar to your local machine. Open the hive shell and add the jar. Create an external table and load the dump data. If you are using a pg_dump file, this SerDe cannot handle schema statements, comments, column headers, etc., so remove any header/footer that is not row data.

hive> add jar <path to>/pgdump-serde-1.0.4-1.2.0-all.jar;
hive> USE my_database;
hive> CREATE EXTERNAL TABLE `my_table_ext` (
  `id` string,
  `time` timestamp,
  `what` boolean,
  `size` int,
  ...
)
ROW FORMAT SERDE 'com.pasam.hive.serde.pg.PgDumpSerDe'
LOCATION '/tmp/my_table_ext';
hive> LOAD DATA LOCAL INPATH '<path to dump directory>/my_table.dump' OVERWRITE INTO TABLE my_table_ext;

MongoDB WiredTiger slow queries

Recently we hit a production MongoDB (version 3.2.6) issue. MongoDB was reporting lots of slow queries and our application was starting to show performance issues. Some of the slow responses were for covered queries.

mongostat was reporting a very high %used for the WiredTiger cache, and it was not coming down. As a result we were seeing a significantly high value for db.serverStatus().wiredTiger.cache["pages evicted by application threads"]. This was causing a slowdown of many queries. Ideally this value should be zero; application threads start evicting pages once cache %used hits 96%, whereas ideally it should be around 80%.
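
A quick way to keep an eye on this counter from the shell (adjust connection options for your deployment):

$ mongo --quiet --eval 'print(db.serverStatus().wiredTiger.cache["pages evicted by application threads"])'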

Currently experimenting with the WiredTiger eviction parameters to see if they make any difference:

  • eviction_trigger
  • eviction_target
  • eviction_dirty_target
  • eviction=(threads_min=X,threads_max=Y)
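
These parameters can be tweaked at runtime via the wiredTigerEngineRuntimeConfig server parameter. A sketch (the values here are arbitrary examples, not recommendations):

$ mongo admin --quiet --eval 'db.adminCommand({setParameter: 1, wiredTigerEngineRuntimeConfig: "eviction=(threads_min=4,threads_max=8),eviction_trigger=90,eviction_target=75"})'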

It looks like the eviction server is not able to keep up with evicting pages and it gets into a state where application threads are evicting pages, causing the slowdown 😦

Solution:

We had thousands of collections in this database and tens of thousands of indexes. Most of the collections were collection shards created to work around MMAPv1 collection lock contention. Before we sharded the collections, one of them grew very big: tens of millions of entries. In this scenario, depending on the application's CRUD pattern, you can hit cache-related issues. There are two solutions that worked for me with WiredTiger: either evenly balance the sharded collection, or consolidate the sharded collections.


AWS VPN High Availability

This is a refinement of my previous approach. In the previous model, there were two VyOS instances in every AWS region. In this model, there are only two VyOS instances in the hub region. All Amazon regions (including the hub region) connect to these VyOS instances. Each line in the diagram below represents two tunnels: an Amazon VPN connection comes with two tunnels, but both tunnels connect to the same server (VyOS) on the other end.

[Diagram: VPN tunnels from each region's VPC to the two VyOS instances in the hub region]

Total cost comes down to (2 * $0.05 per hour * number of regions) + (2 * the hourly price of the VyOS instance type). In our deployment, I chose c3.2xlarge, which is $0.42 per hour. For reserved instances, that price comes down to a bit over $0.20 per instance. For a total of four regions the cost per hour is (2 * 0.05 * 4) + (2 * 0.42) = $1.24 per hour (on-demand instances). For 1-year reserved, the cost comes down to roughly $0.90 per hour. c3.2xlarge is probably bigger than what we need, but it has high network throughput.

Figure out your hub AWS region. Launch two VyOS AMIs in two different availability zones:

  • These should be in public subnet with public IP addresses
  • Enable termination protection if you want to be on the safe side
  • Change shutdown behavior to stop the instance (instead of terminate)
  • Disable source/destination checks (important)
  • Use an open security group until the configuration is done

Allocate two Elastic IPs (EIP) and associate them with the two instances
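
With the AWS CLI this looks roughly like the following (the instance and allocation IDs are placeholders):

$ aws ec2 allocate-address --domain vpc
$ aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-12345678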

Upgrade VyOS to the latest version (accept the default values for all the prompts). Reboot after it is done:

$ add system image http://packages.vyos.net/iso/release/version/vyos-version-amd64.iso
$ reboot

In every region (including the hub), create two customer gateways (CGW), one for each VyOS instance

  • Use dynamic routing
  • Use a BGP ASN from private space (eg: 65000). Use the same value for all CGWs
  • Use the Elastic IP address of VyOS
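
The rough CLI equivalent (the public IP is the Elastic IP of the VyOS instance):

$ aws ec2 create-customer-gateway --type ipsec.1 --bgp-asn 65000 --public-ip <elastic-ip-of-vyos>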

Also in every region, create a Virtual Private Gateway (VPG) and attach it to the VPC. Finally, create two VPN connections (one for each CGW):

  • VPG should match the one created before
  • Routing should be dynamic
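
Roughly, with the CLI (all IDs are placeholders; omitting static-route options should default the connection to dynamic/BGP routing):

$ aws ec2 create-vpn-gateway --type ipsec.1
$ aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-12345678 --vpc-id vpc-12345678
$ aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id cgw-12345678 --vpn-gateway-id vgw-12345678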

Once the VPNs are created, download the configuration for each one of them

  • Vendor: Vyatta
  • Platform: Vyatta Network OS
  • Software: Vyatta Network OS 6.5+

There is a lot in common across all of these configuration files. Depending on the number of regions, you might end up with 2, 4, 6 or 8 configuration files. Separate the files into two groups: those associated with CGW1 and those associated with CGW2.

$ ssh -i private-key vyos@elastic-ip-of-cgw
$ configure
set vpn ipsec ike-group AWS lifetime '28800'
set vpn ipsec ike-group AWS proposal 1 dh-group '2'
set vpn ipsec ike-group AWS proposal 1 encryption 'aes128'
set vpn ipsec ike-group AWS proposal 1 hash 'sha1'
set vpn ipsec ipsec-interfaces interface 'eth0'
set vpn ipsec esp-group AWS compression 'disable'
set vpn ipsec esp-group AWS lifetime '3600'
set vpn ipsec esp-group AWS mode 'tunnel'
set vpn ipsec esp-group AWS pfs 'enable'
set vpn ipsec esp-group AWS proposal 1 encryption 'aes128'
set vpn ipsec esp-group AWS proposal 1 hash 'sha1'
set vpn ipsec ike-group AWS dead-peer-detection action 'restart'
set vpn ipsec ike-group AWS dead-peer-detection interval '15'
set vpn ipsec ike-group AWS dead-peer-detection timeout '45'

Next configure the interfaces. All the downloaded VPN configurations refer to vti0 and vti1, but you cannot reuse the same VTIs for multiple tunnels. So replace vti0/vti1 with vtiX/vtiY appropriately. Example:

set interfaces vti vti3 address '169.A.B.C/30'
set interfaces vti vti3 description 'Oregon to Virginia Tunnel 1'
set interfaces vti vti3 mtu '1436'

set interfaces vti vti4 address '169.X.Y.Z/30'
set interfaces vti vti4 description 'Oregon to Virginia Tunnel 2'
set interfaces vti vti4 mtu '1436'

In the site-to-site section of the downloaded configuration files, local-address will be set to the Elastic IP address of the VyOS instance. VyOS will not like that, because it does not know anything about the EIP. Change it to the local eth0 address (eg: 10.5.0.10), and apply the site-to-site configuration:

set vpn ipsec site-to-site peer X.Y.Z.A authentication mode 'pre-shared-secret'
set vpn ipsec site-to-site peer X.Y.Z.A authentication pre-shared-secret 'XX1'
set vpn ipsec site-to-site peer X.Y.Z.A description 'Oregon to Virginia Tunnel 1'
set vpn ipsec site-to-site peer X.Y.Z.A ike-group 'AWS'
set vpn ipsec site-to-site peer X.Y.Z.A local-address '10.A.B.C'
set vpn ipsec site-to-site peer X.Y.Z.A vti bind 'vtiX'
set vpn ipsec site-to-site peer X.Y.Z.A vti esp-group 'AWS'
...

Next configure BGP:

set protocols bgp 650xy neighbor 169.A.B.E remote-as 'xyz1'
set protocols bgp 650xy neighbor 169.A.B.E soft-reconfiguration 'inbound'
set protocols bgp 650xy neighbor 169.A.B.E timers holdtime '30'
set protocols bgp 650xy neighbor 169.A.B.E timers keepalive '30'
...

In my setup, I also changed the ntp servers and the hostname:

set system host-name my-hostname
delete system ntp
set system ntp server 0.a.b.ntp.org
set system ntp server 1.a.b.ntp.org
set system ntp server 2.a.b.ntp.org

Amazon instances only get a route for their subnet and not the entire VPC. If you check the output of show ip route, you will see a route for the VyOS subnet. Add a static route for the entire VPC. The following example assumes you have a 10.X.0.0/16 VPC:

set protocols static route 10.X.0.0/16 next-hop 10.X.0.1 distance 10

Finally, configure the route/network BGP will advertise to the other end (Amazon). For BGP to advertise the route, the route should be in the routing table.

set protocols static route 10.0.0.0/8 next-hop 10.Y.0.1 distance 100
set protocols bgp 650xy network 10.0.0.0/8

Commit the changes and back up the configuration. Keep a copy of the configuration somewhere safe (not on the VyOS instances):

commit
save
save /home/vyos/backup.conf
exit

From the backed up configuration file, it is better to remove sections that are specific to the VyOS instance. This way, the configuration can be merged easily when instances need to be replaced later:

  • interfaces ethernet eth0
  • service
  • system

You can refer to the VyOS documentation wiki, but here are some commands I found useful:

show ip route
show ip bgp
show ip bgp summary
show ip bgp neighbor 169.A.B.E advertised-routes
show ip bgp neighbor 169.A.B.E received-routes
show vpn debug

At this point, all VPN tunnels in all VPCs should be green, and each should be receiving exactly 1 route. Modify all the VPC route tables and enable route propagation. All instances should now be able to reach other instances irrespective of which VPC they are in.

If it is necessary to replace a VyOS instance:

  • Kill the instance that is being replaced
  • Create another instance in the same public subnet with the same private IP
  • Choose the correct security group and SSH key
  • Disable the source/dest checks
  • Reassign the EIP from the old instance
  • SCP the backup configuration file to the new VyOS instance
  • SSH to the instance:
$ configure
$ delete system ntp
$ commit
$ merge /home/vyos/backup.conf
$ commit
$ save
$ exit

There are 4 tunnels from each VPC to the hub. If one VyOS box dies, traffic will start flowing through the other one. To test, start a ping from an instance in VPC1 to another instance in VPC2. While it is running, reboot the VyOS1 instance; you should see minimal disruption. Once the VyOS1 box comes back up, reboot VyOS2; traffic should fail over appropriately.

Finally, modify the security group/NACLs. NTP uses 123/udp (inbound and outbound). IPsec uses 500/udp and the ESP/AH IP protocols (inbound and outbound). BGP uses 179/tcp. And of course you want SSH (22/tcp) open as well. You can restrict the security group/NACLs by port/protocol. Another option is to whitelist the Amazon VPN tunnel IP addresses and allow all traffic from those IPs.
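
As a sketch with the AWS CLI (the group ID and source addresses are placeholders; ESP is IP protocol 50, AH is 51):

$ aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol udp --port 500 --cidr <tunnel-ip>/32
$ aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol 50 --cidr <tunnel-ip>/32
$ aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 179 --cidr <peer-ip>/32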
