PostgreSQL to Hadoop/Hive

Ever tried to get data from PostgreSQL into Hive? I came across the CSV SerDe bundled with the latest version of Apache Hive, but for all practical purposes it is useless: it treats every column as a string. So I wrote my own SerDe; you can find the source on GitHub. Dump your PostgreSQL table data using pg_dump or psql with COPY in plain-text format.
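For the export itself, either of these works (database and table names here are made up; adjust to your own):

```shell
# Option 1: plain-text, data-only dump (COPY format) of a single table.
pg_dump --data-only --table=my_table my_database > my_table.dump

# Option 2: export just the rows with psql's \copy (tab-delimited text).
psql my_database -c "\copy my_table TO 'my_table.dump'"
```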

Download the pgdump-serde jar to your local machine. Open the Hive shell and add the jar. Then create an external table and load the dump data. If you are using a pg_dump file, note that this SerDe cannot handle schema statements, comments, column headers, etc., so remove any header/footer lines that are not row data.
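The cleanup can be scripted. A pg_dump data-only file wraps the rows between a "COPY ... FROM stdin;" line and a lone "\." marker, so something like this strips everything else (self-contained sketch with a simulated dump; table and file names are made up):

```shell
# Simulate a small data-only pg_dump file (hypothetical table).
printf 'SET client_encoding;\nCOPY my_table (id, name) FROM stdin;\n1\talice\n2\tbob\n\\.\n' > my_table.dump

# Keep only the lines between the "COPY ... FROM stdin;" header and the "\." terminator.
sed -n '/^COPY /,/^\\\.$/p' my_table.dump | sed '1d;$d' > my_table.txt

cat my_table.txt    # prints the two data rows only
```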

hive> add jar <path to>/pgdump-serde-1.0.4-1.2.0-all.jar;
hive> USE my_database;
hive> CREATE EXTERNAL TABLE `my_table_ext` (
  `id` string,
  `time` timestamp,
  `what` boolean,
  `size` int)
ROW FORMAT SERDE 'com.pasam.hive.serde.pg.PgDumpSerDe'
LOCATION '/tmp/my_table_ext';
hive> LOAD DATA LOCAL INPATH '<path to dump directory>/my_table.dump' OVERWRITE INTO TABLE my_table_ext;

MongoDB WiredTiger slow queries

Recently we hit a production MongoDB (version 3.2.6) issue. MongoDB was reporting lots of slow queries, and our application was starting to show performance problems. Some of the slow responses were even for covered queries.

mongostat was reporting a very high %used for the WiredTiger cache, and it was not coming down. As a result we were seeing a significantly high value for db.serverStatus().wiredTiger.cache["pages evicted by application threads"], which was slowing down many queries. Ideally this value should be zero; it starts climbing once cache %used hits about 96%, whereas ideally it should hover around 80%.
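To watch the relevant counters directly, a sketch assuming a local mongod and the serverStatus field names as they appear in MongoDB 3.2:

```shell
mongo --quiet --eval '
  var c = db.serverStatus().wiredTiger.cache;
  print("bytes currently in cache : " + c["bytes currently in the cache"]);
  print("maximum bytes configured : " + c["maximum bytes configured"]);
  print("evicted by app threads   : " + c["pages evicted by application threads"]);'
```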

Currently experimenting with WiredTiger eviction parameters to see if it makes any difference:

  • eviction_trigger
  • eviction_target
  • eviction_dirty_target
  • eviction=(threads_min=X,threads_max=Y)
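These parameters can be passed to mongod through the WiredTiger engine configuration string; for example (values are illustrative, not a recommendation):

```shell
mongod --dbpath /data/db \
  --wiredTigerEngineConfigString="eviction_trigger=90,eviction_target=80,eviction=(threads_min=4,threads_max=8)"
```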

It looks like the eviction server is not able to keep up with evicting pages, and it gets into a state where application threads evict pages themselves, causing the slowdown 😦


We had thousands of collections in this database and tens of thousands of indexes. Most of the collections were collection shards used to work around MMAPv1 collection lock contention. Before we sharded the collections, one of them grew very big: tens of millions of entries. In this scenario, depending on the application's CRUD pattern, you can hit cache-related issues. Two solutions worked for me with WiredTiger: either evenly balance the sharded collections, or consolidate them.


AWS VPN High Availability

This is a refinement of my previous approach. In the previous model, there were two VyOS instances in every AWS region. In this model, there are only two VyOS instances, in the hub region, and all Amazon regions (including the hub region) connect to them. Each line below represents two tunnels: an Amazon VPN connection comes with two tunnels, but both tunnels connect to the same server (VyOS) on the other end.


Total cost comes down to (2 * $0.05 per hour * number of regions) + (2 * the instance cost for VyOS). In our deployment I chose c3.2xlarge, which is $0.42 per hour; with reserved instances that price comes down to roughly $0.20 per hour per instance. For a total of four regions, the cost is (2 * 0.05 * 4) + (2 * 0.42) = $1.24 per hour with on-demand instances. With 1-year reserved instances, the cost comes down to roughly $0.90 per hour. c3.2xlarge is probably bigger than what we need, but it has high network throughput.

Figure out your hub AWS region. Launch two VyOS AMIs in two different availability zones

  • These should be in public subnet with public IP addresses
  • Enable termination protection if you want to be on the safe side
  • Change shutdown behavior to stop the instance (instead of terminate)
  • Disable source/destination checks (important)
  • Use an open security group until the configuration is done

Allocate two Elastic IPs (EIP) and associate them with the two instances

Upgrade VyOS to the latest version (accept the default values for all the prompts). Reboot after it is done

$ add system image http://packages.vyos.net/iso/release/version/vyos-version-amd64.iso
$ reboot

In every region (including the hub), create two customer gateways (CGW), one for each VyOS instance

  • Use dynamic routing
  • Use a BGP ASN from private space (eg: 65000). Use the same value for all CGWs
  • Use the Elastic IP address of VyOS
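With the AWS CLI, each customer gateway can be created like this (run once per VyOS instance in every region; the EIP is a placeholder):

```shell
aws ec2 create-customer-gateway \
  --type ipsec.1 \
  --bgp-asn 65000 \
  --public-ip <EIP-of-VyOS>
```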

Also in every region, create Virtual Private Gateway (VPG) and attach it to the VPC. And finally create two VPN connections (one for each CGW)

  • VPG should match the one created before
  • Routing should be dynamic
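The same steps from the AWS CLI, with placeholder IDs:

```shell
# Create the virtual private gateway and attach it to the VPC.
vgw=$(aws ec2 create-vpn-gateway --type ipsec.1 \
  --query 'VpnGateway.VpnGatewayId' --output text)
aws ec2 attach-vpn-gateway --vpn-gateway-id "$vgw" --vpc-id <vpc-id>

# One VPN connection per customer gateway; dynamic routing is the
# default (StaticRoutesOnly=false).
aws ec2 create-vpn-connection --type ipsec.1 \
  --customer-gateway-id <cgw-id-1> --vpn-gateway-id "$vgw"
aws ec2 create-vpn-connection --type ipsec.1 \
  --customer-gateway-id <cgw-id-2> --vpn-gateway-id "$vgw"
```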

Once the VPNs are created, download the configuration for each one of them

  • Vendor: Vyatta
  • Platform: Vyatta Network OS
  • Software: Vyatta Network OS 6.5+

There is a lot in common across all of these configuration files. Depending on the number of regions, you might end up with 2, 4, 6, or 8 configuration files. Separate the files into two groups: the ones associated with CGW1 and the ones for CGW2.

$ ssh -i private-key vyos@elastic-ip-of-cgw
$ configure
set vpn ipsec ike-group AWS lifetime '28800'
set vpn ipsec ike-group AWS proposal 1 dh-group '2'
set vpn ipsec ike-group AWS proposal 1 encryption 'aes128'
set vpn ipsec ike-group AWS proposal 1 hash 'sha1'
set vpn ipsec ipsec-interfaces interface 'eth0'
set vpn ipsec esp-group AWS compression 'disable'
set vpn ipsec esp-group AWS lifetime '3600'
set vpn ipsec esp-group AWS mode 'tunnel'
set vpn ipsec esp-group AWS pfs 'enable'
set vpn ipsec esp-group AWS proposal 1 encryption 'aes128'
set vpn ipsec esp-group AWS proposal 1 hash 'sha1'
set vpn ipsec ike-group AWS dead-peer-detection action 'restart'
set vpn ipsec ike-group AWS dead-peer-detection interval '15'
set vpn ipsec ike-group AWS dead-peer-detection timeout '45'

Next configure the interfaces. All the downloaded VPN configurations refer to vti0 and vti1, but you cannot use the same VTIs for multiple tunnels, so replace vti0/vti1 with vtiX/vtiY as appropriate. Example:

set interfaces vti vti3 address '169.A.B.C/30'
set interfaces vti vti3 description 'Oregon to Virginia Tunnel 1'
set interfaces vti vti3 mtu '1436'

set interfaces vti vti4 address '169.X.Y.Z/30'
set interfaces vti vti4 description 'Oregon to Virginia Tunnel 2'
set interfaces vti vti4 mtu '1436'

In the site-to-site section of the downloaded configuration files, local-address will be set to the Elastic IP address of VyOS. VyOS will not like that, because it does not know anything about the EIP. Change it to the local eth0 address, then apply the site-to-site configuration:

set vpn ipsec site-to-site peer X.Y.Z.A authentication mode 'pre-shared-secret'
set vpn ipsec site-to-site peer X.Y.Z.A authentication pre-shared-secret 'XX1'
set vpn ipsec site-to-site peer X.Y.Z.A description 'Oregon to Virginia Tunnel 1'
set vpn ipsec site-to-site peer X.Y.Z.A ike-group 'AWS'
set vpn ipsec site-to-site peer X.Y.Z.A local-address '10.A.B.C'
set vpn ipsec site-to-site peer X.Y.Z.A vti bind 'vtiX'
set vpn ipsec site-to-site peer X.Y.Z.A vti esp-group 'AWS'

Next configure BGP:

set protocols bgp 650xy neighbor 169.A.B.E remote-as 'xyz1'
set protocols bgp 650xy neighbor 169.A.B.E soft-reconfiguration 'inbound'
set protocols bgp 650xy neighbor 169.A.B.E timers holdtime '30'
set protocols bgp 650xy neighbor 169.A.B.E timers keepalive '30'

In my setup, I also changed the NTP servers and the hostname:

set system host-name my-hostname
delete system ntp
set system ntp server 0.a.b.ntp.org
set system ntp server 1.a.b.ntp.org
set system ntp server 2.a.b.ntp.org

Amazon instances only get a route for their subnet, not the entire VPC. If you check the output of show ip route, you will see a route for the VyOS subnet only. Add a static route for the entire VPC. The following example assumes you have a 10.X.0.0/16 VPC:

set protocols static route 10.X.0.0/16 next-hop 10.X.0.1 distance 10

Finally, configure the route/network that BGP will advertise to the other end (Amazon). For BGP to advertise a route, the route must be in the routing table:

set protocols static route next-hop 10.Y.0.1 distance 100
set protocols bgp 650xy network

Commit the changes and back up the configuration. Keep a copy of the configuration somewhere safe (not on the VyOS instances):

save /home/vyos/backup.conf

From the backed-up configuration file, it is better to remove the sections that are specific to the VyOS instance. This way, the configuration can be merged easily when instances need to be replaced later:

  • interfaces ethernet eth0
  • service
  • system

You can refer to the VyOS documentation wiki; here are some commands I found useful:

show ip route
show ip bgp
show ip bgp summary
show ip bgp neighbor 169.A.B.E advertised-routes
show ip bgp neighbor 169.A.B.E received-routes
show vpn debug

At this point, all VPN tunnels in all VPCs should be green, and each should be receiving exactly 1 route. Modify all the VPC route tables and enable route propagation. All instances should then be able to reach other instances irrespective of which VPC they are in.
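Route propagation can also be enabled from the AWS CLI (placeholder IDs; run once per route table):

```shell
aws ec2 enable-vgw-route-propagation \
  --route-table-id <rtb-id> \
  --gateway-id <vgw-id>
```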

If it is necessary to replace a VyOS instance:

  • Kill the instance that is being replaced
  • Create another instance in the same public subnet with the same private IP
  • Choose the correct security group and SSH key
  • Disable the source/dest checks
  • Reassign the EIP from the old instance
  • SCP the backup configuration file to the new VyOS instance
  • SSH to the instance:
$ configure
$ delete system ntp
$ commit
$ merge /home/vyos/backup.conf
$ commit
$ save
$ exit

There are 4 tunnels from each VPC to the hub. If one VyOS box dies, traffic will start flowing through the other one. To test this, start a ping from an instance in VPC1 to an instance in VPC2. While it is running, reboot the VyOS1 instance; you should see minimal disruption. Once VyOS1 comes back up, reboot VyOS2; traffic should fail over appropriately.

Finally, modify the security groups/NACLs. NTP uses 123/udp (inbound and outbound). IPsec uses 500/udp plus the ESP/AH IP protocols (inbound and outbound). BGP uses 179/tcp. And of course you want SSH (22/tcp) open as well. You can restrict the security groups/NACLs by port/protocol; another option is to whitelist the Amazon VPN tunnel IP addresses and allow all traffic from those IPs.
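A sketch of the port/protocol approach with the AWS CLI (group ID and CIDRs are placeholders; add the matching AH rule and tighten sources to your tunnel/peer addresses):

```shell
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol udp --port 123 --cidr 0.0.0.0/0   # NTP
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol udp --port 500 --cidr 0.0.0.0/0   # IKE
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol 50 --cidr 0.0.0.0/0               # ESP (IP protocol 50)
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 179 --cidr 10.0.0.0/8  # BGP
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr 10.0.0.0/8   # SSH
```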
