Docker and osquery

If you are into security you might have heard about osquery. It is an extremely powerful tool that can be used for various purposes:

  • Real-time endpoint monitoring
  • Anomaly detection
  • File integrity monitoring
  • Metrics (Prometheus)
  • Container (Docker) monitoring
  • Syslog aggregation

Numerous enterprises, big and small, from all verticals are using it or planning to use it. It is being deployed to production servers as well as employee desktops and laptops. It has first-class support for various flavors of Linux and macOS. Windows functionality is maturing, thanks to open source community contributions and Facebook’s efforts.

Recently we (Uptycs) started publishing Docker images with the latest osquery version. We published images for various versions of Ubuntu and CentOS. Generally, running osquery in a Docker container doesn’t make sense unless you are using CoreOS Container Linux. But if you are just playing with osquery and want to test some functionality, the Docker images are ideal. osquery comes with two binaries: the interactive shell osqueryi and the osqueryd daemon.

The interactive shell can be launched as follows. It will present a SQL prompt:

$ docker run -it uptycs/osquery:2.7.0 osqueryi
Using a virtual database. Need help, type '.help'
osquery>

You can run sample queries like:

osquery> SELECT * FROM processes;

When running inside a container, osquery will only return information available to it from within the container. In the case of processes, which are retrieved from /proc, osquery will return the processes running inside the container. In this case it will be one process: /usr/bin/osqueryi

If you omit the command part, it will launch the osquery daemon with a warning:

$ docker run -it uptycs/osquery:2.7.0
W0919 19:46:14.058230 7 init.cpp:649] Error reading config: config file does not exist: /etc/osquery/osquery.conf

This is because the Dockerfile is configured to run the osquery daemon by default with the following arguments:

osqueryd --flagfile /etc/osquery/osquery.flags --config_path /etc/osquery/osquery.conf
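
Presumably the Dockerfile ends with something along these lines (a sketch for illustration, not the actual Uptycs Dockerfile):

CMD ["osqueryd", "--flagfile", "/etc/osquery/osquery.flags", "--config_path", "/etc/osquery/osquery.conf"]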

The flags file and the config file are meant to be provided from the host. osquery has an extensive number of command-line flags. Optionally, a configuration file can also be provided to the container. Refer to the osquery configuration documentation for what can be specified in the conf file.
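
For example, a minimal osquery.flags might look like this (illustrative values, not a recommended production setup):

--host_identifier=hostname
--logger_plugin=filesystem
--logger_path=/var/log/osquery

And a minimal osquery.conf that schedules a single query:

{
  "schedule": {
    "processes": {
      "query": "SELECT pid, name, path FROM processes;",
      "interval": 60
    }
  }
}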

Assuming osquery.flags and osquery.conf are created on the host machine in directory /some/path, the osquery daemon can be launched as:

$ docker run -it -v /some/path:/etc/osquery uptycs/osquery:2.7.0

osquery can also gather information about Docker. When osquery is running inside a container, it cannot talk to the Docker daemon running on the host machine. If you expose the Docker UNIX domain socket from the host to the container, osquery can query and gather information about Docker images, containers, volumes, networks, labels, etc.:

$ docker run -it -v /some/path:/etc/osquery -v /var/run/docker.sock:/var/run/docker.sock uptycs/osquery:2.7.0
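
With the socket mounted, the Docker tables (docker_containers, docker_images, and so on) can, for example, be queried from the interactive shell:

$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock uptycs/osquery:2.7.0 osqueryi
osquery> SELECT id, name, image FROM docker_containers;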

If logging is configured, the osquery daemon needs to identify itself to the log endpoint.
The --host_identifier flag should be configured appropriately. If the hostname value is used for the host identifier, you might want to start Docker with the --hostname option:

$ docker run -it --hostname osquery1.example.com -v /some/path:/etc/osquery -v /var/run/docker.sock:/var/run/docker.sock uptycs/osquery:2.7.0

host_identifier with the uuid value is not appropriate if you are planning on launching multiple osquery Docker instances, or if you have osquery running on the host as well: the Docker containers will end up sharing the same UUID.
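
You can check the value a container reports with a one-off query against the system_info table:

$ docker run -it uptycs/osquery:2.7.0 osqueryi 'SELECT uuid FROM system_info;'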

As I mentioned above, osquery in Docker only makes sense for playing with osquery or testing it. In real deployments it should run on the host or virtual machine. In the case of CoreOS Container Linux there is no easy way to run a service directly on the host/virtual machine. On Container Linux, osquery can be run inside a toolbox, which uses systemd-nspawn. I will cover this in a separate post.

Private docker registry on AWS with S3

Creating a private Docker registry is pretty trivial and well documented. If you are just playing with it, Docker Hub might be a good start. Here are a few things to figure out before starting with a private registry:

  • Storage. There are numerous options:
    • File system
    • Azure
    • Google cloud (GCS)
    • AWS S3
    • Swift
    • OSS
    • In memory (not a good option unless you are testing)
  • Authentication
    • silly (as the name implies it is really silly and not suitable for real deployments)
    • htpasswd (Apache htpasswd-style authentication. Credentials are predefined in a file and only suitable when used with TLS; see the example after this list)
    • token (OAuth 2.0-style authentication using a Bearer token. This could be tricky if you have Jenkins or other CI systems building and pushing Docker images)
  • Transport security
    • Use of TLS is strongly advised. If you don’t have an X509 cert/key, use the free letsencrypt service
  • Storage security
    • Ideally image data should also be secured at rest. See below for S3 storage security
  • Regions
    • If accessing data from multiple regions is required, the Docker registry provides the ability to use CloudFront
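
For example, an htpasswd file can be generated with bcrypt (the registry only accepts bcrypt entries) and wired into the registry configuration along these lines; the user name, password, and paths are placeholders:

$ htpasswd -Bbn alice secretpassword > /host/path/to/auth/htpasswd

auth:
  htpasswd:
    realm: basic-realm
    path: /auth/htpasswd

The /host/path/to/auth directory would also need to be mounted into the registry container.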

Here is a quick and easy setup on AWS using S3 as storage:

  • Create an S3 bucket in the region where you want to store the images (my-docker-registry)
  • If you got burned by the AWS S3 outage a few months back, you might also want to replicate your bucket to another region 🙂 It is pretty simple to set up
  • I also recommend encrypting the data in the S3 bucket. You can do this using AWS Key Management Service (KMS) or using Server Side Encryption (SSE) with AES-256. If you are replicating the bucket data to other region(s), you cannot use KMS
  • For the bucket, set a bucket policy (under bucket permissions) to enforce encrypted data. Here is a sample bucket policy for enforcing SSE AES-256:
{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "DenyIncorrectEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-docker-registry/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        },
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-docker-registry/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        }
    ]
}
  • Figure out where you are going to run the registry. The Docker registry is itself a Docker image. It is better to have this EC2 instance in the same region as the S3 bucket. Ideally it should be in a VPC with an S3 endpoint configured (see the sketch below). Whether the instance should have a public IP or not depends on where you are going to push/pull the images from!
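A hypothetical example of creating the S3 VPC endpoint with the AWS CLI (the VPC and route table IDs are placeholders):
$ aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0123456789abcdef0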
  • Ideally the instance hosting the Docker registry should be launched with an IAM role. This way there won’t be a need to provision access/secret keys. Here is a sample IAM policy for that role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::my-docker-registry"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::my-docker-docker/*"
        }
    ]
}
  • Configure the Security Group of the instance appropriately. Ideally I would disable all incoming ports except for 22 and 443 from your specific IP address
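For example, opening just those two ports from a single address with the AWS CLI (the group ID and source address are placeholders):
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 203.0.113.10/32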
  • Follow the installation instructions to install the latest Docker on the instance
  • The user who is going to bring up the Docker registry container should have access to talk to the Docker daemon. You can either do this as the root user 😦 or make a regular user part of the docker group (usermod -a -G docker userid)
  • Create a docker-compose.yml file. Here is a sample. In this case I used an X509 cert/key issued by a CA:
registry:
  restart: always
  image: registry:2
  ports:
   - 443:5000
  volumes:
   - /host/path/to/certs:/certs
   - /host/path/to/config.yml:/etc/docker/registry/config.yml
  • Create the /host/path/to/config.yml registry configuration. Here is a sample template with S3 storage and TLS configuration:
version: 0.1
storage:
  s3:
    region: us-east-1
    bucket: my-docker-registry
    encrypt: true
    secure: true
    v4auth: true
    chunksize: 5242880
    multipartcopychunksize: 33554432
    multipartcopymaxconcurrency: 100
    multipartcopythresholdsize: 33554432
  cache:
    blobdescriptor: inmemory
http:
  addr: 0.0.0.0:5000
  net: tcp
  prefix: /
  host: https://<registry hostname>
  tls:
    certificate: /certs/hostname.crt
    key: /certs/hostname.key
  headers:
    X-Content-Type-Options: [nosniff]
  http2:
    disabled: false
  • Replace <registry hostname> with the appropriate value. In this case, I used a real X509 certificate and key that are copied to the host and made available to the Docker registry image. Another option is to use the letsencrypt configuration (see below)
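If you go the letsencrypt route instead, the http section of config.yml can be sketched along these lines (the cache file path and email are placeholders, and the registry must be reachable on port 443 for certificate issuance):
http:
  addr: 0.0.0.0:5000
  host: https://<registry hostname>
  tls:
    letsencrypt:
      cachefile: /certs/letsencrypt.cache
      email: admin@example.com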
  • Bring up the docker registry:
$ docker-compose up -d
# Check logs
$ docker-compose logs registry
  • Now it should be possible to tag and push any image to your registry. For example:
$ docker pull ubuntu
$ docker tag ubuntu <registry hostname>/ubuntu
$ docker push <registry hostname>/ubuntu

At this point the registry should be working and usable, but because authentication is not yet set up, you should make sure it is only accessible from trusted hosts.
