Hasura on AWS EKS with an Application Load Balancer

Escaped makes a point of hiring security consultants with a development background: we believe that to know how to test applications, you need to know how to build them.

We're always working on internal projects and tools, and for one of them we decided to use the Hasura GraphQL Engine as our backend. We wanted to deploy it on AWS using Kubernetes with an application load balancer. We thought this would be a fairly quick process, as we're familiar with both AWS and Hasura. We were wrong. About a day and a half later, and with a few grey hairs, we finally got the configuration right. We wanted to share it, as it will hopefully ease someone else's pain.

This article assumes you have already authenticated to your AWS environment via the CLI.
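Before starting, it's worth a quick sanity check that your credentials resolve and that the client tooling used throughout this article is installed:

```shell
# confirm the CLI resolves to the expected AWS account/role
aws sts get-caller-identity

# confirm the tools used in the rest of this walkthrough are present
eksctl version
kubectl version --client
helm version
```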

Cluster

We wanted a development environment with cheaper resources, so we specified small instances.

eksctl create cluster \
  --name hasura-cluster \
  --version 1.24 \
  --region us-east-1 \
  --zones us-east-1c,us-east-1d \
  --nodegroup-name linux-nodes \
  --nodes 2 \
  --nodes-min 2 \
  --nodes-max 2 \
  --with-oidc \
  --managed \
  --node-type t2.small
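Cluster creation takes a while. On success, eksctl merges the new cluster into your kubeconfig, so you can confirm both nodes joined before moving on:

```shell
# the two t2.small nodes should report STATUS Ready
kubectl get nodes

# system pods (coredns, kube-proxy, aws-node) should all be Running
kubectl get pods --all-namespaces
```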

Load Balancer Controller

This excellent article got us most of the way there, but still left us with a gateway timeout: you will need to set up a load balancer controller for the process detailed here to work. Follow the steps from "[NEW] Deploying the AWS Load Balancer controller" through to "kubectl -n kube-system get po", substituting your cluster name and region.
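For reference, those steps boil down to roughly the following. This is a condensed sketch based on the AWS documentation; `<ACCOUNT_ID>` and `<cluster-name>` are placeholders, and the pinned controller/policy version may have moved on:

```shell
# IAM policy the controller needs in order to manage ALBs
curl -o iam_policy.json \
  https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json

# service account bound to that policy (relies on the --with-oidc flag used earlier)
eksctl create iamserviceaccount \
  --cluster=<cluster-name> \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::<ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve

# install the controller itself via helm
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

# verify the controller pods come up
kubectl -n kube-system get po
```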

RDS

At this point, we manually set up an RDS instance, creating a new VPC for it to live in. We needed to access the RDS instance from the Internet for development, but it also needed to be accessible from EKS. We connected to the database from a desktop client and created a database for Hasura to use.
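We did this through the console, but the equivalent can be scripted. The following is a hypothetical example: the identifier, credentials and sizing are made up, and you would still need to open the instance's security group to your own IP for desktop access:

```shell
# small, publicly accessible Postgres instance -- for development only
aws rds create-db-instance \
  --db-instance-identifier hasura-dev \
  --engine postgres \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username hasura \
  --master-user-password 'changeme' \
  --publicly-accessible

# once the instance is available, create the database Hasura will use
psql -h <rds-endpoint> -U hasura -d postgres -c 'CREATE DATABASE hasura;'
```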

The solution to the networking came from this script, which creates a peering connection between the two VPCs. This was a massive time saver.
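We haven't reproduced the script here, but the core of what a peering setup like this does looks roughly as follows in AWS CLI terms (all IDs and CIDRs below are placeholders):

```shell
# request and accept a peering connection between the two VPCs
PEERING_ID=$(aws ec2 create-vpc-peering-connection \
  --vpc-id <eks-vpc-id> \
  --peer-vpc-id <rds-vpc-id> \
  --query 'VpcPeeringConnection.VpcPeeringConnectionId' \
  --output text)
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id "$PEERING_ID"

# route each VPC's traffic for the other VPC's CIDR over the peering connection
aws ec2 create-route --route-table-id <eks-route-table-id> \
  --destination-cidr-block <rds-vpc-cidr> \
  --vpc-peering-connection-id "$PEERING_ID"
aws ec2 create-route --route-table-id <rds-route-table-id> \
  --destination-cidr-block <eks-vpc-cidr> \
  --vpc-peering-connection-id "$PEERING_ID"

# the RDS security group also needs an inbound rule for the EKS node CIDR
```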

Certificate Manager

To use SSL certificates, we needed to install a certificate manager into the cluster. We used Helm for this.

helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.7.2 \
  --set installCRDs=true
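Note that this install assumes the jetstack chart repository is already known to Helm; if it isn't, add it first, and then check that the pods come up:

```shell
# register the jetstack repo (required before the helm install above)
helm repo add jetstack https://charts.jetstack.io
helm repo update

# after installing, the cert-manager, cainjector and webhook pods should be Running
kubectl -n cert-manager get pods
```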

The Config

The final step was to apply a namespace, the app deployment, a NodePort service, and an ingress. Our config is below.

---
apiVersion: v1
kind: Namespace
metadata:
  name: hasura-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hasura
    hasuraService: custom
  name: hasura
  namespace: hasura-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hasura
  template:
    metadata:
      labels:
        app: hasura
    spec:
      containers:
      - image: hasura/graphql-engine:v2.16.0
        imagePullPolicy: IfNotPresent
        name: hasura
        env:
        # - name: HASURA_GRAPHQL_SERVER_PORT
        #   value: "80"
        - name: HASURA_GRAPHQL_DATABASE_URL
          value: postgres://username:password@hostname:5432/dbname
        ## enable the console served by server
        - name: HASURA_GRAPHQL_ADMIN_SECRET 
          value: changeme
        - name: HASURA_GRAPHQL_ENABLE_CONSOLE
          value: "true"
        ## enable debugging mode. It is recommended to disable this in production
        - name: HASURA_GRAPHQL_DEV_MODE
          value: "true"
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  namespace: hasura-ns
  name: hasura-nodeport
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: NodePort
  selector:
    app: hasura
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: hasura-ns
  name: hasura
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
    alb.ingress.kubernetes.io/certificate-arn: arn of certificate
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    # with AWS Load Balancer Controller v2.2+, this redirects HTTP to HTTPS
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: hasura-nodeport
              port:
                number: 80
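With the manifest saved locally (the filename here is just our choice), apply it and watch for the ALB address to appear on the ingress:

```shell
# apply namespace, deployment, service and ingress in one shot
kubectl apply -f hasura.yaml

# pods should reach Running; the ingress ADDRESS column shows the ALB hostname
kubectl -n hasura-ns get pods
kubectl -n hasura-ns get ingress hasura
```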

Hopefully, at this point, you can see a load balancer in your AWS console and connect to Hasura via its associated URL. By the time we got this working, it was probably our 20th attempt!

Escaped is a penetration testing company that specializes in serving businesses that don't have an in-house security team. If you have any questions about the penetration testing process, we'd love to help. Reach out here at any time!