For bookingkit, we developed a high-performance API Gateway built with Golang and running on AWS. Our work encompassed backend development, API integration, and infrastructure automation, delivering a robust and scalable solution. The Gateway unified data access across instances, optimized response times, and ensured 24/7 availability, empowering bookingkit to support its partners and enhance user engagement in the leisure industry.
bookingkit is one of the leading Berlin-based companies in the leisure industry and is highly recognized in the sector. The company operates a neutral Global Distribution System (GDS) that gives travel agencies and marketing networks access to tours and activities in the form of a digital inventory. It is also a web-based, integrated software-as-a-service solution for providers of tours and activities.
bookingkit approached us with an idea of how they would like to change their platform’s architecture to enable scaling it even further. The chosen solution is based on sharding: creating several instances that share the same codebase but hold different datasets, derived from the source data and divided by a chosen factor.
Such an approach, similar to a microservice architecture, creates certain challenges - most notably, clients still need a single, unified point of access to data that is now spread across several instances. That can be achieved, for instance, by placing an API gateway in front of the application instances, and that is what we aimed for.
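To make the idea concrete, here is a minimal sketch in Go of the routing side of such a gateway: a reverse proxy that forwards each incoming request to the instance holding the relevant data shard. The shard keys, instance hostnames, and the X-Shard-Key header are hypothetical placeholders - the actual factor bookingkit shards by is not shown here.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// Hypothetical mapping from a shard key to the instance that owns that
// dataset; the real routing factor is decided by the platform.
var shards = map[string]*url.URL{
	"shard-a": mustParse("http://instance-a.internal:8080"),
	"shard-b": mustParse("http://instance-b.internal:8080"),
}

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	return u
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// For illustration the shard is picked from a request header; a real
		// gateway would derive it from the authenticated account or the URL.
		target, ok := shards[r.Header.Get("X-Shard-Key")]
		if !ok {
			http.Error(w, "unknown shard", http.StatusBadGateway)
			return
		}
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```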
In addition, the solution had to satisfy strict non-functional requirements. Above all, everything has to be highly available - bookingkit works with demanding partners such as Tripadvisor and GetYourGuide, which means its services have to be available 24/7.
When you have been building a platform for years, it is not easy - and sometimes not even viable - to go from a monolith to full microservices right away. Such an operation could consume too much time and too many resources, and it is simply risky. But an API Gateway is a necessary step forward in creating a distributed system. The new service had to do everything that bookingkit’s existing API could do.
We knew the issues and the main goal. The question that remained was: “How exactly are we going to achieve that?” Finding the complete answer took a couple of steps, and it all started with an intensive technical workshop.
The first concept we discussed was a service that would not only mimic the existing API but would also hold a copy of the whole database, combining all the data that was supposed to be sharded across the instances connected to it.
After workshops and further discussions, it became clear we needed to validate our assumptions with a Proof of Concept using Flask, a Python framework. While the results were somewhat pleasing, concerns about performance, complexity, data integrity, and migrations persisted.
Another technical meeting led to the idea of a lighter version of the service - one that would simply relay requests to the instances behind it and combine their responses. In this iteration, we also decided to compare Python with Golang to see whether we could get better performance.
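A rough sketch of that “relay and combine” idea in Go could look like the snippet below: the gateway calls the same path on every instance concurrently and merges the JSON array responses into one. The instance URLs and the /bookings endpoint are placeholders rather than bookingkit’s actual API.

```go
package main

import (
	"encoding/json"
	"io"
	"log"
	"net/http"
	"sync"
)

// Hypothetical shard instances sitting behind the gateway.
var instances = []string{
	"http://instance-a.internal:8080",
	"http://instance-b.internal:8080",
}

// fanOut calls the same path on every instance concurrently and merges the
// decoded JSON array responses into a single slice.
func fanOut(path string) ([]json.RawMessage, error) {
	var (
		mu       sync.Mutex
		wg       sync.WaitGroup
		combined []json.RawMessage
		firstErr error
	)
	fail := func(err error) {
		mu.Lock()
		if firstErr == nil {
			firstErr = err
		}
		mu.Unlock()
	}
	for _, base := range instances {
		wg.Add(1)
		go func(base string) {
			defer wg.Done()
			resp, err := http.Get(base + path)
			if err != nil {
				fail(err)
				return
			}
			defer resp.Body.Close()
			body, err := io.ReadAll(resp.Body)
			if err != nil {
				fail(err)
				return
			}
			var items []json.RawMessage
			if err := json.Unmarshal(body, &items); err != nil {
				fail(err)
				return
			}
			mu.Lock()
			combined = append(combined, items...)
			mu.Unlock()
		}(base)
	}
	wg.Wait()
	return combined, firstErr
}

func main() {
	// "/bookings" is a placeholder endpoint used only for this sketch.
	http.HandleFunc("/bookings", func(w http.ResponseWriter, r *http.Request) {
		items, err := fanOut("/bookings")
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(items)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Fanning out concurrently rather than sequentially keeps the gateway’s added latency close to that of the slowest single instance, which matters once TTFB becomes the key metric.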
That was the moment when it “clicked.” Not only did we confirm the solution was a good fit, but we also discovered that choosing Go yielded surprisingly better results - the TTFB was half that of the Python solution.
An API gateway proxying and aggregating data to and from the instances was created. The programming language chosen was Golang, and the infrastructure - also a big part of the solution - runs on AWS (AWS ECS). We also prepared a robust CI/CD pipeline with automated tests of the whole API written in Cypress, automated performance tests in k6, and automatic deployment from the pipeline to the staging and production environments. All of the infrastructure is automated with Terraform.
We established that the Gateway pattern would be suitable for this project, covering all the functional requirements while remaining lightweight.
Go is the way to go! The application written in Golang responds to an API request twice as fast as the one created in Python/Flask.
The initial plan for the infrastructure setup was conceived - our non-functional requirements covered the use of AWS services, high availability, and response latencies as low as possible.
A specific Gateway response latency, measured as TTFB (Time To First Byte), became our key goal to meet.
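As a point of reference, TTFB can be measured straight from Go’s standard library with net/http/httptrace; the snippet below is a simplified sketch with a placeholder URL, not the k6 setup used in the actual performance tests.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/http/httptrace"
	"time"
)

// measureTTFB issues a GET request and reports the time between writing the
// request and receiving the first byte of the response.
func measureTTFB(url string) (time.Duration, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return 0, err
	}

	var start time.Time
	var ttfb time.Duration
	trace := &httptrace.ClientTrace{
		WroteRequest:         func(httptrace.WroteRequestInfo) { start = time.Now() },
		GotFirstResponseByte: func() { ttfb = time.Since(start) },
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	resp, err := http.DefaultTransport.RoundTrip(req)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return ttfb, nil
}

func main() {
	// Placeholder URL; point this at the gateway endpoint under test.
	d, err := measureTTFB("http://localhost:8080/bookings")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("TTFB:", d)
}
```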