In this blog post, we’d like to present one of our recent hires, Lars Bakke Krogvig. Lars works at Unacast as a Data Engineer, and his main responsibility is to make sure the data we receive from our partners flows steadily through our processing pipelines. In his spare time he’s a road cyclist, and almost as passionate about speed-solving Rubik’s Cubes as he is about freestyle skiing.

Lars

Lars, who are you?

I’m Lars, 27 years old, born and raised in Oslo. I have a Master’s Degree in Applied Mathematics from NTNU in Trondheim. I used to be pretty serious about twin-tip skiing but never got seriously injured. Now I’m more into road cycling, and yes, I suppose I occasionally maybe devote some time to improve at solving Rubik’s cubes. I also like programming, and lots of other things!

What did you do before you started at Unacast?

I worked as an Information Management Consultant in the private sector, where I designed, built and maintained reporting systems.

Why did you start working at Unacast?

My main motivation was that I wanted to roll up my sleeves and work more hands-on with the design of information flows and the implementation of processing pipelines. As a consultant I felt distanced from the problems I solved; I was more of a helping hand than a stakeholder. I wanted to be more involved.

I also wanted to become better at programming, so I felt it was a good idea to do more of that in my work (and perhaps slightly less in my spare time).

Another important factor for me was the opportunity to work with emerging technologies, which I believe are more fun and give me a good skill set for the future.

What kind of problems do you solve, and what are the tools you use?

My main tasks are to rationalize the logic and sequence of what we do with data at Unacast and to place that into a structured processing framework.

When working on data problems my job is to figure out where we are and where we want to go, and then to structure the steps in between and make it possible for everyone throughout the company to grasp what actually happens. I try to make complicated processes understandable for everyone, and have everyone around the table wholeheartedly nod and say “this makes sense”.

Regarding tools and methodology, I usually throw together a proof of concept using BigQuery and SQL, and when the concept is validated I put that into our pipelines with Dataflow.

What is the most fun thing about working at Unacast?

I really like that we have short lead times from an idea on a whiteboard to production, and the flexibility the toolset we use provides us with. It is also very rewarding (and sometimes scary) to see my ideas end up having a big impact on the business, and to have the power to influence the company in this way.

Beyond that, I really enjoy working in a relaxed environment with colleagues I can have fun with at work. I get to travel to the US and work with people there, which is a new and rewarding experience for me. Everyone seems to tolerate my dry sense of humor, and even respects me enough not to borrow my markers without asking, which is all anyone can ask for.

What will you gain from working at Unacast?

I see this as a rare opportunity to work with leading technologies and processes that are becoming standards internationally but have not yet gained much momentum here in Norway. I think this will be a big advantage for me in the long run.

Right now I benefit from being able to work from wherever I want, which gives me a lot of personal freedom that few companies are able to offer. I get to learn a lot from our skilled developers and draw on their experience, which I think is really nice.

All in all, I get to do the things I want to do in a great environment, and have fun along the way!

I really want to use Dataflow, but Java isn't my 🍵
What to do?

A brief intro to Dataflow

Image credit Google

Dataflow is

A fully-managed cloud service and programming model for batch and streaming big data processing.

from Google. It is a unified programming model (recently open sourced as Apache Beam) and a managed service for creating ETL, streaming and batch jobs. It’s also seamlessly integrated with other Google Cloud services like Cloud Storage, Pub/Sub, Datastore, BigTable and BigQuery. The combination of automatic resource management, auto-scaling and the integration with the rest of Google Cloud makes it a very attractive way to run data pipelines.

So how do we use it?

Here at Unacast we receive large amounts of data, through both files and our APIs, that we need to filter, convert and pass on to storage in e.g. BigQuery. So being able to create both batch (files) and stream (APIs) based data pipelines using one DSL, running on our existing Google Cloud infrastructure, was a big win. As Ken wrote in his post on GCP, we try to use it every time we need to process a non-trivial amount of data or just need a continuously running worker. Ken also mentioned in that post that we found the Dataflow Java SDK less than ideal for defining data pipelines in code. The piping of transformations (pure functions) felt like something better represented in a proper functional language. We had a brief look at the Scala-based Scio by Spotify (which has also been donated to Apache Beam, by the way). It looks promising, but we felt that its DSL diverged too much from the “native” Java/Beam one.

Next on our list was Datasplash, a thin Clojure wrapper around the Java SDK with a Clojuresque approach to pipeline definitions, using concepts such as threading, map and filter mixed with regular Clojure functions. What’s not to like? So we went with Datasplash and have really enjoyed using it in several of our data pipeline projects. Since the Datasplash source is quite extensible and relatively easy to get a grasp of, we have even contributed a few enhancements and bugfixes to the project.

And in the blue corner, Datasplash!

It’s time to see how Datasplash performs in the ring, and to showcase that I’ve chosen to reimplement the StreamingWordExtract example from the Dataflow documentation. A Dataflow-off, so to speak.

The example pipeline reads lines of text from a PubSub topic, splits each line into individual words, capitalizes those words, and writes the output to a BigQuery table

Here’s how the code looks in its entirety, and below I’ll talk through some of the highlights, specifically the pipeline composition.
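A minimal sketch of such a pipeline is shown below. The helper names, topic, table and option keys are illustrative rather than the exact original source, and the precise Datasplash namespaces and signatures should be double-checked against the Datasplash documentation.

```clojure
;; Sketch only: helper names, topic/table names and option keys are
;; illustrative; verify the exact Datasplash namespaces/signatures
;; against the Datasplash documentation.
(ns streaming-word-extract.core
  (:require [clojure.string :as str]
            [datasplash.api :as ds]
            [datasplash.bq :as bq]
            [datasplash.pubsub :as ps])
  (:gen-class))

(defn extract-words
  "Splits a line of text into individual words."
  [line]
  (remove empty? (str/split line #"[^a-zA-Z']+")))

(defn ->row
  "Wraps a single word in a simple row map for BigQuery."
  [word]
  {:string_field word})

(defn apply-transforms-to-pipeline
  "Threads the pipeline through each transformation; the result of one
   step becomes the last argument of the next."
  [pipeline {:keys [input-topic output-table]}]
  (->> pipeline
       (ps/read-from-pubsub input-topic {:name "read-from-pubsub"})
       (ds/mapcat extract-words {:name "extract-words"})
       (ds/map str/upper-case {:name "uppercase"})
       (ds/map ->row {:name "to-bq-row"})
       (bq/write-bq-table output-table
                          {:name "write-to-bigquery"
                           ;; schema format is an assumption, check the docs
                           :schema [{:name "string_field" :type "STRING"}]})))

(defn -main [& args]
  (let [pipeline (ds/make-pipeline args)]
    (apply-transforms-to-pipeline
     pipeline
     {:input-topic  "projects/my-project/topics/words"
      :output-table "my-project:my_dataset.extracted_words"})
    ;; Running the pipeline is a separate, side-effecting step.
    (ds/run-pipeline pipeline)))
```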

First we have to create a pipeline instance, which can in theory be used several times to create parallel pipelines.

Then we apply the different transformation functions with the pipeline as the first argument. Notice that the pipeline has to be run in a separate step, passing the pipeline instance as an argument. This isn’t very functional, but it’s because of the underlying Java SDK.

Inside apply-transforms-to-pipeline we utilize the Threading Macro to start passing the pipeline as the last argument to the read-from-pubsub transformation. The Threading Macro will then pass the result of that transformation as the last argument of the next one, and so on and so forth.

Here we see the actual processing of the data. For each message from PubSub we extract words (and flatten those lists with mapcat), uppercase each word and add them to a simple row json object. Notice the different ways we pass functions to map/mapcat.

Last, but not least we write the results as separate rows to the given BigQuery table.

And that’s it, really! There’s no ceremony needed just to apply a simple, pure function.

Here’s a quick look at the graphical representation of the pipeline in the Dataflow UI.

This is the Dataflow UI view of the pipeline. 27.770 words have been added to BigQuery

Conclusion

To summarize, building Dataflow pipelines in Clojure using Datasplash has been a pleasant and exciting experience. I would like to emphasize a couple of things that I think have turned out to be extra valuable.

  • The code is mostly well-known Clojure constructs, and the Datasplash-specific code tries to use the same semantics, like ds/map and ds/filter.
  • Having a REPL at hand in the editor to test small snippets and functions is very underrated; I’ve found myself using it all the time.
  • Setting up aliases to run different pipelines (locally and in the ☁️) with different arguments via Leiningen has also been really handy when testing a pipeline during development; see the sketch just after this list.
  • The conciseness and overall feeling of “correctness” when working in an immutable, functional Lisp is also something I’ve come to love even more now that I’ve tried it in a full-fledged project.
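As a rough illustration of the aliases, something along these lines makes running the pipeline locally or in the cloud a one-liner. The namespace, project and bucket names are made up, and the runner names are from the 1.x-era Dataflow SDK.

```clojure
;; project.clj (excerpt): aliases for running the same pipeline locally
;; and on Dataflow. Namespace, project and bucket names are illustrative.
:aliases {"pipeline-local" ["run" "-m" "streaming-word-extract.core"
                            "--runner=DirectPipelineRunner"]
          "pipeline-cloud" ["run" "-m" "streaming-word-extract.core"
                            "--runner=DataflowPipelineRunner"
                            "--project=my-project"
                            "--stagingLocation=gs://my-bucket/staging"]}
```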

First, let me be frank and announce that we’ve been on the Google Cloud Platform (GCP) for a little more than a year, and that we’ve not yet been working full time with GCP for a year. Nonetheless, we feel ready to share some insights and thoughts on building microservices on GCP.

In this blog post we will focus on why we at Unacast chose GCP as our cloud provider, why it’s still a good fit for us, and a few lessons learned from using the platform for about a year.

Why Google Cloud Platform

Today the natural choice of cloud is Amazon Web Services (AWS), and with good reason. AWS pioneered many of the great cloud services out there: S3, EC2, Lambda, etc. It has, as far as we know, the longest list of cloud components you can use to build your platform [1]. And it’s battle-tested at scale by Amazon, Netflix, Airbnb and many others. So why did we choose GCP instead?

Actually, we didn’t. Unacast started out building its platform on a combination of Heroku and AWS. After some fumbling, sessions of banging our heads against the wall, and some help from a consultant [2], we decided to try GCP. With some effort and a lot of luck it turned out to be the right platform for us. The reason for testing GCP was twofold: 1) GCP’s big data capabilities, and 2) it helps us minimise time spent on operations.

It is no secret that Google knows how to handle large amounts of data, and many of the tools provided on GCP are designed for storing and processing big data. Dataflow, Pubsub, BigQuery, Datastore, and BigTable are really powerful tools for data management. GCP also has great environments for running services: App Engine, Container Engine and Dataflow help us maximise the time used to build business-critical features fast, rather than spending developer time on keeping the lights on.

The Good Parts

Pubsub

Pubsub is a distributed publish/subscribe queue. It can be used to propagate messages between services, but at Unacast we’ve mostly used it to regulate back pressure. It works great in scenarios where you want to buffer traffic/load between front-line API endpoints and downstream services, and we use this approach when designing write-centric APIs that can handle large, unpredictable spikes of requests. A sketch of the pattern is shown below. NB! Pubsub doesn’t provide any ordering guarantees, and it doesn’t provide any retention unless a subscription is created for the topic.
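As a rough sketch of the pattern, the front-line endpoint below just publishes incoming events and returns, while a worker consumes them at its own pace. It uses the Go Pub/Sub client; the project, topic and subscription names are made up, and in practice the publisher and the worker would be separate services rather than one program.

```go
// Sketch of using Pub/Sub as a buffer between a front-line API and a
// downstream worker. Project, topic and subscription names are made up.
package main

import (
	"context"
	"io/ioutil"
	"log"
	"net/http"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()
	client, err := pubsub.NewClient(ctx, "my-project")
	if err != nil {
		log.Fatal(err)
	}
	topic := client.Topic("ingest")

	// Front-line endpoint: accept the write, publish it, return immediately.
	http.HandleFunc("/events", func(w http.ResponseWriter, r *http.Request) {
		body, _ := ioutil.ReadAll(r.Body)
		topic.Publish(ctx, &pubsub.Message{Data: body})
		w.WriteHeader(http.StatusAccepted)
	})

	// Downstream worker: pull and process messages at its own pace.
	go func() {
		sub := client.Subscription("ingest-worker")
		if err := sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
			// ... process and store the event ...
			m.Ack()
		}); err != nil {
			log.Fatal(err)
		}
	}()

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```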

BigQuery

BigQuery is a great database for building analytics tools. Storage is cheap, and you only pay for the data you store and the queries you run. BigQuery is great because of its out-of-the-box capability for querying large amounts of data really fast. To put things into perspective, in our extensive usage we’ve seen that BigQuery can query 100GB of data just as fast as 1GB (and probably even more). One thing to remember when using BigQuery is that it’s an append-only database, meaning that you cannot delete single rows, only tables [3]. In other words, where Cassandra has row-level TTLs, BigQuery ships with table-level TTLs. So implementing data retention has to be done differently and may not be straightforward if you’re coming from a standard SQL background.

App Engine

App Engine is a scalable runtime environment for Java designed to let you scale without having to worry about operations. App Engine is great if you need highly scalable APIs, but you can only use Java 7 and libraries whitelisted by Google. Because of these restrictions we’ve got mixed feelings about App Engine. Getting scale without worrying about operations is great, but on the other hand the development process is a lot more complex. We would use App Engine where the API doesn’t need much logic or many external dependencies, like an API gateway, but for more complex services we would use Container Engine instead.

Container Engine

Container Engine is GCP’s answer for hosting Linux containers. It’s powered by Kubernetes, which is, as of writing, the de facto standard for scheduling and running Linux containers in production. On GCP we view Container Engine as the middle ground between Compute Engine and App Engine, where we believe you get the best tradeoff between operational overhead and flexibility. With Kubernetes you can do interesting things such as bundling databases or other services together to increase performance, which is impossible in App Engine. However, you have to worry about updating your Kubernetes cluster and keeping the nodes healthy and happy, which adds some operational complexity and time spent not adding features.

Dataflow

Dataflow all the things! Dataflow is GCP’s next-generation MapReduce, with both streaming and batch capabilities. Dataflow is so good that we try to use it every time we need to process a non-trivial amount of data or just need a continuously running worker. As of writing, Dataflow only has an SDK for Java, and Java isn’t necessarily the natural language for defining and working with data pipelines. Needless to say, we started to look for non-official SDKs that could suit our needs. We found Datasplash, a Dataflow wrapper written in Clojure. Clojure’s syntax and functional approach work very well for defining data processing pipelines. We’re currently pretty happy with Datasplash/Clojure; at the time of writing we’re running Dataflow pipelines written in both Java and Clojure. Time will show if this is the right tool. A caveat with Dataflow on GCP is that it uses Google Compute Engine instances under the hood, which means the quota limits for virtual machines can be a showstopper. Make sure to always have large enough quotas while you’re evolving your platform.

The not so good

Stackdriver

Stackdriver’s monitors suck. Monitoring is hard from the get-go: it’s hard to know what to monitor and to set up really good monitors. If a monitor is too verbose and sensitive, nobody cares when an alert is triggered; if it’s not sensitive enough, errors in production will go unnoticed. In our opinion, setting up custom metrics in Stackdriver is a horrible experience, and that is why we use Datadog for monitoring services and setting up dashboards. To be fair, Stackdriver has some good components too, especially if you’re using App Engine. Stackdriver’s Trace functionality is awesome for tracking down what is slow in your application, and the logging module is easy to use and query for interesting info. Our experience is that these two modules work really well out of the box.

CloudSQL

Cloud SQL is a great service for running a SQL database, with automatic backups, easy migration to better hardware, and easy setup and scaling of read-only replicas. But the SQL engine behind it is MySQL. We have much respect for what MySQL achieved back in the day, but those days are over. Still, because of the ease of use, infrastructure-wise, we’ll probably keep using Cloud SQL in the near future. However, we think we should always consider using Postgres through compose.io, or even AWS AuroraDB, before settling for Cloud SQL.

Closing notes

We haven’t been able to test all the features of GCP, and some of them look really promising. We’re really excited about the machine learning module, and we hope they’ll support Endpoints for services other than App Engine soon.

Choosing the right cloud platform isn’t straightforward. It’s hard to know if the services provided by the platform at hand are the right services. We at Unacast have learned from first-hand experience that more isn’t necessarily better, and that your first choice and instinct might not always be correct. GCP was, and still is, the right choice for us. And after Spotify announced that they were moving their infrastructure to GCP, we’re more sure than ever that we chose the right cloud platform.

Footnotes

1. Everything is a platform these days.
2. Not all consultants are evil.
3. Not entirely true: deletes are expensive, not impossible.

Introduction

This post is best read with some prior knowledge of Kubernetes. You should be familiar with concepts like pods, services, secrets and deployments. I’m also assuming you’ve worked with kubectl before. Enjoy!

At Unacast we spend a lot of time creating web services, usually in the form of JSON APIs, and we’ve spent a lot of time designing, experimenting and researching to get them right. We’ve shared what we’ve learned along the way. A lot of those posts have been theoretical, but in today’s post we’re getting our hands dirty: we’re going to implement an API that scales when subjected to a massive amount of read requests, and we’re doing it using Redis. All the examples will be run using Kubernetes.

We assume that the logic to keep the data in Redis updated has been implemented somewhere else, and we also assume that the rate of adding or updating data (writes) is low. In other words, we expect multiple orders of magnitude more reads than writes. Thus, our tests will only contain read requests and no writes.

All code snippets included in this post can be found in its full form here.

Redis

Before we get started we need to talk about Redis. The Redis team describes Redis quite well on their homepage:

Redis is an open source (BSD licensed), in-memory data structure store, used as database, cache and message broker.

In my own words, Redis is a really fast and scalable in-memory database with a small footprint on both memory and CPU. What it lacks in features it makes up for in speed and ease of use. Redis isn’t a relational store where you use SQL to query; instead it ships with multiple commands for manipulating different types of data structures.

Redis is a really powerful tool and should be part of every developer’s toolkit. If Redis isn’t the best fit for you, I still recommend investing time in learning how and when to use an in-memory database.

Architectures

Redis can be used in multiple ways, and each approach has different trade-offs and characteristics. We’ll be looking at two different models and test how they scale. The two models we’re testing are:

  1. Central Redis: one Redis used by multiple instances of the API.

  2. Redis as a sidecar container: Run a read-only slave instance of Redis for every instance of the API.

The performance tests will be run against the same service using the same endpoint for both models. I’ve extracted the endpoint from main.go and included it below.
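A minimal sketch of what the endpoint looks like is included here. It uses the redigo client, and the environment variable and overall structure are illustrative rather than a verbatim copy of main.go.

```go
// Minimal sketch of the read endpoint: it fetches the value stored
// under "known-key" from Redis and returns it. The real main.go may use
// a different client and structure.
package main

import (
	"log"
	"net/http"
	"os"

	"github.com/garyburd/redigo/redis"
)

var pool = &redis.Pool{
	MaxIdle: 10,
	Dial: func() (redis.Conn, error) {
		// REDIS_URL (name is an assumption) points at the central instance,
		// or at localhost:6379 when a Redis sidecar runs in the same pod.
		return redis.Dial("tcp", os.Getenv("REDIS_URL"))
	},
}

func handler(w http.ResponseWriter, r *http.Request) {
	conn := pool.Get()
	defer conn.Close()

	value, err := redis.String(conn.Do("GET", "known-key"))
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Write([]byte(value))
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```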

The snippet does one simple thing: it asks Redis for a string that is stored under the key known-key. From this simple endpoint we’ll look at how Redis behaves under pressure and whether it scales. We expect different behavior from the two architectural approaches. This example might seem contrived; a real-world example similar to this approach is verification of API tokens. I agree that this might not be the best way to do token verification, but it’s a very simple design. For a more elegant solution you should consider JSON Web Tokens.

Central Redis

As mentioned above, a central Redis architecture is when we use one Redis instance for all API instances. In our case these API instances are replicas of the same API. This is not a restriction, but it is a recommended architectural principle not to share databases between different services.

At Unacast we believe in not hosting our own databases; we’d rather focus on building stuff for our core business and not worry about operations. Normally we use Google Cloud Platform (GCP) for hosting databases, but hosted Redis isn’t publicly available on GCP, so we decided to use compose.io’s Redis hosting.

Setting up the service using a single Redis is pretty straightforward with compose.io, and compose.io has some great guides on how to get started with their Redis hosting as well. The manifest for running a Kubernetes deployment and service is added below:
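This is a sketch of such a manifest. The image name, labels, replica count and the compose.io address are placeholders, and the apiVersions reflect current Kubernetes rather than the versions we originally used.

```yaml
# Deployment and Service for the API using a single central Redis.
# Image name, labels, replica count and the REDIS_URL value are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-read-api
spec:
  replicas: 12
  selector:
    matchLabels:
      app: redis-read-api
  template:
    metadata:
      labels:
        app: redis-read-api
    spec:
      containers:
        - name: api
          image: gcr.io/my-project/redis-read-api:latest
          ports:
            - containerPort: 8080
          env:
            - name: REDIS_URL
              value: "my-instance.composedb.com:6379"
---
apiVersion: v1
kind: Service
metadata:
  name: redis-read-api
spec:
  type: LoadBalancer
  selector:
    app: redis-read-api
  ports:
    - port: 80
      targetPort: 8080
```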

Redis as a sidecar container

Before we describe how to set up Redis as a sidecar container, we have to give a short description of what a sidecar is. The sole responsibility of a sidecar container is to support another container, and in this case the job of the Redis sidecar container is to support the API. In Kubernetes we solve this by bundling the API and a Redis container inside one pod. For those of us who don’t remember what a pod is, here is an excerpt from the Kubernetes documentation:

pods are the smallest deployable units of computing that can be created and managed in Kubernetes.

This means that if a Redis container is bundled with an API container, they’ll always be deployed together on the same machine, sharing the same IP and port range. So don’t try to bundle two services using the same ports; it simply won’t work.

The following shows how to bundle the two containers together inside a Kubernetes pod.
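Here is a sketch of such a deployment: the API container talks to Redis on localhost, while the Redis container is started as a slave of the central master. Image names and the master address are placeholders, and the SSL tunnel mentioned below is left out for brevity.

```yaml
# Deployment bundling the API with a Redis slave in the same pod.
# Image names, replica count and the master address are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-sidecar-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: redis-sidecar-api
  template:
    metadata:
      labels:
        app: redis-sidecar-api
    spec:
      containers:
        - name: api
          image: gcr.io/my-project/redis-read-api:latest
          ports:
            - containerPort: 8080
          env:
            - name: REDIS_URL
              value: "localhost:6379"   # the sidecar in the same pod
        - name: redis
          image: redis:3.2
          # Replicate from the central master; host and port are placeholders.
          args: ["--slaveof", "my-instance.composedb.com", "6379"]
          ports:
            - containerPort: 6379
```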

By deploying this we’ll have a Redis instance for each pod replica, in this specific case three Redis instances. That means we need some mechanism for keeping these instances in sync. Implementing sync functionality is horrible to do on your own [citation needed]. Luckily, Redis can be run in master-slave mode, and we have a stable Redis instance hosted by compose.io. By configuring every Redis sidecar instance as a slave of the master run by compose.io, we can just update the master and not worry about propagating the data to the slaves. Our unscientific tests showed us that the Redis master propagates data to the slaves really fast.

NB! A caveat is that you have to set up an SSL tunnel to compose.io to be able to successfully pair the sidecar instances with compose.io’s master instance.

We expect this architecture to scale better than the central Redis approach.

Results

All the tests were run on a Kubernetes cluster with:

  • 12 instances of g1-small virtual machines
  • 12 pod replicas

We used vegeta distributed on five n1-standard-4 virtual machines to run the performance tests.

The graphs below are the results from the performance tests. The results focus on success rate and response times.

Central Redis

Redis as a sidecar container

Conclusion

As expected, we see that the sidecar container approach scales better than the central approach. We observe that the central approach is able to scale to about 15 000 reads/second, while the sidecar approach can handle over 60 000 reads/second without any problems. Remember that these tests were run on the same hardware, and that only a minor change in the API’s architecture resulted in a major performance gain.

Closing Notes

One last thing: remember that this pattern isn’t specific to Redis; multiple read-only slaves of another database would behave in much the same manner as multiple read-only Redis slaves. We prefer Redis because of its speed, small footprint and ease of use.

We haven’t been running this in production for long, so we don’t have any operational experience to share yet, but we intend to share that in the future.

Further work

This post didn’t cover whether the Redis-as-a-sidecar-container approach scales linearly as more CPU is added; that is outside the scope of this post, but our internal testing has shown it to be true. You’re welcome to test this yourself.

At Unacast we’re obsessed with monitoring; one of our mantras is “monitoring over testing”. Notice that we haven’t added any monitoring for the Redis instances inside a pod. However, if you’re using Datadog, as we do, it’s fairly straightforward to add monitoring by bundling a dd-agent as another sidecar container inside the same pod.

Want more?

If you’re interested in reading more about API design I can recommend the following posts from our archive:

Could we change the paradigm of how we build HTTP REST APIs such that great API documentation is a consequence instead of a chore?

Throughout this text API(s) refers to HTTP APIs

Great API documentation makes integrating with an API a breeze. Not only because you can read it and implement a good client yourself, but more because great API documentation would let us generate awesome clients, so that we don’t have to spend any time on that at all. The downside is that API documentation is rarely great: often it doesn’t exist at all, is missing crucial information, or is just blatantly wrong. So how could we fix this and at the same time make it an enjoyable experience?

As developers, documentation is often something we write after the fact. On the occasional sunny day we might actually write documentation at the same time as the code. Unfortunately, the next day we might change that piece of code again and of course forget to change the documentation along with it. In my experience this is just as true when writing application code as when implementing APIs. The issue with not writing API documentation until development is completed is that it stands as an extra barrier before releasing. Unsurprisingly, the documentation is likely to be hurried, both because it blocks the release and because the developer finds documenting tedious. Using annotation-based API documentation in the code itself, as opposed to defining the API documentation in a separate file, helps, but is in our experience not sufficient to mitigate the issue of API documentation being an artifact updated only after the technical implementation is completed.

The opposite approach, design-first, is an approach to building APIs where the documentation is written first, and the implementation is then shaped after that piece of documentation. This approach has been covered by many thought leaders over the last couple of years: Programmable Web, API Evangelist and InfoQ. The upside of this approach is that you get a chance to vet your API design without writing a line of code. Additionally, it acts as a natural task specification for the developers implementing the design. The main downside is that technical challenges with the API design might only surface very late in the process, making it more costly to amend them gracefully. Furthermore, developers might feel bound by the specification and find the process too rigid, or in worse cases decide to change the design while neglecting the already existing API document.

Many of the popular frameworks used to build web applications and APIs today rely heavily on convention to increase developer speed. The frameworks themselves define by default what error codes are returned and what headers are used. Even though the conventions are convenient, they tend to make writing accurate API documentation challenging, because the person documenting the API has to remember to document all the less-than-visible default behaviors. In our experience, being privy to the ‘complete’ set of these behaviors for any framework is a daunting challenge.

Among the issues mentioned we see two classes: those related to making changes twice (code and documentation), and those that are a result of including the conventions used by our frameworks in our API documentation. As we understand it, to accommodate either or both of these issues, API documentation must be a first-class citizen in the frameworks or languages we write APIs in. By first-class citizen we mean that the API design cannot change without the documentation changing, and vice versa. Additionally, the framework should reflect all of its API design defaults in the API documentation. With this in place, the API documentation basically works as a contract.

An example of a framework that does treat API documentation as a first-class citizen is the Go framework goa. goa makes a clean separation between the API design interface and your business logic. The API design interface is defined using a DSL, and from that DSL the API documentation, data models and the classic controller code are generated. As long as you do not alter the generated code, which can easily be enforced on a build server, your API design and API documentation are always in sync. This characteristic makes uncovering breaks to the current API contract as simple as diffing the new and old API documentation.
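To give a feel for it, here is a small, hypothetical design written against goa’s v1 apidsl. The resource and attribute names are made up, and the real DSL offers far more than what is shown here.

```go
// A small, hypothetical goa (v1) design. The API, media type and resource
// names are illustrative only.
package design

import (
	. "github.com/goadesign/goa/design"
	. "github.com/goadesign/goa/design/apidsl"
)

var _ = API("beacon", func() {
	Title("Beacon API")
	Scheme("http")
	Host("localhost:8080")
})

var BeaconMedia = MediaType("application/vnd.beacon+json", func() {
	Attributes(func() {
		Attribute("id", String, "Unique beacon ID")
		Attribute("name", String, "Human readable name")
		Required("id", "name")
	})
	View("default", func() {
		Attribute("id")
		Attribute("name")
	})
})

var _ = Resource("beacon", func() {
	BasePath("/beacons")
	Action("show", func() {
		Description("Retrieve a beacon by ID")
		Routing(GET("/:beaconID"))
		Params(func() {
			Param("beaconID", String, "Beacon ID")
		})
		Response(OK, BeaconMedia)
		Response(NotFound)
	})
})
```

Running goagen against a design like this produces the Swagger document, the data types and the controller scaffolding, so a change to the design is automatically a change to the documentation.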