Summary
CouchDB is a distributed document database built for scale and ease of operation. With a built-in synchronization protocol and an HTTP interface, it has become popular as a backend for web and mobile applications. Created 15 years ago, it has accrued some technical debt, which is being addressed with a refactored architecture based on FoundationDB. In this episode Adam Kocoloski shares the history of the project, how it works under the hood, and how the new design will improve the project for our new era of computation. This was an interesting conversation about the challenges of maintaining a large and mission-critical project and the work being done to evolve it.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, a 40Gbit public network, fast object storage, and a brand new managed Kubernetes platform, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. And for your machine learning workloads, they’ve got dedicated CPU and GPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
- Are you spending too much time maintaining your data pipeline? Snowplow empowers your business with a real-time event data pipeline running in your own cloud account without the hassle of maintenance. Snowplow takes care of everything from installing your pipeline in a couple of hours to upgrading and autoscaling so you can focus on your exciting data projects. Your team will get the most complete, accurate and ready-to-use behavioral web and mobile data, delivered into your data warehouse, data lake and real-time streams. Go to dataengineeringpodcast.com/snowplow today to find out why more than 600,000 websites run Snowplow. Set up a demo and mention you’re a listener for a special offer!
- Setting up and managing a data warehouse for your business analytics is a huge task. Integrating real-time data makes it even more challenging, but the insights you obtain can make or break your business growth. You deserve a data warehouse engine that outperforms the demands of your customers and simplifies your operations at a fraction of the time and cost that you might expect. You deserve ClickHouse, the open-source analytical database that deploys and scales wherever and whenever you want it to and turns data into actionable insights. And Altinity, the leading software and service provider for ClickHouse, is on a mission to help data engineers and DevOps managers tame their operational analytics. Go to dataengineeringpodcast.com/altinity for a free consultation to find out how they can help you today.
- You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Corinium Global Intelligence, ODSC, and Data Council. Upcoming events include the Software Architecture Conference in NYC, Strata Data in San Jose, and PyCon US in Pittsburgh. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
- Your host is Tobias Macey and today I’m interviewing Adam Kocoloski about CouchDB and the work being done to migrate the storage layer to FoundationDB
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by describing what CouchDB is?
- How did you get involved in the CouchDB project and what is your current role in the community?
- What are the use cases that it is well suited for?
- Can you share some of the history of CouchDB and its role in the NoSQL movement?
- How is CouchDB currently architected and how has it evolved since it was first introduced?
- What have been the benefits and challenges of Erlang as the runtime for CouchDB?
- How is the current storage engine implemented and what are its shortcomings?
- What problems are you trying to solve by replatforming on a new storage layer?
- What were the selection criteria for the new storage engine and how did you structure the decision making process?
- What was the motivation for choosing FoundationDB as opposed to other options such as RocksDB, LevelDB, etc.?
- How is the adoption of FoundationDB going to impact the overall architecture and implementation of CouchDB?
- How will the use of FoundationDB impact the way that the current capabilities are implemented, such as data replication?
- What will the migration path be for people running an existing installation?
- What are some of the biggest challenges that you are facing in rearchitecting the codebase?
- What new capabilities will the FoundationDB storage layer enable?
- What are some of the most interesting/unexpected/innovative ways that you have seen CouchDB used?
- What new capabilities or use cases do you anticipate once this migration is complete?
- What are some of the most interesting/unexpected/challenging lessons that you have learned while working with the CouchDB project and community?
- What is in store for the future of CouchDB?
Contact Info
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
- Apache CouchDB
- FoundationDB
- IBM
- Cloudant
- Experimental Particle Physics
- FPGA == Field Programmable Gate Array
- Apache Software Foundation
- CRDT == Conflict-free Replicated Data Type
- Erlang
- Riak
- RabbitMQ
- Heisenbug
- Kubernetes
- Property Based Testing
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Hello, and welcome to the Data Engineering Podcast, the show about modern data management. When you're ready to build your next pipeline or want to test out the projects you hear about on the show, you'll need somewhere to deploy them. So check out our friends at Linode. With 200 gigabit private networking, scalable shared block storage, a 40 gigabit public network, fast object storage, and a brand new managed Kubernetes platform, you've got everything you need to run a fast, reliable, and bulletproof data platform. And for your machine learning workloads, they've got dedicated CPU and GPU instances. Go to dataengineeringpodcast.com/linode, that's L-I-N-O-D-E, today to get a $20 credit and launch a new server in under a minute. And don't forget to thank them for their continued support of this show. And setting up and managing a data warehouse for your business analytics is a huge task.
Integrating real-time data makes it even more challenging, but the insights you obtain can make or break your business growth. You deserve a data warehouse engine that outperforms the demands of your customers and simplifies your operations at a fraction of the time and cost you might expect. You deserve ClickHouse, the open source analytical database that deploys and scales wherever and whenever you want it to and turns data into actionable insights. And Altinity, the leading software and service provider for ClickHouse, is on a mission to help data engineers and DevOps managers tame their operational analytics. Go to dataengineeringpodcast.com/altinity for a free consultation to find out how they can help you today.
[00:01:44] Unknown:
Are you spending too much time maintaining your data pipeline? Snowplow empowers your business with a real-time event data pipeline running in your own cloud account without the hassle of maintenance. Snowplow takes care of everything from installing your pipeline in a couple of hours to upgrading and autoscaling so you can focus on your exciting data projects. Your team will get the most complete, accurate, and ready-to-use behavioral web and mobile data delivered into your data warehouse, data lake, and real-time data streams.
[00:02:10] Unknown:
Go to dataengineeringpodcast.com/snowplow
[00:02:14] Unknown:
today to find out why more than 600,000 websites run Snowplow. Set up a demo and mention you're a listener for a special offer. Your host is Tobias Macey. And today, I'm interviewing Adam Kocoloski about CouchDB and the work being done to migrate the storage layer to FoundationDB. So, Adam, can you start by
[00:02:30] Unknown:
introducing yourself? Sure, Tobias. Thanks for having me on. My name is Adam Kocoloski. I work at IBM, and I'm one of the members of the project management committee for Apache CouchDB. And do you remember how you first got involved in the area of data management? Yeah. I was really on the sort of practitioner side more than the vendor side. In graduate school, I was doing work in experimental particle physics, and that is a very data intensive exercise. You're looking for needles in very large haystacks, collecting tons of telemetry from these very large detectors about collisions between fundamental particles that are happening millions of times a second and looking for really rare signals.
And, you know, the practicalities of that involve a lot of the kind of analytics that would be familiar to the folks who do a lot of work in Spark and things of that nature. That was really my first introduction to data management. The database side of it came from kind of managing all the metadata about those datasets and the compute jobs that were processing them. And that's when I kinda got deeper into the way that people managed databases across universities, across national laboratories, across different regions of the globe, which is what got me interested in projects like CouchDB. And my understanding of things like particle physics is that
[00:03:45] Unknown:
part of the challenge of managing the volume of data is that you have to decide in real time which events are worth keeping at all?
[00:03:58] Unknown:
That's exactly right. That's one of the most contentious parts of any particle physics collaboration, is when you are making those online triggering decisions. There's several levels to it. Some of that triggering decision work has to happen in FPGAs and things that respond incredibly quickly. And then you go to a next level that has a little bit more time to make a decision. Maybe it can run on a commodity Linux host and do some processing. And then, you know, you ultimately get to a level where you say, yep. We're gonna accept this event. We're gonna write it to tape. We're gonna write it to disk. And that's when all the reconstruction efforts go in to see how well your online triggering actually captured events of interest.
[00:04:32] Unknown:
And you said that the challenges of handling the sharing of these datasets across different universities and organizations were part of what led you to CouchDB. So I'm wondering if you can just start by giving a bit of an introduction to how you got involved in the community and your current role.
[00:04:50] Unknown:
Sure. I got involved in CouchDB because some of my colleagues from that physics research group and I decided we should take a stab at starting a company together. We saw a lot of the new innovations that were happening in the world of data management. This was, you know, around 2007, when you had papers being published by the big web companies taking on traditional approaches to data management at scale. And we thought to ourselves, well, we're certainly embarking on some nontraditional applications of data management at scale in our own work, and we can foresee that other companies beyond the, you know, large scale web companies would be interested in this kind of approach as they become more data driven themselves. And so when we thought about trying to start a company, we zeroed in on this world of scalable distributed databases and said, okay, this seems like a place where we can really make an impact and where there doesn't seem to already be, you know, a leader in the market.
And we did a little bit of a review of projects that were out in the open source world to see if there were things that we ought to extend rather than start from scratch. And CouchDB at the time seemed to have a lot of the right mentality on its approach to data distribution, its approach to, you know, empowering Internet and web applications and its approach to scalability. It just seemed really well aligned with what we wanted to do with our company. And
[00:06:12] Unknown:
so if you can now give a bit more detail on the CouchDB project itself and a bit about what it is, some of the main use cases that it's well suited for, and some of the ways that people are using it? Absolutely.
[00:06:25] Unknown:
CouchDB is a project maintained by the Apache Software Foundation and has been for the past decade. It's imminently issuing its third major release, the 3.0 release, which I think is a culmination of several years' worth of work optimizing, hardening, and really, you know, rounding out the support for some of the key use cases that it has served really well over the past several years. We find CouchDB to be an excellent general purpose database for building web and mobile applications. You know, in many of these projects, 80% of the functionality can be covered by a large variety of databases, and CouchDB is certainly one of those databases, so we don't see reasons, you know, to disqualify it from many types of, you know, sort of web applications that are being built today. Where it has a particular strength, though, is in its mechanisms for active-active data replication across a wide variety of topologies, you know, systems that might exist in an on-premises data center and another one in a cloud, deployments across different clouds, systems that want to maintain a local offline, you know, editable repository on a mobile device or a tablet of some kind.
Those kinds of situations where the data lives in multiple places and needs to be synchronized
[00:07:46] Unknown:
are ones that CouchDB finds itself especially well suited for. Yeah. And I know that there are also projects like PouchDB and Couchbase Lite for being able to use the same type of interface and take advantage of the synchronization capabilities in things like browsers or on mobile devices. And so that was part of what I was initially attracted to when I first found out about CouchDB, not having to deal with writing my own synchronization primitives to be able to take advantage of that same capability.
[00:08:17] Unknown:
That's exactly right. Because that replication protocol is something that is, you know, running as JSON documents over an HTTP interface, the sort of bar to implementing a client that speaks that protocol is lower than you might imagine, and that's where the PouchDB project got up and running. I think it provides a very nice option for folks who are running in the browser like that, or running in, you know, sort of a browser-like environment on a mobile device. And, you know, I think that's something that speaks to the power of, like, open standards, open APIs, and Internet protocols for fostering innovation at that level.
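To make the shape of that protocol concrete, here is a minimal sketch of kicking off a replication through CouchDB's HTTP interface from Python; the server addresses, database names, and credentials are hypothetical:

```python
# Minimal sketch: triggering CouchDB's HTTP-based replication.
# Hostnames, database names, and credentials below are hypothetical.
import requests

resp = requests.post(
    "http://localhost:5984/_replicate",
    json={
        "source": "http://server-a:5984/tasks",  # remote source database
        "target": "tasks",                       # local target database
        "create_target": True,                   # create the target if missing
        "continuous": True,                      # keep syncing as edits arrive
    },
    auth=("admin", "password"),
)
resp.raise_for_status()
print(resp.json())
```

Because the whole exchange is just JSON over HTTP, any client that speaks this protocol, like PouchDB in a browser, can participate in the same replication mesh.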
[00:08:54] Unknown:
Yeah. It's definitely interesting how having a standard interface for a given application or particular use case frees people up to spend their energy on much more valuable enterprises, and the types of projects and products that come out as a result of that. In some cases, having a standardized API can be seen as a bit of a constraint that limits the capabilities that you have, but it's amazing the types of innovation that people can come up with even given those types of constraints.
[00:09:26] Unknown:
Yeah. I'm encouraged to hear you point out the challenges of implementing one's own synchronization protocol. When you really get into the details of active-active synchronization of two different repositories that are being edited simultaneously, it becomes a pretty fun project pretty quickly. And I would say there's still a ton of active research in the field, looking at, you know, data structures that can intelligently merge edits from different environments, data structures that can intelligently track lineage between these different environments. So, you know, I guess the phrase can of worms comes to mind. Right? Yeah. Definitely. Even in your own sync protocol.
[00:10:04] Unknown:
Yeah. The CRDT, or conflict-free replicated, I forget exactly what it stands for, but data types, yeah? Yes. Exactly. That's definitely an interesting area of research as well, and the types of use cases that that enables. But as you said, it's challenging, and that's part of why it's not being used quite so ubiquitously as one might think. And in terms of the history of CouchDB, you mentioned that you first started looking at it in around the 2007 time frame, and it was adopted by Apache in 2010. And that was in sort of the peak of the NoSQL movement, where people were trying to figure out different ways of having web scale or, you know, massively distributed data storage layers, and some of the aspects of things like ACID and transactions that they were willing to forego in order to be able to handle these scaling capabilities. But then in recent years, people have started going in the other direction of moving back towards relational and transactional interfaces because of different breakthroughs that have been made in some of the architecture and compute environments. And so I'm wondering if you can just talk through a bit of the history and the role that CouchDB played in that NoSQL movement and some of the ways that it has maintained relevancy and grown over the past several years? I think that's an excellent summary of the past 10 years in the world of distributed databases. You know, we
[00:11:26] Unknown:
sacrificed a lot in those early years in order to quickly achieve, you know, new heights in scalability, as it were, and over the past decade have progressively tried to recover some of those richer isolation semantics and transactional capabilities that truly are pretty powerful primitives for application developers to rely upon. In the first versions of CouchDB, you know, a system like the 1.x release would run on a single server. It intentionally did not support, like, atomicity across updates to multiple documents, because we knew that that would be a particularly thorny problem to tackle as we got towards native clustering in the 2.0 release. But it had a model that, you know, was providing good isolation between updates to individual documents on a single deployment. In the next major release, the 2.0 release, and this is something that continues in 3.0, we adopted an eventually consistent system for replicating, you know, documents and databases across shards, and used the same basic revision tracking mechanisms that are implemented in CouchDB's support for replication across instances as the way that we would converge on one view of the database in a CouchDB cluster.
And, you know, that reuse had plenty of benefits. It meant that the protocol we used, and, frankly, the implementation that we used, was very battle tested. But it also had its downsides. You know, the fact that these systems weren't executing any consensus protocol, they weren't providing any sort of rollback mechanism, meant that anytime you were concurrently issuing edits to a single document from multiple writers, you ran the possibility that these folks could race and that both could be accepted by different replicas within the cluster. So we certainly lived through a number of the downsides of sacrificing traditional levels of database isolation in the name of scalability.
And I think, you know, over the years, our users have understood the kinds of patterns that can be employed in application design in order to avoid that kind of contention. Nevertheless, the fact that they have to employ those kinds of patterns limits the ways in which they can design their applications to make the best use of the database. So I think we've had a front seat, I guess, at that movie of how, you know, isolation and scalability have been in tension with one another in the world of distributed databases over the past decade. And I think we are, as a result, deeply appreciative of the power that, you know, strong consistency and transactional support can provide to application developers.
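As a concrete illustration of the race described here: when two writers update the same document against different replicas, CouchDB keeps both revisions and marks the document as conflicted, and the application is expected to inspect and resolve the conflict over the HTTP API. A minimal sketch, with hypothetical server address, database, and document names:

```python
# Minimal sketch: detecting and resolving a conflicted CouchDB document.
# Server address, database name, and document ID are hypothetical.
import requests

BASE = "http://localhost:5984/tasks"

# Ask CouchDB to return any losing conflict revisions alongside the winner.
doc = requests.get(f"{BASE}/task-42", params={"conflicts": "true"}).json()

for losing_rev in doc.get("_conflicts", []):
    # Real application-specific merge logic would go here; this sketch
    # simply discards the losing revisions in favor of the current winner.
    requests.delete(f"{BASE}/task-42", params={"rev": losing_rev})
```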
[00:14:17] Unknown:
And this is probably getting a little deep in the weeds, but my understanding of the way that CouchDB was architected and the use cases that it was optimizing for was to prioritize write throughput. And one of the ways that it did that was by foregoing indexing on write and requiring those indexes to be built on read requests, which could sometimes lead to high latency in those read requests. So I'm curious what the current state of affairs is and
[00:14:46] Unknown:
some of the trade-offs that have been accepted in the name of high write throughput. Yeah. You've done your homework there for sure. CouchDB's materialized views have always been indexed on read, and that is very much a performance optimization. Because those indexes were defined in, you know, JavaScript functions that the user would upload, we needed an environment that could pipeline the indexing requests through a, you know, configured JavaScript process, and firing up one of those for every single concurrent write was something that made it harder to deliver high throughput. The trade-off being that those indexes were, you know, potentially inconsistent with one another. Each copy of a database shard would be indexing independently, and that introduced some, you know, potential for inconsistencies in the observability there. Like, you might hit one copy of a secondary index on one request and another copy on another request, and guaranteeing that those things had a sort of a view that was progressing through time was an extra challenge for us at the clustering layer. And I'm wondering if you can talk a bit more about the current architecture and implementation. The 3.0 release continues to bring with it the same fundamental architecture that the 2.0 release had. Right? So we still have databases that are split into shards, each shard replicated, documents are accepted by the cluster when a majority of the copies of those shards accept the write, indexes are built on read.
What 3.0 brings with it is, you know, a lot of things that ease the administration of a CouchDB cluster, by having auto-indexing daemons running in the background, by having automatic daemons that are, you know, vacuuming or compacting the database files based on an analysis of the workload. It brings with it a new, you know, full text search integration that gives people additional flexibility in defining their secondary indexes. It also brings with it one scalability improvement, which is essentially support for compound primary keys, where the user has a little bit of additional control over colocation of documents.
The way we have always answered view index queries in CouchDB has been a scatter-gather mechanism. Each of the replicas of a database shard builds its own copy of the index. And when you go to query that index, we have to go ask each of the shards what its contribution to that query is, because we don't know a priori which shards host relevant data for that query. What the new partitioning feature allows is for a user to say every document sharing this prefix, a device ID, for example, or a user account ID in the case of a, you know, SaaS application, should be colocated together.
And any query that specifies that first portion of the key in its query can be satisfied just by that one shard, rather than having to do this scatter-gather. So this is a much more scalable approach to indexing, and it's one that we find meets the needs of a lot of the different use cases for view queries in CouchDB. So that's a nice improvement coming in 3.0 as well.
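A minimal sketch of that partitioning feature as it appears in CouchDB 3.0, where the partition key is the portion of the document ID before the first colon; the server address, database name, and device IDs are hypothetical:

```python
# Minimal sketch: CouchDB 3.0 partitioned databases.
# Documents sharing an ID prefix ("<partition>:<doc id>") are colocated,
# so queries scoped to one partition can be answered by a single shard.
import requests

SERVER = "http://localhost:5984"

# Create a partitioned database (hypothetical name).
requests.put(f"{SERVER}/readings", params={"partitioned": "true"})

# The partition key is everything before the first colon in the _id.
requests.put(
    f"{SERVER}/readings/device-123:2020-02-01T00:00:00Z",
    json={"temperature": 21.4},
)

# A query scoped to one partition avoids the scatter-gather across shards.
rows = requests.get(f"{SERVER}/readings/_partition/device-123/_all_docs").json()
print(rows["rows"])
```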
[00:18:03] Unknown:
And CouchDB being a document oriented database, I've always been a little intrigued by the data modeling requirements of those types of storage engines, where at face value, when you're first starting off doing the hello world tutorial, you say, oh, of course, documents are easy. I just have everything in one record. But then as you start to scale and want to try to do more complex analysis, or try to figure out joins or how to be able to compose different records together, it starts to become much more challenging, and you have to get much more detailed and figure out how you're going to handle generating and then using these documents upfront, rather than what is proposed as the sweet spot of these, where you can just throw documents in and then figure out after the fact what you're going to do with them. And I'm curious what your experience has been in that regard.
[00:18:53] Unknown:
Yeah. I think that's a great perspective. I would say that the view engine has been a powerful assist to our users in that regard, because it gives you, like, the ability to define an arbitrary JavaScript function that's executed over these different documents. If you end up with an evolving schema, it's possible to address those things not by a large scale data migration, but by, you know, some extra handling in the view code. We've certainly had users do that at scale. It's also the kind of thing that can do some simplistic kinds of joins, by picking out the related attributes of documents that, you know, represent different classes of objects and pulling them together into one view; then you can issue a range request against that view to get a blog post and all of its comments in one query to the database. So the flexibility of the view system has been something that has given people the ability to recover from, you know, changes in the data model over time, right, without a large scale migration. And it's also given them, you know, kind of the escape hatch to be able to cover sort of unexpected requirements in the application layer without making fundamental changes to the model. But I would say, you know, a lot of our users have found that the trade-off of being able to get to market faster has been a worthwhile one. For many of these people, like, the time to market has been, you know, of the utmost importance.
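The blog-post-and-comments join mentioned a moment ago is the classic CouchDB view collation pattern; a minimal sketch, with hypothetical database, field, and document names (view functions are JavaScript stored as strings inside a design document):

```python
# Minimal sketch: a "join" via CouchDB view collation. One map function
# emits posts under [post_id, 0] and comments under [post_id, 1], so a
# single range query returns a post immediately followed by its comments.
import requests

BASE = "http://localhost:5984/blog"

design_doc = {
    "views": {
        "post_with_comments": {
            "map": """function (doc) {
                if (doc.type === 'post')    { emit([doc._id, 0], null); }
                if (doc.type === 'comment') { emit([doc.post_id, 1], null); }
            }"""
        }
    }
}
requests.put(f"{BASE}/_design/blog", json=design_doc)

# One range request fetches the post and all of its comments together.
rows = requests.get(
    f"{BASE}/_design/blog/_view/post_with_comments",
    params={
        "startkey": '["post-1"]',
        "endkey": '["post-1", {}]',
        "include_docs": "true",
    },
).json()
```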
We had a lot of gaming customers for quite some time, where, you know, they had a very predictable measurement of the revenue associated with launching additional games on the App Store and sort of turning their marketing machinery on it. And I can clearly remember conversations with them where they said, look. We know an evolution of the data model is really the right approach here, but when we do the math, it doesn't make financial sense for us to delay our release timeline in order to go through that remodeling of the data. So we'd rather just scale this thing out a little bit further and see how much longer we can make this last.
[00:20:59] Unknown:
Yeah, the old space-time trade-off. And another implementation detail of CouchDB that I'm intrigued by is the fact that it's written at least primarily in Erlang, which I also know Riak was written in, and a few other systems such as RabbitMQ. And I'm curious what you have found to be both the benefits and the challenges of that being the runtime for CouchDB.
[00:21:23] Unknown:
Yeah. Maybe challenges first. Erlang is not particularly well suited to high throughput number crunching. There are a lot of things that can be computationally intensive if done in pure Erlang. And so we've had to be fairly careful about what kinds of processing we do and be judicious about pushing things down into C code as needed in order to hit our throughput and latency targets. That's been the main downside. I guess the other downside is, like, the pool of developers from which one can hire is not as large as it might be in other languages. On the flip side, the people who know it tend to know it pretty darn well. And so, you know, there's a fairly high quality of developers in that community. Other benefits that we've seen, it's excellent for building cloud services, for building web services. There's a great degree of isolation.
You know, if a sort of rogue client process comes in and makes some crazy request, it can take down that TCP connection, but it's unlikely to have, you know, a bigger blast radius taking out other parts of the stack. That's something that has been a fundamental design principle of the Erlang system from its days powering telecommunications, and it still holds today. The other thing that I think has been a real boon for us is the operational visibility into the system. You know, the fact that you can kinda poke around in the running VM and gather a whole bunch of interesting diagnostics, and, frankly, if you're feeling especially adventuresome, make changes to the running VM, is something that is pretty darn useful for chasing down the kinds of bugs that are, you know, heisenbugs, right, that don't seem to lend themselves to an easily reproducible test case, but that are cropping up in these kinds of strange situations in production. The Erlang VM, in my experience, is quite a bit more amenable to that kind of operational visibility than other runtime systems that I've been working with. Yeah. I've heard some pretty remarkable statistics
[00:23:13] Unknown:
in terms of the reliability that you can get out of Erlang, such as I believe there is one company that's managed to have a system that had something like 5 minutes of downtime over the course of maybe a dozen years, which is pretty crazy. It's, you know, it's fascinating. It is possible to design those kinds of systems
[00:23:29] Unknown:
and do things like, you know, upgrade the code of a running process without actually restarting the lightweight Erlang process inside the virtual machine. And we've done many of those kinds of things over the years. There's a little bit of a tension with all of the world of container based, cloud native programming in Kubernetes. Right? Where, you know, Kubernetes and its ilk will have their own opinions about how one ought to upgrade running services. And, you know, if you're really aspiring to upgrade the running code of a process without hanging up the connection, Erlang has tremendous tools in that space. But they don't necessarily lend themselves toward that kind of hands off, declarative notion of let me go now upgrade this deployment in my Kubernetes cluster. And so that brings us a bit into the replatforming
[00:24:17] Unknown:
work that you're trying to do with FoundationDB to replace the storage layer. And before we get too far into that, I'm wondering if you can give a bit of detail into the current way that CouchDB handles storage and some of the benefits that you're hoping to achieve with this replatforming?
[00:24:35] Unknown:
Yeah. Sure. So let's talk about the current storage engine. This is a storage engine that is entirely of our own invention. It is a fully copy-on-write storage model. Changes to, you know, a B-tree index in one of our files involve rewriting the entire path from that leaf node up to the root and appending all of that to the end of the file. This is a very robust design. It always leaves the database in a consistent state. We don't have to do anything like replay redo logs after an unclean shutdown. You know, you can kill -9 a CouchDB process and start it back up, and it will automatically seek from the end of the file to find the last consistent snapshot of that file and use that. It was beautiful for operations. Right? It very much simplifies a lot of the recovery processes that one would otherwise have to undertake in a more traditional database design. But that's just a description of the storage engine used for one replica of one shard file. Right? Then you kinda climb up into the level of the eventually consistent clustering architecture, and, you know, that's a whole other ball of wax.
So when we were looking at FoundationDB, our interest was not primarily in what you would think of as the storage engine, you know, a RocksDB or a LevelDB or a SQLite or CouchDB's B-trees. We didn't wanna regress on that front. We thought it was really important to have a bulletproof, reliable, well tested storage engine. Our interest, though, was in how do we provide, you know, serializable isolation at that storage layer while also getting horizontal scalability. Right? And I feel like that's the problem that FoundationDB as a project has just basically spent all of its time trying to tackle. Let me provide a basic key value interface. Let me provide strict serializable isolation over updates to those keys, and let me do it in a way that allows for horizontal scalability across a cluster of machines.
Those primitives were things that we looked at from a CouchDB perspective and said, wow. If we really had that underpinning CouchDB, we can deliver richer APIs to our users. We can improve the scalability of some of our operations, and, you know, we can sort of refocus our efforts on more of the things that CouchDB does that are truly, like, uniquely differentiating.
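For a sense of what those primitives look like, here is a minimal sketch against FoundationDB's Python bindings, where the body of a decorated function executes as one strictly serializable transaction and is retried on conflict; the key names and the balance-transfer logic are hypothetical:

```python
# Minimal sketch: FoundationDB's key-value API with serializable transactions.
# The key names and the transfer example are hypothetical.
import fdb

fdb.api_version(620)  # pin the API version before opening a database
db = fdb.open()       # connects using the default cluster file

db[b"balance/alice"] = b"100"  # seed example keys (implicit transactions)
db[b"balance/bob"] = b"50"

@fdb.transactional
def transfer(tr, src, dst, amount):
    # All reads and writes inside this function form one strictly
    # serializable transaction; the client retries it on conflict.
    src_balance = int(tr[src]) - amount
    dst_balance = int(tr[dst]) + amount
    tr[src] = str(src_balance).encode()
    tr[dst] = str(dst_balance).encode()

transfer(db, b"balance/alice", b"balance/bob", 10)
```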
[00:26:50] Unknown:
And in terms of this replatforming, I'm curious how you're planning to handle the sort of redistribution of features and capabilities within CouchDB, where it seems that data replication, for instance, could be relegated to FoundationDB. But I also know that CouchDB has out of the box support for change capture feeds. And I'm wondering just sort of what the overall rearchitecting
[00:27:18] Unknown:
will look like and where the capabilities will end up lying between FoundationDB and what role the CouchDB front end will serve? Yeah. It's a great question. So our view here is that you can look at it as a set of layers, and that's how FoundationDB often talks about consumers of its, you know, data store. At the lowest layer, FoundationDB provides the durability. You know, all of the data that CouchDB is storing is stored in a FoundationDB cluster, but that FoundationDB cluster is not directly exposed to consumers of CouchDB. On the top, CouchDB is providing, you know, the familiar JSON HTTP interface, the materialized view indexes, other types of secondary indexes, search indexes, and so on and so forth; those are all being written into FoundationDB.
As the needs of an application using CouchDB grow, right, as its throughput requirements grow, that FoundationDB cluster can horizontally scale to accommodate more data and to serve more throughput. And the CouchDB nodes on top can horizontally scale because they're stateless. They're essentially an application layer over top of FoundationDB at this point. So we have good scalability stories for both the storage and the compute, if you like, providing CouchDB's interfaces. But all that said, that's one instance, one deployment of CouchDB. It's one endpoint that a user would interact with in their own application. If now you're saying, well, I'd like to have one of these instances running in my on-premises data center, or I'd like to have one running in US East and one running in US West, we view those as separate deployments of CouchDB, each with its own FoundationDB environment under the hood, and CouchDB takes care of synchronizing the data between those different environments.
[00:28:51] Unknown:
And so it seems that this is going to impact the operational characteristics of running a CouchDB cluster as well, and some of the ways that users will need to think about how to actually deploy and maintain their systems. And I'm curious, in the current state of working through and experimenting with this, how the operational characteristics have improved, and what new edge cases or new considerations operators need to be thinking about as they're designing these deployments?
[00:29:25] Unknown:
Yeah. That's a great topic, and it's one that, when we at IBM first proposed this direction for the future of the CouchDB project, we certainly got some questions on. They said, ah, it's all well and good for IBM to say, hey. We're gonna run FoundationDB. You have the skills to go, you know, pull up a team around that and make sure you can do it. What about the users who are running their own CouchDBs? How are we gonna make sure that they're comfortable with the administration of this project? And I guess what I would say is a couple of things. One, FoundationDB is some of the best tested software I have ever seen, and that testing is not just about sort of the static, you know, details of unit and integration and system testing. It's also introducing and injecting all kinds of random faults into running clusters and ensuring that the system ultimately, you know, never sacrifices its ACID semantics and gets itself back up into a running state. It does make some different trade-offs. You know, it will take itself offline if there's a chance of, you know, data corruption, and it makes that particular trade-off in the CAP triangle. But we looked at it not just from the perspective of the functional capabilities that it was providing in terms of key value transactions, but also in terms of whether the project had the same commitment to, you know, data safety, data protection, data durability that users have come to expect from CouchDB, and I think there's a really good match there.
And, you know, I would say that our focus so far has been on the development work to bring over all of the capabilities of CouchDB on this, you know, architectural foundation of FoundationDB, but we recognize that, in the run-up to the 4.0 release, one of the things we're gonna have to do is make sure that there's a familiar set of operational tooling for our existing users, so that even though they're running a different distributed storage engine under the hood, they're able to manage it in a way that makes sense to them. So yeah. I mean, it's clearly part of what we're thinking about in terms of the must-haves for the 4.0 release.
[00:31:23] Unknown:
And another challenge that I'm curious about how you're handling is the migration path for people who have an existing installation. They've already got possibly sizable amounts of data in their existing clusters, and being able to move that to this new deployment with FoundationDB as the layer that's actually handling that storage. That's right. So as a project, we have not typically
[00:31:46] Unknown:
tried too hard to do in-place major version migrations. We didn't have that from 1.0 to 2.0. We do anticipate that being something people can do from 2.0 to 3.0. But we think, as a posture, that saying, yeah, jumping from 3.0 to 4.0 requires a data migration is something that people will find acceptable. We do have plans in place to optimize the, you know, data replication capability specifically for this cutover from 3.0 to 4.0. And I think that's a smart allocation of resources on our part, because anything we do that makes data replication faster, more resource efficient, more reliable is something that helps out with a whole bunch of use cases, not just the 3.0-to-4.0 migration. And so for somebody who is moving, would it just be a matter of deploying a new set of instances,
[00:32:37] Unknown:
joining them to a cluster running 3.0, and then letting all of the data replicate, and then phasing out the 3.0 cluster as you hit certain points of the different shards being replicated? Yeah. We would look at it not so much as joining to the existing cluster, but just standing up a cluster next door and setting up a replication job to sync the data from the old cluster to the new cluster, and then having a load balancer over top repoint the endpoints to the new cluster. And then as far as the actual work of migrating the code base and integrating with FoundationDB and figuring out what the separation of concerns are, what have been some of the biggest challenges and some of the most interesting
[00:33:16] Unknown:
experiences that you and the other developers have had? If I go with challenges first, one of the things FoundationDB has done a good job of is being very explicit about the kinds of limits that are in place and the kinds of key sizes, value sizes, transaction durations, and so on that are supported by the engine. In CouchDB, we've been a little bit more lax on that sort of thing. And so it has forced us to kind of come face to face with some of our relaxed postures around, well, how big of a document could you store in the database, or how long could that request actually last? And, you know, it's frankly been, I think, a conversation that we needed to have as a community, because at some point, you just end up in a situation where you sort of say, well, I guess we didn't explicitly say you couldn't do this, you know, 10 gigabyte document, but as a practical matter, it's not going to be a very good experience for you. Now we're getting much more explicit about the kinds of limits that are, you know, the sort of safe operating envelope for users of CouchDB, but it's never pleasant to introduce restrictions where there weren't any before. Right, because inevitably you end up breaking some edge case, you know, in the user base.
So that's been, you know, just something that's occupied a lot of time and energy on our part as we, you know, go through the audits to see, like, okay. Where can we put this limit? And who can we work with to make sure that we, you know, bring all of our users forward as we get ourselves onto a sort of a stronger footing overall? I think on the benefit side, I would say, one, the FoundationDB user community and developer community is incredibly talented and helpful. You know, we've been amazed every time we reach out with any sort of design question. We get this, like, three paragraph answer that anticipates the thing we hadn't yet thought of and were gonna run into the next time around. So that's been super, super helpful. And I think as people have gotten deeper into the implementation of the CouchDB functionality on FoundationDB, I think there's just been a growing awareness of and appreciation for all of the power of transactions inside Couch. And people are starting to come up with new ideas for ways that, you know, that underlying capability can make our lives better as database developers.
[00:35:30] Unknown:
And in terms of once this migration is complete and people are running the 4.0 release of CouchDB with the underlying FoundationDB engine, what are some of the new use cases that this might enable, or some new considerations that people designing their applications on top of CouchDB will need to think about? Or any thoughts in terms of exposing the underlying FoundationDB engine as another interface for being able to interact with the data stored within CouchDB?
[00:36:04] Unknown:
That is an excellent topic. So we have, as a community, said, look, our goal here, first and foremost, is to try to look at all of the different parts of the existing API and make sure that this is a relatively non-disruptive upgrade for our existing user base. That being said, it's incredibly tempting to look at some of the things that we can't do today in the CouchDB API that we might be able to do in the future with the FoundationDB storage layer. And, you know, so we have started having some of those conversations in the community to say, what could we do from a transactional perspective? You know, now that we have this underlying strictly serializable storage layer that, you know, scales across multiple machines, what could we do that we don't do today? And nothing committed on that front right now, but I think more and more people are appreciating that, like, there's a real interesting opportunity there. You also talked about actually exposing the FoundationDB API, and I think that's a place where a lot of people go. They kind of look at it and say, is this like a multi-model kind of database, where I could use the document API on the one side, the graph API on the other side? And I think what you find is that anytime you're working with FoundationDB, when you're building a layer on top, if your goal is to build the best document layer possible, you're gonna make a number of design decisions that will be in tension with something that says, hey. I'm trying to build a multi-model database. Therefore, I'm gonna create this completely generic data model under the hood that allows it to be accessed from different types of query languages and different types of APIs.
So I have looked at it and said, you know, I think the right way to think about this is FoundationDB as an internal implementation detail, as a tool in our tool belt under the hood that allows us to deliver a great document database. And if over time someone else should say, hey. I wanna build a great graph layer over top of FoundationDB, that makes the FoundationDB project better. But I'm not a huge proponent of thinking that, like, FoundationDB is the right level of abstraction to allow people to have a common data model that can be accessed by different types of APIs, different types of query languages. I guess the last piece you said was, well, what about the FoundationDB API directly?
And that one's an interesting one too. If you look at FDB today, it's got this kind of very tight coupling between the client layer and the server layer that is, you know, a very different experience than, say, using CouchDB's REST API. The clients need to be kind of participating in the upgrade of the server, and they kind of need to be very cognizant of the version of the server that's running, and there's all kinds of details that go into that. There is some talk in that world around introducing a more stable API, a gRPC API or something of that nature, that would allow for that kind of more direct exposure of FoundationDB. But that's not something that we as a project are really looking at. I think our view in CouchDB is that CouchDB is the kind of database that can be exposed directly as a cloud service, that can be exposed, you know, powering web applications, that has the kind of security model and the kind of access control model that people need to have there. And FDB, frankly, just works at a lower level than that. And one of the interesting things about using FoundationDB
[00:39:12] Unknown:
as the storage layer for CouchDB is the fact that, because it has this capability of implementing different types of data storage engines or databases on top of it, you could have a common cluster of the FoundationDB engine with multiple different use cases on top of it. And I'm wondering what you have seen as the viability of that, or if it would just cause too much conflict in terms of the operational characteristics and what the different use cases are trying to optimize for in terms of compute versus memory versus network, etcetera?
[00:39:51] Unknown:
Yeah. That's something that we've turned over in our heads a couple of times here. I would say we have seen users of FoundationDB at scale embrace a single FoundationDB cluster for a number of use cases, but those are situations where it's kind of one development team, one SRE team that's cognizant of the different use cases, and they look at it as a, you know, scalability improvement and efficiency improvement for them to have one storage layer powering these different microservices with different data models. When you get a little further afield and the workloads really are very distinct from one another, they're not microservices powering one solution, but they really are, like, totally different databases designed for totally different user communities.
That's the point where I would probably caution and say that the benefits of consolidating the storage layer onto a single instance of FoundationDB probably are not worth the potential for, you know, clashes and, like, impedance mismatches in terms of the resource requirements of those different databases. Nevertheless, I think as, you know, an organization thinks about developing an expertise in the FoundationDB layer, I think there's still a ton of benefits. An organization that says, we'd like to run the document layer of FoundationDB and the CouchDB layer and the graph layer, we have different users that wanna do that, and we see benefits in saying, we've got one operational team that runs the FoundationDB system reliably and at scale. I think that's something that makes a ton of sense. Because I do think that these different data models, they still can be translated at that level of abstraction of FoundationDB into a set of, you know, transactional throughput, key value read throughput and write throughput, and amount of data stored.
And, you know, that kind of translation into those more low level resources is one that I think provides a lot of operational benefits and a lot of capacity planning benefits for an organization. So I guess that's a long winded way of saying I would stop short of running multiple disparate workloads on the same FoundationDB cluster, but there are still a ton of benefits from different workloads all compiling down, as it were, to the common abstraction layer of FoundationDB itself.
[00:42:05] Unknown:
And another interesting thing to explore is, with CouchDB, there are different use cases that are obviously beneficial to the way it's designed. But what are some of the cases where it's the wrong choice? And it might, at first glance, look like it's useful because it's just a document engine and you wanna be able to store documents, but you would actually be better served using something else, whether it's a different document database or going with more of a relational use case or some other type of data model? I think the number one that we see today
[00:42:36] Unknown:
is cases where you have very short lived data. You know, CouchDB spends a decent amount of energy on ensuring that data written into a database can be replicated to any peer at any point in time. And so there's a fairly long lived set of metadata around each record that you put into CouchDB. And if you're intending to use it as, you know, a place where the record only has to live for a short period of time and then you wanna completely purge it, that will very much be at odds with all of the metadata tracking that says, well, you know, we have to prepare for the case that some replication peer comes in a month later. We wanna make sure that they know that this document existed. These tombstones that we keep around, we keep around forever today. And I think, you know, as a community, we can debate the merits of that. Maybe we enhance things down the line to allow people to configure databases that are optimized for that short lived use case and don't care about multi-region replication of that particular class of data. But as it stands today, if you have short lived data, CouchDB is an expensive proposition for that use case.
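To illustrate the tombstone behavior described here, a minimal sketch with hypothetical server address, database, and document names: deleting a document leaves a small deleted stub behind, so replication peers that show up later still learn about the deletion.

```python
# Minimal sketch: CouchDB tombstones. Deleting a document does not erase it;
# a {"_deleted": true} stub survives so future replication peers learn the
# document was removed. This is the metadata that makes short-lived data costly.
import requests

BASE = "http://localhost:5984/sessions"

doc = requests.put(f"{BASE}/session-1", json={"user": "alice"}).json()
requests.delete(f"{BASE}/session-1", params={"rev": doc["rev"]})

# The changes feed still reports the document, flagged as deleted.
changes = requests.get(f"{BASE}/_changes").json()
print(changes["results"])  # the session-1 entry includes "deleted": true
```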
When it comes to, you know, sort of the relational side of things, I think that detailed ad hoc analytics are something that CouchDB is not particularly well suited for. True of other document databases in general as well, I think. It's well suited to delivering low latency responses to a set of queries that power a web application. It's not so well suited for mixed ad hoc exploration, where you need all of the data from a certain set of fields, and then you need to go kind of compute a bunch of aggregations on the fly and, you know, merge it with another dataset. The relational model shines at being able to adequately serve unanticipated queries. Right? Document databases don't do as well on that front. And what are some of the most interesting or unexpected or innovative ways that you've seen CouchDB used? I think we're always heartened by the kinds of use cases that involve bringing workers out into the field. Like, we've had insurance companies that had CouchDB-powered devices to go out and perform, you know, claims analysis in the aftermath of a natural disaster.
You know, one of our collaborators in the community built an application for health care workers to respond to the Ebola crisis, where you could go through a day without any kind of connectivity, but you wanted to enable accurate record keeping as you were going out on the tour, but you still needed to roll that back up into a complete view of the state of, you know, the response, so that you could make the right decisions about where to allocate resources going forward. Those kinds of use cases have been an awesome fit for CouchDB. And I guess it's not unexpected. I mean, that's the kind of thing we anticipated, but it's always great to see people kind of choose the right technology for the job and tackle something that is quite far afield from kind of us sitting in the major tech hubs of, you know, developed countries and building out more enterprise, you know, SaaS applications. So those kinds of things have always been really heartening for us to see, and, you know, it's turned into something of a trend there. I think we're also now seeing quite a lot of uptick in, you know, retail environments, where they're starting to look at ways to have a more distributed view of the datasets at their different, you know, warehouses, shipping centers, points of presence, retail environments.
And they're starting to look at the kind of replication mesh that you can get from something like CouchDB and say, no, that's actually probably a really reasonable fit for us and has, you know, material improvements for, like, the availability of our system, for the independence of the stores. It's just, you know, our world in retail is distributed, and why shouldn't our database be distributed too. And one other thing that I was just thinking about is we discussed a little bit about the changes in the operational
[00:46:14] Unknown:
characteristics once this shift to FoundationDB is complete. But are there going to be any changes in the client interface, where there'll be some exposure of things like the transactionality of the underlying data? Or do you anticipate that the user facing interface is going to remain largely unchanged?
[00:46:33] Unknown:
Our goal has been to make sure that this work in 4.0 is, you know, an upgrade of the experience for our users. Right? So our first focus is making sure we don't break existing user applications, and we give everybody a path forward. We are super excited about exposing some more of those rich semantics up through the CouchDB API, and we've started those discussions about what would it look like to expose transactions in a database that is fundamentally, you know, asking you to make an HTTP request for every interaction with the data store. So I look at that as a pretty interesting design opportunity, frankly, and I think there's a lot that we can do there that gives people transactional semantics in a cloud native world without, you know, sort of baking it all down into SQL.
But, you know, we kinda have to keep our focus, and I think our goal right now is to use FoundationDB to eliminate the eventual consistency of a cluster and to improve the scalability of our systems, with an eye towards, in future releases, surfacing new semantics that take advantage of the transactional capabilities that we'll have under the hood.
[00:47:36] Unknown:
And what have been some of the most interesting or challenging lessons that you have learned in the process of working with the CouchDB project and community, and being involved with it for so long?
[00:47:50] Unknown:
I would say that, over the past decade, I've gained an increasing appreciation for all of the realities of bringing a database to production as a user. Writing the actual code that manages the transactions is just a small part of the overall picture. We've put a lot more effort in recent years into making sure our users understand how to deploy and manage and monitor the database, and we've been a lot more explicit about the way it works and the way it can be used. My experience on that front has led me to place a higher and higher premium on documenting our design decisions, ensuring that we have full documentation of what we're doing as we're doing it. As our user base has grown and matured, we've gained a greater appreciation for how much we can make their lives better upfront with all of those non-code assets that go along with the evolution of the project.
And I think that's actually a really nice echo of some of the things that go on in the Apache Software Foundation. People sometimes think of committers to an open source project as the folks who are fundamentally changing all the core data structures on disk. But in fact, we invite committers of all shapes and sizes, including a lot of people who don't program in Erlang, and they make contributions in all the other facets of the project that make it more consumable, more reliable, and more approachable for our users. As you look at the project's evolution over time, more and more of our investment has gone into all of those surrounding assets that make it a more well rounded, more approachable project as a whole.
[00:49:39] Unknown:
And circling back, one other thing I forgot to ask about: because this is such a drastic change in the implementation and architecture of CouchDB, there's definitely a lot of opportunity for introducing new bugs or regressions. I'm curious what your approach has been to testing and validating the functionality of CouchDB, to ensure that you don't ship too many breaking changes?
[00:50:02] Unknown:
Yeah, I don't know that we have any sort of special sauce there. We've got a robust test suite that we've maintained over time, and because we're committing to largely preserving the existing API, the existing test suite is something we expect to be able to run unadulterated. We've also got a good performance engineering team at IBM who's looking at all of the other kinds of scalability issues and chasing down any cases where the new system may be a regression from the old one in some particular aspect. The Erlang community has done a lot of work on things like property-based testing, really sophisticated types of fuzzing, to introduce all kinds of unexpected inputs and see how the system responds, and we have some people who have done work on that front as well. Usually it's about testing internal data structures more than external APIs, because it is quite expensive to generate that kind of huge space of test cases programmatically.
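For readers unfamiliar with property-based testing, here is a minimal Python analogue using the hypothesis library (the Erlang ecosystem uses tools like PropEr or Quviq QuickCheck for the same idea); the merge function is a stand-in for an internal data structure routine, not CouchDB code:

```python
# Property-based testing generates thousands of inputs and checks
# invariants, rather than asserting on a few hand-picked examples.
from hypothesis import given, strategies as st

def merge_sorted(a, b):
    """Merge two already-sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    return out + a[i:] + b[j:]

@given(st.lists(st.integers()), st.lists(st.integers()))
def test_merge_is_sorted_permutation(xs, ys):
    merged = merge_sorted(sorted(xs), sorted(ys))
    # The property: output is sorted and contains exactly the inputs.
    assert merged == sorted(xs + ys)
```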
But I feel like there's maybe some opportunity for that as well. And we also have the benefit within IBM of a large and diverse user base of people who are friendlies and are willing to work with us on this journey, because they appreciate the benefits that we're bringing to bear, so we've been able to lean on them early and often to help us spot anything unexpected.
[00:51:25] Unknown:
Are there any other aspects of the CouchDB project and its ecosystem and community, or FoundationDB, or the work that you're doing to replatform onto the FoundationDB engine, that we didn't discuss and that you'd like to cover before we close out the show?
[00:51:41] Unknown:
No, I think we've covered a whole bunch of really interesting, gorpy aspects of building databases. I think that this approach of a separation of concerns, and the API that FoundationDB offers, is an incredibly powerful tool in the data service developer's toolkit, and I'm excited to see more people adopt that line of thinking. I think the database community as a whole would be better off for it. So, frankly, I'm just jazzed about the project and where things are headed.
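A minimal sketch of that layering idea, using the official foundationdb Python binding: the core exposes ordered keys, values, and transactions, and a higher-level service builds its own data model on top. The key layout and document model here are invented for illustration:

```python
# FoundationDB exposes ordered keys, values, and transactions; a layer
# builds its own model on top. The key layout here is invented.
import json
import fdb

fdb.api_version(630)
db = fdb.open()  # uses the default cluster file

@fdb.transactional
def put_doc(tr, doc_id, doc):
    # One key per field; the loop commits atomically or not at all.
    for field, value in doc.items():
        tr[f"doc/{doc_id}/{field}".encode()] = json.dumps(value).encode()

@fdb.transactional
def get_field(tr, doc_id, field):
    raw = tr[f"doc/{doc_id}/{field}".encode()]
    return json.loads(bytes(raw)) if raw.present() else None

put_doc(db, "user-1", {"name": "Ada", "city": "London"})
print(get_field(db, "user-1", "city"))  # 'London'
```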
Well, for anybody who wants to follow along with the work that you're doing or get in touch, I'll have you add your preferred contact information to the show notes. And as a final question, I would just like to get your perspective on what you see as being the biggest gap in the tooling or technology that's available for data management today.

That is an excellent question. When I think of data management a little more broadly, I think we've got a ton of people who are struggling to gain as much insight as they can from the data they've collected, and a lot of that comes down to the availability of good data catalogs that are integrated into the entire workflow. We've got all these clients who have databases scattered across the organization, data engineering pipelines, data curation and transformation to get data into a shape where it's ready for the training of a model. At every step in that journey you're creating more data, and you don't always understand how those datasets relate to one another. You don't always understand, when there's some correction in the source data, what types of downstream effects that has.
And so I feel like we spend an inordinate amount of time and energy manually curating all of these datasets because of the lack of captured lineage information about how they relate to one another. And it's a super hard problem unless you're operating at a scale where you have ruthless rules about how data gets processed, because there's no other way to do it than to use the one standard tool. Most of our organizations are not like that. People use whatever tools they want, and you end up with this explosion of datasets and very little idea about how they all relate to one another.
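A toy sketch of the lineage capture being described: if every transformation records which datasets it read and wrote, answering "what is downstream of this corrected source?" becomes a graph traversal. All dataset names and the registry itself are hypothetical:

```python
# Record a lineage edge for every derivation; impact analysis is then
# a reachability query over the resulting graph.
from collections import defaultdict

downstream = defaultdict(set)  # source dataset -> datasets derived from it

def record_derivation(output, inputs):
    """Call from every transformation job, whatever tool executed it."""
    for src in inputs:
        downstream[src].add(output)

def impacted_by(source):
    """Everything to recheck when `source` receives a correction."""
    seen, stack = set(), [source]
    while stack:
        for child in downstream[stack.pop()]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

record_derivation("features/daily", ["raw/events", "raw/users"])
record_derivation("model/training_set", ["features/daily"])
print(impacted_by("raw/events"))  # {'features/daily', 'model/training_set'}
```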
[00:53:48] Unknown:
So when I see a gap in tooling or technology, for me it comes down to this metadata management at scale. It just becomes a huge problem for everybody.

Yeah, that's definitely a common theme that has come up a lot of times in my conversations, and increasingly frequently in recent episodes, over the past several months and probably the past year or so. Well, thank you very much for taking the time to join me and share your experience of the work that you've done with CouchDB and the work that's ongoing to replatform onto FoundationDB.
It's definitely an interesting database and product, and it's an interesting undertaking for something that has been in production for so long. So I appreciate you taking the time to share your insight and experience, and I hope you enjoy the rest of your day. Thank you, Tobias. You as well. Thank you for listening. Don't forget to check out our other show, Podcast.__init__ at pythonpodcast.com, to learn about the Python language, its community, and the innovative ways it is being used. And visit the site at dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show, then tell us about it. Email hosts@dataengineeringpodcast.com with your story. And to help other people find the show, please leave a review on iTunes and tell your friends and coworkers.
Introduction and Sponsor Messages
Interview with Adam Kocoloski Begins
Adam's Background in Data Management
Overview of CouchDB
CouchDB's Role in the NoSQL Movement
CouchDB's Architecture and Write Throughput
Data Modeling in CouchDB
CouchDB's Implementation in Erlang
Replatforming CouchDB to FoundationDB
Operational Characteristics and Migration Path
Challenges and Benefits of Replatforming
New Use Cases and Exposing FoundationDB API
When Not to Use CouchDB
Innovative Uses of CouchDB
Changes in Client Interface
Lessons Learned from CouchDB Project
Testing and Validation
Closing Remarks and Final Thoughts