Azure Cosmos DB Indexing Experiments

Azure Cosmos DB prices capacity in “Request Units” (or RUs), which effectively means that the more you optimize your data structures and your queries, the more money you save.  While this is also true for databases like SQL Server, the link between optimization and price is more direct with Cosmos, where you pay for a specific throughput capacity.

To that end, it is important to understand how to structure your data to take advantage of cost savings with Cosmos.  I think the early horror stories about cost were partly due to immature technology (you can see some examples of how recent improvements reduced RUs for common operations) and partly due to poor data design.

For me, the way to cut through those early horror stories, and to avoid repeating the same mistakes, has been to actually test different scenarios and learn how to structure data for the best cost and performance.

Because of the document-oriented nature of the database, it is common to embed references to external documents in order to reduce the number of operations required to display a record.

Consider the following structure:
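A hypothetical Order document along these lines (field names are illustrative, apart from itemsRef, which appears later in the discussion):

```json
{
  "id": "order-1001",
  "orderDate": "2020-01-15",
  "itemsRef": [
    { "itemId": "item-42", "name": "Widget" },
    { "itemId": "item-77", "name": "Gadget" }
  ]
}
```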

The main document is an Order which contains references to Items.  By embedding some key information into the root Order document, we can reduce the cost of loading the Order.

But if we want to now find all Orders which contain a specific Item, we have the following options:

  1. Keep track of the Item-to-Order mapping on the Item – a bad idea, since the document size would then be unbounded.
  2. Keep track of Item to Order mapping in some other storage structure – not a bad idea and could be very performant, but requires multiple systems and increases complexity.
  3. Query across Orders using the itemsRef property – consumes our RUs and performance depends on many factors including data and index design.

Option 3 is what we are interested in, but the question is what’s the best way to represent this in Cosmos.  One option is as indicated above.  A second option is like so:
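A hypothetical sketch of this second option, where each reference is collapsed into a single encoded string (the property name and the delimiter are assumptions):

```json
{
  "id": "order-1001",
  "itemsString": [
    "item-42|Widget",
    "item-77|Gadget"
  ]
}
```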

In this design, we encode the values as a string.  This may be more performant depending on how the indexer behind the scenes handles strings versus arrays of objects.  Of course, there are tradeoffs: in the first case, we can query by itemId without knowing the name, and if the name changes we are still OK; in this second scenario, we always need an “original name” or “static name” which we use specifically for looking up the object as a reference.

So the question is: how will Cosmos behave?  What can we expect?

To find out, I wrote a simple console app to create 1,000,000 documents with a randomly selected number of embedded references (between 3 and 8) from a set of 50,000 possible data points.  Each document looks like so:
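A minimal Python sketch of how one such document could be generated, under the assumption that each embedded reference carries an id and a name, and that usersString holds a delimiter-encoded copy of each entry in usersRef:

```python
import json
import random

# Pool of 50,000 possible data points, as described above.
# The "id"/"name" shape of each reference is an assumption.
USER_POOL = [{"id": f"user-{i}", "name": f"Name {i}"} for i in range(50_000)]

def make_document(doc_id: int) -> dict:
    # Randomly select between 3 and 8 embedded references.
    refs = random.sample(USER_POOL, k=random.randint(3, 8))
    return {
        "id": str(doc_id),
        "usersRef": refs,  # array of reference objects
        # Encoded-string variant of the same references;
        # the "|" delimiter is a hypothetical choice.
        "usersString": [f'{u["id"]}|{u["name"]}' for u in refs],
    }

doc = make_document(1)
print(json.dumps(doc, indent=2))
```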

The questions we want to answer are:

  1. Is it more cost effective to query by usersString?
  2. Is there any additional cost to querying by usersRef?
  3. Is there any difference in performance based on how the index strategy is specified?

Let’s query using usersString with the default indexing policy.  We execute the following query:
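Assuming usersString is an array of delimiter-encoded reference strings, the query could take this shape (the value shown is hypothetical):

```sql
SELECT * FROM c
WHERE ARRAY_CONTAINS(c.usersString, "user-123|Alice")
```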

Then we query using usersRef with the default indexing policy with the following query:
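This query uses ARRAY_CONTAINS with a partial-match prototype, along these lines (the id value is hypothetical):

```sql
SELECT * FROM c
WHERE ARRAY_CONTAINS(c.usersRef, { "id": "user-123" }, true)
```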

Cosmos’ ARRAY_CONTAINS allows partial matches of JSON objects when its third parameter is set to true, so we can effectively pass it a prototype of the entity we are looking for in the embedded reference.

Let’s look at the results:

Query metrics comparing partial JSON match versus string match.

Surprisingly, the RUs are exactly the same.  However, the compute load of the JSON matching is noticeably higher than that of the string match.  As best I can tell, the system function execution time of the string match is around 0.02 ms (the stats occasionally show 0.02 ms instead of 0).  This is not bad at all, even in a compute-sensitive runtime environment like Azure Functions (which has a memory-time price component).

One question I had was whether the index policy could be tuned to improve performance.  According to this Stack Overflow post from 2018, it was once possible to change index performance by using hash versus range indexes for specific cases of array searches (but this was right around the time that the Cosmos team changed how indexing worked).  I customized the index policy:
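A custom indexing policy along these lines might look like the following sketch (the paths are assumptions based on the property names above):

```json
{
  "indexingMode": "consistent",
  "includedPaths": [
    { "path": "/usersRef/[]/id/?" },
    { "path": "/usersString/[]/?" }
  ],
  "excludedPaths": [
    { "path": "/*" }
  ]
}
```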

So rather than using the default “index everything” policy, we specifically designate the indexes to create.

It turns out that there’s no difference in performance.

The conclusion is that it makes more sense to use an object reference with partial JSON matching than to use string matching.  There is an additional compute cost associated with it, but I think it’s worth it for ease of use and a better application design.
