News, examples, tips, ideas and plans.
Thoughts around ORM, .NET and SQL databases.

Thursday, August 26, 2010

A good post about TPT ("table per type") inheritance mapping and its performance-related concerns in EF

Link to the post: http://goo.gl/xQEE

The article is interesting in itself, but I'd like to highlight some differences related to the support of TPT ("table per type" or "table per class") inheritance mapping in DataObjects.Net:

1. EF uses "greedy" queries joining all the tables in hierarchy. DO joins minimal set of tables necessary to run the query.

EF was not initially designed around lazy loading, so it must fetch the whole entity state, and thus join all the tables in the hierarchy.

DO behaves differently: when you query for the Dog type in the Animal hierarchy, it joins only the Animal, Mammal and Dog tables - exactly what's necessary to fetch the properties declared up to Dog. If the actual instances are descendants of Dog declaring their own fields, we expect you to use .Prefetch to additionally fetch those fields when you need them; otherwise they will be loaded on demand anyway, which leads to the SELECT N+1 issue.
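
To make this concrete, here is a rough sketch of the query shape DO produces in such a case; the column names (Name, IsWarmBlooded, Breed) are invented for illustration, and the actual SQL emitted by DO differs in details:

  -- Querying for Dog in the Animal > Mammal > Dog hierarchy (TPT):
  -- only the tables storing Dog's own and inherited columns are joined.
  SELECT
    a.Id, a.Name,        -- columns declared on Animal
    m.IsWarmBlooded,     -- columns declared on Mammal
    d.Breed              -- columns declared on Dog
  FROM Dog d
  INNER JOIN Mammal m ON m.Id = d.Id
  INNER JOIN Animal a ON a.Id = d.Id
  WHERE a.Name = @name;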

The .Prefetch API, in fact, fetches entity states individually: it splits the entities to process into groups of the same type, and fetches their state with batched queries having Id IN (@Id1, @Id2, ...) in their WHERE clause (or its equivalent with (... OR ... OR ...) for composite keys) - see the sketch after this list. We've chosen this approach because it allows us to:
  • Ignore entities whose state is already cached - i.e. if such entities are passed to .Prefetch, no query is issued for them.
  • Fetch the state from the global cache first, once such a cache appears. So .Prefetch won't hit the database server at all if the state of all the entities is already in the global cache.
  • Issue "ideal" SQL queries with quite predictable cost: any SQL query sent by .Prefetch is, in fact, translated to a set of primary index seek operations.
  • .Prefetch queries are batched. Technically we can fetch a pretty large number of entities in each such batch (up to roughly a thousand entities per batch, because of SQL Server's limit of 2,100 parameters per request), but actually we fetch 256 entities per batch. This limit is chosen because it's optimal: it's already a bit more than we can materialize during execution of such a batch; on the other hand, the queries we send are shorter, and thus simpler for the optimizer.
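
Continuing the sketch above, a single statement from such a .Prefetch batch would look roughly like this (column names are the invented ones from the previous example; a real batch may contain several such statements, fetching 256 entities per batch as described above):

  -- Prefetching the state of a group of Dog entities by their keys:
  -- each key in the IN list turns into a primary index seek.
  SELECT a.Id, a.Name, m.IsWarmBlooded, d.Breed
  FROM Dog d
  INNER JOIN Mammal m ON m.Id = d.Id
  INNER JOIN Animal a ON a.Id = d.Id
  WHERE d.Id IN (@Id1, @Id2, @Id3 /* ... */);
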
Why is having fewer joins important? JOIN order optimization is computationally hard: currently most RDBMS use heuristic algorithms that produce a nearly optimal join sequence by considering at most about n! join orders instead of the even larger full plan space (where n is the count of JOINs). But even n! implies very fast growth of complexity with n:
  • 5! = 120
  • 10! = 3 628 800
  • 15! = 1 307 674 368 000
  • 20! = 2.43290201 × 10^18
Hopefully it's clear now that a query with 15 JOINs is nearly impossible to optimize well, even with the described heuristics: about 1.3 trillion join sequences would have to be considered to handle it. Actually, I don't know exactly what SQL Server does in this case, but the best it can do is use the best plan it managed to produce within some limited time. And this plan won't be optimal with 99.99...% probability, since 99.99...% of the other possible plans were simply never considered.

That's why I constantly write that EF simply can't be used when you have a large inheritance hierarchy: 16 types = 15 JOINs in any query to this hierarchy.

2. EF uses LEFT OUTER JOINs to join the tables in the hierarchy. DO always uses INNER JOINs to do the same.

LEFT OUTER JOIN is a performance killer in many, many cases (a sketch contrasting the two query shapes follows this list):
  • The "right part" (related to joined table) table may contain NULL values not just because they're actually are in that table, but because they appeared after join there. And if you'll use NULL such values in query somewhere (e.g. is such columns are used in ORDER BY clause, or compared with NULL in WHERE clause), SQL Server will need to execute the whole JOIN first, and only then - filter or sort the result! In fact, in this case indexes on descendants' tables won't be used at all. We implemented INNER JOIN usage optimization for reference properties just because of this, so now DO uses INNER JOIN in almost any case (exception is traversal of nullable reference property).
  • Worse: I just discovered SQL Server doesn't try to reorder OUTER joins at all (see the "Limit using Outer JOINs" section)! Looking at EF's JOIN order in its SQL, this implies only indexes on the hierarchy root table may actually be used by SQL Server - since the join comes first there, and indexes become useless after it is done (the cost of a hash join is roughly the cost of a linear scan) - at least, as far as I can judge. So SQL Server simply can't produce an optimal plan for your query in this case - it will crunch all the data in a nearly straightforward fashion instead.
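
Here is a rough sketch contrasting the two query shapes, using the same invented hierarchy and column names as above; neither is the exact SQL the ORMs emit, but the join types match what the article describes:

  -- EF-style TPT query (sketch): the whole hierarchy is stitched together
  -- with LEFT OUTER JOINs, so a predicate or sort on a descendant column
  -- can only be applied after the full join has been computed.
  SELECT a.Id, a.Name, m.IsWarmBlooded, d.Breed
  FROM Animal a
  LEFT OUTER JOIN Mammal m ON m.Id = a.Id
  LEFT OUTER JOIN Dog    d ON d.Id = a.Id
  ORDER BY d.Breed;  -- Breed is NULL-able after the outer join

  -- DO-style query for Dog (sketch): INNER JOINs can be reordered by the
  -- optimizer, and an index on Dog.Breed, if there is one, remains usable.
  SELECT a.Id, a.Name, m.IsWarmBlooded, d.Breed
  FROM Dog d
  INNER JOIN Mammal m ON m.Id = d.Id
  INNER JOIN Animal a ON a.Id = d.Id
  ORDER BY d.Breed;
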
The "good" thing is that point 1 (above) isn't really applicable then - at least in the case of SQL Server: if the OUTER joins aren't reordered at all, there is no join order left to optimize. But this doesn't mean things get better because of it - actually, everything is even worse.

What about NHibernate?

NHibernate seems cleverer here (in comparison to EF ;) ): this post by Ayende shows it uses LEFT OUTER JOIN only when you query for a non-leaf type; otherwise it uses INNER JOIN. So:
  • queries for leaf types (more precisely, the JOINs to superclass tables) lead to nearly the same SQL as in DO,
  • but queries for non-leaf types (more precisely, the JOINs to subclass tables, if any) lead to nearly the same SQL as in EF.
So we have an intermediate case here. If you query for leaf types, NH works nearly as well as DO from the point of view of the generated SQL. But if you query for types having several subclasses (descendants), you face the same issues as with EF.
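
For completeness, a query for a non-leaf type in NHibernate would look roughly like the sketch below (again with the invented names from the examples above): the superclass table is inner-joined, while subclass tables come in through LEFT OUTER JOINs - the EF-like part of its behavior.

  -- Querying for Mammal (a non-leaf type) with a TPT mapping (sketch):
  SELECT a.Id, a.Name, m.IsWarmBlooded, d.Breed
  FROM Mammal m
  INNER JOIN Animal a ON a.Id = m.Id        -- superclass: INNER JOIN
  LEFT OUTER JOIN Dog d ON d.Id = m.Id;     -- subclass: LEFT OUTER JOIN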

12 comments:

  1. Nice post about SQL internals, good as usual Alex!

  2. By taking the choice of join order away from SQL Server you are completely eliminating any optimizations it may have done. By fetching manually by ID you have forced an effective nested loop join for all joins on SQL Server (which now has no choice but to seek into the clustered index for every single entity; it cannot do range scans and merge joins anymore).
    If it were so easy to get the best performance by just _not_ doing join order processing, then why does SQL Server do it?
    This is unfortunately an anti-optimization, at least for flat hierarchies. DO is missing a join-based prefetch mechanism. Batched prefetch a) forces a nested loops join and b) incurs unnecessary roundtrips.

  3. An example would be customers and their orders. If you cluster the orders by (CustomerID, ID) instead of (ID), it takes a single range seek to fetch all orders for a customer. With fetching by ID, it takes 10 bookmark lookups (on a nonclustered index containing ID) and 10 seeks into the CI.

  4. I know this is arguable - we thought carefully about the alternatives.

    Let's start with your example with Orders and Customers - actually, it's a pretty easy case for DO.

    Imagine we must fetch all the Orders for, let's say, 30 Customers (1 page). As you've mentioned, the ideal PK structure for Order here is the pair [Customer, Order.Id] (assuming the Orders table has a clustered PK). Customer.Orders in this case is an EntitySet<Order> paired with the Order.Customer property.

    DO will execute just a single query (or probably 2 in 1 batch - I don't remember the limits) to fetch all these EntitySets for 30 Customers. And, as you've mentioned, SQL Server will perform at most 30 PK range scans (index seeks fetching a range of records) to handle this. If these customers have nearly sequential IDs (this depends on which customers we're fetching the data for, but the same rule applies in your case as well), only a few pages will be fetched from disk to handle this, if they aren't cached.
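
    Roughly, the schema and the prefetch query just described look like this (table and column names are invented for illustration; the exact SQL DO sends for EntitySet prefetch differs in details):

    -- Clustering Orders by (CustomerId, Id) turns each per-customer fetch
    -- into a single clustered index range seek.
    CREATE TABLE Orders (
      CustomerId int   NOT NULL,
      Id         int   NOT NULL,
      Total      money NOT NULL,
      CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (CustomerId, Id)
    );

    -- One batched query fetching the Orders EntitySets for a page of
    -- 30 Customers: at most 30 range seeks on PK_Orders.
    SELECT o.CustomerId, o.Id, o.Total
    FROM Orders o
    WHERE o.CustomerId IN (@c1, @c2, @c3 /* ... up to @c30 */)
    ORDER BY o.CustomerId, o.Id;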

    If the above approach doesn't work well, nothing prevents us from e.g. creating an index on the Orders table that includes all the columns we need to display, sorted the way we need (e.g. by Customer registration date). Of course, in this case we should use a different fetching strategy than .Prefetch (e.g. a query filtering by a registration date range + a local collection of Customer.Ids) to finally reach the goal of fetching all these orders in nearly a single index range scan.

  5. Now let's get back to the questions:

    "By taking the choice of join order away from SQL Server you are just completely eliminating any optimizations it may have done."

    Let's think about what kinds of optimizations SQL Server could do to handle prefetch queries if they were expressed as JOINs:

    1. Such prefetches can only be expressed as LEFT OUTER JOINs - both for references and for collections. So the only choice SQL Server has here (see the article) is the join algorithm to use.

    But MERGE JOIN and HASH JOIN imply both sets must be fully scanned - i.e. in your example, all the Orders must be scanned. So these algorithms are attractive only when the joined table is relatively small, or when you read nearly all the data. If the table is really small, there is no problem with .Prefetch either, since SQL Server will use scans to handle the prefetch queries as well. And if you read all the data (note this is a relatively rare task), you can:
    - do this directly, by fetching all the orders - of course, only if they fit in RAM;
    - do this using .Prefetch. If indexes are built well for such a scan (let's assume they are; otherwise the problem isn't related to DO), you'll get nearly the Orders and Customers case described above. But what's really nice here is that DO provides partial materialization and automatic GC for unused entities (its Session-level cache can work in "weak" mode), so you'll be able to process a nearly arbitrarily large result set in your application.

    The only remaining join type is NESTED LOOP JOIN - but that's exactly what we push SQL Server to do with .Prefetch.

  6. 2. I don't know what to add here - all I mean is that SQL Server actually has nearly zero choice regarding optimizations in typical cases. And when there is a choice, there are good solutions with DO as well.

    Ah, I forgot: maybe the main benefit in our case is that the cost of .Prefetch is quite predictable. If you use it, you can be sure you won't suddenly get a completely different query plan. Yes, prefetching the items might be a bit more expensive in this case, but definitely not 50...100 times more expensive.

  7. "If it was so easy to get best performance by just _not_ doing the join order processing, then why does sql server do it?"

    That's because you assume it can optimize this, although in practice it can't. I described above that all it can choose in this case is the join algorithm to use, which actually isn't that important.

    Btw, this also depends on the order of the prefetch joins relative to the other parts of the query, but I suspect that if you put them closer to the accessed tables, it's even worse, since SQL Server can't re-order these LEFT OUTER JOINs with the others (= it must perform these JOINs first), and thus loses nearly any opportunity to optimize the query. So the idea of putting prefetch joins closer to the leaf nodes of the query is bad - they must be injected closer to the root.

  8. "Batched prefetch a) forces a nested loops join and b) incurs unnecessary roundtrips."

    Yes, that's true, although as for b), we do a lot to minimize the roundtrips to just one or two.

    In general, I agree that both ways should be available. IMHO, what we've done so far is provide the better of the two - at least for the majority of cases: I admit that if we provided only the JOIN-based way, we'd get requests like "why does .Prefetch slow down my query by 100 times" almost immediately. So far we haven't received any such requests, and that's good. We haven't received any performance-related complaints at all in this case.

    So I think we should solve this problem when it becomes clear that someone really needs it to work the other way.

    Btw, there is a feature request for such a prefetch API - also with a good discussion and some benchmarks. So we're ready to implement this when it becomes vital for any of our customers. But until then, it's better to focus on other features.

  9. This is a great answer, thanks. Your points are mainly valid, but I want to point out that a merge join does not need to fully scan the orders table. If the orders are clustered by (customerid, order.id), then a single seek over a contiguous range is enough. Not a single scanned row is wasted.

    Your reasoning is very clear and valid: the performance difference is small, and customer complaints might come up.


