Query is taking 1.5 hours

KTD 2017-12-20 21:08:30

I am actually trying to delete records. This is the select version of the delete.

Aaron Bertrand 2017-12-21 17:00:01
A query that takes 1.5 hours isn't taking that long because it's inefficient; it's taking that long because either the delete is being blocked, or because it is removing so many rows that the logging and/or the data file disk subsystem can't keep up (maybe it is requiring a billion log file autogrows). In fact I would guess that you killed it after an hour and a half, rather than it finally finishing successfully. When you are running the delete, what waits are you seeing in sys.dm_exec_requests? Anything in blocking_session_id? If it is affecting a large number of rows, have you considered chunking the deletes?
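A rough sketch of both suggestions. The table name (`dbo.BigTable`), predicate column (`SomeDate`), and cutoff value are placeholders, since the posted plan is anonymized; adapt them to the real delete:

```sql
-- While the DELETE runs, check from another session what it is waiting on
-- and whether another session is blocking it:
SELECT session_id, status, command, wait_type, wait_time, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id <> @@SPID;

-- Chunked delete: remove rows in small batches so each transaction stays
-- short, locks are released frequently, and the log can be reused between
-- batches (hypothetical table and predicate):
DECLARE @cutoff datetime = '20170101';  -- placeholder value
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.BigTable
    WHERE SomeDate < @cutoff;

    SET @rows = @@ROWCOUNT;
    -- Under the full recovery model, a log backup between batches keeps
    -- the log from growing; under simple, a CHECKPOINT can help.
END;
```

The batch size of 5000 is just a starting point; tune it so each batch finishes in a few seconds without escalating to a table lock.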
Hugo Kornelis 2017-12-21 18:54:21
So why didn't you post an execution plan of the delete? This is like taking my car to the garage because my wife's car won't start.

You have a TOP(1) in your query. Do you also have a TOP(1) in the actual delete query? If so, then perhaps the below can help you.

You might be a victim of a rare side effect of row goals. You use a TOP expression in your query (probably TOP (1), but you posted an anonymized plan so I can't tell for sure). So the optimizer tries to find a plan that finds the first matching row quickly, without caring how long it would take to find them all.

Your query contains a single condition: "WHERE Object14.Column7 < ?". It appears that Column7 in table Object14 is not indexed. So the optimizer decides to start with a clustered index scan of table Object1, then use a value from that table to do a clustered index seek into table Object13, where it can test the above predicate. (Object14 is the alias for the table Object13 - don't blame me, this is how Plan Explorer's anonymization works - it REALLY hides your confidential stuff very well!)

You posted an estimated plan, not an actual plan (I guess you didn't wait for it to finish?), so I cannot see how many rows were processed. But from what I can see in the execution plan, the optimizer expected to read just a few rows from tables Object1 and Object13 before finding a row with Column7 < the value in your query. What I expect an actual execution plan would show is that in reality it had to go through millions upon millions of rows before finally finding the first one that meets the criterion. (The execution plan shows that the table contains over 207 million rows in total, so if it has to go through 90% of them, AND do an index seek into Object13 for each of them - yeah, that would definitely run slow!)

Imagine having a database of all sales for this year. You know that 1% of all sales are for a specific item, and you want to find one. You might think that if you inspect them one by one, you should expect to find a match after about 100 tries. However, if the item you are looking for is a 2018 block calendar and you start searching in the January 2017 sales, you will have to go through nearly all of them before hitting this item that is spiking sales right now but didn't even hit the market until two weeks ago.

Indexing Column7 in Object13 might help. Not sure, but might be worth a try. Alternatively, if you know more about the data than SQL Server does, you can try to help it.
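If you want to test the indexing idea, it would look something along these lines - the real table and column names are hidden by the anonymization, so Object13/Column7 here are just the plan's aliases standing in for them:

```sql
-- Hypothetical: an index on the filtered column lets the optimizer seek
-- directly to rows where Column7 < @value, instead of scanning another
-- table and probing this one row by row.
CREATE NONCLUSTERED INDEX IX_Object13_Column7
    ON dbo.Object13 (Column7);
```

Whether the optimizer actually picks it depends on how selective the predicate is and what other columns the query needs.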
Sticking to the above analogy: if you tell me you are looking for that item in a sale made on or after November 25, I would not bother spending time on all those older orders (assuming the order date is indexed). Yet another thing worth trying, if you need the TOP (1), is to add an ORDER BY (clustered index key) DESCENDING, so that SQL Server will start looking at the other end of the index.

However, after all this:
1. Your DELETE execution plan might look 1000% different from the one you posted, and then all of the above does not apply at all; and
2. If you do not actually have a TOP (1) in your real query, then change "might" to "will" in the sentence above.
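For the ORDER BY trick, a sketch - assuming (hypothetically) that the clustered index key is an ever-increasing `Id` and that newer rows are the ones likely to match:

```sql
-- Hypothetical: force the TOP (1) search to start at the high end of the
-- clustered index, where a matching row is (we assume) found quickly.
DECLARE @value int = 42;  -- placeholder for the parameter in the plan

SELECT TOP (1) *
FROM dbo.Object13
WHERE Column7 < @value
ORDER BY Id DESC;
```

This only helps if the data really is skewed that way; if matching rows cluster at the low end of the key, the descending scan makes it worse, not better.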