Query taking a long time (almost 2 hours)

Maahi 2016-07-30 07:45:26


OS Name Microsoft Windows Server 2012 R2 Datacenter
System Model Virtual Machine
System Type x64-based PC
System SKU Unsupported
Processor Intel(R) Xeon(R) CPU E5-4650L 0 @ 2.60GHz, 2594 Mhz, 8 Core(s), 8 Logical Processor(s)
Processor Intel(R) Xeon(R) CPU E5-4650L 0 @ 2.60GHz, 2594 Mhz, 8 Core(s), 8 Logical Processor(s)
Installed Physical Memory (RAM) 112 GB
Available Physical Memory 16.0 GB

SQL Server details
SQL Server 2014
Version: 12.0.4439
max server memory = 90 GB
cost threshold for parallelism = 5

Maahi 2016-08-01 09:55:39
Hi Aaron,

Please find the requested info. Attaching the plan.

adding some more info


OS Name Microsoft Windows Server 2012 R2 Datacenter
Version 6.3.9600 Build 9600
System Model Virtual Machine
System Type x64-based PC
System SKU Unsupported
Processor Intel(R) Xeon(R) CPU E5-4650L 0 @ 2.60GHz, 2594 Mhz, 8 Core(s), 8 Logical Processor(s)
Processor Intel(R) Xeon(R) CPU E5-4650L 0 @ 2.60GHz, 2594 Mhz, 8 Core(s), 8 Logical Processor(s)
Installed Physical Memory (RAM) 112 GB
Total Physical Memory 112 GB
Available Physical Memory 16.0 GB
Total Virtual Memory 128 GB
Available Virtual Memory 29.9 GB
Page File Space 16.0 GB
A hypervisor has been detected. Features required for Hyper-V will not be displayed.

SQL Server Details

SQL Server 2014 SP1
Enterprise Edition

Wait types observed for that SPID


sp_configure settings

max server memory (MB) = 90603 (i.e. ~88 GB)
cost threshold for parallelism = 5
max degree of parallelism = 4
network packet size = 4096
optimize for ad hoc workloads = not enabled

Trace flags which are ON



Aaron Bertrand 2016-08-01 19:28:46
Thanks, more to work from here. A couple of follow-up questions. Does this actually return 4.9 million rows to a client? Where are they trying to render this, a Management Studio grid? Given the fact that – even though the query goes parallel – total duration is 4x greater than CPU, SQL Server is waiting on something else, and I suspect that is where the ASYNC_NETWORK_IO waits are coming from – SQL Server is waiting for SSMS (or whatever client app is being used) to parse and render 4.9 million rows. What happens if you run the same query with TOP (10)? What is an end user really going to do with 4.9 million rows? How about with OPTION (MAXDOP 1)? Also, if you look at the warnings under the SELECT operator, it is clear that there are joins being attempted between mismatched data types. Any possibility of ensuring that all join columns are using the same data type and length (/precision/scale)?
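Aaron's two diagnostic suggestions can be sketched like this (the table and column names are placeholders; the real report query isn't shown in the thread):

```sql
-- Test 1: does limiting the result set change the wait profile?
-- If ASYNC_NETWORK_IO largely disappears, the client is the bottleneck.
SELECT TOP (10) <column list>
FROM dbo.ReportSource            -- placeholder for the real tables/joins
ORDER BY <sort column>;

-- Test 2: the same query, forced to run serially,
-- to see how duration compares with the parallel plan.
SELECT <column list>
FROM dbo.ReportSource            -- placeholder
OPTION (MAXDOP 1);
```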
Maahi 2016-08-02 02:57:16
Thanks Aaron for looking into it. Basically, the 4.9 million rows are being rendered in a drill-down SSRS report. For testing purposes, I ran the query in SSMS. However, as you said, even from the report I see ASYNC_NETWORK_IO waits queuing up.

Definitely I can check for the mismatched data types. The problem is that this is a data warehouse environment, and the joins are against a cleansing database where all data types are varchar, because they dump data from various upstream systems there before loading it into a data mart. That's the reason for the data type mismatches. One question here: can we use a conversion function in the joins so that an implicit conversion is avoided and an index can still be used? Could you please give me a small example of avoiding implicit conversions in joins without having to change the data types in the table design?

  • Will check the OPTION (MAXDOP 1) hint. Is it a good option to go with MAXDOP 1 while returning 4.9 million rows?
Aaron Bertrand 2016-08-02 14:35:39
I don't think there is any magic way to get rid of warnings without fixing the data type mismatch. And I really don't think this is the basis of your problem anyway – it's waiting for the report to consume and render 4.9 million rows. I'm not sure if MAXDOP will help, was just curious to see if that made it worse or better.
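To make the mismatch concrete (all object names here are hypothetical; assume a cleansing table storing keys as varchar joined to a fact table with an indexed int key), the clean fix is to align the column types, and the fallback is to convert the varchar side explicitly so the indexed int column is left untouched:

```sql
-- Preferred: fix the column type so both join sides match exactly.
ALTER TABLE Cleansing.dbo.StagedOrders
    ALTER COLUMN CustomerID int NOT NULL;

-- Fallback: convert the varchar side, not the indexed int side,
-- so SQL Server can still seek on FactOrders' CustomerID index.
-- TRY_CONVERT (SQL Server 2012+) returns NULL for non-numeric junk
-- instead of raising a conversion error.
SELECT f.CustomerID, f.OrderTotal
FROM DataMart.dbo.FactOrders AS f
INNER JOIN Cleansing.dbo.StagedOrders AS s
    ON f.CustomerID = TRY_CONVERT(int, s.CustomerID);
```

This removes the warning only because the conversion becomes explicit; as Aaron says, it doesn't change the fundamental cost of shipping 4.9 million rows.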
Maahi 2016-08-02 15:05:43
Thanks Aaron. Setting MAXDOP option to 1 didn't help.
Any other options I can try to improve the query performance?
Aaron Bertrand 2016-08-02 15:55:57
I wasn't expecting MAXDOP to make the query take less than 2 hours. Was just curious what impact it had on the fact that your duration was 4x CPU.

As for speeding up the query, no, not really. If you absolutely need to send 4.9 million rows to a report, you're going to have to scan, because 4.9 million seeks don't make sense (well, they might at some upper bound, like 8 billion rows in the table). And you're going to have to wait for 4.9 million rows to go across the wire, get consumed by the report, and ultimately rendered.

My ideas around making this tolerable have nothing to do with the query itself. But rather whether the report can take a subset of the rows (say, TOP 1000 ordered by who knows what), or if you can pre-aggregate the data, or if you can use a caching mechanism.

Maahi 2016-08-02 16:09:18
Not sure why they do it this way, pulling all the data onto the client machine, already in exactly the format needed, to render directly in SSRS reports. It might be to reduce the number of round trips to the database server.

Aaron, can you please shed some more light on pre-aggregating data. How can we achieve it?

Aaron Bertrand 2016-08-02 16:13:20
I mean have a scheduled job that stuffs the results of that query into a separate table (preferably on a separate instance on the report server). Then the report pulls from that table (not over the network) instead of from the primary SQL Server every time. I wrote about this a bit here and here – this wasn't about aggregating / pivoting / unpivoting data, but the core concept is the same. You could also look at caching solutions like memcached – you can find plenty of info on these technologies online, we're going way beyond the scope of this site here.
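Aaron's pre-aggregation idea might look roughly like this (object and server names are hypothetical; the refresh would be scheduled as a SQL Server Agent job on the reporting instance):

```sql
-- Agent job step: refresh the report snapshot table on the
-- reporting instance, so the report reads locally instead of
-- pulling 4.9M rows from the primary server on every run.
BEGIN TRANSACTION;

TRUNCATE TABLE Reporting.dbo.ReportSnapshot;

INSERT INTO Reporting.dbo.ReportSnapshot (/* column list */)
SELECT /* the expensive query, or a pre-aggregated version of it */
FROM PrimaryServer.SourceDB.dbo.FactOrders; -- e.g. via a linked server

COMMIT TRANSACTION;
```

The SSRS dataset then points at Reporting.dbo.ReportSnapshot, and the heavy query runs once per schedule rather than once per report execution.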
Maahi 2016-08-02 16:25:21
One last question: will OPTION (FAST 100) be useful in my case, or are there any gotchas around FAST?
Aaron Bertrand 2016-08-02 16:28:46
OPTION (FAST 100) tells SQL Server "try to get the first 100 rows as quickly as possible, even if it means getting the remaining 4,899,900 rows even slower." That doesn't mean it can and will get those first 100 rows any quicker, and even if it can, I have no idea how your report works, if it can start rendering once it has received 100 rows and this will change the perception of the end user, and so I have no way to judge whether getting the first 100 rows quicker is of any benefit. You'll have to test it.
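For reference, FAST is applied like any other query hint (placeholders again, since the real query isn't shown):

```sql
SELECT <column list>
FROM dbo.ReportSource          -- placeholder
ORDER BY <sort column>
OPTION (FAST 100);             -- optimize the plan for the first 100 rows
```

Whether the report benefits depends entirely on whether it can begin rendering before the full result set arrives, which is the point Aaron makes above.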
Maahi 2016-08-02 17:07:32
Thanks, Aaron, for the continued support.
Shaun_Goguen 2016-09-21 17:37:43
I would try using TOP (10) in your SSRS .rdl dataset SQL statement and watch your network Cacti graph to see how much bandwidth is being utilized. Then I would run your normal SQL statement and compare.

Rendering millions of rows across a network connection to wherever your VM is hosted can sometimes oversubscribe the available WAN bandwidth.

Is this an on-demand report where the end users can run it whenever they want?