SQL Server Forums
 Timeout issues on new server

ferretneck
Starting Member

11 Posts

Posted - 11/18/2013 :  04:52:38
Hi all

I have recently moved my website to a new server and every few days am receiving wait operation timed out error messages. Looking at the processes the SQL server process is using upwards of 5GB of memory, which I don't remember it doing on the old server.

Either way, there is more than enough RAM in the server to handle this but I still receive timeout messages until I restart the SQL service for this instance.

My first question is why would I be getting time outs, which are cured by an SQL server service restart, even when the memory resources can cope with it easily?

Thanks in advance

Ben

James K
Flowing Fount of Yak Knowledge

3589 Posts

Posted - 11/18/2013 :  08:30:18
How much memory is allocated to SQL Server, and how much do you have on the system? If you don't restrict the amount SQL Server is allowed to use, it may grab so much memory that it starves other processes and the OS of required memory. That may or may not be the root cause of your problem, but it is the first thing I would look at. In SQL Server Management Studio, right-click the server node in Object Explorer, then Properties -> Memory tab. Maximum server memory should be set at a level that leaves enough memory for the OS and any other apps. On a dedicated system, leave 2 to 3 GB for the OS.
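The same setting can be inspected and changed from T-SQL instead of the SSMS dialog; a minimal sketch (the 4096 value is just an example cap, not a recommendation for your box):

```sql
-- 'max server memory (MB)' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- View the current setting (config_value / run_value are in MB)
EXEC sp_configure 'max server memory (MB)';

-- Example: cap this instance at 4 GB
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```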

ferretneck
Starting Member

11 Posts

Posted - 11/18/2013 :  08:56:40
Hi James, thanks for your reply. I will check this; however, what happens when the application hits the maximum memory allocation? Would it error then too?

James K
Flowing Fount of Yak Knowledge

3589 Posts

Posted - 11/18/2013 :  09:41:34
No, SQL Server should keep working correctly even after it hits the memory limit you have set for it. Pages may not remain in memory as long as they otherwise would (look up Page Life Expectancy), but SQL Server is capable of managing it. Remember that the maximum memory limit is set in MEGABYTES, not gigabytes.
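For reference, Page Life Expectancy can be read straight from the performance-counter DMV; a quick sketch:

```sql
-- Page Life Expectancy: average seconds a data page survives in the
-- buffer pool. The LIKE on object_name also matches named instances
-- (e.g. 'MSSQL$INSTANCE2:Buffer Manager').
SELECT object_name, counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Page life expectancy%'
  AND object_name LIKE '%Buffer Manager%';
```

A persistently low value after you cap memory suggests the working set no longer fits in the buffer pool.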

ferretneck
Starting Member

11 Posts

Posted - 11/18/2013 :  12:01:57
Hi, I've checked and it is currently set to around 2gb, which leaves plenty for other resources.

ferretneck
Starting Member

11 Posts

Posted - 11/18/2013 :  12:02:43
Actually, why then would the sqlserver process be using over 5GB if it is restricted to 2?

EDIT: This is probably because I have 2 instances running

Edited by - ferretneck on 11/18/2013 12:03:49

James K
Flowing Fount of Yak Knowledge

3589 Posts

Posted - 11/18/2013 :  12:55:25
If you have two instances of SQL Server running, each should show its memory usage independently of the other. Where are you seeing the memory usage? In Task Manager / Resource Monitor, or somewhere else?

The maximum memory used by SQL Server should stay very close to the maximum you have set, especially in SQL 2012 compared to previous versions. So if max memory is set to 2 GB and you are seeing 5 GB in Task Manager, something is not quite right.

Did you change the max memory setting and then check the memory consumption shortly afterwards? If so, it may still be OK. SQL Server releases memory gradually and will eventually come down to your max setting.
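One way to see each instance's actual memory use from inside SQL Server itself, rather than guessing which process is which in Task Manager, is the process-memory DMV; run this sketch in each instance separately:

```sql
-- Memory actually committed by THIS instance's process, reported by
-- SQL Server itself (run once per instance)
SELECT physical_memory_in_use_kb / 1024 AS memory_in_use_mb,
       large_page_allocations_kb / 1024 AS large_pages_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;
```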

tkizer
Almighty SQL Goddess

USA
36845 Posts

Posted - 11/18/2013 :  13:05:40
If the timeout issue happens again, see if DBCC FREEPROCCACHE resolves it before restarting SQL. Give it a few minutes after you run the command. If it does resolve it, let us know so that we can help you work out why and find a proper fix.
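For what it's worth, FREEPROCCACHE can also be scoped to a single plan if you'd rather not flush the whole cache; a sketch (the LIKE filter and the plan handle are hypothetical placeholders):

```sql
-- Clear the entire plan cache (the blunt instrument)
DBCC FREEPROCCACHE;

-- Or find the handle of one suspect plan first...
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%YourSuspectQuery%';   -- hypothetical filter

-- ...then free only that plan by pasting its handle:
-- DBCC FREEPROCCACHE (0x06...);           -- plan_handle from above
```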

Tara Kizer
SQL Server MVP since 2007
http://weblogs.sqlteam.com/tarad/

ferretneck
Starting Member

11 Posts

Posted - 11/19/2013 :  05:20:47
The memory settings have been the same for a few weeks now.

I've just checked and it's using 14GB now! How is this possible? Each instance is set to - I'm copying and pasting this - 2147483647 MB. (EDIT - this may be the issue, he he - I've changed it to 2048 now..)

Even so, the system is still running fine and not timing out, so I'm wondering if it's not memory related?

I'm running a profiler session at the moment as it could be a blocking issue.

Thanks Tara - I'll try that next time it happens.
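A quick alternative to a Profiler trace for spotting blocking is the requests DMV; a sketch that shows, at the moment you run it, who is waiting on whom and what they are running:

```sql
-- Sessions currently blocked, the session blocking them, and the
-- statement text of the blocked request
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_ms,
       t.text AS sql_text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;
```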

Edited by - ferretneck on 11/19/2013 05:22:39

James K
Flowing Fount of Yak Knowledge

3589 Posts

Posted - 11/19/2013 :  08:20:53
If you change it to 2048 MB, the memory usage should come down from 14 GB to somewhere around 2 GB (over a period of time, unless you restart the service).

When you had it at 2147483647 MB, you were telling SQL Server it is allowed to use roughly 2 million gigabytes of memory - that value is the default, i.e. effectively unlimited. You don't want that. What you should do is: a) look up how much memory the box has, b) add up the memory required by other processes (including any other SQL Server instances) running on the server, assuming the OS requires about 2 GB, and c) subtract b from a and enter the result as the max memory in megabytes.

For example, if you have a box with 12 GB memory, you would set the max memory as 10000 MB or thereabouts.
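Spelling the arithmetic out in T-SQL (10000 MB is the round figure above; the exact subtraction gives 10240):

```sql
-- Worked example for a 12 GB box with ~2 GB reserved for the OS:
--   12288 MB total - 2048 MB for the OS = 10240 MB for SQL Server
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 10240;
RECONFIGURE;
```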

ferretneck
Starting Member

11 Posts

Posted - 11/19/2013 :  08:32:04
I've set it to use 2 GB and it seems pretty happy there. Should I start seeing problems, I can up it by another 8 GB if necessary.

Not seen any timeouts again - rather weird!

Thanks again for your help.

ferretneck
Starting Member

11 Posts

Posted - 04/29/2014 :  10:41:22
Hi

I am yet again receiving these timeout errors.

Using DBCC FREEPROCCACHE solves the problem instantly.

The error only seems to be happening on one page.

Any help is appreciated!

Thanks

Ben

tkizer
Almighty SQL Goddess

USA
36845 Posts

Posted - 04/29/2014 :  11:47:25
You need to identify the culprit code. I would suggest setting up Adam Machanic's sp_WhoIsActive. You can then go back and see what was taking longer than before. It also gives you the estimated execution plan, which you can compare to the plan when things are running well. You can then use a plan guide to "force" the optimizer to stick with the good plan.
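Assuming sp_WhoIsActive is installed on the instance, a typical invocation that captures plans looks like this sketch:

```sql
-- sp_WhoIsActive: free stored procedure by Adam Machanic, shows
-- currently-running requests with duration, waits, and blocking info
EXEC sp_WhoIsActive
     @get_plans = 1,           -- include the execution plan XML per request
     @find_block_leaders = 1;  -- roll up blocking chains to the head blocker
```

Logging its output to a table on a schedule lets you go back after a timeout and see what was slow at the time.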

You should also look into statistics. Maybe you need to update stats manually and more frequently, such as hourly, depending on how often your data changes.
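A sketch of both the targeted and the database-wide approach (dbo.Orders is a hypothetical table name):

```sql
-- Rebuild statistics on one table by reading every row
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or refresh stale stats across the whole database;
-- this could be scheduled hourly as a SQL Agent job
EXEC sp_updatestats;
```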

You should also look into your indexes. Perhaps there is a bad index causing a bad plan. That's a situation we faced a while ago: the index was never used in a good plan, so we dropped it.


ferretneck
Starting Member

11 Posts

Posted - 04/29/2014 :  12:07:08
Thanks for the reply, there are some interesting suggestions here.

"Perhaps there is a bad index causing a bad plan"

What causes a bad index and how can I isolate it?

tkizer
Almighty SQL Goddess

USA
36845 Posts

Posted - 04/29/2014 :  12:48:48
Well, I mean it is only used in "bad" execution plans and never in a "good" one. I put bad and good in quotes because a "bad" execution plan was deemed good for one particular set of parameter values; it just might not be good for other parameter values. That's the major thing we deal with. We have teeny tiny customers and very large customers, and when the plan is optimized for a teeny tiny customer it causes severe performance degradation across the system. We put plan guides in place to either optimize for the large customer or to tell the optimizer to use a particular index. Which way you go depends on testing.
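One way to express "optimize for the large customer" is the OPTIMIZE FOR query hint; a sketch where the procedure, table, and the value 42 are all hypothetical stand-ins:

```sql
-- Force the optimizer to compile the plan as if @CustomerId were a
-- representative large customer, regardless of the value passed in
CREATE PROCEDURE dbo.GetOrders
    @CustomerId INT
AS
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR (@CustomerId = 42));  -- 42 = hypothetical big customer
GO
```

A plan guide created with sp_create_plan_guide can attach the same hint to existing queries without touching the application code, which is what makes it useful when you can't edit the source.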

There are other reasons an index may be "bad", such as:
1. Zero reads but tons and tons of writes
2. A duplicate of another index, either partially or entirely
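The first criterion can be checked with the index usage DMV; a sketch (note these counters reset when the instance restarts, so let them accumulate over a representative period first):

```sql
-- Reads vs. writes per index in the current database; an index with
-- zero reads and heavy writes is a candidate for dropping
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates AS writes
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY writes DESC;
```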

SQL Server Forums © 2000-2009 SQLTeam Publishing, LLC