
 SQL Server startup option "-g"


tfountain
Constraint Violating Yak Guru

491 Posts

Posted - 2008-11-18 : 13:38:00
This stems from this ongoing issue I posted in the past - http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=101511&SearchTerms=linked,server.

Based on this MSDN article (http://msdn.microsoft.com/en-us/library/ms190737(SQL.90).aspx), I have a theory that I may be running out of memory reserved for loading "extended procedure .dll files, the OLE DB providers referenced by distributed queries, and automation objects referenced in Transact-SQL statements" (excerpted from the documentation of the -g startup parameter).

My question, for anyone willing to answer: is there a way to measure this specific memory region? Is there some sort of performance counter maintained for it (e.g., anything in sys.dm_os_performance_counters)?

The reason I suspect this value needs to be increased is that if I reconfigure the linked server to use an ODBC data source (one that uses the same JET provider), an error is still generated, but the message becomes "system resources exceeded". I would like to monitor the situation to confirm whether this change will resolve our problem.
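For reference, the closest thing I have found to a direct measurement is a query against sys.dm_os_virtual_address_dump, an undocumented and unsupported DMV that lists the regions of the process's virtual address space; free regions report a zero allocation base address, so the largest such region is the biggest contiguous block available. A sketch (column names are from the undocumented DMV, so treat this as unsupported):

```sql
-- Total free virtual address space and the largest contiguous free block.
-- Free regions are those with a zero allocation base address.
SELECT
    SUM(region_size_in_bytes) / 1024 AS total_free_vas_kb,
    MAX(region_size_in_bytes) / 1024 AS largest_free_block_kb
FROM sys.dm_os_virtual_address_dump
WHERE region_allocation_base_address = 0x0;
```

If the largest free block drops below what a provider needs to load, allocations can fail even when plenty of total memory remains free.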

mcrowley
Aged Yak Warrior

771 Posts

Posted - 2008-11-18 : 14:37:40
It is not really a question of how much of that memory gets used; it is a question of how large the biggest contiguous segment is. The memory region in question can become fragmented fairly quickly, but I know of no way to measure this directly. Naturally, a restart of SQL Server removes all fragmentation in this region of memory. As well as the data cache and the procedure cache, of course.

tfountain
Constraint Violating Yak Guru

491 Posts

Posted - 2008-11-18 : 15:24:04
mcrowley - thanks for the response. Unfortunately this is a live production system, and even with an active/passive failover cluster, restarting the service causes minor interruptions. As frequently as this error occurs, I cannot justify that approach on a continuing basis.

All in all, I'm simply trying to verify whether increasing this memory pool would resolve the issue (the server has 32 GB of memory, so I have no problem raising the pool from the default 256 MB to, say, 1 GB).
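For anyone unfamiliar with the syntax: per the MSDN article above, -g takes the size in MB, written with no space between the flag and the value, and is added to the service's startup parameters (e.g., via SQL Server Configuration Manager, followed by a service restart). The file paths below are placeholders, not real ones:

```
-dC:\...\master.mdf;-eC:\...\ERRORLOG;-lC:\...\mastlog.ldf;-g1024
```

Only the trailing -g1024 is new; the -d, -e, and -l parameters are whatever the instance already uses.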

mcrowley
Aged Yak Warrior

771 Posts

Posted - 2008-11-19 : 09:41:20
Upping the memory will probably not eliminate the problem, but it should increase the mean time between failures. The section of memory will continue to get fragmented, but there is a better chance of finding a contiguous block that is big enough for whatever process you need to run.

The only time I have run into this error was with a SharePoint team server before its SP1 came out. The statements that inserted a document required space in this pool a hair larger than the document itself, so pressure on the pool skyrocketed, since most of the I/O was document traffic. In SP1, Microsoft changed how documents were inserted and relieved a great deal of that pressure. As I recall, they used streams to update/insert the documents.

tfountain
Constraint Violating Yak Guru

491 Posts

Posted - 2008-11-19 : 10:16:56
This all makes sense, then. The issue started occurring more frequently when a certain application was modified to run more processes concurrently. That application makes heavy use of this linked server, so the change translated to more simultaneous requests over the linked server.

All in all, I think this pool is simply being exhausted due to the increase in concurrency. I'll try increasing it and see if that works.

But on that note, is there a better way to address this if it is just a memory fragmentation issue?

tfountain
Constraint Violating Yak Guru

491 Posts

Posted - 2008-12-08 : 11:52:56
FYI for anyone who runs into this situation: we have been running with the "-g" startup option set to 1 GB for the past two weeks, and the issue has not resurfaced. Prior to this we would receive the error multiple times like clockwork, mainly on Fridays; Friday is when our users push through most of the weekly uploads that are processed via this linked server.

All in all, I believe this solved our issue.
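One way to confirm the option actually took effect after the restart is to look for the -g entry among the startup parameters recorded at the top of the current error log. The search-string parameters of xp_readerrorlog are undocumented, so treat this as a convenience sketch rather than a supported interface:

```sql
-- Search the current SQL Server error log (0 = current, 1 = SQL Server log)
-- for lines mentioning the -g startup parameter.
EXEC master.dbo.xp_readerrorlog 0, 1, '-g';
```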
   
