
 SQL Server in Memory like TimesTen?


mkloster
Starting Member

7 Posts

Posted - 2006-09-03 : 15:30:59
Does SQL Server have any option that enables it (or an individual database) to run entirely in memory, like TimesTen? I have some long-running data-import DTS packages whose runtime I really need to shrink down, so I'll take any suggestions you might have.

Mike

rockmoose
SQL Natt Alfen

3279 Posts

Posted - 2006-09-03 : 17:19:03
Nope.

Identify which steps are running slowly, and try to understand why.
What are the DTS package(s) doing? If you describe this, it will be easier to point you in the right direction.
How many rows are they processing, how are they processing them, and what is the source of the data?

rockmoose

mkloster
Starting Member

7 Posts

Posted - 2006-09-05 : 10:00:49
Sorry, here is a little more information. The source is a flat, comma-delimited file. There are about 190,000 records (which take about 5-8 minutes to import, depending on the machine), and they are imported directly (no transformations, logic, etc.). There is only one task, the import task.

Because it is so basic, I'm convinced I won't be able to make a huge dent in the processing time unless I get a massive DB machine, and even then I'm not sure how much more I could help it. I could drop the indexes before I import the data, but I would have to reindex afterwards, so I don't know that the net gain would be all that great.

Any tips/suggestions would be a great help. Is SQL Server able to run a database completely in memory?

Thanks for your help!

Mike

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2006-09-05 : 10:04:26
If it is a flat file, try using BCP instead. Insert into a separate database containing only one table, an exact copy of your original table (without indexes), with the recovery model set to SIMPLE.

After importing the file, insert the data into your production database.
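
A minimal sketch of that flow (server, database, table, and file names are all made up for illustration). First the load, from a command prompt:

bcp StagingDB.dbo.ImportStage in C:\data\import.csv -S MYSERVER -T -c -t,

(-c is character mode, -t, sets a comma field terminator, -T uses a trusted connection.) Then move the staged rows into the production table:

INSERT INTO ProdDB.dbo.ImportTable (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM StagingDB.dbo.ImportStage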

190,000 rows should only take a few seconds to import this way.
How many rows are there in the destination table?


Peter Larsson
Helsingborg, Sweden

nr
SQLTeam MVY

12543 Posts

Posted - 2006-09-05 : 10:16:01
>> There are about 190,000 records (which takes about 5-8 minutes to import - depending upon the machine)
Unless these are really wide rows, this is very slow. I would expect seconds rather than minutes.

Could it be that there are a lot of indexes on the table?
Is the file local to the server? The network can take up a lot of time, so copy the file onto the server before importing.
Use bulk insert instead of DTS. DTS should be doing a bulk insert anyway, but BULK INSERT from T-SQL will be faster and easier to manage (sketch below).

The processing is all done in memory anyway, and the disk is only used when necessary, so there is no need to be able to explicitly hold a database in memory; that would be a really bad idea.

To get that sort of performance, I suspect it is probably a network thing. Are you running DTS on a client? It could be reading the file to that machine and then sending it to the server, so two hops.
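
For instance, a minimal BULK INSERT sketch (the table name and file path are hypothetical):

BULK INSERT dbo.ImportTable
FROM 'C:\data\import.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK)

The TABLOCK hint takes a table-level lock, which (together with the right recovery model) helps the load qualify for minimal logging.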


==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.

mkloster
Starting Member

7 Posts

Posted - 2006-09-05 : 10:39:11
Thanks for the suggestions. I kind of assumed SQL Server was using BCP in the background, but evidently I was wrong. The destination table is truncated before every data import, so there are no records when I start the import process. There are probably 3-4 indexes on the table, and the file is locally stored on the DB server.

I thought this performance was unreasonably slow. I will look at BCP and report back. Thanks a ton!

Mike

nr
SQLTeam MVY

12543 Posts

Posted - 2006-09-05 : 10:46:22
In that case, drop the indexes before the import and add them again afterwards (the clustered one first).
And use BULK INSERT from a local file.
It would be best to put all of this in a stored procedure.
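
A rough sketch of such a procedure (all table, index, column, and file names are hypothetical):

CREATE PROCEDURE dbo.LoadImportFile
AS
BEGIN
    TRUNCATE TABLE dbo.ImportTable

    -- drop the nonclustered indexes first, then the clustered one,
    -- so the nonclustered ones are not rebuilt as a side effect
    DROP INDEX dbo.ImportTable.IX_ImportTable_Col2
    DROP INDEX dbo.ImportTable.IX_ImportTable_Col1

    BULK INSERT dbo.ImportTable
    FROM 'C:\data\import.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK)

    -- recreate the clustered index first, then the nonclustered ones
    CREATE CLUSTERED INDEX IX_ImportTable_Col1 ON dbo.ImportTable (Col1)
    CREATE NONCLUSTERED INDEX IX_ImportTable_Col2 ON dbo.ImportTable (Col2)
END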


==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.

Kristen
Test

22859 Posts

Posted - 2006-09-05 : 14:38:11
"The destination table is truncated before every data import"

I would expect [but could be wrong!] that DTS is using DELETE rather than TRUNCATE, which might account for part of the slowness.
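
If it is, replacing the delete with an explicit TRUNCATE step is an easy win (table name hypothetical):

-- TRUNCATE deallocates whole pages and is minimally logged,
-- while DELETE writes a log record for every row
TRUNCATE TABLE dbo.ImportTable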

Kristen

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-09-05 : 14:55:35
If the data has a small number of changes, say under 10%, it may be faster to import into a temp table, update any changed rows in the permanent table, and then insert the new rows into the permanent table.

If you are going to replace the data completely, I would truncate the table, drop all the indexes (but leave the clustered primary key constraint if you have one), load the data into the table in sequence by primary key, and then create the indexes again.
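
A sketch of that ordered load (table, column, and file names are hypothetical; the file must genuinely be sorted by the key for the hint to be valid):

-- the ORDER hint tells the engine the file is already sorted by the
-- clustered key, so the sort before insertion can be skipped
BULK INSERT dbo.ImportTable
FROM 'C:\data\import.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK,
    ORDER (ImportID ASC))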





CODO ERGO SUM

mkloster
Starting Member

7 Posts

Posted - 2006-09-05 : 15:30:03
The first step of the DTS package is a TRUNCATE TABLE ... command that I wrote, so I know that part is very fast.

That is a great suggestion to update the rows instead of dumping and re-adding. I'll have to run an IF EXISTS to see whether I need to insert or update, so it may not be the most efficient. It would remove any gap when significant portions of the data are missing/unavailable, which is great. I'll have to look at how quickly that task can be performed. Thanks.

Mike

mkloster
Starting Member

7 Posts

Posted - 2006-09-05 : 16:28:30
Interesting. I was using the SQL Server ODBC connection because I was executing the package remotely. When I changed it to OLE DB, the process immediately went from several minutes to 10-12 seconds. Has anyone else noticed this? Does this indicate a configuration problem, or is it to be expected?

Mike

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2006-09-05 : 16:33:49
quote:
Originally posted by mkloster

The first step of the DTS package is a TRUNCATE TABLE ... command that I wrote, so I know that part is very fast.

That is a great suggestion to update the rows instead of dumping and re-adding. I'll have to run an IF EXISTS to see whether I need to insert or update, so it may not be the most efficient. It would remove any gap when significant portions of the data are missing/unavailable, which is great. I'll have to look at how quickly that task can be performed. Thanks.

Mike



Make sure you only update the rows that changed. If you update all rows, it will be much slower.

This is the basic logic you would use to insert missing rows:

insert into MyTable ( col list... )
select
a.col list...
from
InputTable a
left join
MyTable b
on a.PrimaryKeyValue = b.PrimaryKeyValue
where
-- PK not already in the table
b.PrimaryKeyValue is null
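
A matching sketch for the update side, restricted to rows that actually changed (Col1 and Col2 are hypothetical column names):

update b
set
Col1 = a.Col1,
Col2 = a.Col2
from
MyTable b
join
InputTable a
on b.PrimaryKeyValue = a.PrimaryKeyValue
where
-- only touch rows where something differs
b.Col1 <> a.Col1 or b.Col2 <> a.Col2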





CODO ERGO SUM