Deleting records whose text column is the same


redbrad0
Posting Yak Master

176 Posts

Posted - 2005-03-18 : 18:49:34
I am running a script that is constantly adding information to the database. I cannot have any duplicates in the text field, so before I insert a record I delete anything that has the same text. I am running an SP to delete the articles by comparing the text field. This table has 140,000 records in it and will be growing into the millions shortly. Can someone take a look and tell me how this could be improved?

SP

CREATE procedure sp_DeleteFeedsByDesc (@FeedDesc text)
as
DELETE FROM RSS_Feeds_Data WHERE (Data_Desc LIKE @FeedDesc)
GO


Table Structure

CREATE TABLE [dbo].[RSS_Feeds_Data] (
[Data_ID] [int] IDENTITY (1, 1) NOT NULL ,
[Data_FeedID] [int] NULL ,
[Data_Title] [nvarchar] (400) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Data_Link] [nvarchar] (400) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Data_Desc] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Data_Date] [nvarchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbo].[RSS_Feeds_Data] WITH NOCHECK ADD
CONSTRAINT [PK_RSS_Feeds_Data_1] PRIMARY KEY CLUSTERED
(
[Data_ID]
) ON [PRIMARY]
GO

CREATE INDEX [RSS_Feeds_Data2] ON [dbo].[RSS_Feeds_Data]([Data_FeedID], [Data_Date]) ON [PRIMARY]
GO

if (select DATABASEPROPERTY(DB_NAME(), N'IsFullTextEnabled')) <> 1
exec sp_fulltext_database N'enable'

GO

if not exists (select * from dbo.sysfulltextcatalogs where name = N'FT_RSS_Feed_Data')
exec sp_fulltext_catalog N'FT_RSS_Feed_Data', N'create'

GO

exec sp_fulltext_table N'[dbo].[RSS_Feeds_Data]', N'create', N'FT_RSS_Feed_Data', N'PK_RSS_Feeds_Data_1'
GO

exec sp_fulltext_column N'[dbo].[RSS_Feeds_Data]', N'Data_Title', N'add', 1033
GO

exec sp_fulltext_column N'[dbo].[RSS_Feeds_Data]', N'Data_Link', N'add', 1033
GO

exec sp_fulltext_column N'[dbo].[RSS_Feeds_Data]', N'Data_Desc', N'add', 1033
GO

exec sp_fulltext_column N'[dbo].[RSS_Feeds_Data]', N'Data_Date', N'add', 1033
GO

exec sp_fulltext_table N'[dbo].[RSS_Feeds_Data]', N'activate'
GO


Quality Web Hosting & Design
http://www.eznthosting.com

redbrad0
Posting Yak Master

176 Posts

Posted - 2005-03-19 : 02:27:18
I was thinking about this tonight: would it be better if I stored the first 100 characters in an nvarchar(100) field, and then first returned all the records whose first 100 characters are exactly the same? Then, if anything is returned, I can go through those records and delete them. To me this does not seem like a smart database structure, but I have to be able to speed this up.
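
Something like this is roughly what I have in mind (just a sketch; the extra column, index, and procedure names are made up, and the insert code would also have to fill the new column in):

-- Sketch only: store the first 100 characters redundantly and index them
ALTER TABLE dbo.RSS_Feeds_Data ADD Data_Desc_First100 nvarchar(100) NULL
GO
CREATE INDEX IX_Data_Desc_First100 ON dbo.RSS_Feeds_Data (Data_Desc_First100)
GO
-- The delete would then pre-filter on the short, indexed column first
CREATE procedure sp_DeleteFeedsByDesc100 (@FeedDesc text)
as
DELETE FROM RSS_Feeds_Data
WHERE Data_Desc_First100 = SUBSTRING(@FeedDesc, 1, 100) -- fast, indexed pre-filter
AND Data_Desc LIKE @FeedDesc                            -- exact check on the few survivors
GO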

Quality Web Hosting & Design
http://www.eznthosting.com

Kristen
Test

22859 Posts

Posted - 2005-03-19 : 03:15:09
We don't bother to compare past the first 8,000 characters. I know that's a bit slack, but it's easy to program!

WHERE DATALENGTH(Table1.MyTextCol) = DATALENGTH(Table2.MyTextCol)
AND CONVERT(varchar(8000), Table1.MyTextCol) COLLATE Latin1_General_BIN <> CONVERT(varchar(8000), Table2.MyTextCol)

(Leave the COLLATE Latin1_General_BIN out if you don't need a case-sensitive comparison)

Then go on and compare those for exact matches
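
If you wanted to hunt down the duplicates already sitting in the table, the same idea as a self-join would look something like this (a sketch only; note it uses = to keep the rows that agree on length and on the first 8000 characters, and anything it returns still needs the full comparison):

-- Sketch: candidate duplicate pairs, compared only on the first 8000 characters
SELECT T1.Data_ID, T2.Data_ID
FROM RSS_Feeds_Data T1
JOIN RSS_Feeds_Data T2
ON T1.Data_ID < T2.Data_ID
AND DATALENGTH(T1.Data_Desc) = DATALENGTH(T2.Data_Desc)
AND CONVERT(varchar(8000), T1.Data_Desc) = CONVERT(varchar(8000), T2.Data_Desc)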

Kristen

redbrad0
Posting Yak Master

176 Posts

Posted - 2005-03-19 : 11:33:21
That's a good idea.... I tried the following, but I get an error saying that Data_Desc does not match with a table name or alias used in the query. I know that column is in the RSS_Feeds_Data table.


CREATE procedure sp_chkFeedDupe (@FeedDesc text)
as
SELECT Data_ID, Data_Desc
FROM RSS_Feeds_Data
WHERE DATALENGTH(Data_Desc.MyTextCol) = DATALENGTH(@FeedDesc)
AND CONVERT(varchar(500),Data_Desc.MyTextCol) <> CONVERT(varchar(500),@FeedDesc)
GO


Quality Web Hosting & Design
http://www.eznthosting.com

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2005-03-19 : 14:39:27
Instead of storing the first 100 characters, you can add up a series of binary checksums and store them in an 8-byte BIGINT column. There is a very small chance of a collision (less than 1 in 4,000,000,000), but that should be really rare and is easy to check for before you delete a duplicate. A big advantage is that you can create an index on the Data_Desc_Checksum column for fast lookups.

I assumed that most of the time your data is not that long, so the code I am posting uses more, smaller substrings for the first part of the column. It shows an example of how to compute a checksum over the first 25,000 characters of a text column.

It also looks like you can do this as a computed column. This might make it easier to change your application. I included code to do this and to create an index on the computed column.


select
Data_Desc_Checksum =
convert(bigint,binary_checksum(substring(Data_Desc, 1, 100)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 101, 200)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 201, 300)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 301, 400)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 401, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 501, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 1001, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 1501, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 2001, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 2501, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 3001, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 3501, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 4001, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 4501, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 5001,5000)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc,10001,5000)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc,15001,5000)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc,20001,5000)))+3000000000
from
[dbo].[RSS_Feeds_Data]


-- Create the table with a computed checksum column
drop table [dbo].[RSS_Feeds_Data]
go
CREATE TABLE [dbo].[RSS_Feeds_Data] (
[Data_ID] [int] IDENTITY (1, 1) NOT NULL ,
[Data_FeedID] [int] NULL ,
[Data_Title] [nvarchar] (400) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Data_Link] [nvarchar] (400) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Data_Desc] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Data_Date] [nvarchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
Data_Desc_Checksum as
(
convert(bigint,binary_checksum(substring(Data_Desc, 1, 100)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 101, 200)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 201, 300)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 301, 400)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 401, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 501, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 1001, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 1501, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 2001, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 2501, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 3001, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 3501, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 4001, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 4501, 500)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 5001,5000)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc,10001,5000)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc,15001,5000)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc,20001,5000)))+3000000000
)
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]


go


-- Create an index on the computed checksum
CREATE INDEX [IX_Data_Desc_Checksum2] ON
[dbo].[RSS_Feeds_Data]([Data_Desc_Checksum]) ON [PRIMARY]



quote:
Originally posted by redbrad0

I was thinking about this tonight: would it be better if I stored the first 100 characters in an nvarchar(100) field, and then first returned all the records whose first 100 characters are exactly the same? Then, if anything is returned, I can go through those records and delete them. To me this does not seem like a smart database structure, but I have to be able to speed this up.

Quality Web Hosting & Design
http://www.eznthosting.com



CODO ERGO SUM

TG
Master Smack Fu Yak Hacker

6065 Posts

Posted - 2005-03-19 : 16:03:40
Very nice solution, MVJ. But what is the reasoning behind the overlapping substrings on lines 2-5? (Or is that just a brain fart?)

convert(bigint,binary_checksum(substring(Data_Desc, 101, 200)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 201, 300)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 301, 400)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 401, 500)))+3000000000+


Be One with the Optimizer
TG

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2005-03-19 : 16:25:53
Brain lock.

When I thought about this some more, I decided that was overkill. The chance of a collision on even a single checksum is small if the table doesn't contain millions of rows, and no amount of extra checksumming really eliminates it anyway.

I am posting a simpler solution here. It also uses only a 4-byte integer for the checksum, so the storage is less for the index. It's probably less overhead for inserts too.



select
Data_Desc_Checksum =
binary_checksum(substring(Data_Desc,1,8000))
from
[dbo].[RSS_Feeds_Data]


-- Create the table with a computed checksum column
drop table [dbo].[RSS_Feeds_Data]
go
CREATE TABLE [dbo].[RSS_Feeds_Data] (
[Data_ID] [int] IDENTITY (1, 1) NOT NULL ,
[Data_FeedID] [int] NULL ,
[Data_Title] [nvarchar] (400) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Data_Link] [nvarchar] (400) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Data_Desc] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Data_Date] [nvarchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
Data_Desc_Checksum as
(binary_checksum(substring(Data_Desc,1,8000)))
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]


go

-- Create an index on the computed checksum
CREATE INDEX [IX_Data_Desc_Checksum2] ON
[dbo].[RSS_Feeds_Data]([Data_Desc_Checksum]) ON [PRIMARY]



quote:
Originally posted by TG

Very nice solution, MVJ. But what is the reasoning behind the overlapping substrings on lines 2-5? (Or is that just a brain fart?)

convert(bigint,binary_checksum(substring(Data_Desc, 101, 200)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 201, 300)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 301, 400)))+3000000000+
convert(bigint,binary_checksum(substring(Data_Desc, 401, 500)))+3000000000+


Be One with the Optimizer
TG



CODO ERGO SUM

Kristen
Test

22859 Posts

Posted - 2005-03-20 : 02:33:46
MVJ isn't that just checksumming the first 8,000 characters - which can be compared (with CAST to VARCHAR) anyway?

redbrad0:

CREATE procedure sp_chkFeedDupe (@FeedDesc text)
as
SELECT Data_ID, Data_Desc
FROM RSS_Feeds_Data
WHERE DATALENGTH(Data_Desc) = DATALENGTH(@FeedDesc)
AND CONVERT(varchar(8000), Data_Desc) = CONVERT(varchar(8000), @FeedDesc)
GO

Kristen

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2005-03-20 : 04:14:05
Yes, but the point is to store the checksum as an indexed integer column. redbrad0 said she would eventually have millions of rows in this table, so being able to do a direct index lookup for possible duplicates will be a lot faster than a table scan that searches a text column.

DELETE FROM RSS_Feeds_Data
WHERE
Data_Desc_Checksum =
binary_checksum(substring(@FeedDesc,1,8000))
and
Data_Desc LIKE @FeedDesc



quote:
Originally posted by Kristen

MVJ isn't that just checksumming the first 8,000 characters - which can be compared (with CAST to VARCHAR) anyway?

redbrad0:

CREATE procedure sp_chkFeedDupe (@FeedDesc text)
as
SELECT Data_ID, Data_Desc
FROM RSS_Feeds_Data
WHERE DATALENGTH(Data_Desc) = DATALENGTH(@FeedDesc)
AND CONVERT(varchar(8000), Data_Desc) = CONVERT(varchar(8000), @FeedDesc)
GO

Kristen



CODO ERGO SUM

redbrad0
Posting Yak Master

176 Posts

Posted - 2005-03-20 : 14:25:01
Michael Valentine Jones,

You said the solution you posted is better, but with over a million records could it end up deleting records that are not really dupes? If I can get this software working right I expect it to have maybe 4 to 5 million records in the table at all times, with deletes, adds, and searches running all day long, every day.

On this checksum... what exactly is it doing... comparing the length of the string?

Quality Web Hosting & Design
http://www.eznthosting.com

TG
Master Smack Fu Yak Hacker

6065 Posts

Posted - 2005-03-20 : 15:27:47
The value of MVJ's binary_checksum solution is that you can very quickly and efficiently identify the POSSIBLE dupes by way of an index on the Data_Desc_Checksum column. It may return values that aren't dupes, but it will never miss values that are dupes. That's why he coded it with two conditions, like this:

DELETE FROM RSS_Feeds_Data
WHERE
Data_Desc_Checksum =
binary_checksum(substring(@FeedDesc,1,8000))
and
Data_Desc LIKE @FeedDesc


"where Data_Desc_Checksum = binary_checksum(substring(@FeedDesc,1,8000)))"
will very quickly identify the possible dupes. This will likely reduce the matches to a very managable amount (probably 1).

"and Data_Desc LIKE @FeedDesc"
This will be your actual check, which will ensure you're only deleting a real dupe. But it will only need to check a small number of records because of the first condition.

Binary_Checksum does more than just count characters. I'm not sure exactly how it is calculated, but each character, and the order in which it appears, affects the result.

SELECT binary_checksum('abc')
,binary_checksum('acb')
,binary_checksum('bac')
,binary_checksum('cba')




Be One with the Optimizer
TG

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2005-03-20 : 15:32:13
No, it wouldn't delete anything that isn't a real duplicate. It has the same condition you currently have, "Data_Desc LIKE @FeedDesc", but the comparison to the checksum allows it to do a fast indexed lookup for possible duplicates first. If there are collisions on the checksum, the LIKE will only have to compare 2 or 3 rows of text, not scan the whole 5-million-row table.

This is exactly what checksum was meant for. I use it for an application where we have a table with about 250 million varchar(50) values. We use a checksum index to do fast lookups. It's much faster, because it's an integer index, and it's much smaller than having a direct index on a varchar(50) column.

A checksum is a hash that produces an integer from a string of arbitrary length, and it can return any value in the range of an integer. This means that there are over 4 billion possible values, so the checksum of any row in your 5 million row table would only have about 1 chance in 800 (roughly 5,000,000 out of 4,294,967,296) of colliding with another row's checksum.

You should read about it in Books Online:
"The CHECKSUM function may be used to build hash indices. The hash index is built by adding a computed checksum column to the table being indexed, then building an index on the checksum column."



DELETE FROM RSS_Feeds_Data
WHERE
Data_Desc_Checksum =
binary_checksum(substring(@FeedDesc,1,8000))
and
Data_Desc LIKE @FeedDesc



quote:
Originally posted by redbrad0

Michael Valentine Jones,

You said the solution you posted is better, but with over a million records could it end up deleting records that are not really dupes? If I can get this software working right I expect it to have maybe 4 to 5 million records in the table at all times, with deletes, adds, and searches running all day long, every day.

On this checksum... what exactly is it doing... comparing the length of the string?

Quality Web Hosting & Design
http://www.eznthosting.com



CODO ERGO SUM

TG
Master Smack Fu Yak Hacker

6065 Posts

Posted - 2005-03-20 : 15:39:00
Sorry to jump in there, MVJ. I thought I'd be the only nut on-line on a Sunday. 250 million records! Wow! How much faster is the index search than the varchar search?

Be One with the Optimizer
TG

redbrad0
Posting Yak Master

176 Posts

Posted - 2005-03-20 : 16:31:41
Thanks guys, you have been so helpful; I love this forum. So let me make sure I have this correct so I do not screw anything up. I am going to run the script below to just alter the table and add the checksum. Will it automatically figure the checksum for the items already in the database, and will my inserts/updates into this table need to change at all?


-- Create the table with a computed checksum column
drop table [dbo].[RSS_Feeds_Data]
go
ALTER TABLE [dbo].[RSS_Feeds_Data] (
Data_Desc_Checksum as
(binary_checksum(substring(Data_Desc,1,8000)))
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]


go

-- Create an index on the computed checksum
ALTER INDEX [IX_Data_Desc_Checksum2] ON
[dbo].[RSS_Feeds_Data]([Data_Desc_Checksum]) ON [PRIMARY]




Quality Web Hosting & Design
http://www.eznthosting.com

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2005-03-20 : 16:36:09
I can't really say that we tested it with a varchar index, because the space considerations for an index on a varchar(50) column with an average length of 30 characters compared to an integer (7 GB vs 1 GB) made the checksum index the way to go. The design of the application made a clustered index on the varchar column impractical.

Considering that we do fairly large inserts into this table (1 million plus rows) and maintenance issues like index rebuilds, the varchar index would have been a nightmare. The transaction logs are hard enough to deal with now, but that would have sent them out of control. "You need another 50 GB of SAN storage for logs?"

Even with this many rows, about 95% of the rows do not have duplicates, so the index lookups are fast.


quote:
Originally posted by TG

Sorry to jump in there, MVJ. I thought I'd be the only nut on-line on a Sunday. 250 million records! Wow! How much faster is the index search than the varchar search?

Be One with the Optimizer
TG



CODO ERGO SUM

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2005-03-20 : 16:47:26
I have never done this on a text column, so I am not sure what happens with various types of updates, especially if you are using UPDATETEXT and WRITETEXT. You should carefully test this to make sure it does what you want. If there is a problem, as an alternative, you can create a real column to hold the checksum, and manually update that in your stored procedure after you change the text column.
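
If you do go that route, the shape would be something like this (just a sketch; @DataID stands in for whatever key your procedure already knows about):

-- Sketch: a real (non-computed) checksum column, kept up to date by your own code
ALTER TABLE dbo.RSS_Feeds_Data ADD Data_Desc_Checksum int NULL
GO
CREATE INDEX IX_Data_Desc_Checksum ON dbo.RSS_Feeds_Data (Data_Desc_Checksum)
GO
-- After inserting or changing Data_Desc, refresh the checksum for that row,
-- for example right after the INSERT in your stored procedure
DECLARE @DataID int -- the row you just inserted or changed (e.g. SCOPE_IDENTITY() after the INSERT)
UPDATE dbo.RSS_Feeds_Data
SET Data_Desc_Checksum = binary_checksum(substring(Data_Desc, 1, 8000))
WHERE Data_ID = @DataID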


quote:
Originally posted by redbrad0

Thanks guys, you have been so helpful; I love this forum. So let me make sure I have this correct so I do not screw anything up. I am going to run the script below to just alter the table and add the checksum. Will it automatically figure the checksum for the items already in the database, and will my inserts/updates into this table need to change at all?


-- Create the table with a computed checksum column
drop table [dbo].[RSS_Feeds_Data]
go
ALTER TABLE [dbo].[RSS_Feeds_Data] (
Data_Desc_Checksum as
(binary_checksum(substring(Data_Desc,1,8000)))
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]


go

-- Create an index on the computed checksum
ALTER INDEX [IX_Data_Desc_Checksum2] ON
[dbo].[RSS_Feeds_Data]([Data_Desc_Checksum]) ON [PRIMARY]




Quality Web Hosting & Design
http://www.eznthosting.com



CODO ERGO SUM

redbrad0
Posting Yak Master

176 Posts

Posted - 2005-03-20 : 16:56:35
Well I am in the process of backing up the database and then will run the above and hope for the best. Will keep you guys informed.

Quality Web Hosting & Design
http://www.eznthosting.com

redbrad0
Posting Yak Master

176 Posts

Posted - 2005-03-20 : 17:11:15
Just running this QUERY I get this error....


ALTER TABLE [dbo].[RSS_Feeds_Data] (
Data_Desc_Checksum as
(binary_checksum(substring(Data_Desc,1,8000)))
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]


go


ERROR

Server: Msg 170, Level 15, State 1, Line 1
Line 1: Incorrect syntax near '('.


Quality NT Web Hosting & Design

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2005-03-20 : 17:52:18
You should look up the correct syntax for ALTER TABLE in Books Online.
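
For reference, the general shape you are after is ALTER TABLE ... ADD; something like this (not tested against your table):

ALTER TABLE dbo.RSS_Feeds_Data
ADD Data_Desc_Checksum AS (binary_checksum(substring(Data_Desc, 1, 8000)))
GO
-- and then index the computed column
CREATE INDEX IX_Data_Desc_Checksum2 ON dbo.RSS_Feeds_Data (Data_Desc_Checksum) ON [PRIMARY]
GO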



quote:
Originally posted by redbrad0

Just running this QUERY I get this error....


ALTER TABLE [dbo].[RSS_Feeds_Data] (
Data_Desc_Checksum as
(binary_checksum(substring(Data_Desc,1,8000)))
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]


go


ERROR

Server: Msg 170, Level 15, State 1, Line 1
Line 1: Incorrect syntax near '('.


Quality NT Web Hosting & Design



CODO ERGO SUM

redbrad0
Posting Yak Master

176 Posts

Posted - 2005-03-20 : 19:45:35
I thought I had the ALTER about right, but when I looked it up I saw I was a little off.

Now I am having a different problem, now that I have converted the table to include the computed checksum column. Doing some research, it appears I need to set ARITHABORT on, but that does not seem to help. Here is each SP with the error it generates.


DELETE
CREATE procedure sp_DeleteFeedsByDesc (@FeedDesc text)
as
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON

DELETE FROM RSS_Feeds_Data WHERE Data_Desc_Checksum = binary_checksum(substring(@FeedDesc,1,8000))
AND Data_Desc LIKE @FeedDesc
GO
[Microsoft][ODBC SQL Server Driver][SQL Server]DELETE failed because the following
SET options have incorrect settings: 'ANSI_NULLS, QUOTED_IDENTIFIER, ARITHABORT'.


INSERT
CREATE procedure sp_InsertFeedData (@FeedID int, @FeedDataTitle nvarchar(300),
@FeedDataLink nvarchar(300), @FeedDataDesc text, @FeedDataDate nvarchar(20))
as
SET ARITHABORT ON
INSERT INTO RSS_Feeds_Data (Data_FeedID, Data_Title, Data_Link, Data_Desc, Data_Date)
VALUES (@FeedID, @FeedDataTitle, @FeedDataLink, @FeedDataDesc, @FeedDataDate)
GO
[Microsoft][ODBC SQL Server Driver][SQL Server]INSERT failed because the following
SET options have incorrect settings: 'ARITHABORT'.



Now if I run the SP through Enterprise Manager it works just fine. Articles about this error say that EM has ARITHABORT on by default, but for some reason I can not get it to work.
Quality NT Web Hosting & Design

Kristen
Test

22859 Posts

Posted - 2005-03-21 : 00:24:57
I try to avoid having calculated columns in tables. They seem to raise compatibility issues with ARITHABORT that we struggle to work around. We calculate them in a trigger, or the SProc that creates/changes the data (you can't manipulate a TEXT column in a regular trigger, although I expect you will be able to grab the first 8000 characters)
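
For example, your existing insert SProc would just grow the extra column. A sketch only: it assumes Data_Desc_Checksum has been added as a real INT column rather than a computed one, and that you drop or ALTER the old procedure first.

CREATE procedure sp_InsertFeedData (@FeedID int, @FeedDataTitle nvarchar(300),
@FeedDataLink nvarchar(300), @FeedDataDesc text, @FeedDataDate nvarchar(20))
as
INSERT INTO RSS_Feeds_Data
(Data_FeedID, Data_Title, Data_Link, Data_Desc, Data_Date, Data_Desc_Checksum)
VALUES
(@FeedID, @FeedDataTitle, @FeedDataLink, @FeedDataDesc, @FeedDataDate,
binary_checksum(substring(@FeedDataDesc, 1, 8000)))
GO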

Kristen