Please start any new threads on our new site at https://forums.sqlteam.com. We've got lots of great SQL Server experts to answer whatever question you can come up with.

 Automatically inserting int field during select


galbrecht
Starting Member

17 Posts

Posted - 2008-03-04 : 20:41:20
Hi

I have a table that has the following fields:

val_id | sequence_no | date | comment

09300/006.01 | | 01/01/2000 | ????????
09300/006.01 | | 02/03/2004 | ????????????


What I need is for sequence_no to start at 1 and increase by 1 in date order (oldest to newest) within each group of rows that share the same val_id.

Does anyone have any ideas how this can be done?

Thanks

Greg

dataguru1971
Master Smack Fu Yak Hacker

1464 Posts

Posted - 2008-03-04 : 20:50:34
So you want it to be like a running total?

Does it have to restart at 1 with a new val_id or is it to be a standalone unique identifier for the row of data?



Poor planning on your part does not constitute an emergency on my part.


dataguru1971
Master Smack Fu Yak Hacker

1464 Posts

Posted - 2008-03-04 : 21:03:59
[code]
Create table #foo ( val_id varchar(12) not null,
sequence_no int identity(1,1),
aDate datetime not null)

Insert Into #foo (val_id, aDate)

Select '09300/006.01', '20000101' UNION
Select '09300/006.01', '20040203' UNION
Select '09300/006.02', '20030101' UNION
Select '09300/006.03', '20050101' UNION
Select '09300/006.03', '20050201'

Select * from #foo
/*results
09300/006.01 1 2000-01-01 00:00:00.000
09300/006.01 2 2004-02-03 00:00:00.000
09300/006.02 3 2003-01-01 00:00:00.000
09300/006.03 4 2005-01-01 00:00:00.000
09300/006.03 5 2005-02-01 00:00:00.000*/

/*
Selecting into from another table, restarting count with new id
*/

Create Table #fubar ( val_id varchar(12) not null,
sequence_no int not null,
aDate datetime not null

)


Insert Into #fubar (val_id,sequence_no,adate)
Select a.val_id
,(Select Count(*)+1
FROM #foo b
where b.aDate < a.aDate and b.Val_id = a.Val_id )
,a.adate

FROM #foo a

Select * from #fubar
/*results
09300/006.01 1 2000-01-01 00:00:00.000
09300/006.01 2 2004-02-03 00:00:00.000
09300/006.02 1 2003-01-01 00:00:00.000
09300/006.03 1 2005-01-01 00:00:00.000
09300/006.03 2 2005-02-01 00:00:00.000
*/


Drop Table #Foo
Drop Table #fubar


[/code]



Poor planning on your part does not constitute an emergency on my part.


Jeff Moden
Aged Yak Warrior

652 Posts

Posted - 2008-03-04 : 21:51:15
Good idea. But, let's hope the Val_ID isn't repeated any more than just a couple of dozen times with that triangular join in there.

--Jeff Moden

galbrecht
Starting Member

17 Posts

Posted - 2008-03-04 : 22:02:57
Thanks Guys,

Could this not be done via an update statement to my table?

Greg

dataguru1971
Master Smack Fu Yak Hacker

1464 Posts

Posted - 2008-03-04 : 22:14:33
quote:
Originally posted by Jeff Moden

Good idea. But, let's hope the Val_ID isn't repeated any more than just a couple of dozen times with that triangular join in there.

--Jeff Moden



It was just an idea, since 2000 doesn't have row_number()...

Triangular join? It isn't that unheard of to use a subquery like that as a proxy row number, is it?

I did this against a 30 million row table with 297K unique customer numbers, and it was more complicated... it ran in about 13 minutes for an update of a needed incremental counter by customer for a specific condition. What would more than a couple of dozen different val_ids cause that would be a concern?




Poor planning on your part does not constitute an emergency on my part.


dataguru1971
Master Smack Fu Yak Hacker

1464 Posts

Posted - 2008-03-04 : 22:16:46
quote:
Originally posted by galbrecht

Thanks Guys,

Could this not be done via an update statement to my table?

Greg



Yes, but which is the right way? Do you just want a row-number equivalent, or a running count per val_id?

How many records are in the table? Are there a lot of dates per val_id?



Poor planning on your part does not constitute an emergency on my part.


galbrecht
Starting Member

17 Posts

Posted - 2008-03-04 : 22:22:28
Just a row count should do. There are about 60,000 rows in the table, and I don't think a valuation_id would be repeated more than 10 times.

Greg

dataguru1971
Master Smack Fu Yak Hacker

1464 Posts

Posted - 2008-03-04 : 22:38:23
The easiest way is to use the IDENTITY(seed, increment) function and SELECT INTO a new table, then drop the old table and rename the new one.
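For the whole-table numbering, that might look something like this ("yourtable" is just a placeholder name; also note that in SQL 2000 the ORDER BY isn't guaranteed to control the IDENTITY assignment order, so verify the results before dropping anything):

[code]
-- Build a renumbered copy of the table, then swap it in.
SELECT val_id,
       IDENTITY(int, 1, 1) AS sequence_no,
       [date],
       comment
INTO   yourtable_new
FROM   yourtable
ORDER  BY val_id, [date]

DROP TABLE yourtable
EXEC sp_rename 'yourtable_new', 'yourtable'
[/code]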


But an update could look something like my first example: first select your records into a temp table using the IDENTITY function, then update the main table from there:

[code]Create table #foo ( val_id varchar(12) not null,
sequence_no int NULL,
aDate datetime not null)

Insert Into #foo (val_id, aDate)

Select '09300/006.01', '20000101' UNION
Select '09300/006.01', '20040203' UNION
Select '09300/006.02', '20030101' UNION
Select '09300/006.03', '20050101' UNION
Select '09300/006.03', '20050201'

Select IDENTITY(int, 1,1) as myid,val_id,aDate
INTO #foo2
FROM #foo
Order by val_id asc, aDate asc


Update #foo
SET sequence_no = #foo2.myid
FROM #foo2 INNER JOIN #foo
on #foo2.val_id = #foo.val_id and #foo2.aDate = #foo.aDate


Select * from #foo



/*results
09300/006.01 1 2000-01-01 00:00:00.000
09300/006.01 2 2004-02-03 00:00:00.000
09300/006.02 3 2003-01-01 00:00:00.000
09300/006.03 4 2005-01-01 00:00:00.000
09300/006.03 5 2005-02-01 00:00:00.000



*/
[/code]




Poor planning on your part does not constitute an emergency on my part.


dataguru1971
Master Smack Fu Yak Hacker

1464 Posts

Posted - 2008-03-04 : 22:44:29
This would be the "per val_id" option, as in my second example. Same concept and same SELECT statement, but this time insert the records into a new table and update from there:


[code]
Create table #foo ( val_id varchar(12) not null,
sequence_no int NULL,
aDate datetime not null)

Insert Into #foo (val_id, aDate)

Select '09300/006.01', '20000101' UNION
Select '09300/006.01', '20040203' UNION
Select '09300/006.02', '20030101' UNION
Select '09300/006.03', '20050101' UNION
Select '09300/006.03', '20050201'

Select a.val_id
,a.adate
,(Select Count(*)+1
FROM #foo b
where b.aDate < a.aDate and b.Val_id = a.Val_id ) as MyID
into #foo2
from #foo a


Update #foo
SET sequence_no = #foo2.myid
FROM #foo2 INNER JOIN #foo
on #foo2.val_id = #foo.val_id and #foo2.aDate = #foo.aDate


Select * from #foo

/*results
09300/006.01 1 2000-01-01 00:00:00.000
09300/006.01 2 2004-02-03 00:00:00.000
09300/006.02 1 2003-01-01 00:00:00.000
09300/006.03 1 2005-01-01 00:00:00.000
09300/006.03 2 2005-02-01 00:00:00.000
*/

Drop Table #foo
Drop table #foo2
[/code]




Poor planning on your part does not constitute an emergency on my part.


Jeff Moden
Aged Yak Warrior

652 Posts

Posted - 2008-03-05 : 12:17:57
quote:
Originally posted by dataguru1971

quote:
Originally posted by Jeff Moden

Good idea. But, let's hope the Val_ID isn't repeated any more than just a couple of dozen times with that triangular join in there.

--Jeff Moden



It was just an idea, since 2000 doesn't have row_number()...

Triangular join? It isn't that unheard of to use a subquery like that as a proxy row number, is it?

I did this against a 30 million row table with 297K unique customer numbers, and it was more complicated... it ran in about 13 minutes for an update of a needed incremental counter by customer for a specific condition. What would more than a couple of dozen different val_ids cause that would be a concern?




Poor planning on your part does not constitute an emergency on my part.





That works out to about 100 rows per ID... which qualifies as "a couple dozen," I suppose. Still, for an average of 100 rows per ID, that triangular join will generate an average of 5,150 internal rows per ID. In other words, instead of processing the 30 million rows, you would be processing 300K * 5,150, or 1,545,000,000 rows internally, and that's why it takes 13 minutes. It'll only take about 7 seconds per million rows (about 3.5 minutes) if you use the UPDATE method.

To explain how triangular joins work, please see the following.
[url]http://www.sqlservercentral.com/articles/T-SQL/61539/[/url]
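For anyone reading this on SQL Server 2005 or later, the same per-Val_ID numbering can be done without the triangular join at all (column names taken from dataguru's example):

[code]
-- SQL Server 2005+ only: restart numbering for each val_id, ordered by date
SELECT val_id,
       ROW_NUMBER() OVER (PARTITION BY val_id ORDER BY aDate) AS sequence_no,
       aDate
FROM   #foo
[/code]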


--Jeff Moden

dataguru1971
Master Smack Fu Yak Hacker

1464 Posts

Posted - 2008-03-05 : 16:04:54
Thanks for the info, Jeff. What is your proposed solution for the OP, then?



Poor planning on your part does not constitute an emergency on my part.


Jeff Moden
Aged Yak Warrior

652 Posts

Posted - 2008-03-05 : 22:16:07
That would be another link... that particular link is down right now and I'll post it as soon as it comes back up.

--Jeff Moden

dataguru1971
Master Smack Fu Yak Hacker

1464 Posts

Posted - 2008-03-05 : 22:18:01
quote:
Originally posted by Jeff Moden

That would be another link... that particular link is down right now and I'll post it as soon as it comes back up.

--Jeff Moden



No big deal; I was just curious what you would propose as a solution... always looking to learn something new.



Poor planning on your part does not constitute an emergency on my part.


Jeff Moden
Aged Yak Warrior

652 Posts

Posted - 2008-03-06 : 09:08:20
Understood... I'm the same way... hungry for knowledge...

The site is back up... here's the link...

http://www.sqlservercentral.com/articles/Advanced+Querying/61716/
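The short version: the article does an ordered UPDATE through the clustered index using local variables. A rough sketch against a table like the OP's (table name and index assumptions are placeholders, so test on a copy first):

[code]
-- Rough sketch only... assumes the clustered index is on (val_id, [date]).
-- SET clauses evaluate left to right, so @prev still holds the previous
-- row's val_id when the CASE runs.
DECLARE @seq int, @prev varchar(12)
SELECT  @seq = 0, @prev = ''

UPDATE yourtable
   SET @seq  = sequence_no = CASE WHEN val_id = @prev
                                  THEN @seq + 1 ELSE 1 END,
       @prev = val_id
  FROM yourtable WITH (INDEX(0), TABLOCKX)  -- force clustered-index order
[/code]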
--Jeff Moden