SQL Server Forums
Using SQL trigger to create unlimited unique id's

VinceGee
Starting Member

2 Posts

Posted - 01/26/2007 :  12:49:18
Introduction
This MS SQL stored procedure solves a problem which is not common:
a table in a database in which each row needs to be uniquely
identified, but which holds more rows than you can identify with a
bigint or uniqueidentifier.

So, for example:

A uniqueidentifier is a 128-bit value, so at best you would be
limited to 2^128 (about 3.4e+38) unique rows.
If you used a bigint you would be limited to -2^63
(-9,223,372,036,854,775,808) through 2^63-1 (9,223,372,036,854,775,807).

This method allows 36^900 (about 4.7e+1400) unique values with a
900-character key (the widest that can be cluster-indexed), or
36^4000 (about 1.6e+6225) with an unindexed varchar(4000) key.
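As a rough sanity check on these magnitudes (a sketch in Python, just for the arithmetic; the counts follow directly from the key widths and are not part of the original procedure):

```python
import math

# bigint: 64-bit signed integer, ~1.8e19 distinct values in total
bigint_values = 2**64
# uniqueidentifier (GUID): 128 bits, ~3.4e38 distinct values
guid_values = 2**128

def base36_magnitude(width):
    """log10 of the key space of a base-36 (0-9, a-z) key of the given width."""
    return width * math.log10(36)

print(round(base36_magnitude(900)))   # ~1401, i.e. about 1e1400
print(round(base36_magnitude(4000)))  # ~6225, i.e. about 1e6225
```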

Why would you need that many unique values? Well, this was
originally built to track every email that an application sends, for
Sarbanes-Oxley compliance. With this technique, every email sent out
will be uniquely identified for a very, very long time.


The purpose of this is to show how to set up an identity column with
a larger range than a bigint or uniqueidentifier. Think of
transaction logs where you need to track every change, or determining
click paths through a website, etc.

The point is that this method does pretty much what the title says:
create unlimited unique IDs. Which table you apply it to, and for
what reason, is up to the programmer.

Background
This design uses basic counting and works within the limitations of
MS SQL T-SQL. You can use a varchar(4000) as the unique-id column,
but as of MSSQL 2000 the largest indexable field is 900 characters.
So if you need to quickly search the table by key (or clustered key),
limit your key column's width to 900 characters. If you do use a
varchar(4000), then when searching the table, select the subset you
need into a temporary table and search that.


Using the code
First, copy and paste all the TSQL into a Query Window and
 compile it in the database you wish to use it in. 

[Code]
/**********************************************************************************
Program: Unlimited Unique ID's (Auto Increment)
Programmer: Vince Gee
Date:        9/28/2005
Parameters:
                @TABLE_NAME - The name of the table to establish the auto incrementing field in
                @COLUMN_NAME - The column name in the table to establish the auto incrementing field in
                @DROP_EXISTING_TRIGGER - Whether or not on running to drop an existing trigger with
                                             the same name.
Theory:
            A varchar 900 field will be able to have 2.2528399544939174411840147874773e+106 unique identifiers in it.

            A uniqueID only has 8.6904152163272468261061026005374e+50 unique identifiers in it.

Description:
            The purpose of the sql procedure is to automate the creation of 
            auto updating identities on a sql table without the trouble of
            writing the trigger each time.  

            So what does this do?  Well for example lets say we have the following 
            table which you will have many many many rows in.

            CREATE TABLE [Countertest] 
                (
                [myKey] [varchar] (900) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
                [anyvalue1] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
                [anyvalue2] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
                [AA] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL 
                )
            
            myKey is the unique identifier for each row.  Its size can really be set
                    to anything; this proc will look for the column specified and determine its
                    size.  The column should be of varchar type.

            All the other columns don't matter; the only issue is if all the column names
            concatenated together, plus the trigger code, exceed the 4000-character capacity
            of the dynamic SQL buffer.  If that is the case you are going to have to write
            the trigger manually.

            So to set the auto incrementing field up you would call this proc:
                Execute SP_SET_UNIQUE_FIELD 'Countertest','myKey' 
            or
                Execute SP_SET_UNIQUE_FIELD 'Countertest','myKey',1
            
Output:
            When data is inserted into the table, the auto incrementing field will look like
                                    0000000001
                                    0000000002
                                    0000000003
                                    0000000004
                                    0000000005
                                    0000000006
                                    0000000007
                                    0000000008
                                    0000000009
                                    000000000A
                                    000000000B
                                    000000000C
                                    000000000D
                                    000000000E
                                    000000000F
                                    000000000G
                                    000000000H
                                    000000000I
                                    000000000J
                                    000000000K
                                    000000000L
            with however many leading 0's are needed.  It counts 0-9, then a-z.


***********************************************************************************/

CREATE PROC SP_SET_UNIQUE_FIELD
            @TABLE_NAME VARCHAR(255),
            @COLUMN_NAME VARCHAR(255),
            @DROP_EXISTING_TRIGGER BIT =0
AS

DECLARE 
    @EXECSQLSTRING nvarchar (4000),
    @counter int,
    @COLUMN_NAMES varchar(4000),
    @tCOLUMN_NAME varchar(4000),
    @MAXORDINAL int,
    @KEYLENGTH int

--If the trigger already exists, either abort or drop it, per @DROP_EXISTING_TRIGGER
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[' + 'IO_Trig_INS_' + @COLUMN_NAME + ']') and OBJECTPROPERTY(id, N'IsTrigger') = 1)
    begin
        IF @DROP_EXISTING_TRIGGER = 0
            BEGIN
                -- USER DOESN'T WANT US TO AUTODROP THE TRIGGER, BY DEFAULT AUTODROP TRIGGER IS OFF
                PRINT '!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!'
                PRINT 'STOP ERROR :: PLEASE DROP THE EXISTING TRIGGER BEFORE RUNNING THIS PROC'
                PRINT '!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!'
                RETURN
            END
        ELSE
            BEGIN
                --CREATE A SQL STRING TO DROP THE TRIGGER SO WE CAN RECREATE IT.
                set @EXECSQLSTRING = 'drop trigger IO_Trig_INS_' + @COLUMN_NAME
                --EXECUTE THE SQL
                EXEC SP_EXECUTESQL  @EXECSQLSTRING
            END
    end


--CREATE A TABLE TO HOLD THE RESULTS FOR THE SP_COLUMNS
create table #temp
    (
    TABLE_QUALIFIER varchar(255),
    TABLE_OWNER varchar(255),
    TABLE_NAME varchar(255),
    COLUMN_NAME varchar(255),
    DATA_TYPE int,
    [TYPE_NAME] varchar(255),
    [PRECISION] int,
    LENGTH    int,
    SCALE int,
    RADIX int,
    NULLABLE int,
    REMARKS  varchar(255),
    COLUMN_DEF  varchar(255),
    SQL_DATA_TYPE int,
    SQL_DATETIME_SUB varchar(255),
    CHAR_OCTET_LENGTH int,
    ORDINAL_POSITION    int,
    IS_NULLABLE  varchar(255),
    SS_DATA_TYPE int
    )
--POPULATE THE TEMP TABLE W/ A SP_COLUMNS ON THE TARGET TABLE
insert into #temp
exec sp_columns @TABLE_NAME

--CYCLE THROUGH ALL THE COLUMN NAMES AND BUILD OUR COLUMN NAME STRING 
--FOR INSERTS.  THE LAST COLUMN NAME IS ALWAYS THE IDENTITY FIELD.
SELECT @MAXORDINAL = MAX(ORDINAL_POSITION) FROM #TEMP
SET @COUNTER = 1
SET @COLUMN_NAMES = ''
WHILE  @COUNTER <= @MAXORDINAL
    BEGIN
        select @tCOLUMN_NAME = COLUMN_NAME FROM #TEMP WHERE ORDINAL_POSITION = @COUNTER
        if (@tCOLUMN_NAME <> @COLUMN_NAME)
            begin
                SET @COLUMN_NAMES = @COLUMN_NAMES  + @tCOLUMN_NAME+ ','
            end
        else
            begin
                select @KEYLENGTH = LENGTH FROM #TEMP WHERE ORDINAL_POSITION = @COUNTER
            end
    SET @COUNTER = @COUNTER +1
    END
--CLEAN UP
drop table #temp


IF @KEYLENGTH > 900    
Begin
    PRINT '!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!'
    PRINT '!!!!!!!!!!!!!!!!!!!!!!!!!!WARNING:: YOU WILL NOT BE ABLE TO INDEX THIS TABLE BY YOUR CHOSEN COLUMN,!!!!!!!!!!!!!!!!!!!!!'
    PRINT '!!!!!!!!!!!!!!!!!!!!!!!!!!BECAUSE THE COLUMN IS OVER 900 CHARACTERS.  900 CHARS IS THE MAX !!!!!!!!!!!!!!!!!!!!!'
    PRINT '!!!!!!!!!!!!!!!!!!!!!!!!!!THAT CAN BE INDEXED                                                     !!!!!!!!!!!!!!!!!!!!!'
    PRINT '!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!'
    END
SET @EXECSQLSTRING = '
CREATE TRIGGER IO_Trig_INS_' + @COLUMN_NAME + ' ON ' + @TABLE_NAME + '

INSTEAD OF INSERT

AS

BEGIN

DECLARE 
        @VALUE VARCHAR(' + CONVERT(VARCHAR(900),@KEYLENGTH) + '),
        @REVERSED_VALUE VARCHAR(' + CONVERT(VARCHAR(900),@KEYLENGTH) + '),
        @COUNTER INT,
        @LEFT_SIDE VARCHAR(' + CONVERT(VARCHAR(900),@KEYLENGTH) + '),
        @RIGHT_SIDE VARCHAR(' + CONVERT(VARCHAR(900),@KEYLENGTH) + '),
        @CHAR_VALUE CHAR


select @VALUE = ISNULL(MAX(' + @COLUMN_NAME + '),REPLICATE (''0'',' + CONVERT(VARCHAR(900),@KEYLENGTH) + ') ) from ' + @TABLE_NAME + '

SET @REVERSED_VALUE = REVERSE(@VALUE)

SET @COUNTER = 1
 
 WHILE @COUNTER <= LEN(@REVERSED_VALUE)

     BEGIN

        SET @CHAR_VALUE = SUBSTRING(@REVERSED_VALUE,@COUNTER,1)

        IF ASCII(@CHAR_VALUE) <> 122

                BEGIN

                    IF @COUNTER = 1

                        SET @LEFT_SIDE = ''''

                    ELSE
                        SET @LEFT_SIDE = LEFT (@REVERSED_VALUE,@COUNTER - 1)

                    IF @COUNTER = LEN(@VALUE) 

                        SET @RIGHT_SIDE = ''''

                    ELSE 
                        SET @RIGHT_SIDE = RIGHT (@REVERSED_VALUE,LEN(@REVERSED_VALUE)- @COUNTER)

                    IF ASCII(@CHAR_VALUE) +  1 = 58

                        SET @CHAR_VALUE = CHAR(97)

                    ELSE

                        SET @CHAR_VALUE = CHAR(ASCII(@CHAR_VALUE) + 1)

                    SET @REVERSED_VALUE = ISNULL(@LEFT_SIDE,'''') + ISNULL(@CHAR_VALUE,'''') + ISNULL(@RIGHT_SIDE,'''')

                    BREAK
                END
            ELSE
                BEGIN
                    IF @COUNTER = 1

                        SET @LEFT_SIDE = ''''

                    ELSE 

                        SET @LEFT_SIDE = LEFT (@REVERSED_VALUE,@COUNTER - 1)

                    IF @COUNTER = LEN(@VALUE) 

                        SET @RIGHT_SIDE = ''''

                    ELSE 

                        SET @RIGHT_SIDE = RIGHT (@REVERSED_VALUE,LEN(@REVERSED_VALUE)- @COUNTER)
            
                    SET @CHAR_VALUE = CHAR(48) --SET THE CURRENT POSITION TO ZERO AND WE INCREMENT THE NEXT DIGIT.

                    SET @REVERSED_VALUE = ISNULL(@LEFT_SIDE,'''') + ISNULL(@CHAR_VALUE,'''') + ISNULL(@RIGHT_SIDE,'''')
                END

        SET @COUNTER = @COUNTER +1
     END 


SET @VALUE = REVERSE (@REVERSED_VALUE)

    INSERT INTO ' + @TABLE_NAME + ' (' + @COLUMN_NAMES + @COLUMN_NAME + ')

        SELECT 

                ' + @COLUMN_NAMES + '@VALUE

        FROM

            inserted

END'
if len(@EXECSQLSTRING) <4000
    begin
        EXEC SP_EXECUTESQL  @EXECSQLSTRING
    end
else
    begin
        print 'STOP ERROR:: BUFFER OVERFLOW.  THE GENERATED TRIGGER TEXT > 4000, Trigger must be hand written.'
    end



GO
SET QUOTED_IDENTIFIER OFF 
GO
SET ANSI_NULLS ON 
GO


First, to test the functionality, create a test table.

Create TABLE [Countertest]
(
[myKey] [varchar] (900) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[anyvalue1] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[anyvalue2] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[AA] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL
)

Second, call the proc on the table. Parameters are:

Table Name - Name of the table to put the trigger on.
Column Name - Name of the column to use as the key.
Drop Existing Trigger - If this is set to 1 and a trigger with the name generated by this stored procedure already exists, it will be dropped. (OPTIONAL)

Execute SP_SET_UNIQUE_FIELD 'Countertest','myKey'

or
Execute SP_SET_UNIQUE_FIELD 'Countertest','myKey',1

Now, we are going to test how this works. Copy and paste the following code into a query analyzer.


declare @t int
set @t = 0

while @t <= 40000
begin
insert into countertest
select '','s','s','s'
set @t = @t + 1
end
GO

Once this completes, you can inspect the unique id by selecting it from the table
SELECT RIGHT (MYKEY,10) FROM countertest

The table will keep incrementing the key column, first 0-9 then a-z. When it reaches all 'z' it will roll over, but if your key column is wide enough this shouldn't happen while you're still employed.

Also, the stored procedure figures out how wide the key column is automatically and adjusts the generated trigger accordingly. So if you widen or shrink the key column, just rerun the proc to update the trigger.
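The counting scheme is easier to see outside of T-SQL. Here is a small Python model of what the generated trigger does (a sketch of the logic, not the author's code): walk the key from its least significant character, bumping 9 to a and rolling z over to 0 with a carry.

```python
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def next_key(key: str) -> str:
    """Fixed-width base-36 increment, mirroring the trigger's logic."""
    chars = list(key)
    # scan from the least significant (rightmost) character
    for i in range(len(chars) - 1, -1, -1):
        if chars[i] != "z":
            # not at the top of the alphabet: bump this digit and stop
            chars[i] = DIGITS[DIGITS.index(chars[i]) + 1]
            break
        # 'z' rolls over to '0' and the carry moves one position left
        chars[i] = "0"
    return "".join(chars)

print(next_key("0000000009"))  # 000000000a
print(next_key("000000000z"))  # 0000000010
```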


Edited by - VinceGee on 01/26/2007 13:09:30

SwePeso
Patron Saint of Lost Yaks

Sweden
29908 Posts

Posted - 01/26/2007 :  15:37:39
NEWID() is still sufficient.

For your example above with mails according to SOX, it can identify 1,000,000,000 mails per millisecond for the next 1E+19 years.

I think I will have retired by then


Peter Larsson
Helsingborg, Sweden

Edited by - SwePeso on 01/26/2007 15:42:01

jezemine
Flowing Fount of Yak Knowledge

USA
2884 Posts

Posted - 01/26/2007 :  16:01:31
Vince: are you serious about this?

Did you really ever have a need for more rows than a bigint would provide? I just don't believe it's possible.

If so I think you deserve some sort of VLDB medal.



www.elsasoft.org

SwePeso
Patron Saint of Lost Yaks

Sweden
29908 Posts

Posted - 01/27/2007 :  04:08:17
With the same example as above, a BIGINT can identify 1,000,000 mails per second for the next 584,554 years.
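Both lifetime figures are easy to reproduce (Python arithmetic only, assuming a 365.25-day year; small rounding differences from the numbers quoted above are expected):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# bigint key space consumed at 1,000,000 mails per second
bigint_years = 2**64 / 1e6 / SECONDS_PER_YEAR
print(round(bigint_years))     # roughly 584,500 years

# NEWID() key space consumed at 1,000,000,000 mails per millisecond
guid_years = 2**128 / 1e12 / SECONDS_PER_YEAR
print(f"{guid_years:.1e}")     # roughly 1.1e+19 years
```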


Peter Larsson
Helsingborg, Sweden

VinceGee
Starting Member

2 Posts

Posted - 01/29/2007 :  07:56:53
Well, what about clicks through a corporate website, like Intel tracking every web click and then doing path analysis against it? :) And I've been reading articles about how DBAs have failed to design data models that need to track data for 20-30 years, and how some engineers failed to handle it. It's a tool, it's cool; you never know when you might need something like this one day, and maybe it will save you 10 minutes in writing it.

SwePeso
Patron Saint of Lost Yaks

Sweden
29908 Posts

Posted - 01/29/2007 :  09:27:26
NEWID() can give 16^32 (3.403E+38) unique values.
340 282 366 920 938 463 463 374 607 431 768 211 456

BIGINT can give 2^64 (1.845E+19) unique values.
18 446 744 073 709 551 616

Your code above can give 36^900 (4.702E+1400) unique values.
470 165 426 861 930 660 365 467 740 254 387 367 480 060 288 811 691 783 266 578 606 572 388 749 461 787 965 796 061 018 622 048 258 568 366 066 708 891 442 252 596 049 125 799 657 389 797 439 424 676 647 004 746 709 977 759 829 838 686 292 911 452 094 937 698 092 400 318 605 741 833 161 155 543 913 650 358 879 123 803 597 180 783 250 073 270 955 083 868 258 926 739 012 975 550 539 208 948 581 038 341 363 668 357 807 830 832 396 160 549 255 233 369 807 234 526 994 563 639 826 353 686 569 705 364 520 517 332 510 311 310 666 960 180 979 946 885 357 470 795 413 753 196 816 332 962 297 758 018 543 333 162 844 536 475 049 764 146 031 843 349 760 910 080 283 228 977 821 842 978 718 191 715 149 071 402 280 607 214 394 533 953 740 985 356 611 109 892 282 192 056 552 249 074 160 623 097 023 676 942 290 488 771 475 492 813 517 120 383 889 749 197 437 693 960 259 184 041 270 560 588 998 594 540 704 951 139 458 781 072 696 224 989 259 031 231 891 074 841 622 172 173 395 926 810 562 888 038 347 412 181 005 657 489 221 681 464 741 679 020 002 050 845 667 151 246 745 306 147 908 604 844 527 134 700 996 066 206 476 899 442 120 049 064 165 273 969 609 522 271 907 270 373 131 866 953 842 935 525 090 243 845 817 002 609 132 031 751 716 802 779 628 272 144 246 897 431 668 065 081 798 719 636 971 915 687 114 059 218 173 342 084 445 314 334 003 643 863 200 471 558 845 469 294 289 804 213 060 025 354 748 565 312 470 545 737 797 532 511 930 181 539 104 963 595 964 716 379 279 209 837 325 393 879 197 972 661 381 634 494 144 956 675 288 745 137 624 002 863 431 746 577 639 926 195 819 109 419 224 107 662 350 300 973 429 868 282 413 274 778 160 258 389 751 158 735 213 780 902 496 032 798 529 782 150 183 320 045 644 833 716 000 508 751 737 419 722 971 466 821 048 139 135 323 848 910 008 572 787 639 244 136 055 829 362 969 677 866 144 149 816 972 391 622 037 733 376

Rewrite your code above to use VARBINARY(900).

Then you will have 256^900 (2.606E+2167) unique values.
26 059 662 129 249 685 387 376 387 004 434 371 037 157 945 606 729 517 772 442 936 762 276 378 682 786 747 025 010 958 798 931 183 143 530 409 429 299 890 728 725 150 857 095 077 832 581 965 847 228 031 095 634 297 400 701 452 596 959 128 049 942 860 259 325 038 467 513 513 662 230 767 591 717 700 420 912 709 748 247 315 464 182 057 123 680 676 384 940 331 361 299 678 856 240 930 663 284 688 250 797 033 074 034 112 606 461 674 999 096 545 061 119 996 527 103 519 411 778 900 312 549 341 598 117 111 860 738 794 635 186 779 011 109 208 199 188 765 678 750 857 873 274 245 085 946 245 862 308 731 540 458 454 971 952 518 973 454 476 449 762 164 758 633 118 302 227 738 557 845 547 767 994 339 626 696 802 634 936 326 170 734 660 886 023 469 630 630 110 805 424 188 764 142 025 734 692 030 389 143 267 714 857 503 333 417 891 718 097 121 455 962 989 396 785 801 633 265 995 005 547 302 630 053 806 244 841 556 436 398 684 714 359 946 527 945 444 552 684 465 375 721 531 656 839 826 762 041 187 734 447 628 600 810 346 701 525 329 659 841 489 857 842 707 411 353 194 515 226 596 549 384 680 037 826 026 580 173 160 017 119 404 831 243 985 827 433 302 778 197 834 812 348 378 315 928 020 921 509 126 568 052 963 939 928 672 734 790 093 924 486 427 168 867 688 143 890 617 491 889 751 419 842 709 404 977 830 817 750 378 677 443 038 246 349 160 531 531 154 672 529 523 163 491 541 824 407 725 089 361 897 267 202 622 938 052 081 270 837 523 280 316 439 220 059 625 044 839 136 381 985 552 782 322 976 494 391 628 137 008 890 290 297 601 775 691 009 624 783 753 728 242 568 415 755 824 667 032 115 687 017 139 030 150 837 487 530 941 611 840 926 077 926 163 891 233 272 703 210 170 140 139 190 595 211 852 741 356 311 289 088 077 083 483 607 340 182 871 832 561 747 517 526 515 111 106 344 206 130 273 563 946 639 441 244 072 467 520 006 111 840 439 664 597 458 988 124 474 785 147 627 986 078 495 530 131 546 974 772 272 757 864 576 548 099 637 241 899 138 016 900 371 681 147 638 570 434 004 824 448 
707 028 598 503 486 731 411 179 842 617 025 468 902 722 137 284 112 752 375 745 970 282 398 852 800 265 097 619 029 774 956 884 911 211 598 675 094 390 917 422 946 002 588 399 213 910 926 811 652 280 611 755 673 235 454 082 358 360 421 612 001 338 273 030 001 605 128 962 913 994 003 040 962 033 674 878 311 533 572 356 065 480 428 882 191 636 223 091 104 843 795 314 149 850 068 000 128 443 375 501 769 716 394 946 510 383 089 170 042 595 117 391 251 807 305 664 199 990 411 463 780 605 403 482 086 043 400 991 670 799 787 328 223 081 797 427 401 431 718 331 724 324 364 346 576 012 662 166 181 134 222 831 740 687 276 898 817 211 801 905 622 426 553 571 343 908 654 770 942 178 256 156 348 023 721 157 845 169 815 097 304 027 647 883 610 068 533 331 355 099 720 214 916 302 137 557 833 545 129 988 286 972 556 582 210 226 638 477 389 454 367 168 021 928 667 974 556 610 793 383 842 021 376
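Python's arbitrary-precision integers make these counts easy to double-check (a quick verification added here, not part of the original post):

```python
n36 = 36**900    # key space of a 900-character base-36 column
n256 = 256**900  # key space of a VARBINARY(900) column

print(len(str(n36)))   # 1401 digits, i.e. ~4.70e+1400
print(len(str(n256)))  # 2168 digits, i.e. ~2.61e+2167
print(str(n36)[:6])    # 470165
print(str(n256)[:8])   # 26059662
```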


Peter Larsson
Helsingborg, Sweden

Edited by - SwePeso on 01/29/2007 09:43:05

Kristen
Test

United Kingdom
22403 Posts

Posted - 01/29/2007 :  09:35:15
What are you guys using to back up this database?

SwePeso
Patron Saint of Lost Yaks

Sweden
29908 Posts

Posted - 01/29/2007 :  09:42:32
Now, that's a great question Kristen.
Not only backup, where is the index and the data going to be stored?

Since these numbers are greater than the known number of atoms in the universe, I don't know...


Peter Larsson
Helsingborg, Sweden

Edited by - SwePeso on 01/29/2007 09:45:50

Kristen
Test

United Kingdom
22403 Posts

Posted - 01/29/2007 :  09:50:31
"where is the index and the data going to be stored?"

I figured you guys had that cracked already, but I was worried that the Backup media had been overlooked - I'm presuming several databases on one server, and thus Backup Media needs to be some greater multiple of the Online Media size!


Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

USA
7020 Posts

Posted - 01/29/2007 :  10:32:15
A BIGINT will let you insert 1 trillion rows per day for over 25,000 years, so it seems like there would be very few situations where it would be a practical limitation.

On the physical storage side, a table with nothing but a single BIGINT column containing the limit of 9,223,372,036,854,775,807 positive integers would consume 67,108,864 TBytes of storage. This seems well beyond the limits of hardware we are likely to see in the near future.

If it does become a problem, we can move to the maximum of 10 to the 35 power on a NUMERIC datatype before creating a new way to do things. That would let us store a quintillion rows per day for 273,972,602,739,726 years, about 13,698 times the age of the universe. The table would be 1.6E20 TBytes in size so this seems like an unlikely practical limitation.
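The 67,108,864-TByte figure checks out exactly (Python arithmetic; 8 bytes per BIGINT, binary terabytes):

```python
rows = 2**63                # one row per positive bigint value (~9.2e18)
bytes_per_row = 8           # a BIGINT is 8 bytes
total_bytes = rows * bytes_per_row
terabytes = total_bytes // 2**40
print(f"{terabytes:,} TB")  # 67,108,864 TB
```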








CODO ERGO SUM

Kristen
Test

United Kingdom
22403 Posts

Posted - 01/29/2007 :  10:36:11
"a table with nothing but a single BIGINT column"

That sounds like a justification for a job ... rather than a solution to a real world problem

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

USA
7020 Posts

Posted - 01/29/2007 :  10:58:19
Seriously Vince, that's a lot of work for something so unlikely to be of any use.

I understand that Microsoft is working to overcome this limitation in their upcoming release of SQL Server 3000.


CODO ERGO SUM

spirit1
Cybernetic Yak Master

Slovenia
11749 Posts

Posted - 01/29/2007 :  13:15:10
Just to let you know: .NET 3.5 (I think) will have a really big int (it's actually called something like that). The only limit will be memory.



Go with the flow & have fun! Else fight the flow
blog thingie: http://weblogs.sqlteam.com/mladenp

jezemine
Flowing Fount of Yak Knowledge

USA
2884 Posts

Posted - 01/29/2007 :  13:48:24
quote:
Originally posted by Peso
Since these numbers are greater then the known number of atoms in the universe, I don't know...



IIRC the number of elementary particles in the observable universe is estimated at only a paltry 10^80.

and it certainly takes a great many elementary particles to store even a single bit on disk. So these large identifiers are truly useless from that perspective.


www.elsasoft.org

Edited by - jezemine on 01/29/2007 13:51:59

SwePeso
Patron Saint of Lost Yaks

Sweden
29908 Posts

Posted - 01/29/2007 :  13:51:56
Only a measly 1.0E+80?
So in "reality" we can use NEWID() to identify every observable elementary particle in our galaxy?


Peter Larsson
Helsingborg, Sweden

jezemine
Flowing Fount of Yak Knowledge

USA
2884 Posts

Posted - 01/29/2007 :  13:53:40
quote:
Originally posted by Peso

Only measly 1.0E+80?
So in "reality" we can use NEWID() to identify every observable elementary particle in our galaxy?


Peter Larsson
Helsingborg, Sweden



well, you could, except for the tiny detail that there aren't enough elementary particles around to write down or store all the guids you'd need to identify them!


www.elsasoft.org

SwePeso
Patron Saint of Lost Yaks

Sweden
29908 Posts

Posted - 01/29/2007 :  13:55:03
It's a Catch 22 then?


Peter Larsson
Helsingborg, Sweden

jezemine
Flowing Fount of Yak Knowledge

USA
2884 Posts

Posted - 01/29/2007 :  14:14:38
sort of. information cannot exist without a way to physically represent it. in physics at the lowest level, information is represented by the quantum state of a particle. in the simplest case you would use a 2-state system such as a spin 1/2 particle like an electron, a so-called "qubit". Digressing a bit here...

Anyway, what I said above may be a little misleading. There are enough particles in the UNIVERSE to label all guids, just not enough in the part we can access. a guid is a 128-bit number, so there are about 10^38 of them. electrons in the universe outnumber guids by a factor of 10^42.

However we don't have access to all those electrons. Most of us poor programmers have to content ourselves with what we find here on earth. so as a practical matter, if you restrict yourself to what's available on earth, it's impossible to write down or store all the guids even given infinite time.

EDIT: actually, now I'm not so sure. I don't know what fraction of electrons in the universe are tied up in the earth. if it's bigger than 10^-42, then you could do it. a pretty silly enterprise however.


www.elsasoft.org

Edited by - jezemine on 01/29/2007 14:17:40

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

USA
7020 Posts

Posted - 01/29/2007 :  14:34:41
A uniqueidentifier is 128 bits, so that would be "only" 3.4E+38 (power(2,128)) possible values.

I don't think you are considering the possibility of quantum storage, so you could have a lot of bits on each elementary particle.






CODO ERGO SUM

Edited by - Michael Valentine Jones on 01/29/2007 14:35:32

jezemine
Flowing Fount of Yak Knowledge

USA
2884 Posts

Posted - 01/29/2007 :  14:48:31
quote:
Originally posted by Michael Valentine Jones
I don't think you are considering the possibility of quantum storage, so you could have a lot of bits on each elementary particle.



it depends on how many internal states the particle has. most stuff in quantum information theory deals with qubits, which uses 2-state particles as the basis. here any spin 1/2 particle will fit the bill, such as an electron.

EDIT: of course, if all you use to store information is the spin, then you are disregarding other degrees of freedom that you *could* use to store information (such as position, momentum, etc). But as a practical matter, it's too difficult to use all the degrees of freedom so you just focus on one that's (relatively) easy to control and measure, like spin.


www.elsasoft.org

Edited by - jezemine on 01/29/2007 14:59:59

jeffreymfischer
Starting Member

USA
10 Posts

Posted - 10/07/2009 :  11:18:32
I've written a detailed article on an enterprise-ready unique identifier solution.

http://blog.scoftware.com/post/2009/08/29/SQL-Server-UniqueIdentifier-Globally-Unique-Globally-Sequential-SOLUTION.aspx

Read it and provide feedback.

Jeff Fischer

Scoftware Architect/Developer
http://blog.scoftware.com

SQL Server Forums © 2000-2009 SQLTeam Publishing, LLC