Author |
Topic |
Kristen
Test
22859 Posts |
Posted - 2010-01-15 : 07:37:03
|
Actually, also (for me): "What New Features to Learn First in SQL2005?" (My need is for an OLTP application.) Bit by bit we will adopt new ways of doing things. What would (or what DID) you find useful after moving from SQL2000, or to SQL2005?

I figure we can improve our Web PAGING - using "SELECT ROW_NUMBER() ... OVER ..."

We have some Content Data that is in VARCHAR(7000) that I would like to change to VARCHAR(MAX).

We have a (very) few Pivot SProcs that would benefit from a more modern way of doing things. The PIVOT operator, maybe.

I thought we would use the DATE datatype, but in practice we store the Time in most date fields, we just don't display it very often. (So we have a TIME on an Order Date; we only display the DATE to the customer, but in terms of debugging the sequence that orders were placed in, it is useful.)

Anything else on your top-10-things-that-improved-your-application? CTEs? Cross Apply?

Thanks. |
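A minimal sketch of the kind of web paging query ROW_NUMBER() enables (table and column names here are illustrative, not from any real schema; written 2005-compatible, so no DECLARE-with-initializer):

```sql
-- Page 3 of a product list, 20 rows per page
DECLARE @PageNumber int, @PageSize int;
SELECT @PageNumber = 3, @PageSize = 20;

WITH NumberedProducts AS
(
    SELECT ProductID, ProductName, UnitPrice,
           ROW_NUMBER() OVER (ORDER BY ProductName, ProductID) AS RowNum
    FROM dbo.Products
)
SELECT ProductID, ProductName, UnitPrice
FROM NumberedProducts
WHERE RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1
                 AND @PageNumber * @PageSize
ORDER BY RowNum;
```

Note the tie-breaker column (ProductID) in the OVER clause: without a deterministic ORDER BY, rows can drift between pages.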
|
RickD
Slow But Sure Yak Herding Master
3608 Posts |
Posted - 2010-01-15 : 07:46:51
|
CTEs, ROW_NUMBER(), RANK(), PIVOT, UNPIVOT from 2005 (and CROSS APPLY if you'll use it). MERGE from 2008. |
 |
|
SwePeso
Patron Saint of Lost Yaks
30421 Posts |
Posted - 2010-01-15 : 07:53:25
|
MERGE, Partitioning and definitely the new XML methods. Sparse columns, composable DML. Oh, and OUTPUT! Definitely number 1! N 56°04'39.26"E 12°55'05.63" |
 |
|
madhivanan
Premature Yak Congratulator
22864 Posts |
Posted - 2010-01-15 : 08:08:49
|
To have compact code, I use

-- Assigning during declaration
declare @i int = 34
select @i

-- Multiple row assignments
declare @t table(i int, j int)
insert into @t values (1,2),(2,5),(3,5)
select * from @t

Madhivanan
Failing to plan is Planning to fail |
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-15 : 09:08:51
|
MERGE - you really use that? Huge amount of hard-to-maintain code (same code in two places, right?) just to get an UPSERT? Or am I ill-informed?

OUTPUT - had not heard of that. Can I use that to get the IDENTITY and DEFAULT / Post-Trigger-Mangled values for columns after a batch (multi-row) insert? That would be handy.

"new XML methods" - Dunno about the rest of you, but we've got over the excitement of XML. Where we thought we would use it as a transport-container for all sorts of stuff, we've retrenched. Where we thought we would ship XML data for client-side rendering we've moved to JSON (leaner-meaner) - and in fact we plan to move to a proprietary binary format that will be much lighter still - obviously not the solution for inter-application data sharing!

"declare @i int = 34" (assigning during declaration) - hadn't spotted that, but I definitely like it!

"insert into @t values (1,2),(2,5),(3,5)" - are there practical applications for that? I'd be happy to insert them using SELECT 1,2 UNION ALL SELECT 2,5 ...

"Sparse columns" I have considered. We have "UserDefined1" ... "UserDefined30" (various datatypes) in nearly all our tables. However, I think I read it's pointless unless the tables are large?

"Composable DML" - never heard of that, but I read a very good article on the internet: http://weblogs.sqlteam.com/peterl/archive/2009/04/08/Composable-DML.aspx Thanks for that. Looks like if I want to process some rows twice, in some way, that will make a worthwhile difference. Can I use it to work around problems trying to change a column value that is part of a Foreign Key?

Insert the row into the table under a new name
Insert all child rows (i.e. TableB) with the rename
Delete the original child rows (TableB)
Delete the original row

all as one statement? |
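On the OUTPUT question above: yes, OUTPUT can return IDENTITY and DEFAULT values for a multi-row insert, with one caveat - it captures values as the INSERT writes them, before any AFTER trigger runs, so trigger-mangled values are not reflected (and if the target has triggers, OUTPUT must go INTO a table or table variable). A small sketch, with an illustrative table:

```sql
-- Illustrative table: IDENTITY column plus a column with a DEFAULT
CREATE TABLE dbo.DemoOrders
(
    OrderID   int IDENTITY(1,1) PRIMARY KEY,
    Customer  varchar(50) NOT NULL,
    OrderDate datetime NOT NULL DEFAULT GETDATE()
);

-- OUTPUT returns every inserted row, including the generated
-- IDENTITY and DEFAULT values - no SCOPE_IDENTITY() loop needed
INSERT INTO dbo.DemoOrders (Customer)
OUTPUT inserted.OrderID, inserted.Customer, inserted.OrderDate
SELECT 'Alice' UNION ALL
SELECT 'Bob';
```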
 |
|
sunitabeck
Master Smack Fu Yak Hacker
5155 Posts |
Posted - 2010-01-15 : 09:54:12
|
Number one on my list is table valued parameters. Whenever I had to interact with SQL Server from C#, I have suffered through various hacks for updating/inserting a set of rows in a transaction (everything from client-side transactions, to sending data as XML and shredding it in T-SQL, to automatic transactions from LINQ, etc.). Table valued parameters make all of that a thing of the past.

Unfortunately, the servers at the company I work for still use SQL 2005. And, even if they were on SQL 2008, I am not considered a SQL programmer, so I won't be able to create the table valued types, unless I can charm our database programmers into doing that for me. |
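For anyone who hasn't seen the shape of it, a minimal table-valued-parameter sketch (SQL 2008+; type, procedure and table names are illustrative):

```sql
-- The table type is created once, as a database-level object (this is
-- the DDL step that needs a database programmer's permissions)
CREATE TYPE dbo.ProductIdList AS TABLE (ProductID int PRIMARY KEY);
GO

-- TVP parameters must be declared READONLY
CREATE PROCEDURE dbo.GetProducts
    @Ids dbo.ProductIdList READONLY
AS
    SELECT p.ProductID, p.ProductName
    FROM dbo.Products AS p
    JOIN @Ids AS i ON i.ProductID = p.ProductID;
GO

-- Calling from T-SQL; from C#, a DataTable or DbDataReader is passed
-- as a SqlDbType.Structured parameter
DECLARE @List dbo.ProductIdList;
INSERT INTO @List VALUES (1), (2), (3);
EXEC dbo.GetProducts @Ids = @List;
```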
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-15 : 10:35:20
|
"Table valued parameters" - interesting. We have a number of data Filtration SProcs, and an associated data Retrieval SProc. These currently use a #TEMP temporary table, declared in the outer SProc, to pass data between themselves. They are all designed to pass the same data around as it makes our front-end CMS templates easier for DEVs to use - all the same fields are available.

So we have:

Product Listing page (thumbnails and short descriptions)
Special Offers / Best Sellers (fits in Right Pane, say)
Associated Products / Alternative Products / "People who bought that also bought" (displayed on the Product Detail page)

All of these display product information using the same "pool" of available columns, so a template for one of these functions can be reused / adapted for another part. Each of those activities has its own Filter SProc, and then uses the same single Retrieval SProc to return the data to the application. (The Retrieval SProc gets the right price for the current customer, all that sort of stuff, so that the logic is centralised.)

Having the Filter SProcs pass a Table Valued Variable to the Retrieval SProc looks like a good idea. |
 |
|
RickD
Slow But Sure Yak Herding Master
3608 Posts |
Posted - 2010-01-15 : 10:58:33
|
quote: Originally posted by Kristen MERGE - you really use that? Huge amount of hard-to-maintain code (same code in two places, right?) just to get an UPSERT?Or am I ill-informed?
Used it for an MIS report database that was grabbing data from a Sybase database, through a load database. Saved me months of development and worked so much faster than SSIS ever would have. I found it very useful, but each to their own I guess, and it would depend on your particular need. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-15 : 11:38:23
|
Ah, hadn't thought of that use. For data import to "mirror tables" we use:

Pull data to #TEMP (using OPENQUERY to avoid any distributed transaction issues)
DELETE any rows from MirrorTable if no longer in #TEMP
UPDATE existing rows WHERE different (i.e. using a Source.Col1 <> Destination.Col1 plus checks for NULL on either side, etc. etc. for every column)
INSERT any new rows (found in #TEMP, NOT EXISTS in MirrorTable)

We sometimes (if we just want a local copy, and have no additional columns in the imported table) do:

DELETE any rows in MirrorTable if no longer in #TEMP
DELETE from #TEMP WHERE all columns are identical (i.e. using the same sort of column-by-column test as above)
DELETE from MirrorTable WHERE EXISTS IN (#TEMP)
INSERT all of #TEMP into MirrorTable

Do you think MERGE would help me with / improve on this? |
 |
|
SwePeso
Patron Saint of Lost Yaks
30421 Posts |
Posted - 2010-01-15 : 12:02:51
|
Yes, you have the [NOT] MATCHED BY { SOURCE | TARGET } clauses to deal with that. N 56°04'39.26"E 12°55'05.63" |
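A sketch of how the DELETE/UPDATE/INSERT triple described above collapses into one MERGE (key and column names are placeholders for the real mirror-table schema):

```sql
-- #TEMP holds the freshly pulled source data; MirrorTable is the local copy
MERGE dbo.MirrorTable AS tgt
USING #TEMP AS src
    ON tgt.KeyCol = src.KeyCol
WHEN MATCHED AND (tgt.Col1 <> src.Col1 OR tgt.Col2 <> src.Col2) THEN
    -- the compare-every-column test still applies here; nullable columns
    -- still need the same NULL-on-either-side checks as before
    UPDATE SET Col1 = src.Col1, Col2 = src.Col2
WHEN NOT MATCHED BY TARGET THEN
    -- row is new in #TEMP
    INSERT (KeyCol, Col1, Col2) VALUES (src.KeyCol, src.Col1, src.Col2)
WHEN NOT MATCHED BY SOURCE THEN
    -- row no longer exists in the source feed
    DELETE;
```

One statement, one scan of each side, and it can take an OUTPUT clause ($action, inserted.*, deleted.*) for auditing what it did.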
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-15 : 12:26:20
|
Thanks Peso, I'll take a look at that as the updates are quite CPU/Disk intensive, and I expect that MERGE is going to do a better job than my mechanically generated compare-every-column WHERE clause stuff! |
 |
|
SwePeso
Patron Saint of Lost Yaks
30421 Posts |
Posted - 2010-01-15 : 13:44:52
|
I almost forgot... Learn about SYNONYMS too! Saves tons of hours of work. N 56°04'39.26"E 12°55'05.63" |
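For readers who haven't met them, a quick sketch of what a synonym buys you (object names are illustrative):

```sql
-- Give a local name to an object that really lives elsewhere
CREATE SYNONYM dbo.Customers FOR OtherDatabase.dbo.Customers;

-- All code now just says dbo.Customers, wherever the object lives
SELECT * FROM dbo.Customers;

-- Repointing is a drop/create - no other code changes anywhere
DROP SYNONYM dbo.Customers;
CREATE SYNONYM dbo.Customers FOR LinkedServer.OtherDatabase.dbo.Customers;
```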
 |
|
RickD
Slow But Sure Yak Herding Master
3608 Posts |
Posted - 2010-01-18 : 05:07:21
|
Seems like Peter has answered all the questions on MERGE. I'll just add that SYNONYMS are well worth the time. |
 |
|
Lamprey
Master Smack Fu Yak Hacker
4614 Posts |
Posted - 2010-01-20 : 16:19:18
|
Peso,Did you ever do any investigation on using MERGE with triggers? I assume that it works correctly, but I haven't had a chance to verify. |
 |
|
russell
Pyro-ma-ni-yak
5072 Posts |
Posted - 2010-01-20 : 16:30:00
|
Some of my favorite features are:

CTEs -- but be careful... while they may be more elegant, oftentimes they perform FAR worse than #temp tables or @tableVars or even correlated subqueries.
Row_Number(), Rank() Over -- I love this.
Partitioning -- for an OLTP system this may not be as exciting for you as in a DW application (but it is still likely to be useful).
Synonyms -- pretty cool.
SSIS -- ok, it actually isn't one of my favorites, but we sure use it a lot. Problem is I spent all those years using DTS, then had to learn to do it a completely different way lol.

For 2008, I'm a big fan of Resource Governor. We have a sandbox server that business analysts use (and abuse). Resource Governor made it a lot more palatable. |
 |
|
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts |
Posted - 2010-01-20 : 17:39:08
|
Table compression and built-in backup compression. Can't understand why they limited these features to Enterprise Edition. Great for when your users just can't stand to let you delete sales info from 12 years ago. And NOT CLR. CODO ERGO SUM |
 |
|
spirit1
Cybernetic Yak Master
11752 Posts |
Posted - 2010-01-20 : 18:02:49
|
From a Dev standpoint:

1. Ranking functions: ROW_NUMBER, DENSE_RANK, RANK and NTILE. These really simplify a lot of things like paging, ranking and stuff. Peso graduated from that.
2. Table valued parameters. I found these to be really useful when wanting to simulate IN (@csvList).
3. MERGE. Enough said already. Note that it still has the same locking problem as the good old update/insert version.
4. CROSS APPLY. If you have UDFs, rewriting them can often bring quite a speed increase.
5. SERVICE BROKER! This is a true gem that no one seems to use. Adds async processing capabilities to the db engine. The possibilities are amazing: from async triggers to data queueing and processing when the server has time.

From a DBA standpoint:

1. Extended Events. Designed for high-perf environments to replace SQL Trace. Has a few problems but nothing too huge. I especially like the "show me the SQL statement when the error happens" - you can't do that with Profiler.
2. Event notifications. This is an async notification system for a subset of Profiler trace events. I use this to get notified of deadlocks. It's based on Service Broker. What I do is: when the deadlock happens the event fires and I send myself a mail with the deadlock XML.
3. DMVs. Especially those for tracing missing, used and unused indexes: sys.dm_db_missing_index_details, sys.dm_db_missing_index_group_stats, sys.dm_db_missing_index_groups, sys.dm_db_missing_index_columns.
4. Backup compression and data compression. In 2008 R2 backup compression comes to the Standard edition. It compresses your backups. Enough said. Data compression hugely lessens your IO at a small CPU price. Shouldn't be too high a price though.
5. CLR. CLR is a great tool for certain tasks that don't do well in set-based code. You should check for yourself if you have those.

For security, check out Transparent Data Encryption (TDE). Some other useful ones are Resource Governor, and Data Collector with the Management Data Warehouse.

___________________________________________________________________________
Causing trouble since 1980
Blog: http://weblogs.sqlteam.com/mladenp
Speed up SSMS development: www.ssmstoolspack.com <- version 1.7 out! |
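As a concrete example of the missing-index DMVs mentioned above, a rough ranking query over suggestions collected since the last restart (the weighting by seeks x impact is one common heuristic, not an official formula):

```sql
-- Top missing-index suggestions, crudely ranked; stats reset on restart
SELECT TOP (10)
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.user_seeks,
    s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;
```

Treat the output as hints, not orders: the optimizer proposes indexes per-query and does not consider maintenance cost or overlap with existing indexes.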
 |
|
russell
Pyro-ma-ni-yak
5072 Posts |
Posted - 2010-01-20 : 18:26:05
|
have to agree with DMVs. That's actually the "new" feature I use the most since SQL2K5 came out. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-21 : 02:24:11
|
Some good stuff here. @Spirit1 I have printed your articles on Service Broker and will read soon. |
 |
|