Please start any new threads on our new site at https://forums.sqlteam.com. We've got lots of great SQL Server experts to answer whatever question you can come up with.

Topic: scripting data


sfortner
Yak Posting Veteran

63 Posts

Posted - 2007-06-28 : 17:39:27
I haven't put much thought into this yet, but thought I'd seek the advice of the gurus. We have a C# app we're distributing with a SQL Server 2005 Express db, and we need to support the database install along with all data, and be able to update the database after backing up user-defined data. I have the sql/stored procs to do this, and we are currently using InstallAWare to call the scripts using sqlcmd and either install the app or update DLLs. I haven't ventured into writing sql with the CLR yet, but want to. Is there a cleaner way someone has used that will allow me to install/add to/update/delete the database? We have months before distributing the release product, so I have plenty of time (yeah, right!) to implement something. InstallAWare, the guy that's implementing this tells me, can hide the command prompt windows so you don't see an ugly command prompt pop up each time he calls a script, but this still seems crude to me on the backend. An app could be written that takes a sql file as an argument, parses it and runs the commands, but that's still external to the installer, which already has a really nice, clean interface. Any ideas? TIA,

--Steve

rmiao
Master Smack Fu Yak Hacker

7266 Posts

Posted - 2007-06-28 : 22:32:39
What do you mean 'install/add to/update/delete the database'? Work on db objects or manipulate data in the db?

Kristen
Test

22859 Posts

Posted - 2007-06-29 : 04:14:42
Couldn't you just "restore" a backup of a database that you have carefully created to be a perfect "starter database"?

Kristen

Celia
Starting Member

8 Posts

Posted - 2007-06-29 : 04:51:01
Restoring the database sounds like the best option of all to me. But be aware of users when you restore the database. Every server keeps an internal ID (SID) for each login:
select srvid,
sid,
name
from master..sysxlogins
where name = 'test'

Even when a login with the same name exists on two different servers, the internal IDs are different on each server.

On the other hand, every database has a list of users that have been granted access to that database. Every time we grant access to a user, an entry is created in the sysusers table of our database. That entry refers to the login in sysxlogins, using the SID column.
select uid,
name,
sid
from sysusers
where name = 'test'

When you restore a database on one server from a backup made on another, the SID reference between sysusers and sysxlogins can be lost. The same issue can happen if you restore a backup of your database that you made a long time ago. Here is a deeper explanation of this issue and how to solve it:

[url]http://vyaskn.tripod.com/troubleshooting_orphan_users.htm[/url]
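For instance, a minimal sketch of the fix that link describes, using sp_change_users_login on SQL Server 2005 (the 'test' name is just a placeholder):

```sql
-- Run inside the restored database.
-- 'Report' lists users whose SID has no matching login on this server:
EXEC sp_change_users_login 'Report';

-- 'Auto_Fix' re-links an orphaned user to the login with the same name:
EXEC sp_change_users_login 'Auto_Fix', 'test';
```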

Regards,

Celia

Kristen
Test

22859 Posts

Posted - 2007-06-29 : 09:02:50
I'd be inclined to have no user in the "starter database" and create the appropriate user, adding it to a Group to set the permissions etc., as part of the install.

That way the actual user can be something appropriate for the user's server - and not clash with any pre-existing user of the same name etc.

The Permissions Group/Role within the database can, of course, be under your control as part of the "starter Database"

Kristen

rudesyle
Posting Yak Master

110 Posts

Posted - 2007-06-29 : 10:19:15
quote:
Originally posted by Kristen

Couldn't you just "restore" a backup of a database that you have carefully created to be a perfect "starter database"?

Kristen



Agreed. This is as easy as pie, and what we do on just about every project.

sfortner
Yak Posting Veteran

63 Posts

Posted - 2007-06-29 : 10:44:09
When you say restore, I think that's what we're doing now. Although it's not really a restore, it's a create. Or, in the case when updates are needed (when tables or stored procedures change), we're planning to let the installer call a script that just makes those changes.

I created a script by using Management Studio (tasks, generate scripts, etc.) and I modified that script by:
1) pulling out the scripted db create section and replacing it with a section that checks if the db exists, kills connections, drops the db, then creates it manually in the same location as the master db (this is an instance of SQL Server 2005 Express)
2) removing all scripted create-login lines and replacing them with the creation of one user
3) inserting the data

It works fine, but our installer is just calling sqlcmd to install it fresh. If the database already exists, we'll just update it if necessary: basically, checking if a column exists and adding it if necessary, dropping and recreating stored procedures that have changed, adding constraints and indexes if they don't exist, etc. It works, but my question is whether this is the right way to do it. Calling a script using sqlcmd from the installer seems crude. Or is this okay?

As far as the user, we have one user I'm creating this way:

IF NOT EXISTS (SELECT * FROM sys.server_principals WHERE name = N'etuser')
CREATE LOGIN etuser WITH PASSWORD='XXX', CHECK_POLICY=OFF;
GO
IF NOT EXISTS (SELECT * FROM sys.database_principals WHERE name = N'etuser')
CREATE USER [etuser] FOR LOGIN [etuser] WITH DEFAULT_SCHEMA=[etuser]
GO
GRANT CONNECT TO [etuser]
IF NOT EXISTS (SELECT * FROM sys.schemas WHERE name = N'etuser')
EXEC sys.sp_executesql N'CREATE SCHEMA [etuser] AUTHORIZATION [etuser]'

But I'm currently using sa to connect to the db from the app. Yeah, I know that's bad, but I got tired of applying permissions to everything for that user, at least in the development stages of our project. Our user will need extended rights though, because this user will have to have the ability to check and repair the database. This db will be installed on a laptop, which will be used in the field, and so it has just one user. I've read about that user problem you're referring to, Celia, but didn't think that was an issue with 2005 and the way I'm doing it above. I haven't verified it in a long time since I'm using sa for now. So, we're doing something like what Kristen is suggesting, and it seems to work. Any other suggestions welcome. TIA,


--Steve

Kristen
Test

22859 Posts

Posted - 2007-06-29 : 11:39:00
"When you say restore, I think that's what we're doing now"

No you aren't, you are running scripts to create a database, create Sprocs, etc.

Do all that on a new "Starter Database" in your office, get it just right, back it up, include the Backup File in the files you ship, and then on the client sites have your Install program just RESTORE that database. There is a single command to restore a database, and the restored database will be identical to the Starter Database you maintain in your office - including all the tables, Sproc, etc. and also any Starter Client Data that you might want to have in there.

"But I'm currently using sa to connect to the db from the app. Yeah, I know that's bad,"

It's worse than bad ... it's scary beyond belief. If you are installing a database on client machines in that format then you are heading for a fall ... something is going to spider the SA password and then wreak havoc ... plus your application couldn't be installed here, because we wouldn't release the SA password to you ... Do you leave copies of the master key to your house wherever you go? No? Didn't think so ...

"I got tired of applying permissions to everything for that user"

Why? How hard can it be? Just make a ROLE and grant the permissions to that. We include the GRANT EXECUTE permission on the bottom of our Sproc Template, so any new Sproc we make automatically includes our default permissions, which we change when necessary - which is almost never.

Then in your Starter Database (or in the client's database once you have Restored that) create a user and grant that role to it.
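A sketch of that approach (the role name and sproc name here are made up; the etuser login and 'XXX' password are from Steve's script above):

```sql
-- In the Starter Database: one role owns all application permissions
CREATE ROLE db_ElanToolsUser;

-- Granted per sproc, e.g. from the bottom of a sproc template
GRANT EXECUTE ON [dbo].[SomeSproc] TO db_ElanToolsUser;

-- At install time on the client: create the real login/user
-- and just add it to the role
CREATE LOGIN etuser WITH PASSWORD = 'XXX', CHECK_POLICY = OFF;
CREATE USER etuser FOR LOGIN etuser;
EXEC sp_addrolemember 'db_ElanToolsUser', 'etuser';
```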

"So, doing something like what Kristen is suggesting, and it seems to work"

Except that it doesn't sound like you are doing it that way ...

Kristen

sfortner
Yak Posting Veteran

63 Posts

Posted - 2007-06-29 : 11:54:18
Alright, but I don't have control of the location when I just restore the backup. That location is hard-coded in the backup. What if the user has two HDs and puts all applications on the D drive, for instance? What if they didn't install the SQL Server instance in the same location as mine? When I create, I tell it to install in the same location as the master db for our instance. Plus, we're installing a new instance of SQL Server 2005 Express, explicitly for our stuff, and we only install our database there. We have the key (the sa pw) and keep it locked up in our installer, and don't give it out. We have to have the key to install the db server instance, so there's no way around this. A lot of our users are probably not going to have SQL Server 2005 Express installed, and they won't have our instance installed unless they've installed our stuff previously.

And when I talk about applying permissions to sprocs, I'm talking about all the new ones I've had to create over the months, and hate it when I forget one. Maybe you have a better way to apply permissions on new sprocs automatically?

Maybe that's a good point about the spider, but how is that different than not using sa? The sa ID still exists, and we control it all since we own the db instance and the db. I'm arguing for sa I guess, but really had planned on changing before shipping.

Here's the section I use in the "starter database" to check if the db exists, kill connections, drop and then recreate the db:

-- go through all open processes and kill them, drop the ElanTools database, then recreate it
SET NOCOUNT ON

USE [Master]

DECLARE @spid smallint,
@SQLString varchar(50),
@cmd nvarchar(4000),
@data_path nvarchar(256)

-- retrieve data path based on location of master database
SET @data_path = (SELECT SUBSTRING(physical_name, 1, CHARINDEX(N'master.mdf', LOWER(physical_name)) - 1) FROM master.sys.master_files WHERE database_id = 1 AND file_id = 1);

IF EXISTS (SELECT name FROM sys.databases WHERE name = N'ElanTools')
BEGIN
DECLARE ffcursor CURSOR FAST_FORWARD FOR
select es.session_id from sys.dm_exec_sessions es join sysprocesses sp on (es.session_id = sp.spid)
where db_name(sp.dbid) = 'ElanTools'

OPEN ffcursor

FETCH NEXT FROM ffcursor
INTO @spid

WHILE @@FETCH_STATUS = 0
BEGIN
set @SQLString = 'KILL ' + cast(@spid as varchar(10)) -- spids can exceed 3 digits
exec(@SQLString)

FETCH NEXT FROM ffcursor
INTO @spid
END

CLOSE ffcursor
DEALLOCATE ffcursor

DROP DATABASE ElanTools
END

-- to allow advanced options to be changed
EXEC sp_configure 'show advanced options', 1
-- To update the currently configured value for advanced options
RECONFIGURE
-- To enable the feature
EXEC sp_configure 'xp_cmdshell', 1
-- To update the currently configured value for this feature
RECONFIGURE

-- remove the data and log files
set @cmd = 'IF EXIST "' + @data_path + 'ElanTools_Data.MDF" DEL "' + @data_path + 'ElanTools_Data.MDF"'
EXEC xp_cmdshell @cmd, no_output

set @cmd = 'IF EXIST "' + @data_path + 'ElanTools_Log.LDF" DEL "' + @data_path + 'ElanTools_Log.LDF"'
EXEC xp_cmdshell @cmd, no_output

-- create database in same directory with master database
IF NOT EXISTS (SELECT name FROM sys.databases WHERE name = N'ElanTools')
BEGIN
EXECUTE ('CREATE DATABASE [ElanTools] ON PRIMARY
( NAME = ElanTools_Data, FILENAME = ''' + @data_path + 'ElanTools_Data.MDF'', SIZE = 20MB, MAXSIZE = UNLIMITED, FILEGROWTH = 10% )
LOG ON
( NAME = ElanTools_Log, FILENAME = ''' + @data_path + 'ElanTools_Log.LDF'', SIZE = 5MB, MAXSIZE = 2048GB, FILEGROWTH = 10% )');
END

SET NOCOUNT OFF
GO
-- end drop/add database section

I have been the owner of the db and am the primary db developer, and do get it just right before creating the script. It matches mine that way, and I confirm that with Red Gate's sql compare and sql data compare. Thanks for all the helpful feedback,

--Steve

Kristen
Test

22859 Posts

Posted - 2007-06-29 : 12:14:04
"I don't have the control of location when I just restore the backup"

You can restore it to any desired location as part of the Restore syntax

See: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=55210&SearchTerms=RESTORE%20syntax%20/%20example
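For example (disk path and backup file name made up; the logical file names are the ones from Steve's create script):

```sql
RESTORE DATABASE ElanTools
FROM DISK = N'C:\Install\ElanTools_Starter.bak'
WITH MOVE 'ElanTools_Data' TO N'D:\SQLData\ElanTools_Data.MDF',
     MOVE 'ElanTools_Log'  TO N'D:\SQLData\ElanTools_Log.LDF',
     REPLACE   -- overwrite any pre-existing ElanTools database and files
```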

"but how is that different than not using sa"

If no application connects with SA it's hard to spider. You can use a dictionary attack, but a strong password makes that hard work. Your application ought not to be running as SA - that allows access to the disk, Master, MSDB, and all other databases, and so on.

Whereas an "application user" created by you would be allowed access to ONLY the application database and ONLY to those tables / Sprocs / etc. that you permit. If you want your "application user" to have Read / Write access to everything there is a system-provided Role you can Grant it.

Our SA password is locked in the safe. If it is needed for any reason there is a procedure to get it: it is used, a new password is allocated to SA, and that is written down and locked in the safe for next time. Clearly we don't need to use it every day of the week!

"EXEC xp_cmdshell @cmd, no_output"

Well, if you are going to do that sort of stuff you are going to need hefty SysAdmin type user permissions.

But I can't see why you would need to - with RESTORE you can overwrite any pre-existing files.

"I have been the owner of the db and am the primary db developer, and do get it just right"

Excellent! You already have a perfect "Starter Database" then ...

Kristen

sfortner
Yak Posting Veteran

63 Posts

Posted - 2007-06-29 : 12:30:31
Okay, didn't know that about restore. I've only restored backups graphically. Also didn't know restore would overwrite existing files. That's pretty cool. On second thought, I have created the shell db, then restored on top of that, so I guess I knew it didn't really overwrite files; it just uses the current one and updates it. Sounds like a backup restore has a built-in way to take care of an existing db, but you'd still need sysadmin rights to restore a backup, won't you? We're currently calling sqlcmd with sa since we created it.

Our ETUser that I set up only has access to stored procedures and one view. Access to tables? Ha - that would violate the reason for stored procedures - security! Yeah, I know sa has access to all that, but I really hadn't planned on leaving that as is. But it's educational to talk about the myriad problems associated with sa.

But, you guys still haven't addressed my original question. Is there a better way to install than just calling sqlcmd in a command prompt? I had thought about an app that lets you pass in a sql file, but that's still essentially the same thing as using sqlcmd.

--Steve

Kristen
Test

22859 Posts

Posted - 2007-06-29 : 12:39:31
"it just uses the current and updates it"

Yup, but also in the sense that it will "size" the database according to what the Backup was made from - so, if you want to, you can make your Starter Database 100MB, albeit "empty", and that's how the client database will start off after the Restore is done.

"you'd still need sysadmin rights to restore a backup won't you? "

See BoL for details of the permissions needed. There are potential issues around CREATE DATABASE permissions and so on.

However, your Application is going to need to be able to Backup and Restore isn't it?

"Is there a better way to install than just calling sqlcmd in a command prompt?"

Probably not. It's either some application language, or shell to the command line and use OSQL or somesuch.

Kristen

sfortner
Yak Posting Veteran

63 Posts

Posted - 2007-06-29 : 13:22:19
Cool - I hear you on the size. If I start out too small, will page splits occur as it grows? The application will need to be able to back up existing user data, then have a new updated db put in place (or a new db installed), then the existing user data restored. Glad to hear there's no real alternative to sqlcmd, in a way, because that's one less thing I have to do. :) Looks like Nigel's backup and restore uses xp_cmdshell (http://www.nigelrivett.net/SQLAdmin/s_RestoreLatestBackup.html), so I don't feel too bad about those permissions.

Looks like you're just saying restoring a backup is a cleaner way to do it. I think the main issue is that I didn't think restoring a backup gave us enough control. The only thing I don't like about this is the inability to compare the text of backups. Or maybe that's possible? I will from time to time compare one script with another that's checked into Subversion.

What about updates? Once the user has the db installed, we'll just be using a running script to make incremental updates to tables, sprocs, etc., so is that the best way to handle this? Just call it with sqlcmd? I guess I need to create another power user with rights to run this kind of script?

--Steve

Kristen
Test

22859 Posts

Posted - 2007-06-29 : 13:47:58
"If I start out too small"

Well, if the Data and Log files are extended in, say, 1MB pieces the file is going to get pretty fragmented!

But anything else you set on the Starter Database will be carried through to the client after Restore: collation, database attributes, and so on. (A default database on Express is probably going to be AutoClose; you may, or may not, want that.)

"Looks like you're just saying restoring a backup is a cleaner way to do it."

Well, it's cleaner, but it is probably also less effort for you. Just make the Starter Database right.

If you use a script to create the DB and then all the objects etc., how do you debug that when it goes wrong in the field? I reckon that will be a nightmare, and that's principally my issue. Plus all the time to make the script, and make it robust, which you now don't have to do.

"The only thing I don't like about this is the lack of ability to compare the text of backups."

Don't follow you on that one.


"What about updates? Once the user has the db installed, we'll just be using a running script to make incremental updates to tables, sprocs, etc., so is that the best way to handle this? Just call it with sqlcmd? I guess i need to create another power user with rights to run this kind of script?"

Well, by that time the Application and the Database are in place. So your Application can deal with running the update. The best way I have seen is to ship all updates as XML and have the App chew on the XML and update the database accordingly. The XML is probably just CREATE/ALTER stuff ... but it allows it to be packaged, and the App can check that it is running everything in "version order" or whatever - so it can bail out if asked to run Version 3 but the current version = 1.

But in order to do that you have got to be able to create scripts to get you from Version 1 to Version 2 - every database change, and all appropriate data changes that go with it. That's quite a bit of work in itself.

I wrote up some stuff about that if you are interested here:
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=55210&SearchTerms=Script+Development+Database+Changes+to+Production+Database

Kristen

sfortner
Yak Posting Veteran

63 Posts

Posted - 2007-06-29 : 15:44:56
Good point on the fragmentation. I'll check out the restore backup and see if it works properly. I've written several stored procedures to insert data (so we don't have to worry with the identity columns or ID fields), as another guy is handling the data part now, and creates the installer too. He gets the data, then exports it to several different sql files that he calls during installation, based on what the user wants to install. I think leaving that as is would probably work and just run with sqlcmd. He came up with the idea of theme packs, the ability to install just what the user wants and no more. They always have the option of installing the minimum or the maximum, or anything in between.

The current script with the data used to take about 4 minutes to run. With the majority of the data now pulled out and inserted at install time, the full db script takes less than 30 seconds to run/create. My experience with backup restores of larger databases is that they're fast, and that may help us on the speed side of things. We haven't had any issues with bugs in the main script yet - well, none that weren't caused by me anyway, and I fixed those. What I mean by not being able to compare backup files is that a backup is not human readable. I can't easily check it against a previous revision in Subversion. I have to restore to another default instance on my machine and compare. What a pain, but I can live with it.

As far as the updates, sql server 2005 express doesn't support xml anymore. That seems like a pain to stick the update scripts into xml and update when we can handle that in a script. Not sure of the advantage when the installer will handle that the same way as the initial install. BTW, to anyone using InstallAWare, we never had any luck using their built-in sql support. Seems that it isn't fully compatible with certain sql statements or something - we have had to open up a command prompt and run the sql script using sqlcmd (or the now obsolete osql if you're using sql server 2000).

As far as your post on scripting updates to production goes, I'm not sure I like it. I was planning on one update script, noting the date in the comments above each new section, and keeping just one file on a go-forward basis that updates their SQL Server with minor updates (1.1, 1.2, 1.3, etc.), but going with a new database each time we jump to a major revision (2.0, 3.0, etc.). That way, our update script doesn't get too unwieldy. We have to distribute this, so several files seems painful to handle in an installer.

I don't know why you mentioned that if your alter table 2 update failed, alter table 3 was run outside a transaction. If the whole thing is wrapped in a transaction, and you store the @@ERROR in a local @Error var, you can check it each time, if it's 0, before reassigning it with a new @@ERROR value. That way, at the end of your script, you just call

if (@Error <> 0)
    rollback transaction
else
    commit transaction

I've been doing it this way for years and it seems to work fine. Not sure if there's something I'm missing on your update or not. Thanks for all the good and valuable feedback so far...
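For what it's worth, the pattern described above might be sketched like this (table and column names are made up). One caveat: @@ERROR is reset by every statement, including an IF, so it has to be captured immediately:

```sql
BEGIN TRANSACTION

DECLARE @e int, @Error int
SET @Error = 0

ALTER TABLE [dbo].[SomeTable] ADD [SomeColumn] [bit] NOT NULL DEFAULT ((0))
SET @e = @@ERROR                  -- capture immediately
IF @Error = 0 SET @Error = @e     -- keep only the first failure

UPDATE [dbo].[OtherTable] SET [SomeFlag] = 1
SET @e = @@ERROR
IF @Error = 0 SET @Error = @e

IF (@Error <> 0)
    ROLLBACK TRANSACTION
ELSE
    COMMIT TRANSACTION
```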



--Steve

Kristen
Test

22859 Posts

Posted - 2007-06-30 : 05:11:46
"He gets the data, then exports it to several different sql files that he calls during installation, based on what the user wants to install"

My instinct is still to say that this will break during install somewhere for some reason, and makes the job harder to maintain.

I would either:

1. Have all the data in the Starter Database and [at install time] selectively delete the data
or
2. Have all the data in "temporary tables" in the Starter Database and selectively "insert" it from those as part of install, and then throw away the temporary tables.

The benefit of this is that to add a bit more data into your install you just have to get it into the Starter Database - instead of messing around with SQL INSERT scripts, and all the regression testing etc. that has to go with that.
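Option 2 might look something like this (the staging table and @SelectedTheme are hypothetical; the Icons columns are borrowed from Steve's index snippet below):

```sql
-- At install time, copy only the rows for the theme packs the user
-- selected from a staging table shipped in the Starter Database
DECLARE @SelectedTheme varchar(50)
SET @SelectedTheme = 'Default'

INSERT INTO [dbo].[Icons] (ParentTheme, IconName, IconFile)
SELECT ParentTheme, IconName, IconFile
FROM [dbo].[Staging_Icons]
WHERE ParentTheme = @SelectedTheme

-- then throw the staging table away
DROP TABLE [dbo].[Staging_Icons]
```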

But that's not to say that the Scripts method won't work! I just think in the long run it will have a higher maintenance cost.

Alternatively I would have some sort of "import" routine built into the application which can devour an XML file. That would help with the "Theme Packs" approach - allowing Theme Packs to be applied post-installation too.

"What I mean by not being able to compare backup files is that a backup is not human readable"

OK, I'm with you now. Yes that's true. We upgrade our "Starter Database". We use a fully automated Regression Test system to test that database, and we make comparisons against the previously shipped version and have a peer review of all the changes to decide that they are sensible.

You can take the view that you can read your Install Script more easily, but you aren't checking it AFTER it is installed! You are checking what you ASSUME it will be after it is installed ...

"sql server 2005 express doesn't support xml anymore"

Didn't know that. But actually I think you would be better off handling the XML in your application, and delivering data / DML statements, whatever, to the database from there. You should be able to do that in a way that runs each statement in turn, checks for any error, and immediately rolls back and bails out if you hit an error - the process needs to be immune to Disk Full, Power Cut, all that sort of stuff.

"... when we can handle that in a script"

But can you handle all the errors in a script?

If your script goes:

Alter Table - Add Column

Insert some new System Data

Alter Table2 - Add Column2

...

is your script going to be able to recover from an error in the INSERT? And if there is a power cut what then? will the script be able to rerun, or will you start getting "Column already exists in table" type errors?

You need a framework that runs the script and bails out on error. Sure, you don't have to ship that sort of stuff in XML; you might actually prefer a home-grown format. But it does need to be able to work out which version is installed, what versions are prerequisites for the new script, whether the client has Option-X and thus needs Patch-Y, and all that sort of stuff. So there is potentially quite a lot of "metadata" as well as the "Alter Table Add Column ..." DDL stuff which needs to be part of the "Upgrade Script".

(If you are just starting out on this product you may not have anything complex to worry about yet, but worth thinking about whether that is likely to be needed in the future)

"one update script per minor release"

That sounds fine. It is in effect what we do, except that we create the script by concatenating each change we make [and actually we have multiple scripts, but that is not relevant, it could be all in one script]. Sounds to me like you are planning to use some sort of SQL Compare to compare copies of the old 1.1 version database with a new 1.2 version database, and make your script from there.

That's fine, and will work well. I personally don't think that scales well to doing things like Daily Builds. Let's say that you make a comparison, generate the script, etc., and then at the last minute find a bug; you fix that in the 1.2 database and start the compare process all over again. For us that fix would be an incremental add-on to all the scripts we already have on the shelf. Plus we freshen up our QA database daily [by just applying all the new change scripts] and run our automated Regression Tests overnight. And we can easily apply what has worked on QA to our Beta site ... long before we get to an actual 1.2 release.

"going with a new database each time we jumped to a major revision"

So you would plan to install a new database and then migrate over the user's data?

We've thought about that and never been able to convince ourselves it's a better approach.

If you are tooled up to get from 1.1 to 1.2 and then 1.3 its exactly the same process to get to 2.0 - just a bigger script maybe!

However, if you script 1.1 to 1.2 to 1.3 and then have a data-copy-over process to get to 2.0, you have a whole new raft of rollout stuff to build and support. And is it going to be able to understand the database format for all 1.n versions? (Quite a challenge if the database structure has changed in those sub-releases.) Or are you going to have to update a 1.n client to 1.3 before you can apply the 2.0 upgrade? [We figure that if we have to get the client to 1.3 first then we might as well use the same methods to get the client to 2.0.]

"the whole thing is wrapped in a transaction, and you store the @@ERROR in a local @Error var"

Well you can't do that with statements that have to run in their own batch.

So you cannot put CREATE TABLE and CREATE FUNCTION in the same batch - CREATE FUNCTION must be the only statement in its batch.

So you have

CREATE TABLE
...
GO
CREATE FUNCTION
...

and once you go past the GO without bailing out, you have lost the context of any local variables.
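A tiny illustration of the problem (object names made up). Run through sqlcmd, the final batch fails with "Must declare the scalar variable '@Error'", because the variable died at the first GO:

```sql
DECLARE @Error int
SET @Error = 0

CREATE TABLE [dbo].[Demo] (id int)
SET @Error = @@ERROR
GO   -- CREATE FUNCTION must start its own batch ...

CREATE FUNCTION [dbo].[DemoFn]() RETURNS int AS
BEGIN RETURN 1 END
GO

-- ... and by now @Error from the first batch no longer exists:
IF @Error <> 0
    PRINT 'something failed'
```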

Hence my suggestion that the script is run by an application which can run each "block", check for error and then bail out, rather than using a command-line utility like OSQL. (And that's why I was proposing having the "Script" in XML, albeit that by "XML" I actually mean "some structure that your application can use to parcel up the script".) Sorry, repeating myself! I'll bugger off now ...

Kristen

sfortner
Yak Posting Veteran

63 Posts

Posted - 2007-06-30 : 12:14:53
Every time I open my mouth, I open up a new can of worms. :) Thanks for all the helpful feedback, it's probably good to be thinking about all this.

"1. Have all the data in the Starter Database and [at install time] selectively delete the data
or
2. Have all the data in "temporary tables" in the Starter Database and selectively "insert" it from those as part of install, and then throw away the temporary tables."


The problem is that we constantly make new theme packs. We have to support the addition of data-related changes on a go-forward basis, and they won't be in the starter database. There will be new ones all the time. So, the mechanism in place now is one way. Not sure how that could break; it's not insert statements, it's stored procedure calls that insert the data. The installer can check success/failure too, and we plan to do that. The problem with incorporating the addition of the theme packs into the application is that it's not really an installation then. From a user standpoint, it's cleaner to update the database with the theme packs through the installer.

"But that's not to say that the Scripts method won't work! I just think in the long run it will have a higher maintenance cost."

I'm glad you said that, because it's less maintenance cost for me! I created several stored procedures that can generate the stored procedure calls that do the inserting of the data for these theme packs, which means I don't have to do it! The installation guy has the tools to generate the insert scripts now, along with the theme pack files. One more thing off my plate is a very good thing. Plus we have other data like templates, IR commands and serial commands, for which I've also written stored procedures that generate the inserts (stored procedure calls to do the inserts), so our Starter Database install is very small and takes less than 30 seconds to install. That way, I just have to worry about structural changes, and I've given the installation guy the tools to take care of the data. He's happy learning sql and I'm happy because I don't have to mess with it. And we've got a pretty cool installer process that works like some other Microsoft software where you can choose what to install.

"you aren't checking it AFTER it is installed! you are checking what you are ASSUMING it will be after it is installed ..."

As far as checking the insert after installation, I can only check my default instance against our ElanTools instance and see if the updates worked properly using SQL Compare. I have to assume it's going to work in the field, and I agree that sounds dangerous compared to restoring a backup. I'm looking into the backup restore this weekend.

"is your script going to be able to recover from an error in the INSERT? And if there is a power cut what then? will the script be able to rerun, or will you start getting "Column already exists in table" type errors?"

The inserts are typically handled through stored procedures, and we have update flags on the sprocs to determine if we want to update existing data. It's built to handle that. On other inserts, I ALWAYS call if not exists first (and in the sprocs). It'd be stupid not to. Yes, it's more work, but it avoids problems. Same thing for table updates. I call something like

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Models]') AND type in (N'U'))
BEGIN
-- if the table exists and the column doesn't
IF (COLUMNPROPERTY(OBJECT_ID('Models'), 'ConnectHostPort', 'Precision') IS NULL)
    ALTER TABLE [dbo].[Models] ADD [ConnectHostPort] [bit] NOT NULL CONSTRAINT [DF_Models_ConnectHostPort] DEFAULT ((0))
END

I ALWAYS check if the column exists before adding it. I need to find a way if a constraint exists, haven't done that before, but sure it's possible. And there's a drop_existing parameter for indexes if you want to recreate them, like

CREATE UNIQUE CLUSTERED INDEX [IX_Icons] ON [dbo].[Icons]
(
[ParentTheme] ASC,
[IconName] ASC,
[IconFile] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = ON, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]

so if I need to reconfigure indexes I can easily do that too. I've checked it - it works.
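Going back to the constraint-exists check I mentioned above: something along these lines should work, querying the sys.default_constraints catalog view (names taken from my earlier example; just a sketch I haven't run in anger yet):

```sql
-- Add the default constraint only if it isn't already there
IF NOT EXISTS (SELECT * FROM sys.default_constraints
               WHERE name = N'DF_Models_ConnectHostPort'
                 AND parent_object_id = OBJECT_ID(N'[dbo].[Models]'))
BEGIN
    ALTER TABLE [dbo].[Models]
        ADD CONSTRAINT [DF_Models_ConnectHostPort] DEFAULT ((0)) FOR [ConnectHostPort]
END
```

The same IF NOT EXISTS pattern works for check and foreign key constraints via sys.check_constraints and sys.foreign_keys.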

As far as errors in the update script that need to be rolled back: I don't think that's a problem, but I need to check that somehow. I'm not planning on using any GO statements in the update. Why would I? I'm planning on wrapping the whole thing up in a transaction and rolling it back on failure. I'm not sure what you mean by "Well you can't do that with statements that have to run in their own batch." Are you saying separated by GO statements? We don't have separate batch files, just one update file, so that's all the installer has to deal with. "So you cannot do CREATE TABLE and CREATE FUNCTION in one statement." Why not? You mean they have to be separated by a GO statement? If they are, you're saying you can't wrap that in a transaction and get it to roll back if it's in the same file. If so, that's news to me.
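If the whole update really can run as one batch, SET XACT_ABORT ON is worth knowing about: with it on, most run-time errors automatically roll back the whole transaction and stop the batch. A rough sketch (untested here, and it doesn't help with compile errors or with statements that must sit in their own batch):

```sql
-- Sketch: whole-update transaction; with XACT_ABORT ON, most
-- run-time errors abort the batch and roll everything back
SET XACT_ABORT ON
BEGIN TRANSACTION
-- ... schema and data changes go here ...
COMMIT TRANSACTION
```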

"So you would plan to install a new database and then migrate over the user's data?"

Yes. We'll back up user-defined data (I have a sproc for that too that works) and then restore. The only issue is when the sprocs that are called by that backup process change. But I came up with the idea of a preupdate script that the installer can call to update the stored procedures that build the data. I hope that works out, and hopefully there are no major dependencies that prevent it from working. Maybe it would have to support the addition of columns in a table too, but that's okay if that's in the preupdate script too.

"However, if you script 1.1 to 1.2 to 1.3 and then have a data-copy-over process to get to 2.0 you have a whole new raft of rollout-stuff to build and support. And is it going to be able to understand the database format for all 1.n versions? (quite a challenge if the database structure has changed in those sub-releases), or are you going to have to update a 1.n client to 1.3 before you can apply the 2.0 upgrade? [we figure that if we have to get the client to 1.3 first then we might as well use the same methods to get the client to 2.0 ]"

We are planning on incremental updates, unless it's a major revision. We won't have to apply 1.2/1.3 updates if they're going to 2.0, since 2.0 will be a db restore (after backing up user data). I had thought about using a version table to update and track revisions. Then I thought it wasn't necessary if we wrap everything in a transaction in an update script, but now I'm back to thinking it might be necessary to make sure all updates are applied. For each 1.1 update, update the version to 1.1, apply 1.2 updates, then update the version again to 1.2, etc. Plus we can check that version in our app under Help | About, along with the DLL versions. For that reason alone, a db version seems essential. I was even thinking of doing a version compare and applying 1.1 updates only if the current version is 1.0. If the current version is 1.1 and you're trying to apply 1.1 updates, the compare would find you're already at 1.1, so the updates wouldn't be applied. Like

declare @ver nvarchar(50)
select @ver = name from version
-- note: use a year-first format (e.g. '2007.06.28.15.00') so that
-- string comparison matches chronological order; run the updates
-- only if the database is older than this update's version
if (@ver < '2007.06.28.15.00')
print 'run the updates'
else
print 'uh, don''t run the updates'
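So each patch could bump the version inside the same transaction as its changes, something like this (table and column names as in the snippet above; just a sketch):

```sql
-- Apply the 1.1 patch and record the new version atomically:
-- if anything fails, the version stays put along with the changes
BEGIN TRANSACTION
IF (SELECT name FROM version) < N'1.1'
BEGIN
    -- ... 1.1 schema/data changes here ...
    UPDATE version SET name = N'1.1'
END
COMMIT TRANSACTION
```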

Maybe we don't have too many more posts left in this thread! Thanks,


--Steve
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2007-07-01 : 16:36:52
"'So you cannot do CREATE TABLE and CREATE FUNCTION in one statement.' Why not?"

Because SQL Server won't let you. Try it!

"You mean they have to be separated by a GO statement?"

Yup

"If they are, you're saying you can't wrap that in a transaction and get it to roll back if it's in the same file. If so, that's news to me."

Yes, you can put them in a single transaction. You can do:
BEGIN TRANSACTION
This
GO
That
GO
The Other
Go
COMMIT / ROLLBACK

but you cannot preserve a local variable past a GO statement, so you cannot preserve the "Error State" through a script that has GO statements in it.

You can sort that out if you execute the script [statement by statement] through your application, but it's very hard to do in a pure SQL script executed by, say, OSQL.
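One partial workaround, if sqlcmd rather than OSQL runs the script: its :on error exit directive makes it stop at the first failing batch, and an uncommitted transaction is then rolled back when the connection closes. A sketch only (the CREATE TABLE here is a placeholder):

```sql
-- sqlcmd script: stop at the first error instead of trying to
-- carry error state across GO
:on error exit
BEGIN TRANSACTION
GO
CREATE TABLE dbo.Example (id int NOT NULL)
GO
COMMIT TRANSACTION
GO
```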

" We won't have to apply 1.2/1.3 updates if they're going to 2.0 since 2.0 will be a db restore (after backing up user data)."

I still think this will be a minefield if version 1.1/1.2/1.3 involve changes to database structure.

Version 2.0 will have to be prepared to import User Data for any variant of the database structure, unless by "after backing up user data" you are meaning something different?

I'm assuming you mean:

Install brand new, empty, Version 2.0 database
Import user data from the original 1.n database

"I had thought about using a version table to update and track revisions"

That would help with knowing that you are on, say, 1.2 and thus only need to apply the patch for 1.3 (and can ignore the patches for 1.1 and 1.2, which are already applied). I don't know of any way to do this other than with a Version Table. Our Version Table has the name of the object, the version, and the date. Sure, the client is "supposed" to be at version 1.2, but actually for some reason we applied, say, three Sprocs from version 1.3. No actual problem with the rollout for version 1.3, because they will just be overwritten [with identical version code], but we do find this useful when something goes pear-shaped and the poor blighter doing the Support doesn't know that 3 x V1.3 Sprocs have been applied manually! Our Support Folk can just check the Version Table and see what was applied, by whom, and when. If you won't have a situation where the odd Sproc is applied out-of-step you can avoid this grisly step!
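For illustration, a minimal shape for such a table might be (table and column names here are invented for the example):

```sql
-- One possible shape for a Version Table: one row per object per
-- version applied, stamped with who applied it and when
CREATE TABLE dbo.VersionLog
(
    ObjectName sysname      NOT NULL,
    Version    nvarchar(20) NOT NULL,
    AppliedBy  sysname      NOT NULL DEFAULT (SUSER_SNAME()),
    AppliedOn  datetime     NOT NULL DEFAULT (GETDATE()),
    CONSTRAINT PK_VersionLog PRIMARY KEY (ObjectName, Version, AppliedOn)
)
```

Each patch script then inserts a row per object it touches, so Support can reconstruct exactly what was applied.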

Kristen
Go to Top of Page

jezemine
Master Smack Fu Yak Hacker

2886 Posts

Posted - 2007-07-01 : 18:48:37
quote:
Originally posted by Kristen
You can sort that out if you execute the script [statement by statement] through your application, but its very hard to do it in a pure SQL script executed by, say, OSQL.



don't know if this will help, but if you use sqlcmd.exe and :setvar or environment variables, you can do something like this (say MYVAR is an env var set at the cmd line):


use $(MYVAR)_1
create proc useless as select 1
go

use $(MYVAR)_2
create proc useless as select 1
go



elsasoft.org
Go to Top of Page

Kristen
Test

22859 Posts

Posted - 2007-07-02 : 05:13:03
I'm missing something, how does that stop all successive statements being executed if the first one errors?

(I'm sure one could write the script with some sort of test for @@ERROR and then conditional tests against the presence of a dummy table, or somesuch, but it would be impossible to debug - in the main the script will just work because there would be no errors; forcing errors for each statement to test that the error handling works would be a nightmare, so you'd never be certain that any error anywhere would be caught and handled - what about a deadlock, for example, or a syntax error? @@ERROR won't catch those AFAIK.)

So I think some App. that executes the script statement by statement and bails out [i.e. does a ROLLBACK and does not execute any more statements] at the first error is the only way to safely run an "Upgrade script".

Kristen
Go to Top of Page

jezemine
Master Smack Fu Yak Hacker

2886 Posts

Posted - 2007-07-02 : 10:16:32
there's the -b flag for sqlcmd.exe ("on error batch abort"): it makes sqlcmd exit with an error code as soon as a batch fails, so later batches aren't run. That may work for this.


elsasoft.org
Go to Top of Page
