
 Improving Design of Order Flow/Processing


Kristen
Test

22859 Posts

Posted - 2010-11-02 : 15:29:29
I'm just watching orders processing from Web to Back office / Order fulfilment database. It seems slow. Here's what we do - how would you improve it?

We have a scheduled task that runs a small APP, that APP calls an Sproc asking for work:

EXEC usp_ExportGetWork

the resultset is a single row for a single type of data record (the Sproc first returns a Customer; when there are no more of those it returns an Order, then other things like Catalogue requests)

The APP processes the single row using either API or EDI to the Order Processing System - which only provides access via API (Customers etc.) and EDI file-drop (Orders)

Then the APP calls another Sproc to indicate Success / Failure:

EXEC usp_ExportUpdate @JobType='ORDER', @ID=1234, @intErrorCode = 0, @strErrorMsg = NULL

The APP got the JobType & ID from the recordset returned by usp_ExportGetWork, and provides them back as parameters, which usp_ExportUpdate uses as the "PK" for the task. Error Code and Msg are either 0/NULL or a suitable number & message.

The database updates the record (an Order in this example)

The Batch Process loops round and calls

EXEC usp_ExportGetWork

again. If there is no more work to be done it will get a "DONE" code, and exit - to be restarted by the scheduler in another hour's time.

Obviously a SET-based approach would be much faster, but the worry is that failures then leave lots of rows in limbo which then have to be sorted out by a human - and they tend to take a cavalier approach: "Look at all these stuck orders, I'll press [RESET]" - whereas in fact they might be in-process, just going rather slowly. So the result is that duplicates get generated.

So in effect we do:

Find next row to be processed. UPDATE its status to "In progress". Run a SELECT to give a recordset for the APP.

APP calls usp_ExportUpdate which UPDATEs the Status to "Done" or "Error" (if it's an API) or to "Awaiting status update" (if it is an EDI)

A query of changed orders, run separately, will send back the actual Status for the Order, and thus it will change from "Awaiting status update" to "Processing", "Shipping", "Billed", etc.

Is there a way to use SET-based processing that will still give fine control over the state of individual orders being transferred / processed within the batch?
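One pattern that gives set-based claiming while keeping per-row state is an UPDATE with an OUTPUT clause. This is only a sketch - the batch size and hints are assumptions, and the table/column names just mirror the examples in this thread:

```sql
-- Claim up to 100 NEW orders in ONE atomic statement. OUTPUT captures
-- exactly the rows this caller flagged, so a concurrent caller cannot
-- claim the same rows (READPAST makes it skip rows another claimer
-- has locked instead of blocking behind them).
DECLARE @Claimed TABLE (ORDER_ID int PRIMARY KEY);

UPDATE TOP (100) O
SET    OrderStatus = 'IN PROGRESS'
OUTPUT inserted.ORDER_ID INTO @Claimed
FROM   ORDERS AS O WITH (ROWLOCK, UPDLOCK, READPAST)
WHERE  OrderStatus = 'NEW';

-- Hand the claimed set to the APP as one recordset. Each row still
-- carries its own status, so per-row Done / Error updates (and the
-- double-handshake) work exactly as they do today.
SELECT O.*
FROM   ORDERS AS O
JOIN   @Claimed AS C ON C.ORDER_ID = O.ORDER_ID;
```

Because only the claimed rows move to "IN PROGRESS", a crash mid-batch leaves a well-defined set to inspect or reset, rather than an ambiguous half-flagged table.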

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2010-11-02 : 16:11:21
slow as in performance, or slow as in too many pieces to the puzzle? maybe Service Broker and/or SSIS? I know SSIS and I like it very much. But I hear Service Broker is even better

If you don't have the passion to help people, you have no passion

Kristen
Test

22859 Posts

Posted - 2010-11-02 : 18:10:07
I meant slow in the sense that processing record-by-record it takes a while.

I thought about Service Broker (which I have never used), but I think that would mean that I could use set-based processing to transfer the data to Service Broker, but then my APP would still ask for each row one-by-one? Which I feel is much the same solution - is that right?

Whereas if my APP could get

SELECT * FROM MyTable WHERE MyStatus = 'NewOrder'

then it would get the records much more quickly than one-at-a-time

... but then if the APP fails it's hard to determine what has been processed, and what has not yet been processed.

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2010-11-02 : 19:33:33
I am not too sure about Service Broker, but from the documentation:
Service Broker helps developers build asynchronous, loosely coupled applications in which independent components work together to accomplish a task. These application components exchange messages that contain the information that is required to complete the task. This topic describes the following fundamental aspects of Service Broker:

* Conversations
* Message ordering and coordination
* Transactional asynchronous programming
* Support for loosely coupled applications
* Service Broker components

Is usp_ExportGetWork in a loop in the APP?
While in the loop does it get a certain customer ID as a parm? How is the customer collection built - is it another sproc?


Kristen
Test

22859 Posts

Posted - 2010-11-03 : 02:32:01
I think the Database could send ALL Orders to Service Broker (as a set), and APP could then get them one-by-one from Service Broker. But I'm not sure that is any better than current:

"is usp_ExportGetWork in a certain loop in the APP"

It looks a bit like this:

LOOP
    RS = sqlExecute("usp_ExportGetWork")
    SET strTask = RS("TASK")
    SET strPK = RS("PK_ID")
    SET strCallBackSP = RS("CallBackSP")
    IF strTask = "DONE" BREAK
    SWITCH strTask
        CASE "CUSTOMER"
            CALL API_CUSTOMER_CREATE(RS("CUST_ID"), RS("CUST_NAME"), ...)
        CASE "ORDER"
            CALL EDI_ORDER_CREATE(RS("ORDER_ID"), RS("ORDER_DATE"), RS("CUST_ID"), ...)
        CASE ....
    END SWITCH
    RS = sqlExecute(strCallBackSP, intErrorNo, strErrorMsg)
END LOOP

usp_ExportGetWork does:

SELECT TOP 1 @CustID = CUST_ID
FROM CUSTOMERS
WHERE CustStatus = 'NEW'

IF @@ROWCOUNT = 1
BEGIN
    UPDATE CUSTOMERS
    SET CustStatus = 'EXPORTING'
    WHERE CUST_ID = @CustID

    SELECT [TASK]='CUSTOMER', [PK_ID]=CUST_ID, [CallBackSP]='usp_CustomerExportUpdate',
           CUST_ID, CUST_NAME, ...
    FROM CUSTOMERS
    WHERE CUST_ID = @CustID
    GOTO MyExit
END

SELECT TOP 1 @OrderID = ORDER_ID
FROM ORDERS
WHERE OrderStatus = 'NEW'
IF @@ROWCOUNT = 1
BEGIN
...
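As an aside, a SELECT TOP 1 followed by a separate UPDATE has a small race window: two concurrent callers can both SELECT the same NEW customer before either flags it. A single atomic claim closes that window. This is a sketch only, reusing the column names from the example above:

```sql
DECLARE @Claimed TABLE (CUST_ID int);

-- Find AND flag one NEW customer in a single statement; OUTPUT tells us
-- which row we actually got. READPAST skips rows already locked by a
-- concurrent claimer instead of blocking on them.
UPDATE TOP (1) C
SET    CustStatus = 'EXPORTING'
OUTPUT inserted.CUST_ID INTO @Claimed
FROM   CUSTOMERS AS C WITH (ROWLOCK, READPAST)
WHERE  CustStatus = 'NEW';

IF EXISTS (SELECT * FROM @Claimed)
BEGIN
    SELECT [TASK] = 'CUSTOMER', [PK_ID] = C.CUST_ID,
           [CallBackSP] = 'usp_CustomerExportUpdate',
           C.CUST_ID, C.CUST_NAME -- , ...
    FROM   CUSTOMERS AS C
    JOIN   @Claimed AS T ON T.CUST_ID = C.CUST_ID;
    RETURN;
END
```

The same shape drops in for the ORDERS branch, so even with two schedulers running at once no record is exported twice.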

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2010-11-03 : 12:04:07
ha, looks like good old VBA-ish. Looks like you were doing a crude version of Service Broker before Service Broker came out :) I think it is worth trying Service Broker. You will be offloading this from front-end VBA (which is not designed for such a task IMHO) to something more robust like Service Broker, which is designed for EXACTLY your scenario. I know you might get resistance from the folks that push the [RESET] button, but you will have to present a solid case for it in terms of $$$$. Or better yet, try it out in your dev environment and see if it improves things. If you have the time it is worth learning; if it does not work out you will at least gain a new skill under your belt.

But SSIS might be a good solution as well


TimSman
Posting Yak Master

127 Posts

Posted - 2010-11-03 : 12:40:41
Would it be possible to process them by record type within a transaction? In other words, could you process all the Customer records, and if there were any problems, handle that issue, then move on to Orders, and so on.

Otherwise, Service Broker might be a consideration, but maybe tied to "live" changes to the table, meaning that instead of waiting for an hour, then processing all the records row by row, handle each change as it happens, but do so asynchronously through Service Broker. Think something like an asynch trigger. I have not done any real Service Broker work yet, but I have been researching it recently to handle some financial calculation issues.

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2010-11-03 : 14:30:36
Maybe a table-driven process + xp_cmdshell might be your solution.


CREATE TABLE [dbo].[orders_processing_task](
[orders_processing_task_id] [int] IDENTITY(1,1) NOT NULL,
[process_descr] nvarchar(255) NOT NULL,
[process_sproc] nvarchar(255) NOT NULL,
[process_sproc_parms] nvarchar(255) NOT NULL,
[process_sproc_parms_xml] xml NOT NULL,
[process_ordinal_position] [int] NULL,
[process_state] [int] NULL,
[process_succeededYesNo] [bit] NOT NULL
) ON [PRIMARY]

INSERT INTO dbo.orders_processing_task
( process_descr ,
process_sproc ,
process_sproc_parms ,
process_sproc_parms_xml ,
process_ordinal_position ,
process_state ,
process_succeededYesNo
)
VALUES ( N'ORDER' , -- process_descr - nvarchar(255)
N'usp_ExportUpdate' , -- process_sproc - nvarchar(255)
N'' , -- process_sproc_parms - nvarchar(255)
'<?xml version="1.0" encoding="utf-8"?><Parms><JobType>ORDER</JobType></Parms>' ,
1 , -- process_ordinal_position - int
1 , -- process_state - int
1 -- process_succeededYesNo - bit
)

SELECT * FROM dbo.orders_processing_task

plug it to an SSIS package and you might be good to go




Kristen
Test

22859 Posts

Posted - 2010-11-04 : 04:28:54
I worry that your approach is over-simplified - in the sense that I want to build a bullet-proof system (otherwise we get loads of support calls trying to reverse-engineer how something happened, and then we get told to prevent it happening again! - which comes back to building it fail-safe in the first place)

Let's assume I do something like this (taking Customers as an example):

BEGIN TRANSACTION

-- Get records for the batch
INSERT INTO @Temp
SELECT CUST_ID
FROM CUSTOMERS
WHERE CustStatus = 'NEW'

-- Flag them as done
UPDATE U
SET CustStatus = 'EXPORTED'
FROM CUSTOMERS AS U
JOIN @Temp AS T
     ON T.CUST_ID = U.CUST_ID

-- Select the data
-- (i.e. to pass to Service Broker, or my App (but SET based), or something similar)
SELECT Col1, Col2, ...
FROM CUSTOMERS AS U
JOIN @Temp AS T
     ON T.CUST_ID = U.CUST_ID

COMMIT

OK, so they are marked in my application as Exported. Now a number of things can happen:

Machine running APP crashes / reboots
APP has error (and had only processed half the batch)

My database is showing the records as EXPORTED, but only half of them have arrived in the target system.

We currently handle this sort of scenario with a double-handshake system.

We query the Target system every hour for "new/changed records". We will see the records we sent (or just those that have arrived OK), and we will do:

UPDATE LOCAL
SET Status = "ON-REMOTE"
FROM CUSTOMERS AS LOCAL
JOIN RemoteTargetSystem.CUSTOMERS AS REMOTE
ON REMOTE.CUST_ID = LOCAL.CUST_ID
WHERE LOCAL.CustStatus = 'EXPORTED'


I now see some records marked as

CustStatus = 'EXPORTED'

they could be "in transit" to the remote system, or they could have got missed in the Reboot / APP Error. How will Admin know which is which?

Currently, because our APP is processing them one-by-one, only one record will get stuck at CustStatus = 'EXPORTED' when a reboot / crash happens. There might also be ONE record which is really in transit, and which will succeed shortly - so if there is also an earlier record at CustStatus = 'EXPORTED' it is suspect.

I could improve this using Batch Numbers I think:

BEGIN TRANSACTION

SELECT @NextBatch = ISNULL(MAX(BatchNo), 0) + 1 FROM CUSTOMERS

-- Assign Customer Records to the Batch
UPDATE U
SET CustStatus = 'EXPORTED',
    BatchNo = @NextBatch
FROM CUSTOMERS AS U
WHERE CustStatus = 'NEW'

COMMIT

-- Select the data
-- (i.e. to pass to Service Broker, or my App (but SET based), or something similar)
SELECT Col1, Col2, ...
FROM CUSTOMERS
WHERE BatchNo = @NextBatch

and then perhaps the processing end could send back a message saying "Batch 9999 completed successfully", which would enable me to change the status on all the records in that Batch to "Transferred"; and then at some other, later, point the double-handshake would find them on the Target System, which would enable a final update of CustStatus to "ON-REMOTE"

Is ServiceBroker bullet-proof in this way?

(My understanding is I could give all the NEW Customer Records to Service Broker (and update them to "EXPORTED") and Service Broker would guarantee (i.e. failsafe) to transfer them to my APP - which could take them, one by one, and generate EDI / API to the remote system. If my APP fails then I will have one "lost" record - same as at present. The benefit would be that I could pass the NEW Customer Records to Service Broker set-based - and thus SQL would have less work to do.)

I don't see how to do this in SSIS at all - I am sure I could, but it's just as much logic and processing as my current one-by-one APP, isn't it? SSIS will either mark the whole batch as "done" and then fail part way, or will fail part way without any rows being marked as "DONE" (which will result in duplicates or a partial batch at the far end when the process next runs)
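The "Batch 9999 completed successfully" message above could be a single callback Sproc. A sketch, with invented names, of what it might look like:

```sql
CREATE PROCEDURE usp_ExportBatchComplete
    @BatchNo int
AS
BEGIN
    SET NOCOUNT ON;

    -- Everything still flagged EXPORTED in this batch made it across;
    -- the later double-handshake moves them on to ON-REMOTE. Rows the
    -- APP already reported as errors keep their error status.
    UPDATE CUSTOMERS
    SET    CustStatus = 'TRANSFERRED'
    WHERE  BatchNo    = @BatchNo
      AND  CustStatus = 'EXPORTED';
END
```

One set-based call per batch, instead of one usp_ExportUpdate call per row, while the per-row statuses still record exactly which rows are in which state.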

Kristen
Test

22859 Posts

Posted - 2010-11-04 : 04:31:22
yosiasz: Can you re-edit your post and wrap your long line please? Thanks.

AndrewMurphy
Master Smack Fu Yak Hacker

2916 Posts

Posted - 2010-11-04 : 12:09:33
What volumes are you talking about?
What would you consider fast?
What's the effort to correct/fix problems?

What's the adverse impact of "wait until the next hour"? Would solving this latter bit not be the cheapest win?
Is it more important to treat each item as independent of each other item, or can you afford to have "batches > 1" fail for items unrelated to each other?
Is it possible to identify systemic failures, to prevent other independent events going into error based on a system issue, but still allow multiple failures for data quality problems....
i.e. if server down, stop processing more queue items, because it's not worth putting more items onto the error queue, but if a local application data rule is failing, keep processing new stuff 'cause some may get through, etc

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2010-11-04 : 12:18:46
what does the app do that you cannot do in SQL? Does 'EXPORTED' mean the EDI file has been exported? Don't tell me it creates an EDI for a customer and then an EDI for their orders! is that what it does?
In that case I think maybe either make the remote system use XML web services, or dump a customer_orders.xml file that the remote system consumes. (I think you have previously expressed an aversion to XML.)


Kristen
Test

22859 Posts

Posted - 2010-11-04 : 14:53:50
"what does the app do that you cannot do in SQL?"

It basically converts a SQL RecordSet row into the appropriate "logic" to call the 3rd party APP's API (or generate an EDI file). The data going into the 3rd party app looks exactly the same as the data coming out of SQL - it's just a "Glue APP". All it does is:

oCust = Object.Create("CUSTOMER")
oCust.CustID = RS("CUST_ID")
...
oCust.do("SAVE") (or whatever the syntax is!)

"don't tell me it creates an EDI for a customer and then an EDI for their orders! is that what it does?"

It uses an API to create the customer, one at a time.

If the Customer is new then it waits until it knows that the Customer is successfully created at the far (via double-handshake, as mentioned above) end before it sends the order. (i.e. the SELECT for ORDERS only includes orders for Customers that have a Status of "Successfully uploaded")

EDI files are "dropped" in a specific folder, and the 3rd party application imports them from there.

"i think you have previously expressed an aversion for xml"

If the system we are communicating with supported XML files I would happily use them. I'm not against them (although I think they are "bulky" and I prefer small, tight files) but I haven't seen anything yet where it was significantly better than "the old way". We still have all the "nuance of the data" issues to sort out.

I personally don't see any great advantage with XML files - everywhere we use XML it causes no-less trouble than whatever clunky old method we used to use - but the reason for that is usually poor documentation for the low-use interfaces we are using it for.

Even the useless bank's credit card system's XML is poorly documented and extremely brain dead. For example, they want US to URL Encode some columns IN THE XML so they can pass them to a 3rd party. Surely we should be sending them the data unmangled and then they can mangle it if they want to.

(We do a 3D Secure credit card authentication and get a Hashed number. It MIGHT contain a "+" sign. We put the Hash code in XML and send it to the bank. XML guarantees that it does not get munged on the way. But still the bank wants us to Encode the "+" to "%2B" (or whatever it is) in the XML field. Idiots! ... I digress!)

XML would allow me (at least) to send a batch of orders, and thus to use set-based logic in SQL. And I could probably generate the XML straight out of the back of SQL Server without any "glue" between the systems.

"in that case I think maybe either make the remote system use xml web services or dump a customer_orders.xml file that the remote system consumes"

This is a system costing $10's of millions. What is there may be a bit clunky, but it works. It will be upgraded eventually but the cost [of the change] is astronomic.

What I'd like to do is get data out of SQL in sets - because that will save SQL Server some CPU cycles, and right now the SQL box is closest to capacity

"What volumes are you talking about?"

5,000 orders a day, 5,000 customers, 3,000 catalogues

EDI manages about 3 orders per second, API manages about 1 every 2 seconds. This is clearly not a SQL restriction, but in effect we are processing in the region of 20,000 records a day where we make one Sproc call to GET each record, one-by-one, and a second Sproc call to update it - so 40,000 Sproc calls a day, when we could do one or two each hour instead.

"What would you consider fast?"

It's just that I would prefer to get a SET of data, rather than one-at-a-time. Maybe one-at-a-time is not uncommon for this type of job? In which case I should perhaps just leave it alone ...

"What's the effort to correct/fix problems?"

If we have to implement Service Broker (which I know nothing about) then "several days". If we can move to SET based queries then a day, at most, I would think.

"What's the adverse impact of "wait until the next hour"? Would solving this latter bit not be the cheapest win?"

We rate-limit the process. The scheduled task runs twice per hour. It prioritises Customers who have requested Express delivery. Each of the various data types is limited to about 100 records-per-run (it allows extra during the quiet hours of the night). On a busy day it won't catch its tail until the night time.

"Is it more important to treat each item as independent of each other item, or can you afford to have "batches > 1" fail for items unrelated to each other?"

I suppose the way I look at it is that it is built to fail with the least amount of hassle for the Admins, and the minimum number of support calls to us. People stop services and reboot boxes without asking (the ISP rebooted the server last night to add more CPUs; no one disabled the services for the duration of that task, even though it's easy to do!). So I would rather make it tolerant of that sort of "failure", rather than push-the-rock-uphill of trying to educate people not to reboot without first pressing the "Disable tasks" button and waiting for a positive "There are no tasks currently running" message (a state which could take several minutes to be reached)

"Is it possible to identify systemic failures, to prevent other independent events going into error based on a system issue, but still allow multiple failures for data quality problems....
i.e. if server down, stop processing more queue items, because it's not worth putting more items onto the error queue, but if a local application data rule is failing, keep processing new stuff 'cause some may get through, etc
"

If my APP fails to create a record on the remote system (crashed, rebooted, could not "talk" to remote, Remote returned an error) then the APP aborts. It will be restarted on the next 30-minute cycle. As it stands right now it will start with the record that failed last time, so it will keep retrying that record. The chance of it succeeding and THEN failing is very slim (that would have to be an error in the "Close and Log" routine, which is used in normal processing, so unlikely to fail). So once the remote system comes back up it will resume. Or it will continue to error, and we'll get an alert to fix it.

Worst case is:

Create record on Remote successfully
Just before updating Status on Source database the APP is rebooted. Thus it will recreate that same record next cycle (unless the double-handshake has run before then )

The alternative (which we don't currently do, but I did imply we did in the post above) is to mark the Source System's record as "Being processed" when it is put into the queue. If there is a problem downstream then the APP will process one record and abort. It will do this every 1/2 hour - so a failure at 6pm would probably run until 8am the following day and "fail to process" about 30 records.

We could shut it down if the double-handshake has not come back with ANYTHING after, say, an hour. But if the EDI process is working fine, and the double handshake is bust, we would be preventing orders getting to the processing system - and the Client would not like that very much!

Having typed that, I'm starting to worry that it isn't as water-tight as I thought it was, and maybe I should move to a queue management system (which is what I think Service Broker does?) to make it more robust.

Thanks for your input, chaps - it's a big help to have my thinking challenged.

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2010-11-04 : 16:43:11
i think you are getting warm here.
OK, how many EDIs are you creating, and for what?
Customer.edi
Orders.edi
n+1.edi
Does it create 1 EDI per customer? Meaning is there one and only one customer in the EDI file or multiple customers?
So tell me if ingest of customer.edi with 100 customers fails on row 99, is the whole process considered fail?
Is there a CustomerID in the Orders.edi?


<yaks>
  <Customer CustomerID="134">
    <CustomerName><![CDATA[Kumboodle]]></CustomerName>
    <Orders>
      <Order OrderID="1" Expedite="1">
        <Item ItemID="1" UOM="1 ea" />
        <Item ItemID="2" UOM="4 ct" />
      </Order>
    </Orders>
  </Customer>
  <Customer CustomerID="135">
    <CustomerName><![CDATA[Cuptiz]]></CustomerName>
    <Orders>
      <Order OrderID="1" Expedite="1">
        <Item ItemID="1" UOM="1 ea" />
        <Item ItemID="2" UOM="4 ct" />
      </Order>
    </Orders>
  </Customer>
</yaks>



Kristen
Test

22859 Posts

Posted - 2010-11-05 : 03:25:03
"ok how many EDIs are you creating and for what"

Note that some are API - but both EDI and API process records one-by-one. The difference is that the API gives me a success / fail result, so I can immediately pass that back to the data whereas for EDI I drop a file in a folder and have no idea if it has worked (until a separate query "discovers" the new Order on the remote system)

"Does it create 1 EDI per customer? Meaning is there one and only one customer in the EDI file or multiple customers?"

Yes. One-by-one. There are API for New Customer, Changed Customer, Catalogue Request and EDI for New Order.

"So tell me if ingest of customer.edi with 100 customers fails on row 99, is the whole process considered fail? "

The problem is if I give 100 rows to the APP I have no way of knowing if it is still running, or failed. So I have to assume that those 100 rows are "in progress". So I cannot give the same rows to a second instance of the APP - e.g. the one that runs on the next schedule, or one that is run manually for some reason.

So I would select 100 records, mark then as "In progress", give them to the APP and the APP would then tell me (currently one-by-one, but could be a SET at the end) that it has processed it - and I change the status on the record(s) to "Transferred".

Currently I have:

SQL gives one record to APP and sets status to "In progress"
APP processes record and calls SQL to change status to "Transferred"
APP loops and asks for SQL another record - until SQL says "You've had all 100"

APP fails on record 99. Record 99 in SQL is set to "In progress". Maybe the EDI/API was successful, maybe not.

If I change the APP to be SET based then:

SQL gives APP a recordset with 100 records, and sets them to "In progress"
APP processes each one, and fails on 99
None of the records are set to "Transferred" - even though 98 of them have been.
A query of the remote system will change those 98 records to "Completed". However, this may not run for a while.

As an alternative I could have SQL give the APP a set of 100 records but have the APP call SQL to update the status one-by-one as it processes them.

Then let's assume the APP fails on row 11. Now 90 records are set to "In progress". It is just possible they are running very slowly. Should the operator reset them all to "Try again"? This is the biggest problem, because currently they would not do that without contacting us, and we would have to log in, check the systems, fiddle about ... not a good use of anyone's time.

But ultimately I'm going to have to decide that those 90 unprocessed rows are NOT going to get processed and then they will have to be reset to "try again".

Perhaps I could do this automatically - if there have been no updates from the APP for NNN minutes it must be dead?

"Is there a CustomerID in the Orders.edi?"

Yes - and the Customer Record must pre-exist, so the Customer is created first.

The Customer Record is via API, so we get a result from the API as to whether the Customer was created - and the ID number of the Customer on the remote.


I think an answer might be:

SQL sends 100 rows to APP and marks the rows with a Batch Number and a status "In progress"

APP calls SQL as it process EACH one to Update the Status (of that one row). For Customer API this will include Success / Fail code, and remote Customer ID. For Order this will just be "EDI file created".

SQL records the most recent TIME that the APP called SQL for a record update against that batch.

When APP gets to the end of the 100th row SQL can mark the Batch Record as "Completed"

Any batch whose "last update call" is, say, more than 10 minutes old must have failed. The remaining records in the batch could be automatically marked as "Try again"

I think that would work. The only issue, perhaps, is that the next "in progress" record has failed for some reason. Automatically sending it again may cause more trouble (maybe it did create the EDI file, and then the APP failed later in the process. We don't want to create duplicate EDI files!).

The double-handshake report from the Remote System would also mark the record as "Completed", so perhaps we should make sure that has run before setting any records in the batch to "Try again". In fact maybe the double-handshake process could be responsible for resetting stuck batches.

OK ... I like the idea of that
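That stale-batch check could itself be set-based. A sketch, assuming a hypothetical BatchHeader table (BatchNo, LastUpdateTime, BatchStatus) that usp_ExportUpdate stamps on every per-row callback - all names invented for illustration:

```sql
-- Release rows from any batch that has gone quiet. A batch that has
-- made no callback for 10 minutes is presumed dead; its unprocessed
-- rows go back to NEW so the next scheduled run can claim them.
-- Rows the APP already confirmed (or errored) are untouched.
UPDATE C
SET    CustStatus = 'NEW',     -- i.e. "try again"
       BatchNo    = NULL
FROM   CUSTOMERS   AS C
JOIN   BatchHeader AS B ON B.BatchNo = C.BatchNo
WHERE  C.CustStatus    = 'IN PROGRESS'
  AND  B.BatchStatus  <> 'COMPLETED'
  AND  B.LastUpdateTime < DATEADD(minute, -10, GETDATE());
```

Running this from the double-handshake job (after it has reconciled against the Remote system) would avoid resetting a row whose EDI file actually arrived, which is the duplicate-order case to guard against.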

Kristen
Test

22859 Posts

Posted - 2010-11-05 : 03:30:17
I suppose I should ask the question:

You want to transfer records to a 3rd party system. It provides an import routine of some sort: CSV file, XML, whatever.

How do you make it bullet-proof?

Just bullet points of the steps please.

TimSman
Posting Yak Master

127 Posts

Posted - 2010-11-05 : 12:52:03
OK, well, firstly, I accept the fact that nothing is truly bulletproof.

Second, I would evaluate all the possible things that I can foresee going wrong: server failure, poor connectivity, a meteor hitting a building, whatever you come up with. The bigger the list, the better.

Third, rate the possibilities by likelihood of happening, and severity of the problem.

Fourth, start to break down the biggest issues first. These are the ones that have a high likelihood and high severity, then work down the list.

Realize that at some point, there's diminishing returns, and sometimes the answer is "Screw it, if it gets *this* bad, we're ok with re-entering some data, or dealing with phone calls."

I'd also look at the overall process, and identify the real bottlenecks. Not the technology, but the whole process from receiving data to final resting place.

Figure out what technologies will help you achieve your goals. I think you are looking for some sort of failsafe, redundant type of system that can notify of success/failure at each point along the way, and deal with each situation accordingly. If so, then you are probably looking for some sort of distributed system, like a message queueing system. I'd say that a lot of what you're trying to accomplish is probably not a SQL solution at all, but some other application(s) to handle the messaging, updating, etc.

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2010-11-05 : 12:58:00
TimSman is right.

"I'd say that a lot of what you're trying to accomplish is probably not a SQL solution at all, but some other application(s) to handle the messaging, updating, etc."

That is why I suggest a web service SOAP approach.






TimSman
Posting Yak Master

127 Posts

Posted - 2010-11-05 : 13:09:21
Yes, I was thinking web services type approach as well.

Kristen
Test

22859 Posts

Posted - 2010-11-05 : 14:25:35
"TimSman is right.

"I'd say that a lot of what you're trying to accomplish is probably not a SQL solution at all, but some other application(s) to handle the messaging, updating, etc."


I agree.

yosiasz: Let's assume I am using SOAP.

As I understand SOAP: a Local process sends a SOAP message to Remote, and (normally) Remote replies with Success / Failure via a SOAP message.

What happens if Local sends a SOAP message to Remote (and Remote acknowledges - i.e. it got the message), but Remote then never replies with the Success / Failure packet (it's crashed or something has errored out)?

Are the records locked on Local SQL? (I assume not as it will cause blocking as this process is relatively slow)
How do we stop a second process also trying to send the same records, again, to Remote? Are the records "owned" in some way by the first process?
Or do we FLAG the set first, on Local SQL, and then send them by SOAP? If so, how do we un-flag when we discover that the Remote crashed? (And how do we determine that the Remote has crashed, rather than it just running slowly?)

I don't know SOAP so I am keen to learn how these conditions are handled.


I'm a bit clearer (I think!) with Service Broker

I think that: the Local SQL puts a set of records in a "queue" in Service Broker (which will be very efficient for SQL to do) and Service Broker guarantees to deliver them to the APP at the Remote end. Furthermore the Remote App can send back messages about success / failure. This is a disconnected system (and I think Service Broker / Disconnected is more likely to be what I need)

So how does this work in practice? The same questions above apply:

What stops a Second process sending a record that is already being sent by the First process?
When the Remote fails how do I discover that has happened, and how do the records get "reset" so they are sent a second time?

Please tell me that everyone isn't doing this with Fire & Forget ?
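For what it's worth, the moving parts of a minimal Service Broker setup look roughly like this (a sketch of the standard DDL with invented names, not a drop-in solution). The relevant property for the questions above is that SEND and RECEIVE are transactional: a rollback on either side puts the message back on the queue rather than losing it, so it is not Fire & Forget:

```sql
-- One-time setup: message type, contract, queues, services
CREATE MESSAGE TYPE ExportRequest VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT ExportContract (ExportRequest SENT BY INITIATOR);
CREATE QUEUE ExportSendQueue;
CREATE QUEUE ExportWorkQueue;
CREATE SERVICE ExportSender ON QUEUE ExportSendQueue (ExportContract);
CREATE SERVICE ExportWorker ON QUEUE ExportWorkQueue (ExportContract);

-- Enqueue: put this in the SAME transaction as the status UPDATE,
-- so a row is either flagged AND queued, or neither
DECLARE @h uniqueidentifier;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE ExportSender TO SERVICE 'ExportWorker'
    ON CONTRACT ExportContract;
SEND ON CONVERSATION @h MESSAGE TYPE ExportRequest
    (N'<Order OrderID="1234" />');

-- Dequeue (the APP side): RECEIVE removes the message only if the
-- surrounding transaction commits; on rollback it is redelivered
RECEIVE TOP (1) conversation_handle, message_body
FROM ExportWorkQueue;
```

A second reader doing RECEIVE gets the next message, not the same one, which answers the "what stops a second process sending the same record" question at the queue level - though the redelivery-after-rollback behaviour means the APP still needs to be idempotent about the EDI file drop.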

Kristen
Test

22859 Posts

Posted - 2010-11-05 : 14:26:55
I do appreciate all your patience, and I hope that I am articulating the nub of the problem I want to solve. I can't be the first trying to do this! So perhaps there are White Papers out there I should read?