For example, suppose you have a stored procedure that takes at least 30 minutes to run, and the DBA wants to be able to see its current progress.
A long-running process should fulfill the following requirements:
- Each UPDATE/DELETE statement must affect a minimal number of records - you should not execute a single UPDATE/DELETE statement that affects thousands of records. Minimizing the records affected per statement reduces the resources that must be locked and gives other users a chance to access the data concurrently, which improves the overall database response time.
- The long-running process must be broken down into many sub-processes that report their status back to the main process - each sub-process should run within its own stored procedure whenever possible. This eases testing and future maintenance/enhancement. The way the status is returned to the main process must be standardized; for example, if you decide to use an OUTPUT parameter, then all stored procedures must follow the same convention.
- The transaction scope must suit the number of records affected - if your process is a mail merge, you may consider committing a transaction for every 1,000 records. If you are running a billing process, you might instead commit a transaction per bill. Either way, this requires many rounds of testing in a development or simulated live environment to confirm the appropriate numbers. In real life, the hardware specification, the number of concurrent users, the number of records in the output, etc. must all be taken into consideration.
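The first and third requirements above can be sketched as a batched delete loop. This is a minimal sketch, not code from the original article: the table name tb_audit_history and the retention rule are assumptions, and the batch size of 1,000 follows the example in the text.

```sql
-- Hypothetical example: purge old rows in small batches,
-- committing one transaction per batch so locks are short-lived.
DECLARE @batch_size INT = 1000;
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    -- DELETE TOP (n) limits how many rows a single statement touches.
    DELETE TOP (@batch_size)
    FROM tb_audit_history                               -- assumed table name
    WHERE created_on < DATEADD(MONTH, -6, GETDATE());   -- assumed retention rule

    SET @rows = @@ROWCOUNT;   -- 0 when nothing is left to delete

    COMMIT TRANSACTION;
END
```

Because each iteration commits before the next begins, other sessions can read and write the table between batches instead of waiting for one huge transaction to finish.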
After you have done the above, you will need a new table to store the process log.
CREATE TABLE tb_process_log (
process_log_id UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
sess_id UNIQUEIDENTIFIER,
seq INT NULL,
msg NVARCHAR(MAX) NULL,
remarks NVARCHAR(255) NULL,
created_on DATETIME NULL,
created_by NVARCHAR(255) NOT NULL
);
Table usage
- process_log_id - the primary key of the table; it stores a value generated by the NEWID() function.
- sess_id - this field stores the session ID of the current process. For example, if two users are running the same process, each of them will have a different session ID. The value is generated by the NEWID() function.
- seq - this field stores the sequence number of the log record, i.e. the current step in the process. This field is optional.
- msg - this field stores the details of the current process step (see below).
- remarks - a special remark set by the process; it is optional.
- created_on - this is the current date/time when the log record is inserted.
- created_by - this is the user ID who is executing the stored procedure.
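Inserts into this table can be wrapped in a small helper stored procedure so every process logs in the same way. This is a sketch under stated assumptions: the procedure name up_insert_process_log and its parameters are hypothetical, not part of the original article.

```sql
-- Hypothetical logging helper; fills in the generated/derived columns
-- (process_log_id, created_on, created_by) so callers do not have to.
CREATE PROCEDURE up_insert_process_log
    @sess_id UNIQUEIDENTIFIER,
    @seq     INT           = NULL,
    @msg     NVARCHAR(MAX) = NULL,
    @remarks NVARCHAR(255) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO tb_process_log
        (process_log_id, sess_id, seq, msg, remarks, created_on, created_by)
    VALUES
        (NEWID(),          -- primary key
         @sess_id,         -- one session ID per process run
         @seq,
         @msg,
         @remarks,
         GETDATE(),        -- when the log record is inserted
         SUSER_SNAME());   -- the login executing the stored procedure
END
```

A sub-process would then call, for example, `EXEC up_insert_process_log @sess_id, 1, N'Issuing invoice to customer A..', N'up_issue_invoice';`.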
- First things first: every process should insert a log record as soon as it starts, to indicate that the stored procedure has begun running. This is very important because sometimes a stored procedure fails before it even has a chance to run. With a process-start log record, you will be able to tell that there is no communication error in the flow.
- Before exiting any stored procedure, it should insert a log record to indicate that the exit is intentional. This ensures that the exit is by design, or that the process stopped where you were expecting it to.
- Put the details of the current process step into the "msg" field. This lets you see the runtime values instead of looking for a way to reproduce the error or behavior. For example, "Issuing invoice to customer A.." and "Successfully issued invoice #1234 (amounting to $2,000.00) to customer A.". You just have to ensure that the "msg" value contains sufficient information for troubleshooting.
- As for the "remarks" field, we normally use it as the stored procedure name. But, you may define the way you like since the "remarks" field is not critical.
- Make sure that you have sufficient TRY...CATCH blocks within each stored procedure to track down unexpected errors and also to perform cleanup.
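Putting the logging rules above together, a main process might look like the following skeleton. This is a minimal sketch, not the article's actual code: the procedure names up_billing_main and up_billing_step1, and the seq values, are assumptions.

```sql
-- Hypothetical main process: logs its start, runs sub-processes,
-- logs its intentional exit, and logs any unexpected error.
CREATE PROCEDURE up_billing_main
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @sess_id UNIQUEIDENTIFIER = NEWID();   -- one session ID per run

    -- 1. Log that the process has started, before doing anything else.
    INSERT INTO tb_process_log
        (process_log_id, sess_id, seq, msg, remarks, created_on, created_by)
    VALUES
        (NEWID(), @sess_id, 1, N'Process started.', N'up_billing_main',
         GETDATE(), SUSER_SNAME());

    BEGIN TRY
        -- 2. Run each sub-process in its own stored procedure,
        --    passing the session ID so its logs share the same sess_id.
        EXEC up_billing_step1 @sess_id = @sess_id;   -- hypothetical sub-process

        -- 3. Log the intentional exit so you know the stop was by design.
        INSERT INTO tb_process_log
            (process_log_id, sess_id, seq, msg, remarks, created_on, created_by)
        VALUES
            (NEWID(), @sess_id, 99, N'Process completed.', N'up_billing_main',
             GETDATE(), SUSER_SNAME());
    END TRY
    BEGIN CATCH
        -- 4. Log the unexpected error with its runtime details.
        INSERT INTO tb_process_log
            (process_log_id, sess_id, seq, msg, remarks, created_on, created_by)
        VALUES
            (NEWID(), @sess_id, -1, ERROR_MESSAGE(), N'up_billing_main',
             GETDATE(), SUSER_SNAME());
    END CATCH
END
```

The DBA can then follow progress at any time with a query such as `SELECT * FROM tb_process_log WHERE sess_id = @sess_id ORDER BY created_on;`.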