Upgrading from 2017 to 2019 - Timeout

SOLVED

During the upgrade process, there is a timeout error:

Loading SQL Script File clup_app.sql Error - create procedure spFixActivityLog...
Microsoft OLE DB Driver for SQL Server Error 3633:
    Query timeout expired
This procedure deals with tciDBActivityLog, and we have 74,027,184 rows in it. So I implemented my own procedure that moves rows with an ActivityDate older than 30 days into an Archive table, then deletes them from the archive table after 1 year of storage.
My fear, however, is my tciMaintAuditLog table, at 37,459,433 rows. Do any of you happen to do maintenance on this table like I have implemented on tciDBActivityLog? Is there a Best Practice on this?
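For reference, the archive approach described above can be sketched roughly as follows. This is only an illustration, not the actual procedure: the archive table name (tciDBActivityLog_Archive) is an assumption, and the ActivityDate column is taken from the post. Batching the moves keeps transaction-log growth and lock duration manageable on a table this large.

```sql
-- Hypothetical sketch of the archive-and-purge maintenance described above.
-- tciDBActivityLog_Archive is an assumed name; adjust to your schema.
DECLARE @BatchSize int = 50000;

-- Move rows older than 30 days into the archive table, in batches.
WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize) src
    OUTPUT deleted.* INTO tciDBActivityLog_Archive
    FROM tciDBActivityLog AS src
    WHERE src.ActivityDate < DATEADD(DAY, -30, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to archive
END;

-- Purge archived rows once they are more than a year old, also in batches.
WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM tciDBActivityLog_Archive
    WHERE ActivityDate < DATEADD(YEAR, -1, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;
END;
```

The `DELETE ... OUTPUT deleted.* INTO` pattern moves and removes rows in a single statement, so a batch can't be archived without also being deleted.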
  • verified answer

    Your servers likely can't handle the queries being run against such a large data set. You shouldn't maintain this many rows in tciDBActivityLog; they don't really serve a purpose other than troubleshooting. The table was implemented to allow Support to identify third-party or user-initiated data changes when researching a data corruption problem in native Sage tables. You apparently have a lot of unregistered interfaces, or you make a lot of data changes in the back end; either way, an archive table is superfluous. You need to determine the sources of these mass changes so they can potentially be added to the interface exceptions, but you need to be careful with that many rows in any table.

    The size consideration applies to tciMaintAuditLog as well. The number of rows in that table seems to indicate you have either been using Sage 500 for a very long time, or you make a lot of changes to customers, vendors, items, etc. If you aren't reporting against the data or actively using it to identify changes, then you should consider removing some or all of the data and disabling the auditing features.

    Handling tables with this much data requires a review of system resources and database configuration, unless you just decide to truncate the tables and go on your merry way.
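If you do decide the log data isn't needed, the two usual cleanup options look something like the sketch below. Table and column names are assumptions based on this thread; verify them against your database, and note that `TRUNCATE TABLE` is irreversible and removes every row.

```sql
-- Option 1: drop everything. TRUNCATE is minimally logged and near-instant,
-- but it cannot be filtered -- all rows go.
TRUNCATE TABLE tciDBActivityLog;

-- Option 2: keep recent history and delete the rest in small batches,
-- which avoids the long-running transaction that a single huge DELETE
-- would create. The date column name here is a hypothetical placeholder.
WHILE 1 = 1
BEGIN
    DELETE TOP (50000)
    FROM tciMaintAuditLog
    WHERE AuditDate < DATEADD(YEAR, -1, GETDATE());  -- assumed column name

    IF @@ROWCOUNT = 0 BREAK;
END;
```

Running the batched delete during off-hours, with regular log backups in between, keeps the transaction log from ballooning the way a single 37-million-row delete would.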

