Small errors in user input cause lockups in OE / PO / etc... There must be a solution to this somewhere....

We've been fortunate enough to use and standardize on Sage 300 ERP for many years, and although it's a great product, there seem to be some glaring shortcomings as our database grows larger. We constantly hit walls when certain user errors force Sage 300 into excessive database calls, completely locking up certain modules. All our users operate from one of two terminal servers, and the impact is the same regardless of which terminal server they use, pointing to an application/database issue rather than anything related to our terminal servers. Database load tests show we typically perform 350-500 executions per second, and the DB server can handle loads exceeding 3,000 executions per second, so I don't think this has anything to do with capacity.


E.g.: leaving auto search enabled on an OE lookup screen with data in it (unchanged) for an undefined period of time locks up OE until the window is closed.

E.g.: using auto search to find invoices/orders where the order value is greater than 0 (this scenario never finishes running).

E.g.: running a large invoice reprint or order reprint locks up OE (this scenario never finishes running).


In these instances and others, we consistently lose many cumulative hours of productivity per year, and expend another batch of hours troubleshooting to determine the culprit.


My questions are as follows:

Is there a way to set a timeout, in the form of a maximum execution time, on how long DB queries from specific users can run? (See the first sketch after this list.)

Is there a known/possible method for determining the user causing a database deadlock scenario? (See the second sketch after this list.)

Is there any progress on Sage's roadmap for managing database deadlocks in a more multi-user-friendly way?

Does anyone out there run Sage 300 with more than 40 users (ideally more than 70)? I'd love to compare notes!
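On the first question: I'm not aware of a built-in per-user execution cap in SQL Server 2008 R2, so the closest thing I can picture is a manual watchdog along these lines (a sketch using the standard DMVs; the 300-second threshold is arbitrary):

SELECT  r.session_id,
        s.login_name,
        s.host_name,
        r.start_time,
        r.status,
        r.wait_type,
        DATEDIFF(SECOND, r.start_time, GETDATE()) AS elapsed_sec
FROM    sys.dm_exec_requests AS r
JOIN    sys.dm_exec_sessions AS s ON s.session_id = r.session_id
WHERE   s.is_user_process = 1
  AND   DATEDIFF(SECOND, r.start_time, GETDATE()) > 300;  -- flag anything over 5 minutes

An admin could review the offenders this returns and, if appropriate, KILL the session, but that's a manual stopgap rather than a real max-execution-time setting.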
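On the second question, the one standard trick I know of is trace flag 1222, which makes SQL Server write full deadlock graphs (the sessions, resources, and statements involved, including the victim) to the error log:

DBCC TRACEON (1222, -1);  -- -1 applies the flag server-wide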

  • To provide additional information:

    Sage 300 ERP v6.2 PU2 with up to 76 concurrent users
    Terminal servers: Server 2008 R2 Ent w/ 48 GB RAM each
    Sage common directory server: Server 2008 w/ 48 GB RAM
    Database server: Server 2008 R2 Ent w/ 96 GB RAM, 70+ GB allocated to MSSQL
    Database size: 68+ GB

    Changing / Removing Anti-Virus software on any system made no difference.
  • in reply to data-splunker
    Change the MS SQL server properties: Memory > Maximum server memory. Set this to at least 4 GB less than the physical RAM on the server and see how that goes. This is the most common cause of such DB lockups.
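    For example (a sketch; 94208 MB = 92 GB, i.e. 4 GB below the 96 GB of physical RAM described above, so adjust to your own hardware):

    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- cap SQL Server's buffer pool so the OS always has headroom
    EXEC sys.sp_configure 'max server memory (MB)', 94208;
    RECONFIGURE;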
  • in reply to Ettienne Schwagele
    Maximum server memory was actually set at 90 GB; sorry for the error in my last post. So we can probably rule that out as an issue.

    We're also considering switching to SSD storage for the DB, but that won't necessarily solve the issue, just make the long queries run even faster. And I question the ability of the Sage DB layer to keep up.
  • in reply to data-splunker
    What's the db server's horsepower? SQL server is very resource hungry. Also check for network bottlenecks between servers.
    This is a complex setup with complex problems; it will require proper analysis by a specialist to resolve. You're not really going to get an answer here - pretty much like not getting medical issues resolved on WebMD. If you want a specialist to look at it, I would recommend Don Thomson at TaiRox (www.tairox.com) - he is the best in the business.
  • in reply to Ettienne Schwagele
    Average CPU usage is between 3-10%.
    Average disk activity is marginal at best, less than 500 KB/second on average, as the database is mostly held in RAM.
    Network activity is 32 Mb/s+ (note the small b).

    The only time these stats change is when the system is rebooted and is reloading the primary databases back into RAM, or when it's running a specialized backup and dumping log files (once per week).
  • in reply to data-splunker
    One performance killer is having background processes running. Mid-day backups can slow things down. Re-building indexes in the middle of processing can easily lock things up. Sometimes maintenance plans get delayed and start running in the middle of the working day.
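    A quick way to check whether any such job is running right now is a query like this against the standard msdb job tables (a sketch, and only an approximation, since rows from a crashed Agent session can linger):

    SELECT  j.name,
            ja.start_execution_date
    FROM    msdb.dbo.sysjobactivity AS ja
    JOIN    msdb.dbo.sysjobs        AS j ON j.job_id = ja.job_id
    WHERE   ja.start_execution_date IS NOT NULL
      AND   ja.stop_execution_date IS NULL;   -- started but not yet finished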

    Another performance killer is historical data within transactional tables. OE orders that aren't needed should be archived off so that their presence doesn't cause any delays. If you still need them then put them into another database.
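    As a first step, something like this can size the archive candidates. This is only a sketch: I'm assuming OEORDH is the OE order header table, ORDDATE its datetime order-date column, and NLDAT the company database from earlier in the thread, so verify the names and pick your own cutoff. Real archiving also has to carry all the related detail tables with it.

    SELECT  COUNT(*) AS archive_candidates
    FROM    NLDAT..OEORDH                 -- assumed order header table
    WHERE   ORDDATE < '2010-01-01';       -- assumed column; adjust to your retention policy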

    Don't run day-end during processing if you can avoid it.

    Re-build your indexes (but not during the day). This site (https://ola.hallengren.com/) has a great script that will create a stored procedure that will allow you to easily rebuild your indexes. It can make a difference if the indexes are heavily fragmented.
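    For example, once the script is installed, a nightly job can call it along these lines (a sketch using the script's documented parameters; the thresholds shown are the script author's defaults):

    EXECUTE dbo.IndexOptimize
            @Databases = 'USER_DATABASES',
            @FragmentationLow = NULL,    -- below 5% fragmentation: leave alone
            @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
            @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
            @FragmentationLevel1 = 5,
            @FragmentationLevel2 = 30,
            @UpdateStatistics = 'ALL';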

    Use server connections instead of ODBC connections.
  • I have a client running 100 users. They were experiencing issues similar to yours. In addition to the valuable suggestions already made, this client was able to narrow issues down to specific users / queries / processes using a product called Solarwinds:
    www.solarwinds.com/database-management-software.aspx

    If you email me - [email protected], I can put you in touch with them to compare notes.
  • Ok, so I've decided on SolarWinds DPA to try to track down the issue, as 13381622 suggested. What I've seen so far (1 full day of data sampling) is that most of the slowdown/wait time is SQL waiting for Accpac to process the data already sent before it can ship the next rows in the set. Queries run directly in SSMS complete orders of magnitude quicker than through Sage 300.
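    For anyone wanting to sanity-check the same thing without DPA, the aggregate wait stats tell a similar story (a sketch; if ASYNC_NETWORK_IO dominates, SQL Server is waiting on the client application to consume rows rather than on its own work):

    SELECT TOP (10)
            wait_type,
            waiting_tasks_count,
            wait_time_ms
    FROM    sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;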

    I'll keep recording and report back in a few days.
  • in reply to data-splunker
    HAHA! Within the first 4 days of monitoring we've caught one culprit - I'll have to send this one to the Sage product team. Under certain circumstances, with or without autosearch enabled, a shared (hold) lock is placed on a table when someone has their search window open. This behaviour then causes anyone with a search window open to lock up modules such as PO/OE/IC/AR/AP, etc.

    Shared (S) locks are used for operations that read data, such as a SELECT statement. When shared locks are used, concurrent transactions can read (SELECT) a resource, but cannot modify the data. Typically the shared locks on a resource are released as soon as the data has been read. However, if the HOLDLOCK locking hint or transaction isolation level is set to REPEATABLE READ or SERIALIZABLE, shared locks on a resource will be held until the end of the transaction.
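    For anyone who wants to catch it live, a query along these lines (a sketch using the standard lock and session DMVs) shows which session is holding the shared lock and from which workstation:

    SELECT  l.request_session_id,
            s.login_name,
            s.host_name,
            l.resource_type,
            l.request_mode,      -- 'S' = shared
            l.request_status     -- 'GRANT' = currently held
    FROM    sys.dm_tran_locks    AS l
    JOIN    sys.dm_exec_sessions AS s ON s.session_id = l.request_session_id
    WHERE   l.request_mode = 'S'
      AND   l.request_status = 'GRANT';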

    I now have a confirmed logged instance of this with correlating trouble incidents reported by our users.

    Anyone from Sage care to chime in?
  • in reply to data-splunker
    Congratulations on the find!

    You're unlikely to hear from Sage directly in this forum. Your best bet is to contact support and start a new issue. You're going to have to work through first level support and move onwards to second level. Plus you have Summit working against you next week (support will still be available, just might not have full staffing of level 2 support).
  • in reply to data-splunker
    Here are the queries that cause locks in Sage 300 6.1 / 6.2 that I'm aware of so far:

    SELECT *
    FROM NLDAT..ICOPT

    FETCH API_CURSOR followed by a long hex identifier, typically FETCH API_CURSOR0000000000000010 and FETCH API_CURSOR000000000005AD48

    (For the locks that occurred today, anyway.)
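    If it helps anyone reproduce this, the anonymous API cursor names can be traced back to their owning session and underlying SQL text (a sketch via the cursor DMV; 0 means all sessions):

    SELECT  c.session_id,
            c.name,              -- e.g. API_CURSOR000000000005AD48
            t.text               -- the statement behind the cursor
    FROM    sys.dm_exec_cursors(0) AS c
    CROSS APPLY sys.dm_exec_sql_text(c.sql_handle) AS t;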
  • in reply to data-splunker
    The FETCH API_CURSOR lines are from finders. Sage creates a stored procedure while the finder is open and opens a cursor, waiting for you to scroll up and down. That shouldn't be locking anything, but if the SQL is built the way you've found, then you have a good case for talking to support.

    The SELECT * FROM NLDAT..ICOPT shouldn't take very long at all. It's a single record, though it does get updated often by posting transactions and running day-end. It should likely be cached as well.

    As an aside - are you using transaction logs or the simple recovery option?
  • in reply to Django
    We were using transaction logs; I switched to simple recovery over the weekend to rule out excessive disk IO. Running SolarWinds DPA has basically answered a ton of questions I had. I think this is a product I'll be buying this month. It's always nice to be able to point a finger and have backup.
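    For reference, the switch itself is a one-liner (assuming NLDAT is the company database, per the query above; take a full backup first, since simple recovery gives up point-in-time restore):

    ALTER DATABASE NLDAT SET RECOVERY SIMPLE;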
  • in reply to data-splunker
    Good find. You will have to contact Sage support and sell the issue.
  • in reply to Ettienne Schwagele
    Support Ticket (8004626358) bumped to Tier 2
  • in reply to data-splunker
    Any further updates on this problem?
  • in reply to Django
    The ticket is still open with Sage; we're waiting until the issue pops up again and will try to RVSPY / DBSPY the user. This one is tricky because it's counterintuitive: the first instinct is to resolve the issue as soon as possible, not to stop and log it yet again.

    I'm guessing in their sterile testing environment they weren't able to duplicate it.
  • in reply to data-splunker
    Hi data-splunker,

    We are facing a similar issue: when one user posts an OE order entry, one by one all users experience the system hanging and severe performance slowness. I have posted my question at sagecity.na.sage.com/.../92382.

    Sometimes it takes 15-20 minutes for the order to get posted. We have tried checking everything from index health to resource utilization but couldn't pinpoint the issue. It is getting very frustrating to continue every day with such a disastrous issue. The customers for whom we post orders/invoices are complaining and taking their business elsewhere. We are losing revenue every day.

    Could you please guide me as to whether you were able to resolve the issue, even temporarily, and put performance back on track?

    I highly appreciate your help and guidance in this regard.

    Best regards,
    Talal
  • in reply to Talal
    Sorry for taking so long to post back to this thread. We're now working closely with the Sage 300 team, trying to find a valid resolution for this issue. At present we're going to try using the SQL transaction logs to trace the issue back.
  • in reply to data-splunker
    Did you manage to resolve this issue? Our client is also experiencing the exact same issue.