Sales Orders List - Slow Searching

Hey,

We often find searching on the sales order list painfully slow, often leading to lock errors and timeouts when searching for an order number.
Even when filtered down to orders from the last week it can still be super slow.

Oddly though, if we add a filter for DocumentNo equals and put the order number in there, it returns the result instantly. Why is there such a difference between the two?

Is there anything we can do to speed up the normal way of searching?

To give an idea of the amount of data, I have just checked and there are 74k entries in SOPOrderReturn dated this January.

Thanks

  • 0

    Depending on your version of Sage, there used to be a DLL that would grow and become unmanageable, which would ultimately cause the lists to slow down and then stop working.

    If I recall correctly it's in a subfolder of the c:\users\<username>\appdata\Roaming\sage200 folder, and called "Sage.MMS.Desktop.Common.Resources.dll", but I may be wrong.

    The resolution was to remove this file.

    In later versions it moved to the "c:\users\<username>\appdata\local\sage\Sage200\Desktop" folder and it's supposed to refresh itself automatically now.

    But in either case, if it's large, delete it and see what happens.

    If it's not this, then you may want to do some SQL profiling and see what is happening under the hood - check your index fragmentation.
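
    Something like this (just a rough sketch using the standard sys.dm_db_index_physical_stats DMV - run it against your Sage 200 company database, and tweak the thresholds to taste) will list the worst offenders:

    SELECT
        OBJECT_NAME(ips.object_id) AS TableName,
        i.name AS IndexName,
        ips.avg_fragmentation_in_percent,
        ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
        AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30 -- arbitrary threshold
        AND ips.page_count > 100                -- ignore tiny indexes
    ORDER BY ips.avg_fragmentation_in_percent DESC;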

  • 0 in reply to Toby

    Thanks for the suggestions. I located the mentioned file in the second of the two locations; it's only coming in at 123 KB, so likely not an issue.

    On the second suggestion, the index for SOPOrderReturn\CustomerDocumentNo is showing over 98% fragmentation with 36396 pages, so that looks like something we might want to address, although I suspect a rebuild on that is likely going to take an eternity and I would imagine performance would be even worse while it's running. However, SOPOrderReturn\DocumentNo is only 48% / 23153 pages.

    There are actually 55 tables with fragmentation over 90%, although the majority of them only have double-digit page counts so are probably fine. There is NLAccountPeriodValue\NLNominalAccountID at 99% / 1172 pages and WarehouseItem\StockItemView at 98% / 1065 pages.

  • 0 in reply to DaveG

    Are you running the suggested maintenance plans from Sage to reorganise indexes nightly and rebuild them weekly, as well as updating statistics daily?
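
    For reference, under the hood those plans boil down to commands along these lines (just a sketch against a single table - SOPOrderReturn here is only an example, the real jobs cover every index in the database):

    -- Nightly: reorganise indexes (online and relatively lightweight)
    ALTER INDEX ALL ON SOPOrderReturn REORGANIZE;

    -- Weekly: rebuild indexes (heavier - best left to a maintenance window)
    ALTER INDEX ALL ON SOPOrderReturn REBUILD;

    -- Daily: refresh the optimiser's statistics
    UPDATE STATISTICS SOPOrderReturn;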

  • 0 in reply to Toby

    I am still quite new to this role; however, I see nothing listed under maintenance plans in SSMS so I'm going to go with no. We'll bring that up with our Sage partner and see if they have a recommended plan (or plans).

  • 0 in reply to DaveG

    Index fragmentation is way, way less important than you'd think - but we'll come back to that later. Obviously I don't know what you know about SQL Server, so I might be telling you stuff you're already fully aware of, but even so, putting it here might be useful for other people.

    Firstly, let's discuss why you get better performance when you create a filter specifically to find a single Sales Order number. I'm going to assume that the filter is set so that the 'No' (as in Document No) column is equal to a given value. For the purposes of argument, let that value be 0000030136.

    When the filter is applied, effectively what happens is that the Object Store layer (that being the layer in Sage 200 which constructs the query to send to SQL Server) will build a query that looks a lot like this:

    SELECT TOP 360 [..whatever columns..]
    FROM SOPOrderReturn
        [.. joined to other tables..]
    WHERE
        [SOPOrderReturn].[DocumentTypeID] <> 4
        AND [SOPOrderReturn].[DocumentTypeID] <> 7
        AND [SOPOrderReturn].[DocumentTypeID] <> 8
        AND [SOPOrderReturn].[DocumentTypeID] <> 9
        AND [SOPOrderReturn].[DocumentTypeID] <> 5
        AND [SOPOrderReturn].[DocumentTypeID] <> 6
        AND [SOPOrderReturn].[DocumentNo] Not Like N'###%'
        AND [SOPOrderReturn].[DocumentNo] = '0000030136'
    ORDER BY
        [SOPOrderReturn].[DocumentDate] DESC, [SOPOrderReturn].[SOPOrderReturnID] DESC

    The important bit is the WHERE clause. Yes, there's stuff in there that you didn't explicitly specify - and those are just the default constraints provided by the database view. But there are two important things to take note of: firstly we have a complete logical conjunction (all our clauses are AND), and secondly, we have a SARGable equality predicate on our document number. I'll explain SARGability later on.

    Right - now let's look at what happens when we remove the filter and just type the document number into the search box. We'll type the full number with its leading zeros. This is the kind of query we get now:

    SELECT TOP 360 [..whatever columns..]
    FROM SOPOrderReturn
        [.. joined to other tables..]
    WHERE
        [SOPOrderReturn].[DocumentTypeID] <> 4
        AND [SOPOrderReturn].[DocumentTypeID] <> 7
        AND [SOPOrderReturn].[DocumentTypeID] <> 8
        AND [SOPOrderReturn].[DocumentTypeID] <> 9
        AND [SOPOrderReturn].[DocumentTypeID] <> 5
        AND [SOPOrderReturn].[DocumentTypeID] <> 6
        AND [SOPOrderReturn].[DocumentNo] Not Like N'###%'
        AND (
            [SOPOrderReturn].[DocumentNo] Like '%0000030136%'
            OR [SOPOrderReturn].[DocumentTypeID] = -1
            OR [SLCustomerAccount].[CustomerAccountName] Like '%0000030136%'
            OR [SOPOrderReturn].[TotalGrossValue] = 30136.00
            OR [SOPOrderReturn].[DocumentStatusID] = -1
            OR [SOPOrderReturn].[AllocatedStatusID] = -1
            OR [SOPOrderReturn].[DespatchReceiptStatusID] = -1
            OR [SOPOrderReturn].[InvoiceCreditStatusID] = -1
            OR [SOPOrderReturn].[CancelledStatusID] = -1
        )
    ORDER BY
        [SOPOrderReturn].[DocumentDate] DESC, [SOPOrderReturn].[SOPOrderReturnID] DESC

    That's a fair bit different. Not only do we now have a logical disjunction (several ORs in there), but the predicate on the DocumentNo column is no longer a full equality predicate. It's a LIKE - and even worse, it's a 'find this string anywhere in the text of the column' predicate. Which means it's no longer SARGable.

    When you provide a search value, Sage 200 has no way of knowing what that value is supposed to represent, as you can search over all the columns. So we can see that it's worked out that it might be a text value in the DocumentNo or the CustomerAccountName columns. But it's also realised that it could be interpreted as a numeric value - so it's turned it into a decimal and applied it to the TotalGrossValue column. It tried to do the same for the DocumentTypeID, DocumentStatusID etc. columns too - but then realised the value was nonsensical, so replaced it with a -1 (a value it'll never find).

    So straight away we're giving SQL Server a more involved query to deal with. As soon as you start adding OR clauses into a query then the execution plan that the SQL Server query optimiser comes up with is going to contain a load more hash-match operators and generally have a higher estimated cost.

    So that's not helping. But neither is the non-SARGable nature of our predicates. And now I'll start talking about indexes. Indexes are super important; having the right set of indexes to support your most common queries is one of the keys to good SQL Server performance. For most of my clients' Sage 200 sites, the indexes they now have in place are not the ones that were provided out of the box. We monitor things using tools like Query Store (and some home-brew bits and bobs), and will replace indexes with new ones that better fit the demands being made of SQL Server. We also rebuild in maintenance windows, and (much more importantly IMO) do regular full-scan statistics rebuilds. But even having optimised indexes can't always help.
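
    As an aside: if Query Store is enabled on your database, a rough-and-ready query along these lines (a sketch, not a polished report) will show you which queries are doing the most reads - queries near the top of that list are your candidates for index tuning:

    SELECT TOP 20
        qt.query_sql_text,
        SUM(rs.count_executions) AS executions,
        SUM(rs.count_executions * rs.avg_logical_io_reads) AS total_logical_reads
    FROM sys.query_store_query AS q
    JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
    JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
    JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
    GROUP BY qt.query_sql_text
    ORDER BY total_logical_reads DESC;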

    So finally we'll discuss SARGability. 'SARG' means Search ARGument. If a predicate is SARGable, it means that SQL Server can make use of that argument to search an index (be that the clustered index or any non-clustered index).

    Let's stick with our 'searching for a Sales Order' example. Assume we issue the following query in a set of Sage 200 data that has all the default indexes in place (all rebuilt) and which has roughly 60000 records in SOPOrderReturn:

    SELECT SOPOrderReturnID, DocumentNo, DocumentDate
    FROM SOPOrderReturn
    WHERE DocumentNo = '0000030136'

    What I would expect is for SQL Server to do a search directly to the specific record in the IX_DocumentNo_SOPOrderReturnID index, and then a key lookup to the clustered index to get the DocumentDate column (which it can't get from the non-clustered index). And that's exactly what the execution plan tells me happened.

    Moreover, if I look at the query IO statistics on my machine, I can see that it only took 1 scan and 7 logical reads (a logical read being a page read from memory) to get the data. Also, the Query Cost (a fairly arbitrary metric that gives a general idea of the work that the query is going to do) is 0.0065.
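
    If you want to see those numbers on your own system, switch the statistics output on in the same session before running the query - the figures then appear on the Messages tab in SSMS:

    SET STATISTICS IO ON;   -- report scans and logical reads per table
    SET STATISTICS TIME ON; -- optionally, CPU and elapsed time as well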

    Now we'll change the query to this:

    SELECT SOPOrderReturnID, DocumentNo, DocumentDate
    FROM SOPOrderReturn
    WHERE DocumentNo LIKE '%0000030136%'

    Same result in terms of the data - one row - but now our execution plan is different.

    It's still using the IX_DocumentNo_SOPOrderReturnID index, but we no longer have a search directly to the record. Instead we have a scan of the entire index. The IO statistics for this query tell me that it took 2650 logical reads and 2704 read-ahead reads to get the same data as the previous query. Also, there's a warning on our execution plan, which in this case is 'excessive grant', meaning that SQL Server overestimated the amount of memory it thought the query would need and gave it too much - which is a waste of resources. It could just as easily have underestimated it, which would have led to it having to spill to disk in TempDb, which is also suboptimal. And our Query Cost is 3.14906 - nearly three orders of magnitude greater than our first query.

    So this is SARGability (or lack of it) at work. Why this happens is pretty easy to explain. I'm old enough to remember when the BT Telephone Directory was like a breeze block. If someone dropped that in front of you and told you to find a specific person, that's pretty easy. You search directly to the surname and then the first name - the equivalent of a couple of reads. Similarly, if you were told to find all the people with a surname of 'Dobson' then again it's simple - you can find where the Dobsons start and keep reading until the name changes (in SQL Server this is a seek plus range scan). You could even make a reasonable estimate of how many records you'd be expecting by a quick glance at how many pages the record range spans (Smiths would cover several pages whereas Ximines probably only has one entry). You're effectively using an index here, as you're seeking to a specific point and reading either a single record or a range.

    Even if you were told to find all the people who had a surname beginning with 'Dob', you'd be able to do the same thing - quickly get to the first surname that matches the predicate and do a continual read until the surname no longer matched the predicate. But what if someone told you to find every entry in the phone book that *contained* the letters 'ate'? That string could be anywhere in the surname - so the only possible way for you to approach this task is to start on page one and read the entire phone book, picking out the matches as you go.
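
    Incidentally, that 'surname beginning with Dob' case translates directly to SQL: a LIKE with only a trailing wildcard is still SARGable. Against the same table and index as above, a prefix-only predicate like this should give you a seek plus a range scan rather than a full index scan (just a sketch, with a made-up prefix):

    SELECT SOPOrderReturnID, DocumentNo, DocumentDate
    FROM SOPOrderReturn
    WHERE DocumentNo LIKE '000003%' -- prefix only, so SQL Server can seek to '000003' and scan the range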

    That 'read the whole phone book' scenario is the same position SQL Server finds itself in. Because it's being told 'find every record in a table where a column contains a value', the predicate is no longer SARGable, because SQL Server can't seek to a point in the index. It just has to start at the beginning and scan all the way through. As well as that, because it can't use the predicate to examine the table statistics to estimate how many rows it's going to get back, it has to rely on a less accurate metric called the density vector - which is why our second query was given more memory than it needed (and it could just as well have gone the other way and been given too little, with the disk-bound IO penalty that would incur). So imagine this scenario applied to the query above with all those 'OR LIKE' columns - that's a whole load of additional full index (or full table) scans taking place and a whole load more logical reads. Plus the cardinality estimates for the query are going to be worse, so the memory grant has the potential to be underestimated and cause a disk spill.
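
    If you're curious, you can inspect the density vector (and the histogram the optimiser would rather have been able to use) for the statistics behind that index:

    -- Show the density vector for the IX_DocumentNo_SOPOrderReturnID statistics
    DBCC SHOW_STATISTICS ('SOPOrderReturn', 'IX_DocumentNo_SOPOrderReturnID')
        WITH DENSITY_VECTOR;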

    All of this is an extremely boring way of me telling you that you can't really expect to find a magic bullet to make the desktop searches faster, just because of how those queries are constructed. Of course, optimising SQL Server itself is a necessary process (and there are many knobs and dials to tweak, but you need to know what those knobs and dials do, and how to monitor any changes you might make - far too much to go into here).

    I've helped some of my clients by building new versions of specific desktop lists for them. For example, one client has hundreds of thousands of active stock items - so their stock item desktop list now has two search boxes: the standard one which searches over all columns (I've actually limited the columns to a handful), and one that explicitly searches a single specific column. Another client who, like you, has thousands of Sales Orders has a modified desktop list which will only apply the search to the Document Number, the Customer name or the delivery address postcode (which is an extra field I've surfaced in the list for them). This can be a bit more intelligent, as if they type something in which looks like a postcode then there's no point in applying the search to the DocumentNo column. You could approach your business partner to see if they can do something along similar lines for you. Mind you, you could always just think about archiving your sales orders! One of my clients bangs so many orders on that they archive almost every day, just to keep the 'working' set of orders down to a sensible size.

    Hope that's been of some use.

  • 0 in reply to Chris Burke

    Wow,

    It's going to take me some time to fully read and digest that, but I just wanted to say thanks for taking the time to write up a pretty comprehensive analysis - it must have taken an age.

    I'm sure there will be something in there that will be of use.

    Thanks

  • 0 in reply to DaveG

    It's just the result of me taking the morning off work to wait in for a bloke to come and service the boiler, and consequently having nowt better to do.

    Seriously though, there's a lot of mystique around SQL Server - like it's some mystical black box that just magics up data. Knowing how it works under the hood can help when it comes to reasoning about why it might not be performing well.

  • 0 in reply to Chris Burke

    As Chris said, one solution is a modified desktop list with search boxes (I called them "quick filters"). That makes the search instant. This is an example from a modified Stock Control list.

    (It also has additional panels at the bottom, but that's another mod!)

  • 0 in reply to Geoff Turner

    How do you create a quick filter?

  • 0 in reply to Amanda Price

    It's a bespoke version of the desktop list, not a standard feature, unfortunately.