Sage 50 extremely slow performance when running from home via VPN

Has anyone solved the problem of extremely slow Sage 50 performance when running the software over a VPN from home?

I have previously worked remotely in a different role with no problems, so I know that my home connection is not the problem. I also connect directly via Ethernet.

    I've spent some time using Wireshark to observe my Sage 50 client communicating with the Sage Data Service.

    My conclusion is that almost every action in Sage generates a huge number of small requests (under 500 bytes each) and receives a similarly huge number of replies from the Data Service. Loading my Transactions screen, for example, generates approximately 1,500 HTTP request and reply packets.

    Sage's developers have therefore favoured a small-packet model of data transfer, which means that latency matters far more than bandwidth. Latency can only be improved by moving the client nearer to the server, so that requests and replies have less equipment to 'hop' through.
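
    As a rough illustration (my own simplification, not Sage's documented behaviour), treat those ~1,500 packets as ~750 sequential request/reply round trips; screen-load time is then dominated by round-trip time alone:

        # Back-of-envelope sketch: load time ~= sequential round trips x RTT.
        # 750 round trips is my estimate from the ~1,500 packets observed above.
        ROUND_TRIPS = 750

        def load_time_seconds(rtt_ms: float) -> float:
            """Approximate screen-load time when each request waits for its reply."""
            return ROUND_TRIPS * rtt_ms / 1000.0

        print(f"LAN (~1.5 ms RTT): {load_time_seconds(1.5):.1f} s")  # ~1.1 s
        print(f"VPN (~70 ms RTT):  {load_time_seconds(70):.1f} s")   # ~52.5 s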

    A VPN connection will therefore always be slower than a LAN connection, because of the significant number of hops required to get from the home ISP connection, through the various ISP interchanges, into your VPN server and on to the Sage server.

    This is easy to see by running a standard tracert to your Sage server over both the LAN and VPN paths. In the real world I've observed 20-30 hops and about 70 ms of latency to my Sage server over a VPN connection from my home address, versus a single hop and 1-2 ms of latency over my LAN connection. As a result, my Sage VPN access runs significantly slower than the LAN, despite the bandwidth available over the VPN (approx. 1 Gb/s symmetrical) being identical to my LAN connection (also 1 Gb/s). Lack of bandwidth is absolutely not the issue.
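
    To reproduce the comparison, run the same commands over each path and note the hop count and times (SAGESERVER is a placeholder for your own data-service host):

        rem Run once from the LAN and once over the VPN, then compare
        tracert SAGESERVER
        ping SAGESERVER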

    I'd imagine that Sage 50 and the Sage Data Service could be improved by packaging their requests and replies into larger bundles. Without the overhead of generating each small packet, available bandwidth would become more important than latency.
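
    To illustrate the idea (the endpoints here are invented for the sketch; Sage's real API is not documented anywhere I've seen):

        import requests  # pip install requests

        BASE = "http://sageserver:8080"  # placeholder data-service address

        def load_chatty(txn_ids):
            """Today's pattern: one round trip per record, so latency-bound."""
            return [requests.get(f"{BASE}/transaction/{i}").json() for i in txn_ids]

        def load_batched(txn_ids):
            """Bundled pattern: one round trip for the lot, so bandwidth-bound."""
            return requests.post(f"{BASE}/transactions/batch", json=list(txn_ids)).json()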

    I guess this is why Sage Drive was developed: all requests and replies are served from local data, with no equipment to hop through. It's a shame, therefore, that Sage Drive is so poorly developed and prone to crashing its server, otherwise I'd have everybody using it, even on the LAN.

  • In reply to Tyron Barrett

    The way I explain it:

    Sage calls up the office, asks for a copy of all the company records to be sent over, and tells everyone else to stop updating the records.

    It then searches for the info it needs, updates as appropriate, and sends all the records back, after which everyone else can carry on.

    Then it destroys the local duplicate copy.

    Versus any modern DB system, where the request is sent to the office, an admin member of staff looks up the info, and just the requested info is sent back.

    Then there is Sage Drive...

    Which is just the first model, except that instead of a copy being sent every time, there is already a copy held locally; Sage looks there first and hopes the data is the same as at the office.

    Then, if it looks like you might need to update the office master copy: all STOP, send in the info, do some checks, and hope nobody else was doing anything in the meantime.
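
    A toy sketch of the two extremes (not real Sage code; Sage Drive is essentially the first function with a cached local copy):

        import threading

        office_records = list(range(100_000))   # the company data held at the office
        office_lock = threading.Lock()          # "everyone must stop"

        def flat_file_lookup(wanted):
            """Sage 50 style: lock, ship the whole file over, search, ship it back."""
            with office_lock:                       # others blocked while we work
                local_copy = list(office_records)   # entire data set crosses the wire
                result = [r for r in local_copy if r == wanted]
                office_records[:] = local_copy      # everything goes back
            return result                           # local copy then discarded

        def client_server_lookup(wanted):
            """Modern DB style: the 'admin member of staff' answers server-side."""
            return [r for r in office_records if r == wanted]  # only the answer travels

        print(flat_file_lookup(42), client_server_lookup(42))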

    But yes, RDC and RemoteApp are the best way to run Sage Accounts.
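
    For anyone taking the RemoteApp route, the published app is launched from a .rdp file along these lines (the host name and alias below are placeholders; the alias must match whatever your RDS admin actually published):

        full address:s:rds.example.local
        remoteapplicationmode:i:1
        remoteapplicationprogram:s:||SageAccounts
        remoteapplicationname:s:Sage 50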