Options for moving on-premise Sage 50 CA to virtual private server e.g. Azure?

SOLVED
We currently have an on-premise installation of Sage 50 CA. We're in the process of moving most of our infrastructure to Microsoft Office 365 and would like to eliminate our on-premise server computer entirely. For the near future, the Sage 50cloud offering is not an option.
I'm a sysadmin, but I'm not an expert in the architecture of Sage 50 CA. I can think of several possible ways to reconfigure Sage 50 CA so that an on-premise server is no longer needed, and I'd like some feedback on which one(s) are feasible.
My (very limited) understanding is that Sage 50 CA is a "client-server" application, with separate "server" and "workstation" components. Assuming that's true, one thing I don't know is how much network bandwidth is required between the workstation(s) and the server. Some client-server applications are "chatty" and require significant bandwidth for adequate performance. This is not an issue with a typical on-premise installation on a gigabit LAN, but it might be if the server is in the "cloud" and the workstations are still local.
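
One thing worth noting: "chatty" protocols usually suffer more from round-trip latency than from raw bandwidth, so a useful first test would be to measure TCP round-trip times from a workstation to the prospective server. Below is a minimal Python sketch; the host is a placeholder, and the port is only my assumption about the Connection Manager's default listening port, so verify both against your own install.

```python
# Minimal TCP connect-latency probe: time the TCP handshake to a
# host/port repeatedly and report the spread. HOST is a placeholder;
# PORT 13531 is an assumption about the Sage 50 Connection Manager's
# default port - check your own installation.
import socket
import statistics
import time

HOST = "192.168.1.10"   # placeholder: current or planned server
PORT = 13531            # assumption: Connection Manager listening port
SAMPLES = 20

def connect_rtt(host: str, port: int, timeout: float = 5.0) -> float:
    """Return seconds taken to complete one TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.perf_counter() - start

rtts = [connect_rtt(HOST, PORT) for _ in range(SAMPLES)]
print(f"median RTT: {statistics.median(rtts) * 1000:.1f} ms")
print(f"worst  RTT: {max(rtts) * 1000:.1f} ms")
```

Running the same probe over a VPN to an Azure test VM and comparing against LAN numbers would show the per-round-trip cost; a chatty application multiplies that cost by however many round trips each screen or posting needs.
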
Do Sage data file(s) have to be stored on the same host that runs the server component? One question that has been asked is whether Sage data files can be stored on Microsoft OneDrive.
We're considering setting up a private server on a cloud service such as Microsoft Azure. If we did that there are at least a couple of scenarios:
  1. Server component + data files on Azure, workstations still local, connected via VPN. This would require traffic between the server and workstations to go through the VPN, which will be much slower than the current gigabit LAN; uplink speed would probably be 50 Mbps or lower.

  2. Cloud server configured as an RDS server, with all Sage components installed on the cloud server. This should work fine as long as a full multi-user product installation on an RDS server is supported - can someone confirm?
Any comments or real-world experiences appreciated.
Thanks in advance.
  • 0

    I am in exactly the same boat you are in. In my experience, latency to the nearest Microsoft data center is low enough that you can move the servers out and set up a site-to-site VPN between your on-premises subnet and your Azure subnet.

    However, I was not able to install the Connection Manager on Server Core - the installer complained that it needs MDAC, and I don't believe Server Core is supported. You will probably be able to make it work on a GUI instance, but that will require a costlier VM with maybe 8 GB of RAM. The Connection Manager does not need much CPU capacity; I would still like to be able to run it on Server Core, though.

    Speaking of OneDrive - do not bet on that. Essentially, a shared Sage 50 "file" is a MySQL database exposed to the client app via the Connection Manager; it has very little in common with a document on a file share, and that is how you need to treat it.
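
    If you want to convince yourself of that, just list what is inside one of your company's .SAJ folders - you will find MySQL database files, not a single document that OneDrive could sync safely. A quick Python sketch with a placeholder path:

    ```python
    # List the contents of a Sage 50 .SAJ data folder. The path is a
    # placeholder - point it at one of your own company folders. Expect
    # MySQL/InnoDB artifacts, not one self-contained document.
    from pathlib import Path

    saj = Path(r"C:\SageData\MyCompany.SAJ")  # placeholder path

    for item in sorted(saj.rglob("*")):
        kind = f"{item.stat().st_size} bytes" if item.is_file() else "dir"
        print(f"{item.relative_to(saj)}  ({kind})")
    ```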

    Speaking of migration - I would still keep some services on premises. First, your users will need local DNS; the increased latency between the data center and the local network would otherwise hurt their experience. They will also need print services. The good news is that you can get away with a cheap Intel Atom or Celeron J based box consuming 15 W of power, and keep nothing important on it locally.

    Speaking of the Connection Manager - you might consider mounting a drive that lives on an Azure server to your local server via iSCSI and exposing that via the Connection Manager to your users. I am planning to try this with my clients.

  • 0 in reply to antonh80

    Apologies for the delayed reply.

    According to Sage Support Link, the Connection Manager is installed on all computers, workstation or server alike. Is it basically a rebranded MySQL database engine that runs only on Windows? If so, that's not a typical client-server architecture; normally only one database engine (on the server hosting the shared database's files) would access those files, and the workstations/clients would all use ODBC or similar to connect to and query that DB server. In that scenario there would be no need for a full MySQL DB server installation on the clients.

    Regardless of the exact implementation, there must be a MySQL DB server instance running somewhere. If we are trying to get rid of on-premise servers then it would need to run on a cloud-based Windows virtual server. At first glance I think I'd prefer to have the data files local to the DB server instance. iSCSI etc (or OneDrive for that matter) look technically interesting but I'd be concerned about link reliability and whether the DB engine would complain about latencies.

    In any case, thanks for the thought-provoking feedback.

  • 0 in reply to Al Doman
    there must be a MySQL DB server instance running somewhere

    There is, the Connection Manager starts it and... manages connections to it. 

    Sage used to support a few specific Linux distros for the Connection Manager + RDBMS piece, but that was dropped years ago.  Sage 50 is always client-server over TCP/IP, and the only supported server OS is Windows.

    If we are trying to get rid of on-premise servers then it would need to run on a cloud-based Windows virtual server. At first glance I think I'd prefer to have the data files local to the DB server instance. iSCSI etc

    That is the only supported configuration, as far as I know.  Some offices were running Linux-based NAS devices up to the last file-based version in 2007, but any SAN presented through a Windows server should work.

  • 0 in reply to RandyW

    Hmm, I'm still not 100% clear on the nuts and bolts.

    AFAIK MySQL only allows one database server instance to open and have control of a database's files (i.e. exclusive access). If an instance on a server opens a database located on that server, another instance running on a client can't open those same data files.

    So, what actually happens when a client goes to open a database/set of books stored on a server computer? Does the Connection Manager skip starting a local MySQL instance and instead make ODBC queries to the MySQL instance on the server computer?

    Come to think of it, how is the MySQL instance on the server computer "told" that it has "ownership" of a given set of books (i.e. database) on that server?

  • +1 in reply to Al Doman
    verified answer
    AFAIK MySQL only allows one database server instance to open and have control of a database's files (i.e. exclusive access). If an instance on a server opens a database located on that server, another instance running on a client can't open those same data files.

    I don't work for Sage, so I can only see how the thing behaves, rather than how it's actually assembled. 

    Right, there is exclusive access.  The raw DB file is never accessed across a network, only by a DB instance running on the server with the disk.

    A process.pid text file is created in the .SAJ (data) folder when the first connection is made.  It contains the process ID of the MySQL database daemon.  If it isn't there, the Connection Manager knows there are no existing connections, so it must first start an instance of the MySQL daemon.  One OS is in full control of the disk, so no other copy of MySQL can access the data.  There is absolutely never a direct client read/write connection to the MySQL InnoDB file (ibdata1).
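
    If you're curious, you can check that start-up logic from the outside. Here's a rough Python sketch - the path is a placeholder, and it mimics observed behaviour rather than anything Sage documents:

    ```python
    # Mimic the check described above: look for process.pid in a .SAJ
    # data folder and test whether that PID is still alive. The path is
    # a placeholder; the behaviour is observed, not Sage-documented.
    from pathlib import Path

    import psutil  # third-party: pip install psutil

    saj = Path(r"C:\SageData\MyCompany.SAJ")   # placeholder data folder
    pid_file = saj / "process.pid"

    if not pid_file.exists():
        print("no process.pid: no connections yet; the first one starts the daemon")
    else:
        pid = int(pid_file.read_text().split()[0])
        if psutil.pid_exists(pid):
            print(f"MySQL daemon appears to be running (pid {pid})")
        else:
            print(f"stale process.pid: pid {pid} is not running")
    ```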

    Each workstation's default installation gets all the software, both client and server, but the workstation's MySQL daemon starts and runs only when accessing data on the local workstation's filesystem.  Even if there were no other computers in the world, the Sage 50 client would still communicate with the local server component via specific TCP/IP ports.
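
    You can watch this happen, too: whenever a company file is open locally, a mysqld process will be listening on local TCP ports. Here is a sketch using the third-party psutil package; the process-name match is an assumption (confirm what the daemon is called on your install), and enumerating sockets on Windows may require elevated rights:

    ```python
    # Find local mysqld-like processes and the TCP ports they listen on.
    # Assumes the daemon's name contains "mysqld"; requires psutil
    # (pip install psutil) and possibly admin rights on Windows.
    import psutil

    mysqld_pids = {p.pid for p in psutil.process_iter(["name"])
                   if p.info["name"] and "mysqld" in p.info["name"].lower()}

    for conn in psutil.net_connections(kind="tcp"):
        if conn.pid in mysqld_pids and conn.status == psutil.CONN_LISTEN:
            print(f"mysqld pid {conn.pid} listening on "
                  f"{conn.laddr.ip}:{conn.laddr.port}")
    ```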

    There are no 'local' connections via shared memory or direct to disk.  If the local MySQL daemon is disabled, for example, the client will be unable to open a local 'company file' but can access data on a server with no problems.

    The Connection Manager literally manages the connections, but it doesn't handle company data itself.  It seems to be aware of all existing connections (check File | Properties in multi-user mode) and of the status of the company data on the server - possibly via a temporary table in the database itself, I can't say.

    Whether it's the first connection or not, it will pass an available port to the client at log-in (assuming the database isn't in 'single-user' mode, there are licenses available, etc.).

    There is only one type of client connection to the InnoDB data file, via MySQL: all the data is fetched using ODBC queries against the MySQL DBMS instance.  The requests and data get to the right place because they are made from different IP addresses and on different ports.
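
    To make that concrete, this is roughly what such an ODBC round-trip looks like from a client. Every detail below - driver name, server address, port, schema, credentials - is a placeholder or assumption (Sage hands the real port to the client at log-in), so treat it as an illustration of the mechanism, not a supported integration:

    ```python
    # Illustrative ODBC round-trip to a MySQL instance, using pyodbc
    # (pip install pyodbc) plus an installed MySQL ODBC driver. All
    # connection details are placeholders/assumptions.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={MySQL ODBC 8.0 ANSI Driver};"  # assumes this driver is installed
        "SERVER=192.168.1.10;"                  # placeholder server address
        "PORT=13540;"                           # placeholder: real port assigned at log-in
        "DATABASE=mycompany;"                   # placeholder schema name
        "UID=user;PWD=secret;"                  # placeholder credentials
    )
    cursor = conn.cursor()
    cursor.execute("SELECT 1")  # trivial query proving the ODBC path works
    print(cursor.fetchone())
    conn.close()
    ```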

    I hope that helps. It could be better organized and could maybe use a flowchart.   The important thing is that Sage 50 always, and only, runs in client-server mode, never in a direct file-access mode.

  • 0 in reply to RandyW

    Most helpful, thanks! I'm pretty sure that gives me what I need to be able to evaluate virtual/cloud-based platforms for running Sage.

    I've marked your last post as the answer; I imagine it may also help some folks if they need to troubleshoot.

    Thanks again. Al