[SEI] SEI view fails to load, stuck in infinite Pre-Fetching, after a few days


Note: I am taking a safe approach here since this client's Sage X3 and SEI setup can be extremely complex.

SEI version details (an update would require justification and a guarantee before it is allowed):

```
Version : 2023
Release : 2
Build : 23.0.2.20012
```

SEI server specs:

```
CPU: 8x vCPU (Intel Xeon Silver, 2.1 GHz; non-negotiable)
RAM: 32 GB
```

Sage X3 version: v12p31, not allowed to update.

Reference: SEI View stuck in Pre-Fetching Data

When I access a SEI view, I get the following message: "Pre-Fetching"

Normally, this is a loading screen and should not be of concern.

However, my client waited for half an hour on a report that used to complete in 20 seconds.

As per the reference above, I restarted the whole SEI infrastructure as a temporary stopgap.

The issue recurs randomly; I cannot predict when a view will hang in this infinite Pre-Fetching state.

Chrome's web console shows no errors; the Network tab simply shows the request waiting for a long period without any timeout being triggered.

How can I resolve this issue permanently?

Management is concerned about SEI's reliability and about being caught off guard by this unexplained SEI "Pre-Fetching" issue.

  •

    Hi Chunheng,

    The version you are on is still a supported version of SEI, so you won't be required to update to get assistance on this issue.

    However, the SEI online help does not cover the issue as you have reported it, so we will need to escalate it for Nectari to review.

    Please reach out to your local support team to create this escalation for Nectari to review.

    Thanks 

  •

    Hi Chunheng,

    Has it been a while since you restarted the BI service? We have seen Pre-fetching issues resolved for a period of time by restarting this service. If it happens fairly soon after the restart, then I agree with Neville on setting up a call with the SEI product team.

  • In reply to NevilleC

    Hi NevilleC,

    I have escalated the case to Nectari for help, since this issue is happening often enough to be a concern.

    Due to my lack of experience with SEI, I cannot tell whether the root cause is the report's SQL query, the database, or an SEI component.

  •

    Chunheng,

    Is this a specific view/process, or is it happening to all processes? If it is just one process, the first thing I would do is take the actual query and run it in SQL; you'll see right away whether it's the query or SEI. In my setup I can find the queries in C:\Program Files\Nectari\Nectari Server\Server\BIService.log. In the RDBMS world it is not that rare that a query that used to run in 20 seconds suddenly takes much longer. You might need some optimization.
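
    For instance (a minimal sketch; the schema and table names below are placeholders, not taken from your log), you can paste the captured statement into SSMS with timing statistics enabled to separate query time from SEI overhead:

    ```
    -- Hypothetical example: replace the SELECT below with the statement
    -- captured from BIService.log, and x3.PROD with your own DB/schema.
    SET STATISTICS TIME ON;  -- report parse/compile and execution times
    SET STATISTICS IO ON;    -- report logical/physical reads per table

    SELECT TOP (100) *
    FROM x3.PROD.SORDER;     -- placeholder for the logged SEI query

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;
    ```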

  • In reply to Israel Braunfeld

    Once it happens on one view/process, it then happens on the rest of the views/processes, regardless of user.

    As a note, the SEI and Sage X3 databases share a single Sage X3 SQL Server instance.

    From C:\Program Files\Sage\SEI Server\Server\BIService{YYYYMMDD}.log:

    ```

    2024-11-05 15:34:40.149 +08:00 [Error] [BIService]
    {"SourceContext":"Nectari.DataAccess.SqlDataService","ThreadId":118}
    Unexpected exception occured while opening the Db Connection.
    System.InvalidOperationException: Timeout expired.  The timeout period elapsed prior to obtaining a connection from the pool.  This may have occurred because all pooled connections were in use and max pool size was reached.
       at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection)
       at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
       at System.Data.SqlClient.SqlConnection.TryOpenInner(TaskCompletionSource`1 retry)
       at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
       at System.Data.SqlClient.SqlConnection.Open()
       at Nectari.DataAccess.DataServiceBase.OpenConnection(IDbConnection connection)
       at Nectari.DataAccess.SqlDataService.GetSQLInfo(String statementSelectlist, Dictionary`2 mappingAliasTableDictionary, DataTable requestSchemaTable)
       at Nectari.DataAccess.DALBase`2.SetMetaData(String statementSelectlist, Dictionary`2 mappingAliasTableDictionary, IDataReader reader)
       at Nectari.DAL.RequestDAL.SQLGetInfoColumnsBySelectList()

    2024-11-05 15:34:40.150 +08:00 [Error] [BIService]
    {"SourceContext":"Nectari.DAL.RequestDAL","User":"","CentralPoint":"\\\\SAGEWEB\\CentralPoint","ThreadId":118}
    An unexpected exception occured.
    System.InvalidOperationException: Timeout expired.  The timeout period elapsed prior to obtaining a connection from the pool.  This may have occurred because all pooled connections were in use and max pool size was reached.
       at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection)
       at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
       at System.Data.SqlClient.SqlConnection.TryOpenInner(TaskCompletionSource`1 retry)
       at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
       at System.Data.SqlClient.SqlConnection.Open()
       at Nectari.DataAccess.DataServiceBase.OpenConnection(IDbConnection connection)
       at Nectari.DataAccess.SqlDataService.GetSQLInfo(String statementSelectlist, Dictionary`2 mappingAliasTableDictionary, DataTable requestSchemaTable)
       at Nectari.DataAccess.DALBase`2.SetMetaData(String statementSelectlist, Dictionary`2 mappingAliasTableDictionary, IDataReader reader)
       at Nectari.DAL.RequestDAL.SQLGetInfoColumnsBySelectList()
       at Nectari.Services.NectariService.SQLGetInfoColumns(String statementToken, INectariTraceEvent& eventEntity)

    2024-11-05 15:34:40.150 +08:00 [Error] [BIService]
    {"SourceContext":"Nectari.Services.NectariService","User":"","CentralPoint":"\\\\SAGEWEB\\CentralPoint","ThreadId":118}
    Unexpected exception occured in method SQLGetInfoColumns
    System.InvalidOperationException: Timeout expired.  The timeout period elapsed prior to obtaining a connection from the pool.  This may have occurred because all pooled connections were in use and max pool size was reached.
       at System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection)
       at System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
       at System.Data.SqlClient.SqlConnection.TryOpenInner(TaskCompletionSource`1 retry)
       at System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry)
       at System.Data.SqlClient.SqlConnection.Open()
       at Nectari.DataAccess.DataServiceBase.OpenConnection(IDbConnection connection)
       at Nectari.DataAccess.SqlDataService.GetSQLInfo(String statementSelectlist, Dictionary`2 mappingAliasTableDictionary, DataTable requestSchemaTable)
       at Nectari.DataAccess.DALBase`2.SetMetaData(String statementSelectlist, Dictionary`2 mappingAliasTableDictionary, IDataReader reader)
       at Nectari.DAL.RequestDAL.SQLGetInfoColumnsBySelectList()
       at Nectari.Services.NectariService.SQLGetInfoColumns(String statementToken, INectariTraceEvent& eventEntity)

    ```
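
    The exception itself points at ADO.NET connection pooling: SqlClient defaults to a Max Pool Size of 100 connections per distinct connection string, and once they are all in use, a new Open() waits for the connection timeout (15 seconds by default) and then throws exactly this "max pool size was reached" error. As a sketch (assuming the SEI service is distinguishable by its program_name or host_name; adjust the filter to your environment), you can count how many connections each application is holding on the shared instance:

    ```
    -- Count open sessions per application/host to see whether the SEI
    -- service is holding close to 100 connections (the SqlClient
    -- default Max Pool Size) on the shared SQL Server instance.
    SELECT program_name,
           host_name,
           COUNT(*) AS session_count,
           SUM(CASE WHEN status = 'sleeping' THEN 1 ELSE 0 END) AS sleeping_count
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1
    GROUP BY program_name, host_name
    ORDER BY session_count DESC;
    ```

    If the SEI service's session count sits at the pool cap while most of its sessions are sleeping, connections are being held or leaked rather than returned to the pool.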

  • In reply to chunheng

    I also noted in the log that there's a "@LIB_" prefix which I am not sure about.
  • In reply to chunheng

    Try to find the last SQL statement prior to the errors and run it directly in SQL. You can also check the Activity Monitor in SQL Server Management Studio to see what the session is doing. It seems like your session is not closing and is running too long, which may be causing other sessions to fail.

    Is it one view/process that is causing the error? I understand that once it happens, all other sessions fail.

    The @LIB_ prefix is just a variable for your environment/connection; just replace it with your DB/schema name, for example x3.prod.
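
    A rough T-SQL equivalent of that Activity Monitor check (a sketch, assuming you have VIEW SERVER STATE permission; these are the standard dynamic management views):

    ```
    -- List currently executing requests with their SQL text and elapsed
    -- time, to spot an SEI session that runs too long or never closes.
    SELECT r.session_id,
           s.program_name,
           r.status,
           r.wait_type,
           r.total_elapsed_time / 1000 AS elapsed_seconds,
           t.text AS running_sql
    FROM sys.dm_exec_requests r
    JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE s.is_user_process = 1
    ORDER BY r.total_elapsed_time DESC;
    ```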

  • In reply to Israel Braunfeld

    > Is it one view/process that is causing the error? 

    I genuinely do not know.

    SEI's default logging is not helping much and I do not know how to configure it.

    If the issue were on the SQL side rather than the SEI side, it could be solved relatively quickly.

    Experiment context: only three people were inside Sage X3, after working hours. I observed for about 45 minutes, starting from 8:00 pm UTC+8.

    For reference:

    Running the query directly in SSMS: 2 seconds.

    SEI (after a fresh server restart; timed manually, so expect ±2 s of human error): 23 seconds.

    I also captured an example SQL statement from the SEI server log after the infinite Pre-Fetching issue occurred.

    Checking from the Chrome Network tab:

    Other than the consistent error that appears to come from the SEI codebase itself ("Unexpected exception occured in method GetProcessView System.ArgumentException: An item with the same key has already been added."), the SEI log does not make it obvious which request failed. Even when a view hangs for 10+ minutes, the log shows no new failure (the longest I have watched the log continuously is about 3 minutes); it just repeats the same three blocks (one error, two containing @LIB_).

    Checking the SQL Activity Monitor, the server looks unchallenged.

    Currently, I do not know of a reliable way to catch the error, since users see no issue until they discover, after waiting 20 minutes, that a view is hanging.
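
    One way to leave evidence for the next occurrence (a sketch only; the SEI_PoolSnapshot table name is made up, and this assumes you can run the INSERT every minute via a SQL Agent job) is to snapshot session counts so a hang leaves a trail:

    ```
    -- Hypothetical monitoring table: schedule the INSERT below so the
    -- next infinite Pre-Fetching episode leaves a connection-count trail.
    CREATE TABLE dbo.SEI_PoolSnapshot (
        captured_at   DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
        program_name  NVARCHAR(128) NULL,
        host_name     NVARCHAR(128) NULL,
        session_count INT           NOT NULL
    );

    INSERT INTO dbo.SEI_PoolSnapshot (program_name, host_name, session_count)
    SELECT program_name, host_name, COUNT(*)
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1
    GROUP BY program_name, host_name;
    ```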

  • In reply to chunheng

    Sage X3 on a SEI landing page after a server restart (caused by a Windows Server update).