Hello,

I have an old osTicket 1.9.12. Before updating, I made a copy of the system to test the update. I copied the database with an export and imported it into a new database. Only file_chunk was a problem, because it is too large.

The update of the system went well. The only issue is that the display of the ticket overview (queue?) takes very long if there are more than a few hundred tickets. Anything below 500 is OK, but 6,000, for example, takes 2 minutes to load.
I have read in the forum about the patch in class.search.php, converting to InnoDB, and other tips. Nothing has worked.

The database and osTicket are running on the same server as the current 1.9.12 version. The old version shows the same view in milliseconds; the new one needs 2 minutes.

Before upgrading the production system, I want to understand and fix the issue. Maybe it's something I did wrong when copying the database?

This is a query that takes 2 minutes to load; the result is 6,392 tickets:

SELECT A1.number AS number, A1.ticket_id AS ticket_id, A1.source AS source, A1.lastupdate AS lastupdate, B2.subject AS cdata__subject, A1.isoverdue AS isoverdue, B4.name AS user__name, B2.priority AS cdata__priority, B5.firstname AS staff__firstname, B5.lastname AS staff__lastname, B6.name AS team__name, A1.team_id AS team_id, A1.isanswered = '0' AS 2GkBZxz,
(SELECT COUNT(R0.id) AS count FROM ots_thread Q7 JOIN ots_ticket Q8 ON (Q7.object_type = 'T' AND Q7.object_id = Q8.ticket_id) LEFT JOIN ots_thread_entry R0 ON (Q7.id = R0.thread_id) WHERE Q8.ticket_id = A1.ticket_id AND NOT R0.flags & 4 != 0) AS _thread_count,
(SELECT COUNT(R1.id) AS count FROM ots_thread Q7 JOIN ots_ticket Q8 ON (Q7.object_type = 'T' AND Q7.object_id = Q8.ticket_id) LEFT JOIN ots_thread_entry R0 ON (Q7.id = R0.thread_id) LEFT JOIN ots_attachment R1 ON (R1.type = 'H' AND R0.id = R1.object_id) WHERE Q8.ticket_id = A1.ticket_id AND R1.inline = 0) AS _att_count,
B7.expire > NOW() AND NOT B7.staff_id = 1 AS _locked,
(SELECT COUNT(R0.id) AS count FROM ots_thread Q7 JOIN ots_ticket Q8 ON (Q7.object_type = 'T' AND Q7.object_id = Q8.ticket_id) LEFT JOIN ots_thread_collaborator R0 ON (Q7.id = R0.thread_id) WHERE Q8.ticket_id = A1.ticket_id) AS _collabs,
COALESCE(B5.firstname, B5.lastname, B6.name, 'zzz') AS assignee
FROM ots_ticket A1
JOIN ots_ticket_status A2 ON (A1.status_id = A2.id)
LEFT JOIN ots_thread A3 ON (A3.object_type = 'T' AND A1.ticket_id = A3.object_id)
LEFT JOIN ots_thread_referral A4 ON (A3.id = A4.thread_id)
LEFT JOIN ots_staff A5 ON (A4.object_type = 'S' AND A4.object_id = A5.staff_id)
LEFT JOIN ots_thread A6 ON (A1.ticket_id = A6.object_id AND A6.object_type = 'C')
LEFT JOIN ots_thread_referral A7 ON (A6.id = A7.thread_id)
LEFT JOIN ots_staff A8 ON (A7.object_type = 'S' AND A7.object_id = A8.staff_id)
LEFT JOIN ots_department B0 ON (A4.object_type = 'D' AND A4.object_id = B0.id)
LEFT JOIN ots_department B1 ON (A7.object_type = 'D' AND A7.object_id = B1.id)
LEFT JOIN ots_ticket__cdata B2 ON (A1.ticket_id = B2.ticket_id)
LEFT JOIN ots_ticket_priority B3 ON (B2.priority = B3.priority_id)
LEFT JOIN ots_user B4 ON (A1.user_id = B4.id)
LEFT JOIN ots_staff B5 ON (A1.staff_id = B5.staff_id)
LEFT JOIN ots_team B6 ON (A1.team_id = B6.team_id)
LEFT JOIN ots_lock B7 ON (A1.lock_id = B7.lock_id)
WHERE A2.state = 'open' AND ((A2.state = 'open' AND (A1.staff_id = 1 OR A5.staff_id = 1 OR A6.object_type = 'C' AND A8.staff_id = 1)) OR A1.dept_id IN (1, 2, 3, 4) OR B0.id IN (1, 2, 3, 4) OR A6.object_type = 'C' AND B1.id IN (1, 2, 3, 4)) AND (A1.ticket_pid IS NULL OR A1.flags & 8 != 0)
GROUP BY A1.ticket_id
ORDER BY B3.priority_urgency ASC, A1.lastupdate DESC
LIMIT 50

Any ideas?

best regards
Horst

    H0rst

    Sounds like you need to optimize MySQL. They have a good optimization guide on their website. If you can’t follow their instructions then it sounds like you simply need to dedicate more resources to MySQL.

    P.S.
    If you are using MySQL < 8.0 or MariaDB >= 10.6, then I'd highly recommend upgrading to MySQL 8.0, as it typically has better performance out of the box without optimizations, etc.

    Cheers.

    H0rst

    Here are the basic commands you can start with:

    MyISAM -> InnoDB

    • Run the query below and then run every statement it returns (see the example after the query).
    • Replace database_name with your actual database name.
    SELECT  CONCAT('ALTER TABLE `', table_name, '` ENGINE=InnoDB;') AS sql_statements
    FROM    information_schema.tables AS tb
    WHERE   table_schema = 'database_name'
    AND     `ENGINE` = 'MyISAM'
    AND     `TABLE_TYPE` = 'BASE TABLE'
    ORDER BY table_name DESC;
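
    • For example, each returned statement will look something like this (ots_ticket is just an illustration based on the table prefix in your query); run every returned statement against your database. The same applies to the ROW_FORMAT generator below:
    ALTER TABLE `ots_ticket` ENGINE=InnoDB;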

    ROW_FORMAT=DYNAMIC

    • Run the query below and then run every statement it returns.
    • Replace database_name with your actual database name.
    SELECT  CONCAT('ALTER TABLE `', table_name, '` ROW_FORMAT=DYNAMIC;') AS sql_statements
    FROM    information_schema.tables AS tb
    WHERE   table_schema = 'database_name'
    AND     `ROW_FORMAT` != 'DYNAMIC'
    ORDER BY table_name DESC;

    InnoDB Buffer Pool Size

    • This is typically roughly 75% of the server’s total memory. Set it in bytes [int] (e.g. 1073741824 is 1 GB) or as a human-readable size [string] (e.g. '1 GB').
    -- Select current total buffer pool size in human readable format (GB)
    SELECT @@innodb_buffer_pool_size/1024/1024/1024;
    -- Set global buffer pool size to desired value (to persist you need to update this in my.cnf)
    SET GLOBAL innodb_buffer_pool_size = total_size_here;
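
    • To make the change survive a restart, the setting also has to go into my.cnf. A minimal sketch, with the value purely as an example – pick whatever fits your server:
    [mysqld]
    innodb_buffer_pool_size = 24G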

    Analyze Tables

    • Run for each table in the database.
    • Replace table_name with each table name.
    -- Run ANALYZE to generate histograms for tables to improve performance
    ANALYZE TABLE `table_name` PERSISTENT FOR ALL;
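
    • If you don't want to type the command for every table, a generator query in the same style as the ones above should produce the ANALYZE statements for you (replace database_name with your actual database name and run whatever it returns):
    SELECT  CONCAT('ANALYZE TABLE `', table_name, '` PERSISTENT FOR ALL;') AS sql_statements
    FROM    information_schema.tables AS tb
    WHERE   table_schema = 'database_name'
    AND     `TABLE_TYPE` = 'BASE TABLE'
    ORDER BY table_name DESC;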

    Cheers.

    Thank you KevinTheJedi,

    The database engine is InnoDB and the ROW_FORMAT is DYNAMIC. I have tried ANALYZE (and OPTIMIZE before that) without any change. The database server is the same one my old 1.9.12 runs on without problems. Unfortunately, I have never worked with SQL and only have phpMyAdmin. The server is a managed server (quad-core, 32 GB RAM) and I have no root access. The only databases in use are the old osTicket system, with 0 agents working at the moment, and the new one with my test upgrade.
    The result of the buffer query is "5.000000000000". I can't change this myself. If it is important, I can ask my provider.

    Could the issue be in the data? Maybe I have some empty tables or missing data because of the upgrade from the old version? Could there be something special about the structure of my data? I have used it as a simple email support system for some years. No teams, no departments, no helpdesk. Roughly 50,000 end users (status = guest) who have sent tickets over the last years.

    Is there a better way to set up the test system than exporting/importing the databases?
    Should I do a fresh installation and then copy the databases over in some way?

    best regards
    Horst

      H0rst

      The result of the buffer query is "5.000000000000". I can't change this myself. If it is important, I can ask my provider.

      As suggested above, the recommended innodb_buffer_pool_size should be around 75% of the server's total memory (at least that's what I've been told by DBAs and such). Given that it's a 32 GB server and the query you ran returned 5 GB, that is nowhere near 75% of the total memory. I would ask them to update this setting and dedicate more memory. Obviously 75% is just a suggestion, so you can use whatever you'd like or whatever your host is comfortable with.

      Could the issue be in the data? Maybe I have some empty tables or missing data because of the upgrade from the old version? Could there be something special about the structure of my data?

      I wouldn't think so. I think it's simply that we use "beefier" queries due to Custom Columns/Custom Queues, so your database needs more "oomph" to run them. 1.9.x was old and used simple, basic queries.

      Is there a better way to set up the test system than exporting/importing the databases?

      I don't know what you mean by this exactly, but you can try exporting and re-importing the database to see if that helps clear any space MySQL hasn't cleared itself. Your server should have innodb_file_per_table enabled so it automatically reclaims storage. If you do not have this enabled, the space will remain even though the data is deleted, which can (sometimes) cause performance issues. I would suggest dumping with the native mysqldump command via the CLI and importing with the native mysql command via the CLI. These have no timeouts and the least potential for issues.
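
      A minimal sketch of that dump/import – the user, database, and file names are placeholders, so substitute your own:
      # Dump the source database to a file
      mysqldump -u db_user -p osticket_db > osticket_db.sql
      # Import the dump into the (empty) target database
      mysql -u db_user -p osticket_test < osticket_db.sql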

      Cheers.

        Thank you. I will talk to my provider.

        best regards and have a nice weekend
        Horst

          H0rst

          How was it resolved, please? Can you share?

            st512
            I don't have a solution yet. My provider recommends upgrading the server, because it is running on HDDs instead of NVMe drives. He tells me MariaDB/MySQL doesn't perform well on HDDs.
            I got this email 2 hours ago. The upgrade will take a few days. I am not sure if I should do it before Christmas. ;-)

            best regards
            Horst

            st512 KevinTheJedi
            OK, now I have a brand new server with 16 cores, 128 GB RAM, and fast SSDs.
            But that is not the solution: the 2-minute query now takes 86 seconds.
            I will keep talking with my provider.

            best regards
            Horst

            Great news: my provider raised the innodb_buffer_pool_size to 90 GB, and now everything is fast!

            @KevinTheJedi thank you for your help.
