KevinTheJedi Thanks
osTicket Version: v1.17.4 (ea462cb) — Up to date
Web Server Software: Apache/2.4.53 (Rocky Linux) OpenSSL/3.0.1
MySQL Version: 10.10.3
PHP Version: 8.1.17
I have managed to do the following.
php manage.php file migrate --bk=F --to=3 --limit=3
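As a sketch of how this command appears to work in this thread: --bk is the source backend character (F for filesystem), --to is the target backend (3 appears to be the S3 plugin's backend character on this install), and --limit caps how many files move per run, so the command is repeated until nothing is left to migrate. For example:

```shell
# Run from the osTicket root; backend characters assumed from this thread.
# Repeat in batches until the command reports no remaining files.
php manage.php file migrate --bk=F --to=3 --limit=100
```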
@KevinTheJedi How do I:
Please advise.
Thanks
rsclmumbai
Check the bk column in the file table to ensure it says 3. Then check to make sure there are no associated file_chunk records for the files in question. Lastly, visit the Ticket where you uploaded the file and click the link to download.
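Assuming the default ost_ table prefix (adjust if yours differs), those two checks might look like:

```sql
-- Which files are not yet on the S3 backend ('3')?
SELECT id, name, bk FROM ost_file WHERE bk <> '3';

-- Are any chunk rows left behind for a migrated file?
-- (1234 is a hypothetical file id)
SELECT COUNT(*) FROM ost_file_chunk WHERE file_id = 1234;
```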
Cheers.
@KevinTheJedi - Just to clarify, what I'm asking is:
Attachment table should have object_id with object_type of T and that’ll be the associated thread_entry record then you can backtrack from there to find the Ticket's ID or number. We have ERDs available here.
For S3 I’m pretty sure it puts all attachments in the bucket (and folder if configured) without subdirectories. So the file key column value should be the name of the file in S3.
KevinTheJedi Check the bk column in the file table to ensure it says 3. Yes, it says 3.
KevinTheJedi Then check to make sure there are no associated file_chunk records for the files in question. KevinTheJedi Lastly, visit the Ticket where you uploaded the file and click the link to download.
How do I check this? This is where I'm stuck. How do I correlate the columns in table ost_attachment to ost_ticket?
KevinTheJedi My Attachment table has object_type H & D only.
Sorry, it is H for thread_entry and D is for Draft.
As for the relations I provided a link to the ERDs above.
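As a sketch of that backtracking (assuming the default ost_ table prefix and the relations shown in the ERDs; verify the column names against your version):

```sql
-- Walk from an attachment back to its ticket:
-- attachment -> thread entry (type 'H') -> thread -> ticket
SELECT t.ticket_id, t.number, f.name
  FROM ost_attachment a
  JOIN ost_thread_entry e ON e.id = a.object_id AND a.type = 'H'
  JOIN ost_thread th ON th.id = e.thread_id AND th.object_type = 'T'
  JOIN ost_ticket t ON t.ticket_id = th.object_id
  JOIN ost_file f ON f.id = a.file_id;
```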
KevinTheJedi I have managed to successfully set up the Attachments to S3 plugin & migrated current attachments (8GB) from the file system to S3.
My Filesystem still shows the Attachment folder size to be 1.7GB.
Any idea why these files are not migrating and what may be in these files?
Run the command manually and see what errors you get.
KevinTheJedi Manually? I'm sorry I did not follow. What command should I run?
KevinTheJedi I found another bigger issue.
Around 8 hours ago, I set my plugin configuration as below.
But when I check MySQL > "files" table, I'm seeing a few files still on the file system and not on S3.
What configuration may I be missing?
Sorry, it’s too early for me; I didn’t notice the screenshot. Are any still set to D in the db? If so, they can’t be moved for some reason: permissions issues, missing or corrupt data, etc.
Admin Panel > Settings > System > Store Attachments
This was outlined in the documentation linked earlier.
KevinTheJedi My bad. I thought enabling the plugin was all that was required. Apologies. It's fixed now. Thanks
@KevinTheJedi When I delete a ticket with an attachment, the corresponding record does not get deleted from the files table or from S3.
In S3, I have ALL Permissions granted to the AWS Account.
Can you suggest if I may be missing something?
If it’s an attached file (not an inline image) they will be removed after a day. The cron orphaned file cleanup and the ticket deletion methods only purge orphaned files after 24 hours. When you delete a ticket we unlink any associated attachment records. If the associated file is no longer attached to anything the orphaned file cleanup will eventually purge them. This is why it's important to have a cron job running as it will continuously check for orphaned files and do needed cleanup. It also does things like session cleanup, etc.
KevinTheJedi Got it. I will check this after 24 hours and update you. Thanks
To expedite this test simply go to the _file table, look for the orphaned file records, set the value for the created column to 2 days previous, and run cron via command line on the server (so you don't have to wait for cron job) to see if they are purged.
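Put together, that test might look like this (1234 is a hypothetical orphaned file id; the default ost_ table prefix is assumed):

```sql
-- Back-date the orphaned file so cleanup sees it as older than 24 hours
UPDATE ost_file SET created = NOW() - INTERVAL 2 DAY WHERE id = 1234;
```

Then trigger cron from the command line, e.g. `php api/cron.php` run from the osTicket root, and check whether the file row (and the object in S3) has been purged.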
KevinTheJedi Great idea & it worked. Thanks a ton. You have been super patient with my questions!!!!