Archive SQL Tables

  • 01-10-2010 12:55pm
    #1
    Registered Users, Registered Users 2 Posts: 1,740 ✭✭✭


    Guys

    Running SQL 2000 SP4. I'm no DBA and inherited this monstrosity. Anyway, I was running into an issue with the CPU reaching 100% utilisation during a daily end-of-day process which includes backing up the database. I spoke to the vendor of the app that uses the DB and, after some back and forth, they've said that I have a bunch of tables that are causing the problem as they are pretty big. They suggest I archive some of the data.

    Anyone know if it's possible to archive data older than a certain date from these tables, or something like that? That way I could run the query every so often to keep the DB more manageable.


Comments

  • Registered Users, Registered Users 2 Posts: 68,317 ✭✭✭✭seamus


    Partitioning is what you're looking for. I don't know the extent of the partitioning functionality in SQL 2000, but I know it was massively extended in SQL 2005, so if you can upgrade the server, I would.

    Partitioning involves splitting a table into separate files based on some criterion (date would be the obvious one), and then you can choose which files to back up. In the date case you obviously wouldn't need to back up the old records very often (if at all), and you can back up the new records on a standard schedule. If your old records are *never* updated, you can even make the old partitions' filegroup read-only and exclude it from the full database backups, which should make the process a good deal quicker.

    For bonus points you can locate the older (and therefore less often accessed) partitions on slower discs without fault tolerance and the heavily-used partitions on faster RAID volumes. This means, for example, that if you have 4 x 50GB partitions, but only one of these is under heavy use, then you don't need to consume 200GB of RAID space to house the table; you can shift 150GB off to older storage and stick the heavy-use stuff on the RAID block.
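
    In case it helps, here's a very rough sketch of what the DDL looks like in SQL 2005 (it won't run on SQL 2000, and the table, column and filegroup names are made up for illustration; the filegroups are assumed to exist already):

        -- Sketch only: SQL 2005+ partitioning on a date column.
        -- Orders, OrderDate, FG_Archive, FG_Current and MyDB are made-up names.

        -- Two boundary dates = three partitions: pre-2009, 2009, and 2010 onwards.
        CREATE PARTITION FUNCTION pfOrderDate (datetime)
        AS RANGE RIGHT FOR VALUES ('20090101', '20100101');

        -- Map the partitions to filegroups: everything before 2010 on the
        -- cheap/slow disks, current data on the fast RAID volume.
        CREATE PARTITION SCHEME psOrderDate
        AS PARTITION pfOrderDate TO (FG_Archive, FG_Archive, FG_Current);

        -- Create (or rebuild) the table on the partition scheme. The
        -- partitioning column has to be part of the clustered key.
        CREATE TABLE dbo.Orders
        (
            OrderID   int      NOT NULL,
            OrderDate datetime NOT NULL,
            CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID, OrderDate)
        ) ON psOrderDate (OrderDate);

        -- If the old data never changes, make its filegroup read-only and
        -- just back up the filegroup that still changes.
        ALTER DATABASE MyDB MODIFY FILEGROUP FG_Archive READ_ONLY;
        BACKUP DATABASE MyDB FILEGROUP = 'FG_Current'
            TO DISK = 'D:\Backups\MyDB_Current.bak';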


  • Registered Users, Registered Users 2 Posts: 1,378 ✭✭✭halkar


    As you are already doing backups, restore the latest backup somewhere else to make sure it's a good one, and archive the backup files somewhere safe (tape backup or external drives etc.). Once you are happy with your copy of the database backup, you can create a SQL Agent job to delete data older than X days, months, etc. You can delete a certain amount of data at a time (using SET ROWCOUNT) and schedule the job to run every 5-10-15 minutes during out-of-office or quiet hours.
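
    Something along these lines should do it (just a rough sketch; dbo.OrderHistory and OrderDate are made-up names, and the archived backup is your copy of the old data, so test it somewhere safe first):

        -- Sketch of a batched purge for SQL 2000, run as a SQL Agent job step.
        -- dbo.OrderHistory and OrderDate are made-up names for illustration.

        DECLARE @cutoff datetime;
        SET @cutoff = DATEADD(month, -24, GETDATE());   -- e.g. keep the last 2 years

        SET ROWCOUNT 5000;         -- delete at most 5000 rows per pass

        WHILE 1 = 1
        BEGIN
            DELETE FROM dbo.OrderHistory
            WHERE OrderDate < @cutoff;

            IF @@ROWCOUNT = 0 BREAK;   -- nothing left older than the cutoff
        END

        SET ROWCOUNT 0;            -- switch batching back off

    Each pass is a small transaction of its own, so the job can run in short bursts during quiet hours without holding long locks.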

    You need to watch the log growth, as delete operations are logged. If the DB is in the full recovery model you need to back up the transaction log often during this maintenance to avoid running out of space. You will also need to rebuild the indexes (and update statistics) once you start archiving.
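
    For example (database name, path and table are placeholders):

        -- Keep the transaction log under control between delete batches
        -- (full recovery model). MyDB and the path are placeholders.
        BACKUP LOG MyDB TO DISK = 'E:\Backups\MyDB_log.trn';

        -- Once the purge is done, rebuild the indexes on the table you
        -- trimmed (SQL 2000 syntax; dbo.OrderHistory as above).
        DBCC DBREINDEX ('dbo.OrderHistory');
        UPDATE STATISTICS dbo.OrderHistory;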

    The CPU usage may or may not be related to the backup operations, depending on the server's spec. Check the end-of-day process and see if there are any badly performing queries, stored procedures, etc. You'd be surprised how much improvement can be made with simple changes :)

