
fsck output question

  • 12-02-2006 05:14PM
    #1
    Registered Users Posts: 37,485 Khannie
    ✭✭✭✭


    Ok, for a variety of reasons, I thought my hdd might have gone tits up. I wiped the machine, reinstalled Ubuntu, then ran fsck from Knoppix. Here's the output.
    root@0[ramdisk]# e2fsck -c /dev/hda1
    e2fsck 1.38 (30-Jun-2005)
    Checking for bad blocks (read-only test): done                        934
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    
    /: ***** FILE SYSTEM WAS MODIFIED *****
    /: 80283/19398656 files (0.7% non-contiguous), 1097549/38788934 blocks
    

    Does the 934 mean there were 934 bad blocks? :(
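    A direct way to check, sketched below assuming the partition is still /dev/hda1: dumpe2fs can print the bad-block list that e2fsck -c records in the filesystem, and badblocks can re-run the read-only scan on its own (slow on a big disk, as above).

    # List the blocks recorded in the filesystem's bad-blocks inode;
    # empty output means no blocks are currently marked bad.
    dumpe2fs -b /dev/hda1

    # Or repeat the read-only surface scan directly; -s shows progress,
    # -v prints an error summary when it finishes.
    badblocks -sv /dev/hda1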



Comments

  • Registered Users Posts: 6,762 WizZard
    ✭✭✭


    The -v option (verbose) might give you a better clue. I tried it on my /boot and the number was 128, which doesn't correspond to anything I remember setting for that partition.
    I use ReiserFS on my data partitions, so I can't try it on them.
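    For what it's worth, a sketch of that verbose run, assuming an unmounted ext2/ext3 partition; adding -n keeps the check read-only (it answers "no" to every repair prompt), so it's safe to poke at:

    # Read-only verbose check: -n refuses all repairs, -v prints
    # summary statistics (inode and block counts) when it finishes.
    e2fsck -n -v /dev/hda1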


  • Registered Users Posts: 37,485 Khannie
    ✭✭✭✭


    The whole process took about 16 hours to run on a 160GB hdd, so there'll be no running it again. ;)

    Everything seems to be ok for now. The system was very unstable before, but I'm hoping that's because I was using ReiserFS. The new install is running on Ext3, but I have had one system crash since my first post.

    The hard drive is about 16 months old, but I'd say it's seen less than 500 hours use in total.


  • Closed Accounts Posts: 210 deimos
    ✭✭


    Have a look at the smartmontools package; if it's already installed, it'll be "smartctl" at the prompt. It reads back the SMART data from the hard disk (most new drives store a rake of data on errors, uptime and a variety of other things). The program lets you make a guesstimate of how much longer your hdd will last and all that.


    It might be of interest to you.
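    A minimal sketch of the smartctl usage being described, assuming the drive is /dev/hda and smartmontools is installed:

    # Quick overall health verdict (PASSED or FAILED).
    smartctl -H /dev/hda

    # Full report: SMART attributes (reallocated sectors, power-on
    # hours, etc.) plus the drive's error and self-test logs.
    smartctl -a /dev/hda

    # Kick off the drive's built-in short self-test (a few minutes),
    # then read the result out of the self-test log afterwards.
    smartctl -t short /dev/hda
    smartctl -l selftest /dev/hda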


  • Registered Users Posts: 37,485 Khannie
    ✭✭✭✭


    Nice one. Thanks. The hdd does have SMART. I'll give that a bash when I get home.

