Feeding lines of a text file into a script for processing

  • 27-05-2007 10:20pm
    #1
    Registered Users Posts: 2,078 ✭✭✭ theCzar


    I have a file where each line contains some info, say six pieces, separated by spaces: a mix of alphanumeric strings and integers.

    I'm writing a script that will, based on its command line arguments, create a new file.

    I want to input the first file (say with n lines) directly into my script, which will produce n new files, each based on a line of the input file.

    I thought I could use xargs in some fashion to take the input file and feed it line by line into the script for processing, but now I'm not so sure :/
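
    Something like this is what I had in mind, with ./myscript.sh just standing in for whatever the script ends up being called:

    # run the script once per input line; the unquoted $line splits
    # into the six space-separated fields, passed as separate arguments
    while read -r line; do
        ./myscript.sh $line
    done < input.txt

    # or, much the same thing, with xargs invoking it once per line
    xargs -L 1 ./myscript.sh < input.txt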

    Any suggestions?


Comments

  • Registered Users Posts: 37,485 ✭✭✭✭ Khannie


    I'm not sure I entirely understand you, but you should look at cut. It can extract information by column.
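
    For example, to pull the third space-separated field out of every line (input.txt is just a stand-in name here):

    # print the 3rd space-delimited field of each line
    cut -d ' ' -f 3 input.txt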


  • Registered Users Posts: 1,287 ✭✭✭ joe_chicken


    That sounds like you could use awk?
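
    Something like this, maybe ("out" is just a made-up prefix; NR is awk's record number, so every line lands in its own file):

    awk '{ print > ("out" NR) }' input.txt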


  • Closed Accounts Posts: 97 ✭✭ koloughlin


    Here's a little piece of perl to do (I think) what you want. It's pretty bare-bones, just the essentials. Pass it the input file name and the output file name root as command line arguments. For example, if you execute it as

    ./split.pl test.txt out

    and test.txt has four lines, then you will get four files: out1, out2, out3 and out4.

    I'm sure it's possible in awk too, but I find perl a little faster and more robust, so I generally stick with it.
    #!/usr/bin/perl -w
    
    use strict;
    
    # Expect two arguments: the input file name and the root for output file names
    if (@ARGV < 2) {
       print "usage is $0 infile outfileroot\n";
       exit 1;
    }
    
    my $infile = shift(@ARGV);
    my $outfileroot = shift(@ARGV);
    
    open (FILEIN, $infile) || die "Couldn't open $infile for read: $!";
    
    # Copy each line of the input into its own numbered output file
    my $linenum = 0;
    while (<FILEIN>) {
       $linenum++;
       my $outfile = $outfileroot . $linenum;
       open (FILEOUT, ">$outfile") || die "Couldn't open $outfile for write: $!";
       print FILEOUT;   # with no argument, print writes the current line ($_)
       close (FILEOUT);
    }
    close FILEIN;
    


  • Registered Users Posts: 2,078 ✭✭✭ theCzar


    I used awk in the end, hampered initially by the fact that I had never coded in awk before yesterday morning, but it was similar enough to C that it was easy to pick up. It's a simple program in the end: just a short for loop with a printf statement rigged to go for every record in the input file. ;)
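
    The gist of it was something like this (the exact fields and file names depend on the client's spec, so this is just the shape of it):

    # for every record, printf each field into a file named after the record number
    awk '{
        out = "file" NR
        for (i = 1; i <= NF; i++)
            printf "%s\n", $i > out
        close(out)
    }' input.txt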

    Now if the client who wanted this stupidly verbose bunch of files would only clarify the final formatting, we'd be away. :rolleyes:

