Merging many TSV files by the first column

Date: 2016-11-28 16:12:59

Tags: bash perl awk

I have many (dozens of) two-column TSV files in a directory, and I want to merge them all on the value of the first column (both columns have headers that I need to keep). If a key value already exists, the value from the corresponding second column must be added next to it, and so on (see the example). The files may have different numbers of rows and are not sorted by the first column, although that is easily done with sort.

I tried join, but it only works on two files at a time. Can join be extended to all the files in a directory? (A chained-join sketch follows the example below.) I think awk is probably the better solution, but my awk knowledge is very limited. Any ideas?

Here is an example with three files:

S01.tsv

Accesion    S01  
AJ863320    1  
AM930424    1  
AY664038    2

S02.tsv

Accesion    S02  
AJ863320    2  
AM930424    1  
EU236327    1  
EU434346    2 

S03.tsv

Accesion    S03  
AJ863320    5  
EU236327    2  
EU434346    2

The output file should be:

    Accesion    S01   S02   S03  
    AJ863320    1     2     5  
    AM930424    1     1
    AY664038    2  
    EU236327          1     2  
    EU434346          2     2
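
As a side note on the join question above: join can be chained over a whole file list with a shell loop. A rough, untested sketch, assuming GNU coreutils join (its -e '' / -o auto options pad missing fields) and byte-order sorting (LC_ALL=C), since join needs consistently sorted input:

#!/usr/bin/env bash
# untested sketch: fold GNU join over all files, two at a time
export LC_ALL=C                        # byte order for sort and join
files=(S01.tsv S02.tsv S03.tsv)        # or: files=(*.tsv)
out=$(mktemp) tmp=$(mktemp)
sort -k1,1 "${files[0]}" > "$out"
for f in "${files[@]:1}"; do
    join -t $'\t' -a1 -a2 -e '' -o auto \
         "$out" <(sort -k1,1 "$f") > "$tmp"
    mv "$tmp" "$out"
done
# sorting puts the header row mid-file; pull it back to the top
{ grep -m1 '^Accesion' "$out"; grep -v '^Accesion' "$out"; }
rm -f "$out"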

OK, thanks to James Brown I got this code (I named it compile.awk), but it has a small problem:

BEGIN { OFS="\t" }                            # tab separated columns
FNR==1 { f++ }                                # counter of files
{
    a[0][$1]=$1                               # store the key for this record
    for(i=2;i<=NF;i++)                        # for each non-key element
        a[f][$1]=a[f][$1] $i ( i==NF?"":OFS ) # combine them to array element
}
END {                                         # in the end
    for(i in a[0])                            # go thru every key
        for(j=0;j<=f;j++)                     # and all related array elements
            printf "%s%s", a[j][i], (j==f?ORS:OFS)
}                                             # output them, nonexistent will output empty

When I run it on my actual files with

awk -f compile.awk 01.tsv 02.tsv 03.tsv

I get this output:

LN854586.1.1236         1
JF128382.1.1303     1   
Accesion    S01 S02 S03
JN233077.1.1420 1       
HQ836180.1.1388     1   
KP718814.1.1338         1
JQ781640.1.1200         2

The first two lines do not belong there: the output should start with the combined header line of all files (here the third line). Any idea how to fix this?
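
For reference, one possible fix (an untested sketch; note that the a[0][$1] arrays-of-arrays syntax already requires GNU awk): the for (i in a[0]) loop in END walks the keys in unspecified order, so the header row has to be printed explicitly before the loop and skipped inside it.

gawk '
BEGIN  { OFS="\t" }
FNR==1 { f++ }                                # counter of files
NR==1  { hdr=$1 }                             # key of the header row
{
    a[0][$1]=$1
    for(i=2;i<=NF;i++)
        a[f][$1]=a[f][$1] $i (i==NF?"":OFS)
}
END {
    for(j=0;j<=f;j++)                         # header row first
        printf "%s%s", a[j][hdr], (j==f?ORS:OFS)
    for(i in a[0]) {
        if (i==hdr) continue                  # already printed above
        for(j=0;j<=f;j++)
            printf "%s%s", a[j][i], (j==f?ORS:OFS)
    }
}' S01.tsv S02.tsv S03.tsv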

1 Answer:

Answer 0 (score: 2):

I would probably approach it like this:

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

my @header; 
my %all_rows;
my %seen_cols;


#read STDIN or files specified as args. 
while ( <> ) {
   #detect a header row by keyword. 
   #can probably do this after 'open' but this way
   #means we can use <> and an arbitrary file list. 
   if ( m/^Accesion/ ) { 
      @header = split;       
      shift @header; #drop "accession" off the list so it's just S01,02,03 etc. 
      $seen_cols{$_}++ for @header; #keep track of uniques. 
   }
   else {
      #not a header row - split the row on whitespace.
      #can do /\t/ if that's not good enough, but it looks like it should be. 
      my ( $ID, @fields ) = split; 
      #use hash slice to populate row.

      my %this_row;
      @this_row{@header} = @fields;

      #debugging
      print Dumper \%this_row; 

      #push each field onto the all rows hash. 
      foreach my $column ( @header ) {
         #append current to field, in case there's duplicates (no overwriting)
         $all_rows{$ID}{$column} .= $this_row{$column}; 
      }
   }
}

#print for debugging
print Dumper \%all_rows;
print Dumper \%seen_cols;

#grab list of column headings we've seen, and order them. 
my @cols_to_print = sort keys %seen_cols;

#print header row.
print join( "\t", "Accesion", @cols_to_print ), "\n";
#iterate the keys and print one row per accession.
foreach my $key ( sort keys %all_rows ) {
    #map visits every column and gives its value, or an empty string
    #if it's undefined (prevents warnings on missing cells).
    print join( "\t", $key, map { $all_rows{$key}{$_} // '' } @cols_to_print ), "\n";
}

Given your input - debugging output excluded - this prints:

Accesion    S01 S02 S03 
AJ863320    1   2   5   
AM930424    1   1       
AY664038    2           
EU236327        1   2   
EU434346        2   2   
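
Since the script reads via <>, it takes STDIN or any list of file arguments; a hypothetical invocation (merge_tsv.pl is a made-up name, and you would comment out the Dumper debug lines first):

perl merge_tsv.pl S01.tsv S02.tsv S03.tsv > merged.tsv
# or over the whole directory:
perl merge_tsv.pl *.tsv > merged.tsv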