How do I delete duplicate lines from a file?

Time: 2009-07-31 22:37:44

Tags: python text file-io

I have a file with one column. How do I delete the duplicate lines in this file?

14 answers:

Answer 0: (score: 57)

On Unix/Linux, use the uniq command, as in David Locke's answer, or sort, as in William Pursell's comment.

If you need a Python script:

lines_seen = set() # holds lines already seen
outfile = open(outfilename, "w")
for line in open(infilename, "r"):
    if line not in lines_seen: # not a duplicate
        outfile.write(line)
        lines_seen.add(line)
outfile.close()

Update: the sort/uniq combination will remove duplicates, but it returns a file with the lines sorted, which may or may not be what you want. The Python script above won't reorder the lines; it only drops the duplicates. Of course, to make the script above sort as well, just leave out the outfile.write(line) and instead, immediately after the loop, do outfile.writelines(sorted(lines_seen)).
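A minimal sketch of that sorted variant, reusing the names from the script above:

lines_seen = set()  # collect every distinct line first
with open(infilename, "r") as infile:
    for line in infile:
        lines_seen.add(line)

# after the loop, write the unique lines out in sorted order
with open(outfilename, "w") as outfile:
    outfile.writelines(sorted(lines_seen))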

Answer 1: (score: 34)

If you're on *nix, try running the following command:

sort <file name> | uniq

Answer 2: (score: 16)

uniqlines = set(open('/tmp/foo').readlines())

That gives you the set of unique lines.

Writing it back out to some file would be as simple as:

bar = open('/tmp/bar', 'w')
bar.writelines(uniqlines)
bar.close()

Answer 3: (score: 6)

It's a rehash of what's already been said here; this is what I use:

import optparse

def removeDups(inputfile, outputfile):
    lines = open(inputfile, 'r').readlines()
    lines_set = set(lines)
    out = open(outputfile, 'w')
    for line in lines_set:
        out.write(line)
    out.close()

def main():
    parser = optparse.OptionParser('usage %prog ' +
                                   '-i <inputfile> -o <outputfile>')
    parser.add_option('-i', dest='inputfile', type='string',
                      help='specify your input file')
    parser.add_option('-o', dest='outputfile', type='string',
                      help='specify your output file')
    (options, args) = parser.parse_args()
    inputfile = options.inputfile
    outputfile = options.outputfile
    if (inputfile is None) or (outputfile is None):
        print(parser.usage)
        exit(1)
    else:
        removeDups(inputfile, outputfile)

if __name__ == '__main__':
    main()

Answer 4: (score: 4)

Get all of your lines in a list, make a set of those lines, and you're done. For example:

>>> x = ["line1","line2","line3","line2","line1"]
>>> list(set(x))
['line3', 'line2', 'line1']
>>>

Then write the contents back out to the file.
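A minimal sketch of that round trip (the file name lines.txt is just a placeholder):

# read the lines, de-duplicate them with a set, and write them back
with open('lines.txt') as f:
    unique_lines = set(f.readlines())

with open('lines.txt', 'w') as f:
    f.writelines(unique_lines)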

Answer 5: (score: 4)

A Python one-liner:

python -c "import sys; lines = sys.stdin.readlines(); print(''.join(sorted(set(lines))))" < InputFile > OutputFile

Answer 6: (score: 4)

You could do:

import os
os.system("awk '!x[$0]++' /path/to/file > /path/to/rem-dups")

Here you're shelling out to awk from Python :)
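If you would rather avoid os.system, a sketch of the same awk call through subprocess (assuming awk is on the PATH and the paths are placeholders):

import subprocess

# run awk, capture its de-duplicated output, and write it to the target file
result = subprocess.run(
    ["awk", "!x[$0]++", "/path/to/file"],
    capture_output=True, text=True, check=True
)
with open("/path/to/rem-dups", "w") as out:
    out.write(result.stdout)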

You also have another way:

with open('/tmp/result.txt') as result:
    uniqlines = set(result.readlines())
    with open('/tmp/rmdup.txt', 'w') as rmdup:
        rmdup.writelines(uniqlines)

Answer 7: (score: 2)

Adding to @David Locke's answer, on *nix systems you can run

sort -u messy_file.txt > clean_file.txt

which creates clean_file.txt with the duplicates removed and the lines sorted alphabetically.

Answer 8: (score: 2)


Have a look at the script I created to remove duplicate emails from a text file. Hope this helps!

# function to remove duplicate emails
def remove_duplicate():
    # open emails.txt and read it as one long string
    with open('emails.txt', 'r') as f:
        emails = f.read()
    # .split() breaks the string on whitespace and returns a list
    emails = emails.split()
    # empty list to store non-duplicate e-mails
    clean_list = []
    # append each email to clean_list only the first time it is seen
    for email in emails:
        if email not in clean_list:
            clean_list.append(email)
    return clean_list

# open no_duplicate_emails.txt for writing
no_duplicate_emails = open('no_duplicate_emails.txt', 'w')

# write each unique email on its own line
for email in remove_duplicate():
    # .strip(',') removes leading/trailing commas
    email = email.strip(',')
    no_duplicate_emails.write(f"E-mail: {email}\n")
# close no_duplicate_emails.txt file
no_duplicate_emails.close()

Answer 9: (score: 1)

Here is my solution:

if __name__ == '__main__':
    # temp.txt holds the lines seen so far; w+ lets us read it back
    f = open('temp.txt', 'w+')
    flag = False
    with open('file.txt') as fp:
        for line in fp:
            # scan temp.txt for an existing copy of this line
            for temp in f:
                if temp == line:
                    flag = True
                    print('Found Match')
                    break
            if flag == False:
                f.write(line)
            elif flag == True:
                flag = False
            # rewind temp.txt so the next scan starts from the top
            f.seek(0)
    f.close()

Answer 10: (score: 1)

In case anyone is looking for a solution that uses hashing and is a bit flashier, this is what I currently use:

import os

def remove_duplicate_lines(input_path, output_path):

    if os.path.isfile(output_path):
        raise OSError('File at {} (output file location) exists.'.format(output_path))

    with open(input_path, 'r') as input_file, open(output_path, 'w') as output_file:
        seen_lines = set()

        def add_line(line):
            seen_lines.add(line)
            return line

        output_file.writelines((add_line(line) for line in input_file
                                if line not in seen_lines))

This function isn't terribly efficient, since the hash is computed twice; however, I'm pretty sure the value gets cached.
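A sketch of one way to sidestep the double lookup, detecting first occurrences from the set's size change (an illustrative variation, not a drop-in replacement):

def remove_duplicate_lines_single_hash(input_path, output_path):
    with open(input_path, 'r') as input_file, open(output_path, 'w') as output_file:
        seen_lines = set()

        def first_occurrence(line):
            # add() hashes the line once; the set only grows for unseen lines
            before = len(seen_lines)
            seen_lines.add(line)
            return len(seen_lines) > before

        output_file.writelines(line for line in input_file
                               if first_occurrence(line))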

Answer 11: (score: 1)

To edit it within the same file:

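A minimal sketch of that in-place approach, assuming the file is file.txt: open it in r+ mode, rewrite only the first occurrence of each line, then truncate the leftovers.

lines_seen = set()  # holds lines already seen

with open('file.txt', 'r+') as f:
    lines = f.readlines()
    f.seek(0)
    for line in lines:
        if line not in lines_seen:
            f.write(line)
            lines_seen.add(line)
    f.truncate()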

Answer 12: (score: 0)

Readable and concise:

with open('sample.txt') as fl:
    content = fl.read().split('\n')

content = set([line for line in content if line != ''])

content = '\n'.join(content)

with open('sample.txt', 'w') as fl:
    fl.writelines(content)

Answer 13: (score: 0)

cat <filename> | grep -E '^[a-zA-Z]+$' | sort -u > outfile.txt

This filters out non-alphabetic lines and removes the duplicate values from the file.