GZipStream memory stream to file

Asked: 2016-05-09 14:11:10

Tags: c# filestream memorystream gzipstream

I'm trying to gzip-compress JSON files to send to another location. The process needs to handle 5,000 to 10,000 files daily, and I don't need compressed versions of the files on the local machine (they are actually transferred to AWS S3 for long-term archiving).

Since I don't need the local copies, I'm trying to compress to a memory stream and then use that to write to AWS, rather than compressing each file to disk. Whenever I try this, the files come out corrupted (for example, when I open one in 7-Zip and try to open the JSON file inside, I get "Data error: file is broken").

The same thing happens when I try to write the memory stream to a local file, so that's the problem I'm trying to solve for now. Here's the code:

string[] files = Directory.GetFiles(@"C:\JSON_Logs");

foreach(string file in files)
{
    FileInfo fileToCompress = new FileInfo(file);
    using (FileStream originalFileStream = fileToCompress.OpenRead())
    {
        using (MemoryStream compressedMemStream = new MemoryStream())
        {
            using (GZipStream compressionStream = new GZipStream(compressedMemStream, CompressionMode.Compress))
            {
                originalFileStream.CopyTo(compressionStream);
                compressedMemStream.Seek(0, SeekOrigin.Begin);
                FileStream compressedFileStream = File.Create(fileToCompress.FullName + ".gz");

                //Eventually this will be the AWS transfer, but that's not important here
                compressedMemStream.WriteTo(compressedFileStream); 
            }
        }
    }      
}

1 Answer:

Answer 0 (score: 4)

Rearrange the using statements so that the GZipStream is definitely finished (disposed, and therefore flushed) by the time you read the memory stream's contents:

foreach(string file in files)
{
    FileInfo fileToCompress = new FileInfo(file);
    using (MemoryStream compressedMemStream = new MemoryStream())
    {
        using (FileStream originalFileStream = fileToCompress.OpenRead())
        using (GZipStream compressionStream = new GZipStream(
            compressedMemStream, 
            CompressionMode.Compress,
            leaveOpen: true))
        {
            originalFileStream.CopyTo(compressionStream);
        }
        // The GZipStream is now disposed, so the gzip footer has been
        // written and the MemoryStream holds a complete archive.
        compressedMemStream.Seek(0, SeekOrigin.Begin);

        // Eventually this will be the AWS transfer, but that's not important here
        using (FileStream compressedFileStream = File.Create(fileToCompress.FullName + ".gz"))
        {
            compressedMemStream.WriteTo(compressedFileStream);
        }
    }
}

Disposing a stream takes care of flushing and closing it. Until the GZipStream is disposed, the final compressed block and the gzip footer have not been written to the underlying MemoryStream, which is why the original code produced corrupted archives. The `leaveOpen: true` argument keeps the MemoryStream usable after the GZipStream is disposed.
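A small round-trip check makes the flushing behavior easy to verify: compress a byte array into a MemoryStream, then decompress it and compare against the original. This is a minimal self-contained sketch using only the same System.IO.Compression types as the answer above (the class and method names here are illustrative, not from the original post):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class GzipRoundTrip
{
    static byte[] CompressToMemory(byte[] input)
    {
        using (var output = new MemoryStream())
        {
            // leaveOpen: true keeps the MemoryStream usable after the
            // GZipStream is disposed; disposing the GZipStream flushes
            // the final block and writes the gzip footer.
            using (var gzip = new GZipStream(output, CompressionMode.Compress, leaveOpen: true))
            {
                gzip.Write(input, 0, input.Length);
            }
            return output.ToArray();
        }
    }

    static byte[] Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gzip.CopyTo(output);
            return output.ToArray();
        }
    }

    static void Main()
    {
        byte[] original = Encoding.UTF8.GetBytes("{\"example\":\"json\"}");
        byte[] compressed = CompressToMemory(original);
        byte[] restored = Decompress(compressed);

        if (Encoding.UTF8.GetString(restored) != "{\"example\":\"json\"}")
            throw new Exception("round-trip failed");
        Console.WriteLine("round-trip OK");
    }
}
```

If `CompressToMemory` read `output.ToArray()` while the GZipStream was still open, the decompression step would fail with a truncated-stream error, reproducing the "file is broken" symptom from the question.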