Nullable member

Time: 2017-09-27 20:11:27

Tags: kotlin

Suppose we define a member variable as

private var foo: Foo? = null

We want to initialize it when calling a method that takes a parameter (which is needed to construct a Foo). Is there a better way to do this?

fun generateFoo(bar: Bar): Foo {
    var localFoo = foo
    if (localFoo == null) {
        localFoo = Foo(bar)
        foo = localFoo
    }
    return localFoo
}

I'm trying to avoid all of the variable assignments.

Edit: here is a slightly shorter version, but it's still not ideal:

fun generateFoo(bar: Bar): Foo {
    val localFoo = foo ?: Foo(bar)
    foo = localFoo
    return localFoo
}
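For reference, the pattern above can be wrapped in a minimal runnable harness (Foo, Bar, and the enclosing Holder class here are hypothetical stand-ins, since the question doesn't show them) to confirm that a second call returns the cached instance:

```kotlin
// Hypothetical stand-ins for the types in the question.
class Bar
class Foo(val bar: Bar)

class Holder {
    private var foo: Foo? = null

    // Cache-or-create: reuse the stored Foo, otherwise build one from bar.
    fun generateFoo(bar: Bar): Foo {
        val localFoo = foo ?: Foo(bar)
        foo = localFoo
        return localFoo
    }
}

fun main() {
    val holder = Holder()
    val first = holder.generateFoo(Bar())
    // Second call reuses the cached instance, so reference equality holds.
    println(first === holder.generateFoo(Bar())) // prints "true"
}
```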

1 answer:

Answer 0 (score: 3)

What you have is safe, unless you have multiple threads hitting your class.


But if you like, you can do something like this, depending on whether you find it more readable than the longer version you already have:

fun generateFoo(bar: Bar): Foo {
    if (foo == null) {
        foo = Foo(bar)
    }
    // foo is a mutable property, so the compiler can't smart-cast it here;
    // !! is safe as long as only one thread touches this class.
    return foo!!
}
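A common Kotlin idiom (a sketch, not part of the original answer; Foo, Bar, and the Holder class are hypothetical stand-ins) collapses this into a single expression with the elvis operator and `also`, and `@Synchronized` can cover the multi-threaded case mentioned above:

```kotlin
// Hypothetical stand-ins for the types in the question.
class Bar
class Foo(val bar: Bar)

class Holder {
    private var foo: Foo? = null

    // Return the cached Foo, or create one and cache it as a side effect
    // of the elvis branch. @Synchronized guards concurrent callers so at
    // most one instance is ever stored.
    @Synchronized
    fun generateFoo(bar: Bar): Foo =
        foo ?: Foo(bar).also { foo = it }
}

fun main() {
    val holder = Holder()
    val first = holder.generateFoo(Bar())
    // The second call returns the same cached instance.
    println(first === holder.generateFoo(Bar())) // prints "true"
}
```

This avoids both the local variable and the `!!`, since the elvis expression itself is already non-null.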