Out of memory error in my program

Asked: 2017-10-02 11:40:45

Tags: java arraylist java-8 out-of-memory

I wrote a program that does some data processing on a list of objects (at most 800 of them). The work on this list mainly consists of:

  1. Many SQL queries
  2. Processing the queried data
  3. Grouping and matching the data
  4. Writing the results to CSV files

All of this was running fine, but as the data-processing part and the size of the SQL data kept growing, the program started running out of memory and crashing frequently.

To avoid this, I decided to split the big list into several smaller chunks and then do the same work on each of these smaller lists (clearing and nulling the current small list before moving on to the next one), hoping that would solve the problem. But it did not help at all; the program still runs out of memory.

The program does not run out of memory on the first iteration of the for loop, but on the second or third one or so.

Am I clearing and nulling all the lists and objects in the for loop correctly, so that the memory is free to use on the next iteration?

How do I solve this problem? I have put my code below.

Any suggestions/solutions are much appreciated.

Thanks in advance. Cheers!

    List<someObject> unchoppedList = new ArrayList<someObject>();
    for (String pb : listOfNames) {
        someObject tccw = null;
        tccw = new someObject(...);
        unchoppedList.add(tccw);
    }
    Collections.shuffle(unchoppedList);
    List<List<someObject>> master = null;
    if (unchoppedList.size() > 0 && unchoppedList.size() <= 175) {
        master = chopped(unchoppedList, 1);
    } else if (unchoppedList.size() > 175 && unchoppedList.size() <= 355) {
        master = chopped(unchoppedList, 2);
    } else if (unchoppedList.size() > 355 && unchoppedList.size() <= 535) {
        master = chopped(unchoppedList, 3);
    } else if (unchoppedList.size() > 535 && unchoppedList.size() <= 800) {
        master = chopped(unchoppedList, 4);
    }
    
    for (int i = 0 ; i < master.size() ; i++) {
        List<someObject> m = master.get(i);
        System.gc(); // I inserted this statement to force GC
        executor1 = Executors.newFixedThreadPool(Configuration.getNumberOfProcessors());
        generalList = new ArrayList<ProductBean>();
        try {
            m.parallelStream().forEach(work -> {
                try {
                    generalList.addAll(executor1.submit(work).get());
                    work = null;
                } catch (Exception e) {
                    logError(e);
                }
            });
        } catch (Exception e) {
            logError(e);
        }
        executor1.shutdown();
        executor1.awaitTermination(30, TimeUnit.SECONDS);
        m.clear();
        m = null;
        executor1 = null;
    
        //once the general list is produced the program randomly matches some "good" products to highly similar "not-so-good" products
        List<ProductBean> controlList = new ArrayList<ProductBean>();
        List<ProductBean> tempKaseList = new ArrayList<ProductBean>();
        for (ProductBean kase : generalList) {
            if (kase.getGoodStatus() == 0 && kase.getBadStatus() == 1) {
            controlList.add(kase);
        } else if (kase.getGoodStatus() == 1 && kase.getBadStatus() == 0) {
            tempKaseList.add(kase);
            }
        }
        generalList = new ArrayList<ProductBean>(tempKaseList);
        tempKaseList.clear();
        tempKaseList = null;
    
        Collections.shuffle(generalList);
        Collections.shuffle(controlList);
        final List<List<ProductBean>> compliCases = chopped(generalList, 3);
        final List<List<ProductBean>> compliControls = chopped(controlList, 3);
        generalList.clear();
        controlList.clear();
        generalList = null;
        controlList = null;
    
        final List<ProductBean> remainingCases = Collections.synchronizedList(new ArrayList<ProductBean>());
        IntStream.range(0, compliCases.size()).parallel().forEach(i -> {
            compliCases.get(i).forEach(c -> {
                TheRandomMatchWorker tRMW = new TheRandomMatchWorker(compliControls.get(i), c);
                List<String[]> reportData = tRMW.generateReport();
                writeToCSVFile(reportData);
                // if the program cannot find required number of products to match it is added to a new list to look for matching candidates elsewhere
                if (tRMW.getTheKase().isEverythingMathced == false) {
                    remainingCases.add(tRMW.getTheKase());
                }
                compliControls.get(i).removeAll(tRMW.getTheMatchedControls());
                tRMW = null;
                stuff.clear();
            });
        });
    
        controlList = new ArrayList<ProductBean>();
        for (List<ProductBean> c10 : compliControls) {
            controlList.addAll(c10);
        }
        compliCases.clear();
        compliControls.clear();
    
    //last sweep where the program tries one last time to match some "good" products to highly similar "not-so-good" products
        try {
            for (ProductBean kase : remainingCases) {
                if (kase.getNoOfContrls() < ccv.getNoofctrl()) {
                    TheRandomMatchWorker tRMW = new TheRandomMatchWorker(controlList, kase );
                    List<String[]> reportData = tRMW.generateReport();
                    writeToCSVFile(reportData);
                    if (tRMW.getTheKase().isEverythingMathced == false) {
                        remainingCases.add(tRMW.getTheKase());
                    }
                    compliControls.get(i).removeAll(tRMW.getTheMatchedControls());
                    tRMW = null;
                    stuff.clear();
                }
            }
        } catch (Exception e) {
            logError(e);
        }
    
        remainingCases.clear();
        controlList.clear();
        controlList = null;
        master.get(i).clear();
        master.set(i, null);
        System.gc();
    }
    master.clear();
    master = null;
    

Here is the chopped method:

    static <T> List<List<T>> chopped(List<T> list, final int L) {
        List<List<T>> parts = new ArrayList<List<T>>();
        final int N = list.size();
        // y = base size of each part; [m, c) marks the current slice
        int y = N / L, m = 0, c = y;
        // r = number of elements covered by L equal-sized parts
        int r = c * L;
        for (int i = 1; i <= L; i++) {
            if (i == L) {
                // the last part absorbs the remaining N - r elements
                c += (N - r);
            }
            parts.add(new ArrayList<T>(list.subList(m, c)));
            m = c;
            c += y;
        }
        return parts;
    }
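
For illustration, a hypothetical call dividing a five-element list into two parts shows how the last part absorbs the remainder:

    List<List<Integer>> parts = chopped(Arrays.asList(1, 2, 3, 4, 5), 2);
    // parts is [[1, 2], [3, 4, 5]]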
    

Here is the requested stack trace:

    java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at Controller.MasterStudyController.lambda$1(MasterStudyController.java:212)
        at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
        at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
        at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
        at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
        at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
        at java.util.concurrent.ForkJoinPool$WorkQueue.execLocalTasks(ForkJoinPool.java:1040)
        at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1058)
        at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
        at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
    Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
        at org.postgresql.core.Encoding.decode(Encoding.java:204)
        at org.postgresql.core.Encoding.decode(Encoding.java:215)
        at org.postgresql.jdbc.PgResultSet.getString(PgResultSet.java:1913)
        at org.postgresql.jdbc.PgResultSet.getString(PgResultSet.java:2484)
        at Controller.someObject.findControls(someObject.java:214)
        at Controller.someObject.call(someObject.java:81)
        at Controller.someObject.call(someObject.java:1)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    [19:13:35][ERROR] Jarvis: Exception:
    java.util.concurrent.ExecutionException: java.lang.AssertionError: Failed generating bytecode for <eval>:-1
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at Controller.MasterStudyController.lambda$1(MasterStudyController.java:212)
        at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
        at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
        at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
        at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:291)
        at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
        at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
        at java.util.concurrent.ForkJoinPool$WorkQueue.execLocalTasks(ForkJoinPool.java:1040)
        at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1058)
        at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
        at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
    Caused by: java.lang.AssertionError: Failed generating bytecode for <eval>:-1
        at jdk.nashorn.internal.codegen.CompilationPhase$BytecodeGenerationPhase.transform(CompilationPhase.java:431)
        at jdk.nashorn.internal.codegen.CompilationPhase.apply(CompilationPhase.java:624)
        at jdk.nashorn.internal.codegen.Compiler.compile(Compiler.java:655)
        at jdk.nashorn.internal.runtime.Context.compile(Context.java:1317)
        at jdk.nashorn.internal.runtime.Context.compileScript(Context.java:1251)
        at jdk.nashorn.internal.runtime.Context.compileScript(Context.java:627)
        at jdk.nashorn.api.scripting.NashornScriptEngine.compileImpl(NashornScriptEngine.java:535)
        at jdk.nashorn.api.scripting.NashornScriptEngine.compileImpl(NashornScriptEngine.java:524)
        at jdk.nashorn.api.scripting.NashornScriptEngine.evalImpl(NashornScriptEngine.java:402)
        at jdk.nashorn.api.scripting.NashornScriptEngine.eval(NashornScriptEngine.java:155)
        at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:264)
        at Controller.someObject.findCases(someObject.java:108)
        at Controller.someObject.call(someObject.java:72)
        at Controller.someObject.call(someObject.java:1)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
    [19:13:52][ERROR] Jarvis: Exception:
    [19:51:41][ERROR] Jarvis: Exception:
    org.postgresql.util.PSQLException: Ran out of memory retrieving query results.
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2157)
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:300)
        at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428)
        at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354)
        at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:169)
        at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:117)
        at Controller.someObject.lookForSomething(someObject.java:763)
        at Controller.someObject.call(someObject.java:70)
        at Controller.someObject.call(someObject.java:1)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
    

1 Answer:

Answer 0 (score: 1)

OK, 48 GB of memory for the JVM is quite a lot (I assume you are talking about heap space, i.e. -Xmx48G). We are clearly dealing with large data sets here, which of course complicates things, since it is not easy to create a minimal reproducible example.

The first thing I would try is to get a better understanding of what is consuming all that memory. You can have Java produce a heap dump when it runs out of memory by using the following options:

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp

When the program crashes with an OutOfMemoryError, this should create a java_xxxxxx.hprof file in /tmp.
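
If the process is still running, you can also trigger a heap dump manually, for example with the jmap tool that ships with the JDK (the dump path and the pid below are placeholders):

    jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>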

You can then try to analyze this dump with a tool, although its sheer size will pose a challenge. Simply trying to open it in MAT, for example, will most likely not work, but there are ways to run parts of the analysis on the command line, possibly remotely on a powerful server.

There are some articles describing the analysis of large heap dumps:

In short, the instructions boil down to:

  • Download and install MAT
  • Configure MAT's memory settings according to what is available during the analysis (obviously, more is better)
  • The installation should contain a ParseHeapDump.sh script that you can use to run parts of the analysis and prepare index/report files. Note that this will, of course, take a long time.

    ./ParseHeapDump.sh /path/to/your.hprof
    ./ParseHeapDump.sh /path/to/your.hprof org.eclipse.mat.api:suspects
    ./ParseHeapDump.sh /path/to/your.hprof org.eclipse.mat.api:overview
    ./ParseHeapDump.sh /path/to/your.hprof org.eclipse.mat.api:top_components
    

You should then be able to open the generated reports with MAT and hopefully do something useful with them.

In your comments you said that most of the memory is being used by the list of someObjects, and you suspect they are not being freed.

Going by the code you posted, the someObject instances are not freed because they are still reachable through unchoppedList: that list is never cleared in the code you posted, so the clear() calls and null assignments inside the loop have almost no effect on the memory used, since all of those objects are still referenced elsewhere.

Therefore, the solution might be as simple as adding an unchoppedList.clear(); call once the master list has been filled:

    List<List<someObject>> master = null;
    // let's also get rid of the hardcoded numbers of sublists
    int maxListSize = 175;
    // rounded-up integer division
    int nbSublists = (unchoppedList.size() + maxListSize - 1) / maxListSize;
    master = chopped(unchoppedList, nbSublists);
    // important: clear the unchoppedList so it doesn't keep references to *all* someObject instances
    unchoppedList.clear();

In response to the other comments about the non-thread-safe use of ArrayList, I have to agree with the others that this is generally a bad idea.

To fix the most obvious problem: I don't even see a good reason to use parallelStream when submitting the work to the executor. Using a normal sequential stream would ensure this is thread-safe again (thereby eliminating a potential source of problems); a minimal sketch follows the list below.

Note that if this change has any impact on performance at all, I believe it might even be a positive one:

  • The lambda expression is trivial and therefore executes very quickly, so the theoretical maximum gain from a parallel stream seems small
  • Each sequentially processed item starts a new thread anyway until the executor reaches its maximum, so all cores should be busy almost immediately
  • Using a parallel stream can even create considerable overhead by itself, and in this case the parallel-stream threads also have to compete with the executor threads for CPU time
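
A minimal sketch of that change, assuming someObject implements Callable<List<ProductBean>> (which the executor1.submit(work).get() call in the question implies): submit all tasks up front and collect the results sequentially, so the plain ArrayList is only ever touched by one thread.

    List<Future<List<ProductBean>>> futures = new ArrayList<>();
    for (someObject work : m) {
        // queue every task first so the pool keeps all cores busy
        futures.add(executor1.submit(work));
    }
    for (Future<List<ProductBean>> f : futures) {
        try {
            // results are collected on this thread only, so no synchronization is needed
            generalList.addAll(f.get());
        } catch (Exception e) {
            logError(e);
        }
    }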

Apart from that, there may be other concurrency issues at play; without the complete program they are hard to assess, but your parallelStream calls could be problematic as well.
