Dan Kimmel
Mar 15, 2015
From the classic computer error haikus:
Out of memory.
We wish to hold the whole sky,
But we never will.
Specifically, your application "wishes to hold the whole sky" and it's failing intermittently because it's running out of memory. You can't find the issue and your customers are growing impatient. How do you defuse the situation? I was inspired to write this post by a recent bug we fixed, where our core infrastructure team worked on a problem involving the Java heap, the garbage collector, Java Native Access (JNA), native memory allocations, and even segfaults!
This post is an attempt to detail all of the debugging tools and techniques we use to root-cause OutOfMemoryErrors, along with some approaches for fixing them. My hope is that publishing our playbook on OutOfMemoryErrors will help others solve tough Java memory allocation issues with less effort and discovery than it has taken us to get where we are today. Although it's specifically written about the Oracle JVM for Java 7 running on a Unix operating system, most of the tools I describe are available or have a 1:1 replacement in other JVMs and OSes.
The high-level concepts and techniques even apply to garbage collected languages that don't run on the JVM, but you'll have to do a bit more mental translation.
First and foremost, it's important to have a rough understanding of how the JVM manages memory. In a normal, single-threaded C program, your memory map will look something like this:
However, in Java it's a little different. Everyone knows that Java is garbage collected, and probably a lot of people assume that the Java memory allocator and garbage collector use malloc() and free() to manage the memory they use to store your Java objects. It turns out that this isn't the case, so to figure out what's actually going on, let's take a look at the output of jmap -heap, one of the JDK's standard memory debugging tools (I removed some lines of output and bolded others to keep focused on the interesting stuff):
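If you want to follow along on a JVM of your own, the invocation looks like this (the process ID is, of course, a stand-in):

    $ jmap -heap <pid-of-your-jvm>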
As you can see, there are several different heap "spaces" that Java uses. Java objects are stored in the "Young Generation" (a combination of the "Eden" / "From" / "To" spaces) and the "Old Generation"; the "Perm[anent] Generation" is where .class files are loaded. Typically, each space has one (or sometimes multiple) native memory regions associated with it, and none of the spaces reside in the native heap (where malloc() takes its memory from).
The reason that the JVM uses multiple spaces has to do with its garbage collector implementation. Because object lifetimes tend to be either very short (stack variables, objects passed as user input or read temporarily into memory from a database, etc. usually only last for a few seconds) or very long (singleton objects, caches, etc. last the entire lifetime of the application), the garbage collector inspects the short-lived objects frequently and only occasionally inspects the long-lived objects.
When you say "Object o = new Object()", that object initially resides in the Eden space of the young generation, and if it is short-lived it will be garbage collected without ever leaving this space. However, if it survives a few collection cycles in the Eden space, it will travel through the To and From spaces and eventually be moved to the old generation, signifying that the garbage collector doesn't think it will become garbage soon.
Notably, there is no guarantee on how often or how long the garbage collector will run, and it's common that "full" garbage collections (the ones that traverse all objects) won't happen at all unless the young generation fills up most of the way. The native heap isn't even listed in the jmap output above, but it is used by the JVM to store internal state such as just-in-time compiled code, information about the state of the program or bytecode interpreter, etc. On most UNIX OSes you can use pmap to see all the memory regions in the JVM's address space (including the heap):
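The invocation is as simple as it gets (again, the PID is a placeholder):

    $ pmap <pid-of-your-jvm>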
Unlike Java heap space which can only be used to store Java objects allocated using "new", native memory is allocated by many different users, some trusted and others less so. The most trustworthy one is the JVM itself -- whenever it needs memory in order to run some Java code, the memory will come from the native heap. The other normal user of native memory is more surprising: some of Java's standard libraries use native implementations for performance reasons.
Specifically, this applies to some encryption and compression functions as well as for java.nio ByteBuffer objects. These two native memory users are very well behaved: they are well tested, rarely leak memory, and throw Java exceptions when they can't allocate enough memory. There are also less-trusted allocators of native memory: third party native code. If you're interfacing to native libraries through JNA or JNI (for instance, we use JNA to make calls to some DelphixOS-specific libraries for filesystem and OS management) then both JNA / JNI and the native libraries themselves will use native memory. Java agents attached to your JVM also run natively and use native memory.
Notably, all these native third-party additions to the JVM are also able to bypass the native heap and directly manipulate your JVM's memory map by allocating new memory regions, mmap'ing files, etc. This can really screw stuff up, so you should be careful to only use native libraries and Java agents that meet a high quality bar.
If you're like me, at this point you're itching to get to OutOfMemoryErrors and how to debug them. When you run out of memory, there are a lot of different reasons that could be responsible. Let's look at the easy ones first, along with the messages you'll see them generate in your logs or on stdout/stderr.
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
If you get this, it means you've just tried to allocate an array that is so large it would consume all Java heap space, so in 99.9% of cases it's a bug in your code where you've incorrectly calculated the array size to allocate. In the very unlikely case that it's not a bug, you'll need to increase your maximum Java heap size by passing a higher value to the JVM's -Xmx argument (and probably buy a lot more RAM).
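For example, raising the cap to 8GB looks like this (the jar name is just a placeholder):

    $ java -Xmx8g -jar your-app.jar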
java.lang.OutOfMemoryError: PermGen space
As mentioned above, the Permanent Generation is where .class files with your Java bytecode are stored in memory. About 98% of the time, this error is really simple: if every time you start your program it quickly runs out of PermGen space, it means the code you're using for your project has become too large and you need to increase the maximum PermGen size to make room for it using the -XX:MaxPermSize=<size> JVM argument.
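For example (the size is whatever makes sense for your codebase):

    $ java -XX:MaxPermSize=512m -jar your-app.jar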
To give you an idea of a reasonable limit, our PermGen size is about 82MB for a Java codebase with maybe 500 thousand lines of code and several dozen .jar libraries, and we cap the PermGen at a safe distance of 512MB to avoid being a nuisance but to still catch egregious PermGen space hogs. However, if you're hitting PermGen issues a long time after Java starts, it typically means that either your code or some third-party library you're using is dynamically generating and running Java code.
For instance, before we switched to PostgreSQL a couple years ago, we used an in-Java SQL database called Derby to store our application metadata, and it would dynamically compile code to execute each new query we asked it. (I learned about this feature when it ran us out of PermGen space at a customer site. Ugh.) Fixing problems like this is a lot more application-specific, but in general the best way to handle it is to cap the amount of generated code. In the worst case scenario, you can also just cope with it by increasing the limit a lot.
If you're not sure which case your error falls into or you don't know what library is generating code and causing the PermGen to grow, the graphical VisualVM tool for analyzing Java heap problems will be your best friend. It allows you to do many things, including breaking down the PermGen contents by what type of object they are, or breaking down the .class files in the PermGen by the classloader that loaded them.
(If you get an OutOfMemoryError when you try to open a large Java dump in VisualVM, you can adjust the VisualVM options (on Mac OS X this file is in /Applications/VisualVM.app/Contents/Resources/visualvm/etc/visualvm.conf) and increase the maximum memory limit with the line:
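The variable name may differ between VisualVM versions, but it should look roughly like this, with 2048m being an arbitrary example:

    visualvm_default_options="-J-Xmx2048m"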
Obligatory Inception movie poster here.) Another possible tool is the simpler jmap -permstat, which can be used to get some lower-resolution PermGen data:
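Against a live process, the invocation is simply (the PID is a placeholder):

    $ jmap -permstat <pid-of-your-jvm>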
Note that this tool works a bit better during postmortem analysis on a core file (live analysis as shown above is frequently not super accurate).
java.lang.OutOfMemoryError: Unable to create new native thread
From the message, you probably already guessed that this has to do with native memory -- specifically, threads require native memory, separate from the heap, for their stacks to reside in. This memory shows up in pmap output as "[ stack tid=... ]". Depending on what OS you're using and what the default stack size is, threads take up different amounts of native memory, so the limit on thread count varies a lot, but it's typically in the thousands.
However, one thing is platform independent: it's almost always a bug in your Java code to use this many threads. In our stack we typically have around 300-500 threads, but that number hasn't grown in the past few years so I think it's become pretty stable. Ways to work around this limit include using thread pools, Futures, SelectableChannels, WatchService, etc. to distribute the same amount of work across a smaller number of threads.
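As a rough sketch of the thread-pool approach (the task collection and pool size are made up, and exception handling is omitted), a fixed number of workers can chew through an arbitrarily long list of jobs:

    // A minimal sketch: 16 worker threads service an arbitrarily long list of
    // tasks instead of each task getting its own thread.
    // ("tasks" is a hypothetical Collection<Callable<Long>>.)
    ExecutorService pool = Executors.newFixedThreadPool(16);
    List<Future<Long>> results = new ArrayList<Future<Long>>();
    for (Callable<Long> task : tasks) {
        results.add(pool.submit(task));   // queued for a worker; no new thread created
    }
    long total = 0;
    for (Future<Long> result : results) {
        total += result.get();            // blocks until that task has finished
    }
    pool.shutdown();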
If you're not sure what all your threads are doing and need to figure it out to decide how to consolidate your thread use, try the jstack tool, which gives you a backtrace of every running thread in your JVM:
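For a live process it's just the following (and, like jmap, it can also be pointed at a matching java binary plus a core file):

    $ jstack <pid-of-your-jvm>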
Now that we've gone through the special cases, it's time to tackle real heap exhaustion. This is the broadest category of OutOfMemoryError issues, and as a result it's also one of the hardest to debug. There's a variety of error messages that might show up in this case, including:
java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.OutOfMemoryError: <reason> <stack trace> (Native method)
java.lang.OutOfMemoryError: request <size> bytes for <reason>. Out of swap space?
The Java process was killed by a SIGSEGV signal (segfault).
The first two happen when you've overflowed the Java heap (or are about to because the garbage collector only freed 2% of the heap after stealing the processor for 98% of the runtime of your process) and the rest are signs of native heap exhaustion. The main difference between these two is the tools you use to identify and debug the issue, so I'll talk about the broad categories of heap exhaustion before diving into concrete methodology.
Let's simplify by characterizing the three most common cases: when you have a heap that grows at a steady rate and eventually overflows, when you have a heap that's holding steady at some baseline and then overflows during a sudden flood of allocations, and when you have a heap whose baseline is higher than the maximum size.
Steady growth
In the case where you have a heap growing at a roughly linear rate over time, usually it means that there is some kind of memory leak. To be clear, if you're looking at native heap exhaustion it could be a real leak (in which case you should fix it by freeing the leaked memory), but in general I don't mean memory which nobody is referencing anymore. What I really mean are logical leaks: data structures that sometimes grow and never shrink.
The classic case is an object cache, where every time a user gives you more data you add it to the cache, but then you never remove anything from the cache even if it's not being used. Generally speaking, this is either a result of laziness (using an in-memory data structure instead of persistent storage because it requires less effort) or premature optimization (imagining that the cached objects will need to be fetched from disk much more frequently than they actually are).
In the former case, storing the objects to disk to free the memory associated with them is the way to go, and in the latter, just remove the cache and ensure that you don't see a noticeable performance regression in the code someone thought would exercise the cache. In the rare case that a cache is actually necessary for performance reasons, the best approach is to bound the size of the cache. The easiest way to do this is to set a hard limit on the number of objects in the cache and remove the least-recently or least-frequently used ones when you reach that number.
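A minimal sketch of that hard-limit, least-recently-used policy, using nothing but the standard library (the key and value types and the limit are arbitrary):

    // Evicts the least-recently-accessed entry once the cache exceeds MAX_ENTRIES.
    // The accessOrder=true flag makes iteration order reflect recency of use.
    final int MAX_ENTRIES = 10000;
    Map<String, byte[]> cache = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
            return size() > MAX_ENTRIES;
        }
    };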
If you're interested in a minimal-effort caching solution, the excellent Guava library includes caching mechanisms with lots of nice options. An improvement which requires very little effort is allowing manual tuning of the limit on individual systems where performance is degrading (note: this only works well if you have a small number of deployments to tune manually!). The hardest but certainly the best approach is creating a cache which grows to hold as many objects as necessary to achieve some performance goal.
However, you need to ensure your goal is feasible in all scenarios; otherwise you'll end up growing the cache indefinitely -- usually this means using a cache hit percentage as your performance goal instead of, say, the mean fetch time through the cache. Alternately, you can add a "sanity check" upper limit (which can itself be tunable). Perhaps surprisingly, because the cache usually doesn't grow very quickly, the thread that receives the first out of memory notification is frequently not the thread leaking objects, so you will have to look at the contents of the heap to determine where all your memory has gone.
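Coming back to the minimal-effort Guava option mentioned above, a bounded cache looks roughly like this (the sizes and types are placeholders):

    // Guava handles eviction for us: at most 10,000 entries, and entries that
    // sit unused for 10 minutes are dropped even if the cache isn't full.
    Cache<String, byte[]> cache = CacheBuilder.newBuilder()
            .maximumSize(10000)
            .expireAfterAccess(10, TimeUnit.MINUTES)
            .build();
    cache.put("some-key", new byte[1024]);
    byte[] cached = cache.getIfPresent("some-key");   // null on a miss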
Explosive growth
If your heap has reached a roughly stable size but then suddenly fills due to a very large group of allocations, this usually is a result of receiving unexpectedly large input. Like all growing companies, we've seen our customers put more and more data into our systems, and we've hit a couple of bugs of this nature where a database query unexpectedly returns, say, 100-1000x the number of rows we expected.
This is especially common with time-based data that's being collected every second / minute. Not only does this tie up the database for a good long time fetching all the data, but once it hands the data to the JVM there's no possible way Java can hold it all in memory. The same basic problem has been seen when reading large files into memory as well. If you're only using a small subset of the data you're reading, the obvious workaround is to only read in what's necessary.
For us, a big subset of these problems involves reading too many rows from a database before processing them in memory (maybe calculating an average value of one of the columns, etc.). Usually the best fix for this involves pushing the calculation down a level to the database itself, which can easily do moderately complex calculations on a large amount of data in a more memory-efficient way. It gets a bit trickier if you really need all the data, or if your calculations are impossible to express in SQL.
You could resort to a "streaming" approach, where you read a little of the data at a time and process it, but that idea doesn't work for all algorithms. To accommodate these use cases, you either have to increase the heap size (easy, basically ignores the problem and places the burden on your OS's paging system, can result in very poor performance) or use persistent storage to hold intermediate results (hard, usually results in hard to maintain code, might be faster than relying on paging if you design it well).
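As a sketch of the two database-friendly options (the table and column names are made up, and exception handling is omitted): push the aggregation into SQL when you can, and stream the rows with a bounded fetch size when you can't:

    // Option 1: let the database do the math -- only one row ever crosses the wire.
    //   SELECT avg(duration_ms) FROM samples
    //
    // Option 2: stream the rows and aggregate incrementally. (With the PostgreSQL
    // driver, setFetchSize() only streams when autocommit is off.)
    conn.setAutoCommit(false);
    PreparedStatement stmt = conn.prepareStatement("SELECT duration_ms FROM samples");
    stmt.setFetchSize(1000);                // hold roughly 1000 rows in memory at a time
    ResultSet rs = stmt.executeQuery();
    long sum = 0, count = 0;
    while (rs.next()) {
        sum += rs.getLong(1);
        count++;
    }
    rs.close();
    stmt.close();
    double average = (count == 0) ? 0 : (double) sum / count;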
Note that MapReduce is essentially a design paradigm for solving this problem, so if you find yourself writing this kind of code a lot, you might consider a similar design. Identifying the problem after there's explosive growth is usually not very difficult because your heap will be chock-full of one object type allocated from one code path, and the thread that runs out of memory is usually also the one which is allocating it like crazy.
Baseline is higher than maximum size
This case is relatively difficult to identify but very easy to deal with. Once you've ruled out both a leak and a flood, and especially if you start running out of memory during many different but equally pedestrian workloads, you're probably dealing with this case. This is by far the best reason to expand the heap size, because it means that your program's working set has simply grown beyond the bounds you set for it.
The first question when I start any heap debugging investigation is, "Am I out of Java heap or native heap?" It's usually obvious from the error message. Let's look at the tools you can use in each case to help you determine what went wrong.
My favorite tool for debugging the Java heap is VisualVM (mentioned above). It allows you to break down the heap by Java object type, view the object reference tree to see what objects are pinning the most memory, and browse other pre-built metrics. The pre-built metrics are enough for most cases, but VisualVM's truly killer feature is its object query language, which allows you to run SQL-like queries over the entire heap to collect specific metrics about each object that might help you debug your problem. An alternative is the Eclipse Memory Analyzer; much of its functionality is the same as VisualVM's, but I find it somewhat harder to navigate. To use either of these tools, you'll need to collect a Java heap dump from a live process:
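For example (the dump file path is arbitrary):

    $ jmap -dump:format=b,file=/var/tmp/app-heap.hprof <pid-of-your-jvm>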
Or from a core file (make sure the jmap and java executables you use in this command line are from the same JDK):
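Something along these lines (all paths are placeholders):

    $ jmap -dump:format=b,file=/var/tmp/app-heap.hprof /path/to/jdk/bin/java /path/to/core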
A slightly lower-overhead tool that gives a subset of the same information is jmap -histo:
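Again, against a live process (adding :live counts only reachable objects, at the cost of forcing a full garbage collection first):

    $ jmap -histo <pid-of-your-jvm>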
This is less useful because it only tells you what the types are, and usually the most-used object types are primitive types (the [B and [I above), String, HashMap$Entry, and so on. Since every library under the sun uses Strings and HashMaps, this doesn't give much insight into who is holding the memory, so I find the object reference tree that VisualVM provides to be more helpful.
Another useful thing the JVM can tell you is when the garbage collector runs. The simplest way to get this information is by adding the JVM arguments -XX:+PrintGC, -XX:+PrintGCDetails, and -XX:+PrintGCTimeStamps. This produces output on stdout, or in a log file you specify via -Xloggc, containing a line for every garbage collection run that is performed and the time at which it happened.
The output lines themselves are a bit difficult to parse, but with a tiny awk script and a spreadsheet you can graph the heap size over time. If they go back far enough in time, these graphs provide the best insight about which of the heap exhaustion cases above you are debugging, so it can direct where you spend your debugging effort next.
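A rough sketch of both halves (the paths are placeholders, and the awk script assumes the timestamped single-line format shown in the comment; with -XX:+PrintGCDetails the first arrow on each line is the young generation, so adjust the match to taste):

    $ java -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
          -Xloggc:/var/log/myapp/gc.log -jar your-app.jar

    # Turns lines like "7.682: [GC 524288K->43712K(2009792K), 0.0431540 secs]"
    # into "seconds-since-start <tab> KB-used-after-GC" for graphing.
    $ awk '/\[GC|\[Full GC/ {
        ts = $1; sub(/:$/, "", ts)
        if (match($0, /->[0-9]+K/))
            print ts "\t" substr($0, RSTART + 2, RLENGTH - 3)
    }' /var/log/myapp/gc.log > heap-over-time.tsv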
Finally, since DelphixOS is based on illumos (and therefore originally based on OpenSolaris), DTrace is available to do arbitrarily complex runtime logging of various JVM events. The place to start is the documentation on the DTrace hotspot provider, which can tell you when methods are JIT compiled, when garbage collection runs happen, and even when methods are invoked (although that requires an additional JVM flag -XX:+ExtendedDTraceProbes because enabling it creates a big performance hit). DTrace and these probes are also available in FreeBSD and Mac OS X.
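For example, here's a sketch that logs the duration of every garbage collection in a running JVM (the probe names come from the hotspot provider documentation; the PID is a placeholder):

    $ dtrace -p <pid-of-your-jvm> -n '
        hotspot$target:::gc-begin { self->start = timestamp; }
        hotspot$target:::gc-end   {
            printf("GC took %d ms\n", (timestamp - self->start) / 1000000);
        }'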
Debugging the native heap is much more platform-dependent. First, let's talk about where to start if your JVM is segfaulting. If this wasn't caused by a bug in your native libraries, it probably indicates that JNA or JNI called malloc(), it returned NULL, and then that address was dereferenced. You can use gdb's backtrace command (or mdb's ::status / ::stack commands if you're on a Solaris-based system) on the core dump to figure out what the failed allocation was (sometimes this stack will be from the thread that caused the error, sometimes not, as mentioned above).
DelphixOS also has the excellent tool libumem for tracking native memory allocations through malloc(), and mdb is available to analyze the libumem debugging information. To use libumem, we start the JVM with the environment variables LD_PRELOAD=/path/to/libumem.so and UMEM_DEBUG=audit set. This makes libumem intercept all calls to malloc() from the Java process and log some information about them, including the stack trace where malloc was called -- an invaluable piece of information since native memory has no type information to help you understand what it's being used for.
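In other words, the JVM gets launched roughly like this (the library path and jar name are placeholders):

    $ LD_PRELOAD=/path/to/libumem.so UMEM_DEBUG=audit java -jar your-app.jar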
Notably, using UMEM_DEBUG=audit will also cause your program to crash when libumem detects that you've overwritten freed memory, done multiple frees of the same buffer, etc., so you should either thoroughly test your programs with libumem enabled (like we do) or add the noabort argument to the UMEM_DEBUG options to keep it from killing your process when your code makes memory mistakes (you can make it log memory errors to stdout by adding the verbose option too).
Because of the additional debugging information, libumem can also increase the memory utilization of your processes when it's in audit mode, so try to use it with programs that don't do too many native allocations (the JVM is usually a fine candidate), or limit the amount of logging information it collects. You can use the Solaris-only debugger mdb to read the libumem debug information in many ways. For example, you can check for leaks by performing a "mark and sweep" algorithm on your native heap.
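With mdb, that looks something like this (point it at a core file, or attach to the live process at the cost of stopping it while you poke around):

    $ mdb /path/to/core          # or: mdb -p <pid-of-your-jvm>
    > ::findleaks                # mark-and-sweep over the native heap; needs UMEM_DEBUG=audit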
You can also use libumem's internal tracking to understand what kind of memory you've been allocating (by allocation size) and see if any allocation failures have occurred:
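From the same mdb session:

    > ::umastat                  # per-allocation-size caches, memory in use, and allocation failures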
On the lines that say "Total [umem_internal]" and "Total [umem_default]" you can see that for 169MB of native heap space, libumem's debugging information occupies 107MB of additional native heap space. A lot of that space is being used to store stack traces so that we can see where native allocations happened, which can give us a very clear idea of what the top memory native users are:
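That breakdown comes from asking mdb for the biggest allocators, grouped by allocation stack:

    > ::umausers                 # largest native memory consumers, by stack trace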
In the output above, you can obviously see that at least 8MB of memory is being used by a ZFS routine, and 5.5MB is being used by a Java unzipping utility. Before we started collecting this information, we were in the dark about why any JVM process ran out of native heap space. We would look at the Java heap and hope that there would be many instances of one of the objects normally associated with native allocations, but usually there wouldn't be and we would be stumped, or we would spend several stressful hours or days trying to reproduce the issue in-house while running DTrace on all calls to malloc().
With the information that libumem collects, we can root cause native heap exhaustion much more easily. Unfortunately, I don't know of an equivalent that's available on non-Solaris platforms. A port of libumem is available, but on other platforms there's no equivalent to mdb, so there's no way to read the debugging information. On Linux, the next-best thing is probably valgrind, but that couldn't be used in production because of the high overhead that it places on every allocation. If you know of anything better, share it in the comments!
This final type of heap exhaustion is orthogonal to the others, and worth pulling out into its own case because it's a bit more difficult to understand. (In fact, this was the problem that inspired this blog post.) Sometimes, objects that you allocate in Java have native memory associated with them -- for instance, we already mentioned ByteBuffers from the java.nio framework.
ByteBuffers occupy very little Java heap space (basically they're just a pointer to native memory), so the Java heap might be relatively empty and the garbage collector won't run even if there are a lot of unreferenced ByteBuffers lying around waiting to be collected. Unfortunately, each ByteBuffer corresponds to a comparatively large amount of native memory, which is only released when the Java object is collected -- so you can actually run out of native memory while waiting for a garbage collection of the Java heap that never comes. There's actually a bug filed against the JVM describing this issue.
Although the bug references Java NIO in particular, other native libraries can cause the issue as well and are not as easily resolved -- our bug was caused by callbacks from native code back to Java methods (through JNA). The workarounds listed in the JVM bug (run System.gc() more often, reduce the size of the young generation to force more frequent garbage collections) are pretty weak because they're just making suggestions to the garbage collector that might be ignored. What we ended up doing was a brutal hack, but it amounted to wrapping the Java object so that it had a .close() method which would free the native memory associated with it deterministically.
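Our wrapper is specific to our JNA bindings, but the shape of the fix looks roughly like this sketch (the class name and allocation details are made up):

    import com.sun.jna.Native;

    // Owns a chunk of native memory and frees it deterministically instead of
    // waiting for the garbage collector to notice the Java-side object is dead.
    public final class NativeBuffer implements AutoCloseable {
        private long address;                       // raw pointer from Native.malloc()

        public NativeBuffer(long size) {
            address = Native.malloc(size);
            if (address == 0) {
                throw new OutOfMemoryError("native allocation of " + size + " bytes failed");
            }
        }

        public long address() {
            return address;
        }

        @Override
        public void close() {                       // callers free the memory explicitly,
            if (address != 0) {                     // e.g. via try-with-resources
                Native.free(address);
                address = 0;
            }
        }
    }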
What debugging blog post would be complete without information on how to collect the data you'll need to debug issues after they cause a crash? This information is vital for fixing customer problems quickly; otherwise you're stuck waiting for the issue to be reproduced, which could take days or weeks and erodes customer faith in your ability to solve the issue. The essentials you'll want to diagnose an OutOfMemoryError are listed below (a combined sketch of the relevant JVM flags follows the list):
Collect Java heap dumps when OutOfMemoryErrors occur. To turn these on you should use -XX:+HeapDumpOnOutOfMemoryError and related options.
To tell what happened when the JVM dies because of a fatal error, dump an error file using the -XX:ErrorFile=/path/to/errfile.log JVM option.
Dump a core file when your JVM segfaults. This is highly OS-dependent, so look up the relevant documentation for your OS.
If it's available, you should limit the native address space and/or native heap size of your JVM. This is highly OS-dependent, but many Unix systems provide rlimit (RLIMIT_AS or RLIMIT_DATA, in particular) and on Solaris-based systems you can use resource_controls(5). This is important on 64-bit JVMs because they are not artificially bounded to 4GB of address space like 32-bit JVMs are. Ideally whatever limiting mechanism you use should send a fatal signal to the JVM so that you get a core dump which can be debugged after the fact.
If it's available, you should use libumem with UMEM_DEBUG=audit. This will make your native core dumps significantly more useful on systems that can read the information libumem generates.
A minimal set of debug symbols should be loaded into your JVM process to make postmortem native debugging easier. On Solaris and other systems that use CTF symbols this barely requires more memory to be used and is very valuable; on Linux you may need to download a special JDK.
Logs of when your JVM starts, stops, restarts, and crashes. If you're running the JVM from upstart, startd, service, svcadm, or some other OS-level service management program, these events are probably logged for you, but otherwise you should print recognizable strings into your Java log saying the JVM is being booted. (Hopefully, you're doing something to ensure that if Java crashes it will be restarted.)
Garbage collection logging with the -XX:+PrintGC family of JVM arguments.
Moderately detailed logs of what general activity your application was doing.
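Pulling the JVM-side items together, a launch line might look something like this (every path is a placeholder):

    $ java \
        -XX:+HeapDumpOnOutOfMemoryError \
        -XX:HeapDumpPath=/var/crash/myapp \
        -XX:ErrorFile=/var/crash/myapp/hs_err_pid%p.log \
        -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
        -Xloggc:/var/log/myapp/gc.log \
        -jar your-app.jar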
Here are a few lessons we've learned about issues that can hide OutOfMemoryErrors until they're in production, instead of putting them front and center during development.
The most important lesson: never use different garbage collection or memory settings in development than you use in production. More specifically:
Don't use the -client JVM option in development and the -server option in production.
Don't use a 32-bit JVM in development and 64-bit JVM in production.
Don't switch garbage collectors, garbage collection tuning, or heap maximums between development and production.
Don't switch OSes or JVMs between development and production.
When your development machine or a production system creates core files, Java heap dumps, or fatal error files, surface them so that you don't accidentally ignore them. When your user is a consumer, the best way to do this is to offer them the ability to "submit a bug report"; otherwise, make a utility that scans for these files automatically on developer and tester systems and makes it easy to turn them into bug reports when they are found.
As a member of the core infrastructure / escalation team at Delphix, I've worked on many OutOfMemoryError issues firsthand and I'm excited to help raise the quality bar for Java applications elsewhere by sharing my experiences. I hope this post will help others using the JVM and other garbage collected environments to get the information they need to identify, fix, and postmortem-debug any heap exhaustion issue they might encounter.
It's always good to invest in infrastructure: "Remember, if anything exciting happens, we've done it wrong."