You can take a simple code snippet that creates a HashMap with 10,000,000 elements and run it with a <100m heap. Lo and behold, you will be surprised if you compare the results on JDK 7u25 against the next release, JDK 7u40.
In u40, the JDK engineers changed the HashMap(initialCapacity, loadFactor) constructor, which now ignores your wish to construct a HashMap with an initial size equal to initialCapacity. Instead, the underlying array is allocated lazily, only when the first put() is called on the map.
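From the public API's point of view nothing changed, which is easy to demonstrate. A minimal sketch (the class name and the scaled-down capacity are mine, not from the original snippet):

```java
import java.util.HashMap;
import java.util.Map;

public class LazyInit {
    public static void main(String[] args) {
        // As of 7u40, this constructor only records the requested capacity;
        // the backing array is allocated on the first put().
        Map<Integer, Integer> map = new HashMap<>(1_000_000);
        System.out.println("size after construction: " + map.size());

        map.put(1, 1); // the backing table is allocated here
        System.out.println("size after first put: " + map.size());
    }
}
```

The observable behavior is identical on 7u25 and 7u40; only the moment the big array is allocated, and thus the heap footprint of a not-yet-populated map, differs.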
A seemingly very reasonable change: the JVM is lazy by nature in many respects, so why not postpone the allocation of a large data structure until it is actually needed? In that sense, a good call.
For a particular application that was performing tricks via reflection and directly accessing the internal structures of the Map implementation: maybe not. But then again, one should not bypass the API and try to be clever, so perhaps that developer is now a bit more convinced that not every newly discovered trick is applicable everywhere.
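The kind of reflective poking alluded to above can be sketched as follows. This is a hypothetical probe, not the application's actual code; the field name table matches the JDK 7/8 HashMap implementation, and on JDK 9+ the module system will typically refuse the access unless the JVM is started with --add-opens, which rather underlines the point about not depending on internals:

```java
import java.lang.reflect.Field;
import java.util.HashMap;

public class TableProbe {
    public static void main(String[] args) {
        HashMap<String, String> map = new HashMap<>(1024);
        try {
            // "table" is the backing array field in the JDK 7/8 implementation
            Field tableField = HashMap.class.getDeclaredField("table");
            tableField.setAccessible(true); // throws on JDK 9+ without --add-opens
            Object table = tableField.get(map);
            // On 7u25 this was a pre-sized array; on 7u40 it is null until the first put()
            System.out.println("table before first put: " + table);
        } catch (Exception e) {
            System.out.println("table not reachable: " + e.getClass().getSimpleName());
        }
    }
}
```

Any code built on assumptions about that field broke silently when the allocation became lazy, which is exactly the risk of reaching past the API.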
Would you have made the change yourself if you were the API developer? I am not convinced I would have had the guts, knowing that there have to be a bazillion apps out there depending on all kinds of weird aspects of the implementation.
If you are interested in the full case study, check out the original post.