Let’s be honest: I only use Java for Minecraft, so that’s the only thing I’ve debugged with. But every version, server or client, on every launcher: all of them use double (or more) the allocated RAM. In-game, the correctly allocated amount shows as used, but on my system double or more is actually taken. As a result my other apps don’t get enough memory and crash, while the game suffers as well.

I’m not wise enough to know what logs or versions or whatever I should post here as a cry for help, but I’ll update this with anything that helps, just tell me. I have no idea how to approach the problem. One idea I have is to run a non-Minecraft Java application, but who has (or knows about) one of those?

Per @[email protected]’s request:

Launch arguments: [-Xms512m, -Xmx1096m, -Duser.language=en] (set this low so the difference shows clearly; I have a modpack that I give 8 GB to and it uses way more as well, IIRC around 12 GB)

Game version: 1.18.2

Total system memory: 32 GB

Memory used by the game: I’m using KDE’s default system monitor, but here’s btop as well:

This test was at max render distance with 1 GB of RAM. It crashed, of course, but it crashed at almost 4 GB. What the hell, that’s four times as much!

I’m on Arch (btw) (sry)

  • Max-P@lemmy.max-p.me · 3 months ago

    When you set the memory allocation for Minecraft, you’re really only configuring how much memory the JVM’s garbage-collected heap may use. That doesn’t include any resources outside the JVM, such as Java itself, OpenGL resources, and everything else that involves native code, system libraries, and drivers.

    If you have an integrated GPU, all the textures that would normally be sent to a GPU may also live in your regular RAM, since integrated GPUs use unified memory. That can inflate the amount of memory Java appears to use.

    A browser, for example, might not use a whole lot of JavaScript memory, a couple of MB maybe, but the tab itself uses a ton more because of the renderer, assets, and CSS effects.
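
    One way to see that split from inside the JVM: the standard management API reports the GC heap (the part -Xmx caps) separately from the JVM’s own non-heap memory (metaspace, JIT code cache, and so on), and neither figure includes purely native allocations like driver buffers. A minimal sketch, class name made up:

    ```java
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapVsNative {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = mem.getHeapMemoryUsage();       // the part -Xmx bounds
            MemoryUsage nonHeap = mem.getNonHeapMemoryUsage(); // metaspace, JIT code cache, ...

            System.out.printf("heap used:     %d MiB (max %d MiB)%n",
                    heap.getUsed() >> 20, heap.getMax() >> 20);
            System.out.printf("non-heap used: %d MiB%n", nonHeap.getUsed() >> 20);
        }
    }
    ```

    Compare those numbers with what your system monitor reports for the same process; the gap is the native memory this comment is about.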

    • UnRelatedBurner@sh.itjust.worksOP · 3 months ago

      This is interesting and infuriating, but I don’t think it quite explains my scenario, since I also observe the over-usage when running a server from the console. There shouldn’t be any GPU shenanigans there, I hope.

      • DaPorkchop_@lemmy.ml · 3 months ago

        There are still plenty of native libraries in play, plus the JVM itself. For instance, the networking library (Netty) uses off-heap memory, which it preallocates in fairly large blocks. The server will also spawn quite a few threads, both for networking and for async chunk loading and generation, and each of those likely adds multiple megabytes of off-heap memory for stack space, thread-locals, GC state, system memory allocator state, and I/O buffers.

        And none of that accounts for the memory used by the JVM itself, which includes up to a few hundred megabytes for JIT-compiled code; JIT compiler state such as code profiling information (in practice, a good chunk of opcodes needs to track this); method signatures, field layouts, and superclass+superinterface information for every single loaded class (for modern Minecraft, well into the tens of thousands); and the full uncompressed bytecode for every single method in every single loaded class. If you’re using G1 (you almost certainly are, it’s the default), add the GC card table, which is one byte per card of heap space (by default, one byte per 512 bytes of JVM heap; it isn’t bitpacked, for performance reasons). I could go on, but you get the picture.
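
        If you want to see this breakdown for your own server, the JVM’s built-in Native Memory Tracking can itemize most of it. A rough sketch (the jar name is whatever your server jar is called; category names and exact numbers vary by JDK version, and NMT itself adds some overhead, so don’t leave it on permanently):

        ```
        # start the server with Native Memory Tracking enabled
        java -XX:NativeMemoryTracking=summary -Xms512m -Xmx1096m -jar server.jar

        # then, from another terminal, ask the running JVM for its own accounting
        # (<pid> is the server's process ID)
        jcmd <pid> VM.native_memory summary
        ```

        The summary splits usage into categories like Java Heap, Class, Thread, Code, and GC, which map directly onto the items above.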