GNU bug report logs - #43389
28.0.50; Emacs memory leaks


Package: emacs;

Reported by: Michael Heerdegen <michael_heerdegen <at> web.de>

Date: Mon, 14 Sep 2020 00:44:01 UTC

Severity: normal

Merged with 43395, 43876, 44666

Found in version 28.0.50

Done: Stefan Monnier <monnier <at> iro.umontreal.ca>

Bug is archived. No further changes may be made.

To add a comment to this bug, you must first unarchive it, by sending
a message to control AT debbugs.gnu.org, with unarchive 43389 in the body.
You can then email your comments to 43389 AT debbugs.gnu.org in the normal way.




Report forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 14 Sep 2020 00:44:01 GMT) Full text and rfc822 format available.

Acknowledgement sent to Michael Heerdegen <michael_heerdegen <at> web.de>:
New bug report received and forwarded. Copy sent to bug-gnu-emacs <at> gnu.org. (Mon, 14 Sep 2020 00:44:01 GMT) Full text and rfc822 format available.

Message #5 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: bug-gnu-emacs <at> gnu.org
Subject: 28.0.50; Emacs memory leaks
Date: Mon, 14 Sep 2020 02:43:30 +0200
Hello,

From time to time my Emacs' memory usage grows above 4 GB for no
obvious reason.  I haven't investigated when that happens so far; I
will the next time.

Anybody who sees the same problem is invited to provide details!


Thanks,

Michael.






Merged 43389 43395. Request was from Eli Zaretskii <eliz <at> gnu.org> to control <at> debbugs.gnu.org. (Mon, 14 Sep 2020 15:00:05 GMT) Full text and rfc822 format available.

Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 14 Sep 2020 19:26:01 GMT) Full text and rfc822 format available.

Message #10 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Juri Linkov <juri <at> linkov.net>
To: Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Mon, 14 Sep 2020 22:09:05 +0300
> From time to time my Emacs' memory usage grows above 4 GB for no
> obvious reason.  I haven't investigated when that happens so far; I
> will the next time.
>
> Anybody who sees the same problem is invited to provide details!

Maybe manually evaluating (clear-image-cache) helps to free memory?
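
[Editorial note: for anyone trying this, a minimal sketch of the
suggestion.  `clear-image-cache' takes an optional argument; passing t
clears the image caches of all frames rather than just the selected
one.]

```elisp
;; Evaluate with M-: (or in *scratch*).  A nil argument clears the
;; selected frame's image cache; t clears the caches of all frames.
(clear-image-cache t)

;; Images are normally evicted automatically once they have not been
;; displayed for `image-cache-eviction-delay' seconds, so a manual
;; clear mostly matters when hunting a leak right now.
image-cache-eviction-delay
```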




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 15 Sep 2020 00:33:01 GMT) Full text and rfc822 format available.

Message #13 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Juri Linkov <juri <at> linkov.net>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 15 Sep 2020 02:32:19 +0200
Juri Linkov <juri <at> linkov.net> writes:

> Maybe manually evaluating (clear-image-cache) helps to free memory?

I'll try that the next time this happens.  I would not expect the image
cache to be the cause though: I don't view many images in Emacs, and I
typically rebuild and restart Emacs daily.


Thanks,

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 15 Sep 2020 18:25:01 GMT) Full text and rfc822 format available.

Message #16 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 15 Sep 2020 19:54:18 +0200
On Tue, Sep 15, 2020 at 02:32:19AM +0200, Michael Heerdegen wrote:
> I'll try that the next time this happens.  I would not expect the image
> cache to be the cause though: I don't view many images in Emacs, and I
> typically rebuild and restart Emacs daily.

htop says my emacs RSS is now 5148MB. I ran M-x garbage-collect and it ran
at 100% cpu for 5 minutes and released nothing. I also tried manually
executing (clear-image-cache) and nothing.

I run Emacs 27.1 as a daemon, uptime 4 days, 3 hours, 22 minutes, 53
seconds. Yesterday conky was reporting Emacs at 28% memory usage,
today it's at 33%. No dramatically huge files loaded, just a few
megabytes of text. No inline images (local or remote).

In GNU Emacs 27.1 (build 2, x86_64-pc-linux-gnu, X toolkit, Xaw3d scroll bars)
 of 2020-08-17 built on maokai
Windowing system distributor 'The X.Org Foundation', version 11.0.12008000
System Description: Gentoo/Linux

Recent messages:
Unable to load color "unspecified-fg" [4 times]
4 days, 3 hours, 22 minutes, 53 seconds

Configured using:
 'configure --prefix=/home/adamsrl/.local/stow/emacs-27.1
 --without-libsystemd --without-dbus --with-x-toolkit=lucid'

Configured features:
XAW3D XPM JPEG TIFF GIF PNG RSVG SOUND GSETTINGS GLIB NOTIFY INOTIFY ACL
GNUTLS LIBXML2 FREETYPE HARFBUZZ XFT ZLIB TOOLKIT_SCROLL_BARS LUCID X11
XDBE XIM MODULES THREADS JSON PDUMPER LCMS2 GMP

Important settings:
  value of $LANG: en_US.utf8
  locale-coding-system: utf-8-unix

Major mode: Org

Minor modes in effect:
  recentf-mode: t
  flyspell-mode: t
  pdf-occur-global-minor-mode: t
  helm-mode: t
  helm-ff-cache-mode: t
  helm--remap-mouse-mode: t
  async-bytecomp-package-mode: t
  shell-dirtrack-mode: t
  show-paren-mode: t
  savehist-mode: t
  global-hl-line-mode: t
  override-global-mode: t
  tooltip-mode: t
  global-eldoc-mode: t
  electric-indent-mode: t
  mouse-wheel-mode: t
  file-name-shadow-mode: t
  global-font-lock-mode: t
  font-lock-mode: t
  auto-composition-mode: t
  auto-encryption-mode: t
  auto-compression-mode: t
  column-number-mode: t
  line-number-mode: t
  auto-fill-function: org-auto-fill-function
  abbrev-mode: t

Load-path shadows:
/home/adamsrl/.quicklisp/dists/quicklisp/software/slime-v2.24/slime-tests hides /home/adamsrl/.config/emacs/elpa/slime-20200810.224/slime-tests
/home/adamsrl/.quicklisp/dists/quicklisp/software/slime-v2.24/slime hides /home/adamsrl/.config/emacs/elpa/slime-20200810.224/slime
/home/adamsrl/.quicklisp/dists/quicklisp/software/slime-v2.24/slime-autoloads hides /home/adamsrl/.config/emacs/elpa/slime-20200810.224/slime-autoloads
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-stan hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-stan
/home/adamsrl/.config/emacs/elpa/org-20200810/org-macs hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-macs
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-gnuplot hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-gnuplot
/home/adamsrl/.config/emacs/elpa/org-20200810/org-num hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-num
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-sql hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-sql
/home/adamsrl/.config/emacs/elpa/org-20200810/org-lint hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-lint
/home/adamsrl/.config/emacs/elpa/org-20200810/ol hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol
/home/adamsrl/.config/emacs/elpa/org-20200810/org-indent hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-indent
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-perl hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-perl
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-lisp hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-lisp
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-maxima hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-maxima
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-tangle hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-tangle
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-vala hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-vala
/home/adamsrl/.config/emacs/elpa/org-20200810/org-tempo hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-tempo
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-comint hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-comint
/home/adamsrl/.config/emacs/elpa/org-20200810/org-list hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-list
/home/adamsrl/.config/emacs/elpa/org-20200810/org-src hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-src
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-irc hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-irc
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-hledger hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-hledger
/home/adamsrl/.config/emacs/elpa/org-20200810/org-goto hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-goto
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-latex hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-latex
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-latex hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-latex
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-org hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-org
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-exp hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-exp
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-abc hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-abc
/home/adamsrl/.config/emacs/elpa/org-20200810/ox hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-groovy hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-groovy
/home/adamsrl/.config/emacs/elpa/org-20200810/org-mouse hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-mouse
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-publish hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-publish
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-coq hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-coq
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-ocaml hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-ocaml
/home/adamsrl/.config/emacs/elpa/org-20200810/org-version hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-version
/home/adamsrl/.config/emacs/elpa/org-20200810/org-habit hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-habit
/home/adamsrl/.config/emacs/elpa/org-20200810/org-agenda hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-agenda
/home/adamsrl/.config/emacs/elpa/org-20200810/org-ctags hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-ctags
/home/adamsrl/.config/emacs/elpa/org-20200810/org-attach hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-attach
/home/adamsrl/.config/emacs/elpa/org-20200810/org-colview hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-colview
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-rmail hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-rmail
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-matlab hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-matlab
/home/adamsrl/.config/emacs/elpa/org-20200810/org-install hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-install
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-bibtex hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-bibtex
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-eval hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-eval
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-makefile hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-makefile
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-calc hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-calc
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-python hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-python
/home/adamsrl/.config/emacs/elpa/org-20200810/org-timer hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-timer
/home/adamsrl/.config/emacs/elpa/org-20200810/org-crypt hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-crypt
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-org hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-org
/home/adamsrl/.config/emacs/elpa/org-20200810/org-clock hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-clock
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-ruby hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-ruby
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-fortran hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-fortran
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-docview hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-docview
/home/adamsrl/.config/emacs/elpa/org-20200810/org-pcomplete hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-pcomplete
/home/adamsrl/.config/emacs/elpa/org-20200810/org-macro hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-macro
/home/adamsrl/.config/emacs/elpa/org-20200810/org-element hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-element
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-ditaa hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-ditaa
/home/adamsrl/.config/emacs/elpa/org-20200810/org-table hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-table
/home/adamsrl/.config/emacs/elpa/org-20200810/ob hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-mscgen hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-mscgen
/home/adamsrl/.config/emacs/elpa/org-20200810/org-footnote hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-footnote
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-eww hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-eww
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-lob hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-lob
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-haskell hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-haskell
/home/adamsrl/.config/emacs/elpa/org-20200810/org-faces hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-faces
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-md hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-md
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-table hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-table
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-awk hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-awk
/home/adamsrl/.config/emacs/elpa/org-20200810/org-mobile hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-mobile
/home/adamsrl/.config/emacs/elpa/org-20200810/org-archive hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-archive
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-ref hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-ref
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-emacs-lisp hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-emacs-lisp
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-dot hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-dot
/home/adamsrl/.config/emacs/elpa/org-20200810/org-duration hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-duration
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-js hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-js
/home/adamsrl/.config/emacs/elpa/org-20200810/org hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-beamer hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-beamer
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-ascii hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-ascii
/home/adamsrl/.config/emacs/elpa/org-20200810/org-loaddefs hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-loaddefs
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-shell hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-shell
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-scheme hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-scheme
/home/adamsrl/.config/emacs/elpa/org-20200810/org-entities hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-entities
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-ebnf hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-ebnf
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-plantuml hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-plantuml
/home/adamsrl/.config/emacs/elpa/org-20200810/org-keys hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-keys
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-lilypond hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-lilypond
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-C hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-C
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-J hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-J
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-mhe hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-mhe
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-info hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-info
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-sed hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-sed
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-lua hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-lua
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-octave hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-octave
/home/adamsrl/.config/emacs/elpa/org-20200810/org-attach-git hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-attach-git
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-forth hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-forth
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-w3m hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-w3m
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-ledger hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-ledger
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-screen hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-screen
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-java hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-java
/home/adamsrl/.config/emacs/elpa/org-20200810/org-datetree hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-datetree
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-sqlite hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-sqlite
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-shen hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-shen
/home/adamsrl/.config/emacs/elpa/org-20200810/org-id hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-id
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-asymptote hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-asymptote
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-html hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-html
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-io hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-io
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-man hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-man
/home/adamsrl/.config/emacs/elpa/org-20200810/org-feed hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-feed
/home/adamsrl/.config/emacs/elpa/org-20200810/org-protocol hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-protocol
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-eshell hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-eshell
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-texinfo hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-texinfo
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-core hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-core
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-clojure hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-clojure
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-R hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-R
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-icalendar hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-icalendar
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-picolisp hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-picolisp
/home/adamsrl/.config/emacs/elpa/org-20200810/org-plot hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-plot
/home/adamsrl/.config/emacs/elpa/org-20200810/org-compat hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-compat
/home/adamsrl/.config/emacs/elpa/org-20200810/org-capture hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-capture
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-bbdb hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-bbdb
/home/adamsrl/.config/emacs/elpa/org-20200810/org-inlinetask hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/org-inlinetask
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-eshell hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-eshell
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-css hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-css
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-processing hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-processing
/home/adamsrl/.config/emacs/elpa/org-20200810/ob-sass hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ob-sass
/home/adamsrl/.config/emacs/elpa/org-20200810/ox-odt hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ox-odt
/home/adamsrl/.config/emacs/elpa/org-20200810/ol-gnus hides /home/adamsrl/.local/stow/emacs-27.1/share/emacs/27.1/lisp/org/ol-gnus

Features:
(shadow sort mail-extr warnings emacsbug time org-num org-tempo tempo
org-protocol org-mouse org-mobile org-indent org-goto org-feed org-crypt
org-attach lisp-mnt mm-archive org-archive timezone gnutls
network-stream url-cache org-clock conf-mode image-file ffap cal-move
tabify dabbrev ob-org help-fns radix-tree sh-script executable log-edit
pcvs-util add-log smerge-mode diff vc helm-command helm-elisp helm-eval
edebug backtrace mule-util misearch multi-isearch vc-git sendmail
term/rxvt term/screen term/xterm xterm rx mhtml-mode css-mode-expansions
css-mode smie eww mm-url url-queue js-mode-expansions js
cc-mode-expansions cc-mode cc-fonts cc-guess cc-menus cc-cmds cc-styles
cc-align cc-engine cc-vars cc-defs html-mode-expansions sgml-mode winner
recentf tree-widget helm-x-files org-duration cal-iso vc-dispatcher
vc-hg diff-mode flyspell ispell ol-eww ol-rmail ol-mhe ol-irc ol-info
ol-gnus nnir ol-docview doc-view ol-bibtex bibtex ol-bbdb ol-w3m
face-remap org-agenda server company-oddmuse company-keywords
company-etags company-gtags company-dabbrev-code company-dabbrev
company-files company-clang company-capf company-cmake company-semantic
company-template company-bbdb org-caldav org-id url-dav url-http
url-auth url-gw nsm pdf-occur ibuf-ext ibuffer ibuffer-loaddefs tablist
tablist-filter semantic/wisent/comp semantic/wisent
semantic/wisent/wisent semantic/util-modes semantic/util semantic
semantic/tag semantic/lex semantic/fw mode-local cedet pdf-isearch
let-alist pdf-misc imenu pdf-tools cus-edit cus-start cus-load pdf-view
jka-compr pdf-cache pdf-info tq pdf-util image-mode exif org-noter
ox-odt rng-loc rng-uri rng-parse rng-match rng-dt rng-util rng-pttrn
nxml-parse nxml-ns nxml-enc xmltok nxml-util ox-latex ox-icalendar
ox-html table ox-ascii ox-publish ox org-element avl-tree gnus-icalendar
org-capture gnus-art mm-uu mml2015 mm-view mml-smime smime dig gnus-sum
shr svg dom gnus-group gnus-undo gnus-start gnus-cloud nnimap nnmail
mail-source utf7 netrc nnoo gnus-spec gnus-int gnus-range message rmc
puny dired dired-loaddefs rfc822 mml mml-sec epa derived epg epg-config
mailabbrev mailheader gnus-win gnus nnheader gnus-util rmail
rmail-loaddefs mail-utils wid-edit mm-decode mm-bodies mm-encode
mail-parse rfc2231 rfc2047 rfc2045 mm-util ietf-drums mail-prsvr
gmm-utils icalendar ob-sql ob-shell skeleton appt diary-lib
diary-loaddefs slime-fancy slime-indentation slime-cl-indent cl-indent
slime-trace-dialog slime-fontifying-fu slime-package-fu slime-references
slime-compiler-notes-tree slime-scratch slime-presentations bridge
slime-macrostep macrostep slime-mdot-fu slime-enclosing-context
slime-fuzzy slime-fancy-trace slime-fancy-inspector slime-c-p-c
slime-editing-commands slime-autodoc slime-repl slime-parse slime
compile etags fileloop generator xref project arc-mode archive-mode
hyperspec orgalist the-org-mode-expansions org ob ob-tangle ob-ref
ob-lob ob-table ob-exp org-macro org-footnote org-src ob-comint
org-pcomplete org-list org-faces org-entities noutline outline
org-version ob-emacs-lisp ob-core ob-eval org-table ol org-keys
org-compat org-macs org-loaddefs find-func cal-menu calendar
cal-loaddefs helm-recoll helm-for-files helm-bookmark helm-adaptive
helm-info bookmark text-property-search pp helm-external helm-net xml
url url-proxy url-privacy url-expand url-methods url-history url-cookie
url-domsuf url-util mailcap ido helm-mode helm-files helm-buffers
helm-occur helm-tags helm-locate helm-grep helm-regexp helm-utils
helm-help helm-types helm async-bytecomp helm-global-bindings
helm-easymenu helm-source eieio-compat helm-multi-match helm-lib async
helm-config vc-fossil expand-region text-mode-expansions
er-basic-expansions expand-region-core expand-region-custom company
pcase multiple-cursors mc-hide-unmatched-lines-mode
mc-separate-operations rectangular-region-mode mc-mark-pop mc-mark-more
thingatpt mc-cycle-cursors mc-edit-lines multiple-cursors-core advice
rect paredit htmlize monky tramp tramp-loaddefs trampver
tramp-integration files-x tramp-compat shell pcomplete comint ansi-color
ring parse-time iso8601 time-date ls-lisp format-spec view ediff
ediff-merg ediff-mult ediff-wind ediff-diff ediff-help ediff-init
ediff-util bindat cl color rainbow-delimiters cl-extra help-mode paren
edmacro kmacro savehist dracula-theme hl-line use-package
use-package-ensure use-package-delight use-package-diminish
use-package-bind-key bind-key easy-mmode use-package-core finder-inf
slime-autoloads info package easymenu browse-url url-handlers url-parse
auth-source cl-seq eieio eieio-core cl-macs eieio-loaddefs
password-cache json subr-x map url-vars seq byte-opt gv bytecomp
byte-compile cconv cl-loaddefs cl-lib tooltip eldoc electric uniquify
ediff-hook vc-hooks lisp-float-type mwheel term/x-win x-win
term/common-win x-dnd tool-bar dnd fontset image regexp-opt fringe
tabulated-list replace newcomment text-mode elisp-mode lisp-mode
prog-mode register page tab-bar menu-bar rfn-eshadow isearch timer
select scroll-bar mouse jit-lock font-lock syntax facemenu font-core
term/tty-colors frame minibuffer cl-generic cham georgian utf-8-lang
misc-lang vietnamese tibetan thai tai-viet lao korean japanese eucjp-ms
cp51932 hebrew greek romanian slovak czech european ethiopic indian
cyrillic chinese composite charscript charprop case-table epa-hook
jka-cmpr-hook help simple abbrev obarray cl-preloaded nadvice loaddefs
button faces cus-face macroexp files text-properties overlay sha1 md5
base64 format env code-pages mule custom widget hashtable-print-readable
backquote threads inotify lcms2 dynamic-setting system-font-setting
font-render-setting x-toolkit x multi-tty make-network-process emacs)

Memory information:
((conses 16 1997471 1645948)
 (symbols 48 52500 1)
 (strings 32 328202 267401)
 (string-bytes 1 10837531)
 (vectors 16 133457)
 (vector-slots 8 2460308 965956)
 (floats 8 808 4810)
 (intervals 56 184154 78227)
 (buffers 1000 129))
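
[Editorial note: the "Memory information" block above is the return
value of `garbage-collect', one (TYPE SIZE-IN-BYTES USED FREE) entry
per object type.  A minimal sketch that sums SIZE x USED to get the
live Lisp data, evaluated inside the running Emacs:]

```elisp
(require 'cl-lib)

;; For each (TYPE SIZE USED . FREE) entry returned by
;; `garbage-collect', add SIZE * USED bytes; report the total in MiB.
(/ (cl-loop for (_type size used) in (garbage-collect)
            sum (* size used))
   1024.0 1024.0)
```

[For the numbers reported here this comes to roughly 84 MiB, two
orders of magnitude below the 5 GB RSS, which would point at heap
fragmentation or non-Lisp allocations rather than reachable Lisp
objects.]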

------------------------------------------------------------------
Russell Adams                            RLAdams <at> AdamsInfoServ.com

PGP Key ID:     0x1160DCB3           http://www.adamsinfoserv.com/

Fingerprint:    1723 D8CA 4280 1EC9 557F  66E8 1154 E018 1160 DCB3




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 15 Sep 2020 18:53:01 GMT) Full text and rfc822 format available.

Message #19 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 15 Sep 2020 21:52:45 +0300
> Date: Tue, 15 Sep 2020 19:54:18 +0200
> From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> 
> htop says my emacs RSS is now 5148MB. I ran M-x garbage-collect and it ran
> at 100% cpu for 5 minutes and released nothing. I also tried manually
> executing (clear-image-cache) and nothing.

Can you use some utility that produces a memory map of an application,
and see how much of those 5GB are actually free for allocation by
Emacs?  Also, do you see any libraries used by Emacs that have high
memory usage?
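
[Editorial note: on GNU/Linux, one way to get such a memory map for
the running Emacs, assuming /proc is mounted, is to visit the kernel's
per-process map from inside Emacs; `pmap -x PID' in a shell shows
equivalent data.]

```elisp
;; Open this Emacs's own memory map; each mapping is listed with its
;; size and Rss.  Comparing the mapped total against what
;; `garbage-collect' reports for Lisp data shows how much of the RSS
;; is malloc overhead, fragmentation, or library allocations.
(find-file (format "/proc/%d/smaps" (emacs-pid)))
```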

> I run Emacs 27.1 as a daemon, uptime 4 days, 3 hours, 22 minutes, 53
> seconds. Yesterday conky was reporting Emacs at 28% memory usage,
> today it's at 33%.

28% and 33% of what amount?

If your RSS is 5GB after 4 days of uptime, and the memory footprint
grows at a constant rate, it would mean more than 1GB per day.  But
I'm guessing that 33% - 28% = 5% of your total memory is much less
than 1GB.  In which case the memory footprint must sometimes jump by
very large amounts, not grow slowly and monotonically each day.
Right?  So which events cause those sudden increases in RSS?

Also, what is your value of gc-cons-threshold, and do you have some
customizations that change its value under some conditions?  If so,
please tell the details.
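
[Editorial note: `gc-cons-threshold' is the number of bytes of Lisp
allocation after which garbage collection may run.  The kind of
customization being asked about is the popular init-file pattern of
raising it for startup speed and, sometimes accidentally, never
lowering it again, which makes collections rare and lets memory grow.
A hypothetical sketch of that pattern:]

```elisp
;; The default is 800000 bytes.  A common init-file tweak raises it
;; while init files load:
(setq gc-cons-threshold (* 64 1024 1024))   ; e.g. 64 MiB during startup

;; ...and should restore a modest value afterwards; forgetting this
;; step is a frequent cause of ballooning heaps:
(add-hook 'emacs-startup-hook
          (lambda () (setq gc-cons-threshold 800000)))
```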

Thanks.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 15 Sep 2020 21:13:02 GMT) Full text and rfc822 format available.

Message #22 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 15 Sep 2020 23:12:09 +0200
On Tue, Sep 15, 2020 at 09:52:45PM +0300, Eli Zaretskii wrote:
> > htop says my emacs RSS is now 5148MB. I ran M-x garbage-collect and it ran
>
> Can you use some utility that produces a memory map of an application,
> and see how much of those 5GB are actually free for allocation by
> Emacs?

Any suggestions? I still have it running. I used htop because it shows
a sane total value.

> Also, do you see any libraries used by Emacs that have high
> memory usage?

Emacs is the top memory user on my laptop; Firefox is second at
2 GB. The rest are <1 GB.

> 28% and 33% of what amount?

16GB

> If your RSS is 5GB after 4 days of uptime, and the memory footprint
> grows at a constant rate, it would mean more than 1GB per day.  But
> I'm guessing that 33% - 28% = 5% of your total memory is much less
> than 1GB.

No, 33% is ~5GB. ;]

> In which case the memory footprint must sometimes jump by
> very large amounts, not grow slowly and monotonically each day.
> Right?  So which events cause those sudden increases in RSS?

I can't say. I have a few megs total in buffers open, and I've run
org-caldav a few times to upload. Mostly org-mode buffers open, a few
mail buffers (not gnus, just mail-mode editing mutt files), package
list, and cruft. Not actively doing any development, just editing Org
files.

I don't recall having edited any huge files in the last 4 days.

> Also, what is your value of gc-cons-threshold, and do you have some
> customizations that change its value under some conditions?  If so,
> please tell the details.

gc-cons-threshold is 800000 (#o3032400, #xc3500).
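(As an editorial aside, not part of the original message: the alternative radix
displays Emacs prints for that value are self-consistent, which a one-line shell
check confirms.)

```shell
# 800000 decimal, printed in the octal and hex forms Emacs shows
# as #o3032400 and #xc3500.
printf '#o%o #x%x\n' 800000 800000
```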

No memory-related customization that I'm aware of. The only thing that
may be relevant is my savehist settings, but that file is only 98k
(down from 500MB in Emacs 26). I've now limited my savehists.


------------------------------------------------------------------
Russell Adams                            RLAdams <at> AdamsInfoServ.com

PGP Key ID:     0x1160DCB3           http://www.adamsinfoserv.com/

Fingerprint:    1723 D8CA 4280 1EC9 557F  66E8 1154 E018 1160 DCB3




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 16 Sep 2020 14:53:02 GMT) Full text and rfc822 format available.

Message #25 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 16 Sep 2020 17:52:48 +0300
> Date: Tue, 15 Sep 2020 23:12:09 +0200
> From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> 
> > Can you use some utility that produces a memory map of an application,
> > and see how much of those 5GB are actually free for allocation by
> > Emacs?
> 
> Any suggestions?

Your Internet search is as good as mine.  This page offers some
possibilities:

  https://stackoverflow.com/questions/36523584/how-to-see-memory-layout-of-my-program-in-c-during-run-time
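(Editorial note, not from the thread: on GNU/Linux, /proc/PID/smaps lists every
mapping of a process together with its resident size, so summing the Rss: fields
answers the question above directly. A minimal awk sketch, here run on a
fabricated two-mapping sample; on a live process, point it at
/proc/$(pidof emacs)/smaps instead of the here-doc.)

```shell
# Sum resident memory from smaps-style input: total Rss, plus the Rss of
# the [heap] mapping specifically (where glibc malloc allocates).
awk '
/\[heap\]$/ { in_heap = 1 }   # header line of the heap mapping
/^Rss:/     { total += $2; if (in_heap) { heap += $2; in_heap = 0 } }
END         { printf "heap %d kB of %d kB total Rss\n", heap, total }
' <<'EOF'
56413d24a000-5642821c6000 rw-p 00000000 00:00 0                          [heap]
Rss:             5245496 kB
7f2a10000000-7f2a10100000 r-xp 00000000 08:01 12345    /usr/lib64/libgnutls.so
Rss:                 512 kB
EOF
```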

> > Also, do you see any libraries used by Emacs that have high
> > memory usage?
> 
> Emacs is the top memory usage on my laptop, firefox is second at
> 2GB. The rest are <1G.

No, I meant the shared libraries that Emacs loads.  Maybe one of them
has a leak, not Emacs's own code.

> > 28% and 33% of what amount?
> 
> 16GB
> 
> > If your RSS is 5GB after 4 days of uptime, and the memory footprint
> > grows at a constant rate, it would mean more than 1GB per day.  But
> > I'm guessing that 33% - 28% = 5% of your total memory is much less
> > than 1GB.
> 
> No, 33% is ~5GB. ;]
> 
> > In which case the memory footprint must sometimes jump by
> > very large amounts, not grow slowly and monotonically each day.
> > Right?  So which events cause those sudden increases in RSS?
> 
> I can't say.

Well, actually the above seems to indicate that your memory footprint
grows by about 1GB each day: 5% of 16GB is 0.8GB.  So maybe my guess
is wrong, and the memory does increase roughly linearly with time.
Hmm...
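(Editorial check of the arithmetic above, not part of the original message:)

```shell
# 33% - 28% = 5 percentage points of 16 GB of RAM, expressed in MB/day.
echo $(( 16 * 1024 * 5 / 100 ))   # 819 MB, i.e. ~0.8 GB per day
```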

We have discussed several times the possible effects of the fact that
glibc doesn't return malloc'ed memory to the system.  I don't think we
reached any firm conclusions about that, but it could be that some
usage patterns cause memory fragmentation, whereby small chunks of
freed memory get "trapped" between regions of used memory and cannot
be reallocated.

We used to use some specialized malloc features to prevent this, but
AFAIU they are no longer supported on modern GNU/Linux systems.

Not sure whether this is relevant to what you see.

Anyway, I think the way forward is to try to understand which code
"owns" the bulk of the 5GB memory.  Then maybe we will have some
ideas.

Thanks.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 17 Sep 2020 20:48:02 GMT) Full text and rfc822 format available.

Message #28 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 17 Sep 2020 22:47:04 +0200
From Emacs memory-usage package:

Garbage collection stats:
((conses 16 1912248 251798) (symbols 48 54872 19) (strings 32 327552 81803) (string-bytes 1 12344346) (vectors 16 158994) (vector-slots 8 2973919 339416) (floats 8 992 4604) (intervals 56 182607 7492) (buffers 1000 195))

 =>	29.2MB (+ 3.84MB dead) in conses
	2.51MB (+ 0.89kB dead) in symbols
	10.00MB (+ 2.50MB dead) in strings
	11.8MB in string-bytes
	2.43MB in vectors
	22.7MB (+ 2.59MB dead) in vector-slots
	7.75kB (+ 36.0kB dead) in floats
	9.75MB (+  410kB dead) in intervals
	 190kB in buffers

Total in lisp objects: 97.9MB (live 88.5MB, dead 9.36MB)

Buffer ralloc memory usage:
81 buffers
4.71MB total (1007kB in gaps)
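(Editorial note, not part of the original message: each entry in the
garbage-collection stats above has the shape (TYPE SIZE LIVE DEAD), and the
human-readable figures are just SIZE times the count. A sanity check for the
conses line:)

```shell
# (conses 16 1912248 251798): 16 bytes each, 1912248 live objects.
awk 'BEGIN { printf "%.1f MB\n", 16 * 1912248 / 1024 / 1024 }'   # 29.2 MB
```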

----------------------------------------------------------------------

And here is /proc/PID/smaps, which is huge, so I pastebinned it:

https://termbin.com/2sx5

Of interest is:

56413d24a000-5642821c6000 rw-p 00000000 00:00 0                          [heap]
Size:            5324272 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:             5245496 kB
Pss:             5245496 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:   5245496 kB
Referenced:      5245496 kB
Anonymous:       5245496 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
FilePmdMapped:         0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:            0
VmFlags: rd wr mr mw me ac
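(Editorial note, not from the message: the heap mapping above is resident at
5245496 kB, about 5.0 GB, while the Lisp objects in the memory-usage report
total only 97.9 MB live+dead, so roughly 98% of the heap is owned by something
other than Lisp data: fragmentation, non-Lisp allocations, or a leak. The
computation:)

```shell
# Heap Rss from smaps vs. total Lisp-object memory from memory-usage.
awk 'BEGIN {
  heap_mb = 5245496 / 1024        # resident [heap] mapping, in MB
  lisp_mb = 97.9                  # Lisp objects, live + dead
  printf "heap: %.0f MB, lisp: %.1f MB, unaccounted: %.0f%%\n",
         heap_mb, lisp_mb, 100 * (heap_mb - lisp_mb) / heap_mb
}'
```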




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 17 Sep 2020 22:42:01 GMT) Full text and rfc822 format available.

Message #31 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Joshua Branson <jbranso <at> dismail.de>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 17 Sep 2020 17:58:51 -0400
Over in the #guix IRC channel, the Guix people seemed to think it was a memory leak in helm.

I was watching my Emacs consume about 0.1% more system memory every 2 or 3 seconds. Setting

(setq helm-ff-keep-cached-candidates nil)

seemed to make the problem go away.

I also made a video, where I watched this memory usage continually go up
and then stay steady after I turned off helm-ff-keep-cached-candidates.
This happens at about the 35 minute mark.

https://video.hardlimit.com/videos/watch/3069e16a-d75c-4e40-8686-9102e40e333f

And here's the bug report on guix system:

https://issues.guix.gnu.org/43406#10


--
Joshua Branson
Sent from Emacs and Gnus




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 17 Sep 2020 22:46:01 GMT) Full text and rfc822 format available.

Message #34 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Thomas Ingram <taingram <at> mtu.edu>
To: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 17 Sep 2020 16:59:16 -0400
Hello.

I experienced something similar today: I noticed Emacs was using 3.6GB
of memory under light Org mode usage (a dozen buffers, all files
smaller than half a MB). I had to close Emacs as my computer was
locking up, but here is my report-emacs-bug output with roughly the
same workload open.

I'll try to gather more information next time I notice unusual memory usage.

Thanks.

In GNU Emacs 27.1 (build 1, x86_64-redhat-linux-gnu, GTK+ Version 
3.24.21, cairo version 1.16.0)
 of 2020-08-20 built on buildvm-x86-24.iad2.fedoraproject.org
Windowing system distributor 'Fedora Project', version 11.0.12008000
System Description: Fedora 32 (Workstation Edition)

Recent messages:
org-babel-exp process emacs-lisp at position 9286...
org-babel-exp process nil at position 9867...
org-babel-exp process make at position 10150...
Setting up indent for shell type bash
Indentation variables are now local.
Indentation setup for shell type bash
Saving file 
/home/thomas/Documents/taingram.org/html/blog/org-mode-blog.html...
Wrote /home/thomas/Documents/taingram.org/html/blog/org-mode-blog.html
Mark saved where search started
Making completion list...

Configured using:
 'configure --build=x86_64-redhat-linux-gnu
 --host=x86_64-redhat-linux-gnu --program-prefix=
 --disable-dependency-tracking --prefix=/usr --exec-prefix=/usr
 --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc
 --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64
 --libexecdir=/usr/libexec --localstatedir=/var
 --sharedstatedir=/var/lib --mandir=/usr/share/man
 --infodir=/usr/share/info --with-dbus --with-gif --with-jpeg --with-png
 --with-rsvg --with-tiff --with-xft --with-xpm --with-x-toolkit=gtk3
 --with-gpm=no --with-xwidgets --with-modules --with-harfbuzz
 --with-cairo --with-json build_alias=x86_64-redhat-linux-gnu
 host_alias=x86_64-redhat-linux-gnu 'CFLAGS=-DMAIL_USE_LOCKF -O2 -g
 -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2
 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong
 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
 -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection'
 LDFLAGS=-Wl,-z,relro
 PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'

Configured features:
XPM JPEG TIFF GIF PNG RSVG CAIRO SOUND DBUS GSETTINGS GLIB NOTIFY
INOTIFY ACL LIBSELINUX GNUTLS LIBXML2 FREETYPE HARFBUZZ M17N_FLT LIBOTF
ZLIB TOOLKIT_SCROLL_BARS GTK3 X11 XDBE XIM MODULES THREADS XWIDGETS
LIBSYSTEMD JSON PDUMPER GMP

Important settings:
  value of $LANG: en_US.UTF-8
  value of $XMODIFIERS: @im=ibus
  locale-coding-system: utf-8-unix

Major mode: Org

Minor modes in effect:
  flyspell-mode: t
  shell-dirtrack-mode: t
  global-company-mode: t
  company-mode: t
  override-global-mode: t
  recentf-mode: t
  tooltip-mode: t
  global-eldoc-mode: t
  electric-indent-mode: t
  mouse-wheel-mode: t
  menu-bar-mode: t
  file-name-shadow-mode: t
  global-font-lock-mode: t
  font-lock-mode: t
  blink-cursor-mode: t
  auto-composition-mode: t
  auto-encryption-mode: t
  auto-compression-mode: t
  column-number-mode: t
  line-number-mode: t
  auto-fill-function: org-auto-fill-function
  transient-mark-mode: t

Load-path shadows:
/home/thomas/.config/emacs/elpa/xref-1.0.3/xref hides 
/usr/share/emacs/27.1/lisp/progmodes/xref
/home/thomas/.config/emacs/elpa/flymake-1.0.9/flymake hides 
/usr/share/emacs/27.1/lisp/progmodes/flymake
/home/thomas/.config/emacs/elpa/project-0.5.2/project hides 
/usr/share/emacs/27.1/lisp/progmodes/project
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-screen 
hides /usr/share/emacs/27.1/lisp/org/ob-screen
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-table 
hides /usr/share/emacs/27.1/lisp/org/org-table
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-lisp hides 
/usr/share/emacs/27.1/lisp/org/ob-lisp
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-core hides 
/usr/share/emacs/27.1/lisp/org/ob-core
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-md hides 
/usr/share/emacs/27.1/lisp/org/ox-md
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-calc hides 
/usr/share/emacs/27.1/lisp/org/ob-calc
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-crypt 
hides /usr/share/emacs/27.1/lisp/org/org-crypt
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-plot hides 
/usr/share/emacs/27.1/lisp/org/org-plot
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-awk hides 
/usr/share/emacs/27.1/lisp/org/ob-awk
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-perl hides 
/usr/share/emacs/27.1/lisp/org/ob-perl
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-org hides 
/usr/share/emacs/27.1/lisp/org/ox-org
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-odt hides 
/usr/share/emacs/27.1/lisp/org/ox-odt
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-ebnf hides 
/usr/share/emacs/27.1/lisp/org/ob-ebnf
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-ditaa hides 
/usr/share/emacs/27.1/lisp/org/ob-ditaa
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-ocaml hides 
/usr/share/emacs/27.1/lisp/org/ob-ocaml
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-install 
hides /usr/share/emacs/27.1/lisp/org/org-install
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-sql hides 
/usr/share/emacs/27.1/lisp/org/ob-sql
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-js hides 
/usr/share/emacs/27.1/lisp/org/ob-js
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-org hides 
/usr/share/emacs/27.1/lisp/org/ob-org
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-pcomplete 
hides /usr/share/emacs/27.1/lisp/org/org-pcomplete
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-exp hides 
/usr/share/emacs/27.1/lisp/org/ob-exp
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-src hides 
/usr/share/emacs/27.1/lisp/org/org-src
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-java hides 
/usr/share/emacs/27.1/lisp/org/ob-java
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-stan hides 
/usr/share/emacs/27.1/lisp/org/ob-stan
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-mscgen 
hides /usr/share/emacs/27.1/lisp/org/ob-mscgen
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-gnus hides 
/usr/share/emacs/27.1/lisp/org/ol-gnus
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-shell hides 
/usr/share/emacs/27.1/lisp/org/ob-shell
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-matlab 
hides /usr/share/emacs/27.1/lisp/org/ob-matlab
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-lilypond 
hides /usr/share/emacs/27.1/lisp/org/ob-lilypond
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-bibtex 
hides /usr/share/emacs/27.1/lisp/org/ol-bibtex
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-num hides 
/usr/share/emacs/27.1/lisp/org/org-num
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-coq hides 
/usr/share/emacs/27.1/lisp/org/ob-coq
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-ruby hides 
/usr/share/emacs/27.1/lisp/org/ob-ruby
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-compat 
hides /usr/share/emacs/27.1/lisp/org/org-compat
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-J hides 
/usr/share/emacs/27.1/lisp/org/ob-J
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-ctags 
hides /usr/share/emacs/27.1/lisp/org/org-ctags
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-goto hides 
/usr/share/emacs/27.1/lisp/org/org-goto
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-archive 
hides /usr/share/emacs/27.1/lisp/org/org-archive
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-clojure 
hides /usr/share/emacs/27.1/lisp/org/ob-clojure
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-macro 
hides /usr/share/emacs/27.1/lisp/org/org-macro
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-picolisp 
hides /usr/share/emacs/27.1/lisp/org/ob-picolisp
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-haskell 
hides /usr/share/emacs/27.1/lisp/org/ob-haskell
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-attach-git 
hides /usr/share/emacs/27.1/lisp/org/org-attach-git
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-agenda 
hides /usr/share/emacs/27.1/lisp/org/org-agenda
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-tempo 
hides /usr/share/emacs/27.1/lisp/org/org-tempo
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-inlinetask 
hides /usr/share/emacs/27.1/lisp/org/org-inlinetask
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-forth hides 
/usr/share/emacs/27.1/lisp/org/ob-forth
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-latex hides 
/usr/share/emacs/27.1/lisp/org/ox-latex
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-vala hides 
/usr/share/emacs/27.1/lisp/org/ob-vala
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-protocol 
hides /usr/share/emacs/27.1/lisp/org/org-protocol
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol hides 
/usr/share/emacs/27.1/lisp/org/ol
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-emacs-lisp 
hides /usr/share/emacs/27.1/lisp/org/ob-emacs-lisp
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-icalendar 
hides /usr/share/emacs/27.1/lisp/org/ox-icalendar
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-element 
hides /usr/share/emacs/27.1/lisp/org/org-element
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-texinfo 
hides /usr/share/emacs/27.1/lisp/org/ox-texinfo
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-tangle 
hides /usr/share/emacs/27.1/lisp/org/ob-tangle
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-fortran 
hides /usr/share/emacs/27.1/lisp/org/ob-fortran
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-ledger 
hides /usr/share/emacs/27.1/lisp/org/ob-ledger
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-eww hides 
/usr/share/emacs/27.1/lisp/org/ol-eww
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-sqlite 
hides /usr/share/emacs/27.1/lisp/org/ob-sqlite
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-publish 
hides /usr/share/emacs/27.1/lisp/org/ox-publish
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-C hides 
/usr/share/emacs/27.1/lisp/org/ob-C
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-octave 
hides /usr/share/emacs/27.1/lisp/org/ob-octave
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-attach 
hides /usr/share/emacs/27.1/lisp/org/org-attach
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-hledger 
hides /usr/share/emacs/27.1/lisp/org/ob-hledger
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-entities 
hides /usr/share/emacs/27.1/lisp/org/org-entities
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox hides 
/usr/share/emacs/27.1/lisp/org/ox
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-mobile 
hides /usr/share/emacs/27.1/lisp/org/org-mobile
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-indent 
hides /usr/share/emacs/27.1/lisp/org/org-indent
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-list hides 
/usr/share/emacs/27.1/lisp/org/org-list
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-keys hides 
/usr/share/emacs/27.1/lisp/org/org-keys
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-lob hides 
/usr/share/emacs/27.1/lisp/org/ob-lob
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-rmail hides 
/usr/share/emacs/27.1/lisp/org/ol-rmail
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-macs hides 
/usr/share/emacs/27.1/lisp/org/org-macs
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-w3m hides 
/usr/share/emacs/27.1/lisp/org/ol-w3m
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-mhe hides 
/usr/share/emacs/27.1/lisp/org/ol-mhe
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-maxima 
hides /usr/share/emacs/27.1/lisp/org/ob-maxima
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-lua hides 
/usr/share/emacs/27.1/lisp/org/ob-lua
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-css hides 
/usr/share/emacs/27.1/lisp/org/ob-css
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-lint hides 
/usr/share/emacs/27.1/lisp/org/org-lint
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-irc hides 
/usr/share/emacs/27.1/lisp/org/ol-irc
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org hides 
/usr/share/emacs/27.1/lisp/org/org
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-shen hides 
/usr/share/emacs/27.1/lisp/org/ob-shen
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-bbdb hides 
/usr/share/emacs/27.1/lisp/org/ol-bbdb
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-datetree 
hides /usr/share/emacs/27.1/lisp/org/org-datetree
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-plantuml 
hides /usr/share/emacs/27.1/lisp/org/ob-plantuml
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-feed hides 
/usr/share/emacs/27.1/lisp/org/org-feed
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-capture 
hides /usr/share/emacs/27.1/lisp/org/org-capture
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-habit 
hides /usr/share/emacs/27.1/lisp/org/org-habit
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-sass hides 
/usr/share/emacs/27.1/lisp/org/ob-sass
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-footnote 
hides /usr/share/emacs/27.1/lisp/org/org-footnote
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-timer 
hides /usr/share/emacs/27.1/lisp/org/org-timer
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-duration 
hides /usr/share/emacs/27.1/lisp/org/org-duration
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-R hides 
/usr/share/emacs/27.1/lisp/org/ob-R
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-faces 
hides /usr/share/emacs/27.1/lisp/org/org-faces
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-comint 
hides /usr/share/emacs/27.1/lisp/org/ob-comint
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-docview 
hides /usr/share/emacs/27.1/lisp/org/ol-docview
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-man hides 
/usr/share/emacs/27.1/lisp/org/ox-man
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-ascii hides 
/usr/share/emacs/27.1/lisp/org/ox-ascii
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-eval hides 
/usr/share/emacs/27.1/lisp/org/ob-eval
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-version 
hides /usr/share/emacs/27.1/lisp/org/org-version
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob hides 
/usr/share/emacs/27.1/lisp/org/ob
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-table hides 
/usr/share/emacs/27.1/lisp/org/ob-table
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-colview 
hides /usr/share/emacs/27.1/lisp/org/org-colview
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-clock 
hides /usr/share/emacs/27.1/lisp/org/org-clock
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-eshell 
hides /usr/share/emacs/27.1/lisp/org/ob-eshell
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-sed hides 
/usr/share/emacs/27.1/lisp/org/ob-sed
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-ref hides 
/usr/share/emacs/27.1/lisp/org/ob-ref
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-io hides 
/usr/share/emacs/27.1/lisp/org/ob-io
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-html hides 
/usr/share/emacs/27.1/lisp/org/ox-html
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-abc hides 
/usr/share/emacs/27.1/lisp/org/ob-abc
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-id hides 
/usr/share/emacs/27.1/lisp/org/org-id
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-asymptote 
hides /usr/share/emacs/27.1/lisp/org/ob-asymptote
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-scheme 
hides /usr/share/emacs/27.1/lisp/org/ob-scheme
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-python 
hides /usr/share/emacs/27.1/lisp/org/ob-python
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-info hides 
/usr/share/emacs/27.1/lisp/org/ol-info
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-groovy 
hides /usr/share/emacs/27.1/lisp/org/ob-groovy
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-latex hides 
/usr/share/emacs/27.1/lisp/org/ob-latex
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-dot hides 
/usr/share/emacs/27.1/lisp/org/ob-dot
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-loaddefs 
hides /usr/share/emacs/27.1/lisp/org/org-loaddefs
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ox-beamer 
hides /usr/share/emacs/27.1/lisp/org/ox-beamer
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/org-mouse 
hides /usr/share/emacs/27.1/lisp/org/org-mouse
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ol-eshell 
hides /usr/share/emacs/27.1/lisp/org/ol-eshell
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-processing 
hides /usr/share/emacs/27.1/lisp/org/ob-processing
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-gnuplot 
hides /usr/share/emacs/27.1/lisp/org/ob-gnuplot
/home/thomas/.config/emacs/elpa/org-plus-contrib-20200907/ob-makefile 
hides /usr/share/emacs/27.1/lisp/org/ob-makefile
/home/thomas/.config/emacs/elpa/eldoc-1.10.0/eldoc hides 
/usr/share/emacs/27.1/lisp/emacs-lisp/eldoc

Features:
(misearch multi-isearch mhtml-mode css-mode eww mm-url url-queue color
js cc-mode cc-fonts cc-guess cc-menus cc-cmds cc-styles cc-align
cc-engine cc-vars cc-defs sgml-mode url-http url-auth url-gw nsm
sh-script smie executable htmlize mule-util ibuf-ext ibuffer
ibuffer-loaddefs pp shadow sort mail-extr eieio-opt speedbar sb-image
ezimage dframe help-fns radix-tree emacsbug sendmail imenu man go-mode
find-file ffap rx vc-git diff-mode org-eldoc flyspell ispell ol-eww
ol-rmail ol-mhe ol-irc ol-info ol-gnus nnir gnus-sum url url-proxy
url-privacy url-expand url-methods url-history mailcap shr url-cookie
url-domsuf url-util svg dom gnus-group gnus-undo gnus-start gnus-cloud
nnimap nnmail mail-source utf7 netrc nnoo parse-time iso8601 gnus-spec
gnus-int gnus-range message rmc puny rfc822 mml mml-sec epa derived epg
epg-config mm-decode mm-bodies mm-encode mail-parse rfc2231 mailabbrev
gmm-utils mailheader gnus-win gnus nnheader gnus-util rmail
rmail-loaddefs rfc2047 rfc2045 ietf-drums text-property-search
mail-utils mm-util mail-prsvr ol-docview doc-view jka-compr image-mode
exif ol-bibtex bibtex ol-bbdb ol-w3m org-tempo tempo ox-odt rng-loc
rng-uri rng-parse rng-match rng-dt rng-util rng-pttrn nxml-parse nxml-ns
nxml-enc xmltok nxml-util ox-latex ox-icalendar ox-html table ox-ascii
ox-publish ox org-element avl-tree ob-latex ob-shell shell org ob
ob-tangle ob-ref ob-lob ob-table ob-exp org-macro org-footnote org-src
ob-comint org-pcomplete pcomplete org-list org-faces org-entities
noutline outline org-version ob-emacs-lisp ob-core ob-eval org-table ol
org-keys org-compat advice org-macs org-loaddefs format-spec find-func
cal-menu calendar cal-loaddefs dired dired-loaddefs time-date checkdoc
lisp-mnt flymake-proc flymake compile comint ansi-color warnings
thingatpt modus-operandi-theme company-oddmuse company-keywords
company-etags etags fileloop generator xref project ring company-gtags
company-dabbrev-code company-dabbrev company-files company-clang
company-capf company-cmake company-semantic company-template
company-bbdb company pcase delight cl-extra help-mode use-package
use-package-ensure use-package-delight use-package-diminish
use-package-bind-key bind-key easy-mmode use-package-core finder-inf
edmacro kmacro recentf tree-widget wid-edit clang-rename
clang-include-fixer let-alist clang-format xml info package easymenu
browse-url url-handlers url-parse auth-source cl-seq eieio eieio-core
cl-macs eieio-loaddefs password-cache json subr-x map url-vars seq
byte-opt gv bytecomp byte-compile cconv cl-loaddefs cl-lib tooltip eldoc
electric uniquify ediff-hook vc-hooks lisp-float-type mwheel term/x-win
x-win term/common-win x-dnd tool-bar dnd fontset image regexp-opt fringe
tabulated-list replace newcomment text-mode elisp-mode lisp-mode
prog-mode register page tab-bar menu-bar rfn-eshadow isearch timer
select scroll-bar mouse jit-lock font-lock syntax facemenu font-core
term/tty-colors frame minibuffer cl-generic cham georgian utf-8-lang
misc-lang vietnamese tibetan thai tai-viet lao korean japanese eucjp-ms
cp51932 hebrew greek romanian slovak czech european ethiopic indian
cyrillic chinese composite charscript charprop case-table epa-hook
jka-cmpr-hook help simple abbrev obarray cl-preloaded nadvice loaddefs
button faces cus-face macroexp files text-properties overlay sha1 md5
base64 format env code-pages mule custom widget hashtable-print-readable
backquote threads dbusbind inotify dynamic-setting system-font-setting
font-render-setting xwidget-internal cairo move-toolbar gtk x-toolkit x
multi-tty make-network-process emacs)

Memory information:
((conses 16 468606 317258)
 (symbols 48 38138 118)
 (strings 32 160466 36787)
 (string-bytes 1 4836226)
 (vectors 16 59254)
 (vector-slots 8 1357600 343876)
 (floats 8 443 1316)
 (intervals 56 2105 1619)
 (buffers 1000 37))





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 17 Sep 2020 23:10:01 GMT) Full text and rfc822 format available.

Message #37 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Fri, 18 Sep 2020 01:09:33 +0200
I haven't tried to recreate it yet; I still have it open, monitoring
whether it grows and hoping to find something useful in the existing
process.

On Thu, Sep 17, 2020 at 05:58:51PM -0400, Joshua Branson via Bug reports for GNU Emacs, the Swiss army knife of text editors wrote:
>
> Over in #guix irc, the guix people seemed to think it was a memory leak with helm.
>
> I was watching my emacs consume about 0.1% more system memory every 2 or 3 seconds. Setting
>
> (setq helm-ff-keep-cached-candidates nil)
>
> Seemed to make the problem go away.
>
> I also made a video, where I watched this memory usage continually go up
> and then stay steady after I turned off helm-ff-keep-cached-candidates.
> This happens at about the 35 minute mark.
>
> https://video.hardlimit.com/videos/watch/3069e16a-d75c-4e40-8686-9102e40e333f
>
> And here's the bug report on guix system:
>
> https://issues.guix.gnu.org/43406#10
>
>
> --
> Joshua Branson
> Sent from Emacs and Gnus
>
>
>





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 18 Sep 2020 06:57:01 GMT) Full text and rfc822 format available.

Message #40 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Joshua Branson <jbranso <at> dismail.de>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Fri, 18 Sep 2020 09:56:14 +0300
> Date: Thu, 17 Sep 2020 17:58:51 -0400
> From: Joshua Branson via "Bug reports for GNU Emacs,
>  the Swiss army knife of text editors" <bug-gnu-emacs <at> gnu.org>
> 
> 
> Over in #guix irc, the guix people seemed to think it was a memory leak with helm.

Thanks.

But if it's due to helm, why doesn't the huge memory usage show in the
report produced by GC?  That report should show all the Lisp objects
that we allocate and manage, no?  Where does helm-ff-cache keep those
"candidates"?  (And what is this cache, if someone could be kind
enough to describe it?)




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 18 Sep 2020 07:55:02 GMT) Full text and rfc822 format available.

Message #43 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Robert Pluim <rpluim <at> gmail.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, Joshua Branson <jbranso <at> dismail.de>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Fri, 18 Sep 2020 09:53:59 +0200
>>>>> On Fri, 18 Sep 2020 09:56:14 +0300, Eli Zaretskii <eliz <at> gnu.org> said:

    >> Date: Thu, 17 Sep 2020 17:58:51 -0400
    >> From: Joshua Branson via "Bug reports for GNU Emacs,
    >> the Swiss army knife of text editors" <bug-gnu-emacs <at> gnu.org>
    >> 
    >> 
    >> Over in #guix irc, the guix people seemed to think it was a memory leak with helm.

    Eli> Thanks.

    Eli> But if it's due to helm, why doesn't the huge memory usage show in the
    Eli> report produced by GC?  That report should show all the Lisp objects
    Eli> that we allocate and manage, no?  Where does helm-ff-cache keep those
    Eli> "candidates"?  (And what is this cache, if someone could be kind
    Eli> enough to describe it?)

Itʼs a hash table. It caches directory contents, as far as I can tell.

Robert




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 18 Sep 2020 08:14:01 GMT) Full text and rfc822 format available.

Message #46 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Robert Pluim <rpluim <at> gmail.com>
Cc: 43389 <at> debbugs.gnu.org, jbranso <at> dismail.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Fri, 18 Sep 2020 11:13:05 +0300
> From: Robert Pluim <rpluim <at> gmail.com>
> Cc: Joshua Branson <jbranso <at> dismail.de>,  43389 <at> debbugs.gnu.org
> Date: Fri, 18 Sep 2020 09:53:59 +0200
> 
>     Eli> But if it's due to helm, why doesn't the huge memory usage show in the
>     Eli> report produced by GC?  That report should show all the Lisp objects
>     Eli> that we allocate and manage, no?  Where does helm-ff-cache keep those
>     Eli> "candidates"?  (And what is this cache, if someone could be kind
>     Eli> enough to describe it?)
> 
> Itʼs a hash table. It caches directory contents, as far as I can tell.

Then its memory usage should be part of the GC report, no?

I guess, if this helm feature is really the culprit, then the growth
of memory footprint is not due to the hash-table itself, but to
something else, which is not a Lisp object and gets allocated via
direct calls to malloc or something?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 18 Sep 2020 08:23:02 GMT) Full text and rfc822 format available.

Message #49 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Fri, 18 Sep 2020 11:22:54 +0300
> Date: Thu, 17 Sep 2020 22:47:04 +0200
> From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> 
> >From Emacs memory-usage package:
> 
> Garbage collection stats:
> ((conses 16 1912248 251798) (symbols 48 54872 19) (strings 32 327552 81803) (string-bytes 1 12344346) (vectors 16 158994) (vector-slots 8 2973919 339416) (floats 8 992 4604) (intervals 56 182607 7492) (buffers 1000 195))
> 
>  =>	29.2MB (+ 3.84MB dead) in conses
> 	2.51MB (+ 0.89kB dead) in symbols
> 	10.00MB (+ 2.50MB dead) in strings
> 	11.8MB in string-bytes
> 	2.43MB in vectors
> 	22.7MB (+ 2.59MB dead) in vector-slots
> 	7.75kB (+ 36.0kB dead) in floats
> 	9.75MB (+  410kB dead) in intervals
> 	 190kB in buffers
> 
> Total in lisp objects: 97.9MB (live 88.5MB, dead 9.36MB)
> 
> Buffer ralloc memory usage:
> 81 buffers
> 4.71MB total (1007kB in gaps)
> 
> ----------------------------------------------------------------------
> 
> And /proc/PID/smaps which is huge so I pastebinned it.
> 
> https://termbin.com/2sx5

Thanks.

> 56413d24a000-5642821c6000 rw-p 00000000 00:00 0                          [heap]
> Size:            5324272 kB
> KernelPageSize:        4 kB
> MMUPageSize:           4 kB
> Rss:             5245496 kB
> Pss:             5245496 kB
> Shared_Clean:          0 kB
> Shared_Dirty:          0 kB
> Private_Clean:         0 kB
> Private_Dirty:   5245496 kB
> Referenced:      5245496 kB
> Anonymous:       5245496 kB
> LazyFree:              0 kB
> AnonHugePages:         0 kB
> ShmemPmdMapped:        0 kB
> FilePmdMapped:         0 kB
> Shared_Hugetlb:        0 kB
> Private_Hugetlb:       0 kB
> Swap:                  0 kB
> SwapPss:               0 kB
> Locked:                0 kB
> THPeligible:            0
> VmFlags: rd wr mr mw me ac

So it seems to be our heap that takes most of the 5GB.

It might be interesting to see which operations/commands cause this
part to increase.
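One low-overhead way to do that is to poll the [heap] mapping in
/proc/PID/smaps between suspect commands and watch its Rss.  A minimal
sketch (Linux-specific; it parses captured text here, and the sample is
the mapping quoted above):

```python
import re

def heap_rss_kb(smaps_text):
    """Return the Rss (in kB) of the [heap] mapping from smaps content."""
    in_heap = False
    for line in smaps_text.splitlines():
        if re.match(r"^[0-9a-f]+-[0-9a-f]+ ", line):
            # A new mapping header starts; note whether it is the heap.
            in_heap = line.endswith("[heap]")
        elif in_heap:
            m = re.match(r"Rss:\s+(\d+) kB", line)
            if m:
                return int(m.group(1))
    return None

sample = """\
56413d24a000-5642821c6000 rw-p 00000000 00:00 0                          [heap]
Size:            5324272 kB
KernelPageSize:        4 kB
Rss:             5245496 kB"""

print(heap_rss_kb(sample))  # -> 5245496
```

On a live system one would read /proc/PID/smaps before and after each
command and diff the returned numbers.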




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 20 Sep 2020 20:09:02 GMT) Full text and rfc822 format available.

Message #52 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: jbranso <at> dismail.de
To: "Eli Zaretskii" <eliz <at> gnu.org>, "Robert Pluim" <rpluim <at> gmail.com>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Sun, 20 Sep 2020 20:08:17 +0000
Maybe I spoke a little too soon. I just saw two related bug reports and thought I would connect them. Ludo actually closed the bug in Guix System.  He confirmed that for him, helm seemed to be the problem.  

September 18, 2020 4:12 AM, "Eli Zaretskii" <eliz <at> gnu.org> wrote:

>> From: Robert Pluim <rpluim <at> gmail.com>
>> Cc: Joshua Branson <jbranso <at> dismail.de>, 43389 <at> debbugs.gnu.org
>> Date: Fri, 18 Sep 2020 09:53:59 +0200
>> 
>> Eli> But if it's due to helm, why doesn't the huge memory usage show in the
>> Eli> report produced by GC? That report should show all the Lisp objects
>> Eli> that we allocate and manage, no? Where does helm-ff-cache keep those
>> Eli> "candidates"? (And what is this cache, if someone could be kind
>> Eli> enough to describe it?)
>> 
>> Itʼs a hash table. It caches directory contents, as far as I can tell.
> 
> Then its memory usage should be part of the GC report, no?
> 
> I guess, if this helm feature is really the culprit, then the growth
> of memory footprint is not due to the hash-table itself, but to
> something else, which is not a Lisp object and gets allocated via
> direct calls to malloc or something?




Merged 43389 43395 43876. Request was from Eli Zaretskii <eliz <at> gnu.org> to control <at> debbugs.gnu.org. (Fri, 09 Oct 2020 06:59:03 GMT) Full text and rfc822 format available.

Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 29 Oct 2020 21:42:01 GMT) Full text and rfc822 format available.

Message #57 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 29 Oct 2020 21:17:20 +0100
I'm regularly encountering a bug that might be this one.  As with 
the previous posters, one of my emacs instances regularly grows up 
to 7-10 GB.  Garbage collection shows emacs is only aware of 
~250MB and has nothing to collect, and /proc/<pid>/smaps shows all 
of the usage in the heap.

The only emacs instance that hits this is the one I use the 
"emacs-slack" package in, which means long-lived HTTPS 
connections.  I'm aware that this is a relatively unusual use of 
emacs.

It doesn't start leaking until it has been active for 2-3 days. 
It might depend on other factors, such as suspending or losing 
network connectivity.  Once the leak triggers, it grows at a rate 
of about 1MB every few seconds. My machine has 32GB, so it gets 
pretty far before I notice and kill it. I'm not sure if there is a 
limit.

I built emacs with debug symbols and dumped some strace logs last 
time it happened.  This is from the "native-comp" branch, since 
it's the only one I had built with debug symbols:  GNU Emacs 
28.0.50, commit feed53f8b5da0e58cce412cd41a52883dba6c1be.  I see 
the same with the version installed from my package manager (Arch, 
GNU Emacs 27.1), and the strace log looks about the same, though 
without symbols.

I waited until it was actively leaking, and then ran the following 
command to print a stack trace whenever the heap is extended with 
brk():

$ sudo strace -p $PID -k -r --trace="?brk" --signal="SIGTERM"

The findings: this particular leak is triggered in libgnutls.  I 
get large batches of the following (truncated) stack trace:

--- SNIP ---
> /usr/lib/libc-2.32.so(brk+0xb) [0xf6e7b] 
> /usr/lib/libc-2.32.so(__sbrk+0x84) [0xf6f54] 
> /usr/lib/libc-2.32.so(__default_morecore+0xd) [0x8d80d] 
> /usr/lib/libc-2.32.so(sysmalloc+0x372) [0x890e2] 
> /usr/lib/libc-2.32.so(_int_malloc+0xd9e) [0x8ad6e] 
> /usr/lib/libc-2.32.so(__libc_malloc+0x1c1) [0x8be51]
> /usr/lib/libgnutls.so.30.28.1(gnutls_session_ticket_send+0x566) 
> [0x3cc36] 
> /usr/lib/libgnutls.so.30.28.1(gnutls_record_check_corked+0xc0a) 
> [0x3e42a] 
> /usr/lib/libgnutls.so.30.28.1(gnutls_transport_get_int+0x11b1) 
> [0x34d31] 
> /usr/lib/libgnutls.so.30.28.1(gnutls_transport_get_int+0x3144) 
> [0x36cc4] 
> /home/trevor/applications/opt/bin/emacs-28.0.50(emacs_gnutls_read+0x5d) 
> [0x2e40a7] 
> /home/trevor/applications/opt/bin/emacs-28.0.50(read_process_output+0x28e) 
> [0x2def18] 
--- SNIP ---

A larger log file is available here: 
http://trevorbentley.com/emacs_strace.log

I'm not sure if gnutls is giving back buffers that emacs is 
supposed to free, or if the leak is entirely contained within 
gnutls, but something in that path is hanging on to a lot of 
allocations indefinitely.

Hope this is useful, and let me know if I can provide any other 
information that would be helpful.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 30 Oct 2020 08:01:01 GMT) Full text and rfc822 format available.

Message #60 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Fri, 30 Oct 2020 10:00:29 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Date: Thu, 29 Oct 2020 21:17:20 +0100
> 
> It doesn't start leaking until it has been active for 2-3 days. 
> It might depend on other factors, such as suspending or losing 
> network connectivity.  Once the leak triggers, it grows at a rate 
> of about 1MB every few seconds. My machine has 32GB, so it gets 
> pretty far before I notice and kill it. I'm not sure if there is a 
> limit.
> 
> I built emacs with debug symbols and dumped some strace logs last 
> time it happened.  This is from the "native-comp" branch, since 
> it's the only one I had built with debug symbols:  GNU Emacs 
> 28.0.50, commit feed53f8b5da0e58cce412cd41a52883dba6c1be.  I see 
> the same with the version installed from my package manager (Arch, 
> GNU Emacs 27.1), and the strace log looks about the same, though 
> without symbols.
> 
> I waited until it was actively leaking, and then ran the following 
> command to print a stack trace whenever the heap is extended with 
> brk():
> 
> $ sudo strace -p $PID -k -r --trace="?brk" --signal="SIGTERM"
> 
> The findings: this particular leak is triggered in libgnutls.  I 
> get large batches of the following (truncated) stack trace

Thanks.  This trace doesn't show how many bytes were allocated, does
it?  Without that it is hard to judge whether these GnuTLS calls could
be the culprit.  Because the full trace shows other calls to malloc,
for example this:

   > /usr/lib/libc-2.32.so(brk+0xb) [0xf6e7b]
   > /usr/lib/libc-2.32.so(__sbrk+0x84) [0xf6f54]
   > /usr/lib/libc-2.32.so(__default_morecore+0xd) [0x8d80d]
   > /usr/lib/libc-2.32.so(sysmalloc+0x372) [0x890e2]
   > /usr/lib/libc-2.32.so(_int_malloc+0xd9e) [0x8ad6e]
   > /usr/lib/libc-2.32.so(_int_memalign+0x3f) [0x8b01f]
   > /usr/lib/libc-2.32.so(_mid_memalign+0x13c) [0x8c12c]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(lisp_align_malloc+0x2e) [0x2364ee]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(Fcons+0x65) [0x237f74]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(store_in_alist+0x5f) [0x5c9a3]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(gui_report_frame_params+0x46a) [0x607f1]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(Fframe_parameters+0x499) [0x5d88b]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(Fframe_parameter+0x381) [0x5dc9c]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(eval_sub+0x7a7) [0x26f964]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(Fif+0x1f) [0x26b590]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(eval_sub+0x38b) [0x26f548]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(Feval+0x7a) [0x26ef45]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(funcall_subr+0x257) [0x271463]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(Ffuncall+0x192) [0x270fe9]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(internal_condition_case_n+0xa1) [0x26d81a]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(safe__call+0x211) [0x73943]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(safe__call1+0xba) [0x73b47]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(safe__eval+0x35) [0x73bd7]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(display_mode_element+0xe32) [0xb5515]

This seems to indicate some mode-line element that uses :eval, but
without knowing what it does it is hard to say anything more specific.

I also see this:

   > /home/trevor/applications/opt/bin/emacs-28.0.50(_start+0x2e) [0x4598e]
       2.870962 brk(0x55f5ed9a4000)       = 0x55f5ed9a4000
   > /usr/lib/libc-2.32.so(brk+0xb) [0xf6e7b]
   > /usr/lib/libc-2.32.so(__sbrk+0x84) [0xf6f54]
   > /usr/lib/libc-2.32.so(__default_morecore+0xd) [0x8d80d]
   > /usr/lib/libc-2.32.so(sysmalloc+0x372) [0x890e2]
   > /usr/lib/libc-2.32.so(_int_malloc+0xd9e) [0x8ad6e]
   > /usr/lib/libc-2.32.so(_int_memalign+0x3f) [0x8b01f]
   > /usr/lib/libc-2.32.so(_mid_memalign+0x13c) [0x8c12c]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(lisp_align_malloc+0x2e) [0x2364ee]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(Fcons+0x65) [0x237f74]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(Fmake_list+0x4f) [0x238544]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(concat+0x5c3) [0x2792f6]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(Fcopy_sequence+0x16a) [0x278d2a]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(timer_check+0x33) [0x1b79dd]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(readable_events+0x1a) [0x1b5d00]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(get_input_pending+0x2f) [0x1bcf3a]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(detect_input_pending_run_timers+0x2e) [0x1c4eb1]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(wait_reading_process_output+0x14ec) [0x2de0c0]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(sit_for+0x211) [0x53e78]
   > /home/trevor/applications/opt/bin/emacs-28.0.50(read_char+0x1019) [0x1b3f62]

This indicates some timer that runs; again, without knowing which
timer and what it does, it is hard to proceed.

Etc. etc. -- the bottom line is that I think we need to know how many
bytes are allocated in each call to make some progress.  It would be
even more useful if we could somehow know which of the allocated
buffers are free'd soon and which aren't.  That's because Emacs calls
memory allocation functions _a_lot_, and it is completely normal to
see a lot of these calls.  What we need is to find allocations that
don't get free'd, and whose byte counts come close to explaining the
rate of 1MB every few seconds.  So these calls need to be filtered
somehow, otherwise we will not see the forest for the gazillion trees.
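One way to do that filtering, sketched under the assumption that the
strace -k log prints each brk() line followed by its "> " stack frames
(as in the excerpts above): attribute the delta between successive
program breaks to the first non-libc frame, then rank callers by total
growth.

```python
import re
from collections import defaultdict

BRK_RE = re.compile(r"brk\(0x[0-9a-f]+\)\s*=\s*(0x[0-9a-f]+)")
FRAME_RE = re.compile(r">\s*(\S+)\((\w+)")

def growth_by_caller(lines):
    """Sum heap growth per innermost non-libc symbol in strace -k output."""
    totals = defaultdict(int)
    prev_brk = None
    pending = 0  # bytes of growth waiting to be attributed to a caller
    for line in lines:
        m = BRK_RE.search(line)
        if m:
            new_brk = int(m.group(1), 16)
            pending = new_brk - prev_brk if prev_brk is not None and new_brk > prev_brk else 0
            prev_brk = new_brk
            continue
        f = FRAME_RE.search(line)
        if f and pending and "libc" not in f.group(1):
            totals[f.group(2)] += pending  # credit the first non-libc frame
            pending = 0
    return dict(totals)

log = [
    "     0.000001 brk(0x1000)       = 0x1000",
    "> /usr/lib/libc-2.32.so(brk+0xb) [0xf6e7b]",
    "> /home/u/emacs(lisp_align_malloc+0x2e) [0x2364ee]",
    "     2.870962 brk(0x3000)       = 0x3000",
    "> /usr/lib/libc-2.32.so(brk+0xb) [0xf6e7b]",
    "> /home/u/emacs(emacs_gnutls_read+0x5d) [0x2e40a7]",
]
print(growth_by_caller(log))  # -> {'emacs_gnutls_read': 8192}
```

Run over the full log, sorting the totals should show whether
emacs_gnutls_read (or something else entirely) accounts for the
1MB-every-few-seconds rate.  The log lines above are synthetic, modeled
on the quoted trace; the heuristic only counts brk() growth, not mmap'd
allocations.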

> I'm not sure if gnutls is giving back buffers that emacs is 
> supposed to free, or if the leak is entirely contained within 
> gnutls, but something in that path is hanging on to a lot of 
> allocations indefinitely.

The GnuTLS functions we call in emacs_gnutls_read are:

  gnutls_record_recv
  emacs_gnutls_handle_error

The latter is only called if there's an error, so I'm guessing it is
not part of your trace.  And the former doesn't say in its
documentation that Emacs should free any buffers after calling it, so
I'm not sure how Emacs could be the culprit here.  If GnuTLS is the
culprit (and as explained above, this is not certain at this point),
perhaps upgrading to a newer GnuTLS version or reporting this to
GnuTLS developers would allow some progress.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 09 Nov 2020 20:47:02 GMT) Full text and rfc822 format available.

Message #63 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Mon, 09 Nov 2020 21:46:11 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

> So it seems to be our heap that takes most of the 5GB.

Today it happened to me again.  I'm writing from an Emacs session using
more than 5 GB of memory.  I started it some hours ago and have no
clue why today was special.  I didn't do anything exceptional.

Here is output from memory-usage:

Garbage collection stats:
((conses 16 2645730 3784206) (symbols 48 68678 724) (strings 32 528858 451889) (string-bytes 1 18127696) (vectors 16 213184) (vector-slots 8 3704641 2189052) (floats 8 2842 5514) (intervals 56 264780 87057) (buffers 992 119))

 =>	40.4MB (+ 57.7MB dead) in conses
	3.14MB (+ 33.9kB dead) in symbols
	16.1MB (+ 13.8MB dead) in strings
	17.3MB in string-bytes
	3.25MB in vectors
	28.3MB (+ 16.7MB dead) in vector-slots
	22.2kB (+ 43.1kB dead) in floats
	14.1MB (+ 4.65MB dead) in intervals
	 115kB in buffers

Total in lisp objects:  216MB (live  123MB, dead 93.0MB)

Buffer ralloc memory usage:
119 buffers
16.1MB total (1.71MB in gaps)

Anything I can do to find out more?  I dunno how long I can keep this
session open.  Tried `clear-image-cache', it does not release any
memory.

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 09 Nov 2020 21:26:01 GMT) Full text and rfc822 format available.

Message #66 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Mon, 09 Nov 2020 22:24:55 +0100
Michael Heerdegen <michael_heerdegen <at> web.de> writes:

> Anything I can do to find out more?  I dunno how long I can keep this
> session open.  Tried `clear-image-cache', it does not release any
> memory.

I found this line in pmap output:

0000557322314000 6257824K rw---   [ anon ]

Is it relevant?

Thanks,

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 09 Nov 2020 21:52:01 GMT) Full text and rfc822 format available.

Message #69 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Mon, 09 Nov 2020 22:51:10 +0100
Michael Heerdegen <michael_heerdegen <at> web.de> writes:

> I found this line in pmap output:
>
> 0000557322314000 6257824K rw---   [ anon ]

I guess that's the heap again.

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 03:31:01 GMT) Full text and rfc822 format available.

Message #72 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 05:30:53 +0200
> From: Michael Heerdegen <michael_heerdegen <at> web.de>
> Cc: Russell Adams <RLAdams <at> AdamsInfoServ.Com>,  43389 <at> debbugs.gnu.org
> Date: Mon, 09 Nov 2020 21:46:11 +0100
> 
> Garbage collection stats:
> ((conses 16 2645730 3784206) (symbols 48 68678 724) (strings 32 528858 451889) (string-bytes 1 18127696) (vectors 16 213184) (vector-slots 8 3704641 2189052) (floats 8 2842 5514) (intervals 56 264780 87057) (buffers 992 119))
> 
>  =>	40.4MB (+ 57.7MB dead) in conses
> 	3.14MB (+ 33.9kB dead) in symbols
> 	16.1MB (+ 13.8MB dead) in strings
> 	17.3MB in string-bytes
> 	3.25MB in vectors
> 	28.3MB (+ 16.7MB dead) in vector-slots
> 	22.2kB (+ 43.1kB dead) in floats
> 	14.1MB (+ 4.65MB dead) in intervals
> 	 115kB in buffers
> 
> Total in lisp objects:  216MB (live  123MB, dead 93.0MB)
> 
> Buffer ralloc memory usage:
> 119 buffers
> 16.1MB total (1.71MB in gaps)

Once again, the memory managed by GC doesn't explain the overall
footprint.
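For reference, the totals in that report can be reproduced from the raw
stats.  This is a sketch assuming each entry is (TYPE BYTES-PER-OBJECT
LIVE-COUNT DEAD-COUNT), which is how the memory-usage package reads the
output of `garbage-collect':

```python
# Raw (SIZE, LIVE-COUNT) pairs from the quoted garbage-collection stats.
stats = {
    "conses":       (16, 2645730),
    "symbols":      (48, 68678),
    "strings":      (32, 528858),
    "string-bytes": (1, 18127696),
    "vectors":      (16, 213184),
    "vector-slots": (8, 3704641),
    "floats":       (8, 2842),
    "intervals":    (56, 264780),
    "buffers":      (992, 119),
}
# Live bytes of each type = per-object size times live count.
live_mb = sum(size * count for size, count in stats.values()) / 2 ** 20
print(f"live: {live_mb:.1f}MB")  # -> live: 122.7MB, i.e. the "live 123MB" above
```

So the arithmetic checks out: the GC-visible footprint really is only
about 123MB, a small fraction of the multi-GB heap.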

> Anything I can do to find out more?

If you have some tool that can produce a detailed memory map, stating
which part and which library uses what memory, please do.  Otherwise,
the most important thing is to try to describe what you did from the
beginning of the session, including the files you visited and other
features/commands you invoked that could at some point consume memory.

Thanks.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 03:37:01 GMT) Full text and rfc822 format available.

Message #75 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 05:36:15 +0200
> From: Michael Heerdegen <michael_heerdegen <at> web.de>
> Cc: 43389 <at> debbugs.gnu.org,  Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> Date: Mon, 09 Nov 2020 22:51:10 +0100
> 
> Michael Heerdegen <michael_heerdegen <at> web.de> writes:
> 
> > I found this line in pmap output:
> >
> > 0000557322314000 6257824K rw---   [ anon ]
> 
> I guess that's the heap again.

Yes, the heap.  So it looks more and more like this is the result of
glibc not releasing memory to the system, which with some usage
patterns causes the memory footprint to grow to a ludicrous size.

We need to find an expert on this and bring him aboard for finding a
solution.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 06:34:02 GMT) Full text and rfc822 format available.

Message #78 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: 43389 <at> debbugs.gnu.org, Eli Zaretskii <eliz <at> gnu.org>,
 Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 01:33:17 +0300
* Michael Heerdegen <michael_heerdegen <at> web.de> [2020-11-09 23:47]:
> Eli Zaretskii <eliz <at> gnu.org> writes:
> 
> > So it seems to be our heap that takes most of the 5GB.
> 
> Today it happened to me again.  I'm writing from an Emacs session using
> more than 5 GB of memory.  I started it some hours ago and have no
> clue why today was special.  I didn't do anything exceptional.

I can confirm a similar issue.

It was happening regularly under EXWM.  Memory got occupied more and
more until nothing more would fit; swapping became heavy and the
computer became unresponsive, and then I had to kill it.  Using
symon-mode I could see 8 GB and more being swapped.  My machine
currently has 4 GB of RAM plus 8 GB of swap.

This condition occurs only after keeping Emacs in memory for a long
time, maybe 5-8 hours.

It happens more often after putting the laptop to sleep.

When I changed to IceWM this happened only once.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 08:23:02 GMT) Full text and rfc822 format available.

Message #81 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Andreas Schwab <schwab <at> linux-m68k.org>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: Michael Heerdegen <michael_heerdegen <at> web.de>, 43389 <at> debbugs.gnu.org,
 RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 09:22:20 +0100
On Nov 10 2020, Eli Zaretskii wrote:

>> From: Michael Heerdegen <michael_heerdegen <at> web.de>
>> Cc: 43389 <at> debbugs.gnu.org,  Russell Adams <RLAdams <at> AdamsInfoServ.Com>
>> Date: Mon, 09 Nov 2020 22:51:10 +0100
>> 
>> Michael Heerdegen <michael_heerdegen <at> web.de> writes:
>> 
>> > I found this line in pmap output:
>> >
>> > 0000557322314000 6257824K rw---   [ anon ]
>> 
>> I guess that's the heap again.
>
> Yes, the heap.  So it looks more and more like this is the result of
> glibc not releasing memory to the system, which with some usage
> patterns causes the memory footprint to grow to a ludicrous size.

The heap can only shrink if you free memory at the end of it, so there
is nothing wrong here.

You can call malloc_info (0, stdout) to see the state of the heap.
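As an aside, the XML report malloc_info(3) produces can be inspected from
any glibc process without gdb.  A minimal Python sketch (glibc/Linux only;
"libc.so.6" and the use of the interpreter's own heap are assumptions of
this illustration, not part of the gdb recipe above):

```python
import ctypes
import os
import tempfile

def malloc_info_report():
    """Return glibc's malloc_info(3) XML heap report as a string.

    glibc/Linux only.  This inspects the Python process's own heap, but
    the same call is what one would issue from gdb against Emacs."""
    libc = ctypes.CDLL("libc.so.6")
    libc.fdopen.restype = ctypes.c_void_p
    with tempfile.TemporaryFile() as tmp:
        # Wrap a duplicate of the temp file's fd in a C FILE* stream,
        # since malloc_info wants a FILE *, not a file descriptor.
        fp = libc.fdopen(os.dup(tmp.fileno()), b"w")
        libc.malloc_info(0, ctypes.c_void_p(fp))
        libc.fclose(ctypes.c_void_p(fp))  # flush and close the C stream
        tmp.seek(0)
        return tmp.read().decode()

report = malloc_info_report()
print(report[:30])  # the report is XML, starting with a <malloc ...> element
```

The interesting fields in the report are the per-arena free-chunk sizes,
which show how much memory malloc is holding on to without returning it
to the kernel.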

Andreas.

-- 
Andreas Schwab, schwab <at> linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510  2552 DF73 E780 A9DA AEC1
"And now for something completely different."




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 10:26:01 GMT) Full text and rfc822 format available.

Message #84 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 11:25:15 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

> Yes, the heap.  So it looks more and more like this is the result of
> glibc not releasing memory to the system, which with some usage
> patterns causes the memory footprint to grow to a ludicrous size.

FWIW, I'm still in that session, it's still running, and since
yesterday, that session's memory use has shrunk a lot.  Nearly half of
the memory that had been in use yesterday apparently has been freed now.

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 13:00:02 GMT) Full text and rfc822 format available.

Message #87 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Andreas Schwab <schwab <at> linux-m68k.org>
Cc: 43389 <at> debbugs.gnu.org, Eli Zaretskii <eliz <at> gnu.org>,
 RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 13:59:15 +0100
Andreas Schwab <schwab <at> linux-m68k.org> writes:

> You can call malloc_info (0, stdout) to see the state of the heap.

Was that meant for me?  If yes: where do I call this?  gdb?

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 13:02:02 GMT) Full text and rfc822 format available.

Message #90 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Andreas Schwab <schwab <at> linux-m68k.org>
To: Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: 43389 <at> debbugs.gnu.org, Eli Zaretskii <eliz <at> gnu.org>,
 RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 14:01:26 +0100
On Nov 10 2020, Michael Heerdegen wrote:

> Andreas Schwab <schwab <at> linux-m68k.org> writes:
>
>> You can call malloc_info (0, stdout) to see the state of the heap.
>
> Was that meant for me?  If yes: where do I call this?  gdb?

Yes, as long as you are not stopped inside malloc.

Andreas.

-- 
Andreas Schwab, schwab <at> linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510  2552 DF73 E780 A9DA AEC1
"And now for something completely different."




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 13:11:02 GMT) Full text and rfc822 format available.

Message #93 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Andreas Schwab <schwab <at> linux-m68k.org>
Cc: 43389 <at> debbugs.gnu.org, Eli Zaretskii <eliz <at> gnu.org>,
 RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 14:10:20 +0100
Andreas Schwab <schwab <at> linux-m68k.org> writes:

> Yes, as long as you are not stopped inside malloc.

My gdb session looks like this:

[...]
Attaching to process 416219
[New LWP 416220]
[New LWP 416221]
[New LWP 416223]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007f3eae76e926 in pselect () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) malloc_info (0, stdout)
Undefined command: "malloc_info".  Try "help".

I guess I have an optimized build.  Anything I can do better than above?

Thx, Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 13:21:02 GMT) Full text and rfc822 format available.

Message #96 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Michael Heerdegen <michael_heerdegen <at> web.de>,
 Andreas Schwab <schwab <at> linux-m68k.org>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 15:20:07 +0200
On November 10, 2020 3:10:20 PM GMT+02:00, Michael Heerdegen <michael_heerdegen <at> web.de> wrote:
> Andreas Schwab <schwab <at> linux-m68k.org> writes:
> 
> > Yes, as long as you are not stopped inside malloc.
> 
> My gdb session looks like this:
> 
> [...]
> Attaching to process 416219
> [New LWP 416220]
> [New LWP 416221]
> [New LWP 416223]
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library
> "/lib/x86_64-linux-gnu/libthread_db.so.1".
> 0x00007f3eae76e926 in pselect () from /lib/x86_64-linux-gnu/libc.so.6
> (gdb) malloc_info (0, stdout)
> Undefined command: "malloc_info".  Try "help".
> 
> I guess I have an optimized build.  Anything I can do better than
> above?

Try this instead:

  (gdb) call malloc_info(0, stdout)




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 13:27:02 GMT) Full text and rfc822 format available.

Message #99 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com,
 Andreas Schwab <schwab <at> linux-m68k.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 14:26:10 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

> Try this instead:
>
>   (gdb) call malloc_info(0, stdout)

Hmm:

(gdb) call malloc_info(0, stdout)
'malloc_info' has unknown return type; cast the call to its declared return type

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 14:26:01 GMT) Full text and rfc822 format available.

Message #102 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com,
 Andreas Schwab <schwab <at> linux-m68k.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 15:25:44 +0100
Michael Heerdegen <michael_heerdegen <at> web.de> writes:

> Hmm:
>
> (gdb) call malloc_info(0, stdout)
> 'malloc_info' has unknown return type; cast the call to its declared
> return type

BTW, because I'm such a C noob, I can also offer to take a (phone or
Signal) call if you are interested; maybe that's more efficient.

Maybe Andreas could do that, if he speaks German (?).  (I speak English
to some degree: you can understand me and I will understand most of what
you say, but it's not good enough to prevent RMS from making jokes about
my language from time to time.)

I'm also watching my mailbox all the time, of course.

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 15:35:02 GMT) Full text and rfc822 format available.

Message #105 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com, schwab <at> linux-m68k.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 17:34:27 +0200
> From: Michael Heerdegen <michael_heerdegen <at> web.de>
> Cc: Andreas Schwab <schwab <at> linux-m68k.org>,  43389 <at> debbugs.gnu.org,
>  RLAdams <at> AdamsInfoServ.Com
> Date: Tue, 10 Nov 2020 14:26:10 +0100
> 
> (gdb) call malloc_info(0, stdout)
> 'malloc_info' has unknown return type; cast the call to its declared return type

Compliance!

  (gdb) call (int)malloc_info (0, stdout)

(I would actually try stderr instead of stdout, but I yield to
Andreas's expertise here.)




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 15:37:02 GMT) Full text and rfc822 format available.

Message #108 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com, schwab <at> linux-m68k.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 17:36:35 +0200
> From: Michael Heerdegen <michael_heerdegen <at> web.de>
> Cc: Andreas Schwab <schwab <at> linux-m68k.org>,  43389 <at> debbugs.gnu.org,
>   RLAdams <at> AdamsInfoServ.Com
> Date: Tue, 10 Nov 2020 15:25:44 +0100
> 
> > (gdb) call malloc_info(0, stdout)
> > 'malloc_info' has unknown return type; cast the call to its declared
> > return type
> 
> BTW, because I'm such a C noob, I can also offer to give me a (phone or
> Signal) call if you are interested, maybe that's more efficient.

If the information proves to be useful, maybe we should provide a Lisp
command to call that function.  It could be instrumental in asking
people who see this problem to report their results.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 15:48:02 GMT) Full text and rfc822 format available.

Message #111 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: michael_heerdegen <at> web.de, 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 17:47:22 +0200
> Date: Tue, 10 Nov 2020 01:33:17 +0300
> From: Jean Louis <bugs <at> gnu.support>
> Cc: Eli Zaretskii <eliz <at> gnu.org>, 43389 <at> debbugs.gnu.org,
>   Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> 
> It was happening regularly under EXWM. Memory get occupied more and
> more and more until it does not go any more, swapping becomes tedious
> and computer becomes non-responsive. Then I had to kill it. By using
> symon-mode I could see swapping of 8 GB and more. My memory is 4 GB
> plus 8 GB swap currently.
> 
> This similar condition takes place only after keeping Emacs long in
> memory like maybe 5-8 hours.
> 
> After putting laptop to sleep it happens more often.
> 
> When I changed to IceWM this happened only once.

If this was due to a WM, are you sure it was Emacs that was eating up
memory, and not the WM itself?  If it was Emacs, then I think the only
way it could depend on the WM is if the WM feeds Emacs with many X
events that somehow consume memory.

Michael, what WM are you using?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 15:54:02 GMT) Full text and rfc822 format available.

Message #114 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Andreas Schwab <schwab <at> linux-m68k.org>
Cc: michael_heerdegen <at> web.de, 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 17:53:36 +0200
> From: Andreas Schwab <schwab <at> linux-m68k.org>
> Cc: Michael Heerdegen <michael_heerdegen <at> web.de>,  43389 <at> debbugs.gnu.org,
>   RLAdams <at> AdamsInfoServ.Com
> Date: Tue, 10 Nov 2020 09:22:20 +0100
> 
> > Yes, the heap.  So it more and more looks like this is the result of
> > glibc not releasing memory to the system, which with some usage
> > patterns causes the memory footprint grow to ludicrous size.
> 
> The heap can only shrink if you free memory at the end of it, so there
> is nothing wrong here.

Yes.  Except that some people say once this problem starts, the memory
footprint starts growing very fast, and the question is why.

Also, perhaps Emacs could do something to prevent large amounts of
free memory from being trapped by a small allocation, by modifying
something in how we allocate memory.

(It is a pity that a problem which was solved decades ago by using
ralloc.c is back, and on GNU/Linux of all the platforms, where such
aspects of memory fragmentation aren't supposed to happen, and all the
malloc knobs we could perhaps use to avoid that were deprecated and/or
removed.)




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 15:56:02 GMT) Full text and rfc822 format available.

Message #117 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 17:55:59 +0200
> From: Michael Heerdegen <michael_heerdegen <at> web.de>
> Cc: 43389 <at> debbugs.gnu.org,  RLAdams <at> AdamsInfoServ.Com
> Date: Tue, 10 Nov 2020 11:25:15 +0100
> 
> FWIW, I'm still in that session, it's still running, and since
> yesterday, that session's memory use has shrunk a lot.  Nearly half of
> the memory that had been in use yesterday apparently has been freed now.

So the "leak" is not permanent, as some other people here reported?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 16:37:02 GMT) Full text and rfc822 format available.

Message #120 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com,
 Jean Louis <bugs <at> gnu.support>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 17:36:08 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

> If this was due to a WM, are you sure it was Emacs that was eating up
> memory, and not the WM itself?  If it was Emacs, then I think the only
> way it could depend on the WM is if the WM feeds Emacs with many X
> events that somehow consume memory.

I'm using Openbox here, comparably lightweight to IceWM.  I don't see
any indication that the window manager is related.

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 16:43:01 GMT) Full text and rfc822 format available.

Message #123 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 17:41:51 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

> > FWIW, I'm still in that session, it's still running, and since
> > yesterday, that session's memory use has shrunk a lot.  Nearly half of
> > the memory that had been in use yesterday apparently has been freed now.
>
> So the "leak" is not permanent, as some other people here reported?

Maybe not, or not completely.  Memory usage still was gigantic, though.

Most of the time people will only notice the problem when it causes
trouble, and then they probably restart Emacs.  Maybe most of them did
not try to continue using such a session?  Only guessing.  But yes,
mine did free, say, 2 GB of the 7 GB used, without any intervention
from my side.

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 16:50:02 GMT) Full text and rfc822 format available.

Message #126 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com, schwab <at> linux-m68k.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 17:49:16 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

> Compliance!
>
>   (gdb) call (int)malloc_info (0, stdout)

I'm very sorry, but it's gone.

I used Magit in that session to show a log buffer.  That led to memory
usage growing too much, and a daemon killed the session to avoid swapping.

Maybe the problem is even related to Magit usage.  But I had a second X
session running at that moment, so there was a lot less memory left on
the system when that happened.

FWIW, the only "exceptional" thing that happened yesterday was that
Gnus stalled once after starting.  That could also be totally
unrelated.

I'll try to start a timer that reports rapidly growing memory usage
live, so that I can notice the problem right when it happens.

Regards,

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 17:14:01 GMT) Full text and rfc822 format available.

Message #129 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com, schwab <at> linux-m68k.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 19:13:37 +0200
> From: Michael Heerdegen <michael_heerdegen <at> web.de>
> Cc: schwab <at> linux-m68k.org,  43389 <at> debbugs.gnu.org,  RLAdams <at> AdamsInfoServ.Com
> Date: Tue, 10 Nov 2020 17:49:16 +0100
> 
> I'll try to start some timer that will report me live about heavily
> growing memory usage so that I can recognize the problem directly when
> it happens.

Thanks.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 17:45:01 GMT) Full text and rfc822 format available.

Message #132 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: michael_heerdegen <at> web.de
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com, schwab <at> linux-m68k.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 19:44:16 +0200
> Date: Tue, 10 Nov 2020 17:36:35 +0200
> From: Eli Zaretskii <eliz <at> gnu.org>
> Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com, schwab <at> linux-m68k.org
> 
> > > (gdb) call malloc_info(0, stdout)
> > > 'malloc_info' has unknown return type; cast the call to its declared
> > > return type
> > 
> > BTW, because I'm such a C noob, I can also offer to give me a (phone or
> > Signal) call if you are interested, maybe that's more efficient.
> 
> If the information proves to be useful, maybe we should provide a Lisp
> command to call that function.  It could be instrumental in asking
> people who see this problem report their results.

I've now added such a command to the master branch.  Redirect stderr
to a file, and then invoke "M-x malloc-info RET" when you want a
memory report.  The command doesn't display anything, it just writes
the info to the redirected file.

HTH




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 18:56:01 GMT) Full text and rfc822 format available.

Message #135 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com, schwab <at> linux-m68k.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 19:55:12 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

> I've now added such a command to the master branch.  Redirect stderr
> to a file, and then invoke "M-x malloc-info RET" when you want a
> memory report.  The command doesn't display anything, it just writes
> the info to the redirected file.

Great, thanks, I'll use it next time when the issue happens.

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 10 Nov 2020 20:58:01 GMT) Full text and rfc822 format available.

Message #138 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: michael_heerdegen <at> web.de, 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 10 Nov 2020 22:51:16 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-10 18:47]:
> > Date: Tue, 10 Nov 2020 01:33:17 +0300
> > From: Jean Louis <bugs <at> gnu.support>
> > Cc: Eli Zaretskii <eliz <at> gnu.org>, 43389 <at> debbugs.gnu.org,
> >   Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> > 
> > It was happening regularly under EXWM. Memory get occupied more and
> > more and more until it does not go any more, swapping becomes tedious
> > and computer becomes non-responsive. Then I had to kill it. By using
> > symon-mode I could see swapping of 8 GB and more. My memory is 4 GB
> > plus 8 GB swap currently.
> > 
> > This similar condition takes place only after keeping Emacs long in
> > memory like maybe 5-8 hours.
> > 
> > After putting laptop to sleep it happens more often.
> > 
> > When I changed to IceWM this happened only once.
> 
> If this was due to a WM, are you sure it was Emacs that was eating up
> memory, and not the WM itself?

More often than not I could not do anything, so I just hard-reset the
computer without a shutdown.  For some reason not even the Magic SysRq
key was enabled on Hyperbola GNU/Linux-libre, so I have enabled it so
that I can at least sync disk data and unmount disks before resetting.

How do I know it was Emacs?  I do not know for sure; I am just assuming.
I was using almost exclusively Emacs, sometimes the sxiv image viewer
(which exits after viewing), and a browser.  Then I would switch to the
console and try killing the browser to see if the system became
responsive.  Killing any other program did not make the system
responsive; only killing Emacs gave me back responsiveness.  That is,
provided I could switch to the console at all, as responsiveness was
terrible: out of maybe 20 times I managed to reach the console only a
few times.

This happened more than 20 times, and I was using symon-mode to monitor
swapping.  When I saw that swapping had reached a few gigabytes for no
good reason, I tried killing everything to understand what was going
on.  I ended up killing Emacs and EXWM and restarting X to get back
into good shape.

Because it was tedious over weeks not to be able to rely on the
computer under EXWM, I switched to IceWM, which is familiar to me.  And
I did not encounter anything like that, regardless of how long Emacs runs.

Now, after the discussion of the other bug where you suggested limiting
RSS (and after limiting RSS I could invoke ./a.out and still get a
prompt), maybe ulimit -m or other tweaking could stop that type of
behavior.  I have to look into it.

It could again be that Emacs is not responsible for this, but rather
liberal system settings.

> If it was Emacs, then I think the only way it could depend on the WM
> is if the WM feeds Emacs with many X events that somehow consume
> memory.

I was thinking of reporting this to EXWM, but I am unsure why it is
happening and cannot easily find out what is really swapping.  But
because I often used Emacs exclusively, that is how I know it has to be
Emacs that is swapping.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 11 Nov 2020 21:16:01 GMT) Full text and rfc822 format available.

Message #141 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 11 Nov 2020 22:15:21 +0100
> Thanks.  This trace doesn't show how many bytes were allocated, 
> does it?  Without that it is hard to judge whether these GnuTLS 
> calls could be the culprit.  Because the full trace shows other 
> calls to malloc, for example this: 

It doesn't show the size of the individual allocations, but it 
indirectly shows the size of the heap.  Each brk() line like this 
one is the start of an entry:

0.000000 brk(0x55f5ed93e000)       = 0x55f5ed93e000 

Where the first field is relative time since the last brk() call, 
and the argument in parentheses is the size requested. 
Subtracting the argument to one call from the argument to the 
previous call shows how much the heap has been extended.  In this 
capture, subtracting the first from the last shows that the heap 
grew by 8,683,520 bytes, and summing the relative timestamps shows 
that this happened in 90.71 seconds.  It's growing at about 
100KB/sec at this point.

Also, keep in mind that this is brk().  There could have been any 
number of malloc() calls in between, zero or millions, but these 
are the ones that couldn't find any unused blocks and had to 
extend the heap.

> I'm not sure how Emacs could be the culprit here.  If GnuTLS is 
> the culprit (and as explained above, this is not certain at this 
> point), perhaps upgrading to a newer GnuTLS version or reporting 
> this to GnuTLS developers would allow some progress. 

I think you are right, GnuTLS was probably a symptom, not a cause. 
I took a while to respond because I tried running emacs in 
Valgrind's Massif heap debugging tool, and it took forever.  Some 
results are in now, and it looks like GnuTLS wasn't present in the 
leak this time around.

First of all, if you aren't familiar with Massif (as I wasn't), it 
captures occasional snapshots of the whole heap and all 
allocations, and lets you dump a tree-view of those allocations 
later with the "ms_print" tool.  The timestamps are fairly 
useless, as they are in "number of instructions executed."  Here 
are three files from my investigation:

The raw massif output:

http://trevorbentley.com/massif.out.3364630

The *full* tree output:

http://trevorbentley.com/ms_print.3364630.txt

The tree output showing only entries above 10% usage:

http://trevorbentley.com/ms_print.thresh10.3364630.txt

What you can see from the handy ASCII graph at the top is that 
memory usage was chugging along, growing upwards for a couple of 
days, and then spiked very quickly up to just over 4GB over a few 
hours.

If you scroll down to the very last checkpoint (the 10% threshold 
file is better for this), you can see where most of the memory is 
used.  Very large sums of memory, but from different sources. 
1.7GB from lisp_align_malloc (nearly all from Fcons), 1.4GB from 
lmalloc (half from allocate_vector_block), 700MB from lrealloc 
(mostly from enlarge_buffer_text).

There were no large buffers open, but there were long-lived 
network sockets and plenty of timers.  I didn't check, but I'd say 
the largest buffer was up to a couple of megabytes, since 
emacs-slack logs fairly heavily.

I'm not sure what to make of this, really.  It seems like a 
general, sudden-onset, intense craving for more memory while not 
particularly doing much.  I could blindly suggest extreme memory 
fragmentation problems, but that doesn't seem very likely.

It's trivial to reproduce, but takes 3-5 days, so not exactly 
handy to debug.  Let me know if you have any requests for the next 
iteration before I kill it.  It's running in Valgrind again.

Thanks,

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 12 Nov 2020 14:25:02 GMT) Full text and rfc822 format available.

Message #144 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 12 Nov 2020 16:24:48 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: 43389 <at> debbugs.gnu.org
> Date: Wed, 11 Nov 2020 22:15:21 +0100
> 
> The raw massif output:
> 
> http://trevorbentley.com/massif.out.3364630
> 
> The *full* tree output:
> 
> http://trevorbentley.com/ms_print.3364630.txt
> 
> The tree output showing only entries above 10% usage:
> 
> http://trevorbentley.com/ms_print.thresh10.3364630.txt
> 
> What you can see from the handy ASCII graph at the top is that 
> memory usage was chugging along, growing upwards for a couple of 
> days, and then spiked very quickly up to just over 4GB over a few 
> hours.

When this peak happens, I see the following unusual circumstances:

  . ImageMagick functions are called and request a lot of (aligned)
    memory;
  . something called "gomp_thread_start" is called, and also allocates
    a lot of memory -- does this mean additional threads start running?

Or am I reading the graphs incorrectly?

Also, I see that you are using the native-compilation branch, and
something called slack-image is being loaded?  What is this about?

And can you tell me whether src/config.h defines DOUG_LEA_MALLOC to a
non-zero value on that system?

> If you scroll down to the very last checkpoint (the 10% threshold 
> file is better for this), you can see where most of the memory is 
> used.  Very large sums of memory, but from different sources. 
> 1.7GB from lisp_align_malloc (nearly all from Fcons), 1.4GB from 
> lmalloc (half from allocate_vector_block), 700MB from lrealloc 
> (mostly from enlarge_buffer_text).
> 
> There were no large buffers open, but there were long-lived 
> network sockets and plenty of timers.  I didn't check, but I'd say 
> the largest buffer was up to a couple of megabytes, since 
> emacs-slack logs fairly heavily.
> 
> I'm not sure what to make of this, really.  It seems like a 
> general, sudden-onset, intense craving for more memory while not 
> particularly doing much.  I could blindly suggest extreme memory 
> fragmentation problems, but that doesn't seem very likely.

It is important to understand what was going on when the memory
started growing fast.  You say there were no large buffers, but what
about temporary buffers? what could cause gomp_thread_start, whatever
that is, to start?

We recently added a malloc-info command, maybe you could use it to
show more information about the malloc arenas before and after it
starts to eat up memory.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 16 Nov 2020 20:17:01 GMT) Full text and rfc822 format available.

Message #147 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: fweimer <at> redhat.com, carlos <at> redhat.com, dj <at> redhat.com
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Mon, 16 Nov 2020 22:16:12 +0200
Bringing on board of this discussion glibc malloc experts: Florian
Weimer, DJ Delorie, and Carlos O'Donell.

For some time (several months, I think) we have reports from Emacs
users that the memory footprints of their Emacs sessions sometimes
start growing very quickly, from several hundreds of MBytes to several
gigabytes in a day or even just few hours, and in some cases causing
the OOMK to kick in and kill the Emacs process.  Please refer to the
details described in the discussions of this bug report:

  https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389

and 3 other bugs merged to it, which describe what sounds like the
same problem.

The questions that I'd like to eventually be able to answer are:

  . is this indeed due to some malloc'ed chunk that is being used for
    prolonged periods of time, and prevents releasing parts of the
    heap to the system?  IOW, is this pathological, but correct
    behavior, or is this some bug?

  . if this is correct behavior, can Emacs do something to avoid
    triggering it?  For example, should we consider tuning glibc's
    malloc in some way, by changing the 3 calls to mallopt in
    init_alloc_once_for_pdumper?

Your thoughts and help in investigating these problems will be highly
appreciated.  Please feel free to ask any questions you come up with,
including about the details of Emacs's memory management and anything
related.

Thanks!




Merged 43389 43395 43876 44666. Request was from Eli Zaretskii <eliz <at> gnu.org> to control <at> debbugs.gnu.org. (Mon, 16 Nov 2020 20:24:01 GMT) Full text and rfc822 format available.

Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 16 Nov 2020 20:43:02 GMT) Full text and rfc822 format available.

Message #152 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Florian Weimer <fweimer <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: carlos <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Mon, 16 Nov 2020 21:42:39 +0100
* Eli Zaretskii:

> For some time (several months, I think) we have reports from Emacs
> users that the memory footprints of their Emacs sessions sometimes
> start growing very quickly, from several hundreds of MBytes to several
> gigabytes in a day or even just few hours, and in some cases causing
> the OOMK to kick in and kill the Emacs process.

The last time I saw this was a genuine memory leak in the Emacs C code.
Just saying. 8-)

> The questions that I'd like to eventually be able to answer are:
>
>   . is this indeed due to some malloc'ed chunk that is being used for
>     prolonged periods of time, and prevents releasing parts of the
>     heap to the system?  IOW, is this pathological, but correct
>     behavior, or is this some bug?
>
>   . if this is correct behavior, can Emacs do something to avoid
>     triggering it?  For example, should we consider tuning glibc's
>     malloc in some way, by changing the 3 calls to mallopt in
>     init_alloc_once_for_pdumper?
>
> Your thoughts and help in investigating these problems will be highly
> appreciated.  Please feel free to ask any questions you come up with,
> including about the details of Emacs's memory management and anything
> related.

There is an issue with reusing posix_memalign allocations.  On my system
(running Emacs 27.1 as supplied by Fedora 32), I only see such
allocations as the backing storage for the glib (sic) slab allocator.
It gets exercised mostly when creating UI elements, as far as I can
tell.  In theory, these backing allocations should be really long-term
and somewhat limited, so the fragmentation issue peculiar to aligned
allocations should not be a concern.

There is actually a glibc patch floating around that fixes the aligned
allocation problem, at some (hopefully limited) performance cost to
aligned allocations.  We want to get it reviewed and integrated into
upstream glibc.  If there is a working reproducer, we could run it
against a patched glibc.

The other issue we have is that thread counts have in recent times
grown faster than system memory, and glibc basically scales RSS overhead
with thread count, not memory.  A use of libgomp suggests that many
threads might indeed be spawned.  If their lifetimes overlap, it would
not be unheard of to end up with some RSS overhead in the order of
peak-usage-per-thread times 8 times the number of hardware threads
supported by the system.  Setting MALLOC_ARENA_MAX to a small value
counteracts that, so it's very simple to experiment with it if you have
a working reproducer.

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 15:46:01 GMT) Full text and rfc822 format available.

Message #155 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Florian Weimer <fweimer <at> redhat.com>
Cc: carlos <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 17:45:15 +0200
> From: Florian Weimer <fweimer <at> redhat.com>
> Cc: carlos <at> redhat.com,  dj <at> redhat.com,  43389 <at> debbugs.gnu.org
> Date: Mon, 16 Nov 2020 21:42:39 +0100
> 
> * Eli Zaretskii:
> 
> > For some time (several months, I think) we have reports from Emacs
> > users that the memory footprints of their Emacs sessions sometimes
> > start growing very quickly, from several hundreds of MBytes to several
> > gigabytes in a day or even just few hours, and in some cases causing
> > the OOMK to kick in and kill the Emacs process.
> 
> The last time I saw this was a genuine memory leak in the Emacs C code.

That's always a possibility.  However, 2 aspects of these bug reports
seem to hint that there's more here than meets the eye:

 . the problem happens only to a small number of people, and it is
   hard to find an area in Emacs that uses memory in a way special
   enough to be triggered so rarely

 . the Emacs sessions of the people who reported this would run for
   many days and even weeks on end with fairly normal memory footprint
   (around 500MB) that was very stable, and then suddenly begin
   growing by the minute to 10 or 20 times that

> There is an issue with reusing posix_memalign allocations.  On my system
> (running Emacs 27.1 as supplied by Fedora 32), I only see such
> allocations as the backing storage for the glib (sic) slab allocator.

(By "backing storage" you mean malloc calls that request large chunks
so that malloc obtains the memory from mmap?  Or do you mean something
else?)

Are the problems with posix_memalign also relevant to calls to
aligned_alloc?  Emacs calls the latter _a_lot_, see lisp_align_malloc.

> It gets exercised mostly when creating UI elements, as far as I can
> tell.

I guess your build uses GTK as the toolkit?

> There is actually a glibc patch floating around that fixes the aligned
> allocation problem, at some (hopefully limited) performance cost to
> aligned allocations.  We want to get it reviewed and integrated into
> upstream glibc.  If there is a working reproducer, we could run it
> against a patched glibc.

We don't have a reproducer, but several people said that the problem
happens to them regularly enough in their normal usage.  So I think we
can ask them to try a patched glibc and see if the problem goes away.

> The other issue we have is that thread counts have in recent times
> grown faster than system memory, and glibc basically scales RSS overhead
> with thread count, not memory.  A use of libgomp suggests that many
> threads might indeed be spawned.  If their lifetimes overlap, it would
> not be unheard of to end up with some RSS overhead in the order of
> peak-usage-per-thread times 8 times the number of hardware threads
> supported by the system.  Setting MALLOC_ARENA_MAX to a small value
> counteracts that, so it's very simple to experiment with it if you have
> a working reproducer.

"Small value" being something like 2?

Emacs doesn't use libgomp, I think that comes from ImageMagick, and
most people who reported these problems use Emacs that wasn't built
with ImageMagick.  The only other source of threads in Emacs I know of
is GTK, but AFAIK it starts a small number of them, like 4.

In any case, experimenting with MALLOC_ARENA_MAX is easy, so I think
we should ask the people who experience this to try that.

Any other suggestions or thoughts?

Thanks.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 16:34:01 GMT) Full text and rfc822 format available.

Message #158 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>, Florian Weimer <fweimer <at> redhat.com>
Cc: 43389 <at> debbugs.gnu.org, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 11:32:23 -0500
On 11/17/20 10:45 AM, Eli Zaretskii wrote:
>> From: Florian Weimer <fweimer <at> redhat.com>
>> Cc: carlos <at> redhat.com,  dj <at> redhat.com,  43389 <at> debbugs.gnu.org
>> Date: Mon, 16 Nov 2020 21:42:39 +0100
>> There is an issue with reusing posix_memalign allocations.  On my system
>> (running Emacs 27.1 as supplied by Fedora 32), I only see such
>> allocations as the backing storage for the glib (sic) slab allocator.
> 
> (By "backing storage" you mean malloc calls that request large chunks
> so that malloc obtains the memory from mmap?  Or do you mean something
> else?)

In this case I expect Florian means that glib (sic), which is a slab
allocator, needs to allocate an aligned slab (long lived) and so uses
posix_memalign to create such an allocation. Therefore these long-lived
aligned allocations should not cause significant internal fragmentation.
 
> Are the problems with posix_memalign also relevant to calls to
> aligned_alloc?  Emacs calls the latter _a_lot_, see lisp_align_malloc.

All aligned allocations suffer from an algorithmic defect that causes
subsequent allocations of the same alignment to be unable to use previously
free'd aligned chunks. This causes aligned allocations to internally
fragment the heap and this internal fragmentation could spread to the
entire heap and cause heap growth.

The WIP glibc patch is here (June 2019):
https://lists.fedoraproject.org/archives/list/glibc <at> lists.fedoraproject.org/thread/2PCHP5UWONIOAEUG34YBAQQYD7JL5JJ4/

>> The other issue we have is that thread counts have in recent times
>> grown faster than system memory, and glibc basically scales RSS overhead
>> with thread count, not memory.  A use of libgomp suggests that many
>> threads might indeed be spawned.  If their lifetimes overlap, it would
>> not be unheard of to end up with some RSS overhead in the order of
>> peak-usage-per-thread times 8 times the number of hardware threads
>> supported by the system.  Setting MALLOC_ARENA_MAX to a small value
>> counteracts that, so it's very simple to experiment with it if you have
>> a working reproducer.
> 
> "Small value" being something like 2?

The current code creates 8 arenas per core on a 64-bit system.

You could set it to 1 arena per core to force more threads into the 
arenas and push them to reuse more chunks.

export MALLOC_ARENA_MAX=$(nproc)

And see if that helps.
 
> Emacs doesn't use libgomp, I think that comes from ImageMagick, and
> most people who reported these problems use Emacs that wasn't built
> with ImageMagick.  The only other source of threads in Emacs I know of
> is GTK, but AFAIK it starts a small number of them, like 4.
> 
> In any case, experimenting with MALLOC_ARENA_MAX is easy, so I think
> we should ask the people who experience this to try that.
> 
> Any other suggestions or thoughts?

Yes, we have malloc trace utilities for capturing and simulating traces
from applications:

https://pagure.io/glibc-malloc-trace-utils

If you can capture the application allocations with the tracer then we
should be able to reproduce it locally and observe the problem.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 16:34:02 GMT) Full text and rfc822 format available.

Message #161 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Florian Weimer <fweimer <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: carlos <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 17:33:13 +0100
* Eli Zaretskii:

>> There is an issue with reusing posix_memalign allocations.  On my system
>> (running Emacs 27.1 as supplied by Fedora 32), I only see such
>> allocations as the backing storage for the glib (sic) slab allocator.
>
> (By "backing storage" you mean malloc calls that request large chunks
> so that malloc obtains the memory from mmap?  Or do you mean something
> else?)

Larger chunks that are split up by the glib allocator.  Whether they are
allocated by mmap is unclear.

> Are the problems with posix_memalign also relevant to calls to
> aligned_alloc?  Emacs calls the latter _a_lot_, see lisp_align_malloc.

Ahh.  I don't see many such calls, even during heavy Gnus usage.  But
opening really large groups triggers such calls.

aligned_alloc is equally problematic.  I don't know if the Emacs
allocation pattern triggers the pathological behavior.

I seem to suffer from the problem as well.  glibc malloc currently maintains
more than 200 MiB of unused memory:

   <size from="1065345" to="153025249" total="226688532" count="20"/>

   <total type="fast" count="0" size="0"/>
   <total type="rest" count="3802" size="238948201"/>

Total RSS is 1 GiB, but even 1 GiB minus 200 MiB would be excessive.

It's possible to generate such statistics using GDB, by calling the
malloc_info function.

My Emacs process does not look like it suffered from the aligned_alloc
issue.  It would leave behind many smaller, unused allocations, not such
large ones.

>> It gets exercised mostly when creating UI elements, as far as I can
>> tell.
>
> I guess your build uses GTK as the toolkit?

I think so:

  GNU Emacs 27.1 (build 1, x86_64-redhat-linux-gnu, GTK+ Version
  3.24.21, cairo version 1.16.0) of 2020-08-20

>> The other issue we have is that thread counts have in recent times
>> grown faster than system memory, and glibc basically scales RSS overhead
>> with thread count, not memory.  A use of libgomp suggests that many
>> threads might indeed be spawned.  If their lifetimes overlap, it would
>> not be unheard of to end up with some RSS overhead in the order of
>> peak-usage-per-thread times 8 times the number of hardware threads
>> supported by the system.  Setting MALLOC_ARENA_MAX to a small value
>> counteracts that, so it's very simple to experiment with it if you have
>> a working reproducer.
>
> "Small value" being something like 2?

Yes, that would be a good start.  But my Emacs process isn't affected by
this, so this setting wouldn't help there.

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 17:09:02 GMT) Full text and rfc822 format available.

Message #164 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Florian Weimer <fweimer <at> redhat.com>,
 Trevor Bentley <trevor <at> trevorbentley.com>, michael_heerdegen <at> web.de,
 Jean Louis <bugs <at> gnu.support>
Cc: carlos <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 19:08:24 +0200
> From: Florian Weimer <fweimer <at> redhat.com>
> Cc: carlos <at> redhat.com,  dj <at> redhat.com,  43389 <at> debbugs.gnu.org
> Date: Tue, 17 Nov 2020 17:33:13 +0100
> 
>    <size from="1065345" to="153025249" total="226688532" count="20"/>
> 
>    <total type="fast" count="0" size="0"/>
>    <total type="rest" count="3802" size="238948201"/>
> 
> Total RSS is 1 GiB, but even 1 GiB minus 200 MiB would be excessive.

Yes, I wouldn't expect to see such a large footprint.  How long is
this session running?  (You can use "M-x emacs-uptime" to answer
that.)

> It's possible to generate such statistics using GDB, by calling the
> malloc_info function.

Emacs 28 (from the master branch) has recently acquired the
malloc-info command which will emit this to stderr.  You can see one
example of its output here:

  https://debbugs.gnu.org/cgi/bugreport.cgi?bug=44666#5

which doesn't seem to show any significant amounts of free memory at
all?

I encourage all the people who reported similar problems to try the
measures mentioned by Florian and Carlos, including malloc-info, and
report the results.

> My Emacs process does not look like it suffered from the aligned_alloc
> issue.  It would leave behind many smaller, unused allocations, not such
> large ones.
> [...]
> >> supported by the system.  Setting MALLOC_ARENA_MAX to a small value
> >> counteracts that, so it's very simple to experiment with it if you have
> >> a working reproducer.
> >
> > "Small value" being something like 2?
> 
> Yes, that would be a good start.  But my Emacs process isn't affected by
> this, so this setting wouldn't help there.

So both known problems seem not to be an issue in your case.  What
other reasons could cause that?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 17:14:01 GMT) Full text and rfc822 format available.

Message #167 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 19:13:06 +0200
> Cc: dj <at> redhat.com, 43389 <at> debbugs.gnu.org
> From: Carlos O'Donell <carlos <at> redhat.com>
> Date: Tue, 17 Nov 2020 11:32:23 -0500
> 
> > "Small value" being something like 2?
> 
> The current code creates 8 arenas per core on a 64-bit system.
> 
> You could set it to 1 arena per core to force more threads into the 
> arenas and push them to reuse more chunks.
> 
> export MALLOC_ARENA_MAX=$(nproc)

Isn't that too many?  Emacs is a single-threaded program, with a small
number of GTK threads that aren't supposed to allocate a lot of
memory.  Sounds like 2 should be enough, no?

> > Any other suggestions or thoughts?
> 
> Yes, we have malloc trace utilities for capturing and simulating traces
> from applications:
> 
> https://pagure.io/glibc-malloc-trace-utils
> 
> If you can capture the application allocations with the tracer then we
> should be able to reproduce it locally and observe the problem.

You mean, trace all the memory allocations in Emacs with the tracer?
That would produce huge amounts of data, as Emacs calls malloc at an
insane frequency.  Or maybe I don't understand what kind of tracing
procedure you had in mind (I never used these tools, and didn't know
they existed until you pointed to them).

Thanks.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 17:25:01 GMT) Full text and rfc822 format available.

Message #170 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Florian Weimer <fweimer <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, Jean Louis <bugs <at> gnu.support>, dj <at> redhat.com,
 michael_heerdegen <at> web.de, Trevor Bentley <trevor <at> trevorbentley.com>,
 carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 18:24:50 +0100
* Eli Zaretskii:

>> From: Florian Weimer <fweimer <at> redhat.com>
>> Cc: carlos <at> redhat.com,  dj <at> redhat.com,  43389 <at> debbugs.gnu.org
>> Date: Tue, 17 Nov 2020 17:33:13 +0100
>> 
>>    <size from="1065345" to="153025249" total="226688532" count="20"/>
>> 
>>    <total type="fast" count="0" size="0"/>
>>    <total type="rest" count="3802" size="238948201"/>
>> 
>> Total RSS is 1 GiB, but even 1 GiB minus 200 MiB would be excessive.
>
> Yes, I wouldn't expect to see such a large footprint.  How long is
> this session running?  (You can use "M-x emacs-uptime" to answer
> that.)

15 days.

>> It's possible to generate such statistics using GDB, by calling the
>> malloc_info function.
>
> Emacs 28 (from the master branch) has recently acquired the
> malloc-info command which will emit this to stderr.  You can see one
> example of its output here:
>
>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=44666#5
>
> which doesn't seem to show any significant amounts of free memory at
> all?

No, these values look suspiciously good.

But I seem to have this issue as well—with the 800 MiB that are actually
in use.  The glibc malloc pathological behavior comes on top of that.

Is there something comparable to malloc-info to dump the Emacs allocator
freelists?

> So both known problems seem to be not an issue in your case.  What
> other reasons could cause that?

Large allocations not getting forwarded to mmap, almost all of them
freed, but a late allocation remained.  This prevents returning memory
from the main arena to the operating system.

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 19:01:01 GMT) Full text and rfc822 format available.

Message #173 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: DJ Delorie <dj <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: carlos <at> redhat.com, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 12:20:21 -0500
Eli Zaretskii <eliz <at> gnu.org> writes:
> You mean, trace all the memory allocations in Emacs with the tracer?
> That would produce huge amounts of data, as Emacs calls malloc at an
> insane frequency.  Or maybe I don't understand what kind of tracing
> procedure you had in mind

That's exactly what it does, and yes, it easily generates gigabytes
(sometimes terabytes) of trace information.  But it also captures the
most accurate view of what's going on, and lets us replay (via
simulation) all the malloc API calls, so we can reproduce most
malloc-related problems on a whim.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 19:53:02 GMT) Full text and rfc822 format available.

Message #176 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: DJ Delorie <dj <at> redhat.com>
Cc: carlos <at> redhat.com, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 21:52:34 +0200
> From: DJ Delorie <dj <at> redhat.com>
> Cc: carlos <at> redhat.com, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org
> Date: Tue, 17 Nov 2020 12:20:21 -0500
> 
> Eli Zaretskii <eliz <at> gnu.org> writes:
> > You mean, trace all the memory allocations in Emacs with the tracer?
> > That would produce huge amounts of data, as Emacs calls malloc at an
> > insane frequency.  Or maybe I don't understand what kind of tracing
> > procedure you had in mind
> 
> That's exactly what it does, and yes, it easily generates gigabytes
> (sometimes terabytes) of trace information.  But it also captures the
> most accurate view of what's going on, and lets us replay (via
> simulation) all the malloc API calls, so we can reproduce most
> malloc-related problems on a whim.

Is it possible to start tracing only when the fast growth of memory
footprint commences?  Or is tracing from the very beginning a
necessity for providing meaningful data?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 20:01:01 GMT) Full text and rfc822 format available.

Message #179 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: DJ Delorie <dj <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: carlos <at> redhat.com, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 14:59:54 -0500
Eli Zaretskii <eliz <at> gnu.org> writes:
> Is it possible to start tracing only when the fast growth of memory
> footprint commences?  Or is tracing from the very beginning a
> necessity for providing meaningful data?

Well, both.  The API allows you to start/stop tracing whenever you like,
but the state of your heap depends on the entire history of calls.

So, for example, a trace during the "fast growth" period might show a
pattern that helps us[*] debug the problem, but if we want to
*reproduce* the problem, we'd need a full trace.

[*] and by "us" I mostly mean "emacs developers who understand their
    code" ;-)





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 20:14:02 GMT) Full text and rfc822 format available.

Message #182 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Florian Weimer <fweimer <at> redhat.com>
To: DJ Delorie <dj <at> redhat.com>
Cc: carlos <at> redhat.com, 43389 <at> debbugs.gnu.org, Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 21:13:00 +0100
* DJ Delorie:

> Eli Zaretskii <eliz <at> gnu.org> writes:
>> Is it possible to start tracing only when the fast growth of memory
>> footprint commences?  Or is tracing from the very beginning a
>> necessity for providing meaningful data?
>
> Well, both.  The API allows you to start/stop tracing whenever you like,
> but the state of your heap depends on the entire history of calls.
>
> So, for example, a trace during the "fast growth" period might show a
> pattern that helps us[*] debug the problem, but if we want to
> *reproduce* the problem, we'd need a full trace.
>
> [*] and by "us" I mostly mean "emacs developers who understand their
>     code" ;-)

But how helpful would that be, given that malloc_info does not really
show any inactive memory (discounting my 200 MiB hole)?

We would need a comparable tracer for the Lisp-level allocator, I think.

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 20:17:01 GMT) Full text and rfc822 format available.

Message #185 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: DJ Delorie <dj <at> redhat.com>
To: Florian Weimer <fweimer <at> redhat.com>
Cc: carlos <at> redhat.com, 43389 <at> debbugs.gnu.org, eliz <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 15:16:11 -0500
Florian Weimer <fweimer <at> redhat.com> writes:
> But how helpful would that be, given that malloc_info does not really
> show any inactive memory (discounting my 200 MiB hole)?

One doesn't know how helpful until after looking at the data.  If RSS is
going up fast, something is calling either sbrk or mmap.  If that thing
is malloc, a trace tells us if there's a pattern.  If that pattern
blames the lisp allocator, my job here is done ;-)





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 20:28:01 GMT) Full text and rfc822 format available.

Message #188 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: DJ Delorie <dj <at> redhat.com>
Cc: fweimer <at> redhat.com, carlos <at> redhat.com, 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 22:27:27 +0200
> From: DJ Delorie <dj <at> redhat.com>
> Cc: eliz <at> gnu.org, carlos <at> redhat.com, 43389 <at> debbugs.gnu.org
> Date: Tue, 17 Nov 2020 15:16:11 -0500
> 
> Florian Weimer <fweimer <at> redhat.com> writes:
> > But how helpful would that be, given that malloc_info does not really
> > show any inactive memory (discounting my 200 MiB hole)?
> 
> One doesn't know how helpful until after looking at the data.  If RSS is
> going up fast, something is calling either sbrk or mmap.  If that thing
> is malloc, a trace tells us if there's a pattern.  If that pattern
> blames the lisp allocator, my job here is done ;-)

I won't hold my breath for the lisp allocator to take the blame.  A
couple of people who were hit by the problem reported the statistics
of Lisp objects as produced by GC (those reports are somewhere in the
bug discussions, you should be able to find them).  Those statistics
indicated a very moderate amount of live Lisp objects, nowhere near
the huge memory footprint.

(It would be interesting to see the GC statistics from Florian's
session, btw.)

Given this data, it seems that if the Lisp allocator is involved, the
real problem is in what happens with memory it frees when objects are
GC'ed.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 20:37:02 GMT) Full text and rfc822 format available.

Message #191 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Florian Weimer <fweimer <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: carlos <at> redhat.com, 43389 <at> debbugs.gnu.org, DJ Delorie <dj <at> redhat.com>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 21:35:54 +0100
* Eli Zaretskii:

> (It would be interesting to see the GC statistics from Florian's
> session, btw.)

Is this the value of (garbage-collect)?

((conses 16 1877807 263442)
 (symbols 48 40153 113)
 (strings 32 164110 77752)
 (string-bytes 1 5874689)
 (vectors 16 64666)
 (vector-slots 8 1737780 331974)
 (floats 8 568 1115)
 (intervals 56 163746 19749)
 (buffers 1000 1092))

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 20:38:01 GMT) Full text and rfc822 format available.

Message #194 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 22:36:58 +0200
[Please use Reply All to keep the bug tracker on the CC list.]

> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: 
> Date: Tue, 17 Nov 2020 21:22:52 +0100
> 
> >   . something called "gomp_thread_start" is called, and also 
> >   allocates 
> >     a lot of memory -- does this mean additional threads start 
> >     running? 
> > 
> > Or am I reading the graphs incorrectly? 
> 
> You are right that they are present, but that path isn't 
> responsible for a significant percentage of the total memory usage 
> at the end.  Doesn't look like gomp_thread_start is in the 
> bottom-most snapshot at all.  It was reporting ~100MB allocated by 
> gomp_thread_start, out of 4GB.  And those are related to images, 
> so 100MB is perhaps reasonable.

AFAIK, glibc's malloc allocates a new heap arena for each thread that
calls malloc.  The arena is large, so having many threads could
enlarge the footprint by a lot.  That's why Florian suggested setting
MALLOC_ARENA_MAX to a small value, to keep this path of footprint
growth in check.

> However, I'm now a bit suspicious of these log buffers.  Last time 
> the usage spiked I had 15MB of reported buffers, and I was 
> watching the process RSS increase by 1MB every 5 seconds in top, 
> like a clockwork.  I killed all of the large log buffers (3MB 
> each), and RSS stopped noticeably increasing.  Not sure if that 
> _stopped_ the leak, or only slowed it down to beneath the 
> threshold top could show me.  Either way, it shouldn't need 1.5GB of
> RAM to track 15MB of text.

Unless malloc somehow allocates buffer memory via sbrk and not mmap,
buffers shouldn't be part of the footprint growth issue, because any
mmap'ed memory can be munmap'ed without any restrictions, and returns
to the OS.  And you can see how much buffer memory you have by
watching the statistics returned by garbage-collect.

> gomp_thread_start appears to be triggered when images are 
> displayed.

Yes, I believe ImageMagick starts them to scale images.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 20:44:01 GMT) Full text and rfc822 format available.

Message #197 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Florian Weimer <fweimer <at> redhat.com>
Cc: carlos <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 22:43:35 +0200
> From: Florian Weimer <fweimer <at> redhat.com>
> Cc: DJ Delorie <dj <at> redhat.com>,  carlos <at> redhat.com,  43389 <at> debbugs.gnu.org
> Date: Tue, 17 Nov 2020 21:35:54 +0100
> 
> * Eli Zaretskii:
> 
> > (It would be interesting to see the GC statistics from Florian's
> > session, btw.)
> 
> Is this the value of (garbage-collect)?
> 
> ((conses 16 1877807 263442)
>  (symbols 48 40153 113)
>  (strings 32 164110 77752)
>  (string-bytes 1 5874689)
>  (vectors 16 64666)
>  (vector-slots 8 1737780 331974)
>  (floats 8 568 1115)
>  (intervals 56 163746 19749)
>  (buffers 1000 1092))

Yes.  "C-h f garbage-collect" will describe the meaning of the
numbers.  AFAICT, this barely explains 70 MBytes and change of Lisp
data.  (The "buffers" part excludes buffer text, but you should be
able to add that by summing the sizes shown by "C-x C-b".)




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 20:50:01 GMT) Full text and rfc822 format available.

Message #200 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: Florian Weimer <fweimer <at> redhat.com>, 43389 <at> debbugs.gnu.org,
 Jean Louis <bugs <at> gnu.support>, dj <at> redhat.com, michael_heerdegen <at> web.de,
 Trevor Bentley <trevor <at> trevorbentley.com>, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 23:39:41 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-17 20:09]:
> I encourage all the people who reported similar problems to try the
> measures mentioned by Florian and Carlos, including malloc-info, and
> report the results.

For now I am doing with:

export MALLOC_ARENA_MAX=4

After days I will tell more.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 20:58:02 GMT) Full text and rfc822 format available.

Message #203 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: DJ Delorie <dj <at> redhat.com>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 eliz <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 15:57:26 -0500
Jean Louis <bugs <at> gnu.support> writes:
> After days I will tell more.

Do we have any strong hints on things we (i.e. I) can do to cause this
to happen faster?





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 21:00:02 GMT) Full text and rfc822 format available.

Message #206 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Florian Weimer <fweimer <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: carlos <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 21:58:57 +0100
* Eli Zaretskii:

>> From: Florian Weimer <fweimer <at> redhat.com>
>> Cc: DJ Delorie <dj <at> redhat.com>,  carlos <at> redhat.com,  43389 <at> debbugs.gnu.org
>> Date: Tue, 17 Nov 2020 21:35:54 +0100
>> 
>> * Eli Zaretskii:
>> 
>> > (It would be interesting to see the GC statistics from Florian's
>> > session, btw.)
>> 
>> Is this the value of (garbage-collect)?
>> 
>> ((conses 16 1877807 263442)
>>  (symbols 48 40153 113)
>>  (strings 32 164110 77752)
>>  (string-bytes 1 5874689)
>>  (vectors 16 64666)
>>  (vector-slots 8 1737780 331974)
>>  (floats 8 568 1115)
>>  (intervals 56 163746 19749)
>>  (buffers 1000 1092))
>
> Yes.  "C-h f garbage-collect" will describe the meaning of the
> numbers.  AFAICT, this barely explains 70 MBytes and change of Lisp
> data.  (The "buffers" part excludes buffer text, but you should be
> able to add that by summing the sizes shown by "C-x C-b".)

I get this:

(let ((size 0))
  (dolist (buffer (buffer-list) size)
    (setq size (+ size (buffer-size buffer)))))
⇒ 98249826

So it's not a small number, but still far away from those 800 MiB.

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 21:11:02 GMT) Full text and rfc822 format available.

Message #209 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Florian Weimer <fweimer <at> redhat.com>
Cc: carlos <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 17 Nov 2020 23:10:19 +0200
> From: Florian Weimer <fweimer <at> redhat.com>
> Cc: dj <at> redhat.com,  carlos <at> redhat.com,  43389 <at> debbugs.gnu.org
> Date: Tue, 17 Nov 2020 21:58:57 +0100
> 
> (let ((size 0))
>   (dolist (buffer (buffer-list) size)
>     (setq size (+ size (buffer-size buffer)))))
> ⇒ 98249826
> 
> So it's not a small number, but still far away from those 800 MiB.

Yes.  I have a very similar value: 94642916 (in 376 buffers; you have
more than 1000).  This is in a session that runs for 17 days and whose
VM size is 615 MB: a "normal" size for a long-living session, nowhere
near 2GB, let alone the 11GB someone reported.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 17 Nov 2020 21:47:02 GMT) Full text and rfc822 format available.

Message #212 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: DJ Delorie <dj <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, michael_heerdegen <at> web.de,
 trevor <at> trevorbentley.com, carlos <at> redhat.com, eliz <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 18 Nov 2020 00:45:48 +0300
* DJ Delorie <dj <at> redhat.com> [2020-11-17 23:57]:
> Jean Louis <bugs <at> gnu.support> writes:
> > After days I will tell more.
> 
> Do we have any strong hints on things we (i.e. I) can do to cause this
> to happen faster?

This is hard because I cannot know when it is happening. It used to
happen almost all the time under EXWM (Emacs X Window Manager), so I
switched to IceWM just to see whether EXWM was triggering the problem.
Under IceWM I have hit it 3 times, much less often than under EXWM,
and I do not see that I have changed my habits of using Emacs in any
way.

Today I had a session of more than 10 hours, and I do not know exactly
what I did. I kept only XTerm and Emacs running under X; at some point
Emacs started using swap, though it is unclear to me whether it was
really swapping or doing something else with the disk. A few minutes
before that I inspected it with htop and found Emacs using 9.7 GB of
memory. Later the system was unusable.

All I could see during that time was the hard disk LED lit constantly.
I could do almost nothing: I could not interrupt Emacs or switch to a
console. I then used Magic SysRq to do the necessary to at least
synchronize the hard disks, unmount, and reboot.

I am running it with this script:

#!/bin/bash
# CDPATH invokes bugs in eshell, not related to this
unset CDPATH
# I was trying to tune ulimit -m but it did not help
# ulimit -m 3145728
# I am trying this now
export MALLOC_ARENA_MAX=4
date >> /home/data1/protected/tmp/emacs-debug
# This below is for M-x malloc-info
emacs >> /home/data1/protected/tmp/emacs-debug 2>&1

Maybe some simple automatic function could temporarily be included
that reports to the output what Emacs is doing when it starts swapping
(if it is swapping); such output could at least be captured in a file
even if I have to reboot the computer.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 18 Nov 2020 05:45:02 GMT) Full text and rfc822 format available.

Message #215 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>, Florian Weimer <fweimer <at> redhat.com>
Cc: 43389 <at> debbugs.gnu.org, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 18 Nov 2020 00:43:55 -0500
On 11/17/20 4:10 PM, Eli Zaretskii wrote:
>> From: Florian Weimer <fweimer <at> redhat.com>
>> Cc: dj <at> redhat.com,  carlos <at> redhat.com,  43389 <at> debbugs.gnu.org
>> Date: Tue, 17 Nov 2020 21:58:57 +0100
>>
>> (let ((size 0))
>>   (dolist (buffer (buffer-list) size)
>>     (setq size (+ size (buffer-size buffer)))))
>> ⇒ 98249826
>>
>> So it's not a small number, but still far away from those 800 MiB.
> 
> Yes.  I have a very similar value: 94642916 (in 376 buffers; you have
> more than 1000).  This is in a session that runs for 17 days and whose
> VM size is 615 MB: a "normal" size for a long-living session, nowhere
> near 2GB, let alone 11GB someone reported.

If you get us a data trace I will run it through the simulator and produce
a report that includes graphs explaining the results of the trace and
we'll see if a smoking gun shows up.

The biggest smoking gun is a spike in RSS size without a matching Ideal
RSS (integral of API calls). This would indicate an algorithmic issue.

Usually, though, we can have ratcheting effects due to mixed object
lifetimes; those are harder to detect, and we don't have tooling to
look for such issues. We'd need to track chunk lifetimes.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 18 Nov 2020 06:52:02 GMT) Full text and rfc822 format available.

Message #218 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: Florian Weimer <fweimer <at> redhat.com>, 43389 <at> debbugs.gnu.org,
 Eli Zaretskii <eliz <at> gnu.org>, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 18 Nov 2020 09:09:22 +0300
Is it recommended to collect strace with this below?

strace emacs > output 2>&1





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 18 Nov 2020 08:33:01 GMT) Full text and rfc822 format available.

Message #221 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Andreas Schwab <schwab <at> linux-m68k.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: Carlos O'Donell <carlos <at> redhat.com>, Florian Weimer <fweimer <at> redhat.com>,
 dj <at> redhat.com, 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 18 Nov 2020 09:32:24 +0100
On Nov 18 2020, Jean Louis wrote:

> Is it recommended to collect strace with this below?
>
> strace emacs > output 2>&1

It is preferable to use the -o option to decouple the strace output from
the inferior output.

Andreas.

-- 
Andreas Schwab, schwab <at> linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510  2552 DF73 E780 A9DA AEC1
"And now for something completely different."




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 18 Nov 2020 09:16:02 GMT) Full text and rfc822 format available.

Message #224 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Andreas Schwab <schwab <at> linux-m68k.org>
Cc: Carlos O'Donell <carlos <at> redhat.com>, Florian Weimer <fweimer <at> redhat.com>,
 dj <at> redhat.com, 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 18 Nov 2020 12:01:39 +0300
* Andreas Schwab <schwab <at> linux-m68k.org> [2020-11-18 11:32]:
> On Nov 18 2020, Jean Louis wrote:
> 
> > Is it recommended to collect strace with this below?
> >
> > strace emacs > output 2>&1
> 
> It is preferable to use the -o option to decouple the strace output from
> the inferior output.

Thank you, I have seen that in options and right now I am running it
with:

#!/bin/bash
unset CDPATH
# ulimit -m 3145728
#export MALLOC_ARENA_MAX=4
date >> /home/data1/protected/tmp/emacs-debug
strace -o emacs.strace emacs >> /home/data1/protected/tmp/emacs-debug 2>&1




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 18 Nov 2020 15:04:02 GMT) Full text and rfc822 format available.

Message #227 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 18 Nov 2020 17:03:26 +0200
> Date: Wed, 18 Nov 2020 00:45:48 +0300
> From: Jean Louis <bugs <at> gnu.support>
> Cc: eliz <at> gnu.org, fweimer <at> redhat.com, trevor <at> trevorbentley.com,
>   michael_heerdegen <at> web.de, carlos <at> redhat.com, 43389 <at> debbugs.gnu.org
> 
> Maybe some simple new and automatic function could be temporarily
> included to spit errors to output on what is Emacs doing when it
> starts swapping (if it is swapping), then such errors could at least
> be captured in a file even if I have to reboot computer.

Emacs doesn't know when the system starts swapping.  But you can write
a function that tracks the vsize of the Emacs process, using emacs-pid
and process-attributes, and displays some prominent message when the
vsize increments become larger than some threshold, or the vsize
itself becomes larger than some fixed number.  Then run this function
off a timer that fires every 10 or 15 seconds, and wait for it to tell
you when the fun starts.
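A minimal sketch of such a watcher, assuming nothing beyond what is described above (the names, the 2 GiB threshold, and the 15-second interval are arbitrary choices; on GNU/Linux, process-attributes reports vsize in KB):

```elisp
(defvar my-vsize-warn-threshold (* 2 1024 1024)
  "Warn when the Emacs vsize exceeds this many KB (here: 2 GiB).")

(defun my-check-vsize ()
  "Display a prominent message when the Emacs vsize crosses the threshold."
  (let ((vsize (cdr (assq 'vsize (process-attributes (emacs-pid))))))
    (when (and vsize (> vsize my-vsize-warn-threshold))
      (message "WARNING: Emacs vsize is now %d KB" vsize))))

;; Check immediately, then every 15 seconds.
(run-with-timer 0 15 #'my-check-vsize)
```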




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 18 Nov 2020 16:20:02 GMT) Full text and rfc822 format available.

Message #230 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 18 Nov 2020 17:19:21 +0100
I'd be happy to run my Emacs with debugging to try and troubleshoot
this memory leak since it has happened twice to me. I can't yet
consistently reproduce it though. I think it's related to helm,
org-caldav, or slime, while running in daemon mode.

Can someone summarize what debug options I should run with, recompile
with, etc to provide proper information for next time? I'd like to be
able to make an effective report when it next occurs.

On Wed, Nov 18, 2020 at 12:01:39PM +0300, Jean Louis wrote:
> * Andreas Schwab <schwab <at> linux-m68k.org> [2020-11-18 11:32]:
> > On Nov 18 2020, Jean Louis wrote:
> >
> > > Is it recommended to collect strace with this below?
> > >
> > > strace emacs > output 2>&1
> >
> > It is preferable to use the -o option to decouple the strace output from
> > the inferior output.
>
> Thank you, I have seen that in options and right now I am running it
> with:
>
> #!/bin/bash
> unset CDPATH
> # ulimit -m 3145728
> #export MALLOC_ARENA_MAX=4
> date >> /home/data1/protected/tmp/emacs-debug
> strace -o emacs.strace emacs >> /home/data1/protected/tmp/emacs-debug 2>&1
>
>
>


------------------------------------------------------------------
Russell Adams                            RLAdams <at> AdamsInfoServ.com

PGP Key ID:     0x1160DCB3           http://www.adamsinfoserv.com/

Fingerprint:    1723 D8CA 4280 1EC9 557F  66E8 1154 E018 1160 DCB3




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 18 Nov 2020 17:31:01 GMT) Full text and rfc822 format available.

Message #233 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 18 Nov 2020 19:30:07 +0200
> Date: Wed, 18 Nov 2020 17:19:21 +0100
> From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> 
> I'd be happy to run my Emacs with debugging to try and troubleshoot
> this memory leak since it has happened twice to me. I can't yet
> consistently reproduce it though. I think it's somewhere between helm
> or org-caldav or slime, being in daemon mode.
> 
> Can someone summarize what debug options I should run with, recompile
> with, etc to provide proper information for next time? I'd like to be
> able to make an effective report when it next occurs.

If you mean debug options for compiling Emacs, I don't think it
matters.

I suggest trying the tools pointed out here:

  https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158

and when the issue happens, collect the data and ask here where and
how to upload it for analysis.

Thanks.

P.S. Please CC the other people I added to the CC line, as I don't
think they are subscribed to the bug list, and it is important for us
to keep them in the loop, so they could help us investigate this.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 18 Nov 2020 18:03:02 GMT) Full text and rfc822 format available.

Message #236 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 18 Nov 2020 20:01:48 +0200
> Cc: dj <at> redhat.com, 43389 <at> debbugs.gnu.org
> From: Carlos O'Donell <carlos <at> redhat.com>
> Date: Wed, 18 Nov 2020 00:43:55 -0500
> 
> >> (let ((size 0))
> >>   (dolist (buffer (buffer-list) size)
> >>     (setq size (+ size (buffer-size buffer)))))
> >> ⇒ 98249826
> >>
> >> So it's not a small number, but still far away from those 800 MiB.
> > 
> > Yes.  I have a very similar value: 94642916 (in 376 buffers; you have
> > more than 1000).  This is in a session that runs for 17 days and whose
> > VM size is 615 MB: a "normal" size for a long-living session, nowhere
> > near 2GB, let alone 11GB someone reported.
> 
> If you get us a data trace I will run it through the simulator and produce
> a report that includes graphs explaining the results of the trace and
> we'll see if a smoking gun shows up.

If you asked Florian, then I agree that his data could be useful.  If
you were asking me, then my data is not useful, because the footprint
is reasonable and never goes up to gigabyte range.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 18 Nov 2020 18:28:01 GMT) Full text and rfc822 format available.

Message #239 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: DJ Delorie <dj <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: carlos <at> redhat.com, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 18 Nov 2020 13:27:22 -0500
Eli Zaretskii <eliz <at> gnu.org> writes:
> If you asked Florian, then I agree that his data could be useful.  If
> you were asking me, then my data is not useful, because the footprint
> is reasonable and never goes up to gigabyte range.

Yeah, the hard part here is capturing the actual problem.  I'm running
the latest Emacs too but haven't seen the growth.  Traces tend to be
more useful when the problem is reproducible in situ but really hard to
reproduce in a test environment.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 18 Nov 2020 22:09:02 GMT) Full text and rfc822 format available.

Message #242 received at submit <at> debbugs.gnu.org (full text, mbox):

From: "Jose A. Ortega Ruiz" <jao <at> gnu.org>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 18 Nov 2020 21:47:30 +0000
On Tue, Nov 17 2020, Eli Zaretskii wrote:

>> From: Florian Weimer <fweimer <at> redhat.com>
>> Cc: dj <at> redhat.com,  carlos <at> redhat.com,  43389 <at> debbugs.gnu.org
>> Date: Tue, 17 Nov 2020 21:58:57 +0100
>> 
>> (let ((size 0))
>>   (dolist (buffer (buffer-list) size)
>>     (setq size (+ size (buffer-size buffer)))))
>> ⇒ 98249826
>> 
>> So it's not a small number, but still far away from those 800 MiB.
>
> Yes.  I have a very similar value: 94642916 (in 376 buffers; you have
> more than 1000).  This is in a session that runs for 17 days and whose
> VM size is 615 MB: a "normal" size for a long-living session, nowhere
> near 2GB, let alone 11GB someone reported.

As an additional datapoint, since version 27 (i usually compile from
master, so also before its release), i'm experiencing bigger RAM
consumption from my emacs processes too.  

It used to always be way below 1Gb, and at some point (i have the
impression it was with the switch to pdumper), typical footprints went
up to ~2Gb.  

In my case, there seems to be a jump in RAM footprint every now and then
(i get to ~1.5Gb in a day almost for sure, and 1.8Gb is not rare at
all), but they're not systematic.  

Everything starts "normal" (300Mb), then i open Gnus and it grows a bit
after reading some groups (500Mb, say), and so on, and stays there for
a while even if i keep using Gnus for reading similarly sized message
groups.  But, at some point, quite suddenly, i see RAM going to ~1Gb,
without any obvious change in the libraries i've loaded or in my usage
of them.  The pattern repeats until i find myself with ~2Gb in N days,
with N varying from 1 to 3.

It's difficult for me to be more precise because i use emacs for
absolutely everything. But, perhaps tellingly, i don't use most of the
packages that have been mentioned in this thread (in my case it's ivy
instead of helm, i use pdf-tools and that has a considerable footprint,
but i see jumps without having it loaded too, similar thing for
emacs-w3m), and i see the jumps appear so consistently that my
impression is that they're not directly caused by a single package.  

The only coincidence i've seen is that i use EXWM too (btw, that's a
window manager implemented in ELisp that makes emacs itself the window
manager, calling the X11 api directly through FFI), but other people are
having problems without it.

I've also tried with emacs compiled with and without GTK (i usually
compile without any toolkit at all) and with and without ImageMagick,
and the increased footprint is the same in all those combinations.  I
cannot see any difference either between the released 27.1 and a 28.0.50
regularly compiled from master: both seem to misbehave in the same way.

As i mentioned above, i've got a hunch that this all started, at least
for me, with pdumper, but i must say that is most probably a red
herring.

I hope this helps a bit, despite its vagueness.

Cheers,
jao

P.S.: I'm not copying the external glibc developers in this response
because i think most of the above only makes sense to emacs developers;
please let me know if you'd rather i did copy them.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 19 Nov 2020 14:05:02 GMT) Full text and rfc822 format available.

Message #245 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: "Jose A. Ortega Ruiz" <jao <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, carlos <at> redhat.com, fweimer <at> redhat.com, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 19 Nov 2020 16:03:51 +0200
> From: "Jose A. Ortega Ruiz" <jao <at> gnu.org>
> Date: Wed, 18 Nov 2020 21:47:30 +0000
> 
> As an additional datapoint, since version 27 (i usually compile from
> master, so also before its release), i'm experiencing bigger RAM
> consumption from my emacs processes too.  
> 
> It used to always be way below 1Gb, and at some point (i have the
> impression it was with the switch to pdumper), typical footprints went
> up to ~2Gb.  
> 
> In my case, there seems to be a jump in RAM footprint every now and then
> (i get to ~1.5Gb in a day almost for sure, and 1.8Gb is not rare at
> all), but they're not systematic.  
> 
> Everything starts "normal" (300Mb), then i open Gnus an it grows a bit
> after reading some groups (500Mb, say), and so on, and be there for a
> while even if i keep using Gnus for reading similarly sized message
> groups.  But, at some point, quite suddenly, i see RAM going to ~1Gb,
> without any obvious change in the libraries i've loaded or in my usage
> of them.  The pattern repeats until i find myself with ~2Gb in N days,
> with N varying from 1 to 3.
> 
> It's difficult for me to be more precise because i use emacs for
> absolutely everything. But, perhaps tellingly, i don't use most of the
> packages that have been mentioned in this thread (in my case it's ivy
> instead of helm, i use pdf-tools and that has a considerable footprint,
> but i see jumps without having it loaded too, similar thing for
> emacs-w3m), and i see the jumps to appear so consistently that my
> impression is that they're not directly caused by a single package.  

Thanks.  If you can afford it, would you please try using the malloc
tracing tools pointed to here:

  https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158

and then tell us where we could get the data you collected?

> As i mentioned above, i've got a hunch that this all started, at least
> for me, with pdumper, but i must say that is most probably a red
> herring.

For the record, can you please tell us what flavor and version of
GNU/Linux you are using?

> P.S.: I'm not copying the external GCC developers in this response
> because i think most of the above makes only sense to emacs developers;
> please let me know if you'd rather i did copy them.

I've added them.  Please CC them in the future, it is important for us
that the glibc experts see the data points people report in this
matter.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 19 Nov 2020 14:39:01 GMT) Full text and rfc822 format available.

Message #248 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: bug#43389: 28.0.50; Emacs memory leaks
 using hard disk all time
Date: Thu, 19 Nov 2020 16:37:39 +0200
> Date: Thu, 19 Nov 2020 09:59:44 +0300
> From: Jean Louis <bugs <at> gnu.support>
> Cc: 44666 <at> debbugs.gnu.org
> 
> * Eli Zaretskii <eliz <at> gnu.org> [2020-11-17 10:04]:
> > > If there is nothing to be done with this bug, we can close.
> > 
> > No, closing is premature.  I've merged this bug with 3 other similar
> > ones, and we are discussing this issue with glibc malloc experts.
> 
> If bug is merged, do I just reply on this email?

No, it's better to reply to bug#43389 (I've redirected the discussion
now), and please keep the other addressees on the CC list, as they are
not subscribed to the bug list, I believe.

> My emacs-uptime now is 19 hours, and I can see 4819 MB swapping
> according  to symon-mode
> 
> I have not get number of buffers, I tried to delete it and there is no
> change. User processes are below. I have not finished this session and
> so I am prematurely sending the file 
> emacs.strace-2020-11-18-14:42:59-Wednesday which may be accessed here
> below on the link. I could not copy the file fully through eshell probably
> because if I do copy through eshell the strace becomes longer and
> longer and copy never finishes. So I have aborted the copy, file may
> not be complete. It is also not complete for reason that session is
> not finished.
> 
> strace is here, 13M download, when unpacked it is more than 1.2 GB.
> https://gnu.support/files/tmp/emacs.strace-2020-11-18-14:42:59-Wednesday.lz

I've looked at that file, but couldn't see any smoking guns.  It shows
that your brk goes up and up and up until it reaches more than 7GB.
Some of the requests come in groups, totaling about 5MB, not sure why
(these groups always follow a call to timerfd_settime, which seems to
hint that we are setting an atimer for something).  However, without
time stamps for each syscall, it is hard to tell whether these series
of calls to 'brk' are indeed made one after the other, nor whether
they are indeed related to something we use atimers for, because it is
unknown how much time passed between these calls.

I think you should try using the malloc tracing tools pointed to here:

  https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158

Also, next time your vsize is several GBytes, please see how much
space your buffers take, by evaluating this form:

 (let ((size 0))
   (dolist (buffer (buffer-list) size)
     (setq size (+ size (buffer-size buffer)))))
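For comparison with the buffer counts quoted earlier in the thread, a variant of this form that also reports how many buffers were summed (returning the pair as a list is my own choice) might be:

```elisp
(let ((size 0) (count 0))
  (dolist (buffer (buffer-list) (list count size))
    (setq count (1+ count)
          size  (+ size (buffer-size buffer)))))
;; ⇒ (number-of-buffers total-buffer-text-size)
```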





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 19 Nov 2020 15:12:02 GMT) Full text and rfc822 format available.

Message #251 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, carlos <at> redhat.com, fweimer <at> redhat.com,
 "Jose A. Ortega Ruiz" <jao <at> gnu.org>, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 19 Nov 2020 17:34:32 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-19 17:05]:
> Thanks.  If you can afford it, would you please try using the malloc
> tracing tools pointed to here:
> 
>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158

I have built it. A slight problem is that I do not get the output that
the docs say I should get, something like this:
mtrace: writing to /tmp/mtrace.mtr.706

I do not see here:

LD_PRELOAD=./libmtrace.so ls
block_size_rss.c  INSTALL	mtrace.c      trace2wl.c	    trace_hist.sh
config.log	  libmtrace.so	mtrace.h      trace_allocs	    trace_plot.m
config.status	  LICENSES	README.md     trace_allocs.c	    trace_run
configure	  MAINTAINERS	sample.c      trace_analysis.sh     trace_run.c
configure.ac	  Makefile	statistics.c  trace_block_size_rss  trace_sample
COPYING		  Makefile.in	tests	      trace_dump	    trace_statistics
COPYING.LIB	  malloc.h	trace2wl      trace_dump.c          util.h

But I did get something in /tmp/mtrace.mtr.XXX

So I will run Emacs that way.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 19 Nov 2020 15:58:03 GMT) Full text and rfc822 format available.

Message #254 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Andreas Schwab <schwab <at> linux-m68k.org>, Jean Louis <bugs <at> gnu.support>
Cc: Florian Weimer <fweimer <at> redhat.com>, 43389 <at> debbugs.gnu.org, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 19 Nov 2020 10:57:46 -0500
On 11/18/20 3:32 AM, Andreas Schwab wrote:
> On Nov 18 2020, Jean Louis wrote:
> 
>> Is it recommended to collect strace with this below?
>>
>> strace emacs > output 2>&1
> 
> It is preferable to use the -o option to decouple the strace output from
> the inferior output.

strace -ttt -ff -o NAME.logs BINARY

Gives timing, and follows forks to see what children are being run.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 19 Nov 2020 16:04:01 GMT) Full text and rfc822 format available.

Message #257 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Jean Louis <bugs <at> gnu.support>, Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, fweimer <at> redhat.com,
 "Jose A. Ortega Ruiz" <jao <at> gnu.org>, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 19 Nov 2020 11:03:27 -0500
On 11/19/20 9:34 AM, Jean Louis wrote:
> * Eli Zaretskii <eliz <at> gnu.org> [2020-11-19 17:05]:
>> Thanks.  If you can afford it, would you please try using the malloc
>> tracing tools pointed to here:
>>
>>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
> 
> I have built it. Slight problem is that I do not get any output as
> written that I should get, something like this:
> 
> mtrace: writing to /tmp/mtrace.mtr.706

This was changed recently in commit 4594db1defd40289192a0ea641c50278277f1737
because output to stdout interferes with the application output so it is
disabled by default. The docs show that MTRACE_CTL_FILE will dictate
where the trace is written to and that MTRACE_CTL_VERBOSE will output
verbose information to stdout.

I've pushed a doc update to indicate that in the example.
 
> I do not see here:
> 
> LD_PRELOAD=./libmtrace.so ls
> block_size_rss.c  INSTALL	mtrace.c      trace2wl.c	    trace_hist.sh
> config.log	  libmtrace.so	mtrace.h      trace_allocs	    trace_plot.m
> config.status	  LICENSES	README.md     trace_allocs.c	    trace_run
> configure	  MAINTAINERS	sample.c      trace_analysis.sh     trace_run.c
> configure.ac	  Makefile	statistics.c  trace_block_size_rss  trace_sample
> COPYING		  Makefile.in	tests	      trace_dump	    trace_statistics
> COPYING.LIB	  malloc.h	trace2wl      trace_dump.c          util.h
> 
> But I did get something in /tmp/mtrace.mtr.XXX
> 
> So I will run Emacs that way.

That should work.
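Putting this together, a launch script along the lines of the ones posted earlier in the thread might look like the sketch below; the trace path is a placeholder, and the exact variable semantics should be checked against the tracer's README, since they changed in the commit mentioned above:

```shell
# Write the malloc trace to a partition with enough free space,
# instead of the default /tmp location; also print verbose
# information to stdout.
MTRACE_CTL_FILE=/home/data1/protected/tmp/mtrace.mtr \
MTRACE_CTL_VERBOSE=1 \
LD_PRELOAD=./libmtrace.so emacs
```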

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 19 Nov 2020 16:10:01 GMT) Full text and rfc822 format available.

Message #260 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: DJ Delorie <dj <at> redhat.com>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 19 Nov 2020 11:08:56 -0500
On 11/18/20 1:27 PM, DJ Delorie wrote:
> Eli Zaretskii <eliz <at> gnu.org> writes:
>> If you asked Florian, then I agree that his data could be useful.  If
>> you were asking me, then my data is not useful, because the footprint
>> is reasonable and never goes up to gigabyte range.
> 
> Yeah, the hard part here is capturing the actual problem.  I'm running
> the latest Emacs too but haven't seen the growth.  Traces tend to be
> more useful when the problem is reproducible in situ but really hard to
> reproduce in a test environment.

My commitment is this: If anyone can reproduce the problem with the tracer
enabled then I will analyze the trace and produce a report for the person
submitting the trace.

The report will include some graphs, and an analysis of the API calls and
the resulting RSS usage.

I've written several of these reports, but so far they haven't been all
that satisfying to read. We rarely find an easily discoverable root cause.

We probably need better information on the exact lifetimes of the
allocations.

For example I recently added a "caller" frame trace which uses the dwarf
unwinder to find the caller and record that data. It's expensive and
enabled only if requested. This is often useful in determining who made
the API request (requires tracing through 2 frames at a minimum). The performance
loss may make the bug go away though, and so that should be considered.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 19 Nov 2020 17:26:02 GMT) Full text and rfc822 format available.

Message #263 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: jao <jao <at> gnu.org>
To: "Eli Zaretskii" <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, carlos <at> redhat.com, fweimer <at> redhat.com, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 19 Nov 2020 17:25:27 +0000
On Thu, Nov 19 2020, Eli Zaretskii wrote:

[...]

> Thanks.  If you can afford it, would you please try using the malloc
> tracing tools pointed to here:
>
>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
>
> and then tell us where we could get the data you collected?

i'll see what i can do, yes (possibly over the weekend).

>> As i mentioned above, i've got a hunch that this all started, at least
>> for me, with pdumper, but i must say that is most probably a red
>> herring.
>
> For the record, can you please tell what flavor and version of
> GNU/Linux are you using?

Debian sid.

Cheers,
jao
-- 
If you could kick in the pants the person responsible for most of your
trouble, you wouldn't sit for a month. — Theodore Roosevelt




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 20 Nov 2020 05:35:02 GMT) Full text and rfc822 format available.

Message #266 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Fri, 20 Nov 2020 06:16:26 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-19 17:38]:
> I think you should try using the malloc tracing tools pointed to here:
> 
>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158

When running for a long time, Emacs will crash at some point because
my hard disk gets full: /tmp is only about 2 gigabytes. I did not
understand from Carlos how to change the location of the trace files.



Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 20 Nov 2020 08:12:02 GMT) Full text and rfc822 format available.

Message #269 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Fri, 20 Nov 2020 10:10:56 +0200
> Date: Fri, 20 Nov 2020 06:16:26 +0300
> From: Jean Louis <bugs <at> gnu.support>
> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
>   michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
> 
> * Eli Zaretskii <eliz <at> gnu.org> [2020-11-19 17:38]:
> > I think you should try using the malloc tracing tools pointed to here:
> > 
> >   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
> 
> When running for a long time, Emacs will crash at a certain point, as
> my hard disk gets full: /tmp is only about 2 gigabytes. I did not
> understand from Carlos how to change the location of the files.

Carlos, could you please help Jean to direct the traces to a place
other than /tmp?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 22 Nov 2020 20:03:02 GMT) Full text and rfc822 format available.

Message #272 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sun, 22 Nov 2020 22:52:14 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-20 03:11]:
> > Date: Fri, 20 Nov 2020 06:16:26 +0300
> > From: Jean Louis <bugs <at> gnu.support>
> > Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
> >   michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
> > 
> > * Eli Zaretskii <eliz <at> gnu.org> [2020-11-19 17:38]:
> > > I think you should try using the malloc tracing tools pointed to here:
> > > 
> > >   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
> > 
> > When running for a long time, Emacs will crash at a certain point, as
> > my hard disk gets full: /tmp is only about 2 gigabytes. I did not
> > understand from Carlos how to change the location of the files.
> 
> Carlos, could you please help Jean to direct the traces to a place
> other than /tmp?

I am now following this strategy here:
https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking

I have run emacs -Q for very short time, with:

MALLOC_CONF=prof_leak:true,lg_prof_sample:0,prof_final:true \
LD_PRELOAD=/package/lib/jemalloc/lib/libjemalloc.so.2 emacs -Q

and PDF files were generated. I also wish to mention that I use two
dynamic modules, emacs-libpq and emacs-libvterm, in case that
influences the overall picture.

You may know better how to interpret those files and may spot
something. This Emacs session was running for just a minute or so.

https://gnu.support/files/tmp/2020-11-22/jeprof.26889.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26889.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26915.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26915.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26918.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26918.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26921.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26921.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26922.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26922.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26923.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26923.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26924.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26924.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26925.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26925.0.f.heap.pdf
https://gnu.support/files/tmp/2020-11-22/jeprof.26931.0.f.heap
https://gnu.support/files/tmp/2020-11-22/jeprof.26931.0.f.heap.pdf

I am now running a new session and will perhaps have quite different
data after hours of running.

Jean




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 22 Nov 2020 20:17:02 GMT) Full text and rfc822 format available.

Message #275 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sun, 22 Nov 2020 22:16:24 +0200
> Date: Sun, 22 Nov 2020 22:52:14 +0300
> From: Jean Louis <bugs <at> gnu.support>
> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
>   michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
> 
> I am now following this strategy here:
> https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking

That uses a different implementation of malloc, so I'm not sure it
will help us.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 22 Nov 2020 20:20:01 GMT) Full text and rfc822 format available.

Message #278 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Deus Max <deusmax <at> gmx.com>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, Eli Zaretskii <eliz <at> gnu.org>,
 DJ Delorie <dj <at> redhat.com>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Sun, 22 Nov 2020 22:19:29 +0200
On Thu, Nov 19 2020, Carlos O'Donell wrote:

> On 11/18/20 1:27 PM, DJ Delorie wrote:
>> Eli Zaretskii <eliz <at> gnu.org> writes:
>>> If you asked Florian, then I agree that his data could be useful.  If
>>> you were asking me, then my data is not useful, because the footprint
>>> is reasonable and never goes up to gigabyte range.
>>
>> Yeah, the hard part here is capturing the actual problem.  I'm running
>> the latest Emacs too but haven't seen the growth.  Traces tend to be
>> more useful when the problem is reproducible in situ but really hard to
>> reproduce in a test environment.
>
> My commitment is this: If anyone can reproduce the problem with the tracer
> enabled then I will analyze the trace and produce a report for the person
> submitting the trace.
>

My Emacs has been experiencing leaks and crashes very often, both at
home and at work. This is very annoying: I can hear the fan suddenly
spinning up, or the keys stop responding... and the "oh no, here we
go again" feeling comes back.

If it is easy to provide instructions/recommendations on how to run
Emacs to produce a useful trace report, I will be happy to do so,
even to recompile as needed.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 03:27:02 GMT) Full text and rfc822 format available.

Message #281 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Dias Badekas <dias <at> badekas.org>
Cc: carlos <at> redhat.com, fweimer <at> redhat.com, dj <at> redhat.com, 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Mon, 23 Nov 2020 05:26:22 +0200
> From: Deus Max <deusmax <at> gmx.com>
> Cc: DJ Delorie <dj <at> redhat.com>,  Eli Zaretskii <eliz <at> gnu.org>,
>   fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org
> Date: Sun, 22 Nov 2020 22:19:29 +0200
> 
> If it is easy to provide instructions/recommendations on how to run
> Emacs to produce a useful trace report, I will be happy to do so,
> even to recompile as needed.

Carlos provided a pointer to the tracing tools, see

  https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158

There are some instructions there; if something is not clear enough, I
suggest to ask specific questions here.

Thanks.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 03:36:02 GMT) Full text and rfc822 format available.

Message #284 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Jean Louis <bugs <at> gnu.support>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, trevor <at> trevorbentley.com,
 michael_heerdegen <at> web.de, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sun, 22 Nov 2020 22:35:28 -0500
On 11/19/20 10:16 PM, Jean Louis wrote:
> * Eli Zaretskii <eliz <at> gnu.org> [2020-11-19 17:38]:
>> I think you should try using the malloc tracing tools pointed to here:
>>
>>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
> 
> When running for a long time, Emacs will crash at a certain point, as
> my hard disk gets full: /tmp is only about 2 gigabytes. I did not
> understand from Carlos how to change the location of the files.
 
The glibc malloc tracer functionality can be adjusted with environment
variables.

Example:

MTRACE_CTL_VERBOSE=1 MTRACE_CTL_FILE=./ls.mtr LD_PRELOAD=./libmtrace.so ls
mtrace: writing to ./ls.mtr.350802

In the above example the use of MTRACE_CTL_FILE=./ls.mtr instructs the
tracer to write the trace file to the current directory.

The tracer appends the PID of the traced process to the ls.mtr file
name, which keeps the files distinct (plus a monotonically increasing
sequence number in the event of a name conflict).
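Based on that description, a run that keeps the traces away from a
small /tmp might look like the following sketch; the trace directory
and the path to libmtrace.so are illustrative assumptions, not
canonical locations.

```shell
# Sketch: keep glibc malloc traces out of the small /tmp partition.
# Both paths below are illustrative assumptions.
TRACE_DIR="$HOME/mtrace-logs"
mkdir -p "$TRACE_DIR"

# The tracer appends the PID (and a sequence number on conflict),
# so the file actually written will be e.g. $TRACE_DIR/emacs.mtr.12345
MTRACE_CTL_FILE="$TRACE_DIR/emacs.mtr"
export MTRACE_CTL_FILE

echo "traces will be written under: $TRACE_DIR"
```

With a build of the tracer available, the invocation would then be
something like `LD_PRELOAD=/path/to/libmtrace.so emacs`, leaving
MTRACE_CTL_VERBOSE unset so no extra output goes to stdout.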

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 03:42:02 GMT) Full text and rfc822 format available.

Message #287 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>, Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, trevor <at> trevorbentley.com,
 michael_heerdegen <at> web.de, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sun, 22 Nov 2020 22:41:00 -0500
On 11/22/20 3:16 PM, Eli Zaretskii wrote:
>> Date: Sun, 22 Nov 2020 22:52:14 +0300
>> From: Jean Louis <bugs <at> gnu.support>
>> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
>>   michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
>>
>> I am now following this strategy here:
>> https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
> 
> That uses a different implementation of malloc, so I'm not sure it
> will help us.

Correct, that is a different malloc implementation and may have
completely different behaviour for your given workload. That is
not to say that it isn't a viable solution to try another allocator
that matches your workload. However, in this bug we're trying to
determine why the "default" configuration of emacs and glibc's
allocator causes memory usage to grow.

We want to run the glibc malloc algorithms because that is the
implementation under which we are observing the increased memory
pressure. The tracer I've suggested will get us an API trace
that we can use to determine if it is actually API calls that
are causing an increase in the memory usage or if it's an
algorithmic issue. It is not always obvious to see from the
API calls, but having the trace is better than not.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 08:30:02 GMT) Full text and rfc822 format available.

Message #290 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 11:11:22 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-22 23:17]:
> > Date: Sun, 22 Nov 2020 22:52:14 +0300
> > From: Jean Louis <bugs <at> gnu.support>
> > Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
> >   michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
> > 
> > I am now following this strategy here:
> > https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
> 
> That uses a different implementation of malloc, so I'm not sure it
> will help us.

It will only be unhelpful if you are able to interpret the PDF reports
and do not see anything useful in them. If you do interpret those PDF
reports, please tell me, as that could be useful for finding possible
causes or other issues in Emacs.

Does this here tell you anything?
https://gnu.support/files/tmp/2020-11-22/jeprof.26889.0.f.heap.pdf

Does this additional module isra.0 inside tell you anything?
https://gnu.support/files/tmp/2020-11-22/jeprof.26922.0.f.heap.pdf

I am using dynamic modules like vterm and libpq; can they influence
memory usage or create memory leaks?

What is tst_post_reentrancy_raw? Is that something that eats memory?

I am still running this session with jemalloc, and I wish to see
whether anything will happen that blocks the work, similar to how it
blocks in a normal run. That helps slightly in narrowing things down:
if running Emacs with jemalloc does not cause problems once, or maybe
2-5 or 10 times, that may point the problem to the standard malloc and
not to Emacs.

Then in the next session I will try the tools again as described and
submit the data.

To help me understand: do you think the problem is in Emacs or in
glibc malloc?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 10:00:02 GMT) Full text and rfc822 format available.

Message #293 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 11:59:42 +0200
On November 23, 2020 10:11:22 AM GMT+02:00, Jean Louis <bugs <at> gnu.support> wrote:
> * Eli Zaretskii <eliz <at> gnu.org> [2020-11-22 23:17]:
> > > Date: Sun, 22 Nov 2020 22:52:14 +0300
> > > From: Jean Louis <bugs <at> gnu.support>
> > > Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
> > >   michael_heerdegen <at> web.de, trevor <at> trevorbentley.com,
> carlos <at> redhat.com
> > > 
> > > I am now following this strategy here:
> > >
> https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
> > 
> > That uses a different implementation of malloc, so I'm not sure it
> > will help us.
> 
> It will not help if you are able to interpret the PDF reports and you
> do not see anything helpful. If you do interpret those PDF reports
> please tell me as such could be useful to find possible causes or find
> other issues in Emacs.

Granted, I looked at the reports before writing that response.  I don't see anything there related to Emacs code.

> Does this here tells you anything?
> https://gnu.support/files/tmp/2020-11-22/jeprof.26889.0.f.heap.pdf

It says that most of memory was allocated by a subroutine of jemalloc.  As I'm not familiar with how jemalloc works, I see no way for us to draw any significant conclusions from that.

> Does this add module isra.0 inside tells you anything?

AFAIU, it's some internal jemalloc module.

> I am using dynamic modules like vterm and libpq, can that influence
> memory or create memory leaks?

I have no idea, but I don't think I see any of their functions in these reports.

> What is tst_post_reentrancy_raw, is that something that eats memory?

I don't know.  It's something internal to jemalloc.

> I am still running this session with jemalloc and I wish to see if
> anything will happen that blocks the work similar how it blocks with
> the normal run. This helps slightly in determination. As if run of
> Emacs with jemalloc does not cause problems one time, maybe 2-5 times
> or 10 times, that may be deduce problem to standard malloc and not
> Emacs.

The glibc malloc is the prime suspect anyway.  I don't really believe Emacs has such a glaring memory leak.  So trying different malloc implementations is, from my POV, a waste of time at this stage.

> Then in the next session I will try again the tools as described and
> submit data.
> 
> To help me understand, do you think problem is in Emacs or in glibc
> malloc?

I suspect the problem is in how we use glibc's malloc -- there are some usage patterns that cause glibc to be suboptimal in its memory usage, and I hope we will find ways to fine tune it to our needs.

But that is just a guess, and so I wish you'd use the tools pointed out by Carlos, because they are the most efficient way of collecting evidence that might allow us to make some progress here.

We have the attention of the best experts on the issue; let's use their attention and their time as best as we possibly can.

TIA





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 11:01:01 GMT) Full text and rfc822 format available.

Message #296 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 13:59:47 +0300
The session I was running with jemalloc memory-leak logging has
finished now.

Just the same thing happened. It started getting slower and slower. 

In the IceWM window manager I have a visual representation of memory
usage, and that is how I get a feeling for it; there is also a tooltip
telling me that more and more memory is used. When it starts to swap,
at around 3 GB, I turn on symon-mode and in Emacs I see more and more
swapping.

The heap file is here, 24 MB, maybe not needed for review:
https://gnu.support/files/tmp/2020-11-23/jeprof.23826.0.f.heap

The visualization is here, a 20 KB PDF file:
https://gnu.support/files/tmp/2020-11-23/jeprof.23826.0.f.heap.pdf

Do you see anything interesting inside that could tell us about the memory leaks?

Jean





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 13:29:02 GMT) Full text and rfc822 format available.

Message #299 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 14:07:47 +0300
* Carlos O'Donell <carlos <at> redhat.com> [2020-11-23 06:35]:
> On 11/19/20 10:16 PM, Jean Louis wrote:
> > * Eli Zaretskii <eliz <at> gnu.org> [2020-11-19 17:38]:
> >> I think you should try using the malloc tracing tools pointed to here:
> >>
> >>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
> > 
> > When running for a long time, Emacs will crash at a certain point, as
> > my hard disk gets full: /tmp is only about 2 gigabytes. I did not
> > understand from Carlos how to change the location of the files.
>  
> The glibc malloc tracer functionality can be adjusted with environment
> variables.
> 
> Example:
> 
> MTRACE_CTL_VERBOSE=1 MTRACE_CTL_FILE=./ls.mtr LD_PRELOAD=./libmtrace.so ls
> mtrace: writing to ./ls.mtr.350802
> 
> The appended PID helps keep the files distinct (and includes a sequence
> number in the event of conflict).

Alright, thank you.

My session started with it.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 13:29:02 GMT) Full text and rfc822 format available.

Message #302 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 16:27:52 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-22 23:17]:
> > Date: Sun, 22 Nov 2020 22:52:14 +0300
> > From: Jean Louis <bugs <at> gnu.support>
> > Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
> >   michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
> > 
> > I am now following this strategy here:
> > https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
> 
> That uses a different implementation of malloc, so I'm not sure it
> will help us.

This is how I ran the shorter Emacs session until it got blocked:

MTRACE_CTL_VERBOSE=1 MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1

And here is mtrace:

https://gnu.support/files/tmp/2020-11-23/mtraceEMACS.mtr.9294.lz

I cannot run Emacs that way, as something happens and Emacs gets
blocked. The problem arrives with M-s M-w, searching for anything on
the Internet with eww. Everything blocks, and I get the message:

error in process filter: Quit

After that C-g does not work; I cannot kill the buffer, save the
current work or other buffers, switch from buffer to buffer, nor open
any menu.

Debugging requires longer-running sessions and actual work in Emacs.

This happens every time I run Emacs with the above example command.

Unless there is a safer way of debugging, the above is not workable,
as it blocks everything, and I do use eww in my work, incidentally or
accidentally.

I hope that something will be visible from that mtrace.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 15:47:02 GMT) Full text and rfc822 format available.

Message #305 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 17:46:16 +0200
> Date: Mon, 23 Nov 2020 13:59:47 +0300
> From: Jean Louis <bugs <at> gnu.support>
> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
>   michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
> 
> In the IceWM window manager I have visual representation of memory
> usage and that is how I get feeling, there is also tooltip telling me
> that more and more memory is used. When it starts to swap like 3 GB
> then I turn on symon-mode and in Emacs I see more and more swapping.

I think I described how to write an Emacs function that you could use
to watch the vsize of the Emacs process and alert you to it being
above some threshold.
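
A shell equivalent of such a watcher (a sketch only; the pgrep-based
PID lookup and the 4 GB threshold are assumptions for illustration)
could poll the process's VmSize via /proc:

```shell
# Sketch: warn when a process's virtual size crosses a threshold.
# Falls back to the current shell's PID so the demo works without Emacs.
pid=$(pgrep -o -x emacs || echo "$$")

# VmSize in /proc/<pid>/status is reported in kB.
vsize_kb=$(awk '/^VmSize:/ {print $2}' "/proc/$pid/status")

threshold_kb=$((4 * 1024 * 1024))   # 4 GB, an illustrative threshold

if [ "${vsize_kb:-0}" -gt "$threshold_kb" ]; then
    echo "warning: vsize ${vsize_kb} kB exceeds threshold"
fi
```

Run from cron or a loop, this would flag the runaway growth described
in this thread before the system starts swapping heavily.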

> The heap file is here 24M, maybe not needed for review:
> https://gnu.support/files/tmp/2020-11-23/jeprof.23826.0.f.heap
> 
> Visualization is here 20K PDF file:
> https://gnu.support/files/tmp/2020-11-23/jeprof.23826.0.f.heap.pdf
> 
> Do you see anything interesting inside that should tell about memory leaks?

I'm not sure.  I think I see that you have some timer that triggers a
lot of memory allocations because it conses a lot of Lisp objects.
Whether that is part of the problem or not is not clear.

Next time when your session causes the system to swap, please type:

  M-: (garbage-collect) RET

and post here the output of that (it should be a list of numbers
whose meanings are explained in the doc string of garbage-collect).

Also, I think I asked you to tell how large your buffers are by
evaluating the following (again, near the point where your session
causes the system to page heavily):

  (let ((size 0))
    (dolist (buffer (buffer-list) size)
      (setq size (+ size (buffer-size buffer)))))

It is important to have both these pieces of information from the same
session at the same time near the point where you must kill Emacs, so
that we know how much memory is actually used by your session at that
point (as opposed to memory that is "free" in the heap, but was not
returned to the OS).

Thanks.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 15:55:02 GMT) Full text and rfc822 format available.

Message #308 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Jean Louis <bugs <at> gnu.support>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, trevor <at> trevorbentley.com,
 michael_heerdegen <at> web.de, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 10:54:10 -0500
On 11/23/20 8:27 AM, Jean Louis wrote:
> * Eli Zaretskii <eliz <at> gnu.org> [2020-11-22 23:17]:
>>> Date: Sun, 22 Nov 2020 22:52:14 +0300
>>> From: Jean Louis <bugs <at> gnu.support>
>>> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
>>>   michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
>>>
>>> I am now following this strategy here:
>>> https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
>>
>> That uses a different implementation of malloc, so I'm not sure it
>> will help us.
> 
> This is how I have run the shorter Emacs session until it got blocked:
> 
> MTRACE_CTL_VERBOSE=1 MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> 
> And here is mtrace:
> 
> https://gnu.support/files/tmp/2020-11-23/mtraceEMACS.mtr.9294.lz
> 
> I cannot run Emacs that way as something happens and Emacs get
> blocked. Problem arrives with M-s M-w to search for anything on
> Internet with eww. Anything blocks. And I get message:
> 
> error in process filter: Quit

Sorry, please drop MTRACE_CTL_VERBOSE=1, as it adds output to stdout
which may affect the process if using pipes.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 16:46:01 GMT) Full text and rfc822 format available.

Message #311 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Deus Max <deusmax <at> gmx.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: carlos <at> redhat.com, fweimer <at> redhat.com, dj <at> redhat.com, 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Mon, 23 Nov 2020 18:45:23 +0200
On Mon, Nov 23 2020, Eli Zaretskii wrote:

>> From: Deus Max <deusmax <at> gmx.com>
>> Cc: DJ Delorie <dj <at> redhat.com>,  Eli Zaretskii <eliz <at> gnu.org>,
>>   fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org
>> Date: Sun, 22 Nov 2020 22:19:29 +0200
>>
>> If it is easy to provide instructions/recommendations on how to run
>> Emacs to produce a useful trace report, I will be happy to do so,
>> even to recompile as needed.
>
> Carlos provided a pointer to the tracing tools, see
>
>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
>
> There are some instructions there; if something is not clear enough, I
> suggest to ask specific questions here.
>
> Thanks.

Will read and try it out.
Thank you.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 17:08:01 GMT) Full text and rfc822 format available.

Message #314 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Deus Max <deusmax <at> gmx.com>
Cc: carlos <at> redhat.com, fweimer <at> redhat.com, dj <at> redhat.com, 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Mon, 23 Nov 2020 19:07:16 +0200
> From: Deus Max <deusmax <at> gmx.com>
> Cc:  carlos <at> redhat.com,  dj <at> redhat.com,  fweimer <at> redhat.com,
>   43389 <at> debbugs.gnu.org
> Date: Mon, 23 Nov 2020 18:45:23 +0200
> 
> >   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#158
> >
> > There are some instructions there; if something is not clear enough, I
> > suggest to ask specific questions here.
> >
> > Thanks.
> 
> Will read and try it out.

Thanks.  Please find more detailed instructions here:

  https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#284

with an important update here:

  https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#308




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 17:20:01 GMT) Full text and rfc822 format available.

Message #317 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, Jean Louis <bugs <at> gnu.support>,
 dj <at> redhat.com, michael_heerdegen <at> web.de, trevor <at> trevorbentley.com,
 carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 18:19:32 +0100
> The glibc malloc is the prime suspect anyway.  I don't really believe Emacs had
> such a glaring memory leak.

This has to be something introduced fairly recently, right?

I didn't have any such problems before, but since maybe a few weeks
ago, I have also experienced heavy lockdowns of my entire OS, to the
point where all of X11 got unresponsive; when it happens I can't even
switch to a terminal to kill Emacs. What I do is Alt-Shift to another
virtual Linux console. I don't even need to log into the system in
that console; I can then Alt-Shift back to the one I am logged into,
and everything is normal. Emacs is restarted every time by systemd,
and everything is responsive and working as normal.

This started some time ago, and I have noticed that it happens when I
am cleaning my disk and reading big directories in Dired (I have some
with ~7k-10k files in them). I was using Helm to complete paths while
I was shifting files and folders around. After maybe an hour or so I
would experience a big slowdown. I don't have a swap file enabled on
my system at all, so I am not sure what was going on, and I haven't
had time to participate in this memory-leak investigation yet. I
haven't experienced any problems since I last recompiled Emacs, which
was on the 18th (last Wednesday). I recompiled without GTK this time,
but I have no idea whether that has anything to do with the issue; it
was just a wild shot to see if things are better.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 17:30:02 GMT) Full text and rfc822 format available.

Message #320 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, Jean Louis <bugs <at> gnu.support>,
 dj <at> redhat.com, michael_heerdegen <at> web.de, trevor <at> trevorbentley.com,
 carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 18:29:40 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

>> Date: Mon, 23 Nov 2020 13:59:47 +0300
>> From: Jean Louis <bugs <at> gnu.support>
>> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
>>   michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
>> 
>> In the IceWM window manager I have a visual representation of memory
>> usage, and that is how I get a feeling for it; there is also a tooltip
>> telling me that more and more memory is used. When it starts to swap,
>> at around 3 GB, I turn on symon-mode, and in Emacs I see more and more
>> swapping.
>
> I think I described how to write an Emacs function that you could use
> to watch the vsize of the Emacs process and alert you to it being
> above some threshold.
>
>> The heap file is here 24M, maybe not needed for review:
>> https://gnu.support/files/tmp/2020-11-23/jeprof.23826.0.f.heap
>> 
>> Visualization is here 20K PDF file:
>> https://gnu.support/files/tmp/2020-11-23/jeprof.23826.0.f.heap.pdf
>> 
>> Do you see anything interesting inside that should tell about memory leaks?
>
> I'm not sure.  I think I see that you have some timer that triggers a
> lot of memory allocations because it conses a lot of Lisp objects.
> Whether that is part of the problem or not is not clear.
>
> Next time when your session causes the system to swap, please type:
>
>   M-: (garbage-collect) RET
>
> and post here the output of that (it should be a list of numbers
> whose meanings are explained in the doc string of garbage-collect).
>
> Also, I think I asked you to report how large your buffers are by
> evaluating the following (again, near the point where your session
> causes the system to page heavily):
>
>   (let ((size 0))
>     (dolist (buffer (buffer-list) size)
>       (setq size (+ size (buffer-size buffer)))))
>
> It is important to have both these pieces of information from the same
> session at the same time near the point where you must kill Emacs, so
> that we know how much memory is actually used by your session at that
> point (as opposed to memory that is "free" in the heap, but was not
> returned to the OS).
>
> Thanks.
For me it happens really, really fast. Things work normally, and then
suddenly everything freezes, and after the first freeze it takes forever
to see the result of any keypress. For example, video in Firefox slows
down to maybe a frame per minute; I can see that the system is alive,
but it is impossible to type something like (garbage-collect) and see
the result; I would be sitting here for a day :-).

The only thing I can do is switch to another console and then back. By
that time the Emacs process has been restarted and everything is normal.
I don't use a swap file at all, and I can't believe that Emacs is eating
up 32 GB of RAM either. However, I can't type any command to see what it
is peaking at, since everything is effectively frozen. I have seen it at
800 MB on my machine at some point, but that is far from the 32 GB I
have.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 17:45:02 GMT) Full text and rfc822 format available.

Message #323 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 19:44:07 +0200
> From: Arthur Miller <arthur.miller <at> live.com>
> Cc: Jean Louis <bugs <at> gnu.support>,  fweimer <at> redhat.com,
>   43389 <at> debbugs.gnu.org,  dj <at> redhat.com,  michael_heerdegen <at> web.de,
>   trevor <at> trevorbentley.com,  carlos <at> redhat.com
> Date: Mon, 23 Nov 2020 18:19:32 +0100
> 
> > The glibc malloc is the prime suspect anyway.  I don't really believe Emacs had
> > such a glaring memory leak.
> 
> This has to be something introduced fairly recently, right?

Maybe, I'm not sure.  Since we introduced the pdumper, we use malloc
somewhat differently, and OTOH glibc removed some of the malloc hooks
we used to use in versions of Emacs before 26.  In addition, glibc is
also being developed, and maybe some change there somehow triggered
this.

As you see, there's more than one factor that could possibly be
related.

> I didn't have any such problems before, but since maybe a few weeks ago I
> have also experienced heavy lockups of my entire OS, to the point where
> all of X11 becomes unresponsive; when it happens I can't even switch to
> a terminal to kill Emacs. What I do is Alt-Shift to another virtual
> Linux console. I don't even need to log into the system on that console;
> I can then Alt-Shift back to the one I am logged into, and everything is
> normal. Emacs is restarted by systemd every time, and everything is
> responsive and working as normal.
> 
> This started some time ago, and I have noticed that it happens when I am
> cleaning my disk and reading big directories in Dired (I have some with
> ~7k-10k files in them). I was using Helm to complete paths while I was
> shifting files and folders around. After maybe an hour or so I would
> experience a big slowdown. I don't have a swap file enabled on my system
> at all, so I am not sure what was going on, and I haven't had time to
> dig into this memory leak yet. I haven't experienced any problems since
> I last recompiled Emacs, which was on the 18th (last Wednesday). I
> recompiled without GTK this time, but I have no idea whether that has
> anything to do with the issue; it was just a wild shot to see if things
> are better.

If the problem is memory, I suggest looking at the system log to see
whether there are any signs of that.
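On a systemd-based system, one minimal sketch of such a log check (the use of journalctl and the search pattern are assumptions, not from the original message):

```shell
# Search the kernel log for OOM-killer activity from the last day
# (assumes a systemd system with journalctl available).
journalctl -k --since "-1 day" | grep -iE 'oom|out of memory'
```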




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 17:46:01 GMT) Full text and rfc822 format available.

Message #326 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 19:45:44 +0200
> From: Arthur Miller <arthur.miller <at> live.com>
> Cc: Jean Louis <bugs <at> gnu.support>,  fweimer <at> redhat.com,
>   43389 <at> debbugs.gnu.org,  dj <at> redhat.com,  michael_heerdegen <at> web.de,
>   trevor <at> trevorbentley.com,  carlos <at> redhat.com
> Date: Mon, 23 Nov 2020 18:29:40 +0100
> 
> For me it happens really, really fast. Things work normally, and then
> suddenly everything freezes, and after the first freeze it takes forever
> to see the result of any keypress. For example, video in Firefox slows
> down to maybe a frame per minute; I can see that the system is alive,
> but it is impossible to type something like (garbage-collect) and see
> the result; I would be sitting here for a day :-).

That doesn't sound like a memory problem to me.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 18:35:02 GMT) Full text and rfc822 format available.

Message #329 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 19:34:26 +0100
[Message part 1 (text/plain, inline)]
Eli Zaretskii <eliz <at> gnu.org> writes:

>> From: Arthur Miller <arthur.miller <at> live.com>
>> Cc: Jean Louis <bugs <at> gnu.support>,  fweimer <at> redhat.com,
>>   43389 <at> debbugs.gnu.org,  dj <at> redhat.com,  michael_heerdegen <at> web.de,
>>   trevor <at> trevorbentley.com,  carlos <at> redhat.com
>> Date: Mon, 23 Nov 2020 18:19:32 +0100
>> 
>> > The glibc malloc is the prime suspect anyway.  I don't really believe Emacs had
>> > such a glaring memory leak.
>> 
>> This has to be something introduced fairly recently, right?
>
> Maybe, I'm not sure.  Since we introduced the pdumper, we use malloc
> somewhat differently, and OTOH glibc removed some of the malloc hooks
> we used to use in versions of Emacs before 26.  In addition, glibc is
> also being developed, and maybe some change there somehow triggered
> this.
It has been a long time since v26, and the pdumper as well :-) You know
I rebuild all the time and am on a relatively recent master, so I would
have noticed it earlier; it must be something from the last month or so.
I am not claiming anything exact, but it is not too far back.

> As you see, there's more than one factor that could possibly be
> related.
Yeah; I understand that :-). 

>> I didn't have any such problems before, but since maybe a few weeks ago I
>> have also experienced heavy lockups of my entire OS, to the point where
>> all of X11 becomes unresponsive; when it happens I can't even switch to
>> a terminal to kill Emacs. What I do is Alt-Shift to another virtual
>> Linux console. I don't even need to log into the system on that console;
>> I can then Alt-Shift back to the one I am logged into, and everything is
>> normal. Emacs is restarted by systemd every time, and everything is
>> responsive and working as normal.
>> 
>> This started some time ago, and I have noticed that it happens when I am
>> cleaning my disk and reading big directories in Dired (I have some with
>> ~7k-10k files in them). I was using Helm to complete paths while I was
>> shifting files and folders around. After maybe an hour or so I would
>> experience a big slowdown. I don't have a swap file enabled on my system
>> at all, so I am not sure what was going on, and I haven't had time to
>> dig into this memory leak yet. I haven't experienced any problems since
>> I last recompiled Emacs, which was on the 18th (last Wednesday). I
>> recompiled without GTK this time, but I have no idea whether that has
>> anything to do with the issue; it was just a wild shot to see if things
>> are better.
>
> If the problem is memory, I suggest looking at the system log to see
> whether there are any signs of that.
Nothing else crashes, and I have 32 GB, so I am not sure what the
problem can be.

It is obvious that Emacs causes the lockup, but I don't know how.
I am not really sure what to make of the syslog in this case either.

You can take a peek at the last crash I had (the 17th, last week), if it
tells you anything more than which apps I use :-). I was playing music
with Emacs, so you will see it start with pulseaudio, and what happened
until Emacs restarted. As you see, everything happens within a 4-second
interval, so that must be the point when I switched to another console
with Alt+Shift. I have no idea why systemd kills Emacs when I do that
either, but I discovered that it does. My intention from the beginning
was to just pkill Emacs, hoping it was only X11 that was locked up, not
the entire system, but I discovered that I didn't even need to kill
Emacs; it was already dead by the time I logged into the other console,
and everything seemed to work fine after the switch, so I have kept
using that as my workaround since this started, 3-4 weeks ago, at least
as far as I am aware.

[crash-log.txt (text/plain, attachment)]

Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 18:41:01 GMT) Full text and rfc822 format available.

Message #332 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 19:40:23 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

>> From: Arthur Miller <arthur.miller <at> live.com>
>> Cc: Jean Louis <bugs <at> gnu.support>,  fweimer <at> redhat.com,
>>   43389 <at> debbugs.gnu.org,  dj <at> redhat.com,  michael_heerdegen <at> web.de,
>>   trevor <at> trevorbentley.com,  carlos <at> redhat.com
>> Date: Mon, 23 Nov 2020 18:29:40 +0100
>> 
>> For me it happens really, really fast. Things work normally, and then
>> suddenly everything freezes, and after the first freeze it takes forever
>> to see the result of any keypress. For example, video in Firefox slows
>> down to maybe a frame per minute; I can see that the system is alive,
>> but it is impossible to type something like (garbage-collect) and see
>> the result; I would be sitting here for a day :-).
>
> That doesn't sound like a memory problem to me.
OK, acknowledged; any idea what it could be? I have attached a syslog
from one crash point; you can see Emacs is using almost 8 GB of RAM, but
I have 32, so there is plenty of unused RAM left. Maybe Emacs's internal
bookkeeping of memory? Number of pages? I have no idea myself; sorry if
I am not so helpful.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 18:57:01 GMT) Full text and rfc822 format available.

Message #335 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Mon, 23 Nov 2020 21:55:57 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-18 18:04]:
> > Date: Wed, 18 Nov 2020 00:45:48 +0300
> > From: Jean Louis <bugs <at> gnu.support>
> > Cc: eliz <at> gnu.org, fweimer <at> redhat.com, trevor <at> trevorbentley.com,
> >   michael_heerdegen <at> web.de, carlos <at> redhat.com, 43389 <at> debbugs.gnu.org
> > 
> > Maybe some simple new and automatic function could be temporarily
> > included to spit errors to output on what is Emacs doing when it
> > starts swapping (if it is swapping), then such errors could at least
> > be captured in a file even if I have to reboot computer.

I now use M-x vsize-with-timer for 2 GB and M-x good-bye to capture
that basic data.

(defun vsize-value ()
  "Return a list (NAME VALUE) for the vsize attribute of this process."
  ;; Look the attribute up by key; `(elt attributes 5)' would depend on
  ;; the order in which `process-attributes' returns the alist.
  (let ((vsize (assq 'vsize (process-attributes (emacs-pid)))))
    (list (car vsize) (cdr vsize))))

(defun vsize-check (&optional gb)
  "Emit a message when the Emacs vsize exceeds GB GiB (default 2)."
  (let* ((vsize (cadr (vsize-value)))  ; reported in KiB
	 (kib-per-gib 1048576.0)
	 (limit (* (or gb 2) kib-per-gib)))
    (when (> vsize limit)
      (message "vsize: %.02fG" (/ vsize kib-per-gib)))))

(defun vsize-with-timer (gb)
  "Every 30 seconds, warn if the Emacs vsize exceeds GB GiB."
  (interactive "nGiB: ")
  (message "Timer: %s" (run-with-timer 1 30 #'vsize-check gb)))

(defun good-bye ()
  "Dump GC stats, total buffer size, uptime, pid and vsize to a file."
  (interactive)
  (let* ((garbage (garbage-collect))
	 (size 0)
	 (buffers-size (dolist (buffer (buffer-list) size)
			 (setq size (+ size (buffer-size buffer)))))
	 (uptime (emacs-uptime))
	 (pid (emacs-pid))
	 (vsize (vsize-value))
	 (file (format "~/tmp/emacs-session-%s.el" pid))
	 (data (list (list 'uptime uptime) (list 'pid pid)
		     (list 'garbage garbage) (list 'buffers-size buffers-size)
		     (list 'vsize vsize))))
    (with-temp-file file
      (insert (prin1-to-string data)))
    (message file)))




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 18:57:02 GMT) Full text and rfc822 format available.

Message #338 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 21:33:09 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-23 18:46]:
> I think I described how to write an Emacs function that you could use
> to watch the vsize of the Emacs process and alert you to it being
> above some threshold.

Yes I will do.

I will use this to inform you:

(defun good-bye ()
  "Dump GC stats, total buffer size, uptime and pid to a file."
  (interactive)
  (let* ((garbage (garbage-collect))
	 (size 0)
	 (buffers-size (dolist (buffer (buffer-list) size)
			 (setq size (+ size (buffer-size buffer)))))
	 (uptime (emacs-uptime))
	 (pid (emacs-pid))
	 (file (format "~/tmp/emacs-session-%s.el" pid))
	 (data (list (list 'uptime uptime) (list 'pid pid)
		     (list 'garbage garbage) (list 'buffers-size buffers-size))))
    (with-temp-file file
      (insert (prin1-to-string data)))
    (message file)))




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:01:02 GMT) Full text and rfc822 format available.

Message #341 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 21:58:28 +0300
* Carlos O'Donell <carlos <at> redhat.com> [2020-11-23 18:54]:
> On 11/23/20 8:27 AM, Jean Louis wrote:
> > * Eli Zaretskii <eliz <at> gnu.org> [2020-11-22 23:17]:
> >>> Date: Sun, 22 Nov 2020 22:52:14 +0300
> >>> From: Jean Louis <bugs <at> gnu.support>
> >>> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
> >>>   michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
> >>>
> >>> I am now following this strategy here:
> >>> https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
> >>
> >> That uses a different implementation of malloc, so I'm not sure it
> >> will help us.
> > 
> > This is how I have run the shorter Emacs session until it got blocked:
> > 
> > MTRACE_CTL_VERBOSE=1 MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> > 
> > And here is mtrace:
> > 
> > https://gnu.support/files/tmp/2020-11-23/mtraceEMACS.mtr.9294.lz
> > 
> > I cannot run Emacs that way, as something happens and Emacs gets
> > blocked. The problem arrives with M-s M-w, searching for anything on
> > the Internet with eww. Everything blocks. And I get the message:
> > 
> > error in process filter: Quit
> 
> Sorry, please drop MTRACE_CTL_VERBOSE=1, as it adds output to stdout
> which may affect the process if using pipes.

# MTRACE_CTL_VERBOSE=1
MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1

I have tried it like the above, and it blocks as soon as eww loads some
page, with the same error as previously.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:07:01 GMT) Full text and rfc822 format available.

Message #344 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:06:06 +0300
* Arthur Miller <arthur.miller <at> live.com> [2020-11-23 21:34]:
> It has been a long time since v26, and the pdumper as well :-) You know
> I rebuild all the time and am on a relatively recent master, so I would
> have noticed it earlier; it must be something from the last month or so.
> I am not claiming anything exact, but it is not too far back.
 
I do not remember having this problem by the Bwindi Impenetrable Forest
until July 14th, and the computer was turned on all the time; it went to
sleep, then was turned on again. But that was a different computer with
8 GB, while this one has 4 GB.

I was using EXWM. My experience is similar to Arthur's, though I think
it has been going on a little longer than one month.

Maybe instead of all the debuggers, our human experience can narrow down
the approximate change that introduced this.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:17:02 GMT) Full text and rfc822 format available.

Message #347 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 21:15:56 +0200
> From: Arthur Miller <arthur.miller <at> live.com>
> Cc: bugs <at> gnu.support,  fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org,
>   dj <at> redhat.com,  michael_heerdegen <at> web.de,  trevor <at> trevorbentley.com,
>   carlos <at> redhat.com
> Date: Mon, 23 Nov 2020 19:34:26 +0100
> 
> >> This has to be something introduced fairly recently, right?
> >
> > Maybe, I'm not sure.  Since we introduced the pdumper, we use malloc
> > somewhat differently, and OTOH glibc removed some of the malloc hooks
> > we used to use in versions of Emacs before 26.  In addition, glibc is
> > also being developed, and maybe some change there somehow triggered
> > this.
> It has been a long time since v26, and the pdumper as well :-) You know
> I rebuild all the time and am on a relatively recent master, so I would
> have noticed it earlier; it must be something from the last month or so,

Not necessarily.  This problem seems to happen rarely, and not for
everyone.  So it's entirely possible you didn't see it by sheer luck.

> > If the problem is memory, I suggest looking at the system log to see
> > whether there are any signs of that.
> Nothing else crashes, and I have 32 GB, so I am not sure what the
> problem can be.

Then it most probably isn't memory.  IOW, not the problem discussed in
this bug report.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:25:02 GMT) Full text and rfc822 format available.

Message #350 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 21:23:50 +0200
> From: Arthur Miller <arthur.miller <at> live.com>
> Cc: bugs <at> gnu.support,  fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org,
>   dj <at> redhat.com,  michael_heerdegen <at> web.de,  trevor <at> trevorbentley.com,
>   carlos <at> redhat.com
> Date: Mon, 23 Nov 2020 19:40:23 +0100
> 
> > That doesn't sound like a memory problem to me.
> Ok; acknowledged; any idea what it could be?

Actually, I take that back: it does look like the OOM killer that
killed Emacs:

  nov 17 16:32:44 pascal kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/user <at> 1000.service,task=emacs,pid=>
  nov 17 16:32:44 pascal kernel: Out of memory: Killed process 605 (emacs) total-vm:29305960kB, anon-rss:29035892kB, file-rss:0kB, shmem-rss:5096kB, UID:1000 pgtables:57144kB oom_score_adj:0

> I have attached a syslog from one crash point; you can see Emacs
> is using almost 8 GB of RAM, but I have 32, so there is plenty of
> unused RAM left.

It says above that the total VM size of the Emacs process was 29GB,
not 8.

So maybe yours is the same problem after all.

How about writing a simple function that reports the total VM size of
the Emacs process (via process-attributes), and running it from some
timer?  Then you could see how long it takes you to get from, say, 2GB
to more than 20GB, and maybe also take notes of what you are doing at
that time?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:35:02 GMT) Full text and rfc822 format available.

Message #353 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 21:34:33 +0200
> Date: Mon, 23 Nov 2020 21:58:28 +0300
> From: Jean Louis <bugs <at> gnu.support>
> Cc: Eli Zaretskii <eliz <at> gnu.org>, fweimer <at> redhat.com,
>   43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de,
>   trevor <at> trevorbentley.com
> 
> > Sorry, please drop MTRACE_CTL_VERBOSE=1, as it adds output to stdout
> > which may affect the process if using pipes.
> 
> # MTRACE_CTL_VERBOSE=1
> MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1

Any reason you redirect stderr to stdout?  I'm not saying that is the
reason for the EWW problems, but just to be sure, can you try without
that?  The trace goes to stderr, right?  So just "2> file" should be
sufficient to collect the trace.  Carlos, am I right?
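Concretely, the suggested invocation might look like this (a sketch only; the paths are the ones from the quoted command, and whether the trace itself ends up on stderr is exactly the open question here):

```shell
# Run Emacs under libmtrace with only stderr redirected to a file,
# instead of merging stderr into stdout (sketch; paths as in the
# earlier command, not verified against libmtrace's behavior).
MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr \
LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so \
emacs 2> emacs-stderr.log
```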




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:38:01 GMT) Full text and rfc822 format available.

Message #356 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 14:37:24 -0500
On 11/23/20 1:58 PM, Jean Louis wrote:
> * Carlos O'Donell <carlos <at> redhat.com> [2020-11-23 18:54]:
>> On 11/23/20 8:27 AM, Jean Louis wrote:
>>> * Eli Zaretskii <eliz <at> gnu.org> [2020-11-22 23:17]:
>>>>> Date: Sun, 22 Nov 2020 22:52:14 +0300
>>>>> From: Jean Louis <bugs <at> gnu.support>
>>>>> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
>>>>>   michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
>>>>>
>>>>> I am now following this strategy here:
>>>>> https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Leak-Checking
>>>>
>>>> That uses a different implementation of malloc, so I'm not sure it
>>>> will help us.
>>>
>>> This is how I have run the shorter Emacs session until it got blocked:
>>>
>>> MTRACE_CTL_VERBOSE=1 MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
>>>
>>> And here is mtrace:
>>>
>>> https://gnu.support/files/tmp/2020-11-23/mtraceEMACS.mtr.9294.lz
>>>
>>> I cannot run Emacs that way, as something happens and Emacs gets
>>> blocked. The problem arrives with M-s M-w, searching for anything on
>>> the Internet with eww. Everything blocks. And I get the message:
>>>
>>> error in process filter: Quit
>>
>> Sorry, please drop MTRACE_CTL_VERBOSE=1, as it adds output to stdout
>> which may affect the process if using pipes.
> 
> # MTRACE_CTL_VERBOSE=1
> MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> 
> I have tried it like the above, and it blocks as soon as eww loads some
> page, with the same error as previously.

That's interesting. Are you able to attach gdb and get a backtrace to see
what the process is blocked on?
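A hedged sketch of doing that non-interactively (assumes gdb and pgrep are installed and there is a single blocked emacs process; the output file name is illustrative):

```shell
# Attach gdb to the running Emacs and dump backtraces of all threads
# without entering an interactive session (sketch only).
pid=$(pgrep -o -x emacs)
gdb -p "$pid" -batch -ex 'thread apply all bt' > emacs-backtrace.txt 2>&1
```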

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:39:02 GMT) Full text and rfc822 format available.

Message #359 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 20:38:45 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

>> From: Arthur Miller <arthur.miller <at> live.com>
>> Cc: bugs <at> gnu.support,  fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org,
>>   dj <at> redhat.com,  michael_heerdegen <at> web.de,  trevor <at> trevorbentley.com,
>>   carlos <at> redhat.com
>> Date: Mon, 23 Nov 2020 19:40:23 +0100
>> 
>> > That doesn't sound like a memory problem to me.
>> Ok; acknowledged; any idea what it could be?
>
> Actually, I take that back: it does look like the OOM killer that
> killed Emacs:
>
>   nov 17 16:32:44 pascal kernel:
> oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/user <at> 1000.service,task=emacs,pid=>
>   nov 17 16:32:44 pascal kernel: Out of memory: Killed process 605 (emacs)
> total-vm:29305960kB, anon-rss:29035892kB, file-rss:0kB, shmem-rss:5096kB,
> UID:1000 pgtables:57144kB oom_score_adj:0

>> I have attached a syslog from one crash point; you can see Emacs
>> is using almost 8 GB of RAM, but I have 32, so there is plenty of
>> unused RAM left.
Haha, I'm such a noob :-). You have an eagle eye; I wasn't looking
carefully. I just looked at the process list, which showed ~7 GB of RAM.

> It says above that the total VM size of the Emacs process was 29GB,
> not 8.
>
> So maybe yours is the same problem after all.

> How about writing a simple function that reports the total VM size of
> the Emacs process (via process-attributes), and running it from some
> timer?  Then you could see how long it takes you to get from, say, 2GB
> to more than 20GB, and maybe also take notes of what you are doing at
> that time?
Ouch; I have to look up process-attributes in the Info manual ... :-(.
I planned to do something else today, but I'll give it a look.

By the way, I haven't experienced this since the 18th of this month, the
day after I rebuilt. So it has been almost 5 days without a crash. But I
also don't shift big folders around any more; I cleaned up my old backup
drive. Is there some hefty RAM-taxing benchmark with lots of random list
creations and deletions I could run; maybe some suitable ert test
already written?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:41:01 GMT) Full text and rfc822 format available.

Message #362 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Andrea Corallo <akrl <at> sdf.org>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Arthur Miller <arthur.miller <at> live.com>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 19:39:35 +0000
I think it would be nice to have a script that monitors the Emacs memory
footprint and attaches gdb to it when the memory usage is over a certain
(high) threshold.

This way it should be easy to see what we are doing, because at that
point we are supposed to be allocating extremely often.
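Such a watchdog might look roughly like this (a hedged sketch, not a
script from this thread; the threshold default, 30-second poll interval,
and log file name are invented):

```shell
#!/bin/sh
# Poll an Emacs process's resident set size and attach gdb once it
# crosses a threshold.  VmRSS is reported in kB in /proc/PID/status
# on Linux.
rss_kb() {
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

monitor() {
    pid=$1
    threshold_kb=${2:-20000000}   # ~20 GB, in kB
    while kill -0 "$pid" 2>/dev/null; do
        rss=$(rss_kb "$pid")
        if [ "${rss:-0}" -ge "$threshold_kb" ]; then
            # Grab backtraces from all threads, then detach.
            gdb -p "$pid" -batch -ex 'thread apply all bt' > emacs-bt.log 2>&1
            break
        fi
        sleep 30
    done
}

# Example: monitor "$(pgrep -o emacs)"
```

Running gdb in batch mode with `-ex` keeps the attach short, so Emacs is
only paused long enough to collect the backtraces.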

  Andrea




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:50:02 GMT) Full text and rfc822 format available.

Message #365 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 20:49:48 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

>> From: Arthur Miller <arthur.miller <at> live.com>
>> Cc: bugs <at> gnu.support,  fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org,
>>   dj <at> redhat.com,  michael_heerdegen <at> web.de,  trevor <at> trevorbentley.com,
>>   carlos <at> redhat.com
>> Date: Mon, 23 Nov 2020 19:34:26 +0100
>> 
>> >> This has to be something introduced fairly recently, right?
>> >
>> > Maybe, I'm not sure.  Since we introduced the pdumper, we use malloc
>> > somewhat differently, and OTOH glibc removed some of the malloc hooks
>> > we used to use in versions of Emacs before 26.  In addition, glibc is
>> > also being developed, and maybe some change there somehow triggered
>> > this.
>> It has past long since v 26, and pdumber as well :-) You know I am
>> rebuilding all the time and am on relatively latest master so I would
>> have noticed it earlier, so it must be something since last month or so,
>
> Not necessarily.  This problem seems to happen rarely, and not for
> everyone.  So it's entirely possible you didn't see it by sheer luck.
Of course, but why would I suddenly start to experience it? Neither my
usage pattern nor my Emacs or system configuration changed at that time.
It can't be just sheer luck; I haven't done anything differently that I
wasn't doing 2 or 6 months before; same old, just a newer master &
system updates.

The only thing that changed regularly was of course system updates:
kernel, gcc & co. So maybe, as mentioned earlier in this thread by
either you or somebody else, glibc changed, and that change triggers
something in Emacs based on how Emacs uses it. I don't know; I am not an
expert in this. Isn't Valgrind good for this kind of problem? Can I run
Emacs as a systemd service under Valgrind?





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:51:01 GMT) Full text and rfc822 format available.

Message #368 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:49:51 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-23 22:35]:
> > Date: Mon, 23 Nov 2020 21:58:28 +0300
> > From: Jean Louis <bugs <at> gnu.support>
> > Cc: Eli Zaretskii <eliz <at> gnu.org>, fweimer <at> redhat.com,
> >   43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de,
> >   trevor <at> trevorbentley.com
> > 
> > > Sorry, please drop MTRACE_CTL_VERBOSE=1, as it adds output to stdout
> > > which may affect the process if using pipes.
> > 
> > # MTRACE_CTL_VERBOSE=1
> > MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> 
> Any reason you redirect stderr to stdout?  I'm not saying that is the
> reason for the EWW problems, but just to be sure, can you try without
> that?  The trace goes to stderr, right?  So just "2> file" should be
> sufficient to collect the trace.  Carlos, am I right?

That could be. I have just tried with:

MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs

and there is some lockup; I have to invoke xkill to kill Emacs.

I wonder why it worked before.

Now it blocks also like this:

LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs

It must be something with my configuration, so I will investigate and
try again when I find out what the problem is.






Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:51:02 GMT) Full text and rfc822 format available.

Message #371 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Jean Louis <bugs <at> gnu.support>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, trevor <at> trevorbentley.com,
 michael_heerdegen <at> web.de, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 14:50:29 -0500
On 11/23/20 8:27 AM, Jean Louis wrote:
> And here is mtrace:
> https://gnu.support/files/tmp/2020-11-23/mtraceEMACS.mtr.9294.lz

Initial analysis is up:
https://sourceware.org/glibc/wiki/emacs-malloc

Nothing conclusive.

We need a longer trace that shows the problem.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:53:01 GMT) Full text and rfc822 format available.

Message #374 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 21:52:18 +0200
> From: Arthur Miller <arthur.miller <at> live.com>
> Cc: bugs <at> gnu.support,  fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org,
>   dj <at> redhat.com,  michael_heerdegen <at> web.de,  trevor <at> trevorbentley.com,
>   carlos <at> redhat.com
> Date: Mon, 23 Nov 2020 20:38:45 +0100
> 
> By the way; I haven't experienced this since 18th this month; day after
> when I rebuild. So it has been almost 5 days without a crash. But I also
> don't shift big folders any more; I cleanud up my old backup drive.
> Is there some hefty ram-tasking benchmark with lots of random list
> creations and deletions I could run; maybe some suitable ert-test
> already written?

I don't think so, and we don't have a clear idea yet regarding what
exactly causes this, so it's difficult to know what could be
relevant.  We must wait until something like that happens, and collect
data then.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 19:56:02 GMT) Full text and rfc822 format available.

Message #377 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:55:10 +0300
* Carlos O'Donell <carlos <at> redhat.com> [2020-11-23 22:37]:
> > 
> > # MTRACE_CTL_VERBOSE=1
> > MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> > 
> > I have tried like above and it will block as soon as eww is loads some
> > page with the same error as previously.
> 
> That's interesting. Are you able to attach gdb and get a backtrace to see
> what the process is blocked on?

I can press C-g once to interrupt whatever is going on, and then I get an error:

(gdb) continue
Continuing.
[New Thread 0x7f10ed01fc00 (LWP 25293)]
[New Thread 0x7f10ed007c00 (LWP 25294)]
[New Thread 0x7f10ecfefc00 (LWP 25295)]
[New Thread 0x7f10ecfd7c00 (LWP 25296)]
[Thread 0x7f10ed01fc00 (LWP 25293) exited]
[Thread 0x7f10ed007c00 (LWP 25294) exited]
[Thread 0x7f10ecfd7c00 (LWP 25296) exited]
[Thread 0x7f10ecfefc00 (LWP 25295) exited]
Here I cannot do anything with the GDB prompt; there is no prompt. I can
press C-c and I get:

(gdb) continue
Continuing.
[New Thread 0x7f10ed01fc00 (LWP 25293)]
[New Thread 0x7f10ed007c00 (LWP 25294)]
[New Thread 0x7f10ecfefc00 (LWP 25295)]
[New Thread 0x7f10ecfd7c00 (LWP 25296)]
[Thread 0x7f10ed01fc00 (LWP 25293) exited]
[Thread 0x7f10ed007c00 (LWP 25294) exited]
[Thread 0x7f10ecfd7c00 (LWP 25296) exited]
[Thread 0x7f10ecfefc00 (LWP 25295) exited]

continue
^C
Thread 1 "emacs" received signal SIGINT, Interrupt.
0x00007f10fe08fe7d in read () from /lib/libpthread.so.0





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:00:02 GMT) Full text and rfc822 format available.

Message #380 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Andrea Corallo <akrl <at> sdf.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 20:59:21 +0100
Andrea Corallo <akrl <at> sdf.org> writes:

> I think would be nice to have a script that monitors Emacs memory
> footprint and attach gdb on it when the memory usage is over a certain
> (high) threshold.
>
> This way should be easy to see what we are doing because at that point
> we are supposed to be allocating extremely often.
>
>   Andrea
Indeed.


How feasible is it to use this tool with Emacs:

https://gperftools.github.io/gperftools/heapprofile.html

By the way, has anyone tried this one (heaptrack):

https://github.com/KDE/heaptrack




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:04:01 GMT) Full text and rfc822 format available.

Message #383 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 21:03:05 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

>> From: Arthur Miller <arthur.miller <at> live.com>
>> Cc: bugs <at> gnu.support,  fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org,
>>   dj <at> redhat.com,  michael_heerdegen <at> web.de,  trevor <at> trevorbentley.com,
>>   carlos <at> redhat.com
>> Date: Mon, 23 Nov 2020 20:38:45 +0100
>> 
>> By the way; I haven't experienced this since 18th this month; day after
>> when I rebuild. So it has been almost 5 days without a crash. But I also
>> don't shift big folders any more; I cleanud up my old backup drive.
>> Is there some hefty ram-tasking benchmark with lots of random list
>> creations and deletions I could run; maybe some suitable ert-test
>> already written?
>
> I don't think so, and we don't have a clear idea yet regarding what
> exactly causes this, so it's difficult to know what could be
> relevant.  We must wait until something like that happen, and collect
> data then.
Yes yes, ok. thanks.

I'll first try to build heaptrack and see if it works well with Emacs;
I'm a little bit curious about it.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:05:02 GMT) Full text and rfc822 format available.

Message #386 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>, Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, trevor <at> trevorbentley.com,
 michael_heerdegen <at> web.de, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 15:04:33 -0500
On 11/23/20 2:34 PM, Eli Zaretskii wrote:
>> Date: Mon, 23 Nov 2020 21:58:28 +0300
>> From: Jean Louis <bugs <at> gnu.support>
>> Cc: Eli Zaretskii <eliz <at> gnu.org>, fweimer <at> redhat.com,
>>   43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de,
>>   trevor <at> trevorbentley.com
>>
>>> Sorry, please drop MTRACE_CTL_VERBOSE=1, as it adds output to stdout
>>> which may affect the process if using pipes.
>>
>> # MTRACE_CTL_VERBOSE=1
>> MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> 
> Any reason you redirect stderr to stdout?  I'm not saying that is the
> reason for the EWW problems, but just to be sure, can you try without
> that?  The trace goes to stderr, right?  So just "2> file" should be
> sufficient to collect the trace.  Carlos, am I right?
 
No, the trace goes to the trace file specified by MTRACE_CTL_FILE.

By default the tracer is as minimally intrusive as possible.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:05:02 GMT) Full text and rfc822 format available.

Message #389 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:04:35 +0200
> From: Arthur Miller <arthur.miller <at> live.com>
> Cc: bugs <at> gnu.support,  fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org,
>   dj <at> redhat.com,  michael_heerdegen <at> web.de,  trevor <at> trevorbentley.com,
>   carlos <at> redhat.com
> Date: Mon, 23 Nov 2020 20:49:48 +0100
> 
> Isn't Valgrind good for this kind of problems? Can I run emacs as a
> systemd service in Valgrind?

You can run Emacs under Valgrind, see etc/DEBUG for the details.  But
I'm not sure it will work as a systemd service.

Valgrind is only the right tool if we think there's a memory leak in
Emacs itself.
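For reference, a typical invocation along the lines of what etc/DEBUG
describes might look like this (the flags are common Memcheck options
and the log file name is my choice; this is an illustration, not a
recipe from the thread):

```shell
# Illustrative only: launch Emacs under Valgrind's Memcheck tool.
# --log-file keeps Valgrind's report out of Emacs's own stderr.
valgrind --leak-check=yes --num-callers=20 \
         --log-file=valgrind-emacs.log emacs -Q
```

Note that Emacs runs much more slowly under Valgrind, which makes it
awkward for a leak that takes days of normal use to appear.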




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:07:01 GMT) Full text and rfc822 format available.

Message #392 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 15:06:25 -0500
On 11/23/20 2:55 PM, Jean Louis wrote:
> * Carlos O'Donell <carlos <at> redhat.com> [2020-11-23 22:37]:
>>>
>>> # MTRACE_CTL_VERBOSE=1
>>> MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
>>>
>>> I have tried like above and it will block as soon as eww is loads some
>>> page with the same error as previously.
>>
>> That's interesting. Are you able to attach gdb and get a backtrace to see
>> what the process is blocked on?
> 
> I can do C-g one time to interrupt something going on, then I get error
> 
> (gdb) continue
Please issue 'thread apply all backtrace' to get a backtrace from all
the threads to see where they are stuck.

You will need debug information for this for all associated frames in
the backtrace. Depending on your distribution this may require debug
information packages.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:12:01 GMT) Full text and rfc822 format available.

Message #395 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:10:49 +0200
> Date: Mon, 23 Nov 2020 22:55:10 +0300
> From: Jean Louis <bugs <at> gnu.support>
> Cc: Eli Zaretskii <eliz <at> gnu.org>, fweimer <at> redhat.com,
>   43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de,
>   trevor <at> trevorbentley.com
> 
> > That's interesting. Are you able to attach gdb and get a backtrace to see
> > what the process is blocked on?
> 
> I can do C-g one time to interrupt something going on, then I get error
> 
> (gdb) continue
> Continuing.

Instead of "continue", type "thread apply all bt", and post the
result.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:16:01 GMT) Full text and rfc822 format available.

Message #398 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 akrl <at> sdf.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:15:15 +0200
> From: Arthur Miller <arthur.miller <at> live.com>
> Cc: Eli Zaretskii <eliz <at> gnu.org>,  fweimer <at> redhat.com,
>   43389 <at> debbugs.gnu.org,  bugs <at> gnu.support,  dj <at> redhat.com,
>   michael_heerdegen <at> web.de,  trevor <at> trevorbentley.com,  carlos <at> redhat.com
> Date: Mon, 23 Nov 2020 20:59:21 +0100
> 
> How hard/possible is to use this tool in Emacs:
> 
> https://gperftools.github.io/gperftools/heapprofile.html

AFAIU, this cannot be used with glibc's malloc, it needs libtcmalloc
instead.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:17:02 GMT) Full text and rfc822 format available.

Message #401 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:16:35 +0200
> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
>  michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
> From: Carlos O'Donell <carlos <at> redhat.com>
> Date: Mon, 23 Nov 2020 15:04:33 -0500
> 
> > Any reason you redirect stderr to stdout?  I'm not saying that is the
> > reason for the EWW problems, but just to be sure, can you try without
> > that?  The trace goes to stderr, right?  So just "2> file" should be
> > sufficient to collect the trace.  Carlos, am I right?
>  
> No, the trace goes to the trace file specified by MTRACT_CTL_FILE.

Thanks, that's even easier: it means no standard stream needs to be
redirected.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:19:01 GMT) Full text and rfc822 format available.

Message #404 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:59:25 +0300
* Carlos O'Donell <carlos <at> redhat.com> [2020-11-23 22:50]:
> On 11/23/20 8:27 AM, Jean Louis wrote:
> > And here is mtrace:
> > https://gnu.support/files/tmp/2020-11-23/mtraceEMACS.mtr.9294.lz
> 
> Initial analysis is up:
> https://sourceware.org/glibc/wiki/emacs-malloc
> 
> Nothing conclusive.
> 
> We need a longer trace that shows the problem.

At least it says there is nothing pathological with my behavior :-)

And it could just be a wrong indication.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:19:02 GMT) Full text and rfc822 format available.

Message #407 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 23:18:13 +0300
* Carlos O'Donell <carlos <at> redhat.com> [2020-11-23 23:06]:
> On 11/23/20 2:55 PM, Jean Louis wrote:
> > * Carlos O'Donell <carlos <at> redhat.com> [2020-11-23 22:37]:
> >>>
> >>> # MTRACE_CTL_VERBOSE=1
> >>> MTRACE_CTL_FILE=/home/data1/protected/tmp/mtraceEMACS.mtr LD_PRELOAD=/home/data1/protected/Programming/git/glibc-malloc-trace-utils/libmtrace.so emacs >> $DEBUG 2>&1
> >>>
> >>> I have tried like above and it will block as soon as eww is loads some
> >>> page with the same error as previously.
> >>
> >> That's interesting. Are you able to attach gdb and get a backtrace to see
> >> what the process is blocked on?
> > 
> > I can do C-g one time to interrupt something going on, then I get error
> > 
> > (gdb) continue
> Please issue 'thread apply all backtrace' to get a backtrace from all
> the threads to see where they are stuck.
> 
> You will need debug information for this for all associated frames in
> the backtrace. Depending on your distribution this may require debug
> information packages.

sudo gdb -pid 25584
GNU gdb (GDB) 7.12.1
Copyright (C) 2017 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".
Attaching to process 25584
[New LWP 25585]
[New LWP 25586]
[New LWP 25588]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/libthread_db.so.1".
0x00007f6afd4765dc in pselect () from /lib/libc.so.6
(gdb) continue
Continuing.
[New Thread 0x7f6aed1dbc00 (LWP 25627)]
[Thread 0x7f6aed1dbc00 (LWP 25627) exited]
[New Thread 0x7f6aed1dbc00 (LWP 25628)]
[Thread 0x7f6aed1dbc00 (LWP 25628) exited]
  C-c C-c
Thread 1 "emacs" received signal SIGINT, Interrupt.
0x00007f6afd4765dc in pselect () from /lib/libc.so.6
(gdb) thread apply backtrace
Invalid thread ID: backtrace
(gdb) thread apply all backtrace

Thread 4 (Thread 0x7f6aee2ae700 (LWP 25588)):
#0  0x00007f6afd47435d in poll () at /lib/libc.so.6
#1  0x00007f6b011a4b98 in  () at /lib/libglib-2.0.so.0
#2  0x00007f6b011a4f52 in g_main_loop_run () at /lib/libglib-2.0.so.0
#3  0x00007f6b019b62c8 in  () at /usr/lib/libgio-2.0.so.0
#4  0x00007f6b011ccfca in  () at /lib/libglib-2.0.so.0
#5  0x00007f6afe242069 in start_thread () at /lib/libpthread.so.0
#6  0x00007f6afd47e30f in clone () at /lib/libc.so.6

Thread 3 (Thread 0x7f6aeeaaf700 (LWP 25586)):
#0  0x00007f6afd47435d in poll () at /lib/libc.so.6
#1  0x00007f6b011a4b98 in  () at /lib/libglib-2.0.so.0
#2  0x00007f6b011a4cbe in g_main_context_iteration () at /lib/libglib-2.0.so.0
#3  0x00007f6aeeab755d in  () at /usr/lib/gio/modules/libdconfsettings.so
#4  0x00007f6b011ccfca in  () at /lib/libglib-2.0.so.0
#5  0x00007f6afe242069 in start_thread () at /lib/libpthread.so.0
#6  0x00007f6afd47e30f in clone () at /lib/libc.so.6

Thread 2 (Thread 0x7f6aef6c8700 (LWP 25585)):
#0  0x00007f6afd47435d in poll () at /lib/libc.so.6
#1  0x00007f6b011a4b98 in  () at /lib/libglib-2.0.so.0
---Type <return> to continue, or q <return> to quit---
#2  0x00007f6b011a4cbe in g_main_context_iteration () at /lib/libglib-2.0.so.0
#3  0x00007f6b011a4d12 in  () at /lib/libglib-2.0.so.0
#4  0x00007f6b011ccfca in  () at /lib/libglib-2.0.so.0
#5  0x00007f6afe242069 in start_thread () at /lib/libpthread.so.0
#6  0x00007f6afd47e30f in clone () at /lib/libc.so.6

Thread 1 (Thread 0x7f6b049e9100 (LWP 25584)):
#0  0x00007f6afd4765dc in pselect () at /lib/libc.so.6
#1  0x00000000005cf500 in really_call_select (arg=0x7ffc16edfa80) at thread.c:592
#2  0x00000000005d006e in flush_stack_call_func (arg=0x7ffc16edfa80, func=0x5cf4b0 <really_call_select>) at lisp.h:3791
#3  0x00000000005d006e in thread_select (func=<optimized out>, max_fds=max_fds <at> entry=19, rfds=rfds <at> entry=0x7ffc16edfb60, wfds=wfds <at> entry=0x7ffc16edfbe0, efds=efds <at> entry=0x0, timeout=timeout <at> entry=0x7ffc16ee0170, sigmask=0x0) at thread.c:624
#4  0x00000000005eb023 in xg_select (fds_lim=19, rfds=rfds <at> entry=0x7ffc16ee02a0, wfds=0x7ffc16ee0320, efds=<optimized out>, timeout=<optimized out>, sigmask=<optimized out>) at xgselect.c:131
#5  0x00000000005aeab4 in wait_reading_process_output (time_limit=time_limit <at> entry=30, nsecs=nsecs <at> entry=0, read_kbd=-1, do_display=do_display <at> entry=true, wait_for_cell=wait_for_cell <at> entry=0x0, wait_proc=wait_proc <at> entry=0x0, just_wait_proc=0) at process.c:5604
#6  0x00000000004253f8 in sit_for (timeout=timeout <at> entry=0x7a, reading=reading <at> entry=true, display_option=display_option <at> entry=1) at dispnew.c:6111
#7  0x00000000004fe415 in read_char (commandflag=commandflag <at> entry=1, map=map <at> entry=0x3184a63, p---Type <return> to continue, or q <return> to quit---
rev_event=<optimized out>, used_mouse_menu=used_mouse_menu <at> entry=0x7ffc16ee0b5b, end_time=end_time <at> entry=0x0) at keyboard.c:2742
#8  0x0000000000500841 in read_key_sequence (keybuf=keybuf <at> entry=0x7ffc16ee0c50, prompt=prompt <at> entry=0x0, dont_downcase_last=dont_downcase_last <at> entry=false, can_return_switch_frame=can_return_switch_frame <at> entry=true, fix_current_buffer=fix_current_buffer <at> entry=true, prevent_redisplay=prevent_redisplay <at> entry=false) at keyboard.c:9546
#9  0x0000000000502040 in command_loop_1 () at keyboard.c:1354
#10 0x000000000056a40e in internal_condition_case (bfun=bfun <at> entry=0x501e30 <command_loop_1>, handlers=handlers <at> entry=0x90, hfun=hfun <at> entry=0x4f8da0 <cmd_error>) at eval.c:1359
#11 0x00000000004f370c in command_loop_2 (ignore=ignore <at> entry=0x0) at keyboard.c:1095
#12 0x000000000056a3ac in internal_catch (tag=tag <at> entry=0xd740, func=func <at> entry=0x4f36f0 <command_loop_2>, arg=arg <at> entry=0x0) at eval.c:1120
#13 0x00000000004f36c9 in command_loop () at keyboard.c:1074
#14 0x00000000004f89c6 in recursive_edit_1 () at keyboard.c:718
#15 0x00000000004f8ce4 in Frecursive_edit () at keyboard.c:790
#16 0x000000000041a8f3 in main (argc=1, argv=0x7ffc16ee1048) at emacs.c:2047
(gdb) 




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:32:02 GMT) Full text and rfc822 format available.

Message #410 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:31:46 +0200
> Date: Mon, 23 Nov 2020 23:18:13 +0300
> From: Jean Louis <bugs <at> gnu.support>
> Cc: Eli Zaretskii <eliz <at> gnu.org>, fweimer <at> redhat.com,
>   43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de,
>   trevor <at> trevorbentley.com
> 
> Thread 1 (Thread 0x7f6b049e9100 (LWP 25584)):
> #0  0x00007f6afd4765dc in pselect () at /lib/libc.so.6
> #1  0x00000000005cf500 in really_call_select (arg=0x7ffc16edfa80) at thread.c:592
> #2  0x00000000005d006e in flush_stack_call_func (arg=0x7ffc16edfa80, func=0x5cf4b0 <really_call_select>) at lisp.h:3791
> #3  0x00000000005d006e in thread_select (func=<optimized out>, max_fds=max_fds <at> entry=19, rfds=rfds <at> entry=0x7ffc16edfb60, wfds=wfds <at> entry=0x7ffc16edfbe0, efds=efds <at> entry=0x0, timeout=timeout <at> entry=0x7ffc16ee0170, sigmask=0x0) at thread.c:624
> #4  0x00000000005eb023 in xg_select (fds_lim=19, rfds=rfds <at> entry=0x7ffc16ee02a0, wfds=0x7ffc16ee0320, efds=<optimized out>, timeout=<optimized out>, sigmask=<optimized out>) at xgselect.c:131
> #5  0x00000000005aeab4 in wait_reading_process_output (time_limit=time_limit <at> entry=30, nsecs=nsecs <at> entry=0, read_kbd=-1, do_display=do_display <at> entry=true, wait_for_cell=wait_for_cell <at> entry=0x0, wait_proc=wait_proc <at> entry=0x0, just_wait_proc=0) at process.c:5604
> #6  0x00000000004253f8 in sit_for (timeout=timeout <at> entry=0x7a, reading=reading <at> entry=true, display_option=display_option <at> entry=1) at dispnew.c:6111
> #7  0x00000000004fe415 in read_char (commandflag=commandflag <at> entry=1, map=map <at> entry=0x3184a63, p---Type <return> to continue, or q <return> to quit---
> rev_event=<optimized out>, used_mouse_menu=used_mouse_menu <at> entry=0x7ffc16ee0b5b, end_time=end_time <at> entry=0x0) at keyboard.c:2742
> #8  0x0000000000500841 in read_key_sequence (keybuf=keybuf <at> entry=0x7ffc16ee0c50, prompt=prompt <at> entry=0x0, dont_downcase_last=dont_downcase_last <at> entry=false, can_return_switch_frame=can_return_switch_frame <at> entry=true, fix_current_buffer=fix_current_buffer <at> entry=true, prevent_redisplay=prevent_redisplay <at> entry=false) at keyboard.c:9546
> #9  0x0000000000502040 in command_loop_1 () at keyboard.c:1354
> #10 0x000000000056a40e in internal_condition_case (bfun=bfun <at> entry=0x501e30 <command_loop_1>, handlers=handlers <at> entry=0x90, hfun=hfun <at> entry=0x4f8da0 <cmd_error>) at eval.c:1359

This says Emacs is simply waiting for input.

Are you saying Emacs doesn't respond to keyboard input in this state?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:33:01 GMT) Full text and rfc822 format available.

Message #413 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 23:31:46 +0300
* Arthur Miller <arthur.miller <at> live.com> [2020-11-23 23:22]:
> The only thing that changed regularly was of course system updates: kernel,
> gcc & co, etc.  So maybe, as mentioned earlier in this thread by either
> you or somebody else, glibc changed, and that triggers something in
> Emacs based on how Emacs uses it.  I don't know, I am no expert in this.
> Isn't Valgrind good for this kind of problem?  Can I run emacs as a
> systemd service in Valgrind?

I did not change anything like glibc or the kernel in Hyperbola
GNU/Linux-libre.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:43:01 GMT) Full text and rfc822 format available.

Message #416 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 23:41:26 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-23 23:32]:
> > Date: Mon, 23 Nov 2020 23:18:13 +0300
> > From: Jean Louis <bugs <at> gnu.support>
> > Cc: Eli Zaretskii <eliz <at> gnu.org>, fweimer <at> redhat.com,
> >   43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de,
> >   trevor <at> trevorbentley.com
> > 
> > Thread 1 (Thread 0x7f6b049e9100 (LWP 25584)):
> > #0  0x00007f6afd4765dc in pselect () at /lib/libc.so.6
> > #1  0x00000000005cf500 in really_call_select (arg=0x7ffc16edfa80) at thread.c:592
> > #2  0x00000000005d006e in flush_stack_call_func (arg=0x7ffc16edfa80, func=0x5cf4b0 <really_call_select>) at lisp.h:3791
> > #3  0x00000000005d006e in thread_select (func=<optimized out>, max_fds=max_fds <at> entry=19, rfds=rfds <at> entry=0x7ffc16edfb60, wfds=wfds <at> entry=0x7ffc16edfbe0, efds=efds <at> entry=0x0, timeout=timeout <at> entry=0x7ffc16ee0170, sigmask=0x0) at thread.c:624
> > #4  0x00000000005eb023 in xg_select (fds_lim=19, rfds=rfds <at> entry=0x7ffc16ee02a0, wfds=0x7ffc16ee0320, efds=<optimized out>, timeout=<optimized out>, sigmask=<optimized out>) at xgselect.c:131
> > #5  0x00000000005aeab4 in wait_reading_process_output (time_limit=time_limit <at> entry=30, nsecs=nsecs <at> entry=0, read_kbd=-1, do_display=do_display <at> entry=true, wait_for_cell=wait_for_cell <at> entry=0x0, wait_proc=wait_proc <at> entry=0x0, just_wait_proc=0) at process.c:5604
> > #6  0x00000000004253f8 in sit_for (timeout=timeout <at> entry=0x7a, reading=reading <at> entry=true, display_option=display_option <at> entry=1) at dispnew.c:6111
> > #7  0x00000000004fe415 in read_char (commandflag=commandflag <at> entry=1, map=map <at> entry=0x3184a63, prev_event=<optimized out>, used_mouse_menu=used_mouse_menu <at> entry=0x7ffc16ee0b5b, end_time=end_time <at> entry=0x0) at keyboard.c:2742
> > #8  0x0000000000500841 in read_key_sequence (keybuf=keybuf <at> entry=0x7ffc16ee0c50, prompt=prompt <at> entry=0x0, dont_downcase_last=dont_downcase_last <at> entry=false, can_return_switch_frame=can_return_switch_frame <at> entry=true, fix_current_buffer=fix_current_buffer <at> entry=true, prevent_redisplay=prevent_redisplay <at> entry=false) at keyboard.c:9546
> > #9  0x0000000000502040 in command_loop_1 () at keyboard.c:1354
> > #10 0x000000000056a40e in internal_condition_case (bfun=bfun <at> entry=0x501e30 <command_loop_1>, handlers=handlers <at> entry=0x90, hfun=hfun <at> entry=0x4f8da0 <cmd_error>) at eval.c:1359
> 
> This says Emacs is simply waiting for input.
> 
> Are you saying Emacs doesn't respond to keyboard input in this state?

Yes.  But once I could kill it straight with C-x c without any
questions or anything.

It happens during an eww call, not immediately but while it runs.  I
could do C-g 3 times and get the error, and then after that nothing: I
could not kill the buffer, could not quit, nothing but xkill.

In my last 3 attempts I could interrupt it and get keyboard control
back; I can see half the page loaded, and I can kill the buffer.

I was thinking maybe ivy, but I turned it off; it is not ivy.

So if I just interrupt it during loading, I have no keyboard control,
but if I keep interrupting with C-g then half the page appears and I
get keyboard control.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:54:02 GMT) Full text and rfc822 format available.

Message #419 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Andreas Schwab <schwab <at> linux-m68k.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 trevor <at> trevorbentley.com, michael_heerdegen <at> web.de,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 21:53:22 +0100
On Nov 23 2020, Jean Louis wrote:

> It happens during eww call, not immediately but during.

That probably just means it is busy in libxml parsing the page.

Andreas.

-- 
Andreas Schwab, schwab <at> linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510  2552 DF73 E780 A9DA AEC1
"And now for something completely different."




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 20:55:01 GMT) Full text and rfc822 format available.

Message #422 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Andrea Corallo <akrl <at> sdf.org>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 20:53:37 +0000
Arthur Miller <arthur.miller <at> live.com> writes:

> Andrea Corallo <akrl <at> sdf.org> writes:
>
>> I think it would be nice to have a script that monitors the Emacs memory
>> footprint and attaches gdb to it when memory usage goes over a certain
>> (high) threshold.
>>
>> This way it should be easy to see what we are doing, because at that
>> point we are supposed to be allocating extremely often.
>>
>>   Andrea
> Indeed.

*not* very much tested:

<https://gitlab.com/koral/mem-watchdog.el/-/blob/master/mem-watchdog.el>

You can run an 'emacs -Q' that uses this to monitor the Emacs you are
working on (hopefully the first one does not crash too).  Note you have
to configure the OS to allow gdb to attach to other processes, or run
the monitoring Emacs as root.

Hope it helps.

  Andrea
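The idea sketched above (poll the memory footprint, attach gdb once it crosses a threshold) can also be done from outside Emacs.  A minimal sketch, assuming Linux /proc and a gdb on PATH; the threshold and polling interval are illustrative, and attaching needs ptrace permission (root, or a relaxed kernel.yama.ptrace_scope):

```python
import re
import subprocess
import time

def rss_kb(pid):
    """Resident set size of PID in kB, read from /proc/<pid>/status (Linux)."""
    with open(f"/proc/{pid}/status") as f:
        match = re.search(r"^VmRSS:\s+(\d+)\s+kB", f.read(), re.M)
    return int(match.group(1)) if match else 0

def watch(pid, threshold_kb=4 * 1024 * 1024, interval=10):
    """Poll PID's memory; grab a backtrace with gdb once it crosses
    threshold_kb (here 4 GB)."""
    while rss_kb(pid) < threshold_kb:
        time.sleep(interval)
    subprocess.run(["gdb", "-p", str(pid),
                    "-ex", "bt", "-ex", "detach", "-ex", "quit"])
```

mem-watchdog.el linked above does this from a second Emacs; the sketch is only the same idea in script form.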




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 21:13:02 GMT) Full text and rfc822 format available.

Message #425 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:12:12 +0100
[Message part 1 (text/plain, inline)]
Eli Zaretskii <eliz <at> gnu.org> writes:

>> From: Arthur Miller <arthur.miller <at> live.com>
>> Cc: bugs <at> gnu.support,  fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org,
>>   dj <at> redhat.com,  michael_heerdegen <at> web.de,  trevor <at> trevorbentley.com,
>>   carlos <at> redhat.com
>> Date: Mon, 23 Nov 2020 20:49:48 +0100
>> 
>> Isn't Valgrind good for this kind of problem?  Can I run emacs as a
>> systemd service in Valgrind?
>
> You can run Emacs under Valgrind, see etc/DEBUG for the details.  But
> I'm not sure it will work as systemd service.
>
> Valgrind is only the right tool if we think there's a memory leak in
> Emacs itself.
Ok, I'll take a look at the debug docs.  Once I have a test I can run
it as a normal process; that's fine.

Anyway, I have tested heaptrack.  It built in a few seconds, nothing
special there.

I am not sure about the tool; I think it misinterprets memory held by
the Lisp environment as leaked memory.  It reports heaploads of
leaks :-), so it must be that it just misunderstands Emacs.  I am not
sure; I am attaching a few screenshots, but I don't believe there can
be as many leaks as it reports.  What you see there is just the Emacs
one gets from emacs -Q.  I will attach the generated data too.

I had some problems with it too.  I tried to attach it to a running
daemon process (started by systemd), and it failed until I ran it as a
sudo user.  As soon as it attached, both the server and emacsclient
became completely unresponsive and stayed that way.  I killed the
client process, but the window stayed alive; I had to kill it with
xkill.  After I restarted the server, Emacs didn't read the init file
because the paths got messed up, so I had to sort that out too.  Also,
the tool produced an empty report (it didn't work).  But running it on
a standalone Emacs process as a sudo user worked.

Anyway, despite the problems it seems to be a very nice graphical tool
for seeing the call stack and what Emacs looks like internally; but I
am not sure it works at all for finding leaks in Emacs.

[em-heaptrack1.png (image/png, attachment)]
[em-heaptrack2.png (image/png, attachment)]
[em-heaptrack3.png (image/png, attachment)]
[em-heaptrack4.png (image/png, attachment)]
[em-heaptrack5.png (image/png, attachment)]
[heaptrack.emacs.52042.zst (application/zstd, attachment)]

Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 21:16:02 GMT) Full text and rfc822 format available.

Message #428 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 akrl <at> sdf.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:15:39 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

>> From: Arthur Miller <arthur.miller <at> live.com>
>> Cc: Eli Zaretskii <eliz <at> gnu.org>,  fweimer <at> redhat.com,
>>   43389 <at> debbugs.gnu.org,  bugs <at> gnu.support,  dj <at> redhat.com,
>>   michael_heerdegen <at> web.de,  trevor <at> trevorbentley.com,  carlos <at> redhat.com
>> Date: Mon, 23 Nov 2020 20:59:21 +0100
>> 
>> How hard/possible is it to use this tool with Emacs:
>> 
>> https://gperftools.github.io/gperftools/heapprofile.html
>
> AFAIU, this cannot be used with glibc's malloc, it needs libtcmalloc
> instead.
Oh yes, I understand.  Is there no chance it would help to run Emacs on
tcmalloc instead of the standard malloc, if there happens to be a leak
somewhere in Emacs? ... god forbid, of course :-)




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 21:23:02 GMT) Full text and rfc822 format available.

Message #431 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:22:28 +0100
Jean Louis <bugs <at> gnu.support> writes:

> * Arthur Miller <arthur.miller <at> live.com> [2020-11-23 23:22]:
>> The only thing that changed regularly was of course system updates: kernel,
>> gcc & co, etc.  So maybe, as mentioned earlier in this thread by either
>> you or somebody else, glibc changed, and that triggers something in
>> Emacs based on how Emacs uses it.  I don't know, I am no expert in this.
>> Isn't Valgrind good for this kind of problem?  Can I run emacs as a
>> systemd service in Valgrind?
>
> I did not change anything like glibc or the kernel in Hyperbola
> GNU/Linux-libre.
Haven't you updated your system since last summer?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 21:32:01 GMT) Full text and rfc822 format available.

Message #434 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>, Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 22:30:57 +0100
Ah geez, there's a dozen threads now.  I'll just start from here.

I haven't set up the memory trace lib yet, but I've been running an
instance of emacs and printing as much as I can about its memory
usage, including (malloc-info).  I reduced MALLOC_ARENA_MAX to 2.
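For reference, reducing MALLOC_ARENA_MAX just means putting it in the environment before Emacs starts; a minimal sketch (the launch command in the comment is illustrative):

```python
import os

def emacs_env(max_arenas=2):
    """Copy of the current environment with glibc's malloc limited to a
    fixed number of arenas, as in the experiment described above."""
    env = dict(os.environ)
    env["MALLOC_ARENA_MAX"] = str(max_arenas)
    return env

# A session would then be launched with something like:
#   subprocess.Popen(["emacs"], env=emacs_env())
```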

This instance sat around at ~300MB for a day, then spiked to 
1000MB.  I ran a bunch of memory-related functions, and it stopped 
growing.  I believe (garbage-collect) halted the growth.

It ran for another 3 days at ~1100MB until another sudden spike up 
to 2300MB.

As usual, this is a graphical instance running emacs-slack with
tons of network traffic, images, and such.

In the meantime, while that instance was running, a second
graphical instance suddenly spiked to 4100MB.  The other instance 
is interesting, as it's not doing anything special at all.  It has 
a few elisp files open, and reports only 700KB of buffers and 
42.2MB in elisp data.

A third graphical instance has been idling during this time.  I've
never done a single thing with it beyond starting it.  That one is
still at 83MB.

Below is a large memory report from the emacs-slack instance:

----------------
BEGIN LOG
----------------
;; --------------------------------------
;; one day of runtime
;; growing 1MB every few seconds, RSS 1100MB
;; --------------------------------------

(getenv "MALLOC_ARENA_MAX")
"2"

;; buffers ~= 60MB
(let ((size 0))
  (dolist (buffer (buffer-list) size)
    (setq size (+ size (buffer-size buffer)))))
60300462

;; sums to ~100MB if I'm reading it right?
(garbage-collect)
((conses 16 1143686 1675416) (symbols 48 32466 160) (strings 32 241966 542675) (string-bytes 1 5872840) (vectors 16 116994) (vector-slots 8 8396419 357942) (floats 8 1705 7024) (intervals 56 27139 10678) (buffers 992 53))

;; /proc/$PID/smaps
heap 56395d707000-56399b330000 rw-p 00000000 00:00 0 [heap]
Size:            1011876 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:             1010948 kB
Pss:             1010948 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:   1010948 kB
Referenced:      1007016 kB
Anonymous:       1010948 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
FilePmdMapped:         0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:            0
ProtectionKey:         0

;; malloc-info
<malloc version="1">
<heap nr="0"> <sizes> 
 <size from="17" to="32" total="64" count="2"/> <size from="33" 
 to="48" total="192" count="4"/> <size from="33" to="33" 
 total="56826" count="1722"/> <size from="49" to="49" 
 total="16121" count="329"/> <size from="65" to="65" 
 total="567970" count="8738"/> <size from="81" to="81" 
 total="38070" count="470"/> <size from="97" to="97" 
 total="80122" count="826"/> <size from="113" to="113" 
 total="37629" count="333"/> <size from="129" to="129" 
 total="435117" count="3373"/> <size from="145" to="145" 
 total="44805" count="309"/> <size from="161" to="161" 
 total="111090" count="690"/> <size from="177" to="177" 
 total="35577" count="201"/> <size from="193" to="193" 
 total="293553" count="1521"/> <size from="209" to="209" 
 total="33858" count="162"/> <size from="225" to="225" 
 total="66600" count="296"/> <size from="241" to="241" 
 total="35909" count="149"/> <size from="257" to="257" 
 total="179900" count="700"/> <size from="273" to="273" 
 total="28938" count="106"/> <size from="289" to="289" 
 total="48841" count="169"/> <size from="305" to="305" 
 total="21655" count="71"/> <size from="321" to="321" 
 total="127758" count="398"/> <size from="337" to="337" 
 total="20220" count="60"/> <size from="353" to="353" 
 total="37065" count="105"/> <size from="369" to="369" 
 total="28044" count="76"/> <size from="385" to="385" 
 total="90860" count="236"/> <size from="401" to="401" 
 total="21253" count="53"/> <size from="417" to="417" 
 total="51291" count="123"/> <size from="433" to="433" 
 total="21217" count="49"/> <size from="449" to="449" 
 total="77228" count="172"/> <size from="465" to="465" 
 total="19995" count="43"/> <size from="481" to="481" 
 total="32227" count="67"/> <size from="497" to="497" 
 total="19383" count="39"/> <size from="513" to="513" 
 total="63099" count="123"/> <size from="529" to="529" 
 total="14283" count="27"/> <size from="545" to="545" 
 total="31065" count="57"/> <size from="561" to="561" 
 total="23001" count="41"/> <size from="577" to="577" 
 total="50199" count="87"/> <size from="593" to="593" 
 total="18383" count="31"/> <size from="609" to="609" 
 total="38367" count="63"/> <size from="625" to="625" 
 total="21875" count="35"/> <size from="641" to="641" 
 total="39101" count="61"/> <size from="657" to="657" 
 total="28251" count="43"/> <size from="673" to="673" 
 total="30958" count="46"/> <size from="689" to="689" 
 total="19292" count="28"/> <size from="705" to="705" 
 total="38070" count="54"/> <size from="721" to="721" 
 total="12978" count="18"/> <size from="737" to="737" 
 total="33902" count="46"/> <size from="753" to="753" 
 total="20331" count="27"/> <size from="769" to="769" 
 total="33067" count="43"/> <size from="785" to="785" 
 total="18840" count="24"/> <size from="801" to="801" 
 total="29637" count="37"/> <size from="817" to="817" 
 total="17157" count="21"/> <size from="833" to="833" 
 total="35819" count="43"/> <size from="849" to="849" 
 total="16131" count="19"/> <size from="865" to="865" 
 total="21625" count="25"/> <size from="881" to="881" 
 total="14977" count="17"/> <size from="897" to="897" 
 total="31395" count="35"/> <size from="913" to="913" 
 total="18260" count="20"/> <size from="929" to="929" 
 total="37160" count="40"/> <size from="945" to="945" 
 total="28350" count="30"/> <size from="961" to="961" 
 total="40362" count="42"/> <size from="977" to="977" 
 total="30287" count="31"/> <size from="993" to="993" 
 total="43692" count="44"/> <size from="1009" to="1009" 
 total="1426726" count="1414"/> <size from="1025" to="1073" 
 total="1167589" count="1093"/> <size from="1089" to="1137" 
 total="1370809" count="1209"/> <size from="1153" to="1201" 
 total="723005" count="605"/> <size from="1217" to="1265" 
 total="467988" count="372"/> <size from="1281" to="1329" 
 total="258180" count="196"/> <size from="1345" to="1393" 
 total="128221" count="93"/> <size from="1409" to="1457" 
 total="143844" count="100"/> <size from="1473" to="1521" 
 total="129078" count="86"/> <size from="1537" to="1585" 
 total="93980" count="60"/> <size from="1601" to="1649" 
 total="108995" count="67"/> <size from="1665" to="1713" 
 total="98218" count="58"/> <size from="1729" to="1777" 
 total="121253" count="69"/> <size from="1793" to="1841" 
 total="110877" count="61"/> <size from="1857" to="1905" 
 total="92257" count="49"/> <size from="1921" to="1969" 
 total="83691" count="43"/> <size from="1985" to="2033" 
 total="235973" count="117"/> <size from="2049" to="2097" 
 total="213783" count="103"/> <size from="2113" to="2161" 
 total="653793" count="305"/> <size from="2177" to="2225" 
 total="682581" count="309"/> <size from="2241" to="2289" 
 total="260931" count="115"/> <size from="2305" to="2337" 
 total="109375" count="47"/> <size from="2369" to="2417" 
 total="88789" count="37"/> <size from="2433" to="2481" 
 total="83378" count="34"/> <size from="2497" to="2545" 
 total="98263" count="39"/> <size from="2561" to="2609" 
 total="77438" count="30"/> <size from="2657" to="2673" 
 total="42656" count="16"/> <size from="2689" to="2737" 
 total="48754" count="18"/> <size from="2753" to="2801" 
 total="63879" count="23"/> <size from="2817" to="2865" 
 total="62422" count="22"/> <size from="2881" to="2929" 
 total="57988" count="20"/> <size from="2945" to="2993" 
 total="68247" count="23"/> <size from="3009" to="3057" 
 total="133164" count="44"/> <size from="3073" to="3121" 
 total="397169" count="129"/> <size from="3137" to="3569" 
 total="2008020" count="612"/> <size from="3585" to="4081" 
 total="666716" count="172"/> <size from="4097" to="4593" 
 total="7549855" count="1775"/> <size from="4609" to="5105" 
 total="2643468" count="540"/> <size from="5121" to="5617" 
 total="5882607" count="1103"/> <size from="5633" to="6129" 
 total="2430783" count="415"/> <size from="6145" to="6641" 
 total="3494147" count="547"/> <size from="6657" to="7153" 
 total="2881062" count="422"/> <size from="7169" to="7665" 
 total="5880630" count="790"/> <size from="7681" to="8177" 
 total="2412798" count="302"/> <size from="8193" to="8689" 
 total="11000664" count="1320"/> <size from="8705" to="9201" 
 total="4458714" count="490"/> <size from="9217" to="9713" 
 total="4959696" count="528"/> <size from="9729" to="10225" 
 total="6223631" count="623"/> <size from="10241" to="10737" 
 total="3347537" count="321"/> <size from="10753" to="12273" 
 total="7665386" count="666"/> <size from="12289" to="16369" 
 total="37137026" count="2658"/> <size from="16385" to="20465" 
 total="26637896" count="1496"/> <size from="20481" to="24561" 
 total="17043773" count="765"/> <size from="24593" to="28657" 
 total="15934986" count="602"/> <size from="28673" to="32753" 
 total="21737575" count="711"/> <size from="32769" to="36849" 
 total="17276544" count="496"/> <size from="36865" to="40945" 
 total="14702299" count="379"/> <size from="40961" to="65521" 
 total="53337460" count="1044"/> <size from="65585" to="98289" 
 total="51364750" count="654"/> <size from="98369" to="131057" 
 total="27361507" count="243"/> <size from="131121" to="163665" 
 total="27275915" count="187"/> <size from="163841" to="262129" 
 total="63020958" count="302"/> <size from="262145" to="519809" 
 total="126431823" count="351"/> <size from="525073" to="4639665" 
 total="148733598" count="174"/> <unsorted from="18465" 
 to="18465" total="18465" count="1"/> 
</sizes> <total type="fast" count="6" size="256"/> <total 
type="rest" count="50540" size="735045803"/> <system 
type="current" size="1036161024"/> <system type="max" 
size="1036161024"/> <aspace type="total" size="1036161024"/> 
<aspace type="mprotect" size="1036161024"/> </heap> <heap nr="1"> 
<sizes> 
 <size from="33" to="33" total="231" count="7"/> <size from="49" 
 to="49" total="245" count="5"/> <size from="65" to="65" 
 total="260" count="4"/> <size from="81" to="81" total="243" 
 count="3"/> <size from="97" to="97" total="97" count="1"/> <size 
 from="113" to="113" total="113" count="1"/> <size from="129" 
 to="129" total="516" count="4"/> <size from="161" to="161" 
 total="644" count="4"/> <size from="209" to="209" total="418" 
 count="2"/> <size from="241" to="241" total="241" count="1"/> 
 <size from="257" to="257" total="257" count="1"/> <size 
 from="305" to="305" total="610" count="2"/> <size from="705" 
 to="705" total="705" count="1"/> <size from="1294673" 
 to="3981489" total="7995027" count="3"/> <unsorted from="30561" 
 to="4013649" total="4044210" count="2"/> 
</sizes> <total type="fast" count="0" size="0"/> <total 
type="rest" count="42" size="20184569"/> <system type="current" 
size="20250624"/> <system type="max" size="20250624"/> <aspace 
type="total" size="20250624"/> <aspace type="mprotect" 
size="20250624"/> <aspace type="subheaps" size="1"/> </heap> 
<total type="fast" count="6" size="256"/> <total type="rest" 
count="50582" size="755230372"/> <total type="mmap" count="4" 
size="44789760"/> <system type="current" size="1056411648"/> 
<system type="max" size="1056411648"/> <aspace type="total" 
size="1056411648"/> <aspace type="mprotect" size="1056411648"/> 
</malloc>

;; --------------------------------------
;; ~3 hours later.
;; growth slowed after the previous (garbage-collect)
;; RSS 1140MB
;; --------------------------------------

(memory-limit) ;; virtual memory, not RSS
1429620

(message "%f" gc-cons-threshold)
"800000.000000"
(message "%f" gc-cons-percentage)
"0.100000"
(emacs-uptime)
"1 day, 4 hours, 50 minutes, 30 seconds"
(message "%f" gcs-done)
"708.000000"
(message "%f" gc-elapsed)
"201.724018"
(message "%s" memory-full)
"nil"

(memory-use-counts)
(224118465 575286 217714299 65607 946347937 563190 26430775)

(memory-usage)
((conses 16 1199504 2511807) (symbols 48 32742 159) (strings 32 246671 575263) (string-bytes 1 5992063) (vectors 16 118364) (vector-slots 8 8412872 474129) (floats 8 1771 10028) (intervals 56 29873 12035) (buffers 992 60))

=>	18.3MB (+ 38.3MB dead) in conses
	1.50MB (+ 7.45kB dead) in symbols
	7.53MB (+ 17.6MB dead) in strings
	5.71MB in string-bytes
	1.81MB in vectors
	64.2MB (+ 3.62MB dead) in vector-slots
	13.8kB (+ 78.3kB dead) in floats
	1.60MB (+ 658kB dead) in intervals
	58.1kB in buffers

Total in lisp objects: 161MB (live 101MB, dead 60.2MB)

Buffer ralloc memory usage:
60 buffers, 64.4MB total (956kB in gaps)

     Size	Gap	Name

 47795241	745530	*censored*
  4681196	29261	*censored*
  4543324	25017	*censored*
  4478601	28398	*censored*
   862373	622	*censored*
   859981	4898	*censored*
   859617	3696	*censored*
   859355	4131	*censored*
   859131	4009	*censored*
   471538	6609	*censored*
    60099	6451	*censored*
    20589	1312	*censored*
    19452	2129	*censored*
    17776	1746	*censored*
    16877	217	*censored*
    16484	1447	*censored*
    13488	56	*censored*
    13212	1810	*censored*
    12747	2081	*censored*
    12640	2098	*censored*
    12478	900	*censored*
    12130	453	*censored*
    10745	10186	*censored*
    10703	2082	*censored*
     9965	474	*censored*
     9828	1075	*censored*
     8000	226	*censored*
     5117	1396	*censored*
     4282	1891	*censored*
     2546	1544	*censored*
     1630	675	*censored*
     1479	591	*censored*
     1228	918	*censored*
      883	1280	*censored*
      679	1574	*censored*
      678	5483	*censored*
      513	27194	*censored*
      299	1731	*censored*
      232	3839	*censored*
      131	1985	*censored*
       97	1935	*censored*
       92	1979	*censored*
       72	1999	*censored*
       69	1999	*censored*
       69	4009	*censored*
       67	1999	*censored*
       64	1985	*censored*
       62	6034	*censored*
       62	1999	*censored*
       61	1960	*censored*
       28	4030	*censored*
       27	1999	*censored*
        0	2026	*censored*
        0	20	*censored*
        0	2065	*censored*
        0	2072	*censored*
        0	20	*censored*
        0	20	*censored*
        0	2059	*censored*
        0	2037	*censored*
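As a sanity check on the "sums to ~100MB" estimate earlier in this log: each (garbage-collect) entry has the shape (TYPE SIZE LIVE-COUNT DEAD-COUNT ...), so live Lisp bytes are SIZE times LIVE-COUNT, summed over types.  A quick sketch with the figures copied from the one-day snapshot above:

```python
# (TYPE, SIZE, LIVE-COUNT) triples from the one-day (garbage-collect) output.
gc_counts = [
    ("conses", 16, 1143686), ("symbols", 48, 32466),
    ("strings", 32, 241966), ("string-bytes", 1, 5872840),
    ("vectors", 16, 116994), ("vector-slots", 8, 8396419),
    ("floats", 8, 1705), ("intervals", 56, 27139),
    ("buffers", 992, 53),
]
live_bytes = sum(size * count for _, size, count in gc_counts)
print(live_bytes)  # 104102352, i.e. roughly 100MB as estimated
```

This matches Trevor's reading; the cl-loop later in the log does the same arithmetic inside Emacs.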



;; --------------------------------------
;; 3 days later
;; RSS was steady at 1150MB
;; leaped to 2.3GB very suddenly
;; RSS 2311M
;; --------------------------------------

;; ~182MB
(let ((size 0))
  (dolist (buffer (buffer-list) size)
    (setq size (+ size (buffer-size buffer)))))
182903045

;; sums to ~142MB if I'm reading it right?
(garbage-collect)
((conses 16 2081486 2630206) (symbols 48 61019 79) (strings 32 353371 288980) (string-bytes 1 13294206) (vectors 16 144742) (vector-slots 8 9503757 592939) (floats 8 2373 8320) (intervals 56 46660 10912) (buffers 992 82))

(reduce '+ (cl-loop for thing in (garbage-collect)
                    collect (* (nth 1 thing) (nth 2 thing))))
142115406

;; /proc/$PID/smaps
heap 56395d707000-5639e0d43000 rw-p 00000000 00:00 0 [heap]
Size:            2152688 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Rss:             2152036 kB
Pss:             2152036 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:   2152036 kB
Referenced:      2146588 kB
Anonymous:       2152036 kB
LazyFree:              0 kB
AnonHugePages:         0 kB
ShmemPmdMapped:        0 kB
FilePmdMapped:         0 kB
Shared_Hugetlb:        0 kB
Private_Hugetlb:       0 kB
Swap:                  0 kB
SwapPss:               0 kB
Locked:                0 kB
THPeligible:            0
ProtectionKey:         0

;; malloc-info
(malloc-info)
<malloc version="1"> <heap nr="0"> <sizes>
 <size from="33" to="48" total="240" count="5"/> <size from="113" 
 to="128" total="128" count="1"/> <size from="129" to="129" 
 total="26961" count="209"/> <size from="145" to="145" 
 total="112230" count="774"/> <size from="161" to="161" 
 total="4830" count="30"/> <size from="177" to="177" 
 total="66375" count="375"/> <size from="193" to="193" 
 total="159804" count="828"/> <size from="209" to="209" 
 total="6897" count="33"/> <size from="225" to="225" 
 total="82800" count="368"/> <size from="241" to="241" 
 total="48923" count="203"/> <size from="257" to="257" 
 total="119505" count="465"/> <size from="273" to="273" 
 total="47775" count="175"/> <size from="289" to="289" 
 total="73984" count="256"/> <size from="305" to="305" 
 total="33855" count="111"/>
  <size from="321" to="321" total="147660" count="460"/>
  <size from="337" to="337" total="33700" count="100"/>
  <size from="353" to="353" total="73424" count="208"/>
  <size from="369" to="369" total="5166" count="14"/>
  <size from="385" to="385" total="94325" count="245"/>
  <size from="401" to="401" total="44511" count="111"/>
  <size from="417" to="417" total="67971" count="163"/>
  <size from="433" to="433" total="31176" count="72"/>
  <size from="449" to="449" total="88004" count="196"/>
  <size from="465" to="465" total="33480" count="72"/>
  <size from="481" to="481" total="86580" count="180"/>
  <size from="497" to="497" total="36778" count="74"/>
  <size from="513" to="513" total="108243" count="211"/>
  <size from="529" to="529" total="15341" count="29"/>
  <size from="545" to="545" total="64310" count="118"/>
  <size from="561" to="561" total="28050" count="50"/>
  <size from="577" to="577" total="76741" count="133"/>
  <size from="593" to="593" total="40917" count="69"/>
  <size from="609" to="609" total="77343" count="127"/>
  <size from="625" to="625" total="30000" count="48"/>
  <size from="641" to="641" total="164737" count="257"/>
  <size from="657" to="657" total="35478" count="54"/>
  <size from="673" to="673" total="44418" count="66"/>
  <size from="689" to="689" total="4134" count="6"/>
  <size from="705" to="705" total="86010" count="122"/>
  <size from="721" to="721" total="35329" count="49"/>
  <size from="737" to="737" total="63382" count="86"/>
  <size from="753" to="753" total="45933" count="61"/>
  <size from="769" to="769" total="85359" count="111"/>
  <size from="785" to="785" total="51810" count="66"/>
  <size from="801" to="801" total="191439" count="239"/>
  <size from="817" to="817" total="42484" count="52"/>
  <size from="833" to="833" total="7497" count="9"/>
  <size from="849" to="849" total="5094" count="6"/>
  <size from="865" to="865" total="4325" count="5"/>
  <size from="881" to="881" total="5286" count="6"/>
  <size from="897" to="897" total="6279" count="7"/>
  <size from="913" to="913" total="6391" count="7"/>
  <size from="929" to="929" total="4645" count="5"/>
  <size from="945" to="945" total="3780" count="4"/>
  <size from="961" to="961" total="1922" count="2"/>
  <size from="977" to="977" total="9770" count="10"/>
  <size from="1009" to="1009" total="122089" count="121"/>
  <size from="1025" to="1073" total="156226" count="146"/>
  <size from="1089" to="1137" total="148084" count="132"/>
  <size from="1153" to="1201" total="75664" count="64"/>
  <size from="1217" to="1265" total="83731" count="67"/>
  <size from="1281" to="1329" total="101437" count="77"/>
  <size from="1345" to="1393" total="107822" count="78"/>
  <size from="1409" to="1457" total="91680" count="64"/>
  <size from="1473" to="1521" total="51074" count="34"/>
  <size from="1537" to="1585" total="65482" count="42"/>
  <size from="1601" to="1649" total="32484" count="20"/>
  <size from="1665" to="1713" total="50638" count="30"/>
  <size from="1729" to="1777" total="33283" count="19"/>
  <size from="1793" to="1825" total="18106" count="10"/>
  <size from="1857" to="1905" total="35683" count="19"/>
  <size from="1921" to="1969" total="117132" count="60"/>
  <size from="1985" to="2033" total="46295" count="23"/>
  <size from="2049" to="2097" total="257804" count="124"/>
  <size from="2113" to="2161" total="92075" count="43"/>
  <size from="2177" to="2225" total="39666" count="18"/>
  <size from="2241" to="2289" total="81972" count="36"/>
  <size from="2305" to="2353" total="337953" count="145"/>
  <size from="2369" to="2417" total="399879" count="167"/>
  <size from="2433" to="2481" total="555635" count="227"/>
  <size from="2497" to="2545" total="372660" count="148"/>
  <size from="2561" to="2609" total="431415" count="167"/>
  <size from="2625" to="2673" total="325771" count="123"/>
  <size from="2689" to="2737" total="412584" count="152"/>
  <size from="2753" to="2801" total="335673" count="121"/>
  <size from="2817" to="2865" total="235587" count="83"/>
  <size from="2881" to="2929" total="283890" count="98"/>
  <size from="2945" to="2993" total="335073" count="113"/>
  <size from="3009" to="3057" total="278876" count="92"/>
  <size from="3073" to="3121" total="358180" count="116"/>
  <size from="3137" to="3569" total="2372709" count="709"/>
  <size from="3585" to="4081" total="1847856" count="480"/>
  <size from="4097" to="4593" total="5672856" count="1320"/>
  <size from="4609" to="5105" total="4675836" count="956"/>
  <size from="5121" to="5617" total="6883318" count="1286"/>
  <size from="5633" to="6129" total="6011919" count="1023"/>
  <size from="6145" to="6641" total="6239871" count="975"/>
  <size from="6657" to="7153" total="6540165" count="949"/>
  <size from="7169" to="7665" total="5515848" count="744"/>
  <size from="7681" to="8177" total="5148216" count="648"/>
  <size from="8193" to="8689" total="8190223" count="975"/>
  <size from="8705" to="9201" total="5854315" count="651"/>
  <size from="9217" to="9713" total="5312354" count="562"/>
  <size from="9729" to="10225" total="5154212" count="516"/>
  <size from="10241" to="10737" total="4074005" count="389"/>
  <size from="10753" to="12273" total="11387550" count="990"/>
  <size from="12289" to="16369" total="32661229" count="2317"/>
  <size from="16385" to="20465" total="36652437" count="2037"/>
  <size from="20481" to="24561" total="21272131" count="947"/>
  <size from="24577" to="28657" total="25462302" count="958"/>
  <size from="28673" to="32753" total="28087234" count="914"/>
  <size from="32769" to="36849" total="39080113" count="1121"/>
  <size from="36865" to="40945" total="30141527" count="775"/>
  <size from="40961" to="65521" total="166092799" count="3119"/>
  <size from="65537" to="98289" total="218425380" count="2692"/>
  <size from="98321" to="131057" total="178383171" count="1555"/>
  <size from="131089" to="163825" total="167800886" count="1142"/>
  <size from="163841" to="262065" total="367649915" count="1819"/>
  <size from="262161" to="522673" total="185347984" count="560"/>
  <size from="525729" to="30878897" total="113322865" count="97"/>
  <unsorted from="33" to="33" total="33" count="1"/>
</sizes>
<total type="fast" count="6" size="368"/>
<total type="rest" count="43944" size="1713595767"/>
<system type="current" size="2204352512"/>
<system type="max" size="2204352512"/>
<aspace type="total" size="2204352512"/>
<aspace type="mprotect" size="2204352512"/>
</heap>
<heap nr="1">
<sizes>
  <size from="17" to="32" total="160" count="5"/>
  <size from="33" to="48" total="336" count="7"/>
  <size from="49" to="64" total="448" count="7"/>
  <size from="65" to="80" total="560" count="7"/>
  <size from="97" to="112" total="784" count="7"/>
  <size from="33" to="33" total="231" count="7"/>
  <size from="49" to="49" total="245" count="5"/>
  <size from="65" to="65" total="390" count="6"/>
  <size from="81" to="81" total="162" count="2"/>
  <size from="97" to="97" total="97" count="1"/>
  <size from="113" to="113" total="113" count="1"/>
  <size from="129" to="129" total="516" count="4"/>
  <size from="161" to="161" total="644" count="4"/>
  <size from="209" to="209" total="2299" count="11"/>
  <size from="241" to="241" total="241" count="1"/>
  <size from="257" to="257" total="257" count="1"/>
  <size from="305" to="305" total="610" count="2"/>
  <size from="32209" to="32209" total="64418" count="2"/>
  <size from="1294673" to="4053073" total="27998472" count="8"/>
  <unsorted from="209" to="4053073" total="4080781" count="13"/>
</sizes>
<total type="fast" count="33" size="2288"/>
<total type="rest" count="69" size="42357748"/>
<system type="current" size="42426368"/>
<system type="max" size="42426368"/>
<aspace type="total" size="42426368"/>
<aspace type="mprotect" size="42426368"/>
<aspace type="subheaps" size="1"/>
</heap>
<total type="fast" count="39" size="2656"/>
<total type="rest" count="44013" size="1755953515"/>
<total type="mmap" count="6" size="121565184"/>
<system type="current" size="2246778880"/>
<system type="max" size="2246778880"/>
<aspace type="total" size="2246778880"/>
<aspace type="mprotect" size="2246778880"/>
</malloc>

;; virtual memory, not RSS
(memory-limit)
2630768

(message "%f" gc-cons-threshold)
"800000.000000"

(message "%f" gc-cons-percentage)
"0.100000"

(emacs-uptime)
"4 days, 4 hours, 5 minutes, 3 seconds"

(message "%f" gcs-done)
"2140.000000"

(message "%f" gc-elapsed)
"760.624580"

(message "%s" memory-full)
"nil"

;; I believe this is cumulative, not current?
(memory-use-counts)
(989044259 2763760 754240919 143568 2633617972 2535567 76512576)

(reduce '+ (memory-use-counts))
4509544031

 
(memory-usage)
((conses 16 2081326 3094498) (symbols 48 61019 79)
 (strings 32 353291 494869) (string-bytes 1 13286757)
 (vectors 16 144725) (vector-slots 8 9503378 623467)
 (floats 8 2373 8320) (intervals 56 46640 11652) (buffers 992 82))

=>	31.8MB (+ 47.2MB dead) in conses
	2.79MB (+ 3.70kB dead) in symbols
	10.8MB (+ 15.1MB dead) in strings
	12.7MB in string-bytes
	2.21MB in vectors
	72.5MB (+ 4.76MB dead) in vector-slots
	18.5kB (+ 65.0kB dead) in floats
	2.49MB (+ 637kB dead) in intervals
	79.4kB in buffers

Total in lisp objects: 203MB (live 135MB, dead 67.8MB)

Buffer ralloc memory usage:
82 buffers
176MB total (2.04MB in gaps)
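Each entry in the raw (memory-usage) list above has the form (TYPE BYTES-PER-OBJECT LIVE-COUNT DEAD-COUNT), and the human-readable summary is just those triples multiplied out. A quick sketch of that arithmetic in Python (editor's illustration, not part of the original log; the tuples are copied from the report above):

```python
# (memory-usage) entries: (bytes_per_object, live_count, dead_count).
# Multiplying out reproduces the summary lines above, e.g.
# "31.8MB (+ 47.2MB dead) in conses".
entries = {
    "conses":  (16, 2081326, 3094498),
    "symbols": (48, 61019, 79),
    "strings": (32, 353291, 494869),
}

MiB = 1024 * 1024  # the summary uses binary megabytes

for name, (size, live, dead) in entries.items():
    live_mb = size * live / MiB
    dead_mb = size * dead / MiB
    print(f"{live_mb:.1f}MB (+ {dead_mb:.1f}MB dead) in {name}")
```

For conses this gives 31.8MB live plus 47.2MB dead, matching the summary; very small figures (such as the symbols' dead bytes) appear in kB in the original report rather than MB.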
     Size	Gap	Name 

 91928037	1241610	*censored*
 27233492	123915	*censored*
 16165441	173855	*censored*
 15789683	66347	*censored*
 15688792	205051	*censored*
  3040510	1437	*censored*
  3030476	17503	*censored*
  3027663	15314	*censored*
  3027493	16032	*censored*
  3026818	15601	*censored*
   211934	5198	*censored*
    87685	23923	*censored*
    57762	2629	*censored*
    52780	677	*censored*
    35991	2269	*censored*
    25403	1824	*censored*
    18008	1514	*censored*
    16930	64	*censored*
    16877	217	*censored*
    16484	1447	*censored*
    14232	14654	*censored*
    14192	605	*censored*
    13715	1130	*censored*
    13575	1689	*censored*
    13343	1377	*censored*
    13198	1540	*censored*
    13178	1598	*censored*
    12747	2081	*censored*
    10883	1902	*censored*
    10271	632	*censored*
     6402	44449	*censored*
     5127	1386	*censored*
     5005	1156	*censored*
     4282	1891	*censored*
     3840	2313	*censored*
     3409	16717	*censored*
     3409	16717	*censored*
     2872	1186	*censored*
     2541	1511	*censored*
     2067	2011	*censored*
     1630	675	*censored*
     1626	444	*censored*
     1490	679	*censored*
     1413	26294	*censored*
     1159	4937	*censored*
      962	1063	*censored*
      678	1574	*censored*
      562	2297	*censored*
      324	2008	*censored*
      324	2008	*censored*
      151	1967	*censored*
      137	1887	*censored*
      133	1983	*censored*
       97	1935	*censored*
       78	3998	*censored*
       72	1999	*censored*
       71	3985	*censored*
       69	1999	*censored*
       67	1999	*censored*
       64	1985	*censored*
       62	1999	*censored*
       61	6035	*censored*
       49	2008	*censored*
       33	2038	*censored*
       31	4040	*censored*
       27	1999	*censored*
       25	1999	*censored*
       25	1999	*censored*
       25	1999	*censored*
       22	1999	*censored*
       20	0	*censored*
       16	2021	*censored*
       16	4	*censored*
        0	2026	*censored*
        0	20	*censored*
        0	5026	*censored*
        0	2072	*censored*
        0	20	*censored*
        0	20	*censored*
        0	2059	*censored*
        0	20	*censored*
        0	20	*censored*

----------------
END LOG
----------------

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 23 Nov 2020 22:12:01 GMT) Full text and rfc822 format available.

Message #437 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>, Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 23 Nov 2020 23:11:03 +0100
Trevor Bentley <trevor <at> trevorbentley.com> writes:

> Below is a large memory report from the emacs-slack instance: 

Formatting was butchered.  Try this:

https://trevorbentley.com/emacs_malloc_info.log

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 24 Nov 2020 02:08:02 GMT) Full text and rfc822 format available.

Message #440 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 24 Nov 2020 03:07:39 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

>> From: Arthur Miller <arthur.miller <at> live.com>
>> Cc: bugs <at> gnu.support,  fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org,
>>   dj <at> redhat.com,  michael_heerdegen <at> web.de,  trevor <at> trevorbentley.com,
>>   carlos <at> redhat.com
>> Date: Mon, 23 Nov 2020 20:49:48 +0100
>> 
>> Isn't Valgrind good for this kind of problems? Can I run emacs as a
>> systemd service in Valgrind?
>
> You can run Emacs under Valgrind, see etc/DEBUG for the details.  But
> I'm not sure it will work as systemd service.
>
> Valgrind is only the right tool if we think there's a memory leak in
> Emacs itself.
Yeah, you are right;

I have been trying to crash my Emacs for about 4 hours now.  I tried to
simulate dired use and copying/moving files around, since I experienced
crashes mostly when in dired and helm.  I put a function on a timer that
created 1000 files every few seconds, read those files back into lists,
copied them around and deleted them, all while watching allocations; and
all I got was spent time.  Emacs was rock solid.  Typical :D.

I hope this pmem for the process is correct; I was looking at the
process attributes and saw it go up and down, but it seemed to stay in
the range of ~2.5% to ~3.5%.

This looked typical: pmem was different for every run, but stayed below
3.5%.

((args . "/home/arthur/repos/emacs/src/emacs --fg-daemon") (pmem . 2.919526565234921) (pcpu . 13.355092518800808) (etime 0 5521 40000 0) (rss . 958748) (vsize . 1125912) (start 24508 19530 683640 125000) (thcount . 2) (nice . 0) (pri . 20) (ctime 0 6 880000 0) (cstime 0 0 420000 0) (cutime 0 6 460000 0) (time 0 737 340000 0) (stime 0 47 950000 0) (utime 0 689 390000 0) (cmajflt . 485) (cminflt . 214598) (majflt . 73) (minflt . 1286399) (tpgid . -1) (ttname . "") (sess . 24105) (pgrp . 24105) (ppid . 595) (state . "R") (comm . "emacs") (group . "users") (egid . 100) (user . "arthur") (euid . 1000))

((args . "/home/arthur/repos/emacs/src/emacs --fg-daemon") (pmem . 2.919526565234921) (pcpu . 13.355092518800808) (etime 0 5521 40000 0) (rss . 958748) (vsize . 1125912) (start 24508 19530 684725 570000) (thcount . 2) (nice . 0) (pri . 20) (ctime 0 6 880000 0) (cstime 0 0 420000 0) (cutime 0 6 460000 0) (time 0 737 340000 0) (stime 0 47 950000 0) (utime 0 689 390000 0) (cmajflt . 485) (cminflt . 214598) (majflt . 73) (minflt . 1286399) (tpgid . -1) (ttname . "") (sess . 24105) (pgrp . 24105) (ppid . 595) (state . "R") (comm . "emacs") (group . "users") (egid . 100) (user . "arthur") (euid . 1000))

((args . "/home/arthur/repos/emacs/src/emacs --fg-daemon") (pmem . 2.919526565234921) (pcpu . 13.355092518800808) (etime 0 5521 40000 0) (rss . 958748) (vsize . 1125912) (start 24508 19530 685810 502000) (thcount . 2) (nice . 0) (pri . 20) (ctime 0 6 880000 0) (cstime 0 0 420000 0) (cutime 0 6 460000 0) (time 0 737 340000 0) (stime 0 47 950000 0) (utime 0 689 390000 0) (cmajflt . 485) (cminflt . 214598) (majflt . 73) (minflt . 1286399) (tpgid . -1) (ttname . "") (sess . 24105) (pgrp . 24105) (ppid . 595) (state . "R") (comm . "emacs") (group . "users") (egid . 100) (user . "arthur") (euid . 1000))

((args . "/home/arthur/repos/emacs/src/emacs --fg-daemon") (pmem . 2.919526565234921) (pcpu . 13.355092518800808) (etime 0 5521 40000 0) (rss . 958748) (vsize . 1125912) (start 24508 19530 686711 538000) (thcount . 2) (nice . 0) (pri . 20) (ctime 0 6 880000 0) (cstime 0 0 420000 0) (cutime 0 6 460000 0) (time 0 737 340000 0) (stime 0 47 950000 0) (utime 0 689 390000 0) (cmajflt . 485) (cminflt . 214598) (majflt . 73) (minflt . 1286399) (tpgid . -1) (ttname . "") (sess . 24105) (pgrp . 24105) (ppid . 595) (state . "R") (comm . "emacs") (group . "users") (egid . 100) (user . "arthur") (euid . 1000))

((args . "/home/arthur/repos/emacs/src/emacs --fg-daemon") (pmem . 2.919526565234921) (pcpu . 13.355092518800808) (etime 0 5521 40000 0) (rss . 958748) (vsize . 1125912) (start 24508 19530 687465 69000) (thcount . 2) (nice . 0) (pri . 20) (ctime 0 6 880000 0) (cstime 0 0 420000 0) (cutime 0 6 460000 0) (time 0 737 340000 0) (stime 0 47 950000 0) (utime 0 689 390000 0) (cmajflt . 485) (cminflt . 214598) (majflt . 73) (minflt . 1286399) (tpgid . -1) (ttname . "") (sess . 24105) (pgrp . 24105) (ppid . 595) (state . "R") (comm . "emacs") (group . "users") (egid . 100) (user . "arthur") (euid . 1000))
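The rss, vsize and pmem fields in the process-attributes dumps above ultimately come from the kernel; on Linux the same figures can be read straight out of /proc. A minimal sketch (editor's illustration, Linux-only; field names are from proc(5), not from the original report):

```python
# Read VmRSS/VmSize (in kB) from /proc/<pid>/status, mirroring the
# rss/vsize fields reported by Emacs's process-attributes.
def memory_kb(pid="self"):
    fields = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, value = line.partition(":")
            if key in ("VmRSS", "VmSize"):
                fields[key] = int(value.split()[0])  # value reads "<n> kB"
    return fields

print(memory_kb())  # for another process, pass its numeric pid as a string
```

Sampling this in a loop is an easy way to confirm whether RSS really stays flat, independently of what Elisp-level accounting says.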

I will see if it comes back, and see if I can play more with it; I give up for now.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 24 Nov 2020 03:27:02 GMT) Full text and rfc822 format available.

Message #443 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Andreas Schwab <schwab <at> linux-m68k.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 24 Nov 2020 05:25:59 +0200
> From: Andreas Schwab <schwab <at> linux-m68k.org>
> Cc: Eli Zaretskii <eliz <at> gnu.org>,  fweimer <at> redhat.com,
>   43389 <at> debbugs.gnu.org,  dj <at> redhat.com,  carlos <at> redhat.com,
>   trevor <at> trevorbentley.com,  michael_heerdegen <at> web.de
> Date: Mon, 23 Nov 2020 21:53:22 +0100
> 
> On Nov 23 2020, Jean Louis wrote:
> 
> > It happens during eww call, not immediately but during.
> 
> That probably just means it is busy in libxml parsing the page.

That's not what the backtrace is showing, though.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 24 Nov 2020 05:07:02 GMT) Full text and rfc822 format available.

Message #446 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Andreas Schwab <schwab <at> linux-m68k.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 trevor <at> trevorbentley.com, michael_heerdegen <at> web.de,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 24 Nov 2020 00:09:03 +0300
* Andreas Schwab <schwab <at> linux-m68k.org> [2020-11-23 23:53]:
> On Nov 23 2020, Jean Louis wrote:
> 
> > It happens during eww call, not immediately but during.
> 
> That probably just means it is busy in libxml parsing the page.

The instance without LD_PRELOAD is fast.  The instance with LD_PRELOAD
will show me the page but not allow any keyboard input unless I
interrupt it a few times.  And there is no CPU activity going on that I
can see on the indicator.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 24 Nov 2020 05:45:01 GMT) Full text and rfc822 format available.

Message #449 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 24 Nov 2020 08:29:47 +0300
* Arthur Miller <arthur.miller <at> live.com> [2020-11-24 00:23]:
> Jean Louis <bugs <at> gnu.support> writes:
> 
> > * Arthur Miller <arthur.miller <at> live.com> [2020-11-23 23:22]:
> >> The only thing that changed regularly was of course system updates: kernel,
> >> gcc & co etc. So it maybe is as mentioned earlier in this thread by
> >> either you or somebody else is that glibc changed and that maybe
> >> triggers something in Emacs based on how Emacs use it. I don't know I am
> >> not expert in this. Isn't Valgrind good for this kind of problems? Can I
> >> run emacs as a systemd service in Valgrind?
> >
> > I did not change anything like glibc or kernel in Hyperbola
> > GNU/Linux-libre
> Didn't you update your system since last summer?

I am pulling Emacs from git and consider system upgraded that way.

For system packages, pacman says there is nothing to do most of time,
unless there is new kernel or some security issue.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 24 Nov 2020 08:16:02 GMT) Full text and rfc822 format available.

Message #452 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 24 Nov 2020 09:15:20 +0100
Jean Louis <bugs <at> gnu.support> writes:

> * Arthur Miller <arthur.miller <at> live.com> [2020-11-24 00:23]:
>> Jean Louis <bugs <at> gnu.support> writes:
>> 
>> > * Arthur Miller <arthur.miller <at> live.com> [2020-11-23 23:22]:
>> >> The only thing that changed regularly was of course system updates: kernel,
>> >> gcc & co etc. So it maybe is as mentioned earlier in this thread by
>> >> either you or somebody else is that glibc changed and that maybe
>> >> triggers something in Emacs based on how Emacs use it. I don't know I am
>> >> not expert in this. Isn't Valgrind good for this kind of problems? Can I
>> >> run emacs as a systemd service in Valgrind?
>> >
>> > I did not change anything like glibc or kernel in Hyperbola
>> > GNU/Linux-libre
>> Didn't you update your system since last summer?
>
> I am pulling Emacs from git and consider system upgraded that way.
same here

> For system packages, pacman says there is nothing to do most of time,
> unless there is new kernel or some security issue.

Aha, you are running LTS kernel?

Mine pacman brings in updates every day.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 24 Nov 2020 09:11:02 GMT) Full text and rfc822 format available.

Message #455 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 24 Nov 2020 12:06:03 +0300
* Arthur Miller <arthur.miller <at> live.com> [2020-11-24 11:15]:
> > I am pulling Emacs from git and consider system upgraded that way.
> same here
> 
> > For system packages, pacman says there is nothing to do most of time,
> > unless there is new kernel or some security issue.
> 
> Aha, you are running LTS kernel?
> 
> Mine pacman brings in updates every day.

Really?

/boot:

config-linux-libre-lts
grub
initramfs-linux-libre-lts-fallback.img
initramfs-linux-libre-lts.img
vmlinuz-linux-libre-lts

So you have Hyperbola and you get updates every day?  How come?





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 24 Nov 2020 09:28:01 GMT) Full text and rfc822 format available.

Message #458 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 24 Nov 2020 10:27:17 +0100
Jean Louis <bugs <at> gnu.support> writes:

> * Arthur Miller <arthur.miller <at> live.com> [2020-11-24 11:15]:
>> > I am pulling Emacs from git and consider system upgraded that way.
>> same here
>> 
>> > For system packages, pacman says there is nothing to do most of time,
>> > unless there is new kernel or some security issue.
>> 
>> Aha, you are running LTS kernel?
>> 
>> Mine pacman brings in updates every day.
>
> Really?
Yep; but I am not on the LTS kernel, that is probably why.

> /boot:
>
> config-linux-libre-lts
> grub
> initramfs-linux-libre-lts-fallback.img
> initramfs-linux-libre-lts.img
> vmlinuz-linux-libre-lts
>
> So you have Hyperbola and you get updates every day? How comes?
No Hyperbola; I don't even know what distro that is.  Just Arch Linux here.

I guess it is because I am not on the LTS kernel, and probably because I
have lots of stuff installed.

Hard drives are cheap nowadays.  I have the entire KDE/GNOME stack
installed, and lots more.  When I need to compile a library or
application I don't want to chase dependencies around.  I just don't use
them as desktops and don't run the apps.  For example, yesterday I was
able to just git clone heaptrack and compile it, no headaches.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 24 Nov 2020 16:09:02 GMT) Full text and rfc822 format available.

Message #461 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 24 Nov 2020 18:07:52 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
>  michael_heerdegen <at> web.de, carlos <at> redhat.com
> Cc: 
> Date: Mon, 23 Nov 2020 22:30:57 +0100
> 
> ;;--------------------------------------
> ;;--------------------------------------
> ;; 3 days later
> ;; RSS was steady at 1150MB
> ;; leaped to 2.3GB very suddenly
> ;;
> ;; RSS 2311M
> ;;--------------------------------------
> ;;--------------------------------------

> ;; ~182MB
> (let ((size 0))
>   (dolist (buffer (buffer-list) size)
>     (setq size (+ size (buffer-size buffer)))))
> 182903045
> 
> ;; sums to ~142MB if I'm reading it right?
> (garbage-collect)
> ((conses 16 2081486 2630206) (symbols 48 61019 79) (strings 32 353371 288980) (string-bytes 1 13294206) (vectors 16 144742) (vector-slots 8 9503757 592939) (floats 8 2373 8320) (intervals 56 46660 10912) (buffers 992 82))

> (reduce '+ (cl-loop for thing in (garbage-collect)
>                     collect (* (nth 1 thing) (nth 2 thing))))
> 142115406
> 
> ;; malloc-info
> (malloc-info)
> <malloc version="1">
> <heap nr="0">
> <sizes>
>   <size from="33" to="48" total="240" count="5"/>
>   <size from="113" to="128" total="128" count="1"/>
> [...]
>   <size from="3137" to="3569" total="2372709" count="709"/>
>   <size from="3585" to="4081" total="1847856" count="480"/>
>   <size from="4097" to="4593" total="5672856" count="1320"/>
>   <size from="4609" to="5105" total="4675836" count="956"/>
>   <size from="5121" to="5617" total="6883318" count="1286"/>
>   <size from="5633" to="6129" total="6011919" count="1023"/>
>   <size from="6145" to="6641" total="6239871" count="975"/>
>   <size from="6657" to="7153" total="6540165" count="949"/>
>   <size from="7169" to="7665" total="5515848" count="744"/>
>   <size from="7681" to="8177" total="5148216" count="648"/>
>   <size from="8193" to="8689" total="8190223" count="975"/>
>   <size from="8705" to="9201" total="5854315" count="651"/>
>   <size from="9217" to="9713" total="5312354" count="562"/>
>   <size from="9729" to="10225" total="5154212" count="516"/>
>   <size from="10241" to="10737" total="4074005" count="389"/>
>   <size from="10753" to="12273" total="11387550" count="990"/>
>   <size from="12289" to="16369" total="32661229" count="2317"/>
>   <size from="16385" to="20465" total="36652437" count="2037"/>
>   <size from="20481" to="24561" total="21272131" count="947"/>
>   <size from="24577" to="28657" total="25462302" count="958"/>
>   <size from="28673" to="32753" total="28087234" count="914"/>
>   <size from="32769" to="36849" total="39080113" count="1121"/>
>   <size from="36865" to="40945" total="30141527" count="775"/>
>   <size from="40961" to="65521" total="166092799" count="3119"/>
>   <size from="65537" to="98289" total="218425380" count="2692"/>
>   <size from="98321" to="131057" total="178383171" count="1555"/>
>   <size from="131089" to="163825" total="167800886" count="1142"/>
>   <size from="163841" to="262065" total="367649915" count="1819"/>
>   <size from="262161" to="522673" total="185347984" count="560"/>
>   <size from="525729" to="30878897" total="113322865" count="97"/>

Look at the large chunks in the tail of this.  Together, they do
account for ~2GB.

Carlos, are these chunks in use (i.e. allocated and not freed), or are
they the free chunks that are available for allocation, but not
released to the OS?  If the former, then it sounds like this session
does have around 2GB of allocated heap data, so either there's some
allocated memory we don't account for, or there is indeed a memory
leak in Emacs.  If these are the free chunks, then the way glibc
manages free'd memory is indeed an issue.
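The per-bin arithmetic behind "they do account for ~2GB" can be checked mechanically: each <size/> element in a malloc_info dump carries total= and count= attributes, so summing them over the bins of interest reproduces the figures under discussion. A minimal sketch against a tiny inline fragment (editor's illustration; the sample values are copied from the tail of the dump above, and the full log would be parsed the same way):

```python
import xml.etree.ElementTree as ET

# Minimal malloc_info-style fragment; element and attribute names follow
# the real glibc output quoted earlier in the thread.
sample = """<malloc version="1">
<heap nr="0">
<sizes>
  <size from="40961" to="65521" total="166092799" count="3119"/>
  <size from="65537" to="98289" total="218425380" count="2692"/>
  <size from="525729" to="30878897" total="113322865" count="97"/>
</sizes>
</heap>
</malloc>"""

root = ET.fromstring(sample)
total = sum(int(e.get("total")) for e in root.iter("size"))
count = sum(int(e.get("count")) for e in root.iter("size"))
print(f"{count} chunks, {total / 2**30:.2f} GiB")  # prints: 5908 chunks, 0.46 GiB
```

Run over every <size/> element of the real log, the same two sums give the heap totals that malloc_info reports in its <total type="rest"> lines.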




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 24 Nov 2020 19:06:02 GMT) Full text and rfc822 format available.

Message #464 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 24 Nov 2020 20:05:15 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:
> 
> Look at the large chunks in the tail of this.  Together, they do 
> account for ~2GB. 
> 
> Carlos, are these chunks in use (i.e. allocated and not freed), 
> or are they the free chunks that are available for allocation, 
> but not released to the OS?  If the former, then it sounds like 
> this session does have around 2GB of allocated heap data, so 
> either there's some allocated memory we don't account for, or 
> there is indeed a memory leak in Emacs.  If these are the free 
> chunks, then the way glibc manages free'd memory is indeed an 
> issue. 

I just updated the log on my website.  Same instance a day later, 
after yet another memory spike up to 4.3GB.  Concatenated to the 
end:

https://trevorbentley.com/emacs_malloc_info.log

Some interesting observations:
- (garbage-collect) takes forever, like on the order of 5-10 
minutes, with one CPU core pegged to 100% and emacs frozen.
- The leaking stops for a while after (garbage-collect).  It was 
leaking 1MB per second for this last log, and stopped growing 
after the garbage collection.

Question 1: (garbage-collect) shows the memory usage *after* 
collecting, right?  Is there any way to get the same info without 
actually reaping dead references?  It could be that there really 
were 4.3GB of dead references.

Question 2: are the background garbage collections equivalent to 
the (garbage-collect) function?  I certainly don't notice 5-10 
minute long pauses during normal use, though "gcs-done" is 
incrementing.  Does it have a different algorithm for partial 
collection during idle, perhaps?

Question 3: I've never used the malloc_trim() function.  Could 
that be something worth experimenting with, to see if it releases 
any of the massive heap back to the OS?
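For what it's worth, malloc_trim is easy to experiment with outside Emacs. A hedged sketch using ctypes (editor's illustration, glibc-only: malloc_trim(0) asks the allocator to return free heap pages to the OS, returning 1 if anything was released and 0 otherwise; other libcs may not export the symbol at all):

```python
import ctypes

# Load the C library already mapped into this process; on glibc this
# exposes malloc_trim(3).
libc = ctypes.CDLL(None)
libc.malloc_trim.restype = ctypes.c_int
libc.malloc_trim.argtypes = [ctypes.c_size_t]

released = libc.malloc_trim(0)  # pad=0: trim as much as possible
print("memory released to OS:", bool(released))
```

On a running Emacs the equivalent experiment is `call malloc_trim(0)` from an attached gdb session; whether it helps depends on whether the retained memory consists of free glibc chunks or of still-live allocations.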

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 24 Nov 2020 19:36:02 GMT) Full text and rfc822 format available.

Message #467 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 24 Nov 2020 21:35:03 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: bugs <at> gnu.support, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
>  dj <at> redhat.com, michael_heerdegen <at> web.de, carlos <at> redhat.com
> Cc: 
> Date: Tue, 24 Nov 2020 20:05:15 +0100
> 
> I just updated the log on my website.  Same instance a day later, 
> after yet another memory spike up to 4.3GB.  Concatenated to the 
> end:
> 
> https://trevorbentley.com/emacs_malloc_info.log

I don't think I can interpret that.  In particular, how come "total"
is 4GB, but I see no comparable sizes in any of the other fields?
where do those 4GB hide?  Carlos, can you help interpreting this
report?

> Some interesting observations:
>  - (garbage-collect) takes forever, like on the order of 5-10 
>  minutes, with one CPU core pegged to 100% and emacs frozen.

Is this with the default values of gc-cons-threshold and
gc-cons-percentage?

>  - The leaking stops for a while after (garbage-collect).  It was 
>  leaking 1MB per second for this last log, and stopped growing 
>  after the garbage collection.

Now, what happens in that session once per second (in an otherwise
idle Emacs, I presume?) to cause such memory consumption?  Some
timers?  If you run with a breakpoint in malloc that just shows the
backtrace and continues, do you see what could consume 1MB every
second?

> Question 1: (garbage-collect) shows the memory usage *after* 
> collecting, right?

Yes.

> Is there any way to get the same info without actually reaping dead
> references?

What do you mean by "reaping dead references" here?

> It could be that there really were 4.3GB of dead references.

Not sure I understand what you are trying to establish here.

> Question 2: are the background garbage collections equivalent to 
> the (garbage-collect) function?  I certainly don't notice 5-10 
> minute long pauses during normal use, though "gcs-done" is 
> incrementing.  Does it have a different algorithm for partial 
> collection during idle, perhaps?

There's only one garbage-collect, it is called for _any_ GC.

What do you mean by "during normal use" in this sentence:

  I certainly don't notice 5-10 minute long pauses during normal use,
  though "gcs-done" is incrementing.

How is what you did here, where GC took several minutes, different
from "normal usage"?

> Question 3: I've never used the malloc_trim() function.  Could 
> that be something worth experimenting with, to see if it releases 
> any of the massive heap back to the OS?

That's for glibc guys to answer.

Thanks.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 04:24:02 GMT) Full text and rfc822 format available.

Message #470 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 24 Nov 2020 20:18:17 +0300
* Arthur Miller <arthur.miller <at> live.com> [2020-11-24 12:27]:
> Yepp; but I am not on lts-kernel, that is probably why.

I think it is the other issue, that you have many packages.  I also have
many for Gnome and KDE but do not get updates; maybe I use a mirror that
is not updated.  I will check that.

> > So you have Hyperbola and you get updates every day? How come?
> No, I don't even know what distro Hyperbola is; just Arch Linux
> here.

Well, then it is a different thing.  You are updating from a different
repository than me.

> Hard drives are cheap nowadays. I have the entire kde/gnome stack installed, and
> lots more. When I need to compile a library or application I don't want
> to chase dependencies around. I just don't use them as desktops and
> don't run the apps.  For example, yesterday I was able to just git clone
> heaptrack and compile it, no headaches.

That is a different OS, and Hyperbola is different.  Arch Linux has a lax
policy toward non-free software, while Hyperbola GNU/Linux-libre has a very
strict policy and does not allow anything non-free; that is the reason I
am using it.  It does not use the systemd trap and runs stably.

A few times I had problems building, for example, webkit, but
otherwise everything builds pretty well.

Hyperbola is an independent project that receives little support; it
deserves much more.  They will also create the new HyperbolaBSD
system, which will move the OpenBSD kernel in a GNU GPL direction.

Jean




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 10:23:01 GMT) Full text and rfc822 format available.

Message #473 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 11:22:16 +0100
Eli Zaretskii <eliz <at> gnu.org> writes: 
>> Some interesting observations: 
>>  - (garbage-collect) takes forever, like on the order of 5-10 
>>  minutes, with one CPU core pegged to 100% and emacs frozen. 
> 
> Is this with the default values of gc-cons-threshold and 
> gc-cons-percentage? 

Yes, and they're both printed in the logs: threshold 800000, 
percentage 0.1.

>>  - The leaking stops for a while after (garbage-collect).  It 
>>  was  leaking 1MB per second for this last log, and stopped 
>>  growing  after the garbage collection. 
> 
> Now, what happens in that session once per second (in an 
> otherwise idle Emacs, I presume?) to cause such memory 
> consumption?  Some timers?  If you run with a breakpoint in 
> malloc that just shows the backtrace and continues, do you see 
> what could consume 1MB every second? 

Not an idle emacs at all, in this case.  I have seen the memory 
growth in an idle emacs, but the only one I can reproduce it on is 
the emacs-slack one, which is connected to a corporate Slack 
account.  Tons of short messages streaming in over the network and 
being displayed in rotating buffers, with images mixed in.  It's a 
big ol' "web 2.0" API... it can easily pass 1MB/s of bloated JSON 
messages through.  This is one _very active_ emacs.

The original strace logs and valgrind output I posted before 
showed a random assortment of calls from gnutls, imagemagick, and 
lisp strings, with lisp strings dominating the malloc calls 
(enlarge_buffer_text, mostly).

>> Is there any way to get the same info without actually reaping 
>> dead references? 
> 
> What do you mean by "reaping dead references" here? 
> 
>> It could be that there really were 4.3GB of dead references. 
> 
> Not sure I understand what you are trying to establish here. 
>

GC is running through a list of active allocations and freeing the 
ones with no remaining references, right?  Presumably, a lot of 
active malloc() allocations are no longer referenced, and 
(garbage-collect) calls free() on a bunch of blocks.  I'm 
wondering how to figure out how much memory a call to 
(garbage-collect) has actually freed.  Possibly a sort of "dry 
run" where it performs the GC algorithm, but doesn't release any 
memory.

(I'm very much assuming how emacs memory management works.  Please 
correct me if I'm wrong.)

> There's only one garbage-collect, it is called for _any_ GC. 
> 
> What do you mean by "during normal use" in this sentence: 
> 
>   I certainly don't notice 5-10 minute long pauses during normal 
>   use, though "gcs-done" is incrementing. 
> 
> How is what you did here, where GC took several minutes, 
> different from "normal usage"?

In this log, I am explicitly executing "(garbage-collect)", and it 
takes 10 minutes, during which the UI is unresponsive and 
sometimes even turns grey when the window stops redrawing.

By "normal use", I mean that I use this emacs instance on-and-off 
all day long.  I would notice if it were freezing for minutes at a 
time, and it definitely is not.

As far as I understand, garbage collection is supposed to happen 
automatically during idle.  I would certainly notice if it locked 
up the whole instance for 10 minutes from an idle GC.  I think 
this means the automatic garbage collection is either not 
happening, or running on a different thread, or being interrupted, 
or simply works differently.  I have no idea, hence asking you :)

The confusing part is that "gcs-done" increments a lot between my 
manual (garbage-collect) calls.  It looks like it does about 500 
per day.  There is no way emacs freezes and pegs a CPU core to max 
500 times per day, but it does exactly that every time I manually 
execute garbage-collect. 

Side note: it inflated to 7670MB overnight.  I'm running 
(garbage-collect) as I type this, but it has been churning for 30 
minutes with the UI frozen, and still isn't done.  I'm going to 
give up and kill it if it doesn't finish soon, as I kind of need 
that 8GB back.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 15:00:03 GMT) Full text and rfc822 format available.

Message #476 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Arthur Miller <arthur.miller <at> live.com>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 15:59:21 +0100
Jean Louis <bugs <at> gnu.support> writes:

> * Arthur Miller <arthur.miller <at> live.com> [2020-11-24 12:27]:
>> Yepp; but I am not on lts-kernel, that is probably why.
>
> I think it is the other issue, that you have many packages.  I also have
> many for Gnome and KDE but do not get updates; maybe I use a mirror that
> is not updated.  I will check that.
>
>> > So you have Hyperbola and you get updates every day? How come?
>> No, I don't even know what distro Hyperbola is; just Arch Linux
>> here.
>
> Well, then it is a different thing.  You are updating from a different
> repository than me.
>
>> Hard drives are cheap nowadays. I have the entire kde/gnome stack installed, and
>> lots more. When I need to compile a library or application I don't want
>> to chase dependencies around. I just don't use them as desktops and
>> don't run the apps.  For example, yesterday I was able to just git clone
>> heaptrack and compile it, no headaches.
>
> That is a different OS, and Hyperbola is different.  Arch Linux has a lax
> policy toward non-free software, while Hyperbola GNU/Linux-libre has a very
> strict policy and does not allow anything non-free; that is the reason I
> am using it.  It does not use the systemd trap and runs stably.
>
> A few times I had problems building, for example, webkit, but
> otherwise everything builds pretty well.
>
> Hyperbola is an independent project that receives little support; it
> deserves much more.  They will also create the new HyperbolaBSD
> system, which will move the OpenBSD kernel in a GNU GPL direction.
>
> Jean
OK, thanks.  I had never heard of Hyperbola before.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 17:46:02 GMT) Full text and rfc822 format available.

Message #479 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>, Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, michael_heerdegen <at> web.de,
 dj <at> redhat.com, bugs <at> gnu.support
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 12:45:04 -0500
On 11/24/20 11:07 AM, Eli Zaretskii wrote:
> Look at the large chunks in the tail of this.  Together, they do
> account for ~2GB.
> 
> Carlos, are these chunks in use (i.e. allocated and not freed), or are
> they the free chunks that are available for allocation, but not
> released to the OS?  If the former, then it sounds like this session
> does have around 2GB of allocated heap data, so either there's some
> allocated memory we don't account for, or there is indeed a memory
> leak in Emacs.  If these are the free chunks, then the way glibc
> manages free'd memory is indeed an issue.

These chunks are all free and mapped for use by the algorithm to satisfy
a request by the application.

Looking at the last malloc_info (annotated):
https://trevorbentley.com/emacs_malloc_info.log
===============================================
;; malloc-info
(malloc-info)
<malloc version="1">
<heap nr="0">
<sizes>
</sizes>
<total type="fast" count="0" size="0"/>

=> No fast bins.

<total type="rest" count="1" size="112688"/>

=> 1 unused bin.

=> In total we have only 112KiB in 1 unused chunk free'd on the heap.
=> The rest of the heap is in use by the application.
=> It looks like the application usage goes down to zero and then up again?

<system type="current" size="4243079168"/>

=> Currently at 4.2GiB in arena 0 (kernel assigned heap).
=> The application is using that sbrk'd memory.

<system type="max" size="4243079168"/>
<aspace type="total" size="4243079168"/>
<aspace type="mprotect" size="4243079168"/>

=> This indicates *real* API usage of 4.2GiB.

</heap>
<heap nr="1">

=> This is arena 1, which is a thread heap, and uses mmap to create heaps.

<sizes>
  <size from="17" to="32" total="32" count="1"/>
  <size from="33" to="48" total="240" count="5"/>
  <size from="49" to="64" total="256" count="4"/>
  <size from="65" to="80" total="160" count="2"/>
  <size from="97" to="112" total="224" count="2"/>
  <size from="33" to="33" total="231" count="7"/>
  <size from="49" to="49" total="294" count="6"/>
  <size from="65" to="65" total="390" count="6"/>
  <size from="81" to="81" total="162" count="2"/>
  <size from="97" to="97" total="97" count="1"/>
  <size from="129" to="129" total="516" count="4"/>
  <size from="161" to="161" total="644" count="4"/>
  <size from="209" to="209" total="1254" count="6"/>
  <size from="241" to="241" total="241" count="1"/>
  <size from="257" to="257" total="257" count="1"/>
  <size from="305" to="305" total="610" count="2"/>
  <size from="32209" to="32209" total="32209" count="1"/>
  <size from="3982129" to="8059889" total="28065174" count="6"/>
  <unsorted from="209" to="4020593" total="4047069" count="13"/>
</sizes>
<total type="fast" count="14" size="912"/>
<total type="rest" count="61" size="42357420"/>

=> Pretty small, 912 bytes in fastbins, and 42MiB in cached chunks.

<system type="current" size="42426368"/>
<system type="max" size="42426368"/>
<aspace type="total" size="42426368"/>
<aspace type="mprotect" size="42426368"/>
<aspace type="subheaps" size="1"/>
</heap>
<total type="fast" count="14" size="912"/>
<total type="rest" count="62" size="42470108"/>
<total type="mmap" count="9" size="208683008"/>
<system type="current" size="4285505536"/>
<system type="max" size="4285505536"/>
<aspace type="total" size="4285505536"/>
<aspace type="mprotect" size="4285505536"/>
</malloc>
===============================================

This shows the application is USING memory on the main system heap.

It might not be "leaked" memory since the application might be using it.

You want visibility into what is USING that memory.

With glibc-malloc-trace-utils you can try to do that with:

LD_PRELOAD=libmtrace.so \
MTRACE_CTL_FILE=/home/user/app.mtr \
MTRACE_CTL_BACKTRACE=1 \
./app

This will use libgcc's unwinder to get a copy of the malloc caller
address and then we'll have to decode that based on a /proc/self/maps.

Next steps:
- Get a glibc-malloc-trace-utils trace of the application ratcheting.
- Get a copy of /proc/$PID/maps for the application (shorter version of smaps).

Then we might be able to correlate where all the kernel heap data went?

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 17:48:01 GMT) Full text and rfc822 format available.

Message #482 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 19:47:16 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: bugs <at> gnu.support, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
>  dj <at> redhat.com, michael_heerdegen <at> web.de, carlos <at> redhat.com
> Date: Wed, 25 Nov 2020 11:22:16 +0100
> 
> >>  - The leaking stops for a while after (garbage-collect).  It 
> >>  was  leaking 1MB per second for this last log, and stopped 
> >>  growing  after the garbage collection. 
> > 
> > Now, what happens in that session once per second (in an 
> > otherwise idle Emacs, I presume?) to cause such memory 
> > consumption?  Some timers?  If you run with a breakpoint in 
> > malloc that just shows the backtrace and continues, do you see 
> > what could consume 1MB every second? 
> 
> Not an idle emacs at all, in this case.  I have seen the memory 
> growth in an idle emacs, but the only one I can reproduce it on is 
> the emacs-slack one, which is connected to a corporate Slack 
> account.  Tons of short messages streaming in over the network and 
> being displayed in rotating buffers, with images mixed in.  It's a 
> big 'ol "web 2.0" API... it can easily pass 1MB/s of bloated JSON 
> messages through.  This is one _very active_ emacs.

Then I don't think we will be able to understand what consumes memory
at such high rate without some debugging.  Have you considered using
breakpoints and collecting backtraces, as I suggested earlier?

The hard problem is to understand which memory is allocated and not
freed "soon enough", but for such a high rate of memory consumption
perhaps just knowing which code requests so much memory would be an
important clue.

> The original strace logs and valgrind output I posted before 
> showed a random assortment of calls from gnutls, imagemagick, and 
> lisp strings, with lisp strings dominating the malloc calls 
> (enlarge_buffer_text, mostly).

Enlarging buffer text generally causes malloc to call mmap (as opposed
to brk/sbrk), so this cannot cause the situation where a lot of unused
memory is not returned to the OS.  And we already saw that just by
summing up the buffer text memory we never get even close to the VM
size of the process.
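To make the mmap-vs-sbrk distinction concrete, here is a small glibc-specific sketch (my own illustration, not from this thread; it assumes `mallinfo2`, which needs glibc 2.33 or later, and the default mmap threshold of 128 KiB): a multi-megabyte request is served by mmap() and its pages go straight back to the kernel on free(), while small requests come from the sbrk'd arena.

```c
/* Sketch: large allocations bypass the sbrk heap.  Assumes glibc
   (mallinfo2 needs glibc >= 2.33); with the default M_MMAP_THRESHOLD
   of 128 KiB, a multi-megabyte request is served by mmap() and its
   pages are returned to the kernel immediately on free(). */
#include <malloc.h>
#include <stdlib.h>

/* Returns how many mmap'd chunks (mallinfo2's hblks counter) a single
   malloc() of the given size added to the process. */
int mmapped_chunk_delta(size_t request)
{
    struct mallinfo2 before = mallinfo2();
    void *big = malloc(request);
    struct mallinfo2 after = mallinfo2();
    free(big);   /* an mmap'd chunk goes back to the OS right here */
    return (int)(after.hblks - before.hblks);
}
```

A 10 MiB request should show a delta of at least 1 (served via mmap), while a 64-byte request shows 0 (served from the arena), which is why buffer-text growth alone should not inflate the sbrk'd heap.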

> > What do you mean by "reaping dead references" here? 
> > 
> >> It could be that there really were 4.3GB of dead references. 
> > 
> > Not sure I understand what you are trying to establish here. 
> 
> GC is running through a list of active allocations and freeing the 
> ones with no remaining references, right?  Presumably, a lot of 
> active malloc() allocations are no longer referenced, and 
> (garbage-collect) calls free() on a bunch of blocks.

We only call free on "unfragmented" Lisp data, e.g. if some block of
Lisp strings was freed in its entirety.  If some Lisp objects in a
block are still alive, we don't free the block, we just mark the freed
Lisp objects as being free and available for reuse.

So the result of GC only tells you how much of the memory was freed
but NOT returned to glibc; it doesn't show how much was actually
free'd.
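Eli's description can be illustrated with a toy sweep phase (an illustrative model I am assuming here, not Emacs's actual allocator): objects live in fixed-size blocks, and a block goes back to malloc only when every object in it is dead; a block with even one live object is retained whole.

```c
/* Toy model of block-based sweeping: a block is returned to the
   allocator only if *all* of its objects are dead; otherwise the
   whole block is kept and its dead slots are merely reusable. */
#include <stdbool.h>
#include <stdlib.h>

enum { OBJS_PER_BLOCK = 4 };

struct block {
    bool live[OBJS_PER_BLOCK];   /* mark bits set by the GC trace phase */
    struct block *next;
};

/* Sweep the block list; returns how many blocks were actually free()'d. */
int sweep(struct block **head)
{
    int freed = 0;
    for (struct block **p = head; *p; ) {
        struct block *b = *p;
        bool any_live = false;
        for (int i = 0; i < OBJS_PER_BLOCK; i++)
            if (b->live[i])
                any_live = true;
        if (any_live) {
            p = &b->next;        /* fragmented block: retained in full */
        } else {
            *p = b->next;        /* fully dead block: returned to malloc */
            free(b);
            freed++;
        }
    }
    return freed;
}
```

Under this model a heap of blocks each holding one live string frees nothing, which is one way a GC can reclaim little memory for glibc even while marking many objects dead.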

> I'm wondering how to figure out how much memory a call to
> (garbage-collect) has actually freed.  Possibly a sort of "dry run"
> where it performs the GC algorithm, but doesn't release any memory.

"Freed" in what sense? returned to glibc?

> > There's only one garbage-collect, it is called for _any_ GC. 
> > 
> > What do you mean by "during normal use" in this sentence: 
> > 
> >   I certainly don't notice 5-10 minute long pauses during normal 
> >   use, though "gcs-done" is incrementing. 
> > 
> > How is what you did here, where GC took several minutes, 
> > different from "normal usage"?
> 
> In this log, I am explicitly executing "(garbage-collect)", and it 
> takes 10 minutes, during which the UI is unresponsive and 
> sometimes even turns grey when the window stops redrawing.
> 
> By "normal use", I mean that I use this emacs instance on-and-off 
> all day long.  I would notice if it were freezing for minutes at a 
> time, and it definitely is not.
> 
> As far as I understand, garbage collection is supposed to happen 
> automatically during idle.  I would certainly notice if it locked 
> up the whole instance for 10 minutes from an idle GC.  I think 
> this means the automatic garbage collection is either not 
> happening, or running on a different thread, or being interrupted, 
> or simply works differently.  I have no idea, hence asking you :)

That is very strange.  There's only one function to perform GC, and it
is called both from garbage-collect and from an internal function
called when Emacs is idle or when it calls interpreter functions like
'eval' or 'funcall'.  The only thing garbage-collect does that the
internal function doesn't is generate the list that is the return
value of garbage-collect, but that cannot possibly take minutes.

I suggest setting garbage-collection-messages non-nil; then you should
see when each GC, whether the one you invoke interactively or the
automatic one, starts and ends.  Maybe the minutes you wait are not
directly related to GC, but to something else that is triggered by GC?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 17:49:01 GMT) Full text and rfc822 format available.

Message #485 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>, Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, michael_heerdegen <at> web.de,
 dj <at> redhat.com, bugs <at> gnu.support
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 12:48:20 -0500
On 11/24/20 2:35 PM, Eli Zaretskii wrote:
>> From: Trevor Bentley <trevor <at> trevorbentley.com>
>> Cc: bugs <at> gnu.support, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
>>  dj <at> redhat.com, michael_heerdegen <at> web.de, carlos <at> redhat.com
>> Cc: 
>> Date: Tue, 24 Nov 2020 20:05:15 +0100
>>
>> I just updated the log on my website.  Same instance a day later, 
>> after yet another memory spike up to 4.3GB.  Concatenated to the 
>> end:
>>
>> https://trevorbentley.com/emacs_malloc_info.log
> 
> I don't think I can interpret that.  In particular, how come "total"
> is 4GB, but I see no comparable sizes in any of the other fields?
> Where do those 4GB hide?  Carlos, can you help interpret this
> report?

The 4GiB are in use by the application and it is up to us to increase
the observability of that usage with our tooling.

>> Question 3: I've never used the malloc_trim() function.  Could 
>> that be something worth experimenting with, to see if it releases 
>> any of the massive heap back to the OS?
> 
> That's for glibc guys to answer.

If malloc_info() shows memory that is free'd and unused, then malloc_trim()
can release any unused pages back to the OS.

However, your last day's malloc_info() output shows only ~50MiB of
unused memory out of ~4GiB, so calling malloc_trim() would free only
~50MiB.  There is heavy usage of the kernel heap by something.  Finding
out what is using that memory is our next step.
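For anyone wanting to experiment as Trevor suggested, a minimal standalone sketch (my own, glibc-specific; malloc_trim and malloc_info are GNU extensions) shows the mechanism: churn the main arena so it holds free chunks, then ask malloc_trim(0) to hand whole free pages back to the kernel.  It returns 1 if anything was released.

```c
/* Sketch: what malloc_trim() can reclaim.  Assumes glibc. */
#include <malloc.h>
#include <stdlib.h>

/* Grow the main (sbrk) arena with many small allocations, free them
   all so the arena caches the chunks, then trim.  Returns malloc_trim's
   result: 1 if pages were released to the OS, 0 otherwise. */
int churn_and_trim(void)
{
    enum { N = 100000, SZ = 512 };
    static void *p[N];

    for (int i = 0; i < N; i++)
        p[i] = malloc(SZ);
    for (int i = 0; i < N; i++)
        free(p[i]);

    /* At this point malloc_info(0, stdout) would report the churned
       memory as free chunks; malloc_trim(0) releases the whole free
       pages among them back to the kernel. */
    return malloc_trim(0);
}
```

The catch in this bug is exactly what Carlos notes: trim can only release chunks that malloc_info() already reports as free, so with ~50MiB free out of ~4GiB it cannot help much.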

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 18:04:02 GMT) Full text and rfc822 format available.

Message #488 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 20:03:35 +0200
> Cc: bugs <at> gnu.support, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
>  dj <at> redhat.com, michael_heerdegen <at> web.de
> From: Carlos O'Donell <carlos <at> redhat.com>
> Date: Wed, 25 Nov 2020 12:45:04 -0500
> 
> On 11/24/20 11:07 AM, Eli Zaretskii wrote:
> > Look at the large chunks in the tail of this.  Together, they do
> > account for ~2GB.
> > 
> > Carlos, are these chunks in use (i.e. allocated and not freed), or are
> > they the free chunks that are available for allocation, but not
> > released to the OS?  If the former, then it sounds like this session
> > does have around 2GB of allocated heap data, so either there's some
> > allocated memory we don't account for, or there is indeed a memory
> > leak in Emacs.  If these are the free chunks, then the way glibc
> > manages free'd memory is indeed an issue.
> 
> These chunks are all free and mapped for use by the algorithm to satisfy
> a request by the application.

So we have more than 1.5GB of free memory available for allocation, is
that right?

But then how to reconcile this with what you say next:

> <system type="current" size="4243079168"/>
> 
> => Currently at 4.2GiB in arena 0 (kernel assigned heap).
> => The application is using that sbrk'd memory.
> 
> <system type="max" size="4243079168"/>
> <aspace type="total" size="4243079168"/>
> <aspace type="mprotect" size="4243079168"/>
> 
> => This indicates *real* API usage of 4.2GiB.

Here you seem to say that these 4.2GB are _used_ by the application?
While I thought the large chunks I asked about, which total more than
1.5GB, are a significant part of those 4.2GB?

To make sure there are no misunderstandings, I'm talking about this
part of the log:

  <heap nr="0">
  <sizes>
    [...]
    <size from="10753" to="12273" total="11387550" count="990"/>
    <size from="12289" to="16369" total="32661229" count="2317"/>
    <size from="16385" to="20465" total="36652437" count="2037"/>
    <size from="20481" to="24561" total="21272131" count="947"/>
    <size from="24577" to="28657" total="25462302" count="958"/>
    <size from="28673" to="32753" total="28087234" count="914"/>
    <size from="32769" to="36849" total="39080113" count="1121"/>
    <size from="36865" to="40945" total="30141527" count="775"/>
    <size from="40961" to="65521" total="166092799" count="3119"/>
    <size from="65537" to="98289" total="218425380" count="2692"/>
    <size from="98321" to="131057" total="178383171" count="1555"/>
    <size from="131089" to="163825" total="167800886" count="1142"/>
    <size from="163841" to="262065" total="367649915" count="1819"/>
    <size from="262161" to="522673" total="185347984" count="560"/>
    <size from="525729" to="30878897" total="113322865" count="97"/>
    <unsorted from="33" to="33" total="33" count="1"/>
  </sizes>

If I sum up the "total=" parts of these large numbers, I get 1.6GB.
Is this free memory, given back to glibc for future allocations from
this arena, and if so, are those 1.6GB part of the 4.2GB total?
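To double-check that arithmetic, the total= attributes can be summed mechanically; here is a quick sketch (the parsing helper is my own, not part of any tool mentioned in the thread) that adds up every total="..." attribute in a malloc_info <sizes> section:

```c
/* Sum the total="..." attributes from malloc_info XML output,
   e.g. the <size .../> lines of one <sizes> section. */
#include <stdlib.h>
#include <string.h>

long long sum_totals(const char *xml)
{
    long long sum = 0;
    const char *p = xml;
    while ((p = strstr(p, "total=\"")) != NULL) {
        p += strlen("total=\"");
        sum += atoll(p);   /* parses the digits up to the closing quote */
    }
    return sum;
}
```

Fed the <size> lines quoted above, it yields roughly 1.6GB, matching the hand sum.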

> This shows the application is USING memory on the main system heap.
> 
> It might not be "leaked" memory since the application might be using it.
> 
> You want visibility into what is USING that memory.
> 
> With glibc-malloc-trace-utils you can try to do that with:
> 
> LD_PRELOAD=libmtrace.so \
> MTRACE_CTL_FILE=/home/user/app.mtr \
> MTRACE_CTL_BACKTRACE=1 \
> ./app
> 
> This will use libgcc's unwinder to get a copy of the malloc caller
> address and then we'll have to decode that based on a /proc/self/maps.
> 
> Next steps:
> - Get a glibc-malloc-trace-utils trace of the application ratcheting.
> - Get a copy of /proc/$PID/maps for the application (shorter version of smaps).
> 
> Then we might be able to correlate where all the kernel heap data went?

Thanks for the instructions.  Would people please try that and report
the results?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 18:36:01 GMT) Full text and rfc822 format available.

Message #491 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Arthur Miller <arthur.miller <at> live.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 18:09:25 +0300
* Arthur Miller <arthur.miller <at> live.com> [2020-11-25 17:59]:
> > Hyperbola is independent project that receives little support, it
> > should receive so much more. They will also create new HyperbolaBSD
> > system that will move an OpenBSD kernel into GNU GPL direction.
> >
> > Jean
> OK, thanks.  I had never heard of Hyperbola before.

https://www.hyperbola.info

And there are other fully free operating systems endorsed by the FSF
such as:

Trisquel GNU/Linux-libre
https://trisquel.info

and others on https://www.gnu.org

Those are the only ones I am using, due to the agreement among people
to provide fully free software without access to anything non-free.

Jean




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 18:36:02 GMT) Full text and rfc822 format available.

Message #494 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, Trevor Bentley <trevor <at> trevorbentley.com>,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 21:08:35 +0300
* Carlos O'Donell <carlos <at> redhat.com> [2020-11-25 20:45]:
> With glibc-malloc-trace-utils you can try to do that with:
> 
> LD_PRELOAD=libmtrace.so \
> MTRACE_CTL_FILE=/home/user/app.mtr \
> MTRACE_CTL_BACKTRACE=1 \
> ./app
> 
> This will use libgcc's unwinder to get a copy of the malloc caller
> address and then we'll have to decode that based on a
> /proc/self/maps.

I will also try that in the next session.

One problem I have here is that since I started this session I have not
had any problem.  My uptime is over 2 days, I have not changed my
habits of work within Emacs, and my swap remains under 200 MB with only
10% of memory used by Emacs, normally 80-90%.

Almost as a rule, I could not run longer than 1 day before I would get
swap of about 3 GB - 4 GB and an unresponsive Emacs.

Can it be that libmtrace.so could prevent something that is
normally happening?





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 18:52:01 GMT) Full text and rfc822 format available.

Message #497 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Jean Louis <bugs <at> gnu.support>, Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, Eli Zaretskii <eliz <at> gnu.org>,
 michael_heerdegen <at> web.de, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 19:51:39 +0100
Jean Louis <bugs <at> gnu.support> writes:

>> This will use libgcc's unwinder to get a copy of the malloc 
>> caller address and then we'll have to decode that based on a 
>> /proc/self/maps. 
> 
> I will also try that in the next session. 

As will I, but probably won't set it up until this weekend.

> One problem I have here is that since I started this session I 
> have not had any problem.  My uptime is over 2 days, I have not 
> changed my habits of work within Emacs, and my swap remains 
> under 200 MB with only 10% of memory used by Emacs, normally 80-90%. 
> 
> Almost as a rule, I could not run longer than 1 day before I 
> would get swap of about 3 GB - 4 GB and an unresponsive Emacs. 
> 
> Can it be that libmtrace.so could prevent something that is 
> normally happening? 

I see high variation in how long it takes to hit it on my machine. 
The shortest was after ~4 hours, average is 1.5 days, and the 
longest was 5 days.  Perhaps you're seeing the same.

I also still hit it while running under Valgrind; the whole emacs 
session was slow as hell, but still managed to blow out its heap 
in a few days.  Of course, libmtrace could be different, but at 
least it doesn't seem to be a heisenbug.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 18:58:01 GMT) Full text and rfc822 format available.

Message #500 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 13:57:34 -0500
On 11/25/20 1:03 PM, Eli Zaretskii wrote:
>> Cc: bugs <at> gnu.support, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
>>  dj <at> redhat.com, michael_heerdegen <at> web.de
>> From: Carlos O'Donell <carlos <at> redhat.com>
>> Date: Wed, 25 Nov 2020 12:45:04 -0500
>>
>> On 11/24/20 11:07 AM, Eli Zaretskii wrote:
>>> Look at the large chunks in the tail of this.  Together, they do
>>> account for ~2GB.
>>>
>>> Carlos, are these chunks in use (i.e. allocated and not freed), or are
>>> they the free chunks that are available for allocation, but not
>>> released to the OS?  If the former, then it sounds like this session
>>> does have around 2GB of allocated heap data, so either there's some
>>> allocated memory we don't account for, or there is indeed a memory
>>> leak in Emacs.  If these are the free chunks, then the way glibc
>>> manages free'd memory is indeed an issue.
>>
>> These chunks are all free and mapped for use by the algorithm to satisfy
>> a request by the application.
> 
> So we have more than 1.5GB free memory available for allocation, is
> that right?

There are 3 malloc_info traces in the log.

1. Lines 47-219. Day 1: 1100MiB of RSS.
2. Lines 386-556. Day 4: 2.3GiB of RSS.
3. Lines 744-792. Day 5: 4.2GiB of RSS.

Lines are numbered for the log starting at 1.
 
> To make sure there are no misunderstandings, I'm talking about this
> part of the log:

Your analysis is for trace #2, lines 386-556.

My analysis was for trace #3, lines 744-792.

>   <heap nr="0">
>   <sizes>
>     [...]
>     <size from="10753" to="12273" total="11387550" count="990"/>
>     <size from="12289" to="16369" total="32661229" count="2317"/>
>     <size from="16385" to="20465" total="36652437" count="2037"/>
>     <size from="20481" to="24561" total="21272131" count="947"/>
>     <size from="24577" to="28657" total="25462302" count="958"/>
>     <size from="28673" to="32753" total="28087234" count="914"/>
>     <size from="32769" to="36849" total="39080113" count="1121"/>
>     <size from="36865" to="40945" total="30141527" count="775"/>
>     <size from="40961" to="65521" total="166092799" count="3119"/>
>     <size from="65537" to="98289" total="218425380" count="2692"/>
>     <size from="98321" to="131057" total="178383171" count="1555"/>
>     <size from="131089" to="163825" total="167800886" count="1142"/>
>     <size from="163841" to="262065" total="367649915" count="1819"/>
>     <size from="262161" to="522673" total="185347984" count="560"/>
>     <size from="525729" to="30878897" total="113322865" count="97"/>
>     <unsorted from="33" to="33" total="33" count="1"/>
>   </sizes>
> 
> If I sum up the "total=" parts of these large numbers, I get 1.6GB.
> Is this free memory, given back to glibc for future allocations from
> this arena, and if so, are those 1.6GB part of the 4.2GB total?

In trace #2 we have these final statistics:

549 <total type="fast" count="39" size="2656"/>
550 <total type="rest" count="44013" size="1755953515"/>
551 <total type="mmap" count="6" size="121565184"/>
552 <system type="current" size="2246778880"/>
553 <system type="max" size="2246778880"/>
554 <aspace type="total" size="2246778880"/>
555 <aspace type="mprotect" size="2246778880"/>
556 </malloc>

This shows ~1.7GiB of unused free chunks. Keep in mind glibc malloc is a
heap-based allocator, so with a FIFO usage pattern you won't see the kernel
heap shrink until you free the most recently allocated chunk. In trace #3 we
*do* see that application demand consumes all these free chunks again, so
something in the application is using them. There are none left reported in
the malloc_info statistics (which could also indicate chunk corruption).

During trace #2 the only way to free some of the ~1.7GiB in use by the allocator
is to call malloc_trim() to return unused pages (this requires a walk of the
free/unsorted chunks and munmap() calls to the kernel to reduce RSS accounting).
Calling malloc_trim is expensive, particularly if you're just going to use the
chunks again, as appears to happen the next day.

In trace #3, for which we are at 4.2GiB of RSS usage, we see the following:

742 ;; malloc-info
743 (malloc-info)
744 <malloc version="1">
745 <heap nr="0">
746 <sizes>
747 </sizes>
748 <total type="fast" count="0" size="0"/>
749 <total type="rest" count="1" size="112688"/>

a. Arena 0 (kernel heap) shows 0KiB of unused fast bins and 112KiB of
   other free chunks in 1 bin (probably the top chunk).

750 <system type="current" size="4243079168"/>
751 <system type="max" size="4243079168"/>
752 <aspace type="total" size="4243079168"/>
753 <aspace type="mprotect" size="4243079168"/>

b. Arena 0 (kernel heap) shows 4.2GiB "current" which means that the
   sbrk-extended kernel heap is in use up to 4.2GiB.
   WARNING: We count "foreign" uses of sbrk as brk space, so looking for
   sbrk or brk by a foreign source is useful.

754 </heap>
755 <heap nr="1">
756 <sizes>
757   <size from="17" to="32" total="32" count="1"/>
758   <size from="33" to="48" total="240" count="5"/>
759   <size from="49" to="64" total="256" count="4"/>
760   <size from="65" to="80" total="160" count="2"/>
761   <size from="97" to="112" total="224" count="2"/>
762   <size from="33" to="33" total="231" count="7"/>
763   <size from="49" to="49" total="294" count="6"/>
764   <size from="65" to="65" total="390" count="6"/>
765   <size from="81" to="81" total="162" count="2"/>
766   <size from="97" to="97" total="97" count="1"/>
767   <size from="129" to="129" total="516" count="4"/>
768   <size from="161" to="161" total="644" count="4"/>
769   <size from="209" to="209" total="1254" count="6"/>
770   <size from="241" to="241" total="241" count="1"/>
771   <size from="257" to="257" total="257" count="1"/>
772   <size from="305" to="305" total="610" count="2"/>
773   <size from="32209" to="32209" total="32209" count="1"/>
774   <size from="3982129" to="8059889" total="28065174" count="6"/>
775   <unsorted from="209" to="4020593" total="4047069" count="13"/>
776 </sizes>
777 <total type="fast" count="14" size="912"/>
778 <total type="rest" count="61" size="42357420"/>
779 <system type="current" size="42426368"/>
780 <system type="max" size="42426368"/>
781 <aspace type="total" size="42426368"/>
782 <aspace type="mprotect" size="42426368"/>
783 <aspace type="subheaps" size="1"/>

c. Arena 1 has 42MiB of free'd chunks for use.

784 </heap>
785 <total type="fast" count="14" size="912"/>
786 <total type="rest" count="62" size="42470108"/>
787 <total type="mmap" count="9" size="208683008"/>

d. We have:
   - 912 bytes of fast bins.
   - 42MiB of regular bins.
   - 200MiB of mmap'd large chunks.

788 <system type="current" size="4285505536"/>
789 <system type="max" size="4285505536"/>
790 <aspace type="total" size="4285505536"/>

e. Total allocated space is 4.2GiB.

791 <aspace type="mprotect" size="4285505536"/>
792 </malloc>

Something is using the kernel heap chunks, or calling sbrk/brk
directly (since foreign brks are counted by our statistics).

>> This shows the application is USING memory on the main system heap.
>>
>> It might not be "leaked" memory since the application might be using it.
>>
>> You want visibility into what is USING that memory.
>>
>> With glibc-malloc-trace-utils you can try to do that with:
>>
>> LD_PRELOAD=libmtrace.so \
>> MTRACE_CTL_FILE=/home/user/app.mtr \
>> MTRACE_CTL_BACKTRACE=1 \
>> ./app
>>
>> This will use libgcc's unwinder to get a copy of the malloc caller
>> address and then we'll have to decode that based on a /proc/self/maps.
>>
>> Next steps:
>> - Get a glibc-malloc-trace-utils trace of the application ratcheting.
>> - Get a copy of /proc/$PID/maps for the application (shorter version of smaps).
>>
>> Then we might be able to correlate where all the kernel heap data went?
> 
> Thanks for the instructions.  Would people please try that and report
> the results?
> 


-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 19:02:01 GMT) Full text and rfc822 format available.

Message #503 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, Trevor Bentley <trevor <at> trevorbentley.com>,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 14:01:32 -0500
On 11/25/20 1:08 PM, Jean Louis wrote:
> * Carlos O'Donell <carlos <at> redhat.com> [2020-11-25 20:45]:
>> With glibc-malloc-trace-utils you can try to do that with:
>>
>> LD_PRELOAD=libmtrace.so \
>> MTRACE_CTL_FILE=/home/user/app.mtr \
>> MTRACE_CTL_BACKTRACE=1 \
>> ./app
>>
>> This will use libgcc's unwinder to get a copy of the malloc caller
>> address and then we'll have to decode that based on a
>> /proc/self/maps.
> 
> I will also try that in the next session.
> 
> One problem I have here is that since I started this session I have
> not had any problem. My uptime is over 2 days, I have not changed my
> habits of work within Emacs, and my swap remains under 200 MB with
> only 10% memory used by Emacs, normally 80-90%.
> 
> Almost as a rule, I could not run longer than 1 day before swap would
> grow to about 3-4 GB and Emacs would become unresponsive.
> 
> Could it be that libmtrace.so prevents something that normally
> happens?

It could. If there are timing sensitivities to this issue then it might
be sufficiently perturbed that it doesn't reproduce. The above backtracing
is expensive and increases the performance impact. However, given that
we want to know who the caller was and determine the source of the 4.2GiB
allocations... we need to try to capture that information.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 19:03:01 GMT) Full text and rfc822 format available.

Message #506 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Trevor Bentley <trevor <at> trevorbentley.com>, Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, Eli Zaretskii <eliz <at> gnu.org>,
 michael_heerdegen <at> web.de, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 14:02:33 -0500
On 11/25/20 1:51 PM, Trevor Bentley wrote:
> I also still hit it while running under Valgrind; the whole emacs
> session was slow as hell, but still managed to blow out its heap in a
> few days.  Of course, libmtrace could be different, but at least it
> doesn't seem to be a heisenbug.

Do you have a valgrind report to share?

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 19:07:02 GMT) Full text and rfc822 format available.

Message #509 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 20:06:21 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

> Then I don't think we will be able to understand what consumes 
> memory at such high rate without some debugging.  Have you 
> considered using breakpoints and collecting backtraces, as I 
> suggested earlier? 

Next up will be libmtrace, and then I can look into gdb.  It's 
going to be really noisy... we'll see how it goes.

> 
> So the result of GC shows only tells you how much of the memory 
> was freed but NOT returned to glibc, it doesn't show how much 
> was actually free'd. 
> 
>> I'm wondering how to figure out how much memory a call to 
>> (garbage-collect) has actually freed.  Possibly a sort of "dry 
>> run" where it performs the GC algorithm, but doesn't release 
>> any memory. 
> 
> "Freed" in what sense? returned to glibc? 

I was referring to glibc malloc/free, but emacs internal 
allocations would also be interesting.  It's a moot point, as I 
don't think emacs supports it.  In short, the question is "what 
has garbage-collect done?"  It prints the state of memory after it 
is finished, but I have no idea if it has actually "collected" 
anything.

>> As far as I understand, garbage collection is supposed to 
>> happen  automatically during idle.  I would certainly notice if 
>> it locked  up the whole instance for 10 minutes from an idle 
>> GC.  I think  this means the automatic garbage collection is 
>> either not  happening, or running on a different thread, or 
>> being interrupted,  or simply works differently.  I have no 
>> idea, hence asking you :) 
> 
> That is very strange.  There's only one function to perform GC, 
> and it is called both from garbage-collect and from an internal 
> function called when Emacs is idle or when it calls interpreter 
> functions like 'eval' or 'funcall'.  The only thing 
> garbage-collect does that the internal function doesn't is 
> generate the list that is the return value of garbage-collect, 
> but that cannot possibly take minutes. 
> 
> I suggest to set garbage-collection-messages non-nil, then you 
> should see when each GC, whether the one you invoke 
> interactively or the automatic one, starts and ends.  maybe the 
> minutes you wait are not directly related to GC, but to 
> something else that is triggered by GC? 

I just set garbage-collection-messages to non-nil and evaluated 
(garbage-collect), and nothing was printed... you are suggesting 
that it should print something to *Messages*, right?

I've never tried emacs's profiler.  I'll try that next time I do a 
big garbage-collect and see what it shows.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 19:15:01 GMT) Full text and rfc822 format available.

Message #512 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 21:13:59 +0200
> Cc: trevor <at> trevorbentley.com, bugs <at> gnu.support, fweimer <at> redhat.com,
>  43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de
> From: Carlos O'Donell <carlos <at> redhat.com>
> Date: Wed, 25 Nov 2020 13:57:34 -0500
> 
> There are 3 malloc_info traces in the log.
> 
> 1. Lines 47-219. Day 1: 1100MiB of RSS.
> 2. Lines 386-556. Day 4: 2.3GiB of RSS.
> 3. Lines 744-792. Day 5: 4.2GiB of RSS.
> 
> Lines are numbered for the log starting at 1.
>  
> > To make sure there are no misunderstandings, I'm talking about this
> > part of the log:
> 
> Your analysis is for trace #2, lines 386-556.
> 
> My analysis was for trace #3, lines 744-792.

OK, thanks for clarifying my confusion.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 19:18:01 GMT) Full text and rfc822 format available.

Message #515 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Carlos O'Donell <carlos <at> redhat.com>, Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, Eli Zaretskii <eliz <at> gnu.org>, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 20:17:27 +0100
Carlos O'Donell <carlos <at> redhat.com> writes:

> On 11/25/20 1:51 PM, Trevor Bentley wrote: 
>> I also still hit it while running under Valgrind; the whole 
>> emacs session was slow as hell, but still managed to blow out 
>> its heap in a few days.  Of course, libmtrace could be 
>> different, but at least it doesn't seem to be a heisenbug. 
> 
> Do you have a valgrind report to share? 

Yes, they were earlier in this bug report, perhaps before you 
joined.  It was the 'massif' heap tracing tool from the valgrind 
suite, not the regular valgrind leak detector.

Here are the links again:

 The raw massif output: 

 http://trevorbentley.com/massif.out.3364630 

 The *full* tree output: 

 http://trevorbentley.com/ms_print.3364630.txt 

 The tree output showing only entries above 10% usage: 

 http://trevorbentley.com/ms_print.thresh10.3364630.txt

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 19:23:02 GMT) Full text and rfc822 format available.

Message #518 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 21:22:02 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: bugs <at> gnu.support, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
>  dj <at> redhat.com, michael_heerdegen <at> web.de, carlos <at> redhat.com
> Cc: 
> Date: Wed, 25 Nov 2020 20:06:21 +0100
> 
> > "Freed" in what sense? returned to glibc? 
> 
> I was referring to glibc malloc/free, but emacs internal 
> allocations would also be interesting.  It's a moot point, as I 
> don't think emacs supports it.  In short, the question is "what 
> has garbage-collect done?"  It prints the state of memory after it 
> is finished, but I have no idea if it has actually "collected" 
> anything.

GC always frees something, don't worry about that.  Your chances of
finding Emacs in a state where it has no garbage to free are nil.

> I just set garbage-collection-messages to non-nil and evaluated 
> (garbage-collect), and nothing was printed...

??? really?  That can only happen if memory-full is non-nil.  Is it?

> you are suggesting that it should print something to *Messages*,
> right?

No, in the echo area.  These messages don't go to *Messages*.

> I've never tried emacs's profiler.  I'll try that next time I do a 
> big garbage-collect and see what it shows.

That won't help in this case: GC is in C, and the profiler doesn't
profile C code that is not exposed to Lisp.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 19:39:02 GMT) Full text and rfc822 format available.

Message #521 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 20:38:38 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

>> you are suggesting that it should print something to 
>> *Messages*, right? 
> 
> No, in the echo area.  these messages don't go to *Messages*. 

Oh!  Well, yes, it is there then.  I didn't realize you can echo 
without going to *Messages*.  It's extremely fleeting... is there 
some way to persist these messages?

>> I've never tried emacs's profiler.  I'll try that next time I 
>> do a  big garbage-collect and see what it shows. 
> 
> That won't help in this case: GC is in C, and the profiler 
> doesn't profile C code that is not exposed to Lisp. 

Ah, ok.  Well, I'll try it anyway, and expect nothing.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 20:03:01 GMT) Full text and rfc822 format available.

Message #524 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 22:02:39 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: bugs <at> gnu.support, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
>  dj <at> redhat.com, michael_heerdegen <at> web.de, carlos <at> redhat.com
> Cc: 
> Date: Wed, 25 Nov 2020 20:38:38 +0100
> 
> Eli Zaretskii <eliz <at> gnu.org> writes:
> 
> >> you are suggesting that it should print something to 
> >> *Messages*, right? 
> > 
> > No, in the echo area.  these messages don't go to *Messages*. 
> 
> Oh!  Well, yes, it is there then.  I didn't realize you can echo 
> without going to *Messages*.  It's extremely fleeting... is there 
> some way to persist these messages?

But if GC is taking minutes, you should be seeing the first of these 2
messages sitting in the echo area for the full duration of those
minutes.  So how can they be so ephemeral in your case?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 20:44:02 GMT) Full text and rfc822 format available.

Message #527 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 21:43:06 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

>> Oh!  Well, yes, it is there then.  I didn't realize you can 
>> echo  without going to *Messages*.  It's extremely 
>> fleeting... is there  some way to persist these messages? 
> 
> But if GC is taking minutes, you should be seeing the first of 
> these 2 messages sitting in the echo area for the full duration 
> of those minutes.  So how can they be so ephemeral in your case? 

Yes, for the long ones I expect to see the message hang in the 
echo area.  I was just hoping to also see when it is GC'ing in 
general (if it is GC'ing at all, since it's behaving so 
weirdly).  A timestamped log of every time garbage-collect runs 
would be great.  Maybe I can do that with "(add-function :around 
...)".

The long garbage-collect doesn't happen until I'm in exploding 
memory mode.  I recently restarted emacs, so right now a GC is 
instantaneous.  I'll let you know how it goes next time the memory 
runs away.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 25 Nov 2020 20:52:01 GMT) Full text and rfc822 format available.

Message #530 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Trevor Bentley <trevor <at> trevorbentley.com>, Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, Eli Zaretskii <eliz <at> gnu.org>,
 michael_heerdegen <at> web.de, dj <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 25 Nov 2020 15:51:16 -0500
On 11/25/20 2:17 PM, Trevor Bentley wrote:
> Carlos O'Donell <carlos <at> redhat.com> writes:
> 
>> On 11/25/20 1:51 PM, Trevor Bentley wrote:
>>> I also still hit it while running under Valgrind; the whole emacs session was slow as hell, but still managed to blow out its heap in a few days.  Of course, libmtrace could be different, but at least it doesn't seem to be a heisenbug. 
>>
>> Do you have a valgrind report to share? 
> 
> Yes, they were earlier in this bug report, perhaps before you joined.  It was the 'massif' heap tracing tool from the valgrind suite, not the regular valgrind leak detector.
> 
> Here are the links again:
> 
>  The raw massif output:
>  http://trevorbentley.com/massif.out.3364630
>  The *full* tree output:
>  http://trevorbentley.com/ms_print.3364630.txt
>  The tree output showing only entries above 10% usage:
>  http://trevorbentley.com/ms_print.thresh10.3364630.txt

This data is pretty clear:

 1.40GiB - lisp_align_malloc (alloc.c:1195)
 1.40GiB - lmalloc (alloc.c:1359)
 0.65GiB - lrealloc (alloc.c:1374)
 0.24GiB - AcquireAlignedMemory (/usr/lib/libMagickCore-7.Q16HDRI.so.7.0.0)
--------
 3.69GiB - In use as of the snapshot.

That's a fairly high fraction of the ~4.2GiB that is eventually in use.

With lisp_align_malloc, lmalloc, and lrealloc shooting up exponentially at the end of the run, it looks like the application is making lists and processing numbers and other objects.

This is a direct expression of something increasing demand for memory.
	
-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 09:10:02 GMT) Full text and rfc822 format available.

Message #533 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 Carlos O'Donell <carlos <at> redhat.com>, trevor <at> trevorbentley.com,
 michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 12:09:32 +0300
Hello Eli,

Here is a short report on the behavior:

Emacs uptime: 2 days, 19 hours, 46 minutes, 49 seconds

I think it was 11:12 in my time zone. I was not doing anything
special, just writing emails and invoking emacsclient. Until then the
swap shown by symon-mode had stayed at about 200 MB; suddenly it grew
to a large number, maybe a few gigabytes, and the hard disk started
working heavily. Everything became very slow, but I could still type.

I tried to invoke M-x good-bye around 11:12, which is when everything
became very slow and the heavy disk activity started. Almost
everything on screen was blocked. Emacs was kind of empty: no menus,
nothing, just a blank black background, no mode line. So I moved it to
another workspace and continued working with zile.

About 36 minutes later it finally wrote this information into the file:

((uptime "2 days, 18 hours, 32 minutes, 32 seconds") (pid 13339) (garbage ((conses 16 4438358 789442) (symbols 48 86924 25) (strings 32 571988 149785) (string-bytes 1 25104928) (vectors 16 245282) (vector-slots 8 4652918 1622184) (floats 8 1860 19097) (intervals 56 645336 37479) (buffers 992 900))) (buffers-size 200839861) (vsize (vsize 5144252)))

A few minutes later I invoked good-bye again:

((uptime "2 days, 18 hours, 35 minutes, 19 seconds") (pid 13339) (garbage ((conses 16 4511014 617524) (symbols 48 86926 23) (strings 32 576134 114546) (string-bytes 1 25198549) (vectors 16 245670) (vector-slots 8 4636183 1560354) (floats 8 1859 18842) (intervals 56 655325 24178) (buffers 992 900))) (buffers-size 200898858) (vsize (vsize 5144252)))

But after 36 minutes of waiting, Emacs became responsive again. So I
am still running this session, and I hope to get an mtrace after the
session has finished.

Previously I was never patient for longer than maybe 3-5 minutes and
aborted Emacs. But now I can see it stabilized after that hard work
with memory, or whatever it was doing. Swap is 1809 MB and vsize is
the same as above.

My observation on "what I was doing when vsize started growing" is
simple: I was just editing email, nothing drastic. I did not do
anything special.

If you say I should end the session now and send the mtrace, I can do
it.

Jean


(defun good-bye ()
  (interactive)
  (let* ((garbage (garbage-collect))
         (size 0)
         (buffers-size (dolist (buffer (buffer-list) size)
                         (setq size (+ size (buffer-size buffer)))))
         (uptime (emacs-uptime))
         (pid (emacs-pid))
         (vsize (vsize-value))
         (file (format "~/tmp/emacs-session-%s.el" pid))
         (list (list (list 'uptime uptime) (list 'pid pid)
                     (list 'garbage garbage) (list 'buffers-size buffers-size)
                     (list 'vsize vsize))))
    (with-temp-file file
      (insert (prin1-to-string list)))
    (message file)))






Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 12:39:02 GMT) Full text and rfc822 format available.

Message #536 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Carlos O'Donell <carlos <at> redhat.com>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 13:37:54 +0100
> You want visibility into what is USING that memory. 
> 
> With glibc-malloc-trace-utils you can try to do that with: 
> 
> LD_PRELOAD=libmtrace.so \ MTRACE_CTL_FILE=/home/user/app.mtr \ 
> MTRACE_CTL_BACKTRACE=1 \ ./app 
> 
> This will use libgcc's unwinder to get a copy of the malloc 
> caller address and then we'll have to decode that based on a 
> /proc/self/maps. 
> 
> Next steps: - Get a glibc-malloc-trace-utils trace of the 
> application ratcheting.  - Get a copy of /proc/$PID/maps for the 
> application (shorter version of smaps). 
> 

Oh, this is going to be a problem.  I guess it is producing one 
trace file per thread?

I ran it with libmtrace overnight.  Memory usage was very high, 
but it doesn't look like the same problem.  I hit 1550MB of RSS, 
but smaps reported only ~350MB of that was in the heap, which 
seemed reasonable for the ~150MB that emacs reported it was using. 
Does libmtrace add a lot of memory overhead?

However, libmtrace has made 4968 files totalling 26GB in that 
time.  Ouch.

It's going to be hard to tell when I hit the bug under libmtrace, 
questionable whether the report will even fit on my disk, and 
tricky to share however many tens of gigabytes of trace files it 
results in.

If it's one trace per thread, though, then we at least know that 
my emacs process in question is blazing through threads.  That 
could be relevant.

Other thing to note (for Eli): I wrapped garbage-collect like so:

---
(defun trev/garbage-collect (orig-fun &rest args)
  (message "%s -- Starting garbage-collect." (current-time-string))
  (let ((time (current-time))
        (result (apply orig-fun args)))
    (message "%s -- Finished garbage-collect in %.06f"
             (current-time-string) (float-time (time-since time)))
    result))

(add-function :around (symbol-function 'garbage-collect)
              #'trev/garbage-collect)
---

This printed a start and stop message each time I evaluated 
garbage-collect manually.  It did not print any messages in 11 
hours of running unattended.  This is with an active network 
connection receiving messages fairly frequently, so there was 
plenty of consing going on.  Hard for me to judge if it should run 
any garbage collection in that time, but I would have expected so.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 13:59:01 GMT) Full text and rfc822 format available.

Message #539 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 15:58:04 +0200
> Cc: Eli Zaretskii <eliz <at> gnu.org>, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
>  dj <at> redhat.com, michael_heerdegen <at> web.de
> From: Carlos O'Donell <carlos <at> redhat.com>
> Date: Wed, 25 Nov 2020 15:51:16 -0500
> 
> >  The raw massif output:
> >  http://trevorbentley.com/massif.out.3364630
> >  The *full* tree output:
> >  http://trevorbentley.com/ms_print.3364630.txt
> >  The tree output showing only entries above 10% usage:
> >  http://trevorbentley.com/ms_print.thresh10.3364630.txt
> 
> This data is pretty clear:
> 
>  1.40GiB - lisp_align_malloc (alloc.c:1195)
>  1.40GiB - lmalloc (alloc.c:1359)
>  0.65GiB - lrealloc (alloc.c:1374)
>  0.24GiB - AcquireAlignedMemory (/usr/lib/libMagickCore-7.Q16HDRI.so.7.0.0)
> --------
>  3.60GiB - In use as of the snapshot.
> 
> That's a fairly high fraction of the ~4.2GiB that is eventually in use.
> 
> lisp_align_malloc, lmalloc, and lrealloc shoot up exponentially at the end of the run, and the call sites look like they are making lists and processing numbers and other objects.
> 
> This is a direct expression of something increasing demand for memory.

So, at least in Trevor's case, it sounds like we sometimes request a
lot of memory during short periods of time.  But what kind of memory
is that?

lmalloc is called by xmalloc, xrealloc, xzalloc, and xpalloc --
functions Emacs calls to get memory unrelated to Lisp data.  But it is
also called by lisp_malloc, which is used to allocate memory for some
Lisp objects.  lisp_align_malloc, OTOH, is used exclusively for
allocating Lisp data (conses, strings, etc.).

It is somewhat strange that lisp_align_malloc and lmalloc were called
to allocate similar amounts of memory: these two functions are
orthogonal, AFAICS, used for disparate groups of Lisp object types,
and it is odd that we somehow allocate very similar amounts of
memory for those data types.

Another observation is that, since GC succeeds in releasing a large
portion of this memory, a significant proportion of the calls is
probably for Lisp data, maybe strings (because GC compacts strings,
which can allow Emacs to release more memory to glibc's heap
allocation machinery).

Apart from that, I think we really need to see the most significant
customers of these functions when the memory footprint starts growing
fast.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 14:15:01 GMT) Full text and rfc822 format available.

Message #542 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 16:13:58 +0200
> Date: Thu, 26 Nov 2020 12:09:32 +0300
> From: Jean Louis <bugs <at> gnu.support>
> Cc: Carlos O'Donell <carlos <at> redhat.com>, trevor <at> trevorbentley.com,
>   fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
>   michael_heerdegen <at> web.de
> 
> ((uptime "2 days, 18 hours, 35 minutes, 19 seconds") (pid 13339) (garbage ((conses 16 4511014 617524) (symbols 48 86926 23) (strings 32 576134 114546) (string-bytes 1 25198549) (vectors 16 245670) (vector-slots 8 4636183 1560354) (floats 8 1859 18842) (intervals 56 655325 24178) (buffers 992 900))) (buffers-size 200898858) (vsize (vsize 5144252)))
> 
> But what happened after 36 minutes of waiting is that Emacs became
> responsive. So I am still running this session and I hope to get
> mtrace after the session has finished.
> 
> Before I was not patient longer than maybe 3-5 minutes and I have
> aborted Emacs. But now I can see it stabilized after hard work with
> memory or whatever it was doing. Swap is 1809 MB and vsize just same
> as above.

It's still 5GB, which is a fairly large footprint, certainly for a
2-day session.

> Observation on "what I was doing when vsize started growing" is
> simple, I was just editing email, nothing drastic. I did not do
> anything special.

Can you describe in more detail how you edit email?  Which email
package(s) do you use, and what does composing email generally
involve?

Also, are there any background activities that routinely run in your
Emacs sessions?

> If you say I should finish session now and send the mtrace, I can do
> it.

That's for Carlos to say.

Thanks for the info.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 14:31:02 GMT) Full text and rfc822 format available.

Message #545 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 16:30:14 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: bugs <at> gnu.support, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
>  dj <at> redhat.com, michael_heerdegen <at> web.de
> Cc: 
> Date: Thu, 26 Nov 2020 13:37:54 +0100
> 
> If it's one trace per thread, though, then we at least know that 
> my emacs process in question is blazing through threads.

I don't see how this could be true, unless some library you use
(ImageMagick?) starts a lot of threads.  Emacs itself is
single-threaded, and the only other threads are those from GTK, which
should be very few (like, 4 or 5).  This assumes you didn't use Lisp
threads, of course.

> Other thing to note (for Eli): I wrapped garbage-collect like so:
> 
> ---
> (defun trev/garbage-collect (orig-fun &rest args)
>   (message "%s -- Starting garbage-collect." (current-time-string))
>   (let ((time (current-time))
>         (result (apply orig-fun args)))
>     (message "%s -- Finished garbage-collect in %.06f"
>              (current-time-string) (float-time (time-since time)))
>     result))
> (add-function :around (symbol-function 'garbage-collect)
>               #'trev/garbage-collect)
> ---
> 
> This printed a start and stop message each time I evaluated 
> garbage-collect manually.  It did not print any messages in 11 
> hours of running unattended.

That's expected, because the automatic GC doesn't call
garbage-collect.  garbage-collect is just a thin wrapper around a C
function, called garbage_collect, and the automatic GC calls that
function directly from C.  And you cannot advise C functions not
exposed to Lisp.

If you want to have a record of the times it took each GC to run, you
will have to modify the C sources.
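
A coarser record is, however, available from pure Lisp: the C-level GC
updates the built-in counters `gcs-done' and `gc-elapsed' even though it
bypasses `garbage-collect'.  A minimal sketch (the trev2/ names are
hypothetical, not from this thread) that polls them on a timer:

```elisp
;; Sketch: poll the counters the C garbage collector maintains.
;; `gcs-done' is the total number of GCs since startup; `gc-elapsed'
;; is the total time (in seconds) spent in GC.
(defvar trev2/last-gcs-done gcs-done
  "Value of `gcs-done' at the previous poll.")

(defun trev2/report-gc-activity ()
  (when (> gcs-done trev2/last-gcs-done)
    (message "%s -- %d GCs total, %.3fs total GC time"
             (current-time-string) gcs-done gc-elapsed)
    (setq trev2/last-gcs-done gcs-done)))

;; Check every 30 seconds.
(run-with-timer 30 30 #'trev2/report-gc-activity)
```

This gives no per-GC timings, but it does show whether the automatic GC
is running at all, which is the question at hand.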




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 15:21:01 GMT) Full text and rfc822 format available.

Message #548 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 16:19:53 +0100
>> If it's one trace per thread, though, then we at least know 
>> that  my emacs process in question is blazing through threads. 
> 
> I don't see how this could be true, unless some library you use 
> (ImageMagick?) starts a lot of threads.  Emacs itself is 
> single-threaded, and the only other threads are those from GTK, 
> which should be very few (like, 4 or 5).  This assumes you 
> didn't use Lisp threads, of course. 

Oh, it may be subprocesses instead of threads.  emacs-slack is 
doing all sorts of things, involving both ImageMagick and 
launching curl subprocesses.  Is there a way to prevent libmtrace 
from following children?

I've just hooked make-process and make-thread, and see both being 
called back-to-back very often for spawning curl subprocesses.
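
(Such a hook can be a one-liner per function; a minimal sketch of the
idea, with hypothetical my/ names. Both primitives are exposed to Lisp,
so, unlike garbage_collect, they can be advised:)

```elisp
;; Sketch: log every call to `make-process' and `make-thread'
;; together with its arguments and a timestamp.
(defun my/log-spawn (fn-name)
  "Return a function that logs a call to FN-NAME with its arguments."
  (lambda (&rest args)
    (message "%s -- %s: %S" (current-time-string) fn-name args)))

(advice-add 'make-process :before (my/log-spawn "make-process"))
(advice-add 'make-thread  :before (my/log-spawn "make-thread"))
```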

>> This printed a start and stop message each time I evaluated 
>> garbage-collect manually.  It did not print any messages in 11 
>> hours of running unattended. 
> 
> That's expected, because the automatic GC doesn't call 
> garbage-collect.  garbage-collect is just a thin wrapper around 
> a C function, called garbage_collect, and the automatic GC calls 
> that function directly from C.  And you cannot advise C 
> functions not exposed to Lisp. 
> 
> If you want to have a record of the times it took each GC to run, 
> you will have to modify the C sources. 

Gotcha.  No surprise, then.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 15:33:02 GMT) Full text and rfc822 format available.

Message #551 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 17:31:37 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: carlos <at> redhat.com, bugs <at> gnu.support, fweimer <at> redhat.com,
>  43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de
> Cc: 
> Date: Thu, 26 Nov 2020 16:19:53 +0100
> 
> I've just hooked make-process and make-thread, and see both being 
> called back-to-back very often for spawning curl subprocesses.

What Lisp commands cause make-thread to be called?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 15:45:01 GMT) Full text and rfc822 format available.

Message #554 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 26 Nov 2020 16:42:19 +0100
On Thu, Sep 17, 2020 at 10:47:04PM +0200, Russell Adams wrote:
> From Emacs memory-usage package:
>
> Garbage collection stats:
> ((conses 16 1912248 251798) (symbols 48 54872 19) (strings 32 327552 81803) (string-bytes 1 12344346) (vectors 16 158994) (vector-slots 8 2973919 339416) (floats 8 992 4604) (intervals 56 182607 7492) (buffers 1000 195))
>
>  =>	29.2MB (+ 3.84MB dead) in conses
> 	2.51MB (+ 0.89kB dead) in symbols
> 	10.00MB (+ 2.50MB dead) in strings
> 	11.8MB in string-bytes
> 	2.43MB in vectors
> 	22.7MB (+ 2.59MB dead) in vector-slots
> 	7.75kB (+ 36.0kB dead) in floats
> 	9.75MB (+  410kB dead) in intervals
> 	 190kB in buffers
>
> Total in lisp objects: 97.9MB (live 88.5MB, dead 9.36MB)

I had the memory leak occur again and this time I had the
glibc-malloc-trace-utils loaded and running from the start.

So my emacs grew to 8GB in RAM, and what was curious is that while it
was a background task (no window focused on an emacsclient), the
memory stayed the same. When the window was focused, I could watch the
memory constantly increasing in htop, a few megs at a time.

Garbage collection stats:
((conses 16 1749077 1176908)
 (symbols 48 47530 38)
 (strings 32 307123 144020)
 (string-bytes 1 10062511)
 (vectors 16 113172)
 (vector-slots 8 2105205 486800)
 (floats 8 709 1719)
 (intervals 56 174593 44804)
 (buffers 1000 71))

 =>	26.7MB (+ 18.0MB dead) in conses
	2.18MB (+ 1.78kB dead) in symbols
	9.37MB (+ 4.40MB dead) in strings
	9.60MB in string-bytes
	1.73MB in vectors
	16.1MB (+ 3.71MB dead) in vector-slots
	5.54kB (+ 13.4kB dead) in floats
	9.32MB (+ 2.39MB dead) in intervals
	69.3kB in buffers

Total in lisp objects:  103MB (live 75.0MB, dead 28.5MB)

Buffer ralloc memory usage:
47 buffers
3.36MB total ( 232kB in gaps)
      Size	Gap	Name

    926626	1504	AIS.org
    690050	1933	Personal.org
    553850	2000	Abuffer.org
    490398	3851	*Packages*
    215653	2000	KB.org
     76686	1708	X230.org
     59841	2123	Agenda.org
     51375	51076	*sly-events for sbcl*
     51060	1902	ASC.org
     44596	2000	Contacts.org
     36825	1792	*Messages*
     23882	2309	*org-caldav-debug*
     22867	2000	rgb.lisp
     14678	746	*sly-mrepl for sbcl*
      6640	1173	VirtualFCMap.lisp
      4096	2000	 *code-converting-work*
      3409	16717	 *http orgmode.org:443*
      1946	104	*Org Agenda*
      1528	2028	 *http gaming.demosthenes.org*-491231
      1524	2028	 *http gaming.demosthenes.org*-15349
      1518	2028	 *http gaming.demosthenes.org*
      1276	1368	*sly-inferior-lisp for sbcl*
      1231	2026	 *http gaming.demosthenes.org*-464306
      1208	825	*Help*
       679	1574	*Buffer Details*
       641	1975	 *Agenda Commands*
       531	1494	*Calendar*
       324	2008	 *http melpa.org:443*
       278	3775	*helm M-x*
       185	1838	*org caldav sync result*
       144	2000	*scratch*
        57	21434	*helm find files*
        44	5610	 *icalendar-work*
        30	2000	 *sly-fontify*
        21	2000	*log-edit-files*
        20	0	 *pdf-info-query--escape*
        18	4077	*helm mini*
        12	8630	 *code-conversion-work*
         5	4065	 *Echo Area 1*
         0	2033	 *Minibuf-1*
         0	20	 *Minibuf-0*
         0	20	 *server*
         0	4060	 *Echo Area 0*
         0	61547	 *sly-1*
         0	20	 *sly-dds-1-1*
         0	20	*changes to ~/ASC/Software/Snaps/*
         0	20	*vc*

I started emacs with:

MTRACE_CTL_FILE=mtraceEMACS.mtr LD_PRELOAD=~/software/glibc-malloc-trace-utils/libmtrace.so ~/.local/bin/emacs --daemon >> ~/.config/emacs/emacs.log 2>&1

This created some huge files. By the time I reached 8GB in RAM, the
mtr file for the main process (I think) was 53 GB. I also have little mtrace
files littered everywhere in different project directories.

-rw-r--r--   1 adamsrl adamsrl  53G Nov 26 13:23 mtraceEMACS.mtr.15236
-rw-r--r--   1 adamsrl adamsrl 4.2G Nov 26 13:36 my.wl
-rw-r--r--   1 adamsrl adamsrl 1.3G Nov 26 13:50 mtraceEMACS.mtr.15236.allocs
-rw-r--r--   1 adamsrl adamsrl  32K Nov 26 13:55 mtraceEMACS.mtr.15236.binnedallocs.log
-rw-r--r--   1 adamsrl adamsrl 6.0G Nov 26 15:12 vmrssout
-rw-r--r--   1 adamsrl adamsrl 6.0G Nov 26 15:12 vmout
-rw-r--r--   1 adamsrl adamsrl 8.6G Nov 26 15:12 idealrssout

I converted the mtraceEMACS.mtr.15236 to my.wl using trace2wl.

The trace_run command did this output:

% ~/software/glibc-malloc-trace-utils/trace_run ./my.wl vmout vmrssout idealrssout
11,757,635,230,744 cycles
4,532,472,554 usec wall time
5,966,752,470 usec across 3 threads
8,461,721,600 bytes Max RSS (218,308,608 -> 8,680,030,208)
Starting VmRSS 218308608 (bytes)
Starting VmSize 219549696 (bytes)
Starting MaxRSS 218308608 (bytes)
Ending VmRSS 8680030208 (bytes)
Ending VmSize 8903626752 (bytes)
Ending MaxRSS 8680030208 (bytes)
8,131,008 Kb Max Ideal RSS

sizeof ticks_t is 8
Avg malloc time:    145 in 422,186,832 calls
Avg calloc time: 12,538 in  1,164,584 calls
Avg realloc time:   566 in  3,294,165 calls
Avg free time:      110 in 449,397,629 calls
Total call time: 127,318,389,383 cycles

These files are impossible to share around; is there anything I can
run to extract anything else useful from them?

% ~/software/glibc-malloc-trace-utils/trace_statistics mtraceEMACS.mtr.15236
Min allocation size: 0
Max allocation size: 1603869
Mean allocation size: 128

I did follow the instructions for downsampling, but I haven't a clue
what to do in Octave. Is it worth posting those files?

I have the impression this is more about how often more RAM was
requested than about the source of the calls?

I should mention I'm present in #emacs and happy to discuss there.

------------------------------------------------------------------
Russell Adams                            RLAdams <at> AdamsInfoServ.com

PGP Key ID:     0x1160DCB3           http://www.adamsinfoserv.com/

Fingerprint:    1723 D8CA 4280 1EC9 557F  66E8 1154 E018 1160 DCB3




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 16:35:02 GMT) Full text and rfc822 format available.

Message #557 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Cc: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 26 Nov 2020 18:34:31 +0200
> Date: Thu, 26 Nov 2020 16:42:19 +0100
> From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> 
> So my emacs grew to 8GB in RAM, and what was curious is if it was a
> background task (not window focused on an emacsclient), then the
> memory stayed the same. When I had the window focused, I could watch
> the memory constantly increasing in htop a few megs at a time.

Was the memory increasing even when you did nothing in the session?
If so, do you have some background functions running, e.g. timers?  If
Emacs was not idle, can you describe what you were doing at that time?

Thanks.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 16:55:01 GMT) Full text and rfc822 format available.

Message #560 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 26 Nov 2020 17:54:36 +0100
On Thu, Nov 26, 2020 at 06:34:31PM +0200, Eli Zaretskii wrote:
> > Date: Thu, 26 Nov 2020 16:42:19 +0100
> > From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> >
> > So my emacs grew to 8GB in RAM, and what was curious is if it was a
> > background task (not window focused on an emacsclient), then the
> > memory stayed the same. When I had the window focused, I could watch
> > the memory constantly increasing in htop a few megs at a time.
>
> Was the memory increasing even when you did nothing in the session?
> If so, do you have some background functions running, e.g. timers?  If
> Emacs was not idle, can you describe what you were doing at that time?

At one point I was watching htop and every time I switched to the
Emacs window and returned to htop, I'd see it grow by several more MB
over 3-5 seconds and then stop. So I left Emacs as the focused window
overnight, and it grew from 4GB to 8GB.

In this instance, I had my cursor at the bottom of a saved Org file. I
wasn't even actively typing or interacting with Emacs. It just grew
each time it got window focus.

Yes, I have a few timers, but those trip at midnight. I call org-agenda
and org-caldav-sync. I don't have any other timers that I know of.

Mind you I'm running daemon mode and I'm looking at an emacsclient
frame.

Thanks.

------------------------------------------------------------------
Russell Adams                            RLAdams <at> AdamsInfoServ.com

PGP Key ID:     0x1160DCB3           http://www.adamsinfoserv.com/

Fingerprint:    1723 D8CA 4280 1EC9 557F  66E8 1154 E018 1160 DCB3




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 19:22:02 GMT) Full text and rfc822 format available.

Message #563 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Cc: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 26 Nov 2020 21:20:42 +0200
> Date: Thu, 26 Nov 2020 17:54:36 +0100
> From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> 
> At one point I was watching htop and every time I switched to the
> Emacs window and returned to htop, I'd see it grow by several more MB
> over 3-5 seconds and then stop. So I left Emacs as the focused window
> overnight, and it grew from 4GB to 8GB.
> 
> In this instance, I had my cursor at the bottom of a saved Org file. I
> wasn't even actively typing or interacting with Emacs. It just grew
> each time it got window focus.

OK, so an idling Emacs with one focused frame gains about 0.5GB every
hour, would that be more or less accurate?

> Yes I have a few timers, but those trip at midnight. I call org-agenda
> and org-caldav-sync. I don't have any other timers that I know of.

Just so we have the hard evidence: could you please show the values of
timer-list and timer-idle-list on that system?

Thanks.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 20:22:01 GMT) Full text and rfc822 format available.

Message #566 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 15:21:04 -0500
On 11/26/20 8:58 AM, Eli Zaretskii wrote:
> Apart of that, I think we really need to see the most significant
> customers of these functions when the memory footprint starts growing
> fast.
 
It's in the massif-captured data.

Of the 1.7GiB it's all in Fcons:

448.2 MiB: Fmake_list
270.3 MiB: in 262 places all over the place (below massif's threshold)
704.0 MiB: list4 -> exec_byte_code
109.7 MiB: F*_json_read_string_0 -> funcall_subr ...
102.2 MiB: Flist -> exec_byte_code ...
 68.5 MiB: Fcopy_alist -> Fframe_parameters ...

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 26 Nov 2020 20:31:02 GMT) Full text and rfc822 format available.

Message #569 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 22:30:14 +0200
> Cc: trevor <at> trevorbentley.com, bugs <at> gnu.support, fweimer <at> redhat.com,
>  43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de
> From: Carlos O'Donell <carlos <at> redhat.com>
> Date: Thu, 26 Nov 2020 15:21:04 -0500
> 
> On 11/26/20 8:58 AM, Eli Zaretskii wrote:
> > Apart of that, I think we really need to see the most significant
> > customers of these functions when the memory footprint starts growing
> > fast.
>  
> It's in the massif-captured data.
> 
> Of the 1.7GiB it's all in Fcons:
> 
> 448.2 MiB: Fmake_list
> 270.3 MiB: in 262 places all over the place (below massif's threshold)
> 704.0 MiB: list4 -> exec_byte_code
> 109.7 MiB: F*_json_read_string_0 -> funcall_subr ...
> 102.2 MiB: Flist -> exec_byte_code ...
>  68.5 MiB: Fcopy_alist -> Fframe_parameters ...

Thanks.  Those are the low-level primitives, they tell nothing about
the Lisp code which caused this much memory allocation.  We need
higher levels of callstack, and preferably in Lisp terms.  GDB
backtraces would show them, due to tailoring in src/.gdbinit.
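
Capturing those could look roughly like the following sketch (assuming
an Emacs binary with debug symbols; xbacktrace is the Lisp-backtrace
command defined in src/.gdbinit in the Emacs tree):

```gdb
# Attach to the live Emacs and print a Lisp-level backtrace at each
# hit of one of the hot allocation functions from the massif data.
# (Breakpointing lisp_align_malloc is slow; use it only for a short
# capture while the footprint is growing.)
gdb -p <emacs-pid>
(gdb) source src/.gdbinit
(gdb) break lisp_align_malloc
(gdb) commands
> silent
> xbacktrace
> continue
> end
(gdb) continue
```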




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 01:02:01 GMT) Full text and rfc822 format available.

Message #572 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 Trevor Bentley <trevor <at> trevorbentley.com>, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 21:25:35 +0300
My mtrace files do not have the PID of the Emacs process. Maybe it got
lost because I killed Emacs. There are many other PID files. Or maybe
the initial PID file was created by the script that ran it.

Should I provide mtrace files which do not have emacs PID?





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 01:02:02 GMT) Full text and rfc822 format available.

Message #575 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 21:37:56 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-26 17:14]:
> > Date: Thu, 26 Nov 2020 12:09:32 +0300
> > From: Jean Louis <bugs <at> gnu.support>
> > Cc: Carlos O'Donell <carlos <at> redhat.com>, trevor <at> trevorbentley.com,
> >   fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
> >   michael_heerdegen <at> web.de
> > 
> > ((uptime "2 days, 18 hours, 35 minutes, 19 seconds") (pid 13339) (garbage ((conses 16 4511014 617524) (symbols 48 86926 23) (strings 32 576134 114546) (string-bytes 1 25198549) (vectors 16 245670) (vector-slots 8 4636183 1560354) (floats 8 1859 18842) (intervals 56 655325 24178) (buffers 992 900))) (buffers-size 200898858) (vsize (vsize 5144252)))
> > 
> > But what happened after 36 minutes of waiting is that Emacs became
> > responsive. So I am still running this session and I hope to get
> > mtrace after the session has finished.
> > 
> > Before I was not patient longer than maybe 3-5 minutes and I have
> > aborted Emacs. But now I can see it stabilized after hard work with
> > memory or whatever it was doing. Swap is 1809 MB and vsize just same
> > as above.
> 
> It's still 5GB, which is a fairly large footprint, certainly for a
> 2-day session.

And this time I could observe how quickly it was reached: from some
200 MB of reported swap it grew to a few gigabytes in a few minutes.

> > Observation on "what I was doing when vsize started growing" is
> > simple, I was just editing email, nothing drastic. I did not do
> > anything special.
> 
> Can you describe in more detail how you edit email?  Which email
> package(s) do you do, and what would composing email generally
> involve?

I was using XTerm invoked from outside with mutt. Mutt invokes
emacsclient, which normally uses the same frame, but sometimes another
frame. The default setting is to use a new frame, but I sometimes
change it to invoke emacsclient without creating a new frame.

There are 2 modules vterm that I load and emacs-libpq for database.

> Also, are there any background activities that routinely run in your
> Emacs sessions?

Jabber doing XMPP (no problems with it before), persistent-scratch,
symon-mode, helm, sql-postgres mode; eshell and shell are always
running.

Timers now:
               5.0s            - undo-auto--boundary-timer
              10.1s        30.0s jabber-whitespace-ping-do
              18.8s      1m 0.0s display-time-event-handler
           4m 49.4s      5m 0.0s persistent-scratch-save
          31m 10.9s   1h 0m 0.0s url-cookie-write-file
   *           0.1s            t show-paren-function
   *           0.5s      :repeat blink-cursor-start
   *           0.5s            t #f(compiled-function () #<bytecode 0x23a02dfeda0a1d> [jit-lock--antiblink-grace-timer jit-lock-context-fontify])
   *           1.0s            - helm-ff--cache-mode-refresh
   *           2.0s            t jabber-activity-clean

> > If you say I should finish session now and send the mtrace, I can do
> > it.
> 
> That's for Carlos to say.
> 
> Thanks for the info.

After some time that session caused much heavier hard-disk swapping,
and I killed Emacs. But I could not find an mtrace file with the
corresponding PID for that Emacs session.

For this session I can see the corresponding PID on the disk. I am now
8 hours into the session. Once it finishes, I hope the mtrace file
will not be deleted even if I kill Emacs.

((uptime "8 hours, 8 minutes, 11 seconds") (pid 7385) (garbage ((conses 16 1032190 170175) (symbols 48 49048 11) (strings 32 252789 45307) (string-bytes 1 8153413) (vectors 16 84232) (vector-slots 8 1713735 81778) (floats 8 690 1822) (intervals 56 68015 4240) (buffers 984 105))) (buffers-size 3632683) (vsize (vsize 1217088)))




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 04:55:02 GMT) Full text and rfc822 format available.

Message #578 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Trevor Bentley <trevor <at> trevorbentley.com>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, michael_heerdegen <at> web.de,
 dj <at> redhat.com, bugs <at> gnu.support
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 23:54:21 -0500
On 11/26/20 10:19 AM, Trevor Bentley wrote:
>>> If it's one trace per thread, though, then we at least know that
>>> my emacs process in question is blazing through threads.
>> 
>> I don't see how this could be true, unless some library you use
>> (ImageMagick?) starts a lot of threads.  Emacs itself is
>> single-threaded, and the only other threads are those from GTK,
>> which should be very few (like, 4 or 5).  This assumes you didn't
>> use Lisp threads, of course.
> 
> Oh, it may be subprocesses instead of threads.  emacs-slack is doing
> all sorts of things, involving both ImageMagick and launching curl
> subprocesses.  Is there a way to prevent libmtrace from following
> children?

Each process generates a trace, and that trace contains the data for
all threads in the process.

I've just pushed MTRACE_CTL_CHILDREN, set that to 0 and the children
will not trace. Thanks for the feedback and enhancement.

commit 8a88a4840b5a573c50264f04f68f71d0496913d3
Author: Carlos O'Donell <carlos <at> redhat.com>
Date:   Thu Nov 26 23:50:57 2020 -0500

    mtrace: Add support for MTRACE_CTL_CHILDREN.
    
    Allow the tracer to only trace the parent process and disable
    tracing in all child processes unless those processes choose
    to programmatically re-enable tracing via the exposed API.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 04:56:02 GMT) Full text and rfc822 format available.

Message #581 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Jean Louis <bugs <at> gnu.support>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
 Trevor Bentley <trevor <at> trevorbentley.com>, dj <at> redhat.com,
 michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 26 Nov 2020 23:55:45 -0500
On 11/26/20 1:25 PM, Jean Louis wrote:
> My mtrace files do not have the PID from Emacs. It got lost maybe
> because I killed Emacs. There are many other PID files. Or maybe
> the initial PID file was created by the script that ran it.
> 
> Should I provide mtrace files which do not have emacs PID?
 
Each PID is from a spawned subprocess.

I've just pushed new code to the tracer to allow you to do:
MTRACE_CTL_CHILDREN=0 to avoid tracing the spawned child
processes.

We would only want the mtrace file for the emacs PID (all
contained threads store to that file).

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 05:06:01 GMT) Full text and rfc822 format available.

Message #584 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Fri, 27 Nov 2020 00:04:56 -0500
On 11/26/20 3:30 PM, Eli Zaretskii wrote:
>> Cc: trevor <at> trevorbentley.com, bugs <at> gnu.support, fweimer <at> redhat.com,
>>  43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de
>> From: Carlos O'Donell <carlos <at> redhat.com>
>> Date: Thu, 26 Nov 2020 15:21:04 -0500
>>
>> On 11/26/20 8:58 AM, Eli Zaretskii wrote:
>>> Apart of that, I think we really need to see the most significant
>>> customers of these functions when the memory footprint starts growing
>>> fast.
>>  
>> It's in the mastiff captured data.
>>
>> Of the 1.7GiB it's all in Fcons:
>>
>> 448.2 MiB: Fmake_list
>> 270.3 MiB: in 262 places all over the place (below massif's threshold)
>> 704.0 MiB: list4 -> exec_byte_code
>> 109.7 MiB: F*_json_read_string_0 -> funcall_subr ...
>> 102.2 MiB: Flist -> exec_byte_code ...
>>  68.5 MiB: Fcopy_alist -> Fframe_parameters ...
> 
> Thanks.  Those are the low-level primitives, they tell nothing about
> the Lisp code which caused this much memory allocation.  We need
> higher levels of callstack, and preferably in Lisp terms.  GDB
> backtraces would show them, due to tailoring in src/.gdbinit.

Sure, let me pick one for you:

lisp_align_malloc (alloc.c:1195)
 Fcons (alloc.c:2694)
  concat (fns.c:730)
   Fcopy_sequence (fns.c:598)
    timer_check (keyboard.c:4395)
     wait_reading_process_output (process.c:5334)
      sit_for (dispnew.c:6056)
       read_char (keyboard.c:2742)
        read_key_sequence (keyboard.c:9551)
         command_loop_1 (keyboard.c:1354)
          internal_condition_case (eval.c:1365)
           command_loop_2 (keyboard.c:1095)
            internal_catch (eval.c:1126)
             command_loop (keyboard.c:1074)
              recursive_edit_1 (keyboard.c:718)
               Frecursive_edit (keyboard.c:790)
                main (emacs.c:2080)
 
There is 171MiB's worth of allocations in that path.

There are a lot of traces ending in wait_reading_process_output that
are consuming 50MiB.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 05:09:01 GMT) Full text and rfc822 format available.

Message #587 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Jean Louis <bugs <at> gnu.support>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, trevor <at> trevorbentley.com,
 dj <at> redhat.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Fri, 27 Nov 2020 00:08:21 -0500
On 11/26/20 1:37 PM, Jean Louis wrote:
> For this session I can see the corresponding PID on the disk. I am now
> 8 hours into the session. Once it finishes, I hope that the mtrace file
> will not be deleted even if I kill Emacs.

Nothing should be deleting the on-disk traces.

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 07:42:02 GMT) Full text and rfc822 format available.

Message #590 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Fri, 27 Nov 2020 09:40:53 +0200
> Cc: trevor <at> trevorbentley.com, bugs <at> gnu.support, fweimer <at> redhat.com,
>  43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de
> From: Carlos O'Donell <carlos <at> redhat.com>
> Date: Fri, 27 Nov 2020 00:04:56 -0500
> 
> >> 448.2 MiB: Fmake_list
> >> 270.3 MiB: in 262 places all over the place (below massif's threshold)
> >> 704.0 MiB: list4 -> exec_byte_code
> >> 109.7 MiB: F*_json_read_string_0 -> funcall_subr ...
> >> 102.2 MiB: Flist -> exec_byte_code ...
> >>  68.5 MiB: Fcopy_alist -> Fframe_parameters ...
> > 
> > Thanks.  Those are the low-level primitives, they tell nothing about
> > the Lisp code which caused this much memory allocation.  We need
> > higher levels of callstack, and preferably in Lisp terms.  GDB
> > backtraces would show them, due to tailoring in src/.gdbinit.
> 
> Sure, let me pick one for you:
> 
> lisp_align_malloc (alloc.c:1195)
>  Fcons (alloc.c:2694)
>   concat (fns.c:730)
>    Fcopy_sequence (fns.c:598)
>     timer_check (keyboard.c:4395)
>      wait_reading_process_output (process.c:5334)
>       sit_for (dispnew.c:6056)
>        read_char (keyboard.c:2742)
>         read_key_sequence (keyboard.c:9551)
>          command_loop_1 (keyboard.c:1354)
>           internal_condition_case (eval.c:1365)
>            command_loop_2 (keyboard.c:1095)
>             internal_catch (eval.c:1126)
>              command_loop (keyboard.c:1074)
>               recursive_edit_1 (keyboard.c:718)
>                Frecursive_edit (keyboard.c:790)
>                 main (emacs.c:2080)
>  
> There is a 171MiB's worth of allocations in that path.
> 
> There are a lot of traces ending in wait_reading_process_output that
> are consuming 50MiB.

Thanks.  If they are like the one above, the allocations are due to
some timer.  Could be jabber, I'll take a look at it.  Or maybe
helm-ff--cache-mode-refresh, whatever that is; need to look at Helm as
well.

However, GDB's backtraces are even more informative, as they show the
Lisp functions invoked in between (via exec_byte_code, funcall_subr,
etc.).  These pinpoint the offending Lisp code much more accurately.
The downside is that stopping Emacs under GDB to emit the backtrace is
no fun...




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 07:53:01 GMT) Full text and rfc822 format available.

Message #593 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: trevor <at> trevorbentley.com
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Fri, 27 Nov 2020 09:52:00 +0200
> Date: Fri, 27 Nov 2020 09:40:53 +0200
> From: Eli Zaretskii <eliz <at> gnu.org>
> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
>  michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
> 
> > Cc: trevor <at> trevorbentley.com, bugs <at> gnu.support, fweimer <at> redhat.com,
> >  43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de
> > From: Carlos O'Donell <carlos <at> redhat.com>
> > Date: Fri, 27 Nov 2020 00:04:56 -0500
> > 
> > >> 448.2 MiB: Fmake_list
> > >> 270.3 MiB: in 262 places all over the place (below massif's threshold)
> > >> 704.0 MiB: list4 -> exec_byte_code
> > >> 109.7 MiB: F*_json_read_string_0 -> funcall_subr ...
> > >> 102.2 MiB: Flist -> exec_byte_code ...
> > >>  68.5 MiB: Fcopy_alist -> Fframe_parameters ...
> > > 
> > > Thanks.  Those are the low-level primitives, they tell nothing about
> > > the Lisp code which caused this much memory allocation.  We need
> > > higher levels of callstack, and preferably in Lisp terms.  GDB
> > > backtraces would show them, due to tailoring in src/.gdbinit.
> > 
> > Sure, let me pick one for you:
> > 
> > lisp_align_malloc (alloc.c:1195)
> >  Fcons (alloc.c:2694)
> >   concat (fns.c:730)
> >    Fcopy_sequence (fns.c:598)
> >     timer_check (keyboard.c:4395)
> >      wait_reading_process_output (process.c:5334)
> >       sit_for (dispnew.c:6056)
> >        read_char (keyboard.c:2742)
> >         read_key_sequence (keyboard.c:9551)
> >          command_loop_1 (keyboard.c:1354)
> >           internal_condition_case (eval.c:1365)
> >            command_loop_2 (keyboard.c:1095)
> >             internal_catch (eval.c:1126)
> >              command_loop (keyboard.c:1074)
> >               recursive_edit_1 (keyboard.c:718)
> >                Frecursive_edit (keyboard.c:790)
> >                 main (emacs.c:2080)
> >  
> > There is a 171MiB's worth of allocations in that path.
> > 
> > There are a lot of traces ending in wait_reading_process_output that
> > are consuming 50MiB.
> 
> Thanks.  If they are like the one above, the allocations are due to
> some timer.  Could be jabber, I'll take a look at it.  Or maybe
> helm-ff--cache-mode-refresh, whatever that is; need to look at Helm as
> well.

Oops, I got this mixed up: the timer list is from Jean, but the massif
files are from Trevor.

Trevor, can you show the list of timers running on your system?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 08:22:02 GMT) Full text and rfc822 format available.

Message #596 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: carlos <at> redhat.com
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Fri, 27 Nov 2020 10:20:46 +0200
> Date: Fri, 27 Nov 2020 09:52:00 +0200
> From: Eli Zaretskii <eliz <at> gnu.org>
> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
>  carlos <at> redhat.com, michael_heerdegen <at> web.de
> 
> > Date: Fri, 27 Nov 2020 09:40:53 +0200
> > From: Eli Zaretskii <eliz <at> gnu.org>
> > Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
> >  michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
> > 
> > > lisp_align_malloc (alloc.c:1195)
> > >  Fcons (alloc.c:2694)
> > >   concat (fns.c:730)
> > >    Fcopy_sequence (fns.c:598)
> > >     timer_check (keyboard.c:4395)
> > >      wait_reading_process_output (process.c:5334)
> > >       sit_for (dispnew.c:6056)
> > >        read_char (keyboard.c:2742)
> > >         read_key_sequence (keyboard.c:9551)
> > >          command_loop_1 (keyboard.c:1354)
> > >           internal_condition_case (eval.c:1365)
> > >            command_loop_2 (keyboard.c:1095)
> > >             internal_catch (eval.c:1126)
> > >              command_loop (keyboard.c:1074)
> > >               recursive_edit_1 (keyboard.c:718)
> > >                Frecursive_edit (keyboard.c:790)
> > >                 main (emacs.c:2080)
> > >  
> > > There is a 171MiB's worth of allocations in that path.
> > > 
> > > There are a lot of traces ending in wait_reading_process_output that
> > > are consuming 50MiB.
> > 
> > Thanks.  If they are like the one above, the allocations are due to
> > some timer.  Could be jabber, I'll take a look at it.  Or maybe
> > helm-ff--cache-mode-refresh, whatever that is; need to look at Helm as
> > well.
> 
> Oops, I got this mixed up: the timer list is from Jean, but the massif
> files are from Trevor.

Double oops: the above just shows that each time we process timers, we
copy the list of the timers first.  Not sure what to do about that.
Hmm...  Maybe we should try GC at the end of each timer_check call?

Is it possible to tell how much time it took to allocate those
171MB via the above chain of calls?  I'm trying to assess the rate of
allocations we request this way.

Each call to lisp_align_malloc above requests a 1008-byte chunk of
memory for a new block of Lisp conses.  Would it benefit us to tune
this value to a larger or smaller size, as far as glibc's malloc is
concerned?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 09:51:02 GMT) Full text and rfc822 format available.

Message #599 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, Trevor Bentley <trevor <at> trevorbentley.com>,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Fri, 27 Nov 2020 11:44:33 +0300
* Carlos O'Donell <carlos <at> redhat.com> [2020-11-27 07:54]:
> Each process generates a trace, and that trace contains the data for
> all threads in the process.
> 
> I've just pushed MTRACE_CTL_CHILDREN, set that to 0 and the children
> will not trace. Thanks for the feedback and enhancement.

Thank you, that is a nice feature; I will use it for the next session.

I have finished one trace and am now compressing it to see whether it can be packed and uploaded.

I will upload it and share the hyperlink with Carlos and Eli in a private email.

Sadly I could not invoke my M-x good-bye function, and this time I also
did not see the swapping problem. The trouble started when I invoked
M-x eww and was browsing: Emacs blocked and I had to interrupt it. In
the end nothing worked and the user interface became unresponsive. I
could not type a key, use the mouse, or do anything. The hard disk was
working, though not much, and the LED was not continuously lit as usual.

I had been doing my usual work, nothing special, just using eww. The
mouse and menu did not work; M-x did not work. Interrupting with ESC
many times or with C-g did not work. It worked once, producing an error
in a process filter, but after that everything was blocked.

My vsize function has been showing me over 4 GB in the minibuffer.
Swap size was under 200 MB this time.

When the condition we are trying to capture occurs, my swap size is
always 2-3 GB minimum, and I have 4 GB of RAM.

I had to invoke xkill to kill Emacs. The hyperlink with the mtrace is
coming as soon as it hopefully gets compressed down.

Thank you,
Jean




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 10:46:02 GMT) Full text and rfc822 format available.

Message #602 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Fri, 27 Nov 2020 11:45:20 +0100
On Thu, Nov 26, 2020 at 09:20:42PM +0200, Eli Zaretskii wrote:
> > Date: Thu, 26 Nov 2020 17:54:36 +0100
> > From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> >
> > At one point I was watching htop and every time I switched to the
> > Emacs window and returned to htop, I'd see it grow by several more MB
> > over 3-5 seconds and then stop. So I left Emacs as the focused window
> > overnight, and it grew from 4GB to 8GB.
> >
> > In this instance, I had my cursor at the bottom of a saved Org file. I
> > wasn't even actively typing or interacting with Emacs. It just grew
> > each time it got window focus.
>
> OK, so an idling Emacs with one focused frame gains about 0.5GB every
> hour, would that be more or less accurate?
>
> > Yes I have a few timers, but those trip at midnight. I call org-agenda
> > and org-caldev-sync. I don't have any other timers that I know of.
>
> Just so we have the hard evidence: could you please show the values of
> timer-list and timer-idle-list on that system?
>
> Thanks.
>

           3.15     1.00 appt-check
           8.38        - undo-auto--boundary-timer
         117.38     5.00 savehist-autosave
        1143.17    60.00 url-cookie-write-file
       44223.15  1440.00 org-save-all-org-buffers
       44283.15  1440.00 org-agenda-list
       44343.15  1440.00 org-caldav-sync
   *       0.00        t show-paren-function
   *       0.50        t #f(compiled-function () #<bytecode 0x1ffd99dba7bf> [jit-lock--antiblink-grace-timer jit-lock-context-fontify])
   *       1.00        - helm-ff--cache-mode-refresh

Unfortunately the Emacs that was 8GB has since been stopped, I killed
it before working with the trace files. My laptop was rebooted later
when the trace statistics utils ate all the RAM (my error, wrong input
file).

This list of timers is from a new instance, but the configuration
hasn't changed.

Are the 50+GB of trace files I have of any value?

------------------------------------------------------------------
Russell Adams                            RLAdams <at> AdamsInfoServ.com

PGP Key ID:     0x1160DCB3           http://www.adamsinfoserv.com/

Fingerprint:    1723 D8CA 4280 1EC9 557F  66E8 1154 E018 1160 DCB3




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 12:39:02 GMT) Full text and rfc822 format available.

Message #605 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Fri, 27 Nov 2020 14:38:07 +0200
> Date: Fri, 27 Nov 2020 11:45:20 +0100
> From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> 
> > > Yes I have a few timers, but those trip at midnight. I call org-agenda
> > > and org-caldev-sync. I don't have any other timers that I know of.
> >
> > Just so we have the hard evidence: could you please show the values of
> > timer-list and timer-idle-list on that system?
> >
> > Thanks.
> >
> 
>            3.15     1.00 appt-check
>            8.38        - undo-auto--boundary-timer
>          117.38     5.00 savehist-autosave
>         1143.17    60.00 url-cookie-write-file
>        44223.15  1440.00 org-save-all-org-buffers
>        44283.15  1440.00 org-agenda-list
>        44343.15  1440.00 org-caldav-sync
>    *       0.00        t show-paren-function
>    *       0.50        t #f(compiled-function () #<bytecode 0x1ffd99dba7bf> [jit-lock--antiblink-grace-timer jit-lock-context-fontify])
>    *       1.00        - helm-ff--cache-mode-refresh

Thanks.

> Unfortunately the Emacs that was 8GB has since been stopped, I killed
> it before working with the trace files. My laptop was rebooted later
> when the trace statistics utils ate all the RAM (my error, wrong input
> file).
> 
> This list of timers is from a new instance, but the configuration
> hasn't changed.
> 
> Are the 50+GB of trace files I have of any value?

I don't think Carlos and others saw your reports, because they were
not CC'ed.  I'm CC'ing them now; please make sure to reply to all of
them next time.

Carlos, please read

  https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#554

for the details posted by Russell about his data points.  If you can
instruct him how to produce some analysis from the mtrace files, or
how to make them available for your analysis, please do.

Thanks.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 27 Nov 2020 15:34:01 GMT) Full text and rfc822 format available.

Message #608 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Carlos O'Donell <carlos <at> redhat.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Fri, 27 Nov 2020 17:33:11 +0200
> Cc: trevor <at> trevorbentley.com, bugs <at> gnu.support, fweimer <at> redhat.com,
>  43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de
> From: Carlos O'Donell <carlos <at> redhat.com>
> Date: Fri, 27 Nov 2020 00:04:56 -0500
> 
> lisp_align_malloc (alloc.c:1195)
>  Fcons (alloc.c:2694)
>   concat (fns.c:730)
>    Fcopy_sequence (fns.c:598)
>     timer_check (keyboard.c:4395)
>      wait_reading_process_output (process.c:5334)
>       sit_for (dispnew.c:6056)
>        read_char (keyboard.c:2742)
>         read_key_sequence (keyboard.c:9551)
>          command_loop_1 (keyboard.c:1354)
>           internal_condition_case (eval.c:1365)
>            command_loop_2 (keyboard.c:1095)
>             internal_catch (eval.c:1126)
>              command_loop (keyboard.c:1074)
>               recursive_edit_1 (keyboard.c:718)
>                Frecursive_edit (keyboard.c:790)
>                 main (emacs.c:2080)
>  
> There is a 171MiB's worth of allocations in that path.

Are there chains of calls that are responsible for more memory
allocated than 171MB?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 28 Nov 2020 09:01:01 GMT) Full text and rfc822 format available.

Message #611 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: carlos <at> redhat.com
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 28 Nov 2020 11:00:17 +0200
> Date: Fri, 27 Nov 2020 10:20:46 +0200
> From: Eli Zaretskii <eliz <at> gnu.org>
> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
>  michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
> 
> > > > lisp_align_malloc (alloc.c:1195)
> > > >  Fcons (alloc.c:2694)
> > > >   concat (fns.c:730)
> > > >    Fcopy_sequence (fns.c:598)
> > > >     timer_check (keyboard.c:4395)
> > > >      wait_reading_process_output (process.c:5334)
> > > >       sit_for (dispnew.c:6056)
> > > >        read_char (keyboard.c:2742)
> > > >         read_key_sequence (keyboard.c:9551)
> > > >          command_loop_1 (keyboard.c:1354)
> > > >           internal_condition_case (eval.c:1365)
> > > >            command_loop_2 (keyboard.c:1095)
> > > >             internal_catch (eval.c:1126)
> > > >              command_loop (keyboard.c:1074)
> > > >               recursive_edit_1 (keyboard.c:718)
> > > >                Frecursive_edit (keyboard.c:790)
> > > >                 main (emacs.c:2080)
> > > >  
> > > > There is a 171MiB's worth of allocations in that path.
> > > > 
> > > > There are a lot of traces ending in wait_reading_process_output that
> > > > are consuming 50MiB.
> > > 
> > > Thanks.  If they are like the one above, the allocations are due to
> > > some timer.  Could be jabber, I'll take a look at it.  Or maybe
> > > helm-ff--cache-mode-refresh, whatever that is; need to look at Helm as
> > > well.
> > 
> > Oops, I got this mixed up: the timer list is from Jean, but the massif
> > files are from Trevor.
> 
> Double oops: the above just shows that each time we process timers, we
> copy the list of the timers first.  Not sure what to do about that.
> Hmm...  Maybe we should try GC at the end of each timer_check call?

This doesn't seem to be necessary: timer functions are called via
'funcall', whose implementation already includes a call to maybe_gc.

Just to see if we have some problem there, I left an otherwise idle
Emacs with 20 timer functions firing every second run overnight.  It
gained less than 1MB of memory footprint after 10 hours.  So timers
alone cannot explain the dramatic increase in memory footprints
described in this bug report, although they might be a contributing
factor when the Emacs process already has lots of memory allocated to
it.

> Each call to lisp_align_malloc above requests a 1008-byte chunk of
> memory for a new block of Lisp conses.

More accurately, malloc is asked to provide a block of memory whose
size is 1024 bytes minus sizeof (void *).




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 28 Nov 2020 10:47:02 GMT) Full text and rfc822 format available.

Message #614 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 28 Nov 2020 13:45:38 +0300
Hello,

My good-bye function this time took about 7 minutes, with swap at
about 650 MB. Swap had constantly been under 200 MB; then, without me
doing anything special (maybe I was idling), it grew to 650 MB. That
is when I invoked the function:
((uptime "8 hours, 56 minutes, 27 seconds") (pid 14637) (garbage ((conses 16 2191203 1613364) (symbols 48 52843 237) (strings 32 301705 122437) (string-bytes 1 9982401) (vectors 16 99828) (vector-slots 8 1856426 1471952) (floats 8 738 5008) (intervals 56 180891 252942) (buffers 984 343))) (buffers-size 38553249) (vsize (vsize 3268444)))

One can see the large vsize of 3.12 GB.

The largest buffer is a PDF of 5394959 bytes, then 4322895, 3706662, and so on.

I have tried deleting some buffers with M-x list-buffers:

- a few of the largest buffers I deleted without problems

- I tried deleting my Org file of size 966405; when I pressed D nothing
  was shown on screen, but the hard disk started working, and by its
  behavior this looks related to memory or swapping

- the screen came back and I could press x to delete those buffers

- even though some buffers were deleted with x, at the next click on
  Size in list-buffers I could again find the deleted buffers in the
  list. This is probably an unrelated bug. I pressed x again and they
  disappeared. But what if they were not really deleted the first time?

I will work a little more in this session and will then provide the
mtrace for pid 14637.

If anything else to be provided let me know.

Jean





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 28 Nov 2020 17:32:01 GMT) Full text and rfc822 format available.

Message #617 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 28 Nov 2020 18:31:47 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:
>>  Thanks.  If they are like the one above, the allocations are 
>> due to some timer.  Could be jabber, I'll take a look at it. 
>> Or maybe helm-ff--cache-mode-refresh, whatever that is; need to 
>> look at Helm as well. 
> 
> Oops, I got this mixed up: the timer list is from Jean, but the 
> massif files are from Trevor. 
> 
> Trevor, can you show the list of timers running on your system? 

I use helm as well, emacs-slack sets a bunch of timers, and I have 
a custom treemacs-based UI for emacs-slack that also refreshes on 
a timer.  A typical timer list looks like this:

(list-timers)
        0.2s            - thread-list--timer-func
        5.0s            - undo-auto--boundary-timer
        5.1s            - slack-ws-ping
        5.1s            - slack-ws-ping
        5.1s            - slack-ws-ping
        5.2s            - slack-ws-ping
        5.2s            - slack-ws-ping
       35.6s      1m 0.0s trev/slack--refresh-cache
  *     0.5s            - #f(compiled-function () #<bytecode 0x1b49fd33ce7c2899> [eldoc-mode global-eldoc-mode eldoc--supported-p (debug error) eldoc-print-current-symbol-info message "eldoc error: %s" nil])
  *     0.5s            t #f(compiled-function () #<bytecode 0xbaac23f6e8899> [jit-lock--antiblink-grace-timer jit-lock-context-fontify])
  *     0.5s      :repeat blink-cursor-start
  *     1.0s            - helm-ff--cache-mode-refresh

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 28 Nov 2020 17:50:01 GMT) Full text and rfc822 format available.

Message #620 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>, carlos <at> redhat.com
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 28 Nov 2020 18:49:37 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

> Just to see if we have some problem there, I left an otherwise 
> idle Emacs with 20 timer functions firing every second run 
> overnight.  It gained less than 1MB of memory footprint after 10 
> hours.  So timers alone cannot explain the dramatic increase in 
> memory footprints described in this bug report, although they 
> might be a contributing factor when the Emacs process already 
> has lots of memory allocated to it. 

Something else worth noting is that I have dozens and dozens of 
emacs processes running at all times, and only graphical X11 
clients have had memory explosion.  Plenty of my `emacs -nw` 
instances have been open for 30+ days with heavy use, and all have 
stayed under 100MB RSS.

The most recent instance I ran is a graphical instance that I 
haven't done anything in except scroll around in a single small 
elisp file.  This one has an interesting difference in memory 
usage: the usage is large (2GB heap), but it isn't growing on its 
own.  It seems to grow by 10-20MB every time it gets X11 window 
focus, and other than that it's stable.  If I alt-tab to it 
continuously, I can force its usage up.  It appears to be 
permanent.  This differs from my emacs-slack instances, which 
constantly grow even when backgrounded.

I have yet another graphical instance that I just opened and 
minimized, and never focus.  It's still only using 70MB after over 
a week.  So at least it's not simply leaking all the time... some 
active use has to trigger it.

I'll have an mtrace for you from the current experiment (X11 focus 
leak) tomorrow or Monday.  I hope it's the same issue.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 28 Nov 2020 19:57:02 GMT) Full text and rfc822 format available.

Message #623 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Sat, 28 Nov 2020 20:56:31 +0100
On Fri, Nov 27, 2020 at 02:38:07PM +0200, Eli Zaretskii wrote:
> > Unfortunately the Emacs that was 8GB has since been stopped, I killed
> > it before working with the trace files. My laptop was rebooted later
> > when the trace statistics utils ate all the RAM (my error, wrong input
> > file).
> >
> > This list of timers is from a new instance, but the configuration
> > hasn't changed.
> >
> > Are the 50+GB of trace files I have of any value?
>
> I don't think Carlos and others saw your reports, because they were
> not CC'ed.  I'm CC'ing them now; please make sure to reply to all of
> them next time.
>
> Carlos, please read
>
>   https://debbugs.gnu.org/cgi/bugreport.cgi?bug=43389#554
>
> for the details posted by Russell about his data points.  If you can
> instruct him how to produce some analysis from the mtrace files, or
> how to make them available for your analysis, please do.

I find particularly of interest the growth of Emacs processes while
idle.

Yesterday I restarted Emacs and over the next 18 hours I left my
laptop idle with Emacs as the focused application. My Emacs has grown
to 3GB and every time I select my Emacs window it will grow by a few
MB while I watch in htop.

I will restart it again tonight and leave it focused, and see if I can
reproduce the growth. It also appears that the growth is not linear:
slower at first and hard to see, but multiple MB at a time later when
the total is in the GB range.

Again I use emacs in daemon mode with one or more emacsclient
processes connected (x11 and terminal). I use StumpWM in full screen
mode with my emacsclient, and if it's focused it seems the growth
continues despite xscreensaver coming on and dimming the screen.

------------------------------------------------------------------
Russell Adams                            RLAdams <at> AdamsInfoServ.com

PGP Key ID:     0x1160DCB3           http://www.adamsinfoserv.com/

Fingerprint:    1723 D8CA 4280 1EC9 557F  66E8 1154 E018 1160 DCB3




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 28 Nov 2020 20:15:02 GMT) Full text and rfc822 format available.

Message #626 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Sat, 28 Nov 2020 22:13:56 +0200
> Date: Sat, 28 Nov 2020 20:56:31 +0100
> From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
> Cc: dj <at> redhat.com, fweimer <at> redhat.com, trevor <at> trevorbentley.com,
> 	michael_heerdegen <at> web.de, carlos <at> redhat.com, 43389 <at> debbugs.gnu.org
> 
> I find particularly of interest the growth of Emacs processes while
> idle.
> 
> Yesterday I restarted Emacs and over the next 18 hours I left my
> laptop idle with Emacs as the focused application. My Emacs has grown
> to 3GB and every time I select my Emacs window it will grow by a few
> MB while I watch in htop.

Is there any way to get a trace/record of X events that are delivered
to Emacs during this kind of idleness?  Those events and the timers
are, I think, the only things going on inside such an idle
session.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 28 Nov 2020 21:53:01 GMT) Full text and rfc822 format available.

Message #629 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: "Basil L. Contovounesios" <contovob <at> tcd.ie>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 Russell Adams <RLAdams <at> AdamsInfoServ.Com>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Sat, 28 Nov 2020 21:52:42 +0000
Eli Zaretskii <eliz <at> gnu.org> writes:

> Is there any way to get a trace/record of X events that are delivered
> to Emacs during this kind of idleness?  Those events and the timers
> are, I think, the only things going on inside such an idle
> session.

What about asynchronous processes, such as url.el retrievals?
(Though I guess those would be accounted for in buffer/GC lists.)

-- 
Basil




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 29 Nov 2020 03:31:02 GMT) Full text and rfc822 format available.

Message #632 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: "Basil L. Contovounesios" <contovob <at> tcd.ie>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Sun, 29 Nov 2020 05:29:43 +0200
> From: "Basil L. Contovounesios" <contovob <at> tcd.ie>
> Cc: Russell Adams <RLAdams <at> AdamsInfoServ.Com>,  fweimer <at> redhat.com,
>   43389 <at> debbugs.gnu.org,  dj <at> redhat.com,  michael_heerdegen <at> web.de,
>   trevor <at> trevorbentley.com,  carlos <at> redhat.com
> Date: Sat, 28 Nov 2020 21:52:42 +0000
> 
> Eli Zaretskii <eliz <at> gnu.org> writes:
> 
> > Is there any way to get a trace/record of X events that are delivered
> > to Emacs during this kind of idleness?  Those events and the timers
> > are, I think, the only things going on inside such an idle
> > session.
> 
> What about asynchronous processes, such as url.el retrievals?

Those should not depend on whether the session is GUI or TTY, nor on
whether an Emacs frame has focus.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 30 Nov 2020 17:18:02 GMT) Full text and rfc822 format available.

Message #635 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>, carlos <at> redhat.com
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 30 Nov 2020 18:17:28 +0100
> I'll have an mtrace for you from the current experiment (X11 
> focus  leak) tomorrow or Monday.  I hope it's the same issue. 

Ok, here is my latest memory log and a matching libmtrace:

https://trevorbentley.com/mtrace3/

This capture is unique in three ways:
1) Compared to my other tests, this one did not run emacs-slack 
and did about half of its leaking from X11 focus events, and the 
other half drifting upwards during idle.  This session has barely 
done anything.

2) I added a custom (malloc-trim) command, and called it after 
making my standard memory log.  At the end of the log, you can 
see that after the trim, memory usage fell from 4GB to 50MB. 
Unfortunately, this malloc_trim() might make the libmtrace trace 
harder to make sense of.  But, at least in this case, it meant 
99% of the memory could be given back to the OS?

3) I ran the built-in emacs profiler.  The profiler memory 
results are in the log, both in normal and reversed format, with 
the largest element expanded.  I don't know how to interpret it, 
but it looks like maybe a periodic timer started by helm is 
responsible for 3+GB of RAM?

Also note that the (garbage-collect) call is timed now.  318 
seconds for this one.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 30 Nov 2020 18:17:01 GMT) Full text and rfc822 format available.

Message #638 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 30 Nov 2020 20:15:54 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support,
>  dj <at> redhat.com, michael_heerdegen <at> web.de
> Cc: 
> Date: Mon, 30 Nov 2020 18:17:28 +0100
> 
>  3) I ran the built-in emacs profiler.  The profiler memory 
>  results are in the log

Thanks, but this doesn't really measure memory usage.  It just uses
malloc calls as a poor man's replacement for the SIGPROF signal, so the
results show a kind of CPU profile, not a memory profile.

>  I don't know how to interpret it, but it looks like maybe a
>  periodic timer started by helm is responsible for 3+GB of RAM?

More like it's responsible for most of the CPU activity.

> Also note that the (garbage-collect) call is timed now.  318 
> seconds for this one.

And the automatic GCs were much faster?

Thanks.  I hope Carlos will be able to give some hints based on your
data.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 30 Nov 2020 18:34:01 GMT) Full text and rfc822 format available.

Message #641 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 30 Nov 2020 19:33:38 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

>> Also note that the (garbage-collect) call is timed now.  318 
>> seconds for this one. 
> 
> And the automatic GCs were much faster? 
> 

Automatic GCs were unnoticeable, as before.  Still not sure what 
that means.  I think I'll instrument it in C to try to figure out 
what is going on.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 30 Nov 2020 19:03:02 GMT) Full text and rfc822 format available.

Message #644 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 30 Nov 2020 21:02:10 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: carlos <at> redhat.com, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
>  bugs <at> gnu.support, dj <at> redhat.com, michael_heerdegen <at> web.de
> Cc: 
> Date: Mon, 30 Nov 2020 19:33:38 +0100
> 
> Eli Zaretskii <eliz <at> gnu.org> writes:
> 
> >> Also note that the (garbage-collect) call is timed now.  318 
> >> seconds for this one. 
> > 
> > And the automatic GCs were much faster? 
> > 
> 
> Automatic GCs were unnoticeable, as before.  Still not sure what 
> that means.  I think I'll instrument it in C to try to figure out 
> what is going on.

I'm stumped by this discrepancy, and feel that I'm missing something
very basic here...




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 01 Dec 2020 09:01:02 GMT) Full text and rfc822 format available.

Message #647 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 Trevor Bentley <trevor <at> trevorbentley.com>, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 30 Nov 2020 22:17:09 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-11-30 22:10]:
> > From: Trevor Bentley <trevor <at> trevorbentley.com>
> > Cc: carlos <at> redhat.com, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
> >  bugs <at> gnu.support, dj <at> redhat.com, michael_heerdegen <at> web.de
> > Cc: 
> > Date: Mon, 30 Nov 2020 19:33:38 +0100
> > 
> > Eli Zaretskii <eliz <at> gnu.org> writes:
> > 
> > >> Also note that the (garbage-collect) call is timed now.  318 
> > >> seconds for this one. 
> > > 
> > > And the automatic GCs were much faster? 
> > > 
> > 
> > Automatic GCs were unnoticeable, as before.  Still not sure what 
> > that means.  I think I'll instrument it in C to try to figure out 
> > what is going on.
> 
> I'm stumped by this discrepancy, and feel that I'm missing something
> very basic here...

This issue on helm is closed but looks very similar to what is
happening here and could maybe give related information:

https://github.com/helm/helm/issues/3121

Other issues related to memory leak at helm:
https://github.com/helm/helm/issues?q=memory+leak




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 01 Dec 2020 10:15:02 GMT) Full text and rfc822 format available.

Message #650 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Jean Louis <bugs <at> gnu.support>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 01 Dec 2020 11:14:46 +0100
Jean Louis <bugs <at> gnu.support> writes:
> 
> This issue on helm is closed but looks very similar to what is 
> happening here and could maybe give related information: 
> 
> https://github.com/helm/helm/issues/3121 
> 
> Other issues related to memory leak at helm: 
> https://github.com/helm/helm/issues?q=memory+leak 

This is a different "helm" project, unrelated to emacs as far as I 
can tell.  The emacs helm is here: 
https://github.com/emacs-helm/helm

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 01 Dec 2020 10:36:02 GMT) Full text and rfc822 format available.

Message #653 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 1 Dec 2020 13:33:37 +0300
* Trevor Bentley <trevor <at> trevorbentley.com> [2020-12-01 13:15]:
> Jean Louis <bugs <at> gnu.support> writes:
> > 
> > This issue on helm is closed but looks very similar to what is happening
> > here and could maybe give related information:
> > 
> > https://github.com/helm/helm/issues/3121
> > 
> > Other issues related to memory leak at helm:
> > https://github.com/helm/helm/issues?q=memory+leak
> 
> This is a different "helm" project, unrelated to emacs as far as I can tell.
> The emacs helm is here: https://github.com/emacs-helm/helm

Ohhh :)




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 01 Dec 2020 16:01:01 GMT) Full text and rfc822 format available.

Message #656 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 01 Dec 2020 18:00:12 +0200
> Date: Mon, 30 Nov 2020 22:17:09 +0300
> From: Jean Louis <bugs <at> gnu.support>
> Cc: Trevor Bentley <trevor <at> trevorbentley.com>, fweimer <at> redhat.com,
>   43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
>   michael_heerdegen <at> web.de
> 
> This issue on helm is closed but looks very similar to what is
> happening here and could maybe give related information:
> 
> https://github.com/helm/helm/issues/3121
> 
> Other issues related to memory leak at helm:
> https://github.com/helm/helm/issues?q=memory+leak

Are these at all relevant?  They are not about Emacs, AFAIU.  There are
many ways to have a leak and run out of memory, most of them unrelated
to what happens in our case.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 01 Dec 2020 16:15:02 GMT) Full text and rfc822 format available.

Message #659 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Andrea Corallo <akrl <at> sdf.org>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, Jean Louis <bugs <at> gnu.support>,
 dj <at> redhat.com, carlos <at> redhat.com, trevor <at> trevorbentley.com,
 michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 01 Dec 2020 16:14:30 +0000
Eli Zaretskii <eliz <at> gnu.org> writes:

>> Date: Mon, 30 Nov 2020 22:17:09 +0300
>> From: Jean Louis <bugs <at> gnu.support>
>> Cc: Trevor Bentley <trevor <at> trevorbentley.com>, fweimer <at> redhat.com,
>>   43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
>>   michael_heerdegen <at> web.de
>> 
>> This issue on helm is closed but looks very similar to what is
>> happening here and could maybe give related information:
>> 
>> https://github.com/helm/helm/issues/3121
>> 
>> Other issues related to memory leak at helm:
>> https://github.com/helm/helm/issues?q=memory+leak
>
> Are these at all relevant? they are not about Emacs, AFAIU.  There are
> many ways to have a leak and run out of memory, most of them unrelated
> to what happens in our case.

That's another helm "The package manager for Kubernetes", not the Elisp
package.

  Andrea




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 03 Dec 2020 07:22:02 GMT) Full text and rfc822 format available.

Message #662 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 3 Dec 2020 09:30:54 +0300
I have finished one Emacs session of over 2 days and 11 hours, with
some differences in my behavior, and I have not observed any problem
with Emacs swapping hard or having memory trouble that impacts my
work. I have also not upgraded from git.

- while I did invoke some Helm commands directly, I did not turn on
  helm-mode; some functions still used Helm indirectly. This is
  because it was said that Helm could be the problem. Without
  helm-mode I have now gone longer, on average, without encountering
  the problem than it previously took to appear.

- I have not installed packages with `helm-system-packages', which I
  often do

- after about 1.5 days, my input method could not be switched back any
  more. C-\ did not work; whatever I did, the input method remained
  active. This may or may not be related, but to me it looks related.

- symon-mode could not be turned off any more. It would say it was
  turned off, but it was not. I think it runs on a timer and something
  happened to it. It also feels related to this problem, though it may
  not be.

Because I was not able to change the input method back to normal, I
had to restart the session.

I have sent one mtrace but received no report on it, so I am not
sending the previous two mtraces from the sessions that had the memory
and swapping problem and that I had to kill. Once they become needed,
I can send them.

I also have an mtrace for this session and will send it when somebody
tells me it is needed.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 08 Dec 2020 01:08:02 GMT) Full text and rfc822 format available.

Message #665 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com, schwab <at> linux-m68k.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 08 Dec 2020 02:07:10 +0100
[Message part 1 (text/plain, inline)]
Michael Heerdegen <michael_heerdegen <at> web.de> writes:

> > Compliance!
> >
> >   (gdb) call (int)malloc_info (0, stdout)
>
> I'm very sorry, but it's gone.

Today, "it" happened again (not sure how many problems we are
discussing here, though).

I had been cleaning my web.de INBOX with Gnus.  Started Gnus, deleted or
moved some messages, shut down, and repeated.  Then I suddenly saw that
our problem was back, Emacs using 6GB or so.  The session is gone now (I
shut it down normally).  I'm sure that at least a significant part of
the problem materialized while using (more or less only) Gnus.

And here is that heap output you wanted:

[heap.txt (text/plain, inline)]
<malloc version="1">
<heap nr="0">
<sizes>
  <size from="657" to="657" total="2628" count="4"/>
  <size from="673" to="673" total="2019" count="3"/>
  <size from="689" to="689" total="689" count="1"/>
  <size from="705" to="705" total="705" count="1"/>
  <size from="721" to="721" total="721" count="1"/>
  <size from="737" to="737" total="1474" count="2"/>
  <size from="753" to="753" total="2259" count="3"/>
  <size from="785" to="785" total="1570" count="2"/>
  <size from="801" to="801" total="801" count="1"/>
  <size from="817" to="817" total="817" count="1"/>
  <size from="833" to="833" total="1666" count="2"/>
  <size from="897" to="897" total="1794" count="2"/>
  <size from="961" to="961" total="961" count="1"/>
  <size from="977" to="977" total="1954" count="2"/>
  <size from="993" to="993" total="993" count="1"/>
  <size from="1182753" to="1182753" total="1182753" count="1"/>
  <unsorted from="527265" to="527265" total="527265" count="1"/>
</sizes>
<total type="fast" count="0" size="0"/>
<total type="rest" count="30" size="1832141"/>
<system type="current" size="7946854400"/>
<system type="max" size="7946854400"/>
<aspace type="total" size="7946854400"/>
<aspace type="mprotect" size="7946854400"/>
</heap>
<heap nr="1">
<sizes>
  <size from="17" to="32" total="32" count="1"/>
  <size from="33" to="48" total="96" count="2"/>
  <size from="65" to="80" total="80" count="1"/>
  <unsorted from="481" to="657" total="1138" count="2"/>
</sizes>
<total type="fast" count="4" size="208"/>
<total type="rest" count="3" size="132722"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
<aspace type="subheaps" size="1"/>
</heap>
<heap nr="2">
<sizes>
  <size from="17" to="32" total="704" count="22"/>
  <size from="33" to="48" total="192" count="4"/>
  <size from="97" to="112" total="112" count="1"/>
</sizes>
<total type="fast" count="27" size="1008"/>
<total type="rest" count="1" size="101424"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
<aspace type="subheaps" size="1"/>
</heap>
<heap nr="3">
<sizes>
  <size from="17" to="32" total="608" count="19"/>
  <size from="33" to="48" total="96" count="2"/>
  <size from="97" to="112" total="112" count="1"/>
  <unsorted from="513" to="513" total="513" count="1"/>
</sizes>
<total type="fast" count="22" size="816"/>
<total type="rest" count="2" size="48289"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
<aspace type="subheaps" size="1"/>
</heap>
<heap nr="4">
<sizes>
</sizes>
<total type="fast" count="0" size="0"/>
<total type="rest" count="1" size="132240"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
<aspace type="subheaps" size="1"/>
</heap>
<total type="fast" count="53" size="2032"/>
<total type="rest" count="37" size="2246816"/>
<total type="mmap" count="11" size="305704960"/>
<system type="current" size="7947395072"/>
<system type="max" size="7947395072"/>
<aspace type="total" size="7947395072"/>
<aspace type="mprotect" size="7947395072"/>
</malloc>
[Message part 3 (text/plain, inline)]

HTH,

Michael.

Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 08 Dec 2020 05:19:02 GMT) Full text and rfc822 format available.

Message #668 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: 43389 <at> debbugs.gnu.org, Eli Zaretskii <eliz <at> gnu.org>, schwab <at> linux-m68k.org,
 RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 8 Dec 2020 08:13:04 +0300
* Michael Heerdegen <michael_heerdegen <at> web.de> [2020-12-08 04:08]:
> Michael Heerdegen <michael_heerdegen <at> web.de> writes:
> 
> > > Compliance!
> > >
> > >   (gdb) call (int)malloc_info (0, stdout)
> >
> > I'm very sorry, but it's gone.
> 
> Today, "it" happened again (not sure how many problems we are
> discussing here, though).
> 
> I had been cleaning my web.de INBOX with Gnus.  Started Gnus, deleted or
> moved some messages, shut down, and repeated.  Then I suddenly saw that
> our problem was back, Emacs using 6GB or so.  The session is gone now (I
> shut it down normally).  I'm sure that at least a significant part of
> the problem materialized while using (more or less only) Gnus.
> 
> And here is that heap output you wanted:

Michael, I have stopped using helm-mode always on; I still use Helm,
but not always on, and I do not query system packages with it. Since
then I have not had the problem of swapping hard at 5 GB and more.

I could observe vsize increasing, as Eli had asked me to watch
for. And I could observe a slowdown, in that typing became harder, but
the hard disk was not thrashing, and I could run garbage-collect
without waiting 40-50 minutes for the function to finish. I have not
updated or changed my Emacs version yet. I have all the mtraces from
when the problem happened, and also from after I stopped using Helm,
and am waiting for the developers to say whether they need them.

Now the question is: do you use Helm with helm-mode always on?

Of course it need not be related. But it is interesting that since I
stopped using it, I have at least not had the swapping problem where
Emacs struggles to get more memory.

I am thinking especially of the Helm function helm-system-packages,
which always takes a long time as it searches through many
packages. It need not be related, but I do remember having memory
problems hours after using that function or turning Helm always
on. Since I stopped using it, I have not yet observed the same
problem; usually it would appear after about one day.






Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 08 Dec 2020 12:09:01 GMT) Full text and rfc822 format available.

Message #671 received at submit <at> debbugs.gnu.org (full text, mbox):

From: "Jose A. Ortega Ruiz" <jao <at> gnu.org>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 08 Dec 2020 03:24:27 +0000
On Tue, Dec 08 2020, Michael Heerdegen wrote:

> shut it down normally).  I'm sure that at least a significant part of
> the problem materialized while using (more or less only) Gnus.

I also have anecdotal evidence of that.  Quite systematically, i start
emacs, things load, i'm around 300Mb of RAM, quite stable.  Then i start
Gnus, read some groups, and, very soon after that, while emacs is
basically idle, i can see RAM increasing by ~10Mb every ~10secs until it
reaches something like 800-900Mb.

I've checked and i think the only timer with a periodicity of 10secs
always present when that happens is undo-auto--boundary-timer.
(Sometimes there's also slack-ws-ping, which checks that a websocket
connection is open, but i think i've seen this behaviour without that
timer on).

I'm sorry i don't have the time to obtain better benchmark data. Just
mentioning the above in case it rings a bell to someone knowledgeable.

Cheers,
jao





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 08 Dec 2020 12:39:01 GMT) Full text and rfc822 format available.

Message #674 received at submit <at> debbugs.gnu.org (full text, mbox):

From: Russell Adams <RLAdams <at> AdamsInfoServ.Com>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 8 Dec 2020 13:37:37 +0100
On Tue, Dec 08, 2020 at 03:24:27AM +0000, Jose A. Ortega Ruiz wrote:
> On Tue, Dec 08 2020, Michael Heerdegen wrote:
>
> > shut it down normally).  I'm sure that at least a significant part of
> > the problem materialized while using (more or less only) Gnus.
>
> I also have anecdotal evidence of that.  Quite systematically, i start
> emacs, things load, i'm around 300Mb of RAM, quite stable.  Then i start
> Gnus, read some groups, and, very soon after that, while emacs is
> basically idle, i can see RAM increasing by ~10Mb every ~10secs until it
> reaches something like 800-900Mb.

I have consistently encountered this memory leak without a clear path
to reproducing it other than regular use over time, and I don't use
Gnus. I read mail in Mutt in another terminal window.

Thus I'm not sure Gnus is the culprit.


------------------------------------------------------------------
Russell Adams                            RLAdams <at> AdamsInfoServ.com

PGP Key ID:     0x1160DCB3           http://www.adamsinfoserv.com/

Fingerprint:    1723 D8CA 4280 1EC9 557F  66E8 1154 E018 1160 DCB3




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 08 Dec 2020 16:31:01 GMT) Full text and rfc822 format available.

Message #677 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Jean Louis <bugs <at> gnu.support>
Cc: 43389 <at> debbugs.gnu.org, Eli Zaretskii <eliz <at> gnu.org>, schwab <at> linux-m68k.org,
 RLAdams <at> AdamsInfoServ.Com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Tue, 08 Dec 2020 17:29:52 +0100
Jean Louis <bugs <at> gnu.support> writes:

> Michael, since I stopped using helm-mode always on, I still use it,
> but not awlays on and I do not query system packages with helm, since
> then I have not get problem of swapping hard with 5 GB and more.

Yesterday it was not swapping yet.  I'm monitoring memory usage with
gkrellm.  When it starts blinking red, which was the case yesterday,
memory starts running out.  It skipped the blinking yellow state, which
means that a lot of memory must have been acquired in a short time
period.

> Now question is, do you use helm with helm mode always on?

I regularly use some Helm commands (e.g. for C-x C-f or M-x) but not
helm-mode.

> I could observe that vsize is increasing as Eli asked me for that. And
> I could observe slow down, like that it slows down being harder to
> type. But hard disk was not working. I could do garbage collect
> without waiting 40-50 minutes for function to finish.

I think we see different symptoms.  I don't see any slow-down at all
(unless swapping starts, obviously).  When I do M-x garbage-collect, it
finishes immediately without freeing a significant amount of memory.

> Of course it need not be related. But it is interesting as since I
> stopped using it at least I did not get swapping problem where Emacs
> tries to get some memory or has troubles with it.
>
> Especially I am thinking of the helm function helm-system-packages
> which always takes longer time as it searches through many
> packages.

I was not using this command.

Maybe our problems have a similar cause, but it seems they are a bit
different.


Regards,

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 08 Dec 2020 21:51:01 GMT) Full text and rfc822 format available.

Message #680 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>, carlos <at> redhat.com
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 08 Dec 2020 22:50:37 +0100
Trevor Bentley <trevor <at> trevorbentley.com> writes:

I'm back with 5 mtraces:

https://trevorbentley.com/mtrace/

Keep in mind that these things compress well, so the largest one 
is on the order of 45GB when decompressed.

These are from various emacs instances, some running the 
emacs-slack package and others just editing elisp code.  All 
inflated to several gigabytes of heap over 1-4 days.

Log files similar to the ones I've been posting in this thread are 
in the archives.  I don't think there's any point in including 
them here anymore, as they're all about the same.

I've been too busy to modify emacs to print garbage collects, but 
these still show really long (garbage-collect) calls, often 
exceeding 15 minutes.

Last thing: I've had one unused (graphical) emacs session running 
for 16 days now, minimized.  It's still at 57MB RSS.  I can 
definitively say that the leak doesn't occur unless emacs is 
actively used, for all the good that does us.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 08 Dec 2020 22:13:01 GMT) Full text and rfc822 format available.

Message #683 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Trevor Bentley <trevor <at> trevorbentley.com>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, michael_heerdegen <at> web.de,
 dj <at> redhat.com, bugs <at> gnu.support
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 8 Dec 2020 17:12:41 -0500
On 12/8/20 4:50 PM, Trevor Bentley wrote:
> Trevor Bentley <trevor <at> trevorbentley.com> writes:
> 
> I'm back with 5 mtraces:
> 
> https://trevorbentley.com/mtrace/
> 
> Keep in mind that these things compress well, so the largest one is on the order of 45GB when decompressed.
> 
> These are from various emacs instances, some running the emacs-slack package and others just editing elisp code.  All inflated to several gigabytes of heap over 1-4 days.
> 
> Log files similar to the ones I've been posting in this thread are in the archives.  I don't think there's any point of including them here anymore, as they're all about the same.
> 
> I've been too busy to modify emacs to print garbage collects, but these still show really long (garbage-collect) calls, often exceeding 15 minutes.
> 
> Last thing: I've had one unused (graphical) emacs session running for 16 days now, minimized.  It's still at 57MB RSS.  I can definitively say that the leak doesn't occur unless emacs is actively used, for all the good that does us.

I'm fetching this trace for analysis:
https://trevorbentley.com/mtrace/mtrace9.tar.bz2

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Tue, 08 Dec 2020 22:16:01 GMT) Full text and rfc822 format available.

Message #686 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Carlos O'Donell <carlos <at> redhat.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Tue, 8 Dec 2020 17:15:29 -0500
On 11/27/20 10:33 AM, Eli Zaretskii wrote:
>> Cc: trevor <at> trevorbentley.com, bugs <at> gnu.support, fweimer <at> redhat.com,
>>  43389 <at> debbugs.gnu.org, dj <at> redhat.com, michael_heerdegen <at> web.de
>> From: Carlos O'Donell <carlos <at> redhat.com>
>> Date: Fri, 27 Nov 2020 00:04:56 -0500
>>
>> lisp_align_malloc (alloc.c:1195)
>>  Fcons (alloc.c:2694)
>>   concat (fns.c:730)
>>    Fcopy_sequence (fns.c:598)
>>     timer_check (keyboard.c:4395)
>>      wait_reading_process_output (process.c:5334)
>>       sit_for (dispnew.c:6056)
>>        read_char (keyboard.c:2742)
>>         read_key_sequence (keyboard.c:9551)
>>          command_loop_1 (keyboard.c:1354)
>>           internal_condition_case (eval.c:1365)
>>            command_loop_2 (keyboard.c:1095)
>>             internal_catch (eval.c:1126)
>>              command_loop (keyboard.c:1074)
>>               recursive_edit_1 (keyboard.c:718)
>>                Frecursive_edit (keyboard.c:790)
>>                 main (emacs.c:2080)
>>  
>> There is a 171MiB's worth of allocations in that path.
> 
> Are there chains of calls that are responsible for more memory
> allocated than 171MB?
 
Yes, you can view them all yourself, just fetch the massif data
and use massif-visualizer to view the data:

http://trevorbentley.com/massif.out.3364630

-- 
Cheers,
Carlos.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 09 Dec 2020 20:09:01 GMT) Full text and rfc822 format available.

Message #689 received at submit <at> debbugs.gnu.org (full text, mbox):

From: "Jose A. Ortega Ruiz" <jao <at> gnu.org>
To: bug-gnu-emacs <at> gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 09 Dec 2020 19:41:36 +0000
On Tue, Dec 08 2020, Russell Adams wrote:

> On Tue, Dec 08, 2020 at 03:24:27AM +0000, Jose A. Ortega Ruiz wrote:
>> On Tue, Dec 08 2020, Michael Heerdegen wrote:
>>
>> > shut it down normally).  I'm sure that at least a significant part of
>> > the problem materialized while using (more or less only) Gnus.
>>
>> I also have anecdotal evidence of that.  Quite systematically, i start
>> emacs, things load, i'm around 300Mb of RAM, quite stable.  Then i start
>> Gnus, read some groups, and, very soon after that, while emacs is
>> basically idle, i can see RAM increasing by ~10Mb every ~10secs until it
>> reaches something like 800-900Mb.
>
> I have consistently encountered this memory leak without a clear path
> to reproducing it other than regular use over time, and I don't use
> Gnus. I read mail in Mutt in another terminal window.
>
> Thus I'm not sure Gnus is the culprit.

Neither am i :) Actually, i just observed the pattern above (RAM going
up by 1Mb/sec bringing total memory from 300Mb to 800Mb, then stopping)
before starting Gnus.  So i guess that, if Gnus plays any role, it must
be indirectly.

jao
-- 
I don't necessarily agree with everything I say.
 -Marshall McLuhan (1911-1980)





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 09 Dec 2020 20:26:01 GMT) Full text and rfc822 format available.

Message #692 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Lars Ingebrigtsen <larsi <at> gnus.org>
To: "Jose A. Ortega Ruiz" <jao <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 09 Dec 2020 21:25:37 +0100
"Jose A. Ortega Ruiz" <jao <at> gnu.org> writes:

> Neither am i :) Actually, i just observed the pattern above (RAM going
> up by 1Mb/sec bringing total memory from 300Mb to 800Mb, then stopping)
> before starting Gnus.  So i guess that, if Gnus plays any role, it must
> be indirectly.

I haven't been following this thread closely, but it strikes me as
puzzling that there's a lot of people seeing these leaks -- and there's
also many people (like me) that don't see these leaks at all.  (And I
have Emacsen running for weeks on end, doing all sorts of odd stuff.)

Has anybody tried compiling a list of features people who see the leaks
are using?  Not that there's really any good way of gathering that data,
but ...  Like, helm is known for using lots of memory, and eww can, too,
under some circumstances, and so can image caching...

-- 
(domestic pets only, the antidote for overdose, milk.)
   bloggy blog: http://lars.ingebrigtsen.no




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 09 Dec 2020 21:06:01 GMT) Full text and rfc822 format available.

Message #695 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: "Jose A. Ortega Ruiz" <jao <at> gnu.org>
To: "Lars Ingebrigtsen" <larsi <at> gnus.org>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Wed, 09 Dec 2020 21:04:58 +0000
On Wed, Dec 09 2020, Lars Ingebrigtsen wrote:
[...]

> Has anybody tried compiling a list of features people who see the leaks
> are using?  Not that there's really any good way of gathering that data,
> but ...  Like, helm is known for using lots of memory, and eww can, too,
> under some circumstances, and so can image caching...

in my case, it's ivy and emacs-w3m.  the first burst i observe is
usually at the beginning, so not many of the myriad other packages i use
have been active at all.  i use exwm, so that's one that's always there
for sure, and ivy takes control immediately, but little else seems
"needed".

regarding images, i use pdf-tools, and it has a heavy memory footprint
(opening any PDF easily increases emacs ram consumption by 200Mb, no
matter how big the PDF). but those jumps are immediate upon opening the
doc.

in my case, another source of puzzlement is this "bursty" behaviour.
after the first one, i can be at ~1Gb for a day or two (doing almost
everything inside emacs, so all kinds of packages used), and then,
without any change in my usage patterns i could tell, a new burst will
take my RAM, 10Mbs at a time, up to ~2Gb.  and then stop, again without
me doing, consciously, anything differently.

jao
-- 
To see ourselves as others see us is a most salutary gift. Hardly less
important is the capacity to see others as they see themselves.
 -Aldous Huxley, novelist (1894-1963)




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 10 Dec 2020 00:52:02 GMT) Full text and rfc822 format available.

Message #698 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Jean Louis <bugs <at> gnu.support>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com, schwab <at> linux-m68k.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 10 Dec 2020 01:50:43 +0100
Michael Heerdegen <michael_heerdegen <at> web.de> writes:

> I think we see different symptoms.  I don't see any slow-down at all
> (unless swapping starts, obviously).  When I do M-x garbage-collect, it
> finishes immediately without freeing any significant amount of memory.

I must correct myself.  While this all was definitely the case the last
time I tried to investigate this issue (one or two months ago) the
garbage-collect statement is not true anymore.  I did M-x
garbage-collect today when the memory was getting short and then Emacs
froze (in the sense of "didn't respond, even to C-g"), without gkrellm
reporting much progress, so I killed it (after 20 seconds or so - aeons
for a computer).

I did not experience a slowdown, however (maybe I have faster RAM?).

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 10 Dec 2020 08:33:01 GMT) Full text and rfc822 format available.

Message #701 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: 43389 <at> debbugs.gnu.org, RLAdams <at> AdamsInfoServ.Com, schwab <at> linux-m68k.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Thu, 10 Dec 2020 08:43:56 +0300
* Michael Heerdegen <michael_heerdegen <at> web.de> [2020-12-10 03:51]:
> Michael Heerdegen <michael_heerdegen <at> web.de> writes:
> 
> > I think we see different symptoms.  I don't see any slow-down at all
> > (unless swapping starts, obviously).  When I do M-x garbage-collect, it
> > finishes immediately without freeing any significant amount of memory.
> 
> I must correct myself.  While this all was definitely the case the last
> time I tried to investigate this issue (one or two months ago) the
> garbage-collect statement is not true anymore.  I did M-x
> garbage-collect today when the memory was getting short and then Emacs
> froze (in the sense of "didn't respond, even to C-g"), without gkrellm
> reporting much progress, so I killed it (after 20 seconds or so - aeons
> for a computer).
> 
> I did not experience a slowdown, however (maybe I have faster RAM?).

One time I waited for 36 minutes and it completed the garbage collection.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 10 Dec 2020 18:47:02 GMT) Full text and rfc822 format available.

Message #704 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>,
 Stefan Monnier <monnier <at> iro.umontreal.ca>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 10 Dec 2020 20:45:40 +0200
Stefan, please help with this complex issue (or maybe several
issues).  We have collected some evidence in this bug report, but I
don't yet see where this is going, or how to make any real progress
here.

One thing that I cannot explain is this:

> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support,
>  dj <at> redhat.com, michael_heerdegen <at> web.de
> Cc: 
> Date: Tue, 08 Dec 2020 22:50:37 +0100
> 
> I've been too busy to modify emacs to print garbage collects, but 
> these still show really long (garbage-collect) calls, often 
> exceeding 15 minutes.

Trevor reported several times that automatic GC is fast as usual, but
manual invocations of "M-x garbage-collect" take much longer, many
minutes.  I don't understand how this could happen, because both
methods of invoking GC do exactly the same job.

I thought about possible ways of explaining the stark differences in
the time it takes to GC, and came up with these:

 . The depth of the run-time (C-level) stack.  If this is much deeper
   in one of the cases, it could explain the longer time.  But in that
   case, I'd expect the automatic GC to take longer, because typically
   the C stack is shallower when Emacs is idle than when it
   runs some Lisp.  This contradicts Trevor's observations.

 . Some difference in buffers and strings, which causes the manual GC
   to relocate and compact a lot of them.  But again: (a) why the
   automatic GC never hits the same condition, and (b) I can explain
   the reverse more easily, i.e. that lots of temporary strings and buffers
   exist while Lisp runs, but not when Emacs is idle.

Any other ideas?  Any data Trevor could provide, e.g. by attaching a
debugger during these prolonged GC, and telling us something
interesting?

TIA




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 10 Dec 2020 19:22:01 GMT) Full text and rfc822 format available.

Message #707 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Stefan Monnier <monnier <at> iro.umontreal.ca>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, Trevor Bentley <trevor <at> trevorbentley.com>,
 michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 10 Dec 2020 14:21:16 -0500
> Trevor reported several times that automatic GC is fast as usual, but
> manual invocations of "M-x garbage-collect" take much longer, many
> minutes.  I don't understand how this could happen, because both
> methods of invoking GC do exactly the same job.

Indeed, that makes no sense.  The only thing that comes to mind is that
when they do `M-x garbage-collect` the 15 minutes aren't actually spent
in the GC but in some pre/post command hook or something like that
(e.g. in `execute-extended-command--shorter`)?

Do we have a `profiler-report` available for those 15 minutes?
I've taken a quick look at the massive threads in that bug report,
but haven't had the time to read in detail.  AFAICT we don't have a
profiler output for those 15 minutes, so it would be good to try:

    M-x profiler-start RET RET
    M-x garbage-collect RET     ;; This should presumably take several minutes
    M-x profiler-report RET

and then shows us this report (using C-u RET on the top-level elements
to unfold them).
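
For convenience, the three steps could also be wrapped into one command, so
the report is captured even if the session becomes sluggish afterwards (a
minimal sketch; the function name is made up):

```elisp
;; Hypothetical helper combining profiler-start, garbage-collect and
;; profiler-report into a single interactive command.
(defun my/profiled-garbage-collect ()
  "Run a manual `garbage-collect' under the CPU profiler, then show the report."
  (interactive)
  (profiler-start 'cpu)
  (unwind-protect
      (garbage-collect)          ; presumably takes several minutes here
    (profiler-stop)
    (profiler-report)))
```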


        Stefan





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 10 Dec 2020 19:34:02 GMT) Full text and rfc822 format available.

Message #710 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Stefan Monnier <monnier <at> iro.umontreal.ca>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 10 Dec 2020 20:33:20 +0100
Stefan Monnier <monnier <at> iro.umontreal.ca> writes:

> Do we have a `profiler-report` available for those 15 minutes? 
> I've taken a quick look at the massive threads in that bug 
> report, but haven't had the time to read in detail.  AFAICT we 
> don't have a profiler output for those 15minutes, so it would be 
> good to try: 
> 
>     M-x profiler-start RET RET
>     M-x garbage-collect RET     ;; This should presumably take several minutes
>     M-x profiler-report RET
> 
> and then shows us this report (using C-u RET on the top-level 
> elements to unfold them). 

I made a profiler report for a complete 1-2 day session (see 
the e-mail referencing "mtrace3"), but none for just garbage 
collection.  I'll do that for the next one.

Is there any easy way to check if any of my packages are adding 
extra hooks around garbage-collect?  I can't imagine why they 
would, but you never know.

Thanks

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 10 Dec 2020 19:49:01 GMT) Full text and rfc822 format available.

Message #713 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Stefan Monnier <monnier <at> iro.umontreal.ca>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 10 Dec 2020 14:47:51 -0500
> Is there any easy way to check if any of my packages are adding extra hooks
> around garbage-collect?  I can't imagine why they would, but you never know.

I think there can be so many hooks involved that the profiler is the
only good way to figure that out.


        Stefan





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 10 Dec 2020 21:46:02 GMT) Full text and rfc822 format available.

Message #716 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 Trevor Bentley <trevor <at> trevorbentley.com>, michael_heerdegen <at> web.de,
 Stefan Monnier <monnier <at> iro.umontreal.ca>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 10 Dec 2020 23:24:04 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-12-10 21:47]:
> Trevor reported several times that automatic GC is fast as usual, but
> manual invocations of "M-x garbage-collect" take much longer, many
> minutes.  I don't understand how this could happen, because both
> methods of invoking GC do exactly the same job.

Sometimes 30-36 minutes.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 10 Dec 2020 21:46:02 GMT) Full text and rfc822 format available.

Message #719 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Stefan Monnier <monnier <at> iro.umontreal.ca>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 Trevor Bentley <trevor <at> trevorbentley.com>, michael_heerdegen <at> web.de,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 10 Dec 2020 23:26:24 +0300
* Stefan Monnier <monnier <at> iro.umontreal.ca> [2020-12-10 22:21]:
>     M-x profiler-start RET RET
>     M-x garbage-collect RET     ;; This should presumably take several minutes
>     M-x profiler-report RET

I will try with a function doing all three together.





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Thu, 10 Dec 2020 21:46:02 GMT) Full text and rfc822 format available.

Message #722 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Stefan Monnier <monnier <at> iro.umontreal.ca>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 Trevor Bentley <trevor <at> trevorbentley.com>, michael_heerdegen <at> web.de,
 Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Thu, 10 Dec 2020 23:30:05 +0300
* Stefan Monnier <monnier <at> iro.umontreal.ca> [2020-12-10 22:21]:
> > Trevor reported several times that automatic GC is fast as usual, but
> > manual invocations of "M-x garbage-collect" take much longer, many
> > minutes.  I don't understand how this could happen, because both
> > methods of invoking GC do exactly the same job.
> 
> Indeed, that makes no sense.  The only thing that comes to mind is that
> when they do `M-x garbage-collect` the 15 minutes aren't actually spent
> in the GC but in some pre/post command hook or something like that
> (e.g. in `execute-extended-command--shorter`)?
> 
> Do we have a `profiler-report` available for those 15 minutes?
> I've taken a quick look at the massive threads in that bug report,
> but haven't had the time to read in detail.  AFAICT we don't have a
> profiler output for those 15minutes, so it would be good to try:
> 
>     M-x profiler-start RET RET
>     M-x garbage-collect RET     ;; This should presumably take several minutes
>     M-x profiler-report RET

Another issue: since I started using LD_PRELOAD with the gmalloc
trace, I have not encountered high swapping or Emacs becoming totally
unusable. And I have not upgraded Emacs; I changed basically nothing
except using the mtrace.

What I can still observe is that vsize grows high as usual. But I have
not observed swap usage growing high, or the hard disk working for 40
minutes or longer (maybe indefinitely) trying to find swap memory.






Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Fri, 11 Dec 2020 13:57:01 GMT) Full text and rfc822 format available.

Message #725 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Lars Ingebrigtsen <larsi <at> gnus.org>
To: "Jose A. Ortega Ruiz" <jao <at> gnu.org>
Cc: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks
Date: Fri, 11 Dec 2020 14:55:49 +0100
As previously briefly discussed, I wondered whether adding a command
that shows "large" buffers and variables would tell us anything
interesting in these cases, and I've now implemented that.

`M-x memory-report'

on the current trunk.  It may or may not tell us something
interesting -- please give it a whirl and report back.

-- 
(domestic pets only, the antidote for overdose, milk.)
   bloggy blog: http://lars.ingebrigtsen.no





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 12 Dec 2020 01:31:01 GMT) Full text and rfc822 format available.

Message #728 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Jean Louis <bugs <at> gnu.support>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 carlos <at> redhat.com, Trevor Bentley <trevor <at> trevorbentley.com>,
 michael_heerdegen <at> web.de, Stefan Monnier <monnier <at> iro.umontreal.ca>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 12 Dec 2020 04:28:26 +0300
* Eli Zaretskii <eliz <at> gnu.org> [2020-12-10 21:46]:
> Stefan, please help with this complex issue (or maybe several
> issues).  We have collected some evidence in this bug report, but I
> don't yet see where this is going, or how to make any real progress
> here.
> 
> One thing that I cannot explain is this:
> 
> > From: Trevor Bentley <trevor <at> trevorbentley.com>
> > Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support,
> >  dj <at> redhat.com, michael_heerdegen <at> web.de
> > Cc: 
> > Date: Tue, 08 Dec 2020 22:50:37 +0100
> > 
> > I've been too busy to modify emacs to print garbage collects, but 
> > these still show really long (garbage-collect) calls, often 
> > exceeding 15 minutes.
> 
> Trevor reported several times that automatic GC is fast as usual, but
> manual invocations of "M-x garbage-collect" take much longer, many
> minutes.  I don't understand how this could happen, because both
> methods of invoking GC do exactly the same job.

My observation over time is that running M-x garbage-collect created
the same effect as when Emacs spontaneously starts doing something
with the hard disk and continues for an unpredictable number of
minutes, normally until I kill it. I have waited 10-20 minutes. So
that could be where the problem lies.

Something happens inside Emacs: the automatic garbage collection is
invoked and cannot finish its job any time soon.

About two times I invoked garbage-collect manually and it caused
visually the same behavior. I hope this explanation is
understandable.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 12 Dec 2020 08:51:02 GMT) Full text and rfc822 format available.

Message #731 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Andreas Schwab <schwab <at> linux-m68k.org>
To: Jean Louis <bugs <at> gnu.support>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, carlos <at> redhat.com,
 Trevor Bentley <trevor <at> trevorbentley.com>, michael_heerdegen <at> web.de,
 Stefan Monnier <monnier <at> iro.umontreal.ca>, Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 12 Dec 2020 09:49:56 +0100
On Dez 12 2020, Jean Louis wrote:

> My observation over time is that running M-x garbage-collect
> created the same effect just as when I observed that Emacs starts
> doing something with hard disk and continues so for unpredicted number
> of minutes.

This is totally expected.  When you are tight on memory, rummaging
through all of it can only make things worse.

Andreas.

-- 
Andreas Schwab, schwab <at> linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510  2552 DF73 E780 A9DA AEC1
"And now for something completely different."




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 12 Dec 2020 11:22:02 GMT) Full text and rfc822 format available.

Message #734 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Stefan Monnier <monnier <at> iro.umontreal.ca>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 12 Dec 2020 12:20:57 +0100
Stefan Monnier <monnier <at> iro.umontreal.ca> writes:

> Do we have a `profiler-report` available for those 15 minutes? 
> I've taken a quick look at the massive threads in that bug 
> report, but haven't had the time to read in detail.  AFAICT we 
> don't have a profiler output for those 15minutes, so it would be 
> good to try: 
> 
>     M-x profiler-start RET RET
>     M-x garbage-collect RET     ;; This should presumably take several minutes
>     M-x profiler-report RET
> 
> and then shows us this report (using C-u RET on the top-level 
> elements to unfold them). 

I'm back with a new mtrace, a profile of the long garbage-collect, 
and a new discovery.

First of all, the 26GB mtrace of a session that exploded to over 
8GB is available in mtrace12.tar.bz2 here:

https://trevorbentley.com/mtrace/

The summary log is in mtrace12_log.txt in the same directory, 
including output of profiler-report for only the duration of the 
garbage-collect, which took a record 50 minutes to complete.

As you can see in the profiler log, it is, in fact, the C 
garbage_collect() function eating all of the time:

----
;;(profiler-report) - ...
901307  99%  Automatic GC
901281  99%  + trev/slack--refresh-cache
    19   0%

Not only that, but I added printfs in emacs itself around the 
garbage_collect() and gc_sweep() functions.  Each line prints the 
unix timestamp when it began, and the 'end' lines print the 
duration since the start.  You can see that the entire 50 minutes 
was spent in gc_sweep():

----
1607695679: garbage_collect start
1607695680: gc_sweep start
1607695680: gc_sweep end (0 s)
1607695680: garbage_collect #1085 end (1 s)
1607695761: garbage_collect start
1607695762: gc_sweep start
1607695762: gc_sweep end (0 s)
1607726912: garbage_collect start
1607726913: gc_sweep start
1607729921: gc_sweep end (3008 s)
1607729922: garbage_collect #1086 end (3010 s)
----

And finally, here's what I find very suspicious: it was nearly 9 
hours since the last garbage collect ran (1607726912 - 
1607695762).  This is an instance that I used all day long, 
flitting back and forth between it and other work.  It had both 
tons of interactive use, and tons of idle time.  I don't think 9 
hours between garbage collects sounds right.
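
A lighter-weight way to confirm such gaps, without patching the C sources,
might be to log from `post-gc-hook` using the built-in counters `gcs-done`
and `gc-elapsed` (a sketch; the log format is made up, and functions on this
hook should do as little work as possible):

```elisp
;; Record a timestamp and the cumulative GC counters after each GC.
(defun my/log-gc ()
  (let ((inhibit-message t))    ; log to *Messages* without echo-area noise
    (message "[%s] GC #%d done, %.3f s total in GC"
             (format-time-string "%F %T") gcs-done gc-elapsed)))
(add-hook 'post-gc-hook #'my/log-gc)
```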

The last garbage collect before the long manual one also never 
printed an end message, which is confusing.  I see no early 
returns in garbage_collect()... is there some macro that can 
trigger a return, or maybe something uses longjmp?

Thanks,

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 12 Dec 2020 18:33:02 GMT) Full text and rfc822 format available.

Message #737 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, monnier <at> iro.umontreal.ca
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 12 Dec 2020 13:40:38 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: carlos <at> redhat.com, fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org,
>  bugs <at> gnu.support, dj <at> redhat.com, michael_heerdegen <at> web.de
> Cc: 
> Date: Sat, 12 Dec 2020 12:20:57 +0100
> 
> Not only that, but I added printfs in emacs itself around the 
> garbage_collect() and gc_sweep() functions.  Each line prints the 
> unix timestamp when it began, and the 'end' lines print the 
> duration since the start.  You can see that the entire 50 minutes 
> was spent in gc_sweep():

I think this is expected if you have a lot of objects to sweep.

> And finally, here's what I find very suspicious: it was nearly 9 
> hours since the last garbage collect ran (1607726912 - 
> 1607695762).  This is an instance that I used all day long, 
> flittering back and forth between it and other work.  It had both 
> tons of interactive use, and tons of idle time.  I don't think 9 
> hours between garbage collects sounds right.

It isn't.  So it is now important to find out why this happens.  Could
it be that some of your packages play with the value of GC threshold?

> The last garbage collect before the long manual one also never 
> printed an end message, which is confusing.  I see no early 
> returns in garbage_collect()... is there some macro that can 
> trigger a return, or maybe something uses longjmp?

Not that I know of, no.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 12 Dec 2020 19:15:02 GMT) Full text and rfc822 format available.

Message #740 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Stefan Monnier <monnier <at> iro.umontreal.ca>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, Trevor Bentley <trevor <at> trevorbentley.com>,
 michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 12 Dec 2020 14:14:39 -0500
>> Not only that, but I added printfs in emacs itself around the 
>> garbage_collect() and gc_sweep() functions.  Each line prints the 
>> unix timestamp when it began, and the 'end' lines print the 
>> duration since the start.  You can see that the entire 50 minutes 
>> was spent in gc_sweep():
>
> I think this is expected if you have a lot of objects to sweep.

Actually, I'm surprised most of the time is spent in gc_sweep:
mark_object is usually where most of the time is spent, so this suggests
that the total heap size is *much* larger than the amount of live objects.


        Stefan





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 12 Dec 2020 19:22:01 GMT) Full text and rfc822 format available.

Message #743 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Stefan Monnier <monnier <at> iro.umontreal.ca>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 12 Dec 2020 21:20:52 +0200
> From: Stefan Monnier <monnier <at> iro.umontreal.ca>
> Cc: Trevor Bentley <trevor <at> trevorbentley.com>,  carlos <at> redhat.com,
>   fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org,  bugs <at> gnu.support,
>   dj <at> redhat.com,  michael_heerdegen <at> web.de
> Date: Sat, 12 Dec 2020 14:14:39 -0500
> 
> >> Not only that, but I added printfs in emacs itself around the 
> >> garbage_collect() and gc_sweep() functions.  Each line prints the 
> >> unix timestamp when it began, and the 'end' lines print the 
> >> duration since the start.  You can see that the entire 50 minutes 
> >> was spent in gc_sweep():
> >
> > I think this is expected if you have a lot of objects to sweep.
> 
> Actually, I'm surprised most of the time is spent in gc_sweep:
> mark_object is usually where most of the time is spent, so this suggests
> that the total heap size is *much* larger than the amount of live objects.

Sure.  But isn't that the same as what I said, just from another POV?
"A lot of objects to sweep" means there are many objects that aren't
live and need to have their memory freed.

Since GC wasn't run for many hours, having a lot of garbage to collect
is expected, right?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 12 Dec 2020 19:47:02 GMT) Full text and rfc822 format available.

Message #746 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Stefan Monnier <monnier <at> iro.umontreal.ca>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 12 Dec 2020 14:46:20 -0500
> Sure.  But isn't that the same as what I said, just from another POV?
> "A lot of objects to sweep" means there are many objects that aren't
> live and need to have their memory freed.
>
> Since GC wasn't run for many hours, having a lot of garbage to collect
> is expected, right?

Could be, but for tens of minutes?

AFAIK gc_sweep shouldn't cause too much thrashing either (the sweep is
a mostly sequential scan of memory, so even if the total heap is larger
than your total RAM, it should be ~O(total heap size / bandwidth from
swap partition)), so I can't imagine how we could spend tens of minutes
doing gc_sweep (or maybe the time is spent in gc_sweep but doing
something else than the sweep itself, e.g. handling weak pointers, or
removing dead markers from marker lists, ... still seems hard to
imagine spending tens of minutes, tho).


        Stefan





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 12 Dec 2020 19:52:01 GMT) Full text and rfc822 format available.

Message #749 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Stefan Monnier <monnier <at> iro.umontreal.ca>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, trevor <at> trevorbentley.com, michael_heerdegen <at> web.de
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 12 Dec 2020 21:51:15 +0200
> From: Stefan Monnier <monnier <at> iro.umontreal.ca>
> Cc: trevor <at> trevorbentley.com,  carlos <at> redhat.com,  fweimer <at> redhat.com,
>   43389 <at> debbugs.gnu.org,  bugs <at> gnu.support,  dj <at> redhat.com,
>   michael_heerdegen <at> web.de
> Date: Sat, 12 Dec 2020 14:46:20 -0500
> 
> > Sure.  But isn't that the same as what I said, just from another POV?
> > "A lot of objects to sweep" means there are many objects that aren't
> > live and need to have their memory freed.
> >
> > Since GC wasn't run for many hours, having a lot of garbage to collect
> > is expected, right?
> 
> Could be, but for tens of minutes?

If the system is paging, it could take that long, yes.

> AFAIK gc_sweep shouldn't cause too much thrashing either (the sweep is
> a mostly sequential scan of memory, so even if the total heap is larger
> than your total RAM, it should be ~O(total heap size / bandwidth from
> swap partition)), so I can't imagine how we could spend tens of minutes
> doing gc_sweep (or maybe the time is spent in gc_sweep but doing
> something else than the sweep itself, e.g. handling weak pointers, or
> removing dead markers from marker lists, ... still seems hard to
> imagine spending tens of minutes, tho).

Does gc_sweep involve touching all the memory we free?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 12 Dec 2020 20:15:02 GMT) Full text and rfc822 format available.

Message #752 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>, Stefan Monnier <monnier <at> iro.umontreal.ca>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, michael_heerdegen <at> web.de, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 12 Dec 2020 21:14:05 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

>> Could be, but for tens of minutes? 
> 
> If the system is paging, it could take that long, yes. 
> 
>> AFAIK gc_sweep shouldn't cause too much thrashing either (the 
>> sweep is a mostly sequential scan of memory, so even if the 
>> total heap is larger than your total RAM, it should be ~O(total 
>> heap size / bandwidth from 

In my particular case, I have plenty of free memory.  I assume 
nothing is paging to disk in any of my reports, though I haven't 
thought to explicitly check.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 12 Dec 2020 22:18:02 GMT) Full text and rfc822 format available.

Message #755 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Michael Heerdegen <michael_heerdegen <at> web.de>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 carlos <at> redhat.com, Trevor Bentley <trevor <at> trevorbentley.com>,
 monnier <at> iro.umontreal.ca
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sat, 12 Dec 2020 23:16:46 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

> Could it be that some of your packages play with the value of GC
> threshold?

Dunno if it matters, but `gnus-registry-save' binds it temporarily to a
high value, and I once experienced memory growing substantially while
using Gnus.

Michael.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 13 Dec 2020 03:35:02 GMT) Full text and rfc822 format available.

Message #758 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 carlos <at> redhat.com, trevor <at> trevorbentley.com, monnier <at> iro.umontreal.ca
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sun, 13 Dec 2020 05:34:35 +0200
> From: Michael Heerdegen <michael_heerdegen <at> web.de>
> Cc: Trevor Bentley <trevor <at> trevorbentley.com>,  monnier <at> iro.umontreal.ca,
>   carlos <at> redhat.com,  fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org,
>   bugs <at> gnu.support,  dj <at> redhat.com
> Date: Sat, 12 Dec 2020 23:16:46 +0100
> 
> Eli Zaretskii <eliz <at> gnu.org> writes:
> 
> > Could it be that some of your packages play with the value of GC
> > threshold?
> 
> Dunno if it matters, but `gnus-registry-save' binds it temporarily to a
> high value

I'd prefer very much that our core code never did that.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 13 Dec 2020 10:21:01 GMT) Full text and rfc822 format available.

Message #761 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>, Michael Heerdegen <michael_heerdegen <at> web.de>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com,
 carlos <at> redhat.com, monnier <at> iro.umontreal.ca, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sun, 13 Dec 2020 11:20:32 +0100
>> Dunno if it matters, but `gnus-registry-save' binds it 
>> temporarily to a high value 
> 
> I'd prefer very much that our core code never did that. 

I'm not sure what that is, but I'm not calling it directly, and 
probably not indirectly either.  Not doing any mail reading in the 
instances that are inflating.

I print the gc variables in each of my log analyses, and they have 
always been the same: the default.

I have one instance running that has clearly hit the problem. 
garbage_collect() never printed its "end" message, and there have 
been no further garbage collects in nearly 20 hours:

----
1607783297: garbage_collect start
1607783297: gc_sweep start
1607783297: gc_sweep end (0 s)
----

Right now, I'm leaning towards this being the root cause. 
Something is causing a garbage collect to crash or hang or 
otherwise exit in some unknown way, and automatic garbage 
collection gets disabled until I manually retrigger it.

Garbage collect never runs on other threads/forks, right?  If it 
were hung forever inside garbage_collect(), I would expect the 
whole window to be frozen, but it is not.

I'll add more printfs in garbage_collect() and try to figure out 
where it is exiting.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 13 Dec 2020 15:32:02 GMT) Full text and rfc822 format available.

Message #764 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, monnier <at> iro.umontreal.ca
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sun, 13 Dec 2020 17:30:42 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: monnier <at> iro.umontreal.ca, carlos <at> redhat.com, fweimer <at> redhat.com,
>  43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com
> Cc: 
> Date: Sun, 13 Dec 2020 11:20:32 +0100
> 
> I have one instance running that has clearly hit the problem. 
> garbage_collect() never printed its "end" message, and there have 
> been no further garbage collects in nearly 20 hours:
> 
> ----
> 1607783297: garbage_collect start
> 1607783297: gc_sweep start
> 1607783297: gc_sweep end (0 s)
> ----
> 
> Right now, I'm leaning towards this being the root cause. 
> Something is causing a garbage collect to crash or hang or 
> otherwise exit in some unknown way, and automatic garbage 
> collection gets disabled until I manually retrigger it.
> 
> Garbage collect never runs on other threads/forks, right?

If you use packages or commands that create Lisp threads, I think GC
can run from any of these Lisp threads.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 13 Dec 2020 19:35:02 GMT) Full text and rfc822 format available.

Message #767 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, monnier <at> iro.umontreal.ca, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sun, 13 Dec 2020 20:34:11 +0100
Eli Zaretskii <eliz <at> gnu.org> writes:

>> Garbage collect never runs on other threads/forks, right? 
> 
> If you use packages or commands that create Lisp threads, I 
> think GC can run from any of these Lisp threads. 

Hmm, that makes it trickier.  No clue if my default packages 
launch threads, but it's possible.

I just hit the bug in one of my sessions: the call to 
unblock_input() in garbage_collect() never returns.  But the 
session still completely works, so I'm not really sure what's 
going on here.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 13 Dec 2020 19:40:02 GMT) Full text and rfc822 format available.

Message #770 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, monnier <at> iro.umontreal.ca
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sun, 13 Dec 2020 21:38:46 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: michael_heerdegen <at> web.de, monnier <at> iro.umontreal.ca, carlos <at> redhat.com,
>  fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com
> Cc: 
> Date: Sun, 13 Dec 2020 20:34:11 +0100
> 
> >> Garbage collect never runs on other threads/forks, right? 
> > 
> > If you use packages or commands that create Lisp threads, I 
> > think GC can run from any of these Lisp threads. 
> 
> Hmm, that makes it trickier.  No clue if my default packages 
> launch threads, but it's possible.

Grep them for make-thread.

> I just hit the bug in one of my sessions: the call to 
> unblock_input() in garbage_collect() never returns.

If that ran in a thread, perhaps the thread died.

> But the session still completely works, so I'm not really sure
> what's going on here.

As long as the main thread runs, you might indeed see nothing special.




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 13 Dec 2020 20:00:02 GMT) Full text and rfc822 format available.

Message #773 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, monnier <at> iro.umontreal.ca, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sun, 13 Dec 2020 20:59:34 +0100
>> Hmm, that makes it trickier.  No clue if my default packages 
>> launch threads, but it's possible. 
> 
> Grep them for make-thread. 
> 
>> I just hit the bug in one of my sessions: the call to 
>> unblock_input() in garbage_collect() never returns. 
> 
> If that ran in a thread, perhaps the thread died. 
> 
>> But the session still completely works, so I'm not really sure 
>> what's going on here. 
> 
> As long as the main thread runs, you might indeed see nothing 
> special. 

This was exactly my thought: a thread I'm not even aware of must 
be silently crashing and leaving GC in a bad state.

But there's only a single case of 'make-thread' in my ~/.emacs.d/, 
and it's extremely unlikely that function ever runs 
("lsp-download-install").

More importantly, I'm comparing (list-threads) in emacs and "info 
threads" in gdb, and the failed instance looks identical to the 
non-failed instances: a single emacs thread ("Main"), and three 
real threads ("emacs", "gmain", "gdbus").  garbage_collect() not 
present in any backtrace when interrupted.

I'm at a loss for how it teleported out of that garbage_collect() 
call.  Back to printf, I guess.  Maybe there was a short-lived 
thread that isn't normally running...

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 13 Dec 2020 20:22:02 GMT) Full text and rfc822 format available.

Message #776 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, monnier <at> iro.umontreal.ca
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sun, 13 Dec 2020 22:21:00 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: michael_heerdegen <at> web.de, monnier <at> iro.umontreal.ca, carlos <at> redhat.com,
>  fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com
> Cc: 
> Date: Sun, 13 Dec 2020 20:59:34 +0100
> 
> > As long as the main thread runs, you might indeed see nothing 
> > special. 
> 
> This was exactly my thought: a thread I'm not even aware of must 
> be silently crashing and leaving GC in a bad state.
> 
> But there's only a single case of 'make-thread' in my ~/.emacs.d/, 
> and it's extremely unlikely that function ever runs 
> ("lsp-download-install").
> 
> More importantly, I'm comparing (list-threads) in emacs and "info 
> threads" in gdb, and the failed instance looks identical to the 
> non-failed instances: a single emacs thread ("Main"), and three 
> real threads ("emacs", "gmain", "gdbus").  garbage_collect() not 
> present in any backtrace when interrupted.
> 
> I'm at a loss for how it teleported out of that garbage_collect() 
> call.  Back to printf, I guess.  Maybe there was a short-lived 
> thread that isn't normally running...

Does thread-last-error return something non-nil?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 13 Dec 2020 20:42:02 GMT) Full text and rfc822 format available.

Message #779 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, monnier <at> iro.umontreal.ca, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Sun, 13 Dec 2020 21:41:40 +0100
> Does thread-last-error return something non-nil? 

Nope, nil in all instances, including the one in a weird state.

I'm running one instance with printfs in some of the 
unblock_input() functions, and one in gdb with breakpoints on 
Fmake_thread, pthread_create, and emacs_abort.  If you have other 
suggested probe points, I'm happy to test.

Opening 10 emacses at a time seems to be going better for 
reproducing.  Sometimes it triggers in an hour, sometimes in 3 
days, but if I just flood the system with emacs processes I tend 
to hit it within a day.

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 14 Dec 2020 03:26:02 GMT) Full text and rfc822 format available.

Message #782 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, monnier <at> iro.umontreal.ca
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 14 Dec 2020 05:24:43 +0200
> From: Trevor Bentley <trevor <at> trevorbentley.com>
> Cc: michael_heerdegen <at> web.de, monnier <at> iro.umontreal.ca, carlos <at> redhat.com,
>  fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, bugs <at> gnu.support, dj <at> redhat.com
> Cc: 
> Date: Sun, 13 Dec 2020 21:41:40 +0100
> 
> > Does thread-last-error return something non-nil? 
> 
> Nope, nil in all instances, including the one in a weird state.

Then it's unlikely that a thread died an unnatural death.

> I'm running one instance with printfs in some of the 
> unblock_input() functions, and one in gdb with breakpoints on 
> Fmake_thread, pthread_create, and emacs_abort.  If you have other 
> suggested probe points, I'm happy to test.

A breakpoint in watch_gc_cons_percentage, perhaps, to see if and when
the threshold gets changed?




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Mon, 14 Dec 2020 21:26:02 GMT) Full text and rfc822 format available.

Message #785 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, monnier <at> iro.umontreal.ca, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Mon, 14 Dec 2020 22:24:56 +0100
>> > Does thread-last-error return something non-nil?  
>>  Nope, nil in all instances, including the one in a weird state. 
> 
> Then it's unlikely that a thread died an unnatural death. 
> 

No, sure doesn't seem like it.  Just hit it in an instance with 
more printfs, and it looks like it leaps right out of some 
sub-call of process_pending_signals(), continuing to run elsewhere 
without finishing garbage_collect().  To me, that means exactly 
one thing: longjmp.

If something manages to longjmp out of garbage_collect() at that 
point, it leaves with consing_until_gc set to HI_THRESHOLD.  This 
must explain why automatic GC stops running for hours or days, but 
manual GCs still work.

I tried setting a breakpoint in longjmp, but it's called 3 times 
for every keypress!  That's inconvenient.  Running one single 
instance now with a conditional breakpoint on longjmp: it will 
break if longjmp is called while it's in unblock_input().

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 20 Jan 2021 12:03:02 GMT) Full text and rfc822 format available.

Message #788 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, monnier <at> iro.umontreal.ca, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 20 Jan 2021 13:02:23 +0100
> I tried setting a breakpoint in longjmp, but it's called 3 times 
> for every keypress!  That's inconvenient.  Running one single 
> instance now with a conditional breakpoint on longjmp: it will 
> break if longjmp is called while it's in unblock_input(). 

I disappeared for ages because... the problem disappeared.  I went 
a month without reproducing it, despite putting a hold on 
upgrading both system and emacs packages while debugging.  Very 
odd.

But today it appeared again.  And, for the first time, in a gdb 
session with breakpoints to confirm my theory.  I believe I've 
found the underlying issue.

If you have a look at this long backtrace, you can see that we are 
inside a garbage_collect call (frame #38).  An X11 focus event 
comes in, triggering a bunch of GTK/GDK/X calls.  Mysteriously, 
this leads to a maybe_quit() call which in turn calls longjmp(). 
longjmp jumps right out of the garbage collect, leaving it 
unfinished.

Internally in garbage_collect, consing_until_gc was set to the 
HI_THRESHOLD upper-bound.  It is left that way when longjmp leaps 
out of it, and no automatic garbage collect is ever performed 
again.  This is the start of the ballooning memory.

This also explains why my minimized emacs session never hits it 
and my work sessions hit it very often, and less often on 
weekends.  It's triggered by focus events.  I flitter around 
between windows constantly while working.

I don't know emacs internals, so you'll have to figure out if this 
is X dependent (probably) and/or GTK dependent.  It should be 
possible to come up with an easier way to reproduce it now.

Backtrace:
-----------
(gdb) bt
#0  0x00007ffff5571230 in siglongjmp () at /usr/lib/libc.so.6
#1  0x00005555557bd38d in unwind_to_catch (catch=0x555555dfc320, type=NONLOCAL_EXIT_THROW, value=0x30) at eval.c:1181
#2  0x00005555557bd427 in Fthrow (tag=0xe75830, value=0x30) at eval.c:1198
#3  0x00005555557bdea7 in process_quit_flag () at eval.c:1526
#4  0x00005555557bdeef in maybe_quit () at eval.c:1547
#5  0x00005555557cbbb1 in Fassq (key=0xd0b0, alist=0x55555901c573) at fns.c:1609
#6  0x0000555555632b63 in window_parameter (w=0x555555f2d088, parameter=0xd0b0) at window.c:2262
#7  0x000055555563a075 in window_wants_tab_line (w=0x555555f2d088) at window.c:5410
#8  0x00005555555c22b1 in get_phys_cursor_geometry (w=0x555555f2d088, row=0x55555d9f3ef0, glyph=0x55555fd20e00, xp=0x7fffffff9c48, yp=0x7fffffff9c4c, heightp=0x7fffffff9c50) at xdisp.c:2650
#9  0x00005555556c1b12 in x_draw_hollow_cursor (w=0x555555f2d088, row=0x55555d9f3ef0) at xterm.c:9495
#10 0x00005555556c24f9 in x_draw_window_cursor (w=0x555555f2d088, glyph_row=0x55555d9f3ef0, x=32, y=678, cursor_type=HOLLOW_BOX_CURSOR, cursor_width=1, on_p=true, active_p=false) at xterm.c:9682
#11 0x000055555561a922 in display_and_set_cursor (w=0x555555f2d088, on=true, hpos=2, vpos=18, x=32, y=678) at xdisp.c:31738
#12 0x000055555561aa5b in update_window_cursor (w=0x555555f2d088, on=true) at xdisp.c:31773
#13 0x000055555561aabf in update_cursor_in_window_tree (w=0x555555f2d088, on_p=true) at xdisp.c:31791
#14 0x000055555561aaab in update_cursor_in_window_tree (w=0x55555907a490, on_p=true) at xdisp.c:31789
#15 0x000055555561aaab in update_cursor_in_window_tree (w=0x55555a514b68, on_p=true) at xdisp.c:31789
#16 0x000055555561ab37 in gui_update_cursor (f=0x555556625468, on_p=true) at xdisp.c:31805
#17 0x00005555556b9829 in x_frame_unhighlight (f=0x555556625468) at xterm.c:4490
#18 0x00005555556ba22d in x_frame_rehighlight (dpyinfo=0x55555626d6c0) at xterm.c:4852
#19 0x00005555556b98fc in x_new_focus_frame (dpyinfo=0x55555626d6c0, frame=0x0) at xterm.c:4520
#20 0x00005555556b9a3d in x_focus_changed (type=10, state=2, dpyinfo=0x55555626d6c0, frame=0x555556625468, bufp=0x7fffffffa0d0) at xterm.c:4554
#21 0x00005555556ba0a6 in x_detect_focus_change (dpyinfo=0x55555626d6c0, frame=0x555556625468, event=0x7fffffffa840, bufp=0x7fffffffa0d0) at xterm.c:4787
#22 0x00005555556c0235 in handle_one_xevent (dpyinfo=0x55555626d6c0, event=0x7fffffffa840, finish=0x555555c901d4 <current_finish>, hold_quit=0x7fffffffab50) at xterm.c:8810
#23 0x00005555556bde28 in event_handler_gdk (gxev=0x7fffffffa840, ev=0x55555cccf0c0, data=0x0) at xterm.c:7768
#24 0x00007ffff75f780f in  () at /usr/lib/libgdk-3.so.0
#25 0x00007ffff75fb3cb in  () at /usr/lib/libgdk-3.so.0
#26 0x00007ffff759f15b in gdk_display_get_event () at /usr/lib/libgdk-3.so.0
#27 0x00007ffff75fb104 in  () at /usr/lib/libgdk-3.so.0
#28 0x00007ffff6fcb8f4 in g_main_context_dispatch () at /usr/lib/libglib-2.0.so.0
#29 0x00007ffff701f821 in  () at /usr/lib/libglib-2.0.so.0
#30 0x00007ffff6fca121 in g_main_context_iteration () at /usr/lib/libglib-2.0.so.0
#31 0x00007ffff784e2c7 in gtk_main_iteration () at /usr/lib/libgtk-3.so.0
#32 0x00005555556c1821 in XTread_socket (terminal=0x5555560b7460, hold_quit=0x7fffffffab50) at xterm.c:9395
#33 0x000055555570f3a2 in gobble_input () at keyboard.c:6890
#34 0x000055555570f894 in handle_async_input () at keyboard.c:7121
#35 0x000055555570f8dd in process_pending_signals () at keyboard.c:7139
#36 0x000055555570f9cf in unblock_input_to (level=0) at keyboard.c:7162
#37 0x000055555570fa4c in unblock_input () at keyboard.c:7187
#38 0x000055555578f49a in garbage_collect () at alloc.c:6121
#39 0x000055555578efe7 in maybe_garbage_collect () at alloc.c:5964
#40 0x00005555557bb292 in maybe_gc () at lisp.h:5041
#41 0x00005555557c12d6 in Ffuncall (nargs=2, args=0x7fffffffad68) at eval.c:2793
#42 0x000055555580f7d6 in exec_byte_code
...
--------------

For breakpoints, I am doing the following:

1) make a global static variable in alloc.c:
static int enable_gc_trace = 0;

2) in garbage_collect(), 'enable_gc_trace++' when it starts and 
'enable_gc_trace--' when it ends.  I just wrapped the call to 
unblock_input(), but you could widen that window.

3) run in gdb with conditional breakpoints on GC and longjmp 
functions:
b siglongjmp if enable_gc_trace > 0
b internal_catch if enable_gc_trace > 0
b internal_catch_all if enable_gc_trace > 0
b maybe_garbage_collect if enable_gc_trace > 0

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 20 Jan 2021 12:09:02 GMT) Full text and rfc822 format available.

Message #791 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, monnier <at> iro.umontreal.ca, 
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 20 Jan 2021 13:08:44 +0100
I'm incompetent at formatting e-mails.  Have a link to the 
backtrace instead:

https://trevorbentley.com/mtrace/backtrace.txt

-Trevor




Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 20 Jan 2021 14:54:02 GMT) Full text and rfc822 format available.

Message #794 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Stefan Monnier <monnier <at> iro.umontreal.ca>
To: Trevor Bentley <trevor <at> trevorbentley.com>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, Eli Zaretskii <eliz <at> gnu.org>
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 20 Jan 2021 09:53:08 -0500
> If you have a look at this long backtrace, you can see that we are inside
> a garbage_collect call (frame #38).  An X11 focus event comes in, triggering
> a bunch of GTK/GDK/X calls.  Mysteriously, this leads to a maybe_quit() call
> which in turn calls longjmp(). longjmp jumps right out of the garbage
> collect, leaving it unfinished.

Indeed, thanks!

> I don't know emacs internals, so you'll have to figure out if this is
> X dependent (probably) and/or GTK dependent.  It should be possible to come
> up with an easier way to reproduce it now.

The backtrace is clear enough, no need to reproduce it.

The GC properly speaking is actually finished at that point, BTW
(luckily: I think you'd have seen worse outcomes if that weren't the
case ;-).

I installed the simple patch below into `master`.  It should fix the
immediate problem of failing to set consing_until_gc back to a sane
value, and also the other immediate problem of getting to
`siglongjmp` from `unblock_input` via `window_parameter`.

Eli, do you think it should go to `emacs-27`?

> Backtrace:
> -----------
> (gdb) bt
> #0  0x00007ffff5571230 in siglongjmp () at /usr/lib/libc.so.6
> #1  0x00005555557bd38d in unwind_to_catch (catch=0x555555dfc320, type=NONLOCAL_EXIT_THROW, value=0x30) at eval.c:1181
> #2  0x00005555557bd427 in Fthrow (tag=0xe75830, value=0x30) at eval.c:1198
> #3  0x00005555557bdea7 in process_quit_flag () at eval.c:1526
> #4  0x00005555557bdeef in maybe_quit () at eval.c:1547
> #5  0x00005555557cbbb1 in Fassq (key=0xd0b0, alist=0x55555901c573) at fns.c:1609
> #6 0x0000555555632b63 in window_parameter (w=0x555555f2d088, parameter=0xd0b0) at window.c:2262
> #7 0x000055555563a075 in window_wants_tab_line (w=0x555555f2d088) at window.c:5410
> #8 0x00005555555c22b1 in get_phys_cursor_geometry (w=0x555555f2d088, row=0x55555d9f3ef0, glyph=0x55555fd20e00, xp=0x7fffffff9c48, yp=0x7fffffff9c4c, heightp=0x7fffffff9c50) at xdisp.c:2650
> #9 0x00005555556c1b12 in x_draw_hollow_cursor (w=0x555555f2d088, row=0x55555d9f3ef0) at xterm.c:9495
> #10 0x00005555556c24f9 in x_draw_window_cursor (w=0x555555f2d088, glyph_row=0x55555d9f3ef0, x=32, y=678, cursor_type=HOLLOW_BOX_CURSOR, cursor_width=1, on_p=true, active_p=false) at xterm.c:9682
> #11 0x000055555561a922 in display_and_set_cursor (w=0x555555f2d088, on=true, hpos=2, vpos=18, x=32, y=678) at xdisp.c:31738
> #12 0x000055555561aa5b in update_window_cursor (w=0x555555f2d088, on=true) at xdisp.c:31773
> #13 0x000055555561aabf in update_cursor_in_window_tree (w=0x555555f2d088, on_p=true) at xdisp.c:31791
> #14 0x000055555561aaab in update_cursor_in_window_tree (w=0x55555907a490, on_p=true) at xdisp.c:31789
> #15 0x000055555561aaab in update_cursor_in_window_tree (w=0x55555a514b68, on_p=true) at xdisp.c:31789
> #16 0x000055555561ab37 in gui_update_cursor (f=0x555556625468, on_p=true) at xdisp.c:31805
> #17 0x00005555556b9829 in x_frame_unhighlight (f=0x555556625468) at xterm.c:4490
> #18 0x00005555556ba22d in x_frame_rehighlight (dpyinfo=0x55555626d6c0) at xterm.c:4852
> #19 0x00005555556b98fc in x_new_focus_frame (dpyinfo=0x55555626d6c0, frame=0x0) at xterm.c:4520
> #20 0x00005555556b9a3d in x_focus_changed (type=10, state=2, dpyinfo=0x55555626d6c0, frame=0x555556625468, bufp=0x7fffffffa0d0) at xterm.c:4554
> #21 0x00005555556ba0a6 in x_detect_focus_change (dpyinfo=0x55555626d6c0, frame=0x555556625468, event=0x7fffffffa840, bufp=0x7fffffffa0d0) at xterm.c:4787
> #22 0x00005555556c0235 in handle_one_xevent (dpyinfo=0x55555626d6c0, event=0x7fffffffa840, finish=0x555555c901d4 <current_finish>, hold_quit=0x7fffffffab50) at xterm.c:8810
> #23 0x00005555556bde28 in event_handler_gdk (gxev=0x7fffffffa840, ev=0x55555cccf0c0, data=0x0) at xterm.c:7768
> #24 0x00007ffff75f780f in  () at /usr/lib/libgdk-3.so.0
> #25 0x00007ffff75fb3cb in  () at /usr/lib/libgdk-3.so.0
> #26 0x00007ffff759f15b in gdk_display_get_event () at /usr/lib/libgdk-3.so.0
> #27 0x00007ffff75fb104 in  () at /usr/lib/libgdk-3.so.0
> #28 0x00007ffff6fcb8f4 in g_main_context_dispatch () at /usr/lib/libglib-2.0.so.0
> #29 0x00007ffff701f821 in  () at /usr/lib/libglib-2.0.so.0
> #30 0x00007ffff6fca121 in g_main_context_iteration () at /usr/lib/libglib-2.0.so.0
> #31 0x00007ffff784e2c7 in gtk_main_iteration () at /usr/lib/libgtk-3.so.0
> #32 0x00005555556c1821 in XTread_socket (terminal=0x5555560b7460, hold_quit=0x7fffffffab50) at xterm.c:9395
> #33 0x000055555570f3a2 in gobble_input () at keyboard.c:6890
> #34 0x000055555570f894 in handle_async_input () at keyboard.c:7121
> #35 0x000055555570f8dd in process_pending_signals () at keyboard.c:7139
> #36 0x000055555570f9cf in unblock_input_to (level=0) at keyboard.c:7162
> #37 0x000055555570fa4c in unblock_input () at keyboard.c:7187
> #38 0x000055555578f49a in garbage_collect () at alloc.c:6121
> #39 0x000055555578efe7 in maybe_garbage_collect () at alloc.c:5964
> #40 0x00005555557bb292 in maybe_gc () at lisp.h:5041
> #41 0x00005555557c12d6 in Ffuncall (nargs=2, args=0x7fffffffad68) at eval.c:2793
> #42 0x000055555580f7d6 in exec_byte_code
> ...  --------------

Of course, there might be other places where we could get to
`maybe_quit` from `XTread_socket`, given the enormous amount of code it
can execute.  :-(


        Stefan


diff --git a/src/alloc.c b/src/alloc.c
index c0a55e61b9..b86ed4ed26 100644
--- a/src/alloc.c
+++ b/src/alloc.c
@@ -6101,11 +6101,13 @@ garbage_collect (void)
 
   gc_in_progress = 0;
 
-  unblock_input ();
-
   consing_until_gc = gc_threshold
     = consing_threshold (gc_cons_threshold, Vgc_cons_percentage, 0);
 
+  /* Unblock *after* re-setting `consing_until_gc` in case `unblock_input`
+     signals an error (see bug#43389).  */
+  unblock_input ();
+
   if (garbage_collection_messages && NILP (Vmemory_full))
     {
       if (message_p || minibuf_level > 0)
diff --git a/src/window.c b/src/window.c
index e025e0b082..eb16e2a433 100644
--- a/src/window.c
+++ b/src/window.c
@@ -2260,7 +2260,7 @@ DEFUN ("window-parameters", Fwindow_parameters, Swindow_parameters,
 Lisp_Object
 window_parameter (struct window *w, Lisp_Object parameter)
 {
-  Lisp_Object result = Fassq (parameter, w->window_parameters);
+  Lisp_Object result = assq_no_quit (parameter, w->window_parameters);
 
   return CDR_SAFE (result);
 }





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 20 Jan 2021 15:33:01 GMT) Full text and rfc822 format available.

Message #797 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Eli Zaretskii <eliz <at> gnu.org>
To: Stefan Monnier <monnier <at> iro.umontreal.ca>
Cc: fweimer <at> redhat.com, 43389 <at> debbugs.gnu.org, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 20 Jan 2021 17:32:32 +0200
> From: Stefan Monnier <monnier <at> iro.umontreal.ca>
> Cc: Eli Zaretskii <eliz <at> gnu.org>,  michael_heerdegen <at> web.de,
>   carlos <at> redhat.com,  fweimer <at> redhat.com,  43389 <at> debbugs.gnu.org,
>   bugs <at> gnu.support,  dj <at> redhat.com
> Date: Wed, 20 Jan 2021 09:53:08 -0500
> 
> > I don't know emacs internals, so you'll have to figure out if this is
> > X dependent (probably) and/or GTK dependent.  It should be possible to come
> > up with an easier way to reproduce it now.
> 
> The backtrace is clear enough, no need to reproduce it.

Indeed.

> I installed the simple patch below into `master`.  It should fix the
> immediate problem of failing to set consing_until_gc back to a sane
> value, and also the other immediate problem of getting to
> `siglongjmp` from `unblock_input` via `window_parameter`.
> 
> Eli, do you think it should go to `emacs-27`?

Definitely, thanks.




Reply sent to Stefan Monnier <monnier <at> iro.umontreal.ca>:
You have taken responsibility. (Wed, 20 Jan 2021 15:41:01 GMT) Full text and rfc822 format available.

Notification sent to Michael Heerdegen <michael_heerdegen <at> web.de>:
bug acknowledged by developer. (Wed, 20 Jan 2021 15:41:02 GMT) Full text and rfc822 format available.

Message #802 received at 43389-done <at> debbugs.gnu.org (full text, mbox):

From: Stefan Monnier <monnier <at> iro.umontreal.ca>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, trevor <at> trevorbentley.com, carlos <at> redhat.com,
 43389-done <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 20 Jan 2021 10:40:31 -0500
>> Eli, do you think it should go to `emacs-27`?
> Definitely, thanks.

OK, done.

Trevor: I marked this bug as closed under the assumption that this
problem is solved, but of course, if it re-occurs feel free to re-open
(ideally while running under GDB in a similar setup, so we get a clear
backtrace again ;-)



        Stefan





Reply sent to Stefan Monnier <monnier <at> iro.umontreal.ca>:
You have taken responsibility. (Wed, 20 Jan 2021 15:41:02 GMT) Full text and rfc822 format available.

Notification sent to Madhu <enometh <at> meer.net>:
bug acknowledged by developer. (Wed, 20 Jan 2021 15:41:02 GMT) Full text and rfc822 format available.

Reply sent to Stefan Monnier <monnier <at> iro.umontreal.ca>:
You have taken responsibility. (Wed, 20 Jan 2021 15:41:02 GMT) Full text and rfc822 format available.

Notification sent to Naveed Chehrazi <nchehrazi <at> gmail.com>:
bug acknowledged by developer. (Wed, 20 Jan 2021 15:41:02 GMT) Full text and rfc822 format available.

Reply sent to Stefan Monnier <monnier <at> iro.umontreal.ca>:
You have taken responsibility. (Wed, 20 Jan 2021 15:41:02 GMT) Full text and rfc822 format available.

Notification sent to Jean Louis <bugs <at> gnu.support>:
bug acknowledged by developer. (Wed, 20 Jan 2021 15:41:02 GMT) Full text and rfc822 format available.

Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Wed, 20 Jan 2021 15:50:02 GMT) Full text and rfc822 format available.

Message #820 received at 43389-done <at> debbugs.gnu.org (full text, mbox):

From: Trevor Bentley <trevor <at> trevorbentley.com>
To: Stefan Monnier <monnier <at> iro.umontreal.ca>, Eli Zaretskii <eliz <at> gnu.org>
Cc: fweimer <at> redhat.com, dj <at> redhat.com, bugs <at> gnu.support,
 michael_heerdegen <at> web.de, carlos <at> redhat.com, 43389-done <at> debbugs.gnu.org
Subject: Re: bug#43389: 28.0.50; Emacs memory leaks using hard disk all time
Date: Wed, 20 Jan 2021 16:49:28 +0100
Stefan Monnier <monnier <at> iro.umontreal.ca> writes:

> Trevor: I marked this bug as closed under the assumption that 
> this problem is solved, but of course, if it re-occurs feel free 
> to re-open (ideally while running under GDB in a similar setup, 
> so we get a clear backtrace again ;-) 

Agreed.

And thanks to everyone for all of the help!  I very much look 
forward to having long-lived emacs processes again :)

-Trevor





Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sat, 06 Feb 2021 16:26:02 GMT) Full text and rfc822 format available.

Message #823 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Madhu <enometh <at> meer.net>
To: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43395: closed
Date: Sat, 06 Feb 2021 21:55:39 +0530 (IST)
I think I am facing the problem again presently:

GNU Emacs 28.0.50 (build 2, x86_64-pc-linux-gnu, GTK+ Version 3.24.24,
cairo version 1.16.0) of 2021-01-21 (pgtk branch; i think the
corresponding commit on master was 8b33b76eb9fb)

  PID  %MEM    VIRT   SWAP    RES   CODE    DATA    SHR nMaj OOMs nDRT  %CPU COMMAND
 9912  17.8   81.8g      0   1.3g   2916   49.3g  10976  48k  732    0   0.0 emacs

I was able to get an M-x memory-report and M-x memory-usage (88.7 MiB
Overall Object Memory Usage), but I couldn't get an M-x malloc-info as
this Emacs was started with --daemon.  Unfortunately I botched it and
killed the Emacs process while trying to open a file and redirect the
malloc_info output to it in gdb.  I didn't check gc-cons-threshold or
gc-cons-percentage, but I did kill all buffers and ran a few manual
GCs, so I think those were normal.

Were the code paths leading to the bug that was fixed understood?  (On
another note, perhaps malloc_trim could be introduced into the GC via
an optional path?)




bug archived. Request was from Debbugs Internal Request <help-debbugs <at> gnu.org> to internal_control <at> debbugs.gnu.org. (Sun, 07 Mar 2021 12:24:07 GMT) Full text and rfc822 format available.

bug unarchived. Request was from Madhu <enometh <at> meer.net> to control <at> debbugs.gnu.org. (Sun, 21 Mar 2021 15:53:02 GMT) Full text and rfc822 format available.

Information forwarded to bug-gnu-emacs <at> gnu.org:
bug#43389; Package emacs. (Sun, 21 Mar 2021 15:58:01 GMT) Full text and rfc822 format available.

Message #830 received at 43389 <at> debbugs.gnu.org (full text, mbox):

From: Madhu <enometh <at> meer.net>
To: 43389 <at> debbugs.gnu.org
Subject: Re: bug#43395: closed
Date: Sun, 21 Mar 2021 19:40:44 +0530 (IST)
[Message part 1 (text/plain, inline)]
I think this dragon has not been put to sleep yet.  I ran into the
problem again, quite quickly, within some 5 hours of Emacs uptime:

GNU Emacs 28.0.50 (build 1, x86_64-pc-linux-gnu, Motif Version 2.3.8,
cairo version 1.16.0) of 2021-03-08 (master commit a190bc9f3 - with
the motif removal reverted.)

  PID USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND
21301 madhu     20   0 2988364   2.7g   0.0  36.7   5:04.01 S emacs

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
madhu    21301  1.9 36.7 2988364 2809536 pts/2 Ssl+ 14:06   5:03 /12/build/emacs/build-motif/src/emacs -nw

A full GC does not release the resident memory.

I had an emacs -nw session and one X emacsclient session.  I was
prompted for a password by mew in the GUI frame, but the prompt appeared
on the tty frame, which was where I entered the password.  Then I
noticed the CPU temperature was up, and a Ctrl-G in Emacs stopped
that.  I think the leak may have occurred then, but I didn't notice it
until later.  When I did notice it, I killed all the buffers, did a GC,
and ran the memory and malloc reports, which I'm attaching here in
case they give any clues.

The emacs command line was:

TERM=xterm-256color MALLOC_ARENA_MAX=2 exec /12/build/emacs/build-motif/src/emacs -nw > ~/emacs.log 2>&1
[memory-usage.txt (text/plain, inline)]
Garbage collection stats:
((conses 16 788433 262686) (symbols 48 72449 79) (strings 32 397265 20150) (string-bytes 1 26247796) (vectors 16 88641) (vector-slots 8 1961725 208444) (floats 8 1191 1377) (intervals 56 9514 5343) (buffers 992 8))

 =>	12.0MB (+ 4.01MB dead) in conses
	3.32MB (+ 3.70kB dead) in symbols
	12.1MB (+  630kB dead) in strings
	25.0MB in string-bytes
	1.35MB in vectors
	15.0MB (+ 1.59MB dead) in vector-slots
	9.30kB (+ 10.8kB dead) in floats
	 520kB (+  292kB dead) in intervals
	7.75kB in buffers

Total in lisp objects: 75.9MB (live 69.3MB, dead 6.51MB)

Buffer ralloc memory usage:
8 buffers
16.2kB total (14.0kB in gaps)
      Size	Gap	Name

      1277	753	memory-report.txt
       670	1575	*Buffer Details*
       274	5855	*Ibuffer*
       103	1918	*Messages*
        35	2002	 *Echo Area 0*
         0	2087	 *Minibuf-1*
         0	20	 *Minibuf-0*
         0	20	 *Echo Area 1*
[memory-report.txt (text/plain, inline)]
Estimated Emacs Memory Usage

  69.3 MiB  Overall Object Memory Usage
  11.1 MiB  Memory Used By Global Variables
   6.6 MiB  Reserved (But Unused) Object Memory
   5.5 MiB  Memory Used By Symbol Plists
  61.7 KiB  Total Buffer Memory Usage
   1.2 KiB  Total Image Cache Size

Object Storage

  37.2 MiB  Strings
  16.3 MiB  Vectors
  12.0 MiB  Conses
   3.3 MiB  Symbols
 514.2 KiB  Intervals
   9.3 KiB  Floats
   6.8 KiB  Buffer-Objects

Largest Buffers

  31.5 KiB   *Minibuf-1*
  25.1 KiB  *Ibuffer*
   2.1 KiB   *Echo Area 0*
   1.3 KiB  *Memory Report*
   1.2 KiB  *Messages*
   0.3 KiB   *Minibuf-0*
   0.2 KiB   *Echo Area 1*

Largest Variables

   1.4 MiB  load-history
   1.2 MiB  $portage-category-package-names
 951.6 KiB  +lw-manual-data-7-1-0-0+
 574.5 KiB  ivy--all-candidates
 491.4 KiB  command-history
 296.5 KiB  face-new-frame-defaults
 282.5 KiB  help-definition-prefixes
 236.3 KiB  obarray
 143.1 KiB  org-entities
 137.5 KiB  save-place-alist
  92.5 KiB  global-map
  92.5 KiB  widget-global-map
  89.3 KiB  bibtex-biblatex-entry-alist
  84.4 KiB  buffer-name-history
  83.2 KiB  lw::manual-symbols
  82.2 KiB  gnus-summary-mode-map
  80.9 KiB  coding-system-alist
  79.0 KiB  shortdoc--groups
  77.3 KiB  ivy-history
  74.7 KiB  ivy--virtual-buffers

[malloc-info.txt (text/plain, inline)]
<malloc version="1">
<heap nr="0">
<sizes>
  <size from="17" to="32" total="4992" count="156"/>
  <size from="33" to="48" total="96" count="2"/>
  <size from="49" to="64" total="189824" count="2966"/>
  <size from="65" to="80" total="12640" count="158"/>
  <size from="81" to="96" total="576" count="6"/>
  <size from="97" to="112" total="448" count="4"/>
  <size from="33" to="33" total="8778" count="266"/>
  <size from="49" to="49" total="686" count="14"/>
  <size from="193" to="193" total="6369" count="33"/>
  <size from="209" to="209" total="5225" count="25"/>
  <size from="225" to="225" total="5400" count="24"/>
  <size from="241" to="241" total="241" count="1"/>
  <size from="257" to="257" total="15677" count="61"/>
  <size from="273" to="273" total="6825" count="25"/>
  <size from="289" to="289" total="7225" count="25"/>
  <size from="305" to="305" total="915" count="3"/>
  <size from="321" to="321" total="21507" count="67"/>
  <size from="337" to="337" total="6740" count="20"/>
  <size from="353" to="353" total="3530" count="10"/>
  <size from="369" to="369" total="1845" count="5"/>
  <size from="385" to="385" total="20790" count="54"/>
  <size from="401" to="401" total="4010" count="10"/>
  <size from="417" to="417" total="2085" count="5"/>
  <size from="433" to="433" total="2165" count="5"/>
  <size from="449" to="449" total="16164" count="36"/>
  <size from="465" to="465" total="2325" count="5"/>
  <size from="481" to="481" total="3848" count="8"/>
  <size from="497" to="497" total="1491" count="3"/>
  <size from="513" to="513" total="15903" count="31"/>
  <size from="529" to="529" total="5819" count="11"/>
  <size from="545" to="545" total="4360" count="8"/>
  <size from="561" to="561" total="2805" count="5"/>
  <size from="577" to="577" total="21926" count="38"/>
  <size from="593" to="593" total="4151" count="7"/>
  <size from="609" to="609" total="4263" count="7"/>
  <size from="625" to="625" total="625" count="1"/>
  <size from="641" to="641" total="16666" count="26"/>
  <size from="657" to="657" total="24966" count="38"/>
  <size from="673" to="673" total="4711" count="7"/>
  <size from="689" to="689" total="4134" count="6"/>
  <size from="705" to="705" total="12690" count="18"/>
  <size from="721" to="721" total="8652" count="12"/>
  <size from="737" to="737" total="6633" count="9"/>
  <size from="753" to="753" total="753" count="1"/>
  <size from="769" to="769" total="9228" count="12"/>
  <size from="785" to="785" total="3140" count="4"/>
  <size from="801" to="801" total="4806" count="6"/>
  <size from="817" to="817" total="817" count="1"/>
  <size from="833" to="833" total="4165" count="5"/>
  <size from="849" to="849" total="10188" count="12"/>
  <size from="865" to="865" total="3460" count="4"/>
  <size from="881" to="881" total="2643" count="3"/>
  <size from="897" to="897" total="29601" count="33"/>
  <size from="913" to="913" total="2739" count="3"/>
  <size from="929" to="929" total="1858" count="2"/>
  <size from="945" to="945" total="9450" count="10"/>
  <size from="961" to="961" total="23064" count="24"/>
  <size from="977" to="977" total="18563" count="19"/>
  <size from="993" to="993" total="4965" count="5"/>
  <size from="1009" to="1009" total="94846" count="94"/>
  <size from="1025" to="1073" total="442846" count="430"/>
  <size from="1089" to="1137" total="94742" count="86"/>
  <size from="1153" to="1201" total="32700" count="28"/>
  <size from="1217" to="1249" total="29432" count="24"/>
  <size from="1281" to="1329" total="32617" count="25"/>
  <size from="1345" to="1393" total="20495" count="15"/>
  <size from="1409" to="1457" total="24369" count="17"/>
  <size from="1473" to="1521" total="16459" count="11"/>
  <size from="1537" to="1585" total="20317" count="13"/>
  <size from="1601" to="1649" total="19388" count="12"/>
  <size from="1665" to="1713" total="11783" count="7"/>
  <size from="1729" to="1777" total="8757" count="5"/>
  <size from="1793" to="1841" total="16377" count="9"/>
  <size from="1857" to="1905" total="15016" count="8"/>
  <size from="1921" to="1969" total="33153" count="17"/>
  <size from="1985" to="2033" total="68418" count="34"/>
  <size from="2049" to="2097" total="205492" count="100"/>
  <size from="2113" to="2161" total="89514" count="42"/>
  <size from="2177" to="2225" total="30782" count="14"/>
  <size from="2241" to="2289" total="27068" count="12"/>
  <size from="2305" to="2353" total="34799" count="15"/>
  <size from="2369" to="2417" total="28748" count="12"/>
  <size from="2433" to="2481" total="12277" count="5"/>
  <size from="2497" to="2529" total="17623" count="7"/>
  <size from="2561" to="2609" total="18119" count="7"/>
  <size from="2689" to="2737" total="16230" count="6"/>
  <size from="2753" to="2785" total="19431" count="7"/>
  <size from="2817" to="2865" total="17094" count="6"/>
  <size from="2881" to="2929" total="8723" count="3"/>
  <size from="2945" to="2993" total="29706" count="10"/>
  <size from="3009" to="3057" total="36524" count="12"/>
  <size from="3073" to="3121" total="101665" count="33"/>
  <size from="3137" to="3553" total="293369" count="89"/>
  <size from="3585" to="4081" total="163002" count="42"/>
  <size from="4097" to="4561" total="345522" count="82"/>
  <size from="4641" to="5105" total="166258" count="34"/>
  <size from="5121" to="5617" total="185635" count="35"/>
  <size from="5633" to="6113" total="100273" count="17"/>
  <size from="6145" to="6641" total="170315" count="27"/>
  <size from="6705" to="7153" total="61977" count="9"/>
  <size from="7169" to="7553" total="167719" count="23"/>
  <size from="7777" to="8177" total="95996" count="12"/>
  <size from="8193" to="8673" total="602825" count="73"/>
  <size from="8737" to="9201" total="72424" count="8"/>
  <size from="9217" to="9713" total="437983" count="47"/>
  <size from="9729" to="9985" total="88681" count="9"/>
  <size from="11473" to="11473" total="11473" count="1"/>
  <size from="13201" to="16369" total="1057223" count="71"/>
  <size from="16401" to="20321" total="2678684" count="156"/>
  <size from="20497" to="24513" total="617340" count="28"/>
  <size from="24657" to="28433" total="479890" count="18"/>
  <size from="28929" to="32529" total="368252" count="12"/>
  <size from="32801" to="36833" total="480238" count="14"/>
  <size from="37089" to="40241" total="196741" count="5"/>
  <size from="42017" to="65249" total="1370090" count="26"/>
  <size from="65649" to="92913" total="1487091" count="19"/>
  <size from="101089" to="115713" total="440132" count="4"/>
  <size from="141361" to="157793" total="299154" count="2"/>
  <size from="163889" to="230721" total="1753993" count="9"/>
  <size from="324161" to="468993" total="2808135" count="7"/>
  <size from="545265" to="1799181393" total="2731856606" count="30"/>
  <unsorted from="129" to="16929" total="163118" count="30"/>
</sizes>
<total type="fast" count="3292" size="208576"/>
<total type="rest" count="3139" size="2751257554"/>
<system type="current" size="2856476672"/>
<system type="max" size="2856476672"/>
<aspace type="total" size="2856476672"/>
<aspace type="mprotect" size="2856476672"/>
</heap>
<heap nr="1">
<sizes>
  <size from="17" to="32" total="992" count="31"/>
  <size from="33" to="48" total="240" count="5"/>
  <size from="97" to="112" total="112" count="1"/>
</sizes>
<total type="fast" count="37" size="1344"/>
<total type="rest" count="1" size="96656"/>
<system type="current" size="135168"/>
<system type="max" size="135168"/>
<aspace type="total" size="135168"/>
<aspace type="mprotect" size="135168"/>
<aspace type="subheaps" size="1"/>
</heap>
<total type="fast" count="3329" size="209920"/>
<total type="rest" count="3140" size="2751354210"/>
<total type="mmap" count="2" size="692224"/>
<system type="current" size="2856611840"/>
<system type="max" size="2856611840"/>
<aspace type="total" size="2856611840"/>
<aspace type="mprotect" size="2856611840"/>
</malloc>

bug archived. Request was from Debbugs Internal Request <help-debbugs <at> gnu.org> to internal_control <at> debbugs.gnu.org. (Mon, 19 Apr 2021 11:24:06 GMT) Full text and rfc822 format available.

This bug report was last modified 3 years ago.



GNU bug tracking system
Copyright (C) 1999 Darren O. Benham, 1997,2003 nCipher Corporation Ltd, 1994-97 Ian Jackson.