error allocating core memory buffers (code 22) at util2.c(106) [sender=3.1.2]
I'm in the middle of recovering from a tactical error copying around a
Mac OS X 10.10.5 Time Machine backup (it turns out Apple's instructions
aren't great...). I had rsync running for the past 6 hours repairing
permissions/ACLs on 1.5 TB of data (not copying the data), and then it
just died in the middle with:

.L....og.... 2015-03-11-094807/platinum-bar2/usr/local/mysql ->
ERROR: out of memory in expand_item_list [sender]
rsync error: error allocating core memory buffers (code 22) at util2.c(106) [sender=3.1.2]
rsync: [sender] write error: Broken pipe (32)

Because this is a Time Machine backup, and there were 66 snapshots of a
1 TB disk consuming about 1.5 TB, there were a *lot* of hard links. Many
of them are of directories rather than of individual files, so it's a little
challenging to estimate the ratio of files to links.
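
For what it's worth, something like the following would give rough numbers
for the fan-out. It's just a sketch, untested; it only looks at regular-file
hard links (not whatever Time Machine does with directories), and the path
argument is a placeholder:

    #!/usr/bin/env python
    # Sketch only (untested): count file paths, how many of them share an
    # inode with another path (st_nlink > 1), and how many distinct inodes
    # are behind those shared paths.
    import os
    import sys

    root = sys.argv[1] if len(sys.argv) > 1 else "."
    paths = 0          # every non-directory path visited
    linked_paths = 0   # paths whose inode has more than one name
    inodes = set()     # distinct (device, inode) pairs behind them

    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                st = os.lstat(os.path.join(dirpath, name))
            except OSError:
                continue
            paths += 1
            if st.st_nlink > 1:
                linked_paths += 1
                inodes.add((st.st_dev, st.st_ino))

    print("file paths:             %d" % paths)
    print("paths with nlink > 1:   %d" % linked_paths)
    print("distinct shared inodes: %d" % len(inodes))
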
Are there any useful tips here?
Is it worth filing a bug report on such a thin record?
I guess I can turn on core dumps and raise (unlimit completely) the
relevant ulimits, although it doesn't seem to have segfaulted, so I'm
not sure having core dumps enabled would have helped?
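
(If I do go down that road, what I have in mind is a trivial wrapper,
just a sketch and untested: raise the core-file size limit to its hard
maximum and then exec rsync with whatever arguments I'd normally pass.)

    #!/usr/bin/env python
    # Sketch only (untested): bump RLIMIT_CORE to the hard maximum, then
    # exec rsync with the arguments given to this wrapper.
    import os
    import resource
    import sys

    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
    os.execvp("rsync", ["rsync"] + sys.argv[1:])
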
p.s.: If I had to start over, I would have spent less time by just deleting
the data and recopying it, rather than trying to fix up the metadata and
dealing with magic Apple stuff like the inability to modify symlinks
inside a top-level Backups.backupdb directory of a Time Machine HFS
volume (but you can move the top-level directory into another directory,
modify the symlinks inside, and then move it back). This has been an
"interesting" experience.

Although I imagine that output might be voluminous [but maybe not]?
Again, I don't have time to build test cases and reproduce this
carefully; every run is painful, long, and slow. But I'd like to do the
responsible thing if someone can tell me what that is.

> p.s.: If I had to start over, I would have spent less time by just deleting
> the data and recopying it, rather than trying to fix up the metadata and

Indeed, it's looking like fixing the metadata with rsync is an order
of magnitude slower, even as far as I've gotten. So maybe it's time to find
another method. I don't think fts(3) is optimized any better for large
hardlink farms, so I think maybe I need a homegrown solution? Ughhhh.
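
If I do roll my own, the core of what I'm picturing is below. It's just a
sketch, untested; it assumes a good reference tree to copy modes and
ownership from, takes two placeholder path arguments, and ignores ACLs
entirely. The idea is to walk the damaged tree once and, for hard-linked
files, do the work only the first time a given (device, inode) pair shows
up, since fixing one name of an inode fixes all of its other names too.

    #!/usr/bin/env python
    # Sketch only (untested): re-apply mode and ownership from a reference
    # tree onto a damaged copy, doing the work once per inode rather than
    # once per path.  ACLs are not handled here.
    import os
    import sys

    ref_root, dst_root = sys.argv[1], sys.argv[2]   # placeholder arguments
    done = set()                                    # (st_dev, st_ino) already fixed

    for dirpath, dirnames, filenames in os.walk(dst_root):
        for name in dirnames + filenames:
            dst = os.path.join(dirpath, name)
            ref = os.path.join(ref_root, os.path.relpath(dst, dst_root))
            try:
                dst_st = os.lstat(dst)
                ref_st = os.lstat(ref)
            except OSError:
                continue
            if dst_st.st_nlink > 1:
                key = (dst_st.st_dev, dst_st.st_ino)
                if key in done:
                    continue                        # another name was already fixed
                done.add(key)
            if not os.path.islink(dst):             # chmod/chown would follow symlinks
                os.chmod(dst, ref_st.st_mode & 0o7777)
                os.chown(dst, ref_st.st_uid, ref_st.st_gid)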

> Because this is a Time Machine backup, and there were 66 snapshots of a
> 1 TB disk consuming about 1.5 TB, there were a *lot* of hard links. Many
> of them are of directories rather than of individual files, so it's a little

Err, whoops? No, I was tired and confused. They are not hard links
to directories; that would screw up the universe. Still, lots of hard
links.

> Do I need to run this under lldb and set a breakpoint in expand_item_list()?
> Quick inspection suggests running with -vvvv might give some useful output:

The result of this was that it processed a few files for a minute or so
and then hung in select(), consuming no CPU and doing no disk I/O.
Unfortunately, my clang/lldb workflow was apparently broken and I didn't
have functional debugging symbols (...), and I also lost the stack trace
I thought I had (inadequate scrollback), so I'm not sure what was going
on. But at first blush, it appeared that adding -vvvv made things hang
forever.
Removing it and rerunning, it's now happily trucking along and has been
actually doing work for the past hour.