Hacker News

Ah yeah, getting good kernel<>userspace one-shot memcpy performance for large files is surprisingly hard. mmap has setup/teardown overhead that's significant for one-shot transfers, while regular read/write calls suffer from page-cache and per-page overhead. Hopefully all the large-folio work in the kernel will help with that.


From what I've seen a surprisingly large part of the overhead is due to SMAP when doing larger reads from the page cache - i.e. if I boot with clearcpuid=smap (not for prod use!), larger reads go significantly faster. On both Intel and AMD CPUs interestingly.

On Intel it's also not hard to simply reach the per-core memory bandwidth with modern storage HW. This matters most prominently for writes by the checkpointing process, which needs to compute data checksums given the current postgres implementation (if enabled). But even for reads it can be a bottleneck, e.g. when prewarming the buffer pool after a restart.


> if I boot with clearcpuid=smap (not for prod use!), larger reads go significantly faster. On both Intel and AMD CPUs interestingly.

Is there a page anywhere that collects these sorts of "turn the whole hardware security layer off" switches that can be flipped to get better throughput out of modern x86 CPUs, when your system has no real attack surface to speak of (e.g. air-gapped single-tenant HPC)?


On the kernel side there's a boot parameter for all of them: mitigations=off. Software that was compiled with additional fences may need to be recompiled to remove them.

https://www.kernel.org/doc/html/latest/admin-guide/kernel-pa...


mitigations=off disables workarounds for bugs or "mis-features" in the CPU that could be exploited to bypass OS security measures.

SMAP is an OS security measure, and so does not get disabled by mitigations=off. SMAP can still be a significant drag on certain I/O workloads, though. IMO it should be better known, or covered by a more obvious option.

Linux kernel developers are really bad at defining and naming options like this.


SMAP overhead should be roughly constant, and I’d be quite surprised if it’s noticeable for large reads. Small reads are a different story.


It turns out to be the other way round, curiously. The bigger the reads (i.e. how much to read in one syscall) and the bigger the target area of the reads (how long before a target memory location is reused), the bigger the overhead of SMAP gets.

If there's interest, I can dig up the reproducer I had at some point.


That is definitely interesting.


TCMalloc never munmaps; instead it mmap(MAP_FIXED)s within unpopulated PROT_NONE regions, and then madvise(MADV_FREE)s at page granularity to reduce RSS. Perhaps a similar approach for file I/O could help to dodge the cost of munmap TLB shootdowns after a file has been read, but using MADV_DONTNEED instead of MADV_FREE. There will probably be a shootdown associated with the MADV_DONTNEED, but maybe it will be lower cost than munmap?

You might also just keep around the file mapping until memory/address space pressure requires, and at that point MAP_FIXED over it.



That doesn't speed up userspace<>kernel memcpy, it just reduces cache churn. Despite its name, it still goes through the page cache; it just triggers writeback and drops the pages once that's done. For example, when copying to a tmpfs it makes zero difference, since tmpfs lives entirely in memory.


So you're less dependent on the page replacement algorithm being scan-resistant, since you can use this flag for scan/loop workloads, right?


I would initially add it for WAL writes and reads. There should never be another read in normal operation.



