% grep -R "" /sys/kernel/mm/hugepages/ /proc/sys/vm/*huge*.Various runtime settings (see documentation) % grep HUGETLB /boot/config-$(uname -r).Standard Debian Kernel have HUGETLB enabled (What about Lenny ? Xen ?): (read Documentation/vm/hugetlbpage.txt for more information about it) Hugeadm also displays the number of allocated huge pages per available size:Īn alternative to retrieve the current available/used page for the default huge page size is /proc/meminfo: You can get a list of available huge page sizes with hugeadm: The following kernel boot parameters enable 1GB pages and create a pool of one 1GB page:Īfter boot, the huge page pools look like this: If this commands returns a non-empty string, 1GB pages are supported.īefore they are actually both available, they may have to be activated at boot time. If this commands returns a non-empty string, 2MB pages are supported. proc/cpuinfo shows whether the two flags are set. If the CPU supports 2MB pages, it has the PSE cpuinfo flag, for 1GB it has the PDPE1GB flag. These are available at run time.ĭepending on the processor, there are at least two different huge page sizes on the x86_64 architecture: 2MB and 1GB. If one elects to build their own Debian arm64 kernel with CONFIG_ARM64_64K_PAGES=y, then only 512MB HugeTLB (and THP) pages are available. One has to pre-allocate 1GB HugeTLB pages on boot by specifying arguments on the kernel command line, the following will pre-allocate 10 x 1GB huge pages: hugepagesz=1G hugepages=10 The Debian arm64 kernel (running with a 4KB standard PAGE_SIZE) supports 2MB and 1GB HugeTLB page sizes. Some architectures (like ia64) can have multiple and/or configuration "huge" pages size.īoot parameters and mount options in hugetlbpage.txt in documentations. See Limits ( ulimit -l and memlock in /etc/security/nf). (how many pages do you want to allocate?) You should allow the process to lock a little bit more memory that just the the space for hugepages. 
Note that any page can be locked in RAM, not just huge pages. You should configure the amount of memory a user can lock, so an application can't crash your operating system by locking all the memory. if grep "Huge" /proc/meminfo don't show all the pages, you can try to free the cache with sync echo 3 > /proc/sys/vm/drop_caches (where "3" stands for "purge pagecache, dentries and inodes") then try sysctl -p again. You can try to run sysctl -p to apply the changes. Reboot (This is the most reliable method of allocating huge pages before the memory gets fragmented. 1) hugetlbfs /hugepages hugetlbfs mode=1770,gid=2021 0 0 # Members of group my-hugetlbfs(2021) can allocate "huge" Shared memory segmentĬreate a mount point for the file system % mkdir /hugepagesĪdd this line in /etc/fstab (The mode of 1770 allows anyone in the group to create files but not unlink or rename each other's files. Īdding user franklin to group my-hugetlbfsĮdit /etc/nf and add this text to specify the number of pages you want to reserve (see pages-size) # Allocate 256*2MiB for HugePageTables (YMMV) Note: this should not be needed for libvirt (see /etc/libvirt/nf) % groupadd my-hugetlbfsĪdding user `franklin' to group `my-hugetlbfs'. A good introduction to large pages is available from ibm.com.Ĭreate a group for users of hugepages, and retrieve it's GID (is this example, 2021) then add yourself to the group. Linux support "Huge page tables" (HugeTlb) is available in Debian since DebianLenny (actually, since 2.6.23). (Fedora mounts it in /dev/hugepages/, so don't be surprised if you find some example on the web that use this location) Read the documentation for more information about hugetlbpage.Ĭurrently, there is no standard way to enable HugeTLBfs, mainly because the FHS has no provision for such kind of virtual file system, see 572733.
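The text quotes only comment lines from the /etc/nf fragment. A sketch of what the full fragment might look like, assuming the standard vm.nr_hugepages and vm.hugetlb_shm_group kernel sysctls (the 256-page count and GID 2021 come from the comments quoted above):

```
# Allocate 256*2MiB for HugePageTables (YMMV)
vm.nr_hugepages = 256
# Members of group my-hugetlbfs(2021) can allocate "huge" shared memory segments
vm.hugetlb_shm_group = 2021
```

These settings are applied with sysctl -p (or at reboot), as described above.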
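The /proc/meminfo check described above can be scripted. A minimal sketch, assuming the HugePages_* field layout documented in proc(5); the hugepage_summary helper name and the sample values are illustrative, not from the original text:

```shell
# Print "total free size_kB" for the default huge page size,
# given /proc/meminfo-style text on stdin.
hugepage_summary() {
    awk '
        /^HugePages_Total:/ { total = $2 }
        /^HugePages_Free:/  { free  = $2 }
        /^Hugepagesize:/    { size  = $2 }
        END { print total, free, size }
    '
}

# On a real system you would run: hugepage_summary < /proc/meminfo
# Here we feed a captured fragment with illustrative values:
hugepage_summary <<'EOF'
HugePages_Total:     256
HugePages_Free:      256
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
EOF
# prints: 256 256 2048
```

If HugePages_Total stays below the number of pages you reserved after running sysctl -p, the sync/drop_caches step described above may help.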