
Big RAM size, advice please

Posted: Sep 28th, '15, 09:00
by mackowiakp
My new server has 32 GB of RAM. An "ordinary" installation of M5 consumes approx. 3.6 GB of RAM, and only a few kB of swap. Is there any possibility to speed up the server by using, for example, a RAM disk (or some other solution) so that the system works more from RAM than from disk? And what about the potential problems of a power failure when "more RAM than HDD" is in use? Any ideas?

Re: Big RAM size, advice please

Posted: Sep 28th, '15, 09:50
by magfan
Just a few ideas which work for my system (256 GB RAM):

Change swappiness from 60 (the default) to a smaller value, e.g. 10, by adding the following line to /etc/sysctl.conf:
Code:
vm.swappiness=10

Note that setting swappiness to "0" does not switch off swap completely; it only tells the kernel to avoid swapping for as long as possible. To disable swap entirely, use swapoff.
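You can check the value the running kernel is actually using at any time (the change in /etc/sysctl.conf takes effect after a reboot or after running sysctl -p as root):

```shell
# The currently active swappiness value is exported via /proc
cat /proc/sys/vm/swappiness
```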

Create a RAM disk for processes/programs which require high disk I/O, so that read/write operations can benefit from the RAM disk. Just be aware that in case of a power failure this data will be lost if you do not have a UPS (Uninterruptible Power Supply). In general, do not place important files there without a backup. You can create a RAM disk by adding a line like the following to your /etc/fstab:
Code:
tmpfs /data/projects/images tmpfs size=32G 0 0

The parameter "size" can also be given as a percentage of RAM, e.g. size=10%.
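So the same mount capped at 10% of RAM instead of a fixed 32G would read:

```
tmpfs /data/projects/images tmpfs size=10% 0 0
```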

Well, the most critical potential problem of a power failure is loss of data. Just keep your backups up to date and do not place important files on a RAM disk if you do not use a UPS.

Re: Big RAM size, advice please

Posted: Sep 28th, '15, 12:43
by wintpe
If you are using tmpfs, it will be far more efficient if you use hugepages.

What this does is reduce the pressure that memory lookups put on the TLB, which in itself makes tmpfs much faster.

As the hugepages implementation changes from one kernel revision to another, be aware that what I'm showing you is a Red Hat 6.2 example and may well differ in our kernel.

Add vm.nr_hugepages=20 to /etc/sysctl.conf and re-read it.

(Don't allocate more hugepages than the available RAM; that will cause the kernel to crash.)

mkdir /largepage
mount -t hugetlbfs none /largepage
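To confirm that the pool was actually reserved, the kernel exports counters in /proc/meminfo (a read-only check; the exact fields vary by kernel):

```shell
# HugePages_Total / HugePages_Free show the reserved pool;
# Hugepagesize is the page size (typically 2048 kB on x86_64)
grep -i huge /proc/meminfo
```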


Alternatively, use transparent hugepages, as these are able to be swapped out.

I've not tried this myself on Mageia, so I'm just quoting from a book.

So be aware that you might need to do some research to confirm my suggestion.

Regards, Peter
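Whether transparent hugepages are active can be read from sysfs on recent kernels; the bracketed word in the output is the current mode:

```shell
# Prints e.g. "[always] madvise never"; falls back gracefully
# if this kernel does not expose the THP interface
if [ -r /sys/kernel/mm/transparent_hugepage/enabled ]; then
    cat /sys/kernel/mm/transparent_hugepage/enabled
else
    echo "THP interface not found on this kernel"
fi
```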

Re: Big RAM size, advice please

Posted: Sep 30th, '15, 10:58
by magfan
Yes, huge pages may perform better than tmpfs (alone), but I use several RAM disks, one for each specific program. In that case I just add a new line to /etc/fstab for each RAM disk. This cannot be done with huge pages: as far as I understand, you can reserve only one pool of huge pages, so all my programs which currently have their own RAM disk would have to share that pool. Or is it possible to create RAM disks within that pool? Something like:
Code:
mkdir /largepage
mount -t hugetlbfs none /largepage
mkdir /largepage/images
mkdir /largepage/videos
mkdir /largepage/signals

and in /etc/fstab something like:
Code:
tmpfs /largepage/images tmpfs size=32G 0 0
tmpfs /largepage/videos tmpfs size=32G 0 0
tmpfs /largepage/signals tmpfs size=16G 0 0
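Or, instead of layering tmpfs on top of the pool, maybe hugetlbfs's own size= mount option could partition it per mount point? Something like this (untested on my side; the sizes and paths are just placeholders):

```
none /largepage/images  hugetlbfs pagesize=2M,size=16G 0 0
none /largepage/videos  hugetlbfs pagesize=2M,size=16G 0 0
none /largepage/signals hugetlbfs pagesize=2M,size=8G  0 0
```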