Specify the directory where the node stores the blockchain data via the data_dir command-line argument, as sketched below.
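A minimal start sketch, assuming the usual key-value command-line syntax used by the ./bin/start script; the path, mining address, and peer are placeholders you must replace:

./bin/start mine data_dir /your/dir mining_addr <your wallet address> peer <a known peer>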
The number of available file descriptors limits how fast the node can process data, so we recommend raising it. To check the current limit, run ulimit -n. To raise the system-wide cap, open /etc/sysctl.conf and add a line raising fs.file-max, then run sysctl -p to make the changes take effect. Additionally, open /etc/security/limits.conf and add nofile entries for the user the node runs as; re-login and run ulimit -n to verify the new limit is in effect. You can also change the limit for the current session via ulimit -n 10000000. If the limit still does not persist, check DefaultLimitNOFILE in /etc/systemd/user.conf and /etc/systemd/system.conf. A consolidated sketch follows.
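All values below are examples rather than recommendations from this guide, and "arweave" is a hypothetical username:

# check the current limit
ulimit -n

# /etc/sysctl.conf — system-wide cap on open files
fs.file-max=10000000

# apply sysctl changes without rebooting
sudo sysctl -p

# /etc/security/limits.conf — per-user limits
arweave soft nofile 10000000
arweave hard nofile 10000000

# /etc/systemd/user.conf and /etc/systemd/system.conf, if the limit does not persist
DefaultLimitNOFILE=10000000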
To see the logs, run ./bin/logs -f in the Arweave directory in a different terminal. To stop the miner, run ./bin/stop or kill the OS process (kill -sigterm <pid> or pkill <name>). Sending a SIGKILL (kill -9) is not recommended.

Once the node is mining, the console reports your performance, e.g.:

Miner spora rate: 1546 h/s
The same metric appears in the logs as miner_sporas_per_second. Note that it is 0 when you start the miner without data and slowly increases as more data is synchronized. After the number stabilizes, you can input it into the mining calculator generously created by community member @tiamat to see the expected return.

To benchmark the RandomX hashrate of your machine, use the randomx-benchmark script:

./bin/randomx-benchmark --mine --init 32 --threads 32 --jit --largePages
Replace 32 with the number of CPU threads. Note that reducing the number of threads might improve the outcome. Do not specify --largePages if you have not configured huge pages yet. For reference, a 32-thread AMD Ryzen 3950X can do about 10000 h/s, a 32-thread AMD EPYC 7502P about 24000 h/s, a 12-thread Intel Xeon E-2276G about 2500 h/s, and a 2-thread Intel Xeon E5-2650 machine in the cloud about 600 h/s.
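To size the benchmark to your CPU automatically, something like this should work (nproc is a standard Linux utility that prints the number of available hardware threads; drop --largePages as noted above if huge pages are not set up):

./bin/randomx-benchmark --mine --init $(nproc) --threads $(nproc) --jit --largePages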
To benchmark your disk read speed, run hdparm -t /dev/sda. Replace /dev/sda with the disk name from df -h. To be competitive, consider a fast NVMe SSD capable of several GiB per second or more.

To estimate the upper limit of your miner's performance, run:

./bin/hashrate-upper-limit 2500 1 3
where 2500 is the RandomX hashrate, 1 is the number of GiB the disk reads per second, and 3 is the reciprocal of the replicated share of the weave. For example, a 12-core Intel Xeon with a 1 GiB/s SSD storing a third of the weave is capped at 540 h/s. In practice, performance is usually about 0.7-0.9 of the upper limit.

If your miner falls noticeably short of this limit, consider tuning stage_one_hashing_threads
(between 1 and the number of CPU threads), stage_two_hashing_threads
, io_threads
, randomx_bulk_hashing_iterations
. For example, recall bytes computed/s
should be roughly equal to Miner spora rate
divided by your share of the weave. If it is not, consider increasing io_threads
and decreasing stage_one_hashing_threads
. You can learn the share of the weave the node has synced to date by dividing the size of the chunk_storage
folder (du -sh /path/to/data/dir/chunk_storage
) by the total weave size. Increasing randomx_bulk_hashing_iterations
to 128 or larger might make a big difference on a powerful machine. A sketch of a tuned start command:
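This is a hedged example, assuming configuration parameters are passed as key-value pairs on the command line as elsewhere in this guide; the thread counts and iteration count are illustrative, not recommendations:

./bin/start mine data_dir /your/dir stage_one_hashing_threads 24 stage_two_hashing_threads 8 io_threads 16 randomx_bulk_hashing_iterations 128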
Even if you can only dedicate a smaller disk to the chunk_storage and rocksdb folders, the node will fill it up, and your miner may nevertheless be competitive, assuming the disk and the processor are sufficiently performant.

To speed up syncing, increase the sync_jobs
configuration parameter, for example as sketched below. You might also want to run the node without mining (without the mine flag) to further speed up syncing.
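A sketch, assuming the same key-value command-line syntax; 100 is an arbitrary example value, and the mine flag is omitted to sync faster:

./bin/start sync_jobs 100 data_dir /your/dir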
We also recommend configuring huge memory pages. To check the current allocation, run cat /proc/meminfo | grep HugePages. To set a value, run sudo sysctl -w vm.nr_hugepages=1000
. To make the configuration survive reboots, create /etc/sysctl.d/local.conf
and put vm.nr_hugepages=1000
there. The output of cat /proc/meminfo | grep HugePages
should then look like this:
AnonHugePages: 0 kB
ShmemHugePages:        0 kB
HugePages_Total:    1000
HugePages_Free:     1000
HugePages_Rsvd:        0
HugePages_Surp:        0
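To write the persistent setting in one step, a one-liner like this should work (a sketch; 1000 is the example value used above):

echo 'vm.nr_hugepages=1000' | sudo tee /etc/sysctl.d/local.conf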
Finally, enable randomx_large_pages on startup, as sketched below.
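A start sketch with large pages enabled; the other arguments are placeholders carried over from the earlier examples:

./bin/start mine randomx_large_pages data_dir /your/dir mining_addr <your wallet address> peer <a known peer>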
If your data_dir sits on a large but slow disk, consider mounting fast NVMe drives over the chunk_storage and rocksdb folders. df -h should then look like:

/dev/hdd1      5720650792  344328088  5087947920   7% /your/dir
/dev/nvme1n1    104857600    2097152   102760448   2% /your/dir/chunk_storage
/dev/nvme1n2    104857600    2097152   102760448   2% /your/dir/rocksdb
Replace /your/dir with the directory you specify on startup.

Make sure your node is reachable from the Internet at http://[Your Internet IP]:1984.
You can obtain your public IP by running curl ifconfig.me/ip.
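A quick reachability check from another machine, assuming the node exposes the standard /info endpoint of the Arweave HTTP API:

curl http://[Your Internet IP]:1984/info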
If you specified a different port when starting the miner, replace 1984 everywhere in these instructions with your port. If you cannot access the node, set up TCP port forwarding so that incoming HTTP requests arriving at your Internet IP on port 1984 are forwarded to the selected port on your mining machine. For more details on how to set up port forwarding, consult your ISP or cloud provider.

If you want to move your miner to a new machine, copy the data_dir folder to the new machine. Note that the chunk_storage folder contains sparse files, so copying it the usual way would take a long time and the destination folder would end up much larger than the source. To copy this folder, use rsync with the -aS flags or archive it via tar -Scf before copying, as sketched below.
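A sketch with placeholder paths and host; -a preserves file attributes, and -S (in both rsync and tar) handles sparse files efficiently:

rsync -aS /your/dir/chunk_storage/ user@newhost:/your/dir/chunk_storage/
# or
tar -Scf chunk_storage.tar /your/dir/chunk_storage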
If you see an error report like the following in the logs:

=ERROR REPORT====...=== Socket connection error: exit badarg, [{gen_tcp,connect,4, [{file,"gen_tcp.erl"},{line,149}]}

it means that TCP/IP failed to establish an outgoing connection because the selected local endpoint was recently used to connect to the same remote endpoint. This error typically occurs when outgoing connections are opened and closed at a high rate, causing all available local ports to be used and forcing TCP/IP to reuse a local port for an outgoing connection. To minimize the risk of data corruption, the TCP/IP standard requires a minimum time period to elapse between successive connections from a given local endpoint to a given remote endpoint.
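A common Linux mitigation for local port exhaustion, offered here as an assumption rather than advice from this guide, is to widen the ephemeral port range (values are examples):

# inspect the current range
sysctl net.ipv4.ip_local_port_range

# widen it for the running system
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"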