Notes
General Notes

Application performance can be affected by the following factors.
File Size
Transferring a large number of small files tends to reduce throughput. In our performance evaluations of transfers involving many files of the same size, throughput starts to drop when the file size is around 128KB.
Encryption, Compression or Digest Calculation
During a transfer, throughput may decrease because encryption or decryption processing becomes a CPU bottleneck.
Throughput can also decrease during encryption when AES-NI acceleration is unavailable.
On networks above 1 Gbps, cipher methods that include CBC mode or an HMAC digest may become a cryptographic-processing bottleneck even when AES-NI is functional, which can result in throughput drops.
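As a quick way to confirm whether AES-NI is available on a Linux host, the CPU flags can be inspected (a sketch; /proc/cpuinfo is Linux-specific, and this check is not part of the hcp tools themselves):

```shell
# Check the CPU flags for the "aes" instruction-set extension (Linux only).
# If the flag is absent, encryption falls back to software AES and CPU load
# during encrypted transfers will be noticeably higher.
if grep -q -m1 -w aes /proc/cpuinfo 2>/dev/null; then
    echo "AES-NI available"
else
    echo "AES-NI not available"
fi
```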
Block Size
When transferring files over bandwidths above 10Gbps using the multi-channel function, the following parameters should be set between 1MB and 4MB.
- InitContentBlockSize
- MaxContentBlockSize
When these parameters are set to around 100KB, performance may not reach the expected level (approximately 60Gbps to 70Gbps on TCP multi-channel) because of the overhead of the synchronization processing that bonds channels for multiplexing.
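For example, a client configuration might set both parameters within the recommended range as below (a sketch; whether hcp.conf expects byte counts or unit suffixes for these values is an assumption here and should be confirmed against the configuration reference):

```
# hcp.conf (sketch): block sizes between 1MB and 4MB for
# multi-channel transfers over 10Gbps
InitContentBlockSize 2097152
MaxContentBlockSize  4194304
```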
Number of Connections in Multi-Channel
Between 8 and 16 connections is optimal for the multi-channel function. Beyond that range, performance tends to decrease due to the overhead of processing many connections.
Reference (data from CLEALINK TECHNOLOGY Co., Ltd.)
- TCP plain : single 14Gbps, 8 channels 61Gbps, 12 channels 68Gbps, 16 channels 62Gbps
- TCP encrypted : single 5Gbps, 8 channels 28Gbps, 12 channels 26Gbps, 16 channels 25Gbps
Memory Usage Limitation
MaxTotalBufferSize might become a performance bottleneck when multiple connections run simultaneously in broadband environments, because the buffer is shared among those sessions.
Log Level or Debug Log
When the log level is set to DEBUG in any of the following, or when the debug log is enabled, performance might decrease.
Configuration File or Command / Setting Item or Option Name:
- hcpd.conf : SystemLogLevel set to DEBUG
- hcp.conf or other client configuration files : ApplicationLogLevel set to DEBUG
- All commands : --investigation option

Antivirus Software
The real-time protection of Windows Defender might decrease file transfer throughput because it reduces disk I/O performance. This has not been observed with Norton Internet Security provided by Symantec Corporation.
HpFP Performance Characteristics

Transport performance can be affected by the following factors.
MTU Size
When the MTU size is about 1.5KB, HpFP throughput may not be able to reach 10Gbps. Jumbo frames, whose MTU is about 9KB, are recommended in environments above several Gbps.
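The benefit of jumbo frames can be seen from the packet rate the stack must sustain: at 10Gbps, moving from a 1500-byte to a 9000-byte MTU cuts the required packets per second by roughly a factor of six. A back-of-the-envelope calculation (illustrative only; framing overhead is ignored):

```shell
# Packets per second needed to carry 10Gbps at a given MTU.
rate_bps=10000000000
for mtu in 1500 9000; do
    pps=$(( rate_bps / (mtu * 8) ))
    echo "MTU ${mtu}: ${pps} packets/s"
done
```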
IP Network Buffer Size
When the following OS parameters are small (e.g., 122KB on CentOS), HpFP might not be able to achieve around 10Gbps throughput due to packet loss.
- net.core.rmem_max
- net.core.wmem_max
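These limits can be raised persistently via sysctl (a sketch; the 128MB value below is an illustrative assumption, not a vendor recommendation, and should be sized to the actual bandwidth-delay product):

```
# /etc/sysctl.d/99-hpfp.conf (sketch)
# Raise the maximum socket receive/send buffer sizes so HpFP can
# request large enough UDP buffers; apply with "sysctl --system".
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
```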
CPU Power Saving Mode
CPU performance may fall below what high-bandwidth transport requires under the following OS power-saving configurations, which may degrade HpFP performance.
- Windows : Processor Power Management
- Linux : /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
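On Linux, the active governor of each core can be inspected as below (a sketch; the cpufreq directory may be absent on virtual machines or containers). A governor such as powersave may cap clock frequency, while performance avoids frequency scaling:

```shell
# Print the current frequency-scaling governor of each CPU core.
# Prints nothing on systems without cpufreq support.
for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    if [ -r "$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$f")"
    fi
done
```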
Packet Queue Size on Intermediate Network Equipment
When the network includes devices with small packet queues or equivalent conditions, packet loss may occur without an RTT increase. In that case HpFP's congestion control function cannot work properly, which may decrease performance or fairness.
Congestion Control Mode
The aggressive mode of HpFP is an experimental function. In this mode, performance may decrease because of packet loss under high processing load, such as transmitting over 10Gbps. In particular, when using the multi-channel function, throughput might be unstable due to uneven distribution of processing across CPUs, which can be driven by the RSS (Receive Side Scaling) function of the NIC (Network Interface Card). In these cases, the Fair mode is recommended.
Considerations Regarding Features

When Using the Same UDP Port for Multiple hcpd Processes
When connecting from a client to hcpd processes configured with UDPListenAddress (deprecated), problems such as a connection timeout might occur and the transfer might not work as expected. HPFPListenAdr does not cause this problem.
Example:

hcpd1:
  UDPListenAddress 0.0.0.0:884
    * The UDP port number 65520 is used by default. The privileged port number is 884.
  systemctl start hcpd
    * Start hcpd as a daemon.

hcpd2:
  UDPListenAddress 0.0.0.0:1884
    * The UDP port number 65520 is used by default. The non-privileged port number is 1884.
  hcpd -f -c ~/hcpd.conf -p ~/hcpd.pid
    * Start hcpd in the foreground.
This configuration results in the two hcpd processes using the same UDP port of 65520, while their service ports of 884 and 1884 differ.
The following is the command to connect from a client to a host running in the above configuration.
  hcp --udp=D:D:D:D:D my_src.txt 192.168.100.100:884:my_dst.txt
This would, however, result in server connection timeout.
Workaround: Change the UDP port number for one of the hcpd processes as below.
UDPListenAddress 0.0.0.0:1884:65519
Workaround for the Issue That a Process is Killed by Linux OOM (Out Of Memory) Killer
As an OS mechanism, Linux monitors the memory consumption of each process and terminates processes when the system runs out of memory.
If the following buffer size is set to larger than the system memory, Linux OOM Killer might kill the process.
Configuration File / Setting Item:
- hcp.conf : MaxBufferSize
- hcpd.conf : MaxTotalBufferSize

Workaround: Lower the buffer size or increase the system memory.
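A quick sanity check before raising these settings is to compare the intended buffer size against the machine's physical memory (a sketch, Linux-only; the 8GB figure below is an arbitrary example, not a recommended setting):

```shell
# Compare an intended buffer size against MemTotal (Linux).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
buf_kb=$(( 8 * 1024 * 1024 ))   # example buffer: 8GB expressed in kB
if [ "$buf_kb" -lt "$mem_kb" ]; then
    echo "buffer fits within physical memory"
else
    echo "buffer exceeds physical memory: risk of OOM kill"
fi
```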
Log Level or Debug Log
When the log level is set to DEBUG in any of the following, or when the debug log is enabled, a timeout might happen because transmitting files takes a long time; this has been observed in environments involving NAT.
Configuration File or Command / Setting Item or Option Name:
- hcpd.conf : SystemLogLevel set to DEBUG
- hcp.conf or other client configuration files : ApplicationLogLevel set to DEBUG
- All commands : --investigation option