Notes

General Notes

Application performance can be affected by the following factors.

  • File Size

    Transferring a large number of small files tends to reduce throughput. In our performance evaluations of transfers of many equally sized files, throughput starts to drop once the file size falls to around 128 KB.

  • Encryption, Compression or Digest Calculation

    During a transfer, throughput may decrease because encryption or decryption processing becomes a CPU bottleneck.

    Throughput can also decrease during encryption if AES-NI hardware acceleration is unavailable.

    On networks faster than 1 Gbps, cipher methods that use CBC mode or an HMAC digest may become a cryptographic bottleneck even when AES-NI is working, which can reduce throughput.

  • Memory Usage Limitation

    MaxTotalBufferSize can become a performance bottleneck when multiple connections run simultaneously in broadband environments, because this buffer is shared among the sessions.

  • Log Level or Debug Log

    When any of the following log levels is set to DEBUG, or debug logging is enabled, performance may decrease.

    Configuration File or Command                  Setting Item or Option Name
    hcpd.conf                                      SystemLogLevel (set to DEBUG)
    hcp.conf or other client configuration files   ApplicationLogLevel (set to DEBUG)
    All commands                                   --investigation option
  • Antivirus Software

    The real-time protection feature of Windows Defender may reduce file transfer throughput because it degrades disk I/O performance. This behavior has not been observed with Norton Internet Security provided by Symantec Corporation.

HpFP Performance Characteristics

Transport performance can be affected by the following factors.

  • MTU Size

    When the MTU size is about 1.5 KB, HpFP throughput may not reach 10 Gbps. Jumbo frames, with an MTU of about 9 KB, are recommended in environments above several Gbps.
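    On Linux, a jumbo-frame MTU can be checked and applied with the standard ip command, as sketched below; the interface name eth0 is a placeholder assumption.

    ```shell
    # Show the current MTU of the interface (eth0 is a placeholder).
    ip link show eth0

    # Raise the MTU to 9000 bytes for jumbo frames (requires root, and
    # every device on the network path must also support this MTU).
    ip link set dev eth0 mtu 9000
    ```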

  • IP Network Buffer Size

    If the following OS parameters are small (for example, 122 KB on CentOS), HpFP might not reach around 10 Gbps throughput due to packet loss.

    • net.core.rmem_max
    • net.core.wmem_max
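    These kernel limits can be inspected and raised with sysctl, as sketched below; the 64 MB value is an illustrative assumption, not a value taken from this document.

    ```shell
    # Show the current maximum socket buffer sizes.
    sysctl net.core.rmem_max net.core.wmem_max

    # Raise both limits to 64 MB for the running system (requires root).
    sysctl -w net.core.rmem_max=67108864
    sysctl -w net.core.wmem_max=67108864

    # To persist across reboots, add the same keys to /etc/sysctl.conf
    # or a file under /etc/sysctl.d/.
    ```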
  • CPU Power Saving Mode

    The following OS power-saving settings can keep CPU performance below what high-bandwidth transport requires, which may degrade HpFP performance.

    • Windows

      Processor Power Management

    • Linux

      /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
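      On Linux, the active governor can be checked and changed through this sysfs path; switching to the performance governor, as shown below, is a common choice and an assumption here rather than a recommendation from this document.

      ```shell
      # Show the current scaling governor of each CPU.
      cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

      # Switch every CPU to the "performance" governor (requires root).
      for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
          echo performance > "$g"
      done
      ```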

  • Packet Queue Size on Intermediate Network Equipment

    If the network includes devices with small packet queues or equivalent conditions, packet loss can occur without any RTT increase. In that case HpFP's congestion control cannot function properly, which may reduce performance or fairness.

Considerations Regarding Features

  • When Multiple hcpd Processes Use the Same UDP Port

    When clients connect to multiple hcpd processes configured with UDPListenAddress (deprecated), problems such as connection timeouts might occur and the processes may not work as expected. HPFPListenAdr does not cause this problem.

    Example:

    hcpd1, started as a daemon with "systemctl start hcpd":

        UDPListenAddress 0.0.0.0:884
        * The UDP port number 65520 is used as default. The privileged port number is 884.

    hcpd2, started in the foreground with "hcpd -f -c ~/hcpd.conf -p ~/hcpd.pid":

        UDPListenAddress 0.0.0.0:1884
        * The UDP port number 65520 is used as default. The non-privileged port number is 1884.

    This configuration results in the two hcpd processes using the same UDP port of 65520, while their service ports, 884 and 1884, differ.

    The following is the command to connect from a client to a host running in the above configuration.

    hcp --udp=D:D:D:D:D my_src.txt 192.168.100.100:884:my_dst.txt

    This would, however, result in a server connection timeout.

    Workaround: Change the UDP port number for one of the hcpd processes as below.

    UDPListenAddress 0.0.0.0:1884:65519
  • Workaround for a Process Being Killed by the Linux OOM (Out of Memory) Killer

    As an OS mechanism, Linux monitors the memory consumption of each process and, when the system runs out of memory, terminates processes to reclaim it.

    If either of the following buffer sizes is set larger than the system memory, the Linux OOM Killer might kill the process.

    Configuration File    Setting Item
    hcp.conf              MaxBufferSize
    hcpd.conf             MaxTotalBufferSize

    Workaround: Lower the buffer size or increase the system memory.
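    As a quick sanity check before deployment, the configured limit can be compared against physical memory, as sketched below; the "MaxTotalBufferSize <bytes>" line format and the sample value are assumptions for illustration.

    ```shell
    # Sketch: warn when a configured buffer limit exceeds physical memory.
    # The "MaxTotalBufferSize <bytes>" line format is an assumption here.
    conf=$(mktemp)
    printf 'MaxTotalBufferSize 4294967296\n' > "$conf"   # sample value: 4 GiB

    limit=$(awk '/^MaxTotalBufferSize/ {print $2}' "$conf")
    mem=$(awk '/^MemTotal:/ {print $2 * 1024}' /proc/meminfo)

    if [ "$limit" -gt "$mem" ]; then
        echo "WARNING: buffer limit ($limit bytes) exceeds physical memory ($mem bytes)"
    else
        echo "OK: buffer limit fits in physical memory"
    fi
    rm -f "$conf"
    ```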

  • Log Level or Debug Log

    When any of the following log levels is set to DEBUG, or debug logging is enabled, file transfers may take so long that a timeout occurs. This behavior has been observed in environments that connect through NAT.

    Configuration File or Command                  Setting Item or Option Name
    hcpd.conf                                      SystemLogLevel (set to DEBUG)
    hcp.conf or other client configuration files   ApplicationLogLevel (set to DEBUG)
    All commands                                   --investigation option