KB Article #58499
FTP transfer speed in Binary vs. ASCII mode on Gateway
- FTP
- Binary vs. ASCII transfer
- Performance difference
Why is there a speed difference between Binary mode and ASCII mode?
Resolution
Binary mode is used for transferring executable programs, compressed files, and image/picture files. HTML and other text-based files are supposed to be transferred in ASCII mode.
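For illustration, here is a minimal sketch using Python's standard ftplib showing how each mode is selected (the host name, credentials, and file names are hypothetical):

    from ftplib import FTP

    ftp = FTP("ftp.example.com")        # hypothetical host
    ftp.login("user", "password")       # hypothetical credentials

    # Binary mode (FTP command TYPE I) for executables, archives, images.
    with open("archive.zip", "wb") as f:
        ftp.retrbinary("RETR archive.zip", f.write)

    # ASCII mode (FTP command TYPE A) for HTML/text files; the library
    # delivers each line without its EOL, so we add the local one back.
    with open("readme.txt", "w") as f:
        ftp.retrlines("RETR readme.txt", lambda line: f.write(line + "\n"))

    ftp.quit()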
When a transfer is made using FTP in "ASCII" mode, we are automatically in "Variable ASCII Text" mode: locally, each record is read up to the maximum record length, searching for an EOL character (0D0A on Windows, 0A on Unix). When this is found, the record is converted to "variable" format for the transfer, i.e. a header is created indicating the length of the record that follows, and the EOL characters are *stripped* (they are NOT sent).
On the receiving side, in ASCII mode, the receiver knows the length of each record it is about to receive and *adds* the EOL character native to its own OS at the end of the record.
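The following is a minimal sketch (not the actual Gateway implementation) of that "Variable ASCII Text" idea: the sender strips the native EOL and prefixes each record with a length header, and the receiver appends its own native EOL:

    def encode_records(text: bytes, sender_eol: bytes) -> bytes:
        """Sender side: strip the local EOL and prepend a 2-byte
        big-endian length header to every record."""
        records = text.split(sender_eol)
        if records and records[-1] == b"":
            records.pop()  # a trailing EOL leaves an empty final chunk
        out = b""
        for record in records:
            out += len(record).to_bytes(2, "big") + record
        return out

    def decode_records(stream: bytes, receiver_eol: bytes) -> bytes:
        """Receiver side: read each length-prefixed record and append
        the receiver's native EOL."""
        out, i = b"", 0
        while i < len(stream):
            length = int.from_bytes(stream[i:i + 2], "big")
            out += stream[i + 2:i + 2 + length] + receiver_eol
            i += 2 + length
        return out

    # Windows sender (EOL = 0D0A) to a Unix receiver (EOL = 0A):
    wire = encode_records(b"line one\r\nline two\r\n", sender_eol=b"\r\n")
    print(decode_records(wire, receiver_eol=b"\n"))   # b'line one\nline two\n'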
In binary mode, characters are read up to EOF and the data buffer is sent on the fly. In that case, *if* control characters (like the EOL) were in the data read, they are sent as if they were part of the data. That is what "creates" the ^M effect when we transfer from Windows to Unix (since 0D means nothing on Unix), and concatenated text (no visible EOL characters) when we go from Unix to Windows in binary mode.
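A tiny illustration of that ^M effect (plain Python, no FTP involved): a binary transfer copies the Windows CRLF bytes verbatim, so Unix-side tools see a stray 0x0D at the end of every line:

    windows_text = b"first line\r\nsecond line\r\n"   # as stored on Windows

    binary_copy = windows_text             # binary mode: bytes untouched
    print(binary_copy.split(b"\n"))
    # [b'first line\r', b'second line\r', b''] -- each trailing \r shows
    # up as ^M in Unix editors; in the reverse direction the bare 0x0A
    # bytes make Windows tools render one concatenated line.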
As binary streaming doesn't have to "convert" (text into variable format) and doesn't have to strip or add any control characters, binary streaming will go faster.
Also, it should be noted that a receiving FTP server doesn't verify anything; the flushing of received data is "assumed", so a transfer can complete (into the receiver's memory) well before the data is actually flushed to disk. That gives the impression that the transfer is very quick, but it doesn't guarantee a correct writing of the data.
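The client cannot force a remote server to flush, but on whichever side is receiving and writing the file, the usual safeguard looks like this sketch (assuming Python's ftplib; the host and file names are hypothetical):

    import os
    from ftplib import FTP

    ftp = FTP("ftp.example.com")          # hypothetical host
    ftp.login("user", "password")         # hypothetical credentials

    with open("payload.bin", "wb") as f:
        ftp.retrbinary("RETR payload.bin", f.write)
        f.flush()                # push the application buffer to the OS
        os.fsync(f.fileno())     # ask the OS to commit it to disk

    ftp.quit()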