Streaming and buffering with dd and ftp
By uligraef on Jul 30, 2012
Recently, at our last FraOSUG meeting, the question arose how to transfer data efficiently from one computer to another. Examples for zfs send typically pipe through ssh, as in
zfs send tank/data | ssh otherhost zfs receive ...
That can be slow because
- ssh has a small buffer which limits the data transfer rate
(fixed in newer versions, soon in Solaris).
- No buffering is specified, which can lead to alternating work:
the programs (zfs send and zfs receive in this case) work only on
one side while the other side waits on IO, and vice versa.
- ssh encrypts the data, which is good for Internet transfers,
but most of the time unnecessary in datacenters or at home.
You can add a gzip -1 to the pipe, which improves the transfer speed
when the data is compressible. gzip -1 does not compress as well as gzip -9,
but it is pretty fast (fast enough for 1Gbit ethernet, but not for faster networks...).
Unfortunately, most people who try that are not satisfied with the result,
because gzip uses only a small output buffer, which leads to the alternating behaviour (point 2 above).
The critical path is mostly the time on the local host, not the time on the server.
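Whether gzip -1 pays off depends on the data. A quick, hedged way to check locally (the file names here are made up for the demo, and all-zero data merely stands in for whatever zfs send emits):

```shell
# Generate 4 MB of compressible sample data (stand-in for a zfs send stream).
dd if=/dev/zero of=/tmp/sample.raw bs=1024k count=4 2>/dev/null

gzip -1 -c /tmp/sample.raw > /tmp/sample.gz1   # fastest level
gzip -9 -c /tmp/sample.raw > /tmp/sample.gz9   # best ratio, but slowest

# Compare the sizes (and, on real data, time the two runs).
ls -l /tmp/sample.raw /tmp/sample.gz1 /tmp/sample.gz9
```

On real zfs streams the ratio gap between -1 and -9 is usually smaller than on this artificial input, while the speed gap remains large.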
An alternate solution is to do it with ftp and dd:
> user ...
> pass ...
> cd /tmp
> put "| zfs send tank/data | gzip -1 | dd ibs=16k obs=8x1024k | dd bs=8x1024k" /tmp/tank-data.zfssend.gz
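The same trick works in the other direction for the later, delayed restore. A sketch of the get side (the zfs receive target here is only an illustration):

```
> user ...
> pass ...
> cd /tmp
> get tank-data.zfssend.gz "| dd ibs=16k obs=8x1024k | dd bs=8x1024k |
gunzip | zfs receive tank/data"
```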
- The data transfer is not encrypted, but perhaps that is ok inside the datacenter or at home.
- The classic ftp can replace the local file with a command by using "| ..." as the local filename.
That's in the docs, but very well hidden, and there is no example...
It has always worked; it's not a new feature (but it does not work in ncftp or some other clients).
(I used that 24 years ago to use the mainframe printer from my unix box...)
- The "| ..." syntax also works for the local file in ftp get.
- The zfs receive is delayed so that it does not stall the zfs send process.
Most ftp servers do not allow running a command remotely,
so it is necessary to have enough space on the server side to use ftp for this purpose.
- Only the achievable write speed limits the process on the server side.
- The data is compressed quickly with gzip -1; if that is not fast enough (10GbE, Infiniband, ...),
you should omit that step.
- The first dd creates large blocks from the small blocks coming out of gzip.
- The second dd works as a buffer; no additional software (mbuffer, rbuffer) needs
to be installed for this functionality:
- While the first dd writes its 8M block, the second dd can read the whole block.
- The first dd can immediately continue collecting the small 16k blocks.
- The second dd has fulfilled its read criterion and can write the block to the ftp.
- The ftp process reads the data in smaller blocks until the whole block is consumed.
- The filling of the buffer in the first dd and the reading by the ftp process run in parallel.
This enhances the speed.
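A minimal, runnable sketch of the double-dd stage in isolation, with /dev/zero standing in for the gzip output and wc standing in for the ftp client:

```shell
# 1024 blocks of 16k = 16 MB flow through the same repack-and-buffer
# stages as in the ftp example above; the byte count must be unchanged.
dd if=/dev/zero bs=16k count=1024 2>/dev/null \
  | dd ibs=16k obs=8x1024k 2>/dev/null \
  | dd bs=8x1024k 2>/dev/null \
  | wc -c
```

The first dd repacks 16k reads into 8M writes; the second dd reads and writes whole 8M blocks, so it holds one block in flight while the first dd keeps filling the next one.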
- Recipe for buffering: use a dd, then follow it with a dd whose blocksize equals the first dd's output blocksize.
- Optionally use ibs= and obs= on the first dd to create larger blocks by repacking the small ones.
- The 8x1024k syntax lets dd compute the numbers; it has been in the classic dd for a long time
and can also be used for other dd parameters like count= and skip= .
Apparently this part of the manpage is not read very carefully, although the megabyte and gigabyte
units are missed a lot (they can be written as 1kx1k and 1kx1kx1k - ok, I agree: it's not nice!).
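The arithmetic is easy to verify; both the classic Solaris dd and GNU dd accept the x multiplier (this is just a sanity check against /dev/zero):

```shell
# 8x1024k = 8 * 1024 * 1024 bytes = 8 MiB
dd if=/dev/zero bs=8x1024k count=1 2>/dev/null | wc -c

# the "not nice" megabyte spelling: 1kx1k = 1024 * 1024 bytes
dd if=/dev/zero bs=1kx1k count=1 2>/dev/null | wc -c
```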
I also used this to make a binary backup of my Solaris laptop disk: boot from cdrom, exit to a shell,
then start ftp and transfer the compressed dd image of the physical disk to a storage server,
without the need to store anything locally.
With a fast transfer program (e.g. netcat) you can also use the same dd buffering technique on the receiving side.