Posts: 3,616
Threads: 287
Joined: Jan 2003
Just wondering if anyone knows a faster way of copying files than the traditional code...
Code:
OPEN source$ FOR BINARY AS #1
OPEN dest$ FOR BINARY AS #2
byte$ = " "   ' GET reads LEN(byte$) bytes, so the string must not be empty
DO UNTIL EOF(1)
    GET #1, , byte$
    PUT #2, , byte$
LOOP
CLOSE
And don't say use the SHELL "copy..." alternative, because, as someone pointed out, others might be running a different-language version of DOS.
So, is there a way?
If only life let you press CTRL-Z.
--------------------------------------
Freebasic is like QB, except it doesn't suck.
Posts: 1,752
Threads: 21
Joined: Jun 2002
Code:
OPEN source$ FOR BINARY AS #1
OPEN dest$ FOR BINARY AS #2
DO UNTIL EOF(1)
    buffer$ = INPUT$(16384, #1)
    PUT #2, , buffer$
LOOP
CLOSE
Posts: 3,616
Threads: 287
Joined: Jan 2003
Ah, chunk-copying. Of course.
But what if the file is smaller than the chunk size?
Posts: 1,752
Threads: 21
Joined: Jun 2002
INPUT$ will adjust for this by returning a smaller string.
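In case a stricter interpreter raises "Input past end of file" instead of shortening the read, a defensive variant (just a sketch, not tested across every QB version) caps each INPUT$ at the bytes remaining, using LOF and LOC:

Code:
OPEN source$ FOR BINARY AS #1
OPEN dest$ FOR BINARY AS #2
DO WHILE LOC(1) < LOF(1)
    chunk& = LOF(1) - LOC(1)              ' bytes left in the file
    IF chunk& > 16384 THEN chunk& = 16384
    buffer$ = INPUT$(chunk&, #1)
    PUT #2, , buffer$
LOOP
CLOSE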
Posts: 3,616
Threads: 287
Joined: Jan 2003
Ah, excellent. 8)
Although could I pull an explanation out of you about why using chunks is faster? I mean, it still has to get a bunch of bytes from a file, but does it use a different method of file input or something?
Posts: 1,752
Threads: 21
Joined: Jun 2002
With a buffer, you're reading 16K (or whatever size you choose) from the disk at once, rather than a byte at a time. Each read will cost you an interrupt and possibly seek time, so the fewer reads the better.
Posts: 3,616
Threads: 287
Joined: Jan 2003
Eugh, why are interrupts so gosh-darned slow.
Thanks.
Posts: 1,956
Threads: 65
Joined: Jun 2003
Zack,
Every time you do an I/O operation on a disk there is a considerable amount of overhead beyond the actual transfer of bytes: seek time, rotational latency, etc. Therefore, transferring 10K bytes at a time for a 100K file is a heck of a lot faster than transferring 1K at a time. There is also the added overhead of executing extra instructions in your program for each "block" of data transferred.
*****
Posts: 3,616
Threads: 287
Joined: Jan 2003
So basically try to copy the largest chunk possible. Got it.
Posts: 1,956
Threads: 65
Joined: Jun 2003
Zack, yes; in fact your buffer size can be up to 32,767 bytes for one I/O operation. If you use multiple buffers, make sure the sum of their sizes doesn't exceed 32,767. Of course, if your program is very large, you will have to reduce this total buffer size.
The increase in speed is amazing on older machines of, say, 300 MHz or less. On newer machines of over 1 GHz, you won't notice any difference on files under 1 MB, and hardly any difference on files up to 100 MB.
My advice is to try the program without the buffering or blocking first. If it runs fast enough, don't bother with the added logic. If it's a general purpose program that you intend running on different machines, then go ahead and do the buffering. As my Jewish friends used to say "It's like chicken soup --- it wouldn't hurt."
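Putting this thread's advice together (large reads, with the buffer held under the 32,767-byte string limit), a sketch; BUFSIZE here is just a round number safely below the cap:

Code:
CONST BUFSIZE = 32000                 ' stay under QB's 32,767-byte string limit
OPEN source$ FOR BINARY AS #1
OPEN dest$ FOR BINARY AS #2
DO UNTIL EOF(1)
    buffer$ = INPUT$(BUFSIZE, #1)     ' returns a shorter string at the end
    PUT #2, , buffer$
LOOP
CLOSE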