The problem is that the read command doesn’t know whether there is more data still to come. You write something to an external process, and the external process writes something back – but how do you know when it is done sending data back to you? It might send back a few characters. It might send back megabytes of data.
If the external process sends a lot of data, it can be read a little at a time (as your script is doing), but without a definitive way to know when the process has finished sending, you have to rely on a heuristic, such as assuming that if a second goes by with no more data then it must be finished. It appears your script is doing exactly that.
The only way to improve the situation is to have some advance knowledge of what to expect, so your script can tell when it has received all of the data and stop trying to read more. If the process sends an “end of data” marker, for example, or something at the beginning of the output that indicates how much data will follow, then you can use that information to decide how much to read.
If you don’t have either of those situations, you may be able to force your own “end of data” marker. For example, here’s a modified version of my example script that causes 4 dollar signs to be sent back after the “ls” command finishes:
open process "#1"
write "ls; echo '$$$$'" & return to process "#1"
read from process "#1" until "$$$$"
close process "#1"
This ensures that the entire output of the ls command will always be read, provided that four consecutive dollar signs never occur in the command’s normal output. You may be able to adapt this approach to your situation.
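For comparison, here is a minimal sketch of the same sentinel technique in Python, using the standard subprocess module. The shell path, the run helper, and the $$$$ sentinel are illustrative choices, not part of the original script, and the sketch assumes each command’s output ends with a newline so the sentinel lands on its own line.

```python
import subprocess

SENTINEL = "$$$$"

# Start a long-lived shell whose stdin/stdout we control.
proc = subprocess.Popen(
    ["/bin/sh"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def run(cmd):
    """Send a command, then read lines until the sentinel appears.

    Assumes the command's own output ends with a newline, so the
    echoed sentinel appears on a line by itself.
    """
    proc.stdin.write(f"{cmd}; echo '{SENTINEL}'\n")
    proc.stdin.flush()
    lines = []
    for line in proc.stdout:
        if line.rstrip("\n") == SENTINEL:
            break  # end-of-data marker reached: stop reading
        lines.append(line.rstrip("\n"))
    return lines

output = run("ls")

proc.stdin.close()
proc.wait()
```

As with the script version, this only works if the sentinel never occurs in the command’s real output; pick a marker string that cannot plausibly appear there.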