postables, thanks for writing back, I appreciate it. It looks to me like ipfs-cluster-service is essentially an IPFS private swarm with a ctl that remotely controls pinning on remote peers? Is my understanding correct? That's not entirely what I would expect from 'DHT discovery'. I think the word 'discovery' was important, because of course ipfs-cluster-service uses the DHT, since hashes are still involved and so on.

I can't remotely control the clients that run our OS image, but they do need a script that manages ipfs pin jobs from a blockchain source, so I do not know ahead of time which peers will or won't need to do the pinning. So it seemed like ipfs-cluster-service was for 'clusters only', not for a peer on the regular IPFS network. I only want better job management in ipfs, not a private controlled cluster. Theoretically I could make our OS call back to somewhere and report its peer ID, but with so many different retail users running it, using the cluster secret key seems impractical, since each additional peer requires that secret key, and it is not clear to me what the risk of distributing it that widely would be.

All I really want to do is add a list of ipfs hashes, and neither the ipfs binary nor the cluster service seems really capable of that. What I am trying to do is pretty simple: instruct ipfs to import a list of ipfs hashes. I have tried ipfs-pack, I have tried ipfs get and ipfs add, and I have tried ipfs pin add, which I think is the best fit, but there is no job control. How do you add many ipfs hashes simultaneously in ipfs? It looks like it isn't possible. I then thought that a --read-timeout nseconds and a --retry flag would make sense, but that would require job control native to ipfs, and I feel that job control belongs in the ipfs daemon itself, to make it more practical for mirroring.
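
For now I'm working around it outside of ipfs. Here is a minimal sketch of the kind of "job control" I mean, just a shell wrapper around plain `ipfs pin add`. It assumes a file called hashes.txt (a hypothetical name) with one hash per line and a running local daemon; the retry count, sleep, and parallelism are arbitrary choices, not real ipfs features:

```bash
#!/bin/bash
# Workaround sketch: batch-pin a list of hashes with retries and limited parallelism.
# Assumes hashes.txt holds one hash per line and a local ipfs daemon is running.

pin_with_retry() {
  hash="$1"
  for attempt in 1 2 3; do
    # Retry each pin a few times before giving up on it.
    ipfs pin add "$hash" && return 0
    sleep 10
  done
  echo "failed to pin $hash" >&2
  return 1
}
export -f pin_with_retry

# xargs provides the crude job control: up to 4 pins running at a time.
xargs -P 4 -I{} bash -c 'pin_with_retry "$@"' _ {} < hashes.txt
```

That works, but it's exactly the sort of thing I'd rather see handled natively by the daemon.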