Multiprocessing a for loop

APorter1031

I am trying to create a script that clones repositories and then removes write access from the local repos. I have a list of repo objects, and I iterate over this list to clone and lock each repo.

I tried using multiprocessing to speed up this task, but it seems to have actually slowed it down...

def install():
    os.chdir(ROOT_DIR)
    if os.path.isdir("./repos"):
        for repo in getRepos():
            os.chdir(ROOT_DIR)
            #Process(target=repo.clone()).start()
            #Process(target=lock, args=(repo,)).start()
            repo.clone()
            lock(repo)
    else:
        os.mkdir("./repos")
        install()

The two commented-out lines show the subprocesses I tried to create. Am I using this wrong?

With multiprocessing my average execution time was 5.8 seconds; without it, 4.5 seconds.
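One note on the commented-out attempt: `Process(target=repo.clone())` calls `clone()` immediately in the parent process and passes its *return value* as `target`, so no work actually moves to the child. The callable itself should be passed, with arguments given via `args`. A minimal sketch of the difference, using a hypothetical `slow_task` standing in for `repo.clone`:

```python
from multiprocessing import Process
import time

def slow_task(name):
    # Stand-in for repo.clone(): any top-level, picklable function.
    time.sleep(0.1)
    return name

if __name__ == "__main__":
    # Wrong: slow_task("a") runs here, in the parent, and its return
    # value (a string, not a callable) becomes the child's target.
    # p = Process(target=slow_task("a"))

    # Right: pass the callable and its arguments separately.
    p = Process(target=slow_task, args=("a",))
    p.start()
    p.join()  # wait for the child to finish
```

Also note that starting processes without joining them, one per loop iteration, adds per-process startup overhead, which can easily outweigh the savings for short tasks.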

Roomm

Try something like:

from multiprocessing import Pool

def processRepo(repo):
    repo.clone()
    lock(repo)


def install():
    os.chdir(ROOT_DIR)
    if os.path.isdir("./repos"):
        pool = Pool()                      # one worker per CPU core by default
        pool.map(processRepo, getRepos())  # clone and lock repos in parallel
        pool.close()                       # no more tasks will be submitted
        pool.join()                        # wait for all workers to finish
    else:
        os.mkdir("./repos")
        install()
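One caveat with `Pool`: on platforms that spawn rather than fork (Windows, and macOS on recent Python versions), each worker re-imports the script, so pool creation must sit behind an `if __name__ == "__main__":` guard or it will recurse. A minimal, self-contained sketch of the same pattern, with a hypothetical `square` in place of `processRepo`:

```python
from multiprocessing import Pool

def square(n):
    # Stand-in for processRepo: must be a top-level, picklable function.
    return n * n

if __name__ == "__main__":
    # The guard keeps spawned workers from re-creating the pool
    # when they re-import this module.
    with Pool() as pool:  # context manager tears the pool down for us
        results = pool.map(square, range(5))
    print(results)  # → [0, 1, 4, 9, 16]
```

Whether this beats the serial version depends on the workload: cloning is mostly network- and disk-bound, so a thread pool (`multiprocessing.dummy.Pool` or `concurrent.futures.ThreadPoolExecutor`) may give similar speedups with less startup overhead.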
