When doing a huge number of parallel transfers, we must not add them to
the per_transfer list frivolously since every entry uses memory. This was
previously done without really considering millions or billions of
transfers: massive parallelism would use a lot of memory for no good
purpose.

The queue is now limited to twice the parallelism number.

This makes the 'Qd' value in the parallel progress meter mostly useless
for users, but it still serves us as a debug display for now.
Reported-by: justchen1369 on github
Fixes #8933
Closes #9389
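To illustrate the effect of the cap, here is a minimal standalone sketch
(not curl source; PARALLEL_MAX, TOTAL_URLS and the variable names are made
up for the example). It shows that the pending list can never grow past
twice the parallelism, no matter how many transfers are requested:

#include <stdio.h>

#define PARALLEL_MAX 50       /* hypothetical --parallel-max value */
#define TOTAL_URLS 1000000L   /* hypothetical number of URLs given */

int main(void)
{
  long pending = 0;   /* entries on the list, the role 'all_pers' plays */
  long created = 0;   /* transfers created so far */
  long done = 0;      /* transfers completed and removed */
  long peak = 0;      /* highest queue length observed */

  while(done < TOTAL_URLS) {
    /* only create new entries while the list holds fewer than twice the
       parallelism, keeping memory use bounded regardless of TOTAL_URLS */
    while((created < TOTAL_URLS) && (pending < 2 * PARALLEL_MAX)) {
      pending++;
      created++;
    }
    if(pending > peak)
      peak = pending;
    /* pretend one transfer finishes and is removed from the list */
    pending--;
    done++;
  }
  printf("peak queue length: %ld (limit %d)\n", peak, 2 * PARALLEL_MAX);
  return 0;
}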
struct per_transfer *transfers; /* first node */
static struct per_transfer *transfersl; /* last node */
+static long all_pers; /* number of 'per_transfer' structs in the list */
/* add_per_transfer creates a new 'per_transfer' node in the linked
   list of transfers */
  }
  *per = p;
  all_xfers++; /* count total number of transfers added */
+  all_pers++;
+
  return CURLE_OK;
}
    transfersl = p;
  free(per);
+  all_pers--;
  return n;
}
  bool sleeping = FALSE;
  *addedp = FALSE;
  *morep = FALSE;
-  result = create_transfer(global, share, addedp);
-  if(result)
-    return result;
+  if(all_pers < (global->parallel_max*2)) {
+    result = create_transfer(global, share, addedp);
+    if(result)
+      return result;
+  }
  for(per = transfers; per && (all_added < global->parallel_max);
      per = per->next) {
    bool getadded = FALSE;