From: Zdenek Dohnal
Date: Tue, 8 Jul 2025 13:51:59 +0000 (+0200)
Subject: scheduler: Fix cleaning jobs by loading times when needed
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=e129afd55ae9873451f6d5299bbaff19b42d2098;p=thirdparty%2Fcups.git

scheduler: Fix cleaning jobs by loading times when needed

Currently, when jobs are loaded from job.cache, the correct times for
`history_time` and `file_time` are not set, leaving both at 0. Such jobs
escape cleanup by cupsd when it is due, eating up memory over time.

This happens because none of the functions that set those job members is
called: `cupsdSetJobState()` runs only when a job changes state,
`cupsdUpdateJobs()` only during a partial reload, and `cupsdLoadJob()` is
guarded by a condition in `load_job_cache()`.

The fix changes the conditional in `load_job_cache()` so the job is also
loaded if cupsd is configured to clean up job history, or if cupsd should
clean up job files and the job still has some.
---

diff --git a/scheduler/job.c b/scheduler/job.c
index 4383c7743f..047e5dfc94 100644
--- a/scheduler/job.c
+++ b/scheduler/job.c
@@ -4428,7 +4428,8 @@ load_job_cache(const char *filename)	/* I - job.cache filename */
       cupsArrayAdd(ActiveJobs, job);
     else if (job->state_value > IPP_JSTATE_STOPPED)
     {
-      if (!job->completed_time || !job->creation_time || !job->name || !job->koctets)
+      if (!job->completed_time || !job->creation_time || !job->name || !job->koctets ||
+          JobHistory < INT_MAX || (JobFiles < INT_MAX && job->num_files))
      {
        cupsdLoadJob(job);
        unload_job(job);
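
Note (not part of the upstream commit): below is a minimal sketch of the kind of
bookkeeping the message refers to, i.e. deriving `history_time` and `file_time`
from the completion time and the `JobHistory`/`JobFiles` intervals so the job
becomes eligible for cleanup. The structure, the interval values, and the
function `set_cleanup_times()` are hypothetical illustrations, not the actual
cupsd code; only the member and variable names come from the patch and message
above.

#include <limits.h>
#include <time.h>

/* Illustrative stand-in for the relevant cupsd_job_t members;
 * the real structure lives in scheduler/job.h. */
typedef struct
{
  time_t completed_time;	/* When the job finished */
  time_t history_time;		/* When to purge the job from history */
  time_t file_time;		/* When to remove the job's document files */
  int    num_files;		/* Number of document files still on disk */
} example_job_t;

static int JobHistory = 86400;	/* Hypothetical: keep history for one day */
static int JobFiles   = 3600;	/* Hypothetical: keep files for one hour */

/* Sketch of the bookkeeping that is skipped when a cached job never goes
 * through cupsdLoadJob()/cupsdSetJobState(): with both times left at 0,
 * the job is never considered for cleanup. */
static void
set_cleanup_times(example_job_t *job)
{
  if (JobHistory < INT_MAX)
    job->history_time = job->completed_time + JobHistory;

  if (JobFiles < INT_MAX && job->num_files > 0)
    job->file_time = job->completed_time + JobFiles;
}

The new condition in load_job_cache() mirrors this logic: whenever either
cleanup interval is finite (JobHistory < INT_MAX, or JobFiles < INT_MAX with
files still present), the job is loaded so these times actually get set.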