Currently, when jobs are loaded from job.cache, we don't set correct
times for `history_time` and `file_time`, leaving them at 0. Such jobs
then avoid cleanup by cupsd when it is due, eating up memory space.
This happens because none of the functions which set those job members
are called: `cupsdSetJobState()` is used only when changing job states,
`cupsdUpdateJobs()` runs only during a partial reload, and
`cupsdLoadJob()` is guarded by a condition in `load_job_cache()`.
The fix is to change the conditional in `load_job_cache()` so that the
job is loaded if cupsd is set to clean up job history, or if cupsd
should clean up job files and the job still has some.
cupsArrayAdd(ActiveJobs, job);
else if (job->state_value > IPP_JSTATE_STOPPED)
{
- if (!job->completed_time || !job->creation_time || !job->name || !job->koctets)
+ if (!job->completed_time || !job->creation_time || !job->name || !job->koctets ||
+ JobHistory < INT_MAX || (JobFiles < INT_MAX && job->num_files))
{
cupsdLoadJob(job);
unload_job(job);