The commit below reworked iommu_put_pages_list() so that it no longer
does a list_del() on every entry. The expectation was that all callers
already re-init the list themselves, making the per-item deletion
needless overhead.
It was missed that fq_ring_free_locked() reuses its list after calling
iommu_put_pages_list(), so the stale list still points at freed struct
pages and will crash or trip WARN/BUG/etc.
Reinit the list to empty in fq_ring_free_locked() after calling
iommu_put_pages_list().
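
In code terms the contract after that commit looks roughly like this (a
hedged sketch, not verbatim kernel code; iommu_pages_list_add() is the
list-fill helper from the same freelist API, and "virt" stands for a
page previously allocated with iommu_alloc_pages_node_sz()):

	struct iommu_pages_list list = IOMMU_PAGES_LIST_INIT(list);

	iommu_pages_list_add(&list, virt); /* queue a page for freeing */
	iommu_put_pages_list(&list);       /* pages freed, list invalid */

	/* Required before the same list can be used again: */
	list = IOMMU_PAGES_LIST_INIT(list);
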
Audit to see if any other callers of iommu_put_pages_list() need the list
to be empty (the safe stack-variable pattern is sketched after the list):
- iommu_dma_free_fq_single() and iommu_dma_free_fq_percpu() immediately
  free the memory
- iommu_v1_map_pages(), v1_free_pgtable(), domain_exit(),
  riscv_iommu_map_pages() use a stack variable which goes out of scope
- intel_iommu_tlb_sync() uses a gather in an iotlb_sync() callback; the
  caller re-inits the gather
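
For contrast, a sketch of the stack-variable pattern those map/free
paths rely on (illustrative only, not the exact driver code):

	static void example_free(void) /* hypothetical caller */
	{
		struct iommu_pages_list freelist =
			IOMMU_PAGES_LIST_INIT(freelist);

		/* ... gather no-longer-needed pages onto freelist ... */
		iommu_put_pages_list(&freelist);
		/* freelist is invalid here, but it goes out of scope
		 * without ever being touched again, so no re-init is
		 * needed */
	}
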
Fixes: 13f43d7cf3e0 ("iommu/pages: Formalize the freelist API")
Reported-by: Borah, Chaitanya Kumar <chaitanya.kumar.borah@intel.com>
Closes: https://lore.kernel.org/r/SJ1PR11MB61292CE72D7BE06B8810021CB997A@SJ1PR11MB6129.namprd11.prod.outlook.com
Tested-by: Borah, Chaitanya Kumar <chaitanya.kumar.borah@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/0-v1-7d4dfa6140f7+11f04-iommu_freelist_init_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
fq->entries[idx].iova_pfn,
fq->entries[idx].pages);
+ fq->entries[idx].freelist =
+ IOMMU_PAGES_LIST_INIT(fq->entries[idx].freelist);
fq->head = (fq->head + 1) & fq->mod_mask;
}
}
* iommu_put_pages_list - free a list of pages.
* @list: The list of pages to be freed
*
- * Frees a list of pages allocated by iommu_alloc_pages_node_sz().
+ * Frees a list of pages allocated by iommu_alloc_pages_node_sz(). On return the
+ * passed list is invalid, the caller must use IOMMU_PAGES_LIST_INIT to reinit
+ * the list if it expects to use it again.
*/
void iommu_put_pages_list(struct iommu_pages_list *list)
{