If the HGFS server node cache becomes full and every node in it is of
a type that cannot be forcibly closed, and so cannot be removed, the
removal loop spins forever.

Fix this by bounding the loop to the number of entries in the list and
exiting with failure once every entry has been examined.
Signed-off-by: Marcelo Vanzin <mvanzin@vmware.com>
    HgfsFileNode *lruNode = NULL;
    HgfsHandle handle;
    Bool found = FALSE;
+   uint32 numOpenNodes = session->numCachedOpenNodes;
    ASSERT(session);
    ASSERT(session->numCachedOpenNodes > 0);
    /*
     * Remove the first node in the list that has no server lock or file
     * context and is not open in sequential mode.
     */
-   while (!found) {
+   while (!found && (numOpenNodes-- > 0)) {
       lruNode = DblLnkLst_Container(session->nodeCachedList.next,
                                     HgfsFileNode, links);