The VMA manager is page-size based, so drm_vma_node_size() returns the size
in pages. However, drm_gem_mmap_obj() requires the size in bytes. Shift the
node size by PAGE_SHIFT so mmaps no longer fail with -EINVAL because the
buffer appears too small.
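For illustration only, a minimal user-space sketch of the page-to-byte
conversion the fix performs; PAGE_SHIFT is hard-coded to 12 here as an
assumption for 4 KiB pages, and the DRM helpers themselves are not part of
the snippet:

    #include <stdio.h>

    /* Assumed for illustration: 4 KiB pages, i.e. PAGE_SHIFT == 12. */
    #define PAGE_SHIFT 12

    int main(void)
    {
        /* drm_vma_node_size() reports the node size in pages ... */
        unsigned long size_in_pages = 16;

        /* ... but drm_gem_mmap_obj() expects bytes, hence the shift. */
        unsigned long size_in_bytes = size_in_pages << PAGE_SHIFT;

        printf("%lu pages -> %lu bytes\n", size_in_pages, size_in_bytes);
        return 0;
    }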
This bug was introduced in commit 0de23977cfeb5b357ec884ba15417ae118ff9e9b
("drm/gem: convert to new unified vma manager").
This fixes the i915 GTT mmap failure reported by Sedat Dilek in:
  Re: linux-next: Tree for Jul 25 [ call-trace: drm | drm-intel related? ]
Cc: Daniel Vetter <[email protected]>
Cc: Chris Wilson <[email protected]>
Signed-off-by: David Herrmann <[email protected]>
Reported-by: Sedat Dilek <[email protected]>
Tested-by: Sedat Dilek <[email protected]>
Signed-off-by: Dave Airlie <[email protected]>
 	}
 	obj = container_of(node, struct drm_gem_object, vma_node);
-	ret = drm_gem_mmap_obj(obj, drm_vma_node_size(node), vma);
+	ret = drm_gem_mmap_obj(obj, drm_vma_node_size(node) << PAGE_SHIFT, vma);
 	mutex_unlock(&dev->struct_mutex);