===================
Userland interfaces
===================

The DRM core exports several interfaces to applications, generally
intended to be used through corresponding libdrm wrapper functions. In
addition, drivers export device-specific interfaces for use by userspace
drivers & device-aware applications through ioctls and sysfs files.

External interfaces include: memory mapping, context management, DMA
operations, AGP management, vblank control, fence management, memory
management, and output management.

Cover generic ioctls and sysfs layout here. We only need high-level
info, since man pages should cover the rest.

Render nodes
============

The DRM core provides multiple character devices for user-space to use.
Depending on which device is opened, user-space can perform a different
set of operations (mainly ioctls). The primary node is always created
and called card<num>. Additionally, a currently unused control node,
called controlD<num>, is also created. The primary node provides all
legacy operations and historically was the only interface used by
userspace. With KMS, the control node was introduced. However, the
planned KMS control interface was never implemented, so the control
node remains unused to date.

With the increased use of offscreen renderers and GPGPU applications,
clients no longer require running compositors or graphics servers to
make use of a GPU. But the DRM API required unprivileged clients to
authenticate to a DRM-Master prior to getting GPU access. To avoid this
step and to grant clients GPU access without authenticating, render
nodes were introduced. Render nodes solely serve render clients, that
is, no modesetting or privileged ioctls can be issued on render nodes.
Only non-global rendering commands are allowed. If a driver supports
render nodes, it must advertise it via the DRIVER_RENDER DRM driver
capability. If not supported, the primary node must be used for render
clients together with the legacy drmAuth authentication procedure.

If a driver advertises render node support, the DRM core will create a
separate render node called renderD<num>. There will be one render node
per device. No ioctls except PRIME-related ioctls will be allowed on
this node; in particular, GEM_OPEN is explicitly prohibited. Render
nodes are designed to avoid the buffer leaks that occur if clients
guess the flink names or mmap offsets on the legacy interface. In
addition to this basic interface, drivers must mark their
driver-dependent render-only ioctls as DRM_RENDER_ALLOW so render
clients can use them. Driver authors must be careful not to allow any
privileged ioctls on render nodes.

With render nodes, user-space can now control access to the render node
via basic file-system access-modes. A running graphics server which
authenticates clients on the privileged primary/legacy node is no longer
required. Instead, a client can open the render node and is immediately
granted GPU access. Communication between clients (or servers) is done
via PRIME. FLINK from render node to legacy node is not supported. New
clients must not use the insecure FLINK interface.

Besides dropping all modeset/global ioctls, render nodes also drop the
DRM-Master concept. There is no reason to associate render clients with
a DRM-Master as they are independent of any graphics server. Besides,
they must work without any running master, anyway. Drivers must be able
to run without a master object if they support render nodes. If, on the
other hand, a driver requires shared state between clients which is
visible to user-space and accessible beyond open-file boundaries, it
cannot support render nodes.

VBlank event handling
=====================

The DRM core exposes two vertical blank related ioctls:

DRM_IOCTL_WAIT_VBLANK
    This takes a struct drm_wait_vblank structure as its argument, and
    it is used to block or request a signal when a specified vblank
    event occurs.

DRM_IOCTL_MODESET_CTL
    This was only used for user-mode-setting drivers around modesetting
    changes to allow the kernel to update the vblank interrupt after
    mode setting, since on many devices the vertical blank counter is
    reset to 0 at some point during modeset. Modern drivers should not
    call this any more since with kernel mode setting it is a no-op.

This second part of the GPU Driver Developer's Guide documents driver
code, implementation details and also all the driver-specific userspace
interfaces. Especially since all hardware-acceleration interfaces to
userspace are driver specific for efficiency and other reasons, these
interfaces can be rather substantial. Hence every driver has its own
chapter.