1 .\"
2 .\" epoll by Davide Libenzi ( efficient event notification retrieval )
3 .\" Copyright (C) 2003 Davide Libenzi
4 .\"
5 .\" This program is free software; you can redistribute it and/or modify
6 .\" it under the terms of the GNU General Public License as published by
7 .\" the Free Software Foundation; either version 2 of the License, or
8 .\" (at your option) any later version.
9 .\"
10 .\" This program is distributed in the hope that it will be useful,
11 .\" but WITHOUT ANY WARRANTY; without even the implied warranty of
12 .\" MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 .\" GNU General Public License for more details.
14 .\"
15 .\" You should have received a copy of the GNU General Public License
16 .\" along with this program; if not, write to the Free Software
17 .\" Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
18 .\"
19 .\" Davide Libenzi <davidel@xmailserver.org>
20 .\"
21 .\"
22 .TH EPOLL 7 "2002-10-23" Linux "Linux Programmer's Manual"
23 .SH NAME
24 epoll \- I/O event notification facility
25 .SH SYNOPSIS
26 .B #include <sys/epoll.h>
27 .SH DESCRIPTION
.B epoll
is a variant of
.BR poll (2)
that can be used either as an Edge Triggered or a Level Triggered
interface and scales well to large numbers of watched file descriptors.
Three system calls are provided to set up and control an
.B epoll
set:
.BR epoll_create (2),
.BR epoll_ctl (2),
and
.BR epoll_wait (2).
.PP
An
.B epoll
set is connected to a file descriptor created by
.BR epoll_create (2).
Interest in certain file descriptors is then registered via
.BR epoll_ctl (2).
Finally, the actual wait is started by
.BR epoll_wait (2).
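.PP
The following fragment is a minimal sketch of that sequence (error checks
omitted); the size hint passed to
.BR epoll_create (2)
and the descriptor
.I sock
are arbitrary placeholders:
.PP
.nf
#include <sys/epoll.h>

struct epoll_event ev, events[10];
int epfd, nready;

epfd = epoll_create(10);            /* create the epoll set */

ev.events = EPOLLIN;                /* register interest in input on 'sock' */
ev.data.fd = sock;
epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev);

nready = epoll_wait(epfd, events, 10, \-1);  /* wait for ready descriptors */
.fi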
.SH NOTES
The
.B epoll
event distribution interface is able to behave both as Edge Triggered
(ET) and Level Triggered (LT).
The difference between the ET and LT
event distribution mechanisms can be described as follows.
Suppose that
this scenario happens:
.TP
.B 1
The file descriptor that represents the read side of a pipe
.RB ( RFD )
is added to the
.B epoll
device.
.TP
.B 2
A pipe writer writes 2 kB of data on the write side of the pipe.
.TP
.B 3
A call to
.BR epoll_wait (2)
is done that will return
.B RFD
as a ready file descriptor.
.TP
.B 4
The pipe reader reads 1 kB of data from
.BR RFD .
.TP
.B 5
A call to
.BR epoll_wait (2)
is done.
.PP
If the
.B RFD
file descriptor has been added to the
.B epoll
interface using the
.B EPOLLET
flag, the call to
.BR epoll_wait (2)
done in step
.B 5
will probably hang despite the available data still present in the file
input buffer; meanwhile the remote peer might be expecting a response based
on the data it already sent.
The reason for this is that Edge Triggered event
distribution delivers events only when events happen on the monitored file.
So, in step
.B 5
the caller might end up waiting for some data that is already present inside
the input buffer.
In the above example, an event on
.B RFD
will be generated because of the write done in step
.B 2
and the event is consumed in step
.BR 3 .
Since the read operation done in step
.B 4
does not consume the whole buffer data, the call to
.BR epoll_wait (2)
done in step
.B 5
might block indefinitely.
The
.B epoll
interface, when used with the
.B EPOLLET
flag (Edge Triggered),
should use non-blocking file descriptors to avoid having a blocking
read or write starve the task that is handling multiple file descriptors.
The suggested way to use
.B epoll
as an Edge Triggered
.RB ( EPOLLET )
interface is described below (a sketch of such a drain loop follows the
list), and possible pitfalls to avoid are covered afterwards:
.RS
.TP
.B i
with non-blocking file descriptors; and
.TP
.B ii
by waiting for an event only after
.BR read (2)
or
.BR write (2)
return EAGAIN.
.RE
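.PP
As an illustration of point
.BR ii ,
the fragment below sketches how a handler might drain a non-blocking
descriptor before going back to
.BR epoll_wait (2);
the buffer size and the descriptor
.I fd
are arbitrary placeholders:
.PP
.nf
#include <errno.h>
#include <unistd.h>

char buf[4096];
ssize_t nread;

for (;;) {
    nread = read(fd, buf, sizeof(buf));
    if (nread > 0) {
        /* consume the data ... */
        continue;
    }
    if (nread == \-1 && errno == EAGAIN)
        break;          /* input space exhausted; safe to wait again */
    break;              /* nread == 0 (EOF) or a real error */
}
.fi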
.PP
By contrast, when used as a Level Triggered interface,
.B epoll
is by all means a faster
.BR poll (2),
and can be used wherever the latter is used, since it shares the
same semantics.
Since even with the Edge Triggered
.B epoll
multiple events can be generated upon receipt of multiple chunks of data,
the caller has the option to specify the
.B EPOLLONESHOT
flag, to tell
.B epoll
to disable the associated file descriptor after the receipt of an event with
.BR epoll_wait (2).
When the
.B EPOLLONESHOT
flag is specified,
it is the caller's responsibility to rearm the file descriptor using
.BR epoll_ctl (2)
with
.BR EPOLL_CTL_MOD .
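.PP
A minimal sketch of that rearming step follows; the descriptors
.I fd
and
.I epfd
are assumed to already exist:
.PP
.nf
struct epoll_event ev;

ev.events = EPOLLIN | EPOLLET | EPOLLONESHOT;
ev.data.fd = fd;

/* after handling the event for 'fd', rearm it */
if (epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev) < 0)
    perror("epoll_ctl: EPOLL_CTL_MOD");
.fi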
.SH EXAMPLE FOR SUGGESTED USAGE
While the usage of
.B epoll
when employed as a Level Triggered interface does have the same
semantics as
.BR poll (2),
Edge Triggered usage requires more clarification to avoid stalls
in the application event loop.
In this example, listener is a
non-blocking socket on which
.BR listen (2)
has been called.
The function do_use_fd() uses the newly ready
file descriptor until EAGAIN is returned by either
.BR read (2)
or
.BR write (2).
An event-driven state machine application should, after having received
EAGAIN, record its current state so that at the next call to do_use_fd()
it will continue to
.BR read (2)
or
.BR write (2)
from where it stopped before.
.PP
.nf
#define MAX_EVENTS 10

struct epoll_event ev, events[MAX_EVENTS];
struct sockaddr_storage local;
socklen_t addrlen;
int kdpfd, nfds, n, client, listener;

/* Set up the non-blocking listening socket 'listener'
   (socket(), bind(), listen()) \- details omitted. */

kdpfd = epoll_create(MAX_EVENTS);
if (kdpfd < 0) {
    perror("epoll_create");
    return \-1;
}
ev.events = EPOLLIN;
ev.data.fd = listener;
if (epoll_ctl(kdpfd, EPOLL_CTL_ADD, listener, &ev) < 0) {
    perror("epoll_ctl: listener");
    return \-1;
}

for (;;) {
    nfds = epoll_wait(kdpfd, events, MAX_EVENTS, \-1);

    for (n = 0; n < nfds; ++n) {
        if (events[n].data.fd == listener) {
            addrlen = sizeof(local);
            client = accept(listener, (struct sockaddr *) &local,
                            &addrlen);
            if (client < 0) {
                perror("accept");
                continue;
            }
            setnonblocking(client);
            ev.events = EPOLLIN | EPOLLET;
            ev.data.fd = client;
            if (epoll_ctl(kdpfd, EPOLL_CTL_ADD, client, &ev) < 0) {
                fprintf(stderr, "epoll set insertion error: fd=%d\\n",
                        client);
                return \-1;
            }
        } else {
            do_use_fd(events[n].data.fd);
        }
    }
}
.fi
.PP
When used as an Edge Triggered interface, for performance reasons, it is
possible to add the file descriptor inside the epoll interface
.RB ( EPOLL_CTL_ADD )
once by specifying
.RB ( EPOLLIN | EPOLLOUT ).
This allows you to avoid
continuously switching between
.B EPOLLIN
and
.B EPOLLOUT
by calling
.BR epoll_ctl (2)
with
.BR EPOLL_CTL_MOD .
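.PP
For example (a sketch only; the descriptors
.I epfd
and
.I fd
are assumed to already exist):
.PP
.nf
struct epoll_event ev;

ev.events = EPOLLIN | EPOLLOUT | EPOLLET;
ev.data.fd = fd;

/* register once for both read and write readiness */
if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0)
    perror("epoll_ctl: EPOLL_CTL_ADD");
.fi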
.SH QUESTIONS AND ANSWERS
.TP
.B Q1
What happens if you add the same fd to an epoll set twice?
.TP
.B A1
You will probably get EEXIST.
However, it is possible that two
threads may add the same fd twice.
This is a harmless condition.
.TP
.B Q2
Can two
.B epoll
sets wait for the same fd?
If so, are events reported to both
.B epoll
sets?
.TP
.B A2
Yes, and the events would be reported to both.
However, it is not recommended.
.TP
.B Q3
Is the
.B epoll
fd itself poll/epoll/selectable?
.TP
.B A3
Yes.
.TP
.B Q4
What happens if the
.B epoll
fd is put into its own fd set?
.TP
.B A4
It will fail.
However, you can add an
.B epoll
fd inside another epoll fd set.
.TP
.B Q5
Can I send the
.B epoll
fd over a UNIX domain socket to another process?
.TP
.B A5
No.
.TP
.B Q6
Will the close of an fd cause it to be removed from all
.B epoll
sets automatically?
.TP
.B A6
Yes.
.TP
.B Q7
If more than one event comes in between
.BR epoll_wait (2)
calls, are they combined or reported separately?
.TP
.B A7
They will be combined.
.TP
.B Q8
Does an operation on an fd affect the already collected but not yet reported
events?
.TP
.B A8
You can do two operations on an existing fd.
Remove would be meaningless for
this case.
Modify will re-read available I/O.
.TP
.B Q9
Do I need to continuously read/write an fd until EAGAIN when using the
.B EPOLLET
flag (Edge Triggered behaviour)?
.TP
.B A9
No, you don't.
Receiving an event from
.BR epoll_wait (2)
should suggest to you that the file descriptor is ready
for the requested I/O operation.
You simply have to consider it ready until you receive the
next EAGAIN.
When and how you use the file descriptor is entirely up
to you.
Also, the condition that the read/write I/O space is exhausted can
be detected by checking the amount of data read from, or written to, the
target file descriptor.
For example, if you call
.BR read (2)
asking to read a certain amount of data and
.BR read (2)
returns a lower number of bytes, you can be sure to have exhausted the read
I/O space for that file descriptor.
The same is valid when writing with
.BR write (2).
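.PP
The short-count check described in
.B A9
could be sketched as follows (the buffer size and the descriptor
.I fd
are arbitrary placeholders):
.PP
.nf
char buf[4096];
ssize_t nread;

nread = read(fd, buf, sizeof(buf));
if (nread > 0 && (size_t) nread < sizeof(buf)) {
    /* fewer bytes than requested: the read I/O space is
       exhausted, so it is safe to wait for the next event */
}
.fi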
.SH POSSIBLE PITFALLS AND WAYS TO AVOID THEM
.TP
.B o Starvation (Edge Triggered)
.PP
If there is a large amount of I/O space,
it is possible that by trying to drain
it the other files will not get processed, causing starvation.
This is not specific to
.BR epoll .
.PP
The solution is to maintain a ready list
and mark the file descriptor as ready
in its associated data structure, thereby allowing the application to
remember which files need to be processed but still round robin amongst
all the ready files.
This also supports ignoring subsequent events you
receive for fds that are already ready.
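.PP
One possible shape for such a per-descriptor structure (purely
illustrative; the field names are not part of any epoll API):
.PP
.nf
struct fd_state {
    int fd;
    int ready;                /* set when an event arrives */
    struct fd_state *next;    /* links the ready list */
};
.fi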
.TP
.B o If using an event cache...
.PP
If you use an event cache or store all the fds returned from
.BR epoll_wait (2),
then make sure to provide a way to mark
its closure dynamically (i.e., caused by
a previous event's processing).
Suppose you receive 100 events from
.BR epoll_wait (2),
and in event #47 a condition causes event #13 to be closed.
If you remove the structure and
.BR close (2)
the fd for event #13, then your
event cache might still say there are events waiting for that fd, causing
confusion.
.PP
One solution for this is to call, during the processing of event 47,
.BR epoll_ctl ( EPOLL_CTL_DEL )
to delete fd 13 and
.BR close (2),
then mark its associated
data structure as removed and link it to a cleanup list.
If you find another
event for fd 13 in your batch processing, you will discover the fd had been
previously removed and there will be no confusion.
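.PP
A sketch of that bookkeeping follows (illustrative only; the structure
pointer
.IR state13 ,
its
.I removed
flag, and the
.I cleanup_list
are hypothetical names, not part of any epoll API):
.PP
.nf
struct epoll_event dummy;

/* while processing event #47, fd 13 has to go away */
epoll_ctl(epfd, EPOLL_CTL_DEL, state13->fd, &dummy);
close(state13->fd);
state13->removed = 1;           /* later events for fd 13 are ignored */
state13->next = cleanup_list;   /* reclaim the structure after the batch */
cleanup_list = state13;
.fi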
.SH CONFORMING TO
The epoll API is Linux specific.
Some other systems provide similar
mechanisms, e.g., FreeBSD has
.IR kqueue ,
and Solaris has
.IR /dev/poll .
.SH VERSIONS
.BR epoll (7)
is a new API introduced in Linux kernel 2.5.44.
Its interface should be finalized in Linux kernel 2.5.66.
.SH "SEE ALSO"
.BR epoll_create (2),
.BR epoll_ctl (2),
.BR epoll_wait (2)