.\" epoll by Davide Libenzi ( efficient event notification retrieval )
.\" Copyright (C) 2003 Davide Libenzi
.\"
.\" This program is free software; you can redistribute it and/or modify
.\" it under the terms of the GNU General Public License as published by
.\" the Free Software Foundation; either version 2 of the License, or
.\" (at your option) any later version.
.\"
.\" This program is distributed in the hope that it will be useful,
.\" but WITHOUT ANY WARRANTY; without even the implied warranty of
.\" MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
.\" GNU General Public License for more details.
.\"
.\" You should have received a copy of the GNU General Public License
.\" along with this program; if not, write to the Free Software
.\" Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
.\"
.\" Davide Libenzi <davidel@xmailserver.org>
.TH EPOLL 4 "2002-10-23" Linux "Linux Programmer's Manual"
.SH NAME
epoll \- I/O event notification facility
.SH SYNOPSIS
.B #include <sys/epoll.h>
.SH DESCRIPTION
.B epoll
is a variant of
.BR poll (2)
that can be used either as an Edge Triggered or a Level Triggered
interface and scales well to large numbers of watched file descriptors.
Three system calls are provided to set up and control an
.B epoll
set:
.BR epoll_create (2),
.BR epoll_ctl (2),
.BR epoll_wait (2).
.PP
An
.B epoll
set is connected to a file descriptor created by
.BR epoll_create (2).
Interest for certain file descriptors is then registered via
.BR epoll_ctl (2).
Finally, the actual wait is started by
.BR epoll_wait (2).
.PP
The
.B epoll
event distribution interface is able to behave both as Edge Triggered
(ET) and Level Triggered (LT).
The difference between the ET and LT event distribution mechanisms can
be described as follows.
Suppose that this scenario happens:
.TP
.B 1
The file descriptor that represents the read side of a pipe
.RB ( rfd )
is added to the
.B epoll
device.
.TP
.B 2
A pipe writer writes 2 kB of data on the write side of the pipe.
.TP
.B 3
A call to
.BR epoll_wait (2)
is done that will return
.B rfd
as a ready file descriptor.
.TP
.B 4
The pipe reader reads 1 kB of data from
.BR rfd .
.TP
.B 5
A call to
.BR epoll_wait (2)
is done.
.PP
If the
.B rfd
file descriptor has been added to the
.B epoll
interface using the
.B EPOLLET
flag, the call to
.BR epoll_wait (2)
done in step 5 will probably hang because of the available data still
present in the file input buffer, while the remote peer might be
expecting a response based on the data it already sent.
The reason for this is that Edge Triggered event distribution delivers
events only when events happen on the monitored file.
So, in step 5 the caller might end up waiting for some data that is
already present inside the input buffer.
In the above example, an event on
.B rfd
will be generated because of the write done in step 2, and the event is
consumed in step 3.
Since the read operation done in step 4 does not consume the whole
buffer data, the call to
.BR epoll_wait (2)
done in step 5 might block indefinitely.
The
.B epoll
interface, when used with the
.B EPOLLET
flag (Edge Triggered), should use non-blocking file descriptors to avoid
having a blocking read or write starve the task that is handling multiple
file descriptors.
The suggested way to use
.B epoll
as an Edge Triggered
.RB ( EPOLLET )
interface is below, and possible pitfalls to avoid follow.
.TP
.B i
with non-blocking file descriptors
.TP
.B ii
by going to wait for an event only after
.BR read (2)
or
.BR write (2)
return
.BR EAGAIN .
.PP
On the contrary, when used as a Level Triggered interface,
.B epoll
is by all means a faster
.BR poll (2),
and can be used wherever the latter is used since it shares the
same semantics.
Since even with the Edge Triggered
.B epoll
multiple events can be generated upon receipt of multiple chunks of data,
the caller has the option to specify the
.B EPOLLONESHOT
flag, to tell
.B epoll
to disable the associated file descriptor after the receipt of an event
with
.BR epoll_wait (2).
When the
.B EPOLLONESHOT
flag is specified, it is the caller's responsibility to rearm the file
descriptor using
.BR epoll_ctl (2)
with
.BR EPOLL_CTL_MOD .
.SH EXAMPLE FOR SUGGESTED USAGE
While the usage of
.B epoll
when employed like a Level Triggered interface does have the same
semantics of
.BR poll (2),
an Edge Triggered usage requires more clarification to avoid stalls
in the application event loop.
In this example, listener is a
non-blocking socket on which
.BR listen (2)
has been called.
The function do_use_fd() uses the new ready
file descriptor until
.B EAGAIN
is returned by either
.BR read (2)
or
.BR write (2).
An event-driven state machine application should, after having received
.BR EAGAIN ,
record its current state so that at the next call to do_use_fd()
it will continue to
.BR read (2)
or
.BR write (2)
from where it stopped before.
.PP
.nf
struct epoll_event ev, *events;

for(;;) {
    nfds = epoll_wait(kdpfd, events, maxevents, -1);

    for(n = 0; n < nfds; ++n) {
        if(events[n].data.fd == listener) {
            client = accept(listener, (struct sockaddr *) &local,
                            &addrlen);
            if(client < 0){
                perror("accept");
                continue;
            }
            setnonblocking(client);
            ev.events = EPOLLIN | EPOLLET;
            ev.data.fd = client;
            if (epoll_ctl(kdpfd, EPOLL_CTL_ADD, client, &ev) < 0) {
                fprintf(stderr, "epoll set insertion error: fd=%d\en",
                        client);
                return -1;
            }
        } else
            do_use_fd(events[n].data.fd);
    }
}
.fi
.PP
When used as an Edge Triggered interface, for performance reasons, it is
possible to add the file descriptor inside the
.B epoll
interface
.RB ( EPOLL_CTL_ADD )
once by specifying
.RB ( EPOLLIN | EPOLLOUT ).
This allows you to avoid
continuously switching between
.B EPOLLIN
and
.B EPOLLOUT
calling
.BR epoll_ctl (2)
with
.BR EPOLL_CTL_MOD .
.SH QUESTIONS AND ANSWERS (from linux-kernel)
.TP
.B Q1
What happens if you add the same fd to an epoll set twice?
.TP
.B A1
You will probably get
.BR EEXIST .
However, it is possible that two threads may add the same fd twice.
This is a harmless condition.
.TP
.B Q2
Can two
.B epoll
sets wait for the same fd?
If so, are events reported to both
.B epoll
sets?
.TP
.B A2
Yes.
However, it is not recommended.
Yes, it would be reported to both.
.TP
.B Q3
Is the
.B epoll
fd itself poll/epoll/selectable?
.TP
.B A3
Yes.
.TP
.B Q4
What happens if the
.B epoll
fd is put into its own fd set?
.TP
.B A4
It will fail.
However, you can add an
.B epoll
fd inside another
.B epoll
fd set.
.TP
.B Q5
Can I send the
.B epoll
fd over a unix-socket to another process?
.TP
.B A5
Yes, but it makes little sense to do so, since the receiving process
would not have copies of the file descriptors in the
.B epoll
set.
.TP
.B Q6
Will the close of an fd cause it to be removed from all
.B epoll
sets automatically?
.TP
.B A6
Yes.
.TP
.B Q7
If more than one event comes in between
.BR epoll_wait (2)
calls, are they combined or reported separately?
.TP
.B A7
They will be combined.
.TP
.B Q8
Does an operation on an fd affect the already collected but not yet
reported events?
.TP
.B A8
You can do two operations on an existing fd.
Remove would be meaningless for this case.
Modify will re-read available I/O.
.TP
.B Q9
Do I need to continuously read/write an fd until
.B EAGAIN
when using the
.B EPOLLET
flag (Edge Triggered behaviour)?
.TP
.B A9
No, you don't.
Receiving an event from
.BR epoll_wait (2)
should suggest to you that such file descriptor is ready for the
requested I/O operation.
You simply have to consider it ready until the next
.BR EAGAIN .
When and how you will use such file descriptor is entirely up to you.
Also, the condition that the read/write I/O space is exhausted can be
detected by checking the amount of data read from, or written to, the
target file descriptor.
For example, if you call
.BR read (2)
asking to read a certain amount of data and
.BR read (2)
returns a lower number of bytes, you can be sure to have exhausted the
read I/O space for that file descriptor.
The same is valid when writing using
.BR write (2).
.SH POSSIBLE PITFALLS AND WAYS TO AVOID THEM
.TP
.B o Starvation ( Edge Triggered )
.PP
If there is a large amount of I/O space, it is possible that by trying
to drain it the other files will not get processed, causing starvation.
This is not specific to
.BR epoll .
.PP
The solution is to maintain a ready list and mark the file descriptor
as ready in its associated data structure, thereby allowing the
application to remember which files need to be processed but still
round robin amongst all the ready files.
This also supports ignoring subsequent events you receive for fd's
that are already ready.
.TP
.B o If using an event cache...
.PP
If you use an event cache or store all the fd's returned from
.BR epoll_wait (2),
then make sure to provide a way to mark its closure dynamically (i.e.,
caused by a previous event's processing).
Suppose you receive 100 events from
.BR epoll_wait (2),
and in event #47 a condition causes event #13 to be closed.
If you remove the structure and close() the fd for event #13, then your
event cache might still say there are events waiting for that fd,
causing confusion.
.PP
One solution for this is to call, during the processing of event 47,
.BR epoll_ctl ( EPOLL_CTL_DEL )
to delete fd 13 and close(), then mark its associated
data structure as removed and link it to a cleanup list.
If you find another event for fd 13 in your batch processing, you will
discover the fd had been previously removed and there will be no
confusion.
.SH VERSIONS
.B epoll
is a new API introduced in Linux kernel 2.5.44.
Its interface should be finalized in Linux kernel 2.5.66.
.SH "SEE ALSO"
.BR epoll_create (2),