1.\"
2.\" epoll by Davide Libenzi ( efficient event notification retrieval )
3.\" Copyright (C) 2003 Davide Libenzi
4.\"
5.\" This program is free software; you can redistribute it and/or modify
6.\" it under the terms of the GNU General Public License as published by
7.\" the Free Software Foundation; either version 2 of the License, or
8.\" (at your option) any later version.
9.\"
10.\" This program is distributed in the hope that it will be useful,
11.\" but WITHOUT ANY WARRANTY; without even the implied warranty of
12.\" MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13.\" GNU General Public License for more details.
14.\"
15.\" You should have received a copy of the GNU General Public License
16.\" along with this program; if not, write to the Free Software
17.\" Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
18.\"
19.\" Davide Libenzi <davidel@xmailserver.org>
20.\"
21.\"
.TH EPOLL 4 "2002-10-23" Linux "Linux Programmer's Manual"
.SH NAME
epoll \- I/O event notification facility
.SH SYNOPSIS
.B #include <sys/epoll.h>
.SH DESCRIPTION
.B epoll
is a variant of
.BR poll (2)
that can be used either as an Edge Triggered or a Level Triggered
interface and scales well to large numbers of watched file descriptors.
Three system calls are provided to set up and control an
.B epoll
set:
.BR epoll_create (2),
.BR epoll_ctl (2),
and
.BR epoll_wait (2).

An
.B epoll
set is connected to a file descriptor created by
.BR epoll_create (2).
Interest in particular file descriptors is then registered via
.BR epoll_ctl (2).
Finally, the actual wait is started by
.BR epoll_wait (2).

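A minimal sketch of this three-call sequence follows; the names
.IR epfd ,
.I fd
and
.I MAX_EVENTS
are arbitrary choices made here for illustration:

.nf
#include <sys/epoll.h>
#include <stdio.h>

#define MAX_EVENTS 10

int
watch_fd(int fd)
{
    struct epoll_event ev, events[MAX_EVENTS];
    int epfd, nfds, n;

    epfd = epoll_create(MAX_EVENTS);    /* size is only a hint */
    if (epfd < 0) {
        perror("epoll_create");
        return \-1;
    }

    ev.events = EPOLLIN;                /* Level Triggered by default */
    ev.data.fd = fd;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0) {
        perror("epoll_ctl");
        return \-1;
    }

    nfds = epoll_wait(epfd, events, MAX_EVENTS, \-1);
    for (n = 0; n < nfds; ++n)
        printf("fd %d is ready\en", events[n].data.fd);

    return nfds;
}
.fi
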
.SH NOTES
The
.B epoll
event distribution interface is able to behave both as Edge Triggered
(ET) and Level Triggered (LT).
The difference between the ET and LT event distribution mechanisms
can be described as follows.
Suppose that this scenario happens:
.TP
.B 1
The file descriptor that represents the read side of a pipe
.RB ( RFD )
is added to the
.B epoll
device.
.TP
.B 2
A pipe writer writes 2 kB of data on the write side of the pipe.
.TP
.B 3
A call to
.BR epoll_wait (2)
is done that will return
.B RFD
as a ready file descriptor.
.TP
.B 4
The pipe reader reads 1 kB of data from
.BR RFD .
.TP
.B 5
A call to
.BR epoll_wait (2)
is done.
.PP

If the
.B RFD
file descriptor has been added to the
.B epoll
interface using the
.B EPOLLET
flag, the call to
.BR epoll_wait (2)
done in step
.B 5
will probably hang, because the available data is still present in the
file's input buffer and the remote peer might be expecting a response based
on the data it has already sent.
The reason for this is that Edge Triggered event
distribution delivers events only when events happen on the monitored file.
So, in step
.B 5
the caller might end up waiting for data that is already present inside
the input buffer.
In the above example, an event on
.B RFD
will be generated because of the write done in
.BR 2 ,
and the event is consumed in
.BR 3 .
Since the read operation done in
.B 4
does not consume the whole buffer data, the call to
.BR epoll_wait (2)
done in step
.B 5
might block indefinitely.
The
.B epoll
interface, when used with the
.B EPOLLET
flag (Edge Triggered),
should use non-blocking file descriptors to avoid having a blocking
read or write starve the task that is handling multiple file descriptors.
The suggested way to use
.B epoll
as an Edge Triggered
.RB ( EPOLLET )
interface is given below, and possible pitfalls to avoid follow later:
.RS
.TP
.B i
with non-blocking file descriptors; and
.TP
.B ii
by going to wait for an event only after
.BR read (2)
or
.BR write (2)
return EAGAIN.
.RE
.PP
On the contrary, when used as a Level Triggered interface,
.B epoll
is by all means a faster
.BR poll (2),
and can be used wherever the latter is used, since it shares the
same semantics.
Since even with Edge Triggered
.B epoll
multiple events can be generated upon receipt of multiple chunks of data,
the caller has the option to specify the
.B EPOLLONESHOT
flag, to tell
.B epoll
to disable the associated file descriptor after the receipt of an event with
.BR epoll_wait (2).
When the
.B EPOLLONESHOT
flag is specified, it is the caller's responsibility to rearm the file
descriptor using
.BR epoll_ctl (2)
with
.BR EPOLL_CTL_MOD .

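For example, a descriptor registered with
.B EPOLLONESHOT
might be rearmed as in the following sketch; the names
.I epfd
and
.I fd
are arbitrary, and the event mask is only an example:

.nf
struct epoll_event ev;

/* Register fd for a single Edge Triggered input notification. */
ev.events = EPOLLIN | EPOLLET | EPOLLONESHOT;
ev.data.fd = fd;
epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);

/* ... after epoll_wait(2) has reported fd and the event has been
   handled, the descriptor is disabled; rearm it with the same mask. */
ev.events = EPOLLIN | EPOLLET | EPOLLONESHOT;
ev.data.fd = fd;
epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
.fi
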
.SH EXAMPLE FOR SUGGESTED USAGE

While the usage of
.B epoll
when employed as a Level Triggered interface has the same
semantics as
.BR poll (2),
an Edge Triggered usage requires more clarification to avoid stalls
in the application event loop.
In this example, listener is a
non-blocking socket on which
.BR listen (2)
has been called.
The function do_use_fd() uses the newly ready
file descriptor until EAGAIN is returned by either
.BR read (2)
or
.BR write (2).
An event-driven state machine application should, after having received
EAGAIN, record its current state so that at the next call to do_use_fd()
it will continue to
.BR read (2)
or
.BR write (2)
from where it stopped before.

.nf
#define MAX_EVENTS 10

/* kdpfd is an epoll file descriptor obtained from epoll_create(2),
   and listener has already been added to it; both are set up elsewhere. */
struct epoll_event ev, events[MAX_EVENTS];
struct sockaddr_storage local;  /* type depends on the listener's family */
socklen_t addrlen;
int nfds, n, client;

for (;;) {
    nfds = epoll_wait(kdpfd, events, MAX_EVENTS, \-1);

    for (n = 0; n < nfds; ++n) {
        if (events[n].data.fd == listener) {
            addrlen = sizeof(local);
            client = accept(listener, (struct sockaddr *) &local,
                            &addrlen);
            if (client < 0) {
                perror("accept");
                continue;
            }
            setnonblocking(client);
            ev.events = EPOLLIN | EPOLLET;
            ev.data.fd = client;
            if (epoll_ctl(kdpfd, EPOLL_CTL_ADD, client, &ev) < 0) {
                fprintf(stderr, "epoll set insertion error: fd=%d\en",
                        client);
                return \-1;
            }
        } else
            do_use_fd(events[n].data.fd);
    }
}
.fi

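The helper functions setnonblocking() and do_use_fd() are not defined above.
The following is one possible sketch of them; the buffer size and the
handling of end-of-file are arbitrary choices made here for illustration:

.nf
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Put the file descriptor into non-blocking mode. */
static int
setnonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);

    if (flags < 0)
        return \-1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Consume input from an Edge Triggered descriptor until read(2)
   reports EAGAIN; a real application would also record its state
   here so that processing can resume later. */
static void
do_use_fd(int fd)
{
    char buf[2048];
    ssize_t nread;

    for (;;) {
        nread = read(fd, buf, sizeof(buf));
        if (nread < 0) {
            if (errno != EAGAIN)
                perror("read");
            break;              /* no more data for now */
        }
        if (nread == 0)
            break;              /* peer closed the connection */
        /* ... process the nread bytes in buf ... */
    }
}
.fi
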
When used as an Edge Triggered interface, for performance reasons, it is
possible to add the file descriptor to the
.B epoll
interface
.RB ( EPOLL_CTL_ADD )
once by specifying
.RB ( EPOLLIN | EPOLLOUT ).
This allows you to avoid
continuously switching between
.B EPOLLIN
and
.B EPOLLOUT
by calling
.BR epoll_ctl (2)
with
.BR EPOLL_CTL_MOD .

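For instance, instead of toggling the event mask with
.B EPOLL_CTL_MOD
whenever the direction of interest changes, the descriptor can be
registered once for both directions.
The sketch below reuses the illustrative kdpfd and client names from the
example above:

.nf
ev.events = EPOLLIN | EPOLLOUT | EPOLLET;
ev.data.fd = client;
if (epoll_ctl(kdpfd, EPOLL_CTL_ADD, client, &ev) < 0)
    perror("epoll_ctl");
.fi
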
.SH QUESTIONS AND ANSWERS (from linux-kernel)

.RS
.TP
.B Q1
What happens if you add the same fd to an epoll set twice?
.TP
.B A1
You will probably get EEXIST.
However, it is possible that two threads may add the same fd twice.
This is a harmless condition.
.TP
.B Q2
Can two
.B epoll
sets wait for the same fd?
If so, are events reported to both
.B epoll
set fds?
.TP
.B A2
Yes, and events would be reported to both.
However, it is not recommended.
.TP
.B Q3
Is the
.B epoll
fd itself poll/epoll/selectable?
.TP
.B A3
Yes.
.TP
.B Q4
What happens if the
.B epoll
fd is put into its own fd set?
.TP
.B A4
It will fail.
However, you can add an
.B epoll
fd inside another
.B epoll
fd set.
.TP
.B Q5
Can I send the
.B epoll
fd over a Unix domain socket to another process?
.TP
.B A5
No.
.TP
.B Q6
Will the close of an fd cause it to be removed from all
.B epoll
sets automatically?
.TP
.B A6
Yes.
.TP
.B Q7
If more than one event comes in between
.BR epoll_wait (2)
calls, are they combined or reported separately?
.TP
.B A7
They will be combined.
.TP
.B Q8
Does an operation on an fd affect the already collected but not yet
reported events?
.TP
.B A8
You can do two operations on an existing fd.
A remove would be meaningless for this case.
A modify will re-read available I/O.
.TP
.B Q9
Do I need to continuously read/write an fd until EAGAIN when using the
.B EPOLLET
flag (Edge Triggered behaviour)?
.TP
.B A9
No, you don't.
Receiving an event from
.BR epoll_wait (2)
should suggest to you that the file descriptor is ready for the requested
I/O operation.
You simply have to consider it ready until you receive the next EAGAIN.
When and how you use the file descriptor is entirely up to you.
Also, the condition that the read/write I/O space is exhausted can
be detected by checking the amount of data read from / written to the
target file descriptor.
For example, if you call
.BR read (2)
asking to read a certain amount of data and
.BR read (2)
returns a lower number of bytes, you can be sure to have exhausted the read
I/O space for that file descriptor.
The same is valid when writing using
.BR write (2).
A sketch of this check follows this list.
.RE

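The short-count check described in
.B A9
might look like the following sketch; the buffer size is an arbitrary
choice made here for illustration:

.nf
char buf[4096];
ssize_t nread;

nread = read(fd, buf, sizeof(buf));
if (nread > 0 && (size_t) nread < sizeof(buf)) {
    /* Fewer bytes than requested were returned: the read I/O space
       of this file descriptor is exhausted for now, so there is no
       need to keep reading until EAGAIN. */
}
.fi
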
.SH POSSIBLE PITFALLS AND WAYS TO AVOID THEM
.RS
.TP
.B o Starvation (Edge Triggered)
.PP
If there is a large amount of I/O space, it is possible that by trying to
drain it the other files will not get processed, causing starvation.
This is not specific to
.BR epoll .
.PP
The solution is to maintain a ready list and mark the file descriptor as ready
in its associated data structure, thereby allowing the application to
remember which files need to be processed but still round robin amongst
all the ready files.
This also supports ignoring subsequent events you
receive for fds that are already ready.
.PP

.TP
.B o If using an event cache...
.PP
If you use an event cache or store all the fds returned from
.BR epoll_wait (2),
then make sure to provide a way to mark its closure dynamically (i.e., caused
by a previous event's processing).
Suppose you receive 100 events from
.BR epoll_wait (2),
and in event #47 a condition causes event #13 to be closed.
If you remove the structure and close() the fd for event #13, then your
event cache might still say there are events waiting for that fd, causing
confusion.
.PP
One solution for this is to call, during the processing of event 47,
.BR epoll_ctl ( EPOLL_CTL_DEL )
to delete fd 13 and close(), then mark its associated
data structure as removed and link it to a cleanup list.
If you find another
event for fd 13 in your batch processing, you will discover the fd had been
previously removed and there will be no confusion.
A sketch of this bookkeeping follows this list.
.PP

.RE
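.PP
The bookkeeping described above might be structured as in the following
sketch; the
.I fd_info
structure and its fields are illustrative and not part of any
.B epoll
API:
.PP
.nf
#include <sys/epoll.h>
#include <unistd.h>

struct fd_info {
    int fd;
    int removed;            /* set once the fd has been deleted */
    struct fd_info *next;   /* link into the cleanup list */
};

/* Called while processing the event that invalidates 'info'. */
static void
discard_fd(int epfd, struct fd_info *info, struct fd_info **cleanup)
{
    struct epoll_event dummy;   /* older kernels require a non-NULL
                                   event pointer for EPOLL_CTL_DEL */

    epoll_ctl(epfd, EPOLL_CTL_DEL, info\->fd, &dummy);
    close(info\->fd);
    info\->removed = 1;
    info\->next = *cleanup;
    *cleanup = info;
}

/* Events found later in the same batch for a structure whose
   'removed' flag is set are simply skipped. */
.fi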
.SH CONFORMING TO
.BR epoll (4)
is a new API introduced in Linux kernel 2.5.44.
Its interface should be finalized in Linux kernel 2.5.66.
.SH "SEE ALSO"
.BR epoll_create (2),
.BR epoll_ctl (2),
.BR epoll_wait (2)