/* Implements exception handling.
   Copyright (C) 1989, 92-95, 1996 Free Software Foundation, Inc.
   Contributed by Mike Stump <mrs@cygnus.com>.

   This file is part of GNU CC.

   GNU CC is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 2, or (at your option)
   any later version.

   GNU CC is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with GNU CC; see the file COPYING.  If not, write to
   the Free Software Foundation, 59 Temple Place - Suite 330,
   Boston, MA 02111-1307, USA.  */

/* An exception is an event that can be signaled from within a
   function.  This event can then be "caught" or "trapped" by the
   callers of this function.  This potentially allows program flow to
   be transferred to any arbitrary code associated with a function call
   several levels up the stack.

   The intended use for this mechanism is for signaling "exceptional
   events" in an out-of-band fashion, hence its name.  The C++ language
   (and many other OO-styled or functional languages) practically
   requires such a mechanism, as otherwise it becomes very difficult
   or even impossible to signal failure conditions in complex
   situations.  The traditional C++ example is when an error occurs in
   the process of constructing an object; without such a mechanism, it
   is impossible to signal that the error occurred without adding global
   state variables and error checks around every object construction.

   The act of causing this event to occur is referred to as "throwing
   an exception".  (Alternate terms include "raising an exception" or
   "signaling an exception".)  The term "throw" is used because control
   is returned to the callers of the function that is signaling the
   exception, and thus there is the concept of "throwing" the
   exception up the call stack.

   There are two major codegen options for exception handling.  The
   flag -fsjlj-exceptions can be used to select the setjmp/longjmp
   approach, which is the default.  -fno-sjlj-exceptions can be used
   to get the PC range table approach.  While this is a compile time
   flag, an entire application must be compiled with the same codegen
   option.  We will first discuss the PC range table approach; after
   that, we will discuss the setjmp/longjmp based approach.

   It is appropriate to speak of the "context of a throw".  This
   context refers to the address where the exception is thrown from,
   and is used to determine which exception region will handle the
   exception.

   Regions of code within a function can be marked such that if a
   region contains the context of a throw, control will be passed to a
   designated "exception handler".  These areas are known as "exception
   regions".  Exception regions cannot overlap, but they can be nested
   to any arbitrary depth.  Also, exception regions cannot cross
   function boundaries.

   Exception handlers can either be specified by the user (which we
   will call a "user-defined handler") or generated by the compiler
   (which we will designate as a "cleanup").  Cleanups are used to
   perform tasks such as destruction of objects allocated on the
   stack.

   In the current implementation, cleanups are handled by allocating an
   exception region for the area that the cleanup is designated for,
   and the handler for the region performs the cleanup and then
   rethrows the exception to the outer exception region.  From the
   standpoint of the current implementation, there is little
   distinction made between a cleanup and a user-defined handler, and
   the phrase "exception handler" can be used to refer to either one
   equally well.  (The section "Future directions" below discusses how
   this will change.)

   Each object file that is compiled with exception handling contains
   a static array of exception handlers named __EXCEPTION_TABLE__.
   Each entry contains the starting and ending addresses of the
   exception region, and the address of the handler designated for
   that region.
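
   The table layout described above can be sketched as a C struct.
   The field and type names here are illustrative assumptions, not
   the exact names used by libgcc:

```c
#include <assert.h>
#include <stdint.h>

// One entry of an object file's __EXCEPTION_TABLE__, as described
// above: the [start, end) address range of an exception region and
// the handler that services throws whose context lies in that range.
// Field names are illustrative, not libgcc's actual names.
struct eh_table_entry
{
  uintptr_t start;    // first address of the exception region
  uintptr_t end;      // first address past the region
  uintptr_t handler;  // code to jump to when the region matches
};
```

   The object file would then hold a static array of these entries,
   terminated by a sentinel, which is what gets registered at startup.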

   At program startup each object file invokes a function named
   __register_exceptions with the address of its local
   __EXCEPTION_TABLE__.  __register_exceptions is defined in libgcc2.c,
   and is responsible for recording all of the exception regions into
   one list (which is kept in a static variable named
   exception_table_list).

   The function __throw is actually responsible for doing the
   throw.  In the C++ frontend, __throw is generated on a
   per-object-file basis for each source file compiled with
   -fexceptions.  Before __throw is invoked, the current context
   of the throw needs to be placed in the global variable __eh_pc.

   __throw attempts to find the appropriate exception handler for the
   PC value stored in __eh_pc by calling __find_first_exception_table_match
   (which is defined in libgcc2.c).  If __find_first_exception_table_match
   finds a relevant handler, __throw jumps directly to it.
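
   The lookup that __find_first_exception_table_match performs can be
   sketched as a linear scan of the registered entries for the first
   region whose [start, end) range contains the throw context.  This
   is an illustrative reimplementation, not libgcc2.c's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

// One registered exception region: [start, end) and its handler.
// Names are illustrative stand-ins for the libgcc2.c internals.
struct eh_entry_sketch
{
  uintptr_t start;
  uintptr_t end;
  uintptr_t handler;
};

// Return the handler of the first region whose range contains PC,
// or 0 if no region matches (the caller must then unwind a frame
// and retry with the new context).
uintptr_t
find_first_match_sketch (const struct eh_entry_sketch *table,
                         size_t n, uintptr_t pc)
{
  size_t i;
  for (i = 0; i < n; i++)
    if (pc >= table[i].start && pc < table[i].end)
      return table[i].handler;
  return 0;
}
```

   Because entries from all object files are merged into one list, the
   scan is over the whole program's regions, keyed only by the PC.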

   If a handler for the context being thrown from can't be found,
   __throw is responsible for unwinding the stack, determining the
   address of the caller of the current function (which will be used
   as the new context to throw from), and then restarting the process
   of searching for a handler for the new context.  __throw may also
   call abort if it is unable to unwind the stack, and can also
   call an external library function named __terminate if it reaches
   the top of the stack without finding an appropriate handler.  (By
   default __terminate invokes abort, but this behavior can be
   changed by the user to perform some sort of cleanup behavior before
   exiting.)
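
   The search-unwind-retry loop described above can be sketched as
   follows.  The frame list and the lookup are simulated here, since
   the real __throw unwinds actual machine frames; all names are
   illustrative:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

// Simulated driver loop of __throw.  FRAME_PCS[0] is the throw
// context of the innermost frame; each later entry is the context
// that unwinding one more frame would produce.  Returns the matching
// handler, or 0 where the real code would call __terminate because
// the top of the stack was reached without a match.
uintptr_t
throw_driver_sketch (const uintptr_t *frame_pcs, size_t n_frames,
                     uintptr_t (*find_handler) (uintptr_t pc))
{
  size_t i;
  for (i = 0; i < n_frames; i++)
    {
      uintptr_t handler = find_handler (frame_pcs[i]);
      if (handler)
        return handler;   // the real __throw jumps to the handler here
      // otherwise "unwind": move to the caller's frame and retry
    }
  return 0;               // ran off the top: __terminate
}

// Example lookup used below: only PC 0x30 has a handler (0xC).
uintptr_t
example_lookup (uintptr_t pc)
{
  return pc == 0x30 ? (uintptr_t) 0xC : 0;
}
```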

   Internal implementation details:

   To associate a user-defined handler with a block of statements, the
   function expand_start_try_stmts is used to mark the start of the
   block of statements with which the handler is to be associated
   (which is known as a "try block").  All statements that appear
   afterwards will be associated with the try block.

   A call to expand_start_all_catch marks the end of the try block,
   and also marks the start of the "catch block" (the user-defined
   handler) associated with the try block.

   This user-defined handler will be invoked for *every* exception
   thrown with the context of the try block.  It is up to the handler
   to decide whether or not it wishes to handle any given exception,
   as there is currently no mechanism in this implementation for the
   compiler to make that decision on the handler's behalf.  (There are
   plans for conditionally processing an exception based on its
   "type", which will provide a language-independent mechanism.)

   If the handler chooses not to process the exception (perhaps by
   looking at an "exception type" or some other additional data
   supplied with the exception), it can fall through to the end of the
   handler.  expand_end_all_catch and expand_leftover_cleanups
   add additional code to the end of each handler to take care of
   rethrowing to the outer exception handler.

   The handler also has the option to continue with "normal flow of
   code", or in other words to resume executing at the statement
   immediately after the end of the exception region.  The variable
   caught_return_label_stack contains a stack of labels, and jumping
   to the topmost entry's label via expand_goto will resume normal
   flow to the statement immediately after the end of the exception
   region.  If the handler falls through to the end, the exception will
   be rethrown to the outer exception region.

   The instructions for the catch block are kept as a separate
   sequence, and will be emitted at the end of the function along with
   the handlers specified via expand_eh_region_end.  The end of the
   catch block is marked with expand_end_all_catch.

   Any data associated with the exception must currently be handled by
   some external mechanism maintained in the frontend.  For example,
   the C++ exception mechanism passes an arbitrary value along with
   the exception, and this is handled in the C++ frontend by using a
   global variable to hold the value.  (This will be changing in the
   future.)

   The mechanism in C++ for handling data associated with the
   exception is clearly not thread-safe.  For a thread-based
   environment, another mechanism must be used (possibly using a
   per-thread allocation mechanism if the size of the area that needs
   to be allocated isn't known at compile time).

   Internally-generated exception regions (cleanups) are marked by
   calling expand_eh_region_start to mark the start of the region,
   and expand_eh_region_end (handler) is used to both designate the
   end of the region and to associate a specified handler/cleanup with
   the region.  The rtl code in HANDLER will be invoked whenever an
   exception occurs in the region between the calls to
   expand_eh_region_start and expand_eh_region_end.  After HANDLER is
   executed, additional code is emitted to handle rethrowing the
   exception to the outer exception handler.  The code for HANDLER will
   be emitted at the end of the function.

   TARGET_EXPRs can also be used to designate exception regions.  A
   TARGET_EXPR gives an unwind-protect style interface commonly used
   in functional languages such as LISP.  The associated expression is
   evaluated, and whether or not it (or any of the functions that it
   calls) throws an exception, the protect expression is always
   invoked.  This implementation takes care of the details of
   associating an exception table entry with the expression and
   generating the necessary code (it actually emits the protect
   expression twice, once for normal flow and once for the exception
   case).  As for the other handlers, the code for the exception case
   will be emitted at the end of the function.

   Cleanups can also be specified by using add_partial_entry (handler)
   and end_protect_partials.  add_partial_entry creates the start of
   a new exception region; HANDLER will be invoked if an exception is
   thrown with the context of the region between the calls to
   add_partial_entry and end_protect_partials.  end_protect_partials is
   used to mark the end of these regions.  add_partial_entry can be
   called as many times as needed before calling end_protect_partials.
   However, end_protect_partials should only be invoked once for each
   group of calls to add_partial_entry, as the entries are queued
   and all of the outstanding entries are processed simultaneously
   when end_protect_partials is invoked.  Similarly to the other
   handlers, the code for HANDLER will be emitted at the end of the
   function.

   The generated RTL for an exception region includes
   NOTE_INSN_EH_REGION_BEG and NOTE_INSN_EH_REGION_END notes that mark
   the start and end of the exception region.  A unique label is also
   generated at the start of the exception region, which is available
   by looking at the ehstack variable.  The topmost entry corresponds
   to the current region.

   In the current implementation, an exception can only be thrown from
   a function call (since the mechanism used to actually throw an
   exception involves calling __throw).  If an exception region is
   created but no function calls occur within that region, the region
   can be safely optimized away (along with its exception handlers)
   since no exceptions can ever be caught in that region.  This
   optimization is performed unless -fasynchronous-exceptions is
   given.  If the user wishes to throw from a signal handler, or other
   asynchronous place, -fasynchronous-exceptions should be used when
   compiling for maximally correct code, at the cost of additional
   exception regions.  Using -fasynchronous-exceptions only produces
   code that is reasonably safe in such situations, but a correct
   program cannot rely upon this working.  It can be used in failsafe
   code, where trying to continue on and proceeding with potentially
   incorrect results is better than halting the program.
232 | ||
12670d88 RK |
233 | |
234 | Unwinding the stack: | |
235 | ||
236 | The details of unwinding the stack to the next frame can be rather | |
27a36778 | 237 | complex. While in many cases a generic __unwind_function routine |
12670d88 RK |
238 | can be used by the generated exception handling code to do this, it |
239 | is often necessary to generate inline code to do the unwinding. | |
240 | ||
241 | Whether or not these inlined unwinders are necessary is | |
242 | target-specific. | |
243 | ||
244 | By default, if the target-specific backend doesn't supply a | |
27a36778 | 245 | definition for __unwind_function, inlined unwinders will be used |
12670d88 RK |
246 | instead. The main tradeoff here is in text space utilization. |
247 | Obviously, if inline unwinders have to be generated repeatedly, | |
2ed18e63 MS |
248 | this uses much more space than if a single routine is used. |
249 | ||
250 | However, it is simply not possible on some platforms to write a | |
251 | generalized routine for doing stack unwinding without having some | |
252 | form of additional data associated with each function. The current | |
253 | implementation encodes this data in the form of additional machine | |
254 | instructions. This is clearly not desirable, as it is extremely | |
255 | inefficient. The next implementation will provide a set of metadata | |
256 | for each function that will provide the needed information. | |
12670d88 RK |
257 | |
258 | The backend macro DOESNT_NEED_UNWINDER is used to conditionalize | |
259 | whether or not per-function unwinders are needed. If DOESNT_NEED_UNWINDER | |
260 | is defined and has a non-zero value, a per-function unwinder is | |
261 | not emitted for the current function. | |
262 | ||
27a36778 | 263 | On some platforms it is possible that neither __unwind_function |
12670d88 | 264 | nor inlined unwinders are available. For these platforms it is not |
27a36778 | 265 | possible to throw through a function call, and abort will be |
2ed18e63 MS |
266 | invoked instead of performing the throw. |
267 | ||
268 | Future directions: | |
269 | ||
27a36778 | 270 | Currently __throw makes no differentiation between cleanups and |
2ed18e63 MS |
271 | user-defined exception regions. While this makes the implementation |
272 | simple, it also implies that it is impossible to determine if a | |
273 | user-defined exception handler exists for a given exception without | |
274 | completely unwinding the stack in the process. This is undesirable | |
275 | from the standpoint of debugging, as ideally it would be possible | |
276 | to trap unhandled exceptions in the debugger before the process of | |
277 | unwinding has even started. | |
278 | ||
279 | This problem can be solved by marking user-defined handlers in a | |
280 | special way (probably by adding additional bits to exception_table_list). | |
27a36778 | 281 | A two-pass scheme could then be used by __throw to iterate |
2ed18e63 MS |
282 | through the table. The first pass would search for a relevant |
283 | user-defined handler for the current context of the throw, and if | |
284 | one is found, the second pass would then invoke all needed cleanups | |
285 | before jumping to the user-defined handler. | |
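
   The proposed two-pass scheme can be sketched as follows.  The
   frames, table, and the "user-defined" marking bit are simulated,
   and all names are illustrative assumptions rather than planned
   interfaces:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

// A region entry extended with the marking bit described above.
struct marked_entry
{
  uintptr_t start, end, handler;
  int user_defined;          // nonzero: user handler; zero: cleanup
};

int cleanups_run;            // counts cleanups executed by the example

void
count_cleanup (uintptr_t handler)
{
  (void) handler;
  cleanups_run++;
}

// Two-pass dispatch: pass 1 searches outward (FRAMES[0] innermost)
// for a frame whose PC lies in a user-defined handler's region;
// pass 2 runs every cleanup between the throw point and that frame
// before "jumping" to the handler.  Returns the handler, or 0 if
// pass 1 finds none (an unhandled exception).
uintptr_t
two_pass_throw_sketch (const struct marked_entry *table, size_t n,
                       const uintptr_t *frames, size_t n_frames,
                       void (*run_cleanup) (uintptr_t handler))
{
  size_t i, j, target = n_frames;
  uintptr_t user_handler = 0;

  // Pass 1: find the user-defined handler, ignoring cleanups.
  for (i = 0; i < n_frames && ! user_handler; i++)
    for (j = 0; j < n; j++)
      if (table[j].user_defined
          && frames[i] >= table[j].start && frames[i] < table[j].end)
        {
          user_handler = table[j].handler;
          target = i;
          break;
        }
  if (! user_handler)
    return 0;

  // Pass 2: run the cleanups for every frame inside the target frame.
  for (i = 0; i < target; i++)
    for (j = 0; j < n; j++)
      if (! table[j].user_defined
          && frames[i] >= table[j].start && frames[i] < table[j].end)
        run_cleanup (table[j].handler);

  return user_handler;
}
```

   The payoff described in the text is visible here: pass 1 alone can
   tell a debugger whether the exception is handled at all, before any
   cleanup has run or any frame has been unwound.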
286 | ||
287 | Many languages (including C++ and Ada) make execution of a | |
288 | user-defined handler conditional on the "type" of the exception | |
289 | thrown. (The type of the exception is actually the type of the data | |
290 | that is thrown with the exception.) It will thus be necessary for | |
27a36778 | 291 | __throw to be able to determine if a given user-defined |
2ed18e63 MS |
292 | exception handler will actually be executed, given the type of |
293 | exception. | |
294 | ||
295 | One scheme is to add additional information to exception_table_list | |
27a36778 | 296 | as to the types of exceptions accepted by each handler. __throw |
2ed18e63 MS |
297 | can do the type comparisons and then determine if the handler is |
298 | actually going to be executed. | |
299 | ||
300 | There is currently no significant level of debugging support | |
27a36778 | 301 | available, other than to place a breakpoint on __throw. While |
2ed18e63 MS |
302 | this is sufficient in most cases, it would be helpful to be able to |
303 | know where a given exception was going to be thrown to before it is | |
304 | actually thrown, and to be able to choose between stopping before | |
305 | every exception region (including cleanups), or just user-defined | |
306 | exception regions. This should be possible to do in the two-pass | |
27a36778 | 307 | scheme by adding additional labels to __throw for appropriate |
2ed18e63 MS |
308 | breakpoints, and additional debugger commands could be added to |
309 | query various state variables to determine what actions are to be | |
310 | performed next. | |
311 | ||
312 | Another major problem that is being worked on is the issue with | |
313 | stack unwinding on various platforms. Currently the only platform | |
27a36778 | 314 | that has support for __unwind_function is the Sparc; all other |
2ed18e63 MS |
315 | ports require per-function unwinders, which causes large amounts of |
316 | code bloat. | |
317 | ||
318 | Ideally it would be possible to store a small set of metadata with | |
319 | each function that would then make it possible to write a | |
27a36778 | 320 | __unwind_function for every platform. This would eliminate the |
2ed18e63 MS |
321 | need for per-function unwinders. |
322 | ||
323 | The main reason the data is needed is that on some platforms the | |
324 | order and types of data stored on the stack can vary depending on | |
325 | the type of function, its arguments and returned values, and the | |
326 | compilation options used (optimization versus non-optimization, | |
327 | -fomit-frame-pointer, processor variations, etc). | |
328 | ||
329 | Unfortunately, this also means that throwing through functions that | |
330 | aren't compiled with exception handling support will still not be | |
331 | possible on some platforms. This problem is currently being | |
332 | investigated, but no solutions have been found that do not imply | |
27a36778 MS |
333 | some unacceptable performance penalties. |
334 | ||
335 | For setjmp/longjmp based exception handling, some of the details | |
336 | are as above, but there are some additional details. This section | |
337 | discusses the details. | |
338 | ||
339 | We don't use NOTE_INSN_EH_REGION_{BEG,END} pairs. We don't | |
340 | optimize EH regions yet. We don't have to worry about machine | |
341 | specific issues with unwinding the stack, as we rely upon longjmp | |
342 | for all the machine specific details. There is no variable context | |
343 | of a throw, just the one implied by the dynamic handler stack | |
344 | pointed to by the dynamic handler chain. There is no exception | |
345 | table, and no calls to __register_excetpions. __sjthrow is used | |
346 | instead of __throw, and it works by using the dynamic handler | |
347 | chain, and longjmp. -fasynchronous-exceptions has no effect, as | |
348 | the elimination of trivial exception regions is not yet performed. | |
349 | ||
350 | A frontend can set protect_cleanup_actions_with_terminate when all | |
351 | the cleanup actions should be protected with an EH region that | |
352 | calls terminate when an unhandled exception is throw. C++ does | |
353 | this, Ada does not. */ | |
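
/* The dynamic handler chain described above can be modeled directly
   with setjmp/longjmp.  The following is a minimal standalone sketch
   of the idea behind __sjthrow; the struct and function names are
   hypothetical, and the real chain lives in libgcc2.c and is
   maintained by compiler-generated code, not by hand as here.  */

```c
#include <assert.h>
#include <setjmp.h>
#include <stddef.h>

// A node in a simulated dynamic handler chain: each active handler
// pushes a node holding its setjmp context; a throw longjmps to the
// innermost one.  Names are hypothetical, not libgcc2.c's.
struct sj_handler
{
  struct sj_handler *next;
  jmp_buf where;
};

static struct sj_handler *sj_chain;

// Model of __sjthrow: pop the innermost handler and longjmp to it.
// Returns only if the chain is empty, where the real code would
// call __terminate.
int
sjthrow_sketch (void)
{
  struct sj_handler *h = sj_chain;
  if (! h)
    return -1;                // no handler: __terminate in real life
  sj_chain = h->next;         // unlink before transferring control
  longjmp (h->where, 1);
}

// Run BODY with a handler on the chain; return 1 if BODY completed
// normally, 2 if it threw.
int
run_protected (void (*body) (void))
{
  struct sj_handler me;
  me.next = sj_chain;
  if (setjmp (me.where))
    return 2;                 // reached via sjthrow_sketch
  sj_chain = &me;
  body ();
  sj_chain = me.next;         // pop on normal exit
  return 1;
}

void throwing_body (void) { sjthrow_sketch (); }
void quiet_body (void) { }
```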


#include "config.h"
#include <stdio.h>
#include "rtl.h"
#include "tree.h"
#include "flags.h"
#include "except.h"
#include "function.h"
#include "insn-flags.h"
#include "expr.h"
#include "insn-codes.h"
#include "regs.h"
#include "hard-reg-set.h"
#include "insn-config.h"
#include "recog.h"
#include "output.h"
#include "assert.h"

/* One to use setjmp/longjmp method of generating code for exception
   handling.  */

int exceptions_via_longjmp = 1;

/* One to enable asynchronous exception support.  */

int asynchronous_exceptions = 0;

/* One to protect cleanup actions with a handler that calls
   __terminate, zero otherwise.  */

int protect_cleanup_actions_with_terminate = 0;

/* A list of labels used for exception handlers.  Created by
   find_exception_handler_labels for the optimization passes.  */

rtx exception_handler_labels;

/* Nonzero means that __throw was invoked.

   This is used by the C++ frontend to know if code needs to be emitted
   for __throw or not.  */

int throw_used;

/* The dynamic handler chain.  Nonzero if the function has already
   fetched a pointer to the dynamic handler chain for exception
   handling.  */

rtx current_function_dhc;

/* The dynamic cleanup chain.  Nonzero if the function has already
   fetched a pointer to the dynamic cleanup chain for exception
   handling.  */

rtx current_function_dcc;

/* A stack used for keeping track of the currently active exception
   handling region.  As each exception region is started, an entry
   describing the region is pushed onto this stack.  The current
   region can be found by looking at the top of the stack, and as we
   exit regions, the corresponding entries are popped.

   Entries cannot overlap; they can be nested.  So there is only one
   entry at most that corresponds to the current instruction, and that
   is the entry on the top of the stack.  */

static struct eh_stack ehstack;

/* A queue used for tracking which exception regions have closed but
   whose handlers have not yet been expanded.  Regions are emitted in
   groups in an attempt to improve paging performance.

   As we exit a region, we enqueue a new entry.  The entries are then
   dequeued during expand_leftover_cleanups and expand_start_all_catch.

   We should redo things so that we either take RTL for the handler,
   or we expand the handler expressed as a tree immediately at region
   end time.  */

static struct eh_queue ehqueue;

/* Insns for all of the exception handlers for the current function.
   They are currently emitted by the frontend code.  */

rtx catch_clauses;

/* A TREE_CHAINed list of handlers for regions that are not yet
   closed.  The TREE_VALUE of each entry contains the handler for the
   corresponding entry on the ehstack.  */

static tree protect_list;

/* Stacks to keep track of various labels.  */

/* Keeps track of the label to resume to should one want to resume
   normal control flow out of a handler (instead of, say, returning to
   the caller of the current function or exiting the program).  Also
   used as the context of a throw to rethrow an exception to the outer
   exception region.  */

struct label_node *caught_return_label_stack = NULL;

/* A random data area for the front end's own use.  */

struct label_node *false_label_stack = NULL;

/* The rtx and the tree for the saved PC value.  */

rtx eh_saved_pc_rtx;
tree eh_saved_pc;

rtx expand_builtin_return_addr PROTO((enum built_in_function, int, rtx));
\f
/* Various support routines to manipulate the various data structures
   used by the exception handling code.  */

/* Push a label entry onto the given STACK.  */

void
push_label_entry (stack, rlabel, tlabel)
     struct label_node **stack;
     rtx rlabel;
     tree tlabel;
{
  struct label_node *newnode
    = (struct label_node *) xmalloc (sizeof (struct label_node));

  if (rlabel)
    newnode->u.rlabel = rlabel;
  else
    newnode->u.tlabel = tlabel;
  newnode->chain = *stack;
  *stack = newnode;
}

/* Pop a label entry from the given STACK.  */

rtx
pop_label_entry (stack)
     struct label_node **stack;
{
  rtx label;
  struct label_node *tempnode;

  if (! *stack)
    return NULL_RTX;

  tempnode = *stack;
  label = tempnode->u.rlabel;
  *stack = (*stack)->chain;
  free (tempnode);

  return label;
}

/* Return the top element of the given STACK.  */

tree
top_label_entry (stack)
     struct label_node **stack;
{
  if (! *stack)
    return NULL_TREE;

  return (*stack)->u.tlabel;
}

/* Make a copy of ENTRY using xmalloc to allocate the space.  */

static struct eh_entry *
copy_eh_entry (entry)
     struct eh_entry *entry;
{
  struct eh_entry *newentry;

  newentry = (struct eh_entry *) xmalloc (sizeof (struct eh_entry));
  bcopy ((char *) entry, (char *) newentry, sizeof (struct eh_entry));

  return newentry;
}

/* Push a new eh_node entry onto STACK, and return the start label for
   the entry.  */

static rtx
push_eh_entry (stack)
     struct eh_stack *stack;
{
  struct eh_node *node = (struct eh_node *) xmalloc (sizeof (struct eh_node));
  struct eh_entry *entry
    = (struct eh_entry *) xmalloc (sizeof (struct eh_entry));

  entry->start_label = gen_label_rtx ();
  entry->end_label = gen_label_rtx ();
  entry->exception_handler_label = gen_label_rtx ();
  entry->finalization = NULL_TREE;

  node->entry = entry;
  node->chain = stack->top;
  stack->top = node;

  return entry->start_label;
}
557 | ||
558 | /* Pop an entry from the given STACK. */ | |
559 | ||
560 | static struct eh_entry * | |
561 | pop_eh_entry (stack) | |
562 | struct eh_stack *stack; | |
563 | { | |
564 | struct eh_node *tempnode; | |
565 | struct eh_entry *tempentry; | |
566 | ||
567 | tempnode = stack->top; | |
568 | tempentry = tempnode->entry; | |
569 | stack->top = stack->top->chain; | |
570 | free (tempnode); | |
571 | ||
572 | return tempentry; | |
573 | } | |
574 | ||
575 | /* Enqueue an ENTRY onto the given QUEUE. */ | |
576 | ||
577 | static void | |
578 | enqueue_eh_entry (queue, entry) | |
579 | struct eh_queue *queue; | |
580 | struct eh_entry *entry; | |
581 | { | |
582 | struct eh_node *node = (struct eh_node *) xmalloc (sizeof (struct eh_node)); | |
583 | ||
584 | node->entry = entry; | |
585 | node->chain = NULL; | |
586 | ||
587 | if (queue->head == NULL) | |
588 | { | |
589 | queue->head = node; | |
590 | } | |
591 | else | |
592 | { | |
593 | queue->tail->chain = node; | |
594 | } | |
595 | queue->tail = node; | |
596 | } | |
597 | ||
598 | /* Dequeue an entry from the given QUEUE. */ | |
599 | ||
600 | static struct eh_entry * | |
601 | dequeue_eh_entry (queue) | |
602 | struct eh_queue *queue; | |
603 | { | |
604 | struct eh_node *tempnode; | |
605 | struct eh_entry *tempentry; | |
606 | ||
607 | if (queue->head == NULL) | |
608 | return NULL; | |
609 | ||
610 | tempnode = queue->head; | |
611 | queue->head = queue->head->chain; | |
612 | ||
613 | tempentry = tempnode->entry; | |
614 | free (tempnode); | |
615 | ||
616 | return tempentry; | |
617 | } | |
618 | \f | |
619 | /* Routine to see if exception exception handling is turned on. | |
620 | DO_WARN is non-zero if we want to inform the user that exception | |
12670d88 RK |
621 | handling is turned off. |
622 | ||
623 | This is used to ensure that -fexceptions has been specified if the | |
abeeec2a | 624 | compiler tries to use any exception-specific functions. */ |
4956d07c MS |
625 | |
626 | int | |
627 | doing_eh (do_warn) | |
628 | int do_warn; | |
629 | { | |
630 | if (! flag_exceptions) | |
631 | { | |
632 | static int warned = 0; | |
633 | if (! warned && do_warn) | |
634 | { | |
635 | error ("exception handling disabled, use -fexceptions to enable"); | |
636 | warned = 1; | |
637 | } | |
638 | return 0; | |
639 | } | |
640 | return 1; | |
641 | } | |
642 | ||
12670d88 | 643 | /* Given a return address in ADDR, determine the address we should use |
abeeec2a | 644 | to find the corresponding EH region. */ |
4956d07c MS |
645 | |
646 | rtx | |
647 | eh_outer_context (addr) | |
648 | rtx addr; | |
649 | { | |
650 | /* First mask out any unwanted bits. */ | |
651 | #ifdef MASK_RETURN_ADDR | |
652 | emit_insn (gen_rtx (SET, Pmode, | |
653 | addr, | |
654 | gen_rtx (AND, Pmode, | |
655 | addr, MASK_RETURN_ADDR))); | |
656 | #endif | |
657 | ||
12670d88 RK |
658 | /* Then subtract out enough to get into the appropriate region. If |
659 | this is defined, assume we don't need to subtract anything as it | |
660 | is already within the correct region. */ | |
4956d07c MS |
661 | #if ! defined (RETURN_ADDR_OFFSET) |
662 | addr = plus_constant (addr, -1); | |
663 | #endif | |
664 | ||
665 | return addr; | |
666 | } | |
667 | ||
/* Start a new exception region for a region of code that has a
   cleanup action and push the HANDLER for the region onto
   protect_list.  All of the regions created with add_partial_entry
   will be ended when end_protect_partials is invoked.  */

void
add_partial_entry (handler)
     tree handler;
{
  expand_eh_region_start ();

  /* Make sure the entry is on the correct obstack.  */
  push_obstacks_nochange ();
  resume_temporary_allocation ();

  /* Because this is a cleanup action, we may have to protect the handler
     with __terminate.  */
  handler = protect_with_terminate (handler);

  protect_list = tree_cons (NULL_TREE, handler, protect_list);
  pop_obstacks ();
}

/* Get a reference to the dynamic handler chain.  It points to the
   pointer to the next element in the dynamic handler chain.  It ends
   when there are no more elements in the dynamic handler chain, when
   the value is &top_elt from libgcc2.c.  Immediately after the
   pointer is an area suitable for setjmp/longjmp when
   USE_BUILTIN_SETJMP isn't defined, and an area suitable for
   __builtin_setjmp/__builtin_longjmp when USE_BUILTIN_SETJMP is
   defined.

   This routine is here to facilitate the porting of this code to
   systems with threads.  One can either replace the routine we emit a
   call for here in libgcc2.c, or one can modify this routine to work
   with their thread system.  */

rtx
get_dynamic_handler_chain ()
{
#if 0
  /* Do this once we figure out how to get this to the front of the
     function, and we really only want one per real function, not one
     per inlined function.  */
  if (current_function_dhc == 0)
    {
      rtx dhc, insns;
      start_sequence ();

      dhc = emit_library_call_value (get_dynamic_handler_chain_libfunc,
				     NULL_RTX, 1,
				     Pmode, 0);
      current_function_dhc = copy_to_reg (dhc);
      insns = get_insns ();
      end_sequence ();
      emit_insns_before (insns, get_first_nonparm_insn ());
    }
#else
  rtx dhc;
  dhc = emit_library_call_value (get_dynamic_handler_chain_libfunc,
				 NULL_RTX, 1,
				 Pmode, 0);
  current_function_dhc = copy_to_reg (dhc);
#endif

  /* We don't want a copy of the dhc, but rather, the single dhc.  */
  return gen_rtx (MEM, Pmode, current_function_dhc);
}

/* Get a reference to the dynamic cleanup chain.  It points to the
   pointer to the next element in the dynamic cleanup chain.
   Immediately after the pointer are two Pmode variables, one for a
   pointer to a function that performs the cleanup action, and the
   second, the argument to pass to that function.  */

rtx
get_dynamic_cleanup_chain ()
{
  rtx dhc, dcc;

  dhc = get_dynamic_handler_chain ();
  dcc = plus_constant (dhc, GET_MODE_SIZE (Pmode));

  current_function_dcc = copy_to_reg (dcc);

  /* We don't want a copy of the dcc, but rather, the single dcc.  */
  return gen_rtx (MEM, Pmode, current_function_dcc);
}

/* Generate code to evaluate X and jump to LABEL if the value is nonzero.
   LABEL is an rtx of code CODE_LABEL, in this function.  */

void
jumpif_rtx (x, label)
     rtx x;
     rtx label;
{
  jumpif (make_tree (type_for_mode (GET_MODE (x), 0), x), label);
}

/* Generate code to evaluate X and jump to LABEL if the value is zero.
   LABEL is an rtx of code CODE_LABEL, in this function.  */

void
jumpifnot_rtx (x, label)
     rtx x;
     rtx label;
{
  jumpifnot (make_tree (type_for_mode (GET_MODE (x), 0), x), label);
}

/* Start a dynamic cleanup on the EH runtime dynamic cleanup stack.
   We just need to create an element for the cleanup list, and push it
   into the chain.

   A dynamic cleanup is a cleanup action implied by the presence of an
   element on the EH runtime dynamic cleanup stack that is to be
   performed when an exception is thrown.  The cleanup action is
   performed by __sjthrow when an exception is thrown.  Only certain
   actions can be optimized into dynamic cleanup actions.  For the
   restrictions on what actions can be performed using this routine,
   see expand_eh_region_start_tree.  */

static void
start_dynamic_cleanup (func, arg)
     tree func;
     tree arg;
{
  rtx dhc, dcc;
  rtx new_func, new_arg;
  rtx x, buf;
  int size;

  /* We allocate enough room for a pointer to the function, and
     one argument.  */
  size = 2;

  /* XXX, FIXME: The stack space allocated this way is too long lived,
     but there is no allocation routine that allocates at the level of
     the last binding contour.  */
  buf = assign_stack_local (BLKmode,
			    GET_MODE_SIZE (Pmode)*(size+1),
			    0);

  buf = change_address (buf, Pmode, NULL_RTX);

  /* Store dcc into the first word of the newly allocated buffer.  */

  dcc = get_dynamic_cleanup_chain ();
  emit_move_insn (buf, dcc);

  /* Store func and arg into the cleanup list element.  */

  new_func = gen_rtx (MEM, Pmode, plus_constant (XEXP (buf, 0),
						 GET_MODE_SIZE (Pmode)));
  new_arg = gen_rtx (MEM, Pmode, plus_constant (XEXP (buf, 0),
						GET_MODE_SIZE (Pmode)*2));
  x = expand_expr (func, new_func, Pmode, 0);
  if (x != new_func)
    emit_move_insn (new_func, x);

  x = expand_expr (arg, new_arg, Pmode, 0);
  if (x != new_arg)
    emit_move_insn (new_arg, x);

  /* Update the cleanup chain.  */

  emit_move_insn (dcc, XEXP (buf, 0));
}

/* Emit RTL to start a dynamic handler on the EH runtime dynamic
   handler stack.  This should only be used by expand_eh_region_start
   or expand_eh_region_start_tree.  */

static void
start_dynamic_handler ()
{
  rtx dhc, dcc;
  rtx x, arg;
  int size;

#ifdef USE_BUILTIN_SETJMP
  /* The number of Pmode words for the setjmp buffer, when using the
     builtin setjmp/longjmp, see expand_builtin, case
     BUILT_IN_LONGJMP.  */
  size = 5;
#else
#ifdef JMP_BUF_SIZE
  size = JMP_BUF_SIZE;
#else
  /* Should be large enough for most systems, if it is not,
     JMP_BUF_SIZE should be defined with the proper value.  It will
     also tend to be larger than necessary for most systems, a more
     optimal port will define JMP_BUF_SIZE.  */
  size = FIRST_PSEUDO_REGISTER+2;
#endif
#endif
  /* XXX, FIXME: The stack space allocated this way is too long lived,
     but there is no allocation routine that allocates at the level of
     the last binding contour.  */
  arg = assign_stack_local (BLKmode,
			    GET_MODE_SIZE (Pmode)*(size+1),
			    0);

  arg = change_address (arg, Pmode, NULL_RTX);

  /* Store dhc into the first word of the newly allocated buffer.  */

  dhc = get_dynamic_handler_chain ();
  dcc = gen_rtx (MEM, Pmode, plus_constant (XEXP (arg, 0),
					    GET_MODE_SIZE (Pmode)));
  emit_move_insn (arg, dhc);

  /* Zero out the start of the cleanup chain.  */
  emit_move_insn (dcc, const0_rtx);

  /* The jmpbuf starts two words into the area allocated.  */

  x = emit_library_call_value (setjmp_libfunc, NULL_RTX, 1, SImode, 1,
			       plus_constant (XEXP (arg, 0),
					      GET_MODE_SIZE (Pmode)*2),
			       Pmode);

  /* If we come back here for a catch, transfer control to the
     handler.  */

  jumpif_rtx (x, ehstack.top->entry->exception_handler_label);

  /* We are committed to this, so update the handler chain.  */

  emit_move_insn (dhc, XEXP (arg, 0));
}
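At runtime, the structure emitted above amounts to a chain of setjmp buffers linked through their first word: entering a region pushes a buffer, throwing longjmps to the innermost one. A freestanding sketch of that discipline (the `handler`/`push_handler`/`throw_exception` names are illustrative only; the real chain lives in libgcc2.c and is walked by __sjthrow, which also runs cleanups):

```c
#include <setjmp.h>
#include <stddef.h>

/* One dynamic handler: a link to the enclosing handler plus a
   jmp_buf, mirroring the [chain][cleanups][jmpbuf...] layout the
   RTL above lays out on the stack.  */
struct handler
{
  struct handler *chain;
  jmp_buf jmpbuf;
};

static struct handler *handler_chain = NULL;

/* Entering a region links a new handler at the head of the chain,
   as the final emit_move_insn above does.  */
static void
push_handler (struct handler *h)
{
  h->chain = handler_chain;
  handler_chain = h;
}

/* Leaving a region normally unlinks it again.  */
static void
pop_handler (struct handler *h)
{
  handler_chain = h->chain;
}

/* Unwind to the innermost handler, as __sjthrow would; the handler
   is popped before control is transferred.  */
static void
throw_exception (void)
{
  struct handler *h = handler_chain;
  handler_chain = h->chain;
  longjmp (h->jmpbuf, 1);
}
```

A region is then `push_handler (&h); if (setjmp (h.jmpbuf)) ... /* catch */ else ... /* body */`, which is exactly the shape of the setjmp call and `jumpif_rtx` branch emitted above.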

/* Start an exception handling region for the given cleanup action.
   All instructions emitted after this point are considered to be part
   of the region until expand_eh_region_end is invoked.  CLEANUP is
   the cleanup action to perform.  The return value is true if the
   exception region was optimized away.  In that case,
   expand_eh_region_end does not need to be called for this cleanup,
   nor should it be.

   This routine notices one particular common case in C++ code
   generation, and optimizes it so as to not need the exception
   region.  It works by creating a dynamic cleanup action, instead of
   using an exception region.  */

int
expand_eh_region_start_tree (cleanup)
     tree cleanup;
{
  rtx note;

  /* This is the old code.  */
  if (! doing_eh (0))
    return 0;

  /* The optimization only applies to actions protected with
     terminate, and only applies if we are using the setjmp/longjmp
     codegen method.  */
  if (exceptions_via_longjmp
      && protect_cleanup_actions_with_terminate)
    {
      tree func, arg;
      tree args;

      /* Ignore any UNSAVE_EXPR.  */
      if (TREE_CODE (cleanup) == UNSAVE_EXPR)
	cleanup = TREE_OPERAND (cleanup, 0);

      /* Further, it only applies if the action is a call, if there
	 are 2 arguments, and if the second argument is 2.  */

      if (TREE_CODE (cleanup) == CALL_EXPR
	  && (args = TREE_OPERAND (cleanup, 1))
	  && (func = TREE_OPERAND (cleanup, 0))
	  && (arg = TREE_VALUE (args))
	  && (args = TREE_CHAIN (args))

	  /* Is the second argument 2?  */
	  && TREE_CODE (TREE_VALUE (args)) == INTEGER_CST
	  && TREE_INT_CST_LOW (TREE_VALUE (args)) == 2
	  && TREE_INT_CST_HIGH (TREE_VALUE (args)) == 0

	  /* Make sure there are no other arguments.  */
	  && TREE_CHAIN (args) == NULL_TREE)
	{
	  /* Arrange for returns and gotos to pop the entry we make on the
	     dynamic cleanup stack.  */
	  expand_dcc_cleanup ();
	  start_dynamic_cleanup (func, arg);
	  return 1;
	}
    }

  if (exceptions_via_longjmp)
    {
      /* We need a new block to record the start and end of the
	 dynamic handler chain.  We could always do this, but we
	 really want to permit jumping into such a block, and we want
	 to avoid any errors or performance impact in the SJ EH code
	 for now.  */
      expand_start_bindings (0);

      /* But we don't need or want a new temporary level.  */
      pop_temp_slots ();

      /* Mark this block as created by expand_eh_region_start.  This
	 is so that we can pop the block with expand_end_bindings
	 automatically.  */
      mark_block_as_eh_region ();

      /* Arrange for returns and gotos to pop the entry we make on the
	 dynamic handler stack.  */
      expand_dhc_cleanup ();
    }

  if (exceptions_via_longjmp == 0)
    note = emit_note (NULL_PTR, NOTE_INSN_EH_REGION_BEG);
  emit_label (push_eh_entry (&ehstack));
  if (exceptions_via_longjmp == 0)
    NOTE_BLOCK_NUMBER (note)
      = CODE_LABEL_NUMBER (ehstack.top->entry->exception_handler_label);
  if (exceptions_via_longjmp)
    start_dynamic_handler ();

  return 0;
}

/* Start an exception handling region.  All instructions emitted after
   this point are considered to be part of the region until
   expand_eh_region_end is invoked.  */

void
expand_eh_region_start ()
{
  rtx note;

  /* This is the old code.  */
  if (! doing_eh (0))
    return;

  if (exceptions_via_longjmp)
    {
      /* We need a new block to record the start and end of the
	 dynamic handler chain.  We could always do this, but we
	 really want to permit jumping into such a block, and we want
	 to avoid any errors or performance impact in the SJ EH code
	 for now.  */
      expand_start_bindings (0);

      /* But we don't need or want a new temporary level.  */
      pop_temp_slots ();

      /* Mark this block as created by expand_eh_region_start.  This
	 is so that we can pop the block with expand_end_bindings
	 automatically.  */
      mark_block_as_eh_region ();

      /* Arrange for returns and gotos to pop the entry we make on the
	 dynamic handler stack.  */
      expand_dhc_cleanup ();
    }

  if (exceptions_via_longjmp == 0)
    note = emit_note (NULL_PTR, NOTE_INSN_EH_REGION_BEG);
  emit_label (push_eh_entry (&ehstack));
  if (exceptions_via_longjmp == 0)
    NOTE_BLOCK_NUMBER (note)
      = CODE_LABEL_NUMBER (ehstack.top->entry->exception_handler_label);
  if (exceptions_via_longjmp)
    start_dynamic_handler ();
}

/* End an exception handling region.  The information about the region
   is found on the top of ehstack.

   HANDLER is either the cleanup for the exception region, or if we're
   marking the end of a try block, HANDLER is integer_zero_node.

   HANDLER will be transformed to rtl when expand_leftover_cleanups
   is invoked.  */

void
expand_eh_region_end (handler)
     tree handler;
{
  struct eh_entry *entry;

  if (! doing_eh (0))
    return;

  entry = pop_eh_entry (&ehstack);

  if (exceptions_via_longjmp == 0)
    {
      rtx note = emit_note (NULL_PTR, NOTE_INSN_EH_REGION_END);
      NOTE_BLOCK_NUMBER (note)
	= CODE_LABEL_NUMBER (entry->exception_handler_label);
    }

  /* Emit a label marking the end of this exception region.  */
  emit_label (entry->end_label);

  if (exceptions_via_longjmp == 0)
    {
      /* Put in something that takes up space, as otherwise the end
	 address for this EH region could have the exact same address as
	 its outer region.  This would cause us to miss the fact that
	 resuming exception handling with this PC value would be inside
	 the outer region.  */
      emit_insn (gen_nop ());
    }

  entry->finalization = handler;

  enqueue_eh_entry (&ehqueue, entry);

  /* If we have already started ending the bindings, don't recurse.
     This only happens when exceptions_via_longjmp is true.  */
  if (is_eh_region ())
    {
      /* Because we don't need or want a new temporary level and
	 because we didn't create one in expand_eh_region_start,
	 create a fake one now to avoid removing one in
	 expand_end_bindings.  */
      push_temp_slots ();

      mark_block_as_not_eh_region ();

      /* Maybe do this to prevent jumping in and so on...  */
      expand_end_bindings (NULL_TREE, 0, 0);
    }
}

/* If we are using the setjmp/longjmp EH codegen method, we emit a
   call to __sjthrow.

   Otherwise, we emit a call to __throw and note that we threw
   something, so we know we need to generate the necessary code for
   __throw.

   Before invoking throw, the __eh_pc variable must have been set up
   to contain the PC being thrown from.  This address is used by
   __throw to determine which exception region (if any) is
   responsible for handling the exception.  */

void
emit_throw ()
{
  if (exceptions_via_longjmp)
    {
      emit_library_call (sjthrow_libfunc, 0, VOIDmode, 0);
    }
  else
    {
#ifdef JUMP_TO_THROW
      emit_indirect_jump (throw_libfunc);
#else
      SYMBOL_REF_USED (throw_libfunc) = 1;
      emit_library_call (throw_libfunc, 0, VOIDmode, 0);
#endif
      throw_used = 1;
    }
  emit_barrier ();
}

/* An internal throw with an indirect CONTEXT we want to throw from.
   CONTEXT evaluates to the context of the throw.  */

static void
expand_internal_throw_indirect (context)
     rtx context;
{
  assemble_external (eh_saved_pc);
  emit_move_insn (eh_saved_pc_rtx, context);
  emit_throw ();
}

/* An internal throw with a direct CONTEXT we want to throw from.
   CONTEXT must be a label; its address will be used as the context of
   the throw.  */

void
expand_internal_throw (context)
     rtx context;
{
  expand_internal_throw_indirect (gen_rtx (LABEL_REF, Pmode, context));
}

/* Called from expand_exception_blocks and expand_end_catch_block to
   emit any pending handlers/cleanups queued from expand_eh_region_end.  */

void
expand_leftover_cleanups ()
{
  struct eh_entry *entry;

  while ((entry = dequeue_eh_entry (&ehqueue)) != 0)
    {
      rtx prev;

      /* A leftover try block.  Shouldn't be one here.  */
      if (entry->finalization == integer_zero_node)
	abort ();

      /* Output the label for the start of the exception handler.  */
      emit_label (entry->exception_handler_label);

      /* And now generate the insns for the handler.  */
      expand_expr (entry->finalization, const0_rtx, VOIDmode, 0);

      prev = get_last_insn ();
      if (prev == NULL || GET_CODE (prev) != BARRIER)
	{
	  if (exceptions_via_longjmp)
	    emit_throw ();
	  else
	    {
	      /* The below can be optimized away, and we could just
		 fall into the next EH handler, if we are certain they
		 are nested.  */
	      /* Emit code to throw to the outer context if we fall off
		 the end of the handler.  */
	      expand_internal_throw (entry->end_label);
	    }
	}

      free (entry);
    }
}

/* Called at the start of a block of try statements.  */
void
expand_start_try_stmts ()
{
  if (! doing_eh (1))
    return;

  expand_eh_region_start ();
}

/* Generate RTL for the start of a group of catch clauses.

   It is responsible for starting a new instruction sequence for the
   instructions in the catch block, and expanding the handlers for the
   internally-generated exception regions nested within the try block
   corresponding to this catch block.  */

void
expand_start_all_catch ()
{
  struct eh_entry *entry;
  tree label;

  if (! doing_eh (1))
    return;

  /* End the try block.  */
  expand_eh_region_end (integer_zero_node);

  emit_line_note (input_filename, lineno);
  label = build_decl (LABEL_DECL, NULL_TREE, NULL_TREE);

  /* The label for the exception handling block that we will save.
     This is Lresume in the documentation.  */
  expand_label (label);

  if (exceptions_via_longjmp == 0)
    {
      /* Put in something that takes up space, as otherwise the end
	 address for the EH region could have the exact same address as
	 the outer region, causing us to miss the fact that resuming
	 exception handling with this PC value would be inside the outer
	 region.  */
      emit_insn (gen_nop ());
    }

  /* Push the label that points to where normal flow is resumed onto
     the top of the label stack.  */
  push_label_entry (&caught_return_label_stack, NULL_RTX, label);

  /* Start a new sequence for all the catch blocks.  We will add this
     to the global sequence catch_clauses when we have completed all
     the handlers in this handler-seq.  */
  start_sequence ();

  while (1)
    {
      rtx prev;

      entry = dequeue_eh_entry (&ehqueue);
      /* Emit the label for the exception handler for this region, and
	 expand the code for the handler.

	 Note that a catch region is handled as a side-effect here;
	 for a try block, entry->finalization will contain
	 integer_zero_node, so no code will be generated in the
	 expand_expr call below.  But, the label for the handler will
	 still be emitted, so any code emitted after this point will
	 end up being the handler.  */
      emit_label (entry->exception_handler_label);

      /* When we get down to the matching entry for this try block, stop.  */
      if (entry->finalization == integer_zero_node)
	{
	  /* Don't forget to free this entry.  */
	  free (entry);
	  break;
	}

      /* And now generate the insns for the handler.  */
      expand_expr (entry->finalization, const0_rtx, VOIDmode, 0);

      prev = get_last_insn ();
      if (prev == NULL || GET_CODE (prev) != BARRIER)
	{
	  if (exceptions_via_longjmp)
	    emit_throw ();
	  else
	    {
	      /* Code to throw out to outer context when we fall off end
		 of the handler.  We can't do this here for catch blocks,
		 so it's done in expand_end_all_catch instead.

		 The below can be optimized away (and we could just fall
		 into the next EH handler) if we are certain they are
		 nested.  */

	      expand_internal_throw (entry->end_label);
	    }
	}
      free (entry);
    }
}

/* Finish up the catch block.  At this point all the insns for the
   catch clauses have already been generated, so we only have to add
   them to the catch_clauses list.  We also want to make sure that if
   we fall off the end of the catch clauses that we rethrow to the
   outer EH region.  */

void
expand_end_all_catch ()
{
  rtx new_catch_clause;

  if (! doing_eh (1))
    return;

  if (exceptions_via_longjmp)
    emit_throw ();
  else
    {
      /* Code to throw out to outer context, if we fall off end of catch
	 handlers.  This is rethrow (Lresume, same id, same obj) in the
	 documentation.  We use Lresume because we know that it will throw
	 to the correct context.

	 In other words, if the catch handler doesn't exit or return, we
	 do a "throw" (using the address of Lresume as the point being
	 thrown from) so that the outer EH region can then try to process
	 the exception.  */

      expand_internal_throw (DECL_RTL (top_label_entry (&caught_return_label_stack)));
    }

  /* Now we have the complete catch sequence.  */
  new_catch_clause = get_insns ();
  end_sequence ();

  /* This level of catch blocks is done, so set up the successful
     catch jump label for the next layer of catch blocks.  */
  pop_label_entry (&caught_return_label_stack);

  /* Add the new sequence of catches to the main one for this function.  */
  push_to_sequence (catch_clauses);
  emit_insns (new_catch_clause);
  catch_clauses = get_insns ();
  end_sequence ();

  /* Here we fall through into the continuation code.  */
}

/* End all the pending exception regions on protect_list.  The handlers
   will be emitted when expand_leftover_cleanups is invoked.  */

void
end_protect_partials ()
{
  while (protect_list)
    {
      expand_eh_region_end (TREE_VALUE (protect_list));
      protect_list = TREE_CHAIN (protect_list);
    }
}

/* Arrange for __terminate to be called if there is an unhandled throw
   from within E.  */

tree
protect_with_terminate (e)
     tree e;
{
  /* We only need to do this when using setjmp/longjmp EH and the
     language requires it, as otherwise we protect all of the handlers
     at once, if we need to.  */
  if (exceptions_via_longjmp && protect_cleanup_actions_with_terminate)
    {
      tree handler, result;

      /* All cleanups must be on the function_obstack.  */
      push_obstacks_nochange ();
      resume_temporary_allocation ();

      handler = make_node (RTL_EXPR);
      TREE_TYPE (handler) = void_type_node;
      RTL_EXPR_RTL (handler) = const0_rtx;
      TREE_SIDE_EFFECTS (handler) = 1;
      start_sequence_for_rtl_expr (handler);

      emit_library_call (terminate_libfunc, 0, VOIDmode, 0);
      emit_barrier ();

      RTL_EXPR_SEQUENCE (handler) = get_insns ();
      end_sequence ();

      result = build (TRY_CATCH_EXPR, TREE_TYPE (e), e, handler);
      TREE_SIDE_EFFECTS (result) = TREE_SIDE_EFFECTS (e);
      TREE_THIS_VOLATILE (result) = TREE_THIS_VOLATILE (e);
      TREE_READONLY (result) = TREE_READONLY (e);

      pop_obstacks ();

      e = result;
    }

  return e;
}
\f
/* The exception table that we build that is used for looking up and
   dispatching exceptions, the current number of entries, and its
   maximum size before we have to extend it.

   The number in eh_table is the code label number of the exception
   handler for the region.  This is added by add_eh_table_entry and
   used by output_exception_table_entry.  */

static int *eh_table;
static int eh_table_size;
static int eh_table_max_size;

/* Note the need for an exception table entry for region N.  If we
   don't need to output an explicit exception table, avoid all of the
   extra work.

   Called from final_scan_insn when a NOTE_INSN_EH_REGION_BEG is seen.
   N is the NOTE_BLOCK_NUMBER of the note, which comes from the code
   label number of the exception handler for the region.  */

void
add_eh_table_entry (n)
     int n;
{
#ifndef OMIT_EH_TABLE
  if (eh_table_size >= eh_table_max_size)
    {
      if (eh_table)
	{
	  eh_table_max_size += eh_table_max_size>>1;

	  if (eh_table_max_size < 0)
	    abort ();

	  if ((eh_table = (int *) realloc (eh_table,
					   eh_table_max_size * sizeof (int)))
	      == 0)
	    fatal ("virtual memory exhausted");
	}
      else
	{
	  eh_table_max_size = 252;
	  eh_table = (int *) xmalloc (eh_table_max_size * sizeof (int));
	}
    }
  eh_table[eh_table_size++] = n;
#endif
}
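The growth policy above is a standard amortized scheme: start at a fixed capacity (252 entries) and grow by half each time the buffer fills. A standalone sketch of the same policy (the `int_vec` names are hypothetical; the real code uses xmalloc/fatal rather than reporting failure to the caller):

```c
#include <stdlib.h>

struct int_vec { int *data; int size; int max_size; };

/* Append N, growing the buffer by 50% when full, as
   add_eh_table_entry does.  Returns 0 on allocation failure,
   1 on success.  */
static int
int_vec_push (struct int_vec *v, int n)
{
  if (v->size >= v->max_size)
    {
      if (v->data)
	{
	  v->max_size += v->max_size >> 1;
	  v->data = (int *) realloc (v->data, v->max_size * sizeof (int));
	}
      else
	{
	  v->max_size = 252;
	  v->data = (int *) malloc (v->max_size * sizeof (int));
	}
      if (v->data == 0)
	return 0;
    }
  v->data[v->size++] = n;
  return 1;
}
```

The `eh_table_max_size < 0` check in the real code guards against the 50% growth overflowing `int` on pathologically large tables; the sketch omits it for brevity.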

/* Return a non-zero value if we need to output an exception table.

   On some platforms, we don't have to output a table explicitly, so a
   zero return doesn't mean there are no exception regions, only that
   no explicit table needs to be output.  */

int
exception_table_p ()
{
  if (eh_table)
    return 1;

  return 0;
}

/* Output the entry of the exception table corresponding to the
   exception region numbered N to file FILE.

   N is the code label number corresponding to the handler of the
   region.  */

static void
output_exception_table_entry (file, n)
     FILE *file;
     int n;
{
  char buf[256];
  rtx sym;

  ASM_GENERATE_INTERNAL_LABEL (buf, "LEHB", n);
  sym = gen_rtx (SYMBOL_REF, Pmode, buf);
  assemble_integer (sym, POINTER_SIZE / BITS_PER_UNIT, 1);

  ASM_GENERATE_INTERNAL_LABEL (buf, "LEHE", n);
  sym = gen_rtx (SYMBOL_REF, Pmode, buf);
  assemble_integer (sym, POINTER_SIZE / BITS_PER_UNIT, 1);

  ASM_GENERATE_INTERNAL_LABEL (buf, "L", n);
  sym = gen_rtx (SYMBOL_REF, Pmode, buf);
  assemble_integer (sym, POINTER_SIZE / BITS_PER_UNIT, 1);

  putc ('\n', file);		/* blank line */
}

/* Output the exception table if we have and need one.  */

void
output_exception_table ()
{
  int i;
  extern FILE *asm_out_file;

  if (! doing_eh (0))
    return;

  exception_section ();

  /* Beginning marker for table.  */
  assemble_align (GET_MODE_ALIGNMENT (ptr_mode));
  assemble_label ("__EXCEPTION_TABLE__");

  assemble_integer (const0_rtx, POINTER_SIZE / BITS_PER_UNIT, 1);
  assemble_integer (const0_rtx, POINTER_SIZE / BITS_PER_UNIT, 1);
  assemble_integer (const0_rtx, POINTER_SIZE / BITS_PER_UNIT, 1);
  putc ('\n', asm_out_file);		/* blank line */

  for (i = 0; i < eh_table_size; ++i)
    output_exception_table_entry (asm_out_file, eh_table[i]);

  free (eh_table);

  /* Ending marker for table.  */
  assemble_label ("__EXCEPTION_END__");
  assemble_integer (constm1_rtx, POINTER_SIZE / BITS_PER_UNIT, 1);
  assemble_integer (constm1_rtx, POINTER_SIZE / BITS_PER_UNIT, 1);
  assemble_integer (constm1_rtx, POINTER_SIZE / BITS_PER_UNIT, 1);
  putc ('\n', asm_out_file);		/* blank line */
}
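The emitted table is a flat sequence of pointer triples — region start (`LEHB`), region end (`LEHE`), and handler label (`L`) — opened by an all-zero triple at `__EXCEPTION_TABLE__` and terminated by an all-minus-one triple before `__EXCEPTION_END__`. A runtime lookup over such a layout might look like the following sketch; the struct and function names are assumptions for illustration, not libgcc's actual API:

```c
#include <stddef.h>

/* Sketch of a lookup over the table layout emitted above: linear
   scan through (start, end, handler) triples until the all-ones
   terminator, returning the first handler whose region covers PC.
   Names and the exact matching rules are illustrative.  */
struct eh_entry { void *start; void *end; void *handler; };

static void *
find_handler (const struct eh_entry *table, void *pc)
{
  for (; table->start != (void *) -1L; table++)
    if ((char *) pc >= (char *) table->start
	&& (char *) pc < (char *) table->end)
      return table->handler;
  return NULL;			/* no region covers PC */
}
```

The linear scan keeps the emitted table trivially simple; a sorted table with binary search would be the obvious refinement for large functions.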

/* Generate code to initialize the exception table at program startup
   time.  */

void
register_exception_table ()
{
  emit_library_call (gen_rtx (SYMBOL_REF, Pmode, "__register_exceptions"), 0,
		     VOIDmode, 1,
		     gen_rtx (SYMBOL_REF, Pmode, "__EXCEPTION_TABLE__"),
		     Pmode);
}
\f
/* Emit the RTL for the start of the per-function unwinder for the
   current function.  See emit_unwinder for further information.

   DOESNT_NEED_UNWINDER is a target-specific macro that determines
   whether the current function actually needs a per-function
   unwinder.  By default, all functions need one.  */

void
start_eh_unwinder ()
{
#ifdef DOESNT_NEED_UNWINDER
  if (DOESNT_NEED_UNWINDER)
    return;
#endif

  /* If we are using the setjmp/longjmp implementation, we don't need a
     per-function unwinder.  */

  if (exceptions_via_longjmp)
    return;

  expand_eh_region_start ();
}


/* Emit insns for the end of the per-function unwinder for the
   current function.  */

void
end_eh_unwinder ()
{
  tree expr;
  rtx return_val_rtx, ret_val, label, end, insns;

  if (! doing_eh (0))
    return;

#ifdef DOESNT_NEED_UNWINDER
  if (DOESNT_NEED_UNWINDER)
    return;
#endif

  /* If we are using the setjmp/longjmp implementation, we don't need a
     per-function unwinder.  */

  if (exceptions_via_longjmp)
    return;

  assemble_external (eh_saved_pc);

  expr = make_node (RTL_EXPR);
  TREE_TYPE (expr) = void_type_node;
  RTL_EXPR_RTL (expr) = const0_rtx;
  TREE_SIDE_EFFECTS (expr) = 1;
  start_sequence_for_rtl_expr (expr);

  /* ret_val will contain the address of the code where the call
     to the current function occurred.  */
  ret_val = expand_builtin_return_addr (BUILT_IN_RETURN_ADDRESS,
					0, hard_frame_pointer_rtx);
  return_val_rtx = copy_to_reg (ret_val);

  /* Get the address we need to use to determine what exception
     handler should be invoked, and store it in __eh_pc.  */
  return_val_rtx = eh_outer_context (return_val_rtx);
  emit_move_insn (eh_saved_pc_rtx, return_val_rtx);

  /* Either set things up so we do a return directly to __throw, or
     we return here instead.  */
#ifdef JUMP_TO_THROW
  emit_move_insn (ret_val, throw_libfunc);
#else
  label = gen_label_rtx ();
  emit_move_insn (ret_val, gen_rtx (LABEL_REF, Pmode, label));
#endif

#ifdef RETURN_ADDR_OFFSET
  return_val_rtx = plus_constant (ret_val, -RETURN_ADDR_OFFSET);
  if (return_val_rtx != ret_val)
    emit_move_insn (ret_val, return_val_rtx);
#endif

  end = gen_label_rtx ();
  emit_jump (end);

  RTL_EXPR_SEQUENCE (expr) = get_insns ();
  end_sequence ();

  expand_eh_region_end (expr);

  emit_jump (end);

#ifndef JUMP_TO_THROW
  emit_label (label);
  emit_throw ();
#endif

  expand_leftover_cleanups ();

  emit_label (end);
}

/* If necessary, emit insns for the per-function unwinder for the
   current function.  Called after all the code that needs unwind
   protection is output.

   The unwinder takes care of catching any exceptions that have not
   been previously caught within the function, unwinding the stack to
   the next frame, and rethrowing using the address of the current
   function's caller as the context of the throw.

   On some platforms __throw can do this by itself (or with the help
   of __unwind_function) so the per-function unwinder is
   unnecessary.

   We cannot place the unwinder into the function until after we know
   we are done inlining, as we don't want to have more than one
   unwinder per non-inlined function.  */

void
emit_unwinder ()
{
  rtx insns, insn;

  start_sequence ();
  start_eh_unwinder ();
  insns = get_insns ();
  end_sequence ();

  /* We place the start of the exception region associated with the
     per-function unwinder at the top of the function.  */
  if (insns)
    emit_insns_after (insns, get_insns ());

  start_sequence ();
  end_eh_unwinder ();
  insns = get_insns ();
  end_sequence ();

  /* And we place the end of the exception region before the USE and
     CLOBBER insns that may come at the end of the function.  */
  if (insns == 0)
    return;

  insn = get_last_insn ();
  while (GET_CODE (insn) == NOTE
	 || (GET_CODE (insn) == INSN
	     && (GET_CODE (PATTERN (insn)) == USE
		 || GET_CODE (PATTERN (insn)) == CLOBBER)))
    insn = PREV_INSN (insn);

  if (GET_CODE (insn) == CODE_LABEL
      && GET_CODE (PREV_INSN (insn)) == BARRIER)
    {
      insn = PREV_INSN (insn);
    }
  else
    {
      rtx label = gen_label_rtx ();
      emit_label_after (label, insn);
      insn = emit_jump_insn_after (gen_jump (label), insn);
      insn = emit_barrier_after (insn);
    }

  emit_insns_after (insns, insn);
}

/* Scan the current insns and build a list of handler labels.  The
   resulting list is placed in the global variable
   exception_handler_labels.

   It is called after the last exception handling region is added to
   the current function (when the rtl is almost all built for the
   current function) and before the jump optimization pass.  */

void
find_exception_handler_labels ()
{
  rtx insn;
  int max_labelno = max_label_num ();
  int min_labelno = get_first_label_num ();
  rtx *labels;

  exception_handler_labels = NULL_RTX;

  /* If we aren't doing exception handling, there isn't much to check.  */
  if (! doing_eh (0))
    return;

  /* Generate a handy reference to each label.  */
  labels = (rtx *) alloca ((max_labelno - min_labelno) * sizeof (rtx));
  bzero ((char *) labels, (max_labelno - min_labelno) * sizeof (rtx));

  /* Arrange for labels to be indexed directly by CODE_LABEL_NUMBER.  */
  labels -= min_labelno;

  for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
    {
      if (GET_CODE (insn) == CODE_LABEL)
	if (CODE_LABEL_NUMBER (insn) >= min_labelno
	    && CODE_LABEL_NUMBER (insn) < max_labelno)
	  labels[CODE_LABEL_NUMBER (insn)] = insn;
    }

  /* For each start of a region, add its label to the list.  */

  for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
    {
      if (GET_CODE (insn) == NOTE
	  && NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_BEG)
	{
	  rtx label = NULL_RTX;

	  if (NOTE_BLOCK_NUMBER (insn) >= min_labelno
	      && NOTE_BLOCK_NUMBER (insn) < max_labelno)
	    {
	      label = labels[NOTE_BLOCK_NUMBER (insn)];

	      if (label)
		exception_handler_labels
		  = gen_rtx (EXPR_LIST, VOIDmode,
			     label, exception_handler_labels);
	      else
		warning ("didn't find handler for EH region %d",
			 NOTE_BLOCK_NUMBER (insn));
	    }
	  else
	    warning ("mismatched EH region %d", NOTE_BLOCK_NUMBER (insn));
	}
    }
}
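The `labels -= min_labelno` adjustment above biases the base pointer so the array can be indexed directly by CODE_LABEL_NUMBER even though storage exists only for the range [min_labelno, max_labelno). The same trick in isolation (illustrative names; note that forming a pointer before an array's start is formally undefined in ISO C, though the source above relies on it behaving as intended, as it does on common targets):

```c
/* Sketch of the pointer-biasing trick used by
   find_exception_handler_labels: allocate only max - min slots,
   then bias the base pointer so elements can be addressed directly
   by the original label numbers.  */
static int
biased_lookup (int min_labelno, int max_labelno, int query)
{
  int storage[16] = { 0 };		/* covers max - min <= 16 labels */
  int *labels = storage - min_labelno;	/* bias: index by label number */
  int n;

  for (n = min_labelno; n < max_labelno; n++)
    labels[n] = n * 10;			/* fill, indexed directly */

  return labels[query];			/* storage[query - min_labelno] */
}
```

The alternative is to subtract min_labelno at every use site; biasing once keeps the indexing expressions readable.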

/* Perform sanity checking on the exception_handler_labels list.

   Can be called after find_exception_handler_labels is called to
   build the list of exception handlers for the current function and
   before we finish processing the current function.  */

void
check_exception_handler_labels ()
{
  rtx insn, handler;

  /* If we aren't doing exception handling, there isn't much to check.  */
  if (! doing_eh (0))
    return;

  /* Ensure that the CODE_LABEL_NUMBER for the CODE_LABEL entry point
     in each handler corresponds to the CODE_LABEL_NUMBER of the
     handler.  */

  for (handler = exception_handler_labels;
       handler;
       handler = XEXP (handler, 1))
    {
      for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
	{
	  if (GET_CODE (insn) == CODE_LABEL)
	    {
	      if (CODE_LABEL_NUMBER (insn)
		  == CODE_LABEL_NUMBER (XEXP (handler, 0)))
		{
		  if (insn != XEXP (handler, 0))
		    warning ("mismatched handler %d",
			     CODE_LABEL_NUMBER (insn));
		  break;
		}
	    }
	}
      if (insn == NULL_RTX)
	warning ("handler not found %d",
		 CODE_LABEL_NUMBER (XEXP (handler, 0)));
    }

  /* Now go through and make sure that for each region there is a
     corresponding label.  */
  for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
    {
      if (GET_CODE (insn) == NOTE
	  && (NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_BEG
	      || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_END))
	{
	  for (handler = exception_handler_labels;
	       handler;
	       handler = XEXP (handler, 1))
	    {
	      if (CODE_LABEL_NUMBER (XEXP (handler, 0))
		  == NOTE_BLOCK_NUMBER (insn))
		break;
	    }
	  if (handler == NULL_RTX)
	    warning ("region exists, no handler %d",
		     NOTE_BLOCK_NUMBER (insn));
	}
    }
}
\f
/* This group of functions initializes the exception handling data
   structures at the start of the compilation, initializes the data
   structures at the start of a function, and saves and restores the
   exception handling data structures for the start/end of a nested
   function.  */

/* Toplevel initialization for EH things.  */

void
init_eh ()
{
  /* Generate rtl to reference the variable in which the PC of the
     current context is saved.  */
  tree type = build_pointer_type (make_node (VOID_TYPE));

  eh_saved_pc = build_decl (VAR_DECL, get_identifier ("__eh_pc"), type);
  DECL_EXTERNAL (eh_saved_pc) = 1;
  TREE_PUBLIC (eh_saved_pc) = 1;
  make_decl_rtl (eh_saved_pc, NULL_PTR, 1);
  eh_saved_pc_rtx = DECL_RTL (eh_saved_pc);
}

/* Initialize the per-function EH information.  */

void
init_eh_for_function ()
{
  ehstack.top = 0;
  ehqueue.head = ehqueue.tail = 0;
  catch_clauses = NULL_RTX;
  false_label_stack = 0;
  caught_return_label_stack = 0;
  protect_list = NULL_TREE;
  current_function_dhc = NULL_RTX;
  current_function_dcc = NULL_RTX;
}

/* Save some of the per-function EH info into the save area denoted by
   P.

   This is currently called from save_stmt_status.  */

void
save_eh_status (p)
     struct function *p;
{
  assert (p != NULL);

  p->ehstack = ehstack;
  p->ehqueue = ehqueue;
  p->catch_clauses = catch_clauses;
  p->false_label_stack = false_label_stack;
  p->caught_return_label_stack = caught_return_label_stack;
  p->protect_list = protect_list;
  p->dhc = current_function_dhc;
  p->dcc = current_function_dcc;

  init_eh ();
}

/* Restore the per-function EH info saved into the area denoted by P.

   This is currently called from restore_stmt_status.  */

void
restore_eh_status (p)
     struct function *p;
{
  assert (p != NULL);

  protect_list = p->protect_list;
  caught_return_label_stack = p->caught_return_label_stack;
  false_label_stack = p->false_label_stack;
  catch_clauses = p->catch_clauses;
  ehqueue = p->ehqueue;
  ehstack = p->ehstack;
  current_function_dhc = p->dhc;
  current_function_dcc = p->dcc;
}
\f
/* This section is for the exception handling specific optimization
   pass.  First are the internal routines, and then the main
   optimization pass.  */

/* Determine if the given INSN can throw an exception.  */

static int
can_throw (insn)
     rtx insn;
{
  /* Calls can always potentially throw exceptions.  */
  if (GET_CODE (insn) == CALL_INSN)
    return 1;

  if (asynchronous_exceptions)
    {
      /* If we want asynchronous exceptions, then everything but NOTEs
	 and CODE_LABELs could throw.  */
      if (GET_CODE (insn) != NOTE && GET_CODE (insn) != CODE_LABEL)
	return 1;
    }

  return 0;
}

/* Scan an exception region looking for the matching end and then
   remove it if possible.  INSN is the start of the region, N is the
   region number, and DELETE_OUTER is used to note whether anything in
   this region can throw.

   Regions are removed if they cannot possibly catch an exception.
   This is determined by invoking can_throw on each insn within the
   region; if can_throw returns true for any of the instructions, the
   region can catch an exception, since there is an insn within the
   region that is capable of throwing an exception.

   Returns the NOTE_INSN_EH_REGION_END corresponding to this region, or
   calls abort if it can't find one.

   Can abort if INSN is not a NOTE_INSN_EH_REGION_BEG, or if N doesn't
   correspond to the region number, or if DELETE_OUTER is NULL.  */

static rtx
scan_region (insn, n, delete_outer)
     rtx insn;
     int n;
     int *delete_outer;
{
  rtx start = insn;

  /* Assume we can delete the region.  */
  int delete = 1;

  assert (insn != NULL_RTX
	  && GET_CODE (insn) == NOTE
	  && NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_BEG
	  && NOTE_BLOCK_NUMBER (insn) == n
	  && delete_outer != NULL);

  insn = NEXT_INSN (insn);

  /* Look for the matching end.  */
  while (! (GET_CODE (insn) == NOTE
	    && NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_END))
    {
      /* If anything can throw, we can't remove the region.  */
      if (delete && can_throw (insn))
	{
	  delete = 0;
	}

      /* Watch out for and handle nested regions.  */
      if (GET_CODE (insn) == NOTE
	  && NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_BEG)
	{
	  insn = scan_region (insn, NOTE_BLOCK_NUMBER (insn), &delete);
	}

      insn = NEXT_INSN (insn);
    }

  /* The _BEG/_END NOTEs must match and nest.  */
  if (NOTE_BLOCK_NUMBER (insn) != n)
    abort ();

  /* If anything in this exception region can throw, we can throw.  */
  if (! delete)
    *delete_outer = 0;
  else
    {
      /* Delete the start and end of the region.  */
      delete_insn (start);
      delete_insn (insn);

      /* Only do this part if we have built the exception handler
	 labels.  */
      if (exception_handler_labels)
	{
	  rtx x, *prev = &exception_handler_labels;

	  /* Find it in the list of handlers.  */
	  for (x = exception_handler_labels; x; x = XEXP (x, 1))
	    {
	      rtx label = XEXP (x, 0);
	      if (CODE_LABEL_NUMBER (label) == n)
		{
		  /* If we are the last reference to the handler,
		     delete it.  */
		  if (--LABEL_NUSES (label) == 0)
		    delete_insn (label);

		  if (optimize)
		    {
		      /* Remove it from the list of exception handler
			 labels, if we are optimizing.  If we are not,
			 then leave it in the list, as we are not
			 really going to remove the region.  */
		      *prev = XEXP (x, 1);
		      XEXP (x, 1) = 0;
		      XEXP (x, 0) = 0;
		    }

		  break;
		}
	      prev = &XEXP (x, 1);
	    }
	}
    }
  return insn;
}
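The recursion in scan_region — walk forward to the matching end note, descend into nested regions, and propagate "something here can throw" outward through DELETE_OUTER — can be modeled on a toy instruction stream; the character encoding below is purely illustrative:

```c
/* Toy model of scan_region's recursion: 'B' begins a region, 'E'
   ends it, 'c' is an insn that can throw, '.' is one that cannot.
   Starting at s[i] == 'B', scan to the matching 'E', recursing into
   nested regions; if anything inside can throw, set *outer_throws so
   the enclosing region knows it cannot be deleted either.  Returns
   the index of the matching 'E'.  */
static int
scan_toy_region (const char *s, int i, int *outer_throws)
{
  int throws = 0;	/* assume the region is empty (deletable) */

  i++;			/* step past the opening 'B' */
  while (s[i] != 'E')
    {
      if (s[i] == 'c')
	throws = 1;				/* region must stay */
      if (s[i] == 'B')
	i = scan_toy_region (s, i, &throws);	/* nested region */
      i++;
    }

  if (throws)
    *outer_throws = 1;	/* propagate "can throw" to the outer region */
  return i;
}
```

In the string "B.B..Ec.E" the inner region is empty (deletable) while the 'c' after it forces the outer region to survive — the same asymmetry the real pass handles via the DELETE_OUTER out-parameter.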

/* Perform various interesting optimizations for exception handling
   code.

   We look for empty exception regions and make them go away.  The
   jump optimization code will remove the handler if nothing else uses
   it.  */

void
exception_optimize ()
{
  rtx insn, regions = NULL_RTX;
  int n;

  /* The below doesn't apply to setjmp/longjmp EH.  */
  if (exceptions_via_longjmp)
    return;

  /* Remove empty regions.  */
  for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
    {
      if (GET_CODE (insn) == NOTE
	  && NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_BEG)
	{
	  /* Since scan_region will return the NOTE_INSN_EH_REGION_END
	     insn, we will indirectly skip through all the insns in
	     between.  We are also guaranteed that the value of insn
	     returned will be valid, as otherwise scan_region won't
	     return.  */
	  insn = scan_region (insn, NOTE_BLOCK_NUMBER (insn), &n);
	}
    }
}