Commit | Line | Data |
---|---|---|
9c7e2978 | 1 | /* Perform instruction reorganizations for delay slot filling. |
af841dbd | 2 | Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, |
1e052c19 | 3 | 1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc. |
1923e516 | 4 | Contributed by Richard Kenner (kenner@vlsi1.ultra.nyu.edu). |
9c7e2978 RK |
5 | Hacked by Michael Tiemann (tiemann@cygnus.com). |
6 | ||
1322177d | 7 | This file is part of GCC. |
9c7e2978 | 8 | |
1322177d LB |
9 | GCC is free software; you can redistribute it and/or modify it under |
10 | the terms of the GNU General Public License as published by the Free | |
11 | Software Foundation; either version 2, or (at your option) any later | |
12 | version. | |
9c7e2978 | 13 | |
1322177d LB |
14 | GCC is distributed in the hope that it will be useful, but WITHOUT ANY |
15 | WARRANTY; without even the implied warranty of MERCHANTABILITY or | |
16 | FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License | |
17 | for more details. | |
9c7e2978 RK |
18 | |
19 | You should have received a copy of the GNU General Public License | |
1322177d LB |
20 | along with GCC; see the file COPYING. If not, write to the Free |
21 | Software Foundation, 59 Temple Place - Suite 330, Boston, MA | |
22 | 02111-1307, USA. */ | |
9c7e2978 | 23 | |
9c7e2978 RK |
24 | /* Instruction reorganization pass. |
25 | ||
26 | This pass runs after register allocation and final jump | |
27 | optimization. It should be the last pass to run before peephole. | |
28 | It serves primarily to fill delay slots of insns, typically branch | |
29 | and call insns. Other insns typically involve more complicated | |
6dc42e49 | 30 | interactions of data dependencies and resource constraints, and |
9c7e2978 RK |
31 | are better handled by scheduling before register allocation (by the |
32 | function `schedule_insns'). | |
33 | ||
34 | The Branch Penalty is the number of extra cycles that are needed to | |
35 | execute a branch insn. On an ideal machine, branches take a single | |
36 | cycle, and the Branch Penalty is 0. Several RISC machines approach | |
37 | branch delays differently: | |
38 | ||
fcfcb20e | 39 | The MIPS has a single branch delay slot. Most insns |
9c7e2978 RK |
40 | (except other branches) can be used to fill this slot. When the |
41 | slot is filled, two insns execute in two cycles, reducing the | |
42 | branch penalty to zero. | |
43 | ||
9c7e2978 RK |
44 | The SPARC always has a branch delay slot, but its effects can be |
45 | annulled when the branch is not taken. This means that failing to | |
46 | find other sources of insns, we can hoist an insn from the branch | |
47 | target that would only be safe to execute knowing that the branch | |
48 | is taken. | |
49 | ||
35523fce | 50 | The HP-PA always has a branch delay slot. For unconditional branches |
f27e6bb2 | 51 | its effects can be annulled when the branch is taken. The effects |
35523fce JL |
52 | of the delay slot in a conditional branch can be nullified for forward |
53 | taken branches, or for untaken backward branches. This means | |
54 | we can hoist insns from the fall-through path for forward branches or | |
55 | steal insns from the target of backward branches. | |
56 | ||
18e765cb JL |
57 | The TMS320C3x and C4x have three branch delay slots. When the three |
58 | slots are filled, the branch penalty is zero. Most insns can fill the | |
59 | delay slots except jump insns. | |
60 | ||
9c7e2978 RK |
61 | Three techniques for filling delay slots have been implemented so far: |
62 | ||
63 | (1) `fill_simple_delay_slots' is the simplest, most efficient way | |
64 | to fill delay slots. This pass first looks for insns which come | |
65 | from before the branch and which are safe to execute after the | |
66 | branch. Then it searches after the insn requiring delay slots or, | |
67 | in the case of a branch, for insns that are after the point at | |
68 | which the branch merges into the fallthrough code, if such a point | |
69 | exists. When such insns are found, the branch penalty decreases | |
70 | and no code expansion takes place (a short illustration follows this comment). |
71 | ||
72 | (2) `fill_eager_delay_slots' is more complicated: it is used for | |
73 | scheduling conditional jumps, or for scheduling jumps which cannot | |
74 | be filled using (1). A machine need not have annulled jumps to use | |
75 | this strategy, but it helps (by keeping more options open). | |
76 | `fill_eager_delay_slots' tries to guess the direction the branch | |
77 | will go; if it guesses right 100% of the time, it can reduce the | |
c0e12601 | 78 | branch penalty as much as `fill_simple_delay_slots' does. If it |
fcfcb20e | 79 | guesses wrong 100% of the time, it might as well schedule nops. When |
9c7e2978 RK |
80 | `fill_eager_delay_slots' takes insns from the fall-through path of |
81 | the jump, usually there is no code expansion; when it takes insns | |
82 | from the branch target, there is code expansion if it is not the | |
83 | only way to reach that target. | |
84 | ||
85 | (3) `relax_delay_slots' uses a set of rules to simplify code that | |
86 | has been reorganized by (1) and (2). It finds cases where | |
87 | a conditional test can be eliminated, jumps can be threaded, extra |
88 | insns can be eliminated, etc. It is the job of (1) and (2) to do a | |
89 | good job of scheduling locally; `relax_delay_slots' takes care of | |
90 | making the various individual schedules work well together. It is | |
91 | especially tuned to handle the control flow interactions of branch | |
92 | insns. It does nothing for insns with delay slots that do not | |
93 | branch. | |
94 | ||
95 | On machines that use CC0, we are very conservative. We will not make | |
96 | a copy of an insn involving CC0 since we want to maintain a 1-1 | |
d45cf215 | 97 | correspondence between the insn that sets CC0 and the insn that uses it. The insns are |
9c7e2978 RK |
98 | allowed to be separated by placing an insn that sets CC0 (but not an insn |
99 | that uses CC0; we could do this, but it doesn't seem worthwhile) in a | |
100 | delay slot. In that case, we point each insn at the other with REG_CC_USER | |
101 | and REG_CC_SETTER notes. Note that these restrictions affect very few | |
102 | machines because most RISC machines with delay slots will not use CC0 | |
103 | (the RT is the only known exception at this point). | |
104 | ||
105 | Not yet implemented: | |
106 | ||
107 | The Acorn Risc Machine can conditionally execute most insns, so | |
108 | it is profitable to move single insns into a position to execute | |
109 | based on the condition code of the previous insn. | |
110 | ||
111 | The HP-PA can conditionally nullify insns, providing a similar | |
6d2f8887 | 112 | effect to the ARM, differing mostly in which insn is "in charge". */ |
9c7e2978 | 113 | |
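/* Added illustration (not part of the original commentary; register names are
   invented): on a machine with a single delay slot such as the MIPS,
   technique (1) above can move an independent insn from before a branch into
   the branch's delay slot,

       addu  $4,$4,1               beq   $2,$3,L2
       beq   $2,$3,L2     ==>      addu  $4,$4,1    ; now in the delay slot
       (slot left empty)         L2:
     L2:

   The move is safe because the addu does not feed the comparison, so
   executing it after the branch has been decided changes nothing.  */
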
9c7e2978 | 114 | #include "config.h" |
670ee920 | 115 | #include "system.h" |
4977bab6 ZW |
116 | #include "coretypes.h" |
117 | #include "tm.h" | |
01198c2f | 118 | #include "toplev.h" |
9c7e2978 | 119 | #include "rtl.h" |
6baf1cc8 | 120 | #include "tm_p.h" |
51549d76 | 121 | #include "expr.h" |
49ad7cfa | 122 | #include "function.h" |
9c7e2978 RK |
123 | #include "insn-config.h" |
124 | #include "conditions.h" | |
125 | #include "hard-reg-set.h" | |
126 | #include "basic-block.h" | |
127 | #include "regs.h" | |
9c7e2978 RK |
128 | #include "recog.h" |
129 | #include "flags.h" | |
130 | #include "output.h" | |
131 | #include "obstack.h" | |
d80e9fd7 | 132 | #include "insn-attr.h" |
ca545bb5 | 133 | #include "resource.h" |
78b41166 | 134 | #include "except.h" |
0443f602 | 135 | #include "params.h" |
d80e9fd7 RS |
136 | |
137 | #ifdef DELAY_SLOTS | |
9c7e2978 | 138 | |
9c7e2978 | 139 | #ifndef ANNUL_IFTRUE_SLOTS |
35523fce | 140 | #define eligible_for_annul_true(INSN, SLOTS, TRIAL, FLAGS) 0 |
9c7e2978 RK |
141 | #endif |
142 | #ifndef ANNUL_IFFALSE_SLOTS | |
35523fce | 143 | #define eligible_for_annul_false(INSN, SLOTS, TRIAL, FLAGS) 0 |
9c7e2978 RK |
144 | #endif |
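/* Note (added): when a target defines neither ANNUL_IFTRUE_SLOTS nor
   ANNUL_IFFALSE_SLOTS, the fallback macros above report every candidate insn
   as ineligible, so the annulling code paths below are simply never taken.  */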
145 | ||
146 | /* Insns which have delay slots that have not yet been filled. */ | |
147 | ||
148 | static struct obstack unfilled_slots_obstack; | |
149 | static rtx *unfilled_firstobj; | |
150 | ||
151 | /* Define macros to refer to the first and last slot containing unfilled | |
152 | insns. These are used because the list may move and its address | |
153 | should be recomputed at each use. */ | |
154 | ||
155 | #define unfilled_slots_base \ | |
156 | ((rtx *) obstack_base (&unfilled_slots_obstack)) | |
157 | ||
158 | #define unfilled_slots_next \ | |
159 | ((rtx *) obstack_next_free (&unfilled_slots_obstack)) | |
160 | ||
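/* A sketch (added; the loop shape is illustrative, not a quote from this
   file) of how the queue is used: an insn whose delay slots could not be
   filled is recorded with

       obstack_ptr_grow (&unfilled_slots_obstack, insn);

   and the filling passes later walk the queue roughly as

       rtx *slot;
       for (slot = unfilled_slots_base; slot < unfilled_slots_next; slot++)
         ... fill the delay slots of *slot ...

   The base is re-read through the macro on each use because growing the
   obstack may move the whole block.  */
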
9c7e2978 RK |
161 | /* Points to the label before the end of the function. */ |
162 | static rtx end_of_function_label; | |
163 | ||
9c7e2978 RK |
164 | /* Mapping between INSN_UID's and position in the code since INSN_UID's do |
165 | not always monotonically increase. */ | |
166 | static int *uid_to_ruid; | |
167 | ||
168 | /* Highest valid index in `uid_to_ruid'. */ | |
169 | static int max_uid; | |
170 | ||
0c20a65f AJ |
171 | static int stop_search_p (rtx, int); |
172 | static int resource_conflicts_p (struct resources *, struct resources *); | |
173 | static int insn_references_resource_p (rtx, struct resources *, int); | |
174 | static int insn_sets_resource_p (rtx, struct resources *, int); | |
175 | static rtx find_end_label (void); | |
176 | static rtx emit_delay_sequence (rtx, rtx, int); | |
177 | static rtx add_to_delay_list (rtx, rtx); | |
178 | static rtx delete_from_delay_slot (rtx); | |
179 | static void delete_scheduled_jump (rtx); | |
180 | static void note_delay_statistics (int, int); | |
80cca0e1 | 181 | #if defined(ANNUL_IFFALSE_SLOTS) || defined(ANNUL_IFTRUE_SLOTS) |
0c20a65f | 182 | static rtx optimize_skip (rtx); |
80cca0e1 | 183 | #endif |
0c20a65f AJ |
184 | static int get_jump_flags (rtx, rtx); |
185 | static int rare_destination (rtx); | |
186 | static int mostly_true_jump (rtx, rtx); | |
187 | static rtx get_branch_condition (rtx, rtx); | |
188 | static int condition_dominates_p (rtx, rtx); | |
189 | static int redirect_with_delay_slots_safe_p (rtx, rtx, rtx); | |
190 | static int redirect_with_delay_list_safe_p (rtx, rtx, rtx); | |
191 | static int check_annul_list_true_false (int, rtx); | |
192 | static rtx steal_delay_list_from_target (rtx, rtx, rtx, rtx, | |
193 | struct resources *, | |
194 | struct resources *, | |
195 | struct resources *, | |
196 | int, int *, int *, rtx *); | |
197 | static rtx steal_delay_list_from_fallthrough (rtx, rtx, rtx, rtx, | |
198 | struct resources *, | |
199 | struct resources *, | |
200 | struct resources *, | |
201 | int, int *, int *); | |
202 | static void try_merge_delay_insns (rtx, rtx); | |
203 | static rtx redundant_insn (rtx, rtx, rtx); | |
204 | static int own_thread_p (rtx, rtx, int); | |
205 | static void update_block (rtx, rtx); | |
206 | static int reorg_redirect_jump (rtx, rtx); | |
207 | static void update_reg_dead_notes (rtx, rtx); | |
208 | static void fix_reg_dead_note (rtx, rtx); | |
209 | static void update_reg_unused_notes (rtx, rtx); | |
210 | static void fill_simple_delay_slots (int); | |
211 | static rtx fill_slots_from_thread (rtx, rtx, rtx, rtx, int, int, int, int, | |
212 | int *, rtx); | |
213 | static void fill_eager_delay_slots (void); | |
214 | static void relax_delay_slots (rtx); | |
57bed152 | 215 | #ifdef HAVE_return |
0c20a65f | 216 | static void make_return_insns (rtx); |
57bed152 | 217 | #endif |
9c7e2978 RK |
218 | \f |
219 | /* Return TRUE if this insn should stop the search for insns to fill delay |
220 | slots. LABELS_P indicates that labels should terminate the search. | |
221 | In all cases, jumps terminate the search. */ | |
222 | ||
223 | static int | |
0c20a65f | 224 | stop_search_p (rtx insn, int labels_p) |
9c7e2978 RK |
225 | { |
226 | if (insn == 0) | |
227 | return 1; | |
228 | ||
5cfc5356 EB |
229 | /* If the insn can throw an exception that is caught within the function, |
230 | it may effectively perform a jump from the viewpoint of the function. | |
231 | Therefore act like for a jump. */ | |
232 | if (can_throw_internal (insn)) | |
233 | return 1; | |
234 | ||
9c7e2978 RK |
235 | switch (GET_CODE (insn)) |
236 | { | |
237 | case NOTE: | |
238 | case CALL_INSN: | |
239 | return 0; | |
240 | ||
241 | case CODE_LABEL: | |
242 | return labels_p; | |
243 | ||
244 | case JUMP_INSN: | |
245 | case BARRIER: | |
246 | return 1; | |
247 | ||
248 | case INSN: | |
249 | /* OK unless it contains a delay slot or is an `asm' insn of some type. | |
250 | We don't know anything about these. */ | |
251 | return (GET_CODE (PATTERN (insn)) == SEQUENCE | |
252 | || GET_CODE (PATTERN (insn)) == ASM_INPUT | |
253 | || asm_noperands (PATTERN (insn)) >= 0); | |
254 | ||
255 | default: | |
41374e13 | 256 | gcc_unreachable (); |
9c7e2978 RK |
257 | } |
258 | } | |
259 | \f | |
260 | /* Return TRUE if any resources are marked in both RES1 and RES2 or if either | |
261 | resource set contains a volatile memory reference. Otherwise, return FALSE. */ | |
262 | ||
263 | static int | |
0c20a65f | 264 | resource_conflicts_p (struct resources *res1, struct resources *res2) |
9c7e2978 RK |
265 | { |
266 | if ((res1->cc && res2->cc) || (res1->memory && res2->memory) | |
8eae5ed6 | 267 | || (res1->unch_memory && res2->unch_memory) |
9c7e2978 RK |
268 | || res1->volatil || res2->volatil) |
269 | return 1; | |
270 | ||
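  /* Added note: when hard-reg-set.h can fit all the hard registers in a
     single word it defines HARD_REG_SET as a macro for that scalar type, so
     a single AND suffices; otherwise the set is an array of words and each
     element has to be checked separately.  */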
271 | #ifdef HARD_REG_SET | |
272 | return (res1->regs & res2->regs) != HARD_CONST (0); | |
273 | #else | |
274 | { | |
275 | int i; | |
276 | ||
277 | for (i = 0; i < HARD_REG_SET_LONGS; i++) | |
278 | if ((res1->regs[i] & res2->regs[i]) != 0) | |
279 | return 1; | |
280 | return 0; | |
281 | } | |
282 | #endif | |
283 | } | |
284 | ||
285 | /* Return TRUE if any resource marked in RES, a `struct resources', is | |
ab63953e | 286 | referenced by INSN. If INCLUDE_DELAYED_EFFECTS is set, also return TRUE if the |
9c7e2978 RK |
287 | routine called by INSN uses those resources. |
288 | ||
289 | We compute this by computing all the resources referenced by INSN and | |
290 | seeing if this conflicts with RES. It might be faster to directly check | |
291 | ourselves, and this is the way it used to work, but it means duplicating | |
292 | a large block of complex code. */ | |
293 | ||
294 | static int | |
0c20a65f AJ |
295 | insn_references_resource_p (rtx insn, struct resources *res, |
296 | int include_delayed_effects) | |
9c7e2978 RK |
297 | { |
298 | struct resources insn_res; | |
299 | ||
300 | CLEAR_RESOURCE (&insn_res); | |
674345b1 | 301 | mark_referenced_resources (insn, &insn_res, include_delayed_effects); |
9c7e2978 RK |
302 | return resource_conflicts_p (&insn_res, res); |
303 | } | |
304 | ||
305 | /* Return TRUE if INSN modifies resources that are marked in RES. | |
ab63953e | 306 | INCLUDE_DELAYED_EFFECTS is set if the actions of a routine called by INSN should be |
9c7e2978 RK |
307 | included. CC0 is only modified if it is explicitly set; see comments |
308 | in front of mark_set_resources for details. */ | |
309 | ||
310 | static int | |
0c20a65f AJ |
311 | insn_sets_resource_p (rtx insn, struct resources *res, |
312 | int include_delayed_effects) | |
9c7e2978 RK |
313 | { |
314 | struct resources insn_sets; | |
315 | ||
316 | CLEAR_RESOURCE (&insn_sets); | |
674345b1 | 317 | mark_set_resources (insn, &insn_sets, 0, include_delayed_effects); |
9c7e2978 RK |
318 | return resource_conflicts_p (&insn_sets, res); |
319 | } | |
320 | \f | |
b9185691 EB |
321 | /* Find a label at the end of the function or before a RETURN. If there |
322 | is none, try to make one. If that fails, returns 0. | |
323 | ||
324 | The property of such a label is that it is placed just before the | |
325 | epilogue or a bare RETURN insn, so that another bare RETURN can be | |
326 | turned into a jump to the label unconditionally. In particular, the | |
327 | label cannot be placed before a RETURN insn with a filled delay slot. | |
328 | ||
329 | ??? There may be a problem with the current implementation. Suppose | |
330 | we start with a bare RETURN insn and call find_end_label. It may set | |
331 | end_of_function_label just before the RETURN. Suppose the machinery | |
332 | is able to fill the delay slot of the RETURN insn afterwards. Then | |
333 | end_of_function_label is no longer valid according to the property | |
334 | described above and find_end_label will still return it unmodified. | |
335 | Note that this is probably mitigated by the following observation: | |
336 | once end_of_function_label is made, it is very likely the target of | |
337 | a jump, so filling the delay slot of the RETURN will be much more | |
338 | difficult. */ | |
9c7e2978 RK |
339 | |
340 | static rtx | |
0c20a65f | 341 | find_end_label (void) |
9c7e2978 RK |
342 | { |
343 | rtx insn; | |
344 | ||
345 | /* If we found one previously, return it. */ | |
346 | if (end_of_function_label) | |
347 | return end_of_function_label; | |
348 | ||
349 | /* Otherwise, see if there is a label at the end of the function. If there | |
350 | is, it must be that RETURN insns aren't needed, so that is our return | |
351 | label and we don't have to do anything else. */ | |
352 | ||
353 | insn = get_last_insn (); | |
4b4bf941 JQ |
354 | while (NOTE_P (insn) |
355 | || (NONJUMP_INSN_P (insn) | |
9c7e2978 RK |
356 | && (GET_CODE (PATTERN (insn)) == USE |
357 | || GET_CODE (PATTERN (insn)) == CLOBBER))) | |
358 | insn = PREV_INSN (insn); | |
359 | ||
f27e6bb2 | 360 | /* When a target threads its epilogue we might already have a |
e572bad3 JL |
361 | suitable return insn. If so put a label before it for the |
362 | end_of_function_label. */ | |
4b4bf941 JQ |
363 | if (BARRIER_P (insn) |
364 | && JUMP_P (PREV_INSN (insn)) | |
b9185691 | 365 | && GET_CODE (PATTERN (PREV_INSN (insn))) == RETURN) |
e572bad3 JL |
366 | { |
367 | rtx temp = PREV_INSN (PREV_INSN (insn)); | |
368 | end_of_function_label = gen_label_rtx (); | |
369 | LABEL_NUSES (end_of_function_label) = 0; | |
370 | ||
b9185691 | 371 | /* Put the label before any USE insns that may precede the RETURN insn. */ |
e572bad3 JL |
372 | while (GET_CODE (temp) == USE) |
373 | temp = PREV_INSN (temp); | |
374 | ||
375 | emit_label_after (end_of_function_label, temp); | |
376 | } | |
377 | ||
4b4bf941 | 378 | else if (LABEL_P (insn)) |
e572bad3 | 379 | end_of_function_label = insn; |
9c7e2978 RK |
380 | else |
381 | { | |
9c7e2978 RK |
382 | end_of_function_label = gen_label_rtx (); |
383 | LABEL_NUSES (end_of_function_label) = 0; | |
bf6302f0 HB |
384 | /* If the basic block reorder pass moves the return insn to |
385 | some other place try to locate it again and put our | |
386 | end_of_function_label there. */ | |
b9185691 EB |
387 | while (insn && ! (GET_CODE (insn) == JUMP_INSN |
388 | && (GET_CODE (PATTERN (insn)) == RETURN))) | |
bf6302f0 HB |
389 | insn = PREV_INSN (insn); |
390 | if (insn) | |
9c7e2978 | 391 | { |
bf6302f0 HB |
392 | insn = PREV_INSN (insn); |
393 | ||
f27e6bb2 | 394 | /* Put the label before any USE insns that may precede the
bf6302f0 | 395 | RETURN insn. */ |
f27e6bb2 | 396 | while (GET_CODE (insn) == USE) |
bf6302f0 HB |
397 | insn = PREV_INSN (insn); |
398 | ||
f27e6bb2 | 399 | emit_label_after (end_of_function_label, insn); |
9c7e2978 | 400 | } |
bf6302f0 HB |
401 | else |
402 | { | |
b9185691 EB |
403 | #ifdef HAVE_epilogue |
404 | if (HAVE_epilogue | |
405 | #ifdef HAVE_return | |
406 | && ! HAVE_return | |
407 | #endif | |
408 | ) | |
409 | { | |
410 | /* The RETURN insn has its delay slot filled so we cannot | |
411 | emit the label just before it. Since we already have | |
412 | an epilogue and cannot emit a new RETURN, we cannot | |
413 | emit the label at all. */ | |
414 | end_of_function_label = NULL_RTX; | |
415 | return end_of_function_label; | |
416 | } | |
417 | #endif /* HAVE_epilogue */ | |
418 | ||
f27e6bb2 | 419 | /* Otherwise, make a new label and emit a RETURN and BARRIER, |
bf6302f0 | 420 | if needed. */ |
f27e6bb2 | 421 | emit_label (end_of_function_label); |
bf6302f0 | 422 | #ifdef HAVE_return |
09f9e288 HPN |
423 | /* We don't bother trying to create a return insn if the |
424 | epilogue has filled delay-slots; we would have to try and | |
425 | move the delay-slot fillers to the delay-slots for the new | |
426 | return insn or in front of the new return insn. */ | |
427 | if (current_function_epilogue_delay_list == NULL | |
428 | && HAVE_return) | |
bf6302f0 HB |
429 | { |
430 | /* The return we make may have delay slots too. */ | |
431 | rtx insn = gen_return (); | |
432 | insn = emit_jump_insn (insn); | |
433 | emit_barrier (); | |
f27e6bb2 KH |
434 | if (num_delay_slots (insn) > 0) |
435 | obstack_ptr_grow (&unfilled_slots_obstack, insn); | |
bf6302f0 | 436 | } |
9c7e2978 | 437 | #endif |
bf6302f0 | 438 | } |
9c7e2978 RK |
439 | } |
440 | ||
441 | /* Show one additional use for this label so it won't go away until | |
442 | we are done. */ | |
443 | ++LABEL_NUSES (end_of_function_label); | |
444 | ||
445 | return end_of_function_label; | |
446 | } | |
447 | \f | |
448 | /* Put INSN and LIST together in a SEQUENCE rtx of LENGTH, and replace | |
449 | the pattern of INSN with the SEQUENCE. | |
450 | ||
451 | Chain the insns so that NEXT_INSN of each insn in the sequence points to | |
452 | the next and NEXT_INSN of the last insn in the sequence points to | |
453 | the first insn after the sequence. Similarly for PREV_INSN. This makes | |
454 | it easier to scan all insns. | |
455 | ||
456 | Returns the SEQUENCE that replaces INSN. */ | |
457 | ||
458 | static rtx | |
0c20a65f | 459 | emit_delay_sequence (rtx insn, rtx list, int length) |
9c7e2978 | 460 | { |
b3694847 SS |
461 | int i = 1; |
462 | rtx li; | |
9c7e2978 RK |
463 | int had_barrier = 0; |
464 | ||
38e01259 | 465 | /* Allocate the rtvec to hold the insns and the SEQUENCE. */ |
9c7e2978 | 466 | rtvec seqv = rtvec_alloc (length + 1); |
38a448ca | 467 | rtx seq = gen_rtx_SEQUENCE (VOIDmode, seqv); |
9c7e2978 RK |
468 | rtx seq_insn = make_insn_raw (seq); |
469 | rtx first = get_insns (); | |
470 | rtx last = get_last_insn (); | |
471 | ||
0f41302f | 472 | /* Make a copy of the insn having delay slots. */ |
9c7e2978 RK |
473 | rtx delay_insn = copy_rtx (insn); |
474 | ||
475 | /* If INSN is followed by a BARRIER, delete the BARRIER since it will only | |
f27e6bb2 | 476 | confuse further processing. Update LAST in case it was the last insn. |
9c7e2978 | 477 | We will put the BARRIER back in later. */ |
4b4bf941 | 478 | if (NEXT_INSN (insn) && BARRIER_P (NEXT_INSN (insn))) |
9c7e2978 | 479 | { |
53c17031 | 480 | delete_related_insns (NEXT_INSN (insn)); |
9c7e2978 RK |
481 | last = get_last_insn (); |
482 | had_barrier = 1; | |
483 | } | |
484 | ||
485 | /* Splice our SEQUENCE into the insn stream where INSN used to be. */ | |
486 | NEXT_INSN (seq_insn) = NEXT_INSN (insn); | |
487 | PREV_INSN (seq_insn) = PREV_INSN (insn); | |
488 | ||
f5546425 JL |
489 | if (insn != last) |
490 | PREV_INSN (NEXT_INSN (seq_insn)) = seq_insn; | |
491 | ||
492 | if (insn != first) | |
493 | NEXT_INSN (PREV_INSN (seq_insn)) = seq_insn; | |
494 | ||
495 | /* Note the calls to set_new_first_and_last_insn must occur after | |
496 | SEQ_INSN has been completely spliced into the insn stream. | |
497 | ||
498 | Otherwise CUR_INSN_UID will get set to an incorrect value because | |
499 | set_new_first_and_last_insn will not find SEQ_INSN in the chain. */ | |
9c7e2978 RK |
500 | if (insn == last) |
501 | set_new_first_and_last_insn (first, seq_insn); | |
9c7e2978 RK |
502 | |
503 | if (insn == first) | |
504 | set_new_first_and_last_insn (seq_insn, last); | |
9c7e2978 RK |
505 | |
506 | /* Build our SEQUENCE and rebuild the insn chain. */ | |
507 | XVECEXP (seq, 0, 0) = delay_insn; | |
508 | INSN_DELETED_P (delay_insn) = 0; | |
509 | PREV_INSN (delay_insn) = PREV_INSN (seq_insn); | |
510 | ||
511 | for (li = list; li; li = XEXP (li, 1), i++) | |
512 | { | |
513 | rtx tem = XEXP (li, 0); | |
e41c7831 | 514 | rtx note, next; |
9c7e2978 RK |
515 | |
516 | /* Show that this copy of the insn isn't deleted. */ | |
517 | INSN_DELETED_P (tem) = 0; | |
518 | ||
519 | XVECEXP (seq, 0, i) = tem; | |
520 | PREV_INSN (tem) = XVECEXP (seq, 0, i - 1); | |
521 | NEXT_INSN (XVECEXP (seq, 0, i - 1)) = tem; | |
522 | ||
281ac604 JH |
523 | /* The SPARC assembler, for instance, emits a warning when debug info is output |
524 | into the delay slot. */ | |
525 | if (INSN_LOCATOR (tem) && !INSN_LOCATOR (seq_insn)) | |
526 | INSN_LOCATOR (seq_insn) = INSN_LOCATOR (tem); | |
527 | INSN_LOCATOR (tem) = 0; | |
528 | ||
e41c7831 RH |
529 | for (note = REG_NOTES (tem); note; note = next) |
530 | { | |
531 | next = XEXP (note, 1); | |
532 | switch (REG_NOTE_KIND (note)) | |
533 | { | |
534 | case REG_DEAD: | |
535 | /* Remove any REG_DEAD notes because we can't rely on them now | |
536 | that the insn has been moved. */ | |
537 | remove_note (tem, note); | |
538 | break; | |
539 | ||
540 | case REG_LABEL: | |
541 | /* Keep the label reference count up to date. */ | |
4b4bf941 | 542 | if (LABEL_P (XEXP (note, 0))) |
0864c526 | 543 | LABEL_NUSES (XEXP (note, 0)) ++; |
e41c7831 RH |
544 | break; |
545 | ||
546 | default: | |
547 | break; | |
548 | } | |
549 | } | |
9c7e2978 RK |
550 | } |
551 | ||
552 | NEXT_INSN (XVECEXP (seq, 0, length)) = NEXT_INSN (seq_insn); | |
553 | ||
f1f9081a RS |
554 | /* If the previous insn is a SEQUENCE, update the NEXT_INSN pointer on the |
555 | last insn in that SEQUENCE to point to us. Similarly for the first | |
556 | insn in the following insn if it is a SEQUENCE. */ | |
557 | ||
4b4bf941 | 558 | if (PREV_INSN (seq_insn) && NONJUMP_INSN_P (PREV_INSN (seq_insn)) |
f1f9081a RS |
559 | && GET_CODE (PATTERN (PREV_INSN (seq_insn))) == SEQUENCE) |
560 | NEXT_INSN (XVECEXP (PATTERN (PREV_INSN (seq_insn)), 0, | |
561 | XVECLEN (PATTERN (PREV_INSN (seq_insn)), 0) - 1)) | |
562 | = seq_insn; | |
563 | ||
4b4bf941 | 564 | if (NEXT_INSN (seq_insn) && NONJUMP_INSN_P (NEXT_INSN (seq_insn)) |
f1f9081a RS |
565 | && GET_CODE (PATTERN (NEXT_INSN (seq_insn))) == SEQUENCE) |
566 | PREV_INSN (XVECEXP (PATTERN (NEXT_INSN (seq_insn)), 0, 0)) = seq_insn; | |
f27e6bb2 | 567 | |
9c7e2978 RK |
568 | /* If there used to be a BARRIER, put it back. */ |
569 | if (had_barrier) | |
570 | emit_barrier_after (seq_insn); | |
571 | ||
41374e13 | 572 | gcc_assert (i == length + 1); |
9c7e2978 RK |
573 | |
574 | return seq_insn; | |
575 | } | |
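/* A sketch (added) of what the insn built above looks like; this is not
   literal RTL:

       (insn (sequence [ INSN  slot_1  slot_2 ... slot_LENGTH ]))

   Element 0 is the copy of INSN itself and elements 1..LENGTH are the insns
   from LIST, with NEXT_INSN/PREV_INSN threaded straight through them as
   described in the comment above the function.  */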
576 | ||
577 | /* Add INSN to DELAY_LIST and return the head of the new list. The list must | |
578 | be in the order in which the insns are to be executed. */ | |
579 | ||
580 | static rtx | |
0c20a65f | 581 | add_to_delay_list (rtx insn, rtx delay_list) |
9c7e2978 | 582 | { |
c8cfe1f6 | 583 | /* If we have an empty list, just make a new list element. If |
9ec36da5 | 584 | INSN has its block number recorded, clear it since we may |
c8cfe1f6 RK |
585 | be moving the insn to a new block. */ |
586 | ||
9c7e2978 | 587 | if (delay_list == 0) |
c8cfe1f6 | 588 | { |
ca545bb5 | 589 | clear_hashed_info_for_insn (insn); |
38a448ca | 590 | return gen_rtx_INSN_LIST (VOIDmode, insn, NULL_RTX); |
c8cfe1f6 | 591 | } |
9c7e2978 RK |
592 | |
593 | /* Otherwise this must be an INSN_LIST. Add INSN to the end of the | |
594 | list. */ | |
595 | XEXP (delay_list, 1) = add_to_delay_list (insn, XEXP (delay_list, 1)); | |
596 | ||
597 | return delay_list; | |
f27e6bb2 | 598 | } |
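/* Added usage example (insn names invented for illustration): building a
   delay list in execution order looks like

       rtx list = add_to_delay_list (first_insn, NULL_RTX);
       list = add_to_delay_list (second_insn, list);

   which yields the INSN_LIST chain (first_insn . (second_insn . nil)).  */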
9c7e2978 | 599 | \f |
ab87f8c8 JL |
600 | /* Delete INSN from the delay slot of the insn that it is in, which may |
601 | produce an insn with no delay slots. Return the new insn. */ | |
9c7e2978 | 602 | |
96960d10 | 603 | static rtx |
0c20a65f | 604 | delete_from_delay_slot (rtx insn) |
9c7e2978 RK |
605 | { |
606 | rtx trial, seq_insn, seq, prev; | |
607 | rtx delay_list = 0; | |
608 | int i; | |
52fe4de8 | 609 | int had_barrier = 0; |
9c7e2978 RK |
610 | |
611 | /* We first must find the insn containing the SEQUENCE with INSN in its | |
612 | delay slot. Do this by finding an insn, TRIAL, where | |
613 | PREV_INSN (NEXT_INSN (TRIAL)) != TRIAL. */ | |
614 | ||
615 | for (trial = insn; | |
616 | PREV_INSN (NEXT_INSN (trial)) == trial; | |
617 | trial = NEXT_INSN (trial)) | |
618 | ; | |
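  /* Added note: this terminates because emit_delay_sequence chains the slot
     insns so that NEXT_INSN of the last one points past the SEQUENCE, while
     PREV_INSN of that following insn points back at the SEQUENCE insn
     itself, not at the last slot insn.  */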
619 | ||
620 | seq_insn = PREV_INSN (NEXT_INSN (trial)); | |
621 | seq = PATTERN (seq_insn); | |
622 | ||
4b4bf941 | 623 | if (NEXT_INSN (seq_insn) && BARRIER_P (NEXT_INSN (seq_insn))) |
52fe4de8 JDA |
624 | had_barrier = 1; |
625 | ||
9c7e2978 RK |
626 | /* Create a delay list consisting of all the insns other than the one |
627 | we are deleting (unless we were the only one). */ | |
628 | if (XVECLEN (seq, 0) > 2) | |
629 | for (i = 1; i < XVECLEN (seq, 0); i++) | |
630 | if (XVECEXP (seq, 0, i) != insn) | |
631 | delay_list = add_to_delay_list (XVECEXP (seq, 0, i), delay_list); | |
632 | ||
633 | /* Delete the old SEQUENCE, re-emit the insn that used to have the delay | |
634 | list, and rebuild the delay list if non-empty. */ | |
635 | prev = PREV_INSN (seq_insn); | |
636 | trial = XVECEXP (seq, 0, 0); | |
53c17031 | 637 | delete_related_insns (seq_insn); |
9c7e2978 RK |
638 | add_insn_after (trial, prev); |
639 | ||
52fe4de8 JDA |
640 | /* If there was a barrier after the old SEQUENCE, re-emit it. */ |
641 | if (had_barrier) | |
9c7e2978 RK |
642 | emit_barrier_after (trial); |
643 | ||
644 | /* If there are any delay insns, re-emit them. Otherwise clear the |
645 | annul flag. */ | |
646 | if (delay_list) | |
91a51951 | 647 | trial = emit_delay_sequence (trial, delay_list, XVECLEN (seq, 0) - 2); |
4b4bf941 | 648 | else if (INSN_P (trial)) |
9c7e2978 RK |
649 | INSN_ANNULLED_BRANCH_P (trial) = 0; |
650 | ||
651 | INSN_FROM_TARGET_P (insn) = 0; | |
652 | ||
653 | /* Show we need to fill this insn again. */ | |
654 | obstack_ptr_grow (&unfilled_slots_obstack, trial); | |
96960d10 HB |
655 | |
656 | return trial; | |
9c7e2978 RK |
657 | } |
658 | \f | |
659 | /* Delete INSN, a JUMP_INSN. If it is a conditional jump, we must track down | |
660 | the insn that sets CC0 for it and delete it too. */ | |
661 | ||
662 | static void | |
0c20a65f | 663 | delete_scheduled_jump (rtx insn) |
9c7e2978 RK |
664 | { |
665 | /* Delete the insn that sets cc0 for us. On machines without cc0, we could | |
666 | delete the insn that sets the condition code, but it is hard to find it. | |
667 | Since this case is rare anyway, don't bother trying; there would likely | |
668 | be other insns that became dead anyway, which we wouldn't know to | |
669 | delete. */ | |
670 | ||
671 | #ifdef HAVE_cc0 | |
672 | if (reg_mentioned_p (cc0_rtx, insn)) | |
673 | { | |
fb3821f7 | 674 | rtx note = find_reg_note (insn, REG_CC_SETTER, NULL_RTX); |
9c7e2978 RK |
675 | |
676 | /* If a reg-note was found, it points to an insn to set CC0. This | |
677 | insn is in the delay list of some other insn. So delete it from | |
678 | the delay list it was in. */ | |
679 | if (note) | |
680 | { | |
fb3821f7 | 681 | if (! FIND_REG_INC_NOTE (XEXP (note, 0), NULL_RTX) |
9c7e2978 RK |
682 | && sets_cc0_p (PATTERN (XEXP (note, 0))) == 1) |
683 | delete_from_delay_slot (XEXP (note, 0)); | |
684 | } | |
685 | else | |
686 | { | |
687 | /* The insn setting CC0 is our previous insn, but it may be in | |
688 | a delay slot. It will be the last insn in the delay slot, if | |
689 | it is. */ | |
690 | rtx trial = previous_insn (insn); | |
4b4bf941 | 691 | if (NOTE_P (trial)) |
9c7e2978 RK |
692 | trial = prev_nonnote_insn (trial); |
693 | if (sets_cc0_p (PATTERN (trial)) != 1 | |
ff81832f | 694 | || FIND_REG_INC_NOTE (trial, NULL_RTX)) |
9c7e2978 RK |
695 | return; |
696 | if (PREV_INSN (NEXT_INSN (trial)) == trial) | |
53c17031 | 697 | delete_related_insns (trial); |
9c7e2978 RK |
698 | else |
699 | delete_from_delay_slot (trial); | |
700 | } | |
701 | } | |
702 | #endif | |
703 | ||
53c17031 | 704 | delete_related_insns (insn); |
9c7e2978 RK |
705 | } |
706 | \f | |
707 | /* Counters for delay-slot filling. */ | |
708 | ||
709 | #define NUM_REORG_FUNCTIONS 2 | |
710 | #define MAX_DELAY_HISTOGRAM 3 | |
711 | #define MAX_REORG_PASSES 2 | |
712 | ||
713 | static int num_insns_needing_delays[NUM_REORG_FUNCTIONS][MAX_REORG_PASSES]; | |
714 | ||
715 | static int num_filled_delays[NUM_REORG_FUNCTIONS][MAX_DELAY_HISTOGRAM+1][MAX_REORG_PASSES]; | |
716 | ||
717 | static int reorg_pass_number; | |
718 | ||
719 | static void | |
0c20a65f | 720 | note_delay_statistics (int slots_filled, int index) |
9c7e2978 RK |
721 | { |
722 | num_insns_needing_delays[index][reorg_pass_number]++; | |
723 | if (slots_filled > MAX_DELAY_HISTOGRAM) | |
724 | slots_filled = MAX_DELAY_HISTOGRAM; | |
725 | num_filled_delays[index][slots_filled][reorg_pass_number]++; | |
726 | } | |
727 | \f | |
728 | #if defined(ANNUL_IFFALSE_SLOTS) || defined(ANNUL_IFTRUE_SLOTS) | |
729 | ||
730 | /* Optimize the following cases: | |
731 | ||
732 | 1. When a conditional branch skips over only one instruction, | |
733 | use an annulling branch and put that insn in the delay slot. | |
6dc42e49 RS |
734 | Use either a branch that annuls when the condition is true or |
735 | invert the test with a branch that annuls when the condition is | |
9c7e2978 RK |
736 | false. This saves insns, since otherwise we must copy an insn |
737 | from the L1 target. | |
738 | ||
739 | (orig) (skip) (otherwise) | |
740 | Bcc.n L1 Bcc',a L1 Bcc,a L1' | |
741 | insn insn insn2 | |
742 | L1: L1: L1: | |
743 | insn2 insn2 insn2 | |
744 | insn3 insn3 L1': | |
745 | insn3 | |
746 | ||
747 | 2. When a conditional branch skips over only one instruction, | |
748 | and after that, it unconditionally branches somewhere else, | |
749 | perform the similar optimization. This saves executing the | |
750 | second branch in the case where the inverted condition is true. | |
751 | ||
752 | Bcc.n L1 Bcc',a L2 | |
753 | insn insn | |
754 | L1: L1: | |
755 | Bra L2 Bra L2 | |
756 | ||
757 | INSN is a JUMP_INSN. | |
758 | ||
759 | This should be expanded to skip over N insns, where N is the number | |
760 | of delay slots required. */ | |
761 | ||
762 | static rtx | |
0c20a65f | 763 | optimize_skip (rtx insn) |
9c7e2978 | 764 | { |
b3694847 | 765 | rtx trial = next_nonnote_insn (insn); |
9c7e2978 RK |
766 | rtx next_trial = next_active_insn (trial); |
767 | rtx delay_list = 0; | |
35523fce JL |
768 | int flags; |
769 | ||
770 | flags = get_jump_flags (insn, JUMP_LABEL (insn)); | |
9c7e2978 RK |
771 | |
772 | if (trial == 0 | |
4b4bf941 | 773 | || !NONJUMP_INSN_P (trial) |
9c7e2978 RK |
774 | || GET_CODE (PATTERN (trial)) == SEQUENCE |
775 | || recog_memoized (trial) < 0 | |
35523fce | 776 | || (! eligible_for_annul_false (insn, 0, trial, flags) |
1db828df DM |
777 | && ! eligible_for_annul_true (insn, 0, trial, flags)) |
778 | || can_throw_internal (trial)) | |
9c7e2978 RK |
779 | return 0; |
780 | ||
781 | /* There are two cases where we are just executing one insn (we assume | |
782 | here that a branch requires only one insn; this should be generalized | |
783 | at some point): Where the branch goes around a single insn or where | |
784 | we have one insn followed by a branch to the same label we branch to. | |
785 | In both of these cases, inverting the jump and annulling the delay | |
786 | slot give the same effect in fewer insns. */ | |
c5c76735 JL |
787 | if ((next_trial == next_active_insn (JUMP_LABEL (insn)) |
788 | && ! (next_trial == 0 && current_function_epilogue_delay_list != 0)) | |
9c7e2978 | 789 | || (next_trial != 0 |
4b4bf941 | 790 | && JUMP_P (next_trial) |
9c7e2978 RK |
791 | && JUMP_LABEL (insn) == JUMP_LABEL (next_trial) |
792 | && (simplejump_p (next_trial) | |
793 | || GET_CODE (PATTERN (next_trial)) == RETURN))) | |
794 | { | |
35523fce | 795 | if (eligible_for_annul_false (insn, 0, trial, flags)) |
9c7e2978 | 796 | { |
9ba11d5a | 797 | if (invert_jump (insn, JUMP_LABEL (insn), 1)) |
9c7e2978 | 798 | INSN_FROM_TARGET_P (trial) = 1; |
35523fce | 799 | else if (! eligible_for_annul_true (insn, 0, trial, flags)) |
9c7e2978 RK |
800 | return 0; |
801 | } | |
802 | ||
fb3821f7 | 803 | delay_list = add_to_delay_list (trial, NULL_RTX); |
9c7e2978 RK |
804 | next_trial = next_active_insn (trial); |
805 | update_block (trial, trial); | |
53c17031 | 806 | delete_related_insns (trial); |
9c7e2978 RK |
807 | |
808 | /* Also, if we are targeting an unconditional | |
809 | branch, thread our jump to the target of that branch. Don't | |
810 | change this into a RETURN here, because it may not accept what | |
811 | we have in the delay slot. We'll fix this up later. */ | |
4b4bf941 | 812 | if (next_trial && JUMP_P (next_trial) |
9c7e2978 RK |
813 | && (simplejump_p (next_trial) |
814 | || GET_CODE (PATTERN (next_trial)) == RETURN)) | |
815 | { | |
b9185691 | 816 | rtx target_label = JUMP_LABEL (next_trial); |
9c7e2978 RK |
817 | if (target_label == 0) |
818 | target_label = find_end_label (); | |
0e5bad53 | 819 | |
b9185691 EB |
820 | if (target_label) |
821 | { | |
822 | /* Recompute the flags based on TARGET_LABEL since threading | |
823 | the jump to TARGET_LABEL may change the direction of the | |
824 | jump (which may change the circumstances in which the | |
825 | delay slot is nullified). */ | |
826 | flags = get_jump_flags (insn, target_label); | |
827 | if (eligible_for_annul_true (insn, 0, trial, flags)) | |
828 | reorg_redirect_jump (insn, target_label); | |
829 | } | |
9c7e2978 RK |
830 | } |
831 | ||
832 | INSN_ANNULLED_BRANCH_P (insn) = 1; | |
833 | } | |
834 | ||
835 | return delay_list; | |
836 | } | |
837 | #endif | |
838 | \f | |
35523fce JL |
839 | /* Encode and return branch direction and prediction information for |
840 | INSN assuming it will jump to LABEL. | |
841 | ||
842 | Unconditional branches return no direction information and |
843 | are predicted as very likely taken. */ | |
0f41302f | 844 | |
35523fce | 845 | static int |
0c20a65f | 846 | get_jump_flags (rtx insn, rtx label) |
35523fce JL |
847 | { |
848 | int flags; | |
849 | ||
850 | /* get_jump_flags can be passed any insn with delay slots, these may | |
851 | be INSNs, CALL_INSNs, or JUMP_INSNs. Only JUMP_INSNs have branch | |
852 | direction information, and only if they are conditional jumps. | |
853 | ||
854 | If LABEL is zero, then there is no way to determine the branch | |
855 | direction. */ | |
4b4bf941 | 856 | if (JUMP_P (insn) |
3480bb98 | 857 | && (condjump_p (insn) || condjump_in_parallel_p (insn)) |
35523fce | 858 | && INSN_UID (insn) <= max_uid |
e328af29 RK |
859 | && label != 0 |
860 | && INSN_UID (label) <= max_uid) | |
f27e6bb2 | 861 | flags |
35523fce JL |
862 | = (uid_to_ruid[INSN_UID (label)] > uid_to_ruid[INSN_UID (insn)]) |
863 | ? ATTR_FLAG_forward : ATTR_FLAG_backward; | |
864 | /* No valid direction information. */ | |
865 | else | |
866 | flags = 0; | |
f27e6bb2 | 867 | |
35523fce | 868 | /* If insn is a conditional branch, call mostly_true_jump to |
f27e6bb2 | 869 | determine the branch prediction. |
35523fce JL |
870 | |
871 | Unconditional branches are predicted as very likely taken. |
4b4bf941 | 872 | if (JUMP_P (insn) |
3480bb98 | 873 | && (condjump_p (insn) || condjump_in_parallel_p (insn))) |
35523fce JL |
874 | { |
875 | int prediction; | |
876 | ||
877 | prediction = mostly_true_jump (insn, get_branch_condition (insn, label)); | |
878 | switch (prediction) | |
879 | { | |
f27e6bb2 KH |
880 | case 2: |
881 | flags |= (ATTR_FLAG_very_likely | ATTR_FLAG_likely); | |
882 | break; | |
883 | case 1: | |
884 | flags |= ATTR_FLAG_likely; | |
885 | break; | |
886 | case 0: | |
887 | flags |= ATTR_FLAG_unlikely; | |
888 | break; | |
889 | case -1: | |
890 | flags |= (ATTR_FLAG_very_unlikely | ATTR_FLAG_unlikely); | |
891 | break; | |
35523fce | 892 | |
f27e6bb2 | 893 | default: |
41374e13 | 894 | gcc_unreachable (); |
35523fce JL |
895 | } |
896 | } | |
897 | else | |
898 | flags |= (ATTR_FLAG_very_likely | ATTR_FLAG_likely); | |
899 | ||
900 | return flags; | |
901 | } | |
902 | ||
65753f55 | 903 | /* Return 1 if INSN is a destination that will be branched to rarely (the |
0275a51b RK |
904 | return point of a function); return 2 if INSN will be branched to very |
905 | rarely (a call to a function that doesn't return). Otherwise, | |
906 | return 0. */ | |
907 | ||
908 | static int | |
0c20a65f | 909 | rare_destination (rtx insn) |
0275a51b RK |
910 | { |
911 | int jump_count = 0; | |
65753f55 | 912 | rtx next; |
0275a51b | 913 | |
65753f55 | 914 | for (; insn; insn = next) |
0275a51b | 915 | { |
4b4bf941 | 916 | if (NONJUMP_INSN_P (insn) && GET_CODE (PATTERN (insn)) == SEQUENCE) |
0275a51b RK |
917 | insn = XVECEXP (PATTERN (insn), 0, 0); |
918 | ||
65753f55 RK |
919 | next = NEXT_INSN (insn); |
920 | ||
0275a51b RK |
921 | switch (GET_CODE (insn)) |
922 | { | |
923 | case CODE_LABEL: | |
924 | return 0; | |
925 | case BARRIER: | |
f27e6bb2 | 926 | /* A BARRIER can either be after a JUMP_INSN or a CALL_INSN. We |
0275a51b RK |
927 | don't scan past JUMP_INSNs, so any barrier we find here must |
928 | have been after a CALL_INSN and hence mean the call doesn't | |
929 | return. */ | |
930 | return 2; | |
931 | case JUMP_INSN: | |
932 | if (GET_CODE (PATTERN (insn)) == RETURN) | |
933 | return 1; | |
934 | else if (simplejump_p (insn) | |
935 | && jump_count++ < 10) | |
65753f55 | 936 | next = JUMP_LABEL (insn); |
0275a51b RK |
937 | else |
938 | return 0; | |
91a51951 KG |
939 | |
940 | default: | |
941 | break; | |
0275a51b RK |
942 | } |
943 | } | |
944 | ||
945 | /* If we got here it means we hit the end of the function. So this | |
946 | is an unlikely destination. */ | |
947 | ||
948 | return 1; | |
949 | } | |
950 | ||
9c7e2978 RK |
951 | /* Return truth value of the statement that this branch |
952 | is mostly taken. If we think that the branch is extremely likely | |
953 | to be taken, we return 2. If the branch is slightly more likely to be | |
0275a51b RK |
954 | taken, return 1. If the branch is slightly less likely to be taken, |
955 | return 0 and if the branch is highly unlikely to be taken, return -1. | |
9c7e2978 | 956 | |
40f03658 | 957 | CONDITION, if nonzero, is the condition that JUMP_INSN is testing. */ |
9c7e2978 RK |
958 | |
959 | static int | |
0c20a65f | 960 | mostly_true_jump (rtx jump_insn, rtx condition) |
9c7e2978 RK |
961 | { |
962 | rtx target_label = JUMP_LABEL (jump_insn); | |
10f13594 | 963 | rtx insn, note; |
0275a51b RK |
964 | int rare_dest = rare_destination (target_label); |
965 | int rare_fallthrough = rare_destination (NEXT_INSN (jump_insn)); | |
966 | ||
a6c383b0 DE |
967 | /* If branch probabilities are available, then use that number since it |
968 | always gives a correct answer. */ | |
10f13594 RH |
969 | note = find_reg_note (jump_insn, REG_BR_PROB, 0); |
970 | if (note) | |
a6c383b0 | 971 | { |
10f13594 RH |
972 | int prob = INTVAL (XEXP (note, 0)); |
973 | ||
974 | if (prob >= REG_BR_PROB_BASE * 9 / 10) | |
f27e6bb2 | 975 | return 2; |
10f13594 | 976 | else if (prob >= REG_BR_PROB_BASE / 2) |
f27e6bb2 | 977 | return 1; |
10f13594 | 978 | else if (prob >= REG_BR_PROB_BASE / 10) |
f27e6bb2 | 979 | return 0; |
10f13594 | 980 | else |
f27e6bb2 | 981 | return -1; |
a6c383b0 DE |
982 | } |
983 | ||
10f13594 RH |
984 | /* ??? Ought to use estimate_probability instead. */ |
985 | ||
0275a51b RK |
986 | /* If this is a branch outside a loop, it is highly unlikely. */ |
987 | if (GET_CODE (PATTERN (jump_insn)) == SET | |
988 | && GET_CODE (SET_SRC (PATTERN (jump_insn))) == IF_THEN_ELSE | |
989 | && ((GET_CODE (XEXP (SET_SRC (PATTERN (jump_insn)), 1)) == LABEL_REF | |
990 | && LABEL_OUTSIDE_LOOP_P (XEXP (SET_SRC (PATTERN (jump_insn)), 1))) | |
991 | || (GET_CODE (XEXP (SET_SRC (PATTERN (jump_insn)), 2)) == LABEL_REF | |
992 | && LABEL_OUTSIDE_LOOP_P (XEXP (SET_SRC (PATTERN (jump_insn)), 2))))) | |
993 | return -1; | |
994 | ||
995 | if (target_label) | |
4d0e69c3 | 996 | { |
0275a51b RK |
997 | /* If this is the test of a loop, it is very likely true. We scan |
998 | backwards from the target label. If we find a NOTE_INSN_LOOP_BEG | |
f27e6bb2 | 999 | before the next real insn, we assume the branch is to the top of |
0275a51b RK |
1000 | the loop. */ |
1001 | for (insn = PREV_INSN (target_label); | |
4b4bf941 | 1002 | insn && NOTE_P (insn); |
0275a51b RK |
1003 | insn = PREV_INSN (insn)) |
1004 | if (NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_BEG) | |
1005 | return 2; | |
4d0e69c3 | 1006 | } |
9c7e2978 | 1007 | |
abc95ed3 | 1008 | /* Look at the relative rarities of the fallthrough and destination. If |
0f41302f | 1009 | they differ, we can predict the branch that way. */ |
9c7e2978 | 1010 | |
0275a51b RK |
1011 | switch (rare_fallthrough - rare_dest) |
1012 | { | |
1013 | case -2: | |
1014 | return -1; | |
1015 | case -1: | |
1016 | return 0; | |
1017 | case 0: | |
1018 | break; | |
1019 | case 1: | |
4d0e69c3 | 1020 | return 1; |
0275a51b RK |
1021 | case 2: |
1022 | return 2; | |
1023 | } | |
4d0e69c3 | 1024 | |
f27e6bb2 | 1025 | /* If we couldn't figure out what this jump was, assume it won't be |
9c7e2978 RK |
1026 | taken. This should be rare. */ |
1027 | if (condition == 0) | |
1028 | return 0; | |
1029 | ||
1030 | /* EQ tests are usually false and NE tests are usually true. Also, | |
1031 | most quantities are positive, so we can make the appropriate guesses | |
1032 | about signed comparisons against zero. */ | |
1033 | switch (GET_CODE (condition)) | |
1034 | { | |
1035 | case CONST_INT: | |
1036 | /* Unconditional branch. */ | |
1037 | return 1; | |
1038 | case EQ: | |
1039 | return 0; | |
1040 | case NE: | |
1041 | return 1; | |
1042 | case LE: | |
1043 | case LT: | |
1044 | if (XEXP (condition, 1) == const0_rtx) | |
f27e6bb2 | 1045 | return 0; |
9c7e2978 RK |
1046 | break; |
1047 | case GE: | |
1048 | case GT: | |
1049 | if (XEXP (condition, 1) == const0_rtx) | |
1050 | return 1; | |
1051 | break; | |
91a51951 KG |
1052 | |
1053 | default: | |
1054 | break; | |
9c7e2978 RK |
1055 | } |
1056 | ||
1057 | /* Predict that backward branches are usually taken, forward branches usually not. If |
1058 | we don't know whether this is forward or backward, assume the branch | |
1059 | will be taken, since most are. */ | |
13b8df74 RK |
1060 | return (target_label == 0 || INSN_UID (jump_insn) > max_uid |
1061 | || INSN_UID (target_label) > max_uid | |
9c7e2978 | 1062 | || (uid_to_ruid[INSN_UID (jump_insn)] |
6d649d26 | 1063 | > uid_to_ruid[INSN_UID (target_label)])); |
9c7e2978 RK |
1064 | } |
1065 | ||
1066 | /* Return the condition under which INSN will branch to TARGET. If TARGET | |
1067 | is zero, return the condition under which INSN will return. If INSN is | |
1068 | an unconditional branch, return const_true_rtx. If INSN isn't a simple | |
1069 | type of jump, or it doesn't go to TARGET, return 0. */ | |
1070 | ||
1071 | static rtx | |
0c20a65f | 1072 | get_branch_condition (rtx insn, rtx target) |
9c7e2978 RK |
1073 | { |
1074 | rtx pat = PATTERN (insn); | |
1075 | rtx src; | |
f27e6bb2 | 1076 | |
3480bb98 JL |
1077 | if (condjump_in_parallel_p (insn)) |
1078 | pat = XVECEXP (pat, 0, 0); | |
1079 | ||
9c7e2978 RK |
1080 | if (GET_CODE (pat) == RETURN) |
1081 | return target == 0 ? const_true_rtx : 0; | |
1082 | ||
1083 | else if (GET_CODE (pat) != SET || SET_DEST (pat) != pc_rtx) | |
1084 | return 0; | |
1085 | ||
1086 | src = SET_SRC (pat); | |
1087 | if (GET_CODE (src) == LABEL_REF && XEXP (src, 0) == target) | |
1088 | return const_true_rtx; | |
1089 | ||
1090 | else if (GET_CODE (src) == IF_THEN_ELSE | |
1091 | && ((target == 0 && GET_CODE (XEXP (src, 1)) == RETURN) | |
1092 | || (GET_CODE (XEXP (src, 1)) == LABEL_REF | |
1093 | && XEXP (XEXP (src, 1), 0) == target)) | |
1094 | && XEXP (src, 2) == pc_rtx) | |
1095 | return XEXP (src, 0); | |
1096 | ||
1097 | else if (GET_CODE (src) == IF_THEN_ELSE | |
1098 | && ((target == 0 && GET_CODE (XEXP (src, 2)) == RETURN) | |
1099 | || (GET_CODE (XEXP (src, 2)) == LABEL_REF | |
1100 | && XEXP (XEXP (src, 2), 0) == target)) | |
1101 | && XEXP (src, 1) == pc_rtx) | |
34295799 RH |
1102 | { |
1103 | enum rtx_code rev; | |
1104 | rev = reversed_comparison_code (XEXP (src, 0), insn); | |
1105 | if (rev != UNKNOWN) | |
1106 | return gen_rtx_fmt_ee (rev, GET_MODE (XEXP (src, 0)), | |
1107 | XEXP (XEXP (src, 0), 0), | |
1108 | XEXP (XEXP (src, 0), 1)); | |
1109 | } | |
58c8c593 RK |
1110 | |
1111 | return 0; | |
9c7e2978 RK |
1112 | } |
1113 | ||
40f03658 | 1114 | /* Return nonzero if CONDITION is more strict than the condition of |
9c7e2978 RK |
1115 | INSN, i.e., if INSN will always branch if CONDITION is true. */ |
1116 | ||
1117 | static int | |
0c20a65f | 1118 | condition_dominates_p (rtx condition, rtx insn) |
9c7e2978 RK |
1119 | { |
1120 | rtx other_condition = get_branch_condition (insn, JUMP_LABEL (insn)); | |
1121 | enum rtx_code code = GET_CODE (condition); | |
1122 | enum rtx_code other_code; | |
1123 | ||
1124 | if (rtx_equal_p (condition, other_condition) | |
1125 | || other_condition == const_true_rtx) | |
1126 | return 1; | |
1127 | ||
1128 | else if (condition == const_true_rtx || other_condition == 0) | |
1129 | return 0; | |
1130 | ||
1131 | other_code = GET_CODE (other_condition); | |
1132 | if (GET_RTX_LENGTH (code) != 2 || GET_RTX_LENGTH (other_code) != 2 | |
1133 | || ! rtx_equal_p (XEXP (condition, 0), XEXP (other_condition, 0)) | |
1134 | || ! rtx_equal_p (XEXP (condition, 1), XEXP (other_condition, 1))) | |
1135 | return 0; | |
1136 | ||
1137 | return comparison_dominates_p (code, other_code); | |
1138 | } | |
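/* Added example: if CONDITION is (eq x y) and INSN branches when (le x y),
   then CONDITION being true guarantees that INSN branches, so the function
   returns nonzero; with the two codes swapped it returns zero, since
   (le x y) does not imply (eq x y).  */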
83fd5651 | 1139 | |
40f03658 | 1140 | /* Return nonzero if redirecting JUMP to NEWLABEL does not invalidate |
83fd5651 JL |
1141 | any insns already in the delay slot of JUMP. */ |
1142 | ||
1143 | static int | |
0c20a65f | 1144 | redirect_with_delay_slots_safe_p (rtx jump, rtx newlabel, rtx seq) |
83fd5651 | 1145 | { |
91a51951 | 1146 | int flags, i; |
83fd5651 JL |
1147 | rtx pat = PATTERN (seq); |
1148 | ||
1149 | /* Make sure all the delay slots of this jump would still | |
1150 | be valid after threading the jump. If they are still | |
40f03658 | 1151 | valid, then return nonzero. */ |
83fd5651 JL |
1152 | |
1153 | flags = get_jump_flags (jump, newlabel); | |
1154 | for (i = 1; i < XVECLEN (pat, 0); i++) | |
1155 | if (! ( | |
1156 | #ifdef ANNUL_IFFALSE_SLOTS | |
1157 | (INSN_ANNULLED_BRANCH_P (jump) | |
1158 | && INSN_FROM_TARGET_P (XVECEXP (pat, 0, i))) | |
1159 | ? eligible_for_annul_false (jump, i - 1, | |
1160 | XVECEXP (pat, 0, i), flags) : | |
1161 | #endif | |
1162 | #ifdef ANNUL_IFTRUE_SLOTS | |
1163 | (INSN_ANNULLED_BRANCH_P (jump) | |
1164 | && ! INSN_FROM_TARGET_P (XVECEXP (pat, 0, i))) | |
1165 | ? eligible_for_annul_true (jump, i - 1, | |
1166 | XVECEXP (pat, 0, i), flags) : | |
1167 | #endif | |
f27e6bb2 | 1168 | eligible_for_delay (jump, i - 1, XVECEXP (pat, 0, i), flags))) |
83fd5651 JL |
1169 | break; |
1170 | ||
1171 | return (i == XVECLEN (pat, 0)); | |
1172 | } | |
1173 | ||
40f03658 | 1174 | /* Return nonzero if redirecting JUMP to NEWLABEL does not invalidate |
b304ad47 JL |
1175 | any insns we wish to place in the delay slot of JUMP. */ |
1176 | ||
1177 | static int | |
0c20a65f | 1178 | redirect_with_delay_list_safe_p (rtx jump, rtx newlabel, rtx delay_list) |
b304ad47 JL |
1179 | { |
1180 | int flags, i; | |
1181 | rtx li; | |
1182 | ||
1183 | /* Make sure all the insns in DELAY_LIST would still be | |
1184 | valid after threading the jump. If they are still | |
40f03658 | 1185 | valid, then return nonzero. */ |
b304ad47 JL |
1186 | |
1187 | flags = get_jump_flags (jump, newlabel); | |
1188 | for (li = delay_list, i = 0; li; li = XEXP (li, 1), i++) | |
1189 | if (! ( | |
1190 | #ifdef ANNUL_IFFALSE_SLOTS | |
1191 | (INSN_ANNULLED_BRANCH_P (jump) | |
1192 | && INSN_FROM_TARGET_P (XEXP (li, 0))) | |
4791d99b | 1193 | ? eligible_for_annul_false (jump, i, XEXP (li, 0), flags) : |
b304ad47 JL |
1194 | #endif |
1195 | #ifdef ANNUL_IFTRUE_SLOTS | |
1196 | (INSN_ANNULLED_BRANCH_P (jump) | |
1197 | && ! INSN_FROM_TARGET_P (XEXP (li, 0))) | |
4791d99b | 1198 | ? eligible_for_annul_true (jump, i, XEXP (li, 0), flags) : |
b304ad47 | 1199 | #endif |
4791d99b | 1200 | eligible_for_delay (jump, i, XEXP (li, 0), flags))) |
b304ad47 JL |
1201 | break; |
1202 | ||
1203 | return (li == NULL); | |
1204 | } | |
1205 | ||
96960d10 HB |
1206 | /* DELAY_LIST is a list of insns that have already been placed into delay |
1207 | slots. See if all of them have the same annulling status as ANNUL_TRUE_P. | |
1208 | If not, return 0; otherwise return 1. */ | |
1209 | ||
1210 | static int | |
0c20a65f | 1211 | check_annul_list_true_false (int annul_true_p, rtx delay_list) |
96960d10 | 1212 | { |
f0c76b51 | 1213 | rtx temp; |
96960d10 HB |
1214 | |
1215 | if (delay_list) | |
1216 | { | |
1217 | for (temp = delay_list; temp; temp = XEXP (temp, 1)) | |
f27e6bb2 KH |
1218 | { |
1219 | rtx trial = XEXP (temp, 0); | |
1220 | ||
1221 | if ((annul_true_p && INSN_FROM_TARGET_P (trial)) | |
96960d10 HB |
1222 | || (!annul_true_p && !INSN_FROM_TARGET_P (trial))) |
1223 | return 0; | |
f27e6bb2 | 1224 | } |
96960d10 | 1225 | } |
ab87f8c8 | 1226 | |
96960d10 HB |
1227 | return 1; |
1228 | } | |
9c7e2978 RK |
1229 | \f |
1230 | /* INSN branches to an insn whose pattern SEQ is a SEQUENCE. Given that | |
1231 | the condition tested by INSN is CONDITION and the resources shown in | |
1232 | OTHER_NEEDED are needed after INSN, see whether INSN can take all the insns | |
1233 | from SEQ's delay list, in addition to whatever insns it may execute | |
1234 | (in DELAY_LIST). SETS and NEEDED denote resources already set and |
1235 | needed while searching for delay slot insns. Return the concatenated | |
1236 | delay list if possible, otherwise, return 0. | |
1237 | ||
1238 | SLOTS_TO_FILL is the total number of slots required by INSN, and | |
1239 | PSLOTS_FILLED points to the number filled so far (also the number of | |
1240 | insns in DELAY_LIST). It is updated with the number that have been | |
1241 | filled from the SEQUENCE, if any. | |
1242 | ||
40f03658 | 1243 | PANNUL_P points to a nonzero value if we already know that we need |
9c7e2978 | 1244 | to annul INSN. If this routine determines that annulling is needed, |
40f03658 | 1245 | it may set that value nonzero. |
9c7e2978 RK |
1246 | |
1247 | PNEW_THREAD points to a location that is to receive the place at which | |
1248 | execution should continue. */ | |
1249 | ||
1250 | static rtx | |
0c20a65f AJ |
1251 | steal_delay_list_from_target (rtx insn, rtx condition, rtx seq, |
1252 | rtx delay_list, struct resources *sets, | |
1253 | struct resources *needed, | |
1254 | struct resources *other_needed, | |
1255 | int slots_to_fill, int *pslots_filled, | |
1256 | int *pannul_p, rtx *pnew_thread) | |
9c7e2978 RK |
1257 | { |
1258 | rtx temp; | |
1259 | int slots_remaining = slots_to_fill - *pslots_filled; | |
1260 | int total_slots_filled = *pslots_filled; | |
1261 | rtx new_delay_list = 0; | |
1262 | int must_annul = *pannul_p; | |
f0c76b51 | 1263 | int used_annul = 0; |
ab87f8c8 | 1264 | int i; |
18e765cb | 1265 | struct resources cc_set; |
9c7e2978 RK |
1266 | |
1267 | /* We can't do anything if there are more delay slots in SEQ than we | |
1268 | can handle, or if we don't know that it will be a taken branch. | |
9c7e2978 | 1269 | We know that it will be a taken branch if it is either an unconditional |
fe41a98e JW |
1270 | branch or a conditional branch with a stricter branch condition. |
1271 | ||
1272 | Also, exit if the branch has more than one set, since then it is computing | |
1273 | other results that can't be ignored, e.g. the HPPA mov&branch instruction. | |
1274 | ??? It may be possible to move other sets into INSN in addition to | |
18e765cb JL |
1275 | moving the instructions in the delay slots. |
1276 | ||
1277 | We cannot steal the delay list if one of the instructions in the |
f27e6bb2 | 1278 | current delay_list modifies the condition codes and the jump in the |
18e765cb JL |
1279 | sequence is a conditional jump. We cannot do this because we cannot |
1280 | change the direction of the jump, since the condition codes |
f27e6bb2 | 1281 | will affect the direction of the jump in the sequence. */ |
18e765cb JL |
1282 | |
1283 | CLEAR_RESOURCE (&cc_set); | |
1284 | for (temp = delay_list; temp; temp = XEXP (temp, 1)) | |
1285 | { | |
1286 | rtx trial = XEXP (temp, 0); | |
1287 | ||
73780b74 | 1288 | mark_set_resources (trial, &cc_set, 0, MARK_SRC_DEST_CALL); |
18e765cb | 1289 | if (insn_references_resource_p (XVECEXP (seq , 0, 0), &cc_set, 0)) |
f27e6bb2 | 1290 | return delay_list; |
18e765cb | 1291 | } |
9c7e2978 RK |
1292 | |
1293 | if (XVECLEN (seq, 0) - 1 > slots_remaining | |
fe41a98e JW |
1294 | || ! condition_dominates_p (condition, XVECEXP (seq, 0, 0)) |
1295 | || ! single_set (XVECEXP (seq, 0, 0))) | |
9c7e2978 RK |
1296 | return delay_list; |
1297 | ||
6e7b03e1 AH |
1298 | #ifdef MD_CAN_REDIRECT_BRANCH |
1299 | /* On some targets, branches with delay slots can have a limited | |
1300 | displacement. Give the back end a chance to tell us we can't do | |
1301 | this. */ | |
1302 | if (! MD_CAN_REDIRECT_BRANCH (insn, XVECEXP (seq, 0, 0))) | |
1303 | return delay_list; | |
1304 | #endif | |
1305 | ||
9c7e2978 RK |
1306 | for (i = 1; i < XVECLEN (seq, 0); i++) |
1307 | { | |
1308 | rtx trial = XVECEXP (seq, 0, i); | |
35523fce | 1309 | int flags; |
9c7e2978 RK |
1310 | |
1311 | if (insn_references_resource_p (trial, sets, 0) | |
1312 | || insn_sets_resource_p (trial, needed, 0) | |
1313 | || insn_sets_resource_p (trial, sets, 0) | |
1314 | #ifdef HAVE_cc0 | |
1315 | /* If TRIAL sets CC0, we can't copy it, so we can't steal this | |
1316 | delay list. */ | |
fb3821f7 | 1317 | || find_reg_note (trial, REG_CC_USER, NULL_RTX) |
9c7e2978 RK |
1318 | #endif |
1319 | /* If TRIAL is from the fallthrough code of an annulled branch insn | |
1320 | in SEQ, we cannot use it. */ | |
1321 | || (INSN_ANNULLED_BRANCH_P (XVECEXP (seq, 0, 0)) | |
1322 | && ! INSN_FROM_TARGET_P (trial))) | |
1323 | return delay_list; | |
1324 | ||
1325 | /* If this insn was already done (usually in a previous delay slot), | |
1326 | pretend we put it in our delay slot. */ | |
f898abd7 | 1327 | if (redundant_insn (trial, insn, new_delay_list)) |
9c7e2978 RK |
1328 | continue; |
1329 | ||
35523fce JL |
1330 | /* We will end up re-vectoring this branch, so compute flags |
1331 | based on jumping to the new label. */ | |
1332 | flags = get_jump_flags (insn, JUMP_LABEL (XVECEXP (seq, 0, 0))); | |
1333 | ||
9c7e2978 RK |
1334 | if (! must_annul |
1335 | && ((condition == const_true_rtx | |
1336 | || (! insn_sets_resource_p (trial, other_needed, 0) | |
1337 | && ! may_trap_p (PATTERN (trial))))) | |
35523fce | 1338 | ? eligible_for_delay (insn, total_slots_filled, trial, flags) |
96960d10 HB |
1339 | : (must_annul || (delay_list == NULL && new_delay_list == NULL)) |
1340 | && (must_annul = 1, | |
1341 | check_annul_list_true_false (0, delay_list) | |
1342 | && check_annul_list_true_false (0, new_delay_list) | |
1343 | && eligible_for_annul_false (insn, total_slots_filled, | |
1344 | trial, flags))) | |
9c7e2978 | 1345 | { |
96960d10 HB |
1346 | if (must_annul) |
1347 | used_annul = 1; | |
9c7e2978 RK |
1348 | temp = copy_rtx (trial); |
1349 | INSN_FROM_TARGET_P (temp) = 1; | |
1350 | new_delay_list = add_to_delay_list (temp, new_delay_list); | |
1351 | total_slots_filled++; | |
1352 | ||
1353 | if (--slots_remaining == 0) | |
1354 | break; | |
1355 | } | |
1356 | else | |
1357 | return delay_list; | |
1358 | } | |
1359 | ||
1360 | /* Show the place to which we will be branching. */ | |
1361 | *pnew_thread = next_active_insn (JUMP_LABEL (XVECEXP (seq, 0, 0))); | |
1362 | ||
1363 | /* Add any new insns to the delay list and update the count of the | |
1364 | number of slots filled. */ | |
1365 | *pslots_filled = total_slots_filled; | |
96960d10 HB |
1366 | if (used_annul) |
1367 | *pannul_p = 1; | |
9c7e2978 RK |
1368 | |
1369 | if (delay_list == 0) | |
1370 | return new_delay_list; | |
1371 | ||
1372 | for (temp = new_delay_list; temp; temp = XEXP (temp, 1)) | |
1373 | delay_list = add_to_delay_list (XEXP (temp, 0), delay_list); | |
1374 | ||
1375 | return delay_list; | |
1376 | } | |
1377 | \f | |
f27e6bb2 | 1378 | /* Similar to steal_delay_list_from_target except that SEQ is on the |
9c7e2978 RK |
1379 | fallthrough path of INSN. Here we only do something if the delay insn |
1380 | of SEQ is an unconditional branch. In that case we steal its delay slot | |
1381 | for INSN since unconditional branches are much easier to fill. */ | |
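/* An illustrative sketch (hypothetical, not taken from the original
   sources): here SEQ lies on INSN's fall-through path,

	INSN: (jump_insn ... L1)			  ;; conditional, needs a slot
	SEQ:  (sequence [(jump_insn ... L2) (insn A)])	  ;; unconditional branch

   Because the branch heading SEQ is unconditional, insn A can be removed
   from SEQ's delay slot and placed in INSN's slot instead; the emptied slot
   of the unconditional branch is much easier to refill later.  */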
1382 | ||
1383 | static rtx | |
0c20a65f AJ |
1384 | steal_delay_list_from_fallthrough (rtx insn, rtx condition, rtx seq, |
1385 | rtx delay_list, struct resources *sets, | |
1386 | struct resources *needed, | |
1387 | struct resources *other_needed, | |
1388 | int slots_to_fill, int *pslots_filled, | |
1389 | int *pannul_p) | |
9c7e2978 RK |
1390 | { |
1391 | int i; | |
35523fce | 1392 | int flags; |
96960d10 HB |
1393 | int must_annul = *pannul_p; |
1394 | int used_annul = 0; | |
35523fce JL |
1395 | |
1396 | flags = get_jump_flags (insn, JUMP_LABEL (insn)); | |
9c7e2978 RK |
1397 | |
1398 | /* We can't do anything if SEQ's delay insn isn't an | |
1399 | unconditional branch. */ | |
1400 | ||
1401 | if (! simplejump_p (XVECEXP (seq, 0, 0)) | |
1402 | && GET_CODE (PATTERN (XVECEXP (seq, 0, 0))) != RETURN) | |
1403 | return delay_list; | |
1404 | ||
1405 | for (i = 1; i < XVECLEN (seq, 0); i++) | |
1406 | { | |
1407 | rtx trial = XVECEXP (seq, 0, i); | |
1408 | ||
1409 | /* If TRIAL sets CC0, stealing it will move it too far from the use | |
1410 | of CC0. */ | |
1411 | if (insn_references_resource_p (trial, sets, 0) | |
1412 | || insn_sets_resource_p (trial, needed, 0) | |
1413 | || insn_sets_resource_p (trial, sets, 0) | |
1414 | #ifdef HAVE_cc0 | |
1415 | || sets_cc0_p (PATTERN (trial)) | |
1416 | #endif | |
1417 | ) | |
1418 | ||
1419 | break; | |
1420 | ||
1421 | /* If this insn was already done, we don't need it. */ | |
f898abd7 | 1422 | if (redundant_insn (trial, insn, delay_list)) |
9c7e2978 RK |
1423 | { |
1424 | delete_from_delay_slot (trial); | |
1425 | continue; | |
1426 | } | |
1427 | ||
96960d10 | 1428 | if (! must_annul |
9c7e2978 RK |
1429 | && ((condition == const_true_rtx |
1430 | || (! insn_sets_resource_p (trial, other_needed, 0) | |
1431 | && ! may_trap_p (PATTERN (trial))))) | |
35523fce | 1432 | ? eligible_for_delay (insn, *pslots_filled, trial, flags) |
96960d10 HB |
1433 | : (must_annul || delay_list == NULL) && (must_annul = 1, |
1434 | check_annul_list_true_false (1, delay_list) | |
1435 | && eligible_for_annul_true (insn, *pslots_filled, trial, flags))) | |
9c7e2978 | 1436 | { |
96960d10 HB |
1437 | if (must_annul) |
1438 | used_annul = 1; | |
9c7e2978 RK |
1439 | delete_from_delay_slot (trial); |
1440 | delay_list = add_to_delay_list (trial, delay_list); | |
1441 | ||
1442 | if (++(*pslots_filled) == slots_to_fill) | |
1443 | break; | |
1444 | } | |
1445 | else | |
1446 | break; | |
1447 | } | |
1448 | ||
96960d10 HB |
1449 | if (used_annul) |
1450 | *pannul_p = 1; | |
9c7e2978 RK |
1451 | return delay_list; |
1452 | } | |
1453 | \f | |
1454 | /* Try merging insns starting at THREAD which match exactly the insns in | |
1455 | INSN's delay list. | |
1456 | ||
1457 | If all insns were matched and the insn was previously annulling, the | |
1458 | annul bit will be cleared. | |
1459 | ||
1460 | For each insn that is merged, if the branch is or will be non-annulling, | |
1461 | we delete the merged insn. */ | |
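/* An illustrative sketch (hypothetical insns, not taken from the original
   sources): if INSN is an annulling branch whose delay slots hold copies of
   A and B, and the code at THREAD begins

	THREAD: A
		B
		...

   then the slots are matched one by one.  Once every slot has matched, the
   annul bit on the branch can be cleared and the duplicates at THREAD
   deleted, since the slot insns will then execute on both the taken and the
   not-taken path.  */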
1462 | ||
1463 | static void | |
0c20a65f | 1464 | try_merge_delay_insns (rtx insn, rtx thread) |
9c7e2978 RK |
1465 | { |
1466 | rtx trial, next_trial; | |
1467 | rtx delay_insn = XVECEXP (PATTERN (insn), 0, 0); | |
1468 | int annul_p = INSN_ANNULLED_BRANCH_P (delay_insn); | |
1469 | int slot_number = 1; | |
1470 | int num_slots = XVECLEN (PATTERN (insn), 0); | |
1471 | rtx next_to_match = XVECEXP (PATTERN (insn), 0, slot_number); | |
1472 | struct resources set, needed; | |
1473 | rtx merged_insns = 0; | |
1474 | int i; | |
35523fce JL |
1475 | int flags; |
1476 | ||
a2f54138 | 1477 | flags = get_jump_flags (delay_insn, JUMP_LABEL (delay_insn)); |
9c7e2978 RK |
1478 | |
1479 | CLEAR_RESOURCE (&needed); | |
1480 | CLEAR_RESOURCE (&set); | |
1481 | ||
1482 | /* If this is not an annulling branch, take into account anything needed in | |
96960d10 | 1483 | INSN's delay slot. This prevents two increments from being incorrectly |
9c7e2978 RK |
1484 | folded into one. If we are annulling, this would be the correct |
1485 | thing to do. (The alternative, looking at things set in NEXT_TO_MATCH, | |
1486 | will essentially disable this optimization. This method is somewhat of | |
1487 | a kludge, but I don't see a better way.) */ | |
1488 | if (! annul_p) | |
f27e6bb2 | 1489 | for (i = 1 ; i < num_slots; i++) |
96960d10 | 1490 | if (XVECEXP (PATTERN (insn), 0, i)) |
f27e6bb2 | 1491 | mark_referenced_resources (XVECEXP (PATTERN (insn), 0, i), &needed, 1); |
9c7e2978 RK |
1492 | |
1493 | for (trial = thread; !stop_search_p (trial, 1); trial = next_trial) | |
1494 | { | |
1495 | rtx pat = PATTERN (trial); | |
ce15adaa | 1496 | rtx oldtrial = trial; |
9c7e2978 RK |
1497 | |
1498 | next_trial = next_nonnote_insn (trial); | |
1499 | ||
1500 | /* TRIAL must be a CALL_INSN or INSN. Skip USE and CLOBBER. */ | |
4b4bf941 | 1501 | if (NONJUMP_INSN_P (trial) |
9c7e2978 RK |
1502 | && (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)) |
1503 | continue; | |
1504 | ||
1505 | if (GET_CODE (next_to_match) == GET_CODE (trial) | |
1506 | #ifdef HAVE_cc0 | |
1507 | /* We can't share an insn that sets cc0. */ | |
1508 | && ! sets_cc0_p (pat) | |
1509 | #endif | |
1510 | && ! insn_references_resource_p (trial, &set, 1) | |
1511 | && ! insn_sets_resource_p (trial, &set, 1) | |
1512 | && ! insn_sets_resource_p (trial, &needed, 1) | |
1513 | && (trial = try_split (pat, trial, 0)) != 0 | |
9772d94f JW |
1514 | /* Update next_trial, in case try_split succeeded. */ |
1515 | && (next_trial = next_nonnote_insn (trial)) | |
ce15adaa RK |
1516 | /* Likewise THREAD. */ |
1517 | && (thread = oldtrial == thread ? trial : thread) | |
9c7e2978 RK |
1518 | && rtx_equal_p (PATTERN (next_to_match), PATTERN (trial)) |
1519 | /* Have to test this condition if annul condition is different | |
1520 | from (and less restrictive than) non-annulling one. */ | |
35523fce | 1521 | && eligible_for_delay (delay_insn, slot_number - 1, trial, flags)) |
9c7e2978 | 1522 | { |
9c7e2978 RK |
1523 | |
1524 | if (! annul_p) | |
1525 | { | |
f1f9081a | 1526 | update_block (trial, thread); |
8ad4abfc RK |
1527 | if (trial == thread) |
1528 | thread = next_active_insn (thread); | |
1529 | ||
53c17031 | 1530 | delete_related_insns (trial); |
9c7e2978 RK |
1531 | INSN_FROM_TARGET_P (next_to_match) = 0; |
1532 | } | |
1533 | else | |
38a448ca | 1534 | merged_insns = gen_rtx_INSN_LIST (VOIDmode, trial, merged_insns); |
9c7e2978 RK |
1535 | |
1536 | if (++slot_number == num_slots) | |
1537 | break; | |
1538 | ||
1539 | next_to_match = XVECEXP (PATTERN (insn), 0, slot_number); | |
9c7e2978 RK |
1540 | } |
1541 | ||
73780b74 | 1542 | mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL); |
9c7e2978 RK |
1543 | mark_referenced_resources (trial, &needed, 1); |
1544 | } | |
1545 | ||
1546 | /* See if we stopped on a filled insn. If we did, try to see if its | |
1547 | delay slots match. */ | |
1548 | if (slot_number != num_slots | |
4b4bf941 | 1549 | && trial && NONJUMP_INSN_P (trial) |
9c7e2978 RK |
1550 | && GET_CODE (PATTERN (trial)) == SEQUENCE |
1551 | && ! INSN_ANNULLED_BRANCH_P (XVECEXP (PATTERN (trial), 0, 0))) | |
1552 | { | |
1553 | rtx pat = PATTERN (trial); | |
058acefd JW |
1554 | rtx filled_insn = XVECEXP (pat, 0, 0); |
1555 | ||
1556 | /* Account for resources set/needed by the filled insn. */ | |
73780b74 | 1557 | mark_set_resources (filled_insn, &set, 0, MARK_SRC_DEST_CALL); |
058acefd | 1558 | mark_referenced_resources (filled_insn, &needed, 1); |
9c7e2978 RK |
1559 | |
1560 | for (i = 1; i < XVECLEN (pat, 0); i++) | |
1561 | { | |
1562 | rtx dtrial = XVECEXP (pat, 0, i); | |
1563 | ||
1564 | if (! insn_references_resource_p (dtrial, &set, 1) | |
1565 | && ! insn_sets_resource_p (dtrial, &set, 1) | |
1566 | && ! insn_sets_resource_p (dtrial, &needed, 1) | |
1567 | #ifdef HAVE_cc0 | |
1568 | && ! sets_cc0_p (PATTERN (dtrial)) | |
1569 | #endif | |
1570 | && rtx_equal_p (PATTERN (next_to_match), PATTERN (dtrial)) | |
35523fce | 1571 | && eligible_for_delay (delay_insn, slot_number - 1, dtrial, flags)) |
9c7e2978 RK |
1572 | { |
1573 | if (! annul_p) | |
1574 | { | |
96960d10 HB |
1575 | rtx new; |
1576 | ||
f1f9081a | 1577 | update_block (dtrial, thread); |
96960d10 HB |
1578 | new = delete_from_delay_slot (dtrial); |
1579 | if (INSN_DELETED_P (thread)) | |
1580 | thread = new; | |
9c7e2978 RK |
1581 | INSN_FROM_TARGET_P (next_to_match) = 0; |
1582 | } | |
1583 | else | |
38a448ca RH |
1584 | merged_insns = gen_rtx_INSN_LIST (SImode, dtrial, |
1585 | merged_insns); | |
9c7e2978 RK |
1586 | |
1587 | if (++slot_number == num_slots) | |
1588 | break; | |
1589 | ||
1590 | next_to_match = XVECEXP (PATTERN (insn), 0, slot_number); | |
1591 | } | |
96960d10 HB |
1592 | else |
1593 | { | |
f0c76b51 JL |
1594 | /* Keep track of the set/referenced resources for the delay |
1595 | slots of any trial insns we encounter. */ | |
f27e6bb2 KH |
1596 | mark_set_resources (dtrial, &set, 0, MARK_SRC_DEST_CALL); |
1597 | mark_referenced_resources (dtrial, &needed, 1); | |
96960d10 | 1598 | } |
9c7e2978 RK |
1599 | } |
1600 | } | |
1601 | ||
1602 | /* If all insns in the delay slot have been matched and we were previously | |
1603 | annulling the branch, we need not do so any more. In that case delete all the | |
38e01259 | 1604 | merged insns. Also clear the INSN_FROM_TARGET_P bit of each insn in |
9c7e2978 RK |
1605 | the delay list so that we know that it isn't only being used at the |
1606 | target. */ | |
d58b6986 | 1607 | if (slot_number == num_slots && annul_p) |
9c7e2978 RK |
1608 | { |
1609 | for (; merged_insns; merged_insns = XEXP (merged_insns, 1)) | |
1610 | { | |
1611 | if (GET_MODE (merged_insns) == SImode) | |
1612 | { | |
96960d10 HB |
1613 | rtx new; |
1614 | ||
f1f9081a | 1615 | update_block (XEXP (merged_insns, 0), thread); |
96960d10 HB |
1616 | new = delete_from_delay_slot (XEXP (merged_insns, 0)); |
1617 | if (INSN_DELETED_P (thread)) | |
1618 | thread = new; | |
9c7e2978 RK |
1619 | } |
1620 | else | |
1621 | { | |
f1f9081a | 1622 | update_block (XEXP (merged_insns, 0), thread); |
53c17031 | 1623 | delete_related_insns (XEXP (merged_insns, 0)); |
9c7e2978 RK |
1624 | } |
1625 | } | |
1626 | ||
1627 | INSN_ANNULLED_BRANCH_P (delay_insn) = 0; | |
1628 | ||
1629 | for (i = 0; i < XVECLEN (PATTERN (insn), 0); i++) | |
1630 | INSN_FROM_TARGET_P (XVECEXP (PATTERN (insn), 0, i)) = 0; | |
1631 | } | |
1632 | } | |
1633 | \f | |
1634 | /* See if INSN is redundant with an insn in front of TARGET. Often this | |
1635 | is called when INSN is a candidate for a delay slot of TARGET. | |
1636 | DELAY_LIST holds insns that will be placed in delay slots of TARGET in front | |
1637 | of INSN. Often INSN will be redundant with an insn in a delay slot of | |
1638 | some previous insn. This happens when we have a series of branches to the | |
1639 | same label; in that case the first insn at the target might want to go | |
1640 | into each of the delay slots. | |
1641 | ||
1642 | If we are not careful, this routine can take up a significant fraction | |
1643 | of the total compilation time (4%), but only wins rarely. Hence we | |
1644 | speed this routine up by making two passes. The first pass goes back | |
5d3cc252 | 1645 | until it hits a label and sees if it finds an insn with an identical |
9c7e2978 RK |
1646 | pattern. Only in this (relatively rare) event does it check for |
1647 | data conflicts. | |
1648 | ||
1649 | We do not split insns we encounter. This could cause us not to find a | |
1650 | redundant insn, but the cost of splitting seems greater than the possible | |
1651 | gain in rare cases. */ | |
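/* An illustrative sketch (hypothetical, not taken from the original
   sources): with several branches to the same label,

	branch1 L1	;; delay slot already holds a copy of X
	...
	branch2 L1	;; X is now a candidate for this slot too
	...
    L1: X

   when X is considered for branch2, the backward scan finds the copy of X
   in branch1's slot; if nothing in between conflicts with X's sets and
   uses, X is already "done" on this path and the slot can be treated as
   filled without emitting X again.  */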
1652 | ||
5317d2f8 | 1653 | static rtx |
0c20a65f | 1654 | redundant_insn (rtx insn, rtx target, rtx delay_list) |
9c7e2978 RK |
1655 | { |
1656 | rtx target_main = target; | |
1657 | rtx ipat = PATTERN (insn); | |
1658 | rtx trial, pat; | |
1659 | struct resources needed, set; | |
1660 | int i; | |
0443f602 | 1661 | unsigned insns_to_search; |
9c7e2978 | 1662 | |
cbae24bc RK |
1663 | /* If INSN has any REG_UNUSED notes, it can't match anything since we |
1664 | are allowed to not actually assign to such a register. */ | |
1665 | if (find_reg_note (insn, REG_UNUSED, NULL_RTX) != 0) | |
1666 | return 0; | |
1667 | ||
9c7e2978 | 1668 | /* Scan backwards looking for a match. */ |
0443f602 JO |
1669 | for (trial = PREV_INSN (target), |
1670 | insns_to_search = MAX_DELAY_SLOT_INSN_SEARCH; | |
1671 | trial && insns_to_search > 0; | |
1672 | trial = PREV_INSN (trial), --insns_to_search) | |
9c7e2978 | 1673 | { |
4b4bf941 | 1674 | if (LABEL_P (trial)) |
9c7e2978 RK |
1675 | return 0; |
1676 | ||
2c3c49de | 1677 | if (! INSN_P (trial)) |
9c7e2978 RK |
1678 | continue; |
1679 | ||
1680 | pat = PATTERN (trial); | |
1681 | if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER) | |
1682 | continue; | |
1683 | ||
1684 | if (GET_CODE (pat) == SEQUENCE) | |
1685 | { | |
ca104c13 JL |
1686 | /* Stop for a CALL and its delay slots because it is difficult to |
1687 | track its resource needs correctly. */ | |
4b4bf941 | 1688 | if (CALL_P (XVECEXP (pat, 0, 0))) |
9c7e2978 RK |
1689 | return 0; |
1690 | ||
ca104c13 | 1691 | /* Stop for an INSN or JUMP_INSN with delayed effects and its delay |
f27e6bb2 | 1692 | slots because it is difficult to track its resource needs |
ca104c13 JL |
1693 | correctly. */ |
1694 | ||
1695 | #ifdef INSN_SETS_ARE_DELAYED | |
1696 | if (INSN_SETS_ARE_DELAYED (XVECEXP (pat, 0, 0))) | |
f27e6bb2 | 1697 | return 0; |
ca104c13 JL |
1698 | #endif |
1699 | ||
1700 | #ifdef INSN_REFERENCES_ARE_DELAYED | |
1701 | if (INSN_REFERENCES_ARE_DELAYED (XVECEXP (pat, 0, 0))) | |
f27e6bb2 | 1702 | return 0; |
ca104c13 JL |
1703 | #endif |
1704 | ||
1705 | /* See if any of the insns in the delay slot match, updating | |
1706 | resource requirements as we go. */ | |
9c7e2978 RK |
1707 | for (i = XVECLEN (pat, 0) - 1; i > 0; i--) |
1708 | if (GET_CODE (XVECEXP (pat, 0, i)) == GET_CODE (insn) | |
cbae24bc RK |
1709 | && rtx_equal_p (PATTERN (XVECEXP (pat, 0, i)), ipat) |
1710 | && ! find_reg_note (XVECEXP (pat, 0, i), REG_UNUSED, NULL_RTX)) | |
9c7e2978 RK |
1711 | break; |
1712 | ||
1713 | /* If found a match, exit this loop early. */ | |
1714 | if (i > 0) | |
1715 | break; | |
1716 | } | |
1717 | ||
cbae24bc RK |
1718 | else if (GET_CODE (trial) == GET_CODE (insn) && rtx_equal_p (pat, ipat) |
1719 | && ! find_reg_note (trial, REG_UNUSED, NULL_RTX)) | |
9c7e2978 RK |
1720 | break; |
1721 | } | |
1722 | ||
1723 | /* If we didn't find an insn that matches, return 0. */ | |
1724 | if (trial == 0) | |
1725 | return 0; | |
1726 | ||
1727 | /* See what resources this insn sets and needs. If they overlap, or | |
1728 | if this insn references CC0, it can't be redundant. */ | |
1729 | ||
1730 | CLEAR_RESOURCE (&needed); | |
1731 | CLEAR_RESOURCE (&set); | |
73780b74 | 1732 | mark_set_resources (insn, &set, 0, MARK_SRC_DEST_CALL); |
9c7e2978 RK |
1733 | mark_referenced_resources (insn, &needed, 1); |
1734 | ||
1735 | /* If TARGET is a SEQUENCE, get the main insn. */ | |
4b4bf941 | 1736 | if (NONJUMP_INSN_P (target) && GET_CODE (PATTERN (target)) == SEQUENCE) |
9c7e2978 RK |
1737 | target_main = XVECEXP (PATTERN (target), 0, 0); |
1738 | ||
1739 | if (resource_conflicts_p (&needed, &set) | |
1740 | #ifdef HAVE_cc0 | |
1741 | || reg_mentioned_p (cc0_rtx, ipat) | |
1742 | #endif | |
1743 | /* The insn requiring the delay may not set anything needed or set by | |
1744 | INSN. */ | |
1745 | || insn_sets_resource_p (target_main, &needed, 1) | |
1746 | || insn_sets_resource_p (target_main, &set, 1)) | |
1747 | return 0; | |
1748 | ||
1749 | /* Insns we pass may not set either NEEDED or SET, so merge them for | |
1750 | simpler tests. */ | |
1751 | needed.memory |= set.memory; | |
8eae5ed6 | 1752 | needed.unch_memory |= set.unch_memory; |
9c7e2978 RK |
1753 | IOR_HARD_REG_SET (needed.regs, set.regs); |
1754 | ||
1755 | /* This insn isn't redundant if it conflicts with an insn that either is | |
1756 | or will be in a delay slot of TARGET. */ | |
1757 | ||
1758 | while (delay_list) | |
1759 | { | |
1760 | if (insn_sets_resource_p (XEXP (delay_list, 0), &needed, 1)) | |
1761 | return 0; | |
1762 | delay_list = XEXP (delay_list, 1); | |
1763 | } | |
1764 | ||
4b4bf941 | 1765 | if (NONJUMP_INSN_P (target) && GET_CODE (PATTERN (target)) == SEQUENCE) |
9c7e2978 RK |
1766 | for (i = 1; i < XVECLEN (PATTERN (target), 0); i++) |
1767 | if (insn_sets_resource_p (XVECEXP (PATTERN (target), 0, i), &needed, 1)) | |
1768 | return 0; | |
1769 | ||
1770 | /* Scan backwards until we reach a label or an insn that uses something | |
1771 | INSN sets or sets something INSN uses or sets. */ | |
1772 | ||
0443f602 JO |
1773 | for (trial = PREV_INSN (target), |
1774 | insns_to_search = MAX_DELAY_SLOT_INSN_SEARCH; | |
4b4bf941 | 1775 | trial && !LABEL_P (trial) && insns_to_search > 0; |
0443f602 | 1776 | trial = PREV_INSN (trial), --insns_to_search) |
9c7e2978 | 1777 | { |
4b4bf941 | 1778 | if (!INSN_P (trial)) |
9c7e2978 RK |
1779 | continue; |
1780 | ||
1781 | pat = PATTERN (trial); | |
1782 | if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER) | |
1783 | continue; | |
1784 | ||
1785 | if (GET_CODE (pat) == SEQUENCE) | |
1786 | { | |
1787 | /* If this is a CALL_INSN and its delay slots, it is hard to track | |
1788 | the resource needs properly, so give up. */ | |
4b4bf941 | 1789 | if (CALL_P (XVECEXP (pat, 0, 0))) |
9c7e2978 RK |
1790 | return 0; |
1791 | ||
38e01259 | 1792 | /* If this is an INSN or JUMP_INSN with delayed effects, it |
ca104c13 JL |
1793 | is hard to track the resource needs properly, so give up. */ |
1794 | ||
1795 | #ifdef INSN_SETS_ARE_DELAYED | |
1796 | if (INSN_SETS_ARE_DELAYED (XVECEXP (pat, 0, 0))) | |
f27e6bb2 | 1797 | return 0; |
ca104c13 JL |
1798 | #endif |
1799 | ||
1800 | #ifdef INSN_REFERENCES_ARE_DELAYED | |
1801 | if (INSN_REFERENCES_ARE_DELAYED (XVECEXP (pat, 0, 0))) | |
f27e6bb2 | 1802 | return 0; |
ca104c13 JL |
1803 | #endif |
1804 | ||
9c7e2978 RK |
1805 | /* See if any of the insns in the delay slot match, updating |
1806 | resource requirements as we go. */ | |
1807 | for (i = XVECLEN (pat, 0) - 1; i > 0; i--) | |
1808 | { | |
1809 | rtx candidate = XVECEXP (pat, 0, i); | |
1810 | ||
1811 | /* If an insn will be annulled if the branch is false, it isn't | |
1812 | considered as a possible duplicate insn. */ | |
1813 | if (rtx_equal_p (PATTERN (candidate), ipat) | |
1814 | && ! (INSN_ANNULLED_BRANCH_P (XVECEXP (pat, 0, 0)) | |
1815 | && INSN_FROM_TARGET_P (candidate))) | |
1816 | { | |
1817 | /* Show that this insn will be used in the sequel. */ | |
1818 | INSN_FROM_TARGET_P (candidate) = 0; | |
5317d2f8 | 1819 | return candidate; |
9c7e2978 RK |
1820 | } |
1821 | ||
1822 | /* Unless this is an annulled insn from the target of a branch, | |
1823 | we must stop if it sets anything needed or set by INSN. */ | |
1824 | if ((! INSN_ANNULLED_BRANCH_P (XVECEXP (pat, 0, 0)) | |
1825 | || ! INSN_FROM_TARGET_P (candidate)) | |
1826 | && insn_sets_resource_p (candidate, &needed, 1)) | |
1827 | return 0; | |
1828 | } | |
1829 | ||
f27e6bb2 | 1830 | /* If the insn requiring the delay slot conflicts with INSN, we |
9c7e2978 RK |
1831 | must stop. */ |
1832 | if (insn_sets_resource_p (XVECEXP (pat, 0, 0), &needed, 1)) | |
1833 | return 0; | |
1834 | } | |
1835 | else | |
1836 | { | |
1837 | /* See if TRIAL is the same as INSN. */ | |
1838 | pat = PATTERN (trial); | |
1839 | if (rtx_equal_p (pat, ipat)) | |
5317d2f8 | 1840 | return trial; |
9c7e2978 RK |
1841 | |
1842 | /* Can't go any further if TRIAL conflicts with INSN. */ | |
1843 | if (insn_sets_resource_p (trial, &needed, 1)) | |
1844 | return 0; | |
1845 | } | |
1846 | } | |
1847 | ||
1848 | return 0; | |
1849 | } | |
1850 | \f | |
40f03658 | 1851 | /* Return 1 if THREAD can only be executed in one way. If LABEL is nonzero, |
9c7e2978 | 1852 | it is the target of the branch insn being scanned. If ALLOW_FALLTHROUGH |
40f03658 | 1853 | is nonzero, we are allowed to fall into this thread; otherwise, we are |
9c7e2978 RK |
1854 | not. |
1855 | ||
1856 | If LABEL is used more than once or we pass a label other than LABEL before | |
1857 | finding an active insn, we do not own this thread. */ | |
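/* An illustrative sketch (hypothetical, not taken from the original
   sources):

	branch L1
	...
	(barrier)
    L1: insn A		;; L1 is referenced only by the branch above
	insn B

   The branch owns the thread starting at L1: no other jump targets L1, and
   with ALLOW_FALLTHROUGH zero the BARRIER guarantees that nothing falls
   into it, so insns may safely be removed from the thread once they have
   been copied into the branch's delay slots.  */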
1858 | ||
1859 | static int | |
0c20a65f | 1860 | own_thread_p (rtx thread, rtx label, int allow_fallthrough) |
9c7e2978 RK |
1861 | { |
1862 | rtx active_insn; | |
1863 | rtx insn; | |
1864 | ||
1865 | /* We don't own the function end. */ | |
1866 | if (thread == 0) | |
1867 | return 0; | |
1868 | ||
1869 | /* Get the first active insn, or THREAD, if it is an active insn. */ | |
1870 | active_insn = next_active_insn (PREV_INSN (thread)); | |
1871 | ||
1872 | for (insn = thread; insn != active_insn; insn = NEXT_INSN (insn)) | |
4b4bf941 | 1873 | if (LABEL_P (insn) |
9c7e2978 RK |
1874 | && (insn != label || LABEL_NUSES (insn) != 1)) |
1875 | return 0; | |
1876 | ||
1877 | if (allow_fallthrough) | |
1878 | return 1; | |
1879 | ||
1880 | /* Ensure that we reach a BARRIER before any insn or label. */ | |
1881 | for (insn = prev_nonnote_insn (thread); | |
4b4bf941 | 1882 | insn == 0 || !BARRIER_P (insn); |
9c7e2978 RK |
1883 | insn = prev_nonnote_insn (insn)) |
1884 | if (insn == 0 | |
4b4bf941 JQ |
1885 | || LABEL_P (insn) |
1886 | || (NONJUMP_INSN_P (insn) | |
9c7e2978 RK |
1887 | && GET_CODE (PATTERN (insn)) != USE |
1888 | && GET_CODE (PATTERN (insn)) != CLOBBER)) | |
1889 | return 0; | |
1890 | ||
1891 | return 1; | |
1892 | } | |
1893 | \f | |
9c7e2978 | 1894 | /* Called when INSN is being moved from a location near the target of a jump. |
aa2c50d6 | 1895 | We leave a marker of the form (use (INSN)) immediately in front |
9c7e2978 | 1896 | of WHERE for mark_target_live_regs. These markers will be deleted when |
aa2c50d6 RK |
1897 | reorg finishes. |
1898 | ||
1899 | We used to try to update the live status of registers if WHERE is at | |
1900 | the start of a basic block, but that can't work since we may remove a | |
1901 | BARRIER in relax_delay_slots. */ | |
9c7e2978 RK |
1902 | |
1903 | static void | |
0c20a65f | 1904 | update_block (rtx insn, rtx where) |
9c7e2978 | 1905 | { |
f27e6bb2 | 1906 | /* Ignore if this was in a delay slot and it came from the target of |
9c7e2978 RK |
1907 | a branch. */ |
1908 | if (INSN_FROM_TARGET_P (insn)) | |
1909 | return; | |
1910 | ||
38a448ca | 1911 | emit_insn_before (gen_rtx_USE (VOIDmode, insn), where); |
9c7e2978 RK |
1912 | |
1913 | /* INSN might be making a value live in a block where it didn't use to | |
1914 | be. So recompute liveness information for this block. */ | |
aa2c50d6 | 1915 | |
ca545bb5 | 1916 | incr_ticks_for_insn (insn); |
9c7e2978 | 1917 | } |
28c9500b | 1918 | |
326f06f7 RK |
1919 | /* Similar to REDIRECT_JUMP except that we update the BB_TICKS entry for |
1920 | the basic block containing the jump. */ | |
1921 | ||
1922 | static int | |
0c20a65f | 1923 | reorg_redirect_jump (rtx jump, rtx nlabel) |
326f06f7 | 1924 | { |
ca545bb5 | 1925 | incr_ticks_for_insn (jump); |
9ba11d5a | 1926 | return redirect_jump (jump, nlabel, 1); |
326f06f7 RK |
1927 | } |
1928 | ||
28c9500b JW |
1929 | /* Called when INSN is being moved forward into a delay slot of DELAYED_INSN. |
1930 | We check every instruction between INSN and DELAYED_INSN for REG_DEAD notes | |
1931 | that reference values used in INSN. If we find one, then we move the | |
1932 | REG_DEAD note to INSN. | |
1933 | ||
1934 | This is needed to handle the case where a later insn (after INSN) has a | |
1935 | REG_DEAD note for a register used by INSN, and this later insn subsequently | |
1936 | gets moved before a CODE_LABEL because it is a redundant insn. In this | |
1937 | case, mark_target_live_regs may be confused into thinking the register | |
1938 | is dead because it sees a REG_DEAD note immediately before a CODE_LABEL. */ | |
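/* A sketch of the intent (hypothetical register numbers, not taken from the
   original sources): if INSN uses (reg 2) and some insn P between INSN and
   DELAYED_INSN carries a REG_DEAD note for (reg 2), the note is moved from
   P to INSN, so that after INSN has been moved forward into the delay slot
   the death of (reg 2) is still recorded at its last use rather than at P.  */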
1939 | ||
1940 | static void | |
0c20a65f | 1941 | update_reg_dead_notes (rtx insn, rtx delayed_insn) |
28c9500b JW |
1942 | { |
1943 | rtx p, link, next; | |
1944 | ||
1945 | for (p = next_nonnote_insn (insn); p != delayed_insn; | |
1946 | p = next_nonnote_insn (p)) | |
1947 | for (link = REG_NOTES (p); link; link = next) | |
1948 | { | |
1949 | next = XEXP (link, 1); | |
1950 | ||
1951 | if (REG_NOTE_KIND (link) != REG_DEAD | |
f8cfc6aa | 1952 | || !REG_P (XEXP (link, 0))) |
28c9500b JW |
1953 | continue; |
1954 | ||
1955 | if (reg_referenced_p (XEXP (link, 0), PATTERN (insn))) | |
1956 | { | |
1957 | /* Move the REG_DEAD note from P to INSN. */ | |
1958 | remove_note (p, link); | |
1959 | XEXP (link, 1) = REG_NOTES (insn); | |
1960 | REG_NOTES (insn) = link; | |
1961 | } | |
1962 | } | |
1963 | } | |
5317d2f8 | 1964 | |
c170c8c2 JW |
1965 | /* Called when an insn redundant with start_insn is deleted. If there |
1966 | is a REG_DEAD note for the target of start_insn between start_insn | |
1967 | and stop_insn, then the REG_DEAD note needs to be deleted since the | |
1968 | value no longer dies there. | |
1969 | ||
1970 | If the REG_DEAD note isn't deleted, then mark_target_live_regs may be | |
1971 | confused into thinking the register is dead. */ | |
1972 | ||
1973 | static void | |
0c20a65f | 1974 | fix_reg_dead_note (rtx start_insn, rtx stop_insn) |
c170c8c2 JW |
1975 | { |
1976 | rtx p, link, next; | |
1977 | ||
1978 | for (p = next_nonnote_insn (start_insn); p != stop_insn; | |
1979 | p = next_nonnote_insn (p)) | |
1980 | for (link = REG_NOTES (p); link; link = next) | |
1981 | { | |
1982 | next = XEXP (link, 1); | |
1983 | ||
1984 | if (REG_NOTE_KIND (link) != REG_DEAD | |
f8cfc6aa | 1985 | || !REG_P (XEXP (link, 0))) |
c170c8c2 JW |
1986 | continue; |
1987 | ||
1988 | if (reg_set_p (XEXP (link, 0), PATTERN (start_insn))) | |
1989 | { | |
1990 | remove_note (p, link); | |
1991 | return; | |
1992 | } | |
1993 | } | |
1994 | } | |
1995 | ||
5317d2f8 RK |
1996 | /* Delete any REG_UNUSED notes that exist on INSN but not on REDUNDANT_INSN. |
1997 | ||
1998 | This handles the case of udivmodXi4 instructions which optimize their | |
1999 | output depending on whether any REG_UNUSED notes are present. | |
2000 | We must make sure that INSN calculates as many results as REDUNDANT_INSN | |
2001 | does. */ | |
2002 | ||
2003 | static void | |
0c20a65f | 2004 | update_reg_unused_notes (rtx insn, rtx redundant_insn) |
5317d2f8 | 2005 | { |
91a51951 | 2006 | rtx link, next; |
5317d2f8 RK |
2007 | |
2008 | for (link = REG_NOTES (insn); link; link = next) | |
2009 | { | |
2010 | next = XEXP (link, 1); | |
2011 | ||
2012 | if (REG_NOTE_KIND (link) != REG_UNUSED | |
f8cfc6aa | 2013 | || !REG_P (XEXP (link, 0))) |
5317d2f8 RK |
2014 | continue; |
2015 | ||
2016 | if (! find_regno_note (redundant_insn, REG_UNUSED, | |
2017 | REGNO (XEXP (link, 0)))) | |
2018 | remove_note (insn, link); | |
2019 | } | |
2020 | } | |
9c7e2978 | 2021 | \f |
9c7e2978 RK |
2022 | /* Scan a function looking for insns that need a delay slot and find insns to |
2023 | put into the delay slot. | |
2024 | ||
40f03658 | 2025 | NON_JUMPS_P is nonzero if we are to only try to fill non-jump insns (such |
9c7e2978 RK |
2026 | as calls). We do these first since we don't want jump insns (that are |
2027 | easier to fill) to get the only insns that could be used for non-jump insns. | |
2028 | When it is zero, only try to fill JUMP_INSNs. | |
2029 | ||
2030 | When slots are filled in this manner, the insns (including the | |
2031 | delay_insn) are put together in a SEQUENCE rtx. In this fashion, | |
2032 | it is possible to tell whether a delay slot has really been filled | |
2033 | or not. `final' knows how to deal with this, by communicating | |
2034 | through FINAL_SEQUENCE. */ | |
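/* An illustrative sketch (hypothetical, not taken from the original
   sources) of the representation: a filled slot becomes an insn whose
   pattern is a SEQUENCE,

	(insn (sequence [(call_insn ...)	 ;; the insn needing the slot
			 (set (reg A) ...)]))	 ;; the insn chosen for it

   while an unfilled slot is left as the bare insn, which final then pads
   with a nop as the target requires.  */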
2035 | ||
2036 | static void | |
0c20a65f | 2037 | fill_simple_delay_slots (int non_jumps_p) |
9c7e2978 | 2038 | { |
b3694847 SS |
2039 | rtx insn, pat, trial, next_trial; |
2040 | int i; | |
9c7e2978 RK |
2041 | int num_unfilled_slots = unfilled_slots_next - unfilled_slots_base; |
2042 | struct resources needed, set; | |
126db1fa | 2043 | int slots_to_fill, slots_filled; |
9c7e2978 RK |
2044 | rtx delay_list; |
2045 | ||
2046 | for (i = 0; i < num_unfilled_slots; i++) | |
2047 | { | |
35523fce | 2048 | int flags; |
9c7e2978 RK |
2049 | /* Get the next insn to fill. If it has already had any slots assigned, |
2050 | we can't do anything with it. Maybe we'll improve this later. */ | |
2051 | ||
2052 | insn = unfilled_slots_base[i]; | |
2053 | if (insn == 0 | |
2054 | || INSN_DELETED_P (insn) | |
4b4bf941 | 2055 | || (NONJUMP_INSN_P (insn) |
9c7e2978 | 2056 | && GET_CODE (PATTERN (insn)) == SEQUENCE) |
4b4bf941 JQ |
2057 | || (JUMP_P (insn) && non_jumps_p) |
2058 | || (!JUMP_P (insn) && ! non_jumps_p)) | |
9c7e2978 | 2059 | continue; |
f27e6bb2 | 2060 | |
c5c76735 JL |
2061 | /* It may have been that this insn used to need delay slots, but |
2062 | now doesn't; ignore in that case. This can happen, for example, | |
2063 | on the HP PA RISC, where the number of delay slots depends on | |
2064 | what insns are nearby. */ | |
9c7e2978 | 2065 | slots_to_fill = num_delay_slots (insn); |
c3a3b536 JL |
2066 | |
2067 | /* Some machine descriptions have defined instructions to have | |
2068 | delay slots only in certain circumstances which may depend on | |
2069 | nearby insns (which change due to reorg's actions). | |
2070 | ||
2071 | For example, the PA port normally has delay slots for unconditional | |
2072 | jumps. | |
2073 | ||
2074 | However, the PA port claims such jumps do not have a delay slot | |
2075 | if they are immediate successors of certain CALL_INSNs. This | |
2076 | allows the port to favor filling the delay slot of the call with | |
2077 | the unconditional jump. */ | |
9c7e2978 | 2078 | if (slots_to_fill == 0) |
c3a3b536 | 2079 | continue; |
9c7e2978 RK |
2080 | |
2081 | /* This insn needs, or can use, some delay slots. SLOTS_TO_FILL | |
464b453d TW |
2082 | says how many. After initialization, first try optimizing |
2083 | ||
2084 | call _foo call _foo | |
2085 | nop add %o7,.-L1,%o7 | |
2086 | b,a L1 | |
2087 | nop | |
2088 | ||
2089 | If this case applies, the delay slot of the call is filled with | |
2090 | the unconditional jump. This is done first to avoid having the | |
2091 | delay slot of the call filled in the backward scan. Also, since | |
2092 | the unconditional jump is likely to also have a delay slot, that | |
733fa7ef JL |
2093 | insn must exist when it is subsequently scanned. |
2094 | ||
2095 | This is tried on each insn with delay slots as some machines | |
f27e6bb2 | 2096 | have insns which perform calls, but are not represented as |
733fa7ef | 2097 | CALL_INSNs. */ |
464b453d TW |
2098 | |
2099 | slots_filled = 0; | |
2100 | delay_list = 0; | |
2101 | ||
4b4bf941 | 2102 | if (JUMP_P (insn)) |
c5c76735 JL |
2103 | flags = get_jump_flags (insn, JUMP_LABEL (insn)); |
2104 | else | |
2105 | flags = get_jump_flags (insn, NULL_RTX); | |
2106 | ||
733fa7ef | 2107 | if ((trial = next_active_insn (insn)) |
4b4bf941 | 2108 | && JUMP_P (trial) |
464b453d | 2109 | && simplejump_p (trial) |
35523fce | 2110 | && eligible_for_delay (insn, slots_filled, trial, flags) |
1db828df DM |
2111 | && no_labels_between_p (insn, trial) |
2112 | && ! can_throw_internal (trial)) | |
464b453d | 2113 | { |
45d9a5c6 | 2114 | rtx *tmp; |
464b453d TW |
2115 | slots_filled++; |
2116 | delay_list = add_to_delay_list (trial, delay_list); | |
45d9a5c6 RK |
2117 | |
2118 | /* TRIAL may have had its delay slot filled, then unfilled. When | |
2119 | the delay slot is unfilled, TRIAL is placed back on the unfilled | |
2120 | slots obstack. Unfortunately, it is placed on the end of the | |
2121 | obstack, not in its original location. Therefore, we must search | |
2122 | from entry i + 1 to the end of the unfilled slots obstack to | |
2123 | try and find TRIAL. */ | |
2124 | tmp = &unfilled_slots_base[i + 1]; | |
2125 | while (*tmp != trial && tmp != unfilled_slots_next) | |
2126 | tmp++; | |
2127 | ||
464b453d | 2128 | /* Remove the unconditional jump from consideration for delay slot |
6d2f8887 | 2129 | filling and unthread it. */ |
45d9a5c6 RK |
2130 | if (*tmp == trial) |
2131 | *tmp = 0; | |
464b453d TW |
2132 | { |
2133 | rtx next = NEXT_INSN (trial); | |
2134 | rtx prev = PREV_INSN (trial); | |
2135 | if (prev) | |
2136 | NEXT_INSN (prev) = next; | |
2137 | if (next) | |
2138 | PREV_INSN (next) = prev; | |
2139 | } | |
2140 | } | |
2141 | ||
2142 | /* Now, scan backwards from the insn to search for a potential | |
2143 | delay-slot candidate. Stop searching when a label or jump is hit. | |
2144 | ||
9c7e2978 RK |
2145 | For each candidate, if it is to go into the delay slot (moved |
2146 | forward in execution sequence), it must not need or set any resources | |
2147 | that were set by later insns and must not set any resources that | |
2148 | are needed for those insns. | |
f27e6bb2 | 2149 | |
9c7e2978 RK |
2150 | The delay slot insn itself sets resources unless it is a call |
2151 | (in which case the called routine, not the insn itself, is doing | |
2152 | the setting). */ | |
2153 | ||
464b453d | 2154 | if (slots_filled < slots_to_fill) |
9c7e2978 | 2155 | { |
464b453d TW |
2156 | CLEAR_RESOURCE (&needed); |
2157 | CLEAR_RESOURCE (&set); | |
73780b74 | 2158 | mark_set_resources (insn, &set, 0, MARK_SRC_DEST); |
464b453d | 2159 | mark_referenced_resources (insn, &needed, 0); |
9c7e2978 | 2160 | |
464b453d TW |
2161 | for (trial = prev_nonnote_insn (insn); ! stop_search_p (trial, 1); |
2162 | trial = next_trial) | |
2163 | { | |
2164 | next_trial = prev_nonnote_insn (trial); | |
9c7e2978 | 2165 | |
464b453d TW |
2166 | /* This must be an INSN or CALL_INSN. */ |
2167 | pat = PATTERN (trial); | |
2168 | ||
2169 | /* USE and CLOBBER at this level was just for flow; ignore it. */ | |
2170 | if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER) | |
2171 | continue; | |
9c7e2978 | 2172 | |
f27e6bb2 | 2173 | /* Check for resource conflict first, to avoid unnecessary |
464b453d TW |
2174 | splitting. */ |
2175 | if (! insn_references_resource_p (trial, &set, 1) | |
2176 | && ! insn_sets_resource_p (trial, &set, 1) | |
2177 | && ! insn_sets_resource_p (trial, &needed, 1) | |
9c7e2978 | 2178 | #ifdef HAVE_cc0 |
464b453d | 2179 | /* Can't separate set of cc0 from its use. */ |
c5c76735 | 2180 | && ! (reg_mentioned_p (cc0_rtx, pat) && ! sets_cc0_p (pat)) |
9c7e2978 | 2181 | #endif |
1db828df | 2182 | && ! can_throw_internal (trial)) |
9c7e2978 | 2183 | { |
464b453d TW |
2184 | trial = try_split (pat, trial, 1); |
2185 | next_trial = prev_nonnote_insn (trial); | |
35523fce | 2186 | if (eligible_for_delay (insn, slots_filled, trial, flags)) |
464b453d TW |
2187 | { |
2188 | /* In this case, we are searching backward, so if we | |
2189 | find insns to put on the delay list, we want | |
2190 | to put them at the head, rather than the | |
2191 | tail, of the list. */ | |
2192 | ||
28c9500b | 2193 | update_reg_dead_notes (trial, insn); |
38a448ca RH |
2194 | delay_list = gen_rtx_INSN_LIST (VOIDmode, |
2195 | trial, delay_list); | |
464b453d | 2196 | update_block (trial, trial); |
53c17031 | 2197 | delete_related_insns (trial); |
464b453d TW |
2198 | if (slots_to_fill == ++slots_filled) |
2199 | break; | |
2200 | continue; | |
2201 | } | |
9c7e2978 | 2202 | } |
9c7e2978 | 2203 | |
73780b74 | 2204 | mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL); |
464b453d TW |
2205 | mark_referenced_resources (trial, &needed, 1); |
2206 | } | |
9c7e2978 RK |
2207 | } |
2208 | ||
9c7e2978 RK |
2209 | /* If all needed slots haven't been filled, we come here. */ |
2210 | ||
2211 | /* Try to optimize case of jumping around a single insn. */ | |
2212 | #if defined(ANNUL_IFFALSE_SLOTS) || defined(ANNUL_IFTRUE_SLOTS) | |
2f9ba5a9 JL |
2213 | if (slots_filled != slots_to_fill |
2214 | && delay_list == 0 | |
4b4bf941 | 2215 | && JUMP_P (insn) |
3480bb98 | 2216 | && (condjump_p (insn) || condjump_in_parallel_p (insn))) |
9c7e2978 RK |
2217 | { |
2218 | delay_list = optimize_skip (insn); | |
2219 | if (delay_list) | |
2220 | slots_filled += 1; | |
2221 | } | |
2222 | #endif | |
2223 | ||
9c7e2978 RK |
2224 | /* Try to get insns from beyond the insn needing the delay slot. |
2225 | These insns can neither set nor reference resources set in insns being | |
2226 | skipped, cannot set resources in the insn being skipped, and, if this | |
2227 | is a CALL_INSN (or a CALL_INSN is passed), cannot trap (because the | |
2228 | call might not return). | |
2229 | ||
f03e51d4 RK |
2230 | There used to be code which continued past the target label if |
2231 | we saw all uses of the target label. This code did not work, | |
2232 | because it failed to account for some instructions which were | |
2233 | both annulled and marked as from the target. This can happen as a | |
2234 | result of optimize_skip. Since this code was redundant with | |
2235 | fill_eager_delay_slots anyways, it was just deleted. */ | |
9c7e2978 | 2236 | |
2f9ba5a9 | 2237 | if (slots_filled != slots_to_fill |
be672de8 MM |
2238 | /* If this instruction could throw an exception which is |
2239 | caught in the same function, then it's not safe to fill | |
2240 | the delay slot with an instruction from beyond this | |
2241 | point. For example, consider: | |
2242 | ||
2243 | int i = 2; | |
2244 | ||
ff81832f | 2245 | try { |
be672de8 MM |
2246 | f(); |
2247 | i = 3; | |
2248 | } catch (...) {} | |
ff81832f | 2249 | |
be672de8 MM |
2250 | return i; |
2251 | ||
2252 | Even though `i' is a local variable, we must be sure not | |
2253 | to put `i = 3' in the delay slot if `f' might throw an | |
2254 | exception. | |
2255 | ||
2256 | Presumably, we should also check to see if we could get | |
2257 | back to this function via `setjmp'. */ | |
1db828df | 2258 | && ! can_throw_internal (insn) |
4b4bf941 | 2259 | && (!JUMP_P (insn) |
3480bb98 | 2260 | || ((condjump_p (insn) || condjump_in_parallel_p (insn)) |
f27e6bb2 KH |
2261 | && ! simplejump_p (insn) |
2262 | && JUMP_LABEL (insn) != 0))) | |
9c7e2978 | 2263 | { |
0443f602 JO |
2264 | /* Invariant: TARGET is the insn's jump label if INSN is a |
2265 | JUMP_INSN; otherwise, zero. */ | |
9c7e2978 RK |
2266 | rtx target = 0; |
2267 | int maybe_never = 0; | |
0443f602 | 2268 | rtx pat, trial_delay; |
9c7e2978 RK |
2269 | |
2270 | CLEAR_RESOURCE (&needed); | |
2271 | CLEAR_RESOURCE (&set); | |
2272 | ||
4b4bf941 | 2273 | if (CALL_P (insn)) |
9c7e2978 | 2274 | { |
73780b74 | 2275 | mark_set_resources (insn, &set, 0, MARK_SRC_DEST_CALL); |
9c7e2978 RK |
2276 | mark_referenced_resources (insn, &needed, 1); |
2277 | maybe_never = 1; | |
2278 | } | |
f27e6bb2 | 2279 | else |
9c7e2978 | 2280 | { |
73780b74 | 2281 | mark_set_resources (insn, &set, 0, MARK_SRC_DEST_CALL); |
674345b1 | 2282 | mark_referenced_resources (insn, &needed, 1); |
4b4bf941 | 2283 | if (JUMP_P (insn)) |
f03e51d4 | 2284 | target = JUMP_LABEL (insn); |
9c7e2978 RK |
2285 | } |
2286 | ||
0443f602 JO |
2287 | if (target == 0) |
2288 | for (trial = next_nonnote_insn (insn); trial; trial = next_trial) | |
2289 | { | |
2290 | next_trial = next_nonnote_insn (trial); | |
9c7e2978 | 2291 | |
4b4bf941 JQ |
2292 | if (LABEL_P (trial) |
2293 | || BARRIER_P (trial)) | |
0443f602 | 2294 | break; |
9c7e2978 | 2295 | |
0443f602 JO |
2296 | /* We must have an INSN, JUMP_INSN, or CALL_INSN. */ |
2297 | pat = PATTERN (trial); | |
9c7e2978 | 2298 | |
0443f602 JO |
2299 | /* Stand-alone USE and CLOBBER are just for flow. */ |
2300 | if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER) | |
2301 | continue; | |
9c7e2978 | 2302 | |
0443f602 JO |
2303 | /* If this already has filled delay slots, get the insn needing |
2304 | the delay slots. */ | |
2305 | if (GET_CODE (pat) == SEQUENCE) | |
2306 | trial_delay = XVECEXP (pat, 0, 0); | |
2307 | else | |
2308 | trial_delay = trial; | |
2309 | ||
2310 | /* Stop our search when seeing an unconditional jump. */ | |
4b4bf941 | 2311 | if (JUMP_P (trial_delay)) |
0443f602 JO |
2312 | break; |
2313 | ||
2314 | /* See if we have a resource problem before we try to | |
6d2f8887 | 2315 | split. */ |
0443f602 JO |
2316 | if (GET_CODE (pat) != SEQUENCE |
2317 | && ! insn_references_resource_p (trial, &set, 1) | |
2318 | && ! insn_sets_resource_p (trial, &set, 1) | |
2319 | && ! insn_sets_resource_p (trial, &needed, 1) | |
9c7e2978 | 2320 | #ifdef HAVE_cc0 |
0443f602 | 2321 | && ! (reg_mentioned_p (cc0_rtx, pat) && ! sets_cc0_p (pat)) |
9c7e2978 | 2322 | #endif |
0443f602 JO |
2323 | && ! (maybe_never && may_trap_p (pat)) |
2324 | && (trial = try_split (pat, trial, 0)) | |
1db828df DM |
2325 | && eligible_for_delay (insn, slots_filled, trial, flags) |
2326 | && ! can_throw_internal(trial)) | |
0443f602 JO |
2327 | { |
2328 | next_trial = next_nonnote_insn (trial); | |
2329 | delay_list = add_to_delay_list (trial, delay_list); | |
9c7e2978 RK |
2330 | |
2331 | #ifdef HAVE_cc0 | |
0443f602 JO |
2332 | if (reg_mentioned_p (cc0_rtx, pat)) |
2333 | link_cc0_insns (trial); | |
9c7e2978 RK |
2334 | #endif |
2335 | ||
53c17031 | 2336 | delete_related_insns (trial); |
0443f602 JO |
2337 | if (slots_to_fill == ++slots_filled) |
2338 | break; | |
2339 | continue; | |
2340 | } | |
9c7e2978 | 2341 | |
0443f602 JO |
2342 | mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL); |
2343 | mark_referenced_resources (trial, &needed, 1); | |
9c7e2978 | 2344 | |
0443f602 JO |
2345 | /* Ensure we don't put insns between the setting of cc and the |
2346 | comparison by moving a setting of cc into an earlier delay | |
2347 | slot since these insns could clobber the condition code. */ | |
2348 | set.cc = 1; | |
9c7e2978 | 2349 | |
0443f602 | 2350 | /* If this is a call or jump, we might not get here. */ |
4b4bf941 JQ |
2351 | if (CALL_P (trial_delay) |
2352 | || JUMP_P (trial_delay)) | |
0443f602 JO |
2353 | maybe_never = 1; |
2354 | } | |
9c7e2978 RK |
2355 | |
2356 | /* If there are slots left to fill and our search was stopped by an | |
2357 | unconditional branch, try the insn at the branch target. We can | |
f27e6bb2 | 2358 | redirect the branch if it works. |
6f7775d5 JL |
2359 | |
2360 | Don't do this if the insn at the branch target is a branch. */ | |
9c7e2978 RK |
2361 | if (slots_to_fill != slots_filled |
2362 | && trial | |
4b4bf941 | 2363 | && JUMP_P (trial) |
9c7e2978 RK |
2364 | && simplejump_p (trial) |
2365 | && (target == 0 || JUMP_LABEL (trial) == target) | |
2366 | && (next_trial = next_active_insn (JUMP_LABEL (trial))) != 0 | |
4b4bf941 | 2367 | && ! (NONJUMP_INSN_P (next_trial) |
9c7e2978 | 2368 | && GET_CODE (PATTERN (next_trial)) == SEQUENCE) |
4b4bf941 | 2369 | && !JUMP_P (next_trial) |
9c7e2978 RK |
2370 | && ! insn_references_resource_p (next_trial, &set, 1) |
2371 | && ! insn_sets_resource_p (next_trial, &set, 1) | |
2372 | && ! insn_sets_resource_p (next_trial, &needed, 1) | |
2373 | #ifdef HAVE_cc0 | |
d6749dec | 2374 | && ! reg_mentioned_p (cc0_rtx, PATTERN (next_trial)) |
9c7e2978 RK |
2375 | #endif |
2376 | && ! (maybe_never && may_trap_p (PATTERN (next_trial))) | |
2377 | && (next_trial = try_split (PATTERN (next_trial), next_trial, 0)) | |
1db828df DM |
2378 | && eligible_for_delay (insn, slots_filled, next_trial, flags) |
2379 | && ! can_throw_internal (trial)) | |
9c7e2978 | 2380 | { |
afce2847 HPN |
2381 | /* See comment in relax_delay_slots about necessity of using |
2382 | next_real_insn here. */ | |
2383 | rtx new_label = next_real_insn (next_trial); | |
9c7e2978 RK |
2384 | |
2385 | if (new_label != 0) | |
2386 | new_label = get_label_before (new_label); | |
99f14de7 JW |
2387 | else |
2388 | new_label = find_end_label (); | |
9c7e2978 | 2389 | |
b9185691 EB |
2390 | if (new_label) |
2391 | { | |
2392 | delay_list | |
2393 | = add_to_delay_list (copy_rtx (next_trial), delay_list); | |
2394 | slots_filled++; | |
2395 | reorg_redirect_jump (trial, new_label); | |
2396 | ||
2397 | /* If we merged because we both jumped to the same place, | |
2398 | redirect the original insn also. */ | |
2399 | if (target) | |
2400 | reorg_redirect_jump (insn, new_label); | |
2401 | } | |
9c7e2978 RK |
2402 | } |
2403 | } | |
2404 | ||
126db1fa JL |
2405 | /* If this is an unconditional jump, then try to get insns from the |
2406 | target of the jump. */ | |
4b4bf941 | 2407 | if (JUMP_P (insn) |
126db1fa JL |
2408 | && simplejump_p (insn) |
2409 | && slots_filled != slots_to_fill) | |
2410 | delay_list | |
2411 | = fill_slots_from_thread (insn, const_true_rtx, | |
2412 | next_active_insn (JUMP_LABEL (insn)), | |
2413 | NULL, 1, 1, | |
2414 | own_thread_p (JUMP_LABEL (insn), | |
2415 | JUMP_LABEL (insn), 0), | |
ab63953e JL |
2416 | slots_to_fill, &slots_filled, |
2417 | delay_list); | |
126db1fa | 2418 | |
9c7e2978 RK |
2419 | if (delay_list) |
2420 | unfilled_slots_base[i] | |
91a51951 | 2421 | = emit_delay_sequence (insn, delay_list, slots_filled); |
9c7e2978 RK |
2422 | |
2423 | if (slots_to_fill == slots_filled) | |
2424 | unfilled_slots_base[i] = 0; | |
2425 | ||
2426 | note_delay_statistics (slots_filled, 0); | |
2427 | } | |
2428 | ||
2429 | #ifdef DELAY_SLOTS_FOR_EPILOGUE | |
2430 | /* See if the epilogue needs any delay slots. Try to fill them if so. | |
f27e6bb2 | 2431 | The only thing we can do is scan backwards from the end of the |
9c7e2978 RK |
2432 | function. If we did this in a previous pass, it is incorrect to do it |
2433 | again. */ | |
2434 | if (current_function_epilogue_delay_list) | |
2435 | return; | |
2436 | ||
2437 | slots_to_fill = DELAY_SLOTS_FOR_EPILOGUE; | |
2438 | if (slots_to_fill == 0) | |
2439 | return; | |
2440 | ||
2441 | slots_filled = 0; | |
9c7e2978 RK |
2442 | CLEAR_RESOURCE (&set); |
2443 | ||
8c2977e2 JW |
2444 | /* The frame pointer and stack pointer are needed at the beginning of |
2445 | the epilogue, so instructions setting them cannot be put in the | |
2446 | epilogue delay slot. However, everything else needed at function | |
2447 | end is safe, so we don't want to use end_of_function_needs here. */ | |
2448 | CLEAR_RESOURCE (&needed); | |
2449 | if (frame_pointer_needed) | |
2450 | { | |
2451 | SET_HARD_REG_BIT (needed.regs, FRAME_POINTER_REGNUM); | |
2452 | #if HARD_FRAME_POINTER_REGNUM != FRAME_POINTER_REGNUM | |
2453 | SET_HARD_REG_BIT (needed.regs, HARD_FRAME_POINTER_REGNUM); | |
2454 | #endif | |
fdb8a883 JW |
2455 | if (! EXIT_IGNORE_STACK |
2456 | || current_function_sp_is_unchanging) | |
8c2977e2 JW |
2457 | SET_HARD_REG_BIT (needed.regs, STACK_POINTER_REGNUM); |
2458 | } | |
2459 | else | |
2460 | SET_HARD_REG_BIT (needed.regs, STACK_POINTER_REGNUM); | |
2461 | ||
632c9d9e | 2462 | #ifdef EPILOGUE_USES |
f27e6bb2 | 2463 | for (i = 0; i < FIRST_PSEUDO_REGISTER; i++) |
632c9d9e MS |
2464 | { |
2465 | if (EPILOGUE_USES (i)) | |
2466 | SET_HARD_REG_BIT (needed.regs, i); | |
2467 | } | |
2468 | #endif | |
2469 | ||
9c7e2978 RK |
2470 | for (trial = get_last_insn (); ! stop_search_p (trial, 1); |
2471 | trial = PREV_INSN (trial)) | |
2472 | { | |
4b4bf941 | 2473 | if (NOTE_P (trial)) |
9c7e2978 RK |
2474 | continue; |
2475 | pat = PATTERN (trial); | |
2476 | if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER) | |
2477 | continue; | |
2478 | ||
2479 | if (! insn_references_resource_p (trial, &set, 1) | |
2480 | && ! insn_sets_resource_p (trial, &needed, 1) | |
8c2977e2 | 2481 | && ! insn_sets_resource_p (trial, &set, 1) |
9c7e2978 RK |
2482 | #ifdef HAVE_cc0 |
2483 | /* Don't want to mess with cc0 here. */ | |
2484 | && ! reg_mentioned_p (cc0_rtx, pat) | |
2485 | #endif | |
1db828df | 2486 | && ! can_throw_internal (trial)) |
9c7e2978 RK |
2487 | { |
2488 | trial = try_split (pat, trial, 1); | |
2489 | if (ELIGIBLE_FOR_EPILOGUE_DELAY (trial, slots_filled)) | |
2490 | { | |
17448690 RK |
2491 | /* Here as well we are searching backward, so put the |
2492 | insns we find on the head of the list. */ | |
2493 | ||
9c7e2978 | 2494 | current_function_epilogue_delay_list |
38a448ca RH |
2495 | = gen_rtx_INSN_LIST (VOIDmode, trial, |
2496 | current_function_epilogue_delay_list); | |
ca545bb5 | 2497 | mark_end_of_function_resources (trial, 1); |
9c7e2978 | 2498 | update_block (trial, trial); |
53c17031 | 2499 | delete_related_insns (trial); |
9c7e2978 RK |
2500 | |
2501 | /* Clear deleted bit so final.c will output the insn. */ | |
2502 | INSN_DELETED_P (trial) = 0; | |
2503 | ||
2504 | if (slots_to_fill == ++slots_filled) | |
2505 | break; | |
2506 | continue; | |
2507 | } | |
2508 | } | |
2509 | ||
73780b74 | 2510 | mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL); |
9c7e2978 RK |
2511 | mark_referenced_resources (trial, &needed, 1); |
2512 | } | |
2513 | ||
2514 | note_delay_statistics (slots_filled, 0); | |
2515 | #endif | |
2516 | } | |
2517 | \f | |
2518 | /* Try to find insns to place in delay slots. | |
2519 | ||
2520 | INSN is the jump needing SLOTS_TO_FILL delay slots. It tests CONDITION | |
2521 | or is an unconditional branch if CONDITION is const_true_rtx. | |
2522 | *PSLOTS_FILLED is updated with the number of slots that we have filled. | |
2523 | ||
2524 | THREAD is a flow of control: either the insns to be executed if the | |
2525 | branch is true or those executed if it is false; THREAD_IF_TRUE says which. | |
2526 | ||
2527 | OPPOSITE_THREAD is the thread in the opposite direction. It is used | |
2528 | to see if any potential delay slot insns set things needed there. | |
2529 | ||
40f03658 | 2530 | LIKELY is nonzero if it is extremely likely that the branch will be |
9c7e2978 RK |
2531 | taken and THREAD_IF_TRUE is set. This is used for the branch at the |
2532 | end of a loop back up to the top. | |
2533 | ||
2534 | OWN_THREAD is true if we are the only user of the | |
2535 | thread. I.e., it is the fallthrough code of our jump or the target of the | |
2536 | jump when we are the only jump going there. | |
2537 | ||
2538 | If OWN_THREAD is false, it must be the "true" thread of a jump. In that | |
2539 | case, we can only take insns from the head of the thread for our delay | |
2540 | slot. We then adjust the jump to point after the insns we have taken. */ | |
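/* An illustrative sketch (hypothetical, not taken from the original
   sources): for a conditional branch with one delay slot,

	INSN: (jump_insn ...) -> L1
	      insn A			;; start of the fall-through THREAD
	      insn B
	...
    L1: ...				;; OPPOSITE_THREAD

   insn A can be taken for the slot outright if it conflicts with nothing
   before it and sets nothing needed at L1; failing that, it may still be
   used by annulling the branch so that A executes only when the branch is
   not taken.  */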
2541 | ||
2542 | static rtx | |
0c20a65f AJ |
2543 | fill_slots_from_thread (rtx insn, rtx condition, rtx thread, |
2544 | rtx opposite_thread, int likely, int thread_if_true, | |
2545 | int own_thread, int slots_to_fill, | |
2546 | int *pslots_filled, rtx delay_list) | |
9c7e2978 | 2547 | { |
d674b9e3 | 2548 | rtx new_thread; |
9c7e2978 RK |
2549 | struct resources opposite_needed, set, needed; |
2550 | rtx trial; | |
2551 | int lose = 0; | |
2552 | int must_annul = 0; | |
35523fce | 2553 | int flags; |
9c7e2978 RK |
2554 | |
2555 | /* Validate our arguments. */ | |
41374e13 NS |
2556 | gcc_assert(condition != const_true_rtx || thread_if_true); |
2557 | gcc_assert(own_thread || thread_if_true); | |
9c7e2978 | 2558 | |
35523fce JL |
2559 | flags = get_jump_flags (insn, JUMP_LABEL (insn)); |
2560 | ||
9c7e2978 RK |
2561 | /* If our thread is the end of subroutine, we can't get any delay |
2562 | insns from that. */ | |
2563 | if (thread == 0) | |
f27e6bb2 | 2564 | return delay_list; |
9c7e2978 RK |
2565 | |
2566 | /* If this is an unconditional branch, nothing is needed at the | |
2567 | opposite thread. Otherwise, compute what is needed there. */ | |
2568 | if (condition == const_true_rtx) | |
2569 | CLEAR_RESOURCE (&opposite_needed); | |
2570 | else | |
ca545bb5 | 2571 | mark_target_live_regs (get_insns (), opposite_thread, &opposite_needed); |
9c7e2978 | 2572 | |
d674b9e3 RK |
2573 | /* If the insn at THREAD can be split, do it here to avoid having to |
2574 | update THREAD and NEW_THREAD if it is done in the loop below. Also | |
2575 | initialize NEW_THREAD. */ | |
2576 | ||
d45cf215 | 2577 | new_thread = thread = try_split (PATTERN (thread), thread, 0); |
d674b9e3 | 2578 | |
9c7e2978 RK |
2579 | /* Scan insns at THREAD. We are looking for an insn that can be removed |
2580 | from THREAD (it neither sets nor references resources that were set | |
2581 | ahead of it and it doesn't set anything needed by the insns ahead of | |
2582 | it) and that either can be placed in an annulling insn or isn't | |
2583 | needed at OPPOSITE_THREAD. */ | |
2584 | ||
2585 | CLEAR_RESOURCE (&needed); | |
2586 | CLEAR_RESOURCE (&set); | |
2587 | ||
2588 | /* If we do not own this thread, we must stop as soon as we find | |
2589 | something that we can't put in a delay slot, since all we can do | |
2590 | is branch into THREAD at a later point. Therefore, labels stop | |
2591 | the search if this is not the `true' thread. */ | |
2592 | ||
2593 | for (trial = thread; | |
2594 | ! stop_search_p (trial, ! thread_if_true) && (! lose || own_thread); | |
2595 | trial = next_nonnote_insn (trial)) | |
2596 | { | |
70011923 | 2597 | rtx pat, old_trial; |
9c7e2978 RK |
2598 | |
2599 | /* If we have passed a label, we no longer own this thread. */ | |
4b4bf941 | 2600 | if (LABEL_P (trial)) |
9c7e2978 RK |
2601 | { |
2602 | own_thread = 0; | |
2603 | continue; | |
2604 | } | |
2605 | ||
2606 | pat = PATTERN (trial); | |
2607 | if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER) | |
2608 | continue; | |
2609 | ||
2610 | /* If TRIAL conflicts with the insns ahead of it, we lose. Also, | |
2611 | don't separate or copy insns that set and use CC0. */ | |
2612 | if (! insn_references_resource_p (trial, &set, 1) | |
2613 | && ! insn_sets_resource_p (trial, &set, 1) | |
2614 | && ! insn_sets_resource_p (trial, &needed, 1) | |
2615 | #ifdef HAVE_cc0 | |
2616 | && ! (reg_mentioned_p (cc0_rtx, pat) | |
2617 | && (! own_thread || ! sets_cc0_p (pat))) | |
2618 | #endif | |
1db828df | 2619 | && ! can_throw_internal (trial)) |
9c7e2978 | 2620 | { |
5317d2f8 RK |
2621 | rtx prior_insn; |
2622 | ||
9c7e2978 RK |
2623 | /* If TRIAL is redundant with some insn before INSN, we don't |
2624 | actually need to add it to the delay list; we can merely pretend | |
2625 | we did. */ | |
91a51951 | 2626 | if ((prior_insn = redundant_insn (trial, insn, delay_list))) |
9c7e2978 | 2627 | { |
c170c8c2 | 2628 | fix_reg_dead_note (prior_insn, insn); |
9c7e2978 RK |
2629 | if (own_thread) |
2630 | { | |
f1f9081a | 2631 | update_block (trial, thread); |
9bfe7965 | 2632 | if (trial == thread) |
8b11645c RK |
2633 | { |
2634 | thread = next_active_insn (thread); | |
2635 | if (new_thread == trial) | |
2636 | new_thread = thread; | |
2637 | } | |
9bfe7965 | 2638 | |
53c17031 | 2639 | delete_related_insns (trial); |
9c7e2978 RK |
2640 | } |
2641 | else | |
5317d2f8 RK |
2642 | { |
2643 | update_reg_unused_notes (prior_insn, trial); | |
088131ee | 2644 | new_thread = next_active_insn (trial); |
5317d2f8 | 2645 | } |
9c7e2978 RK |
2646 | |
2647 | continue; | |
2648 | } | |
2649 | ||
2650 | /* There are two ways we can win: If TRIAL doesn't set anything | |
2651 | needed at the opposite thread and can't trap, or if it can | |
2652 | go into an annulled delay slot. */ | |
96960d10 HB |
2653 | if (!must_annul |
2654 | && (condition == const_true_rtx | |
2655 | || (! insn_sets_resource_p (trial, &opposite_needed, 1) | |
2656 | && ! may_trap_p (pat)))) | |
9c7e2978 | 2657 | { |
70011923 | 2658 | old_trial = trial; |
9c7e2978 | 2659 | trial = try_split (pat, trial, 0); |
70011923 ILT |
2660 | if (new_thread == old_trial) |
2661 | new_thread = trial; | |
9b9cd81b JW |
2662 | if (thread == old_trial) |
2663 | thread = trial; | |
9c7e2978 | 2664 | pat = PATTERN (trial); |
35523fce | 2665 | if (eligible_for_delay (insn, *pslots_filled, trial, flags)) |
9c7e2978 RK |
2666 | goto winner; |
2667 | } | |
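/* Otherwise the insn can only be used by annulling the delay slot: an insn
   stolen from the "true" (taken) thread must be eligible for an
   IFFALSE-annulled slot, while one stolen from the fall-through thread must
   be eligible for an IFTRUE-annulled slot, as checked below.  */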
2668 | else if (0 | |
2669 | #ifdef ANNUL_IFTRUE_SLOTS | |
2670 | || ! thread_if_true | |
2671 | #endif | |
2672 | #ifdef ANNUL_IFFALSE_SLOTS | |
2673 | || thread_if_true | |
2674 | #endif | |
2675 | ) | |
2676 | { | |
70011923 | 2677 | old_trial = trial; |
9c7e2978 | 2678 | trial = try_split (pat, trial, 0); |
70011923 ILT |
2679 | if (new_thread == old_trial) |
2680 | new_thread = trial; | |
760607e8 RK |
2681 | if (thread == old_trial) |
2682 | thread = trial; | |
9c7e2978 | 2683 | pat = PATTERN (trial); |
96960d10 HB |
2684 | if ((must_annul || delay_list == NULL) && (thread_if_true |
2685 | ? check_annul_list_true_false (0, delay_list) | |
2686 | && eligible_for_annul_false (insn, *pslots_filled, trial, flags) | |
2687 | : check_annul_list_true_false (1, delay_list) | |
2688 | && eligible_for_annul_true (insn, *pslots_filled, trial, flags))) | |
9c7e2978 RK |
2689 | { |
2690 | rtx temp; | |
2691 | ||
2692 | must_annul = 1; | |
2693 | winner: | |
2694 | ||
2695 | #ifdef HAVE_cc0 | |
2696 | if (reg_mentioned_p (cc0_rtx, pat)) | |
2697 | link_cc0_insns (trial); | |
2698 | #endif | |
2699 | ||
2700 | /* If we own this thread, delete the insn. If this is the | |
2701 | destination of a branch, show that the basic block's status | |
2702 | may have been updated. In any case, mark the new | |
2703 | starting point of this thread. */ | |
2704 | if (own_thread) | |
2705 | { | |
e41c7831 RH |
2706 | rtx note; |
2707 | ||
f1f9081a | 2708 | update_block (trial, thread); |
a7ad699e RK |
2709 | if (trial == thread) |
2710 | { | |
2711 | thread = next_active_insn (thread); | |
2712 | if (new_thread == trial) | |
2713 | new_thread = thread; | |
2714 | } | |
e41c7831 RH |
2715 | |
2716 | /* We are moving this insn, not deleting it. We must | |
2717 | temporarily increment the use count on any referenced | |
2718 | label lest it be deleted by delete_related_insns. */ | |
2719 | note = find_reg_note (trial, REG_LABEL, 0); | |
0864c526 | 2720 | /* REG_LABEL could be NOTE_INSN_DELETED_LABEL too. */ |
4b4bf941 | 2721 | if (note && LABEL_P (XEXP (note, 0))) |
e41c7831 RH |
2722 | LABEL_NUSES (XEXP (note, 0))++; |
2723 | ||
53c17031 | 2724 | delete_related_insns (trial); |
e41c7831 | 2725 | |
4b4bf941 | 2726 | if (note && LABEL_P (XEXP (note, 0))) |
e41c7831 | 2727 | LABEL_NUSES (XEXP (note, 0))--; |
9c7e2978 RK |
2728 | } |
2729 | else | |
2730 | new_thread = next_active_insn (trial); | |
2731 | ||
2732 | temp = own_thread ? trial : copy_rtx (trial); | |
2733 | if (thread_if_true) | |
2734 | INSN_FROM_TARGET_P (temp) = 1; | |
2735 | ||
2736 | delay_list = add_to_delay_list (temp, delay_list); | |
2737 | ||
2738 | if (slots_to_fill == ++(*pslots_filled)) | |
2739 | { | |
2740 | /* Even though we have filled all the slots, we | |
2741 | may be branching to a location that has a | |
2742 | redundant insn. Skip any if so. */ | |
2743 | while (new_thread && ! own_thread | |
2744 | && ! insn_sets_resource_p (new_thread, &set, 1) | |
2745 | && ! insn_sets_resource_p (new_thread, &needed, 1) | |
2746 | && ! insn_references_resource_p (new_thread, | |
2747 | &set, 1) | |
f78c792c JL |
2748 | && (prior_insn |
2749 | = redundant_insn (new_thread, insn, | |
2750 | delay_list))) | |
2751 | { | |
2752 | /* We know we do not own the thread, so no need | |
2753 | to call update_block and delete_insn. */ | |
2754 | fix_reg_dead_note (prior_insn, insn); | |
2755 | update_reg_unused_notes (prior_insn, new_thread); | |
2756 | new_thread = next_active_insn (new_thread); | |
2757 | } | |
9c7e2978 RK |
2758 | break; |
2759 | } | |
2760 | ||
2761 | continue; | |
2762 | } | |
2763 | } | |
2764 | } | |
2765 | ||
2766 | /* This insn can't go into a delay slot. */ | |
2767 | lose = 1; | |
73780b74 | 2768 | mark_set_resources (trial, &set, 0, MARK_SRC_DEST_CALL); |
9c7e2978 RK |
2769 | mark_referenced_resources (trial, &needed, 1); |
2770 | ||
2771 | /* Ensure we don't put insns between the setting of cc and the comparison | |
2772 | by moving a setting of cc into an earlier delay slot, since such insns | |
2773 | could clobber the condition code. */ | |
2774 | set.cc = 1; | |
2775 | ||
2776 | /* If this insn is a register-register copy and the next insn has | |
2777 | a use of our destination, change it to use our source. That way, | |
2778 | it will become a candidate for our delay slot the next time | |
2779 | through this loop. This case occurs commonly in loops that | |
2780 | scan a list. | |
2781 | ||
2782 | We could check for more complex cases than those tested below, | |
2783 | but it doesn't seem worth it. It might also be a good idea to try | |
9b72fab4 RK |
2784 | to swap the two insns. That might do better. |
2785 | ||
963d6142 RK |
2786 | We can't do this if the next insn modifies our destination, because |
2787 | that would make the replacement into the insn invalid. We also can't | |
2788 | do this if it modifies our source, because it might be an earlyclobber | |
2789 | operand. This latter test also prevents updating the contents of | |
5a82c0b1 HPN |
2790 | a PRE_INC. We also can't do this if there's overlap of source and |
2791 | destination. Overlap may happen for larger-than-register-size modes. */ | |
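/* For instance (an illustrative sketch, not taken from any port): given
   "r5 = r3" followed by "r6 = *r5", rewriting the second insn to use r3
   may let the copy itself become a delay-slot candidate on a later
   iteration of this loop.  */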
9c7e2978 | 2792 | |
4b4bf941 | 2793 | if (NONJUMP_INSN_P (trial) && GET_CODE (pat) == SET |
f8cfc6aa JQ |
2794 | && REG_P (SET_SRC (pat)) |
2795 | && REG_P (SET_DEST (pat)) | |
5a82c0b1 | 2796 | && !reg_overlap_mentioned_p (SET_DEST (pat), SET_SRC (pat))) |
9c7e2978 RK |
2797 | { |
2798 | rtx next = next_nonnote_insn (trial); | |
9c7e2978 | 2799 | |
4b4bf941 | 2800 | if (next && NONJUMP_INSN_P (next) |
9b72fab4 RK |
2801 | && GET_CODE (PATTERN (next)) != USE |
2802 | && ! reg_set_p (SET_DEST (pat), next) | |
963d6142 | 2803 | && ! reg_set_p (SET_SRC (pat), next) |
b746d31a TM |
2804 | && reg_referenced_p (SET_DEST (pat), PATTERN (next)) |
2805 | && ! modified_in_p (SET_DEST (pat), next)) | |
9c7e2978 RK |
2806 | validate_replace_rtx (SET_DEST (pat), SET_SRC (pat), next); |
2807 | } | |
2808 | } | |
2809 | ||
2810 | /* If we stopped on a branch insn that has delay slots, see if we can | |
2811 | steal some of the insns in those slots. */ | |
4b4bf941 | 2812 | if (trial && NONJUMP_INSN_P (trial) |
9c7e2978 | 2813 | && GET_CODE (PATTERN (trial)) == SEQUENCE |
4b4bf941 | 2814 | && JUMP_P (XVECEXP (PATTERN (trial), 0, 0))) |
9c7e2978 RK |
2815 | { |
2816 | /* If this is the `true' thread, we will want to follow the jump, | |
2817 | so we can only do this if we have taken everything up to here. */ | |
18e765cb | 2818 | if (thread_if_true && trial == new_thread) |
86c7007a HPN |
2819 | { |
2820 | delay_list | |
2821 | = steal_delay_list_from_target (insn, condition, PATTERN (trial), | |
2822 | delay_list, &set, &needed, | |
2823 | &opposite_needed, slots_to_fill, | |
2824 | pslots_filled, &must_annul, | |
2825 | &new_thread); | |
2826 | /* If we owned the thread and are told that it branched | |
2827 | elsewhere, make sure we own the thread at the new location. */ | |
2828 | if (own_thread && trial != new_thread) | |
2829 | own_thread = own_thread_p (new_thread, new_thread, 0); | |
2830 | } | |
9c7e2978 RK |
2831 | else if (! thread_if_true) |
2832 | delay_list | |
2833 | = steal_delay_list_from_fallthrough (insn, condition, | |
2834 | PATTERN (trial), | |
2835 | delay_list, &set, &needed, | |
2836 | &opposite_needed, slots_to_fill, | |
2837 | pslots_filled, &must_annul); | |
2838 | } | |
2839 | ||
2840 | /* If we haven't found anything for this delay slot and it is very | |
2841 | likely that the branch will be taken, see if the insn at our target | |
d674b9e3 RK |
2842 | increments or decrements a register with an increment that does not |
2843 | depend on the destination register. If so, try to place the opposite | |
9c7e2978 RK |
2844 | arithmetic insn after the jump insn and put the arithmetic insn in the |
2845 | delay slot. If we can't do this, return. */ | |
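/* Illustrative sketch: if the branch is likely taken and the insn at its
   target is "r4 = r4 + 8", put that insn in the (non-annulled) delay slot,
   where it executes on both paths, and emit a compensating "r4 = r4 - 8"
   right after the branch so the fall-through path is left unchanged.  */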
a0a7cb35 | 2846 | if (delay_list == 0 && likely && new_thread |
4b4bf941 | 2847 | && NONJUMP_INSN_P (new_thread) |
a0a7cb35 JL |
2848 | && GET_CODE (PATTERN (new_thread)) != ASM_INPUT |
2849 | && asm_noperands (PATTERN (new_thread)) < 0) | |
9c7e2978 RK |
2850 | { |
2851 | rtx pat = PATTERN (new_thread); | |
2852 | rtx dest; | |
2853 | rtx src; | |
2854 | ||
d674b9e3 | 2855 | trial = new_thread; |
9c7e2978 RK |
2856 | pat = PATTERN (trial); |
2857 | ||
4b4bf941 | 2858 | if (!NONJUMP_INSN_P (trial) |
1db828df DM |
2859 | || GET_CODE (pat) != SET |
2860 | || ! eligible_for_delay (insn, 0, trial, flags) | |
2861 | || can_throw_internal (trial)) | |
9c7e2978 RK |
2862 | return 0; |
2863 | ||
2864 | dest = SET_DEST (pat), src = SET_SRC (pat); | |
2865 | if ((GET_CODE (src) == PLUS || GET_CODE (src) == MINUS) | |
d674b9e3 | 2866 | && rtx_equal_p (XEXP (src, 0), dest) |
12d7e99c HPN |
2867 | && ! reg_overlap_mentioned_p (dest, XEXP (src, 1)) |
2868 | && ! side_effects_p (pat)) | |
9c7e2978 RK |
2869 | { |
2870 | rtx other = XEXP (src, 1); | |
2871 | rtx new_arith; | |
2872 | rtx ninsn; | |
2873 | ||
2874 | /* If this is a constant adjustment, use the same code with | |
2875 | the negated constant. Otherwise, reverse the sense of the | |
2876 | arithmetic. */ | |
2877 | if (GET_CODE (other) == CONST_INT) | |
38a448ca RH |
2878 | new_arith = gen_rtx_fmt_ee (GET_CODE (src), GET_MODE (src), dest, |
2879 | negate_rtx (GET_MODE (src), other)); | |
9c7e2978 | 2880 | else |
38a448ca RH |
2881 | new_arith = gen_rtx_fmt_ee (GET_CODE (src) == PLUS ? MINUS : PLUS, |
2882 | GET_MODE (src), dest, other); | |
9c7e2978 | 2883 | |
38a448ca | 2884 | ninsn = emit_insn_after (gen_rtx_SET (VOIDmode, dest, new_arith), |
9c7e2978 RK |
2885 | insn); |
2886 | ||
2887 | if (recog_memoized (ninsn) < 0 | |
0eadeb15 | 2888 | || (extract_insn (ninsn), ! constrain_operands (1))) |
9c7e2978 | 2889 | { |
53c17031 | 2890 | delete_related_insns (ninsn); |
9c7e2978 RK |
2891 | return 0; |
2892 | } | |
2893 | ||
2894 | if (own_thread) | |
2895 | { | |
f1f9081a | 2896 | update_block (trial, thread); |
a7ad699e RK |
2897 | if (trial == thread) |
2898 | { | |
2899 | thread = next_active_insn (thread); | |
2900 | if (new_thread == trial) | |
2901 | new_thread = thread; | |
2902 | } | |
53c17031 | 2903 | delete_related_insns (trial); |
9c7e2978 RK |
2904 | } |
2905 | else | |
2906 | new_thread = next_active_insn (trial); | |
2907 | ||
2908 | ninsn = own_thread ? trial : copy_rtx (trial); | |
2909 | if (thread_if_true) | |
2910 | INSN_FROM_TARGET_P (ninsn) = 1; | |
2911 | ||
fb3821f7 | 2912 | delay_list = add_to_delay_list (ninsn, NULL_RTX); |
9c7e2978 RK |
2913 | (*pslots_filled)++; |
2914 | } | |
2915 | } | |
2916 | ||
2917 | if (delay_list && must_annul) | |
2918 | INSN_ANNULLED_BRANCH_P (insn) = 1; | |
2919 | ||
2920 | /* If we are to branch into the middle of this thread, find an appropriate | |
2921 | label or make a new one if none, and redirect INSN to it. If we hit the | |
2922 | end of the function, use the end-of-function label. */ | |
2923 | if (new_thread != thread) | |
2924 | { | |
2925 | rtx label; | |
2926 | ||
41374e13 | 2927 | gcc_assert (thread_if_true); |
9c7e2978 | 2928 | |
4b4bf941 | 2929 | if (new_thread && JUMP_P (new_thread) |
9c7e2978 | 2930 | && (simplejump_p (new_thread) |
b304ad47 JL |
2931 | || GET_CODE (PATTERN (new_thread)) == RETURN) |
2932 | && redirect_with_delay_list_safe_p (insn, | |
2933 | JUMP_LABEL (new_thread), | |
2934 | delay_list)) | |
22422dbf | 2935 | new_thread = follow_jumps (JUMP_LABEL (new_thread)); |
9c7e2978 RK |
2936 | |
2937 | if (new_thread == 0) | |
2938 | label = find_end_label (); | |
4b4bf941 | 2939 | else if (LABEL_P (new_thread)) |
9c7e2978 RK |
2940 | label = new_thread; |
2941 | else | |
2942 | label = get_label_before (new_thread); | |
2943 | ||
b9185691 EB |
2944 | if (label) |
2945 | reorg_redirect_jump (insn, label); | |
9c7e2978 RK |
2946 | } |
2947 | ||
2948 | return delay_list; | |
2949 | } | |
2950 | \f | |
2951 | /* Make another attempt to find insns to place in delay slots. | |
2952 | ||
2953 | We previously looked for insns located in front of the delay insn | |
2954 | and, for non-jump delay insns, located behind the delay insn. | |
2955 | ||
2956 | Here we only try to schedule jump insns and try to move insns from either | |
2957 | the target or the following insns into the delay slot. If annulling is | |
2958 | supported, we are likely to be able to do this. Otherwise, we can do this | |
2959 | only if it is safe. */ | |
2960 | ||
2961 | static void | |
0c20a65f | 2962 | fill_eager_delay_slots (void) |
9c7e2978 | 2963 | { |
b3694847 SS |
2964 | rtx insn; |
2965 | int i; | |
9c7e2978 RK |
2966 | int num_unfilled_slots = unfilled_slots_next - unfilled_slots_base; |
2967 | ||
2968 | for (i = 0; i < num_unfilled_slots; i++) | |
2969 | { | |
2970 | rtx condition; | |
2971 | rtx target_label, insn_at_target, fallthrough_insn; | |
2972 | rtx delay_list = 0; | |
2973 | int own_target; | |
2974 | int own_fallthrough; | |
2975 | int prediction, slots_to_fill, slots_filled; | |
2976 | ||
2977 | insn = unfilled_slots_base[i]; | |
2978 | if (insn == 0 | |
2979 | || INSN_DELETED_P (insn) | |
4b4bf941 | 2980 | || !JUMP_P (insn) |
3480bb98 | 2981 | || ! (condjump_p (insn) || condjump_in_parallel_p (insn))) |
9c7e2978 RK |
2982 | continue; |
2983 | ||
2984 | slots_to_fill = num_delay_slots (insn); | |
c3a3b536 JL |
2985 | /* Some machine descriptions have defined instructions to have | |
2986 | delay slots only in certain circumstances which may depend on | |
2987 | nearby insns (which change due to reorg's actions). | |
2988 | ||
0c20a65f | 2989 | For example, the PA port normally has delay slots for unconditional |
c3a3b536 JL |
2990 | jumps. |
2991 | ||
2992 | However, the PA port claims such jumps do not have a delay slot | |
2993 | if they are immediate successors of certain CALL_INSNs. This | |
2994 | allows the port to favor filling the delay slot of the call with | |
2995 | the unconditional jump. */ | |
9c7e2978 | 2996 | if (slots_to_fill == 0) |
c5c76735 | 2997 | continue; |
9c7e2978 RK |
2998 | |
2999 | slots_filled = 0; | |
3000 | target_label = JUMP_LABEL (insn); | |
3001 | condition = get_branch_condition (insn, target_label); | |
3002 | ||
3003 | if (condition == 0) | |
3004 | continue; | |
3005 | ||
abc95ed3 | 3006 | /* Get the next active fallthrough and target insns and see if we own |
9c7e2978 RK |
3007 | them. Then see whether the branch is likely true. We don't need |
3008 | to do a lot of this for unconditional branches. */ | |
3009 | ||
3010 | insn_at_target = next_active_insn (target_label); | |
3011 | own_target = own_thread_p (target_label, target_label, 0); | |
3012 | ||
3013 | if (condition == const_true_rtx) | |
3014 | { | |
3015 | own_fallthrough = 0; | |
3016 | fallthrough_insn = 0; | |
3017 | prediction = 2; | |
3018 | } | |
3019 | else | |
3020 | { | |
3021 | fallthrough_insn = next_active_insn (insn); | |
fb3821f7 | 3022 | own_fallthrough = own_thread_p (NEXT_INSN (insn), NULL_RTX, 1); |
9c7e2978 RK |
3023 | prediction = mostly_true_jump (insn, condition); |
3024 | } | |
3025 | ||
3026 | /* If this insn is expected to branch, first try to get insns from our | |
0443f602 | 3027 | target, then our fallthrough insns. If it is not expected to branch, |
9c7e2978 RK |
3028 | try the other order. */ |
3029 | ||
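/* PREDICTION is 2 for an unconditional branch and otherwise comes from
   mostly_true_jump above; a positive value means the branch is expected
   to be taken, so the target thread is tried first.  */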
0275a51b | 3030 | if (prediction > 0) |
9c7e2978 RK |
3031 | { |
3032 | delay_list | |
3033 | = fill_slots_from_thread (insn, condition, insn_at_target, | |
3034 | fallthrough_insn, prediction == 2, 1, | |
91a51951 | 3035 | own_target, |
ab63953e | 3036 | slots_to_fill, &slots_filled, delay_list); |
9c7e2978 RK |
3037 | |
3038 | if (delay_list == 0 && own_fallthrough) | |
3039 | { | |
3040 | /* Even though we didn't find anything for delay slots, | |
3041 | we might have found a redundant insn which we deleted | |
3042 | from the thread that was filled. So we have to recompute | |
3043 | the next insn at the target. */ | |
3044 | target_label = JUMP_LABEL (insn); | |
3045 | insn_at_target = next_active_insn (target_label); | |
3046 | ||
3047 | delay_list | |
3048 | = fill_slots_from_thread (insn, condition, fallthrough_insn, | |
3049 | insn_at_target, 0, 0, | |
91a51951 | 3050 | own_fallthrough, |
ab63953e JL |
3051 | slots_to_fill, &slots_filled, |
3052 | delay_list); | |
9c7e2978 RK |
3053 | } |
3054 | } | |
3055 | else | |
3056 | { | |
3057 | if (own_fallthrough) | |
3058 | delay_list | |
3059 | = fill_slots_from_thread (insn, condition, fallthrough_insn, | |
3060 | insn_at_target, 0, 0, | |
91a51951 | 3061 | own_fallthrough, |
d9f1e3da MH |
3062 | slots_to_fill, &slots_filled, |
3063 | delay_list); | |
9c7e2978 RK |
3064 | |
3065 | if (delay_list == 0) | |
3066 | delay_list | |
3067 | = fill_slots_from_thread (insn, condition, insn_at_target, | |
3068 | next_active_insn (insn), 0, 1, | |
91a51951 | 3069 | own_target, |
ab63953e JL |
3070 | slots_to_fill, &slots_filled, |
3071 | delay_list); | |
9c7e2978 RK |
3072 | } |
3073 | ||
3074 | if (delay_list) | |
3075 | unfilled_slots_base[i] | |
91a51951 | 3076 | = emit_delay_sequence (insn, delay_list, slots_filled); |
9c7e2978 RK |
3077 | |
3078 | if (slots_to_fill == slots_filled) | |
3079 | unfilled_slots_base[i] = 0; | |
3080 | ||
3081 | note_delay_statistics (slots_filled, 1); | |
3082 | } | |
3083 | } | |
3084 | \f | |
3085 | /* Once we have tried two ways to fill a delay slot, make a pass over the | |
3086 | code to try to improve the results and to do such things as more jump | |
3087 | threading. */ | |
3088 | ||
3089 | static void | |
0c20a65f | 3090 | relax_delay_slots (rtx first) |
9c7e2978 | 3091 | { |
b3694847 SS |
3092 | rtx insn, next, pat; |
3093 | rtx trial, delay_insn, target_label; | |
9c7e2978 RK |
3094 | |
3095 | /* Look at every JUMP_INSN and see if we can improve it. */ | |
3096 | for (insn = first; insn; insn = next) | |
3097 | { | |
3098 | rtx other; | |
3099 | ||
3100 | next = next_active_insn (insn); | |
3101 | ||
3102 | /* If this is a jump insn, see if it now jumps to a jump, jumps to | |
3103 | the next insn, or jumps to a label that is not the last of a | |
3104 | group of consecutive labels. */ | |
4b4bf941 | 3105 | if (JUMP_P (insn) |
3480bb98 | 3106 | && (condjump_p (insn) || condjump_in_parallel_p (insn)) |
9c7e2978 RK |
3107 | && (target_label = JUMP_LABEL (insn)) != 0) |
3108 | { | |
6c2511d3 | 3109 | target_label = skip_consecutive_labels (follow_jumps (target_label)); |
de5292c7 RK |
3110 | if (target_label == 0) |
3111 | target_label = find_end_label (); | |
3112 | ||
b9185691 | 3113 | if (target_label && next_active_insn (target_label) == next |
3480bb98 | 3114 | && ! condjump_in_parallel_p (insn)) |
9c7e2978 RK |
3115 | { |
3116 | delete_jump (insn); | |
3117 | continue; | |
3118 | } | |
3119 | ||
b9185691 | 3120 | if (target_label && target_label != JUMP_LABEL (insn)) |
326f06f7 | 3121 | reorg_redirect_jump (insn, target_label); |
9c7e2978 | 3122 | |
4fe9b91c | 3123 | /* See if this jump branches around an unconditional jump. |
9c7e2978 RK |
3124 | If so, invert this jump and point it to the target of the |
3125 | second jump. */ | |
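/* In effect, turn "if (c) goto L1; goto L2; L1: ..." into
   "if (!c) goto L2; L1: ..." when no label intervenes.  */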
4b4bf941 | 3126 | if (next && JUMP_P (next) |
9c7e2978 | 3127 | && (simplejump_p (next) || GET_CODE (PATTERN (next)) == RETURN) |
b9185691 | 3128 | && target_label |
9c7e2978 RK |
3129 | && next_active_insn (target_label) == next_active_insn (next) |
3130 | && no_labels_between_p (insn, next)) | |
3131 | { | |
3132 | rtx label = JUMP_LABEL (next); | |
3133 | ||
3134 | /* Be careful how we do this to avoid deleting code or | |
3135 | labels that are momentarily dead. See similar optimization | |
3136 | in jump.c. | |
3137 | ||
3138 | We also need to ensure we properly handle the case when | |
3139 | invert_jump fails. */ | |
3140 | ||
3141 | ++LABEL_NUSES (target_label); | |
3142 | if (label) | |
3143 | ++LABEL_NUSES (label); | |
3144 | ||
9ba11d5a | 3145 | if (invert_jump (insn, label, 1)) |
9c7e2978 | 3146 | { |
53c17031 | 3147 | delete_related_insns (next); |
9c7e2978 RK |
3148 | next = insn; |
3149 | } | |
3150 | ||
3151 | if (label) | |
3152 | --LABEL_NUSES (label); | |
3153 | ||
3154 | if (--LABEL_NUSES (target_label) == 0) | |
53c17031 | 3155 | delete_related_insns (target_label); |
9c7e2978 RK |
3156 | |
3157 | continue; | |
3158 | } | |
3159 | } | |
f27e6bb2 | 3160 | |
9c7e2978 RK |
3161 | /* If this is an unconditional jump and the previous insn is a |
3162 | conditional jump, try reversing the condition of the previous | |
3163 | insn and swapping our targets. The next pass might be able to | |
3164 | fill the slots. | |
3165 | ||
3166 | Don't do this if we expect the conditional branch to be true, because | |
3167 | we would then be making the more common case longer. */ | |
3168 | ||
4b4bf941 | 3169 | if (JUMP_P (insn) |
9c7e2978 RK |
3170 | && (simplejump_p (insn) || GET_CODE (PATTERN (insn)) == RETURN) |
3171 | && (other = prev_active_insn (insn)) != 0 | |
3480bb98 | 3172 | && (condjump_p (other) || condjump_in_parallel_p (other)) |
9c7e2978 | 3173 | && no_labels_between_p (other, insn) |
6f20793d | 3174 | && 0 > mostly_true_jump (other, |
0275a51b RK |
3175 | get_branch_condition (other, |
3176 | JUMP_LABEL (other)))) | |
9c7e2978 RK |
3177 | { |
3178 | rtx other_target = JUMP_LABEL (other); | |
7ca4e06e | 3179 | target_label = JUMP_LABEL (insn); |
9c7e2978 | 3180 | |
9ba11d5a | 3181 | if (invert_jump (other, target_label, 0)) |
326f06f7 | 3182 | reorg_redirect_jump (insn, other_target); |
9c7e2978 RK |
3183 | } |
3184 | ||
3185 | /* Now look only at cases where we have filled a delay slot. */ | |
4b4bf941 | 3186 | if (!NONJUMP_INSN_P (insn) |
9c7e2978 RK |
3187 | || GET_CODE (PATTERN (insn)) != SEQUENCE) |
3188 | continue; | |
3189 | ||
3190 | pat = PATTERN (insn); | |
3191 | delay_insn = XVECEXP (pat, 0, 0); | |
3192 | ||
3193 | /* See if the first insn in the delay slot is redundant with some | |
3194 | previous insn. Remove it from the delay slot if so; then set up | |
3195 | to reprocess this insn. */ | |
f898abd7 | 3196 | if (redundant_insn (XVECEXP (pat, 0, 1), delay_insn, 0)) |
9c7e2978 RK |
3197 | { |
3198 | delete_from_delay_slot (XVECEXP (pat, 0, 1)); | |
3199 | next = prev_active_insn (next); | |
3200 | continue; | |
3201 | } | |
3202 | ||
9f5a2691 JL |
3203 | /* See if we have a RETURN insn with a filled delay slot followed |
3204 | by a RETURN insn with an unfilled a delay slot. If so, we can delete | |
d1a6adeb | 3205 | the first RETURN (but not its delay insn). This gives the same |
9f5a2691 JL |
3206 | effect in fewer instructions. |
3207 | ||
3208 | Only do so if optimizing for size since this results in slower, but | |
3209 | smaller code. */ | |
3210 | if (optimize_size | |
3211 | && GET_CODE (PATTERN (delay_insn)) == RETURN | |
3212 | && next | |
4b4bf941 | 3213 | && JUMP_P (next) |
9f5a2691 JL |
3214 | && GET_CODE (PATTERN (next)) == RETURN) |
3215 | { | |
2f937369 | 3216 | rtx after; |
9f5a2691 JL |
3217 | int i; |
3218 | ||
3219 | /* Delete the RETURN and just execute the delay list insns. | |
3220 | ||
3221 | We do this by deleting the INSN containing the SEQUENCE, then | |
3222 | re-emitting the insns separately, and then deleting the RETURN. | |
3223 | This allows the count of the jump target to be properly | |
3224 | decremented. */ | |
3225 | ||
3226 | /* Clear the from target bit, since these insns are no longer | |
3227 | in delay slots. */ | |
3228 | for (i = 0; i < XVECLEN (pat, 0); i++) | |
3229 | INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)) = 0; | |
3230 | ||
3231 | trial = PREV_INSN (insn); | |
53c17031 | 3232 | delete_related_insns (insn); |
41374e13 | 3233 | gcc_assert (GET_CODE (pat) == SEQUENCE); |
2f937369 DM |
3234 | after = trial; |
3235 | for (i = 0; i < XVECLEN (pat, 0); i++) | |
3236 | { | |
3237 | rtx this_insn = XVECEXP (pat, 0, i); | |
3238 | add_insn_after (this_insn, after); | |
3239 | after = this_insn; | |
3240 | } | |
9f5a2691 JL |
3241 | delete_scheduled_jump (delay_insn); |
3242 | continue; | |
3243 | } | |
3244 | ||
9c7e2978 | 3245 | /* Now look only at the cases where we have a filled JUMP_INSN. */ |
4b4bf941 | 3246 | if (!JUMP_P (XVECEXP (PATTERN (insn), 0, 0)) |
3480bb98 JL |
3247 | || ! (condjump_p (XVECEXP (PATTERN (insn), 0, 0)) |
3248 | || condjump_in_parallel_p (XVECEXP (PATTERN (insn), 0, 0)))) | |
9c7e2978 RK |
3249 | continue; |
3250 | ||
3251 | target_label = JUMP_LABEL (delay_insn); | |
3252 | ||
3253 | if (target_label) | |
3254 | { | |
3255 | /* If this jump goes to another unconditional jump, thread it, but | |
3256 | don't convert a jump into a RETURN here. */ | |
6c2511d3 RS |
3257 | trial = skip_consecutive_labels (follow_jumps (target_label)); |
3258 | if (trial == 0) | |
9c7e2978 RK |
3259 | trial = find_end_label (); |
3260 | ||
b9185691 | 3261 | if (trial && trial != target_label |
83fd5651 | 3262 | && redirect_with_delay_slots_safe_p (delay_insn, trial, insn)) |
9c7e2978 | 3263 | { |
326f06f7 | 3264 | reorg_redirect_jump (delay_insn, trial); |
9c7e2978 RK |
3265 | target_label = trial; |
3266 | } | |
3267 | ||
3268 | /* If the first insn at TARGET_LABEL is redundant with a previous | |
3269 | insn, redirect the jump to the following insn and process it again. */ | |
3270 | trial = next_active_insn (target_label); | |
3271 | if (trial && GET_CODE (PATTERN (trial)) != SEQUENCE | |
1db828df DM |
3272 | && redundant_insn (trial, insn, 0) |
3273 | && ! can_throw_internal (trial)) | |
9c7e2978 | 3274 | { |
eed04dff JL |
3275 | /* Figure out where to emit the special USE insn so we don't |
3276 | later incorrectly compute register live/death info. */ | |
b9185691 | 3277 | rtx tmp = next_active_insn (trial); |
eed04dff JL |
3278 | if (tmp == 0) |
3279 | tmp = find_end_label (); | |
3280 | ||
b9185691 EB |
3281 | if (tmp) |
3282 | { | |
3283 | /* Insert the special USE insn and update dataflow info. */ | |
3284 | update_block (trial, tmp); | |
eed04dff | 3285 | |
b9185691 EB |
3286 | /* Now emit a label before the special USE insn, and |
3287 | redirect our jump to the new label. */ | |
3288 | target_label = get_label_before (PREV_INSN (tmp)); | |
3289 | reorg_redirect_jump (delay_insn, target_label); | |
3290 | next = insn; | |
3291 | continue; | |
3292 | } | |
9c7e2978 RK |
3293 | } |
3294 | ||
3295 | /* Similarly, if it is an unconditional jump with one insn in its | |
3296 | delay list and that insn is redundant, thread the jump. */ | |
3297 | if (trial && GET_CODE (PATTERN (trial)) == SEQUENCE | |
3298 | && XVECLEN (PATTERN (trial), 0) == 2 | |
4b4bf941 | 3299 | && JUMP_P (XVECEXP (PATTERN (trial), 0, 0)) |
9c7e2978 RK |
3300 | && (simplejump_p (XVECEXP (PATTERN (trial), 0, 0)) |
3301 | || GET_CODE (PATTERN (XVECEXP (PATTERN (trial), 0, 0))) == RETURN) | |
f898abd7 | 3302 | && redundant_insn (XVECEXP (PATTERN (trial), 0, 1), insn, 0)) |
9c7e2978 RK |
3303 | { |
3304 | target_label = JUMP_LABEL (XVECEXP (PATTERN (trial), 0, 0)); | |
3305 | if (target_label == 0) | |
b9185691 | 3306 | target_label = find_end_label (); |
83fd5651 | 3307 | |
9ac617d4 | 3308 | if (target_label |
b9185691 | 3309 | && redirect_with_delay_slots_safe_p (delay_insn, target_label, |
9ac617d4 | 3310 | insn)) |
83fd5651 JL |
3311 | { |
3312 | reorg_redirect_jump (delay_insn, target_label); | |
3313 | next = insn; | |
3314 | continue; | |
3315 | } | |
9c7e2978 RK |
3316 | } |
3317 | } | |
3318 | ||
3319 | if (! INSN_ANNULLED_BRANCH_P (delay_insn) | |
3320 | && prev_active_insn (target_label) == insn | |
3480bb98 | 3321 | && ! condjump_in_parallel_p (delay_insn) |
9c7e2978 RK |
3322 | #ifdef HAVE_cc0 |
3323 | /* If the last insn in the delay slot sets CC0 for some insn, | |
3324 | various code assumes that it is in a delay slot. We could | |
3325 | put it back where it belonged and delete the register notes, | |
6dc42e49 | 3326 | but it doesn't seem worthwhile in this uncommon case. */ |
9c7e2978 | 3327 | && ! find_reg_note (XVECEXP (pat, 0, XVECLEN (pat, 0) - 1), |
fb3821f7 | 3328 | REG_CC_USER, NULL_RTX) |
9c7e2978 RK |
3329 | #endif |
3330 | ) | |
3331 | { | |
2f937369 | 3332 | rtx after; |
ac224823 RK |
3333 | int i; |
3334 | ||
9c7e2978 RK |
3335 | /* All this insn does is execute its delay list and jump to the |
3336 | following insn. So delete the jump and just execute the delay | |
3337 | list insns. | |
3338 | ||
3339 | We do this by deleting the INSN containing the SEQUENCE, then | |
3340 | re-emitting the insns separately, and then deleting the jump. | |
3341 | This allows the count of the jump target to be properly | |
3342 | decremented. */ | |
3343 | ||
ac224823 RK |
3344 | /* Clear the from target bit, since these insns are no longer |
3345 | in delay slots. */ | |
3346 | for (i = 0; i < XVECLEN (pat, 0); i++) | |
3347 | INSN_FROM_TARGET_P (XVECEXP (pat, 0, i)) = 0; | |
3348 | ||
9c7e2978 | 3349 | trial = PREV_INSN (insn); |
53c17031 | 3350 | delete_related_insns (insn); |
41374e13 | 3351 | gcc_assert (GET_CODE (pat) == SEQUENCE); |
2f937369 DM |
3352 | after = trial; |
3353 | for (i = 0; i < XVECLEN (pat, 0); i++) | |
3354 | { | |
3355 | rtx this_insn = XVECEXP (pat, 0, i); | |
3356 | add_insn_after (this_insn, after); | |
3357 | after = this_insn; | |
3358 | } | |
9c7e2978 RK |
3359 | delete_scheduled_jump (delay_insn); |
3360 | continue; | |
3361 | } | |
3362 | ||
51ec2375 JW |
3363 | /* See if this is an unconditional jump around a single insn which is |
3364 | identical to the one in its delay slot. In this case, we can just | |
3365 | delete the branch and the insn in its delay slot. */ | |
4b4bf941 | 3366 | if (next && NONJUMP_INSN_P (next) |
51ec2375 JW |
3367 | && prev_label (next_active_insn (next)) == target_label |
3368 | && simplejump_p (insn) | |
3369 | && XVECLEN (pat, 0) == 2 | |
3370 | && rtx_equal_p (PATTERN (next), PATTERN (XVECEXP (pat, 0, 1)))) | |
3371 | { | |
53c17031 | 3372 | delete_related_insns (insn); |
51ec2375 JW |
3373 | continue; |
3374 | } | |
3375 | ||
9c7e2978 RK |
3376 | /* See if this jump (with its delay slots) branches around another |
3377 | jump (without delay slots). If so, invert this jump and point | |
3378 | it to the target of the second jump. We cannot do this for | |
3379 | annulled jumps, though. Again, don't convert a jump to a RETURN | |
3380 | here. */ | |
3381 | if (! INSN_ANNULLED_BRANCH_P (delay_insn) | |
4b4bf941 | 3382 | && next && JUMP_P (next) |
9c7e2978 RK |
3383 | && (simplejump_p (next) || GET_CODE (PATTERN (next)) == RETURN) |
3384 | && next_active_insn (target_label) == next_active_insn (next) | |
3385 | && no_labels_between_p (insn, next)) | |
3386 | { | |
3387 | rtx label = JUMP_LABEL (next); | |
3388 | rtx old_label = JUMP_LABEL (delay_insn); | |
3389 | ||
3390 | if (label == 0) | |
3391 | label = find_end_label (); | |
3392 | ||
bf6302f0 | 3393 | /* find_end_label can generate a new label. Check this first. */ |
b9185691 EB |
3394 | if (label |
3395 | && no_labels_between_p (insn, next) | |
bf6302f0 | 3396 | && redirect_with_delay_slots_safe_p (delay_insn, label, insn)) |
9c7e2978 | 3397 | { |
83fd5651 JL |
3398 | /* Be careful how we do this to avoid deleting code or labels |
3399 | that are momentarily dead. See similar optimization in | |
3400 | jump.c */ | |
3401 | if (old_label) | |
3402 | ++LABEL_NUSES (old_label); | |
9c7e2978 | 3403 | |
9ba11d5a | 3404 | if (invert_jump (delay_insn, label, 1)) |
83fd5651 | 3405 | { |
9e8b2461 RK |
3406 | int i; |
3407 | ||
3408 | /* Must update the INSN_FROM_TARGET_P bits now that | |
3409 | the branch is reversed, so that mark_target_live_regs | |
3410 | will handle the delay slot insn correctly. */ | |
3411 | for (i = 1; i < XVECLEN (PATTERN (insn), 0); i++) | |
3412 | { | |
3413 | rtx slot = XVECEXP (PATTERN (insn), 0, i); | |
3414 | INSN_FROM_TARGET_P (slot) = ! INSN_FROM_TARGET_P (slot); | |
3415 | } | |
3416 | ||
53c17031 | 3417 | delete_related_insns (next); |
83fd5651 JL |
3418 | next = insn; |
3419 | } | |
3420 | ||
3421 | if (old_label && --LABEL_NUSES (old_label) == 0) | |
53c17031 | 3422 | delete_related_insns (old_label); |
83fd5651 JL |
3423 | continue; |
3424 | } | |
9c7e2978 RK |
3425 | } |
3426 | ||
3427 | /* If we own the thread opposite the way this insn branches, see if we | |
3428 | can merge its delay slots with following insns. */ | |
3429 | if (INSN_FROM_TARGET_P (XVECEXP (pat, 0, 1)) | |
3430 | && own_thread_p (NEXT_INSN (insn), 0, 1)) | |
3431 | try_merge_delay_insns (insn, next); | |
3432 | else if (! INSN_FROM_TARGET_P (XVECEXP (pat, 0, 1)) | |
3433 | && own_thread_p (target_label, target_label, 0)) | |
3434 | try_merge_delay_insns (insn, next_active_insn (target_label)); | |
3435 | ||
3436 | /* If we get here, we haven't deleted INSN. But we may have deleted | |
3437 | NEXT, so recompute it. */ | |
3438 | next = next_active_insn (insn); | |
3439 | } | |
3440 | } | |
3441 | \f | |
3442 | #ifdef HAVE_return | |
3443 | ||
3444 | /* Look for filled jumps to the end of function label. We can try to convert | |
3445 | them into RETURN insns if the insns in the delay slot are valid for the | |
3446 | RETURN as well. */ | |
3447 | ||
3448 | static void | |
0c20a65f | 3449 | make_return_insns (rtx first) |
9c7e2978 RK |
3450 | { |
3451 | rtx insn, jump_insn, pat; | |
3452 | rtx real_return_label = end_of_function_label; | |
3453 | int slots, i; | |
3454 | ||
c134da6b HPN |
3455 | #ifdef DELAY_SLOTS_FOR_EPILOGUE |
3456 | /* If a previous pass filled delay slots in the epilogue, things get a | |
3457 | bit more complicated, as those filler insns would generally (without | |
3458 | data flow analysis) have to be executed after any existing branch | |
3459 | delay slot filler insns. It is also unknown whether such a | |
3460 | transformation would actually be profitable. Note that the existing | |
3461 | code only cares for branches with (some) filled delay slots. */ | |
3462 | if (current_function_epilogue_delay_list != NULL) | |
3463 | return; | |
3464 | #endif | |
3465 | ||
9c7e2978 RK |
3466 | /* See if there is a RETURN insn in the function other than the one we |
3467 | made for END_OF_FUNCTION_LABEL. If so, set up anything we can't change | |
3468 | into a RETURN to jump to it. */ | |
3469 | for (insn = first; insn; insn = NEXT_INSN (insn)) | |
4b4bf941 | 3470 | if (JUMP_P (insn) && GET_CODE (PATTERN (insn)) == RETURN) |
9c7e2978 RK |
3471 | { |
3472 | real_return_label = get_label_before (insn); | |
3473 | break; | |
3474 | } | |
f27e6bb2 | 3475 | |
9c7e2978 RK |
3476 | /* Show an extra usage of REAL_RETURN_LABEL so it won't go away if it |
3477 | was equal to END_OF_FUNCTION_LABEL. */ | |
3478 | LABEL_NUSES (real_return_label)++; | |
3479 | ||
3480 | /* Clear the list of insns to fill so we can use it. */ | |
3481 | obstack_free (&unfilled_slots_obstack, unfilled_firstobj); | |
3482 | ||
3483 | for (insn = first; insn; insn = NEXT_INSN (insn)) | |
3484 | { | |
35523fce JL |
3485 | int flags; |
3486 | ||
9c7e2978 RK |
3487 | /* Only look at filled JUMP_INSNs that go to the end of function |
3488 | label. */ | |
4b4bf941 | 3489 | if (!NONJUMP_INSN_P (insn) |
9c7e2978 | 3490 | || GET_CODE (PATTERN (insn)) != SEQUENCE |
4b4bf941 | 3491 | || !JUMP_P (XVECEXP (PATTERN (insn), 0, 0)) |
9c7e2978 RK |
3492 | || JUMP_LABEL (XVECEXP (PATTERN (insn), 0, 0)) != end_of_function_label) |
3493 | continue; | |
3494 | ||
3495 | pat = PATTERN (insn); | |
3496 | jump_insn = XVECEXP (pat, 0, 0); | |
3497 | ||
18beb3e9 | 3498 | /* If we can't make the jump into a RETURN, try to redirect it to the best |
9c7e2978 | 3499 | RETURN and go on to the next insn. */ |
326f06f7 | 3500 | if (! reorg_redirect_jump (jump_insn, NULL_RTX)) |
9c7e2978 | 3501 | { |
18beb3e9 JL |
3502 | /* Make sure redirecting the jump will not invalidate the delay |
3503 | slot insns. */ | |
3504 | if (redirect_with_delay_slots_safe_p (jump_insn, | |
3505 | real_return_label, | |
3506 | insn)) | |
3507 | reorg_redirect_jump (jump_insn, real_return_label); | |
9c7e2978 RK |
3508 | continue; |
3509 | } | |
3510 | ||
3511 | /* See if this RETURN can accept the insns currently in its delay slot. | |
3512 | It can if it has at least as many slots and the contents | |
3513 | of each are valid. */ | |
3514 | ||
35523fce | 3515 | flags = get_jump_flags (jump_insn, JUMP_LABEL (jump_insn)); |
9c7e2978 RK |
3516 | slots = num_delay_slots (jump_insn); |
3517 | if (slots >= XVECLEN (pat, 0) - 1) | |
3518 | { | |
3519 | for (i = 1; i < XVECLEN (pat, 0); i++) | |
3520 | if (! ( | |
3521 | #ifdef ANNUL_IFFALSE_SLOTS | |
3522 | (INSN_ANNULLED_BRANCH_P (jump_insn) | |
3523 | && INSN_FROM_TARGET_P (XVECEXP (pat, 0, i))) | |
3524 | ? eligible_for_annul_false (jump_insn, i - 1, | |
35523fce | 3525 | XVECEXP (pat, 0, i), flags) : |
9c7e2978 RK |
3526 | #endif |
3527 | #ifdef ANNUL_IFTRUE_SLOTS | |
3528 | (INSN_ANNULLED_BRANCH_P (jump_insn) | |
3529 | && ! INSN_FROM_TARGET_P (XVECEXP (pat, 0, i))) | |
3530 | ? eligible_for_annul_true (jump_insn, i - 1, | |
35523fce | 3531 | XVECEXP (pat, 0, i), flags) : |
9c7e2978 | 3532 | #endif |
f27e6bb2 KH |
3533 | eligible_for_delay (jump_insn, i - 1, |
3534 | XVECEXP (pat, 0, i), flags))) | |
9c7e2978 RK |
3535 | break; |
3536 | } | |
3537 | else | |
3538 | i = 0; | |
3539 | ||
3540 | if (i == XVECLEN (pat, 0)) | |
3541 | continue; | |
3542 | ||
3543 | /* We have to do something with this insn. If it is an unconditional | |
3544 | RETURN, delete the SEQUENCE and output the individual insns, | |
3545 | followed by the RETURN. Then set things up so we try to find | |
3546 | insns for its delay slots, if it needs some. */ | |
3547 | if (GET_CODE (PATTERN (jump_insn)) == RETURN) | |
3548 | { | |
3549 | rtx prev = PREV_INSN (insn); | |
3550 | ||
53c17031 | 3551 | delete_related_insns (insn); |
9c7e2978 RK |
3552 | for (i = 1; i < XVECLEN (pat, 0); i++) |
3553 | prev = emit_insn_after (PATTERN (XVECEXP (pat, 0, i)), prev); | |
3554 | ||
3555 | insn = emit_jump_insn_after (PATTERN (jump_insn), prev); | |
3556 | emit_barrier_after (insn); | |
3557 | ||
3558 | if (slots) | |
3559 | obstack_ptr_grow (&unfilled_slots_obstack, insn); | |
3560 | } | |
3561 | else | |
3562 | /* It is probably more efficient to keep this with its current | |
3563 | delay slot as a branch to a RETURN. */ | |
326f06f7 | 3564 | reorg_redirect_jump (jump_insn, real_return_label); |
9c7e2978 RK |
3565 | } |
3566 | ||
3567 | /* Now delete REAL_RETURN_LABEL if we never used it. Then try to fill any | |
3568 | new delay slots we have created. */ | |
3569 | if (--LABEL_NUSES (real_return_label) == 0) | |
53c17031 | 3570 | delete_related_insns (real_return_label); |
9c7e2978 | 3571 | |
91a51951 KG |
3572 | fill_simple_delay_slots (1); |
3573 | fill_simple_delay_slots (0); | |
9c7e2978 RK |
3574 | } |
3575 | #endif | |
3576 | \f | |
3577 | /* Try to find insns to place in delay slots. */ | |
3578 | ||
3579 | void | |
0c20a65f | 3580 | dbr_schedule (rtx first, FILE *file) |
9c7e2978 | 3581 | { |
7bd80f37 | 3582 | rtx insn, next, epilogue_insn = 0; |
9c7e2978 RK |
3583 | int i; |
3584 | #if 0 | |
3585 | int old_flag_no_peephole = flag_no_peephole; | |
3586 | ||
3587 | /* Execute `final' once in prescan mode to delete any insns that won't be | |
3588 | used. Don't let final try to do any peephole optimization--it will | |
3589 | ruin dataflow information for this pass. */ | |
3590 | ||
3591 | flag_no_peephole = 1; | |
3592 | final (first, 0, NO_DEBUG, 1, 1); | |
3593 | flag_no_peephole = old_flag_no_peephole; | |
3594 | #endif | |
3595 | ||
f27e6bb2 | 3596 | /* If the current function has no insns other than the prologue and |
c3b80729 | 3597 | epilogue, then do not try to fill any delay slots. */ |
0b17ab2f | 3598 | if (n_basic_blocks == 0) |
c3b80729 JL |
3599 | return; |
3600 | ||
9c7e2978 RK |
3601 | /* Find the highest INSN_UID and allocate and initialize our map from |
3602 | INSN_UID's to position in code. */ | |
3603 | for (max_uid = 0, insn = first; insn; insn = NEXT_INSN (insn)) | |
7bd80f37 TW |
3604 | { |
3605 | if (INSN_UID (insn) > max_uid) | |
3606 | max_uid = INSN_UID (insn); | |
4b4bf941 | 3607 | if (NOTE_P (insn) |
7bd80f37 TW |
3608 | && NOTE_LINE_NUMBER (insn) == NOTE_INSN_EPILOGUE_BEG) |
3609 | epilogue_insn = insn; | |
3610 | } | |
9c7e2978 | 3611 | |
703ad42b | 3612 | uid_to_ruid = xmalloc ((max_uid + 1) * sizeof (int)); |
9c7e2978 RK |
3613 | for (i = 0, insn = first; insn; i++, insn = NEXT_INSN (insn)) |
3614 | uid_to_ruid[INSN_UID (insn)] = i; | |
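/* uid_to_ruid gives each insn's ordinal position in the insn stream,
   presumably so that the relative order of two insns can be compared
   cheaply later in the pass.  */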
f27e6bb2 | 3615 | |
9c7e2978 RK |
3616 | /* Initialize the list of insns that need filling. */ |
3617 | if (unfilled_firstobj == 0) | |
3618 | { | |
3619 | gcc_obstack_init (&unfilled_slots_obstack); | |
703ad42b | 3620 | unfilled_firstobj = obstack_alloc (&unfilled_slots_obstack, 0); |
9c7e2978 RK |
3621 | } |
3622 | ||
3623 | for (insn = next_active_insn (first); insn; insn = next_active_insn (insn)) | |
3624 | { | |
3625 | rtx target; | |
3626 | ||
9c64026d | 3627 | INSN_ANNULLED_BRANCH_P (insn) = 0; |
9c7e2978 RK |
3628 | INSN_FROM_TARGET_P (insn) = 0; |
3629 | ||
3630 | /* Skip vector tables. We can't get attributes for them. */ | |
4b4bf941 | 3631 | if (JUMP_P (insn) |
9c7e2978 RK |
3632 | && (GET_CODE (PATTERN (insn)) == ADDR_VEC |
3633 | || GET_CODE (PATTERN (insn)) == ADDR_DIFF_VEC)) | |
3634 | continue; | |
f27e6bb2 | 3635 | |
9c7e2978 RK |
3636 | if (num_delay_slots (insn) > 0) |
3637 | obstack_ptr_grow (&unfilled_slots_obstack, insn); | |
3638 | ||
3639 | /* Ensure all jumps go to the last of a set of consecutive labels. */ | |
4b4bf941 | 3640 | if (JUMP_P (insn) |
3480bb98 | 3641 | && (condjump_p (insn) || condjump_in_parallel_p (insn)) |
9c7e2978 | 3642 | && JUMP_LABEL (insn) != 0 |
6c2511d3 | 3643 | && ((target = skip_consecutive_labels (JUMP_LABEL (insn))) |
9c7e2978 | 3644 | != JUMP_LABEL (insn))) |
9ba11d5a | 3645 | redirect_jump (insn, target, 1); |
9c7e2978 RK |
3646 | } |
3647 | ||
ca545bb5 | 3648 | init_resource_info (epilogue_insn); |
7bd80f37 | 3649 | |
9c7e2978 RK |
3650 | /* Show we haven't computed an end-of-function label yet. */ |
3651 | end_of_function_label = 0; | |
3652 | ||
9c7e2978 | 3653 | /* Initialize the statistics for this function. */ |
703ad42b KG |
3654 | memset (num_insns_needing_delays, 0, sizeof num_insns_needing_delays); |
3655 | memset (num_filled_delays, 0, sizeof num_filled_delays); | |
9c7e2978 RK |
3656 | |
3657 | /* Now do the delay slot filling. Try everything twice in case earlier | |
3658 | changes make more slots fillable. */ | |
3659 | ||
3660 | for (reorg_pass_number = 0; | |
3661 | reorg_pass_number < MAX_REORG_PASSES; | |
3662 | reorg_pass_number++) | |
3663 | { | |
91a51951 KG |
3664 | fill_simple_delay_slots (1); |
3665 | fill_simple_delay_slots (0); | |
3666 | fill_eager_delay_slots (); | |
9c7e2978 RK |
3667 | relax_delay_slots (first); |
3668 | } | |
3669 | ||
3670 | /* Delete any USE insns made by update_block; subsequent passes don't need | |
3671 | them or know how to deal with them. */ | |
3672 | for (insn = first; insn; insn = next) | |
3673 | { | |
3674 | next = NEXT_INSN (insn); | |
3675 | ||
4b4bf941 | 3676 | if (NONJUMP_INSN_P (insn) && GET_CODE (PATTERN (insn)) == USE |
2c3c49de | 3677 | && INSN_P (XEXP (PATTERN (insn), 0))) |
53c17031 | 3678 | next = delete_related_insns (insn); |
9c7e2978 RK |
3679 | } |
3680 | ||
3681 | /* If we made an end of function label, indicate that it is now | |
3682 | safe to delete it by undoing our prior adjustment to LABEL_NUSES. | |
3683 | If it is now unused, delete it. */ | |
3684 | if (end_of_function_label && --LABEL_NUSES (end_of_function_label) == 0) | |
53c17031 | 3685 | delete_related_insns (end_of_function_label); |
9c7e2978 RK |
3686 | |
3687 | #ifdef HAVE_return | |
3688 | if (HAVE_return && end_of_function_label != 0) | |
3689 | make_return_insns (first); | |
3690 | #endif | |
3691 | ||
3692 | obstack_free (&unfilled_slots_obstack, unfilled_firstobj); | |
3693 | ||
3694 | /* It is not clear why the line below is needed, but it does seem to be. */ | |
703ad42b | 3695 | unfilled_firstobj = obstack_alloc (&unfilled_slots_obstack, 0); |
9c7e2978 RK |
3696 | |
3697 | if (file) | |
3698 | { | |
b3694847 | 3699 | int i, j, need_comma; |
2e02044a HB |
3700 | int total_delay_slots[MAX_DELAY_HISTOGRAM + 1]; |
3701 | int total_annul_slots[MAX_DELAY_HISTOGRAM + 1]; | |
9c7e2978 RK |
3702 | |
3703 | for (reorg_pass_number = 0; | |
3704 | reorg_pass_number < MAX_REORG_PASSES; | |
3705 | reorg_pass_number++) | |
3706 | { | |
3707 | fprintf (file, ";; Reorg pass #%d:\n", reorg_pass_number + 1); | |
3708 | for (i = 0; i < NUM_REORG_FUNCTIONS; i++) | |
3709 | { | |
3710 | need_comma = 0; | |
3711 | fprintf (file, ";; Reorg function #%d\n", i); | |
3712 | ||
3713 | fprintf (file, ";; %d insns needing delay slots\n;; ", | |
3714 | num_insns_needing_delays[i][reorg_pass_number]); | |
3715 | ||
2e02044a | 3716 | for (j = 0; j < MAX_DELAY_HISTOGRAM + 1; j++) |
9c7e2978 RK |
3717 | if (num_filled_delays[i][j][reorg_pass_number]) |
3718 | { | |
3719 | if (need_comma) | |
3720 | fprintf (file, ", "); | |
3721 | need_comma = 1; | |
3722 | fprintf (file, "%d got %d delays", | |
3723 | num_filled_delays[i][j][reorg_pass_number], j); | |
3724 | } | |
3725 | fprintf (file, "\n"); | |
3726 | } | |
3727 | } | |
703ad42b KG |
3728 | memset (total_delay_slots, 0, sizeof total_delay_slots); |
3729 | memset (total_annul_slots, 0, sizeof total_annul_slots); | |
2e02044a HB |
3730 | for (insn = first; insn; insn = NEXT_INSN (insn)) |
3731 | { | |
3732 | if (! INSN_DELETED_P (insn) | |
4b4bf941 | 3733 | && NONJUMP_INSN_P (insn) |
2e02044a HB |
3734 | && GET_CODE (PATTERN (insn)) != USE |
3735 | && GET_CODE (PATTERN (insn)) != CLOBBER) | |
3736 | { | |
3737 | if (GET_CODE (PATTERN (insn)) == SEQUENCE) | |
3738 | { | |
3739 | j = XVECLEN (PATTERN (insn), 0) - 1; | |
3740 | if (j > MAX_DELAY_HISTOGRAM) | |
3741 | j = MAX_DELAY_HISTOGRAM; | |
3742 | if (INSN_ANNULLED_BRANCH_P (XVECEXP (PATTERN (insn), 0, 0))) | |
3743 | total_annul_slots[j]++; | |
3744 | else | |
3745 | total_delay_slots[j]++; | |
3746 | } | |
f27e6bb2 | 3747 | else if (num_delay_slots (insn) > 0) |
2e02044a HB |
3748 | total_delay_slots[0]++; |
3749 | } | |
3750 | } | |
3751 | fprintf (file, ";; Reorg totals: "); | |
3752 | need_comma = 0; | |
3753 | for (j = 0; j < MAX_DELAY_HISTOGRAM + 1; j++) | |
3754 | { | |
3755 | if (total_delay_slots[j]) | |
3756 | { | |
3757 | if (need_comma) | |
3758 | fprintf (file, ", "); | |
3759 | need_comma = 1; | |
3760 | fprintf (file, "%d got %d delays", total_delay_slots[j], j); | |
3761 | } | |
3762 | } | |
3763 | fprintf (file, "\n"); | |
3764 | #if defined (ANNUL_IFTRUE_SLOTS) || defined (ANNUL_IFFALSE_SLOTS) | |
3765 | fprintf (file, ";; Reorg annuls: "); | |
3766 | need_comma = 0; | |
3767 | for (j = 0; j < MAX_DELAY_HISTOGRAM + 1; j++) | |
3768 | { | |
3769 | if (total_annul_slots[j]) | |
3770 | { | |
3771 | if (need_comma) | |
3772 | fprintf (file, ", "); | |
3773 | need_comma = 1; | |
3774 | fprintf (file, "%d got %d delays", total_annul_slots[j], j); | |
3775 | } | |
3776 | } | |
3777 | fprintf (file, "\n"); | |
3778 | #endif | |
3779 | fprintf (file, "\n"); | |
9c7e2978 | 3780 | } |
c107334d DM |
3781 | |
3782 | /* For all JUMP insns, fill in branch prediction notes, so that during | |
3783 | assembler output a target can set branch prediction bits in the code. | |
3784 | We have to do this now, as up until this point the destinations of | |
3785 | JUMPS can be moved around and changed, but past right here that cannot | |
3786 | happen. */ | |
3787 | for (insn = first; insn; insn = NEXT_INSN (insn)) | |
3788 | { | |
3789 | int pred_flags; | |
3790 | ||
4b4bf941 | 3791 | if (NONJUMP_INSN_P (insn)) |
f27e6bb2 | 3792 | { |
e0d80184 DM |
3793 | rtx pat = PATTERN (insn); |
3794 | ||
3795 | if (GET_CODE (pat) == SEQUENCE) | |
3796 | insn = XVECEXP (pat, 0, 0); | |
3797 | } | |
4b4bf941 | 3798 | if (!JUMP_P (insn)) |
c107334d DM |
3799 | continue; |
3800 | ||
3801 | pred_flags = get_jump_flags (insn, JUMP_LABEL (insn)); | |
38a448ca RH |
3802 | REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_BR_PRED, |
3803 | GEN_INT (pred_flags), | |
3804 | REG_NOTES (insn)); | |
c107334d | 3805 | } |
ca545bb5 | 3806 | free_resource_info (); |
67289ea6 | 3807 | free (uid_to_ruid); |
281ac604 JH |
3808 | #ifdef DELAY_SLOTS_FOR_EPILOGUE |
3809 | /* The SPARC assembler, for instance, emits a warning when debug info is | |
3810 | output into the delay slot. */ | |
3811 | { | |
3812 | rtx link; | |
3813 | ||
3814 | for (link = current_function_epilogue_delay_list; | |
3815 | link; | |
3816 | link = XEXP (link, 1)) | |
3817 | INSN_LOCATOR (XEXP (link, 0)) = 0; | |
3818 | } | |
3819 | #endif | |
9c7e2978 RK |
3820 | } |
3821 | #endif /* DELAY_SLOTS */ |